Configure safety attributes

The Vertex AI Gemini API blocks unsafe content based on a list of safety attributes and their configured blocking thresholds. This page outlines key safety concepts and shows you how to configure the blocking thresholds of each safety attribute to control how often prompts and responses are blocked.

Safety attribute confidence scoring

Content processed through the Vertex AI Gemini API is assessed against a list of safety attributes, which include "harmful categories" and topics that can be considered sensitive. These safety attributes are listed in the following table:

Safety attribute scoring

Safety Attribute   | Definition
Hate Speech        | Negative or harmful comments targeting identity and/or protected attributes.
Harassment         | Malicious, intimidating, bullying, or abusive comments targeting another individual.
Sexually Explicit  | Contains references to sexual acts or other lewd content.
Dangerous Content  | Promotes or enables access to harmful goods, services, and activities.

Safety attribute probabilities

Each safety attribute has an associated confidence score between 0.0 and 1.0, rounded to one decimal place. The confidence score reflects the likelihood of the input or response belonging to a given category.

Along with the numeric score, each safety attribute is returned with one of the confidence levels in the following table:

Probability | Description
NEGLIGIBLE  | Content has a negligible probability of being unsafe.
LOW         | Content has a low probability of being unsafe.
MEDIUM      | Content has a medium probability of being unsafe.
HIGH        | Content has a high probability of being unsafe.

Safety attribute severity

Each of the four safety attributes is assigned a safety rating (severity level) and a severity score ranging from 0.0 to 1.0, rounded to one decimal place. The ratings and scores in the following table reflect the predicted severity of the content belonging to a given category:

Severity   | Description
NEGLIGIBLE | Content severity is predicted as negligible with respect to Google's safety policies.
LOW        | Content severity is predicted as low with respect to Google's safety policies.
MEDIUM     | Content severity is predicted as medium with respect to Google's safety policies.
HIGH       | Content severity is predicted as high with respect to Google's safety policies.
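
To see how these levels and scores surface in practice, you can inspect the safety ratings on a response candidate. The following is a minimal Python sketch using the Vertex AI SDK (the same SDK as the samples later on this page); it assumes the probability_score and severity_score fields are populated alongside the probability and severity levels.

import vertexai
from vertexai import generative_models

# TODO(developer): Replace with your project ID.
vertexai.init(project="PROJECT_ID", location="us-central1")

model = generative_models.GenerativeModel("gemini-1.0-pro")
response = model.generate_content("Write a short, friendly greeting.")

# One rating is returned per safety attribute on each candidate.
for rating in response.candidates[0].safety_ratings:
    print(rating.category)           # for example, HARM_CATEGORY_HARASSMENT
    print(rating.probability)        # NEGLIGIBLE, LOW, MEDIUM, or HIGH
    print(rating.probability_score)  # raw confidence score, 0.0 to 1.0
    print(rating.severity)           # predicted severity level
    print(rating.severity_score)     # raw severity score, 0.0 to 1.0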

Probability scores compared to severity scores

There are two types of safety scores:

  • Safety scores based on probability of being unsafe
  • Safety scores based on severity of harmful content

The probability safety attribute reflects the likelihood that an input or model response is associated with the respective safety attribute. The severity safety attribute reflects the magnitude of how harmful an input or model response might be.

Content can have a low probability score and a high severity score, or a high probability score and a low severity score. For example, consider the following two sentences:

  1. The robot punched me.
  2. The robot slashed me up.

The first sentence might receive a higher probability score of being unsafe, while the second sentence might receive a higher severity score in terms of violence. Because of this, it's important to carefully test and determine the level of blocking needed to support your key use cases while minimizing harm to end users.

Safety settings

Safety settings are part of the request that you send to the API service, and they can be adjusted for each request that you make. The following table describes the block settings that you can adjust for each category. For example, if you set the block setting to Block few for the Dangerous Content category, everything that has a high probability of being dangerous content is blocked, but anything with a lower probability is allowed. If not set, the default block setting is Block some.

Threshold (Studio)   | Threshold (API)                  | Description
                     | BLOCK_NONE (Restricted)          | Always show regardless of probability of unsafe content.
Block few            | BLOCK_ONLY_HIGH                  | Block when high probability of unsafe content.
Block some (Default) | BLOCK_MEDIUM_AND_ABOVE (Default) | Block when medium or high probability of unsafe content.
Block most           | BLOCK_LOW_AND_ABOVE              | Block when low, medium, or high probability of unsafe content.
                     | HARM_BLOCK_THRESHOLD_UNSPECIFIED | Threshold is unspecified; block using the default threshold.

You can change these settings for each request that you make to the API. See the HarmBlockThreshold API reference for details.

How to remove automated response blocking for select safety attributes

The "BLOCK_NONE" safety setting removes automated response blocking (for the safety attributes described under Safety Settings) and allow you to configure your own safety guidelines with the scores that are returned. In order to access the "BLOCK_NONE" setting, you have two options:

(1) You might apply for the allowlist through the Gemini safety filter allowlist form, or

(2) You might switch your account type to monthly invoiced billing with the GCP invoiced billing reference.
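
Once access is granted, the setting is configured like any other threshold. The following Python sketch mirrors the SDK samples later on this page and assumes your project already has access to BLOCK_NONE:

from vertexai import generative_models

# Assumes access to BLOCK_NONE has been granted as described above.
safety_config = [
    generative_models.SafetySetting(
        category=generative_models.HarmCategory.HARM_CATEGORY_HATE_SPEECH,
        threshold=generative_models.HarmBlockThreshold.BLOCK_NONE,
    ),
    generative_models.SafetySetting(
        category=generative_models.HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
        threshold=generative_models.HarmBlockThreshold.BLOCK_NONE,
    ),
]

# Pass safety_config as safety_settings on generate_content, then apply your
# own guidelines to the scores returned in each candidate's safety_ratings.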

Key differences between Gemini and other model families

While the same safety classifiers are applied to Gemini and PaLM, the number of safety attributes returned in the API might vary across model families. The blocking logic (that is, the confidence threshold) is based on rigorous evaluation of each model. Therefore, a safety setting applied to one model might not perfectly match the behavior of the same safety setting applied to a different model. If this is a concern, we recommend that you configure your own blocking logic with the raw severity scores and raw confidence scores, applying the same scoring thresholds across models.
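
As an illustration, a model-independent blocking check built on the raw scores might look like the following sketch. The cut-off values are hypothetical placeholders, and the probability_score and severity_score fields are assumed to be returned with each safety rating; choose your own thresholds based on evaluation of your use case.

# Hypothetical cut-offs; tune these against your own evaluation data.
PROBABILITY_CUTOFF = 0.6
SEVERITY_CUTOFF = 0.5

def should_block(candidate) -> bool:
    """Returns True if any safety rating exceeds the custom cut-offs."""
    return any(
        rating.probability_score >= PROBABILITY_CUTOFF
        or rating.severity_score >= SEVERITY_CUTOFF
        for rating in candidate.safety_ratings
    )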

Configure thresholds

Python

To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Python API reference documentation.

import vertexai

from vertexai import generative_models

# TODO(developer): Update and un-comment below line
# project_id = "PROJECT_ID"

vertexai.init(project=project_id, location="us-central1")

model = generative_models.GenerativeModel(model_name="gemini-1.0-pro-vision-001")

# Generation config
generation_config = generative_models.GenerationConfig(
    max_output_tokens=2048, temperature=0.4, top_p=1, top_k=32
)

# Safety config
safety_config = [
    generative_models.SafetySetting(
        category=generative_models.HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
        threshold=generative_models.HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
    ),
    generative_models.SafetySetting(
        category=generative_models.HarmCategory.HARM_CATEGORY_HARASSMENT,
        threshold=generative_models.HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
    ),
]

image_file = generative_models.Part.from_uri(
    "gs://cloud-samples-data/generative-ai/image/scones.jpg", "image/jpeg"
)

# Generate content
responses = model.generate_content(
    [image_file, "What is in this image?"],
    generation_config=generation_config,
    safety_settings=safety_config,
    stream=True,
)

text_responses = []
for response in responses:
    print(response.text)
    text_responses.append(response.text)

Node.js

Before trying this sample, follow the Node.js setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Node.js API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

const {
  VertexAI,
  HarmCategory,
  HarmBlockThreshold,
} = require('@google-cloud/vertexai');

/**
 * TODO(developer): Update these variables before running the sample.
 */
async function setSafetySettings(
  projectId = 'PROJECT_ID',
  location = 'us-central1',
  model = 'gemini-1.0-pro-001'
) {
  // Initialize Vertex with your Cloud project and location
  const vertexAI = new VertexAI({project: projectId, location: location});

  // Instantiate the model
  const generativeModel = vertexAI.getGenerativeModel({
    model: model,
    // The following parameters are optional
    // They can also be passed to individual content generation requests
    safety_settings: [
      {
        category: HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
        threshold: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
      },
    ],
    generation_config: {
      max_output_tokens: 256,
      temperature: 0.4,
      top_p: 1,
      top_k: 16,
    },
  });

  const request = {
    contents: [{role: 'user', parts: [{text: 'Tell me something dangerous.'}]}],
  };

  console.log('Prompt:');
  console.log(request.contents[0].parts[0].text);
  console.log('Streaming Response Text:');

  // Create the response stream
  const responseStream = await generativeModel.generateContentStream(request);

  // Log the text response as it streams
  for await (const item of responseStream.stream) {
    if (item.candidates[0].finishReason === 'SAFETY') {
      console.log('This response stream terminated due to safety concerns.');
      break;
    } else {
      process.stdout.write(item.candidates[0].content.parts[0].text);
    }
  }
}

Java

Before trying this sample, follow the Java setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Java API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

import com.google.cloud.vertexai.VertexAI;
import com.google.cloud.vertexai.api.Candidate;
import com.google.cloud.vertexai.api.GenerateContentResponse;
import com.google.cloud.vertexai.api.GenerationConfig;
import com.google.cloud.vertexai.api.HarmCategory;
import com.google.cloud.vertexai.api.SafetySetting;
import com.google.cloud.vertexai.generativeai.GenerativeModel;
import java.util.Arrays;
import java.util.List;

public class WithSafetySettings {

  public static void main(String[] args) throws Exception {
    // TODO(developer): Replace these variables before running the sample.
    String projectId = "your-google-cloud-project-id";
    String location = "us-central1";
    String modelName = "gemini-1.0-pro-vision-001";
    String textPrompt = "your-text-here";

    String output = safetyCheck(projectId, location, modelName, textPrompt);
    System.out.println(output);
  }

  // Use safety settings to avoid harmful questions and content generation.
  public static String safetyCheck(String projectId, String location, String modelName,
      String textPrompt) throws Exception {
    // Initialize client that will be used to send requests. This client only needs
    // to be created once, and can be reused for multiple requests.
    try (VertexAI vertexAI = new VertexAI(projectId, location)) {
      StringBuilder output = new StringBuilder();

      GenerationConfig generationConfig =
          GenerationConfig.newBuilder()
              .setMaxOutputTokens(2048)
              .setTemperature(0.4F)
              .setTopK(32)
              .setTopP(1)
              .build();

      List<SafetySetting> safetySettings = Arrays.asList(
          SafetySetting.newBuilder()
              .setCategory(HarmCategory.HARM_CATEGORY_HATE_SPEECH)
              .setThreshold(SafetySetting.HarmBlockThreshold.BLOCK_LOW_AND_ABOVE)
              .build(),
          SafetySetting.newBuilder()
              .setCategory(HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT)
              .setThreshold(SafetySetting.HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE)
              .build()
      );

      GenerativeModel model = new GenerativeModel(modelName, vertexAI)
          .withGenerationConfig(generationConfig)
          .withSafetySettings(safetySettings);

      GenerateContentResponse response = model.generateContent(textPrompt);
      output.append(response).append("\n");

      // Verifies if the above content has been blocked for safety reasons.
      boolean blockedForSafetyReason = response.getCandidatesList()
          .stream()
          .anyMatch(candidate -> candidate.getFinishReason() == Candidate.FinishReason.SAFETY);
      output.append("Blocked for safety reasons?: ").append(blockedForSafetyReason);

      return output.toString();
    }
  }
}

Go

Before trying this sample, follow the Go setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Go API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

import (
	"context"
	"fmt"
	"io"
	"mime"
	"path/filepath"

	"cloud.google.com/go/vertexai/genai"
)

// generateMultimodalContent generates a response into w, based upon the prompt
// and image provided.
func generateMultimodalContent(w io.Writer, prompt, image, projectID, location, modelName string) error {
	// prompt := "describe this image."
	// location := "us-central1"
	// model := "gemini-1.0-pro-vision-001"
	// image := "gs://cloud-samples-data/generative-ai/image/320px-Felis_catus-cat_on_snow.jpg"
	ctx := context.Background()

	client, err := genai.NewClient(ctx, projectID, location)
	if err != nil {
		return fmt.Errorf("unable to create client: %v", err)
	}
	defer client.Close()

	model := client.GenerativeModel(modelName)
	model.SetTemperature(0.4)
	// configure the safety settings thresholds
	model.SafetySettings = []*genai.SafetySetting{
		{
			Category:  genai.HarmCategoryHarassment,
			Threshold: genai.HarmBlockLowAndAbove,
		},
		{
			Category:  genai.HarmCategoryDangerousContent,
			Threshold: genai.HarmBlockLowAndAbove,
		},
	}

	// Given an image file URL, prepare image file as genai.Part
	img := genai.FileData{
		MIMEType: mime.TypeByExtension(filepath.Ext(image)),
		FileURI:  image,
	}

	res, err := model.GenerateContent(ctx, img, genai.Text(prompt))
	if err != nil {
		return fmt.Errorf("unable to generate contents: %w", err)
	}

	fmt.Fprintf(w, "generated response: %s\n", res.Candidates[0].Content.Parts[0])
	return nil
}

C#

Before trying this sample, follow the C# setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI C# API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.


using Google.Api.Gax.Grpc;
using Google.Cloud.AIPlatform.V1;
using System.Text;
using System.Threading.Tasks;
using static Google.Cloud.AIPlatform.V1.SafetySetting.Types;

public class WithSafetySettings
{
    public async Task<string> GenerateContent(
        string projectId = "your-project-id",
        string location = "us-central1",
        string publisher = "google",
        string model = "gemini-1.0-pro-vision"
    )
    {
        var predictionServiceClient = new PredictionServiceClientBuilder
        {
            Endpoint = $"{location}-aiplatform.googleapis.com"
        }.Build();


        var generateContentRequest = new GenerateContentRequest
        {
            Model = $"projects/{projectId}/locations/{location}/publishers/{publisher}/models/{model}",
            Contents =
            {
                new Content
                {
                    Role = "USER",
                    Parts =
                    {
                        new Part { Text = "Hello!" }
                    }
                }
            },
            SafetySettings =
            {
                new SafetySetting
                {
                    Category = HarmCategory.HateSpeech,
                    Threshold = HarmBlockThreshold.BlockLowAndAbove
                },
                new SafetySetting
                {
                    Category = HarmCategory.DangerousContent,
                    Threshold = HarmBlockThreshold.BlockMediumAndAbove
                }
            }
        };

        using PredictionServiceClient.StreamGenerateContentStream response = predictionServiceClient.StreamGenerateContent(generateContentRequest);

        StringBuilder fullText = new();

        AsyncResponseStream<GenerateContentResponse> responseStream = response.GetResponseStream();
        await foreach (GenerateContentResponse responseItem in responseStream)
        {
            // Check if the content has been blocked for safety reasons.
            bool blockForSafetyReason = responseItem.Candidates[0].FinishReason == Candidate.Types.FinishReason.Safety;
            if (blockForSafetyReason)
            {
                fullText.Append("Blocked for safety reasons");
            }
            else
            {
                fullText.Append(responseItem.Candidates[0].Content.Parts[0].Text);
            }
        }

        return fullText.ToString();
    }
}

REST

Before using any of the request data, make the following replacements:

  • LOCATION: The region to process the request. Available options include the following:

    • us-central1
    • us-west4
    • northamerica-northeast1
    • us-east4
    • us-west1
    • asia-northeast3
    • asia-southeast1
    • asia-northeast1
  • PROJECT_ID: Your project ID.
  • MODEL_ID: The model ID of the multimodal model that you want to use. The options are:
    • gemini-1.0-pro
    • gemini-1.0-pro-vision
  • ROLE: The role in a conversation associated with the content. Specifying a role is required even in single-turn use cases. Acceptable values include the following:
    • USER: Specifies content that's sent by you.
    • MODEL: Specifies the model's response.
  • TEXT: The text instructions to include in the prompt.
  • SAFETY_CATEGORY: The safety category to configure a threshold for. Acceptable values include the following:

    • HARM_CATEGORY_SEXUALLY_EXPLICIT
    • HARM_CATEGORY_HATE_SPEECH
    • HARM_CATEGORY_HARASSMENT
    • HARM_CATEGORY_DANGEROUS_CONTENT
  • THRESHOLD: The threshold for blocking responses that could belong to the specified safety category based on probability. Acceptable values include the following:

    • BLOCK_NONE
    • BLOCK_ONLY_HIGH
    • BLOCK_MEDIUM_AND_ABOVE (default)
    • BLOCK_LOW_AND_ABOVE
    BLOCK_LOW_AND_ABOVE blocks the most while BLOCK_ONLY_HIGH blocks the least.

HTTP method and URL:

POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/MODEL_ID:streamGenerateContent

Request JSON body:

{
  "contents": {
    "role": "ROLE",
    "parts": { "text": "TEXT" }
  },
  "safety_settings": {
    "category": "SAFETY_CATEGORY",
    "threshold": "THRESHOLD"
  }
}

To send your request, choose one of these options:

curl

Save the request body in a file named request.json, and execute the following command:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/MODEL_ID:streamGenerateContent"

PowerShell

Save the request body in a file named request.json, and execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/MODEL_ID:streamGenerateContent" | Select-Object -Expand Content

You should receive a successful JSON response.

Example curl command

LOCATION="us-central1"
MODEL_ID="gemini-1.0-pro"
PROJECT_ID="test-project"

curl \
-X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
https://${LOCATION}-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}/locations/${LOCATION}/publishers/google/models/${MODEL_ID}:streamGenerateContent -d \
$'{
  "contents": {
    "role": "user",
    "parts": { "text": "Hello!" }
  },
  "safety_settings": [
    {
      "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
      "threshold": "BLOCK_NONE"
    },
    {
      "category": "HARM_CATEGORY_HATE_SPEECH",
      "threshold": "BLOCK_LOW_AND_ABOVE"
    },
    {
      "category": "HARM_CATEGORY_HARASSMENT",
      "threshold": "BLOCK_MEDIUM_AND_ABOVE"
    },
    {
      "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
      "threshold": "BLOCK_ONLY_HIGH"
    }
  ]
}'

Console

  1. In the Vertex AI section of the Google Cloud console, go to the Vertex AI Studio page.

    Go to Vertex AI Studio

  2. Under Create a new prompt, click any of the buttons to open the prompt design page.

  3. Click Safety settings.

    The Safety settings dialog window opens.

  4. For each safety attribute, configure the desired threshold value.

  5. Click Save.

What's next