GoogleCloudAiplatformV1SafetySetting

import type { GoogleCloudAiplatformV1SafetySetting } from "https://googleapis.deno.dev/v1/aiplatform:v1.ts";

A safety setting that affects the safety-blocking behavior. A SafetySetting consists of a harm category and a threshold for that category.

interface GoogleCloudAiplatformV1SafetySetting {
  category?:
    | "HARM_CATEGORY_UNSPECIFIED"
    | "HARM_CATEGORY_HATE_SPEECH"
    | "HARM_CATEGORY_DANGEROUS_CONTENT"
    | "HARM_CATEGORY_HARASSMENT"
    | "HARM_CATEGORY_SEXUALLY_EXPLICIT"
    | "HARM_CATEGORY_CIVIC_INTEGRITY"
    | "HARM_CATEGORY_IMAGE_HATE"
    | "HARM_CATEGORY_IMAGE_DANGEROUS_CONTENT"
    | "HARM_CATEGORY_IMAGE_HARASSMENT"
    | "HARM_CATEGORY_IMAGE_SEXUALLY_EXPLICIT"
    | "HARM_CATEGORY_JAILBREAK";
  method?: "HARM_BLOCK_METHOD_UNSPECIFIED" | "SEVERITY" | "PROBABILITY";
  threshold?:
    | "HARM_BLOCK_THRESHOLD_UNSPECIFIED"
    | "BLOCK_LOW_AND_ABOVE"
    | "BLOCK_MEDIUM_AND_ABOVE"
    | "BLOCK_ONLY_HIGH"
    | "BLOCK_NONE"
    | "OFF";
}

§Properties

category?: "HARM_CATEGORY_UNSPECIFIED" | "HARM_CATEGORY_HATE_SPEECH" | "HARM_CATEGORY_DANGEROUS_CONTENT" | "HARM_CATEGORY_HARASSMENT" | "HARM_CATEGORY_SEXUALLY_EXPLICIT" | "HARM_CATEGORY_CIVIC_INTEGRITY" | "HARM_CATEGORY_IMAGE_HATE" | "HARM_CATEGORY_IMAGE_DANGEROUS_CONTENT" | "HARM_CATEGORY_IMAGE_HARASSMENT" | "HARM_CATEGORY_IMAGE_SEXUALLY_EXPLICIT" | "HARM_CATEGORY_JAILBREAK"

Required. The harm category to be blocked.

method?: "HARM_BLOCK_METHOD_UNSPECIFIED" | "SEVERITY" | "PROBABILITY"

Optional. The method used to decide whether to block content: by harm severity score or by harm probability score. If not specified, the default behavior is to use the probability score.

threshold?: "HARM_BLOCK_THRESHOLD_UNSPECIFIED" | "BLOCK_LOW_AND_ABOVE" | "BLOCK_MEDIUM_AND_ABOVE" | "BLOCK_ONLY_HIGH" | "BLOCK_NONE" | "OFF"

Required. The threshold for blocking content. If the harm probability exceeds this threshold, the content will be blocked.
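As a sketch of how these settings are typically assembled before being passed in a request (the array name `safetySettings` and the particular category/threshold choices are illustrative; in real code the type would be imported from the URL shown above rather than redeclared):

```typescript
// The inline interface below mirrors the documented shape; in real code,
// import it instead:
//   import type { GoogleCloudAiplatformV1SafetySetting }
//     from "https://googleapis.deno.dev/v1/aiplatform:v1.ts";
interface GoogleCloudAiplatformV1SafetySetting {
  category?:
    | "HARM_CATEGORY_UNSPECIFIED"
    | "HARM_CATEGORY_HATE_SPEECH"
    | "HARM_CATEGORY_DANGEROUS_CONTENT"
    | "HARM_CATEGORY_HARASSMENT"
    | "HARM_CATEGORY_SEXUALLY_EXPLICIT"
    | "HARM_CATEGORY_CIVIC_INTEGRITY"
    | "HARM_CATEGORY_IMAGE_HATE"
    | "HARM_CATEGORY_IMAGE_DANGEROUS_CONTENT"
    | "HARM_CATEGORY_IMAGE_HARASSMENT"
    | "HARM_CATEGORY_IMAGE_SEXUALLY_EXPLICIT"
    | "HARM_CATEGORY_JAILBREAK";
  method?: "HARM_BLOCK_METHOD_UNSPECIFIED" | "SEVERITY" | "PROBABILITY";
  threshold?:
    | "HARM_BLOCK_THRESHOLD_UNSPECIFIED"
    | "BLOCK_LOW_AND_ABOVE"
    | "BLOCK_MEDIUM_AND_ABOVE"
    | "BLOCK_ONLY_HIGH"
    | "BLOCK_NONE"
    | "OFF";
}

// One setting per harm category: block hate speech at medium-or-above
// probability, and block harassment only at high severity.
const safetySettings: GoogleCloudAiplatformV1SafetySetting[] = [
  {
    category: "HARM_CATEGORY_HATE_SPEECH",
    threshold: "BLOCK_MEDIUM_AND_ABOVE",
    method: "PROBABILITY",
  },
  {
    category: "HARM_CATEGORY_HARASSMENT",
    threshold: "BLOCK_ONLY_HIGH",
    method: "SEVERITY",
  },
];

console.log(JSON.stringify(safetySettings, null, 2));
```

Because every field is optional, an empty object also type-checks; the enum unions simply constrain which string literals the compiler accepts for each field.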