GuardrailLlmPromptSecurity
import type { GuardrailLlmPromptSecurity } from "https://googleapis.deno.dev/v1/ces:v1.ts";

Guardrail that blocks the conversation if the input is considered unsafe based on the LLM classification.
interface GuardrailLlmPromptSecurity {
  customPolicy?: GuardrailLlmPolicy;
  defaultSettings?: GuardrailLlmPromptSecurityDefaultSecuritySettings;
  failOpen?: boolean;
}

Properties
customPolicy?: GuardrailLlmPolicy
Optional. Use a user-defined LlmPolicy to configure the security guardrail.
defaultSettings?: GuardrailLlmPromptSecurityDefaultSecuritySettings
Optional. Use the system's predefined default security settings. To select this mode, include an empty 'default_settings' message in the request. The 'default_prompt_template' field within will be populated by the server in the response.
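As a rough sketch, assuming every field of GuardrailLlmPromptSecurityDefaultSecuritySettings is optional (so an empty object is a valid message), selecting this mode could look like the following:

import type { GuardrailLlmPromptSecurity } from "https://googleapis.deno.dev/v1/ces:v1.ts";

// Request the system's predefined default security settings by sending an
// empty `defaultSettings` message; the server populates
// `default_prompt_template` in its response.
const promptSecurity: GuardrailLlmPromptSecurity = {
  defaultSettings: {},
};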
failOpen?: boolean
Optional. Determines the behavior when the guardrail encounters an LLM error.

- If true: the guardrail is bypassed.
- If false (default): the guardrail triggers/blocks.

Note: If a custom policy is provided, this field is ignored in favor of the policy's 'fail_open' configuration.
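A minimal sketch of the fail-open behavior described above, again assuming an empty defaultSettings message is valid: with no customPolicy set, failOpen: true bypasses the guardrail when the LLM errors instead of blocking the conversation.

import type { GuardrailLlmPromptSecurity } from "https://googleapis.deno.dev/v1/ces:v1.ts";

const lenientSecurity: GuardrailLlmPromptSecurity = {
  defaultSettings: {},
  // Bypass the guardrail on LLM errors rather than blocking. If customPolicy
  // were provided instead, the policy's own 'fail_open' configuration would
  // take precedence and this field would be ignored.
  failOpen: true,
};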