
GoogleCloudDialogflowCxV3QueryResult

import type { GoogleCloudDialogflowCxV3QueryResult } from "https://googleapis.deno.dev/v1/dialogflow:v3.ts";

Represents the result of a conversational query.

interface GoogleCloudDialogflowCxV3QueryResult {
  allowAnswerFeedback?: boolean;
  diagnosticInfo?: {
    [key: string]: any;
  };
  intentDetectionConfidence?: number;
  languageCode?: string;
  parameters?: {
    [key: string]: any;
  };
  text?: string;
  transcript?: string;
  triggerEvent?: string;
  triggerIntent?: string;
  webhookPayloads?: {
    [key: string]: any;
  }[];
  webhookStatuses?: GoogleRpcStatus[];
}
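
As a quick orientation, here is a minimal sketch (not part of the generated module) that reports which of the optional input echo fields documented below is populated on a query result; describeInput is a hypothetical helper name.

import type { GoogleCloudDialogflowCxV3QueryResult } from "https://googleapis.deno.dev/v1/dialogflow:v3.ts";

// Hypothetical helper: report which kind of end-user input produced this result,
// using only the optional echo fields documented on this page.
function describeInput(result: GoogleCloudDialogflowCxV3QueryResult): string {
  if (result.text !== undefined) return `text input: "${result.text}"`;
  if (result.transcript !== undefined) return `speech input, transcribed as: "${result.transcript}"`;
  if (result.triggerEvent !== undefined) return `event input: ${result.triggerEvent}`;
  if (result.triggerIntent !== undefined) return `intent input: ${result.triggerIntent}`;
  return "no input echo fields set";
}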

§Properties

§

Returns the current advanced settings including IVR settings. Even though the operations configured by these settings are performed by Dialogflow, the client may need to perform special logic at the moment. For example, if Dialogflow exports audio to Google Cloud Storage, then the client may need to wait for the resulting object to appear in the bucket before proceeding.

§
allowAnswerFeedback?: boolean
[src]

Indicates whether the Thumbs up/Thumbs down rating controls need to be shown for the response in the Dialogflow Messenger widget.

§

The current Page. Only some, not all, fields are filled in this message, including but not limited to name and display_name.

§

Optional. Data store connection feature output signals. Filled only when data stores are involved in serving the query and DetectIntentRequest.populate_data_store_connection_quality_signals is set to true in the request.

§
diagnosticInfo?: {
  [key: string]: any;
}
[src]

The free-form diagnostic info. For example, this field could contain webhook call latency. The fields of this data can change without notice, so you should not write code that depends on its structure. One of the fields is called "Alternative Matched Intents", which may aid with debugging. The following describes these intent results:

- The list is empty if no intent was matched to end-user input.
- Only intents that are referenced in the currently active flow are included.
- The matched intent is included.
- Other intents that could have matched end-user input, but did not match because they are referenced by intent routes that are out of scope, are included.
- Other intents referenced by intent routes in scope that matched end-user input, but had a lower confidence score.
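
Because the structure is not stable, a conservative sketch (reusing the type import from the top of this page) serializes the whole object for debugging rather than reading specific keys:

// Treat diagnosticInfo as an opaque JSON value; do not rely on specific keys.
function logDiagnostics(result: GoogleCloudDialogflowCxV3QueryResult): void {
  if (result.diagnosticInfo !== undefined) {
    console.debug(JSON.stringify(result.diagnosticInfo, null, 2));
  }
}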

§

If a DTMF was provided as input, this field will contain a copy of the DtmfInput.

§

The Intent that matched the conversational query. Only some, not all, fields are filled in this message, including but not limited to: name and display_name. This field is deprecated; please use QueryResult.match instead.

§
intentDetectionConfidence?: number
[src]

The intent detection confidence. Values range from 0.0 (completely uncertain) to 1.0 (completely certain). This value is for informational purposes only and is only used to help match the best intent within the classification threshold. This value may change for the same end-user expression at any time due to a model retraining or change in implementation. This field is deprecated; please use QueryResult.match instead.

§
languageCode?: string
[src]

The language that was triggered during intent detection. See Language Support for a list of the currently supported language codes.

§

Intent match result, could be an intent or an event.

§
parameters?: {
  [key: string]: any;
}
[src]

The collected session parameters. Depending on your protocol or client library language, this is a map, associative array, symbol table, dictionary, or JSON object composed of a collection of (MapKey, MapValue) pairs:

* MapKey type: string
* MapKey value: parameter name
* MapValue type: If parameter's entity type is a composite entity then use map, otherwise, depending on the parameter value type, it could be one of string, number, boolean, null, list or map.
* MapValue value: If parameter's entity type is a composite entity then use map from composite entity property names to property values, otherwise, use parameter value.
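
A hedged sketch of reading parameters under this mapping (reusing the type import from the top of this page); the parameter names "size" and "delivery-address" are illustrative only and not part of the API:

function readParameters(result: GoogleCloudDialogflowCxV3QueryResult): void {
  const params = result.parameters ?? {};

  // Simple parameter: value is a string, number, boolean, null, or list.
  console.log("size:", params["size"]);

  // Composite-entity parameter: value is a map from property names to property values.
  const address = params["delivery-address"];
  if (address && typeof address === "object" && !Array.isArray(address)) {
    for (const [property, value] of Object.entries(address)) {
      console.log(`delivery-address.${property}:`, value);
    }
  }
}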

§

The list of rich messages returned to the client. Responses vary from simple text messages to more sophisticated, structured payloads used to drive complex logic.

§

The sentiment analysis result, which depends on analyze_query_text_sentiment, specified in the request.

§
text?: string
[src]

If natural language text was provided as input, this field will contain a copy of the text.

§
transcript?: string
[src]

If natural language speech audio was provided as input, this field will contain the transcript for the audio.

§
triggerEvent?: string
[src]

If an event was provided as input, this field will contain the name of the event.

§
triggerIntent?: string
[src]

If an intent was provided as input, this field will contain a copy of the intent identifier. Format: projects/<Project ID>/locations/<Location ID>/agents/<Agent ID>/intents/<Intent ID>.
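
For illustration only (this helper is not part of the module, and reuses the type import from the top of this page), the trailing intent ID can be pulled out of that resource name with ordinary string handling:

// e.g. "projects/p/locations/l/agents/a/intents/1234" -> "1234"
function intentIdFromTrigger(result: GoogleCloudDialogflowCxV3QueryResult): string | undefined {
  return result.triggerIntent?.split("/").pop();
}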

§
webhookPayloads?: {
  [key: string]: any;
}[]
[src]

The list of webhook payloads in WebhookResponse.payload, in the order of call sequence. If a webhook call fails or doesn't return any payload, an empty Struct is used instead.

§
webhookStatuses?: GoogleRpcStatus[]
[src]

The list of webhook call statuses in the order of call sequence.
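
Because both webhookPayloads and webhookStatuses follow the call sequence, they can be paired index by index. This sketch (reusing the type import from the top of this page) assumes the standard google.rpc.Status convention that an unset or zero code means OK:

// Pair each webhook payload with its call status; a failed or payload-less
// call contributes an empty object to webhookPayloads.
function reportWebhookCalls(result: GoogleCloudDialogflowCxV3QueryResult): void {
  const payloads = result.webhookPayloads ?? [];
  const statuses = result.webhookStatuses ?? [];
  payloads.forEach((payload, i) => {
    const status = statuses[i];
    if (status?.code) {
      console.warn(`webhook call ${i} failed: ${status.message ?? "unknown error"}`);
    } else {
      console.log(`webhook call ${i} payload:`, payload);
    }
  });
}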