GoogleCloudDialogflowV2QueryResult
import type { GoogleCloudDialogflowV2QueryResult } from "https://googleapis.deno.dev/v1/dialogflow:v2.ts";
Represents the result of conversational query or event processing.
§Properties
`allRequiredParamsPresent`

This field is set to:
- `false` if the matched intent has required parameters and not all of the required parameter values have been collected.
- `true` if all required parameter values have been collected, or if the matched intent doesn't contain any required parameters.
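A consumer typically branches on this flag before acting on the intent. A minimal sketch, using a hypothetical local `QueryResultLike` shape rather than the full generated type:

```typescript
// Hypothetical simplified shape for illustration; the real type is the
// generated GoogleCloudDialogflowV2QueryResult.
interface QueryResultLike {
  allRequiredParamsPresent?: boolean;
}

// Decide whether the bot can fulfill the intent or must keep slot filling.
// A missing flag is treated conservatively as "not all parameters collected".
function readyToFulfill(result: QueryResultLike): boolean {
  return result.allRequiredParamsPresent === true;
}

const partial: QueryResultLike = { allRequiredParamsPresent: false };
const complete: QueryResultLike = { allRequiredParamsPresent: true };
```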
`cancelsSlotFilling`

Indicates whether the conversational query triggers a cancellation for slot filling. For more information, see the cancel slot filling documentation.
`diagnosticInfo`

Free-form diagnostic information for the associated detect intent request. The fields of this data can change without notice, so you should not write code that depends on its structure. The data may contain:
- webhook call latency
- webhook errors
`fulfillmentMessages`

The collection of rich messages to present to the user.
`fulfillmentText`

The text to be pronounced to the user or shown on the screen. Note: This is a legacy field; `fulfillment_messages` should be preferred.
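A common pattern is to prefer the rich messages and fall back to the legacy text field. A sketch under a simplified local shape (`ResultLike` and its text message structure only loosely mirror the generated V2 message types):

```typescript
// Hypothetical simplified shapes for illustration only.
interface TextMessage {
  text?: { text?: string[] };
}
interface ResultLike {
  fulfillmentMessages?: TextMessage[];
  fulfillmentText?: string;
}

// Prefer the rich fulfillmentMessages; fall back to legacy fulfillmentText.
function responseText(result: ResultLike): string {
  const fromMessages = result.fulfillmentMessages
    ?.flatMap((m) => m.text?.text ?? [])
    .join(" ");
  return fromMessages || result.fulfillmentText || "";
}

const rich: ResultLike = {
  fulfillmentMessages: [{ text: { text: ["Hi", "there"] } }],
  fulfillmentText: "legacy",
};
const legacyOnly: ResultLike = { fulfillmentText: "legacy" };
```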
`intent`

The intent that matched the conversational query. Only some fields are filled in this message, including but not limited to: `name`, `display_name`, `end_interaction` and `is_fallback`.
`intentDetectionConfidence`

The intent detection confidence. Values range from 0.0 (completely uncertain) to 1.0 (completely certain). This value is for informational purposes only and is only used to help match the best intent within the classification threshold. This value may change for the same end-user expression at any time due to a model retraining or change in implementation. If there are multiple `knowledge_answers` messages, this value is set to the greatest `knowledgeAnswers.match_confidence` value in the list.
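Applications often gate on this confidence with their own cutoff before trusting the match. The threshold below is an illustrative application-level value, not one defined by the API:

```typescript
// Illustrative threshold (an application choice, not part of the API).
const CONFIDENCE_THRESHOLD = 0.6;

// Decide whether to trust the matched intent or route to a fallback flow.
function isConfidentMatch(intentDetectionConfidence: number): boolean {
  return intentDetectionConfidence >= CONFIDENCE_THRESHOLD;
}
```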
`languageCode`

The language that was triggered during intent detection. See Language Support for a list of the currently supported language codes.
`outputContexts`

The collection of output contexts. If applicable, `output_contexts.parameters` contains entries with name `<parameter name>.original` containing the original parameter values before the query.
`parameters`

The collection of extracted parameters. Depending on your protocol or client library language, this is a map, associative array, symbol table, dictionary, or JSON object composed of a collection of (MapKey, MapValue) pairs:
- MapKey type: string
- MapKey value: parameter name
- MapValue type: If the parameter's entity type is a composite entity then use map, otherwise, depending on the parameter value type, it could be one of string, number, boolean, null, list or map.
- MapValue value: If the parameter's entity type is a composite entity then use a map from composite entity property names to property values, otherwise, use the parameter value.
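The composite-entity rule above means the value side of the map can nest. A sketch of walking such a map into dotted key/value pairs (`flattenParams` is a hypothetical helper, not part of the library):

```typescript
// Value shape implied by the description: scalar, list, or nested map
// (the nested-map case corresponds to a composite entity).
type ParamValue =
  | string
  | number
  | boolean
  | null
  | ParamValue[]
  | { [key: string]: ParamValue };

// Flatten a parameters map into dotted-path/value pairs, recursing into
// composite-entity sub-maps. Hypothetical helper for illustration.
function flattenParams(
  params: { [key: string]: ParamValue },
  prefix = "",
): [string, ParamValue][] {
  const out: [string, ParamValue][] = [];
  for (const [key, value] of Object.entries(params)) {
    const path = prefix ? `${prefix}.${key}` : key;
    if (value !== null && typeof value === "object" && !Array.isArray(value)) {
      out.push(...flattenParams(value, path)); // composite entity: recurse
    } else {
      out.push([path, value]);
    }
  }
  return out;
}

const flat = flattenParams({
  date: "2024-05-01",
  location: { city: "Paris", country: "France" }, // composite entity
});
```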
`queryText`

The original conversational query text:
- If natural language text was provided as input, `query_text` contains a copy of the input.
- If natural language speech audio was provided as input, `query_text` contains the speech recognition result. If the speech recognizer produced multiple alternatives, a particular one is picked.
- If automatic spell correction is enabled, `query_text` will contain the corrected user input.
`sentimentAnalysisResult`

The sentiment analysis result, which depends on the `sentiment_analysis_request_config` specified in the request.
`speechRecognitionConfidence`

The speech recognition confidence between 0.0 and 1.0. A higher number indicates an estimated greater likelihood that the recognized words are correct. The default of 0.0 is a sentinel value indicating that confidence was not set. This field is not guaranteed to be accurate or set. In particular, this field isn't set for StreamingDetectIntent, since the streaming endpoint has separate confidence estimates per portion of the audio in StreamingRecognitionResult.
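Because 0.0 doubles as a "not set" sentinel, a caller cannot distinguish it from a genuine zero-confidence estimate. One defensive reading (a sketch, not library behavior) is to surface the sentinel as `undefined`:

```typescript
// Map the 0.0 sentinel ("confidence was not set") to undefined so callers
// don't mistake it for a genuine low-confidence estimate.
function speechConfidence(raw: number | undefined): number | undefined {
  return raw === undefined || raw === 0 ? undefined : raw;
}
```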