
XPSTrainResponse

import type { XPSTrainResponse } from "https://googleapis.deno.dev/v1/language:v2.ts";

Next ID: 18

interface XPSTrainResponse {
  deployedModelSizeBytes?: bigint;
  errorAnalysisConfigs?: XPSVisionErrorAnalysisConfig[];
  evaluatedExampleSet?: XPSExampleSet;
  evaluationMetricsSet?: XPSEvaluationMetricsSet;
  explanationConfigs?: XPSResponseExplanationSpec[];
  imageClassificationTrainResp?: XPSImageClassificationTrainResponse;
  imageObjectDetectionTrainResp?: XPSImageObjectDetectionModelSpec;
  imageSegmentationTrainResp?: XPSImageSegmentationTrainResponse;
  modelToken?: Uint8Array;
  speechTrainResp?: XPSSpeechModelSpec;
  tablesTrainResp?: XPSTablesTrainResponse;
  textToSpeechTrainResp?: XPSTextToSpeechTrainResponse;
  textTrainResp?: XPSTextTrainResponse;
  translationTrainResp?: XPSTranslationTrainResponse;
  videoActionRecognitionTrainResp?: XPSVideoActionRecognitionTrainResponse;
  videoClassificationTrainResp?: XPSVideoClassificationTrainResponse;
  videoObjectTrackingTrainResp?: XPSVideoObjectTrackingTrainResponse;
}
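
Exactly one of the per-domain `*TrainResp` fields is typically populated, depending on the model type that was trained. A minimal sketch of inspecting which variant is set (the `trainedDomain` helper and the `TrainResponseLike` stand-in type are illustrative assumptions, not part of the module; real code would import `XPSTrainResponse` from the URL shown above and could check the remaining fields the same way):

```typescript
// Structural stand-in covering only the fields used here; the full
// XPSTrainResponse interface is defined above.
type TrainResponseLike = {
  textTrainResp?: unknown;
  translationTrainResp?: unknown;
  speechTrainResp?: unknown;
};

// Hypothetical helper: report which per-domain training response is present.
function trainedDomain(resp: TrainResponseLike): string {
  if (resp.textTrainResp !== undefined) return "text";
  if (resp.translationTrainResp !== undefined) return "translation";
  if (resp.speechTrainResp !== undefined) return "speech";
  return "unknown";
}

trainedDomain({ textTrainResp: {} }); // "text"
```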

§Properties

§
deployedModelSizeBytes?: bigint
[src]

Estimated model size in bytes once deployed.
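
Because the value is a `bigint`, arithmetic on it must stay in `bigint` throughout. A small sketch of rendering such a byte count for display (the `formatBytes` helper is an illustration, not part of the API):

```typescript
// Hypothetical helper: render a bigint byte count, such as
// deployedModelSizeBytes, in human-readable binary units.
// Note that bigint division truncates toward zero.
function formatBytes(size: bigint): string {
  const units = ["B", "KiB", "MiB", "GiB", "TiB"];
  let value = size;
  let i = 0;
  while (value >= 1024n && i < units.length - 1) {
    value /= 1024n;
    i++;
  }
  return `${value} ${units[i]}`;
}

formatBytes(5242880n); // "5 MiB"
```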

§
errorAnalysisConfigs?: XPSVisionErrorAnalysisConfig[]
[src]

Optional vision model error analysis configuration. The field is set when model error analysis is enabled in the training request. The results of error analysis will be bound together with the evaluation results (in the format of AnnotatedExample).

§
evaluatedExampleSet?: XPSExampleSet
[src]

Examples used to evaluate the model (usually the test set), with the predicted annotations. The file_spec should point to recordio file(s) of AnnotatedExample. For each returned example, the example_id_token and the annotations predicted by the model must be set. The example payload can be omitted, and omitting it is recommended.

§
evaluationMetricsSet?: XPSEvaluationMetricsSet
[src]

Evaluation metrics for the trained model. May optionally be returned.

§
explanationConfigs?: XPSResponseExplanationSpec[]
[src]

VisionExplanationConfig for XAI on the test set. Optional; set when XAI is enabled in the training request.

§
imageClassificationTrainResp?: XPSImageClassificationTrainResponse
[src]
§
imageObjectDetectionTrainResp?: XPSImageObjectDetectionModelSpec
[src]
§
imageSegmentationTrainResp?: XPSImageSegmentationTrainResponse
[src]
§
modelToken?: Uint8Array
[src]

Token that represents the trained model. This is considered immutable and is persisted in AutoML. xPS implementations can put their own proto in the byte string, e.g. to point to the model checkpoints. The token is passed to other xPS APIs to refer to the model.
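
Since the token is an opaque `Uint8Array`, a caller that needs to persist it as text might base64-encode it. This round-trip sketch is an assumption about client-side handling, not something prescribed by the API; `encodeToken` and `decodeToken` are hypothetical helpers:

```typescript
// Hypothetical helpers: base64-encode an opaque model token for storage,
// and decode it back to bytes before passing it to other xPS APIs.
// btoa/atob operate on latin1 strings, which covers arbitrary bytes 0-255.
function encodeToken(token: Uint8Array): string {
  let binary = "";
  for (const b of token) binary += String.fromCharCode(b);
  return btoa(binary);
}

function decodeToken(encoded: string): Uint8Array {
  return Uint8Array.from(atob(encoded), (c) => c.charCodeAt(0));
}
```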

§
speechTrainResp?: XPSSpeechModelSpec
[src]
§
tablesTrainResp?: XPSTablesTrainResponse
[src]
§
textToSpeechTrainResp?: XPSTextToSpeechTrainResponse
[src]

Will only be needed for uCAIP from Beta.

§
textTrainResp?: XPSTextTrainResponse
[src]

§
translationTrainResp?: XPSTranslationTrainResponse
[src]
§
videoActionRecognitionTrainResp?: XPSVideoActionRecognitionTrainResponse
[src]
§
videoClassificationTrainResp?: XPSVideoClassificationTrainResponse
[src]
§
videoObjectTrackingTrainResp?: XPSVideoObjectTrackingTrainResponse
[src]