
GoogleCloudDatalabelingV1beta1Evaluation

import type { GoogleCloudDatalabelingV1beta1Evaluation } from "https://googleapis.deno.dev/v1/datalabeling:v1beta1.ts";

Describes an evaluation between a machine learning model's predictions and ground truth labels. Created when an EvaluationJob runs successfully.

interface GoogleCloudDatalabelingV1beta1Evaluation {
  annotationType?:
    | "ANNOTATION_TYPE_UNSPECIFIED"
    | "IMAGE_CLASSIFICATION_ANNOTATION"
    | "IMAGE_BOUNDING_BOX_ANNOTATION"
    | "IMAGE_ORIENTED_BOUNDING_BOX_ANNOTATION"
    | "IMAGE_BOUNDING_POLY_ANNOTATION"
    | "IMAGE_POLYLINE_ANNOTATION"
    | "IMAGE_SEGMENTATION_ANNOTATION"
    | "VIDEO_SHOTS_CLASSIFICATION_ANNOTATION"
    | "VIDEO_OBJECT_TRACKING_ANNOTATION"
    | "VIDEO_OBJECT_DETECTION_ANNOTATION"
    | "VIDEO_EVENT_ANNOTATION"
    | "TEXT_CLASSIFICATION_ANNOTATION"
    | "TEXT_ENTITY_EXTRACTION_ANNOTATION"
    | "GENERAL_CLASSIFICATION_ANNOTATION";
  config?: GoogleCloudDatalabelingV1beta1EvaluationConfig;
  createTime?: Date;
  evaluatedItemCount?: bigint;
  evaluationJobRunTime?: Date;
  evaluationMetrics?: GoogleCloudDatalabelingV1beta1EvaluationMetrics;
  name?: string;
}
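
All fields are optional and output only, so client code typically just reads them off a returned value. Below is a minimal sketch of doing that; the summarizeEvaluation helper and the sample object are assumptions made for illustration and are not part of the generated module.

import type { GoogleCloudDatalabelingV1beta1Evaluation } from "https://googleapis.deno.dev/v1/datalabeling:v1beta1.ts";

// Illustrative helper (not part of the module): turn an evaluation into a
// one-line summary, falling back to placeholders for absent optional fields.
function summarizeEvaluation(
  ev: GoogleCloudDatalabelingV1beta1Evaluation,
): string {
  const name = ev.name ?? "(unnamed evaluation)";
  const type = ev.annotationType ?? "ANNOTATION_TYPE_UNSPECIFIED";
  const items = ev.evaluatedItemCount ?? 0n;
  const created = ev.createTime?.toISOString() ?? "unknown time";
  return `${name}: ${type}, ${items} evaluated items, created ${created}`;
}

// Hypothetical sample value, shaped like what an EvaluationJob run produces.
const example: GoogleCloudDatalabelingV1beta1Evaluation = {
  name: "projects/my-project/datasets/my-dataset/evaluations/my-evaluation",
  annotationType: "IMAGE_CLASSIFICATION_ANNOTATION",
  evaluatedItemCount: 1024n,
  createTime: new Date("2024-01-01T00:00:00Z"),
};

console.log(summarizeEvaluation(example));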

Properties

annotationType?: "ANNOTATION_TYPE_UNSPECIFIED" | "IMAGE_CLASSIFICATION_ANNOTATION" | "IMAGE_BOUNDING_BOX_ANNOTATION" | "IMAGE_ORIENTED_BOUNDING_BOX_ANNOTATION" | "IMAGE_BOUNDING_POLY_ANNOTATION" | "IMAGE_POLYLINE_ANNOTATION" | "IMAGE_SEGMENTATION_ANNOTATION" | "VIDEO_SHOTS_CLASSIFICATION_ANNOTATION" | "VIDEO_OBJECT_TRACKING_ANNOTATION" | "VIDEO_OBJECT_DETECTION_ANNOTATION" | "VIDEO_EVENT_ANNOTATION" | "TEXT_CLASSIFICATION_ANNOTATION" | "TEXT_ENTITY_EXTRACTION_ANNOTATION" | "GENERAL_CLASSIFICATION_ANNOTATION"

Output only. Type of task that the model version being evaluated performs, as defined in the evaluationJobConfig.inputConfig.annotationType field of the evaluation job that created this evaluation.
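
Because annotationType is a closed string union, callers can branch on task families directly. The sketch below groups the classification-style values; the grouping and the isClassificationEvaluation helper are assumptions of this example, not something the API defines.

import type { GoogleCloudDatalabelingV1beta1Evaluation } from "https://googleapis.deno.dev/v1/datalabeling:v1beta1.ts";

// Illustrative grouping of the classification-style annotation types.
const CLASSIFICATION_TYPES = [
  "IMAGE_CLASSIFICATION_ANNOTATION",
  "VIDEO_SHOTS_CLASSIFICATION_ANNOTATION",
  "TEXT_CLASSIFICATION_ANNOTATION",
  "GENERAL_CLASSIFICATION_ANNOTATION",
] as const;

function isClassificationEvaluation(
  ev: GoogleCloudDatalabelingV1beta1Evaluation,
): boolean {
  // annotationType is optional; treat a missing value as "not classification".
  return ev.annotationType !== undefined &&
    (CLASSIFICATION_TYPES as readonly string[]).includes(ev.annotationType);
}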

config?: GoogleCloudDatalabelingV1beta1EvaluationConfig

Output only. Options used in the evaluation job that created this evaluation.

createTime?: Date

Output only. Timestamp for when this evaluation was created.

evaluatedItemCount?: bigint

Output only. The number of items in the ground truth dataset that were used for this evaluation. Only populated when the evaluation is for certain AnnotationTypes.

evaluationJobRunTime?: Date

Output only. Timestamp for when the evaluation job that created this evaluation ran.

evaluationMetrics?: GoogleCloudDatalabelingV1beta1EvaluationMetrics

Output only. Metrics comparing predictions to ground truth labels.

name?: string

Output only. Resource name of an evaluation. The name has the following format: "projects/{project_id}/datasets/{dataset_id}/evaluations/{evaluation_id}"
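
A small sketch of recovering the IDs from that format follows; the parseEvaluationName helper and its regular expression are assumptions made for illustration and simply mirror the documented pattern.

// Illustrative helper (not part of the module): split a resource name of the
// form projects/{project_id}/datasets/{dataset_id}/evaluations/{evaluation_id}.
function parseEvaluationName(
  name: string,
): { projectId: string; datasetId: string; evaluationId: string } | undefined {
  const match = name.match(
    /^projects\/([^/]+)\/datasets\/([^/]+)\/evaluations\/([^/]+)$/,
  );
  if (!match) return undefined;
  return { projectId: match[1], datasetId: match[2], evaluationId: match[3] };
}

// parseEvaluationName("projects/p/datasets/d/evaluations/e")
//   -> { projectId: "p", datasetId: "d", evaluationId: "e" }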