
GoogleCloudDatalabelingV1beta1EvaluationJob

import type { GoogleCloudDatalabelingV1beta1EvaluationJob } from "https://googleapis.deno.dev/v1/datalabeling:v1beta1.ts";

Defines an evaluation job that runs periodically to generate Evaluations. Creating an evaluation job is the starting point for using continuous evaluation.

interface GoogleCloudDatalabelingV1beta1EvaluationJob {
  annotationSpecSet?: string;
  attempts?: GoogleCloudDatalabelingV1beta1Attempt[];
  createTime?: Date;
  description?: string;
  evaluationJobConfig?: GoogleCloudDatalabelingV1beta1EvaluationJobConfig;
  labelMissingGroundTruth?: boolean;
  modelVersion?: string;
  name?: string;
  schedule?: string;
  state?:
    | "STATE_UNSPECIFIED"
    | "SCHEDULED"
    | "RUNNING"
    | "PAUSED"
    | "STOPPED";
}
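As a minimal sketch of how a value conforming to this interface might be assembled: the interface is restated locally so the snippet runs without the remote import, and every resource name, project ID, and description below is a hypothetical example, not a real resource.

```typescript
// Local restatement of the interface so the sketch is self-contained.
interface EvaluationJob {
  annotationSpecSet?: string;
  createTime?: Date;
  description?: string;
  labelMissingGroundTruth?: boolean;
  modelVersion?: string;
  name?: string;
  schedule?: string;
  state?: "STATE_UNSPECIFIED" | "SCHEDULED" | "RUNNING" | "PAUSED" | "STOPPED";
}

// Hypothetical evaluation job: ground truth comes from the job's own
// BigQuery table, so labelMissingGroundTruth is false.
const job: EvaluationJob = {
  annotationSpecSet: "projects/my-project/annotationSpecSets/my-spec-set",
  description: "Continuous evaluation for the fraud model",
  labelMissingGroundTruth: false,
  modelVersion: "projects/my-project/models/fraud/versions/v3",
  schedule: "every 2 days", // only the interval is used, not the time of day
};

console.log(job.modelVersion);
```

Note that `name`, `createTime`, and `state` are output-only, so a job you construct before creation would typically omit them, as above.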

Properties

annotationSpecSet?: string

Required. Name of the AnnotationSpecSet describing all the labels that your machine learning model outputs. You must create this resource before you create an evaluation job and provide its name in the following format: "projects/{project_id}/annotationSpecSets/{annotation_spec_set_id}"
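A hypothetical helper (not part of the API) that assembles the resource name in the documented format:

```typescript
// Builds the annotationSpecSet resource name described above.
// Both parameters are caller-supplied identifiers.
function annotationSpecSetName(projectId: string, specSetId: string): string {
  return `projects/${projectId}/annotationSpecSets/${specSetId}`;
}

console.log(annotationSpecSetName("my-project", "sentiment-labels"));
// "projects/my-project/annotationSpecSets/sentiment-labels"
```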

attempts?: GoogleCloudDatalabelingV1beta1Attempt[]

Output only. Every time the evaluation job runs and an error occurs, the failed attempt is appended to this array.

createTime?: Date

Output only. Timestamp of when this evaluation job was created.

description?: string

Required. Description of the job. The description can be up to 25,000 characters long.

evaluationJobConfig?: GoogleCloudDatalabelingV1beta1EvaluationJobConfig

Required. Configuration details for the evaluation job.

labelMissingGroundTruth?: boolean

Required. Whether you want Data Labeling Service to provide ground truth labels for prediction input. If you want the service to assign human labelers to annotate your data, set this to true. If you want to provide your own ground truth labels in the evaluation job's BigQuery table, set this to false.

modelVersion?: string

Required. The AI Platform Prediction model version to be evaluated. Prediction input and output is sampled from this model version. When creating an evaluation job, specify the model version in the following format: "projects/{project_id}/models/{model_name}/versions/{version_name}" There can only be one evaluation job per model version.

name?: string

Output only. After you create a job, Data Labeling Service assigns a name to the job with the following format: "projects/{project_id}/evaluationJobs/{evaluation_job_id}"

schedule?: string

Required. Describes the interval at which the job runs. This interval must be at least 1 day, and it is rounded to the nearest day. For example, if you specify a 50-hour interval, the job runs every 2 days. You can provide the schedule in crontab format or in an English-like format. Regardless of what you specify, the job will run at 10:00 AM UTC. Only the interval from this schedule is used, not the specific time of day.
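The rounding rule above can be sketched as follows; this helper is purely illustrative (it is not part of the API, and the exact tie-breaking behavior for half-day intervals is an assumption):

```typescript
// Rounds a requested interval in hours to the nearest whole day,
// clamped to the documented 1-day minimum.
function effectiveIntervalDays(hours: number): number {
  return Math.max(1, Math.round(hours / 24));
}

console.log(effectiveIntervalDays(50)); // 2 — a 50-hour interval runs every 2 days
console.log(effectiveIntervalDays(10)); // 1 — clamped to the 1-day minimum
```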

state?: "STATE_UNSPECIFIED" | "SCHEDULED" | "RUNNING" | "PAUSED" | "STOPPED"

Output only. Describes the current state of the job.