
GoogleCloudAiplatformV1SchemaModelevaluationMetricsClassificationEvaluationMetricsConfidenceMetrics

import type { GoogleCloudAiplatformV1SchemaModelevaluationMetricsClassificationEvaluationMetricsConfidenceMetrics } from "https://googleapis.deno.dev/v1/aiplatform:v1.ts";
interface GoogleCloudAiplatformV1SchemaModelevaluationMetricsClassificationEvaluationMetricsConfidenceMetrics {
  confidenceThreshold?: number;
  confusionMatrix?: GoogleCloudAiplatformV1SchemaModelevaluationMetricsConfusionMatrix;
  f1Score?: number;
  f1ScoreAt1?: number;
  f1ScoreMacro?: number;
  f1ScoreMicro?: number;
  falseNegativeCount?: bigint;
  falsePositiveCount?: bigint;
  falsePositiveRate?: number;
  falsePositiveRateAt1?: number;
  maxPredictions?: number;
  precision?: number;
  precisionAt1?: number;
  recall?: number;
  recallAt1?: number;
  trueNegativeCount?: bigint;
  truePositiveCount?: bigint;
}
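
For orientation, here is a minimal sketch of a value conforming to this interface. The numbers are illustrative and chosen only to be mutually consistent; they are not taken from a real evaluation. Note that the count fields use bigint, while thresholds, rates, and scores are plain numbers.

// Illustrative sample only; not produced by the API.
const sample: GoogleCloudAiplatformV1SchemaModelevaluationMetricsClassificationEvaluationMetricsConfidenceMetrics = {
  confidenceThreshold: 0.5,
  maxPredictions: 1,
  truePositiveCount: 90n,
  falsePositiveCount: 10n,
  falseNegativeCount: 15n,
  trueNegativeCount: 885n,
  precision: 0.9,           // 90 / (90 + 10)
  recall: 0.857,            // 90 / (90 + 15), rounded
  falsePositiveRate: 0.011, // 10 / (10 + 885), rounded
  f1Score: 0.878,           // harmonic mean of precision and recall, rounded
};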

Properties

confidenceThreshold?: number

Metrics are computed with an assumption that the Model never returns predictions with score lower than this value.

confusionMatrix?: GoogleCloudAiplatformV1SchemaModelevaluationMetricsConfusionMatrix

Confusion matrix of the evaluation for this confidence_threshold.

f1Score?: number

The harmonic mean of recall and precision. For summary metrics, it computes the micro-averaged F1 score.
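
The relationship described above can be restated as a small sketch; the helper below is illustrative and not part of this API.

// F1 as the harmonic mean of precision and recall.
function f1(precision: number, recall: number): number {
  return precision + recall === 0 ? 0 : (2 * precision * recall) / (precision + recall);
}

f1(0.9, 0.857); // ≈ 0.878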

f1ScoreAt1?: number

The harmonic mean of recallAt1 and precisionAt1.

f1ScoreMacro?: number

Macro-averaged F1 Score.

f1ScoreMicro?: number

Micro-averaged F1 Score.
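
The difference between the macro- and micro-averaged scores can be sketched as follows. The per-class counts and helpers are hypothetical and only illustrate the averaging; they are not part of this API.

// Hypothetical per-class confusion counts, used only to illustrate the averaging.
interface ClassCounts {
  tp: number;
  fp: number;
  fn: number;
}

function f1FromCounts({ tp, fp, fn }: ClassCounts): number {
  const denom = 2 * tp + fp + fn;
  return denom === 0 ? 0 : (2 * tp) / denom;
}

// Macro-averaged F1: compute F1 per class, then average the scores equally.
function f1Macro(classes: ClassCounts[]): number {
  return classes.map(f1FromCounts).reduce((a, b) => a + b, 0) / classes.length;
}

// Micro-averaged F1: pool the counts across classes, then compute a single F1.
function f1Micro(classes: ClassCounts[]): number {
  const pooled = classes.reduce(
    (acc, c) => ({ tp: acc.tp + c.tp, fp: acc.fp + c.fp, fn: acc.fn + c.fn }),
    { tp: 0, fp: 0, fn: 0 },
  );
  return f1FromCounts(pooled);
}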

falseNegativeCount?: bigint

The number of ground truth labels that are not matched by a Model-created label.

falsePositiveCount?: bigint

The number of Model-created labels that do not match a ground truth label.

falsePositiveRate?: number

False Positive Rate for the given confidence threshold.

falsePositiveRateAt1?: number

The False Positive Rate when only considering the label that has the highest prediction score and not below the confidence threshold for each DataItem.

maxPredictions?: number

Metrics are computed with an assumption that the Model always returns at most this many predictions (ordered by their score, in descending order), but they all still need to meet the confidenceThreshold.
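
Taken together with confidenceThreshold, this assumption can be sketched as follows; the helper and its input shape are hypothetical and only illustrate how the two fields restrict the prediction set the metrics are computed over.

// Hypothetical helper: the prediction set the metrics assume, given both fields.
function effectivePredictions(
  predictions: { label: string; score: number }[],
  confidenceThreshold: number,
  maxPredictions: number,
): { label: string; score: number }[] {
  return predictions
    .filter((p) => p.score >= confidenceThreshold) // never below the threshold
    .sort((a, b) => b.score - a.score)             // ordered by score, descending
    .slice(0, maxPredictions);                     // at most this many predictions
}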

precision?: number

Precision for the given confidence threshold.

precisionAt1?: number

The precision when only considering the label that has the highest prediction score and not below the confidence threshold for each DataItem.

recall?: number

Recall (True Positive Rate) for the given confidence threshold.

recallAt1?: number

The Recall (True Positive Rate) when only considering the label that has the highest prediction score and not below the confidence threshold for each DataItem.
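
The "@1" metrics above share the same selection rule: for each DataItem, only the single highest-scoring label is considered, and only if its score is not below the confidence threshold. A hedged sketch, assuming a simplified single-label setting and hypothetical input shapes:

// Hypothetical input shape, for illustration only.
interface ScoredLabel {
  label: string;
  score: number;
}

// The single label considered per DataItem, or undefined if the top score is below the threshold.
function topLabelAt1(predictions: ScoredLabel[], confidenceThreshold: number): string | undefined {
  if (predictions.length === 0) return undefined;
  const best = predictions.reduce((a, b) => (b.score > a.score ? b : a));
  return best.score >= confidenceThreshold ? best.label : undefined;
}

// recall@1 in a simplified single-label setting: the fraction of items whose
// considered label matches the ground truth.
function recallAt1(
  items: { groundTruth: string; predictions: ScoredLabel[] }[],
  confidenceThreshold: number,
): number {
  if (items.length === 0) return 0;
  const hits = items.filter(
    (item) => topLabelAt1(item.predictions, confidenceThreshold) === item.groundTruth,
  ).length;
  return hits / items.length;
}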

trueNegativeCount?: bigint

The number of labels that were not created by the Model but, had they been created, would not have matched a ground truth label.

truePositiveCount?: bigint

The number of Model-created labels that match a ground truth label.
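
The count fields and the rate fields above are related in the usual way; the helper below merely restates those relationships and is not part of this API.

// Illustrative only: the standard relationships between the counts and the rates.
function ratesFromCounts(
  m: GoogleCloudAiplatformV1SchemaModelevaluationMetricsClassificationEvaluationMetricsConfidenceMetrics,
) {
  const tp = Number(m.truePositiveCount ?? 0n);
  const fp = Number(m.falsePositiveCount ?? 0n);
  const fn = Number(m.falseNegativeCount ?? 0n);
  const tn = Number(m.trueNegativeCount ?? 0n);
  return {
    precision: tp + fp === 0 ? 0 : tp / (tp + fp),
    recall: tp + fn === 0 ? 0 : tp / (tp + fn),            // also the True Positive Rate
    falsePositiveRate: fp + tn === 0 ? 0 : fp / (fp + tn),
  };
}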