XPSConfidenceMetricsEntry

import type { XPSConfidenceMetricsEntry } from "https://googleapis.deno.dev/v1/language:v2.ts";

ConfidenceMetricsEntry includes generic metrics such as precision, recall, and F1 score.

interface XPSConfidenceMetricsEntry {
  confidenceThreshold?: number;
  f1Score?: number;
  f1ScoreAt1?: number;
  falseNegativeCount?: bigint;
  falsePositiveCount?: bigint;
  falsePositiveRate?: number;
  falsePositiveRateAt1?: number;
  positionThreshold?: number;
  precision?: number;
  precisionAt1?: number;
  recall?: number;
  recallAt1?: number;
  trueNegativeCount?: bigint;
  truePositiveCount?: bigint;
}
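
The count fields relate to the ratio fields through the standard precision/recall definitions. Below is a minimal sketch of recomputing the derived metrics from the raw counts; the entry values are illustrative assumptions, not real API output.

import type { XPSConfidenceMetricsEntry } from "https://googleapis.deno.dev/v1/language:v2.ts";

// Hypothetical entry with illustrative counts.
const entry: XPSConfidenceMetricsEntry = {
  confidenceThreshold: 0.5,
  truePositiveCount: 80n,
  falsePositiveCount: 20n,
  falseNegativeCount: 10n,
  trueNegativeCount: 90n,
};

// The count fields are bigint; convert before computing ratios.
const tp = Number(entry.truePositiveCount ?? 0n);
const fp = Number(entry.falsePositiveCount ?? 0n);
const fnCount = Number(entry.falseNegativeCount ?? 0n);

const precision = tp / (tp + fp);                           // 0.8
const recall = tp / (tp + fnCount);                         // ~0.889
const f1 = (2 * precision * recall) / (precision + recall); // ~0.842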

Properties

confidenceThreshold?: number

Metrics are computed under the assumption that the model never returns predictions with a score lower than this value.

f1Score?: number

The harmonic mean of recall and precision.
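
Equivalently, a one-line sketch of the harmonic-mean form, reusing the precision and recall values from the sketch above:

// Harmonic mean of recall and precision; algebraically the same as
// 2 * precision * recall / (precision + recall).
const f1Score = 2 / (1 / precision + 1 / recall);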

f1ScoreAt1?: number

The harmonic mean of recall_at1 and precision_at1.

falseNegativeCount?: bigint

The number of ground truth labels that are not matched by a model-created label.

falsePositiveCount?: bigint

The number of model-created labels that do not match a ground truth label.

falsePositiveRate?: number

False Positive Rate for the given confidence threshold.
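
A sketch of the standard definition, reusing the illustrative counts from the first example above:

// FPR = FP / (FP + TN), taken at the same confidence threshold.
const tn = Number(entry.trueNegativeCount ?? 0n);
const falsePositiveRate = fp / (fp + tn); // 20 / 110 ~ 0.182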

falsePositiveRateAt1?: number

The False Positive Rate when, for each example, only the label with the highest prediction score is considered, and only if that score meets the confidence threshold.

positionThreshold?: number

Metrics are computed under the assumption that the model always returns at most this many predictions (ordered by score, descending), all of which must still meet the confidence_threshold.
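
In other words, position_threshold and confidence_threshold act as a two-stage filter on each example's predictions. A minimal sketch of that filtering, with an assumed prediction shape that is not part of this API:

interface Prediction {
  label: string;
  score: number;
}

// Keep at most `positionThreshold` predictions by descending score,
// then drop any whose score falls below `confidenceThreshold`.
function filterPredictions(
  predictions: Prediction[],
  positionThreshold: number,
  confidenceThreshold: number,
): Prediction[] {
  return [...predictions]
    .sort((a, b) => b.score - a.score)
    .slice(0, positionThreshold)
    .filter((p) => p.score >= confidenceThreshold);
}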

precision?: number

Precision for the given confidence threshold.

precisionAt1?: number

The precision when, for each example, only the label with the highest prediction score is considered, and only if that score meets the confidence threshold.

recall?: number

Recall (true positive rate) for the given confidence threshold.

recallAt1?: number

The recall (true positive rate) when, for each example, only the label with the highest prediction score is considered, and only if that score meets the confidence threshold.
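
All of the *At1 fields share this selection rule: per example, keep only the single highest-scoring label, and only if its score meets the confidence threshold. A sketch of that rule, with assumed, illustrative types:

interface Prediction {
  label: string;
  score: number;
}

// Returns the top-scoring prediction for one example, or undefined
// if there are no predictions or the best score is below threshold.
function topLabelAt1(
  predictions: Prediction[],
  confidenceThreshold: number,
): Prediction | undefined {
  if (predictions.length === 0) return undefined;
  const top = predictions.reduce((best, p) => (p.score > best.score ? p : best));
  return top.score >= confidenceThreshold ? top : undefined;
}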

trueNegativeCount?: bigint

The number of labels that the model did not create and that, had the model created them, would not have matched a ground truth label.

truePositiveCount?: bigint

The number of model-created labels that match a ground truth label.