GetFaceDetectionResponse

import type { GetFaceDetectionResponse } from "https://aws-api.deno.dev/v0.4/services/rekognition.ts?docs=full";
interface GetFaceDetectionResponse {
  Faces?: FaceDetection[] | null;
  JobStatus?: VideoJobStatus | null;
  NextToken?: string | null;
  StatusMessage?: string | null;
  VideoMetadata?: VideoMetadata | null;
}

§Properties

§
Faces?: FaceDetection[] | null
[src]

An array of faces detected in the video. Each element contains a detected face's details and the time, in milliseconds from the start of the video, the face was detected.

§
JobStatus?: VideoJobStatus | null
[src]

The current status of the face detection job.

§
NextToken?: string | null
[src]

If the response is truncated, Amazon Rekognition returns this token that you can use in the subsequent request to retrieve the next set of faces.
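To illustrate the NextToken flow, here is a minimal pagination sketch. The local interface stand-ins and the `fetchPage` callback are assumptions for illustration; in practice `fetchPage` would wrap a real call such as the module's GetFaceDetection operation with a `JobId` and the current token.

```typescript
// Local stand-ins for the response shapes (assumed for this sketch; the
// real types come from the rekognition.ts module).
interface FaceDetection {
  Timestamp?: number;
}
interface GetFaceDetectionResponse {
  Faces?: FaceDetection[] | null;
  NextToken?: string | null;
}

// Collect every face across paginated responses. `fetchPage` is a
// hypothetical wrapper around the GetFaceDetection call, taking the
// token from the previous page (undefined on the first request).
async function collectAllFaces(
  fetchPage: (nextToken?: string) => Promise<GetFaceDetectionResponse>,
): Promise<FaceDetection[]> {
  const faces: FaceDetection[] = [];
  let token: string | undefined;
  do {
    const page = await fetchPage(token);
    faces.push(...(page.Faces ?? []));
    token = page.NextToken ?? undefined;
  } while (token);
  return faces;
}
```

The loop stops once a page comes back without a NextToken, which is how Amazon Rekognition signals the final page of results.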

§
StatusMessage?: string | null
[src]

If the job fails, StatusMessage provides a descriptive error message.

§
VideoMetadata?: VideoMetadata | null
[src]

Information about a video that Amazon Rekognition Video analyzed. VideoMetadata is returned in every page of paginated responses from an Amazon Rekognition Video operation.