Usage

import * as mod from "https://aws-api.deno.dev/v0.4/services/rekognition.ts?docs=full";
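
For example, here is a minimal sketch of calling the service, assuming the ApiFactory client from deno.land/x/aws_api and AWS credentials available in the environment; the bucket and object names are placeholders:

import { ApiFactory } from "https://deno.land/x/aws_api/client/mod.ts";
import { Rekognition } from "https://aws-api.deno.dev/v0.4/services/rekognition.ts";

const rekognition = new ApiFactory().makeNew(Rekognition);

// Detect up to 10 labels in an image already stored in S3.
const result = await rekognition.detectLabels({
  Image: { S3Object: { Bucket: "my-bucket", Name: "photo.jpg" } },
  MaxLabels: 10,
  MinConfidence: 75,
});
for (const label of result.Labels ?? []) {
  console.log(label.Name, label.Confidence);
}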

§Classes

Rekognition

§Interfaces

AgeRange

Structure containing the estimated age range, in years, for a face.

Asset

Assets are the images that you use to train and evaluate a model version. Assets can also contain validation information that you use to debug a failed model training.

AudioMetadata

Metadata information about an audio stream. An array of AudioMetadata objects for the audio streams found in a stored video is returned by "GetSegmentDetection".

Beard

Indicates whether or not the face has a beard, and the confidence level in the determination.

BlackFrame

A filter that allows you to control the black frame detection by specifying the black levels and pixel coverage of black pixels in a frame. As videos can come from multiple sources, formats, and time periods, they may contain different standards and varying noise levels for black frames that need to be accounted for. For more information, see "StartSegmentDetection".

BoundingBox

Identifies the bounding box around the label, face, text, object of interest, or personal protective equipment. The left (x-coordinate) and top (y-coordinate) are coordinates representing the top and left sides of the bounding box. Note that the upper-left corner of the image is the origin (0,0).

Celebrity

Provides information about a celebrity recognized by the "RecognizeCelebrities" operation.

CelebrityDetail

Information about a recognized celebrity.

CelebrityRecognition

Information about a detected celebrity and the time the celebrity was detected in a stored video. For more information, see GetCelebrityRecognition in the Amazon Rekognition Developer Guide.

ComparedFace

Provides face metadata for target image faces that are analyzed by CompareFaces and RecognizeCelebrities.

ComparedSourceImageFace

Type that describes the face Amazon Rekognition chose to compare with the faces in the target. This contains a bounding box for the selected face and confidence level that the bounding box contains a face. Note that Amazon Rekognition selects the largest face in the source image for this comparison.

CompareFacesMatch

Provides information about a face in a target image that matches the source image face analyzed by CompareFaces. The Face property contains the bounding box of the face in the target image. The Similarity property is the confidence that the source image face matches the face in the bounding box.
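
As a hedged sketch of how those two properties are read, assuming a client built as in the Usage section (bucket and key names are placeholders):

import { ApiFactory } from "https://deno.land/x/aws_api/client/mod.ts";
import { Rekognition } from "https://aws-api.deno.dev/v0.4/services/rekognition.ts";

const rekognition = new ApiFactory().makeNew(Rekognition);
const comparison = await rekognition.compareFaces({
  SourceImage: { S3Object: { Bucket: "my-bucket", Name: "source.jpg" } },
  TargetImage: { S3Object: { Bucket: "my-bucket", Name: "target.jpg" } },
  SimilarityThreshold: 90, // only return matches at or above 90% similarity
});
for (const match of comparison.FaceMatches ?? []) {
  console.log(match.Similarity, match.Face?.BoundingBox);
}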

CompareFacesRequest
CompareFacesResponse
ConnectedHomeSettings

Label detection settings to use on a streaming video. Defining the settings is required in the request parameter for "CreateStreamProcessor". Including this setting in the CreateStreamProcessor request enables you to use the stream processor for label detection. You can then select what you want the stream processor to detect, such as people or pets. When the stream processor has started, one notification is sent for each object class specified. For example, if packages and pets are selected, one SNS notification is published the first time a package is detected and one SNS notification is published the first time a pet is detected, as well as an end-of-session summary.
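
A sketch of the settings shape; PERSON and PACKAGE are two of the label classes mentioned above, and the confidence value is illustrative:

import type { ConnectedHomeSettings } from "https://aws-api.deno.dev/v0.4/services/rekognition.ts";

const connectedHome: ConnectedHomeSettings = {
  Labels: ["PERSON", "PACKAGE"], // notify on people and packages
  MinConfidence: 80,             // minimum detection confidence, as a percentage
};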

ConnectedHomeSettingsForUpdate

The label detection settings you want to use in your stream processor. This includes the labels you want the stream processor to detect and the minimum confidence level allowed to label objects.

ContentModerationDetection

Information about an inappropriate, unwanted, or offensive content label detection in a stored video.

CopyProjectVersionRequest
CopyProjectVersionResponse
CoversBodyPart

Information about an item of Personal Protective Equipment covering a corresponding body part. For more information, see "DetectProtectiveEquipment".

CreateCollectionRequest
CreateCollectionResponse
CreateDatasetRequest
CreateDatasetResponse
CreateProjectRequest
CreateProjectResponse
CreateProjectVersionRequest
CreateProjectVersionResponse
CreateStreamProcessorRequest
CreateStreamProcessorResponse
CustomLabel

A custom label detected in an image by a call to "DetectCustomLabels".

DatasetChanges

Describes updates or additions to a dataset. A single update or addition is an entry (JSON Line) that provides information about a single image. To update an existing entry, you match the source-ref field of the update entry with the source-ref field of the entry that you want to update. If the source-ref field doesn't match an existing entry, the entry is added to the dataset as a new entry.
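
A hedged sketch of submitting one such JSON Line through updateDatasetEntries, assuming a client built as in the Usage section and that the GroundTruth blob accepts encoded bytes; the ARN and image URI are placeholders:

import { ApiFactory } from "https://deno.land/x/aws_api/client/mod.ts";
import { Rekognition } from "https://aws-api.deno.dev/v0.4/services/rekognition.ts";

const rekognition = new ApiFactory().makeNew(Rekognition);
const entry = JSON.stringify({
  "source-ref": "s3://my-bucket/images/photo-1.jpg",
  // label attributes for this image would follow the Ground Truth schema
});
await rekognition.updateDatasetEntries({
  DatasetArn: "arn:aws:rekognition:us-east-1:111122223333:project/my-project/dataset/train/1",
  Changes: { GroundTruth: new TextEncoder().encode(entry) },
});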

DatasetDescription

A description for a dataset. For more information, see "DescribeDataset".

DatasetLabelDescription

Describes a dataset label. For more information, see "ListDatasetLabels".

DatasetLabelStats

Statistics about a label used in a dataset. For more information, see "DatasetLabelDescription".

DatasetMetadata

Summary information for an Amazon Rekognition Custom Labels dataset. For more information, see "ProjectDescription".

DatasetSource

The source that Amazon Rekognition Custom Labels uses to create a dataset. To use an Amazon SageMaker format manifest file, specify the S3 bucket location in the GroundTruthManifest field. The S3 bucket must be in your AWS account. To create a copy of an existing dataset, specify the Amazon Resource Name (ARN) of an existing dataset in DatasetArn.

DatasetStats

Provides statistics about a dataset. For more information, see "DescribeDataset".

DeleteCollectionRequest
DeleteCollectionResponse
DeleteDatasetRequest
DeleteFacesRequest
DeleteFacesResponse
DeleteProjectPolicyRequest
DeleteProjectRequest
DeleteProjectResponse
DeleteProjectVersionRequest
DeleteProjectVersionResponse
DeleteStreamProcessorRequest
DescribeCollectionRequest
DescribeCollectionResponse
DescribeDatasetRequest
DescribeDatasetResponse
DescribeProjectsRequest
DescribeProjectsResponse
DescribeProjectVersionsRequest
DescribeProjectVersionsResponse
DescribeStreamProcessorRequest
DescribeStreamProcessorResponse
DetectCustomLabelsRequest
DetectCustomLabelsResponse
DetectFacesRequest
DetectFacesResponse
DetectionFilter

A set of parameters that allow you to filter out certain results from your returned results.

DetectLabelsImageBackground

The background of the image with regard to image quality and dominant colors.

DetectLabelsImageForeground

The foreground of the image with regard to image quality and dominant colors.

DetectLabelsImageProperties

Information about the quality and dominant colors of an input image. Quality and color information is returned for the entire image, foreground, and background.

DetectLabelsImagePropertiesSettings

Settings for the IMAGE_PROPERTIES feature type.

DetectLabelsImageQuality

The quality of an image provided for label detection, with regard to brightness, sharpness, and contrast.

DetectLabelsRequest
DetectLabelsResponse
DetectLabelsSettings

Settings for the DetectLabels request. Settings can include filters for both GENERAL_LABELS and IMAGE_PROPERTIES. GENERAL_LABELS filters can be inclusive or exclusive and applied to individual labels or label categories. IMAGE_PROPERTIES filters allow specification of a maximum number of dominant colors.

DetectModerationLabelsRequest
DetectModerationLabelsResponse
DetectProtectiveEquipmentRequest
DetectProtectiveEquipmentResponse
DetectTextFilters

A set of optional parameters that you can use to set the criteria that the text must meet to be included in your response. WordFilter looks at a word’s height, width, and minimum confidence. RegionOfInterest lets you set a specific region of the image to look for text in.
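
A sketch of the filter shape; the field names follow the AWS API and the threshold values are illustrative:

import type { DetectTextFilters } from "https://aws-api.deno.dev/v0.4/services/rekognition.ts";

const filters: DetectTextFilters = {
  WordFilter: {
    MinConfidence: 80,          // drop words below 80% confidence
    MinBoundingBoxHeight: 0.05, // as a ratio of image height
    MinBoundingBoxWidth: 0.02,  // as a ratio of image width
  },
  RegionsOfInterest: [
    { BoundingBox: { Left: 0.1, Top: 0.1, Width: 0.5, Height: 0.3 } },
  ],
};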

DetectTextRequest
DetectTextResponse
DistributeDataset

A training dataset or a test dataset used in a dataset distribution operation. For more information, see "DistributeDatasetEntries".

DistributeDatasetEntriesRequest
DominantColor

A description of the dominant colors in an image.

Emotion

The emotions that appear to be expressed on the face, and the confidence level in the determination. The API is only making a determination of the physical appearance of a person's face. It is not a determination of the person’s internal emotional state and should not be used in such a way. For example, a person pretending to have a sad face might not be sad emotionally.

EquipmentDetection

Information about an item of Personal Protective Equipment (PPE) detected by "DetectProtectiveEquipment". For more information, see "DetectProtectiveEquipment".

EvaluationResult

The evaluation results for the training of a model.

Eyeglasses

Indicates whether or not the face is wearing eye glasses, and the confidence level in the determination.

EyeOpen

Indicates whether or not the eyes on the face are open, and the confidence level in the determination.

Face

Describes the face properties such as the bounding box, face ID, image ID of the input image, and external image ID that you assigned.

FaceDetail

Structure containing attributes of the face that the algorithm detected.

FaceDetection

Information about a face detected in a video analysis request and the time the face was detected in the video.

FaceMatch

Provides face metadata. In addition, it also provides the confidence in the match of this face with the input face.

FaceRecord

Object containing both the face metadata (stored in the backend database), and facial attributes that are detected but aren't stored in the database.

FaceSearchSettings

Input face recognition parameters for an Amazon Rekognition stream processor. Includes the collection to use for face recognition and the face attributes to detect. Defining the settings is required in the request parameter for "CreateStreamProcessor".

Gender

The predicted gender of a detected face.

GeneralLabelsSettings

Contains filters for the object labels returned by DetectLabels. Filters can be inclusive, exclusive, or a combination of both and can be applied to individual labels or entire label categories.

Geometry

Information about where an object ("DetectCustomLabels") or text ("DetectText") is located on an image.

GetCelebrityInfoRequest
GetCelebrityInfoResponse
GetCelebrityRecognitionRequest
GetCelebrityRecognitionResponse
GetContentModerationRequest
GetContentModerationResponse
GetFaceDetectionRequest
GetFaceDetectionResponse
GetFaceSearchRequest
GetFaceSearchResponse
GetLabelDetectionRequest
GetLabelDetectionResponse
GetPersonTrackingRequest
GetPersonTrackingResponse
GetSegmentDetectionRequest
GetSegmentDetectionResponse
GetTextDetectionRequest
GetTextDetectionResponse
GroundTruthManifest

The S3 bucket that contains an Amazon SageMaker Ground Truth format manifest file.

HumanLoopActivationOutput

Shows the results of the human in the loop evaluation. If there is no HumanLoopArn, the input did not trigger human review.

HumanLoopConfig

Sets up the flow definition the image will be sent to if one of the conditions is met. You can also set certain attributes of the image before review.

HumanLoopDataAttributes

Allows you to set attributes of the image. Currently, you can declare an image as free of personally identifiable information.

Image

Provides the input image either as bytes or an S3 object.

ImageQuality

Identifies face image brightness and sharpness.

IndexFacesRequest
IndexFacesResponse
Instance

An instance of a label returned by Amazon Rekognition Image ("DetectLabels") or by Amazon Rekognition Video ("GetLabelDetection").

KinesisDataStream

The Kinesis data stream to which the analysis results of an Amazon Rekognition stream processor are streamed. For more information, see CreateStreamProcessor in the Amazon Rekognition Developer Guide.

KinesisVideoStream

Kinesis video stream that provides the source streaming video for an Amazon Rekognition Video stream processor. For more information, see CreateStreamProcessor in the Amazon Rekognition Developer Guide.

KinesisVideoStreamStartSelector

Specifies the starting point in a Kinesis stream to start processing. You can use the producer timestamp or the fragment number. One of either producer timestamp or fragment number is required. If you use the producer timestamp, you must put the time in milliseconds. For more information about fragment numbers, see Fragment.
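
A minimal sketch, assuming the ProducerTimestamp and FragmentNumber field names from the AWS API; set exactly one of the two:

import type { KinesisVideoStreamStartSelector } from "https://aws-api.deno.dev/v0.4/services/rekognition.ts";

const startSelector: KinesisVideoStreamStartSelector = {
  ProducerTimestamp: Date.now() - 10_000, // in milliseconds; start ten seconds back
  // or instead: FragmentNumber: "<fragment number from the stream>"
};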

KnownGender

The known gender identity for the celebrity that matches the provided ID. The known gender identity can be Male, Female, Nonbinary, or Unlisted.

Label

Structure containing details about the detected label, including the name, detected instances, parent labels, and level of confidence.

LabelAlias

A potential alias for a given label.

LabelCategory

The category that applies to a given label.

LabelDetection

Information about a label detected in a video analysis request and the time the label was detected in the video.

LabelDetectionSettings

Contains the specified filters that should be applied to a list of returned GENERAL_LABELS.

Landmark

Indicates the location of the landmark on the face.

ListCollectionsRequest
ListCollectionsResponse
ListDatasetEntriesRequest
ListDatasetEntriesResponse
ListDatasetLabelsRequest
ListDatasetLabelsResponse
ListFacesRequest
ListFacesResponse
ListProjectPoliciesRequest
ListProjectPoliciesResponse
ListStreamProcessorsRequest
ListStreamProcessorsResponse
ListTagsForResourceRequest
ListTagsForResourceResponse
ModerationLabel

Provides information about a single type of inappropriate, unwanted, or offensive content found in an image or video. Each type of moderated content has a label within a hierarchical taxonomy. For more information, see Content moderation in the Amazon Rekognition Developer Guide.

MouthOpen

Indicates whether or not the mouth on the face is open, and the confidence level in the determination.

Mustache

Indicates whether or not the face has a mustache, and the confidence level in the determination.

NotificationChannel

The Amazon Simple Notification Service topic to which Amazon Rekognition publishes the completion status of a video analysis operation. For more information, see Calling Amazon Rekognition Video operations. Note that the Amazon SNS topic must have a topic name that begins with AmazonRekognition if you are using the AmazonRekognitionServiceRole permissions policy to access the topic. For more information, see Giving access to multiple Amazon SNS topics.

OutputConfig

The S3 bucket and folder location where training output is placed.

Parent

A parent label for a label. A label can have 0, 1, or more parents.

PersonDetail

Details about a person detected in a video analysis request.

PersonDetection

Details and path tracking information for a single time a person's path is tracked in a video. Amazon Rekognition operations that track people's paths return an array of PersonDetection objects with elements for each time a person's path is tracked in a video.

PersonMatch

Information about a person whose face matches one or more faces in an Amazon Rekognition collection. Includes information about the faces in the Amazon Rekognition collection ("FaceMatch"), information about the person ("PersonDetail"), and the time stamp for when the person was detected in a video. An array of PersonMatch objects is returned by "GetFaceSearch".

Point

The X and Y coordinates of a point on an image or video frame. The X and Y values are ratios of the overall image size or video resolution. For example, if an input image is 700x200 and the values are X=0.5 and Y=0.25, then the point is at the (350,50) pixel coordinate on the image.
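
A small helper that reverses the math in the example above; a sketch, defaulting absent coordinates to 0:

import type { Point } from "https://aws-api.deno.dev/v0.4/services/rekognition.ts";

function toPixels(p: Point, width: number, height: number): [number, number] {
  return [Math.round((p.X ?? 0) * width), Math.round((p.Y ?? 0) * height)];
}

console.log(toPixels({ X: 0.5, Y: 0.25 }, 700, 200)); // [350, 50]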

Pose

Indicates the pose of the face as determined by its pitch, roll, and yaw.

ProjectDescription

A description of an Amazon Rekognition Custom Labels project. For more information, see "DescribeProjects".

ProjectPolicy

Describes a project policy in the response from "ListProjectPolicies".

ProjectVersionDescription

A description of a version of an Amazon Rekognition Custom Labels model.

ProtectiveEquipmentBodyPart

Information about a body part detected by "DetectProtectiveEquipment" that contains PPE. An array of ProtectiveEquipmentBodyPart objects is returned for each person detected by DetectProtectiveEquipment.

ProtectiveEquipmentPerson

A person detected by a call to "DetectProtectiveEquipment". The API returns all persons detected in the input image in an array of ProtectiveEquipmentPerson objects.

ProtectiveEquipmentSummarizationAttributes

Specifies summary attributes to return from a call to "DetectProtectiveEquipment". You can specify which types of PPE to summarize. You can also specify a minimum confidence value for detections. Summary information is returned in the Summary ("ProtectiveEquipmentSummary") field of the response from DetectProtectiveEquipment. The summary includes which persons in an image were detected wearing the requested types of personal protective equipment (PPE), which persons were detected as not wearing PPE, and the persons for whom a determination could not be made. For more information, see "ProtectiveEquipmentSummary".
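
A sketch of the attributes object; FACE_COVER, HAND_COVER, and HEAD_COVER are the PPE types named in the AWS API, and the confidence value is illustrative:

import type { ProtectiveEquipmentSummarizationAttributes } from "https://aws-api.deno.dev/v0.4/services/rekognition.ts";

const summarization: ProtectiveEquipmentSummarizationAttributes = {
  MinConfidence: 80,
  RequiredEquipmentTypes: ["FACE_COVER", "HEAD_COVER"],
};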

ProtectiveEquipmentSummary

Summary information for required items of personal protective equipment (PPE) detected on persons by a call to "DetectProtectiveEquipment". You specify the required type of PPE in the SummarizationAttributes ("ProtectiveEquipmentSummarizationAttributes") input parameter. The summary includes which persons were detected wearing the required personal protective equipment (PersonsWithRequiredEquipment), which persons were detected as not wearing the required PPE (PersonsWithoutRequiredEquipment), and the persons for whom a determination could not be made (PersonsIndeterminate).

PutProjectPolicyRequest
PutProjectPolicyResponse
RecognizeCelebritiesRequest
RecognizeCelebritiesResponse
RegionOfInterest

Specifies a location within the frame that Rekognition checks for objects of interest such as text, labels, or faces. It uses a BoundingBox or Polygon to set a region of the screen.

S3Destination

The Amazon S3 bucket location to which Amazon Rekognition publishes the detailed inference results of a video analysis operation. These results include the name of the stream processor resource, the session ID of the stream processing session, and labeled timestamps and bounding boxes for detected labels.

S3Object

Provides the S3 bucket name and object name.

SearchFacesByImageRequest
SearchFacesByImageResponse
SearchFacesRequest
SearchFacesResponse
SegmentDetection

A technical cue or shot detection segment detected in a video. An array of SegmentDetection objects containing all segments detected in a stored video is returned by "GetSegmentDetection".

SegmentTypeInfo

Information about the type of a segment requested in a call to "StartSegmentDetection". An array of SegmentTypeInfo objects is returned by the response from "GetSegmentDetection".

ShotSegment

Information about a shot detection segment detected in a video. For more information, see "SegmentDetection".

Smile

Indicates whether or not the face is smiling, and the confidence level in the determination.

StartCelebrityRecognitionRequest
StartCelebrityRecognitionResponse
StartContentModerationRequest
StartContentModerationResponse
StartFaceDetectionRequest
StartFaceDetectionResponse
StartFaceSearchRequest
StartFaceSearchResponse
StartLabelDetectionRequest
StartLabelDetectionResponse
StartPersonTrackingRequest
StartPersonTrackingResponse
StartProjectVersionRequest
StartProjectVersionResponse
StartSegmentDetectionFilters

Filters applied to the technical cue or shot detection segments. For more information, see "StartSegmentDetection".

StartSegmentDetectionRequest
StartSegmentDetectionResponse
StartShotDetectionFilter

Filters for the shot detection segments returned by GetSegmentDetection. For more information, see "StartSegmentDetectionFilters".

StartStreamProcessorRequest
StartStreamProcessorResponse
StartTechnicalCueDetectionFilter

Filters for the technical segments returned by "GetSegmentDetection". For more information, see "StartSegmentDetectionFilters".

StartTextDetectionFilters

Set of optional parameters that let you set the criteria text must meet to be included in your response. WordFilter looks at a word's height, width and minimum confidence. RegionOfInterest lets you set a specific region of the screen to look for text in.

StartTextDetectionRequest
StartTextDetectionResponse
StopProjectVersionRequest
StopProjectVersionResponse
StopStreamProcessorRequest
StreamProcessingStartSelector

This is a required parameter for label detection stream processors and should not be used to start a face search stream processor.

StreamProcessingStopSelector

Specifies when to stop processing the stream. You can specify a maximum amount of time to process the video.
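
A minimal sketch, assuming MaxDurationInSeconds is the field name as in the AWS API:

import type { StreamProcessingStopSelector } from "https://aws-api.deno.dev/v0.4/services/rekognition.ts";

const stopSelector: StreamProcessingStopSelector = {
  MaxDurationInSeconds: 120, // stop after two minutes of processing
};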

StreamProcessor

An object that recognizes faces or labels in a streaming video. An Amazon Rekognition stream processor is created by a call to "CreateStreamProcessor". The request parameters for CreateStreamProcessor describe the Kinesis video stream source for the streaming video, face recognition parameters, and where to stream the analysis results.

StreamProcessorDataSharingPreference

Allows you to opt in to, or out of, sharing data with Rekognition to improve model performance. You can choose this option at the account level or on a per-stream basis. Note that if you opt out at the account level, this setting is ignored on individual streams.

StreamProcessorInput

Information about the source streaming video.

StreamProcessorNotificationChannel

The Amazon Simple Notification Service topic to which Amazon Rekognition publishes the object detection results and completion status of a video analysis operation.

StreamProcessorOutput

Information about the Amazon Kinesis Data Streams stream to which an Amazon Rekognition Video stream processor streams the results of a video analysis. For more information, see CreateStreamProcessor in the Amazon Rekognition Developer Guide.

StreamProcessorSettings

Input parameters used in a streaming video analyzed by an Amazon Rekognition stream processor. You can use FaceSearch to recognize faces in a streaming video, or you can use ConnectedHome to detect labels.

StreamProcessorSettingsForUpdate

The stream processor settings that you want to update. ConnectedHome settings can be updated to detect different labels with a different minimum confidence.

Summary

The S3 bucket that contains the training summary. The training summary includes aggregated evaluation metrics for the entire testing dataset and metrics for each individual label.

Sunglasses

Indicates whether or not the face is wearing sunglasses, and the confidence level in the determination.

TagResourceRequest
TechnicalCueSegment

Information about a technical cue segment. For more information, see "SegmentDetection".

TestingData

The dataset used for testing. Optionally, if AutoCreate is set, Amazon Rekognition Custom Labels uses the training dataset to create a test dataset with a temporary split of the training dataset.

TestingDataResult

SageMaker Ground Truth format manifest files for the input, output, and validation datasets that are used and created during testing.

TextDetection

Information about a word or line of text detected by "DetectText".

TextDetectionResult

Information about text detected in a video. Includes the detected text, the time in milliseconds from the start of the video that the text was detected, and where it was detected on the screen.

TrainingData

The dataset used for training.

TrainingDataResult

SageMaker Ground Truth format manifest files for the input, output, and validation datasets that are used and created during training.

UnindexedFace

A face that "IndexFaces" detected, but didn't index. Use the Reasons response attribute to determine why a face wasn't indexed.

UntagResourceRequest
UpdateDatasetEntriesRequest
UpdateStreamProcessorRequest
ValidationData

Contains the Amazon S3 bucket location of the validation data for a model training job.

Video

Video file stored in an Amazon S3 bucket. Amazon Rekognition video start operations such as "StartLabelDetection" use Video to specify a video for analysis. The supported file formats are .mp4, .mov and .avi.
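
A hedged sketch of starting an asynchronous job against a stored video, assuming a client built as in the Usage section; the bucket and key are placeholders:

import { ApiFactory } from "https://deno.land/x/aws_api/client/mod.ts";
import { Rekognition } from "https://aws-api.deno.dev/v0.4/services/rekognition.ts";

const rekognition = new ApiFactory().makeNew(Rekognition);
const job = await rekognition.startLabelDetection({
  Video: { S3Object: { Bucket: "my-bucket", Name: "clip.mp4" } },
  MinConfidence: 70,
});
console.log(job.JobId); // poll getLabelDetection with this JobId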

VideoMetadata

Information about a video that Amazon Rekognition analyzed. Videometadata is returned in every page of paginated responses from an Amazon Rekognition video operation.

§Type Aliases

Attribute
BodyPart
CelebrityRecognitionSortBy
ContentClassifier
ContentModerationSortBy
DatasetStatus
DatasetStatusMessageCode
DatasetType
DetectLabelsFeatureName
EmotionName
FaceAttributes
FaceSearchSortBy
GenderType
KnownGenderType

A list of enum strings of possible gender values that Celebrity returns.

LabelDetectionAggregateBy
LabelDetectionFeatureName
LabelDetectionSortBy
LandmarkType
OrientationCorrection
PersonTrackingSortBy
ProjectStatus
ProjectVersionStatus
ProtectiveEquipmentType
QualityFilter
Reason
SegmentType
StreamProcessorParameterToDelete
StreamProcessorStatus
TechnicalCueType
TextTypes
VideoColorRange
VideoJobStatus