import * as mod from "https://aws-api.deno.dev/v0.1/services/rekognition.ts?docs=full";
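Below is a minimal usage sketch before the generated reference. It assumes the conventions from the `deno.land/x/aws_api` README: the service module's default export is the `Rekognition` client class, it is constructed through `ApiFactory` (which reads credentials and region from the environment), and operation methods are the camelCase forms of the API actions. The bucket and object names are placeholders.

```ts
import { ApiFactory } from "https://deno.land/x/aws_api/client/mod.ts";
import Rekognition from "https://aws-api.deno.dev/v0.1/services/rekognition.ts";

// ApiFactory picks up AWS credentials and region from the environment.
const rekognition = new ApiFactory().makeNew(Rekognition);

// DetectLabelsRequest: an Image (here an S3Object) plus optional limits.
const { Labels } = await rekognition.detectLabels({
  Image: { S3Object: { Bucket: "my-example-bucket", Name: "photo.jpg" } }, // placeholders
  MaxLabels: 10,
  MinConfidence: 75,
});

// Each Label carries a Name, a Confidence, Instances (with BoundingBox), and Parents.
for (const label of Labels ?? []) {
  console.log(label.Name, label.Confidence, label.Instances?.length ?? 0);
}
```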
Rekognition | The Amazon Rekognition API client class. |
AgeRange | Structure containing the estimated age range, in years, for a face. |
Asset | Assets are the images that you use to train and evaluate a model version. Assets can also contain validation information that you use to debug a failed model training. |
AudioMetadata | Metadata information about an audio stream. An array of "AudioMetadata" objects is returned by "GetSegmentDetection". |
Beard | Indicates whether or not the face has a beard, and the confidence level in the determination. |
BoundingBox | Identifies the bounding box around the label, face, text, or personal protective equipment. |
Celebrity | Provides information about a celebrity recognized by the "RecognizeCelebrities" operation. |
CelebrityDetail | Information about a recognized celebrity. |
CelebrityRecognition | Information about a detected celebrity and the time the celebrity was detected in a stored video. For more information, see GetCelebrityRecognition in the Amazon Rekognition Developer Guide. |
ComparedFace | Provides face metadata for target image faces that are analyzed by "CompareFaces" and "RecognizeCelebrities". |
ComparedSourceImageFace | Type that describes the face Amazon Rekognition chose to compare with the faces in the target. This contains a bounding box for the selected face and confidence level that the bounding box contains a face. Note that Amazon Rekognition selects the largest face in the source image for this comparison. |
CompareFacesMatch | Provides information about a face in a target image that matches the source image face analyzed by "CompareFaces". (See the CompareFaces sketch below.) |
CompareFacesRequest | |
CompareFacesResponse | |
ContentModerationDetection | Information about an unsafe content label detection in a stored video. |
CoversBodyPart | Information about an item of Personal Protective Equipment covering a corresponding body part. For more information, see "DetectProtectiveEquipment". |
CreateCollectionRequest | |
CreateCollectionResponse | |
CreateProjectRequest | |
CreateProjectResponse | |
CreateProjectVersionRequest | |
CreateProjectVersionResponse | |
CreateStreamProcessorRequest | |
CreateStreamProcessorResponse | |
CustomLabel | A custom label detected in an image by a call to "DetectCustomLabels". |
DeleteCollectionRequest | |
DeleteCollectionResponse | |
DeleteFacesRequest | |
DeleteFacesResponse | |
DeleteProjectRequest | |
DeleteProjectResponse | |
DeleteProjectVersionRequest | |
DeleteProjectVersionResponse | |
DeleteStreamProcessorRequest | |
DescribeCollectionRequest | |
DescribeCollectionResponse | |
DescribeProjectsRequest | |
DescribeProjectsResponse | |
DescribeProjectVersionsRequest | |
DescribeProjectVersionsResponse | |
DescribeStreamProcessorRequest | |
DescribeStreamProcessorResponse | |
DetectCustomLabelsRequest | |
DetectCustomLabelsResponse | |
DetectFacesRequest | |
DetectFacesResponse | |
DetectionFilter | A set of parameters that allow you to filter out certain results from your returned results. |
DetectLabelsRequest | |
DetectLabelsResponse | |
DetectModerationLabelsRequest | |
DetectModerationLabelsResponse | |
DetectProtectiveEquipmentRequest | |
DetectProtectiveEquipmentResponse | |
DetectTextFilters | A set of optional parameters that you can use to set the criteria that the text must meet to be included in your response. (See the DetectText sketch below.) |
DetectTextRequest | |
DetectTextResponse | |
Emotion | The emotions that appear to be expressed on the face, and the confidence level in the determination. The API is only making a determination of the physical appearance of a person's face. It is not a determination of the person’s internal emotional state and should not be used in such a way. For example, a person pretending to have a sad face might not be sad emotionally. |
EquipmentDetection | Information about an item of Personal Protective Equipment (PPE) detected by "DetectProtectiveEquipment". For more information, see "DetectProtectiveEquipment". |
EvaluationResult | The evaluation results for the training of a model. |
Eyeglasses | Indicates whether or not the face is wearing eye glasses, and the confidence level in the determination. |
EyeOpen | Indicates whether or not the eyes on the face are open, and the confidence level in the determination. |
Face | Describes the face properties such as the bounding box, face ID, image ID of the input image, and external image ID that you assigned. |
FaceDetail | Structure containing attributes of the face that the algorithm detected. |
FaceDetection | Information about a face detected in a video analysis request and the time the face was detected in the video. |
FaceMatch | Provides face metadata. In addition, it also provides the confidence in the match of this face with the input face. |
FaceRecord | Object containing both the face metadata (stored in the backend database), and facial attributes that are detected but aren't stored in the database. |
FaceSearchSettings | Input face recognition parameters for an Amazon Rekognition stream processor. |
Gender | The predicted gender of a detected face. |
Geometry | Information about where an object ("DetectCustomLabels") or text ("DetectText") is located on an image. |
GetCelebrityInfoRequest | |
GetCelebrityInfoResponse | |
GetCelebrityRecognitionRequest | |
GetCelebrityRecognitionResponse | |
GetContentModerationRequest | |
GetContentModerationResponse | |
GetFaceDetectionRequest | |
GetFaceDetectionResponse | |
GetFaceSearchRequest | |
GetFaceSearchResponse | |
GetLabelDetectionRequest | |
GetLabelDetectionResponse | |
GetPersonTrackingRequest | |
GetPersonTrackingResponse | |
GetSegmentDetectionRequest | |
GetSegmentDetectionResponse | |
GetTextDetectionRequest | |
GetTextDetectionResponse | |
GroundTruthManifest | The S3 bucket that contains an Amazon SageMaker Ground Truth format manifest file. |
HumanLoopActivationOutput | Shows the results of the human in the loop evaluation. If there is no HumanLoopArn, the input did not trigger human review. |
HumanLoopConfig | Sets up the flow definition the image will be sent to if one of the conditions is met. You can also set certain attributes of the image before review. |
HumanLoopDataAttributes | Allows you to set attributes of the image. Currently, you can declare an image as free of personally identifiable information. |
Image | Provides the input image either as bytes or an S3 object. |
ImageQuality | Identifies face image brightness and sharpness. |
IndexFacesRequest | |
IndexFacesResponse | |
Instance | An instance of a label returned by Amazon Rekognition Image ("DetectLabels") or by Amazon Rekognition Video ("GetLabelDetection"). |
KinesisDataStream | The Kinesis data stream to which Amazon Rekognition streams the analysis results of an Amazon Rekognition stream processor. For more information, see CreateStreamProcessor in the Amazon Rekognition Developer Guide. |
KinesisVideoStream | The Kinesis video stream that provides the source streaming video for an Amazon Rekognition Video stream processor. For more information, see CreateStreamProcessor in the Amazon Rekognition Developer Guide. |
Label | Structure containing details about the detected label, including the name, detected instances, parent labels, and level of confidence. |
LabelDetection | Information about a label detected in a video analysis request and the time the label was detected in the video. |
Landmark | Indicates the location of the landmark on the face. |
ListCollectionsRequest | |
ListCollectionsResponse | |
ListFacesRequest | |
ListFacesResponse | |
ListStreamProcessorsRequest | |
ListStreamProcessorsResponse | |
ListTagsForResourceRequest | |
ListTagsForResourceResponse | |
ModerationLabel | Provides information about a single type of unsafe content found in an image or video. Each type of moderated content has a label within a hierarchical taxonomy. For more information, see Detecting Unsafe Content in the Amazon Rekognition Developer Guide. |
MouthOpen | Indicates whether or not the mouth on the face is open, and the confidence level in the determination. |
Mustache | Indicates whether or not the face has a mustache, and the confidence level in the determination. |
NotificationChannel | The Amazon Simple Notification Service topic to which Amazon Rekognition publishes the completion status of a video analysis operation. For more information, see "api-video". |
OutputConfig | The S3 bucket and folder location where training output is placed. |
Parent | A parent label for a label. A label can have 0, 1, or more parents. |
PersonDetail | Details about a person detected in a video analysis request. |
PersonDetection | Details and path tracking information for a single time a person's path is tracked in a video. Amazon Rekognition operations that track people's paths return an array of "PersonDetection" objects. |
PersonMatch | Information about a person whose face matches one or more faces in an Amazon Rekognition collection. Includes information about the faces in the Amazon Rekognition collection ("FaceMatch"), information about the person ("PersonDetail"), and the time stamp for when the person was detected in a video. An array of "PersonMatch" objects is returned by "GetFaceSearch". |
Point | The X and Y coordinates of a point on an image. The X and Y values returned are ratios of the overall image size. For example, if the input image is 700x200 and the operation returns X=0.5 and Y=0.25, then the point is at the (350,50) pixel coordinate on the image. |
Pose | Indicates the pose of the face as determined by its pitch, roll, and yaw. |
ProjectDescription | A description of an Amazon Rekognition Custom Labels project. |
ProjectVersionDescription | The description of a version of a model. |
ProtectiveEquipmentBodyPart | Information about a body part detected by "DetectProtectiveEquipment" that contains PPE. An array of "ProtectiveEquipmentBodyPart" objects is returned for each person detected by "DetectProtectiveEquipment". |
ProtectiveEquipmentPerson | A person detected by a call to "DetectProtectiveEquipment". The API returns all persons detected in the input image in an array of "ProtectiveEquipmentPerson" objects. |
ProtectiveEquipmentSummarizationAttributes | Specifies summary attributes to return from a call to "DetectProtectiveEquipment". You can specify which types of PPE to summarize, and you can also specify a minimum confidence value for detections. Summary information is returned in the "ProtectiveEquipmentSummary" object of the "DetectProtectiveEquipment" response. (See the DetectProtectiveEquipment sketch below.) |
ProtectiveEquipmentSummary | Summary information for required items of personal protective equipment (PPE) detected on persons by a call to "DetectProtectiveEquipment". You specify the required types of PPE in the "ProtectiveEquipmentSummarizationAttributes" input parameter. |
RecognizeCelebritiesRequest | |
RecognizeCelebritiesResponse | |
RegionOfInterest | Specifies a location within the frame that Rekognition checks for text. Uses a "BoundingBox" object to set the region of the screen. |
S3Object | Provides the S3 bucket name and object name. |
SearchFacesByImageRequest | |
SearchFacesByImageResponse | |
SearchFacesRequest | |
SearchFacesResponse | |
SegmentDetection | A technical cue or shot detection segment detected in a video. An array of "SegmentDetection" objects is returned by "GetSegmentDetection". |
SegmentTypeInfo | Information about the type of a segment requested in a call to "StartSegmentDetection". An array of "SegmentTypeInfo" objects is returned by "GetSegmentDetection". |
ShotSegment | Information about a shot detection segment detected in a video. For more information, see "SegmentDetection". |
Smile | Indicates whether or not the face is smiling, and the confidence level in the determination. |
StartCelebrityRecognitionRequest | |
StartCelebrityRecognitionResponse | |
StartContentModerationRequest | |
StartContentModerationResponse | |
StartFaceDetectionRequest | |
StartFaceDetectionResponse | |
StartFaceSearchRequest | |
StartFaceSearchResponse | |
StartLabelDetectionRequest | |
StartLabelDetectionResponse | |
StartPersonTrackingRequest | |
StartPersonTrackingResponse | |
StartProjectVersionRequest | |
StartProjectVersionResponse | |
StartSegmentDetectionFilters | Filters applied to the technical cue or shot detection segments. For more information, see "StartSegmentDetection". |
StartSegmentDetectionRequest | |
StartSegmentDetectionResponse | |
StartShotDetectionFilter | Filters for the shot detection segments returned by "GetSegmentDetection". |
StartStreamProcessorRequest | |
StartTechnicalCueDetectionFilter | Filters for the technical segments returned by "GetSegmentDetection". For more information, see "StartSegmentDetectionFilters". |
StartTextDetectionFilters | Set of optional parameters that let you set the criteria text must meet to be included in your response. |
StartTextDetectionRequest | |
StartTextDetectionResponse | |
StopProjectVersionRequest | |
StopProjectVersionResponse | |
StopStreamProcessorRequest | |
StreamProcessor | An object that recognizes faces in a streaming video. An Amazon Rekognition stream processor is created by a call to "CreateStreamProcessor". The request parameters for "CreateStreamProcessor" describe the Kinesis video stream source for the streaming video, face recognition parameters, and where to stream the analysis results. (See the stream processor sketch below.) |
StreamProcessorInput | Information about the source streaming video. |
StreamProcessorOutput | Information about the Amazon Kinesis Data Streams stream to which an Amazon Rekognition Video stream processor streams the results of a video analysis. For more information, see CreateStreamProcessor in the Amazon Rekognition Developer Guide. |
StreamProcessorSettings | Input parameters used to recognize faces in a streaming video analyzed by an Amazon Rekognition stream processor. |
Summary | The S3 bucket that contains the training summary. The training summary includes aggregated evaluation metrics for the entire testing dataset and metrics for each individual label. |
Sunglasses | Indicates whether or not the face is wearing sunglasses, and the confidence level in the determination. |
TagResourceRequest | |
TechnicalCueSegment | Information about a technical cue segment. For more information, see "SegmentDetection". |
TestingData | The dataset used for testing. Optionally, if "AutoCreate" is set to true, Amazon Rekognition Custom Labels creates a testing dataset using an 80/20 split of the training dataset. |
TestingDataResult | Amazon SageMaker Ground Truth format manifest files for the input, output, and validation datasets that are used and created during testing. |
TextDetection | Information about a word or line of text detected by "DetectText". |
TextDetectionResult | Information about text detected in a video. Includes the detected text, the time in milliseconds from the start of the video that the text was detected, and where it was detected on the screen. |
TrainingData | The dataset used for training. |
TrainingDataResult | Amazon SageMaker Ground Truth format manifest files for the input, output, and validation datasets that are used and created during training. |
UnindexedFace | A face that "IndexFaces" detected, but didn't index. Use the "Reasons" response attribute to determine why a face wasn't indexed. (See the collection sketch below.) |
UntagResourceRequest | |
ValidationData | Contains the Amazon S3 bucket location of the validation data for a model training job. |
Video | Video file stored in an Amazon S3 bucket. Amazon Rekognition video start operations such as "StartLabelDetection" use "Video" to specify a video for analysis. (See the stored-video sketch below.) |
VideoMetadata | Information about a video that Amazon Rekognition analyzed. |
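The sketches below show how a few of the shapes above fit together. They are illustrative only: client construction follows the `deno.land/x/aws_api` README as in the sketch at the top of this page, method and field names mirror the AWS operations, and every bucket, collection name, ARN, and ID is a placeholder. First, "CompareFaces", which returns "CompareFacesMatch" entries for faces in the target image that match the largest face in the source image.

```ts
import { ApiFactory } from "https://deno.land/x/aws_api/client/mod.ts";
import Rekognition from "https://aws-api.deno.dev/v0.1/services/rekognition.ts";
const rekognition = new ApiFactory().makeNew(Rekognition);

// CompareFacesRequest: a SourceImage, a TargetImage, and a similarity threshold.
const compared = await rekognition.compareFaces({
  SourceImage: { S3Object: { Bucket: "my-example-bucket", Name: "source.jpg" } }, // placeholders
  TargetImage: { S3Object: { Bucket: "my-example-bucket", Name: "target.jpg" } },
  SimilarityThreshold: 80,
});

// Each CompareFacesMatch pairs a ComparedFace (bounding box, confidence) with a Similarity score.
for (const match of compared.FaceMatches ?? []) {
  console.log(match.Similarity, match.Face?.BoundingBox);
}
```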
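"DetectText" with "DetectTextFilters": a word filter ("DetectionFilter") plus a "RegionOfInterest" expressed as a "BoundingBox" of ratios relative to the frame. The image location is a placeholder.

```ts
import { ApiFactory } from "https://deno.land/x/aws_api/client/mod.ts";
import Rekognition from "https://aws-api.deno.dev/v0.1/services/rekognition.ts";
const rekognition = new ApiFactory().makeNew(Rekognition);

const text = await rekognition.detectText({
  Image: { S3Object: { Bucket: "my-example-bucket", Name: "sign.jpg" } }, // placeholder
  Filters: {
    // Drop words that are too small or detected with low confidence.
    WordFilter: { MinConfidence: 80, MinBoundingBoxHeight: 0.05, MinBoundingBoxWidth: 0.05 },
    // Only look for text inside this region of the image.
    RegionsOfInterest: [{ BoundingBox: { Left: 0.1, Top: 0.1, Width: 0.8, Height: 0.4 } }],
  },
});

// TextDetection entries are LINE or WORD items, each with a Geometry.
for (const detection of text.TextDetections ?? []) {
  console.log(detection.Type, detection.DetectedText, detection.Confidence);
}
```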
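"DetectProtectiveEquipment" with "ProtectiveEquipmentSummarizationAttributes". The response contains "ProtectiveEquipmentPerson" entries plus a "ProtectiveEquipmentSummary" listing which detected persons have the required PPE.

```ts
import { ApiFactory } from "https://deno.land/x/aws_api/client/mod.ts";
import Rekognition from "https://aws-api.deno.dev/v0.1/services/rekognition.ts";
const rekognition = new ApiFactory().makeNew(Rekognition);

// SummarizationAttributes selects which PPE types to require and the minimum confidence.
const ppe = await rekognition.detectProtectiveEquipment({
  Image: { S3Object: { Bucket: "my-example-bucket", Name: "worksite.jpg" } }, // placeholder
  SummarizationAttributes: {
    MinConfidence: 80,
    RequiredEquipmentTypes: ["FACE_COVER", "HEAD_COVER"],
  },
});

// Persons: ProtectiveEquipmentPerson[]; Summary: person IDs with/without required equipment.
console.log("persons detected:", ppe.Persons?.length ?? 0);
console.log("with required PPE:", ppe.Summary?.PersonsWithRequiredEquipment);
console.log("without required PPE:", ppe.Summary?.PersonsWithoutRequiredEquipment);
```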
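A collection workflow: "CreateCollection", "IndexFaces" (with "UnindexedFace" entries explaining skipped faces), then "SearchFacesByImage". The collection name, bucket, and IDs are placeholders.

```ts
import { ApiFactory } from "https://deno.land/x/aws_api/client/mod.ts";
import Rekognition from "https://aws-api.deno.dev/v0.1/services/rekognition.ts";
const rekognition = new ApiFactory().makeNew(Rekognition);

// Create a collection to hold indexed faces (fails if the name already exists).
await rekognition.createCollection({ CollectionId: "example-collection" }); // placeholder name

// IndexFaces stores face vectors; FaceRecords were indexed, UnindexedFaces explain skips.
const indexed = await rekognition.indexFaces({
  CollectionId: "example-collection",
  Image: { S3Object: { Bucket: "my-example-bucket", Name: "employee.jpg" } }, // placeholder
  ExternalImageId: "employee-42",
  MaxFaces: 1,
});
console.log("indexed:", indexed.FaceRecords?.length ?? 0);
for (const skipped of indexed.UnindexedFaces ?? []) {
  console.log("not indexed because:", skipped.Reasons);
}

// SearchFacesByImage matches the largest face in the query image against the collection.
const search = await rekognition.searchFacesByImage({
  CollectionId: "example-collection",
  Image: { S3Object: { Bucket: "my-example-bucket", Name: "visitor.jpg" } }, // placeholder
  FaceMatchThreshold: 90,
  MaxFaces: 5,
});
for (const match of search.FaceMatches ?? []) {
  console.log(match.Similarity, match.Face?.ExternalImageId);
}
```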
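A stored-video flow: "StartLabelDetection" returns a job ID, and "GetLabelDetection" is polled until the job finishes (in production the "NotificationChannel" SNS topic would signal completion instead of polling). ARNs and object names are placeholders.

```ts
import { ApiFactory } from "https://deno.land/x/aws_api/client/mod.ts";
import Rekognition from "https://aws-api.deno.dev/v0.1/services/rekognition.ts";
const rekognition = new ApiFactory().makeNew(Rekognition);

// Start an asynchronous label-detection job on a video stored in S3.
const started = await rekognition.startLabelDetection({
  Video: { S3Object: { Bucket: "my-example-bucket", Name: "clip.mp4" } }, // placeholder
  MinConfidence: 70,
  NotificationChannel: {
    SNSTopicArn: "arn:aws:sns:us-east-1:123456789012:rekognition-jobs", // placeholder
    RoleArn: "arn:aws:iam::123456789012:role/rekognition-sns",          // placeholder
  },
});

// Poll until the job leaves IN_PROGRESS.
let result = await rekognition.getLabelDetection({ JobId: started.JobId! });
while (result.JobStatus === "IN_PROGRESS") {
  await new Promise((resolve) => setTimeout(resolve, 5_000));
  result = await rekognition.getLabelDetection({ JobId: started.JobId! });
}

// VideoMetadata describes the analyzed video; each LabelDetection pairs a Timestamp with a Label.
console.log(result.VideoMetadata?.DurationMillis, result.VideoMetadata?.FrameRate);
for (const item of result.Labels ?? []) {
  console.log(item.Timestamp, item.Label?.Name);
}
```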
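Finally, a streaming-video sketch: "CreateStreamProcessor" ties a "KinesisVideoStream" input to a "KinesisDataStream" output with "FaceSearchSettings", and "StartStreamProcessor" begins processing. All names and ARNs are placeholders.

```ts
import { ApiFactory } from "https://deno.land/x/aws_api/client/mod.ts";
import Rekognition from "https://aws-api.deno.dev/v0.1/services/rekognition.ts";
const rekognition = new ApiFactory().makeNew(Rekognition);

// Wire a Kinesis video stream (input) to a Kinesis data stream (output) and
// configure face search against an existing collection.
const created = await rekognition.createStreamProcessor({
  Name: "example-stream-processor",
  Input: {
    KinesisVideoStream: { Arn: "arn:aws:kinesisvideo:us-east-1:123456789012:stream/camera-1/1600000000000" }, // placeholder
  },
  Output: {
    KinesisDataStream: { Arn: "arn:aws:kinesis:us-east-1:123456789012:stream/rekognition-results" }, // placeholder
  },
  Settings: {
    FaceSearch: { CollectionId: "example-collection", FaceMatchThreshold: 85 },
  },
  RoleArn: "arn:aws:iam::123456789012:role/rekognition-stream-processor", // placeholder
});
console.log("created:", created.StreamProcessorArn);

// Start processing; results are written to the Kinesis data stream configured above.
await rekognition.startStreamProcessor({ Name: "example-stream-processor" });
```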