
HyperParameterTrainingJobDefinition

import type { HyperParameterTrainingJobDefinition } from "https://aws-api.deno.dev/v0.4/services/sagemaker.ts?docs=full";

Defines the training jobs launched by a hyperparameter tuning job.

interface HyperParameterTrainingJobDefinition {
  AlgorithmSpecification: HyperParameterAlgorithmSpecification;
  CheckpointConfig?: CheckpointConfig | null;
  DefinitionName?: string | null;
  EnableInterContainerTrafficEncryption?: boolean | null;
  EnableManagedSpotTraining?: boolean | null;
  EnableNetworkIsolation?: boolean | null;
  Environment?: {
    [key: string]: string | null | undefined;
  } | null;
  HyperParameterRanges?: ParameterRanges | null;
  HyperParameterTuningResourceConfig?: HyperParameterTuningResourceConfig | null;
  InputDataConfig?: Channel[] | null;
  OutputDataConfig: OutputDataConfig;
  ResourceConfig?: ResourceConfig | null;
  RetryStrategy?: RetryStrategy | null;
  RoleArn: string;
  StaticHyperParameters?: {
    [key: string]: string | null | undefined;
  } | null;
  StoppingCondition: StoppingCondition;
  TuningObjective?: HyperParameterTuningJobObjective | null;
  VpcConfig?: VpcConfig | null;
}
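
As a quick orientation, here is a minimal sketch that fills in only the four required properties; the image URI, role ARN, and bucket path are placeholders, not working resources:

import type { HyperParameterTrainingJobDefinition } from "https://aws-api.deno.dev/v0.4/services/sagemaker.ts?docs=full";

// A minimal definition: only the required properties are set.
const definition: HyperParameterTrainingJobDefinition = {
  AlgorithmSpecification: {
    TrainingImage: "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-algo:latest",
    TrainingInputMode: "File",
  },
  RoleArn: "arn:aws:iam::123456789012:role/MySageMakerRole",
  OutputDataConfig: { S3OutputPath: "s3://my-bucket/hpo-output/" },
  StoppingCondition: { MaxRuntimeInSeconds: 3600 },
};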

§Properties

§
AlgorithmSpecification: HyperParameterAlgorithmSpecification
[src]

The "HyperParameterAlgorithmSpecification" object that specifies the resource algorithm to use for the training jobs that the tuning job launches.

§
CheckpointConfig?: CheckpointConfig | null
[src]
§
DefinitionName?: string | null
[src]

The job definition name.

§
EnableInterContainerTrafficEncryption?: boolean | null
[src]

To encrypt all communications between ML compute instances in distributed training, choose True. Encryption provides greater security for distributed training, but training might take longer. How long it takes depends on the amount of communication between compute instances, especially if you use a deep learning algorithm in distributed training.

§
EnableManagedSpotTraining?: boolean | null
[src]

A Boolean indicating whether managed spot training is enabled (True) or not (False).

§
EnableNetworkIsolation?: boolean | null
[src]

Isolates the training container. No inbound or outbound network calls can be made, except for calls between peers within a training cluster for distributed training. If network isolation is used for training jobs that are configured to use a VPC, SageMaker downloads and uploads customer data and model artifacts through the specified VPC, but the training container does not have network access.
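
As an illustrative sketch, the three boolean flags described above could be combined like this; the values are examples, not recommendations:

// Illustrative settings for the three boolean flags described above.
const flags: Partial<HyperParameterTrainingJobDefinition> = {
  EnableInterContainerTrafficEncryption: true, // encrypt traffic between instances
  EnableManagedSpotTraining: true,             // use spot capacity to reduce cost
  EnableNetworkIsolation: true,                // block container network access
};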

§
Environment?: {
  [key: string]: string | null | undefined;
} | null
[src]

The environment variables that you can pass into the SageMaker CreateTrainingJob API. You can use an existing environment variable from the training container or define your own. See Define metrics and variables for more information.

Note: The maximum number of items specified for Map Entries refers to the maximum number of environment variables for each TrainingJobDefinition and also the maximum for the hyperparameter tuning job itself. That is, the sum of the number of environment variables for all the training job definitions can't exceed the maximum number specified.
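
For instance, a sketch of an Environment map; the variable names here are invented for illustration:

// Hypothetical environment variables forwarded to each training job.
const Environment = {
  LOG_LEVEL: "info",
  DATA_FORMAT: "csv",
};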

§
HyperParameterRanges?: ParameterRanges | null
[src]
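
A sketch of a "ParameterRanges" value, assuming a hypothetical learning_rate hyperparameter; note that the range bounds are passed as strings:

// Search learning_rate over a log-scaled continuous range (illustrative).
const HyperParameterRanges: ParameterRanges = {
  ContinuousParameterRanges: [{
    Name: "learning_rate",
    MinValue: "0.0001",
    MaxValue: "0.1",
    ScalingType: "Logarithmic",
  }],
};
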
§
HyperParameterTuningResourceConfig?: HyperParameterTuningResourceConfig | null
[src]

The configuration for the hyperparameter tuning resources, including the compute instances and storage volumes, used for training jobs launched by the tuning job. By default, storage volumes hold model artifacts and incremental states. Choose File for TrainingInputMode in the AlgorithmSpecification parameter to additionally store training data in the storage volume (optional).
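
A sketch, with the instance type and sizes chosen purely for illustration:

// Illustrative tuning-level resource configuration.
const HyperParameterTuningResourceConfig = {
  InstanceType: "ml.m5.xlarge",
  InstanceCount: 1,
  VolumeSizeInGB: 30,
};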

§
InputDataConfig?: Channel[] | null
[src]

An array of "Channel" objects that specify the input for the training jobs that the tuning job launches.
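
For example, a single "train" channel reading CSV data from S3 might look like this sketch; the bucket and prefix are placeholders, and the Channel type is assumed to be exported from the same module:

// One input channel named "train", reading CSV objects under an S3 prefix.
const InputDataConfig: Channel[] = [{
  ChannelName: "train",
  DataSource: {
    S3DataSource: {
      S3DataType: "S3Prefix",
      S3Uri: "s3://my-bucket/training-data/",
      S3DataDistributionType: "FullyReplicated",
    },
  },
  ContentType: "text/csv",
}];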

§
OutputDataConfig: OutputDataConfig
[src]

Specifies the path to the Amazon S3 bucket where you store model artifacts from the training jobs that the tuning job launches.
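
For example (the bucket name is a placeholder):

// Artifacts from every training job land under this S3 prefix.
const OutputDataConfig = { S3OutputPath: "s3://my-bucket/hpo-artifacts/" };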

§
ResourceConfig?: ResourceConfig | null
[src]

The resources, including the compute instances and storage volumes, to use for the training jobs that the tuning job launches.

Storage volumes store model artifacts and incremental states. Training algorithms might also use storage volumes for scratch space. If you want SageMaker to use the storage volume to store the training data, choose File as the TrainingInputMode in the algorithm specification. For distributed training algorithms, specify an instance count greater than 1.

Note: If you want to use hyperparameter optimization with instance type flexibility, use HyperParameterTuningResourceConfig instead.
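
A sketch of a distributed-training configuration, with values chosen for illustration:

// Two instances enable distributed training; the volume holds artifacts,
// incremental state, and (in File mode) the training data itself.
const ResourceConfig = {
  InstanceType: "ml.m5.xlarge",
  InstanceCount: 2,
  VolumeSizeInGB: 50,
};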

§
RetryStrategy?: RetryStrategy | null
[src]

The number of times to retry the job when the job fails due to an InternalServerError.
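
For example, to retry up to three times on internal server errors:

// Retry a failed training job up to 3 times on InternalServerError.
const RetryStrategy = { MaximumRetryAttempts: 3 };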

§
RoleArn: string
[src]

The Amazon Resource Name (ARN) of the IAM role associated with the training jobs that the tuning job launches.

§
StaticHyperParameters?: {
  [key: string]: string | null | undefined;
} | null
[src]

Specifies the values of hyperparameters that do not change for the tuning job.
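
Note that values are passed as strings even when they are numeric; the names below are illustrative:

// Hyperparameters held fixed for every training job the tuning job launches.
const StaticHyperParameters = {
  epochs: "10",
  batch_size: "256",
};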

§
StoppingCondition: StoppingCondition
[src]

Specifies a limit to how long a model hyperparameter training job can run. It also specifies how long a managed spot training job has to complete. When the job reaches the time limit, SageMaker ends the training job. Use this API to cap model training costs.
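
For example, capping each training job at one hour, with a two-hour wait budget for managed spot training; the values are illustrative:

// Cap runtime at 1 hour; allow up to 2 hours total for spot interruptions.
const StoppingCondition = {
  MaxRuntimeInSeconds: 3600,
  MaxWaitTimeInSeconds: 7200,
};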

§
TuningObjective?: HyperParameterTuningJobObjective | null
[src]

§
VpcConfig?: VpcConfig | null
[src]

The "VpcConfig" object that specifies the VPC that you want the training jobs that this hyperparameter tuning job launches to connect to. Control access to and from your training container by configuring the VPC. For more information, see Protect Training Jobs by Using an Amazon Virtual Private Cloud.