GoogleCloudAiplatformV1BatchPredictionJob
import type { GoogleCloudAiplatformV1BatchPredictionJob } from "https://googleapis.deno.dev/v1/aiplatform:v1.ts";
A job that uses a Model to produce predictions on multiple input instances. If predictions for a significant portion of the instances fail, the job may finish without attempting predictions for all remaining instances.
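For orientation, a minimal sketch of a job definition using this type is shown below. The camelCase field names follow the Vertex AI REST resource, and all names, URIs, and formats are placeholders rather than a definitive request.

import type { GoogleCloudAiplatformV1BatchPredictionJob } from "https://googleapis.deno.dev/v1/aiplatform:v1.ts";

// A minimal sketch; every value below is a placeholder.
const job: GoogleCloudAiplatformV1BatchPredictionJob = {
  displayName: "daily-scoring",                         // human-readable job name
  model: "projects/my-project/locations/us-central1/models/my-model",
  inputConfig: {
    instancesFormat: "jsonl",                           // how the stored instances are encoded
    gcsSource: { uris: ["gs://my-bucket/instances/*.jsonl"] },
  },
  outputConfig: {
    predictionsFormat: "jsonl",                         // how the predictions are written
    gcsDestination: { outputUriPrefix: "gs://my-bucket/predictions/" },
  },
};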
§Properties
Output only. Statistics on completed and failed prediction instances.
The config of resources used by the Model during the batch prediction. If the Model supports DEDICATED_RESOURCES this config may be provided (and the job will use these resources); if the Model doesn't support AUTOMATIC_RESOURCES, this config must be provided.
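A sketch of such a resource configuration; the machine type and replica counts are illustrative, and the nested field names assume the REST resource's BatchDedicatedResources message.

import type { GoogleCloudAiplatformV1BatchPredictionJob } from "https://googleapis.deno.dev/v1/aiplatform:v1.ts";

const withDedicatedResources: GoogleCloudAiplatformV1BatchPredictionJob = {
  dedicatedResources: {
    machineSpec: { machineType: "n1-standard-4" }, // machine type used by each replica (placeholder)
    startingReplicaCount: 2,                       // replicas the job starts with
    maxReplicaCount: 10,                           // upper bound the job may scale to
  },
};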
For custom-trained Models and AutoML Tabular Models, the container of the DeployedModel instances will send stderr and stdout streams to Cloud Logging by default. Please note that the logs incur costs, which are subject to Cloud Logging pricing. Users can disable container logging by setting this flag to true.
Customer-managed encryption key options for a BatchPredictionJob. If this is set, then all resources created by the BatchPredictionJob will be encrypted with the provided encryption key.
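A sketch setting this encryption key together with the container-logging flag from the field above; the KMS key name is a placeholder.

import type { GoogleCloudAiplatformV1BatchPredictionJob } from "https://googleapis.deno.dev/v1/aiplatform:v1.ts";

const withEncryptionAndQuietLogs: GoogleCloudAiplatformV1BatchPredictionJob = {
  disableContainerLogging: true, // stop stderr/stdout from being sent to Cloud Logging
  encryptionSpec: {
    // Placeholder CMEK key; resources created by the job are encrypted with it.
    kmsKeyName: "projects/my-project/locations/us-central1/keyRings/my-ring/cryptoKeys/my-key",
  },
};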
Output only. Time when the BatchPredictionJob entered any of the following states: JOB_STATE_SUCCEEDED, JOB_STATE_FAILED, JOB_STATE_CANCELLED.
Output only. Only populated when the job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.
Explanation configuration for this BatchPredictionJob. Can be specified only if generate_explanation is set to true. This value overrides the value of Model.explanation_spec. All fields of explanation_spec are optional in the request. If a field of the explanation_spec object is not populated, the corresponding field of the Model.explanation_spec object is inherited.
Generate explanation with the batch prediction results. When set to true, the batch prediction output changes based on the predictions_format field of the BatchPredictionJob.output_config object:
* bigquery: output includes a column named explanation. The value is a struct that conforms to the Explanation object.
* jsonl: The JSON objects on each line include an additional entry keyed explanation. The value of the entry is a JSON object that conforms to the Explanation object.
* csv: Generating explanations for CSV format is not supported.
If this field is set to true, either the Model.explanation_spec or explanation_spec must be populated.
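A sketch of a job requesting explanations with JSONL output; the sampled Shapley override is only an example of an explanation_spec override, and the output URI is a placeholder.

import type { GoogleCloudAiplatformV1BatchPredictionJob } from "https://googleapis.deno.dev/v1/aiplatform:v1.ts";

const withExplanations: GoogleCloudAiplatformV1BatchPredictionJob = {
  generateExplanation: true, // adds an "explanation" entry to each output line
  outputConfig: {
    predictionsFormat: "jsonl", // csv does not support explanations
    gcsDestination: { outputUriPrefix: "gs://my-bucket/explained-predictions/" },
  },
  // Optional override of Model.explanation_spec; unset fields are inherited.
  explanationSpec: {
    parameters: { sampledShapleyAttribution: { pathCount: 10 } },
  },
};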
Required. Input configuration of the instances on which predictions are performed. The schema of any single instance may be specified via the Model's PredictSchemata's instance_schema_uri.
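A sketch of an input configuration reading JSONL instances from Cloud Storage (a BigQuery table via bigquerySource is the alternative source); the URIs are placeholders.

import type { GoogleCloudAiplatformV1BatchPredictionJob } from "https://googleapis.deno.dev/v1/aiplatform:v1.ts";

const withInput: GoogleCloudAiplatformV1BatchPredictionJob = {
  inputConfig: {
    instancesFormat: "jsonl", // how the stored instances are encoded
    gcsSource: { uris: ["gs://my-bucket/instances/part-*.jsonl"] },
  },
};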
Configuration for how to convert batch prediction input instances to the prediction instances that are sent to the Model.
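A sketch of such a conversion, assuming the instanceType and includedFields fields of the REST resource's InstanceConfig message; the listed field names are placeholders.

import type { GoogleCloudAiplatformV1BatchPredictionJob } from "https://googleapis.deno.dev/v1/aiplatform:v1.ts";

const withInstanceConfig: GoogleCloudAiplatformV1BatchPredictionJob = {
  instanceConfig: {
    instanceType: "object",                     // send each instance to the Model as a JSON object
    includedFields: ["feature_a", "feature_b"], // only forward these input fields (placeholders)
  },
};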
The labels with user-defined metadata to organize BatchPredictionJobs. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
Immutable. Parameters configuring the batch behavior. Currently only applicable when dedicated_resources are used (in other cases Vertex AI does the tuning itself).
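A sketch setting the two fields above; the label values and batch size are illustrative, and batchSize assumes the REST resource's ManualBatchTuningParameters message.

import type { GoogleCloudAiplatformV1BatchPredictionJob } from "https://googleapis.deno.dev/v1/aiplatform:v1.ts";

const withLabelsAndTuning: GoogleCloudAiplatformV1BatchPredictionJob = {
  labels: { team: "forecasting", env: "prod" }, // lowercase keys/values, 64 characters max each
  manualBatchTuningParameters: {
    batchSize: 64, // instances sent to the Model per request (assumed field name)
  },
};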
The name of the Model resource that produces the predictions via this job; it must share the same ancestor Location. Starting this job has no impact on any existing deployments of the Model and their resources. Exactly one of model and unmanaged_container_model must be set. The model resource name may contain a version ID or version alias to specify the version. Example: projects/{project}/locations/{location}/models/{model}@2 or projects/{project}/locations/{location}/models/{model}@golden. If no version is specified, the default version will be deployed. The model resource could also be a publisher model. Example: publishers/{publisher}/models/{model} or projects/{project}/locations/{location}/publishers/{publisher}/models/{model}.
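The accepted resource-name forms, written out as plain strings; the project, location, publisher, and model IDs are placeholders.

const pinnedVersion = "projects/my-project/locations/us-central1/models/my-model@2";       // pinned version ID
const aliasedVersion = "projects/my-project/locations/us-central1/models/my-model@golden"; // version alias
const publisherModel = "publishers/my-publisher/models/my-model";                          // publisher model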
The parameters that govern the predictions. The schema of the parameters may be specified via the Model's PredictSchemata's parameters_schema_uri.
Output only. The version ID of the Model that produces the predictions via this job.
Required. The configuration specifying where output predictions should be written. The schema of any single prediction may be specified as a concatenation of Model's PredictSchemata's instance_schema_uri and prediction_schema_uri.
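A sketch of an output configuration that writes predictions to BigQuery instead of Cloud Storage; the dataset URI is a placeholder.

import type { GoogleCloudAiplatformV1BatchPredictionJob } from "https://googleapis.deno.dev/v1/aiplatform:v1.ts";

const withBigQueryOutput: GoogleCloudAiplatformV1BatchPredictionJob = {
  outputConfig: {
    predictionsFormat: "bigquery", // write rows to a dataset rather than files to a bucket
    bigqueryDestination: { outputUri: "bq://my-project.my_dataset" },
  },
};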
Output only. Information further describing the output of this job.
Output only. Partial failures encountered. For example, single files that can't be read. This field never exceeds 20 entries. Status details fields contain standard Google Cloud error details.
Output only. Information about resources that had been consumed by this job. Provided in real time on a best-effort basis, as well as a final value once the job completes. Note: This field currently may not be populated for batch predictions that use AutoML Models.
The service account that the DeployedModel's container runs as. If not specified, a system-generated one will be used, which has minimal permissions, and the custom container, if used, may not have enough permission to access other Google Cloud resources. Users deploying the Model must have the iam.serviceAccounts.actAs permission on this service account.
Output only. Time when the BatchPredictionJob for the first time entered the JOB_STATE_RUNNING state.
Output only. The detailed state of the job.
Contains model information necessary to perform batch prediction without requiring uploading to model registry. Exactly one of model and unmanaged_container_model must be set.
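A sketch of such an inline model definition, assuming the artifactUri and containerSpec fields of the REST resource's UnmanagedContainerModel message; the image URI, artifact path, and routes are placeholders.

import type { GoogleCloudAiplatformV1BatchPredictionJob } from "https://googleapis.deno.dev/v1/aiplatform:v1.ts";

const withUnmanagedModel: GoogleCloudAiplatformV1BatchPredictionJob = {
  unmanagedContainerModel: {
    artifactUri: "gs://my-bucket/model-artifacts/", // where the model files live
    containerSpec: {
      imageUri: "us-docker.pkg.dev/my-project/my-repo/my-serving-image:latest",
      predictRoute: "/predict", // HTTP path the container serves predictions on
      healthRoute: "/health",   // HTTP path used for health checks
    },
  },
};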