Environment

import type { Environment } from "https://googleapis.deno.dev/v1/dataflow:v1b3.ts";

Describes the environment in which a Dataflow Job runs.

interface Environment {
  clusterManagerApiService?: string;
  dataset?: string;
  debugOptions?: DebugOptions;
  experiments?: string[];
  flexResourceSchedulingGoal?: "FLEXRS_UNSPECIFIED" | "FLEXRS_SPEED_OPTIMIZED" | "FLEXRS_COST_OPTIMIZED";
  internalExperiments?: {
    [key: string]: any;
  };
  sdkPipelineOptions?: {
    [key: string]: any;
  };
  serviceAccountEmail?: string;
  serviceKmsKeyName?: string;
  serviceOptions?: string[];
  readonly shuffleMode?: "SHUFFLE_MODE_UNSPECIFIED" | "VM_BASED" | "SERVICE_BASED";
  streamingMode?: "STREAMING_MODE_UNSPECIFIED" | "STREAMING_MODE_EXACTLY_ONCE" | "STREAMING_MODE_AT_LEAST_ONCE";
  tempStoragePrefix?: string;
  userAgent?: {
    [key: string]: any;
  };
  readonly useStreamingEngineResourceBasedBilling?: boolean;
  version?: {
    [key: string]: any;
  };
  workerPools?: WorkerPool[];
  workerRegion?: string;
  workerZone?: string;
}
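
All fields are optional. As a minimal sketch of how this interface might be populated when configuring a job, assuming the import shown above; every concrete value (service account, bucket, region) is a hypothetical placeholder rather than anything prescribed by the API:

import type { Environment } from "https://googleapis.deno.dev/v1/dataflow:v1b3.ts";

// Hypothetical batch-job environment; all values are placeholders.
const environment: Environment = {
  serviceAccountEmail: "dataflow-worker@my-project.iam.gserviceaccount.com",
  tempStoragePrefix: "storage.googleapis.com/my-bucket/dataflow",
  workerRegion: "us-central1",
  flexResourceSchedulingGoal: "FLEXRS_COST_OPTIMIZED",
};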

§Properties

§
clusterManagerApiService?: string
[src]

The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".

§
dataset?: string
[src]

The dataset for the current project where various workflow-related tables are stored. The supported resource type is Google BigQuery: bigquery.googleapis.com/{dataset}
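
For illustration, a dataset reference in the documented form (assuming the import at the top of this page; the dataset name is a placeholder):

const env: Environment = {
  dataset: "bigquery.googleapis.com/my_dataset", // placeholder dataset name
};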

§
debugOptions?: DebugOptions
[src]

Any debugging options to be supplied to the job.

§
experiments?: string[]
[src]

The list of experiments to enable. This field should be used for SDK-related experiments and not for service-related experiments. The proper field for service-related experiments is service_options.
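
A rough illustration of the split between the two fields (assuming the import above; the flag names here are hypothetical, real names come from the Dataflow documentation for the feature in question):

const env: Environment = {
  experiments: ["some_sdk_experiment"],    // SDK-related toggles go here
  serviceOptions: ["some_service_option"], // service-related toggles belong here instead
};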

§
flexResourceSchedulingGoal?: "FLEXRS_UNSPECIFIED" | "FLEXRS_SPEED_OPTIMIZED" | "FLEXRS_COST_OPTIMIZED"
[src]

Which Flexible Resource Scheduling mode to run in.

§
internalExperiments?: {
  [key: string]: any;
}
[src]

Experimental settings.

§
sdkPipelineOptions?: {
  [key: string]: any;
}
[src]

The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language-agnostic and platform-independent way.

§
serviceAccountEmail?: string
[src]

Identity to run virtual machines as. Defaults to the default account.

§
serviceKmsKeyName?: string
[src]

If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
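
A key name in the documented format, with placeholder project, location, key ring, and key IDs (assuming the import above):

const env: Environment = {
  serviceKmsKeyName:
    "projects/my-project/locations/us-central1/keyRings/my-ring/cryptoKeys/my-key",
};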

§
serviceOptions?: string[]
[src]

Optional. The list of service options to enable. This field should be used for service-related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).

§
readonly shuffleMode?: "SHUFFLE_MODE_UNSPECIFIED" | "VM_BASED" | "SERVICE_BASED"
[src]

Output only. The shuffle mode used for the job.

§
streamingMode?: "STREAMING_MODE_UNSPECIFIED" | "STREAMING_MODE_EXACTLY_ONCE" | "STREAMING_MODE_AT_LEAST_ONCE"
[src]

Optional. Specifies the Streaming Engine message processing guarantees. Reduces cost and latency but might result in duplicate messages committed to storage. Designed to run simple mapping streaming ETL jobs at the lowest cost. For example, Change Data Capture (CDC) to BigQuery is a canonical use case. For more information, see Set the pipeline streaming mode.
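
For example, opting into at-least-once processing might look like the sketch below (assuming the import above; whether this suits a pipeline depends on its tolerance for duplicate messages):

const env: Environment = {
  streamingMode: "STREAMING_MODE_AT_LEAST_ONCE", // lower cost and latency, duplicates possible
};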

§
tempStoragePrefix?: string
[src]

The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME}" to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}
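
For illustration, with a placeholder bucket and a job named "my-job", the prefix below would place temporary data under storage.googleapis.com/my-bucket/dataflow/temp-my-job (assuming the import above):

const env: Environment = {
  tempStoragePrefix: "storage.googleapis.com/my-bucket/dataflow", // placeholder bucket and path
};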

§
userAgent?: {
  [key: string]: any;
}
[src]

A description of the process that generated the request.

§
readonly useStreamingEngineResourceBasedBilling?: boolean
[src]

Output only. Whether the job uses the Streaming Engine resource-based billing model.

§
version?: {
  [key: string]: any;
}
[src]

A structure describing which components of the service, and which versions of those components, are required to run the job.

§
workerPools?: WorkerPool[]
[src]

The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.

§
workerRegion?: string
[src]

The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region.

§
workerZone?: string
[src]

The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.
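
A sketch of the two mutually exclusive options, using placeholder locations (assuming the import above):

const regionScoped: Environment = { workerRegion: "us-west1" }; // region only
const zoneScoped: Environment = { workerZone: "us-west1-a" };   // or zone only, never both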