
ClusterConfig

import type { ClusterConfig } from "https://googleapis.deno.dev/v1/dataproc:v1.ts";

The cluster config.

interface ClusterConfig {
autoscalingConfig?: AutoscalingConfig;
auxiliaryNodeGroups?: AuxiliaryNodeGroup[];
configBucket?: string;
dataprocMetricConfig?: DataprocMetricConfig;
encryptionConfig?: EncryptionConfig;
endpointConfig?: EndpointConfig;
gceClusterConfig?: GceClusterConfig;
gkeClusterConfig?: GkeClusterConfig;
initializationActions?: NodeInitializationAction[];
lifecycleConfig?: LifecycleConfig;
masterConfig?: InstanceGroupConfig;
metastoreConfig?: MetastoreConfig;
secondaryWorkerConfig?: InstanceGroupConfig;
securityConfig?: SecurityConfig;
softwareConfig?: SoftwareConfig;
tempBucket?: string;
workerConfig?: InstanceGroupConfig;
}
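
For orientation, a minimal cluster layout might look like the sketch below. It is illustrative only: the bucket name, zone, machine types, and instance counts are placeholder values, and the nested field names (zoneUri, numInstances, machineTypeUri, imageVersion) are assumed from the GceClusterConfig, InstanceGroupConfig, and SoftwareConfig interfaces referenced above rather than spelled out on this page.

import type { ClusterConfig } from "https://googleapis.deno.dev/v1/dataproc:v1.ts";

// One master and two primary workers; all values are placeholders.
const config: ClusterConfig = {
  configBucket: "my-dataproc-staging-bucket", // bucket name only, no gs:// prefix
  gceClusterConfig: { zoneUri: "us-central1-a" },
  masterConfig: { numInstances: 1, machineTypeUri: "n1-standard-4" },
  workerConfig: { numInstances: 2, machineTypeUri: "n1-standard-4" },
  softwareConfig: { imageVersion: "2.1" },
};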

Properties

autoscalingConfig?: AutoscalingConfig

Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.

auxiliaryNodeGroups?: AuxiliaryNodeGroup[]

Optional. The node group settings.

configBucket?: string

Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
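
For example (using the import shown at the top of this page; the bucket name is a placeholder):

// Accepted: a bare bucket name.
const staged: ClusterConfig = { configBucket: "my-staging-bucket" };
// Not accepted: a gs:// URI.
// const staged: ClusterConfig = { configBucket: "gs://my-staging-bucket" };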

dataprocMetricConfig?: DataprocMetricConfig

Optional. The config for Dataproc metrics.

encryptionConfig?: EncryptionConfig

Optional. Encryption settings for the cluster.

endpointConfig?: EndpointConfig

Optional. Port/endpoint configuration for this cluster.

gceClusterConfig?: GceClusterConfig

Optional. The shared Compute Engine config settings for all instances in a cluster.

gkeClusterConfig?: GkeClusterConfig

Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
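
A sketch of the intended split, assuming GkeClusterConfig exposes a gkeClusterTarget field (the project, location, and cluster names are placeholders):

// GKE-based deployment: set gkeClusterConfig and leave the Compute Engine-based
// fields (gceClusterConfig, masterConfig, workerConfig, secondaryWorkerConfig,
// autoscalingConfig) unset.
const gkeBased: ClusterConfig = {
  gkeClusterConfig: {
    gkeClusterTarget: "projects/my-project/locations/us-central1/clusters/my-gke-cluster",
  },
};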

initializationActions?: NodeInitializationAction[]

Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget):

ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role)
if [[ "${ROLE}" == 'Master' ]]; then
  ... master specific actions ...
else
  ... worker specific actions ...
fi
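
For example, pointing the cluster at a role-aware setup script (the bucket and script path below are placeholders; executableFile is the NodeInitializationAction field that names the script to run):

const withInit: ClusterConfig = {
  initializationActions: [
    // The script itself can branch on the dataproc-role metadata as shown above.
    { executableFile: "gs://my-init-bucket/scripts/node-setup.sh" },
  ],
};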

lifecycleConfig?: LifecycleConfig

Optional. Lifecycle setting for the cluster.

masterConfig?: InstanceGroupConfig

Optional. The Compute Engine config settings for the cluster's master instance.

metastoreConfig?: MetastoreConfig

Optional. Metastore configuration.

secondaryWorkerConfig?: InstanceGroupConfig

Optional. The Compute Engine config settings for a cluster's secondary worker instances.

securityConfig?: SecurityConfig

Optional. Security settings for the cluster.

softwareConfig?: SoftwareConfig

Optional. The config settings for cluster software.

tempBucket?: string

Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
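
As with configBucket, only a bare bucket name is accepted. A brief sketch with a placeholder name:

// When you supply your own temp bucket, its retention/TTL policy is yours to manage.
const withTemp: ClusterConfig = { tempBucket: "my-dataproc-temp-bucket" };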

workerConfig?: InstanceGroupConfig

Optional. The Compute Engine config settings for the cluster's worker instances.