SparkStandaloneAutoscalingConfig

import type { SparkStandaloneAutoscalingConfig } from "https://googleapis.deno.dev/v1/dataproc:v1.ts";

Basic autoscaling configurations for Spark Standalone.

interface SparkStandaloneAutoscalingConfig {
  gracefulDecommissionTimeout?: number;
  removeOnlyIdleWorkers?: boolean;
  scaleDownFactor?: number;
  scaleDownMinWorkerFraction?: number;
  scaleUpFactor?: number;
  scaleUpMinWorkerFraction?: number;
}
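
For illustration, a value of this shape might be built as in the sketch below. The numbers are arbitrary placeholders, and treating gracefulDecommissionTimeout as seconds is an assumption made here for the example (the field represents a duration bounded by 0s and 1d):

import type { SparkStandaloneAutoscalingConfig } from "https://googleapis.deno.dev/v1/dataproc:v1.ts";

// Illustrative values only; tune the factors and fractions for your workload.
const autoscaling: SparkStandaloneAutoscalingConfig = {
  // How long to wait for graceful decommissioning before forcing removal (bounds: 0s, 1d).
  // Assumed to be expressed in seconds here; check the module's duration handling.
  gracefulDecommissionTimeout: 3600,
  // Only remove workers that are idle when scaling down.
  removeOnlyIdleWorkers: true,
  // Aggressiveness of scaling, as a fraction of the recommended change (bounds: 0.0, 1.0).
  scaleDownFactor: 0.5,
  scaleUpFactor: 1.0,
  // Minimum recommended change, as a fraction of cluster size, before any scaling happens.
  scaleDownMinWorkerFraction: 0.1,
  scaleUpMinWorkerFraction: 0.0,
};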

§Properties

§ gracefulDecommissionTimeout?: number

Required. Timeout for Spark graceful decommissioning of Spark workers. Specifies the duration to wait for Spark workers to complete Spark decommissioning tasks before forcefully removing workers. Only applicable to downscaling operations. Bounds: 0s, 1d.

§ removeOnlyIdleWorkers?: boolean

Optional. Remove only idle workers when scaling down the cluster.

§ scaleDownFactor?: number

Required. Fraction of required executors to remove from Spark Standalone clusters. A scale-down factor of 1.0 will result in scaling down so that there are no more executors for the Spark Job (more aggressive scaling). A scale-down factor closer to 0 will result in a smaller magnitude of scaling down (less aggressive scaling). Bounds: 0.0, 1.0.
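
As a rough sketch of the arithmetic described above (the helper below is illustrative only, not part of the API), the factor scales whatever removal the autoscaler would otherwise recommend:

// Illustrative only: a scale-down factor moderates a recommended executor removal.
// Rounding with Math.floor is an assumption made for this sketch.
function applyScaleDownFactor(recommendedRemoval: number, scaleDownFactor: number): number {
  return Math.floor(recommendedRemoval * scaleDownFactor);
}

applyScaleDownFactor(10, 1.0); // 10 — remove everything the autoscaler recommends
applyScaleDownFactor(10, 0.5); // 5  — remove only half of the recommended executors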

§ scaleDownMinWorkerFraction?: number

Optional. Minimum scale-down threshold as a fraction of total cluster size before scaling occurs. For example, in a 20-worker cluster, a threshold of 0.1 means the autoscaler must recommend at least a 2-worker scale-down for the cluster to scale. A threshold of 0 means the autoscaler will scale down on any recommended change. Bounds: 0.0, 1.0. Default: 0.0.
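
The worked example above can be written out directly; the predicate below only illustrates the documented threshold rule and is not an API call:

// Illustrative only: the recommended change must be at least this fraction of cluster size.
function meetsScaleDownThreshold(
  clusterSize: number,
  recommendedRemoval: number,
  scaleDownMinWorkerFraction: number,
): boolean {
  return recommendedRemoval >= clusterSize * scaleDownMinWorkerFraction;
}

meetsScaleDownThreshold(20, 1, 0.1); // false — below the 2-worker threshold for a 20-worker cluster
meetsScaleDownThreshold(20, 2, 0.1); // true  — meets the 10% threshold
meetsScaleDownThreshold(20, 1, 0.0); // true  — a threshold of 0 scales on any recommended change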

§ scaleUpFactor?: number

Required. Fraction of required workers to add to Spark Standalone clusters. A scale-up factor of 1.0 will result in scaling up so that there are no more required workers for the Spark Job (more aggressive scaling). A scale-up factor closer to 0 will result in a smaller magnitude of scaling up (less aggressive scaling). Bounds: 0.0, 1.0.

§ scaleUpMinWorkerFraction?: number

Optional. Minimum scale-up threshold as a fraction of total cluster size before scaling occurs. For example, in a 20-worker cluster, a threshold of 0.1 means the autoscaler must recommend at least a 2-worker scale-up for the cluster to scale. A threshold of 0 means the autoscaler will scale up on any recommended change. Bounds: 0.0, 1.0. Default: 0.0.
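
Putting the two scale-up settings together, a minimal sketch of the decision (an illustrative helper under assumed rounding, not part of the API): the factor determines how much of the worker shortfall to add, and the minimum fraction gates whether that addition is large enough to act on.

import type { SparkStandaloneAutoscalingConfig } from "https://googleapis.deno.dev/v1/dataproc:v1.ts";

// Illustrative only: combine scaleUpFactor and scaleUpMinWorkerFraction into one decision.
function planScaleUp(
  clusterSize: number,
  requiredWorkers: number,
  cfg: SparkStandaloneAutoscalingConfig,
): number {
  const shortfall = Math.max(0, requiredWorkers - clusterSize);
  // A factor of 1.0 adds the whole shortfall; smaller factors add less (Math.ceil is an assumption).
  const proposed = Math.ceil(shortfall * (cfg.scaleUpFactor ?? 0));
  // Only scale if the proposed addition meets the minimum-fraction threshold.
  const threshold = clusterSize * (cfg.scaleUpMinWorkerFraction ?? 0);
  return proposed > 0 && proposed >= threshold ? proposed : 0;
}

// A 20-worker cluster that needs 25 workers, with factor 1.0 and a 10% threshold (2 workers): add 5.
planScaleUp(20, 25, { scaleUpFactor: 1.0, scaleUpMinWorkerFraction: 0.1 }); // 5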