import * as mod from "https://googleapis.deno.dev/v1/dataproc:v1.ts";
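The interfaces below are plain data shapes for Dataproc v1 request and response bodies. As a minimal sketch of how they compose, the following builds a job-submission body using simplified local mirrors of a few of the module's exported interfaces (`SparkJob`, `JobPlacement`, `Job`, `SubmitJobRequest`); the field names follow the Dataproc v1 REST API, but check the module's own type declarations before relying on them, and the cluster name is hypothetical.

```typescript
// Simplified local mirrors of interfaces this module exports (assumption:
// field names match the Dataproc v1 REST API; verify against the module).
interface SparkJob { mainClass?: string; jarFileUris?: string[]; args?: string[]; }
interface JobPlacement { clusterName: string; }
interface Job { placement: JobPlacement; sparkJob?: SparkJob; }
interface SubmitJobRequest { job: Job; requestId?: string; }

// Assemble a request body for submitting a Spark job to a cluster.
const req: SubmitJobRequest = {
  job: {
    placement: { clusterName: "example-cluster" }, // hypothetical cluster name
    sparkJob: {
      mainClass: "org.apache.spark.examples.SparkPi",
      jarFileUris: ["file:///usr/lib/spark/examples/jars/spark-examples.jar"],
      args: ["1000"],
    },
  },
};

console.log(req.job.placement.clusterName); // → example-cluster
```

A body like this would be passed to the client's job-submission method together with a project ID and region.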
Dataproc | Manages Hadoop-based clusters and jobs on Google Cloud Platform. |
GoogleAuth |
AcceleratorConfig | Specifies the type and number of accelerator cards attached to the instances of an instance group. See GPUs on Compute Engine (https://cloud.google.com/compute/docs/gpus/). |
AnalyzeBatchRequest | A request to analyze a batch workload. |
AnalyzeOperationMetadata | Metadata describing the Analyze operation. |
AutoscalingConfig | Autoscaling Policy config associated with the cluster. |
AutoscalingPolicy | Describes an autoscaling policy for Dataproc cluster autoscaler. |
AuxiliaryNodeGroup | Node group identification and configuration information. |
AuxiliaryServicesConfig | Auxiliary services configuration for a Cluster. |
BasicAutoscalingAlgorithm | Basic algorithm for autoscaling. |
BasicYarnAutoscalingConfig | Basic autoscaling configurations for YARN. |
Batch | A representation of a batch workload in the service. |
BatchOperationMetadata | Metadata describing the Batch operation. |
Binding | Associates members, or principals, with a role. |
CancelJobRequest | A request to cancel a job. |
Cluster | Describes the identifying information, config, and status of a Dataproc cluster. |
ClusterConfig | The cluster config. |
ClusterMetrics | Contains cluster daemon metrics, such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release. |
ClusterOperation | The cluster operation triggered by a workflow. |
ClusterOperationMetadata | Metadata describing the operation. |
ClusterOperationStatus | The status of the operation. |
ClusterSelector | A selector that chooses target cluster for jobs based on metadata. |
ClusterStatus | The status of a cluster and its instances. |
ConfidentialInstanceConfig | Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs). |
CredentialsClient | Defines the root interface for all clients that generate credentials for calling Google APIs. All clients should implement this interface. |
DataprocMetricConfig | Dataproc metric config. |
DiagnoseClusterRequest | A request to collect cluster diagnostic information. |
DiagnoseClusterResults | The location of diagnostic output. |
DiskConfig | Specifies the config of disk options for a group of VM instances. |
DriverSchedulingConfig | Driver scheduling configuration. |
Empty | A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); } |
EncryptionConfig | Encryption settings for the cluster. |
EndpointConfig | Endpoint config for this cluster. |
EnvironmentConfig | Environment configuration for a workload. |
ExecutionConfig | Execution configuration for a workload. |
Expr | Represents a textual expression in the Common Expression Language (CEL) syntax. CEL is a C-like expression language. The syntax and semantics of CEL are documented at https://github.com/google/cel-spec. Example (Comparison): title: "Summary size limit" description: "Determines if a summary is less than 100 chars" expression: "document.summary.size() < 100" Example (Equality): title: "Requestor is owner" description: "Determines if requestor is the document owner" expression: "document.owner == request.auth.claims.email" Example (Logic): title: "Public documents" description: "Determine whether the document should be publicly visible" expression: "document.type != 'private' && document.type != 'internal'" Example (Data Manipulation): title: "Notification string" description: "Create a notification string with a timestamp." expression: "'New message received at ' + string(document.create_time)" The exact variables and functions that may be referenced within an expression are determined by the service that evaluates it. See the service documentation for additional information. |
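The "Comparison" example from the Expr description can be written as a typed value. This is a sketch using a local mirror of the interface (assumption: the module's `Expr` export has the same four optional string fields, matching the IAM v1 shape).

```typescript
// Local mirror of the Expr interface (assumption; the real export should
// also carry an optional `location` field per the IAM v1 shape).
interface Expr {
  expression?: string;
  title?: string;
  description?: string;
  location?: string;
}

// The "Comparison" example from the docs as a typed object.
const summaryLimit: Expr = {
  title: "Summary size limit",
  description: "Determines if a summary is less than 100 chars",
  expression: "document.summary.size() < 100",
};

console.log(summaryLimit.expression);
```

Note that the service evaluating the expression, not the client, determines which variables such as `document.summary` are available.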
FlinkJob | A Dataproc job for running Apache Flink applications on YARN. |
GceClusterConfig | Common config settings for resources of Compute Engine cluster instances, applicable to all instances in the cluster. |
GetIamPolicyRequest | Request message for GetIamPolicy method. |
GetPolicyOptions | Encapsulates settings provided to GetIamPolicy. |
GkeClusterConfig | The cluster's GKE config. |
GkeNodeConfig | Parameters that describe cluster nodes. |
GkeNodePoolAcceleratorConfig | GkeNodePoolAcceleratorConfig represents a Hardware Accelerator request for a node pool. |
GkeNodePoolAutoscalingConfig | GkeNodePoolAutoscalingConfig contains information the cluster autoscaler needs to adjust the size of the node pool to the current cluster usage. |
GkeNodePoolConfig | The configuration of a GKE node pool used by a Dataproc-on-GKE cluster (https://cloud.google.com/dataproc/docs/concepts/jobs/dataproc-gke#create-a-dataproc-on-gke-cluster). |
GkeNodePoolTarget | GKE node pools that Dataproc workloads run on. |
GoogleCloudDataprocV1WorkflowTemplateEncryptionConfig | Encryption settings for encrypting workflow template job arguments. |
HadoopJob | A Dataproc job for running Apache Hadoop MapReduce (https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html) jobs on Apache Hadoop YARN (https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html). |
HiveJob | A Dataproc job for running Apache Hive (https://hive.apache.org/) queries on YARN. |
IdentityConfig | Identity related configuration, including service account based secure multi-tenancy user mappings. |
InjectCredentialsRequest | A request to inject credentials into a cluster. |
InstanceFlexibilityPolicy | Instance flexibility Policy allowing a mixture of VM shapes and provisioning models. |
InstanceGroupAutoscalingPolicyConfig | Configuration for the size bounds of an instance group, including its proportional size to other groups. |
InstanceGroupConfig | The config settings for Compute Engine resources in an instance group, such as a master or worker group. |
InstanceReference | A reference to a Compute Engine instance. |
InstanceSelection | Defines machines types and a rank to which the machines types belong. |
InstanceSelectionResult | Defines a mapping from machine types to the number of VMs that are created with each machine type. |
InstantiateWorkflowTemplateRequest | A request to instantiate a workflow template. |
Interval | Represents a time interval, encoded as a Timestamp start (inclusive) and a Timestamp end (exclusive). The start must be less than or equal to the end. When the start equals the end, the interval is empty (matches no time). When both start and end are unspecified, the interval matches any time. |
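The Interval semantics (inclusive start, exclusive end, unset bounds matching any time) can be sketched as a small membership check. This is illustrative only: it uses ISO-8601 timestamp strings, and the `startTime`/`endTime` field names are an assumption based on Dataproc v1 REST API naming.

```typescript
// Local sketch of Interval (assumption: string timestamps named
// startTime/endTime, per Dataproc v1 REST API conventions).
interface Interval { startTime?: string; endTime?: string; }

// start is inclusive, end is exclusive; an unset bound matches any time.
function contains(i: Interval, t: string): boolean {
  const ts = Date.parse(t);
  if (i.startTime !== undefined && ts < Date.parse(i.startTime)) return false;
  if (i.endTime !== undefined && ts >= Date.parse(i.endTime)) return false;
  return true;
}

const day: Interval = {
  startTime: "2024-01-01T00:00:00Z",
  endTime: "2024-01-02T00:00:00Z",
};
console.log(contains(day, "2024-01-01T12:00:00Z")); // true
console.log(contains(day, "2024-01-02T00:00:00Z")); // false: end is exclusive
// When start equals end, the interval is empty and matches no time:
const empty: Interval = {
  startTime: "2024-01-01T00:00:00Z",
  endTime: "2024-01-01T00:00:00Z",
};
console.log(contains(empty, "2024-01-01T00:00:00Z")); // false
```

The empty-interval case falls out of the exclusive end bound: a timestamp equal to both start and end passes the start check but fails the end check.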
Job | A Dataproc job resource. |
JobMetadata | Job Operation metadata. |
JobPlacement | Dataproc job placement config. |
JobReference | Encapsulates the full scoping used to reference a job. |
JobScheduling | Job scheduling options. |
JobStatus | Dataproc job status. |
JupyterConfig | Jupyter configuration for an interactive session. |
KerberosConfig | Specifies Kerberos related configuration. |
KubernetesClusterConfig | The configuration for running the Dataproc cluster on Kubernetes. |
KubernetesSoftwareConfig | The software configuration for this Dataproc cluster running on Kubernetes. |
LifecycleConfig | Specifies the cluster auto-delete schedule configuration. |
ListAutoscalingPoliciesResponse | A response to a request to list autoscaling policies in a project. |
ListBatchesResponse | A list of batch workloads. |
ListClustersResponse | The list of all clusters in a project. |
ListJobsResponse | A list of jobs in a project. |
ListOperationsResponse | The response message for Operations.ListOperations. |
ListSessionsResponse | A list of interactive sessions. |
ListSessionTemplatesResponse | A list of session templates. |
ListWorkflowTemplatesResponse | A response to a request to list workflow templates in a project. |
LoggingConfig | The runtime logging config of the job. |
ManagedCluster | Cluster that is managed by the workflow. |
ManagedGroupConfig | Specifies the resources used to actively manage an instance group. |
MetastoreConfig | Specifies a Metastore configuration. |
Metric | A Dataproc custom metric. |
NamespacedGkeDeploymentTarget | Deprecated. Used only for the deprecated beta. A full, namespace-isolated deployment target for an existing GKE cluster. |
NodeGroup | Dataproc Node Group. The Dataproc NodeGroup resource is not related to the Dataproc NodeGroupAffinity resource. |
NodeGroupAffinity | Node Group Affinity for clusters using sole-tenant node groups. The Dataproc NodeGroupAffinity resource is not related to the Dataproc NodeGroup resource. |
NodeGroupOperationMetadata | Metadata describing the node group operation. |
NodeInitializationAction | Specifies an executable to run on a fully configured node and a timeout period for executable completion. |
NodePool | A node pool, indicating a list of workers of the same type. |
Operation | This resource represents a long-running operation that is the result of a network API call. |
OrderedJob | A job executed by the workflow. |
ParameterValidation | Configuration for parameter validation. |
PeripheralsConfig | Auxiliary services configuration for a workload. |
PigJob | A Dataproc job for running Apache Pig (https://pig.apache.org/) queries on YARN. |
Policy | An Identity and Access Management (IAM) policy, which specifies access controls for Google Cloud resources. A Policy is a collection of bindings. A binding binds one or more members, or principals, to a single role. Principals can be user accounts, service accounts, Google groups, and domains (such as G Suite). A role is a named list of permissions; each role can be an IAM predefined role or a user-created custom role. For some types of Google Cloud resources, a binding can also specify a condition, which is a logical expression that allows access to a resource only if the expression evaluates to true. A condition can add constraints based on attributes of the request, the resource, or both. To learn which resources support conditions in their IAM policies, see the IAM documentation (https://cloud.google.com/iam/help/conditions/resource-policies). JSON example: { "bindings": [ { "role": "roles/resourcemanager.organizationAdmin", "members": [ "user:mike@example.com", "group:admins@example.com", "domain:google.com", "serviceAccount:my-project-id@appspot.gserviceaccount.com" ] }, { "role": "roles/resourcemanager.organizationViewer", "members": [ "user:eve@example.com" ], "condition": { "title": "expirable access", "description": "Does not grant access after Sep 2020", "expression": "request.time < timestamp('2020-10-01T00:00:00.000Z')", } } ], "etag": "BwWWja0YfJA=", "version": 3 } YAML example: bindings: - members: - user:mike@example.com - group:admins@example.com - domain:google.com - serviceAccount:my-project-id@appspot.gserviceaccount.com role: roles/resourcemanager.organizationAdmin - members: - user:eve@example.com role: roles/resourcemanager.organizationViewer condition: title: expirable access description: Does not grant access after Sep 2020 expression: request.time < timestamp('2020-10-01T00:00:00.000Z') etag: BwWWja0YfJA= version: 3 For a description of IAM and its features, see the IAM documentation (https://cloud.google.com/iam/docs/). |
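The JSON example in the Policy description translates directly into typed values. This sketch uses local mirrors of `Binding`, `Expr`, and `Policy` (assumption: the module's exports match the IAM v1 shapes shown in that example).

```typescript
// Local mirrors of the IAM v1 shapes (assumption; verify against the
// module's exported Binding/Policy interfaces).
interface Expr { title?: string; description?: string; expression?: string; }
interface Binding { role: string; members: string[]; condition?: Expr; }
interface Policy { bindings: Binding[]; etag?: string; version?: number; }

// The JSON example from the Policy docs as a typed value: two bindings,
// the second gated by a time-based condition.
const policy: Policy = {
  bindings: [
    {
      role: "roles/resourcemanager.organizationAdmin",
      members: [
        "user:mike@example.com",
        "group:admins@example.com",
        "domain:google.com",
        "serviceAccount:my-project-id@appspot.gserviceaccount.com",
      ],
    },
    {
      role: "roles/resourcemanager.organizationViewer",
      members: ["user:eve@example.com"],
      condition: {
        title: "expirable access",
        description: "Does not grant access after Sep 2020",
        expression: "request.time < timestamp('2020-10-01T00:00:00.000Z')",
      },
    },
  ],
  etag: "BwWWja0YfJA=",
  version: 3,
};

console.log(policy.bindings.length); // 2
```

A value like this is what `SetIamPolicyRequest` carries and what `GetIamPolicy` returns; the `etag` guards against concurrent policy updates.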
PrestoJob | A Dataproc job for running Presto (https://prestosql.io/) queries. IMPORTANT: The Dataproc Presto Optional Component (https://cloud.google.com/dataproc/docs/concepts/components/presto) must be enabled when the cluster is created to submit a Presto job to the cluster. |
ProjectsLocationsAutoscalingPoliciesListOptions | Additional options for Dataproc#projectsLocationsAutoscalingPoliciesList. |
ProjectsLocationsBatchesCreateOptions | Additional options for Dataproc#projectsLocationsBatchesCreate. |
ProjectsLocationsBatchesListOptions | Additional options for Dataproc#projectsLocationsBatchesList. |
ProjectsLocationsOperationsListOptions | Additional options for Dataproc#projectsLocationsOperationsList. |
ProjectsLocationsSessionsCreateOptions | Additional options for Dataproc#projectsLocationsSessionsCreate. |
ProjectsLocationsSessionsDeleteOptions | Additional options for Dataproc#projectsLocationsSessionsDelete. |
ProjectsLocationsSessionsListOptions | Additional options for Dataproc#projectsLocationsSessionsList. |
ProjectsLocationsSessionTemplatesListOptions | Additional options for Dataproc#projectsLocationsSessionTemplatesList. |
ProjectsLocationsWorkflowTemplatesDeleteOptions | Additional options for Dataproc#projectsLocationsWorkflowTemplatesDelete. |
ProjectsLocationsWorkflowTemplatesGetOptions | Additional options for Dataproc#projectsLocationsWorkflowTemplatesGet. |
ProjectsLocationsWorkflowTemplatesInstantiateInlineOptions | Additional options for Dataproc#projectsLocationsWorkflowTemplatesInstantiateInline. |
ProjectsLocationsWorkflowTemplatesListOptions | Additional options for Dataproc#projectsLocationsWorkflowTemplatesList. |
ProjectsRegionsAutoscalingPoliciesListOptions | Additional options for Dataproc#projectsRegionsAutoscalingPoliciesList. |
ProjectsRegionsClustersCreateOptions | Additional options for Dataproc#projectsRegionsClustersCreate. |
ProjectsRegionsClustersDeleteOptions | Additional options for Dataproc#projectsRegionsClustersDelete. |
ProjectsRegionsClustersListOptions | Additional options for Dataproc#projectsRegionsClustersList. |
ProjectsRegionsClustersNodeGroupsCreateOptions | Additional options for Dataproc#projectsRegionsClustersNodeGroupsCreate. |
ProjectsRegionsClustersPatchOptions | Additional options for Dataproc#projectsRegionsClustersPatch. |
ProjectsRegionsJobsListOptions | Additional options for Dataproc#projectsRegionsJobsList. |
ProjectsRegionsJobsPatchOptions | Additional options for Dataproc#projectsRegionsJobsPatch. |
ProjectsRegionsOperationsListOptions | Additional options for Dataproc#projectsRegionsOperationsList. |
ProjectsRegionsWorkflowTemplatesDeleteOptions | Additional options for Dataproc#projectsRegionsWorkflowTemplatesDelete. |
ProjectsRegionsWorkflowTemplatesGetOptions | Additional options for Dataproc#projectsRegionsWorkflowTemplatesGet. |
ProjectsRegionsWorkflowTemplatesInstantiateInlineOptions | Additional options for Dataproc#projectsRegionsWorkflowTemplatesInstantiateInline. |
ProjectsRegionsWorkflowTemplatesListOptions | Additional options for Dataproc#projectsRegionsWorkflowTemplatesList. |
PyPiRepositoryConfig | Configuration for the PyPI repository. |
PySparkBatch | A configuration for running an Apache PySpark (https://spark.apache.org/docs/latest/api/python/getting_started/quickstart.html) batch workload. |
PySparkJob | A Dataproc job for running Apache PySpark (https://spark.apache.org/docs/0.9.0/python-programming-guide.html) applications on YARN. |
QueryList | A list of queries to run on a cluster. |
RegexValidation | Validation based on regular expressions. |
RepairClusterRequest | A request to repair a cluster. |
RepairNodeGroupRequest | A request to repair a node group. |
RepositoryConfig | Configuration for dependency repositories. |
ReservationAffinity | Reservation Affinity for consuming Zonal reservation. |
ResizeNodeGroupRequest | A request to resize a node group. |
RuntimeConfig | Runtime configuration for a workload. |
RuntimeInfo | Runtime information about workload execution. |
SecurityConfig | Security related configuration, including encryption, Kerberos, etc. |
Session | A representation of a session. |
SessionOperationMetadata | Metadata describing the Session operation. |
SessionStateHistory | Historical state information. |
SessionTemplate | A representation of a session template. |
SetIamPolicyRequest | Request message for SetIamPolicy method. |
ShieldedInstanceConfig | Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm). |
SoftwareConfig | Specifies the selection and config of software inside the cluster. |
SparkBatch | A configuration for running an Apache Spark (https://spark.apache.org/) batch workload. |
SparkHistoryServerConfig | Spark History Server configuration for the workload. |
SparkJob | A Dataproc job for running Apache Spark (https://spark.apache.org/) applications on YARN. |
SparkRBatch | A configuration for running an Apache SparkR (https://spark.apache.org/docs/latest/sparkr.html) batch workload. |
SparkRJob | A Dataproc job for running Apache SparkR (https://spark.apache.org/docs/latest/sparkr.html) applications on YARN. |
SparkSqlBatch | A configuration for running Apache Spark SQL (https://spark.apache.org/sql/) queries as a batch workload. |
SparkSqlJob | A Dataproc job for running Apache Spark SQL (https://spark.apache.org/sql/) queries. |
SparkStandaloneAutoscalingConfig | Basic autoscaling configurations for Spark Standalone. |
StartClusterRequest | A request to start a cluster. |
StartupConfig | Configuration to handle the startup of instances during cluster create and update process. |
StateHistory | Historical state information. |
Status | The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC (https://github.com/grpc). Each Status message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the API Design Guide (https://cloud.google.com/apis/design/errors). |
StopClusterRequest | A request to stop a cluster. |
SubmitJobRequest | A request to submit a job. |
TemplateParameter | A configurable parameter that replaces one or more fields in the template. Parameterizable fields: - Labels - File uris - Job properties - Job arguments |
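A template parameter names the fields it substitutes via field paths. The sketch below uses a local mirror of `TemplateParameter` (assumption: a `fields` list of path strings plus optional description and validation, per the Dataproc v1 API); the parameter name and field path are hypothetical examples.

```typescript
// Local mirror of TemplateParameter (assumption; `fields` holds paths to
// the template fields the parameter replaces).
interface TemplateParameter {
  name: string;
  fields: string[];
  description?: string;
  validation?: unknown; // RegexValidation or ValueValidation in the real API
}

// A parameter that substitutes the main class of one ordered job.
const param: TemplateParameter = {
  name: "MAIN_CLASS", // hypothetical parameter name
  fields: ["jobs['job-1'].sparkJob.mainClass"], // hypothetical field path
  description: "Main class of the Spark job",
};

console.log(param.name); // MAIN_CLASS
```

When the template is instantiated, a value supplied for `MAIN_CLASS` would replace every field listed in `fields`; `RegexValidation` or `ValueValidation` (listed above) can constrain the allowed values.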
TerminateSessionRequest | A request to terminate an interactive session. |
TestIamPermissionsRequest | Request message for TestIamPermissions method. |
TestIamPermissionsResponse | Response message for TestIamPermissions method. |
TrinoJob | A Dataproc job for running Trino (https://trino.io/) queries. IMPORTANT: The Dataproc Trino Optional Component (https://cloud.google.com/dataproc/docs/concepts/components/trino) must be enabled when the cluster is created to submit a Trino job to the cluster. |
UsageMetrics | Usage metrics represent approximate total resources consumed by a workload. |
UsageSnapshot | The usage snapshot represents the resources consumed by a workload at a specified time. |
ValueValidation | Validation based on a list of allowed values. |
VirtualClusterConfig | The Dataproc cluster config for a cluster that does not directly control the underlying compute resources, such as a Dataproc-on-GKE cluster (https://cloud.google.com/dataproc/docs/guides/dpgke/dataproc-gke-overview). |
WorkflowGraph | The workflow graph. |
WorkflowMetadata | A Dataproc workflow template resource. |
WorkflowNode | The workflow node. |
WorkflowTemplate | A Dataproc workflow template resource. |
WorkflowTemplatePlacement | Specifies workflow execution target. Either managed_cluster or cluster_selector is required. |
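The "either managed_cluster or cluster_selector" rule can be sketched as a discriminated-union-style type. This is an illustration of the constraint, not the module's actual declaration (a generated interface would more likely declare both fields as optional and enforce the rule server-side); the label value is hypothetical.

```typescript
// Simplified local shapes (assumption; the real ManagedCluster also
// carries a full ClusterConfig).
interface ManagedCluster { clusterName: string; }
interface ClusterSelector { clusterLabels: Record<string, string>; }

// Encode "exactly one of the two fields" at the type level.
type WorkflowTemplatePlacement =
  | { managedCluster: ManagedCluster; clusterSelector?: never }
  | { clusterSelector: ClusterSelector; managedCluster?: never };

// Target an existing cluster by label instead of creating a managed one.
const placement: WorkflowTemplatePlacement = {
  clusterSelector: { clusterLabels: { env: "staging" } }, // hypothetical label
};

console.log("clusterSelector" in placement); // true
```

With this encoding, supplying both fields (or neither) is a compile-time error, mirroring the documented requirement.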
YarnApplication | A YARN application created by a job. Application information is a subset of org.apache.hadoop.yarn.proto.YarnProtos.ApplicationReportProto. Beta Feature: This report is available for testing purposes only. It may be changed before final release. |