
HadoopJob

import type { HadoopJob } from "https://googleapis.deno.dev/v1/dataproc:v1.ts";

interface HadoopJob {
  archiveUris?: string[];
  args?: string[];
  fileUris?: string[];
  jarFileUris?: string[];
  loggingConfig?: LoggingConfig;
  mainClass?: string;
  mainJarFileUri?: string;
  properties?: {
    [key: string]: string;
  };
}
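
To make the shape concrete, here is a minimal sketch of a HadoopJob value for a jar-driven job. Every bucket name, jar path, and argument below is a hypothetical placeholder, not a value taken from this API's documentation.

// A sketch of a jar-driven job; every URI here is a placeholder.
const job: HadoopJob = {
  mainJarFileUri: "gs://example-bucket/jobs/wordcount.jar",
  args: ["gs://example-bucket/input/", "gs://example-bucket/output/"],
  jarFileUris: ["gs://example-bucket/libs/helpers.jar"],
  properties: {
    "mapreduce.job.reduces": "4", // standard Hadoop property name
  },
};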

Properties

archiveUris?: string[]

Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.

args?: string[]

Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
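
As a hedged illustration of this warning, a flag like -Dmapreduce.map.memory.mb can be expressed through the properties map rather than passed as a driver argument. The property key is a standard Hadoop name; the jar URI is a placeholder.

// Avoid: args: ["-Dmapreduce.map.memory.mb=2048", ...]
const job: HadoopJob = {
  mainJarFileUri: "gs://example-bucket/app.jar", // placeholder URI
  args: ["input", "output"],                     // application arguments only
  properties: {
    "mapreduce.map.memory.mb": "2048",           // flag moved into job properties
  },
};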

fileUris?: string[]

Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.

jarFileUris?: string[]

Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.

loggingConfig?: LoggingConfig

Optional. The runtime log config for job execution.

mainClass?: string

The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
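
For instance, a sketch that names the driver class via mainClass and supplies the jar through jarFileUris instead of mainJarFileUri; the class name and bucket paths are hypothetical.

const job: HadoopJob = {
  mainClass: "com.example.WordCount",                 // hypothetical class name
  jarFileUris: ["gs://example-bucket/wordcount.jar"], // jar that supplies the class
  args: ["gs://example-bucket/in/", "gs://example-bucket/out/"],
};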

mainJarFileUri?: string

The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar', 'hdfs:/tmp/test-samples/custom-wordcount.jar', 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'

properties?: {
  [key: string]: string;
}

Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
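
A brief sketch of the properties map; the keys shown are standard Hadoop configuration names of the kind found in /etc/hadoop/conf/*-site files, and the jar URI is a placeholder.

const job: HadoopJob = {
  mainJarFileUri: "gs://example-bucket/app.jar", // placeholder URI
  properties: {
    "mapreduce.job.queuename": "default", // mapred-site style key
    "dfs.replication": "2",               // hdfs-site style key
  },
};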