Uses of Interface org.apache.hadoop.mapreduce.JobContext

Packages that use JobContext:

  org.apache.hadoop.fs.s3a.commit.magic
      This is the "Magic" committer and support.
  org.apache.hadoop.mapred
  org.apache.hadoop.mapred.lib
  org.apache.hadoop.mapreduce
  org.apache.hadoop.mapreduce.lib.db
  org.apache.hadoop.mapreduce.lib.input
  org.apache.hadoop.mapreduce.lib.join
  org.apache.hadoop.mapreduce.lib.map
  org.apache.hadoop.mapreduce.lib.output
  org.apache.hadoop.mapreduce.lib.output.committer.manifest
      Intermediate manifest committer.
  org.apache.hadoop.mapreduce.lib.partition
  org.apache.hadoop.mapreduce.task
Uses of JobContext in org.apache.hadoop.fs.s3a.commit.magic
Methods in org.apache.hadoop.fs.s3a.commit.magic with parameters of type JobContext
Uses of JobContext in org.apache.hadoop.mapred
Subinterfaces of JobContext in org.apache.hadoop.mapred:

  interface JobContext
      The old-API (org.apache.hadoop.mapred) job context, which extends the
      new-API interface.

Methods in org.apache.hadoop.mapred with parameters of type JobContext:

  final void OutputCommitter.abortJob(JobContext context, JobStatus.State runState)
      This method implements the new interface by calling the old method.
  final void OutputCommitter.cleanupJob(JobContext context)
      Deprecated.
  final void OutputCommitter.commitJob(JobContext context)
      This method implements the new interface by calling the old method.
  boolean OutputCommitter.isCommitJobRepeatable(JobContext jobContext)
  final boolean OutputCommitter.isRecoverySupported(JobContext context)
      This method implements the new interface by calling the old method.
  final void OutputCommitter.setupJob(JobContext jobContext)
      This method implements the new interface by calling the old method.
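Several of the methods above are described as "implements the new interface by calling the old method." The sketch below illustrates that bridge pattern in plain Java: the final new-API entry point delegates to an old-API hook that legacy subclasses override. All names here (Ctx, BridgeCommitter, OldStyleCommitter) are illustrative, not Hadoop's, and the method returns a string for demonstration where the real methods return void.

```java
// Stand-in for a JobContext: carries only a job id for this sketch.
class Ctx {
    final String jobId;
    Ctx(String jobId) { this.jobId = jobId; }
}

abstract class BridgeCommitter {
    // New-API entry point: final, so subclasses cannot bypass the bridge.
    final String commitJob(Ctx context) {
        return commitJobOld(context.jobId);   // delegate to the old-style hook
    }
    // Old-API hook that legacy subclasses override.
    protected abstract String commitJobOld(String jobId);
}

class OldStyleCommitter extends BridgeCommitter {
    @Override protected String commitJobOld(String jobId) {
        return "committed:" + jobId;
    }
}
```

The final modifier is what makes the adapter safe: a committer written against the old API continues to work when the framework calls the new-API method.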
Uses of JobContext in org.apache.hadoop.mapred.lib
Methods in org.apache.hadoop.mapred.lib with parameters of type JobContext:

  protected boolean CombineFileInputFormat.isSplitable(JobContext context, Path file)
      Subclasses should avoid overriding this method and should instead only
      override CombineFileInputFormat.isSplitable(FileSystem, Path).
Uses of JobContext in org.apache.hadoop.mapreduce
Subinterfaces of JobContext in org.apache.hadoop.mapreduce:

  interface MapContext<KEYIN, VALUEIN, KEYOUT, VALUEOUT>
      The context that is given to the Mapper.
  interface ReduceContext<KEYIN, VALUEIN, KEYOUT, VALUEOUT>
      The context passed to the Reducer.
  interface TaskAttemptContext
      The context for task attempts.
  interface TaskInputOutputContext<KEYIN, VALUEIN, KEYOUT, VALUEOUT>
      A context object that allows input and output from the task.

Classes in org.apache.hadoop.mapreduce that implement JobContext:

  class Job
      The job submitter's view of the Job.

Methods in org.apache.hadoop.mapreduce with parameters of type JobContext:

  void OutputCommitter.abortJob(JobContext jobContext, JobStatus.State state)
      For aborting an unsuccessful job's output.
  abstract void OutputFormat.checkOutputSpecs(JobContext context)
      Check for validity of the output-specification for the job.
  void OutputCommitter.cleanupJob(JobContext jobContext)
      Deprecated. Use OutputCommitter.commitJob(org.apache.hadoop.mapreduce.JobContext)
      or OutputCommitter.abortJob(org.apache.hadoop.mapreduce.JobContext,
      org.apache.hadoop.mapreduce.JobStatus.State) instead.
  void OutputCommitter.commitJob(JobContext jobContext)
      For committing job's output after successful job completion.
  abstract List<InputSplit> InputFormat.getSplits(JobContext context)
      Logically split the set of input files for the job.
  boolean OutputCommitter.isCommitJobRepeatable(JobContext jobContext)
      Returns true if an in-progress job commit can be retried.
  boolean OutputCommitter.isRecoverySupported(JobContext jobContext)
      Is task output recovery supported for restarting jobs?
  abstract void OutputCommitter.setupJob(JobContext jobContext)
      For the framework to setup the job output during initialization.
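The OutputCommitter methods listed above define the job-level commit protocol: setupJob runs first, then exactly one of commitJob or abortJob, and isCommitJobRepeatable declares whether a failed commit may be retried. The following is a minimal pure-Java sketch of that protocol (not Hadoop code; ToyCommitter and runJob are invented for illustration):

```java
import java.util.ArrayList;
import java.util.List;

// Toy committer whose commitJob fails a configurable number of times.
class ToyCommitter {
    int failuresLeft;
    final List<String> log = new ArrayList<>();
    ToyCommitter(int failuresLeft) { this.failuresLeft = failuresLeft; }

    void setupJob()  { log.add("setup"); }          // cf. OutputCommitter.setupJob
    void commitJob() {                               // cf. OutputCommitter.commitJob
        if (failuresLeft-- > 0) { log.add("commit-failed"); throw new RuntimeException("transient"); }
        log.add("committed");
    }
    void abortJob()  { log.add("aborted"); }         // cf. OutputCommitter.abortJob
    boolean isCommitJobRepeatable() { return true; } // cf. isCommitJobRepeatable

    // Driver: set up, then commit with retries only while the committer
    // declares commit repeatable; abort if commit ultimately fails.
    static List<String> runJob(ToyCommitter c, int maxAttempts) {
        c.setupJob();
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try { c.commitJob(); return c.log; }
            catch (RuntimeException e) {
                if (!c.isCommitJobRepeatable() || attempt == maxAttempts) {
                    c.abortJob();
                    return c.log;
                }
            }
        }
        return c.log;
    }
}
```

A committer whose commit is not idempotent should return false from isCommitJobRepeatable; the driver then aborts on the first failure instead of retrying.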
Uses of JobContext in org.apache.hadoop.mapreduce.lib.db
Methods in org.apache.hadoop.mapreduce.lib.db with parameters of type JobContext:

  void DBOutputFormat.checkOutputSpecs(JobContext context)
  List<InputSplit> DataDrivenDBInputFormat.getSplits(JobContext job)
      Logically split the set of input files for the job.
  List<InputSplit> DBInputFormat.getSplits(JobContext job)
      Logically split the set of input files for the job.
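The DB getSplits implementations divide a table into per-split query ranges rather than file blocks. The sketch below illustrates the underlying idea for a numeric split column: partition [min, max] into roughly equal half-open intervals, one per split. This is a simplified, hypothetical mimic (RangeSplitter is not a Hadoop class); the real DataDrivenDBInputFormat also handles text and date columns.

```java
import java.util.ArrayList;
import java.util.List;

class RangeSplitter {
    // Returns {lo, hi} inclusive bound pairs covering [min, max] with no gaps,
    // sized so that any two splits differ in length by at most one row.
    static List<long[]> split(long min, long max, int numSplits) {
        List<long[]> splits = new ArrayList<>();
        long span = max - min + 1;
        long lo = min;
        for (int i = 0; i < numSplits; i++) {
            // distribute the remainder across the first (span % numSplits) splits
            long size = span / numSplits + (i < span % numSplits ? 1 : 0);
            splits.add(new long[] { lo, lo + size - 1 });
            lo += size;
        }
        return splits;
    }
}
```

Each bound pair would become the WHERE-clause range of one mapper's query, so the mappers read disjoint row sets in parallel.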
Uses of JobContext in org.apache.hadoop.mapreduce.lib.input
Methods in org.apache.hadoop.mapreduce.lib.input with parameters of type JobContext:

  static boolean FileInputFormat.getInputDirRecursive(JobContext job)
  static PathFilter FileInputFormat.getInputPathFilter(JobContext context)
      Get a PathFilter instance of the filter set for the input paths.
  static Path[] FileInputFormat.getInputPaths(JobContext context)
      Get the list of input Paths for the map-reduce job.
  static long FileInputFormat.getMaxSplitSize(JobContext context)
      Get the maximum split size.
  static long FileInputFormat.getMinSplitSize(JobContext job)
      Get the minimum split size.
  static int NLineInputFormat.getNumLinesPerSplit(JobContext job)
      Get the number of lines per split.
  List<InputSplit> CombineFileInputFormat.getSplits(JobContext job)
  List<InputSplit> FileInputFormat.getSplits(JobContext job)
      Generate the list of files and make them into FileSplits.
  List<InputSplit> NLineInputFormat.getSplits(JobContext job)
      Logically splits the set of input files for the job; splits N lines of
      the input as one split.
  protected boolean CombineFileInputFormat.isSplitable(JobContext context, Path file)
  protected boolean FileInputFormat.isSplitable(JobContext context, Path filename)
      Is the given filename splittable?
  protected boolean FixedLengthInputFormat.isSplitable(JobContext context, Path file)
  protected boolean KeyValueTextInputFormat.isSplitable(JobContext context, Path file)
  protected boolean TextInputFormat.isSplitable(JobContext context, Path file)
  protected List<FileStatus> FileInputFormat.listStatus(JobContext job)
      List input directories.
  protected List<FileStatus> SequenceFileInputFormat.listStatus(JobContext job)
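The getMinSplitSize/getMaxSplitSize values above feed FileInputFormat's split computation, which clamps the file's block size between the configured minimum and maximum. The sketch below restates that rule in plain Java (the clamping formula matches FileInputFormat.computeSplitSize; the SplitSize class and numSplits helper are illustrative, and the real implementation additionally merges a small trailing split into its neighbor):

```java
class SplitSize {
    // Clamp the block size between the configured min and max split sizes.
    static long computeSplitSize(long blockSize, long minSize, long maxSize) {
        return Math.max(minSize, Math.min(maxSize, blockSize));
    }

    // Rough split count for a file of length len: ceiling division,
    // ignoring the tail-merging slop the real implementation applies.
    static long numSplits(long len, long splitSize) {
        return (len + splitSize - 1) / splitSize;
    }
}
```

So raising the minimum split size above the block size yields fewer, larger splits (fewer map tasks), while lowering the maximum below the block size yields more, smaller ones.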
Uses of JobContext in org.apache.hadoop.mapreduce.lib.join
Methods in org.apache.hadoop.mapreduce.lib.join with parameters of type JobContext:

  List<InputSplit> CompositeInputFormat.getSplits(JobContext job)
      Build a CompositeInputSplit from the child InputFormats by assigning the
      ith split from each child to the ith composite split.
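The "ith split from each child to the ith composite split" rule can be sketched in plain Java as follows. CompositeSplits is an invented illustration class, with splits modeled as strings; the point is the pairing logic, which requires every child input to yield the same number of identically partitioned splits.

```java
import java.util.ArrayList;
import java.util.List;

class CompositeSplits {
    // childSplits.get(c) is the ordered split list of child input format c.
    static List<List<String>> compose(List<List<String>> childSplits) {
        int n = childSplits.get(0).size();
        for (List<String> child : childSplits) {
            if (child.size() != n)
                throw new IllegalArgumentException("children must yield the same number of splits");
        }
        List<List<String>> composite = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            List<String> row = new ArrayList<>();
            for (List<String> child : childSplits) row.add(child.get(i)); // i-th from each child
            composite.add(row);
        }
        return composite;
    }
}
```

This alignment requirement is why map-side joins with CompositeInputFormat demand inputs that were sorted and partitioned the same way.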
Uses of JobContext in org.apache.hadoop.mapreduce.lib.map
Methods in org.apache.hadoop.mapreduce.lib.map with parameters of type JobContext:

  static <K1,V1,K2,V2> Class<Mapper<K1,V1,K2,V2>> MultithreadedMapper.getMapperClass(JobContext job)
      Get the application's mapper class.
  static int MultithreadedMapper.getNumberOfThreads(JobContext job)
      The number of threads in the thread pool that will run the map function.
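MultithreadedMapper runs one mapper's map function from a pool of getNumberOfThreads worker threads, which only pays off when each map call is CPU- or IO-heavy and the map logic is thread-safe. A self-contained sketch of that execution model (ThreadedMapRunner is illustrative, not Hadoop code; the "map function" here just squares each record):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

class ThreadedMapRunner {
    static List<Integer> runMap(List<Integer> records, int numThreads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(numThreads);
        // Output must be thread-safe: multiple workers emit concurrently.
        List<Integer> out = Collections.synchronizedList(new ArrayList<>());
        for (int r : records) {
            pool.submit(() -> out.add(r * r));   // the "map function" runs on a pool thread
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        // Concurrent execution makes the emit order nondeterministic; sort for display.
        List<Integer> sorted = new ArrayList<>(out);
        Collections.sort(sorted);
        return sorted;
    }
}
```

Note the sort at the end: unlike a single-threaded mapper, output order is not the input order, which is one reason MultithreadedMapper is unsuitable for order-sensitive map logic.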
Uses of JobContext in org.apache.hadoop.mapreduce.lib.output
Methods in org.apache.hadoop.mapreduce.lib.output with parameters of type JobContext:

  void BindingPathOutputCommitter.abortJob(JobContext jobContext, JobStatus.State state)
  void FileOutputCommitter.abortJob(JobContext context, JobStatus.State state)
      Delete the temporary directory, including all of the work directories.
  void FileOutputFormat.checkOutputSpecs(JobContext job)
  void FilterOutputFormat.checkOutputSpecs(JobContext context)
  void LazyOutputFormat.checkOutputSpecs(JobContext context)
  void NullOutputFormat.checkOutputSpecs(JobContext context)
  void SequenceFileAsBinaryOutputFormat.checkOutputSpecs(JobContext job)
  void BindingPathOutputCommitter.cleanupJob(JobContext jobContext)
  void FileOutputCommitter.cleanupJob(JobContext context)
      Deprecated.
  void BindingPathOutputCommitter.commitJob(JobContext jobContext)
  void FileOutputCommitter.commitJob(JobContext context)
      The job has completed, so do the work in commitJobInternal().
  protected void FileOutputCommitter.commitJobInternal(JobContext context)
      The job has completed, so do the commit-job work, including: move all
      committed tasks to the final output dir (algorithm 1 only).
  static boolean FileOutputFormat.getCompressOutput(JobContext job)
      Is the job output compressed?
  static boolean MultipleOutputs.getCountersEnabled(JobContext job)
      Returns whether the counters for the named outputs are enabled.
  Path FileOutputCommitter.getJobAttemptPath(JobContext context)
      Compute the path where the output of a given job attempt will be placed.
  static Path FileOutputCommitter.getJobAttemptPath(JobContext context, Path out)
      Compute the path where the output of a given job attempt will be placed.
  static SequenceFile.CompressionType SequenceFileOutputFormat.getOutputCompressionType(JobContext job)
      Get the SequenceFile.CompressionType for the output SequenceFile.
  static Class<? extends CompressionCodec> FileOutputFormat.getOutputCompressorClass(JobContext job, Class<? extends CompressionCodec> defaultValue)
      Get the CompressionCodec for compressing the job outputs.
  protected static String FileOutputFormat.getOutputName(JobContext job)
      Get the base output name for the output file.
  static Path FileOutputFormat.getOutputPath(JobContext job)
      Get the Path to the output directory for the map-reduce job.
  static Class<? extends WritableComparable> SequenceFileAsBinaryOutputFormat.getSequenceFileOutputKeyClass(JobContext job)
      Get the key class for the SequenceFile.
  static Class<? extends Writable> SequenceFileAsBinaryOutputFormat.getSequenceFileOutputValueClass(JobContext job)
      Get the value class for the SequenceFile.
  boolean BindingPathOutputCommitter.isCommitJobRepeatable(JobContext jobContext)
  boolean FileOutputCommitter.isCommitJobRepeatable(JobContext context)
  boolean BindingPathOutputCommitter.isRecoverySupported(JobContext jobContext)
  protected static void FileOutputFormat.setOutputName(JobContext job, String name)
      Set the base output name for the output file to be created.
  void BindingPathOutputCommitter.setupJob(JobContext jobContext)
  void FileOutputCommitter.setupJob(JobContext context)
      Create the temporary directory that is the root of all of the task work
      directories.

Constructors in org.apache.hadoop.mapreduce.lib.output with parameters of type JobContext:

  FileOutputCommitter(Path outputPath, JobContext context)
      Create a file output committer.
  PartialFileOutputCommitter(Path outputPath, JobContext context)
  protected PathOutputCommitter(Path outputPath, JobContext context)
      Constructor for a job attempt.
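The FileOutputCommitter entries above describe a temporary-directory protocol: setupJob creates a temporary root for task work, commitJob promotes results into the final output directory, and abortJob deletes the temporary directory so no partial output is ever visible. The sketch below mimics that pattern with plain java.nio.file (TempDirCommitter is an invented class, not Hadoop's; the real committer works on Hadoop Path/FileSystem and has per-task subtleties this flat version omits):

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;

class TempDirCommitter {
    final Path outputDir, attemptDir;
    TempDirCommitter(Path outputDir) {
        this.outputDir = outputDir;
        this.attemptDir = outputDir.resolve("_temporary");  // temporary root for task work
    }
    // cf. FileOutputCommitter.setupJob: create the temporary work root.
    void setupJob() throws IOException { Files.createDirectories(attemptDir); }

    // cf. FileOutputCommitter.commitJob: promote files into the final
    // output dir, then remove the (now empty) temporary directory.
    void commitJob() throws IOException {
        try (DirectoryStream<Path> files = Files.newDirectoryStream(attemptDir)) {
            for (Path f : files) Files.move(f, outputDir.resolve(f.getFileName()));
        }
        Files.delete(attemptDir);
    }

    // cf. FileOutputCommitter.abortJob: delete the temporary directory,
    // including all of the work files, leaving no partial output.
    void abortJob() throws IOException {
        try (DirectoryStream<Path> files = Files.newDirectoryStream(attemptDir)) {
            for (Path f : files) Files.delete(f);
        }
        Files.delete(attemptDir);
    }
}
```

Because readers only ever see files after the move, a crashed job leaves behind only a `_temporary` directory that the next attempt (or abortJob) can safely delete.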
Uses of JobContext in org.apache.hadoop.mapreduce.lib.output.committer.manifest
Methods in org.apache.hadoop.mapreduce.lib.output.committer.manifest with parameters of type JobContext:

  void ManifestCommitter.abortJob(JobContext jobContext, JobStatus.State state)
      Abort the job.
  void ManifestCommitter.cleanupJob(JobContext jobContext)
      Execute the CleanupJobStage to remove the job attempt dir.
  void ManifestCommitter.commitJob(JobContext jobContext)
      This is the big job commit stage.
  Path ManifestCommitter.getJobAttemptPath(JobContext context)
      Compute the path where the output of a task attempt is stored until that
      task is committed.
  boolean ManifestCommitter.isCommitJobRepeatable(JobContext jobContext)
      Failure during job commit is not recoverable from.
  boolean ManifestCommitter.isRecoverySupported(JobContext jobContext)
      Declare that task recovery is not supported.
  void ManifestCommitter.setupJob(JobContext jobContext)
      Set up a job through a SetupJobStage.
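The core idea behind a manifest committer can be sketched without Hadoop: each task attempt records the files it created in a small manifest, and the job commit stage unions only the manifests of committed tasks, so files written by failed or speculative attempts are never promoted. ManifestJob below is a hypothetical in-memory illustration of that bookkeeping, not the ManifestCommitter implementation, which runs staged operations against a real filesystem.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

class ManifestJob {
    final Map<String, List<String>> manifests = new HashMap<>(); // taskId -> files written
    final Set<String> committedTasks = new HashSet<>();

    // A task attempt records each file it created in its manifest.
    void taskWrote(String taskId, String file) {
        manifests.computeIfAbsent(taskId, k -> new ArrayList<>()).add(file);
    }
    // Task commit: mark this attempt's manifest as authoritative.
    void commitTask(String taskId) { committedTasks.add(taskId); }

    // Job commit: gather files listed by committed tasks only, sorted for determinism.
    List<String> commitJob() {
        List<String> out = new ArrayList<>();
        for (String task : committedTasks) out.addAll(manifests.getOrDefault(task, List.of()));
        Collections.sort(out);
        return out;
    }
}
```

Driving commit from manifests rather than directory renames is what makes this approach suitable for stores where directory listing or rename is slow or non-atomic.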
Uses of JobContext in org.apache.hadoop.mapreduce.lib.partition
Methods in org.apache.hadoop.mapreduce.lib.partition with parameters of type JobContext:

  static String KeyFieldBasedComparator.getKeyFieldComparatorOption(JobContext job)
      Get the KeyFieldBasedComparator options.
  String KeyFieldBasedPartitioner.getKeyFieldPartitionerOption(JobContext job)
      Get the KeyFieldBasedPartitioner options.
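The behaviour these options configure is field-based partitioning: hash only the selected key fields, mask the sign bit, and take the result modulo the number of reducers, so records sharing the chosen fields land on the same reducer. FieldPartitioner below is an invented plain-Java sketch of that rule (tab-separated fields selected by index), not the KeyFieldBasedPartitioner implementation, which parses "-k pos1,pos2"-style option strings.

```java
class FieldPartitioner {
    // Partition on the hash of one tab-separated key field. Masking with
    // Integer.MAX_VALUE clears the sign bit so the modulo is non-negative.
    static int partitionFor(String key, int field, int numPartitions) {
        String[] fields = key.split("\t");
        int hash = fields[field].hashCode();
        return (hash & Integer.MAX_VALUE) % numPartitions;
    }
}
```

For example, with field 1 selected, the keys "a\tx\t1" and "b\tx\t2" hash the same field value "x" and therefore go to the same partition.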
Uses of JobContext in org.apache.hadoop.mapreduce.task
Classes in org.apache.hadoop.mapreduce.task that implement JobContext:

  class org.apache.hadoop.mapreduce.task.JobContextImpl
      A read-only view of the job that is provided to the tasks while they are
      running.