Uses of Interface org.apache.hadoop.mapreduce.JobContext

Packages that use JobContext:

Package | Description |
---|---|
org.apache.hadoop.fs.s3a.commit.magic | This is the "Magic" committer and support. |
org.apache.hadoop.mapred | |
org.apache.hadoop.mapreduce | |
org.apache.hadoop.mapreduce.lib.db | |
org.apache.hadoop.mapreduce.lib.input | |
org.apache.hadoop.mapreduce.lib.join | |
org.apache.hadoop.mapreduce.lib.map | |
org.apache.hadoop.mapreduce.lib.output | |
org.apache.hadoop.mapreduce.lib.output.committer.manifest | Intermediate manifest committer. |
org.apache.hadoop.mapreduce.lib.partition | |
org.apache.hadoop.mapreduce.task | |
Methods in org.apache.hadoop.fs.s3a.commit.magic with parameters of type JobContext:

Modifier and Type | Method and Description |
---|---|
void | MagicS3GuardCommitter.setupJob(JobContext context) |
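The framework only calls MagicS3GuardCommitter.setupJob() once the magic committer has been selected for the job. Below is a minimal sketch of that wiring, assuming the hadoop-aws module is on the classpath; the key names follow the S3A committer documentation.

```java
import org.apache.hadoop.conf.Configuration;

public class MagicCommitterSetup {
  public static Configuration configure(Configuration conf) {
    // Route committer creation for s3a:// output paths through the S3A factory.
    conf.set("mapreduce.outputcommitter.factory.scheme.s3a",
        "org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory");
    // Choose the "magic" committer among the available S3A committers.
    conf.set("fs.s3a.committer.name", "magic");
    return conf;
  }
}
```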
Subinterfaces of JobContext in org.apache.hadoop.mapred:

Modifier and Type | Interface and Description |
---|---|
interface | JobContext |
Methods in org.apache.hadoop.mapred with parameters of type JobContext:

Modifier and Type | Method and Description |
---|---|
void | OutputCommitter.abortJob(JobContext context, org.apache.hadoop.mapreduce.JobStatus.State runState) - This method implements the new interface by calling the old method. |
void | OutputCommitter.cleanupJob(JobContext context) |
void | OutputCommitter.commitJob(JobContext context) - This method implements the new interface by calling the old method. |
boolean | OutputCommitter.isCommitJobRepeatable(JobContext jobContext) |
boolean | OutputCommitter.isRecoverySupported(JobContext context) - This method implements the new interface by calling the old method. |
void | OutputCommitter.setupJob(JobContext jobContext) - This method implements the new interface by calling the old method. |
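As the descriptions above note, the JobContext-typed overloads in the old org.apache.hadoop.mapred API are bridge methods: the base class forwards them to the old-style single-argument methods. An old-API committer therefore only needs to implement the abstract mapred methods, as in this sketch (LoggingCommitter is a hypothetical name for illustration):

```java
import java.io.IOException;
import org.apache.hadoop.mapred.JobContext;
import org.apache.hadoop.mapred.OutputCommitter;
import org.apache.hadoop.mapred.TaskAttemptContext;

public class LoggingCommitter extends OutputCommitter {
  @Override
  public void setupJob(JobContext context) throws IOException {
    // The new-interface setupJob(org.apache.hadoop.mapreduce.JobContext)
    // ends up here via the bridge method in the base class.
    System.out.println("Setting up job " + context.getJobConf().getJobName());
  }

  @Override
  public void setupTask(TaskAttemptContext context) throws IOException {}

  @Override
  public boolean needsTaskCommit(TaskAttemptContext context) throws IOException {
    return false; // no per-task output to promote in this sketch
  }

  @Override
  public void commitTask(TaskAttemptContext context) throws IOException {}

  @Override
  public void abortTask(TaskAttemptContext context) throws IOException {}
}
```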
Subinterfaces of JobContext in org.apache.hadoop.mapreduce:

Modifier and Type | Interface and Description |
---|---|
interface | MapContext<KEYIN,VALUEIN,KEYOUT,VALUEOUT> - The context that is given to the Mapper. |
interface | ReduceContext<KEYIN,VALUEIN,KEYOUT,VALUEOUT> - The context passed to the Reducer. |
interface | TaskAttemptContext - The context for task attempts. |
interface | TaskInputOutputContext<KEYIN,VALUEIN,KEYOUT,VALUEOUT> - A context object that allows input and output from the task. |
Classes in org.apache.hadoop.mapreduce that implement JobContext:

Modifier and Type | Class and Description |
---|---|
class | Job - The job submitter's view of the Job. |
Methods in org.apache.hadoop.mapreduce with parameters of type JobContext:

Modifier and Type | Method and Description |
---|---|
void | OutputCommitter.abortJob(JobContext jobContext, org.apache.hadoop.mapreduce.JobStatus.State state) - For aborting an unsuccessful job's output. |
abstract void | OutputFormat.checkOutputSpecs(JobContext context) - Check for validity of the output-specification for the job. |
void | OutputCommitter.cleanupJob(JobContext jobContext) - Deprecated. |
void | OutputCommitter.commitJob(JobContext jobContext) - For committing the job's output after successful job completion. |
abstract List<InputSplit> | InputFormat.getSplits(JobContext context) - Logically split the set of input files for the job. |
boolean | OutputCommitter.isCommitJobRepeatable(JobContext jobContext) - Returns true if an in-progress job commit can be retried. |
boolean | OutputCommitter.isRecoverySupported(JobContext jobContext) - Is task output recovery supported for restarting jobs? If so, job restart can be done more efficiently. |
abstract void | OutputCommitter.setupJob(JobContext jobContext) - For the framework to set up the job output during initialization. |
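A skeleton of a new-API committer built from the abstract methods listed above. NoopCommitter is a hypothetical name; a real implementation would create, promote, and clean up output in these callbacks.

```java
import java.io.IOException;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.OutputCommitter;
import org.apache.hadoop.mapreduce.TaskAttemptContext;

public class NoopCommitter extends OutputCommitter {
  @Override
  public void setupJob(JobContext jobContext) throws IOException {
    // Called once by the framework during job initialization,
    // typically to create a job attempt directory.
  }

  @Override
  public void commitJob(JobContext jobContext) throws IOException {
    // Called once after all tasks have succeeded.
    super.commitJob(jobContext); // the default delegates to cleanupJob()
  }

  @Override
  public void setupTask(TaskAttemptContext taskContext) throws IOException {}

  @Override
  public boolean needsTaskCommit(TaskAttemptContext taskContext) throws IOException {
    return false; // nothing written, so no task commit needed
  }

  @Override
  public void commitTask(TaskAttemptContext taskContext) throws IOException {}

  @Override
  public void abortTask(TaskAttemptContext taskContext) throws IOException {}
}
```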
Methods in org.apache.hadoop.mapreduce.lib.db with parameters of type JobContext:

Modifier and Type | Method and Description |
---|---|
void | DBOutputFormat.checkOutputSpecs(JobContext context) |
List<InputSplit> | DataDrivenDBInputFormat.getSplits(JobContext job) - Logically split the set of input files for the job. |
List<InputSplit> | DBInputFormat.getSplits(JobContext job) - Logically split the set of input files for the job. |
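DBInputFormat.getSplits(JobContext) partitions the rows of the table it has been configured to read. A sketch of that configuration follows; the JDBC URL, table and column names, credentials, and the OrderRecord value class are all illustrative placeholders.

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;
import org.apache.hadoop.mapreduce.lib.db.DBInputFormat;
import org.apache.hadoop.mapreduce.lib.db.DBWritable;

public class DbInputSetup {
  public static void configure(Job job) {
    DBConfiguration.configureDB(job.getConfiguration(),
        "com.mysql.cj.jdbc.Driver",           // JDBC driver class
        "jdbc:mysql://db.example.com/sales",  // connection URL
        "reader", "secret");                  // credentials
    job.setInputFormatClass(DBInputFormat.class);
    DBInputFormat.setInput(job, OrderRecord.class,
        "orders",              // table name
        null,                  // WHERE conditions
        "order_id",            // ORDER BY column
        "order_id", "total");  // fields to read
  }

  // Hypothetical record type: DBInputFormat values must implement
  // both Writable and DBWritable.
  public static class OrderRecord implements Writable, DBWritable {
    long orderId;
    double total;

    public void readFields(ResultSet rs) throws SQLException {
      orderId = rs.getLong(1);
      total = rs.getDouble(2);
    }

    public void write(PreparedStatement ps) throws SQLException {
      ps.setLong(1, orderId);
      ps.setDouble(2, total);
    }

    public void readFields(DataInput in) throws IOException {
      orderId = in.readLong();
      total = in.readDouble();
    }

    public void write(DataOutput out) throws IOException {
      out.writeLong(orderId);
      out.writeDouble(total);
    }
  }
}
```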
Methods in org.apache.hadoop.mapreduce.lib.input with parameters of type JobContext:

Modifier and Type | Method and Description |
---|---|
static boolean | FileInputFormat.getInputDirRecursive(JobContext job) |
static PathFilter | FileInputFormat.getInputPathFilter(JobContext context) - Get a PathFilter instance of the filter set for the input paths. |
static Path[] | FileInputFormat.getInputPaths(JobContext context) - Get the list of input Paths for the map-reduce job. |
static long | FileInputFormat.getMaxSplitSize(JobContext context) - Get the maximum split size. |
static long | FileInputFormat.getMinSplitSize(JobContext job) - Get the minimum split size. |
static int | NLineInputFormat.getNumLinesPerSplit(JobContext job) - Get the number of lines per split. |
List<InputSplit> | FileInputFormat.getSplits(JobContext job) - Generate the list of files and make them into FileSplits. |
List<InputSplit> | CombineFileInputFormat.getSplits(JobContext job) |
List<InputSplit> | NLineInputFormat.getSplits(JobContext job) - Logically splits the set of input files for the job, treating N lines of the input as one split. |
protected boolean | KeyValueTextInputFormat.isSplitable(JobContext context, Path file) |
protected boolean | FixedLengthInputFormat.isSplitable(JobContext context, Path file) |
protected boolean | TextInputFormat.isSplitable(JobContext context, Path file) |
protected boolean | FileInputFormat.isSplitable(JobContext context, Path filename) - Is the given filename splittable? Usually true, but if the file is stream compressed, it will not be. |
protected boolean | CombineFileInputFormat.isSplitable(JobContext context, Path file) |
protected List<FileStatus> | SequenceFileInputFormat.listStatus(JobContext job) |
protected List<FileStatus> | FileInputFormat.listStatus(JobContext job) - List input directories. |
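The static getters above read job configuration that is normally populated through the matching FileInputFormat setters when the job is built. A minimal sketch, with illustrative paths and sizes:

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

public class InputSetup {
  public static void configure(Job job) throws Exception {
    job.setInputFormatClass(TextInputFormat.class);
    FileInputFormat.addInputPath(job, new Path("/data/logs"));     // read by getInputPaths()
    FileInputFormat.setInputDirRecursive(job, true);               // read by getInputDirRecursive()
    FileInputFormat.setMinInputSplitSize(job, 64L * 1024 * 1024);  // read by getMinSplitSize()
    FileInputFormat.setMaxInputSplitSize(job, 256L * 1024 * 1024); // read by getMaxSplitSize()
  }
}
```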
Methods in org.apache.hadoop.mapreduce.lib.join with parameters of type JobContext:

Modifier and Type | Method and Description |
---|---|
List<InputSplit> | CompositeInputFormat.getSplits(JobContext job) - Build a CompositeInputSplit from the child InputFormats by assigning the ith split from each child to the ith composite split. |
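A sketch of the configuration that CompositeInputFormat.getSplits(JobContext) consumes, assuming the join expression key exposed as CompositeInputFormat.JOIN_EXPR. The paths are placeholders; for a map-side join, both inputs must be sorted and partitioned identically on the join key.

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat;
import org.apache.hadoop.mapreduce.lib.join.CompositeInputFormat;

public class JoinSetup {
  public static void configure(Job job) {
    job.setInputFormatClass(CompositeInputFormat.class);
    // Join expression: inner-join the i-th split of each source.
    job.getConfiguration().set(CompositeInputFormat.JOIN_EXPR,
        CompositeInputFormat.compose("inner", KeyValueTextInputFormat.class,
            new Path("/data/left"), new Path("/data/right")));
  }
}
```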
Methods in org.apache.hadoop.mapreduce.lib.map with parameters of type JobContext:

Modifier and Type | Method and Description |
---|---|
static <K1,V1,K2,V2> Class<Mapper<K1,V1,K2,V2>> | MultithreadedMapper.getMapperClass(JobContext job) - Get the application's mapper class. |
static int | MultithreadedMapper.getNumberOfThreads(JobContext job) - The number of threads in the thread pool that will run the map function. |
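The values read by getMapperClass() and getNumberOfThreads() are set when the job is configured, as in this sketch; WordMapper is a hypothetical mapper standing in for the application's CPU-bound map logic.

```java
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.map.MultithreadedMapper;

public class ThreadedMapSetup {
  // Hypothetical mapper that the pool threads will run.
  public static class WordMapper
      extends Mapper<LongWritable, Text, Text, LongWritable> {}

  public static void configure(Job job) {
    job.setMapperClass(MultithreadedMapper.class);
    MultithreadedMapper.setMapperClass(job, WordMapper.class); // read by getMapperClass()
    MultithreadedMapper.setNumberOfThreads(job, 8);            // read by getNumberOfThreads()
  }
}
```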
Methods in org.apache.hadoop.mapreduce.lib.output with parameters of type JobContext:

Modifier and Type | Method and Description |
---|---|
void | BindingPathOutputCommitter.abortJob(JobContext jobContext, org.apache.hadoop.mapreduce.JobStatus.State state) |
void | FileOutputCommitter.abortJob(JobContext context, org.apache.hadoop.mapreduce.JobStatus.State state) - Delete the temporary directory, including all of the work directories. |
void | NullOutputFormat.checkOutputSpecs(JobContext context) |
void | FilterOutputFormat.checkOutputSpecs(JobContext context) |
void | FileOutputFormat.checkOutputSpecs(JobContext job) |
void | LazyOutputFormat.checkOutputSpecs(JobContext context) |
void | SequenceFileAsBinaryOutputFormat.checkOutputSpecs(JobContext job) |
void | BindingPathOutputCommitter.cleanupJob(JobContext jobContext) |
void | FileOutputCommitter.cleanupJob(JobContext context) - Deprecated. |
void | BindingPathOutputCommitter.commitJob(JobContext jobContext) |
void | FileOutputCommitter.commitJob(JobContext context) - The job has completed, so perform the commit work in commitJobInternal(). |
protected void | FileOutputCommitter.commitJobInternal(JobContext context) - The job has completed, so perform the commit steps, including moving all committed task output to the final output directory (algorithm 1 only). |
static boolean | FileOutputFormat.getCompressOutput(JobContext job) - Is the job output compressed? |
static boolean | MultipleOutputs.getCountersEnabled(JobContext job) - Returns whether the counters for the named outputs are enabled. |
Path | FileOutputCommitter.getJobAttemptPath(JobContext context) - Compute the path where the output of a given job attempt will be placed. |
static Path | FileOutputCommitter.getJobAttemptPath(JobContext context, Path out) - Compute the path where the output of a given job attempt will be placed. |
static org.apache.hadoop.io.SequenceFile.CompressionType | SequenceFileOutputFormat.getOutputCompressionType(JobContext job) - Get the SequenceFile.CompressionType for the output SequenceFile. |
static Class<? extends CompressionCodec> | FileOutputFormat.getOutputCompressorClass(JobContext job, Class<? extends CompressionCodec> defaultValue) - Get the CompressionCodec for compressing the job outputs. |
protected static String | FileOutputFormat.getOutputName(JobContext job) - Get the base output name for the output file. |
static Path | FileOutputFormat.getOutputPath(JobContext job) - Get the Path to the output directory for the map-reduce job. |
static Class<? extends WritableComparable> | SequenceFileAsBinaryOutputFormat.getSequenceFileOutputKeyClass(JobContext job) - Get the key class for the SequenceFile. |
static Class<? extends Writable> | SequenceFileAsBinaryOutputFormat.getSequenceFileOutputValueClass(JobContext job) - Get the value class for the SequenceFile. |
boolean | BindingPathOutputCommitter.isCommitJobRepeatable(JobContext jobContext) |
boolean | FileOutputCommitter.isCommitJobRepeatable(JobContext context) |
boolean | BindingPathOutputCommitter.isRecoverySupported(JobContext jobContext) |
protected static void | FileOutputFormat.setOutputName(JobContext job, String name) - Set the base output name for output files to be created. |
void | BindingPathOutputCommitter.setupJob(JobContext jobContext) |
void | FileOutputCommitter.setupJob(JobContext context) - Create the temporary directory that is the root of all of the task work directories. |
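The static getters above read values that the corresponding FileOutputFormat setters populate when the job is configured, as in this sketch; the output path is an illustrative placeholder.

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class OutputSetup {
  public static void configure(Job job) {
    FileOutputFormat.setOutputPath(job, new Path("/results/run-001")); // read by getOutputPath()
    FileOutputFormat.setCompressOutput(job, true);                     // read by getCompressOutput()
    FileOutputFormat.setOutputCompressorClass(job, GzipCodec.class);   // read by getOutputCompressorClass()
  }
}
```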
Constructors in org.apache.hadoop.mapreduce.lib.output with parameters of type JobContext:

Constructor and Description |
---|
FileOutputCommitter(Path outputPath, JobContext context) - Create a file output committer. |
PartialFileOutputCommitter(Path outputPath, JobContext context) |
PathOutputCommitter(Path outputPath, JobContext context) - Constructor for a job attempt. |
Methods in org.apache.hadoop.mapreduce.lib.output.committer.manifest with parameters of type JobContext:

Modifier and Type | Method and Description |
---|---|
void | ManifestCommitter.abortJob(JobContext jobContext, org.apache.hadoop.mapreduce.JobStatus.State state) - Abort the job. |
void | ManifestCommitter.cleanupJob(JobContext jobContext) - Execute the CleanupJobStage to remove the job attempt dir. |
void | ManifestCommitter.commitJob(JobContext jobContext) - This is the big job commit stage. |
Path | ManifestCommitter.getJobAttemptPath(JobContext context) - Compute the path where the output of a task attempt is stored until that task is committed. |
boolean | ManifestCommitter.isCommitJobRepeatable(JobContext jobContext) - A failure during job commit is not recoverable. |
boolean | ManifestCommitter.isRecoverySupported(JobContext jobContext) - Declare that task recovery is not supported. |
void | ManifestCommitter.setupJob(JobContext jobContext) - Set up a job through a SetupJobStage. |
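These ManifestCommitter methods run once the committer is bound to the job's output filesystem. A sketch of that binding through the PathOutputCommitterFactory mechanism, assuming the scheme-specific factory key from the manifest committer documentation and using abfs as an example scheme:

```java
import org.apache.hadoop.conf.Configuration;

public class ManifestCommitterSetup {
  public static Configuration configure(Configuration conf) {
    // Bind committer creation for abfs:// output paths to the
    // manifest committer's factory.
    conf.set("mapreduce.outputcommitter.factory.scheme.abfs",
        "org.apache.hadoop.mapreduce.lib.output.committer.manifest."
            + "ManifestCommitterFactory");
    return conf;
  }
}
```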
Methods in org.apache.hadoop.mapreduce.lib.partition with parameters of type JobContext:

Modifier and Type | Method and Description |
---|---|
static String | KeyFieldBasedComparator.getKeyFieldComparatorOption(JobContext job) - Get the KeyFieldBasedComparator options. |
String | KeyFieldBasedPartitioner.getKeyFieldPartitionerOption(JobContext job) - Get the KeyFieldBasedPartitioner options. |
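A minimal sketch of setting the key-field options that the getters above return; the key specs follow Unix sort(1) style syntax and are illustrative.

```java
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedComparator;
import org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedPartitioner;

public class KeyFieldSetup {
  public static void configure(Job job) {
    job.setSortComparatorClass(KeyFieldBasedComparator.class);
    // Sort by the second field, numerically, in reverse order.
    KeyFieldBasedComparator.setKeyFieldComparatorOptions(job, "-k2,2nr");

    job.setPartitionerClass(KeyFieldBasedPartitioner.class);
    // Note: the partitioner's option setter is an instance method.
    new KeyFieldBasedPartitioner<Text, Text>()
        .setKeyFieldPartitionerOptions(job, "-k1,1");
  }
}
```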
Classes in org.apache.hadoop.mapreduce.task that implement JobContext:

Modifier and Type | Class and Description |
---|---|
class | org.apache.hadoop.mapreduce.task.JobContextImpl - A read-only view of the job that is provided to the tasks while they are running. |
Copyright © 2024 Apache Software Foundation. All rights reserved.