Modifier and Type | Method and Description |
---|---|
JobConf | JobContext.getJobConf() Get the job Configuration. |
JobConf | TaskAttemptContext.getJobConf() |
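As a quick illustration, a component handed one of these context objects can pull the JobConf out and read job-level settings. A minimal sketch (the "example.dry.run" key is hypothetical; FileOutputFormat.getOutputPath is one of the helpers tabulated further below):

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.JobContext;
import org.apache.hadoop.mapred.TaskAttemptContext;

public class JobConfAccess {
    // Reads a custom flag from the JobConf held by a JobContext.
    static boolean isDryRun(JobContext context) {
        JobConf conf = context.getJobConf();
        return conf.getBoolean("example.dry.run", false); // hypothetical key
    }

    // TaskAttemptContext exposes the same accessor, e.g. inside an OutputCommitter.
    static Path jobOutputDir(TaskAttemptContext context) {
        return FileOutputFormat.getOutputPath(context.getJobConf()); // may be null
    }
}
```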
Modifier and Type | Method and Description |
---|---|
static void | FileInputFormat.addInputPath(JobConf conf, Path path) Add a Path to the list of inputs for the map-reduce job. |
static void | FileInputFormat.addInputPaths(JobConf conf, String commaSeparatedPaths) Add the given comma-separated paths to the list of inputs for the map-reduce job. |
void | OutputFormat.checkOutputSpecs(FileSystem ignored, JobConf job) Check for validity of the output-specification for the job. |
void | SequenceFileAsBinaryOutputFormat.checkOutputSpecs(FileSystem ignored, JobConf job) |
void | FileOutputFormat.checkOutputSpecs(FileSystem ignored, JobConf job) |
void | MapReduceBase.configure(JobConf job) Default implementation that does nothing. |
void | KeyValueTextInputFormat.configure(JobConf conf) |
void | TextInputFormat.configure(JobConf conf) |
void | JobConfigurable.configure(JobConf job) Initializes a new instance from a JobConf. |
void | MapRunner.configure(JobConf job) |
void | FixedLengthInputFormat.configure(JobConf conf) |
static boolean | FileOutputFormat.getCompressOutput(JobConf conf) Is the job output compressed? |
static PathFilter | FileInputFormat.getInputPathFilter(JobConf conf) Get a PathFilter instance of the filter set for the input paths. |
static Path[] | FileInputFormat.getInputPaths(JobConf conf) Get the list of input Paths for the map-reduce job. |
static org.apache.hadoop.io.SequenceFile.CompressionType | SequenceFileOutputFormat.getOutputCompressionType(JobConf conf) Get the SequenceFile.CompressionType for the output SequenceFile. |
static Class<? extends CompressionCodec> | FileOutputFormat.getOutputCompressorClass(JobConf conf, Class<? extends CompressionCodec> defaultValue) Get the CompressionCodec for compressing the job outputs. |
static Path | FileOutputFormat.getOutputPath(JobConf conf) Get the Path to the output directory for the map-reduce job. |
static Path | FileOutputFormat.getPathForCustomFile(JobConf conf, String name) Helper function to generate a Path for a file that is unique for the task within the job output directory. |
RecordReader<Text,Text> | SequenceFileAsTextInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter) |
abstract RecordReader<K,V> | FileInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter) |
RecordReader<Text,Text> | KeyValueTextInputFormat.getRecordReader(InputSplit genericSplit, JobConf job, Reporter reporter) |
RecordReader<LongWritable,Text> | TextInputFormat.getRecordReader(InputSplit genericSplit, JobConf job, Reporter reporter) |
RecordReader<BytesWritable,BytesWritable> | SequenceFileAsBinaryInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter) |
RecordReader<K,V> | SequenceFileInputFilter.getRecordReader(InputSplit split, JobConf job, Reporter reporter) Create a record reader for the given split. |
RecordReader<K,V> | InputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter) Get the RecordReader for the given InputSplit. |
RecordReader<K,V> | SequenceFileInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter) |
abstract RecordReader<K,V> | MultiFileInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter) |
RecordReader<LongWritable,BytesWritable> | FixedLengthInputFormat.getRecordReader(InputSplit genericSplit, JobConf job, Reporter reporter) |
RecordWriter<K,V> | OutputFormat.getRecordWriter(FileSystem ignored, JobConf job, String name, Progressable progress) Get the RecordWriter for the given job. |
RecordWriter<BytesWritable,BytesWritable> | SequenceFileAsBinaryOutputFormat.getRecordWriter(FileSystem ignored, JobConf job, String name, Progressable progress) |
abstract RecordWriter<K,V> | FileOutputFormat.getRecordWriter(FileSystem ignored, JobConf job, String name, Progressable progress) |
RecordWriter<WritableComparable,Writable> | MapFileOutputFormat.getRecordWriter(FileSystem ignored, JobConf job, String name, Progressable progress) |
RecordWriter<K,V> | SequenceFileOutputFormat.getRecordWriter(FileSystem ignored, JobConf job, String name, Progressable progress) |
RecordWriter<K,V> | TextOutputFormat.getRecordWriter(FileSystem ignored, JobConf job, String name, Progressable progress) |
static Class<? extends WritableComparable> | SequenceFileAsBinaryOutputFormat.getSequenceFileOutputKeyClass(JobConf conf) Get the key class for the SequenceFile. |
static Class<? extends Writable> | SequenceFileAsBinaryOutputFormat.getSequenceFileOutputValueClass(JobConf conf) Get the value class for the SequenceFile. |
InputSplit[] | FileInputFormat.getSplits(JobConf job, int numSplits) Splits files returned by FileInputFormat.listStatus(JobConf) when they're too big. |
InputSplit[] | InputFormat.getSplits(JobConf job, int numSplits) Logically split the set of input files for the job. |
InputSplit[] | MultiFileInputFormat.getSplits(JobConf job, int numSplits) |
static org.apache.hadoop.mapred.JobClient.TaskStatusFilter | JobClient.getTaskOutputFilter(JobConf job) Get the task output filter out of the JobConf. |
static Path | FileOutputFormat.getTaskOutputPath(JobConf conf, String name) Helper function to create the task's temporary output directory and return the path to the task's output file. |
static String | FileOutputFormat.getUniqueName(JobConf conf, String name) Helper function to generate a name that is unique for the task. |
static Path | FileOutputFormat.getWorkOutputPath(JobConf conf) Get the Path to the task's temporary output directory for the map-reduce job (see Tasks' Side-Effect Files). |
void | JobClient.init(JobConf conf) Connect to the default cluster. |
protected FileStatus[] | FileInputFormat.listStatus(JobConf job) List input directories. |
protected FileStatus[] | SequenceFileInputFormat.listStatus(JobConf job) |
boolean | JobClient.monitorAndPrintJob(JobConf conf, RunningJob job) Monitor a job and print status in real-time as progress is made and tasks fail. |
static RunningJob | JobClient.runJob(JobConf job) Utility that submits a job, then polls for progress until the job is complete. |
static void | FileOutputFormat.setCompressOutput(JobConf conf, boolean compress) Set whether the output of the job is compressed. |
static void | FileInputFormat.setInputPathFilter(JobConf conf, Class<? extends PathFilter> filter) Set a PathFilter to be applied to the input paths for the map-reduce job. |
static void | FileInputFormat.setInputPaths(JobConf conf, Path... inputPaths) Set the array of Paths as the list of inputs for the map-reduce job. |
static void | FileInputFormat.setInputPaths(JobConf conf, String commaSeparatedPaths) Sets the given comma-separated paths as the list of inputs for the map-reduce job. |
static void | SequenceFileOutputFormat.setOutputCompressionType(JobConf conf, org.apache.hadoop.io.SequenceFile.CompressionType style) Set the SequenceFile.CompressionType for the output SequenceFile. |
static void | FileOutputFormat.setOutputCompressorClass(JobConf conf, Class<? extends CompressionCodec> codecClass) Set the CompressionCodec to be used to compress job outputs. |
static void | FileOutputFormat.setOutputPath(JobConf conf, Path outputDir) Set the Path of the output directory for the map-reduce job. |
static void | SequenceFileAsBinaryOutputFormat.setSequenceFileOutputKeyClass(JobConf conf, Class<?> theClass) Set the key class for the SequenceFile. |
static void | SequenceFileAsBinaryOutputFormat.setSequenceFileOutputValueClass(JobConf conf, Class<?> theClass) Set the value class for the SequenceFile. |
static void | SkipBadRecords.setSkipOutputPath(JobConf conf, Path path) Set the directory to which skipped records are written. |
static void | JobClient.setTaskOutputFilter(JobConf job, org.apache.hadoop.mapred.JobClient.TaskStatusFilter newValue) Modify the JobConf to set the task output filter. |
static void | FileOutputFormat.setWorkOutputPath(JobConf conf, Path outputDir) Set the Path of the task's temporary output directory for the map-reduce job. |
RunningJob | JobClient.submitJob(JobConf conf) Submit a job to the MR system. |
RunningJob | JobClient.submitJobInternal(JobConf conf) |
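Taken together, these helpers cover the typical old-API driver: wire input and output paths into the JobConf, pick formats, then hand the JobConf to JobClient.runJob. A minimal sketch (no mapper or reducer is set, so the identity implementations run and the output classes mirror TextInputFormat's key/value types):

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RunningJob;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.TextOutputFormat;

public class Driver {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(Driver.class);
        conf.setJobName("jobconf-demo");

        // Input/output wiring through the static helpers tabulated above.
        FileInputFormat.addInputPath(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));
        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);

        // Identity mapper/reducer pass TextInputFormat's key/value types through.
        conf.setOutputKeyClass(LongWritable.class);
        conf.setOutputValueClass(Text.class);

        // Optionally compress the output, then submit and poll until done.
        FileOutputFormat.setCompressOutput(conf, true);
        RunningJob job = JobClient.runJob(conf);
        System.exit(job.isSuccessful() ? 0 : 1);
    }
}
```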
Constructor and Description |
---|
FileSplit(Path file, long start, long length, JobConf conf) Deprecated. |
JobClient(JobConf conf) Build a job client with the given JobConf, and connect to the default cluster. |
MultiFileSplit(JobConf job, Path[] files, long[] lengths) |
Modifier and Type | Method and Description |
---|---|
JobConf | Job.getJobConf() |

Modifier and Type | Method and Description |
---|---|
void | Job.setJobConf(JobConf jobConf) Set the mapred job conf for this job. |

Constructor and Description |
---|
Job(JobConf conf) |
Job(JobConf jobConf, ArrayList<?> dependingJobs) Construct a job. |
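These constructors are how a JobConf enters the jobcontrol layer. A sketch of expressing a dependency between two jobs, assuming both JobConfs are fully configured elsewhere (JobControl is from the same org.apache.hadoop.mapred.jobcontrol package):

```java
import java.util.ArrayList;

import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.jobcontrol.Job;
import org.apache.hadoop.mapred.jobcontrol.JobControl;

public class Pipeline {
    // Wires two pre-built JobConfs so the second job runs after the first.
    public static JobControl chain(JobConf first, JobConf second) {
        Job stage1 = new Job(first);
        ArrayList<Job> deps = new ArrayList<Job>();
        deps.add(stage1);
        Job stage2 = new Job(second, deps); // stage2 waits for stage1

        JobControl control = new JobControl("two-stage-pipeline");
        control.addJob(stage1);
        control.addJob(stage2);
        return control; // caller runs it in a thread and polls allFinished()
    }
}
```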
Modifier and Type | Method and Description |
---|---|
ComposableRecordReader<K,TupleWritable> | CompositeInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter) Construct a CompositeRecordReader for the children of this InputFormat as defined in the init expression. |
ComposableRecordReader<K,V> | ComposableInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter) |
InputSplit[] | CompositeInputFormat.getSplits(JobConf job, int numSplits) Build a CompositeInputSplit from the child InputFormats by assigning the ith split from each child to the ith composite split. |
void | CompositeInputFormat.setFormat(JobConf job) Interpret a given string as a composite expression. |

Constructor and Description |
---|
JoinRecordReader(int id, JobConf conf, int capacity, Class<? extends WritableComparator> cmpcl) |
MultiFilterRecordReader(int id, JobConf conf, int capacity, Class<? extends WritableComparator> cmpcl) |
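For context, CompositeInputFormat.setFormat parses the join expression stored under the mapred.join.expr key; drivers usually build that expression with CompositeInputFormat.compose. A sketch of a map-side inner join, assuming two sorted, identically partitioned SequenceFile inputs:

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.SequenceFileInputFormat;
import org.apache.hadoop.mapred.join.CompositeInputFormat;

public class JoinSetup {
    public static void configureJoin(JobConf conf, Path left, Path right) {
        conf.setInputFormat(CompositeInputFormat.class);
        // compose() builds the expression that setFormat() later interprets.
        conf.set("mapred.join.expr", CompositeInputFormat.compose(
                "inner", SequenceFileInputFormat.class, left, right));
    }
}
```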
Modifier and Type | Field and Description |
---|---|
protected JobConf | CombineFileRecordReader.jc |

Modifier and Type | Method and Description |
---|---|
JobConf | CombineFileSplit.getJob() |
Modifier and Type | Method and Description |
---|---|
static void | MultipleInputs.addInputPath(JobConf conf, Path path, Class<? extends InputFormat> inputFormatClass) Add a Path with a custom InputFormat to the list of inputs for the map-reduce job. |
static void | MultipleInputs.addInputPath(JobConf conf, Path path, Class<? extends InputFormat> inputFormatClass, Class<? extends Mapper> mapperClass) |
static <K1,V1,K2,V2> void | ChainReducer.addMapper(JobConf job, Class<? extends Mapper<K1,V1,K2,V2>> klass, Class<? extends K1> inputKeyClass, Class<? extends V1> inputValueClass, Class<? extends K2> outputKeyClass, Class<? extends V2> outputValueClass, boolean byValue, JobConf mapperConf) Adds a Mapper class to the chain job's JobConf. |
static <K1,V1,K2,V2> void | ChainMapper.addMapper(JobConf job, Class<? extends Mapper<K1,V1,K2,V2>> klass, Class<? extends K1> inputKeyClass, Class<? extends V1> inputValueClass, Class<? extends K2> outputKeyClass, Class<? extends V2> outputValueClass, boolean byValue, JobConf mapperConf) Adds a Mapper class to the chain job's JobConf. |
static void | MultipleOutputs.addMultiNamedOutput(JobConf conf, String namedOutput, Class<? extends OutputFormat> outputFormatClass, Class<?> keyClass, Class<?> valueClass) Adds a multi named output for the job. |
static void | MultipleOutputs.addNamedOutput(JobConf conf, String namedOutput, Class<? extends OutputFormat> outputFormatClass, Class<?> keyClass, Class<?> valueClass) Adds a named output for the job. |
void | FilterOutputFormat.checkOutputSpecs(FileSystem ignored, JobConf job) |
void | LazyOutputFormat.checkOutputSpecs(FileSystem ignored, JobConf job) |
void | NullOutputFormat.checkOutputSpecs(FileSystem ignored, JobConf job) |
void | MultithreadedMapRunner.configure(JobConf jobConf) |
void | HashPartitioner.configure(JobConf job) |
void | ChainReducer.configure(JobConf job) Configures the ChainReducer, the Reducer and all the Mappers in the chain. |
void | ChainMapper.configure(JobConf job) Configures the ChainMapper and all the Mappers in the chain. |
void | RegexMapper.configure(JobConf job) |
void | TotalOrderPartitioner.configure(JobConf job) |
void | KeyFieldBasedComparator.configure(JobConf job) |
void | FieldSelectionMapReduce.configure(JobConf job) |
void | NLineInputFormat.configure(JobConf conf) |
void | KeyFieldBasedPartitioner.configure(JobConf job) |
void | BinaryPartitioner.configure(JobConf job) |
protected void | CombineFileInputFormat.createPool(JobConf conf, List<PathFilter> filters) Deprecated. |
protected void | CombineFileInputFormat.createPool(JobConf conf, PathFilter... filters) Deprecated. |
protected RecordWriter<K,V> | MultipleSequenceFileOutputFormat.getBaseRecordWriter(FileSystem fs, JobConf job, String name, Progressable arg3) |
protected RecordWriter<K,V> | MultipleTextOutputFormat.getBaseRecordWriter(FileSystem fs, JobConf job, String name, Progressable arg3) |
protected abstract RecordWriter<K,V> | MultipleOutputFormat.getBaseRecordWriter(FileSystem fs, JobConf job, String name, Progressable arg3) |
static boolean | MultipleOutputs.getCountersEnabled(JobConf conf) Returns whether the counters for the named outputs are enabled. |
protected String | MultipleOutputFormat.getInputFileBasedOutputFileName(JobConf job, String name) Generate the output file name based on a given name and the input file name. |
static Class<? extends OutputFormat> | MultipleOutputs.getNamedOutputFormatClass(JobConf conf, String namedOutput) Returns the named output OutputFormat. |
static Class<?> | MultipleOutputs.getNamedOutputKeyClass(JobConf conf, String namedOutput) Returns the key class for a named output. |
static List<String> | MultipleOutputs.getNamedOutputsList(JobConf conf) Returns the list of channel names. |
static Class<?> | MultipleOutputs.getNamedOutputValueClass(JobConf conf, String namedOutput) Returns the value class for a named output. |
static String | TotalOrderPartitioner.getPartitionFile(JobConf job) Deprecated. |
RecordReader<K,V> | CombineSequenceFileInputFormat.getRecordReader(InputSplit split, JobConf conf, Reporter reporter) |
abstract RecordReader<K,V> | CombineFileInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter) This is not implemented yet. |
RecordReader<LongWritable,Text> | NLineInputFormat.getRecordReader(InputSplit genericSplit, JobConf job, Reporter reporter) |
RecordReader<LongWritable,Text> | CombineTextInputFormat.getRecordReader(InputSplit split, JobConf conf, Reporter reporter) |
RecordWriter<K,V> | FilterOutputFormat.getRecordWriter(FileSystem ignored, JobConf job, String name, Progressable progress) |
RecordWriter<K,V> | LazyOutputFormat.getRecordWriter(FileSystem ignored, JobConf job, String name, Progressable progress) |
RecordWriter<K,V> | NullOutputFormat.getRecordWriter(FileSystem ignored, JobConf job, String name, Progressable progress) |
RecordWriter<K,V> | MultipleOutputFormat.getRecordWriter(FileSystem fs, JobConf job, String name, Progressable arg3) Create a composite record writer that can write key/value data to different output files. |
InputSplit[] | CombineFileInputFormat.getSplits(JobConf job, int numSplits) |
InputSplit[] | NLineInputFormat.getSplits(JobConf job, int numSplits) Logically splits the set of input files for the job, treating N lines of the input as one split. |
static boolean | MultipleOutputs.isMultiNamedOutput(JobConf conf, String namedOutput) Returns whether a named output is multiple. |
protected FileStatus[] | CombineFileInputFormat.listStatus(JobConf job) List input directories. |
static void | MultipleOutputs.setCountersEnabled(JobConf conf, boolean enabled) Enables or disables counters for the named outputs. |
static void | LazyOutputFormat.setOutputFormatClass(JobConf job, Class<? extends OutputFormat> theClass) Set the underlying output format for LazyOutputFormat. |
static void | TotalOrderPartitioner.setPartitionFile(JobConf job, Path p) Deprecated. |
static <K1,V1,K2,V2> void | ChainReducer.setReducer(JobConf job, Class<? extends Reducer<K1,V1,K2,V2>> klass, Class<? extends K1> inputKeyClass, Class<? extends V1> inputValueClass, Class<? extends K2> outputKeyClass, Class<? extends V2> outputValueClass, boolean byValue, JobConf reducerConf) Sets the Reducer class to the chain job's JobConf. |
static <K,V> void | InputSampler.writePartitionFile(JobConf job, org.apache.hadoop.mapred.lib.InputSampler.Sampler<K,V> sampler) |
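The ChainMapper.addMapper and ChainReducer.setReducer entries above assemble several map stages and one reduce stage inside a single job. A minimal sketch with one chained mapper and a reducer (the Mapper and Reducer implementations here are illustrative):

```java
import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.lib.ChainMapper;
import org.apache.hadoop.mapred.lib.ChainReducer;

public class ChainSetup {

    /** Illustrative first link: lower-cases each input line. */
    public static class LowerCaseMap extends MapReduceBase
            implements Mapper<LongWritable, Text, LongWritable, Text> {
        public void map(LongWritable key, Text value,
                        OutputCollector<LongWritable, Text> out, Reporter reporter)
                throws IOException {
            out.collect(key, new Text(value.toString().toLowerCase()));
        }
    }

    /** Illustrative reducer: keeps the first value seen per key. */
    public static class FirstValueReduce extends MapReduceBase
            implements Reducer<LongWritable, Text, LongWritable, Text> {
        public void reduce(LongWritable key, Iterator<Text> values,
                           OutputCollector<LongWritable, Text> out, Reporter reporter)
                throws IOException {
            if (values.hasNext()) {
                out.collect(key, values.next());
            }
        }
    }

    public static void configure(JobConf job) {
        // Each link gets a private JobConf; new JobConf(false) skips default resources.
        ChainMapper.addMapper(job, LowerCaseMap.class,
                LongWritable.class, Text.class, LongWritable.class, Text.class,
                true, new JobConf(false));
        ChainReducer.setReducer(job, FirstValueReduce.class,
                LongWritable.class, Text.class, LongWritable.class, Text.class,
                true, new JobConf(false));
    }
}
```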
Constructor and Description |
---|
CombineFileRecordReader(JobConf job, CombineFileSplit split, Reporter reporter, Class<RecordReader<K,V>> rrClass) A generic RecordReader that can hand out different recordReaders for each chunk in the CombineFileSplit. |
CombineFileSplit(JobConf job, Path[] files, long[] lengths) |
CombineFileSplit(JobConf job, Path[] files, long[] start, long[] lengths, String[] locations) |
InputSampler(JobConf conf) |
MultipleOutputs(JobConf job) Creates and initializes multiple named outputs support; it should be instantiated in the Mapper/Reducer configure method. |
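The MultipleOutputs constructor above pairs with the static addNamedOutput/addMultiNamedOutput declarations earlier in this table. A sketch of the driver-side half, declaring a named output called "errors" (the name is arbitrary):

```java
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.TextOutputFormat;
import org.apache.hadoop.mapred.lib.MultipleOutputs;

public class NamedOutputs {
    // Declares a named output "errors" alongside the job's main output.
    public static void declare(JobConf conf) {
        MultipleOutputs.addNamedOutput(conf, "errors",
                TextOutputFormat.class, LongWritable.class, Text.class);
        MultipleOutputs.setCountersEnabled(conf, true); // count records per output
    }
}
```

On the task side, a MultipleOutputs instance created in configure(JobConf) hands out collectors via getCollector("errors", reporter) and must be closed in the task's close() method.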
Modifier and Type | Method and Description |
---|---|
static JobConf | ValueAggregatorJob.createValueAggregatorJob(String[] args) Create an Aggregate based map/reduce job. |
static JobConf | ValueAggregatorJob.createValueAggregatorJob(String[] args, Class<?> caller) Create an Aggregate based map/reduce job. |
static JobConf | ValueAggregatorJob.createValueAggregatorJob(String[] args, Class<? extends ValueAggregatorDescriptor>[] descriptors) |
static JobConf | ValueAggregatorJob.createValueAggregatorJob(String[] args, Class<? extends ValueAggregatorDescriptor>[] descriptors, Class<?> caller) |

Modifier and Type | Method and Description |
---|---|
void | ValueAggregatorCombiner.configure(JobConf job) The combiner does not need to be configured. |
void | UserDefinedValueAggregatorDescriptor.configure(JobConf job) Do nothing. |
void | ValueAggregatorDescriptor.configure(JobConf job) Configure the object. |
void | ValueAggregatorJobBase.configure(JobConf job) |
void | ValueAggregatorBaseDescriptor.configure(JobConf job) Get the input file name. |
static void | ValueAggregatorJob.setAggregatorDescriptors(JobConf job, Class<? extends ValueAggregatorDescriptor>[] descriptors) |

Constructor and Description |
---|
UserDefinedValueAggregatorDescriptor(String className, JobConf job) |
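A driver built on the aggregate framework can be as small as the sketch below; args is passed straight through to createValueAggregatorJob, which reads the framework's command-line options (input and output directories, number of reducers, and so on) from it:

```java
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJob;

public class AggregateDriver {
    public static void main(String[] args) throws Exception {
        // The factory returns a fully configured JobConf for an aggregate job.
        JobConf conf = ValueAggregatorJob.createValueAggregatorJob(args);
        JobClient.runJob(conf); // submit and poll until completion
    }
}
```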
Modifier and Type | Method and Description |
---|---|
void | DBOutputFormat.checkOutputSpecs(FileSystem filesystem, JobConf job) Check for validity of the output-specification for the job. |
void | DBInputFormat.configure(JobConf job) Initializes a new instance from a JobConf. |
static void | DBConfiguration.configureDB(JobConf job, String driverClass, String dbUrl) Sets the DB access related fields in the JobConf. |
static void | DBConfiguration.configureDB(JobConf job, String driverClass, String dbUrl, String userName, String passwd) Sets the DB access related fields in the JobConf. |
RecordReader<LongWritable,T> | DBInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter) Get the RecordReader for the given InputSplit. |
RecordWriter<K,V> | DBOutputFormat.getRecordWriter(FileSystem filesystem, JobConf job, String name, Progressable progress) Get the RecordWriter for the given job. |
InputSplit[] | DBInputFormat.getSplits(JobConf job, int chunks) Logically split the set of input files for the job. |
static void | DBInputFormat.setInput(JobConf job, Class<? extends DBWritable> inputClass, String inputQuery, String inputCountQuery) Initializes the map-part of the job with the appropriate input settings. |
static void | DBInputFormat.setInput(JobConf job, Class<? extends DBWritable> inputClass, String tableName, String conditions, String orderBy, String... fieldNames) Initializes the map-part of the job with the appropriate input settings. |
static void | DBOutputFormat.setOutput(JobConf job, String tableName, int fieldCount) Initializes the reduce-part of the job with the appropriate output settings. |
static void | DBOutputFormat.setOutput(JobConf job, String tableName, String... fieldNames) Initializes the reduce-part of the job with the appropriate output settings. |
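A sketch wiring these db helpers together; MyRecord is a hypothetical row type, and the JDBC driver class, URL, and table/column names are placeholders:

```java
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.lib.db.DBConfiguration;
import org.apache.hadoop.mapred.lib.db.DBInputFormat;
import org.apache.hadoop.mapred.lib.db.DBOutputFormat;
import org.apache.hadoop.mapred.lib.db.DBWritable;

public class DbWiring {

    /** Hypothetical row type with an "id" and a "name" column. */
    public static class MyRecord implements DBWritable {
        long id;
        String name;
        public void readFields(ResultSet rs) throws SQLException {
            id = rs.getLong(1);
            name = rs.getString(2);
        }
        public void write(PreparedStatement ps) throws SQLException {
            ps.setLong(1, id);
            ps.setString(2, name);
        }
    }

    public static void configure(JobConf conf) {
        // Record the JDBC driver and connection string in the JobConf.
        DBConfiguration.configureDB(conf, "com.mysql.jdbc.Driver",
                "jdbc:mysql://localhost/demo", "user", "password");
        // Map side: read ordered rows of MyRecord from the "employees" table.
        DBInputFormat.setInput(conf, MyRecord.class,
                "employees", null /* conditions */, "id" /* orderBy */, "id", "name");
        // Reduce side: write two columns back to a summary table.
        DBOutputFormat.setOutput(conf, "employee_summary", "id", "name");
    }
}
```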
Modifier and Type | Method and Description |
---|---|
static String | Submitter.getExecutable(JobConf conf) Get the URI of the application's executable. |
static boolean | Submitter.getIsJavaMapper(JobConf conf) Check whether the job is using a Java Mapper. |
static boolean | Submitter.getIsJavaRecordReader(JobConf conf) Check whether the job is using a Java RecordReader. |
static boolean | Submitter.getIsJavaRecordWriter(JobConf conf) Will the reduce use a Java RecordWriter? |
static boolean | Submitter.getIsJavaReducer(JobConf conf) Check whether the job is using a Java Reducer. |
static boolean | Submitter.getKeepCommandFile(JobConf conf) Does the user want to keep the command file for debugging? If this is true, pipes will write a copy of the command data to a file in the task directory named "downlink.data", which may be used to run the C++ program under the debugger. |
static RunningJob | Submitter.jobSubmit(JobConf conf) Submit a job to the Map-Reduce framework. |
static RunningJob | Submitter.runJob(JobConf conf) Submit a job to the map/reduce cluster. |
static void | Submitter.setExecutable(JobConf conf, String executable) Set the URI for the application's executable. |
static void | Submitter.setIsJavaMapper(JobConf conf, boolean value) Set whether the Mapper is written in Java. |
static void | Submitter.setIsJavaRecordReader(JobConf conf, boolean value) Set whether the job is using a Java RecordReader. |
static void | Submitter.setIsJavaRecordWriter(JobConf conf, boolean value) Set whether the job will use a Java RecordWriter. |
static void | Submitter.setIsJavaReducer(JobConf conf, boolean value) Set whether the Reducer is written in Java. |
static void | Submitter.setKeepCommandFile(JobConf conf, boolean keep) Set whether to keep the command file for debugging. |
static RunningJob | Submitter.submitJob(JobConf conf) Deprecated. |
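For reference, a sketch of a pipes driver built from these setters; the executable path is illustrative, and input/output formats and paths would still be configured on the JobConf as for any other job:

```java
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RunningJob;
import org.apache.hadoop.mapred.pipes.Submitter;

public class PipesDriver {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf();
        // Point the job at a C++ binary already present in HDFS.
        Submitter.setExecutable(conf, "hdfs:///apps/pipes/wordcount");
        // Keep Java record readers/writers around the C++ map and reduce.
        Submitter.setIsJavaRecordReader(conf, true);
        Submitter.setIsJavaRecordWriter(conf, true);

        RunningJob job = Submitter.runJob(conf);
        job.waitForCompletion(); // harmless if runJob already blocked
        System.exit(job.isSuccessful() ? 0 : 1);
    }
}
```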
Modifier and Type | Method and Description |
---|---|
static org.apache.hadoop.security.Credentials | TokenCache.loadTokens(String jobTokenFile, JobConf conf) Deprecated. Use Credentials.readTokenStorageFile(org.apache.hadoop.fs.Path, org.apache.hadoop.conf.Configuration) instead; this method is included for compatibility against Hadoop-1. |
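A sketch of the recommended replacement named in the deprecation note; since JobConf extends Configuration, the job's JobConf can be passed directly:

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.Credentials;

public class TokenLoad {
    // Replacement for the deprecated TokenCache.loadTokens(String, JobConf).
    static Credentials load(String jobTokenFile, Configuration conf) throws IOException {
        return Credentials.readTokenStorageFile(new Path(jobTokenFile), conf);
    }
}
```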
Copyright © 2015 Apache Software Foundation. All Rights Reserved.