org.apache.hadoop.examples
Class SleepJob.SleepInputFormat
java.lang.Object
org.apache.hadoop.conf.Configured
org.apache.hadoop.examples.SleepJob.SleepInputFormat
- All Implemented Interfaces: Configurable, InputFormat<IntWritable,IntWritable>
- Enclosing class: SleepJob
public static class SleepJob.SleepInputFormat
    extends Configured
    implements InputFormat<IntWritable,IntWritable>
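A minimal usage sketch, not taken from SleepJob itself: configuring a JobConf so that this class serves as the job's input format. The job name and map-task count are illustrative assumptions, as is the comment that SleepInputFormat synthesizes its own records rather than reading input files.

import org.apache.hadoop.examples.SleepJob;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.mapred.JobConf;

public class SleepInputFormatUsage {
  public static JobConf configure() {
    JobConf conf = new JobConf(SleepJob.class);
    conf.setJobName("sleep-job-sketch"); // illustrative name

    // Assumption: SleepInputFormat generates synthetic <IntWritable, IntWritable>
    // records, so no input paths are set on the job here.
    conf.setInputFormat(SleepJob.SleepInputFormat.class);
    conf.setMapOutputKeyClass(IntWritable.class);
    conf.setMapOutputValueClass(IntWritable.class);
    conf.setNumMapTasks(4); // hint later passed to getSplits(conf, numSplits)
    return conf;
  }
}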
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
SleepJob.SleepInputFormat
public SleepJob.SleepInputFormat()
getSplits
public InputSplit[] getSplits(JobConf conf,
                              int numSplits)
- Description copied from interface: InputFormat
- Logically split the set of input files for the job. Each InputSplit is then assigned to an individual Mapper for processing.
Note: The split is a logical split of the inputs and the input files are not physically split into chunks. For example, a split could be a <input-file-path, start, offset> tuple.
- Specified by:
- getSplits in interface InputFormat<IntWritable,IntWritable>
- Parameters:
- conf - job configuration.
- numSplits - the desired number of splits, a hint.
- Returns:
- an array of InputSplits for the job.
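A sketch of invoking getSplits directly, outside the framework, purely to illustrate the contract above; the framework normally performs this call on the client, and the split count of 4 is an arbitrary hint.

import org.apache.hadoop.examples.SleepJob;
import org.apache.hadoop.mapred.InputSplit;
import org.apache.hadoop.mapred.JobConf;

public class GetSplitsSketch {
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf();
    SleepJob.SleepInputFormat inputFormat = new SleepJob.SleepInputFormat();

    // numSplits is only a hint; the InputFormat decides the actual number of splits.
    InputSplit[] splits = inputFormat.getSplits(conf, 4);
    System.out.println("splits: " + splits.length);
  }
}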
getRecordReader
public RecordReader<IntWritable,IntWritable> getRecordReader(InputSplit ignored,
                                                             JobConf conf,
                                                             Reporter reporter)
                                                      throws IOException
- Description copied from interface: InputFormat
- Get the RecordReader for the given InputSplit. It is the responsibility of the RecordReader to respect record boundaries while processing the logical split to present a record-oriented view to the individual task.
- Specified by:
- getRecordReader in interface InputFormat<IntWritable,IntWritable>
- Parameters:
- ignored - the InputSplit
- conf - the job that this split belongs to
- Returns:
- a RecordReader
- Throws:
- IOException
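A sketch of driving the returned RecordReader by hand, the way a map task would; Reporter.NULL stands in for the framework-supplied reporter, and the single split comes from getSplits as documented above. This is illustrative only, not code from SleepJob.

import org.apache.hadoop.examples.SleepJob;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.mapred.InputSplit;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RecordReader;
import org.apache.hadoop.mapred.Reporter;

public class RecordReaderSketch {
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf();
    SleepJob.SleepInputFormat inputFormat = new SleepJob.SleepInputFormat();
    InputSplit[] splits = inputFormat.getSplits(conf, 1);

    RecordReader<IntWritable, IntWritable> reader =
        inputFormat.getRecordReader(splits[0], conf, Reporter.NULL);
    IntWritable key = reader.createKey();
    IntWritable value = reader.createValue();
    try {
      // next() fills key/value and returns false once the split is exhausted.
      while (reader.next(key, value)) {
        System.out.println(key + " -> " + value);
      }
    } finally {
      reader.close();
    }
  }
}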
Copyright © 2009 The Apache Software Foundation