org.apache.hadoop.mapreduce.lib.input
Class SequenceFileAsBinaryInputFormat
java.lang.Object
  org.apache.hadoop.mapreduce.InputFormat<K,V>
    org.apache.hadoop.mapreduce.lib.input.FileInputFormat<K,V>
      org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat<BytesWritable,BytesWritable>
        org.apache.hadoop.mapreduce.lib.input.SequenceFileAsBinaryInputFormat
@InterfaceAudience.Public
@InterfaceStability.Stable
public class SequenceFileAsBinaryInputFormat
extends SequenceFileInputFormat<BytesWritable,BytesWritable>
InputFormat reading keys and values from SequenceFiles in binary (raw) format.
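Example usage (not part of the original Javadoc): a minimal sketch of a job that reads a SequenceFile as raw bytes. The RawSequenceFileRead and PassThroughMapper classes and the path arguments are illustrative assumptions; the input format simply hands each mapper the serialized key and value bytes wrapped in BytesWritable.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileAsBinaryInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class RawSequenceFileRead {

  // Illustrative mapper: keys and values arrive exactly as the serialized
  // bytes stored in the input SequenceFile, wrapped in BytesWritable.
  public static class PassThroughMapper
      extends Mapper<BytesWritable, BytesWritable, BytesWritable, BytesWritable> {
    @Override
    protected void map(BytesWritable key, BytesWritable value, Context context)
        throws java.io.IOException, InterruptedException {
      context.write(key, value);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = new Job(new Configuration(), "raw sequence file read");
    job.setJarByClass(RawSequenceFileRead.class);
    // Read keys/values from SequenceFiles in binary (raw) form.
    job.setInputFormatClass(SequenceFileAsBinaryInputFormat.class);
    job.setMapperClass(PassThroughMapper.class);
    job.setOutputKeyClass(BytesWritable.class);
    job.setOutputValueClass(BytesWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}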
Methods inherited from class org.apache.hadoop.mapreduce.lib.input.FileInputFormat:
addInputPath, addInputPaths, computeSplitSize, getBlockIndex, getInputPathFilter, getInputPaths, getMaxSplitSize, getMinSplitSize, getSplits, isSplitable, setInputPathFilter, setInputPaths, setInputPaths, setMaxInputSplitSize, setMinInputSplitSize
Methods inherited from class java.lang.Object:
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
SequenceFileAsBinaryInputFormat
public SequenceFileAsBinaryInputFormat()
createRecordReader
public RecordReader<BytesWritable,BytesWritable> createRecordReader(InputSplit split,
                                                                    TaskAttemptContext context)
                                                             throws IOException
Description copied from class: InputFormat
Create a record reader for a given split. The framework will call
RecordReader.initialize(InputSplit, TaskAttemptContext) before the split is used.
Overrides:
createRecordReader in class SequenceFileInputFormat<BytesWritable,BytesWritable>
Parameters:
split - the split to be read
context - the information about the task
Returns:
a new record reader
Throws:
IOException
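Illustrative sketch (not part of the original Javadoc) of how the returned reader is driven. The BinaryRecordReaderSketch class and its readSplit method are hypothetical, and the split and context are assumed to be supplied by the MapReduce framework, which normally performs these calls itself.

import java.io.IOException;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileAsBinaryInputFormat;

public class BinaryRecordReaderSketch {

  // Hypothetical helper: split and context would normally come from the framework.
  static void readSplit(InputSplit split, TaskAttemptContext context)
      throws IOException, InterruptedException {
    SequenceFileAsBinaryInputFormat format = new SequenceFileAsBinaryInputFormat();
    RecordReader<BytesWritable, BytesWritable> reader =
        format.createRecordReader(split, context);
    // The framework calls initialize(InputSplit, TaskAttemptContext) before use.
    reader.initialize(split, context);
    try {
      while (reader.nextKeyValue()) {
        BytesWritable key = reader.getCurrentKey();     // raw serialized key bytes
        BytesWritable value = reader.getCurrentValue(); // raw serialized value bytes
        // process key/value here
      }
    } finally {
      reader.close();
    }
  }
}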
Copyright © 2009 The Apache Software Foundation