org.apache.hadoop.mapred.lib
Class CombineFileInputFormat<K,V>

java.lang.Object
  org.apache.hadoop.mapred.FileInputFormat<K,V>
      org.apache.hadoop.mapred.lib.CombineFileInputFormat<K,V>

public abstract class CombineFileInputFormat<K,V>
extends FileInputFormat<K,V>
An abstract InputFormat that returns CombineFileSplit's in the
InputFormat.getSplits(JobConf, int) method.
Splits are constructed from the files under the input paths.
A split cannot have files from different pools.
Each split returned may contain blocks from different files.
If a maxSplitSize is specified, then blocks on the same node are
combined to form a single split. Blocks that are left over are
then combined with other blocks in the same rack.
If maxSplitSize is not specified, then blocks from the same rack
are combined in a single split; no attempt is made to create
node-local splits.
If the maxSplitSize is equal to the block size, then this class
is similar to the default splitting behaviour in Hadoop: each
block is a locally processed split.
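The node-then-rack combining policy described above can be sketched in plain Java. This is a simplified, hypothetical model (the `Block` record and `nodeLocalSplits` method are illustrative names, not Hadoop API), showing only the node-local pass; leftover blocks would then be combined per rack:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical, simplified model of the combining policy described above.
// A "block" is reduced to (size, node); real Hadoop also tracks racks,
// files, and offsets.
public class CombineSketch {
    record Block(long size, String node) {}

    // Combine blocks on the same node until maxSplitSize is reached.
    // Blocks that do not fill a node-local split are collected in
    // `leftovers`, which a second pass would combine per rack.
    static List<List<Block>> nodeLocalSplits(List<Block> blocks,
                                             long maxSplitSize,
                                             List<Block> leftovers) {
        Map<String, List<Block>> byNode = new LinkedHashMap<>();
        for (Block b : blocks) {
            byNode.computeIfAbsent(b.node(), n -> new ArrayList<>()).add(b);
        }
        List<List<Block>> splits = new ArrayList<>();
        for (List<Block> nodeBlocks : byNode.values()) {
            List<Block> current = new ArrayList<>();
            long size = 0;
            for (Block b : nodeBlocks) {
                current.add(b);
                size += b.size();
                if (size >= maxSplitSize) { // split is full: emit it
                    splits.add(current);
                    current = new ArrayList<>();
                    size = 0;
                }
            }
            leftovers.addAll(current); // not enough for a node-local split
        }
        return splits;
    }

    public static void main(String[] args) {
        List<Block> blocks = List.of(
            new Block(64, "node1"), new Block(64, "node1"),
            new Block(64, "node2"));
        List<Block> leftovers = new ArrayList<>();
        List<List<Block>> splits = nodeLocalSplits(blocks, 128, leftovers);
        System.out.println(splits.size());    // node1's two blocks form one split
        System.out.println(leftovers.size()); // node2's lone block is left over
    }
}
```

With a maxSplitSize of 128 and two 64-byte blocks on node1, those blocks are combined into one node-local split, while node2's single 64-byte block is left for the rack-level pass.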
Subclasses implement InputFormat.getRecordReader(InputSplit, JobConf, Reporter)
to construct RecordReader's for CombineFileSplit's.

See Also:
CombineFileSplit
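A minimal subclass sketch follows. It assumes a hypothetical delegate reader class `MyRecordReader` (not part of Hadoop) whose constructor takes (CombineFileSplit, Configuration, Reporter, Integer), which is what CombineFileRecordReader requires of the per-chunk reader it instantiates; the split-size values are illustrative only:

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.InputSplit;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RecordReader;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.lib.CombineFileInputFormat;
import org.apache.hadoop.mapred.lib.CombineFileRecordReader;
import org.apache.hadoop.mapred.lib.CombineFileSplit;

public class MyCombineFileInputFormat
        extends CombineFileInputFormat<LongWritable, Text> {

    public MyCombineFileInputFormat() {
        // Tune the combining policy (values here are illustrative only).
        setMaxSplitSize(128L * 1024 * 1024);     // at most 128 MB per split
        setMinSplitSizeNode(64L * 1024 * 1024);  // per-node minimum
        setMinSplitSizeRack(64L * 1024 * 1024);  // per-rack minimum
    }

    @Override
    @SuppressWarnings("unchecked")
    public RecordReader<LongWritable, Text> getRecordReader(
            InputSplit split, JobConf job, Reporter reporter)
            throws IOException {
        // CombineFileRecordReader walks the chunks of the CombineFileSplit,
        // creating one MyRecordReader (hypothetical class) per chunk.
        return new CombineFileRecordReader<LongWritable, Text>(
                job, (CombineFileSplit) split, reporter,
                (Class) MyRecordReader.class);
    }
}
```

If input files must not be mixed across splits by category, a subclass can also call createPool(JobConf, PathFilter...) before the splits are computed, since a split never contains files from different pools.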
Nested Class Summary

Nested classes/interfaces inherited from class org.apache.hadoop.mapred.FileInputFormat:
FileInputFormat.Counter
Field Summary

Fields inherited from class org.apache.hadoop.mapred.FileInputFormat:
LOG
Constructor Summary

CombineFileInputFormat()
    Default constructor.
Method Summary

protected void createPool(JobConf conf, List<PathFilter> filters)
    Create a new pool and add the filters to it.
protected void createPool(JobConf conf, PathFilter... filters)
    Create a new pool and add the filters to it.
abstract RecordReader<K,V> getRecordReader(InputSplit split, JobConf job, Reporter reporter)
    This is not implemented yet.
InputSplit[] getSplits(JobConf job, int numSplits)
    Splits files returned by FileInputFormat.listStatus(JobConf) when they're too big.
protected void setMaxSplitSize(long maxSplitSize)
    Specify the maximum size (in bytes) of each split.
protected void setMinSplitSizeNode(long minSplitSizeNode)
    Specify the minimum size (in bytes) of each split per node.
protected void setMinSplitSizeRack(long minSplitSizeRack)
    Specify the minimum size (in bytes) of each split per rack.
Methods inherited from class org.apache.hadoop.mapred.FileInputFormat:
addInputPath, addInputPaths, computeSplitSize, getBlockIndex, getInputPathFilter, getInputPaths, getSplitHosts, isSplitable, listStatus, setInputPathFilter, setInputPaths, setInputPaths, setMinSplitSize

Methods inherited from class java.lang.Object:
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Constructor Detail

CombineFileInputFormat
public CombineFileInputFormat()
    Default constructor.
Method Detail

setMaxSplitSize
protected void setMaxSplitSize(long maxSplitSize)
    Specify the maximum size (in bytes) of each split.

setMinSplitSizeNode
protected void setMinSplitSizeNode(long minSplitSizeNode)
    Specify the minimum size (in bytes) of each split per node.

setMinSplitSizeRack
protected void setMinSplitSizeRack(long minSplitSizeRack)
    Specify the minimum size (in bytes) of each split per rack.

createPool
protected void createPool(JobConf conf, List<PathFilter> filters)
    Create a new pool and add the filters to it.

createPool
protected void createPool(JobConf conf, PathFilter... filters)
    Create a new pool and add the filters to it.

getSplits
public InputSplit[] getSplits(JobConf job, int numSplits) throws IOException
    Description copied from class: FileInputFormat
    Splits files returned by FileInputFormat.listStatus(JobConf) when they're too big.
    Specified by: getSplits in interface InputFormat<K,V>
    Overrides: getSplits in class FileInputFormat<K,V>
    Parameters:
        job - job configuration.
        numSplits - the desired number of splits, a hint.
    Returns:
        InputSplits for the job.
    Throws:
        IOException
getRecordReader
public abstract RecordReader<K,V> getRecordReader(InputSplit split, JobConf job, Reporter reporter) throws IOException
    This is not implemented yet.
    Specified by: getRecordReader in interface InputFormat<K,V>
    Specified by: getRecordReader in class FileInputFormat<K,V>
    Parameters:
        split - the InputSplit
        job - the job that this split belongs to
    Returns:
        a RecordReader
    Throws:
        IOException