Uses of Class org.apache.hadoop.mapred.InputSplit

Packages that use InputSplit

| Package | Description |
| --- | --- |
| org.apache.hadoop.contrib.index.example | |
| org.apache.hadoop.examples | Hadoop example code. |
| org.apache.hadoop.examples.terasort | This package consists of 3 map/reduce applications for Hadoop to compete in the annual terabyte sort competition. |
| org.apache.hadoop.mapred | A software framework for easily writing applications which process vast amounts of data (multi-terabyte data-sets) in parallel on large clusters (thousands of nodes) of commodity hardware in a reliable, fault-tolerant manner. |
| org.apache.hadoop.mapred.join | Given a set of sorted datasets keyed with the same class and yielding equal partitions, it is possible to effect a join of those datasets prior to the map. |
| org.apache.hadoop.mapred.lib | Library of generally useful mappers, reducers, and partitioners. |
| org.apache.hadoop.mapred.lib.db | Library for using a SQL database as a source of map/reduce input and a sink for its output. |
| org.apache.hadoop.mapreduce.split | |
| org.apache.hadoop.streaming | Hadoop Streaming is a utility which allows users to create and run Map-Reduce jobs with any executables (e.g. Unix shell utilities) as the mapper and/or the reducer. |
Uses of InputSplit in org.apache.hadoop.contrib.index.example

Methods in org.apache.hadoop.contrib.index.example with parameters of type InputSplit

| Return Type | Method | Description |
| --- | --- | --- |
| RecordReader<DocumentID,LineDocTextAndOp> | LineDocInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter) | |
Uses of InputSplit in org.apache.hadoop.examples

Classes in org.apache.hadoop.examples that implement InputSplit

| Modifier and Type | Class | Description |
| --- | --- | --- |
| static class | SleepJob.EmptySplit | |

Methods in org.apache.hadoop.examples that return InputSplit

| Return Type | Method | Description |
| --- | --- | --- |
| InputSplit[] | SleepJob.SleepInputFormat.getSplits(JobConf conf, int numSplits) | |

Methods in org.apache.hadoop.examples with parameters of type InputSplit

| Return Type | Method | Description |
| --- | --- | --- |
| RecordReader<IntWritable,IntWritable> | SleepJob.SleepInputFormat.getRecordReader(InputSplit ignored, JobConf conf, Reporter reporter) | |
| RecordReader<MultiFileWordCount.WordOffset,Text> | MultiFileWordCount.MyInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter) | |
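SleepJob.EmptySplit is about the smallest possible implementation of the InputSplit contract: a split that carries no data, because SleepInputFormat fabricates its records instead of reading them. A minimal sketch along those lines, using the old org.apache.hadoop.mapred API (the class below is illustrative, not the actual SleepJob source):

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.mapred.InputSplit;

// A split with no backing data: every method is effectively a no-op.
public class EmptySplit implements InputSplit {
  public void write(DataOutput out) throws IOException { }    // nothing to serialize
  public void readFields(DataInput in) throws IOException { } // nothing to restore
  public long getLength() { return 0; }                        // zero bytes of input
  public String[] getLocations() { return new String[0]; }     // no preferred hosts
}
```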
Uses of InputSplit in org.apache.hadoop.examples.terasort

Methods in org.apache.hadoop.examples.terasort that return InputSplit

| Return Type | Method | Description |
| --- | --- | --- |
| InputSplit[] | TeraInputFormat.getSplits(JobConf conf, int splits) | |

Methods in org.apache.hadoop.examples.terasort with parameters of type InputSplit

| Return Type | Method | Description |
| --- | --- | --- |
| RecordReader<Text,Text> | TeraInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter) | |
Uses of InputSplit in org.apache.hadoop.mapred

Classes in org.apache.hadoop.mapred that implement InputSplit

| Modifier and Type | Class | Description |
| --- | --- | --- |
| class | FileSplit | A section of an input file. |
| class | MultiFileSplit | Deprecated. Use CombineFileSplit instead. |

Methods in org.apache.hadoop.mapred that return InputSplit

| Return Type | Method | Description |
| --- | --- | --- |
| InputSplit | Reporter.getInputSplit() | Get the InputSplit object for a map. |
| InputSplit | Task.TaskReporter.getInputSplit() | |
| InputSplit[] | InputFormat.getSplits(JobConf job, int numSplits) | Logically split the set of input files for the job. |
| InputSplit[] | FileInputFormat.getSplits(JobConf job, int numSplits) | Splits files returned by FileInputFormat.listStatus(JobConf) when they're too big. |
| InputSplit[] | MultiFileInputFormat.getSplits(JobConf job, int numSplits) | Deprecated. |

Methods in org.apache.hadoop.mapred with parameters of type InputSplit

| Return Type | Method | Description |
| --- | --- | --- |
| RecordReader<K,V> | InputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter) | Get the RecordReader for the given InputSplit. |
| RecordReader<BytesWritable,BytesWritable> | SequenceFileAsBinaryInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter) | |
| RecordReader<K,V> | SequenceFileInputFilter.getRecordReader(InputSplit split, JobConf job, Reporter reporter) | Create a record reader for the given split. |
| RecordReader<K,V> | SequenceFileInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter) | |
| RecordReader<LongWritable,Text> | TextInputFormat.getRecordReader(InputSplit genericSplit, JobConf job, Reporter reporter) | |
| RecordReader<Text,Text> | SequenceFileAsTextInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter) | |
| abstract RecordReader<K,V> | FileInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter) | |
| RecordReader<Text,Text> | KeyValueTextInputFormat.getRecordReader(InputSplit genericSplit, JobConf job, Reporter reporter) | |
| abstract RecordReader<K,V> | MultiFileInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter) | Deprecated. |
| void | Task.TaskReporter.setInputSplit(InputSplit split) | |
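The two InputFormat methods above describe the whole life cycle of an InputSplit in the old API: getSplits(JobConf, int) produces the splits at job-submission time, and getRecordReader(InputSplit, JobConf, Reporter) turns one split back into records inside a map task. A minimal sketch of a file-based format, essentially what TextInputFormat does (the class name WholeLineInputFormat is illustrative):

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileSplit;
import org.apache.hadoop.mapred.InputSplit;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.LineRecordReader;
import org.apache.hadoop.mapred.RecordReader;
import org.apache.hadoop.mapred.Reporter;

// Inherits FileInputFormat.getSplits(JobConf, int), which chops the input
// files into FileSplits, and only supplies the RecordReader that consumes
// one InputSplit at a time.
public class WholeLineInputFormat extends FileInputFormat<LongWritable, Text> {
  @Override
  public RecordReader<LongWritable, Text> getRecordReader(
      InputSplit split, JobConf job, Reporter reporter) throws IOException {
    reporter.setStatus(split.toString());
    // FileInputFormat hands back FileSplit instances, so this cast is safe here.
    return new LineRecordReader(job, (FileSplit) split);
  }
}
```

Inside a running map task the same split is also reachable through the Reporter: with a file-based format, casting reporter.getInputSplit() to FileSplit exposes the path of the file currently being read.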
Uses of InputSplit in org.apache.hadoop.mapred.join

Classes in org.apache.hadoop.mapred.join that implement InputSplit

| Modifier and Type | Class | Description |
| --- | --- | --- |
| class | CompositeInputSplit | This InputSplit contains a set of child InputSplits. |

Methods in org.apache.hadoop.mapred.join that return InputSplit

| Return Type | Method | Description |
| --- | --- | --- |
| InputSplit | CompositeInputSplit.get(int i) | Get the ith child InputSplit. |
| InputSplit[] | CompositeInputFormat.getSplits(JobConf job, int numSplits) | Build a CompositeInputSplit from the child InputFormats by assigning the ith split from each child to the ith composite split. |

Methods in org.apache.hadoop.mapred.join with parameters of type InputSplit

| Return Type | Method | Description |
| --- | --- | --- |
| void | CompositeInputSplit.add(InputSplit s) | Add an InputSplit to this collection. |
| ComposableRecordReader<K,V> | ComposableInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter) | |
| ComposableRecordReader<K,TupleWritable> | CompositeInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter) | Construct a CompositeRecordReader for the children of this InputFormat as defined in the init expression. |
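CompositeInputFormat.getSplits pairs up the ith split of every child source into one CompositeInputSplit, which is what makes the map-side join possible. A sketch of the job wiring, assuming two SequenceFile inputs that are already sorted on the same key class and partitioned identically; the property name "mapred.join.expr" and the compose overload used here are from the old mapred.join API, so verify them against your Hadoop version:

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.SequenceFileInputFormat;
import org.apache.hadoop.mapred.join.CompositeInputFormat;
import org.apache.hadoop.mapred.join.TupleWritable;

public class JoinJobSetup {
  public static void configure(JobConf conf, Path left, Path right) {
    // The composite format builds one split per position, pairing the ith
    // split of "left" with the ith split of "right".
    conf.setInputFormat(CompositeInputFormat.class);
    conf.set("mapred.join.expr",
        CompositeInputFormat.compose("inner", SequenceFileInputFormat.class, left, right));
    // The map then receives the join key and a TupleWritable holding one value per source.
    conf.setMapOutputKeyClass(Text.class);
    conf.setMapOutputValueClass(TupleWritable.class);
  }
}
```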
Uses of InputSplit in org.apache.hadoop.mapred.lib

Classes in org.apache.hadoop.mapred.lib that implement InputSplit

| Modifier and Type | Class | Description |
| --- | --- | --- |
| class | CombineFileSplit | A sub-collection of input files. |

Methods in org.apache.hadoop.mapred.lib that return InputSplit

| Return Type | Method | Description |
| --- | --- | --- |
| InputSplit[] | DelegatingInputFormat.getSplits(JobConf conf, int numSplits) | |
| InputSplit[] | CombineFileInputFormat.getSplits(JobConf job, int numSplits) | |
| InputSplit[] | NLineInputFormat.getSplits(JobConf job, int numSplits) | Logically splits the set of input files for the job so that N lines of the input form one split. |

Methods in org.apache.hadoop.mapred.lib with parameters of type InputSplit

| Return Type | Method | Description |
| --- | --- | --- |
| RecordReader<K,V> | DelegatingInputFormat.getRecordReader(InputSplit split, JobConf conf, Reporter reporter) | |
| abstract RecordReader<K,V> | CombineFileInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter) | This is not implemented yet. |
| RecordReader<LongWritable,Text> | NLineInputFormat.getRecordReader(InputSplit genericSplit, JobConf job, Reporter reporter) | |
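NLineInputFormat inverts the usual size-based splitting: each InputSplit covers a fixed number of lines rather than a fixed number of bytes, which is useful when every map task should receive the same number of records. A minimal setup sketch; the key mapred.line.input.format.linespermap is the one the old API reads, but the value 1000 is only an example:

```java
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.lib.NLineInputFormat;

public class NLineSetup {
  // Each InputSplit covers at most 1000 lines, so every map task sees a
  // fixed-size batch of records regardless of how large the input files are.
  public static void configure(JobConf conf) {
    conf.setInputFormat(NLineInputFormat.class);
    conf.setInt("mapred.line.input.format.linespermap", 1000);
  }
}
```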
Uses of InputSplit in org.apache.hadoop.mapred.lib.db

Classes in org.apache.hadoop.mapred.lib.db that implement InputSplit

| Modifier and Type | Class | Description |
| --- | --- | --- |
| protected static class | DBInputFormat.DBInputSplit | An InputSplit that spans a set of rows. |

Methods in org.apache.hadoop.mapred.lib.db that return InputSplit

| Return Type | Method | Description |
| --- | --- | --- |
| InputSplit[] | DBInputFormat.getSplits(JobConf job, int chunks) | Logically split the set of input files for the job. |

Methods in org.apache.hadoop.mapred.lib.db with parameters of type InputSplit

| Return Type | Method | Description |
| --- | --- | --- |
| RecordReader<LongWritable,T> | DBInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter) | Get the RecordReader for the given InputSplit. |
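DBInputFormat.getSplits(JobConf, chunks) does not split files at all: it counts the rows matching the configured query and divides them into the requested number of DBInputSplits, each spanning a range of rows. A setup sketch against the old mapred.lib.db API; EmployeeRecord, the table name, columns, and the JDBC URL are all placeholders:

```java
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.lib.db.DBConfiguration;
import org.apache.hadoop.mapred.lib.db.DBInputFormat;
import org.apache.hadoop.mapred.lib.db.DBWritable;

public class DbJobSetup {

  // Hypothetical record type: one row of an "employees" table.
  public static class EmployeeRecord implements DBWritable {
    long id;
    String name;
    public void readFields(ResultSet rs) throws SQLException {
      id = rs.getLong("id");
      name = rs.getString("name");
    }
    public void write(PreparedStatement ps) throws SQLException {
      ps.setLong(1, id);
      ps.setString(2, name);
    }
  }

  public static void configure(JobConf conf) {
    // JDBC connection details for the job (placeholder values).
    DBConfiguration.configureDB(conf,
        "com.mysql.jdbc.Driver", "jdbc:mysql://dbhost/mydb", "user", "password");
    // getSplits(JobConf, chunks) then breaks the matching rows into ranges,
    // one DBInputSplit per chunk.
    DBInputFormat.setInput(conf, EmployeeRecord.class,
        "employees", null /* conditions */, "id" /* orderBy */, "id", "name");
  }
}
```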
Uses of InputSplit in org.apache.hadoop.mapreduce.split

Methods in org.apache.hadoop.mapreduce.split with parameters of type InputSplit

| Return Type | Method | Description |
| --- | --- | --- |
| static void | JobSplitWriter.createSplitFiles(Path jobSubmitDir, Configuration conf, FileSystem fs, InputSplit[] splits) | |
Uses of InputSplit in org.apache.hadoop.streaming

Methods in org.apache.hadoop.streaming with parameters of type InputSplit

| Return Type | Method | Description |
| --- | --- | --- |
| RecordReader<Text,Text> | StreamInputFormat.getRecordReader(InputSplit genericSplit, JobConf job, Reporter reporter) | |
| RecordReader | AutoInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter) | |