Packages that use InputSplit | |
---|---|
org.apache.hadoop.mapred | A software framework for easily writing applications that process vast amounts of data (multi-terabyte datasets) in parallel on large clusters (thousands of nodes) built of commodity hardware in a reliable, fault-tolerant manner. |
org.apache.hadoop.mapreduce | |
org.apache.hadoop.mapreduce.lib.db | |
org.apache.hadoop.mapreduce.lib.input | |
org.apache.hadoop.mapreduce.split | |
Uses of InputSplit in org.apache.hadoop.mapred |
---|
Subclasses of InputSplit in org.apache.hadoop.mapred | |
---|---|
class | FileSplit: A section of an input file. |
Uses of InputSplit in org.apache.hadoop.mapreduce |
---|
Methods in org.apache.hadoop.mapreduce that return InputSplit | |
---|---|
InputSplit | MapContext.getInputSplit(): Get the input split for this map. |
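For orientation, a minimal sketch of reading the split from inside a map task (the class name SplitAwareMapper is hypothetical): file-based input formats hand each task a FileSplit, from which the source path can be recovered.

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

public class SplitAwareMapper extends Mapper<LongWritable, Text, Text, Text> {
  @Override
  protected void setup(Context context) {
    // The context exposes the split this map task is processing.
    InputSplit split = context.getInputSplit();
    if (split instanceof FileSplit) {
      // File-based formats record the source file in the split.
      Path sourceFile = ((FileSplit) split).getPath();
      System.out.println("Reading from: " + sourceFile);
    }
  }
}
```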
Methods in org.apache.hadoop.mapreduce that return types with arguments of type InputSplit | |
---|---|
abstract List<InputSplit> | InputFormat.getSplits(JobContext context): Logically split the set of input files for the job. |
Methods in org.apache.hadoop.mapreduce with parameters of type InputSplit | |
---|---|
abstract RecordReader<K,V> | InputFormat.createRecordReader(InputSplit split, TaskAttemptContext context): Create a record reader for a given split. |
abstract void | RecordReader.initialize(InputSplit split, TaskAttemptContext context): Called once at initialization. |
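A sketch of how getSplits and createRecordReader divide the work, assuming a file-based format: FileInputFormat already supplies getSplits, so a subclass only has to turn one split into a reader. MyTextInputFormat is an illustrative name; this is essentially what TextInputFormat does.

```java
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.LineRecordReader;

// FileInputFormat implements getSplits(), so only the reader
// factory remains to be written.
public class MyTextInputFormat extends FileInputFormat<LongWritable, Text> {
  @Override
  public RecordReader<LongWritable, Text> createRecordReader(
      InputSplit split, TaskAttemptContext context) {
    // The framework calls initialize(split, context) on the reader
    // before the first nextKeyValue(), so no setup is needed here.
    return new LineRecordReader();
  }
}
```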
Constructors in org.apache.hadoop.mapreduce with parameters of type InputSplit | |
---|---|
MapContext(Configuration conf, TaskAttemptID taskid, RecordReader<KEYIN,VALUEIN> reader, RecordWriter<KEYOUT,VALUEOUT> writer, OutputCommitter committer, StatusReporter reporter, InputSplit split) | |
Mapper.Context(Configuration conf, TaskAttemptID taskid, RecordReader<KEYIN,VALUEIN> reader, RecordWriter<KEYOUT,VALUEOUT> writer, OutputCommitter committer, StatusReporter reporter, InputSplit split) | |
Uses of InputSplit in org.apache.hadoop.mapreduce.lib.db |
---|
Subclasses of InputSplit in org.apache.hadoop.mapreduce.lib.db | |
---|---|
static class | DataDrivenDBInputFormat.DataDrivenDBInputSplit: An InputSplit that spans a set of rows. |
static class | DBInputFormat.DBInputSplit: An InputSplit that spans a set of rows. |
Methods in org.apache.hadoop.mapreduce.lib.db that return types with arguments of type InputSplit | |
---|---|
List<InputSplit> | DBInputFormat.getSplits(JobContext job): Logically split the set of input files for the job. |
List<InputSplit> | DataDrivenDBInputFormat.getSplits(JobContext job): Logically split the set of input files for the job. |
List<InputSplit> | DateSplitter.split(Configuration conf, ResultSet results, String colName) |
List<InputSplit> | BooleanSplitter.split(Configuration conf, ResultSet results, String colName) |
List<InputSplit> | DBSplitter.split(Configuration conf, ResultSet results, String colName): Given a ResultSet containing one record (and already advanced to that record) with two columns (a low value and a high value, both of the same type), determine a set of splits that span the given values; see the sketch after this table. |
List<InputSplit> | FloatSplitter.split(Configuration conf, ResultSet results, String colName) |
List<InputSplit> | TextSplitter.split(Configuration conf, ResultSet results, String colName): This method needs to determine the splits between two user-provided strings. |
List<InputSplit> | IntegerSplitter.split(Configuration conf, ResultSet results, String colName) |
List<InputSplit> | BigDecimalSplitter.split(Configuration conf, ResultSet results, String colName) |
Methods in org.apache.hadoop.mapreduce.lib.db with parameters of type InputSplit | |
---|---|
RecordReader<LongWritable,T> | DBInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context): Create a record reader for a given split. |
void | DBRecordReader.initialize(InputSplit split, TaskAttemptContext context) |
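A hedged sketch of wiring DBInputFormat into a job; the driver class, connection URL, table, columns, and the MyRecord value class are all illustrative.

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;
import org.apache.hadoop.mapreduce.lib.db.DBInputFormat;
import org.apache.hadoop.mapreduce.lib.db.DBWritable;

public class DbJobSetup {
  // Hypothetical record class mapping the two selected columns.
  public static class MyRecord implements Writable, DBWritable {
    long id;
    String name;
    public void readFields(DataInput in) throws IOException {
      id = in.readLong(); name = Text.readString(in);
    }
    public void write(DataOutput out) throws IOException {
      out.writeLong(id); Text.writeString(out, name);
    }
    public void readFields(ResultSet rs) throws SQLException {
      id = rs.getLong(1); name = rs.getString(2);
    }
    public void write(PreparedStatement ps) throws SQLException {
      ps.setLong(1, id); ps.setString(2, name);
    }
  }

  public static void main(String[] args) throws IOException {
    Job job = new Job(new Configuration(), "db-import");
    // Register the JDBC driver class and connection URL.
    DBConfiguration.configureDB(job.getConfiguration(),
        "com.mysql.jdbc.Driver", "jdbc:mysql://localhost/mydb");
    // Read table "employees" ordered by id, fetching two columns.
    DBInputFormat.setInput(job, MyRecord.class,
        "employees", null /* conditions */, "id" /* orderBy */,
        "id", "name");
    job.setInputFormatClass(DBInputFormat.class);
  }
}
```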
Uses of InputSplit in org.apache.hadoop.mapreduce.lib.input |
---|
Subclasses of InputSplit in org.apache.hadoop.mapreduce.lib.input | |
---|---|
class | CombineFileSplit: A sub-collection of input files. |
Methods in org.apache.hadoop.mapreduce.lib.input that return types with arguments of type InputSplit | |
---|---|
List<InputSplit> | DelegatingInputFormat.getSplits(JobContext job) |
List<InputSplit> | CombineFileInputFormat.getSplits(JobContext job) |
List<InputSplit> | NLineInputFormat.getSplits(JobContext job): Logically splits the set of input files for the job, treating N lines of the input as one split; see the usage sketch after this table. |
List<InputSplit> | FileInputFormat.getSplits(JobContext job): Generate the list of files and make them into FileSplits. |
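As referenced in the NLineInputFormat row above, a usage sketch, assuming the setNumLinesPerSplit helper is available in this API generation (the underlying configuration key may also be set directly).

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.NLineInputFormat;

public class NLineSetup {
  public static void main(String[] args) throws Exception {
    Job job = new Job(new Configuration(), "nline-example");
    job.setInputFormatClass(NLineInputFormat.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    // Each map task receives at most 100 lines of input.
    NLineInputFormat.setNumLinesPerSplit(job, 100);
  }
}
```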
Methods in org.apache.hadoop.mapreduce.lib.input with parameters of type InputSplit | |
---|---|
RecordReader<K,V> | DelegatingInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context) |
RecordReader<BytesWritable,BytesWritable> | SequenceFileAsBinaryInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context) |
abstract RecordReader<K,V> | CombineFileInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context): This is not implemented yet. |
RecordReader<K,V> | SequenceFileInputFilter.createRecordReader(InputSplit split, TaskAttemptContext context): Create a record reader for the given split. |
RecordReader<LongWritable,Text> | NLineInputFormat.createRecordReader(InputSplit genericSplit, TaskAttemptContext context) |
RecordReader<K,V> | SequenceFileInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context) |
RecordReader<LongWritable,Text> | TextInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context) |
RecordReader<Text,Text> | SequenceFileAsTextInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context) |
RecordReader<Text,Text> | KeyValueTextInputFormat.createRecordReader(InputSplit genericSplit, TaskAttemptContext context) |
void | SequenceFileRecordReader.initialize(InputSplit split, TaskAttemptContext context) |
void | SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader.initialize(InputSplit split, TaskAttemptContext context) |
void | SequenceFileAsTextRecordReader.initialize(InputSplit split, TaskAttemptContext context) |
void | DelegatingRecordReader.initialize(InputSplit split, TaskAttemptContext context) |
void | LineRecordReader.initialize(InputSplit genericSplit, TaskAttemptContext context) |
void | CombineFileRecordReader.initialize(InputSplit split, TaskAttemptContext context) |
void | KeyValueLineRecordReader.initialize(InputSplit genericSplit, TaskAttemptContext context) |
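The initialize methods above share one lifecycle: the framework constructs a reader via createRecordReader, binds it to a split with initialize, then iterates it to exhaustion. A sketch of that driver loop, using LineRecordReader for concreteness (the drain method and its caller are illustrative; in practice the framework does this for you):

```java
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.LineRecordReader;

public class ReaderLoop {
  // split and context would normally be supplied by the framework.
  static void drain(InputSplit split, TaskAttemptContext context)
      throws IOException, InterruptedException {
    LineRecordReader reader = new LineRecordReader();
    reader.initialize(split, context);  // bind the reader to this split
    try {
      while (reader.nextKeyValue()) {
        LongWritable key = reader.getCurrentKey();  // byte offset of line
        Text line = reader.getCurrentValue();       // line contents
        // ... process one record ...
      }
    } finally {
      reader.close();
    }
  }
}
```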
Constructors in org.apache.hadoop.mapreduce.lib.input with parameters of type InputSplit | |
---|---|
DelegatingRecordReader(InputSplit split, TaskAttemptContext context): Constructs the DelegatingRecordReader. |
Uses of InputSplit in org.apache.hadoop.mapreduce.split |
---|
Methods in org.apache.hadoop.mapreduce.split with type parameters of type InputSplit | |
---|---|
static <T extends InputSplit> void | JobSplitWriter.createSplitFiles(Path jobSubmitDir, Configuration conf, FileSystem fs, List<InputSplit> splits) |
static <T extends InputSplit> void | JobSplitWriter.createSplitFiles(Path jobSubmitDir, Configuration conf, FileSystem fs, T[] splits) |
Methods in org.apache.hadoop.mapreduce.split with parameters of type InputSplit | |
---|---|
static <T extends InputSplit> void | JobSplitWriter.createSplitFiles(Path jobSubmitDir, Configuration conf, FileSystem fs, T[] splits) |
Method parameters in org.apache.hadoop.mapreduce.split with type arguments of type InputSplit | |
---|---|
static <T extends InputSplit> void | JobSplitWriter.createSplitFiles(Path jobSubmitDir, Configuration conf, FileSystem fs, List<InputSplit> splits) |
Constructors in org.apache.hadoop.mapreduce.split with parameters of type InputSplit | |
---|---|
JobSplit.SplitMetaInfo(InputSplit split, long startOffset) | |
JobSplit.TaskSplitMetaInfo(InputSplit split, long startOffset) | |