Packages that use LongWritable

Package | Description |
---|---|
org.apache.hadoop.io | Generic i/o code for use when reading and writing data to the network, to databases, and to files. |
org.apache.hadoop.mapred | |
org.apache.hadoop.mapred.lib | |
org.apache.hadoop.mapred.lib.db | |
org.apache.hadoop.mapreduce.lib.db | |
org.apache.hadoop.mapreduce.lib.input | |
org.apache.hadoop.mapreduce.lib.reduce | |
Methods in org.apache.hadoop.io with parameters of type LongWritable

Modifier and Type | Method and Description |
---|---|
int | LongWritable.compareTo(LongWritable o) Compares two LongWritables. |
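LongWritable wraps a Java long as a Hadoop Writable, and compareTo orders instances by the wrapped value. A minimal standalone sketch of that behavior (no cluster required; the class name is illustrative):

```java
import org.apache.hadoop.io.LongWritable;

public class LongWritableCompareDemo {
    public static void main(String[] args) {
        LongWritable a = new LongWritable(10L);
        LongWritable b = new LongWritable(42L);

        // compareTo follows the usual Comparable contract:
        // negative if a < b, zero if equal, positive if a > b.
        System.out.println(a.compareTo(b));   // negative
        System.out.println(a.compareTo(a));   // 0
        System.out.println(b.compareTo(a));   // positive

        // get()/set(long) expose the wrapped primitive value.
        a.set(42L);
        System.out.println(a.equals(b));      // true
    }
}
```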
Methods in org.apache.hadoop.mapred that return types with arguments of type LongWritable

Modifier and Type | Method and Description |
---|---|
RecordReader<LongWritable,Text> | TextInputFormat.getRecordReader(InputSplit genericSplit, JobConf job, Reporter reporter) |
RecordReader<LongWritable,BytesWritable> | FixedLengthInputFormat.getRecordReader(InputSplit genericSplit, JobConf job, Reporter reporter) |
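Both of these are old-API (org.apache.hadoop.mapred) input formats whose record readers deliver LongWritable keys: TextInputFormat keys each Text line by its byte offset, and FixedLengthInputFormat keys each fixed-width BytesWritable record by its position. A hedged sketch of driving such a reader by hand, outside a running job; the input path and class name are hypothetical:

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.InputSplit;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RecordReader;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.TextInputFormat;

public class OldApiTextReadSketch {
    public static void main(String[] args) throws Exception {
        JobConf job = new JobConf();
        FileInputFormat.setInputPaths(job, new Path("/tmp/input.txt")); // hypothetical path

        TextInputFormat format = new TextInputFormat();
        format.configure(job); // TextInputFormat is JobConfigurable

        // One reader per split; the key is the byte offset of each line (LongWritable).
        for (InputSplit split : format.getSplits(job, 1)) {
            RecordReader<LongWritable, Text> reader =
                format.getRecordReader(split, job, Reporter.NULL);
            LongWritable key = reader.createKey();
            Text value = reader.createValue();
            while (reader.next(key, value)) {
                System.out.println(key.get() + "\t" + value);
            }
            reader.close();
        }
    }
}
```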
Methods in org.apache.hadoop.mapred.lib that return types with arguments of type LongWritable

Modifier and Type | Method and Description |
---|---|
RecordReader<LongWritable,Text> | CombineTextInputFormat.getRecordReader(InputSplit split, JobConf conf, Reporter reporter) |
RecordReader<LongWritable,Text> | NLineInputFormat.getRecordReader(InputSplit genericSplit, JobConf job, Reporter reporter) |
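CombineTextInputFormat and NLineInputFormat are old-API variations on line-oriented input: the former packs many small files into each split, the latter fixes the number of lines handed to each map task, and both still key records by LongWritable offsets. A rough configuration sketch under those assumptions; the path is hypothetical, and the property names are the Hadoop 2 configuration keys, which these old-API classes are assumed to honor:

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.lib.CombineTextInputFormat;
import org.apache.hadoop.mapred.lib.NLineInputFormat;

public class OldApiCombineAndNLineSketch {
    public static JobConf configure() {
        JobConf conf = new JobConf();
        FileInputFormat.setInputPaths(conf, new Path("/tmp/in")); // hypothetical path

        // CombineTextInputFormat packs many small files into each split while
        // still producing (LongWritable offset, Text line) records.
        conf.setInputFormat(CombineTextInputFormat.class);
        // Upper bound on the combined split size, in bytes (Hadoop 2 property name;
        // assumed to be read by the combine-file split logic).
        conf.setLong("mapreduce.input.fileinputformat.split.maxsize", 128L * 1024 * 1024);

        // Or: NLineInputFormat sends a fixed number of input lines to each map task.
        // conf.setInputFormat(NLineInputFormat.class);
        // conf.setInt("mapreduce.input.lineinputformat.linespermap", 1000);

        return conf;
    }
}
```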
Method parameters in org.apache.hadoop.mapred.lib with type arguments of type LongWritable

Modifier and Type | Method and Description |
---|---|
void | RegexMapper.map(K key, Text value, OutputCollector<Text,LongWritable> output, Reporter reporter) |
void | TokenCountMapper.map(K key, Text value, OutputCollector<Text,LongWritable> output, Reporter reporter) |
void | LongSumReducer.reduce(K key, Iterator<LongWritable> values, OutputCollector<K,LongWritable> output, Reporter reporter) |
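RegexMapper and TokenCountMapper both emit (Text, LongWritable) pairs into the OutputCollector, and LongSumReducer adds up the LongWritable values per key, so the three compose naturally into a counting job. A sketch of a token-count job wired this way with the old JobConf API; the paths and driver class name are hypothetical:

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.lib.LongSumReducer;
import org.apache.hadoop.mapred.lib.TokenCountMapper;

public class OldApiTokenCountSketch {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(OldApiTokenCountSketch.class);
        conf.setJobName("token-count");

        // TokenCountMapper emits (token, 1) pairs; LongSumReducer adds the
        // LongWritable counts per token and also works as a combiner.
        conf.setMapperClass(TokenCountMapper.class);
        conf.setCombinerClass(LongSumReducer.class);
        conf.setReducerClass(LongSumReducer.class);

        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(LongWritable.class);

        FileInputFormat.setInputPaths(conf, new Path("/tmp/in"));    // hypothetical paths
        FileOutputFormat.setOutputPath(conf, new Path("/tmp/out"));

        JobClient.runJob(conf);
    }
}
```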
Methods in org.apache.hadoop.mapred.lib.db that return types with arguments of type LongWritable

Modifier and Type | Method and Description |
---|---|
RecordReader<LongWritable,T> | DBInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter) Get the RecordReader for the given InputSplit. |
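With the old-API DBInputFormat the framework hands the mapper a LongWritable record index as the key and a user-supplied DBWritable as the value. A hedged sketch of the job-side setup, using a hypothetical UserRecord row class and placeholder JDBC driver, URL, and credentials:

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.lib.db.DBConfiguration;
import org.apache.hadoop.mapred.lib.db.DBInputFormat;
import org.apache.hadoop.mapred.lib.db.DBWritable;

// Hypothetical value class: one row of a "users" table (id, name).
public class UserRecord implements Writable, DBWritable {
    private long id;
    private String name;

    public void readFields(ResultSet rs) throws SQLException {
        id = rs.getLong(1);
        name = rs.getString(2);
    }
    public void write(PreparedStatement ps) throws SQLException {
        ps.setLong(1, id);
        ps.setString(2, name);
    }
    public void readFields(DataInput in) throws IOException {
        id = in.readLong();
        name = in.readUTF();
    }
    public void write(DataOutput out) throws IOException {
        out.writeLong(id);
        out.writeUTF(name);
    }

    public static void configure(JobConf job) {
        // JDBC connection details are placeholders.
        DBConfiguration.configureDB(job,
            "com.mysql.jdbc.Driver", "jdbc:mysql://localhost/mydb", "user", "password");
        // Read table "users" (no WHERE condition), ordered by id, two columns;
        // map keys will be LongWritable record indexes.
        DBInputFormat.setInput(job, UserRecord.class, "users", null, "id", "id", "name");
        job.setInputFormat(DBInputFormat.class);
    }
}
```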
Methods in org.apache.hadoop.mapreduce.lib.db that return LongWritable

Modifier and Type | Method and Description |
---|---|
LongWritable | DBRecordReader.getCurrentKey() Get the current key. |
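In the new org.apache.hadoop.mapreduce API a reader such as DBRecordReader no longer fills caller-supplied objects; the framework calls nextKeyValue() and then reads the current pair with getCurrentKey()/getCurrentValue(). A minimal sketch of that protocol for any RecordReader<LongWritable, V>; the dump helper is hypothetical, and in practice the MapReduce framework drives this loop:

```java
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.mapreduce.RecordReader;

public class RecordReaderLoopSketch {
    // Consume any new-API reader keyed by LongWritable, e.g. the DBRecordReader
    // returned by DBInputFormat.createRecordReader.
    public static <V> void dump(RecordReader<LongWritable, V> reader)
            throws IOException, InterruptedException {
        while (reader.nextKeyValue()) {
            LongWritable key = reader.getCurrentKey();   // current record index
            V value = reader.getCurrentValue();
            System.out.println(key.get() + " -> " + value);
        }
        reader.close();
    }
}
```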
Methods in org.apache.hadoop.mapreduce.lib.db that return types with arguments of type LongWritable

Modifier and Type | Method and Description |
---|---|
protected RecordReader<LongWritable,T> | DBInputFormat.createDBRecordReader(org.apache.hadoop.mapreduce.lib.db.DBInputFormat.DBInputSplit split, Configuration conf) |
protected RecordReader<LongWritable,T> | OracleDataDrivenDBInputFormat.createDBRecordReader(org.apache.hadoop.mapreduce.lib.db.DBInputFormat.DBInputSplit split, Configuration conf) |
protected RecordReader<LongWritable,T> | DataDrivenDBInputFormat.createDBRecordReader(org.apache.hadoop.mapreduce.lib.db.DBInputFormat.DBInputSplit split, Configuration conf) |
RecordReader<LongWritable,T> | DBInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context) Create a record reader for a given split. |
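These factory methods mean that, in the new API, a DBInputFormat-based job feeds its mappers LongWritable row indexes paired with DBWritable values. A hedged sketch of the corresponding Job setup, with a hypothetical UserRow class (same users-table shape as the old-API sketch above) and placeholder JDBC settings:

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;
import org.apache.hadoop.mapreduce.lib.db.DBInputFormat;
import org.apache.hadoop.mapreduce.lib.db.DBWritable;

public class NewApiDbInputSketch {

    // Hypothetical row class for a "users" table (id, name).
    public static class UserRow implements Writable, DBWritable {
        long id;
        String name;
        public void readFields(ResultSet rs) throws SQLException {
            id = rs.getLong(1); name = rs.getString(2);
        }
        public void write(PreparedStatement ps) throws SQLException {
            ps.setLong(1, id); ps.setString(2, name);
        }
        public void readFields(DataInput in) throws IOException {
            id = in.readLong(); name = in.readUTF();
        }
        public void write(DataOutput out) throws IOException {
            out.writeLong(id); out.writeUTF(name);
        }
    }

    public static Job buildJob() throws IOException {
        Configuration conf = new Configuration();
        // Placeholder JDBC settings.
        DBConfiguration.configureDB(conf,
            "org.postgresql.Driver", "jdbc:postgresql://localhost/mydb", "user", "password");

        Job job = Job.getInstance(conf, "db-import");
        // createRecordReader (backed by createDBRecordReader) will hand the mapper
        // LongWritable row indexes as keys and UserRow instances as values.
        job.setInputFormatClass(DBInputFormat.class);
        DBInputFormat.setInput(job, UserRow.class, "users", null, "id", "id", "name");
        return job;
    }
}
```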
Methods in org.apache.hadoop.mapreduce.lib.db with parameters of type LongWritable

Modifier and Type | Method and Description |
---|---|
boolean | DBRecordReader.next(LongWritable key, T value) Deprecated. |
Methods in org.apache.hadoop.mapreduce.lib.input that return types with arguments of type LongWritable

Modifier and Type | Method and Description |
---|---|
RecordReader<LongWritable,Text> | TextInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context) |
RecordReader<LongWritable,Text> | CombineTextInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context) |
RecordReader<LongWritable,BytesWritable> | FixedLengthInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context) |
RecordReader<LongWritable,Text> | NLineInputFormat.createRecordReader(InputSplit genericSplit, TaskAttemptContext context) |
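All four new-API input formats key their records with LongWritable positions: byte offsets for the text-based formats and record positions for FixedLengthInputFormat. A configuration sketch showing the variants side by side; the path, record length, and line count are illustrative values:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FixedLengthInputFormat;
import org.apache.hadoop.mapreduce.lib.input.NLineInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

public class NewApiInputFormatSketch {
    public static Job buildJob(Configuration conf) throws Exception {
        Job job = Job.getInstance(conf, "input-format-demo");
        FileInputFormat.addInputPath(job, new Path("/tmp/in")); // hypothetical path

        // Default line-oriented input: keys are LongWritable byte offsets,
        // values are Text lines.
        job.setInputFormatClass(TextInputFormat.class);

        // Alternatively, NLineInputFormat gives each map task N input lines,
        // still keyed by LongWritable offsets:
        // job.setInputFormatClass(NLineInputFormat.class);
        // NLineInputFormat.setNumLinesPerSplit(job, 1000);

        // FixedLengthInputFormat reads fixed-width binary records as
        // (LongWritable position, BytesWritable record) pairs:
        // job.setInputFormatClass(FixedLengthInputFormat.class);
        // FixedLengthInputFormat.setRecordLength(job.getConfiguration(), 128);

        return job;
    }
}
```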
Method parameters in org.apache.hadoop.mapreduce.lib.reduce with type arguments of type LongWritable

Modifier and Type | Method and Description |
---|---|
void | LongSumReducer.reduce(KEY key, Iterable<LongWritable> values, org.apache.hadoop.mapreduce.Reducer.Context context) |
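The new-API LongSumReducer<KEY> sums the Iterable<LongWritable> values per key, which makes it a drop-in reducer (and combiner) for counting jobs. A sketch of a word-count job built around it; the TokenMapper class and the paths are hypothetical:

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.reduce.LongSumReducer;

public class NewApiWordCountSketch {

    // Hypothetical mapper: emits (token, 1L) for every whitespace-separated token.
    public static class TokenMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
        private static final LongWritable ONE = new LongWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer it = new StringTokenizer(value.toString());
            while (it.hasMoreTokens()) {
                word.set(it.nextToken());
                context.write(word, ONE);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word-count");
        job.setJarByClass(NewApiWordCountSketch.class);

        job.setMapperClass(TokenMapper.class);
        // LongSumReducer adds the LongWritable counts per key; it is safe to
        // reuse as a combiner because addition is associative and commutative.
        job.setCombinerClass(LongSumReducer.class);
        job.setReducerClass(LongSumReducer.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);

        FileInputFormat.addInputPath(job, new Path("/tmp/in"));      // hypothetical paths
        FileOutputFormat.setOutputPath(job, new Path("/tmp/out"));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```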
Copyright © 2015 Apache Software Foundation. All rights reserved.