java.lang.Object
  org.apache.hadoop.contrib.utils.join.JobBase
    org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
public abstract class DataJoinReducerBase extends JobBase
This abstract class serves as the base class for the reducer class of a data join job. The reduce function first groups the values according to their input tags and then computes the cross product over the groups. For each tuple in the cross product, it calls the following method, which is expected to be implemented in a subclass:

protected abstract TaggedMapOutput combine(Object[] tags, Object[] values);

This method is expected to produce one output value from an array of records coming from different sources. The user code can also perform filtering here: it may return null if it decides that the records do not meet certain conditions.
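The sketch below is an illustrative subclass, assuming two comma-separated input sources; the names SampleTaggedJoinReducer and SampleTaggedRecord, as well as the join and filtering rules, are assumptions for this example and not part of the API.

```java
import org.apache.hadoop.contrib.utils.join.DataJoinReducerBase;
import org.apache.hadoop.contrib.utils.join.TaggedMapOutput;
import org.apache.hadoop.io.Text;

// Hypothetical subclass; SampleTaggedRecord is an assumed TaggedMapOutput
// implementation that wraps a Text payload.
public class SampleTaggedJoinReducer extends DataJoinReducerBase {

    // Called once for every tuple in the cross product of the per-tag groups.
    protected TaggedMapOutput combine(Object[] tags, Object[] values) {
        // Filtering: only keep tuples that matched records from at least two sources.
        if (tags.length < 2) {
            return null;
        }
        // Build one joined record by concatenating the payloads of the matched records.
        StringBuilder joined = new StringBuilder();
        for (int i = 0; i < values.length; i++) {
            if (i > 0) {
                joined.append(',');
            }
            TaggedMapOutput record = (TaggedMapOutput) values[i];
            joined.append(record.getData().toString());
        }
        TaggedMapOutput result = new SampleTaggedRecord(new Text(joined.toString()));
        result.setTag((Text) tags[0]);
        return result;
    }
}
```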
Field Summary

| Modifier and Type | Field |
|---|---|
| protected long | collected |
| protected JobConf | job |
| protected long | largestNumOfValues |
| static Text | NUM_OF_VALUES_FIELD |
| protected long | numOfValues |
| protected Reporter | reporter |
| static Text | SOURCE_TAGS_FIELD |
Fields inherited from class org.apache.hadoop.contrib.utils.join.JobBase

LOG
Constructor Summary

DataJoinReducerBase()
Method Summary

| Modifier and Type | Method | Description |
|---|---|---|
| void | close() | |
| protected void | collect(Object key, TaggedMapOutput aRecord, OutputCollector output, Reporter reporter) | The subclass can override this method to perform additional filtering and/or other processing logic before a value is collected. |
| protected abstract TaggedMapOutput | combine(Object[] tags, Object[] values) | |
| void | configure(JobConf job) | Initializes a new instance from a JobConf. |
| protected ResetableIterator | createResetableIterator() | The subclass can provide a different implementation of ResetableIterator. |
| void | map(Object arg0, Object arg1, OutputCollector arg2, Reporter arg3) | Maps a single input key/value pair into an intermediate key/value pair. |
| void | reduce(Object key, Iterator values, OutputCollector output, Reporter reporter) | Reduces values for a given key. |
Methods inherited from class org.apache.hadoop.contrib.utils.join.JobBase

addDoubleValue, addLongValue, getDoubleValue, getLongValue, getReport, report, setDoubleValue, setLongValue

Methods inherited from class java.lang.Object

clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Field Detail

protected Reporter reporter
protected long largestNumOfValues
protected long numOfValues
protected long collected
protected JobConf job
public static Text SOURCE_TAGS_FIELD
public static Text NUM_OF_VALUES_FIELD
Constructor Detail

public DataJoinReducerBase()
Method Detail

public void close() throws IOException

Throws:
IOException
public void configure(JobConf job)

Description copied from class: JobBase
Initializes a new instance from a JobConf.
Specified by: configure in interface JobConfigurable
Overrides: configure in class JobBase
Parameters:
job - the configuration

protected ResetableIterator createResetableIterator()

The subclass can provide a different implementation of ResetableIterator.
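As context for configure(JobConf), the following is a minimal driver sketch using the old mapred API; the framework instantiates the reducer and calls configure(JobConf) on it at task startup. SampleTaggedJoinMapper, SampleTaggedJoinReducer, and SampleTaggedRecord are the hypothetical classes from the earlier example, and the command-line paths are assumptions for illustration.

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.TextOutputFormat;

public class SampleJoinDriver {
    public static void main(String[] args) throws Exception {
        JobConf job = new JobConf(SampleJoinDriver.class);
        job.setJobName("sample data join");

        // Hypothetical DataJoinMapperBase / DataJoinReducerBase subclasses.
        job.setMapperClass(SampleTaggedJoinMapper.class);
        job.setReducerClass(SampleTaggedJoinReducer.class);

        job.setInputFormat(TextInputFormat.class);
        job.setOutputFormat(TextOutputFormat.class);
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        // Map outputs are the tagged records produced by the mapper.
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(SampleTaggedRecord.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);

        JobClient.runJob(job);
    }
}
```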
public void reduce(Object key, Iterator values, OutputCollector output, Reporter reporter) throws IOException

Description copied from interface: Reducer
Reduces values for a given key.

The framework calls this method for each <key, (list of values)> pair in the grouped inputs. Output values must be of the same type as input values. Input keys must not be altered. The framework will reuse the key and value objects that are passed into the reduce, so the application should clone any objects it wants to keep a copy of. In many cases, all values are combined into zero or one value.

Output pairs are collected with calls to OutputCollector.collect(Object, Object).

Applications can use the Reporter provided to report progress or just indicate that they are alive. In scenarios where the application takes a significant amount of time to process individual key/value pairs, this is crucial, since the framework might otherwise assume that the task has timed out and kill it. The other way of avoiding this is to set mapred.task.timeout to a high-enough value (or even zero for no time-outs).

Parameters:
key - the key.
values - the list of values to reduce.
output - to collect keys and combined values.
reporter - facility to report progress.
Throws:
IOException
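Since the Field Summary above lists a protected reporter field, a subclass whose combine does expensive per-tuple work can, assuming that field holds the Reporter passed to reduce, signal liveness from inside combine. The fragment below is a sketch, with expensiveLookup standing in for any hypothetical slow step.

```java
// Fragment from inside a combine(Object[] tags, Object[] values) implementation
// that performs slow per-tuple work.
for (Object value : values) {
    expensiveLookup((TaggedMapOutput) value); // hypothetical time-consuming step
    if (reporter != null) {
        reporter.progress(); // tell the framework this task is still alive
    }
}
```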
protected void collect(Object key, TaggedMapOutput aRecord, OutputCollector output, Reporter reporter) throws IOException

The subclass can override this method to perform additional filtering and/or other processing logic before a value is collected.

Parameters:
key -
aRecord -
output -
reporter -
Throws:
IOException
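As a sketch, the hypothetical SampleTaggedJoinReducer shown earlier could override collect to drop empty joined records and count them; the empty-payload rule and the counter names are assumptions for illustration.

```java
// Inside the hypothetical SampleTaggedJoinReducer: filter joined records just
// before they are written out, and count anything that gets dropped.
@Override
protected void collect(Object key, TaggedMapOutput aRecord,
                       OutputCollector output, Reporter reporter) throws IOException {
    // Assumed filtering rule: records with an empty payload are not emitted.
    if (aRecord.getData().toString().isEmpty()) {
        reporter.incrCounter("SampleJoin", "DROPPED_RECORDS", 1);
        return;
    }
    // Delegate to the base class for the default collection behavior.
    super.collect(key, aRecord, output, reporter);
}
```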
protected abstract TaggedMapOutput combine(Object[] tags, Object[] values)

Parameters:
tags - a list of source tags
values - a value per source
public void map(Object arg0, Object arg1, OutputCollector arg2, Reporter arg3) throws IOException

Description copied from interface: Mapper
Maps a single input key/value pair into an intermediate key/value pair.

Output pairs need not be of the same types as input pairs. A given input pair may map to zero or many output pairs. Output pairs are collected with calls to OutputCollector.collect(Object, Object).

Applications can use the Reporter provided to report progress or just indicate that they are alive. In scenarios where the application takes a significant amount of time to process individual key/value pairs, this is crucial, since the framework might otherwise assume that the task has timed out and kill it. The other way of avoiding this is to set mapred.task.timeout to a high-enough value (or even zero for no time-outs).

Parameters:
arg0 - the input key.
arg1 - the input value.
arg2 - collects mapped keys and values.
arg3 - facility to report progress.
Throws:
IOException