org.apache.hadoop.contrib.utils.join
Class DataJoinReducerBase

java.lang.Object
  extended by org.apache.hadoop.contrib.utils.join.JobBase
      extended by org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
All Implemented Interfaces:
Closeable, JobConfigurable, Mapper, Reducer

public abstract class DataJoinReducerBase
extends JobBase

This abstract class serves as the base class for the reducer class of a data join job. The reduce function first groups the values according to their input tags and then computes the cross product over the groups. For each tuple in the cross product, it calls the following method, which is expected to be implemented in a subclass:

protected abstract TaggedMapOutput combine(Object[] tags, Object[] values);

This method is expected to produce one output value from an array of records of different sources. The user code can also perform filtering here: it may return null if it decides that the records do not meet certain conditions.
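
For concreteness, here is a minimal sketch of a concrete reducer. All names are illustrative: TextTaggedOutput is a hypothetical TaggedMapOutput subclass wrapping a Text payload, and the sketch assumes TaggedMapOutput (from this package) exposes a protected tag field, setTag(Text), and an abstract getData() returning the wrapped Writable.

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.contrib.utils.join.DataJoinReducerBase;
import org.apache.hadoop.contrib.utils.join.TaggedMapOutput;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;

public class SampleJoinReducer extends DataJoinReducerBase {

  // Hypothetical TaggedMapOutput subclass carrying a Text payload.
  public static class TextTaggedOutput extends TaggedMapOutput {
    private Text data;

    public TextTaggedOutput() {
      this.data = new Text();
    }

    public TextTaggedOutput(Text data) {
      this.data = data;
    }

    public Writable getData() {
      return data;
    }

    public void write(DataOutput out) throws IOException {
      this.tag.write(out);
      this.data.write(out);
    }

    public void readFields(DataInput in) throws IOException {
      this.tag.readFields(in);
      this.data.readFields(in);
    }
  }

  protected TaggedMapOutput combine(Object[] tags, Object[] values) {
    // Inner join over two sources: drop any tuple that lacks a record
    // from one of the sources by returning null.
    if (tags.length < 2) {
      return null;
    }
    // Concatenate the payloads of the tagged records into one joined row.
    StringBuilder joined = new StringBuilder();
    for (Object value : values) {
      TaggedMapOutput record = (TaggedMapOutput) value;
      if (joined.length() > 0) {
        joined.append(',');
      }
      joined.append(record.getData().toString());
    }
    TextTaggedOutput result = new TextTaggedOutput(new Text(joined.toString()));
    result.setTag((Text) tags[0]);
    return result;
  }
}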


Field Summary
protected  long collected
protected  JobConf job
protected  long largestNumOfValues
static Text NUM_OF_VALUES_FIELD
protected  long numOfValues
protected  Reporter reporter
static Text SOURCE_TAGS_FIELD

Fields inherited from class org.apache.hadoop.contrib.utils.join.JobBase
LOG
 
Constructor Summary
DataJoinReducerBase()
 
Method Summary
 void close()
protected  void collect(Object key, TaggedMapOutput aRecord, OutputCollector output, Reporter reporter)
          The subclass can override this method to perform additional filtering and/or other processing logic before a value is collected.
protected abstract  TaggedMapOutput combine(Object[] tags, Object[] values)
          Produces one output value from an array of records of different sources; may return null to filter out a tuple.
 void configure(JobConf job)
          Initializes a new instance from a JobConf.
protected  ResetableIterator createResetableIterator()
          The subclass can provide a different implementation of ResetableIterator.
 void map(Object arg0, Object arg1, OutputCollector arg2, Reporter arg3)
          Maps a single input key/value pair into an intermediate key/value pair.
 void reduce(Object key, Iterator values, OutputCollector output, Reporter reporter)
          Reduces values for a given key.
 
Methods inherited from class org.apache.hadoop.contrib.utils.join.JobBase
addDoubleValue, addLongValue, getDoubleValue, getLongValue, getReport, report, setDoubleValue, setLongValue
 
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
 

Field Detail

reporter

protected Reporter reporter

largestNumOfValues

protected long largestNumOfValues

numOfValues

protected long numOfValues

collected

protected long collected

job

protected JobConf job

SOURCE_TAGS_FIELD

public static Text SOURCE_TAGS_FIELD

NUM_OF_VALUES_FIELD

public static Text NUM_OF_VALUES_FIELD
Constructor Detail

DataJoinReducerBase

public DataJoinReducerBase()
Method Detail

close

public void close()
           throws IOException
Throws:
IOException

configure

public void configure(JobConf job)
Description copied from class: JobBase
Initializes a new instance from a JobConf.

Specified by:
configure in interface JobConfigurable
Overrides:
configure in class JobBase
Parameters:
job - the configuration
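
As a hedged sketch of where configure(JobConf) comes into play, a driver might wire a subclass of this reducer into an old-API job as follows. SampleJoinMapper and SampleJoinReducer are hypothetical subclasses of DataJoinMapperBase and DataJoinReducerBase (see the combine() sketch above), and the input/output paths come from the command line.

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.TextOutputFormat;

public class JoinJobDriver {
  public static void main(String[] args) throws Exception {
    JobConf job = new JobConf(JoinJobDriver.class);
    job.setJobName("data-join");

    // Hypothetical DataJoinMapperBase / DataJoinReducerBase subclasses.
    job.setMapperClass(SampleJoinMapper.class);
    job.setReducerClass(SampleJoinReducer.class);

    job.setInputFormat(TextInputFormat.class);
    job.setOutputFormat(TextOutputFormat.class);

    // The map output value class must be the concrete TaggedMapOutput
    // subclass emitted by the mapper (TextTaggedOutput in the sketch above).
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(SampleJoinReducer.TextTaggedOutput.class);

    FileInputFormat.setInputPaths(job, new Path(args[0]), new Path(args[1]));
    FileOutputFormat.setOutputPath(job, new Path(args[2]));

    JobClient.runJob(job); // the framework calls configure(job) on each task
  }
}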

createResetableIterator

protected ResetableIterator createResetableIterator()
The subclass can provide a different implementation of ResetableIterator. This is necessary if the number of values in a reduce call is very high. The default provided here uses ArrayListBackedIterator.

Returns:
an instance of ResetableIterator.
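
As an illustration, an override of this factory method might look as follows. This is a sketch only, assuming ArrayListBackedIterator (the default named above) has a no-argument constructor; a real override would return a disk-backed ResetableIterator instead.

import org.apache.hadoop.contrib.utils.join.ArrayListBackedIterator;
import org.apache.hadoop.contrib.utils.join.DataJoinReducerBase;
import org.apache.hadoop.contrib.utils.join.ResetableIterator;

public abstract class LargeGroupJoinReducerBase extends DataJoinReducerBase {

  protected ResetableIterator createResetableIterator() {
    // Mirrors the documented default; substitute a spill-to-disk
    // ResetableIterator here when reduce groups are too large for memory.
    return new ArrayListBackedIterator();
  }
}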

reduce

public void reduce(Object key,
                   Iterator values,
                   OutputCollector output,
                   Reporter reporter)
            throws IOException
Description copied from interface: Reducer
Reduces values for a given key.

The framework calls this method for each <key, (list of values)> pair in the grouped inputs. Output values must be of the same type as input values. Input keys must not be altered. The framework will reuse the key and value objects that are passed into the reduce, therefore the application should clone the objects they want to keep a copy of. In many cases, all values are combined into zero or one value.

Output pairs are collected with calls to OutputCollector.collect(Object,Object).

Applications can use the Reporter provided to report progress or just indicate that they are alive. In scenarios where the application takes a significant amount of time to process individual key/value pairs, this is crucial, since the framework might otherwise assume that the task has timed out and kill it. The other way of avoiding this is to set mapred.task.timeout to a high-enough value (or even zero for no time-outs).

Parameters:
key - the key.
values - the list of values to reduce.
output - to collect keys and combined values.
reporter - facility to report progress.
Throws:
IOException
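
As a hedged sketch of the second option mentioned above, a driver can raise (or disable) the task timeout in the JobConf; the 20-minute value is purely illustrative.

import org.apache.hadoop.mapred.JobConf;

public class TimeoutSetup {
  public static void raiseTaskTimeout(JobConf job) {
    // mapred.task.timeout is in milliseconds; zero disables time-outs.
    job.setLong("mapred.task.timeout", 20 * 60 * 1000L); // illustrative: 20 minutes
  }
}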

collect

protected void collect(Object key,
                       TaggedMapOutput aRecord,
                       OutputCollector output,
                       Reporter reporter)
                throws IOException
The subclass can override this method to perform additional filtering and/or other processing logic before a value is collected.

Parameters:
key - the key of the record to be collected
aRecord - the combined record to be collected
output - to collect keys and combined values
reporter - facility to report progress
Throws:
IOException
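
For instance, a subclass might drop combined records with an empty payload before delegating to the base implementation. This is a sketch only; the filtering condition is purely illustrative.

import java.io.IOException;

import org.apache.hadoop.contrib.utils.join.DataJoinReducerBase;
import org.apache.hadoop.contrib.utils.join.TaggedMapOutput;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public abstract class FilteringJoinReducerBase extends DataJoinReducerBase {

  protected void collect(Object key, TaggedMapOutput aRecord,
                         OutputCollector output, Reporter reporter)
      throws IOException {
    // Illustrative filter: skip combined records with an empty payload;
    // otherwise delegate to the base implementation to emit the record.
    if (aRecord != null && aRecord.getData().toString().length() > 0) {
      super.collect(key, aRecord, output, reporter);
    }
  }
}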

combine

protected abstract TaggedMapOutput combine(Object[] tags,
                                           Object[] values)
Produces one output value from an array of records of different sources; may return null to filter out tuples that do not meet the join conditions.

Parameters:
tags - a list of source tags
values - a value per source
Returns:
combined value derived from values of the sources

map

public void map(Object arg0,
                Object arg1,
                OutputCollector arg2,
                Reporter arg3)
         throws IOException
Description copied from interface: Mapper
Maps a single input key/value pair into an intermediate key/value pair.

Output pairs need not be of the same types as input pairs. A given input pair may map to zero or many output pairs. Output pairs are collected with calls to OutputCollector.collect(Object,Object).

Applications can use the Reporter provided to report progress or just indicate that they are alive. In scenarios where the application takes a significant amount of time to process individual key/value pairs, this is crucial, since the framework might otherwise assume that the task has timed out and kill it. The other way of avoiding this is to set mapred.task.timeout to a high-enough value (or even zero for no time-outs).

Parameters:
arg0 - the input key.
arg1 - the input value.
arg2 - collects mapped keys and values.
arg3 - facility to report progress.
Throws:
IOException


Copyright © 2009 The Apache Software Foundation