Uses of Package org.apache.hadoop.mapreduce

Packages that use org.apache.hadoop.mapreduce
org.apache.hadoop.examples Hadoop example code. 
org.apache.hadoop.filecache   
org.apache.hadoop.mapred A software framework for easily writing applications that process vast amounts of data (multi-terabyte data-sets) in parallel on large clusters (thousands of nodes) of commodity hardware in a reliable, fault-tolerant manner.
org.apache.hadoop.mapreduce   
org.apache.hadoop.mapreduce.lib.db   
org.apache.hadoop.mapreduce.lib.fieldsel   
org.apache.hadoop.mapreduce.lib.input   
org.apache.hadoop.mapreduce.lib.jobcontrol   
org.apache.hadoop.mapreduce.lib.map   
org.apache.hadoop.mapreduce.lib.output   
org.apache.hadoop.mapreduce.lib.partition   
org.apache.hadoop.mapreduce.lib.reduce   
org.apache.hadoop.mapreduce.security.token   
org.apache.hadoop.mapreduce.server.jobtracker   
org.apache.hadoop.mapreduce.server.tasktracker.userlogs   
org.apache.hadoop.mapreduce.split   

Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.examples
Mapper
          Maps input key/value pairs to a set of intermediate key/value pairs.
Mapper.Context
          The Context passed on to the Mapper implementations.
Partitioner
          Partitions the key space.
Reducer
          Reduces a set of intermediate values which share a key to a smaller set of values.
Reducer.Context
          The Context passed on to the Reducer implementations.
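
The example programs are built directly on the Mapper, Reducer and Partitioner types listed above. As a rough, hypothetical sketch (WordCountMapper and WordCountReducer are illustrative names, not classes in the examples package), a word count written against this API could look like:

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

/** Emits (word, 1) for every token in a line of input. */
public class WordCountMapper
    extends Mapper<LongWritable, Text, Text, IntWritable> {

  private static final IntWritable ONE = new IntWritable(1);
  private final Text word = new Text();

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    StringTokenizer tokens = new StringTokenizer(value.toString());
    while (tokens.hasMoreTokens()) {
      word.set(tokens.nextToken());
      context.write(word, ONE);    // Mapper.Context collects intermediate pairs
    }
  }
}

/** Sums the counts emitted for each word. */
class WordCountReducer
    extends Reducer<Text, IntWritable, Text, IntWritable> {

  @Override
  protected void reduce(Text key, Iterable<IntWritable> values, Context context)
      throws IOException, InterruptedException {
    int sum = 0;
    for (IntWritable count : values) {
      sum += count.get();
    }
    context.write(key, new IntWritable(sum));
  }
}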

Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.filecache
JobID
          JobID represents the immutable and unique identifier for the job.

Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.mapred
ClusterMetrics
          Status information on the current state of the Map-Reduce cluster.
Counter
          A named counter that tracks the progress of a map/reduce job.
ID
          A general identifier, which internally stores the id as an integer.
InputSplit
          InputSplit represents the data to be processed by an individual Mapper.
JobACL
          Job-related ACLs.
JobContext
          A read-only view of the job that is provided to the tasks while they are running.
JobID
          JobID represents the immutable and unique identifier for the job.
JobStatus.State
          The current state of the job.
OutputCommitter
          OutputCommitter describes the commit of task output for a Map-Reduce job.
OutputFormat
          OutputFormat describes the output-specification for a Map-Reduce job.
RecordWriter
          RecordWriter writes the output <key, value> pairs to an output file.
Reducer
          Reduces a set of intermediate values which share a key to a smaller set of values.
Reducer.Context
          The Context passed on to the Reducer implementations.
StatusReporter
          A facility through which running tasks report progress, status and Counter updates.
TaskAttemptContext
          The context for task attempts.
TaskAttemptID
          TaskAttemptID represents the immutable and unique identifier for a task attempt.
TaskID
          TaskID represents the immutable and unique identifier for a Map or Reduce Task.
TaskType
          Enum for map, reduce, job-setup, job-cleanup, task-cleanup task types.
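
JobID, TaskID and TaskAttemptID are the identifier types shared between the old org.apache.hadoop.mapred API and the new one. They round-trip through their canonical string form; the identifiers in the sketch below are made up for illustration:

import org.apache.hadoop.mapreduce.JobID;
import org.apache.hadoop.mapreduce.TaskAttemptID;

public class IdExample {
  public static void main(String[] args) {
    // Parse a job identifier from its canonical string form.
    JobID job = JobID.forName("job_200904211745_0002");
    System.out.println(job.getJtIdentifier());   // prints 200904211745
    System.out.println(job.getId());             // prints 2

    // A task attempt id embeds the job id, task type, task id and attempt number.
    TaskAttemptID attempt =
        TaskAttemptID.forName("attempt_200904211745_0002_m_000004_0");
    System.out.println(attempt.getJobID().equals(job));   // prints true
  }
}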

Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.mapreduce
Counter
          A named counter that tracks the progress of a map/reduce job.
CounterGroup
          A group of Counters that logically belong together.
Counters
          Counters holds per job/task counters, organized into CounterGroups.
ID
          A general identifier, which internally stores the id as an integer.
InputFormat
          InputFormat describes the input-specification for a Map-Reduce job.
InputSplit
          InputSplit represents the data to be processed by an individual Mapper.
Job.JobState
          The state of the Job as tracked on the client side (DEFINE or RUNNING).
JobACL
          Job-related ACLs.
JobContext
          A read-only view of the job that is provided to the tasks while they are running.
JobID
          JobID represents the immutable and unique identifier for the job.
JobStatus.State
          The current state of the job.
MapContext
          The context that is given to the Mapper.
Mapper
          Maps input key/value pairs to a set of intermediate key/value pairs.
Mapper.Context
          The Context passed on to the Mapper implementations.
OutputCommitter
          OutputCommitter describes the commit of task output for a Map-Reduce job.
OutputFormat
          OutputFormat describes the output-specification for a Map-Reduce job.
Partitioner
          Partitions the key space.
RecordReader
          The record reader breaks the data into key/value pairs for input to the Mapper.
RecordWriter
          RecordWriter writes the output <key, value> pairs to an output file.
ReduceContext
          The context passed to the Reducer.
Reducer
          Reduces a set of intermediate values which share a key to a smaller set of values.
Reducer.Context
          The Context passed on to the Reducer implementations.
StatusReporter
          A facility through which running tasks report progress, status and Counter updates.
TaskAttemptContext
          The context for task attempts.
TaskAttemptID
          TaskAttemptID represents the immutable and unique identifier for a task attempt.
TaskID
          TaskID represents the immutable and unique identifier for a Map or Reduce Task.
TaskInputOutputContext
          A context object that allows input and output from the task.
TaskType
          Enum for map, reduce, job-setup, job-cleanup, task-cleanup task types.
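
Counter, CounterGroup and Counters carry per-task and per-job statistics. A hypothetical mapper (ValidatingMapper and its Quality enum are illustrative names) can update user-defined counters through its context; the framework aggregates the values across all tasks:

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

/** Passes through well-formed lines and counts the malformed ones. */
public class ValidatingMapper
    extends Mapper<LongWritable, Text, Text, NullWritable> {

  /** Any enum can name counters; this one is purely illustrative. */
  public enum Quality { GOOD_RECORDS, BAD_RECORDS }

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    if (value.toString().split("\t").length < 3) {
      context.getCounter(Quality.BAD_RECORDS).increment(1);
      return;
    }
    context.getCounter(Quality.GOOD_RECORDS).increment(1);
    context.write(value, NullWritable.get());
  }
}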

Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.mapreduce.lib.db
InputFormat
          InputFormat describes the input-specification for a Map-Reduce job.
InputSplit
          InputSplit represents the data to be processed by an individual Mapper.
Job
          The job submitter's view of the Job.
JobContext
          A read-only view of the job that is provided to the tasks while they are running.
OutputCommitter
          OutputCommitter describes the commit of task output for a Map-Reduce job.
OutputFormat
          OutputFormat describes the output-specification for a Map-Reduce job.
RecordReader
          The record reader breaks the data into key/value pairs for input to the Mapper.
RecordWriter
          RecordWriter writes the output <key, value> pairs to an output file.
TaskAttemptContext
          The context for task attempts.

Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.mapreduce.lib.fieldsel
Mapper
          Maps input key/value pairs to a set of intermediate key/value pairs.
Mapper.Context
          The Context passed on to the Mapper implementations.
Reducer
          Reduces a set of intermediate values which share a key to a smaller set of values.
Reducer.Context
          The Context passed on to the Reducer implementations.

Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.mapreduce.lib.input
InputFormat
          InputFormat describes the input-specification for a Map-Reduce job.
InputSplit
          InputSplit represents the data to be processed by an individual Mapper.
Job
          The job submitter's view of the Job.
JobContext
          A read-only view of the job that is provided to the tasks while they are running.
Mapper
          Maps input key/value pairs to a set of intermediate key/value pairs.
Mapper.Context
          The Context passed on to the Mapper implementations.
RecordReader
          The record reader breaks the data into key/value pairs for input to the Mapper.
TaskAttemptContext
          The context for task attempts.
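
Job and the input classes above are typically wired together in a small driver. The sketch below is illustrative rather than canonical: WordCountDriver is a made-up name, WordCountMapper and WordCountReducer refer to the earlier sketch, and TextOutputFormat/FileOutputFormat come from org.apache.hadoop.mapreduce.lib.output:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class WordCountDriver {

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = new Job(conf, "word count");
    job.setJarByClass(WordCountDriver.class);

    job.setMapperClass(WordCountMapper.class);        // from the earlier sketch
    job.setReducerClass(WordCountReducer.class);

    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);

    job.setInputFormatClass(TextInputFormat.class);   // lib.input
    job.setOutputFormatClass(TextOutputFormat.class); // lib.output

    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    // Submit and block until the job finishes; 'true' prints progress.
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}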

Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.mapreduce.lib.jobcontrol
Job
          The job submitter's view of the Job.
JobID
          JobID represents the immutable and unique identifier for the job.

Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.mapreduce.lib.map
Job
          The job submitter's view of the Job.
JobContext
          A read-only view of the job that is provided to the tasks while they are running.
Mapper
          Maps input key/value pairs to a set of intermediate key/value pairs.
Mapper.Context
          The Context passed on to the Mapper implementations.

Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.mapreduce.lib.output
Job
          The job submitter's view of the Job.
JobContext
          A read-only view of the job that is provided to the tasks while they are running.
JobStatus.State
          The current state of the job.
OutputCommitter
          OutputCommitter describes the commit of task output for a Map-Reduce job.
OutputFormat
          OutputFormat describes the output-specification for a Map-Reduce job.
RecordWriter
          RecordWriter writes the output <key, value> pairs to an output file.
TaskAttemptContext
          The context for task attempts.
TaskInputOutputContext
          A context object that allows input and output from the task.
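
On the output side, the OutputFormat and RecordWriter configuration is usually done through the static helpers on FileOutputFormat. A small illustrative helper (OutputConfig is a made-up name, and gzip is just one possible codec):

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

/** Configures text output, compressed with gzip. */
public final class OutputConfig {

  private OutputConfig() {}

  public static void configure(Job job, String outputDir) {
    job.setOutputFormatClass(TextOutputFormat.class);
    FileOutputFormat.setOutputPath(job, new Path(outputDir));
    // Compress the final part files produced by the RecordWriters.
    FileOutputFormat.setCompressOutput(job, true);
    FileOutputFormat.setOutputCompressorClass(job, GzipCodec.class);
  }
}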

Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.mapreduce.lib.partition
InputFormat
          InputFormat describes the input-specification for a Map-Reduce job.
Job
          The job submitter's view of the Job.
JobContext
          A read-only view of the job that is provided to the tasks while they are running.
Partitioner
          Partitions the key space.
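
Partitioner decides which reduce task receives each intermediate key. A minimal custom implementation (FirstCharPartitioner is illustrative, not a library class):

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

/** Routes keys to reducers by the hash of their first character. */
public class FirstCharPartitioner extends Partitioner<Text, IntWritable> {

  @Override
  public int getPartition(Text key, IntWritable value, int numPartitions) {
    if (key.getLength() == 0) {
      return 0;                                // all empty keys go to reducer 0
    }
    // Mask off the sign bit so the partition number is never negative.
    return (key.charAt(0) & Integer.MAX_VALUE) % numPartitions;
  }
}

// In the driver:
//   job.setPartitionerClass(FirstCharPartitioner.class);
//   job.setNumReduceTasks(4);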

Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.mapreduce.lib.reduce
Reducer
          Reduces a set of intermediate values which share a key to a smaller set of values.
Reducer.Context
          The Context passed on to the Reducer implementations.
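
The Reducer subclasses in this package can often double as combiners. A hypothetical helper (SumJobSetup is a made-up name) that reuses org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer for both roles, assuming IntWritable values:

import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer;

/** Wires IntSumReducer in as both combiner and reducer. */
public final class SumJobSetup {

  private SumJobSetup() {}

  public static void useIntSums(Job job) {
    // Summing IntWritable counts is associative and commutative,
    // so running it early as a combiner does not change the result.
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
  }
}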

Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.mapreduce.security.token
JobID
          JobID represents the immutable and unique identifier for the job.

Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.mapreduce.server.jobtracker
TaskType
          Enum for map, reduce, job-setup, job-cleanup, task-cleanup task types.

Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.mapreduce.server.tasktracker.userlogs
JobID
          JobID represents the immutable and unique identifier for the job.

Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.mapreduce.split
InputSplit
          InputSplit represents the data to be processed by an individual Mapper.
JobID
          JobID represents the immutable and unique identifier for the job.



Copyright © 2009 The Apache Software Foundation