| Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.mapred | |
| --- | --- |
| Cluster | Provides a way to access information about the map/reduce cluster. |
| Counter | A named counter that tracks the progress of a map/reduce job. |
| Counters | Counters holds per job/task counters, defined either by the Map-Reduce framework or applications. |
| ID | A general identifier, which internally stores the id as an integer. |
| InputSplit | InputSplit represents the data to be processed by an individual Mapper. |
| JobContext | A read-only view of the job that is provided to the tasks while they are running. |
| JobID | JobID represents the immutable and unique identifier for the job. |
| JobStatus | Describes the current status of a job. |
| OutputCommitter | OutputCommitter describes the commit of task output for a Map-Reduce job. |
| QueueInfo | Class that contains information about the job queues maintained by the Hadoop Map/Reduce framework. |
| TaskAttemptContext | The context for task attempts. |
| TaskAttemptID | TaskAttemptID represents the immutable and unique identifier for a task attempt. |
| TaskCompletionEvent | This is used to track task completion events on the job tracker. |
| TaskID | TaskID represents the immutable and unique identifier for a Map or Reduce Task. |
| TaskType | Enum for map, reduce, job-setup, job-cleanup and task-cleanup task types. |
| Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.mapred.lib | |
| --- | --- |
| InputSplit | InputSplit represents the data to be processed by an individual Mapper. |
| JobContext | A read-only view of the job that is provided to the tasks while they are running. |
| RecordReader | The record reader breaks the data into key/value pairs for input to the Mapper. |
| TaskAttemptContext | The context for task attempts. |
| Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.mapreduce | |
| --- | --- |
| Cluster | Provides a way to access information about the map/reduce cluster. |
| ClusterMetrics | Status information on the current state of the Map-Reduce cluster. |
| Counter | A named counter that tracks the progress of a map/reduce job. |
| Counters | Counters holds per job/task counters, defined either by the Map-Reduce framework or applications. |
| ID | A general identifier, which internally stores the id as an integer. |
| InputFormat | InputFormat describes the input-specification for a Map-Reduce job. |
| InputSplit | InputSplit represents the data to be processed by an individual Mapper. |
| Job | The job submitter's view of the Job. |
| JobContext | A read-only view of the job that is provided to the tasks while they are running. |
| JobCounter | Enum of job-level counters maintained by the Map-Reduce framework. |
| JobID | JobID represents the immutable and unique identifier for the job. |
| JobPriority | Used to describe the priority of the running job. |
| JobStatus | Describes the current status of a job. |
| Mapper | Maps input key/value pairs to a set of intermediate key/value pairs. |
| OutputCommitter | OutputCommitter describes the commit of task output for a Map-Reduce job. |
| OutputFormat | OutputFormat describes the output-specification for a Map-Reduce job. |
| Partitioner | Partitions the key space. |
| QueueAclsInfo | Class to encapsulate Queue ACLs for a particular user. |
| QueueInfo | Class that contains information about the job queues maintained by the Hadoop Map/Reduce framework. |
| QueueState | Enum representing the state of a queue. |
| RecordReader | The record reader breaks the data into key/value pairs for input to the Mapper. |
| RecordWriter | RecordWriter writes the output <key, value> pairs to an output file. |
| Reducer | Reduces a set of intermediate values which share a key to a smaller set of values. |
| TaskAttemptContext | The context for task attempts. |
| TaskAttemptID | TaskAttemptID represents the immutable and unique identifier for a task attempt. |
| TaskCompletionEvent | This is used to track task completion events on the job tracker. |
| TaskCompletionEvent.Status | Enum of possible task completion statuses. |
| TaskCounter | Enum of task-level counters maintained by the Map-Reduce framework. |
| TaskID | TaskID represents the immutable and unique identifier for a Map or Reduce Task. |
| TaskInputOutputContext | A context object that allows input and output from the task. |
| TaskTrackerInfo | Information about a TaskTracker. |
| TaskType | Enum for map, reduce, job-setup, job-cleanup and task-cleanup task types. |
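
As a rough illustration of how the classes in the table above fit together, here is a minimal word-count driver. This is a sketch assuming a Hadoop 2.x-era version of the API; WordCountDriver, TokenMapper and SumReducer are illustrative names, not Hadoop classes.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {

  // Mapper: one (line offset, line text) pair in, one (word, 1) pair out per token.
  public static class TokenMapper
      extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      for (String token : value.toString().split("\\s+")) {
        if (token.isEmpty()) {
          continue;
        }
        word.set(token);
        context.write(word, ONE);
        // Counter: user-defined counters are created on first use.
        context.getCounter("wordcount", "tokens").increment(1);
      }
    }
  }

  // Reducer: sums the 1s emitted for each word.
  public static class SumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable value : values) {
        sum += value.get();
      }
      context.write(key, new IntWritable(sum));
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");   // Job: the submitter's view
    job.setJarByClass(WordCountDriver.class);
    job.setMapperClass(TokenMapper.class);
    job.setCombinerClass(SumReducer.class);
    job.setReducerClass(SumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

The same Context handed to map() and reduce() also exposes the Counter API used above; Job is the submitter-side handle that configures and launches the whole thing.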
| Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.mapreduce.counters | |
| --- | --- |
| Counter | A named counter that tracks the progress of a map/reduce job. |
| Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.mapreduce.lib.aggregate | |
| --- | --- |
| Job | The job submitter's view of the Job. |
| Mapper | Maps input key/value pairs to a set of intermediate key/value pairs. |
| Reducer | Reduces a set of intermediate values which share a key to a smaller set of values. |
| Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.mapreduce.lib.chain | |
| --- | --- |
| Job | The job submitter's view of the Job. |
| Mapper | Maps input key/value pairs to a set of intermediate key/value pairs. |
| Reducer | Reduces a set of intermediate values which share a key to a smaller set of values. |
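
A sketch of how the chain package strings several map stages and a reduce stage into one job ([MAP+ / REDUCE MAP*] style). It reuses TokenMapper and SumReducer from the word-count sketch above; LowerCaseMapper is an illustrative extra stage, and the calls assume a Hadoop 2.x-era API.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.chain.ChainMapper;
import org.apache.hadoop.mapreduce.lib.chain.ChainReducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class ChainDriver {

  /** Illustrative pre-processing stage: lower-cases each input line. */
  public static class LowerCaseMapper
      extends Mapper<LongWritable, Text, LongWritable, Text> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      context.write(key, new Text(value.toString().toLowerCase()));
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "chained word count");
    job.setJarByClass(ChainDriver.class);

    // Map phase: both mappers run back-to-back inside the same map task.
    ChainMapper.addMapper(job, LowerCaseMapper.class,
        LongWritable.class, Text.class, LongWritable.class, Text.class,
        new Configuration(false));
    ChainMapper.addMapper(job, WordCountDriver.TokenMapper.class,
        LongWritable.class, Text.class, Text.class, IntWritable.class,
        new Configuration(false));

    // Reduce phase: a single reducer closes the chain.
    ChainReducer.setReducer(job, WordCountDriver.SumReducer.class,
        Text.class, IntWritable.class, Text.class, IntWritable.class,
        new Configuration(false));

    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

Each stage receives its own private Configuration (the new Configuration(false) argument), so chained stages cannot clobber one another's settings.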
| Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.mapreduce.lib.db | |
| --- | --- |
| InputFormat | InputFormat describes the input-specification for a Map-Reduce job. |
| InputSplit | InputSplit represents the data to be processed by an individual Mapper. |
| Job | The job submitter's view of the Job. |
| JobContext | A read-only view of the job that is provided to the tasks while they are running. |
| OutputCommitter | OutputCommitter describes the commit of task output for a Map-Reduce job. |
| OutputFormat | OutputFormat describes the output-specification for a Map-Reduce job. |
| RecordReader | The record reader breaks the data into key/value pairs for input to the Mapper. |
| RecordWriter | RecordWriter writes the output <key, value> pairs to an output file. |
| TaskAttemptContext | The context for task attempts. |
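
The db package pulls job input straight out of a JDBC table (and can write results back). A hedged sketch of the read side, assuming a Hadoop 2.x-era API; the JDBC driver class, connection URL, credentials and the users table are placeholders.

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;
import org.apache.hadoop.mapreduce.lib.db.DBInputFormat;
import org.apache.hadoop.mapreduce.lib.db.DBWritable;

public class DbInputSketch {

  /** One row of the hypothetical "users" table. */
  public static class UserRecord implements Writable, DBWritable {
    private long id;
    private String name;

    @Override public void readFields(ResultSet rs) throws SQLException {
      id = rs.getLong(1);
      name = rs.getString(2);
    }
    @Override public void write(PreparedStatement ps) throws SQLException {
      ps.setLong(1, id);
      ps.setString(2, name);
    }
    @Override public void readFields(DataInput in) throws IOException {
      id = in.readLong();
      name = in.readUTF();
    }
    @Override public void write(DataOutput out) throws IOException {
      out.writeLong(id);
      out.writeUTF(name);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // JDBC driver, URL and credentials are placeholders.
    DBConfiguration.configureDB(conf, "com.mysql.jdbc.Driver",
        "jdbc:mysql://dbhost/analytics", "user", "password");

    Job job = Job.getInstance(conf, "db input");
    job.setJarByClass(DbInputSketch.class);
    job.setInputFormatClass(DBInputFormat.class);

    // Read (id, name) from the "users" table, ordered by id so splits are stable.
    DBInputFormat.setInput(job, UserRecord.class, "users",
        null /* conditions */, "id" /* orderBy */, "id", "name");

    // Mapper, reducer and output settings are omitted; each map task then
    // receives (LongWritable row number, UserRecord) pairs.
  }
}
```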
| Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.mapreduce.lib.fieldsel | |
| --- | --- |
| Mapper | Maps input key/value pairs to a set of intermediate key/value pairs. |
| Reducer | Reduces a set of intermediate values which share a key to a smaller set of values. |
| Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.mapreduce.lib.input | |
| --- | --- |
| InputFormat | InputFormat describes the input-specification for a Map-Reduce job. |
| InputSplit | InputSplit represents the data to be processed by an individual Mapper. |
| Job | The job submitter's view of the Job. |
| JobContext | A read-only view of the job that is provided to the tasks while they are running. |
| Mapper | Maps input key/value pairs to a set of intermediate key/value pairs. |
| RecordReader | The record reader breaks the data into key/value pairs for input to the Mapper. |
| TaskAttemptContext | The context for task attempts. |
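
Most custom input handling follows one pattern: extend FileInputFormat (which computes the InputSplits) and supply a RecordReader that turns a split's bytes into key/value pairs. A minimal sketch that wraps the stock LineRecordReader; UpperCaseTextInputFormat is an illustrative name and the upper-casing is purely for demonstration.

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.LineRecordReader;

/** Illustrative InputFormat: delegates to LineRecordReader, upper-casing each line. */
public class UpperCaseTextInputFormat extends FileInputFormat<LongWritable, Text> {

  @Override
  public RecordReader<LongWritable, Text> createRecordReader(
      InputSplit split, TaskAttemptContext context) {
    return new RecordReader<LongWritable, Text>() {
      private final LineRecordReader lines = new LineRecordReader();
      private final Text upper = new Text();

      @Override public void initialize(InputSplit s, TaskAttemptContext c)
          throws IOException, InterruptedException {
        lines.initialize(s, c);        // splits were computed by FileInputFormat
      }
      @Override public boolean nextKeyValue() throws IOException {
        return lines.nextKeyValue();
      }
      @Override public LongWritable getCurrentKey() throws IOException {
        return lines.getCurrentKey();  // byte offset of the line
      }
      @Override public Text getCurrentValue() throws IOException {
        upper.set(lines.getCurrentValue().toString().toUpperCase());
        return upper;
      }
      @Override public float getProgress() throws IOException {
        return lines.getProgress();
      }
      @Override public void close() throws IOException {
        lines.close();
      }
    };
  }
}
```

A driver would select it with job.setInputFormatClass(UpperCaseTextInputFormat.class); only the record-reading step changes, while split computation stays with FileInputFormat.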
| Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.mapreduce.lib.jobcontrol | |
| --- | --- |
| Job | The job submitter's view of the Job. |
| JobID | JobID represents the immutable and unique identifier for the job. |
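
The jobcontrol package runs a DAG of jobs, submitting each one once its dependencies have finished. A sketch, assuming a Hadoop 2.x-era API; the two jobs and the pipeline name are placeholders, and their Mapper/Reducer/IO settings are omitted.

```java
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob;
import org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl;

public class PipelineDriver {
  public static void main(String[] args) throws Exception {
    // Two ordinary jobs; their Mapper/Reducer/IO settings are omitted here.
    Job extract = Job.getInstance(new Configuration(), "extract");
    Job aggregate = Job.getInstance(new Configuration(), "aggregate");

    ControlledJob first = new ControlledJob(extract, null);
    ControlledJob second = new ControlledJob(aggregate, null);
    second.addDependingJob(first);          // aggregate waits for extract

    JobControl pipeline = new JobControl("nightly-pipeline");
    pipeline.addJob(first);
    pipeline.addJob(second);

    // JobControl is a Runnable that submits jobs as their dependencies finish.
    Thread runner = new Thread(pipeline);
    runner.setDaemon(true);
    runner.start();
    while (!pipeline.allFinished()) {
      Thread.sleep(5000);
    }
    List<ControlledJob> failed = pipeline.getFailedJobList();
    pipeline.stop();
    System.exit(failed.isEmpty() ? 0 : 1);
  }
}
```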
| Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.mapreduce.lib.join | |
| --- | --- |
| InputFormat | InputFormat describes the input-specification for a Map-Reduce job. |
| InputSplit | InputSplit represents the data to be processed by an individual Mapper. |
| JobContext | A read-only view of the job that is provided to the tasks while they are running. |
| RecordReader | The record reader breaks the data into key/value pairs for input to the Mapper. |
| TaskAttemptContext | The context for task attempts. |
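
The join package performs map-side joins over inputs that are already sorted and identically partitioned; the join is described by an expression stored in the job configuration. A sketch of an inner join of two such datasets, assuming the 2.x-era CompositeInputFormat API; the paths are placeholders.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat;
import org.apache.hadoop.mapreduce.lib.join.CompositeInputFormat;

public class MapSideJoinSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // Build the join expression: an inner join of two sorted, identically
    // partitioned inputs, both read with KeyValueTextInputFormat.
    conf.set(CompositeInputFormat.JOIN_EXPR,
        CompositeInputFormat.compose("inner", KeyValueTextInputFormat.class,
            new Path("/data/users"), new Path("/data/orders")));

    Job job = Job.getInstance(conf, "map-side join");
    job.setInputFormatClass(CompositeInputFormat.class);
    // Map input is (key, TupleWritable value): one tuple slot per source.
    // The mapper, output format and output paths are omitted from this sketch.
  }
}
```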
| Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.mapreduce.lib.map | |
| --- | --- |
| Job | The job submitter's view of the Job. |
| JobContext | A read-only view of the job that is provided to the tasks while they are running. |
| MapContext | The context that is given to the Mapper. |
| Mapper | Maps input key/value pairs to a set of intermediate key/value pairs. |
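
The map package supplies ready-made Mapper wrappers. A sketch of MultithreadedMapper, which runs several copies of a delegate mapper (here the stock TokenCounterMapper) inside one map task; the thread count is illustrative and the calls assume a Hadoop 2.x-era API.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.map.MultithreadedMapper;
import org.apache.hadoop.mapreduce.lib.map.TokenCounterMapper;

public class MultithreadedDriver {
  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "multithreaded map");

    // The job runs MultithreadedMapper, which fans records out to N copies
    // of the real mapper (here the stock TokenCounterMapper).
    job.setMapperClass(MultithreadedMapper.class);
    MultithreadedMapper.setMapperClass(job, TokenCounterMapper.class);
    MultithreadedMapper.setNumberOfThreads(job, 8);

    job.setMapOutputKeyClass(Text.class);
    job.setMapOutputValueClass(IntWritable.class);
    // Input/output paths and the reduce side are omitted from this sketch.
  }
}
```

This only pays off when the delegate mapper is CPU-heavy or blocks on external I/O; for plain record transformation the extra threads just add overhead.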
| Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.mapreduce.lib.output | |
| --- | --- |
| Job | The job submitter's view of the Job. |
| JobContext | A read-only view of the job that is provided to the tasks while they are running. |
| OutputCommitter | OutputCommitter describes the commit of task output for a Map-Reduce job. |
| OutputFormat | OutputFormat describes the output-specification for a Map-Reduce job. |
| Partitioner | Partitions the key space. |
| RecordWriter | RecordWriter writes the output <key, value> pairs to an output file. |
| TaskAttemptContext | The context for task attempts. |
| TaskInputOutputContext | A context object that allows input and output from the task. |
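
Beyond plain file output, the output package includes MultipleOutputs, which lets one task write to several named outputs. A sketch, assuming a Hadoop 2.x-era API; SplitOutputReducer, the "small"/"large" names and the threshold of 100 are illustrative.

```java
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

/** Illustrative reducer that splits its output into "small" and "large" files. */
public class SplitOutputReducer
    extends Reducer<Text, IntWritable, Text, IntWritable> {

  private MultipleOutputs<Text, IntWritable> out;

  @Override
  protected void setup(Context context) {
    out = new MultipleOutputs<>(context);   // context is a TaskInputOutputContext
  }

  @Override
  protected void reduce(Text key, Iterable<IntWritable> values, Context context)
      throws IOException, InterruptedException {
    int sum = 0;
    for (IntWritable v : values) {
      sum += v.get();
    }
    // Route the record to one of the named outputs registered in the driver.
    out.write(sum < 100 ? "small" : "large", key, new IntWritable(sum));
  }

  @Override
  protected void cleanup(Context context) throws IOException, InterruptedException {
    out.close();
  }

  /** Driver-side registration of the two named outputs. */
  public static void configure(Job job) {
    MultipleOutputs.addNamedOutput(job, "small",
        TextOutputFormat.class, Text.class, IntWritable.class);
    MultipleOutputs.addNamedOutput(job, "large",
        TextOutputFormat.class, Text.class, IntWritable.class);
  }
}
```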
| Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.mapreduce.lib.partition | |
| --- | --- |
| Job | The job submitter's view of the Job. |
| JobContext | A read-only view of the job that is provided to the tasks while they are running. |
| Partitioner | Partitions the key space. |
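
A custom Partitioner decides which reducer receives each intermediate key. A small sketch; FirstLetterPartitioner and the 26-reducer layout are illustrative, not part of Hadoop.

```java
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Partitioner;

/** Illustrative partitioner: routes keys by their first character. */
public class FirstLetterPartitioner extends Partitioner<Text, IntWritable> {

  @Override
  public int getPartition(Text key, IntWritable value, int numPartitions) {
    if (key.getLength() == 0) {
      return 0;
    }
    // Non-negative hash of the first character, folded into [0, numPartitions).
    return (Character.toLowerCase(key.charAt(0)) & Integer.MAX_VALUE) % numPartitions;
  }

  /** Driver-side wiring. */
  public static void configure(Job job) {
    job.setPartitionerClass(FirstLetterPartitioner.class);
    job.setNumReduceTasks(26);
  }
}
```

The stock HashPartitioner is the default; a custom one is mainly useful when output must be grouped or ordered across reducers in a specific way.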
| Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.mapreduce.lib.reduce | |
| --- | --- |
| ReduceContext | The context passed to the Reducer. |
| Reducer | Reduces a set of intermediate values which share a key to a smaller set of values. |
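
The reduce package ships small stock reducers such as IntSumReducer and LongSumReducer. A sketch of wiring IntSumReducer in as both combiner and reducer (the usual word-count-style setup); the helper class name is illustrative.

```java
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer;

public class StockReducerWiring {
  /** Uses the stock IntSumReducer for both the combine and reduce phases. */
  public static void configure(Job job) {
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
  }
}
```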
| Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.mapreduce.task | |
| --- | --- |
| JobContext | A read-only view of the job that is provided to the tasks while they are running. |
| Classes in org.apache.hadoop.mapreduce used by org.apache.hadoop.mapreduce.tools | |
| --- | --- |
| Cluster | Provides a way to access information about the map/reduce cluster. |
| Counters | Counters holds per job/task counters, defined either by the Map-Reduce framework or applications. |
| Job | The job submitter's view of the Job. |
| JobStatus | Describes the current status of a job. |
| TaskAttemptID | TaskAttemptID represents the immutable and unique identifier for a task attempt. |
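
The tools package hosts the command-line client built on top of these classes; the same queries can be made programmatically. A sketch, assuming a Hadoop 2.x-era API; the job id is expected on the command line and JobStatusReport is an illustrative name.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Cluster;
import org.apache.hadoop.mapreduce.Counters;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.JobID;
import org.apache.hadoop.mapreduce.JobStatus;
import org.apache.hadoop.mapreduce.TaskCounter;

public class JobStatusReport {
  public static void main(String[] args) throws Exception {
    Cluster cluster = new Cluster(new Configuration());

    // One status line per job known to the cluster.
    for (JobStatus status : cluster.getAllJobStatuses()) {
      System.out.printf("%s\t%s%n", status.getJobID(), status.getState());
    }

    // Drill into a single job's counters (job id passed on the command line).
    Job job = cluster.getJob(JobID.forName(args[0]));
    if (job != null) {
      Counters counters = job.getCounters();
      System.out.println("map input records: "
          + counters.findCounter(TaskCounter.MAP_INPUT_RECORDS).getValue());
    }
    cluster.close();
  }
}
```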