Package org.apache.hadoop.mapreduce
Class Summary:

Cluster - Provides a way to access information about the map/reduce cluster.
ClusterMetrics - Status information on the current state of the Map-Reduce cluster.
ContextFactory - A factory that lets applications deal with inconsistencies in the MapReduce Context Objects API between hadoop-0.20 and later versions.
Counter - A named counter that tracks the progress of a map/reduce job.
CounterGroup - A group of Counters that logically belong together.
Counters - Holds per-job/task counters, defined either by the Map-Reduce framework or by applications.
CryptoUtils - Utilities that make it easier to work with cryptographic streams.
CustomJobEndNotifier - An interface for implementing a custom job-end notifier.
FileSystemCounter
ID - A general identifier, which internally stores the id as an integer.
InputFormat<K,V> - Describes the input specification for a Map-Reduce job.
InputSplit - Represents the data to be processed by an individual Mapper.
Job - The job submitter's view of the job.
JobACL - Job-related ACLs.
JobContext - A read-only view of the job that is provided to the tasks while they are running.
JobID - The immutable and unique identifier for the job.
JobPriority - Used to describe the priority of the running job.
JobStatus - Describes the current status of a job.
JobStatus.State - Current state of the job.
JobSubmissionFiles - A utility to manage job submission files.
MapContext<KEYIN,VALUEIN,KEYOUT,VALUEOUT> - The context that is given to the Mapper.
Mapper<KEYIN,VALUEIN,KEYOUT,VALUEOUT> - Maps input key/value pairs to a set of intermediate key/value pairs.
MarkableIterator<VALUE> - A wrapper iterator class that implements the MarkableIterator interface.
MRConfig - Placeholder for cluster-level configuration keys.
MRJobConfig
OutputCommitter - Describes the commit of task output for a Map-Reduce job.
OutputFormat<K,V> - Describes the output specification for a Map-Reduce job.
Partitioner<KEY,VALUE> - Partitions the key space.
QueueAclsInfo - Encapsulates queue ACLs for a particular user.
QueueInfo - Contains information about the job queues maintained by the Hadoop Map/Reduce framework.
QueueState - Enum representing queue state.
RecordReader<KEYIN,VALUEIN> - Breaks the data into key/value pairs for input to the Mapper.
RecordWriter<K,V> - Writes the output <key, value> pairs to an output file.
ReduceContext<KEYIN,VALUEIN,KEYOUT,VALUEOUT> - The context passed to the Reducer.
ReduceContext.ValueIterator<VALUEIN> - Iterator over the values for a given group of records.
Reducer<KEYIN,VALUEIN,KEYOUT,VALUEOUT> - Reduces a set of intermediate values that share a key to a smaller set of values.
SharedCacheConfig - A class for parsing configuration parameters associated with the shared cache.
StatusReporter
TaskAttemptContext - The context for task attempts.
TaskAttemptID - The immutable and unique identifier for a task attempt.
TaskCompletionEvent - Used to track task completion events on the job tracker.
TaskID - The immutable and unique identifier for a Map or Reduce task.
TaskInputOutputContext<KEYIN,VALUEIN,KEYOUT,VALUEOUT> - A context object that allows input and output from the task.
TaskReport - A report on the state of a task.
TaskTrackerInfo - Information about a TaskTracker.
TaskType - Enum for map, reduce, job-setup, job-cleanup, and task-cleanup task types.
TypeConverter
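To show how the central types in this package fit together (Job, Mapper, Reducer, Partitioner-driven shuffle), here is the classic word-count sketch against the org.apache.hadoop.mapreduce API. It assumes the Hadoop client libraries are on the classpath and takes placeholder input/output paths from the command line; it is an illustrative sketch, not part of this package's documentation.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: emits (word, 1) for every token in each input line.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reducer: sums the counts for each word; the framework groups
  // values by key between the map and reduce phases.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values,
        Context context) throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // local pre-aggregation
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));     // input dir
    FileOutputFormat.setOutputPath(job, new Path(args[1]));   // output dir
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

The map output types default to the job's output types (Text, IntWritable here); when they differ, set them explicitly with setMapOutputKeyClass/setMapOutputValueClass.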