Package org.apache.hadoop.mapreduce

Class Summary
Cluster
    Provides a way to access information about the map/reduce cluster.
Cluster.JobTrackerStatus
    The state of the cluster's job tracker.
ClusterMetrics
    Status information on the current state of the Map-Reduce cluster.
ContextFactory
    A factory that lets applications deal with inconsistencies in the MapReduce context-object APIs between hadoop-0.20 and later versions.
Counter
    A named counter that tracks the progress of a map/reduce job.
CounterGroup
    A group of Counters that logically belong together.
Counters
    Counters holds per job/task counters, defined either by the Map-Reduce framework or applications.
CryptoUtils
    This class provides utilities to make it easier to work with cryptographic streams.
CustomJobEndNotifier
    An interface for implementing a custom job-end notifier.
FileSystemCounter
    Per-file-system counters maintained by the framework, such as bytes read and written.
ID
    A general identifier, which internally stores the id as an integer.
InputFormat
    InputFormat describes the input-specification for a Map-Reduce job.
InputSplit
    InputSplit represents the data to be processed by an individual Mapper.
Job
    The job submitter's view of the Job.
Job.JobState
    The state of a Job as seen by the submitter.
Job.TaskStatusFilter
    Filter used when fetching task completion events, e.g. only failed or killed tasks.
JobACL
    Job related ACLs.
JobContext
    A read-only view of the job that is provided to the tasks while they are running.
JobCounter
    Per-job counters maintained by the Map-Reduce framework.
JobID
    JobID represents the immutable and unique identifier for the job.
JobPriority
    Used to describe the priority of the running job.
JobStatus
    Describes the current status of a job.
JobStatus.State
    Current state of the job.
JobSubmissionFiles
    A utility to manage job submission files.
MapContext<KEYIN,VALUEIN,KEYOUT,VALUEOUT>
    The context that is given to the Mapper.
Mapper<KEYIN,VALUEIN,KEYOUT,VALUEOUT>
    Maps input key/value pairs to a set of intermediate key/value pairs.
MarkableIterator
    MarkableIterator is a wrapper iterator class that implements the MarkableIteratorInterface.
MRConfig
    Place holder for cluster-level configuration keys.
MRJobConfig
    Place holder for job-level configuration keys.
OutputCommitter
    OutputCommitter describes the commit of task output for a Map-Reduce job.
OutputFormat
    OutputFormat describes the output-specification for a Map-Reduce job.
Partitioner<KEY,VALUE>
    Partitions the key space.
QueueAclsInfo
    Class to encapsulate Queue ACLs for a particular user.
QueueInfo
    Class that contains the information regarding the Job Queues which are maintained by the Hadoop Map/Reduce framework.
QueueState
    Enum representing queue state.
RecordReader<KEYIN,VALUEIN>
    The record reader breaks the data into key/value pairs for input to the Mapper.
RecordWriter
    RecordWriter writes the output <key, value> pairs to an output file.
ReduceContext<KEYIN,VALUEIN,KEYOUT,VALUEOUT>
    The context passed to the Reducer.
ReduceContext.ValueIterator<VALUEIN>
    Iterator to iterate over values for a given group of records.
Reducer<KEYIN,VALUEIN,KEYOUT,VALUEOUT>
    Reduces a set of intermediate values which share a key to a smaller set of values.
SharedCacheConfig
    A class for parsing configuration parameters associated with the shared cache.
StatusReporter
    Abstract class through which a running task reports progress, sets status, and accesses counters.
TaskAttemptContext
    The context for task attempts.
TaskAttemptID
    TaskAttemptID represents the immutable and unique identifier for a task attempt.
TaskCompletionEvent
    This is used to track task completion events on the job tracker.
TaskCompletionEvent.Status
    The completion status of a task attempt, e.g. SUCCEEDED, FAILED, or KILLED.
TaskCounter
    Per-task counters maintained by the Map-Reduce framework.
TaskID
    TaskID represents the immutable and unique identifier for a Map or Reduce Task.
TaskInputOutputContext<KEYIN,VALUEIN,KEYOUT,VALUEOUT>
    A context object that allows input and output from the task.
TaskReport
    A report on the state of a task.
TaskTrackerInfo
    Information about a TaskTracker.
TaskType
    Enum for map, reduce, job-setup, job-cleanup, and task-cleanup task types.
TypeConverter
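The Mapper, Partitioner, and Reducer contracts summarized above can be illustrated without the framework itself. The sketch below is plain Java with no Hadoop dependency, mimicking the classic word-count flow: a map phase emits (word, 1) pairs as a Mapper would via context.write(), the pairs are grouped by key as the shuffle would group them, and a reduce phase sums each group as a Reducer would. The class and method names are illustrative, not Hadoop's.

```java
import java.util.AbstractMap;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class WordCountSketch {
    // "Map" phase: emit one (word, 1) pair per token, as a
    // Mapper<KEYIN,VALUEIN,Text,IntWritable>.map() would via context.write().
    static List<Map.Entry<String, Integer>> map(String line) {
        List<Map.Entry<String, Integer>> out = new ArrayList<>();
        for (String token : line.toLowerCase().split("\\s+")) {
            if (!token.isEmpty()) {
                out.add(new AbstractMap.SimpleEntry<>(token, 1));
            }
        }
        return out;
    }

    // "Shuffle": group intermediate pairs by key; in Hadoop a Partitioner
    // would additionally route each key to one of the reduce tasks.
    static Map<String, List<Integer>> shuffle(List<Map.Entry<String, Integer>> pairs) {
        Map<String, List<Integer>> groups = new TreeMap<>();
        for (Map.Entry<String, Integer> p : pairs) {
            groups.computeIfAbsent(p.getKey(), k -> new ArrayList<>()).add(p.getValue());
        }
        return groups;
    }

    // "Reduce" phase: sum the values that share a key, as a
    // Reducer.reduce() would over its Iterable of values.
    static Map<String, Integer> reduce(Map<String, List<Integer>> groups) {
        Map<String, Integer> out = new TreeMap<>();
        groups.forEach((word, counts) ->
                out.put(word, counts.stream().mapToInt(Integer::intValue).sum()));
        return out;
    }

    public static void main(String[] args) {
        Map<String, Integer> counts =
                reduce(shuffle(map("the quick fox and the lazy dog and the cat")));
        System.out.println(counts);
        // {and=2, cat=1, dog=1, fox=1, lazy=1, quick=1, the=3}
    }
}
```

In the real API the three phases run as separate tasks over InputSplits, with RecordReader producing the input pairs and RecordWriter persisting the output; this sketch only shows the data flow between the contracts.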
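JobID, TaskID, and TaskAttemptID above form a nesting hierarchy with a well-known string form, e.g. job_200707121733_0003, task_200707121733_0003_m_000005, and attempt_200707121733_0003_m_000005_0. A plain-Java sketch of how the pieces nest (no Hadoop dependency; the helper names are illustrative, and the zero-padding widths are an assumption based on the documented example IDs):

```java
public class MRIdSketch {
    // A JobID combines the cluster start timestamp with a job number
    // (zero-padded to 4 digits in the examples above).
    static String jobId(String clusterTs, int job) {
        return String.format("job_%s_%04d", clusterTs, job);
    }

    // A TaskID nests inside its JobID, adding the task type
    // ('m' for map, 'r' for reduce) and a task number (padded to 6 digits).
    static String taskId(String clusterTs, int job, char type, int task) {
        return String.format("task_%s_%04d_%c_%06d", clusterTs, job, type, task);
    }

    // A TaskAttemptID nests inside its TaskID, adding the attempt number,
    // so retries of the same task are distinguishable.
    static String attemptId(String clusterTs, int job, char type, int task, int attempt) {
        return String.format("attempt_%s_%04d_%c_%06d_%d", clusterTs, job, type, task, attempt);
    }

    public static void main(String[] args) {
        System.out.println(jobId("200707121733", 3));                // job_200707121733_0003
        System.out.println(taskId("200707121733", 3, 'm', 5));       // task_200707121733_0003_m_000005
        System.out.println(attemptId("200707121733", 3, 'm', 5, 0)); // attempt_200707121733_0003_m_000005_0
    }
}
```

The immutability of the real ID classes matters because they are used as map keys throughout the framework; this sketch only reproduces the textual layout.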