Uses of Class
org.apache.hadoop.mapreduce.Job

Packages that use Job
org.apache.hadoop.mapreduce.lib.db   
org.apache.hadoop.mapreduce.lib.input   
org.apache.hadoop.mapreduce.lib.jobcontrol   
org.apache.hadoop.mapreduce.lib.map   
org.apache.hadoop.mapreduce.lib.output   
org.apache.hadoop.mapreduce.lib.partition   
 

Uses of Job in org.apache.hadoop.mapreduce.lib.db
 

Methods in org.apache.hadoop.mapreduce.lib.db with parameters of type Job
static void DBInputFormat.setInput(Job job, Class<? extends DBWritable> inputClass, String inputQuery, String inputCountQuery)
          Initializes the map-part of the job with the appropriate input settings.
static void DataDrivenDBInputFormat.setInput(Job job, Class<? extends DBWritable> inputClass, String inputQuery, String inputBoundingQuery)
          setInput() takes a custom query and a separate "bounding query" to use instead of the custom "count query" used by DBInputFormat.
static void DBInputFormat.setInput(Job job, Class<? extends DBWritable> inputClass, String tableName, String conditions, String orderBy, String... fieldNames)
          Initializes the map-part of the job with the appropriate input settings.
static void DataDrivenDBInputFormat.setInput(Job job, Class<? extends DBWritable> inputClass, String tableName, String conditions, String splitBy, String... fieldNames)
          Note that the "orderBy" column is called "splitBy" in this version.
static void DBOutputFormat.setOutput(Job job, String tableName, int fieldCount)
          Initializes the reduce-part of the job with the appropriate output settings.
static void DBOutputFormat.setOutput(Job job, String tableName, String... fieldNames)
          Initializes the reduce-part of the job with the appropriate output settings.
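
A minimal driver sketch showing how these setters are typically combined. The JDBC driver class, connection URL, table and column names, and the UserRecord DBWritable implementation are all hypothetical; DBConfiguration.configureDB() supplies the connection settings that the input and output formats read at run time.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;
    import org.apache.hadoop.mapreduce.lib.db.DBInputFormat;
    import org.apache.hadoop.mapreduce.lib.db.DBOutputFormat;

    public class DBJobExample {
      public static void main(String[] args) throws Exception {
        Job job = new Job(new Configuration(), "db-example");

        // Connection settings live in the job's Configuration.
        DBConfiguration.configureDB(job.getConfiguration(),
            "com.mysql.jdbc.Driver",           // hypothetical JDBC driver
            "jdbc:mysql://localhost/mydb");    // hypothetical connection URL

        // Map side: UserRecord is a hypothetical DBWritable implementation.
        DBInputFormat.setInput(job, UserRecord.class,
            "users",              // table name
            null,                 // conditions (none)
            "id",                 // orderBy column
            "id", "name");        // field names to read

        // Reduce side: one output row per (name, count) pair.
        DBOutputFormat.setOutput(job, "user_counts", "name", "count");

        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }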
 

Uses of Job in org.apache.hadoop.mapreduce.lib.input
 

Methods in org.apache.hadoop.mapreduce.lib.input with parameters of type Job
static void FileInputFormat.addInputPath(Job job, Path path)
          Add a Path to the list of inputs for the map-reduce job.
static void MultipleInputs.addInputPath(Job job, Path path, Class<? extends InputFormat> inputFormatClass)
          Add a Path with a custom InputFormat to the list of inputs for the map-reduce job.
static void MultipleInputs.addInputPath(Job job, Path path, Class<? extends InputFormat> inputFormatClass, Class<? extends Mapper> mapperClass)
          Add a Path with a custom InputFormat and Mapper to the list of inputs for the map-reduce job.
static void FileInputFormat.addInputPaths(Job job, String commaSeparatedPaths)
          Add the given comma-separated paths to the list of inputs for the map-reduce job.
static void SequenceFileInputFilter.setFilterClass(Job job, Class<?> filterClass)
          Set the filter class.
static void FileInputFormat.setInputPathFilter(Job job, Class<? extends PathFilter> filter)
          Set a PathFilter to be applied to the input paths for the map-reduce job.
static void FileInputFormat.setInputPaths(Job job, Path... inputPaths)
          Set the array of Paths as the list of inputs for the map-reduce job.
static void FileInputFormat.setInputPaths(Job job, String commaSeparatedPaths)
          Set the given comma-separated paths as the list of inputs for the map-reduce job.
static void FileInputFormat.setMaxInputSplitSize(Job job, long size)
          Set the maximum input split size.
static void FileInputFormat.setMinInputSplitSize(Job job, long size)
          Set the minimum input split size.
static void NLineInputFormat.setNumLinesPerSplit(Job job, int numLines)
          Set the number of lines per split.
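
A hypothetical driver sketch combining these setters; the paths and the LogMapper class are illustrative only.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.MultipleInputs;
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

    public class InputSetupExample {
      public static void main(String[] args) throws Exception {
        Job job = new Job(new Configuration(), "input-example");

        // Accumulate inputs one path at a time or as a comma-separated list.
        FileInputFormat.addInputPath(job, new Path("/data/2009/01"));       // hypothetical
        FileInputFormat.addInputPaths(job, "/data/2009/02,/data/2009/03");  // hypothetical

        // Bound the split sizes, in bytes; the values are illustrative.
        FileInputFormat.setMinInputSplitSize(job, 64L * 1024 * 1024);
        FileInputFormat.setMaxInputSplitSize(job, 256L * 1024 * 1024);

        // A second input with its own format and mapper (LogMapper is hypothetical).
        MultipleInputs.addInputPath(job, new Path("/logs"),
            TextInputFormat.class, LogMapper.class);
      }
    }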
 

Uses of Job in org.apache.hadoop.mapreduce.lib.jobcontrol
 

Methods in org.apache.hadoop.mapreduce.lib.jobcontrol that return Job
 Job ControlledJob.getJob()
          Get the mapreduce job.
 

Methods in org.apache.hadoop.mapreduce.lib.jobcontrol with parameters of type Job
 void ControlledJob.setJob(Job job)
          Set the mapreduce job.
 

Constructors in org.apache.hadoop.mapreduce.lib.jobcontrol with parameters of type Job
ControlledJob(Job job, List<ControlledJob> dependingJobs)
          Construct a job.
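
A minimal sketch of chaining two jobs so the second runs only after the first completes; both jobs' actual map/reduce configuration is omitted.

    import java.util.Arrays;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob;
    import org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl;

    public class PipelineExample {
      public static void main(String[] args) throws Exception {
        Job first = new Job(new Configuration(), "first");    // setup omitted
        Job second = new Job(new Configuration(), "second");  // setup omitted

        ControlledJob cFirst = new ControlledJob(first, null);  // no dependencies
        ControlledJob cSecond =
            new ControlledJob(second, Arrays.asList(cFirst));   // waits for cFirst

        JobControl control = new JobControl("pipeline");
        control.addJob(cFirst);
        control.addJob(cSecond);
        new Thread(control).start();  // JobControl implements Runnable

        while (!control.allFinished()) {
          Thread.sleep(1000);
        }
        control.stop();
      }
    }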
 

Uses of Job in org.apache.hadoop.mapreduce.lib.map
 

Methods in org.apache.hadoop.mapreduce.lib.map with parameters of type Job
static <K1,V1,K2,V2> void MultithreadedMapper.setMapperClass(Job job, Class<? extends Mapper<K1,V1,K2,V2>> cls)
          Set the application's mapper class.
static void MultithreadedMapper.setNumberOfThreads(Job job, int threads)
          Set the number of threads in the pool for running maps.
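
The two setters are typically used together: the job's mapper is set to MultithreadedMapper itself, which then delegates to the application mapper on a pool of threads. This is mainly useful when the mapper is not CPU-bound. LookupMapper below is a hypothetical application mapper.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.map.MultithreadedMapper;

    public class MultithreadedExample {
      public static void main(String[] args) throws Exception {
        Job job = new Job(new Configuration(), "mt-example");

        // Run MultithreadedMapper as the task's mapper...
        job.setMapperClass(MultithreadedMapper.class);
        // ...and tell it which application mapper to run on its threads.
        MultithreadedMapper.setMapperClass(job, LookupMapper.class);  // hypothetical
        MultithreadedMapper.setNumberOfThreads(job, 8);
      }
    }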
 

Uses of Job in org.apache.hadoop.mapreduce.lib.output
 

Methods in org.apache.hadoop.mapreduce.lib.output with parameters of type Job
static void MultipleOutputs.addNamedOutput(Job job, String namedOutput, Class<? extends OutputFormat> outputFormatClass, Class<?> keyClass, Class<?> valueClass)
          Adds a named output for the job.
static void FileOutputFormat.setCompressOutput(Job job, boolean compress)
          Set whether the output of the job is compressed.
static void MultipleOutputs.setCountersEnabled(Job job, boolean enabled)
          Enables or disables counters for the named outputs.
static void SequenceFileOutputFormat.setOutputCompressionType(Job job, SequenceFile.CompressionType style)
          Set the SequenceFile.CompressionType for the output SequenceFile.
static void FileOutputFormat.setOutputCompressorClass(Job job, Class<? extends CompressionCodec> codecClass)
          Set the CompressionCodec to be used to compress job outputs.
static void LazyOutputFormat.setOutputFormatClass(Job job, Class<? extends OutputFormat> theClass)
          Set the underlying output format for LazyOutputFormat.
static void FileOutputFormat.setOutputPath(Job job, Path outputDir)
          Set the Path of the output directory for the map-reduce job.
static void SequenceFileAsBinaryOutputFormat.setSequenceFileOutputKeyClass(Job job, Class<?> theClass)
          Set the key class for the SequenceFile.
static void SequenceFileAsBinaryOutputFormat.setSequenceFileOutputValueClass(Job job, Class<?> theClass)
          Set the value class for the SequenceFile.
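
A driver sketch combining the compression and named-output setters above; the output path is hypothetical, and the named output "errors" would be written from tasks through a MultipleOutputs instance.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.io.compress.GzipCodec;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;
    import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;
    import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

    public class OutputSetupExample {
      public static void main(String[] args) throws Exception {
        Job job = new Job(new Configuration(), "output-example");

        job.setOutputFormatClass(SequenceFileOutputFormat.class);
        FileOutputFormat.setOutputPath(job, new Path("/out/run1"));  // hypothetical

        // Block-compress the SequenceFile output with gzip.
        FileOutputFormat.setCompressOutput(job, true);
        FileOutputFormat.setOutputCompressorClass(job, GzipCodec.class);
        SequenceFileOutputFormat.setOutputCompressionType(job,
            SequenceFile.CompressionType.BLOCK);

        // An additional named output, plus per-output record counters.
        MultipleOutputs.addNamedOutput(job, "errors",
            TextOutputFormat.class, Text.class, LongWritable.class);
        MultipleOutputs.setCountersEnabled(job, true);
      }
    }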
 

Uses of Job in org.apache.hadoop.mapreduce.lib.partition
 

Methods in org.apache.hadoop.mapreduce.lib.partition with parameters of type Job
 K[] InputSampler.Sampler.getSample(InputFormat<K,V> inf, Job job)
          For a given job, collect and return a subset of the keys from the input data.
 K[] InputSampler.SplitSampler.getSample(InputFormat<K,V> inf, Job job)
          From each split sampled, take the first numSamples / numSplits records.
 K[] InputSampler.RandomSampler.getSample(InputFormat<K,V> inf, Job job)
          Randomize the split order, then take the specified number of keys from each split sampled, where each key is selected with the specified probability and possibly replaced by a subsequently selected key when the quota of keys from that split is satisfied.
 K[] InputSampler.IntervalSampler.getSample(InputFormat<K,V> inf, Job job)
          For each split sampled, emit when the ratio of the number of records retained to the total record count is less than the specified frequency.
static void KeyFieldBasedComparator.setKeyFieldComparatorOptions(Job job, String keySpec)
          Set the KeyFieldBasedComparator options used to compare keys.
 void KeyFieldBasedPartitioner.setKeyFieldPartitionerOptions(Job job, String keySpec)
          Set the KeyFieldBasedPartitioner options used for the Partitioner.
static <K,V> void InputSampler.writePartitionFile(Job job, InputSampler.Sampler<K,V> sampler)
          Write a partition file for the given job, using the Sampler provided.
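
A sketch of the usual flow: choose a Sampler, then let writePartitionFile() sort the sampled keys and write split points for a TotalOrderPartitioner (same package). The sampling parameters are illustrative, and the job's input configuration is omitted.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.partition.InputSampler;
    import org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner;

    public class SamplerExample {
      public static void main(String[] args) throws Exception {
        Job job = new Job(new Configuration(), "sort-example");  // input setup omitted
        job.setPartitionerClass(TotalOrderPartitioner.class);

        // Keep each sampled key with probability 0.1, up to 10000 samples,
        // reading from at most 10 of the input splits.
        InputSampler.Sampler<Text, Text> sampler =
            new InputSampler.RandomSampler<Text, Text>(0.1, 10000, 10);

        // Computes split points from the sample and writes the partition file.
        InputSampler.writePartitionFile(job, sampler);
      }
    }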
 



Copyright © 2009 The Apache Software Foundation