Uses of Class org.apache.hadoop.mapred.JobConf
Packages that use JobConf | |
---|---|
org.apache.hadoop.contrib.index.example | |
org.apache.hadoop.contrib.index.mapred | |
org.apache.hadoop.contrib.utils.join | |
org.apache.hadoop.examples | Hadoop example code. |
org.apache.hadoop.examples.dancing | This package is a distributed implementation of Knuth's dancing links algorithm that can run under Hadoop. |
org.apache.hadoop.examples.terasort | This package consists of 3 map/reduce applications for Hadoop to compete in the annual terabyte sort competition. |
org.apache.hadoop.mapred | A software framework for easily writing applications which process vast amounts of data (multi-terabyte data-sets) in parallel on large clusters (thousands of nodes) built of commodity hardware in a reliable, fault-tolerant manner. |
org.apache.hadoop.mapred.jobcontrol | Utilities for managing dependent jobs. |
org.apache.hadoop.mapred.join | Given a set of sorted datasets keyed with the same class and yielding equal partitions, it is possible to effect a join of those datasets prior to the map. |
org.apache.hadoop.mapred.lib | Library of generally useful mappers, reducers, and partitioners. |
org.apache.hadoop.mapred.lib.aggregate | Classes for performing various counting and aggregations. |
org.apache.hadoop.mapred.lib.db | Input and output formats, with configuration helpers, for using JDBC databases as map/reduce sources and sinks. |
org.apache.hadoop.mapred.pipes | Hadoop Pipes allows C++ code to use Hadoop DFS and map/reduce. |
org.apache.hadoop.mapreduce | |
org.apache.hadoop.streaming | Hadoop Streaming is a utility which allows users to create and run Map-Reduce jobs with any executables (e.g. Unix shell utilities) as the mapper and/or the reducer. |
Uses of JobConf in org.apache.hadoop.contrib.index.example |
---|
Methods in org.apache.hadoop.contrib.index.example with parameters of type JobConf | |
---|---|
void |
LineDocLocalAnalysis.configure(JobConf job)
|
void |
IdentityLocalAnalysis.configure(JobConf job)
|
RecordReader<DocumentID,LineDocTextAndOp> |
LineDocInputFormat.getRecordReader(InputSplit split,
JobConf job,
Reporter reporter)
|
Uses of JobConf in org.apache.hadoop.contrib.index.mapred |
---|
Methods in org.apache.hadoop.contrib.index.mapred with parameters of type JobConf | |
---|---|
void |
IndexUpdateReducer.configure(JobConf job)
|
void |
IndexUpdateCombiner.configure(JobConf job)
|
void |
IndexUpdateMapper.configure(JobConf job)
|
void |
IndexUpdatePartitioner.configure(JobConf job)
|
RecordWriter<Shard,Text> |
IndexUpdateOutputFormat.getRecordWriter(FileSystem fs,
JobConf job,
String name,
Progressable progress)
|
Uses of JobConf in org.apache.hadoop.contrib.utils.join |
---|
Fields in org.apache.hadoop.contrib.utils.join declared as JobConf | |
---|---|
protected JobConf |
DataJoinMapperBase.job
|
protected JobConf |
DataJoinReducerBase.job
|
Methods in org.apache.hadoop.contrib.utils.join that return JobConf | |
---|---|
static JobConf |
DataJoinJob.createDataJoinJob(String[] args)
|
Methods in org.apache.hadoop.contrib.utils.join with parameters of type JobConf | |
---|---|
TaggedMapOutput |
TaggedMapOutput.clone(JobConf job)
|
void |
DataJoinMapperBase.configure(JobConf job)
|
void |
DataJoinReducerBase.configure(JobConf job)
|
void |
JobBase.configure(JobConf job)
Initializes a new instance from a JobConf. |
static boolean |
DataJoinJob.runJob(JobConf job)
Submit/run a map/reduce job. |
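A rough sketch of driving these two static helpers together; the argument layout follows DataJoinJob's own usage conventions, which are not reproduced on this page:

    // Hypothetical driver: build a data-join job from command-line arguments
    // and submit it with DataJoinJob.runJob.
    import org.apache.hadoop.contrib.utils.join.DataJoinJob;
    import org.apache.hadoop.mapred.JobConf;

    public class DataJoinDriver {
      public static void main(String[] args) throws Exception {
        JobConf job = DataJoinJob.createDataJoinJob(args); // parses args into a JobConf
        boolean success = DataJoinJob.runJob(job);         // submit/run the map/reduce job
        System.exit(success ? 0 : 1);
      }
    }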
Uses of JobConf in org.apache.hadoop.examples |
---|
Methods in org.apache.hadoop.examples that return JobConf | |
---|---|
JobConf |
SleepJob.setupJobConf(int numMapper,
int numReducer,
long mapSleepTime,
int mapSleepCount,
long reduceSleepTime,
int reduceSleepCount)
|
Methods in org.apache.hadoop.examples with parameters of type JobConf | |
---|---|
void |
SleepJob.configure(JobConf job)
|
void |
PiEstimator.PiReducer.configure(JobConf job)
Store job configuration. |
static BigDecimal |
PiEstimator.estimate(int numMaps,
long numPoints,
JobConf jobConf)
Run a map/reduce job for estimating Pi. |
RecordReader<IntWritable,IntWritable> |
SleepJob.SleepInputFormat.getRecordReader(InputSplit ignored,
JobConf conf,
Reporter reporter)
|
RecordReader<MultiFileWordCount.WordOffset,Text> |
MultiFileWordCount.MyInputFormat.getRecordReader(InputSplit split,
JobConf job,
Reporter reporter)
|
InputSplit[] |
SleepJob.SleepInputFormat.getSplits(JobConf conf,
int numSplits)
|
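PiEstimator.estimate above can be driven directly from a small main class; a minimal sketch, with arbitrary illustrative map and sample counts:

    import java.math.BigDecimal;
    import org.apache.hadoop.examples.PiEstimator;
    import org.apache.hadoop.mapred.JobConf;

    public class PiDriver {
      public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(PiDriver.class);
        // Run a map/reduce job with 10 maps and 100000 sample points per map.
        BigDecimal pi = PiEstimator.estimate(10, 100000L, conf);
        System.out.println("Estimated Pi = " + pi);
      }
    }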
Uses of JobConf in org.apache.hadoop.examples.dancing |
---|
Methods in org.apache.hadoop.examples.dancing with parameters of type JobConf | |
---|---|
void |
DistributedPentomino.PentMap.configure(JobConf conf)
|
Uses of JobConf in org.apache.hadoop.examples.terasort |
---|
Methods in org.apache.hadoop.examples.terasort with parameters of type JobConf | |
---|---|
static boolean |
TeraOutputFormat.getFinalSync(JobConf conf)
Does the user want a final sync at close? |
RecordReader<Text,Text> |
TeraInputFormat.getRecordReader(InputSplit split,
JobConf job,
Reporter reporter)
|
RecordWriter<Text,Text> |
TeraOutputFormat.getRecordWriter(FileSystem ignored,
JobConf job,
String name,
Progressable progress)
|
InputSplit[] |
TeraInputFormat.getSplits(JobConf conf,
int splits)
|
static void |
TeraOutputFormat.setFinalSync(JobConf conf,
boolean newValue)
Set the requirement for a final sync before the stream is closed. |
static void |
TeraInputFormat.writePartitionFile(JobConf conf,
Path partFile)
Use the input splits to take samples of the input and generate sample keys. |
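A sketch of how a TeraSort driver might combine the helpers above; the paths are placeholders, and input paths must be set before sampling:

    import org.apache.hadoop.examples.terasort.TeraInputFormat;
    import org.apache.hadoop.examples.terasort.TeraOutputFormat;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.JobConf;

    public class TeraSetup {
      public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(TeraSetup.class);
        FileInputFormat.setInputPaths(conf, new Path("/terasort/in"));
        // Sample the input splits and write sample keys for the partitioner.
        TeraInputFormat.writePartitionFile(conf, new Path("_partition.lst"));
        // Request a final sync before the output stream is closed.
        TeraOutputFormat.setFinalSync(conf, true);
      }
    }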
Uses of JobConf in org.apache.hadoop.mapred |
---|
Fields in org.apache.hadoop.mapred declared as JobConf | |
---|---|
protected JobConf |
Task.conf
|
protected JobConf |
Task.CombinerRunner.job
|
protected JobConf |
JobLocalizer.ttConf
|
Methods in org.apache.hadoop.mapred that return JobConf | |
---|---|
JobConf |
JobTracker.getConf()
Returns a handle to the JobTracker's Configuration |
JobConf |
JobContext.getJobConf()
Get the job Configuration |
JobConf |
TaskAttemptContext.getJobConf()
|
Methods in org.apache.hadoop.mapred with parameters of type JobConf | |
---|---|
static void |
FileInputFormat.addInputPath(JobConf conf,
Path path)
Add a Path to the list of inputs for the map-reduce job. |
static void |
FileInputFormat.addInputPaths(JobConf conf,
String commaSeparatedPaths)
Add the given comma separated paths to the list of inputs for the map-reduce job. |
void |
FileOutputFormat.checkOutputSpecs(FileSystem ignored,
JobConf job)
|
void |
SequenceFileAsBinaryOutputFormat.checkOutputSpecs(FileSystem ignored,
JobConf job)
|
void |
OutputFormat.checkOutputSpecs(FileSystem ignored,
JobConf job)
Check for validity of the output-specification for the job. |
void |
MapReduceBase.configure(JobConf job)
Default implementation that does nothing. |
void |
MapRunner.configure(JobConf job)
|
void |
TextInputFormat.configure(JobConf conf)
|
void |
JobConfigurable.configure(JobConf job)
Initializes a new instance from a JobConf. |
void |
KeyValueTextInputFormat.configure(JobConf conf)
|
void |
JobLocalizer.createWorkDir(JobConf jConf)
|
static boolean |
FileOutputFormat.getCompressOutput(JobConf conf)
Is the job output compressed? |
static PathFilter |
FileInputFormat.getInputPathFilter(JobConf conf)
Get a PathFilter instance of the filter set for the input paths. |
static Path[] |
FileInputFormat.getInputPaths(JobConf conf)
Get the list of input Paths for the map-reduce job. |
static String |
JobHistory.JobInfo.getJobHistoryFileName(JobConf jobConf,
JobID id)
Recover the job history filename from the history folder. |
static Path |
JobHistory.JobInfo.getJobHistoryLogLocationForUser(String logFileName,
JobConf jobConf)
Get the user job history file path |
static SequenceFile.CompressionType |
SequenceFileOutputFormat.getOutputCompressionType(JobConf conf)
Get the SequenceFile.CompressionType for the output SequenceFile. |
static Class<? extends CompressionCodec> |
FileOutputFormat.getOutputCompressorClass(JobConf conf,
Class<? extends CompressionCodec> defaultValue)
Get the CompressionCodec for compressing the job outputs. |
static Path |
FileOutputFormat.getOutputPath(JobConf conf)
Get the Path to the output directory for the map-reduce job. |
static Path |
FileOutputFormat.getPathForCustomFile(JobConf conf,
String name)
Helper function to generate a Path for a file that is unique for
the task within the job output directory. |
RecordReader<K,V> |
InputFormat.getRecordReader(InputSplit split,
JobConf job,
Reporter reporter)
Get the RecordReader for the given InputSplit. |
RecordReader<BytesWritable,BytesWritable> |
SequenceFileAsBinaryInputFormat.getRecordReader(InputSplit split,
JobConf job,
Reporter reporter)
|
RecordReader<K,V> |
SequenceFileInputFilter.getRecordReader(InputSplit split,
JobConf job,
Reporter reporter)
Create a record reader for the given split |
RecordReader<K,V> |
SequenceFileInputFormat.getRecordReader(InputSplit split,
JobConf job,
Reporter reporter)
|
RecordReader<LongWritable,Text> |
TextInputFormat.getRecordReader(InputSplit genericSplit,
JobConf job,
Reporter reporter)
|
RecordReader<Text,Text> |
SequenceFileAsTextInputFormat.getRecordReader(InputSplit split,
JobConf job,
Reporter reporter)
|
abstract RecordReader<K,V> |
FileInputFormat.getRecordReader(InputSplit split,
JobConf job,
Reporter reporter)
|
RecordReader<Text,Text> |
KeyValueTextInputFormat.getRecordReader(InputSplit genericSplit,
JobConf job,
Reporter reporter)
|
abstract RecordReader<K,V> |
MultiFileInputFormat.getRecordReader(InputSplit split,
JobConf job,
Reporter reporter)
Deprecated. |
RecordWriter<WritableComparable,Writable> |
MapFileOutputFormat.getRecordWriter(FileSystem ignored,
JobConf job,
String name,
Progressable progress)
|
abstract RecordWriter<K,V> |
FileOutputFormat.getRecordWriter(FileSystem ignored,
JobConf job,
String name,
Progressable progress)
|
RecordWriter<BytesWritable,BytesWritable> |
SequenceFileAsBinaryOutputFormat.getRecordWriter(FileSystem ignored,
JobConf job,
String name,
Progressable progress)
|
RecordWriter<K,V> |
OutputFormat.getRecordWriter(FileSystem ignored,
JobConf job,
String name,
Progressable progress)
Get the RecordWriter for the given job. |
RecordWriter<K,V> |
SequenceFileOutputFormat.getRecordWriter(FileSystem ignored,
JobConf job,
String name,
Progressable progress)
|
RecordWriter<K,V> |
TextOutputFormat.getRecordWriter(FileSystem ignored,
JobConf job,
String name,
Progressable progress)
|
String |
TaskController.getRunAsUser(JobConf conf)
Returns the local unix user that a given job will run as. |
static Class<? extends WritableComparable> |
SequenceFileAsBinaryOutputFormat.getSequenceFileOutputKeyClass(JobConf conf)
Get the key class for the SequenceFile |
static Class<? extends Writable> |
SequenceFileAsBinaryOutputFormat.getSequenceFileOutputValueClass(JobConf conf)
Get the value class for the SequenceFile |
InputSplit[] |
InputFormat.getSplits(JobConf job,
int numSplits)
Logically split the set of input files for the job. |
InputSplit[] |
FileInputFormat.getSplits(JobConf job,
int numSplits)
Splits files returned by FileInputFormat.listStatus(JobConf) when
they're too big. |
InputSplit[] |
MultiFileInputFormat.getSplits(JobConf job,
int numSplits)
Deprecated. |
static long |
TaskLog.getTaskLogLength(JobConf conf)
Get the desired maximum length of task's logs. |
static JobClient.TaskStatusFilter |
JobClient.getTaskOutputFilter(JobConf job)
Get the task output filter out of the JobConf. |
static Path |
FileOutputFormat.getTaskOutputPath(JobConf conf,
String name)
Helper function to create the task's temporary output directory and return the path to the task's output file. |
static String |
FileOutputFormat.getUniqueName(JobConf conf,
String name)
Helper function to generate a name that is unique for the task. |
static String |
JobHistory.JobInfo.getUserName(JobConf jobConf)
Get the user name from the job conf |
static Path |
FileOutputFormat.getWorkOutputPath(JobConf conf)
Get the Path to the task's temporary output directory for the map-reduce job (see Tasks' Side-Effect Files). |
void |
JobClient.init(JobConf conf)
Connect to the default JobTracker. |
static void |
JobHistory.init(JobTracker jobTracker,
JobConf conf,
String hostname,
long jobTrackerStartTime)
Initialize JobHistory files. |
void |
Task.initialize(JobConf job,
JobID id,
Reporter reporter,
boolean useNewApi)
|
protected void |
TaskTracker.launchTaskForJob(org.apache.hadoop.mapred.TaskTracker.TaskInProgress tip,
JobConf jobConf,
org.apache.hadoop.mapred.TaskTracker.RunningJob rjob)
|
protected FileStatus[] |
SequenceFileInputFormat.listStatus(JobConf job)
|
protected FileStatus[] |
FileInputFormat.listStatus(JobConf job)
List input directories. |
void |
Task.localizeConfiguration(JobConf conf)
Localize the given JobConf to be specific for this task. |
void |
JobLocalizer.localizeJobFiles(JobID jobid,
JobConf jConf,
Path localJobFile,
Path localJobTokenFile,
TaskUmbilicalProtocol taskTracker)
|
void |
JobLocalizer.localizeJobFiles(JobID jobid,
JobConf jConf,
Path localJobTokenFile,
TaskUmbilicalProtocol taskTracker)
|
static void |
JobEndNotifier.localRunnerNotification(JobConf conf,
JobStatus status)
|
static void |
JobHistory.JobInfo.logSubmitted(JobID jobId,
JobConf jobConf,
String jobConfPath,
long submitTime)
Deprecated. Use JobHistory.JobInfo.logSubmitted(JobID, JobConf, String, long, boolean) instead. |
static void |
JobHistory.JobInfo.logSubmitted(JobID jobId,
JobConf jobConf,
String jobConfPath,
long submitTime,
boolean restarted)
|
boolean |
JobClient.monitorAndPrintJob(JobConf conf,
RunningJob job)
Monitor a job and print status in real-time as progress is made and tasks fail. |
static Path |
JobHistory.JobInfo.recoverJobHistoryFile(JobConf conf,
Path logFilePath)
Selects one of the two files generated as a part of recovery. |
static void |
JobEndNotifier.registerNotification(JobConf jobConf,
JobStatus status)
|
abstract void |
Task.run(JobConf job,
TaskUmbilicalProtocol umbilical)
Run this task as a part of the named job. |
static RunningJob |
JobClient.runJob(JobConf job)
Utility that submits a job, then polls for progress until the job is complete. |
static void |
FileOutputFormat.setCompressOutput(JobConf conf,
boolean compress)
Set whether the output of the job is compressed. |
static void |
FileInputFormat.setInputPathFilter(JobConf conf,
Class<? extends PathFilter> filter)
Set a PathFilter to be applied to the input paths for the map-reduce job. |
static void |
FileInputFormat.setInputPaths(JobConf conf,
Path... inputPaths)
Set the array of Paths as the list of inputs for the map-reduce job. |
static void |
FileInputFormat.setInputPaths(JobConf conf,
String commaSeparatedPaths)
Sets the given comma separated paths as the list of inputs for the map-reduce job. |
static void |
SequenceFileOutputFormat.setOutputCompressionType(JobConf conf,
SequenceFile.CompressionType style)
Set the SequenceFile.CompressionType for the output SequenceFile. |
static void |
FileOutputFormat.setOutputCompressorClass(JobConf conf,
Class<? extends CompressionCodec> codecClass)
Set the CompressionCodec to be used to compress job outputs. |
static void |
FileOutputFormat.setOutputPath(JobConf conf,
Path outputDir)
Set the Path of the output directory for the map-reduce job. |
static void |
SequenceFileAsBinaryOutputFormat.setSequenceFileOutputKeyClass(JobConf conf,
Class<?> theClass)
Set the key class for the SequenceFile |
static void |
SequenceFileAsBinaryOutputFormat.setSequenceFileOutputValueClass(JobConf conf,
Class<?> theClass)
Set the value class for the SequenceFile |
static void |
SkipBadRecords.setSkipOutputPath(JobConf conf,
Path path)
Set the directory to which skipped records are written. |
static void |
JobClient.setTaskOutputFilter(JobConf job,
JobClient.TaskStatusFilter newValue)
Modify the JobConf to set the task output filter. |
static JobTracker |
JobTracker.startTracker(JobConf conf)
Start the JobTracker with given configuration. |
static JobTracker |
JobTracker.startTracker(JobConf conf,
String identifier)
|
static JobTracker |
JobTracker.startTracker(JobConf conf,
String identifier,
boolean initialize)
|
RunningJob |
JobClient.submitJob(JobConf job)
Submit a job to the MR system. |
RunningJob |
JobClient.submitJobInternal(JobConf job)
Internal method for submitting jobs to the system. |
protected boolean |
Task.supportIsolationRunner(JobConf conf)
|
static void |
JobLocalizer.writeLocalJobFile(Path jobFile,
JobConf conf)
Write the task specific job-configuration file. |
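As an illustration of the output-compression setters listed above, a driver might request block-compressed SequenceFile output like this; the codec choice is illustrative, not prescribed by this page:

    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.compress.GzipCodec;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.SequenceFileOutputFormat;

    public class CompressionSetup {
      public static void main(String[] args) {
        JobConf conf = new JobConf(CompressionSetup.class);
        FileOutputFormat.setCompressOutput(conf, true);
        FileOutputFormat.setOutputCompressorClass(conf, GzipCodec.class);
        // Compress whole blocks rather than individual records.
        SequenceFileOutputFormat.setOutputCompressionType(
            conf, SequenceFile.CompressionType.BLOCK);
      }
    }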
Constructors in org.apache.hadoop.mapred with parameters of type JobConf | |
---|---|
FileSplit(Path file,
long start,
long length,
JobConf conf)
Deprecated. |
|
JobClient(JobConf conf)
Build a job client with the given JobConf, and connect to the default JobTracker. |
|
JobHistoryServer(JobConf conf)
Starts the job history server as an independent process: initializes the ACLs manager and starts a webapp to service history requests. |
|
JobHistoryServer(JobConf conf,
org.apache.hadoop.mapred.ACLsManager aclsManager,
HttpServer httpServer)
Starts the job history server as an embedded server within the job tracker, and starts a webapp to service history requests. |
|
JobInProgress(JobID jobid,
JobConf conf,
JobTracker tracker)
Create an almost empty JobInProgress, which can be used only for tests. |
|
JobLocalizer(JobConf ttConf,
String user,
String jobid)
|
|
JobLocalizer(JobConf ttConf,
String user,
String jobid,
String... localDirs)
|
|
LocalJobRunner(JobConf conf)
|
|
MultiFileSplit(JobConf job,
Path[] files,
long[] lengths)
Deprecated. |
|
Task.OldCombinerRunner(Class<? extends Reducer<K,V,K,V>> cls,
JobConf conf,
Counters.Counter inputCounter,
Task.TaskReporter reporter)
|
|
TaskTracker(JobConf conf)
Start with the local machine name, and the default JobTracker. |
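The constructors and methods above support the standard old-API driver pattern: build a JobConf, set paths, submit through JobClient. A minimal sketch; paths and the mapper/reducer classes are placeholders, not taken from this page:

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    public class JobConfDriver {
      public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(JobConfDriver.class);
        conf.setJobName("jobconf-usage-example");

        // Wire up input and output with the static helpers listed above.
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);
        // conf.setMapperClass(...); conf.setReducerClass(...); // your classes

        JobClient.runJob(conf); // submit, then poll until the job completes
      }
    }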
Uses of JobConf in org.apache.hadoop.mapred.jobcontrol |
---|
Methods in org.apache.hadoop.mapred.jobcontrol that return JobConf | |
---|---|
JobConf |
Job.getJobConf()
|
Methods in org.apache.hadoop.mapred.jobcontrol with parameters of type JobConf | |
---|---|
void |
Job.setJobConf(JobConf jobConf)
Set the mapred job conf for this job. |
Constructors in org.apache.hadoop.mapred.jobcontrol with parameters of type JobConf | |
---|---|
Job(JobConf jobConf)
Construct a job. |
|
Job(JobConf jobConf,
ArrayList<Job> dependingJobs)
Construct a job. |
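A sketch of chaining two jobs with the constructors above; JobControl comes from the same package, though it does not itself appear in this listing:

    import java.util.ArrayList;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.jobcontrol.Job;
    import org.apache.hadoop.mapred.jobcontrol.JobControl;

    public class DependentJobs {
      public static void main(String[] args) throws Exception {
        JobConf first = new JobConf();   // configure input/output, classes, etc.
        JobConf second = new JobConf();  // configure as needed

        Job j1 = new Job(first);
        ArrayList<Job> deps = new ArrayList<Job>();
        deps.add(j1);
        Job j2 = new Job(second, deps);  // j2 runs only after j1 succeeds

        JobControl control = new JobControl("example-group");
        control.addJob(j1);
        control.addJob(j2);
        new Thread(control).start();     // JobControl implements Runnable
        while (!control.allFinished()) {
          Thread.sleep(1000);
        }
        control.stop();
      }
    }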
Uses of JobConf in org.apache.hadoop.mapred.join |
---|
Methods in org.apache.hadoop.mapred.join with parameters of type JobConf | |
---|---|
ComposableRecordReader<K,V> |
ComposableInputFormat.getRecordReader(InputSplit split,
JobConf job,
Reporter reporter)
|
ComposableRecordReader<K,TupleWritable> |
CompositeInputFormat.getRecordReader(InputSplit split,
JobConf job,
Reporter reporter)
Construct a CompositeRecordReader for the children of this InputFormat as defined in the init expression. |
InputSplit[] |
CompositeInputFormat.getSplits(JobConf job,
int numSplits)
Build a CompositeInputSplit from the child InputFormats by assigning the ith split from each child to the ith composite split. |
void |
CompositeInputFormat.setFormat(JobConf job)
Interpret a given string as a composite expression. |
Constructors in org.apache.hadoop.mapred.join with parameters of type JobConf | |
---|---|
JoinRecordReader(int id,
JobConf conf,
int capacity,
Class<? extends WritableComparator> cmpcl)
|
|
MultiFilterRecordReader(int id,
JobConf conf,
int capacity,
Class<? extends WritableComparator> cmpcl)
|
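A sketch of setting up a map-side join with CompositeInputFormat.setFormat above; the compose helper and the "mapred.join.expr" property belong to the same class, and the dataset paths are placeholders:

    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.SequenceFileInputFormat;
    import org.apache.hadoop.mapred.join.CompositeInputFormat;

    public class JoinSetup {
      public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(JoinSetup.class);
        // Inner join of two sorted, identically partitioned datasets.
        conf.set("mapred.join.expr", CompositeInputFormat.compose(
            "inner", SequenceFileInputFormat.class, "/data/a", "/data/b"));
        conf.setInputFormat(CompositeInputFormat.class);
        // The framework calls CompositeInputFormat.setFormat(conf), which
        // parses the expression above into a reader over joined tuples.
      }
    }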
Uses of JobConf in org.apache.hadoop.mapred.lib |
---|
Fields in org.apache.hadoop.mapred.lib declared as JobConf | |
---|---|
protected JobConf |
CombineFileRecordReader.jc
|
Methods in org.apache.hadoop.mapred.lib that return JobConf | |
---|---|
JobConf |
CombineFileSplit.getJob()
|
Methods in org.apache.hadoop.mapred.lib with parameters of type JobConf | ||
---|---|---|
static void |
MultipleInputs.addInputPath(JobConf conf,
Path path,
Class<? extends InputFormat> inputFormatClass)
Add a Path with a custom InputFormat to the list of
inputs for the map-reduce job. |
|
static void |
MultipleInputs.addInputPath(JobConf conf,
Path path,
Class<? extends InputFormat> inputFormatClass,
Class<? extends Mapper> mapperClass)
Add a Path with a custom InputFormat and
Mapper to the list of inputs for the map-reduce job. |
|
static <K1,V1,K2,V2> void |
ChainReducer.addMapper(JobConf job,
Class<? extends Mapper<K1,V1,K2,V2>> klass,
Class<? extends K1> inputKeyClass,
Class<? extends V1> inputValueClass,
Class<? extends K2> outputKeyClass,
Class<? extends V2> outputValueClass,
boolean byValue,
JobConf mapperConf)
Adds a Mapper class to the chain job's JobConf. |
|
static <K1,V1,K2,V2> void |
ChainMapper.addMapper(JobConf job,
Class<? extends Mapper<K1,V1,K2,V2>> klass,
Class<? extends K1> inputKeyClass,
Class<? extends V1> inputValueClass,
Class<? extends K2> outputKeyClass,
Class<? extends V2> outputValueClass,
boolean byValue,
JobConf mapperConf)
Adds a Mapper class to the chain job's JobConf. |
|
static void |
MultipleOutputs.addMultiNamedOutput(JobConf conf,
String namedOutput,
Class<? extends OutputFormat> outputFormatClass,
Class<?> keyClass,
Class<?> valueClass)
Adds a multi named output for the job. |
|
static void |
MultipleOutputs.addNamedOutput(JobConf conf,
String namedOutput,
Class<? extends OutputFormat> outputFormatClass,
Class<?> keyClass,
Class<?> valueClass)
Adds a named output for the job. |
|
void |
NullOutputFormat.checkOutputSpecs(FileSystem ignored,
JobConf job)
|
|
void |
MultithreadedMapRunner.configure(JobConf jobConf)
|
|
void |
KeyFieldBasedComparator.configure(JobConf job)
|
|
void |
ChainReducer.configure(JobConf job)
Configures the ChainReducer, the Reducer and all the Mappers in the chain. |
|
void |
RegexMapper.configure(JobConf job)
|
|
void |
TotalOrderPartitioner.configure(JobConf job)
Read in the partition file and build indexing data structures. |
|
void |
NLineInputFormat.configure(JobConf conf)
|
|
void |
KeyFieldBasedPartitioner.configure(JobConf job)
|
|
void |
BinaryPartitioner.configure(JobConf job)
|
|
void |
HashPartitioner.configure(JobConf job)
|
|
void |
FieldSelectionMapReduce.configure(JobConf job)
|
|
void |
DelegatingMapper.configure(JobConf conf)
|
|
void |
ChainMapper.configure(JobConf job)
Configures the ChainMapper and all the Mappers in the chain. |
|
protected void |
CombineFileInputFormat.createPool(JobConf conf,
List<PathFilter> filters)
Create a new pool and add the filters to it. |
|
protected void |
CombineFileInputFormat.createPool(JobConf conf,
PathFilter... filters)
Create a new pool and add the filters to it. |
|
protected RecordWriter<K,V> |
MultipleTextOutputFormat.getBaseRecordWriter(FileSystem fs,
JobConf job,
String name,
Progressable arg3)
|
|
protected RecordWriter<K,V> |
MultipleSequenceFileOutputFormat.getBaseRecordWriter(FileSystem fs,
JobConf job,
String name,
Progressable arg3)
|
|
protected abstract RecordWriter<K,V> |
MultipleOutputFormat.getBaseRecordWriter(FileSystem fs,
JobConf job,
String name,
Progressable arg3)
|
|
static boolean |
MultipleOutputs.getCountersEnabled(JobConf conf)
Returns whether the counters for the named outputs are enabled. |
|
protected String |
MultipleOutputFormat.getInputFileBasedOutputFileName(JobConf job,
String name)
Generate the output file name based on a given name and the input file name. |
|
static Class<? extends OutputFormat> |
MultipleOutputs.getNamedOutputFormatClass(JobConf conf,
String namedOutput)
Returns the named output OutputFormat. |
|
static Class<? extends WritableComparable> |
MultipleOutputs.getNamedOutputKeyClass(JobConf conf,
String namedOutput)
Returns the key class for a named output. |
|
static List<String> |
MultipleOutputs.getNamedOutputsList(JobConf conf)
Returns list of channel names. |
|
static Class<? extends Writable> |
MultipleOutputs.getNamedOutputValueClass(JobConf conf,
String namedOutput)
Returns the value class for a named output. |
|
static String |
TotalOrderPartitioner.getPartitionFile(JobConf job)
Get the path to the SequenceFile storing the sorted partition keyset. |
|
RecordReader<K,V> |
DelegatingInputFormat.getRecordReader(InputSplit split,
JobConf conf,
Reporter reporter)
|
|
abstract RecordReader<K,V> |
CombineFileInputFormat.getRecordReader(InputSplit split,
JobConf job,
Reporter reporter)
This is not implemented yet. |
|
RecordReader<LongWritable,Text> |
NLineInputFormat.getRecordReader(InputSplit genericSplit,
JobConf job,
Reporter reporter)
|
|
RecordWriter<K,V> |
MultipleOutputFormat.getRecordWriter(FileSystem fs,
JobConf job,
String name,
Progressable arg3)
Create a composite record writer that can write key/value data to different output files. |
|
RecordWriter<K,V> |
NullOutputFormat.getRecordWriter(FileSystem ignored,
JobConf job,
String name,
Progressable progress)
|
|
K[] |
InputSampler.Sampler.getSample(InputFormat<K,V> inf,
JobConf job)
For a given job, collect and return a subset of the keys from the input data. |
|
K[] |
InputSampler.SplitSampler.getSample(InputFormat<K,V> inf,
JobConf job)
From each split sampled, take the first numSamples / numSplits records. |
|
K[] |
InputSampler.RandomSampler.getSample(InputFormat<K,V> inf,
JobConf job)
Randomize the split order, then take the specified number of keys from each split sampled, where each key is selected with the specified probability and possibly replaced by a subsequently selected key when the quota of keys from that split is satisfied. |
|
K[] |
InputSampler.IntervalSampler.getSample(InputFormat<K,V> inf,
JobConf job)
For each split sampled, emit when the ratio of the number of records retained to the total record count is less than the specified frequency. |
|
InputSplit[] |
DelegatingInputFormat.getSplits(JobConf conf,
int numSplits)
|
|
InputSplit[] |
CombineFileInputFormat.getSplits(JobConf job,
int numSplits)
|
|
InputSplit[] |
NLineInputFormat.getSplits(JobConf job,
int numSplits)
Logically splits the set of input files for the job, treating N lines of the input as one split. |
|
static boolean |
MultipleOutputs.isMultiNamedOutput(JobConf conf,
String namedOutput)
Returns whether a named output is a multi named output. |
|
static void |
MultipleOutputs.setCountersEnabled(JobConf conf,
boolean enabled)
Enables or disables counters for the named outputs. |
|
static void |
TotalOrderPartitioner.setPartitionFile(JobConf job,
Path p)
Set the path to the SequenceFile storing the sorted partition keyset. |
|
static <K1,V1,K2,V2> void |
ChainReducer.setReducer(JobConf job,
Class<? extends Reducer<K1,V1,K2,V2>> klass,
Class<? extends K1> inputKeyClass,
Class<? extends V1> inputValueClass,
Class<? extends K2> outputKeyClass,
Class<? extends V2> outputValueClass,
boolean byValue,
JobConf reducerConf)
Sets the Reducer class to the chain job's JobConf. |
|
static <K,V> void |
InputSampler.writePartitionFile(JobConf job,
InputSampler.Sampler<K,V> sampler)
Write a partition file for the given job, using the Sampler provided. |
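The sampler and partitioner entries above are typically used together: sample the input to produce a sorted partition keyset, then point the partitioner at it. A sketch, with illustrative sampling parameters:

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.lib.InputSampler;
    import org.apache.hadoop.mapred.lib.TotalOrderPartitioner;

    public class TotalOrderSetup {
      public static void main(String[] args) throws Exception {
        JobConf job = new JobConf(TotalOrderSetup.class);
        // ... input format, paths, and key/value classes configured here ...

        // Tell the partitioner where the sorted keyset will live.
        TotalOrderPartitioner.setPartitionFile(job, new Path("_partitions"));

        // Sample 1% of records, capped at 10000 samples over at most 10 splits.
        InputSampler.Sampler<Text, Text> sampler =
            new InputSampler.RandomSampler<Text, Text>(0.01, 10000, 10);
        InputSampler.writePartitionFile(job, sampler);

        job.setPartitionerClass(TotalOrderPartitioner.class);
      }
    }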
Constructors in org.apache.hadoop.mapred.lib with parameters of type JobConf | |
---|---|
CombineFileRecordReader(JobConf job,
CombineFileSplit split,
Reporter reporter,
Class<RecordReader<K,V>> rrClass)
A generic RecordReader that can hand out different RecordReaders for each chunk in the CombineFileSplit. |
|
CombineFileSplit(JobConf job,
Path[] files,
long[] lengths)
|
|
CombineFileSplit(JobConf job,
Path[] files,
long[] start,
long[] lengths,
String[] locations)
|
|
InputSampler(JobConf conf)
|
|
MultipleOutputs(JobConf job)
Creates and initializes multiple named outputs support; it should be instantiated in the Mapper/Reducer configure method. |
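A sketch of MultipleOutputs in a reducer, per the constructor note above; the named output "summary" and the key/value types are placeholders, and the driver must first declare the output with MultipleOutputs.addNamedOutput:

    import java.io.IOException;
    import java.util.Iterator;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reducer;
    import org.apache.hadoop.mapred.Reporter;
    import org.apache.hadoop.mapred.lib.MultipleOutputs;

    public class MultiOutReducer extends MapReduceBase
        implements Reducer<Text, LongWritable, Text, LongWritable> {

      private MultipleOutputs mos;

      public void configure(JobConf job) {
        mos = new MultipleOutputs(job); // instantiate in configure(), as noted above
      }

      public void reduce(Text key, Iterator<LongWritable> values,
          OutputCollector<Text, LongWritable> out, Reporter reporter)
          throws IOException {
        long sum = 0;
        while (values.hasNext()) {
          sum += values.next().get();
        }
        out.collect(key, new LongWritable(sum));
        // Route a copy to the named output declared in the driver via
        // MultipleOutputs.addNamedOutput(conf, "summary", ...).
        mos.getCollector("summary", reporter).collect(key, new LongWritable(sum));
      }

      public void close() throws IOException {
        mos.close(); // flush all named outputs
      }
    }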
Uses of JobConf in org.apache.hadoop.mapred.lib.aggregate |
---|
Methods in org.apache.hadoop.mapred.lib.aggregate that return JobConf | |
---|---|
static JobConf |
ValueAggregatorJob.createValueAggregatorJob(String[] args)
Create an Aggregate based map/reduce job. |
static JobConf |
ValueAggregatorJob.createValueAggregatorJob(String[] args,
Class<?> caller)
Create an Aggregate based map/reduce job. |
static JobConf |
ValueAggregatorJob.createValueAggregatorJob(String[] args,
Class<? extends ValueAggregatorDescriptor>[] descriptors)
|
static JobConf |
ValueAggregatorJob.createValueAggregatorJob(String[] args,
Class<? extends ValueAggregatorDescriptor>[] descriptors,
Class<?> caller)
|
Methods in org.apache.hadoop.mapred.lib.aggregate with parameters of type JobConf | |
---|---|
void |
ValueAggregatorCombiner.configure(JobConf job)
The combiner does not need any configuration. |
void |
UserDefinedValueAggregatorDescriptor.configure(JobConf job)
Do nothing. |
void |
ValueAggregatorDescriptor.configure(JobConf job)
Configure the object |
void |
ValueAggregatorJobBase.configure(JobConf job)
|
void |
ValueAggregatorBaseDescriptor.configure(JobConf job)
Get the input file name. |
static void |
ValueAggregatorJob.setAggregatorDescriptors(JobConf job,
Class<? extends ValueAggregatorDescriptor>[] descriptors)
|
Constructors in org.apache.hadoop.mapred.lib.aggregate with parameters of type JobConf | |
---|---|
UserDefinedValueAggregatorDescriptor(String className,
JobConf job)
|
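A sketch of driving the aggregate framework; the argument array carries input/output directories and aggregator descriptors in the order ValueAggregatorJob expects, which is not reproduced on this page:

    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJob;

    public class AggregateDriver {
      public static void main(String[] args) throws Exception {
        JobConf job = ValueAggregatorJob.createValueAggregatorJob(args);
        JobClient.runJob(job); // submit and poll until complete
      }
    }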
Uses of JobConf in org.apache.hadoop.mapred.lib.db |
---|
Methods in org.apache.hadoop.mapred.lib.db with parameters of type JobConf | |
---|---|
void |
DBOutputFormat.checkOutputSpecs(FileSystem filesystem,
JobConf job)
Check for validity of the output-specification for the job. |
void |
DBInputFormat.configure(JobConf job)
Initializes a new instance from a JobConf. |
static void |
DBConfiguration.configureDB(JobConf job,
String driverClass,
String dbUrl)
Sets the DB access related fields in the JobConf. |
static void |
DBConfiguration.configureDB(JobConf job,
String driverClass,
String dbUrl,
String userName,
String passwd)
Sets the DB access related fields in the JobConf. |
RecordReader<LongWritable,T> |
DBInputFormat.getRecordReader(InputSplit split,
JobConf job,
Reporter reporter)
Get the RecordReader for the given InputSplit. |
RecordWriter<K,V> |
DBOutputFormat.getRecordWriter(FileSystem filesystem,
JobConf job,
String name,
Progressable progress)
Get the RecordWriter for the given job. |
InputSplit[] |
DBInputFormat.getSplits(JobConf job,
int chunks)
Logically split the set of input files for the job. |
static void |
DBInputFormat.setInput(JobConf job,
Class<? extends DBWritable> inputClass,
String inputQuery,
String inputCountQuery)
Initializes the map-part of the job with the appropriate input settings. |
static void |
DBInputFormat.setInput(JobConf job,
Class<? extends DBWritable> inputClass,
String tableName,
String conditions,
String orderBy,
String... fieldNames)
Initializes the map-part of the job with the appropriate input settings. |
static void |
DBOutputFormat.setOutput(JobConf job,
String tableName,
String... fieldNames)
Initializes the reduce-part of the job with the appropriate output settings. |
Constructors in org.apache.hadoop.mapred.lib.db with parameters of type JobConf | |
---|---|
DBInputFormat.DBRecordReader(DBInputFormat.DBInputSplit split,
Class<T> inputClass,
JobConf job)
|
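A sketch of wiring a job to a JDBC source with the helpers above; the driver class, URL, table, and columns are placeholders, and MyRecord is a hypothetical DBWritable implementation:

    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.lib.db.DBConfiguration;
    import org.apache.hadoop.mapred.lib.db.DBInputFormat;
    import org.apache.hadoop.mapred.lib.db.DBWritable;

    public class DbJobSetup {
      // Hypothetical record type; a real one would usually also implement
      // Writable so it can pass through the shuffle.
      public static class MyRecord implements DBWritable {
        long id;
        String name;
        public void readFields(ResultSet rs) throws SQLException {
          id = rs.getLong(1);
          name = rs.getString(2);
        }
        public void write(PreparedStatement stmt) throws SQLException {
          stmt.setLong(1, id);
          stmt.setString(2, name);
        }
      }

      public static void main(String[] args) throws Exception {
        JobConf job = new JobConf(DbJobSetup.class);
        // Set the DB access fields (driver class and connection URL).
        DBConfiguration.configureDB(job,
            "com.mysql.jdbc.Driver", "jdbc:mysql://localhost/mydb");
        // Read rows of `employees`, ordered for deterministic splits.
        DBInputFormat.setInput(job, MyRecord.class,
            "employees", null /* conditions */, "id" /* orderBy */,
            "id", "name");
      }
    }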
Uses of JobConf in org.apache.hadoop.mapred.pipes |
---|
Methods in org.apache.hadoop.mapred.pipes with parameters of type JobConf | |
---|---|
static String |
Submitter.getExecutable(JobConf conf)
Get the URI of the application's executable. |
static boolean |
Submitter.getIsJavaMapper(JobConf conf)
Check whether the job is using a Java Mapper. |
static boolean |
Submitter.getIsJavaRecordReader(JobConf conf)
Check whether the job is using a Java RecordReader |
static boolean |
Submitter.getIsJavaRecordWriter(JobConf conf)
Will the reduce use a Java RecordWriter? |
static boolean |
Submitter.getIsJavaReducer(JobConf conf)
Check whether the job is using a Java Reducer. |
static boolean |
Submitter.getKeepCommandFile(JobConf conf)
Does the user want to keep the command file for debugging? If this is true, pipes will write a copy of the command data to a file in the task directory named "downlink.data", which may be used to run the C++ program under the debugger. |
static RunningJob |
Submitter.jobSubmit(JobConf conf)
Submit a job to the Map-Reduce framework. |
static RunningJob |
Submitter.runJob(JobConf conf)
Submit a job to the map/reduce cluster. |
static void |
Submitter.setExecutable(JobConf conf,
String executable)
Set the URI for the application's executable. |
static void |
Submitter.setIsJavaMapper(JobConf conf,
boolean value)
Set whether the Mapper is written in Java. |
static void |
Submitter.setIsJavaRecordReader(JobConf conf,
boolean value)
Set whether the job is using a Java RecordReader. |
static void |
Submitter.setIsJavaRecordWriter(JobConf conf,
boolean value)
Set whether the job will use a Java RecordWriter. |
static void |
Submitter.setIsJavaReducer(JobConf conf,
boolean value)
Set whether the Reducer is written in Java. |
static void |
Submitter.setKeepCommandFile(JobConf conf,
boolean keep)
Set whether to keep the command file for debugging |
static RunningJob |
Submitter.submitJob(JobConf conf)
Deprecated. Use Submitter.runJob(JobConf) |
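A sketch of submitting a pipes job with the setters above; the executable URI is a placeholder:

    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.RunningJob;
    import org.apache.hadoop.mapred.pipes.Submitter;

    public class PipesDriver {
      public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(PipesDriver.class);
        // URI of the C++ binary implementing the map/reduce logic.
        Submitter.setExecutable(conf, "hdfs:///apps/wordcount-pipes");
        // Keep the record reader/writer in Java; only map/reduce run in C++.
        Submitter.setIsJavaRecordReader(conf, true);
        Submitter.setIsJavaRecordWriter(conf, true);
        RunningJob job = Submitter.runJob(conf); // submit to the cluster
        System.exit(job.isSuccessful() ? 0 : 1);
      }
    }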
Uses of JobConf in org.apache.hadoop.mapreduce |
---|
Fields in org.apache.hadoop.mapreduce declared as JobConf | |
---|---|
protected JobConf |
JobContext.conf
|
Uses of JobConf in org.apache.hadoop.streaming |
---|
Fields in org.apache.hadoop.streaming declared as JobConf | |
---|---|
protected JobConf |
StreamJob.jobConf_
|
Methods in org.apache.hadoop.streaming that return JobConf | |
---|---|
static JobConf |
StreamJob.createJob(String[] argv)
This method creates a streaming job from the given argument list. |
Methods in org.apache.hadoop.streaming with parameters of type JobConf | |
---|---|
void |
PipeMapper.configure(JobConf job)
|
void |
PipeReducer.configure(JobConf job)
|
void |
PipeMapRed.configure(JobConf job)
|
void |
AutoInputFormat.configure(JobConf job)
|
static FileSplit |
StreamUtil.getCurrentSplit(JobConf job)
|
RecordReader<Text,Text> |
StreamInputFormat.getRecordReader(InputSplit genericSplit,
JobConf job,
Reporter reporter)
|
RecordReader |
AutoInputFormat.getRecordReader(InputSplit split,
JobConf job,
Reporter reporter)
|
static org.apache.hadoop.streaming.StreamUtil.TaskId |
StreamUtil.getTaskInfo(JobConf job)
|
static boolean |
StreamUtil.isLocalJobTracker(JobConf job)
|
Constructors in org.apache.hadoop.streaming with parameters of type JobConf | |
---|---|
StreamBaseRecordReader(FSDataInputStream in,
FileSplit split,
Reporter reporter,
JobConf job,
FileSystem fs)
|
|
StreamXmlRecordReader(FSDataInputStream in,
FileSplit split,
Reporter reporter,
JobConf job,
FileSystem fs)
|
|
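A sketch of building a streaming job programmatically with StreamJob.createJob above; the argv layout mirrors the streaming command line, and the mapper/reducer commands are placeholders:

    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.streaming.StreamJob;

    public class StreamingDriver {
      public static void main(String[] args) throws Exception {
        String[] argv = {
            "-input",   "/data/in",
            "-output",  "/data/out",
            "-mapper",  "/bin/cat",    // any executable can be the mapper
            "-reducer", "/usr/bin/wc"  // ... or the reducer
        };
        JobConf job = StreamJob.createJob(argv);
        JobClient.runJob(job);
      }
    }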