Uses of Configuration in org.apache.hadoop.conf
---

Methods in org.apache.hadoop.conf that return Configuration:

- `Configuration Configured.getConf()`
- `Configuration Configurable.getConf()`: Return the configuration used by this object.

Methods in org.apache.hadoop.conf with parameters of type Configuration:

- `void Configuration.addResource(Configuration conf)`: Add a configuration resource.
- `static void Configuration.dumpConfiguration(Configuration config, Writer out)`: Writes out all the parameters and their properties (final and resource) to the given Writer. The format of the output would be `{ "properties" : [ {key1,value1,key1.isFinal,key1.resource}, {key2,value2,key2.isFinal,key2.resource}...`
- `void Configured.setConf(Configuration conf)`
- `void Configurable.setConf(Configuration conf)`: Set the configuration to be used by this object.

Constructors in org.apache.hadoop.conf with parameters of type Configuration:

- `Configuration(Configuration other)`: A new configuration with the same settings cloned from another.
- `Configured(Configuration conf)`: Construct a Configured.
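The copy constructor above clones the settings of an existing Configuration, so later changes to either object do not affect the other. A minimal sketch, assuming hadoop-common is on the classpath (`example.key` is a made-up property name for illustration):

```java
import org.apache.hadoop.conf.Configuration;

public class ConfDemo {
    public static void main(String[] args) {
        // A new Configuration loads core-default.xml and core-site.xml
        // from the classpath as default resources.
        Configuration conf = new Configuration();
        conf.set("example.key", "42");

        // Configuration(Configuration other) clones the settings;
        // mutating the copy leaves the original untouched.
        Configuration copy = new Configuration(conf);
        copy.set("example.key", "43");

        System.out.println(conf.get("example.key"));      // 42
        System.out.println(copy.getInt("example.key", 0)); // 43
    }
}
```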
Uses of Configuration in org.apache.hadoop.filecache
---

Methods in org.apache.hadoop.filecache with parameters of type Configuration:

- `static void DistributedCache.addLocalArchives(Configuration conf, String str)`: Deprecated.
- `static void DistributedCache.addLocalFiles(Configuration conf, String str)`: Deprecated.
- `static void DistributedCache.createAllSymlink(Configuration conf, File jobCacheDir, File workDir)`: Deprecated. Internal to MapReduce framework. Use DistributedCacheManager instead.
- `static FileStatus DistributedCache.getFileStatus(Configuration conf, URI cache)`: Deprecated.
- `static long DistributedCache.getTimestamp(Configuration conf, URI cache)`: Deprecated.
- `static void DistributedCache.setArchiveTimestamps(Configuration conf, String timestamps)`: Deprecated.
- `static void DistributedCache.setFileTimestamps(Configuration conf, String timestamps)`: Deprecated.
- `static void DistributedCache.setLocalArchives(Configuration conf, String str)`: Deprecated.
- `static void DistributedCache.setLocalFiles(Configuration conf, String str)`: Deprecated.
Uses of Configuration in org.apache.hadoop.fs
---

Methods in org.apache.hadoop.fs that return Configuration:

- `Configuration FilterFileSystem.getConf()`

Methods in org.apache.hadoop.fs with parameters of type Configuration:

- `static boolean FileUtil.copy(File src, FileSystem dstFS, Path dst, boolean deleteSource, Configuration conf)`: Copy local files to a FileSystem.
- `static boolean FileUtil.copy(FileSystem srcFS, FileStatus srcStatus, FileSystem dstFS, Path dst, boolean deleteSource, boolean overwrite, Configuration conf)`: Copy files between FileSystems.
- `static boolean FileUtil.copy(FileSystem srcFS, Path[] srcs, FileSystem dstFS, Path dst, boolean deleteSource, boolean overwrite, Configuration conf)`
- `static boolean FileUtil.copy(FileSystem srcFS, Path src, File dst, boolean deleteSource, Configuration conf)`: Copy FileSystem files to local files.
- `static boolean FileUtil.copy(FileSystem srcFS, Path src, FileSystem dstFS, Path dst, boolean deleteSource, boolean overwrite, Configuration conf)`: Copy files between FileSystems.
- `static boolean FileUtil.copy(FileSystem srcFS, Path src, FileSystem dstFS, Path dst, boolean deleteSource, Configuration conf)`: Copy files between FileSystems.
- `static boolean FileUtil.copyMerge(FileSystem srcFS, Path srcDir, FileSystem dstFS, Path dstFile, boolean deleteSource, Configuration conf, String addString)`: Copy all files in a directory to one output file (merge).
- `static AbstractFileSystem AbstractFileSystem.createFileSystem(URI uri, Configuration conf)`: Create a file system instance for the specified uri using the conf.
- `static FileSystem FileSystem.get(Configuration conf)`: Returns the configured filesystem implementation.
- `static AbstractFileSystem AbstractFileSystem.get(URI uri, Configuration conf)`: The main factory method for creating a file system.
- `static FileSystem FileSystem.get(URI uri, Configuration conf)`: Returns the FileSystem for this URI's scheme and authority.
- `static FileSystem FileSystem.get(URI uri, Configuration conf, String user)`: Get a filesystem instance based on the uri, the passed configuration and the user.
- `static URI FileSystem.getDefaultUri(Configuration conf)`: Get the default filesystem URI from a configuration.
- `static FileContext FileContext.getFileContext(AbstractFileSystem defFS, Configuration aConf)`: Create a FileContext with the specified FS as the default, using the specified config.
- `static FileContext FileContext.getFileContext(Configuration aConf)`: Create a FileContext using the passed config.
- `static FileContext FileContext.getFileContext(URI defaultFsUri, Configuration aConf)`: Create a FileContext for the specified default URI using the specified config.
- `FileSystem Path.getFileSystem(Configuration conf)`: Return the FileSystem that owns this Path.
- `static Class<? extends FileSystem> FileSystem.getFileSystemClass(String scheme, Configuration conf)`
- `protected static FileSystem FileSystem.getFSofPath(Path absOrFqPath, Configuration conf)`
- `static TrashPolicy TrashPolicy.getInstance(Configuration conf, FileSystem fs, Path home)`: Get an instance of the configured TrashPolicy, based on the value of the configuration parameter fs.trash.classname.
- `static LocalFileSystem FileSystem.getLocal(Configuration conf)`: Get the local file system.
- `static FileContext FileContext.getLocalFSFileContext(Configuration aConf)`
- `static FileSystem FileSystem.getNamed(String name, Configuration conf)`: Deprecated. Call get(URI, Configuration) instead.
- `abstract void TrashPolicy.initialize(Configuration conf, FileSystem fs, Path home)`: Used to set up the trash policy.
- `void RawLocalFileSystem.initialize(URI uri, Configuration conf)`
- `void FilterFileSystem.initialize(URI name, Configuration conf)`: Called after a new FileSystem instance is constructed.
- `void LocalFileSystem.initialize(URI name, Configuration conf)`
- `void FileSystem.initialize(URI name, Configuration conf)`: Called after a new FileSystem instance is constructed.
- `static boolean Trash.moveToAppropriateTrash(FileSystem fs, Path p, Configuration conf)`: For symlinks or mount points, moves the path p being deleted to the trash bin of the volume that actually contains it.
- `static FileSystem FileSystem.newInstance(Configuration conf)`: Returns a unique configured filesystem implementation.
- `static FileSystem FileSystem.newInstance(URI uri, Configuration conf)`: Returns the FileSystem for this URI's scheme and authority.
- `static FileSystem FileSystem.newInstance(URI uri, Configuration conf, String user)`: Returns the FileSystem for this URI's scheme and authority and the passed user.
- `static LocalFileSystem FileSystem.newInstanceLocal(Configuration conf)`: Get a unique local file system object.
- `void ChecksumFileSystem.setConf(Configuration conf)`
- `static void FileSystem.setDefaultUri(Configuration conf, String uri)`: Set the default filesystem URI in a configuration.
- `static void FileSystem.setDefaultUri(Configuration conf, URI uri)`: Set the default filesystem URI in a configuration.

Constructors in org.apache.hadoop.fs with parameters of type Configuration:

- `Trash(Configuration conf)`: Construct a trash can accessor.
- `Trash(FileSystem fs, Configuration conf)`: Construct a trash can accessor for the FileSystem provided.
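FileSystem.get(conf) resolves the implementation from the configured default URI, while Path.getFileSystem(conf) resolves it from the path's own scheme. A minimal sketch, assuming hadoop-common on the classpath and using the local `file:///` scheme so nothing external is required:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FsDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // The default filesystem URI (fs.defaultFS) decides what
        // FileSystem.get(conf) returns.
        FileSystem.setDefaultUri(conf, "file:///");
        FileSystem fs = FileSystem.get(conf);
        System.out.println(fs.getUri()); // file:///

        // Path.getFileSystem(conf) resolves the owning FileSystem
        // from the path's scheme instead of the configured default.
        Path p = new Path("file:///tmp/fsdemo.txt");
        FileSystem owner = p.getFileSystem(conf);
        System.out.println(owner.getUri()); // file:///
    }
}
```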
Uses of Configuration in org.apache.hadoop.fs.ftp
---

Methods in org.apache.hadoop.fs.ftp with parameters of type Configuration:

- `void FTPFileSystem.initialize(URI uri, Configuration conf)`
Uses of Configuration in org.apache.hadoop.fs.permission
---

Methods in org.apache.hadoop.fs.permission with parameters of type Configuration:

- `static FsPermission FsPermission.getUMask(Configuration conf)`: Get the user file creation mask (umask); the UMASK_LABEL config param holds a umask value that is either symbolic or octal.
- `static void FsPermission.setUMask(Configuration conf, FsPermission umask)`: Set the user file creation mask (umask).
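A round trip through the umask getter and setter can be sketched as follows, assuming hadoop-common on the classpath (the octal value 022 is an arbitrary example):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.permission.FsPermission;

public class UmaskDemo {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Store the umask as an octal short; the config key also
        // accepts a symbolic form such as "u=rwx,g=rx,o=".
        FsPermission.setUMask(conf, new FsPermission((short) 022));

        FsPermission umask = FsPermission.getUMask(conf);
        System.out.println(Integer.toOctalString(umask.toShort())); // 22
    }
}
```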
Uses of Configuration in org.apache.hadoop.fs.s3
---

Methods in org.apache.hadoop.fs.s3 with parameters of type Configuration:

- `void S3FileSystem.initialize(URI uri, Configuration conf)`
Uses of Configuration in org.apache.hadoop.fs.s3native
---

Methods in org.apache.hadoop.fs.s3native with parameters of type Configuration:

- `void NativeS3FileSystem.initialize(URI uri, Configuration conf)`
Uses of Configuration in org.apache.hadoop.fs.viewfs
---

Methods in org.apache.hadoop.fs.viewfs with parameters of type Configuration:

- `void ViewFileSystem.initialize(URI theUri, Configuration conf)`: Called after a new FileSystem instance is constructed.

Constructors in org.apache.hadoop.fs.viewfs with parameters of type Configuration:

- `ViewFileSystem(Configuration conf)`: Convenience constructor for apps to call directly.
- `ViewFs(Configuration conf)`
Uses of Configuration in org.apache.hadoop.ha
---

Methods in org.apache.hadoop.ha with parameters of type Configuration:

- `HAServiceProtocol HAServiceTarget.getProxy(Configuration conf, int timeoutMs)`
- `org.apache.hadoop.ha.ZKFCProtocol HAServiceTarget.getZKFCProxy(Configuration conf, int timeoutMs)`
Uses of Configuration in org.apache.hadoop.io
---

Methods in org.apache.hadoop.io that return Configuration:

- `Configuration EnumSetWritable.getConf()`
- `Configuration ObjectWritable.getConf()`
- `Configuration WritableComparator.getConf()`
- `Configuration AbstractMapWritable.getConf()`
- `Configuration GenericWritable.getConf()`

Methods in org.apache.hadoop.io with parameters of type Configuration:

- `static <T extends Writable> T WritableUtils.clone(T orig, Configuration conf)`: Make a copy of a writable object using serialization to a buffer.
- `static void IOUtils.copyBytes(InputStream in, OutputStream out, Configuration conf)`: Copies from one stream to another.
- `static void IOUtils.copyBytes(InputStream in, OutputStream out, Configuration conf, boolean close)`: Copies from one stream to another.
- `static SequenceFile.Writer SequenceFile.createWriter(Configuration conf, FSDataOutputStream out, Class keyClass, Class valClass, SequenceFile.CompressionType compressionType, CompressionCodec codec)`: Deprecated. Use SequenceFile.createWriter(Configuration, Writer.Option...) instead.
- `static SequenceFile.Writer SequenceFile.createWriter(Configuration conf, FSDataOutputStream out, Class keyClass, Class valClass, SequenceFile.CompressionType compressionType, CompressionCodec codec, SequenceFile.Metadata metadata)`: Deprecated. Use SequenceFile.createWriter(Configuration, Writer.Option...) instead.
- `static SequenceFile.Writer SequenceFile.createWriter(Configuration conf, SequenceFile.Writer.Option... opts)`: Create a new Writer with the given options.
- `static SequenceFile.Writer SequenceFile.createWriter(FileContext fc, Configuration conf, Path name, Class keyClass, Class valClass, SequenceFile.CompressionType compressionType, CompressionCodec codec, SequenceFile.Metadata metadata, EnumSet<CreateFlag> createFlag, org.apache.hadoop.fs.Options.CreateOpts... opts)`: Construct the preferred type of SequenceFile Writer.
- `static SequenceFile.Writer SequenceFile.createWriter(FileSystem fs, Configuration conf, Path name, Class keyClass, Class valClass)`: Deprecated. Use SequenceFile.createWriter(Configuration, Writer.Option...) instead.
- `static SequenceFile.Writer SequenceFile.createWriter(FileSystem fs, Configuration conf, Path name, Class keyClass, Class valClass, int bufferSize, short replication, long blockSize, boolean createParent, SequenceFile.CompressionType compressionType, CompressionCodec codec, SequenceFile.Metadata metadata)`: Deprecated.
- `static SequenceFile.Writer SequenceFile.createWriter(FileSystem fs, Configuration conf, Path name, Class keyClass, Class valClass, int bufferSize, short replication, long blockSize, SequenceFile.CompressionType compressionType, CompressionCodec codec, Progressable progress, SequenceFile.Metadata metadata)`: Deprecated. Use SequenceFile.createWriter(Configuration, Writer.Option...) instead.
- `static SequenceFile.Writer SequenceFile.createWriter(FileSystem fs, Configuration conf, Path name, Class keyClass, Class valClass, SequenceFile.CompressionType compressionType)`: Deprecated. Use SequenceFile.createWriter(Configuration, Writer.Option...) instead.
- `static SequenceFile.Writer SequenceFile.createWriter(FileSystem fs, Configuration conf, Path name, Class keyClass, Class valClass, SequenceFile.CompressionType compressionType, CompressionCodec codec)`: Deprecated. Use SequenceFile.createWriter(Configuration, Writer.Option...) instead.
- `static SequenceFile.Writer SequenceFile.createWriter(FileSystem fs, Configuration conf, Path name, Class keyClass, Class valClass, SequenceFile.CompressionType compressionType, CompressionCodec codec, Progressable progress)`: Deprecated. Use SequenceFile.createWriter(Configuration, Writer.Option...) instead.
- `static SequenceFile.Writer SequenceFile.createWriter(FileSystem fs, Configuration conf, Path name, Class keyClass, Class valClass, SequenceFile.CompressionType compressionType, CompressionCodec codec, Progressable progress, SequenceFile.Metadata metadata)`: Deprecated. Use SequenceFile.createWriter(Configuration, Writer.Option...) instead.
- `static SequenceFile.Writer SequenceFile.createWriter(FileSystem fs, Configuration conf, Path name, Class keyClass, Class valClass, SequenceFile.CompressionType compressionType, Progressable progress)`: Deprecated. Use SequenceFile.createWriter(Configuration, Writer.Option...) instead.
- `static long MapFile.fix(FileSystem fs, Path dir, Class<? extends Writable> keyClass, Class<? extends Writable> valueClass, boolean dryrun, Configuration conf)`: This method attempts to fix a corrupt MapFile by re-creating its index.
- `static WritableComparator WritableComparator.get(Class<? extends WritableComparable> c, Configuration conf)`: Get a comparator for a WritableComparable implementation.
- `static SequenceFile.CompressionType SequenceFile.getDefaultCompressionType(Configuration job)`: Get the compression type for the reduce outputs.
- `static <K> K DefaultStringifier.load(Configuration conf, String keyName, Class<K> itemClass)`: Restores the object from the configuration.
- `static <K> K[] DefaultStringifier.loadArray(Configuration conf, String keyName, Class<K> itemClass)`: Restores the array of objects from the configuration.
- `static Class<?> ObjectWritable.loadClass(Configuration conf, String className)`: Find and load the class with given name className by first finding it in the specified conf.
- `static Writable WritableFactories.newInstance(Class<? extends Writable> c, Configuration conf)`: Create a new instance of a class with a defined factory.
- `static Object ObjectWritable.readObject(DataInput in, Configuration conf)`: Read a Writable, String, primitive type, or an array of the preceding.
- `static Object ObjectWritable.readObject(DataInput in, ObjectWritable objectWritable, Configuration conf)`: Read a Writable, String, primitive type, or an array of the preceding.
- `void EnumSetWritable.setConf(Configuration conf)`
- `void ObjectWritable.setConf(Configuration conf)`
- `void WritableComparator.setConf(Configuration conf)`
- `void AbstractMapWritable.setConf(Configuration conf)`
- `void GenericWritable.setConf(Configuration conf)`
- `static void SequenceFile.setDefaultCompressionType(Configuration job, SequenceFile.CompressionType val)`: Set the default compression type for sequence files.
- `static <K> void DefaultStringifier.store(Configuration conf, K item, String keyName)`: Stores the item in the configuration with the given keyName.
- `static <K> void DefaultStringifier.storeArray(Configuration conf, K[] items, String keyName)`: Stores the array of items in the configuration with the given keyName.
- `static void ObjectWritable.writeObject(DataOutput out, Object instance, Class declaredClass, Configuration conf)`: Write a Writable, String, primitive type, or an array of the preceding.
- `static void ObjectWritable.writeObject(DataOutput out, Object instance, Class declaredClass, Configuration conf, boolean allowCompactArrays)`: Write a Writable, String, primitive type, or an array of the preceding.

Constructors in org.apache.hadoop.io with parameters of type Configuration:

- `DefaultStringifier(Configuration conf, Class<T> c)`
- `WritableComparator(Class<? extends WritableComparable> keyClass, Configuration conf, boolean createInstances)`
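Most SequenceFile.createWriter overloads above are deprecated in favor of the option-based factory, SequenceFile.createWriter(Configuration, Writer.Option...). A write-then-read sketch, assuming hadoop-common on the classpath and using the local filesystem (the file name is an arbitrary example):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class SeqFileDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path file = new Path(System.getProperty("java.io.tmpdir"), "demo.seq");

        // Options replace the long positional parameter lists of the
        // deprecated overloads.
        SequenceFile.Writer writer = SequenceFile.createWriter(conf,
                SequenceFile.Writer.file(file),
                SequenceFile.Writer.keyClass(Text.class),
                SequenceFile.Writer.valueClass(IntWritable.class));
        try {
            writer.append(new Text("answer"), new IntWritable(42));
        } finally {
            IOUtils.closeStream(writer);
        }

        SequenceFile.Reader reader = new SequenceFile.Reader(conf,
                SequenceFile.Reader.file(file));
        try {
            Text key = new Text();
            IntWritable val = new IntWritable();
            while (reader.next(key, val)) {
                System.out.println(key + "\t" + val); // answer	42
            }
        } finally {
            IOUtils.closeStream(reader);
        }
    }
}
```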
Uses of Configuration in org.apache.hadoop.io.compress
---

Methods in org.apache.hadoop.io.compress that return Configuration:

- `Configuration BZip2Codec.getConf()`: Return the configuration used by this object.
- `Configuration DefaultCodec.getConf()`

Methods in org.apache.hadoop.io.compress with parameters of type Configuration:

- `static List<Class<? extends CompressionCodec>> CompressionCodecFactory.getCodecClasses(Configuration conf)`: Get the list of codecs discovered via a Java ServiceLoader, or listed in the configuration.
- `static Compressor CodecPool.getCompressor(CompressionCodec codec, Configuration conf)`: Get a Compressor for the given CompressionCodec from the pool, or a new one.
- `void Compressor.reinit(Configuration conf)`: Prepare the compressor to be used in a new stream with settings defined in the given Configuration.
- `static void CompressionCodecFactory.setCodecClasses(Configuration conf, List<Class> classes)`: Sets a list of codec classes in the configuration.
- `void BZip2Codec.setConf(Configuration conf)`: Set the configuration to be used by this object.
- `void DefaultCodec.setConf(Configuration conf)`

Constructors in org.apache.hadoop.io.compress with parameters of type Configuration:

- `CompressionCodecFactory(Configuration conf)`: Find the codecs specified in the config value io.compression.codecs and register them.
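The factory maps file extensions to registered codecs, which is how MapReduce decides whether an input file needs decompression. A minimal sketch, assuming hadoop-common on the classpath (the file names are made up for illustration):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;

public class CodecDemo {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Registers the codecs named in io.compression.codecs plus any
        // discovered via ServiceLoader.
        CompressionCodecFactory factory = new CompressionCodecFactory(conf);

        // Lookup is by file extension; null means "not compressed".
        CompressionCodec gzip = factory.getCodec(new Path("part-00000.gz"));
        CompressionCodec none = factory.getCodec(new Path("part-00000.txt"));
        System.out.println(gzip == null ? "none" : gzip.getClass().getSimpleName()); // GzipCodec
        System.out.println(none == null ? "none" : none.getClass().getSimpleName()); // none
    }
}
```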
Uses of Configuration in org.apache.hadoop.mapred
---

Subclasses of Configuration in org.apache.hadoop.mapred:

- `class JobConf`: A map/reduce job configuration.

Fields in org.apache.hadoop.mapred declared as Configuration:

- `protected Configuration SequenceFileRecordReader.conf`

Methods in org.apache.hadoop.mapred that return Configuration:

- `Configuration RunningJob.getConfiguration()`: Get the underlying job configuration.

Methods in org.apache.hadoop.mapred with parameters of type Configuration:

- `static int SkipBadRecords.getAttemptsToStartSkipping(Configuration conf)`: Get the number of Task attempts AFTER which skip mode will be kicked off.
- `static boolean SkipBadRecords.getAutoIncrMapperProcCount(Configuration conf)`: Get the flag which, if set to true, means SkipBadRecords.COUNTER_MAP_PROCESSED_RECORDS is incremented by MapRunner after invoking the map function.
- `static boolean SkipBadRecords.getAutoIncrReducerProcCount(Configuration conf)`: Get the flag which, if set to true, means SkipBadRecords.COUNTER_REDUCE_PROCESSED_GROUPS is incremented by the framework after invoking the reduce function.
- `static long SkipBadRecords.getMapperMaxSkipRecords(Configuration conf)`: Get the number of acceptable skip records surrounding the bad record PER bad record in the mapper.
- `static org.apache.hadoop.io.SequenceFile.Reader[] SequenceFileOutputFormat.getReaders(Configuration conf, Path dir)`: Open the output generated by this format.
- `static org.apache.hadoop.io.MapFile.Reader[] MapFileOutputFormat.getReaders(FileSystem ignored, Path dir, Configuration conf)`: Open the output generated by this format.
- `static int FixedLengthInputFormat.getRecordLength(Configuration conf)`: Get the record length value.
- `static long SkipBadRecords.getReducerMaxSkipGroups(Configuration conf)`: Get the number of acceptable skip groups surrounding the bad group PER bad group in the reducer.
- `static Path SkipBadRecords.getSkipOutputPath(Configuration conf)`: Get the directory to which skipped records are written.
- `static void SkipBadRecords.setAttemptsToStartSkipping(Configuration conf, int attemptsToStartSkipping)`: Set the number of Task attempts AFTER which skip mode will be kicked off.
- `static void SkipBadRecords.setAutoIncrMapperProcCount(Configuration conf, boolean autoIncr)`: Set the flag which, if set to true, means SkipBadRecords.COUNTER_MAP_PROCESSED_RECORDS is incremented by MapRunner after invoking the map function.
- `static void SkipBadRecords.setAutoIncrReducerProcCount(Configuration conf, boolean autoIncr)`: Set the flag which, if set to true, means SkipBadRecords.COUNTER_REDUCE_PROCESSED_GROUPS is incremented by the framework after invoking the reduce function.
- `static void SequenceFileInputFilter.setFilterClass(Configuration conf, Class filterClass)`: Set the filter class.
- `static void SkipBadRecords.setMapperMaxSkipRecords(Configuration conf, long maxSkipRecs)`: Set the number of acceptable skip records surrounding the bad record PER bad record in the mapper.
- `static void FixedLengthInputFormat.setRecordLength(Configuration conf, int recordLength)`: Set the length of each record.
- `static void SkipBadRecords.setReducerMaxSkipGroups(Configuration conf, long maxSkipGrps)`: Set the number of acceptable skip groups surrounding the bad group PER bad group in the reducer.

Constructors in org.apache.hadoop.mapred with parameters of type Configuration:

- `JobClient(Configuration conf)`: Build a job client with the given Configuration, and connect to the default cluster.
- `JobClient(InetSocketAddress jobTrackAddr, Configuration conf)`: Build a job client, connect to the indicated job tracker.
- `JobConf(Configuration conf)`: Construct a map/reduce job configuration.
- `JobConf(Configuration conf, Class exampleClass)`: Construct a map/reduce job configuration.
- `KeyValueLineRecordReader(Configuration job, FileSplit split)`
- `SequenceFileAsTextRecordReader(Configuration conf, FileSplit split)`
- `SequenceFileRecordReader(Configuration conf, FileSplit split)`
Uses of Configuration in org.apache.hadoop.mapred.join
---

Methods in org.apache.hadoop.mapred.join that return Configuration:

- `Configuration CompositeRecordReader.getConf()`: Return the configuration used by this object.
- `Configuration WrappedRecordReader.getConf()`

Methods in org.apache.hadoop.mapred.join with parameters of type Configuration:

- `void CompositeRecordReader.setConf(Configuration conf)`: Set the configuration to be used by this object.
- `void WrappedRecordReader.setConf(Configuration conf)`
Uses of Configuration in org.apache.hadoop.mapred.lib
---

Constructors in org.apache.hadoop.mapred.lib with parameters of type Configuration:

- `CombineFileRecordReaderWrapper(FileInputFormat<K,V> inputFormat, CombineFileSplit split, Configuration conf, Reporter reporter, Integer idx)`
Uses of Configuration in org.apache.hadoop.mapred.pipes
---

Constructors in org.apache.hadoop.mapred.pipes with parameters of type Configuration:

- `Submitter(Configuration conf)`
Uses of Configuration in org.apache.hadoop.mapreduce
---

Methods in org.apache.hadoop.mapreduce that return Configuration:

- `Configuration JobContext.getConfiguration()`: Return the configuration for the job.

Methods in org.apache.hadoop.mapreduce with parameters of type Configuration:

- `static int Job.getCompletionPollInterval(Configuration conf)`: The interval at which waitForCompletion() should check.
- `static Job Job.getInstance(Cluster ignored, Configuration conf)`: Deprecated. Use Job.getInstance(Configuration).
- `static Job Job.getInstance(Cluster cluster, JobStatus status, Configuration conf)`: Creates a new Job with no particular Cluster and the given Configuration and JobStatus.
- `static Job Job.getInstance(Configuration conf)`: Creates a new Job with no particular Cluster and a given Configuration.
- `static Job Job.getInstance(Configuration conf, String jobName)`: Creates a new Job with no particular Cluster and a given jobName.
- `static Job Job.getInstance(JobStatus status, Configuration conf)`: Creates a new Job with no particular Cluster and the given Configuration and JobStatus.
- `static int Job.getProgressPollInterval(Configuration conf)`: The interval at which monitorAndPrintJob() prints status.
- `static org.apache.hadoop.mapreduce.Job.TaskStatusFilter Job.getTaskOutputFilter(Configuration conf)`: Get the task output filter.
- `static void Job.setTaskOutputFilter(Configuration conf, org.apache.hadoop.mapreduce.Job.TaskStatusFilter newValue)`: Modify the Configuration to set the task output filter.

Constructors in org.apache.hadoop.mapreduce with parameters of type Configuration:

- `Cluster(Configuration conf)`
- `Cluster(InetSocketAddress jobTrackAddr, Configuration conf)`
- `Job(Configuration conf)`: Deprecated.
- `Job(Configuration conf, String jobName)`: Deprecated.
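The deprecated Job constructors are replaced by the Job.getInstance factory methods above. A sketch of local job setup, assuming the hadoop-mapreduce-client jars on the classpath (the job name and reduce count are arbitrary examples):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;

public class JobDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.setInt("mapreduce.job.reduces", 2);

        // Job.getInstance copies conf into the job; further changes to
        // conf do not affect the job, use job.getConfiguration() instead.
        Job job = Job.getInstance(conf, "word-count");
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        System.out.println(job.getJobName()); // word-count
        System.out.println(job.getConfiguration().getInt("mapreduce.job.reduces", 1)); // 2
    }
}
```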
Uses of Configuration in org.apache.hadoop.mapreduce.lib.aggregate
---

Methods in org.apache.hadoop.mapreduce.lib.aggregate that return Configuration:

- `static Configuration ValueAggregatorJob.setAggregatorDescriptors(Class<? extends ValueAggregatorDescriptor>[] descriptors)`

Methods in org.apache.hadoop.mapreduce.lib.aggregate with parameters of type Configuration:

- `void ValueAggregatorBaseDescriptor.configure(Configuration conf)`: Get the input file name.
- `void UserDefinedValueAggregatorDescriptor.configure(Configuration conf)`: Do nothing.
- `void ValueAggregatorDescriptor.configure(Configuration conf)`: Configure the object.
- `static Job ValueAggregatorJob.createValueAggregatorJob(Configuration conf, String[] args)`: Create an Aggregate based map/reduce job.
- `protected static ArrayList<ValueAggregatorDescriptor> ValueAggregatorJobBase.getAggregatorDescriptors(Configuration conf)`
- `protected static ValueAggregatorDescriptor ValueAggregatorJobBase.getValueAggregatorDescriptor(String spec, Configuration conf)`
- `static void ValueAggregatorJobBase.setup(Configuration job)`

Constructors in org.apache.hadoop.mapreduce.lib.aggregate with parameters of type Configuration:

- `UserDefinedValueAggregatorDescriptor(String className, Configuration conf)`
Uses of Configuration in org.apache.hadoop.mapreduce.lib.chain
---

Methods in org.apache.hadoop.mapreduce.lib.chain with parameters of type Configuration:

- `static void ChainMapper.addMapper(Job job, Class<? extends Mapper> klass, Class<?> inputKeyClass, Class<?> inputValueClass, Class<?> outputKeyClass, Class<?> outputValueClass, Configuration mapperConf)`: Adds a Mapper class to the chain mapper.
- `static void ChainReducer.addMapper(Job job, Class<? extends Mapper> klass, Class<?> inputKeyClass, Class<?> inputValueClass, Class<?> outputKeyClass, Class<?> outputValueClass, Configuration mapperConf)`: Adds a Mapper class to the chain reducer.
- `static void ChainReducer.setReducer(Job job, Class<? extends Reducer> klass, Class<?> inputKeyClass, Class<?> inputValueClass, Class<?> outputKeyClass, Class<?> outputValueClass, Configuration reducerConf)`: Sets the Reducer class of the chain job.
Uses of Configuration in org.apache.hadoop.mapreduce.lib.db
---

Methods in org.apache.hadoop.mapreduce.lib.db that return Configuration:

- `Configuration DBInputFormat.getConf()`
- `Configuration DBConfiguration.getConf()`

Methods in org.apache.hadoop.mapreduce.lib.db with parameters of type Configuration:

- `static void DBConfiguration.configureDB(Configuration job, String driverClass, String dbUrl)`: Sets the DB access related fields in the JobConf.
- `static void DBConfiguration.configureDB(Configuration conf, String driverClass, String dbUrl, String userName, String passwd)`: Sets the DB access related fields in the Configuration.
- `protected RecordReader<LongWritable,T> DBInputFormat.createDBRecordReader(org.apache.hadoop.mapreduce.lib.db.DBInputFormat.DBInputSplit split, Configuration conf)`
- `protected RecordReader<LongWritable,T> OracleDataDrivenDBInputFormat.createDBRecordReader(org.apache.hadoop.mapreduce.lib.db.DBInputFormat.DBInputSplit split, Configuration conf)`
- `protected RecordReader<LongWritable,T> DataDrivenDBInputFormat.createDBRecordReader(org.apache.hadoop.mapreduce.lib.db.DBInputFormat.DBInputSplit split, Configuration conf)`
- `static void DataDrivenDBInputFormat.setBoundingQuery(Configuration conf, String query)`: Set the user-defined bounding query to use with a user-defined query.
- `void DBInputFormat.setConf(Configuration conf)`: Set the configuration to be used by this object.
- `static void OracleDBRecordReader.setSessionTimeZone(Configuration conf, Connection conn)`: Set the session time zone.
- `List<InputSplit> TextSplitter.split(Configuration conf, ResultSet results, String colName)`: This method needs to determine the splits between two user-provided strings.
- `List<InputSplit> DBSplitter.split(Configuration conf, ResultSet results, String colName)`: Given a ResultSet containing one record (and already advanced to that record) with two columns (a low value and a high value, both of the same type), determine a set of splits that span the given values.
- `List<InputSplit> IntegerSplitter.split(Configuration conf, ResultSet results, String colName)`
- `List<InputSplit> DateSplitter.split(Configuration conf, ResultSet results, String colName)`
- `List<InputSplit> FloatSplitter.split(Configuration conf, ResultSet results, String colName)`
- `List<InputSplit> BigDecimalSplitter.split(Configuration conf, ResultSet results, String colName)`
- `List<InputSplit> BooleanSplitter.split(Configuration conf, ResultSet results, String colName)`

Constructors in org.apache.hadoop.mapreduce.lib.db with parameters of type Configuration:

- `DataDrivenDBRecordReader(org.apache.hadoop.mapreduce.lib.db.DBInputFormat.DBInputSplit split, Class<T> inputClass, Configuration conf, Connection conn, DBConfiguration dbConfig, String cond, String[] fields, String table, String dbProduct)`
- `DBConfiguration(Configuration job)`
- `DBRecordReader(org.apache.hadoop.mapreduce.lib.db.DBInputFormat.DBInputSplit split, Class<T> inputClass, Configuration conf, Connection conn, DBConfiguration dbConfig, String cond, String[] fields, String table)`
- `MySQLDataDrivenDBRecordReader(org.apache.hadoop.mapreduce.lib.db.DBInputFormat.DBInputSplit split, Class<T> inputClass, Configuration conf, Connection conn, DBConfiguration dbConfig, String cond, String[] fields, String table)`
- `MySQLDBRecordReader(org.apache.hadoop.mapreduce.lib.db.DBInputFormat.DBInputSplit split, Class<T> inputClass, Configuration conf, Connection conn, DBConfiguration dbConfig, String cond, String[] fields, String table)`
- `OracleDataDrivenDBRecordReader(org.apache.hadoop.mapreduce.lib.db.DBInputFormat.DBInputSplit split, Class<T> inputClass, Configuration conf, Connection conn, DBConfiguration dbConfig, String cond, String[] fields, String table)`
- `OracleDBRecordReader(org.apache.hadoop.mapreduce.lib.db.DBInputFormat.DBInputSplit split, Class<T> inputClass, Configuration conf, Connection conn, DBConfiguration dbConfig, String cond, String[] fields, String table)`
Uses of Configuration in org.apache.hadoop.mapreduce.lib.input |
---|
Fields in org.apache.hadoop.mapreduce.lib.input declared as Configuration | |
---|---|
protected Configuration | SequenceFileRecordReader.conf |
Methods in org.apache.hadoop.mapreduce.lib.input with parameters of type Configuration | |
---|---|
static int | FixedLengthInputFormat.getRecordLength(Configuration conf) Get the record length value. |
static List<FileSplit> | NLineInputFormat.getSplitsForFile(FileStatus status, Configuration conf, int numLinesPerSplit) |
static void | FixedLengthInputFormat.setRecordLength(Configuration conf, int recordLength) Set the length of each record. |
Constructors in org.apache.hadoop.mapreduce.lib.input with parameters of type Configuration | |
---|---|
KeyValueLineRecordReader(Configuration conf) |
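FixedLengthInputFormat's record length (set and read by the two methods above) lets a reader cut its input into records without scanning for delimiters. A minimal, dependency-free sketch of that idea (names are illustrative, not the Hadoop API):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;

/**
 * Sketch of fixed-length record reading: with every record exactly
 * recordLength bytes, the stream is chopped by counting, not by
 * searching for separators. Illustrative only.
 */
public class FixedLengthReader {
    public static List<String> readRecords(InputStream in, int recordLength) throws IOException {
        if (recordLength <= 0) {
            // FixedLengthInputFormat likewise rejects non-positive lengths.
            throw new IllegalArgumentException("record length must be > 0");
        }
        List<String> records = new ArrayList<>();
        byte[] buf = new byte[recordLength];
        // Only complete records are kept; a trailing partial record would
        // be an error in the real format.
        while (in.readNBytes(buf, 0, recordLength) == recordLength) {
            records.add(new String(buf));
        }
        return records;
    }

    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream("aaabbbccc".getBytes());
        System.out.println(readRecords(in, 3));
    }
}
```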
Uses of Configuration in org.apache.hadoop.mapreduce.lib.jobcontrol |
---|
Constructors in org.apache.hadoop.mapreduce.lib.jobcontrol with parameters of type Configuration | |
---|---|
ControlledJob(Configuration conf) Construct a job. |
Uses of Configuration in org.apache.hadoop.mapreduce.lib.join |
---|
Fields in org.apache.hadoop.mapreduce.lib.join declared as Configuration | |
---|---|
protected Configuration | CompositeRecordReader.conf |
Methods in org.apache.hadoop.mapreduce.lib.join that return Configuration | |
---|---|
Configuration | CompositeRecordReader.getConf() Return the configuration used by this object. |
Methods in org.apache.hadoop.mapreduce.lib.join with parameters of type Configuration | |
---|---|
void | CompositeRecordReader.setConf(Configuration conf) Set the configuration to be used by this object. |
void | CompositeInputFormat.setFormat(Configuration conf) Interpret a given string as a composite expression. |
Constructors in org.apache.hadoop.mapreduce.lib.join with parameters of type Configuration | |
---|---|
JoinRecordReader(int id, Configuration conf, int capacity, Class<? extends WritableComparator> cmpcl) |
MultiFilterRecordReader(int id, Configuration conf, int capacity, Class<? extends WritableComparator> cmpcl) |
Uses of Configuration in org.apache.hadoop.mapreduce.lib.output |
---|
Methods in org.apache.hadoop.mapreduce.lib.output with parameters of type Configuration | |
---|---|
static org.apache.hadoop.io.MapFile.Reader[] | MapFileOutputFormat.getReaders(Path dir, Configuration conf) Open the output generated by this format. |
Uses of Configuration in org.apache.hadoop.mapreduce.lib.partition |
---|
Methods in org.apache.hadoop.mapreduce.lib.partition that return Configuration | |
---|---|
Configuration | KeyFieldBasedComparator.getConf() |
Configuration | BinaryPartitioner.getConf() |
Configuration | KeyFieldBasedPartitioner.getConf() |
Configuration | TotalOrderPartitioner.getConf() |
Methods in org.apache.hadoop.mapreduce.lib.partition with parameters of type Configuration | |
---|---|
static String | TotalOrderPartitioner.getPartitionFile(Configuration conf) Get the path to the SequenceFile storing the sorted partition keyset. |
void | KeyFieldBasedComparator.setConf(Configuration conf) |
void | BinaryPartitioner.setConf(Configuration conf) |
void | KeyFieldBasedPartitioner.setConf(Configuration conf) |
void | TotalOrderPartitioner.setConf(Configuration conf) Read in the partition file and build indexing data structures. |
static void | BinaryPartitioner.setLeftOffset(Configuration conf, int offset) Set the subarray to be used for partitioning to bytes[offset:] in Python syntax. |
static void | BinaryPartitioner.setOffsets(Configuration conf, int left, int right) Set the subarray to be used for partitioning to bytes[left:(right+1)] in Python syntax. |
static void | TotalOrderPartitioner.setPartitionFile(Configuration conf, Path p) Set the path to the SequenceFile storing the sorted partition keyset. |
static void | BinaryPartitioner.setRightOffset(Configuration conf, int offset) Set the subarray to be used for partitioning to bytes[:(offset+1)] in Python syntax. |
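TotalOrderPartitioner's partition file holds a sorted keyset of numPartitions - 1 split points; setConf reads it and builds index structures so each key is routed to the partition whose range contains it. A minimal sketch of the routing idea using plain binary search (names are illustrative, not the Hadoop API):

```java
import java.util.Arrays;

/**
 * Sketch of total-order partitioning: (numPartitions - 1) sorted split
 * points divide the key space into contiguous ranges, one per partition.
 * Illustrative only; the real class reads the points from a SequenceFile
 * and may use a trie for byte keys.
 */
public class TotalOrderSketch {
    private final String[] splitPoints; // sorted; length = numPartitions - 1

    public TotalOrderSketch(String[] sortedSplitPoints) {
        this.splitPoints = sortedSplitPoints;
    }

    public int getPartition(String key) {
        int pos = Arrays.binarySearch(splitPoints, key);
        // binarySearch returns (-(insertionPoint) - 1) on a miss; either
        // way, keys below splitPoints[i] land in partition i, and a key
        // equal to splitPoints[i] lands in partition i + 1.
        return pos >= 0 ? pos + 1 : -(pos + 1);
    }

    public static void main(String[] args) {
        TotalOrderSketch p = new TotalOrderSketch(new String[] { "g", "p" });
        System.out.println(p.getPartition("apple")); // 0
        System.out.println(p.getPartition("melon")); // 1
        System.out.println(p.getPartition("zebra")); // 2
    }
}
```

Because the split points are globally sorted, concatenating the sorted output of partition 0, 1, 2, ... yields a totally ordered dataset, which is the point of the class.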
Constructors in org.apache.hadoop.mapreduce.lib.partition with parameters of type Configuration | |
---|---|
InputSampler(Configuration conf) |
Uses of Configuration in org.apache.hadoop.mapreduce.security |
---|
Methods in org.apache.hadoop.mapreduce.security with parameters of type Configuration | |
---|---|
static void | TokenCache.cleanUpTokenReferral(Configuration conf) Remove jobtoken referrals which don't make sense in the context of the task execution. |
static org.apache.hadoop.security.Credentials | TokenCache.loadTokens(String jobTokenFile, Configuration conf) Deprecated. Use Credentials.readTokenStorageFile(org.apache.hadoop.fs.Path, org.apache.hadoop.conf.Configuration) instead; this method is included for compatibility against Hadoop-1. |
static void | TokenCache.obtainTokensForNamenodes(org.apache.hadoop.security.Credentials credentials, Path[] ps, Configuration conf) Convenience method to obtain delegation tokens from the namenodes corresponding to the paths passed. |
Uses of Configuration in org.apache.hadoop.mapreduce.tools |
---|
Constructors in org.apache.hadoop.mapreduce.tools with parameters of type Configuration | |
---|---|
CLI(Configuration conf) |
Uses of Configuration in org.apache.hadoop.mapreduce.v2.hs |
---|
Methods in org.apache.hadoop.mapreduce.v2.hs with parameters of type Configuration | |
---|---|
protected void | HistoryFileManager.serviceInit(Configuration conf) |
Uses of Configuration in org.apache.hadoop.net |
---|
Methods in org.apache.hadoop.net that return Configuration | |
---|---|
Configuration | ScriptBasedMapping.getConf() |
Configuration | SocksSocketFactory.getConf() |
Configuration | TableMapping.getConf() |
Configuration | AbstractDNSToSwitchMapping.getConf() |
Methods in org.apache.hadoop.net with parameters of type Configuration | |
---|---|
void | ScriptBasedMapping.setConf(Configuration conf) Set the configuration to be used by this object. |
void | SocksSocketFactory.setConf(Configuration conf) |
void | TableMapping.setConf(Configuration conf) |
void | AbstractDNSToSwitchMapping.setConf(Configuration conf) |
Constructors in org.apache.hadoop.net with parameters of type Configuration | |
---|---|
AbstractDNSToSwitchMapping(Configuration conf) Create an instance, caching the configuration file. |
ScriptBasedMapping(Configuration conf) Create an instance from the given configuration. |
Uses of Configuration in org.apache.hadoop.service |
---|
Methods in org.apache.hadoop.service that return Configuration | |
---|---|
Configuration | Service.getConfig() Get the configuration of this service. |
Configuration | AbstractService.getConfig() |
Methods in org.apache.hadoop.service with parameters of type Configuration | |
---|---|
void | Service.init(Configuration config) Initialize the service. |
void | AbstractService.init(Configuration conf) Initialize the service. |
protected void | AbstractService.serviceInit(Configuration conf) All initialization code needed by a service. |
protected void | CompositeService.serviceInit(Configuration conf) |
protected void | AbstractService.setConfig(Configuration conf) Set the configuration for this service. |
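The Service methods above form a template-method lifecycle: init(conf) records the configuration, then delegates to serviceInit(conf), which subclasses override with their own setup code. A simplified, dependency-free sketch (types here are stand-ins, not the real AbstractService):

```java
/**
 * Sketch of the org.apache.hadoop.service init lifecycle. The real
 * AbstractService also manages STARTED/STOPPED states and listeners;
 * this shows only the init/serviceInit split. Illustrative only.
 */
public class ServiceSketch {
    static class Config { /* stand-in for Configuration */ }

    static abstract class AbstractServiceSketch {
        private Config config;
        private boolean inited;

        /** Template method: record the config, then run subclass init code. */
        public final void init(Config conf) {
            if (inited) throw new IllegalStateException("already initialized");
            this.config = conf;
            serviceInit(conf);
            inited = true;
        }

        /** All initialization code needed by a service goes here. */
        protected void serviceInit(Config conf) { }

        public Config getConfig() { return config; }
        public boolean isInited() { return inited; }
    }

    static class EchoService extends AbstractServiceSketch {
        String banner;
        @Override protected void serviceInit(Config conf) { banner = "ready"; }
    }

    public static void main(String[] args) {
        EchoService s = new EchoService();
        s.init(new Config());
        System.out.println(s.banner + " " + s.isInited());
    }
}
```

Making init final and serviceInit overridable is what lets CompositeService chain its children's serviceInit calls without any subclass bypassing the state bookkeeping.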
Uses of Configuration in org.apache.hadoop.util |
---|
Methods in org.apache.hadoop.util with parameters of type Configuration | |
---|---|
static <T> T | ReflectionUtils.copy(Configuration conf, T src, T dst) Make a copy of the writable object using serialization to a buffer. |
static <T> T | ReflectionUtils.newInstance(Class<T> theClass, Configuration conf) Create an object for the given class and initialize it from conf. |
static int | ToolRunner.run(Configuration conf, Tool tool, String[] args) Runs the given Tool by Tool.run(String[]), after parsing with the given generic arguments. |
static void | ReflectionUtils.setConf(Object theObject, Configuration conf) Check and set 'configuration' if necessary. |
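ReflectionUtils.newInstance and setConf combine into a common Hadoop pattern: instantiate a class reflectively, then inject the Configuration when the object is Configurable. A simplified sketch with stand-in types (not the real Hadoop classes):

```java
import java.lang.reflect.Constructor;

/**
 * Sketch of the ReflectionUtils.newInstance pattern: reflective
 * construction followed by conditional configuration injection.
 * Config and Configurable are simplified stand-ins. Illustrative only.
 */
public class ReflectionSketch {
    static class Config { String name = "job.xml"; }

    interface Configurable {
        void setConf(Config conf);
        Config getConf();
    }

    /** Create an instance of theClass and, if it is Configurable, configure it. */
    static <T> T newInstance(Class<T> theClass, Config conf) {
        try {
            Constructor<T> ctor = theClass.getDeclaredConstructor();
            ctor.setAccessible(true); // the real class also caches constructors
            T result = ctor.newInstance();
            if (result instanceof Configurable) {
                ((Configurable) result).setConf(conf);
            }
            return result;
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public static class ConfiguredThing implements Configurable {
        private Config conf;
        public void setConf(Config conf) { this.conf = conf; }
        public Config getConf() { return conf; }
    }

    public static void main(String[] args) {
        ConfiguredThing t = newInstance(ConfiguredThing.class, new Config());
        System.out.println(t.getConf().name);
    }
}
```

This is how InputFormats, Mappers, and codecs named only by class name in a job's configuration end up constructed and configured at runtime.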
Uses of Configuration in org.apache.hadoop.yarn.applications.distributedshell |
---|
Constructors in org.apache.hadoop.yarn.applications.distributedshell with parameters of type Configuration | |
---|---|
Client(Configuration conf) |
Uses of Configuration in org.apache.hadoop.yarn.client |
---|
Methods in org.apache.hadoop.yarn.client with parameters of type Configuration | |
---|---|
static <T> T | AHSProxy.createAHSProxy(Configuration conf, Class<T> protocol, InetSocketAddress ahsAddress) |
static org.apache.hadoop.io.retry.RetryPolicy | RMProxy.createRetryPolicy(Configuration conf) Fetch the retry policy from the Configuration. |
static <T> T | ClientRMProxy.createRMProxy(Configuration configuration, Class<T> protocol) Create a proxy to the ResourceManager for the specified protocol. |
static <T> T | RMProxy.createRMProxy(Configuration conf, Class<T> protocol, InetSocketAddress rmAddress) Deprecated. This method is no longer used by YARN internally; to create a proxy to the RM, use ClientRMProxy#createRMProxy or ServerRMProxy#createRMProxy. Creates a proxy to the ResourceManager at the specified address. |
protected static <T> T | RMProxy.createRMProxy(Configuration configuration, Class<T> protocol, RMProxy instance) Create a proxy for the specified protocol. |
protected static <T> T | AHSProxy.getProxy(Configuration conf, Class<T> protocol, InetSocketAddress rmAddress) |
static Text | ClientRMProxy.getRMDelegationTokenService(Configuration conf) Get the token service name to be used for RMDelegationToken. |
Uses of Configuration in org.apache.hadoop.yarn.conf |
---|
Subclasses of Configuration in org.apache.hadoop.yarn.conf | |
---|---|
class | YarnConfiguration |
Methods in org.apache.hadoop.yarn.conf with parameters of type Configuration | |
---|---|
static String | YarnConfiguration.getClusterId(Configuration conf) |
static int | YarnConfiguration.getRMDefaultPortNumber(String addressPrefix, Configuration conf) |
static List<String> | YarnConfiguration.getServiceAddressConfKeys(Configuration conf) |
static boolean | YarnConfiguration.useHttps(Configuration conf) |
Constructors in org.apache.hadoop.yarn.conf with parameters of type Configuration | |
---|---|
YarnConfiguration(Configuration conf) |
Uses of Configuration in org.apache.hadoop.yarn.event |
---|
Methods in org.apache.hadoop.yarn.event with parameters of type Configuration | |
---|---|
protected void | AsyncDispatcher.serviceInit(Configuration conf) |
Uses of Configuration in org.apache.hadoop.yarn.logaggregation |
---|
Constructors in org.apache.hadoop.yarn.logaggregation with parameters of type Configuration | |
---|---|
AggregatedLogFormat.LogReader(Configuration conf, Path remoteAppLogFile) |
Uses of Configuration in org.apache.hadoop.yarn.security |
---|
Methods in org.apache.hadoop.yarn.security with parameters of type Configuration | |
---|---|
org.apache.hadoop.security.KerberosInfo | SchedulerSecurityInfo.getKerberosInfo(Class<?> protocol, Configuration conf) |
org.apache.hadoop.security.KerberosInfo | ContainerManagerSecurityInfo.getKerberosInfo(Class<?> protocol, Configuration conf) |
org.apache.hadoop.security.token.TokenInfo | SchedulerSecurityInfo.getTokenInfo(Class<?> protocol, Configuration conf) |
org.apache.hadoop.security.token.TokenInfo | ContainerManagerSecurityInfo.getTokenInfo(Class<?> protocol, Configuration conf) |
Uses of Configuration in org.apache.hadoop.yarn.security.admin |
---|
Methods in org.apache.hadoop.yarn.security.admin with parameters of type Configuration | |
---|---|
org.apache.hadoop.security.KerberosInfo | AdminSecurityInfo.getKerberosInfo(Class<?> protocol, Configuration conf) |
org.apache.hadoop.security.token.TokenInfo | AdminSecurityInfo.getTokenInfo(Class<?> protocol, Configuration conf) |
Uses of Configuration in org.apache.hadoop.yarn.security.client |
---|
Methods in org.apache.hadoop.yarn.security.client with parameters of type Configuration | |
---|---|
org.apache.hadoop.security.KerberosInfo | ClientTimelineSecurityInfo.getKerberosInfo(Class<?> protocol, Configuration conf) |
org.apache.hadoop.security.KerberosInfo | ClientRMSecurityInfo.getKerberosInfo(Class<?> protocol, Configuration conf) |
org.apache.hadoop.security.token.TokenInfo | ClientTimelineSecurityInfo.getTokenInfo(Class<?> protocol, Configuration conf) |
org.apache.hadoop.security.token.TokenInfo | ClientRMSecurityInfo.getTokenInfo(Class<?> protocol, Configuration conf) |
Uses of Configuration in org.apache.hadoop.yarn.util.timeline |
---|
Methods in org.apache.hadoop.yarn.util.timeline with parameters of type Configuration | |
---|---|
static Text | TimelineUtils.buildTimelineTokenService(Configuration conf) |
static InetSocketAddress | TimelineUtils.getTimelineTokenServiceAddress(Configuration conf) |