- ABORTED - Static variable in class org.apache.hadoop.yarn.api.records.ContainerExitStatus
-
Containers killed by the framework, either due to being released by
the application or being 'lost' due to node failures etc.
- abortJob(JobContext, int) - Method in class org.apache.hadoop.mapred.FileOutputCommitter
-
- abortJob(JobContext, int) - Method in class org.apache.hadoop.mapred.OutputCommitter
-
For aborting an unsuccessful job's output.
- abortJob(JobContext, JobStatus.State) - Method in class org.apache.hadoop.mapred.OutputCommitter
-
This method implements the new interface by calling the old method.
- abortJob(JobContext, JobStatus.State) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
-
Delete the temporary directory, including all of the work directories.
- abortJob(JobContext, JobStatus.State) - Method in class org.apache.hadoop.mapreduce.OutputCommitter
-
For aborting an unsuccessful job's output.
- abortTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapred.FileOutputCommitter
-
- abortTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapred.OutputCommitter
-
Discard the task output.
- abortTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapred.OutputCommitter
-
This method implements the new interface by calling the old method.
- abortTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
-
Delete the work directory
- abortTask(TaskAttemptContext, Path) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
-
- abortTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.OutputCommitter
-
Discard the task output.
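Taken together, the abortJob/abortTask hooks let a committer clean up the output of a failed job or task. Below is a minimal, hypothetical sketch of a custom org.apache.hadoop.mapreduce.OutputCommitter; the ScratchDirCommitter name and the /tmp/scratch path are invented for illustration.

```java
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.JobStatus;
import org.apache.hadoop.mapreduce.OutputCommitter;
import org.apache.hadoop.mapreduce.TaskAttemptContext;

// Hypothetical committer that stages output under a scratch directory.
public class ScratchDirCommitter extends OutputCommitter {
  private final Path scratchDir = new Path("/tmp/scratch");   // assumed path

  @Override
  public void setupJob(JobContext context) throws IOException {
    FileSystem.get(context.getConfiguration()).mkdirs(scratchDir);
  }

  @Override
  public void setupTask(TaskAttemptContext context) throws IOException { }

  @Override
  public boolean needsTaskCommit(TaskAttemptContext context) throws IOException {
    return true;
  }

  @Override
  public void commitTask(TaskAttemptContext context) throws IOException { }

  @Override
  public void abortTask(TaskAttemptContext context) throws IOException {
    // Discard this task attempt's partial output.
    FileSystem.get(context.getConfiguration())
        .delete(new Path(scratchDir, context.getTaskAttemptID().toString()), true);
  }

  @Override
  public void abortJob(JobContext context, JobStatus.State state) throws IOException {
    // The whole job failed or was killed: remove everything that was staged.
    FileSystem.get(context.getConfiguration()).delete(scratchDir, true);
  }
}
```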
- ABSOLUTE - Static variable in class org.apache.hadoop.metrics.spi.MetricValue
-
- AbstractCounters<C extends Counter,G extends CounterGroupBase<C>> - Class in org.apache.hadoop.mapreduce.counters
-
An abstract class to provide common implementation for the Counters
container in both mapred and mapreduce packages.
- AbstractCounters(CounterGroupFactory<C, G>) - Constructor for class org.apache.hadoop.mapreduce.counters.AbstractCounters
-
- AbstractCounters(AbstractCounters<C1, G1>, CounterGroupFactory<C, G>) - Constructor for class org.apache.hadoop.mapreduce.counters.AbstractCounters
-
Construct from another counters object.
- AbstractDNSToSwitchMapping - Class in org.apache.hadoop.net
-
This is a base class for DNS to Switch mappings.
- AbstractDNSToSwitchMapping() - Constructor for class org.apache.hadoop.net.AbstractDNSToSwitchMapping
-
Create an unconfigured instance
- AbstractDNSToSwitchMapping(Configuration) - Constructor for class org.apache.hadoop.net.AbstractDNSToSwitchMapping
-
Create an instance, caching the configuration file.
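A minimal sketch of a subclass, assuming the subclass must supply resolve plus the two reloadCachedMappings overloads from DNSToSwitchMapping; the SingleRackMapping name and the /default-rack label are invented.

```java
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.net.AbstractDNSToSwitchMapping;

// Hypothetical mapping that places every host in a single, fixed rack.
public class SingleRackMapping extends AbstractDNSToSwitchMapping {

  public SingleRackMapping() {
    super();                       // unconfigured instance
  }

  public SingleRackMapping(Configuration conf) {
    super(conf);                   // instance that caches the configuration
  }

  @Override
  public List<String> resolve(List<String> names) {
    List<String> racks = new ArrayList<>(names.size());
    for (int i = 0; i < names.size(); i++) {
      racks.add("/default-rack");  // assumed rack label
    }
    return racks;
  }

  @Override
  public void reloadCachedMappings() { }

  @Override
  public void reloadCachedMappings(List<String> names) { }
}
```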
- AbstractEvent<TYPE extends Enum<TYPE>> - Class in org.apache.hadoop.yarn.event
-
Parent class of all the events.
- AbstractEvent(TYPE) - Constructor for class org.apache.hadoop.yarn.event.AbstractEvent
-
- AbstractEvent(TYPE, long) - Constructor for class org.apache.hadoop.yarn.event.AbstractEvent
-
- AbstractFileSystem - Class in org.apache.hadoop.fs
-
This class provides an interface for implementors of a Hadoop file system
(analogous to the VFS of Unix).
- AbstractFileSystem(URI, String, boolean, int) - Constructor for class org.apache.hadoop.fs.AbstractFileSystem
-
Constructor to be called by subclasses.
- AbstractLivelinessMonitor<O> - Class in org.apache.hadoop.yarn.util
-
A simple liveliness monitor with which clients can register, trust the
component to monitor liveliness, get a call-back on expiry and then finally
unregister.
- AbstractLivelinessMonitor(String, Clock) - Constructor for class org.apache.hadoop.yarn.util.AbstractLivelinessMonitor
-
- AbstractMapWritable - Class in org.apache.hadoop.io
-
Abstract base class for MapWritable and SortedMapWritable
Unlike org.apache.nutch.crawl.MapWritable, this class allows creation of
MapWritable<Writable, MapWritable> so the CLASS_TO_ID and ID_TO_CLASS
maps travel with the class instead of being static.
- AbstractMapWritable() - Constructor for class org.apache.hadoop.io.AbstractMapWritable
-
constructor.
- AbstractMetric - Class in org.apache.hadoop.metrics2
-
The immutable metric
- AbstractMetric(MetricsInfo) - Constructor for class org.apache.hadoop.metrics2.AbstractMetric
-
Construct the metric
- AbstractMetricsContext - Class in org.apache.hadoop.metrics.spi
-
The main class of the Service Provider Interface.
- AbstractMetricsContext() - Constructor for class org.apache.hadoop.metrics.spi.AbstractMetricsContext
-
Creates a new instance of AbstractMetricsContext
- AbstractService - Class in org.apache.hadoop.service
-
This is the base implementation class for services.
- AbstractService(String) - Constructor for class org.apache.hadoop.service.AbstractService
-
Construct the service.
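A minimal sketch of a service built on this lifecycle; the HeartbeatService name is invented, and the overridden serviceInit/serviceStart/serviceStop methods simply delegate to the superclass.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.service.AbstractService;

// Hypothetical service using the AbstractService lifecycle.
public class HeartbeatService extends AbstractService {

  public HeartbeatService() {
    super("HeartbeatService");      // name passed to AbstractService(String)
  }

  @Override
  protected void serviceInit(Configuration conf) throws Exception {
    // read configuration here
    super.serviceInit(conf);
  }

  @Override
  protected void serviceStart() throws Exception {
    // start threads / connections here
    super.serviceStart();
  }

  @Override
  protected void serviceStop() throws Exception {
    // release resources here
    super.serviceStop();
  }
}
```

Callers then drive it through the Service lifecycle: init(conf), start(), and eventually stop().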
- accept(Path) - Method in class org.apache.hadoop.fs.GlobFilter
-
- accept(Path) - Method in interface org.apache.hadoop.fs.PathFilter
-
Tests whether or not the specified abstract pathname should be
included in a pathname list.
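A small example of an inline PathFilter used with FileSystem.listStatus; the /data directory and the .avro extension are arbitrary choices for illustration.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathFilter;

public class ListAvroFiles {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // Only accept paths ending in ".avro" (the extension is just an example).
    PathFilter avroOnly = new PathFilter() {
      @Override
      public boolean accept(Path path) {
        return path.getName().endsWith(".avro");
      }
    };
    for (FileStatus status : fs.listStatus(new Path("/data"), avroOnly)) {
      System.out.println(status.getPath());
    }
  }
}
```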
- accept(Class<?>) - Method in class org.apache.hadoop.io.serializer.avro.AvroReflectSerialization
-
- accept(Class<?>) - Method in class org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization
-
- accept(Class<?>) - Method in class org.apache.hadoop.io.serializer.JavaSerialization
-
- accept(Class<?>) - Method in class org.apache.hadoop.io.serializer.WritableSerialization
-
- accept(CompositeRecordReader.JoinCollector, K) - Method in interface org.apache.hadoop.mapred.join.ComposableRecordReader
-
While key-value pairs from this RecordReader match the given key, register
them with the JoinCollector provided.
- accept(CompositeRecordReader.JoinCollector, K) - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
-
If key provided matches that of this Composite, give JoinCollector
iterator over values it may emit.
- accept(CompositeRecordReader.JoinCollector, K) - Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
-
Add an iterator to the collector at the position occupied by this
RecordReader over the values in this stream paired with the key
provided (ie register a stream of values from this source matching K
with a collector).
- accept(Path) - Method in class org.apache.hadoop.mapred.OutputLogFilter
-
- accept(CompositeRecordReader.JoinCollector, K) - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeRecordReader
-
If key provided matches that of this Composite, give JoinCollector
iterator over values it may emit.
- accept(CompositeRecordReader.JoinCollector, K) - Method in class org.apache.hadoop.mapreduce.lib.join.WrappedRecordReader
-
Add an iterator to the collector at the position occupied by this
RecordReader over the values in this stream paired with the key
provided (ie register a stream of values from this source matching K
with a collector).
- accepts(String) - Method in class org.apache.hadoop.metrics2.MetricsFilter
-
Whether to accept the name
- accepts(MetricsTag) - Method in class org.apache.hadoop.metrics2.MetricsFilter
-
Whether to accept the tag
- accepts(Iterable<MetricsTag>) - Method in class org.apache.hadoop.metrics2.MetricsFilter
-
Whether to accept the tags
- accepts(MetricsRecord) - Method in class org.apache.hadoop.metrics2.MetricsFilter
-
Whether to accept the record
- access(Path, FsAction) - Method in class org.apache.hadoop.fs.AbstractFileSystem
-
The specification of this method matches that of FileContext.access(Path, FsAction), except that an UnresolvedLinkException may be thrown if a symlink is encountered in the path.
- access(Path, FsAction) - Method in class org.apache.hadoop.fs.FileContext
-
Checks if the user can access a path.
- access(Path, FsAction) - Method in class org.apache.hadoop.fs.FileSystem
-
Checks if the user can access a path.
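A short example of probing permissions with access; the /data/reports path is hypothetical, and the call is assumed to throw AccessControlException when the check fails, as the FileContext/FileSystem contract describes.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.security.AccessControlException;

public class CheckAccess {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path path = new Path("/data/reports");   // hypothetical path
    try {
      fs.access(path, FsAction.WRITE);       // throws if the caller cannot write
      System.out.println("write access granted");
    } catch (AccessControlException e) {
      System.out.println("write access denied: " + e.getMessage());
    }
  }
}
```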
- access(Path, FsAction) - Method in class org.apache.hadoop.fs.FilterFileSystem
-
- access(Path, FsAction) - Method in class org.apache.hadoop.fs.viewfs.ViewFileSystem
-
- access(Path, FsAction) - Method in class org.apache.hadoop.fs.viewfs.ViewFs
-
- ACCESS_DENIED - Static variable in class org.apache.hadoop.yarn.api.records.timeline.TimelinePutResponse.TimelinePutError
-
Error code returned if the user is denied access to the timeline data
- AccessControlException - Exception in org.apache.hadoop.fs.permission
-
Deprecated.
Use org.apache.hadoop.security.AccessControlException instead.
- AccessControlException() - Constructor for exception org.apache.hadoop.fs.permission.AccessControlException
-
Deprecated.
Default constructor is needed for unwrapping from RemoteException.
- AccessControlException(String) - Constructor for exception org.apache.hadoop.fs.permission.AccessControlException
-
Deprecated.
- AccessControlException(Throwable) - Constructor for exception org.apache.hadoop.fs.permission.AccessControlException
-
Deprecated.
Constructs a new exception with the specified cause and a detail
message of (cause==null ? null : cause.toString()) (which
typically contains the class and detail message of cause).
- AccessControlList - Class in org.apache.hadoop.security.authorize
-
Class representing a configured access control list.
- AccessControlList() - Constructor for class org.apache.hadoop.security.authorize.AccessControlList
-
This constructor exists primarily for AccessControlList to be Writable.
- AccessControlList(String) - Constructor for class org.apache.hadoop.security.authorize.AccessControlList
-
Construct a new ACL from a String representation of the same.
- AccessControlList(String, String) - Constructor for class org.apache.hadoop.security.authorize.AccessControlList
-
Construct a new ACL from String representation of users and groups
The arguments are comma separated lists
- AclEntry - Class in org.apache.hadoop.fs.permission
-
Defines a single entry in an ACL.
- AclEntryScope - Enum in org.apache.hadoop.fs.permission
-
Specifies the scope or intended usage of an ACL entry.
- AclEntryType - Enum in org.apache.hadoop.fs.permission
-
Specifies the type of an ACL entry.
- aclSpecToString(List<AclEntry>) - Static method in class org.apache.hadoop.fs.permission.AclEntry
-
Convert a List of AclEntries into a string - the reverse of parseAclSpec.
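A brief example of building AclEntry objects and rendering them with aclSpecToString; the alice and analysts principals are made up.

```java
import java.util.Arrays;
import java.util.List;

import org.apache.hadoop.fs.permission.AclEntry;
import org.apache.hadoop.fs.permission.AclEntryScope;
import org.apache.hadoop.fs.permission.AclEntryType;
import org.apache.hadoop.fs.permission.FsAction;

public class AclSpecDemo {
  public static void main(String[] args) {
    // Grant a named user read/write access and a named group read access.
    AclEntry userEntry = new AclEntry.Builder()
        .setScope(AclEntryScope.ACCESS)
        .setType(AclEntryType.USER)
        .setName("alice")                    // example principal
        .setPermission(FsAction.READ_WRITE)
        .build();
    AclEntry groupEntry = new AclEntry.Builder()
        .setScope(AclEntryScope.ACCESS)
        .setType(AclEntryType.GROUP)
        .setName("analysts")                 // example group
        .setPermission(FsAction.READ)
        .build();
    List<AclEntry> spec = Arrays.asList(userEntry, groupEntry);
    // Render the entries in the string form that parseAclSpec reverses.
    System.out.println(AclEntry.aclSpecToString(spec));
  }
}
```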
- AclStatus - Class in org.apache.hadoop.fs.permission
-
An AclStatus contains the ACL information of a specific file.
- activateOptions() - Method in class org.apache.hadoop.yarn.ContainerLogAppender
-
- activateOptions() - Method in class org.apache.hadoop.yarn.ContainerRollingLogAppender
-
- add(E) - Method in class org.apache.hadoop.io.EnumSetWritable
-
- add(InputSplit) - Method in class org.apache.hadoop.mapred.join.CompositeInputSplit
-
Add an InputSplit to this collection.
- add(ComposableRecordReader<K, ? extends V>) - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
-
Add a RecordReader to this collection.
- add(X) - Method in class org.apache.hadoop.mapreduce.lib.join.ArrayListBackedIterator
-
- add(InputSplit) - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeInputSplit
-
Add an InputSplit to this collection.
- add(ComposableRecordReader<K, ? extends V>) - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeRecordReader
-
Add a RecordReader to this collection.
- add(T) - Method in interface org.apache.hadoop.mapreduce.lib.join.ResetableIterator
-
Add an element to the collection of elements to iterate over.
- add(X) - Method in class org.apache.hadoop.mapreduce.lib.join.StreamBackedIterator
-
- add(String, long) - Method in class org.apache.hadoop.metrics2.lib.MetricsRegistry
-
Add sample to a stat metric by name.
- add(long) - Method in class org.apache.hadoop.metrics2.lib.MutableQuantiles
-
- add(String, long) - Method in class org.apache.hadoop.metrics2.lib.MutableRates
-
Add a rate sample for a rate metric
- add(long, long) - Method in class org.apache.hadoop.metrics2.lib.MutableStat
-
Add a number of samples and their sum to the running stat
- add(long) - Method in class org.apache.hadoop.metrics2.lib.MutableStat
-
Add a snapshot to the metric
- add(MetricsTag) - Method in class org.apache.hadoop.metrics2.MetricsRecordBuilder
-
Add an immutable metrics tag object
- add(AbstractMetric) - Method in class org.apache.hadoop.metrics2.MetricsRecordBuilder
-
Add a pre-made immutable metric object
- add(Key) - Method in class org.apache.hadoop.util.bloom.BloomFilter
-
- add(Key) - Method in class org.apache.hadoop.util.bloom.CountingBloomFilter
-
- add(Key) - Method in class org.apache.hadoop.util.bloom.DynamicBloomFilter
-
- add(Key) - Method in class org.apache.hadoop.util.bloom.RetouchedBloomFilter
-
- add_escapes(String) - Method in exception org.apache.hadoop.record.compiler.generated.ParseException
-
Deprecated.
Used to convert raw characters to their escaped version
when these raw version cannot be used as part of an ASCII
string literal.
- addArchiveToClassPath(Path) - Method in class org.apache.hadoop.mapreduce.Job
-
Add an archive path to the current set of classpath entries.
- addCacheArchive(URI) - Method in class org.apache.hadoop.mapreduce.Job
-
Add an archive to be localized
- addCacheFile(URI) - Method in class org.apache.hadoop.mapreduce.Job
-
Add a file to be localized
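A short driver-side sketch combining the addCache*/add*ToClassPath calls; all paths are hypothetical.

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;

public class CacheSetup {
  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "cache-demo");
    // Localize a lookup file and an archive on every node running the job.
    job.addCacheFile(new URI("/apps/lookup/cities.txt#cities"));   // example paths
    job.addCacheArchive(new URI("/apps/lookup/geo.zip#geo"));
    // Also put helper jars/archives on the task classpath.
    job.addFileToClassPath(new Path("/apps/lib/helpers.jar"));
    job.addArchiveToClassPath(new Path("/apps/lib/more-helpers.zip"));
  }
}
```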
- addConfigurationPair(String, String) - Method in class org.apache.hadoop.tracing.SpanReceiverInfoBuilder
-
- addContainerRequest(T) - Method in class org.apache.hadoop.yarn.client.api.AMRMClient
-
Request containers for resources before calling allocate
- addContainerRequest(T) - Method in class org.apache.hadoop.yarn.client.api.async.AMRMClientAsync
-
Request containers for resources before calling allocate
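A condensed sketch of the synchronous AMRMClient flow described here; it assumes a real ApplicationMaster would first register with the ResourceManager, and the container size and priority are arbitrary.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.client.api.AMRMClient;
import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;

public class RequestContainers {
  public static void main(String[] args) throws Exception {
    AMRMClient<ContainerRequest> rmClient = AMRMClient.createAMRMClient();
    rmClient.init(new Configuration());
    rmClient.start();

    // (A real ApplicationMaster registers with the RM before its first allocate call.)
    // Ask for one 1 GB / 1 vcore container anywhere in the cluster.
    Resource capability = Resource.newInstance(1024, 1);
    ContainerRequest request =
        new ContainerRequest(capability, null, null, Priority.newInstance(0));
    rmClient.addContainerRequest(request);

    // The request is only sent on the next allocate() heartbeat.
    AllocateResponse response = rmClient.allocate(0.0f);
    System.out.println("allocated: " + response.getAllocatedContainers().size());
  }
}
```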
- addCounter(Counters.Counter) - Method in class org.apache.hadoop.mapred.Counters.Group
-
- addCounter(String, String, long) - Method in class org.apache.hadoop.mapred.Counters.Group
-
- addCounter(T) - Method in interface org.apache.hadoop.mapreduce.counters.CounterGroupBase
-
Add a counter to this group.
- addCounter(String, String, long) - Method in interface org.apache.hadoop.mapreduce.counters.CounterGroupBase
-
Add a counter to this group
- addCounter(MetricsInfo, int) - Method in class org.apache.hadoop.metrics2.MetricsRecordBuilder
-
Add an integer metric
- addCounter(MetricsInfo, long) - Method in class org.apache.hadoop.metrics2.MetricsRecordBuilder
-
Add a long metric
- addDefaultResource(String) - Static method in class org.apache.hadoop.conf.Configuration
-
Add a default resource.
- addDefaults() - Method in class org.apache.hadoop.mapred.join.CompositeInputFormat
-
Adds the default set of identifiers to the parser.
- addDefaults() - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeInputFormat
-
Adds the default set of identifiers to the parser.
- addDelegationTokens(String, Credentials) - Method in class org.apache.hadoop.fs.FileSystem
-
Obtain all delegation tokens used by this FileSystem that are not
already present in the given Credentials.
- addDependingJob(Job) - Method in class org.apache.hadoop.mapred.jobcontrol.Job
-
Add a job to this jobs' dependency list.
- addDependingJob(ControlledJob) - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
Add a job to this jobs' dependency list.
- addDeprecation(String, String[], String) - Static method in class org.apache.hadoop.conf.Configuration
-
- addDeprecation(String, String, String) - Static method in class org.apache.hadoop.conf.Configuration
-
Adds the deprecated key to the global deprecation map.
- addDeprecation(String, String[]) - Static method in class org.apache.hadoop.conf.Configuration
-
- addDeprecation(String, String) - Static method in class org.apache.hadoop.conf.Configuration
-
Adds the deprecated key to the global deprecation map when no custom
message is provided.
- addDeprecations(Configuration.DeprecationDelta[]) - Static method in class org.apache.hadoop.conf.Configuration
-
Adds a set of deprecated keys to the global deprecations.
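A small example of wiring a deprecated key to its replacement; the myapp.* property names are invented.

```java
import org.apache.hadoop.conf.Configuration;

public class DeprecationDemo {
  public static void main(String[] args) {
    // Map an old property name onto its replacement before configs are loaded.
    Configuration.addDeprecation("myapp.old.buffer.size", "myapp.buffer.size");

    Configuration conf = new Configuration();
    conf.set("myapp.old.buffer.size", "4096");   // old key set by a legacy config
    // Reads through the deprecation map and returns "4096".
    System.out.println(conf.get("myapp.buffer.size"));
  }
}
```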
- addDomain(TimelineDomain) - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineDomains
-
Add a single domain into the existing domain list
- addDomains(List<TimelineDomain>) - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineDomains
-
Add a list of domains to the existing domain list
- addEntities(List<TimelineEntity>) - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEntities
-
Add a list of entities to the existing entity list
- addEntity(TimelineEntity) - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEntities
-
Add a single entity into the existing entity list
- addError(TimelinePutResponse.TimelinePutError) - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelinePutResponse
-
- addErrors(List<TimelinePutResponse.TimelinePutError>) - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelinePutResponse
-
- addEscapes(String) - Static method in error org.apache.hadoop.record.compiler.generated.TokenMgrError
-
Deprecated.
Replaces unprintable characters by their escaped (or unicode escaped)
equivalents in the given string
- addEvent(TimelineEvent) - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEntity
-
Add a single event related to the entity to the existing event list
- addEvent(TimelineEvents.EventsOfOneEntity) - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEvents
-
- addEvent(TimelineEvent) - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEvents.EventsOfOneEntity
-
Add a single event to the existing event list
- addEventInfo(String, Object) - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEvent
-
Add one piece of the information of the event to the existing information
map
- addEventInfo(Map<String, Object>) - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEvent
-
Add a map of the information of the event to the existing information map
- addEvents(List<TimelineEvent>) - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEntity
-
Add a list of events related to the entity to the existing event list
- addEvents(List<TimelineEvents.EventsOfOneEntity>) - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEvents
-
- addEvents(List<TimelineEvent>) - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEvents.EventsOfOneEntity
-
Add a list of events to the existing event list
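A short sketch of assembling timeline data with these add* methods before handing it to a TimelineClient; the entity type, id, and info values are invented.

```java
import org.apache.hadoop.yarn.api.records.timeline.TimelineEntities;
import org.apache.hadoop.yarn.api.records.timeline.TimelineEntity;
import org.apache.hadoop.yarn.api.records.timeline.TimelineEvent;

public class TimelineDemo {
  public static void main(String[] args) {
    TimelineEntity entity = new TimelineEntity();
    entity.setEntityType("MY_APP_ATTEMPT");       // example type/id
    entity.setEntityId("attempt_001");
    entity.addPrimaryFilter("user", "alice");     // queryable filter

    TimelineEvent started = new TimelineEvent();
    started.setEventType("STARTED");
    started.setTimestamp(System.currentTimeMillis());
    started.addEventInfo("host", "node-17");
    entity.addEvent(started);

    TimelineEntities batch = new TimelineEntities();
    batch.addEntity(entity);
    // The batch would normally be handed to a TimelineClient for publishing.
  }
}
```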
- addExternalEndpoint(Endpoint) - Method in class org.apache.hadoop.registry.client.types.ServiceRecord
-
Add an external endpoint
- addFalsePositive(Key) - Method in class org.apache.hadoop.util.bloom.RetouchedBloomFilter
-
Adds a false positive information to this retouched Bloom filter.
- addFalsePositive(Collection<Key>) - Method in class org.apache.hadoop.util.bloom.RetouchedBloomFilter
-
Adds a collection of false positive information to this retouched Bloom filter.
- addFalsePositive(List<Key>) - Method in class org.apache.hadoop.util.bloom.RetouchedBloomFilter
-
Adds a list of false positive information to this retouched Bloom filter.
- addFalsePositive(Key[]) - Method in class org.apache.hadoop.util.bloom.RetouchedBloomFilter
-
Adds an array of false positive information to this retouched Bloom filter.
- addFencingParameters(Map<String, String>) - Method in class org.apache.hadoop.ha.HAServiceTarget
-
Hook to allow subclasses to add any parameters they would like to
expose to fencing implementations/scripts.
- addField(String, TypeID) - Method in class org.apache.hadoop.record.meta.RecordTypeInfo
-
Deprecated.
Add a field.
- addFileset(FileSet) - Method in class org.apache.hadoop.record.compiler.ant.RccTask
-
Deprecated.
Adds a fileset that can consist of one or more files
- addFileToClassPath(Path) - Method in class org.apache.hadoop.mapreduce.Job
-
Add a file path to the current set of classpath entries. It adds the file
to the cache as well.
- addGauge(MetricsInfo, int) - Method in class org.apache.hadoop.metrics2.MetricsRecordBuilder
-
Add an integer gauge metric
- addGauge(MetricsInfo, long) - Method in class org.apache.hadoop.metrics2.MetricsRecordBuilder
-
Add a long gauge metric
- addGauge(MetricsInfo, float) - Method in class org.apache.hadoop.metrics2.MetricsRecordBuilder
-
Add a float gauge metric
- addGauge(MetricsInfo, double) - Method in class org.apache.hadoop.metrics2.MetricsRecordBuilder
-
Add a double gauge metric
- addGroup(G) - Method in class org.apache.hadoop.mapreduce.counters.AbstractCounters
-
Add a group.
- addGroup(String, String) - Method in class org.apache.hadoop.mapreduce.counters.AbstractCounters
-
Add a new group
- addGroup(String) - Method in class org.apache.hadoop.security.authorize.AccessControlList
-
Add group to the names of groups allowed for this service.
- addIdentifier(String, Class<?>[], Class<? extends Parser.Node>, Class<? extends ComposableRecordReader>) - Static method in class org.apache.hadoop.mapred.join.Parser.Node
-
For a given identifier, add a mapping to the nodetype for the parse
tree and to the ComposableRecordReader to be created, including the
formals required to invoke the constructor.
- addIdentifier(String, Class<?>[], Class<? extends Parser.Node>, Class<? extends ComposableRecordReader>) - Static method in class org.apache.hadoop.mapreduce.lib.join.Parser.Node
-
For a given identifier, add a mapping to the nodetype for the parse
tree and to the ComposableRecordReader to be created, including the
formals required to invoke the constructor.
- addIfService(Object) - Method in class org.apache.hadoop.service.CompositeService
-
- AddingCompositeService - Class in org.apache.hadoop.registry.server.services
-
Composite service that exports the add/remove methods.
- AddingCompositeService(String) - Constructor for class org.apache.hadoop.registry.server.services.AddingCompositeService
-
- addInputPath(JobConf, Path) - Static method in class org.apache.hadoop.mapred.FileInputFormat
-
Add a Path to the list of inputs for the map-reduce job.
- addInputPath(JobConf, Path, Class<? extends InputFormat>) - Static method in class org.apache.hadoop.mapred.lib.MultipleInputs
-
Add a Path with a custom InputFormat to the list of inputs for the map-reduce job.
- addInputPath(JobConf, Path, Class<? extends InputFormat>, Class<? extends Mapper>) - Static method in class org.apache.hadoop.mapred.lib.MultipleInputs
-
- addInputPath(Job, Path) - Static method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-
Add a Path to the list of inputs for the map-reduce job.
- addInputPath(Job, Path, Class<? extends InputFormat>) - Static method in class org.apache.hadoop.mapreduce.lib.input.MultipleInputs
-
Add a Path with a custom InputFormat to the list of inputs for the map-reduce job.
- addInputPath(Job, Path, Class<? extends InputFormat>, Class<? extends Mapper>) - Static method in class org.apache.hadoop.mapreduce.lib.input.MultipleInputs
-
- addInputPathRecursively(List<FileStatus>, FileSystem, Path, PathFilter) - Method in class org.apache.hadoop.mapred.FileInputFormat
-
Add files in the input path recursively into the results.
- addInputPathRecursively(List<FileStatus>, FileSystem, Path, PathFilter) - Method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-
Add files in the input path recursively into the results.
- addInputPaths(JobConf, String) - Static method in class org.apache.hadoop.mapred.FileInputFormat
-
Add the given comma separated paths to the list of inputs for
the map-reduce job.
- addInputPaths(Job, String) - Static method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-
Add the given comma separated paths to the list of inputs for
the map-reduce job.
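A driver-side sketch of the three ways to register inputs listed above; the paths and the LogMapper class are placeholders.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.MultipleInputs;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

public class InputSetup {
  // Trivial identity mapper, just to have a concrete class to register.
  public static class LogMapper extends Mapper<LongWritable, Text, Text, Text> { }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "input-demo");
    // Single directory (paths below are examples only).
    FileInputFormat.addInputPath(job, new Path("/data/2024/01"));
    // Several comma-separated paths at once.
    FileInputFormat.addInputPaths(job, "/data/2024/02,/data/2024/03");
    // A second source with its own InputFormat and Mapper.
    MultipleInputs.addInputPath(job, new Path("/logs/raw"),
        TextInputFormat.class, LogMapper.class);
  }
}
```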
- addInternalEndpoint(Endpoint) - Method in class org.apache.hadoop.registry.client.types.ServiceRecord
-
Add an internal endpoint
- addJob(ControlledJob) - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl
-
Add a new controlled job.
- addJob(Job) - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl
-
Add a new job.
- addJobCollection(Collection<ControlledJob>) - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl
-
Add a collection of jobs
- addJobs(Collection<Job>) - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
-
Add a collection of jobs
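A sketch of wiring dependent jobs into a JobControl and polling allFinished(); the job names are placeholders and the jobs are left unconfigured.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob;
import org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl;

public class PipelineDriver {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    ControlledJob extract = new ControlledJob(Job.getInstance(conf, "extract"), null);
    ControlledJob aggregate = new ControlledJob(Job.getInstance(conf, "aggregate"), null);
    aggregate.addDependingJob(extract);       // aggregate runs only after extract

    JobControl control = new JobControl("pipeline");   // example group name
    control.addJob(extract);
    control.addJob(aggregate);

    // JobControl is a Runnable; run it in its own thread and poll for completion.
    Thread runner = new Thread(control);
    runner.start();
    while (!control.allFinished()) {
      Thread.sleep(1000);
    }
    control.stop();
  }
}
```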
- addLocalArchives(Configuration, String) - Static method in class org.apache.hadoop.filecache.DistributedCache
-
Deprecated.
- addLocalFiles(Configuration, String) - Static method in class org.apache.hadoop.filecache.DistributedCache
-
Deprecated.
- addMapper(JobConf, Class<? extends Mapper<K1, V1, K2, V2>>, Class<? extends K1>, Class<? extends V1>, Class<? extends K2>, Class<? extends V2>, boolean, JobConf) - Static method in class org.apache.hadoop.mapred.lib.ChainMapper
-
Adds a Mapper class to the chain job's JobConf.
- addMapper(JobConf, Class<? extends Mapper<K1, V1, K2, V2>>, Class<? extends K1>, Class<? extends V1>, Class<? extends K2>, Class<? extends V2>, boolean, JobConf) - Static method in class org.apache.hadoop.mapred.lib.ChainReducer
-
Adds a Mapper class to the chain job's JobConf.
- addMapper(Job, Class<? extends Mapper>, Class<?>, Class<?>, Class<?>, Class<?>, Configuration) - Static method in class org.apache.hadoop.mapreduce.lib.chain.ChainMapper
-
Adds a Mapper class to the chain mapper.
- addMapper(Job, Class<? extends Mapper>, Class<?>, Class<?>, Class<?>, Class<?>, Configuration) - Static method in class org.apache.hadoop.mapreduce.lib.chain.ChainReducer
-
Adds a Mapper class to the chain reducer.
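A driver-side sketch of composing a chain as described above; the mapper and reducer classes are empty placeholders, and ChainReducer.setReducer (not listed in this index section) is assumed to set the single Reducer of the chain.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.chain.ChainMapper;
import org.apache.hadoop.mapreduce.lib.chain.ChainReducer;

public class ChainSetup {
  // Placeholder stages; real jobs would implement map()/reduce().
  public static class TokenizeMapper extends Mapper<LongWritable, Text, Text, Text> { }
  public static class LowercaseMapper extends Mapper<Text, Text, Text, Text> { }
  public static class CountReducer extends Reducer<Text, Text, Text, LongWritable> { }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "chain-demo");

    // Map phase: TokenizeMapper feeds LowercaseMapper inside a single map task.
    ChainMapper.addMapper(job, TokenizeMapper.class,
        LongWritable.class, Text.class, Text.class, Text.class, new Configuration(false));
    ChainMapper.addMapper(job, LowercaseMapper.class,
        Text.class, Text.class, Text.class, Text.class, new Configuration(false));

    // Reduce phase: further mappers could follow the reducer via ChainReducer.addMapper(...).
    ChainReducer.setReducer(job, CountReducer.class,
        Text.class, Text.class, Text.class, LongWritable.class, new Configuration(false));
  }
}
```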
- addMultiNamedOutput(JobConf, String, Class<? extends OutputFormat>, Class<?>, Class<?>) - Static method in class org.apache.hadoop.mapred.lib.MultipleOutputs
-
Adds a multi named output for the job.
- addNamedOutput(JobConf, String, Class<? extends OutputFormat>, Class<?>, Class<?>) - Static method in class org.apache.hadoop.mapred.lib.MultipleOutputs
-
Adds a named output for the job.
- addNamedOutput(Job, String, Class<? extends OutputFormat>, Class<?>, Class<?>) - Static method in class org.apache.hadoop.mapreduce.lib.output.MultipleOutputs
-
Adds a named output for the job.
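A driver-side sketch of declaring named outputs; the output names and types are arbitrary.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class NamedOutputSetup {
  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "named-outputs");
    // Two extra outputs besides the job's default one; names are examples.
    MultipleOutputs.addNamedOutput(job, "summary",
        TextOutputFormat.class, Text.class, LongWritable.class);
    MultipleOutputs.addNamedOutput(job, "rejected",
        SequenceFileOutputFormat.class, Text.class, Text.class);
  }
}
```

Inside a mapper or reducer, a MultipleOutputs instance created from the task context then writes to these names, e.g. mos.write("summary", key, value).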
- addNextValue(Object) - Method in class org.apache.hadoop.mapreduce.lib.aggregate.DoubleValueSum
-
add a value to the aggregator
- addNextValue(double) - Method in class org.apache.hadoop.mapreduce.lib.aggregate.DoubleValueSum
-
add a value to the aggregator
- addNextValue(Object) - Method in class org.apache.hadoop.mapreduce.lib.aggregate.LongValueMax
-
add a value to the aggregator
- addNextValue(long) - Method in class org.apache.hadoop.mapreduce.lib.aggregate.LongValueMax
-
add a value to the aggregator
- addNextValue(Object) - Method in class org.apache.hadoop.mapreduce.lib.aggregate.LongValueMin
-
add a value to the aggregator
- addNextValue(long) - Method in class org.apache.hadoop.mapreduce.lib.aggregate.LongValueMin
-
add a value to the aggregator
- addNextValue(Object) - Method in class org.apache.hadoop.mapreduce.lib.aggregate.LongValueSum
-
add a value to the aggregator
- addNextValue(long) - Method in class org.apache.hadoop.mapreduce.lib.aggregate.LongValueSum
-
add a value to the aggregator
- addNextValue(Object) - Method in class org.apache.hadoop.mapreduce.lib.aggregate.StringValueMax
-
add a value to the aggregator
- addNextValue(Object) - Method in class org.apache.hadoop.mapreduce.lib.aggregate.StringValueMin
-
add a value to the aggregator
- addNextValue(Object) - Method in class org.apache.hadoop.mapreduce.lib.aggregate.UniqValueCount
-
add a value to the aggregator
- addNextValue(Object) - Method in interface org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregator
-
add a value to the aggregator
- addNextValue(Object) - Method in class org.apache.hadoop.mapreduce.lib.aggregate.ValueHistogram
-
add the given val to the aggregator.
- addOtherInfo(String, Object) - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEntity
-
Add one piece of other information of the entity to the existing other info
map
- addOtherInfo(Map<String, Object>) - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEntity
-
Add a map of other information of the entity to the existing other info map
- addPrimaryFilter(String, Object) - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEntity
-
Add a single primary filter to the existing primary filter map
- addPrimaryFilters(Map<String, Set<Object>>) - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEntity
-
Add a map of primary filters to the existing primary filter map
- addRecord(String) - Method in interface org.apache.hadoop.metrics2.MetricsCollector
-
Add a metrics record
- addRecord(MetricsInfo) - Method in interface org.apache.hadoop.metrics2.MetricsCollector
-
Add a metrics record
- addRelatedEntities(Map<String, Set<String>>) - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEntity
-
Add a map of related entities to the existing related entity map
- addRelatedEntity(String, String) - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEntity
-
Add an entity to the existing related entity map
- addResource(String) - Method in class org.apache.hadoop.conf.Configuration
-
Add a configuration resource.
- addResource(URL) - Method in class org.apache.hadoop.conf.Configuration
-
Add a configuration resource.
- addResource(Path) - Method in class org.apache.hadoop.conf.Configuration
-
Add a configuration resource.
- addResource(InputStream) - Method in class org.apache.hadoop.conf.Configuration
-
Add a configuration resource.
- addResource(InputStream, String) - Method in class org.apache.hadoop.conf.Configuration
-
Add a configuration resource.
- addResource(Configuration) - Method in class org.apache.hadoop.conf.Configuration
-
Add a configuration resource.
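A short example of layering configuration resources with the overloads listed above; the resource names and the my.key property are invented.

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class ResourceDemo {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Resources are applied in order; later ones override earlier ones.
    conf.addResource("my-site.xml");                        // looked up on the classpath
    conf.addResource(new Path("/etc/myapp/override.xml"));  // explicit file path (example)
    InputStream inline = new ByteArrayInputStream(
        "<configuration><property><name>my.key</name><value>1</value></property></configuration>"
            .getBytes(StandardCharsets.UTF_8));
    conf.addResource(inline, "inline-config");              // stream plus a name for error messages
    System.out.println(conf.get("my.key"));
  }
}
```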
- ADDRESS_HOSTNAME_AND_PORT - Static variable in interface org.apache.hadoop.registry.client.types.AddressTypes
-
hostname/FQDN and port pair: "host/port".
- ADDRESS_HOSTNAME_FIELD - Static variable in interface org.apache.hadoop.registry.client.types.AddressTypes
-
- ADDRESS_OTHER - Static variable in interface org.apache.hadoop.registry.client.types.AddressTypes
-
Any other address: "".
- ADDRESS_PATH - Static variable in interface org.apache.hadoop.registry.client.types.AddressTypes
-
Path /a/b/c style: "path".
- ADDRESS_PORT_FIELD - Static variable in interface org.apache.hadoop.registry.client.types.AddressTypes
-
- ADDRESS_URI - Static variable in interface org.apache.hadoop.registry.client.types.AddressTypes
-
URI entries: "uri".
- ADDRESS_ZOOKEEPER - Static variable in interface org.apache.hadoop.registry.client.types.AddressTypes
-
Zookeeper addresses as a triple : "zktriple".
- addresses - Variable in class org.apache.hadoop.registry.client.types.Endpoint
-
a list of address tuples, whose format depends on the address type
- addressType - Variable in class org.apache.hadoop.registry.client.types.Endpoint
-
Type of address.
- AddressTypes - Interface in org.apache.hadoop.registry.client.types
-
Enum of address types, as integers.
- addService(Service) - Method in class org.apache.hadoop.registry.server.services.AddingCompositeService
-
- addService(Service) - Method in class org.apache.hadoop.service.CompositeService
-
- addSpanReceiver(SpanReceiverInfo) - Method in interface org.apache.hadoop.tracing.TraceAdminProtocol
-
Add a new trace span receiver.
- addToMap(Class<?>) - Method in class org.apache.hadoop.io.AbstractMapWritable
-
Add a Class to the maps if it is not already present.
- addTransition(STATE, STATE, EVENTTYPE) - Method in class org.apache.hadoop.yarn.state.StateMachineFactory
-
- addTransition(STATE, STATE, Set<EVENTTYPE>) - Method in class org.apache.hadoop.yarn.state.StateMachineFactory
-
- addTransition(STATE, STATE, Set<EVENTTYPE>, SingleArcTransition<OPERAND, EVENT>) - Method in class org.apache.hadoop.yarn.state.StateMachineFactory
-
- addTransition(STATE, STATE, EVENTTYPE, SingleArcTransition<OPERAND, EVENT>) - Method in class org.apache.hadoop.yarn.state.StateMachineFactory
-
- addTransition(STATE, Set<STATE>, EVENTTYPE, MultipleArcTransition<OPERAND, EVENT, STATE>) - Method in class org.apache.hadoop.yarn.state.StateMachineFactory
-
- addUser(String) - Method in class org.apache.hadoop.security.authorize.AccessControlList
-
Add user to the names of users allowed for this service.
- addWriteAccessor(String, String) - Method in interface org.apache.hadoop.registry.client.api.RegistryOperations
-
Add a new write access entry to be added to node permissions in all
future write operations of a session connected to a secure registry.
- adjustBeginLineColumn(int, int) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
Deprecated.
Method to adjust line and column numbers for the start of a token.
- AdminSecurityInfo - Class in org.apache.hadoop.yarn.security.admin
-
- AdminSecurityInfo() - Constructor for class org.apache.hadoop.yarn.security.admin.AdminSecurityInfo
-
- AggregatedLogFormat - Class in org.apache.hadoop.yarn.logaggregation
-
- AggregatedLogFormat() - Constructor for class org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat
-
- AggregatedLogFormat.LogKey - Class in org.apache.hadoop.yarn.logaggregation
-
- AggregatedLogFormat.LogKey() - Constructor for class org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat.LogKey
-
- AggregatedLogFormat.LogKey(ContainerId) - Constructor for class org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat.LogKey
-
- AggregatedLogFormat.LogKey(String) - Constructor for class org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat.LogKey
-
- AggregatedLogFormat.LogReader - Class in org.apache.hadoop.yarn.logaggregation
-
- AggregatedLogFormat.LogReader(Configuration, Path) - Constructor for class org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat.LogReader
-
- aggregatorDescriptorList - Variable in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJobBase
-
- aggregatorDescriptorList - Static variable in class org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregatorJobBase
-
- AHSClient - Class in org.apache.hadoop.yarn.client.api
-
- AHSClient(String) - Constructor for class org.apache.hadoop.yarn.client.api.AHSClient
-
- AHSProxy<T> - Class in org.apache.hadoop.yarn.client
-
- AHSProxy() - Constructor for class org.apache.hadoop.yarn.client.AHSProxy
-
- allFinished() - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl
-
- allocate(AllocateRequest) - Method in interface org.apache.hadoop.yarn.api.ApplicationMasterProtocol
-
The main interface between an ApplicationMaster and the ResourceManager.
- allocate(float) - Method in class org.apache.hadoop.yarn.client.api.AMRMClient
-
Request additional containers and receive new container allocations.
- AllocateRequest - Class in org.apache.hadoop.yarn.api.protocolrecords
-
The core request sent by the ApplicationMaster to the ResourceManager to obtain resources in the cluster.
- AllocateRequest() - Constructor for class org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest
-
- AllocateResponse - Class in org.apache.hadoop.yarn.api.protocolrecords
-
The response sent by the ResourceManager to the ApplicationMaster during resource negotiation.
- AllocateResponse() - Constructor for class org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse
-
- AMCommand - Enum in org.apache.hadoop.yarn.api.records
-
Command sent by the Resource Manager to the Application Master in the AllocateResponse.
- AMRMClient<T extends org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest> - Class in org.apache.hadoop.yarn.client.api
-
- AMRMClient(String) - Constructor for class org.apache.hadoop.yarn.client.api.AMRMClient
-
- AMRMClientAsync<T extends org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest> - Class in org.apache.hadoop.yarn.client.api.async
-
AMRMClientAsync handles communication with the ResourceManager and provides asynchronous updates on events such as container allocations and completions.
- AMRMClientAsync(int, AMRMClientAsync.CallbackHandler) - Constructor for class org.apache.hadoop.yarn.client.api.async.AMRMClientAsync
-
- AMRMClientAsync(AMRMClient<T>, int, AMRMClientAsync.CallbackHandler) - Constructor for class org.apache.hadoop.yarn.client.api.async.AMRMClientAsync
-
- AMRMTokenIdentifier - Class in org.apache.hadoop.yarn.security
-
AMRMTokenIdentifier is the TokenIdentifier to be used by
ApplicationMasters to authenticate to the ResourceManager.
- AMRMTokenIdentifier() - Constructor for class org.apache.hadoop.yarn.security.AMRMTokenIdentifier
-
- AMRMTokenIdentifier(ApplicationAttemptId, int) - Constructor for class org.apache.hadoop.yarn.security.AMRMTokenIdentifier
-
- AMRMTokenSelector - Class in org.apache.hadoop.yarn.security
-
- AMRMTokenSelector() - Constructor for class org.apache.hadoop.yarn.security.AMRMTokenSelector
-
- and(FsAction) - Method in enum org.apache.hadoop.fs.permission.FsAction
-
AND operation.
- and(Filter) - Method in class org.apache.hadoop.util.bloom.BloomFilter
-
- and(Filter) - Method in class org.apache.hadoop.util.bloom.CountingBloomFilter
-
- and(Filter) - Method in class org.apache.hadoop.util.bloom.DynamicBloomFilter
-
- ANY - Static variable in class org.apache.hadoop.yarn.api.records.ResourceRequest
-
The constant string representing no locality.
- api - Variable in class org.apache.hadoop.registry.client.types.Endpoint
-
API implemented at the end of the binding
- APP_SUBMIT_TIME_ENV - Static variable in interface org.apache.hadoop.yarn.api.ApplicationConstants
-
The environment variable for APP_SUBMIT_TIME.
- appAttemptID - Variable in class org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster
-
- appAttemptIdStrPrefix - Static variable in class org.apache.hadoop.yarn.api.records.ApplicationAttemptId
-
- append(Path, int, Progressable) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
-
- append(Path) - Method in class org.apache.hadoop.fs.FileSystem
-
Append to an existing file (optional operation).
- append(Path, int) - Method in class org.apache.hadoop.fs.FileSystem
-
Append to an existing file (optional operation).
- append(Path, int, Progressable) - Method in class org.apache.hadoop.fs.FileSystem
-
Append to an existing file (optional operation).
- append(Path, int, Progressable) - Method in class org.apache.hadoop.fs.FilterFileSystem
-
- append(Path, int, Progressable) - Method in class org.apache.hadoop.fs.ftp.FTPFileSystem
-
This optional operation is not yet supported.
- append(Path, int, Progressable) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
-
- append(Path, int, Progressable) - Method in class org.apache.hadoop.fs.s3.S3FileSystem
-
This optional operation is not yet supported.
- append(Path, int, Progressable) - Method in class org.apache.hadoop.fs.s3native.NativeS3FileSystem
-
This optional operation is not yet supported.
- append(Path, int, Progressable) - Method in class org.apache.hadoop.fs.viewfs.ViewFileSystem
-
- append(byte[], int, int) - Method in class org.apache.hadoop.io.Text
-
Append a range of bytes to the end of the given text
- append(LoggingEvent) - Method in class org.apache.hadoop.log.metrics.EventCounter
-
- append(byte[], int, int) - Method in class org.apache.hadoop.record.Buffer
-
Deprecated.
Append specified bytes to the buffer.
- append(byte[]) - Method in class org.apache.hadoop.record.Buffer
-
Deprecated.
Append specified bytes to the buffer
- append(LoggingEvent) - Method in class org.apache.hadoop.yarn.ContainerLogAppender
-
- appendTo(StringBuilder) - Method in class org.apache.hadoop.mapreduce.JobID
-
Add the stuff after the "job" prefix to the given builder.
- appendTo(StringBuilder) - Method in class org.apache.hadoop.mapreduce.TaskAttemptID
-
Add the unique string to the StringBuilder
- appendTo(StringBuilder) - Method in class org.apache.hadoop.mapreduce.TaskID
-
Add the unique string to the given builder.
- appIdStrPrefix - Static variable in class org.apache.hadoop.yarn.api.records.ApplicationId
-
- APPLICATION_HISTORY_ENABLED - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
The setting that controls whether application history service is
enabled or not.
- APPLICATION_HISTORY_MAX_APPS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
Defines the max number of applications that can be fetched using the
REST API or the application history protocol and shown in the timeline
server web UI.
- APPLICATION_HISTORY_PREFIX - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- APPLICATION_HISTORY_SAVE_NON_AM_CONTAINER_META_INFO - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
Save container meta-info in the application history store.
- APPLICATION_HISTORY_STORE - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
Application history store class
- APPLICATION_MAX_TAG_LENGTH - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- APPLICATION_MAX_TAGS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- APPLICATION_TYPE_LENGTH - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
Default application type length
- APPLICATION_WEB_PROXY_BASE_ENV - Static variable in interface org.apache.hadoop.yarn.api.ApplicationConstants
-
The environmental variable for APPLICATION_WEB_PROXY_BASE.
- ApplicationAccessType - Enum in org.apache.hadoop.yarn.api.records
-
Application access types.
- ApplicationAttemptId - Class in org.apache.hadoop.yarn.api.records
-
ApplicationAttemptId denotes the particular attempt of an ApplicationMaster for a given ApplicationId.
- ApplicationAttemptId() - Constructor for class org.apache.hadoop.yarn.api.records.ApplicationAttemptId
-
- ApplicationAttemptNotFoundException - Exception in org.apache.hadoop.yarn.exceptions
-
- ApplicationAttemptNotFoundException(Throwable) - Constructor for exception org.apache.hadoop.yarn.exceptions.ApplicationAttemptNotFoundException
-
- ApplicationAttemptNotFoundException(String) - Constructor for exception org.apache.hadoop.yarn.exceptions.ApplicationAttemptNotFoundException
-
- ApplicationAttemptNotFoundException(String, Throwable) - Constructor for exception org.apache.hadoop.yarn.exceptions.ApplicationAttemptNotFoundException
-
- ApplicationAttemptReport - Class in org.apache.hadoop.yarn.api.records
-
ApplicationAttemptReport is a report of an application attempt.
- ApplicationAttemptReport() - Constructor for class org.apache.hadoop.yarn.api.records.ApplicationAttemptReport
-
- ApplicationClassLoader - Class in org.apache.hadoop.util
-
- ApplicationClassLoader(URL[], ClassLoader, List<String>) - Constructor for class org.apache.hadoop.util.ApplicationClassLoader
-
- ApplicationClassLoader(String, ClassLoader, List<String>) - Constructor for class org.apache.hadoop.util.ApplicationClassLoader
-
- ApplicationClassLoader - Class in org.apache.hadoop.yarn.util
-
Deprecated.
- ApplicationClassLoader(URL[], ClassLoader, List<String>) - Constructor for class org.apache.hadoop.yarn.util.ApplicationClassLoader
-
Deprecated.
- ApplicationClassLoader(String, ClassLoader, List<String>) - Constructor for class org.apache.hadoop.yarn.util.ApplicationClassLoader
-
Deprecated.
- ApplicationClientProtocol - Interface in org.apache.hadoop.yarn.api
-
The protocol between clients and the ResourceManager to submit/abort jobs and to get information on applications, cluster metrics, nodes, queues and ACLs.
- ApplicationConstants - Interface in org.apache.hadoop.yarn.api
-
This is the API for the applications, comprising the constants that YARN sets
up for the applications and the containers.
- ApplicationHistoryProtocol - Interface in org.apache.hadoop.yarn.api
-
The protocol between clients and the ApplicationHistoryServer to get information about completed applications etc.
- ApplicationId - Class in org.apache.hadoop.yarn.api.records
-
ApplicationId represents the globally unique identifier for an application.
- ApplicationId() - Constructor for class org.apache.hadoop.yarn.api.records.ApplicationId
-
- ApplicationIdNotProvidedException - Exception in org.apache.hadoop.yarn.exceptions
-
- ApplicationIdNotProvidedException(Throwable) - Constructor for exception org.apache.hadoop.yarn.exceptions.ApplicationIdNotProvidedException
-
- ApplicationIdNotProvidedException(String) - Constructor for exception org.apache.hadoop.yarn.exceptions.ApplicationIdNotProvidedException
-
- ApplicationIdNotProvidedException(String, Throwable) - Constructor for exception org.apache.hadoop.yarn.exceptions.ApplicationIdNotProvidedException
-
- ApplicationMaster - Class in org.apache.hadoop.yarn.applications.distributedshell
-
An ApplicationMaster for executing shell commands on a set of launched
containers using the YARN framework.
- ApplicationMaster() - Constructor for class org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster
-
- ApplicationMasterProtocol - Interface in org.apache.hadoop.yarn.api
-
The protocol between a live instance of ApplicationMaster and the ResourceManager.
- ApplicationNotFoundException - Exception in org.apache.hadoop.yarn.exceptions
-
This exception is thrown by the GetApplicationReportRequest API when the application doesn't exist in the RM and AHS.
- ApplicationNotFoundException(Throwable) - Constructor for exception org.apache.hadoop.yarn.exceptions.ApplicationNotFoundException
-
- ApplicationNotFoundException(String) - Constructor for exception org.apache.hadoop.yarn.exceptions.ApplicationNotFoundException
-
- ApplicationNotFoundException(String, Throwable) - Constructor for exception org.apache.hadoop.yarn.exceptions.ApplicationNotFoundException
-
- ApplicationReport - Class in org.apache.hadoop.yarn.api.records
-
ApplicationReport is a report of an application.
- ApplicationReport() - Constructor for class org.apache.hadoop.yarn.api.records.ApplicationReport
-
- ApplicationResourceUsageReport - Class in org.apache.hadoop.yarn.api.records
-
Contains various scheduling metrics to be reported by UI and CLI.
- ApplicationResourceUsageReport() - Constructor for class org.apache.hadoop.yarn.api.records.ApplicationResourceUsageReport
-
- ApplicationsRequestScope - Enum in org.apache.hadoop.yarn.api.protocolrecords
-
Enumeration that controls the scope of applications fetched
- ApplicationSubmissionContext - Class in org.apache.hadoop.yarn.api.records
-
ApplicationSubmissionContext represents all of the information needed by the ResourceManager to launch the ApplicationMaster for an application.
- ApplicationSubmissionContext() - Constructor for class org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext
-
- applyUMask(FsPermission) - Method in class org.apache.hadoop.fs.permission.FsPermission
-
Apply a umask to this permission and return a new one.
- approximateCount(Key) - Method in class org.apache.hadoop.util.bloom.CountingBloomFilter
-
This method calculates an approximate count of the key, i.e.
- areSymlinksEnabled() - Static method in class org.apache.hadoop.fs.FileSystem
-
- ArrayFile - Class in org.apache.hadoop.io
-
A dense file-based mapping from integers to values.
- ArrayFile() - Constructor for class org.apache.hadoop.io.ArrayFile
-
- ArrayListBackedIterator<X extends Writable> - Class in org.apache.hadoop.mapred.join
-
This class provides an implementation of ResetableIterator.
- ArrayListBackedIterator() - Constructor for class org.apache.hadoop.mapred.join.ArrayListBackedIterator
-
- ArrayListBackedIterator(ArrayList<X>) - Constructor for class org.apache.hadoop.mapred.join.ArrayListBackedIterator
-
- ArrayListBackedIterator<X extends Writable> - Class in org.apache.hadoop.mapreduce.lib.join
-
This class provides an implementation of ResetableIterator.
- ArrayListBackedIterator() - Constructor for class org.apache.hadoop.mapreduce.lib.join.ArrayListBackedIterator
-
- ArrayListBackedIterator(ArrayList<X>) - Constructor for class org.apache.hadoop.mapreduce.lib.join.ArrayListBackedIterator
-
- ArrayPrimitiveWritable - Class in org.apache.hadoop.io
-
This is a wrapper class.
- ArrayPrimitiveWritable() - Constructor for class org.apache.hadoop.io.ArrayPrimitiveWritable
-
Construct an empty instance, for use during Writable read
- ArrayPrimitiveWritable(Class<?>) - Constructor for class org.apache.hadoop.io.ArrayPrimitiveWritable
-
Construct an instance of known type but no value yet
for use with type-specific wrapper classes
- ArrayPrimitiveWritable(Object) - Constructor for class org.apache.hadoop.io.ArrayPrimitiveWritable
-
Wrap an existing array of primitives
- ArrayWritable - Class in org.apache.hadoop.io
-
A Writable for arrays containing instances of a class.
- ArrayWritable(Class<? extends Writable>) - Constructor for class org.apache.hadoop.io.ArrayWritable
-
- ArrayWritable(Class<? extends Writable>, Writable[]) - Constructor for class org.apache.hadoop.io.ArrayWritable
-
- ArrayWritable(String[]) - Constructor for class org.apache.hadoop.io.ArrayWritable
-
- AsyncDispatcher - Class in org.apache.hadoop.yarn.event
-
Dispatches Events in a separate thread.
- AsyncDispatcher() - Constructor for class org.apache.hadoop.yarn.event.AsyncDispatcher
-
- AsyncDispatcher(BlockingQueue<Event>) - Constructor for class org.apache.hadoop.yarn.event.AsyncDispatcher
-
- ATTEMPT - Static variable in class org.apache.hadoop.mapreduce.TaskAttemptID
-
- attributes() - Method in class org.apache.hadoop.registry.client.types.ServiceRecord
-
The map of "other" attributes set when parsing.
- authenticate(URL, AuthenticatedURL.Token) - Method in class org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator
-
- authorize(UserGroupInformation, String) - Method in class org.apache.hadoop.security.authorize.DefaultImpersonationProvider
-
- authorize(UserGroupInformation, String) - Method in interface org.apache.hadoop.security.authorize.ImpersonationProvider
-
Authorize the superuser which is doing doAs
- AUTO_FAILOVER_EMBEDDED - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- AUTO_FAILOVER_ENABLED - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- AUTO_FAILOVER_PREFIX - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- AUTO_FAILOVER_ZK_BASE_PATH - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- available() - Method in class org.apache.hadoop.io.compress.DecompressorStream
-
- AVRO_REFLECT_PACKAGES - Static variable in class org.apache.hadoop.io.serializer.avro.AvroReflectSerialization
-
Key to configure packages that contain classes to be serialized and
deserialized using this class.
- AVRO_SCHEMA_KEY - Static variable in class org.apache.hadoop.io.serializer.avro.AvroSerialization
-
- AvroFSInput - Class in org.apache.hadoop.fs
-
- AvroFSInput(FSDataInputStream, long) - Constructor for class org.apache.hadoop.fs.AvroFSInput
-
- AvroFSInput(FileContext, Path) - Constructor for class org.apache.hadoop.fs.AvroFSInput
-
- AvroReflectSerializable - Interface in org.apache.hadoop.io.serializer.avro
-
Tag interface for Avro 'reflect' serializable classes.
- AvroReflectSerialization - Class in org.apache.hadoop.io.serializer.avro
-
Serialization for Avro Reflect classes.
- AvroReflectSerialization() - Constructor for class org.apache.hadoop.io.serializer.avro.AvroReflectSerialization
-
- AvroSerialization<T> - Class in org.apache.hadoop.io.serializer.avro
-
Base class for providing serialization to Avro types.
- AvroSerialization() - Constructor for class org.apache.hadoop.io.serializer.avro.AvroSerialization
-
- AvroSpecificSerialization - Class in org.apache.hadoop.io.serializer.avro
-
Serialization for Avro Specific classes.
- AvroSpecificSerialization() - Constructor for class org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization
-
- CACHE_ARCHIVES - Static variable in class org.apache.hadoop.filecache.DistributedCache
-
Deprecated.
- CACHE_ARCHIVES_SIZES - Static variable in class org.apache.hadoop.filecache.DistributedCache
-
Deprecated.
- CACHE_ARCHIVES_TIMESTAMPS - Static variable in class org.apache.hadoop.filecache.DistributedCache
-
Deprecated.
- CACHE_FILES - Static variable in class org.apache.hadoop.filecache.DistributedCache
-
Deprecated.
- CACHE_FILES_SIZES - Static variable in class org.apache.hadoop.filecache.DistributedCache
-
Deprecated.
- CACHE_FILES_TIMESTAMPS - Static variable in class org.apache.hadoop.filecache.DistributedCache
-
Deprecated.
- CACHE_LOCALARCHIVES - Static variable in class org.apache.hadoop.filecache.DistributedCache
-
Deprecated.
- CACHE_LOCALFILES - Static variable in class org.apache.hadoop.filecache.DistributedCache
-
Deprecated.
- CACHE_SYMLINK - Static variable in class org.apache.hadoop.filecache.DistributedCache
-
Deprecated.
- CachedDNSToSwitchMapping - Class in org.apache.hadoop.net
-
A cached implementation of DNSToSwitchMapping that takes a
raw DNSToSwitchMapping and stores the resolved network location in
a cache.
- CachedDNSToSwitchMapping(DNSToSwitchMapping) - Constructor for class org.apache.hadoop.net.CachedDNSToSwitchMapping
-
cache a raw DNS mapping
- CacheFlag - Enum in org.apache.hadoop.fs
-
Specifies semantics for CacheDirective operations.
- cacheGroupsAdd(List<String>) - Method in interface org.apache.hadoop.security.GroupMappingServiceProvider
-
Caches the group user information
- cacheGroupsRefresh() - Method in interface org.apache.hadoop.security.GroupMappingServiceProvider
-
Refresh the cache of groups and user mapping
- callbackHandler - Variable in class org.apache.hadoop.yarn.client.api.async.NMClientAsync
-
- cancelDelegationToken(Token<DelegationTokenIdentifier>) - Method in class org.apache.hadoop.mapred.JobClient
-
Deprecated.
Use Token.cancel(org.apache.hadoop.conf.Configuration) instead.
- cancelDelegationToken(Token<DelegationTokenIdentifier>) - Method in class org.apache.hadoop.mapreduce.Cluster
-
Deprecated.
Use Token.cancel(org.apache.hadoop.conf.Configuration) instead.
- cancelDelegationToken(URL, DelegationTokenAuthenticatedURL.Token) - Method in class org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL
-
Cancels a delegation token from the server end-point.
- cancelDelegationToken(URL, DelegationTokenAuthenticatedURL.Token, String) - Method in class org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL
-
Cancels a delegation token from the server end-point.
- cancelDelegationToken(URL, AuthenticatedURL.Token, Token<AbstractDelegationTokenIdentifier>) - Method in class org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator
-
Cancels a delegation token from the server end-point.
- cancelDelegationToken(URL, AuthenticatedURL.Token, Token<AbstractDelegationTokenIdentifier>, String) - Method in class org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator
-
Cancels a delegation token from the server end-point.
- cancelDelegationToken(Token<TimelineDelegationTokenIdentifier>) - Method in class org.apache.hadoop.yarn.client.api.TimelineClient
-
Cancel a timeline delegation token.
- CancelDelegationTokenRequest - Interface in org.apache.hadoop.mapreduce.v2.api.protocolrecords
-
The request issued by the client to the ResourceManager to cancel a delegation token.
- CancelDelegationTokenResponse - Interface in org.apache.hadoop.mapreduce.v2.api.protocolrecords
-
The response from the ResourceManager to a cancelDelegationToken request.
- cancelDeleteOnExit(Path) - Method in class org.apache.hadoop.fs.FileSystem
-
Cancel the deletion of the path when the FileSystem is closed
- canExecute(File) - Static method in class org.apache.hadoop.fs.FileUtil
-
- canonicalizeUri(URI) - Method in class org.apache.hadoop.fs.FileSystem
-
Canonicalize the given URI.
- canonicalizeUri(URI) - Method in class org.apache.hadoop.fs.FilterFileSystem
-
- canRead(File) - Static method in class org.apache.hadoop.fs.FileUtil
-
- CanSetDropBehind - Interface in org.apache.hadoop.fs
-
- CanSetReadahead - Interface in org.apache.hadoop.fs
-
- CanUnbuffer - Interface in org.apache.hadoop.fs
-
FSDataInputStreams implement this interface to indicate that they can clear
their buffers on request.
- canWrite(File) - Static method in class org.apache.hadoop.fs.FileUtil
-
- ChainMapper - Class in org.apache.hadoop.mapred.lib
-
The ChainMapper class allows the use of multiple Mapper classes within a single
Map task.
- ChainMapper() - Constructor for class org.apache.hadoop.mapred.lib.ChainMapper
-
Constructor.
- ChainMapper<KEYIN,VALUEIN,KEYOUT,VALUEOUT> - Class in org.apache.hadoop.mapreduce.lib.chain
-
The ChainMapper class allows the use of multiple Mapper classes within a single
Map task.
- ChainMapper() - Constructor for class org.apache.hadoop.mapreduce.lib.chain.ChainMapper
-
- ChainReducer - Class in org.apache.hadoop.mapred.lib
-
The ChainReducer class allows chaining multiple Mapper classes after a
Reducer within the Reducer task.
- ChainReducer() - Constructor for class org.apache.hadoop.mapred.lib.ChainReducer
-
Constructor.
- ChainReducer<KEYIN,VALUEIN,KEYOUT,VALUEOUT> - Class in org.apache.hadoop.mapreduce.lib.chain
-
The ChainReducer class allows chaining multiple Mapper classes after a
Reducer within the Reducer task.
- ChainReducer() - Constructor for class org.apache.hadoop.mapreduce.lib.chain.ChainReducer
-
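A minimal sketch of wiring a chain with the new-API ChainMapper/ChainReducer described above; the identity Mapper and Reducer classes stand in for real user classes, and the addMapper/setReducer signatures should be checked against the Javadoc of the version in use.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.chain.ChainMapper;
    import org.apache.hadoop.mapreduce.lib.chain.ChainReducer;

    public class ChainSketch {
      public static Job build(Configuration conf) throws IOException {
        Job job = Job.getInstance(conf, "chain-sketch");
        // Two map stages run back to back inside the same map task.
        ChainMapper.addMapper(job, Mapper.class, LongWritable.class, Text.class,
            LongWritable.class, Text.class, new Configuration(false));
        ChainMapper.addMapper(job, Mapper.class, LongWritable.class, Text.class,
            LongWritable.class, Text.class, new Configuration(false));
        // A reducer closes the chain; more mappers could follow via ChainReducer.addMapper.
        ChainReducer.setReducer(job, Reducer.class, LongWritable.class, Text.class,
            LongWritable.class, Text.class, new Configuration(false));
        return job;
      }
    }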
- changed() - Method in class org.apache.hadoop.metrics2.lib.MutableMetric
-
- charAt(int) - Method in class org.apache.hadoop.io.Text
-
Returns the Unicode Scalar Value (32-bit integer value) for the character at the given position.
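A minimal sketch of Text.charAt: the argument indexes the underlying UTF-8 bytes, and the return value is a Unicode scalar value (or -1 for an invalid position).

    import org.apache.hadoop.io.Text;

    public class TextCharAtSketch {
      public static void main(String[] args) {
        Text t = new Text("hello");
        int scalar = t.charAt(0);    // 'h' as a 32-bit Unicode scalar value
        System.out.println(scalar);  // prints 104
      }
    }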
- checkAllowedProtocols(Class<?>) - Method in class org.apache.hadoop.yarn.client.ClientRMProxy
-
- checkAllowedProtocols(Class<?>) - Method in class org.apache.hadoop.yarn.client.RMProxy
-
Verify the passed protocol is supported.
- checkArgs(String) - Method in interface org.apache.hadoop.ha.FenceMethod
-
Verify that the given fencing method's arguments are valid.
- checkFencingConfigured() - Method in class org.apache.hadoop.ha.HAServiceTarget
-
- checkOutputSpecs(FileSystem, JobConf) - Method in class org.apache.hadoop.mapred.FileOutputFormat
-
- checkOutputSpecs(FileSystem, JobConf) - Method in class org.apache.hadoop.mapred.lib.db.DBOutputFormat
-
Check for validity of the output-specification for the job.
- checkOutputSpecs(FileSystem, JobConf) - Method in class org.apache.hadoop.mapred.lib.FilterOutputFormat
-
- checkOutputSpecs(FileSystem, JobConf) - Method in class org.apache.hadoop.mapred.lib.LazyOutputFormat
-
- checkOutputSpecs(FileSystem, JobConf) - Method in class org.apache.hadoop.mapred.lib.NullOutputFormat
-
- checkOutputSpecs(FileSystem, JobConf) - Method in interface org.apache.hadoop.mapred.OutputFormat
-
Check for validity of the output-specification for the job.
- checkOutputSpecs(FileSystem, JobConf) - Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat
-
- checkOutputSpecs(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.db.DBOutputFormat
-
- checkOutputSpecs(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
- checkOutputSpecs(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FilterOutputFormat
-
- checkOutputSpecs(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.output.LazyOutputFormat
-
- checkOutputSpecs(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.output.NullOutputFormat
-
- checkOutputSpecs(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.output.SequenceFileAsBinaryOutputFormat
-
- checkOutputSpecs(JobContext) - Method in class org.apache.hadoop.mapreduce.OutputFormat
-
Check for validity of the output-specification for the job.
- checkPath(Path) - Method in class org.apache.hadoop.fs.AbstractFileSystem
-
Check that a Path belongs to this FileSystem.
- checkPath(Path) - Method in class org.apache.hadoop.fs.FileSystem
-
Check that a Path belongs to this FileSystem.
- checkPath(Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
-
Check that a Path belongs to this FileSystem.
- checkpoint() - Method in class org.apache.hadoop.fs.Trash
-
Create a trash checkpoint.
- Checkpointable - Annotation Type in org.apache.hadoop.mapreduce.task.annotation
-
Contract indicating to the framework that the task can be safely preempted
and restarted between invocations of the user-defined function.
- checkScheme(URI, String) - Method in class org.apache.hadoop.fs.AbstractFileSystem
-
Check that the Uri's scheme matches
- checkStateTransition(String, Service.STATE, Service.STATE) - Static method in class org.apache.hadoop.service.ServiceStateModel
-
Check that a state transition is valid and throw an exception if not.
- checkStream() - Method in class org.apache.hadoop.io.compress.DecompressorStream
-
- ChecksumException - Exception in org.apache.hadoop.fs
-
Thrown for checksum errors.
- ChecksumException(String, long) - Constructor for exception org.apache.hadoop.fs.ChecksumException
-
- ChecksumFileSystem - Class in org.apache.hadoop.fs
-
Abstract checksummed FileSystem.
- ChecksumFileSystem(FileSystem) - Constructor for class org.apache.hadoop.fs.ChecksumFileSystem
-
- children - Variable in class org.apache.hadoop.registry.client.types.RegistryPathStatus
-
Number of child nodes
- chmod(String, String) - Static method in class org.apache.hadoop.fs.FileUtil
-
Change the permissions on a filename.
- chmod(String, String, boolean) - Static method in class org.apache.hadoop.fs.FileUtil
-
Change the permissions on a file / directory, recursively, if
needed.
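A minimal sketch of the two FileUtil.chmod overloads above; the local paths and modes are placeholders.

    import java.io.IOException;
    import org.apache.hadoop.fs.FileUtil;

    public class ChmodSketch {
      public static void main(String[] args) throws IOException, InterruptedException {
        FileUtil.chmod("/tmp/report.txt", "644");        // single file
        FileUtil.chmod("/tmp/output-dir", "755", true);  // directory, applied recursively
      }
    }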
- CLASS_PATH_SEPARATOR - Static variable in interface org.apache.hadoop.yarn.api.ApplicationConstants
-
This constant is used to construct the class path; it will be replaced with the
real class path separator (':' for Linux and ';' for Windows) by the
NodeManager on container launch.
- cleanup(Log, Closeable...) - Static method in class org.apache.hadoop.io.IOUtils
-
Close the Closeable objects and ignore any IOException or null pointers.
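A minimal sketch of IOUtils.cleanup as described above: closing several streams while swallowing IOExceptions and null references. Passing a null Log simply skips the debug logging.

    import java.io.InputStream;
    import java.io.OutputStream;
    import org.apache.hadoop.io.IOUtils;

    public class CleanupSketch {
      // Closes both streams, ignoring nulls and any IOException raised by close().
      public static void closeQuietly(InputStream in, OutputStream out) {
        IOUtils.cleanup(null, in, out);
      }
    }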
- cleanup(Mapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT>.Context) - Method in class org.apache.hadoop.mapreduce.Mapper
-
Called once at the end of the task.
- cleanup(Reducer<KEYIN, VALUEIN, KEYOUT, VALUEOUT>.Context) - Method in class org.apache.hadoop.mapreduce.Reducer
-
Called once at the end of the task.
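A minimal sketch of overriding cleanup, which the entries above note is called once at the end of the task; the per-task counting logic is purely illustrative.

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class CountingMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
      private long records = 0;

      @Override
      protected void map(LongWritable key, Text value, Context context) {
        records++;  // accumulate instead of emitting per record
      }

      @Override
      protected void cleanup(Context context) throws IOException, InterruptedException {
        // Runs once after the last map() call: emit the aggregate.
        context.write(new Text("records"), new LongWritable(records));
      }
    }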
- cleanupJob(JobContext) - Method in class org.apache.hadoop.mapred.FileOutputCommitter
-
Deprecated.
- cleanupJob(JobContext) - Method in class org.apache.hadoop.mapred.OutputCommitter
-
- cleanupJob(JobContext) - Method in class org.apache.hadoop.mapred.OutputCommitter
-
- cleanupJob(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
-
Deprecated.
- cleanupJob(JobContext) - Method in class org.apache.hadoop.mapreduce.OutputCommitter
-
- cleanUpPartialOutputForTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.PartialFileOutputCommitter
-
- cleanUpPartialOutputForTask(TaskAttemptContext) - Method in interface org.apache.hadoop.mapreduce.lib.output.PartialOutputCommitter
-
Remove all previously committed outputs from prior executions of this task.
- cleanupProgress() - Method in class org.apache.hadoop.mapred.JobStatus
-
- cleanupProgress() - Method in interface org.apache.hadoop.mapred.RunningJob
-
Get the progress of the job's cleanup-tasks, as a float between 0.0
and 1.0.
- cleanupProgress() - Method in class org.apache.hadoop.mapreduce.Job
-
Get the progress of the job's cleanup-tasks, as a float between 0.0
and 1.0.
- cleanupRunningContainersOnStop(boolean) - Method in class org.apache.hadoop.yarn.client.api.NMClient
-
Set whether the containers that were started by this client and are
still running should be stopped when the client stops.
- cleanUpTokenReferral(Configuration) - Static method in class org.apache.hadoop.mapreduce.security.TokenCache
-
Remove jobtoken referrals which don't make sense in the context
of the task execution.
- clear() - Method in class org.apache.hadoop.conf.Configuration
-
Clears all keys from the configuration.
- clear() - Method in class org.apache.hadoop.io.MapWritable
-
- clear() - Method in class org.apache.hadoop.io.SortedMapWritable
-
- clear() - Method in class org.apache.hadoop.io.Text
-
Clear the string to empty.
- clear() - Method in class org.apache.hadoop.mapreduce.lib.join.ArrayListBackedIterator
-
- clear() - Method in interface org.apache.hadoop.mapreduce.lib.join.ResetableIterator
-
Close datasources, but do not release internal resources.
- clear() - Method in class org.apache.hadoop.mapreduce.lib.join.StreamBackedIterator
-
- clear() - Method in class org.apache.hadoop.util.bloom.HashFunction
-
Clears this hash function.
- CLEAR_TEXT_FALLBACK - Static variable in class org.apache.hadoop.security.alias.CredentialProvider
-
- clearCache() - Method in class org.apache.hadoop.yarn.client.api.NMTokenCache
-
Removes all the NM tokens from its cache.
- clearChanged() - Method in class org.apache.hadoop.metrics2.lib.MutableMetric
-
Clear the changed flag in the snapshot operations
- clearMark() - Method in class org.apache.hadoop.mapreduce.MarkableIterator
-
- clearStatistics() - Static method in class org.apache.hadoop.fs.AbstractFileSystem
-
- clearStatistics() - Static method in class org.apache.hadoop.fs.FileContext
-
Clears all the statistics stored in AbstractFileSystem, for all the file
systems.
- clearStatistics() - Static method in class org.apache.hadoop.fs.FileSystem
-
Reset all statistics for all file systems
- clearWriteAccessors() - Method in interface org.apache.hadoop.registry.client.api.RegistryOperations
-
Clear all write accessors.
- CLI - Class in org.apache.hadoop.mapreduce.tools
-
Interprets the MapReduce CLI options.
- CLI() - Constructor for class org.apache.hadoop.mapreduce.tools.CLI
-
- CLI(Configuration) - Constructor for class org.apache.hadoop.mapreduce.tools.CLI
-
- Client - Class in org.apache.hadoop.yarn.applications.distributedshell
-
Client for Distributed Shell application submission to YARN.
- Client(Configuration) - Constructor for class org.apache.hadoop.yarn.applications.distributedshell.Client
-
- Client() - Constructor for class org.apache.hadoop.yarn.applications.distributedshell.Client
-
- client - Variable in class org.apache.hadoop.yarn.client.api.async.AMRMClientAsync
-
- client - Variable in class org.apache.hadoop.yarn.client.api.async.NMClientAsync
-
- CLIENT_FAILOVER_MAX_ATTEMPTS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- CLIENT_FAILOVER_PREFIX - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- CLIENT_FAILOVER_PROXY_PROVIDER - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- CLIENT_FAILOVER_RETRIES - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- CLIENT_FAILOVER_RETRIES_ON_SOCKET_TIMEOUTS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- CLIENT_FAILOVER_SLEEPTIME_BASE_MS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- CLIENT_FAILOVER_SLEEPTIME_MAX_MS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- CLIENT_NM_CONNECT_MAX_WAIT_MS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
Max time to wait to establish a connection to NM
- CLIENT_NM_CONNECT_RETRY_INTERVAL_MS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
Time interval between each attempt to connect to NM
- ClientRMProxy<T> - Class in org.apache.hadoop.yarn.client
-
- ClientRMSecurityInfo - Class in org.apache.hadoop.yarn.security.client
-
- ClientRMSecurityInfo() - Constructor for class org.apache.hadoop.yarn.security.client.ClientRMSecurityInfo
-
- ClientTimelineSecurityInfo - Class in org.apache.hadoop.yarn.security.client
-
- ClientTimelineSecurityInfo() - Constructor for class org.apache.hadoop.yarn.security.client.ClientTimelineSecurityInfo
-
- ClientToAMTokenIdentifier - Class in org.apache.hadoop.yarn.security.client
-
- ClientToAMTokenIdentifier() - Constructor for class org.apache.hadoop.yarn.security.client.ClientToAMTokenIdentifier
-
- ClientToAMTokenIdentifier(ApplicationAttemptId, String) - Constructor for class org.apache.hadoop.yarn.security.client.ClientToAMTokenIdentifier
-
- ClientToAMTokenSecretManager - Class in org.apache.hadoop.yarn.security.client
-
A simple SecretManager for AMs to validate Client-RM tokens issued to clients by the RM using the underlying master-key shared by RM to the AMs on their launch.
- ClientToAMTokenSecretManager(ApplicationAttemptId, byte[]) - Constructor for class org.apache.hadoop.yarn.security.client.ClientToAMTokenSecretManager
-
- Clock - Interface in org.apache.hadoop.yarn.util
-
A simple clock interface that gives you time.
- clone(T, Configuration) - Static method in class org.apache.hadoop.io.WritableUtils
-
Make a copy of a writable object using serialization to a buffer.
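A minimal sketch of WritableUtils.clone: a deep copy made by serializing to a buffer and back, handy when the framework reuses Writable instances.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.io.WritableUtils;

    public class CloneSketch {
      public static void main(String[] args) {
        Text original = new Text("value");
        Text copy = WritableUtils.clone(original, new Configuration());
        original.clear();              // mutating the original ...
        System.out.println(copy);      // ... leaves the copy intact: prints "value"
      }
    }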
- clone() - Method in class org.apache.hadoop.mapreduce.JobStatus
-
- clone() - Method in class org.apache.hadoop.record.Buffer
-
Deprecated.
- clone() - Method in class org.apache.hadoop.registry.client.types.Endpoint
-
Shallow clone: the lists of addresses are shared
- clone() - Method in class org.apache.hadoop.registry.client.types.ServiceRecord
-
Shallow clone: all endpoints will be shared across instances
- cloneInto(Writable, Writable) - Static method in class org.apache.hadoop.io.WritableUtils
-
Deprecated.
use ReflectionUtils.cloneInto instead.
- cloneWritableInto(Writable, Writable) - Static method in class org.apache.hadoop.util.ReflectionUtils
-
Deprecated.
- close() - Method in class org.apache.hadoop.crypto.key.KeyProvider
-
Can be used by implementing classes to close any resources
that require closing
- close() - Method in class org.apache.hadoop.fs.AvroFSInput
-
- close() - Method in class org.apache.hadoop.fs.FileSystem
-
No more filesystem operations are needed.
- close() - Method in class org.apache.hadoop.fs.FilterFileSystem
-
- close() - Method in class org.apache.hadoop.fs.FSDataOutputStream
-
Close the underlying output stream.
- close() - Method in class org.apache.hadoop.fs.RawLocalFileSystem
-
- close() - Method in class org.apache.hadoop.io.compress.CompressionInputStream
-
- close() - Method in class org.apache.hadoop.io.compress.CompressionOutputStream
-
- close() - Method in class org.apache.hadoop.io.compress.CompressorStream
-
- close() - Method in class org.apache.hadoop.io.compress.DecompressorStream
-
- close() - Method in class org.apache.hadoop.io.DefaultStringifier
-
- close() - Method in interface org.apache.hadoop.io.Stringifier
-
Closes this object.
- close() - Method in class org.apache.hadoop.log.metrics.EventCounter
-
- close() - Method in class org.apache.hadoop.mapred.JobClient
-
Close the JobClient.
- close() - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
-
Close all child RRs.
- close() - Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
-
Forward close request to proxied RR.
- close() - Method in class org.apache.hadoop.mapred.KeyValueLineRecordReader
-
- close() - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorCombiner
-
Do nothing.
- close() - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJobBase
-
- close() - Method in class org.apache.hadoop.mapred.lib.ChainMapper
-
Closes the ChainMapper and all the Mappers in the chain.
- close() - Method in class org.apache.hadoop.mapred.lib.ChainReducer
-
Closes the ChainReducer, the Reducer and all the Mappers in the chain.
- close() - Method in class org.apache.hadoop.mapred.lib.CombineFileRecordReader
-
- close() - Method in class org.apache.hadoop.mapred.lib.CombineFileRecordReaderWrapper
-
- close() - Method in class org.apache.hadoop.mapred.lib.FieldSelectionMapReduce
-
- close() - Method in class org.apache.hadoop.mapred.lib.MultipleOutputs
-
Closes all the opened named outputs.
- close() - Method in class org.apache.hadoop.mapred.MapReduceBase
-
Default implementation that does nothing.
- close() - Method in interface org.apache.hadoop.mapred.RecordReader
-
- close(Reporter) - Method in interface org.apache.hadoop.mapred.RecordWriter
-
Close this RecordWriter to future operations.
- close() - Method in class org.apache.hadoop.mapred.SequenceFileAsTextRecordReader
-
- close() - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
-
- close() - Method in class org.apache.hadoop.mapreduce.Cluster
-
Close the Cluster.
- close() - Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
Close the record reader.
- close() - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader
-
- close() - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReaderWrapper
-
- close() - Method in class org.apache.hadoop.mapreduce.lib.input.KeyValueLineRecordReader
-
- close() - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileAsTextRecordReader
-
- close() - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader
-
- close() - Method in class org.apache.hadoop.mapreduce.lib.join.ArrayListBackedIterator
-
- close() - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeRecordReader
-
Close all child RRs.
- close() - Method in interface org.apache.hadoop.mapreduce.lib.join.ResetableIterator
-
Close datasources and release resources.
- close() - Method in class org.apache.hadoop.mapreduce.lib.join.StreamBackedIterator
-
- close() - Method in class org.apache.hadoop.mapreduce.lib.join.WrappedRecordReader
-
Forward close request to proxied RR.
- close() - Method in class org.apache.hadoop.mapreduce.lib.output.MultipleOutputs
-
Closes all the opened outputs.
- close() - Method in class org.apache.hadoop.mapreduce.RecordReader
-
Close the record reader.
- close(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.RecordWriter
-
Close this RecordWriter to future operations.
- close() - Method in class org.apache.hadoop.metrics.ganglia.GangliaContext
-
Method to close the datagram socket.
- close() - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
-
Stops monitoring and frees buffered data, returning this
object to its initial state.
- close() - Method in class org.apache.hadoop.metrics.spi.CompositeContext
-
- close() - Method in class org.apache.hadoop.metrics2.sink.FileSink
-
- close() - Method in class org.apache.hadoop.metrics2.sink.GraphiteSink
-
- close() - Method in class org.apache.hadoop.service.AbstractService
-
- close() - Method in interface org.apache.hadoop.service.Service
-
A version of stop() that is designed to be usable in Java7 closure
clauses.
- close() - Method in class org.apache.hadoop.yarn.ContainerLogAppender
-
- close() - Method in class org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat.LogReader
-
- Closeable - Interface in org.apache.hadoop.io
-
Deprecated.
use java.io.Closeable
- closeAll() - Static method in class org.apache.hadoop.fs.FileSystem
-
Close all cached filesystems.
- closeAllForUGI(UserGroupInformation) - Static method in class org.apache.hadoop.fs.FileSystem
-
Close all cached filesystems for a given UGI.
- closeConnection() - Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
- closed - Variable in class org.apache.hadoop.io.compress.CompressorStream
-
- closed - Variable in class org.apache.hadoop.io.compress.DecompressorStream
-
- closeSocket(Socket) - Static method in class org.apache.hadoop.io.IOUtils
-
- closeStream(Closeable) - Static method in class org.apache.hadoop.io.IOUtils
-
- Cluster - Class in org.apache.hadoop.mapreduce
-
Provides a way to access information about the map/reduce cluster.
- Cluster(Configuration) - Constructor for class org.apache.hadoop.mapreduce.Cluster
-
- Cluster(InetSocketAddress, Configuration) - Constructor for class org.apache.hadoop.mapreduce.Cluster
-
- cluster - Variable in class org.apache.hadoop.mapreduce.tools.CLI
-
- ClusterMetrics - Class in org.apache.hadoop.mapreduce
-
Status information on the current state of the Map-Reduce cluster.
- ClusterMetrics() - Constructor for class org.apache.hadoop.mapreduce.ClusterMetrics
-
- ClusterMetrics(int, int, int, int, int, int, int, int, int, int, int, int) - Constructor for class org.apache.hadoop.mapreduce.ClusterMetrics
-
- ClusterMetrics(int, int, int, int, int, int, int, int, int, int, int, int, int) - Constructor for class org.apache.hadoop.mapreduce.ClusterMetrics
-
- ClusterStatus - Class in org.apache.hadoop.mapred
-
Status information on the current state of the Map-Reduce cluster.
- clusterTimestamp - Variable in class org.apache.hadoop.yarn.api.records.ReservationId
-
- cmp - Variable in class org.apache.hadoop.mapreduce.lib.join.WrappedRecordReader
-
- cmpcl - Variable in class org.apache.hadoop.mapred.join.Parser.Node
-
- cmpcl - Variable in class org.apache.hadoop.mapreduce.lib.join.Parser.Node
-
- CodeBuffer - Class in org.apache.hadoop.record.compiler
-
- CodecPool - Class in org.apache.hadoop.io.compress
-
A global compressor/decompressor pool used to save and reuse
(possibly native) compression/decompression codecs.
- CodecPool() - Constructor for class org.apache.hadoop.io.compress.CodecPool
-
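A minimal sketch of borrowing a compressor from CodecPool and returning it after use; the choice of GzipCodec is an assumption.

    import java.io.IOException;
    import java.io.OutputStream;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.compress.CodecPool;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.CompressionOutputStream;
    import org.apache.hadoop.io.compress.Compressor;
    import org.apache.hadoop.io.compress.GzipCodec;
    import org.apache.hadoop.util.ReflectionUtils;

    public class CodecPoolSketch {
      public static void compress(byte[] data, OutputStream raw) throws IOException {
        Configuration conf = new Configuration();
        CompressionCodec codec = ReflectionUtils.newInstance(GzipCodec.class, conf);
        Compressor compressor = CodecPool.getCompressor(codec);  // borrow from the pool
        try {
          CompressionOutputStream out = codec.createOutputStream(raw, compressor);
          out.write(data);
          out.finish();
        } finally {
          CodecPool.returnCompressor(compressor);  // hand it back for reuse
        }
      }
    }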
- collect(K, V) - Method in interface org.apache.hadoop.mapred.OutputCollector
-
Adds a key/value pair to the output.
- column - Variable in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
Deprecated.
- combine(Object[], TupleWritable) - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
-
- combine(Object[], TupleWritable) - Method in class org.apache.hadoop.mapred.join.InnerJoinRecordReader
-
Return true iff the tuple is full (all data sources contain this key).
- combine(Object[], TupleWritable) - Method in class org.apache.hadoop.mapred.join.MultiFilterRecordReader
-
- combine(Object[], TupleWritable) - Method in class org.apache.hadoop.mapred.join.OuterJoinRecordReader
-
Emit everything from the collector.
- combine(Object[], TupleWritable) - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeRecordReader
-
- combine(Object[], TupleWritable) - Method in class org.apache.hadoop.mapreduce.lib.join.InnerJoinRecordReader
-
Return true iff the tuple is full (all data sources contain this key).
- combine(Object[], TupleWritable) - Method in class org.apache.hadoop.mapreduce.lib.join.MultiFilterRecordReader
-
- combine(Object[], TupleWritable) - Method in class org.apache.hadoop.mapreduce.lib.join.OuterJoinRecordReader
-
Emit everything from the collector.
- CombineFileInputFormat<K,V> - Class in org.apache.hadoop.mapred.lib
-
- CombineFileInputFormat() - Constructor for class org.apache.hadoop.mapred.lib.CombineFileInputFormat
-
default constructor
- CombineFileInputFormat<K,V> - Class in org.apache.hadoop.mapreduce.lib.input
-
- CombineFileInputFormat() - Constructor for class org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat
-
default constructor
- CombineFileRecordReader<K,V> - Class in org.apache.hadoop.mapred.lib
-
A generic RecordReader that can hand out different recordReaders for each chunk in a CombineFileSplit.
- CombineFileRecordReader(JobConf, CombineFileSplit, Reporter, Class<RecordReader<K, V>>) - Constructor for class org.apache.hadoop.mapred.lib.CombineFileRecordReader
-
A generic RecordReader that can hand out different recordReaders
for each chunk in the CombineFileSplit.
- CombineFileRecordReader<K,V> - Class in org.apache.hadoop.mapreduce.lib.input
-
A generic RecordReader that can hand out different recordReaders for each chunk in a CombineFileSplit.
- CombineFileRecordReader(CombineFileSplit, TaskAttemptContext, Class<? extends RecordReader<K, V>>) - Constructor for class org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader
-
A generic RecordReader that can hand out different recordReaders
for each chunk in the CombineFileSplit.
- CombineFileRecordReaderWrapper<K,V> - Class in org.apache.hadoop.mapred.lib
-
A wrapper class for a record reader that handles a single file split.
- CombineFileRecordReaderWrapper(FileInputFormat<K, V>, CombineFileSplit, Configuration, Reporter, Integer) - Constructor for class org.apache.hadoop.mapred.lib.CombineFileRecordReaderWrapper
-
- CombineFileRecordReaderWrapper<K,V> - Class in org.apache.hadoop.mapreduce.lib.input
-
A wrapper class for a record reader that handles a single file split.
- CombineFileRecordReaderWrapper(FileInputFormat<K, V>, CombineFileSplit, TaskAttemptContext, Integer) - Constructor for class org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReaderWrapper
-
- CombineFileSplit - Class in org.apache.hadoop.mapred.lib
-
- CombineFileSplit() - Constructor for class org.apache.hadoop.mapred.lib.CombineFileSplit
-
- CombineFileSplit(JobConf, Path[], long[], long[], String[]) - Constructor for class org.apache.hadoop.mapred.lib.CombineFileSplit
-
- CombineFileSplit(JobConf, Path[], long[]) - Constructor for class org.apache.hadoop.mapred.lib.CombineFileSplit
-
- CombineFileSplit(CombineFileSplit) - Constructor for class org.apache.hadoop.mapred.lib.CombineFileSplit
-
Copy constructor
- CombineFileSplit - Class in org.apache.hadoop.mapreduce.lib.input
-
A sub-collection of input files.
- CombineFileSplit() - Constructor for class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
-
default constructor
- CombineFileSplit(Path[], long[], long[], String[]) - Constructor for class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
-
- CombineFileSplit(Path[], long[]) - Constructor for class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
-
- CombineFileSplit(CombineFileSplit) - Constructor for class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
-
Copy constructor
- CombineSequenceFileInputFormat<K,V> - Class in org.apache.hadoop.mapred.lib
-
Input format that is a CombineFileInputFormat-equivalent for SequenceFileInputFormat.
- CombineSequenceFileInputFormat() - Constructor for class org.apache.hadoop.mapred.lib.CombineSequenceFileInputFormat
-
- CombineSequenceFileInputFormat<K,V> - Class in org.apache.hadoop.mapreduce.lib.input
-
Input format that is a CombineFileInputFormat-equivalent for SequenceFileInputFormat.
- CombineSequenceFileInputFormat() - Constructor for class org.apache.hadoop.mapreduce.lib.input.CombineSequenceFileInputFormat
-
- CombineTextInputFormat - Class in org.apache.hadoop.mapred.lib
-
Input format that is a CombineFileInputFormat-equivalent for TextInputFormat.
- CombineTextInputFormat() - Constructor for class org.apache.hadoop.mapred.lib.CombineTextInputFormat
-
- CombineTextInputFormat - Class in org.apache.hadoop.mapreduce.lib.input
-
Input format that is a CombineFileInputFormat-equivalent for TextInputFormat.
- CombineTextInputFormat() - Constructor for class org.apache.hadoop.mapreduce.lib.input.CombineTextInputFormat
-
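A minimal sketch of using CombineTextInputFormat to pack many small text files into fewer splits; capping the split size via FileInputFormat.setMaxInputSplitSize is an assumption about the usual way the cap is applied, and the input path is a placeholder.

    import java.io.IOException;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.CombineTextInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

    public class CombineInputSketch {
      public static void configure(Job job) throws IOException {
        job.setInputFormatClass(CombineTextInputFormat.class);
        FileInputFormat.addInputPath(job, new Path("/data/many-small-files"));
        FileInputFormat.setMaxInputSplitSize(job, 128L * 1024 * 1024);  // roughly 128 MB per split
      }
    }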
- COMMA_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
-
Deprecated.
- commitJob(JobContext) - Method in class org.apache.hadoop.mapred.FileOutputCommitter
-
- commitJob(JobContext) - Method in class org.apache.hadoop.mapred.OutputCommitter
-
For committing job's output after successful job completion.
- commitJob(JobContext) - Method in class org.apache.hadoop.mapred.OutputCommitter
-
This method implements the new interface by calling the old method.
- commitJob(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
-
The job has completed so move all committed tasks to the final output dir.
- commitJob(JobContext) - Method in class org.apache.hadoop.mapreduce.OutputCommitter
-
For committing job's output after successful job completion.
- commitTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapred.FileOutputCommitter
-
- commitTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapred.OutputCommitter
-
To promote the task's temporary output to final output location.
- commitTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapred.OutputCommitter
-
This method implements the new interface by calling the old method.
- commitTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
-
Move the files from the work directory to the job output directory
- commitTask(TaskAttemptContext, Path) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
-
- commitTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.OutputCommitter
-
To promote the task's temporary output to final output location.
- CommonConfigurationKeysPublic - Class in org.apache.hadoop.fs
-
This class contains constants for configuration keys used
in the common code.
- CommonConfigurationKeysPublic() - Constructor for class org.apache.hadoop.fs.CommonConfigurationKeysPublic
-
- comparator() - Method in class org.apache.hadoop.io.SortedMapWritable
-
- COMPARATOR_JCLASS - Static variable in class org.apache.hadoop.io.file.tfile.TFile
-
comparator prefix: java class
- COMPARATOR_MEMCMP - Static variable in class org.apache.hadoop.io.file.tfile.TFile
-
comparator: memcmp
- COMPARATOR_OPTIONS - Static variable in class org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedComparator
-
- compare(byte[], int, int, byte[], int, int) - Method in interface org.apache.hadoop.io.RawComparator
-
Compare two objects in binary.
- compare(T, T) - Method in class org.apache.hadoop.io.serializer.JavaSerializationComparator
-
- compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.io.WritableComparator
-
Optimization hook.
- compare(WritableComparable, WritableComparable) - Method in class org.apache.hadoop.io.WritableComparator
-
Compare two WritableComparables.
- compare(Object, Object) - Method in class org.apache.hadoop.io.WritableComparator
-
- compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedComparator
-
- compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.record.RecordComparator
-
Deprecated.
- compare(ReservationRequest, ReservationRequest) - Method in class org.apache.hadoop.yarn.api.records.ReservationRequest.ReservationRequestComparator
-
- compare(ResourceRequest, ResourceRequest) - Method in class org.apache.hadoop.yarn.api.records.ResourceRequest.ResourceRequestComparator
-
- compareBytes(byte[], int, int, byte[], int, int) - Static method in class org.apache.hadoop.io.WritableComparator
-
Lexicographic order of binary data.
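A minimal sketch of WritableComparator.compareBytes: lexicographic comparison of two serialized buffers, the primitive that raw comparators build on to avoid deserialization.

    import org.apache.hadoop.io.WritableComparator;

    public class CompareBytesSketch {
      public static void main(String[] args) {
        byte[] a = {1, 2, 3};
        byte[] b = {1, 2, 4};
        int cmp = WritableComparator.compareBytes(a, 0, a.length, b, 0, b.length);
        System.out.println(cmp < 0);  // prints true: a sorts before b
      }
    }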
- compareBytes(byte[], int, int, byte[], int, int) - Static method in class org.apache.hadoop.record.Utils
-
Deprecated.
Lexicographic order of binary data.
- compareTo(Object) - Method in class org.apache.hadoop.fs.FileStatus
-
Compare this object to another object
- compareTo(VolumeId) - Method in class org.apache.hadoop.fs.HdfsVolumeId
-
- compareTo(Object) - Method in class org.apache.hadoop.fs.LocatedFileStatus
-
Compare this object to another object
- compareTo(Object) - Method in class org.apache.hadoop.fs.Path
-
- compareTo(VolumeId) - Method in interface org.apache.hadoop.fs.VolumeId
-
- compareTo(BinaryComparable) - Method in class org.apache.hadoop.io.BinaryComparable
-
Compare bytes from getBytes().
- compareTo(byte[], int, int) - Method in class org.apache.hadoop.io.BinaryComparable
-
Compare bytes from getBytes() to those provided.
- compareTo(BooleanWritable) - Method in class org.apache.hadoop.io.BooleanWritable
-
- compareTo(ByteWritable) - Method in class org.apache.hadoop.io.ByteWritable
-
Compares two ByteWritables.
- compareTo(DoubleWritable) - Method in class org.apache.hadoop.io.DoubleWritable
-
- compareTo(FloatWritable) - Method in class org.apache.hadoop.io.FloatWritable
-
Compares two FloatWritables.
- compareTo(IntWritable) - Method in class org.apache.hadoop.io.IntWritable
-
Compares two IntWritables.
- compareTo(LongWritable) - Method in class org.apache.hadoop.io.LongWritable
-
Compares two LongWritables.
- compareTo(MD5Hash) - Method in class org.apache.hadoop.io.MD5Hash
-
Compares this object with the specified object for order.
- compareTo(NullWritable) - Method in class org.apache.hadoop.io.NullWritable
-
- compareTo(ShortWritable) - Method in class org.apache.hadoop.io.ShortWritable
-
Compares two ShortWritable.
- compareTo(VIntWritable) - Method in class org.apache.hadoop.io.VIntWritable
-
Compares two VIntWritables.
- compareTo(VLongWritable) - Method in class org.apache.hadoop.io.VLongWritable
-
Compares two VLongWritables.
- compareTo(ComposableRecordReader<K, ?>) - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
-
Implement Comparable contract (compare key of join or head of heap
with that of another).
- compareTo(ComposableRecordReader<K, ?>) - Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
-
Implement Comparable contract (compare key at head of proxied RR
with that of another).
- compareTo(ID) - Method in class org.apache.hadoop.mapreduce.ID
-
Compare IDs by associated numbers
- compareTo(ID) - Method in class org.apache.hadoop.mapreduce.JobID
-
Compare JobIds by first jtIdentifiers, then by job numbers
- compareTo(ComposableRecordReader<K, ?>) - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeRecordReader
-
Implement Comparable contract (compare key of join or head of heap
with that of another).
- compareTo(ComposableRecordReader<K, ?>) - Method in class org.apache.hadoop.mapreduce.lib.join.WrappedRecordReader
-
Implement Comparable contract (compare key at head of proxied RR
with that of another).
- compareTo(ID) - Method in class org.apache.hadoop.mapreduce.TaskAttemptID
-
Compare TaskIds by first tipIds, then by task numbers.
- compareTo(ID) - Method in class org.apache.hadoop.mapreduce.TaskID
-
Compare TaskInProgressIds by first jobIds, then by tip numbers.
- compareTo(Object) - Method in class org.apache.hadoop.record.Buffer
-
Deprecated.
Define the sort order of the Buffer.
- compareTo(Object) - Method in class org.apache.hadoop.record.meta.RecordTypeInfo
-
Deprecated.
This class doesn't implement Comparable as it's not meant to be used
for anything besides de/serializing.
- compareTo(Object) - Method in class org.apache.hadoop.record.Record
-
Deprecated.
- compareTo(ApplicationAttemptId) - Method in class org.apache.hadoop.yarn.api.records.ApplicationAttemptId
-
- compareTo(ApplicationId) - Method in class org.apache.hadoop.yarn.api.records.ApplicationId
-
- compareTo(ContainerId) - Method in class org.apache.hadoop.yarn.api.records.ContainerId
-
- compareTo(NodeId) - Method in class org.apache.hadoop.yarn.api.records.NodeId
-
- compareTo(Priority) - Method in class org.apache.hadoop.yarn.api.records.Priority
-
- compareTo(ReservationId) - Method in class org.apache.hadoop.yarn.api.records.ReservationId
-
- compareTo(ReservationRequest) - Method in class org.apache.hadoop.yarn.api.records.ReservationRequest
-
- compareTo(ResourceRequest) - Method in class org.apache.hadoop.yarn.api.records.ResourceRequest
-
- compareTo(TimelineEntity) - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEntity
-
- compareTo(TimelineEvent) - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEvent
-
- compile(String) - Method in class org.apache.hadoop.metrics2.filter.GlobFilter
-
- compile(String) - Method in class org.apache.hadoop.metrics2.filter.RegexFilter
-
- completeLocalOutput(Path, Path) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
-
- completeLocalOutput(Path, Path) - Method in class org.apache.hadoop.fs.FileSystem
-
Called when we're all done writing to the target.
- completeLocalOutput(Path, Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
-
Called when we're all done writing to the target.
- completeLocalOutput(Path, Path) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
-
- COMPLETION_POLL_INTERVAL_KEY - Static variable in class org.apache.hadoop.mapreduce.Job
-
Key in mapred-*.xml that sets completionPollIntervalMillis
- componentListPath(String, String, String) - Static method in class org.apache.hadoop.registry.client.binding.RegistryUtils
-
Create a path for listing components under a service
- componentPath(String, String, String, String) - Static method in class org.apache.hadoop.registry.client.binding.RegistryUtils
-
Create the path to a service record for a component
- ComposableInputFormat<K extends WritableComparable,V extends Writable> - Interface in org.apache.hadoop.mapred.join
-
Refinement of InputFormat requiring implementors to provide
ComposableRecordReader instead of RecordReader.
- ComposableInputFormat<K extends WritableComparable<?>,V extends Writable> - Class in org.apache.hadoop.mapreduce.lib.join
-
Refinement of InputFormat requiring implementors to provide
ComposableRecordReader instead of RecordReader.
- ComposableInputFormat() - Constructor for class org.apache.hadoop.mapreduce.lib.join.ComposableInputFormat
-
- ComposableRecordReader<K extends WritableComparable,V extends Writable> - Interface in org.apache.hadoop.mapred.join
-
Additional operations required of a RecordReader to participate in a join.
- ComposableRecordReader<K extends WritableComparable<?>,V extends Writable> - Class in org.apache.hadoop.mapreduce.lib.join
-
Additional operations required of a RecordReader to participate in a join.
- ComposableRecordReader() - Constructor for class org.apache.hadoop.mapreduce.lib.join.ComposableRecordReader
-
- compose(Class<? extends InputFormat>, String) - Static method in class org.apache.hadoop.mapred.join.CompositeInputFormat
-
Convenience method for constructing composite formats.
- compose(String, Class<? extends InputFormat>, String...) - Static method in class org.apache.hadoop.mapred.join.CompositeInputFormat
-
Convenience method for constructing composite formats.
- compose(String, Class<? extends InputFormat>, Path...) - Static method in class org.apache.hadoop.mapred.join.CompositeInputFormat
-
Convenience method for constructing composite formats.
- compose(Class<? extends InputFormat>, String) - Static method in class org.apache.hadoop.mapreduce.lib.join.CompositeInputFormat
-
Convenience method for constructing composite formats.
- compose(String, Class<? extends InputFormat>, String...) - Static method in class org.apache.hadoop.mapreduce.lib.join.CompositeInputFormat
-
Convenience method for constructing composite formats.
- compose(String, Class<? extends InputFormat>, Path...) - Static method in class org.apache.hadoop.mapreduce.lib.join.CompositeInputFormat
-
Convenience method for constructing composite formats.
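A minimal sketch of CompositeInputFormat.compose over two identically sorted and partitioned inputs; the "mapreduce.join.expr" property name and the input paths are assumptions to verify against the version in use.

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
    import org.apache.hadoop.mapreduce.lib.join.CompositeInputFormat;

    public class ComposeSketch {
      public static void configure(Job job) {
        String expr = CompositeInputFormat.compose("inner", SequenceFileInputFormat.class,
            new Path("/data/left"), new Path("/data/right"));
        job.getConfiguration().set("mapreduce.join.expr", expr);  // assumed property name
        job.setInputFormatClass(CompositeInputFormat.class);
      }
    }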
- CompositeContext - Class in org.apache.hadoop.metrics.spi
-
- CompositeContext() - Constructor for class org.apache.hadoop.metrics.spi.CompositeContext
-
- CompositeInputFormat<K extends WritableComparable> - Class in org.apache.hadoop.mapred.join
-
An InputFormat capable of performing joins over a set of data sources sorted
and partitioned the same way.
- CompositeInputFormat() - Constructor for class org.apache.hadoop.mapred.join.CompositeInputFormat
-
- CompositeInputFormat<K extends WritableComparable> - Class in org.apache.hadoop.mapreduce.lib.join
-
An InputFormat capable of performing joins over a set of data sources sorted
and partitioned the same way.
- CompositeInputFormat() - Constructor for class org.apache.hadoop.mapreduce.lib.join.CompositeInputFormat
-
- CompositeInputSplit - Class in org.apache.hadoop.mapred.join
-
This InputSplit contains a set of child InputSplits.
- CompositeInputSplit() - Constructor for class org.apache.hadoop.mapred.join.CompositeInputSplit
-
- CompositeInputSplit(int) - Constructor for class org.apache.hadoop.mapred.join.CompositeInputSplit
-
- CompositeInputSplit - Class in org.apache.hadoop.mapreduce.lib.join
-
This InputSplit contains a set of child InputSplits.
- CompositeInputSplit() - Constructor for class org.apache.hadoop.mapreduce.lib.join.CompositeInputSplit
-
- CompositeInputSplit(int) - Constructor for class org.apache.hadoop.mapreduce.lib.join.CompositeInputSplit
-
- CompositeRecordReader<K extends WritableComparable,V extends Writable,X extends Writable> - Class in org.apache.hadoop.mapred.join
-
A RecordReader that can effect joins of RecordReaders sharing a common key
type and partitioning.
- CompositeRecordReader(int, int, Class<? extends WritableComparator>) - Constructor for class org.apache.hadoop.mapred.join.CompositeRecordReader
-
Create a RecordReader with capacity children to position
id in the parent reader.
- CompositeRecordReader<K extends WritableComparable<?>,V extends Writable,X extends Writable> - Class in org.apache.hadoop.mapreduce.lib.join
-
A RecordReader that can effect joins of RecordReaders sharing a common key
type and partitioning.
- CompositeRecordReader(int, int, Class<? extends WritableComparator>) - Constructor for class org.apache.hadoop.mapreduce.lib.join.CompositeRecordReader
-
Create a RecordReader with capacity children to position
id in the parent reader.
- CompositeService - Class in org.apache.hadoop.service
-
Composition of services.
- CompositeService(String) - Constructor for class org.apache.hadoop.service.CompositeService
-
- compress() - Method in class org.apache.hadoop.io.compress.BlockCompressorStream
-
- compress(byte[], int, int) - Method in interface org.apache.hadoop.io.compress.Compressor
-
Fills specified buffer with compressed data.
- compress() - Method in class org.apache.hadoop.io.compress.CompressorStream
-
- COMPRESS - Static variable in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
- COMPRESS_CODEC - Static variable in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
- COMPRESS_TYPE - Static variable in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
- CompressedWritable - Class in org.apache.hadoop.io
-
A base-class for Writables which store themselves compressed and lazily
inflate on field access.
- CompressedWritable() - Constructor for class org.apache.hadoop.io.CompressedWritable
-
- COMPRESSION_GZ - Static variable in class org.apache.hadoop.io.file.tfile.TFile
-
compression: gzip
- COMPRESSION_LZO - Static variable in class org.apache.hadoop.io.file.tfile.TFile
-
compression: lzo
- COMPRESSION_NONE - Static variable in class org.apache.hadoop.io.file.tfile.TFile
-
compression: none
- CompressionCodec - Interface in org.apache.hadoop.io.compress
-
This class encapsulates a streaming compression/decompression pair.
- CompressionCodecFactory - Class in org.apache.hadoop.io.compress
-
A factory that will find the correct codec for a given filename.
- CompressionCodecFactory(Configuration) - Constructor for class org.apache.hadoop.io.compress.CompressionCodecFactory
-
Find the codecs specified in the config value io.compression.codecs
and register them.
- CompressionInputStream - Class in org.apache.hadoop.io.compress
-
A compression input stream.
- CompressionInputStream(InputStream) - Constructor for class org.apache.hadoop.io.compress.CompressionInputStream
-
Create a compression input stream that reads
the decompressed bytes from the given stream.
- CompressionOutputStream - Class in org.apache.hadoop.io.compress
-
A compression output stream.
- CompressionOutputStream(OutputStream) - Constructor for class org.apache.hadoop.io.compress.CompressionOutputStream
-
Create a compression output stream that writes
the compressed bytes to the given stream.
- Compressor - Interface in org.apache.hadoop.io.compress
-
- compressor - Variable in class org.apache.hadoop.io.compress.CompressorStream
-
- CompressorStream - Class in org.apache.hadoop.io.compress
-
- CompressorStream(OutputStream, Compressor, int) - Constructor for class org.apache.hadoop.io.compress.CompressorStream
-
- CompressorStream(OutputStream, Compressor) - Constructor for class org.apache.hadoop.io.compress.CompressorStream
-
- CompressorStream(OutputStream) - Constructor for class org.apache.hadoop.io.compress.CompressorStream
-
Allow derived classes to directly set the underlying stream.
- computeSplitSize(long, long, long) - Method in class org.apache.hadoop.mapred.FileInputFormat
-
- computeSplitSize(long, long, long) - Method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-
- concat(Path, Path[]) - Method in class org.apache.hadoop.fs.FileSystem
-
Concat existing files together.
- concat(Path, Path[]) - Method in class org.apache.hadoop.fs.FilterFileSystem
-
- conditions - Variable in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
- conf - Variable in class org.apache.hadoop.mapred.SequenceFileRecordReader
-
- conf - Variable in class org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader
-
- conf - Variable in class org.apache.hadoop.mapreduce.lib.join.CompositeRecordReader
-
- Configurable - Interface in org.apache.hadoop.conf
-
- Configuration - Class in org.apache.hadoop.conf
-
Provides access to configuration parameters.
- Configuration() - Constructor for class org.apache.hadoop.conf.Configuration
-
A new configuration.
- Configuration(boolean) - Constructor for class org.apache.hadoop.conf.Configuration
-
A new configuration where the behavior of reading from the default
resources can be turned off.
- Configuration(Configuration) - Constructor for class org.apache.hadoop.conf.Configuration
-
A new configuration with the same settings cloned from another.
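A minimal sketch of the Configuration constructors listed above; the property name is illustrative.

    import org.apache.hadoop.conf.Configuration;

    public class ConfigurationSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();      // loads the default resources
        conf.set("example.buffer.size", "8192");       // hypothetical key
        Configuration copy = new Configuration(conf);  // same settings, independent object
        System.out.println(copy.getInt("example.buffer.size", 4096));  // prints 8192
      }
    }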
- configure(JobConf) - Method in class org.apache.hadoop.mapred.FixedLengthInputFormat
-
- configure(JobConf) - Method in interface org.apache.hadoop.mapred.JobConfigurable
-
Initializes a new instance from a JobConf.
- configure(JobConf) - Method in class org.apache.hadoop.mapred.KeyValueTextInputFormat
-
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.aggregate.UserDefinedValueAggregatorDescriptor
-
Do nothing.
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
-
get the input file name.
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorCombiner
-
Combiner does not need to configure.
- configure(JobConf) - Method in interface org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorDescriptor
-
Configure the object
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJobBase
-
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.BinaryPartitioner
-
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.ChainMapper
-
Configures the ChainMapper and all the Mappers in the chain.
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.ChainReducer
-
Configures the ChainReducer, the Reducer and all the Mappers in the chain.
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.db.DBInputFormat
-
Initializes a new instance from a JobConf.
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.FieldSelectionMapReduce
-
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.HashPartitioner
-
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.KeyFieldBasedComparator
-
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner
-
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.MultithreadedMapRunner
-
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.NLineInputFormat
-
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.RegexMapper
-
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.TotalOrderPartitioner
-
- configure(JobConf) - Method in class org.apache.hadoop.mapred.MapReduceBase
-
Default implementation that does nothing.
- configure(JobConf) - Method in class org.apache.hadoop.mapred.MapRunner
-
- configure(JobConf) - Method in class org.apache.hadoop.mapred.TextInputFormat
-
- configure(Configuration) - Method in class org.apache.hadoop.mapreduce.lib.aggregate.UserDefinedValueAggregatorDescriptor
-
Do nothing.
- configure(Configuration) - Method in class org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregatorBaseDescriptor
-
get the input file name.
- configure(Configuration) - Method in interface org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregatorDescriptor
-
Configure the object
- Configured - Class in org.apache.hadoop.conf
-
Base class for things that may be configured with a Configuration.
- Configured() - Constructor for class org.apache.hadoop.conf.Configured
-
Construct a Configured.
- Configured(Configuration) - Constructor for class org.apache.hadoop.conf.Configured
-
Construct a Configured.
- configureDB(JobConf, String, String, String, String) - Static method in class org.apache.hadoop.mapred.lib.db.DBConfiguration
-
Sets the DB access related fields in the JobConf.
- configureDB(JobConf, String, String) - Static method in class org.apache.hadoop.mapred.lib.db.DBConfiguration
-
Sets the DB access related fields in the JobConf.
- configureDB(Configuration, String, String, String, String) - Static method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- configureDB(Configuration, String, String) - Static method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
Sets the DB access related fields in the JobConf.
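A minimal sketch of DBConfiguration.configureDB storing JDBC settings for DBInputFormat/DBOutputFormat; the driver, URL, and credentials are placeholders.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;

    public class ConfigureDbSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        DBConfiguration.configureDB(conf,
            "com.mysql.jdbc.Driver",                // hypothetical driver class
            "jdbc:mysql://db.example.com/metrics",  // hypothetical JDBC URL
            "reader", "secret");                    // hypothetical credentials
      }
    }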
- confirmPrompt(String) - Static method in class org.apache.hadoop.util.ToolRunner
-
Print out a prompt to the user, and return true if the user
responds with "y" or "yes".
- connection - Variable in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
- ConnectTimeoutException - Exception in org.apache.hadoop.net
-
Thrown by NetUtils.connect(java.net.Socket, java.net.SocketAddress, int)
if it times out while connecting to the remote host.
- ConnectTimeoutException(String) - Constructor for exception org.apache.hadoop.net.ConnectTimeoutException
-
- constructOutputStream(DataOutput) - Static method in class org.apache.hadoop.io.DataOutputOutputStream
-
Construct an OutputStream from the given DataOutput.
- constructQuery(String, String[]) - Method in class org.apache.hadoop.mapreduce.lib.db.DBOutputFormat
-
Constructs the query used as the prepared statement to insert data.
- Consts - Class in org.apache.hadoop.record.compiler
-
- Container - Class in org.apache.hadoop.yarn.api.records
-
Container represents an allocated resource in the cluster.
- Container() - Constructor for class org.apache.hadoop.yarn.api.records.Container
-
- CONTAINER_ID_BITMASK - Static variable in class org.apache.hadoop.yarn.api.records.ContainerId
-
- CONTAINER_TOKEN_FILE_ENV_NAME - Static variable in interface org.apache.hadoop.yarn.api.ApplicationConstants
-
The cache file into which the container token is written
- ContainerExitStatus - Class in org.apache.hadoop.yarn.api.records
-
Container exit statuses indicating special exit circumstances.
- ContainerExitStatus() - Constructor for class org.apache.hadoop.yarn.api.records.ContainerExitStatus
-
- ContainerId - Class in org.apache.hadoop.yarn.api.records
-
ContainerId represents a globally unique identifier for a Container in the cluster.
- ContainerId() - Constructor for class org.apache.hadoop.yarn.api.records.ContainerId
-
- ContainerLaunchContext - Class in org.apache.hadoop.yarn.api.records
-
ContainerLaunchContext represents all of the information needed by the NodeManager to launch a container.
- ContainerLaunchContext() - Constructor for class org.apache.hadoop.yarn.api.records.ContainerLaunchContext
-
- ContainerLogAppender - Class in org.apache.hadoop.yarn
-
A simple log4j-appender for container's logs.
- ContainerLogAppender() - Constructor for class org.apache.hadoop.yarn.ContainerLogAppender
-
- ContainerManagementProtocol - Interface in org.apache.hadoop.yarn.api
-
The protocol between an ApplicationMaster and a NodeManager to start/stop containers and to get status of running containers.
- ContainerManagerSecurityInfo - Class in org.apache.hadoop.yarn.security
-
- ContainerManagerSecurityInfo() - Constructor for class org.apache.hadoop.yarn.security.ContainerManagerSecurityInfo
-
- ContainerNotFoundException - Exception in org.apache.hadoop.yarn.exceptions
-
This exception is thrown by the getContainerReport(GetContainerReportRequest) API when the container doesn't exist in the AHS.
- ContainerNotFoundException(Throwable) - Constructor for exception org.apache.hadoop.yarn.exceptions.ContainerNotFoundException
-
- ContainerNotFoundException(String) - Constructor for exception org.apache.hadoop.yarn.exceptions.ContainerNotFoundException
-
- ContainerNotFoundException(String, Throwable) - Constructor for exception org.apache.hadoop.yarn.exceptions.ContainerNotFoundException
-
- ContainerReport - Class in org.apache.hadoop.yarn.api.records
-
ContainerReport is a report of a container.
- ContainerReport() - Constructor for class org.apache.hadoop.yarn.api.records.ContainerReport
-
- ContainerResourceIncreaseRequest - Class in org.apache.hadoop.yarn.api.records
-
Used by the ApplicationMaster to send a container resource increase request to the ResourceManager.
- ContainerResourceIncreaseRequest() - Constructor for class org.apache.hadoop.yarn.api.records.ContainerResourceIncreaseRequest
-
- ContainerRollingLogAppender - Class in org.apache.hadoop.yarn
-
A simple log4j-appender for container's logs.
- ContainerRollingLogAppender() - Constructor for class org.apache.hadoop.yarn.ContainerRollingLogAppender
-
- ContainerState - Enum in org.apache.hadoop.yarn.api.records
-
State of a Container.
- ContainerStatus - Class in org.apache.hadoop.yarn.api.records
-
ContainerStatus represents the current status of a Container.
- ContainerStatus() - Constructor for class org.apache.hadoop.yarn.api.records.ContainerStatus
-
- ContainerTokenIdentifier - Class in org.apache.hadoop.yarn.security
-
TokenIdentifier for a container.
- ContainerTokenIdentifier(ContainerId, String, String, Resource, long, int, long, Priority, long) - Constructor for class org.apache.hadoop.yarn.security.ContainerTokenIdentifier
-
- ContainerTokenIdentifier(ContainerId, String, String, Resource, long, int, long, Priority, long, LogAggregationContext) - Constructor for class org.apache.hadoop.yarn.security.ContainerTokenIdentifier
-
- ContainerTokenIdentifier() - Constructor for class org.apache.hadoop.yarn.security.ContainerTokenIdentifier
-
Default constructor needed by RPC layer/SecretManager.
- ContainerTokenSelector - Class in org.apache.hadoop.yarn.security
-
- ContainerTokenSelector() - Constructor for class org.apache.hadoop.yarn.security.ContainerTokenSelector
-
- containsKey(Object) - Method in class org.apache.hadoop.io.MapWritable
-
- containsKey(Object) - Method in class org.apache.hadoop.io.SortedMapWritable
-
- containsToken(String) - Method in class org.apache.hadoop.yarn.client.api.NMTokenCache
-
Returns true if NMToken is present in cache.
- containsValue(Object) - Method in class org.apache.hadoop.io.MapWritable
-
- containsValue(Object) - Method in class org.apache.hadoop.io.SortedMapWritable
-
- contentEquals(Counters.Counter) - Method in class org.apache.hadoop.mapred.Counters.Counter
-
Deprecated.
- ContentSummary - Class in org.apache.hadoop.fs
-
Store the summary of a content (a directory or a file).
- ContentSummary() - Constructor for class org.apache.hadoop.fs.ContentSummary
-
Constructor
- ContentSummary(long, long, long) - Constructor for class org.apache.hadoop.fs.ContentSummary
-
Constructor
- ContentSummary(long, long, long, long, long, long) - Constructor for class org.apache.hadoop.fs.ContentSummary
-
Constructor
- context - Variable in class org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader
-
- context() - Method in interface org.apache.hadoop.metrics2.MetricsRecord
-
- ControlledJob - Class in org.apache.hadoop.mapreduce.lib.jobcontrol
-
This class encapsulates a MapReduce job and its dependency.
- ControlledJob(Job, List<ControlledJob>) - Constructor for class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
Construct a job.
- ControlledJob(Configuration) - Constructor for class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
Construct a job.
- convert(Throwable) - Static method in exception org.apache.hadoop.service.ServiceStateException
-
- convert(String, Throwable) - Static method in exception org.apache.hadoop.service.ServiceStateException
-
- convertUsername(String) - Static method in class org.apache.hadoop.registry.client.binding.RegistryUtils
-
Convert the username to that which can be used for registry
entries.
- copy(FileSystem, Path, FileSystem, Path, boolean, Configuration) - Static method in class org.apache.hadoop.fs.FileUtil
-
Copy files between FileSystems.
- copy(FileSystem, Path[], FileSystem, Path, boolean, boolean, Configuration) - Static method in class org.apache.hadoop.fs.FileUtil
-
- copy(FileSystem, Path, FileSystem, Path, boolean, boolean, Configuration) - Static method in class org.apache.hadoop.fs.FileUtil
-
Copy files between FileSystems.
- copy(FileSystem, FileStatus, FileSystem, Path, boolean, boolean, Configuration) - Static method in class org.apache.hadoop.fs.FileUtil
-
Copy files between FileSystems.
- copy(File, FileSystem, Path, boolean, Configuration) - Static method in class org.apache.hadoop.fs.FileUtil
-
Copy local files to a FileSystem.
- copy(FileSystem, Path, File, boolean, Configuration) - Static method in class org.apache.hadoop.fs.FileUtil
-
Copy FileSystem files to local files.
- copy(Writable) - Method in class org.apache.hadoop.io.AbstractMapWritable
-
Used by child copy constructors.
- copy(byte[], int, int) - Method in class org.apache.hadoop.record.Buffer
-
Deprecated.
Copy the specified byte array to the Buffer.
- copy(Configuration, T, T) - Static method in class org.apache.hadoop.util.ReflectionUtils
-
Make a copy of the writable object using serialization to a buffer
- copyBytes() - Method in class org.apache.hadoop.io.BytesWritable
-
Get a copy of the bytes that is exactly the length of the data.
- copyBytes(InputStream, OutputStream, int, boolean) - Static method in class org.apache.hadoop.io.IOUtils
-
Copies from one stream to another.
- copyBytes(InputStream, OutputStream, int) - Static method in class org.apache.hadoop.io.IOUtils
-
Copies from one stream to another.
- copyBytes(InputStream, OutputStream, Configuration) - Static method in class org.apache.hadoop.io.IOUtils
-
Copies from one stream to another.
- copyBytes(InputStream, OutputStream, Configuration, boolean) - Static method in class org.apache.hadoop.io.IOUtils
-
Copies from one stream to another.
- copyBytes(InputStream, OutputStream, long, boolean) - Static method in class org.apache.hadoop.io.IOUtils
-
Copies count bytes from one stream to another.
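For illustration only: a minimal sketch that streams one file into another with IOUtils.copyBytes; the paths are hypothetical and 4096 is an arbitrary buffer size.

    import java.io.InputStream;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;

    public class CopyBytesSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        try (InputStream in = fs.open(new Path("/user/example/in.txt"));
             FSDataOutputStream out = fs.create(new Path("/user/example/out.txt"))) {
          IOUtils.copyBytes(in, out, 4096, false);  // 'false': closing is left to try-with-resources
        }
      }
    }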
- copyBytes() - Method in class org.apache.hadoop.io.Text
-
Get a copy of the bytes that is exactly the length of the data.
- copyFromLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
-
- copyFromLocalFile(Path, Path) - Method in class org.apache.hadoop.fs.FileSystem
-
The src file is on the local disk.
- copyFromLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.fs.FileSystem
-
The src file is on the local disk.
- copyFromLocalFile(boolean, boolean, Path[], Path) - Method in class org.apache.hadoop.fs.FileSystem
-
The src files are on the local disk.
- copyFromLocalFile(boolean, boolean, Path, Path) - Method in class org.apache.hadoop.fs.FileSystem
-
The src file is on the local disk.
- copyFromLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
-
The src file is on the local disk.
- copyFromLocalFile(boolean, boolean, Path[], Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
-
The src files are on the local disk.
- copyFromLocalFile(boolean, boolean, Path, Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
-
The src file is on the local disk.
- copyFromLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.fs.LocalFileSystem
-
- copyMerge(FileSystem, Path, FileSystem, Path, boolean, Configuration, String) - Static method in class org.apache.hadoop.fs.FileUtil
-
Copy all files in a directory to one output file (merge).
- copyToLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
-
The src file is under FS, and the dst is on the local disk.
- copyToLocalFile(Path, Path, boolean) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
-
The src file is under FS, and the dst is on the local disk.
- copyToLocalFile(Path, Path) - Method in class org.apache.hadoop.fs.FileSystem
-
The src file is under FS, and the dst is on the local disk.
- copyToLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.fs.FileSystem
-
The src file is under FS, and the dst is on the local disk.
- copyToLocalFile(boolean, Path, Path, boolean) - Method in class org.apache.hadoop.fs.FileSystem
-
The src file is under FS, and the dst is on the local disk.
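For illustration only: a minimal round trip with copyFromLocalFile and copyToLocalFile on the default file system; both paths are hypothetical.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class LocalCopySketch {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // local -> FS
        fs.copyFromLocalFile(new Path("file:///tmp/report.csv"), new Path("/data/report.csv"));
        // FS -> local
        fs.copyToLocalFile(new Path("/data/report.csv"), new Path("file:///tmp/report-copy.csv"));
      }
    }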
- copyToLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
-
The src file is under FS, and the dst is on the local disk.
- copyToLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.fs.LocalFileSystem
-
- CORE_SITE_CONFIGURATION_FILE - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- countCounters() - Method in class org.apache.hadoop.mapreduce.counters.AbstractCounters
-
Returns the total number of counters, by summing the number of counters
in each group.
- Counter - Interface in org.apache.hadoop.mapreduce
-
A named counter that tracks the progress of a map/reduce job.
- counter(MetricsInfo, int) - Method in interface org.apache.hadoop.metrics2.MetricsVisitor
-
Callback for integer value counters
- counter(MetricsInfo, long) - Method in interface org.apache.hadoop.metrics2.MetricsVisitor
-
Callback for long value counters
- COUNTER_GROUP - Static variable in class org.apache.hadoop.mapred.SkipBadRecords
-
Special counters which are written by the application and are
used by the framework for detecting bad records.
- COUNTER_MAP_PROCESSED_RECORDS - Static variable in class org.apache.hadoop.mapred.SkipBadRecords
-
Number of processed map records.
- COUNTER_REDUCE_PROCESSED_GROUPS - Static variable in class org.apache.hadoop.mapred.SkipBadRecords
-
Number of processed reduce groups.
- CounterGroup - Interface in org.apache.hadoop.mapreduce
-
A group of Counters that logically belong together.
- CounterGroupBase<T extends Counter> - Interface in org.apache.hadoop.mapreduce.counters
-
The common counter group interface.
- Counters - Class in org.apache.hadoop.mapred
-
A set of named counters.
- Counters() - Constructor for class org.apache.hadoop.mapred.Counters
-
- Counters(Counters) - Constructor for class org.apache.hadoop.mapred.Counters
-
- Counters - Class in org.apache.hadoop.mapreduce
-
Counters holds per job/task counters, defined either by the Map-Reduce framework or applications.
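For illustration only: a minimal sketch of incrementing a custom counter from a Mapper; the group and counter names are made up.

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class CountingMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
      @Override
      protected void map(LongWritable key, Text value, Context context)
          throws IOException, InterruptedException {
        if (value.getLength() == 0) {
          context.getCounter("Example", "EMPTY_LINES").increment(1);  // custom counter
          return;
        }
        context.write(value, new LongWritable(1));
      }
    }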
- Counters() - Constructor for class org.apache.hadoop.mapreduce.Counters
-
Default constructor
- Counters(AbstractCounters<C, G>) - Constructor for class org.apache.hadoop.mapreduce.Counters
-
Construct the Counters object from the another counters object
- Counters.Counter - Class in org.apache.hadoop.mapred
-
A counter record, comprising its name and value.
- Counters.Counter() - Constructor for class org.apache.hadoop.mapred.Counters.Counter
-
- Counters.Group - Class in org.apache.hadoop.mapred
-
Group of counters, comprising counters from a particular counter Enum class.
- Counters.Group() - Constructor for class org.apache.hadoop.mapred.Counters.Group
-
- CountingBloomFilter - Class in org.apache.hadoop.util.bloom
-
Implements a counting Bloom filter, as defined by Fan et al.
- CountingBloomFilter() - Constructor for class org.apache.hadoop.util.bloom.CountingBloomFilter
-
Default constructor - use with readFields
- CountingBloomFilter(int, int, int) - Constructor for class org.apache.hadoop.util.bloom.CountingBloomFilter
-
Constructor
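For illustration only: a minimal sketch of a counting Bloom filter; the sizing (vector size 1000, 5 hash functions) is arbitrary.

    import java.nio.charset.StandardCharsets;
    import org.apache.hadoop.util.bloom.CountingBloomFilter;
    import org.apache.hadoop.util.bloom.Key;
    import org.apache.hadoop.util.hash.Hash;

    public class CountingBloomSketch {
      public static void main(String[] args) {
        CountingBloomFilter filter = new CountingBloomFilter(1000, 5, Hash.MURMUR_HASH);
        Key k = new Key("user-42".getBytes(StandardCharsets.UTF_8));
        filter.add(k);
        System.out.println(filter.membershipTest(k));  // true
        filter.delete(k);                              // counting filters support deletion
        System.out.println(filter.membershipTest(k));  // false, barring hash collisions
      }
    }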
- create(Path, EnumSet<CreateFlag>, Options.CreateOpts...) - Method in class org.apache.hadoop.fs.AbstractFileSystem
-
- create(Path, FsPermission, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
-
- create(Path, EnumSet<CreateFlag>, Options.CreateOpts...) - Method in class org.apache.hadoop.fs.FileContext
-
Create or overwrite file on indicated path and returns an output stream for
writing into the file.
- create(FileSystem, Path, FsPermission) - Static method in class org.apache.hadoop.fs.FileSystem
-
Create a file with the provided permission. The permission of the file is set to the provided permission, as in setPermission, not permission&~umask. It is implemented using two RPCs.
- create(Path) - Method in class org.apache.hadoop.fs.FileSystem
-
Create an FSDataOutputStream at the indicated Path.
- create(Path, boolean) - Method in class org.apache.hadoop.fs.FileSystem
-
Create an FSDataOutputStream at the indicated Path.
- create(Path, Progressable) - Method in class org.apache.hadoop.fs.FileSystem
-
Create an FSDataOutputStream at the indicated Path with write-progress
reporting.
- create(Path, short) - Method in class org.apache.hadoop.fs.FileSystem
-
Create an FSDataOutputStream at the indicated Path.
- create(Path, short, Progressable) - Method in class org.apache.hadoop.fs.FileSystem
-
Create an FSDataOutputStream at the indicated Path with write-progress
reporting.
- create(Path, boolean, int) - Method in class org.apache.hadoop.fs.FileSystem
-
Create an FSDataOutputStream at the indicated Path.
- create(Path, boolean, int, Progressable) - Method in class org.apache.hadoop.fs.FileSystem
-
Create an FSDataOutputStream at the indicated Path with write-progress
reporting.
- create(Path, boolean, int, short, long) - Method in class org.apache.hadoop.fs.FileSystem
-
Create an FSDataOutputStream at the indicated Path.
- create(Path, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.FileSystem
-
Create an FSDataOutputStream at the indicated Path with write-progress
reporting.
- create(Path, FsPermission, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.FileSystem
-
Create an FSDataOutputStream at the indicated Path with write-progress
reporting.
- create(Path, FsPermission, EnumSet<CreateFlag>, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.FileSystem
-
Create an FSDataOutputStream at the indicated Path with write-progress
reporting.
- create(Path, FsPermission, EnumSet<CreateFlag>, int, short, long, Progressable, Options.ChecksumOpt) - Method in class org.apache.hadoop.fs.FileSystem
-
Create an FSDataOutputStream at the indicated Path with a custom
checksum option
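For illustration only: a minimal sketch of creating a file and writing to the returned FSDataOutputStream; the path is hypothetical.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class CreateFileSketch {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        try (FSDataOutputStream out = fs.create(new Path("/user/example/hello.txt"), true /* overwrite */)) {
          out.writeBytes("hello, filesystem\n");
        }
      }
    }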
- create(Path, FsPermission, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.FilterFileSystem
-
- create(Path, FsPermission, EnumSet<CreateFlag>, int, short, long, Progressable, Options.ChecksumOpt) - Method in class org.apache.hadoop.fs.FilterFileSystem
-
- create(Path, FsPermission, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.ftp.FTPFileSystem
-
A stream obtained via this call must be closed before using other APIs of
this class or else the invocation will block.
- create(Path, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
-
- create(Path, FsPermission, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
-
- create(Path, FsPermission, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.s3.S3FileSystem
-
- create(Path, FsPermission, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.s3native.NativeS3FileSystem
-
- create(Path, FsPermission, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.viewfs.ViewFileSystem
-
- CREATE - Static variable in interface org.apache.hadoop.registry.client.api.BindFlags
-
Create the entry.
- CREATE_DIR - Static variable in class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
- createAHSClient() - Static method in class org.apache.hadoop.yarn.client.api.AHSClient
-
Create a new instance of AHSClient.
- createAHSProxy(Configuration, Class<T>, InetSocketAddress) - Static method in class org.apache.hadoop.yarn.client.AHSProxy
-
- createAllSymlink(Configuration, File, File) - Static method in class org.apache.hadoop.filecache.DistributedCache
-
Deprecated.
Internal to MapReduce framework. Use DistributedCacheManager
instead.
- createAMRMClient() - Static method in class org.apache.hadoop.yarn.client.api.AMRMClient
-
Create a new instance of AMRMClient.
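For illustration only, and far from a complete ApplicationMaster: a minimal sketch of registering with the ResourceManager and asking for one container via AMRMClient; the host/port/tracking-URL arguments and the container sizing are placeholders.

    import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
    import org.apache.hadoop.yarn.api.records.Priority;
    import org.apache.hadoop.yarn.api.records.Resource;
    import org.apache.hadoop.yarn.client.api.AMRMClient;
    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    public class AmRmSketch {
      public static void main(String[] args) throws Exception {
        AMRMClient<AMRMClient.ContainerRequest> rm = AMRMClient.createAMRMClient();
        rm.init(new YarnConfiguration());
        rm.start();
        rm.registerApplicationMaster("", 0, "");              // hostname/port/tracking URL elided
        Resource capability = Resource.newInstance(1024, 1);  // 1 GB, 1 vcore
        rm.addContainerRequest(
            new AMRMClient.ContainerRequest(capability, null, null, Priority.newInstance(0)));
        AllocateResponse response = rm.allocate(0.0f);        // heartbeat/poll for allocations
        System.out.println("allocated: " + response.getAllocatedContainers().size());
        // Unregistering and launching containers (via NMClient) are omitted from this sketch.
      }
    }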
- createAMRMClientAsync(int, AMRMClientAsync.CallbackHandler) - Static method in class org.apache.hadoop.yarn.client.api.async.AMRMClientAsync
-
- createAMRMClientAsync(AMRMClient<T>, int, AMRMClientAsync.CallbackHandler) - Static method in class org.apache.hadoop.yarn.client.api.async.AMRMClientAsync
-
- createApplication() - Method in class org.apache.hadoop.yarn.client.api.YarnClient
-
- createCheckpoint() - Method in class org.apache.hadoop.fs.TrashPolicy
-
Create a trash checkpoint.
- createCompressor() - Method in class org.apache.hadoop.io.compress.BZip2Codec
-
- createCompressor() - Method in interface org.apache.hadoop.io.compress.CompressionCodec
-
- createCompressor() - Method in class org.apache.hadoop.io.compress.DefaultCodec
-
- createCompressor() - Method in class org.apache.hadoop.io.compress.GzipCodec
-
- createConnection() - Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
- createCredentialEntry(String, char[]) - Method in class org.apache.hadoop.security.alias.CredentialProvider
-
Create a new credential.
- createDBRecordReader(DBInputFormat.DBInputSplit, Configuration) - Method in class org.apache.hadoop.mapreduce.lib.db.DataDrivenDBInputFormat
-
- createDBRecordReader(DBInputFormat.DBInputSplit, Configuration) - Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
- createDBRecordReader(DBInputFormat.DBInputSplit, Configuration) - Method in class org.apache.hadoop.mapreduce.lib.db.OracleDataDrivenDBInputFormat
-
- createDecompressor() - Method in class org.apache.hadoop.io.compress.BZip2Codec
-
- createDecompressor() - Method in interface org.apache.hadoop.io.compress.CompressionCodec
-
- createDecompressor() - Method in class org.apache.hadoop.io.compress.DefaultCodec
-
- createDecompressor() - Method in class org.apache.hadoop.io.compress.GzipCodec
-
- createDirectDecompressor() - Method in class org.apache.hadoop.io.compress.DefaultCodec
-
- createDirectDecompressor() - Method in interface org.apache.hadoop.io.compress.DirectDecompressionCodec
-
- createDirectDecompressor() - Method in class org.apache.hadoop.io.compress.GzipCodec
-
- createFileSplit(Path, long, long) - Static method in class org.apache.hadoop.mapred.lib.NLineInputFormat
-
NLineInputFormat uses LineRecordReader, which always reads
(and consumes) at least one character out of its upper split
boundary.
- createFileSplit(Path, long, long) - Static method in class org.apache.hadoop.mapreduce.lib.input.NLineInputFormat
-
NLineInputFormat uses LineRecordReader, which always reads
(and consumes) at least one character out of its upper split
boundary.
- createFileSystem(URI, Configuration) - Static method in class org.apache.hadoop.fs.AbstractFileSystem
-
Create a file system instance for the specified uri using the conf.
- CreateFlag - Enum in org.apache.hadoop.fs
-
CreateFlag specifies the file create semantic.
- createIdentifier() - Method in class org.apache.hadoop.yarn.security.client.BaseClientToAMTokenSecretManager
-
- createImmutable(short) - Static method in class org.apache.hadoop.fs.permission.FsPermission
-
- createInputStream(InputStream) - Method in class org.apache.hadoop.io.compress.BZip2Codec
-
Create a CompressionInputStream that will read from the given input stream and return a stream for uncompressed data.
- createInputStream(InputStream, Decompressor) - Method in class org.apache.hadoop.io.compress.BZip2Codec
-
- createInputStream(InputStream, Decompressor, long, long, SplittableCompressionCodec.READ_MODE) - Method in class org.apache.hadoop.io.compress.BZip2Codec
-
Creates CompressionInputStream to be used to read off uncompressed data
in one of the two reading modes.
- createInputStream(InputStream) - Method in interface org.apache.hadoop.io.compress.CompressionCodec
-
- createInputStream(InputStream, Decompressor) - Method in interface org.apache.hadoop.io.compress.CompressionCodec
-
- createInputStream(InputStream) - Method in class org.apache.hadoop.io.compress.DefaultCodec
-
- createInputStream(InputStream, Decompressor) - Method in class org.apache.hadoop.io.compress.DefaultCodec
-
- createInputStream(InputStream) - Method in class org.apache.hadoop.io.compress.GzipCodec
-
- createInputStream(InputStream, Decompressor) - Method in class org.apache.hadoop.io.compress.GzipCodec
-
- createInputStream(InputStream, Decompressor, long, long, SplittableCompressionCodec.READ_MODE) - Method in interface org.apache.hadoop.io.compress.SplittableCompressionCodec
-
Create a stream as dictated by the readMode.
- createInstance(String) - Static method in class org.apache.hadoop.mapred.lib.aggregate.UserDefinedValueAggregatorDescriptor
-
Create an instance of the given class
- createInstance(String) - Static method in class org.apache.hadoop.mapreduce.lib.aggregate.UserDefinedValueAggregatorDescriptor
-
Create an instance of the given class
- createInternal(Path, EnumSet<CreateFlag>, FsPermission, int, short, long, Progressable, Options.ChecksumOpt, boolean) - Method in class org.apache.hadoop.fs.AbstractFileSystem
-
- createInternal(Path, EnumSet<CreateFlag>, FsPermission, int, short, long, Progressable, Options.ChecksumOpt, boolean) - Method in class org.apache.hadoop.fs.viewfs.ViewFs
-
- createInternalValue() - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
-
Create a value to be used internally for joins.
- createIOException(List<IOException>) - Static method in exception org.apache.hadoop.io.MultipleIOException
-
- createJarWithClassPath(String, Path, Map<String, String>) - Static method in class org.apache.hadoop.fs.FileUtil
-
- createJarWithClassPath(String, Path, Path, Map<String, String>) - Static method in class org.apache.hadoop.fs.FileUtil
-
Create a jar file at the given path, containing a manifest with a classpath
that references all specified entries.
- createJobListCache() - Method in class org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager
-
- createKey(String, byte[], KeyProvider.Options) - Method in class org.apache.hadoop.crypto.key.KeyProvider
-
Create a new key.
- createKey(String, KeyProvider.Options) - Method in class org.apache.hadoop.crypto.key.KeyProvider
-
Create a new key generating the material for it.
- createKey() - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
-
Create a new key value common to all child RRs.
- createKey() - Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
-
Request new key from proxied RR.
- createKey() - Method in class org.apache.hadoop.mapred.KeyValueLineRecordReader
-
- createKey() - Method in class org.apache.hadoop.mapred.lib.CombineFileRecordReader
-
- createKey() - Method in class org.apache.hadoop.mapred.lib.CombineFileRecordReaderWrapper
-
- createKey() - Method in interface org.apache.hadoop.mapred.RecordReader
-
Create an object of the appropriate type to be used as a key.
- createKey() - Method in class org.apache.hadoop.mapred.SequenceFileAsTextRecordReader
-
- createKey() - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
-
- createKey() - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeRecordReader
-
Create a new key common to all child RRs.
- createKey() - Method in class org.apache.hadoop.mapreduce.lib.join.WrappedRecordReader
-
Request new key from proxied RR.
- createLocalTempFile(File, String, boolean) - Static method in class org.apache.hadoop.fs.FileUtil
-
Create a tmp file for a base file.
- createNewFile(Path) - Method in class org.apache.hadoop.fs.FileSystem
-
Creates the given Path as a brand-new zero-length file.
- createNMClient() - Static method in class org.apache.hadoop.yarn.client.api.NMClient
-
Create a new instance of NMClient.
- createNMClient(String) - Static method in class org.apache.hadoop.yarn.client.api.NMClient
-
Create a new instance of NMClient.
- createNMClientAsync(NMClientAsync.CallbackHandler) - Static method in class org.apache.hadoop.yarn.client.api.async.NMClientAsync
-
- createNMProxy(Configuration, Class<T>, UserGroupInformation, YarnRPC, InetSocketAddress) - Static method in class org.apache.hadoop.yarn.client.NMProxy
-
- createNonRecursive(Path, FsPermission, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
-
- createNonRecursive(Path, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.FileSystem
-
Deprecated.
API only for 0.20-append
- createNonRecursive(Path, FsPermission, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.FileSystem
-
Deprecated.
API only for 0.20-append
- createNonRecursive(Path, FsPermission, EnumSet<CreateFlag>, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.FileSystem
-
Deprecated.
API only for 0.20-append
- createNonRecursive(Path, FsPermission, EnumSet<CreateFlag>, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.FilterFileSystem
-
Deprecated.
- createNonRecursive(Path, FsPermission, EnumSet<CreateFlag>, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
-
Deprecated.
- createNonRecursive(Path, FsPermission, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
-
- createNonRecursive(Path, FsPermission, EnumSet<CreateFlag>, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.viewfs.ViewFileSystem
-
- createOutputStream(Path, boolean) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
-
- createOutputStream(OutputStream) - Method in class org.apache.hadoop.io.compress.BZip2Codec
-
- createOutputStream(OutputStream, Compressor) - Method in class org.apache.hadoop.io.compress.BZip2Codec
-
- createOutputStream(OutputStream) - Method in interface org.apache.hadoop.io.compress.CompressionCodec
-
- createOutputStream(OutputStream, Compressor) - Method in interface org.apache.hadoop.io.compress.CompressionCodec
-
- createOutputStream(OutputStream) - Method in class org.apache.hadoop.io.compress.DefaultCodec
-
- createOutputStream(OutputStream, Compressor) - Method in class org.apache.hadoop.io.compress.DefaultCodec
-
- createOutputStream(OutputStream) - Method in class org.apache.hadoop.io.compress.GzipCodec
-
- createOutputStream(OutputStream, Compressor) - Method in class org.apache.hadoop.io.compress.GzipCodec
-
- createPassword(ClientToAMTokenIdentifier) - Method in class org.apache.hadoop.yarn.security.client.BaseClientToAMTokenSecretManager
-
- createPool(JobConf, List<PathFilter>) - Method in class org.apache.hadoop.mapred.lib.CombineFileInputFormat
-
- createPool(JobConf, PathFilter...) - Method in class org.apache.hadoop.mapred.lib.CombineFileInputFormat
-
- createPool(List<PathFilter>) - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat
-
Create a new pool and add the filters to it.
- createPool(PathFilter...) - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat
-
Create a new pool and add the filters to it.
- createProvider(URI, Configuration) - Method in class org.apache.hadoop.crypto.key.KeyProviderFactory
-
- createProvider(URI, Configuration) - Method in class org.apache.hadoop.security.alias.CredentialProviderFactory
-
- createRecord(String) - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
-
Creates a new AbstractMetricsRecord instance with the given recordName.
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapred.lib.CombineFileInputFormat
-
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.InputFormat
-
Create a record reader for a given split.
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
Create a record reader for a given split.
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat
-
This is not implemented yet.
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.CombineSequenceFileInputFormat
-
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.CombineTextInputFormat
-
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.FixedLengthInputFormat
-
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat
-
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.NLineInputFormat
-
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileAsBinaryInputFormat
-
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileAsTextInputFormat
-
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFilter
-
Create a record reader for the given split
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat
-
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.TextInputFormat
-
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.join.ComposableInputFormat
-
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeInputFormat
-
Construct a CompositeRecordReader for the children of this InputFormat
as defined in the init expression.
- createRetriableProxy(Configuration, Class<T>, UserGroupInformation, YarnRPC, InetSocketAddress, RetryPolicy) - Static method in class org.apache.hadoop.yarn.client.ServerProxy
-
- createRetryPolicy(Configuration) - Static method in class org.apache.hadoop.yarn.client.RMProxy
-
Fetch retry policy from Configuration
- createRetryPolicy(Configuration, String, long, String, long) - Static method in class org.apache.hadoop.yarn.client.ServerProxy
-
- createRMProxy(Configuration, Class<T>) - Static method in class org.apache.hadoop.yarn.client.ClientRMProxy
-
Create a proxy to the ResourceManager for the specified protocol.
- createRMProxy(Configuration, Class<T>, RMProxy) - Static method in class org.apache.hadoop.yarn.client.RMProxy
-
Create a proxy for the specified protocol.
- createRMProxy(Configuration, Class<T>, InetSocketAddress) - Static method in class org.apache.hadoop.yarn.client.RMProxy
-
Deprecated.
This method is deprecated and is not used by YARN internally any more.
To create a proxy to the RM, use ClientRMProxy#createRMProxy or
ServerRMProxy#createRMProxy.
Create a proxy to the ResourceManager at the specified address.
- createSnapshot(Path) - Method in class org.apache.hadoop.fs.FileSystem
-
Create a snapshot with a default name.
- createSnapshot(Path, String) - Method in class org.apache.hadoop.fs.FileSystem
-
Create a snapshot
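For illustration only: a minimal sketch of createSnapshot; it assumes an administrator has already made the (hypothetical) directory snapshottable.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class SnapshotSketch {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // Requires something like "hdfs dfsadmin -allowSnapshot /data" to have been run first.
        Path snapshot = fs.createSnapshot(new Path("/data"), "daily-backup");
        System.out.println("snapshot created at " + snapshot);
      }
    }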
- createSnapshot(Path, String) - Method in class org.apache.hadoop.fs.FilterFileSystem
-
- createSocket() - Method in class org.apache.hadoop.net.SocksSocketFactory
-
- createSocket(InetAddress, int) - Method in class org.apache.hadoop.net.SocksSocketFactory
-
- createSocket(InetAddress, int, InetAddress, int) - Method in class org.apache.hadoop.net.SocksSocketFactory
-
- createSocket(String, int) - Method in class org.apache.hadoop.net.SocksSocketFactory
-
- createSocket(String, int, InetAddress, int) - Method in class org.apache.hadoop.net.SocksSocketFactory
-
- createSocket() - Method in class org.apache.hadoop.net.StandardSocketFactory
-
- createSocket(InetAddress, int) - Method in class org.apache.hadoop.net.StandardSocketFactory
-
- createSocket(InetAddress, int, InetAddress, int) - Method in class org.apache.hadoop.net.StandardSocketFactory
-
- createSocket(String, int) - Method in class org.apache.hadoop.net.StandardSocketFactory
-
- createSocket(String, int, InetAddress, int) - Method in class org.apache.hadoop.net.StandardSocketFactory
-
- createSymlink(Path, Path, boolean) - Method in class org.apache.hadoop.fs.AbstractFileSystem
-
- createSymlink(Path, Path, boolean) - Method in class org.apache.hadoop.fs.FileContext
-
Creates a symbolic link to an existing file.
- createSymlink(Path, Path, boolean) - Method in class org.apache.hadoop.fs.FileSystem
-
- createSymlink(Path, Path, boolean) - Method in class org.apache.hadoop.fs.FilterFileSystem
-
- createSymlink(Path, Path, boolean) - Method in class org.apache.hadoop.fs.LocalFileSystem
-
- createSymlink(Path, Path, boolean) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
-
- createSymlink(Path, Path, boolean) - Method in class org.apache.hadoop.fs.viewfs.ViewFs
-
- createSymlink() - Method in class org.apache.hadoop.mapreduce.Job
-
Deprecated.
- createTimelineClient() - Static method in class org.apache.hadoop.yarn.client.api.TimelineClient
-
- createTupleWritable() - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeRecordReader
-
Create a value to be used internally for joins.
- createValue() - Method in class org.apache.hadoop.mapred.join.JoinRecordReader
-
Create an object of the appropriate type to be used as a value.
- createValue() - Method in class org.apache.hadoop.mapred.join.MultiFilterRecordReader
-
Create an object of the appropriate type to be used as a value.
- createValue() - Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
-
Request new value from proxied RR.
- createValue() - Method in class org.apache.hadoop.mapred.KeyValueLineRecordReader
-
- createValue() - Method in class org.apache.hadoop.mapred.lib.CombineFileRecordReader
-
- createValue() - Method in class org.apache.hadoop.mapred.lib.CombineFileRecordReaderWrapper
-
- createValue() - Method in interface org.apache.hadoop.mapred.RecordReader
-
Create an object of the appropriate type to be used as a value.
- createValue() - Method in class org.apache.hadoop.mapred.SequenceFileAsTextRecordReader
-
- createValue() - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
-
- createValue() - Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
Deprecated.
- createValue() - Method in class org.apache.hadoop.mapreduce.lib.join.JoinRecordReader
-
- createValue() - Method in class org.apache.hadoop.mapreduce.lib.join.OverrideRecordReader
-
- createValue() - Method in class org.apache.hadoop.mapreduce.lib.join.WrappedRecordReader
-
- createValueAggregatorJob(String[], Class<?>) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJob
-
Create an Aggregate based map/reduce job.
- createValueAggregatorJob(String[]) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJob
-
Create an Aggregate based map/reduce job.
- createValueAggregatorJob(String[], Class<? extends ValueAggregatorDescriptor>[]) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJob
-
- createValueAggregatorJob(String[], Class<? extends ValueAggregatorDescriptor>[], Class<?>) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJob
-
- createValueAggregatorJob(Configuration, String[]) - Static method in class org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregatorJob
-
Create an Aggregate based map/reduce job.
- createValueAggregatorJob(String[], Class<? extends ValueAggregatorDescriptor>[]) - Static method in class org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregatorJob
-
- createValueAggregatorJobs(String[], Class<? extends ValueAggregatorDescriptor>[]) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJob
-
- createValueAggregatorJobs(String[]) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJob
-
- createValueAggregatorJobs(String[], Class<? extends ValueAggregatorDescriptor>[]) - Static method in class org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregatorJob
-
- createValueAggregatorJobs(String[]) - Static method in class org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregatorJob
-
- createWriter(Configuration, SequenceFile.Writer.Option...) - Static method in class org.apache.hadoop.io.SequenceFile
-
Create a new Writer with the given options.
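For illustration only: a minimal sketch using the options-based createWriter factory; the output path is hypothetical.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;

    public class SequenceFileWriteSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
                SequenceFile.Writer.file(new Path("/user/example/data.seq")),
                SequenceFile.Writer.keyClass(IntWritable.class),
                SequenceFile.Writer.valueClass(Text.class))) {
          writer.append(new IntWritable(1), new Text("first record"));
        }
      }
    }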
- createWriter(FileSystem, Configuration, Path, Class, Class) - Static method in class org.apache.hadoop.io.SequenceFile
-
- createWriter(FileSystem, Configuration, Path, Class, Class, SequenceFile.CompressionType) - Static method in class org.apache.hadoop.io.SequenceFile
-
- createWriter(FileSystem, Configuration, Path, Class, Class, SequenceFile.CompressionType, Progressable) - Static method in class org.apache.hadoop.io.SequenceFile
-
- createWriter(FileSystem, Configuration, Path, Class, Class, SequenceFile.CompressionType, CompressionCodec) - Static method in class org.apache.hadoop.io.SequenceFile
-
- createWriter(FileSystem, Configuration, Path, Class, Class, SequenceFile.CompressionType, CompressionCodec, Progressable, SequenceFile.Metadata) - Static method in class org.apache.hadoop.io.SequenceFile
-
- createWriter(FileSystem, Configuration, Path, Class, Class, int, short, long, SequenceFile.CompressionType, CompressionCodec, Progressable, SequenceFile.Metadata) - Static method in class org.apache.hadoop.io.SequenceFile
-
- createWriter(FileSystem, Configuration, Path, Class, Class, int, short, long, boolean, SequenceFile.CompressionType, CompressionCodec, SequenceFile.Metadata) - Static method in class org.apache.hadoop.io.SequenceFile
-
Deprecated.
- createWriter(FileContext, Configuration, Path, Class, Class, SequenceFile.CompressionType, CompressionCodec, SequenceFile.Metadata, EnumSet<CreateFlag>, Options.CreateOpts...) - Static method in class org.apache.hadoop.io.SequenceFile
-
Construct the preferred type of SequenceFile Writer.
- createWriter(FileSystem, Configuration, Path, Class, Class, SequenceFile.CompressionType, CompressionCodec, Progressable) - Static method in class org.apache.hadoop.io.SequenceFile
-
- createWriter(Configuration, FSDataOutputStream, Class, Class, SequenceFile.CompressionType, CompressionCodec, SequenceFile.Metadata) - Static method in class org.apache.hadoop.io.SequenceFile
-
- createWriter(Configuration, FSDataOutputStream, Class, Class, SequenceFile.CompressionType, CompressionCodec) - Static method in class org.apache.hadoop.io.SequenceFile
-
- createYarnClient() - Static method in class org.apache.hadoop.yarn.client.api.YarnClient
-
Create a new instance of YarnClient.
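For illustration only: a minimal sketch that lists the applications known to the ResourceManager with YarnClient.

    import org.apache.hadoop.yarn.api.records.ApplicationReport;
    import org.apache.hadoop.yarn.client.api.YarnClient;
    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    public class ListAppsSketch {
      public static void main(String[] args) throws Exception {
        YarnClient yarn = YarnClient.createYarnClient();
        yarn.init(new YarnConfiguration());
        yarn.start();
        for (ApplicationReport app : yarn.getApplications()) {
          System.out.println(app.getApplicationId() + " " + app.getYarnApplicationState());
        }
        yarn.stop();
      }
    }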
- createYarnClient() - Method in class org.apache.hadoop.yarn.client.cli.LogsCLI
-
- CREDENTIAL_PROVIDER_PATH - Static variable in class org.apache.hadoop.security.alias.CredentialProviderFactory
-
- CredentialProvider - Class in org.apache.hadoop.security.alias
-
A provider of credentials or password for Hadoop applications.
- CredentialProvider() - Constructor for class org.apache.hadoop.security.alias.CredentialProvider
-
- CredentialProviderFactory - Class in org.apache.hadoop.security.alias
-
A factory to create a list of CredentialProvider based on the path given in a
Configuration.
- CredentialProviderFactory() - Constructor for class org.apache.hadoop.security.alias.CredentialProviderFactory
-
- CS_CONFIGURATION_FILE - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- CSTRING_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
-
Deprecated.
- CsvRecordInput - Class in org.apache.hadoop.record
-
- CsvRecordInput(InputStream) - Constructor for class org.apache.hadoop.record.CsvRecordInput
-
Deprecated.
Creates a new instance of CsvRecordInput
- CsvRecordOutput - Class in org.apache.hadoop.record
-
- CsvRecordOutput(OutputStream) - Constructor for class org.apache.hadoop.record.CsvRecordOutput
-
Deprecated.
Creates a new instance of CsvRecordOutput
- CUR_DIR - Static variable in class org.apache.hadoop.fs.Path
-
- curChar - Variable in class org.apache.hadoop.record.compiler.generated.RccTokenManager
-
Deprecated.
- curReader - Variable in class org.apache.hadoop.mapred.lib.CombineFileRecordReader
-
- curReader - Variable in class org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader
-
- currentConfig() - Method in interface org.apache.hadoop.metrics2.MetricsSystemMXBean
-
- currentToken - Variable in exception org.apache.hadoop.record.compiler.generated.ParseException
-
Deprecated.
This is the last token that has been consumed successfully.
- currentUser() - Static method in class org.apache.hadoop.registry.client.binding.RegistryUtils
-
Get the current user path formatted for the registry
- DATA_FIELD_SEPERATOR - Static variable in class org.apache.hadoop.mapreduce.lib.fieldsel.FieldSelectionHelper
-
- DATA_FILE_NAME - Static variable in class org.apache.hadoop.io.MapFile
-
The name of the data file.
- DataDrivenDBInputFormat<T extends DBWritable> - Class in org.apache.hadoop.mapreduce.lib.db
-
An InputFormat that reads input data from an SQL table.
- DataDrivenDBInputFormat() - Constructor for class org.apache.hadoop.mapreduce.lib.db.DataDrivenDBInputFormat
-
- DataDrivenDBRecordReader<T extends DBWritable> - Class in org.apache.hadoop.mapreduce.lib.db
-
A RecordReader that reads records from a SQL table,
using data-driven WHERE clause splits.
- DataDrivenDBRecordReader(DBInputFormat.DBInputSplit, Class<T>, Configuration, Connection, DBConfiguration, String, String[], String, String) - Constructor for class org.apache.hadoop.mapreduce.lib.db.DataDrivenDBRecordReader
-
- datagramSocket - Variable in class org.apache.hadoop.metrics.ganglia.GangliaContext
-
- DataOutputOutputStream - Class in org.apache.hadoop.io
-
OutputStream implementation that wraps a DataOutput.
- dataPattern - Static variable in class org.apache.hadoop.mapreduce.tools.CLI
-
- DateSplitter - Class in org.apache.hadoop.mapreduce.lib.db
-
Implement DBSplitter over date/time values.
- DateSplitter() - Constructor for class org.apache.hadoop.mapreduce.lib.db.DateSplitter
-
- dateToString(Date) - Method in class org.apache.hadoop.mapreduce.lib.db.DateSplitter
-
Given a Date 'd', format it as a string for use in a SQL date
comparison operation.
- dateToString(Date) - Method in class org.apache.hadoop.mapreduce.lib.db.OracleDateSplitter
-
- dbConf - Variable in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
- DBConfiguration - Class in org.apache.hadoop.mapred.lib.db
-
- DBConfiguration - Class in org.apache.hadoop.mapreduce.lib.db
-
A container for configuration property names for jobs with DB input/output.
- DBConfiguration(Configuration) - Constructor for class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- DBInputFormat<T extends DBWritable> - Class in org.apache.hadoop.mapred.lib.db
-
- DBInputFormat() - Constructor for class org.apache.hadoop.mapred.lib.db.DBInputFormat
-
- DBInputFormat<T extends DBWritable> - Class in org.apache.hadoop.mapreduce.lib.db
-
An InputFormat that reads input data from an SQL table.
- DBInputFormat() - Constructor for class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
- DBOutputFormat<K extends DBWritable,V> - Class in org.apache.hadoop.mapred.lib.db
-
- DBOutputFormat() - Constructor for class org.apache.hadoop.mapred.lib.db.DBOutputFormat
-
- DBOutputFormat<K extends DBWritable,V> - Class in org.apache.hadoop.mapreduce.lib.db
-
An OutputFormat that sends the reduce output to a SQL table.
- DBOutputFormat() - Constructor for class org.apache.hadoop.mapreduce.lib.db.DBOutputFormat
-
- dbProductName - Variable in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
- DBRecordReader<T extends DBWritable> - Class in org.apache.hadoop.mapreduce.lib.db
-
A RecordReader that reads records from a SQL table.
- DBRecordReader(DBInputFormat.DBInputSplit, Class<T>, Configuration, Connection, DBConfiguration, String, String[], String) - Constructor for class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
- DBSplitter - Interface in org.apache.hadoop.mapreduce.lib.db
-
DBSplitter will generate DBInputSplits to use with DataDrivenDBInputFormat.
- DBWritable - Interface in org.apache.hadoop.mapred.lib.db
-
- DBWritable - Interface in org.apache.hadoop.mapreduce.lib.db
-
Objects that are read from/written to a database should implement DBWritable.
- DEBUG_NM_DELETE_DELAY_SEC - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
Delay before deleting resource to ease debugging of NM issues
- debugStream - Variable in class org.apache.hadoop.record.compiler.generated.RccTokenManager
-
Deprecated.
- decode(byte[]) - Static method in class org.apache.hadoop.io.Text
-
Converts the provided byte array to a String using the
UTF-8 encoding.
- decode(byte[], int, int) - Static method in class org.apache.hadoop.io.Text
-
- decode(byte[], int, int, boolean) - Static method in class org.apache.hadoop.io.Text
-
Converts the provided byte array to a String using the
UTF-8 encoding.
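For illustration only: a minimal sketch of round-tripping a String through Text.encode and Text.decode (UTF-8 in both directions).

    import java.nio.ByteBuffer;
    import org.apache.hadoop.io.Text;

    public class TextCodecSketch {
      public static void main(String[] args) throws Exception {
        ByteBuffer encoded = Text.encode("héllo");                          // String -> UTF-8 bytes
        String decoded = Text.decode(encoded.array(), 0, encoded.limit());  // bytes -> String
        System.out.println(decoded);                                        // prints "héllo"
      }
    }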
- decodeValue(String) - Static method in enum org.apache.hadoop.fs.XAttrCodec
-
Decode string representation of a value and check whether it's
encoded.
- decodeVIntSize(byte) - Static method in class org.apache.hadoop.io.WritableUtils
-
Parse the first byte of a vint/vlong to determine the number of bytes
- decompress(byte[], int, int) - Method in class org.apache.hadoop.io.compress.BlockDecompressorStream
-
- decompress(byte[], int, int) - Method in interface org.apache.hadoop.io.compress.Decompressor
-
Fills specified buffer with uncompressed data.
- decompress(byte[], int, int) - Method in class org.apache.hadoop.io.compress.DecompressorStream
-
- decompress(ByteBuffer, ByteBuffer) - Method in interface org.apache.hadoop.io.compress.DirectDecompressor
-
- Decompressor - Interface in org.apache.hadoop.io.compress
-
Specification of a stream-based 'de-compressor' which can be plugged into a CompressionInputStream to decompress data.
- decompressor - Variable in class org.apache.hadoop.io.compress.DecompressorStream
-
- DecompressorStream - Class in org.apache.hadoop.io.compress
-
- DecompressorStream(InputStream, Decompressor, int) - Constructor for class org.apache.hadoop.io.compress.DecompressorStream
-
- DecompressorStream(InputStream, Decompressor) - Constructor for class org.apache.hadoop.io.compress.DecompressorStream
-
- DecompressorStream(InputStream) - Constructor for class org.apache.hadoop.io.compress.DecompressorStream
-
Allow derived classes to directly set the underlying stream.
- decr() - Method in class org.apache.hadoop.metrics2.lib.MutableGauge
-
Decrement the value of the metric by 1
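For illustration only: a minimal sketch of incr()/decr() on a mutable gauge registered in a stand-alone MetricsRegistry; the metric names are made up and the registry is not wired to a metrics system here.

    import org.apache.hadoop.metrics2.lib.MetricsRegistry;
    import org.apache.hadoop.metrics2.lib.MutableGaugeInt;

    public class GaugeSketch {
      public static void main(String[] args) {
        MetricsRegistry registry = new MetricsRegistry("example");
        MutableGaugeInt queued = registry.newGauge("QueuedRequests", "Requests waiting in the queue", 0);
        queued.incr();                        // a request arrived
        queued.decr();                        // a request was served
        System.out.println(queued.value());  // 0
      }
    }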
- decr() - Method in class org.apache.hadoop.metrics2.lib.MutableGaugeInt
-
- decr(int) - Method in class org.apache.hadoop.metrics2.lib.MutableGaugeInt
-
Decrement the value of the metric by delta.
- decr() - Method in class org.apache.hadoop.metrics2.lib.MutableGaugeLong
-
- decr(long) - Method in class org.apache.hadoop.metrics2.lib.MutableGaugeLong
-
Decrement the value of the metric by delta.
- DEFAULT - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
-
Deprecated.
- DEFAULT_APPLICATION_HISTORY_ENABLED - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_APPLICATION_HISTORY_MAX_APPS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_APPLICATION_HISTORY_SAVE_NON_AM_CONTAINER_META_INFO - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_APPLICATION_NAME - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
Default application name
- DEFAULT_APPLICATION_TYPE - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
Default application type
- DEFAULT_AUTO_FAILOVER_EMBEDDED - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_AUTO_FAILOVER_ENABLED - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_AUTO_FAILOVER_ZK_BASE_PATH - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_BITLENGTH - Static variable in class org.apache.hadoop.crypto.key.KeyProvider
-
- DEFAULT_BITLENGTH_NAME - Static variable in class org.apache.hadoop.crypto.key.KeyProvider
-
- DEFAULT_BLOCK_SIZE - Static variable in class org.apache.hadoop.fs.ftp.FTPFileSystem
-
- DEFAULT_BUFFER_SIZE - Static variable in class org.apache.hadoop.fs.ftp.FTPFileSystem
-
- DEFAULT_CIPHER - Static variable in class org.apache.hadoop.crypto.key.KeyProvider
-
- DEFAULT_CIPHER_NAME - Static variable in class org.apache.hadoop.crypto.key.KeyProvider
-
- DEFAULT_CLIENT_FAILOVER_PROXY_PROVIDER - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_CLIENT_FAILOVER_RETRIES - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_CLIENT_FAILOVER_RETRIES_ON_SOCKET_TIMEOUTS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_CLIENT_NM_CONNECT_MAX_WAIT_MS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_CLIENT_NM_CONNECT_RETRY_INTERVAL_MS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_CONTAINER_TEMP_DIR - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
Container temp directory
- DEFAULT_DISPATCHER_DRAIN_EVENTS_TIMEOUT - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_DISPATCHER_EXIT_ON_ERROR - Static variable in interface org.apache.hadoop.yarn.event.Dispatcher
-
- DEFAULT_EXPIRE - Static variable in class org.apache.hadoop.yarn.util.AbstractLivelinessMonitor
-
- DEFAULT_FS - Static variable in class org.apache.hadoop.fs.FileSystem
-
- DEFAULT_FS_APPLICATION_HISTORY_STORE_COMPRESSION_TYPE - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_FS_BASED_RM_CONF_STORE - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_FS_NODE_LABELS_STORE_RETRY_POLICY_SPEC - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_FS_RM_STATE_STORE_RETRY_POLICY_SPEC - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_IPC_CLIENT_FACTORY_CLASS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_IPC_RECORD_FACTORY_CLASS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_IPC_RPC_IMPL - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_IPC_SERVER_FACTORY_CLASS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_LIST_STATUS_NUM_THREADS - Static variable in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-
- DEFAULT_LOG_AGGREGATION_ENABLED - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_LOG_AGGREGATION_RETAIN_CHECK_INTERVAL_SECONDS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_LOG_AGGREGATION_RETAIN_SECONDS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_LOG_LEVEL - Static variable in class org.apache.hadoop.mapred.JobConf
-
Default logging level for map/reduce tasks.
- DEFAULT_MAPRED_TASK_JAVA_OPTS - Static variable in class org.apache.hadoop.mapred.JobConf
-
- DEFAULT_MAPREDUCE_RECOVER_JOB - Static variable in class org.apache.hadoop.mapred.JobConf
-
Deprecated.
- DEFAULT_MAX_LEN - Static variable in class org.apache.hadoop.io.Text
-
- DEFAULT_NM_ADDRESS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_NM_ADMIN_USER_ENV - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_NM_CLIENT_ASYNC_THREAD_POOL_MAX_SIZE - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_NM_CLIENT_MAX_NM_PROXIES - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_NM_CONTAINER_EXECUTOR_SCHED_PRIORITY - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_NM_CONTAINER_MGR_THREAD_COUNT - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_NM_CONTAINER_MON_INTERVAL_MS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_NM_DELETE_THREAD_COUNT - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_NM_DISK_HEALTH_CHECK_INTERVAL_MS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
By default, disks' health is checked every 2 minutes.
- DEFAULT_NM_ENV_WHITELIST - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_NM_HEALTH_CHECK_INTERVAL_MS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_NM_HEALTH_CHECK_SCRIPT_TIMEOUT_MS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_NM_LINUX_CONTAINER_CGROUPS_DELETE_TIMEOUT - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_NM_LINUX_CONTAINER_CGROUPS_STRICT_RESOURCE_USAGE - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_NM_LOCAL_CACHE_MAX_FILES_PER_DIRECTORY - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_NM_LOCAL_DIRS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_NM_LOCALIZER_ADDRESS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_NM_LOCALIZER_CACHE_CLEANUP_INTERVAL_MS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_NM_LOCALIZER_CACHE_TARGET_SIZE_MB - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_NM_LOCALIZER_CLIENT_THREAD_COUNT - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_NM_LOCALIZER_FETCH_THREAD_COUNT - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_NM_LOCALIZER_PORT - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_NM_LOG_AGG_COMPRESSION_TYPE - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_NM_LOG_AGGREGATION_ROLL_MONITORING_INTERVAL_SECONDS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_NM_LOG_DELETE_THREAD_COUNT - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_NM_LOG_DIRS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_NM_LOG_RETAIN_SECONDS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_NM_MAX_PER_DISK_UTILIZATION_PERCENTAGE - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
By default, 90% of the disk can be used before it is marked as offline.
- DEFAULT_NM_MIN_HEALTHY_DISKS_FRACTION - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
By default, at least 25% of the disks must be healthy for the node to be considered healthy in terms of disks.
- DEFAULT_NM_MIN_PER_DISK_FREE_SPACE_MB - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
By default, all of the disk can be used before it is marked as offline.
- DEFAULT_NM_NONSECURE_MODE_LIMIT_USERS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_NM_NONSECURE_MODE_LOCAL_USER - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_NM_NONSECURE_MODE_USER_PATTERN - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_NM_PMEM_CHECK_ENABLED - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_NM_PMEM_MB - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_NM_PORT - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_NM_PROCESS_KILL_WAIT_MS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_NM_RECOVERY_COMPACTION_INTERVAL_SECS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_NM_RECOVERY_ENABLED - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_NM_REMOTE_APP_LOG_DIR - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_NM_REMOTE_APP_LOG_DIR_SUFFIX - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_NM_RESOURCE_PERCENTAGE_PHYSICAL_CPU_LIMIT - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_NM_RESOURCEMANAGER_MINIMUM_VERSION - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_NM_SLEEP_DELAY_BEFORE_SIGKILL_MS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_NM_USER_HOME_DIR - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_NM_VCORES - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_NM_VMEM_CHECK_ENABLED - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_NM_VMEM_PMEM_RATIO - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_NM_WEBAPP_ADDRESS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_NM_WEBAPP_HTTPS_ADDRESS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_NM_WEBAPP_HTTPS_PORT - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_NM_WEBAPP_PORT - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_PATH - Static variable in class org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner
-
- DEFAULT_PERM - Static variable in class org.apache.hadoop.fs.FileContext
-
Default permission for directory and symlink. In previous versions, this default permission was also used to create files, so files created end up with ugo+x permission.
- DEFAULT_PROCFS_USE_SMAPS_BASED_RSS_ENABLED - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_PROXY_ADDRESS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_PROXY_PORT - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_QUEUE_NAME - Static variable in class org.apache.hadoop.mapred.JobConf
-
Name of the queue to which jobs will be submitted, if no queue
name is mentioned.
- DEFAULT_QUEUE_NAME - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
Default queue name
- DEFAULT_REGISTRY_CLIENT_JAAS_CONTEXT - Static variable in interface org.apache.hadoop.registry.client.api.RegistryConstants
-
default client-side registry JAAS context: "Client"
- DEFAULT_REGISTRY_ENABLED - Static variable in interface org.apache.hadoop.registry.client.api.RegistryConstants
-
Default value for enabling the registry in the RM: false
- DEFAULT_REGISTRY_SECURE - Static variable in interface org.apache.hadoop.registry.client.api.RegistryConstants
-
Default registry security policy: false.
- DEFAULT_REGISTRY_SYSTEM_ACCOUNTS - Static variable in interface org.apache.hadoop.registry.client.api.RegistryConstants
-
Default system accounts given global access to the registry: "sasl:yarn@, sasl:mapred@, sasl:hdfs@, sasl:hadoop@".
- DEFAULT_REGISTRY_USER_ACCOUNTS - Static variable in interface org.apache.hadoop.registry.client.api.RegistryConstants
-
Default system acls: "".
- DEFAULT_REGISTRY_ZK_QUORUM - Static variable in interface org.apache.hadoop.registry.client.api.RegistryConstants
-
The default zookeeper quorum binding for the registry: "localhost:2181"
- DEFAULT_RESOURCEMANAGER_CONNECT_MAX_WAIT_MS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RESOURCEMANAGER_CONNECT_RETRY_INTERVAL_MS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_ADDRESS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_ADMIN_ADDRESS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_ADMIN_CLIENT_THREAD_COUNT - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_ADMIN_PORT - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_AM_EXPIRY_INTERVAL_MS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_AM_MAX_ATTEMPTS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_AMRM_TOKEN_MASTER_KEY_ROLLING_INTERVAL_SECS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_CLIENT_THREAD_COUNT - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_CONFIGURATION_PROVIDER_CLASS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_CONTAINER_ALLOC_EXPIRY_INTERVAL_MS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_CONTAINER_TOKEN_MASTER_KEY_ROLLING_INTERVAL_SECS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_DELAYED_DELEGATION_TOKEN_REMOVAL_INTERVAL_MS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_DELEGATION_TOKEN_RENEWER_THREAD_COUNT - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_HA_ENABLED - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_HISTORY_WRITER_MULTI_THREADED_DISPATCHER_POOL_SIZE - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_MAX_COMPLETED_APPLICATIONS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_METRICS_RUNTIME_BUCKETS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
Default sizes of the runtime metric buckets in minutes.
- DEFAULT_RM_NM_EXPIRY_INTERVAL_MS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_NM_HEARTBEAT_INTERVAL_MS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_NMTOKEN_MASTER_KEY_ROLLING_INTERVAL_SECS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_NODEMANAGER_MINIMUM_VERSION - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_NODES_EXCLUDE_FILE_PATH - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_NODES_INCLUDE_FILE_PATH - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_PORT - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_PROXY_USER_PRIVILEGES_ENABLED - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_RECOVERY_ENABLED - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_RESERVATION_SYSTEM_ENABLE - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_RESERVATION_SYSTEM_PLAN_FOLLOWER_TIME_STEP - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_RESOURCE_TRACKER_ADDRESS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_RESOURCE_TRACKER_CLIENT_THREAD_COUNT - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_RESOURCE_TRACKER_PORT - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_SCHEDULER - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_SCHEDULER_ADDRESS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_SCHEDULER_CLIENT_THREAD_COUNT - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_SCHEDULER_ENABLE_MONITORS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_SCHEDULER_MAXIMUM_ALLOCATION_MB - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_SCHEDULER_MAXIMUM_ALLOCATION_VCORES - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_SCHEDULER_MINIMUM_ALLOCATION_MB - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_SCHEDULER_MINIMUM_ALLOCATION_VCORES - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_SCHEDULER_PORT - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_SCHEDULER_USE_PORT_FOR_NODE_NAME - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_STATE_STORE_MAX_COMPLETED_APPLICATIONS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_SYSTEM_METRICS_PUBLISHER_DISPATCHER_POOL_SIZE - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_SYSTEM_METRICS_PUBLISHER_ENABLED - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_WEBAPP_ADDRESS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_WEBAPP_DELEGATION_TOKEN_AUTH_FILTER - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_WEBAPP_HTTPS_ADDRESS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_WEBAPP_HTTPS_PORT - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_WEBAPP_PORT - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_WEBAPP_UI_ACTIONS_ENABLED - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_WORK_PRESERVING_RECOVERY_ENABLED - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_WORK_PRESERVING_RECOVERY_SCHEDULING_WAIT_MS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_ZK_ACL - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_ZK_RETRY_INTERVAL_MS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_RM_ZK_TIMEOUT_MS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_SUBMIT_REPLICATION - Static variable in class org.apache.hadoop.mapreduce.Job
-
- DEFAULT_TIMELINE_SERVICE_ADDRESS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_TIMELINE_SERVICE_CLIENT_MAX_RETRIES - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_TIMELINE_SERVICE_CLIENT_RETRY_INTERVAL_MS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_TIMELINE_SERVICE_CLIENT_THREAD_COUNT - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_TIMELINE_SERVICE_ENABLED - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_TIMELINE_SERVICE_LEVELDB_READ_CACHE_SIZE - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_TIMELINE_SERVICE_LEVELDB_START_TIME_READ_CACHE_SIZE - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_TIMELINE_SERVICE_LEVELDB_START_TIME_WRITE_CACHE_SIZE - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_TIMELINE_SERVICE_LEVELDB_TTL_INTERVAL_MS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_TIMELINE_SERVICE_PORT - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_TIMELINE_SERVICE_TTL_MS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_TIMELINE_SERVICE_WEBAPP_ADDRESS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_TIMELINE_SERVICE_WEBAPP_HTTPS_ADDRESS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_TIMELINE_SERVICE_WEBAPP_HTTPS_PORT - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_TIMELINE_SERVICE_WEBAPP_PORT - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_UMASK - Static variable in class org.apache.hadoop.fs.permission.FsPermission
-
- DEFAULT_YARN_ACL_ENABLE - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_YARN_ADMIN_ACL - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_YARN_APP_ACL - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
ACL used in case none is found.
- DEFAULT_YARN_APPLICATION_CLASSPATH - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
Default platform-specific CLASSPATH for YARN applications.
- DEFAULT_YARN_CLIENT_APPLICATION_CLIENT_PROTOCOL_POLL_INTERVAL_MS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_YARN_CLIENT_APPLICATION_CLIENT_PROTOCOL_POLL_TIMEOUT_MS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_YARN_CROSS_PLATFORM_APPLICATION_CLASSPATH - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
Default platform-agnostic CLASSPATH for YARN applications.
- DEFAULT_YARN_FAIL_FAST - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_YARN_MINICLUSTER_CONTROL_RESOURCE_MONITORING - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_YARN_MINICLUSTER_FIXED_PORTS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
Default is false to be able to run tests concurrently without port
conflicts.
- DEFAULT_YARN_MINICLUSTER_USE_RPC - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_ZK_CONNECTION_TIMEOUT - Static variable in interface org.apache.hadoop.registry.client.api.RegistryConstants
-
The default ZK connection timeout: 15000.
- DEFAULT_ZK_REGISTRY_ROOT - Static variable in interface org.apache.hadoop.registry.client.api.RegistryConstants
-
Default root of the yarn registry: "/registry"
- DEFAULT_ZK_RETRY_CEILING - Static variable in interface org.apache.hadoop.registry.client.api.RegistryConstants
-
Default limit on retries: 60000.
- DEFAULT_ZK_RETRY_INTERVAL - Static variable in interface org.apache.hadoop.registry.client.api.RegistryConstants
-
The default interval between connection retries: 1000.
- DEFAULT_ZK_RETRY_TIMES - Static variable in interface org.apache.hadoop.registry.client.api.RegistryConstants
-
The default # of times to retry a ZK connection: 5.
- DEFAULT_ZK_RM_NUM_RETRIES - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_ZK_RM_STATE_STORE_PARENT_PATH - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DEFAULT_ZK_SESSION_TIMEOUT - Static variable in interface org.apache.hadoop.registry.client.api.RegistryConstants
-
The default ZK session timeout: 60000.
- DefaultCodec - Class in org.apache.hadoop.io.compress
-
- DefaultCodec() - Constructor for class org.apache.hadoop.io.compress.DefaultCodec
-
- DefaultImpersonationProvider - Class in org.apache.hadoop.security.authorize
-
- DefaultImpersonationProvider() - Constructor for class org.apache.hadoop.security.authorize.DefaultImpersonationProvider
-
- DefaultMetricsSystem - Enum in org.apache.hadoop.metrics2.lib
-
The default metrics system singleton
- DefaultStringifier<T> - Class in org.apache.hadoop.io
-
DefaultStringifier is the default implementation of the
Stringifier
interface which stringifies the objects using base64 encoding of the
serialized version of the objects.
- DefaultStringifier(Configuration, Class<T>) - Constructor for class org.apache.hadoop.io.DefaultStringifier
-
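A minimal sketch of round-tripping a Writable through a Configuration with DefaultStringifier; the key name "example.greeting" is an arbitrary placeholder chosen for this example.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.DefaultStringifier;
    import org.apache.hadoop.io.Text;

    public class DefaultStringifierExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Serialize the object and store it (base64-encoded) under a key of our choosing.
        DefaultStringifier.store(conf, new Text("hello"), "example.greeting");
        // Later, e.g. inside a task, recover the object from the configuration.
        Text restored = DefaultStringifier.load(conf, "example.greeting", Text.class);
        System.out.println(restored);
      }
    }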
- define(Class, WritableComparator) - Static method in class org.apache.hadoop.io.WritableComparator
-
- define(Class, RecordComparator) - Static method in class org.apache.hadoop.record.RecordComparator
-
Deprecated.
Register an optimized comparator for a
Record
implementation.
- DELEGATION_KEY_UPDATE_INTERVAL_DEFAULT - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DELEGATION_KEY_UPDATE_INTERVAL_KEY - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DELEGATION_PARAM - Static variable in class org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator
-
- DELEGATION_TOKEN_HEADER - Static variable in class org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator
-
- DELEGATION_TOKEN_JSON - Static variable in class org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator
-
- DELEGATION_TOKEN_MAX_LIFETIME_DEFAULT - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DELEGATION_TOKEN_MAX_LIFETIME_KEY - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DELEGATION_TOKEN_RENEW_INTERVAL_DEFAULT - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DELEGATION_TOKEN_RENEW_INTERVAL_KEY - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DELEGATION_TOKEN_URL_STRING_JSON - Static variable in class org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator
-
- DelegationTokenAuthenticatedURL - Class in org.apache.hadoop.security.token.delegation.web
-
The DelegationTokenAuthenticatedURL is an AuthenticatedURL sub-class with built-in Hadoop Delegation Token functionality.
- DelegationTokenAuthenticatedURL() - Constructor for class org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL
-
Creates a DelegationTokenAuthenticatedURL.
- DelegationTokenAuthenticatedURL(DelegationTokenAuthenticator) - Constructor for class org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL
-
Creates a DelegationTokenAuthenticatedURL.
- DelegationTokenAuthenticatedURL(ConnectionConfigurator) - Constructor for class org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL
-
- DelegationTokenAuthenticatedURL(DelegationTokenAuthenticator, ConnectionConfigurator) - Constructor for class org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL
-
Creates a DelegationTokenAuthenticatedURL.
- DelegationTokenAuthenticatedURL.Token - Class in org.apache.hadoop.security.token.delegation.web
-
Client side authentication token that handles Delegation Tokens.
- DelegationTokenAuthenticatedURL.Token() - Constructor for class org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.Token
-
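A hedged sketch of opening an authenticated connection with DelegationTokenAuthenticatedURL; the URL below is a placeholder and the target endpoint must actually support Hadoop delegation-token authentication.

    import java.net.HttpURLConnection;
    import java.net.URL;
    import org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL;

    public class DelegationTokenUrlExample {
      public static void main(String[] args) throws Exception {
        // The Token caches the authentication state (and any delegation token) across calls.
        DelegationTokenAuthenticatedURL.Token token =
            new DelegationTokenAuthenticatedURL.Token();
        DelegationTokenAuthenticatedURL authUrl = new DelegationTokenAuthenticatedURL();
        URL url = new URL("http://rm.example.com:8088/ws/v1/cluster/info"); // placeholder
        HttpURLConnection conn = authUrl.openConnection(url, token);
        System.out.println("HTTP " + conn.getResponseCode());
        conn.disconnect();
      }
    }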
- DelegationTokenAuthenticator - Class in org.apache.hadoop.security.token.delegation.web
-
Authenticator wrapper that enhances an Authenticator with Delegation Token support.
- DelegationTokenAuthenticator(Authenticator) - Constructor for class org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator
-
- delete(Path, boolean) - Method in class org.apache.hadoop.fs.AbstractFileSystem
-
- delete(Path, boolean) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
-
Implement the delete(Path, boolean) in checksum
file system.
- delete(Path, boolean) - Method in class org.apache.hadoop.fs.FileContext
-
Delete a file.
- delete(Path) - Method in class org.apache.hadoop.fs.FileSystem
-
- delete(Path, boolean) - Method in class org.apache.hadoop.fs.FileSystem
-
Delete a file.
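A small sketch of the exists/delete pattern on FileSystem; the path "/tmp/example-output" is a placeholder.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class DeleteExample {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path dir = new Path("/tmp/example-output"); // placeholder path
        if (fs.exists(dir)) {
          // The boolean requests recursive deletion, required for non-empty directories.
          boolean deleted = fs.delete(dir, true);
          System.out.println("deleted: " + deleted);
        }
      }
    }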
- delete(Path, boolean) - Method in class org.apache.hadoop.fs.FilterFileSystem
-
Delete a file
- delete(Path, boolean) - Method in class org.apache.hadoop.fs.ftp.FTPFileSystem
-
- delete(Path, boolean) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
-
Delete the given path to a file or directory.
- delete(Path, boolean) - Method in class org.apache.hadoop.fs.s3.S3FileSystem
-
- delete(Path, boolean) - Method in class org.apache.hadoop.fs.s3native.NativeS3FileSystem
-
- delete(Path, boolean) - Method in class org.apache.hadoop.fs.viewfs.ViewFileSystem
-
- delete(Path) - Method in class org.apache.hadoop.fs.viewfs.ViewFileSystem
-
- delete(Path, boolean) - Method in class org.apache.hadoop.fs.viewfs.ViewFs
-
- delete(FileSystem, String) - Static method in class org.apache.hadoop.io.BloomMapFile
-
- delete(FileSystem, String) - Static method in class org.apache.hadoop.io.MapFile
-
Deletes the named map file.
- delete(String, boolean) - Method in interface org.apache.hadoop.registry.client.api.RegistryOperations
-
Delete a path.
- delete(String, boolean) - Method in class org.apache.hadoop.registry.client.impl.zk.RegistryOperationsService
-
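A hedged sketch of a recursive registry delete, assuming a started RegistryOperations instance is supplied by the caller.

    import java.io.IOException;
    import org.apache.hadoop.registry.client.api.RegistryOperations;

    public class RegistryDeleteSketch {
      // 'registry' is assumed to be an already-initialized and started instance.
      static void removeIfPresent(RegistryOperations registry, String path) throws IOException {
        if (registry.exists(path)) {
          registry.delete(path, true); // 'true' removes the path and all of its children
        }
      }
    }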
- delete(Key) - Method in class org.apache.hadoop.util.bloom.CountingBloomFilter
-
Removes a specified key from this counting Bloom filter.
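A short sketch of adding and then deleting a key from a CountingBloomFilter; the vector size, hash count, and key bytes are illustrative values.

    import org.apache.hadoop.util.bloom.CountingBloomFilter;
    import org.apache.hadoop.util.bloom.Key;
    import org.apache.hadoop.util.hash.Hash;

    public class CountingBloomExample {
      public static void main(String[] args) {
        CountingBloomFilter filter = new CountingBloomFilter(1024, 4, Hash.MURMUR_HASH);
        Key key = new Key("user-42".getBytes()); // illustrative key
        filter.add(key);
        System.out.println(filter.membershipTest(key)); // true
        filter.delete(key); // counting filters, unlike plain Bloom filters, support removal
        System.out.println(filter.membershipTest(key)); // false (barring counter saturation)
      }
    }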
- deleteCheckpoint() - Method in class org.apache.hadoop.fs.TrashPolicy
-
Delete old trash checkpoint(s).
- deleteCredentialEntry(String) - Method in class org.apache.hadoop.security.alias.CredentialProvider
-
Delete the given credential.
- deleteDir(FileStatus) - Method in class org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager
-
- deleteKey(String) - Method in class org.apache.hadoop.crypto.key.KeyProvider
-
Delete the given key.
- deleteLocalFiles() - Method in class org.apache.hadoop.mapred.JobConf
-
Deprecated.
- deleteLocalFiles(String) - Method in class org.apache.hadoop.mapred.JobConf
-
- deleteOnExit(Path) - Method in class org.apache.hadoop.fs.FileContext
-
Mark a path to be deleted on JVM shutdown.
- deleteOnExit(Path) - Method in class org.apache.hadoop.fs.FileSystem
-
Mark a path to be deleted when FileSystem is closed.
- deleteReservation(ReservationDeleteRequest) - Method in interface org.apache.hadoop.yarn.api.ApplicationClientProtocol
-
The interface used by clients to remove an existing Reservation.
- deleteReservation(ReservationDeleteRequest) - Method in class org.apache.hadoop.yarn.client.api.YarnClient
-
The interface used by clients to remove an existing Reservation.
- deleteSnapshot(Path, String) - Method in class org.apache.hadoop.fs.FileSystem
-
Delete a snapshot of a directory
- deleteSnapshot(Path, String) - Method in class org.apache.hadoop.fs.FilterFileSystem
-
- deletionInterval - Variable in class org.apache.hadoop.fs.TrashPolicy
-
- DEPENDENT_FAILED - Static variable in class org.apache.hadoop.mapred.jobcontrol.Job
-
- DEPRECATED_UMASK_LABEL - Static variable in class org.apache.hadoop.fs.permission.FsPermission
-
Deprecated umask property label; this key, and the code in getUMask that accommodates it, may be removed in version 0.23.
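A small sketch of resolving the configured umask and applying it to the default permissions; the output depends entirely on the local configuration.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.permission.FsPermission;

    public class UmaskExample {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // getUMask consults the current umask key and falls back to the deprecated label above.
        FsPermission umask = FsPermission.getUMask(conf);
        FsPermission effective = FsPermission.getDefault().applyUMask(umask);
        System.out.println("umask=" + umask + " effective=" + effective);
      }
    }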
- depth() - Method in class org.apache.hadoop.fs.Path
-
Return the number of elements in this path.
- description() - Method in class org.apache.hadoop.metrics2.AbstractMetric
-
- description() - Method in interface org.apache.hadoop.metrics2.MetricsInfo
-
- description() - Method in interface org.apache.hadoop.metrics2.MetricsRecord
-
- description() - Method in class org.apache.hadoop.metrics2.MetricsTag
-
- description - Variable in class org.apache.hadoop.registry.client.impl.zk.BindingInformation
-
Any information that may be useful for diagnostics
- description - Variable in class org.apache.hadoop.registry.client.types.ServiceRecord
-
Description string
- DESCRIPTOR - Static variable in class org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregatorJobBase
-
- DESCRIPTOR_NUM - Static variable in class org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregatorJobBase
-
- deserialize(RecordInput, String) - Method in class org.apache.hadoop.record.meta.RecordTypeInfo
-
Deprecated.
Deserialize the type information for a record
- deserialize(RecordInput, String) - Method in class org.apache.hadoop.record.Record
-
Deprecated.
Deserialize a record with a tag (usually field name)
- deserialize(RecordInput) - Method in class org.apache.hadoop.record.Record
-
Deprecated.
Deserialize a record without a tag
- digest(byte[]) - Static method in class org.apache.hadoop.io.MD5Hash
-
Construct a hash value for a byte array.
- digest(InputStream) - Static method in class org.apache.hadoop.io.MD5Hash
-
Construct a hash value for the content from the InputStream.
- digest(byte[], int, int) - Static method in class org.apache.hadoop.io.MD5Hash
-
Construct a hash value for a byte array.
- digest(String) - Static method in class org.apache.hadoop.io.MD5Hash
-
Construct a hash value for a String.
- digest(UTF8) - Static method in class org.apache.hadoop.io.MD5Hash
-
Construct a hash value for a String.
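A quick sketch of the digest overloads; for ASCII input the String and byte[] forms hash the same bytes, so the results match.

    import org.apache.hadoop.io.MD5Hash;

    public class Md5HashExample {
      public static void main(String[] args) {
        MD5Hash fromString = MD5Hash.digest("hello world");
        MD5Hash fromBytes = MD5Hash.digest("hello world".getBytes());
        System.out.println(fromString.equals(fromBytes)); // true for ASCII input
        System.out.println(fromString);                   // hex form of the digest
      }
    }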
- DIR_DEFAULT_PERM - Static variable in class org.apache.hadoop.fs.FileContext
-
Default permission for directory
- DIR_FORMATS - Static variable in class org.apache.hadoop.mapreduce.lib.input.MultipleInputs
-
- DIR_MAPPERS - Static variable in class org.apache.hadoop.mapreduce.lib.input.MultipleInputs
-
- DirectDecompressionCodec - Interface in org.apache.hadoop.io.compress
-
This class encapsulates a codec which can decompress direct bytebuffers.
- DirectDecompressor - Interface in org.apache.hadoop.io.compress
-
Specification of a direct ByteBuffer 'de-compressor'.
- disable_tracing() - Method in class org.apache.hadoop.record.compiler.generated.Rcc
-
Deprecated.
- DISABLED_MEMORY_LIMIT - Static variable in class org.apache.hadoop.mapred.JobConf
-
Deprecated.
- DISKS_FAILED - Static variable in class org.apache.hadoop.yarn.api.records.ContainerExitStatus
-
When the threshold number of the nodemanager local directories or the nodemanager log directories become bad.
- dispatch(Event) - Method in class org.apache.hadoop.yarn.event.AsyncDispatcher
-
- Dispatcher - Interface in org.apache.hadoop.yarn.event
-
Event Dispatcher interface.
- DISPATCHER_DRAIN_EVENTS_TIMEOUT - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- DISPATCHER_EXIT_ON_ERROR_KEY - Static variable in interface org.apache.hadoop.yarn.event.Dispatcher
-
- displayByteArray(byte[]) - Static method in class org.apache.hadoop.io.WritableUtils
-
- displayJobList(JobStatus[]) - Method in class org.apache.hadoop.mapreduce.tools.CLI
-
- displayJobList(JobStatus[], PrintWriter) - Method in class org.apache.hadoop.mapreduce.tools.CLI
-
- displayTasks(JobID, String, String) - Method in class org.apache.hadoop.mapred.JobClient
-
Display the information about a job's tasks, of a particular type and
in a particular state
- displayTasks(Job, String, String) - Method in class org.apache.hadoop.mapreduce.tools.CLI
-
Display the information about a job's tasks, of a particular type and
in a particular state
- DistributedCache - Class in org.apache.hadoop.filecache
-
Deprecated.
- DistributedCache() - Constructor for class org.apache.hadoop.filecache.DistributedCache
-
Deprecated.
- DISTRIBUTEDSHELLSCRIPTLEN - Static variable in class org.apache.hadoop.yarn.applications.distributedshell.DSConstants
-
Environment key name denoting the file content length for the shell script.
- DISTRIBUTEDSHELLSCRIPTLOCATION - Static variable in class org.apache.hadoop.yarn.applications.distributedshell.DSConstants
-
Environment key name pointing to the shell script's location
- DISTRIBUTEDSHELLSCRIPTTIMESTAMP - Static variable in class org.apache.hadoop.yarn.applications.distributedshell.DSConstants
-
Environment key name denoting the file timestamp for the shell script.
- DISTRIBUTEDSHELLTIMELINEDOMAIN - Static variable in class org.apache.hadoop.yarn.applications.distributedshell.DSConstants
-
Environment key name denoting the timeline domain ID.
- DNSToSwitchMapping - Interface in org.apache.hadoop.net
-
An interface that must be implemented to allow pluggable
DNS-name/IP-address to RackID resolvers.
- Done() - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
Deprecated.
- done() - Method in interface org.apache.hadoop.record.Index
-
Deprecated.
- DOT_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
-
Deprecated.
- doTransition(EVENTTYPE, EVENT) - Method in interface org.apache.hadoop.yarn.state.StateMachine
-
- DOUBLE_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
-
Deprecated.
- DOUBLE_VALUE_SUM - Static variable in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
-
- DOUBLE_VALUE_SUM - Static variable in class org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregatorBaseDescriptor
-
- DoubleTypeID - Static variable in class org.apache.hadoop.record.meta.TypeID
-
Deprecated.
- DoubleValueSum - Class in org.apache.hadoop.mapred.lib.aggregate
-
This class implements a value aggregator that sums up a sequence of double
values.
- DoubleValueSum() - Constructor for class org.apache.hadoop.mapred.lib.aggregate.DoubleValueSum
-
- DoubleValueSum - Class in org.apache.hadoop.mapreduce.lib.aggregate
-
This class implements a value aggregator that sums up a sequence of double
values.
- DoubleValueSum() - Constructor for class org.apache.hadoop.mapreduce.lib.aggregate.DoubleValueSum
-
The default constructor
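A tiny sketch of the aggregator used in isolation; values may be supplied as doubles or as strings that parse as doubles.

    import org.apache.hadoop.mapreduce.lib.aggregate.DoubleValueSum;

    public class DoubleValueSumExample {
      public static void main(String[] args) {
        DoubleValueSum sum = new DoubleValueSum();
        sum.addNextValue(1.5);
        sum.addNextValue(2.25);
        sum.addNextValue("3.25");         // string values are parsed as doubles
        System.out.println(sum.getSum()); // 7.0
        System.out.println(sum.getReport());
      }
    }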
- DoubleWritable - Class in org.apache.hadoop.io
-
Writable for Double values.
- DoubleWritable() - Constructor for class org.apache.hadoop.io.DoubleWritable
-
- DoubleWritable(double) - Constructor for class org.apache.hadoop.io.DoubleWritable
-
- downgrade(JobID) - Static method in class org.apache.hadoop.mapred.JobID
-
Downgrade a new JobID to an old one
- downgrade(JobStatus) - Static method in class org.apache.hadoop.mapred.JobStatus
-
- downgrade(TaskAttemptID) - Static method in class org.apache.hadoop.mapred.TaskAttemptID
-
Downgrade a new TaskAttemptID to an old one
- downgrade(TaskCompletionEvent) - Static method in class org.apache.hadoop.mapred.TaskCompletionEvent
-
- downgrade(TaskID) - Static method in class org.apache.hadoop.mapred.TaskID
-
Downgrade a new TaskID to an old one
- driver(String[]) - Static method in class org.apache.hadoop.record.compiler.generated.Rcc
-
Deprecated.
- DRIVER_CLASS_PROPERTY - Static variable in class org.apache.hadoop.mapred.lib.db.DBConfiguration
-
The JDBC Driver class name
- DRIVER_CLASS_PROPERTY - Static variable in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
The JDBC Driver class name
- DSConstants - Class in org.apache.hadoop.yarn.applications.distributedshell
-
Constants used in both Client and Application Master
- DSConstants() - Constructor for class org.apache.hadoop.yarn.applications.distributedshell.DSConstants
-
- dumpConfiguration(Configuration, Writer) - Static method in class org.apache.hadoop.conf.Configuration
-
Writes out all the parameters and their properties (final and resource) to the given Writer. The format of the output would be { "properties" : [ {key1,value1,key1.isFinal,key1.resource}, {key2,value2,key2.isFinal,key2.resource}...
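A minimal sketch that dumps a Configuration as JSON to an in-memory writer.

    import java.io.StringWriter;
    import org.apache.hadoop.conf.Configuration;

    public class DumpConfigurationExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        StringWriter out = new StringWriter();
        // Emits every parameter with its value, final flag and originating resource.
        Configuration.dumpConfiguration(conf, out);
        System.out.println(out);
      }
    }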
- dumpDeprecatedKeys() - Static method in class org.apache.hadoop.conf.Configuration
-
- dumpTimelineRecordtoJSON(Object) - Static method in class org.apache.hadoop.yarn.util.timeline.TimelineUtils
-
Serialize a POJO object into a JSON string, not in a pretty format.
- dumpTimelineRecordtoJSON(Object, boolean) - Static method in class org.apache.hadoop.yarn.util.timeline.TimelineUtils
-
Serialize a POJO object into a JSON string
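A short sketch serializing a timeline entity; the entity type and id are illustrative values, and the boolean selects the pretty-printed form.

    import org.apache.hadoop.yarn.api.records.timeline.TimelineEntity;
    import org.apache.hadoop.yarn.util.timeline.TimelineUtils;

    public class TimelineJsonExample {
      public static void main(String[] args) throws Exception {
        TimelineEntity entity = new TimelineEntity();
        entity.setEntityType("EXAMPLE_TYPE"); // illustrative
        entity.setEntityId("entity-1");       // illustrative
        System.out.println(TimelineUtils.dumpTimelineRecordtoJSON(entity, true));
      }
    }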
- dumpTopology() - Method in class org.apache.hadoop.net.AbstractDNSToSwitchMapping
-
Generate a string listing the switch mapping implementation, the mapping for every known node, and the number of nodes and unique switches known about, with each entry on a separate line.
- DynamicBloomFilter - Class in org.apache.hadoop.util.bloom
-
Implements a dynamic Bloom filter, as defined in the INFOCOM 2006 paper.
- DynamicBloomFilter() - Constructor for class org.apache.hadoop.util.bloom.DynamicBloomFilter
-
Zero-args constructor for the serialization.
- DynamicBloomFilter(int, int, int, int) - Constructor for class org.apache.hadoop.util.bloom.DynamicBloomFilter
-
Constructor.
- E_SAME_DIRECTORY_ONLY - Static variable in class org.apache.hadoop.fs.ftp.FTPFileSystem
-
- ElasticByteBufferPool - Class in org.apache.hadoop.io
-
This is a simple ByteBufferPool which just creates ByteBuffers as needed.
- ElasticByteBufferPool() - Constructor for class org.apache.hadoop.io.ElasticByteBufferPool
-
- emit(TupleWritable) - Method in class org.apache.hadoop.mapred.join.MultiFilterRecordReader
-
For each tuple emitted, return a value (typically one of the values
in the tuple).
- emit(TupleWritable) - Method in class org.apache.hadoop.mapred.join.OverrideRecordReader
-
Emit the value with the highest position in the tuple.
- emit(TupleWritable) - Method in class org.apache.hadoop.mapreduce.lib.join.MultiFilterRecordReader
-
For each tuple emitted, return a value (typically one of the values
in the tuple).
- emit(TupleWritable) - Method in class org.apache.hadoop.mapreduce.lib.join.OverrideRecordReader
-
Emit the value with the highest position in the tuple.
- emitMetric(String, String, String) - Method in class org.apache.hadoop.metrics.ganglia.GangliaContext
-
- emitRecord(String, String, OutputRecord) - Method in class org.apache.hadoop.metrics.file.FileContext
-
Deprecated.
Emits a metrics record to a file.
- emitRecord(String, String, OutputRecord) - Method in class org.apache.hadoop.metrics.ganglia.GangliaContext
-
- emitRecord(String, String, OutputRecord) - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
-
Sends a record to the metrics system.
- emitRecord(String, String, OutputRecord) - Method in class org.apache.hadoop.metrics.spi.CompositeContext
-
- emitRecord(String, String, OutputRecord) - Method in class org.apache.hadoop.metrics.spi.NoEmitMetricsContext
-
Do-nothing version of emitRecord
- emitRecord(String, String, OutputRecord) - Method in class org.apache.hadoop.metrics.spi.NullContext
-
Do-nothing version of emitRecord
- emitRecord(String, String, OutputRecord) - Method in class org.apache.hadoop.metrics.spi.NullContextWithUpdateThread
-
Do-nothing version of emitRecord
- empty - Variable in class org.apache.hadoop.mapreduce.lib.join.WrappedRecordReader
-
- EMPTY_ARRAY - Static variable in class org.apache.hadoop.mapred.TaskCompletionEvent
-
- EMPTY_ARRAY - Static variable in class org.apache.hadoop.mapreduce.TaskCompletionEvent
-
- emptyText - Static variable in class org.apache.hadoop.mapreduce.lib.fieldsel.FieldSelectionHelper
-
- enable_tracing() - Method in class org.apache.hadoop.record.compiler.generated.Rcc
-
Deprecated.
- enableSymlinks() - Static method in class org.apache.hadoop.fs.FileSystem
-
- encode(String) - Static method in class org.apache.hadoop.io.Text
-
Converts the provided String to bytes using the
UTF-8 encoding.
- encode(String, boolean) - Static method in class org.apache.hadoop.io.Text
-
Converts the provided String to bytes using the
UTF-8 encoding.
- encodeValue(byte[], XAttrCodec) - Static method in enum org.apache.hadoop.fs.XAttrCodec
-
Encode byte[] value to string representation with encoding.
- end() - Method in interface org.apache.hadoop.io.compress.Compressor
-
Closes the compressor and discards any unprocessed input.
- end() - Method in interface org.apache.hadoop.io.compress.Decompressor
-
Closes the decompressor and discards any unprocessed input.
- endColumn - Variable in class org.apache.hadoop.record.compiler.generated.Token
-
Deprecated.
beginLine and beginColumn describe the position of the first character
of this token; endLine and endColumn describe the position of the
last character of this token.
- endLine - Variable in class org.apache.hadoop.record.compiler.generated.Token
-
Deprecated.
beginLine and beginColumn describe the position of the first character
of this token; endLine and endColumn describe the position of the
last character of this token.
- endMap(String) - Method in class org.apache.hadoop.record.BinaryRecordInput
-
Deprecated.
- endMap(TreeMap, String) - Method in class org.apache.hadoop.record.BinaryRecordOutput
-
Deprecated.
- endMap(String) - Method in class org.apache.hadoop.record.CsvRecordInput
-
Deprecated.
- endMap(TreeMap, String) - Method in class org.apache.hadoop.record.CsvRecordOutput
-
Deprecated.
- endMap(String) - Method in interface org.apache.hadoop.record.RecordInput
-
Deprecated.
Check the mark for end of the serialized map.
- endMap(TreeMap, String) - Method in interface org.apache.hadoop.record.RecordOutput
-
Deprecated.
Mark the end of a serialized map.
- endMap(String) - Method in class org.apache.hadoop.record.XmlRecordInput
-
Deprecated.
- endMap(TreeMap, String) - Method in class org.apache.hadoop.record.XmlRecordOutput
-
Deprecated.
- Endpoint - Class in org.apache.hadoop.registry.client.types
-
Description of a single service/component endpoint.
- Endpoint() - Constructor for class org.apache.hadoop.registry.client.types.Endpoint
-
Create an empty instance.
- Endpoint(Endpoint) - Constructor for class org.apache.hadoop.registry.client.types.Endpoint
-
Create an endpoint from another endpoint.
- Endpoint(String, String, String, List<Map<String, String>>) - Constructor for class org.apache.hadoop.registry.client.types.Endpoint
-
Build an endpoint with a list of addresses
- Endpoint(String, String, String) - Constructor for class org.apache.hadoop.registry.client.types.Endpoint
-
Build an endpoint with an empty address list
- Endpoint(String, String, String, Map<String, String>) - Constructor for class org.apache.hadoop.registry.client.types.Endpoint
-
Build an endpoint with a single address entry.
- Endpoint(String, String, String, Map<String, String>...) - Constructor for class org.apache.hadoop.registry.client.types.Endpoint
-
Build an endpoint with a list of addresses
- Endpoint(String, String, URI...) - Constructor for class org.apache.hadoop.registry.client.types.Endpoint
-
Build an endpoint from a list of URIs; each URI
is ASCII-encoded and added to the list of addresses.
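A hedged sketch of the URI-based constructor; the api identifier, protocol type, and address are placeholders invented for this example.

    import java.net.URI;
    import org.apache.hadoop.registry.client.types.Endpoint;

    public class EndpointExample {
      public static void main(String[] args) {
        Endpoint web = new Endpoint(
            "classpath:org.example.web",                  // api identifier (placeholder)
            "webui",                                      // protocol type (placeholder)
            URI.create("http://host1.example.com:8080")); // address (placeholder)
        System.out.println(web);
      }
    }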
- endRecord() - Method in class org.apache.hadoop.metrics2.MetricsRecordBuilder
-
Syntactic sugar to add multiple records in a collector in a one liner.
- endRecord(String) - Method in class org.apache.hadoop.record.BinaryRecordInput
-
Deprecated.
- endRecord(Record, String) - Method in class org.apache.hadoop.record.BinaryRecordOutput
-
Deprecated.
- endRecord(String) - Method in class org.apache.hadoop.record.CsvRecordInput
-
Deprecated.
- endRecord(Record, String) - Method in class org.apache.hadoop.record.CsvRecordOutput
-
Deprecated.
- endRecord(String) - Method in interface org.apache.hadoop.record.RecordInput
-
Deprecated.
Check the mark for end of the serialized record.
- endRecord(Record, String) - Method in interface org.apache.hadoop.record.RecordOutput
-
Deprecated.
Mark the end of a serialized record.
- endRecord(String) - Method in class org.apache.hadoop.record.XmlRecordInput
-
Deprecated.
- endRecord(Record, String) - Method in class org.apache.hadoop.record.XmlRecordOutput
-
Deprecated.
- endVector(String) - Method in class org.apache.hadoop.record.BinaryRecordInput
-
Deprecated.
- endVector(ArrayList, String) - Method in class org.apache.hadoop.record.BinaryRecordOutput
-
Deprecated.
- endVector(String) - Method in class org.apache.hadoop.record.CsvRecordInput
-
Deprecated.
- endVector(ArrayList, String) - Method in class org.apache.hadoop.record.CsvRecordOutput
-
Deprecated.
- endVector(String) - Method in interface org.apache.hadoop.record.RecordInput
-
Deprecated.
Check the mark for end of the serialized vector.
- endVector(ArrayList, String) - Method in interface org.apache.hadoop.record.RecordOutput
-
Deprecated.
Mark the end of a serialized vector.
- endVector(String) - Method in class org.apache.hadoop.record.XmlRecordInput
-
Deprecated.
- endVector(ArrayList, String) - Method in class org.apache.hadoop.record.XmlRecordOutput
-
Deprecated.
- ensembleProvider - Variable in class org.apache.hadoop.registry.client.impl.zk.BindingInformation
-
The Curator Ensemble Provider
- ensureCurrentState(Service.STATE) - Method in class org.apache.hadoop.service.ServiceStateModel
-
Verify that a service is in a given state.
- ensureInflated() - Method in class org.apache.hadoop.io.CompressedWritable
-
Must be called by all methods which access fields to ensure that the data
has been uncompressed.
- enterState(Service.STATE) - Method in class org.apache.hadoop.service.ServiceStateModel
-
Enter a state; thread-safe.
- entrySet() - Method in class org.apache.hadoop.io.MapWritable
-
- entrySet() - Method in class org.apache.hadoop.io.SortedMapWritable
-
- EnumSetWritable<E extends Enum<E>> - Class in org.apache.hadoop.io
-
A Writable wrapper for EnumSet.
- EnumSetWritable(EnumSet<E>, Class<E>) - Constructor for class org.apache.hadoop.io.EnumSetWritable
-
Construct a new EnumSetWritable.
- EnumSetWritable(EnumSet<E>) - Constructor for class org.apache.hadoop.io.EnumSetWritable
-
Construct a new EnumSetWritable.
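A brief sketch wrapping a non-empty EnumSet so it can be carried through Writable serialization; FsAction is used here only as a convenient existing enum.

    import java.util.EnumSet;
    import org.apache.hadoop.fs.permission.FsAction;
    import org.apache.hadoop.io.EnumSetWritable;

    public class EnumSetWritableExample {
      public static void main(String[] args) {
        EnumSetWritable<FsAction> actions =
            new EnumSetWritable<FsAction>(EnumSet.of(FsAction.READ, FsAction.WRITE));
        System.out.println(actions.get()); // the wrapped EnumSet
      }
    }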
- eof - Variable in class org.apache.hadoop.io.compress.DecompressorStream
-
- EOF - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
-
Deprecated.
- eol - Variable in exception org.apache.hadoop.record.compiler.generated.ParseException
-
Deprecated.
The end of line string for this machine.
- equals(Object) - Method in class org.apache.hadoop.fs.AbstractFileSystem
-
- equals(Object) - Method in class org.apache.hadoop.fs.FileChecksum
-
Return true if both the algorithms and the values are the same.
- equals(Object) - Method in class org.apache.hadoop.fs.FileStatus
-
Compare if this object is equal to another object
- equals(Object) - Method in class org.apache.hadoop.fs.HdfsVolumeId
-
- equals(Object) - Method in class org.apache.hadoop.fs.LocatedFileStatus
-
Compare if this object is equal to another object
- equals(Object) - Method in class org.apache.hadoop.fs.Path
-
- equals(Object) - Method in class org.apache.hadoop.fs.permission.AclEntry
-
- equals(Object) - Method in class org.apache.hadoop.fs.permission.AclStatus
-
- equals(Object) - Method in class org.apache.hadoop.fs.permission.FsPermission
-
- equals(Object) - Method in interface org.apache.hadoop.fs.VolumeId
-
- equals(Object) - Method in class org.apache.hadoop.io.BinaryComparable
-
Return true if bytes from getBytes() match.
- equals(Object) - Method in class org.apache.hadoop.io.BooleanWritable
-
- equals(Object) - Method in class org.apache.hadoop.io.BytesWritable
-
Are the two byte sequences equal?
- equals(Object) - Method in class org.apache.hadoop.io.ByteWritable
-
Returns true iff o
is a ByteWritable with the same value.
- equals(Object) - Method in class org.apache.hadoop.io.DoubleWritable
-
Returns true iff o
is a DoubleWritable with the same value.
- equals(Object) - Method in class org.apache.hadoop.io.EnumSetWritable
-
Returns true if o
is an EnumSetWritable with the same value,
or both are null.
- equals(Object) - Method in class org.apache.hadoop.io.FloatWritable
-
Returns true iff o
is a FloatWritable with the same value.
- equals(Object) - Method in class org.apache.hadoop.io.IntWritable
-
Returns true iff o
is a IntWritable with the same value.
- equals(Object) - Method in class org.apache.hadoop.io.LongWritable
-
Returns true iff o
is a LongWritable with the same value.
- equals(Object) - Method in class org.apache.hadoop.io.MapWritable
-
- equals(Object) - Method in class org.apache.hadoop.io.MD5Hash
-
Returns true iff o
is an MD5Hash whose digest contains the
same values.
- equals(Object) - Method in class org.apache.hadoop.io.NullWritable
-
- equals(Object) - Method in class org.apache.hadoop.io.ShortWritable
-
Returns true iff o
is a ShortWritable with the same value.
- equals(Object) - Method in class org.apache.hadoop.io.SortedMapWritable
-
- equals(Object) - Method in class org.apache.hadoop.io.Text
-
Returns true iff o
is a Text with the same contents.
- equals(Object) - Method in class org.apache.hadoop.io.VIntWritable
-
Returns true iff o
is a VIntWritable with the same value.
- equals(Object) - Method in class org.apache.hadoop.io.VLongWritable
-
Returns true iff o
is a VLongWritable with the same value.
- equals(Object) - Method in class org.apache.hadoop.mapred.Counters.Counter
-
- equals(Object) - Method in class org.apache.hadoop.mapred.Counters.Group
-
- equals(Object) - Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
-
Return true iff compareTo(other) returns 0.
- equals(Object) - Method in class org.apache.hadoop.mapreduce.counters.AbstractCounters
-
- equals(Object) - Method in class org.apache.hadoop.mapreduce.ID
-
- equals(Object) - Method in class org.apache.hadoop.mapreduce.JobID
-
- equals(Object) - Method in class org.apache.hadoop.mapreduce.lib.join.TupleWritable
- equals(Object) - Method in class org.apache.hadoop.mapreduce.lib.join.WrappedRecordReader
-
Return true iff compareTo(other) returns 0.
- equals(Object) - Method in class org.apache.hadoop.mapreduce.TaskAttemptID
-
- equals(Object) - Method in class org.apache.hadoop.mapreduce.TaskCompletionEvent
-
- equals(Object) - Method in class org.apache.hadoop.mapreduce.TaskID
-
- equals(Object) - Method in class org.apache.hadoop.metrics2.AbstractMetric
-
- equals(Object) - Method in class org.apache.hadoop.metrics2.MetricsTag
-
- equals(Object) - Method in class org.apache.hadoop.net.SocksSocketFactory
-
- equals(Object) - Method in class org.apache.hadoop.net.StandardSocketFactory
-
- equals(Object) - Method in class org.apache.hadoop.record.Buffer
-
Deprecated.
- equals(Object) - Method in class org.apache.hadoop.record.meta.FieldTypeInfo
-
Deprecated.
Two FieldTypeInfos are equal if each of their fields matches
- equals(FieldTypeInfo) - Method in class org.apache.hadoop.record.meta.FieldTypeInfo
-
Deprecated.
- equals(Object) - Method in class org.apache.hadoop.record.meta.MapTypeID
-
Deprecated.
Two map typeIDs are equal if their constituent elements have the
same type
- equals(Object) - Method in class org.apache.hadoop.record.meta.StructTypeID
-
Deprecated.
- equals(Object) - Method in class org.apache.hadoop.record.meta.TypeID
-
Deprecated.
Two base typeIDs are equal if they refer to the same type
- equals(Object) - Method in class org.apache.hadoop.record.meta.VectorTypeID
-
Deprecated.
Two vector typeIDs are equal if their constituent elements have the
same type
- equals(Object) - Method in class org.apache.hadoop.registry.client.types.RegistryPathStatus
-
Equality operator checks size, time and path of the entries.
- equals(Object) - Method in class org.apache.hadoop.yarn.api.records.ApplicationAttemptId
-
- equals(Object) - Method in class org.apache.hadoop.yarn.api.records.ApplicationId
-
- equals(Object) - Method in class org.apache.hadoop.yarn.api.records.ContainerId
-
- equals(Object) - Method in class org.apache.hadoop.yarn.api.records.ContainerResourceIncreaseRequest
-
- equals(Object) - Method in class org.apache.hadoop.yarn.api.records.NMToken
-
- equals(Object) - Method in class org.apache.hadoop.yarn.api.records.NodeId
-
- equals(Object) - Method in class org.apache.hadoop.yarn.api.records.Priority
-
- equals(Object) - Method in class org.apache.hadoop.yarn.api.records.ReservationId
-
- equals(Object) - Method in class org.apache.hadoop.yarn.api.records.ReservationRequest
-
- equals(Object) - Method in class org.apache.hadoop.yarn.api.records.Resource
-
- equals(Object) - Method in class org.apache.hadoop.yarn.api.records.ResourceRequest
-
- equals(Object) - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEntity
-
- equals(Object) - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEvent
-
- equals(Object) - Method in class org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat.LogKey
-
- equals(Object) - Method in class org.apache.hadoop.yarn.security.AMRMTokenIdentifier
-
- equals(Object) - Method in class org.apache.hadoop.yarn.security.client.ClientToAMTokenIdentifier
-
- equals(Object) - Method in class org.apache.hadoop.yarn.security.ContainerTokenIdentifier
-
- equals(Object) - Method in class org.apache.hadoop.yarn.security.NMTokenIdentifier
-
- Event<TYPE extends Enum<TYPE>> - Interface in org.apache.hadoop.yarn.event
-
Interface defining the events API.
- EventCounter - Class in org.apache.hadoop.log.metrics
-
A log4J Appender that simply counts logging events in three levels:
fatal, error and warn.
- EventCounter() - Constructor for class org.apache.hadoop.log.metrics.EventCounter
-
- eventDispatchers - Variable in class org.apache.hadoop.yarn.event.AsyncDispatcher
-
- EventHandler<T extends Event> - Interface in org.apache.hadoop.yarn.event
-
Interface for handling events of type T
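A hedged sketch of the Dispatcher/EventHandler pattern using AsyncDispatcher; the event type and event class are hypothetical, defined only for this example.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.yarn.event.AbstractEvent;
    import org.apache.hadoop.yarn.event.AsyncDispatcher;
    import org.apache.hadoop.yarn.event.EventHandler;

    public class DispatcherSketch {
      enum DemoEventType { PING } // hypothetical event type

      static class DemoEvent extends AbstractEvent<DemoEventType> {
        DemoEvent() { super(DemoEventType.PING); }
      }

      public static void main(String[] args) throws Exception {
        AsyncDispatcher dispatcher = new AsyncDispatcher();
        dispatcher.init(new Configuration());
        dispatcher.start();
        // Route all DemoEventType events to this handler.
        dispatcher.register(DemoEventType.class, new EventHandler<DemoEvent>() {
          @Override
          public void handle(DemoEvent event) {
            System.out.println("handled " + event.getType());
          }
        });
        dispatcher.getEventHandler().handle(new DemoEvent());
        Thread.sleep(100); // give the dispatcher thread time to deliver the event
        dispatcher.stop();
      }
    }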
- EXECUTABLE - Static variable in class org.apache.hadoop.mapred.pipes.Submitter
-
- execute() - Method in class org.apache.hadoop.record.compiler.ant.RccTask
-
Deprecated.
Invoke the Hadoop record compiler on each record definition file
- executeQuery(String) - Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
- executeQuery(String) - Method in class org.apache.hadoop.mapreduce.lib.db.MySQLDataDrivenDBRecordReader
-
- executeQuery(String) - Method in class org.apache.hadoop.mapreduce.lib.db.MySQLDBRecordReader
-
- exists(Path) - Method in class org.apache.hadoop.fs.FileSystem
-
Check if a path exists.
- exists(String) - Method in interface org.apache.hadoop.registry.client.api.RegistryOperations
-
Probe for a path existing.
- exists(String) - Method in class org.apache.hadoop.registry.client.impl.zk.RegistryOperationsService
-
- ExpandBuff(boolean) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
Deprecated.
- expectedTokenSequences - Variable in exception org.apache.hadoop.record.compiler.generated.ParseException
-
Deprecated.
Each entry in this array is an array of integers.
- expire(O) - Method in class org.apache.hadoop.yarn.util.AbstractLivelinessMonitor
-
- expunge() - Method in class org.apache.hadoop.fs.Trash
-
Delete old checkpoint(s).
- external - Variable in class org.apache.hadoop.registry.client.types.ServiceRecord
-
List of endpoints intended for use by external callers
- extractOutputKeyValue(String, String, String, List<Integer>, List<Integer>, int, boolean, boolean) - Method in class org.apache.hadoop.mapreduce.lib.fieldsel.FieldSelectionHelper
-
- extractServiceRecords(RegistryOperations, String, Collection<RegistryPathStatus>) - Static method in class org.apache.hadoop.registry.client.binding.RegistryUtils
-
Extract all service records under a list of stat operations...this
skips entries that are too short or simply not matching
- extractServiceRecords(RegistryOperations, String, Map<String, RegistryPathStatus>) - Static method in class org.apache.hadoop.registry.client.binding.RegistryUtils
-
Extract all service records under a list of stat operations...this
non-atomic action skips entries that are too short or simply not matching.
- extractServiceRecords(RegistryOperations, String) - Static method in class org.apache.hadoop.registry.client.binding.RegistryUtils
-
Extract all service records under a list of stat operations...this
non-atomic action skips entries that are too short or simply not matching.
- FAILED - Static variable in class org.apache.hadoop.mapred.jobcontrol.Job
-
- FAILED - Static variable in class org.apache.hadoop.mapred.JobStatus
-
- failJob(String) - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
- FailoverFailedException - Exception in org.apache.hadoop.ha
-
Exception thrown to indicate service failover has failed.
- FailoverFailedException(String) - Constructor for exception org.apache.hadoop.ha.FailoverFailedException
-
- FailoverFailedException(String, Throwable) - Constructor for exception org.apache.hadoop.ha.FailoverFailedException
-
- failTask(TaskAttemptID) - Method in class org.apache.hadoop.mapreduce.Job
-
Fail indicated task attempt.
- FenceMethod - Interface in org.apache.hadoop.ha
-
A fencing method is a method by which one node can forcibly prevent
another node from making continued progress.
- Field() - Method in class org.apache.hadoop.record.compiler.generated.Rcc
-
Deprecated.
- fieldNames - Variable in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
- FieldSelectionHelper - Class in org.apache.hadoop.mapreduce.lib.fieldsel
-
This class implements a mapper/reducer class that can be used to perform
field selections in a manner similar to unix cut.
- FieldSelectionHelper() - Constructor for class org.apache.hadoop.mapreduce.lib.fieldsel.FieldSelectionHelper
-
- FieldSelectionHelper(Text, Text) - Constructor for class org.apache.hadoop.mapreduce.lib.fieldsel.FieldSelectionHelper
-
- FieldSelectionMapper<K,V> - Class in org.apache.hadoop.mapreduce.lib.fieldsel
-
This class implements a mapper class that can be used to perform
field selections in a manner similar to unix cut.
- FieldSelectionMapper() - Constructor for class org.apache.hadoop.mapreduce.lib.fieldsel.FieldSelectionMapper
-
- FieldSelectionMapReduce<K,V> - Class in org.apache.hadoop.mapred.lib
-
This class implements a mapper/reducer class that can be used to perform
field selections in a manner similar to unix cut.
- FieldSelectionMapReduce() - Constructor for class org.apache.hadoop.mapred.lib.FieldSelectionMapReduce
-
- FieldSelectionReducer<K,V> - Class in org.apache.hadoop.mapreduce.lib.fieldsel
-
This class implements a reducer class that can be used to perform field
selections in a manner similar to unix cut.
- FieldSelectionReducer() - Constructor for class org.apache.hadoop.mapreduce.lib.fieldsel.FieldSelectionReducer
-
- FieldTypeInfo - Class in org.apache.hadoop.record.meta
-
- FILE_DEFAULT_PERM - Static variable in class org.apache.hadoop.fs.FileContext
-
Default permission for file
- FILE_NAME_PROPERTY - Static variable in class org.apache.hadoop.metrics.file.FileContext
-
Deprecated.
- FileAlreadyExistsException - Exception in org.apache.hadoop.fs
-
Used when target file already exists for any operation and
is not configured to be overwritten.
- FileAlreadyExistsException() - Constructor for exception org.apache.hadoop.fs.FileAlreadyExistsException
-
- FileAlreadyExistsException(String) - Constructor for exception org.apache.hadoop.fs.FileAlreadyExistsException
-
- FileAlreadyExistsException - Exception in org.apache.hadoop.mapred
-
Used when target file already exists for any operation and
is not configured to be overwritten.
- FileAlreadyExistsException() - Constructor for exception org.apache.hadoop.mapred.FileAlreadyExistsException
-
- FileAlreadyExistsException(String) - Constructor for exception org.apache.hadoop.mapred.FileAlreadyExistsException
-
- FileChecksum - Class in org.apache.hadoop.fs
-
An abstract class representing file checksums for files.
- FileChecksum() - Constructor for class org.apache.hadoop.fs.FileChecksum
-
- FileContext - Class in org.apache.hadoop.fs
-
The FileContext class provides an interface to the application writer for
using the Hadoop file system.
- FileContext - Class in org.apache.hadoop.metrics.file
-
Deprecated.
- FileContext() - Constructor for class org.apache.hadoop.metrics.file.FileContext
-
Deprecated.
Creates a new instance of FileContext
- FileInputFormat<K,V> - Class in org.apache.hadoop.mapred
-
- FileInputFormat() - Constructor for class org.apache.hadoop.mapred.FileInputFormat
-
- FileInputFormat<K,V> - Class in org.apache.hadoop.mapreduce.lib.input
-
- FileInputFormat() - Constructor for class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-
- FileInputFormatCounter - Enum in org.apache.hadoop.mapreduce.lib.input
-
- FileOutputCommitter - Class in org.apache.hadoop.mapred
-
An
OutputCommitter
that commits files specified
in job output directory i.e.
- FileOutputCommitter() - Constructor for class org.apache.hadoop.mapred.FileOutputCommitter
-
- FileOutputCommitter - Class in org.apache.hadoop.mapreduce.lib.output
-
An
OutputCommitter
that commits files specified
in job output directory i.e.
- FileOutputCommitter(Path, TaskAttemptContext) - Constructor for class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
-
Create a file output committer
- FileOutputCommitter(Path, JobContext) - Constructor for class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
-
Create a file output committer
- FileOutputFormat<K,V> - Class in org.apache.hadoop.mapred
-
- FileOutputFormat() - Constructor for class org.apache.hadoop.mapred.FileOutputFormat
-
- FileOutputFormat<K,V> - Class in org.apache.hadoop.mapreduce.lib.output
-
- FileOutputFormat() - Constructor for class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
- FileOutputFormatCounter - Enum in org.apache.hadoop.mapreduce.lib.output
-
- FileSink - Class in org.apache.hadoop.metrics2.sink
-
A metrics sink that writes to a file
- FileSink() - Constructor for class org.apache.hadoop.metrics2.sink.FileSink
-
- FileSplit - Class in org.apache.hadoop.mapred
-
A section of an input file.
- FileSplit() - Constructor for class org.apache.hadoop.mapred.FileSplit
-
- FileSplit(Path, long, long, JobConf) - Constructor for class org.apache.hadoop.mapred.FileSplit
-
Deprecated.
- FileSplit(Path, long, long, String[]) - Constructor for class org.apache.hadoop.mapred.FileSplit
-
Constructs a split with host information
- FileSplit(Path, long, long, String[], String[]) - Constructor for class org.apache.hadoop.mapred.FileSplit
-
Constructs a split with host information
- FileSplit(FileSplit) - Constructor for class org.apache.hadoop.mapred.FileSplit
-
- FileSplit - Class in org.apache.hadoop.mapreduce.lib.input
-
A section of an input file.
- FileSplit() - Constructor for class org.apache.hadoop.mapreduce.lib.input.FileSplit
-
- FileSplit(Path, long, long, String[]) - Constructor for class org.apache.hadoop.mapreduce.lib.input.FileSplit
-
Constructs a split with host information
- FileSplit(Path, long, long, String[], String[]) - Constructor for class org.apache.hadoop.mapreduce.lib.input.FileSplit
-
Constructs a split with host and cached-blocks information
- FileStatus - Class in org.apache.hadoop.fs
-
Interface that represents the client side information for a file.
- FileStatus() - Constructor for class org.apache.hadoop.fs.FileStatus
-
- FileStatus(long, boolean, int, long, long, Path) - Constructor for class org.apache.hadoop.fs.FileStatus
-
- FileStatus(long, boolean, int, long, long, long, FsPermission, String, String, Path) - Constructor for class org.apache.hadoop.fs.FileStatus
-
Constructor for file systems on which symbolic links are not supported
- FileStatus(long, boolean, int, long, long, long, FsPermission, String, String, Path, Path) - Constructor for class org.apache.hadoop.fs.FileStatus
-
- FileStatus(FileStatus) - Constructor for class org.apache.hadoop.fs.FileStatus
-
Copy constructor.
- FileSystem - Class in org.apache.hadoop.fs
-
An abstract base class for a fairly generic filesystem.
- FileSystem() - Constructor for class org.apache.hadoop.fs.FileSystem
-
- FileUtil - Class in org.apache.hadoop.fs
-
A collection of file-processing util methods
- FileUtil() - Constructor for class org.apache.hadoop.fs.FileUtil
-
- FillBuff() - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
Deprecated.
- fillJoinCollector(K) - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
-
For all child RRs offering the key provided, obtain an iterator
at that position in the JoinCollector.
- fillJoinCollector(K) - Method in class org.apache.hadoop.mapred.join.OverrideRecordReader
-
Instead of filling the JoinCollector with iterators from all
data sources, fill only the rightmost for this key.
- fillJoinCollector(K) - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeRecordReader
-
For all child RRs offering the key provided, obtain an iterator
at that position in the JoinCollector.
- fillJoinCollector(K) - Method in class org.apache.hadoop.mapreduce.lib.join.OverrideRecordReader
-
Instead of filling the JoinCollector with iterators from all
data sources, fill only the rightmost for this key.
- FILTER_CLASS - Static variable in class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFilter
-
- FILTER_FREQUENCY - Static variable in class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFilter
-
- FILTER_REGEX - Static variable in class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFilter
-
- FilterFileSystem - Class in org.apache.hadoop.fs
-
A FilterFileSystem
contains
some other file system, which it uses as
its basic file system, possibly transforming
the data along the way or providing additional
functionality.
- FilterFileSystem() - Constructor for class org.apache.hadoop.fs.FilterFileSystem
-
- FilterFileSystem(FileSystem) - Constructor for class org.apache.hadoop.fs.FilterFileSystem
-
- FilterOutputFormat<K,V> - Class in org.apache.hadoop.mapred.lib
-
FilterOutputFormat is a convenience class that wraps OutputFormat.
- FilterOutputFormat() - Constructor for class org.apache.hadoop.mapred.lib.FilterOutputFormat
-
- FilterOutputFormat(OutputFormat<K, V>) - Constructor for class org.apache.hadoop.mapred.lib.FilterOutputFormat
-
Create a FilterOutputFormat based on the supplied output format.
- FilterOutputFormat<K,V> - Class in org.apache.hadoop.mapreduce.lib.output
-
FilterOutputFormat is a convenience class that wraps OutputFormat.
- FilterOutputFormat() - Constructor for class org.apache.hadoop.mapreduce.lib.output.FilterOutputFormat
-
- FilterOutputFormat(OutputFormat<K, V>) - Constructor for class org.apache.hadoop.mapreduce.lib.output.FilterOutputFormat
-
Create a FilterOutputFormat based on the underlying output format.
- FinalApplicationStatus - Enum in org.apache.hadoop.yarn.api.records
-
Enumeration of various final states of an Application.
- find(String) - Method in class org.apache.hadoop.io.Text
-
- find(String, int) - Method in class org.apache.hadoop.io.Text
-
Finds any occurrence of what in the backing buffer, starting at position start.
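A tiny sketch of find; the return value is the byte offset of the match in the UTF-8 backing buffer, or -1 when there is none.

    import org.apache.hadoop.io.Text;

    public class TextFindExample {
      public static void main(String[] args) {
        Text t = new Text("hadoop mapreduce");
        System.out.println(t.find("map"));    // 7, the byte offset of the match
        System.out.println(t.find("map", 8)); // -1, no match at or after position 8
      }
    }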
- findContainingJar(Class) - Static method in class org.apache.hadoop.mapred.JobConf
-
Find a jar that contains a class of the same name, if any.
- findCounter(String, String) - Method in class org.apache.hadoop.mapred.Counters
-
- findCounter(String, int, String) - Method in class org.apache.hadoop.mapred.Counters
-
- findCounter(String, String) - Method in class org.apache.hadoop.mapred.Counters.Group
-
- findCounter(String, boolean) - Method in class org.apache.hadoop.mapred.Counters.Group
-
- findCounter(String) - Method in class org.apache.hadoop.mapred.Counters.Group
-
- findCounter(String, String) - Method in class org.apache.hadoop.mapreduce.counters.AbstractCounters
-
Find a counter, create one if necessary
- findCounter(Enum<?>) - Method in class org.apache.hadoop.mapreduce.counters.AbstractCounters
-
Find the counter for the given enum.
- findCounter(String, FileSystemCounter) - Method in class org.apache.hadoop.mapreduce.counters.AbstractCounters
-
Find the file system counter for the given scheme and enum.
- findCounter(String, String) - Method in interface org.apache.hadoop.mapreduce.counters.CounterGroupBase
-
Find a counter in the group.
- findCounter(String, boolean) - Method in interface org.apache.hadoop.mapreduce.counters.CounterGroupBase
-
Find a counter in the group
- findCounter(String) - Method in interface org.apache.hadoop.mapreduce.counters.CounterGroupBase
-
Find a counter in the group.
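A rough usage sketch for the findCounter family, under the assumption that a mapreduce Counters object (an AbstractCounters subclass) is available; the group and counter names below are illustrative only.

    import org.apache.hadoop.mapreduce.Counters;

    public class CounterLookupExample {                   // hypothetical example class
      public static void main(String[] args) {
        Counters counters = new Counters();
        // Look up (creating on first use) the counter "BAD_RECORDS" in group "MyApp".
        counters.findCounter("MyApp", "BAD_RECORDS").increment(1);
        System.out.println(counters.findCounter("MyApp", "BAD_RECORDS").getValue());
      }
    }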
- findProvider(List<KeyProvider>, String) - Static method in class org.apache.hadoop.crypto.key.KeyProvider
-
Find the provider with the given key.
- findSeparator(byte[], int, int, byte) - Static method in class org.apache.hadoop.mapred.KeyValueLineRecordReader
-
- findSeparator(byte[], int, int, byte) - Static method in class org.apache.hadoop.mapreduce.lib.input.KeyValueLineRecordReader
-
- findTimestampedDirectories() - Method in class org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager
-
Finds all history directories with a timestamp component by scanning the
filesystem.
- finish() - Method in class org.apache.hadoop.io.compress.BlockCompressorStream
-
- finish() - Method in class org.apache.hadoop.io.compress.CompressionOutputStream
-
Finishes writing compressed data to the output stream
without closing the underlying stream.
- finish() - Method in interface org.apache.hadoop.io.compress.Compressor
-
When called, indicates that compression should end
with the current contents of the input buffer.
- finish() - Method in class org.apache.hadoop.io.compress.CompressorStream
-
- finish() - Method in class org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster
-
- finishApplicationMaster(FinishApplicationMasterRequest) - Method in interface org.apache.hadoop.yarn.api.ApplicationMasterProtocol
-
The interface used by an ApplicationMaster to notify the ResourceManager about its completion (success or failure).
- FinishApplicationMasterRequest - Class in org.apache.hadoop.yarn.api.protocolrecords
-
The finalization request sent by the ApplicationMaster to inform the ResourceManager about its completion.
- FinishApplicationMasterRequest() - Constructor for class org.apache.hadoop.yarn.api.protocolrecords.FinishApplicationMasterRequest
-
- FinishApplicationMasterResponse - Class in org.apache.hadoop.yarn.api.protocolrecords
-
The response sent by the ResourceManager to an ApplicationMaster on its completion.
- FinishApplicationMasterResponse() - Constructor for class org.apache.hadoop.yarn.api.protocolrecords.FinishApplicationMasterResponse
-
- finished() - Method in interface org.apache.hadoop.io.compress.Compressor
-
Returns true if the end of the compressed
data output stream has been reached.
- finished() - Method in interface org.apache.hadoop.io.compress.Decompressor
-
Returns true if the end of the decompressed data output stream has been reached.
- firstKey() - Method in class org.apache.hadoop.io.SortedMapWritable
-
- fix(FileSystem, Path, Class<? extends Writable>, Class<? extends Writable>, boolean, Configuration) - Static method in class org.apache.hadoop.io.MapFile
-
This method attempts to fix a corrupt MapFile by re-creating its index.
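A hedged sketch of invoking MapFile.fix; the path, key class, and value class are assumptions for illustration, and the returned long is the number of valid entries found in the data file.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.MapFile;
    import org.apache.hadoop.io.Text;

    public class MapFileFixExample {                      // hypothetical example class
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // Rebuild the index of a (possibly corrupt) MapFile directory.
        long entries = MapFile.fix(fs, new Path("/data/example.map"),
            Text.class, IntWritable.class, false /* dryrun */, conf);
        System.out.println("valid entries: " + entries);
      }
    }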
- FIXED_RECORD_LENGTH - Static variable in class org.apache.hadoop.mapred.FixedLengthInputFormat
-
- FIXED_RECORD_LENGTH - Static variable in class org.apache.hadoop.mapreduce.lib.input.FixedLengthInputFormat
-
- FixedLengthInputFormat - Class in org.apache.hadoop.mapred
-
FixedLengthInputFormat is an input format used to read input files
which contain fixed length records.
- FixedLengthInputFormat() - Constructor for class org.apache.hadoop.mapred.FixedLengthInputFormat
-
- FixedLengthInputFormat - Class in org.apache.hadoop.mapreduce.lib.input
-
FixedLengthInputFormat is an input format used to read input files
which contain fixed length records.
- FixedLengthInputFormat() - Constructor for class org.apache.hadoop.mapreduce.lib.input.FixedLengthInputFormat
-
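As a hedged job-setup sketch (the 64-byte record length and class name are illustrative), the record length for FixedLengthInputFormat is supplied through the job configuration before the format is selected:

    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FixedLengthInputFormat;

    public class FixedLengthJobSetup {                    // hypothetical example class
      public static void configure(Job job) {
        // Each input record is exactly 64 bytes long (illustrative value).
        FixedLengthInputFormat.setRecordLength(job.getConfiguration(), 64);
        job.setInputFormatClass(FixedLengthInputFormat.class);
      }
    }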
- fixRelativePart(Path) - Method in class org.apache.hadoop.fs.FileSystem
-
- FLOAT_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
-
Deprecated.
- FloatSplitter - Class in org.apache.hadoop.mapreduce.lib.db
-
Implement DBSplitter over floating-point values.
- FloatSplitter() - Constructor for class org.apache.hadoop.mapreduce.lib.db.FloatSplitter
-
- FloatTypeID - Static variable in class org.apache.hadoop.record.meta.TypeID
-
Deprecated.
- FloatWritable - Class in org.apache.hadoop.io
-
A WritableComparable for floats.
- FloatWritable() - Constructor for class org.apache.hadoop.io.FloatWritable
-
- FloatWritable(float) - Constructor for class org.apache.hadoop.io.FloatWritable
-
- flush() - Method in class org.apache.hadoop.crypto.key.KeyProvider
-
Ensures that any changes to the keys are written to persistent store.
- flush() - Method in class org.apache.hadoop.io.compress.CompressionOutputStream
-
- flush() - Method in class org.apache.hadoop.metrics.file.FileContext
-
Deprecated.
Flushes the output writer, forcing updates to disk.
- flush() - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
-
Called each period after all records have been emitted, this method does nothing.
- flush() - Method in class org.apache.hadoop.metrics.spi.CompositeContext
-
- flush() - Method in interface org.apache.hadoop.metrics2.MetricsSink
-
Flush any buffered metrics
- flush() - Method in class org.apache.hadoop.metrics2.sink.FileSink
-
- flush() - Method in class org.apache.hadoop.metrics2.sink.GraphiteSink
-
- flush() - Method in class org.apache.hadoop.security.alias.CredentialProvider
-
Ensures that any changes to the credentials are written to persistent store.
- flush() - Method in class org.apache.hadoop.yarn.ContainerLogAppender
-
- flush() - Method in class org.apache.hadoop.yarn.ContainerRollingLogAppender
-
- FORBIDDEN_RELATION - Static variable in class org.apache.hadoop.yarn.api.records.timeline.TimelinePutResponse.TimelinePutError
-
Error code returned if the user is denied permission to relate the entity to another one in a different domain.
- forceKillApplication(KillApplicationRequest) - Method in interface org.apache.hadoop.yarn.api.ApplicationClientProtocol
-
The interface used by clients to request the ResourceManager to abort a submitted application.
- forName(String) - Static method in class org.apache.hadoop.mapred.JobID
-
Construct a JobID object from a given string.
- forName(String) - Static method in class org.apache.hadoop.mapred.TaskAttemptID
-
Construct a TaskAttemptID object from a given string.
- forName(String) - Static method in class org.apache.hadoop.mapred.TaskID
-
- forName(String) - Static method in class org.apache.hadoop.mapreduce.JobID
-
Construct a JobID object from a given string.
- forName(String) - Static method in class org.apache.hadoop.mapreduce.TaskAttemptID
-
Construct a TaskAttemptID object from a given string.
- forName(String) - Static method in class org.apache.hadoop.mapreduce.TaskID
-
Construct a TaskID object from a given string.
- fromEscapedCompactString(String) - Static method in class org.apache.hadoop.mapred.Counters
-
- fromShort(short) - Method in class org.apache.hadoop.fs.permission.FsPermission
-
- fromString(String) - Method in class org.apache.hadoop.io.DefaultStringifier
-
- fromString(String) - Method in interface org.apache.hadoop.io.Stringifier
-
Restores the object from its string representation.
- fromString(String) - Static method in class org.apache.hadoop.yarn.api.records.ContainerId
-
- fs - Variable in class org.apache.hadoop.fs.FilterFileSystem
-
- fs - Variable in class org.apache.hadoop.fs.TrashPolicy
-
- fs - Variable in class org.apache.hadoop.mapred.lib.CombineFileRecordReader
-
- fs - Variable in class org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader
-
- FS_APPLICATION_HISTORY_STORE_COMPRESSION_TYPE - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
T-file compression types used to compress history data.
- FS_APPLICATION_HISTORY_STORE_URI - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
URI for FileSystemApplicationHistoryStore
- FS_AUTOMATIC_CLOSE_DEFAULT - Static variable in class org.apache.hadoop.fs.CommonConfigurationKeysPublic
-
Default value for FS_AUTOMATIC_CLOSE_KEY
- FS_AUTOMATIC_CLOSE_KEY - Static variable in class org.apache.hadoop.fs.CommonConfigurationKeysPublic
-
- FS_BASED_RM_CONF_STORE - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
Store the related configuration files in the file system.
- FS_CLIENT_RESOLVE_REMOTE_SYMLINKS_DEFAULT - Static variable in class org.apache.hadoop.fs.CommonConfigurationKeysPublic
-
Default value for FS_CLIENT_RESOLVE_REMOTE_SYMLINKS_KEY
- FS_CLIENT_RESOLVE_REMOTE_SYMLINKS_KEY - Static variable in class org.apache.hadoop.fs.CommonConfigurationKeysPublic
-
- FS_DEFAULT_NAME_DEFAULT - Static variable in class org.apache.hadoop.fs.CommonConfigurationKeysPublic
-
Default value for FS_DEFAULT_NAME_KEY
- FS_DEFAULT_NAME_KEY - Static variable in class org.apache.hadoop.fs.CommonConfigurationKeysPublic
-
- FS_DEFAULT_NAME_KEY - Static variable in class org.apache.hadoop.fs.FileSystem
-
- FS_DF_INTERVAL_DEFAULT - Static variable in class org.apache.hadoop.fs.CommonConfigurationKeysPublic
-
Default value for FS_DF_INTERVAL_KEY
- FS_DF_INTERVAL_KEY - Static variable in class org.apache.hadoop.fs.CommonConfigurationKeysPublic
-
- FS_DU_INTERVAL_DEFAULT - Static variable in class org.apache.hadoop.fs.CommonConfigurationKeysPublic
-
Default value for FS_DU_INTERVAL_KEY
- FS_DU_INTERVAL_KEY - Static variable in class org.apache.hadoop.fs.CommonConfigurationKeysPublic
-
- FS_FILE_IMPL_KEY - Static variable in class org.apache.hadoop.fs.CommonConfigurationKeysPublic
-
- FS_FTP_HOST - Static variable in class org.apache.hadoop.fs.ftp.FTPFileSystem
-
- FS_FTP_HOST_KEY - Static variable in class org.apache.hadoop.fs.CommonConfigurationKeysPublic
-
- FS_FTP_HOST_PORT - Static variable in class org.apache.hadoop.fs.ftp.FTPFileSystem
-
- FS_FTP_HOST_PORT_KEY - Static variable in class org.apache.hadoop.fs.CommonConfigurationKeysPublic
-
- FS_FTP_PASSWORD_PREFIX - Static variable in class org.apache.hadoop.fs.ftp.FTPFileSystem
-
- FS_FTP_USER_PREFIX - Static variable in class org.apache.hadoop.fs.ftp.FTPFileSystem
-
- FS_LOCAL_BLOCK_SIZE_DEFAULT - Static variable in class org.apache.hadoop.fs.CommonConfigurationKeysPublic
-
Not used anywhere, looks like default value for FS_LOCAL_BLOCK_SIZE
- FS_NODE_LABELS_STORE_RETRY_POLICY_SPEC - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- FS_NODE_LABELS_STORE_ROOT_DIR - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
URI for NodeLabelManager
- FS_RM_STATE_STORE_RETRY_POLICY_SPEC - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- FS_RM_STATE_STORE_URI - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
URI for FileSystemRMStateStore
- FS_TRASH_CHECKPOINT_INTERVAL_DEFAULT - Static variable in class org.apache.hadoop.fs.CommonConfigurationKeysPublic
-
Default value for FS_TRASH_CHECKPOINT_INTERVAL_KEY
- FS_TRASH_CHECKPOINT_INTERVAL_KEY - Static variable in class org.apache.hadoop.fs.CommonConfigurationKeysPublic
-
- FS_TRASH_INTERVAL_DEFAULT - Static variable in class org.apache.hadoop.fs.CommonConfigurationKeysPublic
-
Default value for FS_TRASH_INTERVAL_KEY
- FS_TRASH_INTERVAL_KEY - Static variable in class org.apache.hadoop.fs.CommonConfigurationKeysPublic
-
- FsAction - Enum in org.apache.hadoop.fs.permission
-
File system actions, e.g. read, write, etc.
- FsConstants - Interface in org.apache.hadoop.fs
-
FileSystem related constants.
- FSDataInputStream - Class in org.apache.hadoop.fs
-
- FSDataInputStream(InputStream) - Constructor for class org.apache.hadoop.fs.FSDataInputStream
-
- FSDataOutputStream - Class in org.apache.hadoop.fs
-
- FSDataOutputStream(OutputStream) - Constructor for class org.apache.hadoop.fs.FSDataOutputStream
-
Deprecated.
- FSDataOutputStream(OutputStream, FileSystem.Statistics) - Constructor for class org.apache.hadoop.fs.FSDataOutputStream
-
- FSDataOutputStream(OutputStream, FileSystem.Statistics, long) - Constructor for class org.apache.hadoop.fs.FSDataOutputStream
-
- FSError - Error in org.apache.hadoop.fs
-
Thrown for unexpected filesystem errors, presumed to reflect disk errors
in the native filesystem.
- FsPermission - Class in org.apache.hadoop.fs.permission
-
A class for file/directory permissions.
- FsPermission(FsAction, FsAction, FsAction) - Constructor for class org.apache.hadoop.fs.permission.FsPermission
-
- FsPermission(FsAction, FsAction, FsAction, boolean) - Constructor for class org.apache.hadoop.fs.permission.FsPermission
-
- FsPermission(short) - Constructor for class org.apache.hadoop.fs.permission.FsPermission
-
Construct by the given mode.
- FsPermission(FsPermission) - Constructor for class org.apache.hadoop.fs.permission.FsPermission
-
Copy constructor
- FsPermission(String) - Constructor for class org.apache.hadoop.fs.permission.FsPermission
-
Construct by given mode, either in octal or symbolic format.
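A minimal sketch of constructing FsPermission objects; the permission values and class name are illustrative assumptions, showing the octal-string and short constructors side by side.

    import org.apache.hadoop.fs.permission.FsPermission;

    public class FsPermissionExample {                    // hypothetical example class
      public static void main(String[] args) {
        FsPermission dirPerm = new FsPermission("755");          // octal string form
        FsPermission filePerm = new FsPermission((short) 0644);  // numeric short form
        System.out.println(dirPerm + " " + filePerm);            // prints rwxr-xr-x rw-r--r--
      }
    }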
- FsServerDefaults - Class in org.apache.hadoop.fs
-
Provides server default configuration values to clients.
- FsServerDefaults() - Constructor for class org.apache.hadoop.fs.FsServerDefaults
-
- FsServerDefaults(long, int, int, short, int, boolean, long, DataChecksum.Type) - Constructor for class org.apache.hadoop.fs.FsServerDefaults
-
- FsStatus - Class in org.apache.hadoop.fs
-
This class is used to represent the capacity, free and used space on a FileSystem.
- FsStatus(long, long, long) - Constructor for class org.apache.hadoop.fs.FsStatus
-
Construct a FsStatus object, using the specified statistics
- FTP_SCHEME - Static variable in interface org.apache.hadoop.fs.FsConstants
-
- FTPException - Exception in org.apache.hadoop.fs.ftp
-
A class to wrap a Throwable into a RuntimeException.
- FTPException(String) - Constructor for exception org.apache.hadoop.fs.ftp.FTPException
-
- FTPException(Throwable) - Constructor for exception org.apache.hadoop.fs.ftp.FTPException
-
- FTPException(String, Throwable) - Constructor for exception org.apache.hadoop.fs.ftp.FTPException
-
- FTPFileSystem - Class in org.apache.hadoop.fs.ftp
-
- FTPFileSystem() - Constructor for class org.apache.hadoop.fs.ftp.FTPFileSystem
-
- fullyDelete(File) - Static method in class org.apache.hadoop.fs.FileUtil
-
Delete a directory and all its contents.
- fullyDelete(File, boolean) - Static method in class org.apache.hadoop.fs.FileUtil
-
Delete a directory and all its contents.
- fullyDelete(FileSystem, Path) - Static method in class org.apache.hadoop.fs.FileUtil
-
- fullyDeleteContents(File) - Static method in class org.apache.hadoop.fs.FileUtil
-
Delete the contents of a directory, not the directory itself.
- fullyDeleteContents(File, boolean) - Static method in class org.apache.hadoop.fs.FileUtil
-
Delete the contents of a directory, not the directory itself.
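A hedged sketch of the fullyDelete helpers on a local directory; the path is an assumption for illustration, and the boolean result indicates whether deletion fully succeeded.

    import java.io.File;
    import org.apache.hadoop.fs.FileUtil;

    public class FullyDeleteExample {                     // hypothetical example class
      public static void main(String[] args) {
        File scratch = new File("/tmp/hadoop-scratch");   // illustrative path
        // Remove the directory and everything beneath it.
        boolean deleted = FileUtil.fullyDelete(scratch);
        System.out.println("deleted: " + deleted);
      }
    }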
- GangliaContext - Class in org.apache.hadoop.metrics.ganglia
-
Context for sending metrics to Ganglia.
- GangliaContext() - Constructor for class org.apache.hadoop.metrics.ganglia.GangliaContext
-
Creates a new instance of GangliaContext
- gauge(MetricsInfo, int) - Method in interface org.apache.hadoop.metrics2.MetricsVisitor
-
Callback for integer value gauges
- gauge(MetricsInfo, long) - Method in interface org.apache.hadoop.metrics2.MetricsVisitor
-
Callback for long value gauges
- gauge(MetricsInfo, float) - Method in interface org.apache.hadoop.metrics2.MetricsVisitor
-
Callback for float value gauges
- gauge(MetricsInfo, double) - Method in interface org.apache.hadoop.metrics2.MetricsVisitor
-
Callback for double value gauges
- genCode(String, String, ArrayList<String>) - Method in class org.apache.hadoop.record.compiler.JFile
-
Deprecated.
Generate record code in given language.
- generateActualKey(K, V) - Method in class org.apache.hadoop.mapred.lib.MultipleOutputFormat
-
Generate the actual key from the given key/value.
- generateActualValue(K, V) - Method in class org.apache.hadoop.mapred.lib.MultipleOutputFormat
-
Generate the actual value from the given key and value.
- generateEntry(String, String, Text) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
-
- generateEntry(String, String, Text) - Static method in class org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregatorBaseDescriptor
-
- generateFileNameForKeyValue(K, V, String) - Method in class org.apache.hadoop.mapred.lib.MultipleOutputFormat
-
Generate the output file name based on the given key and the leaf file name.
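As an illustrative sketch, generateFileNameForKeyValue is typically customized by subclassing; the class below is hypothetical and routes each record into a per-key subdirectory while keeping the leaf file name.

    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.lib.MultipleTextOutputFormat;

    // Hypothetical subclass that partitions output files by key.
    public class KeyBasedTextOutputFormat extends MultipleTextOutputFormat<Text, Text> {
      @Override
      protected String generateFileNameForKeyValue(Text key, Text value, String name) {
        // Place records under a per-key subdirectory, keeping the leaf file name.
        return key.toString() + "/" + name;
      }
    }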
- generateKey(int, String) - Method in class org.apache.hadoop.crypto.key.KeyProvider
-
Generates key material.
- generateKeyValPairs(Object, Object) - Method in class org.apache.hadoop.mapreduce.lib.aggregate.UserDefinedValueAggregatorDescriptor
-
Generate a list of aggregation-id/value pairs for the given
key/value pairs by delegating the invocation to the real object.
- generateKeyValPairs(Object, Object) - Method in class org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregatorBaseDescriptor
-
Generate 1 or 2 aggregation-id/value pairs for the given key/value pair.
- generateKeyValPairs(Object, Object) - Method in interface org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregatorDescriptor
-
Generate a list of aggregation-id/value pairs for
the given key/value pair.
- generateLeafFileName(String) - Method in class org.apache.hadoop.mapred.lib.MultipleOutputFormat
-
Generate the leaf name for the output file name.
- generateParseException() - Method in class org.apache.hadoop.record.compiler.generated.Rcc
-
Deprecated.
- generateStateGraph(String) - Method in class org.apache.hadoop.yarn.state.StateMachineFactory
-
Generate a graph that represents the state graph of this StateMachine.
- generateValueAggregator(String) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
-
- generateValueAggregator(String, long) - Static method in class org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregatorBaseDescriptor
-
- GenericWritable - Class in org.apache.hadoop.io
-
A wrapper for Writable instances.
- GenericWritable() - Constructor for class org.apache.hadoop.io.GenericWritable
-
- get(String) - Method in class org.apache.hadoop.conf.Configuration
-
Get the value of the name property, null if no such property exists.
- get(String, String) - Method in class org.apache.hadoop.conf.Configuration
-
Get the value of the name property, or the supplied default value if no such property exists.
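A minimal usage sketch for the Configuration getters above; the property names shown are illustrative, and the two-argument form falls back to the supplied default.

    import org.apache.hadoop.conf.Configuration;

    public class ConfigurationGetExample {                // hypothetical example class
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        String fsUri = conf.get("fs.defaultFS");                  // null if unset
        String buffer = conf.get("io.file.buffer.size", "4096");  // falls back to the default
        System.out.println(fsUri + " " + buffer);
      }
    }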
- get(URI, Configuration) - Static method in class org.apache.hadoop.crypto.key.KeyProviderFactory
-
Create a KeyProvider based on a provided URI.
- get(URI, Configuration) - Static method in class org.apache.hadoop.fs.AbstractFileSystem
-
The main factory method for creating a file system.
- get(URI, Configuration, String) - Static method in class org.apache.hadoop.fs.FileSystem
-
Get a filesystem instance based on the uri, the passed
configuration and the user
- get(Configuration) - Static method in class org.apache.hadoop.fs.FileSystem
-
Returns the configured filesystem implementation.
- get(URI, Configuration) - Static method in class org.apache.hadoop.fs.FileSystem
-
Returns the FileSystem for this URI's scheme and authority.
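A hedged sketch of obtaining a FileSystem from a URI; the namenode address and path are assumptions for illustration only.

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class FileSystemGetExample {                   // hypothetical example class
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Resolve the FileSystem for an explicit URI (the address is illustrative).
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020/"), conf);
        System.out.println(fs.exists(new Path("/user")));
      }
    }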
- get() - Method in class org.apache.hadoop.io.ArrayPrimitiveWritable
-
Get the original array.
- get() - Method in class org.apache.hadoop.io.ArrayWritable
-
- get() - Method in class org.apache.hadoop.io.BooleanWritable
-
Returns the value of the BooleanWritable
- get() - Method in class org.apache.hadoop.io.BytesWritable
-
- get() - Method in class org.apache.hadoop.io.ByteWritable
-
Return the value of this ByteWritable.
- get() - Method in class org.apache.hadoop.io.DoubleWritable
-
- get() - Method in class org.apache.hadoop.io.EnumSetWritable
-
Return the value of this EnumSetWritable.
- get() - Method in class org.apache.hadoop.io.FloatWritable
-
Return the value of this FloatWritable.
- get() - Method in class org.apache.hadoop.io.GenericWritable
-
Return the wrapped instance.
- get() - Method in class org.apache.hadoop.io.IntWritable
-
Return the value of this IntWritable.
- get() - Method in class org.apache.hadoop.io.LongWritable
-
Return the value of this LongWritable.
- get(Object) - Method in class org.apache.hadoop.io.MapWritable
-
- get() - Static method in class org.apache.hadoop.io.NullWritable
-
Returns the single instance of this class.
- get() - Method in class org.apache.hadoop.io.ObjectWritable
-
Return the instance, or null if none.
- get() - Method in class org.apache.hadoop.io.ShortWritable
-
Return the value of this ShortWritable.
- get(Object) - Method in class org.apache.hadoop.io.SortedMapWritable
-
- get() - Method in class org.apache.hadoop.io.TwoDArrayWritable
-
- get() - Method in class org.apache.hadoop.io.VIntWritable
-
Return the value of this VIntWritable.
- get() - Method in class org.apache.hadoop.io.VLongWritable
-
Return the value of this VLongWritable.
- get(Class<? extends WritableComparable>) - Static method in class org.apache.hadoop.io.WritableComparator
-
For backwards compatibility.
- get(Class<? extends WritableComparable>, Configuration) - Static method in class org.apache.hadoop.io.WritableComparator
-
- get(int) - Method in class org.apache.hadoop.mapred.join.CompositeInputSplit
-
Get ith child InputSplit.
- get(int) - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeInputSplit
-
Get ith child InputSplit.
- get(int) - Method in class org.apache.hadoop.mapreduce.lib.join.TupleWritable
-
Get ith Writable from Tuple.
- get(String) - Method in class org.apache.hadoop.metrics2.lib.MetricsRegistry
-
Get a metric by name
- get(String, Collection<MetricsTag>) - Method in class org.apache.hadoop.metrics2.util.MetricsCache
-
Get the cached record
- get(DataInput) - Static method in class org.apache.hadoop.record.BinaryRecordInput
-
Deprecated.
Get a thread-local record input for the supplied DataInput.
- get(DataOutput) - Static method in class org.apache.hadoop.record.BinaryRecordOutput
-
Deprecated.
Get a thread-local record output for the supplied DataOutput.
- get() - Method in class org.apache.hadoop.record.Buffer
-
Deprecated.
Get the data from the Buffer.
- get(String) - Method in class org.apache.hadoop.registry.client.types.ServiceRecord
-
Get the "other" attribute with a specific key
- get(String, String) - Method in class org.apache.hadoop.registry.client.types.ServiceRecord
-
Get the "other" attribute with a specific key.
- getAccessibleNodeLabels() - Method in class org.apache.hadoop.yarn.api.records.QueueInfo
-
Get the accessible node labels
of the queue.
- getAccessTime() - Method in class org.apache.hadoop.fs.FileStatus
-
Get the access time of the file.
- getAclBit() - Method in class org.apache.hadoop.fs.permission.FsPermission
-
Returns true if there is also an ACL (access control list).
- getAclStatus(Path) - Method in class org.apache.hadoop.fs.AbstractFileSystem
-
Gets the ACLs of files and directories.
- getAclStatus(Path) - Method in class org.apache.hadoop.fs.FileContext
-
Gets the ACLs of files and directories.
- getAclStatus(Path) - Method in class org.apache.hadoop.fs.FileSystem
-
Gets the ACL of a file or directory.
- getAclStatus(Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
-
- getAclStatus(Path) - Method in class org.apache.hadoop.fs.viewfs.ViewFileSystem
-
- getAclStatus(Path) - Method in class org.apache.hadoop.fs.viewfs.ViewFs
-
- getAclString() - Method in class org.apache.hadoop.security.authorize.AccessControlList
-
Returns the access control list as a String that can be used for building a new instance by sending it to the constructor of AccessControlList.
- getActiveTaskTrackers() - Method in class org.apache.hadoop.mapreduce.Cluster
-
Get all active trackers in the cluster.
- getActiveTrackerNames() - Method in class org.apache.hadoop.mapred.ClusterStatus
-
Get the names of task trackers in the cluster.
- getAddress() - Method in class org.apache.hadoop.ha.HAServiceTarget
-
- getAddressField(Map<String, String>, String) - Static method in class org.apache.hadoop.registry.client.binding.RegistryTypeUtils
-
Get a specific field from an address, raising an exception if the field is not present.
- getAdjustedEnd() - Method in class org.apache.hadoop.io.compress.SplitCompressionInputStream
-
After calling createInputStream, the values of start or end
might change.
- getAdjustedStart() - Method in class org.apache.hadoop.io.compress.SplitCompressionInputStream
-
After calling createInputStream, the values of start or end
might change.
- getAggregatorDescriptors(Configuration) - Static method in class org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregatorJobBase
-
- getAlgorithmName() - Method in class org.apache.hadoop.fs.FileChecksum
-
The checksum algorithm name
- getAliases() - Method in class org.apache.hadoop.security.alias.CredentialProvider
-
Get the aliases for all credentials.
- getAllEvents() - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEvents
-
- getAllFileInfo() - Method in class org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager
-
- getAllJobs() - Method in class org.apache.hadoop.mapred.JobClient
-
Get the jobs that are submitted.
- getAllJobs() - Method in class org.apache.hadoop.mapreduce.Cluster
-
- getAllJobStatuses() - Method in class org.apache.hadoop.mapreduce.Cluster
-
Get job status for all jobs in the cluster.
- getAllocatedContainers() - Method in class org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse
-
Get the list of Containers newly allocated by the ResourceManager.
- getAllocatedResource() - Method in class org.apache.hadoop.yarn.api.records.ContainerReport
-
Get the allocated Resource
of the container.
- getAllPartialJobs() - Method in interface org.apache.hadoop.mapreduce.v2.hs.HistoryStorage
-
Get all of the cached jobs.
- getAllQueues() - Method in class org.apache.hadoop.yarn.client.api.YarnClient
-
Get information (QueueInfo) about all queues, recursively if there is a hierarchy.
- getAllRecords() - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
-
Retrieves all the records managed by this MetricsContext.
- getAllServicesMetaData() - Method in class org.apache.hadoop.yarn.api.protocolrecords.StartContainersResponse
-
Get the meta-data from all auxiliary services running on the NodeManager.
- getAllStatistics() - Static method in class org.apache.hadoop.fs.AbstractFileSystem
-
- getAllStatistics() - Static method in class org.apache.hadoop.fs.FileContext
-
- getAllStatistics() - Static method in class org.apache.hadoop.fs.FileSystem
-
Return the FileSystem classes that have Statistics
- getAllTaskTypes() - Static method in class org.apache.hadoop.mapreduce.TaskID
-
- getAMCommand() - Method in class org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse
-
If the ResourceManager needs the ApplicationMaster to take some action, then it will send an AMCommand to the ApplicationMaster.
- getAMContainerId() - Method in class org.apache.hadoop.yarn.api.records.ApplicationAttemptReport
-
Get the ContainerId of the AM container for this attempt.
- getAMContainerResourceRequest() - Method in class org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext
-
Get the ResourceRequest of the AM container; if this is not null, the scheduler will use it to acquire resources for the AM container.
- getAMContainerSpec() - Method in class org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext
-
Get the ContainerLaunchContext to describe the Container with which the ApplicationMaster is launched.
- getAMRMToken() - Method in class org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse
-
The AMRMToken that belongs to this attempt.
- getAMRMToken() - Method in class org.apache.hadoop.yarn.api.records.ApplicationReport
-
Get the AMRM token of the application.
- getAMRMToken(ApplicationId) - Method in class org.apache.hadoop.yarn.client.api.YarnClient
-
Get the AMRM token of the application.
- getAMRMTokenService(Configuration) - Static method in class org.apache.hadoop.yarn.client.ClientRMProxy
-
- getApplicationACLs() - Method in class org.apache.hadoop.yarn.api.protocolrecords.RegisterApplicationMasterResponse
-
Get the ApplicationACLs for the application.
- getApplicationACLs() - Method in class org.apache.hadoop.yarn.api.records.ContainerLaunchContext
-
Get the ApplicationACLs for the application.
- getApplicationAcls() - Method in class org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat.LogReader
-
Returns ACLs for the application.
- getApplicationAttemptId() - Method in class org.apache.hadoop.yarn.api.protocolrecords.GetApplicationAttemptReportRequest
-
Get the ApplicationAttemptId
of an application attempt.
- getApplicationAttemptId() - Method in class org.apache.hadoop.yarn.api.protocolrecords.GetContainersRequest
-
Get the ApplicationAttemptId
of an application attempt.
- getApplicationAttemptId() - Method in class org.apache.hadoop.yarn.api.records.ApplicationAttemptReport
-
Get the ApplicationAttemptId of this attempt of the application.
- getApplicationAttemptId() - Method in class org.apache.hadoop.yarn.api.records.ContainerId
-
Get the ApplicationAttemptId of the application to which the Container was assigned.
- getApplicationAttemptId() - Method in class org.apache.hadoop.yarn.security.AMRMTokenIdentifier
-
- getApplicationAttemptID() - Method in class org.apache.hadoop.yarn.security.client.ClientToAMTokenIdentifier
-
- getApplicationAttemptId() - Method in class org.apache.hadoop.yarn.security.NMTokenIdentifier
-
- getApplicationAttemptList() - Method in class org.apache.hadoop.yarn.api.protocolrecords.GetApplicationAttemptsResponse
-
Get a list of ApplicationReport
of an application.
- getApplicationAttemptReport() - Method in class org.apache.hadoop.yarn.api.protocolrecords.GetApplicationAttemptReportResponse
-
Get the ApplicationAttemptReport
for the application attempt.
- getApplicationAttemptReport(ApplicationAttemptId) - Method in class org.apache.hadoop.yarn.client.api.AHSClient
-
Get a report of the given ApplicationAttempt.
- getApplicationAttemptReport(ApplicationAttemptId) - Method in class org.apache.hadoop.yarn.client.api.YarnClient
-
Get a report of the given ApplicationAttempt.
- GetApplicationAttemptReportRequest - Class in org.apache.hadoop.yarn.api.protocolrecords
-
- GetApplicationAttemptReportRequest() - Constructor for class org.apache.hadoop.yarn.api.protocolrecords.GetApplicationAttemptReportRequest
-
- GetApplicationAttemptReportResponse - Class in org.apache.hadoop.yarn.api.protocolrecords
-
The response sent by the ResourceManager
to a client requesting
an application attempt report.
- GetApplicationAttemptReportResponse() - Constructor for class org.apache.hadoop.yarn.api.protocolrecords.GetApplicationAttemptReportResponse
-
- getApplicationAttempts(ApplicationId) - Method in class org.apache.hadoop.yarn.client.api.AHSClient
-
Get a report of all (ApplicationAttempts) of Application in the cluster.
- getApplicationAttempts(ApplicationId) - Method in class org.apache.hadoop.yarn.client.api.YarnClient
-
Get a report of all (ApplicationAttempts) of Application in the cluster.
- GetApplicationAttemptsRequest - Class in org.apache.hadoop.yarn.api.protocolrecords
-
The request from clients to get a list of application attempt reports of an application from the ResourceManager.
- GetApplicationAttemptsRequest() - Constructor for class org.apache.hadoop.yarn.api.protocolrecords.GetApplicationAttemptsRequest
-
- GetApplicationAttemptsResponse - Class in org.apache.hadoop.yarn.api.protocolrecords
-
The response sent by the ResourceManager to a client requesting a list of ApplicationAttemptReport for application attempts.
- GetApplicationAttemptsResponse() - Constructor for class org.apache.hadoop.yarn.api.protocolrecords.GetApplicationAttemptsResponse
-
- getApplicationId() - Method in class org.apache.hadoop.yarn.api.protocolrecords.GetApplicationAttemptsRequest
-
Get the ApplicationId
of an application
- getApplicationId() - Method in class org.apache.hadoop.yarn.api.protocolrecords.GetApplicationReportRequest
-
Get the ApplicationId
of the application.
- getApplicationId() - Method in class org.apache.hadoop.yarn.api.protocolrecords.GetNewApplicationResponse
-
Get the new ApplicationId allocated by the ResourceManager.
- getApplicationId() - Method in class org.apache.hadoop.yarn.api.protocolrecords.KillApplicationRequest
-
Get the ApplicationId
of the application to be aborted.
- getApplicationId() - Method in class org.apache.hadoop.yarn.api.protocolrecords.MoveApplicationAcrossQueuesRequest
-
Get the ApplicationId
of the application to be moved.
- getApplicationId() - Method in class org.apache.hadoop.yarn.api.records.ApplicationAttemptId
-
Get the ApplicationId of the ApplicationAttemptId.
- getApplicationId() - Method in class org.apache.hadoop.yarn.api.records.ApplicationReport
-
Get the ApplicationId
of the application.
- getApplicationId() - Method in class org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext
-
Get the ApplicationId
of the submitted application.
- getApplicationList() - Method in class org.apache.hadoop.yarn.api.protocolrecords.GetApplicationsResponse
-
Get ApplicationReport
for applications.
- getApplicationName() - Method in class org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext
-
Get the application name.
- getApplicationOwner() - Method in class org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat.LogReader
-
Returns the owner of the application.
- getApplicationReport() - Method in class org.apache.hadoop.yarn.api.protocolrecords.GetApplicationReportResponse
-
Get the ApplicationReport
for the application.
- getApplicationReport(ApplicationId) - Method in class org.apache.hadoop.yarn.client.api.AHSClient
-
Get a report of the given Application.
- getApplicationReport(ApplicationId) - Method in class org.apache.hadoop.yarn.client.api.YarnClient
-
Get a report of the given Application.
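A hedged sketch of fetching an ApplicationReport through YarnClient; the cluster timestamp and application id are placeholders for a real application, and the class name is an assumption for illustration.

    import org.apache.hadoop.yarn.api.records.ApplicationId;
    import org.apache.hadoop.yarn.api.records.ApplicationReport;
    import org.apache.hadoop.yarn.client.api.YarnClient;
    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    public class ApplicationReportExample {               // hypothetical example class
      public static void main(String[] args) throws Exception {
        YarnClient client = YarnClient.createYarnClient();
        client.init(new YarnConfiguration());
        client.start();
        // Placeholder id; in practice this comes from job submission or listing.
        ApplicationId appId = ApplicationId.newInstance(1450000000000L, 1);
        ApplicationReport report = client.getApplicationReport(appId);
        System.out.println(report.getYarnApplicationState());
        client.stop();
      }
    }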
- GetApplicationReportRequest - Class in org.apache.hadoop.yarn.api.protocolrecords
-
The request sent by a client to the ResourceManager to get an ApplicationReport for an application.
- GetApplicationReportRequest() - Constructor for class org.apache.hadoop.yarn.api.protocolrecords.GetApplicationReportRequest
-
- GetApplicationReportResponse - Class in org.apache.hadoop.yarn.api.protocolrecords
-
The response sent by the ResourceManager
to a client
requesting an application report.
- GetApplicationReportResponse() - Constructor for class org.apache.hadoop.yarn.api.protocolrecords.GetApplicationReportResponse
-
- getApplicationResourceUsageReport() - Method in class org.apache.hadoop.yarn.api.records.ApplicationReport
-
Retrieve the structure containing the job resources for this application
- getApplications() - Method in class org.apache.hadoop.yarn.api.records.QueueInfo
-
Get the running applications of the queue.
- getApplications() - Method in class org.apache.hadoop.yarn.client.api.AHSClient
-
Get a report (ApplicationReport) of all Applications in the cluster.
- getApplications() - Method in class org.apache.hadoop.yarn.client.api.YarnClient
-
Get a report (ApplicationReport) of all Applications in the cluster.
- getApplications(Set<String>) - Method in class org.apache.hadoop.yarn.client.api.YarnClient
-
Get a report (ApplicationReport) of Applications
matching the given application types in the cluster.
- getApplications(EnumSet<YarnApplicationState>) - Method in class org.apache.hadoop.yarn.client.api.YarnClient
-
Get a report (ApplicationReport) of Applications matching the given
application states in the cluster.
- getApplications(Set<String>, EnumSet<YarnApplicationState>) - Method in class org.apache.hadoop.yarn.client.api.YarnClient
-
Get a report (ApplicationReport) of Applications matching the given
application types and application states in the cluster.
- GetApplicationsRequest - Class in org.apache.hadoop.yarn.api.protocolrecords
-
The request from clients to get a report of Applications in the cluster from the ResourceManager.
- GetApplicationsRequest() - Constructor for class org.apache.hadoop.yarn.api.protocolrecords.GetApplicationsRequest
-
- GetApplicationsResponse - Class in org.apache.hadoop.yarn.api.protocolrecords
-
The response sent by the ResourceManager to a client requesting an ApplicationReport for applications.
- GetApplicationsResponse() - Constructor for class org.apache.hadoop.yarn.api.protocolrecords.GetApplicationsResponse
-
- getApplicationStates() - Method in class org.apache.hadoop.yarn.api.protocolrecords.GetApplicationsRequest
-
Get the application states to filter applications on
- getApplicationSubmissionContext() - Method in class org.apache.hadoop.yarn.api.protocolrecords.SubmitApplicationRequest
-
Get the ApplicationSubmissionContext
for the application.
- getApplicationSubmissionContext() - Method in class org.apache.hadoop.yarn.client.api.YarnClientApplication
-
- getApplicationSubmitter() - Method in class org.apache.hadoop.yarn.security.ContainerTokenIdentifier
-
- getApplicationSubmitter() - Method in class org.apache.hadoop.yarn.security.NMTokenIdentifier
-
- getApplicationTags() - Method in class org.apache.hadoop.yarn.api.protocolrecords.GetApplicationsRequest
-
Get the tags to filter applications on
- getApplicationTags() - Method in class org.apache.hadoop.yarn.api.records.ApplicationReport
-
Get all tags corresponding to the application
- getApplicationTags() - Method in class org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext
-
Get tags for the application
- getApplicationType() - Method in class org.apache.hadoop.yarn.api.records.ApplicationReport
-
Get the application's Type
- getApplicationType() - Method in class org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext
-
Get the application type
- getApplicationTypes() - Method in class org.apache.hadoop.yarn.api.protocolrecords.GetApplicationsRequest
-
Get the application types to filter applications on
- getApproxChkSumLength(long) - Static method in class org.apache.hadoop.fs.ChecksumFileSystem
-
- getArchiveClassPaths() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the archive entries in classpath as an array of Path
- getArchiveTimestamps() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the timestamps of the archives.
- getArrival() - Method in class org.apache.hadoop.yarn.api.records.ReservationDefinition
-
Get the arrival time or the earliest time from which the resource(s) can be
allocated.
- getAskList() - Method in class org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest
-
Get the list of ResourceRequest to update the ResourceManager about the application's resource requirements.
- getAssignedJobID() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
-
- getAssignedNode() - Method in class org.apache.hadoop.yarn.api.records.ContainerReport
-
Get the allocated NodeId where the container is running.
- getAttemptFailuresValidityInterval() - Method in class org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext
-
Get the attemptFailuresValidityInterval in milliseconds for the application
- getAttemptId() - Method in class org.apache.hadoop.yarn.api.records.ApplicationAttemptId
-
Get the attempt id of the Application.
- getAttemptsToStartSkipping(Configuration) - Static method in class org.apache.hadoop.mapred.SkipBadRecords
-
Get the number of Task attempts AFTER which skip mode
will be kicked off.
- getAttribute(String) - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
-
Convenience method for subclasses to access factory attributes.
- getAttributeTable(String) - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
-
Returns an attribute-value map derived from the factory attributes
by finding all factory attributes that begin with
contextName.tableName.
- getAuthMethod() - Method in enum org.apache.hadoop.security.UserGroupInformation.AuthenticationMethod
-
- getAutoIncrMapperProcCount(Configuration) - Static method in class org.apache.hadoop.mapred.SkipBadRecords
-
- getAutoIncrReducerProcCount(Configuration) - Static method in class org.apache.hadoop.mapred.SkipBadRecords
-
- getAvailableResources() - Method in class org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse
-
Get the available headroom for resources in the cluster for the
application.
- getAvailableResources() - Method in class org.apache.hadoop.yarn.client.api.AMRMClient
-
Get the currently available resources in the cluster.
- getAvailableResources() - Method in class org.apache.hadoop.yarn.client.api.async.AMRMClientAsync
-
Get the currently available resources in the cluster.
- getBaseName(String) - Static method in class org.apache.hadoop.crypto.key.KeyProvider
-
Split the versionName into a base name.
- getBaseRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.lib.MultipleOutputFormat
-
- getBaseRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.lib.MultipleSequenceFileOutputFormat
-
- getBaseRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.lib.MultipleTextOutputFormat
-
- getBeginColumn() - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
Deprecated.
- getBeginLine() - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
Deprecated.
- getBlacklistAdditions() - Method in class org.apache.hadoop.yarn.api.records.ResourceBlacklistRequest
-
Get the list of resource-names which should be added to the
application blacklist.
- getBlackListedTaskTrackerCount() - Method in class org.apache.hadoop.mapreduce.ClusterMetrics
-
Get the number of blacklisted trackers in the cluster.
- getBlackListedTaskTrackers() - Method in class org.apache.hadoop.mapreduce.Cluster
-
Get blacklisted trackers.
- getBlacklistedTrackerNames() - Method in class org.apache.hadoop.mapred.ClusterStatus
-
Get the names of task trackers in the cluster.
- getBlacklistedTrackers() - Method in class org.apache.hadoop.mapred.ClusterStatus
-
Get the number of blacklisted task trackers in the cluster.
- getBlackListedTrackersInfo() - Method in class org.apache.hadoop.mapred.ClusterStatus
-
Gets the list of blacklisted trackers along with reasons for blacklisting.
- getBlacklistRemovals() - Method in class org.apache.hadoop.yarn.api.records.ResourceBlacklistRequest
-
Get the list of resource-names which should be removed from the
application blacklist.
- getBlacklistReport() - Method in class org.apache.hadoop.mapreduce.TaskTrackerInfo
-
Gets a descriptive report about why the tasktracker was blacklisted.
- getBlockers() - Method in class org.apache.hadoop.service.AbstractService
-
- getBlockers() - Method in interface org.apache.hadoop.service.Service
-
Get the blockers on a service: remote dependencies that are stopping the service from being live.
- getBlockIndex(BlockLocation[], long) - Method in class org.apache.hadoop.mapred.FileInputFormat
-
- getBlockIndex(BlockLocation[], long) - Method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-
- getBlockLocations() - Method in class org.apache.hadoop.fs.LocatedFileStatus
-
Get the file's block locations
- getBlockSize() - Method in class org.apache.hadoop.fs.FileStatus
-
Get the block size of the file.
- getBlockSize(Path) - Method in class org.apache.hadoop.fs.FileSystem
-
Deprecated.
Use getFileStatus() instead
- getBlockSize() - Method in class org.apache.hadoop.fs.FsServerDefaults
-
- getBoolean(String, boolean) - Method in class org.apache.hadoop.conf.Configuration
-
Get the value of the name property as a boolean.
- getBoundingValsQuery() - Method in class org.apache.hadoop.mapreduce.lib.db.DataDrivenDBInputFormat
-
- getBuffer(boolean, int) - Method in interface org.apache.hadoop.io.ByteBufferPool
-
Get a new direct ByteBuffer.
- getBuffer(boolean, int) - Method in class org.apache.hadoop.io.ElasticByteBufferPool
-
- getBytes() - Method in class org.apache.hadoop.fs.FileChecksum
-
The value of the checksum in bytes
- getBytes() - Method in class org.apache.hadoop.io.BinaryComparable
-
Return representative byte array for this instance.
- getBytes() - Method in class org.apache.hadoop.io.BytesWritable
-
Get the data backing the BytesWritable.
- getBytes() - Method in class org.apache.hadoop.io.Text
-
- getBytesPerChecksum() - Method in class org.apache.hadoop.fs.FsServerDefaults
-
- getBytesPerSum() - Method in class org.apache.hadoop.fs.ChecksumFileSystem
-
Return the bytes per checksum.
- getBytesRead() - Method in interface org.apache.hadoop.io.compress.Compressor
-
Return number of uncompressed bytes input so far.
- getBytesWritten() - Method in interface org.apache.hadoop.io.compress.Compressor
-
Return number of compressed bytes output so far.
- getCacheArchives() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get cache archives set in the Configuration
- getCachedHosts() - Method in class org.apache.hadoop.fs.BlockLocation
-
Get the list of hosts (hostname) hosting a cached replica of the block
- getCacheFiles() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get cache files set in the Configuration
- getCachePoolDefault() - Static method in class org.apache.hadoop.fs.permission.FsPermission
-
Get the default permission for cache pools.
- getCallbackHandler() - Method in class org.apache.hadoop.yarn.client.api.async.NMClientAsync
-
- getCancelTokensWhenComplete() - Method in class org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext
-
- getCanonicalServiceName() - Method in class org.apache.hadoop.fs.AbstractFileSystem
-
Get a canonical name for this file system.
- getCanonicalServiceName() - Method in class org.apache.hadoop.fs.FileSystem
-
Get a canonical service name for this file system.
- getCanonicalServiceName() - Method in class org.apache.hadoop.fs.s3.S3FileSystem
-
- getCanonicalServiceName() - Method in class org.apache.hadoop.fs.s3native.NativeS3FileSystem
-
- getCanonicalUri() - Method in class org.apache.hadoop.fs.FileSystem
-
Return a canonicalized form of this FileSystem's URI.
- getCanonicalUri() - Method in class org.apache.hadoop.fs.FilterFileSystem
-
- getCapability() - Method in class org.apache.hadoop.yarn.api.records.ContainerResourceIncreaseRequest
-
- getCapability() - Method in class org.apache.hadoop.yarn.api.records.NodeReport
-
Get the total Resource
on the node.
- getCapability() - Method in class org.apache.hadoop.yarn.api.records.ReservationRequest
-
Get the Resource capability of the request.
- getCapability() - Method in class org.apache.hadoop.yarn.api.records.ResourceRequest
-
Get the Resource
capability of the request.
- getCapacity() - Method in class org.apache.hadoop.fs.FsStatus
-
Return the capacity in bytes of the file system
- getCapacity() - Method in class org.apache.hadoop.io.BytesWritable
-
Get the capacity, which is the maximum size that could be handled without resizing the backing storage.
- getCapacity() - Method in class org.apache.hadoop.record.Buffer
-
Deprecated.
Get the capacity, which is the maximum count that could be handled without resizing the backing storage.
- getCapacity() - Method in class org.apache.hadoop.yarn.api.records.QueueInfo
-
Get the configured capacity of the queue.
- getChecksumFile(Path) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
-
Return the name of the checksum file associated with a file.
- getChecksumFileLength(Path, long) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
-
Return the length of the checksum file given the size of the
actual file.
- getChecksumLength(long, int) - Static method in class org.apache.hadoop.fs.ChecksumFileSystem
-
Calculate the length of the checksum file in bytes.
- getChecksumOpt() - Method in class org.apache.hadoop.fs.FileChecksum
-
- getChecksumType() - Method in class org.apache.hadoop.fs.FsServerDefaults
-
- getChildFileSystems() - Method in class org.apache.hadoop.fs.FileSystem
-
Get all the immediate child FileSystems embedded in this FileSystem.
- getChildFileSystems() - Method in class org.apache.hadoop.fs.FilterFileSystem
-
- getChildFileSystems() - Method in class org.apache.hadoop.fs.viewfs.ViewFileSystem
-
- getChildQueueInfos(String) - Method in class org.apache.hadoop.yarn.client.api.YarnClient
-
Get information (QueueInfo) about all the immediate child queues of the given queue.
- getChildQueues(String) - Method in class org.apache.hadoop.mapred.JobClient
-
Returns an array of queue information objects about immediate children
of queue queueName.
- getChildQueues(String) - Method in class org.apache.hadoop.mapreduce.Cluster
-
Returns immediate children of queueName.
- getChildQueues() - Method in class org.apache.hadoop.yarn.api.records.QueueInfo
-
Get the child queues of the queue.
- getChildren() - Method in class org.apache.hadoop.mapred.JobQueueInfo
-
- getClass(String, Class<?>) - Method in class org.apache.hadoop.conf.Configuration
-
Get the value of the name property as a Class.
- getClass(String, Class<? extends U>, Class<U>) - Method in class org.apache.hadoop.conf.Configuration
-
Get the value of the name property as a Class implementing the interface specified by xface.
- getClass(byte) - Method in class org.apache.hadoop.io.AbstractMapWritable
-
- getClass(T) - Static method in class org.apache.hadoop.util.ReflectionUtils
-
Return the correctly-typed Class of the given object.
- getClassByName(String) - Method in class org.apache.hadoop.conf.Configuration
-
Load a class by name.
- getClassByNameOrNull(String) - Method in class org.apache.hadoop.conf.Configuration
-
Load a class by name, returning null rather than throwing an exception
if it couldn't be loaded.
- getClasses(String, Class<?>...) - Method in class org.apache.hadoop.conf.Configuration
-
Get the value of the name property as an array of Class.
- getClassLoader() - Method in class org.apache.hadoop.conf.Configuration
-
- getClassName() - Method in class org.apache.hadoop.tracing.SpanReceiverInfo
-
- getCleanupProgress() - Method in class org.apache.hadoop.mapreduce.JobStatus
-
- getCleanupTaskReports(JobID) - Method in class org.apache.hadoop.mapred.JobClient
-
Get the information of the current state of the cleanup tasks of a job.
- getClient() - Method in class org.apache.hadoop.yarn.client.api.async.NMClientAsync
-
- getClientAcls() - Method in class org.apache.hadoop.registry.client.impl.zk.RegistryOperationsService
-
Get the aggregate set of ACLs the client should use
to create directories
- getClientName() - Method in class org.apache.hadoop.yarn.security.client.ClientToAMTokenIdentifier
-
- getClientToAMToken() - Method in class org.apache.hadoop.yarn.api.records.ApplicationReport
-
Get the client token for communicating with the ApplicationMaster.
- getClientToAMTokenMasterKey() - Method in class org.apache.hadoop.yarn.api.protocolrecords.RegisterApplicationMasterResponse
-
Get ClientToAMToken master key.
- getCluster() - Method in class org.apache.hadoop.mapreduce.Job
-
- getClusterHandle() - Method in class org.apache.hadoop.mapred.JobClient
-
Get a handle to the Cluster
- getClusterId(Configuration) - Static method in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- getClusterMetrics(GetClusterMetricsRequest) - Method in interface org.apache.hadoop.yarn.api.ApplicationClientProtocol
-
The interface used by clients to get metrics about the cluster from the ResourceManager.
- getClusterMetrics() - Method in class org.apache.hadoop.yarn.api.protocolrecords.GetClusterMetricsResponse
-
Get the YarnClusterMetrics
for the cluster.
- GetClusterMetricsRequest - Class in org.apache.hadoop.yarn.api.protocolrecords
-
The request sent by clients to get cluster metrics from the ResourceManager.
- GetClusterMetricsRequest() - Constructor for class org.apache.hadoop.yarn.api.protocolrecords.GetClusterMetricsRequest
-
- GetClusterMetricsResponse - Class in org.apache.hadoop.yarn.api.protocolrecords
-
The response sent by the ResourceManager
to a client
requesting cluster metrics.
- GetClusterMetricsResponse() - Constructor for class org.apache.hadoop.yarn.api.protocolrecords.GetClusterMetricsResponse
-
- getClusterNodeCount() - Method in class org.apache.hadoop.yarn.client.api.AMRMClient
-
Get the current number of nodes in the cluster.
- getClusterNodeCount() - Method in class org.apache.hadoop.yarn.client.api.async.AMRMClientAsync
-
Get the current number of nodes in the cluster.
- getClusterNodeLabels(GetClusterNodeLabelsRequest) - Method in interface org.apache.hadoop.yarn.api.ApplicationClientProtocol
-
The interface used by clients to get node labels in the cluster.
- getClusterNodeLabels() - Method in class org.apache.hadoop.yarn.client.api.YarnClient
-
The interface used by clients to get node labels in the cluster.
- GetClusterNodeLabelsRequest - Class in org.apache.hadoop.yarn.api.protocolrecords
-
- GetClusterNodeLabelsRequest() - Constructor for class org.apache.hadoop.yarn.api.protocolrecords.GetClusterNodeLabelsRequest
-
- GetClusterNodeLabelsResponse - Class in org.apache.hadoop.yarn.api.protocolrecords
-
- GetClusterNodeLabelsResponse() - Constructor for class org.apache.hadoop.yarn.api.protocolrecords.GetClusterNodeLabelsResponse
-
- getClusterNodes(GetClusterNodesRequest) - Method in interface org.apache.hadoop.yarn.api.ApplicationClientProtocol
-
The interface used by clients to get a report of all nodes in the cluster from the ResourceManager.
- GetClusterNodesRequest - Class in org.apache.hadoop.yarn.api.protocolrecords
-
The request from clients to get a report of all nodes in the cluster from the ResourceManager.
- GetClusterNodesRequest() - Constructor for class org.apache.hadoop.yarn.api.protocolrecords.GetClusterNodesRequest
-
- GetClusterNodesResponse - Class in org.apache.hadoop.yarn.api.protocolrecords
-
The response sent by the ResourceManager to a client requesting a NodeReport for all nodes.
- GetClusterNodesResponse() - Constructor for class org.apache.hadoop.yarn.api.protocolrecords.GetClusterNodesResponse
-
- getClusterStatus() - Method in class org.apache.hadoop.mapred.JobClient
-
Get status information about the Map-Reduce cluster.
- getClusterStatus(boolean) - Method in class org.apache.hadoop.mapred.JobClient
-
Get status information about the Map-Reduce cluster.
- getClusterStatus() - Method in class org.apache.hadoop.mapreduce.Cluster
-
Get current cluster status.
- getClusterTimestamp() - Method in class org.apache.hadoop.yarn.api.records.ApplicationId
-
Get the start time of the ResourceManager, which is used to generate a globally unique ApplicationId.
- getClusterTimestamp() - Method in class org.apache.hadoop.yarn.api.records.ReservationId
-
Get the start time of the ResourceManager, which is used to generate a globally unique ReservationId.
- getCodec(Path) - Method in class org.apache.hadoop.io.compress.CompressionCodecFactory
-
Find the relevant compression codec for the given file based on its
filename suffix.
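A hedged sketch of codec lookup by file suffix; the file name and class name below are illustrative assumptions, and the lookup returns null when no registered codec matches.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.CompressionCodecFactory;

    public class CodecLookupExample {                     // hypothetical example class
      public static void main(String[] args) {
        CompressionCodecFactory factory = new CompressionCodecFactory(new Configuration());
        // Pick a codec from the file suffix; returns null when no codec matches.
        CompressionCodec codec = factory.getCodec(new Path("logs/events.gz"));
        System.out.println(codec == null ? "none" : codec.getClass().getName());
      }
    }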
- getCodecByClassName(String) - Method in class org.apache.hadoop.io.compress.CompressionCodecFactory
-
Find the relevant compression codec for the codec's canonical class name.
- getCodecByName(String) - Method in class org.apache.hadoop.io.compress.CompressionCodecFactory
-
Find the relevant compression codec for the codec's canonical class name
or by codec alias.
- getCodecClassByName(String) - Method in class org.apache.hadoop.io.compress.CompressionCodecFactory
-
Find the relevant compression codec for the codec's canonical class name or by codec alias, and return its implementation class.
- getCodecClasses(Configuration) - Static method in class org.apache.hadoop.io.compress.CompressionCodecFactory
-
Get the list of codecs discovered via a Java ServiceLoader, or
listed in the configuration.
- getCollector(String, Reporter) - Method in class org.apache.hadoop.mapred.lib.MultipleOutputs
-
Gets the output collector for a named output.
- getCollector(String, String, Reporter) - Method in class org.apache.hadoop.mapred.lib.MultipleOutputs
-
Gets the output collector for a multi named output.
- getCombinerClass() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the user-defined combiner class used to combine map-outputs
before being sent to the reducers.
- getCombinerClass() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the combiner class for the job.
- getCombinerKeyGroupingComparator() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the user defined WritableComparable comparator for grouping keys of inputs to the combiner.
- getCombinerKeyGroupingComparator() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the user defined RawComparator comparator for grouping keys of inputs to the combiner.
- getCombinerOutput() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.DoubleValueSum
-
- getCombinerOutput() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.LongValueMax
-
- getCombinerOutput() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.LongValueMin
-
- getCombinerOutput() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.LongValueSum
-
- getCombinerOutput() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.StringValueMax
-
- getCombinerOutput() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.StringValueMin
-
- getCombinerOutput() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.UniqValueCount
-
- getCombinerOutput() - Method in interface org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregator
-
- getCombinerOutput() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.ValueHistogram
-
- getCommands() - Method in class org.apache.hadoop.yarn.api.records.ContainerLaunchContext
-
Get the list of commands for launching the container.
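The commands list is typically set by the client that builds the ContainerLaunchContext; a sketch of the round trip, where the shell command itself is only an illustrative placeholder:

    // Illustrative sketch: set and read back container launch commands.
    import java.util.Collections;
    import java.util.List;

    import org.apache.hadoop.yarn.api.ApplicationConstants;
    import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
    import org.apache.hadoop.yarn.util.Records;

    public class LaunchCommands {
      public static void main(String[] args) {
        ContainerLaunchContext ctx = Records.newRecord(ContainerLaunchContext.class);
        ctx.setCommands(Collections.singletonList(
            "/bin/date 1>" + ApplicationConstants.LOG_DIR_EXPANSION_VAR + "/stdout "
            + "2>" + ApplicationConstants.LOG_DIR_EXPANSION_VAR + "/stderr"));
        List<String> commands = ctx.getCommands();  // what the NodeManager will run
        System.out.println(commands);
      }
    }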
- getCommittedTaskPath(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
-
Compute the path where the output of a committed task is stored until
the entire job is committed.
- getCommittedTaskPath(TaskAttemptContext, Path) - Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
-
- getCommittedTaskPath(int, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
-
Compute the path where the output of a committed task is stored until the
entire job is committed for a specific application attempt.
- getCommittedTaskPath(int, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.PartialFileOutputCommitter
-
- getComparator() - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
-
Return comparator defining the ordering for RecordReaders in this
composite.
- getComparator() - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeRecordReader
-
Return comparator defining the ordering for RecordReaders in this
composite.
- getCompletedContainersStatuses() - Method in class org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse
-
Get the list of completed containers' statuses.
- getCompletionPollInterval(Configuration) - Static method in class org.apache.hadoop.mapreduce.Job
-
The interval at which waitForCompletion() should check.
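For illustration, the poll interval can be tuned through the client configuration before calling waitForCompletion(); the property name used here, mapreduce.client.completion.pollinterval, is quoted from the standard MR2 client settings and should be checked against your release:

    // Illustrative sketch: control how often waitForCompletion() polls for status.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class PollIntervalDemo {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // 10 seconds between completion checks (the default is 5000 ms).
        conf.setInt("mapreduce.client.completion.pollinterval", 10000);
        System.out.println(Job.getCompletionPollInterval(conf));  // prints 10000
      }
    }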
- getComponentType() - Method in class org.apache.hadoop.io.ArrayPrimitiveWritable
-
- getCompressedData() - Method in class org.apache.hadoop.io.compress.BlockDecompressorStream
-
- getCompressedData() - Method in class org.apache.hadoop.io.compress.DecompressorStream
-
- getCompressMapOutput() - Method in class org.apache.hadoop.mapred.JobConf
-
Are the outputs of the maps to be compressed?
- getCompressor(CompressionCodec, Configuration) - Static method in class org.apache.hadoop.io.compress.CodecPool
-
- getCompressor(CompressionCodec) - Static method in class org.apache.hadoop.io.compress.CodecPool
-
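Compressors obtained from the pool should be handed back when finished; a minimal sketch, where the try/finally shape is the usual idiom rather than anything mandated by the API:

    // Illustrative sketch: borrow a Compressor from CodecPool and return it.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.compress.CodecPool;
    import org.apache.hadoop.io.compress.Compressor;
    import org.apache.hadoop.io.compress.DefaultCodec;

    public class PooledCompression {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        DefaultCodec codec = new DefaultCodec();
        codec.setConf(conf);                        // DefaultCodec is Configurable
        Compressor compressor = CodecPool.getCompressor(codec, conf);
        try {
          // ... pass the compressor to codec.createOutputStream(out, compressor) ...
        } finally {
          CodecPool.returnCompressor(compressor);   // hand it back to the pool
        }
      }
    }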
- getCompressorType() - Method in class org.apache.hadoop.io.compress.BZip2Codec
-
- getCompressorType() - Method in interface org.apache.hadoop.io.compress.CompressionCodec
-
- getCompressorType() - Method in class org.apache.hadoop.io.compress.DefaultCodec
-
- getCompressorType() - Method in class org.apache.hadoop.io.compress.GzipCodec
-
- getCompressOutput(JobConf) - Static method in class org.apache.hadoop.mapred.FileOutputFormat
-
Is the job output compressed?
- getCompressOutput(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
Is the job output compressed?
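As a sketch of the setter/getter pair on the new-API FileOutputFormat; the codec choice is just an example:

    // Illustrative sketch: turn on job output compression and verify the flag.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.compress.GzipCodec;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class OutputCompression {
      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration());
        FileOutputFormat.setCompressOutput(job, true);
        FileOutputFormat.setOutputCompressorClass(job, GzipCodec.class);
        System.out.println(FileOutputFormat.getCompressOutput(job));  // true
      }
    }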
- getConcurrency() - Method in class org.apache.hadoop.yarn.api.records.ReservationRequest
-
Get the number of containers that need to be scheduled concurrently.
- getConditions() - Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
- getConf() - Method in interface org.apache.hadoop.conf.Configurable
-
Return the configuration used by this object.
- getConf() - Method in class org.apache.hadoop.conf.Configured
-
- getConf() - Method in class org.apache.hadoop.crypto.key.KeyProvider
-
Return the provider configuration.
- getConf() - Method in class org.apache.hadoop.fs.FilterFileSystem
-
- getConf() - Method in class org.apache.hadoop.io.AbstractMapWritable
-
- getConf() - Method in class org.apache.hadoop.io.compress.BZip2Codec
-
Return the configuration used by this object.
- getConf() - Method in class org.apache.hadoop.io.compress.DefaultCodec
-
- getConf() - Method in class org.apache.hadoop.io.EnumSetWritable
-
- getConf() - Method in class org.apache.hadoop.io.GenericWritable
-
- getConf() - Method in class org.apache.hadoop.io.ObjectWritable
-
- getConf() - Method in class org.apache.hadoop.io.WritableComparator
-
- getConf() - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
-
Return the configuration used by this object.
- getConf() - Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
-
- getConf() - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- getConf() - Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
- getConf() - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeRecordReader
-
Return the configuration used by this object.
- getConf() - Method in class org.apache.hadoop.mapreduce.lib.partition.BinaryPartitioner
-
- getConf() - Method in class org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedComparator
-
- getConf() - Method in class org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedPartitioner
-
- getConf() - Method in class org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner
-
- getConf() - Method in class org.apache.hadoop.net.AbstractDNSToSwitchMapping
-
- getConf() - Method in class org.apache.hadoop.net.ScriptBasedMapping
-
- getConf() - Method in class org.apache.hadoop.net.SocksSocketFactory
-
- getConf() - Method in class org.apache.hadoop.net.TableMapping
-
- getConf() - Method in class org.apache.hadoop.security.authorize.DefaultImpersonationProvider
-
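Most of the getConf() implementations listed above come from extending Configured or implementing Configurable directly. A minimal sketch of the usual Tool pattern; the class name MyTool and the printed property are illustrative:

    // Illustrative sketch: expose a Configuration through Configurable.getConf().
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.conf.Configured;
    import org.apache.hadoop.util.Tool;
    import org.apache.hadoop.util.ToolRunner;

    public class MyTool extends Configured implements Tool {
      @Override
      public int run(String[] args) throws Exception {
        // getConf() is inherited from Configured; ToolRunner injects the conf.
        Configuration conf = getConf();
        System.out.println(conf.get("fs.defaultFS"));
        return 0;
      }

      public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new Configuration(), new MyTool(), args));
      }
    }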
- getConfig() - Method in class org.apache.hadoop.service.AbstractService
-
- getConfig() - Method in interface org.apache.hadoop.service.Service
-
Get the configuration of this service.
- getConfiguration() - Method in interface org.apache.hadoop.mapred.RunningJob
-
Get the underlying job configuration.
- getConfiguration() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Return the configuration for the job.
- getConfResourceAsInputStream(String) - Method in class org.apache.hadoop.conf.Configuration
-
Get an input stream attached to the configuration resource with the given name.
- getConfResourceAsReader(String) - Method in class org.apache.hadoop.conf.Configuration
-
Get a Reader attached to the configuration resource with the given name.
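A brief sketch of reading a classpath resource through the configuration's class loader; the resource name core-site.xml is just a familiar example and may legitimately be absent:

    // Illustrative sketch: open a configuration resource from the classpath.
    import java.io.BufferedReader;
    import java.io.Reader;

    import org.apache.hadoop.conf.Configuration;

    public class ConfResourceDemo {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Reader raw = conf.getConfResourceAsReader("core-site.xml");
        if (raw == null) {
          System.out.println("resource not found on the classpath");
          return;
        }
        try (BufferedReader reader = new BufferedReader(raw)) {
          System.out.println(reader.readLine());  // first line of the resource
        }
      }
    }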
- getConnection() - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
Returns a connection object to the DB.
- getConnection() - Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
- getConnection() - Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
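A compact sketch of how a connection is usually obtained via DBConfiguration; the driver class, JDBC URL, and credentials are placeholders, not recommendations:

    // Illustrative sketch: configure JDBC access and open a connection.
    import java.sql.Connection;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;

    public class DbConnectionDemo {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        DBConfiguration.configureDB(conf,
            "org.hsqldb.jdbc.JDBCDriver",   // placeholder driver class
            "jdbc:hsqldb:mem:demo",         // placeholder JDBC URL
            "sa", "");                      // placeholder credentials
        Connection connection = new DBConfiguration(conf).getConnection();
        System.out.println(connection.getMetaData().getDatabaseProductName());
        connection.close();
      }
    }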
- getContainerExitStatus() - Method in class org.apache.hadoop.yarn.api.records.ContainerReport
-
Get the final exit status of the container.
- getContainerId() - Method in class org.apache.hadoop.yarn.api.protocolrecords.GetContainerReportRequest
-
Get the ContainerId of the Container.
- getContainerId() - Method in class org.apache.hadoop.yarn.api.records.ContainerId
-
Get the identifier of the ContainerId.
- getContainerId() - Method in class org.apache.hadoop.yarn.api.records.ContainerReport
-
Get the ContainerId of the container.
- getContainerId() - Method in class org.apache.hadoop.yarn.api.records.ContainerResourceIncreaseRequest
-
- getContainerId() - Method in class org.apache.hadoop.yarn.api.records.ContainerStatus
-
Get the ContainerId of the container.
- getContainerID() - Method in class org.apache.hadoop.yarn.security.ContainerTokenIdentifier
-
- getContainerIds() - Method in class org.apache.hadoop.yarn.api.protocolrecords.GetContainerStatusesRequest
-
Get the list of ContainerIds of containers for which to obtain the ContainerStatus.
- getContainerIds() - Method in class org.apache.hadoop.yarn.api.protocolrecords.StopContainersRequest
-
Get the ContainerIds of the containers to be stopped.
- getContainerLaunchContext() - Method in class org.apache.hadoop.yarn.api.protocolrecords.StartContainerRequest
-
Get the ContainerLaunchContext for the container to be started by the NodeManager.
- getContainerList() - Method in class org.apache.hadoop.yarn.api.protocolrecords.GetContainersResponse
-
Get a list of ContainerReport for all the containers of an application attempt.
- getContainerLogDir() - Method in class org.apache.hadoop.yarn.ContainerLogAppender
-
Getter/Setter methods for log4j.
- getContainerLogDir() - Method in class org.apache.hadoop.yarn.ContainerRollingLogAppender
-
Getter/Setter methods for log4j.
- getContainerLogsReader(ContainerId) - Method in class org.apache.hadoop.yarn.logaggregation.