- ABORTED - Static variable in class org.apache.hadoop.yarn.api.records.ContainerExitStatus
-
Containers killed by the framework, either due to being released by
the application or being 'lost' due to node failures etc.
- abortJob(JobContext, int) - Method in class org.apache.hadoop.mapred.FileOutputCommitter
-
- abortJob(JobContext, int) - Method in class org.apache.hadoop.mapred.OutputCommitter
-
For aborting an unsuccessful job's output.
- abortJob(JobContext, JobStatus.State) - Method in class org.apache.hadoop.mapred.OutputCommitter
-
This method implements the new interface by calling the old method.
- abortJob(JobContext, JobStatus.State) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
-
Delete the temporary directory, including all of the work directories.
- abortJob(JobContext, JobStatus.State) - Method in class org.apache.hadoop.mapreduce.OutputCommitter
-
For aborting an unsuccessful job's output.
- abortTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapred.FileOutputCommitter
-
- abortTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapred.OutputCommitter
-
Discard the task output.
- abortTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapred.OutputCommitter
-
This method implements the new interface by calling the old method.
- abortTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
-
Delete the work directory
- abortTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.OutputCommitter
-
Discard the task output.
- ABSOLUTE - Static variable in class org.apache.hadoop.metrics.spi.MetricValue
-
- AbstractCounters<C extends Counter,G extends CounterGroupBase<C>> - Class in org.apache.hadoop.mapreduce.counters
-
An abstract class to provide common implementation for the Counters
container in both mapred and mapreduce packages.
- AbstractCounters(CounterGroupFactory<C, G>) - Constructor for class org.apache.hadoop.mapreduce.counters.AbstractCounters
-
- AbstractCounters(AbstractCounters<C1, G1>, CounterGroupFactory<C, G>) - Constructor for class org.apache.hadoop.mapreduce.counters.AbstractCounters
-
Construct from another counters object.
- AbstractDNSToSwitchMapping - Class in org.apache.hadoop.net
-
This is a base class for DNS to Switch mappings.
- AbstractDNSToSwitchMapping() - Constructor for class org.apache.hadoop.net.AbstractDNSToSwitchMapping
-
Create an unconfigured instance
- AbstractDNSToSwitchMapping(Configuration) - Constructor for class org.apache.hadoop.net.AbstractDNSToSwitchMapping
-
Create an instance, caching the configuration file.
- AbstractEvent<TYPE extends Enum<TYPE>> - Class in org.apache.hadoop.yarn.event
-
Parent class of all the events.
- AbstractEvent(TYPE) - Constructor for class org.apache.hadoop.yarn.event.AbstractEvent
-
- AbstractEvent(TYPE, long) - Constructor for class org.apache.hadoop.yarn.event.AbstractEvent
-
- AbstractFileSystem - Class in org.apache.hadoop.fs
-
This class provides an interface for implementors of a Hadoop file system
(analogous to the VFS of Unix).
- AbstractFileSystem(URI, String, boolean, int) - Constructor for class org.apache.hadoop.fs.AbstractFileSystem
-
Constructor to be called by subclasses.
- AbstractLivelinessMonitor<O> - Class in org.apache.hadoop.yarn.util
-
A simple liveliness monitor with which clients can register, trust the
component to monitor liveliness, get a call-back on expiry and then finally
unregister.
- AbstractLivelinessMonitor(String, Clock) - Constructor for class org.apache.hadoop.yarn.util.AbstractLivelinessMonitor
-
- AbstractMapWritable - Class in org.apache.hadoop.io
-
Abstract base class for MapWritable and SortedMapWritable.
Unlike org.apache.nutch.crawl.MapWritable, this class allows creation of
MapWritable<Writable, MapWritable> so the CLASS_TO_ID and ID_TO_CLASS
maps travel with the class instead of being static.
- AbstractMapWritable() - Constructor for class org.apache.hadoop.io.AbstractMapWritable
-
constructor.
- AbstractMetric - Class in org.apache.hadoop.metrics2
-
The immutable metric
- AbstractMetric(MetricsInfo) - Constructor for class org.apache.hadoop.metrics2.AbstractMetric
-
Construct the metric
- AbstractMetricsContext - Class in org.apache.hadoop.metrics.spi
-
The main class of the Service Provider Interface.
- AbstractMetricsContext() - Constructor for class org.apache.hadoop.metrics.spi.AbstractMetricsContext
-
Creates a new instance of AbstractMetricsContext
- AbstractService - Class in org.apache.hadoop.service
-
This is the base implementation class for services.
- AbstractService(String) - Constructor for class org.apache.hadoop.service.AbstractService
-
Construct the service.
- accept(Path) - Method in class org.apache.hadoop.fs.GlobFilter
-
- accept(Path) - Method in interface org.apache.hadoop.fs.PathFilter
-
Tests whether or not the specified abstract pathname should be
included in a pathname list.
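For illustration, a minimal sketch of supplying a custom PathFilter to FileSystem.listStatus; the directory and file suffix below are hypothetical:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.PathFilter;

    public class LogFileLister {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // Only include paths whose names end in ".log" (hypothetical rule).
        PathFilter logsOnly = new PathFilter() {
          @Override
          public boolean accept(Path path) {
            return path.getName().endsWith(".log");
          }
        };
        for (FileStatus status : fs.listStatus(new Path("/tmp"), logsOnly)) {
          System.out.println(status.getPath());
        }
      }
    }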
- accept(CompositeRecordReader.JoinCollector, K) - Method in interface org.apache.hadoop.mapred.join.ComposableRecordReader
-
While key-value pairs from this RecordReader match the given key, register
them with the JoinCollector provided.
- accept(CompositeRecordReader.JoinCollector, K) - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
-
If key provided matches that of this Composite, give JoinCollector
iterator over values it may emit.
- accept(CompositeRecordReader.JoinCollector, K) - Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
-
Add an iterator to the collector at the position occupied by this
RecordReader over the values in this stream paired with the key
provided (i.e. register a stream of values from this source matching K
with a collector).
- accept(Path) - Method in class org.apache.hadoop.mapred.OutputLogFilter
-
- accept(CompositeRecordReader.JoinCollector, K) - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeRecordReader
-
If key provided matches that of this Composite, give JoinCollector
iterator over values it may emit.
- accept(CompositeRecordReader.JoinCollector, K) - Method in class org.apache.hadoop.mapreduce.lib.join.WrappedRecordReader
-
Add an iterator to the collector at the position occupied by this
RecordReader over the values in this stream paired with the key
provided (i.e. register a stream of values from this source matching K
with a collector).
- accepts(String) - Method in class org.apache.hadoop.metrics2.MetricsFilter
-
Whether to accept the name
- accepts(MetricsTag) - Method in class org.apache.hadoop.metrics2.MetricsFilter
-
Whether to accept the tag
- accepts(Iterable<MetricsTag>) - Method in class org.apache.hadoop.metrics2.MetricsFilter
-
Whether to accept the tags
- accepts(MetricsRecord) - Method in class org.apache.hadoop.metrics2.MetricsFilter
-
Whether to accept the record
- access(Path, FsAction) - Method in class org.apache.hadoop.fs.FilterFileSystem
-
- access(Path, FsAction) - Method in class org.apache.hadoop.fs.viewfs.ViewFileSystem
-
- access(Path, FsAction) - Method in class org.apache.hadoop.fs.viewfs.ViewFs
-
- ACCESS_DENIED - Static variable in class org.apache.hadoop.yarn.api.records.timeline.TimelinePutResponse.TimelinePutError
-
Error code returned if the user is denied access to the timeline data
- AccessControlException - Exception in org.apache.hadoop.fs.permission
-
Deprecated.
Use AccessControlException instead.
- AccessControlException() - Constructor for exception org.apache.hadoop.fs.permission.AccessControlException
-
Deprecated.
Default constructor is needed for unwrapping from RemoteException.
- AccessControlException(String) - Constructor for exception org.apache.hadoop.fs.permission.AccessControlException
-
Deprecated.
- AccessControlException(Throwable) - Constructor for exception org.apache.hadoop.fs.permission.AccessControlException
-
Deprecated.
Constructs a new exception with the specified cause and a detail
message of (cause==null ? null : cause.toString()) (which
typically contains the class and detail message of cause).
- AccessControlList - Class in org.apache.hadoop.security.authorize
-
Class representing a configured access control list.
- AccessControlList() - Constructor for class org.apache.hadoop.security.authorize.AccessControlList
-
This constructor exists primarily for AccessControlList to be Writable.
- AccessControlList(String) - Constructor for class org.apache.hadoop.security.authorize.AccessControlList
-
Construct a new ACL from a String representation of the same.
- AccessControlList(String, String) - Constructor for class org.apache.hadoop.security.authorize.AccessControlList
-
Construct a new ACL from String representation of users and groups
The arguments are comma separated lists
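For illustration, a rough sketch of building an AccessControlList and checking a user against it; the user and group names are hypothetical:

    import org.apache.hadoop.security.UserGroupInformation;
    import org.apache.hadoop.security.authorize.AccessControlList;

    public class AclCheck {
      public static void main(String[] args) {
        // Users "alice,bob" and groups "admins,ops" are hypothetical.
        AccessControlList acl = new AccessControlList("alice,bob admins,ops");
        acl.addUser("carol");
        acl.addGroup("analysts");
        UserGroupInformation ugi = UserGroupInformation.createRemoteUser("carol");
        System.out.println(acl.isUserAllowed(ugi)); // expected: true
      }
    }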
- AclEntry - Class in org.apache.hadoop.fs.permission
-
Defines a single entry in an ACL.
- AclEntryScope - Enum in org.apache.hadoop.fs.permission
-
Specifies the scope or intended usage of an ACL entry.
- AclEntryType - Enum in org.apache.hadoop.fs.permission
-
Specifies the type of an ACL entry.
- aclSpecToString(List<AclEntry>) - Static method in class org.apache.hadoop.fs.permission.AclEntry
-
Convert a List of AclEntries into a string - the reverse of parseAclSpec.
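For illustration, a small sketch of building an AclEntry and converting it with aclSpecToString; the user name is hypothetical:

    import java.util.Arrays;
    import java.util.List;
    import org.apache.hadoop.fs.permission.AclEntry;
    import org.apache.hadoop.fs.permission.AclEntryScope;
    import org.apache.hadoop.fs.permission.AclEntryType;
    import org.apache.hadoop.fs.permission.FsAction;

    public class AclSpecDemo {
      public static void main(String[] args) {
        // An access-scope entry granting user "alice" read/write.
        AclEntry entry = new AclEntry.Builder()
            .setScope(AclEntryScope.ACCESS)
            .setType(AclEntryType.USER)
            .setName("alice")
            .setPermission(FsAction.READ_WRITE)
            .build();
        List<AclEntry> spec = Arrays.asList(entry);
        // Reverse of parseAclSpec, e.g. "user:alice:rw-".
        System.out.println(AclEntry.aclSpecToString(spec));
      }
    }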
- AclStatus - Class in org.apache.hadoop.fs.permission
-
An AclStatus contains the ACL information of a specific file.
- acquireLease(Path) - Method in class org.apache.hadoop.fs.azure.NativeAzureFileSystem
-
Get a self-renewing lease on the specified file.
- activateOptions() - Method in class org.apache.hadoop.yarn.ContainerLogAppender
-
- activateOptions() - Method in class org.apache.hadoop.yarn.ContainerRollingLogAppender
-
- add(E) - Method in class org.apache.hadoop.io.EnumSetWritable
-
- add(InputSplit) - Method in class org.apache.hadoop.mapred.join.CompositeInputSplit
-
Add an InputSplit to this collection.
- add(ComposableRecordReader<K, ? extends V>) - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
-
Add a RecordReader to this collection.
- add(X) - Method in class org.apache.hadoop.mapreduce.lib.join.ArrayListBackedIterator
-
- add(InputSplit) - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeInputSplit
-
Add an InputSplit to this collection.
- add(ComposableRecordReader<K, ? extends V>) - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeRecordReader
-
Add a RecordReader to this collection.
- add(T) - Method in interface org.apache.hadoop.mapreduce.lib.join.ResetableIterator
-
Add an element to the collection of elements to iterate over.
- add(X) - Method in class org.apache.hadoop.mapreduce.lib.join.StreamBackedIterator
-
- add(String, long) - Method in class org.apache.hadoop.metrics2.lib.MetricsRegistry
-
Add sample to a stat metric by name.
- add(long) - Method in class org.apache.hadoop.metrics2.lib.MutableQuantiles
-
- add(String, long) - Method in class org.apache.hadoop.metrics2.lib.MutableRates
-
Add a rate sample for a rate metric
- add(long, long) - Method in class org.apache.hadoop.metrics2.lib.MutableStat
-
Add a number of samples and their sum to the running stat
- add(long) - Method in class org.apache.hadoop.metrics2.lib.MutableStat
-
Add a snapshot to the metric
- add(MetricsTag) - Method in class org.apache.hadoop.metrics2.MetricsRecordBuilder
-
Add an immutable metrics tag object
- add(AbstractMetric) - Method in class org.apache.hadoop.metrics2.MetricsRecordBuilder
-
Add a pre-made immutable metric object
- add(Key) - Method in class org.apache.hadoop.util.bloom.BloomFilter
-
- add(Key) - Method in class org.apache.hadoop.util.bloom.CountingBloomFilter
-
- add(Key) - Method in class org.apache.hadoop.util.bloom.DynamicBloomFilter
-
- add(Key) - Method in class org.apache.hadoop.util.bloom.RetouchedBloomFilter
-
- add_escapes(String) - Method in exception org.apache.hadoop.record.compiler.generated.ParseException
-
Deprecated.
Used to convert raw characters to their escaped version
when these raw versions cannot be used as part of an ASCII
string literal.
- addArchiveToClassPath(Path) - Method in class org.apache.hadoop.mapreduce.Job
-
Add an archive path to the current set of classpath entries.
- addCacheArchive(URI) - Method in class org.apache.hadoop.mapreduce.Job
-
Add an archive to be localized
- addCacheFile(URI) - Method in class org.apache.hadoop.mapreduce.Job
-
Add a file to be localized
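For illustration, a minimal sketch of adding cache files, archives, and classpath entries to a Job; the HDFS paths are hypothetical:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;

    public class CacheSetup {
      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "cache-demo");
        // Hypothetical HDFS locations to be localized on each node.
        job.addCacheFile(new URI("/apps/lookup/dictionary.txt"));
        job.addCacheArchive(new URI("/apps/lookup/model.zip"));
        // Also placed on the task classpath and in the cache.
        job.addFileToClassPath(new Path("/apps/lib/helper.jar"));
      }
    }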
- addConfigurationPair(String, String) - Method in class org.apache.hadoop.tracing.SpanReceiverInfoBuilder
-
- addContainerRequest(T) - Method in class org.apache.hadoop.yarn.client.api.AMRMClient
-
Request containers for resources before calling allocate
- addContainerRequest(T) - Method in class org.apache.hadoop.yarn.client.api.async.AMRMClientAsync
-
Request containers for resources before calling allocate
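For illustration, a rough sketch of asking an AMRMClient for a container before calling allocate; the resource size and priority are arbitrary example values:

    import org.apache.hadoop.yarn.api.records.Priority;
    import org.apache.hadoop.yarn.api.records.Resource;
    import org.apache.hadoop.yarn.client.api.AMRMClient;
    import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;
    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    public class ContainerAsk {
      public static void main(String[] args) {
        AMRMClient<ContainerRequest> amrmClient = AMRMClient.createAMRMClient();
        amrmClient.init(new YarnConfiguration());
        amrmClient.start();
        // Ask for one 1 GB / 1 vcore container anywhere in the cluster.
        Resource capability = Resource.newInstance(1024, 1);
        ContainerRequest ask =
            new ContainerRequest(capability, null, null, Priority.newInstance(0));
        amrmClient.addContainerRequest(ask);
        // allocate(...) would then be called from the AM's heartbeat loop.
      }
    }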
- addCounter(Counters.Counter) - Method in class org.apache.hadoop.mapred.Counters.Group
-
- addCounter(String, String, long) - Method in class org.apache.hadoop.mapred.Counters.Group
-
- addCounter(T) - Method in interface org.apache.hadoop.mapreduce.counters.CounterGroupBase
-
Add a counter to this group.
- addCounter(String, String, long) - Method in interface org.apache.hadoop.mapreduce.counters.CounterGroupBase
-
Add a counter to this group
- addCounter(MetricsInfo, int) - Method in class org.apache.hadoop.metrics2.MetricsRecordBuilder
-
Add an integer metric
- addCounter(MetricsInfo, long) - Method in class org.apache.hadoop.metrics2.MetricsRecordBuilder
-
Add a long metric
- addDefaultResource(String) - Static method in class org.apache.hadoop.conf.Configuration
-
Add a default resource.
- addDefaults() - Method in class org.apache.hadoop.mapred.join.CompositeInputFormat
-
Adds the default set of identifiers to the parser.
- addDefaults() - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeInputFormat
-
Adds the default set of identifiers to the parser.
- addDependingJob(Job) - Method in class org.apache.hadoop.mapred.jobcontrol.Job
-
Add a job to this job's dependency list.
- addDependingJob(ControlledJob) - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
Add a job to this job's dependency list.
- addDeprecation(String, String[], String) - Static method in class org.apache.hadoop.conf.Configuration
-
- addDeprecation(String, String, String) - Static method in class org.apache.hadoop.conf.Configuration
-
Adds the deprecated key to the global deprecation map.
- addDeprecation(String, String[]) - Static method in class org.apache.hadoop.conf.Configuration
-
- addDeprecation(String, String) - Static method in class org.apache.hadoop.conf.Configuration
-
Adds the deprecated key to the global deprecation map when no custom
message is provided.
- addDeprecations(Configuration.DeprecationDelta[]) - Static method in class org.apache.hadoop.conf.Configuration
-
Adds a set of deprecated keys to the global deprecations.
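For illustration, a small sketch of registering a deprecated configuration key; both key names are hypothetical:

    import org.apache.hadoop.conf.Configuration;

    public class DeprecationDemo {
      public static void main(String[] args) {
        // "old.buffer.size" and "new.buffer.size" are hypothetical keys.
        Configuration.addDeprecation("old.buffer.size", "new.buffer.size",
            "old.buffer.size is deprecated; use new.buffer.size");
        Configuration conf = new Configuration(false);
        conf.set("old.buffer.size", "4096");
        // The value is readable through the new key via the deprecation map.
        System.out.println(conf.get("new.buffer.size")); // 4096
      }
    }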
- addDomain(TimelineDomain) - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineDomains
-
Add a single domain into the existing domain list
- addDomains(List<TimelineDomain>) - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineDomains
-
Add a list of domains to the existing domain list
- addEntities(List<TimelineEntity>) - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEntities
-
Add a list of entities to the existing entity list
- addEntity(TimelineEntity) - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEntities
-
Add a single entity into the existing entity list
- addError(TimelinePutResponse.TimelinePutError) - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelinePutResponse
-
- addErrors(List<TimelinePutResponse.TimelinePutError>) - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelinePutResponse
-
- addEscapes(String) - Static method in error org.apache.hadoop.record.compiler.generated.TokenMgrError
-
Deprecated.
Replaces unprintable characters by their escaped (or Unicode escaped)
equivalents in the given string
- addEvent(TimelineEvent) - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEntity
-
Add a single event related to the entity to the existing event list
- addEvent(TimelineEvents.EventsOfOneEntity) - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEvents
-
- addEvent(TimelineEvent) - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEvents.EventsOfOneEntity
-
Add a single event to the existing event list
- addEventInfo(String, Object) - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEvent
-
Add one piece of event information to the existing information map
- addEventInfo(Map<String, Object>) - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEvent
-
Add a map of event information to the existing information map
- addEvents(List<TimelineEvent>) - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEntity
-
Add a list of events related to the entity to the existing event list
- addEvents(List<TimelineEvents.EventsOfOneEntity>) - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEvents
-
- addEvents(List<TimelineEvent>) - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEvents.EventsOfOneEntity
-
Add a list of events to the existing event list
- addExternalEndpoint(Endpoint) - Method in class org.apache.hadoop.registry.client.types.ServiceRecord
-
Add an external endpoint
- addFalsePositive(Key) - Method in class org.apache.hadoop.util.bloom.RetouchedBloomFilter
-
Adds false positive information to this retouched Bloom filter.
- addFalsePositive(Collection<Key>) - Method in class org.apache.hadoop.util.bloom.RetouchedBloomFilter
-
Adds a collection of false positive information to this retouched Bloom filter.
- addFalsePositive(List<Key>) - Method in class org.apache.hadoop.util.bloom.RetouchedBloomFilter
-
Adds a list of false positive information to this retouched Bloom filter.
- addFalsePositive(Key[]) - Method in class org.apache.hadoop.util.bloom.RetouchedBloomFilter
-
Adds an array of false positive information to this retouched Bloom filter.
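For illustration, a minimal sketch of adding a Key to a BloomFilter and testing membership; the filter sizing values are illustrative, not tuned:

    import java.nio.charset.StandardCharsets;
    import org.apache.hadoop.util.bloom.BloomFilter;
    import org.apache.hadoop.util.bloom.Key;
    import org.apache.hadoop.util.hash.Hash;

    public class BloomDemo {
      public static void main(String[] args) {
        // 1024-bit vector with 5 hash functions (example sizing only).
        BloomFilter filter = new BloomFilter(1024, 5, Hash.MURMUR_HASH);
        Key key = new Key("user-42".getBytes(StandardCharsets.UTF_8));
        filter.add(key);
        System.out.println(filter.membershipTest(key)); // true (no false negatives)
      }
    }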
- addFencingParameters(Map<String, String>) - Method in class org.apache.hadoop.ha.HAServiceTarget
-
Hook to allow subclasses to add any parameters they would like to
expose to fencing implementations/scripts.
- addField(String, TypeID) - Method in class org.apache.hadoop.record.meta.RecordTypeInfo
-
Deprecated.
Add a field.
- addFileset(FileSet) - Method in class org.apache.hadoop.record.compiler.ant.RccTask
-
Deprecated.
Adds a fileset that can consist of one or more files
- addFileToClassPath(Path) - Method in class org.apache.hadoop.mapreduce.Job
-
Add a file path to the current set of classpath entries. It adds the file
to the cache as well.
- addGauge(MetricsInfo, int) - Method in class org.apache.hadoop.metrics2.MetricsRecordBuilder
-
Add an integer gauge metric
- addGauge(MetricsInfo, long) - Method in class org.apache.hadoop.metrics2.MetricsRecordBuilder
-
Add a long gauge metric
- addGauge(MetricsInfo, float) - Method in class org.apache.hadoop.metrics2.MetricsRecordBuilder
-
Add a float gauge metric
- addGauge(MetricsInfo, double) - Method in class org.apache.hadoop.metrics2.MetricsRecordBuilder
-
Add a double gauge metric
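For illustration, a rough sketch of a MetricsSource that emits a counter and a gauge through a MetricsRecordBuilder; the record and metric names are hypothetical:

    import org.apache.hadoop.metrics2.MetricsCollector;
    import org.apache.hadoop.metrics2.MetricsSource;
    import org.apache.hadoop.metrics2.lib.Interns;

    public class MyMetricsSource implements MetricsSource {
      private long requests;       // hypothetical counters maintained elsewhere
      private double currentLoad;

      @Override
      public void getMetrics(MetricsCollector collector, boolean all) {
        collector.addRecord("MySource")
            .addCounter(Interns.info("Requests", "Total requests served"), requests)
            .addGauge(Interns.info("Load", "Current load factor"), currentLoad);
      }
    }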
- addGroup(String) - Method in class org.apache.hadoop.security.authorize.AccessControlList
-
Add group to the names of groups allowed for this service.
- addIdentifier(String, Class<?>[], Class<? extends Parser.Node>, Class<? extends ComposableRecordReader>) - Static method in class org.apache.hadoop.mapred.join.Parser.Node
-
For a given identifier, add a mapping to the nodetype for the parse
tree and to the ComposableRecordReader to be created, including the
formals required to invoke the constructor.
- addIdentifier(String, Class<?>[], Class<? extends Parser.Node>, Class<? extends ComposableRecordReader>) - Static method in class org.apache.hadoop.mapreduce.lib.join.Parser.Node
-
For a given identifier, add a mapping to the nodetype for the parse
tree and to the ComposableRecordReader to be created, including the
formals required to invoke the constructor.
- addIfService(Object) - Method in class org.apache.hadoop.service.CompositeService
-
- AddingCompositeService - Class in org.apache.hadoop.registry.server.services
-
Composite service that exports the add/remove methods.
- AddingCompositeService(String) - Constructor for class org.apache.hadoop.registry.server.services.AddingCompositeService
-
- addInputPath(JobConf, Path) - Static method in class org.apache.hadoop.mapred.FileInputFormat
-
Add a Path to the list of inputs for the map-reduce job.
- addInputPath(JobConf, Path, Class<? extends InputFormat>) - Static method in class org.apache.hadoop.mapred.lib.MultipleInputs
-
Add a Path with a custom InputFormat to the list of inputs for the
map-reduce job.
- addInputPath(JobConf, Path, Class<? extends InputFormat>, Class<? extends Mapper>) - Static method in class org.apache.hadoop.mapred.lib.MultipleInputs
-
- addInputPath(Job, Path) - Static method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-
Add a Path to the list of inputs for the map-reduce job.
- addInputPath(Job, Path, Class<? extends InputFormat>) - Static method in class org.apache.hadoop.mapreduce.lib.input.MultipleInputs
-
Add a Path with a custom InputFormat to the list of inputs for the
map-reduce job.
- addInputPath(Job, Path, Class<? extends InputFormat>, Class<? extends Mapper>) - Static method in class org.apache.hadoop.mapreduce.lib.input.MultipleInputs
-
- addInputPathRecursively(List<FileStatus>, FileSystem, Path, PathFilter) - Method in class org.apache.hadoop.mapred.FileInputFormat
-
Add files in the input path recursively into the results.
- addInputPathRecursively(List<FileStatus>, FileSystem, Path, PathFilter) - Method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-
Add files in the input path recursively into the results.
- addInputPaths(JobConf, String) - Static method in class org.apache.hadoop.mapred.FileInputFormat
-
Add the given comma separated paths to the list of inputs for
the map-reduce job.
- addInputPaths(Job, String) - Static method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-
Add the given comma separated paths to the list of inputs for
the map-reduce job.
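For illustration, a minimal sketch of registering input paths with the new-API FileInputFormat; the paths are hypothetical:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

    public class InputSetup {
      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "input-demo");
        // A single path, plus a comma separated list of additional paths.
        FileInputFormat.addInputPath(job, new Path("/data/2015/01"));
        FileInputFormat.addInputPaths(job, "/data/2015/02,/data/2015/03");
      }
    }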
- addInternalEndpoint(Endpoint) - Method in class org.apache.hadoop.registry.client.types.ServiceRecord
-
Add an internal endpoint
- addJob(ControlledJob) - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl
-
Add a new controlled job.
- addJob(Job) - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl
-
Add a new job.
- addJobCollection(Collection<ControlledJob>) - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl
-
Add a collection of jobs
- addJobs(Collection<Job>) - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
-
Add a collection of jobs
- addLocalArchives(Configuration, String) - Static method in class org.apache.hadoop.filecache.DistributedCache
-
Deprecated.
- addLocalFiles(Configuration, String) - Static method in class org.apache.hadoop.filecache.DistributedCache
-
Deprecated.
- addMapper(JobConf, Class<? extends Mapper<K1, V1, K2, V2>>, Class<? extends K1>, Class<? extends V1>, Class<? extends K2>, Class<? extends V2>, boolean, JobConf) - Static method in class org.apache.hadoop.mapred.lib.ChainMapper
-
Adds a Mapper class to the chain job's JobConf.
- addMapper(JobConf, Class<? extends Mapper<K1, V1, K2, V2>>, Class<? extends K1>, Class<? extends V1>, Class<? extends K2>, Class<? extends V2>, boolean, JobConf) - Static method in class org.apache.hadoop.mapred.lib.ChainReducer
-
Adds a Mapper class to the chain job's JobConf.
- addMapper(Job, Class<? extends Mapper>, Class<?>, Class<?>, Class<?>, Class<?>, Configuration) - Static method in class org.apache.hadoop.mapreduce.lib.chain.ChainMapper
-
Adds a Mapper class to the chain mapper.
- addMapper(Job, Class<? extends Mapper>, Class<?>, Class<?>, Class<?>, Class<?>, Configuration) - Static method in class org.apache.hadoop.mapreduce.lib.chain.ChainReducer
-
Adds a Mapper class to the chain reducer.
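For illustration, a rough sketch of composing a chain job with the new-API ChainMapper and ChainReducer; the identity Mapper and Reducer stand in for real implementations:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.chain.ChainMapper;
    import org.apache.hadoop.mapreduce.lib.chain.ChainReducer;

    public class ChainSetup {
      public static void configure(Job job) throws Exception {
        // Identity Mapper/Reducer used here purely as placeholders.
        ChainMapper.addMapper(job, Mapper.class,
            LongWritable.class, Text.class, LongWritable.class, Text.class,
            new Configuration(false));
        ChainReducer.setReducer(job, Reducer.class,
            LongWritable.class, Text.class, LongWritable.class, Text.class,
            new Configuration(false));
        // A mapper may also be chained after the reducer in the same task.
        ChainReducer.addMapper(job, Mapper.class,
            LongWritable.class, Text.class, LongWritable.class, Text.class,
            new Configuration(false));
      }
    }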
- addMultiNamedOutput(JobConf, String, Class<? extends OutputFormat>, Class<?>, Class<?>) - Static method in class org.apache.hadoop.mapred.lib.MultipleOutputs
-
Adds a multi named output for the job.
- addNamedOutput(JobConf, String, Class<? extends OutputFormat>, Class<?>, Class<?>) - Static method in class org.apache.hadoop.mapred.lib.MultipleOutputs
-
Adds a named output for the job.
- addNamedOutput(Job, String, Class<? extends OutputFormat>, Class<?>, Class<?>) - Static method in class org.apache.hadoop.mapreduce.lib.output.MultipleOutputs
-
Adds a named output for the job.
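For illustration, a small sketch of declaring named outputs with the new-API MultipleOutputs; the output names are hypothetical:

    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;
    import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

    public class NamedOutputSetup {
      public static void configure(Job job) {
        // "errors" and "summary" are hypothetical output names.
        MultipleOutputs.addNamedOutput(job, "errors",
            TextOutputFormat.class, Text.class, NullWritable.class);
        MultipleOutputs.addNamedOutput(job, "summary",
            TextOutputFormat.class, Text.class, Text.class);
        // A MultipleOutputs instance in the reducer then writes via
        // mos.write("errors", key, value).
      }
    }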
- addNextValue(Object) - Method in class org.apache.hadoop.mapreduce.lib.aggregate.DoubleValueSum
-
add a value to the aggregator
- addNextValue(double) - Method in class org.apache.hadoop.mapreduce.lib.aggregate.DoubleValueSum
-
add a value to the aggregator
- addNextValue(Object) - Method in class org.apache.hadoop.mapreduce.lib.aggregate.LongValueMax
-
add a value to the aggregator
- addNextValue(long) - Method in class org.apache.hadoop.mapreduce.lib.aggregate.LongValueMax
-
add a value to the aggregator
- addNextValue(Object) - Method in class org.apache.hadoop.mapreduce.lib.aggregate.LongValueMin
-
add a value to the aggregator
- addNextValue(long) - Method in class org.apache.hadoop.mapreduce.lib.aggregate.LongValueMin
-
add a value to the aggregator
- addNextValue(Object) - Method in class org.apache.hadoop.mapreduce.lib.aggregate.LongValueSum
-
add a value to the aggregator
- addNextValue(long) - Method in class org.apache.hadoop.mapreduce.lib.aggregate.LongValueSum
-
add a value to the aggregator
- addNextValue(Object) - Method in class org.apache.hadoop.mapreduce.lib.aggregate.StringValueMax
-
add a value to the aggregator
- addNextValue(Object) - Method in class org.apache.hadoop.mapreduce.lib.aggregate.StringValueMin
-
add a value to the aggregator
- addNextValue(Object) - Method in class org.apache.hadoop.mapreduce.lib.aggregate.UniqValueCount
-
add a value to the aggregator
- addNextValue(Object) - Method in interface org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregator
-
add a value to the aggregator
- addNextValue(Object) - Method in class org.apache.hadoop.mapreduce.lib.aggregate.ValueHistogram
-
add the given val to the aggregator.
- addOtherInfo(String, Object) - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEntity
-
Add one piece of other information about the entity to the existing other info map
- addOtherInfo(Map<String, Object>) - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEntity
-
Add a map of other information of the entity to the existing other info map
- addPrimaryFilter(String, Object) - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEntity
-
Add a single primary filter to the existing primary filter map
- addPrimaryFilters(Map<String, Set<Object>>) - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEntity
-
Add a map of primary filters to the existing primary filter map
- addRecord(String) - Method in interface org.apache.hadoop.metrics2.MetricsCollector
-
Add a metrics record
- addRecord(MetricsInfo) - Method in interface org.apache.hadoop.metrics2.MetricsCollector
-
Add a metrics record
- addRelatedEntities(Map<String, Set<String>>) - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEntity
-
Add a map of related entities to the existing related entity map
- addRelatedEntity(String, String) - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEntity
-
Add an entity to the existing related entity map
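For illustration, a rough sketch of assembling a TimelineEntity with a primary filter, other info, and an event; the ids and values are hypothetical:

    import org.apache.hadoop.yarn.api.records.timeline.TimelineEntity;
    import org.apache.hadoop.yarn.api.records.timeline.TimelineEvent;

    public class TimelineDemo {
      public static TimelineEntity buildEntity() {
        // Entity id/type and filter values are hypothetical.
        TimelineEntity entity = new TimelineEntity();
        entity.setEntityId("job_1442_0001");
        entity.setEntityType("MY_APP_JOB");
        entity.addPrimaryFilter("user", "alice");
        entity.addOtherInfo("queue", "default");

        TimelineEvent started = new TimelineEvent();
        started.setEventType("JOB_STARTED");
        started.setTimestamp(System.currentTimeMillis());
        started.addEventInfo("attempt", 1);
        entity.addEvent(started);
        return entity;
      }
    }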
- addResource(String) - Method in class org.apache.hadoop.conf.Configuration
-
Add a configuration resource.
- addResource(URL) - Method in class org.apache.hadoop.conf.Configuration
-
Add a configuration resource.
- addResource(Path) - Method in class org.apache.hadoop.conf.Configuration
-
Add a configuration resource.
- addResource(InputStream) - Method in class org.apache.hadoop.conf.Configuration
-
Add a configuration resource.
- addResource(InputStream, String) - Method in class org.apache.hadoop.conf.Configuration
-
Add a configuration resource.
- addResource(Configuration) - Method in class org.apache.hadoop.conf.Configuration
-
Add a configuration resource.
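For illustration, a minimal sketch of layering an extra resource onto a Configuration; the override file path is hypothetical:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;

    public class ConfDemo {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // core-site.xml etc. load from the classpath by default; the file
        // below is a hypothetical site-specific override.
        conf.addResource(new Path("/etc/myapp/myapp-site.xml"));
        System.out.println(conf.get("fs.defaultFS"));
      }
    }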
- ADDRESS_HOSTNAME_AND_PORT - Static variable in interface org.apache.hadoop.registry.client.types.AddressTypes
-
hostname/FQDN and port pair: "host/port".
- ADDRESS_HOSTNAME_FIELD - Static variable in interface org.apache.hadoop.registry.client.types.AddressTypes
-
- ADDRESS_OTHER - Static variable in interface org.apache.hadoop.registry.client.types.AddressTypes
-
Any other address: "".
- ADDRESS_PATH - Static variable in interface org.apache.hadoop.registry.client.types.AddressTypes
-
Path /a/b/c style: "path".
- ADDRESS_PORT_FIELD - Static variable in interface org.apache.hadoop.registry.client.types.AddressTypes
-
- ADDRESS_URI - Static variable in interface org.apache.hadoop.registry.client.types.AddressTypes
-
URI entries: "uri".
- ADDRESS_ZOOKEEPER - Static variable in interface org.apache.hadoop.registry.client.types.AddressTypes
-
Zookeeper addresses as a triple: "zktriple".
- addresses - Variable in class org.apache.hadoop.registry.client.types.Endpoint
-
a list of address tuples, whose format depends on the address type
- addressType - Variable in class org.apache.hadoop.registry.client.types.Endpoint
-
Type of address.
- AddressTypes - Interface in org.apache.hadoop.registry.client.types
-
Enum of address types, as integers.
- addService(Service) - Method in class org.apache.hadoop.registry.server.services.AddingCompositeService
-
- addService(Service) - Method in class org.apache.hadoop.service.CompositeService
-
- addSpanReceiver(SpanReceiverInfo) - Method in interface org.apache.hadoop.tracing.TraceAdminProtocol
-
Add a new trace span receiver.
- addToMap(Class<?>) - Method in class org.apache.hadoop.io.AbstractMapWritable
-
Add a Class to the maps if it is not already present.
- addTransition(STATE, STATE, EVENTTYPE) - Method in class org.apache.hadoop.yarn.state.StateMachineFactory
-
- addTransition(STATE, STATE, Set<EVENTTYPE>) - Method in class org.apache.hadoop.yarn.state.StateMachineFactory
-
- addTransition(STATE, STATE, Set<EVENTTYPE>, SingleArcTransition<OPERAND, EVENT>) - Method in class org.apache.hadoop.yarn.state.StateMachineFactory
-
- addTransition(STATE, STATE, EVENTTYPE, SingleArcTransition<OPERAND, EVENT>) - Method in class org.apache.hadoop.yarn.state.StateMachineFactory
-
- addTransition(STATE, Set<STATE>, EVENTTYPE, MultipleArcTransition<OPERAND, EVENT, STATE>) - Method in class org.apache.hadoop.yarn.state.StateMachineFactory
-
- addUser(String) - Method in class org.apache.hadoop.security.authorize.AccessControlList
-
Add user to the names of users allowed for this service.
- addWriteAccessor(String, String) - Method in interface org.apache.hadoop.registry.client.api.RegistryOperations
-
Add a new write access entry to be added to node permissions in all
future write operations of a session connected to a secure registry.
- adjustBeginLineColumn(int, int) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
Deprecated.
Method to adjust line and column numbers for the start of a token.
- AdminSecurityInfo - Class in org.apache.hadoop.yarn.security.admin
-
- AdminSecurityInfo() - Constructor for class org.apache.hadoop.yarn.security.admin.AdminSecurityInfo
-
- AggregatedLogFormat - Class in org.apache.hadoop.yarn.logaggregation
-
- AggregatedLogFormat() - Constructor for class org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat
-
- AggregatedLogFormat.LogKey - Class in org.apache.hadoop.yarn.logaggregation
-
- AggregatedLogFormat.LogKey() - Constructor for class org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat.LogKey
-
- AggregatedLogFormat.LogKey(ContainerId) - Constructor for class org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat.LogKey
-
- AggregatedLogFormat.LogKey(String) - Constructor for class org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat.LogKey
-
- AggregatedLogFormat.LogReader - Class in org.apache.hadoop.yarn.logaggregation
-
- AggregatedLogFormat.LogReader(Configuration, Path) - Constructor for class org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat.LogReader
-
- aggregatorDescriptorList - Variable in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJobBase
-
- aggregatorDescriptorList - Static variable in class org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregatorJobBase
-
- AHSClient - Class in org.apache.hadoop.yarn.client.api
-
- AHSClient(String) - Constructor for class org.apache.hadoop.yarn.client.api.AHSClient
-
- AHSProxy<T> - Class in org.apache.hadoop.yarn.client
-
- AHSProxy() - Constructor for class org.apache.hadoop.yarn.client.AHSProxy
-
- allFinished() - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl
-
- allocate(AllocateRequest) - Method in interface org.apache.hadoop.yarn.api.ApplicationMasterProtocol
-
The main interface between an ApplicationMaster and the ResourceManager.
- allocate(float) - Method in class org.apache.hadoop.yarn.client.api.AMRMClient
-
Request additional containers and receive new container allocations.
- AllocateRequest - Class in org.apache.hadoop.yarn.api.protocolrecords
-
The core request sent by the ApplicationMaster to the ResourceManager to
obtain resources in the cluster.
- AllocateRequest() - Constructor for class org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest
-
- AllocateResponse - Class in org.apache.hadoop.yarn.api.protocolrecords
-
The response sent by the ResourceManager to the ApplicationMaster during
resource negotiation.
- AllocateResponse() - Constructor for class org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse
-
- AMCommand - Enum in org.apache.hadoop.yarn.api.records
-
Command sent by the Resource Manager to the Application Master in the
AllocateResponse.
- AMRMClient<T extends org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest> - Class in org.apache.hadoop.yarn.client.api
-
- AMRMClient(String) - Constructor for class org.apache.hadoop.yarn.client.api.AMRMClient
-
- AMRMClientAsync<T extends org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest> - Class in org.apache.hadoop.yarn.client.api.async
-
AMRMClientAsync handles communication with the ResourceManager and provides
asynchronous updates on events such as container allocations and completions.
- AMRMClientAsync(int, AMRMClientAsync.CallbackHandler) - Constructor for class org.apache.hadoop.yarn.client.api.async.AMRMClientAsync
-
- AMRMClientAsync(AMRMClient<T>, int, AMRMClientAsync.CallbackHandler) - Constructor for class org.apache.hadoop.yarn.client.api.async.AMRMClientAsync
-
- AMRMTokenIdentifier - Class in org.apache.hadoop.yarn.security
-
AMRMTokenIdentifier is the TokenIdentifier to be used by
ApplicationMasters to authenticate to the ResourceManager.
- AMRMTokenIdentifier() - Constructor for class org.apache.hadoop.yarn.security.AMRMTokenIdentifier
-
- AMRMTokenIdentifier(ApplicationAttemptId, int) - Constructor for class org.apache.hadoop.yarn.security.AMRMTokenIdentifier
-
- AMRMTokenSelector - Class in org.apache.hadoop.yarn.security
-
- AMRMTokenSelector() - Constructor for class org.apache.hadoop.yarn.security.AMRMTokenSelector
-
- and(FsAction) - Method in enum org.apache.hadoop.fs.permission.FsAction
-
AND operation.
- and(Filter) - Method in class org.apache.hadoop.util.bloom.BloomFilter
-
- and(Filter) - Method in class org.apache.hadoop.util.bloom.CountingBloomFilter
-
- and(Filter) - Method in class org.apache.hadoop.util.bloom.DynamicBloomFilter
-
- ANY - Static variable in class org.apache.hadoop.yarn.api.records.ResourceRequest
-
The constant string representing no locality.
- api - Variable in class org.apache.hadoop.registry.client.types.Endpoint
-
API implemented at the end of the binding
- APP_SUBMIT_TIME_ENV - Static variable in interface org.apache.hadoop.yarn.api.ApplicationConstants
-
The environment variable for APP_SUBMIT_TIME.
- appAttemptID - Variable in class org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster
-
- appAttemptIdStrPrefix - Static variable in class org.apache.hadoop.yarn.api.records.ApplicationAttemptId
-
- append(Path, int, Progressable) - Method in class org.apache.hadoop.fs.azure.NativeAzureFileSystem
-
This optional operation is not yet supported.
- append(Path, int, Progressable) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
-
- append(Path) - Method in class org.apache.hadoop.fs.FileSystem
-
Append to an existing file (optional operation).
- append(Path, int) - Method in class org.apache.hadoop.fs.FileSystem
-
Append to an existing file (optional operation).
- append(Path, int, Progressable) - Method in class org.apache.hadoop.fs.FileSystem
-
Append to an existing file (optional operation).
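For illustration, a small sketch of appending to an existing file; the path is hypothetical, and the underlying file system must actually support this optional operation:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class AppendDemo {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // The file must already exist; append is optional and may be
        // unsupported on some FileSystem implementations.
        try (FSDataOutputStream out = fs.append(new Path("/logs/app.log"))) {
          out.writeBytes("another line\n");
        }
      }
    }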
- append(Path, int, Progressable) - Method in class org.apache.hadoop.fs.FilterFileSystem
-
- append(Path, int, Progressable) - Method in class org.apache.hadoop.fs.ftp.FTPFileSystem
-
This optional operation is not yet supported.
- append(Path, int, Progressable) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
-
- append(Path, int, Progressable) - Method in class org.apache.hadoop.fs.s3.S3FileSystem
-
This optional operation is not yet supported.
- append(Path, int, Progressable) - Method in class org.apache.hadoop.fs.s3native.NativeS3FileSystem
-
This optional operation is not yet supported.
- append(Path, int, Progressable) - Method in class org.apache.hadoop.fs.viewfs.ViewFileSystem
-
- append(byte[], int, int) - Method in class org.apache.hadoop.io.Text
-
Append a range of bytes to the end of the given text
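For illustration, a minimal sketch of appending raw UTF-8 bytes to a Text instance:

    import java.nio.charset.StandardCharsets;
    import org.apache.hadoop.io.Text;

    public class TextAppendDemo {
      public static void main(String[] args) {
        Text text = new Text("hello");
        byte[] suffix = ", world".getBytes(StandardCharsets.UTF_8);
        // Appends the byte range [0, suffix.length) to the existing contents.
        text.append(suffix, 0, suffix.length);
        System.out.println(text); // hello, world
      }
    }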
- append(LoggingEvent) - Method in class org.apache.hadoop.log.metrics.EventCounter
-
- append(byte[], int, int) - Method in class org.apache.hadoop.record.Buffer
-
Deprecated.
Append specified bytes to the buffer.
- append(byte[]) - Method in class org.apache.hadoop.record.Buffer
-
Deprecated.
Append specified bytes to the buffer
- append(LoggingEvent) - Method in class org.apache.hadoop.yarn.ContainerLogAppender
-
- appendTo(StringBuilder) - Method in class org.apache.hadoop.mapreduce.JobID
-
Add the stuff after the "job" prefix to the given builder.
- appendTo(StringBuilder) - Method in class org.apache.hadoop.mapreduce.TaskAttemptID
-
Add the unique string to the StringBuilder
- appendTo(StringBuilder) - Method in class org.apache.hadoop.mapreduce.TaskID
-
Add the unique string to the given builder.
- appIdStrPrefix - Static variable in class org.apache.hadoop.yarn.api.records.ApplicationId
-
- APPLICATION_HISTORY_ENABLED - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
The setting that controls whether application history service is
enabled or not.
- APPLICATION_HISTORY_MAX_APPS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
Defines the max number of applications could be fetched using
REST API or application history protocol and shown in timeline
server web ui.
- APPLICATION_HISTORY_PREFIX - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- APPLICATION_HISTORY_SAVE_NON_AM_CONTAINER_META_INFO - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
Save container meta-info in the application history store.
- APPLICATION_HISTORY_STORE - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
Application history store class
- APPLICATION_MAX_TAG_LENGTH - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- APPLICATION_MAX_TAGS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- APPLICATION_TYPE_LENGTH - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
Default application type length
- APPLICATION_WEB_PROXY_BASE_ENV - Static variable in interface org.apache.hadoop.yarn.api.ApplicationConstants
-
The environment variable for APPLICATION_WEB_PROXY_BASE.
- ApplicationAccessType - Enum in org.apache.hadoop.yarn.api.records
-
Application access types.
- ApplicationAttemptId - Class in org.apache.hadoop.yarn.api.records
-
ApplicationAttemptId denotes the particular attempt of an ApplicationMaster
for a given ApplicationId.
- ApplicationAttemptId() - Constructor for class org.apache.hadoop.yarn.api.records.ApplicationAttemptId
-
- ApplicationAttemptNotFoundException - Exception in org.apache.hadoop.yarn.exceptions
-
- ApplicationAttemptNotFoundException(Throwable) - Constructor for exception org.apache.hadoop.yarn.exceptions.ApplicationAttemptNotFoundException
-
- ApplicationAttemptNotFoundException(String) - Constructor for exception org.apache.hadoop.yarn.exceptions.ApplicationAttemptNotFoundException
-
- ApplicationAttemptNotFoundException(String, Throwable) - Constructor for exception org.apache.hadoop.yarn.exceptions.ApplicationAttemptNotFoundException
-
- ApplicationAttemptReport - Class in org.apache.hadoop.yarn.api.records
-
ApplicationAttemptReport is a report of an application attempt.
- ApplicationAttemptReport() - Constructor for class org.apache.hadoop.yarn.api.records.ApplicationAttemptReport
-
- ApplicationClassLoader - Class in org.apache.hadoop.util
-
- ApplicationClassLoader(URL[], ClassLoader, List<String>) - Constructor for class org.apache.hadoop.util.ApplicationClassLoader
-
- ApplicationClassLoader(String, ClassLoader, List<String>) - Constructor for class org.apache.hadoop.util.ApplicationClassLoader
-
- ApplicationClassLoader - Class in org.apache.hadoop.yarn.util
-
Deprecated.
- ApplicationClassLoader(URL[], ClassLoader, List<String>) - Constructor for class org.apache.hadoop.yarn.util.ApplicationClassLoader
-
Deprecated.
- ApplicationClassLoader(String, ClassLoader, List<String>) - Constructor for class org.apache.hadoop.yarn.util.ApplicationClassLoader
-
Deprecated.
- ApplicationClientProtocol - Interface in org.apache.hadoop.yarn.api
-
The protocol between clients and the ResourceManager
to submit/abort jobs and to get information on applications, cluster metrics,
nodes, queues and ACLs.
- ApplicationConstants - Interface in org.apache.hadoop.yarn.api
-
This is the API for applications, comprising the constants that YARN sets
up for the applications and the containers.
- ApplicationHistoryProtocol - Interface in org.apache.hadoop.yarn.api
-
The protocol between clients and the ApplicationHistoryServer to get the
information of completed applications etc.
- ApplicationId - Class in org.apache.hadoop.yarn.api.records
-
ApplicationId represents the globally unique identifier for an application.
- ApplicationId() - Constructor for class org.apache.hadoop.yarn.api.records.ApplicationId
-
- ApplicationIdNotProvidedException - Exception in org.apache.hadoop.yarn.exceptions
-
- ApplicationIdNotProvidedException(Throwable) - Constructor for exception org.apache.hadoop.yarn.exceptions.ApplicationIdNotProvidedException
-
- ApplicationIdNotProvidedException(String) - Constructor for exception org.apache.hadoop.yarn.exceptions.ApplicationIdNotProvidedException
-
- ApplicationIdNotProvidedException(String, Throwable) - Constructor for exception org.apache.hadoop.yarn.exceptions.ApplicationIdNotProvidedException
-
- ApplicationMaster - Class in org.apache.hadoop.yarn.applications.distributedshell
-
An ApplicationMaster for executing shell commands on a set of launched
containers using the YARN framework.
- ApplicationMaster() - Constructor for class org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster
-
- ApplicationMasterProtocol - Interface in org.apache.hadoop.yarn.api
-
The protocol between a live instance of ApplicationMaster and the
ResourceManager.
- ApplicationNotFoundException - Exception in org.apache.hadoop.yarn.exceptions
-
This exception is thrown by the (GetApplicationReportRequest) API when the
application doesn't exist in the RM and AHS.
- ApplicationNotFoundException(Throwable) - Constructor for exception org.apache.hadoop.yarn.exceptions.ApplicationNotFoundException
-
- ApplicationNotFoundException(String) - Constructor for exception org.apache.hadoop.yarn.exceptions.ApplicationNotFoundException
-
- ApplicationNotFoundException(String, Throwable) - Constructor for exception org.apache.hadoop.yarn.exceptions.ApplicationNotFoundException
-
- ApplicationReport - Class in org.apache.hadoop.yarn.api.records
-
ApplicationReport is a report of an application.
- ApplicationReport() - Constructor for class org.apache.hadoop.yarn.api.records.ApplicationReport
-
- ApplicationResourceUsageReport - Class in org.apache.hadoop.yarn.api.records
-
Contains various scheduling metrics to be reported by UI and CLI.
- ApplicationResourceUsageReport() - Constructor for class org.apache.hadoop.yarn.api.records.ApplicationResourceUsageReport
-
- ApplicationsRequestScope - Enum in org.apache.hadoop.yarn.api.protocolrecords
-
Enumeration that controls the scope of applications fetched
- ApplicationSubmissionContext - Class in org.apache.hadoop.yarn.api.records
-
ApplicationSubmissionContext represents all of the information needed by the
ResourceManager to launch the ApplicationMaster for an application.
- ApplicationSubmissionContext() - Constructor for class org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext
-
- applyUMask(FsPermission) - Method in class org.apache.hadoop.fs.permission.FsPermission
-
Apply a umask to this permission and return a new one.
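For illustration, a small sketch of applying a umask to an FsPermission; the permission and umask values are arbitrary examples:

    import org.apache.hadoop.fs.permission.FsPermission;

    public class UmaskDemo {
      public static void main(String[] args) {
        FsPermission requested = new FsPermission((short) 0666);
        FsPermission umask = new FsPermission((short) 022);
        // applyUMask clears the umask bits: 0666 & ~0022 == 0644.
        System.out.println(requested.applyUMask(umask)); // rw-r--r--
      }
    }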
- approximateCount(Key) - Method in class org.apache.hadoop.util.bloom.CountingBloomFilter
-
This method calculates an approximate count of the key, i.e.
- areSymlinksEnabled() - Static method in class org.apache.hadoop.fs.FileSystem
-
- ArrayFile - Class in org.apache.hadoop.io
-
A dense file-based mapping from integers to values.
- ArrayFile() - Constructor for class org.apache.hadoop.io.ArrayFile
-
- ArrayListBackedIterator<X extends Writable> - Class in org.apache.hadoop.mapred.join
-
This class provides an implementation of ResetableIterator.
- ArrayListBackedIterator() - Constructor for class org.apache.hadoop.mapred.join.ArrayListBackedIterator
-
- ArrayListBackedIterator(ArrayList<X>) - Constructor for class org.apache.hadoop.mapred.join.ArrayListBackedIterator
-
- ArrayListBackedIterator<X extends Writable> - Class in org.apache.hadoop.mapreduce.lib.join
-
This class provides an implementation of ResetableIterator.
- ArrayListBackedIterator() - Constructor for class org.apache.hadoop.mapreduce.lib.join.ArrayListBackedIterator
-
- ArrayListBackedIterator(ArrayList<X>) - Constructor for class org.apache.hadoop.mapreduce.lib.join.ArrayListBackedIterator
-
- ArrayPrimitiveWritable - Class in org.apache.hadoop.io
-
This is a wrapper class.
- ArrayPrimitiveWritable() - Constructor for class org.apache.hadoop.io.ArrayPrimitiveWritable
-
Construct an empty instance, for use during Writable read
- ArrayPrimitiveWritable(Class<?>) - Constructor for class org.apache.hadoop.io.ArrayPrimitiveWritable
-
Construct an instance of known type but no value yet
for use with type-specific wrapper classes
- ArrayPrimitiveWritable(Object) - Constructor for class org.apache.hadoop.io.ArrayPrimitiveWritable
-
Wrap an existing array of primitives
- ArrayWritable - Class in org.apache.hadoop.io
-
A Writable for arrays containing instances of a class.
- ArrayWritable(Class<? extends Writable>) - Constructor for class org.apache.hadoop.io.ArrayWritable
-
- ArrayWritable(Class<? extends Writable>, Writable[]) - Constructor for class org.apache.hadoop.io.ArrayWritable
-
- ArrayWritable(String[]) - Constructor for class org.apache.hadoop.io.ArrayWritable
-
- asList() - Static method in enum org.apache.hadoop.fs.StorageType
-
- AsyncDispatcher - Class in org.apache.hadoop.yarn.event
-
Dispatches Events in a separate thread.
- AsyncDispatcher() - Constructor for class org.apache.hadoop.yarn.event.AsyncDispatcher
-
- AsyncDispatcher(BlockingQueue<Event>) - Constructor for class org.apache.hadoop.yarn.event.AsyncDispatcher
-
- ATTEMPT - Static variable in class org.apache.hadoop.mapreduce.TaskAttemptID
-
- attributes() - Method in class org.apache.hadoop.registry.client.types.ServiceRecord
-
The map of "other" attributes set when parsing.
- authenticate(URL, AuthenticatedURL.Token) - Method in class org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator
-
- authorize(UserGroupInformation, String) - Method in class org.apache.hadoop.security.authorize.DefaultImpersonationProvider
-
- authorize(UserGroupInformation, String) - Method in interface org.apache.hadoop.security.authorize.ImpersonationProvider
-
Authorize the superuser which is doing doAs
- AUTO_FAILOVER_EMBEDDED - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- AUTO_FAILOVER_ENABLED - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- AUTO_FAILOVER_PREFIX - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- AUTO_FAILOVER_ZK_BASE_PATH - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- available() - Method in class org.apache.hadoop.io.compress.DecompressorStream
-
- AVRO_REFLECT_PACKAGES - Static variable in class org.apache.hadoop.io.serializer.avro.AvroReflectSerialization
-
Key to configure packages that contain classes to be serialized and
deserialized using this class.
- AVRO_SCHEMA_KEY - Static variable in class org.apache.hadoop.io.serializer.avro.AvroSerialization
-
- AvroFSInput - Class in org.apache.hadoop.fs
-
- AvroFSInput(FSDataInputStream, long) - Constructor for class org.apache.hadoop.fs.AvroFSInput
-
- AvroFSInput(FileContext, Path) - Constructor for class org.apache.hadoop.fs.AvroFSInput
-
- AvroReflectSerializable - Interface in org.apache.hadoop.io.serializer.avro
-
Tag interface for Avro 'reflect' serializable classes.
- AvroReflectSerialization - Class in org.apache.hadoop.io.serializer.avro
-
Serialization for Avro Reflect classes.
- AvroReflectSerialization() - Constructor for class org.apache.hadoop.io.serializer.avro.AvroReflectSerialization
-
- AvroSerialization<T> - Class in org.apache.hadoop.io.serializer.avro
-
Base class for providing serialization to Avro types.
- AvroSerialization() - Constructor for class org.apache.hadoop.io.serializer.avro.AvroSerialization
-
- AvroSpecificSerialization - Class in org.apache.hadoop.io.serializer.avro
-
Serialization for Avro Specific classes.
- AvroSpecificSerialization() - Constructor for class org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization
-
- AzureException - Exception in org.apache.hadoop.fs.azure
-
Thrown if there is a problem communicating with Azure Storage service.
- AzureException(String) - Constructor for exception org.apache.hadoop.fs.azure.AzureException
-
- AzureException(String, Throwable) - Constructor for exception org.apache.hadoop.fs.azure.AzureException
-
- AzureException(Throwable) - Constructor for exception org.apache.hadoop.fs.azure.AzureException
-
- AzureFileSystemInstrumentation - Class in org.apache.hadoop.fs.azure.metrics
-
A metrics source for the WASB file system to track all the metrics we care
about for getting a clear picture of the performance/reliability/interaction
of the Hadoop cluster with Azure Storage.
- AzureFileSystemInstrumentation(Configuration) - Constructor for class org.apache.hadoop.fs.azure.metrics.AzureFileSystemInstrumentation
-
- CACHE_ARCHIVES - Static variable in class org.apache.hadoop.filecache.DistributedCache
-
Deprecated.
- CACHE_ARCHIVES_SIZES - Static variable in class org.apache.hadoop.filecache.DistributedCache
-
Deprecated.
- CACHE_ARCHIVES_TIMESTAMPS - Static variable in class org.apache.hadoop.filecache.DistributedCache
-
Deprecated.
- CACHE_FILES - Static variable in class org.apache.hadoop.filecache.DistributedCache
-
Deprecated.
- CACHE_FILES_SIZES - Static variable in class org.apache.hadoop.filecache.DistributedCache
-
Deprecated.
- CACHE_FILES_TIMESTAMPS - Static variable in class org.apache.hadoop.filecache.DistributedCache
-
Deprecated.
- CACHE_LOCALARCHIVES - Static variable in class org.apache.hadoop.filecache.DistributedCache
-
Deprecated.
- CACHE_LOCALFILES - Static variable in class org.apache.hadoop.filecache.DistributedCache
-
Deprecated.
- CACHE_SYMLINK - Static variable in class org.apache.hadoop.filecache.DistributedCache
-
Deprecated.
- CachedDNSToSwitchMapping - Class in org.apache.hadoop.net
-
A cached implementation of DNSToSwitchMapping that takes a raw
DNSToSwitchMapping and stores the resolved network location in a cache.
- CachedDNSToSwitchMapping(DNSToSwitchMapping) - Constructor for class org.apache.hadoop.net.CachedDNSToSwitchMapping
-
cache a raw DNS mapping
- CacheFlag - Enum in org.apache.hadoop.fs
-
Specifies semantics for CacheDirective operations.
- cacheGroupsAdd(List<String>) - Method in interface org.apache.hadoop.security.GroupMappingServiceProvider
-
Caches the group user information
- cacheGroupsRefresh() - Method in interface org.apache.hadoop.security.GroupMappingServiceProvider
-
Refresh the cache of groups and user mapping
- callbackHandler - Variable in class org.apache.hadoop.yarn.client.api.async.NMClientAsync
-
- cancelDelegationToken(Token<DelegationTokenIdentifier>) - Method in class org.apache.hadoop.mapred.JobClient
-
Deprecated.
Use Token.cancel(org.apache.hadoop.conf.Configuration)
instead
- cancelDelegationToken(Token<DelegationTokenIdentifier>) - Method in class org.apache.hadoop.mapreduce.Cluster
-
Deprecated.
Use Token.cancel(org.apache.hadoop.conf.Configuration)
instead
- cancelDelegationToken(URL, DelegationTokenAuthenticatedURL.Token) - Method in class org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL
-
Cancels a delegation token from the server end-point.
- cancelDelegationToken(URL, DelegationTokenAuthenticatedURL.Token, String) - Method in class org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL
-
Cancels a delegation token from the server end-point.
- cancelDelegationToken(URL, AuthenticatedURL.Token, Token<AbstractDelegationTokenIdentifier>) - Method in class org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator
-
Cancels a delegation token from the server end-point.
- cancelDelegationToken(URL, AuthenticatedURL.Token, Token<AbstractDelegationTokenIdentifier>, String) - Method in class org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator
-
Cancels a delegation token from the server end-point.
- cancelDelegationToken(Token<TimelineDelegationTokenIdentifier>) - Method in class org.apache.hadoop.yarn.client.api.TimelineClient
-
Cancel a timeline delegation token.
- CancelDelegationTokenRequest - Interface in org.apache.hadoop.mapreduce.v2.api.protocolrecords
-
The request issued by the client to the ResourceManager to cancel a
delegation token.
- CancelDelegationTokenResponse - Interface in org.apache.hadoop.mapreduce.v2.api.protocolrecords
-
The response from the ResourceManager to a cancelDelegationToken request.
- cancelDeleteOnExit(Path) - Method in class org.apache.hadoop.fs.FileSystem
-
Cancel the deletion of the path when the FileSystem is closed
- canExecute(File) - Static method in class org.apache.hadoop.fs.FileUtil
-
- canonicalizeUri(URI) - Method in class org.apache.hadoop.fs.FileSystem
-
Canonicalize the given URI.
- canonicalizeUri(URI) - Method in class org.apache.hadoop.fs.FilterFileSystem
-
- canRead(File) - Static method in class org.apache.hadoop.fs.FileUtil
-
- CanSetDropBehind - Interface in org.apache.hadoop.fs
-
- CanSetReadahead - Interface in org.apache.hadoop.fs
-
- CanUnbuffer - Interface in org.apache.hadoop.fs
-
FSDataInputStreams implement this interface to indicate that they can clear
their buffers on request.
- canWrite(File) - Static method in class org.apache.hadoop.fs.FileUtil
-
- ChainMapper - Class in org.apache.hadoop.mapred.lib
-
The ChainMapper class allows the use of multiple Mapper classes within a
single Map task.
- ChainMapper() - Constructor for class org.apache.hadoop.mapred.lib.ChainMapper
-
Constructor.
- ChainMapper<KEYIN,VALUEIN,KEYOUT,VALUEOUT> - Class in org.apache.hadoop.mapreduce.lib.chain
-
The ChainMapper class allows the use of multiple Mapper classes within a
single Map task.
- ChainMapper() - Constructor for class org.apache.hadoop.mapreduce.lib.chain.ChainMapper
-
- ChainReducer - Class in org.apache.hadoop.mapred.lib
-
The ChainReducer class allows chaining multiple Mapper classes after a
Reducer within the Reducer task.
- ChainReducer() - Constructor for class org.apache.hadoop.mapred.lib.ChainReducer
-
Constructor.
- ChainReducer<KEYIN,VALUEIN,KEYOUT,VALUEOUT> - Class in org.apache.hadoop.mapreduce.lib.chain
-
The ChainReducer class allows chaining multiple Mapper classes after a
Reducer within the Reducer task.
- ChainReducer() - Constructor for class org.apache.hadoop.mapreduce.lib.chain.ChainReducer
-
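As a rough illustration of how the chain classes above are wired together, here is a minimal sketch using the new (mapreduce) API; AMap, BMap and CReduce are hypothetical user-written Mapper/Reducer implementations, not Hadoop classes:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.chain.ChainMapper;
    import org.apache.hadoop.mapreduce.lib.chain.ChainReducer;

    public class ChainDriver {
      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "chain example");
        // Two mappers run back to back inside the single map task.
        ChainMapper.addMapper(job, AMap.class, LongWritable.class, Text.class,
            Text.class, Text.class, new Configuration(false));
        ChainMapper.addMapper(job, BMap.class, Text.class, Text.class,
            Text.class, Text.class, new Configuration(false));
        // One reducer, optionally followed by further mappers, inside the reduce task.
        ChainReducer.setReducer(job, CReduce.class, Text.class, Text.class,
            Text.class, Text.class, new Configuration(false));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }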
- changed() - Method in class org.apache.hadoop.metrics2.lib.MutableMetric
-
- charAt(int) - Method in class org.apache.hadoop.io.Text
-
Returns the Unicode scalar value (a 32-bit integer) for the character at the
given position.
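A tiny sketch of charAt in practice (assuming, as in current releases, that an out-of-range position yields -1):

    import org.apache.hadoop.io.Text;

    Text t = new Text("hadoop");
    int scalar = t.charAt(0);    // 0x68, the Unicode scalar value for 'h'
    int missing = t.charAt(99);  // -1: position out of range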
- checkArgs(String) - Method in interface org.apache.hadoop.ha.FenceMethod
-
Verify that the given fencing method's arguments are valid.
- checkFencingConfigured() - Method in class org.apache.hadoop.ha.HAServiceTarget
-
- checkOutputSpecs(FileSystem, JobConf) - Method in class org.apache.hadoop.mapred.FileOutputFormat
-
- checkOutputSpecs(FileSystem, JobConf) - Method in class org.apache.hadoop.mapred.lib.db.DBOutputFormat
-
Check for validity of the output-specification for the job.
- checkOutputSpecs(FileSystem, JobConf) - Method in class org.apache.hadoop.mapred.lib.FilterOutputFormat
-
- checkOutputSpecs(FileSystem, JobConf) - Method in class org.apache.hadoop.mapred.lib.LazyOutputFormat
-
- checkOutputSpecs(FileSystem, JobConf) - Method in class org.apache.hadoop.mapred.lib.NullOutputFormat
-
- checkOutputSpecs(FileSystem, JobConf) - Method in interface org.apache.hadoop.mapred.OutputFormat
-
Check for validity of the output-specification for the job.
- checkOutputSpecs(FileSystem, JobConf) - Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat
-
- checkOutputSpecs(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.db.DBOutputFormat
-
- checkOutputSpecs(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
- checkOutputSpecs(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FilterOutputFormat
-
- checkOutputSpecs(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.output.LazyOutputFormat
-
- checkOutputSpecs(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.output.NullOutputFormat
-
- checkOutputSpecs(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.output.SequenceFileAsBinaryOutputFormat
-
- checkOutputSpecs(JobContext) - Method in class org.apache.hadoop.mapreduce.OutputFormat
-
Check for validity of the output-specification for the job.
- checkPath(Path) - Method in class org.apache.hadoop.fs.AbstractFileSystem
-
Check that a Path belongs to this FileSystem.
- checkPath(Path) - Method in class org.apache.hadoop.fs.azure.NativeAzureFileSystem
-
- checkPath(Path) - Method in class org.apache.hadoop.fs.FileSystem
-
Check that a Path belongs to this FileSystem.
- checkPath(Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
-
Check that a Path belongs to this FileSystem.
- checkPidPgrpidForMatch() - Method in class org.apache.hadoop.yarn.util.ResourceCalculatorProcessTree
-
Verify that the tree process id is the same as its process group id.
- checkpoint() - Method in class org.apache.hadoop.fs.Trash
-
Create a trash checkpoint.
- Checkpointable - Annotation Type in org.apache.hadoop.mapreduce.task.annotation
-
Contract representing to the framework that the task can be safely preempted
and restarted between invocations of the user-defined function.
- checkScheme(URI, String) - Method in class org.apache.hadoop.fs.AbstractFileSystem
-
Check that the URI's scheme matches.
- checkStateTransition(String, Service.STATE, Service.STATE) - Static method in class org.apache.hadoop.service.ServiceStateModel
-
Check that a state transition is valid and
throw an exception if it is not.
- checkStream() - Method in class org.apache.hadoop.io.compress.DecompressorStream
-
- ChecksumException - Exception in org.apache.hadoop.fs
-
Thrown for checksum errors.
- ChecksumException(String, long) - Constructor for exception org.apache.hadoop.fs.ChecksumException
-
- ChecksumFileSystem - Class in org.apache.hadoop.fs
-
Abstract Checksumed FileSystem.
- ChecksumFileSystem(FileSystem) - Constructor for class org.apache.hadoop.fs.ChecksumFileSystem
-
- children - Variable in class org.apache.hadoop.registry.client.types.RegistryPathStatus
-
Number of child nodes
- chmod(String, String) - Static method in class org.apache.hadoop.fs.FileUtil
-
Change the permissions on a filename.
- chmod(String, String, boolean) - Static method in class org.apache.hadoop.fs.FileUtil
-
Change the permissions on a file / directory, recursively, if
needed.
- CLASS_PATH_SEPARATOR - Static variable in interface org.apache.hadoop.yarn.api.ApplicationConstants
-
This constant is used to construct the class path and will be replaced with the
real class path separator (':' on Linux, ';' on Windows) by the
NodeManager on container launch.
- cleanup(Log, Closeable...) - Static method in class org.apache.hadoop.io.IOUtils
-
Close the Closeable objects and ignore any IOException or null pointers.
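For the cleanup helper above, a minimal fragment showing the usual pattern of closing several streams in a finally block while ignoring IOExceptions and nulls:

    import java.io.InputStream;
    import java.io.OutputStream;
    import org.apache.commons.logging.Log;
    import org.apache.commons.logging.LogFactory;
    import org.apache.hadoop.io.IOUtils;

    Log log = LogFactory.getLog("example");
    InputStream in = null;
    OutputStream out = null;
    try {
      // ... open and use the streams ...
    } finally {
      // Closes both; any IOException is logged rather than thrown, and nulls are skipped.
      IOUtils.cleanup(log, in, out);
    }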
- cleanup(Mapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT>.Context) - Method in class org.apache.hadoop.mapreduce.Mapper
-
Called once at the end of the task.
- cleanup(Reducer<KEYIN, VALUEIN, KEYOUT, VALUEOUT>.Context) - Method in class org.apache.hadoop.mapreduce.Reducer
-
Called once at the end of the task.
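A short sketch of overriding the per-task cleanup hook described above (WordLikeMapper and the counter names are illustrative only):

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class WordLikeMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
      private int recordsSeen;

      @Override
      protected void map(LongWritable key, Text value, Context context)
          throws IOException, InterruptedException {
        recordsSeen++;
        context.write(value, new IntWritable(1));
      }

      @Override
      protected void cleanup(Context context) throws IOException, InterruptedException {
        // Called once at the end of the task; flush any per-task state here.
        context.getCounter("example", "records.seen").increment(recordsSeen);
      }
    }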
- cleanupJob(JobContext) - Method in class org.apache.hadoop.mapred.FileOutputCommitter
-
Deprecated.
- cleanupJob(JobContext) - Method in class org.apache.hadoop.mapred.OutputCommitter
-
- cleanupJob(JobContext) - Method in class org.apache.hadoop.mapred.OutputCommitter
-
- cleanupJob(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
-
Deprecated.
- cleanupJob(JobContext) - Method in class org.apache.hadoop.mapreduce.OutputCommitter
-
- cleanUpPartialOutputForTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.PartialFileOutputCommitter
-
- cleanUpPartialOutputForTask(TaskAttemptContext) - Method in interface org.apache.hadoop.mapreduce.lib.output.PartialOutputCommitter
-
Remove all previously committed outputs from prior executions of this task.
- cleanupProgress() - Method in class org.apache.hadoop.mapred.JobStatus
-
- cleanupProgress() - Method in interface org.apache.hadoop.mapred.RunningJob
-
Get the progress of the job's cleanup-tasks, as a float between 0.0
and 1.0.
- cleanupProgress() - Method in class org.apache.hadoop.mapreduce.Job
-
Get the progress of the job's cleanup-tasks, as a float between 0.0
and 1.0.
- cleanupRunningContainersOnStop(boolean) - Method in class org.apache.hadoop.yarn.client.api.NMClient
-
Set whether the containers that are started by this client, and are
still running should be stopped when the client stops.
- cleanUpTokenReferral(Configuration) - Static method in class org.apache.hadoop.mapreduce.security.TokenCache
-
Remove jobtoken referrals which don't make sense in the context
of the task execution.
- clear() - Method in class org.apache.hadoop.conf.Configuration
-
Clears all keys from the configuration.
- clear() - Method in class org.apache.hadoop.io.MapWritable
-
- clear() - Method in class org.apache.hadoop.io.SortedMapWritable
-
- clear() - Method in class org.apache.hadoop.io.Text
-
Clear the string to empty.
- clear() - Method in class org.apache.hadoop.mapreduce.lib.join.ArrayListBackedIterator
-
- clear() - Method in interface org.apache.hadoop.mapreduce.lib.join.ResetableIterator
-
Close datasources, but do not release internal resources.
- clear() - Method in class org.apache.hadoop.mapreduce.lib.join.StreamBackedIterator
-
- clear() - Method in class org.apache.hadoop.util.bloom.HashFunction
-
Clears this hash function.
- CLEAR_TEXT_FALLBACK - Static variable in class org.apache.hadoop.security.alias.CredentialProvider
-
- clearChanged() - Method in class org.apache.hadoop.metrics2.lib.MutableMetric
-
Clear the changed flag in the snapshot operations
- clearMark() - Method in class org.apache.hadoop.mapreduce.MarkableIterator
-
- clearStatistics() - Static method in class org.apache.hadoop.fs.AbstractFileSystem
-
- clearStatistics() - Static method in class org.apache.hadoop.fs.FileContext
-
Clears all the statistics stored in AbstractFileSystem, for all the file
systems.
- clearStatistics() - Static method in class org.apache.hadoop.fs.FileSystem
-
Reset all statistics for all file systems
- clearWriteAccessors() - Method in interface org.apache.hadoop.registry.client.api.RegistryOperations
-
Clear all write accessors.
- CLI - Class in org.apache.hadoop.mapreduce.tools
-
Interprets the MapReduce CLI options.
- CLI() - Constructor for class org.apache.hadoop.mapreduce.tools.CLI
-
- CLI(Configuration) - Constructor for class org.apache.hadoop.mapreduce.tools.CLI
-
- Client - Class in org.apache.hadoop.yarn.applications.distributedshell
-
Client for Distributed Shell application submission to YARN.
- Client(Configuration) - Constructor for class org.apache.hadoop.yarn.applications.distributedshell.Client
-
- Client() - Constructor for class org.apache.hadoop.yarn.applications.distributedshell.Client
-
- client - Variable in class org.apache.hadoop.yarn.client.api.async.AMRMClientAsync
-
- client - Variable in class org.apache.hadoop.yarn.client.api.async.NMClientAsync
-
- CLIENT_FAILOVER_MAX_ATTEMPTS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- CLIENT_FAILOVER_PREFIX - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- CLIENT_FAILOVER_PROXY_PROVIDER - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- CLIENT_FAILOVER_RETRIES - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- CLIENT_FAILOVER_RETRIES_ON_SOCKET_TIMEOUTS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- CLIENT_FAILOVER_SLEEPTIME_BASE_MS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- CLIENT_FAILOVER_SLEEPTIME_MAX_MS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- CLIENT_NM_CONNECT_MAX_WAIT_MS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
Max time to wait to establish a connection to NM
- CLIENT_NM_CONNECT_RETRY_INTERVAL_MS - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
Time interval between each attempt to connect to NM
- clientErrorEncountered() - Method in class org.apache.hadoop.fs.azure.metrics.AzureFileSystemInstrumentation
-
Indicate that we just encountered a client-side error.
- ClientRMProxy<T> - Class in org.apache.hadoop.yarn.client
-
- ClientRMSecurityInfo - Class in org.apache.hadoop.yarn.security.client
-
- ClientRMSecurityInfo() - Constructor for class org.apache.hadoop.yarn.security.client.ClientRMSecurityInfo
-
- ClientSCMProtocol - Interface in org.apache.hadoop.yarn.api
-
The protocol between clients and the SharedCacheManager to claim and release resources in the shared cache.
- ClientTimelineSecurityInfo - Class in org.apache.hadoop.yarn.security.client
-
- ClientTimelineSecurityInfo() - Constructor for class org.apache.hadoop.yarn.security.client.ClientTimelineSecurityInfo
-
- ClientToAMTokenIdentifier - Class in org.apache.hadoop.yarn.security.client
-
- ClientToAMTokenIdentifier() - Constructor for class org.apache.hadoop.yarn.security.client.ClientToAMTokenIdentifier
-
- ClientToAMTokenIdentifier(ApplicationAttemptId, String) - Constructor for class org.apache.hadoop.yarn.security.client.ClientToAMTokenIdentifier
-
- ClientToAMTokenSecretManager - Class in org.apache.hadoop.yarn.security.client
-
A simple SecretManager for AMs to validate Client-RM tokens issued to
clients by the RM, using the underlying master key shared by the RM with the AMs on
their launch.
- ClientToAMTokenSecretManager(ApplicationAttemptId, byte[]) - Constructor for class org.apache.hadoop.yarn.security.client.ClientToAMTokenSecretManager
-
- Clock - Interface in org.apache.hadoop.yarn.util
-
A simple clock interface that gives you time.
- clone(T, Configuration) - Static method in class org.apache.hadoop.io.WritableUtils
-
Make a copy of a writable object using serialization to a buffer.
- clone() - Method in class org.apache.hadoop.mapreduce.JobStatus
-
- clone() - Method in class org.apache.hadoop.record.Buffer
-
Deprecated.
- clone() - Method in class org.apache.hadoop.registry.client.types.Endpoint
-
Shallow clone: the lists of addresses are shared
- clone() - Method in class org.apache.hadoop.registry.client.types.ServiceRecord
-
Shallow clone: all endpoints will be shared across instances
- cloneInto(Writable, Writable) - Static method in class org.apache.hadoop.io.WritableUtils
-
Deprecated.
use ReflectionUtils.cloneInto instead.
- cloneWritableInto(Writable, Writable) - Static method in class org.apache.hadoop.util.ReflectionUtils
-
Deprecated.
- close() - Method in class org.apache.hadoop.crypto.key.KeyProvider
-
Can be used by implementing classes to close any resources
that require closing
- close() - Method in class org.apache.hadoop.fs.AvroFSInput
-
- close() - Method in class org.apache.hadoop.fs.azure.NativeAzureFileSystem
-
- close() - Method in class org.apache.hadoop.fs.FileSystem
-
No more filesystem operations are needed.
- close() - Method in class org.apache.hadoop.fs.FilterFileSystem
-
- close() - Method in class org.apache.hadoop.fs.FSDataOutputStream
-
Close the underlying output stream.
- close() - Method in class org.apache.hadoop.fs.RawLocalFileSystem
-
- close() - Method in class org.apache.hadoop.io.compress.CompressionInputStream
-
- close() - Method in class org.apache.hadoop.io.compress.CompressionOutputStream
-
- close() - Method in class org.apache.hadoop.io.compress.CompressorStream
-
- close() - Method in class org.apache.hadoop.io.compress.DecompressorStream
-
- close() - Method in class org.apache.hadoop.io.DefaultStringifier
-
- close() - Method in interface org.apache.hadoop.io.Stringifier
-
Closes this object.
- close() - Method in class org.apache.hadoop.log.metrics.EventCounter
-
- close() - Method in class org.apache.hadoop.mapred.JobClient
-
Close the JobClient.
- close() - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
-
Close all child RRs.
- close() - Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
-
Forward close request to proxied RR.
- close() - Method in class org.apache.hadoop.mapred.KeyValueLineRecordReader
-
- close() - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorCombiner
-
Do nothing.
- close() - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJobBase
-
- close() - Method in class org.apache.hadoop.mapred.lib.ChainMapper
-
Closes the ChainMapper and all the Mappers in the chain.
- close() - Method in class org.apache.hadoop.mapred.lib.ChainReducer
-
Closes the ChainReducer, the Reducer and all the Mappers in the chain.
- close() - Method in class org.apache.hadoop.mapred.lib.CombineFileRecordReader
-
- close() - Method in class org.apache.hadoop.mapred.lib.CombineFileRecordReaderWrapper
-
- close() - Method in class org.apache.hadoop.mapred.lib.FieldSelectionMapReduce
-
- close() - Method in class org.apache.hadoop.mapred.lib.MultipleOutputs
-
Closes all the opened named outputs.
- close() - Method in class org.apache.hadoop.mapred.MapReduceBase
-
Default implementation that does nothing.
- close() - Method in interface org.apache.hadoop.mapred.RecordReader
-
- close(Reporter) - Method in interface org.apache.hadoop.mapred.RecordWriter
-
Close this RecordWriter
to future operations.
- close() - Method in class org.apache.hadoop.mapred.SequenceFileAsTextRecordReader
-
- close() - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
-
- close() - Method in class org.apache.hadoop.mapreduce.Cluster
-
Close the Cluster.
- close() - Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
Close the record reader.
- close() - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader
-
- close() - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReaderWrapper
-
- close() - Method in class org.apache.hadoop.mapreduce.lib.input.KeyValueLineRecordReader
-
- close() - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileAsTextRecordReader
-
- close() - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader
-
- close() - Method in class org.apache.hadoop.mapreduce.lib.join.ArrayListBackedIterator
-
- close() - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeRecordReader
-
Close all child RRs.
- close() - Method in interface org.apache.hadoop.mapreduce.lib.join.ResetableIterator
-
Close datasources and release resources.
- close() - Method in class org.apache.hadoop.mapreduce.lib.join.StreamBackedIterator
-
- close() - Method in class org.apache.hadoop.mapreduce.lib.join.WrappedRecordReader
-
Forward close request to proxied RR.
- close() - Method in class org.apache.hadoop.mapreduce.lib.output.MultipleOutputs
-
Closes all the opened outputs.
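A hedged sketch of the usual MultipleOutputs lifecycle in the new API: register a named output at job-setup time, write to it from the task, and close everything in cleanup (the "summary" name and the MOReducer class are illustrative only):

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;

    public class MOReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
      // At job-setup time (outside this class), something like:
      //   MultipleOutputs.addNamedOutput(job, "summary", TextOutputFormat.class,
      //       Text.class, IntWritable.class);
      private MultipleOutputs<Text, IntWritable> mos;

      @Override
      protected void setup(Context context) {
        mos = new MultipleOutputs<Text, IntWritable>(context);
      }

      @Override
      protected void reduce(Text key, Iterable<IntWritable> values, Context context)
          throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable v : values) {
          sum += v.get();
        }
        mos.write("summary", key, new IntWritable(sum));  // write to the named output
      }

      @Override
      protected void cleanup(Context context) throws IOException, InterruptedException {
        mos.close();  // closes all the opened named outputs
      }
    }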
- close() - Method in class org.apache.hadoop.mapreduce.RecordReader
-
Close the record reader.
- close(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.RecordWriter
-
Close this RecordWriter
to future operations.
- close() - Method in class org.apache.hadoop.metrics.ganglia.GangliaContext
-
Method to close the datagram socket.
- close() - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
-
Stops monitoring and frees buffered data, returning this
object to its initial state.
- close() - Method in class org.apache.hadoop.metrics2.sink.FileSink
-
- close() - Method in class org.apache.hadoop.metrics2.sink.GraphiteSink
-
- close() - Method in class org.apache.hadoop.service.AbstractService
-
- close() - Method in interface org.apache.hadoop.service.Service
-
A version of stop() that is designed to be usable in Java7 closure
clauses.
- close() - Method in class org.apache.hadoop.yarn.ContainerLogAppender
-
- close() - Method in class org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat.LogReader
-
- Closeable - Interface in org.apache.hadoop.io
-
Deprecated.
Use java.io.Closeable instead.
- closeAll() - Static method in class org.apache.hadoop.fs.FileSystem
-
Close all cached filesystems.
- closeAllForUGI(UserGroupInformation) - Static method in class org.apache.hadoop.fs.FileSystem
-
Close all cached filesystems for a given UGI.
- closeConnection() - Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
- closed - Variable in class org.apache.hadoop.io.compress.CompressorStream
-
- closed - Variable in class org.apache.hadoop.io.compress.DecompressorStream
-
- closeSocket(Socket) - Static method in class org.apache.hadoop.io.IOUtils
-
- closeStream(Closeable) - Static method in class org.apache.hadoop.io.IOUtils
-
- Cluster - Class in org.apache.hadoop.mapreduce
-
Provides a way to access information about the map/reduce cluster.
- Cluster(Configuration) - Constructor for class org.apache.hadoop.mapreduce.Cluster
-
- Cluster(InetSocketAddress, Configuration) - Constructor for class org.apache.hadoop.mapreduce.Cluster
-
- cluster - Variable in class org.apache.hadoop.mapreduce.tools.CLI
-
- ClusterMetrics - Class in org.apache.hadoop.mapreduce
-
Status information on the current state of the Map-Reduce cluster.
- ClusterMetrics() - Constructor for class org.apache.hadoop.mapreduce.ClusterMetrics
-
- ClusterMetrics(int, int, int, int, int, int, int, int, int, int, int, int) - Constructor for class org.apache.hadoop.mapreduce.ClusterMetrics
-
- ClusterMetrics(int, int, int, int, int, int, int, int, int, int, int, int, int) - Constructor for class org.apache.hadoop.mapreduce.ClusterMetrics
-
- ClusterStatus - Class in org.apache.hadoop.mapred
-
Status information on the current state of the Map-Reduce cluster.
- clusterTimestamp - Variable in class org.apache.hadoop.yarn.api.records.ReservationId
-
- cmp - Variable in class org.apache.hadoop.mapreduce.lib.join.WrappedRecordReader
-
- cmpcl - Variable in class org.apache.hadoop.mapred.join.Parser.Node
-
- cmpcl - Variable in class org.apache.hadoop.mapreduce.lib.join.Parser.Node
-
- CodeBuffer - Class in org.apache.hadoop.record.compiler
-
- CodecPool - Class in org.apache.hadoop.io.compress
-
A global compressor/decompressor pool used to save and reuse
(possibly native) compression/decompression codecs.
- CodecPool() - Constructor for class org.apache.hadoop.io.compress.CodecPool
-
- collect(K, V) - Method in interface org.apache.hadoop.mapred.OutputCollector
-
Adds a key/value pair to the output.
- column - Variable in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
Deprecated.
- combine(Object[], TupleWritable) - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
-
- combine(Object[], TupleWritable) - Method in class org.apache.hadoop.mapred.join.InnerJoinRecordReader
-
Return true iff the tuple is full (all data sources contain this key).
- combine(Object[], TupleWritable) - Method in class org.apache.hadoop.mapred.join.MultiFilterRecordReader
-
- combine(Object[], TupleWritable) - Method in class org.apache.hadoop.mapred.join.OuterJoinRecordReader
-
Emit everything from the collector.
- combine(Object[], TupleWritable) - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeRecordReader
-
- combine(Object[], TupleWritable) - Method in class org.apache.hadoop.mapreduce.lib.join.InnerJoinRecordReader
-
Return true iff the tuple is full (all data sources contain this key).
- combine(Object[], TupleWritable) - Method in class org.apache.hadoop.mapreduce.lib.join.MultiFilterRecordReader
-
- combine(Object[], TupleWritable) - Method in class org.apache.hadoop.mapreduce.lib.join.OuterJoinRecordReader
-
Emit everything from the collector.
- CombineFileInputFormat<K,V> - Class in org.apache.hadoop.mapred.lib
-
- CombineFileInputFormat() - Constructor for class org.apache.hadoop.mapred.lib.CombineFileInputFormat
-
default constructor
- CombineFileInputFormat<K,V> - Class in org.apache.hadoop.mapreduce.lib.input
-
- CombineFileInputFormat() - Constructor for class org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat
-
default constructor
- CombineFileRecordReader<K,V> - Class in org.apache.hadoop.mapred.lib
-
A generic RecordReader that can hand out different recordReaders
for each chunk in a CombineFileSplit.
- CombineFileRecordReader(JobConf, CombineFileSplit, Reporter, Class<RecordReader<K, V>>) - Constructor for class org.apache.hadoop.mapred.lib.CombineFileRecordReader
-
A generic RecordReader that can hand out different recordReaders
for each chunk in the CombineFileSplit.
- CombineFileRecordReader<K,V> - Class in org.apache.hadoop.mapreduce.lib.input
-
A generic RecordReader that can hand out different recordReaders
for each chunk in a CombineFileSplit.
- CombineFileRecordReader(CombineFileSplit, TaskAttemptContext, Class<? extends RecordReader<K, V>>) - Constructor for class org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader
-
A generic RecordReader that can hand out different recordReaders
for each chunk in the CombineFileSplit.
- CombineFileRecordReaderWrapper<K,V> - Class in org.apache.hadoop.mapred.lib
-
A wrapper class for a record reader that handles a single file split.
- CombineFileRecordReaderWrapper(FileInputFormat<K, V>, CombineFileSplit, Configuration, Reporter, Integer) - Constructor for class org.apache.hadoop.mapred.lib.CombineFileRecordReaderWrapper
-
- CombineFileRecordReaderWrapper<K,V> - Class in org.apache.hadoop.mapreduce.lib.input
-
A wrapper class for a record reader that handles a single file split.
- CombineFileRecordReaderWrapper(FileInputFormat<K, V>, CombineFileSplit, TaskAttemptContext, Integer) - Constructor for class org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReaderWrapper
-
- CombineFileSplit - Class in org.apache.hadoop.mapred.lib
-
- CombineFileSplit() - Constructor for class org.apache.hadoop.mapred.lib.CombineFileSplit
-
- CombineFileSplit(JobConf, Path[], long[], long[], String[]) - Constructor for class org.apache.hadoop.mapred.lib.CombineFileSplit
-
- CombineFileSplit(JobConf, Path[], long[]) - Constructor for class org.apache.hadoop.mapred.lib.CombineFileSplit
-
- CombineFileSplit(CombineFileSplit) - Constructor for class org.apache.hadoop.mapred.lib.CombineFileSplit
-
Copy constructor
- CombineFileSplit - Class in org.apache.hadoop.mapreduce.lib.input
-
A sub-collection of input files.
- CombineFileSplit() - Constructor for class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
-
default constructor
- CombineFileSplit(Path[], long[], long[], String[]) - Constructor for class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
-
- CombineFileSplit(Path[], long[]) - Constructor for class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
-
- CombineFileSplit(CombineFileSplit) - Constructor for class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
-
Copy constructor
- CombineSequenceFileInputFormat<K,V> - Class in org.apache.hadoop.mapred.lib
-
Input format that is a CombineFileInputFormat-equivalent for SequenceFileInputFormat.
- CombineSequenceFileInputFormat() - Constructor for class org.apache.hadoop.mapred.lib.CombineSequenceFileInputFormat
-
- CombineSequenceFileInputFormat<K,V> - Class in org.apache.hadoop.mapreduce.lib.input
-
Input format that is a CombineFileInputFormat-equivalent for SequenceFileInputFormat.
- CombineSequenceFileInputFormat() - Constructor for class org.apache.hadoop.mapreduce.lib.input.CombineSequenceFileInputFormat
-
- CombineTextInputFormat - Class in org.apache.hadoop.mapred.lib
-
Input format that is a CombineFileInputFormat-equivalent for TextInputFormat.
- CombineTextInputFormat() - Constructor for class org.apache.hadoop.mapred.lib.CombineTextInputFormat
-
- CombineTextInputFormat - Class in org.apache.hadoop.mapreduce.lib.input
-
Input format that is a CombineFileInputFormat-equivalent for TextInputFormat.
- CombineTextInputFormat() - Constructor for class org.apache.hadoop.mapreduce.lib.input.CombineTextInputFormat
-
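A minimal fragment showing how the new-API CombineTextInputFormat above is typically selected so that many small files are packed into fewer splits (the input path and the 256 MB cap are example values):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.CombineTextInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

    Job job = Job.getInstance(new Configuration(), "combine small files");
    job.setInputFormatClass(CombineTextInputFormat.class);
    FileInputFormat.addInputPath(job, new Path("/data/many-small-files"));
    // Cap each combined split at roughly 256 MB of input.
    FileInputFormat.setMaxInputSplitSize(job, 256L * 1024 * 1024);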
- COMMA_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
-
Deprecated.
- commitJob(JobContext) - Method in class org.apache.hadoop.mapred.FileOutputCommitter
-
- commitJob(JobContext) - Method in class org.apache.hadoop.mapred.OutputCommitter
-
For committing job's output after successful job completion.
- commitJob(JobContext) - Method in class org.apache.hadoop.mapred.OutputCommitter
-
This method implements the new interface by calling the old method.
- commitJob(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
-
The job has completed, so do the work in commitJobInternal().
- commitJob(JobContext) - Method in class org.apache.hadoop.mapreduce.OutputCommitter
-
For committing job's output after successful job completion.
- commitJobInternal(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
-
The job has completed, so perform the job commit, which includes moving all
committed task output to the final output directory (algorithm 1 only).
- commitTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapred.FileOutputCommitter
-
- commitTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapred.OutputCommitter
-
To promote the task's temporary output to final output location.
- commitTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapred.OutputCommitter
-
This method implements the new interface by calling the old method.
- commitTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
-
Move the files from the work directory to the job output directory
- commitTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.OutputCommitter
-
To promote the task's temporary output to final output location.
- CommonConfigurationKeysPublic - Class in org.apache.hadoop.fs
-
This class contains constants for configuration keys used
in the common code.
- CommonConfigurationKeysPublic() - Constructor for class org.apache.hadoop.fs.CommonConfigurationKeysPublic
-
- comparator() - Method in class org.apache.hadoop.io.SortedMapWritable
-
- COMPARATOR_JCLASS - Static variable in class org.apache.hadoop.io.file.tfile.TFile
-
comparator prefix: java class
- COMPARATOR_MEMCMP - Static variable in class org.apache.hadoop.io.file.tfile.TFile
-
comparator: memcmp
- COMPARATOR_OPTIONS - Static variable in class org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedComparator
-
- compare(byte[], int, int, byte[], int, int) - Method in interface org.apache.hadoop.io.RawComparator
-
Compare two objects in binary.
- compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.io.WritableComparator
-
Optimization hook.
- compare(WritableComparable, WritableComparable) - Method in class org.apache.hadoop.io.WritableComparator
-
Compare two WritableComparables.
- compare(Object, Object) - Method in class org.apache.hadoop.io.WritableComparator
-
- compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedComparator
-
- compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.record.RecordComparator
-
Deprecated.
- compare(ReservationRequest, ReservationRequest) - Method in class org.apache.hadoop.yarn.api.records.ReservationRequest.ReservationRequestComparator
-
- compare(ResourceRequest, ResourceRequest) - Method in class org.apache.hadoop.yarn.api.records.ResourceRequest.ResourceRequestComparator
-
- compareBytes(byte[], int, int, byte[], int, int) - Static method in class org.apache.hadoop.io.WritableComparator
-
Lexicographic order of binary data.
- compareBytes(byte[], int, int, byte[], int, int) - Static method in class org.apache.hadoop.record.Utils
-
Deprecated.
Lexicographic order of binary data.
- compareTo(Object) - Method in class org.apache.hadoop.fs.FileStatus
-
Compare this object to another object
- compareTo(VolumeId) - Method in class org.apache.hadoop.fs.HdfsVolumeId
-
- compareTo(Object) - Method in class org.apache.hadoop.fs.LocatedFileStatus
-
Compare this object to another object
- compareTo(Object) - Method in class org.apache.hadoop.fs.Path
-
- compareTo(VolumeId) - Method in interface org.apache.hadoop.fs.VolumeId
-
- compareTo(BinaryComparable) - Method in class org.apache.hadoop.io.BinaryComparable
-
Compare bytes from getBytes().
- compareTo(byte[], int, int) - Method in class org.apache.hadoop.io.BinaryComparable
-
Compare bytes from getBytes() to those provided.
- compareTo(BooleanWritable) - Method in class org.apache.hadoop.io.BooleanWritable
-
- compareTo(ByteWritable) - Method in class org.apache.hadoop.io.ByteWritable
-
Compares two ByteWritables.
- compareTo(DoubleWritable) - Method in class org.apache.hadoop.io.DoubleWritable
-
- compareTo(FloatWritable) - Method in class org.apache.hadoop.io.FloatWritable
-
Compares two FloatWritables.
- compareTo(IntWritable) - Method in class org.apache.hadoop.io.IntWritable
-
Compares two IntWritables.
- compareTo(LongWritable) - Method in class org.apache.hadoop.io.LongWritable
-
Compares two LongWritables.
- compareTo(MD5Hash) - Method in class org.apache.hadoop.io.MD5Hash
-
Compares this object with the specified object for order.
- compareTo(NullWritable) - Method in class org.apache.hadoop.io.NullWritable
-
- compareTo(ShortWritable) - Method in class org.apache.hadoop.io.ShortWritable
-
Compares two ShortWritable.
- compareTo(VIntWritable) - Method in class org.apache.hadoop.io.VIntWritable
-
Compares two VIntWritables.
- compareTo(VLongWritable) - Method in class org.apache.hadoop.io.VLongWritable
-
Compares two VLongWritables.
- compareTo(ComposableRecordReader<K, ?>) - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
-
Implement Comparable contract (compare key of join or head of heap
with that of another).
- compareTo(ComposableRecordReader<K, ?>) - Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
-
Implement Comparable contract (compare key at head of proxied RR
with that of another).
- compareTo(ID) - Method in class org.apache.hadoop.mapreduce.ID
-
Compare IDs by associated numbers
- compareTo(ID) - Method in class org.apache.hadoop.mapreduce.JobID
-
Compare JobIds by first jtIdentifiers, then by job numbers
- compareTo(ComposableRecordReader<K, ?>) - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeRecordReader
-
Implement Comparable contract (compare key of join or head of heap
with that of another).
- compareTo(ComposableRecordReader<K, ?>) - Method in class org.apache.hadoop.mapreduce.lib.join.WrappedRecordReader
-
Implement Comparable contract (compare key at head of proxied RR
with that of another).
- compareTo(ID) - Method in class org.apache.hadoop.mapreduce.TaskAttemptID
-
Compare TaskIds by first tipIds, then by task numbers.
- compareTo(ID) - Method in class org.apache.hadoop.mapreduce.TaskID
-
Compare TaskInProgressIds by first jobIds, then by tip numbers.
- compareTo(Object) - Method in class org.apache.hadoop.record.Buffer
-
Deprecated.
Define the sort order of the Buffer.
- compareTo(Object) - Method in class org.apache.hadoop.record.meta.RecordTypeInfo
-
Deprecated.
This class doesn't implement Comparable as it's not meant to be used
for anything besides de/serializing.
- compareTo(Object) - Method in class org.apache.hadoop.record.Record
-
Deprecated.
- compareTo(ApplicationAttemptId) - Method in class org.apache.hadoop.yarn.api.records.ApplicationAttemptId
-
- compareTo(ApplicationId) - Method in class org.apache.hadoop.yarn.api.records.ApplicationId
-
- compareTo(ContainerId) - Method in class org.apache.hadoop.yarn.api.records.ContainerId
-
- compareTo(NodeId) - Method in class org.apache.hadoop.yarn.api.records.NodeId
-
- compareTo(Priority) - Method in class org.apache.hadoop.yarn.api.records.Priority
-
- compareTo(ReservationId) - Method in class org.apache.hadoop.yarn.api.records.ReservationId
-
- compareTo(ReservationRequest) - Method in class org.apache.hadoop.yarn.api.records.ReservationRequest
-
- compareTo(ResourceRequest) - Method in class org.apache.hadoop.yarn.api.records.ResourceRequest
-
- compareTo(TimelineEntity) - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEntity
-
- compareTo(TimelineEvent) - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEvent
-
- compile(String) - Method in class org.apache.hadoop.metrics2.filter.GlobFilter
-
- compile(String) - Method in class org.apache.hadoop.metrics2.filter.RegexFilter
-
- completeLocalOutput(Path, Path) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
-
- completeLocalOutput(Path, Path) - Method in class org.apache.hadoop.fs.FileSystem
-
Called when we're all done writing to the target.
- completeLocalOutput(Path, Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
-
Called when we're all done writing to the target.
- completeLocalOutput(Path, Path) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
-
- COMPLETION_POLL_INTERVAL_KEY - Static variable in class org.apache.hadoop.mapreduce.Job
-
Key in mapred-*.xml that sets completionPollIntervalMillis
- componentListPath(String, String, String) - Static method in class org.apache.hadoop.registry.client.binding.RegistryUtils
-
Create a path for listing components under a service
- componentPath(String, String, String, String) - Static method in class org.apache.hadoop.registry.client.binding.RegistryUtils
-
Create the path to a service record for a component
- ComposableInputFormat<K extends WritableComparable,V extends Writable> - Interface in org.apache.hadoop.mapred.join
-
Refinement of InputFormat requiring implementors to provide
ComposableRecordReader instead of RecordReader.
- ComposableInputFormat<K extends WritableComparable<?>,V extends Writable> - Class in org.apache.hadoop.mapreduce.lib.join
-
Refinement of InputFormat requiring implementors to provide
ComposableRecordReader instead of RecordReader.
- ComposableInputFormat() - Constructor for class org.apache.hadoop.mapreduce.lib.join.ComposableInputFormat
-
- ComposableRecordReader<K extends WritableComparable,V extends Writable> - Interface in org.apache.hadoop.mapred.join
-
Additional operations required of a RecordReader to participate in a join.
- ComposableRecordReader<K extends WritableComparable<?>,V extends Writable> - Class in org.apache.hadoop.mapreduce.lib.join
-
Additional operations required of a RecordReader to participate in a join.
- ComposableRecordReader() - Constructor for class org.apache.hadoop.mapreduce.lib.join.ComposableRecordReader
-
- compose(Class<? extends InputFormat>, String) - Static method in class org.apache.hadoop.mapred.join.CompositeInputFormat
-
Convenience method for constructing composite formats.
- compose(String, Class<? extends InputFormat>, String...) - Static method in class org.apache.hadoop.mapred.join.CompositeInputFormat
-
Convenience method for constructing composite formats.
- compose(String, Class<? extends InputFormat>, Path...) - Static method in class org.apache.hadoop.mapred.join.CompositeInputFormat
-
Convenience method for constructing composite formats.
- compose(Class<? extends InputFormat>, String) - Static method in class org.apache.hadoop.mapreduce.lib.join.CompositeInputFormat
-
Convenience method for constructing composite formats.
- compose(String, Class<? extends InputFormat>, String...) - Static method in class org.apache.hadoop.mapreduce.lib.join.CompositeInputFormat
-
Convenience method for constructing composite formats.
- compose(String, Class<? extends InputFormat>, Path...) - Static method in class org.apache.hadoop.mapreduce.lib.join.CompositeInputFormat
-
Convenience method for constructing composite formats.
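A sketch, under the assumption that the JOIN_EXPR constant carries the join-expression configuration key in recent releases, of composing an inner join over two sorted, identically partitioned SequenceFile inputs (paths are placeholders):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
    import org.apache.hadoop.mapreduce.lib.join.CompositeInputFormat;

    Job job = Job.getInstance(new Configuration(), "join example");
    job.setInputFormatClass(CompositeInputFormat.class);
    // Build the join expression: inner join of the two sources.
    String expr = CompositeInputFormat.compose("inner", SequenceFileInputFormat.class,
        new Path("/data/left"), new Path("/data/right"));
    job.getConfiguration().set(CompositeInputFormat.JOIN_EXPR, expr);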
- CompositeContext - Class in org.apache.hadoop.metrics.spi
-
- CompositeContext() - Constructor for class org.apache.hadoop.metrics.spi.CompositeContext
-
- CompositeInputFormat<K extends WritableComparable> - Class in org.apache.hadoop.mapred.join
-
An InputFormat capable of performing joins over a set of data sources sorted
and partitioned the same way.
- CompositeInputFormat() - Constructor for class org.apache.hadoop.mapred.join.CompositeInputFormat
-
- CompositeInputFormat<K extends WritableComparable> - Class in org.apache.hadoop.mapreduce.lib.join
-
An InputFormat capable of performing joins over a set of data sources sorted
and partitioned the same way.
- CompositeInputFormat() - Constructor for class org.apache.hadoop.mapreduce.lib.join.CompositeInputFormat
-
- CompositeInputSplit - Class in org.apache.hadoop.mapred.join
-
This InputSplit contains a set of child InputSplits.
- CompositeInputSplit() - Constructor for class org.apache.hadoop.mapred.join.CompositeInputSplit
-
- CompositeInputSplit(int) - Constructor for class org.apache.hadoop.mapred.join.CompositeInputSplit
-
- CompositeInputSplit - Class in org.apache.hadoop.mapreduce.lib.join
-
This InputSplit contains a set of child InputSplits.
- CompositeInputSplit() - Constructor for class org.apache.hadoop.mapreduce.lib.join.CompositeInputSplit
-
- CompositeInputSplit(int) - Constructor for class org.apache.hadoop.mapreduce.lib.join.CompositeInputSplit
-
- CompositeRecordReader<K extends WritableComparable,V extends Writable,X extends Writable> - Class in org.apache.hadoop.mapred.join
-
A RecordReader that can effect joins of RecordReaders sharing a common key
type and partitioning.
- CompositeRecordReader(int, int, Class<? extends WritableComparator>) - Constructor for class org.apache.hadoop.mapred.join.CompositeRecordReader
-
Create a RecordReader with capacity children to position
id in the parent reader.
- CompositeRecordReader<K extends WritableComparable<?>,V extends Writable,X extends Writable> - Class in org.apache.hadoop.mapreduce.lib.join
-
A RecordReader that can effect joins of RecordReaders sharing a common key
type and partitioning.
- CompositeRecordReader(int, int, Class<? extends WritableComparator>) - Constructor for class org.apache.hadoop.mapreduce.lib.join.CompositeRecordReader
-
Create a RecordReader with capacity children to position
id in the parent reader.
- CompositeService - Class in org.apache.hadoop.service
-
Composition of services.
- CompositeService(String) - Constructor for class org.apache.hadoop.service.CompositeService
-
- compress() - Method in class org.apache.hadoop.io.compress.BlockCompressorStream
-
- compress(byte[], int, int) - Method in interface org.apache.hadoop.io.compress.Compressor
-
Fills specified buffer with compressed data.
- compress() - Method in class org.apache.hadoop.io.compress.CompressorStream
-
- COMPRESS - Static variable in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
- COMPRESS_CODEC - Static variable in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
- COMPRESS_TYPE - Static variable in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
- CompressedWritable - Class in org.apache.hadoop.io
-
A base-class for Writables which store themselves compressed and lazily
inflate on field access.
- CompressedWritable() - Constructor for class org.apache.hadoop.io.CompressedWritable
-
- COMPRESSION_GZ - Static variable in class org.apache.hadoop.io.file.tfile.TFile
-
compression: gzip
- COMPRESSION_LZO - Static variable in class org.apache.hadoop.io.file.tfile.TFile
-
compression: lzo
- COMPRESSION_NONE - Static variable in class org.apache.hadoop.io.file.tfile.TFile
-
compression: none
- CompressionCodec - Interface in org.apache.hadoop.io.compress
-
This class encapsulates a streaming compression/decompression pair.
- CompressionCodecFactory - Class in org.apache.hadoop.io.compress
-
A factory that will find the correct codec for a given filename.
- CompressionCodecFactory(Configuration) - Constructor for class org.apache.hadoop.io.compress.CompressionCodecFactory
-
Find the codecs specified in the config value io.compression.codecs
and register them.
- CompressionInputStream - Class in org.apache.hadoop.io.compress
-
A compression input stream.
- CompressionInputStream(InputStream) - Constructor for class org.apache.hadoop.io.compress.CompressionInputStream
-
Create a compression input stream that reads
the decompressed bytes from the given stream.
- CompressionOutputStream - Class in org.apache.hadoop.io.compress
-
A compression output stream.
- CompressionOutputStream(OutputStream) - Constructor for class org.apache.hadoop.io.compress.CompressionOutputStream
-
Create a compression output stream that writes
the compressed bytes to the given stream.
- Compressor - Interface in org.apache.hadoop.io.compress
-
- compressor - Variable in class org.apache.hadoop.io.compress.CompressorStream
-
- CompressorStream - Class in org.apache.hadoop.io.compress
-
- CompressorStream(OutputStream, Compressor, int) - Constructor for class org.apache.hadoop.io.compress.CompressorStream
-
- CompressorStream(OutputStream, Compressor) - Constructor for class org.apache.hadoop.io.compress.CompressorStream
-
- CompressorStream(OutputStream) - Constructor for class org.apache.hadoop.io.compress.CompressorStream
-
Allow derived classes to directly set the underlying stream.
- computeChecksum(InputStream) - Method in interface org.apache.hadoop.yarn.sharedcache.SharedCacheChecksum
-
Calculate the checksum of the passed input stream.
- computeSplitSize(long, long, long) - Method in class org.apache.hadoop.mapred.FileInputFormat
-
- computeSplitSize(long, long, long) - Method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-
- concat(Path, Path[]) - Method in class org.apache.hadoop.fs.FileSystem
-
Concat existing files together.
- concat(Path, Path[]) - Method in class org.apache.hadoop.fs.FilterFileSystem
-
- conditions - Variable in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
- conf - Variable in class org.apache.hadoop.mapred.SequenceFileRecordReader
-
- conf - Variable in class org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader
-
- conf - Variable in class org.apache.hadoop.mapreduce.lib.join.CompositeRecordReader
-
- Configurable - Interface in org.apache.hadoop.conf
-
- Configuration - Class in org.apache.hadoop.conf
-
Provides access to configuration parameters.
- Configuration() - Constructor for class org.apache.hadoop.conf.Configuration
-
A new configuration.
- Configuration(boolean) - Constructor for class org.apache.hadoop.conf.Configuration
-
A new configuration where the behavior of reading from the default
resources can be turned off.
- Configuration(Configuration) - Constructor for class org.apache.hadoop.conf.Configuration
-
A new configuration with the same settings cloned from another.
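A small fragment showing the typical Configuration lifecycle: construct it, optionally layer in an extra resource, and read typed values with defaults (the resource path and keys are illustrative only):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;

    Configuration conf = new Configuration();                // loads the default resources
    conf.addResource(new Path("/etc/myapp/myapp-site.xml")); // hypothetical extra resource
    conf.set("myapp.greeting", "hello");
    String greeting = conf.get("myapp.greeting", "hi");      // "hello"
    int retries = conf.getInt("myapp.retries", 3);           // 3 (default, key unset)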
- configure(JobConf) - Method in class org.apache.hadoop.mapred.FixedLengthInputFormat
-
- configure(JobConf) - Method in interface org.apache.hadoop.mapred.JobConfigurable
-
Initializes a new instance from a JobConf.
- configure(JobConf) - Method in class org.apache.hadoop.mapred.KeyValueTextInputFormat
-
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.aggregate.UserDefinedValueAggregatorDescriptor
-
Do nothing.
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
-
get the input file name.
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorCombiner
-
The combiner does not need any configuration.
- configure(JobConf) - Method in interface org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorDescriptor
-
Configure the object
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJobBase
-
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.BinaryPartitioner
-
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.ChainMapper
-
Configures the ChainMapper and all the Mappers in the chain.
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.ChainReducer
-
Configures the ChainReducer, the Reducer and all the Mappers in the chain.
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.db.DBInputFormat
-
Initializes a new instance from a JobConf.
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.FieldSelectionMapReduce
-
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.HashPartitioner
-
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.KeyFieldBasedComparator
-
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner
-
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.MultithreadedMapRunner
-
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.NLineInputFormat
-
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.RegexMapper
-
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.TotalOrderPartitioner
-
- configure(JobConf) - Method in class org.apache.hadoop.mapred.MapReduceBase
-
Default implementation that does nothing.
- configure(JobConf) - Method in class org.apache.hadoop.mapred.MapRunner
-
- configure(JobConf) - Method in class org.apache.hadoop.mapred.TextInputFormat
-
- configure(Configuration) - Method in class org.apache.hadoop.mapreduce.lib.aggregate.UserDefinedValueAggregatorDescriptor
-
Do nothing.
- configure(Configuration) - Method in class org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregatorBaseDescriptor
-
get the input file name.
- configure(Configuration) - Method in interface org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregatorDescriptor
-
Configure the object
- Configured - Class in org.apache.hadoop.conf
-
Base class for things that may be configured with a Configuration.
- Configured() - Constructor for class org.apache.hadoop.conf.Configured
-
Construct a Configured.
- Configured(Configuration) - Constructor for class org.apache.hadoop.conf.Configured
-
Construct a Configured.
- configureDB(JobConf, String, String, String, String) - Static method in class org.apache.hadoop.mapred.lib.db.DBConfiguration
-
Sets the DB access related fields in the JobConf.
- configureDB(JobConf, String, String) - Static method in class org.apache.hadoop.mapred.lib.db.DBConfiguration
-
Sets the DB access related fields in the JobConf.
- configureDB(Configuration, String, String, String, String) - Static method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- configureDB(Configuration, String, String) - Static method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
Sets the DB access related fields in the JobConf.
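A brief fragment showing configureDB with the new-API DBConfiguration before using DBInputFormat or DBOutputFormat (driver, URL and credentials are placeholders):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;

    Configuration conf = new Configuration();
    // Sets the driver class, connection URL, user and password in the configuration.
    DBConfiguration.configureDB(conf,
        "com.mysql.jdbc.Driver",
        "jdbc:mysql://db.example.com/mydb",
        "dbuser",
        "dbpassword");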
- confirmPrompt(String) - Static method in class org.apache.hadoop.util.ToolRunner
-
Print out a prompt to the user, and return true if the user
responds with "y" or "yes".
- connection - Variable in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
- ConnectTimeoutException - Exception in org.apache.hadoop.net
-
Thrown by NetUtils.connect(java.net.Socket, java.net.SocketAddress, int)
if it times out while connecting to the remote host.
- ConnectTimeoutException(String) - Constructor for exception org.apache.hadoop.net.ConnectTimeoutException
-
- constructOutputStream(DataOutput) - Static method in class org.apache.hadoop.io.DataOutputOutputStream
-
Construct an OutputStream from the given DataOutput.
- constructQuery(String, String[]) - Method in class org.apache.hadoop.mapreduce.lib.db.DBOutputFormat
-
Constructs the query used as the prepared statement to insert data.
- Consts - Class in org.apache.hadoop.record.compiler
-
- Container - Class in org.apache.hadoop.yarn.api.records
-
Container represents an allocated resource in the cluster.
- Container() - Constructor for class org.apache.hadoop.yarn.api.records.Container
-
- CONTAINER_ID_BITMASK - Static variable in class org.apache.hadoop.yarn.api.records.ContainerId
-
- CONTAINER_TOKEN_FILE_ENV_NAME - Static variable in interface org.apache.hadoop.yarn.api.ApplicationConstants
-
The cache file into which the container token is written.
- ContainerExitStatus - Class in org.apache.hadoop.yarn.api.records
-
Container exit statuses indicating special exit circumstances.
- ContainerExitStatus() - Constructor for class org.apache.hadoop.yarn.api.records.ContainerExitStatus
-
- ContainerId - Class in org.apache.hadoop.yarn.api.records
-
ContainerId represents a globally unique identifier for a Container in the cluster.
- ContainerId() - Constructor for class org.apache.hadoop.yarn.api.records.ContainerId
-
- ContainerLaunchContext - Class in org.apache.hadoop.yarn.api.records
-
ContainerLaunchContext represents all of the information needed by the NodeManager to launch a container.
- ContainerLaunchContext() - Constructor for class org.apache.hadoop.yarn.api.records.ContainerLaunchContext
-
- ContainerLogAppender - Class in org.apache.hadoop.yarn
-
A simple log4j-appender for container's logs.
- ContainerLogAppender() - Constructor for class org.apache.hadoop.yarn.ContainerLogAppender
-
- ContainerManagementProtocol - Interface in org.apache.hadoop.yarn.api
-
The protocol between an ApplicationMaster and a NodeManager to start/stop containers and to get status of running containers.
- ContainerManagerSecurityInfo - Class in org.apache.hadoop.yarn.security
-
- ContainerManagerSecurityInfo() - Constructor for class org.apache.hadoop.yarn.security.ContainerManagerSecurityInfo
-
- ContainerNotFoundException - Exception in org.apache.hadoop.yarn.exceptions
-
This exception is thrown by the GetContainerReportRequest API when the container doesn't exist in the AHS.
- ContainerNotFoundException(Throwable) - Constructor for exception org.apache.hadoop.yarn.exceptions.ContainerNotFoundException
-
- ContainerNotFoundException(String) - Constructor for exception org.apache.hadoop.yarn.exceptions.ContainerNotFoundException
-
- ContainerNotFoundException(String, Throwable) - Constructor for exception org.apache.hadoop.yarn.exceptions.ContainerNotFoundException
-
- ContainerReport - Class in org.apache.hadoop.yarn.api.records
-
ContainerReport is a report of a container.
- ContainerReport() - Constructor for class org.apache.hadoop.yarn.api.records.ContainerReport
-
- ContainerResourceIncreaseRequest - Class in org.apache.hadoop.yarn.api.records
-
Used by the ApplicationMaster to send a container resource increase request to
the ResourceManager.
- ContainerResourceIncreaseRequest() - Constructor for class org.apache.hadoop.yarn.api.records.ContainerResourceIncreaseRequest
-
- ContainerRollingLogAppender - Class in org.apache.hadoop.yarn
-
A simple log4j-appender for container's logs.
- ContainerRollingLogAppender() - Constructor for class org.apache.hadoop.yarn.ContainerRollingLogAppender
-
- ContainerState - Enum in org.apache.hadoop.yarn.api.records
-
State of a Container.
- ContainerStatus - Class in org.apache.hadoop.yarn.api.records
-
ContainerStatus represents the current status of a Container.
- ContainerStatus() - Constructor for class org.apache.hadoop.yarn.api.records.ContainerStatus
-
- ContainerTokenIdentifier - Class in org.apache.hadoop.yarn.security
-
TokenIdentifier for a container.
- ContainerTokenIdentifier(ContainerId, String, String, Resource, long, int, long, Priority, long) - Constructor for class org.apache.hadoop.yarn.security.ContainerTokenIdentifier
-
- ContainerTokenIdentifier(ContainerId, String, String, Resource, long, int, long, Priority, long, LogAggregationContext) - Constructor for class org.apache.hadoop.yarn.security.ContainerTokenIdentifier
-
- ContainerTokenIdentifier() - Constructor for class org.apache.hadoop.yarn.security.ContainerTokenIdentifier
-
Default constructor needed by RPC layer/SecretManager.
- ContainerTokenSelector - Class in org.apache.hadoop.yarn.security
-
- ContainerTokenSelector() - Constructor for class org.apache.hadoop.yarn.security.ContainerTokenSelector
-
- containsKey(Object) - Method in class org.apache.hadoop.io.MapWritable
-
- containsKey(Object) - Method in class org.apache.hadoop.io.SortedMapWritable
-
- containsValue(Object) - Method in class org.apache.hadoop.io.MapWritable
-
- containsValue(Object) - Method in class org.apache.hadoop.io.SortedMapWritable
-
- contentEquals(Counters.Counter) - Method in class org.apache.hadoop.mapred.Counters.Counter
-
Deprecated.
- ContentSummary - Class in org.apache.hadoop.fs
-
Store the summary of a content (a directory or a file).
- ContentSummary() - Constructor for class org.apache.hadoop.fs.ContentSummary
-
Deprecated.
- ContentSummary(long, long, long) - Constructor for class org.apache.hadoop.fs.ContentSummary
-
Deprecated.
- ContentSummary(long, long, long, long, long, long) - Constructor for class org.apache.hadoop.fs.ContentSummary
-
Deprecated.
- context - Variable in class org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader
-
- context() - Method in interface org.apache.hadoop.metrics2.MetricsRecord
-
- ControlledJob - Class in org.apache.hadoop.mapreduce.lib.jobcontrol
-
This class encapsulates a MapReduce job and its dependency.
- ControlledJob(Job, List<ControlledJob>) - Constructor for class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
Construct a job.
- ControlledJob(Configuration) - Constructor for class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
Construct a job.
- convert(Throwable) - Static method in exception org.apache.hadoop.service.ServiceStateException
-
- convert(String, Throwable) - Static method in exception org.apache.hadoop.service.ServiceStateException
-
- convertUsername(String) - Static method in class org.apache.hadoop.registry.client.binding.RegistryUtils
-
Convert the username to that which can be used for registry
entries.
- copy(FileSystem, Path, FileSystem, Path, boolean, Configuration) - Static method in class org.apache.hadoop.fs.FileUtil
-
Copy files between FileSystems.
- copy(FileSystem, Path[], FileSystem, Path, boolean, boolean, Configuration) - Static method in class org.apache.hadoop.fs.FileUtil
-
- copy(FileSystem, Path, FileSystem, Path, boolean, boolean, Configuration) - Static method in class org.apache.hadoop.fs.FileUtil
-
Copy files between FileSystems.
- copy(FileSystem, FileStatus, FileSystem, Path, boolean, boolean, Configuration) - Static method in class org.apache.hadoop.fs.FileUtil
-
Copy files between FileSystems.
- copy(File, FileSystem, Path, boolean, Configuration) - Static method in class org.apache.hadoop.fs.FileUtil
-
Copy local files to a FileSystem.
- copy(FileSystem, Path, File, boolean, Configuration) - Static method in class org.apache.hadoop.fs.FileUtil
-
Copy FileSystem files to local files.
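A minimal fragment for one of the FileUtil.copy overloads above, copying between two file systems without deleting the source (paths are placeholders):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.FileUtil;
    import org.apache.hadoop.fs.Path;

    Configuration conf = new Configuration();
    FileSystem srcFs = FileSystem.get(conf);
    FileSystem dstFs = FileSystem.get(conf);
    // Copy /data/in/part-0 to /backup/part-0; false = keep the source file.
    boolean ok = FileUtil.copy(srcFs, new Path("/data/in/part-0"),
                               dstFs, new Path("/backup/part-0"),
                               false, conf);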
- copy(Writable) - Method in class org.apache.hadoop.io.AbstractMapWritable
-
Used by child copy constructors.
- copy(byte[], int, int) - Method in class org.apache.hadoop.record.Buffer
-
Deprecated.
Copy the specified byte array to the Buffer.
- copy(Configuration, T, T) - Static method in class org.apache.hadoop.util.ReflectionUtils
-
Make a copy of the writable object using serialization to a buffer
- copyBytes() - Method in class org.apache.hadoop.io.BytesWritable
-
Get a copy of the bytes that is exactly the length of the data.
- copyBytes(InputStream, OutputStream, int, boolean) - Static method in class org.apache.hadoop.io.IOUtils
-
Copies from one stream to another.
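For illustration (the stream sources are placeholders), the buffered variant above pipes one stream into another and can close both when the copy finishes:
    import java.io.InputStream;
    import java.io.OutputStream;
    import org.apache.hadoop.io.IOUtils;

    public class CopyBytesSketch {
        static void pipe(InputStream in, OutputStream out) throws Exception {
            // 4096-byte buffer; 'true' asks IOUtils to close both streams when done.
            IOUtils.copyBytes(in, out, 4096, true);
        }
    }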
- copyBytes(InputStream, OutputStream, int) - Static method in class org.apache.hadoop.io.IOUtils
-
Copies from one stream to another.
- copyBytes(InputStream, OutputStream, Configuration) - Static method in class org.apache.hadoop.io.IOUtils
-
Copies from one stream to another.
- copyBytes(InputStream, OutputStream, Configuration, boolean) - Static method in class org.apache.hadoop.io.IOUtils
-
Copies from one stream to another.
- copyBytes(InputStream, OutputStream, long, boolean) - Static method in class org.apache.hadoop.io.IOUtils
-
Copies count bytes from one stream to another.
- copyBytes() - Method in class org.apache.hadoop.io.Text
-
Get a copy of the bytes that is exactly the length of the data.
- copyFromLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
-
- copyFromLocalFile(Path, Path) - Method in class org.apache.hadoop.fs.FileSystem
-
The src file is on the local disk.
- copyFromLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.fs.FileSystem
-
The src file is on the local disk.
- copyFromLocalFile(boolean, boolean, Path[], Path) - Method in class org.apache.hadoop.fs.FileSystem
-
The src files are on the local disk.
- copyFromLocalFile(boolean, boolean, Path, Path) - Method in class org.apache.hadoop.fs.FileSystem
-
The src file is on the local disk.
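A small usage sketch of the four-argument variant directly above (all paths are illustrative): upload a local file, overwriting any existing destination while keeping the local source.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class CopyFromLocalSketch {
        static void upload(Configuration conf) throws Exception {
            FileSystem fs = FileSystem.get(conf);
            boolean delSrc = false;     // keep the local copy
            boolean overwrite = true;   // replace the destination if it exists
            fs.copyFromLocalFile(delSrc, overwrite,
                    new Path("/tmp/report.csv"),           // local source
                    new Path("/user/hadoop/report.csv"));  // destination on this FileSystem
        }
    }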
- copyFromLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
-
The src file is on the local disk.
- copyFromLocalFile(boolean, boolean, Path[], Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
-
The src files are on the local disk.
- copyFromLocalFile(boolean, boolean, Path, Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
-
The src file is on the local disk.
- copyFromLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.fs.LocalFileSystem
-
- copyMerge(FileSystem, Path, FileSystem, Path, boolean, Configuration, String) - Static method in class org.apache.hadoop.fs.FileUtil
-
Copy all files in a directory to one output file (merge).
- copyToLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
-
The src file is under FS, and the dst is on the local disk.
- copyToLocalFile(Path, Path, boolean) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
-
The src file is under FS, and the dst is on the local disk.
- copyToLocalFile(Path, Path) - Method in class org.apache.hadoop.fs.FileSystem
-
The src file is under FS, and the dst is on the local disk.
- copyToLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.fs.FileSystem
-
The src file is under FS, and the dst is on the local disk.
- copyToLocalFile(boolean, Path, Path, boolean) - Method in class org.apache.hadoop.fs.FileSystem
-
The src file is under FS, and the dst is on the local disk.
- copyToLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
-
The src file is under FS, and the dst is on the local disk.
- copyToLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.fs.LocalFileSystem
-
- CORE_SITE_CONFIGURATION_FILE - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- countCounters() - Method in class org.apache.hadoop.mapreduce.counters.AbstractCounters
-
Returns the total number of counters, by summing the number of counters
in each group.
- Counter - Interface in org.apache.hadoop.mapreduce
-
A named counter that tracks the progress of a map/reduce job.
- counter(MetricsInfo, int) - Method in interface org.apache.hadoop.metrics2.MetricsVisitor
-
Callback for integer value counters
- counter(MetricsInfo, long) - Method in interface org.apache.hadoop.metrics2.MetricsVisitor
-
Callback for long value counters
- COUNTER_GROUP - Static variable in class org.apache.hadoop.mapred.SkipBadRecords
-
Special counters which are written by the application and are
used by the framework for detecting bad records.
- COUNTER_MAP_PROCESSED_RECORDS - Static variable in class org.apache.hadoop.mapred.SkipBadRecords
-
Number of processed map records.
- COUNTER_REDUCE_PROCESSED_GROUPS - Static variable in class org.apache.hadoop.mapred.SkipBadRecords
-
Number of processed reduce groups.
- CounterGroup - Interface in org.apache.hadoop.mapreduce
-
A group of Counters that logically belong together.
- CounterGroupBase<T extends Counter> - Interface in org.apache.hadoop.mapreduce.counters
-
The common counter group interface.
- Counters - Class in org.apache.hadoop.mapred
-
A set of named counters.
- Counters() - Constructor for class org.apache.hadoop.mapred.Counters
-
- Counters(Counters) - Constructor for class org.apache.hadoop.mapred.Counters
-
- Counters - Class in org.apache.hadoop.mapreduce
-
Counters holds per job/task counters, defined either by the Map-Reduce framework or applications.
- Counters() - Constructor for class org.apache.hadoop.mapreduce.Counters
-
Default constructor
- Counters(AbstractCounters<C, G>) - Constructor for class org.apache.hadoop.mapreduce.Counters
-
Construct the Counters object from another counters object.
- Counters.Counter - Class in org.apache.hadoop.mapred
-
A counter record, comprising its name and value.
- Counters.Counter() - Constructor for class org.apache.hadoop.mapred.Counters.Counter
-
- Counters.Group - Class in org.apache.hadoop.mapred
-
Group of counters, comprising counters from a particular counter Enum class.
- Counters.Group() - Constructor for class org.apache.hadoop.mapred.Counters.Group
-
- CountingBloomFilter - Class in org.apache.hadoop.util.bloom
-
Implements a counting Bloom filter, as defined by Fan et al.
- CountingBloomFilter() - Constructor for class org.apache.hadoop.util.bloom.CountingBloomFilter
-
Default constructor - use with readFields
- CountingBloomFilter(int, int, int) - Constructor for class org.apache.hadoop.util.bloom.CountingBloomFilter
-
Constructor
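A minimal sketch of the three-argument constructor above; the vector size, hash count, and MURMUR hash constant are illustrative tuning choices, and support for deletion is what a counting filter adds over a plain Bloom filter.
    import org.apache.hadoop.util.bloom.CountingBloomFilter;
    import org.apache.hadoop.util.bloom.Key;
    import org.apache.hadoop.util.hash.Hash;

    public class CountingBloomSketch {
        static void demo() {
            // vectorSize, nbHash, hashType -- the sizing here is purely illustrative.
            CountingBloomFilter filter = new CountingBloomFilter(1024, 5, Hash.MURMUR_HASH);
            Key key = new Key("user-42".getBytes());
            filter.add(key);
            boolean present = filter.membershipTest(key); // true (may also be a false positive)
            filter.delete(key);                           // counting filters support removal
        }
    }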
- create(Path, EnumSet<CreateFlag>, Options.CreateOpts...) - Method in class org.apache.hadoop.fs.AbstractFileSystem
-
- create(Path, FsPermission, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.azure.NativeAzureFileSystem
-
- create(Path, FsPermission, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
-
- create(Path, EnumSet<CreateFlag>, Options.CreateOpts...) - Method in class org.apache.hadoop.fs.FileContext
-
Create or overwrite a file at the indicated path and return an output stream for writing into the file.
- create(FileSystem, Path, FsPermission) - Static method in class org.apache.hadoop.fs.FileSystem
-
Create a file with the provided permission.
The permission of the file is set to be the provided permission as in
setPermission, not permission&~umask.
It is implemented using two RPCs.
- create(Path) - Method in class org.apache.hadoop.fs.FileSystem
-
Create an FSDataOutputStream at the indicated Path.
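For orientation (the path and payload are illustrative), the simplest create overload above returns an FSDataOutputStream that can be written and closed like any Java output stream:
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class CreateFileSketch {
        static void writeGreeting(Configuration conf) throws Exception {
            FileSystem fs = FileSystem.get(conf);
            try (FSDataOutputStream out = fs.create(new Path("/tmp/greeting.txt"))) {
                out.writeBytes("hello, world\n");   // FSDataOutputStream extends DataOutputStream
            }
        }
    }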
- create(Path, boolean) - Method in class org.apache.hadoop.fs.FileSystem
-
Create an FSDataOutputStream at the indicated Path.
- create(Path, Progressable) - Method in class org.apache.hadoop.fs.FileSystem
-
Create an FSDataOutputStream at the indicated Path with write-progress
reporting.
- create(Path, short) - Method in class org.apache.hadoop.fs.FileSystem
-
Create an FSDataOutputStream at the indicated Path.
- create(Path, short, Progressable) - Method in class org.apache.hadoop.fs.FileSystem
-
Create an FSDataOutputStream at the indicated Path with write-progress
reporting.
- create(Path, boolean, int) - Method in class org.apache.hadoop.fs.FileSystem
-
Create an FSDataOutputStream at the indicated Path.
- create(Path, boolean, int, Progressable) - Method in class org.apache.hadoop.fs.FileSystem
-
Create an FSDataOutputStream at the indicated Path with write-progress
reporting.
- create(Path, boolean, int, short, long) - Method in class org.apache.hadoop.fs.FileSystem
-
Create an FSDataOutputStream at the indicated Path.
- create(Path, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.FileSystem
-
Create an FSDataOutputStream at the indicated Path with write-progress
reporting.
- create(Path, FsPermission, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.FileSystem
-
Create an FSDataOutputStream at the indicated Path with write-progress
reporting.
- create(Path, FsPermission, EnumSet<CreateFlag>, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.FileSystem
-
Create an FSDataOutputStream at the indicated Path with write-progress
reporting.
- create(Path, FsPermission, EnumSet<CreateFlag>, int, short, long, Progressable, Options.ChecksumOpt) - Method in class org.apache.hadoop.fs.FileSystem
-
Create an FSDataOutputStream at the indicated Path with a custom
checksum option
- create(Path, FsPermission, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.FilterFileSystem
-
- create(Path, FsPermission, EnumSet<CreateFlag>, int, short, long, Progressable, Options.ChecksumOpt) - Method in class org.apache.hadoop.fs.FilterFileSystem
-
- create(Path, FsPermission, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.ftp.FTPFileSystem
-
A stream obtained via this call must be closed before using other APIs of
this class or else the invocation will block.
- create(Path, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
-
- create(Path, FsPermission, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
-
- create(Path, FsPermission, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.s3.S3FileSystem
-
- create(Path, FsPermission, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.s3native.NativeS3FileSystem
-
- create(Path, FsPermission, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.viewfs.ViewFileSystem
-
- CREATE - Static variable in interface org.apache.hadoop.registry.client.api.BindFlags
-
Create the entry.
- CREATE_DIR - Static variable in class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
- createAHSClient() - Static method in class org.apache.hadoop.yarn.client.api.AHSClient
-
Create a new instance of AHSClient.
- createAHSProxy(Configuration, Class<T>, InetSocketAddress) - Static method in class org.apache.hadoop.yarn.client.AHSProxy
-
- createAllSymlink(Configuration, File, File) - Static method in class org.apache.hadoop.filecache.DistributedCache
-
Deprecated.
Internal to MapReduce framework. Use DistributedCacheManager
instead.
- createAMRMClient() - Static method in class org.apache.hadoop.yarn.client.api.AMRMClient
-
Create a new instance of AMRMClient.
- createAMRMClientAsync(int, AMRMClientAsync.CallbackHandler) - Static method in class org.apache.hadoop.yarn.client.api.async.AMRMClientAsync
-
- createAMRMClientAsync(AMRMClient<T>, int, AMRMClientAsync.CallbackHandler) - Static method in class org.apache.hadoop.yarn.client.api.async.AMRMClientAsync
-
- createAndSubmitJob() - Method in class org.apache.hadoop.tools.DistCp
-
Create and submit the mapreduce job.
- createApplication() - Method in class org.apache.hadoop.yarn.client.api.YarnClient
-
- createCheckpoint() - Method in class org.apache.hadoop.fs.TrashPolicy
-
Create a trash checkpoint.
- createCompressor() - Method in class org.apache.hadoop.io.compress.BZip2Codec
-
- createCompressor() - Method in interface org.apache.hadoop.io.compress.CompressionCodec
-
- createCompressor() - Method in class org.apache.hadoop.io.compress.DefaultCodec
-
- createCompressor() - Method in class org.apache.hadoop.io.compress.GzipCodec
-
- createConnection() - Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
- createCredentialEntry(String, char[]) - Method in class org.apache.hadoop.security.alias.CredentialProvider
-
Create a new credential.
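A hedged sketch of adding a credential; the provider is obtained through CredentialProviderFactory (listed later in this index), and the jceks path and alias are illustrative.
    import java.util.List;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.security.alias.CredentialProvider;
    import org.apache.hadoop.security.alias.CredentialProviderFactory;

    public class CredentialSketch {
        static void storePassword() throws Exception {
            Configuration conf = new Configuration();
            // Illustrative provider path; a real deployment points at its own keystore.
            conf.set(CredentialProviderFactory.CREDENTIAL_PROVIDER_PATH,
                     "jceks://file/tmp/example.jceks");
            List<CredentialProvider> providers = CredentialProviderFactory.getProviders(conf);
            CredentialProvider provider = providers.get(0);
            provider.createCredentialEntry("db.password", "s3cret".toCharArray());
            provider.flush();   // persist the new entry
        }
    }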
- createDBRecordReader(DBInputFormat.DBInputSplit, Configuration) - Method in class org.apache.hadoop.mapreduce.lib.db.DataDrivenDBInputFormat
-
- createDBRecordReader(DBInputFormat.DBInputSplit, Configuration) - Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
- createDBRecordReader(DBInputFormat.DBInputSplit, Configuration) - Method in class org.apache.hadoop.mapreduce.lib.db.OracleDataDrivenDBInputFormat
-
- createDecompressor() - Method in class org.apache.hadoop.io.compress.BZip2Codec
-
- createDecompressor() - Method in interface org.apache.hadoop.io.compress.CompressionCodec
-
- createDecompressor() - Method in class org.apache.hadoop.io.compress.DefaultCodec
-
- createDecompressor() - Method in class org.apache.hadoop.io.compress.GzipCodec
-
- createDirectDecompressor() - Method in class org.apache.hadoop.io.compress.DefaultCodec
-
- createDirectDecompressor() - Method in interface org.apache.hadoop.io.compress.DirectDecompressionCodec
-
- createDirectDecompressor() - Method in class org.apache.hadoop.io.compress.GzipCodec
-
- createFileSplit(Path, long, long) - Static method in class org.apache.hadoop.mapred.lib.NLineInputFormat
-
NLineInputFormat uses LineRecordReader, which always reads
(and consumes) at least one character out of its upper split
boundary.
- createFileSplit(Path, long, long) - Static method in class org.apache.hadoop.mapreduce.lib.input.NLineInputFormat
-
NLineInputFormat uses LineRecordReader, which always reads
(and consumes) at least one character out of its upper split
boundary.
- createFileSystem(URI, Configuration) - Static method in class org.apache.hadoop.fs.AbstractFileSystem
-
Create a file system instance for the specified uri using the conf.
- CreateFlag - Enum in org.apache.hadoop.fs
-
CreateFlag specifies the file create semantic.
- createImmutable(short) - Static method in class org.apache.hadoop.fs.permission.FsPermission
-
- createInputFileListing(Job) - Method in class org.apache.hadoop.tools.DistCp
-
Create input listing by invoking an appropriate copy listing
implementation.
- createInputStream(InputStream) - Method in class org.apache.hadoop.io.compress.BZip2Codec
-
Create a CompressionInputStream that will read from the given input stream and return a stream for uncompressed data.
- createInputStream(InputStream, Decompressor) - Method in class org.apache.hadoop.io.compress.BZip2Codec
-
- createInputStream(InputStream, Decompressor, long, long, SplittableCompressionCodec.READ_MODE) - Method in class org.apache.hadoop.io.compress.BZip2Codec
-
Creates CompressionInputStream to be used to read off uncompressed data
in one of the two reading modes.
- createInputStream(InputStream) - Method in interface org.apache.hadoop.io.compress.CompressionCodec
-
- createInputStream(InputStream, Decompressor) - Method in interface org.apache.hadoop.io.compress.CompressionCodec
-
- createInputStream(InputStream) - Method in class org.apache.hadoop.io.compress.DefaultCodec
-
- createInputStream(InputStream, Decompressor) - Method in class org.apache.hadoop.io.compress.DefaultCodec
-
- createInputStream(InputStream) - Method in class org.apache.hadoop.io.compress.GzipCodec
-
- createInputStream(InputStream, Decompressor) - Method in class org.apache.hadoop.io.compress.GzipCodec
-
- createInputStream(InputStream, Decompressor, long, long, SplittableCompressionCodec.READ_MODE) - Method in interface org.apache.hadoop.io.compress.SplittableCompressionCodec
-
Create a stream as dictated by the readMode.
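As a rough sketch tying the codec stream factories above together (file names are illustrative): a codec instantiated through ReflectionUtils can wrap ordinary filesystem streams for compression while copying.
    import java.io.InputStream;
    import java.io.OutputStream;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.GzipCodec;
    import org.apache.hadoop.util.ReflectionUtils;

    public class CodecSketch {
        static void gzipFile(Configuration conf) throws Exception {
            FileSystem fs = FileSystem.get(conf);
            CompressionCodec codec = ReflectionUtils.newInstance(GzipCodec.class, conf);
            try (InputStream in = fs.open(new Path("/tmp/raw.txt"));
                 OutputStream out = codec.createOutputStream(
                         fs.create(new Path("/tmp/raw.txt.gz")))) {
                IOUtils.copyBytes(in, out, 4096);   // compress while copying
            }
        }
    }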
- createInstance(String) - Static method in class org.apache.hadoop.mapred.lib.aggregate.UserDefinedValueAggregatorDescriptor
-
Create an instance of the given class
- createInstance(String) - Static method in class org.apache.hadoop.mapreduce.lib.aggregate.UserDefinedValueAggregatorDescriptor
-
Create an instance of the given class
- createInternal(Path, EnumSet<CreateFlag>, FsPermission, int, short, long, Progressable, Options.ChecksumOpt, boolean) - Method in class org.apache.hadoop.fs.AbstractFileSystem
-
- createInternal(Path, EnumSet<CreateFlag>, FsPermission, int, short, long, Progressable, Options.ChecksumOpt, boolean) - Method in class org.apache.hadoop.fs.viewfs.ViewFs
-
- createInternalValue() - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
-
Create a value to be used internally for joins.
- createIOException(List<IOException>) - Static method in exception org.apache.hadoop.io.MultipleIOException
-
- createJarWithClassPath(String, Path, Map<String, String>) - Static method in class org.apache.hadoop.fs.FileUtil
-
- createJarWithClassPath(String, Path, Path, Map<String, String>) - Static method in class org.apache.hadoop.fs.FileUtil
-
Create a jar file at the given path, containing a manifest with a classpath
that references all specified entries.
- createJobListCache() - Method in class org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager
-
- createKey(String, byte[], KeyProvider.Options) - Method in class org.apache.hadoop.crypto.key.KeyProvider
-
Create a new key.
- createKey(String, KeyProvider.Options) - Method in class org.apache.hadoop.crypto.key.KeyProvider
-
Create a new key generating the material for it.
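A sketch of the second form above (the provider URI, key name, and sizing are illustrative assumptions); here the provider generates the key material itself.
    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.crypto.key.KeyProvider;
    import org.apache.hadoop.crypto.key.KeyProviderFactory;

    public class KeyProviderSketch {
        static void makeKey() throws Exception {
            Configuration conf = new Configuration();
            // Illustrative keystore-backed provider URI.
            KeyProvider provider =
                    KeyProviderFactory.get(new URI("jceks://file/tmp/example.jceks"), conf);
            KeyProvider.Options options = new KeyProvider.Options(conf);
            options.setCipher("AES/CTR/NoPadding");
            options.setBitLength(128);
            KeyProvider.KeyVersion version = provider.createKey("mykey", options);
            provider.flush();   // persist the key
        }
    }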
- createKey() - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
-
Create a new key value common to all child RRs.
- createKey() - Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
-
Request new key from proxied RR.
- createKey() - Method in class org.apache.hadoop.mapred.KeyValueLineRecordReader
-
- createKey() - Method in class org.apache.hadoop.mapred.lib.CombineFileRecordReader
-
- createKey() - Method in class org.apache.hadoop.mapred.lib.CombineFileRecordReaderWrapper
-
- createKey() - Method in interface org.apache.hadoop.mapred.RecordReader
-
Create an object of the appropriate type to be used as a key.
- createKey() - Method in class org.apache.hadoop.mapred.SequenceFileAsTextRecordReader
-
- createKey() - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
-
- createKey() - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeRecordReader
-
Create a new key common to all child RRs.
- createKey() - Method in class org.apache.hadoop.mapreduce.lib.join.WrappedRecordReader
-
Request new key from proxied RR.
- createLocalTempFile(File, String, boolean) - Static method in class org.apache.hadoop.fs.FileUtil
-
Create a tmp file for a base file.
- createNewFile(Path) - Method in class org.apache.hadoop.fs.FileSystem
-
Creates the given Path as a brand-new zero-length file.
- createNMClient() - Static method in class org.apache.hadoop.yarn.client.api.NMClient
-
Create a new instance of NMClient.
- createNMClient(String) - Static method in class org.apache.hadoop.yarn.client.api.NMClient
-
Create a new instance of NMClient.
- createNMClientAsync(NMClientAsync.CallbackHandler) - Static method in class org.apache.hadoop.yarn.client.api.async.NMClientAsync
-
- createNMProxy(Configuration, Class<T>, UserGroupInformation, YarnRPC, InetSocketAddress) - Static method in class org.apache.hadoop.yarn.client.NMProxy
-
- createNonRecursive(Path, FsPermission, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.azure.NativeAzureFileSystem
-
- createNonRecursive(Path, FsPermission, EnumSet<CreateFlag>, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.azure.NativeAzureFileSystem
-
- createNonRecursive(Path, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.azure.NativeAzureFileSystem
-
- createNonRecursive(Path, FsPermission, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
-
- createNonRecursive(Path, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.FileSystem
-
Deprecated.
API only for 0.20-append
- createNonRecursive(Path, FsPermission, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.FileSystem
-
Deprecated.
API only for 0.20-append
- createNonRecursive(Path, FsPermission, EnumSet<CreateFlag>, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.FileSystem
-
Deprecated.
API only for 0.20-append
- createNonRecursive(Path, FsPermission, EnumSet<CreateFlag>, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.FilterFileSystem
-
Deprecated.
- createNonRecursive(Path, FsPermission, EnumSet<CreateFlag>, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
-
Deprecated.
- createNonRecursive(Path, FsPermission, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
-
- createNonRecursive(Path, FsPermission, EnumSet<CreateFlag>, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.viewfs.ViewFileSystem
-
- createOutputStream(Path, boolean) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
-
- createOutputStream(OutputStream) - Method in class org.apache.hadoop.io.compress.BZip2Codec
-
- createOutputStream(OutputStream, Compressor) - Method in class org.apache.hadoop.io.compress.BZip2Codec
-
- createOutputStream(OutputStream) - Method in interface org.apache.hadoop.io.compress.CompressionCodec
-
- createOutputStream(OutputStream, Compressor) - Method in interface org.apache.hadoop.io.compress.CompressionCodec
-
- createOutputStream(OutputStream) - Method in class org.apache.hadoop.io.compress.DefaultCodec
-
- createOutputStream(OutputStream, Compressor) - Method in class org.apache.hadoop.io.compress.DefaultCodec
-
- createOutputStream(OutputStream) - Method in class org.apache.hadoop.io.compress.GzipCodec
-
- createOutputStream(OutputStream, Compressor) - Method in class org.apache.hadoop.io.compress.GzipCodec
-
- createOutputStreamWithMode(Path, boolean, FsPermission) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
-
- createPool(JobConf, List<PathFilter>) - Method in class org.apache.hadoop.mapred.lib.CombineFileInputFormat
-
- createPool(JobConf, PathFilter...) - Method in class org.apache.hadoop.mapred.lib.CombineFileInputFormat
-
- createPool(List<PathFilter>) - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat
-
Create a new pool and add the filters to it.
- createPool(PathFilter...) - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat
-
Create a new pool and add the filters to it.
- createProvider(URI, Configuration) - Method in class org.apache.hadoop.crypto.key.KeyProviderFactory
-
- createProvider(URI, Configuration) - Method in class org.apache.hadoop.security.alias.CredentialProviderFactory
-
- createRecord(String) - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
-
Creates a new AbstractMetricsRecord instance with the given recordName.
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapred.lib.CombineFileInputFormat
-
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.InputFormat
-
Create a record reader for a given split.
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
Create a record reader for a given split.
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat
-
This is not implemented yet.
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.CombineSequenceFileInputFormat
-
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.CombineTextInputFormat
-
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.FixedLengthInputFormat
-
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat
-
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.NLineInputFormat
-
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileAsBinaryInputFormat
-
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileAsTextInputFormat
-
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFilter
-
Create a record reader for the given split
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat
-
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.TextInputFormat
-
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.join.ComposableInputFormat
-
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeInputFormat
-
Construct a CompositeRecordReader for the children of this InputFormat
as defined in the init expression.
- createRetriableProxy(Configuration, Class<T>, UserGroupInformation, YarnRPC, InetSocketAddress, RetryPolicy) - Static method in class org.apache.hadoop.yarn.client.ServerProxy
-
- createRetryPolicy(Configuration, String, long, String, long) - Static method in class org.apache.hadoop.yarn.client.ServerProxy
-
- createRMProxy(Configuration, Class<T>) - Static method in class org.apache.hadoop.yarn.client.ClientRMProxy
-
Create a proxy to the ResourceManager for the specified protocol.
- createRMProxy(Configuration, Class<T>, InetSocketAddress) - Static method in class org.apache.hadoop.yarn.client.RMProxy
-
Deprecated.
This method is deprecated and is not used by YARN internally any more.
To create a proxy to the RM, use ClientRMProxy#createRMProxy or
ServerRMProxy#createRMProxy.
Create a proxy to the ResourceManager at the specified address.
- createSharedCacheClient() - Static method in class org.apache.hadoop.yarn.client.api.SharedCacheClient
-
- createSnapshot(Path) - Method in class org.apache.hadoop.fs.FileSystem
-
Create a snapshot with a default name.
- createSnapshot(Path, String) - Method in class org.apache.hadoop.fs.FileSystem
-
Create a snapshot
- createSnapshot(Path, String) - Method in class org.apache.hadoop.fs.FilterFileSystem
-
- createSocket() - Method in class org.apache.hadoop.net.SocksSocketFactory
-
- createSocket(InetAddress, int) - Method in class org.apache.hadoop.net.SocksSocketFactory
-
- createSocket(InetAddress, int, InetAddress, int) - Method in class org.apache.hadoop.net.SocksSocketFactory
-
- createSocket(String, int) - Method in class org.apache.hadoop.net.SocksSocketFactory
-
- createSocket(String, int, InetAddress, int) - Method in class org.apache.hadoop.net.SocksSocketFactory
-
- createSocket() - Method in class org.apache.hadoop.net.StandardSocketFactory
-
- createSocket(InetAddress, int) - Method in class org.apache.hadoop.net.StandardSocketFactory
-
- createSocket(InetAddress, int, InetAddress, int) - Method in class org.apache.hadoop.net.StandardSocketFactory
-
- createSocket(String, int) - Method in class org.apache.hadoop.net.StandardSocketFactory
-
- createSocket(String, int, InetAddress, int) - Method in class org.apache.hadoop.net.StandardSocketFactory
-
- createSymlink(Path, Path, boolean) - Method in class org.apache.hadoop.fs.AbstractFileSystem
-
- createSymlink(Path, Path, boolean) - Method in class org.apache.hadoop.fs.FileContext
-
Creates a symbolic link to an existing file.
- createSymlink(Path, Path, boolean) - Method in class org.apache.hadoop.fs.FileSystem
-
- createSymlink(Path, Path, boolean) - Method in class org.apache.hadoop.fs.FilterFileSystem
-
- createSymlink(Path, Path, boolean) - Method in class org.apache.hadoop.fs.LocalFileSystem
-
- createSymlink(Path, Path, boolean) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
-
- createSymlink(Path, Path, boolean) - Method in class org.apache.hadoop.fs.viewfs.ViewFs
-
- createSymlink() - Method in class org.apache.hadoop.mapreduce.Job
-
Deprecated.
- createTimelineClient() - Static method in class org.apache.hadoop.yarn.client.api.TimelineClient
-
Create a timeline client.
- createTupleWritable() - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeRecordReader
-
Create a value to be used internally for joins.
- createValue() - Method in class org.apache.hadoop.mapred.join.JoinRecordReader
-
Create an object of the appropriate type to be used as a value.
- createValue() - Method in class org.apache.hadoop.mapred.join.MultiFilterRecordReader
-
Create an object of the appropriate type to be used as a value.
- createValue() - Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
-
Request new value from proxied RR.
- createValue() - Method in class org.apache.hadoop.mapred.KeyValueLineRecordReader
-
- createValue() - Method in class org.apache.hadoop.mapred.lib.CombineFileRecordReader
-
- createValue() - Method in class org.apache.hadoop.mapred.lib.CombineFileRecordReaderWrapper
-
- createValue() - Method in interface org.apache.hadoop.mapred.RecordReader
-
Create an object of the appropriate type to be used as a value.
- createValue() - Method in class org.apache.hadoop.mapred.SequenceFileAsTextRecordReader
-
- createValue() - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
-
- createValue() - Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
Deprecated.
- createValue() - Method in class org.apache.hadoop.mapreduce.lib.join.JoinRecordReader
-
- createValue() - Method in class org.apache.hadoop.mapreduce.lib.join.OverrideRecordReader
-
- createValue() - Method in class org.apache.hadoop.mapreduce.lib.join.WrappedRecordReader
-
- createValueAggregatorJob(String[], Class<?>) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJob
-
Create an Aggregate based map/reduce job.
- createValueAggregatorJob(String[]) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJob
-
Create an Aggregate based map/reduce job.
- createValueAggregatorJob(String[], Class<? extends ValueAggregatorDescriptor>[]) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJob
-
- createValueAggregatorJob(String[], Class<? extends ValueAggregatorDescriptor>[], Class<?>) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJob
-
- createValueAggregatorJob(Configuration, String[]) - Static method in class org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregatorJob
-
Create an Aggregate based map/reduce job.
- createValueAggregatorJob(String[], Class<? extends ValueAggregatorDescriptor>[]) - Static method in class org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregatorJob
-
- createValueAggregatorJobs(String[], Class<? extends ValueAggregatorDescriptor>[]) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJob
-
- createValueAggregatorJobs(String[]) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJob
-
- createValueAggregatorJobs(String[], Class<? extends ValueAggregatorDescriptor>[]) - Static method in class org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregatorJob
-
- createValueAggregatorJobs(String[]) - Static method in class org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregatorJob
-
- createWriter(Configuration, SequenceFile.Writer.Option...) - Static method in class org.apache.hadoop.io.SequenceFile
-
Create a new Writer with the given options.
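A hedged sketch of the option-based form above (the path and key/value classes are illustrative): options select the output file and the key and value types, and the writer appends records until closed.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;

    public class SequenceFileSketch {
        static void write(Configuration conf) throws Exception {
            Path file = new Path("/tmp/example.seq");   // illustrative path
            try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
                    SequenceFile.Writer.file(file),
                    SequenceFile.Writer.keyClass(IntWritable.class),
                    SequenceFile.Writer.valueClass(Text.class))) {
                writer.append(new IntWritable(1), new Text("first record"));
            }
        }
    }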
- createWriter(FileSystem, Configuration, Path, Class, Class) - Static method in class org.apache.hadoop.io.SequenceFile
-
- createWriter(FileSystem, Configuration, Path, Class, Class, SequenceFile.CompressionType) - Static method in class org.apache.hadoop.io.SequenceFile
-
- createWriter(FileSystem, Configuration, Path, Class, Class, SequenceFile.CompressionType, Progressable) - Static method in class org.apache.hadoop.io.SequenceFile
-
- createWriter(FileSystem, Configuration, Path, Class, Class, SequenceFile.CompressionType, CompressionCodec) - Static method in class org.apache.hadoop.io.SequenceFile
-
- createWriter(FileSystem, Configuration, Path, Class, Class, SequenceFile.CompressionType, CompressionCodec, Progressable, SequenceFile.Metadata) - Static method in class org.apache.hadoop.io.SequenceFile
-
- createWriter(FileSystem, Configuration, Path, Class, Class, int, short, long, SequenceFile.CompressionType, CompressionCodec, Progressable, SequenceFile.Metadata) - Static method in class org.apache.hadoop.io.SequenceFile
-
- createWriter(FileSystem, Configuration, Path, Class, Class, int, short, long, boolean, SequenceFile.CompressionType, CompressionCodec, SequenceFile.Metadata) - Static method in class org.apache.hadoop.io.SequenceFile
-
Deprecated.
- createWriter(FileContext, Configuration, Path, Class, Class, SequenceFile.CompressionType, CompressionCodec, SequenceFile.Metadata, EnumSet<CreateFlag>, Options.CreateOpts...) - Static method in class org.apache.hadoop.io.SequenceFile
-
Construct the preferred type of SequenceFile Writer.
- createWriter(FileSystem, Configuration, Path, Class, Class, SequenceFile.CompressionType, CompressionCodec, Progressable) - Static method in class org.apache.hadoop.io.SequenceFile
-
- createWriter(Configuration, FSDataOutputStream, Class, Class, SequenceFile.CompressionType, CompressionCodec, SequenceFile.Metadata) - Static method in class org.apache.hadoop.io.SequenceFile
-
- createWriter(Configuration, FSDataOutputStream, Class, Class, SequenceFile.CompressionType, CompressionCodec) - Static method in class org.apache.hadoop.io.SequenceFile
-
- createYarnClient() - Static method in class org.apache.hadoop.yarn.client.api.YarnClient
-
Create a new instance of YarnClient.
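For orientation, a minimal sketch of the factory method above; YarnClient is a YARN service, so it is initialized and started before use and stopped afterwards.
    import java.util.List;
    import org.apache.hadoop.yarn.api.records.ApplicationReport;
    import org.apache.hadoop.yarn.client.api.YarnClient;
    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    public class YarnClientSketch {
        static void listApplications() throws Exception {
            YarnClient client = YarnClient.createYarnClient();
            client.init(new YarnConfiguration());
            client.start();
            try {
                List<ApplicationReport> apps = client.getApplications();
                System.out.println("applications reported: " + apps.size());
            } finally {
                client.stop();
            }
        }
    }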
- createYarnClient() - Method in class org.apache.hadoop.yarn.client.cli.LogsCLI
-
- CREDENTIAL_PROVIDER_PATH - Static variable in class org.apache.hadoop.security.alias.CredentialProviderFactory
-
- CredentialProvider - Class in org.apache.hadoop.security.alias
-
A provider of credentials or password for Hadoop applications.
- CredentialProvider() - Constructor for class org.apache.hadoop.security.alias.CredentialProvider
-
- CredentialProviderFactory - Class in org.apache.hadoop.security.alias
-
A factory to create a list of CredentialProvider based on the path given in a
Configuration.
- CredentialProviderFactory() - Constructor for class org.apache.hadoop.security.alias.CredentialProviderFactory
-
- CS_CONFIGURATION_FILE - Static variable in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
- CSTRING_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
-
Deprecated.
- CsvRecordInput - Class in org.apache.hadoop.record
-
- CsvRecordInput(InputStream) - Constructor for class org.apache.hadoop.record.CsvRecordInput
-
Deprecated.
Creates a new instance of CsvRecordInput
- CsvRecordOutput - Class in org.apache.hadoop.record
-
- CsvRecordOutput(OutputStream) - Constructor for class org.apache.hadoop.record.CsvRecordOutput
-
Deprecated.
Creates a new instance of CsvRecordOutput
- CUR_DIR - Static variable in class org.apache.hadoop.fs.Path
-
- curChar - Variable in class org.apache.hadoop.record.compiler.generated.RccTokenManager
-
Deprecated.
- curReader - Variable in class org.apache.hadoop.mapred.lib.CombineFileRecordReader
-
- curReader - Variable in class org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader
-
- currentConfig() - Method in interface org.apache.hadoop.metrics2.MetricsSystemMXBean
-
- currentDownloadBytesPerSecond(long) - Method in class org.apache.hadoop.fs.azure.metrics.AzureFileSystemInstrumentation
-
Record the current bytes-per-second download rate seen.
- currentToken - Variable in exception org.apache.hadoop.record.compiler.generated.ParseException
-
Deprecated.
This is the last token that has been consumed successfully.
- currentUploadBytesPerSecond(long) - Method in class org.apache.hadoop.fs.azure.metrics.AzureFileSystemInstrumentation
-
Record the current bytes-per-second upload rate seen.
- currentUser() - Static method in class org.apache.hadoop.registry.client.binding.RegistryUtils
-
Get the current user path formatted for the registry
- GangliaContext - Class in org.apache.hadoop.metrics.ganglia
-
Context for sending metrics to Ganglia.
- GangliaContext() - Constructor for class org.apache.hadoop.metrics.ganglia.GangliaContext
-
Creates a new instance of GangliaContext
- gauge(MetricsInfo, int) - Method in interface org.apache.hadoop.metrics2.MetricsVisitor
-
Callback for integer value gauges
- gauge(MetricsInfo, long) - Method in interface org.apache.hadoop.metrics2.MetricsVisitor
-
Callback for long value gauges
- gauge(MetricsInfo, float) - Method in interface org.apache.hadoop.metrics2.MetricsVisitor
-
Callback for float value gauges
- gauge(MetricsInfo, double) - Method in interface org.apache.hadoop.metrics2.MetricsVisitor
-
Callback for double value gauges
- genCode(String, String, ArrayList<String>) - Method in class org.apache.hadoop.record.compiler.JFile
-
Deprecated.
Generate record code in the given language.
- generateActualKey(K, V) - Method in class org.apache.hadoop.mapred.lib.MultipleOutputFormat
-
Generate the actual key from the given key/value.
- generateActualValue(K, V) - Method in class org.apache.hadoop.mapred.lib.MultipleOutputFormat
-
Generate the actual value from the given key and value.
- generateEntry(String, String, Text) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
-
- generateEntry(String, String, Text) - Static method in class org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregatorBaseDescriptor
-
- generateFileNameForKeyValue(K, V, String) - Method in class org.apache.hadoop.mapred.lib.MultipleOutputFormat
-
Generate the output file name based on the given key and the leaf file name.
- generateKey(int, String) - Method in class org.apache.hadoop.crypto.key.KeyProvider
-
Generates key material.
- generateKeyValPairs(Object, Object) - Method in class org.apache.hadoop.mapreduce.lib.aggregate.UserDefinedValueAggregatorDescriptor
-
Generate a list of aggregation-id/value pairs for the given
key/value pairs by delegating the invocation to the real object.
- generateKeyValPairs(Object, Object) - Method in class org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregatorBaseDescriptor
-
Generate 1 or 2 aggregation-id/value pairs for the given key/value pair.
- generateKeyValPairs(Object, Object) - Method in interface org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregatorDescriptor
-
Generate a list of aggregation-id/value pairs for
the given key/value pair.
- generateLeafFileName(String) - Method in class org.apache.hadoop.mapred.lib.MultipleOutputFormat
-
Generate the leaf name for the output file name.
- generateParseException() - Method in class org.apache.hadoop.record.compiler.generated.Rcc
-
Deprecated.
- generateStateGraph(String) - Method in class org.apache.hadoop.yarn.state.StateMachineFactory
-
Generate a graph that represents the state graph of this StateMachine.
- generateValueAggregator(String) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
-
- generateValueAggregator(String, long) - Static method in class org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregatorBaseDescriptor
-
- GenericWritable - Class in org.apache.hadoop.io
-
A wrapper for Writable instances.
- GenericWritable() - Constructor for class org.apache.hadoop.io.GenericWritable
-
- get(String) - Method in class org.apache.hadoop.conf.Configuration
-
Get the value of the name property, or null if no such property exists.
- get(String, String) - Method in class org.apache.hadoop.conf.Configuration
-
Get the value of the name property; if no such property exists, the given default value is returned.
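A small sketch of the two lookups above (property names and values are illustrative): the one-argument form returns null for an unset property, while the two-argument form falls back to a default.
    import org.apache.hadoop.conf.Configuration;

    public class ConfigurationGetSketch {
        static void demo() {
            Configuration conf = new Configuration();
            conf.set("example.color", "blue");                  // illustrative property
            String color = conf.get("example.color");           // "blue"
            String missing = conf.get("example.size");          // null, property not set
            String size = conf.get("example.size", "medium");   // falls back to the default
        }
    }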
- get(URI, Configuration) - Static method in class org.apache.hadoop.crypto.key.KeyProviderFactory
-
Create a KeyProvider based on a provided URI.
- get(URI, Configuration) - Static method in class org.apache.hadoop.fs.AbstractFileSystem
-
The main factory method for creating a file system.
- get(URI, Configuration, String) - Static method in class org.apache.hadoop.fs.FileSystem
-
Get a filesystem instance based on the uri, the passed
configuration and the user
- get(Configuration) - Static method in class org.apache.hadoop.fs.FileSystem
-
Returns the configured filesystem implementation.
- get(URI, Configuration) - Static method in class org.apache.hadoop.fs.FileSystem
-
Returns the FileSystem for this URI's scheme and authority.
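A brief sketch contrasting the two most common forms above (the URI is illustrative): one resolves the default filesystem from fs.defaultFS, the other targets an explicit scheme and authority.
    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class FileSystemGetSketch {
        static void demo() throws Exception {
            Configuration conf = new Configuration();
            FileSystem defaultFs = FileSystem.get(conf);                    // from fs.defaultFS
            FileSystem explicitFs = FileSystem.get(
                    new URI("hdfs://namenode.example.com:8020"), conf);     // illustrative URI
            System.out.println(defaultFs.getUri() + " vs " + explicitFs.getUri());
        }
    }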
- get() - Method in class org.apache.hadoop.io.ArrayPrimitiveWritable
-
Get the original array.
- get() - Method in class org.apache.hadoop.io.ArrayWritable
-
- get() - Method in class org.apache.hadoop.io.BooleanWritable
-
Returns the value of the BooleanWritable
- get() - Method in class org.apache.hadoop.io.BytesWritable
-
- get() - Method in class org.apache.hadoop.io.ByteWritable
-
Return the value of this ByteWritable.
- get() - Method in class org.apache.hadoop.io.DoubleWritable
-
- get() - Method in class org.apache.hadoop.io.EnumSetWritable
-
Return the value of this EnumSetWritable.
- get() - Method in class org.apache.hadoop.io.FloatWritable
-
Return the value of this FloatWritable.
- get() - Method in class org.apache.hadoop.io.GenericWritable
-
Return the wrapped instance.
- get() - Method in class org.apache.hadoop.io.IntWritable
-
Return the value of this IntWritable.
- get() - Method in class org.apache.hadoop.io.LongWritable
-
Return the value of this LongWritable.
- get(Object) - Method in class org.apache.hadoop.io.MapWritable
-
- get() - Static method in class org.apache.hadoop.io.NullWritable
-
Returns the single instance of this class.
- get() - Method in class org.apache.hadoop.io.ObjectWritable
-
Return the instance, or null if none.
- get() - Method in class org.apache.hadoop.io.ShortWritable
-
Return the value of this ShortWritable.
- get(Object) - Method in class org.apache.hadoop.io.SortedMapWritable
-
- get() - Method in class org.apache.hadoop.io.TwoDArrayWritable
-
- get() - Method in class org.apache.hadoop.io.VIntWritable
-
Return the value of this VIntWritable.
- get() - Method in class org.apache.hadoop.io.VLongWritable
-
Return the value of this VLongWritable.
- get(Class<? extends WritableComparable>) - Static method in class org.apache.hadoop.io.WritableComparator
-
For backwards compatibility.
- get(Class<? extends WritableComparable>, Configuration) - Static method in class org.apache.hadoop.io.WritableComparator
-
- get(int) - Method in class org.apache.hadoop.mapred.join.CompositeInputSplit
-
Get ith child InputSplit.
- get(int) - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeInputSplit
-
Get ith child InputSplit.
- get(int) - Method in class org.apache.hadoop.mapreduce.lib.join.TupleWritable
-
Get ith Writable from Tuple.
- get(String) - Method in class org.apache.hadoop.metrics2.lib.MetricsRegistry
-
Get a metric by name
- get(String, Collection<MetricsTag>) - Method in class org.apache.hadoop.metrics2.util.MetricsCache
-
Get the cached record
- get(DataInput) - Static method in class org.apache.hadoop.record.BinaryRecordInput
-
Deprecated.
Get a thread-local record input for the supplied DataInput.
- get(DataOutput) - Static method in class org.apache.hadoop.record.BinaryRecordOutput
-
Deprecated.
Get a thread-local record output for the supplied DataOutput.
- get() - Method in class org.apache.hadoop.record.Buffer
-
Deprecated.
Get the data from the Buffer.
- get(String) - Method in class org.apache.hadoop.registry.client.types.ServiceRecord
-
Get the "other" attribute with a specific key
- get(String, String) - Method in class org.apache.hadoop.registry.client.types.ServiceRecord
-
Get the "other" attribute with a specific key.
- getAccessibleNodeLabels() - Method in class org.apache.hadoop.yarn.api.records.QueueInfo
-
Get the accessible node labels
of the queue.
- getAccessTime() - Method in class org.apache.hadoop.fs.FileStatus
-
Get the access time of the file.
- getAclBit() - Method in class org.apache.hadoop.fs.permission.FsPermission
-
Returns true if there is also an ACL (access control list).
- getAclStatus(Path) - Method in class org.apache.hadoop.fs.AbstractFileSystem
-
Gets the ACLs of files and directories.
- getAclStatus(Path) - Method in class org.apache.hadoop.fs.FileContext
-
Gets the ACLs of files and directories.
- getAclStatus(Path) - Method in class org.apache.hadoop.fs.FileSystem
-
Gets the ACL of a file or directory.
- getAclStatus(Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
-
- getAclStatus(Path) - Method in class org.apache.hadoop.fs.viewfs.ViewFileSystem
-
- getAclStatus(Path) - Method in class org.apache.hadoop.fs.viewfs.ViewFs
-
- getAclString() - Method in class org.apache.hadoop.security.authorize.AccessControlList
-
Returns the access control list as a String that can be used for building a new instance by sending it to the constructor of AccessControlList.
- getActiveTaskTrackers() - Method in class org.apache.hadoop.mapreduce.Cluster
-
Get all active trackers in the cluster.
- getActiveTrackerNames() - Method in class org.apache.hadoop.mapred.ClusterStatus
-
Get the names of task trackers in the cluster.
- getAddress() - Method in class org.apache.hadoop.ha.HAServiceTarget
-
- getAddressField(Map<String, String>, String) - Static method in class org.apache.hadoop.registry.client.binding.RegistryTypeUtils
-
Get a specific field from an address, raising an exception if the field is not present.
- getAdjustedEnd() - Method in class org.apache.hadoop.io.compress.SplitCompressionInputStream
-
After calling createInputStream, the values of start or end
might change.
- getAdjustedStart() - Method in class org.apache.hadoop.io.compress.SplitCompressionInputStream
-
After calling createInputStream, the values of start or end
might change.
- getAggregatorDescriptors(Configuration) - Static method in class org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregatorJobBase
-
- getAlgorithmName() - Method in class org.apache.hadoop.fs.FileChecksum
-
The checksum algorithm name
- getAliases() - Method in class org.apache.hadoop.security.alias.CredentialProvider
-
Get the aliases for all credentials.
- getAllEvents() - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEvents
-
- getAllFileInfo() - Method in class org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager
-
- getAllJobs() - Method in class org.apache.hadoop.mapred.JobClient
-
Get the jobs that are submitted.
- getAllJobs() - Method in class org.apache.hadoop.mapreduce.Cluster
-
- getAllJobStatuses() - Method in class org.apache.hadoop.mapreduce.Cluster
-
Get job status for all jobs in the cluster.
- getAllocatedContainers() - Method in class org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse
-
Get the list of newly allocated Containers by the ResourceManager.
- getAllocatedResource() - Method in class org.apache.hadoop.yarn.api.records.ContainerReport
-
Get the allocated Resource
of the container.
- getAllPartialJobs() - Method in interface org.apache.hadoop.mapreduce.v2.hs.HistoryStorage
-
Get all of the cached jobs.
- getAllQueues() - Method in class org.apache.hadoop.yarn.client.api.YarnClient
-
Get information (QueueInfo) about all queues, recursively if there is a hierarchy.
- getAllRecords() - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
-
Retrieves all the records managed by this MetricsContext.
- getAllServicesMetaData() - Method in class org.apache.hadoop.yarn.api.protocolrecords.StartContainersResponse
-
Get the meta-data from all auxiliary services running on the NodeManager.
- getAllStatistics() - Static method in class org.apache.hadoop.fs.AbstractFileSystem
-
- getAllStatistics() - Static method in class org.apache.hadoop.fs.FileContext
-
- getAllStatistics() - Static method in class org.apache.hadoop.fs.FileSystem
-
Return the FileSystem classes that have Statistics
- getAllTaskTypes() - Static method in class org.apache.hadoop.mapreduce.TaskID
-
- getAMCommand() - Method in class org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse
-
If the ResourceManager needs the ApplicationMaster to take some action then it will send an AMCommand to the ApplicationMaster.
- getAMContainerId() - Method in class org.apache.hadoop.yarn.api.records.ApplicationAttemptReport
-
Get the ContainerId
of AMContainer for this attempt
- getAMContainerResourceRequest() - Method in class org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext
-
Get ResourceRequest of AM container, if this is not null, scheduler will
use this to acquire resource for AM container.
- getAMContainerSpec() - Method in class org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext
-
Get the ContainerLaunchContext to describe the Container with which the ApplicationMaster is launched.
- getAMRMToken() - Method in class org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse
-
The AMRMToken that belongs to this attempt.
- getAMRMToken() - Method in class org.apache.hadoop.yarn.api.records.ApplicationReport
-
Get the AMRM token of the application.
- getAMRMToken(ApplicationId) - Method in class org.apache.hadoop.yarn.client.api.YarnClient
-
Get the AMRM token of the application.
- getAMRMTokenService(Configuration) - Static method in class org.apache.hadoop.yarn.client.ClientRMProxy
-
- getAppId() - Method in class org.apache.hadoop.yarn.api.protocolrecords.ReleaseSharedCacheResourceRequest
-
Get the ApplicationId
of the resource to be released.
- getAppId() - Method in class org.apache.hadoop.yarn.api.protocolrecords.UseSharedCacheResourceRequest
-
Get the ApplicationId
of the resource to be used.
- getApplicationACLs() - Method in class org.apache.hadoop.yarn.api.protocolrecords.RegisterApplicationMasterResponse
-
Get the ApplicationACLs for the application.
- getApplicationACLs() - Method in class org.apache.hadoop.yarn.api.records.ContainerLaunchContext
-
Get the ApplicationACLs for the application.
- getApplicationAcls() - Method in class org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat.LogReader
-
Returns ACLs for the application.
- getApplicationAttemptId() - Method in class org.apache.hadoop.yarn.api.protocolrecords.GetApplicationAttemptReportRequest
-
Get the ApplicationAttemptId
of an application attempt.
- getApplicationAttemptId() - Method in class org.apache.hadoop.yarn.api.protocolrecords.GetContainersRequest
-
Get the ApplicationAttemptId
of an application attempt.
- getApplicationAttemptId() - Method in class org.apache.hadoop.yarn.api.records.ApplicationAttemptReport
-
Get the ApplicationAttemptId of this attempt of the application.
- getApplicationAttemptId() - Method in class org.apache.hadoop.yarn.api.records.ContainerId
-
Get the ApplicationAttemptId of the application to which the Container was assigned.
- getApplicationAttemptID() - Method in class org.apache.hadoop.yarn.security.client.ClientToAMTokenIdentifier
-
- getApplicationAttemptId() - Method in class org.apache.hadoop.yarn.security.NMTokenIdentifier
-
- getApplicationAttemptList() - Method in class org.apache.hadoop.yarn.api.protocolrecords.GetApplicationAttemptsResponse
-
Get a list of ApplicationReport
of an application.
- getApplicationAttemptReport() - Method in class org.apache.hadoop.yarn.api.protocolrecords.GetApplicationAttemptReportResponse
-
Get the ApplicationAttemptReport
for the application attempt.
- getApplicationAttemptReport(ApplicationAttemptId) - Method in class org.apache.hadoop.yarn.client.api.AHSClient
-
Get a report of the given ApplicationAttempt.
- getApplicationAttemptReport(ApplicationAttemptId) - Method in class org.apache.hadoop.yarn.client.api.YarnClient
-
Get a report of the given ApplicationAttempt.
- GetApplicationAttemptReportRequest - Class in org.apache.hadoop.yarn.api.protocolrecords
-
- GetApplicationAttemptReportRequest() - Constructor for class org.apache.hadoop.yarn.api.protocolrecords.GetApplicationAttemptReportRequest
-
- GetApplicationAttemptReportResponse - Class in org.apache.hadoop.yarn.api.protocolrecords
-
The response sent by the ResourceManager to a client requesting an application attempt report.
- GetApplicationAttemptReportResponse() - Constructor for class org.apache.hadoop.yarn.api.protocolrecords.GetApplicationAttemptReportResponse
-
- getApplicationAttempts(ApplicationId) - Method in class org.apache.hadoop.yarn.client.api.AHSClient
-
Get a report of all ApplicationAttempts of the given Application in the cluster.
- getApplicationAttempts(ApplicationId) - Method in class org.apache.hadoop.yarn.client.api.YarnClient
-
Get a report of all ApplicationAttempts of the given Application in the cluster.
- GetApplicationAttemptsRequest - Class in org.apache.hadoop.yarn.api.protocolrecords
-
The request from clients to get a list of application attempt reports of an application from the ResourceManager.
- GetApplicationAttemptsRequest() - Constructor for class org.apache.hadoop.yarn.api.protocolrecords.GetApplicationAttemptsRequest
-
- GetApplicationAttemptsResponse - Class in org.apache.hadoop.yarn.api.protocolrecords
-
The response sent by the ResourceManager to a client requesting a list of ApplicationAttemptReport for application attempts.
- GetApplicationAttemptsResponse() - Constructor for class org.apache.hadoop.yarn.api.protocolrecords.GetApplicationAttemptsResponse
-
- getApplicationId() - Method in class org.apache.hadoop.yarn.api.protocolrecords.GetApplicationAttemptsRequest
-
Get the ApplicationId
of an application
- getApplicationId() - Method in class org.apache.hadoop.yarn.api.protocolrecords.GetApplicationReportRequest
-
Get the ApplicationId
of the application.
- getApplicationId() - Method in class org.apache.hadoop.yarn.api.protocolrecords.GetNewApplicationResponse
-
Get the new ApplicationId allocated by the ResourceManager.
- getApplicationId() - Method in class org.apache.hadoop.yarn.api.protocolrecords.KillApplicationRequest
-
Get the ApplicationId
of the application to be aborted.
- getApplicationId() - Method in class org.apache.hadoop.yarn.api.protocolrecords.MoveApplicationAcrossQueuesRequest
-
Get the ApplicationId
of the application to be moved.
- getApplicationId() - Method in class org.apache.hadoop.yarn.api.records.ApplicationAttemptId
-
Get the ApplicationId of the ApplicationAttemptId.
- getApplicationId() - Method in class org.apache.hadoop.yarn.api.records.ApplicationReport
-
Get the ApplicationId
of the application.
- getApplicationId() - Method in class org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext
-
Get the ApplicationId
of the submitted application.
- getApplicationList() - Method in class org.apache.hadoop.yarn.api.protocolrecords.GetApplicationsResponse
-
Get ApplicationReport
for applications.
- getApplicationName() - Method in class org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext
-
Get the application name.
- getApplicationOwner() - Method in class org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat.LogReader
-
Returns the owner of the application.
- getApplicationReport() - Method in class org.apache.hadoop.yarn.api.protocolrecords.GetApplicationReportResponse
-
Get the ApplicationReport
for the application.
- getApplicationReport(ApplicationId) - Method in class org.apache.hadoop.yarn.client.api.AHSClient
-
Get a report of the given Application.
- getApplicationReport(ApplicationId) - Method in class org.apache.hadoop.yarn.client.api.YarnClient
-
Get a report of the given Application.
- GetApplicationReportRequest - Class in org.apache.hadoop.yarn.api.protocolrecords
-
The request sent by a client to the ResourceManager to get an ApplicationReport for an application.
- GetApplicationReportRequest() - Constructor for class org.apache.hadoop.yarn.api.protocolrecords.GetApplicationReportRequest
-
- GetApplicationReportResponse - Class in org.apache.hadoop.yarn.api.protocolrecords
-
The response sent by the ResourceManager to a client requesting an application report.
- GetApplicationReportResponse() - Constructor for class org.apache.hadoop.yarn.api.protocolrecords.GetApplicationReportResponse
-
- getApplicationResourceUsageReport() - Method in class org.apache.hadoop.yarn.api.records.ApplicationReport
-
Retrieve the structure containing the job resources for this application
- getApplications() - Method in class org.apache.hadoop.yarn.api.records.QueueInfo
-
Get the running applications of the queue.
- getApplications() - Method in class org.apache.hadoop.yarn.client.api.AHSClient
-
Get a report (ApplicationReport) of all Applications in the cluster.
- getApplications() - Method in class org.apache.hadoop.yarn.client.api.YarnClient
-
Get a report (ApplicationReport) of all Applications in the cluster.
- getApplications(Set<String>) - Method in class org.apache.hadoop.yarn.client.api.YarnClient
-
Get a report (ApplicationReport) of Applications
matching the given application types in the cluster.
- getApplications(EnumSet<YarnApplicationState>) - Method in class org.apache.hadoop.yarn.client.api.YarnClient
-
Get a report (ApplicationReport) of Applications matching the given
application states in the cluster.
- getApplications(Set<String>, EnumSet<YarnApplicationState>) - Method in class org.apache.hadoop.yarn.client.api.YarnClient
-
Get a report (ApplicationReport) of Applications matching the given
application types and application states in the cluster.
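A short sketch of the filtered variants, reusing the initialized yarnClient from the sketch above; the application type string is illustrative:

    import java.util.Collections;
    import java.util.EnumSet;
    import java.util.List;
    import org.apache.hadoop.yarn.api.records.ApplicationReport;
    import org.apache.hadoop.yarn.api.records.YarnApplicationState;

    // Only applications of the given type that are accepted or running.
    List<ApplicationReport> running = yarnClient.getApplications(
        Collections.singleton("MAPREDUCE"),
        EnumSet.of(YarnApplicationState.ACCEPTED, YarnApplicationState.RUNNING));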
- GetApplicationsRequest - Class in org.apache.hadoop.yarn.api.protocolrecords
-
The request from clients to get a report of Applications in the cluster from the ResourceManager.
- GetApplicationsRequest() - Constructor for class org.apache.hadoop.yarn.api.protocolrecords.GetApplicationsRequest
-
- GetApplicationsResponse - Class in org.apache.hadoop.yarn.api.protocolrecords
-
The response sent by the ResourceManager to a client requesting an ApplicationReport for applications.
- GetApplicationsResponse() - Constructor for class org.apache.hadoop.yarn.api.protocolrecords.GetApplicationsResponse
-
- getApplicationStates() - Method in class org.apache.hadoop.yarn.api.protocolrecords.GetApplicationsRequest
-
Get the application states to filter applications on
- getApplicationSubmissionContext() - Method in class org.apache.hadoop.yarn.api.protocolrecords.SubmitApplicationRequest
-
Get the ApplicationSubmissionContext for the application.
- getApplicationSubmissionContext() - Method in class org.apache.hadoop.yarn.client.api.YarnClientApplication
-
- getApplicationSubmitter() - Method in class org.apache.hadoop.yarn.security.ContainerTokenIdentifier
-
- getApplicationSubmitter() - Method in class org.apache.hadoop.yarn.security.NMTokenIdentifier
-
- getApplicationTags() - Method in class org.apache.hadoop.yarn.api.records.ApplicationReport
-
Get all tags corresponding to the application
- getApplicationTags() - Method in class org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext
-
Get tags for the application
- getApplicationType() - Method in class org.apache.hadoop.yarn.api.records.ApplicationReport
-
Get the application's Type
- getApplicationType() - Method in class org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext
-
Get the application type
- getApplicationTypes() - Method in class org.apache.hadoop.yarn.api.protocolrecords.GetApplicationsRequest
-
Get the application types to filter applications on
- getApproxChkSumLength(long) - Static method in class org.apache.hadoop.fs.ChecksumFileSystem
-
- getArchiveClassPaths() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the archive entries in classpath as an array of Path
- getArchiveTimestamps() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the timestamps of the archives.
- getArrival() - Method in class org.apache.hadoop.yarn.api.records.ReservationDefinition
-
Get the arrival time or the earliest time from which the resource(s) can be
allocated.
- getAskList() - Method in class org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest
-
Get the list of ResourceRequest to update the ResourceManager about the application's resource requirements.
- getAssignedJobID() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
-
- getAssignedNode() - Method in class org.apache.hadoop.yarn.api.records.ContainerReport
-
Get the allocated NodeId where the container is running.
- getAttemptFailuresValidityInterval() - Method in class org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext
-
Get the attemptFailuresValidityInterval in milliseconds for the application
- getAttemptId() - Method in class org.apache.hadoop.yarn.api.records.ApplicationAttemptId
-
Get the attempt id of the Application.
- getAttemptsToStartSkipping(Configuration) - Static method in class org.apache.hadoop.mapred.SkipBadRecords
-
Get the number of Task attempts AFTER which skip mode
will be kicked off.
- getAttribute(String) - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
-
Convenience method for subclasses to access factory attributes.
- getAttributeTable(String) - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
-
Returns an attribute-value map derived from the factory attributes
by finding all factory attributes that begin with
contextName.tableName.
- getAuthMethod() - Method in enum org.apache.hadoop.security.UserGroupInformation.AuthenticationMethod
-
- getAutoIncrMapperProcCount(Configuration) - Static method in class org.apache.hadoop.mapred.SkipBadRecords
-
- getAutoIncrReducerProcCount(Configuration) - Static method in class org.apache.hadoop.mapred.SkipBadRecords
-
- getAvailableResources() - Method in class org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse
-
Get the available headroom for resources in the cluster for the
application.
- getAvailableResources() - Method in class org.apache.hadoop.yarn.client.api.AMRMClient
-
Get the currently available resources in the cluster.
- getAvailableResources() - Method in class org.apache.hadoop.yarn.client.api.async.AMRMClientAsync
-
Get the currently available resources in the cluster.
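As a rough sketch, assuming amRmClient is an AMRMClient that has already been initialized, started, and registered with the ResourceManager (the headroom is only meaningful after allocate() calls), an ApplicationMaster might gate new container requests on the reported headroom:

    import org.apache.hadoop.yarn.api.records.Resource;

    // amRmClient: an AMRMClient<ContainerRequest> that is init()'d, start()'d,
    // and registered via registerApplicationMaster(...).
    Resource headroom = amRmClient.getAvailableResources();
    if (headroom.getMemory() >= 1024 && headroom.getVirtualCores() >= 1) {
        // Enough headroom remains to request another 1 GB / 1 vcore container.
    }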
- getBaseName(String) - Static method in class org.apache.hadoop.crypto.key.KeyProvider
-
Split the versionName into a base name.
- getBaseRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.lib.MultipleOutputFormat
-
- getBaseRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.lib.MultipleSequenceFileOutputFormat
-
- getBaseRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.lib.MultipleTextOutputFormat
-
- getBeginColumn() - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
Deprecated.
- getBeginLine() - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
Deprecated.
- getBlacklistAdditions() - Method in class org.apache.hadoop.yarn.api.records.ResourceBlacklistRequest
-
Get the list of resource-names which should be added to the
application blacklist.
- getBlackListedTaskTrackerCount() - Method in class org.apache.hadoop.mapreduce.ClusterMetrics
-
Get the number of blacklisted trackers in the cluster.
- getBlackListedTaskTrackers() - Method in class org.apache.hadoop.mapreduce.Cluster
-
Get blacklisted trackers.
- getBlacklistedTrackerNames() - Method in class org.apache.hadoop.mapred.ClusterStatus
-
Get the names of blacklisted task trackers in the cluster.
- getBlacklistedTrackers() - Method in class org.apache.hadoop.mapred.ClusterStatus
-
Get the number of blacklisted task trackers in the cluster.
- getBlackListedTrackersInfo() - Method in class org.apache.hadoop.mapred.ClusterStatus
-
Gets the list of blacklisted trackers along with reasons for blacklisting.
- getBlacklistRemovals() - Method in class org.apache.hadoop.yarn.api.records.ResourceBlacklistRequest
-
Get the list of resource-names which should be removed from the
application blacklist.
- getBlacklistReport() - Method in class org.apache.hadoop.mapreduce.TaskTrackerInfo
-
Gets a descriptive report about why the tasktracker was blacklisted.
- getBlockDownloadLatency() - Method in class org.apache.hadoop.fs.azure.metrics.AzureFileSystemInstrumentation
-
Get the current rolling average of the download latency.
- getBlockers() - Method in class org.apache.hadoop.service.AbstractService
-
- getBlockers() - Method in interface org.apache.hadoop.service.Service
-
Get the blockers on a service: remote dependencies that are stopping the service from being live.
- getBlockIndex(BlockLocation[], long) - Method in class org.apache.hadoop.mapred.FileInputFormat
-
- getBlockIndex(BlockLocation[], long) - Method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-
- getBlockLocations() - Method in class org.apache.hadoop.fs.LocatedFileStatus
-
Get the file's block locations
- getBlockSize() - Method in class org.apache.hadoop.fs.FileStatus
-
Get the block size of the file.
- getBlockSize(Path) - Method in class org.apache.hadoop.fs.FileSystem
-
Deprecated.
Use getFileStatus() instead
- getBlockSize() - Method in class org.apache.hadoop.fs.FsServerDefaults
-
- getBlockUploadLatency() - Method in class org.apache.hadoop.fs.azure.metrics.AzureFileSystemInstrumentation
-
Get the current rolling average of the upload latency.
- getBoolean(String, boolean) - Method in class org.apache.hadoop.conf.Configuration
-
Get the value of the name property as a boolean.
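For instance, a minimal sketch of the typed getters; the property names are made up for illustration:

    import org.apache.hadoop.conf.Configuration;

    Configuration conf = new Configuration();
    conf.setBoolean("example.feature.enabled", true);
    boolean enabled = conf.getBoolean("example.feature.enabled", false); // true
    double ratio = conf.getDouble("example.sample.ratio", 0.25);         // falls back to the default 0.25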
- getBoundingValsQuery() - Method in class org.apache.hadoop.mapreduce.lib.db.DataDrivenDBInputFormat
-
- getBuffer(boolean, int) - Method in interface org.apache.hadoop.io.ByteBufferPool
-
Get a new direct ByteBuffer.
- getBuffer(boolean, int) - Method in class org.apache.hadoop.io.ElasticByteBufferPool
-
- getBytes() - Method in class org.apache.hadoop.fs.FileChecksum
-
The value of the checksum in bytes
- getBytes() - Method in class org.apache.hadoop.io.BinaryComparable
-
Return representative byte array for this instance.
- getBytes() - Method in class org.apache.hadoop.io.BytesWritable
-
Get the data backing the BytesWritable.
- getBytes() - Method in class org.apache.hadoop.io.Text
-
- getBytesPerChecksum() - Method in class org.apache.hadoop.fs.FsServerDefaults
-
- getBytesPerSum() - Method in class org.apache.hadoop.fs.ChecksumFileSystem
-
Return the bytes per checksum.
- getBytesRead() - Method in interface org.apache.hadoop.io.compress.Compressor
-
Return number of uncompressed bytes input so far.
- getBytesWritten() - Method in interface org.apache.hadoop.io.compress.Compressor
-
Return number of compressed bytes output so far.
- getCacheArchives() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get cache archives set in the Configuration
- getCachedHosts() - Method in class org.apache.hadoop.fs.BlockLocation
-
Get the list of hosts (hostname) hosting a cached replica of the block
- getCacheFiles() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get cache files set in the Configuration
- getCachePoolDefault() - Static method in class org.apache.hadoop.fs.permission.FsPermission
-
Get the default permission for cache pools.
- getCallbackHandler() - Method in class org.apache.hadoop.yarn.client.api.async.NMClientAsync
-
- getCanonicalServiceName() - Method in class org.apache.hadoop.fs.AbstractFileSystem
-
Get a canonical name for this file system.
- getCanonicalServiceName() - Method in class org.apache.hadoop.fs.s3.S3FileSystem
-
- getCanonicalServiceName() - Method in class org.apache.hadoop.fs.s3native.NativeS3FileSystem
-
- getCanonicalUri() - Method in class org.apache.hadoop.fs.FileSystem
-
Return a canonicalized form of this FileSystem's URI.
- getCanonicalUri() - Method in class org.apache.hadoop.fs.FilterFileSystem
-
- getCapability() - Method in class org.apache.hadoop.yarn.api.records.ContainerResourceIncreaseRequest
-
- getCapability() - Method in class org.apache.hadoop.yarn.api.records.NodeReport
-
Get the total Resource on the node.
- getCapability() - Method in class org.apache.hadoop.yarn.api.records.ReservationRequest
-
Get the Resource capability of the request.
- getCapability() - Method in class org.apache.hadoop.yarn.api.records.ResourceRequest
-
Get the Resource capability of the request.
- getCapacity() - Method in class org.apache.hadoop.fs.FsStatus
-
Return the capacity in bytes of the file system
- getCapacity() - Method in class org.apache.hadoop.io.BytesWritable
-
Get the capacity, which is the maximum size that could handled without
resizing the backing storage.
- getCapacity() - Method in class org.apache.hadoop.record.Buffer
-
Deprecated.
Get the capacity, which is the maximum count that could handled without
resizing the backing storage.
- getCapacity() - Method in class org.apache.hadoop.yarn.api.records.QueueInfo
-
Get the configured capacity of the queue.
- getChecksum(Configuration) - Static method in class org.apache.hadoop.yarn.sharedcache.SharedCacheChecksumFactory
-
Get a new SharedCacheChecksum object based on the configurable algorithm implementation (see yarn.sharedcache.checksum.algo.impl).
- getChecksumFile(Path) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
-
Return the name of the checksum file associated with a file.
- getChecksumFileLength(Path, long) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
-
Return the length of the checksum file given the size of the
actual file.
- getChecksumLength(long, int) - Static method in class org.apache.hadoop.fs.ChecksumFileSystem
-
Calculates the length of the checksum file in bytes.
- getChecksumOpt() - Method in class org.apache.hadoop.fs.FileChecksum
-
- getChecksumType() - Method in class org.apache.hadoop.fs.FsServerDefaults
-
- getChildFileSystems() - Method in class org.apache.hadoop.fs.FilterFileSystem
-
- getChildFileSystems() - Method in class org.apache.hadoop.fs.viewfs.ViewFileSystem
-
- getChildQueueInfos(String) - Method in class org.apache.hadoop.yarn.client.api.YarnClient
-
Get information (QueueInfo) about all the immediate children queues of the given queue.
- getChildQueues(String) - Method in class org.apache.hadoop.mapred.JobClient
-
Returns an array of queue information objects about immediate children
of queue queueName.
- getChildQueues(String) - Method in class org.apache.hadoop.mapreduce.Cluster
-
Returns immediate children of queueName.
- getChildQueues() - Method in class org.apache.hadoop.yarn.api.records.QueueInfo
-
Get the child queues of the queue.
- getChildren() - Method in class org.apache.hadoop.mapred.JobQueueInfo
-
- getClass(String, Class<?>) - Method in class org.apache.hadoop.conf.Configuration
-
Get the value of the name property as a Class.
- getClass(String, Class<? extends U>, Class<U>) - Method in class org.apache.hadoop.conf.Configuration
-
Get the value of the name property as a Class implementing the interface specified by xface.
- getClass(byte) - Method in class org.apache.hadoop.io.AbstractMapWritable
-
- getClass(T) - Static method in class org.apache.hadoop.util.ReflectionUtils
-
Return the correctly-typed Class of the given object.
- getClassByName(String) - Method in class org.apache.hadoop.conf.Configuration
-
Load a class by name.
- getClassByNameOrNull(String) - Method in class org.apache.hadoop.conf.Configuration
-
Load a class by name, returning null rather than throwing an exception
if it couldn't be loaded.
- getClasses(String, Class<?>...) - Method in class org.apache.hadoop.conf.Configuration
-
Get the value of the name property as an array of Class.
- getClassLoader() - Method in class org.apache.hadoop.conf.Configuration
-
- getClassName() - Method in class org.apache.hadoop.tracing.SpanReceiverInfo
-
- getCleanupProgress() - Method in class org.apache.hadoop.mapreduce.JobStatus
-
- getCleanupTaskReports(JobID) - Method in class org.apache.hadoop.mapred.JobClient
-
Get the information of the current state of the cleanup tasks of a job.
- getClient() - Method in class org.apache.hadoop.yarn.client.api.async.NMClientAsync
-
- getClientAcls() - Method in class org.apache.hadoop.registry.client.impl.zk.RegistryOperationsService
-
Get the aggregate set of ACLs the client should use
to create directories
- getClientName() - Method in class org.apache.hadoop.yarn.security.client.ClientToAMTokenIdentifier
-
- getClientToAMToken() - Method in class org.apache.hadoop.yarn.api.records.ApplicationReport
-
Get the client token for communicating with the ApplicationMaster.
- getClientToAMTokenMasterKey() - Method in class org.apache.hadoop.yarn.api.protocolrecords.RegisterApplicationMasterResponse
-
Get ClientToAMToken master key.
- getClusterHandle() - Method in class org.apache.hadoop.mapred.JobClient
-
Get a handle to the Cluster
- getClusterMetrics(GetClusterMetricsRequest) - Method in interface org.apache.hadoop.yarn.api.ApplicationClientProtocol
-
The interface used by clients to get metrics about the cluster from the ResourceManager.
- getClusterMetrics() - Method in class org.apache.hadoop.yarn.api.protocolrecords.GetClusterMetricsResponse
-
Get the YarnClusterMetrics for the cluster.
- GetClusterMetricsRequest - Class in org.apache.hadoop.yarn.api.protocolrecords
-
The request sent by clients to get cluster metrics from the ResourceManager.
- GetClusterMetricsRequest() - Constructor for class org.apache.hadoop.yarn.api.protocolrecords.GetClusterMetricsRequest
-
- GetClusterMetricsResponse - Class in org.apache.hadoop.yarn.api.protocolrecords
-
The response sent by the ResourceManager to a client requesting cluster metrics.
- GetClusterMetricsResponse() - Constructor for class org.apache.hadoop.yarn.api.protocolrecords.GetClusterMetricsResponse
-
- getClusterNodeCount() - Method in class org.apache.hadoop.yarn.client.api.AMRMClient
-
Get the current number of nodes in the cluster.
- getClusterNodeCount() - Method in class org.apache.hadoop.yarn.client.api.async.AMRMClientAsync
-
Get the current number of nodes in the cluster.
- getClusterNodeLabels(GetClusterNodeLabelsRequest) - Method in interface org.apache.hadoop.yarn.api.ApplicationClientProtocol
-
The interface used by clients to get node labels in the cluster.
- getClusterNodeLabels() - Method in class org.apache.hadoop.yarn.client.api.YarnClient
-
The interface used by clients to get node labels in the cluster.
- GetClusterNodeLabelsRequest - Class in org.apache.hadoop.yarn.api.protocolrecords
-
- GetClusterNodeLabelsRequest() - Constructor for class org.apache.hadoop.yarn.api.protocolrecords.GetClusterNodeLabelsRequest
-
- GetClusterNodeLabelsResponse - Class in org.apache.hadoop.yarn.api.protocolrecords
-
- GetClusterNodeLabelsResponse() - Constructor for class org.apache.hadoop.yarn.api.protocolrecords.GetClusterNodeLabelsResponse
-
- getClusterNodes(GetClusterNodesRequest) - Method in interface org.apache.hadoop.yarn.api.ApplicationClientProtocol
-
The interface used by clients to get a report of all nodes in the cluster from the ResourceManager.
- GetClusterNodesRequest - Class in org.apache.hadoop.yarn.api.protocolrecords
-
The request from clients to get a report of all nodes in the cluster from the ResourceManager.
- GetClusterNodesRequest() - Constructor for class org.apache.hadoop.yarn.api.protocolrecords.GetClusterNodesRequest
-
- GetClusterNodesResponse - Class in org.apache.hadoop.yarn.api.protocolrecords
-
The response sent by the ResourceManager to a client requesting a NodeReport for all nodes.
- GetClusterNodesResponse() - Constructor for class org.apache.hadoop.yarn.api.protocolrecords.GetClusterNodesResponse
-
- getClusterStatus() - Method in class org.apache.hadoop.mapred.JobClient
-
Get status information about the Map-Reduce cluster.
- getClusterStatus(boolean) - Method in class org.apache.hadoop.mapred.JobClient
-
Get status information about the Map-Reduce cluster.
- getClusterStatus() - Method in class org.apache.hadoop.mapreduce.Cluster
-
Get current cluster status.
- getClusterTimestamp() - Method in class org.apache.hadoop.yarn.api.records.ApplicationId
-
Get the start time of the ResourceManager which is used to generate globally unique ApplicationId.
- getClusterTimestamp() - Method in class org.apache.hadoop.yarn.api.records.ReservationId
-
Get the start time of the ResourceManager which is used to generate globally unique ReservationId.
- getCodec(Path) - Method in class org.apache.hadoop.io.compress.CompressionCodecFactory
-
Find the relevant compression codec for the given file based on its
filename suffix.
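A small sketch of suffix-based codec lookup; the input path is illustrative, and getCodec returns null when no registered codec matches the suffix:

    import java.io.InputStream;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.CompressionCodecFactory;

    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path input = new Path("/data/events.log.gz");      // illustrative path
    CompressionCodec codec = new CompressionCodecFactory(conf).getCodec(input);
    InputStream in = (codec == null)
        ? fs.open(input)                               // plain file, read as-is
        : codec.createInputStream(fs.open(input));     // decompress on the fly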
- getCodecByClassName(String) - Method in class org.apache.hadoop.io.compress.CompressionCodecFactory
-
Find the relevant compression codec for the codec's canonical class name.
- getCodecByName(String) - Method in class org.apache.hadoop.io.compress.CompressionCodecFactory
-
Find the relevant compression codec for the codec's canonical class name
or by codec alias.
- getCodecClassByName(String) - Method in class org.apache.hadoop.io.compress.CompressionCodecFactory
-
Find the relevant compression codec for the codec's canonical class name
or by codec alias and returns its implementation class.
- getCodecClasses(Configuration) - Static method in class org.apache.hadoop.io.compress.CompressionCodecFactory
-
Get the list of codecs discovered via a Java ServiceLoader, or
listed in the configuration.
- getCollector(String, Reporter) - Method in class org.apache.hadoop.mapred.lib.MultipleOutputs
-
Gets the output collector for a named output.
- getCollector(String, String, Reporter) - Method in class org.apache.hadoop.mapred.lib.MultipleOutputs
-
Gets the output collector for a multi named output.
- getCombinerClass() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the user-defined combiner class used to combine map-outputs
before being sent to the reducers.
- getCombinerClass() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the combiner class for the job.
- getCombinerKeyGroupingComparator() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the user defined WritableComparable comparator for grouping keys of inputs to the combiner.
- getCombinerKeyGroupingComparator() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the user defined RawComparator comparator for grouping keys of inputs to the combiner.
- getCombinerOutput() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.DoubleValueSum
-
- getCombinerOutput() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.LongValueMax
-
- getCombinerOutput() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.LongValueMin
-
- getCombinerOutput() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.LongValueSum
-
- getCombinerOutput() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.StringValueMax
-
- getCombinerOutput() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.StringValueMin
-
- getCombinerOutput() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.UniqValueCount
-
- getCombinerOutput() - Method in interface org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregator
-
- getCombinerOutput() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.ValueHistogram
-
- getCommands() - Method in class org.apache.hadoop.yarn.api.records.ContainerLaunchContext
-
Get the list of commands for launching the container.
- getCommittedTaskPath(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
-
Compute the path where the output of a committed task is stored until
the entire job is committed.
- getCommittedTaskPath(TaskAttemptContext, Path) - Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
-
- getCommittedTaskPath(int, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
-
Compute the path where the output of a committed task is stored until the
entire job is committed for a specific application attempt.
- getCommittedTaskPath(int, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.PartialFileOutputCommitter
-
- getComparator() - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
-
Return comparator defining the ordering for RecordReaders in this
composite.
- getComparator() - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeRecordReader
-
Return comparator defining the ordering for RecordReaders in this
composite.
- getCompletedContainersStatuses() - Method in class org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse
-
Get the list of completed containers' statuses.
- getCompletionPollInterval(Configuration) - Static method in class org.apache.hadoop.mapreduce.Job
-
The interval at which waitForCompletion() should check.
- getComponentType() - Method in class org.apache.hadoop.io.ArrayPrimitiveWritable
-
- getCompressedData() - Method in class org.apache.hadoop.io.compress.BlockDecompressorStream
-
- getCompressedData() - Method in class org.apache.hadoop.io.compress.DecompressorStream
-
- getCompressMapOutput() - Method in class org.apache.hadoop.mapred.JobConf
-
Are the outputs of the maps to be compressed?
- getCompressor(CompressionCodec, Configuration) - Static method in class org.apache.hadoop.io.compress.CodecPool
-
- getCompressor(CompressionCodec) - Static method in class org.apache.hadoop.io.compress.CodecPool
-
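A hedged sketch of pooled compressor usage; the output path is illustrative, and pooled compressors should always be handed back:

    import java.io.OutputStream;
    import java.nio.charset.StandardCharsets;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.compress.CodecPool;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.CompressionCodecFactory;
    import org.apache.hadoop.io.compress.Compressor;

    Configuration conf = new Configuration();
    CompressionCodec codec = new CompressionCodecFactory(conf).getCodecByName("gzip");
    Compressor compressor = CodecPool.getCompressor(codec, conf);
    try {
        FileSystem fs = FileSystem.get(conf);
        OutputStream out = codec.createOutputStream(fs.create(new Path("/tmp/out.gz")), compressor);
        out.write("example".getBytes(StandardCharsets.UTF_8));
        out.close();
    } finally {
        CodecPool.returnCompressor(compressor);        // return the compressor to the pool
    }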
- getCompressorType() - Method in class org.apache.hadoop.io.compress.BZip2Codec
-
- getCompressorType() - Method in interface org.apache.hadoop.io.compress.CompressionCodec
-
- getCompressorType() - Method in class org.apache.hadoop.io.compress.DefaultCodec
-
- getCompressorType() - Method in class org.apache.hadoop.io.compress.GzipCodec
-
- getCompressOutput(JobConf) - Static method in class org.apache.hadoop.mapred.FileOutputFormat
-
Is the job output compressed?
- getCompressOutput(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
Is the job output compressed?
- getConcurrency() - Method in class org.apache.hadoop.yarn.api.records.ReservationRequest
-
Get the number of containers that need to be scheduled concurrently.
- getConditions() - Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
- getConf() - Method in interface org.apache.hadoop.conf.Configurable
-
Return the configuration used by this object.
- getConf() - Method in class org.apache.hadoop.conf.Configured
-
- getConf() - Method in class org.apache.hadoop.crypto.key.KeyProvider
-
Return the provider configuration.
- getConf() - Method in class org.apache.hadoop.fs.FilterFileSystem
-
- getConf() - Method in class org.apache.hadoop.io.AbstractMapWritable
-
- getConf() - Method in class org.apache.hadoop.io.compress.BZip2Codec
-
Return the configuration used by this object.
- getConf() - Method in class org.apache.hadoop.io.compress.DefaultCodec
-
- getConf() - Method in class org.apache.hadoop.io.EnumSetWritable
-
- getConf() - Method in class org.apache.hadoop.io.GenericWritable
-
- getConf() - Method in class org.apache.hadoop.io.ObjectWritable
-
- getConf() - Method in class org.apache.hadoop.io.WritableComparator
-
- getConf() - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
-
Return the configuration used by this object.
- getConf() - Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
-
- getConf() - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- getConf() - Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
- getConf() - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeRecordReader
-
Return the configuration used by this object.
- getConf() - Method in class org.apache.hadoop.mapreduce.lib.partition.BinaryPartitioner
-
- getConf() - Method in class org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedComparator
-
- getConf() - Method in class org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedPartitioner
-
- getConf() - Method in class org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner
-
- getConf() - Method in class org.apache.hadoop.net.AbstractDNSToSwitchMapping
-
- getConf() - Method in class org.apache.hadoop.net.ScriptBasedMapping
-
- getConf() - Method in class org.apache.hadoop.net.SocksSocketFactory
-
- getConf() - Method in class org.apache.hadoop.net.TableMapping
-
- getConf() - Method in class org.apache.hadoop.security.authorize.DefaultImpersonationProvider
-
- getConfig() - Method in class org.apache.hadoop.service.AbstractService
-
- getConfig() - Method in interface org.apache.hadoop.service.Service
-
Get the configuration of this service.
- getConfiguration() - Method in interface org.apache.hadoop.mapred.RunningJob
-
Get the underlying job configuration
- getConfiguration() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Return the configuration for the job.
- getConfResourceAsInputStream(String) - Method in class org.apache.hadoop.conf.Configuration
-
Get an input stream attached to the configuration resource with the given name.
- getConfResourceAsReader(String) - Method in class org.apache.hadoop.conf.Configuration
-
Get a Reader attached to the configuration resource with the given name.
- getConnection() - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
Returns a connection object to the DB.
- getConnection() - Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
- getConnection() - Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
- getContainerExitStatus() - Method in class org.apache.hadoop.yarn.api.records.ContainerReport
-
Get the final exit status of the container.
- getContainerId() - Method in class org.apache.hadoop.yarn.api.protocolrecords.GetContainerReportRequest
-
Get the ContainerId of the Container.
- getContainerId() - Method in class org.apache.hadoop.yarn.api.records.ContainerId
-
Get the identifier of the ContainerId.
- getContainerId() - Method in class org.apache.hadoop.yarn.api.records.ContainerReport
-
Get the ContainerId of the container.
- getContainerId() - Method in class org.apache.hadoop.yarn.api.records.ContainerResourceIncreaseRequest
-
- getContainerId() - Method in class org.apache.hadoop.yarn.api.records.ContainerStatus
-
Get the ContainerId of the container.
- getContainerID() - Method in class org.apache.hadoop.yarn.security.ContainerTokenIdentifier
-
- getContainerIds() - Method in class org.apache.hadoop.yarn.api.protocolrecords.GetContainerStatusesRequest
-
Get the list of ContainerIds of containers for which to obtain the ContainerStatus.
- getContainerIds() - Method in class org.apache.hadoop.yarn.api.protocolrecords.StopContainersRequest
-
Get the ContainerIds of the containers to be stopped.
- getContainerLaunchContext() - Method in class org.apache.hadoop.yarn.api.protocolrecords.StartContainerRequest
-
Get the ContainerLaunchContext for the container to be started by the NodeManager.
- getContainerList() - Method in class org.apache.hadoop.yarn.api.protocolrecords.GetContainersResponse
-
Get a list of ContainerReport for all the containers of an application attempt.
- getContainerLogDir() - Method in class org.apache.hadoop.yarn.ContainerLogAppender
-
Getter/Setter methods for log4j.
- getContainerLogDir() - Method in class org.apache.hadoop.yarn.ContainerRollingLogAppender
-
Getter/Setter methods for log4j.
- getContainerLogFile() - Method in class org.apache.hadoop.yarn.ContainerLogAppender
-
- getContainerLogFile() - Method in class org.apache.hadoop.yarn.ContainerRollingLogAppender
-
- getContainerReport() - Method in class org.apache.hadoop.yarn.api.protocolrecords.GetContainerReportResponse
-
Get the ContainerReport for the container.
- getContainerReport(ContainerId) - Method in class org.apache.hadoop.yarn.client.api.AHSClient
-
Get a report of the given Container.
- getContainerReport(ContainerId) - Method in class org.apache.hadoop.yarn.client.api.YarnClient
-
Get a report of the given Container.
- GetContainerReportRequest - Class in org.apache.hadoop.yarn.api.protocolrecords
-
The request sent by a client to the ResourceManager to get a ContainerReport for a container.
- GetContainerReportRequest() - Constructor for class org.apache.hadoop.yarn.api.protocolrecords.GetContainerReportRequest
-
- GetContainerReportResponse - Class in org.apache.hadoop.yarn.api.protocolrecords
-
The response sent by the ResourceManager to a client requesting a container report.
- GetContainerReportResponse() - Constructor for class org.apache.hadoop.yarn.api.protocolrecords.GetContainerReportResponse
-
- getContainers() - Method in class org.apache.hadoop.yarn.api.records.PreemptionContract
-
Get the set of PreemptionContainer specifying which containers owned by the ApplicationMaster may be reclaimed by the ResourceManager.
- getContainers() - Method in class org.apache.hadoop.yarn.api.records.StrictPreemptionContract
-
Get the set of PreemptionContainer specifying containers owned by the ApplicationMaster that may be reclaimed by the ResourceManager.
- getContainers(ApplicationAttemptId) - Method in class org.apache.hadoop.yarn.client.api.AHSClient
-
Get a report of all Containers of an ApplicationAttempt in the cluster.
- getContainers(ApplicationAttemptId) - Method in class org.apache.hadoop.yarn.client.api.YarnClient
-
Get a report of all Containers of an ApplicationAttempt in the cluster.
- getContainersFromPreviousAttempts() - Method in class org.apache.hadoop.yarn.api.protocolrecords.RegisterApplicationMasterResponse
-
Get the list of running containers as viewed by the ResourceManager from previous application attempts.
- GetContainersRequest - Class in org.apache.hadoop.yarn.api.protocolrecords
-
The request from clients to get a list of container reports, which belong to an application attempt, from the ResourceManager.
- GetContainersRequest() - Constructor for class org.apache.hadoop.yarn.api.protocolrecords.GetContainersRequest
-
- GetContainersResponse - Class in org.apache.hadoop.yarn.api.protocolrecords
-
The response sent by the ResourceManager to a client requesting a list of ContainerReport for containers.
- GetContainersResponse() - Constructor for class org.apache.hadoop.yarn.api.protocolrecords.GetContainersResponse
-
- getContainerState() - Method in class org.apache.hadoop.yarn.api.records.ContainerReport
-
Get the final ContainerState of the container.
- getContainerStatus(ContainerId, NodeId) - Method in class org.apache.hadoop.yarn.client.api.NMClient
-
Query the status of a container.
- getContainerStatusAsync(ContainerId, NodeId) - Method in class org.apache.hadoop.yarn.client.api.async.NMClientAsync
-
- getContainerStatuses(GetContainerStatusesRequest) - Method in interface org.apache.hadoop.yarn.api.ContainerManagementProtocol
-
The API used by the ApplicationMaster to request the current statuses of Containers from the NodeManager.
- getContainerStatuses() - Method in class org.apache.hadoop.yarn.api.protocolrecords.GetContainerStatusesResponse
-
Get the ContainerStatuses of the requested containers.
- GetContainerStatusesRequest - Class in org.apache.hadoop.yarn.api.protocolrecords
-
The request sent by the ApplicationMaster to the NodeManager to get the ContainerStatus of requested containers.
- GetContainerStatusesRequest() - Constructor for class org.apache.hadoop.yarn.api.protocolrecords.GetContainerStatusesRequest
-
- GetContainerStatusesResponse - Class in org.apache.hadoop.yarn.api.protocolrecords
-
The response sent by the NodeManager to the ApplicationMaster when asked to obtain the ContainerStatus of requested containers.
- GetContainerStatusesResponse() - Constructor for class org.apache.hadoop.yarn.api.protocolrecords.GetContainerStatusesResponse
-
- getContainerToken() - Method in class org.apache.hadoop.yarn.api.protocolrecords.StartContainerRequest
-
Get the container token to be used for authorization during starting
container.
- getContainerToken() - Method in class org.apache.hadoop.yarn.api.records.Container
-
Get the ContainerToken for the container.
- getContent() - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineDelegationTokenResponse
-
- getContentSummary(Path) - Method in class org.apache.hadoop.fs.FileSystem
-
- getContentSummary(Path) - Method in class org.apache.hadoop.fs.viewfs.ViewFileSystem
-
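A short sketch of summarising a directory tree; the path is illustrative:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.ContentSummary;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    FileSystem fs = FileSystem.get(new Configuration());
    ContentSummary summary = fs.getContentSummary(new Path("/user/example"));
    long bytes = summary.getLength();          // total bytes under the path
    long files = summary.getFileCount();       // number of files
    long dirs  = summary.getDirectoryCount();  // number of directories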
- getContextFactory() - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
-
Returns the factory by which this context was created.
- getContextName() - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
-
Returns the context name.
- getContract() - Method in class org.apache.hadoop.yarn.api.records.PreemptionMessage
-
- getCount() - Method in class org.apache.hadoop.record.Buffer
-
Deprecated.
Get the current count of the buffer.
- getCounter() - Method in class org.apache.hadoop.mapred.Counters.Counter
-
- getCounter(Enum<?>) - Method in class org.apache.hadoop.mapred.Counters
-
Returns current value of the specified counter, or 0 if the counter
does not exist.
- getCounter(String) - Method in class org.apache.hadoop.mapred.Counters.Group
-
- getCounter(int, String) - Method in class org.apache.hadoop.mapred.Counters.Group
-
- getCounter(Counters, String, String) - Method in class org.apache.hadoop.mapred.JobClient
-
- getCounter(Enum<?>) - Method in interface org.apache.hadoop.mapred.Reporter
-
- getCounter(String, String) - Method in interface org.apache.hadoop.mapred.Reporter
-
- getCounter(Enum<?>) - Method in interface org.apache.hadoop.mapreduce.TaskAttemptContext
-
Get the Counter for the given counterName.
- getCounter(String, String) - Method in interface org.apache.hadoop.mapreduce.TaskAttemptContext
-
Get the Counter for the given groupName and counterName.
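For example, a minimal sketch of incrementing a custom counter from within a new-API Mapper; the group and counter names here are illustrative, not predefined constants:

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class CountingMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
      private static final IntWritable ONE = new IntWritable(1);

      @Override
      protected void map(LongWritable key, Text value, Context context)
          throws IOException, InterruptedException {
        if (value.toString().trim().isEmpty()) {
          // The counter is created on first use and aggregated by the framework.
          context.getCounter("ExampleCounters", "EMPTY_LINES").increment(1);
          return;
        }
        context.write(value, ONE);
      }
    }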
- getCounter(Counters, String, String) - Method in class org.apache.hadoop.mapreduce.tools.CLI
-
- getCounterForName(String) - Method in class org.apache.hadoop.mapred.Counters.Group
-
Get the counter for the given name and create it if it doesn't exist.
- getCounters() - Method in interface org.apache.hadoop.mapred.RunningJob
-
Gets the counters for this job.
- getCounters() - Method in class org.apache.hadoop.mapred.TaskReport
-
- getCounters() - Method in class org.apache.hadoop.mapreduce.Job
-
Gets the counters for this job.
- getCountersEnabled(JobConf) - Static method in class org.apache.hadoop.mapred.lib.MultipleOutputs
-
Returns whether the counters for the named outputs are enabled.
- getCountersEnabled(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.output.MultipleOutputs
-
Returns whether the counters for the named outputs are enabled.
- getCountQuery() - Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
Returns the query for getting the total number of rows,
subclasses can override this for custom behaviour.
- getCpuUsagePercent() - Method in class org.apache.hadoop.yarn.util.ResourceCalculatorProcessTree
-
Get the CPU usage by all the processes in the process-tree based on
average between samples as a ratio of overall CPU cycles similar to top.
- getCreatedTime() - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineDomain
-
Get the created time of the domain
- getCreationTime() - Method in class org.apache.hadoop.yarn.api.records.ContainerReport
-
Get the creation time of the container.
- getCreationTime() - Method in class org.apache.hadoop.yarn.security.ContainerTokenIdentifier
-
- getCredentialEntry(String) - Method in class org.apache.hadoop.security.alias.CredentialProvider
-
Get the credential entry for a specific alias.
- getCredentials() - Method in class org.apache.hadoop.mapred.JobConf
-
Get credentials for the job.
- getCredentials() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get credentials for the job.
- getCumulativeCpuTime() - Method in class org.apache.hadoop.yarn.util.ResourceCalculatorProcessTree
-
Get the CPU time in milliseconds used by all the processes in the process-tree since the process-tree was created.
- getCumulativeRssmem() - Method in class org.apache.hadoop.yarn.util.ResourceCalculatorProcessTree
-
Deprecated.
- getCumulativeRssmem(int) - Method in class org.apache.hadoop.yarn.util.ResourceCalculatorProcessTree
-
Deprecated.
- getCumulativeVmem() - Method in class org.apache.hadoop.yarn.util.ResourceCalculatorProcessTree
-
Deprecated.
- getCumulativeVmem(int) - Method in class org.apache.hadoop.yarn.util.ResourceCalculatorProcessTree
-
Deprecated.
- getCurrentApplicationAttemptId() - Method in class org.apache.hadoop.yarn.api.records.ApplicationReport
-
Get the ApplicationAttemptId of the current attempt of the application.
- getCurrentCapacity() - Method in class org.apache.hadoop.yarn.api.records.QueueInfo
-
Get the current capacity of the queue.
- getCurrentKey(String) - Method in class org.apache.hadoop.crypto.key.KeyProvider
-
Get the current version of the key, which should be used for encrypting new
data.
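A sketch of fetching the current key version from whatever providers are configured; the key name is illustrative, and getCurrentKey returns null if no such key exists:

    import java.util.List;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.crypto.key.KeyProvider;
    import org.apache.hadoop.crypto.key.KeyProviderFactory;

    Configuration conf = new Configuration();
    List<KeyProvider> providers = KeyProviderFactory.getProviders(conf);
    if (!providers.isEmpty()) {
        KeyProvider.KeyVersion current = providers.get(0).getCurrentKey("example-key");
        byte[] material = current.getMaterial();   // use this version when encrypting new data
    }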
- getCurrentKey() - Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
Get the current key
- getCurrentKey() - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader
-
- getCurrentKey() - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReaderWrapper
-
- getCurrentKey() - Method in class org.apache.hadoop.mapreduce.lib.input.KeyValueLineRecordReader
-
- getCurrentKey() - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileAsTextRecordReader
-
- getCurrentKey() - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader
-
- getCurrentKey() - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeRecordReader
-
- getCurrentKey() - Method in class org.apache.hadoop.mapreduce.lib.join.WrappedRecordReader
-
Get current key
- getCurrentKey() - Method in class org.apache.hadoop.mapreduce.RecordReader
-
Get the current key
- getCurrentKey() - Method in interface org.apache.hadoop.mapreduce.TaskInputOutputContext
-
Get the current key.
- getCurrentMaximumDownloadBandwidth() - Method in class org.apache.hadoop.fs.azure.metrics.AzureFileSystemInstrumentation
-
Get the current maximum download bandwidth.
- getCurrentMaximumUploadBandwidth() - Method in class org.apache.hadoop.fs.azure.metrics.AzureFileSystemInstrumentation
-
Get the current maximum upload bandwidth.
- getCurrentState() - Method in exception org.apache.hadoop.yarn.state.InvalidStateTransitonException
-
- getCurrentState() - Method in interface org.apache.hadoop.yarn.state.StateMachine
-
- getCurrentTrashDir() - Method in class org.apache.hadoop.fs.TrashPolicy
-
Get the current working directory of the Trash Policy
- getCurrentUsernameUnencoded(String) - Static method in class org.apache.hadoop.registry.client.binding.RegistryUtils
-
Get the current username, using the value of the parameter env_hadoop_username if it is set on an insecure cluster.
- getCurrentValue(V) - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
-
- getCurrentValue() - Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
Get the current value.
- getCurrentValue() - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader
-
- getCurrentValue() - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReaderWrapper
-
- getCurrentValue() - Method in class org.apache.hadoop.mapreduce.lib.input.KeyValueLineRecordReader
-
- getCurrentValue() - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileAsTextRecordReader
-
- getCurrentValue() - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader
-
- getCurrentValue() - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeRecordReader
-
Get the current value.
- getCurrentValue() - Method in class org.apache.hadoop.mapreduce.lib.join.WrappedRecordReader
-
Get current value
- getCurrentValue() - Method in class org.apache.hadoop.mapreduce.RecordReader
-
Get the current value.
- getCurrentValue() - Method in interface org.apache.hadoop.mapreduce.TaskInputOutputContext
-
Get the current value.
- getCurrentWebResponses() - Method in class org.apache.hadoop.fs.azure.metrics.AzureFileSystemInstrumentation
-
Gets the current number of web responses obtained from Azure Storage.
- getDBConf() - Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
- getDBConf() - Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
- getDBProductName() - Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
- getDeadline() - Method in class org.apache.hadoop.yarn.api.records.ReservationDefinition
-
Get the deadline or the latest time by when the resource(s) must be
allocated.
- getDeclaredClass() - Method in class org.apache.hadoop.io.ObjectWritable
-
Return the class this is meant to be.
- getDeclaredComponentType() - Method in class org.apache.hadoop.io.ArrayPrimitiveWritable
-
- getDeclaredFieldsIncludingInherited(Class<?>) - Static method in class org.apache.hadoop.util.ReflectionUtils
-
Gets all the declared fields of a class including fields declared in
superclasses.
- getDeclaredMethodsIncludingInherited(Class<?>) - Static method in class org.apache.hadoop.util.ReflectionUtils
-
Gets all the declared methods of a class including methods declared in
superclasses.
- getDecommissionedTaskTrackerCount() - Method in class org.apache.hadoop.mapreduce.ClusterMetrics
-
Get the number of decommissioned trackers in the cluster.
- getDecompressor(CompressionCodec) - Static method in class org.apache.hadoop.io.compress.CodecPool
-
- getDecompressorType() - Method in class org.apache.hadoop.io.compress.BZip2Codec
-
- getDecompressorType() - Method in interface org.apache.hadoop.io.compress.CompressionCodec
-
- getDecompressorType() - Method in class org.apache.hadoop.io.compress.DefaultCodec
-
- getDecompressorType() - Method in class org.apache.hadoop.io.compress.GzipCodec
-
- getDecreasedContainers() - Method in class org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse
-
Get the list of newly decreased containers by NodeManager
- getDefault() - Static method in class org.apache.hadoop.fs.permission.FsPermission
-
Get the default permission for directory and symlink.
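For instance, a minimal sketch that applies the default directory permission and then tightens it; the path and the "750" mode are illustrative:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.permission.FsPermission;

    FileSystem fs = FileSystem.get(new Configuration());
    Path dir = new Path("/tmp/example-dir");
    fs.mkdirs(dir, FsPermission.getDirDefault());    // default directory permission, subject to the umask
    fs.setPermission(dir, new FsPermission("750"));  // then restrict to rwxr-x---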
- getDefaultBlockSize() - Method in class org.apache.hadoop.fs.FileSystem
-
- getDefaultBlockSize(Path) - Method in class org.apache.hadoop.fs.FileSystem
-
Return the number of bytes that large input files should optimally be split into to minimize I/O time.
- getDefaultBlockSize() - Method in class org.apache.hadoop.fs.FilterFileSystem
-
- getDefaultBlockSize(Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
-
- getDefaultBlockSize() - Method in class org.apache.hadoop.fs.s3.S3FileSystem
-
- getDefaultBlockSize() - Method in class org.apache.hadoop.fs.s3native.NativeS3FileSystem
-
- getDefaultBlockSize() - Method in class org.apache.hadoop.fs.viewfs.ViewFileSystem
-
- getDefaultBlockSize(Path) - Method in class org.apache.hadoop.fs.viewfs.ViewFileSystem
-
- getDefaultCompressionType(Configuration) - Static method in class org.apache.hadoop.io.SequenceFile
-
Get the compression type for the reduce outputs
- getDefaultDelegationTokenAuthenticator() - Static method in class org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL
-
- getDefaultExtension() - Method in class org.apache.hadoop.io.compress.BZip2Codec
-
.bz2 is recognized as the default extension for compressed BZip2 files
- getDefaultExtension() - Method in interface org.apache.hadoop.io.compress.CompressionCodec
-
Get the default filename extension for this kind of compression.
- getDefaultExtension() - Method in class org.apache.hadoop.io.compress.DefaultCodec
-
- getDefaultExtension() - Method in class org.apache.hadoop.io.compress.GzipCodec
-
- getDefaultMaps() - Method in class org.apache.hadoop.mapred.JobClient
-
Get status information about the max available Maps in the cluster.
- getDefaultNodeLabelExpression() - Method in class org.apache.hadoop.yarn.api.records.QueueInfo
-
Get the default node label expression of the queue; this takes effect only when the ApplicationSubmissionContext and ResourceRequest don't specify their NodeLabelExpression.
- getDefaultPort() - Method in class org.apache.hadoop.fs.FileSystem
-
Get the default port for this file system.
- getDefaultPort() - Method in class org.apache.hadoop.fs.ftp.FTPFileSystem
-
Get the default port for this FTPFileSystem.
- getDefaultReduces() - Method in class org.apache.hadoop.mapred.JobClient
-
Get status information about the max available Reduces in the cluster.
- getDefaultReplication() - Method in class org.apache.hadoop.fs.FileSystem
-
- getDefaultReplication(Path) - Method in class org.apache.hadoop.fs.FileSystem
-
Get the default replication for a path.
- getDefaultReplication() - Method in class org.apache.hadoop.fs.FilterFileSystem
-
- getDefaultReplication(Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
-
- getDefaultReplication() - Method in class org.apache.hadoop.fs.viewfs.ViewFileSystem
-
- getDefaultReplication(Path) - Method in class org.apache.hadoop.fs.viewfs.ViewFileSystem
-
- getDefaultUri(Configuration) - Static method in class org.apache.hadoop.fs.FileSystem
-
Get the default filesystem URI from a configuration.
- getDefaultWorkFile(TaskAttemptContext, String) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
Get the default path and filename for the output format.
- getDelegate() - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
-
Obtain an iterator over the child RRs apropos of the value type
ultimately emitted from this join.
- getDelegate() - Method in class org.apache.hadoop.mapred.join.JoinRecordReader
-
Return an iterator wrapping the JoinCollector.
- getDelegate() - Method in class org.apache.hadoop.mapred.join.MultiFilterRecordReader
-
Return an iterator returning a single value from the tuple.
- getDelegate() - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeRecordReader
-
Obtain an iterator over the child RRs apropos of the value type
ultimately emitted from this join.
- getDelegate() - Method in class org.apache.hadoop.mapreduce.lib.join.JoinRecordReader
-
Return an iterator wrapping the JoinCollector.
- getDelegate() - Method in class org.apache.hadoop.mapreduce.lib.join.MultiFilterRecordReader
-
Return an iterator returning a single value from the tuple.
- getDelegationToken(Text) - Method in class org.apache.hadoop.mapred.JobClient
-
Get a delegation token for the user from the JobTracker.
- getDelegationToken(Text) - Method in class org.apache.hadoop.mapreduce.Cluster
-
Get a delegation token for the user from the JobTracker.
- getDelegationToken() - Method in interface org.apache.hadoop.mapreduce.v2.api.protocolrecords.CancelDelegationTokenRequest
-
- getDelegationToken() - Method in interface org.apache.hadoop.mapreduce.v2.api.protocolrecords.RenewDelegationTokenRequest
-
- getDelegationToken(URL, DelegationTokenAuthenticatedURL.Token, String) - Method in class org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL
-
Requests a delegation token using the configured Authenticator
for authentication.
- getDelegationToken(URL, DelegationTokenAuthenticatedURL.Token, String, String) - Method in class org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL
-
Requests a delegation token using the configured Authenticator
for authentication.
- getDelegationToken() - Method in class org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.Token
-
- getDelegationToken(URL, AuthenticatedURL.Token, String) - Method in class org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator
-
Requests a delegation token using the configured Authenticator
for authentication.
- getDelegationToken(URL, AuthenticatedURL.Token, String, String) - Method in class org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator
-
Requests a delegation token using the configured Authenticator
for authentication.
- getDelegationToken(String) - Method in class org.apache.hadoop.yarn.client.api.TimelineClient
-
Get a delegation token so as to be able to talk to the timeline server in a
secure way.
- GetDelegationTokenRequest - Interface in org.apache.hadoop.mapreduce.v2.api.protocolrecords
-
- GetDelegationTokenRequest - Class in org.apache.hadoop.yarn.api.protocolrecords
-
The request issued by the client to get a delegation token from the ResourceManager.
- GetDelegationTokenRequest() - Constructor for class org.apache.hadoop.yarn.api.protocolrecords.GetDelegationTokenRequest
-
- GetDelegationTokenResponse - Class in org.apache.hadoop.yarn.api.protocolrecords
-
- GetDelegationTokenResponse() - Constructor for class org.apache.hadoop.yarn.api.protocolrecords.GetDelegationTokenResponse
-
- getDelegationTokens(String) - Method in class org.apache.hadoop.fs.viewfs.ViewFs
-
- getDependentJobs() - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
- getDependingJobs() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
-
- getDescription() - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineDomain
-
Get the domain description
- getDiagnostics() - Method in class org.apache.hadoop.yarn.api.protocolrecords.FinishApplicationMasterRequest
-
Get diagnostic information on application failure.
- getDiagnostics() - Method in class org.apache.hadoop.yarn.api.records.ApplicationAttemptReport
-
Get the diagnostic information of the application attempt in case of errors.
- getDiagnostics() - Method in class org.apache.hadoop.yarn.api.records.ApplicationReport
-
Get the diagnostic information of the application in case of errors.
- getDiagnostics() - Method in class org.apache.hadoop.yarn.api.records.ContainerStatus
-
Get diagnostic messages for failed containers.
- getDiagnosticsInfo() - Method in class org.apache.hadoop.yarn.api.records.ContainerReport
-
Get the DiagnosticsInfo of the container.
- getDigest() - Method in class org.apache.hadoop.io.MD5Hash
-
Returns the digest bytes.
- getDigester() - Static method in class org.apache.hadoop.io.MD5Hash
-
Create a thread local MD5 digester
- getDirDefault() - Static method in class org.apache.hadoop.fs.permission.FsPermission
-
Get the default permission for directory.
- getDirectoryCount() - Method in class org.apache.hadoop.fs.ContentSummary
-
- getDisplayName() - Method in class org.apache.hadoop.mapred.Counters.Counter
-
- getDisplayName() - Method in class org.apache.hadoop.mapred.Counters.Group
-
- getDisplayName() - Method in interface org.apache.hadoop.mapreduce.Counter
-
Get the display name of the counter.
- getDisplayName() - Method in interface org.apache.hadoop.mapreduce.counters.CounterGroupBase
-
Get the display name of the group.
- getDmax(String) - Method in class org.apache.hadoop.metrics.ganglia.GangliaContext
-
- getDomainId() - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEntity
-
Get the ID of the domain that the entity is to be put into.
- getDomains() - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineDomains
-
Get a list of domains
- getDouble(String, double) - Method in class org.apache.hadoop.conf.Configuration
-
Get the value of the name property as a double.
- getDU(File) - Static method in class org.apache.hadoop.fs.FileUtil
-
Takes an input dir and returns the du on that local directory.
- getDuration() - Method in class org.apache.hadoop.yarn.api.records.ReservationRequest
-
Get the duration in milliseconds for which the resource is required.
- getEffectivePermission(AclEntry) - Method in class org.apache.hadoop.fs.permission.AclStatus
-
Get the effective permission for the AclEntry
- getEffectivePermission(AclEntry, FsPermission) - Method in class org.apache.hadoop.fs.permission.AclStatus
-
Get the effective permission for the AclEntry.
- getElementType() - Method in class org.apache.hadoop.io.EnumSetWritable
-
Returns the class of all the elements of the underlying EnumSetWritable.
- getElementTypeID() - Method in class org.apache.hadoop.record.meta.VectorTypeID
-
Deprecated.
- getEmptier() - Method in class org.apache.hadoop.fs.Trash
-
Return a Runnable that periodically empties the trash of all users, intended to be run by the superuser.
- getEmptier() - Method in class org.apache.hadoop.fs.TrashPolicy
-
Return a Runnable that periodically empties the trash of all users, intended to be run by the superuser.
- getEncryptDataTransfer() - Method in class org.apache.hadoop.fs.FsServerDefaults
-
- getEncryptedBit() - Method in class org.apache.hadoop.fs.permission.FsPermission
-
Returns true if the file is encrypted or directory is in an encryption zone
- getEndColumn() - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
Deprecated.
- getEndLine() - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
Deprecated.
- getEndTime() - Method in class org.apache.hadoop.conf.ReconfigurationTaskStatus
-
- getEntities() - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEntities
-
Get a list of entities
- getEntityId() - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEntity
-
Get the entity Id
- getEntityId() - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEvents.EventsOfOneEntity
-
Get the entity Id
- getEntityId() - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelinePutResponse.TimelinePutError
-
Get the entity Id
- getEntityType() - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEntity
-
Get the entity type
- getEntityType() - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEvents.EventsOfOneEntity
-
Get the entity type
- getEntityType() - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelinePutResponse.TimelinePutError
-
Get the entity type
- getEntries() - Method in class org.apache.hadoop.fs.permission.AclStatus
-
Returns the list of all ACL entries, ordered by their natural ordering.
- getEntry(MapFile.Reader[], Partitioner<K, V>, K, V) - Static method in class org.apache.hadoop.mapred.MapFileOutputFormat
-
Get an entry from output generated by this class.
- getEntry(MapFile.Reader[], Partitioner<K, V>, K, V) - Static method in class org.apache.hadoop.mapreduce.lib.output.MapFileOutputFormat
-
Get an entry from output generated by this class.
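Below, a minimal lookup sketch against MapFile output using the mapreduce flavor of getEntry() above; the output directory and key are hypothetical, and the partitioner must match the one used by the job that produced the output:
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.MapFile;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.lib.output.MapFileOutputFormat;
    import org.apache.hadoop.mapreduce.lib.partition.HashPartitioner;

    public class MapFileLookupSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path outputDir = new Path("/user/example/job-output");       // hypothetical path
        MapFile.Reader[] readers = MapFileOutputFormat.getReaders(outputDir, conf);
        try {
          Text value = new Text();
          // The partitioner picks the part-file that holds the key; reuse the job's partitioner.
          MapFileOutputFormat.getEntry(readers, new HashPartitioner<Text, Text>(),
              new Text("someKey"), value);
          System.out.println(value);
        } finally {
          for (MapFile.Reader reader : readers) {
            reader.close();
          }
        }
      }
    }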
- getEnum(String, T) - Method in class org.apache.hadoop.conf.Configuration
-
Return value matching this enumerated type.
- getEnvironment() - Method in class org.apache.hadoop.yarn.api.records.ContainerLaunchContext
-
Get environment variables for the container.
- getErrorCode() - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelinePutResponse.TimelinePutError
-
Get the error code
- getErrors() - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelinePutResponse
-
- getEvent() - Method in exception org.apache.hadoop.yarn.state.InvalidStateTransitonException
-
- getEventHandler() - Method in class org.apache.hadoop.yarn.event.AsyncDispatcher
-
- getEventHandler() - Method in interface org.apache.hadoop.yarn.event.Dispatcher
-
- getEventId() - Method in class org.apache.hadoop.mapreduce.TaskCompletionEvent
-
Returns event Id.
- getEventInfo() - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEvent
-
Get the information of the event
- getEvents() - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEntity
-
Get a list of events related to the entity
- getEvents() - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEvents.EventsOfOneEntity
-
Get a list of events
- getEventType() - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEvent
-
Get the event type
- getExceptions() - Method in exception org.apache.hadoop.io.MultipleIOException
-
- getExcludePattern() - Method in class org.apache.hadoop.yarn.api.records.LogAggregationContext
-
Get exclude pattern.
- getExecutable(JobConf) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
-
Get the URI of the application's executable.
- getExitStatus() - Method in class org.apache.hadoop.yarn.api.records.ContainerStatus
-
Get the exit status for the container.
- getExpiryTimeStamp() - Method in class org.apache.hadoop.yarn.security.ContainerTokenIdentifier
-
- getExternalEndpoint(String) - Method in class org.apache.hadoop.registry.client.types.ServiceRecord
-
Look up an external endpoint
- getFactory(Class) - Static method in class org.apache.hadoop.io.WritableFactories
-
Define a factory for a class.
- getFailedJobList() - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl
-
- getFailedJobs() - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
-
- getFailedRequests() - Method in class org.apache.hadoop.yarn.api.protocolrecords.GetContainerStatusesResponse
-
Get the containerId-to-exception map in which the exception indicates the error from each container for failed requests.
- getFailedRequests() - Method in class org.apache.hadoop.yarn.api.protocolrecords.StartContainersResponse
-
Get the containerId-to-exception map in which the exception indicates the error from each container for failed requests.
- getFailedRequests() - Method in class org.apache.hadoop.yarn.api.protocolrecords.StopContainersResponse
-
Get the containerId-to-exception map in which the exception indicates the error from each container for failed requests.
- getFailureCause() - Method in class org.apache.hadoop.service.AbstractService
-
- getFailureCause() - Method in interface org.apache.hadoop.service.Service
-
Get the first exception raised during the service failure.
- getFailureInfo() - Method in interface org.apache.hadoop.mapred.RunningJob
-
Get failure info for the job.
- getFailureInfo() - Method in class org.apache.hadoop.mapreduce.JobStatus
-
Gets any available info on the reason of failure of the job.
- getFailureState() - Method in class org.apache.hadoop.service.AbstractService
-
- getFailureState() - Method in interface org.apache.hadoop.service.Service
-
- getFencer() - Method in class org.apache.hadoop.ha.HAServiceTarget
-
- getFencingParameters() - Method in class org.apache.hadoop.ha.HAServiceTarget
-
- getFieldID() - Method in class org.apache.hadoop.record.meta.FieldTypeInfo
-
Deprecated.
get the field's id (name)
- getFieldNames() - Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
- getFieldTypeInfos() - Method in class org.apache.hadoop.record.meta.RecordTypeInfo
-
Deprecated.
Return a collection of field type infos
- getFieldTypeInfos() - Method in class org.apache.hadoop.record.meta.StructTypeID
-
Deprecated.
- getFile(String, String) - Method in class org.apache.hadoop.conf.Configuration
-
Get a local file name under a directory named in dirsProp with
the given path.
- getFile() - Method in class org.apache.hadoop.yarn.api.records.URL
-
Get the file of the URL.
- getFileBlockLocations(Path, long, long) - Method in class org.apache.hadoop.fs.AbstractFileSystem
-
- getFileBlockLocations(FileStatus, long, long) - Method in class org.apache.hadoop.fs.azure.NativeAzureFileSystem
-
Return an array containing hostnames, offset and size of
portions of the given file.
- getFileBlockLocations(FileStatus, long, long) - Method in class org.apache.hadoop.fs.FileSystem
-
Return an array containing hostnames, offset and size of
portions of the given file.
- getFileBlockLocations(Path, long, long) - Method in class org.apache.hadoop.fs.FileSystem
-
Return an array containing hostnames, offset and size of
portions of the given file.
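Below, a minimal sketch of querying block locations for a file with getFileBlockLocations(); the path is hypothetical:
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BlockLocationSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path file = new Path("/user/example/data.txt");   // hypothetical path
        FileSystem fs = file.getFileSystem(conf);
        FileStatus status = fs.getFileStatus(file);
        BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
        for (BlockLocation block : blocks) {
          // Each BlockLocation exposes the hosts serving the block plus its offset and length.
          System.out.println(String.join(",", block.getHosts())
              + " offset=" + block.getOffset() + " len=" + block.getLength());
        }
      }
    }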
- getFileBlockLocations(FileStatus, long, long) - Method in class org.apache.hadoop.fs.FilterFileSystem
-
- getFileBlockLocations(FileStatus, long, long) - Method in class org.apache.hadoop.fs.viewfs.ViewFileSystem
-
- getFileBlockLocations(Path, long, long) - Method in class org.apache.hadoop.fs.viewfs.ViewFs
-
- getFileBlockLocations(FileSystem, FileStatus) - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat
-
- getFileBufferSize() - Method in class org.apache.hadoop.fs.FsServerDefaults
-
- getFileChecksum(Path) - Method in class org.apache.hadoop.fs.AbstractFileSystem
-
- getFileChecksum(Path) - Method in class org.apache.hadoop.fs.FileContext
-
Get the checksum of a file.
- getFileChecksum(Path) - Method in class org.apache.hadoop.fs.FileSystem
-
Get the checksum of a file.
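Below, a minimal sketch of fetching a file checksum with getFileChecksum(); note that the call may return null on file systems that do not support checksums, and the path is hypothetical:
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileChecksum;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ChecksumSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path file = new Path("/user/example/data.txt");     // hypothetical path
        FileSystem fs = file.getFileSystem(conf);
        FileChecksum checksum = fs.getFileChecksum(file);   // null if checksums are unsupported
        if (checksum != null) {
          System.out.println(checksum.getAlgorithmName() + " length=" + checksum.getLength());
        }
      }
    }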
- getFileChecksum(Path, long) - Method in class org.apache.hadoop.fs.FileSystem
-
Get the checksum of a file, from the beginning of the file till the
specific length.
- getFileChecksum(Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
-
- getFileChecksum(Path, long) - Method in class org.apache.hadoop.fs.FilterFileSystem
-
- getFileChecksum(Path) - Method in class org.apache.hadoop.fs.viewfs.ViewFileSystem
-
- getFileChecksum(Path) - Method in class org.apache.hadoop.fs.viewfs.ViewFs
-
- getFileChecksum(Path) - Method in class org.apache.hadoop.yarn.client.api.SharedCacheClient
-
A convenience method to calculate the checksum of a specified file.
- getFileClassPaths() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the file entries in classpath as an array of Path
- getFileContext(AbstractFileSystem, Configuration) - Static method in class org.apache.hadoop.fs.FileContext
-
Create a FileContext with specified FS as default using the specified
config.
- getFileContext(AbstractFileSystem) - Static method in class org.apache.hadoop.fs.FileContext
-
Create a FileContext for specified file system using the default config.
- getFileContext() - Static method in class org.apache.hadoop.fs.FileContext
-
Create a FileContext using the default config read from the $HADOOP_CONFIG/core.xml; unspecified key-values for config are defaulted from core-defaults.xml in the release jar.
- getFileContext(URI) - Static method in class org.apache.hadoop.fs.FileContext
-
Create a FileContext for specified URI using the default config.
- getFileContext(URI, Configuration) - Static method in class org.apache.hadoop.fs.FileContext
-
Create a FileContext for specified default URI using the specified config.
- getFileContext(Configuration) - Static method in class org.apache.hadoop.fs.FileContext
-
Create a FileContext using the passed config.
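Below, a minimal sketch of the FileContext factory methods listed above; the path is hypothetical:
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileContext;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.Path;

    public class FileContextSketch {
      public static void main(String[] args) throws Exception {
        // The default file system is taken from fs.defaultFS in the passed config.
        FileContext fc = FileContext.getFileContext(new Configuration());
        FileStatus status = fc.getFileStatus(new Path("/user/example"));   // hypothetical path
        System.out.println(status.getOwner() + " " + status.getPermission());
      }
    }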
- getFileCount() - Method in class org.apache.hadoop.fs.ContentSummary
-
- getFileDefault() - Static method in class org.apache.hadoop.fs.permission.FsPermission
-
Get the default permission for file.
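Below, a minimal sketch of the default-permission helpers getDirDefault() and getFileDefault(); the client's configured umask is applied separately when files or directories are actually created:
    import org.apache.hadoop.fs.permission.FsPermission;

    public class DefaultPermissionSketch {
      public static void main(String[] args) {
        FsPermission dirDefault = FsPermission.getDirDefault();    // pre-umask default for directories
        FsPermission fileDefault = FsPermission.getFileDefault();  // pre-umask default for files
        System.out.println("dir=" + dirDefault + " file=" + fileDefault);
      }
    }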
- getFileDescriptor() - Method in class org.apache.hadoop.fs.FSDataInputStream
-
- getFileInfo(JobId) - Method in class org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager
-
- getFileLinkStatus(Path) - Method in class org.apache.hadoop.fs.AbstractFileSystem
-
The specification of this method matches that of
FileContext.getFileLinkStatus(Path)
except that an UnresolvedLinkException may be thrown if a symlink is
encountered in the path leading up to the final path component.
- getFileLinkStatus(Path) - Method in class org.apache.hadoop.fs.FileContext
-
Return a file status object that represents the path.
- getFileLinkStatus(Path) - Method in class org.apache.hadoop.fs.FileSystem
-
- getFileLinkStatus(Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
-
- getFileLinkStatus(Path) - Method in class org.apache.hadoop.fs.LocalFileSystem
-
- getFileLinkStatus(Path) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
-
Return a FileStatus representing the given path.
- getFileLinkStatus(Path) - Method in class org.apache.hadoop.fs.viewfs.ViewFs
-
- getFileListingPath() - Method in class org.apache.hadoop.tools.DistCp
-
Get default name of the copy listing file.
- getFileStatus(Configuration, URI) - Static method in class org.apache.hadoop.filecache.DistributedCache
-
Deprecated.
- getFileStatus(Path) - Method in class org.apache.hadoop.fs.AbstractFileSystem
-
The specification of this method matches that of
FileContext.getFileStatus(Path)
except that an UnresolvedLinkException may be thrown if a symlink is
encountered in the path.
- getFileStatus(Path) - Method in class org.apache.hadoop.fs.azure.NativeAzureFileSystem
-
- getFileStatus(Path) - Method in class org.apache.hadoop.fs.FileContext
-
Return a file status object that represents the path.
- getFileStatus(Path) - Method in class org.apache.hadoop.fs.FileSystem
-
Return a file status object that represents the path.
- getFileStatus(Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
-
Get file status.
- getFileStatus(Path) - Method in class org.apache.hadoop.fs.ftp.FTPFileSystem
-
- getFileStatus(Path) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
-
- getFileStatus(Path) - Method in class org.apache.hadoop.fs.s3.S3FileSystem
-
FileStatus for S3 file systems.
- getFileStatus(Path) - Method in class org.apache.hadoop.fs.s3native.NativeS3FileSystem
-
- getFileStatus(Path) - Method in class org.apache.hadoop.fs.viewfs.ViewFileSystem
-
- getFileStatus(Path) - Method in class org.apache.hadoop.fs.viewfs.ViewFs
-
- getFileSystem(Configuration) - Method in class org.apache.hadoop.fs.Path
-
Return the FileSystem that owns this Path.
- getFileSystem() - Method in class org.apache.hadoop.mapreduce.Cluster
-
Get the file system where job-specific files are stored
- getFileSystemClass(String, Configuration) - Static method in class org.apache.hadoop.fs.FileSystem
-
- getFileSystemInstanceId() - Method in class org.apache.hadoop.fs.azure.metrics.AzureFileSystemInstrumentation
-
The unique identifier for this file system in the metrics.
- getFileTimestamps() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the timestamps of the files.
- getFinalApplicationStatus() - Method in class org.apache.hadoop.yarn.api.protocolrecords.FinishApplicationMasterRequest
-
Get final state of the ApplicationMaster.
- getFinalApplicationStatus() - Method in class org.apache.hadoop.yarn.api.records.ApplicationReport
-
Get the final finish status of the application.
- getFinalParameters() - Method in class org.apache.hadoop.conf.Configuration
-
Get the set of parameters marked final.
- getFinishTime() - Method in class org.apache.hadoop.mapreduce.Job
-
Get finish time of the job.
- getFinishTime() - Method in class org.apache.hadoop.mapreduce.JobStatus
-
Get the finish time of the job.
- getFinishTime() - Method in class org.apache.hadoop.yarn.api.records.ApplicationReport
-
Get the finish time of the application.
- getFinishTime() - Method in class org.apache.hadoop.yarn.api.records.ContainerReport
-
Get the Finish time of the container.
- getFloat(String, float) - Method in class org.apache.hadoop.conf.Configuration
-
Get the value of the name property as a float.
- getFormatMinSplitSize() - Method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-
Get the lower bound on split size imposed by the format.
- getFormatMinSplitSize() - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat
-
- getFs() - Method in class org.apache.hadoop.mapred.JobClient
-
Get a filesystem handle.
- getFsAction(String) - Static method in enum org.apache.hadoop.fs.permission.FsAction
-
Get the FsAction enum for String representation of permissions
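Below, a minimal sketch of parsing a permission string into an FsAction and testing it with implies():
    import org.apache.hadoop.fs.permission.FsAction;

    public class FsActionSketch {
      public static void main(String[] args) {
        FsAction action = FsAction.getFsAction("r-x");   // READ_EXECUTE
        System.out.println(action + " implies READ? " + action.implies(FsAction.READ));
        System.out.println(action + " implies WRITE? " + action.implies(FsAction.WRITE));
      }
    }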
- getFSofPath(Path) - Method in class org.apache.hadoop.fs.FileContext
-
Get the file system of supplied path.
- getFSofPath(Path, Configuration) - Static method in class org.apache.hadoop.fs.FileSystem
-
- getFsStatus(Path) - Method in class org.apache.hadoop.fs.AbstractFileSystem
-
- getFsStatus() - Method in class org.apache.hadoop.fs.AbstractFileSystem
-
- getFsStatus(Path) - Method in class org.apache.hadoop.fs.FileContext
-
Returns a status object describing the use and capacity of the file system denoted by the Path argument p.
- getFsStatus() - Method in class org.apache.hadoop.fs.viewfs.ViewFs
-
- getFullJob(JobId) - Method in interface org.apache.hadoop.mapreduce.v2.hs.HistoryStorage
-
Get a fully parsed job.
- getGid(String) - Method in interface org.apache.hadoop.security.IdMappingServiceProvider
-
- getGidAllowingUnknown(String) - Method in interface org.apache.hadoop.security.IdMappingServiceProvider
-
- getGrayListedTaskTrackerCount() - Method in class org.apache.hadoop.mapreduce.ClusterMetrics
-
Get the number of graylisted trackers in the cluster.
- getGraylistedTrackerNames() - Method in class org.apache.hadoop.mapred.ClusterStatus
-
Deprecated.
- getGraylistedTrackers() - Method in class org.apache.hadoop.mapred.ClusterStatus
-
Deprecated.
- getGroup() - Method in class org.apache.hadoop.fs.FileStatus
-
Get the group associated with the file.
- getGroup() - Method in class org.apache.hadoop.fs.permission.AclStatus
-
Returns the file group.
- getGroup(String) - Method in class org.apache.hadoop.mapred.Counters
-
- getGroup(String) - Method in class org.apache.hadoop.mapreduce.counters.AbstractCounters
-
Returns the named counter group, or an empty group if there is none
with the specified name.
- getGroupAction() - Method in class org.apache.hadoop.fs.permission.FsPermission
-
- getGroupingComparator() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the user defined RawComparator comparator for grouping keys of inputs to the reduce.
- getGroupName(int, String) - Method in interface org.apache.hadoop.security.IdMappingServiceProvider
-
- getGroupNames() - Method in class org.apache.hadoop.mapred.Counters
-
- getGroupNames() - Method in class org.apache.hadoop.mapreduce.counters.AbstractCounters
-
Returns the names of all counter groups.
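Below, a minimal sketch of walking counter groups via getGroupNames() and getGroup(); the group and counter names are hypothetical:
    import org.apache.hadoop.mapreduce.Counter;
    import org.apache.hadoop.mapreduce.CounterGroup;
    import org.apache.hadoop.mapreduce.Counters;

    public class CountersSketch {
      public static void main(String[] args) {
        Counters counters = new Counters();
        // findCounter creates the group and counter on first use (hypothetical names).
        counters.findCounter("example.group", "RECORDS_SEEN").increment(42);
        for (String groupName : counters.getGroupNames()) {
          CounterGroup group = counters.getGroup(groupName);
          for (Counter counter : group) {
            System.out.println(group.getDisplayName() + "." + counter.getDisplayName()
                + " = " + counter.getValue());
          }
        }
      }
    }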
- getGroups() - Method in class org.apache.hadoop.security.authorize.AccessControlList
-
Get the names of user groups allowed for this service.
- getGroups(String) - Method in interface org.apache.hadoop.security.GroupMappingServiceProvider
-
Get all the group memberships of a given user.
- getHeader(boolean) - Static method in class org.apache.hadoop.fs.ContentSummary
-
Return the header of the output.
- getHealthReport() - Method in class org.apache.hadoop.yarn.api.records.NodeReport
-
Get the diagnostic health report of the node.
- getHistoryFile() - Method in class org.apache.hadoop.mapreduce.JobStatus
-
- getHistoryUrl() - Method in interface org.apache.hadoop.mapred.RunningJob
-
Get the url where history file is archived.
- getHistoryUrl() - Method in class org.apache.hadoop.mapreduce.Job
-
- getHomeDirectory() - Method in class org.apache.hadoop.fs.AbstractFileSystem
-
Return the current user's home directory in this file system.
- getHomeDirectory() - Method in class org.apache.hadoop.fs.FileContext
-
Return the current user's home directory in this file system.
- getHomeDirectory() - Method in class org.apache.hadoop.fs.FileSystem
-
Return the current user's home directory in this filesystem.
- getHomeDirectory() - Method in class org.apache.hadoop.fs.FilterFileSystem
-
- getHomeDirectory() - Method in class org.apache.hadoop.fs.ftp.FTPFileSystem
-
- getHomeDirectory() - Method in class org.apache.hadoop.fs.RawLocalFileSystem
-
- getHomeDirectory() - Method in class org.apache.hadoop.fs.viewfs.ViewFileSystem
-
- getHomeDirectory() - Method in class org.apache.hadoop.fs.viewfs.ViewFs
-
- getHost() - Method in class org.apache.hadoop.yarn.api.protocolrecords.RegisterApplicationMasterRequest
-
Get the host on which the ApplicationMaster is running.
- getHost() - Method in class org.apache.hadoop.yarn.api.records.ApplicationAttemptReport
-
Get the host on which this attempt of the ApplicationMaster is running.
- getHost() - Method in class org.apache.hadoop.yarn.api.records.ApplicationReport
-
Get the host on which the ApplicationMaster is running.
- getHost() - Method in class org.apache.hadoop.yarn.api.records.NodeId
-
Get the hostname of the node.
- getHost() - Method in class org.apache.hadoop.yarn.api.records.URL
-
Get the host of the URL.
- getHosts() - Method in class org.apache.hadoop.fs.BlockLocation
-
Get the list of hosts (hostname) hosting this block
- getHttpAddress() - Method in class org.apache.hadoop.yarn.api.records.NodeReport
-
Get the http address of the node.
- getId(Class<?>) - Method in class org.apache.hadoop.io.AbstractMapWritable
-
- getID() - Method in interface org.apache.hadoop.mapred.RunningJob
-
Get the job identifier.
- getId() - Method in class org.apache.hadoop.mapreduce.ID
-
returns the int which represents the identifier
- getId() - Method in class org.apache.hadoop.tracing.SpanReceiverInfo
-
- getId() - Method in class org.apache.hadoop.yarn.api.records.ApplicationId
-
Get the short integer identifier of the ApplicationId which is unique for all applications started by a particular instance of the ResourceManager.
- getId() - Method in class org.apache.hadoop.yarn.api.records.Container
-
Get the globally unique identifier for the container.
- getId() - Method in class org.apache.hadoop.yarn.api.records.ContainerId
-
Deprecated.
- getId() - Method in class org.apache.hadoop.yarn.api.records.PreemptionContainer
-
- getId() - Method in class org.apache.hadoop.yarn.api.records.ReservationId
-
Get the long identifier of the ReservationId which is unique for all Reservations started by a particular instance of the ResourceManager.
- getId() - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineDomain
-
Get the domain ID
- getIdentifier() - Method in class org.apache.hadoop.yarn.api.records.Token
-
Get the token identifier.
- GetImage() - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
Deprecated.
- getIncludeApplications() - Method in class org.apache.hadoop.yarn.api.protocolrecords.GetQueueInfoRequest
-
Is information about active applications required?
- getIncludeChildQueues() - Method in class org.apache.hadoop.yarn.api.protocolrecords.GetQueueInfoRequest
-
Is information about child queues required?
- getIncludePattern() - Method in class org.apache.hadoop.yarn.api.records.LogAggregationContext
-
Get include pattern.
- getIncreasedContainers() - Method in class org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse
-
Get the list of newly increased containers by ResourceManager
- getIncreaseRequests() - Method in class org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest
-
Get the ContainerResourceIncreaseRequest being sent by the ApplicationMaster.
- getInitialWorkingDirectory() - Method in class org.apache.hadoop.fs.AbstractFileSystem
-
Some file systems like LocalFileSystem have an initial workingDir
that is used as the starting workingDir.
- getInitialWorkingDirectory() - Method in class org.apache.hadoop.fs.FileSystem
-
Note: with the new FilesContext class, getWorkingDirectory()
will be removed.
- getInitialWorkingDirectory() - Method in class org.apache.hadoop.fs.FilterFileSystem
-
- getInitialWorkingDirectory() - Method in class org.apache.hadoop.fs.RawLocalFileSystem
-
- getInputBoundingQuery() - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- getInputClass() - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- getInputConditions() - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- getInputCountQuery() - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- getInputDirRecursive(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-
- getInputFieldNames() - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- getInputFileBasedOutputFileName(JobConf, String) - Method in class org.apache.hadoop.mapred.lib.MultipleOutputFormat
-
Generate the outfile name based on a given name and the input file name.
- getInputFormat() - Method in class org.apache.hadoop.mapred.JobConf
-
- getInputFormatClass() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
- getInputOrderBy() - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- getInputPathFilter(JobConf) - Static method in class org.apache.hadoop.mapred.FileInputFormat
-
Get a PathFilter instance of the filter set for the input paths.
- getInputPathFilter(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-
Get a PathFilter instance of the filter set for the input paths.
- getInputPaths(JobConf) - Static method in class org.apache.hadoop.mapred.FileInputFormat
-
Get the list of input Paths for the map-reduce job.
- getInputPaths(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-
Get the list of input Paths for the map-reduce job.
- getInputQuery() - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- getInputSplit() - Method in interface org.apache.hadoop.mapred.Reporter
-
- getInputSplit() - Method in interface org.apache.hadoop.mapreduce.MapContext
-
Get the input split for this map.
- getInputTableName() - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- getInstance(Configuration, FileSystem, Path) - Static method in class org.apache.hadoop.fs.TrashPolicy
-
Get an instance of the configured TrashPolicy based on the value
of the configuration parameter fs.trash.classname.
- getInstance() - Static method in class org.apache.hadoop.mapreduce.Job
-
- getInstance(Configuration) - Static method in class org.apache.hadoop.mapreduce.Job
-
- getInstance(Configuration, String) - Static method in class org.apache.hadoop.mapreduce.Job
-
Creates a new Job with no particular Cluster and a given jobName.
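Below, a minimal driver sketch built around Job.getInstance(Configuration, String); the job name and the input/output arguments are illustrative, and mapper/reducer setup is omitted:
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class JobDriverSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "example-job");      // illustrative job name
        job.setJarByClass(JobDriverSketch.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }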
- getInstance(JobStatus, Configuration) - Static method in class org.apache.hadoop.mapreduce.Job
-
- getInstance(Cluster) - Static method in class org.apache.hadoop.mapreduce.Job
-
- getInstance(Cluster, Configuration) - Static method in class org.apache.hadoop.mapreduce.Job
-
- getInstances(String, Class<U>) - Method in class org.apache.hadoop.conf.Configuration
-
Get the value of the name property as a List of objects implementing the interface specified by xface.
- getInstrumentation() - Method in class org.apache.hadoop.fs.azure.NativeAzureFileSystem
-
Gets the metrics source for this file system.
- getInt(String, int) - Method in class org.apache.hadoop.conf.Configuration
-
Get the value of the name property as an int.
- getInternalEndpoint(String) - Method in class org.apache.hadoop.registry.client.types.ServiceRecord
-
Look up an internal endpoint
- getInterpreter() - Method in class org.apache.hadoop.yarn.api.records.ReservationRequests
-
Get the ReservationRequestInterpreter, representing how the list of resources should be allocated; this captures temporal ordering and other constraints.
- getInterval() - Method in class org.apache.hadoop.metrics2.lib.MutableQuantiles
-
- getInts(String) - Method in class org.apache.hadoop.conf.Configuration
-
Get the value of the name property as a set of comma-delimited int values.
- getIsJavaMapper(JobConf) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
-
Check whether the job is using a Java Mapper.
- getIsJavaRecordReader(JobConf) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
-
Check whether the job is using a Java RecordReader
- getIsJavaRecordWriter(JobConf) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
-
Will the reduce use a Java RecordWriter?
- getIsJavaReducer(JobConf) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
-
Check whether the job is using a Java Reducer.
- getIsKillCompleted() - Method in class org.apache.hadoop.yarn.api.protocolrecords.KillApplicationResponse
-
Get the flag which indicates whether the process of killing the application has completed.
- getIsUnregistered() - Method in class org.apache.hadoop.yarn.api.protocolrecords.FinishApplicationMasterResponse
-
Get the flag which indicates that the application has successfully
unregistered with the RM and the application can safely stop.
- getJar() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the user jar for the map-reduce job.
- getJar() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the pathname of the job's jar.
- getJarUnpackPattern() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the pattern for jar contents to unpack on the tasktracker
- getJob(JobID) - Method in class org.apache.hadoop.mapred.JobClient
-
- getJob(String) - Method in class org.apache.hadoop.mapred.JobClient
-
- getJob() - Method in class org.apache.hadoop.mapred.lib.CombineFileSplit
-
- getJob(JobID) - Method in class org.apache.hadoop.mapreduce.Cluster
-
Get job corresponding to jobid.
- getJob() - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
- getJobACLs() - Method in class org.apache.hadoop.mapreduce.JobStatus
-
Get the job acls.
- getJobAttemptPath(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
-
Compute the path where the output of a given job attempt will be placed.
- getJobAttemptPath(JobContext, Path) - Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
-
Compute the path where the output of a given job attempt will be placed.
- getJobAttemptPath(int) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
-
Compute the path where the output of a given job attempt will be placed.
- getJobClient() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
-
- getJobConf() - Method in interface org.apache.hadoop.mapred.JobContext
-
Get the job Configuration
- getJobConf() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
-
- getJobConf() - Method in interface org.apache.hadoop.mapred.TaskAttemptContext
-
- getJobEndNotificationURI() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the uri to be invoked in-order to send a notification after the job
has completed (success/failure).
- getJobFile() - Method in interface org.apache.hadoop.mapred.RunningJob
-
Get the path of the submitted job configuration.
- getJobFile() - Method in class org.apache.hadoop.mapreduce.Job
-
Get the path of the submitted job configuration.
- getJobFile() - Method in class org.apache.hadoop.mapreduce.JobStatus
-
Get the configuration file for the job.
- getJobHistoryUrl(JobID) - Method in class org.apache.hadoop.mapreduce.Cluster
-
Get the job history file path for a given job id.
- getJobId() - Method in class org.apache.hadoop.mapred.JobStatus
-
Deprecated.
use getJobID instead
- getJobID() - Method in class org.apache.hadoop.mapred.JobStatus
-
- getJobID() - Method in interface org.apache.hadoop.mapred.RunningJob
-
Deprecated.
This method is deprecated and will be removed. Applications should rather use RunningJob.getID().
- getJobID() - Method in class org.apache.hadoop.mapred.TaskAttemptID
-
- getJobID() - Method in class org.apache.hadoop.mapred.TaskID
-
- getJobID() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the unique ID for the job.
- getJobID() - Method in class org.apache.hadoop.mapreduce.JobStatus
-
- getJobID() - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
- getJobID() - Method in class org.apache.hadoop.mapreduce.TaskAttemptID
-
Returns the JobID object that this task attempt belongs to.
- getJobID() - Method in class org.apache.hadoop.mapreduce.TaskID
-
Returns the JobID object that this tip belongs to.
- getJobIDsPattern(String, Integer) - Static method in class org.apache.hadoop.mapred.JobID
-
Deprecated.
- getJobInner(JobID) - Method in class org.apache.hadoop.mapred.JobClient
-
- getJobLocalDir() - Method in class org.apache.hadoop.mapred.JobConf
-
Get job-specific shared directory for use as scratch space
- getJobName() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the user-specified job name.
- getJobName() - Method in interface org.apache.hadoop.mapred.RunningJob
-
Get the name of the job.
- getJobName() - Method in class org.apache.hadoop.mapreduce.Job
-
The user-specified job name.
- getJobName() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the user-specified job name.
- getJobName() - Method in class org.apache.hadoop.mapreduce.JobStatus
-
Get the user-specified job name.
- getJobName() - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
- getJobPriority() - Method in class org.apache.hadoop.mapred.JobConf
-
- getJobPriority() - Method in class org.apache.hadoop.mapred.JobStatus
-
Return the priority of the job
- getJobRunState(int) - Static method in class org.apache.hadoop.mapred.JobStatus
-
Helper method to get human-readable state of the job.
- getJobSetupCleanupNeeded() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get whether job-setup and job-cleanup is needed for the job
- getJobsFromQueue(String) - Method in class org.apache.hadoop.mapred.JobClient
-
Gets all the jobs which were added to a particular job queue.
- getJobState() - Method in interface org.apache.hadoop.mapred.RunningJob
-
Returns the current state of the Job.
- getJobState() - Method in class org.apache.hadoop.mapreduce.Job
-
Returns the current state of the Job.
- getJobState() - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
- getJobStatus() - Method in interface org.apache.hadoop.mapred.RunningJob
-
Returns a snapshot of the current status, JobStatus, of the Job.
- getJobStatuses() - Method in class org.apache.hadoop.mapreduce.QueueInfo
-
Get the jobs submitted to queue
- getJobTrackerState() - Method in class org.apache.hadoop.mapred.ClusterStatus
-
Deprecated.
- getJobTrackerStatus() - Method in class org.apache.hadoop.mapred.ClusterStatus
-
Get the JobTracker's status.
- getJobTrackerStatus() - Method in class org.apache.hadoop.mapreduce.Cluster
-
Get the JobTracker's status.
- getJtIdentifier() - Method in class org.apache.hadoop.mapreduce.JobID
-
- getKeepCommandFile(JobConf) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
-
Does the user want to keep the command file for debugging? If this is
true, pipes will write a copy of the command data to a file in the
task directory named "downlink.data", which may be used to run the C++
program under the debugger.
- getKeepContainersAcrossApplicationAttempts() - Method in class org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext
-
Get the flag which indicates whether to keep containers across application
attempts or not.
- getKeepFailedTaskFiles() - Method in class org.apache.hadoop.mapred.JobConf
-
Should the temporary files for failed tasks be kept?
- getKeepTaskFilesPattern() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the regular expression that is matched against the task names
to see if we need to keep the files.
- getKerberosInfo(Class<?>, Configuration) - Method in class org.apache.hadoop.yarn.security.admin.AdminSecurityInfo
-
- getKerberosInfo(Class<?>, Configuration) - Method in class org.apache.hadoop.yarn.security.client.ClientRMSecurityInfo
-
- getKerberosInfo(Class<?>, Configuration) - Method in class org.apache.hadoop.yarn.security.client.ClientTimelineSecurityInfo
-
- getKerberosInfo(Class<?>, Configuration) - Method in class org.apache.hadoop.yarn.security.ContainerManagerSecurityInfo
-
- getKerberosInfo(Class<?>, Configuration) - Method in class org.apache.hadoop.yarn.security.SchedulerSecurityInfo
-
- getKey() - Method in class org.apache.hadoop.mapreduce.lib.fieldsel.FieldSelectionHelper
-
- getKeyClass() - Method in class org.apache.hadoop.io.WritableComparator
-
Returns the WritableComparable implementation class.
- getKeyClass() - Method in class org.apache.hadoop.mapred.KeyValueLineRecordReader
-
- getKeyClass() - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
-
- getKeyClass() - Method in class org.apache.hadoop.mapreduce.lib.input.KeyValueLineRecordReader
-
- getKeyFieldComparatorOption() - Method in class org.apache.hadoop.mapred.JobConf
-
- getKeyFieldComparatorOption(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedComparator
-
- getKeyFieldPartitionerOption() - Method in class org.apache.hadoop.mapred.JobConf
-
- getKeyFieldPartitionerOption(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedPartitioner
-
- getKeyId() - Method in class org.apache.hadoop.yarn.security.AMRMTokenIdentifier
-
- getKeyId() - Method in class org.apache.hadoop.yarn.security.NMTokenIdentifier
-
- getKeys() - Method in class org.apache.hadoop.crypto.key.KeyProvider
-
Get the key names for all keys.
- getKeysMetadata(String...) - Method in class org.apache.hadoop.crypto.key.KeyProvider
-
Get key metadata in bulk.
- getKeyTypeID() - Method in class org.apache.hadoop.record.meta.MapTypeID
-
Deprecated.
get the TypeID of the map's key element
- getKeyVersion(String) - Method in class org.apache.hadoop.crypto.key.KeyProvider
-
Get the key material for a specific version of the key.
- getKeyVersions(String) - Method in class org.apache.hadoop.crypto.key.KeyProvider
-
Get the key material for all versions of a specific key name.
- getKind() - Method in class org.apache.hadoop.yarn.api.records.Token
-
Get the token kind.
- getKind() - Method in class org.apache.hadoop.yarn.security.AMRMTokenIdentifier
-
- getKind() - Method in class org.apache.hadoop.yarn.security.client.ClientToAMTokenIdentifier
-
- getKind() - Method in class org.apache.hadoop.yarn.security.client.RMDelegationTokenIdentifier
-
- getKind() - Method in class org.apache.hadoop.yarn.security.client.TimelineDelegationTokenIdentifier
-
- getKind() - Method in class org.apache.hadoop.yarn.security.ContainerTokenIdentifier
-
- getKind() - Method in class org.apache.hadoop.yarn.security.NMTokenIdentifier
-
- getLabelsToNodes(GetLabelsToNodesRequest) - Method in interface org.apache.hadoop.yarn.api.ApplicationClientProtocol
-
The interface used by clients to get labels-to-nodes mappings in the existing cluster.
- getLabelsToNodes() - Method in class org.apache.hadoop.yarn.client.api.YarnClient
-
The interface used by clients to get the labels-to-nodes mapping in the existing cluster.
- getLabelsToNodes(Set<String>) - Method in class org.apache.hadoop.yarn.client.api.YarnClient
-
The interface used by clients to get the labels-to-nodes mapping for specified labels in the existing cluster.
- getLastHealthReportTime() - Method in class org.apache.hadoop.yarn.api.records.NodeReport
-
Get the last timestamp at which the health report was received.
- getLeasedCompressorsCount(CompressionCodec) - Static method in class org.apache.hadoop.io.compress.CodecPool
-
- getLeasedDecompressorsCount(CompressionCodec) - Static method in class org.apache.hadoop.io.compress.CodecPool
-
- getLen() - Method in class org.apache.hadoop.fs.FileStatus
-
Get the length of this file, in bytes.
- getLength() - Method in class org.apache.hadoop.fs.BlockLocation
-
Get the length of the block
- getLength() - Method in class org.apache.hadoop.fs.ContentSummary
-
- getLength() - Method in class org.apache.hadoop.fs.FileChecksum
-
The length of the checksum in bytes
- getLength(Path) - Method in class org.apache.hadoop.fs.FileSystem
-
Deprecated.
Use getFileStatus() instead
- getLength() - Method in class org.apache.hadoop.io.BinaryComparable
-
Return n such that bytes 0..n-1 from getBytes() are valid.
- getLength() - Method in class org.apache.hadoop.io.BytesWritable
-
Get the current size of the buffer.
- getLength() - Method in class org.apache.hadoop.io.Text
-
Returns the number of bytes in the byte array
- getLength() - Method in class org.apache.hadoop.mapred.FileSplit
-
The number of bytes in the file to process.
- getLength() - Method in interface org.apache.hadoop.mapred.InputSplit
-
Get the total number of bytes in the data of the InputSplit.
- getLength() - Method in class org.apache.hadoop.mapred.join.CompositeInputSplit
-
Return the aggregate length of all child InputSplits currently added.
- getLength(int) - Method in class org.apache.hadoop.mapred.join.CompositeInputSplit
-
Get the length of ith child InputSplit.
- getLength() - Method in class org.apache.hadoop.mapreduce.InputSplit
-
Get the size of the split, so that the input splits can be sorted by size.
- getLength() - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
-
- getLength(int) - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
-
Returns the length of the ith Path
- getLength() - Method in class org.apache.hadoop.mapreduce.lib.input.FileSplit
-
The number of bytes in the file to process.
- getLength() - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeInputSplit
-
Return the aggregate length of all child InputSplits currently added.
- getLength(int) - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeInputSplit
-
Get the length of ith child InputSplit.
- getLengths() - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
-
Returns an array containing the lengths of the files in the split
- getLifecycleHistory() - Method in class org.apache.hadoop.service.AbstractService
-
- getLifecycleHistory() - Method in interface org.apache.hadoop.service.Service
-
Get a snapshot of the lifecycle history; it is a static list
- getLinkTarget(Path) - Method in class org.apache.hadoop.fs.AbstractFileSystem
-
Partially resolves the path.
- getLinkTarget(Path) - Method in class org.apache.hadoop.fs.FileContext
-
Returns the target of the given symbolic link as it was specified
when the link was created.
- getLinkTarget(Path) - Method in class org.apache.hadoop.fs.FileSystem
-
- getLinkTarget(Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
-
- getLinkTarget(Path) - Method in class org.apache.hadoop.fs.LocalFileSystem
-
- getLinkTarget(Path) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
-
- getLinkTarget(Path) - Method in class org.apache.hadoop.fs.viewfs.ViewFs
-
- getLocal(Configuration) - Static method in class org.apache.hadoop.fs.FileSystem
-
Get the local file system.
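Below, a minimal sketch of obtaining the checksummed local file system via FileSystem.getLocal(); the path is hypothetical:
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.LocalFileSystem;
    import org.apache.hadoop.fs.Path;

    public class LocalFsSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        LocalFileSystem localFs = FileSystem.getLocal(conf);
        Path tmp = new Path("/tmp/example.txt");              // hypothetical path
        System.out.println(tmp + " exists? " + localFs.exists(tmp));
      }
    }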
- getLocalCacheArchives() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Deprecated.
the array returned only includes the items that were downloaded. There is no way to map this to what is returned by JobContext.getCacheArchives().
- getLocalCacheFiles() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Deprecated.
the array returned only includes the items that were downloaded. There is no way to map this to what is returned by JobContext.getCacheFiles().
- getLocalDirs() - Method in class org.apache.hadoop.mapred.JobConf
-
- getLocalFSFileContext() - Static method in class org.apache.hadoop.fs.FileContext
-
- getLocalFSFileContext(Configuration) - Static method in class org.apache.hadoop.fs.FileContext
-
- getLocalPath(String, String) - Method in class org.apache.hadoop.conf.Configuration
-
Get a local file under a directory named by dirsProp with
the given path.
- getLocalPath(String) - Method in class org.apache.hadoop.mapred.JobConf
-
Constructs a local file name.
- getLocalResources() - Method in class org.apache.hadoop.yarn.api.records.ContainerLaunchContext
-
Get LocalResource required by the container.
- getLocation(int) - Method in class org.apache.hadoop.mapred.join.CompositeInputSplit
-
getLocations from ith InputSplit.
- getLocation() - Method in class org.apache.hadoop.mapred.SplitLocationInfo
-
- getLocation(int) - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeInputSplit
-
getLocations from ith InputSplit.
- getLocationInfo() - Method in class org.apache.hadoop.mapred.FileSplit
-
- getLocationInfo() - Method in interface org.apache.hadoop.mapred.InputSplitWithLocationInfo
-
Gets info about which nodes the input split is stored on and how it is
stored at each location.
- getLocationInfo() - Method in class org.apache.hadoop.mapreduce.InputSplit
-
Gets info about which nodes the input split is stored on and how it is
stored at each location.
- getLocationInfo() - Method in class org.apache.hadoop.mapreduce.lib.input.FileSplit
-
- getLocations() - Method in class org.apache.hadoop.mapred.FileSplit
-
- getLocations() - Method in interface org.apache.hadoop.mapred.InputSplit
-
Get the list of hostnames where the input split is located.
- getLocations() - Method in class org.apache.hadoop.mapred.join.CompositeInputSplit
-
Collect a set of hosts from all child InputSplits.
- getLocations() - Method in class org.apache.hadoop.mapred.MultiFileSplit
-
- getLocations() - Method in class org.apache.hadoop.mapreduce.InputSplit
-
Get the list of nodes by name where the data for the split would be local.
- getLocations() - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
-
Returns all the Paths where this input-split resides
- getLocations() - Method in class org.apache.hadoop.mapreduce.lib.input.FileSplit
-
- getLocations() - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeInputSplit
-
Collect a set of hosts from all child InputSplits.
- getLogAggregationContext() - Method in class org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext
-
Get LogAggregationContext of the application.
- getLogAggregationContext() - Method in class org.apache.hadoop.yarn.security.ContainerTokenIdentifier
-
- getLogParams(JobID, TaskAttemptID) - Method in class org.apache.hadoop.mapreduce.Cluster
-
Get log parameters for the specified jobID or taskAttemptID
- getLogUrl() - Method in class org.apache.hadoop.yarn.api.records.ContainerReport
-
Get the LogURL of the container.
- getLong(String, long) - Method in class org.apache.hadoop.conf.Configuration
-
Get the value of the name property as a long.
- getLongBytes(String, long) - Method in class org.apache.hadoop.conf.Configuration
-
Get the value of the name property as a long or human readable format.
- getMapContext(MapContext<KEYIN, VALUEIN, KEYOUT, VALUEOUT>) - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper
-
Get a wrapped Mapper.Context for custom implementations.
- getMapDebugScript() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the map task's debug script.
- getMapOutputCompressorClass(Class<? extends CompressionCodec>) - Method in class org.apache.hadoop.mapred.JobConf
-
- getMapOutputKeyClass() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the key class for the map output data.
- getMapOutputKeyClass() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the key class for the map output data.
- getMapOutputValueClass() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the value class for the map output data.
- getMapOutputValueClass() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the value class for the map output data.
- getMapper() - Method in class org.apache.hadoop.mapred.MapRunner
-
- getMapperClass() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the Mapper class for the job.
- getMapperClass() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the Mapper class for the job.
- getMapperClass(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.map.MultithreadedMapper
-
Get the application's mapper class.
- getMapperMaxSkipRecords(Configuration) - Static method in class org.apache.hadoop.mapred.SkipBadRecords
-
Get the number of acceptable skip records surrounding the bad record PER
bad record in mapper.
- getMapProgress() - Method in class org.apache.hadoop.mapreduce.JobStatus
-
- getMapredJobID() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
-
- getMapredJobId() - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
- getMapRunnerClass() - Method in class org.apache.hadoop.mapred.JobConf
-
- getMapSlotCapacity() - Method in class org.apache.hadoop.mapreduce.ClusterMetrics
-
Get the total number of map slots in the cluster.
- getMapSpeculativeExecution() - Method in class org.apache.hadoop.mapred.JobConf
-
Should speculative execution be used for this job for map tasks?
Defaults to true.
- getMapTaskReports(JobID) - Method in class org.apache.hadoop.mapred.JobClient
-
Get the information of the current state of the map tasks of a job.
- getMapTaskReports(String) - Method in class org.apache.hadoop.mapred.JobClient
-
- getMapTasks() - Method in class org.apache.hadoop.mapred.ClusterStatus
-
Get the number of currently running map tasks in the cluster.
- getMasterKey(ApplicationAttemptId) - Method in class org.apache.hadoop.yarn.security.client.ClientToAMTokenSecretManager
-
- getMasterKeyId() - Method in class org.apache.hadoop.yarn.security.ContainerTokenIdentifier
-
- getMatchingRequests(Priority, String, Resource) - Method in class org.apache.hadoop.yarn.client.api.AMRMClient
-
Get outstanding ContainerRequests matching the given parameters.
- getMatchingRequests(Priority, String, Resource) - Method in class org.apache.hadoop.yarn.client.api.async.AMRMClientAsync
-
- getMaxAppAttempts() - Method in class org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext
-
- getMaximumCapacity() - Method in class org.apache.hadoop.yarn.api.records.QueueInfo
-
Get the maximum capacity of the queue.
- getMaximumResourceCapability() - Method in class org.apache.hadoop.yarn.api.protocolrecords.GetNewApplicationResponse
-
Get the maximum capability for any Resource allocated by the ResourceManager in the cluster.
- getMaximumResourceCapability() - Method in class org.apache.hadoop.yarn.api.protocolrecords.RegisterApplicationMasterResponse
-
Get the maximum capability for any Resource allocated by the ResourceManager in the cluster.
- getMaxMapAttempts() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the configured number of maximum attempts that will be made to run a map task, as specified by the mapreduce.map.maxattempts property.
- getMaxMapAttempts() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the configured number of maximum attempts that will be made to run a map task, as specified by the mapred.map.max.attempts property.
- getMaxMapTaskFailuresPercent() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the maximum percentage of map tasks that can fail without
the job being aborted.
- getMaxMapTasks() - Method in class org.apache.hadoop.mapred.ClusterStatus
-
Get the maximum capacity for running map tasks in the cluster.
- getMaxMemory() - Method in class org.apache.hadoop.mapred.ClusterStatus
-
Deprecated.
- getMaxPhysicalMemoryForTask() - Method in class org.apache.hadoop.mapred.JobConf
-
Deprecated.
this variable is deprecated and no longer in use.
- getMaxReduceAttempts() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the configured number of maximum attempts that will be made to run a reduce task, as specified by the mapreduce.reduce.maxattempts property.
- getMaxReduceAttempts() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the configured number of maximum attempts that will be made to run a reduce task, as specified by the mapred.reduce.max.attempts property.
- getMaxReduceTaskFailuresPercent() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the maximum percentage of reduce tasks that can fail without
the job being aborted.
- getMaxReduceTasks() - Method in class org.apache.hadoop.mapred.ClusterStatus
-
Get the maximum capacity for running reduce tasks in the cluster.
- getMaxSplitSize(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-
Get the maximum split size.
- getMaxTaskFailuresPerTracker() - Method in class org.apache.hadoop.mapred.JobConf
-
Expert: Get the maximum no.
- getMaxVirtualMemoryForTask() - Method in class org.apache.hadoop.mapred.JobConf
-
- getMemory() - Method in class org.apache.hadoop.yarn.api.records.Resource
-
Get memory of the resource.
- getMemoryForMapTask() - Method in class org.apache.hadoop.mapred.JobConf
-
Get memory required to run a map task of the job, in MB.
- getMemoryForReduceTask() - Method in class org.apache.hadoop.mapred.JobConf
-
Get memory required to run a reduce task of the job, in MB.
- getMemorySeconds() - Method in class org.apache.hadoop.yarn.api.records.ApplicationResourceUsageReport
-
Get the aggregated amount of memory (in megabytes) the application has
allocated times the number of seconds the application has been running.
- getMessage() - Method in exception org.apache.hadoop.fs.viewfs.NotInMountpointException
-
- getMessage() - Method in exception org.apache.hadoop.mapred.InvalidInputException
-
Get a summary message of the problems found.
- getMessage() - Method in exception org.apache.hadoop.mapreduce.lib.input.InvalidInputException
-
Get a summary message of the problems found.
- getMessage() - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
- getMessage() - Method in exception org.apache.hadoop.record.compiler.generated.ParseException
-
Deprecated.
This method has the standard behavior when this object has been
created using the standard constructors.
- getMessage() - Method in error org.apache.hadoop.record.compiler.generated.TokenMgrError
-
Deprecated.
You can also modify the body of this method to customize your error messages.
- getMetadata(String) - Method in class org.apache.hadoop.crypto.key.KeyProvider
-
Get metadata about the key.
- getMetric(String) - Method in class org.apache.hadoop.metrics.spi.OutputRecord
-
Returns the metric object which can be a Float, Integer, Short or Byte.
- getMetricNames() - Method in class org.apache.hadoop.metrics.spi.OutputRecord
-
Returns the set of metric names.
- getMetrics(MetricsCollector, boolean) - Method in class org.apache.hadoop.fs.azure.metrics.AzureFileSystemInstrumentation
-
- getMetrics(MetricsCollector, boolean) - Method in interface org.apache.hadoop.metrics2.MetricsSource
-
Get metrics from the source
- getMetricsCopy() - Method in class org.apache.hadoop.metrics.spi.OutputRecord
-
Returns a copy of this record's metrics.
- getMetricsRegistryInfo() - Method in class org.apache.hadoop.fs.azure.metrics.AzureFileSystemInstrumentation
-
Get the metrics registry information.
- getMinSplitSize(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-
Get the minimum split size
- getModificationTime() - Method in class org.apache.hadoop.fs.FileStatus
-
Get the modification time of the file.
- getModifiedTime() - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineDomain
-
Get the modified time of the domain
- getMountPoints() - Method in class org.apache.hadoop.fs.viewfs.ViewFileSystem
-
- getMountPoints() - Method in class org.apache.hadoop.fs.viewfs.ViewFs
-
- getMovableTypes() - Static method in enum org.apache.hadoop.fs.StorageType
-
- getName() - Method in class org.apache.hadoop.fs.FileSystem
-
Deprecated.
call #getUri() instead.
- getName() - Method in class org.apache.hadoop.fs.Path
-
Returns the final component of this path.
- getName() - Method in class org.apache.hadoop.fs.permission.AclEntry
-
Returns the optional ACL entry name.
- getName() - Method in class org.apache.hadoop.mapred.Counters.Counter
-
- getName() - Method in class org.apache.hadoop.mapred.Counters.Group
-
- getName() - Method in interface org.apache.hadoop.mapreduce.Counter
-
- getName() - Method in interface org.apache.hadoop.mapreduce.counters.CounterGroupBase
-
Get the internal name of the group
- getName() - Method in class org.apache.hadoop.record.meta.RecordTypeInfo
-
Deprecated.
return the name of the record
- getName() - Method in class org.apache.hadoop.service.AbstractService
-
- getName() - Method in interface org.apache.hadoop.service.Service
-
Get the name of this service.
- getName() - Method in class org.apache.hadoop.yarn.api.records.ApplicationReport
-
Get the user-defined name of the application.
- getNamed(String, Configuration) - Static method in class org.apache.hadoop.fs.FileSystem
-
Deprecated.
call #get(URI,Configuration) instead.
- getNamedOutputFormatClass(JobConf, String) - Static method in class org.apache.hadoop.mapred.lib.MultipleOutputs
-
Returns the named output OutputFormat.
- getNamedOutputKeyClass(JobConf, String) - Static method in class org.apache.hadoop.mapred.lib.MultipleOutputs
-
Returns the key class for a named output.
- getNamedOutputs() - Method in class org.apache.hadoop.mapred.lib.MultipleOutputs
-
Returns iterator with the defined name outputs.
- getNamedOutputsList(JobConf) - Static method in class org.apache.hadoop.mapred.lib.MultipleOutputs
-
Returns list of channel names.
- getNamedOutputValueClass(JobConf, String) - Static method in class org.apache.hadoop.mapred.lib.MultipleOutputs
-
Returns the value class for a named output.
- getNames() - Method in class org.apache.hadoop.fs.BlockLocation
-
Get the list of names (IP:xferPort) hosting this block
- getNeededMem() - Method in class org.apache.hadoop.mapreduce.JobStatus
-
- getNeededResources() - Method in class org.apache.hadoop.yarn.api.records.ApplicationResourceUsageReport
-
Get the needed Resource.
- getNestedStructTypeInfo(String) - Method in class org.apache.hadoop.record.meta.RecordTypeInfo
-
Deprecated.
Return the type info of a nested record.
- getNewApplication(GetNewApplicationRequest) - Method in interface org.apache.hadoop.yarn.api.ApplicationClientProtocol
-
The interface used by clients to obtain a new ApplicationId for submitting new applications.
- GetNewApplicationRequest - Class in org.apache.hadoop.yarn.api.protocolrecords
-
The request sent by clients to get a new ApplicationId for submitting an application.
- GetNewApplicationRequest() - Constructor for class org.apache.hadoop.yarn.api.protocolrecords.GetNewApplicationRequest
-
- GetNewApplicationResponse - Class in org.apache.hadoop.yarn.api.protocolrecords
-
The response sent by the ResourceManager to the client for a request to get a new ApplicationId for submitting applications.
- GetNewApplicationResponse() - Constructor for class org.apache.hadoop.yarn.api.protocolrecords.GetNewApplicationResponse
-
- getNewApplicationResponse() - Method in class org.apache.hadoop.yarn.client.api.YarnClientApplication
-
- getNextExpirationTime() - Method in interface org.apache.hadoop.mapreduce.v2.api.protocolrecords.RenewDelegationTokenResponse
-
- getNextToken() - Method in class org.apache.hadoop.record.compiler.generated.Rcc
-
Deprecated.
- getNextToken() - Method in class org.apache.hadoop.record.compiler.generated.RccTokenManager
-
Deprecated.
- getNmHostAddress() - Method in class org.apache.hadoop.yarn.security.ContainerTokenIdentifier
-
- getNMToken(String) - Static method in class org.apache.hadoop.yarn.client.api.NMTokenCache
-
Returns NMToken, null if absent.
- getNMTokenCache() - Method in class org.apache.hadoop.yarn.client.api.AMRMClient
-
Get the NM token cache of the AMRMClient.
- getNMTokenCache() - Method in class org.apache.hadoop.yarn.client.api.NMClient
-
Get the NM token cache of the NMClient.
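These two getters pair with matching setters; by default both clients fall back to the singleton cache returned by NMTokenCache.getSingleton(). A hedged sketch of wiring an AMRMClient and an NMClient to one shared, non-singleton cache:

    import org.apache.hadoop.yarn.client.api.AMRMClient;
    import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;
    import org.apache.hadoop.yarn.client.api.NMClient;
    import org.apache.hadoop.yarn.client.api.NMTokenCache;

    public class NMTokenCacheWiring {
      public static void main(String[] args) {
        // A private cache instead of NMTokenCache.getSingleton().
        NMTokenCache cache = new NMTokenCache();
        AMRMClient<ContainerRequest> rmClient = AMRMClient.createAMRMClient();
        NMClient nmClient = NMClient.createNMClient();
        rmClient.setNMTokenCache(cache);
        // The NMClient must see the same cache the AMRMClient populates.
        nmClient.setNMTokenCache(rmClient.getNMTokenCache());
      }
    }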
- getNMTokens() - Method in class org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse
-
Get the list of NMTokens required for communicating with NM.
- getNMTokensFromPreviousAttempts() - Method in class org.apache.hadoop.yarn.api.protocolrecords.RegisterApplicationMasterResponse
-
Get the list of NMTokens for communicating with the NMs where the
containers of previous application attempts are running.
- getNode() - Method in class org.apache.hadoop.mapred.join.Parser.NodeToken
-
- getNode() - Method in class org.apache.hadoop.mapred.join.Parser.Token
-
- getNode() - Method in class org.apache.hadoop.mapreduce.lib.join.Parser.NodeToken
-
- getNode() - Method in class org.apache.hadoop.mapreduce.lib.join.Parser.Token
-
- getNodeHttpAddress() - Method in class org.apache.hadoop.yarn.api.records.Container
-
Get the http uri of the node on which the container is allocated.
- getNodeHttpAddress() - Method in class org.apache.hadoop.yarn.api.records.ContainerReport
-
Get the Node Http address of the container
- getNodeId() - Method in class org.apache.hadoop.yarn.api.records.Container
-
Get the identifier of the node on which the container is allocated.
- getNodeId() - Method in class org.apache.hadoop.yarn.api.records.NMToken
-
Get the NodeId of the NodeManager for which the NMToken is used to authenticate.
- getNodeId() - Method in class org.apache.hadoop.yarn.api.records.NodeReport
-
Get the NodeId of the node.
- getNodeId() - Method in class org.apache.hadoop.yarn.security.NMTokenIdentifier
-
- getNodeLabelExpression() - Method in class org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext
-
Get node-label-expression for this app.
- getNodeLabelExpression() - Method in class org.apache.hadoop.yarn.api.records.ResourceRequest
-
Get node-label-expression for this Resource Request.
- getNodeLabels() - Method in class org.apache.hadoop.yarn.api.protocolrecords.GetClusterNodeLabelsResponse
-
- getNodeLabels() - Method in class org.apache.hadoop.yarn.api.records.NodeReport
-
Get labels of this node
- getNodeReports() - Method in class org.apache.hadoop.yarn.api.protocolrecords.GetClusterNodesResponse
-
Get NodeReport for all nodes in the cluster.
- getNodeReports(NodeState...) - Method in class org.apache.hadoop.yarn.client.api.YarnClient
-
Get a report of nodes (NodeReport) in the cluster.
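getNodeReports is normally called through a started YarnClient; passing NodeState values filters the result. A hedged sketch that lists running nodes (the printed fields are just an example selection):

    import java.util.List;
    import org.apache.hadoop.yarn.api.records.NodeReport;
    import org.apache.hadoop.yarn.api.records.NodeState;
    import org.apache.hadoop.yarn.client.api.YarnClient;
    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    public class NodeReportSketch {
      public static void main(String[] args) throws Exception {
        YarnClient yarn = YarnClient.createYarnClient();
        yarn.init(new YarnConfiguration());
        yarn.start();
        try {
          // With no arguments every node is returned; here we filter to RUNNING.
          List<NodeReport> nodes = yarn.getNodeReports(NodeState.RUNNING);
          for (NodeReport node : nodes) {
            System.out.println(node.getNodeId() + " " + node.getRackName()
                + " " + node.getNodeState());
          }
        } finally {
          yarn.stop();
        }
      }
    }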
- getNodeState() - Method in class org.apache.hadoop.yarn.api.records.NodeReport
-
Get the NodeState of the node.
- getNodeStates() - Method in class org.apache.hadoop.yarn.api.protocolrecords.GetClusterNodesRequest
-
The state to filter the cluster nodes with.
- getNodeToLabels(GetNodesToLabelsRequest) - Method in interface org.apache.hadoop.yarn.api.ApplicationClientProtocol
-
The interface used by clients to get node-to-labels mappings in the existing cluster.
- getNodeToLabels() - Method in class org.apache.hadoop.yarn.client.api.YarnClient
-
The interface used by clients to get node-to-labels mappings in the existing cluster.
- getNum() - Method in class org.apache.hadoop.mapred.join.Parser.NumToken
-
- getNum() - Method in class org.apache.hadoop.mapred.join.Parser.Token
-
- getNum() - Method in class org.apache.hadoop.mapreduce.lib.join.Parser.NumToken
-
- getNum() - Method in class org.apache.hadoop.mapreduce.lib.join.Parser.Token
-
- getNumber() - Method in class org.apache.hadoop.metrics.spi.MetricValue
-
- getNumberOfThreads(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.map.MultithreadedMapper
-
The number of threads in the thread pool that will run the map function.
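getNumberOfThreads reads back the thread count stored with setNumberOfThreads; the real map work is delegated to the mapper class configured through MultithreadedMapper.setMapperClass. A sketch under assumed key/value types, with a trivial illustrative mapper:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.map.MultithreadedMapper;

    public class MultithreadedMapperSketch {
      // A trivial identity-style mapper used only for illustration.
      public static class PassThroughMapper
          extends Mapper<LongWritable, Text, LongWritable, Text> { }

      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "multithreaded-sketch");
        job.setMapperClass(MultithreadedMapper.class);
        MultithreadedMapper.setMapperClass(job, PassThroughMapper.class);
        // Run eight copies of PassThroughMapper inside each map task.
        MultithreadedMapper.setNumberOfThreads(job, 8);
        System.out.println(MultithreadedMapper.getNumberOfThreads(job));
      }
    }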
- getNumClusterNodes() - Method in class org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse
-
Get the number of hosts available on the cluster.
- getNumContainers() - Method in class org.apache.hadoop.yarn.api.records.ReservationRequest
-
Get the number of containers required with the given specifications.
- getNumContainers() - Method in class org.apache.hadoop.yarn.api.records.ResourceRequest
-
Get the number of containers required with the given specifications.
- getNumExcludedNodes() - Method in class org.apache.hadoop.mapred.ClusterStatus
-
Get the number of excluded hosts in the cluster.
- getNumLinesPerSplit(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.input.NLineInputFormat
-
Get the number of lines per split
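getNumLinesPerSplit reads back what setNumLinesPerSplit stored in the job configuration. A small sketch with the new mapreduce API; the value 1000 is arbitrary:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.NLineInputFormat;

    public class NLineSketch {
      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "nline-sketch");
        job.setInputFormatClass(NLineInputFormat.class);
        // Each split will carry at most 1000 input lines.
        NLineInputFormat.setNumLinesPerSplit(job, 1000);
        System.out.println(NLineInputFormat.getNumLinesPerSplit(job));
      }
    }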
- getNumMapTasks() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the configured number of map tasks for this job.
- getNumNodeManagers() - Method in class org.apache.hadoop.yarn.api.records.YarnClusterMetrics
-
Get the number of NodeManagers in the cluster.
- getNumPaths() - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
-
Returns the number of Paths in the split
- getNumReduceTasks() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the configured number of reduce tasks for this job.
- getNumReduceTasks() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the configured number of reduce tasks for this job.
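For context, the reduce-task count is set on the job and read back through these getters, while the map-task count is only a hint since the input splits ultimately decide it. A minimal sketch showing both APIs (the counts 10 and 4 are arbitrary):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapreduce.Job;

    public class TaskCountSketch {
      public static void main(String[] args) throws Exception {
        // Old API: JobConf carries both counts (the map count is a hint only).
        JobConf conf = new JobConf();
        conf.setNumMapTasks(10);
        conf.setNumReduceTasks(4);
        System.out.println(conf.getNumMapTasks() + "/" + conf.getNumReduceTasks());

        // New API: Job exposes the reduce count via JobContext.getNumReduceTasks().
        Job job = Job.getInstance(new Configuration(), "task-count-sketch");
        job.setNumReduceTasks(4);
        System.out.println(job.getNumReduceTasks());
      }
    }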
- getNumReservedSlots() - Method in class org.apache.hadoop.mapreduce.JobStatus
-
- getNumTasksToExecutePerJvm() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the number of tasks that a spawned JVM should execute
- getNumUsedContainers() - Method in class org.apache.hadoop.yarn.api.records.ApplicationResourceUsageReport
-
Get the number of used containers.
- getNumUsedSlots() - Method in class org.apache.hadoop.mapreduce.JobStatus
-
- getOccupiedMapSlots() - Method in class org.apache.hadoop.mapreduce.ClusterMetrics
-
Get number of occupied map slots in the cluster.
- getOccupiedReduceSlots() - Method in class org.apache.hadoop.mapreduce.ClusterMetrics
-
Get the number of occupied reduce slots in the cluster.
- getOffset() - Method in class org.apache.hadoop.fs.BlockLocation
-
Get the start offset of the file associated with this block.
- getOffset(int) - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
-
Returns the start offset of the ith Path
- getOperations() - Method in class org.apache.hadoop.mapreduce.QueueAclsInfo
-
Get operations allowed on the queue.
- getOriginalTrackingUrl() - Method in class org.apache.hadoop.yarn.api.records.ApplicationAttemptReport
-
Get the original tracking url for the application attempt.
- getOtherAction() - Method in class org.apache.hadoop.fs.permission.FsPermission
-
- getOtherInfo() - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEntity
-
Get the other information of the entity
- getOutputCommitter() - Method in class org.apache.hadoop.mapred.JobConf
-
- getOutputCommitter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.db.DBOutputFormat
-
- getOutputCommitter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
- getOutputCommitter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FilterOutputFormat
-
- getOutputCommitter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.LazyOutputFormat
-
- getOutputCommitter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.NullOutputFormat
-
- getOutputCommitter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.OutputFormat
-
Get the output committer for this output format.
- getOutputCommitter() - Method in interface org.apache.hadoop.mapreduce.TaskInputOutputContext
-
- getOutputCompressionType(JobConf) - Static method in class org.apache.hadoop.mapred.SequenceFileOutputFormat
-
Get the SequenceFile.CompressionType for the output SequenceFile.
- getOutputCompressionType(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat
-
Get the SequenceFile.CompressionType for the output SequenceFile.
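These compression-type getters return whatever was set on the job, e.g. via SequenceFileOutputFormat.setOutputCompressionType in the new API. A sketch, assuming BLOCK compression is wanted:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

    public class SeqFileCompressionSketch {
      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "seqfile-compression-sketch");
        job.setOutputFormatClass(SequenceFileOutputFormat.class);
        FileOutputFormat.setCompressOutput(job, true);
        SequenceFileOutputFormat.setOutputCompressionType(
            job, SequenceFile.CompressionType.BLOCK);
        // Reads the configured type back from the job configuration.
        System.out.println(SequenceFileOutputFormat.getOutputCompressionType(job));
      }
    }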
- getOutputCompressorClass(JobConf, Class<? extends CompressionCodec>) - Static method in class org.apache.hadoop.mapred.FileOutputFormat
-
- getOutputCompressorClass(JobContext, Class<? extends CompressionCodec>) - Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
- getOutputFieldCount() - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- getOutputFieldNames() - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- getOutputFormat() - Method in class org.apache.hadoop.mapred.JobConf
-
- getOutputFormatClass() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
- getOutputKeyClass() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the key class for the job output data.
- getOutputKeyClass() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the key class for the job output data.
- getOutputKeyComparator() - Method in class org.apache.hadoop.mapred.JobConf
-
- getOutputName(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
Get the base output name for the output file.
- getOutputPath(JobConf) - Static method in class org.apache.hadoop.mapred.FileOutputFormat
-
Get the Path to the output directory for the map-reduce job.
- getOutputPath(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
Get the Path to the output directory for the map-reduce job.
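getOutputPath simply returns the Path previously registered with setOutputPath on the job. A short sketch with the new API; the path /tmp/out is an arbitrary example:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class OutputPathSketch {
      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "output-path-sketch");
        FileOutputFormat.setOutputPath(job, new Path("/tmp/out"));
        // Reads the configured output directory back as a Path.
        Path out = FileOutputFormat.getOutputPath(job);
        System.out.println(out);
      }
    }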
- getOutputTableName() - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- getOutputValueClass() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the value class for job outputs.
- getOutputValueClass() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the value class for job outputs.
- getOutputValueGroupingComparator() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the user defined WritableComparable comparator for grouping keys of inputs to the reduce.
- getOwner() - Method in class org.apache.hadoop.fs.FileStatus
-
Get the owner of the file.
- getOwner() - Method in class org.apache.hadoop.fs.permission.AclStatus
-
Returns the file owner.
- getOwner() - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineDomain
-
Get the domain owner
- getParent() - Method in class org.apache.hadoop.fs.Path
-
Returns the parent of a path or null if at root.
- getPartialJobs(Long, Long, String, String, Long, Long, Long, Long, JobState) - Method in interface org.apache.hadoop.mapreduce.v2.hs.HistoryStorage
-
Look for a set of partial jobs.
- getPartition(K2, V2, int) - Method in class org.apache.hadoop.mapred.lib.HashPartitioner
-
- getPartition(K2, V2, int) - Method in interface org.apache.hadoop.mapred.Partitioner
-
Get the partition number for a given key (hence record) given the total number of partitions i.e.
- getPartition(BinaryComparable, V, int) - Method in class org.apache.hadoop.mapreduce.lib.partition.BinaryPartitioner
-
- getPartition(K, V, int) - Method in class org.apache.hadoop.mapreduce.lib.partition.HashPartitioner
-
- getPartition(K2, V2, int) - Method in class org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedPartitioner
-
- getPartition(int, int) - Method in class org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedPartitioner
-
- getPartition(K, V, int) - Method in class org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner
-
- getPartition(KEY, VALUE, int) - Method in class org.apache.hadoop.mapreduce.Partitioner
-
Get the partition number for a given key (hence record) given the total
number of partitions i.e.
- getPartitionerClass() - Method in class org.apache.hadoop.mapred.JobConf
-
- getPartitionerClass() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
- getPartitionFile(JobConf) - Static method in class org.apache.hadoop.mapred.lib.TotalOrderPartitioner
-
- getPartitionFile(Configuration) - Static method in class org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner
-
Get the path to the SequenceFile storing the sorted partition keyset.
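getPartitionFile returns whatever path setPartitionFile recorded; the partition file itself is typically produced by a sampler before the job runs. A minimal sketch of the set/get pair (the path is an arbitrary example):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner;

    public class PartitionFileSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Point the partitioner at a SequenceFile holding the sorted split points.
        TotalOrderPartitioner.setPartitionFile(conf, new Path("/tmp/_partitions.lst"));
        String partitionFile = TotalOrderPartitioner.getPartitionFile(conf);
        System.out.println(partitionFile);
      }
    }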
- getPassword(String) - Method in class org.apache.hadoop.conf.Configuration
-
Get the value for a known password configuration element.
- getPassword() - Method in class org.apache.hadoop.yarn.api.records.Token
-
Get the token password
- getPasswordFromConfig(String) - Method in class org.apache.hadoop.conf.Configuration
-
Fallback to clear text passwords in configuration.
- getPasswordFromCredentialProviders(String) - Method in class org.apache.hadoop.conf.Configuration
-
Try and resolve the provided element name as a credential provider
alias.
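Taken together, getPassword consults the configured credential providers first (getPasswordFromCredentialProviders) and only then falls back to the clear-text configuration value (getPasswordFromConfig). A hedged sketch; the provider path and the property name ssl.server.keystore.password are illustrative, not taken from this index:

    import java.io.IOException;
    import java.util.Arrays;
    import org.apache.hadoop.conf.Configuration;

    public class PasswordLookupSketch {
      public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        // Optional: point at a credential provider; otherwise the clear-text
        // value of the property (if any) is used as a fallback.
        conf.set("hadoop.security.credential.provider.path",
            "jceks://file/tmp/creds.jceks");  // example path only
        char[] password = conf.getPassword("ssl.server.keystore.password");
        if (password != null) {
          // Use and then clear the char[] rather than converting it to a String.
          Arrays.fill(password, '\0');
        }
      }
    }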
- getPath() - Method in class org.apache.hadoop.fs.FileStatus
-
- getPath() - Method in class org.apache.hadoop.mapred.FileSplit
-
The file containing this split's data.
- getPath(int) - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
-
Returns the ith Path
- getPath() - Method in class org.apache.hadoop.mapreduce.lib.input.FileSplit
-
The file containing this split's data.
- getPath() - Method in class org.apache.hadoop.yarn.api.protocolrecords.UseSharedCacheResourceResponse
-
Get the Path corresponding to the requested resource in the shared cache.
- getPathForCustomFile(JobConf, String) - Static method in class org.apache.hadoop.mapred.FileOutputFormat
-
Helper function to generate a Path for a file that is unique for the task within the job output directory.
- getPathForWorkFile(TaskInputOutputContext<?, ?, ?, ?>, String, String) - Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
Helper function to generate a Path for a file that is unique for the task within the job output directory.
- getPathNameWarning() - Method in class org.apache.hadoop.fs.azure.WasbFsck
-
- getPaths() - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
-
Returns all the Paths in the split
- getPathWithoutSchemeAndAuthority(Path) - Static method in class org.apache.hadoop.fs.Path
-
- getPattern(String, Pattern) - Method in class org.apache.hadoop.conf.Configuration
-
Get the value of the name property as a Pattern.
- getPattern() - Method in class org.apache.hadoop.yarn.api.records.LocalResource
-
Get the pattern that should be used to extract entries from the archive (only used when type is PATTERN).
- getPeriod() - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
-
Returns the timer period.
- getPermission() - Method in class org.apache.hadoop.fs.FileStatus
-
Get FsPermission associated with the file.
- getPermission() - Method in class org.apache.hadoop.fs.permission.AclEntry
-
Returns the set of permissions in the ACL entry.
- getPermission() - Method in class org.apache.hadoop.fs.permission.AclStatus
-
Returns the permission set for the path
- getPort() - Method in class org.apache.hadoop.yarn.api.records.NodeId
-
Get the port for communicating with the node.
- getPort() - Method in class org.apache.hadoop.yarn.api.records.URL
-
Get the port of the URL.
- getPos() - Method in exception org.apache.hadoop.fs.ChecksumException
-
- getPos() - Method in class org.apache.hadoop.fs.FSDataInputStream
-
Get the current position in the input stream.
- getPos() - Method in class org.apache.hadoop.fs.FSDataOutputStream
-
Get the current position in the output stream.
- getPos() - Method in interface org.apache.hadoop.fs.Seekable
-
Return the current offset from the start of the file
- getPos() - Method in class org.apache.hadoop.io.compress.CompressionInputStream
-
This method returns the current position in the stream.
- getPos() - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
-
Unsupported (returns zero in all cases).
- getPos() - Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
-
Request position from proxied RR.
- getPos() - Method in class org.apache.hadoop.mapred.KeyValueLineRecordReader
-
- getPos() - Method in class org.apache.hadoop.mapred.lib.CombineFileRecordReader
-
return the amount of data processed
- getPos() - Method in class org.apache.hadoop.mapred.lib.CombineFileRecordReaderWrapper
-
- getPos() - Method in interface org.apache.hadoop.mapred.RecordReader
-
Returns the current position in the input.
- getPos() - Method in class org.apache.hadoop.mapred.SequenceFileAsTextRecordReader
-
- getPos() - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
-
- getPos() - Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
Deprecated.
- getPreemptionMessage() - Method in class org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse
-
Get the description of containers owned by the AM, but requested back by
the cluster.
- getPrimaryFilters() - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEntity
-
Get the primary filters
- getPriority() - Method in class org.apache.hadoop.mapreduce.Job
-
Get the priority of the job.
- getPriority() - Method in class org.apache.hadoop.mapreduce.JobStatus
-
Return the priority of the job
- getPriority() - Method in class org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext
-
Get the Priority of the application.
- getPriority() - Method in class org.apache.hadoop.yarn.api.records.Container
-
Get the Priority at which the Container was allocated.
- getPriority() - Method in class org.apache.hadoop.yarn.api.records.ContainerReport
-
Get the allocated Priority of the container.
- getPriority() - Method in class org.apache.hadoop.yarn.api.records.Priority
-
Get the assigned priority
- getPriority() - Method in class org.apache.hadoop.yarn.api.records.ResourceRequest
-
Get the Priority of the request.
- getPriority() - Method in class org.apache.hadoop.yarn.security.ContainerTokenIdentifier
-
- getProblems() - Method in exception org.apache.hadoop.mapred.InvalidInputException
-
Get the complete list of the problems reported.
- getProblems() - Method in exception org.apache.hadoop.mapreduce.lib.input.InvalidInputException
-
Get the complete list of the problems reported.
- getProcessTreeDump() - Method in class org.apache.hadoop.yarn.util.ResourceCalculatorProcessTree
-
Get a dump of the process-tree.
- getProfileEnabled() - Method in class org.apache.hadoop.mapred.JobConf
-
Get whether the task profiling is enabled.
- getProfileEnabled() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get whether the task profiling is enabled.
- getProfileParams() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the profiler configuration arguments.
- getProfileParams() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the profiler configuration arguments.
- getProfileTaskRange(boolean) - Method in class org.apache.hadoop.mapred.JobConf
-
Get the range of maps or reduces to profile.
- getProfileTaskRange(boolean) - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the range of maps or reduces to profile.
- getProgress() - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
-
Report progress as the minimum of all child RR progress.
- getProgress() - Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
-
Request progress from proxied RR.
- getProgress() - Method in class org.apache.hadoop.mapred.KeyValueLineRecordReader
-
- getProgress() - Method in class org.apache.hadoop.mapred.lib.CombineFileRecordReader
-
return progress based on the amount of data processed so far.
- getProgress() - Method in class org.apache.hadoop.mapred.lib.CombineFileRecordReaderWrapper
-
- getProgress() - Method in interface org.apache.hadoop.mapred.RecordReader
-
- getProgress() - Method in interface org.apache.hadoop.mapred.Reporter
-
Get the progress of the task.
- getProgress() - Method in class org.apache.hadoop.mapred.SequenceFileAsTextRecordReader
-
- getProgress() - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
-
Return the progress within the input split
- getProgress() - Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
The current progress of the record reader through its data.
- getProgress() - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader
-
return progress based on the amount of data processed so far.
- getProgress() - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReaderWrapper
-
- getProgress() - Method in class org.apache.hadoop.mapreduce.lib.input.KeyValueLineRecordReader
-
- getProgress() - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileAsTextRecordReader
-
- getProgress() - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader
-
Return the progress within the input split
- getProgress() - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeRecordReader
-
Report progress as the minimum of all child RR progress.
- getProgress() - Method in class org.apache.hadoop.mapreduce.lib.join.WrappedRecordReader
-
Request progress from proxied RR.
- getProgress() - Method in class org.apache.hadoop.mapreduce.RecordReader
-
The current progress of the record reader through its data.
- getProgress() - Method in interface org.apache.hadoop.mapreduce.TaskAttemptContext
-
The current progress of the task attempt.
- getProgress() - Method in class org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest
-
Get the current progress of application.
- getProgress() - Method in class org.apache.hadoop.yarn.api.records.ApplicationReport
-
Get the application's progress (range 0.0 to 1.0).
- getProgressible() - Method in interface org.apache.hadoop.mapred.JobContext
-
Get the progress mechanism for reporting progress.
- getProgressible() - Method in interface org.apache.hadoop.mapred.TaskAttemptContext
-
- getProgressPollInterval(Configuration) - Static method in class org.apache.hadoop.mapreduce.Job
-
The interval at which monitorAndPrintJob() prints status
- getProperties() - Method in class org.apache.hadoop.mapreduce.QueueInfo
-
Get properties.
- getPropertySources(String) - Method in class org.apache.hadoop.conf.Configuration
-
Gets information about why a property was set.
- getProps() - Method in class org.apache.hadoop.conf.Configuration
-
- getProto() - Method in class org.apache.hadoop.yarn.security.AMRMTokenIdentifier
-
- getProto() - Method in class org.apache.hadoop.yarn.security.client.ClientToAMTokenIdentifier
-
- getProto() - Method in class org.apache.hadoop.yarn.security.ContainerTokenIdentifier
-
- getProto() - Method in class org.apache.hadoop.yarn.security.NMTokenIdentifier
-
- getProviders(Configuration) - Static method in class org.apache.hadoop.crypto.key.KeyProviderFactory
-
- getProviders(Configuration) - Static method in class org.apache.hadoop.security.alias.CredentialProviderFactory
-
- getProxy(Configuration, int) - Method in class org.apache.hadoop.ha.HAServiceTarget
-
- getProxy(Configuration, Class<T>, InetSocketAddress) - Static method in class org.apache.hadoop.yarn.client.AHSProxy
-
- getProxyGroups() - Method in class org.apache.hadoop.security.authorize.DefaultImpersonationProvider
-
- getProxyHosts() - Method in class org.apache.hadoop.security.authorize.DefaultImpersonationProvider
-
- getProxySuperuserGroupConfKey(String) - Method in class org.apache.hadoop.security.authorize.DefaultImpersonationProvider
-
Returns configuration key for effective groups allowed for a superuser
- getProxySuperuserIpConfKey(String) - Method in class org.apache.hadoop.security.authorize.DefaultImpersonationProvider
-
Return configuration key for superuser ip addresses
- getProxySuperuserUserConfKey(String) - Method in class org.apache.hadoop.security.authorize.DefaultImpersonationProvider
-
Returns configuration key for effective usergroups allowed for a superuser
- getQueue(String) - Method in class org.apache.hadoop.mapreduce.Cluster
-
Get queue information for the specified name.
- getQueue() - Method in class org.apache.hadoop.mapreduce.JobStatus
-
Get queue name
- getQueue() - Method in class org.apache.hadoop.yarn.api.protocolrecords.RegisterApplicationMasterResponse
-
Get the queue that the application was placed in.
- getQueue() - Method in class org.apache.hadoop.yarn.api.protocolrecords.ReservationSubmissionRequest
-
Get the name of the Plan that corresponds to the name of the QueueInfo in the scheduler to which the reservation will be submitted.
- getQueue() - Method in class org.apache.hadoop.yarn.api.records.ApplicationReport
-
Get the queue to which the application was submitted.
- getQueue() - Method in class org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext
-
Get the queue to which the application is being submitted.
- getQueueAclsForCurrentUser() - Method in class org.apache.hadoop.mapred.JobClient
-
Gets the Queue ACLs for current user
- getQueueAclsForCurrentUser() - Method in class org.apache.hadoop.mapreduce.Cluster
-
Gets the Queue ACLs for current user
- getQueueAclsInfo() - Method in class org.apache.hadoop.yarn.client.api.YarnClient
-
Get information about acls for current user on all the
existing queues.
- getQueueChildren() - Method in class org.apache.hadoop.mapreduce.QueueInfo
-
Get immediate children.
- getQueueInfo(String) - Method in class org.apache.hadoop.mapred.JobClient
-
Gets the queue information associated to a particular Job Queue
- getQueueInfo(GetQueueInfoRequest) - Method in interface org.apache.hadoop.yarn.api.ApplicationClientProtocol
-
The interface used by clients to get information about queues from the ResourceManager.
- getQueueInfo() - Method in class org.apache.hadoop.yarn.api.protocolrecords.GetQueueInfoResponse
-
Get the QueueInfo for the specified queue.
- getQueueInfo(String) - Method in class org.apache.hadoop.yarn.client.api.YarnClient
-
Get information (QueueInfo) about a given queue.
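The queue-related getters above are usually reached through YarnClient as well. A hedged sketch that asks for one queue and then the root queues; the queue name "default" is just the usual example:

    import org.apache.hadoop.yarn.api.records.QueueInfo;
    import org.apache.hadoop.yarn.client.api.YarnClient;
    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    public class QueueInfoSketch {
      public static void main(String[] args) throws Exception {
        YarnClient yarn = YarnClient.createYarnClient();
        yarn.init(new YarnConfiguration());
        yarn.start();
        try {
          // Single queue by name, then the top-level queue hierarchy.
          QueueInfo queue = yarn.getQueueInfo("default");
          System.out.println(queue.getQueueName() + " " + queue.getQueueState());
          for (QueueInfo root : yarn.getRootQueueInfos()) {
            System.out.println("root queue: " + root.getQueueName());
          }
        } finally {
          yarn.stop();
        }
      }
    }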
- GetQueueInfoRequest - Class in org.apache.hadoop.yarn.api.protocolrecords
-
The request sent by clients to get queue information from the ResourceManager.
- GetQueueInfoRequest() - Constructor for class org.apache.hadoop.yarn.api.protocolrecords.GetQueueInfoRequest
-
- GetQueueInfoResponse - Class in org.apache.hadoop.yarn.api.protocolrecords
-
The response sent by the ResourceManager to a client requesting information about queues in the system.
- GetQueueInfoResponse() - Constructor for class org.apache.hadoop.yarn.api.protocolrecords.GetQueueInfoResponse
-
- getQueueName() - Method in class org.apache.hadoop.mapred.JobConf
-
Return the name of the queue to which this job is submitted.
- getQueueName() - Method in class org.apache.hadoop.mapreduce.QueueAclsInfo
-
Get queue name.
- getQueueName() - Method in class org.apache.hadoop.mapreduce.QueueInfo
-
Get the queue name from JobQueueInfo
- getQueueName() - Method in class org.apache.hadoop.yarn.api.protocolrecords.GetQueueInfoRequest
-
Get the queue name for which to get queue information.
- getQueueName() - Method in class org.apache.hadoop.yarn.api.records.QueueInfo
-
Get the name of the queue.
- getQueueName() - Method in class org.apache.hadoop.yarn.api.records.QueueUserACLInfo
-
Get the queue name of the queue.
- getQueues() - Method in class org.apache.hadoop.mapred.JobClient
-
Return an array of queue information objects about all the Job Queues
configured.
- getQueues() - Method in class org.apache.hadoop.mapreduce.Cluster
-
Get all the queues in cluster.
- getQueueState() - Method in class org.apache.hadoop.mapred.JobQueueInfo
-
Deprecated.
- getQueueState() - Method in class org.apache.hadoop.yarn.api.records.QueueInfo
-
Get the QueueState of the queue.
- getQueueUserAcls(GetQueueUserAclsInfoRequest) - Method in interface org.apache.hadoop.yarn.api.ApplicationClientProtocol
-
The interface used by clients to get information about queue acls for the current user from the ResourceManager.
- GetQueueUserAclsInfoRequest - Class in org.apache.hadoop.yarn.api.protocolrecords
-
The request sent by clients to the ResourceManager to get queue acls for the current user.
- GetQueueUserAclsInfoRequest() - Constructor for class org.apache.hadoop.yarn.api.protocolrecords.GetQueueUserAclsInfoRequest
-
- GetQueueUserAclsInfoResponse - Class in org.apache.hadoop.yarn.api.protocolrecords
-
The response sent by the ResourceManager to clients seeking queue acls for the user.
- GetQueueUserAclsInfoResponse() - Constructor for class org.apache.hadoop.yarn.api.protocolrecords.GetQueueUserAclsInfoResponse
-
- getQuota() - Method in class org.apache.hadoop.fs.ContentSummary
-
Return the directory quota
- getRackName() - Method in class org.apache.hadoop.yarn.api.records.NodeReport
-
Get the rack name for the node.
- getRange(String, String) - Method in class org.apache.hadoop.conf.Configuration
-
Parse the given attribute as a set of integer ranges
- getRaw(String) - Method in class org.apache.hadoop.conf.Configuration
-
Get the value of the name property, without doing variable expansion. If the key is deprecated, it returns the value of the first key which replaces the deprecated key and is not null.
- getRaw() - Method in class org.apache.hadoop.fs.LocalFileSystem
-
- getRawFileSystem() - Method in class org.apache.hadoop.fs.ChecksumFileSystem
-
get the raw file system
- getRawFileSystem() - Method in class org.apache.hadoop.fs.FilterFileSystem
-
Get the raw file system
- getReaders(FileSystem, Path, Configuration) - Static method in class org.apache.hadoop.mapred.MapFileOutputFormat
-
Open the output generated by this format.
- getReaders(Configuration, Path) - Static method in class org.apache.hadoop.mapred.SequenceFileOutputFormat
-
Open the output generated by this format.
- getReaders(Path, Configuration) - Static method in class org.apache.hadoop.mapreduce.lib.output.MapFileOutputFormat
-
Open the output generated by this format.
- getReaders() - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineDomain
-
Get the reader (and/or reader group) list string
- getReadyJobs() - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
-
- getReadyJobsList() - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl
-
- getReasonForBlacklist() - Method in class org.apache.hadoop.mapreduce.TaskTrackerInfo
-
Gets the reason for which the tasktracker was blacklisted.
- getRecordLength(Configuration) - Static method in class org.apache.hadoop.mapred.FixedLengthInputFormat
-
Get record length value
- getRecordLength(Configuration) - Static method in class org.apache.hadoop.mapreduce.lib.input.FixedLengthInputFormat
-
Get record length value
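Both variants read the record length stored in the configuration; with the new API it is set via FixedLengthInputFormat.setRecordLength. A minimal sketch (the length 100 is arbitrary):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.lib.input.FixedLengthInputFormat;

    public class FixedLengthSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Every record in the input is exactly 100 bytes long.
        FixedLengthInputFormat.setRecordLength(conf, 100);
        System.out.println(FixedLengthInputFormat.getRecordLength(conf));
      }
    }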
- getRecordName() - Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
-
Returns the record name.
- getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.FileInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.FixedLengthInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) - Method in interface org.apache.hadoop.mapred.InputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) - Method in interface org.apache.hadoop.mapred.join.ComposableInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.join.CompositeInputFormat
-
Construct a CompositeRecordReader for the children of this InputFormat
as defined in the init expression.
- getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.KeyValueTextInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.lib.CombineFileInputFormat
-
This is not implemented yet.
- getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.lib.CombineSequenceFileInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.lib.CombineTextInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.lib.db.DBInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.lib.NLineInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.MultiFileInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.SequenceFileAsTextInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.SequenceFileInputFilter
-
Create a record reader for the given split
- getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.SequenceFileInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.TextInputFormat
-
- getRecordReaderQueue() - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
-
Return sorted list of RecordReaders for this composite.
- getRecordReaderQueue() - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeRecordReader
-
Return sorted list of RecordReaders for this composite.
- getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.FileOutputFormat
-
- getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.lib.db.DBOutputFormat
-
- getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.lib.FilterOutputFormat
-
- getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.lib.LazyOutputFormat
-
- getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.lib.MultipleOutputFormat
-
Create a composite record writer that can write key/value data to different
output files
- getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.lib.NullOutputFormat
-
- getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.MapFileOutputFormat
-
- getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in interface org.apache.hadoop.mapred.OutputFormat
-
- getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat
-
- getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.SequenceFileOutputFormat
-
- getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.TextOutputFormat
-
- getRecordWriter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.db.DBOutputFormat
-
- getRecordWriter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
- getRecordWriter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FilterOutputFormat
-
- getRecordWriter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.LazyOutputFormat
-
- getRecordWriter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.MapFileOutputFormat
-
- getRecordWriter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.NullOutputFormat
-
- getRecordWriter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.SequenceFileAsBinaryOutputFormat
-
- getRecordWriter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat
-
- getRecordWriter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.TextOutputFormat
-
- getRecordWriter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.OutputFormat
-
- getRecursive() - Method in class org.apache.hadoop.yarn.api.protocolrecords.GetQueueInfoRequest
-
Is information on the entire child queue hierarchy required?
- getReduceDebugScript() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the reduce task's debug Script
- getReduceProgress() - Method in class org.apache.hadoop.mapreduce.JobStatus
-
- getReducerClass() - Method in class org.apache.hadoop.mapred.JobConf
-
- getReducerClass() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
- getReducerContext(ReduceContext<KEYIN, VALUEIN, KEYOUT, VALUEOUT>) - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer
-
A wrapped Reducer.Context for custom implementations.
- getReducerMaxSkipGroups(Configuration) - Static method in class org.apache.hadoop.mapred.SkipBadRecords
-
Get the number of acceptable skip groups surrounding the bad group PER
bad group in reducer.
- getReduceSlotCapacity() - Method in class org.apache.hadoop.mapreduce.ClusterMetrics
-
Get the total number of reduce slots in the cluster.
- getReduceSpeculativeExecution() - Method in class org.apache.hadoop.mapred.JobConf
-
Should speculative execution be used for this job for reduce tasks? Defaults to true.
- getReduceTaskReports(JobID) - Method in class org.apache.hadoop.mapred.JobClient
-
Get the information of the current state of the reduce tasks of a job.
- getReduceTaskReports(String) - Method in class org.apache.hadoop.mapred.JobClient
-
- getReduceTasks() - Method in class org.apache.hadoop.mapred.ClusterStatus
-
Get the number of currently running reduce tasks in the cluster.
- getRelatedEntities() - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEntity
-
Get the related entities
- getRelaxLocality() - Method in class org.apache.hadoop.yarn.api.records.ResourceRequest
-
Get whether locality relaxation is enabled with this ResourceRequest.
- getReleaseList() - Method in class org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest
-
Get the list of ContainerId of containers being released by the ApplicationMaster.
- getRemaining() - Method in class org.apache.hadoop.fs.FsStatus
-
Return the number of remaining bytes on the file system
- getRemaining() - Method in interface org.apache.hadoop.io.compress.Decompressor
-
Returns the number of bytes remaining in the compressed data buffer.
- getRenewer() - Method in interface org.apache.hadoop.mapreduce.v2.api.protocolrecords.GetDelegationTokenRequest
-
- getRenewer() - Method in class org.apache.hadoop.yarn.api.protocolrecords.GetDelegationTokenRequest
-
- getReplication() - Method in class org.apache.hadoop.fs.FileStatus
-
Get the replication factor of a file.
- getReplication(Path) - Method in class org.apache.hadoop.fs.FileSystem
-
Deprecated.
Use getFileStatus() instead
- getReplication() - Method in class org.apache.hadoop.fs.FsServerDefaults
-
- getReport() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.DoubleValueSum
-
- getReport() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.LongValueMax
-
- getReport() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.LongValueMin
-
- getReport() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.LongValueSum
-
- getReport() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.StringValueMax
-
- getReport() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.StringValueMin
-
- getReport() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.UniqValueCount
-
- getReport() - Method in interface org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregator
-
- getReport() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.ValueHistogram
-
- getReportDetails() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.ValueHistogram
-
- getReportItems() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.ValueHistogram
-
- getRepresentingCharacter(TaskType) - Static method in class org.apache.hadoop.mapreduce.TaskID
-
Gets the character representing the TaskType.
- getReservationDefinition() - Method in class org.apache.hadoop.yarn.api.protocolrecords.ReservationSubmissionRequest
-
- getReservationDefinition() - Method in class org.apache.hadoop.yarn.api.protocolrecords.ReservationUpdateRequest
-
- getReservationId() - Method in class org.apache.hadoop.mapreduce.Job
-
Get the reservation to which the job is submitted, if any.
- getReservationId() - Method in class org.apache.hadoop.yarn.api.protocolrecords.ReservationDeleteRequest
-
Get the ReservationId that corresponds to a valid resource allocation in the scheduler (between start and end time of this reservation).
- getReservationId() - Method in class org.apache.hadoop.yarn.api.protocolrecords.ReservationSubmissionResponse
-
Get the ReservationId that corresponds to a valid resource allocation in the scheduler (between start and end time of this reservation).
- getReservationId() - Method in class org.apache.hadoop.yarn.api.protocolrecords.ReservationUpdateRequest
-
Get the ReservationId that corresponds to a valid resource allocation in the scheduler (between start and end time of this reservation).
- getReservationID() - Method in class org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext
-
Get the reservation id, that corresponds to a valid resource allocation in
the scheduler (between start and end time of the corresponding reservation)
- getReservationName() - Method in class org.apache.hadoop.yarn.api.records.ReservationDefinition
-
Get the name for this reservation.
- getReservationRequests() - Method in class org.apache.hadoop.yarn.api.records.ReservationDefinition
-
- getReservationResources() - Method in class org.apache.hadoop.yarn.api.records.ReservationRequests
-
- getReservedMapSlots() - Method in class org.apache.hadoop.mapreduce.ClusterMetrics
-
Get number of reserved map slots in the cluster.
- getReservedMem() - Method in class org.apache.hadoop.mapreduce.JobStatus
-
- getReservedReduceSlots() - Method in class org.apache.hadoop.mapreduce.ClusterMetrics
-
Get the number of reserved reduce slots in the cluster.
- getReservedResources() - Method in class org.apache.hadoop.yarn.api.records.ApplicationResourceUsageReport
-
Get the reserved Resource.
- getResource(String) - Method in class org.apache.hadoop.conf.Configuration
-
Get the URL for the named resource.
- getResource(String) - Method in class org.apache.hadoop.util.ApplicationClassLoader
-
- getResource() - Method in class org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext
-
Get the resource required by the ApplicationMaster for this application.
- getResource() - Method in class org.apache.hadoop.yarn.api.records.Container
-
Get the Resource allocated to the container.
- getResource() - Method in class org.apache.hadoop.yarn.api.records.LocalResource
-
Get the location of the resource to be localized.
- getResource() - Method in class org.apache.hadoop.yarn.security.ContainerTokenIdentifier
-
- getResourceBlacklistRequest() - Method in class org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest
-
Get the ResourceBlacklistRequest being sent by the ApplicationMaster.
- getResourceCalculatorProcessTree(String, Class<? extends ResourceCalculatorProcessTree>, Configuration) - Static method in class org.apache.hadoop.yarn.util.ResourceCalculatorProcessTree
-
Create the ResourceCalculatorProcessTree rooted to specified process
from the class name and configure it.
- getResourceKey() - Method in class org.apache.hadoop.yarn.api.protocolrecords.ReleaseSharedCacheResourceRequest
-
Get the key of the resource to be released.
- getResourceKey() - Method in class org.apache.hadoop.yarn.api.protocolrecords.UseSharedCacheResourceRequest
-
Get the key of the resource to be used.
- getResourceName() - Method in class org.apache.hadoop.yarn.api.records.ResourceRequest
-
Get the resource (e.g.
- getResourceRequest() - Method in class org.apache.hadoop.yarn.api.records.PreemptionContract
-
- getResourceRequest() - Method in class org.apache.hadoop.yarn.api.records.PreemptionResourceRequest
-
- getResponseId() - Method in class org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest
-
Get the response id used to track duplicate responses.
- getResponseId() - Method in class org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse
-
Get the last response id.
- getRMDelegationToken() - Method in class org.apache.hadoop.yarn.api.protocolrecords.GetDelegationTokenResponse
-
The delegation tokens have an identifier which maps to AbstractDelegationTokenIdentifier.
- getRMDelegationToken(Text) - Method in class org.apache.hadoop.yarn.client.api.YarnClient
-
Get a delegation token so as to be able to talk to YARN using those tokens.
- getRMDelegationTokenService(Configuration) - Static method in class org.apache.hadoop.yarn.client.ClientRMProxy
-
Get the token service name to be used for RMDelegationToken.
- getRMIdentifier() - Method in class org.apache.hadoop.yarn.security.ContainerTokenIdentifier
-
Get the RMIdentifier of RM in which containers are allocated
- getRolledLogsExcludePattern() - Method in class org.apache.hadoop.yarn.api.records.LogAggregationContext
-
Get exclude pattern for aggregation in a rolling fashion.
- getRolledLogsIncludePattern() - Method in class org.apache.hadoop.yarn.api.records.LogAggregationContext
-
Get include pattern in a rolling fashion.
- getRootQueueInfos() - Method in class org.apache.hadoop.yarn.client.api.YarnClient
-
Get information (QueueInfo) about top level queues.
- getRootQueues() - Method in class org.apache.hadoop.mapred.JobClient
-
Returns an array of queue information objects about root level queues
configured
- getRootQueues() - Method in class org.apache.hadoop.mapreduce.Cluster
-
Gets the root level queues.
- getRpcPort() - Method in class org.apache.hadoop.yarn.api.protocolrecords.RegisterApplicationMasterRequest
-
Get the RPC port on which the ApplicationMaster is responding.
- getRpcPort() - Method in class org.apache.hadoop.yarn.api.records.ApplicationAttemptReport
-
Get the RPC port of this attempt's ApplicationMaster.
- getRpcPort() - Method in class org.apache.hadoop.yarn.api.records.ApplicationReport
-
Get the RPC port of the ApplicationMaster.
- getRssMemorySize() - Method in class org.apache.hadoop.yarn.util.ResourceCalculatorProcessTree
-
Get the resident set size (rss) memory used by all the processes
in the process-tree.
- getRssMemorySize(int) - Method in class org.apache.hadoop.yarn.util.ResourceCalculatorProcessTree
-
Get the resident set size (rss) memory used by all the processes
in the process-tree that are older than the passed in age.
- getRunningJobList() - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl
-
- getRunningJobs() - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
-
- getRunningMaps() - Method in class org.apache.hadoop.mapreduce.ClusterMetrics
-
Get the number of running map tasks in the cluster.
- getRunningReduces() - Method in class org.apache.hadoop.mapreduce.ClusterMetrics
-
Get the number of running reduce tasks in the cluster.
- getRunningTaskAttempts() - Method in class org.apache.hadoop.mapred.TaskReport
-
Get the running task attempt IDs for this task
- getRunState() - Method in class org.apache.hadoop.mapred.JobStatus
-
- getSchedulerResourceTypes() - Method in class org.apache.hadoop.yarn.api.protocolrecords.RegisterApplicationMasterResponse
-
Get a set of the resource types considered by the scheduler.
- getSchedulingInfo() - Method in class org.apache.hadoop.mapreduce.Job
-
Get scheduling info of the job.
- getSchedulingInfo() - Method in class org.apache.hadoop.mapreduce.JobStatus
-
Gets the Scheduling information associated to a particular Job.
- getSchedulingInfo() - Method in class org.apache.hadoop.mapreduce.QueueInfo
-
Gets the scheduling information associated to particular job queue.
- getScheme() - Method in class org.apache.hadoop.fs.azure.NativeAzureFileSystem
-
- getScheme() - Method in class org.apache.hadoop.fs.FileSystem
-
Return the protocol scheme for the FileSystem.
- getScheme() - Method in class org.apache.hadoop.fs.ftp.FTPFileSystem
-
Return the protocol scheme for the FileSystem.
- getScheme() - Method in class org.apache.hadoop.fs.LocalFileSystem
-
Return the protocol scheme for the FileSystem.
- getScheme() - Method in class org.apache.hadoop.fs.s3.S3FileSystem
-
Return the protocol scheme for the FileSystem.
- getScheme() - Method in class org.apache.hadoop.fs.s3native.NativeS3FileSystem
-
Return the protocol scheme for the FileSystem.
- getScheme() - Method in class org.apache.hadoop.fs.viewfs.ViewFileSystem
-
Return the protocol scheme for the FileSystem.
- getScheme() - Method in class org.apache.hadoop.yarn.api.records.URL
-
Get the scheme of the URL.
- getScope() - Method in class org.apache.hadoop.fs.permission.AclEntry
-
Returns the scope of the ACL entry.
- getSecretKey(Credentials, Text) - Static method in class org.apache.hadoop.mapreduce.security.TokenCache
-
Auxiliary method to get the user's secret keys.
- getSelectQuery() - Method in class org.apache.hadoop.mapreduce.lib.db.DataDrivenDBRecordReader
-
Returns the query for selecting the records; subclasses can override this for custom behaviour.
- getSelectQuery() - Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
Returns the query for selecting the records; subclasses can override this for custom behaviour.
- getSelectQuery() - Method in class org.apache.hadoop.mapreduce.lib.db.OracleDBRecordReader
-
Returns the query for selecting the records from an Oracle DB.
- getSequenceFileOutputKeyClass(JobConf) - Static method in class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat
-
- getSequenceFileOutputKeyClass(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.output.SequenceFileAsBinaryOutputFormat
-
- getSequenceFileOutputValueClass(JobConf) - Static method in class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat
-
- getSequenceFileOutputValueClass(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.output.SequenceFileAsBinaryOutputFormat
-
- getSequenceWriter(TaskAttemptContext, Class<?>, Class<?>) - Method in class org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat
-
- getServerDefaults() - Method in class org.apache.hadoop.fs.AbstractFileSystem
-
Return a set of server default configuration values.
- getServerDefaults() - Method in class org.apache.hadoop.fs.FileSystem
-
- getServerDefaults(Path) - Method in class org.apache.hadoop.fs.FileSystem
-
Return a set of server default configuration values
- getServerDefaults() - Method in class org.apache.hadoop.fs.FilterFileSystem
-
- getServerDefaults(Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
-
- getServerDefaults() - Method in class org.apache.hadoop.fs.viewfs.ViewFileSystem
-
- getServerDefaults(Path) - Method in class org.apache.hadoop.fs.viewfs.ViewFileSystem
-
- getServerDefaults() - Method in class org.apache.hadoop.fs.viewfs.ViewFs
-
- getService() - Method in class org.apache.hadoop.yarn.api.records.Token
-
Get the service to which the token is allocated.
- getServiceData() - Method in class org.apache.hadoop.yarn.api.records.ContainerLaunchContext
-
Get application-specific binary service data.
- getServices() - Method in class org.apache.hadoop.service.CompositeService
-
Get a cloned list of services
- getServiceState() - Method in class org.apache.hadoop.service.AbstractService
-
- getServiceState() - Method in interface org.apache.hadoop.service.Service
-
Get the current service state
- getServiceStatus() - Method in interface org.apache.hadoop.ha.HAServiceProtocol
-
Return the current status of the service.
- getSessionId() - Method in class org.apache.hadoop.mapred.JobConf
-
Deprecated.
- getSetupProgress() - Method in class org.apache.hadoop.mapreduce.JobStatus
-
- getSetupTaskReports(JobID) - Method in class org.apache.hadoop.mapred.JobClient
-
Get the information of the current state of the setup tasks of a job.
- getShouldBeUploadedToSharedCache() - Method in class org.apache.hadoop.yarn.api.records.LocalResource
-
NM uses it to decide whether it is necessary to upload the resource to the shared cache.
- getSingleton() - Static method in class org.apache.hadoop.yarn.client.api.NMTokenCache
-
Returns the singleton NM token cache.
- getSize() - Method in class org.apache.hadoop.io.BytesWritable
-
- getSize() - Method in class org.apache.hadoop.yarn.api.records.LocalResource
-
Get the size of the resource to be localized.
- getSkipOutputPath(Configuration) - Static method in class org.apache.hadoop.mapred.SkipBadRecords
-
Get the directory to which skipped records are written.
- getSlope(String) - Method in class org.apache.hadoop.metrics.ganglia.GangliaContext
-
- getSocketAddr(String, String, String, int) - Method in class org.apache.hadoop.conf.Configuration
-
Get the socket address for hostProperty as an InetSocketAddress.
- getSocketAddr(String, String, int) - Method in class org.apache.hadoop.conf.Configuration
-
Get the socket address for the name property as an InetSocketAddress.
- getSocketAddr(String, String, int) - Method in class org.apache.hadoop.yarn.conf.YarnConfiguration
-
Get the socket address for the name property as an InetSocketAddress.
- getSortComparator() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
- getSpaceConsumed() - Method in class org.apache.hadoop.fs.ContentSummary
-
Returns the storage space consumed.
- getSpaceQuota() - Method in class org.apache.hadoop.fs.ContentSummary
-
Returns storage space quota
- getSpeculativeExecution() - Method in class org.apache.hadoop.mapred.JobConf
-
Should speculative execution be used for this job? Defaults to true.
- getSplit() - Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
- getSplitHosts(BlockLocation[], long, long, NetworkTopology) - Method in class org.apache.hadoop.mapred.FileInputFormat
-
This function identifies and returns the hosts that contribute
most for a given split.
- getSplits(JobConf, int) - Method in class org.apache.hadoop.mapred.FileInputFormat
-
- getSplits(JobConf, int) - Method in interface org.apache.hadoop.mapred.InputFormat
-
Logically split the set of input files for the job.
- getSplits(JobConf, int) - Method in class org.apache.hadoop.mapred.join.CompositeInputFormat
-
Build a CompositeInputSplit from the child InputFormats by assigning the
ith split from each child to the ith composite split.
- getSplits(JobConf, int) - Method in class org.apache.hadoop.mapred.lib.CombineFileInputFormat
-
- getSplits(JobConf, int) - Method in class org.apache.hadoop.mapred.lib.db.DBInputFormat
-
Logically split the set of input files for the job.
- getSplits(JobConf, int) - Method in class org.apache.hadoop.mapred.lib.NLineInputFormat
-
Logically splits the set of input files for the job, splits N lines
of the input as one split.
- getSplits(JobConf, int) - Method in class org.apache.hadoop.mapred.MultiFileInputFormat
-
- getSplits(JobContext) - Method in class org.apache.hadoop.mapreduce.InputFormat
-
Logically split the set of input files for the job.
- getSplits(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.db.DataDrivenDBInputFormat
-
Logically split the set of input files for the job.
- getSplits(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
Logically split the set of input files for the job.
- getSplits(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat
-
- getSplits(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-
Generate the list of files and make them into FileSplits.
- getSplits(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.input.NLineInputFormat
-
Logically splits the set of input files for the job, splits N lines
of the input as one split.
- getSplits(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeInputFormat
-
Build a CompositeInputSplit from the child InputFormats by assigning the
ith split from each child to the ith composite split.
- getSplitsForFile(FileStatus, Configuration, int) - Static method in class org.apache.hadoop.mapreduce.lib.input.NLineInputFormat
-
- getSplitter(int) - Method in class org.apache.hadoop.mapreduce.lib.db.DataDrivenDBInputFormat
-
- getSplitter(int) - Method in class org.apache.hadoop.mapreduce.lib.db.OracleDataDrivenDBInputFormat
-
- getStagingAreaDir() - Method in class org.apache.hadoop.mapred.JobClient
-
Fetch the staging area directory for the application
- getStagingAreaDir() - Method in class org.apache.hadoop.mapreduce.Cluster
-
Grab the jobtracker's view of the staging directory path where
job-specific files will be placed.
- getStart() - Method in class org.apache.hadoop.mapred.FileSplit
-
The position of the first byte in the file to process.
- getStart() - Method in class org.apache.hadoop.mapreduce.lib.input.FileSplit
-
The position of the first byte in the file to process.
- getStartContainerRequests() - Method in class org.apache.hadoop.yarn.api.protocolrecords.StartContainersRequest
-
- getStartOffsets() - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
-
Returns an array containing the start offsets of the files in the split
- getStartTime() - Method in class org.apache.hadoop.conf.ReconfigurationTaskStatus
-
- getStartTime() - Method in class org.apache.hadoop.mapreduce.Job
-
Get start time of the job.
- getStartTime() - Method in class org.apache.hadoop.mapreduce.JobStatus
-
- getStartTime() - Method in class org.apache.hadoop.service.AbstractService
-
- getStartTime() - Method in interface org.apache.hadoop.service.Service
-
Get the service start time
- getStartTime() - Method in class org.apache.hadoop.yarn.api.records.ApplicationReport
-
Get the start time of the application.
- getStartTime() - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEntity
-
Get the start time of the entity
- getState() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
-
- getState() - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
-
- getState() - Method in class org.apache.hadoop.mapreduce.JobStatus
-
- getState() - Method in class org.apache.hadoop.mapreduce.QueueInfo
-
Return the queue state
- getState(String) - Static method in enum org.apache.hadoop.mapreduce.QueueState
-
- getState() - Method in class org.apache.hadoop.service.ServiceStateModel
-
Query the service state.
- getState() - Method in class org.apache.hadoop.yarn.api.records.ContainerStatus
-
Get the ContainerState of the container.
- getStatement() - Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
- getStateName() - Method in enum org.apache.hadoop.mapreduce.QueueState
-
- getStatistics() - Method in class org.apache.hadoop.fs.AbstractFileSystem
-
- getStatistics(URI) - Static method in class org.apache.hadoop.fs.AbstractFileSystem
-
Get the statistics for a particular file system.
- getStatistics(URI) - Static method in class org.apache.hadoop.fs.FileContext
-
Get the statistics for a particular file system
- getStatistics() - Static method in class org.apache.hadoop.fs.FileSystem
-
- getStatistics(String, Class<? extends FileSystem>) - Static method in class org.apache.hadoop.fs.FileSystem
-
Get the statistics for a particular file system
- getStatus() - Method in class org.apache.hadoop.conf.ReconfigurationTaskStatus
-
- getStatus() - Method in class org.apache.hadoop.fs.FileSystem
-
Returns a status object describing the use and capacity of the
file system.
- getStatus(Path) - Method in class org.apache.hadoop.fs.FileSystem
-
Returns a status object describing the use and capacity of the
file system.
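The two getStatus entries above return an FsStatus; a small sketch, assuming the default-configured file system, of reading its capacity and usage counters.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.FsStatus;
    import org.apache.hadoop.fs.Path;

    public class FsStatusProbe {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        FsStatus status = fs.getStatus(new Path("/"));  // the root path, chosen for illustration
        System.out.println("capacity  = " + status.getCapacity());
        System.out.println("used      = " + status.getUsed());
        System.out.println("remaining = " + status.getRemaining());
      }
    }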
- getStatus(Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
-
- getStatus(Path) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
-
- getStatus() - Method in class org.apache.hadoop.mapreduce.Job
-
- getStatus() - Method in interface org.apache.hadoop.mapreduce.TaskAttemptContext
-
Get the last set status message.
- getStatus() - Method in class org.apache.hadoop.mapreduce.TaskCompletionEvent
-
- getStickyBit() - Method in class org.apache.hadoop.fs.permission.FsPermission
-
- getStore() - Method in class org.apache.hadoop.fs.azure.NativeAzureFileSystem
-
For unit test purposes, retrieves the AzureNativeFileSystemStore store
backing this file system.
- getStr() - Method in class org.apache.hadoop.mapred.join.Parser.StrToken
-
- getStr() - Method in class org.apache.hadoop.mapred.join.Parser.Token
-
- getStr() - Method in class org.apache.hadoop.mapreduce.lib.join.Parser.StrToken
-
- getStr() - Method in class org.apache.hadoop.mapreduce.lib.join.Parser.Token
-
- getStrictContract() - Method in class org.apache.hadoop.yarn.api.records.PreemptionMessage
-
- getStringCollection(String) - Method in class org.apache.hadoop.conf.Configuration
-
Get the comma delimited values of the name property as a collection
of Strings.
- getStrings(String) - Method in class org.apache.hadoop.conf.Configuration
-
Get the comma delimited values of the name property as an array of
Strings.
- getStrings(String, String...) - Method in class org.apache.hadoop.conf.Configuration
-
Get the comma delimited values of the name property as an array of
Strings (example below).
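A sketch of the comma-delimited accessors in the getStringCollection/getStrings entries above; the property name my.example.hosts and its values are hypothetical.

    import java.util.Collection;
    import org.apache.hadoop.conf.Configuration;

    public class StringsDemo {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        conf.set("my.example.hosts", "node1.example.com,node2.example.com"); // hypothetical property

        // Array form; the varargs defaults are returned when the property is unset.
        String[] hosts = conf.getStrings("my.example.hosts", "localhost");

        // Collection form of the same comma-delimited value.
        Collection<String> asCollection = conf.getStringCollection("my.example.hosts");

        System.out.println(hosts.length + " hosts, " + asCollection.size() + " in the collection");
      }
    }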
- getSuccessfulJobList() - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl
-
- getSuccessfulJobs() - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
-
- getSuccessfullyStartedContainers() - Method in class org.apache.hadoop.yarn.api.protocolrecords.StartContainersResponse
-
Get the list of ContainerIds of the containers that are started
successfully.
- getSuccessfullyStoppedContainers() - Method in class org.apache.hadoop.yarn.api.protocolrecords.StopContainersResponse
-
Get the list of containerIds of successfully stopped containers.
- getSuccessfulTaskAttempt() - Method in class org.apache.hadoop.mapred.TaskReport
-
Get the attempt ID that took this task to completion
- GetSuffix(int) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
Deprecated.
- getSum() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.DoubleValueSum
-
- getSum() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.LongValueSum
-
- getSupportedCompressionAlgorithms() - Static method in class org.apache.hadoop.io.file.tfile.TFile
-
Get names of supported compression algorithms.
- getSwitchMap() - Method in class org.apache.hadoop.net.AbstractDNSToSwitchMapping
-
Get a copy of the map (for diagnostics)
- getSwitchMap() - Method in class org.apache.hadoop.net.CachedDNSToSwitchMapping
-
Get the (host x switch) map.
- getSymlink() - Method in class org.apache.hadoop.fs.FileStatus
-
- getSymlink() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Deprecated.
- getSystemDir() - Method in class org.apache.hadoop.mapred.JobClient
-
Grab the jobtracker system directory path where job-specific files are to be placed.
- getSystemDir() - Method in class org.apache.hadoop.mapreduce.Cluster
-
Grab the jobtracker system directory path where
job-specific files will be placed.
- getTableName() - Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
- getTabSize(int) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
Deprecated.
- getTag(String) - Method in class org.apache.hadoop.metrics.spi.OutputRecord
-
Returns a tag object which can be a String, Integer, Short or Byte.
- getTag(String) - Method in class org.apache.hadoop.metrics2.lib.MetricsRegistry
-
Get a tag by name
- getTagNames() - Method in class org.apache.hadoop.metrics.spi.OutputRecord
-
Returns the set of tag names
- getTagsCopy() - Method in class org.apache.hadoop.metrics.spi.OutputRecord
-
Returns a copy of this record's tags.
- getTargetQueue() - Method in class org.apache.hadoop.yarn.api.protocolrecords.MoveApplicationAcrossQueuesRequest
-
Get the queue to place the application in.
- getTaskAttemptID() - Method in interface org.apache.hadoop.mapred.TaskAttemptContext
-
- getTaskAttemptId() - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
-
Returns task id.
- getTaskAttemptID() - Method in interface org.apache.hadoop.mapreduce.TaskAttemptContext
-
Get the unique name for this task attempt.
- getTaskAttemptId() - Method in class org.apache.hadoop.mapreduce.TaskCompletionEvent
-
Returns task id.
- getTaskAttemptIDsPattern(String, Integer, Boolean, Integer, Integer) - Static method in class org.apache.hadoop.mapred.TaskAttemptID
-
Deprecated.
- getTaskAttemptIDsPattern(String, Integer, TaskType, Integer, Integer) - Static method in class org.apache.hadoop.mapred.TaskAttemptID
-
Deprecated.
- getTaskAttemptPath(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
-
Compute the path where the output of a task attempt is stored until
that task is committed.
- getTaskAttemptPath(TaskAttemptContext, Path) - Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
-
Compute the path where the output of a task attempt is stored until
that task is committed.
- getTaskCleanupNeeded() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get whether task-cleanup is needed for the job
- getTaskCompletionEvents(int) - Method in interface org.apache.hadoop.mapred.RunningJob
-
Get events indicating completion (success/failure) of component tasks.
- getTaskCompletionEvents(int, int) - Method in class org.apache.hadoop.mapreduce.Job
-
Get events indicating completion (success/failure) of component tasks.
- getTaskCompletionEvents(int) - Method in class org.apache.hadoop.mapreduce.Job
-
Get events indicating completion (success/failure) of component tasks.
- getTaskDiagnostics(TaskAttemptID) - Method in interface org.apache.hadoop.mapred.RunningJob
-
Gets the diagnostic messages for a given task attempt.
- getTaskDiagnostics(TaskAttemptID) - Method in class org.apache.hadoop.mapreduce.Job
-
Gets the diagnostic messages for a given task attempt.
- getTaskID() - Method in class org.apache.hadoop.mapred.TaskAttemptID
-
- getTaskId() - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
-
- getTaskId() - Method in class org.apache.hadoop.mapred.TaskReport
-
The string of the task id.
- getTaskID() - Method in class org.apache.hadoop.mapred.TaskReport
-
The id of the task.
- getTaskID() - Method in class org.apache.hadoop.mapreduce.TaskAttemptID
-
Returns the TaskID object that this task attempt belongs to
- getTaskIDsPattern(String, Integer, Boolean, Integer) - Static method in class org.apache.hadoop.mapred.TaskID
-
- getTaskIDsPattern(String, Integer, TaskType, Integer) - Static method in class org.apache.hadoop.mapred.TaskID
-
Deprecated.
- getTaskLogURL(TaskAttemptID, String) - Static method in class org.apache.hadoop.mapreduce.tools.CLI
-
- getTaskOutputFilter(JobConf) - Static method in class org.apache.hadoop.mapred.JobClient
-
Get the task output filter out of the JobConf.
- getTaskOutputFilter() - Method in class org.apache.hadoop.mapred.JobClient
-
Deprecated.
- getTaskOutputFilter(Configuration) - Static method in class org.apache.hadoop.mapreduce.Job
-
Get the task output filter.
- getTaskOutputPath(JobConf, String) - Static method in class org.apache.hadoop.mapred.FileOutputFormat
-
Helper function to create the task's temporary output directory and
return the path to the task's output file.
- getTaskReports(TaskType) - Method in class org.apache.hadoop.mapreduce.Job
-
Get the information of the current state of the tasks of a job.
- getTaskRunTime() - Method in class org.apache.hadoop.mapreduce.TaskCompletionEvent
-
Returns time (in millisec) the task took to complete.
- getTaskStatus() - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
-
- getTaskTrackerCount() - Method in class org.apache.hadoop.mapreduce.ClusterMetrics
-
Get the number of active trackers in the cluster.
- getTaskTrackerExpiryInterval() - Method in class org.apache.hadoop.mapreduce.Cluster
-
Get the tasktracker expiry interval for the cluster
- getTaskTrackerHttp() - Method in class org.apache.hadoop.mapreduce.TaskCompletionEvent
-
http location of the tasktracker where this task ran.
- getTaskTrackerName() - Method in class org.apache.hadoop.mapreduce.TaskTrackerInfo
-
Gets the tasktracker's name.
- getTaskTrackers() - Method in class org.apache.hadoop.mapred.ClusterStatus
-
Get the number of task trackers in the cluster.
- getTaskType() - Method in class org.apache.hadoop.mapreduce.TaskAttemptID
-
Returns the TaskType of the TaskAttemptID
- getTaskType() - Method in class org.apache.hadoop.mapreduce.TaskID
-
Get the type of the task
- getTaskType(char) - Static method in class org.apache.hadoop.mapreduce.TaskID
-
Gets the TaskType corresponding to the character
- getTestProvider() - Static method in class org.apache.hadoop.security.authorize.DefaultImpersonationProvider
-
- getThreadState() - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl
-
- getTime() - Method in interface org.apache.hadoop.yarn.util.Clock
-
- getTime() - Method in class org.apache.hadoop.yarn.util.SystemClock
-
- getTime() - Method in class org.apache.hadoop.yarn.util.UTCClock
-
- getTimeDuration(String, long, TimeUnit) - Method in class org.apache.hadoop.conf.Configuration
-
Return time duration in the given time unit.
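A sketch of the getTimeDuration entry above; the key my.app.poll.interval and its suffixed value are assumptions for illustration.

    import java.util.concurrent.TimeUnit;
    import org.apache.hadoop.conf.Configuration;

    public class TimeDurationDemo {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        conf.set("my.app.poll.interval", "30s");  // hypothetical key; the suffix carries the unit

        // The stored value is converted to the requested unit; the default (10)
        // is interpreted in that unit when the key is absent.
        long millis = conf.getTimeDuration("my.app.poll.interval", 10, TimeUnit.MILLISECONDS);
        System.out.println("poll interval = " + millis + " ms");
      }
    }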
- getTimelineTokenServiceAddress(Configuration) - Static method in class org.apache.hadoop.yarn.util.timeline.TimelineUtils
-
- getTimestamp(Configuration, URI) - Static method in class org.apache.hadoop.filecache.DistributedCache
-
Deprecated.
- getTimestamp() - Method in class org.apache.hadoop.yarn.api.records.LocalResource
-
Get the original timestamp of the resource to be localized, used
for verification.
- getTimestamp() - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineEvent
-
Get the timestamp of the event
- getTimestamp() - Method in class org.apache.hadoop.yarn.event.AbstractEvent
-
- getTimestamp() - Method in interface org.apache.hadoop.yarn.event.Event
-
- getTmax(String) - Method in class org.apache.hadoop.metrics.ganglia.GangliaContext
-
- getToken(int) - Method in class org.apache.hadoop.record.compiler.generated.Rcc
-
Deprecated.
- getToken() - Method in class org.apache.hadoop.yarn.api.records.NMToken
-
Get the Token used for authenticating with NodeManager
- getToken(String) - Method in class org.apache.hadoop.yarn.client.api.NMTokenCache
-
Returns NMToken, null if absent
- getTokenInfo(Class<?>, Configuration) - Method in class org.apache.hadoop.yarn.security.admin.AdminSecurityInfo
-
- getTokenInfo(Class<?>, Configuration) - Method in class org.apache.hadoop.yarn.security.client.ClientRMSecurityInfo
-
- getTokenInfo(Class<?>, Configuration) - Method in class org.apache.hadoop.yarn.security.client.ClientTimelineSecurityInfo
-
- getTokenInfo(Class<?>, Configuration) - Method in class org.apache.hadoop.yarn.security.ContainerManagerSecurityInfo
-
- getTokenInfo(Class<?>, Configuration) - Method in class org.apache.hadoop.yarn.security.SchedulerSecurityInfo
-
- getTokens() - Method in class org.apache.hadoop.yarn.api.records.ContainerLaunchContext
-
Get all the tokens needed by this container.
- getTokenService(Configuration, String, String, int) - Static method in class org.apache.hadoop.yarn.client.ClientRMProxy
-
- getTopologyPaths() - Method in class org.apache.hadoop.fs.BlockLocation
-
Get the list of network topology paths for each of the hosts.
- getTotalJobSubmissions() - Method in class org.apache.hadoop.mapreduce.ClusterMetrics
-
Get the total number of job submissions in the cluster.
- getTotalLogFileSize() - Method in class org.apache.hadoop.yarn.ContainerLogAppender
-
- getTrackingURL() - Method in interface org.apache.hadoop.mapred.RunningJob
-
Get the URL where some job progress information will be displayed.
- getTrackingURL() - Method in class org.apache.hadoop.mapreduce.Job
-
Get the URL where some job progress information will be displayed.
- getTrackingUrl() - Method in class org.apache.hadoop.mapreduce.JobStatus
-
Get the link to the web-ui for details of the job.
- getTrackingUrl() - Method in class org.apache.hadoop.yarn.api.protocolrecords.FinishApplicationMasterRequest
-
Get the tracking URL for the ApplicationMaster.
- getTrackingUrl() - Method in class org.apache.hadoop.yarn.api.protocolrecords.RegisterApplicationMasterRequest
-
Get the tracking URL for the ApplicationMaster.
- getTrackingUrl() - Method in class org.apache.hadoop.yarn.api.records.ApplicationAttemptReport
-
Get the tracking url for the application attempt.
- getTrackingUrl() - Method in class org.apache.hadoop.yarn.api.records.ApplicationReport
-
Get the tracking url for the application.
- getTrashCanLocation(Path) - Method in class org.apache.hadoop.fs.viewfs.ViewFileSystem
-
- getTrashInterval() - Method in class org.apache.hadoop.fs.FsServerDefaults
-
- getTrimmed(String) - Method in class org.apache.hadoop.conf.Configuration
-
Get the value of the name property as a trimmed String, null if no
such property exists.
- getTrimmed(String, String) - Method in class org.apache.hadoop.conf.Configuration
-
Get the value of the name property as a trimmed String, defaultValue
if no such property exists.
- getTrimmedStringCollection(String) - Method in class org.apache.hadoop.conf.Configuration
-
Get the comma delimited values of the name property as a collection of
Strings, trimmed of the leading and trailing whitespace.
- getTrimmedStrings(String) - Method in class org.apache.hadoop.conf.Configuration
-
Get the comma delimited values of the name property as an array of
Strings, trimmed of the leading and trailing whitespace.
- getTrimmedStrings(String, String...) - Method in class org.apache.hadoop.conf.Configuration
-
Get the comma delimited values of the name property as an array of
Strings, trimmed of the leading and trailing whitespace (example below).
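A sketch contrasting the trimmed accessors in the getTrimmed/getTrimmedStrings entries above; the property names and padded values are hypothetical.

    import org.apache.hadoop.conf.Configuration;

    public class TrimmedAccessors {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        conf.set("my.example.dirs", "  /data/a , /data/b ");  // hypothetical, deliberately padded

        // Surrounding whitespace is dropped; the default is returned if the key is unset.
        String single = conf.getTrimmed("my.example.missing", "/tmp");

        // Each comma-delimited element is trimmed of leading and trailing whitespace.
        String[] dirs = conf.getTrimmedStrings("my.example.dirs");

        System.out.println(single + ", " + dirs.length + " dirs");
      }
    }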
- getTTExpiryInterval() - Method in class org.apache.hadoop.mapred.ClusterStatus
-
Get the tasktracker expiry interval for the cluster
- getType() - Method in class org.apache.hadoop.fs.permission.AclEntry
-
Returns the ACL entry type.
- getType() - Method in class org.apache.hadoop.mapred.join.Parser.Token
-
- getType() - Method in class org.apache.hadoop.mapreduce.lib.join.Parser.Token
-
- getType() - Method in class org.apache.hadoop.yarn.api.records.LocalResource
-
Get the LocalResourceType of the resource to be localized.
- getType() - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineDelegationTokenResponse
-
- getType() - Method in class org.apache.hadoop.yarn.event.AbstractEvent
-
- getType() - Method in interface org.apache.hadoop.yarn.event.Event
-
- getTypeConsumed(StorageType) - Method in class org.apache.hadoop.fs.ContentSummary
-
Returns storage type consumed
- getTypeID() - Method in class org.apache.hadoop.record.meta.FieldTypeInfo
-
Deprecated.
get the field's TypeID object
- getTypeQuota(StorageType) - Method in class org.apache.hadoop.fs.ContentSummary
-
Returns storage type quota
- getTypes() - Method in class org.apache.hadoop.io.GenericWritable
-
Return all classes that may be wrapped.
- getTypesSupportingQuota() - Static method in enum org.apache.hadoop.fs.StorageType
-
- getTypeVal() - Method in class org.apache.hadoop.record.meta.TypeID
-
Deprecated.
Get the type value.
- getUgi() - Method in class org.apache.hadoop.fs.FileContext
-
Gets the ugi in the file-context
- getUid(String) - Method in interface org.apache.hadoop.security.IdMappingServiceProvider
-
- getUidAllowingUnknown(String) - Method in interface org.apache.hadoop.security.IdMappingServiceProvider
-
- getUMask() - Method in class org.apache.hadoop.fs.FileContext
-
- getUMask(Configuration) - Static method in class org.apache.hadoop.fs.permission.FsPermission
-
Get the user file creation mask (umask); the UMASK_LABEL config param
holds a umask value that is either symbolic or octal (example below).
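A minimal sketch of reading the umask, assuming fs.permissions.umask-mode is the key that UMASK_LABEL refers to; the octal value is illustrative.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.permission.FsPermission;

    public class UmaskDemo {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        conf.set("fs.permissions.umask-mode", "027");  // assumed key name; octal form shown, a symbolic form is also accepted
        FsPermission umask = FsPermission.getUMask(conf);
        System.out.println("umask = " + umask);
      }
    }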
- getUnderlyingCounter() - Method in class org.apache.hadoop.mapred.Counters.Counter
-
- getUnderlyingGroup() - Method in class org.apache.hadoop.mapred.Counters.Group
-
- getUniqueFile(TaskAttemptContext, String, String) - Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
Generate a unique filename, based on the task id, name, and extension
- getUniqueItems() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.UniqValueCount
-
- getUniqueName(JobConf, String) - Static method in class org.apache.hadoop.mapred.FileOutputFormat
-
Helper function to generate a name that is unique for the task.
- getUnits(String) - Method in class org.apache.hadoop.metrics.ganglia.GangliaContext
-
- getUnmanagedAM() - Method in class org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext
-
Get if the RM should manage the execution of the AM.
- getUpdatedNodes() - Method in class org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse
-
Get the list of updated NodeReports.
- getUri() - Method in class org.apache.hadoop.fs.AbstractFileSystem
-
Returns a URI whose scheme and authority identify this FileSystem.
- getUri() - Method in class org.apache.hadoop.fs.azure.NativeAzureFileSystem
-
- getUri() - Method in class org.apache.hadoop.fs.FileSystem
-
Returns a URI whose scheme and authority identify this FileSystem.
- getUri() - Method in class org.apache.hadoop.fs.FilterFileSystem
-
Returns a URI whose scheme and authority identify this FileSystem.
- getUri() - Method in class org.apache.hadoop.fs.ftp.FTPFileSystem
-
- getUri() - Method in class org.apache.hadoop.fs.RawLocalFileSystem
-
- getUri() - Method in class org.apache.hadoop.fs.s3.S3FileSystem
-
- getUri() - Method in class org.apache.hadoop.fs.s3native.NativeS3FileSystem
-
- getUri() - Method in class org.apache.hadoop.fs.viewfs.ViewFileSystem
-
- getUriDefaultPort() - Method in class org.apache.hadoop.fs.AbstractFileSystem
-
The default port of this file system.
- getUriDefaultPort() - Method in class org.apache.hadoop.fs.azure.Wasb
-
- getUriDefaultPort() - Method in class org.apache.hadoop.fs.viewfs.ViewFs
-
- getUriPath(Path) - Method in class org.apache.hadoop.fs.AbstractFileSystem
-
Get the path-part of a pathname.
- getUsed() - Method in class org.apache.hadoop.fs.FileSystem
-
Return the total size of all files in the filesystem.
- getUsed() - Method in class org.apache.hadoop.fs.FilterFileSystem
-
Return the total size of all files in the filesystem.
- getUsed() - Method in class org.apache.hadoop.fs.FsStatus
-
Return the number of bytes used on the file system
- getUsed() - Method in class org.apache.hadoop.yarn.api.records.NodeReport
-
Get used Resource on the node.
- getUsedMem() - Method in class org.apache.hadoop.mapreduce.JobStatus
-
- getUsedMemory() - Method in class org.apache.hadoop.mapred.ClusterStatus
-
Deprecated.
- getUsedResources() - Method in class org.apache.hadoop.yarn.api.records.ApplicationResourceUsageReport
-
Get the used Resource.
- getUseNewMapper() - Method in class org.apache.hadoop.mapred.JobConf
-
Should the framework use the new context-object code for running
the mapper?
- getUseNewReducer() - Method in class org.apache.hadoop.mapred.JobConf
-
Should the framework use the new context-object code for running
the reducer?
- getUser() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the reported username for this job.
- getUser() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the reported username for this job.
- getUser() - Method in class org.apache.hadoop.yarn.api.records.ApplicationReport
-
Get the user who submitted the application.
- getUser() - Method in class org.apache.hadoop.yarn.security.AMRMTokenIdentifier
-
- getUser() - Method in class org.apache.hadoop.yarn.security.client.ClientToAMTokenIdentifier
-
- getUser() - Method in class org.apache.hadoop.yarn.security.ContainerTokenIdentifier
-
- getUser() - Method in class org.apache.hadoop.yarn.security.NMTokenIdentifier
-
- getUserAcls() - Method in class org.apache.hadoop.yarn.api.records.QueueUserACLInfo
-
Get the list of QueueACL for the given user.
- getUserAclsInfoList() - Method in class org.apache.hadoop.yarn.api.protocolrecords.GetQueueUserAclsInfoResponse
-
Get the QueueUserACLInfo per queue for the user.
- getUserAction() - Method in class org.apache.hadoop.fs.permission.FsPermission
-
- getUserInfo() - Method in class org.apache.hadoop.yarn.api.records.URL
-
Get the user info of the URL.
- getUsername() - Method in class org.apache.hadoop.mapreduce.JobStatus
-
- getUserName(int, String) - Method in interface org.apache.hadoop.security.IdMappingServiceProvider
-
- getUsers() - Method in class org.apache.hadoop.security.authorize.AccessControlList
-
Get the names of users allowed for this service.
- getVal() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.LongValueMax
-
- getVal() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.LongValueMin
-
- getVal() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.StringValueMax
-
- getVal() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.StringValueMin
-
- getValByRegex(String) - Method in class org.apache.hadoop.conf.Configuration
-
get keys matching the regex
- getValue() - Method in class org.apache.hadoop.mapred.Counters.Counter
-
- getValue() - Method in interface org.apache.hadoop.mapreduce.Counter
-
What is the current value of this counter?
- getValue() - Method in class org.apache.hadoop.mapreduce.lib.fieldsel.FieldSelectionHelper
-
- getValue() - Method in class org.apache.hadoop.util.PureJavaCrc32
-
- getValue() - Method in class org.apache.hadoop.util.PureJavaCrc32C
-
- getValueAggregatorDescriptor(String, Configuration) - Static method in class org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregatorJobBase
-
- getValueClass() - Method in class org.apache.hadoop.io.ArrayWritable
-
- getValueClass() - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
-
- getValues() - Method in interface org.apache.hadoop.mapreduce.ReduceContext
-
Iterate through the values for the current key, reusing the same value
object, which is stored in the context.
- getValueTypeID() - Method in class org.apache.hadoop.record.meta.MapTypeID
-
Deprecated.
get the TypeID of the map's value element
- getVcoreSeconds() - Method in class org.apache.hadoop.yarn.api.records.ApplicationResourceUsageReport
-
Get the aggregated number of vcores that the application has allocated
times the number of seconds the application has been running.
- getVectorSize() - Method in class org.apache.hadoop.util.bloom.BloomFilter
-
- getVersion() - Method in class org.apache.hadoop.io.VersionedWritable
-
Return the version number of the current implementation.
- getVIntSize(long) - Static method in class org.apache.hadoop.io.WritableUtils
-
Get the encoded length if an integer is stored in a variable-length format
- getVIntSize(long) - Static method in class org.apache.hadoop.record.Utils
-
Deprecated.
Get the encoded length if an integer is stored in a variable-length format
- getVirtualCores() - Method in class org.apache.hadoop.yarn.api.records.Resource
-
Get number of virtual cpu cores of the resource.
- getVirtualMemorySize() - Method in class org.apache.hadoop.yarn.util.ResourceCalculatorProcessTree
-
Get the virtual memory used by all the processes in the
process-tree.
- getVirtualMemorySize(int) - Method in class org.apache.hadoop.yarn.util.ResourceCalculatorProcessTree
-
Get the virtual memory used by all the processes in the
process-tree that are older than the passed in age.
- getVisibility() - Method in class org.apache.hadoop.yarn.api.records.LocalResource
-
Get the LocalResourceVisibility of the resource to be localized.
- getVolumeIds() - Method in class org.apache.hadoop.fs.BlockStorageLocation
-
Gets the list of VolumeId corresponding to the block's replicas.
- getWaitingJobList() - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl
-
- getWaitingJobs() - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
-
- getWorkingDirectory() - Method in class org.apache.hadoop.fs.azure.NativeAzureFileSystem
-
- getWorkingDirectory() - Method in class org.apache.hadoop.fs.FileContext
-
Gets the working directory for wd-relative names (such as "foo/bar").
- getWorkingDirectory() - Method in class org.apache.hadoop.fs.FileSystem
-
Get the current working directory for the given file system
- getWorkingDirectory() - Method in class org.apache.hadoop.fs.FilterFileSystem
-
Get the current working directory for the given file system
- getWorkingDirectory() - Method in class org.apache.hadoop.fs.ftp.FTPFileSystem
-
- getWorkingDirectory() - Method in class org.apache.hadoop.fs.RawLocalFileSystem
-
- getWorkingDirectory() - Method in class org.apache.hadoop.fs.s3.S3FileSystem
-
- getWorkingDirectory() - Method in class org.apache.hadoop.fs.s3native.NativeS3FileSystem
-
- getWorkingDirectory() - Method in class org.apache.hadoop.fs.viewfs.ViewFileSystem
-
- getWorkingDirectory() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the current working directory for the default file system.
- getWorkingDirectory() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the current working directory for the default file system.
- getWorkOutputPath(JobConf) - Static method in class org.apache.hadoop.mapred.FileOutputFormat
-
Get the Path to the task's temporary output directory for the
map-reduce job (Tasks' Side-Effect Files).
- getWorkOutputPath(TaskInputOutputContext<?, ?, ?, ?>) - Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
Get the Path to the task's temporary output directory for the
map-reduce job (Tasks' Side-Effect Files; example below).
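A sketch of writing a task side-effect file under the work output path from inside a new-API Mapper, assuming a FileOutputFormat-based job; the class name and the side-effect file name are hypothetical.

    import java.io.IOException;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class SideEffectMapper extends Mapper<LongWritable, Text, Text, Text> {
      @Override
      protected void setup(Context context) throws IOException, InterruptedException {
        // Files written under the work path are promoted along with the task's
        // output only if this attempt commits, so speculative duplicates are discarded.
        Path workDir = FileOutputFormat.getWorkOutputPath(context);
        FileSystem fs = workDir.getFileSystem(context.getConfiguration());
        try (FSDataOutputStream side = fs.create(new Path(workDir, "side-effect.txt"))) {
          side.writeUTF("created by " + context.getTaskAttemptID());
        }
      }
    }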
- getWorkPath(TaskAttemptContext, Path) - Method in class org.apache.hadoop.mapred.FileOutputCommitter
-
- getWorkPath() - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
-
Get the directory that the task should write results into.
- getWritePacketSize() - Method in class org.apache.hadoop.fs.FsServerDefaults
-
- getWriters() - Method in class org.apache.hadoop.yarn.api.records.timeline.TimelineDomain
-
Get the writer (and/or writer group) list string
- getXAttr(Path, String) - Method in class org.apache.hadoop.fs.AbstractFileSystem
-
Get an xattr for a file or directory.
- getXAttr(Path, String) - Method in class org.apache.hadoop.fs.FileContext
-
Get an xattr for a file or directory.
- getXAttr(Path, String) - Method in class org.apache.hadoop.fs.FileSystem
-
Get an xattr name and value for a file or directory.
- getXAttr(Path, String) - Method in class org.apache.hadoop.fs.FilterFileSystem
-
- getXAttr(Path, String) - Method in class org.apache.hadoop.fs.viewfs.ViewFileSystem
-
- getXAttr(Path, String) - Method in class org.apache.hadoop.fs.viewfs.ViewFs
-
- getXAttrs(Path) - Method in class org.apache.hadoop.fs.AbstractFileSystem
-
Get all of the xattrs for a file or directory.
- getXAttrs(Path, List<String>) - Method in class org.apache.hadoop.fs.AbstractFileSystem
-
Get all of the xattrs for a file or directory.
- getXAttrs(Path) - Method in class org.apache.hadoop.fs.FileContext
-
Get all of the xattrs for a file or directory.
- getXAttrs(Path, List<String>) - Method in class org.apache.hadoop.fs.FileContext
-
Get all of the xattrs for a file or directory.
- getXAttrs(Path) - Method in class org.apache.hadoop.fs.FileSystem
-
Get all of the xattr name/value pairs for a file or directory.
- getXAttrs(Path, List<String>) - Method in class org.apache.hadoop.fs.FileSystem
-
Get the named xattr name/value pairs for a file or directory (example below).
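A sketch of the xattr getters above, assuming a file system that supports extended attributes (such as HDFS); the path and the user-namespace attribute name are hypothetical.

    import java.nio.charset.StandardCharsets;
    import java.util.Map;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class XAttrDemo {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path file = new Path("/data/example.txt");  // hypothetical path

        // Attribute names carry a namespace prefix such as "user."; this one is made up.
        fs.setXAttr(file, "user.owner-team", "analytics".getBytes(StandardCharsets.UTF_8));

        byte[] one = fs.getXAttr(file, "user.owner-team");  // a single attribute
        Map<String, byte[]> all = fs.getXAttrs(file);       // every attribute on the path

        System.out.println(new String(one, StandardCharsets.UTF_8) + ", total attrs: " + all.size());
      }
    }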
- getXAttrs(Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
-
- getXAttrs(Path, List<String>) - Method in class org.apache.hadoop.fs.FilterFileSystem
-
- getXAttrs(Path) - Method in class org.apache.hadoop.fs.viewfs.ViewFileSystem
-
- getXAttrs(Path, List<String>) - Method in class org.apache.hadoop.fs.viewfs.ViewFileSystem
-
- getXAttrs(Path) - Method in class org.apache.hadoop.fs.viewfs.ViewFs
-
- getXAttrs(Path, List<String>) - Method in class org.apache.hadoop.fs.viewfs.ViewFs
-
- getYarnApplicationAttemptState() - Method in class org.apache.hadoop.yarn.api.records.ApplicationAttemptReport
-
Get the YarnApplicationAttemptState of the application attempt.
- getYarnApplicationState() - Method in class org.apache.hadoop.yarn.api.records.ApplicationReport
-
Get the YarnApplicationState of the application.
- getYarnClusterMetrics() - Method in class org.apache.hadoop.yarn.client.api.YarnClient
-
- getZKFCAddress() - Method in class org.apache.hadoop.ha.HAServiceTarget
-
- getZKFCProxy(Configuration, int) - Method in class org.apache.hadoop.ha.HAServiceTarget
-
- GlobFilter - Class in org.apache.hadoop.fs
-
A filter for POSIX glob pattern with brace expansions.
- GlobFilter(String) - Constructor for class org.apache.hadoop.fs.GlobFilter
-
Creates a glob filter with the specified file pattern.
- GlobFilter(String, PathFilter) - Constructor for class org.apache.hadoop.fs.GlobFilter
-
Creates a glob filter with the specified file pattern and a user filter
(example below).
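A sketch of filtering a directory listing with the fs GlobFilter above; the directory and pattern are illustrative assumptions.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.GlobFilter;
    import org.apache.hadoop.fs.Path;

    public class GlobFilterDemo {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // Keep only entries whose names match the brace-expanded glob.
        GlobFilter logFilter = new GlobFilter("*.{log,out}");
        for (FileStatus status : fs.listStatus(new Path("/var/app"), logFilter)) {
          System.out.println(status.getPath());
        }
      }
    }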
- GlobFilter - Class in org.apache.hadoop.metrics2.filter
-
A glob pattern filter for metrics.
- GlobFilter() - Constructor for class org.apache.hadoop.metrics2.filter.GlobFilter
-
- globStatus(Path) - Method in class org.apache.hadoop.fs.FileSystem
-
Return all the files that match filePattern and are not checksum
files.
- globStatus(Path, PathFilter) - Method in class org.apache.hadoop.fs.FileSystem
-
Return an array of FileStatus objects whose path names match pathPattern
and are accepted by the user-supplied path filter (example below).
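A sketch of globbing for files with the globStatus entries above; the pattern is an illustrative assumption.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class GlobStatusDemo {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        FileStatus[] parts = fs.globStatus(new Path("/output/2015-*/part-*"));
        // globStatus can return null for some non-matching patterns, so guard defensively.
        if (parts != null) {
          for (FileStatus part : parts) {
            System.out.println(part.getPath() + " " + part.getLen());
          }
        }
      }
    }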
- GraphiteSink - Class in org.apache.hadoop.metrics2.sink
-
A metrics sink that writes to a Graphite server
- GraphiteSink() - Constructor for class org.apache.hadoop.metrics2.sink.GraphiteSink
-
- GROUP - Static variable in class org.apache.hadoop.mapreduce.lib.map.RegexMapper
-
- GROUP_MAPPING_CONFIG_PREFIX - Static variable in interface org.apache.hadoop.security.GroupMappingServiceProvider
-
- GroupMappingServiceProvider - Interface in org.apache.hadoop.security
-
An interface for the implementation of a user-to-groups mapping service
used by Groups.
- GT_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
-
Deprecated.
- GzipCodec - Class in org.apache.hadoop.io.compress
-
This class creates gzip compressors/decompressors.
- GzipCodec() - Constructor for class org.apache.hadoop.io.compress.GzipCodec
-