AccessControlException - Deprecated; use the org.apache.hadoop.security.AccessControlException instead. A constructor builds an AccessControlException with the specified detail message.
allocate - the ApplicationMaster's main call for progress reporting and resource negotiation with the ResourceManager; see AllocateRequest and AllocateResponse below.
addDeprecation - Deprecated overloads; use Configuration.addDeprecation(String key, String newKey, String customMessage) or Configuration.addDeprecation(String key, String newKey) instead.
TimelinePutResponse - accepts a single TimelinePutResponse.TimelinePutError instance, or a list of such instances, added into the existing list.
TimelineEvents - accepts a single TimelineEvents.EventsOfOneEntity instance, or a list of such instances, added into the existing list.
CompositeService - if an object passed to it is an instance of Service, it is added to the list of services managed by this CompositeService.
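A minimal sketch of the deprecation mapping named in the addDeprecation entry above; the property keys here are hypothetical.

    import org.apache.hadoop.conf.Configuration;

    public class DeprecationDemo {
      static {
        // Map a deprecated key to its replacement with a custom warning message.
        Configuration.addDeprecation("myapp.old.key", "myapp.new.key",
            "myapp.old.key is deprecated; use myapp.new.key instead");
      }
      public static void main(String[] args) {
        Configuration conf = new Configuration(false);
        conf.set("myapp.old.key", "42");               // write via the old name
        System.out.println(conf.get("myapp.new.key")); // resolves to "42"
      }
    }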
MultipleInputs - adds a Path to the list of inputs for the map-reduce job; overloads accept a custom InputFormat, or a custom InputFormat and Mapper, for that path (available in both the mapred and mapreduce APIs). A usage sketch follows this list.
ChainMapper - adds a Mapper class to the chain mapper; ChainReducer adds a Mapper class to the chain reducer.
CompositeService - adds a Service to the list of services managed by this CompositeService.
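Given an existing Configuration conf, the MultipleInputs entry above corresponds to a setup like the following; the mapper classes are hypothetical stand-ins.

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.MultipleInputs;
    import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

    Job job = Job.getInstance(conf, "multi-input");
    // One input directory of plain text, parsed by its own mapper...
    MultipleInputs.addInputPath(job, new Path("/data/text"),
        TextInputFormat.class, TextLineMapper.class);    // hypothetical mapper
    // ...and one of sequence files, parsed by another.
    MultipleInputs.addInputPath(job, new Path("/data/seq"),
        SequenceFileInputFormat.class, SeqMapper.class); // hypothetical mapper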
AllocateRequest - the core request sent by the ApplicationMaster to the ResourceManager to obtain resources in the cluster.
AllocateResponse - the response sent by the ResourceManager to the ApplicationMaster during resource negotiation.
AMRMClientAsync - handles communication with the ResourceManager and provides asynchronous updates on events such as container allocations and completions; a callback-handler sketch follows this list.
ApplicationAttemptId - denotes the particular attempt of an ApplicationMaster for a given ApplicationId.
ApplicationAttemptNotFoundException - thrown on the GetApplicationAttemptReportRequest API when the application attempt doesn't exist in the Application History Server.
ApplicationAttemptReport - a report of an application attempt.
ApplicationClassLoader - a URLClassLoader for YARN application isolation.
ApplicationClientProtocol - the protocol between clients and the ResourceManager to submit/abort jobs and to get information on applications, cluster metrics, nodes, queues and ACLs.
ApplicationHistoryProtocol - the protocol between clients and the ApplicationHistoryServer to get information on completed applications.
ApplicationId - represents the globally unique identifier for an application, carried in the ApplicationSubmissionContext.
ApplicationMasterProtocol - the protocol between an ApplicationMaster and the ResourceManager.
ApplicationNotFoundException - thrown on the GetApplicationReportRequest API when the application doesn't exist in the RM and AHS.
ApplicationReport - a report of an application.
ApplicationSubmissionContext - represents all of the information needed by the ResourceManager to launch the ApplicationMaster for an application.
AsyncDispatcher - dispatches Events in a separate thread.
AvroFSInput - adapts an FSDataInputStream to Avro's SeekableInput interface; constructors take an FSDataInputStream and its length, or a FileContext and a Path.
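Given an existing Configuration conf, wiring the asynchronous updates described in the AMRMClientAsync entry looks roughly like this sketch (not a complete ApplicationMaster):

    import java.util.List;
    import org.apache.hadoop.yarn.api.records.*;
    import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;
    import org.apache.hadoop.yarn.client.api.async.AMRMClientAsync;

    AMRMClientAsync.CallbackHandler handler = new AMRMClientAsync.CallbackHandler() {
      public void onContainersAllocated(List<Container> allocated) { /* launch work */ }
      public void onContainersCompleted(List<ContainerStatus> done) { /* bookkeeping */ }
      public void onNodesUpdated(List<NodeReport> nodes) { }
      public void onShutdownRequest() { }
      public void onError(Throwable t) { }
      public float getProgress() { return 0f; }
    };
    AMRMClientAsync<ContainerRequest> rm =
        AMRMClientAsync.createAMRMClientAsync(1000, handler); // 1s heartbeat interval
    rm.init(conf);
    rm.start();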
BaseClientToAMTokenSecretManager - a SecretManager for AMs to extend and validate Client-RM tokens issued to clients by the RM, using the underlying master-key shared by the RM with the AMs on their launch.
BinaryComparable - interface supported by WritableComparable types that allow ordering/permutation by a representative set of bytes.
BinaryPartitioner - partitions BinaryComparable keys using a configurable part of the bytes array returned by BinaryComparable.getBytes() (available in both the mapred and mapreduce APIs).
BlockCompressorStream - a CompressorStream which works with 'block-based' compression algorithms, as opposed to 'stream-based' ones; constructors build a BlockCompressorStream with a given output stream and compressor.
BlockDecompressorStream - a DecompressorStream which works with 'block-based' compression algorithms, as opposed to 'stream-based' ones.
BlockStorageLocation - a BlockLocation that also adds VolumeId volume location information for each replica.
BloomMapFile - extends MapFile and provides very much the same functionality.
functionality.Token.cancel(org.apache.hadoop.conf.Configuration) instead
Token.cancel(org.apache.hadoop.conf.Configuration) instead
Token.
ResourceManager to cancel a
delegation token.ResourceManager to a cancelDelegationToken
request.File.canExecute()
File.canRead()
File.canWrite()
position.
IOException or
null pointers.
OutputCommitter.commitJob(JobContext) or
OutputCommitter.abortJob(JobContext, int) instead.
OutputCommitter.commitJob(org.apache.hadoop.mapreduce.JobContext)
or OutputCommitter.abortJob(org.apache.hadoop.mapreduce.JobContext, org.apache.hadoop.mapreduce.JobStatus.State)
instead.
OutputCommitter.commitJob(JobContext) and
OutputCommitter.abortJob(JobContext, JobStatus.State) instead.
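Following the deprecation pointers above, job-level commit logic overrides commitJob/abortJob rather than cleanupJob(); this subclass of FileOutputCommitter is only an illustrative sketch.

    import java.io.IOException;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.JobContext;
    import org.apache.hadoop.mapreduce.JobStatus;
    import org.apache.hadoop.mapreduce.TaskAttemptContext;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter;

    public class AuditingCommitter extends FileOutputCommitter {
      public AuditingCommitter(Path output, TaskAttemptContext ctx) throws IOException {
        super(output, ctx);
      }
      @Override public void commitJob(JobContext ctx) throws IOException {
        super.commitJob(ctx);        // normal success path
        // ... success-side bookkeeping would go here ...
      }
      @Override public void abortJob(JobContext ctx, JobStatus.State state)
          throws IOException {
        super.abortJob(ctx, state);  // cleanup on failure or kill
      }
    }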
ClientToAMTokenSecretManager - a SecretManager for AMs to validate Client-RM tokens issued to clients by the RM, using the underlying master-key shared by the RM with the AMs on their launch.
close - closes a JobClient or Cluster, and releases an InputSplit or RecordWriter to future operations; service shutdown is routed through AbstractService.stop(), and these calls may raise an IOException.
collect - calls MultiFilterRecordReader.emit(TupleWritable) on every Tuple from the collector (the outer join of child RRs), in both the mapred.join and mapreduce.lib.join packages.
CombineFileInputFormat - an abstract InputFormat that returns CombineFileSplit's from its getSplits method (InputFormat.getSplits(JobConf, int) in the mapred API, InputFormat.getSplits(JobContext) in the mapreduce API). A usage sketch follows this list.
CombineSequenceFileInputFormat - the CombineFileInputFormat-equivalent for SequenceFileInputFormat.
CombineTextInputFormat - the CombineFileInputFormat-equivalent for TextInputFormat.
compress - returns a CompressionOutputStream to compress data.
Configuration / JobConf - configuration objects for Hadoop and for a map-reduce job.
ConnectTimeoutException - thrown by NetUtils.connect(java.net.Socket, java.net.SocketAddress, int) if it times out while connecting to the remote host.
Container - represents an allocated resource in the cluster.
ContainerId - represents a globally unique identifier for a Container in the cluster.
ContainerLaunchContext - represents all of the information needed by the NodeManager to launch a container.
ContainerManagementProtocol - the protocol between an ApplicationMaster and a NodeManager to start/stop containers and to get status of running containers.
ContainerNotFoundException - thrown on the GetContainerReportRequest API when the container doesn't exist in the AHS.
ContainerReport - a report of a container.
ContainerStatus - represents the current status of a Container.
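Given an existing Configuration conf, the small-files case the Combine* entries describe can be handled by CombineTextInputFormat; the 256 MB cap is an arbitrary example value.

    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.CombineTextInputFormat;

    Job job = Job.getInstance(conf, "pack-small-files");
    job.setInputFormatClass(CombineTextInputFormat.class);
    // Pack many small files into fewer splits, each at most ~256 MB.
    CombineTextInputFormat.setMaxInputSplitSize(job, 256L * 1024 * 1024);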
CounterGroup - a group of Counters that logically belong together.
Counters - holds per job/task counters, defined either by the Map-Reduce framework or applications; Counters.Group is a group of counters comprising counters from a particular counter Enum class.
create - same as FileContext.create(Path, EnumSet, Options.CreateOpts...) except that the Path f must be fully qualified and the permission is absolute (i.e. umask is not applied).
createApplication - obtains a YarnClientApplication for a new application, which in turn contains the ApplicationSubmissionContext and GetNewApplicationResponse objects; a sketch follows.
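Given an existing Configuration conf, the createApplication flow described above looks roughly like this; details such as the launch context are elided.

    import org.apache.hadoop.yarn.api.records.ApplicationId;
    import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
    import org.apache.hadoop.yarn.client.api.YarnClient;
    import org.apache.hadoop.yarn.client.api.YarnClientApplication;

    YarnClient yarn = YarnClient.createYarnClient();
    yarn.init(conf);
    yarn.start();
    // createApplication() bundles the ApplicationSubmissionContext and the
    // GetNewApplicationResponse carrying the new ApplicationId.
    YarnClientApplication app = yarn.createApplication();
    ApplicationSubmissionContext ctx = app.getApplicationSubmissionContext();
    ctx.setApplicationName("demo");
    // ... populate the AM ContainerLaunchContext and Resource here ...
    ApplicationId id = yarn.submitApplication(ctx); // blocks until accepted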
createCompressor / createDecompressor - create a new Compressor or Decompressor for use by this CompressionCodec.
createDirectDecompressor - creates a new DirectDecompressor for use by this DirectDecompressionCodec.
Permission factories create an FsPermission object.
createInputStream - creates a CompressionInputStream that will read from the given input stream and return a stream for uncompressed data; an overload reads from the given InputStream with the given Decompressor.
create - same as AbstractFileSystem.create(Path, EnumSet, Options.CreateOpts...) except that the opts have been declared explicitly.
createOutputStream - creates a CompressionOutputStream that will write to the given OutputStream, optionally with the given Compressor. A sketch follows this list.
createPool - Deprecated overloads; use CombineFileInputFormat.createPool(List) or CombineFileInputFormat.createPool(PathFilter...) instead.
Metric record factories create a record keyed by the given recordName.
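A minimal sketch tying the codec factory entries together; GzipCodec stands in for any CompressionCodec.

    import java.io.FileOutputStream;
    import java.nio.charset.StandardCharsets;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.CompressionOutputStream;
    import org.apache.hadoop.io.compress.GzipCodec;
    import org.apache.hadoop.util.ReflectionUtils;

    Configuration conf = new Configuration();
    CompressionCodec codec = ReflectionUtils.newInstance(GzipCodec.class, conf);
    try (CompressionOutputStream out =
             codec.createOutputStream(new FileOutputStream("data.gz"))) {
      out.write("hello".getBytes(StandardCharsets.UTF_8));
      out.finish(); // flush remaining compressed data before close
    }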
createSymlink - see FileContext.createSymlink(Path, Path, boolean).
createWriter - Deprecated (all overloads); use SequenceFile.createWriter(Configuration, Writer.Option...) instead; a sketch of the non-deprecated form follows this list.
DBWritable - implemented by objects that are read from or written to a database.
DefaultStringifier - implements the Stringifier interface, stringifying objects using base64 encoding of their serialized form.
Default comparators provide WritableComparable and Record comparison implementations.
delete - same as FileContext.delete(Path, boolean) except that Path f must be for this file system; a deprecated variant points to FileSystem.delete(Path, boolean).
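The non-deprecated createWriter form takes Writer.Option arguments, roughly as follows; the path and key/value classes are illustrative.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;

    Configuration conf = new Configuration();
    SequenceFile.Writer writer = SequenceFile.createWriter(conf,
        SequenceFile.Writer.file(new Path("/tmp/data.seq")),
        SequenceFile.Writer.keyClass(IntWritable.class),
        SequenceFile.Writer.valueClass(Text.class));
    try {
      writer.append(new IntWritable(1), new Text("one"));
    } finally {
      writer.close();
    }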
Configuration dump - writes the properties (key, value, isFinal flag and resource) to the given Writer; the format of the output is { "properties" : [ {key1,value1,key1.isFinal,key1.resource}, {key2,value2,key2.isFinal,key2.resource}, ... ] }.
equals - returns true if o is a ByteWritable, DoubleWritable, FloatWritable, IntWritable, LongWritable, ShortWritable, VIntWritable or VLongWritable with the same value; an EnumSetWritable with the same value, or both are null; an MD5Hash whose digest contains the same values; or a Text with the same contents.
FileInputFormat - a base class for file-based InputFormats (both APIs).
FileOutputCommitter - an OutputCommitter that commits files specified in the job output directory (both APIs).
FileOutputFormat - a base class for OutputFormats that read from FileSystems.
FilterFileSystem - contains some other file system, which it uses as its basic file system, possibly transforming the data along the way or providing additional functionality.
find - searches for what in the backing buffer, starting at position start.
findCounter - Deprecated; use Counters.findCounter(String, String) instead.
FinishApplicationMasterRequest / FinishApplicationMasterResponse - the finalization request sent by the ApplicationMaster to notify the ResourceManager about its completion (success or failure), and the response sent by the ResourceManager to an ApplicationMaster on its completion.
finished - returns true if the end of the decompressed data output stream has been reached.
fixRelativePart - see FileContext.fixRelativePart(org.apache.hadoop.fs.Path).
forceKillApplication - asks the ResourceManager to abort a submitted application.
fromEscapedCompactString - converts a Counters.makeEscapedCompactString() counter representation back into a counter object.
FSDataInputStream - wraps an FSInputStream in a DataInputStream and buffers input through a BufferedInputStream; FSDataOutputStream wraps an OutputStream in a DataOutputStream.
A utility wraps a Throwable into a RuntimeException; FTPFileSystem is a FileSystem backed by an FTP client provided by Apache Commons Net; a deprecated delete variant points to FileSystem.delete(Path, boolean).
get - returns the value of the name property, or null if no such property exists.
getBytes - Deprecated; use BytesWritable.getBytes() instead.
Timeline getters return the list of TimelineEvents.EventsOfOneEntity instances.
getAllJobs - Deprecated; use Cluster.getAllJobStatuses() instead.
Getter methods across the allocation and application records expose, among other things:
- the list of newly allocated Containers by the ResourceManager;
- the Resource allocated to a container;
- information (QueueInfo) about all queues, recursively if there is a hierarchy;
- the AMCommand sent when the ResourceManager needs the ApplicationMaster to take some action;
- the ContainerId of the AM container for this attempt;
- the ContainerLaunchContext describing the Container with which the ApplicationMaster is launched;
- the ApplicationACLs for the application;
- the ApplicationAttemptId of an application attempt, of this attempt of the application, or of the application to which a Container was assigned;
- the ApplicationId of an application, of the ApplicationAttemptId, of the submitted application, of the application to be aborted, or of the application to be moved, as allocated by the ResourceManager;
- the ApplicationReport of an application, obtained from the ResourceManager or the ApplicationHistoryServer;
- the ApplicationSubmissionContext for the application;
- the ResourceRequests that update the ResourceManager about the application's resource requirements;
- the NodeId where a container is running, and the attempt id of the application.

Request/response records in this range:
GetApplicationAttemptReportRequest / GetApplicationAttemptReportResponse - the request sent by a client to the ResourceManager or ApplicationHistoryServer to get an ApplicationAttemptReport for an application attempt, and the response to it.
GetApplicationAttemptsRequest / GetApplicationAttemptsResponse - the request for, and the response sent by the ResourceManager carrying, the list of ApplicationAttemptReports for an application's attempts.
GetApplicationReportRequest / GetApplicationReportResponse - the request sent by a client to the ResourceManager to get an ApplicationReport for an application, and the response to it.
GetApplicationsRequest / GetApplicationsResponse - the request for a report of applications in the cluster, and the ResourceManager's response carrying the ApplicationReports.
SkipBadRecords.COUNTER_MAP_PROCESSED_RECORDS is incremented by MapRunner after invoking the map function; SkipBadRecords.COUNTER_REDUCE_PROCESSED_GROUPS is incremented by the framework after invoking the reduce function.
Configuration getters return the value of the name property as a boolean, as a Class, as a Class implementing the interface specified by xface, or as an array of Classes; a companion utility returns the Class of a given object, and jobs expose the ClassLoader for the job. A sketch of the typed getters follows.
Text.getBytes - the returned buffer is only valid up to Text.getLength().
Other getters return the Resource on a node, the Resource capability of a request, and information (QueueInfo) about all the immediate children queues of a given queue.
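The typed Configuration getters summarized above behave roughly like this; the property names are hypothetical.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.compress.DefaultCodec;

    Configuration conf = new Configuration();
    boolean enabled = conf.getBoolean("myapp.feature.enabled", false);
    int retries     = conf.getInt("myapp.retries", 3);
    Class<?> codec  = conf.getClass("myapp.codec", DefaultCodec.class);
    String[] hosts  = conf.getTrimmedStrings("myapp.hosts"); // whitespace-trimmed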
Cluster-level getters return the YarnClusterMetrics for the cluster and the start time of the ResourceManager, which is used to generate globally unique ApplicationIds.
GetClusterMetricsRequest / GetClusterMetricsResponse - the request from clients for cluster metrics, and the response sent by the ResourceManager carrying them.
GetClusterNodesRequest / GetClusterNodesResponse - the request for node information, and the response sent by the ResourceManager to a client requesting a NodeReport for all nodes.
Job configuration getters return the WritableComparable or RawComparator comparator used for grouping keys of inputs to the combiner, and a Reader attached to the configuration resource with the given name.
CodecPool getters obtain a Compressor for the given CompressionCodec from the pool, or a new one; codecs also report the type of Compressor they need.
Container-related getters return the exit status of a container; the ContainerId of a Container; the ContainerIds of containers for which to obtain the ContainerStatus, or of the containers to be stopped; the ContainerLaunchContext for the container to be started by the NodeManager; the ContainerReport for a container or for all the containers of an application attempt; and the PreemptionContainers specifying which containers owned by the ApplicationMaster may be reclaimed by the ResourceManager.
GetContainerReportRequest / GetContainerReportResponse - the request sent by a client to the ResourceManager or ApplicationHistoryServer to get a ContainerReport for a container, and the response to it.
GetContainersRequest / GetContainersResponse - the request sent by a client to the ResourceManager for, and the response carrying, a list of ContainerReports for an application attempt's containers, including containers transferred from previous application attempts; related getters return the ContainerState of a container.
GetContainerStatusesRequest / GetContainerStatusesResponse - the request sent by the ApplicationMaster to the NodeManager to get the ContainerStatus of requested containers, and the response sent by the NodeManager when asked to obtain them; a companion getter returns the ContainerToken for the container.
File system getters return the ContentSummary of a given Path.
Counter getters return the Counters.Counter of the given group with the given name, or the Counter for a given counterName or groupName and counterName; one variant is deprecated in favour of Counters.Group.findCounter(String).
Application getters return the ApplicationAttemptId of the current attempt of the application.
CodecPool getters obtain a Decompressor for the given CompressionCodec from the pool, or a new one; codecs also report the type of Decompressor they need.
Deprecated getters point to FileSystem.getDefaultBlockSize(Path), FileSystem.getDefaultReplication(Path), and Credentials.getToken(org.apache.hadoop.io.Text); the latter is included for compatibility against Hadoop-1.
GetDelegationTokenRequest / GetDelegationTokenResponse - the request for a delegation token to access the ResourceManager, and the response to the GetDelegationTokenRequest request from the client.
Configuration getters return the value of the name property as a double or as a float.
Trash.getEmptier - returns a Runnable that periodically empties the trash of all users, intended to be run by the superuser.
Timeline getters return the list of TimelinePutResponse.TimelinePutError instances; Service getters return the state in which the failure recorded by Service.getFailureCause() occurred.
Per-filesystem variants mirror FileContext.getFileBlockLocations(Path, long, long), FileContext.getFileChecksum(Path), FileContext.getFileLinkStatus(Path), FileContext.getFileStatus(Path) and FileContext.getFsStatus(Path), except that Path f must be for this file system; an UnresolvedLinkException may be thrown if a symlink is encountered in the path (for getFileStatus) or in the path leading up to the final path component (for getFileLinkStatus).
Further getters return: the RawComparator comparator for grouping keys of inputs to the reduce; the host on which the ApplicationMaster is running; the short integer id of an ApplicationId, unique for all applications started by a particular instance of the ResourceManager; and the ContainerResourceIncreaseRequests being sent by the ApplicationMaster.
getInputFormat - the InputFormat implementation for the map-reduce job, which defaults to TextInputFormat if not specified explicitly; a mapreduce-API variant returns the InputFormat class for the job, and companions return the input Paths for the map-reduce job and the InputSplit object for a map.
Job.getInstance - creates a new Job with no particular Cluster; overloads take a given Configuration, a given jobName, or a given Configuration and JobStatus. The older constructors are deprecated in favour of Job.getInstance() and Job.getInstance(Configuration); a driver sketch follows this list.
Configuration getters return the value of the name property as a List of objects implementing the interface specified by xface, as an int, or as a set of comma-delimited int values.
JobClient getters return a RunningJob object to track an ongoing job (deprecated variants point to JobClient.getJob(JobID) and RunningJob.getID()); ID getters return the JobID object that a task attempt or tip belongs to, the JobPriority for the job, and the JobStatus of the Job.
Options getters return the KeyFieldBasedComparator options and the KeyFieldBasedPartitioner options, and codecs report the type of Compressors and Decompressors they work with.
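Job.getInstance replaces the deprecated Job constructors noted above; a conventional driver looks roughly like this (WordCount and its mapper/reducer are hypothetical).

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word-count");  // preferred over new Job(...)
    job.setJarByClass(WordCount.class);             // hypothetical driver class
    job.setMapperClass(TokenizerMapper.class);      // hypothetical mapper
    job.setReducerClass(IntSumReducer.class);       // hypothetical reducer
    FileInputFormat.addInputPath(job, new Path("/in"));
    FileOutputFormat.setOutputPath(job, new Path("/out"));
    System.exit(job.waitForCompletion(true) ? 0 : 1);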
Link and cache getters mirror FileContext.getLinkTarget(Path); deprecated variants point to JobContext.getCacheArchives() and JobContext.getCacheFiles().
Container getters return the LocalResources required by the container.
Configuration getters return the value of the name property as a long, or as a long or human readable format.
Mapper-side getters return a WrappedMapper.Context for custom implementations, the CompressionCodec for compressing the map outputs, the Mapper class for the job, and the MapRunnable class for the job; a deprecated report getter points to JobClient.getMapTaskReports(JobID).
AMRMClient getters return the ContainerRequests matching the given parameters, and allocation responses report the maximum capability for any Resource allocated by the ResourceManager in the cluster.
Retry getters are backed by the mapreduce.map.maxattempts and mapreduce.reduce.maxattempts properties (the older mapred.map.max.attempts and mapred.reduce.max.attempts names are deprecated), and deprecated memory getters point to JobConf.getMemoryForMapTask() and JobConf.getMemoryForReduceTask().
GetNewApplicationRequest / GetNewApplicationResponse - the request to obtain a new ApplicationId for submitting applications, and the response sent by the ResourceManager to the client carrying it.
NM-token caches are exposed by both AMRMClient and NMClient.
Node getters return the NodeId of the NodeManager for which an NMToken is used to authenticate, the NodeId and NodeState of a node, a report of nodes (NodeReport) in the cluster, the NodeReports for all nodes, and the number of NodeManagers in the cluster.
getOutputCommitter - the OutputCommitter implementation for the map-reduce job, which defaults to FileOutputCommitter if not specified explicitly; a mapreduce-API variant returns the OutputCommitter for the task-attempt.
Output getters return the SequenceFile.CompressionType for the output SequenceFile, the CompressionCodec for compressing the job outputs, the OutputFormat implementation for the map-reduce job (defaults to TextOutputFormat if not specified explicitly) or the OutputFormat class for the job, the RawComparator used to compare keys, the Path to the output directory for the map-reduce job, and the WritableComparable comparator for grouping keys of inputs to the reduce.
Partitioning getters return the Partitioner used to partition Mapper-outputs to be sent to the Reducers, or the Partitioner class for the job; HashPartitioner uses Object.hashCode() to partition, while BinaryPartitioner uses BinaryComparable.getBytes(); a deprecated getter points to TotalOrderPartitioner.getPartitionFile(Configuration).
Task-side getters return a Path for a file that is unique for the task within the job output directory, and Configuration getters return the value of the name property as a Pattern.
Priority getters return the Priority of the application, of the container, of the request, and the Priority at which a Container was allocated; progress getters report how much input the RecordReader has consumed.
GetQueueInfoRequest / GetQueueInfoResponse - the request sent by clients to the ResourceManager for information (QueueInfo) about a given queue, and the response carrying the QueueInfo; related getters return the QueueState of the queue.
GetQueueUserAclsInfoRequest / GetQueueUserAclsInfoResponse - the request from clients to the ResourceManager to get queue acls for the current user, and the response sent to clients seeking queue acls for the user.
getRaw - the value of the name property, without doing variable expansion; if the key is deprecated, it returns the value of the first key which replaces the deprecated key and is not null.
Factory getters return the RecordReader for the given InputSplit, the RecordWriter for the given job or task, the Reducer class for the job, and a WrappedReducer.Context for custom implementations; a deprecated report getter points to JobClient.getReduceTaskReports(JobID).
Release and resource getters return the ContainerIds of containers being released by the ApplicationMaster, the URL for a named resource, the Resource allocated to the container, and the ResourceBlacklistRequest being sent by the ApplicationMaster.
PreemptionContract - if the AM releases resources matching its resource requests, then the PreemptionContainers enumerated in PreemptionContract.getContainers() should not be evicted from the cluster.
Queue and scope getters return information (QueueInfo) about top level queues, the port on which the ApplicationMaster is responding, and the ApplicationsRequestScope of applications to be filtered; SequenceFile-related getters expose the output SequenceFile settings, and a deprecated getter points to FileSystem.getServerDefaults(Path).
A deprecated size getter points to BytesWritable.getLength(); socket getters return the value of the name property as an InetSocketAddress; sort getters return the RawComparator used to compare keys; and getSplits splits files returned by FileInputFormat.listStatus(JobConf) when they're too big.
Container-start getters return the StartContainerRequests to start containers, the ContainerState of the container, and the ContainerIds of the containers that started successfully; a deprecated statistics getter points to FileSystem.getAllStatistics().
String getters return the value of the name property as a collection of Strings or as an array of Strings, with trimmed variants that strip leading and trailing whitespace and can return null or a defaultValue if no such property exists.
Task getters return the TaskID object that a task attempt belongs to and the TaskType corresponding to a character code; deprecated variants point to TaskCompletionEvent.getTaskAttemptId() and TaskID.getTaskIDsPattern(String, Integer, TaskType, Integer).
Security and resource getters return the Token used for authenticating with the NodeManager, the LocalResourceType and LocalResourceVisibility of the resource to be localized, the list of updated NodeReports, the used Resource on the node, the QueueACLs for the given user, the QueueUserACLInfo per queue for the user, and the VolumeIds corresponding to a block's replicas.
The UMASK_LABEL config param has a umask value that is either symbolic or octal.
Work-path getters return the Path to the task's temporary output directory for the map-reduce job; application getters return the YarnApplicationState of the application and metrics (YarnClusterMetrics) about the cluster.
Groups - a user-to-groups mapping service; helpers support HAServiceProtocol RPC calls; default hashCode implementations delegate to Object.hashCode(); counter increment methods bump the counter of the given Enum type by the specified amount.
InputFormat - describes the input-specification for a Map-Reduce job (defined in both the mapred and mapreduce APIs).
InputSampler - a utility for collecting samples and writing a partition file for TotalOrderPartitioner.
InputSplit - represents the data to be processed by an individual Mapper (both APIs).
InverseMapper - a Mapper that swaps keys and values (both APIs); a Configuration predicate reports whether a key is deprecated.
isDir - Deprecated; use FileStatus.isFile(), FileStatus.isDirectory(), and FileStatus.isSymlink() instead.
isSingleSwitch - predicates that report a DNSToSwitchMapping instance being on a single switch; a deprecated form points to AbstractDNSToSwitchMapping.isMappingSingleSwitch(DNSToSwitchMapping); split checks defer to CombineFileInputFormat.isSplitable(FileSystem, Path).
iterator - an Iterator to go through the list of String key-value pairs in the configuration.
JavaSerialization - a Serialization for Java Serializable classes; JavaSerializationComparator is a RawComparator that uses a JavaSerialization Deserializer to deserialize objects that are then compared via their Comparable interfaces.
JobClient - the primary interface for the user-job to interact with the cluster; constructors build a JobClient with a JobConf or a Configuration and connect to the default cluster.
KeyFieldBasedComparator / KeyFieldBasedPartitioner - key-field-based comparison and partitioning (both APIs).
KeyValueTextInputFormat - an InputFormat for plain text files (both APIs).
KillApplicationRequest / KillApplicationResponse - the request sent by the client to the ResourceManager to abort a submitted application, and the response to it; a deprecated task-kill call points to RunningJob.killTask(TaskAttemptID, boolean).
list / listFiles - wrappers for File.list() and File.listFiles().
listLocatedStatus / listStatus - per-filesystem variants of FileContext.listLocatedStatus(Path) and FileContext.Util.listStatus(Path), except that Path f must be for this file system; if f is a file, the S3 implementation makes a single call to S3.
loadTokens - Deprecated; use Credentials.readTokenStorageFile(org.apache.hadoop.fs.Path, org.apache.hadoop.conf.Configuration) instead; included for compatibility against Hadoop-1.
LocalResource - represents a local resource required to run a container; LocalResourceType specifies the type, and LocalResourceVisibility the visibility, of a resource localized by the NodeManager.
LongSumReducer - a Reducer that sums long values.
ChainMapper.map - chains the map(...) methods of the Mappers in the chain.
MapFileOutputFormat - an OutputFormat that writes MapFiles (both APIs); task loggers expose the logging Level for the map task and the reduce task.
Deprecated configuration pointers name JobConf.MAPRED_MAP_TASK_ENV / JobConf.MAPRED_REDUCE_TASK_ENV, JobConf.MAPRED_MAP_TASK_JAVA_OPTS / JobConf.MAPRED_REDUCE_TASK_JAVA_OPTS, and JobConf.MAPREDUCE_JOB_MAP_MEMORY_MB_PROPERTY / JobConf.MAPREDUCE_JOB_REDUCE_MEMORY_MB_PROPERTY.
MapReduceBase - a base class for Mapper and Reducer implementations; MapRunner is the default MapRunnable implementation.
MarkableIterator - a wrapper iterator class that implements the MarkableIteratorInterface.
MBeans.register(String, String, Object) - MBean registration helper.
mkdir / mkdirs - same as FileContext.mkdir(Path, FsPermission, boolean) except that the Path f must be fully qualified and the permission is absolute; FileSystem.mkdirs(Path, FsPermission) runs with default permission.
MoveApplicationAcrossQueuesRequest / MoveApplicationAcrossQueuesResponse - the request sent by the client to the ResourceManager to move a submitted application to a different queue, and the response to it.
MultiFileInputFormat - an abstract InputFormat that returns MultiFileSplit's in its MultiFileInputFormat.getSplits(JobConf, int) method.
MultipleInputs - supports jobs with a different InputFormat and Mapper for each path (both APIs).
MultipleIOException - encapsulates a list of IOExceptions into one IOException.
MultipleOutputs - writes side outputs in addition to the OutputCollector passed to the map() and reduce() methods of the Mapper and Reducer implementations.
Metrics factories create a MutableQuantiles for a metric that rolls itself over on the specified time interval.
NativeS3FileSystem - a FileSystem for reading and writing files stored on Amazon S3.
needsDictionary - returns true if a preset dictionary is needed for decompression; needsInput returns true if the input data buffer is empty and Decompressor.setInput(byte[], int, int) should be called to provide more input.
Record factories create new WritableComparable instances; a deprecated reader method points to DBRecordReader.nextKeyValue().
NMClientAsync - handles communication with all the NodeManagers and provides asynchronous updates on getting responses from them.
ScriptBasedMapping - exposes the marker used by its toString() method when no script is configured.
NodeId - the unique identifier for a node; NodeReport is a summary of runtime information of a node in the cluster.
open - per-filesystem variants of FileContext.open(Path) and FileContext.open(Path, int), except that Path f must be for this file system.
S3FileSystem - a block-based FileSystem that uses Amazon S3 as a backing store.
OutputCollector - collects the <key, value> pairs output by Mappers and Reducers.
OutputCommitter - describes the commit of task output for a Map-Reduce job (both APIs).
OutputFormat - describes the output-specification for a Map-Reduce job (both APIs).
PartialFileOutputCommitter - an OutputCommitter implementing partial commit of task output, as during preemption.
PreemptionMessage - part of the RM-AM protocol, used by the RM to specify resources that the RM wants to reclaim from this ApplicationMaster (AM).
QueueACL - enumerates the various ACLs for queues; QueueUserACLInfo provides the QueueACL information for the given user.
RawComparator - a Comparator that operates directly on byte representations of objects.
read - constructs an FsPermission from DataInput; container-log readers fetch logs of all types for a single container, using EOFException to mark the end of the stream.
readFields - deserializes an object's fields from a DataInput in or a JDBC ResultSet; helper implementations delegate to CompressedWritable.readFields(DataInput) and FSDataInputStream.readFully(long, byte[], int, int).
readObject - reads a Writable, String, primitive type, or an array of the preceding; record types provide a Record comparison implementation.
RecordReader - reads <key, value> pairs from an InputSplit (both APIs).
RecordWriter - writes the output <key, value> pairs to an output file (both APIs).
ChainReducer - chains the reduce(...) method of the Reducer with the map(...) methods of the Mappers in the chain.
RegexMapper - a Mapper that extracts text matching a regular expression (both APIs).
RegisterApplicationMasterRequest / RegisterApplicationMasterResponse - the request sent by the ApplicationMaster to register with the ResourceManager, and the response sent by the ResourceManager to a new ApplicationMaster on registration.
rename - same as FileContext.rename(Path, Path, Options.Rename...) except that Path f must be for this file system; one variant performs NO OVERWRITE.
renew - Deprecated; use Token.renew(org.apache.hadoop.conf.Configuration) instead.
RenewDelegationTokenRequest / RenewDelegationTokenResponse - the request to renew a delegation token from the ResourceManager, and its response.
resolveLink - see AbstractFileSystem.getLinkTarget(Path).
Resource - models a set of computer resources in the cluster.
ResourceBlacklistRequest - encapsulates the list of resource-names which should be added or removed from the blacklist of resources for the application.
ResourceRequest - represents the request made by an application to the ResourceManager to obtain various Container allocations.
returnCompressor / returnDecompressor - return a Compressor or Decompressor to the pool.
run - advanced application writers can use the Reducer.run(org.apache.hadoop.mapreduce.Reducer.Context) method to control how the reduce task works; ToolRunner runs a given Tool by Tool.run(String[]) after parsing the given generic arguments, or runs the Tool with its Configuration.
RunningJob - the user-interface to query for details on a running Map-Reduce job.
S3FileSystem - a block-based FileSystem backed by Amazon S3.
ScriptBasedMapping - implements the DNSToSwitchMapping interface using a script configured via the CommonConfigurationKeysPublic.NET_TOPOLOGY_SCRIPT_FILE_NAME_KEY option.
SequenceFile - flat files consisting of binary key/value pairs; SequenceFileAsBinaryOutputFormat writes keys and values to SequenceFiles in binary (raw) format, SequenceFileInputFormat reads SequenceFiles, SequenceFileOutputFormat writes SequenceFiles, and SequenceFileRecordReader is a RecordReader for SequenceFiles (all in both APIs).
Services begin in the Service.STATE.NOTINITED state.
Setter methods mirror the getters above: they set the value of the name property; the list of newly allocated Containers by the ResourceManager; the ContainerLaunchContext describing the Container with which the ApplicationMaster is launched; the ApplicationACLs for the application; the ApplicationAttemptId of an application attempt; the ApplicationReport of an application; the ApplicationAttemptReport for the application attempt; the ApplicationId of the application (including the application to be moved and the submitted application); the ApplicationSubmissionContext for the application; and the ResourceRequests that update the ResourceManager about the application's resource requirements.
The SkipBadRecords counters noted earlier (COUNTER_MAP_PROCESSED_RECORDS, COUNTER_REDUCE_PROCESSED_GROUPS) have matching configuration setters.
Configuration setters set the value of the name property to a boolean, to a double, to a float, to the given enum type, or to the name of a theClass implementing the given interface xface; resource setters set the Resource capability of the request.
Comparator setters set the RawComparator for grouping keys in the input to the combiner; combined records are reduced via Reducer.reduce(Object, Iterable, org.apache.hadoop.mapreduce.Reducer.Context).
Container setters set the ContainerId of the container; the ContainerIds of containers for which to obtain the ContainerStatus, or of the containers to be stopped; the ContainerLaunchContext for the container to be started by the NodeManager; the ContainerReports for all the containers of an application attempt, including those transferred by the ResourceManager from previous application attempts; and the ContainerStatuses of the requested containers.
Timeline setters replace the lists of TimelinePutResponse.TimelinePutError and TimelineEvents.EventsOfOneEntity instances.
setExecutable - a platform-aware alternative to File.setExecutable(boolean); File#setExecutable does not work as expected on Windows.
Registration setters set the host on which the ApplicationMaster is running, and the ContainerResourceIncreaseRequests that inform the ResourceManager that some containers' resources need to be increased.
setInputFormat - sets the InputFormat implementation for the map-reduce job (mapred) or the InputFormat for the job (mapreduce); input-path setters take the given comma separated Paths as the list of inputs for the map-reduce job.
Configuration setters set the value of the name property to an int or a long, and jobs can set the JobPriority.
Key-field setters set the KeyFieldBasedComparator options used to compare keys and the KeyFieldBasedPartitioner options used for the Partitioner, as shown in the sketch after this list.
BinaryPartitioner offset setters select the key subarray to partition on: bytes[offset:], bytes[left:(right+1)], or bytes[:(offset+1)] in Python syntax.
Container setters set the LocalResources required by the container; job setters set the CompressionCodec for the map outputs, the Mapper class, and the MapRunnable class for the job.
Deprecated memory setters point to JobConf.setMemoryForMapTask(long mem) and JobConf.setMemoryForReduceTask(long mem); NM-token caches on AMRMClient and NMClient default to NMTokenCache.getSingleton().
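The key-field options take Unix-sort-style flags; a sketch with the mapred API:

    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.lib.KeyFieldBasedComparator;
    import org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner;

    JobConf conf = new JobConf();
    conf.setOutputKeyComparatorClass(KeyFieldBasedComparator.class);
    conf.setPartitionerClass(KeyFieldBasedPartitioner.class);
    // Sort on the second field, numerically, in reverse order.
    conf.setKeyFieldComparatorOptions("-k2,2nr");
    // Partition on the first field only.
    conf.setKeyFieldPartitionerOptions("-k1,1");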
Output setters set the OutputCommitter implementation for the map-reduce job; the SequenceFile.CompressionType for the output SequenceFile; the CompressionCodec to be used to compress job outputs; the OutputFormat implementation for the job; the RawComparator used to compare keys; the Path of the output directory for the map-reduce job; and the RawComparator for grouping keys in the input to the reduce.
setOwner - same as FileContext.setOwner(Path, String, String) except that Path f must be for this file system.
Partitioning setters set the Partitioner class used to partition Mapper-outputs to be sent to the Reducers; a deprecated form points to TotalOrderPartitioner.setPartitionFile(Configuration, Path).
Configuration setters set the value of the name property to a Pattern, to a host:port pair, to comma delimited values, or to a given time duration.
setPermission - same as FileContext.setPermission(Path, FsPermission) except that Path f must be for this file system.
Priority setters set the Priority of the application and of the request.
setReadable - a platform-aware alternative to File.setReadable(boolean); File#setReadable does not work as expected on Windows.
Reducer setters set the Reducer class for the chain job or for the job; release setters set the ContainerIds of containers being released by the ApplicationMaster.
setReplication - same as FileContext.setReplication(Path, short) except that Path f must be for this file system.
Registration setters set the ResourceBlacklistRequest to inform the ResourceManager about the blacklist additions and removals per the ApplicationMaster, the port on which the ApplicationMaster is responding, and the ApplicationsRequestScope of applications to filter.
Container-start setters set the list of StartContainerRequests to start containers; deprecated task-event setters point to TaskCompletionEvent.setTaskAttemptId(TaskAttemptID).
setTimes - same as FileContext.setTimes(Path, long, long) except that Path f must be for this file system; a variant sets a Path's last modified time only to the given valid time.
Tracking setters update the ApplicationMaster's information while it is running; localization setters set the LocalResourceType and LocalResourceVisibility of the resource to be localized; work-path setters set the Path of the task's temporary output directory for the map-reduce job.
setVerifyChecksum - same as FileContext.setVerifyChecksum(boolean, Path) except that Path f must be for this file system.
setWritable - a platform-aware alternative to File.setWritable(boolean); File#setWritable does not work as expected on Windows.
size - Deprecated; use AbstractCounters.countCounters() instead.
StartContainerRequest - the request sent by the ApplicationMaster to the NodeManager to start a container.
StartContainersRequest / StartContainersResponse - the ApplicationMaster provides a list of StartContainerRequests to a NodeManager to start Containers allocated to it using this interface, and the NodeManager responds when asked to start an allocated container.
StopContainersRequest / StopContainersResponse - the ApplicationMaster requests a NodeManager to stop a list of Containers allocated to it using this interface, and the NodeManager responds when asked to stop allocated containers.
submitApplication - submits a new application to YARN. It is a blocking call: it will not return the ApplicationId until the submitted application is submitted successfully and accepted by the ResourceManager.
SubmitApplicationRequest / SubmitApplicationResponse - the request sent by a client to submit an application to the ResourceManager, and the response sent to the client on application submission; a deprecated pipes entry points to Submitter.runJob(JobConf).
supportsSymlinks - see AbstractFileSystem.supportsSymlinks().
SystemClock - a Clock that gives the current time from the system clock in milliseconds.
TableMapping - a DNSToSwitchMapping implementation that reads a 2 column text file.
TaskAttemptID / TaskID - deprecated constructors and factories point to TaskAttemptID.TaskAttemptID(String, int, TaskType, int, int), TaskID.TaskID(String, int, TaskType, int), and TaskID.TaskID(org.apache.hadoop.mapreduce.JobID, TaskType, int), keyed by JobID.
TextInputFormat - an InputFormat for plain text files; TextOutputFormat is an OutputFormat that writes plain text files (both APIs).
Token - the security entity used by the framework to verify the authenticity of any resource.
TokenCountMapper - a Mapper that maps text values into token counts; ToolRunner is a utility to help run Tools.
URL - represents a serializable URL.
VersionMismatchException - thrown by VersionedWritable.readFields(DataInput) when the version of an object being read does not match the current implementation version as returned by VersionedWritable.getVersion().
File system instantiation mirrors FileSystem.createFileSystem(URI, Configuration); after the constructor is called, initialize() is called.
WrappedMapper - a Mapper which wraps a given one to allow custom WrappedMapper.Context implementations; WrappedReducer is a Reducer which wraps a given one to allow for custom WrappedReducer.Context implementations.
Writable - a serialization protocol based on DataInput and DataOutput; WritableComparable is a Writable which is also Comparable, and WritableComparator compares WritableComparables.
WritableSerialization - a Serialization for Writables that delegates to Writable.write(java.io.DataOutput) and Writable.readFields(java.io.DataInput).
write - serializes an object to out or to a JDBC PreparedStatement, delegating where appropriate to CompressedWritable.write(DataOutput); object writers emit a Writable, String, primitive type, or an array of the preceding, and string writers use UTF-8 encoding on the OutputStream.
YarnClusterMetrics - represents cluster metrics.
YarnUncaughtExceptionHandler - intended to be installed in the main entry point via Thread.setDefaultUncaughtExceptionHandler(UncaughtExceptionHandler).