AccessControlException instead.
RemoteException.
AccessControlException with the specified detail message.
allocate
allocate
Configuration.addDeprecation(String key, String newKey, String customMessage) instead
Configuration.addDeprecation(String key, String newKey) instead
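The deprecation mapping named in the entries above can be sketched with a short example; the key names here are hypothetical and it assumes hadoop-common on the classpath.

```java
import org.apache.hadoop.conf.Configuration;

public class DeprecationDemo {
    // Map a hypothetical old key to its replacement, with a custom warning message.
    public static String readViaOldKey() {
        Configuration.addDeprecation("demo.old.key", "demo.new.key",
                "demo.old.key is deprecated; use demo.new.key");
        Configuration conf = new Configuration(false); // skip default resources
        conf.set("demo.old.key", "42");   // setting the deprecated key...
        return conf.get("demo.new.key");  // ...makes the value visible under the new key
    }
}
```

Registering the deprecation before any key is set lets old configuration files keep working while code reads only the new name.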
TimelinePutResponse.TimelinePutError instance into the existing list
TimelinePutResponse.TimelinePutError instances into the existing list
TimelineEvents.EventsOfOneEntity instance into the existing list
TimelineEvents.EventsOfOneEntity instances into the existing list
Service, add it to the list of services managed by this CompositeService
Path to the list of inputs for the map-reduce job.
Path with a custom InputFormat to the list of inputs for the map-reduce job.
Path with a custom InputFormat and Mapper to the list of inputs for the map-reduce job.
Path to the list of inputs for the map-reduce job.
Path with a custom InputFormat to the list of inputs for the map-reduce job.
Path with a custom InputFormat and Mapper to the list of inputs for the map-reduce job.
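The MultipleInputs entries above (adding a Path with a custom InputFormat and Mapper) can be sketched as follows; the input paths and mapper classes are hypothetical placeholders.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat;
import org.apache.hadoop.mapreduce.lib.input.MultipleInputs;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

public class MultipleInputsDemo {
    // Trivial mappers; a real job would parse each input's own format.
    public static class LineMapper extends Mapper<LongWritable, Text, Text, Text> {}
    public static class KvMapper extends Mapper<Text, Text, Text, Text> {}

    // Returns the serialized per-directory format mapping the job records.
    public static String formatsConf() {
        try {
            Job job = Job.getInstance(new Configuration(), "multi-input-demo");
            // Each input path gets its own InputFormat and Mapper (paths are hypothetical).
            MultipleInputs.addInputPath(job, new Path("/data/lines"),
                    TextInputFormat.class, LineMapper.class);
            MultipleInputs.addInputPath(job, new Path("/data/pairs"),
                    KeyValueTextInputFormat.class, KvMapper.class);
            return job.getConfiguration().get("mapreduce.input.multipleinputs.dir.formats");
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

Each call records the path-to-format (and path-to-mapper) binding in the job configuration, so one job can consume heterogeneous inputs.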
Mapper class to the chain mapper.
Mapper class to the chain reducer.
Service to the list of services managed by this CompositeService
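The CompositeService entries above describe child-service management; here is a minimal sketch, assuming hadoop-common on the classpath, with hypothetical service names.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.service.AbstractService;
import org.apache.hadoop.service.CompositeService;

public class CompositeServiceDemo extends CompositeService {
    public CompositeServiceDemo() {
        super("composite-demo");
    }

    @Override
    protected void serviceInit(Configuration conf) throws Exception {
        // Children added here are inited/started with the parent and stopped in reverse order.
        addService(new AbstractService("child-a") {});
        addService(new AbstractService("child-b") {});
        super.serviceInit(conf);
    }

    // Init and start the composite, return how many children it manages.
    public static int managedChildren() {
        CompositeServiceDemo svc = new CompositeServiceDemo();
        svc.init(new Configuration(false));
        svc.start();
        int n = svc.getServices().size();
        svc.stop();
        return n;
    }
}
```

Grouping lifecycle-managed children this way is how YARN daemons such as the ResourceManager compose their internal services.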
ApplicationMaster and the ResourceManager.
ApplicationMaster to the ResourceManager to obtain resources in the cluster.
ResourceManager to the ApplicationMaster during resource negotiation.
AMRMClientAsync handles communication with the ResourceManager and provides asynchronous updates on events such as container allocations and completions.
ApplicationAttemptId denotes the particular attempt of an ApplicationMaster for a given ApplicationId.
(GetApplicationAttemptReportRequest) API when the Application Attempt doesn't exist in Application History Server
ApplicationAttemptReport is a report of an application attempt.
URLClassLoader for YARN application isolation.
ResourceManager to submit/abort jobs and to get information on applications, cluster metrics, nodes, queues and ACLs.
ApplicationHistoryServer to get the information of completed applications etc.
ApplicationId represents the globally unique identifier for an application.
ApplicationId in ApplicationSubmissionContext.
ApplicationMaster and the ResourceManager.
(GetApplicationReportRequest) API when the Application doesn't exist in RM and AHS
ApplicationReport is a report of an application.
ApplicationSubmissionContext represents all of the information needed by the ResourceManager to launch the ApplicationMaster for an application.
Events in a separate thread.
FSDataInputStream to Avro's SeekableInput interface.
FSDataInputStream and its length.
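The AMRMClientAsync entry above describes the callback-driven client; a minimal sketch of the handler contract follows, assuming the hadoop-yarn-client artifact. Building the client does not contact a ResourceManager; a real ApplicationMaster would still call init(), start(), and registerApplicationMaster().

```java
import java.util.List;

import org.apache.hadoop.yarn.api.records.Container;
import org.apache.hadoop.yarn.api.records.ContainerStatus;
import org.apache.hadoop.yarn.api.records.NodeReport;
import org.apache.hadoop.yarn.client.api.async.AMRMClientAsync;

public class AmCallbackDemo implements AMRMClientAsync.CallbackHandler {
    @Override public void onContainersAllocated(List<Container> containers) {
        // Launch work on each newly allocated container here.
    }
    @Override public void onContainersCompleted(List<ContainerStatus> statuses) {
        // Account for finished containers; re-request replacements if needed.
    }
    @Override public void onNodesUpdated(List<NodeReport> updated) {}
    @Override public void onShutdownRequest() {}
    @Override public void onError(Throwable e) {}
    @Override public float getProgress() { return 0f; }

    // Heartbeat every second; allocation/completion events arrive on the handler above.
    public static AMRMClientAsync<?> build() {
        return AMRMClientAsync.createAMRMClientAsync(1000, new AmCallbackDemo());
    }
}
```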
FileContext and a Path.
SecretManager for AMs to extend and validate Client-RM tokens issued to clients by the RM using the underlying master-key shared by RM to the AMs on their launch.
WritableComparable types supporting ordering/permutation by a representative set of bytes.
BinaryComparable keys using a configurable part of the bytes array returned by BinaryComparable.getBytes().
BinaryComparable keys using a configurable part of the bytes array returned by BinaryComparable.getBytes().
CompressorStream which works with 'block-based' compression algorithms, as opposed to 'stream-based' compression algorithms.
BlockCompressorStream.
BlockCompressorStream with given output-stream and compressor.
DecompressorStream which works with 'block-based' compression algorithms, as opposed to 'stream-based' compression algorithms.
BlockDecompressorStream.
BlockDecompressorStream.
BlockLocation that also adds VolumeId volume location information for each replica.
MapFile and provides very much the same functionality.
Token.cancel(org.apache.hadoop.conf.Configuration) instead
Token.cancel(org.apache.hadoop.conf.Configuration) instead
Token.
ResourceManager to cancel a delegation token.
ResourceManager to a cancelDelegationToken request.
File.canExecute()
File.canRead()
File.canWrite()
position.
IOException or null pointers.
OutputCommitter.commitJob(JobContext) or OutputCommitter.abortJob(JobContext, int) instead.
OutputCommitter.commitJob(org.apache.hadoop.mapreduce.JobContext) or OutputCommitter.abortJob(org.apache.hadoop.mapreduce.JobContext, org.apache.hadoop.mapreduce.JobStatus.State) instead.
OutputCommitter.commitJob(JobContext) and OutputCommitter.abortJob(JobContext, JobStatus.State) instead.
SecretManager for AMs to validate Client-RM tokens issued to clients by the RM using the underlying master-key shared by RM to the AMs on their launch.
JobClient.
InputSplit to future operations.
RecordWriter to future operations.
Cluster.
RecordWriter to future operations.
AbstractService.stop()
IOException
IOException.
MultiFilterRecordReader.emit(org.apache.hadoop.mapred.join.TupleWritable) every Tuple from the collector (the outer join of child RRs).
MultiFilterRecordReader.emit(org.apache.hadoop.mapreduce.lib.join.TupleWritable) every Tuple from the collector (the outer join of child RRs).
InputFormat that returns CombineFileSplit's in the InputFormat.getSplits(JobConf, int) method.
InputFormat that returns CombineFileSplit's in the InputFormat.getSplits(JobContext) method.
CombineFileSplit.
CombineFileSplit.
CombineFileInputFormat-equivalent for SequenceFileInputFormat.
CombineFileInputFormat-equivalent for SequenceFileInputFormat.
CombineFileInputFormat-equivalent for TextInputFormat.
CombineFileInputFormat-equivalent for TextInputFormat.
CompressionOutputStream to compress data.
Configuration.
JobConf.
JobConf.
Configuration.
Configuration.
NetUtils.connect(java.net.Socket, java.net.SocketAddress, int) if it times out while connecting to the remote host.
Container represents an allocated resource in the cluster.
ContainerId represents a globally unique identifier for a Container in the cluster.
ContainerLaunchContext represents all of the information needed by the NodeManager to launch a container.
ApplicationMaster and a NodeManager to start/stop containers and to get status of running containers.
(GetContainerReportRequest) API when the container doesn't exist in AHS
ContainerReport is a report of a container.
Container.
ContainerStatus represents the current status of a Container.
RuntimeException.
RuntimeException.
Counters that logically belong together.
Counters holds per job/task counters, defined either by the Map-Reduce framework or applications.
Group of counters, comprising of counters from a particular counter Enum class.
FileContext.create(Path, EnumSet, Options.CreateOpts...) except that the Path f must be fully qualified and the permission is absolute (i.e.
YarnClientApplication for a new application, which in turn contains the ApplicationSubmissionContext and GetNewApplicationResponse objects.
Compressor for use by this CompressionCodec.
Compressor for use by this CompressionCodec.
Decompressor for use by this CompressionCodec.
Decompressor for use by this CompressionCodec.
DirectDecompressor for use by this DirectDecompressionCodec.
DirectDecompressor for use by this DirectDecompressionCodec.
FsPermission object.
CompressionInputStream that will read from the given input stream and return a stream for uncompressed data.
CompressionInputStream that will read from the given InputStream with the given Decompressor, and return a stream for uncompressed data.
CompressionInputStream that will read from the given input stream.
CompressionInputStream that will read from the given InputStream with the given Decompressor.
AbstractFileSystem.create(Path, EnumSet, Options.CreateOpts...) except that the opts have been declared explicitly.
IOException.
CompressionOutputStream that will write to the given OutputStream.
CompressionOutputStream that will write to the given OutputStream with the given Compressor.
CompressionOutputStream that will write to the given OutputStream.
CompressionOutputStream that will write to the given OutputStream with the given Compressor.
CombineFileInputFormat.createPool(List).
CombineFileInputFormat.createPool(PathFilter...).
recordName.
FileContext.createSymlink(Path, Path, boolean);
FileContext.createSymlink(Path, Path, boolean)
SequenceFile.createWriter(Configuration, Writer.Option...) instead.
SequenceFile.createWriter(Configuration, Writer.Option...) instead.
SequenceFile.createWriter(Configuration, Writer.Option...) instead.
SequenceFile.createWriter(Configuration, Writer.Option...) instead.
SequenceFile.createWriter(Configuration, Writer.Option...) instead.
SequenceFile.createWriter(Configuration, Writer.Option...) instead.
SequenceFile.createWriter(Configuration, Writer.Option...) instead.
SequenceFile.createWriter(Configuration, Writer.Option...) instead.
SequenceFile.createWriter(Configuration, Writer.Option...) instead.
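The deprecated createWriter overloads above all funnel into the single option-based factory; here is a minimal round-trip sketch, assuming a local default filesystem, with a hypothetical file under java.io.tmpdir.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class SeqFileDemo {
    // Write one record with the option-based factory, then read it back.
    public static String roundTrip() {
        try {
            Configuration conf = new Configuration();
            Path file = new Path(System.getProperty("java.io.tmpdir"), "demo.seq");
            try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
                    SequenceFile.Writer.file(file),
                    SequenceFile.Writer.keyClass(IntWritable.class),
                    SequenceFile.Writer.valueClass(Text.class))) {
                writer.append(new IntWritable(1), new Text("hello"));
            }
            try (SequenceFile.Reader reader = new SequenceFile.Reader(conf,
                    SequenceFile.Reader.file(file))) {
                IntWritable key = new IntWritable();
                Text value = new Text();
                reader.next(key, value); // positions on the first record
                return value.toString();
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

The Writer.Option varargs (file, keyClass, valueClass, compression, ...) replace the combinatorial explosion of positional-argument overloads that the deprecation notes point away from.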
DBWritable.
CompressionInputStream to compress data.
Stringifier interface which stringifies the objects using base64 encoding of the serialized version of the objects.
WritableComparable implementation.
Record implementation.
FileContext.delete(Path, boolean) except that Path f must be for this file system.
FileSystem.delete(Path, boolean) instead.
Writer The format of the output would be { "properties" : [ {key1,value1,key1.isFinal,key1.resource}, {key2,value2,key2.isFinal,key2.resource}...
o is a ByteWritable with the same value.
o is a DoubleWritable with the same value.
o is an EnumSetWritable with the same value, or both are null.
o is a FloatWritable with the same value.
o is an IntWritable with the same value.
o is a LongWritable with the same value.
o is an MD5Hash whose digest contains the same values.
o is a ShortWritable with the same value.
o is a Text with the same contents.
o is a VIntWritable with the same value.
o is a VLongWritable with the same value.
InputFormat.
InputFormats.
OutputCommitter that commits files specified in job output directory i.e.
OutputCommitter that commits files specified in job output directory i.e.
OutputFormat.
OutputFormats that read from FileSystems.
FilterFileSystem contains some other file system, which it uses as its basic file system, possibly transforming the data along the way or providing additional functionality.
Application.
what in the backing buffer, starting at position start.
Counters.findCounter(String, String) instead
ApplicationMaster to notify the ResourceManager about its completion (success or failure).
ApplicationMaster to inform the ResourceManager about its completion.
ResourceManager to an ApplicationMaster on its completion.
true if the end of the decompressed data output stream has been reached.
FileContext.fixRelativePart(org.apache.hadoop.fs.Path)
ResourceManager to abort submitted application.
Counters.makeEscapedCompactString() counter representation into a counter object.
FSInputStream in a DataInputStream and buffers input through a BufferedInputStream.
OutputStream in a DataOutputStream.
FsAction.
FileSystem.
Throwable into a Runtime Exception.
FileSystem backed by an FTP client provided by Apache Commons Net.
FileSystem.delete(Path, boolean)
name property, null if no such property exists.
name.
BytesWritable.getBytes() instead.
WritableComparable implementation.
TimelineEvents.EventsOfOneEntity instances
Cluster.getAllJobStatuses() instead.
Container by the ResourceManager.
Resource of the container.
QueueInfo) about all queues, recursively if there is a hierarchy
NodeManager.
ResourceManager needs the ApplicationMaster to take some action then it will send an AMCommand to the ApplicationMaster.
ContainerId of AMContainer for this attempt
ContainerLaunchContext to describe the Container with which the ApplicationMaster is launched.
ApplicationACLs for the application.
ApplicationACLs for the application.
ApplicationAttemptId of an application attempt.
ApplicationAttemptId of an application attempt.
ApplicationAttemptId of this attempt of the application
ApplicationAttemptId of the application to which the Container was assigned.
ApplicationReport of an application.
ResourceManager or ApplicationHistoryServer.
ApplicationAttemptReport for the application attempt.
ResourceManager to get an ApplicationAttemptReport for an application attempt.
ResourceManager to a client requesting an application attempt report.
ResourceManager or ApplicationHistoryServer.
ResourceManager.
ResourceManager to a client requesting a list of ApplicationAttemptReport for application attempts.
ApplicationId of an application
ApplicationId of the application.
ApplicationId allocated by the ResourceManager.
ApplicationId of the application to be aborted.
ApplicationId of the application to be moved.
ApplicationId of the ApplicationAttemptId.
ApplicationId of the application.
ApplicationId of the submitted application.
ApplicationReport for applications.
ResourceManager.
ResourceManager.
ApplicationReport for the application.
ResourceManager to get an ApplicationReport for an application.
ResourceManager to a client requesting an application report.
GetApplicationsRequest in the cluster from the ResourceManager.
ApplicationHistoryServer.
ResourceManager.
ResourceManager to a client requesting an ApplicationReport for applications.
ApplicationSubmissionContext for the application.
ResourceRequest to update the ResourceManager about the application's resource requirements.
NodeId where container is running.
attempt id of the Application.
SkipBadRecords.COUNTER_MAP_PROCESSED_RECORDS is incremented by MapRunner after invoking the map function.
SkipBadRecords.COUNTER_REDUCE_PROCESSED_GROUPS is incremented by the framework after invoking the reduce function.
name property as a boolean.
Text.getLength() is valid.
Resource on the node.
Resource capability of the request.
QueueInfo) about all the immediate children queues of the given queue
name property as a Class.
name property as a Class implementing the interface specified by xface.
Class of the given object.
name property as an array of Class.
ClassLoader for this job.
ApplicationMaster.
ResourceManager.
YarnClusterMetrics for the cluster.
ResourceManager.
ResourceManager to a client requesting cluster metrics.
ResourceManager.
ResourceManager.
ResourceManager to a client requesting a NodeReport for all nodes.
ResourceManager which is used to generate globally unique ApplicationId.
WritableComparable comparator for grouping keys of inputs to the combiner.
RawComparator comparator for grouping keys of inputs to the combiner.
Compressor for the given CompressionCodec from the pool or a new one.
Compressor needed by this CompressionCodec.
Compressor needed by this CompressionCodec.
name.
Reader attached to the configuration resource with the given name.
exit status of the container.
ContainerId of the Container.
ContainerId of the container.
ContainerId of the container.
ContainerIds of containers for which to obtain the ContainerStatus.
ContainerIds of the containers to be stopped.
ContainerLaunchContext for the container to be started by the NodeManager.
ContainerReport for all the containers of an application attempt.
ResourceManager or ApplicationHistoryServer.
ContainerReport for the container.
ResourceManager to get a ContainerReport for a container.
ResourceManager to a client requesting a container report.
ResourceManager or ApplicationHistoryServer.
PreemptionContainer specifying which containers owned by the ApplicationMaster may be reclaimed by the ResourceManager.
PreemptionContainer specifying containers owned by the ApplicationMaster that may be reclaimed by the ResourceManager.
ResourceManager from previous application attempts.
ResourceManager.
ResourceManager to a client requesting a list of ContainerReport for containers.
ContainerState of the container.
ApplicationMaster to request the current statuses of Containers from the NodeManager.
ContainerStatuses of the requested containers.
ApplicationMaster to the NodeManager to get ContainerStatus of requested containers.
NodeManager to the ApplicationMaster when asked to obtain the ContainerStatus of requested containers.
ContainerToken for the container.
ContentSummary of a given Path.
Counters.Group.findCounter(String) instead
Counters.Counter of the given group with the given name.
Counters.Counter of the given group with the given name.
Counter for the given counterName.
Counter for the given groupName and counterName.
ApplicationAttemptId of the current attempt of the application
Decompressor for the given CompressionCodec from the pool or a new one.
Decompressor needed by this CompressionCodec.
Decompressor needed by this CompressionCodec.
NodeManager
FileSystem.getDefaultBlockSize(Path) instead
FileSystem.getDefaultReplication(Path) instead
Credentials.getToken(org.apache.hadoop.io.Text) instead, this method is included for compatibility against Hadoop-1
ResourceManager.
GetDelegationTokenRequest request from the client.
name property as a double.
Runnable that periodically empties the trash of all users, intended to be run by the superuser.
Runnable that periodically empties the trash of all users, intended to be run by the superuser.
TimelinePutResponse.TimelinePutError instances
Service.getFailureCause() occurred.
FileContext.getFileBlockLocations(Path, long, long) except that Path f must be for this file system.
FileContext.getFileChecksum(Path) except that Path f must be for this file system.
FileContext.getFileLinkStatus(Path) except that an UnresolvedLinkException may be thrown if a symlink is encountered in the path leading up to the final path component.
FileContext.getFileLinkStatus(Path)
FileContext.getFileStatus(Path) except that an UnresolvedLinkException may be thrown if a symlink is encountered in the path.
ApplicationMaster.
name property as a float.
FileContext.getFsStatus(Path) except that Path f must be for this file system.
FileContext.getFsStatus(Path).
FsAction.
RawComparator comparator for grouping keys of inputs to the reduce.
ApplicationMaster is running.
ApplicationMaster is running.
ApplicationMaster is running.
ApplicationId which is unique for all applications started by a particular instance of the ResourceManager.
ContainerId.
ResourceManager
ContainerResourceIncreaseRequest being sent by the ApplicationMaster
InputFormat implementation for the map-reduce job, defaults to TextInputFormat if not specified explicitly.
InputFormat class for the job.
Paths for the map-reduce job.
Paths for the map-reduce job.
InputSplit object for a map.
Job with no particular Cluster.
Job with no particular Cluster and a given Configuration.
Job with no particular Cluster and a given jobName.
Job with no particular Cluster and given Configuration and JobStatus.
Job.getInstance()
Job.getInstance(Configuration)
Job with no particular Cluster and given Configuration and JobStatus.
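The Job.getInstance entries above are the factory methods that replace the deprecated Job constructors; a minimal sketch, with a hypothetical job name and configuration key:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class JobFactoryDemo {
    // Build a Job via the preferred factory; the name and key are hypothetical.
    public static Job build() {
        try {
            Configuration conf = new Configuration();
            conf.set("demo.marker", "yes");              // visible through job.getConfiguration()
            return Job.getInstance(conf, "demo-job");    // copies conf into the new job
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

Because getInstance copies the passed Configuration, later changes to the original conf object do not leak into the job.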
name property as a List of objects implementing the interface specified by xface.
name property as an int.
name property as a set of comma-delimited int values.
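The typed Configuration getters named in the nearby entries (boolean, int, comma-delimited int values) can be sketched as follows; the key names are hypothetical.

```java
import org.apache.hadoop.conf.Configuration;

public class ConfGettersDemo {
    // Returns {threads, number of ports, enabled-flag} parsed via typed getters.
    public static int[] demo() {
        Configuration conf = new Configuration(false); // skip default resources
        conf.set("demo.threads", "4");
        conf.set("demo.ports", "8040,8041,8042");
        conf.setBoolean("demo.enabled", true);

        int threads = conf.getInt("demo.threads", 1);  // string "4" parsed as int
        int[] ports = conf.getInts("demo.ports");      // comma-delimited int values
        boolean on = conf.getBoolean("demo.enabled", false);
        return new int[] { threads, ports.length, on ? 1 : 0 };
    }
}
```

Each getter takes a default that is returned when the property is unset or unparsable, which is why call sites rarely need null checks.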
RunningJob object to track an ongoing job.
JobClient.getJob(JobID).
RunningJob.getID().
JobID object that this task attempt belongs to
JobID object that this tip belongs to
JobPriority for this job.
JobStatus, of the Job.
SequenceFileRecordReader.next(Object, Object).
KeyFieldBasedComparator options
KeyFieldBasedComparator options
KeyFieldBasedPartitioner options
KeyFieldBasedPartitioner options
Compressors for this CompressionCodec
Decompressors for this CompressionCodec
InputSplit.
FileContext.getLinkTarget(Path)
JobContext.getCacheArchives().
JobContext.getCacheFiles().
LocalResource required by the container.
name property as a long.
name property as a long or human readable format.
WrappedMapper.Context for custom implementations.
CompressionCodec for compressing the map outputs.
Mapper class for the job.
Mapper class for the job.
MapRunnable class for the job.
true.
JobClient.getMapTaskReports(JobID)
ContainerRequests matching the given parameters.
Resource allocated by the ResourceManager in the cluster.
Resource allocated by the ResourceManager in the cluster.
mapreduce.map.maxattempts property.
mapred.map.max.attempts property.
mapreduce.reduce.maxattempts property.
mapred.reduce.max.attempts property.
JobConf.getMem