Factory to create client IPC classes.
yarn.ipc.client.factory.class
Type of serialization to use.
yarn.ipc.serializer.type
protocolbuffers
Factory to create server IPC classes.
yarn.ipc.server.factory.class
Factory to create IPC exceptions.
yarn.ipc.exception.factory.class
Factory to create serializable records.
yarn.ipc.record.factory.class
RPC class implementation
yarn.ipc.rpc.class
org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC
The hostname of the RM.
yarn.resourcemanager.hostname
0.0.0.0
The address of the applications manager interface in the RM.
yarn.resourcemanager.address
${yarn.resourcemanager.hostname}:8032
The number of threads used to handle applications manager requests.
yarn.resourcemanager.client.thread-count
50
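For instance, a minimal yarn-site.xml override (using the hypothetical host rm.example.com) only needs to set the hostname; the ${yarn.resourcemanager.hostname}-based addresses above then resolve against it automatically:
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>rm.example.com</value>
  </property>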
The expiry interval for application master reporting.
yarn.am.liveness-monitor.expiry-interval-ms
600000
The Kerberos principal for the resource manager.
yarn.resourcemanager.principal
The address of the scheduler interface.
yarn.resourcemanager.scheduler.address
${yarn.resourcemanager.hostname}:8030
Number of threads to handle the scheduler interface.
yarn.resourcemanager.scheduler.client.thread-count
50
This configures the HTTP endpoint for YARN daemons. The following
values are supported:
- HTTP_ONLY : Service is provided only on http
- HTTPS_ONLY : Service is provided only on https
yarn.http.policy
HTTP_ONLY
The http address of the RM web application.
yarn.resourcemanager.webapp.address
${yarn.resourcemanager.hostname}:8088
The https address of the RM web application.
yarn.resourcemanager.webapp.https.address
${yarn.resourcemanager.hostname}:8090
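As a sketch, switching the daemons to HTTPS only would look like the following in yarn-site.xml; the port 8443 here is an arbitrary example choice, not a default:
  <property>
    <name>yarn.http.policy</name>
    <value>HTTPS_ONLY</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.https.address</name>
    <value>${yarn.resourcemanager.hostname}:8443</value>
  </property>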
The address of the resource tracker interface in the RM.
yarn.resourcemanager.resource-tracker.address
${yarn.resourcemanager.hostname}:8031
Whether ACLs are enabled.
yarn.acl.enable
false
ACL of who can be admin of the YARN cluster.
yarn.admin.acl
*
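A hedged example of turning on ACLs and restricting administration; the user and group names below are placeholders, and the value uses the usual Hadoop ACL syntax of comma-separated users, a space, then comma-separated groups:
  <property>
    <name>yarn.acl.enable</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.admin.acl</name>
    <value>yarn,alice hadoop-admins</value>
  </property>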
The address of the RM admin interface.
yarn.resourcemanager.admin.address
${yarn.resourcemanager.hostname}:8033
Number of threads used to handle RM admin interface.
yarn.resourcemanager.admin.client.thread-count
1
How often the RM should check that the AM is still alive.
yarn.resourcemanager.amliveliness-monitor.interval-ms
1000
Maximum time to wait to establish connection to
ResourceManager.
yarn.resourcemanager.connect.max-wait.ms
900000
How often to try connecting to the
ResourceManager.
yarn.resourcemanager.connect.retry-interval.ms
30000
The maximum number of application attempts. It's a global
setting for all application masters. Each application master can specify
its individual maximum number of application attempts via the API, but the
individual number cannot be more than the global upper bound. If it is,
the ResourceManager will override it. The default number is set to 2, to
allow at least one retry for the AM.
yarn.resourcemanager.am.max-attempts
2
How often to check that containers are still alive.
yarn.resourcemanager.container.liveness-monitor.interval-ms
600000
The keytab for the resource manager.
yarn.resourcemanager.keytab
/etc/krb5.keytab
How long to wait until a node manager is considered dead.
yarn.nm.liveness-monitor.expiry-interval-ms
600000
How often to check that node managers are still alive.
yarn.resourcemanager.nm.liveness-monitor.interval-ms
1000
Path to file with nodes to include.
yarn.resourcemanager.nodes.include-path
Path to file with nodes to exclude.
yarn.resourcemanager.nodes.exclude-path
Number of threads to handle resource tracker calls.
yarn.resourcemanager.resource-tracker.client.thread-count
50
The class to use as the resource scheduler.
yarn.resourcemanager.scheduler.class
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
The minimum allocation for every container request at the RM,
in MBs. Memory requests lower than this won't take effect,
and the specified minimum value will be allocated instead.
yarn.scheduler.minimum-allocation-mb
1024
The maximum allocation for every container request at the RM,
in MBs. Memory requests higher than this won't take effect,
and will get capped to this value.
yarn.scheduler.maximum-allocation-mb
8192
The minimum allocation for every container request at the RM,
in terms of virtual CPU cores. Requests lower than this won't take effect,
and the specified minimum value will be allocated instead.
yarn.scheduler.minimum-allocation-vcores
1
The maximum allocation for every container request at the RM,
in terms of virtual CPU cores. Requests higher than this won't take effect,
and will get capped to this value.
yarn.scheduler.maximum-allocation-vcores
32
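To illustrate how these four bounds are typically overridden together in yarn-site.xml (the numbers below are example sizes for a modest cluster, not recommendations):
  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>512</value>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>4096</value>
  </property>
  <property>
    <name>yarn.scheduler.minimum-allocation-vcores</name>
    <value>1</value>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-vcores</name>
    <value>4</value>
  </property>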
Enable RM to recover state after starting. If true, then
yarn.resourcemanager.store.class must be specified.
yarn.resourcemanager.recovery.enabled
false
The class to use as the persistent store.
If org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore
is used, the store is implicitly fenced, meaning a single ResourceManager
is able to use the store at any point in time. More details on this
implicit fencing, along with setting up appropriate ACLs, are discussed
under yarn.resourcemanager.zk-state-store.root-node.acl.
yarn.resourcemanager.store.class
org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
The maximum number of completed applications the RM state
store keeps; it must be less than or equal to ${yarn.resourcemanager.max-completed-applications}.
By default, it equals ${yarn.resourcemanager.max-completed-applications}.
This ensures that the applications kept in the state store are consistent with
the applications remembered in RM memory.
Any value larger than ${yarn.resourcemanager.max-completed-applications} will
be reset to ${yarn.resourcemanager.max-completed-applications}.
Note that this value impacts the RM recovery performance. Typically,
a smaller value indicates better performance on RM recovery.
yarn.resourcemanager.state-store.max-completed-applications
${yarn.resourcemanager.max-completed-applications}
Host:Port of the ZooKeeper server to be used by the RM. This
must be supplied when using the ZooKeeper based implementation of the
RM state store and/or embedded automatic failover in an HA setting.
yarn.resourcemanager.zk-address
Number of times RM tries to connect to ZooKeeper.
yarn.resourcemanager.zk-num-retries
500
Retry interval in milliseconds when connecting to ZooKeeper.
yarn.resourcemanager.zk-retry-interval-ms
2000
Full path of the ZooKeeper znode where RM state will be
stored. This must be supplied when using
org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore
as the value for yarn.resourcemanager.store.class
yarn.resourcemanager.zk-state-store.parent-path
/rmstore
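Putting the recovery-related properties together, a sketch of a ZooKeeper-backed RM state store might look like this; zk1/zk2/zk3:2181 is a placeholder quorum:
  <property>
    <name>yarn.resourcemanager.recovery.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.store.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
  </property>
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>zk1:2181,zk2:2181,zk3:2181</value>
  </property>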
ZooKeeper session timeout in milliseconds. Session expiration
is managed by the ZooKeeper cluster itself, not by the client. This value is
used by the cluster to determine when the client's session expires.
Expiration happens when the cluster does not hear from the client within
the specified session timeout period (i.e. no heartbeat).
yarn.resourcemanager.zk-timeout-ms
10000
ACLs to be used for ZooKeeper znodes.
yarn.resourcemanager.zk-acl
world:anyone:rwcda
ACLs to be used for the root znode when using ZKRMStateStore in an HA
scenario for fencing.
ZKRMStateStore supports implicit fencing to allow a single
ResourceManager write-access to the store. For fencing, the
ResourceManagers in the cluster share read-write-admin privileges on the
root node, but the Active ResourceManager claims exclusive create-delete
permissions.
By default, when this property is not set, we use the ACLs from
yarn.resourcemanager.zk-acl for shared admin access and
rm-address:random-number for username-based exclusive create-delete
access.
This property allows users to set ACLs of their choice instead of using
the default mechanism. For fencing to work, the ACLs should be
carefully set differently on each ResourceManager such that all the
ResourceManagers have shared admin access and the Active ResourceManager
takes over (exclusively) the create-delete access.
yarn.resourcemanager.zk-state-store.root-node.acl
URI pointing to the location of the FileSystem path where
RM state will be stored. This must be supplied when using
org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
as the value for yarn.resourcemanager.store.class
yarn.resourcemanager.fs.state-store.uri
${hadoop.tmp.dir}/yarn/system/rmstore
HDFS client retry policy specification. HDFS client retry
is always enabled. Specified in pairs of sleep-time and number-of-retries,
i.e. (t0, n0), (t1, n1), ...: the first n0 retries sleep t0 milliseconds on
average, the following n1 retries sleep t1 milliseconds on average, and so on.
yarn.resourcemanager.fs.state-store.retry-policy-spec
2000, 500
Enable RM high-availability. When enabled,
(1) The RM starts in the Standby mode by default, and transitions to
the Active mode when prompted to.
(2) The nodes in the RM ensemble are listed in
yarn.resourcemanager.ha.rm-ids
(3) The id of each RM either comes from yarn.resourcemanager.ha.id
if yarn.resourcemanager.ha.id is explicitly specified or can be
figured out by matching yarn.resourcemanager.address.{id} with local address
(4) The actual physical addresses come from the configs of the pattern
- {rpc-config}.{id}
yarn.resourcemanager.ha.enabled
false
Enable automatic failover.
By default, it is enabled only when HA is enabled.
yarn.resourcemanager.ha.automatic-failover.enabled
true
Enable embedded automatic failover.
By default, it is enabled only when HA is enabled.
The embedded elector relies on the RM state store to handle fencing,
and is primarily intended to be used in conjunction with ZKRMStateStore.
yarn.resourcemanager.ha.automatic-failover.embedded
true
The base znode path to use for storing leader information,
when using ZooKeeper based leader election.
yarn.resourcemanager.ha.automatic-failover.zk-base-path
/yarn-leader-election
Name of the cluster. In an HA setting,
this is used to ensure the RM participates in leader
election for this cluster and ensures it does not affect
other clusters.
yarn.resourcemanager.cluster-id
The list of RM nodes in the cluster when HA is
enabled. See description of yarn.resourcemanager.ha.enabled
for full details on how this is used.
yarn.resourcemanager.ha.rm-ids
The id (string) of the current RM. When HA is enabled, this
is an optional config. The id of the current RM can be set by explicitly
specifying yarn.resourcemanager.ha.id or figured out by matching
yarn.resourcemanager.address.{id} with the local address.
See the description of yarn.resourcemanager.ha.enabled
for full details on how this is used.
yarn.resourcemanager.ha.id
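As an illustrative sketch (the rm1/rm2 ids and the hostnames are placeholders), an HA setup wires these properties together; the per-RM physical addresses follow the {rpc-config}.{id} pattern described above, and the yarn.resourcemanager.zk-address shown earlier would also be needed for embedded automatic failover:
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>yarn-cluster-1</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address.rm1</name>
    <value>rm1.example.com:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address.rm2</name>
    <value>rm2.example.com:8032</value>
  </property>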
When HA is enabled, the class to be used by Clients, AMs and
NMs to failover to the Active RM. It should extend
org.apache.hadoop.yarn.client.RMFailoverProxyProvider
yarn.client.failover-proxy-provider
org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider
When HA is enabled, the max number of times
FailoverProxyProvider should attempt failover. When set,
this overrides the yarn.resourcemanager.connect.max-wait.ms. When
not set, this is inferred from
yarn.resourcemanager.connect.max-wait.ms.
yarn.client.failover-max-attempts
When HA is enabled, the sleep base (in milliseconds) to be
used for calculating the exponential delay between failovers. When set,
this overrides the yarn.resourcemanager.connect.* settings. When
not set, yarn.resourcemanager.connect.retry-interval.ms is used instead.
yarn.client.failover-sleep-base-ms
When HA is enabled, the maximum sleep time (in milliseconds)
between failovers. When set, this overrides the
yarn.resourcemanager.connect.* settings. When not set,
yarn.resourcemanager.connect.retry-interval.ms is used instead.
yarn.client.failover-sleep-max-ms
When HA is enabled, the number of retries per
attempt to connect to a ResourceManager. In other words,
it is the ipc.client.connect.max.retries to be used during
failover attempts
yarn.client.failover-retries
0
When HA is enabled, the number of retries per
attempt to connect to a ResourceManager on socket timeouts. In other
words, it is the ipc.client.connect.max.retries.on.timeouts to be used
during failover attempts
yarn.client.failover-retries-on-socket-timeouts
0
The maximum number of completed applications RM keeps.
yarn.resourcemanager.max-completed-applications
10000
Interval at which the delayed token removal thread runs
yarn.resourcemanager.delayed.delegation-token.removal-interval-ms
30000
Interval for the rollover of the master key used to generate
application tokens.
yarn.resourcemanager.application-tokens.master-key-rolling-interval-secs
86400
Interval for the rollover of the master key used to generate
container tokens. It is expected to be much greater than
yarn.nm.liveness-monitor.expiry-interval-ms and
yarn.rm.container-allocation.expiry-interval-ms. Otherwise the
behavior is undefined.
yarn.resourcemanager.container-tokens.master-key-rolling-interval-secs
86400
The heart-beat interval in milliseconds for every NodeManager in the cluster.
yarn.resourcemanager.nodemanagers.heartbeat-interval-ms
1000
The minimum allowed version of a connecting nodemanager. The valid values are
NONE (no version checking), EqualToRM (the nodemanager's version is equal to
or greater than the RM version), or a Version String.
yarn.resourcemanager.nodemanager.minimum.version
NONE
Enable a set of periodic monitors (specified in
yarn.resourcemanager.scheduler.monitor.policies) that affect the
scheduler.
yarn.resourcemanager.scheduler.monitor.enable
false
The list of SchedulingEditPolicy classes that interact with
the scheduler. A particular module may be incompatible with the
scheduler, other policies, or a configuration of either.
yarn.resourcemanager.scheduler.monitor.policies
org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy
Number of worker threads that write the history data.
yarn.resourcemanager.history-writer.multi-threaded-dispatcher.pool-size
10
The class to use as the configuration provider.
If org.apache.hadoop.yarn.LocalConfigurationProvider is used,
the local configuration will be loaded.
If org.apache.hadoop.yarn.FileSystemBasedConfigurationProvider is used,
the configuration which will be loaded should be uploaded to the remote file system first.
yarn.resourcemanager.configuration.provider-class
org.apache.hadoop.yarn.LocalConfigurationProvider
The hostname of the NM.
yarn.nodemanager.hostname
0.0.0.0
The address of the container manager in the NM.
yarn.nodemanager.address
${yarn.nodemanager.hostname}:0
Environment variables that should be forwarded from the NodeManager's environment to the container's.
yarn.nodemanager.admin-env
MALLOC_ARENA_MAX=$MALLOC_ARENA_MAX
Environment variables that containers may override rather than use NodeManager's default.
yarn.nodemanager.env-whitelist
JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,HADOOP_YARN_HOME
Who will execute (launch) the containers.
yarn.nodemanager.container-executor.class
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor
Number of threads container manager uses.
yarn.nodemanager.container-manager.thread-count
20
Number of threads used in cleanup.
yarn.nodemanager.delete.thread-count
4
Number of seconds after an application finishes before the nodemanager's
DeletionService will delete the application's localized file directory
and log directory.
To diagnose YARN application problems, set this property's value large
enough (for example, to 600 = 10 minutes) to permit examination of these
directories. After changing the property's value, you must restart the
nodemanager in order for it to have an effect.
The roots of YARN applications' work directories are configurable with
the yarn.nodemanager.local-dirs property (see below), and the roots
of the YARN applications' log directories are configurable with the
yarn.nodemanager.log-dirs property (see also below).
yarn.nodemanager.delete.debug-delay-sec
0
Keytab for NM.
yarn.nodemanager.keytab
/etc/krb5.keytab
List of directories to store localized files in. An
application's localized file directory will be found in:
${yarn.nodemanager.local-dirs}/usercache/${user}/appcache/application_${appid}.
Individual containers' work directories, called container_${contid}, will
be subdirectories of this.
yarn.nodemanager.local-dirs
${hadoop.tmp.dir}/nm-local-dir
It limits the maximum number of files which will be localized
in a single local directory. If the limit is reached then sub-directories
will be created and new files will be localized in them. If it is set to
a value less than or equal to 36 [which is the number of sub-directories
(0-9 and then a-z)] then the NodeManager will fail to start. For example,
[for the public cache] if this is configured with a value of 40 (4 files +
36 sub-directories) and the local-dir is "/tmp/local-dir1" then it will
allow 4 files to be created directly inside "/tmp/local-dir1/filecache".
For files that are localized further it will create a sub-directory "0"
inside "/tmp/local-dir1/filecache" and will localize files inside it
until it becomes full. If a file is removed from a sub-directory that
is marked full, then that sub-directory will be used again to
localize files.
yarn.nodemanager.local-cache.max-files-per-directory
8192
Address where the localizer IPC is.
yarn.nodemanager.localizer.address
${yarn.nodemanager.hostname}:8040
Interval in between cache cleanups.
yarn.nodemanager.localizer.cache.cleanup.interval-ms
600000
Target size of localizer cache in MB, per local directory.
yarn.nodemanager.localizer.cache.target-size-mb
10240
Number of threads to handle localization requests.
yarn.nodemanager.localizer.client.thread-count
5
Number of threads to use for localization fetching.
yarn.nodemanager.localizer.fetch.thread-count
4
Where to store container logs. An application's localized log directory
will be found in ${yarn.nodemanager.log-dirs}/application_${appid}.
Individual containers' log directories will be below this, in directories
named container_${contid}. Each container directory will contain the files
stderr, stdin, and syslog generated by that container.
yarn.nodemanager.log-dirs
${yarn.log.dir}/userlogs
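For example (the /data/... paths are placeholders for per-disk mount points), spreading local and log directories across several disks is done with comma-separated lists:
  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/data/1/yarn/local,/data/2/yarn/local,/data/3/yarn/local</value>
  </property>
  <property>
    <name>yarn.nodemanager.log-dirs</name>
    <value>/data/1/yarn/logs,/data/2/yarn/logs,/data/3/yarn/logs</value>
  </property>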
Whether to enable log aggregation
yarn.log-aggregation-enable
false
How long to keep aggregation logs before deleting them. -1 disables.
Be careful: setting this too small will spam the name node.
yarn.log-aggregation.retain-seconds
-1
How long to wait between aggregated log retention checks.
If set to 0 or a negative value then the value is computed as one-tenth
of the aggregated log retention time. Be careful: setting this too small
will spam the name node.
yarn.log-aggregation.retain-check-interval-seconds
-1
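A minimal sketch of enabling log aggregation with a one-week retention; 604800 seconds is simply 7 days, chosen for illustration:
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>604800</value>
  </property>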
Time in seconds to retain user logs. Only applicable if
log aggregation is disabled
yarn.nodemanager.log.retain-seconds
10800
Where to aggregate logs to.
yarn.nodemanager.remote-app-log-dir
/tmp/logs
The remote log dir will be created at
{yarn.nodemanager.remote-app-log-dir}/${user}/{thisParam}
yarn.nodemanager.remote-app-log-dir-suffix
logs
Amount of physical memory, in MB, that can be allocated
for containers.
yarn.nodemanager.resource.memory-mb
8192
Whether physical memory limits will be enforced for
containers.
yarn.nodemanager.pmem-check-enabled
true
Whether virtual memory limits will be enforced for
containers.
yarn.nodemanager.vmem-check-enabled
true
Ratio of virtual memory to physical memory when
setting memory limits for containers. Container allocations are
expressed in terms of physical memory, and virtual memory usage
is allowed to exceed this allocation by this ratio.
yarn.nodemanager.vmem-pmem-ratio
2.1
Number of CPU cores that can be allocated
for containers.
yarn.nodemanager.resource.cpu-vcores
8
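For instance, on a hypothetical node with 64 GB of RAM and 16 cores where roughly three quarters of memory is reserved for containers, the two resource properties might be set as follows (sizes are illustrative only):
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>49152</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>16</value>
  </property>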
NM Webapp address.
yarn.nodemanager.webapp.address
${yarn.nodemanager.hostname}:8042
How often to monitor containers.
yarn.nodemanager.container-monitor.interval-ms
3000
Class that calculates containers' current resource utilization.
yarn.nodemanager.container-monitor.resource-calculator.class
Frequency of running node health script.
yarn.nodemanager.health-checker.interval-ms
600000
Script time out period.
yarn.nodemanager.health-checker.script.timeout-ms
1200000
The health check script to run.
yarn.nodemanager.health-checker.script.path
The arguments to pass to the health check script.
yarn.nodemanager.health-checker.script.opts
Frequency of running disk health checker code.
yarn.nodemanager.disk-health-checker.interval-ms
120000
The minimum fraction of disks that must be healthy for the
nodemanager to launch new containers. This corresponds to both
yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs, i.e. if
fewer healthy local-dirs (or log-dirs) than this are available, then
new containers will not be launched on this node.
yarn.nodemanager.disk-health-checker.min-healthy-disks
0.25
The maximum percentage of disk space utilization allowed after
which a disk is marked as bad. Values can range from 0.0 to 100.0.
If the value is greater than or equal to 100, the nodemanager will check
for a full disk. This applies to yarn.nodemanager.local-dirs and
yarn.nodemanager.log-dirs.
yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage
100.0
The minimum space that must be available on a disk for
it to be used. This applies to yarn.nodemanager.local-dirs and
yarn.nodemanager.log-dirs.
yarn.nodemanager.disk-health-checker.min-free-space-per-disk-mb
0
The path to the Linux container executor.
yarn.nodemanager.linux-container-executor.path
The class which should help the LCE handle resources.
yarn.nodemanager.linux-container-executor.resources-handler.class
org.apache.hadoop.yarn.server.nodemanager.util.DefaultLCEResourcesHandler
The cgroups hierarchy under which to place YARN processes (cannot contain commas).
If yarn.nodemanager.linux-container-executor.cgroups.mount is false (that is, if cgroups have
been pre-configured), then this cgroups hierarchy must already exist and be writable by the
NodeManager user, otherwise the NodeManager may fail.
Only used when the LCE resources handler is set to the CgroupsLCEResourcesHandler.
yarn.nodemanager.linux-container-executor.cgroups.hierarchy
/hadoop-yarn
Whether the LCE should attempt to mount cgroups if not found.
Only used when the LCE resources handler is set to the CgroupsLCEResourcesHandler.
yarn.nodemanager.linux-container-executor.cgroups.mount
false
Where the LCE should attempt to mount cgroups if not found. Common locations
include /sys/fs/cgroup and /cgroup; the default location can vary depending on the Linux
distribution in use. This path must exist before the NodeManager is launched.
Only used when the LCE resources handler is set to the CgroupsLCEResourcesHandler, and
yarn.nodemanager.linux-container-executor.cgroups.mount is true.
yarn.nodemanager.linux-container-executor.cgroups.mount-path
The UNIX user that containers will run as when Linux-container-executor
is used in nonsecure mode (a use case for this is using cgroups).
yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user
nobody
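Combining the LinuxContainerExecutor properties above, a hedged sketch of a cgroups-based setup (assuming cgroups are pre-mounted, so mounting is left disabled; the Cgroups handler lives in the same package as the default handler shown above) could be:
  <property>
    <name>yarn.nodemanager.container-executor.class</name>
    <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
  </property>
  <property>
    <name>yarn.nodemanager.linux-container-executor.resources-handler.class</name>
    <value>org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler</value>
  </property>
  <property>
    <name>yarn.nodemanager.linux-container-executor.cgroups.hierarchy</name>
    <value>/hadoop-yarn</value>
  </property>
  <property>
    <name>yarn.nodemanager.linux-container-executor.cgroups.mount</name>
    <value>false</value>
  </property>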
The allowed pattern for UNIX user names enforced by
Linux-container-executor when used in nonsecure mode (use case for this
is using cgroups). The default value is taken from /usr/sbin/adduser
yarn.nodemanager.linux-container-executor.nonsecure-mode.user-pattern
^[_.A-Za-z0-9][-@_.A-Za-z0-9]{0,255}?[$]?$
T-file compression types used to compress aggregated logs.
yarn.nodemanager.log-aggregation.compression-type
none
The Kerberos principal for the node manager.
yarn.nodemanager.principal
The valid service name should only contain a-zA-Z0-9_ and cannot start with numbers.
yarn.nodemanager.aux-services
Number of ms to wait between sending a SIGTERM and a SIGKILL to a container.
yarn.nodemanager.sleep-delay-before-sigkill.ms
250
Max time to wait for a process to come up when trying to cleanup a container
yarn.nodemanager.process-kill-wait.ms
2000
Max time, in seconds, to wait to establish a connection to the RM when the NM starts.
The NM will shut down if it cannot connect to the RM within the specified max time period.
If the value is set to -1, then the NM will retry forever.
yarn.nodemanager.resourcemanager.connect.wait.secs
900
Time interval, in seconds, between each NM attempt to connect to RM.
yarn.nodemanager.resourcemanager.connect.retry_interval.secs
30
The minimum allowed version of a resourcemanager that a nodemanager will connect to.
The valid values are NONE (no version checking), EqualToNM (the resourcemanager's version is
equal to or greater than the NM version), or a Version String.
yarn.nodemanager.resourcemanager.minimum.version
NONE
Max number of threads in NMClientAsync to process container
management events
yarn.client.nodemanager-client-async.thread-pool-max-size
500
Maximum number of proxy connections to cache for node managers. It should always be
more than 1. NMClient and MRAppMaster will use this to cache connections
with node managers. There will be at most one connection per node manager.
For example, configuring it to a value of 5 will ensure that the client has at
most 5 connections cached, with 5 different node managers. These
connections will be timed out if idle for more than the system-wide idle
timeout period. If a token is used for authentication, it will be used
only at connection creation time. If a new token is received, the earlier
connection should be closed in order to use the newer token. This and
yarn.client.nodemanager-client-async.thread-pool-max-size are related
and should be kept in sync (though they need not be equal).
yarn.client.max-nodemanagers-proxies
500
yarn.nodemanager.aux-services.mapreduce_shuffle.class
org.apache.hadoop.mapred.ShuffleHandler
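For reference, the standard way to wire up the MapReduce shuffle as an auxiliary service pairs the service name with the handler class shown above:
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>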
mapreduce.job.jar
mapreduce.job.hdfs-servers
${fs.defaultFS}
The Kerberos principal for the proxy, if the proxy is not
running as part of the RM.
yarn.web-proxy.principal
Keytab for WebAppProxy, if the proxy is not running as part of
the RM.
yarn.web-proxy.keytab
The address for the web proxy as HOST:PORT; if this is not
given, the proxy will run as part of the RM.
yarn.web-proxy.address
CLASSPATH for YARN applications. A comma-separated list
of CLASSPATH entries. When this value is empty, the following default
CLASSPATH for YARN applications will be used.
For Linux:
$HADOOP_CONF_DIR,
$HADOOP_COMMON_HOME/share/hadoop/common/*,
$HADOOP_COMMON_HOME/share/hadoop/common/lib/*,
$HADOOP_HDFS_HOME/share/hadoop/hdfs/*,
$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,
$HADOOP_YARN_HOME/share/hadoop/yarn/*,
$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*
For Windows:
%HADOOP_CONF_DIR%,
%HADOOP_COMMON_HOME%/share/hadoop/common/*,
%HADOOP_COMMON_HOME%/share/hadoop/common/lib/*,
%HADOOP_HDFS_HOME%/share/hadoop/hdfs/*,
%HADOOP_HDFS_HOME%/share/hadoop/hdfs/lib/*,
%HADOOP_YARN_HOME%/share/hadoop/yarn/*,
%HADOOP_YARN_HOME%/share/hadoop/yarn/lib/*
yarn.application.classpath
Indicates to clients whether the timeline service is enabled.
If enabled, clients will put entities and events to the timeline server.
yarn.timeline-service.enabled
false
The hostname of the timeline service web application.
yarn.timeline-service.hostname
0.0.0.0
This is the default address for the timeline server to start the
RPC server.
yarn.timeline-service.address
${yarn.timeline-service.hostname}:10200
The http address of the timeline service web application.
yarn.timeline-service.webapp.address
${yarn.timeline-service.hostname}:8188
The https address of the timeline service web application.
yarn.timeline-service.webapp.https.address
${yarn.timeline-service.hostname}:8190
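A minimal sketch of enabling the timeline service on a dedicated host (timeline.example.com is a placeholder) relies on the hostname substitution used by the address defaults above:
  <property>
    <name>yarn.timeline-service.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.timeline-service.hostname</name>
    <value>timeline.example.com</value>
  </property>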
Store class name for timeline store.
yarn.timeline-service.store-class
org.apache.hadoop.yarn.server.applicationhistoryservice.timeline.LeveldbTimelineStore
Enable age off of timeline store data.
yarn.timeline-service.ttl-enable
true
Time to live for timeline store data in milliseconds.
yarn.timeline-service.ttl-ms
604800000
Store file name for leveldb timeline store.
yarn.timeline-service.leveldb-timeline-store.path
${hadoop.tmp.dir}/yarn/timeline
Length of time to wait between deletion cycles of leveldb timeline store in milliseconds.
yarn.timeline-service.leveldb-timeline-store.ttl-interval-ms
300000
Size of read cache for uncompressed blocks for leveldb timeline store in bytes.
yarn.timeline-service.leveldb-timeline-store.read-cache-size
104857600
Size of cache for recently read entity start times for leveldb timeline store in number of entities.
yarn.timeline-service.leveldb-timeline-store.start-time-read-cache-size
10000
Size of cache for recently written entity start times for leveldb timeline store in number of entities.
yarn.timeline-service.leveldb-timeline-store.start-time-write-cache-size
10000
Handler thread count to serve the client RPC requests.
yarn.timeline-service.handler-thread-count
10
Indicates to the ResourceManager as well as clients whether the
history service is enabled or not. If enabled, the ResourceManager starts
recording historical data that the ApplicationHistory service can consume.
Similarly, clients can redirect to the history service when applications
finish if this is enabled.
yarn.timeline-service.generic-application-history.enabled
false
URI pointing to the location of the FileSystem path where
the history will be persisted. This must be supplied when using
org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore
as the value for yarn.timeline-service.generic-application-history.store-class
yarn.timeline-service.generic-application-history.fs-history-store.uri
${hadoop.tmp.dir}/yarn/timeline/generic-history
T-file compression types used to compress history data.
yarn.timeline-service.generic-application-history.fs-history-store.compression-type
none
Store class name for history store, defaulting to file
system store
yarn.timeline-service.generic-application-history.store-class
org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore
The interval that the yarn client library uses to poll the
completion status of the asynchronous API of application client protocol.
yarn.client.application-client-protocol.poll-interval-ms
200
RSS usage of a process computed via
/proc/pid/stat is not very accurate as it includes shared pages of a
process. /proc/pid/smaps provides useful information like
Private_Dirty, Private_Clean, Shared_Dirty, Shared_Clean which can be used
for computing more accurate RSS. When this flag is enabled, RSS is computed
as Min(Shared_Dirty, Pss) + Private_Clean + Private_Dirty. It excludes
read-only shared mappings in RSS computation.
yarn.nodemanager.container-monitor.procfs-tree.smaps-based-rss.enabled
false