Factory to create client IPC classes.
yarn.ipc.client.factory.class
Type of serialization to use.
yarn.ipc.serializer.type
protocolbuffers
Factory to create server IPC classes.
yarn.ipc.server.factory.class
Factory to create IPC exceptions.
yarn.ipc.exception.factory.class
Factory to create serializable records.
yarn.ipc.record.factory.class
RPC class implementation.
yarn.ipc.rpc.class
org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC
The address of the applications manager interface in the RM.
yarn.resourcemanager.address
0.0.0.0:8032
The number of threads used to handle applications manager requests.
yarn.resourcemanager.client.thread-count
50
The expiry interval for application master reporting.
yarn.am.liveness-monitor.expiry-interval-ms
600000
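All of these defaults can be overridden in yarn-site.xml. A minimal sketch that binds the RM client address to a concrete host instead of the 0.0.0.0 wildcard (the hostname is illustrative):

```xml
<!-- yarn-site.xml: override the wildcard bind address (hostname is an example) -->
<configuration>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>rm.example.com:8032</value>
  </property>
</configuration>
```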
The Kerberos principal for the resource manager.
yarn.resourcemanager.principal
The address of the scheduler interface.
yarn.resourcemanager.scheduler.address
0.0.0.0:8030
Number of threads to handle scheduler interface.
yarn.resourcemanager.scheduler.client.thread-count
50
The address of the RM web application.
yarn.resourcemanager.webapp.address
0.0.0.0:8088
The address of the resource tracker interface.
yarn.resourcemanager.resource-tracker.address
0.0.0.0:8031
Whether ACLs are enabled.
yarn.acl.enable
true
ACL of who can be admin of the YARN cluster.
yarn.admin.acl
*
The address of the RM admin interface.
yarn.resourcemanager.admin.address
0.0.0.0:8033
Number of threads used to handle RM admin interface.
yarn.resourcemanager.admin.client.thread-count
1
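A sketch of enabling ACLs while restricting cluster administration; the user and group names are illustrative, and the value uses Hadoop's ACL format of comma-separated users, a space, then comma-separated groups:

```xml
<configuration>
  <property>
    <name>yarn.acl.enable</name>
    <value>true</value>
  </property>
  <property>
    <!-- the "yarn" user plus members of the "hadoop-admins" group (examples) -->
    <name>yarn.admin.acl</name>
    <value>yarn hadoop-admins</value>
  </property>
</configuration>
```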
How often the RM should check that the AM is still alive.
yarn.resourcemanager.amliveliness-monitor.interval-ms
1000
The maximum number of application master retries.
yarn.resourcemanager.am.max-retries
1
How often to check that containers are still alive.
yarn.resourcemanager.container.liveness-monitor.interval-ms
600000
The keytab for the resource manager.
yarn.resourcemanager.keytab
/etc/krb5.keytab
How long to wait until a node manager is considered dead.
yarn.nm.liveness-monitor.expiry-interval-ms
600000
How often to check that node managers are still alive.
yarn.resourcemanager.nm.liveness-monitor.interval-ms
1000
Path to file with nodes to include.
yarn.resourcemanager.nodes.include-path
Path to file with nodes to exclude.
yarn.resourcemanager.nodes.exclude-path
Number of threads to handle resource tracker calls.
yarn.resourcemanager.resource-tracker.client.thread-count
50
The class to use as the resource scheduler.
yarn.resourcemanager.scheduler.class
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
The minimum allocation size for every container request at the RM,
in MB. Memory requests lower than this won't take effect, and the
specified value will be allocated as the minimum.
yarn.scheduler.minimum-allocation-mb
1024
The maximum allocation size for every container request at the RM,
in MB. Memory requests higher than this won't take effect, and will
be capped to this value.
yarn.scheduler.maximum-allocation-mb
8192
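For example, a cluster that wants containers no smaller than 512 MB and no larger than 4 GB could override both bounds (the numbers are illustrative):

```xml
<configuration>
  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>512</value>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>4096</value>
  </property>
</configuration>
```

With these bounds, a request for 300 MB would be granted 512 MB, and a request for 6 GB would be capped at 4096 MB.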
The class to use as the persistent store.
yarn.resourcemanager.store.class
The address of the zookeeper instance to use with ZK store.
yarn.resourcemanager.zookeeper-store.address
The zookeeper session timeout for the zookeeper store.
yarn.resourcemanager.zookeeper-store.session.timeout-ms
60000
The maximum number of completed applications RM keeps.
yarn.resourcemanager.max-completed-applications
10000
Interval at which the delayed delegation-token removal thread runs.
yarn.resourcemanager.delayed.delegation-token.removal-interval-ms
30000
Interval for rolling over the master key used to generate
application tokens.
yarn.resourcemanager.application-tokens.master-key-rolling-interval-secs
86400
Interval for rolling over the master key used to generate
container tokens. It is expected to be much greater than
yarn.nm.liveness-monitor.expiry-interval-ms and
yarn.rm.container-allocation.expiry-interval-ms; otherwise the
behavior is undefined.
yarn.resourcemanager.container-tokens.master-key-rolling-interval-secs
86400
Address of the node manager IPC.
yarn.nodemanager.address
0.0.0.0:0
Environment variables that should be forwarded from the NodeManager's environment to the container's.
yarn.nodemanager.admin-env
MALLOC_ARENA_MAX=$MALLOC_ARENA_MAX
Environment variables that containers may override rather than use NodeManager's default.
yarn.nodemanager.env-whitelist
JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,YARN_HOME
Who will execute (launch) the containers.
yarn.nodemanager.container-executor.class
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor
Number of threads container manager uses.
yarn.nodemanager.container-manager.thread-count
20
Number of threads used in cleanup.
yarn.nodemanager.delete.thread-count
4
Heartbeat interval to the RM.
yarn.nodemanager.heartbeat.interval-ms
1000
Keytab for NM.
yarn.nodemanager.keytab
/etc/krb5.keytab
List of directories to store localized files in.
yarn.nodemanager.local-dirs
/tmp/nm-local-dir
Address where the localizer IPC is.
yarn.nodemanager.localizer.address
0.0.0.0:8040
Interval between cache cleanups.
yarn.nodemanager.localizer.cache.cleanup.interval-ms
600000
Target size of localizer cache in MB, per local directory.
yarn.nodemanager.localizer.cache.target-size-mb
10240
Number of threads to handle localization requests.
yarn.nodemanager.localizer.client.thread-count
5
Number of threads to use for localization fetching.
yarn.nodemanager.localizer.fetch.thread-count
4
Where to store container logs.
yarn.nodemanager.log-dirs
/tmp/logs
Whether to enable log aggregation.
yarn.log-aggregation-enable
false
How long to keep aggregated logs before deleting them. -1 disables deletion.
Be careful: setting this too small will spam the name node.
yarn.log-aggregation.retain-seconds
-1
How long to wait between aggregated log retention checks.
If set to 0 or a negative value, the value is computed as one-tenth
of the aggregated log retention time. Be careful: setting this too
small will spam the name node.
yarn.log-aggregation.retain-check-interval-seconds
-1
Time in seconds to retain user logs. Only applicable if
log aggregation is disabled.
yarn.nodemanager.log.retain-seconds
10800
Where to aggregate logs to.
yarn.nodemanager.remote-app-log-dir
/tmp/logs
The remote log dir will be created at
${yarn.nodemanager.remote-app-log-dir}/${user}/${thisParam}
yarn.nodemanager.remote-app-log-dir-suffix
logs
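Putting the aggregation settings together, a sketch that enables log aggregation with a one-week retention (the retention value is illustrative):

```xml
<configuration>
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>
  <property>
    <!-- 7 days; the retention check interval then defaults to one-tenth of this -->
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>604800</value>
  </property>
</configuration>
```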
Amount of physical memory, in MB, that can be allocated
for containers.
yarn.nodemanager.resource.memory-mb
8192
Ratio of virtual memory to physical memory used when setting memory
limits for containers. Container allocations are expressed in terms
of physical memory, and virtual memory usage is allowed to exceed
the physical allocation by this ratio.
yarn.nodemanager.vmem-pmem-ratio
2.1
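As a sketch, a node with 48 GB of RAM might reserve some memory for the OS and Hadoop daemons and hand the rest to containers (the numbers are illustrative):

```xml
<configuration>
  <property>
    <!-- 40 GB for containers; the remaining 8 GB is left for the OS and daemons -->
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>40960</value>
  </property>
  <property>
    <!-- with the default ratio, a 1 GB container may use up to 2.1 GB of virtual memory -->
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>2.1</value>
  </property>
</configuration>
```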
NM Webapp address.
yarn.nodemanager.webapp.address
0.0.0.0:8042
How often to monitor containers.
yarn.nodemanager.container-monitor.interval-ms
3000
Class that calculates the containers' current resource utilization.
yarn.nodemanager.container-monitor.resource-calculator.class
Frequency of running node health script.
yarn.nodemanager.health-checker.interval-ms
600000
Script time out period.
yarn.nodemanager.health-checker.script.timeout-ms
1200000
The health check script to run.
yarn.nodemanager.health-checker.script.path
The arguments to pass to the health check script.
yarn.nodemanager.health-checker.script.opts
Frequency of running disk health checker code.
yarn.nodemanager.disk-health-checker.interval-ms
120000
The minimum fraction of disks that must be healthy for the
nodemanager to launch new containers. This corresponds to both
yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs, i.e. if
fewer healthy local-dirs (or log-dirs) are available, then
new containers will not be launched on this node.
yarn.nodemanager.disk-health-checker.min-healthy-disks
0.25
The path to the Linux container executor.
yarn.nodemanager.linux-container-executor.path
T-file compression type used to compress aggregated logs.
yarn.nodemanager.log-aggregation.compression-type
none
The Kerberos principal for the node manager.
yarn.nodemanager.principal
yarn.nodemanager.aux-services
Number of milliseconds to wait between sending a SIGTERM and a SIGKILL to a container.
yarn.nodemanager.sleep-delay-before-sigkill.ms
250
Maximum time to wait for a process to come up when trying to clean up a container.
yarn.nodemanager.process-kill-wait.ms
2000
yarn.nodemanager.aux-services.mapreduce.shuffle.class
org.apache.hadoop.mapred.ShuffleHandler
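For MapReduce jobs to shuffle map output on this version of YARN, the aux-services list and the shuffle handler class are typically wired together as below; note the service name must match the `mapreduce.shuffle` component of the class property key (later Hadoop releases renamed the service):

```xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce.shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>
```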
mapreduce.job.jar
mapreduce.job.hdfs-servers
${fs.defaultFS}
The Kerberos principal for the proxy, if the proxy is not
running as part of the RM.
yarn.web-proxy.principal
Keytab for WebAppProxy, if the proxy is not running as part of
the RM.
yarn.web-proxy.keytab
The address for the web proxy as HOST:PORT. If this is not
given, or if it matches yarn.resourcemanager.address, the proxy will
run as part of the RM.
yarn.web-proxy.address
Classpath for typical applications.
yarn.application.classpath
$HADOOP_CONF_DIR,
$HADOOP_COMMON_HOME/share/hadoop/common/*,
$HADOOP_COMMON_HOME/share/hadoop/common/lib/*,
$HADOOP_HDFS_HOME/share/hadoop/hdfs/*,
$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,
$YARN_HOME/share/hadoop/yarn/*,
$YARN_HOME/share/hadoop/mapreduce/*,
$YARN_HOME/share/hadoop/mapreduce/lib/*