Overview

All hadoop commands are invoked by the bin/hadoop script. Running the hadoop script without any arguments prints the description for all commands.

Usage: hadoop [--config confdir] [COMMAND] [GENERIC_OPTIONS] [COMMAND_OPTIONS]

Hadoop has an option parsing framework that parses generic options before running the requested command class.

COMMAND_OPTION Description
--config confdir Overrides the default configuration directory. Default is $HADOOP_HOME/conf.
GENERIC_OPTIONS The common set of options supported by multiple commands.
COMMAND_OPTIONS Various commands with their options are described in the following sections. The commands have been grouped into User Commands and Administration Commands.
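
For example, to point the hadoop script at an alternate configuration directory (the path below is illustrative):

Example: hadoop --config /etc/hadoop/conf version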

Generic Options

The following options are supported by dfsadmin, fs, fsck, job and fetchdt. Applications should implement Tool to support GenericOptions; an illustrative invocation that combines several of these options follows the table below.

GENERIC_OPTION Description
-conf <configuration file> Specify an application configuration file.
-D <property>=<value> Use value for given property.
-jt <local> or <resourcemanager:port> Specify a ResourceManager. Applies only to job.
-files <comma separated list of files> Specify comma separated files to be copied to the map reduce cluster. Applies only to job.
-libjars <comma separated list of jars> Specify comma separated jar files to include in the classpath. Applies only to job.
-archives <comma separated list of archives> Specify comma separated archives to be unarchived on the compute machines. Applies only to job.
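
As an illustration of how the generic options are parsed ahead of command-specific arguments, a Tool-based MapReduce job might be launched as follows. The jar name, class name, file names and input/output directories are hypothetical:

Example: hadoop jar myapp.jar com.example.MyTool -conf my-conf.xml -D mapreduce.job.queuename=default -files cache.txt -libjars mylib.jar input output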

User Commands

Commands useful for users of a hadoop cluster.

archive

Creates a hadoop archive. More information can be found at Hadoop Archives Guide.

credential

Command to manage credentials, passwords and secrets within credential providers.

The CredentialProvider API in Hadoop separates applications from how they store their required passwords/secrets. To indicate a particular provider type and location, the user must set the hadoop.security.credential.provider.path configuration element in core-site.xml or use the command line option -provider on each of the following commands. This provider path is a comma-separated list of URLs that indicates the type and location of a list of providers that should be consulted. For example, the following path:

user:///,jceks://file/tmp/test.jceks,jceks://hdfs@nn1.example.com/my/path/test.jceks

indicates that the current user's credentials file should be consulted through the User Provider, that the local file located at /tmp/test.jceks is a Java Keystore Provider and that the file located within HDFS at nn1.example.com/my/path/test.jceks is also a store for a Java Keystore Provider.

The credential command is most often used to provision a password or secret to a particular credential store provider. To explicitly indicate which provider store to use, pass the -provider option. Otherwise, given a path of multiple providers, the first non-transient provider will be used, which may or may not be the one you intended.

Example: -provider jceks://file/tmp/test.jceks

Usage: hadoop credential <subcommand> [options]

COMMAND_OPTION Description
create alias [-v value][-provider provider-path] Prompts the user for a credential to be stored as the given alias when a value is not provided via -v. The hadoop.security.credential.provider.path within the core-site.xml file will be used unless a -provider is indicated.
delete alias [-i][-provider provider-path] Deletes the credential with the provided alias and optionally warns the user when --interactive is used. The hadoop.security.credential.provider.path within the core-site.xml file will be used unless a -provider is indicated.
list [-provider provider-path] Lists all of the credential aliases. The hadoop.security.credential.provider.path within the core-site.xml file will be used unless a -provider is indicated.
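
For example, assuming the local keystore used above, a secret can be provisioned and then verified (the alias name is illustrative):

Example: hadoop credential create mydb.password -provider jceks://file/tmp/test.jceks
Example: hadoop credential list -provider jceks://file/tmp/test.jceks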

distcp

Copies files or directories recursively. More information can be found at Hadoop DistCp Guide.

fs

Deprecated, use hdfs dfs instead.

fsck

Deprecated, use hdfs fsck instead.

fetchdt

Deprecated, use hdfs fetchdt instead.

jar

Runs a jar file. Users can bundle their MapReduce code in a jar file and execute it using this command.

Usage: hadoop jar <jar> [mainClass] args...

Streaming jobs are run via this command; see the Streaming examples for details.

The word count example is also run using the jar command; see the Wordcount example for details.
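
For instance, the word count driver bundled with the MapReduce examples jar can be launched as follows; the jar file name and the HDFS input/output paths are illustrative:

Example: hadoop jar hadoop-mapreduce-examples.jar wordcount /user/joe/input /user/joe/output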

job

Deprecated. Use mapred job instead.

pipes

Deprecated. Use mapred pipes instead.

queue

Deprecated. Use mapred queue instead.

version

Prints the version.

Usage: hadoop version

CLASSNAME

The hadoop script can be used to invoke any class.

Usage: hadoop CLASSNAME

Runs the class named CLASSNAME.
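
For example, any class with a main method on the Hadoop classpath can be invoked this way; the class below is one such class and prints version and build information:

Example: hadoop org.apache.hadoop.util.VersionInfo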

classpath

Prints the class path needed to get the Hadoop jar and the required libraries. If called without arguments, then prints the classpath set up by the command scripts, which is likely to contain wildcards in the classpath entries. Additional options print the classpath after wildcard expansion or write the classpath into the manifest of a jar file. The latter is useful in environments where wildcards cannot be used and the expanded classpath exceeds the maximum supported command line length.

Usage: hadoop classpath [--glob|--jar <path>|-h|--help]

COMMAND_OPTION Description
--glob expand wildcards
--jar path write classpath as manifest in jar named path
-h, --help print help
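
For example (the jar path is illustrative):

Example: hadoop classpath
Example: hadoop classpath --glob
Example: hadoop classpath --jar /tmp/hadoop-classpath.jar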

Administration Commands

Commands useful for administrators of a hadoop cluster.

balancer

Deprecated, use hdfs balancer instead.

daemonlog

Get/Set the log level for each daemon.

Usage: hadoop daemonlog -getlevel <host:port> <name>
Usage: hadoop daemonlog -setlevel <host:port> <name> <level>

COMMAND_OPTION Description
-getlevel host:port name Prints the log level of the daemon running at host:port. This command internally connects to http://host:port/logLevel?log=name
-setlevel host:port name level Sets the log level of the daemon running at host:port. This command internally connects to http://host:port/logLevel?log=name
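
For example, to inspect and then raise the logging verbosity of a NameNode daemon; the host, port and logger name are illustrative:

Example: hadoop daemonlog -getlevel nn1.example.com:50070 org.apache.hadoop.hdfs.server.namenode.NameNode
Example: hadoop daemonlog -setlevel nn1.example.com:50070 org.apache.hadoop.hdfs.server.namenode.NameNode DEBUG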

datanode

Deprecated, use hdfs datanode instead.

dfsadmin

Deprecated, use hdfs dfsadmin instead.

namenode

Deprecated, use hdfs namenode instead.

secondarynamenode

Deprecated, use hdfs secondarynamenode instead.