org.apache.hadoop.fs
Class FileContext

java.lang.Object
  extended by org.apache.hadoop.fs.FileContext

@InterfaceAudience.Public
@InterfaceStability.Evolving
public class FileContext
extends Object

The FileContext class provides an interface to the application writer for using the Hadoop file system. It provides a set of methods for the usual operations: create, open, list, etc.

*** Path Names ***

The Hadoop file system supports a URI name space and URI names. It offers a forest of file systems that can be referenced using fully qualified URIs. Two common Hadoop file system implementations are the local file system (file:///path) and HDFS (hdfs://nnAddress:nnPort/path).

While URI names are very flexible, they require knowing the name or address of the server. For convenience, one often wants to access the default file system in one's environment without knowing its name/address. This has the additional benefit that it allows the default fs to be changed (e.g. when an admin moves an application from cluster1 to cluster2).

To facilitate this, Hadoop supports a notion of a default file system. The user can set a default file system, although this is typically set up for you in your environment via your default config. A default file system implies a default scheme and authority; slash-relative names (such as /foo/bar) are resolved relative to that default FS. Similarly, a user can also use working-directory-relative names (i.e. names not starting with a slash). While the working directory is generally in the same default FS, the wd can be in a different FS.

Hence Hadoop path names can be one of:
    • fully qualified URI: scheme://authority/path
    • slash-relative names: /foo/bar (relative to the default file system)
    • wd-relative names: foo/bar (relative to the working directory)

Relative paths with scheme (scheme:foo/bar) are illegal.
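The legal and illegal path-name forms can be illustrated with a short sketch; the authority "nnAddress:8020" is a placeholder for your own namenode:

```java
import org.apache.hadoop.fs.Path;

public class PathNameForms {
    public static void main(String[] args) {
        Path fullyQualified = new Path("hdfs://nnAddress:8020/foo/bar"); // scheme + authority + path
        Path slashRelative  = new Path("/foo/bar"); // resolved against the default FS
        Path wdRelative     = new Path("foo/bar");  // resolved against the working dir
        // Illegal: a relative path with a scheme, e.g. "hdfs:foo/bar"
    }
}
```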

****The Role of the FileContext and configuration defaults****

The FileContext provides file namespace context for resolving file names; it also contains the umask for permissions. In that sense it is like the per-process file-related state in a Unix system. These two properties:
    • default file system (for resolving slash-relative names)
    • umask

are, in general, obtained from the default configuration file in your environment (see Configuration). No other configuration parameters are obtained from the default config as far as the file context layer is concerned. All file system instances (i.e. deployments of file systems) have default properties; we call these server-side (SS) defaults. Operations like create allow one to select many properties: either pass them in as explicit parameters or use the SS defaults.

The file system related SS defaults are:
    • block size
    • replication factor
    • buffer size
    • encryptDataTransfer
    • checksum option

*** Usage Model for the FileContext class ***

Example 1: Use the default config read from $HADOOP_CONFIG/core.xml. Unspecified values come from core-defaults.xml in the release jar.

Example 2: Get a FileContext with a specific URI as the default FS.

Example 3: Get a FileContext with the local file system as the default.

Example 4: Use a specific config, ignoring $HADOOP_CONFIG. Generally you should not need to use a config explicitly unless you are working with one that was passed to your code.
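The four usage examples above can be sketched as follows. This is a minimal sketch, assuming a configured Hadoop environment; the URI authority "nnAddress:8020" is a placeholder:

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileContext;

public class FileContextUsage {
    static void examples(Configuration someConfPassedToYou) throws Exception {
        // Example 1: default config, which supplies your default FS
        FileContext fc1 = FileContext.getFileContext();

        // Example 2: a specific URI as the default FS
        FileContext fc2 = FileContext.getFileContext(URI.create("hdfs://nnAddress:8020"));

        // Example 3: local file system as the default
        FileContext fc3 = FileContext.getLocalFSFileContext();

        // Example 4: a specific config, ignoring $HADOOP_CONFIG
        FileContext fc4 = FileContext.getFileContext(someConfPassedToYou);
    }
}
```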


Field Summary
static FsPermission DEFAULT_PERM
          Default permission for directory and symlink. In previous versions, this default permission was also used to create files, so files created end up with ugo+x permission.
static FsPermission DIR_DEFAULT_PERM
          Default permission for directory
static FsPermission FILE_DEFAULT_PERM
          Default permission for file
static org.apache.commons.logging.Log LOG
           
static int SHUTDOWN_HOOK_PRIORITY
          Priority of the FileContext shutdown hook.
 
Method Summary
static void clearStatistics()
          Clears all the statistics stored in AbstractFileSystem, for all the file systems.
 FSDataOutputStream create(Path f, EnumSet<CreateFlag> createFlag, org.apache.hadoop.fs.Options.CreateOpts... opts)
          Create or overwrite file on indicated path and returns an output stream for writing into the file.
 void createSymlink(Path target, Path link, boolean createParent)
          Creates a symbolic link to an existing file.
 boolean delete(Path f, boolean recursive)
          Delete a file.
 boolean deleteOnExit(Path f)
          Mark a path to be deleted on JVM shutdown.
 AclStatus getAclStatus(Path path)
          Gets the ACLs of files and directories.
static Map<URI,org.apache.hadoop.fs.FileSystem.Statistics> getAllStatistics()
           
 FileChecksum getFileChecksum(Path f)
          Get the checksum of a file.
static FileContext getFileContext()
          Create a FileContext using the default config read from $HADOOP_CONFIG/core.xml. Unspecified key-values for the config are defaulted from core-defaults.xml in the release jar.
protected static FileContext getFileContext(AbstractFileSystem defaultFS)
          Create a FileContext for specified file system using the default config.
static FileContext getFileContext(AbstractFileSystem defFS, Configuration aConf)
          Create a FileContext with specified FS as default using the specified config.
static FileContext getFileContext(Configuration aConf)
          Create a FileContext using the passed config.
static FileContext getFileContext(URI defaultFsUri)
          Create a FileContext for specified URI using the default config.
static FileContext getFileContext(URI defaultFsUri, Configuration aConf)
          Create a FileContext for specified default URI using the specified config.
 FileStatus getFileLinkStatus(Path f)
          Return a file status object that represents the path.
 FileStatus getFileStatus(Path f)
          Return a file status object that represents the path.
protected  AbstractFileSystem getFSofPath(Path absOrFqPath)
          Get the file system of supplied path.
 FsStatus getFsStatus(Path f)
          Returns a status object describing the use and capacity of the file system denoted by the Path argument f.
 Path getHomeDirectory()
          Return the current user's home directory in this file system.
 Path getLinkTarget(Path f)
          Returns the target of the given symbolic link as it was specified when the link was created.
static FileContext getLocalFSFileContext()
           
static FileContext getLocalFSFileContext(Configuration aConf)
           
static org.apache.hadoop.fs.FileSystem.Statistics getStatistics(URI uri)
          Get the statistics for a particular file system
 org.apache.hadoop.security.UserGroupInformation getUgi()
          Gets the ugi in the file-context
 FsPermission getUMask()
           
 Path getWorkingDirectory()
          Gets the working directory for wd-relative names (such as "foo/bar").
 byte[] getXAttr(Path path, String name)
          Get an xattr for a file or directory.
 Map<String,byte[]> getXAttrs(Path path)
          Get all of the xattrs for a file or directory.
 Map<String,byte[]> getXAttrs(Path path, List<String> names)
          Get all of the xattrs for a file or directory.
 org.apache.hadoop.fs.RemoteIterator<Path> listCorruptFileBlocks(Path path)
           
 org.apache.hadoop.fs.RemoteIterator<LocatedFileStatus> listLocatedStatus(Path f)
          List the statuses of the files/directories in the given path if the path is a directory.
 org.apache.hadoop.fs.RemoteIterator<FileStatus> listStatus(Path f)
          List the statuses of the files/directories in the given path if the path is a directory.
 List<String> listXAttrs(Path path)
          Get all of the xattr names for a file or directory.
 Path makeQualified(Path path)
          Make the path fully qualified if it isn't.
 void mkdir(Path dir, FsPermission permission, boolean createParent)
          Make(create) a directory and all the non-existent parents.
 void modifyAclEntries(Path path, List<AclEntry> aclSpec)
          Modifies ACL entries of files and directories.
 FSDataInputStream open(Path f)
          Opens an FSDataInputStream at the indicated Path using default buffersize.
 FSDataInputStream open(Path f, int bufferSize)
          Opens an FSDataInputStream at the indicated Path.
static void printStatistics()
          Prints the statistics to standard output.
 void removeAcl(Path path)
          Removes all but the base ACL entries of files and directories.
 void removeAclEntries(Path path, List<AclEntry> aclSpec)
          Removes ACL entries from files and directories.
 void removeDefaultAcl(Path path)
          Removes all default ACL entries from files and directories.
 void removeXAttr(Path path, String name)
          Remove an xattr of a file or directory.
 void rename(Path src, Path dst, org.apache.hadoop.fs.Options.Rename... options)
          Renames Path src to Path dst
 protected  Path resolve(Path f)
           Resolves all symbolic links in the specified path.
 protected  Path resolveIntermediate(Path f)
           Resolves all symbolic links in the specified path leading up to, but not including the final path component.
 Path resolvePath(Path f)
           Resolve the path, following any symlinks or mount points.
 void setAcl(Path path, List<AclEntry> aclSpec)
           Fully replaces the ACL of files and directories, discarding all existing entries.
 void setOwner(Path f, String username, String groupname)
           Set owner of a path (i.e. a file or a directory).
 void setPermission(Path f, FsPermission permission)
           Set permission of a path.
 boolean setReplication(Path f, short replication)
           Set replication for an existing file.
 void setTimes(Path f, long mtime, long atime)
           Set access time and modification time of a file.
 void setUMask(FsPermission newUmask)
           Set umask to the supplied parameter.
 void setVerifyChecksum(boolean verifyChecksum, Path f)
           Set the verify checksum flag for the file system denoted by the path.
 void setWorkingDirectory(Path newWDir)
           Set the working directory for wd-relative names (such as "foo/bar").
 void setXAttr(Path path, String name, byte[] value)
           Set an xattr of a file or directory.
 void setXAttr(Path path, String name, byte[] value, EnumSet<XAttrSetFlag> flag)
           Set an xattr of a file or directory.
 org.apache.hadoop.fs.FileContext.Util util()
     
    Methods inherited from class java.lang.Object
    clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
     

    Field Detail

    LOG

    public static final org.apache.commons.logging.Log LOG

    DEFAULT_PERM

    public static final FsPermission DEFAULT_PERM
    Default permission for directory and symlink. In previous versions, this default permission was also used to create files, so files created end up with ugo+x permission. See HADOOP-9155 for details. Two new constants were added to solve this; please use DIR_DEFAULT_PERM for directories and FILE_DEFAULT_PERM for files. This constant is kept for compatibility.


    DIR_DEFAULT_PERM

    public static final FsPermission DIR_DEFAULT_PERM
    Default permission for directory


    FILE_DEFAULT_PERM

    public static final FsPermission FILE_DEFAULT_PERM
    Default permission for file


    SHUTDOWN_HOOK_PRIORITY

    public static final int SHUTDOWN_HOOK_PRIORITY
    Priority of the FileContext shutdown hook.

    See Also:
    Constant Field Values
    Method Detail

    getFSofPath

    protected AbstractFileSystem getFSofPath(Path absOrFqPath)
                                      throws UnsupportedFileSystemException,
                                             IOException
    Get the file system of supplied path.

    Parameters:
    absOrFqPath - - absolute or fully qualified path
    Returns:
    the file system of the path
    Throws:
    UnsupportedFileSystemException - If the file system for absOrFqPath is not supported.
    IOException - If the file system for absOrFqPath could not be instantiated.

    getFileContext

    public static FileContext getFileContext(AbstractFileSystem defFS,
                                             Configuration aConf)
    Create a FileContext with specified FS as default using the specified config.

    Parameters:
    defFS -
    aConf -
    Returns:
    new FileContext with specified FS as default.

    getFileContext

    protected static FileContext getFileContext(AbstractFileSystem defaultFS)
    Create a FileContext for specified file system using the default config.

    Parameters:
    defaultFS -
    Returns:
    a FileContext with the specified AbstractFileSystem as the default FS.

    getFileContext

    public static FileContext getFileContext()
                                      throws UnsupportedFileSystemException
    Create a FileContext using the default config read from $HADOOP_CONFIG/core.xml. Unspecified key-values for the config are defaulted from core-defaults.xml in the release jar.

    Throws:
    UnsupportedFileSystemException - If the file system from the default configuration is not supported

    getLocalFSFileContext

    public static FileContext getLocalFSFileContext()
                                             throws UnsupportedFileSystemException
    Returns:
    a FileContext for the local file system using the default config.
    Throws:
    UnsupportedFileSystemException - If the file system for FsConstants.LOCAL_FS_URI is not supported.

    getFileContext

    public static FileContext getFileContext(URI defaultFsUri)
                                      throws UnsupportedFileSystemException
    Create a FileContext for specified URI using the default config.

    Parameters:
    defaultFsUri -
    Returns:
    a FileContext with the specified URI as the default FS.
    Throws:
    UnsupportedFileSystemException - If the file system for defaultFsUri is not supported

    getFileContext

    public static FileContext getFileContext(URI defaultFsUri,
                                             Configuration aConf)
                                      throws UnsupportedFileSystemException
    Create a FileContext for specified default URI using the specified config.

    Parameters:
    defaultFsUri -
    aConf -
    Returns:
    new FileContext for specified uri
    Throws:
    UnsupportedFileSystemException - If the file system for defaultFsUri is not supported
    RuntimeException - If the file system specified is supported but could not be instantiated, or if login fails.

    getFileContext

    public static FileContext getFileContext(Configuration aConf)
                                      throws UnsupportedFileSystemException
    Create a FileContext using the passed config. Generally it is better to use getFileContext(URI, Configuration) instead of this one.

    Parameters:
    aConf -
    Returns:
    new FileContext
    Throws:
    UnsupportedFileSystemException - If file system in the config is not supported

    getLocalFSFileContext

    public static FileContext getLocalFSFileContext(Configuration aConf)
                                             throws UnsupportedFileSystemException
    Parameters:
    aConf - - from which the FileContext is configured
    Returns:
    a FileContext for the local file system using the specified config.
    Throws:
    UnsupportedFileSystemException - If default file system in the config is not supported

    setWorkingDirectory

    public void setWorkingDirectory(Path newWDir)
                             throws IOException
    Set the working directory for wd-relative names (such as "foo/bar"). The working directory feature is provided by simply prefixing relative names with the working dir. Note this is different from Unix, where the wd is actually set to the inode. Hence setWorkingDirectory does not follow symlinks, etc. This works better in a distributed environment that has multiple independent roots. getWorkingDirectory() should return what setWorkingDirectory() set.

    Parameters:
    newWDir - new working directory
    Throws:
    IOException -
    newWDir can be one of:
    • relative path: "foo/bar";
    • absolute without scheme: "/foo/bar"
    • fully qualified with scheme: "xx://auth/foo/bar"

    Illegal WDs:
    • relative with scheme: "xx:foo/bar"
    • non existent directory
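As a minimal sketch of the prefixing behavior described above (assuming a configured Hadoop environment; the directory names are placeholders):

```java
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Path;

public class WorkingDirExample {
    public static void main(String[] args) throws Exception {
        FileContext fc = FileContext.getFileContext();
        fc.setWorkingDirectory(new Path("/user/alice")); // must be an existing directory
        // "foo/bar" is now resolved by prefixing the wd: /user/alice/foo/bar
        Path qualified = fc.makeQualified(new Path("foo/bar"));
    }
}
```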

    getWorkingDirectory

    public Path getWorkingDirectory()
    Gets the working directory for wd-relative names (such as "foo/bar").


    getUgi

    public org.apache.hadoop.security.UserGroupInformation getUgi()
    Gets the ugi in the file-context

    Returns:
    UserGroupInformation

    getHomeDirectory

    public Path getHomeDirectory()
    Return the current user's home directory in this file system. The default implementation returns "/user/$USER/".

    Returns:
    the home directory

    getUMask

    public FsPermission getUMask()
    Returns:
    the umask of this FileContext

    setUMask

    public void setUMask(FsPermission newUmask)
    Set umask to the supplied parameter.

    Parameters:
    newUmask - the new umask

    resolvePath

    public Path resolvePath(Path f)
                     throws FileNotFoundException,
                            org.apache.hadoop.fs.UnresolvedLinkException,
                            org.apache.hadoop.security.AccessControlException,
                            IOException
    Resolve the path following any symlinks or mount points

    Parameters:
    f - to be resolved
    Returns:
    fully qualified resolved path
    Throws:
    FileNotFoundException - If f does not exist
    org.apache.hadoop.security.AccessControlException - if access denied
    IOException - If an IO Error occurred Exceptions applicable to file systems accessed over RPC:
    org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
    org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
    org.apache.hadoop.ipc.UnexpectedServerException - If server implementation throws undeclared exception to RPC server RuntimeExceptions:
    InvalidPathException - If path f is not valid
    org.apache.hadoop.fs.UnresolvedLinkException

    makeQualified

    public Path makeQualified(Path path)
    Make the path fully qualified if it isn't. A fully-qualified path has scheme and authority specified and an absolute path. Use the default file system and working dir in this FileContext to qualify.

    Parameters:
    path -
    Returns:
    qualified path

    create

    public FSDataOutputStream create(Path f,
                                     EnumSet<CreateFlag> createFlag,
                                     org.apache.hadoop.fs.Options.CreateOpts... opts)
                              throws org.apache.hadoop.security.AccessControlException,
                                     FileAlreadyExistsException,
                                     FileNotFoundException,
                                     ParentNotDirectoryException,
                                     UnsupportedFileSystemException,
                                     IOException
    Create or overwrite file on indicated path and returns an output stream for writing into the file.

    Parameters:
    f - the file name to open
    createFlag - gives the semantics of create; see CreateFlag
    opts - file creation options; see Options.CreateOpts.
    • Progress - to report progress on the operation - default null
    • Permission - umask is applied against permission; default is FsPermission.getDefault()
    • CreateParent - create missing parent path; default is not to create parents
    • The defaults for the following are the SS defaults of the file server implementing the target path. Not all parameters make sense for all kinds of file systems; e.g. the local FS ignores block size, replication factor, and checksum options.
      • BufferSize - buffersize used in FSDataOutputStream
      • Blocksize - block size for file blocks
      • ReplicationFactor - replication for blocks
      • ChecksumParam - Checksum parameters. server default is used if not specified.
    Returns:
    FSDataOutputStream for created file
    Throws:
    org.apache.hadoop.security.AccessControlException - If access is denied
    FileAlreadyExistsException - If file f already exists
    FileNotFoundException - If parent of f does not exist and createParent is false
    ParentNotDirectoryException - If parent of f is not a directory.
    UnsupportedFileSystemException - If file system for f is not supported
    IOException - If an I/O error occurred Exceptions applicable to file systems accessed over RPC:
    org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
    org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
    org.apache.hadoop.ipc.UnexpectedServerException - If server implementation throws undeclared exception to RPC server RuntimeExceptions:
    InvalidPathException - If path f is not valid
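As a minimal sketch of create() with CreateFlag and CreateOpts (assuming a configured Hadoop environment; the path and replication value are placeholders):

```java
import java.util.EnumSet;

import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Options;
import org.apache.hadoop.fs.Path;

public class CreateExample {
    public static void main(String[] args) throws Exception {
        FileContext fc = FileContext.getFileContext();
        EnumSet<CreateFlag> flags = EnumSet.of(CreateFlag.CREATE, CreateFlag.OVERWRITE);
        try (FSDataOutputStream out = fc.create(
                new Path("/tmp/example.txt"), flags,
                Options.CreateOpts.createParent(),        // create missing parents
                Options.CreateOpts.repFac((short) 3))) {  // explicit replication; SS default otherwise
            out.writeUTF("hello");
        }
    }
}
```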

    mkdir

    public void mkdir(Path dir,
                      FsPermission permission,
                      boolean createParent)
               throws org.apache.hadoop.security.AccessControlException,
                      FileAlreadyExistsException,
                      FileNotFoundException,
                      ParentNotDirectoryException,
                      UnsupportedFileSystemException,
                      IOException
    Make (create) a directory and all the non-existent parents.

    Parameters:
    dir - - the dir to make
    permission - the permission is set to permission & ~umask
    createParent - - if true then missing parent dirs are created if false then parent must exist
    Throws:
    org.apache.hadoop.security.AccessControlException - If access is denied
    FileAlreadyExistsException - If directory dir already exists
    FileNotFoundException - If parent of dir does not exist and createParent is false
    ParentNotDirectoryException - If parent of dir is not a directory
    UnsupportedFileSystemException - If file system for dir is not supported
    IOException - If an I/O error occurred Exceptions applicable to file systems accessed over RPC:
    org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
    org.apache.hadoop.ipc.UnexpectedServerException - If server implementation throws undeclared exception to RPC server RuntimeExceptions:
    InvalidPathException - If path dir is not valid

    delete

    public boolean delete(Path f,
                          boolean recursive)
                   throws org.apache.hadoop.security.AccessControlException,
                          FileNotFoundException,
                          UnsupportedFileSystemException,
                          IOException
    Delete a file.

    Parameters:
    f - the path to delete.
    recursive - if the path is a directory and recursive is true, the directory and its contents are deleted; if the directory is non-empty and recursive is false, an exception is thrown. For a file, recursive may be either true or false.
    Throws:
    org.apache.hadoop.security.AccessControlException - If access is denied
    FileNotFoundException - If f does not exist
    UnsupportedFileSystemException - If file system for f is not supported
    IOException - If an I/O error occurred Exceptions applicable to file systems accessed over RPC:
    org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
    org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
    org.apache.hadoop.ipc.UnexpectedServerException - If server implementation throws undeclared exception to RPC server RuntimeExceptions:
    InvalidPathException - If path f is invalid

    open

    public FSDataInputStream open(Path f)
                           throws org.apache.hadoop.security.AccessControlException,
                                  FileNotFoundException,
                                  UnsupportedFileSystemException,
                                  IOException
    Opens an FSDataInputStream at the indicated Path using default buffersize.

    Parameters:
    f - the file name to open
    Throws:
    org.apache.hadoop.security.AccessControlException - If access is denied
    FileNotFoundException - If file f does not exist
    UnsupportedFileSystemException - If file system for f is not supported
    IOException - If an I/O error occurred Exceptions applicable to file systems accessed over RPC:
    org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
    org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
    org.apache.hadoop.ipc.UnexpectedServerException - If server implementation throws undeclared exception to RPC server

    open

    public FSDataInputStream open(Path f,
                                  int bufferSize)
                           throws org.apache.hadoop.security.AccessControlException,
                                  FileNotFoundException,
                                  UnsupportedFileSystemException,
                                  IOException
    Opens an FSDataInputStream at the indicated Path.

    Parameters:
    f - the file name to open
    bufferSize - the size of the buffer to be used.
    Throws:
    org.apache.hadoop.security.AccessControlException - If access is denied
    FileNotFoundException - If file f does not exist
    UnsupportedFileSystemException - If file system for f is not supported
    IOException - If an I/O error occurred Exceptions applicable to file systems accessed over RPC:
    org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
    org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
    org.apache.hadoop.ipc.UnexpectedServerException - If server implementation throws undeclared exception to RPC server

    setReplication

    public boolean setReplication(Path f,
                                  short replication)
                           throws org.apache.hadoop.security.AccessControlException,
                                  FileNotFoundException,
                                  IOException
    Set replication for an existing file.

    Parameters:
    f - file name
    replication - new replication
    Returns:
    true if successful
    Throws:
    org.apache.hadoop.security.AccessControlException - If access is denied
    FileNotFoundException - If file f does not exist
    IOException - If an I/O error occurred Exceptions applicable to file systems accessed over RPC:
    org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
    org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
    org.apache.hadoop.ipc.UnexpectedServerException - If server implementation throws undeclared exception to RPC server

    rename

    public void rename(Path src,
                       Path dst,
                       org.apache.hadoop.fs.Options.Rename... options)
                throws org.apache.hadoop.security.AccessControlException,
                       FileAlreadyExistsException,
                       FileNotFoundException,
                       ParentNotDirectoryException,
                       UnsupportedFileSystemException,
                       IOException
    Renames Path src to Path dst

    If the OVERWRITE option is not passed, the rename fails if dst already exists.

    If the OVERWRITE option is passed, the rename overwrites dst if it is a file or an empty directory. The rename fails if dst is a non-empty directory.

    Note that the atomicity of rename depends on the file system implementation. Please refer to the file system documentation for details.
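As a minimal sketch of the two rename modes (assuming a configured Hadoop environment; src and dst are placeholder paths):

```java
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Options;
import org.apache.hadoop.fs.Path;

public class RenameExample {
    public static void main(String[] args) throws Exception {
        FileContext fc = FileContext.getFileContext();
        Path src = new Path("/tmp/src.txt");
        Path dst = new Path("/tmp/dst.txt");
        fc.rename(src, dst); // fails with FileAlreadyExistsException if dst exists
        // With OVERWRITE, dst is replaced if it is a file or an empty directory:
        // fc.rename(src, dst, Options.Rename.OVERWRITE);
    }
}
```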

    Parameters:
    src - path to be renamed
    dst - new path after rename
    Throws:
    org.apache.hadoop.security.AccessControlException - If access is denied
    FileAlreadyExistsException - If dst already exists and the Options.Rename.OVERWRITE option is not specified.
    FileNotFoundException - If src does not exist
    ParentNotDirectoryException - If parent of dst is not a directory
    UnsupportedFileSystemException - If file system for src and dst is not supported
    IOException - If an I/O error occurred Exceptions applicable to file systems accessed over RPC:
    org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
    org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
    org.apache.hadoop.ipc.UnexpectedServerException - If server implementation throws undeclared exception to RPC server

    setPermission

    public void setPermission(Path f,
                              FsPermission permission)
                       throws org.apache.hadoop.security.AccessControlException,
                              FileNotFoundException,
                              UnsupportedFileSystemException,
                              IOException
    Set permission of a path.

    Parameters:
    f -
    permission - - the new absolute permission (umask is not applied)
    Throws:
    org.apache.hadoop.security.AccessControlException - If access is denied
    FileNotFoundException - If f does not exist
    UnsupportedFileSystemException - If file system for f is not supported
    IOException - If an I/O error occurred Exceptions applicable to file systems accessed over RPC:
    org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
    org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
    org.apache.hadoop.ipc.UnexpectedServerException - If server implementation throws undeclared exception to RPC server

    setOwner

    public void setOwner(Path f,
                         String username,
                         String groupname)
                  throws org.apache.hadoop.security.AccessControlException,
                         UnsupportedFileSystemException,
                         FileNotFoundException,
                         IOException
    Set owner of a path (i.e. a file or a directory). The parameters username and groupname cannot both be null.

    Parameters:
    f - The path
    username - If it is null, the original username remains unchanged.
    groupname - If it is null, the original groupname remains unchanged.
    Throws:
    org.apache.hadoop.security.AccessControlException - If access is denied
    FileNotFoundException - If f does not exist
    UnsupportedFileSystemException - If file system for f is not supported
    IOException - If an I/O error occurred Exceptions applicable to file systems accessed over RPC:
    org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
    org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
    org.apache.hadoop.ipc.UnexpectedServerException - If server implementation throws undeclared exception to RPC server RuntimeExceptions:
    HadoopIllegalArgumentException - If username or groupname is invalid.

    setTimes

    public void setTimes(Path f,
                         long mtime,
                         long atime)
                  throws org.apache.hadoop.security.AccessControlException,
                         FileNotFoundException,
                         UnsupportedFileSystemException,
                         IOException
    Set access time and modification time of a file.

    Parameters:
    f - The path
    mtime - Set the modification time of this file. The number of milliseconds since epoch (Jan 1, 1970). A value of -1 means that this call should not set modification time.
    atime - Set the access time of this file. The number of milliseconds since Jan 1, 1970. A value of -1 means that this call should not set access time.
    Throws:
    org.apache.hadoop.security.AccessControlException - If access is denied
    FileNotFoundException - If f does not exist
    UnsupportedFileSystemException - If file system for f is not supported
    IOException - If an I/O error occurred Exceptions applicable to file systems accessed over RPC:
    org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
    org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
    org.apache.hadoop.ipc.UnexpectedServerException - If server implementation throws undeclared exception to RPC server
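The -1 sentinel described above leaves the corresponding time unchanged; a minimal sketch (assuming a configured Hadoop environment; the path is a placeholder):

```java
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Path;

public class SetTimesExample {
    public static void main(String[] args) throws Exception {
        FileContext fc = FileContext.getFileContext();
        Path f = new Path("/tmp/example.txt");
        long now = System.currentTimeMillis();
        fc.setTimes(f, now, -1); // set mtime only; atime unchanged
        fc.setTimes(f, -1, now); // set atime only; mtime unchanged
    }
}
```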

    getFileChecksum

    public FileChecksum getFileChecksum(Path f)
                                 throws org.apache.hadoop.security.AccessControlException,
                                        FileNotFoundException,
                                        IOException
    Get the checksum of a file.

    Parameters:
    f - file path
    Returns:
    The file checksum. The default return value is null, which indicates that no checksum algorithm is implemented in the corresponding FileSystem.
    Throws:
    org.apache.hadoop.security.AccessControlException - If access is denied
    FileNotFoundException - If f does not exist
    IOException - If an I/O error occurred

    Exceptions applicable to file systems accessed over RPC:
    org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
    org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
    org.apache.hadoop.ipc.UnexpectedServerException - If server implementation throws undeclared exception to RPC server
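
    Because the default return value is null, callers should guard before using the checksum. A hedged sketch of that caller-side pattern, with a plain byte array standing in for FileChecksum:

```java
// Caller-side null check for getFileChecksum's documented contract:
// null means the underlying FileSystem implements no checksum algorithm.
final class ChecksumExample {
    static String describe(byte[] checksumOrNull) {
        if (checksumOrNull == null) {
            return "no checksum algorithm implemented";
        }
        return "checksum of " + checksumOrNull.length + " bytes";
    }
}
```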

    setVerifyChecksum

    public void setVerifyChecksum(boolean verifyChecksum,
                                  Path f)
                           throws org.apache.hadoop.security.AccessControlException,
                                  FileNotFoundException,
                                  UnsupportedFileSystemException,
                                  IOException
    Set the verify checksum flag for the file system denoted by the path. This is only applicable if the corresponding file system supports checksums; by default it does nothing.

    Parameters:
    verifyChecksum - whether checksum verification is enabled
    f - the path; the verify checksum flag is set for the file system containing this path
    Throws:
    org.apache.hadoop.security.AccessControlException - If access is denied
    FileNotFoundException - If f does not exist
    UnsupportedFileSystemException - If file system for f is not supported
    IOException - If an I/O error occurred

    Exceptions applicable to file systems accessed over RPC:
    org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
    org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
    org.apache.hadoop.ipc.UnexpectedServerException - If server implementation throws undeclared exception to RPC server

    getFileStatus

    public FileStatus getFileStatus(Path f)
                             throws org.apache.hadoop.security.AccessControlException,
                                    FileNotFoundException,
                                    UnsupportedFileSystemException,
                                    IOException
    Return a file status object that represents the path.

    Parameters:
    f - The path we want information from
    Returns:
    a FileStatus object
    Throws:
    org.apache.hadoop.security.AccessControlException - If access is denied
    FileNotFoundException - If f does not exist
    UnsupportedFileSystemException - If file system for f is not supported
    IOException - If an I/O error occurred

    Exceptions applicable to file systems accessed over RPC:
    org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
    org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
    org.apache.hadoop.ipc.UnexpectedServerException - If server implementation throws undeclared exception to RPC server

    getFileLinkStatus

    public FileStatus getFileLinkStatus(Path f)
                                 throws org.apache.hadoop.security.AccessControlException,
                                        FileNotFoundException,
                                        UnsupportedFileSystemException,
                                        IOException
    Return a file status object that represents the path. If the path refers to a symlink then the FileStatus of the symlink is returned. The behavior is equivalent to getFileStatus(Path) if the underlying file system does not support symbolic links.

    Parameters:
    f - The path we want information from.
    Returns:
    A FileStatus object
    Throws:
    org.apache.hadoop.security.AccessControlException - If access is denied
    FileNotFoundException - If f does not exist
    UnsupportedFileSystemException - If file system for f is not supported
    IOException - If an I/O error occurred

    getLinkTarget

    public Path getLinkTarget(Path f)
                       throws org.apache.hadoop.security.AccessControlException,
                              FileNotFoundException,
                              UnsupportedFileSystemException,
                              IOException
    Returns the target of the given symbolic link as it was specified when the link was created. Links in the path leading up to the final path component are resolved transparently.

    Parameters:
    f - the path to return the target of
    Returns:
    The un-interpreted target of the symbolic link.
    Throws:
    org.apache.hadoop.security.AccessControlException - If access is denied
    FileNotFoundException - If path f does not exist
    UnsupportedFileSystemException - If file system for f is not supported
    IOException - If the given path does not refer to a symlink or an I/O error occurred

    getFsStatus

    public FsStatus getFsStatus(Path f)
                         throws org.apache.hadoop.security.AccessControlException,
                                FileNotFoundException,
                                UnsupportedFileSystemException,
                                IOException
    Returns a status object describing the use and capacity of the file system denoted by the Path argument f. If the file system has multiple partitions, the use and capacity of the partition pointed to by the specified path are reflected.

    Parameters:
    f - Path for which status should be obtained. null means the root partition of the default file system.
    Returns:
    a FsStatus object
    Throws:
    org.apache.hadoop.security.AccessControlException - If access is denied
    FileNotFoundException - If f does not exist
    UnsupportedFileSystemException - If file system for f is not supported
    IOException - If an I/O error occurred

    Exceptions applicable to file systems accessed over RPC:
    org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
    org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
    org.apache.hadoop.ipc.UnexpectedServerException - If server implementation throws undeclared exception to RPC server
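
    A toy model of the quantities the status object reports for one partition (FsStatusModel and its fields are invented for illustration; the real FsStatus is constructed by the file system, and the numbers here are made up):

```java
// Toy model of the use/capacity numbers an FsStatus reports for the
// partition containing the queried path.
final class FsStatusModel {
    final long capacity; // total bytes in the partition
    final long used;     // bytes currently in use
    FsStatusModel(long capacity, long used) {
        this.capacity = capacity;
        this.used = used;
    }
    long getRemaining() { return capacity - used; }
}
```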

    createSymlink

    public void createSymlink(Path target,
                              Path link,
                              boolean createParent)
                       throws org.apache.hadoop.security.AccessControlException,
                              FileAlreadyExistsException,
                              FileNotFoundException,
                              ParentNotDirectoryException,
                              UnsupportedFileSystemException,
                              IOException
    Creates a symbolic link to an existing file. An exception is thrown if the symlink already exists, the user does not have permission to create the symlink, or the underlying file system does not support symlinks.

    Symlink permissions are ignored; access to a symlink is determined by the permissions of the symlink target. Symlinks in paths leading up to the final path component are resolved transparently. If the final path component refers to a symlink, some functions operate on the symlink itself. These are:
    - delete(f) and deleteOnExit(f) - Deletes the symlink.
    - rename(src, dst) - If src refers to a symlink, the symlink is renamed. If dst refers to a symlink, the symlink is overwritten.
    - getLinkTarget(f) - Returns the target of the symlink.
    - getFileLinkStatus(f) - Returns a FileStatus object describing the symlink.

    Some functions, create() and mkdir(), expect that the final path component does not exist. If they are given a path that refers to an existing symlink, they behave as if the path referred to an existing file or directory. All other functions fully resolve, i.e. follow, the symlink. These are: open, setReplication, setOwner, setTimes, setWorkingDirectory, setPermission, getFileChecksum, setVerifyChecksum, getFileBlockLocations, getFsStatus, getFileStatus, exists, and listStatus.

    Symlink targets are stored as given to createSymlink, assuming the underlying file system is capable of storing a fully qualified URI. Dangling symlinks are permitted. FileContext supports four types of symlink targets, and resolves them as follows.
     Given a path referring to a symlink of form:
     
       <---X---> 
       fs://host/A/B/link 
       <-----Y----->
     
     In this path X is the scheme and authority that identify the file system,
     and Y is the path leading up to the final path component "link". If Y is
     a symlink  itself then let Y' be the target of Y and X' be the scheme and
     authority of Y'. Symlink targets may be:
     
     1. Fully qualified URIs
     
     fs://hostX/A/B/file  Resolved according to the target file system.
     
     2. Partially qualified URIs (e.g. scheme but no host)
     
     fs:///A/B/file  Resolved according to the target file system. E.g. resolving
                     a symlink to hdfs:///A results in an exception because
                     HDFS URIs must be fully qualified, while a symlink to
                     file:///A will not since Hadoop's local file systems
                     require partially qualified URIs.
     
     3. Relative paths
     
     path  Resolves to [Y'][path]. E.g. if Y resolves to hdfs://host/A and path 
           is "../B/file" then [Y'][path] is hdfs://host/B/file
     
     4. Absolute paths
     
     path  Resolves to [X'][path]. E.g. if Y resolves to hdfs://host/A/B and path
           is "/file" then [X'][path] is hdfs://host/file
     

    Parameters:
    target - the target of the symbolic link
    link - the path to be created that points to target
    createParent - if true then missing parent dirs are created if false then parent must exist
    Throws:
    org.apache.hadoop.security.AccessControlException - If access is denied
    FileAlreadyExistsException - If file link already exists
    FileNotFoundException - If target does not exist
    ParentNotDirectoryException - If parent of link is not a directory.
    UnsupportedFileSystemException - If file system for target or link is not supported
    IOException - If an I/O error occurred
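
    Resolution rules 3 and 4 above can be sketched with java.net.URI standing in for Hadoop's Path (the helper names and the URIs are invented for illustration; parentDir is assumed to be a directory URI ending in "/"):

```java
import java.net.URI;

// Sketch of symlink-target resolution rules 3 and 4: a relative target
// resolves against Y' (the resolved directory containing the link); an
// absolute target resolves against X' (the scheme and authority of Y').
final class SymlinkResolveExample {
    // Rule 3: [Y'][path], e.g. hdfs://host/A/ + "../B/file" -> hdfs://host/B/file
    static URI resolveRelative(URI parentDir, String target) {
        return parentDir.resolve(target);
    }

    // Rule 4: [X'][path], e.g. scheme+authority of hdfs://host/A/B/ + "/file"
    // -> hdfs://host/file
    static URI resolveAbsolute(URI parentDir, String target) {
        return URI.create(parentDir.getScheme() + "://"
                + parentDir.getAuthority() + target);
    }
}
```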

    listStatus

    public org.apache.hadoop.fs.RemoteIterator<FileStatus> listStatus(Path f)
                                                               throws org.apache.hadoop.security.AccessControlException,
                                                                      FileNotFoundException,
                                                                      UnsupportedFileSystemException,
                                                                      IOException
    List the statuses of the files/directories in the given path if the path is a directory.

    Parameters:
    f - is the path
    Returns:
    an iterator that traverses statuses of the files/directories in the given path
    Throws:
    org.apache.hadoop.security.AccessControlException - If access is denied
    FileNotFoundException - If f does not exist
    UnsupportedFileSystemException - If file system for f is not supported
    IOException - If an I/O error occurred

    Exceptions applicable to file systems accessed over RPC:
    org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
    org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
    org.apache.hadoop.ipc.UnexpectedServerException - If server implementation throws undeclared exception to RPC server
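
    listStatus returns a RemoteIterator rather than a List; because hasNext() and next() declare IOException, the result cannot be consumed with a for-each loop. A minimal pure-Java stand-in showing the usual while-loop pattern (SimpleRemoteIterator is invented here; Hadoop's interface is org.apache.hadoop.fs.RemoteIterator):

```java
import java.io.IOException;
import java.util.Iterator;

// Minimal stand-in for Hadoop's RemoteIterator: hasNext()/next() declare
// IOException, so callers use an explicit while loop instead of for-each.
interface SimpleRemoteIterator<T> {
    boolean hasNext() throws IOException;
    T next() throws IOException;
}

final class ListStatusExample {
    // Adapt an ordinary Iterator for demonstration purposes.
    static <T> SimpleRemoteIterator<T> wrap(Iterator<T> it) {
        return new SimpleRemoteIterator<T>() {
            public boolean hasNext() { return it.hasNext(); }
            public T next() { return it.next(); }
        };
    }

    // The canonical consumption pattern for a RemoteIterator.
    static <T> int count(SimpleRemoteIterator<T> it) throws IOException {
        int n = 0;
        while (it.hasNext()) {
            it.next();
            n++;
        }
        return n;
    }
}
```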

    listCorruptFileBlocks

    public org.apache.hadoop.fs.RemoteIterator<Path> listCorruptFileBlocks(Path path)
                                                                    throws IOException
    Returns:
    an iterator over the corrupt files under the given path (may contain duplicates if a file has more than one corrupt block)
    Throws:
    IOException

    listLocatedStatus

    public org.apache.hadoop.fs.RemoteIterator<LocatedFileStatus> listLocatedStatus(Path f)
                                                                             throws org.apache.hadoop.security.AccessControlException,
                                                                                    FileNotFoundException,
                                                                                    UnsupportedFileSystemException,
                                                                                    IOException
    List the statuses of the files/directories in the given path if the path is a directory; if the path is a file, return that file's status. If a returned status is a file, it contains the file's block locations.

    Parameters:
    f - is the path
    Returns:
    an iterator that traverses statuses of the files/directories in the given path. If any I/O exception occurs (for example, the input directory is deleted while listing is in progress), next() or hasNext() of the returned iterator may throw a RuntimeException with the I/O exception as the cause.
    Throws:
    org.apache.hadoop.security.AccessControlException - If access is denied
    FileNotFoundException - If f does not exist
    UnsupportedFileSystemException - If file system for f is not supported
    IOException - If an I/O error occurred

    Exceptions applicable to file systems accessed over RPC:
    org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
    org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
    org.apache.hadoop.ipc.UnexpectedServerException - If server implementation throws undeclared exception to RPC server

    deleteOnExit

    public boolean deleteOnExit(Path f)
                         throws org.apache.hadoop.security.AccessControlException,
                                IOException
    Mark a path to be deleted on JVM shutdown.

    Parameters:
    f - the existing path to delete.
    Returns:
    true if deleteOnExit is successful, otherwise false.
    Throws:
    org.apache.hadoop.security.AccessControlException - If access is denied
    UnsupportedFileSystemException - If file system for f is not supported
    IOException - If an I/O error occurred

    Exceptions applicable to file systems accessed over RPC:
    org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
    org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
    org.apache.hadoop.ipc.UnexpectedServerException - If server implementation throws undeclared exception to RPC server

    util

    public org.apache.hadoop.fs.FileContext.Util util()

    resolve

    protected Path resolve(Path f)
                    throws FileNotFoundException,
                           org.apache.hadoop.fs.UnresolvedLinkException,
                           org.apache.hadoop.security.AccessControlException,
                           IOException
    Resolves all symbolic links in the specified path. Returns the new path object.

    Throws:
    FileNotFoundException
    org.apache.hadoop.fs.UnresolvedLinkException
    org.apache.hadoop.security.AccessControlException
    IOException

    resolveIntermediate

    protected Path resolveIntermediate(Path f)
                                throws IOException
    Resolves all symbolic links in the specified path leading up to, but not including the final path component.

    Parameters:
    f - path to resolve
    Returns:
    the new path object.
    Throws:
    IOException

    getStatistics

    public static org.apache.hadoop.fs.FileSystem.Statistics getStatistics(URI uri)
    Get the statistics for a particular file system.

    Parameters:
    uri - the URI to look up the statistics for. Only the scheme and authority parts of the URI are used as the key to store and look up the statistics.
    Returns:
    a statistics object
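
    Because only scheme and authority form the key, two paths on the same file system map to one statistics object. A sketch of that keying rule with java.net.URI (statsKey is a made-up helper, not Hadoop code):

```java
import java.net.URI;

// The documented keying rule for per-file-system statistics:
// only scheme and authority identify the entry; the path is ignored.
final class StatsKeyExample {
    static String statsKey(URI uri) {
        return uri.getScheme() + "://" + uri.getAuthority();
    }
}
```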

    clearStatistics

    public static void clearStatistics()
    Clears all the statistics stored in AbstractFileSystem, for all the file systems.


    printStatistics

    public static void printStatistics()
    Prints the statistics to standard output. Each file system is identified by its scheme and authority.


    getAllStatistics

    public static Map<URI,org.apache.hadoop.fs.FileSystem.Statistics> getAllStatistics()
    Returns:
    a map from URI to statistics for each file system instantiated. The URI consists of the scheme and authority of the file system.

    modifyAclEntries

    public void modifyAclEntries(Path path,
                                 List<AclEntry> aclSpec)
                          throws IOException
    Modifies ACL entries of files and directories. This method can add new ACL entries or modify the permissions on existing ACL entries. All existing ACL entries that are not specified in this call are retained without changes. (Modifications are merged into the current ACL.)

    Parameters:
    path - Path to modify
    aclSpec - List describing modifications
    Throws:
    IOException - if an ACL could not be modified

    removeAclEntries

    public void removeAclEntries(Path path,
                                 List<AclEntry> aclSpec)
                          throws IOException
    Removes ACL entries from files and directories. Other ACL entries are retained.

    Parameters:
    path - Path to modify
    aclSpec - List describing entries to remove
    Throws:
    IOException - if an ACL could not be modified

    removeDefaultAcl

    public void removeDefaultAcl(Path path)
                          throws IOException
    Removes all default ACL entries from files and directories.

    Parameters:
    path - Path to modify
    Throws:
    IOException - if an ACL could not be modified

    removeAcl

    public void removeAcl(Path path)
                   throws IOException
    Removes all but the base ACL entries of files and directories. The entries for user, group, and others are retained for compatibility with permission bits.

    Parameters:
    path - Path to modify
    Throws:
    IOException - if an ACL could not be removed

    setAcl

    public void setAcl(Path path,
                       List<AclEntry> aclSpec)
                throws IOException
    Fully replaces ACL of files and directories, discarding all existing entries.

    Parameters:
    path - Path to modify
    aclSpec - List describing modifications, must include entries for user, group, and others for compatibility with permission bits.
    Throws:
    IOException - if an ACL could not be modified
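
    Because setAcl discards all existing entries, the replacement spec must itself carry the base entries for user, group, and others. A hedged pre-flight validation sketch (AclType is invented for illustration; Hadoop's real model is AclEntry with an AclEntryType):

```java
import java.util.EnumSet;
import java.util.List;

// Hypothetical check mirroring the setAcl requirement that the spec include
// user, group, and others entries (for compatibility with permission bits).
final class AclSpecExample {
    enum AclType { USER, GROUP, OTHER, MASK }

    static boolean hasRequiredBaseEntries(List<AclType> spec) {
        EnumSet<AclType> seen = EnumSet.noneOf(AclType.class);
        seen.addAll(spec);
        return seen.containsAll(EnumSet.of(AclType.USER, AclType.GROUP, AclType.OTHER));
    }
}
```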

    getAclStatus

    public AclStatus getAclStatus(Path path)
                           throws IOException
    Gets the ACLs of files and directories.

    Parameters:
    path - Path to get
    Returns:
    an AclStatus object describing the ACL of the file or directory
    Throws:
    IOException - if an ACL could not be read

    setXAttr

    public void setXAttr(Path path,
                         String name,
                         byte[] value)
                  throws IOException
    Set an xattr of a file or directory. The name must be prefixed with the namespace followed by ".". For example, "user.attr".

    Refer to the HDFS extended attributes user documentation for details.

    Parameters:
    path - Path to modify
    name - xattr name.
    value - xattr value.
    Throws:
    IOException
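
    The "namespace." prefix rule can be checked before calling setXAttr. A sketch of such a check (the helper and the namespace list are assumptions; HDFS's extended-attributes documentation defines the user, trusted, security, system, and raw namespaces):

```java
// Hypothetical validation of the xattr naming rule: the name must start
// with a namespace followed by ".", e.g. "user.attr".
final class XAttrNameExample {
    private static final String[] NAMESPACES =
            { "user", "trusted", "security", "system", "raw" };

    static boolean hasNamespacePrefix(String name) {
        int dot = name.indexOf('.');
        if (dot <= 0) {
            return false; // no namespace separator, or empty namespace
        }
        String ns = name.substring(0, dot);
        for (String s : NAMESPACES) {
            if (s.equalsIgnoreCase(ns)) {
                return true;
            }
        }
        return false;
    }
}
```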

    setXAttr

    public void setXAttr(Path path,
                         String name,
                         byte[] value,
                         EnumSet<XAttrSetFlag> flag)
                  throws IOException
    Set an xattr of a file or directory. The name must be prefixed with the namespace followed by ".". For example, "user.attr".

    Refer to the HDFS extended attributes user documentation for details.

    Parameters:
    path - Path to modify
    name - xattr name.
    value - xattr value.
    flag - xattr set flag
    Throws:
    IOException

    getXAttr

    public byte[] getXAttr(Path path,
                           String name)
                    throws IOException
    Get an xattr for a file or directory. The name must be prefixed with the namespace followed by ".". For example, "user.attr".

    Refer to the HDFS extended attributes user documentation for details.

    Parameters:
    path - Path to get extended attribute
    name - xattr name.
    Returns:
    byte[] xattr value.
    Throws:
    IOException

    getXAttrs

    public Map<String,byte[]> getXAttrs(Path path)
                                 throws IOException
    Get all of the xattrs for a file or directory. Only those xattrs for which the logged-in user has permissions to view are returned.

    Refer to the HDFS extended attributes user documentation for details.

    Parameters:
    path - Path to get extended attributes
    Returns:
    Map describing the XAttrs of the file or directory
    Throws:
    IOException

    getXAttrs

    public Map<String,byte[]> getXAttrs(Path path,
                                        List<String> names)
                                 throws IOException
    Get all of the xattrs for a file or directory. Only those xattrs for which the logged-in user has permissions to view are returned.

    Refer to the HDFS extended attributes user documentation for details.

    Parameters:
    path - Path to get extended attributes
    names - XAttr names.
    Returns:
    Map describing the XAttrs of the file or directory
    Throws:
    IOException

    removeXAttr

    public void removeXAttr(Path path,
                            String name)
                     throws IOException
    Remove an xattr of a file or directory. The name must be prefixed with the namespace followed by ".". For example, "user.attr".

    Refer to the HDFS extended attributes user documentation for details.

    Parameters:
    path - Path to remove extended attribute
    name - xattr name
    Throws:
    IOException

    listXAttrs

    public List<String> listXAttrs(Path path)
                            throws IOException
    Get all of the xattr names for a file or directory. Only those xattr names which the logged-in user has permissions to view are returned.

    Refer to the HDFS extended attributes user documentation for details.

    Parameters:
    path - Path to get extended attributes
    Returns:
    List of the XAttr names of the file or directory
    Throws:
    IOException


    Copyright © 2014 Apache Software Foundation. All Rights Reserved.