Class AbstractFileSystem

java.lang.Object
org.apache.hadoop.fs.AbstractFileSystem
All Implemented Interfaces:
org.apache.hadoop.fs.PathCapabilities
Direct Known Subclasses:
org.apache.hadoop.fs.DelegateToFileSystem, ViewFs

@Public @Stable public abstract class AbstractFileSystem extends Object implements org.apache.hadoop.fs.PathCapabilities
This class provides an interface for implementors of a Hadoop file system (analogous to the VFS of Unix). Applications do not access this class; instead they access files across all file systems using FileContext. Pathnames passed to AbstractFileSystem can be a fully qualified URI that matches the "this" file system (i.e. same scheme and authority) or a slash-relative name that is assumed to be relative to the root of the "this" file system.
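The two accepted pathname forms above can be sketched with plain java.net.URI logic. This is an illustrative model only; PathForm and accepts are hypothetical names, not part of Hadoop:

```java
import java.net.URI;

// Hypothetical helper mirroring the two pathname forms AbstractFileSystem
// accepts: a fully qualified URI for "this" file system, or a slash-relative
// name assumed to be relative to this file system's root.
public class PathForm {
    /** True if 'name' is acceptable for a file system identified by fsUri. */
    public static boolean accepts(URI fsUri, String name) {
        if (name.startsWith("/")) {
            return true; // slash-relative: interpreted against this FS root
        }
        // Otherwise it must be fully qualified with the same scheme and authority.
        URI u = URI.create(name);
        return fsUri.getScheme().equals(u.getScheme())
            && (fsUri.getAuthority() == null
                ? u.getAuthority() == null
                : fsUri.getAuthority().equals(u.getAuthority()));
    }
}
```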
  • Field Details

    • statistics

      protected org.apache.hadoop.fs.FileSystem.Statistics statistics
      The statistics for this file system.
  • Constructor Details

    • AbstractFileSystem

      public AbstractFileSystem(URI uri, String supportedScheme, boolean authorityNeeded, int defaultPort) throws URISyntaxException
      Constructor to be called by subclasses.
      Parameters:
      uri - for this file system.
      supportedScheme - the scheme supported by the implementor
      authorityNeeded - if true then the URI must have an authority; if false then the URI must have a null authority.
      defaultPort - default port to use if port is not specified in the URI.
      Throws:
      URISyntaxException - if uri has a syntax error
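A minimal sketch of the authorityNeeded contract, assuming the check reduces to whether the URI's authority component is present or null. AuthorityCheck is a hypothetical name, not Hadoop's implementation:

```java
import java.net.URI;
import java.net.URISyntaxException;

// Illustrative sketch of the authority validation described in the
// constructor contract above.
public class AuthorityCheck {
    public static void check(URI uri, boolean authorityNeeded) throws URISyntaxException {
        String auth = uri.getAuthority();
        if (authorityNeeded && auth == null) {
            throw new URISyntaxException(uri.toString(), "authority is required");
        }
        if (!authorityNeeded && auth != null) {
            throw new URISyntaxException(uri.toString(), "authority must be null");
        }
    }
}
```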
  • Method Details

    • getStatistics

      public org.apache.hadoop.fs.FileSystem.Statistics getStatistics()
    • isValidName

      public boolean isValidName(String src)
      Returns true if the specified string is considered valid in the path part of a URI by this file system. The default implementation enforces the rules of HDFS, but subclasses may override this method to implement specific validation rules for specific file systems.
      Parameters:
      src - String source filename to check, path part of the URI
      Returns:
      boolean true if the specified string is considered valid
    • createFileSystem

      public static AbstractFileSystem createFileSystem(URI uri, Configuration conf) throws UnsupportedFileSystemException
      Create a file system instance for the specified uri using the conf. The conf is used to find the class name that implements the file system. The conf is also passed to the file system for its configuration.
      Parameters:
      uri - URI of the file system
      conf - Configuration for the file system
      Returns:
      Returns the file system for the given URI
      Throws:
      UnsupportedFileSystemException - if a file system for uri is not found
    • getStatistics

      protected static org.apache.hadoop.fs.FileSystem.Statistics getStatistics(URI uri)
      Get the statistics for a particular file system.
      Parameters:
      uri - used as key to lookup STATISTICS_TABLE. Only scheme and authority part of the uri are used.
      Returns:
      a statistics object
    • clearStatistics

      public static void clearStatistics()
    • printStatistics

      public static void printStatistics()
      Prints statistics for all file systems.
    • getAllStatistics

      protected static Map<URI,org.apache.hadoop.fs.FileSystem.Statistics> getAllStatistics()
    • get

      public static AbstractFileSystem get(URI uri, Configuration conf) throws UnsupportedFileSystemException
      The main factory method for creating a file system. Get a file system for the URI's scheme and authority. The scheme of the uri determines a configuration property name, fs.AbstractFileSystem.<scheme>.impl, whose value names the AbstractFileSystem implementation class. The entire URI and the conf are passed to the AbstractFileSystem factory method.
      Parameters:
      uri - for the file system to be created.
      conf - which is passed to the file system impl.
      Returns:
      file system for the given URI.
      Throws:
      UnsupportedFileSystemException - if the file system for uri is not supported.
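The property-name lookup described above can be illustrated with plain string logic. ImplKey is a hypothetical helper; the real factory additionally loads and instantiates the class named by the property:

```java
import java.net.URI;

// Hypothetical helper deriving the configuration property name that the
// get() factory consults for a given URI scheme.
public class ImplKey {
    public static String key(URI uri) {
        return "fs.AbstractFileSystem." + uri.getScheme() + ".impl";
    }
}
```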
    • checkScheme

      public void checkScheme(URI uri, String supportedScheme)
      Check that the URI's scheme matches the supported scheme.
      Parameters:
      uri - name URI of the FS.
      supportedScheme - supported scheme.
    • getUriDefaultPort

      public abstract int getUriDefaultPort()
      The default port of this file system.
      Returns:
      the default port of this file system's URI scheme. A URI with a port of -1 uses the default port.
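The "-1 means default port" convention can be sketched with java.net.URI, which reports -1 when no port is present. Ports.effectivePort is a hypothetical helper:

```java
import java.net.URI;

// Illustrative sketch: substitute the file system's default port when the
// URI does not specify one (URI.getPort() returns -1 in that case).
public class Ports {
    public static int effectivePort(URI uri, int defaultPort) {
        return uri.getPort() == -1 ? defaultPort : uri.getPort();
    }
}
```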
    • getUri

      public URI getUri()
      Returns a URI whose scheme and authority identify this FileSystem.
      Returns:
      the uri of this file system.
    • checkPath

      public void checkPath(Path path)
      Check that a Path belongs to this FileSystem. If the path is a fully qualified URI, its scheme and authority must match those of this file system. Otherwise the path must be a slash-relative name.
      Parameters:
      path - the path.
      Throws:
      InvalidPathException - if the path is invalid
    • getUriPath

      public String getUriPath(Path p)
      Get the path-part of a pathname. Checks that the URI matches this file system and that the path-part is a valid name.
      Parameters:
      p - path
      Returns:
      path-part of the Path p
    • makeQualified

      public Path makeQualified(Path path)
      Make the path fully qualified to this file system.
      Parameters:
      path - the path.
      Returns:
      the qualified path
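Qualification of a slash-relative name can be sketched with java.net.URI. Qualify is a hypothetical helper; the real method works on Hadoop Path objects and this file system's own URI:

```java
import java.net.URI;

// Illustrative sketch: prefix a slash-relative name with this file system's
// scheme and authority; a name that is already fully qualified passes through.
public class Qualify {
    public static URI makeQualified(URI fsUri, String name) {
        if (name.startsWith("/")) {
            return URI.create(fsUri.getScheme() + "://" + fsUri.getAuthority() + name);
        }
        return URI.create(name); // assumed already fully qualified
    }
}
```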
    • getInitialWorkingDirectory

      public Path getInitialWorkingDirectory()
      Some file systems, like LocalFileSystem, have an initial workingDir that is used as the starting workingDir. For other file systems, like HDFS, there is no built-in notion of an initial workingDir.
      Returns:
      the initial workingDir if the file system has such a notion; otherwise null.
    • getHomeDirectory

      public Path getHomeDirectory()
      Return the current user's home directory in this file system. The default implementation returns "/user/$USER/".
      Returns:
      current user's home directory.
    • getServerDefaults

      @Deprecated public abstract FsServerDefaults getServerDefaults() throws IOException
      Deprecated.
      Return a set of server default configuration values.
      Returns:
      server default configuration values
      Throws:
      IOException - an I/O error occurred
    • getServerDefaults

      public FsServerDefaults getServerDefaults(Path f) throws IOException
      Return a set of server default configuration values based on path.
      Parameters:
      f - path to fetch server defaults
      Returns:
      server default configuration values for path
      Throws:
      IOException - an I/O error occurred
    • resolvePath

      public Path resolvePath(Path p) throws FileNotFoundException, org.apache.hadoop.fs.UnresolvedLinkException, AccessControlException, IOException
      Return the fully-qualified path of path p, resolving the path through any internal symlinks or mount points.
      Parameters:
      p - path to be resolved
      Returns:
      fully qualified path
      Throws:
      FileNotFoundException - if the file is not found.
      AccessControlException - if an access control error occurs.
      IOException - raised on errors performing I/O.
      org.apache.hadoop.fs.UnresolvedLinkException - if symbolic link on path cannot be resolved internally
    • create

      public final FSDataOutputStream create(Path f, EnumSet<CreateFlag> createFlag, org.apache.hadoop.fs.Options.CreateOpts... opts) throws AccessControlException, FileAlreadyExistsException, FileNotFoundException, ParentNotDirectoryException, UnsupportedFileSystemException, org.apache.hadoop.fs.UnresolvedLinkException, IOException
      The specification of this method matches that of FileContext.create(Path, EnumSet, Options.CreateOpts...) except that the Path f must be fully qualified and the permission is absolute (i.e. umask has been applied).
      Parameters:
      f - the path.
      createFlag - flags indicating how the file is to be created.
      opts - create options.
      Returns:
      output stream.
      Throws:
      AccessControlException - access control exception.
      FileAlreadyExistsException - file already exists exception.
      FileNotFoundException - file not found exception.
      ParentNotDirectoryException - parent not dir exception.
      UnsupportedFileSystemException - unsupported file system exception.
      org.apache.hadoop.fs.UnresolvedLinkException - unresolved link exception.
      IOException - raised on errors performing I/O.
    • createInternal

      public abstract FSDataOutputStream createInternal(Path f, EnumSet<CreateFlag> flag, FsPermission absolutePermission, int bufferSize, short replication, long blockSize, Progressable progress, org.apache.hadoop.fs.Options.ChecksumOpt checksumOpt, boolean createParent) throws AccessControlException, FileAlreadyExistsException, FileNotFoundException, ParentNotDirectoryException, UnsupportedFileSystemException, org.apache.hadoop.fs.UnresolvedLinkException, IOException
      The specification of this method matches that of create(Path, EnumSet, Options.CreateOpts...) except that the opts have been declared explicitly.
      Parameters:
      f - the path.
      flag - create flag.
      absolutePermission - absolute permission.
      bufferSize - buffer size.
      replication - replications.
      blockSize - block size.
      progress - progress.
      checksumOpt - check sum opt.
      createParent - create parent.
      Returns:
      output stream.
      Throws:
      AccessControlException - access control exception.
      FileAlreadyExistsException - file already exists exception.
      FileNotFoundException - file not found exception.
      ParentNotDirectoryException - parent not directory exception.
      UnsupportedFileSystemException - unsupported filesystem exception.
      org.apache.hadoop.fs.UnresolvedLinkException - unresolved link exception.
      IOException - raised on errors performing I/O.
    • mkdir

      public abstract void mkdir(Path dir, FsPermission permission, boolean createParent) throws AccessControlException, FileAlreadyExistsException, FileNotFoundException, org.apache.hadoop.fs.UnresolvedLinkException, IOException
      The specification of this method matches that of FileContext.mkdir(Path, FsPermission, boolean) except that the Path f must be fully qualified and the permission is absolute (i.e. umask has been applied).
      Parameters:
      dir - directory.
      permission - permission.
      createParent - create parent flag.
      Throws:
      AccessControlException - access control exception.
      FileAlreadyExistsException - file already exists exception.
      FileNotFoundException - file not found exception.
      org.apache.hadoop.fs.UnresolvedLinkException - unresolved link exception.
      IOException - raised on errors performing I/O.
    • delete

      public abstract boolean delete(Path f, boolean recursive) throws AccessControlException, FileNotFoundException, org.apache.hadoop.fs.UnresolvedLinkException, IOException
      The specification of this method matches that of FileContext.delete(Path, boolean) except that Path f must be for this file system.
      Parameters:
      f - the path.
      recursive - recursive flag.
      Returns:
      true if the delete succeeded; false otherwise.
      Throws:
      AccessControlException - access control exception.
      FileNotFoundException - file not found exception.
      org.apache.hadoop.fs.UnresolvedLinkException - unresolved link exception.
      IOException - raised on errors performing I/O.
    • open

      public FSDataInputStream open(Path f) throws AccessControlException, FileNotFoundException, org.apache.hadoop.fs.UnresolvedLinkException, IOException
      The specification of this method matches that of FileContext.open(Path) except that Path f must be for this file system.
      Parameters:
      f - the path.
      Returns:
      input stream.
      Throws:
      AccessControlException - access control exception.
      FileNotFoundException - file not found exception.
      org.apache.hadoop.fs.UnresolvedLinkException - unresolved link exception.
      IOException - raised on errors performing I/O.
    • open

      public abstract FSDataInputStream open(Path f, int bufferSize) throws AccessControlException, FileNotFoundException, org.apache.hadoop.fs.UnresolvedLinkException, IOException
      The specification of this method matches that of FileContext.open(Path, int) except that Path f must be for this file system.
      Parameters:
      f - the path.
      bufferSize - buffer size.
      Returns:
      input stream.
      Throws:
      AccessControlException - access control exception.
      FileNotFoundException - file not found exception.
      org.apache.hadoop.fs.UnresolvedLinkException - unresolved link exception.
      IOException - raised on errors performing I/O.
    • truncate

      public boolean truncate(Path f, long newLength) throws AccessControlException, FileNotFoundException, org.apache.hadoop.fs.UnresolvedLinkException, IOException
      The specification of this method matches that of FileContext.truncate(Path, long) except that Path f must be for this file system.
      Parameters:
      f - the path.
      newLength - new length.
      Returns:
      true if the truncate succeeded; false otherwise.
      Throws:
      AccessControlException - access control exception.
      FileNotFoundException - file not found exception.
      org.apache.hadoop.fs.UnresolvedLinkException - unresolved link exception.
      IOException - raised on errors performing I/O.
    • setReplication

      public abstract boolean setReplication(Path f, short replication) throws AccessControlException, FileNotFoundException, org.apache.hadoop.fs.UnresolvedLinkException, IOException
      The specification of this method matches that of FileContext.setReplication(Path, short) except that Path f must be for this file system.
      Parameters:
      f - the path.
      replication - replication.
      Returns:
      true if the replication was successfully set; false otherwise.
      Throws:
      AccessControlException - access control exception.
      FileNotFoundException - file not found exception.
      org.apache.hadoop.fs.UnresolvedLinkException - unresolved link exception.
      IOException - raised on errors performing I/O.
    • rename

      public final void rename(Path src, Path dst, Options.Rename... options) throws AccessControlException, FileAlreadyExistsException, FileNotFoundException, ParentNotDirectoryException, org.apache.hadoop.fs.UnresolvedLinkException, IOException
      The specification of this method matches that of FileContext.rename(Path, Path, Options.Rename...) except that Path f must be for this file system.
      Parameters:
      src - src.
      dst - dst.
      options - options.
      Throws:
      AccessControlException - access control exception.
      FileAlreadyExistsException - file already exists exception.
      FileNotFoundException - file not found exception.
      ParentNotDirectoryException - parent not directory exception.
      org.apache.hadoop.fs.UnresolvedLinkException - unresolved link exception.
      IOException - raised on errors performing I/O.
    • renameInternal

      public abstract void renameInternal(Path src, Path dst) throws AccessControlException, FileAlreadyExistsException, FileNotFoundException, ParentNotDirectoryException, org.apache.hadoop.fs.UnresolvedLinkException, IOException
      The specification of this method matches that of FileContext.rename(Path, Path, Options.Rename...) except that Path f must be for this file system and NO OVERWRITE is performed. File systems that do not have a built-in overwrite need only implement this method and can take advantage of the default implementation of the other renameInternal(Path, Path, boolean).
      Parameters:
      src - src.
      dst - dst.
      Throws:
      AccessControlException - access control exception.
      FileAlreadyExistsException - file already exists exception.
      FileNotFoundException - file not found exception.
      ParentNotDirectoryException - parent not directory exception.
      org.apache.hadoop.fs.UnresolvedLinkException - unresolved link exception.
      IOException - raised on errors performing I/O.
    • renameInternal

      public void renameInternal(Path src, Path dst, boolean overwrite) throws AccessControlException, FileAlreadyExistsException, FileNotFoundException, ParentNotDirectoryException, org.apache.hadoop.fs.UnresolvedLinkException, IOException
      The specification of this method matches that of FileContext.rename(Path, Path, Options.Rename...) except that Path f must be for this file system.
      Parameters:
      src - src.
      dst - dst.
      overwrite - overwrite flag.
      Throws:
      AccessControlException - access control exception.
      FileAlreadyExistsException - file already exists exception.
      FileNotFoundException - file not found exception.
      ParentNotDirectoryException - parent not directory exception.
      org.apache.hadoop.fs.UnresolvedLinkException - unresolved link exception.
      IOException - raised on errors performing I/O.
    • supportsSymlinks

      public boolean supportsSymlinks()
      Returns true if the file system supports symlinks, false otherwise.
      Returns:
      true if filesystem supports symlinks
    • createSymlink

      public void createSymlink(Path target, Path link, boolean createParent) throws IOException, org.apache.hadoop.fs.UnresolvedLinkException
      The specification of this method matches that of FileContext.createSymlink(Path, Path, boolean).
      Parameters:
      target - target.
      link - link.
      createParent - create parent.
      Throws:
      IOException - raised on errors performing I/O.
      org.apache.hadoop.fs.UnresolvedLinkException - unresolved link exception.
    • getLinkTarget

      public Path getLinkTarget(Path f) throws IOException
      Partially resolves the path. This is used during symlink resolution in FSLinkResolver, and differs from the similarly named method FileContext.getLinkTarget(Path).
      Parameters:
      f - the path.
      Returns:
      target path.
      Throws:
      IOException - subclass implementations may throw IOException
    • setPermission

      public abstract void setPermission(Path f, FsPermission permission) throws AccessControlException, FileNotFoundException, org.apache.hadoop.fs.UnresolvedLinkException, IOException
      The specification of this method matches that of FileContext.setPermission(Path, FsPermission) except that Path f must be for this file system.
      Parameters:
      f - the path.
      permission - permission.
      Throws:
      AccessControlException - access control exception.
      FileNotFoundException - file not found exception.
      org.apache.hadoop.fs.UnresolvedLinkException - unresolved link exception.
      IOException - raised on errors performing I/O.
    • setOwner

      public abstract void setOwner(Path f, String username, String groupname) throws AccessControlException, FileNotFoundException, org.apache.hadoop.fs.UnresolvedLinkException, IOException
      The specification of this method matches that of FileContext.setOwner(Path, String, String) except that Path f must be for this file system.
      Parameters:
      f - the path.
      username - username.
      groupname - groupname.
      Throws:
      AccessControlException - access control exception.
      FileNotFoundException - file not found exception.
      org.apache.hadoop.fs.UnresolvedLinkException - unresolved link exception.
      IOException - raised on errors performing I/O.
    • setTimes

      public abstract void setTimes(Path f, long mtime, long atime) throws AccessControlException, FileNotFoundException, org.apache.hadoop.fs.UnresolvedLinkException, IOException
      The specification of this method matches that of FileContext.setTimes(Path, long, long) except that Path f must be for this file system.
      Parameters:
      f - the path.
      mtime - modify time.
      atime - access time.
      Throws:
      AccessControlException - access control exception.
      FileNotFoundException - file not found exception.
      org.apache.hadoop.fs.UnresolvedLinkException - unresolved link exception.
      IOException - raised on errors performing I/O.
    • getFileChecksum

      public abstract FileChecksum getFileChecksum(Path f) throws AccessControlException, FileNotFoundException, org.apache.hadoop.fs.UnresolvedLinkException, IOException
      The specification of this method matches that of FileContext.getFileChecksum(Path) except that Path f must be for this file system.
      Parameters:
      f - the path.
      Returns:
      the file checksum.
      Throws:
      AccessControlException - access control exception.
      FileNotFoundException - file not found exception.
      org.apache.hadoop.fs.UnresolvedLinkException - unresolved link exception.
      IOException - raised on errors performing I/O.
    • getFileStatus

      public abstract FileStatus getFileStatus(Path f) throws AccessControlException, FileNotFoundException, org.apache.hadoop.fs.UnresolvedLinkException, IOException
      The specification of this method matches that of FileContext.getFileStatus(Path) except that an UnresolvedLinkException may be thrown if a symlink is encountered in the path.
      Parameters:
      f - the path.
      Returns:
      File Status
      Throws:
      AccessControlException - access control exception.
      FileNotFoundException - file not found exception.
      org.apache.hadoop.fs.UnresolvedLinkException - unresolved link exception.
      IOException - raised on errors performing I/O.
    • msync

      public void msync() throws IOException, UnsupportedOperationException
      Synchronize client metadata state.

      In some FileSystem implementations, such as HDFS, metadata synchronization is essential to guarantee consistency of read requests, particularly in an HA setting.

      Throws:
      IOException - raised on errors performing I/O.
      UnsupportedOperationException - Unsupported Operation Exception.
    • access

      @LimitedPrivate({"HDFS","Hive"}) public void access(Path path, FsAction mode) throws AccessControlException, FileNotFoundException, org.apache.hadoop.fs.UnresolvedLinkException, IOException
      The specification of this method matches that of FileContext.access(Path, FsAction) except that an UnresolvedLinkException may be thrown if a symlink is encountered in the path.
      Parameters:
      path - the path.
      mode - fsaction mode.
      Throws:
      AccessControlException - access control exception.
      FileNotFoundException - file not found exception.
      org.apache.hadoop.fs.UnresolvedLinkException - unresolved link exception.
      IOException - raised on errors performing I/O.
    • getFileLinkStatus

      public FileStatus getFileLinkStatus(Path f) throws AccessControlException, FileNotFoundException, UnsupportedFileSystemException, IOException
      The specification of this method matches that of FileContext.getFileLinkStatus(Path) except that an UnresolvedLinkException may be thrown if a symlink is encountered in the path leading up to the final path component. If the file system does not support symlinks then the behavior is equivalent to getFileStatus(Path).
      Parameters:
      f - the path.
      Returns:
      file status.
      Throws:
      AccessControlException - access control exception.
      FileNotFoundException - file not found exception.
      UnsupportedFileSystemException - unsupported file system exception.
      IOException - raised on errors performing I/O.
    • getFileBlockLocations

      public abstract BlockLocation[] getFileBlockLocations(Path f, long start, long len) throws AccessControlException, FileNotFoundException, org.apache.hadoop.fs.UnresolvedLinkException, IOException
      The specification of this method matches that of FileContext.getFileBlockLocations(Path, long, long) except that Path f must be for this file system.
      Parameters:
      f - the path.
      start - start.
      len - length.
      Returns:
      BlockLocation Array.
      Throws:
      AccessControlException - access control exception.
      FileNotFoundException - file not found exception.
      org.apache.hadoop.fs.UnresolvedLinkException - unresolved link exception.
      IOException - raised on errors performing I/O.
    • getFsStatus

      public FsStatus getFsStatus(Path f) throws AccessControlException, FileNotFoundException, org.apache.hadoop.fs.UnresolvedLinkException, IOException
      The specification of this method matches that of FileContext.getFsStatus(Path) except that Path f must be for this file system.
      Parameters:
      f - the path.
      Returns:
      Fs Status.
      Throws:
      AccessControlException - access control exception.
      FileNotFoundException - file not found exception.
      org.apache.hadoop.fs.UnresolvedLinkException - unresolved link exception.
      IOException - raised on errors performing I/O.
    • getFsStatus

      public abstract FsStatus getFsStatus() throws AccessControlException, FileNotFoundException, IOException
      The specification of this method matches that of FileContext.getFsStatus(Path).
      Returns:
      Fs Status.
      Throws:
      AccessControlException - access control exception.
      FileNotFoundException - file not found exception.
      IOException - raised on errors performing I/O.
    • listStatusIterator

      public org.apache.hadoop.fs.RemoteIterator<FileStatus> listStatusIterator(Path f) throws AccessControlException, FileNotFoundException, org.apache.hadoop.fs.UnresolvedLinkException, IOException
      The specification of this method matches that of FileContext.listStatus(Path) except that Path f must be for this file system.
      Parameters:
      f - path.
      Returns:
      FileStatus Iterator.
      Throws:
      AccessControlException - access control exception.
      FileNotFoundException - file not found exception.
      org.apache.hadoop.fs.UnresolvedLinkException - unresolved link exception.
      IOException - raised on errors performing I/O.
    • listLocatedStatus

      public org.apache.hadoop.fs.RemoteIterator<LocatedFileStatus> listLocatedStatus(Path f) throws AccessControlException, FileNotFoundException, org.apache.hadoop.fs.UnresolvedLinkException, IOException
      The specification of this method matches that of FileContext.listLocatedStatus(Path) except that Path f must be for this file system. In the HDFS implementation, the BlockLocation of a returned LocatedFileStatus will have different formats for replicated and erasure coded files. Please refer to FileSystem.getFileBlockLocations(FileStatus, long, long) for more details.
      Parameters:
      f - the path.
      Returns:
      a RemoteIterator of LocatedFileStatus.
      Throws:
      AccessControlException - access control exception.
      FileNotFoundException - file not found exception.
      org.apache.hadoop.fs.UnresolvedLinkException - unresolved link exception.
      IOException - raised on errors performing I/O.
    • listStatus

      public abstract FileStatus[] listStatus(Path f) throws AccessControlException, FileNotFoundException, org.apache.hadoop.fs.UnresolvedLinkException, IOException
      The specification of this method matches that of FileContext.Util.listStatus(Path) except that Path f must be for this file system.
      Parameters:
      f - the path.
      Returns:
      an array of FileStatus objects.
      Throws:
      AccessControlException - access control exception.
      FileNotFoundException - file not found exception.
      org.apache.hadoop.fs.UnresolvedLinkException - unresolved link exception.
      IOException - raised on errors performing I/O.
    • listCorruptFileBlocks

      public org.apache.hadoop.fs.RemoteIterator<Path> listCorruptFileBlocks(Path path) throws IOException
      Parameters:
      path - the path.
      Returns:
      an iterator over the corrupt files under the given path (may contain duplicates if a file has more than one corrupt block)
      Throws:
      IOException - raised on errors performing I/O.
    • setVerifyChecksum

      public abstract void setVerifyChecksum(boolean verifyChecksum) throws AccessControlException, IOException
      The specification of this method matches that of FileContext.setVerifyChecksum(boolean, Path) except that Path f must be for this file system.
      Parameters:
      verifyChecksum - verify check sum flag.
      Throws:
      AccessControlException - access control exception.
      IOException - raised on errors performing I/O.
    • getCanonicalServiceName

      public String getCanonicalServiceName()
      Get a canonical name for this file system.
      Returns:
      a URI string that uniquely identifies this file system
    • getDelegationTokens

      @LimitedPrivate({"HDFS","MapReduce"}) public List<Token<?>> getDelegationTokens(String renewer) throws IOException
      Get one or more delegation tokens associated with the file system. Normally a file system returns a single delegation token. A file system that manages multiple file systems underneath may return a set of delegation tokens for all the file systems it manages.
      Parameters:
      renewer - the account name that is allowed to renew the token.
      Returns:
      a list of delegation tokens. If delegation tokens are not supported, a list of size zero is returned.
      Throws:
      IOException - raised on errors performing I/O.
    • modifyAclEntries

      public void modifyAclEntries(Path path, List<AclEntry> aclSpec) throws IOException
      Modifies ACL entries of files and directories. This method can add new ACL entries or modify the permissions on existing ACL entries. All existing ACL entries that are not specified in this call are retained without changes. (Modifications are merged into the current ACL.)
      Parameters:
      path - Path to modify
      aclSpec - List<AclEntry> describing modifications
      Throws:
      IOException - if an ACL could not be modified
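The merge semantics of modifyAclEntries can be modeled with a map keyed by entry identity: entries in the spec replace matching existing entries, and all others are retained. AclMerge is a hypothetical illustration, not Hadoop's AclEntry machinery:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative model: an ACL as a map from entry identity (e.g. "user:alice")
// to a permission string; modifications are merged into the current ACL.
public class AclMerge {
    public static Map<String, String> modify(Map<String, String> current,
                                             Map<String, String> spec) {
        Map<String, String> merged = new LinkedHashMap<>(current);
        merged.putAll(spec); // replace matching entries, add new ones, keep the rest
        return merged;
    }
}
```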
    • removeAclEntries

      public void removeAclEntries(Path path, List<AclEntry> aclSpec) throws IOException
      Removes ACL entries from files and directories. Other ACL entries are retained.
      Parameters:
      path - Path to modify
      aclSpec - List<AclEntry> describing entries to remove
      Throws:
      IOException - if an ACL could not be modified
    • removeDefaultAcl

      public void removeDefaultAcl(Path path) throws IOException
      Removes all default ACL entries from files and directories.
      Parameters:
      path - Path to modify
      Throws:
      IOException - if an ACL could not be modified
    • removeAcl

      public void removeAcl(Path path) throws IOException
      Removes all but the base ACL entries of files and directories. The entries for user, group, and others are retained for compatibility with permission bits.
      Parameters:
      path - Path to modify
      Throws:
      IOException - if an ACL could not be removed
    • setAcl

      public void setAcl(Path path, List<AclEntry> aclSpec) throws IOException
      Fully replaces ACL of files and directories, discarding all existing entries.
      Parameters:
      path - Path to modify
      aclSpec - List<AclEntry> describing modifications, must include entries for user, group, and others for compatibility with permission bits.
      Throws:
      IOException - if an ACL could not be modified
    • getAclStatus

      public AclStatus getAclStatus(Path path) throws IOException
      Gets the ACLs of files and directories.
      Parameters:
      path - Path to get
      Returns:
      the AclStatus describing the ACL entries of the file or directory
      Throws:
      IOException - if an ACL could not be read
    • setXAttr

      public void setXAttr(Path path, String name, byte[] value) throws IOException
      Set an xattr of a file or directory. The name must be prefixed with the namespace followed by ".". For example, "user.attr".

      Refer to the HDFS extended attributes user documentation for details.

      Parameters:
      path - Path to modify
      name - xattr name.
      value - xattr value.
      Throws:
      IOException - raised on errors performing I/O.
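      The naming rule above — namespace, then ".", then the attribute key — can be sketched as a small validity check. XAttrNames is a hypothetical helper, not Hadoop's actual validation code; the namespace list reflects the HDFS extended-attribute namespaces (user, trusted, security, system, raw).

      ```java
      import java.util.Set;

      // Sketch of the xattr naming rule only; not Hadoop's validation code.
      public class XAttrNames {
          // The HDFS extended-attribute namespaces.
          private static final Set<String> NAMESPACES =
              Set.of("user", "trusted", "security", "system", "raw");

          // A name such as "user.attr" is a namespace, a dot, then a non-empty key.
          public static boolean isValid(String name) {
              int dot = name.indexOf('.');
              if (dot <= 0 || dot == name.length() - 1) {
                  return false;
              }
              return NAMESPACES.contains(name.substring(0, dot));
          }
      }
      ```
      
      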
    • setXAttr

      public void setXAttr(Path path, String name, byte[] value, EnumSet<XAttrSetFlag> flag) throws IOException
      Set an xattr of a file or directory. The name must be prefixed with the namespace followed by ".". For example, "user.attr".

      Refer to the HDFS extended attributes user documentation for details.

      Parameters:
      path - Path to modify
      name - xattr name.
      value - xattr value.
      flag - xattr set flag
      Throws:
      IOException - raised on errors performing I/O.
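      The flag parameter controls whether the call may create a new xattr, replace an existing one, or both. A toy in-memory store illustrates that contract; XAttrStore and its Flag enum are illustrative stand-ins for Path-based setXAttr with EnumSet<XAttrSetFlag>, and the toy throws an unchecked exception where Hadoop raises IOException, to keep the sketch self-contained.

      ```java
      import java.util.*;

      // Toy model of the CREATE/REPLACE flag contract; not Hadoop code.
      public class XAttrStore {
          public enum Flag { CREATE, REPLACE }

          private final Map<String, byte[]> xattrs = new HashMap<>();

          public void set(String name, byte[] value, EnumSet<Flag> flags) {
              boolean exists = xattrs.containsKey(name);
              // Replacing an existing xattr requires REPLACE...
              if (exists && !flags.contains(Flag.REPLACE)) {
                  throw new IllegalStateException("xattr already exists: " + name);
              }
              // ...and creating a new one requires CREATE.
              if (!exists && !flags.contains(Flag.CREATE)) {
                  throw new IllegalStateException("no such xattr: " + name);
              }
              xattrs.put(name, value);
          }

          public byte[] get(String name) {
              return xattrs.get(name);
          }
      }
      ```
      
      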
    • getXAttr

      public byte[] getXAttr(Path path, String name) throws IOException
      Get an xattr for a file or directory. The name must be prefixed with the namespace followed by ".". For example, "user.attr".

      Refer to the HDFS extended attributes user documentation for details.

      Parameters:
      path - Path to get extended attribute
      name - xattr name.
      Returns:
      byte[] xattr value.
      Throws:
      IOException - raised on errors performing I/O.
    • getXAttrs

      public Map<String,byte[]> getXAttrs(Path path) throws IOException
      Get all of the xattrs for a file or directory. Only those xattrs for which the logged-in user has permissions to view are returned.

      Refer to the HDFS extended attributes user documentation for details.

      Parameters:
      path - Path to get extended attributes
      Returns:
      Map<String, byte[]> describing the XAttrs of the file or directory
      Throws:
      IOException - raised on errors performing I/O.
    • getXAttrs

      public Map<String,byte[]> getXAttrs(Path path, List<String> names) throws IOException
      Get all of the xattrs for a file or directory. Only those xattrs for which the logged-in user has permissions to view are returned.

      Refer to the HDFS extended attributes user documentation for details.

      Parameters:
      path - Path to get extended attributes
      names - XAttr names.
      Returns:
      Map<String, byte[]> describing the XAttrs of the file or directory
      Throws:
      IOException - raised on errors performing I/O.
    • listXAttrs

      public List<String> listXAttrs(Path path) throws IOException
      Get all of the xattr names for a file or directory. Only the xattr names for which the logged-in user has permissions to view are returned.

      Refer to the HDFS extended attributes user documentation for details.

      Parameters:
      path - Path to get extended attributes
      Returns:
      List<String> of the XAttr names of the file or directory
      Throws:
      IOException - raised on errors performing I/O.
    • removeXAttr

      public void removeXAttr(Path path, String name) throws IOException
      Remove an xattr of a file or directory. The name must be prefixed with the namespace followed by ".". For example, "user.attr".

      Refer to the HDFS extended attributes user documentation for details.

      Parameters:
      path - Path to remove extended attribute
      name - xattr name
      Throws:
      IOException - raised on errors performing I/O.
    • createSnapshot

      public Path createSnapshot(Path path, String snapshotName) throws IOException
      The specification of this method matches that of FileContext.createSnapshot(Path, String).
      Parameters:
      path - the path.
      snapshotName - snapshot name.
      Returns:
      path.
      Throws:
      IOException - raised on errors performing I/O.
    • renameSnapshot

      public void renameSnapshot(Path path, String snapshotOldName, String snapshotNewName) throws IOException
      The specification of this method matches that of FileContext.renameSnapshot(Path, String, String).
      Parameters:
      path - the path.
      snapshotOldName - snapshot old name.
      snapshotNewName - snapshot new name.
      Throws:
      IOException - raised on errors performing I/O.
    • deleteSnapshot

      public void deleteSnapshot(Path snapshotDir, String snapshotName) throws IOException
      The specification of this method matches that of FileContext.deleteSnapshot(Path, String).
      Parameters:
      snapshotDir - snapshot dir.
      snapshotName - snapshot name.
      Throws:
      IOException - raised on errors performing I/O.
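      In HDFS, a snapshot of a snapshottable directory is exposed under a ".snapshot" subdirectory, which is the shape of the path createSnapshot returns. The helper below only builds that conventional string for illustration; SnapshotPaths is not part of the AbstractFileSystem API.

      ```java
      // Illustrative helper for the HDFS snapshot path convention:
      // <snapshotDir>/.snapshot/<snapshotName>. Not Hadoop code.
      public class SnapshotPaths {
          public static String snapshotPath(String snapshotDir, String name) {
              String dir = snapshotDir.endsWith("/")
                  ? snapshotDir.substring(0, snapshotDir.length() - 1)
                  : snapshotDir;
              return dir + "/.snapshot/" + name;
          }
      }
      ```
      
      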
    • satisfyStoragePolicy

      public void satisfyStoragePolicy(Path path) throws IOException
      Schedule the given path so that its blocks are moved to satisfy the storage policy configured for it.
      Parameters:
      path - The source path referring to either a directory or a file.
      Throws:
      IOException - raised on errors performing I/O.
    • setStoragePolicy

      public void setStoragePolicy(Path path, String policyName) throws IOException
      Set the storage policy for a given file or directory.
      Parameters:
      path - file or directory path.
      policyName - the name of the target storage policy. The list of supported Storage policies can be retrieved via getAllStoragePolicies().
      Throws:
      IOException - raised on errors performing I/O.
    • unsetStoragePolicy

      public void unsetStoragePolicy(Path src) throws IOException
      Unset the storage policy set for a given file or directory.
      Parameters:
      src - file or directory path.
      Throws:
      IOException - raised on errors performing I/O.
    • getStoragePolicy

      public BlockStoragePolicySpi getStoragePolicy(Path src) throws IOException
      Retrieve the storage policy for a given file or directory.
      Parameters:
      src - file or directory path.
      Returns:
      storage policy for the given file or directory.
      Throws:
      IOException - raised on errors performing I/O.
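      In HDFS, a path with no storage policy of its own effectively falls back to the policy of its nearest ancestor, which is what unsetStoragePolicy restores. A toy table sketches that resolution; PolicyTable, its string paths, and the "HOT" default are illustrative assumptions, not Hadoop classes.

      ```java
      import java.util.*;

      // Toy sketch of storage-policy inheritance; not Hadoop code.
      public class PolicyTable {
          private final Map<String, String> policies = new HashMap<>();

          public void set(String path, String policyName) {   // setStoragePolicy
              policies.put(path, policyName);
          }

          public void unset(String path) {                    // unsetStoragePolicy
              policies.remove(path);
          }

          // getStoragePolicy: walk up toward the root until a policy is found.
          public String resolve(String path) {
              String p = path;
              while (true) {
                  String policy = policies.get(p);
                  if (policy != null) {
                      return policy;
                  }
                  int slash = p.lastIndexOf('/');
                  if (slash <= 0) {
                      // Assumed cluster-wide default, for illustration only.
                      return policies.getOrDefault("/", "HOT");
                  }
                  p = p.substring(0, slash);
              }
          }
      }
      ```
      
      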
    • getAllStoragePolicies

      public Collection<? extends BlockStoragePolicySpi> getAllStoragePolicies() throws IOException
      Retrieve all the storage policies supported by this file system.
      Returns:
      all storage policies supported by this filesystem.
      Throws:
      IOException - raised on errors performing I/O.
    • hashCode

      public int hashCode()
      Overrides:
      hashCode in class Object
    • equals

      public boolean equals(Object other)
      Overrides:
      equals in class Object
    • openFileWithOptions

      public CompletableFuture<FSDataInputStream> openFileWithOptions(Path path, org.apache.hadoop.fs.impl.OpenFileParameters parameters) throws IOException
      Open a file with the given set of options. The base implementation performs a blocking call to open(Path, int) within this method; the actual outcome is delivered through the returned CompletableFuture. This avoids having to create a thread pool, while still setting up the expectation that a get() call is needed to evaluate the result.
      Parameters:
      path - path to the file
      parameters - open file parameters from the builder.
      Returns:
      a future which will evaluate to the opened file.
      Throws:
      IOException - failure to resolve the link.
      IllegalArgumentException - unknown mandatory key
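      The pattern described above — do the blocking work inside the call, then hand back an already-settled future — can be sketched without any Hadoop types. SketchFs and its string "stream" are hypothetical stand-ins for the real FSDataInputStream-returning implementation.

      ```java
      import java.io.IOException;
      import java.util.concurrent.CompletableFuture;

      // Sketch of the "blocking open, completed future" pattern; not Hadoop code.
      public class SketchFs {
          // Stand-in for the blocking open(Path, int) call.
          private String blockingOpen(String path) throws IOException {
              if (path.isEmpty()) {
                  throw new IOException("empty path");
              }
              return "stream:" + path;
          }

          public CompletableFuture<String> openFileWithOptions(String path) {
              CompletableFuture<String> result = new CompletableFuture<>();
              try {
                  result.complete(blockingOpen(path));   // blocking call happens here
              } catch (IOException e) {
                  result.completeExceptionally(e);       // failure surfaces at get()
              }
              return result;                             // already settled; no thread pool
          }
      }
      ```
      
      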
    • hasPathCapability

      public boolean hasPathCapability(Path path, String capability) throws IOException
      Description copied from interface: org.apache.hadoop.fs.PathCapabilities
      Probe for a specific capability under the given path. If the function returns true, this instance is explicitly declaring that the capability is available. If the function returns false, it can mean one of:
      • The capability is not known.
      • The capability is known but it is not supported.
      • The capability is known but the filesystem does not know if it is supported under the supplied path.
      The core guarantee which a caller can rely on is: if the predicate returns true, then the specific operation/behavior can be expected to be supported. However a specific call may be rejected for permission reasons, the actual file/directory not being present, or some other failure during the attempted execution of the operation.

      Implementors: PathCapabilitiesSupport can be used to help implement this method.

      Specified by:
      hasPathCapability in interface org.apache.hadoop.fs.PathCapabilities
      Parameters:
      path - path to query the capability of.
      capability - non-null, non-empty string to query the path for support.
      Returns:
      true if the capability is supported under that part of the FS.
      Throws:
      IOException - this should not be raised, except on problems resolving paths or relaying the call.
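      The one-sided guarantee above — true is a positive declaration, false means unknown or unsupported — is the whole of the probe contract. A toy probe sketches it; CapabilityProbe is illustrative, not Hadoop code, and the capability string shown is only an example of the dotted-name style Hadoop uses (e.g. "fs.capability.paths.acls").

      ```java
      import java.util.*;

      // Toy capability probe; true only when the capability is positively
      // known to hold, false for anything unknown. Not Hadoop code.
      public class CapabilityProbe {
          private final Set<String> supported;

          public CapabilityProbe(Set<String> supported) {
              this.supported = supported;
          }

          public boolean hasPathCapability(String path, String capability) {
              if (capability == null || capability.isEmpty()) {
                  throw new IllegalArgumentException("capability must be non-empty");
              }
              // Unknown capabilities fall through to false rather than an error:
              // the caller may only rely on a true answer.
              return supported.contains(capability.toLowerCase(Locale.ROOT));
          }
      }
      ```
      
      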
    • createMultipartUploader

      @Unstable public org.apache.hadoop.fs.MultipartUploaderBuilder createMultipartUploader(Path basePath) throws IOException
      Create a multipart uploader.
      Parameters:
      basePath - file path under which all files are uploaded
      Returns:
      a MultipartUploaderBuilder object to build the uploader
      Throws:
      IOException - if some early checks cause IO failures.
      UnsupportedOperationException - if support is checked early.
    • getEnclosingRoot

      @Public @Unstable public Path getEnclosingRoot(Path path) throws IOException
      Return the path of the enclosing root for a given path. The enclosing root path is a common ancestor that should be used for temp and staging dirs as well as within encryption zones and other restricted directories. Call makeQualified on the param path to ensure it is part of the correct filesystem.
      Parameters:
      path - file path to find the enclosing root path for
      Returns:
      a path to the enclosing root
      Throws:
      IOException - early checks like failure to resolve path cause IO failures
    • methodNotSupported

      protected final void methodNotSupported()
      Helper method that throws an UnsupportedOperationException for the current FileSystem method being called.