Class FileContext
- All Implemented Interfaces:
org.apache.hadoop.fs.PathCapabilities
Path Names
The Hadoop file system supports a URI namespace and URI names. This enables multiple types of file systems to be referenced using fully-qualified URIs. Two common Hadoop file system implementations are:
- the local file system: file:///path
- the HDFS file system: hdfs://nnAddress:nnPort/path
Hadoop also supports working-directory-relative names, which are paths relative to the current working directory (similar to Unix). The working directory can be in a different file system than the default FS.
Thus, Hadoop path names can be specified as one of the following:
- a fully-qualified URI: scheme://authority/path (e.g. hdfs://nnAddress:nnPort/foo/bar)
- a slash-relative name: path relative to the default file system (e.g. /foo/bar)
- a working-directory-relative name: path relative to the working dir (e.g. foo/bar)
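As an illustration of how the three name forms resolve, here is a plain java.net.URI sketch. This is only an analogy: Hadoop's Path/FileContext resolution is richer than URI.resolve, and the nnAddress:nnPort authority and /user/alice working directory below are hypothetical placeholders.

```java
import java.net.URI;

public class PathNameForms {
    public static void main(String[] args) {
        // Assume the default FS is hdfs://nnAddress:nnPort/ and the
        // working directory is /user/alice (both hypothetical values).
        URI defaultFs = URI.create("hdfs://nnAddress:nnPort/");
        URI workingDir = defaultFs.resolve("/user/alice/");

        // 1. A fully-qualified URI is used as-is.
        URI full = URI.create("hdfs://nnAddress:nnPort/foo/bar");

        // 2. A slash-relative name resolves against the default file system.
        URI slashRelative = defaultFs.resolve("/foo/bar");

        // 3. A working-directory-relative name resolves against the working dir.
        URI wdRelative = workingDir.resolve("foo/bar");

        System.out.println(full);          // hdfs://nnAddress:nnPort/foo/bar
        System.out.println(slashRelative); // hdfs://nnAddress:nnPort/foo/bar
        System.out.println(wdRelative);    // hdfs://nnAddress:nnPort/user/alice/foo/bar
    }
}
```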
Role of FileContext and Configuration Defaults
The FileContext is the analogue of per-process file-related state in Unix. It contains two properties:
- the default file system (for resolving slash-relative names)
- the umask (for file permissions)
These two properties are obtained from the default configuration (see Configuration).
Further file system properties are specified on the server-side. File system
operations default to using these server-side defaults unless otherwise
specified.
The file system related server-side defaults are:
- the home directory (default is "/user/userName")
- the initial working directory (only for the local file system)
- replication factor
- block size
- buffer size
- encryptDataTransfer
- checksum options (checksumType and bytesPerChecksum)
Example Usage
Example 1: Use the default config read from $HADOOP_CONFIG/core.xml. Unspecified values come from core-defaults.xml in the release jar.
- myFContext = FileContext.getFileContext(); // uses the default config, which has your default FS
- myFContext.create(path, ...);
- myFContext.setWorkingDir(path);
- myFContext.open(path, ...);
- ...
Example 2: Get a FileContext with a specific URI as the default FS.
- myFContext = FileContext.getFileContext(URI);
- myFContext.create(path, ...);
- ...
Example 3: Use the local file system as the default FS.
- myFContext = FileContext.getLocalFSFileContext();
- myFContext.create(path, ...);
- ...
Example 4: Use a specific config; the config is passed down unchanged.
- configX = someConfigSomeOnePassedToYou;
- myFContext = getFileContext(configX); // configX is not changed; it is passed down
- myFContext.create(path, ...);
- ...
-
Nested Class Summary
- class org.apache.hadoop.fs.FileContext.Util - Utility/library methods built over the basic FileContext methods. -
Field Summary
- static final FsPermission DEFAULT_PERM - Default permission for directory and symlink. In previous versions, this default permission was also used to create files, so created files end up with ugo+x permission.
- static final FsPermission DIR_DEFAULT_PERM - Default permission for directory
- static final FsPermission FILE_DEFAULT_PERM - Default permission for file
- static final org.slf4j.Logger LOG
- static final int SHUTDOWN_HOOK_PRIORITY - Priority of the FileContext shutdown hook. -
Method Summary
- void access(Path path, FsAction mode) - Checks if the user can access a path.
- static void clearStatistics() - Clears all the statistics stored in AbstractFileSystem, for all the file systems.
- FSDataOutputStreamBuilder create(Path f) - Create an FSDataOutputStreamBuilder for creating or overwriting a file on the indicated path.
- FSDataOutputStream create(Path f, EnumSet<CreateFlag> createFlag, org.apache.hadoop.fs.Options.CreateOpts... opts) - Create or overwrite a file on the indicated path and return an output stream for writing into the file.
- org.apache.hadoop.fs.MultipartUploaderBuilder createMultipartUploader(Path basePath) - Create a multipart uploader.
- final Path createSnapshot(Path path) - Create a snapshot with a default name.
- Path createSnapshot(Path path, String snapshotName) - Create a snapshot.
- void createSymlink(Path target, Path link, boolean createParent) - Creates a symbolic link to an existing file.
- boolean delete(Path f, boolean recursive) - Delete a file.
- boolean deleteOnExit(Path f) - Mark a path to be deleted on JVM shutdown.
- void deleteSnapshot(Path path, String snapshotName) - Delete a snapshot of a directory.
- AclStatus getAclStatus(Path path) - Gets the ACLs of files and directories.
- Collection<? extends BlockStoragePolicySpi> getAllStoragePolicies() - Retrieve all the storage policies supported by this file system.
- AbstractFileSystem getDefaultFileSystem()
- List<Token<?>> getDelegationTokens(Path p, String renewer) - Get delegation tokens for the file systems accessed for a given path.
- BlockLocation[] getFileBlockLocations(Path f, long start, long len) - Return the blockLocation of the given file for the given offset and len.
- FileChecksum getFileChecksum(Path f) - Get the checksum of a file.
- static FileContext getFileContext() - Create a FileContext using the default config read from $HADOOP_CONFIG/core.xml; unspecified key-values for the config are defaulted from core-defaults.xml in the release jar.
- static FileContext getFileContext(URI defaultFsUri) - Create a FileContext for the specified URI using the default config.
- static FileContext getFileContext(URI defaultFsUri, Configuration aConf) - Create a FileContext for the specified default URI using the specified config.
- static FileContext getFileContext(Configuration aConf) - Create a FileContext using the passed config.
- protected static FileContext getFileContext(AbstractFileSystem defaultFS) - Create a FileContext for the specified file system using the default config.
- static FileContext getFileContext(AbstractFileSystem defFS, Configuration aConf) - Create a FileContext with the specified FS as default, using the specified config.
- FileStatus getFileLinkStatus(Path f) - Return a file status object that represents the path; if the path refers to a symlink, the status of the symlink itself is returned.
- FileStatus getFileStatus(Path f) - Return a file status object that represents the path.
- protected AbstractFileSystem getFSofPath(Path absOrFqPath) - Get the file system of the supplied path.
- FsStatus getFsStatus(Path f) - Returns a status object describing the use and capacity of the file system denoted by the Path argument f.
- Path getHomeDirectory() - Return the current user's home directory in this file system.
- Path getLinkTarget(Path f) - Returns the target of the given symbolic link as it was specified when the link was created.
- static FileContext getLocalFSFileContext()
- static FileContext getLocalFSFileContext(Configuration aConf)
- FsServerDefaults getServerDefaults(Path path) - Return a set of server default configuration values based on path.
- static org.apache.hadoop.fs.FileSystem.Statistics getStatistics(URI uri) - Get the statistics for a particular file system.
- BlockStoragePolicySpi getStoragePolicy(Path path) - Query the effective storage policy ID for the given file or directory.
- UserGroupInformation getUgi() - Gets the ugi in the file-context.
- FsPermission getUMask() - Gets the umask of this FileContext.
- Path getWorkingDirectory() - Gets the working directory for wd-relative names (such as "foo/bar").
- byte[] getXAttr(Path path, String name) - Get an xattr for a file or directory.
- Map<String,byte[]> getXAttrs(Path path) - Get all of the xattrs for a file or directory.
- Map<String,byte[]> getXAttrs(Path path, List<String> names) - Get all of the xattrs for a file or directory.
- boolean hasPathCapability(Path path, String capability) - Return the path capabilities of the bonded AbstractFileSystem.
- org.apache.hadoop.fs.RemoteIterator<Path> listCorruptFileBlocks(Path path) - List corrupt file blocks.
- org.apache.hadoop.fs.RemoteIterator<LocatedFileStatus> listLocatedStatus(Path f) - List the statuses of the files/directories in the given path if the path is a directory.
- org.apache.hadoop.fs.RemoteIterator<FileStatus> listStatus(Path f) - List the statuses of the files/directories in the given path if the path is a directory.
- List<String> listXAttrs(Path path) - Get all of the xattr names for a file or directory.
- Path makeQualified(Path path) - Make the path fully qualified if it isn't.
- void mkdir(Path dir, FsPermission permission, boolean createParent) - Make (create) a directory and all the non-existent parents.
- void modifyAclEntries(Path path, List<AclEntry> aclSpec) - Modifies ACL entries of files and directories.
- void msync() - Synchronize client metadata state.
- FSDataInputStream open(Path f) - Opens an FSDataInputStream at the indicated Path using the default buffer size.
- FSDataInputStream open(Path f, int bufferSize) - Opens an FSDataInputStream at the indicated Path.
- FutureDataInputStreamBuilder openFile(Path path) - Open a file for reading through a builder API.
- static void printStatistics() - Prints the statistics to standard output.
- void removeAcl(Path path) - Removes all but the base ACL entries of files and directories.
- void removeAclEntries(Path path, List<AclEntry> aclSpec) - Removes ACL entries from files and directories.
- void removeDefaultAcl(Path path) - Removes all default ACL entries from files and directories.
- void removeXAttr(Path path, String name) - Remove an xattr of a file or directory.
- void rename(Path src, Path dst, Options.Rename... options) - Renames Path src to Path dst; fails if src is a file and dst is a directory.
- void renameSnapshot(Path path, String snapshotOldName, String snapshotNewName) - Rename a snapshot.
- protected Path resolve(Path f) - Resolves all symbolic links in the specified path.
- protected Path resolveIntermediate(Path f) - Resolves all symbolic links in the specified path leading up to, but not including, the final path component.
- Path resolvePath(Path f) - Resolve the path, following any symlinks or mount points.
- void satisfyStoragePolicy(Path path) - Set the source path to satisfy storage policy.
- void setAcl(Path path, List<AclEntry> aclSpec) - Fully replaces the ACL of files and directories, discarding all existing entries.
- void setOwner(Path f, String username, String groupname) - Set owner of a path (i.e. a file or a directory).
- void setPermission(Path f, FsPermission permission) - Set permission of a path.
- boolean setReplication(Path f, short replication) - Set replication for an existing file.
- void setStoragePolicy(Path path, String policyName) - Set the storage policy for a given file or directory.
- void setTimes(Path f, long mtime, long atime) - Set access time of a file.
- void setUMask(FsPermission newUmask) - Set umask to the supplied parameter.
- void setVerifyChecksum(boolean verifyChecksum, Path f) - Set the verify checksum flag for the file system denoted by the path.
- void setWorkingDirectory(Path newWDir) - Set the working directory for wd-relative names (such as "foo/bar").
- void setXAttr(Path path, String name, byte[] value) - Set an xattr of a file or directory.
- void setXAttr(Path path, String name, byte[] value, EnumSet<XAttrSetFlag> flag) - Set an xattr of a file or directory.
- boolean truncate(Path f, long newLength) - Truncate the file in the indicated path to the indicated size.
- void unsetStoragePolicy(Path src) - Unset the storage policy set for a given file or directory.
- org.apache.hadoop.fs.FileContext.Util util()
-
Field Details
-
LOG
public static final org.slf4j.Logger LOG -
DEFAULT_PERM
Default permission for directory and symlink. In previous versions, this default permission was also used to create files, so created files ended up with ugo+x permission. See HADOOP-9155 for details. Two new constants were added to solve this: please use DIR_DEFAULT_PERM for directories and FILE_DEFAULT_PERM for files. This constant is kept for compatibility. -
DIR_DEFAULT_PERM
Default permission for directory -
FILE_DEFAULT_PERM
Default permission for file -
SHUTDOWN_HOOK_PRIORITY
public static final int SHUTDOWN_HOOK_PRIORITY
Priority of the FileContext shutdown hook.
-
-
Method Details
-
getFSofPath
protected AbstractFileSystem getFSofPath(Path absOrFqPath) throws UnsupportedFileSystemException, IOException
Get the file system of the supplied path.
- Parameters:
- absOrFqPath - absolute or fully qualified path
- Returns:
- the file system of the path
- Throws:
- UnsupportedFileSystemException - If the file system for absOrFqPath is not supported.
- IOException - If the file system for absOrFqPath could not be instantiated.
-
getFileContext
Create a FileContext with the specified FS as default, using the specified config.
- Parameters:
- defFS - default fs.
- aConf - configuration.
- Returns:
- new FileContext with the specified FS as default.
-
getFileContext
Create a FileContext for the specified file system using the default config.
- Parameters:
- defaultFS - default fs.
- Returns:
- a FileContext with the specified AbstractFileSystem as the default FS.
-
getFileContext
Create a FileContext using the default config read from $HADOOP_CONFIG/core.xml. Unspecified key-values for the config are defaulted from core-defaults.xml in the release jar.
- Returns:
- file context.
- Throws:
- UnsupportedFileSystemException - If the file system from the default configuration is not supported
-
getLocalFSFileContext
- Returns:
- a FileContext for the local file system using the default config.
- Throws:
UnsupportedFileSystemException - If the file system for FsConstants.LOCAL_FS_URI is not supported.
-
getFileContext
Create a FileContext for the specified URI using the default config.
- Parameters:
- defaultFsUri - the default FS URI.
- Returns:
- a FileContext with the specified URI as the default FS.
- Throws:
- UnsupportedFileSystemException - If the file system for defaultFsUri is not supported
-
getFileContext
public static FileContext getFileContext(URI defaultFsUri, Configuration aConf) throws UnsupportedFileSystemException
Create a FileContext for the specified default URI using the specified config.
- Parameters:
- defaultFsUri - the default FS URI.
- aConf - configuration.
- Returns:
- new FileContext for the specified URI
- Throws:
- UnsupportedFileSystemException - If the specified file system is not supported
- RuntimeException - If the specified file system is supported but could not be instantiated, or if login fails.
-
getFileContext
Create a FileContext using the passed config. Generally it is better to use getFileContext(URI, Configuration) instead of this one.
- Parameters:
- aConf - configuration.
- Returns:
- new FileContext
- Throws:
- UnsupportedFileSystemException - If the file system in the config is not supported
-
getLocalFSFileContext
public static FileContext getLocalFSFileContext(Configuration aConf) throws UnsupportedFileSystemException
- Parameters:
- aConf - the configuration from which the FileContext is configured
- Returns:
- a FileContext for the local file system using the specified config.
- Throws:
- UnsupportedFileSystemException - If the default file system in the config is not supported
-
getDefaultFileSystem
-
setWorkingDirectory
Set the working directory for wd-relative names (such as "foo/bar"). The working directory feature is provided by simply prefixing relative names with the working dir. Note this is different from Unix, where the wd is actually set to the inode; hence setWorkingDirectory does not follow symlinks, etc. This works better in a distributed environment that has multiple independent roots. getWorkingDirectory() should return what setWorkingDirectory() set.
- Parameters:
- newWDir - new working directory
- Throws:
- IOException
newWDir can be one of:
- a relative path: "foo/bar";
- absolute without a scheme: "/foo/bar"
- fully qualified with a scheme: "xx://auth/foo/bar"
Illegal working directories:
- relative with a scheme: "xx:foo/bar"
- a non-existent directory
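The "prefixing" behavior described above can be sketched in plain Java string terms. This is a simplification, not Hadoop code: the real FileContext also validates paths and rejects the illegal forms listed, and the /user/alice working directory is a hypothetical value.

```java
public class WdResolution {
    /**
     * Resolve a name against a working directory by simple prefixing,
     * mirroring the rule described above. wd is assumed to be absolute.
     */
    static String resolve(String wd, String name) {
        if (name.contains("://")) {
            return name;            // fully qualified: used as-is
        }
        if (name.startsWith("/")) {
            return name;            // slash-relative: resolved on the default FS
        }
        return wd + "/" + name;     // wd-relative: prefix the working dir
    }

    public static void main(String[] args) {
        String wd = "/user/alice";  // hypothetical working directory
        System.out.println(resolve(wd, "foo/bar"));           // /user/alice/foo/bar
        System.out.println(resolve(wd, "/foo/bar"));          // /foo/bar
        System.out.println(resolve(wd, "xx://auth/foo/bar")); // xx://auth/foo/bar
    }
}
```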
-
getWorkingDirectory
Gets the working directory for wd-relative names (such as "foo/bar").
- Returns:
- the path.
-
getUgi
Gets the ugi in the file-context.
- Returns:
- UserGroupInformation
-
getHomeDirectory
Return the current user's home directory in this file system. The default implementation returns "/user/$USER/".- Returns:
- the home directory
-
getUMask
- Returns:
- the umask of this FileContext
-
setUMask
Set umask to the supplied parameter.
- Parameters:
- newUmask - the new umask
-
resolvePath
public Path resolvePath(Path f) throws FileNotFoundException, org.apache.hadoop.fs.UnresolvedLinkException, AccessControlException, IOException
Resolve the path, following any symlinks or mount points.
- Parameters:
- f - the path to be resolved
- Returns:
- fully qualified resolved path
- Throws:
- FileNotFoundException - If f does not exist
- AccessControlException - If access is denied
- IOException - If an I/O error occurred
- org.apache.hadoop.fs.UnresolvedLinkException - If an unresolved link occurred.
Exceptions applicable to file systems accessed over RPC:
- org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
- org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
- org.apache.hadoop.ipc.UnexpectedServerException - If the server implementation throws an undeclared exception to the RPC server
RuntimeExceptions:
- InvalidPathException - If path f is not valid
-
makeQualified
Make the path fully qualified if it isn't. A fully-qualified path has scheme and authority specified and an absolute path. Uses the default file system and working dir in this FileContext to qualify.
- Parameters:
- path - the path.
- Returns:
- qualified path
-
create
public FSDataOutputStream create(Path f, EnumSet<CreateFlag> createFlag, org.apache.hadoop.fs.Options.CreateOpts... opts) throws AccessControlException, FileAlreadyExistsException, FileNotFoundException, ParentNotDirectoryException, UnsupportedFileSystemException, IOException
Create or overwrite a file on the indicated path and return an output stream for writing into the file.
- Parameters:
- f - the file name to open
- createFlag - gives the semantics of create; see CreateFlag
- opts - file creation options; see Options.CreateOpts:
- Progress - to report progress on the operation; default null
- Permission - umask is applied against permission; default is FsPermissions:getDefault()
- CreateParent - create missing parent path; default is not to create parents
- The defaults for the following are server-side defaults of the file server implementing the target path. Not all parameters make sense for all kinds of file system; e.g. localFS ignores Blocksize, replication, and checksum:
- BufferSize - buffer size used in FSDataOutputStream
- Blocksize - block size for file blocks
- ReplicationFactor - replication for blocks
- ChecksumParam - checksum parameters; the server default is used if not specified.
- Returns:
- FSDataOutputStream for the created file
- Throws:
- AccessControlException - If access is denied
- FileAlreadyExistsException - If file f already exists
- FileNotFoundException - If the parent of f does not exist and createParent is false
- ParentNotDirectoryException - If the parent of f is not a directory.
- UnsupportedFileSystemException - If the file system for f is not supported
- IOException - If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
- org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
- org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
- org.apache.hadoop.ipc.UnexpectedServerException - If the server implementation throws an undeclared exception to the RPC server
RuntimeExceptions:
- InvalidPathException - If path f is not valid
-
create
Create an FSDataOutputStreamBuilder for creating or overwriting a file on the indicated path.
- Parameters:
- f - the file path to create the builder for.
- Returns:
- FSDataOutputStreamBuilder to build an FSDataOutputStream. Upon FSDataOutputStreamBuilder.build() being invoked, builder parameters will be verified by FileContext and AbstractFileSystem.create(org.apache.hadoop.fs.Path, java.util.EnumSet<org.apache.hadoop.fs.CreateFlag>, org.apache.hadoop.fs.Options.CreateOpts...), and file system state will be modified. Clients should expect FSDataOutputStreamBuilder.build() to throw the same exceptions as create(Path, EnumSet, CreateOpts...).
- Throws:
- IOException - If an I/O error occurred.
-
mkdir
public void mkdir(Path dir, FsPermission permission, boolean createParent) throws AccessControlException, FileAlreadyExistsException, FileNotFoundException, ParentNotDirectoryException, UnsupportedFileSystemException, IOException
Make (create) a directory and all the non-existent parents.
- Parameters:
- dir - the dir to make
- permission - the applied permission is permission & ~umask
- createParent - if true then missing parent dirs are created; if false then the parent must exist
- Throws:
- AccessControlException - If access is denied
- FileAlreadyExistsException - If directory dir already exists
- FileNotFoundException - If the parent of dir does not exist and createParent is false
- ParentNotDirectoryException - If the parent of dir is not a directory
- UnsupportedFileSystemException - If the file system for dir is not supported
- IOException - If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
- org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
- org.apache.hadoop.ipc.UnexpectedServerException - If the server implementation throws an undeclared exception to the RPC server
RuntimeExceptions:
- InvalidPathException - If path dir is not valid
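The permission & ~umask rule applied by mkdir (and by create, via its Permission option) is plain bit arithmetic. A minimal sketch with octal int literals, not using Hadoop's FsPermission class:

```java
public class UmaskDemo {
    /** Apply a umask to a requested permission, as mkdir does: permission & ~umask. */
    static int apply(int permission, int umask) {
        return permission & ~umask;
    }

    public static void main(String[] args) {
        // With the common umask 022, a requested 0777 becomes 0755.
        System.out.println(Integer.toOctalString(apply(0777, 022)));  // 755
        // And a requested 0666 becomes 0644.
        System.out.println(Integer.toOctalString(apply(0666, 022)));  // 644
    }
}
```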
-
delete
public boolean delete(Path f, boolean recursive) throws AccessControlException, FileNotFoundException, UnsupportedFileSystemException, IOException
Delete a file.
- Parameters:
- f - the path to delete.
- recursive - if the path is a directory and recursive is true, the directory is deleted; otherwise an exception is thrown. For a file, recursive may be either true or false.
- Returns:
- true if the delete succeeded, false otherwise.
- Throws:
- AccessControlException - If access is denied
- FileNotFoundException - If f does not exist
- UnsupportedFileSystemException - If the file system for f is not supported
- IOException - If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
- org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
- org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
- org.apache.hadoop.ipc.UnexpectedServerException - If the server implementation throws an undeclared exception to the RPC server
RuntimeExceptions:
- InvalidPathException - If path f is invalid
-
open
public FSDataInputStream open(Path f) throws AccessControlException, FileNotFoundException, UnsupportedFileSystemException, IOException
Opens an FSDataInputStream at the indicated Path using the default buffer size.
- Parameters:
- f - the file name to open
- Returns:
- input stream.
- Throws:
- AccessControlException - If access is denied
- FileNotFoundException - If file f does not exist
- UnsupportedFileSystemException - If the file system for f is not supported
- IOException - If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
- org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
- org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
- org.apache.hadoop.ipc.UnexpectedServerException - If the server implementation throws an undeclared exception to the RPC server
-
open
public FSDataInputStream open(Path f, int bufferSize) throws AccessControlException, FileNotFoundException, UnsupportedFileSystemException, IOException
Opens an FSDataInputStream at the indicated Path.
- Parameters:
- f - the file name to open
- bufferSize - the size of the buffer to be used.
- Returns:
- input stream.
- Throws:
- AccessControlException - If access is denied
- FileNotFoundException - If file f does not exist
- UnsupportedFileSystemException - If the file system for f is not supported
- IOException - If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
- org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
- org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
- org.apache.hadoop.ipc.UnexpectedServerException - If the server implementation throws an undeclared exception to the RPC server
-
truncate
public boolean truncate(Path f, long newLength) throws AccessControlException, FileNotFoundException, UnsupportedFileSystemException, IOException
Truncate the file in the indicated path to the indicated size.
- Fails if path is a directory.
- Fails if path does not exist.
- Fails if path is not closed.
- Fails if new size is greater than current size.
- Parameters:
- f - The path to the file to be truncated
- newLength - The size the file is to be truncated to
- Returns:
- true if the file has been truncated to the desired newLength and is immediately available to be reused for write operations such as append, or false if a background process of adjusting the length of the last block has been started, and clients should wait for it to complete before proceeding with further file updates.
- Throws:
- AccessControlException - If access is denied
- FileNotFoundException - If file f does not exist
- UnsupportedFileSystemException - If the file system for f is not supported
- IOException - If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
- org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
- org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
- org.apache.hadoop.ipc.UnexpectedServerException - If the server implementation throws an undeclared exception to the RPC server
-
setReplication
public boolean setReplication(Path f, short replication) throws AccessControlException, FileNotFoundException, IOException
Set replication for an existing file.
- Parameters:
- f - file name
- replication - new replication
- Returns:
- true if successful
- Throws:
- AccessControlException - If access is denied
- FileNotFoundException - If file f does not exist
- IOException - If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
- org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
- org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
- org.apache.hadoop.ipc.UnexpectedServerException - If the server implementation throws an undeclared exception to the RPC server
-
rename
public void rename(Path src, Path dst, Options.Rename... options) throws AccessControlException, FileAlreadyExistsException, FileNotFoundException, ParentNotDirectoryException, UnsupportedFileSystemException, IOException
Renames Path src to Path dst.
- Fails if src is a file and dst is a directory.
- Fails if src is a directory and dst is a file.
- Fails if the parent of dst does not exist or is a file.
If the OVERWRITE option is not passed as an argument, rename fails if dst already exists.
If the OVERWRITE option is passed as an argument, rename overwrites dst if it is a file or an empty directory. Rename fails if dst is a non-empty directory.
Note that the atomicity of rename is dependent on the file system implementation. Please refer to the file system documentation for details.
- Parameters:
- src - path to be renamed
- dst - new path after rename
- options - rename options.
- Throws:
- AccessControlException - If access is denied
- FileAlreadyExistsException - If dst already exists and options does not include Options.Rename.OVERWRITE.
- FileNotFoundException - If src does not exist
- ParentNotDirectoryException - If the parent of dst is not a directory
- UnsupportedFileSystemException - If the file system for src or dst is not supported
- IOException - If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
- org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
- org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
- org.apache.hadoop.ipc.UnexpectedServerException - If the server implementation throws an undeclared exception to the RPC server
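The rename precondition rules above can be encoded as a small predicate. This is an illustrative model only, not Hadoop code; it ignores the parent-of-dst and non-existence checks and models only the src/dst type and OVERWRITE rules.

```java
public class RenameRules {
    /**
     * Model of the rename preconditions described above.
     * Returns true if the rename may proceed, false if it must fail.
     */
    static boolean mayRename(boolean srcIsDir, boolean dstExists,
                             boolean dstIsDir, boolean dstIsEmptyDir,
                             boolean overwrite) {
        if (dstExists) {
            if (!srcIsDir && dstIsDir) return false;   // file over directory: fails
            if (srcIsDir && !dstIsDir) return false;   // directory over file: fails
            if (!overwrite) return false;              // dst exists, no OVERWRITE
            // OVERWRITE: allowed over a file or an empty directory only
            return !dstIsDir || dstIsEmptyDir;
        }
        return true;
    }

    public static void main(String[] args) {
        // file -> existing file, with OVERWRITE: succeeds
        System.out.println(mayRename(false, true, false, false, true)); // true
        // dir -> existing non-empty directory, even with OVERWRITE: fails
        System.out.println(mayRename(true, true, true, false, true));   // false
        // dir -> existing empty directory, with OVERWRITE: succeeds
        System.out.println(mayRename(true, true, true, true, true));    // true
    }
}
```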
-
setPermission
public void setPermission(Path f, FsPermission permission) throws AccessControlException, FileNotFoundException, UnsupportedFileSystemException, IOException
Set permission of a path.
- Parameters:
- f - the path.
- permission - the new absolute permission (umask is not applied)
- Throws:
- AccessControlException - If access is denied
- FileNotFoundException - If f does not exist
- UnsupportedFileSystemException - If the file system for f is not supported
- IOException - If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
- org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
- org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
- org.apache.hadoop.ipc.UnexpectedServerException - If the server implementation throws an undeclared exception to the RPC server
-
setOwner
public void setOwner(Path f, String username, String groupname) throws AccessControlException, UnsupportedFileSystemException, FileNotFoundException, IOException
Set owner of a path (i.e. a file or a directory). The parameters username and groupname cannot both be null.
- Parameters:
- f - The path
- username - If it is null, the original username remains unchanged.
- groupname - If it is null, the original groupname remains unchanged.
- Throws:
- AccessControlException - If access is denied
- FileNotFoundException - If f does not exist
- UnsupportedFileSystemException - If the file system for f is not supported
- IOException - If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
- org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
- org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
- org.apache.hadoop.ipc.UnexpectedServerException - If the server implementation throws an undeclared exception to the RPC server
RuntimeExceptions:
- HadoopIllegalArgumentException - If username or groupname is invalid.
-
setTimes
public void setTimes(Path f, long mtime, long atime) throws AccessControlException, FileNotFoundException, UnsupportedFileSystemException, IOException
Set access time of a file.
- Parameters:
- f - The path
- mtime - Set the modification time of this file, in milliseconds since the epoch (Jan 1, 1970). A value of -1 means that this call should not set modification time.
- atime - Set the access time of this file, in milliseconds since Jan 1, 1970. A value of -1 means that this call should not set access time.
- Throws:
- AccessControlException - If access is denied
- FileNotFoundException - If f does not exist
- UnsupportedFileSystemException - If the file system for f is not supported
- IOException - If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
- org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
- org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
- org.apache.hadoop.ipc.UnexpectedServerException - If the server implementation throws an undeclared exception to the RPC server
-
getFileChecksum
public FileChecksum getFileChecksum(Path f) throws AccessControlException, FileNotFoundException, IOException
Get the checksum of a file.
- Parameters:
- f - file path
- Returns:
- The file checksum. The default return value is null, which indicates that no checksum algorithm is implemented in the corresponding FileSystem.
- Throws:
- AccessControlException - If access is denied
- FileNotFoundException - If f does not exist
- IOException - If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
- org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
- org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
- org.apache.hadoop.ipc.UnexpectedServerException - If the server implementation throws an undeclared exception to the RPC server
-
setVerifyChecksum
public void setVerifyChecksum(boolean verifyChecksum, Path f) throws AccessControlException, FileNotFoundException, UnsupportedFileSystemException, IOException
Set the verify checksum flag for the file system denoted by the path. This is only applicable if the corresponding FileSystem supports checksums; by default it does nothing.
- Parameters:
- verifyChecksum - the verify checksum flag.
- f - set the verifyChecksum for the file system containing this path
- Throws:
- AccessControlException - If access is denied
- FileNotFoundException - If f does not exist
- UnsupportedFileSystemException - If the file system for f is not supported
- IOException - If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
- org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
- org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
- org.apache.hadoop.ipc.UnexpectedServerException - If the server implementation throws an undeclared exception to the RPC server
-
getFileStatus
public FileStatus getFileStatus(Path f) throws AccessControlException, FileNotFoundException, UnsupportedFileSystemException, IOException
Return a file status object that represents the path.
- Parameters:
- f - The path we want information from
- Returns:
- a FileStatus object
- Throws:
- AccessControlException - If access is denied
- FileNotFoundException - If f does not exist
- UnsupportedFileSystemException - If the file system for f is not supported
- IOException - If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
- org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
- org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
- org.apache.hadoop.ipc.UnexpectedServerException - If the server implementation throws an undeclared exception to the RPC server
-
msync
Synchronize client metadata state.
- Throws:
- IOException - If an I/O error occurred.
- UnsupportedOperationException - If the operation is not supported.
-
access
@LimitedPrivate({"HDFS","Hive"})
public void access(Path path, FsAction mode) throws AccessControlException, FileNotFoundException, UnsupportedFileSystemException, IOException
Checks if the user can access a path. The mode specifies which access checks to perform. If the requested permissions are granted, the method returns normally. If access is denied, the method throws an AccessControlException.
The default implementation of this method calls getFileStatus(Path) and checks the returned permissions against the requested permissions. Note that the getFileStatus call will be subject to authorization checks. Typically this requires search (execute) permission on each directory in the path's prefix, but this is implementation-defined. Any file system that provides a richer authorization model (such as ACLs) may override the default implementation so that it checks against that model instead.
In general, applications should avoid using this method, due to the risk of time-of-check/time-of-use race conditions. The permissions on a file may change immediately after the access call returns. Most applications should prefer running specific file system actions as the desired user, represented by a UserGroupInformation.
- Parameters:
- path - Path to check
- mode - type of access to check
- Throws:
- AccessControlException - if access is denied
- FileNotFoundException - if the path does not exist
- UnsupportedFileSystemException - if the file system for path is not supported
- IOException - see specific implementation
Exceptions applicable to file systems accessed over RPC:
- org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
- org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
- org.apache.hadoop.ipc.UnexpectedServerException - If the server implementation throws an undeclared exception to the RPC server
-
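A common pattern is to probe access and fall back gracefully. This sketch shows it; the canRead helper is a hypothetical convenience (not part of FileContext), and the time-of-check/time-of-use caveat above still applies:

```java
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.security.AccessControlException;

public class AccessExample {
    // Returns true if the current user can read the path right now.
    // Note: the answer may be stale by the time the caller acts on it.
    static boolean canRead(FileContext fc, Path p) throws Exception {
        try {
            fc.access(p, FsAction.READ);
            return true;
        } catch (AccessControlException denied) {
            return false;
        }
    }
}
```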
getFileLinkStatus
public FileStatus getFileLinkStatus(Path f) throws AccessControlException, FileNotFoundException, UnsupportedFileSystemException, IOException

Return a file status object that represents the path. If the path refers to a symlink, the FileStatus of the symlink itself is returned. If the underlying file system does not support symbolic links, the behavior is equivalent to getFileStatus(Path).
- Parameters:
f - The path we want information from.
- Returns:
- A FileStatus object
- Throws:
AccessControlException - If access is denied
FileNotFoundException - If f does not exist
UnsupportedFileSystemException - If file system for f is not supported
IOException - If an I/O error occurred
-
getLinkTarget
public Path getLinkTarget(Path f) throws AccessControlException, FileNotFoundException, UnsupportedFileSystemException, IOException

Returns the target of the given symbolic link as it was specified when the link was created. Links in the path leading up to the final path component are resolved transparently.
- Parameters:
f - the path to return the target of
- Returns:
- The un-interpreted target of the symbolic link.
- Throws:
AccessControlException - If access is denied
FileNotFoundException - If path f does not exist
UnsupportedFileSystemException - If file system for f is not supported
IOException - If the given path does not refer to a symlink or an I/O error occurred
-
getFileBlockLocations
@LimitedPrivate({"HDFS","MapReduce"}) @Evolving public BlockLocation[] getFileBlockLocations(Path f, long start, long len) throws AccessControlException, FileNotFoundException, UnsupportedFileSystemException, IOException

Return the block locations of the given file for the given offset and length. For a nonexistent file or region, null is returned. This call is most helpful with DFS, where it returns the hostnames of the machines that contain the given file. In HDFS, if the file is three-way replicated, the returned array contains elements like:

BlockLocation(offset: 0, length: BLOCK_SIZE, hosts: {"host1:9866", "host2:9866", "host3:9866"})
BlockLocation(offset: BLOCK_SIZE, length: BLOCK_SIZE, hosts: {"host2:9866", "host3:9866", "host4:9866"})

If the file is erasure-coded, the returned BlockLocations represent logical block groups. Suppose we have an RS_3_2 coded file (3 data units and 2 parity units).
1. If the file size is less than one stripe size, say 2 * CELL_SIZE, then one BlockLocation is returned, with offset 0, the actual file size, and 4 hosts (2 data blocks and 2 parity blocks) hosting the actual blocks.
2. If the file size is less than one group size but greater than one stripe size, then one BlockLocation is returned, with offset 0, the actual file size, and 5 hosts (3 data blocks and 2 parity blocks) hosting the actual blocks.
3. If the file size is greater than one group size, 3 * BLOCK_SIZE + 123 for example, then the result looks like:

BlockLocation(offset: 0, length: 3 * BLOCK_SIZE, hosts: {"host1:9866", "host2:9866", "host3:9866", "host4:9866", "host5:9866"})
BlockLocation(offset: 3 * BLOCK_SIZE, length: 123, hosts: {"host1:9866", "host4:9866", "host5:9866"})
- Parameters:
f - get block locations of this file
start - position (byte offset)
len - length (in bytes)
- Returns:
- block locations for the given file at the specified offset and length
- Throws:
AccessControlException - If access is denied
FileNotFoundException - If f does not exist
UnsupportedFileSystemException - If file system for f is not supported
IOException - If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
org.apache.hadoop.ipc.UnexpectedServerException - If server implementation throws undeclared exception to RPC server
RuntimeExceptions:
InvalidPathException - If path f is invalid
-
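The following sketch walks the returned array; taking the file path from the command line is an illustrative assumption:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;

public class BlockLocationsExample {
    public static void main(String[] args) throws Exception {
        FileContext fc = FileContext.getFileContext(new Configuration());
        Path f = new Path(args[0]);                       // file to inspect
        FileStatus st = fc.getFileStatus(f);
        // Ask for locations covering the whole file.
        BlockLocation[] blocks = fc.getFileBlockLocations(f, 0, st.getLen());
        for (BlockLocation b : blocks) {
            System.out.printf("offset=%d length=%d hosts=%s%n",
                b.getOffset(), b.getLength(), String.join(",", b.getHosts()));
        }
    }
}
```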
getFsStatus
public FsStatus getFsStatus(Path f) throws AccessControlException, FileNotFoundException, UnsupportedFileSystemException, IOException

Returns a status object describing the use and capacity of the file system denoted by the Path argument f. If the file system has multiple partitions, the use and capacity of the partition pointed to by the specified path are reflected.
- Parameters:
f - Path for which status should be obtained. null means the root partition of the default file system.
- Returns:
- a FsStatus object
- Throws:
AccessControlException - If access is denied
FileNotFoundException - If f does not exist
UnsupportedFileSystemException - If file system for f is not supported
IOException - If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
org.apache.hadoop.ipc.UnexpectedServerException - If server implementation throws undeclared exception to RPC server
-
createSymlink
public void createSymlink(Path target, Path link, boolean createParent) throws AccessControlException, FileAlreadyExistsException, FileNotFoundException, ParentNotDirectoryException, UnsupportedFileSystemException, IOException

Creates a symbolic link to an existing file. An exception is thrown if the symlink already exists, the user does not have permission to create the symlink, or the underlying file system does not support symlinks.

Symlink permissions are ignored; access to a symlink is determined by the permissions of the symlink target. Symlinks in paths leading up to the final path component are resolved transparently. If the final path component refers to a symlink, some functions operate on the symlink itself; these are:
- delete(f) and deleteOnExit(f) - Deletes the symlink.
- rename(src, dst) - If src refers to a symlink, the symlink is renamed. If dst refers to a symlink, the symlink is overwritten.
- getLinkTarget(f) - Returns the target of the symlink.
- getFileLinkStatus(f) - Returns a FileStatus object describing the symlink.

Some functions, create() and mkdir(), expect that the final path component does not exist. If they are given a path that refers to a symlink that does exist, they behave as if the path referred to an existing file or directory. All other functions fully resolve, i.e. follow, the symlink. These are: open, setReplication, setOwner, setTimes, setWorkingDirectory, setPermission, getFileChecksum, setVerifyChecksum, getFileBlockLocations, getFsStatus, getFileStatus, exists, and listStatus.

Symlink targets are stored as given to createSymlink, assuming the underlying file system is capable of storing a fully qualified URI. Dangling symlinks are permitted. FileContext supports four types of symlink targets, and resolves them as follows. Given a path referring to a symlink of the form:

  <---X--->
  fs://host/A/B/link
  <-----Y----->

X is the scheme and authority that identify the file system, and Y is the path leading up to the final path component "link". If Y is a symlink itself, let Y' be the target of Y and X' be the scheme and authority of Y'. Symlink targets may be:

1. Fully qualified URIs, e.g. fs://hostX/A/B/file. Resolved according to the target file system.
2. Partially qualified URIs (e.g. scheme but no host), e.g. fs:///A/B/file. Resolved according to the target file system. E.g. resolving a symlink to hdfs:///A results in an exception because HDFS URIs must be fully qualified, while a symlink to file:///A will not, since Hadoop's local file systems require partially qualified URIs.
3. Relative paths, e.g. path. Resolves to [Y'][path]. E.g. if Y resolves to hdfs://host/A and path is "../B/file", then [Y'][path] is hdfs://host/B/file.
4. Absolute paths, e.g. /path. Resolves to [X'][path]. E.g. if Y resolves to hdfs://host/A/B and path is "/file", then [X'][path] is hdfs://host/file.
- Parameters:
target - the target of the symbolic link
link - the path to be created that points to target
createParent - if true then missing parent dirs are created; if false then parent must exist
- Throws:
AccessControlException - If access is denied
FileAlreadyExistsException - If file link already exists
FileNotFoundException - If target does not exist
ParentNotDirectoryException - If parent of link is not a directory.
UnsupportedFileSystemException - If file system for target or link is not supported
IOException - If an I/O error occurred
-
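A minimal local-file-system sketch, assuming symlink support has been enabled via FileSystem.enableSymlinks() (symlinks are disabled by default) and the platform supports them; the path arguments are placeholders:

```java
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SymlinkExample {
    public static void main(String[] args) throws Exception {
        FileSystem.enableSymlinks();               // symlinks are disabled by default
        FileContext fc = FileContext.getLocalFSFileContext();
        Path target = new Path(args[0]);           // existing file
        Path link = new Path(args[1]);             // link to create
        fc.createSymlink(target, link, false);     // parent of link must exist
        FileStatus linkStatus = fc.getFileLinkStatus(link);  // status of the link itself
        System.out.println("isSymlink=" + linkStatus.isSymlink()
            + " target=" + fc.getLinkTarget(link));
    }
}
```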
listStatus
public org.apache.hadoop.fs.RemoteIterator<FileStatus> listStatus(Path f) throws AccessControlException, FileNotFoundException, UnsupportedFileSystemException, IOException

List the statuses of the files/directories in the given path if the path is a directory.
- Parameters:
f - is the path
- Returns:
- an iterator that traverses statuses of the files/directories in the given path
- Throws:
AccessControlException - If access is denied
FileNotFoundException - If f does not exist
UnsupportedFileSystemException - If file system for f is not supported
IOException - If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
org.apache.hadoop.ipc.UnexpectedServerException - If server implementation throws undeclared exception to RPC server
-
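Draining the returned RemoteIterator looks like this; the countEntries helper is a hypothetical convenience, not part of the API:

```java
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

public class ListExample {
    // Drain the RemoteIterator returned by listStatus, printing each entry
    // and returning the number of entries seen.
    static int countEntries(FileContext fc, Path dir) throws Exception {
        RemoteIterator<FileStatus> it = fc.listStatus(dir);
        int n = 0;
        while (it.hasNext()) {
            FileStatus st = it.next();
            System.out.println(st.getPath().getName() + (st.isDirectory() ? "/" : ""));
            n++;
        }
        return n;
    }
}
```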
listCorruptFileBlocks
public org.apache.hadoop.fs.RemoteIterator<Path> listCorruptFileBlocks(Path path) throws IOException

List corrupt file blocks.
- Parameters:
path - the path.
- Returns:
- an iterator over the corrupt files under the given path (may contain duplicates if a file has more than one corrupt block)
- Throws:
IOException- If an I/O error occurred.
-
listLocatedStatus
public org.apache.hadoop.fs.RemoteIterator<LocatedFileStatus> listLocatedStatus(Path f) throws AccessControlException, FileNotFoundException, UnsupportedFileSystemException, IOException

List the statuses of the files/directories in the given path if the path is a directory; if the path is a file, return that file's status and block locations. Each returned status that refers to a file contains the file's block locations.
- Parameters:
f - is the path
- Returns:
- an iterator that traverses statuses of the files/directories in the given path. If an I/O exception occurs (for example, the input directory is deleted while the listing is being executed), next() or hasNext() of the returned iterator may throw a RuntimeException with the IOException as the cause.
- Throws:
AccessControlException - If access is denied
FileNotFoundException - If f does not exist
UnsupportedFileSystemException - If file system for f is not supported
IOException - If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
org.apache.hadoop.ipc.UnexpectedServerException - If server implementation throws undeclared exception to RPC server
-
deleteOnExit
Mark a path to be deleted on JVM shutdown.
- Parameters:
f - the existing path to delete.
- Returns:
- true if deleteOnExit is successful, otherwise false.
- Throws:
AccessControlException - If access is denied
UnsupportedFileSystemException - If file system for f is not supported
IOException - If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
org.apache.hadoop.ipc.UnexpectedServerException - If server implementation throws undeclared exception to RPC server
-
util
public org.apache.hadoop.fs.FileContext.Util util() -
resolve
protected Path resolve(Path f) throws FileNotFoundException, org.apache.hadoop.fs.UnresolvedLinkException, AccessControlException, IOException

Resolves all symbolic links in the specified path and returns the new path object.
- Parameters:
f - the path.
- Returns:
- the resolved path.
- Throws:
FileNotFoundException - If f does not exist.
org.apache.hadoop.fs.UnresolvedLinkException - If an unresolved link occurred.
AccessControlException - If access is denied.
IOException - If an I/O error occurred.
-
resolveIntermediate
Resolves all symbolic links in the specified path leading up to, but not including, the final path component.
- Parameters:
f - path to resolve
- Returns:
- the new path object.
- Throws:
IOException- If an I/O error occurred.
-
getStatistics
Get the statistics for a particular file system.
- Parameters:
uri - the URI to look up the statistics for. Only the scheme and authority parts of the URI are used as the key to store and look up statistics.
- Returns:
- a statistics object
-
clearStatistics
public static void clearStatistics()

Clears all the statistics stored in AbstractFileSystem, for all the file systems.
-
printStatistics
public static void printStatistics()

Prints the statistics to standard output. File systems are identified by their scheme and authority.
-
getAllStatistics
- Returns:
- Map of uri and statistics for each filesystem instantiated. The uri consists of scheme and authority for the filesystem.
-
getDelegationTokens
@LimitedPrivate({"HDFS","MapReduce"}) public List<Token<?>> getDelegationTokens(Path p, String renewer) throws IOException

Get delegation tokens for the file systems accessed for a given path.
- Parameters:
p - Path for which delegation tokens are requested.
renewer - the account name that is allowed to renew the token.
- Returns:
- List of delegation tokens.
- Throws:
IOException- If an I/O error occurred.
-
modifyAclEntries
Modifies ACL entries of files and directories. This method can add new ACL entries or modify the permissions on existing ACL entries. All existing ACL entries that are not specified in this call are retained without changes. (Modifications are merged into the current ACL.)- Parameters:
path - Path to modify
aclSpec - List<AclEntry> describing modifications
- Throws:
IOException- if an ACL could not be modified
-
removeAclEntries
Removes ACL entries from files and directories. Other ACL entries are retained.- Parameters:
path - Path to modify
aclSpec - List<AclEntry> describing entries to remove
- Throws:
IOException- if an ACL could not be modified
-
removeDefaultAcl
Removes all default ACL entries from files and directories.- Parameters:
path- Path to modify- Throws:
IOException- if an ACL could not be modified
-
removeAcl
Removes all but the base ACL entries of files and directories. The entries for user, group, and others are retained for compatibility with permission bits.- Parameters:
path- Path to modify- Throws:
IOException- if an ACL could not be removed
-
setAcl
Fully replaces ACL of files and directories, discarding all existing entries.- Parameters:
path - Path to modify
aclSpec - List<AclEntry> describing modifications, must include entries for user, group, and others for compatibility with permission bits.
- Throws:
IOException- if an ACL could not be modified
-
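A sketch of building the required ACL spec for setAcl; the user name "alice" and the exact permissions are illustrative assumptions, and the mandatory user/group/other entries are included as the documentation requires:

```java
import java.util.Arrays;
import java.util.List;
import org.apache.hadoop.fs.permission.AclEntry;
import org.apache.hadoop.fs.permission.AclEntryScope;
import org.apache.hadoop.fs.permission.AclEntryType;
import org.apache.hadoop.fs.permission.FsAction;

public class AclSpecExample {
    // Build an ACL spec that grants user "alice" read-execute access while
    // keeping the mandatory user/group/other entries that setAcl requires.
    static List<AclEntry> buildSpec() {
        return Arrays.asList(
            entry(AclEntryType.USER, null, FsAction.ALL),             // owner
            entry(AclEntryType.USER, "alice", FsAction.READ_EXECUTE), // named user
            entry(AclEntryType.GROUP, null, FsAction.READ_EXECUTE),   // owning group
            entry(AclEntryType.OTHER, null, FsAction.NONE));          // everyone else
    }

    static AclEntry entry(AclEntryType type, String name, FsAction perm) {
        return new AclEntry.Builder()
            .setScope(AclEntryScope.ACCESS)
            .setType(type)
            .setName(name)
            .setPermission(perm)
            .build();
    }
}
```

The spec would then be applied with fc.setAcl(path, AclSpecExample.buildSpec()) on a file system that supports ACLs.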
getAclStatus
Gets the ACLs of files and directories.- Parameters:
path- Path to get- Returns:
- RemoteIterator<AclStatus> which returns each AclStatus
- Throws:
IOException- if an ACL could not be read
-
setXAttr
Set an xattr of a file or directory. The name must be prefixed with the namespace followed by ".". For example, "user.attr". Refer to the HDFS extended attributes user documentation for details.
- Parameters:
path - Path to modify
name - xattr name.
value - xattr value.
- Throws:
IOException- If an I/O error occurred.
-
setXAttr
public void setXAttr(Path path, String name, byte[] value, EnumSet<XAttrSetFlag> flag) throws IOException

Set an xattr of a file or directory. The name must be prefixed with the namespace followed by ".". For example, "user.attr". Refer to the HDFS extended attributes user documentation for details.
- Parameters:
path - Path to modify
name - xattr name.
value - xattr value.
flag - xattr set flag
- Throws:
IOException- If an I/O error occurred.
-
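A sketch of the create-then-replace flow using XAttrSetFlag; the attribute name and values are illustrative, and the path must live on an xattr-capable file system such as HDFS:

```java
import java.nio.charset.StandardCharsets;
import java.util.EnumSet;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.XAttrSetFlag;

public class XAttrExample {
    public static void main(String[] args) throws Exception {
        FileContext fc = FileContext.getFileContext(new Configuration());
        Path p = new Path(args[0]);   // an existing file on an xattr-capable FS
        // Create the attribute, failing if it already exists.
        fc.setXAttr(p, "user.origin", "ingest-job".getBytes(StandardCharsets.UTF_8),
            EnumSet.of(XAttrSetFlag.CREATE));
        // Re-setting the same name now requires REPLACE (or CREATE|REPLACE).
        fc.setXAttr(p, "user.origin", "backfill".getBytes(StandardCharsets.UTF_8),
            EnumSet.of(XAttrSetFlag.REPLACE));
        byte[] v = fc.getXAttr(p, "user.origin");
        System.out.println(new String(v, StandardCharsets.UTF_8));
    }
}
```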
getXAttr
Get an xattr for a file or directory. The name must be prefixed with the namespace followed by ".". For example, "user.attr". Refer to the HDFS extended attributes user documentation for details.
- Parameters:
path - Path to get extended attribute
name - xattr name.
- Returns:
- byte[] xattr value.
- Throws:
IOException- If an I/O error occurred.
-
getXAttrs
Get all of the xattrs for a file or directory. Only those xattrs for which the logged-in user has permissions to view are returned. Refer to the HDFS extended attributes user documentation for details.
- Parameters:
path- Path to get extended attributes- Returns:
- Map<String, byte[]> describing the XAttrs of the file or directory
- Throws:
IOException- If an I/O error occurred.
-
getXAttrs
Get all of the xattrs for a file or directory. Only those xattrs for which the logged-in user has permissions to view are returned. Refer to the HDFS extended attributes user documentation for details.
- Parameters:
path - Path to get extended attributes
names - XAttr names.
- Returns:
- Map<String, byte[]> describing the XAttrs of the file or directory
- Throws:
IOException- If an I/O error occurred.
-
removeXAttr
Remove an xattr of a file or directory. The name must be prefixed with the namespace followed by ".". For example, "user.attr". Refer to the HDFS extended attributes user documentation for details.
- Parameters:
path - Path to remove extended attribute
name - xattr name
- Throws:
IOException- If an I/O error occurred.
-
listXAttrs
Get all of the xattr names for a file or directory. Only those xattr names which the logged-in user has permissions to view are returned. Refer to the HDFS extended attributes user documentation for details.
- Parameters:
path- Path to get extended attributes- Returns:
- List<String> of the XAttr names of the file or directory
- Throws:
IOException- If an I/O error occurred.
-
createSnapshot
Create a snapshot with a default name.- Parameters:
path- The directory where snapshots will be taken.- Returns:
- the snapshot path.
- Throws:
IOException - If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
org.apache.hadoop.ipc.UnexpectedServerException - If server implementation throws undeclared exception to RPC server
-
createSnapshot
Create a snapshot.- Parameters:
path - The directory where snapshots will be taken.
snapshotName - The name of the snapshot
- Returns:
- the snapshot path.
- Throws:
IOException - If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
org.apache.hadoop.ipc.UnexpectedServerException - If server implementation throws undeclared exception to RPC server
-
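A sketch of the snapshot lifecycle across createSnapshot, renameSnapshot, and deleteSnapshot; the directory and snapshot names are illustrative assumptions, and in HDFS the directory must first be made snapshottable by an administrator (hdfs dfsadmin -allowSnapshot):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Path;

public class SnapshotExample {
    public static void main(String[] args) throws Exception {
        FileContext fc = FileContext.getFileContext(new Configuration());
        Path dir = new Path("/data/reports");   // must already be snapshottable
        // Take, rename, and delete a named snapshot.
        Path snap = fc.createSnapshot(dir, "s-2024-01-01");
        System.out.println("created " + snap);  // typically <dir>/.snapshot/<name>
        fc.renameSnapshot(dir, "s-2024-01-01", "s-jan");
        fc.deleteSnapshot(dir, "s-jan");
    }
}
```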
renameSnapshot
public void renameSnapshot(Path path, String snapshotOldName, String snapshotNewName) throws IOException Rename a snapshot.- Parameters:
path - The directory path where the snapshot was taken
snapshotOldName - Old name of the snapshot
snapshotNewName - New name of the snapshot
- Throws:
IOException - If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
org.apache.hadoop.ipc.UnexpectedServerException - If server implementation throws undeclared exception to RPC server
-
deleteSnapshot
Delete a snapshot of a directory.- Parameters:
path - The directory that the to-be-deleted snapshot belongs to
snapshotName - The name of the snapshot
- Throws:
IOException - If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
org.apache.hadoop.ipc.UnexpectedServerException - If server implementation throws undeclared exception to RPC server
-
satisfyStoragePolicy
Set the source path to satisfy storage policy.- Parameters:
path- The source path referring to either a directory or a file.- Throws:
IOException- If an I/O error occurred.
-
setStoragePolicy
Set the storage policy for a given file or directory.- Parameters:
path - file or directory path.
policyName - the name of the target storage policy. The list of supported storage policies can be retrieved via getAllStoragePolicies().
- Throws:
IOException- If an I/O error occurred.
-
unsetStoragePolicy
Unset the storage policy set for a given file or directory.- Parameters:
src- file or directory path.- Throws:
IOException- If an I/O error occurred.
-
getStoragePolicy
Query the effective storage policy ID for the given file or directory.- Parameters:
path- file or directory path.- Returns:
- storage policy for the given file.
- Throws:
IOException- If an I/O error occurred.
-
getAllStoragePolicies
Retrieve all the storage policies supported by this file system.- Returns:
- all storage policies supported by this filesystem.
- Throws:
IOException- If an I/O error occurred.
-
openFile
@Unstable public FutureDataInputStreamBuilder openFile(Path path) throws IOException, UnsupportedOperationException

Open a file for reading through a builder API. Ultimately calls open(Path, int) unless a subclass executes the open command differently. The semantics of this call are therefore the same as those of open(Path, int), with one special point: it is in FSDataInputStreamBuilder.build() that the open operation actually takes place; that is where all preconditions of the operation are checked.
- Parameters:
path - file path
- Returns:
- a FSDataInputStreamBuilder object to build the input stream
- Throws:
IOException - if some early checks cause IO failures.
UnsupportedOperationException - if support is checked early.
-
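A sketch of the builder flow; note that the open itself happens inside build(). The fs.option.openfile.read.policy key is an assumption valid only on recent releases, and opt() hints are best-effort (unknown keys are ignored):

```java
import java.util.concurrent.CompletableFuture;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Path;

public class OpenFileExample {
    public static void main(String[] args) throws Exception {
        FileContext fc = FileContext.getFileContext(new Configuration());
        Path p = new Path(args[0]);
        // The open operation takes place in build(); opt() hints are best-effort.
        CompletableFuture<FSDataInputStream> future = fc.openFile(p)
            .opt("fs.option.openfile.read.policy", "sequential")
            .build();
        try (FSDataInputStream in = future.get()) {
            byte[] buf = new byte[64];
            int n = in.read(buf);
            System.out.println("read " + n + " bytes");
        }
    }
}
```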
hasPathCapability
Return the path capabilities of the bonded AbstractFileSystem.
- Specified by:
hasPathCapability in interface org.apache.hadoop.fs.PathCapabilities
- Parameters:
path - path to query the capability of.
capability - string to query the stream support for.
- Returns:
- true iff the capability is supported under that FS.
- Throws:
IOException - path resolution or other IO failure
IllegalArgumentException - invalid arguments
-
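A sketch of probing a capability before relying on an optional feature; the capability string fs.capability.paths.acls is an illustrative assumption, and an unknown or unsupported capability simply yields false rather than an exception:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Path;

public class CapabilityExample {
    public static void main(String[] args) throws Exception {
        FileContext fc = FileContext.getFileContext(new Configuration());
        Path p = new Path("/");
        // Probe before using an optional feature; unknown capabilities
        // return false instead of throwing.
        if (fc.hasPathCapability(p, "fs.capability.paths.acls")) {
            System.out.println("ACL operations are available under " + p);
        } else {
            System.out.println("ACLs not supported here; skipping ACL setup");
        }
    }
}
```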
getServerDefaults
Return a set of server default configuration values based on path.
- Parameters:
path - path to fetch server defaults
- Returns:
- server default configuration values for path
- Throws:
IOException- an I/O error occurred
-
createMultipartUploader
@Unstable public org.apache.hadoop.fs.MultipartUploaderBuilder createMultipartUploader(Path basePath) throws IOException

Create a multipart uploader.
- Parameters:
basePath - file path under which all files are uploaded
- Returns:
- a MultipartUploaderBuilder object to build the uploader
- Throws:
IOException - if some early checks cause IO failures.
UnsupportedOperationException - if support is checked early.
-