@InterfaceAudience.Public @InterfaceStability.Stable public class FileContext extends Object implements org.apache.hadoop.fs.PathCapabilities
Hadoop also supports working-directory-relative names, which are paths relative to the current working directory (similar to Unix). The working directory can be in a different file system than the default FS.
Thus, Hadoop path names can be specified as one of the following:
* fully qualified URIs: scheme://authority/path (for example, hdfs://nnAddress:nnPort/foo/bar)
* slash-relative names: /foo/bar, relative to the default file system
* working-directory-relative names: foo/bar, relative to the current working directory
A FileContext carries the per-process state needed to resolve these names, chiefly the default file system (used for slash-relative names, taken from the Configuration) and the umask.
Further file system properties are specified on the server side. File system operations default to these server-side values unless otherwise specified. The file system related server-side defaults (see FsServerDefaults) include block size, bytes per checksum, write packet size, replication factor, and file buffer size.
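For orientation, here is a minimal sketch of obtaining a FileContext and addressing the same file through the three naming styles above. The hdfs://namenode:8020 URI and the /data/input path are hypothetical.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;

public class FileContextPaths {
  public static void main(String[] args) throws Exception {
    // Default file system and umask come from the passed Configuration (fs.defaultFS etc.).
    FileContext fc = FileContext.getFileContext(new Configuration());

    // Fully qualified URI.
    FileStatus byUri = fc.getFileStatus(new Path("hdfs://namenode:8020/data/input"));
    // Slash-relative name, resolved against the default file system.
    FileStatus bySlash = fc.getFileStatus(new Path("/data/input"));
    // Working-directory-relative name, resolved against fc.getWorkingDirectory().
    FileStatus byWd = fc.getFileStatus(new Path("data/input"));

    System.out.println(byUri.getLen() + " " + bySlash.getLen() + " " + byWd.getLen());
  }
}
```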
Modifier and Type | Field and Description |
---|---|
static FsPermission | DEFAULT_PERM - Default permission for directory and symlink. In previous versions, this default permission was also used to create files, so files created end up with ugo+x permission. |
static FsPermission | DIR_DEFAULT_PERM - Default permission for directory |
static FsPermission | FILE_DEFAULT_PERM - Default permission for file |
static org.slf4j.Logger | LOG |
static int | SHUTDOWN_HOOK_PRIORITY - Priority of the FileContext shutdown hook. |
Modifier and Type | Method and Description |
---|---|
static void | clearStatistics() - Clears all the statistics stored in AbstractFileSystem, for all the file systems. |
FSDataOutputStreamBuilder<FSDataOutputStream,?> | create(Path f) - Create an FSDataOutputStreamBuilder for creating or overwriting a file on the indicated path. |
FSDataOutputStream | create(Path f, EnumSet<CreateFlag> createFlag, org.apache.hadoop.fs.Options.CreateOpts... opts) - Create or overwrite a file on the indicated path and return an output stream for writing into the file. |
org.apache.hadoop.fs.MultipartUploaderBuilder | createMultipartUploader(Path basePath) - Create a multipart uploader. |
Path | createSnapshot(Path path) - Create a snapshot with a default name. |
Path | createSnapshot(Path path, String snapshotName) - Create a snapshot. |
void | createSymlink(Path target, Path link, boolean createParent) - Creates a symbolic link to an existing file. |
boolean | delete(Path f, boolean recursive) - Delete a file. |
boolean | deleteOnExit(Path f) - Mark a path to be deleted on JVM shutdown. |
void | deleteSnapshot(Path path, String snapshotName) - Delete a snapshot of a directory. |
AclStatus | getAclStatus(Path path) - Gets the ACLs of files and directories. |
static Map<URI,org.apache.hadoop.fs.FileSystem.Statistics> | getAllStatistics() |
Collection<? extends BlockStoragePolicySpi> | getAllStoragePolicies() - Retrieve all the storage policies supported by this file system. |
FileChecksum | getFileChecksum(Path f) - Get the checksum of a file. |
static FileContext | getFileContext() - Create a FileContext using the default config read from the $HADOOP_CONFIG/core.xml; unspecified key-values for config are defaulted from core-defaults.xml in the release jar. |
protected static FileContext | getFileContext(AbstractFileSystem defaultFS) - Create a FileContext for the specified file system using the default config. |
static FileContext | getFileContext(AbstractFileSystem defFS, Configuration aConf) - Create a FileContext with the specified FS as default, using the specified config. |
static FileContext | getFileContext(Configuration aConf) - Create a FileContext using the passed config. |
static FileContext | getFileContext(URI defaultFsUri) - Create a FileContext for the specified URI using the default config. |
static FileContext | getFileContext(URI defaultFsUri, Configuration aConf) - Create a FileContext for the specified default URI using the specified config. |
FileStatus | getFileLinkStatus(Path f) - Return a file status object that represents the path. |
FileStatus | getFileStatus(Path f) - Return a file status object that represents the path. |
protected AbstractFileSystem | getFSofPath(Path absOrFqPath) - Get the file system of the supplied path. |
FsStatus | getFsStatus(Path f) - Returns a status object describing the use and capacity of the file system denoted by the Path argument f. |
Path | getHomeDirectory() - Return the current user's home directory in this file system. |
Path | getLinkTarget(Path f) - Returns the target of the given symbolic link as it was specified when the link was created. |
static FileContext | getLocalFSFileContext() |
static FileContext | getLocalFSFileContext(Configuration aConf) |
FsServerDefaults | getServerDefaults(Path path) - Return a set of server default configuration values based on path. |
static org.apache.hadoop.fs.FileSystem.Statistics | getStatistics(URI uri) - Get the statistics for a particular file system. |
BlockStoragePolicySpi | getStoragePolicy(Path path) - Query the effective storage policy ID for the given file or directory. |
UserGroupInformation | getUgi() - Gets the ugi in the file-context. |
FsPermission | getUMask() |
Path | getWorkingDirectory() - Gets the working directory for wd-relative names (such as "foo/bar"). |
byte[] | getXAttr(Path path, String name) - Get an xattr for a file or directory. |
Map<String,byte[]> | getXAttrs(Path path) - Get all of the xattrs for a file or directory. |
Map<String,byte[]> | getXAttrs(Path path, List<String> names) - Get all of the xattrs for a file or directory. |
boolean | hasPathCapability(Path path, String capability) - Return the path capabilities of the bonded AbstractFileSystem. |
org.apache.hadoop.fs.RemoteIterator<Path> | listCorruptFileBlocks(Path path) |
org.apache.hadoop.fs.RemoteIterator<LocatedFileStatus> | listLocatedStatus(Path f) - List the statuses of the files/directories in the given path if the path is a directory. |
org.apache.hadoop.fs.RemoteIterator<FileStatus> | listStatus(Path f) - List the statuses of the files/directories in the given path if the path is a directory. |
List<String> | listXAttrs(Path path) - Get all of the xattr names for a file or directory. |
Path | makeQualified(Path path) - Make the path fully qualified if it isn't. |
void | mkdir(Path dir, FsPermission permission, boolean createParent) - Make (create) a directory and all the non-existent parents. |
void | modifyAclEntries(Path path, List<AclEntry> aclSpec) - Modifies ACL entries of files and directories. |
void | msync() - Synchronize client metadata state. |
FSDataInputStream | open(Path f) - Opens an FSDataInputStream at the indicated Path using the default buffer size. |
FSDataInputStream | open(Path f, int bufferSize) - Opens an FSDataInputStream at the indicated Path. |
FutureDataInputStreamBuilder | openFile(Path path) - Open a file for reading through a builder API. |
static void | printStatistics() - Prints the statistics to standard output. |
void | removeAcl(Path path) - Removes all but the base ACL entries of files and directories. |
void | removeAclEntries(Path path, List<AclEntry> aclSpec) - Removes ACL entries from files and directories. |
void | removeDefaultAcl(Path path) - Removes all default ACL entries from files and directories. |
void | removeXAttr(Path path, String name) - Remove an xattr of a file or directory. |
void | rename(Path src, Path dst, org.apache.hadoop.fs.Options.Rename... options) - Renames Path src to Path dst; fails if src is a file and dst is a directory. |
void | renameSnapshot(Path path, String snapshotOldName, String snapshotNewName) - Rename a snapshot. |
protected Path | resolve(Path f) - Resolves all symbolic links in the specified path. |
protected Path | resolveIntermediate(Path f) - Resolves all symbolic links in the specified path leading up to, but not including, the final path component. |
Path | resolvePath(Path f) - Resolve the path, following any symlinks or mount points. |
void | satisfyStoragePolicy(Path path) - Set the source path to satisfy storage policy. |
void | setAcl(Path path, List<AclEntry> aclSpec) - Fully replaces the ACL of files and directories, discarding all existing entries. |
void | setOwner(Path f, String username, String groupname) - Set owner of a path (i.e. a file or a directory). |
void | setPermission(Path f, FsPermission permission) - Set permission of a path. |
boolean | setReplication(Path f, short replication) - Set replication for an existing file. |
void | setStoragePolicy(Path path, String policyName) - Set the storage policy for a given file or directory. |
void | setTimes(Path f, long mtime, long atime) - Set access time of a file. |
void | setUMask(FsPermission newUmask) - Set umask to the supplied parameter. |
void | setVerifyChecksum(boolean verifyChecksum, Path f) - Set the verify checksum flag for the file system denoted by the path. |
void | setWorkingDirectory(Path newWDir) - Set the working directory for wd-relative names (such as "foo/bar"). |
void | setXAttr(Path path, String name, byte[] value) - Set an xattr of a file or directory. |
void | setXAttr(Path path, String name, byte[] value, EnumSet<XAttrSetFlag> flag) - Set an xattr of a file or directory. |
boolean | truncate(Path f, long newLength) - Truncate the file in the indicated path to the indicated size. |
void | unsetStoragePolicy(Path src) - Unset the storage policy set for a given file or directory. |
org.apache.hadoop.fs.FileContext.Util | util() |
public static final org.slf4j.Logger LOG
public static final FsPermission DEFAULT_PERM
Default permission for directory and symlink. In previous versions, this default permission was also used to create files, so files created end up with ugo+x permission. The defaults for directories and files are now separate: use DIR_DEFAULT_PERM for directory, and use FILE_DEFAULT_PERM for file. This constant is kept for compatibility.
public static final FsPermission DIR_DEFAULT_PERM
Default permission for directory.
public static final FsPermission FILE_DEFAULT_PERM
Default permission for file.
public static final int SHUTDOWN_HOOK_PRIORITY
Priority of the FileContext shutdown hook.
protected AbstractFileSystem getFSofPath(Path absOrFqPath) throws UnsupportedFileSystemException, IOException
Parameters: absOrFqPath - absolute or fully qualified path
Throws:
UnsupportedFileSystemException - If the file system for absOrFqPath is not supported.
IOException - If the file system for absOrFqPath could not be instantiated.
public static FileContext getFileContext(AbstractFileSystem defFS, Configuration aConf)
defFS
- aConf
- protected static FileContext getFileContext(AbstractFileSystem defaultFS)
defaultFS
- public static FileContext getFileContext() throws UnsupportedFileSystemException
UnsupportedFileSystemException
- If the file system from the default
configuration is not supportedpublic static FileContext getLocalFSFileContext() throws UnsupportedFileSystemException
UnsupportedFileSystemException
- If the file system for
FsConstants.LOCAL_FS_URI
is not supported.public static FileContext getFileContext(URI defaultFsUri) throws UnsupportedFileSystemException
defaultFsUri
- UnsupportedFileSystemException
- If the file system for
defaultFsUri
is not supportedpublic static FileContext getFileContext(URI defaultFsUri, Configuration aConf) throws UnsupportedFileSystemException
defaultFsUri
- aConf
- UnsupportedFileSystemException
- If the file system with specified is
not supportedRuntimeException
- If the file system specified is supported but
could not be instantiated, or if login fails.public static FileContext getFileContext(Configuration aConf) throws UnsupportedFileSystemException
getFileContext(URI, Configuration)
instead of this one.aConf
- UnsupportedFileSystemException
- If file system in the config
is not supportedpublic static FileContext getLocalFSFileContext(Configuration aConf) throws UnsupportedFileSystemException
Parameters: aConf - from which the FileContext is configured
Throws: UnsupportedFileSystemException - If the default file system in the config is not supported
public void setWorkingDirectory(Path newWDir) throws IOException
Set the working directory for wd-relative names (such as "foo/bar"); a subsequent getWorkingDirectory() should return what setWorkingDirectory() set.
Parameters: newWDir - new working directory
Throws: IOException
public Path getWorkingDirectory()
Gets the working directory for wd-relative names (such as "foo/bar").
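A short sketch of working-directory handling with setWorkingDirectory and getWorkingDirectory; the /user/alice paths are hypothetical.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;

public class WorkingDir {
  public static void main(String[] args) throws Exception {
    FileContext fc = FileContext.getFileContext(new Configuration());

    // The working directory is per-FileContext (per-process style) state.
    fc.setWorkingDirectory(new Path("/user/alice/project"));
    System.out.println("wd = " + fc.getWorkingDirectory());

    // The wd-relative name "logs/run.txt" resolves against the working directory.
    FileStatus st = fc.getFileStatus(new Path("logs/run.txt"));
    System.out.println(st.getPath());
  }
}
```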
public UserGroupInformation getUgi()
public Path getHomeDirectory()
public FsPermission getUMask()
public void setUMask(FsPermission newUmask)
Parameters: newUmask - the new umask
public Path resolvePath(Path f) throws FileNotFoundException, org.apache.hadoop.fs.UnresolvedLinkException, AccessControlException, IOException
Resolve the path, following any symlinks or mount points.
Parameters: f - to be resolved
Throws:
FileNotFoundException - If f does not exist
AccessControlException - if access denied
IOException - If an IO Error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
org.apache.hadoop.ipc.UnexpectedServerException - If server implementation throws undeclared exception to RPC server
RuntimeExceptions:
InvalidPathException - If path f is not valid
org.apache.hadoop.fs.UnresolvedLinkException
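A small sketch contrasting resolvePath (above) with makeQualified (documented next); the /data/link and data/link paths are hypothetical.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Path;

public class ResolveExample {
  public static void main(String[] args) throws Exception {
    FileContext fc = FileContext.getFileContext(new Configuration());

    // makeQualified only adds the default scheme/authority and working directory;
    // it does not touch the file system.
    Path qualified = fc.makeQualified(new Path("data/link"));

    // resolvePath contacts the file system and follows symlinks/mount points,
    // so the path must exist.
    Path resolved = fc.resolvePath(new Path("/data/link"));

    System.out.println(qualified + " -> " + resolved);
  }
}
```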
public Path makeQualified(Path path)
Parameters: path - the path to qualify
public FSDataOutputStream create(Path f, EnumSet<CreateFlag> createFlag, org.apache.hadoop.fs.Options.CreateOpts... opts) throws AccessControlException, FileAlreadyExistsException, FileNotFoundException, ParentNotDirectoryException, UnsupportedFileSystemException, IOException
Create or overwrite a file on the indicated path and return an output stream for writing into the file.
Parameters:
f - the file name to open
createFlag - gives the semantics of create; see CreateFlag
opts - file creation options; see Options.CreateOpts
Returns: FSDataOutputStream for the created file
Throws:
AccessControlException - If access is denied
FileAlreadyExistsException - If file f already exists
FileNotFoundException - If parent of f does not exist and createParent is false
ParentNotDirectoryException - If parent of f is not a directory
UnsupportedFileSystemException - If file system for f is not supported
IOException - If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
org.apache.hadoop.ipc.UnexpectedServerException - If server implementation throws undeclared exception to RPC server
RuntimeExceptions:
InvalidPathException - If path f is not valid
public FSDataOutputStreamBuilder<FSDataOutputStream,?> create(Path f) throws IOException
Create an FSDataOutputStreamBuilder for creating or overwriting a file on the indicated path.
Parameters: f - the file path to create the builder for
Returns: FSDataOutputStreamBuilder to build an FSDataOutputStream. Upon FSDataOutputStreamBuilder.build() being invoked, builder parameters will be verified by FileContext and AbstractFileSystem.create(Path, EnumSet<CreateFlag>, Options.CreateOpts...), and file system state will be modified. Clients should expect FSDataOutputStreamBuilder.build() to throw the same exceptions as create(Path, EnumSet, CreateOpts...).
Throws: IOException
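A minimal sketch of both create overloads; the /tmp paths are hypothetical and the builder defaults are left untouched apart from build().

```java
import java.util.EnumSet;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Options;
import org.apache.hadoop.fs.Path;

public class CreateExamples {
  public static void main(String[] args) throws Exception {
    FileContext fc = FileContext.getFileContext(new Configuration());

    // Overload with explicit flags and options; createParent() creates missing parents.
    try (FSDataOutputStream out = fc.create(
        new Path("/tmp/example-a.txt"),
        EnumSet.of(CreateFlag.CREATE, CreateFlag.OVERWRITE),
        Options.CreateOpts.createParent())) {
      out.writeBytes("hello\n");
    }

    // Builder overload; parameters are only verified when build() is invoked.
    try (FSDataOutputStream out = fc.create(new Path("/tmp/example-b.txt")).build()) {
      out.writeBytes("world\n");
    }
  }
}
```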
public void mkdir(Path dir, FsPermission permission, boolean createParent) throws AccessControlException, FileAlreadyExistsException, FileNotFoundException, ParentNotDirectoryException, UnsupportedFileSystemException, IOException
dir
- - the dir to makepermission
- - permissions is set permission&~umaskcreateParent
- - if true then missing parent dirs are created if false
then parent must existAccessControlException
- If access is deniedFileAlreadyExistsException
- If directory dir
already
existsFileNotFoundException
- If parent of dir
does not exist
and createParent
is falseParentNotDirectoryException
- If parent of dir
is not a
directoryUnsupportedFileSystemException
- If file system for dir
is not supportedIOException
- If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:org.apache.hadoop.ipc.RpcClientException
- If an exception occurred in the RPC clientorg.apache.hadoop.ipc.UnexpectedServerException
- If server implementation throws
undeclared exception to RPC server
RuntimeExceptions:InvalidPathException
- If path dir
is not validpublic boolean delete(Path f, boolean recursive) throws AccessControlException, FileNotFoundException, UnsupportedFileSystemException, IOException
f
- the path to delete.recursive
- if path is a directory and set to
true, the directory is deleted else throws an exception. In
case of a file the recursive can be set to either true or false.AccessControlException
- If access is deniedFileNotFoundException
- If f
does not existUnsupportedFileSystemException
- If file system for f
is
not supportedIOException
- If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:org.apache.hadoop.ipc.RpcClientException
- If an exception occurred in the RPC clientorg.apache.hadoop.ipc.RpcServerException
- If an exception occurred in the RPC serverorg.apache.hadoop.ipc.UnexpectedServerException
- If server implementation throws
undeclared exception to RPC server
RuntimeExceptions:InvalidPathException
- If path f
is invalidpublic FSDataInputStream open(Path f) throws AccessControlException, FileNotFoundException, UnsupportedFileSystemException, IOException
f
- the file name to openAccessControlException
- If access is deniedFileNotFoundException
- If file f
does not existUnsupportedFileSystemException
- If file system for f
is not supportedIOException
- If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:org.apache.hadoop.ipc.RpcClientException
- If an exception occurred in the RPC clientorg.apache.hadoop.ipc.RpcServerException
- If an exception occurred in the RPC serverorg.apache.hadoop.ipc.UnexpectedServerException
- If server implementation throws
undeclared exception to RPC serverpublic FSDataInputStream open(Path f, int bufferSize) throws AccessControlException, FileNotFoundException, UnsupportedFileSystemException, IOException
f
- the file name to openbufferSize
- the size of the buffer to be used.AccessControlException
- If access is deniedFileNotFoundException
- If file f
does not existUnsupportedFileSystemException
- If file system for f
is
not supportedIOException
- If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:org.apache.hadoop.ipc.RpcClientException
- If an exception occurred in the RPC clientorg.apache.hadoop.ipc.RpcServerException
- If an exception occurred in the RPC serverorg.apache.hadoop.ipc.UnexpectedServerException
- If server implementation throws
undeclared exception to RPC serverpublic boolean truncate(Path f, long newLength) throws AccessControlException, FileNotFoundException, UnsupportedFileSystemException, IOException
f
- The path to the file to be truncatednewLength
- The size the file is to be truncated totrue
if the file has been truncated to the desired
newLength
and is immediately available to be reused for
write operations such as append
, or
false
if a background process of adjusting the length of
the last block has been started, and clients should wait for it to
complete before proceeding with further file updates.AccessControlException
- If access is deniedFileNotFoundException
- If file f
does not existUnsupportedFileSystemException
- If file system for f
is
not supportedIOException
- If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:org.apache.hadoop.ipc.RpcClientException
- If an exception occurred in the RPC clientorg.apache.hadoop.ipc.RpcServerException
- If an exception occurred in the RPC serverorg.apache.hadoop.ipc.UnexpectedServerException
- If server implementation throws
undeclared exception to RPC serverpublic boolean setReplication(Path f, short replication) throws AccessControlException, FileNotFoundException, IOException
f
- file namereplication
- new replicationAccessControlException
- If access is deniedFileNotFoundException
- If file f
does not existIOException
- If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:org.apache.hadoop.ipc.RpcClientException
- If an exception occurred in the RPC clientorg.apache.hadoop.ipc.RpcServerException
- If an exception occurred in the RPC serverorg.apache.hadoop.ipc.UnexpectedServerException
- If server implementation throws
undeclared exception to RPC serverpublic void rename(Path src, Path dst, org.apache.hadoop.fs.Options.Rename... options) throws AccessControlException, FileAlreadyExistsException, FileNotFoundException, ParentNotDirectoryException, UnsupportedFileSystemException, IOException
If OVERWRITE option is not passed as an argument, rename fails if the dst already exists.
If OVERWRITE option is passed as an argument, rename overwrites the dst if it is a file or an empty directory. Rename fails if dst is a non-empty directory.
Note that atomicity of rename is dependent on the file system implementation. Please refer to the file system documentation for details.
src
- path to be renameddst
- new path after renameAccessControlException
- If access is deniedFileAlreadyExistsException
- If dst
already exists and
options
has Options.Rename.OVERWRITE
option false.FileNotFoundException
- If src
does not existParentNotDirectoryException
- If parent of dst
is not a
directoryUnsupportedFileSystemException
- If file system for src
and dst
is not supportedIOException
- If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:org.apache.hadoop.ipc.RpcClientException
- If an exception occurred in the RPC clientorg.apache.hadoop.ipc.RpcServerException
- If an exception occurred in the RPC serverorg.apache.hadoop.ipc.UnexpectedServerException
- If server implementation throws
undeclared exception to RPC serverpublic void setPermission(Path f, FsPermission permission) throws AccessControlException, FileNotFoundException, UnsupportedFileSystemException, IOException
f
- permission
- - the new absolute permission (umask is not applied)AccessControlException
- If access is deniedFileNotFoundException
- If f
does not existUnsupportedFileSystemException
- If file system for f
is not supportedIOException
- If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:org.apache.hadoop.ipc.RpcClientException
- If an exception occurred in the RPC clientorg.apache.hadoop.ipc.RpcServerException
- If an exception occurred in the RPC serverorg.apache.hadoop.ipc.UnexpectedServerException
- If server implementation throws
undeclared exception to RPC serverpublic void setOwner(Path f, String username, String groupname) throws AccessControlException, UnsupportedFileSystemException, FileNotFoundException, IOException
f
- The pathusername
- If it is null, the original username remains unchanged.groupname
- If it is null, the original groupname remains unchanged.AccessControlException
- If access is deniedFileNotFoundException
- If f
does not existUnsupportedFileSystemException
- If file system for f
is
not supportedIOException
- If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:org.apache.hadoop.ipc.RpcClientException
- If an exception occurred in the RPC clientorg.apache.hadoop.ipc.RpcServerException
- If an exception occurred in the RPC serverorg.apache.hadoop.ipc.UnexpectedServerException
- If server implementation throws
undeclared exception to RPC server
RuntimeExceptions:HadoopIllegalArgumentException
- If username
or
groupname
is invalid.public void setTimes(Path f, long mtime, long atime) throws AccessControlException, FileNotFoundException, UnsupportedFileSystemException, IOException
f
- The pathmtime
- Set the modification time of this file.
The number of milliseconds since epoch (Jan 1, 1970).
A value of -1 means that this call should not set modification time.atime
- Set the access time of this file.
The number of milliseconds since Jan 1, 1970.
A value of -1 means that this call should not set access time.AccessControlException
- If access is deniedFileNotFoundException
- If f
does not existUnsupportedFileSystemException
- If file system for f
is
not supportedIOException
- If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:org.apache.hadoop.ipc.RpcClientException
- If an exception occurred in the RPC clientorg.apache.hadoop.ipc.RpcServerException
- If an exception occurred in the RPC serverorg.apache.hadoop.ipc.UnexpectedServerException
- If server implementation throws
undeclared exception to RPC serverpublic FileChecksum getFileChecksum(Path f) throws AccessControlException, FileNotFoundException, IOException
f
- file pathAccessControlException
- If access is deniedFileNotFoundException
- If f
does not existIOException
- If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:org.apache.hadoop.ipc.RpcClientException
- If an exception occurred in the RPC clientorg.apache.hadoop.ipc.RpcServerException
- If an exception occurred in the RPC serverorg.apache.hadoop.ipc.UnexpectedServerException
- If server implementation throws
undeclared exception to RPC serverpublic void setVerifyChecksum(boolean verifyChecksum, Path f) throws AccessControlException, FileNotFoundException, UnsupportedFileSystemException, IOException
verifyChecksum
- f
- set the verifyChecksum for the Filesystem containing this pathAccessControlException
- If access is deniedFileNotFoundException
- If f
does not existUnsupportedFileSystemException
- If file system for f
is
not supportedIOException
- If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:org.apache.hadoop.ipc.RpcClientException
- If an exception occurred in the RPC clientorg.apache.hadoop.ipc.RpcServerException
- If an exception occurred in the RPC serverorg.apache.hadoop.ipc.UnexpectedServerException
- If server implementation throws
undeclared exception to RPC serverpublic FileStatus getFileStatus(Path f) throws AccessControlException, FileNotFoundException, UnsupportedFileSystemException, IOException
f
- The path we want information fromAccessControlException
- If access is deniedFileNotFoundException
- If f
does not existUnsupportedFileSystemException
- If file system for f
is
not supportedIOException
- If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
org.apache.hadoop.ipc.UnexpectedServerException - If server implementation throws undeclared exception to RPC server
public void msync() throws IOException, UnsupportedOperationException
Synchronize client metadata state.
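A combined sketch of the basic operations documented above (create, mkdir, rename, setPermission, setTimes, getFileStatus, open, delete); the /tmp/fc-demo paths are hypothetical.

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Options;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class BasicOps {
  public static void main(String[] args) throws IOException {
    FileContext fc = FileContext.getFileContext(new Configuration());
    Path dir = new Path("/tmp/fc-demo");

    // mkdir applies permission & ~umask and can create missing parents.
    fc.mkdir(dir, FsPermission.getDirDefault(), true);

    try (FSDataOutputStream out = fc.create(new Path(dir, "in.txt")).build()) {
      out.writeBytes("demo\n");
    }

    // rename with OVERWRITE replaces an existing file or empty directory at dst.
    fc.rename(new Path(dir, "in.txt"), new Path(dir, "out.txt"), Options.Rename.OVERWRITE);

    // Permission and times are set directly; no umask is applied here.
    fc.setPermission(new Path(dir, "out.txt"), new FsPermission((short) 0640));
    fc.setTimes(new Path(dir, "out.txt"), System.currentTimeMillis(), -1);

    FileStatus st = fc.getFileStatus(new Path(dir, "out.txt"));
    System.out.println(st.getLen() + " bytes, replication " + st.getReplication());

    try (FSDataInputStream in = fc.open(new Path(dir, "out.txt"))) {
      System.out.println("first byte: " + in.read());
    }

    // Recursive delete of the whole demo directory.
    fc.delete(dir, true);
  }
}
```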
public FileStatus getFileLinkStatus(Path f) throws AccessControlException, FileNotFoundException, UnsupportedFileSystemException, IOException
f
- The path we want information from.AccessControlException
- If access is deniedFileNotFoundException
- If f
does not existUnsupportedFileSystemException
- If file system for f
is
not supportedIOException
- If an I/O error occurredpublic Path getLinkTarget(Path f) throws AccessControlException, FileNotFoundException, UnsupportedFileSystemException, IOException
f
- the path to return the target ofAccessControlException
- If access is deniedFileNotFoundException
- If path f
does not existUnsupportedFileSystemException
- If file system for f
is
not supportedIOException
- If the given path does not refer to a symlink
or an I/O error occurredpublic FsStatus getFsStatus(Path f) throws AccessControlException, FileNotFoundException, UnsupportedFileSystemException, IOException
f
- Path for which status should be obtained. null means the
root partition of the default file system.AccessControlException
- If access is deniedFileNotFoundException
- If f
does not existUnsupportedFileSystemException
- If file system for f
is
not supportedIOException
- If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:org.apache.hadoop.ipc.RpcClientException
- If an exception occurred in the RPC clientorg.apache.hadoop.ipc.RpcServerException
- If an exception occurred in the RPC serverorg.apache.hadoop.ipc.UnexpectedServerException
- If server implementation throws
undeclared exception to RPC serverpublic void createSymlink(Path target, Path link, boolean createParent) throws AccessControlException, FileAlreadyExistsException, FileNotFoundException, ParentNotDirectoryException, UnsupportedFileSystemException, IOException
Given a path referring to a symlink of the form fs://host/A/B/link, X is the scheme and authority that identify the file system (fs://host) and Y is the path leading up to the final path component "link" (/A/B). If Y is itself a symlink, then let Y' be the target of Y and X' be the scheme and authority of Y'. Symlink targets may be:
1. Fully qualified URIs, e.g. fs://hostX/A/B/file. Resolved according to the target file system.
2. Partially qualified URIs (e.g. scheme but no host), e.g. fs:///A/B/file. Resolved according to the target file system. E.g. resolving a symlink to hdfs:///A results in an exception because HDFS URIs must be fully qualified, while a symlink to file:///A will not, since Hadoop's local file systems require partially qualified URIs.
3. Relative paths. Resolve to [Y'][path]. E.g. if Y resolves to hdfs://host/A and path is "../B/file", then [Y'][path] is hdfs://host/B/file.
4. Absolute paths. Resolve to [X'][path]. E.g. if Y resolves to hdfs://host/A/B and path is "/file", then [X'][path] is hdfs://host/file.
target
- the target of the symbolic linklink
- the path to be created that points to targetcreateParent
- if true then missing parent dirs are created if
false then parent must existAccessControlException
- If access is deniedFileAlreadyExistsException
- If file link
already existsFileNotFoundException
- If target
does not existParentNotDirectoryException
- If parent of link
is not a
directory.UnsupportedFileSystemException
- If file system for
target
or link
is not supportedIOException
- If an I/O error occurredpublic org.apache.hadoop.fs.RemoteIterator<FileStatus> listStatus(Path f) throws AccessControlException, FileNotFoundException, UnsupportedFileSystemException, IOException
f
- is the pathAccessControlException
- If access is deniedFileNotFoundException
- If f
does not existUnsupportedFileSystemException
- If file system for f
is
not supportedIOException
- If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
org.apache.hadoop.ipc.UnexpectedServerException - If server implementation throws undeclared exception to RPC server
public org.apache.hadoop.fs.RemoteIterator<Path> listCorruptFileBlocks(Path path) throws IOException
Throws: IOException
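A short sketch of iterating the RemoteIterator returned by listStatus (above) and listLocatedStatus (documented next); the /data directory is hypothetical.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

public class ListingExample {
  public static void main(String[] args) throws Exception {
    FileContext fc = FileContext.getFileContext(new Configuration());

    // Plain statuses.
    RemoteIterator<FileStatus> it = fc.listStatus(new Path("/data"));
    while (it.hasNext()) {
      FileStatus st = it.next();
      System.out.println(st.getPath() + (st.isDirectory() ? "/" : " " + st.getLen()));
    }

    // Statuses with block locations, useful for locality-aware processing.
    RemoteIterator<LocatedFileStatus> located = fc.listLocatedStatus(new Path("/data"));
    while (located.hasNext()) {
      System.out.println(located.next().getBlockLocations().length + " blocks");
    }
  }
}
```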
public org.apache.hadoop.fs.RemoteIterator<LocatedFileStatus> listLocatedStatus(Path f) throws AccessControlException, FileNotFoundException, UnsupportedFileSystemException, IOException
f
- is the pathAccessControlException
- If access is deniedFileNotFoundException
- If f
does not existUnsupportedFileSystemException
- If file system for f
is
not supportedIOException
- If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:org.apache.hadoop.ipc.RpcClientException
- If an exception occurred in the RPC clientorg.apache.hadoop.ipc.RpcServerException
- If an exception occurred in the RPC serverorg.apache.hadoop.ipc.UnexpectedServerException
- If server implementation throws
undeclared exception to RPC serverpublic boolean deleteOnExit(Path f) throws AccessControlException, IOException
f
- the existing path to delete.AccessControlException
- If access is deniedUnsupportedFileSystemException
- If file system for f
is
not supportedIOException
- If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
org.apache.hadoop.ipc.UnexpectedServerException - If server implementation throws undeclared exception to RPC server
public org.apache.hadoop.fs.FileContext.Util util()
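A brief sketch of deleteOnExit together with the helper object returned by util(); the /tmp/scratch path is hypothetical, and Util.exists is assumed here as one of the convenience methods on FileContext.Util.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class ScratchFiles {
  public static void main(String[] args) throws Exception {
    FileContext fc = FileContext.getFileContext(new Configuration());
    Path scratch = new Path("/tmp/scratch");

    fc.mkdir(scratch, FsPermission.getDirDefault(), true);

    // The path is removed by the FileContext shutdown hook when the JVM exits.
    fc.deleteOnExit(scratch);

    // util() bundles convenience helpers such as an existence check.
    System.out.println("exists: " + fc.util().exists(scratch));
  }
}
```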
protected Path resolve(Path f) throws FileNotFoundException, org.apache.hadoop.fs.UnresolvedLinkException, AccessControlException, IOException
FileNotFoundException
org.apache.hadoop.fs.UnresolvedLinkException
AccessControlException
IOException
protected Path resolveIntermediate(Path f) throws IOException
f
- path to resolveIOException
public static org.apache.hadoop.fs.FileSystem.Statistics getStatistics(URI uri)
Parameters: uri - the uri to lookup the statistics. Only the scheme and authority part of the uri are used as the key to store and lookup.
public static void clearStatistics()
Clears all the statistics stored in AbstractFileSystem, for all the file systems.
public static void printStatistics()
public static Map<URI,org.apache.hadoop.fs.FileSystem.Statistics> getAllStatistics()
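A small sketch of the statistics helpers (getStatistics, printStatistics, clearStatistics); the hdfs://namenode:8020 key is hypothetical, and getStatistics may have nothing recorded for it, hence the null check.

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class StatsExample {
  public static void main(String[] args) throws Exception {
    FileContext fc = FileContext.getFileContext(new Configuration());
    fc.getFileStatus(new Path("/"));  // generate a little read activity

    // Statistics are keyed by the scheme and authority of the file system URI.
    FileSystem.Statistics stats =
        FileContext.getStatistics(URI.create("hdfs://namenode:8020"));
    System.out.println("bytes read: " + (stats == null ? 0 : stats.getBytesRead()));

    FileContext.printStatistics();  // dump statistics for all file systems to stdout
    FileContext.clearStatistics();  // reset counters for all file systems
  }
}
```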
public void modifyAclEntries(Path path, List<AclEntry> aclSpec) throws IOException
path
- Path to modifyaclSpec
- List<AclEntry> describing
modificationsIOException
- if an ACL could not be modifiedpublic void removeAclEntries(Path path, List<AclEntry> aclSpec) throws IOException
path
- Path to modifyaclSpec
- List<AclEntry> describing entries
to removeIOException
- if an ACL could not be modifiedpublic void removeDefaultAcl(Path path) throws IOException
path
- Path to modifyIOException
- if an ACL could not be modifiedpublic void removeAcl(Path path) throws IOException
path
- Path to modifyIOException
- if an ACL could not be removedpublic void setAcl(Path path, List<AclEntry> aclSpec) throws IOException
path
- Path to modifyaclSpec
- List<AclEntry> describing
modifications, must include entries for user, group, and others for
compatibility with permission bits.IOException
- if an ACL could not be modifiedpublic AclStatus getAclStatus(Path path) throws IOException
path
- Path to getIOException
- if an ACL could not be readpublic void setXAttr(Path path, String name, byte[] value) throws IOException
Refer to the HDFS extended attributes user documentation for details.
path
- Path to modifyname
- xattr name.value
- xattr value.IOException
public void setXAttr(Path path, String name, byte[] value, EnumSet<XAttrSetFlag> flag) throws IOException
Refer to the HDFS extended attributes user documentation for details.
path
- Path to modifyname
- xattr name.value
- xattr value.flag
- xattr set flagIOException
public byte[] getXAttr(Path path, String name) throws IOException
Refer to the HDFS extended attributes user documentation for details.
path
- Path to get extended attributename
- xattr name.IOException
public Map<String,byte[]> getXAttrs(Path path) throws IOException
Refer to the HDFS extended attributes user documentation for details.
path
- Path to get extended attributesIOException
public Map<String,byte[]> getXAttrs(Path path, List<String> names) throws IOException
Refer to the HDFS extended attributes user documentation for details.
path
- Path to get extended attributesnames
- XAttr names.IOException
public void removeXAttr(Path path, String name) throws IOException
Refer to the HDFS extended attributes user documentation for details.
path
- Path to remove extended attributename
- xattr nameIOException
public List<String> listXAttrs(Path path) throws IOException
Refer to the HDFS extended attributes user documentation for details.
Parameters: path - Path to get extended attributes
Throws: IOException
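A combined sketch for the ACL methods (setAcl, getAclStatus) and the extended attribute methods (setXAttr, getXAttr, listXAttrs) documented above; the /data/secure path, the "alice" user, and the "user.origin" attribute are hypothetical, and mask handling is left to the file system.

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.AclEntry;
import org.apache.hadoop.fs.permission.AclEntryType;
import org.apache.hadoop.fs.permission.FsAction;

public class AclAndXAttrs {
  public static void main(String[] args) throws Exception {
    FileContext fc = FileContext.getFileContext(new Configuration());
    Path p = new Path("/data/secure");

    // setAcl requires entries for user, group and other to stay compatible
    // with the permission bits; a named-user entry grants extra access.
    List<AclEntry> acl = Arrays.asList(
        new AclEntry.Builder().setType(AclEntryType.USER).setPermission(FsAction.ALL).build(),
        new AclEntry.Builder().setType(AclEntryType.GROUP).setPermission(FsAction.READ_EXECUTE).build(),
        new AclEntry.Builder().setType(AclEntryType.OTHER).setPermission(FsAction.NONE).build(),
        new AclEntry.Builder().setType(AclEntryType.USER).setName("alice")
            .setPermission(FsAction.READ_WRITE).build());
    fc.setAcl(p, acl);
    System.out.println(fc.getAclStatus(p));

    // Extended attributes use namespaced names such as "user.origin".
    fc.setXAttr(p, "user.origin", "ingest-job".getBytes(StandardCharsets.UTF_8));
    byte[] value = fc.getXAttr(p, "user.origin");
    System.out.println(new String(value, StandardCharsets.UTF_8) + " " + fc.listXAttrs(p));
  }
}
```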
public final Path createSnapshot(Path path) throws IOException
path
- The directory where snapshots will be taken.IOException
- If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException
- If an exception occurred in the RPC clientorg.apache.hadoop.ipc.RpcServerException
- If an exception occurred in the RPC serverorg.apache.hadoop.ipc.UnexpectedServerException
- If server implementation throws
undeclared exception to RPC serverpublic Path createSnapshot(Path path, String snapshotName) throws IOException
path
- The directory where snapshots will be taken.snapshotName
- The name of the snapshotIOException
- If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException
- If an exception occurred in the RPC clientorg.apache.hadoop.ipc.RpcServerException
- If an exception occurred in the RPC serverorg.apache.hadoop.ipc.UnexpectedServerException
- If server implementation throws
undeclared exception to RPC serverpublic void renameSnapshot(Path path, String snapshotOldName, String snapshotNewName) throws IOException
path
- The directory path where the snapshot was takensnapshotOldName
- Old name of the snapshotsnapshotNewName
- New name of the snapshotIOException
- If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException
- If an exception occurred in the RPC clientorg.apache.hadoop.ipc.RpcServerException
- If an exception occurred in the RPC serverorg.apache.hadoop.ipc.UnexpectedServerException
- If server implementation throws
undeclared exception to RPC serverpublic void deleteSnapshot(Path path, String snapshotName) throws IOException
path
- The directory that the to-be-deleted snapshot belongs tosnapshotName
- The name of the snapshotIOException
- If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
org.apache.hadoop.ipc.UnexpectedServerException - If server implementation throws undeclared exception to RPC server
public void satisfyStoragePolicy(Path path) throws IOException
Set the source path to satisfy storage policy.
Parameters: path - The source path referring to either a directory or a file.
Throws: IOException
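A sketch of the snapshot lifecycle methods documented above (createSnapshot, renameSnapshot, deleteSnapshot); the /data/warehouse directory is hypothetical and, on HDFS, must first be made snapshottable by an administrator (hdfs dfsadmin -allowSnapshot).

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Path;

public class SnapshotLifecycle {
  public static void main(String[] args) throws Exception {
    FileContext fc = FileContext.getFileContext(new Configuration());
    Path dir = new Path("/data/warehouse");

    // Take a snapshot with an explicit name; the returned path points at the snapshot.
    Path snap = fc.createSnapshot(dir, "before-load");
    System.out.println("snapshot at " + snap);

    fc.renameSnapshot(dir, "before-load", "load-2021-06");
    fc.deleteSnapshot(dir, "load-2021-06");
  }
}
```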
public void setStoragePolicy(Path path, String policyName) throws IOException
path
- file or directory path.policyName
- the name of the target storage policy. The list
of supported Storage policies can be retrieved
via getAllStoragePolicies()
.IOException
public void unsetStoragePolicy(Path src) throws IOException
src
- file or directory path.IOException
public BlockStoragePolicySpi getStoragePolicy(Path path) throws IOException
path
- file or directory path.IOException
public Collection<? extends BlockStoragePolicySpi> getAllStoragePolicies() throws IOException
IOException
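A sketch of the storage policy methods documented above (getAllStoragePolicies, setStoragePolicy, getStoragePolicy, unsetStoragePolicy); the /data/cold path is hypothetical and "COLD" is used as an example of an HDFS policy name.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockStoragePolicySpi;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Path;

public class StoragePolicies {
  public static void main(String[] args) throws Exception {
    FileContext fc = FileContext.getFileContext(new Configuration());
    Path p = new Path("/data/cold");

    // Discover what the underlying file system supports.
    for (BlockStoragePolicySpi policy : fc.getAllStoragePolicies()) {
      System.out.println(policy.getName());
    }

    fc.setStoragePolicy(p, "COLD");        // name must be one of the supported policies
    BlockStoragePolicySpi effective = fc.getStoragePolicy(p);
    System.out.println("effective: " + effective.getName());

    fc.unsetStoragePolicy(p);              // fall back to the inherited/default policy
  }
}
```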
@InterfaceStability.Unstable public FutureDataInputStreamBuilder openFile(Path path) throws IOException, UnsupportedOperationException
This builder ultimately invokes open(Path, int) unless a subclass executes the open command differently. The semantics of this call are therefore the same as that of open(Path, int), with one special point: it is in FSDataInputStreamBuilder.build() that the open operation takes place; it is there that all preconditions to the operation are checked.
Parameters: path - file path
Throws:
IOException - if some early checks cause IO failures.
UnsupportedOperationException - if support is checked early.
public boolean hasPathCapability(Path path, String capability) throws IOException
Return the path capabilities of the bonded AbstractFileSystem.
Specified by: hasPathCapability in interface org.apache.hadoop.fs.PathCapabilities
Parameters:
path - path to query the capability of.
capability - string to query the stream support for.
Throws:
IOException - path resolution or other IO failure
IllegalArgumentException - invalid arguments
public FsServerDefaults getServerDefaults(Path path) throws IOException
Return a set of server default configuration values based on path.
Parameters: path - path to fetch server defaults
Throws: IOException - an I/O error occurred
@InterfaceStability.Unstable public org.apache.hadoop.fs.MultipartUploaderBuilder createMultipartUploader(Path basePath) throws IOException
Create a multipart uploader.
Parameters: basePath - file path under which all files are uploaded
Throws:
IOException - if some early checks cause IO failures.
UnsupportedOperationException - if support is checked early.
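To close, a sketch of the builder-based read path: hasPathCapability is used as an optional probe and openFile defers its checks to build(). The /data/input path is hypothetical, and the capability key shown is the ACL capability string defined by CommonPathCapabilities; which capabilities and open options a given store honours varies, so treat this as illustrative.

```java
import java.util.concurrent.CompletableFuture;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Path;

public class BuilderOpen {
  public static void main(String[] args) throws Exception {
    FileContext fc = FileContext.getFileContext(new Configuration());
    Path p = new Path("/data/input");

    // Optional capability probe before relying on a feature.
    if (fc.hasPathCapability(p, "fs.capability.paths.acls")) {
      System.out.println("ACLs are supported on " + p);
    }

    // openFile(): preconditions are only checked when build() runs; the result
    // is a future that completes once the stream is ready.
    CompletableFuture<FSDataInputStream> future = fc.openFile(p).build();
    try (FSDataInputStream in = future.get()) {
      System.out.println("first byte: " + in.read());
    }
  }
}
```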