@InterfaceAudience.Public @InterfaceStability.Evolving public class FileContext extends Object
*** Path Names ***
The Hadoop file system supports a URI namespace and URI names. It offers a forest of file systems that can be referenced using fully qualified URIs. Two common Hadoop file system implementations are the local file system (file:///path) and HDFS (hdfs://nnAddress:nnPort/path).
To facilitate this, Hadoop supports a notion of a default file system. The user can set their default file system, although this is typically set up for you in your environment via your default config. A default file system implies a default scheme and authority; slash-relative names (such as /foo/bar) are resolved relative to that default FS. Similarly a user can also use working-directory-relative names (i.e. names not starting with a slash). While the working directory is generally in the same default FS, it can be in a different FS.
Hence Hadoop path names can be one of the following:
- fully qualified URIs (e.g. scheme://authority/path)
- slash-relative names (e.g. /foo/bar), resolved relative to the default file system
- working-directory-relative names (e.g. foo/bar)
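For illustration, the way the three kinds of names resolve can be sketched with plain java.net.URI resolution. This is not the Hadoop resolver itself; the host, port, and paths below are made up:

```java
import java.net.URI;

// Sketch: how the three kinds of Hadoop path names resolve against a
// default file system and a working directory (illustrative values).
public class PathNameKinds {
    public static void main(String[] args) {
        URI defaultFs = URI.create("hdfs://nnAddress:9000/");
        URI workingDir = URI.create("hdfs://nnAddress:9000/user/alice/");

        // 1. Fully qualified URI: used as-is.
        URI full = URI.create("hdfs://nnAddress:9000/foo/bar");

        // 2. Slash-relative name: resolved against the default FS.
        URI slashRel = defaultFs.resolve("/foo/bar");

        // 3. Working-directory-relative name: resolved against the wd.
        URI wdRel = workingDir.resolve("foo/bar");

        System.out.println(full);     // hdfs://nnAddress:9000/foo/bar
        System.out.println(slashRel); // hdfs://nnAddress:9000/foo/bar
        System.out.println(wdRel);    // hdfs://nnAddress:9000/user/alice/foo/bar
    }
}
```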
*** The Role of the FileContext and configuration defaults ***
The FileContext provides the file namespace context for resolving file names; it also contains the umask for permissions. In that sense it is like the per-process file-related state in Unix systems. These two properties are obtained from the default configuration in your environment (see Configuration).
No other configuration parameters are obtained from the default config as
far as the file context layer is concerned. All file system instances
(i.e. deployments of file systems) have default properties; we call these
server-side (SS) defaults. Operations like create allow one to select many
properties: either pass them in as explicit parameters or use
the SS defaults.
The file-system-related SS defaults are:
*** Usage Model for the FileContext class ***
Example 1: use the default config read from the $HADOOP_CONFIG/core.xml. Unspecified values come from core-defaults.xml in the release jar.
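The code for Example 1 did not survive extraction. A minimal sketch of what such usage looks like, assuming a standard Hadoop classpath (the path, flags, and options below are illustrative):

```java
import java.util.EnumSet;
import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Options;
import org.apache.hadoop.fs.Path;

public class Example1 {
    public static void main(String[] args) throws Exception {
        // FileContext built from the default config
        // ($HADOOP_CONFIG/core.xml plus core-defaults.xml).
        FileContext fc = FileContext.getFileContext();

        // Create a file; server-side defaults supply block size,
        // replication, etc. unless overridden via Options.CreateOpts.
        try (FSDataOutputStream out =
                 fc.create(new Path("/tmp/example.txt"),
                           EnumSet.of(CreateFlag.CREATE, CreateFlag.OVERWRITE),
                           Options.CreateOpts.createParent())) {
            out.writeUTF("hello");
        }
    }
}
```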
Modifier and Type | Field and Description
---|---
static FsPermission | DEFAULT_PERM - Default permission for directory and symlink. In previous versions, this default permission was also used to create files, so files created end up with ugo+x permission.
static FsPermission | DIR_DEFAULT_PERM - Default permission for directory
static FsPermission | FILE_DEFAULT_PERM - Default permission for file
static org.apache.commons.logging.Log | LOG
static int | SHUTDOWN_HOOK_PRIORITY - Priority of the FileContext shutdown hook
Modifier and Type | Method and Description
---|---
static void | clearStatistics() - Clears all the statistics stored in AbstractFileSystem, for all the file systems.
FSDataOutputStream | create(Path f, EnumSet<CreateFlag> createFlag, org.apache.hadoop.fs.Options.CreateOpts... opts) - Create or overwrite a file at the indicated path and return an output stream for writing into the file.
void | createSymlink(Path target, Path link, boolean createParent) - Creates a symbolic link to an existing file.
boolean | delete(Path f, boolean recursive) - Delete a file.
boolean | deleteOnExit(Path f) - Mark a path to be deleted on JVM shutdown.
AclStatus | getAclStatus(Path path) - Gets the ACLs of files and directories.
static Map<URI,org.apache.hadoop.fs.FileSystem.Statistics> | getAllStatistics()
FileChecksum | getFileChecksum(Path f) - Get the checksum of a file.
static FileContext | getFileContext() - Create a FileContext using the default config read from $HADOOP_CONFIG/core.xml; unspecified key-values are defaulted from core-defaults.xml in the release jar.
protected static FileContext | getFileContext(AbstractFileSystem defaultFS) - Create a FileContext for the specified file system using the default config.
static FileContext | getFileContext(AbstractFileSystem defFS, Configuration aConf) - Create a FileContext with the specified FS as the default, using the specified config.
static FileContext | getFileContext(Configuration aConf) - Create a FileContext using the passed config.
static FileContext | getFileContext(URI defaultFsUri) - Create a FileContext for the specified URI using the default config.
static FileContext | getFileContext(URI defaultFsUri, Configuration aConf) - Create a FileContext for the specified default URI using the specified config.
FileStatus | getFileLinkStatus(Path f) - Return a file status object that represents the path; if the path refers to a symlink, the status of the symlink is returned.
FileStatus | getFileStatus(Path f) - Return a file status object that represents the path.
protected AbstractFileSystem | getFSofPath(Path absOrFqPath) - Get the file system of the supplied path.
FsStatus | getFsStatus(Path f) - Returns a status object describing the use and capacity of the file system denoted by the Path argument f.
Path | getHomeDirectory() - Return the current user's home directory in this file system.
Path | getLinkTarget(Path f) - Returns the target of the given symbolic link as it was specified when the link was created.
static FileContext | getLocalFSFileContext()
static FileContext | getLocalFSFileContext(Configuration aConf)
static org.apache.hadoop.fs.FileSystem.Statistics | getStatistics(URI uri) - Get the statistics for a particular file system.
org.apache.hadoop.security.UserGroupInformation | getUgi() - Gets the ugi in the file-context.
FsPermission | getUMask()
Path | getWorkingDirectory() - Gets the working directory for wd-relative names (such as "foo/bar").
byte[] | getXAttr(Path path, String name) - Get an xattr for a file or directory.
Map<String,byte[]> | getXAttrs(Path path) - Get all of the xattrs for a file or directory.
Map<String,byte[]> | getXAttrs(Path path, List<String> names) - Get all of the xattrs for a file or directory.
org.apache.hadoop.fs.RemoteIterator<Path> | listCorruptFileBlocks(Path path)
org.apache.hadoop.fs.RemoteIterator<LocatedFileStatus> | listLocatedStatus(Path f) - List the statuses of the files/directories in the given path if the path is a directory.
org.apache.hadoop.fs.RemoteIterator<FileStatus> | listStatus(Path f) - List the statuses of the files/directories in the given path if the path is a directory.
List<String> | listXAttrs(Path path) - Get all of the xattr names for a file or directory.
Path | makeQualified(Path path) - Make the path fully qualified if it isn't.
void | mkdir(Path dir, FsPermission permission, boolean createParent) - Make (create) a directory and all the non-existent parents.
void | modifyAclEntries(Path path, List<AclEntry> aclSpec) - Modifies ACL entries of files and directories.
FSDataInputStream | open(Path f) - Opens an FSDataInputStream at the indicated Path using the default buffer size.
FSDataInputStream | open(Path f, int bufferSize) - Opens an FSDataInputStream at the indicated Path.
static void | printStatistics() - Prints the statistics to standard output.
void | removeAcl(Path path) - Removes all but the base ACL entries of files and directories.
void | removeAclEntries(Path path, List<AclEntry> aclSpec) - Removes ACL entries from files and directories.
void | removeDefaultAcl(Path path) - Removes all default ACL entries from files and directories.
void | removeXAttr(Path path, String name) - Remove an xattr of a file or directory.
void | rename(Path src, Path dst, org.apache.hadoop.fs.Options.Rename... options) - Renames Path src to Path dst.
protected Path | resolve(Path f) - Resolves all symbolic links in the specified path.
protected Path | resolveIntermediate(Path f) - Resolves all symbolic links in the specified path leading up to, but not including, the final path component.
Path | resolvePath(Path f) - Resolve the path, following any symlinks or mount points.
void | setAcl(Path path, List<AclEntry> aclSpec) - Fully replaces the ACL of files and directories, discarding all existing entries.
void | setOwner(Path f, String username, String groupname) - Set the owner of a path (i.e. a file or a directory).
void | setPermission(Path f, FsPermission permission) - Set the permission of a path.
boolean | setReplication(Path f, short replication) - Set the replication for an existing file.
void | setTimes(Path f, long mtime, long atime) - Set the modification and access times of a file.
void | setUMask(FsPermission newUmask) - Set the umask to the supplied parameter.
void | setVerifyChecksum(boolean verifyChecksum, Path f) - Set the verify-checksum flag for the file system denoted by the path.
void | setWorkingDirectory(Path newWDir) - Set the working directory for wd-relative names (such as "foo/bar").
void | setXAttr(Path path, String name, byte[] value) - Set an xattr of a file or directory.
void | setXAttr(Path path, String name, byte[] value, EnumSet<XAttrSetFlag> flag) - Set an xattr of a file or directory.
boolean | truncate(Path f, long newLength) - Truncate the file at the indicated path to the indicated size.
org.apache.hadoop.fs.FileContext.Util | util()
public static final org.apache.commons.logging.Log LOG
public static final FsPermission DEFAULT_PERM
Default permission for directory and symlink. In previous versions, this default permission was also used to create files, so files created end up with ugo+x permission. Use DIR_DEFAULT_PERM for directory, and use FILE_DEFAULT_PERM for file. This constant is kept for compatibility.
public static final FsPermission DIR_DEFAULT_PERM
Default permission for directory.
public static final FsPermission FILE_DEFAULT_PERM
Default permission for file.
public static final int SHUTDOWN_HOOK_PRIORITY
Priority of the FileContext shutdown hook.
protected AbstractFileSystem getFSofPath(Path absOrFqPath) throws UnsupportedFileSystemException, IOException
Get the file system of the supplied path.
- Parameters:
absOrFqPath - absolute or fully qualified path
- Throws:
UnsupportedFileSystemException - If the file system for absOrFqPath is not supported.
IOException - If the file system for absOrFqPath could not be instantiated.

public static FileContext getFileContext(AbstractFileSystem defFS, Configuration aConf)
Create a FileContext with the specified FS as the default, using the specified config.
- Parameters:
defFS - the default AbstractFileSystem
aConf - the configuration to use

protected static FileContext getFileContext(AbstractFileSystem defaultFS)
Create a FileContext for the specified file system using the default config.
- Parameters:
defaultFS - the default AbstractFileSystem

public static FileContext getFileContext() throws UnsupportedFileSystemException
Create a FileContext using the default config read from $HADOOP_CONFIG/core.xml. Unspecified key-values are defaulted from core-defaults.xml in the release jar.
- Throws:
UnsupportedFileSystemException - If the file system from the default configuration is not supported

public static FileContext getLocalFSFileContext() throws UnsupportedFileSystemException
- Throws:
UnsupportedFileSystemException - If the file system for FsConstants.LOCAL_FS_URI is not supported.

public static FileContext getFileContext(URI defaultFsUri) throws UnsupportedFileSystemException
Create a FileContext for the specified URI using the default config.
- Parameters:
defaultFsUri - URI of the default file system
- Throws:
UnsupportedFileSystemException - If the file system for defaultFsUri is not supported

public static FileContext getFileContext(URI defaultFsUri, Configuration aConf) throws UnsupportedFileSystemException
Create a FileContext for the specified default URI using the specified config.
- Parameters:
defaultFsUri - URI of the default file system
aConf - the configuration to use
- Throws:
UnsupportedFileSystemException - If the file system specified is not supported
RuntimeException - If the file system specified is supported but could not be instantiated, or if login fails.

public static FileContext getFileContext(Configuration aConf) throws UnsupportedFileSystemException
Create a FileContext using the passed config. Generally it is better to use getFileContext(URI, Configuration) instead of this one.
- Parameters:
aConf - the configuration to use
- Throws:
UnsupportedFileSystemException - If the file system in the config is not supported

public static FileContext getLocalFSFileContext(Configuration aConf) throws UnsupportedFileSystemException
- Parameters:
aConf - the configuration from which the FileContext is configured
- Throws:
UnsupportedFileSystemException - If the default file system in the config is not supported

public void setWorkingDirectory(Path newWDir) throws IOException
Set the working directory for wd-relative names (such as "foo/bar"). getWorkingDirectory() should return what setWorkingDirectory() set.
- Parameters:
newWDir - the new working directory
- Throws:
IOException - If an I/O error occurred

public Path getWorkingDirectory()
Gets the working directory for wd-relative names (such as "foo/bar").

public org.apache.hadoop.security.UserGroupInformation getUgi()
Gets the ugi in the file-context.

public Path getHomeDirectory()
Return the current user's home directory in this file system.

public FsPermission getUMask()

public void setUMask(FsPermission newUmask)
Set the umask to the supplied parameter.
- Parameters:
newUmask - the new umask

public Path resolvePath(Path f) throws FileNotFoundException, org.apache.hadoop.fs.UnresolvedLinkException, org.apache.hadoop.security.AccessControlException, IOException
Resolve the path, following any symlinks or mount points.
- Parameters:
f - the path to be resolved
- Throws:
FileNotFoundException - If f does not exist
org.apache.hadoop.security.AccessControlException - If access is denied
IOException - If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
org.apache.hadoop.ipc.UnexpectedServerException - If the server implementation throws an undeclared exception to the RPC server
RuntimeExceptions:
InvalidPathException - If path f is not valid
org.apache.hadoop.fs.UnresolvedLinkException

public Path makeQualified(Path path)
Make the path fully qualified if it isn't.
- Parameters:
path - the path to qualify

public FSDataOutputStream create(Path f, EnumSet<CreateFlag> createFlag, org.apache.hadoop.fs.Options.CreateOpts... opts) throws org.apache.hadoop.security.AccessControlException, FileAlreadyExistsException, FileNotFoundException, ParentNotDirectoryException, UnsupportedFileSystemException, IOException
Create or overwrite a file at the indicated path and return an output stream for writing into the file.
- Parameters:
f - the file name to open
createFlag - gives the semantics of create; see CreateFlag
opts - file creation options; see Options.CreateOpts
- Returns:
FSDataOutputStream for the created file
- Throws:
org.apache.hadoop.security.AccessControlException - If access is denied
FileAlreadyExistsException - If file f already exists
FileNotFoundException - If parent of f does not exist and createParent is false
ParentNotDirectoryException - If parent of f is not a directory.
UnsupportedFileSystemException - If file system for f is not supported
IOException - If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
org.apache.hadoop.ipc.UnexpectedServerException - If the server implementation throws an undeclared exception to the RPC server
RuntimeExceptions:
InvalidPathException - If path f is not valid

public void mkdir(Path dir, FsPermission permission, boolean createParent) throws org.apache.hadoop.security.AccessControlException, FileAlreadyExistsException, FileNotFoundException, ParentNotDirectoryException, UnsupportedFileSystemException, IOException
Make (create) a directory and all the non-existent parents.
- Parameters:
dir - the dir to make
permission - the permission that is set, i.e. permission & ~umask
createParent - if true then missing parent dirs are created; if false then the parent must exist
- Throws:
org.apache.hadoop.security.AccessControlException - If access is denied
FileAlreadyExistsException - If directory dir already exists
FileNotFoundException - If parent of dir does not exist and createParent is false
ParentNotDirectoryException - If parent of dir is not a directory
UnsupportedFileSystemException - If file system for dir is not supported
IOException - If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
org.apache.hadoop.ipc.UnexpectedServerException - If the server implementation throws an undeclared exception to the RPC server
RuntimeExceptions:
InvalidPathException - If path dir is not valid

public boolean delete(Path f, boolean recursive) throws org.apache.hadoop.security.AccessControlException, FileNotFoundException, UnsupportedFileSystemException, IOException
Delete a file.
- Parameters:
f - the path to delete
recursive - if the path is a directory and recursive is true, the directory is deleted; otherwise an exception is thrown. In the case of a file, recursive can be set to either true or false.
- Throws:
org.apache.hadoop.security.AccessControlException - If access is denied
FileNotFoundException - If f does not exist
UnsupportedFileSystemException - If file system for f is not supported
IOException - If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
org.apache.hadoop.ipc.UnexpectedServerException - If the server implementation throws an undeclared exception to the RPC server
RuntimeExceptions:
InvalidPathException - If path f is invalid

public FSDataInputStream open(Path f) throws org.apache.hadoop.security.AccessControlException, FileNotFoundException, UnsupportedFileSystemException, IOException
Opens an FSDataInputStream at the indicated Path using the default buffer size.
- Parameters:
f - the file name to open
- Throws:
org.apache.hadoop.security.AccessControlException - If access is denied
FileNotFoundException - If file f does not exist
UnsupportedFileSystemException - If file system for f is not supported
IOException - If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
org.apache.hadoop.ipc.UnexpectedServerException - If the server implementation throws an undeclared exception to the RPC server

public FSDataInputStream open(Path f, int bufferSize) throws org.apache.hadoop.security.AccessControlException, FileNotFoundException, UnsupportedFileSystemException, IOException
Opens an FSDataInputStream at the indicated Path.
- Parameters:
f - the file name to open
bufferSize - the size of the buffer to be used
- Throws:
org.apache.hadoop.security.AccessControlException - If access is denied
FileNotFoundException - If file f does not exist
UnsupportedFileSystemException - If file system for f is not supported
IOException - If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
org.apache.hadoop.ipc.UnexpectedServerException - If the server implementation throws an undeclared exception to the RPC server

public boolean truncate(Path f, long newLength) throws org.apache.hadoop.security.AccessControlException, FileNotFoundException, UnsupportedFileSystemException, IOException
Truncate the file at the indicated path to the indicated size.
- Parameters:
f - the path to the file to be truncated
newLength - the size the file is to be truncated to
- Returns:
true if the file has been truncated to the desired newLength and is immediately available to be reused for write operations such as append, or false if a background process of adjusting the length of the last block has been started, and clients should wait for it to complete before proceeding with further file updates.
- Throws:
org.apache.hadoop.security.AccessControlException - If access is denied
FileNotFoundException - If file f does not exist
UnsupportedFileSystemException - If file system for f is not supported
IOException - If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
org.apache.hadoop.ipc.UnexpectedServerException - If the server implementation throws an undeclared exception to the RPC server

public boolean setReplication(Path f, short replication) throws org.apache.hadoop.security.AccessControlException, FileNotFoundException, IOException
Set the replication for an existing file.
- Parameters:
f - the file name
replication - the new replication factor
- Throws:
org.apache.hadoop.security.AccessControlException - If access is denied
FileNotFoundException - If file f does not exist
IOException - If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
org.apache.hadoop.ipc.UnexpectedServerException - If the server implementation throws an undeclared exception to the RPC server

public void rename(Path src, Path dst, org.apache.hadoop.fs.Options.Rename... options) throws org.apache.hadoop.security.AccessControlException, FileAlreadyExistsException, FileNotFoundException, ParentNotDirectoryException, UnsupportedFileSystemException, IOException
Renames Path src to Path dst.
If the OVERWRITE option is not passed as an argument, rename fails if dst already exists.
If the OVERWRITE option is passed as an argument, rename overwrites dst if it is a file or an empty directory. Rename fails if dst is a non-empty directory.
Note that the atomicity of rename depends on the file system implementation. Please refer to the file system documentation for details.
- Parameters:
src - the path to be renamed
dst - the new path after the rename
- Throws:
org.apache.hadoop.security.AccessControlException - If access is denied
FileAlreadyExistsException - If dst already exists and the Options.Rename.OVERWRITE option is not given
FileNotFoundException - If src does not exist
ParentNotDirectoryException - If parent of dst is not a directory
UnsupportedFileSystemException - If file system for src and dst is not supported
IOException - If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException - If an exception occurred in the RPC client
org.apache.hadoop.ipc.RpcServerException - If an exception occurred in the RPC server
org.apache.hadoop.ipc.UnexpectedServerException - If the server implementation throws an undeclared exception to the RPC server
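The OVERWRITE behaviour of rename can be sketched as follows (the paths are illustrative and a standard Hadoop classpath is assumed):

```java
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Options;
import org.apache.hadoop.fs.Path;

public class RenameSketch {
    public static void main(String[] args) throws Exception {
        FileContext fc = FileContext.getFileContext();
        Path src = new Path("/tmp/a.txt");
        Path dst = new Path("/tmp/b.txt");

        // Without options this fails with FileAlreadyExistsException
        // if dst already exists:
        //   fc.rename(src, dst);

        // With OVERWRITE, dst is replaced if it is a file or an empty
        // directory; the rename still fails if dst is a non-empty directory.
        fc.rename(src, dst, Options.Rename.OVERWRITE);
    }
}
```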
-
setPermission
public void setPermission(Path f,
FsPermission permission)
throws org.apache.hadoop.security.AccessControlException,
FileNotFoundException,
UnsupportedFileSystemException,
IOException
Set permission of a path.
- Parameters:
f
- the path whose permission is set
permission
- the new absolute permission (umask is not applied)
- Throws:
org.apache.hadoop.security.AccessControlException
- If access is denied
FileNotFoundException
- If f
does not exist
UnsupportedFileSystemException
- If file system for f
is not supported
IOException
- If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException
- If an exception occurred in the RPC client
org.apache.hadoop.ipc.RpcServerException
- If an exception occurred in the RPC server
org.apache.hadoop.ipc.UnexpectedServerException
- If server implementation throws
undeclared exception to RPC server
-
setOwner
public void setOwner(Path f,
String username,
String groupname)
throws org.apache.hadoop.security.AccessControlException,
UnsupportedFileSystemException,
FileNotFoundException,
IOException
Set owner of a path (i.e. a file or a directory). The parameters username
and groupname cannot both be null.
- Parameters:
f
- The path
username
- If it is null, the original username remains unchanged.
groupname
- If it is null, the original groupname remains unchanged.
- Throws:
org.apache.hadoop.security.AccessControlException
- If access is denied
FileNotFoundException
- If f
does not exist
UnsupportedFileSystemException
- If file system for f
is
not supported
IOException
- If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException
- If an exception occurred in the RPC client
org.apache.hadoop.ipc.RpcServerException
- If an exception occurred in the RPC server
org.apache.hadoop.ipc.UnexpectedServerException
- If server implementation throws
undeclared exception to RPC server
RuntimeExceptions:
HadoopIllegalArgumentException
- If username
or
groupname
is invalid.
-
setTimes
public void setTimes(Path f,
long mtime,
long atime)
throws org.apache.hadoop.security.AccessControlException,
FileNotFoundException,
UnsupportedFileSystemException,
IOException
Set the modification and access times of a file.
- Parameters:
f
- The path
mtime
- Set the modification time of this file.
The number of milliseconds since epoch (Jan 1, 1970).
A value of -1 means that this call should not set modification time.
atime
- Set the access time of this file.
The number of milliseconds since epoch (Jan 1, 1970).
A value of -1 means that this call should not set access time.
- Throws:
org.apache.hadoop.security.AccessControlException
- If access is denied
FileNotFoundException
- If f
does not exist
UnsupportedFileSystemException
- If file system for f
is
not supported
IOException
- If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException
- If an exception occurred in the RPC client
org.apache.hadoop.ipc.RpcServerException
- If an exception occurred in the RPC server
org.apache.hadoop.ipc.UnexpectedServerException
- If server implementation throws
undeclared exception to RPC server
-
getFileChecksum
public FileChecksum getFileChecksum(Path f)
throws org.apache.hadoop.security.AccessControlException,
FileNotFoundException,
IOException
Get the checksum of a file.
- Parameters:
f
- file path
- Returns:
- The file checksum. The default return value is null,
which indicates that no checksum algorithm is implemented
in the corresponding FileSystem.
- Throws:
org.apache.hadoop.security.AccessControlException
- If access is denied
FileNotFoundException
- If f
does not exist
IOException
- If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException
- If an exception occurred in the RPC client
org.apache.hadoop.ipc.RpcServerException
- If an exception occurred in the RPC server
org.apache.hadoop.ipc.UnexpectedServerException
- If server implementation throws
undeclared exception to RPC server
-
setVerifyChecksum
public void setVerifyChecksum(boolean verifyChecksum,
Path f)
throws org.apache.hadoop.security.AccessControlException,
FileNotFoundException,
UnsupportedFileSystemException,
IOException
Set the verify checksum flag for the file system denoted by the path.
This is only applicable if the
corresponding FileSystem supports checksums. By default it doesn't do anything.
- Parameters:
verifyChecksum
- the new value of the verify-checksum flag
f
- set the verifyChecksum for the file system containing this path
- Throws:
org.apache.hadoop.security.AccessControlException
- If access is denied
FileNotFoundException
- If f
does not exist
UnsupportedFileSystemException
- If file system for f
is
not supported
IOException
- If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException
- If an exception occurred in the RPC client
org.apache.hadoop.ipc.RpcServerException
- If an exception occurred in the RPC server
org.apache.hadoop.ipc.UnexpectedServerException
- If server implementation throws
undeclared exception to RPC server
-
getFileStatus
public FileStatus getFileStatus(Path f)
throws org.apache.hadoop.security.AccessControlException,
FileNotFoundException,
UnsupportedFileSystemException,
IOException
Return a file status object that represents the path.
- Parameters:
f
- The path we want information from
- Returns:
- a FileStatus object
- Throws:
org.apache.hadoop.security.AccessControlException
- If access is denied
FileNotFoundException
- If f
does not exist
UnsupportedFileSystemException
- If file system for f
is
not supported
IOException
- If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException
- If an exception occurred in the RPC client
org.apache.hadoop.ipc.RpcServerException
- If an exception occurred in the RPC server
org.apache.hadoop.ipc.UnexpectedServerException
- If server implementation throws
undeclared exception to RPC server
-
getFileLinkStatus
public FileStatus getFileLinkStatus(Path f)
throws org.apache.hadoop.security.AccessControlException,
FileNotFoundException,
UnsupportedFileSystemException,
IOException
Return a file status object that represents the path. If the path
refers to a symlink then the FileStatus of the symlink is returned.
The behavior is equivalent to getFileStatus(Path) if the underlying
file system does not support symbolic links.
- Parameters:
f
- The path we want information from.
- Returns:
- A FileStatus object
- Throws:
org.apache.hadoop.security.AccessControlException
- If access is denied
FileNotFoundException
- If f
does not exist
UnsupportedFileSystemException
- If file system for f
is
not supported
IOException
- If an I/O error occurred
-
getLinkTarget
public Path getLinkTarget(Path f)
throws org.apache.hadoop.security.AccessControlException,
FileNotFoundException,
UnsupportedFileSystemException,
IOException
Returns the target of the given symbolic link as it was specified
when the link was created. Links in the path leading up to the
final path component are resolved transparently.
- Parameters:
f
- the path to return the target of
- Returns:
- The un-interpreted target of the symbolic link.
- Throws:
org.apache.hadoop.security.AccessControlException
- If access is denied
FileNotFoundException
- If path f
does not exist
UnsupportedFileSystemException
- If file system for f
is
not supported
IOException
- If the given path does not refer to a symlink
or an I/O error occurred
-
getFsStatus
public FsStatus getFsStatus(Path f)
throws org.apache.hadoop.security.AccessControlException,
FileNotFoundException,
UnsupportedFileSystemException,
IOException
Returns a status object describing the use and capacity of the
file system denoted by the Path argument f.
If the file system has multiple partitions, the
use and capacity of the partition pointed to by the specified
path is reflected.
- Parameters:
f
- Path for which status should be obtained. null means the
root partition of the default file system.
- Returns:
- a FsStatus object
- Throws:
org.apache.hadoop.security.AccessControlException
- If access is denied
FileNotFoundException
- If f
does not exist
UnsupportedFileSystemException
- If file system for f
is
not supported
IOException
- If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException
- If an exception occurred in the RPC client
org.apache.hadoop.ipc.RpcServerException
- If an exception occurred in the RPC server
org.apache.hadoop.ipc.UnexpectedServerException
- If server implementation throws
undeclared exception to RPC server
-
createSymlink
public void createSymlink(Path target,
Path link,
boolean createParent)
throws org.apache.hadoop.security.AccessControlException,
FileAlreadyExistsException,
FileNotFoundException,
ParentNotDirectoryException,
UnsupportedFileSystemException,
IOException
Creates a symbolic link to an existing file. An exception is thrown if
the symlink exists, the user does not have permission to create the symlink,
or the underlying file system does not support symlinks.
Symlink permissions are ignored; access to a symlink is determined by
the permissions of the symlink target.
Symlinks in paths leading up to the final path component are resolved
transparently. If the final path component refers to a symlink, some
functions operate on the symlink itself; these are:
- delete(f) and deleteOnExit(f) - Deletes the symlink.
- rename(src, dst) - If src refers to a symlink, the symlink is
renamed. If dst refers to a symlink, the symlink is over-written.
- getLinkTarget(f) - Returns the target of the symlink.
- getFileLinkStatus(f) - Returns a FileStatus object describing
the symlink.
Some functions, create() and mkdir(), expect that the final path component
does not exist. If they are given a path that refers to a symlink that
does exist, they behave as if the path referred to an existing file or
directory. All other functions fully resolve, i.e. follow, the symlink.
These are: open, setReplication, setOwner, setTimes, setWorkingDirectory,
setPermission, getFileChecksum, setVerifyChecksum, getFileBlockLocations,
getFsStatus, getFileStatus, exists, and listStatus.
Symlink targets are stored as given to createSymlink, assuming the
underlying file system is capable of storing a fully qualified URI.
Dangling symlinks are permitted. FileContext supports four types of
symlink targets, and resolves them as follows
Given a path referring to a symlink of form:
<---X--->
fs://host/A/B/link
<-----Y----->
In this path X is the scheme and authority that identify the file system,
and Y is the path leading up to the final path component "link". If Y is
a symlink itself then let Y' be the target of Y and X' be the scheme and
authority of Y'. Symlink targets may be:
1. Fully qualified URIs
fs://hostX/A/B/file Resolved according to the target file system.
2. Partially qualified URIs (eg scheme but no host)
fs:///A/B/file Resolved according to the target file system. Eg resolving
a symlink to hdfs:///A results in an exception because
HDFS URIs must be fully qualified, while a symlink to
file:///A will not since Hadoop's local file systems
require partially qualified URIs.
3. Relative paths
path Resolves to [Y'][path]. Eg if Y resolves to hdfs://host/A and path
is "../B/file" then [Y'][path] is hdfs://host/B/file
4. Absolute paths
path Resolves to [X'][path]. Eg if Y resolves to hdfs://host/A/B and path
is "/file" then [X'][path] is hdfs://host/file
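The relative and absolute cases above can be mimicked with plain java.net.URI resolution. This is a sketch, not the actual Hadoop resolver; the host names are made up, with Y' taken as hdfs://host/A and X' as hdfs://host, matching the examples above:

```java
import java.net.URI;

// Sketch: mimic symlink-target resolution with java.net.URI.
public class SymlinkTargets {
    public static void main(String[] args) {
        URI yPrime = URI.create("hdfs://host/A/");

        // 1. Fully qualified target: used as-is.
        System.out.println(yPrime.resolve("hdfs://hostX/A/B/file"));
        // hdfs://hostX/A/B/file

        // 3. Relative target "../B/file" resolves to [Y'][path].
        System.out.println(yPrime.resolve("../B/file"));
        // hdfs://host/B/file

        // 4. Absolute target "/file" resolves to [X'][path].
        System.out.println(yPrime.resolve("/file"));
        // hdfs://host/file
    }
}
```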
- Parameters:
target - the target of the symbolic link
link - the path to be created that points to target
createParent - if true then missing parent dirs are created; if false
then the parent must exist
- Throws:
org.apache.hadoop.security.AccessControlException
- If access is denied
FileAlreadyExistsException
- If file link already exists
FileNotFoundException
- If target
does not exist
ParentNotDirectoryException
- If parent of link
is not a
directory.
UnsupportedFileSystemException
- If file system for
target
or link
is not supported
IOException
- If an I/O error occurred
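Exercising createSymlink itself requires a running Hadoop file system, but the dangling-link and createParent semantics described above mirror the local file system. A runnable analogy using the JDK's own java.nio.file API (not FileContext):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class SymlinkDemo {
    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("symlink-demo");
        Path target = dir.resolve("target.txt"); // need not exist: dangling links are permitted
        Path link = dir.resolve("sub/link");     // parent "sub" does not exist yet

        Files.createDirectories(link.getParent()); // the analogue of createParent=true
        Files.createSymbolicLink(link, target);    // link -> target, target still dangling

        System.out.println(Files.isSymbolicLink(link)); // true
        System.out.println(Files.exists(link));         // false: the link dangles
        System.out.println(Files.readSymbolicLink(link)); // the target, stored as given
    }
}
```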
-
listStatus
public org.apache.hadoop.fs.RemoteIterator<FileStatus> listStatus(Path f)
throws org.apache.hadoop.security.AccessControlException,
FileNotFoundException,
UnsupportedFileSystemException,
IOException
List the statuses of the files/directories in the given path if the path is
a directory.
- Parameters:
f
- is the path
- Returns:
- an iterator that traverses statuses of the files/directories
in the given path
- Throws:
org.apache.hadoop.security.AccessControlException
- If access is denied
FileNotFoundException
- If f
does not exist
UnsupportedFileSystemException
- If file system for f
is
not supported
IOException
- If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException
- If an exception occurred in the RPC client
org.apache.hadoop.ipc.RpcServerException
- If an exception occurred in the RPC server
org.apache.hadoop.ipc.UnexpectedServerException
- If server implementation throws
undeclared exception to RPC server
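The returned iterator is consumed with the usual hasNext()/next() loop. A self-contained sketch of that idiom (the RemoteIterator interface below is a simplified stand-in for org.apache.hadoop.fs.RemoteIterator, redeclared here only so the example compiles without Hadoop on the classpath):

```java
import java.io.IOException;
import java.util.Arrays;
import java.util.Iterator;

public class ListStatusDemo {
    // Simplified stand-in for org.apache.hadoop.fs.RemoteIterator<E>;
    // unlike java.util.Iterator, its methods may throw IOException.
    interface RemoteIterator<E> {
        boolean hasNext() throws IOException;
        E next() throws IOException;
    }

    // Wrap a local iterator; the real iterator fetches entries lazily
    // from the file system, which is why each call can do I/O.
    static RemoteIterator<String> statuses(String... names) {
        Iterator<String> it = Arrays.asList(names).iterator();
        return new RemoteIterator<String>() {
            public boolean hasNext() { return it.hasNext(); }
            public String next() { return it.next(); }
        };
    }

    public static void main(String[] args) throws IOException {
        RemoteIterator<String> it = statuses("part-00000", "part-00001");
        while (it.hasNext()) {           // the idiom used with listStatus(Path)
            System.out.println(it.next());
        }
    }
}
```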
-
listCorruptFileBlocks
public org.apache.hadoop.fs.RemoteIterator<Path> listCorruptFileBlocks(Path path)
throws IOException
- Returns:
- an iterator over the corrupt files under the given path
(may contain duplicates if a file has more than one corrupt block)
- Throws:
IOException
-
listLocatedStatus
public org.apache.hadoop.fs.RemoteIterator<LocatedFileStatus> listLocatedStatus(Path f)
throws org.apache.hadoop.security.AccessControlException,
FileNotFoundException,
UnsupportedFileSystemException,
IOException
List the statuses of the files/directories in the given path if the path is
a directory.
If the path is a file, return the file's status and block locations.
If a returned status is a file, it contains the file's block locations.
- Parameters:
f
- is the path
- Returns:
- an iterator that traverses statuses of the files/directories
in the given path
If any I/O exception occurs (for example, the input directory is deleted while
the listing is being executed), next() or hasNext() of the returned iterator
may throw a RuntimeException with the I/O exception as the cause.
- Throws:
org.apache.hadoop.security.AccessControlException
- If access is denied
FileNotFoundException
- If f
does not exist
UnsupportedFileSystemException
- If file system for f
is
not supported
IOException
- If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException
- If an exception occurred in the RPC client
org.apache.hadoop.ipc.RpcServerException
- If an exception occurred in the RPC server
org.apache.hadoop.ipc.UnexpectedServerException
- If server implementation throws
undeclared exception to RPC server
-
deleteOnExit
public boolean deleteOnExit(Path f)
throws org.apache.hadoop.security.AccessControlException,
IOException
Mark a path to be deleted on JVM shutdown.
- Parameters:
f
- the existing path to delete.
- Returns:
- true if deleteOnExit is successful, otherwise false.
- Throws:
org.apache.hadoop.security.AccessControlException
- If access is denied
UnsupportedFileSystemException
- If file system for f
is
not supported
IOException
- If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException
- If an exception occurred in the RPC client
org.apache.hadoop.ipc.RpcServerException
- If an exception occurred in the RPC server
org.apache.hadoop.ipc.UnexpectedServerException
- If server implementation throws
undeclared exception to RPC server
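The JDK offers the same idiom for the local file system via java.io.File; FileContext.deleteOnExit plays the equivalent role for Hadoop paths. A runnable local analogy (not FileContext itself):

```java
import java.io.File;
import java.io.IOException;

public class DeleteOnExitDemo {
    public static void main(String[] args) throws IOException {
        File tmp = File.createTempFile("scratch", ".tmp");
        tmp.deleteOnExit();               // removed when the JVM shuts down cleanly
        System.out.println(tmp.exists()); // true while the JVM is still running
    }
}
```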
-
util
public org.apache.hadoop.fs.FileContext.Util util()
-
resolve
protected Path resolve(Path f)
throws FileNotFoundException,
org.apache.hadoop.fs.UnresolvedLinkException,
org.apache.hadoop.security.AccessControlException,
IOException
Resolves all symbolic links in the specified path.
Returns the new path object.
- Throws:
FileNotFoundException
org.apache.hadoop.fs.UnresolvedLinkException
org.apache.hadoop.security.AccessControlException
IOException
-
resolveIntermediate
protected Path resolveIntermediate(Path f)
throws IOException
Resolves all symbolic links in the specified path leading up
to, but not including the final path component.
- Parameters:
f
- path to resolve
- Returns:
- the new path object.
- Throws:
IOException
-
getStatistics
public static org.apache.hadoop.fs.FileSystem.Statistics getStatistics(URI uri)
Get the statistics for a particular file system.
- Parameters:
uri
- the URI to look up the statistics for. Only the scheme and authority parts
of the URI are used as the key to store and look up.
- Returns:
- a statistics object
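Because only the scheme and authority are used as the key, two URIs on the same file system map to the same statistics entry. A small illustration of that key derivation (the key format shown is illustrative, not the exact internal representation):

```java
import java.net.URI;

public class StatsKeyDemo {
    // Illustrative key derivation: only scheme + authority matter.
    static String statsKey(URI uri) {
        return uri.getScheme() + "://" + uri.getAuthority();
    }

    public static void main(String[] args) {
        URI a = URI.create("hdfs://nn:8020/user/alice/data");
        URI b = URI.create("hdfs://nn:8020/tmp/scratch");
        System.out.println(statsKey(a));                     // hdfs://nn:8020
        System.out.println(statsKey(a).equals(statsKey(b))); // true: same file system
    }
}
```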
-
clearStatistics
public static void clearStatistics()
Clears all the statistics stored in AbstractFileSystem, for all the file
systems.
-
printStatistics
public static void printStatistics()
Prints the statistics to standard output. Each file system is identified by
its scheme and authority.
-
getAllStatistics
public static Map<URI,org.apache.hadoop.fs.FileSystem.Statistics> getAllStatistics()
- Returns:
- a map from URI to statistics for each file system instantiated; the URI
consists of the scheme and authority of that file system.
-
modifyAclEntries
public void modifyAclEntries(Path path,
List<AclEntry> aclSpec)
throws IOException
Modifies ACL entries of files and directories. This method can add new ACL
entries or modify the permissions on existing ACL entries. All existing
ACL entries that are not specified in this call are retained without
changes. (Modifications are merged into the current ACL.)
- Parameters:
path - Path to modify
aclSpec - List describing modifications
- Throws:
IOException
- if an ACL could not be modified
-
removeAclEntries
public void removeAclEntries(Path path,
List<AclEntry> aclSpec)
throws IOException
Removes ACL entries from files and directories. Other ACL entries are
retained.
- Parameters:
path - Path to modify
aclSpec - List describing entries to remove
- Throws:
IOException
- if an ACL could not be modified
-
removeDefaultAcl
public void removeDefaultAcl(Path path)
throws IOException
Removes all default ACL entries from files and directories.
- Parameters:
path
- Path to modify
- Throws:
IOException
- if an ACL could not be modified
-
removeAcl
public void removeAcl(Path path)
throws IOException
Removes all but the base ACL entries of files and directories. The entries
for user, group, and others are retained for compatibility with permission
bits.
- Parameters:
path
- Path to modify
- Throws:
IOException
- if an ACL could not be removed
-
setAcl
public void setAcl(Path path,
List<AclEntry> aclSpec)
throws IOException
Fully replaces ACL of files and directories, discarding all existing
entries.
- Parameters:
path - Path to modify
aclSpec - List describing modifications; must include entries for user, group,
and others for compatibility with permission bits
- Throws:
IOException
- if an ACL could not be modified
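The key contrast between modifyAclEntries (merge) and setAcl (full replacement) can be sketched with a plain map keyed by "type:name" — a hypothetical simplification of the real AclEntry model, runnable without Hadoop:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class AclSemanticsSketch {
    public static void main(String[] args) {
        // Hypothetical simplification of an ACL: "type:name" -> permissions.
        Map<String, String> acl = new LinkedHashMap<>();
        acl.put("user:", "rw-");
        acl.put("user:alice", "r--");
        acl.put("group:", "r--");

        // modifyAclEntries semantics: merge; unspecified entries are retained.
        acl.put("user:alice", "rwx");  // modify an existing entry
        acl.put("user:bob", "r-x");    // add a new entry
        System.out.println(acl.containsKey("group:")); // true: retained

        // setAcl semantics: full replacement; existing entries are discarded,
        // and the spec must cover user, group, and others.
        acl.clear();
        acl.put("user:", "rw-");
        acl.put("group:", "r--");
        acl.put("other:", "---");
        System.out.println(acl.containsKey("user:bob")); // false: discarded
    }
}
```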
-
getAclStatus
public AclStatus getAclStatus(Path path)
throws IOException
Gets the ACLs of files and directories.
- Parameters:
path
- Path to get
- Returns:
- an AclStatus describing the ACLs of the file or directory
- Throws:
IOException
- if an ACL could not be read
-
setXAttr
public void setXAttr(Path path,
String name,
byte[] value)
throws IOException
Set an xattr of a file or directory.
The name must be prefixed with the namespace followed by ".". For example,
"user.attr".
Refer to the HDFS extended attributes user documentation for details.
- Parameters:
path - Path to modify
name - xattr name
value - xattr value
- Throws:
IOException
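The "namespace.attr" naming rule above can be checked with a few lines of plain Java. The sketch below is illustrative (isValidName is not a FileContext method); the namespace set follows the HDFS extended-attributes documentation:

```java
import java.util.Set;

public class XAttrNameSketch {
    // Namespaces defined by HDFS extended attributes (per the HDFS docs).
    static final Set<String> NAMESPACES =
            Set.of("user", "trusted", "security", "system", "raw");

    // Sketch of the naming rule: "<namespace>.<attr>", e.g. "user.attr".
    static boolean isValidName(String name) {
        int dot = name.indexOf('.');
        return dot > 0 && NAMESPACES.contains(name.substring(0, dot));
    }

    public static void main(String[] args) {
        System.out.println(isValidName("user.attr")); // true
        System.out.println(isValidName("attr"));      // false: no namespace prefix
        System.out.println(isValidName("bogus.attr"));// false: unknown namespace
    }
}
```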
-
setXAttr
public void setXAttr(Path path,
String name,
byte[] value,
EnumSet<XAttrSetFlag> flag)
throws IOException
Set an xattr of a file or directory.
The name must be prefixed with the namespace followed by ".". For example,
"user.attr".
Refer to the HDFS extended attributes user documentation for details.
- Parameters:
path - Path to modify
name - xattr name
value - xattr value
flag - xattr set flag
- Throws:
IOException
-
getXAttr
public byte[] getXAttr(Path path,
String name)
throws IOException
Get an xattr for a file or directory.
The name must be prefixed with the namespace followed by ".". For example,
"user.attr".
Refer to the HDFS extended attributes user documentation for details.
- Parameters:
path - Path to get extended attribute
name - xattr name
- Returns:
- byte[] xattr value.
- Throws:
IOException
-
getXAttrs
public Map<String,byte[]> getXAttrs(Path path)
throws IOException
Get all of the xattrs for a file or directory.
Only those xattrs for which the logged-in user has permissions to view
are returned.
Refer to the HDFS extended attributes user documentation for details.
- Parameters:
path
- Path to get extended attributes
- Returns:
- Map
describing the XAttrs of the file or directory
- Throws:
IOException
-
getXAttrs
public Map<String,byte[]> getXAttrs(Path path,
List<String> names)
throws IOException
Get all of the xattrs for a file or directory.
Only those xattrs for which the logged-in user has permissions to view
are returned.
Refer to the HDFS extended attributes user documentation for details.
- Parameters:
path - Path to get extended attributes
names - XAttr names
- Returns:
- Map
describing the XAttrs of the file or directory
- Throws:
IOException
-
removeXAttr
public void removeXAttr(Path path,
String name)
throws IOException
Remove an xattr of a file or directory.
The name must be prefixed with the namespace followed by ".". For example,
"user.attr".
Refer to the HDFS extended attributes user documentation for details.
- Parameters:
path - Path to remove extended attribute
name - xattr name
- Throws:
IOException
-
listXAttrs
public List<String> listXAttrs(Path path)
throws IOException
Get all of the xattr names for a file or directory.
Only those xattr names which the logged-in user has permissions to view
are returned.
Refer to the HDFS extended attributes user documentation for details.
- Parameters:
path
- Path to get extended attributes
- Returns:
- List
of the XAttr names of the file or directory
- Throws:
IOException
Copyright © 2015 Apache Software Foundation. All rights reserved.