java.lang.Object
  org.apache.hadoop.fs.FileContext
@InterfaceAudience.Public @InterfaceStability.Evolving public final class FileContext
The FileContext class provides an interface for application writers to use the Hadoop file system. It provides a set of methods for the usual operations: create, open, list, etc.
*** Path Names ***
The Hadoop file system supports a URI namespace and URI names. It offers a forest of file systems that can be referenced using fully qualified URIs. Two common Hadoop file system implementations are the local file system (file:///path) and HDFS (hdfs://nnAddress:nnPort/path).
To facilitate this, Hadoop supports a notion of a default file system. The user can set a default file system, although this is typically set up in your environment via your default config. A default file system implies a default scheme and authority; slash-relative names (such as /foo/bar) are resolved relative to that default FS. Similarly a user can also have working-directory-relative names (i.e. names not starting with a slash). While the working directory is generally in the same default FS, the wd can be in a different FS.
Hence Hadoop path names can be one of:
* fully qualified URIs: scheme://authority/path
* slash-relative names: /path, resolved relative to the default file system
* wd-relative names: path, resolved relative to the working directory
*** The Role of the FileContext and configuration defaults ***
The FileContext provides file namespace context for resolving file names; it also contains the umask for permissions. In that sense it is like the per-process file-related state in Unix systems. These two properties, the default file system and the umask, are obtained from the default configuration file in your environment (see Configuration).
No other configuration parameters are obtained from the default config as far as the file context layer is concerned. All file system instances (i.e. deployments of file systems) have default properties; we call these server side (SS) defaults. Operations like create allow one to select many properties: either pass them in as explicit parameters or use the SS properties.
The file system related SS defaults are:
*** Usage Model for the FileContext class ***
Example 1: use the default config read from the $HADOOP_CONFIG/core.xml. Unspecified values come from core-defaults.xml in the release jar.
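Example 1 above can be sketched as follows. This is a minimal, hedged sketch, not canonical usage: it assumes a Hadoop deployment with a default config on the classpath, and the file name and written content are illustrative.

```java
import java.io.IOException;
import java.util.EnumSet;

import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Path;

public class FileContextExample {
  public static void main(String[] args) throws IOException {
    // Example 1: default config read from $HADOOP_CONFIG/core.xml;
    // unspecified values come from core-defaults.xml in the release jar.
    FileContext fc = FileContext.getFileContext();

    // Create a file; a wd-relative name resolves against the working directory.
    try (FSDataOutputStream out =
             fc.create(new Path("example.txt"), EnumSet.of(CreateFlag.CREATE))) {
      out.writeUTF("hello");
    }

    // Re-open it with the default buffer size and read it back.
    try (FSDataInputStream in = fc.open(new Path("example.txt"))) {
      System.out.println(in.readUTF());
    }
  }
}
```

Unspecified creation options (block size, replication, etc.) fall back to the server side defaults described above.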
Field Summary | |
---|---|
static FsPermission |
DEFAULT_PERM
Default permission for directory and symlink. In previous versions, this default permission was also used to create files, so created files end up with ugo+x permission. |
static FsPermission |
DIR_DEFAULT_PERM
Default permission for directory |
static FsPermission |
FILE_DEFAULT_PERM
Default permission for file |
static org.apache.commons.logging.Log |
LOG
|
static int |
SHUTDOWN_HOOK_PRIORITY
Priority of the FileContext shutdown hook. |
Method Summary | |
---|---|
static void |
clearStatistics()
Clears all the statistics stored in AbstractFileSystem, for all the file systems. |
FSDataOutputStream |
create(Path f,
EnumSet<CreateFlag> createFlag,
org.apache.hadoop.fs.Options.CreateOpts... opts)
Creates or overwrites a file at the indicated path and returns an output stream for writing into the file. |
void |
createSymlink(Path target,
Path link,
boolean createParent)
Creates a symbolic link to an existing file. |
boolean |
delete(Path f,
boolean recursive)
Delete a file. |
boolean |
deleteOnExit(Path f)
Mark a path to be deleted on JVM shutdown. |
AclStatus |
getAclStatus(Path path)
Gets the ACLs of files and directories. |
static Map<URI,org.apache.hadoop.fs.FileSystem.Statistics> |
getAllStatistics()
|
FileChecksum |
getFileChecksum(Path f)
Get the checksum of a file. |
static FileContext |
getFileContext()
Create a FileContext using the default config read from the $HADOOP_CONFIG/core.xml. Unspecified key-values for config are defaulted from core-defaults.xml in the release jar. |
protected static FileContext |
getFileContext(AbstractFileSystem defaultFS)
Create a FileContext for specified file system using the default config. |
static FileContext |
getFileContext(AbstractFileSystem defFS,
Configuration aConf)
Create a FileContext with specified FS as default using the specified config. |
static FileContext |
getFileContext(Configuration aConf)
Create a FileContext using the passed config. |
static FileContext |
getFileContext(URI defaultFsUri)
Create a FileContext for specified URI using the default config. |
static FileContext |
getFileContext(URI defaultFsUri,
Configuration aConf)
Create a FileContext for specified default URI using the specified config. |
FileStatus |
getFileLinkStatus(Path f)
Return a file status object that represents the path. |
FileStatus |
getFileStatus(Path f)
Return a file status object that represents the path. |
protected AbstractFileSystem |
getFSofPath(Path absOrFqPath)
Get the file system of supplied path. |
FsStatus |
getFsStatus(Path f)
Returns a status object describing the use and capacity of the file system denoted by the Path argument f. |
Path |
getHomeDirectory()
Return the current user's home directory in this file system. |
Path |
getLinkTarget(Path f)
Returns the target of the given symbolic link as it was specified when the link was created. |
static FileContext |
getLocalFSFileContext()
|
static FileContext |
getLocalFSFileContext(Configuration aConf)
|
static org.apache.hadoop.fs.FileSystem.Statistics |
getStatistics(URI uri)
Get the statistics for a particular file system |
org.apache.hadoop.security.UserGroupInformation |
getUgi()
Gets the ugi in the file-context |
FsPermission |
getUMask()
|
Path |
getWorkingDirectory()
Gets the working directory for wd-relative names (such as "foo/bar"). |
org.apache.hadoop.fs.RemoteIterator<Path> |
listCorruptFileBlocks(Path path)
|
org.apache.hadoop.fs.RemoteIterator<LocatedFileStatus> |
listLocatedStatus(Path f)
List the statuses of the files/directories in the given path if the path is a directory. |
org.apache.hadoop.fs.RemoteIterator<FileStatus> |
listStatus(Path f)
List the statuses of the files/directories in the given path if the path is a directory. |
Path |
makeQualified(Path path)
Make the path fully qualified if it isn't. |
void |
mkdir(Path dir,
FsPermission permission,
boolean createParent)
Make (create) a directory and all its non-existent parents. |
void |
modifyAclEntries(Path path,
List<AclEntry> aclSpec)
Modifies ACL entries of files and directories. |
FSDataInputStream |
open(Path f)
Opens an FSDataInputStream at the indicated Path using default buffersize. |
FSDataInputStream |
open(Path f,
int bufferSize)
Opens an FSDataInputStream at the indicated Path. |
static void |
printStatistics()
Prints the statistics to standard output. |
void |
removeAcl(Path path)
Removes all but the base ACL entries of files and directories. |
void |
removeAclEntries(Path path,
List<AclEntry> aclSpec)
Removes ACL entries from files and directories. |
void |
removeDefaultAcl(Path path)
Removes all default ACL entries from files and directories. |
void |
rename(Path src,
Path dst,
org.apache.hadoop.fs.Options.Rename... options)
Renames Path src to Path dst |
protected Path |
resolve(Path f)
Resolves all symbolic links in the specified path. |
protected Path |
resolveIntermediate(Path f)
Resolves all symbolic links in the specified path leading up to, but not including the final path component. |
Path |
resolvePath(Path f)
Resolve the path following any symlinks or mount points |
void |
setAcl(Path path,
List<AclEntry> aclSpec)
Fully replaces ACL of files and directories, discarding all existing entries. |
void |
setOwner(Path f,
String username,
String groupname)
Set owner of a path (i.e. |
void |
setPermission(Path f,
FsPermission permission)
Set permission of a path. |
boolean |
setReplication(Path f,
short replication)
Set replication for an existing file. |
void |
setTimes(Path f,
long mtime,
long atime)
Set access time of a file. |
void |
setUMask(FsPermission newUmask)
Set umask to the supplied parameter. |
void |
setVerifyChecksum(boolean verifyChecksum,
Path f)
Set the verify checksum flag for the file system denoted by the path. |
void |
setWorkingDirectory(Path newWDir)
Set the working directory for wd-relative names (such as "foo/bar"). |
org.apache.hadoop.fs.FileContext.Util |
util()
|
Methods inherited from class java.lang.Object |
---|
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait |
Field Detail |
---|
public static final org.apache.commons.logging.Log LOG
public static final FsPermission DEFAULT_PERM
Default permission for directory and symlink. Use DIR_DEFAULT_PERM for directory, and use FILE_DEFAULT_PERM for file. This constant is kept for compatibility.
public static final FsPermission DIR_DEFAULT_PERM
public static final FsPermission FILE_DEFAULT_PERM
public static final int SHUTDOWN_HOOK_PRIORITY
Method Detail |
---|
protected AbstractFileSystem getFSofPath(Path absOrFqPath) throws UnsupportedFileSystemException, IOException
absOrFqPath - absolute or fully qualified path
UnsupportedFileSystemException - If the file system for absOrFqPath is not supported.
IOException - If the file system for absOrFqPath could not be instantiated.
public static FileContext getFileContext(AbstractFileSystem defFS, Configuration aConf)
defFS
- aConf
-
protected static FileContext getFileContext(AbstractFileSystem defaultFS)
defaultFS
-
public static FileContext getFileContext() throws UnsupportedFileSystemException
UnsupportedFileSystemException
- If the file system from the default
configuration is not supported
public static FileContext getLocalFSFileContext() throws UnsupportedFileSystemException
UnsupportedFileSystemException
- If the file system for
FsConstants.LOCAL_FS_URI
is not supported.
public static FileContext getFileContext(URI defaultFsUri) throws UnsupportedFileSystemException
defaultFsUri
-
UnsupportedFileSystemException
- If the file system for
defaultFsUri
is not supported
public static FileContext getFileContext(URI defaultFsUri, Configuration aConf) throws UnsupportedFileSystemException
defaultFsUri
- aConf
-
UnsupportedFileSystemException
- If the file system with specified is
not supported
RuntimeException
- If the file system specified is supported but
could not be instantiated, or if login fails.
public static FileContext getFileContext(Configuration aConf) throws UnsupportedFileSystemException
getFileContext(URI, Configuration)
instead of this one.
aConf
-
UnsupportedFileSystemException
- If file system in the config
is not supported
public static FileContext getLocalFSFileContext(Configuration aConf) throws UnsupportedFileSystemException
aConf
- - from which the FileContext is configured
UnsupportedFileSystemException
- If default file system in the config
is not supported
public void setWorkingDirectory(Path newWDir) throws IOException
getWorkingDirectory()
should return what setWorkingDir() set.
newWDir
- new working directory
IOException -
public Path getWorkingDirectory()
public org.apache.hadoop.security.UserGroupInformation getUgi()
public Path getHomeDirectory()
public FsPermission getUMask()
public void setUMask(FsPermission newUmask)
newUmask
- the new umask
public Path resolvePath(Path f) throws FileNotFoundException, org.apache.hadoop.fs.UnresolvedLinkException, org.apache.hadoop.security.AccessControlException, IOException
f
- to be resolved
FileNotFoundException
- If f
does not exist
org.apache.hadoop.security.AccessControlException
- if access denied
IOException
- If an IO Error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException
- If an exception occurred in the RPC client
org.apache.hadoop.ipc.RpcServerException
- If an exception occurred in the RPC server
org.apache.hadoop.ipc.UnexpectedServerException
- If server implementation throws
undeclared exception to RPC server
RuntimeExceptions:
InvalidPathException
- If path f
is not valid
org.apache.hadoop.fs.UnresolvedLinkException
public Path makeQualified(Path path)
path
-
public FSDataOutputStream create(Path f, EnumSet<CreateFlag> createFlag, org.apache.hadoop.fs.Options.CreateOpts... opts) throws org.apache.hadoop.security.AccessControlException, FileAlreadyExistsException, FileNotFoundException, ParentNotDirectoryException, UnsupportedFileSystemException, IOException
f - the file name to open
createFlag - gives the semantics of create; see CreateFlag
opts - file creation options; see Options.CreateOpts.
Returns an FSDataOutputStream for the created file.
org.apache.hadoop.security.AccessControlException
- If access is denied
FileAlreadyExistsException
- If file f
already exists
FileNotFoundException
- If parent of f
does not exist
and createParent
is false
ParentNotDirectoryException
- If parent of f
is not a
directory.
UnsupportedFileSystemException
- If file system for f
is
not supported
IOException
- If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException
- If an exception occurred in the RPC client
org.apache.hadoop.ipc.RpcServerException
- If an exception occurred in the RPC server
org.apache.hadoop.ipc.UnexpectedServerException
- If server implementation throws
undeclared exception to RPC server
RuntimeExceptions:
InvalidPathException
- If path f
is not valid
public void mkdir(Path dir, FsPermission permission, boolean createParent) throws org.apache.hadoop.security.AccessControlException, FileAlreadyExistsException, FileNotFoundException, ParentNotDirectoryException, UnsupportedFileSystemException, IOException
dir - the dir to make
permission - permissions are set to permission&~umask
createParent - if true then missing parent dirs are created; if false then the parent must exist
org.apache.hadoop.security.AccessControlException
- If access is denied
FileAlreadyExistsException
- If directory dir
already
exists
FileNotFoundException
- If parent of dir
does not exist
and createParent
is false
ParentNotDirectoryException
- If parent of dir
is not a
directory
UnsupportedFileSystemException
- If file system for dir
is not supported
IOException
- If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException
- If an exception occurred in the RPC client
org.apache.hadoop.ipc.UnexpectedServerException
- If server implementation throws
undeclared exception to RPC server
RuntimeExceptions:
InvalidPathException
- If path dir
is not valid
public boolean delete(Path f, boolean recursive) throws org.apache.hadoop.security.AccessControlException, FileNotFoundException, UnsupportedFileSystemException, IOException
f - the path to delete
recursive - if the path is a directory and set to true, the directory is deleted, else an exception is thrown. In the case of a file, recursive can be set to either true or false.
org.apache.hadoop.security.AccessControlException
- If access is denied
FileNotFoundException
- If f
does not exist
UnsupportedFileSystemException
- If file system for f
is
not supported
IOException
- If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException
- If an exception occurred in the RPC client
org.apache.hadoop.ipc.RpcServerException
- If an exception occurred in the RPC server
org.apache.hadoop.ipc.UnexpectedServerException
- If server implementation throws
undeclared exception to RPC server
RuntimeExceptions:
InvalidPathException
- If path f
is invalid
public FSDataInputStream open(Path f) throws org.apache.hadoop.security.AccessControlException, FileNotFoundException, UnsupportedFileSystemException, IOException
f - the file name to open
org.apache.hadoop.security.AccessControlException
- If access is denied
FileNotFoundException
- If file f
does not exist
UnsupportedFileSystemException
- If file system for f
is not supported
IOException
- If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException
- If an exception occurred in the RPC client
org.apache.hadoop.ipc.RpcServerException
- If an exception occurred in the RPC server
org.apache.hadoop.ipc.UnexpectedServerException
- If server implementation throws
undeclared exception to RPC server
public FSDataInputStream open(Path f, int bufferSize) throws org.apache.hadoop.security.AccessControlException, FileNotFoundException, UnsupportedFileSystemException, IOException
f - the file name to open
bufferSize - the size of the buffer to be used.
org.apache.hadoop.security.AccessControlException
- If access is denied
FileNotFoundException
- If file f
does not exist
UnsupportedFileSystemException
- If file system for f
is
not supported
IOException
- If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException
- If an exception occurred in the RPC client
org.apache.hadoop.ipc.RpcServerException
- If an exception occurred in the RPC server
org.apache.hadoop.ipc.UnexpectedServerException
- If server implementation throws
undeclared exception to RPC server
public boolean setReplication(Path f, short replication) throws org.apache.hadoop.security.AccessControlException, FileNotFoundException, IOException
f - file name
replication - new replication
org.apache.hadoop.security.AccessControlException
- If access is denied
FileNotFoundException
- If file f
does not exist
IOException
- If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException
- If an exception occurred in the RPC client
org.apache.hadoop.ipc.RpcServerException
- If an exception occurred in the RPC server
org.apache.hadoop.ipc.UnexpectedServerException
- If server implementation throws
undeclared exception to RPC server
public void rename(Path src, Path dst, org.apache.hadoop.fs.Options.Rename... options) throws org.apache.hadoop.security.AccessControlException, FileAlreadyExistsException, FileNotFoundException, ParentNotDirectoryException, UnsupportedFileSystemException, IOException
If OVERWRITE option is not passed as an argument, rename fails if the dst already exists.
If OVERWRITE option is passed as an argument, rename overwrites the dst if it is a file or an empty directory. Rename fails if dst is a non-empty directory.
Note that atomicity of rename is dependent on the file system implementation. Please refer to the file system documentation for details
src - path to be renamed
dst - new path after rename
org.apache.hadoop.security.AccessControlException
- If access is denied
FileAlreadyExistsException
- If dst
already exists and
options has Options.Rename.OVERWRITE
option false.
FileNotFoundException
- If src
does not exist
ParentNotDirectoryException
- If parent of dst
is not a
directory
UnsupportedFileSystemException
- If file system for src
and dst
is not supported
IOException
- If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException
- If an exception occurred in the RPC client
org.apache.hadoop.ipc.RpcServerException
- If an exception occurred in the RPC server
org.apache.hadoop.ipc.UnexpectedServerException
- If server implementation throws
undeclared exception to RPC server
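The OVERWRITE semantics described above can be exercised as in the sketch below. This is a hedged illustration, not authoritative usage: the paths are hypothetical and a default file system is assumed to be configured.

```java
import java.io.IOException;

import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Options;
import org.apache.hadoop.fs.Path;

public class RenameExample {
  public static void main(String[] args) throws IOException {
    FileContext fc = FileContext.getFileContext();

    // Without OVERWRITE: throws FileAlreadyExistsException if /tmp/dst exists.
    fc.rename(new Path("/tmp/src"), new Path("/tmp/dst"));

    // With OVERWRITE: replaces /tmp/dst if it is a file or an empty
    // directory; still fails if /tmp/dst is a non-empty directory.
    fc.rename(new Path("/tmp/src2"), new Path("/tmp/dst"),
        Options.Rename.OVERWRITE);
  }
}
```

Because the options parameter is a vararg, omitting it entirely gives the non-overwriting behavior. Atomicity of either form depends on the underlying file system implementation.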
public void setPermission(Path f, FsPermission permission) throws org.apache.hadoop.security.AccessControlException, FileNotFoundException, UnsupportedFileSystemException, IOException
f -
permission - the new absolute permission (umask is not applied)
org.apache.hadoop.security.AccessControlException
- If access is denied
FileNotFoundException
- If f
does not exist
UnsupportedFileSystemException
- If file system for f
is not supported
IOException
- If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException
- If an exception occurred in the RPC client
org.apache.hadoop.ipc.RpcServerException
- If an exception occurred in the RPC server
org.apache.hadoop.ipc.UnexpectedServerException
- If server implementation throws
undeclared exception to RPC server
public void setOwner(Path f, String username, String groupname) throws org.apache.hadoop.security.AccessControlException, UnsupportedFileSystemException, FileNotFoundException, IOException
f - The path
username - If it is null, the original username remains unchanged.
groupname - If it is null, the original groupname remains unchanged.
org.apache.hadoop.security.AccessControlException
- If access is denied
FileNotFoundException
- If f
does not exist
UnsupportedFileSystemException
- If file system for f
is
not supported
IOException
- If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException
- If an exception occurred in the RPC client
org.apache.hadoop.ipc.RpcServerException
- If an exception occurred in the RPC server
org.apache.hadoop.ipc.UnexpectedServerException
- If server implementation throws
undeclared exception to RPC server
RuntimeExceptions:
HadoopIllegalArgumentException
- If username
or
groupname
is invalid.
public void setTimes(Path f, long mtime, long atime) throws org.apache.hadoop.security.AccessControlException, FileNotFoundException, UnsupportedFileSystemException, IOException
f - The path
mtime - Set the modification time of this file, in milliseconds since the epoch (Jan 1, 1970). A value of -1 means that this call should not set modification time.
atime - Set the access time of this file, in milliseconds since the epoch (Jan 1, 1970). A value of -1 means that this call should not set access time.
org.apache.hadoop.security.AccessControlException
- If access is denied
FileNotFoundException
- If f
does not exist
UnsupportedFileSystemException
- If file system for f
is
not supported
IOException
- If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException
- If an exception occurred in the RPC client
org.apache.hadoop.ipc.RpcServerException
- If an exception occurred in the RPC server
org.apache.hadoop.ipc.UnexpectedServerException
- If server implementation throws
undeclared exception to RPC server
public FileChecksum getFileChecksum(Path f) throws org.apache.hadoop.security.AccessControlException, FileNotFoundException, IOException
f - file path
org.apache.hadoop.security.AccessControlException
- If access is denied
FileNotFoundException
- If f
does not exist
IOException
- If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException
- If an exception occurred in the RPC client
org.apache.hadoop.ipc.RpcServerException
- If an exception occurred in the RPC server
org.apache.hadoop.ipc.UnexpectedServerException
- If server implementation throws
undeclared exception to RPC server
public void setVerifyChecksum(boolean verifyChecksum, Path f) throws org.apache.hadoop.security.AccessControlException, FileNotFoundException, UnsupportedFileSystemException, IOException
verifyChecksum -
f - set the verifyChecksum for the file system containing this path
org.apache.hadoop.security.AccessControlException
- If access is denied
FileNotFoundException
- If f
does not exist
UnsupportedFileSystemException
- If file system for f
is
not supported
IOException
- If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException
- If an exception occurred in the RPC client
org.apache.hadoop.ipc.RpcServerException
- If an exception occurred in the RPC server
org.apache.hadoop.ipc.UnexpectedServerException
- If server implementation throws
undeclared exception to RPC server
public FileStatus getFileStatus(Path f) throws org.apache.hadoop.security.AccessControlException, FileNotFoundException, UnsupportedFileSystemException, IOException
f - The path we want information from
org.apache.hadoop.security.AccessControlException
- If access is denied
FileNotFoundException
- If f
does not exist
UnsupportedFileSystemException
- If file system for f
is
not supported
IOException
- If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException
- If an exception occurred in the RPC client
org.apache.hadoop.ipc.RpcServerException
- If an exception occurred in the RPC server
org.apache.hadoop.ipc.UnexpectedServerException
- If server implementation throws
undeclared exception to RPC server
public FileStatus getFileLinkStatus(Path f) throws org.apache.hadoop.security.AccessControlException, FileNotFoundException, UnsupportedFileSystemException, IOException
f - The path we want information from.
org.apache.hadoop.security.AccessControlException
- If access is denied
FileNotFoundException
- If f
does not exist
UnsupportedFileSystemException
- If file system for f
is
not supported
IOException
- If an I/O error occurred
public Path getLinkTarget(Path f) throws org.apache.hadoop.security.AccessControlException, FileNotFoundException, UnsupportedFileSystemException, IOException
f - the path to return the target of
org.apache.hadoop.security.AccessControlException
- If access is denied
FileNotFoundException
- If path f
does not exist
UnsupportedFileSystemException
- If file system for f
is
not supported
IOException
- If the given path does not refer to a symlink
or an I/O error occurred
public FsStatus getFsStatus(Path f) throws org.apache.hadoop.security.AccessControlException, FileNotFoundException, UnsupportedFileSystemException, IOException
f - Path for which status should be obtained. null means the root partition of the default file system.
org.apache.hadoop.security.AccessControlException
- If access is denied
FileNotFoundException
- If f
does not exist
UnsupportedFileSystemException
- If file system for f
is
not supported
IOException
- If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException
- If an exception occurred in the RPC client
org.apache.hadoop.ipc.RpcServerException
- If an exception occurred in the RPC server
org.apache.hadoop.ipc.UnexpectedServerException
- If server implementation throws
undeclared exception to RPC server
public void createSymlink(Path target, Path link, boolean createParent) throws org.apache.hadoop.security.AccessControlException, FileAlreadyExistsException, FileNotFoundException, ParentNotDirectoryException, UnsupportedFileSystemException, IOException
Given a path referring to a symlink of the form fs://host/A/B/link, X (fs://host) is the scheme and authority that identify the file system, and Y (/A/B) is the path leading up to the final path component "link". If Y is a symlink itself, then let Y' be the target of Y and X' be the scheme and authority of Y'. Symlink targets may be:
1. Fully qualified URIs (fs://hostX/A/B/file). Resolved according to the target file system.
2. Partially qualified URIs, e.g. a scheme but no host (fs:///A/B/file). Resolved according to the target file system. E.g. resolving a symlink to hdfs:///A results in an exception because HDFS URIs must be fully qualified, while a symlink to file:///A will not, since Hadoop's local file systems require partially qualified URIs.
3. Relative paths (path). Resolves to [Y'][path]. E.g. if Y resolves to hdfs://host/A and path is "../B/file" then [Y'][path] is hdfs://host/B/file.
4. Absolute paths (path). Resolves to [X'][path]. E.g. if Y resolves to hdfs://host/A/B and path is "/file" then [X'][path] is hdfs://host/file.
target - the target of the symbolic link
link - the path to be created that points to target
createParent - if true then missing parent dirs are created; if false then the parent must exist
org.apache.hadoop.security.AccessControlException
- If access is denied
FileAlreadyExistsException
- If file link already exists
FileNotFoundException
- If target
does not exist
ParentNotDirectoryException
- If parent of link
is not a
directory.
UnsupportedFileSystemException
- If file system for
target
or link
is not supported
IOException
- If an I/O error occurred
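The symlink semantics above can be sketched as follows. This is a hedged illustration with hypothetical paths; it assumes the target exists, the link's parent directory exists (createParent is false), and the underlying file system supports symlinks.

```java
import java.io.IOException;

import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Path;

public class SymlinkExample {
  public static void main(String[] args) throws IOException {
    FileContext fc = FileContext.getFileContext();

    Path target = new Path("/user/alice/data/file");  // must already exist
    Path link = new Path("/user/alice/latest");

    // createParent=false: the parent of "link" must already exist.
    fc.createSymlink(target, link, false);

    // getLinkTarget returns the target exactly as specified at creation time,
    // without resolving it further.
    Path asCreated = fc.getLinkTarget(link);
    System.out.println(asCreated);
  }
}
```

Here the target is a wd-independent absolute path, so by rule 4 above it resolves against the scheme and authority of the file system containing the link.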
public org.apache.hadoop.fs.RemoteIterator<FileStatus> listStatus(Path f) throws org.apache.hadoop.security.AccessControlException, FileNotFoundException, UnsupportedFileSystemException, IOException
f
- is the path
org.apache.hadoop.security.AccessControlException
- If access is denied
FileNotFoundException
- If f
does not exist
UnsupportedFileSystemException
- If file system for f
is
not supported
IOException
- If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException
- If an exception occurred in the RPC client
org.apache.hadoop.ipc.RpcServerException
- If an exception occurred in the RPC server
org.apache.hadoop.ipc.UnexpectedServerException
- If server implementation throws
undeclared exception to RPC server
public org.apache.hadoop.fs.RemoteIterator<Path> listCorruptFileBlocks(Path path) throws IOException
IOException
public org.apache.hadoop.fs.RemoteIterator<LocatedFileStatus> listLocatedStatus(Path f) throws org.apache.hadoop.security.AccessControlException, FileNotFoundException, UnsupportedFileSystemException, IOException
f
- is the path
org.apache.hadoop.security.AccessControlException
- If access is denied
FileNotFoundException
- If f
does not exist
UnsupportedFileSystemException
- If file system for f
is
not supported
IOException
- If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException
- If an exception occurred in the RPC client
org.apache.hadoop.ipc.RpcServerException
- If an exception occurred in the RPC server
org.apache.hadoop.ipc.UnexpectedServerException
- If server implementation throws
undeclared exception to RPC server
public boolean deleteOnExit(Path f) throws org.apache.hadoop.security.AccessControlException, IOException
f
- the existing path to delete.
org.apache.hadoop.security.AccessControlException
- If access is denied
UnsupportedFileSystemException
- If file system for f
is
not supported
IOException
- If an I/O error occurred
Exceptions applicable to file systems accessed over RPC:
org.apache.hadoop.ipc.RpcClientException
- If an exception occurred in the RPC client
org.apache.hadoop.ipc.RpcServerException
- If an exception occurred in the RPC server
org.apache.hadoop.ipc.UnexpectedServerException
- If server implementation throws
undeclared exception to RPC server
public org.apache.hadoop.fs.FileContext.Util util()
protected Path resolve(Path f) throws FileNotFoundException, org.apache.hadoop.fs.UnresolvedLinkException, org.apache.hadoop.security.AccessControlException, IOException
FileNotFoundException
org.apache.hadoop.fs.UnresolvedLinkException
org.apache.hadoop.security.AccessControlException
IOException
protected Path resolveIntermediate(Path f) throws IOException
f
- path to resolve
IOException
public static org.apache.hadoop.fs.FileSystem.Statistics getStatistics(URI uri)
uri
- the uri to lookup the statistics. Only scheme and authority part
of the uri are used as the key to store and lookup.
public static void clearStatistics()
public static void printStatistics()
public static Map<URI,org.apache.hadoop.fs.FileSystem.Statistics> getAllStatistics()
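The statistics accessors above can be combined as in this hedged sketch; the exact set of keys present depends on which file systems have been accessed in the JVM so far.

```java
import java.net.URI;
import java.util.Map;

import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.FileSystem;

public class StatisticsExample {
  public static void main(String[] args) {
    // Statistics are keyed by the scheme and authority of the file system URI.
    Map<URI, FileSystem.Statistics> stats = FileContext.getAllStatistics();
    for (Map.Entry<URI, FileSystem.Statistics> e : stats.entrySet()) {
      System.out.println(e.getKey()
          + " -> bytesRead=" + e.getValue().getBytesRead()
          + ", bytesWritten=" + e.getValue().getBytesWritten());
    }

    // Or simply dump everything to standard output, then reset.
    FileContext.printStatistics();
    FileContext.clearStatistics();
  }
}
```

Note that these are static methods: they aggregate per-URI counters across all AbstractFileSystem instances in the process, not per FileContext.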
public void modifyAclEntries(Path path, List<AclEntry> aclSpec) throws IOException
path - Path to modify
aclSpec - List<AclEntry> describing modifications
IOException - if an ACL could not be modified
public void removeAclEntries(Path path, List<AclEntry> aclSpec) throws IOException
path - Path to modify
aclSpec - List<AclEntry> describing entries to remove
IOException - if an ACL could not be modified
public void removeDefaultAcl(Path path) throws IOException
path
- Path to modify
IOException
- if an ACL could not be modified
public void removeAcl(Path path) throws IOException
path
- Path to modify
IOException
- if an ACL could not be removed
public void setAcl(Path path, List<AclEntry> aclSpec) throws IOException
path - Path to modify
aclSpec - List<AclEntry> describing modifications
IOException - if an ACL could not be modified
public AclStatus getAclStatus(Path path) throws IOException
path
- Path to get
IOException
- if an ACL could not be read
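The ACL methods above can be used as in the following hedged sketch. The path and user name are hypothetical, and the sketch assumes a file system that supports ACLs (on HDFS this requires ACLs to be enabled on the NameNode via dfs.namenode.acls.enabled).

```java
import java.io.IOException;
import java.util.Arrays;
import java.util.List;

import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.AclEntry;
import org.apache.hadoop.fs.permission.AclEntryScope;
import org.apache.hadoop.fs.permission.AclEntryType;
import org.apache.hadoop.fs.permission.AclStatus;
import org.apache.hadoop.fs.permission.FsAction;

public class AclExample {
  public static void main(String[] args) throws IOException {
    FileContext fc = FileContext.getFileContext();
    Path p = new Path("/user/alice/data");  // illustrative path

    // Grant the named user "bob" read access via an additional ACL entry;
    // modifyAclEntries merges this with the existing ACL.
    List<AclEntry> spec = Arrays.asList(
        new AclEntry.Builder()
            .setScope(AclEntryScope.ACCESS)
            .setType(AclEntryType.USER)
            .setName("bob")
            .setPermission(FsAction.READ)
            .build());
    fc.modifyAclEntries(p, spec);

    // Read the ACLs back.
    AclStatus status = fc.getAclStatus(p);
    System.out.println(status.getEntries());

    // Strip everything but the base entries again.
    fc.removeAcl(p);
  }
}
```

In contrast to modifyAclEntries, setAcl fully replaces the ACL, so the spec passed to it must also cover the user, group, and other entries.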