Class HdfsAdmin
-
Field Summary
Fields -
Constructor Summary
Constructors -
Method Summary
long addCacheDirective(CacheDirectiveInfo info, EnumSet<CacheFlag> flags)
    Add a new CacheDirectiveInfo.
void addCachePool(CachePoolInfo info)
    Add a cache pool.
org.apache.hadoop.hdfs.protocol.AddErasureCodingPolicyResponse[] addErasureCodingPolicies(org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy[] policies)
    Add Erasure coding policies to HDFS.
void allowSnapshot(Path path)
    Allow snapshot on a directory.
void clearQuota(Path src)
    Clear the namespace quota (count of files, directories and sym links) for a directory.
void clearQuotaByStorageType(Path src, StorageType type)
    Clear the space quota by storage type for a directory.
void clearSpaceQuota(Path src)
    Clear the storage space quota (size of files) for a directory.
void createEncryptionZone(Path path, String keyName)
    Deprecated.
void createEncryptionZone(Path path, String keyName, EnumSet<CreateEncryptionZoneFlag> flags)
    Create an encryption zone rooted at an empty existing directory, using the specified encryption key.
void disableErasureCodingPolicy(String ecPolicyName)
    Disable erasure coding policy.
void disallowSnapshot(Path path)
    Disallow snapshot on a directory.
void enableErasureCodingPolicy(String ecPolicyName)
    Enable erasure coding policy.
Collection<? extends BlockStoragePolicySpi> getAllStoragePolicies()
    Retrieve all the storage policies supported by HDFS file system.
EncryptionZone getEncryptionZoneForPath(Path path)
    Get the path of the encryption zone for a given file or directory.
org.apache.hadoop.hdfs.protocol.ErasureCodingPolicyInfo[] getErasureCodingPolicies()
    Get the Erasure coding policies supported.
org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy getErasureCodingPolicy(Path path)
    Get the erasure coding policy information for the specified path.
org.apache.hadoop.fs.FileEncryptionInfo getFileEncryptionInfo(Path path)
    Returns the FileEncryptionInfo on the HdfsFileStatus for the given path.
DFSInotifyEventInputStream getInotifyEventStream()
    Exposes a stream of namesystem events.
DFSInotifyEventInputStream getInotifyEventStream(long lastReadTxid)
    A version of getInotifyEventStream() meant for advanced users who are aware of HDFS edits up to lastReadTxid (e.g. because they have access to an FSImage inclusive of lastReadTxid) and only want to read events after this point.
KeyProvider getKeyProvider()
    Get KeyProvider if present.
BlockStoragePolicySpi getStoragePolicy(Path src)
    Query the effective storage policy ID for the given file or directory.
org.apache.hadoop.fs.RemoteIterator<CacheDirectiveEntry> listCacheDirectives(CacheDirectiveInfo filter)
    List cache directives.
org.apache.hadoop.fs.RemoteIterator<CachePoolEntry> listCachePools()
    List all cache pools.
org.apache.hadoop.fs.RemoteIterator<EncryptionZone> listEncryptionZones()
    Returns a RemoteIterator which can be used to list the encryption zones in HDFS.
org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.hdfs.protocol.OpenFileEntry> listOpenFiles()
    Deprecated.
org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.hdfs.protocol.OpenFileEntry> listOpenFiles(EnumSet<OpenFilesIterator.OpenFilesType> openFilesTypes)
    Deprecated.
org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.hdfs.protocol.OpenFileEntry> listOpenFiles(EnumSet<OpenFilesIterator.OpenFilesType> openFilesTypes, String path)
org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.hdfs.protocol.ZoneReencryptionStatus> listReencryptionStatus()
    Returns a RemoteIterator which can be used to list all re-encryption information.
void modifyCacheDirective(CacheDirectiveInfo info, EnumSet<CacheFlag> flags)
    Modify a CacheDirective.
void modifyCachePool(CachePoolInfo info)
    Modify an existing cache pool.
void provisionEncryptionZoneTrash(Path path)
    Provision a trash directory for a given encryption zone.
Path provisionSnapshotTrash(Path path)
    Provision a trash directory for a given snapshottable directory.
void reencryptEncryptionZone(Path zone, HdfsConstants.ReencryptAction action)
    Performs re-encryption action for a given encryption zone.
void removeCacheDirective(long id)
    Remove a CacheDirective.
void removeCachePool(String poolName)
    Remove a cache pool.
void removeErasureCodingPolicy(String ecPolicyName)
    Remove erasure coding policy.
void satisfyStoragePolicy(Path path)
    Set the source path to the specified storage policy.
void setErasureCodingPolicy(Path path, String ecPolicyName)
    Set the source path to the specified erasure coding policy.
void setQuota(Path src, long quota)
    Set the namespace quota (count of files, directories, and sym links) for a directory.
void setQuotaByStorageType(Path src, StorageType type, long quota)
    Set the quota by storage type for a directory.
void setSpaceQuota(Path src, long spaceQuota)
    Set the storage space quota (size of files) for a directory.
void setStoragePolicy(Path src, String policyName)
    Set the source path to the specified storage policy.
void unsetErasureCodingPolicy(Path path)
    Unset erasure coding policy from the directory.
void unsetStoragePolicy(Path src)
    Unset the storage policy set for a given file or directory.
-
Field Details
-
TRASH_PERMISSION
-
-
Constructor Details
-
HdfsAdmin
Create a new HdfsAdmin client.
- Parameters:
uri - the unique URI of the HDFS file system to administer
conf - configuration
- Throws:
IOException - in the event the file system could not be created
-
-
Method Details
-
setQuota
Set the namespace quota (count of files, directories, and sym links) for a directory.
- Parameters:
src - the path to set the quota for
quota - the value to set for the quota
- Throws:
IOException - in the event of error
-
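As a usage sketch, the namespace-quota calls above can be combined like this (the NameNode URI, path, and quota value are illustrative; a running HDFS cluster and the hadoop-hdfs-client dependency are assumed):

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.client.HdfsAdmin;

public class QuotaExample {
    public static void main(String[] args) throws Exception {
        // Illustrative NameNode URI; adjust for your cluster.
        HdfsAdmin admin = new HdfsAdmin(new URI("hdfs://namenode:8020"), new Configuration());
        Path dir = new Path("/projects/alpha");
        // Limit the directory to at most 10,000 files, directories, and symlinks.
        admin.setQuota(dir, 10_000L);
        // Remove the limit again with clearQuota.
        admin.clearQuota(dir);
    }
}
```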
clearQuota
Clear the namespace quota (count of files, directories and sym links) for a directory.
- Parameters:
src - the path to clear the quota of
- Throws:
IOException - in the event of error
-
setSpaceQuota
Set the storage space quota (size of files) for a directory. Note that directories and sym links do not occupy storage space.
- Parameters:
src - the path to set the space quota of
spaceQuota - the value to set for the space quota
- Throws:
IOException - in the event of error
-
clearSpaceQuota
Clear the storage space quota (size of files) for a directory. Note that directories and sym links do not occupy storage space.
- Parameters:
src - the path to clear the space quota of
- Throws:
IOException - in the event of error
-
setQuotaByStorageType
Set the quota by storage type for a directory. Note that directories and sym links do not occupy storage type quota.
- Parameters:
src - the target directory to set the quota by storage type
type - the storage type to set for quota by storage type
quota - the value to set for quota by storage type
- Throws:
IOException - in the event of error
-
clearQuotaByStorageType
Clear the space quota by storage type for a directory. Note that directories and sym links do not occupy storage type quota.
- Parameters:
src - the target directory to clear the quota by storage type
type - the storage type to clear for quota by storage type
- Throws:
IOException - in the event of error
-
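The space-quota and storage-type-quota methods above can be sketched together (URI, path, and sizes are illustrative; a running cluster is assumed):

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.StorageType;
import org.apache.hadoop.hdfs.client.HdfsAdmin;

public class SpaceQuotaExample {
    public static void main(String[] args) throws Exception {
        HdfsAdmin admin = new HdfsAdmin(new URI("hdfs://namenode:8020"), new Configuration());
        Path dir = new Path("/projects/alpha");
        // Cap the total (replicated) on-disk size of files under the directory at 1 TB.
        admin.setSpaceQuota(dir, 1024L * 1024 * 1024 * 1024);
        // Additionally cap SSD consumption at 100 GB.
        admin.setQuotaByStorageType(dir, StorageType.SSD, 100L * 1024 * 1024 * 1024);
        // The per-storage-type cap can be lifted independently of the overall space quota.
        admin.clearQuotaByStorageType(dir, StorageType.SSD);
    }
}
```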
allowSnapshot
Allow snapshot on a directory.
- Parameters:
path - The path of the directory where snapshots will be taken.
- Throws:
IOException
-
provisionSnapshotTrash
Provision a trash directory for a given snapshottable directory.
- Parameters:
path - the root of the snapshottable directory
- Returns:
- Path of the provisioned trash root
- Throws:
IOException - if the trash directory cannot be created.
-
disallowSnapshot
Disallow snapshot on a directory.
- Parameters:
path - The path of the snapshottable directory.
- Throws:
IOException
-
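The snapshot administration methods above fit together in a short sketch (the path is illustrative; a running cluster is assumed):

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.client.HdfsAdmin;

public class SnapshotAdminExample {
    public static void main(String[] args) throws Exception {
        HdfsAdmin admin = new HdfsAdmin(new URI("hdfs://namenode:8020"), new Configuration());
        Path dir = new Path("/projects/alpha");
        // Mark the directory as snapshottable.
        admin.allowSnapshot(dir);
        // Optionally provision a trash root inside the snapshottable directory.
        Path trashRoot = admin.provisionSnapshotTrash(dir);
        System.out.println("Trash root: " + trashRoot);
        // Disallowing snapshots fails if the directory still contains snapshots.
        admin.disallowSnapshot(dir);
    }
}
```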
addCacheDirective
Add a new CacheDirectiveInfo.
- Parameters:
info - Information about a directive to add.
flags - CacheFlags to use for this operation.
- Returns:
- the ID of the directive that was created.
- Throws:
IOException - if the directive could not be added
-
modifyCacheDirective
public void modifyCacheDirective(CacheDirectiveInfo info, EnumSet<CacheFlag> flags) throws IOException
Modify a CacheDirective.
- Parameters:
info - Information about the directive to modify. You must set the ID to indicate which CacheDirective you want to modify.
flags - CacheFlags to use for this operation.
- Throws:
IOException - if the directive could not be modified
-
removeCacheDirective
Remove a CacheDirective.
- Parameters:
id - identifier of the CacheDirectiveInfo to remove
- Throws:
IOException - if the directive could not be removed
-
listCacheDirectives
public org.apache.hadoop.fs.RemoteIterator<CacheDirectiveEntry> listCacheDirectives(CacheDirectiveInfo filter) throws IOException
List cache directives. Incrementally fetches results from the server.
- Parameters:
filter - Filter parameters to use when listing the directives, null to list all directives visible to us.
- Returns:
- A RemoteIterator which returns CacheDirectiveInfo objects.
- Throws:
IOException
-
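The cache-directive lifecycle (add, list, remove) can be sketched as follows; the path, pool name, and replication factor are illustrative, and the "analytics" cache pool is assumed to already exist:

```java
import java.net.URI;
import java.util.EnumSet;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CacheFlag;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;
import org.apache.hadoop.hdfs.client.HdfsAdmin;
import org.apache.hadoop.hdfs.protocol.CacheDirectiveEntry;
import org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo;

public class CacheDirectiveExample {
    public static void main(String[] args) throws Exception {
        HdfsAdmin admin = new HdfsAdmin(new URI("hdfs://namenode:8020"), new Configuration());
        // Pin a hot dataset into an existing cache pool.
        CacheDirectiveInfo directive = new CacheDirectiveInfo.Builder()
                .setPath(new Path("/data/hot"))
                .setPool("analytics")
                .setReplication((short) 2)
                .build();
        long id = admin.addCacheDirective(directive, EnumSet.noneOf(CacheFlag.class));
        // A null filter lists every directive visible to the caller.
        RemoteIterator<CacheDirectiveEntry> it = admin.listCacheDirectives(null);
        while (it.hasNext()) {
            System.out.println(it.next().getInfo().getPath());
        }
        // Directives are removed by the ID returned from addCacheDirective.
        admin.removeCacheDirective(id);
    }
}
```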
addCachePool
Add a cache pool.
- Parameters:
info - The request to add a cache pool.
- Throws:
IOException - If the request could not be completed.
-
modifyCachePool
Modify an existing cache pool.
- Parameters:
info - The request to modify a cache pool.
- Throws:
IOException - If the request could not be completed.
-
removeCachePool
Remove a cache pool.
- Parameters:
poolName - Name of the cache pool to remove.
- Throws:
IOException - if the cache pool did not exist, or could not be removed.
-
listCachePools
List all cache pools.
- Returns:
- A remote iterator from which you can get CachePoolEntry objects. Requests will be made as needed.
- Throws:
IOException - If there was an error listing cache pools.
-
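A minimal cache-pool sketch using the methods above (pool name and byte limit are illustrative):

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.RemoteIterator;
import org.apache.hadoop.hdfs.client.HdfsAdmin;
import org.apache.hadoop.hdfs.protocol.CachePoolEntry;
import org.apache.hadoop.hdfs.protocol.CachePoolInfo;

public class CachePoolExample {
    public static void main(String[] args) throws Exception {
        HdfsAdmin admin = new HdfsAdmin(new URI("hdfs://namenode:8020"), new Configuration());
        // Create a pool with an illustrative 10 GB byte limit.
        CachePoolInfo pool = new CachePoolInfo("analytics")
                .setLimit(10L * 1024 * 1024 * 1024);
        admin.addCachePool(pool);
        // Enumerate pools; entries are fetched from the NameNode as needed.
        RemoteIterator<CachePoolEntry> pools = admin.listCachePools();
        while (pools.hasNext()) {
            System.out.println(pools.next().getInfo().getPoolName());
        }
        admin.removeCachePool("analytics");
    }
}
```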
getKeyProvider
Get KeyProvider if present.
- Returns:
- the key provider if encryption is enabled on HDFS. Otherwise, it returns null.
- Throws:
IOException - on RPC exception to the NN.
-
createEncryptionZone
@Deprecated
public void createEncryptionZone(Path path, String keyName) throws IOException, AccessControlException, FileNotFoundException
Deprecated.
Create an encryption zone rooted at an empty existing directory, using the specified encryption key. An encryption zone has an associated encryption key used when reading and writing files within the zone.
- Parameters:
path - The path of the root of the encryption zone. Must refer to an empty, existing directory.
keyName - Name of key available at the KeyProvider.
- Throws:
IOException - if there was a general IO exception
AccessControlException - if the caller does not have access to path
FileNotFoundException - if the path does not exist
-
createEncryptionZone
public void createEncryptionZone(Path path, String keyName, EnumSet<CreateEncryptionZoneFlag> flags) throws IOException, AccessControlException, FileNotFoundException, HadoopIllegalArgumentException
Create an encryption zone rooted at an empty existing directory, using the specified encryption key. An encryption zone has an associated encryption key used when reading and writing files within the zone. Additional options, such as provisioning the trash directory, can be specified using CreateEncryptionZoneFlag flags.
- Parameters:
path - The path of the root of the encryption zone. Must refer to an empty, existing directory.
keyName - Name of key available at the KeyProvider.
flags - flags for this operation.
- Throws:
IOException - if there was a general IO exception
AccessControlException - if the caller does not have access to path
FileNotFoundException - if the path does not exist
HadoopIllegalArgumentException - if the flags are invalid
-
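A short sketch of creating an encryption zone with trash provisioning in one call; the path and key name are illustrative, and the key is assumed to already exist in the cluster's KeyProvider:

```java
import java.net.URI;
import java.util.EnumSet;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.client.CreateEncryptionZoneFlag;
import org.apache.hadoop.hdfs.client.HdfsAdmin;

public class EncryptionZoneExample {
    public static void main(String[] args) throws Exception {
        HdfsAdmin admin = new HdfsAdmin(new URI("hdfs://namenode:8020"), new Configuration());
        // /secure must be an empty, existing directory, and "projectKey" an existing key.
        admin.createEncryptionZone(new Path("/secure"), "projectKey",
                EnumSet.of(CreateEncryptionZoneFlag.PROVISION_TRASH));
    }
}
```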
provisionEncryptionZoneTrash
Provision a trash directory for a given encryption zone.
- Parameters:
path - the root of the encryption zone
- Throws:
IOException - if the trash directory cannot be created.
-
getEncryptionZoneForPath
public EncryptionZone getEncryptionZoneForPath(Path path) throws IOException, AccessControlException
Get the path of the encryption zone for a given file or directory.
- Parameters:
path - The path to get the encryption zone for.
- Returns:
- An EncryptionZone, or null if the path does not exist or is not in an encryption zone.
- Throws:
IOException - if there was a general IO exception
AccessControlException - if the caller does not have access to path
-
listEncryptionZones
Returns a RemoteIterator which can be used to list the encryption zones in HDFS. For large numbers of encryption zones, the iterator will fetch the list of zones in a number of small batches. Since the list is fetched in batches, it does not represent a consistent snapshot of the entire list of encryption zones.
This method can only be called by HDFS superusers.
- Throws:
IOException
-
reencryptEncryptionZone
public void reencryptEncryptionZone(Path zone, HdfsConstants.ReencryptAction action) throws IOException
Performs re-encryption action for a given encryption zone.
- Parameters:
zone - the root of the encryption zone
action - the re-encrypt action
- Throws:
IOException - If any error occurs when handling re-encrypt action.
-
listReencryptionStatus
public org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.hdfs.protocol.ZoneReencryptionStatus> listReencryptionStatus() throws IOException
Returns a RemoteIterator which can be used to list all re-encryption information. For large numbers of re-encryptions, the iterator will fetch the list in a number of small batches. Since the list is fetched in batches, it does not represent a consistent snapshot of the entire list of encryption zones.
This method can only be called by HDFS superusers.
- Throws:
IOException
-
getFileEncryptionInfo
Returns the FileEncryptionInfo on the HdfsFileStatus for the given path. The return value can be null if the path points to a directory, or a file that is not in an encryption zone.
- Throws:
FileNotFoundException - if the path does not exist.
AccessControlException - if no execute permission on parent path.
IOException
-
getInotifyEventStream
Exposes a stream of namesystem events. Only events occurring after the stream is created are available. See DFSInotifyEventInputStream for information on stream usage. See Event for information on the available events.

Inotify users may want to tune the following HDFS parameters to ensure that enough extra HDFS edits are saved to support inotify clients that fall behind the current state of the namespace while reading events. The default parameter values should generally be reasonable. If edits are deleted before their corresponding events can be read, clients will see a MissingEventsException on DFSInotifyEventInputStream method calls.

It should generally be sufficient to tune these parameters:
dfs.namenode.num.extra.edits.retained
dfs.namenode.max.extra.edits.segments.retained

Parameters that affect the number of created segments and the number of edits that are considered necessary (i.e. do not count towards the dfs.namenode.num.extra.edits.retained quota):
dfs.namenode.checkpoint.period
dfs.namenode.checkpoint.txns
dfs.namenode.num.checkpoints.retained
dfs.ha.log-roll.period

It is recommended that local journaling be configured (dfs.namenode.edits.dir) for inotify (in addition to a shared journal) so that edit transfers from the shared journal can be avoided.
- Throws:
IOException- If there was an error obtaining the stream.
-
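A minimal event-consumption loop using the stream described above (the URI is illustrative; the loop runs until interrupted):

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSInotifyEventInputStream;
import org.apache.hadoop.hdfs.client.HdfsAdmin;
import org.apache.hadoop.hdfs.inotify.Event;
import org.apache.hadoop.hdfs.inotify.EventBatch;

public class InotifyExample {
    public static void main(String[] args) throws Exception {
        HdfsAdmin admin = new HdfsAdmin(new URI("hdfs://namenode:8020"), new Configuration());
        DFSInotifyEventInputStream stream = admin.getInotifyEventStream();
        // take() blocks until the next batch of namesystem events arrives.
        while (true) {
            EventBatch batch = stream.take();
            for (Event event : batch.getEvents()) {
                System.out.println(event.getEventType());
            }
        }
    }
}
```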
getInotifyEventStream
A version of getInotifyEventStream() meant for advanced users who are aware of HDFS edits up to lastReadTxid (e.g. because they have access to an FSImage inclusive of lastReadTxid) and only want to read events after this point.
- Throws:
IOException
-
setStoragePolicy
Set the source path to the specified storage policy.
- Parameters:
src - The source path referring to either a directory or a file.
policyName - The name of the storage policy.
- Throws:
IOException
-
unsetStoragePolicy
Unset the storage policy set for a given file or directory.
- Parameters:
src - file or directory path.
- Throws:
IOException
-
getStoragePolicy
Query the effective storage policy ID for the given file or directory.
- Parameters:
src - file or directory path.
- Returns:
- storage policy for the given file or directory.
- Throws:
IOException
-
getAllStoragePolicies
Retrieve all the storage policies supported by HDFS file system.
- Returns:
- all storage policies supported by HDFS file system.
- Throws:
IOException
-
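The storage-policy methods above can be sketched together; "COLD" is one of the built-in HDFS storage policies, and the path is illustrative:

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockStoragePolicySpi;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.client.HdfsAdmin;

public class StoragePolicyExample {
    public static void main(String[] args) throws Exception {
        HdfsAdmin admin = new HdfsAdmin(new URI("hdfs://namenode:8020"), new Configuration());
        Path archive = new Path("/archive");
        // Assign the built-in COLD policy to the directory.
        admin.setStoragePolicy(archive, "COLD");
        BlockStoragePolicySpi effective = admin.getStoragePolicy(archive);
        System.out.println("Effective policy: " + effective.getName());
        // Enumerate every policy the cluster supports.
        for (BlockStoragePolicySpi p : admin.getAllStoragePolicies()) {
            System.out.println(p.getName());
        }
    }
}
```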
setErasureCodingPolicy
Set the source path to the specified erasure coding policy.
- Parameters:
path - The source path referring to a directory.
ecPolicyName - The erasure coding policy name for the directory.
- Throws:
IOException
HadoopIllegalArgumentException - if the specified EC policy is not enabled on the cluster
-
getErasureCodingPolicy
public org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy getErasureCodingPolicy(Path path) throws IOException
Get the erasure coding policy information for the specified path.
- Parameters:
path -
- Returns:
- Returns the policy information if file or directory on the path is erasure coded. Null otherwise.
- Throws:
IOException
-
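A short sketch of setting and then querying an erasure coding policy; "RS-6-3-1024k" is a built-in policy name, the path is illustrative, and (as noted for setErasureCodingPolicy) the policy must be enabled before it can be set:

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.client.HdfsAdmin;
import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;

public class ErasureCodingExample {
    public static void main(String[] args) throws Exception {
        HdfsAdmin admin = new HdfsAdmin(new URI("hdfs://namenode:8020"), new Configuration());
        Path dir = new Path("/ec-data");
        // Enable the built-in policy, then apply it to the directory.
        admin.enableErasureCodingPolicy("RS-6-3-1024k");
        admin.setErasureCodingPolicy(dir, "RS-6-3-1024k");
        // getErasureCodingPolicy returns null for paths that are not erasure coded.
        ErasureCodingPolicy policy = admin.getErasureCodingPolicy(dir);
        if (policy != null) {
            System.out.println("EC policy: " + policy.getName());
        }
    }
}
```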
satisfyStoragePolicy
Set the source path to the specified storage policy.
- Parameters:
path - The source path referring to either a directory or a file.
- Throws:
IOException
-
getErasureCodingPolicies
public org.apache.hadoop.hdfs.protocol.ErasureCodingPolicyInfo[] getErasureCodingPolicies() throws IOException
Get the Erasure coding policies supported.
- Throws:
IOException
-
unsetErasureCodingPolicy
Unset erasure coding policy from the directory.
- Parameters:
path - The source path referring to a directory.
- Throws:
IOException
-
addErasureCodingPolicies
public org.apache.hadoop.hdfs.protocol.AddErasureCodingPolicyResponse[] addErasureCodingPolicies(org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy[] policies) throws IOException
Add Erasure coding policies to HDFS. For each policy input, schema and cellSize are required; name and id are ignored, as they will be automatically created and assigned by the Namenode once the policy is successfully added and will be returned in the response. Policy states will be set to DISABLED automatically.
- Parameters:
policies - The user defined ec policy list to add.
- Returns:
- Return the response list of adding operations.
- Throws:
IOException
-
removeErasureCodingPolicy
Remove erasure coding policy.
- Parameters:
ecPolicyName - The name of the policy to be removed.
- Throws:
IOException
-
enableErasureCodingPolicy
Enable erasure coding policy.
- Parameters:
ecPolicyName - The name of the policy to be enabled.
- Throws:
IOException
-
disableErasureCodingPolicy
Disable erasure coding policy.
- Parameters:
ecPolicyName - The name of the policy to be disabled.
- Throws:
IOException
-
listOpenFiles
@Deprecated
public org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.hdfs.protocol.OpenFileEntry> listOpenFiles() throws IOException
Deprecated.
Returns a RemoteIterator which can be used to list all open files currently managed by the NameNode. For large numbers of open files, the iterator will fetch the list in batches of configured size. Since the list is fetched in batches, it does not represent a consistent snapshot of all open files.
This method can only be called by HDFS superusers.
- Throws:
IOException
-
listOpenFiles
@Deprecated
public org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.hdfs.protocol.OpenFileEntry> listOpenFiles(EnumSet<OpenFilesIterator.OpenFilesType> openFilesTypes) throws IOException
Deprecated.
- Throws:
IOException
-
listOpenFiles
public org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.hdfs.protocol.OpenFileEntry> listOpenFiles(EnumSet<OpenFilesIterator.OpenFilesType> openFilesTypes, String path) throws IOException
- Throws:
IOException
-
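The non-deprecated listOpenFiles overload can be sketched as follows (the URI is illustrative; as with the other listing methods, superuser privileges are assumed):

```java
import java.net.URI;
import java.util.EnumSet;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.RemoteIterator;
import org.apache.hadoop.hdfs.client.HdfsAdmin;
import org.apache.hadoop.hdfs.protocol.OpenFileEntry;
import org.apache.hadoop.hdfs.protocol.OpenFilesIterator;

public class ListOpenFilesExample {
    public static void main(String[] args) throws Exception {
        HdfsAdmin admin = new HdfsAdmin(new URI("hdfs://namenode:8020"), new Configuration());
        // List all open files under "/"; results are fetched in batches.
        RemoteIterator<OpenFileEntry> open = admin.listOpenFiles(
                EnumSet.of(OpenFilesIterator.OpenFilesType.ALL_OPEN_FILES), "/");
        while (open.hasNext()) {
            OpenFileEntry entry = open.next();
            System.out.println(entry.getFilePath() + " held by " + entry.getClientName());
        }
    }
}
```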