Package org.apache.hadoop.hdfs.server.datanode
Class summary:

- BlockMetadataHeader: BlockMetadataHeader manages metadata for data blocks on Datanodes.
- BlockPoolSliceStorage: Manages storage for the set of BlockPoolSlices which share a particular block pool id, on this DataNode.
- BlockRecoveryWorker: This class handles the block recovery work commands.
- BlockScanner
- BlockScanner.Servlet
- BPServiceActorAction: Base class for BPServiceActor actions; issued by the BPOfferService class to tell BPServiceActor to take several actions.
- BPServiceActorActionException
- CachingStrategy: The caching strategy we should use for an HDFS read or write operation.
- CachingStrategy.Builder
- ChunkChecksum: Holder class for the checksum bytes and the length in a block at which the checksum bytes end. For example, if length = 1023 and the checksum is 4 bytes covering 512 bytes, then the checksum applies to the last chunk, i.e. bytes 512-1023.
- CorruptMetaHeaderException: Exception thrown when the block metadata file is corrupt.
- DataNode: DataNode is a class (and program) that stores a set of blocks for a DFS deployment.
- DataNode.ShortCircuitFdsUnsupportedException
- DataNode.ShortCircuitFdsVersionException
- DataNodeFaultInjector: Used for injecting faults in DFSClient and DFSOutputStream tests.
- DataNodeLayoutSubLockStrategy
- DataNodeLayoutVersion: Enums for features that change the layout version.
- DataNodeMXBean: This is the JMX management interface for data node information.
- DatanodeUtil: Provides utility methods for the Datanode.
- DataSetLockManager: Class for maintaining a set of locks for FsDatasetImpl.
- DataSetSubLockStrategy: This interface is used to generate a sub-lock name for a block id.
- DataStorage: Data storage information file.
- DataStorage.VolumeBuilder: VolumeBuilder holds the metadata (e.g., the storage directories) of the prepared volume returned from DataStorage.prepareVolume(DataNode, StorageLocation, List).
- DirectoryScanner: Periodically scans the data directories for block and block metadata files.
- DirectoryScanner.BlockPoolReport: Helper class for compiling block info reports per block pool.
- DirectoryScanner.ScanInfoVolumeReport: Helper class for compiling block info reports from report compiler threads.
- DiskBalancer: Worker class for the Disk Balancer.
- DiskBalancer.BlockMover: BlockMover supports moving blocks across volumes.
- DiskBalancer.DiskBalancerMover: Actual DataMover class for the DiskBalancer.
- DiskBalancer.VolumePair: Holds the source and destination volume UUIDs and their base paths that the disk balancer will be operating against.
- DiskBalancerWorkItem: Keeps track of how much work has finished.
- DiskBalancerWorkStatus: Helper class that reports how much work has been done by the node.
- DiskBalancerWorkStatus.DiskBalancerWorkEntry: A class that is used to report each work item that we are working on.
- DiskBalancerWorkStatus.Result: Various result values.
- DiskFileCorruptException: When the kernel reports an "Input/output error", this exception is used to represent corruption (e.g., a bad disk track) on some disk file.
- DNConf: Simple class encapsulating all of the configuration that the DataNode loads at startup time.
- ErrorReportAction: An ErrorReportAction is an instruction issued by BPOfferService to BPServiceActor about a particular block, encapsulated in errorMessage.
- FaultInjectorFileIoEvents: Injects faults in the metadata and data related operations on datanode volumes.
- FileIoProvider: This class abstracts out various file IO operations performed by the DataNode and invokes profiling (for collecting stats) and fault injection (for testing) event hooks before and after each file IO.
- FileIoProvider.OPERATION: Lists the types of file system operations.
- FinalizedProvidedReplica: This class is used for provided replicas that are finalized.
- FinalizedReplica: This class describes a replica that has been finalized.
- FSCachingGetSpaceUsed: Fast and accurate class to tell how much space HDFS is using.
- FSCachingGetSpaceUsed.Builder: The builder class.
- LocalReplica: This class is used for all replicas which are on local storage media and hence are backed by files.
- LocalReplica.ReplicaDirInfo
- LocalReplicaInPipeline: This class defines a replica in a pipeline, which includes a persistent replica being written to by a dfs client, or a temporary replica being replicated by a source datanode or copied for balancing purposes.
- ProvidedReplica: This abstract class is used as a base class for provided replicas.
- Replica: This represents block replicas which are stored in a DataNode.
- ReplicaAlreadyExistsException: Exception indicating that the target block already exists and is not set to be recovered/overwritten.
- ReplicaBeingWritten: This class represents replicas being written.
- ReplicaBuilder: This class is to be used as a builder for ReplicaInfo objects.
- ReplicaHandler: This class includes a replica being actively written and the reference to the fs volume where this replica is located.
- ReplicaInfo: This class is used by datanodes to maintain metadata of their replicas.
- ReplicaInPipeline: This defines the interface of a replica in a pipeline that is being written to.
- ReplicaNotFoundException: Exception indicating that the DataNode does not have a replica that matches the target block.
- ReplicaUnderRecovery: This class represents replicas that are under block recovery. It has a recovery id that is equal to the generation stamp that the replica will be bumped to after recovery. The recovery id is used to handle multiple concurrent block recoveries.
- ReplicaWaitingToBeRecovered: This class represents a replica that is waiting to be recovered.
- ReportBadBlockAction: ReportBadBlockAction is an instruction issued by BPOfferService to BPServiceActor to report a bad block to the namenode.
- SecureDataNodeStarter: Utility class to start a datanode in a secure cluster, first obtaining privileged resources before main startup and handing them to the datanode.
- SecureDataNodeStarter.SecureResources: Stash of resources needed for datanode operation in a secure environment.
- ShortCircuitRegistry: Manages client short-circuit memory segments on the DataNode.
- ShortCircuitRegistry.NewShmInfo
- ShortCircuitRegistry.RegisteredShm
- ShortCircuitRegistry.Visitor
- StorageLocation: Encapsulates the URI and storage medium that together describe a storage directory.
- StorageLocation.CheckContext: Class to hold the parameters for running StorageLocation.check(org.apache.hadoop.hdfs.server.datanode.StorageLocation.CheckContext).
- UnexpectedReplicaStateException: Exception indicating that the replica is in an unexpected state.
- VolumeScanner: VolumeScanner scans a single volume.
- VolumeScannerCBInjector: Used for injecting callbacks in VolumeScanner and BlockScanner tests.
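The chunk arithmetic behind the ChunkChecksum example above (length = 1023, checksum covering the last 512-byte chunk) can be illustrated with a standalone sketch. The helper below is hypothetical and not part of Hadoop; it only assumes HDFS's default checksum chunk size of 512 bytes (dfs.bytes-per-checksum):

```java
public class ChunkArithmetic {
    // HDFS default checksum chunk size (dfs.bytes-per-checksum).
    static final int BYTES_PER_CHECKSUM = 512;

    // Offset in the block where the last (possibly partial) chunk begins.
    // Using (length - 1) keeps a block that is an exact multiple of the
    // chunk size from pointing one chunk past its own data.
    static long lastChunkOffset(long blockLength) {
        return ((blockLength - 1) / BYTES_PER_CHECKSUM) * BYTES_PER_CHECKSUM;
    }

    public static void main(String[] args) {
        // The example from the ChunkChecksum description: length = 1023.
        long length = 1023;
        long offset = lastChunkOffset(length);
        // The final 4-byte checksum covers the partial chunk starting at
        // byte 512, i.e. bytes 512..1022 of the 1023-byte block.
        System.out.println("last chunk starts at " + offset); // prints 512
    }
}
```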