public class ReplicaInPipeline extends org.apache.hadoop.hdfs.server.datanode.ReplicaInfo implements ReplicaInPipelineInterface
Nested classes/interfaces inherited from class org.apache.hadoop.hdfs.server.datanode.ReplicaInfo: ReplicaInfo.ReplicaDirInfo

| Constructor and Description |
|---|
| ReplicaInPipeline(long blockId, long genStamp, FsVolumeSpi vol, File dir, long bytesToReserve) - Constructor for a zero length replica. |
| ReplicaInPipeline(ReplicaInPipeline from) - Copy constructor. |
| Modifier and Type | Method and Description |
|---|---|
| boolean | attemptToSetWriter(Thread prevWriter, Thread newWriter) - Attempt to set the writer to a new value. |
| OutputStream | createRestartMetaStream() - Create an output stream to write restart metadata in case of the datanode shutting down for a quick restart. |
| ReplicaOutputStreams | createStreams(boolean isCreate, org.apache.hadoop.util.DataChecksum requestedChecksum) - Create output streams for writing to this replica, one for the block file and one for the CRC file. |
| boolean | equals(Object o) |
| long | getBytesAcked() - Get the number of bytes acked. |
| long | getBytesOnDisk() - Get the number of bytes that have been written to disk. |
| long | getBytesReserved() - Get the number of bytes reserved for this replica on disk. |
| ChunkChecksum | getLastChecksumAndDataLen() - Get the last chunk checksum and the length of the block corresponding to that checksum. |
| long | getOriginalBytesReserved() - Get the number of bytes originally reserved for this replica. |
| HdfsServerConstants.ReplicaState | getState() - Get the replica state. |
| long | getVisibleLength() - Get the number of bytes that are visible to readers. |
| int | hashCode() |
| void | interruptThread() |
| void | releaseAllBytesReserved() - Release any disk space reserved for this replica. |
| void | setBytesAcked(long bytesAcked) - Set the number of bytes that have been acked. |
| void | setLastChecksumAndDataLen(long dataLength, byte[] lastChecksum) - Store the checksum for the last chunk along with the data length. |
| void | stopWriter(long xceiverStopTimeout) - Interrupt the writing thread and wait until it dies. |
| String | toString() |
Methods inherited from class org.apache.hadoop.hdfs.server.datanode.ReplicaInfo: breakHardLinksIfNeeded, getBlockFile, getMetaFile, getNext, getStorageUuid, getVolume, isOnTransientStorage, parseBaseDir, setDir, setNext

Methods inherited from class org.apache.hadoop.hdfs.protocol.Block: appendStringTo, compareTo, filename2id, getBlockId, getBlockId, getBlockName, getGenerationStamp, getGenerationStamp, getNumBytes, isBlockFilename, isMetaFilename, matchingIdAndGenStamp, metaToBlockFile, readFields, readId, set, setBlockId, setGenerationStamp, setNumBytes, toString, write, writeId

Methods inherited from class java.lang.Object: clone, finalize, getClass, notify, notifyAll, wait, wait, wait

Methods inherited from interface ReplicaInPipelineInterface: setNumBytes

public ReplicaInPipeline(long blockId, long genStamp, FsVolumeSpi vol, File dir, long bytesToReserve)
Constructor for a zero length replica.
Parameters: blockId - block id; genStamp - replica generation stamp; vol - volume where replica is located; dir - directory path where block and meta files are located; bytesToReserve - disk space to reserve for this replica, based on the estimated maximum block length.

public ReplicaInPipeline(ReplicaInPipeline from)
Copy constructor.
Parameters: from - where to copy from
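To make the bytesToReserve parameter concrete, here is a minimal, self-contained sketch of the reserve-at-construction pattern. This is not Hadoop code: the Volume class below is a stand-in for FsVolumeSpi, and PipelineReplicaSketch only models how a zero-length replica can claim its estimated maximum block length up front and hand the space back later.

```java
import java.util.concurrent.atomic.AtomicLong;

// Stand-in for FsVolumeSpi: only tracks how much space is spoken for.
class Volume {
  private final AtomicLong reserved = new AtomicLong();
  void reserveSpaceForReplica(long bytes) { reserved.addAndGet(bytes); }
  void releaseReservedSpace(long bytes) { reserved.addAndGet(-bytes); }
  long getReserved() { return reserved.get(); }
}

// Sketch of the reserve-at-construction pattern used by pipeline replicas.
class PipelineReplicaSketch {
  private final Volume volume;
  private long bytesReserved;            // shrinks to zero once released
  private final long originalBytesReserved;

  PipelineReplicaSketch(Volume volume, long bytesToReserve) {
    this.volume = volume;
    this.bytesReserved = bytesToReserve;
    this.originalBytesReserved = bytesToReserve;
    volume.reserveSpaceForReplica(bytesToReserve);  // claim space up front
  }

  long getOriginalBytesReserved() { return originalBytesReserved; }

  // Mirrors releaseAllBytesReserved(): give back whatever is still held.
  void releaseAllBytesReserved() {
    volume.releaseReservedSpace(bytesReserved);
    bytesReserved = 0;
  }

  public static void main(String[] args) {
    Volume vol = new Volume();
    PipelineReplicaSketch replica =
        new PipelineReplicaSketch(vol, 128L * 1024 * 1024); // est. max block length
    System.out.println("reserved: " + vol.getReserved());
    replica.releaseAllBytesReserved();                      // e.g. on finalize/abort
    System.out.println("reserved: " + vol.getReserved());
  }
}
```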
public long getVisibleLength()
Get the number of bytes that are visible to readers.
Specified by: getVisibleLength in interface org.apache.hadoop.hdfs.server.datanode.Replica

public HdfsServerConstants.ReplicaState getState()
Get the replica state.
Specified by: getState in interface org.apache.hadoop.hdfs.server.datanode.Replica

public long getBytesAcked()
Get the number of bytes acked.
Specified by: getBytesAcked in interface ReplicaInPipelineInterface

public void setBytesAcked(long bytesAcked)
Set the number of bytes that have been acked.
Specified by: setBytesAcked in interface ReplicaInPipelineInterface
Parameters: bytesAcked - number of bytes acked
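The byte counters are easy to conflate, so here is a hedged, self-contained sketch of the ordering they are generally expected to maintain while a block is being written: bytes are flushed to the block file first and acked afterwards, so the acked count should not run ahead of the on-disk count. This illustrates the invariant only; it is not the actual datanode bookkeeping.

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of acked/on-disk byte accounting for a replica in a write pipeline.
class ByteAccounting {
  private final AtomicLong bytesOnDisk = new AtomicLong();
  private final AtomicLong bytesAcked = new AtomicLong();

  // Called after a packet's data has been flushed to the block file.
  void onPacketFlushed(long newBytesOnDisk) {
    bytesOnDisk.set(newBytesOnDisk);
  }

  // Called when a downstream ack for the packet arrives.
  void onPacketAcked(long ackedUpTo) {
    if (ackedUpTo > bytesOnDisk.get()) {
      throw new IllegalStateException("acked beyond what is on disk");
    }
    bytesAcked.set(ackedUpTo);
  }

  long getBytesAcked()  { return bytesAcked.get(); }
  long getBytesOnDisk() { return bytesOnDisk.get(); }

  public static void main(String[] args) {
    ByteAccounting acct = new ByteAccounting();
    acct.onPacketFlushed(64 * 1024);  // a packet lands on disk first...
    acct.onPacketAcked(64 * 1024);    // ...and is acked afterwards
    System.out.println(acct.getBytesAcked() + " / " + acct.getBytesOnDisk());
  }
}
```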
public long getBytesOnDisk()
Get the number of bytes that have been written to disk.
Specified by: getBytesOnDisk in interface org.apache.hadoop.hdfs.server.datanode.Replica

public long getBytesReserved()
Get the number of bytes reserved for this replica on disk.
Overrides: getBytesReserved in class org.apache.hadoop.hdfs.server.datanode.ReplicaInfo

public long getOriginalBytesReserved()
Get the number of bytes originally reserved for this replica.
Overrides: getOriginalBytesReserved in class org.apache.hadoop.hdfs.server.datanode.ReplicaInfo

public void releaseAllBytesReserved()
Release any disk space reserved for this replica.
Specified by: releaseAllBytesReserved in interface ReplicaInPipelineInterface
public void setLastChecksumAndDataLen(long dataLength, byte[] lastChecksum)
Store the checksum for the last chunk along with the data length.
Specified by: setLastChecksumAndDataLen in interface ReplicaInPipelineInterface
Parameters: dataLength - number of bytes on disk; lastChecksum - checksum bytes for the last chunk

public ChunkChecksum getLastChecksumAndDataLen()
Get the last chunk checksum and the length of the block corresponding to that checksum.
Specified by: getLastChecksumAndDataLen in interface ReplicaInPipelineInterface

public void interruptThread()
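These two methods exist because the final chunk of a block under construction is usually partial, so its checksum must be kept alongside the exact data length it covers. Below is a self-contained sketch of that pairing; CRC32 stands in for the configured checksum algorithm, and LastChunkState plays the role that ChunkChecksum plays in HDFS. It is an illustration of the idea, not the real implementation.

```java
import java.nio.ByteBuffer;
import java.util.zip.CRC32;

// Sketch: keep the checksum of the last (possibly partial) chunk together
// with the data length it corresponds to, so the tail of the block can be
// verified or resumed consistently.
class LastChunkState {
  private long dataLength;      // bytes covered, including the partial chunk
  private byte[] lastChecksum;  // checksum bytes of that partial chunk

  synchronized void set(long dataLength, byte[] lastChecksum) {
    this.dataLength = dataLength;
    this.lastChecksum = lastChecksum;
  }

  synchronized long getDataLength() { return dataLength; }
  synchronized byte[] getLastChecksum() { return lastChecksum; }

  public static void main(String[] args) {
    byte[] partialChunk = "tail of the block".getBytes();
    CRC32 crc = new CRC32();
    crc.update(partialChunk);
    byte[] checksum = ByteBuffer.allocate(4).putInt((int) crc.getValue()).array();

    LastChunkState state = new LastChunkState();
    state.set(partialChunk.length, checksum);  // mirrors setLastChecksumAndDataLen
    System.out.println("covered bytes: " + state.getDataLength());
  }
}
```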
public boolean equals(Object o)
Overrides: equals in class org.apache.hadoop.hdfs.protocol.Block

public boolean attemptToSetWriter(Thread prevWriter, Thread newWriter)
Attempt to set the writer to a new value.

public void stopWriter(long xceiverStopTimeout) throws IOException
Interrupt the writing thread and wait until it dies.
Throws: IOException - if the waiting is interrupted

public int hashCode()
Overrides: hashCode in class org.apache.hadoop.hdfs.protocol.Block
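attemptToSetWriter and stopWriter together form a hand-off protocol for the single thread allowed to write a replica. Here is a minimal sketch of the same pattern using an AtomicReference<Thread>; the timeout and error handling are simplified relative to the real xceiver logic (the real stopWriter throws IOException), and this is an assumption-level illustration rather than the actual implementation.

```java
import java.util.concurrent.atomic.AtomicReference;

// Sketch of single-writer ownership: compare-and-set the writer thread,
// and stop it by interrupting it and waiting for it to die.
class WriterOwnership {
  private final AtomicReference<Thread> writer = new AtomicReference<>();

  // Mirrors attemptToSetWriter: succeeds only if prevWriter is still current.
  boolean attemptToSetWriter(Thread prevWriter, Thread newWriter) {
    return writer.compareAndSet(prevWriter, newWriter);
  }

  // Mirrors stopWriter: interrupt the current writer and join it,
  // giving up after the supplied timeout.
  void stopWriter(long timeoutMillis) throws InterruptedException {
    Thread current = writer.get();
    if (current == null || current == Thread.currentThread()) {
      return;  // nothing to stop, or we are the writer ourselves
    }
    current.interrupt();
    current.join(timeoutMillis);
    if (current.isAlive()) {
      throw new IllegalStateException("writer did not exit in time");
    }
  }

  public static void main(String[] args) throws InterruptedException {
    WriterOwnership owner = new WriterOwnership();
    Thread w = new Thread(() -> {
      try { Thread.sleep(60_000); } catch (InterruptedException ignored) { }
    });
    owner.attemptToSetWriter(null, w);  // claim the writer slot
    w.start();
    owner.stopWriter(1_000);            // interrupt and wait for it to die
    System.out.println("writer alive: " + w.isAlive());
  }
}
```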
public ReplicaOutputStreams createStreams(boolean isCreate, org.apache.hadoop.util.DataChecksum requestedChecksum) throws IOException
Create output streams for writing to this replica, one for the block file and one for the CRC file.
Specified by: createStreams in interface ReplicaInPipelineInterface
Parameters: isCreate - if it is for creation; requestedChecksum - the checksum the writer would prefer to use
Throws: IOException - if any error occurs
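createStreams returning a pair of streams reflects the on-disk layout: every block file has a sibling meta (CRC) file. Below is a hedged, self-contained sketch of writing data and per-chunk checksums through such a pair; plain java.io streams and CRC32 stand in for ReplicaOutputStreams and DataChecksum, and the file names are placeholders.

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.util.zip.CRC32;

// Sketch: one stream for block data, one for per-chunk checksums,
// mirroring the block-file/meta-file pair behind createStreams.
class PairedReplicaStreams implements AutoCloseable {
  private final OutputStream dataOut;
  private final OutputStream checksumOut;

  PairedReplicaStreams(String blockFile, String metaFile) throws IOException {
    this.dataOut = new FileOutputStream(blockFile);
    this.checksumOut = new FileOutputStream(metaFile);
  }

  // Write one chunk of data plus its 4-byte CRC to the checksum stream.
  void writeChunk(byte[] chunk) throws IOException {
    dataOut.write(chunk);
    CRC32 crc = new CRC32();
    crc.update(chunk);
    checksumOut.write(ByteBuffer.allocate(4).putInt((int) crc.getValue()).array());
  }

  @Override public void close() throws IOException {
    dataOut.close();
    checksumOut.close();
  }

  public static void main(String[] args) throws IOException {
    // Placeholder file names, not real datanode paths.
    try (PairedReplicaStreams streams =
             new PairedReplicaStreams("blk_1073741825", "blk_1073741825.meta")) {
      streams.writeChunk("hello, pipeline".getBytes());
    }
  }
}
```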
public OutputStream createRestartMetaStream() throws IOException
Create an output stream to write restart metadata in case of the datanode shutting down for a quick restart.
Specified by: createRestartMetaStream in interface ReplicaInPipelineInterface
Throws: IOException - if any error occurs

Copyright © 2017 Apache Software Foundation. All Rights Reserved.