public class LocalDirsHandlerService extends org.apache.hadoop.service.AbstractService implements HealthReporter
| Constructor and Description | 
|---|
| LocalDirsHandlerService() | 
| LocalDirsHandlerService(NodeManagerMetrics nodeManagerMetrics) | 
| Modifier and Type | Method and Description | 
|---|---|
| boolean | areDisksHealthy() The minimum fraction of disks that must be healthy for the node to be considered healthy in terms of disks is configured using YarnConfiguration.NM_MIN_HEALTHY_DISKS_FRACTION, with a default value of YarnConfiguration.DEFAULT_NM_MIN_HEALTHY_DISKS_FRACTION. | 
| void | checkDirs() | 
| void | deregisterLocalDirsChangeListener(DirectoryCollection.DirsChangeListener listener) | 
| void | deregisterLogDirsChangeListener(DirectoryCollection.DirsChangeListener listener) | 
| Iterable<org.apache.hadoop.fs.Path> | getAllLocalPathsForRead(String pathStr) | 
| List<String> | getDiskFullLocalDirs() | 
| List<String> | getDiskFullLogDirs() | 
| String | getDisksHealthReport(boolean listGoodDirs) Function to generate a report on the state of the disks. | 
| String | getHealthReport() Returns output from health check. | 
| long | getLastDisksCheckTime() | 
| long | getLastHealthReportTime() Returns time stamp when node health check was last run. | 
| List<String> | getLocalDirs() | 
| List<String> | getLocalDirsForCleanup() Function to get the local dirs which should be considered when cleaning up resources. | 
| List<String> | getLocalDirsForRead() Function to get the local dirs which should be considered for reading existing files on disk. | 
| org.apache.hadoop.fs.Path | getLocalPathForRead(String pathStr) | 
| org.apache.hadoop.fs.Path | getLocalPathForWrite(String pathStr) | 
| org.apache.hadoop.fs.Path | getLocalPathForWrite(String pathStr, long size, boolean checkWrite) | 
| List<String> | getLogDirs() | 
| List<String> | getLogDirsForCleanup() Function to get the log dirs which should be considered when cleaning up resources. | 
| List<String> | getLogDirsForRead() Function to get the log dirs which should be considered for reading existing files on disk. | 
| org.apache.hadoop.fs.Path | getLogPathForWrite(String pathStr, boolean checkWrite) | 
| org.apache.hadoop.fs.Path | getLogPathToRead(String pathStr) | 
| boolean | isGoodLocalDir(String path) | 
| boolean | isGoodLogDir(String path) | 
| boolean | isHealthy() Gets whether the node is healthy or not. | 
| void | registerLocalDirsChangeListener(DirectoryCollection.DirsChangeListener listener) | 
| void | registerLogDirsChangeListener(DirectoryCollection.DirsChangeListener listener) | 
| protected void | serviceInit(org.apache.hadoop.conf.Configuration config) Method which initializes the timer task and its interval time. | 
| protected void | serviceStart() Method used to start the disk health monitoring, if enabled. | 
| protected void | serviceStop() Method used to terminate the disk health monitoring service. | 
| protected void | updateMetrics() | 
| static String[] | validatePaths(String[] paths) | 
Methods inherited from class org.apache.hadoop.service.AbstractService: close, getBlockers, getConfig, getFailureCause, getFailureState, getLifecycleHistory, getName, getServiceState, getStartTime, init, isInState, noteFailure, putBlocker, registerGlobalListener, registerServiceListener, removeBlocker, setConfig, start, stop, toString, unregisterGlobalListener, unregisterServiceListener, waitForServiceToStop

public LocalDirsHandlerService()
public LocalDirsHandlerService(NodeManagerMetrics nodeManagerMetrics)
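
Both constructors simply create the service; as with any AbstractService, configuration is applied through init(...) and disk health monitoring begins with start(). A minimal usage sketch, assuming a YarnConfiguration whose yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs point at purely illustrative paths (this is not code from the NodeManager itself):

```java
import java.util.List;

import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService;

public class LocalDirsHandlerExample {
  public static void main(String[] args) throws Exception {
    // Illustrative disk locations; real deployments usually list one dir per disk.
    YarnConfiguration conf = new YarnConfiguration();
    conf.set(YarnConfiguration.NM_LOCAL_DIRS, "/data1/nm-local,/data2/nm-local");
    conf.set(YarnConfiguration.NM_LOG_DIRS, "/data1/nm-log,/data2/nm-log");

    // Standard AbstractService lifecycle: init drives serviceInit, start drives serviceStart.
    LocalDirsHandlerService dirsHandler = new LocalDirsHandlerService();
    dirsHandler.init(conf);
    dirsHandler.start();
    try {
      List<String> localDirs = dirsHandler.getLocalDirs();
      List<String> logDirs = dirsHandler.getLogDirs();
      System.out.println("Usable local dirs: " + localDirs);
      System.out.println("Usable log dirs:   " + logDirs);
    } finally {
      // stop drives serviceStop, which terminates the disk health monitoring.
      dirsHandler.stop();
    }
  }
}
```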
protected void serviceInit(org.apache.hadoop.conf.Configuration config) throws Exception
Overrides: serviceInit in class org.apache.hadoop.service.AbstractService
Throws: Exception

protected void serviceStart() throws Exception
Overrides: serviceStart in class org.apache.hadoop.service.AbstractService
Throws: Exception

protected void serviceStop() throws Exception
Overrides: serviceStop in class org.apache.hadoop.service.AbstractService
Throws: Exception

public void registerLocalDirsChangeListener(DirectoryCollection.DirsChangeListener listener)

public void registerLogDirsChangeListener(DirectoryCollection.DirsChangeListener listener)

public void deregisterLocalDirsChangeListener(DirectoryCollection.DirsChangeListener listener)

public void deregisterLogDirsChangeListener(DirectoryCollection.DirsChangeListener listener)
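
The four methods above let other NodeManager components subscribe to changes in the set of usable directories. A minimal sketch continuing the lifecycle example, assuming DirectoryCollection.DirsChangeListener (from the same org.apache.hadoop.yarn.server.nodemanager package) exposes a single onDirsChanged() callback:

```java
// Continues the sketch above; DirectoryCollection is assumed to be imported
// from org.apache.hadoop.yarn.server.nodemanager.
DirectoryCollection.DirsChangeListener localDirsListener =
    new DirectoryCollection.DirsChangeListener() {
      @Override
      public void onDirsChanged() {
        // Assumed single callback: re-read the current set of good local dirs.
        System.out.println("Local dirs changed: " + dirsHandler.getLocalDirs());
      }
    };

dirsHandler.registerLocalDirsChangeListener(localDirsListener);
// ... later, when this component no longer needs notifications:
dirsHandler.deregisterLocalDirsChangeListener(localDirsListener);
```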
public List<String> getLocalDirs()
public List<String> getLogDirs()
public List<String> getDiskFullLocalDirs()
public List<String> getDiskFullLogDirs()
public List<String> getLocalDirsForRead()
public List<String> getLocalDirsForCleanup()
public List<String> getLogDirsForRead()
public List<String> getLogDirsForCleanup()
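
The getters above expose several views of the directory sets: the currently healthy dirs, the dirs removed only because their disk is full, and the broader sets intended for reading existing files or cleaning up resources. A short sketch, continuing with the same dirsHandler instance and using only the local-dir variants (the log-dir variants mirror them):

```java
// Directories currently considered healthy for new writes.
List<String> goodLocalDirs = dirsHandler.getLocalDirs();

// Directories taken out of rotation because their disk is full.
List<String> fullLocalDirs = dirsHandler.getDiskFullLocalDirs();

// Directories that should be considered when reading existing files on disk.
List<String> readableLocalDirs = dirsHandler.getLocalDirsForRead();

// Directories that should be considered when cleaning up resources.
List<String> cleanupLocalDirs = dirsHandler.getLocalDirsForCleanup();

System.out.printf("good=%d full=%d readable=%d cleanup=%d%n",
    goodLocalDirs.size(), fullLocalDirs.size(),
    readableLocalDirs.size(), cleanupLocalDirs.size());
```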
public String getDisksHealthReport(boolean listGoodDirs)
Function to generate a report on the state of the disks.
Parameters: listGoodDirs - flag to determine whether the report should describe the state of good dirs or of failed dirs

public String getHealthReport()
Returns output from health check.
Specified by: getHealthReport in interface HealthReporter

public boolean areDisksHealthy()
The minimum fraction of disks that must be healthy for the node to be considered healthy in terms of disks is configured using YarnConfiguration.NM_MIN_HEALTHY_DISKS_FRACTION, with a default value of YarnConfiguration.DEFAULT_NM_MIN_HEALTHY_DISKS_FRACTION.

public boolean isHealthy()
Gets whether the node is healthy or not.
Specified by: isHealthy in interface HealthReporter

public long getLastDisksCheckTime()

public long getLastHealthReportTime()
Returns time stamp when node health check was last run.
Specified by: getLastHealthReportTime in interface HealthReporter

public boolean isGoodLocalDir(String path)

public boolean isGoodLogDir(String path)
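
Because LocalDirsHandlerService implements HealthReporter, a caller such as the NodeManager health checker can combine the verdict methods with the textual report. A hedged sketch of that pattern, continuing the earlier example (passing false is assumed here to request the failed-dirs view of the report):

```java
// Node-level verdict from HealthReporter plus the disk-specific check,
// which applies the YarnConfiguration.NM_MIN_HEALTHY_DISKS_FRACTION threshold.
if (!dirsHandler.isHealthy() || !dirsHandler.areDisksHealthy()) {
  // listGoodDirs = false: assumed to report the failed dirs rather than the good ones.
  String report = dirsHandler.getDisksHealthReport(false);
  long lastReportTime = dirsHandler.getLastHealthReportTime();
  System.err.println("Node reported unhealthy at " + lastReportTime + ": " + report);
}
```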
@VisibleForTesting public void checkDirs()
public org.apache.hadoop.fs.Path getLocalPathForWrite(String pathStr) throws IOException
Throws: IOException

public org.apache.hadoop.fs.Path getLocalPathForWrite(String pathStr, long size, boolean checkWrite) throws IOException
Throws: IOException

public org.apache.hadoop.fs.Path getLocalPathForRead(String pathStr) throws IOException
Throws: IOException

public Iterable<org.apache.hadoop.fs.Path> getAllLocalPathsForRead(String pathStr) throws IOException
Throws: IOException

public org.apache.hadoop.fs.Path getLogPathForWrite(String pathStr, boolean checkWrite) throws IOException
Throws: IOException

public org.apache.hadoop.fs.Path getLogPathToRead(String pathStr) throws IOException
Throws: IOException

protected void updateMetrics()
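
The path helpers above resolve a caller-supplied relative path against the configured local and log directories, returning a concrete org.apache.hadoop.fs.Path on one of the disks. A final sketch continuing the earlier example; the relative paths, size, and checkWrite choices below are illustrative assumptions, and every call may throw IOException:

```java
// Pick a writable local directory for a new relative path (illustrative names).
Path scratch = dirsHandler.getLocalPathForWrite("usercache/alice/appcache/app_01");

// Variant that also states the expected size and whether to verify writability.
Path bigFile = dirsHandler.getLocalPathForWrite(
    "usercache/alice/filecache/big.jar", 64L * 1024 * 1024, false);

// Resolve a previously written relative path back to the disk holding it.
Path existing = dirsHandler.getLocalPathForRead("usercache/alice/appcache/app_01");

// Enumerate every local dir that contains the given relative path.
for (Path p : dirsHandler.getAllLocalPathsForRead("usercache/alice/appcache")) {
  System.out.println("Found under: " + p);
}

// Log directories have their own write/read helpers.
Path newLogDir = dirsHandler.getLogPathForWrite("application_01/container_01", false);
Path oldLogDir = dirsHandler.getLogPathToRead("application_01/container_01");
```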