This guide provides an overview of the Hadoop Fault Injection (FI) framework for those who will be developing their own faults (aspects).
The idea of fault injection is fairly simple: errors and exceptions are deliberately introduced into an application's logic in order to exercise error-handling paths, achieve higher test coverage, and verify the system's fault tolerance. Different implementations of this idea are available today. Hadoop's FI framework is built on top of Aspect-Oriented Programming (AOP) as implemented by the AspectJ toolkit.
The current implementation of the FI framework assumes that the faults it emulates are non-deterministic in nature. That is, the moment at which a fault occurs is not known in advance; it is decided by a (weighted) coin flip.
Components layout
This piece of the FI framework allows you to set expectations for faults to happen. The settings can be applied either statically (in advance) or at runtime. The desired level of faults can be configured in two ways:

* statically, by setting the fault's probability level in the configuration file before a test run;
* programmatically, by setting the corresponding fi. system property from the test code at runtime (see the test example below).
The ProbabilityModel class is essentially a coin flipper. Its methods draw a random number between 0.0 and 1.0 and check whether it falls between 0.0 and the probability level configured for the fault in question. If it does, the fault occurs.
Thus, to guarantee that a fault happens, set its probability level to 1.0. To completely prevent a fault from happening, set its probability level to 0.0.
Note: The default probability level is 0 (zero) unless it is changed explicitly through the configuration file or at runtime. The name of the default level's configuration parameter is fi.*
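To make the coin-flip behaviour concrete, here is a minimal, self-contained Java sketch of the same idea. It is an illustration only, not the actual implementation: the real checks are performed by org.apache.hadoop.fi.ProbabilityModel (used in the aspect example below), which reads its levels from the FI configuration. The CoinFlipSketch class name is invented for this example.

import java.util.Random;

public class CoinFlipSketch {
  private static final Random RANDOM = new Random();

  // Returns true with the given probability: 0.0 means never, 1.0 means always.
  static boolean shouldInject(double configuredLevel) {
    return RANDOM.nextDouble() < configuredLevel;
  }

  public static void main(String[] args) {
    // Default to 0.0 (fault disabled) when no level has been configured.
    double level = Double.parseDouble(
        System.getProperty("fi.hdfs.datanode.BlockReceiver", "0.0"));
    System.out.println("Inject the fault this time? " + shouldInject(level));
  }
}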
The foundation of Hadoop's FI framework includes a cross-cutting concept implemented by AspectJ. The following basic terms are important to remember:
The following readily available join points are provided by AspectJ:
package org.apache.hadoop.hdfs.server.datanode;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.fi.ProbabilityModel;
import org.apache.hadoop.hdfs.server.datanode.DataNode;
import org.apache.hadoop.util.DiskChecker.*;

import java.io.IOException;
import java.io.OutputStream;
import java.io.DataOutputStream;

/**
 * This aspect takes care of faults injected into the datanode.BlockReceiver
 * class.
 */
public aspect BlockReceiverAspects {
  public static final Log LOG = LogFactory.getLog(BlockReceiverAspects.class);

  public static final String BLOCK_RECEIVER_FAULT = "hdfs.datanode.BlockReceiver";

  pointcut callReceivePacket() :
    call (* OutputStream.write(..))
      && withincode (* BlockReceiver.receivePacket(..))
      // to further limit the application of this aspect a very narrow 'target' can be used as follows
      // && target(DataOutputStream)
      && !within(BlockReceiverAspects+);

  before () throws IOException : callReceivePacket () {
    if (ProbabilityModel.injectCriteria(BLOCK_RECEIVER_FAULT)) {
      LOG.info("Before the injection point");
      Thread.dumpStack();
      throw new DiskOutOfSpaceException("FI: injected fault point at "
          + thisJoinPoint.getStaticPart().getSourceLocation());
    }
  }
}
The aspect has two main parts:

* the pointcut callReceivePacket(), which identifies the specific point in the application's control or data flow where the fault may be injected;
* the before () advice, which will be woven in (see Putting It All Together) just before that point and performs the actual injection.
The pointcut identifies an invocation of java.io.OutputStream's write() method with any number of parameters and any return type. This invocation has to take place within the body of the receivePacket() method of the BlockReceiver class; that method, too, can have any parameters and any return type. Any invocations of write() that happen within the aspect BlockReceiverAspects itself, or within its subtypes, are ignored.
Note 1: This short example doesn't illustrate the fact that you can have more than one injection point per class. In such a case the fault names have to be different if a developer wants to trigger them separately; see the sketch below.
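As an illustration of separately named faults, a second injection point inside the same aspect could look like the following. The fault name hdfs.datanode.BlockReceiver.flush and the flush() pointcut are hypothetical, invented for this sketch only; the fragment assumes it is placed inside the BlockReceiverAspects aspect shown above, which already provides the required imports, the LOG field, and the ProbabilityModel dependency.

  // Hypothetical second injection point with its own fault name so it can be
  // triggered independently of hdfs.datanode.BlockReceiver.
  public static final String BLOCK_RECEIVER_FLUSH_FAULT =
      "hdfs.datanode.BlockReceiver.flush";

  pointcut callFlush() :
    call (* OutputStream.flush(..))
      && withincode (* BlockReceiver.receivePacket(..))
      && !within(BlockReceiverAspects+);

  before () throws IOException : callFlush () {
    if (ProbabilityModel.injectCriteria(BLOCK_RECEIVER_FLUSH_FAULT)) {
      LOG.info("Before the second injection point");
      throw new DiskOutOfSpaceException("FI: injected fault point at "
          + thisJoinPoint.getStaticPart().getSourceLocation());
    }
  }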
Note 2: After the injection step (see Putting It All Together) you can verify that the faults were properly injected by searching for ajc keywords in a disassembled class file.
For the sake of a unified naming convention, the following two types of names are recommended for new aspect development:
Faults (aspects) have to be injected (or woven) into the code before they can be used. Follow these instructions:

* To weave aspects in place use:
% ant injectfaults
* If your advice does not match any join point (for example, because the pointcut is misdefined), the injectfaults target reports a warning similar to:

[iajc] warning at src/test/aop/org/apache/hadoop/hdfs/server/datanode/BlockReceiverAspects.aj:44::0
       advice defined in org.apache.hadoop.hdfs.server.datanode.BlockReceiverAspects
       has not been applied [Xlint:adviceDidNotMatch]
* To build the Hadoop jar with your faults woven in place use:

% ant jar-fault-inject

* To build the test jars with faults woven in place use:

% ant jar-test-fault-inject

* To run the HDFS tests with faults injected use:

% ant run-test-hdfs-fault-inject
Faults can be triggered as follows:
% ant run-test-hdfs -Dfi.hdfs.datanode.BlockReceiver=0.12
To set a certain probability level, for example 25%, for all injected faults at once use:
% ant run-test-hdfs-fault-inject -Dfi.*=0.25
package org.apache.hadoop.fs;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class DemoFiTest {
  public static final String BLOCK_RECEIVER_FAULT = "hdfs.datanode.BlockReceiver";

  @Before
  public void setUp() {
    // Set up the test's environment as required
  }

  @Test
  public void testFI() {
    // This triggers the fault, assuming there is one called 'hdfs.datanode.BlockReceiver'
    System.setProperty("fi." + BLOCK_RECEIVER_FAULT, "0.12");
    //
    // The main logic of your test goes here
    //
    // Now set the level back to 0 (zero) to prevent this fault from happening again
    System.setProperty("fi." + BLOCK_RECEIVER_FAULT, "0.0");
    // or delete its trigger completely
    System.getProperties().remove("fi." + BLOCK_RECEIVER_FAULT);
  }

  @After
  public void tearDown() {
    // Clean up the test environment
  }
}
As you can see, these two approaches do the same thing: passing -Dfi.hdfs.datanode.BlockReceiver=0.12 on the command line and calling System.setProperty in the test both set the probability level of hdfs.datanode.BlockReceiver to 12%. The programmatic approach is more flexible, however, because it allows you to turn a fault off as soon as a test no longer needs it.
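When a test needs the fault to fire every time rather than 12% of the time, it can set the level to 1.0 for the duration of the exercised call and remove the trigger afterwards (see the note about probability levels above). The following sketch uses hypothetical names: DemoFiDeterministicTest and writeOneBlock() stand in for whatever your test actually drives.

package org.apache.hadoop.fs;

import static org.junit.Assert.fail;

import org.apache.hadoop.util.DiskChecker.DiskOutOfSpaceException;
import org.junit.Test;

public class DemoFiDeterministicTest {
  public static final String BLOCK_RECEIVER_FAULT = "hdfs.datanode.BlockReceiver";

  @Test
  public void testFaultAlwaysFires() throws Exception {
    // A level of 1.0 guarantees the fault (see the Probability Model notes above).
    System.setProperty("fi." + BLOCK_RECEIVER_FAULT, "1.0");
    try {
      writeOneBlock(); // hypothetical test action that reaches the instrumented write()
      fail("Expected the injected DiskOutOfSpaceException");
    } catch (DiskOutOfSpaceException expected) {
      // The injected fault surfaced as intended.
    } finally {
      // Remove the trigger so later tests are not affected.
      System.getProperties().remove("fi." + BLOCK_RECEIVER_FAULT);
    }
  }

  /** Placeholder for whatever block-writing logic your test actually exercises. */
  private void writeOneBlock() throws DiskOutOfSpaceException {
    // Test-specific code that drives BlockReceiver.receivePacket() goes here.
  }
}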
These two sources of information are particularly interesting and worth reading:
If you have additional comments or questions for the author check HDFS-435.