Apache Hadoop 0.18.0 Release Notes

These release notes cover new developer and user-facing incompatibilities, important issues, features, and major improvements.


Changed streaming tasks to adhere to the task timeout value specified in the job configuration.


Modified HOD to include the RPC port of the JobTracker in the ‘notes’ attribute of the resource manager. The RPC port is included as the string ‘Mapred RPC Port:<port number>’. Tools that depend on the value of the notes attribute must change to parse this new value.


Modified logcondense.py to use the new format of hadoop dfs -lsr output. This version of logcondense will not work with previous versions of Hadoop and is therefore incompatible.


Changed FileListed to getNumGetListingOps and added CreateFileOps, DeleteFileOps and AddBlockOps metrics.


Simplified the generation stamp upgrade by making it a local upgrade on datanodes. Deleted the distributed upgrade.


WARNING: No release note provided for this incompatible change.


Modified HOD to automatically create a cluster directory if the one specified with the script command does not exist.


Changed the Map-Reduce framework to no longer create the temporary task output directory for staging outputs (${mapred.out.dir}/_temporary/_${taskid}) when staging is not necessary.


Fixed KFS to have read() read and return 1 byte instead of 4.


Modified HOD to generate the dfs.datanode.ipc.address parameter in the hadoop-site.xml of datanodes that it launches.


Separated Distcp, Logalyzer and Archiver into a tools jar.


Changed the default value of dfs.blockreport.initialDelay to be 0 seconds.


Modified HOD to create a cluster directory if one does not exist and to auto-deallocate a cluster while reallocating it, if it is already dead.


Implemented a mechanism to transfer HOD errors that occur on compute nodes to the submit node running the HOD client, so users have good feedback on why an allocation failed.


Created SequenceFileAsBinaryOutputFormat to write raw bytes as keys and values to a SequenceFile.
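
For illustration, a minimal sketch of wiring this into a job with the 0.18 mapred API (the wrapper class, setup method and BytesWritable key/value choice are illustrative):

    import org.apache.hadoop.io.BytesWritable;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat;

    public class BinaryOutputExample {
        // Configure a job to write raw bytes as both keys and values.
        public static void configure(JobConf conf) {
            conf.setOutputFormat(SequenceFileAsBinaryOutputFormat.class);
            conf.setOutputKeyClass(BytesWritable.class);
            conf.setOutputValueClass(BytesWritable.class);
        }
    }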


Changed the output of the “fs -ls” command to more closely match familiar Linux format. Applications that parse the command output should be reviewed.


Changed the exit status of fsck to report whether the file system is healthy or corrupt.


Increased the size of the buffer used in the communication between the Java task and the Streaming process to 128KB.


Changed the shuffle scheduler policy to wait for notifications from shuffle threads before scheduling more fetches.


Removed the public class org.apache.hadoop.mapred.JobShell. Command line options -libjars, -files and -archives are moved to GenericCommands. Thus applications have to implement org.apache.hadoop.util.Tool to use the options.
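
A minimal Tool skeleton (the class name and the empty run() body are placeholders); ToolRunner consumes the generic options before run() is invoked:

    import org.apache.hadoop.conf.Configured;
    import org.apache.hadoop.util.Tool;
    import org.apache.hadoop.util.ToolRunner;

    public class MyJob extends Configured implements Tool {
        public int run(String[] args) throws Exception {
            // -libjars, -files and -archives have already been stripped here;
            // only application-specific arguments remain in args.
            return 0;
        }

        public static void main(String[] args) throws Exception {
            System.exit(ToolRunner.run(new MyJob(), args));
        }
    }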


Refactored previously public classes MapTaskStatus, ReduceTaskStatus, JobSubmissionProtocol, CompletedJobStatusStore to be package local.


Removed deprecated ClientProtocol.abandonFileInProgress().


Set default value for configuration property “stream.non.zero.exit.status.is.failure” to be “true”.


Modified HOD client to look for specific messages related to resource limit overruns and take appropriate actions - such as either failing to allocate the cluster, or issuing a warning to the user. A tool is provided, specific to Maui and Torque, that will set these specific messages.


Improved the shuffle so that all fetched map outputs are kept in memory before being merged: the shuffle stalls so that the in-memory merge can execute and free up memory for further fetches.


Added support for hexadecimal values in Configuration.
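
For illustration, assuming the hexadecimal support applies to the numeric getters such as getInt() (the property name below is hypothetical):

    import org.apache.hadoop.conf.Configuration;

    public class HexConfExample {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            conf.set("example.bit.mask", "0xff");                    // hypothetical property
            System.out.println(conf.getInt("example.bit.mask", 0)); // prints 255
        }
    }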


Improved failure handling of last Data Node in write pipeline.


Added a log4j appender that emits events from FSNamesystem for audit logging.


Changed the format of the file system image so that it no longer stores the locations of the last block.


Changed fetchOutputs() so that the LocalFSMerger and InMemFSMergeThread threads are spawned only once. The threads are notified when something is ready for merge, and the merge happens when thresholds are met.


Changed the default port for “hdfs:” URIs to be 8020, so that one may simply use URIs of the form “hdfs://example.com/dir/file”.


Implemented Lease Recovery to sync the last block of a file. Added ClientDatanodeProtocol for clients to trigger block recovery. Changed DatanodeProtocol to support block synchronization. Changed InterDatanodeProtocol to support block update.


Introduced archive feature to Hadoop. A Map/Reduce job can be run to create an archive with indexes. A FileSystem abstraction is provided over the archive.
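
Once an archive exists, it can be read through the ordinary FileSystem API; a sketch, assuming an archive named foo.har and the har: URI scheme (the path layout shown is illustrative):

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HarListExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Path har = new Path("har:///user/alice/foo.har"); // illustrative location
            FileSystem fs = FileSystem.get(URI.create(har.toString()), conf);
            // List the archive's contents through the FileSystem abstraction.
            for (FileStatus stat : fs.listStatus(har)) {
                System.out.println(stat.getPath());
            }
        }
    }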


Changed the TextInputFormat and KeyValueTextInputFormat classes to initialize the compressionCodecs member variable before dereferencing it.


Added an IPC server in DataNode and a new IPC protocol InterDatanodeProtocol. Added conf properties dfs.datanode.ipc.address and dfs.datanode.handler.count with defaults “0.0.0.0:50020” and 3, respectively. Changed the serialization in DatanodeRegistration and DatanodeInfo, and therefore, updated the versionID in ClientProtocol, DatanodeProtocol, NamenodeProtocol.


Removed the deprecated API getFileCacheHints.


Introduced an FTPFileSystem backed by Apache Commons FTPClient to directly store data into HDFS.


Changed the ‘du’ command to run in a separate thread so that it does not block the user.


Added command line tool “job -counter <job-id> <group-name> <counter-name>” to access counters.


Changed policy for running combiner. The combiner may be run multiple times as the map’s output is sorted and merged. Additionally, it may be run on the reduce side as data is merged. The old semantics are available in Hadoop 0.18 if the user calls: job.setCombineOnlyOnce(true);


Added org.apache.hadoop.mapred.lib.NLineInputFormat, which splits N lines of input as one split. N can be specified via the configuration property “mapred.line.input.format.linespermap”, which defaults to 1.
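
A sketch of using it with the old mapred API (the wrapper class is illustrative and the value 10 is arbitrary):

    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.lib.NLineInputFormat;

    public class NLineExample {
        public static void configure(JobConf conf) {
            conf.setInputFormat(NLineInputFormat.class);
            // Each map task will receive 10 lines of input instead of the default 1.
            conf.setInt("mapred.line.input.format.linespermap", 10);
        }
    }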


Added reporter to FSNamesystem stateChangeLog, and a new metric to track the number of corrupted replicas.


Introduced directory quotas as hard limits on the number of names in the tree rooted at a directory. An administrator may set quotas on individual directories explicitly. Newly created directories have no associated quota. File and directory creations fail if the quota would be exceeded. An attempt to set a quota fails if the directory would be in violation of the new quota.


Modified HOD to handle master (NameNode or JobTracker) failures on bad nodes by trying to bring them up on another node in the ring. Introduced new property ringmaster.max-master-failures to specify the maximum number of times a master is allowed to fail.


Added a new public interface Syncable which declares the sync() operation. FSDataOutputStream implements Syncable. If the wrappedStream in FSDataOutputStream is Syncable, calling FSDataOutputStream.sync() is equivalent to calling wrappedStream.sync(); otherwise, FSDataOutputStream.sync() is a no-op. Both DistributedFileSystem and LocalFileSystem support the sync() operation.
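
A minimal sketch of the call sequence (the path is illustrative); on HDFS the wrapped stream is Syncable, so sync() forces the buffered bytes out:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class SyncExample {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            FSDataOutputStream out = fs.create(new Path("/tmp/sync-example"));
            out.write("record\n".getBytes());
            out.sync();   // a no-op if the wrapped stream is not Syncable
            out.close();
        }
    }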


Changed the data node to use FileChannel.transferTo() to transfer block data.


Changed job submission protocol to not allow submission if the client’s value of mapred.system.dir does not match the job tracker’s. Deprecated JobConf.getSystemDir(); use JobClient.getSystemDir().


Added a sync() method to FSDataOutputStream to really, really persist data in HDFS. Added InterDatanodeProtocol to implement this feature.


Added overloaded method getFileBlockLocations(FileStatus, long, long). This is an incompatible change for FileSystem implementations which override getFileBlockLocations(Path, long, long). They should have the signature of this method changed to getFileBlockLocations(FileStatus, long, long) to work correctly.
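
A sketch of calling the new overload (taking the path from args is illustrative):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BlockLocationsExample {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            FileStatus stat = fs.getFileStatus(new Path(args[0]));
            // The new overload takes the FileStatus rather than the Path.
            BlockLocation[] locs = fs.getFileBlockLocations(stat, 0, stat.getLen());
            for (BlockLocation loc : locs) {
                System.out.println(loc);
            }
        }
    }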


Introduced ByteWritable and DoubleWritable (implementing WritableComparable) implementations for Byte and Double.


Added FSNamesystem status metrics.


Changed protocol for transferring blocks between data nodes to report corrupt blocks to data node for re-replication from a good replica.


Changed fsck to report corrupt blocks in the system.


Removed property ipc.client.maxidletime from the default configuration. The allowed idle time is twice ipc.client.connection.maxidletime.


Added task’s cwd to its LD_LIBRARY_PATH.


Changed the output of the “fs -ls” command to more closely match familiar Linux format. Additional changes were made by HADOOP-3459. Applications that parse the command output should be reviewed.


Withdrew the upgrade-to-CRC facility. HDFS no longer supports upgrades from versions without CRCs for block data. Users upgrading from version 0.13 or earlier must first upgrade to an intermediate version (0.14, 0.15, 0.16 or 0.17) before upgrading to version 0.18 or later.


Changed fsck to ignore files opened for writing. Introduced new option “-openforwrite” to explicitly show open files.


Associated a generation stamp with each block. On data nodes, the generation stamp is stored as part of the file name of the block’s meta-data file.


Improved management of replicas of the name space image. If all replicas on the Name Node are lost, the latest checkpoint can be loaded from the secondary Name Node. Use the parameter “-importCheckpoint” and specify the location with “fs.checkpoint.dir”. The directory structure on the secondary Name Node has changed to match the primary Name Node.


The current working directory of a task, i.e. ${mapred.local.dir}/taskTracker/jobcache/<jobid>/<task_dir>/work, is cleaned up as soon as the task finishes.


Replaced timeouts with pings to check that client connection is alive. Removed the property ipc.client.timeout from the default Hadoop configuration. Removed the metric RpcOpsDiscardedOPsNum.


Added logging for input splits in job tracker log and job history log. Added web UI for viewing input splits in the job UI and history UI.


Changed “job -kill” to only allow a job that is in the RUNNING or PREP state to be killed.


Reduced in-memory copies of keys and values as they flow through the Map-Reduce framework. Changed the storage of intermediate map outputs to use new IFile instead of SequenceFile for better compression.


Added a “corrupt” flag to LocatedBlock to indicate that all replicas of the block are thought to be corrupt.


Added support for .tar, .tgz and .tar.gz files in DistributedCache. File sizes are limited to 2GB.
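
A sketch of shipping such an archive to the task nodes (the wrapper class and archive path are illustrative):

    import java.net.URI;
    import org.apache.hadoop.filecache.DistributedCache;
    import org.apache.hadoop.mapred.JobConf;

    public class CacheArchiveExample {
        public static void configure(JobConf conf) throws Exception {
            // The .tgz is unpacked into the task's working area on each node.
            DistributedCache.addCacheArchive(new URI("/user/alice/libs.tgz"), conf);
        }
    }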


Provided a new method to update counters: incrCounter(String group, String counter, long amount).
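
A sketch of a mapper using the new overload; group and counter names are created on the fly (the names below are illustrative):

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reporter;

    public class CountingMapper extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, LongWritable> {
        public void map(LongWritable key, Text value,
                        OutputCollector<Text, LongWritable> out,
                        Reporter reporter) throws IOException {
            reporter.incrCounter("MyApp", "RecordsSeen", 1); // no enum needed
            out.collect(value, new LongWritable(1));
        }
    }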


Reduced buffer copies as data is written to HDFS. The order of sending data bytes and control information has changed, but this will not be observed by client applications.


Introduced a way for a streaming process to update global counters and status by emitting information on its stderr stream. Use “reporter:counter:<group>,<counter>,<amount>” to update a counter. Use “reporter:status:<message>” to update status.
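
Since a streaming task can be any executable, here is a pass-through example in Java (group, counter and status text are illustrative):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    public class StreamingCounterExample {
        public static void main(String[] args) throws Exception {
            BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // pass each record through unchanged
                // Update a counter via the stderr protocol described above.
                System.err.println("reporter:counter:MyGroup,LinesSeen,1");
            }
            System.err.println("reporter:status:finished reading input");
        }
    }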


Added support for reading and writing native S3 files. Native S3 files are referenced using s3n URIs. See http://wiki.apache.org/hadoop/AmazonS3 for more details.
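
A sketch of opening a native S3 path (bucket and key are illustrative; the credential property names are assumptions, so verify them against the wiki page above):

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class S3nExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            conf.set("fs.s3n.awsAccessKeyId", "ACCESS_KEY");     // assumed property names
            conf.set("fs.s3n.awsSecretAccessKey", "SECRET_KEY");
            FileSystem fs = FileSystem.get(URI.create("s3n://my-bucket/"), conf);
            System.out.println(fs.exists(new Path("s3n://my-bucket/data.txt")));
        }
    }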


Introduced new classes JobID, TaskID and TaskAttemptID, which should be used instead of their string counterparts. Deprecated functions in JobClient, TaskReport, RunningJob, jobcontrol.Job and TaskCompletionEvent that use string arguments. Applications can use xxxID.toString() and xxxID.forName() methods to convert/restore objects to/from strings.
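
A sketch of the round trip between a typed ID and its string form (the job ID shown is illustrative):

    import org.apache.hadoop.mapred.JobID;

    public class JobIDExample {
        public static void main(String[] args) {
            JobID id = JobID.forName("job_200807180029_0001");
            System.out.println(id.toString()); // prints the same string back
        }
    }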


Changed the connection protocol between the job tracker and task tracker so that a task tracker will not connect to a job tracker with a different build version.


Introduced a FUSE module for HDFS. The module allows HDFS to be mounted as a Unix filesystem and optionally exported to other machines. Writes are disabled. rmdir, mv, mkdir and rm are supported, but not cp, touch, and the like. Usage information is attached to the Jira record.