Apache Hadoop 2.7.4 Release Notes

These release notes cover new developer and user-facing incompatibilities, important issues, features, and major improvements.


Fixed the Configuration.getClasses() API to return the default value if the key is not set.
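
A minimal sketch of the corrected behavior; the key name below is made up for illustration:

```java
import org.apache.hadoop.conf.Configuration;

public class GetClassesExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // "my.plugin.classes" is a hypothetical key that is not set anywhere;
    // getClasses() now falls back to the supplied defaults instead of
    // returning null.
    Class<?>[] classes = conf.getClasses("my.plugin.classes", String.class);
    for (Class<?> c : classes) {
      System.out.println(c.getName()); // prints java.lang.String
    }
  }
}
```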


The patch replaces the ‘-namenode’ option with ‘-fs’ for specifying the remote name node against which the benchmark runs. Previously, if ‘-namenode’ was not given, the benchmark ran in standalone mode, ignoring ‘fs.defaultFS’ in the config file even when it pointed to a remote cluster. With this patch the benchmark, like other tools, relies on the ‘fs.defaultFS’ setting, which can be overridden with the ‘-fs’ command-line option, to choose between standalone and remote mode.


The code changes include the following:

- Modified DFSUtil.java in the Apache HDFS project to supply the new parameter ssl.server.exclude.cipher.list.
- Modified HttpServer2.java in the Apache Hadoop Common project to work with the new parameter and to exclude ciphers using the Jetty setExcludeCiphers method.
- Modified the associated test classes to work with the existing code and to cover the new functionality in JUnit.
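
As a sketch of how the new parameter might be supplied (the cipher suite names below are illustrative only, not a recommendation):

```java
import org.apache.hadoop.conf.Configuration;

public class ExcludeCipherExample {
  public static void main(String[] args) {
    // Load the SSL server resource without the default resources;
    // HttpServer2 passes the excluded ciphers on to Jetty.
    Configuration sslConf = new Configuration(false);
    sslConf.addResource("ssl-server.xml");
    sslConf.set("ssl.server.exclude.cipher.list",
        "TLS_ECDHE_RSA_WITH_RC4_128_SHA,"
        + "SSL_RSA_WITH_RC4_128_MD5");
  }
}
```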


Skip blocks smaller than dfs.balancer.getBlocks.min-block-size (default 10 MB) when the balancer asks for a list of blocks.
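
For example, the threshold can be raised in the balancer's configuration; the 20 MB figure here is arbitrary:

```java
import org.apache.hadoop.conf.Configuration;

public class BalancerMinBlockSizeExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Skip blocks smaller than 20 MB instead of the 10 MB default
    // (value is in bytes).
    conf.setLong("dfs.balancer.getBlocks.min-block-size", 20L * 1024 * 1024);
  }
}
```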


Reserved space can be configured independently for different storage types on clusters with heterogeneous storage. The ‘dfs.datanode.du.reserved’ property name can be suffixed with a storage type (i.e. one of ssd, disk, archive or ram_disk). For example, reserved space for RAM_DISK storage can be configured using the property ‘dfs.datanode.du.reserved.ram_disk’. If no storage-type-specific reservation is configured, the value specified by ‘dfs.datanode.du.reserved’ is used for all volumes.
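
A short sketch of the two forms; the byte values are illustrative:

```java
import org.apache.hadoop.conf.Configuration;

public class ReservedSpaceExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Reserve 2 GB per volume on RAM_DISK storage (value in bytes).
    conf.setLong("dfs.datanode.du.reserved.ram_disk", 2L * 1024 * 1024 * 1024);
    // Volumes of any storage type without a specific reservation
    // fall back to the generic key.
    conf.setLong("dfs.datanode.du.reserved", 1L * 1024 * 1024 * 1024);
  }
}
```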


The output of hdfs fsck now also contains information about decommissioning replicas.


Permissions are now checked when moving a file to Trash.


If pipeline recovery fails due to an expired encryption key, the client now attempts to refresh the key and retry.


This change introduces a new configuration key used by the RPC server to decide whether to send a backoff signal to the RPC client when the RPC call queue is full. When the feature is enabled, the RPC server no longer blocks on the processing of RPC requests when the call queue is full, which helps improve quality of service when the service is under heavy load. The configuration key has the format “ipc.#port#.backoff.enable”, where #port# is the port number the RPC server listens on. For example, to enable the feature for the RPC server that listens on port 8020, set ipc.8020.backoff.enable to true.
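
A minimal sketch, using the 8020 example above:

```java
import org.apache.hadoop.conf.Configuration;

public class RpcBackoffExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Enable client backoff for the RPC server listening on port 8020,
    // per the "ipc.#port#.backoff.enable" pattern described above.
    conf.setBoolean("ipc.8020.backoff.enable", true);
  }
}
```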


The last partial chunk checksum is now loaded properly into memory when a finalized/temporary replica is converted to an rbw replica. This ensures that a concurrent reader sees the correct checksum matching the data before the update.


Tomcat 6.0.46 starts to filter weak ciphers, which may affect some old SSL clients; upgrading the SSL client is recommended. Run the SSL client against https://www.howsmyssl.com/a/check to find out its TLS version and cipher suites.


The fix for HDFS-11056 reads the meta file to load the last partial chunk checksum when a block is converted from finalized/temporary to rbw. However, it did not close the file explicitly, which could push the number of open files to the system limit. This JIRA fixes the leak by closing the file explicitly after the meta file is read.
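
A minimal sketch of the pattern behind the fix; the helper below is hypothetical, not the actual HDFS code:

```java
import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;

public class MetaFileReadSketch {
  // Hypothetical helper: read the last partial chunk checksum from a
  // replica's meta file. try-with-resources guarantees the stream is
  // closed even if a read fails, so file descriptors are not leaked.
  static byte[] readLastChunkChecksum(String metaFile, long offset,
                                      int checksumLen) throws IOException {
    try (DataInputStream in =
             new DataInputStream(new FileInputStream(metaFile))) {
      long skipped = in.skip(offset);   // position at the last partial chunk
      if (skipped != offset) {
        throw new IOException("Truncated meta file: " + metaFile);
      }
      byte[] checksum = new byte[checksumLen];
      in.readFully(checksum);
      return checksum;
    } // the stream (and its file descriptor) is closed here
  }
}
```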


Fixed a race condition that caused the VolumeScanner to flag a good replica as bad while the replica was concurrently being written.


The class implementing the s3a filesystem is now defined in core-default.xml. Attempting to instantiate an S3A filesystem instance using a Configuration instance which has not included the default resources will fail. Applications should not be doing this anyway, as doing so loses other critical configuration options needed by the filesystem.
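
A small sketch of the distinction; the bucket name is made up:

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class S3ADefaultsExample {
  public static void main(String[] args) throws Exception {
    // loadDefaults=true (the default) pulls in core-default.xml, which
    // now carries the s3a binding; new Configuration(false) would skip
    // the default resources and the lookup below would fail.
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(URI.create("s3a://example-bucket/"), conf);
    System.out.println(fs.getUri());
  }
}
```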


Allow a block to complete if the number of replicas on live nodes, decommissioning nodes and nodes in maintenance mode satisfies the minimum replication factor. This prevents block recovery from failing when a replica of the last block is being decommissioned; previously, decommissioning could become stuck waiting for the last block to complete. In addition, the file close() operation will no longer fail because the last block is being decommissioned.


Add a new configuration key “dfs.balancer.max-size-to-move” so that Balancer.MAX_SIZE_TO_MOVE becomes configurable.
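
For example, the per-iteration move cap can be raised; the value is in bytes and the 100 GB figure is illustrative:

```java
import org.apache.hadoop.conf.Configuration;

public class BalancerMaxSizeExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Cap how much data the balancer moves per iteration.
    conf.setLong("dfs.balancer.max-size-to-move", 100L * 1024 * 1024 * 1024);
  }
}
```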