Apache Hadoop 3.4.1 Release Notes

These release notes cover new developer and user-facing incompatibilities, important issues, features, and major improvements.


S3 Select is no longer supported through the S3A connector.


Users who want to load custom implementations of AWS Credential Providers through user-provided jars can set fs.s3a.extensions.isolated.classloader to false.
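
A minimal sketch of setting this option programmatically (the class name is illustrative; in deployments the property is normally set in core-site.xml):

    import org.apache.hadoop.conf.Configuration;

    public class S3ACredentialProviderClassloaderExample {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Disable the isolated classloader so that credential provider
        // implementations packaged in user-supplied jars are loaded from
        // the application classpath.
        conf.setBoolean("fs.s3a.extensions.isolated.classloader", false);
        System.out.println(conf.get("fs.s3a.extensions.isolated.classloader"));
      }
    }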


Maven/Ivy imports of hadoop-common are less likely to end up with log4j versions on their classpath.


PositionedReadable.readVectored() will read incorrect data when reading from HDFS, Azure ABFS and other stores when given a direct buffer allocator.

For cross-version compatibility, use on-heap buffer allocators only.
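
A minimal sketch of a vectored read that follows this guidance, assuming a file path is passed on the command line; the offsets and lengths below are purely illustrative:

    import java.nio.ByteBuffer;
    import java.util.Arrays;
    import java.util.List;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileRange;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class OnHeapVectoredReadExample {
      public static void main(String[] args) throws Exception {
        Path file = new Path(args[0]);
        Configuration conf = new Configuration();
        try (FileSystem fs = FileSystem.get(file.toUri(), conf);
             FSDataInputStream in = fs.open(file)) {
          List<FileRange> ranges = Arrays.asList(
              FileRange.createFileRange(0, 4096),
              FileRange.createFileRange(65536, 4096));
          // Pass an on-heap allocator (ByteBuffer::allocate); avoid
          // ByteBuffer::allocateDirect until the direct-buffer issue is fixed.
          in.readVectored(ranges, ByteBuffer::allocate);
          for (FileRange range : ranges) {
            ByteBuffer data = range.getData().get();  // wait for this range
            System.out.println("Read " + data.remaining()
                + " bytes at offset " + range.getOffset());
          }
        }
      }
    }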


Apache HttpClient 4.5.x is available as a new implementation of HTTP connections in the ABFS connector; it supports a large, configurable pool of connections along with the ability to limit their lifespan.

The networking library can be chosen using the configuration option fs.azure.networking.library.

The supported values are:
- JDK_HTTP_URL_CONNECTION: use the JDK networking library [Default]
- APACHE_HTTP_CLIENT: use Apache HttpClient

Important: when the networking library is switched to the Apache HTTP client, the Apache httpcore and httpclient jars must be on the classpath.
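
A minimal sketch of switching the library programmatically (the property is normally set in core-site.xml; the jars mentioned above must then be available):

    import org.apache.hadoop.conf.Configuration;

    public class AbfsNetworkingLibraryExample {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Select Apache HttpClient for ABFS connections; leave unset or use
        // JDK_HTTP_URL_CONNECTION to keep the default JDK library.
        conf.set("fs.azure.networking.library", "APACHE_HTTP_CLIENT");
        System.out.println(conf.get("fs.azure.networking.library"));
      }
    }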


S3A upload operations can now recover from failures where the store returns a 500 error. There is an option to control whether the S3A client itself retries on a 50x error other than 503 throttling events (which are processed independently, as before). Option: fs.s3a.retry.http.5xx.errors. Default: true.
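
A minimal sketch of turning the new retry behaviour off (the option name and default come from this note; the class name is illustrative):

    import org.apache.hadoop.conf.Configuration;

    public class S3ARetry5xxExample {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Stop the S3A client retrying 50x errors itself; 503 throttling
        // events are still handled separately. The default is true.
        conf.setBoolean("fs.s3a.retry.http.5xx.errors", false);
        System.out.println(conf.get("fs.s3a.retry.http.5xx.errors"));
      }
    }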