Update release_docs/HISTORY-1_10.txt and RELEASE.txt after HDF5 1.10.1 release.
lrknox 2017-05-04 17:14:38 -05:00
parent 494029c27d
commit e2ad2751dc
2 changed files with 790 additions and 335 deletions


@@ -3,11 +3,745 @@ HDF5 History
This file contains development history of the HDF5 1.10 branch
03. Release Information for hdf5-1.10.1
02. Release Information for hdf5-1.10.0-patch1
01. Release Information for hdf5-1.10.0
[Search on the string '%%%%' for section breaks of each release.]
%%%%1.10.1%%%%
HDF5 version 1.10.1 released on 2017-04-27
================================================================================
INTRODUCTION
This document describes the differences between HDF5-1.10.0-patch1 and
HDF5 1.10.1, and contains information on the platforms tested and known
problems in HDF5-1.10.1. For more details check the HISTORY*.txt files
in the HDF5 source.
Links to HDF5 1.10.1 source code, documentation, and additional materials can
be found on The HDF5 web page at:
https://support.hdfgroup.org/HDF5/
The HDF5 1.10.1 release can be obtained from:
https://support.hdfgroup.org/HDF5/release/obtain5.html
User documentation for the snapshot can be accessed directly at this location:
https://support.hdfgroup.org/HDF5/doc/
New features in the HDF5-1.10.x release series, including brief general
descriptions of some new and modified APIs, are described in the "New Features
in HDF5 Release 1.10" document:
https://support.hdfgroup.org/HDF5/docNewFeatures/index.html
All new and modified APIs are listed in detail in the "HDF5 Software Changes
from Release to Release" document, in the section "Release 10.1 (current
release) versus Release 1.10.0
https://support.hdfgroup.org/HDF5/doc/ADGuide/Changes.html
If you have any questions or comments, please send them to the HDF Help Desk:
help@hdfgroup.org
CONTENTS
- Major New Features Introduced in HDF5 1.10.1
- Other New Features and Enhancements
- Support for New Platforms, Languages, and Compilers
- Bug Fixes since HDF5-1.10.0-patch1
- Supported Platforms
- Tested Configuration Features Summary
- More Tested Platforms
- Known Problems
Major New Features Introduced in HDF5 1.10.1
============================================
For links to the RFCs and documentation in this section please view
https://support.hdfgroup.org/HDF5/docNewFeatures in a web browser.
________________________________________
Metadata Cache Image
________________________________________
HDF5 metadata is typically small, and scattered throughout the HDF5 file.
This can affect performance, particularly on large HPC systems. The
Metadata Cache Image feature can improve performance by writing the
metadata cache in a single block on file close, and then populating the
cache with the contents of this block on file open, thus avoiding the many
small I/O operations that would otherwise be required on file open and
close. See the RFC for complete details regarding this feature. Also,
see the Fine Tuning the Metadata Cache documentation.
At present, metadata cache images may not be generated by parallel
applications. Parallel applications can read files with metadata cache
images, but since this is a collective operation, a deadlock is possible
if one or more processes do not participate.
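The feature is enabled through the file access property list. A minimal
sketch (illustrative only; it assumes the H5AC_cache_image_config_t fields
and macros declared in H5ACpublic.h for this release, and omits error
checking):

    #include "hdf5.h"

    int main(void)
    {
        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5AC_cache_image_config_t config;

        /* Request that a metadata cache image be written when the file closes. */
        config.version            = H5AC__CURR_CACHE_IMAGE_CONFIG_VERSION;
        config.generate_image     = 1;    /* TRUE: write the cache image on close */
        config.save_resize_status = 0;
        config.entry_ageout       = H5AC__CACHE_IMAGE__ENTRY_AGEOUT__NONE;
        H5Pset_mdc_image_config(fapl, &config);

        hid_t file = H5Fcreate("cache_image.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);
        /* ... create objects; the cache image is written at H5Fclose() and
           read back automatically on the next H5Fopen() ... */
        H5Fclose(file);
        H5Pclose(fapl);
        return 0;
    }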
________________________________________
Metadata Cache Evict on Close
________________________________________
The HDF5 library's metadata cache is fairly conservative about holding on
to HDF5 object metadata (object headers, chunk index structures, etc.),
which can cause the cache size to grow, resulting in memory pressure on
an application or system. The "evict on close" property will cause all
metadata for an object to be evicted from the cache as long as metadata
is not referenced from any other open object. See the Fine Tuning the
Metadata Cache documentation for information on the APIs.
At present, evict on close is disabled in parallel builds.
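Evict on close is set on the file access property list; a brief sketch
(illustrative only, error checking omitted, and assuming a file that already
contains a dataset named "/dataset"):

    #include "hdf5.h"

    int main(void)
    {
        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_evict_on_close(fapl, 1);   /* TRUE: evict an object's metadata on close */

        hid_t file = H5Fopen("example.h5", H5F_ACC_RDWR, fapl);
        hid_t dset = H5Dopen2(file, "/dataset", H5P_DEFAULT);
        /* ... work with the dataset ... */
        H5Dclose(dset);   /* its cached metadata is evicted here, provided no
                             other open object still references it */
        H5Fclose(file);
        H5Pclose(fapl);
        return 0;
    }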
________________________________________
Paged Aggregation
________________________________________
The current HDF5 file space allocation accumulates small pieces of metadata
and raw data in aggregator blocks which are not page aligned and vary
widely in size. The paged aggregation feature was implemented to provide
efficient paged access of these small pieces of metadata and raw data.
See the RFC for details. Also, see the File Space Management documentation.
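Paged aggregation is selected with the file space strategy on the file
creation property list; a brief sketch (illustrative only, with a
hypothetical 4 KiB page size and error checking omitted):

    #include "hdf5.h"

    int main(void)
    {
        hid_t fcpl = H5Pcreate(H5P_FILE_CREATE);

        /* Aggregate small metadata and raw data allocations into fixed-size,
           aligned pages; do not persist free space across file open/close. */
        H5Pset_file_space_strategy(fcpl, H5F_FSPACE_STRATEGY_PAGE, 0, (hsize_t)1);
        H5Pset_file_space_page_size(fcpl, (hsize_t)4096);

        hid_t file = H5Fcreate("paged.h5", H5F_ACC_TRUNC, fcpl, H5P_DEFAULT);
        H5Fclose(file);
        H5Pclose(fcpl);
        return 0;
    }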
________________________________________
Page Buffering
________________________________________
Small and random I/O accesses on parallel file systems result in poor
performance for applications. Page buffering in conjunction with paged
aggregation can improve performance by giving an application control over
the granularity and alignment of HDF5 I/O requests.
See the RFC for details. Also, see the Page Buffering documentation.
At present, page buffering is disabled in parallel builds.
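Page buffering is enabled on the file access property list when opening a
file that uses paged aggregation; a brief sketch (illustrative sizes, error
checking omitted):

    #include "hdf5.h"

    int main(void)
    {
        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);

        /* 1 MiB page buffer; reserve at least 50% of it for metadata pages
           and at least 25% for raw data pages. */
        H5Pset_page_buffer_size(fapl, (size_t)(1024 * 1024), 50, 25);

        /* The file must have been created with the paged aggregation strategy. */
        hid_t file = H5Fopen("paged.h5", H5F_ACC_RDWR, fapl);
        H5Fclose(file);
        H5Pclose(fapl);
        return 0;
    }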
Other New Features and Enhancements
===================================
Library
-------
- Added a mechanism for disabling the SWMR file locking scheme.
The file locking calls used in HDF5 1.10.0 (including patch1)
will fail when the underlying file system does not support file
locking or where locks have been disabled. To disable all file
locking operations, an environment variable named
HDF5_USE_FILE_LOCKING can be set to the five-character string
'FALSE'. This does not fundamentally change HDF5 library
operation (aside from initial file open/create, SWMR is lock-free),
but users will have to be more careful about opening files
to avoid the problematic access patterns (e.g., multiple writers)
that the file locking was designed to prevent. (A usage sketch
appears after this list of library changes.)
Additionally, the error message that is emitted when file lock
operations set errno to ENOSYS (typical when file locking has been
disabled) has been updated to better describe the problem and its
potential resolution.
(DER, 2016/10/26, HDFFV-9918)
- The return type of H5Pget_driver_info() has been changed from void *
to const void *.
The pointer returned by this function points to internal library
memory and should not be freed by the user.
(DER, 2016/11/04, HDFFV-10017)
- The direct I/O VFD has been removed from the list of VFDs that
support SWMR.
This configuration was never officially tested and several SWMR
tests fail when this VFD is set.
(DER, 2016/11/03, HDFFV-10169)
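As a brief illustration of the file locking switch described above (a
sketch for POSIX systems; the variable can equally be exported in the shell
before the application starts):

    #include <stdlib.h>
    #include "hdf5.h"

    int main(void)
    {
        /* Must be set before the library attempts its first file lock. */
        setenv("HDF5_USE_FILE_LOCKING", "FALSE", 1);

        hid_t file = H5Fopen("shared.h5", H5F_ACC_RDONLY, H5P_DEFAULT);
        /* ... avoiding problematic access patterns is now up to the user ... */
        H5Fclose(file);
        return 0;
    }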
Configuration:
--------------
- The minimum version of CMake required to build HDF5 is now 3.2.2.
(ADB, 2017/01/10)
- An --enable/disable-developer-warnings option has been added to
configure.
This option controls warnings that do not indicate poor code quality,
such as -Winline and gcc's -Wsuggest-attribute. Developer warnings are
disabled by default.
(DER, 2017/01/10)
- A bin/restore.sh script was added that reverts autogen.sh processing.
(DER, 2016/11/08)
- CMake: Added NAMESPACE hdf5:: to package configuration files to allow
projects using installed HDF5 binaries built with CMake to link with
them without specifying the HDF5 library location via IMPORTED_LOCATION.
(ABD, 2016/10/17, HDFFV-10003)
- CMake: Changed the CTEST_BUILD_CONFIGURATION option to
CTEST_CONFIGURATION_TYPE as recommended by the CMake documentation.
(ABD, 2016/10/17, HDFFV-9971)
Fortran Library:
----------------
- The HDF5 Fortran library can now be compiled with the NAG compiler.
(MSB, 2017/2/10, HDFFV-9973)
C++ Library:
------------
- The following C++ API wrappers have been added to the C++ Library:
// Sets/Gets the strategy and the threshold value that the library
// will employ in managing file space.
FileCreatPropList::setFileSpaceStrategy - H5Pset_file_space_strategy
FileCreatPropList::getFileSpaceStrategy - H5Pget_file_space_strategy
// Sets/Gets the file space page size for paged aggregation.
FileCreatPropList::setFileSpacePagesize - H5Pset_file_space_page_size
FileCreatPropList::getFileSpacePagesize - H5Pget_file_space_page_size
// Checks if the given ID is valid.
IdComponent::isValid - H5Iis_valid
// Sets/Gets the number of soft or user-defined links that can be
// traversed before a failure occurs.
LinkAccPropList::setNumLinks - H5Pset_nlinks
LinkAccPropList::getNumLinks - H5Pget_nlinks
// Returns a copy of the creation property list of a datatype.
DataType::getCreatePlist - H5Tget_create_plist
// Opens/Closes an object within a group or a file, regardless of object
// type
Group::getObjId - H5Oopen
Group::closeObjId - H5Oclose
// Maps elements of a virtual dataset to elements of the source dataset.
DSetCreatPropList::setVirtual - H5Pset_virtual
// Gets general information about this file.
H5File::getFileInfo - H5Fget_info2
// Returns the number of members in a type.
IdComponent::getNumMembers - H5Inmembers
// Determines if an element type exists.
IdComponent::typeExists - H5Itype_exists
// Determines if an object exists.
H5Location::exists - H5Lexists.
// Returns the header version of an HDF5 object.
H5Object::objVersion - H5Oget_info for version
(BMR, 2017/03/20, HDFFV-10004, HDFFV-10139, HDFFV-10145)
- New exception: ObjHeaderIException for H5O interface.
(BMR, 2017/03/15, HDFFV-10145)
- New class LinkAccPropList for link access property list, to be used by
wrappers of H5Lexists.
(BMR, 2017/01/04, HDFFV-10145)
- New constructors to open datatypes in ArrayType, CompType, DataType,
EnumType, FloatType, IntType, StrType, and VarLenType.
(BMR, 2016/12/26, HDFFV-10056)
- New member functions:
DSetCreatPropList::setNbit() to set up N-bit compression for a dataset.
ArrayType::getArrayNDims() const
ArrayType::getArrayDims() const
both to replace the non-const versions.
(BMR, 2016/04/25, HDFFV-8623, HDFFV-9725)
Tools:
------
- The following options have been added to h5clear:
-s: clear the status_flags field in the file's superblock
-m: Remove the metadata cache image from the file
(QAK, 2017/03/22, PR#361)
High-Level APIs:
---------------
- Added a new Fortran 2003 API for h5tbmake_table_f.
(MSB, 2017/02/10, HDFFV-8486)
Support for New Platforms, Languages, and Compilers
===================================================
- Added NAG compiler
Bug Fixes since HDF5-1.10.0-patch1 release
==========================================
Library
-------
- An outdated data structure was used in H5D_CHUNK_DEBUG blocks, causing
compilation errors when H5D_CHUNK_DEBUG was defined. This is fixed.
(BMR, 2017/04/04, HDFFV-8089)
- The SWMR implementation in the HDF5 1.10.0 and 1.10.0-patch1 releases had a
broken metadata flush dependency that manifested itself with the following
error at the end of the HDF5 error stack:
H5Dint.c line 846 in H5D__swmr_setup(): dataspace chunk index must be 0
for SWMR access, chunkno = 1
major: Dataset
minor: Bad value
It was also reported at https://github.com/areaDetector/ADCore/issues/203
The flush dependency is fixed in this release.
- Changed the plugins dlopen option from RTLD_NOW to RTLD_LAZY
(ABD, 2016/12/12, PR#201)
- A number of issues were fixed when reading/writing from/to corrupted
files to ensure that the library fails gracefully in these cases:
* Writing to a corrupted file that has an object message which is
incorrectly marked as sharable on disk results in a buffer overflow /
invalid write instead of a clean error message.
* Decoding data from a corrupted file with a dataset encoded with the
H5Z_NBIT decoding can result in a code execution vulnerability under
the context of the application using the HDF5 library.
* When decoding an array datatype from a corrupted file, the HDF5 library
fails to return an error in production if the number of dimensions
decoded is greater than the maximum rank.
* When decoding an "old style" array datatype from a corrupted file, the
HDF5 library fails to return an error in production if the number of
dimensions decoded is greater than the maximum rank.
(NAF, 2016/10/06, HDFFV-9950, HDFFV-9951, HDFFV-9992, HDFFV-9993)
- Fixed an error that would occur when copying an object with an attribute
which is a compound datatype consisting of a variable length string.
(VC, 2016/08/24, HDFFV-7991)
- H5DOappend will no longer fail if a dataset has no append callback
registered.
(VC, 2016/08/14, HDFFV-9960)
- Fixed an issue where H5Pset_alignment could result in misaligned blocks
with some input combinations, causing an assertion failure in debug mode.
(NAF, 2016/08/11, HDFFV-9948)
- Fixed a problem where a plugin compiled into a DLL in the default plugin
directory could not be found by the HDF5 library at runtime on Windows
when the HDF5_PLUGIN_PATH environment variable was not set.
(ABD, 2016/08/01, HDFFV-9706)
- Fixed an error that would occur when calling H5Adelete on an attribute
which is attached to an externally linked object in the target file and
whose datatype is a committed datatype in the main file.
(VC, 2016/07/06, HDFFV-9940)
- (a) Throw an error instead of an assertion when the v1 B-tree level hits
the 1-byte limit.
(b) Modifications to better handle error recovery when conversion by
h5format_convert fails.
(VC, 2016/05/29, HDFFV-9434)
- Fixed a memory leak where an array used by the library to track SWMR
read retries was never freed.
The leaked memory was small (on the order of a few tens of ints) and
allocated per-file. The memory was allocated (and lost) only when a
file was opened for SWMR access.
(DER, 2016/04/27, HDFFV-9786)
- Fixed a memory leak that could occur when opening a file for the first
time (including creating) and the call fails.
This occurred when the file-driver-specific info was not cleaned up.
The amount of memory leaked varied with the file driver, but would
normally be less than 1 kB.
(DER, 2016/12/06, HDFFV-10168)
- Fixed a failure in collective metadata writes.
This failure only appeared when collective metadata writes
were enabled (via H5Pset_coll_metadata_write()).
(JRM, 2017/04/10, HDFFV-10055)
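For reference, collective metadata writes are enabled on a parallel file
access property list roughly as follows (a sketch for an MPI-enabled build,
with error checking omitted; the H5Pset_all_coll_metadata_ops() call is
optional and shown only for completeness):

    #include <mpi.h>
    #include "hdf5.h"

    int main(int argc, char *argv[])
    {
        MPI_Init(&argc, &argv);

        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
        H5Pset_coll_metadata_write(fapl, 1);    /* metadata writes are collective */
        H5Pset_all_coll_metadata_ops(fapl, 1);  /* metadata reads are collective  */

        hid_t file = H5Fcreate("parallel.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);
        H5Fclose(file);
        H5Pclose(fapl);

        MPI_Finalize();
        return 0;
    }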
Parallel Library
----------------
- Fixed a bug that could occur when allocating a chunked dataset in parallel
with an alignment set and an alignment threshold greater than the chunk
size but less than or equal to the raw data aggregator size.
(NAF, 2016/08/11, HDFFV-9969)
Configuration
-------------
- Configuration will check for the strtoll and strtoull functions
before using alternatives
(ABD, 2017/03/17, PR#340)
- CMake uses a Windows pdb directory variable if available and
will generate both static and shared pdb files.
(ABD, 2017/02/06, HDFFV-9875)
- CMake now builds shared versions of tools.
(ABD, 2017/02/01, HDFFV-10123)
- Makefiles and test scripts have been updated to correctly remove files
created when running "make check" and to avoid removing any files under
source control. In-source builds followed by "make clean" and "make
distclean" should result in the original source files.
(LRK, 2017/01/17, HDFFV-10099)
- The tools directory has been divided into two separate source and test
directories. This resolves a build dependency and, as a result,
'make check' will no longer fail in the tools directory if 'make' was
not executed first.
(ABD, 2016/10/27, HDFFV-9719)
- CMake: Fixed a timeout error that would occasionally occur when running
the virtual file driver tests simultaneously due to test directory
and file name collisions.
(ABD, 2016/09/19, HDFFV-9431)
- CMake: Fixed a command length overflow error by converting custom
commands inside CMakeTest.cmake files into regular dependencies and
targets.
(ABD, 2016/07/12, HDFFV-9939)
- Fixed a problem preventing HDF5 from being built on 32-bit Cygwin by
condensing the Cygwin configuration files into a single file and
removing outdated compiler settings.
(ABD, 2016/07/12, HDFFV-9946)
Fortran
--------
- Changed H5S_ALL_F from INTEGER to INTEGER(HID_T)
(MSB, 2016/10/14, HDFFV-9987)
Tools
-----
- h5diff now correctly ignores strpad when comparing strings.
(ABD, 2017/03/03, HDFFV-10128)
- h5repack now correctly parses the command line filter options.
(ABD, 2017/01/24, HDFFV-10046)
- h5diff now correctly returns an error when it cannot read data due
to an unavailable filter plugin.
(ADB, 2017/01/18, HDFFV-9994)
- Fixed an error in the compiler wrapper scripts (h5cc, h5fc, et al.)
in which they would erroneously drop the file argument specified via
the -o flag when the -o flag was specified before the -c flag on the
command line, resulting in a failure to compile.
(LRK, 2016/11/04, HDFFV-9938, HDFFV-9530)
- h5repack User Defined (UD) filter parameters were not parsed correctly.
The parsing code was reworked to read the correct values and to verify the
number of parameters.
(ABD, 2016/10/19, HDFFV-9996, HDFFV-9974, HDFFV-9515, HDFFV-9039)
- h5repack allows the --enable-error-stack option on the command line.
(ADB, 2016/08/08, HDFFV-9775)
C++ APIs
--------
- The member function H5Location::getNumObjs() was moved to class Group
because only groups and files contain objects, and H5Object::getNumAttrs()
was moved to H5Location so that the number of attributes can be queried
at any location.
(BMR, 2017/03/17, PR#466)
- Due to a change in the C API, the overloaded functions of
PropList::setProperty now require const for some arguments. The old
versions are planned for deprecation and have been replaced by new
versions with the proper consts.
(BMR, 2017/03/17, PR#344)
- The high-level API Packet Table (PT) did not write data correctly when
the datatype is a compound type that has string type as one of the
members. This problem started in 1.8.15, after the fix of HDFFV-9042
was applied, which caused the Packet Table to use the native type to access
the data. It should be up to the application to specify whether the buffer
read into memory uses the machine's native architecture. Thus, the PT has
been fixed to make a copy of the user-provided datatype during creation, or
of the packet table's datatype during opening, instead of using the native
type. An application that wishes to read the data as the native type must
request that explicitly; however, the Packet Table does not provide a way to
specify a memory datatype in this release. This feature will be available
in future releases.
(BMR, 2016/10/27, HDFFV-9758)
- The obsolete macros H5_NO_NAMESPACE and H5_NO_STD have been removed from
the HDF5 C++ API library.
(BMR, 2016/10/23, HDFFV-9532)
- Fixed the problem where a user-defined function could not access both an
attribute and a dataset using only one argument.
(BMR, 2016/10/11, HDFFV-9920)
- The in-memory array information members, ArrayType::rank and
ArrayType::dimensions, were removed. This is an implementation
detail and should not affect applications.
(BMR, 2016/04/25, HDFFV-9725)
Testing
-------
- Fixed a problem that caused tests using SWMR to occasionally fail when
running "make check" using parallel make.
(LRK, 2016/03/22, PR#338, PR#346, PR#358)
Supported Platforms
===================
Linux 2.6.32-573.18.1.el6.ppc64 gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-4)
#1 SMP ppc64 GNU/Linux g++ (GCC) 4.4.7 20120313 (Red Hat 4.4.7-4)
(ostrich) GNU Fortran (GCC) 4.4.7 20120313
(Red Hat 4.4.7-4)
IBM XL C/C++ V13.1
IBM XL Fortran V15.1
Linux 3.10.0-327.10.1.el7 GNU C (gcc), Fortran (gfortran), C++ (g++)
#1 SMP x86_64 GNU/Linux compilers:
(kituo/moohan) Version 4.8.5 20150623 (Red Hat 4.8.5-4)
Version 4.9.3, Version 5.2.0
Intel(R) C (icc), C++ (icpc), Fortran (icc)
compilers:
Version 15.0.3.187 Build 20150407
MPICH 3.1.4 compiled with GCC 4.9.3
SunOS 5.11 32- and 64-bit Sun C 5.12 SunOS_sparc
(emu) Sun Fortran 95 8.6 SunOS_sparc
Sun C++ 5.12 SunOS_sparc
Windows 7 Visual Studio 2012 w/ Intel Fortran 15 (cmake)
Visual Studio 2013 w/ Intel Fortran 15 (cmake)
Visual Studio 2015 w/ Intel Fortran 16 (cmake)
Windows 7 x64 Visual Studio 2012 w/ Intel Fortran 15 (cmake)
Visual Studio 2013 w/ Intel Fortran 15 (cmake)
Visual Studio 2015 w/ Intel Fortran 16 (cmake)
Visual Studio 2015 w/ MSMPI 8 (cmake)
Cygwin(CYGWIN_NT-6.1 2.8.0(0.309/5/3)
gcc and gfortran compilers (GCC 5.4.0)
(cmake and autotools)
Windows 10 Visual Studio 2015 w/ Intel Fortran 16 (cmake)
Cygwin(CYGWIN_NT-6.1 2.8.0(0.309/5/3)
gcc and gfortran compilers (GCC 5.4.0)
(cmake and autotools)
Windows 10 x64 Visual Studio 2015 w/ Intel Fortran 16 (cmake)
Mac OS X Mt. Lion 10.8.5 Apple clang/clang++ version 5.1 from Xcode 5.1
64-bit gfortran GNU Fortran (GCC) 4.8.2
(swallow/kite) Intel icc/icpc/ifort version 15.0.3
Mac OS X Mavericks 10.9.5 Apple clang/clang++ version 6.0 from Xcode 6.2
64-bit gfortran GNU Fortran (GCC) 4.9.2
(wren/quail) Intel icc/icpc/ifort version 15.0.3
Mac OS X Yosemite 10.10.5 Apple clang/clang++ version 6.1 from Xcode 7.0
64-bit gfortran GNU Fortran (GCC) 4.9.2
(osx1010dev/osx1010test) Intel icc/icpc/ifort version 15.0.3
Mac OS X El Capitan 10.11.6 Apple clang/clang++ version 7.3 from Xcode 7.3
64-bit gfortran GNU Fortran (GCC) 5.2.0
(osx1010dev/osx1010test) Intel icc/icpc/ifort version 16.0.2
Tested Configuration Features Summary
=====================================
In the tables below
y = tested
n = not tested in this release
C = Cluster
W = Workstation
x = not working in this release
dna = does not apply
( ) = footnote appears below second table
<blank> = testing incomplete on this feature or platform
Platform                             C parallel  F90/F2003  F90 parallel  C++  zlib  SZIP
Solaris2.11 32-bit n y/y n y y y
Solaris2.11 64-bit n y/n n y y y
Windows 7 y y/y n y y y
Windows 7 x64 y y/y y y y y
Windows 7 Cygwin n y/n n y y y
Windows 7 x64 Cygwin n y/n n y y y
Windows 10 y y/y n y y y
Windows 10 x64 y y/y n y y y
Mac OS X Mountain Lion 10.8.5 64-bit n y/y n y y y
Mac OS X Mavericks 10.9.5 64-bit n y/y n y y y
Mac OS X Yosemite 10.10.5 64-bit n y/y n y y y
Mac OS X El Capitan 10.11.6 64-bit n y/y n y y y
CentOS 7.2 Linux 2.6.32 x86_64 PGI n y/y n y y y
CentOS 7.2 Linux 2.6.32 x86_64 GNU y y/y y y y y
CentOS 7.2 Linux 2.6.32 x86_64 Intel n y/y n y y y
Linux 2.6.32-573.18.1.el6.ppc64 n y/y n y y y
Platform                             Shared C libs  Shared F90 libs  Shared C++ libs  Thread-safe
Solaris2.11 32-bit y y y y
Solaris2.11 64-bit y y y y
Windows 7 y y y y
Windows 7 x64 y y y y
Windows 7 Cygwin n n n y
Windows 7 x64 Cygwin n n n y
Windows 10 y y y y
Windows 10 x64 y y y y
Mac OS X Mountain Lion 10.8.5 64-bit y n y y
Mac OS X Mavericks 10.9.5 64-bit y n y y
Mac OS X Yosemite 10.10.5 64-bit y n y y
Mac OS X El Capitan 10.11.6 64-bit y n y y
CentOS 7.2 Linux 2.6.32 x86_64 PGI y y y n
CentOS 7.2 Linux 2.6.32 x86_64 GNU y y y y
CentOS 7.2 Linux 2.6.32 x86_64 Intel y y y n
Linux 2.6.32-573.18.1.el6.ppc64 y y y n
Compiler versions for each platform are listed in the preceding
"Supported Platforms" table.
More Tested Platforms
=====================
The following platforms are not supported but have been tested for this release.
Linux 2.6.32-573.22.1.el6 GNU C (gcc), Fortran (gfortran), C++ (g++)
#1 SMP x86_64 GNU/Linux compilers:
(mayll/platypus) Version 4.4.7 20120313
Version 4.8.4
PGI C, Fortran, C++ for 64-bit target on
x86-64;
Version 16.10-0
Intel(R) C (icc), C++ (icpc), Fortran (icc)
compilers:
Version 15.0.3.187 (Build 20150407)
MPICH 3.1.4 compiled with GCC 4.9.3
Linux 3.10.0-327.18.2.el7 GNU C (gcc) and C++ (g++) compilers
#1 SMP x86_64 GNU/Linux Version 4.8.5 20150623 (Red Hat 4.8.5-4)
(jelly) with NAG Fortran Compiler Release 6.1(Tozai)
Intel(R) C (icc) and C++ (icpc) compilers
Version 15.0.3.187 (Build 20150407)
with NAG Fortran Compiler Release 6.1(Tozai)
Linux 2.6.32-573.18.1.el6.ppc64 MPICH mpich 3.1.4 compiled with
#1 SMP ppc64 GNU/Linux IBM XL C/C++ for Linux, V13.1
(ostrich) and IBM XL Fortran for Linux, V15.1
Debian 8.4 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1 x86_64 GNU/Linux
gcc, g++ (Debian 4.9.2-10) 4.9.2
GNU Fortran (Debian 4.9.2-10) 4.9.2
(cmake and autotools)
Fedora 24 4.7.2-201.fc24.x86_64 #1 SMP x86_64 x86_64 x86_64 GNU/Linux
gcc, g++ (GCC) 6.1.1 20160621
(Red Hat 6.1.1-3)
GNU Fortran (GCC) 6.1.1 20160621
(Red Hat 6.1.1-3)
(cmake and autotools)
Ubuntu 16.04.1 4.4.0-38-generic #57-Ubuntu SMP x86_64 GNU/Linux
gcc, g++ (Ubuntu 5.4.0-6ubuntu1~16.04.2)
5.4.0 20160609
GNU Fortran (Ubuntu 5.4.0-6ubuntu1~16.04.2)
5.4.0 20160609
(cmake and autotools)
Known Problems
==============
At present, metadata cache images may not be generated by parallel
applications. Parallel applications can read files with metadata cache
images, but since this is a collective operation, a deadlock is possible
if one or more processes do not participate.
Known problems in previous releases can be found in the HISTORY*.txt files
in the HDF5 source. Please report any new problems found to
help@hdfgroup.org.
%%%%1.10.0-patch1%%%%


@@ -4,8 +4,8 @@ HDF5 version 1.11.0 currently under development
INTRODUCTION
This document describes the differences between HDF5-1.10.0-patch1 and
HDF5 1.10.1, and contains information on the platforms tested and known problems in HDF5-1.10.1.
This document describes the differences between HDF5-1.10.1 and HDF5 1.10.2, and
contains information on the platforms tested and known problems in HDF5-1.10.1.
For more details check the HISTORY*.txt files in the HDF5 source.
@@ -15,23 +15,23 @@ Links to HDF5 1.10.1 source code, documentation, and additional materials can be
The HDF5 1.10.1 release can be obtained from:
https://support.hdfgroup.org/HDF5/release/obtain5110.html
https://support.hdfgroup.org/HDF5/release/obtain5.html
User documentation for the snapshot can be accessed directly at this location:
https://support.hdfgroup.org/HDF5/doc1.10/
https://support.hdfgroup.org/HDF5/doc/
New features in the HDF5-1.10.x release series, including brief general
descriptions of some new and modified APIs, are described in the "What's New
in 1.10.1?" document:
descriptions of some new and modified APIs, are described in the "New Features
in HDF5 1.10" document:
https://support.hdfgroup.org/HDF5/doc/ADGuide/WhatsNew1101.html
https://support.hdfgroup.org/HDF5/docNewFeatures/index.html
All new and modified APIs are listed in detail in the "HDF5 Software Changes
from Release to Release" document, in the section "Release 1.8.19 (current
release) versus Release 1.10.1
from Release to Release" document, in the section "Release 1.10.1 (current
release) versus Release 1.10.0"
https://support.hdfgroup.org/HDF5/doc1.10/ADGuide/Changes.html
https://support.hdfgroup.org/HDF5/doc/ADGuide/Changes.html
If you have any questions or comments, please send them to the HDF Help Desk:
@@ -54,37 +54,11 @@ New Features
Configuration:
-------------
- CMake minimum is now 3.2.2.
(ADB 2017/01/10)
- Tools folder is separated into source and test folders. This
allows autotools to skip the make command and just execute
the make check command.
(HDFFV-9719 ADB 2016/10/27)
-
Library:
--------
- Paged Aggregation
This is one of the file space management strategies. This strategy
aggregates small metadata and raw data allocations into constant-sized
well-aligned pages, which are suitable for page caching. Paged
aggregation together with the page buffering feature will allow efficient
I/O accesses.
- Page Buffering
The page buffering layer in the HDF5 library absorbs small accesses to
the file system. Each page in memory corresponds to a page allocated in
the file. Access to the file system is then performed as a single page
or multiple pages, if they are contiguous. This ensures that small
accesses to the file system are avoided while providing another caching
layer for improved I/O performance. This feature works in conjunction
with the paged aggregation feature.
- Filter plugin API added to access the table of paths to search for a
library. Java interface expanded with wrappers for the new functions.
(HDFFV-10143 ADB 2017/04/04)
-
Parallel Library:
-----------------
@@ -100,17 +74,15 @@ New Features
Tools:
------
- Add options to h5clear:
-s: clear the status_flags field in the file's superblock
-m: Remove the metadata cache image from the file
(Pull Request #361 QK 2017/03/22)
-
High-Level APIs:
---------------
-
C Packet Table API
------------------
-
-
Internal header file
--------------------
@@ -118,51 +90,26 @@ New Features
Documentation
-------------
-
Support for new platforms, languages and compilers.
=======================================
-
Bug Fixes since HDF5-1.10.0-patch1 release
Bug Fixes since HDF5-1.10.1 release
==================================
Library
-------
- Changed the plugins dlopen option from RTLD_NOW to RTLD_LAZY
(PR 201 ADB 2016/12/12)
- Fix error when copying dataset with attribute which is a compound datatype
consisting of a variable length string.
(HDFFV-7991 VC 2016/08/19, 2016/08/21, 2016/08/24)
- H5DOappend will not fail if a dataset has no append callback registered.
(HDFFV-9960 VC 2016/08/05, 2016/08/14)
- Fix the problem where the committed datatype's file location is different
from the file location of an attribute with that committed datatype.
(HDFFV-9940 VC 2016/07/03, 2016/07/06)
- (a) Throw an error instead of assertion when v1 btree level hits the 1 byte limit.
(b) Modifications to better handle error recovery when conversion by
h5format_convert fails.
(HDFFV-9434 VC 2016/05/29)
-
Configuration
-------------
- Configuration will check for the strtoll and strtoull functions
before using alternatives
(PR 340 ADB 2017/03/17)
- CMake uses a Windows pdb directory variable if available and
will generate both static and shared pdb files.
(HDFFV-9875 ADB 2017/02/06)
- CMake now builds shared versions of tools.
(HDFFV-10123 ADB 2017/02/01)
-
Performance
-------------
-
-
Fortran
--------
@@ -170,18 +117,7 @@ Bug Fixes since HDF5-1.10.0-patch1 release
Tools
-----
- h5diff correctly ignores strpad in comparing strings.
(HDFFV-10128 ADB 2017/03/03)
- h5repack now correctly parses the command line filter options.
(HDFFV-10046 ADB 2017/01/24)
- h5diff correctly indicates error when it cannot read data due
to an unavailable filter plugin.
(HDFFV-9994 ADB 2017/01/18)
- h5repack allows the --enable-error-stack option on the command line.
(HDFFV-775 ADB 2016/08/08)
-
High-Level APIs:
------
@@ -193,18 +129,16 @@ Bug Fixes since HDF5-1.10.0-patch1 release
Documentation
-------------
-
F90 APIs
--------
-
C++ APIs
--------
-
Testing
-------
-
@@ -339,6 +273,25 @@ More Tested Platforms
=====================
The following platforms are not supported but have been tested for this release.
Linux 2.6.32-573.22.1.el6 GNU C (gcc), Fortran (gfortran), C++ (g++)
#1 SMP x86_64 GNU/Linux compilers:
(mayll/platypus) Version 4.4.7 20120313
Version 4.8.4
PGI C, Fortran, C++ for 64-bit target on
x86-64;
Version 16.10-0
Intel(R) C (icc), C++ (icpc), Fortran (icc)
compilers:
Version 15.0.3.187 (Build 20150407)
MPICH 3.1.4 compiled with GCC 4.9.3
Linux 3.10.0-327.18.2.el7 GNU C (gcc) and C++ (g++) compilers
#1 SMP x86_64 GNU/Linux Version 4.8.5 20150623 (Red Hat 4.8.5-4)
(jelly) with NAG Fortran Compiler Release 6.1(Tozai)
Intel(R) C (icc) and C++ (icpc) compilers
Version 15.0.3.187 (Build 20150407)
with NAG Fortran Compiler Release 6.1(Tozai)
Linux 2.6.32-573.18.1.el6.ppc64 MPICH mpich 3.1.4 compiled with
#1 SMP ppc64 GNU/Linux IBM XL C/C++ for Linux, V13.1
(ostrich) and IBM XL Fortran for Linux, V15.1
@@ -349,261 +302,29 @@ The following platforms are not supported but have been tested for this release.
(cmake and autotools)
Fedora 24 4.7.2-201.fc24.x86_64 #1 SMP x86_64 x86_64 x86_64 GNU/Linux
gcc, g++ (GCC) 6.1.1 20160621 (Red Hat 6.1.1-3)
GNU Fortran (GCC) 6.1.1 20160621 (Red Hat 6.1.1-3)
gcc, g++ (GCC) 6.1.1 20160621
(Red Hat 6.1.1-3)
GNU Fortran (GCC) 6.1.1 20160621
(Red Hat 6.1.1-3)
(cmake and autotools)
Ubuntu 16.04.1 4.4.0-38-generic #57-Ubuntu SMP x86_64 GNU/Linux
gcc, g++ (Ubuntu 5.4.0-6ubuntu1~16.04.2) 5.4.0 20160609
GNU Fortran (Ubuntu 5.4.0-6ubuntu1~16.04.2) 5.4.0 20160609
gcc, g++ (Ubuntu 5.4.0-6ubuntu1~16.04.2)
5.4.0 20160609
GNU Fortran (Ubuntu 5.4.0-6ubuntu1~16.04.2)
5.4.0 20160609
(cmake and autotools)
Known Problems
==============
* "make check" fails on CYGWIN when building shared lib files is enabled. The
default on Cygwin has been changed to disable shared. It can be enabled with
the --enable-shared configure option but is likely to fail "make check"
with GCC compilers. (LK -2015/04/16)
* CLANG compiler with the options -fcatch-undefined-behavior and -ftrapv
catches some undefined behavior in the alignment algorithm of the macro DETECT_I
in H5detect.c (Issue 8147). Since the algorithm is trying to detect the alignment
of integers, ideally the flag -fcatch-undefined-behavior shouldn't be used for
H5detect.c. In the future, we can separate flags for H5detect.c from the rest of
the library. (SLU - 2013/10/16)
At present, metadata cache images may not be generated by parallel
applications. Parallel applications can read files with metadata cache
images, but since this is a collective operation, a deadlock is possible
if one or more processes do not participate.
* The 5.9 C++ compiler on Sun failed to compile a C++ test ttypes.cpp. It
complains with this message:
"/home/hdf5/src/H5Vprivate.h", line 130: Error: __func__ is not defined.
The reason is that __func__ is a predefined identifier in C99 standard. The
HDF5 C library uses it in H5private.h. The test ttypes.cpp includes
H5private.h (H5Tpkg.h<-H5Fprivate.h<-H5Vprivate.h<-H5private.h). Sun's 5.9
C++ compiler doesn't support __func__, thus fails to compile the C++ test.
But 5.11 C++ compiler does. To check whether your Sun C++ compiler knows this
identifier, try to compile the following simple C++ program:
#include <stdio.h>
int main(void)
{
    printf("%s\n", __func__);
    return 0;
}
(SLU - 2012/11/5)
* The C++ and FORTRAN bindings are not currently working on FreeBSD with the
native release 8.2 compilers (4.2.1), but are working with gcc 4.6 from the
ports (and probably gcc releases after that).
(QAK - 2012/10/19)
* The data conversion test dt_arith.c has failures (segmentation fault) from
"long double" to other datatypes during hard conversion when the library
is built with the default GCC 4.2.1 on Mac Lion system. It only happens
with optimization (-O3, -O2, and -O1). Some newer versions of GCC do not
have this problem. Users should disable optimization or try newer version
of GCC. (Issue 8017. SLU - 2012/6/12)
* The data conversion test dt_arith.c fails in "long double" to integer
conversion on Ubuntu 11.10 (3.0.0.13 kernel) with GCC 4.6.1 if the library
is built with optimization -O3 or -O2. The older GCC (4.5) or newer kernel
(3.2.2 on Fedora) doesn't have the problem. Users should lower the
optimization level (-O1 or -O0) by defining CFLAGS in the command line of
"configure" like:
CFLAGS=-O1 ./configure
It will overwrite the library's default optimization level. (Issue 7829.
SLU - 2012/2/7)
* The --with-mpe configure option does not work with Mpich2. (AKC - 2011/03/10)
* While working on the 1.8.6 release of HDF5, a bug was discovered that can
occur when reading from a dataset in parallel shortly after it has been
written to collectively. The issue was exposed by a new test in the parallel
HDF5 test suite, but had existed before that. We believe the problem lies with
certain MPI implementations and/or filesystems.
We have provided a pure MPI test program, as well as a standalone HDF5
program, that can be used to determine if this is an issue on your system.
They should be run across multiple nodes with a varying number of processes.
These programs can be found at:
http://www.hdfgroup.org/ftp/HDF5/examples/known_problems/
* Parallel mode in AIX will fail some of the testcheck_version.sh tests where
it treats "exit(134) the same as if process 0 had received an abort signal.
This is fixed and will be available in the next release. AKC - 2009/11/3
* The PathScale MPI implementation, accessing a Panasas file system, would
cause H5Fcreate() with H5F_ACC_EXCL to fail even when the file does not
exist. This is due to the MPI_File_open() call failing if the amode has
the MPI_MODE_EXCL bit set. (See bug 1468 for details.) AKC - 2009/8/11
* Parallel tests failed with 16 processes with data inconsistency at testphdf5
/ dataset_readAll. Parallel tests also failed with 32 and 64 processes with
collective abort of all ranks at t_posix_compliant / allwrite_allread_blocks
with MPI IO. (CMC - 2009/04/28)
* For SNL, spirit/liberty/thunderbird: The serial tests pass but parallel
tests failed with MPI-IO file locking message. AKC - 2007/6/25.
* On an Intel 64 Linux cluster (RH 4, Linux 2.6.9) with Intel 10.0 compilers, use
the -mp -O1 compilation flags to build the libraries. Higher levels of
optimization cause failures in several HDF5 library tests.
* For HPUX 11.23 many tools tests failed for 64-bit version when linked to the
shared libraries (tested for 1.8.0-beta2)
* For SNL, Red Storm: only parallel HDF5 is supported. The serial tests pass
and the parallel tests also pass with lots of non-fatal error messages.
* On SUN 5.10, the C++ test fails in the "Testing Shared Datatypes with Attributes" test.
* configuring with --enable-debug=all produces compiler errors on most
platforms. Users who want to run HDF5 in debug mode should use
--enable-debug rather than --enable-debug=all to enable debugging
information on most modules.
* On Mac OS 10.4, test/dt_arith.c has some errors in conversion from long
double to (unsigned) long long and from (unsigned)long long to long double.
* On Altix SGI with Intel 9.0 testmeta.c would not compile with -O3
optimization flag.
* On VAX, Scaleoffset filter isn't supported. The filter cannot be applied to
HDF5 data generated on VAX. Scaleoffset filter only supports IEEE standard
for floating-point data.
* On Cray X1, a lone colon on the command line of h5dump --xml (as in
the testh5dumpxml.sh script) is misinterpreted by the operating system
and causes an error.
* On mpich 1.2.5 and 1.2.6, we found that if more than two processes
contribute no I/O and the application asks to do collective I/O, a simple
collective write will sometimes hang when using 4 processors. This can be
verified with the t_mpi test under testpar.
* The dataset created or rewritten with the v1.6.3 library or after can't
be read with the v1.6.2 library or before when the Fletcher32 EDC (filter) is
enabled. There was a bug in the Fletcher32 checksum calculation code in
the library before v1.6.3. The checksum value wasn't consistent
between big-endian and little-endian systems. This bug was fixed in
Release 1.6.3. However, after fixing the bug, the checksum value is no
longer the same as before on little-endian system. The library release
after 1.6.4 can still read the dataset created or rewritten with the library
of v1.6.2 or before. SLU - 2005/6/30
* For version 6 (6.02 and 6.04) of the Portland Group compiler on the AMD Opteron
processor, there's a bug in the compiler's optimization (-O2). The library
failed in several tests but all related to multi driver. The problem has
been reported to the vendor.
* On IBM AIX systems, parallel HDF5 mode will fail some tests with error
messages like "INFO: 0031-XXX ...". This is from the command poe.
Set the environment variable MP_INFOLEVEL to 0 to minimize the messages
and run the tests again.
The tests may fail with messages like "The socket name is already
in use". HDF5 does not use sockets (except for stream-VFD). This is
due to problems of the poe command trying to set up the debug socket.
Check if there are many old /tmp/s.pedb.* staying around. These are
sockets used by the poe command and left behind due to failed commands.
Ask your system administrator to clean them out. Lastly, request IBM
to provide a means to run poe without the debug socket.
* The C++ library's tests fail when compiled with the PGI C++ compiler. The
workaround until the problem is correctly handled is to use the
flag "--instantiate=local" prior to the configure and build steps, as:
setenv CXX "pgCC --instantiate=local" for pgCC 5.02 and higher
* The stream-vfd test uses ip port 10007 for testing. If another
application is already using that port address, the test will hang
indefinitely and has to be terminated by the kill command. To try the
test again, change the port address in test/stream_test.c to one not
being used on the host.
* The --enable-static-exec configure flag will only statically link libraries
if the static version of that library is present. If only the shared version
of a library exists (i.e., most system libraries on Solaris, AIX, and Mac,
for example, only have shared versions), the flag should still result in a
successful compilation, but note that the installed executables will not be
fully static. Thus, the only guarantee on these systems is that the
executable is statically linked with just the HDF5 library.
* With the gcc 2.95.2 compiler, HDF5 uses the `-ansi' flag during
compilation. The ANSI version of the compiler complains about not being
able to handle the `long long' datatype with the warning:
warning: ANSI C does not support `long long'
This warning is innocuous and can be safely ignored.
* Certain platforms give false negatives when testing h5ls:
- Cray J90 and Cray T90IEEE give errors during testing when displaying
some floating-point values. These are benign differences due to
the different precision in the values displayed and h5ls appears to
be dumping floating-point numbers correctly.
* Not all platforms behave correctly with szip's shared libraries. Szip is
disabled in these cases, and a message is relayed at configure time. Static
libraries should be working on all systems that support szip, and should be
used when shared libraries are unavailable. There is also a configure error
on Altix machines that incorrectly reports when a version of szip without
an encoder is being used.
* On some platforms that use the Intel and Absoft compilers to build the HDF5
Fortran library, compilation may fail for fortranlib_test.f90, fflush1.f90,
and fflush2.f90, complaining about the exit subroutine. Comment out the line
IF (total_error .ne. 0) CALL exit (total_error)
* Information about building with PGI and Intel compilers is available in
INSTALL file sections 5.7 and 5.8
* On at least one system, (SDSC DataStar), the scheduler (in this case
LoadLeveler) sends job status updates to standard error when you run
any executable that was compiled with the parallel compilers.
This causes problems when running "make check" on parallel builds, as
many of the tool tests function by saving the output from test runs,
and comparing it to an exemplar.
The best solution is to reconfigure the target system so it no longer
inserts the extra text. However, this may not be practical.
In such cases, one solution is to "setenv HDF5_Make_Ignore yes" prior to
the configure and build. This will cause "make check" to continue after
detecting errors in the tool tests. However, in the case of SDSC DataStar,
it also leaves you with some 150 "failed" tests to examine by hand.
A second solution is to write a script to run serial tests and filter
out the text added by the scheduler. A sample script used on SDSC
DataStar is given below, but you will probably have to customize it
for your installation.
Observe that the basic idea is to insert the script as the first item
on the command line which executes the test. The script then
executes the test and filters out the offending text before passing
it on.
#!/bin/csh
set STDOUT_FILE=~/bin/serial_filter.stdout
set STDERR_FILE=~/bin/serial_filter.stderr
rm -f $STDOUT_FILE $STDERR_FILE
($* > $STDOUT_FILE) >& $STDERR_FILE
set RETURN_VALUE=$status
cat $STDOUT_FILE
tail +3 $STDERR_FILE
exit $RETURN_VALUE
You get the HDF5 make files and test scripts to execute your filter script
by setting the environment variable "RUNSERIAL" to the full path of the
script prior to running configure for parallel builds. Remember to
"unsetenv RUNSERIAL" before running configure for a serial build.
Note that the RUNSERIAL environment variable exists so that we can
prefix serial runs as necessary on the target system. On DataStar,
no prefix is necessary. However on an MPICH system, the prefix might
have to be set to something like "/usr/local/mpi/bin/mpirun -np 1" to
get the serial tests to run at all.
In such cases, you will have to include the regular prefix in your
filter script.
* H5Ocopy() does not copy reg_ref attributes correctly when shared messages
are turned on. The value of the reference in the destination attribute is
wrong. This H5Ocopy problem will also affect the h5copy tool.
Known problems in previous releases can be found in the HISTORY*.txt files
in the HDF5 source. Please report any new problems found to
help@hdfgroup.org.