Update RELEASE.txt, HISTORY-1_10.txt and INSTALL files with changes from
HDF5-1.10.2 release.
This commit is contained in:
lrknox 2018-04-03 16:57:26 -05:00
parent 066b342af1
commit fa0d7aec10
5 changed files with 1452 additions and 981 deletions

View File

@@ -1,38 +1,80 @@
HDF5 version 1.11.2 currently under development
------------------------------------------------------------------------------
Please refer to the release_docs/INSTALL file for installation instructions.
------------------------------------------------------------------------------
THE HDF GROUP
---------------
The HDF Group is the developer of HDF5®, a high-performance software library and
data format that has been adopted across multiple industries and has become a
de facto standard in scientific and research communities.
More information about The HDF Group, the HDF5 Community and the HDF5 software
project, tools and services can be found at the Group's website.
https://www.hdfgroup.org/
DOCUMENTATION
-------------
This release is fully functional for the API described in the documentation.
See the RELEASE.txt file in the release_docs/ directory for information
specific to this release of the library. Several INSTALL* files can also be
found in the release_docs/ directory: INSTALL contains instructions for
compiling and installing the library; INSTALL_parallel contains instructions
for installing the parallel version of the library; similarly-named files
contain instructions for several environments on MS Windows systems.
https://portal.hdfgroup.org/display/HDF5/The+HDF5+API
Documentation for this release can be found at the following URL:
http://www.hdfgroup.org/HDF5/doc/.
Full Documentation and Programming Resources for this release can be found at
https://portal.hdfgroup.org/display/HDF5
The following mailing lists are currently set up for HDF5 Library users:
See the RELEASE.txt file in the release_docs/ directory for information specific
to the features and updates included in this release of the library.
news - For announcements of HDF5 related developments,
not a discussion list.
Several more files within the release_docs/ directory provide specific
details for common platforms and configurations.
hdf-forum - For general discussion of the HDF5 library with
other users.
INSTALL - Start Here. General instructions for compiling and installing the library
INSTALL_CMAKE - instructions for building with CMake (Kitware.com)
INSTALL_parallel - instructions for building and configuring Parallel HDF5
INSTALL_Windows and INSTALL_Cygwin - MS Windows installations.
To subscribe to a list, send mail to "<list>-subscribe@lists.hdfgroup.org",
where <list> is the name of the list. For example, send a request
to subscribe to the 'news' mail list to the following address:
news-subscribe@lists.hdfgroup.org
Messages sent to the list should be addressed to "<list>@lists.hdfgroup.org".
Periodic code snapshots are provided at the following URL:
ftp://ftp.hdfgroup.uiuc.edu/pub/outgoing/hdf5/snapshots
Please read the README.txt file in that directory before working with a
library snapshot.
HELP AND SUPPORT
----------------
Information regarding Help Desk and Support services is available at
The HDF5 website is located at http://hdfgroup.org/HDF5/
https://portal.hdfgroup.org/display/support/The+HDF+Help+Desk
FORUM and NEWS
--------------
The following public forums are provided for public announcements and discussions
of interest to the general HDF5 Community.
Homepage of the Forum
https://forum.hdfgroup.org
News and Announcements
https://forum.hdfgroup.org/c/news-and-announcements-from-the-hdf-group
HDF5 and HDF4 Topics
https://forum.hdfgroup.org/c/hdf5
These forums are provided as an open and public service for searching and reading.
Posting requires completing a simple registration and allows one to join in the
conversation. Please read the following instructions pertaining to the Forum's
use and configuration:
https://forum.hdfgroup.org/t/quickstart-guide-welcome-to-the-new-hdf-forum
SNAPSHOTS, PREVIOUS RELEASES AND SOURCE CODE
--------------------------------------------
Periodic development code snapshots are provided at the following URL:
https://gamma.hdfgroup.org/ftp/pub/outgoing/hdf5/snapshots/
Source packages for current and previous releases are located at:
https://portal.hdfgroup.org/display/support/Downloads
Development code is available at our BitBucket Server:
https://bitbucket.hdfgroup.org/projects/HDFFV/repos/hdf5/browse
Bugs should be reported to help@hdfgroup.org.

File diff suppressed because it is too large

View File

@@ -3,10 +3,11 @@ Instructions for the Installation of HDF5 Software
==================================================
This file provides instructions for installing the HDF5 software.
If you have any problems with the installation, please see The HDF Group's
support page at the following location:
http://www.hdfgroup.org/services/support.html
For help with installation, questions can be posted to the HDF Forum or sent to the HDF Helpdesk:
HDF Forum: https://forum.hdfgroup.org/
HDF Helpdesk: https://portal.hdfgroup.org/display/support/The+HDF+Help+Desk
CONTENTS
--------
@@ -31,59 +32,34 @@ CONTENTS
4.3. Configuring
4.3.1. Specifying the installation directories
4.3.2. Using an alternate C compiler
4.3.3. Configuring for 64-bit support
4.3.4. Additional compilation flags
4.3.5. Compiling HDF5 wrapper libraries
4.3.6. Specifying other programs
4.3.7. Specifying other libraries and headers
4.3.8. Static versus shared linking
4.3.9. Optimization versus symbolic debugging
4.3.10. Parallel versus serial library
4.3.11. Threadsafe capability
4.3.12. Backward compatibility
4.3.3. Additional compilation flags
4.3.4. Compiling HDF5 wrapper libraries
4.3.5. Specifying other programs
4.3.6. Specifying other libraries and headers
4.3.7. Static versus shared linking
4.3.8. Optimization versus symbolic debugging
4.3.9. Parallel versus serial library
4.3.10. Threadsafe capability
4.3.11. Backward compatibility
4.4. Building
4.5. Testing
4.6. Installing HDF5
5. Using the Library
6. Support
A. Warnings about compilers
A.1. GNU (Intel platforms)
A.2. DEC
A.3. SGI (Irix64 6.2)
A.4. Windows/NT
B. Large (>2GB) versus small (<2GB) file capability
C. Building and testing with other compilers
C.1. Building and testing with Intel compilers
C.2. Building and testing with PGI compilers
*****************************************************************************
1. Obtaining HDF5
The latest supported public release of HDF5 is available from
ftp://ftp.hdfgroup.org/HDF5/current/src. For Unix and UNIX-like
https://www.hdfgroup.org/downloads/hdf5/. For Unix and UNIX-like
platforms, it is available in tar format compressed with gzip.
For Microsoft Windows, it is in ZIP format.
The HDF team also makes snapshots of the source code available on
a regular basis. These snapshots are unsupported (that is, the
HDF team will not release a bug-fix on a particular snapshot;
rather any bug fixes will be rolled into the next snapshot).
Furthermore, the snapshots have only been tested on a few
machines and may not test correctly for parallel applications.
Snapshots, in a limited number of formats, can be found on THG's
development FTP server:
ftp://ftp.hdfgroup.uiuc.edu/pub/outgoing/hdf5/snapshots
2. Quick installation
For those who don't like to read ;-) the following steps can be used
to configure, build, test, and install the HDF5 Library, header files,
to configure, build, test, and install the HDF5 library, header files,
and support programs. For example, to install HDF5 version X.Y.Z at
location /usr/local/hdf5, use the following steps.
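A sketch of those steps, assuming an unpacked hdf5-X.Y.Z tarball and the
/usr/local/hdf5 prefix from the example:

```shell
$ gunzip < hdf5-X.Y.Z.tar.gz | tar xf -
$ cd hdf5-X.Y.Z
$ ./configure --prefix=/usr/local/hdf5
$ make
$ make check                # run the test suite
$ make install
$ make check-install        # verify the installation
```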
@@ -125,28 +101,30 @@ CONTENTS
3. HDF5 dependencies
3.1. Zlib
The HDF5 Library includes a predefined compression filter that
The HDF5 library includes a predefined compression filter that
uses the "deflate" method for chunked datasets. If zlib-1.1.2 or
later is found, HDF5 will use it. Otherwise, HDF5's predefined
compression method will degenerate to a no-op; the compression
filter will succeed but the data will not be compressed.
3.2. Szip (optional)
The HDF5 Library includes a predefined compression filter that
The HDF5 library includes a predefined compression filter that
uses the extended-Rice lossless compression algorithm for chunked
datasets. For more information about Szip compression and license
terms, see http://hdfgroup.org/doc_resource/SZIP/.
datasets. For information on Szip compression, license terms,
and obtaining the Szip source code, see:
The Szip source code can be obtained from the HDF5 Download page
http://www.hdfgroup.org/HDF5/release/obtain5.html#extlibs. Building
instructions are available with the Szip source code.
https://portal.hdfgroup.org/display/HDF5/Szip+Compression+in+HDF+Products
Building instructions are available with the Szip source code.
The HDF Group does not distribute separate Szip precompiled libraries,
but the HDF5 binaries available from
http://www.hdfgroup.org/HDF5/release/obtain5.html include
the Szip encoder enabled binary for the corresponding platform.
but the HDF5 pre-built binaries provided on The HDF Group download page
include the Szip library with the encoder enabled. These can be found
here:
To configure the HDF5 Library with the Szip compression filter, use
https://www.hdfgroup.org/downloads/hdf5/
To configure the HDF5 library with the Szip compression filter, use
the '--with-szlib=/PATH_TO_SZIP' flag. For more information, see
section 4.3.7, "Specifying other libraries and headers."
@@ -204,20 +182,6 @@ CONTENTS
$ cd build-fortran
$ ../hdf5-X.Y.Z/configure --enable-fortran ...
Unfortunately, this does not work on recent Irix platforms (6.5?
and later) because that `make' does not understand the VPATH variable.
However, HDF5 also supports Irix `pmake' which has a .PATH target
which serves a similar purpose. Here's what the Irix man pages say
about VPATH, the facility used by HDF5 makefiles for this feature:
The VPATH facility is a derivation of the undocumented
VPATH feature in the System V Release 3 version of make.
System V Release 4 has a new VPATH implementation, much
like the pmake(1) .PATH feature. This new feature is also
undocumented in the standard System V Release 4 manual
pages. For this reason it is not available in the IRIX
version of make. The VPATH facility should not be used
with the new parallel make option.
4.3. Configuring
HDF5 uses the GNU autoconf system for configuration, which
@@ -243,7 +207,7 @@ CONTENTS
4.3.1. Specifying the installation directories
The default installation location is the HDF5 directory created in
the build directory. Typing `make install' will install the HDF5
Library, header files, examples, and support programs in hdf5/lib,
library, header files, examples, and support programs in hdf5/lib,
hdf5/include, hdf5/doc/hdf5/examples, and hdf5/bin. To use a path
other than hdf5, specify the path with the `--prefix=PATH' switch:
@@ -275,45 +239,24 @@ CONTENTS
$ CC=/usr/local/mpi/bin/mpicc ./configure
4.3.3. Configuring for 64-bit support
Several machine architectures support 32-bit or 64-bit binaries.
The options below describe how to enable 64-bit support on several platforms.
On Irix64, the default compiler is `cc'. To use an alternate compiler,
specify it with the CC variable:
$ CC='cc -n32' ./configure
Similarly, users compiling on a Solaris machine and desiring to
build the distribution with 64-bit support should specify the
correct flags with the CC variable:
$ CC='cc -m64' ./configure
To configure AIX 64-bit support including the Fortran and C++ APIs,
(Note: need to set $AR to 'ar -X 64'.)
Serial:
$ CFLAGS=-q64 FCFLAGS=-q64 CXXFLAGS=-q64 AR='ar -X 64'\
./configure --enable-fortran
Parallel: (C++ not supported with parallel)
$ CFLAGS=-q64 FCFLAGS=-q64 AR='ar -X 64'\
./configure --enable-fortran
4.3.4. Additional compilation flags
If addtional flags must be passed to the compilation commands,
4.3.3. Additional compilation flags
If additional flags must be passed to the compilation commands,
specify those flags with the CFLAGS variable. For instance,
to enable symbolic debugging of a production version of HDF5, one
might say:
$ CFLAGS=-g ./configure --enable-build-mode=production
4.3.5. Compiling HDF5 wrapper libraries
One can optionally build the Fortran and/or C++ interfaces to the
HDF5 C library. By default, both options are disabled. To build
them, specify `--enable-fortran' and `--enable-cxx', respectively.
4.3.4. Compiling HDF5 wrapper libraries
One can optionally build the Fortran, C++, and Java interfaces to
the HDF5 C library. By default, these options are disabled. To build
them, specify '--enable-fortran', '--enable-cxx', or '--enable-java',
respectively.
$ ./configure --enable-fortran
$ ./configure --enable-cxx
$ ./configure --enable-java
Configuration will halt if a working Fortran 90 or 95 compiler or
C++ compiler is not found. Currently, the Fortran configure tests
@@ -322,15 +265,8 @@ CONTENTS
$ FC=/usr/local/bin/g95 ./configure --enable-fortran
Note: The Fortran and C++ interfaces are not supported on all the
platforms the main HDF5 Library supports. Also, the Fortran
interface supports parallel HDF5 while the C++ interface does
not.
Note: See sections 4.7 and 4.8 for building the Fortran library with
Intel or PGI compilers.
4.3.6. Specifying other programs
4.3.5. Specifying other programs
The build system has been tuned for use with GNU make but also
works with other versions of make. If the `make' command runs a
non-GNU version but a GNU version is available under a different
@@ -346,7 +282,7 @@ CONTENTS
the `ar' and `ranlib' (or `:') commands to override values
detected by configure.
The HDF5 Library, include files, and utilities are installed
The HDF5 library, include files, and utilities are installed
during `make install' (described below) with a BSD-compatible
install program detected automatically by configure. If none is
found, the shell script bin/install-sh is used. Configure does not
@@ -364,7 +300,7 @@ CONTENTS
because the HDF5 makefiles also use the install program to
change file ownership and/or access permissions.
4.3.7. Specifying other libraries and headers
4.3.6. Specifying other libraries and headers
Configure searches the standard places (those places known by the
system's compiler) for include files and header files. However,
additional directories can be specified by using the CPPFLAGS
@@ -389,12 +325,12 @@ CONTENTS
./configure
HDF5 includes Szip as a predefined compression method (see 3.2).
To enable Szip compression, the HDF5 Library must be configured
and built using the Szip Library:
To enable Szip compression, the HDF5 library must be configured
and built using the Szip library:
$ ./configure --with-szlib=/Szip_Install_Directory
4.3.8. Static versus shared linking
4.3.7. Static versus shared linking
The build process will create static libraries on all systems and
shared libraries on systems that support dynamic linking to a
sufficient degree. Either form of the library may be suppressed by
@@ -410,7 +346,7 @@ CONTENTS
$ ./configure --enable-static-exec
4.3.9. Optimization versus symbolic debugging
4.3.8. Optimization versus symbolic debugging
The library can be compiled to provide symbolic debugging support
so it can be debugged with gdb, dbx, ddd, etc., or it can be
compiled with various optimizations. To compile for symbolic
@@ -430,9 +366,7 @@ CONTENTS
(such as type conversion execution times and extensive invariant
condition checking). To enable this debugging, supply a
comma-separated list of package names to the `--enable-internal-debug'
switch. See "Debugging HDF5 Applications" for a list of package names:
http://www.hdfgroup.org/HDF5/doc/H5.user/Debugging.html
switch.
Debugging can be disabled by saying `--disable-internal-debug'.
The default debugging level for snapshots is a subset of the
@@ -448,39 +382,39 @@ CONTENTS
arguments, and the return values. To enable or disable the
ability to trace the API say `--enable-trace' (the default for
snapshots) or `--disable-trace' (the default for public releases).
The tracing must also be enabled at runtime to see any output
(see "Debugging HDF5 Applications," reference above).
The tracing must also be enabled at runtime to see any output.
4.3.10. Parallel versus serial library
The HDF5 Library can be configured to use MPI and MPI-IO for
4.3.9. Parallel versus serial library
The HDF5 library can be configured to use MPI and MPI-IO for
parallelism on a distributed multi-processor system. Read the
file INSTALL_parallel for detailed explanations.
file INSTALL_parallel for detailed information.
4.3.11. Threadsafe capability
The HDF5 Library can be configured to be thread-safe (on a very
4.3.10. Threadsafe capability
The HDF5 library can be configured to be thread-safe (on a very
large scale) with the `--enable-threadsafe' flag to the configure
script. Some platforms may also require the '--with-pthread=INC,LIB'
(or '--with-pthread=DIR') flag to the configure script.
For further details, see "HDF5 Thread Safe Library":
For further information, see:
http://www.hdfgroup.org/HDF5/doc/TechNotes/ThreadSafeLibrary.html
https://portal.hdfgroup.org/display/knowledge/Questions+about+thread-safety+and+concurrent+access
4.3.12. Backward compatibility
The 1.10 version of the HDF5 Library can be configured to operate
4.3.11. Backward compatibility
The 1.10 version of the HDF5 library can be configured to operate
identically to the v1.8 library with the
--with-default-api-version=v18
configure flag, or identically to the v1.6 library with the
--with-default-api-version=v16
configure flag. This allows existing code to be compiled with the
v1.10 library without requiring immediate changes to the application
source code. For addtional configuration options and other details,
see "API Compatibility Macros in HDF5":
source code. For additional configuration options and other details,
see "API Compatibility Macros":
https://support.hdfgroup.org/HDF5/doc/RM/APICompatMacros.html
https://portal.hdfgroup.org/display/HDF5/API+Compatibility+Macros
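A sketch of the two compatibility configurations described above:

```shell
# Default the public API to the v1.8 signatures:
$ ./configure --with-default-api-version=v18

# Or default it to the v1.6 signatures:
$ ./configure --with-default-api-version=v16
```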
4.4. Building
The library, confidence tests, and programs can be built by
saying just:
specifying:
$ make
@@ -497,7 +431,7 @@ CONTENTS
4.5. Testing
HDF5 comes with various test suites, all of which can be run by
saying
specifying:
$ make check
@@ -526,13 +460,13 @@ CONTENTS
longer test, set HDF5TestExpress to 0. 1 is the default.
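Assuming a configured build tree, the longer pass can be selected by setting
the variable on the make command line:

```shell
# HDF5TestExpress=1 (the default) runs the shorter tests; 0 selects the longer pass.
$ HDF5TestExpress=0 make check
```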
4.6. Installing HDF5
The HDF5 Library, include files, and support programs can be
installed in a (semi-)public place by saying `make install'. The
files are installed under the directory specified with
`--prefix=DIR' (default is 'hdf5') in directories named `lib',
`include', and `bin'. The directories, if not existing, will be
created automatically, provided the mkdir command supports the -p
option.
The HDF5 library, include files, and support programs can be
installed by specifying `make install'. The files are installed under the
directory specified with `--prefix=DIR' (or if not specified, in 'hdf5'
in the top directory of the HDF5 source code). They will be
placed in directories named `lib', `include', and `bin'. The directories,
if not existing, will be created automatically, provided the mkdir command
supports the -p option.
If `make install' fails because the install command at your site
somehow fails, you may use the install-sh that comes with the
@@ -589,134 +523,15 @@ CONTENTS
5. Using the Library
Please see the "HDF5 User's Guide" and the "HDF5 Reference Manual":
For information on using HDF5 see the documentation, tutorials and examples
found here:
http://www.hdfgroup.org/HDF5/doc/
https://portal.hdfgroup.org/display/HDF5/HDF5
Most programs will include <hdf5.h> and link with -lhdf5.
Additional libraries may also be necessary depending on whether
support for compression, etc., was compiled into the HDF5 Library.
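As a convenience, an installation's h5cc wrapper script (installed in
hdf5/bin) supplies those flags automatically; my_prog.c below is a placeholder
name for your own source file:

```shell
# h5cc adds the include path, library path, -lhdf5, and any extra
# compression libraries the build was configured with.
$ h5cc my_prog.c -o my_prog
```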
A summary of the HDF5 installation can be found in the
libhdf5.settings file in the same directory as the static and/or
shared HDF5 Libraries.
A summary of the features included in the built HDF5 installation can be found
in the libhdf5.settings file in the same directory as the static and/or
shared HDF5 libraries.
6. Support
Support is described in the README file.
*****************************************************************************
APPENDIX
*****************************************************************************
A. Warnings about compilers
Output from the following compilers should be treated with suspicion
when used to compile the HDF5 Library, especially if optimizations are
enabled. In all cases, HDF5 attempts to work around the compiler bugs.
A.1. GNU (Intel platforms)
Versions before 2.8.1 have serious problems allocating registers
when functions contain operations on `long long' datatypes.
A.2. COMPAQ/DEC
The V5.2-038 compiler (and possibly others) occasionally
generates incorrect code for memcpy() calls when optimizations
are enabled, resulting in unaligned access faults. HDF5 works
around the problem by casting the second argument to `char *'.
The Fortran module (5.4.1a) fails in compiling some Fortran
programs. Use 5.5.0 or higher.
A.3. SGI (Irix64 6.2)
The Mongoose 7.00 compiler has serious optimization bugs and
should be upgraded to MIPSpro 7.2.1.2m. Patches are available
from SGI.
A.4. Windows/NT
The Microsoft Win32 5.0 compiler is unable to cast unsigned long
long values to doubles. HDF5 works around this bug by first
casting to signed long long and then to double.
A link warning: defaultlib "LIBC" conflicts with use of other libs
appears for debug version of VC++ 6.0. This warning will not affect
building and testing HDF5 Libraries.
B. Large (>2GB) versus small (<2GB) file capability
In order to read or write files that could potentially be larger
than 2GB, it is necessary to use the non-ANSI `long long' data
type on some platforms. However, some compilers (e.g., GNU gcc
versions before 2.8.1 on Intel platforms) are unable to produce
correct machine code for this datatype.
C. Building and testing with other compilers
C.1. Building and testing with Intel compilers
When Intel compilers are used (icc or ecc), you will need to modify
the generated "libtool" program after configuration is finished.
On or around line 104 of the libtool file, there are lines which
look like:
# How to pass a linker flag through the compiler.
wl=""
Change these lines to this:
# How to pass a linker flag through the compiler.
wl="-Wl,"
UPDATE: This is now done automatically by the configure script.
However, if you still experience a problem, you may want to check this
line in the libtool file and make sure that it has the correct value.
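The manual edit above can also be scripted; the sketch below uses libtool.demo
as a stand-in for the generated libtool file:

```shell
# Create a two-line stand-in for the generated libtool file (the real file is
# several hundred lines; only the wl= assignment matters here).
printf '# How to pass a linker flag through the compiler.\nwl=""\n' > libtool.demo

# Rewrite the empty assignment so linker flags are passed through the compiler.
sed 's/^wl=""$/wl="-Wl,"/' libtool.demo > libtool.demo.fixed

cat libtool.demo.fixed
```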
* To build the Fortran library using Intel compiler on Linux 2.4,
one has to perform the following steps:
x Use the -fpp -DDEC$=DEC_ -DMS$=MS_ compiler flags to disable
DEC and MS compiler directives in source files in the fortran/src,
fortran/test, and fortran/examples directories.
E.g., setenv F9X 'ifc -fpp -DDEC$=DEC_ -DMS$=MS_'
Do not use double quotes since $ is interpreted in them.
x If Version 6.0 of Fortran compiler is used, the build fails in
the fortran/test directory and then in the fortran/examples
directory. To proceed, edit the work.pcl files in those
directories to contain two lines:
work.pc
../src/work.pc
x Do the same in the fortran/examples directory.
x A problem with work.pc files was resolved for the newest version
of the compiler (7.0).
* To build the Fortran library on IA32, follow the steps described
above, except that the DEC and MS compiler directives should be
removed manually or use a patch from HDF FTP server:
ftp://ftp.hdfgroup.org/HDF5/current/
C.2. Building and testing with PGI compilers
When PGI C and C++ compilers are used (pgcc or pgCC), you will need to
modify the generated "libtool" program after configuration is finished.
On or around line 104 of the libtool file, there are lines which
look like this:
# How to pass a linker flag through the compiler.
wl=""
Change these lines to this:
# How to pass a linker flag through the compiler.
wl="-Wl,"
UPDATE: This is now done automatically by the configure script. However,
if you still experience a problem, you may want to check this line in
the libtool file and make sure that it has the correct value.
To build the HDF5 C++ Library with pgCC (version 4.0 and later), set
the environment variable CXX to "pgCC -tlocal"
setenv CXX "pgCC -tlocal"
before running the configure script.

View File

@@ -40,9 +40,11 @@ and the parallel file system.
1.2. Further Help
-----------------
If you still have difficulties installing PHDF5 in your system, please send
mail to
help@hdfgroup.org
For help with installation, questions can be posted to the HDF Forum or sent to the HDF Helpdesk:
HDF Forum: https://forum.hdfgroup.org/
HDF Helpdesk: https://portal.hdfgroup.org/display/support/The+HDF+Help+Desk
In your mail, please include the output of "uname -a". If you have run the
"configure" command, attach the output of the command and the content of
@@ -87,12 +89,8 @@ The following steps are for building HDF5 for the Hopper compute
nodes. They would probably work for other Cray systems but have
not been verified.
Obtain a copy from the HDF ftp server:
http://www.hdfgroup.org/ftp/HDF5/current/src/
(link might change, so always double check the HDF group website).
$ wget http://www.hdfgroup.org/ftp/HDF5/current/src/hdf5-x.x.x.tar.gz
unpack the tarball
Obtain the HDF5 source code:
https://portal.hdfgroup.org/display/support/Downloads
The entire build process should be done on a MOM node in an interactive allocation and on a file system accessible by all compute nodes.
Request an interactive allocation with qsub:

View File

@@ -4,45 +4,37 @@ HDF5 version 1.11.2 currently under development
INTRODUCTION
This document describes the differences between HDF5-1.10.1 and HDF5 1.10.2, and
contains information on the platforms tested and known problems in HDF5-1.10.1.
For more details check the HISTORY*.txt files in the HDF5 source.
This document describes the differences between this release and the previous
HDF5 release. It contains information on the platforms tested and known
problems in this release. For more details check the HISTORY*.txt files in the
HDF5 source.
Note that documentation in the links below will be updated at the time of each
final release.
Links to HDF5 1.10.1 source code, documentation, and additional materials can be found on The HDF5 web page at:
Links to HDF5 documentation can be found on The HDF5 web page:
https://support.hdfgroup.org/HDF5/
https://portal.hdfgroup.org/display/HDF5/HDF5
The HDF5 1.10.1 release can be obtained from:
The official HDF5 releases can be obtained from:
https://support.hdfgroup.org/HDF5/release/obtain5.html
https://www.hdfgroup.org/downloads/hdf5/
User documentation for the snapshot can be accessed directly at this location:
Changes from Release to Release and New Features in the HDF5-1.10.x release series
can be found at:
https://support.hdfgroup.org/HDF5/doc/
New features in the HDF5-1.10.x release series, including brief general
descriptions of some new and modified APIs, are described in the "New Features
in HDF5 1.10" document:
https://support.hdfgroup.org/HDF5/docNewFeatures/index.html
All new and modified APIs are listed in detail in the "HDF5 Software Changes
from Release to Release" document, in the section "Release 1.10.1 (current
release) versus Release 1.10.0
https://support.hdfgroup.org/HDF5/doc/ADGuide/Changes.html
https://portal.hdfgroup.org/display/HDF5/HDF5+Application+Developer%27s+Guide
If you have any questions or comments, please send them to the HDF Help Desk:
help@hdfgroup.org
help@hdfgroup.org
CONTENTS
- New Features
- Support for new platforms and languages
- Bug Fixes since HDF5-1.10.1
- Bug Fixes since HDF5-1.10.2
- Supported Platforms
- Tested Configuration Features Summary
- More Tested Platforms
@@ -54,119 +46,16 @@ New Features
Configuration:
-------------
- CMake
Change minimum version to 3.10.
This change removes the need to support a copy of the FindMPI.cmake module,
which has been removed, along with its subfolder in the config/cmake_ext_mod
location.
(ADB - 2018/03/09)
- CMake
Add pkg-config file generation
Added pkg-config file generation for the C, C++, HL, and HL C++ libraries.
In addition, builds on Linux will create h5cXXX scripts that use the pkg-config
files. This is a limited implementation of a script like autotools h5cc.
(ADB - 2018/03/08, HDFFV-4359)
- CMake
Refactor use of CMAKE_BUILD_TYPE for new variable, which understands
the type of generator in use.
Added new configuration macros to use new HDF_BUILD_TYPE variable. This
variable is set correctly for the type of generator being used for the build.
(ADB - 2018/01/08, HDFFV-10385, HDFFV-10296)
-
Library:
--------
- Add an enumerated value to H5F_libver_t for H5Pset_libver_bounds().
Currently, the library defines two values for H5F_libver_t and supports
only two pairs of (low, high) combinations as derived from these values.
Thus the bounds setting via H5Pset_libver_bounds() is rather restricted.
Add an enumerated value (H5F_LIBVER_V18) to H5F_libver_t and
H5Pset_libver_bounds() now supports five pairs of (low, high) combinations
as derived from these values. This addition provides the user more
flexibility in setting bounds for object creation.
(VC - 2018/03/14)
- Add prefix option to VDS files.
Currently, VDS source files must be in the active directory to be
found by the virtual file. Adding the option of a prefix to be set
on the virtual file, using a data access property list (DAPL),
allows the source files to be located at an absolute or relative path
to the virtual file.
Private utility functions in H5D and H5L packages merged into single
function in H5F package.
New public APIs:
herr_t H5Pset_virtual_prefix(hid_t dapl_id, const char* prefix);
ssize_t H5Pget_virtual_prefix(hid_t dapl_id, char* prefix /*out*/, size_t size);
The prefix can also be set with an environment variable, HDF5_VDS_PREFIX.
(ADB - 2017/12/12, HDFFV-9724, HDFFV-10361)
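A sketch of the environment-variable route described above (my_vds_app is a
hypothetical program that opens the virtual dataset):

```shell
# HDF5_VDS_PREFIX points the library at the directory holding the VDS source files.
$ HDF5_VDS_PREFIX=/path/to/source/files ./my_vds_app
```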
-
Parallel Library:
-----------------
- Optimize parallel open/location of the HDF5 super-block
Previous releases of PHDF5 required all parallel ranks to
search for the HDF5 superblock signature when opening the
file. As this is accomplished more or less as a synchronous
operation, a large number of processes can experience a
slowdown in the file open due to filesystem contention.
As a first step in improving the startup/file-open performance,
we allow MPI rank 0 of the associated MPI communicator to locate
the base offset of the super-block and then broadcast that result
to the remaining ranks in the parallel group. Note that this
approach is utilized ONLY during file opens which employ the MPIO
file driver in HDF5 by previously having called H5Pset_fapl_mpio().
HDF5 parallel file operations which do not employ multiple ranks
e.g., specifying MPI_COMM_SELF (whose MPI_Comm_size == 1)
as opposed to MPI_COMM_WORLD, will not be affected by this
optimization. Conversely, parallel file operations on subgroups
of MPI_COMM_WORLD are allowed to be run in parallel with each
subgroup operating as an independent collection of processes.
(RAW - 2017/10/10, HDFFV-10294)
- Large MPI-IO transfers
Previous releases of PHDF5 would fail when attempting to
read or write greater than 2GB of data in a single IO operation.
This issue stems principally from the MPI API, whose definitions
use 32-bit integers to describe the number of data elements
and the datatype that MPI should use to effect a data transfer.
Historically, HDF5 has invoked MPI-IO with the number of
elements in a contiguous buffer represented as the length
of that buffer in bytes.
Larger MPI-IO transfers are now enabled by first detecting
when a user IO request would exceed the 2GB limit described
above. Once a transfer request is identified as requiring
special handling, PHDF5 creates a derived datatype consisting
of a vector of fixed-size blocks, which is in turn wrapped in
a single MPI_Type_struct together with any remaining data. The
newly created datatype is then used in place of MPI_BYTE,
allowing the original user request to be fulfilled without
encountering API errors.
(RAW - 2017/07/11, HDFFV-8839)
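The heart of the workaround is simple arithmetic: decompose the
oversized byte count into some number of fixed-size blocks plus a
remainder, each of which fits in a 32-bit int. A minimal sketch of that
decomposition (the function name is hypothetical, and the block size is
taken as a parameter rather than the library's internal constant):

```c
#include <stdint.h>

/* Hypothetical sketch: split `total` bytes into `count` blocks of
 * `block_size` bytes plus `remainder` leftover bytes, so that every
 * element count handed to MPI fits in a 32-bit int. */
static void split_transfer(uint64_t total, uint32_t block_size,
                           uint64_t *count, uint32_t *remainder)
{
    *count     = total / block_size;
    *remainder = (uint32_t)(total % block_size);
}
```

In PHDF5 the `count` blocks become the vector portion of the derived
datatype and the remainder is appended via MPI_Type_create_struct; the
combined type then replaces MPI_BYTE in the transfer.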
-
Fortran Library:
----------------
C++ Library:
------------
- The following C++ API wrappers have been added to the C++ Library:
+ H5Lcreate_soft:
// Creates a soft link from link_name to target_name.
void link(const char *target_name, const char *link_name,...)
void link(const H5std_string& target_name,...)
+ H5Lcreate_hard:
// Creates a hard link from new_name to curr_name.
void link(const char *curr_name, const Group& new_loc,...)
void link(const H5std_string& curr_name, const Group& new_loc,...)
// Creates a hard link from new_name to curr_name in same location.
void link(const char *curr_name, const hid_t same_loc,...)
void link(const H5std_string& curr_name, const hid_t same_loc,...)
Note: the previous version of H5Location::link will be deprecated.
+ H5Lcopy:
// Copy an object from a group or file to another.
void copyLink(const char *src_name, const Group& dst,...)
void copyLink(const H5std_string& src_name, const Group& dst,...)
// Copy an object from a group or file to the same location.
void copyLink(const char *src_name, const char *dst_name,...)
void copyLink(const H5std_string& src_name,...)
+ H5Lmove:
// Rename an object in a group or file to a new location.
void moveLink(const char* src_name, const Group& dst,...)
void moveLink(const H5std_string& src_name, const Group& dst,...)
// Rename an object in a group or file to the same location.
void moveLink(const char* src_name, const char* dst_name,...)
void moveLink(const H5std_string& src_name,...)
Note: the previous version of H5Location::move will be deprecated.
+ H5Ldelete:
// Removes the specified link from this location.
void unlink(const char *link_name,
const LinkAccPropList& lapl = LinkAccPropList::DEFAULT)
void unlink(const H5std_string& link_name,
const LinkAccPropList& lapl = LinkAccPropList::DEFAULT)
Note: additional parameter is added to previous H5Location::unlink.
+ H5Tencode and H5Tdecode:
// Creates a binary object description of this datatype.
void DataType::encode() - C API H5Tencode()
// Returns the decoded type from the binary object description.
ArrayType::decode() - C API H5Tdecode()
CompType::decode() - C API H5Tdecode()
DataType::decode() - C API H5Tdecode()
EnumType::decode() - C API H5Tdecode()
FloatType::decode() - C API H5Tdecode()
IntType::decode() - C API H5Tdecode()
StrType::decode() - C API H5Tdecode()
VarLenType::decode() - C API H5Tdecode()
+ H5Lget_info:
// Returns the information of the named link.
H5L_info_t getLinkInfo(const H5std_string& link_name,...)
(BMR - 2018/03/11, HDFFV-10149)
- Added class LinkCreatPropList for link create property list.
(BMR - 2018/03/11, HDFFV-10149)
- Added overloaded functions H5Location::createGroup to take a link
creation property list
Group createGroup(const char* name, const LinkCreatPropList& lcpl)
Group createGroup(const H5std_string& name, const LinkCreatPropList& lcpl)
(BMR - 2018/03/11, HDFFV-10149)
- A document was added to the HDF5 C++ API Reference Manual to show the
mapping from C APIs to their C++ wrappers. It can be found on the main
page of the C++ API Reference Manual.
(BMR - 2017/10/17, HDFFV-10151)
-
Java Library:
----------------
- Wrapper added for enabling the error stack.
H5error_off disables error stack reporting. To re-enable
reporting, the error stack info is saved so that H5error_on
can restore the previous state.
(ADB - 2018/03/13, HDFFV-10412)
- Wrappers added for the following APIs:
H5Pset_evict_on_close
H5Pget_evict_on_close
H5Pset_chunk_opts
H5Pget_chunk_opts
H5Pset_efile_prefix
H5Pget_efile_prefix
H5Pset_virtual_prefix
H5Pget_virtual_prefix
(ADB - 2017/12/20)
-
Tools:
------
- h5diff
h5diff has a new option, --enable-error-stack, which enables
the display of the HDF5 error stack. This completes the
improvement to the main tools: h5copy, h5diff, h5dump, h5ls,
and h5repack.
(ADB - 2017/08/30, HDFFV-9774)
-
High-Level APIs:
---------------
Support for new platforms, languages and compilers.
=======================================
-
Bug Fixes since HDF5-1.10.2 release
==================================
Library
-------
- The data read after a direct chunk write to a chunked dataset
was incorrect.
The problem was due to a null dataset pointer being passed to
the insert callback for the chunk index in the routine
H5D__chunk_direct_write() in H5Dchunk.c.
The dataset in question had a single chunk and used the single
chunk index because the latest format was enabled at file
creation. The single chunk index was the only index that used
this pointer in the insert callback.
The fix is to pass the dataset pointer to the insert callback
for the chunk index in H5D__chunk_direct_write().
(VC - 2018/03/20, HDFFV-10425)
- Add public routine H5DOread_chunk to the high-level C library
Complementing H5DOwrite_chunk(), which writes an entire chunk
to the file directly, users requested a public routine to read
an entire chunk from the file directly.
This public routine was added based on a patch from GE Healthcare.
(VC - 2017/05/19, HDFFV-9934)
- Freeing of object header in H5Ocache.c
It was discovered that the object header was not released
properly when the checksum verification failed and a re-load of
the object header was needed.
The object header that failed the checksum verification is now
freed only after the new object header is reloaded, deserialized,
and set up.
(VC - 2018/03/14, HDFFV-10209)
- H5Pset_evict_on_close in H5Pfapl.c
Changed the minor error number from H5E_CANTSET to H5E_UNSUPPORTED for
parallel library.
(ADB - 2018/03/06, HDFFV-10414)
A utility function could not handle lowercase Windows drive letters.
Added a call to the toupper function for the drive letter.
(ADB - 2017/12/18, HDFFV-10307)
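The fix reduces to normalizing the drive letter before any comparison;
a self-contained sketch (the function name is hypothetical):

```c
#include <ctype.h>

/* Hypothetical sketch: normalize a Windows path in place so that a
 * lowercase drive letter such as "c:\data" becomes "C:\data".
 * Paths without a drive-letter prefix are left untouched. */
static void normalize_drive_letter(char *path)
{
    if (path && path[0] != '\0' && path[1] == ':')
        path[0] = (char)toupper((unsigned char)path[0]);
}
```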
- filter plugin handling in H5PL.c and H5Z.c
It was discovered that the dynamic loading process used by
filter plugins had issues with library dependencies.
The CMake build process was changed to use LINK INTERFACE
keywords, which allow the HDF5 C library to keep dependent
libraries private. The
filter plugin libraries no longer require dependent libraries
(such as szip or zlib) to be available.
(ADB - 2017/11/16, HDFFV-10328)
- H5Zfilter_avail in H5Z.c
The public function checked for plugins, while the private
function did not.
Modified H5Zfilter_avail and the private function
H5Z_filter_avail, moving the plugin check from the public to the
private function. Updated H5P__set_filter due to the change in
H5Z_filter_avail. Updated tests.
(ADB - 2017/10/10, HDFFV-10297, HDFFV-10319)
- An uninitialized struct could cause a memory access error when using
variable-length or reference types in a compressed, chunked dataset.
A struct containing a callback function pointer and a pointer to some
associated data was used before initialization. This could cause a
memory access error and system crash. This could only occur under
unusual conditions when using variable-length and reference types in
a compressed, chunked dataset.
On recent versions of Visual Studio, when built in debug mode, the
debug heap will complain and cause a crash if the code in question
is executed (this will cause the objcopy test to fail).
(DER - 2017/11/21, HDFFV-10330)
- If an HDF5 file contains a filter pipeline message with a 'number of
filters' field that exceeds the maximum number of allowed filters,
the error handling code will attempt to dereference a NULL pointer.
This issue was reported to The HDF Group as issue #CVE-2017-17505.
NOTE: The HDF5 C library cannot produce such a file. This condition
should only occur in a corrupt (or deliberately altered) file
or a file created by third-party software.
This problem arose because the error handling code assumed that
the 'number of filters' field implied that a dynamic array of that
size had already been created and that the cleanup code should
iterate over that array and clean up each element's resources. If
an error occurred before the array has been allocated, this will
not be true.
This has been changed so that the number of filters is set to
zero on errors. Additionally, the filter array traversal in the
error handling code now requires that the filter array not be NULL.
(DER - 2018/02/06, HDFFV-10354)
- If an HDF5 file contains a filter pipeline message which contains
a 'number of filters' field that exceeds the actual number of
filters in the message, the HDF5 C library will read off the end of
the read buffer.
This issue was reported to The HDF Group as issue #CVE-2017-17506.
NOTE: The HDF5 C library cannot produce such a file. This condition
should only occur in a corrupt (or deliberately altered) file
or a file created by third-party software.
The problem was fixed by passing the buffer size with the buffer
and ensuring that the pointer cannot be incremented off the end
of the buffer. A mismatch between the number of filters declared
and the actual number of filters will now invoke normal HDF5
error handling.
(DER - 2018/02/26, HDFFV-10355)
- If an HDF5 file contains a malformed compound datatype with a
suitably large offset, the type conversion code can run off
the end of the type conversion buffer, causing a segmentation
(DER - 2018/02/26, HDFFV-10356)
- If an HDF5 file contains a malformed compound type which contains
a member of size zero, a division by zero error will occur while
processing the type.
This issue was reported to The HDF Group as issue #CVE-2017-17508.
NOTE: The HDF5 C library cannot produce such a file. This condition
should only occur in a corrupt (or deliberately altered) file
or a file created by third-party software.
Checking for zero before dividing fixes the problem. Instead of the
division by zero, the normal HDF5 error handling is invoked.
(DER - 2018/02/26, HDFFV-10357)
- If an HDF5 file contains a malformed symbol table node that declares
it contains more symbols than it actually contains, the library
can run off the end of the metadata cache buffer while processing
the symbol table node.
This issue was reported to The HDF Group as issue #CVE-2017-17509.
NOTE: The HDF5 C library cannot produce such a file. This condition
should only occur in a corrupt (or deliberately altered) file
or a file created by third-party software.
Performing bounds checks on the buffer while processing fixes the
problem. Instead of the segmentation fault, the normal HDF5 error
handling is invoked.
(DER - 2018/03/12, HDFFV-10358)
Configuration
-------------
- CMake
Update CMake commands configuration.
A number of improvements were made to the CMake commands. Most
changes simplify usage or eliminate unused constructs. Some
changes also improve cross-platform support.
(ADB - 2018/02/01, HDFFV-10398)
- CMake
Correct usage of CMAKE_BUILD_TYPE variable.
The use of CMAKE_BUILD_TYPE is incorrect for multi-config
generators (Visual Studio and XCode) and is optional for
single-config generators. Created a new macro to check the
GLOBAL PROPERTY -> GENERATOR_IS_MULTI_CONFIG.
Created two new HDF variables, HDF_BUILD_TYPE and
HDF_CFG_BUILD_TYPE. The default for both is "Release".
(ADB - 2018/01/10, HDFFV-10385)
- CMake
Add replacement of fortran flags if using static CRT.
Added TARGET_STATIC_CRT_FLAGS call to HDFUseFortran.cmake file in
config/cmake_ext_mod folder.
(ADB - 2018/01/08, HDFFV-10334)
- CMake
The hdf5 library used shared szip and zlib, which needlessly required
applications to link with the same szip and zlib libraries.
Changed the target_link_libraries commands to use the static libs.
Removed improper link duplication of szip and zlib.
Adjusted the link dependencies and the link interface values of
the target_link_libraries commands.
(ADB - 2017/11/14, HDFFV-10329)
- CMake MPI
CMake implementation for MPI was problematic and would create incorrect
MPI library references in the hdf5 libraries.
Reworked the CMake MPI code to properly create CMake targets. Also merged
the latest CMake FindMPI.cmake changes to the local copy. This is necessary
until HDF changes the CMake minimum to 3.9 or greater.
(ADB - 2017/11/02, HDFFV-10321)
- CMake
Too many commands for the POST_BUILD step caused the command
line to become too long on Windows.
Changed the foreach of the copy command to use a custom command
with the HDFTEST_COPY_FILE macro.
(ADB - 2017/07/12, HDFFV-10254)
-
Performance
-------------
Fortran
--------
- Fixed compilation errors when using Intel 18 Fortran compilers
(MSB - 2017/11/3, HDFFV-10322)
-
Tools
-----
- h5clear
An enhancement to the tool for setting a file's stored EOA.
It was discovered that a crashed file's stored EOA in the
superblock could be smaller than the actual file's EOF. When the
file was reopened and closed, the library truncated the file to
the stored EOA.
An option was added to the tool to set the file's stored EOA in
the superblock to the maximum of (EOA, EOF) + increment.
Another option was also added to print the file's EOA and EOF.
(VC - 2018/03/14, HDFFV-10360)
- h5repack
h5repack changes the chunk parameters when a change of layout is not
specified and a filter is applied.
The fixes for HDFFV-10297 and HDFFV-10319 reworked the h5repack
and h5diff code in the tools library. The check for an existing
layout was incorrectly placed inside an if block and was not
executed. The check was moved into the normal path of the
function.
(ADB - 2018/02/21, HDFFV-10412)
- h5dump
The tools library hides the error stack during file open.
While this is almost always preferable, there are cases where
displaying the error stack when a tool cannot open a file is
useful. An optional argument was added to --enable-error-stack
to provide this capability; as an optional argument, it does not
affect the existing operation of --enable-error-stack. h5dump is
the only tool to implement this change.
(ADB - 2018/02/15, HDFFV-10384)
- h5dump
h5dump would output an indented blank line in the filters section.
h5dump overused the h5tools_simple_prefix function, which is
intended to account for the data index (x,y,z) option.
The function call was removed for header information.
(ADB - 2018/01/25, HDFFV-10396)
- h5repack
h5repack incorrectly searched internal object table for name.
h5repack searched its table of objects for a name and, if the
name did not match, tried to determine whether the name without
a leading slash would match. The logic was flawed: the table
stored names (paths) without a leading slash and used strstr to
compare the table path to the name.
The assumption was that a length difference of one meant a
match; however, "pressure" would match "/pressure" as well as
"/pressure1", "/pressure2", etc. The logic was changed to remove
any leading slash and then do a full comparison of the name.
(ADB - 2018/01/18, HDFFV-10393)
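The corrected comparison can be sketched as: strip any leading slash
from both names, then require an exact match. The function name below is
hypothetical; the point is the full strcmp replacing the old
strstr-style substring test that let "pressure" match "/pressure1":

```c
#include <string.h>

/* Hypothetical sketch of the corrected match: compare full names
 * after dropping any leading '/', instead of the old substring test. */
static int names_match(const char *table_name, const char *name)
{
    if (table_name[0] == '/')
        table_name++;
    if (name[0] == '/')
        name++;
    return strcmp(table_name, name) == 0;
}
```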
- h5repack
h5repack failed to handle more than 9 characters for int
conversion. User-defined filter parameter conversions would fail
for integers longer than 9 characters. The local array for
storing the current command-line parameter was enlarged to
prevent buffer overflows.
(ADB - 2018/01/17, HDFFV-10392)
- h5diff
h5diff segfaulted when comparing VL strings against fixed
strings.
Reworked the solution for HDFFV-8625 and HDFFV-8639. Implemented
a check for string objects of the same type in the diff_can_type
function by adding an if (tclass1 == H5T_STRING) block. This
block moves the same check that was added for attributes into
this function, which is used by all object types and also
handles complex type structures.
Also added a new test file in h5diffgentest for testing this issue
and removed the temporary files used in the test scripts.
(ADB - 2018/01/04, HDFFV-8745)
- h5repack
h5repack failed to copy a dataset with existing filter.
The h5repack and h5diff code in the tools library was reworked,
with improved error handling, cleanup of resources, and checks
of function calls.
Modified H5Zfilter_avail and private function, H5Z_filter_avail.
Moved check for plugin from public to private function. Updated
H5P__set_filter due to change in H5Z_filter_avail. Updated tests.
Note, h5repack output display has changed to clarify the individual
steps of the repack process. The output indicates if an operation
applies to all objects. Lines with notation and no information
have been removed.
(ADB - 2017/10/10, HDFFV-10297, HDFFV-10319)
- h5repack
h5repack always set the User Defined filter flag to H5Z_FLAG_MANDATORY.
Another parameter was added to the 'UD=' option to set the flag:
'0' (the default) for H5Z_FLAG_MANDATORY, or '1' for
H5Z_FLAG_OPTIONAL.
(ADB - 2017/08/31, HDFFV-10269)
- h5ls
h5ls generated an error on the stack when it encountered an
H5S_NULL dataspace.
Adding checks for H5S_NULL before calling H5Sis_simple (located
in the h5tools_dump_mem function) fixed the issue.
(ADB - 2017/08/17, HDFFV-10188)
- h5dump
h5dump segfaulted on output of XML file.
The function that escaped strings used the full buffer length
instead of just the length of the replacement string in a
strncpy call. Using the correct length fixed the issue.
(ADB - 2017/08/01, HDFFV-10256)
- h5diff
h5diff segfaulted on compare of a NULL variable length string.
Improved h5diff compare of strings by adding a check for
NULL strings and setting the lengths to zero.
(ADB - 2017/07/25, HDFFV-10246)
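The added guard can be sketched as follows: treat a NULL
variable-length string as having length zero, so two NULL (or empty)
strings compare equal and no NULL pointer is ever dereferenced (the
function name is hypothetical):

```c
#include <string.h>

/* Hypothetical sketch: compare two possibly-NULL variable-length
 * strings without dereferencing NULL; a NULL string is treated as
 * length zero, matching another NULL or empty string.
 * Returns nonzero if the strings differ. */
static int vl_strings_differ(const char *s1, const char *s2)
{
    size_t len1 = s1 ? strlen(s1) : 0;
    size_t len2 = s2 ? strlen(s2) : 0;

    if (len1 != len2)
        return 1;
    return (len1 == 0) ? 0 : (memcmp(s1, s2, len1) != 0);
}
```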
- h5import
h5import crashed trying to import data from a subset of a dataset.
Improved h5import by adding the SUBSET keyword; h5import now
uses Count times Block as the size of the dimensions.
Added INPUT_B_ORDER keyword to old-style configuration files.
The import from h5dump function expects the binary files to use native
types (FILE '-b' option) in the binary file.
(ADB - 2017/06/15, HDFFV-10219)
- h5repack
h5repack did not maintain the creation order flag of the root
group.
Improved h5repack by reading the creation order and applying the
flag to the new root group. Also added arguments to set the
order and index direction, which applies to the traversing of the
original file, on the command line.
(ADB - 2017/05/26, HDFFV-8611)
- h5diff
h5diff failed to account for strpad type and null terminators
of char strings. Also, h5diff failed to account for string length
differences and would give a different result depending on file
order in the command line.
The h5diff comparison of strings and arrays was improved by
adding a check for string lengths and for whether the strpad was
null-filled.
(ADB - 2017/05/18, HDFFV-9055, HDFFV-10128)
-
High-Level APIs:
------
@ -774,49 +153,27 @@ Bug Fixes since HDF5-1.10.1 release
C++ APIs
--------
- Removal of memory leaks
A private function was inadvertently called, causing memory leaks. This
is now fixed.
(BMR - 2018/03/12, user-reported via email)
-
Testing
-------
- Memory for three variables in testphdf5's coll_write_test was malloced
but not freed, leaking memory when running the test. The variables'
memory is now freed.
(LRK - 2018/03/12, HDFFV-10397)
Supported Platforms
===================
Linux 2.6.32-573.22.1.el6 GNU C (gcc), Fortran (gfortran), C++ (g++)
#1 SMP x86_64 GNU/Linux compilers:
(mayll/platypus) Version 4.4.7 20120313
Version 4.8.4
PGI C, Fortran, C++ for 64-bit target on
x86-64;
Version 16.10-0
Intel(R) C (icc), C++ (icpc), Fortran (ifort)
compilers:
Version 15.0.3.187 (Build 20150407)
MPICH 3.1.4 compiled with GCC 4.9.3
Linux 2.6.32-696.16.1.el6.ppc64 gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-18)
#1 SMP ppc64 GNU/Linux g++ (GCC) 4.4.7 20120313 (Red Hat 4.4.7-18)
(ostrich) GNU Fortran (GCC) 4.4.7 20120313 (Red Hat 4.4.7-18)
IBM XL C/C++ V13.1
IBM XL Fortran V15.1
Linux 3.10.0-327.10.1.el7 GNU C (gcc), Fortran (gfortran), C++ (g++)
#1 SMP x86_64 GNU/Linux compilers:
(kituo/moohan) Version 4.8.5 20150623 (Red Hat 4.8.5-4)
Version 4.9.3, Version 5.2.0,
Intel(R) C (icc), C++ (icpc), Fortran (ifort)
compilers:
Version 15.0.3.187 Build 20150407
Version 17.0.0.098 Build 20160721
MPICH 3.1.4 compiled with GCC 4.9.3
SunOS 5.11 32- and 64-bit Sun C 5.12 SunOS_sparc
Windows 10 x64 Visual Studio 2015 w/ Intel Fortran 16 (cmake)
Mac OS X Mavericks 10.9.5 Apple clang/clang++ version 6.0 from Xcode 6.2
64-bit gfortran GNU Fortran (GCC) 4.9.2
(wren/quail) Intel icc/icpc/ifort version 15.0.3
Mac OS X Yosemite 10.10.5 Apple clang/clang++ version 6.1 from Xcode 7.0
64-bit gfortran GNU Fortran (GCC) 4.9.2
(osx1010dev/osx1010test) Intel icc/icpc/ifort version 15.0.3
Mac OS X El Capitan 10.11.6 Apple clang/clang++ version 7.3.0 from Xcode 7.3
64-bit gfortran GNU Fortran (GCC) 5.2.0
(osx1011dev/osx1011test) Intel icc/icpc/ifort version 16.0.2
Mac OS Sierra 10.12.6 Apple LLVM version 8.1.0 (clang/clang++-802.0.42)
64-bit gfortran GNU Fortran (GCC) 7.1.0
(swallow/kite) Intel icc/icpc/ifort version 17.0.2
Tested Configuration Features Summary
The following platforms are not supported but have been tested for this release.
Linux 2.6.32-573.22.1.el6 GNU C (gcc), Fortran (gfortran), C++ (g++)
#1 SMP x86_64 GNU/Linux compilers:
(mayll/platypus) Version 4.4.7 20120313
Version 4.9.3, 5.3.0, 6.2.0
PGI C, Fortran, C++ for 64-bit target on
x86-64;
Version 17.10-0
Intel(R) C (icc), C++ (icpc), Fortran (ifort)
compilers:
Version 17.0.4.196 Build 20170411
MPICH 3.1.4 compiled with GCC 4.9.3
Linux 3.10.0-327.18.2.el7 GNU C (gcc) and C++ (g++) compilers
#1 SMP x86_64 GNU/Linux Version 4.8.5 20150623 (Red Hat 4.8.5-4)
(jelly) with NAG Fortran Compiler Release 6.1(Tozai)
GCC Version 7.1.0
OpenMPI 3.0.0-GCC-7.2.0-2.29
Intel(R) C (icc) and C++ (icpc) compilers
Version 17.0.0.098 Build 20160721
with NAG Fortran Compiler Release 6.1(Tozai)
Linux 3.10.0-327.10.1.el7 MPICH 3.2 compiled with GCC 5.3.0
#1 SMP x86_64 GNU/Linux
(moohan)
Linux 2.6.32-573.18.1.el6.ppc64 MPICH 3.1.4 compiled with
#1 SMP ppc64 GNU/Linux IBM XL C/C++ for Linux, V13.1
(ostrich) and IBM XL Fortran for Linux, V15.1