* "Simultaneous and equivalent" Read-Write and Write-Only channels for
file I/O.
* Only supports drivers with the H5FD_FEAT_DEFAULT_VFD_COMPATIBLE flag for
now, preventing issues with multi-file drivers.
Add Mirror VFD to library.
* Write-only operations over a network.
* Uses TCP/IP sockets.
* Server and auxiliary server-shutdown programs provided in a new directory,
`utils/mirror_vfd`.
* Automated testing via loopback ("remote" of localhost).
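A minimal usage sketch: the Write-Only channel is the Mirror VFD pointed at a
remote server, and the Splitter VFD pairs it with a local Read-Write channel.
This assumes the `H5Pset_fapl_mirror()`/`H5Pset_fapl_splitter()` calls and the
`H5FD_mirror_fapl_t`/`H5FD_splitter_vfd_config_t` structures behave as in the
VFD documentation; check `H5FDmirror.h` and `H5FDsplitter.h` for the exact
field and constant names in your release. Error checking is omitted.

```c
/* Sketch: mirror writes to a remote server while also writing locally.
 * Struct fields, magic/version constants, and the port value are assumptions;
 * verify against H5FDmirror.h / H5FDsplitter.h. */
#include "hdf5.h"
#include <string.h>

int
main(void)
{
    H5FD_mirror_fapl_t         mirror_conf;
    H5FD_splitter_vfd_config_t split_conf;
    hid_t                      wo_fapl, fapl, file;

    /* W/O channel: the Mirror VFD, pointed at the remote mirror server. */
    memset(&mirror_conf, 0, sizeof(mirror_conf));
    mirror_conf.magic          = H5FD_MIRROR_FAPL_MAGIC;
    mirror_conf.version        = H5FD_MIRROR_CURR_FAPL_T_VERSION;
    mirror_conf.handshake_port = 3000;          /* assumed server port */
    strncpy(mirror_conf.remote_ip, "127.0.0.1", H5FD_MIRROR_MAX_IP_LEN);
    wo_fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mirror(wo_fapl, &mirror_conf);

    /* Splitter: default driver for the R/W channel, mirror for the W/O one. */
    memset(&split_conf, 0, sizeof(split_conf));
    split_conf.magic          = H5FD_SPLITTER_MAGIC;
    split_conf.version        = H5FD_CURR_SPLITTER_VFD_CONFIG_VERSION;
    split_conf.rw_fapl_id     = H5P_DEFAULT;
    split_conf.wo_fapl_id     = wo_fapl;
    split_conf.ignore_wo_errs = 0;
    strncpy(split_conf.wo_path, "mirrored_copy.h5", H5FD_SPLITTER_PATH_MAX);
    fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_splitter(fapl, &split_conf);

    /* Writes through this file go to the local file and to the mirror. */
    file = H5Fcreate("local_copy.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

    H5Fclose(file);
    H5Pclose(fapl);
    H5Pclose(wo_fapl);
    return 0;
}
```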
Place hdf5_examples/ under ${datarootdir}, which on most systems will be
${prefix}/share/ anyway.
This is handy for NetBSD, where HDF5 examples are installed by convention
in ${prefix}/share/examples/hdf5/ rather than in
${prefix}/share/hdf5_examples/, which is the HDF5 default.
h5cc and h5redeploy should always be built and installed whether tools are
enabled or disabled. Also added Makefile.am to bin to build h5redeploy and
to install and uninstall both scripts. h5cc is created from h5cc.in by
configure, so it is available even when disabling tests.
Moved h5cc.in from tools/src/misc to the src directory to always create h5cc
whether or not tools are enabled.
Added configuration status of tools and tests to libhdf5.settings.
1. Restored the datatype, dataspace, and LCPL of the dataset back to the properties for the VOL connector.
2. Split external.c and vds.c because they called HDsetenv within the program; shell scripts are now used to set the environment variables instead.
3. Changed H5CX_get_vds_prefix and H5CX_get_ext_file_prefix to use H5P_peek instead of H5P_get.
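For reference, the VDS and external-file prefixes that these context calls
retrieve can be set by an application on the dataset access property list, or
from the environment as the reworked shell scripts in item 2 do. A minimal
sketch, assuming the public H5Pset_virtual_prefix()/H5Pset_efile_prefix()
calls and the HDF5_VDS_PREFIX/HDF5_EXTFILE_PREFIX environment variable names;
error checking omitted:

```c
/* Sketch: set the prefixes that H5CX_get_vds_prefix / H5CX_get_ext_file_prefix
 * eventually resolve.  Environment variable names are assumptions; check the
 * dataset access property list documentation for your release. */
#include "hdf5.h"

int
main(void)
{
    hid_t dapl = H5Pcreate(H5P_DATASET_ACCESS);

    /* Prefix for source files of a virtual dataset. */
    H5Pset_virtual_prefix(dapl, "/scratch/vds_sources");

    /* Prefix for external data files of a dataset with external storage. */
    H5Pset_efile_prefix(dapl, "/scratch/external_files");

    /* Alternatively, from a wrapper script rather than the program itself
     * (as item 2 above now does):
     *   HDF5_VDS_PREFIX=/scratch/vds_sources \
     *   HDF5_EXTFILE_PREFIX=/scratch/external_files ./test_program
     */

    /* ... open the dataset with this DAPL, e.g. H5Dopen2(file, name, dapl) ... */

    H5Pclose(dapl);
    return 0;
}
```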
* commit 'b02de315b93ac29d2483a91d526b110a25073505':
NNSA Tri-Labs TRILAB-98: Another two test cases out.
NNSA Tri-Labs TRILAB-98: Taking out a few more test cases.
NNSA Tri-Labs TRILAB-98: dt_arith and cpp_testhdf5 tests fail on sierra.llnl.gov. According to the group decision, simply provide a macro to disable some failing test cases on sierra (IBM POWER9 CPU). All failing cases involve the long double data type.
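The macro-guard approach described above can be sketched as a compile-time
switch around the affected conversion cases. The macro name
H5_DISABLE_SOME_LDOUBLE_CONV and the helper function below are illustrative
assumptions, not the exact symbols used in dt_arith.c:

```c
/* Sketch of the guard pattern: skip long double conversion cases on
 * platforms where they are known to fail (e.g. IBM POWER9).  Macro and
 * function names are assumptions for illustration only. */
#include <stdio.h>

static int
test_ldouble_to_llong(void)
{
#ifdef H5_DISABLE_SOME_LDOUBLE_CONV
    /* Known-bad long double <-> integer conversion on this platform:
     * report the case as skipped instead of running it. */
    printf("SKIPPED: long double -> long long conversion (disabled on this CPU)\n");
    return 0;
#else
    long double ld = 1234.0L;
    long long   ll = (long long)ld;   /* the conversion under test */

    return (ll == 1234LL) ? 0 : 1;
#endif
}

int
main(void)
{
    return test_ldouble_to_llong();
}
```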
* commit 'd6c2a96ac2f103d90b96d5b39814810e6a31ef99':
Updated the parallel install docs.
Eliminated the need for a separate script variable.
Added a helpful message to the flush script.
Added a shell script so we can run the parallel flush test on OpenMPI.
Fixed a memory leak that could occur after deleting attributes in dense storage.
The fix: when deleting attribute nodes from the name index v2 B-tree, if an
attribute is found in an intermediate B-tree node (which may be merged or
redistributed in the process), the dynamically allocated space for the
intermediate decoded attribute must be freed.
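The affected path can be driven from the public API by forcing attributes into
dense storage and then deleting them. A minimal sketch, assuming
H5Pset_attr_phase_change() thresholds as shown; file and attribute names are
arbitrary and error checking is omitted:

```c
/* Sketch: force attributes into dense storage, then delete them, which
 * exercises the v2 B-tree name-index removal path described above. */
#include "hdf5.h"
#include <stdio.h>

int
main(void)
{
    hid_t   file, gcpl, group, space, attr;
    hsize_t dims = 1;
    char    name[32];
    int     i, value;

    file = H5Fcreate("dense_attr.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);

    /* Switch to dense attribute storage once more than 2 attributes exist
     * (max_compact = 2, min_dense = 2). */
    gcpl = H5Pcreate(H5P_GROUP_CREATE);
    H5Pset_attr_phase_change(gcpl, 2, 2);
    group = H5Gcreate2(file, "g", H5P_DEFAULT, gcpl, H5P_DEFAULT);

    space = H5Screate_simple(1, &dims, NULL);

    /* Create enough attributes to land in dense storage... */
    for (i = 0; i < 16; i++) {
        snprintf(name, sizeof(name), "attr%02d", i);
        attr  = H5Acreate2(group, name, H5T_NATIVE_INT, space, H5P_DEFAULT, H5P_DEFAULT);
        value = i;
        H5Awrite(attr, H5T_NATIVE_INT, &value);
        H5Aclose(attr);
    }

    /* ...then delete them, driving removals through the name-index B-tree. */
    for (i = 0; i < 16; i++) {
        snprintf(name, sizeof(name), "attr%02d", i);
        H5Adelete(group, name);
    }

    H5Sclose(space);
    H5Gclose(group);
    H5Pclose(gcpl);
    H5Fclose(file);
    return 0;
}
```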