If a corrupt file sets the page buffer size in the superblock to zero,
the library could attempt to divide by zero when allocating space in
the file. The library now checks for valid page buffer sizes when
reading the superblock message.
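A minimal sketch of the kind of defensive check involved (illustrative only, not the library's internal code); page_size here is just a stand-in for the value read from the superblock:

    #include <stdint.h>

    /* Reject a zero page size before it can ever be used as a divisor. */
    static int validate_page_buffer_size(uint64_t page_size)
    {
        if (page_size == 0)
            return -1;   /* corrupt superblock: fail the open instead */
        return 0;
    }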
Fixes oss-fuzz issue 58762
* Fix bug in array conversion with strided background buffer. Convert some
memmove calls on non-overlapping buffers to memcpy.
* Revert inappropriate use of memcpy to memmove in H5T__conv_array (see the
sketch after this list)
* Add testing
* Add RELEASE.txt note and overwrite test case.
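For context, a small illustrative example (not the library's code) of why memcpy is only safe for non-overlapping buffers, while memmove must be used when source and destination can overlap, as they can with a strided background buffer:

    #include <string.h>

    static void shift_left_one(char *buf, size_t n)
    {
        /* Source and destination overlap, so memmove is required;
         * memcpy here would be undefined behavior. */
        memmove(buf, buf + 1, n - 1);
    }

    static void copy_disjoint(char *dst, const char *src, size_t n)
    {
        /* Buffers are known not to overlap, so memcpy is fine
         * (and may be faster). */
        memcpy(dst, src, n);
    }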
Add configure option to enable or disable extension features in general
Add configure option to enable or disable _Float16 support
Add new config options to various settings files
The H5Pset_page_buffer_size() API call sets the size of a file's page buffer
cache. This call was extremely strict about matching its parameters to the
file space strategy and page size used to create the file, requiring a
separate open of the file to obtain these parameters.
These requirements have been relaxed when a file access property list (fapl)
is used to open a previously-created file:
* When opening a file that does not use the H5F_FSPACE_STRATEGY_PAGE
strategy, the setting is ignored and the file will be opened, but
without a page buffer cache. This was previously an error.
* When opening a file that has a page size larger than the desired
page buffer cache size, the page buffer cache size will be increased
to the file's page size. This was previously an error.
The behavior when creating a file using H5Pset_page_buffer_size() is
unchanged.
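A minimal sketch of the relaxed open path, assuming an existing file named example.h5 (the file name and the 1 MiB cache size are illustrative):

    #include "hdf5.h"

    int main(void)
    {
        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);

        /* Request a 1 MiB page buffer cache; the minimum metadata and
         * raw data percentages are left at 0 (no minimums enforced). */
        if (H5Pset_page_buffer_size(fapl, 1024 * 1024, 0, 0) < 0)
            return 1;

        /* With the relaxed behavior, this open succeeds even if the file
         * was not created with the H5F_FSPACE_STRATEGY_PAGE strategy
         * (the page buffer setting is then simply ignored). */
        hid_t file = H5Fopen("example.h5", H5F_ACC_RDONLY, fapl);
        if (file >= 0)
            H5Fclose(file);
        H5Pclose(fapl);

        return 0;
    }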
Fixes GitHub issue #3382
H5PB_read previously did not account for the fact that the size of the read
it performs could overflow past the end of the page buffer, depending on the
calculated offset for the read. This has been fixed by adjusting the size of
the read when it would otherwise overflow the page.
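Illustrative only (not the library's internal code): the essence of the fix is to clamp the read size to the bytes remaining in the page:

    #include <stddef.h>

    /* Never read past the end of the current page.
     * Assumes offset_in_page <= page_size. */
    static size_t clamp_to_page(size_t page_size, size_t offset_in_page,
                                size_t req_size)
    {
        size_t remaining = page_size - offset_in_page;

        return (req_size > remaining) ? remaining : req_size;
    }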
* Remove an error check regarding large cache objects
In PR #4231 an assert() call was converted to a normal HDF5 error
check. It turns out that the original assert() was added by a
developer as a way of being alerted that large cache objects
existed instead of as a guard against incorrect behavior, making
it unnecessary in either debug or release builds.
The error check has been removed.
* Update RELEASE.txt
We previously tried removing the per-tool invocation of the Autotools
and instead simply invoked autoreconf (PR #1906). This was reverted
when it turned out that the NAG Fortran compiler had trouble with an
undecorated -shared linker flag.
It turns out that this is due to a bug in libtool 2.4.2 and earlier.
Since this version of libtool is over a decade old, we're un-reverting
the change. We've added a release note for anyone who has to build
from source on elderly platforms.
Fixes #1343
Both H5Dchunk_iter() and H5Dget_chunk_info(_by_coord)() did not take the
size of the user block into account when reporting addresses. Since the
primary use of these functions is to locate raw data in the file, this was
a significant problem.
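A minimal sketch of retrieving the (now user-block-adjusted) chunk addresses with H5Dget_chunk_info(); it assumes an already-open chunked dataset of rank 2 referenced by dset_id:

    #include "hdf5.h"
    #include <stdio.h>

    static void print_chunk_addresses(hid_t dset_id)
    {
        hsize_t nchunks = 0;

        if (H5Dget_num_chunks(dset_id, H5S_ALL, &nchunks) < 0)
            return;

        for (hsize_t i = 0; i < nchunks; i++) {
            hsize_t  offset[2];      /* logical chunk offset (rank assumed 2) */
            unsigned filter_mask;
            haddr_t  addr;           /* file address of the raw chunk data */
            hsize_t  size;           /* size of the chunk in the file, bytes */

            if (H5Dget_chunk_info(dset_id, H5S_ALL, i, offset,
                                  &filter_mask, &addr, &size) < 0)
                continue;

            /* With the fix, addr is an absolute file offset even when the
             * file has a user block. */
            printf("chunk %llu: addr=%llu size=%llu\n",
                   (unsigned long long)i, (unsigned long long)addr,
                   (unsigned long long)size);
        }
    }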
Fixes GitHub issue #3003
If the library tries to load a metadata object that is above the
library's hard-coded limits, the size will trip an assert in debug
builds. In HDF5 1.14.4, this can happen if you create a very large
number of links in an old-style group that uses local heaps.
The library will now emit a normal error when it tries to load a
metadata object that is too large.
Partially addresses GitHub #3762
* Fix issue with Subfiling VFD and multiple opens of same file
* Update H5_subfile_fid_to_context to return error value instead of ID
* Add helper routine to initialize open file mapping
Fixed some conversion issues with Clang due to problematic undefined
behavior when casting a negative floating-point value to an integer
Fixed a bug in the library's software integer to floating-point
conversion function where a user's conversion exception function
returning H5T_CONV_UNHANDLED in the case of overflows would result in
incorrect data after conversion
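For reference, a minimal sketch of a conversion exception callback that returns H5T_CONV_UNHANDLED, registered on a dataset transfer property list with H5Pset_type_conv_cb() (the helper names are illustrative):

    #include "hdf5.h"

    /* Always defer to the library's default exception handling. */
    static H5T_conv_ret_t
    conv_except_cb(H5T_conv_except_t except_type, hid_t src_id, hid_t dst_id,
                   void *src_buf, void *dst_buf, void *user_data)
    {
        (void)except_type; (void)src_id; (void)dst_id;
        (void)src_buf; (void)dst_buf; (void)user_data;
        return H5T_CONV_UNHANDLED;
    }

    /* Create a dataset transfer property list that uses the callback. */
    static hid_t make_dxpl_with_conv_cb(void)
    {
        hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);

        if (dxpl >= 0 && H5Pset_type_conv_cb(dxpl, conv_except_cb, NULL) < 0) {
            H5Pclose(dxpl);
            return H5I_INVALID_HID;
        }
        return dxpl;
    }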
Added configure checks for functions and macros related to _Float16
usage since some compilers expose the datatype but not the functions or
macros
Fixed a dt_arith test failure when H5_WANT_DCONV_EXCEPTION isn't defined
Fixed a few warnings from not explicitly casting some _Float16 variables
upwards
The reference manual states that the offset parameter of H5Soffset_simple()
can be set to NULL to reset the offset of a simple dataspace to 0. This
has never been true, and passing NULL was regarded as an error.
The library will now accept NULL for the offset parameter and will
correctly set the offset to zero.
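A minimal sketch of the now-working documented behavior:

    #include "hdf5.h"

    int main(void)
    {
        hsize_t  dims[2]   = {10, 10};
        hssize_t offset[2] = {2, 3};
        hid_t    space     = H5Screate_simple(2, dims, NULL);

        H5Soffset_simple(space, offset);  /* set a non-zero offset */
        H5Soffset_simple(space, NULL);    /* reset the offset to (0, 0) */

        H5Sclose(space);
        return 0;
    }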
Fixes HDFFV-9299
The Autotools temporarily scrub -Werror(=whatever) from CFLAGS, etc.
so configure checks don't trip over warnings generated by configure
check programs. The sed line originally only scrubbed -Werror but not
-Werror=something, which would cause errors when the '=something' was
left behind in CFLAGS.
The sed line has been updated to handle -Werror=something lines.
Fixes one issue raised in #3872
Externally visible:
* The HDF_ENABLE_LARGE_FILE option (advanced) has been removed
* We no longer run a test program to determine if LFS works, which
will help with cross-compiling
* On Linux we now unconditionally set -D_LARGEFILE_SOURCE and
-D_FILE_OFFSET_BITS=64, regardless of whether the system is 32- or 64-bit
(see the sketch below). CMake doesn't offer a nice equivalent to
AC_SYS_LARGEFILE and since those options do nothing on 64-bit systems,
this seems safe and covers all our bases. We don't set -D_LARGEFILE64_SOURCE
since we don't use any of the POSIX 64-bit specific API calls like
ftello64, as noted above.
* We didn't test for LFS support on non-Linux platforms. We've added
comments for how LFS should probably be supported on AIX and Solaris,
which seem to be alive, though uncommon. PRs would be appreciated if
anyone wishes to test this.
Internal:
* Drop off64_t size checks since off64_t is unused (as in the Autotools)
* Remove HDF_EXTRA_FLAGS, which is now unused
* Remove hack around deprecated LINUX_LFS
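As a quick illustration of what the externally visible flags do: on a 32-bit Linux system, building with -D_FILE_OFFSET_BITS=64 makes off_t 64 bits wide, which is why the explicit 64-bit interfaces such as ftello64 are unnecessary:

    #include <sys/types.h>
    #include <stdio.h>

    int main(void)
    {
        /* Prints 8 when large file support is in effect; on a 32-bit
         * system built without -D_FILE_OFFSET_BITS=64 it would print 4. */
        printf("sizeof(off_t) = %zu\n", sizeof(off_t));
        return 0;
    }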
Fixes #2395
The H5B (version 1 B-tree) package would add some computationally
expensive integrity checks when H5B_DEBUG was defined. Due to their
negative effects on performance, this option was rarely turned on,
making the H5B__assert() check function stale, if not dead, code.
This change:
* Builds H5B__assert() when NDEBUG is not defined (the function
relies on assert()) so it gets compiled more often.
* Removes some printf debugging statements in the B-tree code
* Removes all H5B "extra debug" checks that are leftover from
past debugging sessions. Maintainers can add H5B__assert()
selectively to perform integrity checks when debugging.
* Removes the HDF5_ENABLE_DEBUG_H5B CMake option
H5B_DEBUG now has no effect
* Fixed asserts due to H5Pset_est_link_info() values
If large values for est_num_entries and/or est_name_len were passed
to H5Pset_est_link_info(), the library would attempt to create an
object header NIL message to reserve enough space to hold the links in
compact form (i.e., concatenated), which could exceed allowable object
header message size limits and trip asserts in the library.
This bug only occurred when using the HDF5 1.8 file format or later and
required the product of the two values to be ~64k more than the size
of any links written to the group, which would cause the library to
write out a too-large NIL spacer message to reserve the space for the
unwritten links.
The library now inspects the phase change values to see if the group is
likely to be compact and checks the size to ensure any NIL spacer
messages won't be larger than the library allows.
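A minimal sketch of the kind of call that previously tripped the asserts (the group name and estimate values are illustrative):

    #include "hdf5.h"

    static hid_t create_group_with_estimates(hid_t file_id)
    {
        hid_t gcpl = H5Pcreate(H5P_GROUP_CREATE);

        /* Estimate 1024 links with ~64-character names (product ~64 KiB).
         * The issue only arose when the file used the 1.8+ file format. */
        H5Pset_est_link_info(gcpl, 1024, 64);

        hid_t group = H5Gcreate2(file_id, "big_links", H5P_DEFAULT, gcpl,
                                 H5P_DEFAULT);

        H5Pclose(gcpl);
        return group;
    }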
Fixes GitHub #1632
* Fix copy-paste comments
Microsoft has added a new, standards-conformant preprocessor
to MSVC, which can be enabled with /Zc:preprocessor. This
preprocessor trips over our HDopen() function-like variadic
macro since it uses a hack that only works with the legacy
MSVC preprocessor.
This fix adds ifdefs to use the correct HDopen() macro
depending on the MSVC preprocessor selected.
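An illustrative sketch (not the library's actual macro definitions) of how the preprocessor in use can be detected; MSVC defines _MSVC_TRADITIONAL as 0 when /Zc:preprocessor is active:

    #if defined(_MSC_VER) && defined(_MSVC_TRADITIONAL) && !_MSVC_TRADITIONAL
        /* Conformant preprocessor: define HDopen() using a standard
         * C99-style variadic macro. */
    #else
        /* Legacy MSVC preprocessor (or another compiler): keep the
         * existing HDopen() definition. */
    #endif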
Fixes #2515
H5Tset_fields did not account for any offset in a floating-point datatype,
causing it to fail when a datatype's precision is correctly set such that
it doesn't include the offset bits.
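Illustrative only (not the library's code): the corrected validation treats offset + precision, rather than precision alone, as the upper bound for the sign, exponent, and mantissa field positions:

    #include <stdbool.h>
    #include <stddef.h>

    static bool fields_fit(size_t offset, size_t precision,
                           size_t spos, size_t epos, size_t esize,
                           size_t mpos, size_t msize)
    {
        size_t end = offset + precision;   /* one past the highest usable bit */

        return spos >= offset && spos < end &&
               epos >= offset && epos + esize <= end &&
               mpos >= offset && mpos + msize <= end;
    }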