Mirror of https://github.com/Unidata/netcdf-c.git
Commit d07c05b58f
re: issue https://github.com/Unidata/netcdf-c/issues/1156

Starting with HDF5 version 1.10.x, plugin code MUST be careful when using the standard *malloc()*, *realloc()*, and *free()* functions. Whenever the code allocates, reallocates, or frees memory that either came from -- or will be exported to -- the calling HDF5 library, it MUST use the corresponding HDF5 functions *H5allocate_memory()*, *H5resize_memory()*, and *H5free_memory()* [5] to avoid memory failures.

Additionally, if your filter code leaks memory, the HDF5 library generates a failure like the following.

````
H5MM.c:232: H5MM_final_sanity_check: Assertion `0 == H5MM_curr_alloc_bytes_s' failed.
````

This PR modifies the code in the plugins directory to conform to these new requirements.

This raises a question about the libhdf5 code, where the same problem may occur. We need to scan nc4hdf.c in particular for this problem.
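To illustrate the rule, here is a minimal sketch (not the actual netcdf-c plugin code) of an HDF5 filter callback that manages buffers crossing the library boundary with *H5allocate_memory()* and *H5free_memory()* instead of *malloc()*/*free()*. The name `example_filter` and its pass-through copy step are hypothetical; only the `H5Z_func_t` callback shape and the HDF5 memory functions come from the library itself.

````
#include <string.h>
#include <hdf5.h>

/* Sketch of an H5Z_func_t-style filter callback. */
static size_t
example_filter(unsigned int flags, size_t cd_nelmts,
               const unsigned int cd_values[],
               size_t nbytes, size_t *buf_size, void **buf)
{
    void *outbuf = NULL;

    (void)flags; (void)cd_nelmts; (void)cd_values;

    /* Any buffer handed back to HDF5 must come from the HDF5 allocator,
       because the library will later free it itself. */
    outbuf = H5allocate_memory(nbytes, 0 /* do not zero-fill */);
    if (outbuf == NULL)
        return 0; /* returning 0 signals failure to the filter pipeline */

    /* A real filter would compress or decompress here; this sketch copies. */
    memcpy(outbuf, *buf, nbytes);

    /* The incoming buffer was allocated by HDF5, so release it with
       H5free_memory(), never plain free(). */
    H5free_memory(*buf);

    *buf = outbuf;
    *buf_size = nbytes;
    return nbytes; /* number of valid bytes in the returned buffer */
}
````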
Files in the plugins directory:

- blocksort.c
- bzlib_private.h
- bzlib.c
- bzlib.h
- CMakeLists.txt
- compress.c
- crctable.c
- decompress.c
- h5bzip2.h
- h5misc.h
- H5Zbzip2.c
- H5Zmisc.c
- H5Ztemplate.c
- huffman.c
- Makefile.am
- randtable.c