re: issue https://github.com/Unidata/netcdf-c/issues/1687
Static functions are being used before their declarations, which
causes compile errors. This only occurs when BIG_ENDIAN is defined.
The solution is to add the missing forward declarations.
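A minimal sketch of the pattern (all names hypothetical): a static
helper referenced inside a #ifdef BIG_ENDIAN block before its
definition needs a forward declaration to compile.
````
/* Sketch (names hypothetical): a static helper used before its
 * definition fails to compile when BIG_ENDIAN is defined, unless a
 * forward declaration is supplied. */

/* Forward declaration added by the fix. */
static void byteswap_params(unsigned int* params, int nparams);

static int
process_params(unsigned int* params, int nparams)
{
#ifdef BIG_ENDIAN
    byteswap_params(params, nparams); /* used before its definition below */
#endif
    return 0;
}

static void
byteswap_params(unsigned int* params, int nparams)
{
    int i;
    for (i = 0; i < nparams; i++) {
        unsigned int v = params[i];
        params[i] = ((v & 0xff) << 24) | ((v & 0xff00) << 8)
                  | ((v >> 8) & 0xff00) | (v >> 24);
    }
}
````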
re: https://github.com/Unidata/netcdf-c/issues/1584
Support has been added for multiple filters per variable. This
affects a number of components in netcdf. The new APIs are
documented in NUG/filters.md.
The primary changes are:
* A set of new functions is provided (see __include/netcdf_filter.h__;
a usage sketch follows this list).
- Obtain a list of the filters associated with a variable
- Obtain the parameters for a specific filter.
* The existing __nc_inq_var_filter__ function now returns info
about the first defined filter.
* The utilities (ncgen, ncdump, and nccopy) now support
an extended format for specifying a sequence of filters.
The general form is __<filter>|<filter>...__.
* The ncdump **_Filter** attribute now dumps the list of all the
filters associated with a variable, using the new format above.
* Filter specifications can now use a filter name instead of a number
for filters known to the netcdf library; the name-to-id mapping is
taken from the HDF5 filter registration page.
* New errors are defined: NC_EFILTER and NC_ENOFILTER. The latter
is returned if an attempt is made to access an unknown filter.
* Internally, the dispatch table has been extended with a function
that handles all of the filter operations.
* New, filter-related, tests were added to nc_test4.
* A new plugin was added to the plugins directory to help with testing.
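As a usage sketch (function names and signatures assumed from the new
__include/netcdf_filter.h__; check the installed header), listing the
filters on a variable might look like this:
````
/* Sketch: list all filters on a variable via the new multi-filter
 * API. Names/signatures assumed from include/netcdf_filter.h. */
#include <stdio.h>
#include <stdlib.h>
#include <netcdf.h>
#include <netcdf_filter.h>

static int
show_filters(int ncid, int varid)
{
    size_t i, nfilters = 0;
    unsigned int* ids = NULL;
    /* First call obtains the count (a NULL id vector is assumed to
       be allowed); the second call fills the vector. */
    if (nc_inq_var_filter_ids(ncid, varid, &nfilters, NULL)) return 1;
    if (nfilters == 0) return 0; /* no filters on this variable */
    if ((ids = malloc(nfilters * sizeof(unsigned int))) == NULL) return 1;
    if (nc_inq_var_filter_ids(ncid, varid, &nfilters, ids)) return 1;
    for (i = 0; i < nfilters; i++) {
        size_t nparams = 0;
        /* Returns NC_ENOFILTER if the id is not defined for varid. */
        if (nc_inq_var_filter_info(ncid, varid, ids[i], &nparams, NULL))
            return 1;
        printf("filter %u: %zu parameters\n", ids[i], nparams);
    }
    free(ids);
    return 0;
}
````
On the ncgen/nccopy side, a two-filter chain would be written with the
new extended format, e.g. "307,9|32015,3" (bzip2 level 9 followed by
zstandard level 3, using their registered HDF5 ids).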
Notes:
1. The shuffle and fletcher32 filters are not part of the multifilter system.
Misc. changes:
1. A debug module was added to libhdf5 to help catch error locations.
* For URL paths, the new approach essentially centralizes all information
in the URL into the "#mode=" fragment key and uses that value
to determine the dispatcher for (most) URLs.
* The new approach has the following steps (see the sketch after this list):
1. Canonicalize the path if it is a URL.
2. Use the mode= fragment key to determine the dispatcher.
3. If the dispatcher is still not determined, use the mode flags
argument to nc_open/nc_create to determine it.
4. If the path points to something readable, attempt to read the
magic number at the front and use that to determine the dispatcher;
this case may override all previous cases.
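For illustration (the host and path below are hypothetical):
````
/* Sketch: the #mode= fragment in the URL selects the dispatcher.
 * The host and path are hypothetical. */
#include <netcdf.h>

int
open_examples(void)
{
    int ncid, stat;
    /* The fragment selects the DAP4 dispatcher directly (step 2). */
    stat = nc_open("https://server.example.com/data.nc#mode=dap4",
                   NC_NOWRITE, &ncid);
    if (stat == NC_NOERR) nc_close(ncid);
    /* Without a usable fragment, steps 3-4 (mode flags, then the
       magic number) decide the dispatcher. */
    stat = nc_open("https://server.example.com/data.nc",
                   NC_NOWRITE, &ncid);
    return stat;
}
````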
* Misc changes.
1. Update documentation
2. Moved some unit tests from libdispatch to unit_test directory.
3. Fixed use of the wrong #ifdef macro in test_filter_reg.c
[I think this may fix a previously reported esupport query].
Priority: Low
re: issue https://github.com/Unidata/netcdf-c/issues/1329
HDF5 has the ability to define new filters programmatically,
as opposed to loading them via the HDF5_PLUGIN_PATH environment
variable. This PR adds support for that feature.
It is not clear how useful this is, though.
See docs/filters.md for details.
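A minimal sketch of what programmatic definition looks like on the
HDF5 side (the filter id, name, and callback below are hypothetical;
see docs/filters.md for how this ties into netcdf):
````
/* Sketch: define a filter programmatically with H5Zregister instead
 * of loading it via HDF5_PLUGIN_PATH. Filter id/name hypothetical. */
#include <hdf5.h>

static size_t
myfilter(unsigned int flags, size_t cd_nelmts,
         const unsigned int cd_values[], size_t nbytes,
         size_t* buf_size, void** buf)
{
    /* Pass-through: a real filter would encode/decode here. */
    (void)flags; (void)cd_nelmts; (void)cd_values;
    (void)buf_size; (void)buf;
    return nbytes;
}

static const H5Z_class2_t MYFILTER_CLASS = {
    H5Z_CLASS_T_VERS,     /* struct version */
    (H5Z_filter_t)32768,  /* hypothetical id from the testing range */
    1, 1,                 /* encoder and decoder are present */
    "myfilter",           /* name */
    NULL,                 /* can_apply callback (none) */
    NULL,                 /* set_local callback (none) */
    myfilter              /* the filter callback itself */
};

int
register_myfilter(void)
{
    /* Call once before the filter is first used. */
    return (H5Zregister(&MYFILTER_CLASS) < 0) ? -1 : 0;
}
````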
re: https://github.com/Unidata/netcdf-c/issues/1347
It turns out that the plugin libraries (bzip2 and misc) were
being installed as part of 'make install'. This was not intended
behavior. But after some discussion in the above issue, it was decided
to install the bzip2 plugin. However, in order to avoid naming conflicts,
the plugin is installed under the name 'libh5bzip2.so'.
Note that this is automake behavior only; the install does not
(yet) occur when using cmake.
Misc. unrelated changes
-----------------------
1. Turned off some debug output in ncdump/Makefile.am.
re: issue https://github.com/Unidata/netcdf-c/issues/1278
re: issue https://github.com/Unidata/netcdf-c/issues/876
re: issue https://github.com/Unidata/netcdf-c/issues/806
* Major change to the handling of 8-byte parameters for nc_def_var_filter.
The old code was not well thought out.
* The new algorithm is documented in docs/filters.md (a sketch of
the idea follows this list).
* Added a new utility file, plugins/H5Zutil.c, to support
the new algorithm.
* Modified plugins/H5Zmisc.c to use the new algorithm.
* Renamed include/ncfilter.h to include/netcdf_filter.h
and made it an installed header so clients can access the
new algorithm utility.
* Fixed nc_test4/tst_filterparser.c and nc_test4/test_filter_misc.c
to use the new algorithm.
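The normative description lives in docs/filters.md; the following is
only a sketch of the general idea, assuming the autoconf
WORDS_BIGENDIAN macro and a hypothetical helper name: an 8-byte
parameter is passed as two consecutive 4-byte parameters, with a word
swap on big-endian hosts so the vector has a fixed layout.
````
/* Sketch only: split a 64-bit filter parameter into two 32-bit
 * params. The normative encoding is in docs/filters.md; the helper
 * name and WORDS_BIGENDIAN macro are assumptions. */
#include <stdint.h>
#include <string.h>

static void
pack_param64(uint64_t value, unsigned int params[2])
{
    unsigned int words[2];
    memcpy(words, &value, sizeof(value)); /* native word order */
#ifdef WORDS_BIGENDIAN
    /* Swap the two 4-byte words so the parameter vector looks the
       same regardless of host endianness. */
    { unsigned int tmp = words[0]; words[0] = words[1]; words[1] = tmp; }
#endif
    params[0] = words[0];
    params[1] = words[1];
}
````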
* libdap4/ fixes:
* d4swap.c had an error in the endian pre-processing such
that record counts were not being swapped correctly.
* d4data.c had an error in that checksums were being computed
after endian swapping rather than before (see the sketch after
this list).
* ocinitialize() was never being called, so the xxdr big-endian
handling was never set correctly.
* Required adding debug statements to occompile
* Found and fixed memory leak in ncdump.c
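A sketch of the corrected ordering (both helper functions below are
hypothetical stand-ins): the checksum covers the bytes exactly as
transmitted, so it must be computed before any endian swap.
````
/* Sketch (helper names hypothetical): checksum the wire bytes first,
 * then byte-swap to native order. Computing the checksum after the
 * swap was the bug being fixed here. */
#include <stddef.h>
#include <stdint.h>

extern uint32_t crc32_update(uint32_t crc, const void* buf, size_t len);
extern void swap_to_native(void* buf, size_t len);

static uint32_t
checksum_then_swap(void* buf, size_t len, uint32_t crc, int needswap)
{
    crc = crc32_update(crc, buf, len); /* 1. checksum as transmitted */
    if (needswap)
        swap_to_native(buf, len);      /* 2. then convert byte order */
    return crc;
}
````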
Not tested:
* HDF4
* Pnetcdf
* parallel HDF5
re: pull request https://github.com/Unidata/netcdf-c/pull/1219
This PR should go in after, or at the same time as, 1219.
It updates nc_test/tst_inmemory to add a test covering
the case fixed in 1219. It also disables a couple of tests
that are no longer relevant.
re: issue https://github.com/Unidata/netcdf-c/issues/1156
Starting with HDF5 version 1.10.x, plugin code MUST be
careful when using the standard *malloc()*, *realloc()*, and
*free()* functions.
In the event that the code is allocating, reallocating, or
freeing memory that either came from -- or will be exported to --
the calling HDF5 library, one MUST use the corresponding
HDF5 functions *H5allocate_memory()*, *H5resize_memory()*, and
*H5free_memory()* [5] to avoid memory failures.
Additionally, if your filter code leaks memory, the HDF5 library
generates a failure like the following:
````
H5MM.c:232: H5MM_final_sanity_check: Assertion `0 == H5MM_curr_alloc_bytes_s' failed.
````
This PR modifies the code in the plugins directory to
conform to these new requirements.
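As a sketch of the required pattern inside a filter callback (the
encode/decode work is elided): any buffer received from, or returned
to, the HDF5 library must go through the HDF5 memory functions.
````
/* Sketch: replace the data buffer inside a filter callback using
 * H5allocate_memory/H5free_memory, so the library's allocation
 * accounting stays consistent. Encode/decode work is elided. */
#include <hdf5.h>

static size_t
myfilter(unsigned int flags, size_t cd_nelmts,
         const unsigned int cd_values[], size_t nbytes,
         size_t* buf_size, void** buf)
{
    void* outbuf;
    size_t outbytes = nbytes; /* a real filter computes the output size */
    (void)flags; (void)cd_nelmts; (void)cd_values;

    if ((outbuf = H5allocate_memory(outbytes, 0)) == NULL)
        return 0; /* returning 0 signals failure to HDF5 */
    /* ... encode or decode from *buf into outbuf here ... */

    H5free_memory(*buf); /* the old buffer came from the HDF5 allocator */
    *buf = outbuf;
    *buf_size = outbytes;
    return outbytes;
}
````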
This raises a question about the libhdf5 code, where this
same problem may occur. We especially need to scan nc4hdf.c
for this problem.
I took Ed's advice and moved the plugin code to its own
top-level directory. This is an attempt to solve the
file-copying problem we have experienced. In any case, it will
serve as a place to put additional plugins.