re: Issue
The netcdf dispatch table version was defined in several places.
Modify the build so that it only needs to be defined in CMakeLists.txt and configure.ac.
The fix entailed the following changes:
* Up the NC_DISPATCH_VERSION from 2 to 3 in configure.ac and CMakeLists.txt
* Create include/netcdf_dispatch.h.in and use it to configure include/netcdf_dispatch.h (see the sketch after this list)
* For CMake, add CMAKE_CURRENT_BINARY_DIR to the include search path so that code can locate the configured netcdf_dispatch.h
* Add entry to config.h.cmake.in for NC_DISPATCH_VERSION
* Move NCerror from include/ncdispatch.h to libdap2/nccommon.h
* Fix an API problem re nchttp.h
* Fix a conversion warning in libdispatch/dinfermodel.c
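A rough sketch of the approach, assuming hypothetical template contents and guard name (the actual include/netcdf_dispatch.h.in may differ):
```
/* include/netcdf_dispatch.h.in: configured into netcdf_dispatch.h so that
   the dispatch version is defined only in CMakeLists.txt / configure.ac. */
#ifndef NETCDF_DISPATCH_H
#define NETCDF_DISPATCH_H

/* Value substituted at configure time */
#define NC_DISPATCH_VERSION @NC_DISPATCH_VERSION@

#endif /* NETCDF_DISPATCH_H */
```
With CMake the configured header presumably lands in CMAKE_CURRENT_BINARY_DIR, which is why that directory is added to the include search path.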
Changes:
1. add "use_nczarr: [ nczarr_off, nczarr_on ]" to matrix
2. add libzip-dev to the apt installs (might need caching).
3. convert the deprecated "--enable-netcdf-4" option to "--enable-hdf5" (also for CMake)
re: esupport 31942
* Add a new set of remote tests based on the OPeNDAP Hyrax server
* Re-enable --enable-dap-remote-tests so that it is on by default
* Make it possible to query the DMR separately from the DAP data
so that reading the data can be delayed until it is actually
requested. This makes operations like ncdump -h much more efficient
(see the sketch after this list).
* Fix handling of <Map>s in DMRs.
* Fix some memory leaks
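A minimal sketch of the access pattern this enables, with a hypothetical URL and abbreviated error handling:
```
#include <stdio.h>
#include <netcdf.h>

int main(void)
{
    /* Hypothetical DAP4 dataset URL */
    const char* url = "dap4://example.org/opendap/data/example.nc";
    int ncid, ndims, nvars, natts, unlimdimid, stat;

    /* Opening now only fetches the DMR (metadata); no variable data
       is transferred until it is actually read. */
    if ((stat = nc_open(url, NC_NOWRITE, &ncid))) return stat;
    if ((stat = nc_inq(ncid, &ndims, &nvars, &natts, &unlimdimid))) return stat;
    printf("dims=%d vars=%d atts=%d\n", ndims, nvars, natts);
    /* ncdump -h stops here, so it never pays for the data transfer. */
    return nc_close(ncid);
}
```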
re: https://github.com/Unidata/netcdf-c/issues/1614
NetCDF requires that the type of a _FillValue attribute
be the same as the type of the containing variable. However, it is clear
that too many servers do not honor this requirement, so change
the default for DAP2 and DAP4 to allow a fill-value type mismatch.
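For illustration only (this is not the library's internal check), the requirement being relaxed amounts to something like the following sketch:
```
#include <netcdf.h>

/* Return nonzero if the _FillValue attribute (when present) has the
   same type as its containing variable; illustrative sketch only. */
static int fillvalue_type_matches(int ncid, int varid)
{
    nc_type vtype, atype;
    size_t alen;
    if (nc_inq_vartype(ncid, varid, &vtype)) return 0;
    if (nc_inq_att(ncid, varid, "_FillValue", &atype, &alen))
        return 1; /* no _FillValue attribute: nothing to check */
    return (atype == vtype && alen == 1);
}
```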
* Replace the wholevar optimization with the more useful wholechunk optimization
* Add an optimization to read multiple values at one time
* Replace NCDEFAULT_get/put_vars with native nczarr versions.
* Clarify the chunk projection computations (see the sketch after this list)
* zdebdispatch.h
* Add more chunking test cases and re-enable run_chunkcases
* If szip is not enabled, then suppress the deflate interference test
* Make H5Znoop(1) filter produce more information
* Clean up the bzlib.c API
* Turn back on the DAP2 remote tests, but leave DAP4 remote tests disabled.
* Turn on selected non-remote DAP4 tests
* Replace d4crc32 with the equivalents in libdispatch
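As background for the chunk projection item above, here is an illustrative computation (not the nczarr implementation) of which chunks a one-dimensional request touches and which slice of each chunk it needs:
```
#include <stdio.h>

/* For a request [start, start+count) along a dimension chunked into
   pieces of length chunklen, list the chunk indices touched and the
   local offset range needed within each chunk. */
static void project_dim(size_t start, size_t count, size_t chunklen)
{
    size_t first = start / chunklen;
    size_t last = (start + count - 1) / chunklen;
    for (size_t c = first; c <= last; c++) {
        size_t lo = (c == first) ? start - c * chunklen : 0;
        size_t hi = (c == last) ? (start + count - 1) - c * chunklen : chunklen - 1;
        printf("chunk %zu: local offsets [%zu..%zu]\n", c, lo, hi);
    }
}

int main(void)
{
    project_dim(5, 10, 4); /* elements 5..14 touch chunks 1..3 */
    return 0;
}
```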
re: https://github.com/Unidata/netcdf-c/issues/1923
re: https://github.com/Unidata/netcdf-c/issues/1921
An issue was raised about the order of the filter ids returned
by nc_inq_var_filter_ids() when creating a file as opposed
to later reading the file.
For creation, the order is the same as the order in which the
calls to nc_def_var_filter() occur.
However, after the file is closed and then reopened for reading,
the question was whether the returned order is the same or the reverse.
In fact, the order is the same in both cases.
This PR extends the existing filter order testcase to check the create
versus read orders (see the sketch below). This also required changing
the H5Znoop(1) filters in the plugins directory.
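A condensed sketch of the kind of check involved; the filter ids 40000 and 40001 are placeholders for the two noop plugin ids, the corresponding plugins must be findable (e.g. via HDF5_PLUGIN_PATH), and error checking is omitted:
```
#include <stdio.h>
#include <netcdf.h>
#include <netcdf_filter.h>

int main(void)
{
    int ncid, dimid, varid;
    size_t chunks[1] = {4};
    size_t nfilters, i;
    unsigned int ids[2];

    nc_create("order.nc", NC_NETCDF4 | NC_CLOBBER, &ncid);
    nc_def_dim(ncid, "d", 8, &dimid);
    nc_def_var(ncid, "v", NC_INT, 1, &dimid, &varid);
    nc_def_var_chunking(ncid, varid, NC_CHUNKED, chunks);
    nc_def_var_filter(ncid, varid, 40000, 0, NULL); /* defined first */
    nc_def_var_filter(ncid, varid, 40001, 0, NULL); /* defined second */
    nc_close(ncid);

    /* Reopen for reading; the ids should come back in definition order. */
    nc_open("order.nc", NC_NOWRITE, &ncid);
    nc_inq_varid(ncid, "v", &varid);
    nc_inq_var_filter_ids(ncid, varid, &nfilters, ids);
    for (i = 0; i < nfilters; i++)
        printf("filter[%zu] = %u\n", i, ids[i]);
    nc_close(ncid);
    return 0;
}
```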
Misc. Unrelated Changes
1. Fix calls to fdopen under Windows (see the sketch after this list)
2. Temporarily suppress the nczarr_tests/run_chunkcases test
since it seems to be causing problems with GitHub Actions.
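The usual portability shim for item 1 looks roughly like this (the wrapper name NC_fdopen is hypothetical; the actual change may differ):
```
#include <stdio.h>

/* On Windows the POSIX name fdopen is deprecated in favor of _fdopen,
   so calls are routed through a single wrapper macro. Illustrative only. */
#ifdef _WIN32
#define NC_fdopen(fd, mode) _fdopen((fd), (mode))
#else
#define NC_fdopen(fd, mode) fdopen((fd), (mode))
#endif
```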
If the user is opening an existing file for appending (NC_WRITE) in parallel and the file is in CDF5 format, `NC_omodeinfer()` sets `model->impl` to `NC_FORMATX_PNETCDF` (see the lines following the `done:` label in that routine, which set `impl` when `useparallel` is true).
That setting is then overwritten when `NC_interpret_magic_number()` is called, which resets `model->impl` to `NC_FORMATX_NC3`. This can (and did) cause problems with parallel output, since the `NC3` format does not correctly handle parallel writing but `PNETCDF` does.
Not sure if this is the best place for the test, but it did fix the parallel write issues I was seeing...
If you need more details on what is happening, let me know. But a higher-level restatement is that I was calling `nc_open_par` with `NC_WRITE | NC_64BIT_DATA` mode on an existing file whose magic number is `CDF5`. However, the dispatcher was being set to `NC3_dispatch_table` instead of `NCP_dispatch_table`, which is the dispatcher that had been chosen when the file being appended to was originally created (see the sketch below).
I was then getting zeroes in the data being written to the variables, since NC3 was not correctly handling multiple MPI ranks writing to different parts of the same variable...
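A sketch of the failing scenario described above; the file name and variable are hypothetical and error checking is omitted:
```
#include <mpi.h>
#include <netcdf.h>
#include <netcdf_par.h>

int main(int argc, char **argv)
{
    int ncid, varid, rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Append to an existing CDF5 file in parallel. Before the fix this
       open could be routed to the NC3 dispatcher instead of PnetCDF. */
    nc_open_par("existing_cdf5.nc", NC_WRITE | NC_64BIT_DATA,
                MPI_COMM_WORLD, MPI_INFO_NULL, &ncid);
    nc_inq_varid(ncid, "v", &varid);
    /* ... each rank (using `rank`) writes its own slice of "v" here ... */
    nc_close(ncid);
    MPI_Finalize();
    return 0;
}
```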
The `NC_HAS_MULTIFILTERS` line in `netcdf_meta.h` was not of the correct form; it appeared as
```
#define NC_HAS_MULTIFILTERS
```
instead of the correct form:
```
#define NC_HAS_MULTIFILTERS 0   /* or 1 if enabled */
```