re: Partly addresses issue https://github.com/Unidata/netcdf-c/issues/1712.
2. Turn on the Hyrax hack to accept Hyrax-style attribute containers.
3. Support the Url type as an alias for String.
4. Accept the special attribute "_DAP4_Checksum_CRC32"
to control per-variable checksums.
5. Make the _DAP4_xxx attributes reserved and accessible only
by name, as with the _SuperBlock attribute (see the sketch below).
5. Fix handling of checksums. There is a hack in the code
that uses an extra flag in the chunk header to indicate
that all variables have checksums. This violates the spec
and will be removed once it is possible to regenerate the
test cases.
Note that checksumming against the Hyrax test server has not
been tested. This, along with some other probable inconsistencies,
needs fixing once OPeNDAP and Unidata agree on the proper
specification. Testing will be included.
Fix Issue https://github.com/Unidata/netcdf-c/issues/1725.
Replace PR https://github.com/Unidata/netcdf-c/pull/1726
Also replace PR https://github.com/Unidata/netcdf-c/pull/1694
The general problem is that under Visual Studio we are seeing
a number of undefined-reference and other scoping errors.
The reason is that the code is not properly using Visual Studio
__declspec() declarations.
The basic solution is to ensure that __declspec(dllexport) is
used when the code itself is compiled. There are several sets
of macros to handle this, but they all rely on the flag
DLL_EXPORT being defined when the code is compiled, but not
being defined when the code is used via a .h file.
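For reference, a minimal sketch of the macro pattern described above; the macro name MYLIB_EXPORT is hypothetical (the exact identifiers in netcdf-c differ), but DLL_EXPORT plays the role shown here:
```c
/* Sketch of the usual export-macro pattern (macro name hypothetical).
 * DLL_EXPORT is defined only while compiling the library itself,
 * never when a consumer includes the header. */
#if defined(_MSC_VER)
#  if defined(DLL_EXPORT)
#    define MYLIB_EXPORT __declspec(dllexport)  /* building the DLL */
#  else
#    define MYLIB_EXPORT __declspec(dllimport)  /* using the DLL */
#  endif
#else
#  define MYLIB_EXPORT
#endif

MYLIB_EXPORT int mylib_function(void);  /* hypothetical exported symbol */
```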
As a test, I modified XGetOpt.c to build properly. I also
fixed the oc2 library to properly __declspec-declare things like
ocdebug. I also made some misc. changes to get all the tests
to run when Cygwin is installed (to get bash, sed, etc.).
Misc. Changes:
* Put XGetOpt.c into libsrc and copy it at build time
to the other directories where it is needed.
The current ncgen does not properly handle very large
data sections. Apparently this case is uncommon, because
it was only discovered while testing the new zarr code.
The fix required a new approach to processing data sections.
Unfortunately, the resulting ncgen is slower than before,
but at least it is, I think, now correct.
The added test cases are in libnczarr, and so will
not show up until that is incorporated into master.
Note also that Fortran code generation changed, but
has not been tested here.
Misc. Changes:
1. Clean up error handling in ncgen -lc and -lb output
2. Clean up the ncgen Makefiles to remove unused code
3. Added a program, ncgen/ncdumpchunks, to print
the data for a .nc file in per-chunk format.
4. Made the XGetOpt change in PR https://github.com/Unidata/netcdf-c/pull/1694
for ncdump/ncvalidator.
Enables ncdump -t (-i) to recognize a wider variety of time-related units
and calendar names. This brings ncdump closer to what it advertises in its
man page regarding its understanding of udunits-compliant time units.
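For context, a short sketch of the kind of udunits-style time metadata involved; the file, variable, and attribute values below are illustrative, not taken from the ncdump tests:
```c
/* Illustrative sketch (hypothetical names and values): the "units"
 * and "calendar" attributes below are the kind of time metadata
 * that ncdump -t interprets when printing values. */
#include <string.h>
#include <netcdf.h>

int main(void) {
    int ncid, dimid, varid;
    const char *units = "hours since 2000-01-01 00:00:00";
    const char *calendar = "proleptic_gregorian";

    nc_create("time_demo.nc", NC_CLOBBER, &ncid);
    nc_def_dim(ncid, "time", NC_UNLIMITED, &dimid);
    nc_def_var(ncid, "time", NC_DOUBLE, 1, &dimid, &varid);
    nc_put_att_text(ncid, varid, "units", strlen(units), units);
    nc_put_att_text(ncid, varid, "calendar", strlen(calendar), calendar);
    nc_enddef(ncid);
    nc_close(ncid);
    return 0;
}
```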
These changes seem to be correct for the generation of `netcdf_meta.h` in a CMake build. I think this addresses #1723.
There is still an issue with the libnetcdf.settings file.
The `error4.c` file is already listed in `libsrc4_SOURCES`, so it does not need to be added again later. The file's contents are compiled only if `LOGGING` is defined, so it is safe to always compile it.
The `HDF5_HAS_PAR_FILTERS` and `HAS_PAR_FILTERS` variables are currently set purely based on `HDF5_VERSION`, and that variable is only set inside the if block that finds the HDF5 library via CMake package files. If the user specifies the explicit location of the HDF5 library and include files, for example via:
```
-DHDF5_C_LIBRARY:PATH=${INSTALL_PATH}/lib/libhdf5.${LD_EXT} \
-DHDF5_HL_LIBRARY:PATH=${INSTALL_PATH}/lib/libhdf5_hl.${LD_EXT} \
-DHDF5_INCLUDE_DIR:PATH=${INSTALL_PATH}/include
```
then the code path that sets the par-filters variables is not run. However, later in the file there is another check for parallel filter support (near line 759):
```
# Check to see if this is hdf5-1.10.3 or later.
CHECK_LIBRARY_EXISTS(${HDF5_C_LIBRARY_hdf5} H5Dread_chunk "" HDF5_SUPPORTS_PAR_FILTERS)
```
This PR moves the code that sets the other two par-filters variables to after this check and bases their values on the result of this test rather than on the version.
I'm not totally sure why there are three variables; it looks like `HDF5_SUPPORTS_PAR_FILTERS` and `HDF5_HAS_PAR_FILTERS` could be combined. I think `HAS_PAR_FILTERS` is a string used to show the results of the configuration, while the other two are booleans.
The new check should work for both types of HDF5 installs (cmake-based and configure-based).
re: Github issue https://github.com/Unidata/netcdf-c/issues/1713
If nc_def_var_filter or nc_def_var_deflate or nc_def_var_szip is
called multiple times with the same filter id, but possibly with
different sets of parameters, then the first invocation is
sticky and later invocations are ignored. The desired behavior
is to have the last invocation be used.
This PR implements that desired behavior, with some special
cases. If you call nc_def_var_deflate multiple times, then the
last-invocation rule applies with respect to deflate. However,
the shuffle filter, if enabled, is always applied just before
deflate.
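A minimal sketch of the last-invocation behavior described above; the file and variable names are hypothetical:
```c
/* Sketch of last-invocation-wins for nc_def_var_deflate
 * (file and variable names hypothetical). */
#include <netcdf.h>

int main(void) {
    int ncid, dimid, varid;
    nc_create("demo.nc", NC_NETCDF4 | NC_CLOBBER, &ncid);
    nc_def_dim(ncid, "x", 1024, &dimid);
    nc_def_var(ncid, "v", NC_INT, 1, &dimid, &varid);

    /* First invocation: no shuffle, deflate level 1. */
    nc_def_var_deflate(ncid, varid, 0, 1, 1);
    /* Second invocation wins: shuffle on, deflate level 9.
     * Shuffle, when enabled, is still applied just before deflate. */
    nc_def_var_deflate(ncid, varid, 1, 1, 9);

    nc_close(ncid);
    return 0;
}
```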
Misc unrelated changes:
1. Disable client-side filters by default
2. Fix the definition of uintptr_t and its use in oc2 and libdap4
3. Add some test cases
4. Modify the filter-order tests to use plugin filters rather
than client-side filters