These changes seem to be correct for the generation of `netcdf_meta.h` in a CMake build. I think this addresses #1723.
There is still an issue with the `libnetcdf.settings` file.
The `error4.c` file is already listed in `libsrc4_SOURCES`, so it does not need to be added again later. The file's contents are only compiled if `LOGGING` is defined, so it is OK to always compile it.
The `HDF5_HAS_PAR_FILTERS` and `HAS_PAR_FILTERS` variables are currently set purely based on `HDF5_VERSION`, and that variable is only set inside the `if` block that finds the HDF5 library via its CMake package files. If the user specifies the explicit location of the HDF5 library and include files (for example, via:
```
-DHDF5_C_LIBRARY:PATH=${INSTALL_PATH}/lib/libhdf5.${LD_EXT} \
-DHDF5_HL_LIBRARY:PATH=${INSTALL_PATH}/lib/libhdf5_hl.${LD_EXT} \
-DHDF5_INCLUDE_DIR:PATH=${INSTALL_PATH}/include
```
then the code path that determines whether the par-filters variables are set is not run. However, later in the file there is another check for parallel filter support (near line 759):
```
# Check to see if this is hdf5-1.10.3 or later.
CHECK_LIBRARY_EXISTS(${HDF5_C_LIBRARY_hdf5} H5Dread_chunk "" HDF5_SUPPORTS_PAR_FILTERS)
```
This PR moves the code that sets the other two par-filters variables down after this check and, instead of setting their values based on the version, bases them on the result of this test.
I'm not totally sure why there are three variables; it looks like `HDF5_SUPPORTS_PAR_FILTERS` and `HDF5_HAS_PAR_FILTERS` could be combined. I think `HAS_PAR_FILTERS` is a string used to show the results of the configuration, and the other two are booleans.
The new check should work for both types of HDF5 installs (cmake-based and configure-based).
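For reference, a minimal sketch of what the symbol probe amounts to: `CHECK_LIBRARY_EXISTS` compiles and links a tiny C program along these lines against `libhdf5`, so the result depends only on whether `H5Dread_chunk` is present in the installed library, not on how HDF5 was built or what its version string says.
```
/* Dummy declaration: only the presence of the H5Dread_chunk symbol in
   libhdf5 matters at link time; the program is linked, not run. */
char H5Dread_chunk(void);

int main(void)
{
    return (int)H5Dread_chunk();
}
```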
re: GitHub issue https://github.com/Unidata/netcdf-c/issues/1713
If nc_def_var_filter or nc_def_var_deflate or nc_def_var_szip is
called multiple times with the same filter id, but possibly with
different sets of parameters, then the first invocation is
sticky and later invocations are ignored. The desired behavior
is to have the last invocation be used.
This PR implements that desired behavior, with some special
cases. If you call nc_def_var_deflate multiple times, then the
last invocation rule applies with respect to deflate. However,
the shuffle filter, if enabled, is always applied just before
applying deflate.
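A minimal illustration of the intended behavior (this is not one of the PR's test cases; the file and variable names here are made up):
```
#include <stdio.h>
#include <netcdf.h>

#define CHECK(e) do { int s = (e); if (s != NC_NOERR) { \
    fprintf(stderr, "%s\n", nc_strerror(s)); return 1; } } while (0)

int main(void)
{
    int ncid, dimid, varid;
    CHECK(nc_create("tst_last_deflate.nc", NC_NETCDF4 | NC_CLOBBER, &ncid));
    CHECK(nc_def_dim(ncid, "x", 1024, &dimid));
    CHECK(nc_def_var(ncid, "v", NC_FLOAT, 1, &dimid, &varid));

    /* First definition: no shuffle, deflate level 1. */
    CHECK(nc_def_var_deflate(ncid, varid, 0, 1, 1));
    /* Last definition wins: shuffle on and deflate level 9 are used,
       with shuffle applied just before deflate. */
    CHECK(nc_def_var_deflate(ncid, varid, 1, 1, 9));

    CHECK(nc_close(ncid));
    return 0;
}
```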
Misc unrelated changes:
1. Disable client-side filters by default
2. Fix the definition of uintptr_t and its use in oc2 and libdap4
3. Add some test cases
4. Modify the filter order tests to use plugin filters rather than client-side filters
The current library seems to have some behavior that is N^2 in the number of vars in a file.
The `NC4_inq_dim` routine calls down to `nc4_find_dim_len`, which iterates over every `var` in the file/group, calls `find_var_dim_max_length` on each var, and takes the largest length of the dim across those vars. This is done only for unlimited dimensions.
I have a file with 129 dims and 1630 vars. The unlimited dimension has length 41. In my test program, I am reading data from 4 files that have the same dim and var counts, reading every 4th time step (unlimited dimension). If I run a profile, I see that 98.2% of the program time is in the `nc_get_vara_float` call tree, and most of that is in `find_var_dim_max_length` (94.8%).
There are 66,142 calls to `nc_get_vara_float`, resulting in 107,307,290 calls to `find_var_dim_max_length`, with twice that number of calls to `malloc`/`free` plus calls to 5 HDF5 routines. All of this, at least in my case, to return the same `41` each time.
The proof-of-concept patch here checks whether the file is read-only (or no_write) and, if so, caches the value of the dim length the first time it is calculated. With this change, my example run is sped up by a factor of 60; the time for `NC4_inq_dim` and below drops from 97.2% down to 2.7%.
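A minimal sketch of the caching idea, using made-up struct and function names (the real patch touches the `nc4_find_dim_len`/`find_var_dim_max_length` path and the library's internal metadata structs):
```
#include <stddef.h>

/* Hypothetical, simplified stand-in for the library's dim metadata. */
typedef struct DimInfo {
    int    len_known;   /* has the unlimited length been computed yet? */
    size_t max_len;     /* cached current length of the unlimited dim */
} DimInfo;

/* Stand-in for the expensive walk over every var in the file/group. */
static size_t scan_all_vars_for_len(int dimid)
{
    (void)dimid;
    return 41;          /* placeholder result */
}

/* For a read-only file the answer cannot change, so compute it once. */
size_t unlimited_dim_len(DimInfo *di, int dimid, int file_is_readonly)
{
    if (file_is_readonly && di->len_known)
        return di->max_len;   /* cached: the O(nvars) scan is skipped */
    di->max_len = scan_all_vars_for_len(dimid);
    di->len_known = 1;
    return di->max_len;
}
```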
I'm not sure whether this is the correct fix, or if there is some behavior that I am overlooking, but my users would definitely prefer a 10-second run to a 10-minute run...
This is on the current netcdf-c master branch.
I will try to attach some valgrind/callgrind profiles.
re: https://github.com/Unidata/netcdf-c/issues/1693
1. Add functions to libdispatch/dnotnc4.c to support
dispatch-table operations that should work for any
dispatch table, even if they do nothing;
for example, nc_inq_var_filter (see the sketch after this list).
2. Modify selected dispatch tables to utilize
the noop functions.
3. Extend nc_test/tst_formats.c to test.
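As referenced in item 1, a hedged sketch of what such a no-op function might look like; the actual names, signatures, and return conventions in libdispatch/dnotnc4.c may differ:
```
#include <stddef.h>
#include <netcdf.h>

/* Hypothetical no-op filter inquiry for dispatch tables whose format
   cannot carry filters: instead of failing, it reports "no filter
   defined" so generic code can call it on any open file. */
int NC_NOOP_inq_var_filter(int ncid, int varid, unsigned int *idp,
                           size_t *nparamsp, unsigned int *params)
{
    (void)ncid; (void)varid; (void)params;   /* nothing to look up */
    if (idp) *idp = 0;
    if (nparamsp) *nparamsp = 0;
    return NC_NOERR;
}
```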
This is an extension of Ed's work to do this for
chunking, deflate, and szip. See PRs
https://github.com/Unidata/netcdf-c/pull/1697
and
https://github.com/Unidata/netcdf-c/pull/1692
As a side effect, elide libdispatch/dnotnc3.c since
it is no longer used.
re: https://github.com/Unidata/netcdf-c/issues/1684
re: e-support VZL-904142
Two issues:
1. As of libcurl 7.66, the semantics of CURLOPT_SSL_VERIFYHOST
changed so that non-zero values affect certificate processing.
2. The current library was forcing the values of VERIFYPEER
and VERIFYHOST to zero instead of leaving them at their default values.
The solution was, first, to leave the defaults in place for VERIFYPEER and VERIFYHOST
as long as they are not set in the .ocrc/.dodsrc file.
Second, the value of HTTP.SSL.VERIFYPEER or HTTP.SSL.VERIFYHOST
as set in .ocrc/.dodsrc is used to set the corresponding CURLOPT flags.
So for example, adding
> HTTP.SSL.VERIFYHOST=2
will set the value of CURLOPT_SSL_VERIFYHOST to 2, the default.
Using
> HTTP.SSL.VERIFYHOST=0
will set the value of CURLOPT_SSL_VERIFYHOST to 0, which disables it.
Similarly for VERIFYPEER.
Finally, the semantics of HTTP.SSL.VALIDATE are now equivalent to
> HTTP.SSL.VERIFYPEER=1
> HTTP.SSL.VERIFYHOST=2
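A hedged sketch of the mapping described above, with a made-up helper (the real code lives in the library's curl setup and obtains these values from the .ocrc/.dodsrc parser):
```
#include <curl/curl.h>

/* verifypeer/verifyhost: -1 means "not set in the rc file". */
void apply_ssl_settings(CURL *curl, long verifypeer, long verifyhost)
{
    if (verifypeer >= 0)
        curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, verifypeer);
    if (verifyhost >= 0)
        curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, verifyhost);
    /* Otherwise libcurl's defaults stay in effect:
       VERIFYPEER=1, VERIFYHOST=2. */
}
```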