Fix Issue https://github.com/Unidata/netcdf-c/issues/1725.
Replace PR https://github.com/Unidata/netcdf-c/pull/1726
Also replace PR https://github.com/Unidata/netcdf-c/pull/1694
The general problem is that under Visual Studio, we are seeing
a number of undefined reference and other scoping errors.
The reason is that the code is not properly using Visual Studio
__declspec() declarations.
The basic solution is to ensure that __declspec(dllexport) is used
when compiling the library code itself. There are several sets of
macros to handle this, but they all rely on the flag DLL_EXPORT
being defined when the code is compiled, and not being defined
when the code is used via a .h file.
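To make the mechanism concrete, here is a minimal sketch of that pattern;
the MYLIB_* names are hypothetical and the actual macros in netcdf-c
(e.g. in ncexternl.h) may differ:
```
/* Sketch of the export/import macro pattern; MYLIB_* names are hypothetical.
 * DLL_EXPORT is defined only while the library sources themselves are
 * being compiled, never when a client includes the public header. */
#if defined(_MSC_VER)
#  if defined(DLL_EXPORT)                      /* building the DLL itself */
#    define MYLIB_EXTRA __declspec(dllexport)
#  else                                        /* consuming the DLL via a .h file */
#    define MYLIB_EXTRA __declspec(dllimport)
#  endif
#else
#  define MYLIB_EXTRA
#endif
#define MYLIB_EXTERNL extern MYLIB_EXTRA

/* Declaring a shared global in a public header, e.g. something like ocdebug: */
MYLIB_EXTERNL int mylib_debug;
```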
As a test, I modified XGetOpt.c to build properly. I also
fixed the oc2 library to properly __declspec things like ocdebug.
I also made some misc. changes to get all the tests to run
when Cygwin is installed (to provide bash, sed, etc.).
Misc. Changes:
* Put XGetOpt.c into libsrc and copy it at build time
to the other directories where it is needed.
The current ncgen does not properly handle very large
data sections. Apparently such data sections are very uncommon,
since the problem was only discovered while testing the new zarr code.
The fix required a new approach to processing data sections.
Unfortunately, the resulting ncgen is slower than before,
but at least it is, I think, now correct.
The added test cases are in libnczarr, so they will
not show up until that code is incorporated into master.
Note also that Fortran code generation changed, but
has not been tested here.
Misc. Changes
1. Cleanup error handling in ncgen -lc and -lb output
2. Cleanup Makefiles for ncgen to remove unused code
3. Added a program, ncgen/ncdumpchunks, to print
the data of a .nc file on a per-chunk basis.
4. Made the XGetOpt change in PR https://github.com/Unidata/netcdf-c/pull/1694
for ncdump/ncvalidator
Enables ncdump -t (-i) to recognize a wider variety of time-related units
and calendar names. This brings ncdump closer to what it advertises in its
man page regarding its understanding of udunits-compliant time units.
These changes seem to be correct for the generation of `netcdf_meta.h` in a CMake build. I think this addresses #1723.
There is still an issue with the libnetcdf.settings file.
The `error4.c` file is already listed in `libsrc4_SOURCES`, so it does not need to be added again later. The file's contents are only compiled if `LOGGING` is defined, so it is OK to always compile it.
The `HDF5_HAS_PAR_FILTERS` and `HAS_PAR_FILTERS` variables are currently set purely based on `HDF5_VERSION`, and that variable is only set inside the if block that finds the HDF5 library via CMake package files. If the user specifies the explicit location of the HDF5 library and include files, for example via:
```
-DHDF5_C_LIBRARY:PATH=${INSTALL_PATH}/lib/libhdf5.${LD_EXT} \
-DHDF5_HL_LIBRARY:PATH=${INSTALL_PATH}/lib/libhdf5_hl.${LD_EXT} \
-DHDF5_INCLUDE_DIR:PATH=${INSTALL_PATH}/include
```
then the code path that determines whether the par-filters variables are set is not run. However, later in the file there is another check for parallel filter support (near line 759):
```
# Check to see if this is hdf5-1.10.3 or later.
CHECK_LIBRARY_EXISTS(${HDF5_C_LIBRARY_hdf5} H5Dread_chunk "" HDF5_SUPPORTS_PAR_FILTERS)
```
This PR moves the code that sets the other two par-filters variables down after this check, and instead of setting their values based on the version, it bases them on the result of this test.
I'm not totally sure why there are three variables; it looks like `HDF5_SUPPORTS_PAR_FILTERS` and `HDF5_HAS_PAR_FILTERS` could be combined. I think `HAS_PAR_FILTERS` is a string used to show the result of the configuration, and the other two are booleans.
The new check should work for both types of HDF5 installs (cmake-based and configure-based).
re: Github issue https://github.com/Unidata/netcdf-c/issues/1713
If nc_def_var_filter or nc_def_var_deflate or nc_def_var_szip is
called multiple times with the same filter id, but possibly with
different sets of parameters, then the first invocation is
sticky and later invocations are ignored. The desired behavior
is to have the last invocation be used.
This PR implements that desired behavior, with some special
cases. If you call nc_def_var_deflate multiple times, then the
last invocation rule applies with respect to deflate. However,
the shuffle filter, if enabled, is always applied just before
applying deflate.
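As an illustration of the intended behavior (error checking omitted; file and
variable names are made up), calling nc_def_var_deflate twice should leave the
variable with the settings from the second call:
```
#include <netcdf.h>

int main(void)
{
    int ncid, dimid, varid;

    /* Sketch only: error checking omitted; names are made up. */
    nc_create("example.nc", NC_NETCDF4 | NC_CLOBBER, &ncid);
    nc_def_dim(ncid, "x", 100, &dimid);
    nc_def_var(ncid, "v", NC_FLOAT, 1, &dimid, &varid);

    /* First call: shuffle off, deflate level 1. */
    nc_def_var_deflate(ncid, varid, 0, 1, 1);
    /* Second call: shuffle on, deflate level 9.  With this PR the last
     * invocation wins, so the variable ends up with shuffle + level 9;
     * previously the first call was sticky and this one was ignored. */
    nc_def_var_deflate(ncid, varid, 1, 1, 9);

    return nc_close(ncid);
}
```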
Misc unrelated changes:
1. Make client-side filters be disabled by default
2. Fix the definition of uintptr_t and its use in oc2 and libdap4 (see the sketch after this list)
3. Add some test cases
4. Modify the filter order tests to use plugin filters rather
than client-side filters
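Regarding item 2, the usual fallback definition looks roughly like the sketch
below; this illustrates the general technique (the HAVE_STDINT_H guard is
assumed to come from the build system) rather than the exact code used in
oc2/libdap4:
```
/* Illustrative fallback only; the actual netcdf-c configure checks may differ.
 * HAVE_STDINT_H is assumed to be set by the build system's header check. */
#ifdef HAVE_STDINT_H
#include <stdint.h>                        /* provides uintptr_t */
#else
typedef unsigned long long uintptr_t;      /* wide enough for a pointer on common platforms */
#endif
```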
The current library seems to have some behavior which is N^2 in the number of vars in a file.
The `NC4_inq_dim` routine calls down to `nc4_find_dim_len`, which iterates through each `var` in the file/group, calls `find_var_dim_max_length` on each var, and takes the largest length of the dim across those vars. This is done only for unlimited dimensions.
I have a file with 129 dims and 1630 vars. The unlimited dimension is of length 41. In my test program, I am reading data from 4 files which have the same dim and var counts and reading every 4th time step (unlimited dimension). If I run a profile, I see that 98.2% of the program time is in the `nc_get_vara_float` call tree, and most of that is in `find_var_dim_max_length` (94.8%).
There are 66,142 calls to `nc_get_vara_float` resulting in 107,307,290 calls to `find_var_dim_max_length` with twice that number of calls to `malloc/free` and calls to 5 HDF5 routines. All of this, at least in my case, to return the same `41` each time.
The proof-of-concept patch here checks whether the file is read-only (or no_write) and, if so, caches the value of the dim length the first time it is calculated. With this change, my example run speeds up by a factor of 60. The time spent in `NC4_inq_dim` and below drops from 97.2% to 2.7%.
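In outline the caching looks roughly like the following sketch; the type and field names here are hypothetical and do not match the actual netcdf-c internal structures:
```
#include <stddef.h>

/* Sketch of the caching idea only; names are hypothetical. */
typedef struct {
    size_t cached_len;   /* last computed maximum length of the dim */
    int    cache_valid;  /* nonzero once cached_len has been filled in */
} dim_len_cache_t;

static size_t
dim_max_len(dim_len_cache_t *cache, int file_is_readonly,
            size_t (*scan_all_vars)(void *ctx), void *ctx)
{
    /* For a read-only file the unlimited dim cannot grow, so the expensive
     * per-var scan only needs to run once. */
    if (file_is_readonly && cache->cache_valid)
        return cache->cached_len;
    cache->cached_len = scan_all_vars(ctx);  /* iterate over every var in the group */
    cache->cache_valid = 1;
    return cache->cached_len;
}
```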
I'm not sure whether this is the correct fix, or if there is some behavior that I am overlooking, but my users would definitely like a 10 second run compared to a 10 minute run...
This is on current Netcdf master branch.
I will try to attach some valgrind/callgrind profiles.