The current library appears to have behavior that is O(N^2) in the number of vars in a file.
The `NC4_inq_dim` routine calls down to `nc4_find_dim_len`, which iterates over every `var` in the file/group, calls `find_var_dim_max_length` on each one, and takes the largest length of the dim across those vars. This is done only for unlimited dims.
I have a file with 129 dims and 1630 vars. The unlimited dimension has length 41. In my test program, I am reading data from 4 files with the same dim and var counts, reading every 4th time step (along the unlimited dimension). If I run a profile, I see that 98.2% of the program time is in the `nc_get_vara_float` call tree, and most of that is in `find_var_dim_max_length` (94.8%).
There are 66,142 calls to `nc_get_vara_float`, resulting in 107,307,290 calls to `find_var_dim_max_length`, with twice that many calls to `malloc`/`free` plus calls to 5 HDF5 routines. All of this, at least in my case, just to return the same `41` every time.
The proof-of-concept patch here checks whether the file is read-only (`no_write`) and, if so, caches the value of the dim length the first time it is calculated. With this change, my example run speeds up by a factor of 60: the time for `NC4_inq_dim` and below drops from 97.2% to 2.7%.
I'm not sure whether this is the correct fix, or if there is some behavior I am overlooking, but my users would definitely prefer a 10-second run to a 10-minute run...
This is on the current netcdf-c master branch.
I will try to attach some valgrind/callgrind profiles.
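For illustration, here is a self-contained toy model of the caching idea; all the types and names below are invented stand-ins for the library's internals, not the actual patch. The point is that each length inquiry scans every var, so a read-only file can safely memoize the result after the first scan.

```c
#include <stddef.h>

/* Toy stand-ins for the real NC_DIM_INFO_T/NC_FILE_INFO_T internals. */
struct toy_dim {
    size_t cached_len;
    int have_cached_len;     /* 0 until the length is first computed */
};

struct toy_file {
    int no_write;            /* file was opened read-only */
    size_t nvars;
    size_t *var_dim_len;     /* each var's extent along the unlimited dim */
    struct toy_dim udim;
};

/* O(nvars) every call: this models the per-inquiry scan. */
static size_t
scan_vars_for_max_len(const struct toy_file *f)
{
    size_t max = 0;
    for (size_t v = 0; v < f->nvars; v++)
        if (f->var_dim_len[v] > max)
            max = f->var_dim_len[v];
    return max;
}

/* The caching idea: O(1) after the first call on a read-only file. */
size_t
find_dim_len(struct toy_file *f)
{
    if (f->no_write && f->udim.have_cached_len)
        return f->udim.cached_len;
    size_t len = scan_vars_for_max_len(f);
    if (f->no_write) {
        f->udim.cached_len = len;
        f->udim.have_cached_len = 1;
    }
    return len;
}
```

The read-only guard is what makes the memoization safe: a writer could append records and grow the unlimited dimension, invalidating any cached length.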
re: https://github.com/Unidata/netcdf-c/issues/1693
1. Add functions to libdispatch/dnotnc4.c to support
dispatch table operations that should work for any
dispatch table, even if they do nothing.
Functions such as nc_inq_var_filter (see the sketch below).
2. Modify selected dispatch tables to use
the no-op functions.
3. Extend nc_test/tst_formats.c to test these operations.
This is an extension of Ed's work doing the same for
chunking, deflate, and szip. See PRs
https://github.com/Unidata/netcdf-c/pull/1697
and
https://github.com/Unidata/netcdf-c/pull/1692
As a side effect, elide libdispatch/dnotnc3.c since
it is no longer used.
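As a rough illustration, a no-op entry might look something like the sketch below. It borrows the public `nc_inq_var_filter` signature; the actual dispatch-level signature and the choice of `NC_ENOFILTER` are assumptions here, not necessarily what the PR does.

```c
#include <stddef.h>
#include "netcdf.h"

/* Hypothetical no-op for dispatch tables whose format has no filter
 * support: the operation dispatches cleanly but reports that no
 * filter is defined. */
int
NC_NOOP_inq_var_filter(int ncid, int varid, unsigned int *idp,
                       size_t *nparamsp, unsigned int *params)
{
    (void)ncid; (void)varid; (void)params;
    if (idp) *idp = 0;
    if (nparamsp) *nparamsp = 0;
    return NC_ENOFILTER;   /* assumes this error code is available */
}
```

The benefit is that nc_test/tst_formats.c can then call the same operation against every format and check for a well-defined result rather than an undispatchable table entry.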
re: https://github.com/Unidata/netcdf-c/issues/1684
re: e-support VZL-904142
Two issues:
1. As of libcurl 7.66, the semantics of CURLOPT_SSL_VERIFYHOST
changed so that non-zero values affect certificate processing.
2. The current library was forcing the values of VERIFYPEER
and VERIFYHOST to zero instead of leaving them at their default values.
The solution was, first, to leave the defaults in place for VERIFYPEER and VERIFYHOST
as long as they are not set in the .ocrc/.dodsrc file.
Second, the value of HTTP.SSL.VERIFYPEER or HTTP.SSL.VERIFYHOST
as set in .ocrc/.dodsrc is used to set the corresponding CURLOPT flags.
So for example, adding
> HTTP.SSL.VERIFYHOST=2
will set the value of CURLOPT_SSL_VERIFYHOST to 2, the default.
Using
> HTTP.SSL.VERIFYHOST=0
will set the value of CURLOPT_SSL_VERIFYHOST to 0, which disables it.
Similarly for VERIFYPEER.
Finally, the semantics of HTTP.SSL.VALIDATE are now equivalent to
> HTTP.SSL.VERIFYPEER=1
> HTTP.SSL.VERIFYHOST=2
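A minimal sketch of the resulting logic, assuming a hypothetical rc-file lookup helper `rc_lookup_long` in place of the library's actual .ocrc/.dodsrc machinery:

```c
#include <curl/curl.h>

/* Hypothetical: returns nonzero and fills *v if the key is set
 * in .ocrc/.dodsrc. */
extern int rc_lookup_long(const char *key, long *v);

static void
set_ssl_verification(CURL *curl)
{
    long v;
    /* Only override libcurl's defaults (VERIFYPEER=1, VERIFYHOST=2)
     * when the key actually appears in the rc file. */
    if (rc_lookup_long("HTTP.SSL.VERIFYPEER", &v))
        curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, v);
    if (rc_lookup_long("HTTP.SSL.VERIFYHOST", &v))
        curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, v);
    /* HTTP.SSL.VALIDATE acts as shorthand for both. */
    if (rc_lookup_long("HTTP.SSL.VALIDATE", &v) && v) {
        curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 1L);
        curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, 2L);
    }
}
```

Not touching the options when the keys are absent is what restores libcurl's own defaults, fixing issue 2 above.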
re: issue https://github.com/Unidata/netcdf-c/issues/1687
Static functions are being used before declaration, which
causes compile errors. This occurs only when BIG_ENDIAN is defined.
The solution is to add forward declarations, as sketched below.
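The shape of the fix, with made-up function names; a forward declaration lets a static function be referenced before its definition appears in the file:

```c
/* Forward declaration added; in the real code this path is only
 * compiled when BIG_ENDIAN is defined. */
static int helper(int x);

static int caller(int x)
{
    return helper(x) + 1;   /* used before helper's definition */
}

static int helper(int x)
{
    return x * 2;
}
```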
nc4internal.c contained code to free the format_XXX_info
fields. Since these are format specific, this code
was moved into the dispatch code (libhdf5 and libhdf4
in the current case).
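A sketch of the new ownership, with illustrative type layouts and a hypothetical function name: the generic layer treats `format_file_info` as opaque, and the HDF5 dispatch code releases it on close.

```c
#include <stdlib.h>

/* Illustrative declarations, not the library's actual ones. */
typedef struct NC_FILE_INFO {
    void *format_file_info;   /* opaque; owned by the format layer */
} NC_FILE_INFO;

typedef struct NC_HDF5_FILE_INFO {
    int hdfid;                /* e.g. an underlying HDF5 file handle */
} NC_HDF5_FILE_INFO;

/* Lives in the HDF5 dispatch code (hypothetical name): generic
 * nc4internal.c no longer knows how to free this state. */
static void
hdf5_free_format_file_info(NC_FILE_INFO *h5)
{
    NC_HDF5_FILE_INFO *info = (NC_HDF5_FILE_INFO *)h5->format_file_info;
    if (info) {
        /* ...release any HDF5 handles held in info... */
        free(info);
        h5->format_file_info = NULL;
    }
}
```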
Additionally, there are some fields in nc4internal.h (e.g.
dimscale fields) that are specific to HDF5 and have been moved
to the corresponding HDF5 data structures and code.
Misc. other changes:
1. NC_VAR_INFO_T->hdf5_name renamed to alt_name to avoid
implying it is necessarily HDF5 specific.
2. Prefix NC_FILE_INFO_T with an instance of NC_OBJ for consistency.
This also requires wrapping move_in_NCList() to keep
hdr.id consistent (see the sketch below).
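For context, a simplified sketch of the NC_OBJ-header convention that change 2 extends to NC_FILE_INFO_T; the field layout is illustrative:

```c
#include <stddef.h>

/* Simplified; the real NC_OBJ carries a sort tag, name, and id. */
typedef struct NC_OBJ {
    int sort;       /* object kind, e.g. var/dim/group/file */
    char *name;
    size_t id;
} NC_OBJ;

typedef struct NC_FILE_INFO {
    NC_OBJ hdr;     /* first member, so (NC_OBJ*)file is valid */
    /* ...file-specific fields... */
} NC_FILE_INFO;
```

With `hdr` as the first member, any object can be handled through an NC_OBJ pointer; wrapping move_in_NCList() is then needed so that hdr.id stays consistent when a file is moved in the NC list.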