re: Discussion https://github.com/Unidata/netcdf-c/discussions/2554
re: PR https://github.com/Unidata/netcdf-c/pull/2231
re: Issue https://github.com/Unidata/netcdf-c/issues/2189
After some discussion, the issue of applying filters to variables
whose type is not of fixed size was resolved as follows:
1. A call to nc_def_var_filter on such a variable will ignore the filter, but will issue a log warning.
2. Loading (from an existing file) a variable whose type is not fixed-size and which has filters will cause the variable to be suppressed.
This PR enforces those rules.
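For illustration, here is a minimal sketch of rule 1 in C; the file, dimension, and variable names are hypothetical. Filter id 1 is deflate; under the new rule the call is expected to succeed, with the filter dropped and a warning written to the log:

```c
#include <stdio.h>
#include <netcdf.h>

#define CHECK(e) do { int s = (e); if (s != NC_NOERR) { \
    fprintf(stderr, "%s\n", nc_strerror(s)); return 1; } } while (0)

int main(void) {
    int ncid, dimid, varid;
    nc_type vtype;
    unsigned int level = 5;    /* deflate level parameter */

    CHECK(nc_create("tst_vlen_filter.nc", NC_NETCDF4 | NC_CLOBBER, &ncid));
    CHECK(nc_def_dim(ncid, "d", 10, &dimid));
    CHECK(nc_def_vlen(ncid, "vlen_t", NC_INT, &vtype)); /* not fixed-size */
    CHECK(nc_def_var(ncid, "v", vtype, 1, &dimid, &varid));

    /* Per rule 1, this call should succeed, but the filter is ignored
     * and a log warning is issued. */
    CHECK(nc_def_var_filter(ncid, varid, 1 /* deflate */, 1, &level));

    CHECK(nc_close(ncid));
    return 0;
}
```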
## Misc. Other changes
* Add a test case to exercise the vlen change.
* Make some minor clean-ups in various cmake and automake files.
* Remove an unused test.
re: Issue https://github.com/Unidata/netcdf-c/issues/2704
The issue reported problems accessing e.g. opendap.earthdata.nasa.gov,
which uses the authentication mechanisms of urs.earthdata.nasa.gov.
The file *docs/auth.md* describes how to set up the proper authorization
mechanisms for earthdata, but there turned out to be some bugs
in the code that prevented this from working.
## Primary Changes
* Add some clarification text to *auth.md* (see the example configuration below).
* Fix the process for loading and merging the *.ncrc* and *.dodsrc* files to conform to the documentation.
* Fix *NC_s3urlrebuild* so that non-S3 URLs are passed through unchanged.
* Fix a bug in the .rc test *test_rcmerge.sh*.
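For reference, here is a minimal sketch of the kind of setup that *auth.md* describes; all paths and the credentials below are placeholders:

```
# ~/.ncrc (or ~/.dodsrc)
HTTP.COOKIEJAR=/home/user/.urs_cookies
HTTP.NETRC=/home/user/.netrc

# /home/user/.netrc
machine urs.earthdata.nasa.gov login <username> password <password>
```

With this in place, the library loads and merges the *.ncrc*/*.dodsrc* settings, and the redirect to urs.earthdata.nasa.gov can authenticate using the *.netrc* entry.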
Add the option "--disable-network-access" (automake)
or "-DENABLE_NETWORK_ACCESS=OFF" (cmake).
When specified, this option transitively disables all
network-access capabilities and the associated tests.
In particular, it implies the following:
* --disable-dap
* --disable-byterange
* --disable-s3
This PR answers a feature request from Ed Hartnett.
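For example, a build with all network access disabled would be configured as follows (a sketch; the source/build directory layout is assumed):

```sh
# Automake-based build
./configure --disable-network-access

# CMake-based build
cmake -DENABLE_NETWORK_ACCESS=OFF ..
```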
## Misc. Other changes
* Take the opportunity to clean up some old, unused options,
e.g. --enable-multifilters.
* Fix a bug in handling S3 URLs.
re: Issue https://github.com/Unidata/netcdf-c/issues/2685
re: PR https://github.com/Unidata/netcdf-c/pull/2179
As noted in PR https://github.com/Unidata/netcdf-c/pull/2179,
the old code did not allow for reclaiming instances of types,
nor for properly copying them. That PR provided new functions
capable of reclaiming/copying instances of arbitrary types.
However, as noted in Issue https://github.com/Unidata/netcdf-c/issues/2685, using these
fully general functions resulted in significant performance
degradation, even for common cases.
This PR attempts to mitigate the cost of using the general
reclaim/copy functions in two ways.
First, the previous functions operated at the top level,
using ncid and typeid arguments. These functions were augmented
with equivalent versions that use the netcdf-c library's internal
data structures to directly access the needed information.
These new functions are used internally by the library.
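For illustration, here is a sketch of how the top-level ncid/typeid API from PR https://github.com/Unidata/netcdf-c/pull/2231 is used; the file and variable names and the element count are hypothetical. After reading a variable of VLEN type, the nested heap memory must be reclaimed:

```c
#include <netcdf.h>

int read_and_reclaim(void) {
    int ncid, varid;
    nc_type xtype;
    nc_vlen_t data[10];          /* assumes the variable has 10 elements */

    if (nc_open("tst_vlen_filter.nc", NC_NOWRITE, &ncid)) return 1;
    if (nc_inq_varid(ncid, "v", &varid)) return 1;
    if (nc_inq_vartype(ncid, varid, &xtype)) return 1;
    if (nc_get_var(ncid, varid, data)) return 1;

    /* ... use data ... */

    /* Free the per-instance memory allocated by the read. */
    if (nc_reclaim_data(ncid, xtype, data, 10)) return 1;
    return nc_close(ncid);
}
```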
The second mitigation involves optimizing the internal functions
by providing early tests for common cases. This avoids
unnecessary recursive function calls.
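The following is not the library's actual code, but a hypothetical illustration of the idea: a precomputed flag records whether a type is transitively variable-sized, and the recursive walker consults it first, so reclaiming an instance of a fixed-size type costs one test and no recursion.

```c
#include <stdlib.h>

/* Simplified type descriptor: atomic, or a VLEN over a base type. */
struct TypeInfo {
    int varsized;               /* precomputed: transitively variable-sized? */
    size_t size;                /* in-memory size of one instance */
    struct TypeInfo *basetype;  /* non-NULL for a VLEN type */
};

struct Vlen { size_t len; void *p; };  /* mirrors nc_vlen_t */

static void reclaim_instance(const struct TypeInfo *t, void *instance)
{
    size_t i;
    if (!t->varsized) return;   /* early test: fixed-size, nothing to free */
    if (t->basetype != NULL) {  /* VLEN: reclaim elements, then the block */
        struct Vlen *v = (struct Vlen *)instance;
        char *p = (char *)v->p;
        for (i = 0; i < v->len; i++)
            reclaim_instance(t->basetype, p + i * t->basetype->size);
        free(v->p);
        v->p = NULL;
    }
}
```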
The overall result is a significant improvement in speed by a
factor of roughly twenty -- your mileage may vary. These
optimized functions are still not as fast as the original (more
limited) functions, but they are getting close. Additional optimizations are
possible, but at the cost of a significant "uglification" of the
code, which I deemed a step too far, at least for now.
## Misc. Changes
1. Added a test case to check the proper reclamation/copy of complex types.
2. Found and fixed some places where nc_reclaim/copy should have been used.
3. Replaced, within the netcdf-c library, (almost all) occurrences of nc_reclaim/copy with calls to the internal NC_reclaim/copy. This, plus the optimizations, is the primary speed-up mechanism.
4. In DAP4, the metadata is held in a substrate in-memory file; this required some changes so that the reclaim/copy code accessed that substrate dispatcher rather than the DAP4 dispatcher.
5. Re-factored and isolated the code that computes whether a type is (transitively) variable-sized or not.
6. Cleaned up the reclamation code in ncgen; adding the use of nc_reclaim exposed some memory problems.
It turns out that attempting to test S3 access using a GitHub Actions secret is a very complex process, so such testing was disabled for GitHub Actions. However, a new *run_tests_s3.yml* action file was added that will eventually encapsulate S3 testing.