Replaces PR https://github.com/Unidata/netcdf-c/pull/3024
and PR https://github.com/Unidata/netcdf-c/pull/3033
re: https://github.com/Unidata/netcdf-c/issues/2753
As suggested by Ed Hartnett, this PR extends the netcdf.h API to support programmatic control over the search path used to locate plugins.
I created several different APIs, but finally settled on the following one as the simplest possible. It does have the disadvantage that it requires a global lock (not implemented here) if used in a threaded environment.
Specifically, note that modifying the plugin path must be done "atomically". That is, in a multi-threaded environment, the sequence of actions involved in setting up the plugin path must be carried out by a single thread, or otherwise guarded so that two or more threads cannot interleave plugin path get/set operations.
As an example, assume there exists a mutex lock called PLUGINLOCK. Then any thread accessing the plugin paths should operate as follows:
````
lock(PLUGINLOCK);
nc_plugin_path_get(...);
<rebuild plugin path>
nc_plugin_path_set(...);
unlock(PLUGINLOCK);
````
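For instance, a concrete sketch of that pattern using POSIX threads might look like the following; the lock and the helper function are illustrative only, not part of netcdf-c.
````
#include <pthread.h>
#include "netcdf.h"
#include "netcdf_filter.h"

/* Illustrative only: netcdf-c does not itself provide this lock. */
static pthread_mutex_t PLUGINLOCK = PTHREAD_MUTEX_INITIALIZER;

static int
rebuild_plugin_path(void)
{
    int stat = NC_NOERR;
    NCPluginList dirs = {0, NULL};

    pthread_mutex_lock(&PLUGINLOCK);
    if ((stat = nc_plugin_path_get(&dirs)) == NC_NOERR) {
        /* <rebuild plugin path>: modify dirs.ndirs/dirs.dirs as needed */
        stat = nc_plugin_path_set(&dirs);
    }
    pthread_mutex_unlock(&PLUGINLOCK);
    /* the caller owns dirs.dirs and must free it (elided here) */
    return stat;
}
````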
## Internal Architecture
It is assumed here that there only needs to be a single set of plugin path directories that is shared by all filter code and is independent of any file descriptor; in other words, it is global. This means, for example, that the path list for NCZarr and for HDF5 will always be the same.
Internally, however, processing the set of plugin paths depends on the particular NC_FORMATX value (currently NC_FORMATX_NC_HDF5 and NC_FORMATX_NCZARR). So the *nc_plugin_path_set* function takes the paths it is given and propagates them to each of the NC_FORMATX dispatchers, which store them in whatever way is appropriate to that dispatcher.
There is a complication with respect to the *nc_plugin_path_get* function: it is possible for users to bypass the netcdf API and modify the HDF5 plugin paths directly, which can leave the value used by HDF5 inconsistent with the global value used by netcdf-c. Since there is no obvious fix for this, we warn the user of the possibility and otherwise ignore it.
## Test Changes
* New tests<br>
a. unit_test/run_pluginpaths.sh -- was created to test this new capability.<br>
b. unit_test/run_dfaltpluginpath.sh -- was created to test the default plugin path list.
* New test support utilities<br>
a. unit_test/ncpluginpath.c -- report current state of the plugin path<br>
b. unit_test/tst_pluginpaths.c -- test program to support run_pluginpaths.sh
## Documentation
* A new file -- docs/pluginpath.md -- provides documentation of the new API. It includes some
material taken from filters.md.
## Other Major Changes
1. Clean up the whole plugin path decision tree. This is described in the *docs/pluginpath.md* document and summarized in Addendum 2 below.
2. I noticed that ncdump/testpathcvt.sh had been disabled, so I fixed and re-enabled it. This necessitated some significant changes to dpathmgr.c.
## Misc. Changes
1. Add some path manipulation utilities to netcdf_aux.h
2. Fix some minor bugs in netcdf_json.h
3. Convert netcdf_json.h and netcdf_proplist.h to BUILT_SOURCE.
4. Add NETCDF_ENABLE_HDF5 as a synonym for USE_HDF5
5. Fix some size_t <-> int conversion warnings.
6. Encountered and fixed the Windows \r\n problem in tst_pluginpaths.c.
7. Clean up some minor CMakeLists.txt problems.
8. Provide an implementation of `echo -n` since it appears not to be available on all platforms.
9. Add a property list mechanism to pass environmental information to filters.
10. Clean up Doxyfile.in
11. Fixed a memory leak in libdap2; surprised that I did not find this earlier.
## Addendum 1: Proposed API
The API makes use of a counted vector of strings representing the sequence of directories in the path. The relevant type definition is as follows.
````
typedef struct NCPluginList {size_t ndirs; char** dirs;} NCPluginList;
````
The API proposed in this PR looks like this (from netcdf-c/include/netcdf_filter.h).
* ````int nc_plugin_path_ndirs(size_t* ndirsp);````
Arguments: *ndirsp* -- return the number of directories through this pointer.
This function returns the number of directories in the internal plugin path list.
* ````int nc_plugin_path_get(NCPluginList* dirs);````
Arguments: *dirs* -- counted vector for storing the sequence of directories in the internal path list.
This function returns the current sequence of directories from the internal plugin path list. Since this function does not modify the plugin path, it does not itself need to be locked; locking is required only when the path is retrieved in order to be modified. If the value of *dirs.dirs* is NULL (the normal case), then memory is allocated to hold the vector of directories; otherwise the memory of *dirs.dirs* is used to hold the vector.
* ````int nc_plugin_path_set(const NCPluginList* dirs);````
Arguments: *dirs* -- counted vector for providing the new sequence of directories in the internal path list.
This function empties the current internal path sequence and replaces it with the sequence given by the *dirs* argument. Passing an *ndirs* value of 0 clears the set of plugin paths.
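To make the intended usage concrete, here is a minimal sketch (error handling elided; it assumes, per the description above, that the caller owns the vector allocated by *nc_plugin_path_get*):
````
#include <stdio.h>
#include <stdlib.h>
#include "netcdf.h"
#include "netcdf_filter.h"

int main(void)
{
    size_t i, ndirs = 0;
    NCPluginList dirs = {0, NULL};
    NCPluginList empty = {0, NULL};

    nc_plugin_path_ndirs(&ndirs);
    printf("plugin path has %zu directories\n", ndirs);

    /* dirs.dirs is NULL, so nc_plugin_path_get allocates the vector */
    nc_plugin_path_get(&dirs);
    for (i = 0; i < dirs.ndirs; i++)
        printf("dirs[%zu]=%s\n", i, dirs.dirs[i]);

    /* an ndirs of 0 clears the internal plugin path list */
    nc_plugin_path_set(&empty);

    for (i = 0; i < dirs.ndirs; i++) free(dirs.dirs[i]);
    free(dirs.dirs);
    return 0;
}
````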
## Addendum 2: Build-Time and Run-Time Constants.
### Build-Time Constants
<table style="border:2px solid black;border-collapse:collapse">
<tr style="outline: thin solid;" align="center"><td colspan="4">Table showing the build-time computation of NETCDF_PLUGIN_INSTALL_DIR and NETCDF_PLUGIN_SEARCH_PATH.</td>
<tr style="outline: thin solid" ><th>--with-plugin-dir<th>--prefix<th>NETCDF_PLUGIN_INSTALL_DIR<th>NETCDF_PLUGIN_SEARCH_PATH
<tr style="outline: thin solid" ><td>undefined<td>undefined<td>undefined<td>PLATFORMDEFALT
<tr style="outline: thin solid" ><td>undefined<td><abspath-prefix><td><abspath-prefix>/hdf5/lib/plugin<td><abspath-prefix>/hdf5/lib/plugin<SEP>PLATFORMDEFALT
<tr style="outline: thin solid" ><td><abspath-plugins><td>N.A.<td><abspath-plugins><td><abspath-plugins><SEP>PLATFORMDEFALT
</table>
<table style="border:2px solid black;border-collapse:collapse">
<tr style="outline: thin solid" align="center"><td colspan="2">Table showing the computation of the initial global plugin path</td>
<tr style="outline: thin solid"><th>HDF5_PLUGIN_PATH<th>Initial global plugin path
<tr style="outline: thin solid"><td>undefined<td>NETCDF_PLUGIN_SEARCH_PATH
<tr style="outline: thin solid"><td><path1;...pathn><td><path1;...pathn>
</table>
The problem was basically that the test file ref_oldformat.zip
was incorrect. Additionally, the logic in zsync.c was incorrect.
### Misc. other fixes
1. Turn off accidental debug output
As suggested by Ward, I ensured that this PR supports
backward compatibility for reading the old key format,
and added a test case for this.
## Misc. Other Changes
* Remove some unused code
* Clean up the JSON error handling
* Fix some more unsigned/signed conversion warnings
As discussed in a netcdf meeting, convert NCZarr V2 to store all netcdf-4 specific info as attributes. This improves interoperability with other Zarr implementations by no longer using non-standard keys.
## Other Changes
* Remove support for older NCZarr formats.
* Update anonymous dimension naming
* Begin the process of fixing the -Wconversion and -Wsign-compare warnings in libnczarr, nczarr_test, and v3_nczarr_test.
* Update docs/nczarr.md
* Rebuild using the .y and .l files
re: Issue https://github.com/Unidata/netcdf-c/issues/2927
The NC_s3sdkinitialize and NC_s3sdkfinalize functions were
misplaced. They should have been moved from ds3util.c to
ncs3sdk_h5.c. When using ncs3sdk_aws.cpp, this resulted in a
duplicate definition.
Also, found and fixed a memory leak in the NCZarr S3 code.
# Description
Remove various obsolete build options. Also do some code movement.
## Specific Changes
* The remotetest server is sometimes unstable, so provide a mechanism
to force disabling calls to remotetest.unidata.ucar.edu.
This is enabled by adding a repository variable named
REMOTETESTDOWN with the value "yes".
* Fix CMakeLists.txt to use the uname command as an alternative
to the hostname command (which does not work under Cygwin).
* Remove the JNA stuff as obsolete
* Remove the ENABLE_CLIENTSIDE_FILTERS option since it has been
disabled for a while.
* Fix bad option flag in some github action .yml files: change --disable-xml2 to --disable-libxml2
* Collect globalstate definitions into nc4internal.h
* Remove ENABLE_NCZARR_FILTERS_TESTING option as obsolete and replace
with ENABLE_NCZARR_FILTERS
* Move some dispatcher independent functions from libsrc4/nc4internal.c to libdispatch/ddispatch.c
* As a long term goal, and because it is now the case that --enable-nczarr
=> USE_NETCDF4, make the external options --enable-netcdf-4 and
--enable-netcdf4 obsolete in favor of --enable-hdf5
We will do the following for one more release cycle.
1. Make --enable-netcdf-4 be an alias for --enable-netcdf4.
2. Make --enable-netcdf4 an alias for --enable-hdf5.
3. Internally, convert most uses of USE_NETCDF_4 and USE_NETCDF4 to USE_HDF5
After the next release, --enable-netcdf-4 and --enable-netcdf4 will
be removed.
Argument 'count' in NetCDF is not exactly the same as the 'count' in
H5Sselect_hyperslab(space_id, op, start, stride, count, block).
When the argument 'stride' is NULL, NetCDF's 'count' should be passed in
argument 'block', for example,
H5Sselect_hyperslab(space_id, op, start, NULL, ones, count);
where 'ones' is an array of all 1s. Although passing a NULL 'block', as in
H5Sselect_hyperslab(space_id, op, start, NULL, count, NULL);
has the same effect, HDF5 then internally stores the space of a subarray as a
list of single elements, instead of a "block", which can affect the
performance.
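A minimal sketch of the two equivalent selections (the dimension sizes and offsets here are arbitrary examples):
````
#include <hdf5.h>

/* Select a 10x20 subarray at offset (5,0), as netCDF does when stride is NULL. */
static void
select_subarray(hid_t space_id)
{
    hsize_t start[2] = {5, 0};
    hsize_t count[2] = {10, 20}; /* netCDF-style count */
    hsize_t ones[2]  = {1, 1};

    /* preferred: pass netCDF's count as the HDF5 'block' argument,
       so HDF5 stores the selection as one block per dimension */
    H5Sselect_hyperslab(space_id, H5S_SELECT_SET, start, NULL, ones, count);

    /* equivalent selection, but stored internally as a list of single
       elements, which can hurt performance:
       H5Sselect_hyperslab(space_id, H5S_SELECT_SET, start, NULL, count, NULL); */
}
````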
Mostly just add an explicit cast when calling `malloc` and its
variants. Sometimes instead change the type of a local variable if
this would silence multiple warnings.
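A hypothetical before/after illustrating the kind of change involved (the function name is invented for illustration):
````
#include <stdlib.h>

static int*
alloc_ints(size_t n)
{
    /* before: int* p = malloc(n * sizeof(int));
       -- warns when the file is compiled as C++ or with -Wc++-compat */
    /* after: the explicit cast silences the warning without changing behavior */
    int* p = (int*)malloc(n * sizeof(int));
    return p;
}
````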
Prior to this PR, DAP4 always fetched the whole (constrained) dataset.
This PR changes the query processing so that:
1. It reads data on a per-variable request (equivalent to calling nc_get_var()).
2. It tracks a response for every query.
Most of the changes reflect having to do per-variable requests.
In any case, doing all this significantly reduces the amount of data transmitted and hence speeds up DAP4 requests.
re: Issue https://github.com/Unidata/netcdf-c/issues/2748
This PR fixes a number of issues and bugs.
## s3cleanup fixes
* Delete extraneous s3cleanup.sh related files.
* Remove duplicate s3cleanup.uids entries.
## Support the Google S3 API
* Add code to recognize "storage.googleapis.com"
* Add extra code to track the kind of server being accessed: unknown, Amazon, Google.
* Add a new mode flag "gs3" (analogous to "s3") to support this API.
* Modify the S3 URL code to support this case.
* Modify the listobjects result parsing because Google returns some non-standard XML elements.
* Change signature and calls for NC_s3urlrebuild.
## Handle corrupt Zarr files where shape is empty for a variable.
Modify the behavior when a variable's "shape" dictionary entry is empty.
Previously this returned an error; now such a variable is suppressed instead.
This change makes it possible to read the non-corrupt data in the file.
Also added a test case.
## Misc. Other Changes
* Fix the nclog level handling to suppress output by default.
* Fix the de-duplication code in ncuri.c
* Restore testing of iridl.ldeo.columbia.edu.
* Fix bug in define_vars() which did not always do a proper reclaim between variables.
This PR started as an attempt to add unlimited dimensions to NCZarr.
It did that, but this exposed significant problems with test interference.
So this PR is mostly about fixing -- well, mitigating anyway -- test
interference.
The problem of test interference is now documented in docs/internal.md.
The solutions implemented here are also described in that document.
The solution is somewhat fragile, but multiple cleanup mechanisms
are provided. Note that this feature requires that the
AWS command line utility be installed.
## Unlimited Dimensions.
The existing NCZarr extensions to Zarr are modified to support unlimited dimensions.
NCZarr extends the Zarr meta-data for the ".zgroup" object to include netcdf-4 model extensions. This information is stored in ".zgroup" as a dictionary named "_nczarr_group".
Inside "_nczarr_group", there is a key named "dims" that stores information about netcdf-4 named dimensions. The value of "dims" is a dictionary whose keys are the named dimensions. The value associated with each dimension name has one of two forms
Form 1 is a special case of form 2, and is kept for backward compatibility. Whenever a new file is written, it uses format 1 if possible, otherwise format 2.
* Form 1: An integer representing the size of the dimension, which is used for simple named dimensions.
* Form 2: A dictionary with the following keys and values"
- "size" with an integer value representing the (current) size of the dimension.
- "unlimited" with a value of either "1" or "0" to indicate if this dimension is an unlimited dimension.
For unlimited dimensions, the size is initially zero, and as variables extend the length of that dimension, the size value for the dimension increases.
That dimension size is shared by all arrays referencing that dimension, so if one array extends an unlimited dimension, it is implicitly extended for all other arrays that reference that dimension.
This is the standard semantics for unlimited dimensions.
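As a purely illustrative example (the dimension names are invented), a "_nczarr_group" dictionary mixing the two forms might look like this, where "lat" uses form 1 and the unlimited "time" dimension uses form 2 with an initial size of zero:
````
{
  "_nczarr_group": {
    "dims": {
      "lat": 180,
      "time": {"size": 0, "unlimited": 1}
    }
  }
}
````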
Adding unlimited dimensions required a number of other changes to the NCZarr code-base. These included the following.
* Did a partial refactor of the slice handling code in zwalk.c to clean it up.
* Added a number of tests for unlimited dimensions derived from the same test in nc_test4.
* Added several NCZarr specific unlimited tests; more are needed.
* Added a test of endianness.
## Misc. Other Changes
* Modify libdispatch/ncs3sdk_aws.cpp to optionally support use of the
AWS Transfer Utility mechanism. This is controlled by the
`#define TRANSFER` macro in that file. It defaults to being disabled.
* Parameterize both the standard Unidata S3 bucket (S3TESTBUCKET) and the netcdf-c test data prefix (S3TESTSUBTREE).
* Fixed an obscure memory leak in ncdump.
* Removed some obsolete unit testing code and test cases.
* Uncovered a bug in the netcdf-c handling of big-endian floats and doubles. Have not fixed yet. See tst_h5_endians.c.
* Renamed some nczarr_tests testcases to avoid name conflicts with nc_test4.
* Modify the semantics of zmap\#ncsmap_write to only allow total rewrite of objects.
* Modify the semantics of zodom to properly handle stride > 1.
* Add a truncate operation to the libnczarr zmap code.
re: https://github.com/Unidata/netcdf-c/issues/2733
When addressing the above issue, I noticed that there was a disconnect
in NCZarr between nc_set_chunk_cache and nc_set_var_chunk_cache.
Specifically, calling nc_set_chunk_cache had no impact on the per-variable cache parameters when nc_set_var_chunk_cache was not used.
So I modified the NCZarr code so that the per-variable cache parameters are set in this order of precedence (#1 is first choice), as sketched after the list:
1. The values set by nc_set_var_chunk_cache
2. The values set by nc_set_chunk_cache
3. The defaults set by configure.ac
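A sketch of how this precedence plays out (the cache sizes here are arbitrary examples):
````
#include "netcdf.h"

static void
configure_caches(int ncid, int varid)
{
    /* choice #2: the global default, used by variables that have
       no per-variable setting */
    nc_set_chunk_cache(64UL * 1024 * 1024, 1009, 0.75f);

    /* choice #1: overrides the global default for this variable only */
    nc_set_var_chunk_cache(ncid, varid, 16UL * 1024 * 1024, 521, 0.75f);
}
````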
re: PR https://github.com/Unidata/netcdf-c/pull/2716
re: Issue https://github.com/Unidata/netcdf-c/issues/2189
The basic change is to make use of the fact that HDF5 automatically suppresses optional filters when an attempt is made to apply them to variable-length typed arrays.
This means that e.g. ncdump or nccopy will properly see meaningful data.
Note that if a filter is defined as HDF5 mandatory, then the corresponding variable will be suppressed and will be invisible to ncdump and nccopy.
This functionality is also propagated to NCZarr.
This PR makes some minor changes to PR https://github.com/Unidata/netcdf-c/pull/2716 as follows:
* Move the check for the filter/variable-length-type interaction from dfilter.c down into the dispatch table functions.
* Make all filters for HDF5 optional rather than mandatory so that the built-in HDF5 check for the filter/variable-length-type interaction will be invoked.
The test case for this was expanded to verify that the filters are defined, but suppressed.
re: Discussion https://github.com/Unidata/netcdf-c/discussions/2554
re: PR https://github.com/Unidata/netcdf-c/pull/2231
re: Issue https://github.com/Unidata/netcdf-c/issues/2189
After some discussion, the issue of applying filters to variables
whose type is not fixed-size was resolved as follows:
1. A call to nc_def_var_filter will ignore such filters, but will issue a log warning.
2. Loading (from an existing file) a variable whose type is not fixed-size and which has filters will cause the variable to be suppressed.
This PR enforces those rules.
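A sketch of rule 1 in action (the variable name is invented; the filter id 1 is the standard HDF5 deflate id):
````
#include "netcdf.h"
#include "netcdf_filter.h"

static void
demo(int ncid)
{
    int varid;
    unsigned int level = 5;

    /* NC_STRING is a variable-length type */
    nc_def_var(ncid, "v", NC_STRING, 0, NULL, &varid);

    /* per rule 1, this filter is ignored and a log warning is issued */
    nc_def_var_filter(ncid, varid, 1 /* deflate */, 1, &level);
}
````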
### Misc. Other changes
* Add a test case to test the vlen change.
* Make some minor clean-ups in various cmake and automake files.
* Remove unused test
re: Issue https://github.com/Unidata/netcdf-c/issues/2685
re: PR https://github.com/Unidata/netcdf-c/pull/2179
As noted in PR https://github.com/Unidata/netcdf-c/pull/2179,
the old code did not allow for reclaiming instances of types,
nor for properly copying them. That PR provided new functions
capable of reclaiming/copying instances of arbitrary types.
However, as noted by Issue https://github.com/Unidata/netcdf-c/issues/2685, using these
most general functions resulted in a significant performance
degradation, even for common cases.
This PR attempts to mitigate the cost of using the general
reclaim/copy functions in two ways.
First, the previous functions operated at the top level, using
ncid and typeid arguments. These functions were augmented
with equivalent versions that use the netcdf-c library's internal
data structures to directly access the needed information.
These new functions are used internally by the library.
The second mitigation involves optimizing the internal functions
by providing early tests for common cases. This avoids
unnecessary recursive function calls.
The overall result is a significant improvement in speed by a
factor of roughly twenty -- your mileage may vary. These
optimized functions are still not as fast as the original (more
limited) functions, but they are getting close. Additional optimizations are
possible, but the cost is a significant "uglification" of the
code, which I deemed a step too far, at least for now.
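For reference, a minimal sketch of using the public entry points whose internal counterparts were optimized here (assuming the *nc_reclaim_data* signature declared in netcdf.h; error handling abbreviated):
````
#include <stdlib.h>
#include "netcdf.h"

/* Read one instance of a scalar variable of a (possibly variable-sized)
   user-defined type, then reclaim any nested allocations it contains. */
static int
read_and_reclaim(int ncid, int varid, nc_type xtype, size_t typesize)
{
    int stat;
    void* inst = calloc(1, typesize);
    if (inst == NULL) return NC_ENOMEM;
    if ((stat = nc_get_var(ncid, varid, inst)) == NC_NOERR)
        stat = nc_reclaim_data(ncid, xtype, inst, 1); /* frees nested memory */
    free(inst);
    return stat;
}
````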
## Misc. Changes
1. Added a test case to check the proper reclamation/copy of complex types.
2. Found and fixed some places where nc_reclaim/copy should have been used.
3. Replaced, in the netcdf-c library, (almost all) occurrences of nc_reclaim_copy with calls to NC_reclaim/copy. This plus the optimizations is the primary speed-up mechanism.
4. In DAP4, the metadata is held in a substrate in-memory file; this required some changes so that the reclaim/copy code accessed that substrate dispatcher rather than the DAP4 dispatcher.
5. Re-factored and isolated the code that computes if a type is (transitively) variable-sized or not.
6. Cleaned up the reclamation code in ncgen; adding the use of nc_reclaim exposed some memory problems.