re: Issue https://github.com/Unidata/netcdf-c/issues/2419
There are effectively two json subsystems in netcdf-c.
1. ncjson.[ch] in libnetcdf
2. netcdf_json.h for use by plugins so they can be built without needing libnetcdf.
The netcdf_json.h file is constructed by concatenating ncjson.h and ncjson.c. It turned out that this left some symbols externally visible, so that if a plugin was built and also linked against libnetcdf, symbol conflicts arose.
The solution is to prefix the declarations in ncjson.[ch] with a macro (OPTSTATIC) that resolves either to nothing or to "static". In netcdf_json.h it resolves to "static", which prevents the symbol conflicts.
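A minimal sketch of the pattern, assuming a guard macro along the lines of NETCDF_JSON_H (the macro name and the sample declaration are illustrative):
````
/* Sketch only; the guard macro name is illustrative. */
#ifdef NETCDF_JSON_H        /* defined only in the generated netcdf_json.h */
#define OPTSTATIC static    /* hide the symbols from the linker */
#else
#define OPTSTATIC           /* normal external linkage inside libnetcdf */
#endif

/* NCjson is declared in ncjson.h */
OPTSTATIC int NCJparse(const char* text, unsigned flags, NCjson** jsonp);
````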
Note that netcdf_json.h is constructed once in
netcdf-c/include/Makefile.am with the rule named
"makepluginjson". This means that it is included in the
distribution. However, this also means that if ncjson.[ch] is
changed, makepluginjson must be invoked explicitly to rebuild netcdf_json.h.
re: https://github.com/Unidata/netcdf-c/issues/2337
re: https://github.com/Unidata/netcdf-c/issues/2407
Add two functions to netcdf.h to allow programs to get/set
selected entries in the internal .rc tables. This should fix
the above issues by allowing HTTP.CAINFO to be set to the
certificates directory. Note that the changes should be
performed as early as possible in the program because some of
the .rc table entries may get cached internally and changing the
entry after that caching occurs may have no effect.
The new signatures are as follows:
1. Get the value of a simple .rc entry of the form "key=value".
Note that the caller must free the returned value, which might be NULL.
````
char* nc_rc_get(const char* key);
@param key table entry key
@return value if .rc table has entry of the form key=value
@return NULL if no such entry is found.
````
2. Insert/Overwrite the specified key=value pair in the .rc table.
````
int nc_rc_set(const char* key, const char* value);
@param key table entry key -- may not be NULL
@param value table entry value -- may not be NULL
@return NC_NOERR if no error
@return NC_EINVAL if error
````
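For illustration, a minimal sketch of how a program might use these two functions to set HTTP.CAINFO early; the certificate path here is hypothetical:
````
#include <stdio.h>
#include <stdlib.h>
#include <netcdf.h>

int main(void)
{
    /* Set the entry before any calls that might cache .rc values. */
    if (nc_rc_set("HTTP.CAINFO", "/etc/ssl/certs/ca-certificates.crt") != NC_NOERR)
        return 1;
    /* Verify; the caller must free the returned value. */
    char* value = nc_rc_get("HTTP.CAINFO");
    if (value != NULL) {
        printf("HTTP.CAINFO=%s\n", value);
        free(value);
    }
    /* ... normal netcdf-c usage follows ... */
    return 0;
}
````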
Addendum:
re: https://github.com/Unidata/netcdf-c/issues/2407
Modify dhttp.c to use the .rc entry HTTP.CAINFO if defined.
re: Discussion https://github.com/Unidata/netcdf-c/discussions/2214
The primary change is to support so-called "standard filters".
A standard filter is one that is defined by the following
netcdf-c API:
````
int nc_def_var_XXX(int ncid, int varid, size_t nparams, unsigned* params);
int nc_inq_var_XXX(int ncid, int varid, int* usefilterp, unsigned* params);
````
So for example, zstandard would be a standard filter by defining
the functions *nc_def_var_zstandard* and *nc_inq_var_zstandard*.
In order to define these functions, we need a new dispatch function:
````
int nc_inq_filter_avail(int ncid, unsigned filterid);
````
This function, combined with the existing filter API, can be used
to implement arbitrary standard filters using a simple code pattern, sketched below.
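For example, a hedged sketch of the pattern for bzip2, assuming the conventional HDF5 filter id 307 and a single compression-level parameter:
````
#include <netcdf.h>
#include <netcdf_filter.h>

#define BZIP2_ID 307u  /* conventional HDF5 filter id for bzip2 */

int
nc_def_var_bzip2(int ncid, int varid, int level)
{
    int stat;
    unsigned params[1];
    /* Fail cleanly if no bzip2 implementation is available. */
    if ((stat = nc_inq_filter_avail(ncid, BZIP2_ID)) != NC_NOERR)
        return stat;
    params[0] = (unsigned)level;
    return nc_def_var_filter(ncid, varid, BZIP2_ID, 1, params);
}
````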
Note that I would have preferred that this function return a list
of all available filters, but HDF5 does not support that functionality.
So this PR implements the dispatch function and implements
the following standard filters:
+ bzip2
+ zstandard
+ blosc
Specific test cases are also provided for HDF5 and NCZarr.
Over time, other specific standard filters will be defined.
## Primary Changes
* Add nc_inq_filter_avail() to netcdf-c API.
* Add standard filter implementations to test use of *nc_inq_filter_avail*.
* Bump the dispatch table version number and add the new entry to all the relevant
dispatch tables (libsrc, libsrcp, etc.).
* Create a program to invoke nc_inq_filter_avail so that it is accessible
to shell scripts.
* Cleanup szip support so that szip works properly
when HDF5 is disabled. This involves detecting
libsz separately from testing whether HDF5 supports szip.
* Integrate shuffle and fletcher32 into the existing
filter API. This means that, for example, nc_def_var_fletcher32
is now a wrapper around nc_def_var_filter (see the sketch after this list).
* Extend the Codec defaulting to allow multiple default shared libraries.
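To illustrate the shuffle/fletcher32 integration, a simplified sketch of the wrapper idea; the real implementation goes through the dispatch layer, and 3 is the reserved HDF5 fletcher32 filter id:
````
#include <netcdf.h>
#include <netcdf_filter.h>

/* Simplified sketch only; details of the real wrapper differ. */
int
nc_def_var_fletcher32(int ncid, int varid, int fletcher32)
{
    if (!fletcher32)
        return NC_NOERR; /* assumption: no-op when the checksum is disabled */
    return nc_def_var_filter(ncid, varid, 3 /*H5Z_FILTER_FLETCHER32*/, 0, NULL);
}
````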
## Misc. Changes
* Modify configure.ac/CMakeLists.txt to look for the relevant
libraries implementing standard filters.
* Modify libnetcdf.settings to list available standard filters
(including deflate and szip).
* Add CMake test modules to locate libbz2 and libzstd.
* Cleanup the HDF5 memory manager function use in the plugins.
* Remove the unused file include/ncfilter.h
* Remove tests for the HDF5 memory operations, e.g. H5allocate_memory.
* Add a flag to ncdump to force use of _Filter instead of _Deflate
or _Shuffle or _Fletcher32. Used for testing.
re: https://github.com/Unidata/netcdf-c/issues/2117
re: https://github.com/Unidata/netcdf-c/issues/2119
* Modify libsrc to allow byte-range reading of netcdf-3 files in private S3 buckets; this required using the AWS SDK. Also add a test case (see the example after this list).
* The AWS SDK can sometimes cause problems if the Aws::ShutdownAPI function is not called, so add optional atexit() support to ensure it is called. This is disabled for Windows.
* Add documentation to nczarr.md on how to build and use the AWS SDK under Windows. Currently it builds, but testing fails.
* Switch testing from stratus to the Unidata bucket on S3.
* Improve support for the s3: url protocol.
* Add an s3-specific utility code file: ds3util.c
* Modify NC_infermodel to attempt to read the magic number of byte-ranged files in S3.
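As an illustration of byte-range access (the bucket and object names are hypothetical), the "#mode=bytes" fragment selects byte-range reading:
````
#include <netcdf.h>

int main(void)
{
    int ncid;
    /* Hypothetical URL; "#mode=bytes" selects byte-range access. */
    if (nc_open("https://examplebucket.s3.us-east-1.amazonaws.com/data.nc#mode=bytes",
                NC_NOWRITE, &ncid) == NC_NOERR)
        nc_close(ncid);
    return 0;
}
````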
## Misc.
* Move and rename the core S3 SDK wrapper code (libnczarr/zs3sdk.cpp) to libdispatch, since it is now used in libsrc as well as libnczarr.
* Add calls to nc_finalize in the utilities in case atexit is disabled.
* Add a header-only json parser to the distribution rather than as a built source.
## S3 Related Fixes
* Add comprehensive support for specifying AWS profiles to provide access credentials.
* Parse the files "~/.aws/config" and "~/.aws/credentials" to provide credentials for the HDF5 ROS3 driver and to locate the default region.
* Add a function to obtain the currently active S3 credentials. The search rules are defined in docs/nczarr.md.
* Provide documentation for the new features.
* Modify the struct NCauth (in include/ncauth.h) to replace specific S3 credentials with a profile name.
* Add a unit test to test the operation of profile and credentials management.
* Add support for URLS of the form "s3://<bucket>/<key>"; this requires obtaining a default region.
* Allow the specification of a profile and/or region in a URL of the form "#mode=nczarr,...&aws.region=...&aws.profile=..." (see the example after this list).
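A hypothetical example combining these URL forms (bucket, key, region, and profile names are made up):
````
#include <netcdf.h>

int main(void)
{
    int ncid;
    /* Hypothetical s3:// URL with region and profile in the fragment. */
    if (nc_open("s3://examplebucket/datasets/file"
                "#mode=nczarr,s3&aws.region=us-east-1&aws.profile=unidata",
                NC_NOWRITE, &ncid) == NC_NOERR)
        nc_close(ncid);
    return 0;
}
````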
## Misc. Fixes
* Move the ezxml code to libdispatch so that it can be used both by DAP4 and nczarr.
* Modify nclist to provide a deep clone operation.
* Modify ncuri to provide a deep clone operation.
* Modify the .rc file format to allow the specification of a path to be tested when looking for an entry in the .rc file.
* Ensure that the NC_rcload function is called.
* Modify nchttp to support setting request headers.
re: https://github.com/zarr-developers/zarr-specs/issues/41
After discussions with the Zarr community, it was decided to
convert to a new representation of the NCZarr meta-data extensions: version 2.
These extensions store the information necessary to map the Zarr data model
to the netcdf-4 data model.
The basic change is to remove the NCZarr specific objects: .nczarr, .nczgroup, .nczarray, and .nczattr.
The contents of these objects are moved into the corresponding existing Zarr objects as special keys. The mapping is as follows:
* ''.nczarr'' => ''/.zgroup/_NCZARR_SUPERBLOCK_''
* ''.nczgroup'' => ''.zgroup/_NCZARR_GROUP_''
* ''.nczarray'' => ''.zarray/_NCZARR_ARRAY_''
* ''.nczattr'' => ''.zattr/_NCZARR_ATTR_''
Backward compatibility is maintained by looking for the object ''/.nczarr''
and if found, then assuming that the dataset is in the older version 1 format.
This compatibility only supports reading of such version 1 datasets.
Documentation and test cases are also added.
Misc. Other Changes:
1. The json parsing code (ncjson.c, ncjson.h) was added to the general library instead of being nczarr-only.
2. Improved support for different platform paths by allowing conversion
to a single common path representation.
3. Add some new error codes.
4. Modify nccopy usage to mention the new chunking specification.
The netcdf-c code has to deal with a variety of platforms:
Windows, OSX, Linux, Cygwin, MSYS, etc. These platforms differ
significantly in the kind of file paths that they accept. So in
order to handle this, I have created a set of replacements for
the most common file system operations such as _open_ or _fopen_
or _access_ to manage the file path differences correctly.
A more limited version of this idea was already implemented via
the ncwinpath.h and dwinpath.c code. So this can be viewed as a
replacement for that code. And in many cases, the only
change required was to replace '#include <ncwinpath.h>'
with '#include <ncpathmgmt.h>' and then replace file operation
calls with the NCxxx equivalents from ncpathmgmt.h. Note that
ncwinpath.h was recently renamed to ncpathmgmt.h, so this pull
request should not require dealing with winpath.
The heart of the change is include/ncpathmgmt.h, which provides
alternate operations such as NCfopen or NCaccess and which properly
parse and rebuild path arguments to work for the platform on which
the code is executing. This mostly matters for Windows because of the
way that it uses backslashes and drive letters, as compared to *nix.
One important feature is that the user can do string manipulations
on a file path without having to worry too much about the platform
because the path management code will properly handle most mixed cases.
So one can for example concatenate a path suffix that uses forward
slashes to a Windows path and have it work correctly.
The conversion code is in libdispatch/dpathmgr.c, and the
important function there is NCpathcvt which does the proper
conversions to the local path format.
As a rule, most code should just replace their file operations with
the corresponding NCxxx ones defined in include/ncpathmgmt.h. These
NCxxx functions all call NCpathcvt on their path arguments before
executing the actual file operation.
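A small sketch of the replacement pattern (the path is hypothetical; mixed separators are shown deliberately):
````
#include <stdio.h>
#include "ncpathmgmt.h"

int main(void)
{
    /* Before: FILE* f = fopen(path, "r");
     * After: the NCxxx wrapper runs NCpathcvt on the path first,
     * so mixed separators are tolerated. */
    FILE* f = NCfopen("d:/data/subdir\\file.nc", "r");
    if (f != NULL)
        fclose(f);
    return 0;
}
````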
In some rare cases, the client may need to directly use NCpathcvt,
but this should be avoided as much as possible. If there is a need
for supporting a new file operation not already in ncpathmgmt.h, then
use the code in dpathmgr.c as a template. Also please notify Unidata
so we can include it as a formal part of our supported operations.
Also, if you see an operation in the library that is not using the
NCxxx form, then please submit an issue so we can fix it.
Misc. Changes:
* Clean up the utf8 testing code; it is impossible to get some
tests to work under Windows using shell scripts because the args
are not passed as utf8 but in some other encoding.
* Added an extra utf8 test case: test_unicode_path.sh
* Add a true test for HDF5 1.10.6 or later because as noted in
PR https://github.com/Unidata/netcdf-c/pull/1794,
HDF5 changed its Windows file path handling.
re: Issue
The netcdf dispatch table version was defined in several places.
Modify to only require defining it in CMakeLists.txt and configure.ac.
Fix entailed the following changes:
* Up the NC_DISPATCH_VERSION from 2 to 3 in configure.ac and CMakeLists.txt
* Create include/netcdf_dispatch.h.in and use it to configure include/netcdf_dispatch.h (see the sketch after this list)
* For CMake, make it search CMAKE_CURRENT_BINARY_DIR so code can locate the configured netcdf_dispatch.h
* Add entry to config.h.cmake.in for NC_DISPATCH_VERSION
* Move NCerror from include/ncdispatch.h to libdap2/nccommon.h
* Fix an API problem re nchttp.h
* Fix a conversion warning in libdispatch/dinfermodel.c
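A sketch of what the configured header might contain (the exact contents are an assumption):
````
/* include/netcdf_dispatch.h.in (sketch; actual contents may differ) */
#ifndef NETCDF_DISPATCH_H
#define NETCDF_DISPATCH_H

/* Substituted from configure.ac / CMakeLists.txt at configure time. */
#define NC_DISPATCH_VERSION @NC_DISPATCH_VERSION@

#endif /* NETCDF_DISPATCH_H */
````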
It was found on conda-forge (which now runs curl 7.71.1) that byte-range
requests would stall. It turns out this is due to
CURLOPT_NOBODY: apparently setting it back to 0 disables the HEAD request
but does not restore downloading of the body. The fix is to
reset to CURLOPT_HTTPGET when done with a HEAD request.
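In outline, using standard libcurl options (the URL is hypothetical):
````
#include <curl/curl.h>

static void head_then_get(const char* url)
{
    CURL* curl = curl_easy_init();
    if (curl == NULL) return;
    curl_easy_setopt(curl, CURLOPT_URL, url);
    curl_easy_setopt(curl, CURLOPT_NOBODY, 1L);   /* HEAD request */
    curl_easy_perform(curl);
    /* Setting CURLOPT_NOBODY back to 0 does not restore body downloads;
     * explicitly switch back to GET instead. */
    curl_easy_setopt(curl, CURLOPT_HTTPGET, 1L);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
}
````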
The primary fix is to improve CMake build support.
Specific changes include:
* CMake: Provide a better solution for locating the AWS SDK
libraries; the new way is the preferred method as described in
the aws-cpp-sdk documentation.
* CMake (and Automake): allow -DENABLE_S3_SDK (default off) to suppress
looking for AWS libraries.
* CMake: add the complete set of nczarr tests
* CMake: add EXTERNL as needed to various .h files.
* Improve support for windows drive letters in paths.
* Add nczarr and s3 flags to nc-config
* For Visual Studio with NCZarr, cleanup the NAN+INFINITY handling
* Convert _MSC_VER -> _WIN32 and vice versa as needed
* NCZarr - support multiple platform paths including windows, cygwin,
mingw, etc.
* NCZarr - sort the test outputs because different platforms
produce directory contents in different orders.
One big change concerns netcdf-c/CMakeLists.txt and netcdf-c/configure.ac.
In the current versions, it was the case that --disable-hdf5
disabled netcdf-4 (libsrc4). With nczarr, this can no longer
be the case because nczarr requires libsrc4 even if libhdf5
is disabled. So, I modified the above files to move the
format options (HDF5, NCZarr, HDF4, etc) to a single place
near the front of the files. Now it is the case that:
* Enabling any of the formats that require libsrc4
also does an implicit --enable-netcdf4.
* --disable-netcdf4 | --disable-netcdf-4 is now an alias
for --disable-hdf5.
There are probably some bugs in this change in terms of
dependencies between format options.
Problems:
* CMake S3 support is still not working for Visual Studio
* A recent issue points out that there is work to do on handling
UTF8 filenames, but that will be addressed in a separate fix.
Notes:
* Consider converting all of our includes/.h files to use EXTERNL
This PR adds experimental support for accessing data in the
cloud using a variant of the Zarr protocol and storage
format. This enhancement is generically referred to as "NCZarr".
The data model supported by NCZarr is netcdf-4 minus the user-defined
types and the String type. In this sense it is similar to the CDF-5
data model.
More detailed information about enabling and using NCZarr is
described in the document NUG/nczarr.md and in a
[Unidata Developer's blog entry](https://www.unidata.ucar.edu/blogs/developer/en/entry/overview-of-zarr-support-in).
WARNING: this code has had limited testing, so do not use this version
for production work. Also, performance improvements are ongoing.
Note especially the following platform matrix of successful tests:
Platform | Build System | S3 support
--------------|--------------|-----------
Linux+gcc | Automake | yes
Linux+gcc | CMake | yes
Visual Studio | CMake | no
Additionally, and as a consequence of the addition of NCZarr,
major changes have been made to the Filter API. NOTE: NCZarr
does not yet support filters, but these changes are enablers for
that support in the future. Note that it is possible
(probable?) that there will be some accidental reversions if the
changes here did not correctly mimic the existing filter testing.
In any case, previously filter ids and parameters were of type
unsigned int. In order to support the more general zarr filter
model, this was all converted to char*. The old HDF5-specific,
unsigned int operations are still supported but they are
wrappers around the new, char* based nc_filterx_XXX functions.
This entailed at least the following changes:
1. Added the files libdispatch/dfilterx.c and include/ncfilter.h
2. Some filterx utilities have been moved to libdispatch/daux.c
3. A new entry, "filter_actions", was added to the NCDispatch table
and the version bumped.
4. An overly complex set of structs was created to support funneling
all of the filterx operations through a single dispatch
"filter_actions" entry.
5. Move common code from libhdf5 to libsrc4 so that it is accessible
to nczarr.
Changes directly related to Zarr:
1. Modified CMakeLists.txt and configure.ac to support both C and C++
-- this is in support of S3 access via the aws-sdk libraries.
2. Define a size64_t type to support nczarr.
3. More reworking of libdispatch/dinfermodel.c to
support zarr and to regularize the structure of the fragments
section of a URL.
Changes not directly related to Zarr:
1. Make client-side filter registration conditional, with default off.
2. Hack include/nc4internal.h to make some flags added by Ed be unique:
e.g. NC_CREAT, NC_INDEF, etc.
3. Cleanup include/nchttp.h and libdispatch/dhttp.c.
4. Misc. changes to support compiling under Visual Studio including:
* Better testing under windows for dirent.h and opendir and closedir.
5. Misc. changes to the oc2 code to support various libcurl CURLOPT flags
and to centralize error reporting.
6. By default, suppress the vlen tests that have unfixed memory leaks; add option to enable them.
7. Make part of the nc_test/test_byterange.sh test be contingent on remotetest.unidata.ucar.edu being accessible.
Changes Left TO-DO:
1. Fix the provenance code; it is too HDF5-specific.
Some versions of some servers are returning malformed responses.
Make the library either handle them or gracefully fail.
The three server errors "fixed" here are as follows.
1. The attribute _NCProperties sometimes has a trailing nul character
in its value. The solution is to elide the nul(s); see the sketch below.
2. Sometimes a DAP response has no data part, only a DMR.
The solution is to detect this and return an error code instead of crashing.
3. Sometimes a server returns a redirection, but our current
openmagic() function was not following the redirect. The solution
is to follow redirects.
Also because of #2, I am temporarily making --disable-dap-remote-tests
the default.
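For fix #1, a minimal sketch of the idea (the helper name is hypothetical):
````
#include <stddef.h>

/* Trim trailing NUL characters from an attribute value of known length. */
static size_t
elide_trailing_nuls(const char* value, size_t len)
{
    while (len > 0 && value[len - 1] == '\0')
        len--;
    return len;
}
````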
re: issue https://github.com/Unidata/netcdf-c/issues/1251
Assume that you have the URL to a remote dataset
which is a normal netcdf-3 or netcdf-4 file.
This PR allows the netcdf-c library to read that dataset's
contents as a netcdf file using HTTP byte ranges,
provided the remote server supports byte-range access.
Originally, this PR was set up to access Amazon S3 objects,
but it can also access other remote datasets such as those
provided by a Thredds server via the HTTPServer access protocol.
It may also work for other kinds of servers.
Note that this is not intended as a true production
capability because, as is known, this kind of access
can be quite slow. In addition, the byte-range IO drivers
do not currently do any sort of optimization or caching.
An additional goal here is to gain some experience with
the Amazon S3 REST protocol.
This architecture and its use are documented in
the file docs/byterange.dox.
There are currently two test cases:
1. nc_test/tst_s3raw.c - this does a simple open, check format, close cycle
for a remote netcdf-3 file and a remote netcdf-4 file.
2. nc_test/test_s3raw.sh - this uses ncdump to investigate some remote
datasets.
This PR also incorporates significantly changed model inference code
(see the superseded PR https://github.com/Unidata/netcdf-c/pull/1259).
1. It centralizes the code that infers the dispatcher.
2. It adds support for byte-range URLs
Other changes:
1. NC_HDF5_finalize was not being properly called by nc_finalize().
2. Fix a minor bug in ncgen3.l.
3. Fix a memory leak in nc4info.c.
4. Add code to walk the .daprc triples and replace the protocol=
fragment tag with a more general mode= tag.
Final Note:
The inference code is still way too complicated. We need to move
to the validfile() model used by netcdf Java, where each
dispatcher is asked if it can process the file. This decentralizes
the inference code. This will be done after all the major new
dispatchers (PIO, Zarr, etc) have been implemented.