## Improvements to S3 Documentation
* Create a new document, *quickstart_paths.md*, that gives a summary of the legal path formats used by netcdf-c, including both file paths and URL paths.
* Modify *nczarr.md* to remove most of the S3 related text.
* Move the S3 text from *nczarr.md* to a new document *cloud.md*.
* Add some S3-related text to the *byterange.md* document.
Hopefully, this will make it easier for users to find the information they want.
## Rebuild NCZarr Testing
In order to avoid problems with running make check in parallel, two changes were made:
1. The *nczarr_test* test system was rebuilt. Now, for each test,
any generated files are kept in a test-specific directory, isolated
from all other test executions.
2. Similarly, since the S3 test bucket is shared, any generated S3 objects
are isolated using a test-specific key path.
## Other S3 Related Changes
* Add code to ensure that files created on S3 are reclaimed at the end of testing.
* Use the bash "trap" command to ensure that S3 cleanup occurs even if a test fails.
* Clean up the S3-related *configure.ac* flags, since S3 is now used in several places. One should now use the option *--enable-s3* instead of *--enable-nczarr-s3*, although the latter is kept as a deprecated alias for the former.
* Get some of the GitHub Actions YAML to work with S3; this required fixing various test scripts and adding a secret for access to the Unidata S3 bucket.
* Clean up the S3 portions of *libnetcdf.settings.in*, *netcdf_meta.h.in*, and *test_common.in*.
* Merge partial S3 support into dhttp.c.
* Create an experimental S3 access library, especially for use with Windows. It is enabled using the option *--enable-s3-internal* (automake) or *-DENABLE_S3_INTERNAL=ON* (CMake). Also add a unit test for it.
* Move some definitions from *ncrc.h* to *ncs3sdk.h*.
## Other Changes
* Provide a default implementation of strlcpy and move this and similar defaults into *dmissing.c*.
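For illustration, here is a minimal sketch of such a default, assuming the usual BSD strlcpy semantics (copy at most size-1 bytes, always NUL-terminate, return strlen(src)); this is not necessarily the exact code in *dmissing.c*:
````
#include <string.h>

/* Sketch of a strlcpy default: copy at most size-1 bytes of src,
   always NUL-terminate (when size > 0), and return strlen(src)
   so callers can detect truncation. */
size_t strlcpy(char* dst, const char* src, size_t size)
{
    size_t srclen = strlen(src);
    if(size > 0) {
        size_t n = (srclen >= size ? size - 1 : srclen);
        memcpy(dst, src, n);
        dst[n] = '\0';
    }
    return srclen;
}
````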
When "getopt()" is not available, various of the netcdf-c utilities
use XGetopt instead. This occurs primarily when building under Window,
so the build changes are restricted to CMake.
This PR tries to isolate XGetopt.c to the libdispatch directory
and then builds the various utilities using this cliche:
````
IF(USE_X_GETOPT)
SET(XGETOPTSRC "${CMAKE_CURRENT_SOURCE_DIR}/../libdispatch/XGetopt.c")
ENDIF()
````
This avoids the need to copy XGetopt.c to all the directories that
use it.
re: https://github.com/Unidata/netcdf-c/issues/2117
re: https://github.com/Unidata/netcdf-c/issues/2119
* Modify libsrc to allow byte-range reading of netcdf-3 files in private S3 buckets; this required using the AWS SDK. Also add a test case.
* The AWS SDK can sometimes cause problems if the Aws::ShutdownAPI function is not called, so add optional atexit() support to ensure that it is called; this is disabled for Windows. See the sketch following this list.
* Add documentation to *nczarr.md* on how to build and use the AWS SDK under Windows. Currently it builds, but testing fails.
* Switch testing from stratus to the Unidata bucket on S3.
* Improve support for the "s3:" URL protocol.
* Add an S3-specific utility code file: ds3util.c.
* Modify NC_infermodel to attempt to read the magic number of byte-range files in S3.
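The atexit() hookup mentioned above can be sketched as follows; the wrapper name NC_s3sdkfinalize is hypothetical and stands in for whatever function ultimately calls Aws::ShutdownAPI:
````
#include <stdlib.h>

/* Hypothetical wrapper name; the real code lives in the C++ SDK
   wrapper and ultimately calls Aws::ShutdownAPI. */
extern void NC_s3sdkfinalize(void);

static void s3finalizer(void)
{
    NC_s3sdkfinalize();
}

/* Invoked once during library initialization; this atexit-based
   cleanup is disabled for Windows. */
static void s3registercleanup(void)
{
#ifndef _WIN32
    (void)atexit(s3finalizer);
#endif
}
````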
## Misc.
* Move and rename the core S3 SDK wrapper code (libnczarr/zs3sdk.cpp) to libdispatch, since it is now used in libsrc as well as libnczarr.
* Add calls to nc_finalize in the utilities in case atexit is disabled.
* Add a header-only JSON parser to the distribution rather than treating it as a built source.
re: issue https://github.com/Unidata/netcdf-c/issues/1251
Assume that you have the URL to a remote dataset
which is a normal netcdf-3 or netcdf-4 file.
This PR allows netcdf-c to read that dataset's
contents as a netcdf file using HTTP byte ranges
if the remote server supports byte-range access.
Originally, this PR was set up to access Amazon S3 objects,
but it can also access other remote datasets such as those
provided by a THREDDS server via the HTTPServer access protocol.
It may also work for other kinds of servers.
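For illustration, a byte-range open looks like an ordinary nc_open call on a URL; the host and path below are placeholders, and the "#mode=bytes" fragment selects byte-range access:
````
#include <netcdf.h>

/* Sketch: open a remote netcdf-3/4 file over HTTP byte ranges.
   The URL is a placeholder; the server must honor Range requests. */
int openremote(void)
{
    int stat, ncid, format;
    const char* url = "https://remote.example.com/data/sample.nc#mode=bytes";
    if((stat = nc_open(url, NC_NOWRITE, &ncid))) return stat;
    stat = nc_inq_format(ncid, &format); /* reports the inferred format */
    nc_close(ncid);
    return stat;
}
````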
Note that this is not intended as a true production
capability because, as is known, this kind of access
can be quite slow. In addition, the byte-range IO drivers
do not currently do any sort of optimization or caching.
An additional goal here is to gain some experience with
the Amazon S3 REST protocol.
This architecture and its use are documented in
the file docs/byterange.dox.
There are currently two test cases:
1. nc_test/tst_s3raw.c - this does a simple open, check format, close cycle
for a remote netcdf-3 file and a remote netcdf-4 file.
2. nc_test/test_s3raw.sh - this uses ncdump to investigate some remote
datasets.
This PR also incorporates significantly changed model inference code
(see the superseded PR https://github.com/Unidata/netcdf-c/pull/1259).
1. It centralizes the code that infers the dispatcher.
2. It adds support for byte-range URLs.
Other changes:
1. NC_HDF5_finalize was not being properly called by nc_finalize().
2. Fix minor bug in ncgen3.l
3. Fix a memory leak in nc4info.c
4. Add code to walk the .daprc triples and to replace the protocol=
fragment tag with a more general mode= tag.
Final Note:
The inference code is still way too complicated. We need to move
to the validfile() model used by netCDF-Java, where each
dispatcher is asked if it can process the file. This decentralizes
the inference code. This will be done after all the major new
dispatchers (PIO, Zarr, etc.) have been implemented.
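As a rough illustration of that model, each dispatcher would supply a predicate over the file's leading bytes; the names below are hypothetical, though the magic numbers are the standard ones:
````
#include <string.h>

/* Hypothetical per-dispatcher probe, modeled on netCDF-Java:
   inspect the leading bytes and report whether this dispatcher
   can process the file. */
typedef int (*NC_validfilefn)(const unsigned char* magic, size_t len);

static int nc3validfile(const unsigned char* magic, size_t len)
{
    /* classic files begin with "CDF" followed by a version byte */
    return (len >= 4 && memcmp(magic, "CDF", 3) == 0);
}

static int hdf5validfile(const unsigned char* magic, size_t len)
{
    static const unsigned char sig[8] =
        {0x89, 'H', 'D', 'F', '\r', '\n', 0x1a, '\n'};
    return (len >= 8 && memcmp(magic, sig, 8) == 0);
}
````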
Fix https://github.com/Unidata/netcdf-c/issues/962
1. Remove the --disable-diskless option since it is no
longer needed; similarly for CMakeLists.txt.
2. Fix nc4files.c, where BAIL and return were mixed,
leading to a situation where cleanup code was not
being invoked. This probably occurs elsewhere,
but I did not find any other instances.
m4 can emit "sync lines", which tell the toolchain "hey, this code
actually came from this other file over here". This should help prevent
anyone from accidentally editing a generated file.
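For example, a C file generated with GNU m4's -s (--synclines) option carries #line directives of roughly this form (the file name and line number here are illustrative):
````
/* ... generated code ... */
#line 42 "putget.m4"
/* the lines that follow originated at line 42 of putget.m4,
   so compiler diagnostics point back at the m4 source */
````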
Add a new function called nc_inq_format_extended that
returns more detailed format information (vis-a-vis
nc_inq_format) about an open dataset.
Note that the netcdf API will present the file as if it had
the format specified by nc_inq_format. The true file
format, however, may not even be a netcdf file; it might be
DAP, HDF4, or PNETCDF, for example. This function returns
that true file type. It also returns the effective mode for
the file.
signature: nc_inq_format_extended(int ncid, int* formatp, int* modep)
where
* ncid is the NetCDF ID from a previous call to nc_open() or
nc_create().
* formatp is a pointer to a location for returned true format.
* modep is a pointer to a location for returned mode flags.
Refer to the actual list in the file netcdf.h to see the
currently defined set.
Also added test cases (tst_formatx*).
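A minimal usage sketch (the file name is a placeholder, and error checking is abbreviated):
````
#include <stdio.h>
#include <netcdf.h>

int main(void)
{
    int ncid, format, extformat, mode;
    if(nc_open("file.nc", NC_NOWRITE, &ncid)) return 1;
    nc_inq_format(ncid, &format);                    /* API-level format */
    nc_inq_format_extended(ncid, &extformat, &mode); /* true format + mode */
    printf("format=%d extended=%d mode=0x%x\n", format, extformat, mode);
    nc_close(ncid);
    return 0;
}
````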
Modify struct NC so that it contains as little file-type-specific
info as possible. This modifies especially libsrc so that all of the
netcdf-3 data that used to be in struct NC is now kept in a separate
chunk of data pointed to by the struct NC. This makes all of the
current protocols consistent: netcdf-3, netcdf-4, and DAP.
The in-memory files can be made persistent if nc_create is called with
NC_DISKLESS|NC_WRITE flags set. Initial test case also included.
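A sketch of the persistent case described above (placeholder file name; error checks omitted):
````
#include <netcdf.h>

/* Create an in-memory (diskless) file; because NC_WRITE is combined
   with NC_DISKLESS, the contents are flushed to persist.nc at close. */
int makepersistent(void)
{
    int ncid, dimid;
    if(nc_create("persist.nc", NC_DISKLESS | NC_WRITE, &ncid)) return 1;
    nc_def_dim(ncid, "x", 10, &dimid);
    nc_enddef(ncid);
    return nc_close(ncid);
}
````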
- Modified ncio mechanism to support
multiple ncio packages; this is so we
can have posixio and memio operating
at the same time.
- Clean up a bunch of lint issues (unused variables, etc.).
- Fix NCF-157 to modify DAP code to support
partial variable retrieval.
- Fix NCF-154 to solve the problem of ncgen
improperly processing data lists for variables
of size greater than 2**18 bytes.
- Fix ncgen processing of char variables that have
multiple unlimited dimensions.
- Partly fix Jira issue: NCF-145 (vlen issues).
- Benchmark programs (nc_test4/tst_ar4_*) require arguments
and should only be invoked inside a shell
script; fixed so that they terminate cleanly
if invoked with no arguments.
- Fix the Doxygen processing so it will work
with make distcheck.
- Begin switchover to using an alternative to ncio.
- Begin support for in-memory (diskless) files.