## Improvements to S3 Documentation
* Create a new document *quickstart_paths.md* that gives a summary of the legal path formats used by netcdf-c. This includes both file paths and URL paths.
* Modify *nczarr.md* to remove most of the S3-related text.
* Move the S3 text from *nczarr.md* to a new document *cloud.md*.
* Add some S3-related text to the *byterange.md* document.
Hopefully, this will make it easier for users to find the information they want.
## Rebuild NCZarr Testing
In order to avoid problems with running make check in parallel, two changes were made:
1. The *nczarr_test* test system was rebuilt. Now, for each test,
any generated files are kept in a test-specific directory, isolated
from all other test executions.
2. Similarly, since the S3 test bucket is shared, any generated S3 objects
are isolated using a test-specific key path.
## Other S3 Related Changes
* Add code to ensure that files created on S3 are reclaimed at end of testing.
* Use the bash *trap* command to ensure S3 cleanup occurs even if a test fails.
* Clean up the S3-related configure.ac flags, since S3 is now used in several places. One should now use the option *--enable-s3* instead of *--enable-nczarr-s3*, although the latter is kept as a deprecated alias for the former.
* Get some of the GitHub Actions yml files to work with S3; this required fixing various test scripts and adding a secret for access to the Unidata S3 bucket.
* Clean up the S3 portions of libnetcdf.settings.in, netcdf_meta.h.in, and test_common.in.
* Merge partial S3 support into dhttp.c.
* Create an experimental S3 access library, primarily for use with Windows. It is enabled using the options *--enable-s3-internal* (Automake) or *-DENABLE_S3_INTERNAL=ON* (CMake). Also add a unit test for it.
* Move some definitions from ncrc.h to ncs3sdk.h.
## Other Changes
* Provide a default implementation of strlcpy and move this and similar defaults into *dmissing.c*.
When "getopt()" is not available, various of the netcdf-c utilities
use XGetopt instead. This occurs primarily when building under Window,
so the build changes are restricted to CMake.
This PR tries to isolate XGetopt.c to the libdispatch directory
and then builds the various utilities using this cliche:
````
IF(USE_X_GETOPT)
  SET(XGETOPTSRC "${CMAKE_CURRENT_SOURCE_DIR}/../libdispatch/XGetopt.c")
ENDIF()
````
This avoids the need to copy XGetopt.c to all the directories that
use it.
NOTE: This PR should not be included in 4.9.1 since additional
DAP4 related PRs will be forthcoming.
This PR makes major changes to libdap4 and dap4_test, driven by changes to the THREDDS Data Server (TDS).
* Enable DAP4
* Clean up the test input files and the test baseline comparison files. This entails:
* Remove a multitude of unused test input and baseline data files; among them are the dap4_test directories daptestfiles, dmrtestfiles, nctestfiles, and misctestfiles.
* Define a canonical set of test input files and record in dap4_test/cdltestfiles.
* Use the cdltestfiles to generate the .nc test inputs. This set of .nc files is then moved to the d4ts (DAP4 test server) war file in the tds repository. This set then becomes the canonical set of DAP4 test sources.
* Scrape d4ts to obtain copies of the raw streams of DAP4 encoded data. The .dmr and .dap streams are then stored in dap4_test/rawtestfiles.
* Disable some remote server tests until those servers are fixed.
* Add an option to ncdump (-XF) that forces the type of the _FillValue attribute; this is primarily to simplify testing of fill mismatch.
* Minor bug fixes to ncgen.
* Changes to libdap4:
* Replace the old checksum hack with the dap4.checksum flag.
* Support the dap4.XXX controls.
* Clean up _FillValue handling, especially var-attribute type mismatches.
* Fix enum handling based on changes to netcdf-java.
* Changes to dap4_test:
* Add getopt support to various test support programs.
* Remove unneeded shell scripts.
* Add a new script: test_curlopt.sh.
re: Issue https://github.com/Unidata/netcdf-c/issues/2551
Ryan May identified the use of a common scratch file (tmp.cdl)
across multiple test shell scripts in the ncdump
and nczarr_test directories.
This sometimes causes errors because of race conditions
between those scripts.
I renamed those common files to avoid the race condition. I
also did some further checking and found some additional,
similar conflicts and fixed those. Also did some minor cleanup
of unused files.
Tests fixed:
ncdump: run_back_comp_tests.sh tst_bom.sh tst_nccopy4.sh tst_nccopy5.sh
nczarr_test: run_nccopyz.sh run_nczarr_fill.sh run_scalar.sh
re: Issue https://github.com/Unidata/netcdf-c/issues/2502
H/T Charlie Zender
* Fix NCZarr handling of the endianness value NC_ENDIAN_NATIVE. This now matches how it is handled in libhdf5.
* Fix NCZarr handling of char-typed attributes with the value "". This now matches how it is handled in libhdf5.
* Add test for various char attribute values
* Change the mapping of NC_CHAR and NC_STRING to dtype; requires changing some test files also.
* Optimize the testing for NC_ENOTBUILT in NC_open.
* Turn off debugging output that was accidentally left on.
* Fix memory leak in tst_pnetcdf.c
* Fix blosc test
re: https://github.com/Unidata/netcdf-c/issues/982
It is possible to define an enum type that has no enum constant
with value zero. However, HDF5 has a default fill value of zero
that it uses to fill all chunks. When this situation
occurs, ncdump, for example, will fail because there is no enum constant
to print for the value zero.
The solution is to create a special enum constant called "_UNDEFINED"
that has the value zero. It is only used in the case that there is
no constant in the enum that already covers zero.
A test case is added in netcdf-c/ncdump to validate this solution.
Note: the changes occur primarily in libsrc4, so they also work for NCZarr.
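A minimal sketch of a file that triggers the problem (the names, and the omission of error checking, are illustrative):
````
#include <netcdf.h>

int main(void)
{
    int ncid, dimid, varid;
    nc_type etype;
    int red = 1, green = 2;

    /* Define an enum type whose constants do not include zero. */
    nc_create("enumzero.nc", NC_NETCDF4|NC_CLOBBER, &ncid);
    nc_def_enum(ncid, NC_INT, "color_t", &etype);
    nc_insert_enum(ncid, etype, "RED", &red);
    nc_insert_enum(ncid, etype, "GREEN", &green);
    nc_def_dim(ncid, "d", 4, &dimid);
    nc_def_var(ncid, "c", etype, 1, &dimid, &varid);
    /* "c" is never written, so its chunks hold HDF5's default zero
       fill value, which matches no declared constant; ncdump now
       reports such values using the synthetic constant _UNDEFINED. */
    nc_close(ncid);
    return 0;
}
````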
re: Issue https://github.com/Unidata/netcdf-c/issues/2419
There are effectively two json subsystems in netcdf-c.
1. ncjson.[ch] in libnetcdf
2. netcdf_json.h for use by plugins, so that they can be built
without needing libnetcdf.
The netcdf_json.h file is constructed by concatenating
ncjson.h and ncjson.c. It turned out that in doing this, I was
leaving some symbols externally visible, so that if, for some
reason, a plugin was built that also linked against libnetcdf,
symbol conflicts arose.
The solution is to prefix the declarations in ncjson.[ch] with a
macro (OPTSTATIC) that can be resolved to either nothing or to
"static". Then in netcdf_json.h, it resolves to "static" and
prevents the symbol conflicts.
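A minimal sketch of the idea (the guard macro name and the abbreviated NCJparse declaration are assumptions for illustration):
````
/* Sketch: OPTSTATIC expands to "static" only when this code is
   being compiled as part of the amalgamated netcdf_json.h. */
#ifdef NETCDF_JSON_H            /* assumed guard for the amalgamation */
#define OPTSTATIC static        /* keep symbols file-local in plugins */
#else
#define OPTSTATIC               /* normal build: exported by libnetcdf */
#endif

OPTSTATIC int NCJparse(const char* text, unsigned flags, NCjson** jsonp);
````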
Note that netcdf_json.h is constructed once in
netcdf-c/include/Makefile.am with the rule named
"makepluginjson". This means that it is included in the
distribution. However, this also means that if ncjson.[ch] is
changed, then it is necessary to invoke makepluginjson
explicitly to rebuild netcdf_json.h.
re: https://github.com/Unidata/netcdf-c/issues/2337
re: https://github.com/Unidata/netcdf-c/issues/2407
Add two functions to netcdf.h that allow programs to get/set
selected entries in the internal .rc tables. This should fix
the above issues by allowing HTTP.CAINFO to be set to the
certificates directory. Note that the changes should be
performed as early as possible in the program because some of
the .rc table entries may get cached internally and changing the
entry after that caching occurs may have no effect.
The new signatures are as follows:
1. Get the value of a simple .rc entry of the form "key=value".
Note that the caller must free the returned value, which might be NULL.
````
char* nc_rc_get(const char* key);
@param key table entry key
@return the value if the .rc table has an entry of the form key=value
@return NULL if no such entry is found.
````
2. Insert/Overwrite the specified key=value pair in the .rc table.
````
int nc_rc_set(const char* key, const char* value);
@param key table entry key -- may not be NULL
@param value table entry value -- may not be NULL
@return NC_NOERR if no error
@return NC_EINVAL if error
````
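For example, a program could set the certificate directory before making any other netcdf call (the path used here is illustrative):
````
#include <stdio.h>
#include <stdlib.h>
#include <netcdf.h>

int main(void)
{
    /* Set the entry as early as possible, before it can be cached;
       the path is illustrative. */
    if(nc_rc_set("HTTP.CAINFO", "/etc/ssl/certs") != NC_NOERR)
        return 1;

    /* Read it back; the caller owns, and must free, the result. */
    char* value = nc_rc_get("HTTP.CAINFO");
    if(value != NULL) {
        printf("HTTP.CAINFO=%s\n", value);
        free(value);
    }
    return 0;
}
````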
Addendum:
re: https://github.com/Unidata/netcdf-c/issues/2407
Modify dhttp.c to use the .rc entry HTTP.CAINFO if defined.
re: https://github.com/Unidata/netcdf-c/issues/2338
re: https://github.com/Unidata/netcdf-c/issues/2294
In issue https://github.com/Unidata/netcdf-c/issues/2338,
Ed Hartnett suggested a better way to install filters to a user
defined location -- for Automake, anyway.
This PR implements that suggestion. It turns out to be more
complicated than it appears, so there are a fair number of changes,
mostly to shell scripts. Most of the change is in plugins/Makefile.am.
NOTE: this PR still does NOT address the use of HDF5_PLUGIN_PATH
as the default; this turns out to be complex when dealing with NCZarr,
so it will be addressed in a subsequent post-4.9.0 PR.
## Misc. Changes
1. Record the occurrences of incomplete codecs in libnczarr so that
they can be included in the _Codecs attribute correctly. This allows
users to see which missing filters are referenced in the Zarr file.
Primarily affects libnczarr/zfilter.[ch]. Also required creating a
new no-effect filter: H5Zunknown.c.
2. Move the unknown filter test to a separate test file.
3. Incorporates PR https://github.com/Unidata/netcdf-c/pull/2343
## Include <getopt.h> in various utilities
re: https://github.com/Unidata/netcdf-c/issues/2303
As noted, some utilities use getopt() without including
getopt.h, so include it as needed.
## Turn off run_diskless2.sh when ENABLE_PARALLEL is true
re: https://github.com/Unidata/netcdf-c/issues/2315
Ed notes that this test hangs when run in parallel. The test
attempts to create a very large in-memory file, which is
the proximate cause, but the underlying cause is unknown.
re: Discussion https://github.com/Unidata/netcdf-c/discussions/2214
The primary change is to support so-called "standard filters".
A standard filter is one that is defined by the following
netcdf-c API:
````
int nc_def_var_XXX(int ncid, int varid, size_t nparams, unsigned* params);
int nc_inq_var_XXX(int ncid, int varid, int* usefilterp, unsigned* params);
````
So for example, zstandard would be a standard filter by defining
the functions *nc_def_var_zstandard* and *nc_inq_var_zstandard*.
In order to define these functions, we need a new dispatch function:
````
int nc_inq_filter_avail(int ncid, unsigned filterid);
````
This function, combined with the existing filter API, can be used
to implement arbitrary standard filters using a simple code pattern.
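For illustration, roughly the shape such a function takes (a sketch of the pattern, not the exact library code; 307 is the registered HDF5 filter id for bzip2):
````
#include <netcdf.h>
#include <netcdf_filter.h>

#ifndef H5Z_FILTER_BZIP2
#define H5Z_FILTER_BZIP2 307 /* registered HDF5 filter id for bzip2 */
#endif

/* Sketch of the standard-filter pattern for bzip2. */
int nc_def_var_bzip2(int ncid, int varid, int level)
{
    unsigned params[1];
    int stat = nc_inq_filter_avail(ncid, H5Z_FILTER_BZIP2);
    if(stat != NC_NOERR) return stat; /* filter not available */
    params[0] = (unsigned)level;
    return nc_def_var_filter(ncid, varid, H5Z_FILTER_BZIP2, 1, params);
}
````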
Note that I would have preferred that this function return a list
of all available filters, but HDF5 does not support that functionality.
So this PR implements the dispatch function and
the following standard filters:
+ bzip2
+ zstandard
+ blosc
Specific test cases are also provided for HDF5 and NCZarr.
Over time, other specific standard filters will be defined.
## Primary Changes
* Add nc_inq_filter_avail() to netcdf-c API.
* Add standard filter implementations to test use of *nc_inq_filter_avail*.
* Bump the dispatch table version number and add the new function to all
the relevant dispatch tables (libsrc, libsrcp, etc.).
* Create a program to invoke nc_inq_filter_avail so that it is accessible
to shell scripts.
* Clean up szip support so that szip works properly
when HDF5 is disabled. This involves detecting
libsz separately from testing whether HDF5 supports szip.
* Integrate shuffle and fletcher32 into the existing
filter API. This means that, for example, nc_def_var_fletcher32
is now a wrapper around nc_def_var_filter (see the sketch after this list).
* Extend the Codec defaulting to allow multiple default shared libraries.
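A simplified sketch of that wrapper idea (3 is the predefined HDF5 filter id for fletcher32; the real code handles more cases):
````
#include <netcdf.h>
#include <netcdf_filter.h>

#ifndef H5Z_FILTER_FLETCHER32
#define H5Z_FILTER_FLETCHER32 3 /* predefined HDF5 filter id */
#endif

/* Simplified sketch: the legacy call reduces to a filter-API call. */
int nc_def_var_fletcher32(int ncid, int varid, int fletcher32)
{
    if(!fletcher32) return NC_NOERR; /* nothing to enable */
    return nc_def_var_filter(ncid, varid, H5Z_FILTER_FLETCHER32, 0, NULL);
}
````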
## Misc. Changes
* Modify configure.ac/CMakeLists.txt to look for the relevant
libraries implementing standard filters.
* Modify libnetcdf.settings to list available standard filters
(including deflate and szip).
* Add CMake test modules to locate libbz2 and libzstd.
* Cleanup the HDF5 memory manager function use in the plugins.
* Remove the unused file include/ncfilter.h.
* Remove tests for the HDF5 memory operations, e.g. H5allocate_memory.
* Add a flag to ncdump to force the use of _Filter instead of _Deflate,
_Shuffle, or _Fletcher32. Used for testing.
re: https://github.com/Unidata/netcdf-c/issues/2189
Compression of a variable whose type is variable-length
fails for all current filters. This is because, at some point,
the compression buffer will contain pointers to data instead
of the actual data, and compressing pointers is meaningless.
The PR changes the behavior of nc_def_var_filter so that it will
fail with error NC_EFILTER if an attempt is made to add a filter
to a variable whose type is variable-length.
A variable is variable-length if it is of type string or VLEN
or transitively (via a compound type) contains a string or VLEN.
Also added a test case for this.
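A minimal sketch of the now-rejected case (the names, and the omission of error checking, are illustrative; 1 is the predefined HDF5 deflate filter id):
````
#include <netcdf.h>
#include <netcdf_filter.h>

int main(void)
{
    int ncid, dimid, varid, stat;
    unsigned level = 5;

    nc_create("vlenfilter.nc", NC_NETCDF4|NC_CLOBBER, &ncid);
    nc_def_dim(ncid, "d", 10, &dimid);
    nc_def_var(ncid, "names", NC_STRING, 1, &dimid, &varid);
    /* NC_STRING is a variable-length type, so this now fails. */
    stat = nc_def_var_filter(ncid, varid, 1/*deflate*/, 1, &level);
    nc_close(ncid);
    return (stat == NC_EFILTER) ? 0 : 1;
}
````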
## Misc Changes
1. Turn off a number of debugging statements
re: Issue https://github.com/Unidata/netcdf-c/issues/2190
The primary purpose of this PR is to improve the utf8 support
for Windows. This is pursuant to a change in Windows that
supports utf8 natively (almost). "Almost" means that Windows is
still utf16 internally, and the set of characters representable
by utf8 is larger than the set representable by utf16.
This leaves open the question in the Issue about handling
the Windows 1252 character set.
This required the following changes:
1. Test the Windows build and major version in order to see if
native utf8 is supported.
2. If native utf8 is supported, modify dpathmgr.c to call the 8-bit
versions of the Windows fopen() and open() functions.
3. In support of this, programs that use XGetOpt (Windows versions)
need to get the command line as utf8 and then parse it to
argc+argv as utf8. This requires using a homegrown command-line parser
named XCommandLineToArgvA.
4. Add a utility program called "acpget" that prints out the
current Windows code page and locale.
Additionally, some technical debt was cleaned up as follows:
1. Unify all the places that attempt to read all or part
of a file into the dutil.c#NC_readfile code.
2. Similarly, unify all the code that creates temp files into
the dutil.c#NC_mktmp code.
3. Convert almost all remaining calls to fopen() and open()
to NCfopen() and NCopen3(). This is to ensure that path management
is used consistently. This touches a number of files.
4. Change extern to EXTERNL as needed to make things work under Windows.