re: https://github.com/Unidata/netcdf-c/issues/2117
re: https://github.com/Unidata/netcdf-c/issues/2119
* Modify libsrc to allow byte-range reading of netcdf-3 files in private S3 buckets; this required using the aws sdk. Also add a test case.
The AWS SDK can sometimes cause problems if the Aws::ShutdownAPI function is not called. So add optional atexit() support to ensure it is called. This is disabled for Windows.
* Add documentation to nczarr.md on how to build and use the aws sdk under windows. Currently it builds, but testing fails.
* Switch testing from stratus to the Unidata bucket on S3.
* Improve support for the s3: url protocol.
* Add an S3-specific utility code file: ds3util.c
* Modify NC_infermodel to attempt to read the magic number of byte-ranged files in S3.
## Misc.
* Move and rename the core S3 SDK wrapper code (libnczarr/zs3sdk.cpp) to libdispatch since it is now used in libsrc as well as libnczarr.
* Add calls to nc_finalize in the utilities in case atexit is disabled.
* Add the header-only JSON parser to the distribution rather than as a built source.
If the `val` passed to `findPrimeGreaterThan` is greater than the largest value (not the sentinel) in the `NC_primes` table, then the routine falls into an infinite loop. It was modified to call an external routine that brute-forces the search for a prime larger than the value in this case.
The brute-force routine uses the primes in the `NC_primes` table in its prime test, so it will fail if given a `value > 180503 * 180503`. The `isPrime` function could be rewritten to avoid this, but the assumption is that this won't happen for the foreseeable future. If it does happen, `isPrime` will report that any value larger than this is prime...
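A minimal sketch of what the brute-force fallback can look like; the helper names and signatures here are illustrative, not the library's actual API, and the trial division deliberately reuses the table of known primes as described above.
````
#include <stddef.h>

/* Sketch: trial division using the table of known primes; as noted above this
 * is only valid for values up to 180503 * 180503. */
static int is_prime(unsigned long n, const unsigned long* primes, size_t nprimes)
{
    size_t i;
    if(n < 2) return 0;
    for(i = 0; i < nprimes && primes[i] * primes[i] <= n; i++) {
        if(n % primes[i] == 0) return 0;
    }
    return 1;
}

/* Brute-force fallback: test successive candidates until a prime is found. */
static unsigned long brute_force_prime_greater_than(unsigned long val,
                                                    const unsigned long* primes,
                                                    size_t nprimes)
{
    unsigned long candidate = val + 1;
    while(!is_prime(candidate, primes, nprimes)) candidate++;
    return candidate;
}
````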
## Examine and fix ezxml errors
re: Issue https://github.com/Unidata/netcdf-c/issues/2119
Multiple security issues were found in ezxml (see above Issue).
* CVE-2021-31598
* CVE-2021-31348 / CVE-2021-31347
* CVE-2021-31229
* CVE-2021-30485
* CVE-2021-26222
* CVE-2021-26221
* CVE-2021-26220
* CVE-2019-20202
* CVE-2019-20201
* CVE-2019-20200
* CVE-2019-20199
* CVE-2019-20198
* CVE-2019-20007
* CVE-2019-20006
* CVE-2019-20005
In addition, moved ezxml to libdispatch.
## Examine and fix selected oss-fuzz detected errors
Note that most of these errors are in the libsrc .m4 generated
code, so fixing them is difficult. It would be nice if we could tell
oss-fuzz to skip those files. They are old and crufty and
probably need a complete refactor.
Issue|Status
-----|------
35382|Fixed; old bug
35398|Closed by OSS-Fuzz
35442|Guarantee alloc > 0 or error; Old bug
35721|Assert failure; ok
35992|Fixed; old bug
36038|Fixed; old bug
36129|Unfixed; old bug
36229|Fixed by adding assert; old bug
37476|Unfixed; old bug
37824|Assert Failure; ok
38300|Closed by OSS-Fuzz
38537|Unfixed; old bug
38658|Unfixed; old bug
38699|Fixed maybe; old bug
38772|Nature of error is unclear, suspect that it results from using too large a type.
39248|Need more information
39394|Unfixed; old bug
## S3 Related Fixes
* Add comprehensive support for specifying AWS profiles to provide access credentials.
* Parse the files "~/.aws/config" and "~/.aws/credentials" to provide credentials for the HDF5 ROS3 driver and to locate the default region.
* Add a function to obtain the currently active S3 credentials. The search rules are defined in docs/nczarr.md.
* Provide documentation for the new features.
* Modify the struct NCauth (in include/ncauth.h) to replace specific S3 credentials with a profile name.
* Add a unit test to test the operation of profile and credentials management.
* Add support for URLs of the form "s3://<bucket>/<key>"; this requires obtaining a default region.
* Allow the specification of profile and/or region in a URL of the form "#mode=nczarr,...&aws.region=...&aws.profile=..."
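For illustration, such a URL might look like the following (the bucket, key, region, and profile names here are hypothetical):
````
https://s3.us-east-1.amazonaws.com/mybucket/datasets/sample#mode=nczarr,s3&aws.region=us-east-1&aws.profile=unidata
````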
## Misc. Fixes
* Move the ezxml code to libdispatch so that it can be used both by DAP4 and nczarr.
* Modify nclist to provide a deep clone operation.
* Modify ncuri to provide a deep clone operation.
* Modify the .rc file format to allow the specification of a path to be tested when looking for an entry in the .rc file.
* Ensure that the NC_rcload function is called.
* Modify nchttp to support setting request headers.
Filter support has three goals:
1. Use the existing HDF5 filter implementations,
2. Allow filter metadata to be stored in the NumCodecs metadata format used by Zarr,
3. Allow filters to be used even when HDF5 is disabled
Detailed usage directions are defined in docs/filters.md.
For now, the existing filter API is left in place. So filters
are defined using ''nc_def_var_filter'' in the HDF5 style,
where the id and parameters are unsigned integers.
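For example, attaching the HDF5 deflate filter (id 1) at level 5 to an existing variable looks roughly like this; the snippet is a sketch only, with error handling omitted:
````
#include <netcdf.h>
#include <netcdf_filter.h>

/* Sketch: associate HDF5 filter id 1 (deflate) with level 5 on a variable
 * that has already been defined in a netCDF-4/NCZarr file. */
static int set_deflate_filter(int ncid, int varid)
{
    unsigned int level = 5;
    return nc_def_var_filter(ncid, varid, 1 /* H5Z_FILTER_DEFLATE */, 1, &level);
}
````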
This is a big change since filters affect many parts of the code.
In the following, the terms "compressor", "filter", and "codec" are generally
used synonymously.
### Filter-Related Changes:
* In order to support dynamic loading of shared filter libraries, a new library was added in the libncpoco directory; it helps to isolate dynamic loading across multiple platforms.
* Provide a json parsing library for use by plugins; this is created by merging libdispatch/ncjson.c with include/ncjson.h.
* Add a new _Codecs attribute to allow clients to see what codecs are being used; let ncdump -s print it out.
* Provide special headers to help support compilation of HDF5 filters when HDF5 is not enabled: netcdf_filter_hdf5_build.h and netcdf_filter_build.h.
* Add a number of new tests to test the new nczarr filters.
* Let ncgen parse _Codecs attribute, although it is ignored.
### Plugin directory changes:
* Add support for the Blosc compressor; this is essential because it is the most common compressor used in Zarr datasets. This also necessitated adding a CMake FindBlosc.cmake file
* Add NCZarr support for the big-four filters provided by HDF5: shuffle, fletcher32, deflate (zlib), and szip
* Add a Codec defaulter (see docs/filters.md) for the big four filters.
* Make plugins work with Windows by properly adding __declspec declarations.
### Misc. Non-Filter Changes
* Replace most uses of USE_NETCDF4 (deprecated) with USE_HDF5.
* Improve support for caching
* More fixes for path conversion code
* Fix misc. memory leaks
* Add new utility -- ncdump/ncpathcvt -- that does more or less the same thing as cygpath.
* Add a number of new tests to test the non-filter fixes.
* Update the parsers
* Convert most instances of '#ifdef _MSC_VER' to '#ifdef _WIN32'
re: Issue https://github.com/Unidata/netcdf-c/issues/2096
The methods nc_set_var_chunk_cache_ints and nc_def_var_chunking_ints
are Fortran entry points for accessing the cache. They are not defined
if netcdf-c is built with --disable-hdf5.
Fix is to create dummy versions that do nothing and return NC_NOERR
when invoked. These dummy versions are defined when USE_HDF5 is false.
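A minimal sketch of such a dummy, assuming the all-int Fortran-oriented signature described above (the real stubs live in the library source and may differ in detail):
````
#include <netcdf.h>

#ifndef USE_HDF5
/* No-op stand-in when HDF5 support is disabled: accept the arguments and
 * report success so that Fortran callers link and run unchanged. */
int nc_set_var_chunk_cache_ints(int ncid, int varid, int size,
                                int nelems, int preemption)
{
    return NC_NOERR;
}
#endif
````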
In VTK, there are some files which require the NC3 implementation but
no longer open under 4.8.0 (they worked under 4.7.4). The code checks to
make sure that certain formats were *not* requested, even though it is entirely
reasonable that support may be required for other files.
This partially reverts changes made in
59e04ae071, a massive commit that
adds Zarr support but does not explain why this specific change was made.
re: Issue https://github.com/Unidata/netcdf-c/issues/2060
The path conversion code forgot to consider the case of
windows network paths of the form \\svc\x\y...
I have added support for it, but I can't really test it
since I do not have access to a network drive.
GCC warns if the length parameter to `strncpy` is computed from the
source, since it is actually the destination size that is relevant here. Since
these allocations are all made with the right amount of space, other string
functions may be used instead.
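A hedged illustration of the pattern (the buffer names are hypothetical, not the library's): when the destination is already allocated to fit the source, a plain copy avoids the warning and the missing NUL terminator.
````
#include <stdlib.h>
#include <string.h>

/* Illustrative only; the real call sites are scattered through the library. */
static char* copy_name(const char* src)
{
    char* dst = malloc(strlen(src) + 1);   /* destination sized from the source */
    if(dst == NULL) return NULL;
    /* strncpy(dst, src, strlen(src));        bound depends on the source and drops the NUL */
    strcpy(dst, src);                      /* destination is known to be large enough */
    return dst;
}
````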
re: https://github.com/zarr-developers/zarr-specs/issues/41
After discussions with the Zarr community, it was decided to
convert to a new representation of the NCZarr meta-data extensions: version 2.
These extensions store information necessary to mapping the Zarr data model
to the netcdf-4 data model.
The basic change is to remove the NCZarr specific objects: .nczarr, .nczgroup, .nczarray, and .nczattr.
The contents of these objects is moved into the corresponding existing Zarr objects as special keys. The mapping is as follows:
* ''.nczarr'' => ''/.zgroup/_NCZARR_SUPERBLOCK_''
* ''.nczgroup'' => ''.zgroup/_NCZARR_GROUP_''
* ''.nczarray'' => ''.zarray/_NCZARR_ARRAY_''
* ''.nczattr'' => ''.zattr/_NCZARR_ATTR_''
Backward compatibility is maintained by looking for the object ''/.nczarr''
and if found, then assuming that the dataset is in the older version 1 format.
This compatibility only supports reading of such version 1 datasets.
Documentation and test cases are also added.
Misc. Other Changes:
1. The json parsing code was added to the general library instead of nczarr only (ncjson.c, ncjson.h).
2. Improved support for different platform paths by allowing conversion
to a single common path representation.
3. Add some new error codes.
4. Modify nccopy usage to mention the new chunking specification.
This is a follow-on to pull request
````https://github.com/Unidata/netcdf-c/pull/1959````,
which fixed up type scoping.
The primary changes are to _nc\_inq\_dimid()_ and to ncdump.
The _nc\_inq\_dimid()_ function is supposed to allow the name to be
an FQN, but this apparently never got implemented. So it was modified
to support FQNs.
The ncdump program is supposed to output fully qualified dimension names
in its generated CDL file under certain conditions.
Suppose ncdump has a netcdf-4 file F with variable V, and V's parent group
is G. For each dimension id D referenced by V, ncdump needs to determine
whether to print its name as a simple name or as a fully qualified name (FQN).
The algorithm is as follows:
1. Search up the tree of ancestor groups.
2. If one of those ancestor groups contains the dimid, then call it dimgrp.
3. If one of those ancestor groups contains a dim with the same name as the dimid, but with a different dimid, then record that as duplicate=true.
4. If dimgrp is defined and duplicate == false, then we do not need an fqn.
5. If dimgrp is defined and duplicate == true, then we do need an fqn to avoid incorrectly using the duplicate.
6. If dimgrp is undefined, then do a preorder breadth-first search of all the groups looking for the dimid.
7. If found, then use the fqn of the first found such dimension location.
8. If not found, then fail.
Test case ncdump/test_scope.sh was modified to test the proper
operation of ncdump and _nc\_inq\_dimid()_.
Misc. Other Changes:
* Fix nc_inq_ncid (NC4_inq_ncid actually) to return root group id if the name argument is NULL.
* Modify _ncdump/printfqn_ to print out a dimid FQN; this supports verification that the resulting .nc files were properly created.
re: e-support EOT-483791
* Add a new set of remote tests based on using the thredds-test server.
* Improve error reporting when server requests fail.
* Fix handling of the _NCProperties attribute
re: Issue https://github.com/Unidata/netcdf-c/issues/1999
NCclosedir code is incorrect. Fix.
Note that this issue crops up when using a non-VisualStudio windows build
such as Mingw, because Mingw defines dirent.h but Visual Studio does not.
re: https://github.com/Unidata/netcdf-c/issues/1996
Improve the error message, and the location at which it is reported, when reading a variable that uses a filter that is not available on the reading platform.
This requires checking the availability of the filter, recording it, and failing when any attempt is made to read or write that variable. A test case was added for this in tst_filter.sh. Also, a log level 0 message is generated giving the variable and the filter id.
Note that by design if there is no attempt to read or write the variable, then no error is reported; this means that, for example, ncdump -h will list the filter even though it is not actually available. This is important for allowing a user to see the filter details.
re: https://github.com/Unidata/netcdf-c/issues/1827
The issue is partly resolved by this PR. The proximate problem appears to be that the semantics of mkstemp in *nix differ from the semantics of _mktemp_s in Windows. I had thought they were the same but that is incorrect. The _mktemp_s function will only produce 26 different files and so the netcdf temp file code will fail after about that many iterations.
So, to solve this, I created my own version of mkstemp for windows that uses a random number generator. This appears to solve the reported issue. I also added the testcase ncdap_test/test_manyurls but made it conditional on --enable-dap-long-tests because it is very slow.
I did note that the provided test program now fails after some 800 iterations with a libcurl error claiming it cannot resolve the host name. My belief is that the library is just running out of resources at this point: too many open curl handles or some such. I doubt if this failure is fixable.
So bottom line is that it is really important to do nc_close when you are finished with a file.
Misc. Other Changes:
1. I took the opportunity to clean up some bad string hacks in the code. Specifically
* change all uses of strncat to strlcat
* remove old string hacks: occoncat and occopycat
2. Add a check to see if test.opendap.org is running and, if not, skip the test
3. Make CYGWIN use TEMP environment variable
Re: https://github.com/zarr-developers/zarr-python/pull/716
The Zarr version 2 spec has been extended to include the ability
to choose the dimension separator in chunk name keys. The legal
separators have been extended from {'.'} to {'.', '/'}. So now it
is possible to use a key like "0/1/2/0" for chunk names.
This PR implements this for NCZarr. The V2 spec now says that
this separator can be set on a per-variable basis. For now, I
have chosen to allow this be set only globally by adding a key
named "ZARR.DIMENSION_SEPARATOR=<char>" in the
.daprc/.dodsrc/ncrc file. Currently, the only legal separator
characters are '.' (the default) and '/'. On writing, this key
will only be written if its value is different than the default.
This change caused problems because supporting a separator of '/'
is difficult to parse when keys/paths use '/' as the path separator.
A test case was added for this.
Additionally, make nczarr be enabled by default. This required
some additional changes so that if zip and/or AWS S3 sdk are unavailable,
then they are disabled for NCZarr.
In addition the following unrelated changes were made.
1. Tested that pure-zarr mode could read an nczarr formatted store.
1. The .rc file handling now merges all known .rc files (.ncrc,.daprc, and .dodsrc) in that order and using those in HOME first, then in current directory. For duplicate entries, the later ones override the earlier ones. This change is to remove some of the conflicts inherent in the current .rc file load process. A set of test cases was also added.
1. Re-order tests in configure.ac and CMakeLists.txt so that if libcurl
is not found then the other options that depend upon it properly
are disabled.
1. I decided that xarray support should be enabled by default for pure
zarr. In order to allow disabling, I added a new mode flag "noxarray".
1. Certain test in nczarr_test depend on use of .dodsrc. In order for these
to work when testing in parallel, some inter-test dependencies needed to
be added.
1. Improved authorization testing to use changes in thredds.ucar.edu
re: https://github.com/Unidata/netcdf-c/issues/1988
There was an issue with certain shell programs (bash notably).
For certain platforms and when given a url that had an escaped
'#' character (e.g. \\#) bash would not remove the backslash. So I
had to add a hack for this. Unfortunately I overdid it and it
removed all '\' characters. This is ok for non-windows platforms,
but obviously fails for windows.
The fix is this.
1. In a utility program (ncgen, ncdump, nccopy, etc) there is probably a call (or calls) to NC_backslashUnescape(xxx) where xxx is a path argument from the command line.
2. Replace each such call with NC_shellUnescape(xxx).
The NC_shellUnescape function was added; it searches only for occurrences of "\#" and replaces them with "#".
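A minimal sketch of that behavior, using a hypothetical local name and signature (the library's actual declaration may differ):
````
#include <stdlib.h>
#include <string.h>

/* Sketch: copy s, replacing every "\#" with "#" and leaving every other
 * backslash untouched (unlike the older NC_backslashUnescape behavior). */
static char* shell_unescape(const char* s)
{
    char* result = malloc(strlen(s) + 1);
    char* q = result;
    const char* p;
    if(result == NULL) return NULL;
    for(p = s; *p != '\0'; p++) {
        if(p[0] == '\\' && p[1] == '#') { *q++ = '#'; p++; }
        else *q++ = *p;
    }
    *q = '\0';
    return result;
}
````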
Interoperability fixes: we were given a Zarr format dataset
stored as a directory+file tree. This dataset uses the XArray
conventions and was generated by some non-Unidata Zarr implementation.
In attempting to process it with NCZarr, several interoperability
problems were discovered and fixed. This gives us more confidence
that NCZarr -- using pure zarr -- can interoperate with other
Zarr implementations.
Specific changes:
* Add test nczarr_test/run_interop.sh
* Support attributes with single value not enclosed in JSON array tags.
* Add mode inferencing and use it in nczarr_test/run_purezarr.sh
* Reduce size of tst_err_enddef.nc because it is more than 3 GB.
re: Github issue https://github.com/Unidata/netcdf-c/issues/1956
The function NC_compare_nc_types in libdispatch/dcopy.c uses an
incorrect algorithm to search for types. The core of this is the
function NC_rec_find_nc_type in libdispatch/dcopy.c. Currently
it searches the current group and its subtree.
Additionally, the function NC4_inq_typeid in libsrc4/nc4internal.c
has been extended to handle fully qualified names. It was originally
designed to do this, but for some reason never completed.
The NC_rec_find_nc_type algorithm has been altered to match the
algorithm used by NC4_inq_typeid. It operates as follows.
Given a file F, group G, and a type T, it searches file F2, group
G2, for another type T2 that is equivalent to T.
The search order is as follows.
1. Search G2 for a type T2 equivalent to T.
2. Search upwards in the ancestor groups of G2 for a type T2 equivalent to T.
3. Search the complete group tree of F2 in pre-order, breadth-first order to locate T2 equivalent to T.
Also add a test case to validate algorithm: ncdump/test_scope.sh.
Note that this change may cause compatibility problems, though that is
unlikely because having two different but equivalent type declarations in
one dataset is unlikely.
The netcdf-c code has to deal with a variety of platforms:
Windows, OSX, Linux, Cygwin, MSYS, etc. These platforms differ
significantly in the kind of file paths that they accept. So in
order to handle this, I have created a set of replacements for
the most common file system operations such as _open_ or _fopen_
or _access_ to manage the file path differences correctly.
A more limited version of this idea was already implemented via
the ncwinpath.h and dwinpath.c code. So this can be viewed as a
replacement for that code. In many cases, the only
change that was required was to replace '#include <ncwinpath.h>'
with '#include <ncpathmgt.h>' and then replace file operation
calls with the NCxxx equivalent from ncpathmgr.h. Note that
recently, the ncwinpath.h was renamed ncpathmgmt.h, so this pull
request should not require dealing with winpath.
The heart of the change is include/ncpathmgmt.h, which provides
alternate operations such as NCfopen or NCaccess and which properly
parse and rebuild path arguments to work for the platform on which
the code is executing. This mostly matters for Windows because of the
way that it uses backslash and drive letters, as compared to *nix*.
One important feature is that the user can do string manipulations
on a file path without having to worry too much about the platform
because the path management code will properly handle most mixed cases.
So one can for example concatenate a path suffix that uses forward
slashes to a Windows path and have it work correctly.
The conversion code is in libdispatch/dpathmgr.c, and the
important function there is NCpathcvt which does the proper
conversions to the local path format.
As a rule, most code should just replace their file operations with
the corresponding NCxxx ones defined in include/ncpathmgmt.h. These
NCxxx functions all call NCpathcvt on their path arguments before
executing the actual file operation.
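A small sketch of the intended usage pattern; note that the text above uses more than one spelling for the header name, so check your source tree for the exact one:
````
#include "ncpathmgmt.h"   /* replaces the old <ncwinpath.h>; header name per this PR */
#include <stdio.h>

/* Sketch: open a file through the path-managed wrapper so that Windows
 * drive letters and backslashes are handled by NCpathcvt internally. */
static FILE* open_local_file(const char* path)
{
    return NCfopen(path, "r");   /* drop-in replacement for fopen */
}
````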
In some rare cases, the client may need to directly use NCpathcvt,
but this should be avoided as much as possible. If there is a need
for supporting a new file operation not already in ncpathmgmt.h, then
use the code in dpathmgr.c as a template. Also please notify Unidata
so we can include it as a formal part of our supported operations.
Also, if you see an operation in the library that is not using the
NCxxx form, then please submit an issue so we can fix it.
Misc. Changes:
* Clean up the utf8 testing code; it is impossible to get some
tests to work under windows using shell scripts; the args do
not pass as utf8 but as some other encoding.
* Added an extra utf8 test case: test_unicode_path.sh
* Add a true test for HDF5 1.10.6 or later because as noted in
PR https://github.com/Unidata/netcdf-c/pull/1794,
HDF5 changed its Windows file path handling.
The XArray implementation that uses Zarr for storage
provides a mechanism to simulate named dimensions.
It does this by adding a per-variable attribute called
_ARRAY_DIMENSIONS. This attribute contains a list of names
to be matched against the shape values of the variable.
In effect a named dimension is created with the name
_ARRAY_DIMENSIONS(i) and length shape(i) for all i
in range 0..rank(variable).
Both read and write support is provided.
This XArray support is only invoked if the mode value
of "xarray" is defined. So for example, as in this URL.
````
https://s3.us-west-1.amazonaws.com/bucket/dataset#mode=nczarr,xarray,s3
````
Note that the "xarray" mode flag also implies mode flag "zarr", so the above
is equivalent to this URL.
````
https://s3.us-west-1.amazonaws.com/bucket/dataset#mode=nczarr,zarr,xarray,s3
````
The primary change to implement this was to unify the handling
of dimension references in libnczarr/zsync.
A test for this and other pure-zarr features was added as
nczarr_test/run_purezarr.sh
Other changes:
* Make sure distcheck leaves no files around.
* Change the special attribute flag DIMSCALEFLAG to HIDDENATTRFLAG
to support the xarray attribute.
* Annotate the zmap implementations with feature flags such as
WRITEONCE (for zip files).
re: Issue
The netcdf dispatch table version was defined in several places.
Modify to only require defining it in CMakeLists.txt and configure.ac.
Fix entailed the following changes:
* Up the NC_DISPATCH_VERSION from 2 to 3 in configure.ac and CMakeLists.txt
* Create include/netcdf_dispatch.h.in and use it to configure include/netcdf_dispatch.h
* For CMAKE, make it search CMAKE_CURRENT_BINARY_DIR so code can locate the configured netcdf_dispatch.h
* Add entry to config.h.cmake.in for NC_DISPATCH_VERSION
* Move NCerror from include/ncdispatch.h to libdap2/nccommon.h
* Fix an API problem re nchttp.h
* Fix a conversion warning in libdispatch/dinfermodel.c
The primary change is to support the use of a zip file as a
storage format. Simultaneously, the .nz4 support is made obsolete.
Use of zip requires the libzip support library, so a number of
changes to the build files (Makefile.am, CMakeLists.txt) are
necessary to locate and incorporate libzip. The nczarr_test
tests are also changed to add zip testing.
Other changes:
* Make sure distcheck leaves no files around.
* Add some functions to netcdf_aux to export some functions of libnetcdf.
* Add a new error NC_EFOUND as the complement of NC_EEMPTY.
* Add tracing support to nclog and use it in libnczarr.
* Modify the zmap interface to support the writeonce semantics of zip.
* Create a new s3util.c to support a variety of S3 auxiliary functions.
* EXTERNL'ize a number of functions so they can be used in s3util.
* Add support for the S3 ListObjects CommonPrefixes mechanism
to improve search.
* Add experimental support for running nczarr X s3 tests against
the actual Amazon S3 cloud.
* Replace wholevar with more useful wholechunk optimization
* Add optimization to read multiple values at one time
* Replace NCDEFAULT_get/put_vars with native nczarr versions.
* Clarify chunk projection computations
* zdebdispatch.h
* Add more chunking test cases and re-enable run_chunkcases
* If !szip, then suppress deflate interference test
* Make H5Znoop(1) filter produce more information
* cleanup bzlib.c API
re: https://github.com/Unidata/netcdf-c/issues/1923
re: https://github.com/Unidata/netcdf-c/issues/1921
The issue was raised about the order of returned filter ids
for nc_inq_var_filter_ids() when creating a file as opposed
to later reading the file.
For creation, the order is the same as the order in which the
calls to nc_def_var_filter() occur.
However, after the file is closed and then reopened for reading,
the question was raised if the returned order is the same or the reverse.
In fact the order is the same in both cases.
This PR extends the existing filter order testcase to check the create
versus read orders. This also required changing the H5Znoop(1) filters
in the plugins directory.
Misc. Unrelated Changes
1. fix calls to fdopen under windows
2. Temporarily suppress the nczarr_test/run_chunkcases test
since it seems to be causing problems with github actions.
If the user is opening a existing file for appending (NC_WRITE) in parallel and the file is in CDF5 format, the `NC_interpret_magic_number()` routine clears the `model->impl` setting of `NC_FORMATX_PNETCDF` which was set in `NC_omodeinfer` (See lines following the `done:` label in that routine which specifically set the `impl` if `useparallel` is true.)
This setting then gets overwritten when `NC_interpret_magic_number` is called, which sets `model->impl` back to `NC_FORMATX_NC3`. This can (and did) cause problems with parallel output, since the `NC3` format does not correctly handle parallel writing but `PNETCDF` does.
Not sure if this is the best place for the test, but it did fix the parallel write issues I was seeing...
If you need more details on what is happening, let me know. But a restatement at a higher level is that I was calling `nc_open_par` with `NC_WRITE` and `NC_64BIT_DATA` mode and the existing file has `CDF5` for the magic number. However, the dispatcher was being set to `NC3_dispatch_table` instead of `NCP_dispatch_table` which is the dispatcher which had been chosen for the original creation of the file being appended to.
I was then getting zeroes in the data being written to the vars since NC3 wasn't correctly handling multiple MPI ranks writing to different parts of the same variable...
Primary Fixes:
* Add a whole variable optimization -- used in the rare case that nc_get/put_vara covers the whole of a variable and the variable has a single chunk.
* Fix chunking error when stride causes whole chunks to be skipped.
* Fix some memory leaks
* Add test cases
* Add one performance test to nczarr_test/. This uses the timer utils from unit_test: timer_utils.[ch].
* Move ncdumpchunks utility from ncdump to nczarr_test
Misc. Other Changes:
* Make check for aws libraries conditional on --enable-nczarr-s3
* Remove all but one bm tests from nczarr_test until they are working.
* Remove another dependency on HDF5 from supposedly non-HDF5 specific code; specifically hdf5_log_hdf5.
* Make the BAIL2 macro be hdf5 specific and replace elsewhere with an HDF5 independent equivalent.
* Move hdf5cache.c to libsrc4/nc4cache.c because it is used by nczarr.
* Modify unit_tests so that some of them are run even if using Windows.
* Misc. small bug fixes, refactors, and memory-leak fixes.
* Rename some conflicting tests for cmake.
* Attempted to make nc_perf work with cmake and failed.
Re: GH Issue https://github.com/Unidata/netcdf-c/issues/1900
Apparently the clock_gettime() function is not always available.
It is used in unit_test/tst_exhash.c and unit_test/tst_xcache.c.
To solve this, a number of things were changed:
* Move the timing code to a new file unit_tests/timer_utils.[ch]
* Modify the timing code to choose one of several timing methods
depending on availability. The prioritized order is as follows:
1. If Windows, use the QueryPerformanceCounter mechanism else
2. Use clock_gettime if available else
3. Use gettimeofday if available else
4. Use getrusage if available
Note that the resolution of methods 3 and 4 is coarser than that of 1 or 2.
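A condensed sketch of that selection order; the real code lives in unit_test/timer_utils.[ch], and the HAVE_* macro names below are autoconf-style assumptions rather than the exact ones used there.
````
#if defined(_WIN32)
#include <windows.h>
#elif defined(HAVE_CLOCK_GETTIME)
#include <time.h>
#elif defined(HAVE_GETTIMEOFDAY)
#include <sys/time.h>
#else
#include <sys/time.h>
#include <sys/resource.h>
#endif

/* Sketch: return a timestamp in seconds using the best available mechanism. */
static double now_seconds(void)
{
#if defined(_WIN32)
    LARGE_INTEGER t, f;
    QueryPerformanceFrequency(&f);
    QueryPerformanceCounter(&t);
    return (double)t.QuadPart / (double)f.QuadPart;
#elif defined(HAVE_CLOCK_GETTIME)
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (double)ts.tv_sec + ts.tv_nsec / 1e9;
#elif defined(HAVE_GETTIMEOFDAY)
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (double)tv.tv_sec + tv.tv_usec / 1e6;
#else
    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);   /* CPU time only: the coarsest fallback */
    return (double)ru.ru_utime.tv_sec + ru.ru_utime.tv_usec / 1e6;
#endif
}
````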
Misc. Other Changes:
* Move the test in CMakeLists.txt that disables unit tests for WIN32 to unit_test/CMakeLists.txt since some unit tests actually work under Visual Studio.
* Fix some of the unit tests to work under Visual Studio
* Fix problem with using remove() in zmap_nzf.c
* Remove some warnings about use of EXTERNL
Primary changes:
* Add an improved cache system to speed up performance.
* Fix NCZarr to properly handle scalar variables.
Misc. Related Changes:
* Added unit tests for extendible hash and for the generic cache.
* Add config parameter to set size of the NCZarr cache.
* Add initial performance tests but leave them unused.
* Add CRC64 support.
* Move location of ncdumpchunks utility from /ncgen to /ncdump.
* Refactor auth support.
Misc. Unrelated Changes:
* More cleanup of the S3 support
* Add support for S3 authentication in .rc files: HTTP.S3.ACCESSID and HTTP.S3.SECRETKEY.
* Remove the hashkey from the struct OBJHDR since it is never used.
re: https://github.com/Unidata/netcdf-c/issues/1876
and: https://github.com/Unidata/netcdf-c/pull/1835
and: https://github.com/Unidata/netcdf4-python/issues/1041
The change in PR 1835 was correct with respect to using %20 instead of '+'
for encoding blanks. However, it was a mistake to assume everything was
unencoded and then to do encoding ourselves. The problem is that
different servers do different things, with Columbia being an outlier.
So, I have added a set of client controls that can at least give
the caller some control over this. The caller can append
the following fragment to his URL to control what gets encoded before
sending it to the server. The syntax is as follows:
````
https://<host>/<path>/<query>#encode=path|query|all|none
````
The possible values:
* path -- URL encode (i.e. %xx encode) as needed in the path part of the URL.
* query -- URL encode as needed in the query part of the URL.
* all -- equivalent to ````#encode=path,query````.
* none -- do not url encode any part of the URL sent to the server; not strictly necessary, so mostly for completeness.
Note that if "encode=" is used, then before it is processed, all encoding
is turned off, so that ````#encode=path```` will only encode the path
and not the query.
The default is ````#encode=query````, so the path is left untouched,
but the query is always encoded.
Internally, this required changes to pass the encode flags down into
the OC2 library.
Misc. Unrelated Changes:
* Shut up those irritating warnings from putget.m4
re: https://github.com/Unidata/netcdf-c/issues/1836
Revert the internal filter code to simplify it. From the user's
point of view, the only visible changes should be:
1. The functions that convert text to filter specs have had their signature reverted and have been moved to netcdf_aux.h
2. Some filter API functions now return NC_ENOFILTER when inquiry is made about some filter.
Internally, the dispatch table has been modified to get rid of the filter_actions
entry and associated complex structures. It has been replaced with
inq_var_filter_ids and inq_var_filter_info entries and the dispatch table
version has been bumped to 3. Corresponding NOOP and NOTNC4 functions
were added to libdispatch/dnotnc4.c. Also, the filter_action entries
in dispatch tables were replaced for all dispatch code bases (HDF5, DAP2,
etc). This should only impact UDF users.
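The public counterparts of the two new entries are used roughly as follows; this is a sketch with error handling omitted, and it assumes the usual pass-NULL-to-count convention (see include/netcdf_filter.h for the definitive signatures).
````
#include <stdlib.h>
#include <netcdf.h>
#include <netcdf_filter.h>

/* Sketch: list a variable's filter ids, then query each filter's parameter count. */
static void show_filters(int ncid, int varid)
{
    size_t nfilters = 0, nparams = 0, i;
    unsigned int* ids;
    nc_inq_var_filter_ids(ncid, varid, &nfilters, NULL);      /* count first */
    if(nfilters == 0) return;
    ids = malloc(nfilters * sizeof(unsigned int));
    nc_inq_var_filter_ids(ncid, varid, &nfilters, ids);
    for(i = 0; i < nfilters; i++) {
        nc_inq_var_filter_info(ncid, varid, ids[i], &nparams, NULL);
        /* allocate nparams unsigned ints and call again to retrieve them */
    }
    free(ids);
}
````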
In the process, it became clear that the form of the filters
field in NC_VAR_INFO_T was format dependent, so I converted it to
be of type void* and pushed its management into the various dispatch
code bases. Specifically libhdf5 and libnczarr now manage the filters
field in their own way.
The auxiliary functions for parsing textual filter specifications
were moved to netcdf_aux.h and were renamed to the following:
* ncaux_h5filterspec_parse
* ncaux_h5filterspec_parselist
* ncaux_h5filterspec_free
* ncaux_h5filter_fix8
Misc. Other Changes:
1. Document NUG/filters.md updated to reflect the changes above.
2. All the old data types (structs and enums)
used by filter_actions actions were deleted.
The exception is the NC_H5_Filterspec because it is needed
by ncaux_h5filterspec_parselist.
3. Clientside filters were removed -- another enhancement
for which no-one ever asked.
4. The ability to remove filters was itself removed.
5. Some functionality needed by nczarr was moved from libhdf5
to libsrc4 e.g. nc4_find_default_chunksizes
6. All the filterx code was removed
7. ncfilter.h and nc4filter.c no longer used
Misc. Unrelated Changes:
1. The nczarr_test makefile clean was leaving some directories; so
add clean-local to take care of them.
It was found on conda-forge (which is now running curl 7.71.1) that byte-range
requests would stall. It turns out this is due to
CURLOPT_NOBODY--apparently setting this to 0 disables the HEAD request,
but does not restore downloading the body. The way to fix this is to
reset to CURLOPT_HTTPGET when done with a HEAD request.
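In libcurl terms the fix amounts to something like the following sketch (not the library's exact code):
````
#include <curl/curl.h>

/* Sketch: issue a HEAD request, then restore normal GET behavior.
 * Merely setting CURLOPT_NOBODY back to 0 does not re-enable the body download. */
static void head_then_get(CURL* curl)
{
    curl_easy_setopt(curl, CURLOPT_NOBODY, 1L);   /* HEAD request */
    curl_easy_perform(curl);
    curl_easy_setopt(curl, CURLOPT_HTTPGET, 1L);  /* re-enable downloading the body */
    curl_easy_perform(curl);
}
````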
The primary fix is to improve CMake build support.
Specific changes include:
* CMake: Provide a better solution for locating the AWS SDK
libraries; the new way is the preferred method as described in
the aws-cpp-sdk documentation.
* CMake (and Automake): allow -DENABLE_S3_SDK (default off) to suppress
looking for AWS libraries.
* CMake: add the complete set of nczarr tests
* CMake: add EXTERNL as needed to various .h files.
* Improve support for windows drive letters in paths.
* Add nczarr and s3 flags to nc-config
* For VisualStudio X nczarr, cleanup the NAN+INFINITY handling
* Convert _MSC_VER -> _WIN32 and vice versa as needed
* NCZarr - support multiple platform paths including windows, cygwin,
mingw, etc.
* NCZarr - sort the test outputs because different platforms
produce directory contents in different orders.
One big change concerns netcdf-c/CMakeLists.txt and netcdf-c/configure.ac.
In the current versions, it was the case that --disable-hdf5
disabled netcdf-4 (libsrc4). With nczarr, this can no longer
be the case because nczarr requires libsrc4 even if libhdf5
is disabled. So, I modified the above files to move the
format options (HDF5, NCZarr, HDF4, etc) to a single place
near the front of the files. Now it is the case that:
* Enabling any of the formats that require libsrc4
also does an implicit --enable-netcdf4.
* --disable-netcdf4 | --disable-netcdf-4 now becomes
an alias for --disable-hdf5.
There are probably some bugs in this change in terms of
dependencies between format options.
Problems:
* CMake S3 support is still not working for Visual Studio
* A recent issue points out that there is work to do on handling
UTF8 filenames, but that will be addressed in a separate fix.
Notes:
* Consider converting all of our includes/.h files to use EXTERNL
Support is added for accessing datasets stored in the cloud using a variant
of the Zarr protocol and storage format. This enhancement is generically
referred to as "NCZarr".
The data model supported by NCZarr is netcdf-4 minus the user-defined
types and the String type. In this sense it is similar to the CDF-5
data model.
More detailed information about enabling and using NCZarr is
described in the document NUG/nczarr.md and in a
[Unidata Developer's blog entry](https://www.unidata.ucar.edu/blogs/developer/en/entry/overview-of-zarr-support-in).
WARNING: this code has had limited testing, so do not use this version
for production work. Also, performance improvements are ongoing.
Note especially the following platform matrix of successful tests:
Platform | Build System | S3 support
--------------|--------------|-----------
Linux+gcc | Automake | yes
Linux+gcc | CMake | yes
Visual Studio | CMake | no
Additionally, and as a consequence of the addition of NCZarr,
major changes have been made to the Filter API. NOTE: NCZarr
does not yet support filters, but these changes are enablers for
that support in the future. Note that it is possible
(probable?) that there will be some accidental reversions if the
changes here did not correctly mimic the existing filter testing.
In any case, previously filter ids and parameters were of type
unsigned int. In order to support the more general zarr filter
model, this was all converted to char*. The old HDF5-specific,
unsigned int operations are still supported but they are
wrappers around the new, char* based nc_filterx_XXX functions.
This entailed at least the following changes:
1. Added the files libdispatch/dfilterx.c and include/ncfilter.h
2. Some filterx utilities have been moved to libdispatch/daux.c
3. A new entry, "filter_actions" was added to the NCDispatch table
and the version bumped.
4. An overly complex set of structs was created to support funnelling
all of the filterx operations thru a single dispatch
"filter_actions" entry.
5. Move common code to from libhdf5 to libsrc4 so that it is accessible
to nczarr.
Changes directly related to Zarr:
1. Modified CMakeLists.txt and configure.ac to support both C and C++
-- this is in support of S3 access via the aws-sdk libraries.
2. Define a size64_t type to support nczarr.
3. More reworking of libdispatch/dinfermodel.c to
support zarr and to regularize the structure of the fragments
section of a URL.
Changes not directly related to Zarr:
1. Make client-side filter registration be conditional, with default off.
2. Hack include/nc4internal.h to make some flags added by Ed be unique:
e.g. NC_CREAT, NC_INDEF, etc.
3. cleanup include/nchttp.h and libdispatch/dhttp.c.
4. Misc. changes to support compiling under Visual Studio including:
* Better testing under windows for dirent.h and opendir and closedir.
5. Misc. changes to the oc2 code to support various libcurl CURLOPT flags
and to centralize error reporting.
6. By default, suppress the vlen tests that have unfixed memory leaks; add option to enable them.
7. Make part of the nc_test/test_byterange.sh test be contingent on remotetest.unidata.ucar.edu being accessible.
Changes Left TO-DO:
1. fix provenance code, it is too HDF5 specific.
Enables ncdump -t (-i) to recognize a wider variety of time related units
and calendar names. This brings ncdump closer to what it advertises in its
man page regarding its understanding of udunits compliant time units.
re: Github issue https://github.com/Unidata/netcdf-c/issues/1713
If nc_def_var_filter or nc_def_var_deflate or nc_def_var_szip is
called multiple times with the same filter id, but possibly with
different sets of parameters, then the first invocation is
sticky and later invocations are ignored. The desired behavior
is to have the last invocation be used.
This PR implements that desired behavior, with some special
cases. If you call nc_def_var_deflate multiple times, then the
last invocation rule applies with respect to deflate. However,
the shuffle filter, if enabled, is always applied just before
applying deflate.
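For example (sketch only, error checking omitted): with this change the second call below is the one that takes effect, while shuffle, once requested, is still applied just before deflate.
````
#include <netcdf.h>

/* Sketch: define deflate twice; under the new rule the last call (level 9) wins. */
static void set_compression(int ncid, int varid)
{
    nc_def_var_deflate(ncid, varid, 1 /*shuffle*/, 1 /*deflate*/, 2);
    nc_def_var_deflate(ncid, varid, 1, 1, 9);   /* last invocation takes effect */
}
````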
Misc unrelated changes:
1. Make client-side filters be disabled by default
2. Fix the definition of uintptr_t and use in oc2 and libdap4
3. Add some test cases
4. modify filter order tests to use plugin filters rather
than client-side filters
re: https://github.com/Unidata/netcdf-c/issues/1693
1. Add functions to libdispatch/dnotnc4.c to support
dispatch table operations that should work for any
dispatch table, even if they do not do anything.
Functions such as nc_inq_var_filter.
2. Modify selected dispatch tables to utilize
the noop functions.
3. Extend nc_test/tst_formats.c to test.
This is an extension of Ed's work to do this for
chunking and deflate and szip. See PRs
https://github.com/Unidata/netcdf-c/pull/1697
and
https://github.com/Unidata/netcdf-c/pull/1692
As a side effect, elide libdispatch/dnotnc3.c since
it is no longer used.
re: https://github.com/Unidata/netcdf-c/issues/1684
re: e-support VZL-904142
Two issues:
1. As of libcurl 7.66, the semantics of CURLOPT_SSL_VERIFYHOST
changed so that the non-zero values affects certificate processing.
2. The current library was forcing the values of VERIFYPEER
and VERIFYHOST to zero instead of leaving them to the default values.
Solution was first to leave the defaults in place for VERIFYPEER and VERIFYHOST
as long as they are not set in .ocrc/.dodsrc file.
Second, the value of HTTP.SSL.VERIFYPEER or HTTP.SSL.VERIFYHOST
as set in .ocrc/.dodsrc is used to set the corresponding CURLOPT flags.
So for example, adding
> HTTP.SSL.VERIFYHOST=2
will set the value of CURLOPT_SSL_VERIFYHOST to 2, the default.
Using
> HTTP.SSL.VERIFYHOST=0
will set the value of CURLOPT_SSL_VERIFYHOST to 0, which disables it.
Similarly for VERIFYPEER.
Finally the semantics of HTTP.SSL.VALIDATE is now equivalent to
> HTTP.SSL.VERIFYPEER=1
> HTTP.SSL.VERIFYHOST=2
re: issue https://github.com/Unidata/netcdf-c/issues/1687
static functions are being used before decl and it causes
errors. Only occurs when BIG_ENDIAN is defined.
Solution is to add the forward declarations.
re: issue https://github.com/Unidata/netcdf-c/issues/1666
The code in NC_open and NC_create (in dfile.c)
was improperly testing for leading whitespace chars.
It was treating UTF-8 as whitespace.
Fix is to do tests using unsigned char.
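A sketch of the corrected test, under the assumption that the relevant code skips leading whitespace in a loop like the one below; comparing through a plain (possibly signed) char makes UTF-8 lead bytes look negative and hence like whitespace.
````
/* Sketch only: skip leading whitespace without misclassifying UTF-8 bytes. */
static const char* skip_leading_space(const char* path)
{
    const char* p = path;
    while(*p != '\0' && ((unsigned char)*p) <= ' ')  /* cast prevents sign extension */
        p++;
    return p;
}
````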
re: https://github.com/Unidata/netcdf-c/issues/1584
Support has been added for multiple filters per variable. This
affects a number of components in netcdf. The new APIs are
documented in NUG/filters.md.
The primary changes are:
* A set of new functions are provided (see __include/netcdf_filter.h__).
- Obtain a list of the filters associated with a variable
- Obtain the parameters for a specific filter.
* The existing __nc_inq_var_filter__ function now returns info
about the first defined filter.
* The utilities (ncgen, ncdump, and nccopy) now support
an extended format for specifying a sequence of filters.
The general form is __<filter>|<filter>...__.
* The ncdump **_Filter** attribute now dumps a list of all the
filters associated with a variable using the above new format.
* Filter specifications can now use a filter name instead of number
for filters known to the netcdf library, which in turn is taken
from the HDF5 filter registration page.
* New errors are defined: NC_EFILTER and NC_ENOFILTER. The latter
is returned if an attempt is made to access an unknown filter.
* Internally, the dispatch table has been extended to add a function
to handle all of the filter functions.
* New, filter-related, tests were added to nc_test4.
* A new plugin was added to the plugins directory to help with testing.
Notes:
1. The shuffle and fletcher32 filters are not part of the multifilter system.
Misc. changes:
1. A debug module was added to libhdf5 to help catch error locations.
I see that there is no way to set CURLOPT_CONNECTTIMEOUT,
but there is support for CURLOPT_TIMEOUT.
So, accept the line 'HTTP.CONNECTTIMEOUT'
in .rc file to allow user to set CURLOPT_CONNECTTIMEOUT.
Some versions of some servers are returning malformed responses.
Make the library either handle them or gracefully fail.
The three server errors "fixed" here are as follows.
1. The attribute _NCProperties sometimes has a trailing nul character
in its value. Soln is to elide the nul(s).
2. Sometimes a DAP response has no data part, only a DMR.
Soln is to detect and return an error code instead of crashing.
3. Sometimes a server returns a redirection, but our current
openmagic() function was not following the redirect. Soln
is to follow redirects.
Also because of #2, I am temporarily making --disable-dap-remote-tests
be the default.
The comment states that prefix must end in '/', but the '/' is added in the function itself, so the prefix should *not* end in '/' and the comment is incorrect.
* For URL paths, the new approach essentially centralizes all information
in the URL into the "#mode=" fragment key and uses that value
to determine the dispatcher for (most) URLs.
* The new approach has the following steps:
1. canonicalize the path if it is a URL.
2. use the mode= fragment key to determine the dispatcher
3. if dispatcher still not determined, then use the mode flags
argument to nc_open/nc_create to determine the dispatcher.
4. if the path points to something readable, attempt to read the
magic number at the front, and use that to determine the dispatcher.
this case may override all previous cases.
* Misc changes.
1. Update documentation
2. Moved some unit tests from libdispatch to unit_test directory.
3. Fixed use of wrong #ifdef macro in test_filter_reg.c
[I think this may fix a previously reported esupport query].
Partially address: https://github.com/Unidata/netcdf-c/issues/1056
Currently, some of the entries in the dispatch table
are conditional'd on USE_NETCDF4.
As a step in upgrading the dispatch table for use
with user-defined tables, we remove that conditional.
This means that all dispatch tables must implement the
netcdf-4 specific functions even if only to make them
return NC_ENOTNC4. To simplify this, a set of default
functions are defined in libdispatch/dnotnc4.c to provide this
behavior. The file libdispatch/dnotnc3.c is also relevant to
this.
The primary fix is to modify the various dispatch tables to
remove the conditional and use the functions in
libdispatch/dnotnc4.c as appropriate. In practice, all of the
existing tables are prepared to handle this, so the only
real change is to remove the conditionals.
Misc. Unrelated fixes
1. Fix some annoying warnings in ncvalidator.
Notes:
1. This has not been tested with either pnetcdf or hdf4 enabled.
When those are enabled, it is possible that there are still
some conditionals that need to be fixed.
re: github issue #1425
The 'ncdump -v' command causes a constraint to be sent
to the opendap code (in libdap2). This is a separate path
from specifying the constraint via a URL.
This separate path encoded its constraint using code independent
of and duplicative of that provided by ncuri.c and this duplicate
code did not properly encode the constraint, which might include
square brackets.
Solution chosen here was to get rid of the duplicate code and
ensure that all URL escaping is performed in the ncuribuild function
in the ncuri.c file.
Also removed the use of the NEWESCAPE conditional in ncuri.c
because it is no longer needed.
re: https://github.com/Unidata/netcdf-c/issues/1388
1. Centralize calls to curl_global_init and curl_global_cleanup
to libdispatch/ddispatch.c
2. Make the above calls if options require curl: currently
any of DAP2, DAP4, or byterange.
3. Side issue: Fix obscure bug in mmapio.c involving non-persistent mmap.
re: https://github.com/Unidata/netcdf-c/issues/1373 (partial)
* Mark some global constants as const to make them easier to track.
* Hide direct access to the ncrc_globalstate behind a function call.
* Convert dispatch tables to constants (except the user defined ones).
This has some consequences in terms of function arguments needing to be marked
as const also.
* Remove some no longer needed global fields
* Aggregate all the globals in nclog.c
* Uniformly replace nc_sizevector{0,1} with NC_coord_{zero,one}
* Uniformly replace nc_ptrdiffvector1 with NC_stride_one
* Remove some obsolete code
Priority: Low
re: issue https://github.com/Unidata/netcdf-c/issues/1329
HDF5 has the ability to programmatically define new filters,
as opposed to using HDF5_PLUGIN_PATH env variable.
This PR adds support for that feature.
Not clear how useful this is, though.
See docs/filters.md for details.
re: esupport (DVK-211460)
Turns out it was a typo in libdispatch/dauth.c
Fix is to change:
HTTP.USERNAME -> HTTP.CREDENTIALS.USERNAME
and
HTTP.PASSWORD -> HTTP.CREDENTIALS.PASSWORD
So, fixed the following:
1. Forgot to check for NC_FORMATX_PNETCDF case
in one of the switches in NC_infermodel.
2. Accidentally turned on both the NC_64BIT_OFFSET
and the NC_64BIT_DATA mode flags.
re: issue https://github.com/Unidata/netcdf-c/issues/1278
re: issue https://github.com/Unidata/netcdf-c/issues/876
re: issue https://github.com/Unidata/netcdf-c/issues/806
* Major change to the handling of 8-byte parameters for nc_def_var_filter.
The old code was not well thought out.
* The new algorithm is documented in docs/filters.md.
* Added new utility file plugins/H5Zutil.c to support the new algorithm.
* Modified plugins/H5Zmisc.c to use the new algorithm.
* Renamed include/ncfilter.h to include/netcdf_filter.h
and made it an installed header so clients can access the
new algorithm utility.
* Fixed nc_test4/tst_filterparser.c and nc_test4/test_filter_misc.c
to use the new algorithm
* libdap4/ fixes:
* d4swap.c has an error in the endian pre-processing such
that record counts were not being swapped correctly.
* d4data.c had an error in that checksums were being computed
after endian swapping rather than before.
* ocinitialize() was never being called, so xxdr bigendian handling
was never set correctly.
* Required adding debug statements to occompile
* Found and fixed memory leak in ncdump.c
Not tested:
* HDF4
* Pnetcdf
* parallel HDF5
re: issue https://github.com/Unidata/netcdf-c/issues/1251
Assume that you have the URL to a remote dataset
which is a normal netcdf-3 or netcdf-4 file.
This PR allows the netcdf-c to read that dataset's
contents as a netcdf file using HTTP byte ranges
if the remote server supports byte-range access.
Originally, this PR was set up to access Amazon S3 objects,
but it can also access other remote datasets such as those
provided by a Thredds server via the HTTPServer access protocol.
It may also work for other kinds of servers.
Note that this is not intended as a true production
capability because, as is known, this kind of access
can be quite slow. In addition, the byte-range IO drivers
do not currently do any sort of optimization or caching.
An additional goal here is to gain some experience with
the Amazon S3 REST protocol.
This architecture and its use are documented in
the file docs/byterange.dox.
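The basic usage pattern is just an ordinary nc_open on a URL; a sketch follows (the URL is hypothetical, and the "#mode=bytes" fragment shown here is how the byte-range driver is selected in this design):
````
#include <netcdf.h>

/* Sketch: open a remote netcdf-3 or netcdf-4 file over HTTP byte ranges. */
static int open_remote(int* ncidp)
{
    return nc_open("https://example.com/data/sample.nc#mode=bytes", NC_NOWRITE, ncidp);
}
````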
There are currently two test cases:
1. nc_test/tst_s3raw.c - this does a simple open, check format, close cycle
for a remote netcdf-3 file and a remote netcdf-4 file.
2. nc_test/test_s3raw.sh - this uses ncdump to investigate some remote
datasets.
This PR also incorporates significantly changed model inference code
(see the superseded PR https://github.com/Unidata/netcdf-c/pull/1259).
1. It centralizes the code that infers the dispatcher.
2. It adds support for byte-range URLs
Other changes:
1. NC_HDF5_finalize was not being properly called by nc_finalize().
2. Fix minor bug in ncgen3.l
3. fix memory leak in nc4info.c
4. add code to walk the .daprc triples and to replace protocol=
fragment tag with a more general mode= tag.
Final Note:
The inference code is still way too complicated. We need to move
to the validfile() model used by netcdf Java, where each
dispatcher is asked if it can process the file. This decentralizes
the inference code. This will be done after all the major new
dispatchers (PIO, Zarr, etc) have been implemented.