re: https://github.com/Unidata/netcdf-c/issues/2338
re: https://github.com/Unidata/netcdf-c/issues/2294
In issue https://github.com/Unidata/netcdf-c/issues/2338,
Ed Hartnett suggested a better way to install filters to a user-defined
location -- for Automake, anyway.
This PR implements that suggestion. It turns out to be more
complicated than it appears, so there are a fair number of changes,
mostly to shell scripts. Most of the change is in plugins/Makefile.am.
NOTE: this PR still does NOT address the use of HDF5_PLUGIN_PATH
as the default; this turns out to be complex when dealing with NCZarr.
So this will be addressed in a subsequent post-4.9.0 PR.
## Misc. Changes
1. Record the occurrences of incomplete codecs in libnczarr so that
they can be included correctly in the _Codecs attribute. This allows
users to see which missing filters are referenced in the Zarr file.
Primarily affects libnczarr/zfilter.[ch]. Also required creating a
new no-effect filter: H5Zunknown.c.
2. Move the unknown filter test to a separate test file.
3. Incorporates PR https://github.com/Unidata/netcdf-c/pull/2343
re: https://github.com/Unidata/netcdf-c/issues/2294
Ed Hartnett suggested that the netcdf library installation process
be extended to install the standard filters into a user-specified
location. The user can then set HDF5_PLUGIN_PATH to that location.
This PR provides that capability using:
````
configure option: --with-plugin-dir=<absolute directory path>
cmake option: -DPLUGIN_INSTALL_DIR=<absolute directory path>
````
Currently, the following plugins are always installed, if
available: bzip2, zstd, blosc.
If NCZarr is enabled, then additional plugins are installed:
fletcher32, shuffle, deflate, szip.
Additionally, the necessary codec support is installed
for each of the above filters that is installed.
## Changes:
1. Cleanup handling of built-in bzip2.
2. Add documentation to docs/filters.md
3. Re-factor the NCZarr codec libraries
4. Add a test, although it can only be exercised after
the library is installed, so it cannot be used during
normal testing.
5. Cleanup use of HDF5_PLUGIN_PATH in the filter test cases.
re: Discussion https://github.com/Unidata/netcdf-c/discussions/2214
The primary change is to support so-called "standard filters".
A standard filter is one that is defined by the following
netcdf-c API:
````
int nc_def_var_XXX(int ncid, int varid, size_t nparams, unsigned* params);
int nc_inq_var_XXX(int ncid, int varid, int* usefilterp, unsigned* params);
````
So, for example, zstandard becomes a standard filter by defining
the functions *nc_def_var_zstandard* and *nc_inq_var_zstandard*.
In order to define these functions, we need a new dispatch function:
````
int nc_inq_filter_avail(int ncid, unsigned filterid);
````
This function, combined with the existing filter API, can be used
to implement arbitrary standard filters using a simple code pattern.
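For illustration, here is a minimal sketch of that pattern for zstandard, assuming the registered HDF5 filter id 32015 (the library's actual implementations are more thorough):
````
/* Sketch: check filter availability, then delegate to the
 * generic filter API. */
#include <netcdf.h>
#include <netcdf_filter.h>

#define ZSTANDARD_ID 32015 /* registered HDF5 filter id for zstd */

int
example_def_var_zstandard(int ncid, int varid, int level)
{
    int stat;
    unsigned params[1];
    /* Fail cleanly if no zstandard plugin can be found */
    if((stat = nc_inq_filter_avail(ncid, ZSTANDARD_ID))) return stat;
    params[0] = (unsigned)level;
    return nc_def_var_filter(ncid, varid, ZSTANDARD_ID, 1, params);
}
````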
Note that I would have preferred that this function return a list
of all available filters, but HDF5 does not support that functionality.
So this PR implements the dispatch function and implements
the following standard functions:
+ bzip2
+ zstandard
+ blosc
Specific test cases are also provided for HDF5 and NCZarr.
Over time, other specific standard filters will be defined.
## Primary Changes
* Add nc_inq_filter_avail() to netcdf-c API.
* Add standard filter implementations to test use of *nc_inq_filter_avail*.
* Bump the dispatch table version number and add to all the relevant
dispatch tables (libsrc, libsrcp, etc).
* Create a program to invoke nc_inq_filter_avail so that it is accessible
to shell scripts.
* Cleanup szip support so that szip works properly
  when HDF5 is disabled. This involves detecting
  libsz separately from testing whether HDF5 supports szip.
* Integrate shuffle and fletcher32 into the existing
  filter API. This means that, for example, nc_def_var_fletcher32
  is now a wrapper around nc_def_var_filter (see the sketch after this list).
* Extend the Codec defaulting to allow multiple default shared libraries.
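As an illustration of the shuffle/fletcher32 integration above, here is a sketch of such a wrapper, assuming the predefined HDF5 filter id 3 for fletcher32 (the library code differs in detail):
````
#include <netcdf.h>
#include <netcdf_filter.h>

#define FLETCHER32_ID 3 /* predefined HDF5 filter id */

/* Sketch: an nc_def_var_fletcher32-style wrapper over nc_def_var_filter */
int
example_def_var_fletcher32(int ncid, int varid, int fletcher32)
{
    if(!fletcher32) return NC_NOERR; /* nothing to turn on */
    return nc_def_var_filter(ncid, varid, FLETCHER32_ID, 0, NULL);
}
````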
## Misc. Changes
* Modify configure.ac/CMakeLists.txt to look for the relevant
libraries implementing standard filters.
* Modify libnetcdf.settings to list available standard filters
(including deflate and szip).
* Add CMake test modules to locate libbz2 and libzstd.
* Cleanup the HDF5 memory manager function use in the plugins.
* Remove the unused file include/ncfilter.h.
* Remove tests for the HDF5 memory operations, e.g. H5allocate_memory.
* Add flag to ncdump to force use of _Filter instead of _Deflate
or _Shuffle or _Fletcher32. Used for testing.
re: https://github.com/Unidata/netcdf-c/issues/2189
Compression of a variable whose type is variable-length
fails for all current filters. This is because, at some point,
the compression buffer will contain pointers to data instead
of the actual data. Compressing pointers is, of course, meaningless.
The PR changes the behavior of nc_def_var_filter so that it will
fail with error NC_EFILTER if an attempt is made to add a filter
to a variable whose type is variable-length.
A variable is variable-length if it is of type string or VLEN
or transitively (via a compound type) contains a string or VLEN.
Also added a test case for this.
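A minimal sketch of the rejected case, using deflate (HDF5 filter id 1) as the example filter:
````
#include <assert.h>
#include <netcdf.h>
#include <netcdf_filter.h>

void
example(int ncid)
{
    int dimid, varid, stat;
    unsigned level = 5;
    /* NC_STRING is variable-length, so any filter must be refused */
    nc_def_dim(ncid, "d", 10, &dimid);
    nc_def_var(ncid, "v", NC_STRING, 1, &dimid, &varid);
    stat = nc_def_var_filter(ncid, varid, 1 /*deflate*/, 1, &level);
    assert(stat == NC_EFILTER);
}
````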
## Misc Changes
1. Turn off a number of debugging statements
re: Issue https://github.com/Unidata/netcdf-c/issues/2190
The primary purpose of this PR is to improve the utf8 support
for Windows. This is pursuant to a change in Windows that
supports utf8 natively (almost). The "almost" means that Windows is
still utf16 internally, so the set of characters representable
by utf8 is larger than the set representable by utf16.
This leaves open the question in the Issue about handling
the Windows 1252 character set.
This required the following changes:
1. Test the Windows build and major version in order to see if
native utf8 is supported.
2. If native utf8 is supported, modify dpathmgr.c to call the 8-bit
versions of the Windows fopen() and open() functions.
3. In support of this, programs that use XGetOpt (Windows versions)
need to get the command line as utf8 and then parse it to
argc+argv as utf8. This requires using a homegrown command line parser
named XCommandLineToArgvA.
4. Add a utility program called "acpget" that prints out the
current Windows code page and locale.
Additionally, some technical debt was cleaned up as follows:
1. Unify all the places which attempt to read all or part
of a file into the dutil.c#NC_readfile code.
2. Similarly, unify all the code that creates temp files into
the dutil.c#NC_mktmp code.
3. Convert almost all remaining calls to fopen() and open()
to NCfopen() and NCopen3(). This is to ensure that path management
is used consistently. This touches a number of files.
4. Convert extern to EXTERNL as needed to get things to work under Windows.
re: https://github.com/Unidata/netcdf-c/issues/2177
re: https://github.com/Unidata/netcdf-c/pull/2178
Provide get/set functions to store global data alignment
information and apply it when a file is created.
The api is as follows:
````
int nc_set_alignment(int threshold, int alignment);
int nc_get_alignment(int* thresholdp, int* alignmentp);
````
If defined, then for every file created or opened after the call to
nc_set_alignment, and for every new variable added to the file, the
most recently set threshold and alignment values will be applied
to that variable.
The nc_get_alignment function returns the last values set by
nc_set_alignment. If nc_set_alignment has not been called, then
it returns the value 0 for both threshold and alignment.
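A minimal usage sketch of these semantics:
````
#include <netcdf.h>

void
example(void)
{
    int threshold, alignment;
    /* Align the data of any variable of at least 1024 bytes
     * on a 4096-byte boundary, for files created from now on. */
    nc_set_alignment(1024, 4096);
    /* ... create files and add variables here ... */
    nc_get_alignment(&threshold, &alignment); /* 1024 and 4096 */
}
````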
The alignment parameters are stored in the NCglobalstate object
(see below) for use as needed. Repeated calls to nc_set_alignment
will overwrite any existing values in NCglobalstate.
The alignment parameters are applied in libhdf5/hdf5create.c
and libhdf5/hdf5open.c
The set/get alignment functions are defined in libsrc4/nc4internal.c.
A test program was added as nc_test4/tst_alignment.c.
## Misc. Changes Unrelated to Alignment
* The NCRCglobalstate type was renamed to NCglobalstate to
indicate that it represented more general global state than
just .rc data. It was also moved to nc4internal.h. This led
to a large number of small changes: mostly renaming. The
global state management functions were moved to nc4internal.c.
* The global chunk cache variables have been moved into
NCglobalstate. As warranted, other global state will be moved
as well.
* Some misc. problems with the nczarr performance tests were corrected.
re: PR https://github.com/Unidata/netcdf-c/pull/2088
re: PR https://github.com/Unidata/netcdf-c/pull/2130
replaces: https://github.com/Unidata/netcdf-c/pull/2140
Changes:
* Add NCZarr-specific quantize functions to the dispatch table.
* Copy (modified) quantize code from libhdf5 to NCZarr
* Add quantize invocation to zvar.c
* Add support for _QuantizeBitgroomNumberOfSignificantDigits
and _QuantizeGranularBitgroomNumberOfSignificantDigits to ncgen.
* Modify nc_test4/tst_quantize.c to allow it to be used both for hdf5
and for nczarr.
* Make dap4 properly handle quantize functions in dispatch table.
* Add quantize attribute support to ncgen.
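For reference, a minimal sketch of the quantize API whose NCZarr support is added here:
````
#include <netcdf.h>

void
example(int ncid, int varid)
{
    /* Keep 3 significant digits using the BitGroom algorithm;
     * with this PR the same call works for NCZarr as well as HDF5. */
    nc_def_var_quantize(ncid, varid, NC_QUANTIZE_BITGROOM, 3);
}
````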
Other changes:
* Caught and fixed some S3 problems
* Fixed some nczarr fillvalue problems.
* Fixed some nczarr cache problems.
* Cleanup some flaws in libdispatch/dinfermodel.c
* Allow byterange requests to S3 to be readable by dinfermodel.c/check_file_type.
* Remove the libnczarr ztracedispatch code (big change).
re:
The current netcdf-c release has some problems with the mingw platform
on windows. Mostly they are path issues.
Changes to support mingw+msys2:
-------------------------------
* Enable option of looking into the windows registry to find
the mingw root path. In aid of proper path handling.
* Add mingw+msys as a specific platform in configure.ac and move testing
of the platform to the front so it is available early.
* Handle the combination of mingw and libncpoco (the dynamic loader)
  properly even though mingw does not yet support it.
* Handle the combination of mingw and plugins properly even though mingw
  does not yet support them.
* Alias pwd='pwd -W' to better handle paths in shell scripts.
* Plus a number of other minor compile irritations.
* Disallow the use of multiple nc_open's on the same file for windows
(and mingw) because windows does not seem to handle these properly.
Not sure why we did not catch this earlier.
* Add mountpoint info to dpathmgr.c to help support mingw.
* Cleanup dpathmgr conversions.
Known problems:
---------------
* I have not been able to get shared libraries to work, so
plugins/filters must be disabled.
* There is some kind of problem with libcurl that I have not solved,
so all uses of libcurl (currently DAP+Byterange) must be disabled.
Misc. other fixes:
------------------
* Cleanup the relationship between ENABLE_PLUGINS and various other flags
in CMakeLists.txt and configure.ac.
* Re-arrange the TESTDIRS order in Makefile.am.
* Add pseudo-breakpoint to nclog.[ch] for debugging.
* Improve the documentation of the path manager code in ncpathmgr.h
* Add better support for relative paths in dpathmgr.c
* Default the mode args to NCfopen to include "b" (binary) for windows.
* Add optional debugging output in various places.
* Make sure that everything builds with plugins disabled.
* Fix numerous (s)printf inconsistencies between the format spec
  and the arguments.
re: https://github.com/Unidata/netcdf-c/issues/2151
Support for non-aws appliances broke during the switch to testing
against Amazon S3.
So make the necessary changes to get non-aws appliances working correctly.
re: https://github.com/Unidata/netcdf-c/issues/2119
H/T to [Egbert Eich](https://github.com/e4t) and [Bas Couwenberg](https://github.com/sebastic) for this PR.
It is undesirable to make netcdf depend on the availability
of libxml2, but it is desirable to allow its use when available.
In order to do this, a wrapper API (include/ncxml.h) was constructed
that supports either ezxml or libxml2 as the implementation.
Additionally, the xml support code was moved to a new directory
netcdf-c/libncxml.
Primary changes:
* Create a new sub-directory named netcdf-c/libncxml to hold all the xml implementation code.
* Move ezxml.c and ezxml.h to libncxml
* Create a wrapper API -- include/ncxml.h
* Create an implementation, ncxml_ezxml.c to support use of ezxml.
* Create an implementation, ncxml_xml2.c to support use of libxml2.
* Add a check for libxml2 in configure.ac and CMakeLists.txt
* Modify libdap to use the wrapper API instead of ezxml directly.
Misc. Other Changes:
* Change include/netcdf_json.h from built source to be part of the distribution.
re: https://github.com/Unidata/netcdf-c/issues/2117
re: https://github.com/Unidata/netcdf-c/issues/2119
* Modify libsrc to allow byte-range reading of netcdf-3 files in private S3 buckets; this required using the aws sdk. Also add a test case.
* The aws sdk can sometimes cause problems if the Aws::ShutdownAPI function is not called. So add optional atexit() support to ensure it is called. This is disabled for Windows.
* Add documentation to nczarr.md on how to build and use the aws sdk under windows. Currently it builds, but testing fails.
* Switch testing from stratus to the Unidata bucket on S3.
* Improve support for the s3: url protocol.
* Add an s3-specific utility code file: ds3util.c
* Modify NC_infermodel to attempt to read the magic number of byte-ranged files in S3.
## Misc.
* Move and rename the core S3 SDK wrapper code (libnczarr/zs3sdk.cpp) to libdispatch since it is now used in libsrc as well as libnczarr.
* Add calls to nc_finalize in the utilities in case atexit is disabled.
* Add header only json parser to the distribution rather than as a built source.
## S3 Related Fixes
* Add comprehensive support for specifying AWS profiles to provide access credentials.
* Parse the files "~/.aws/config" and "~/.aws/credentials" to provide credentials for the HDF5 ROS3 driver and to locate the default region.
* Add a function to obtain the currently active S3 credentials. The search rules are defined in docs/nczarr.md.
* Provide documentation for the new features.
* Modify the struct NCauth (in include/ncauth.h) to replace specific S3 credentials with a profile name.
* Add a unit test to test the operation of profile and credentials management.
* Add support for URLs of the form "s3://<bucket>/<key>"; this requires obtaining a default region.
* Allow the specification of profile and/or region in a URL of the form "#mode=nczarr,...&aws.region=...&aws.profile=..."
## Misc. Fixes
* Move the ezxml code to libdispatch so that it can be used both by DAP4 and nczarr.
* Modify nclist to provide a deep clone operation.
* Modify ncuri to provide a deep clone operation.
* Modify the .rc file format to allow the specification of a path to be tested when looking for an entry in the .rc file.
* Ensure that the NC_rcload function is called.
* Modify nchttp to support setting request headers.
Filter support has three goals:
1. Use the existing HDF5 filter implementations,
2. Allow filter metadata to be stored in the NumCodecs metadata format used by Zarr,
3. Allow filters to be used even when HDF5 is disabled
Detailed usage directions are defined in docs/filters.md.
For now, the existing filter API is left in place. So filters
are defined using ''nc_def_var_filter'' in the HDF5 style,
where the id and parameters are unsigned integers.
This is a big change since filters affect many parts of the code.
In the following, the terms "compressor", "filter", and "codec" are
used more or less synonymously.
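For example, attaching bzip2 (registered HDF5 filter id 307) in the HDF5 style looks roughly like this sketch:
````
#include <netcdf.h>
#include <netcdf_filter.h>

void
example(int ncid, int varid)
{
    unsigned level = 9; /* bzip2 compression level */
    /* The id and the parameters are plain unsigned integers */
    nc_def_var_filter(ncid, varid, 307 /*bzip2*/, 1, &level);
}
````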
### Filter-Related Changes:
* In order to support dynamic loading of shared filter libraries, a new library was added in the libncpoco directory; it helps to isolate dynamic loading across multiple platforms.
* Provide a json parsing library for use by plugins; this is created by merging libdispatch/ncjson.c with include/ncjson.h.
* Add a new _Codecs attribute to allow clients to see what codecs are being used; let ncdump -s print it out.
* Provide special headers to help support compilation of HDF5 filters when HDF5 is not enabled: netcdf_filter_hdf5_build.h and netcdf_filter_build.h.
* Add a number of new tests for the new nczarr filters.
* Let ncgen parse the _Codecs attribute, although it is ignored.
### Plugin directory changes:
* Add support for the Blosc compressor; this is essential because it is the most common compressor used in Zarr datasets. This also necessitated adding a CMake FindBlosc.cmake file
* Add NCZarr support for the big-four filters provided by HDF5: shuffle, fletcher32, deflate (zlib), and szip
* Add a Codec defaulter (see docs/filters.md) for the big four filters.
* Make plugins work with windows by properly adding __declspec declarations.
### Misc. Non-Filter Changes
* Replace most uses of USE_NETCDF4 (deprecated) with USE_HDF5.
* Improve support for caching
* More fixes for path conversion code
* Fix misc. memory leaks
* Add new utility -- ncdump/ncpathcvt -- that does more or less the same thing as cygpath.
* Add a number of new tests for the non-filter fixes.
* Update the parsers
* Convert most instances of '#ifdef _MSC_VER' to '#ifdef _WIN32'
re: https://github.com/zarr-developers/zarr-specs/issues/41
After discussions with the Zarr community, it was decided to
convert to a new representation of the NCZarr meta-data extensions: version 2.
These extensions store information necessary to mapping the Zarr data model
to the netcdf-4 data model.
The basic change is to remove the NCZarr specific objects: .nczarr, .nczgroup, .nczarray, and .nczattr.
The contents of these objects are moved into the corresponding existing Zarr objects as special keys. The mapping is as follows:
* ''.nczarr'' => ''/.zgroup/_NCZARR_SUPERBLOCK_''
* ''.nczgroup'' => ''.zgroup/_NCZARR_GROUP_''
* ''.nczarray'' => ''.zarray/_NCZARR_ARRAY_''
* ''.nczattr'' => ''.zattr/_NCZARR_ATTR_''
Backward compatibility is maintained by looking for the object ''/.nczarr''
and if found, then assuming that the dataset is in the older version 1 format.
This compatibility only supports reading of such version 1 datasets.
Documentation and test cases are also added.
Misc. Other Changes:
1. The json parsing code was added to the general library instead of nczarr only (ncjson.c, ncjson.h).
2. Improved support for different platform paths by allowing conversion
to a single common path representation.
3. Add some new error codes.
4. Modify nccopy usage to mention the new chunking specification.
re: https://github.com/Unidata/netcdf-c/pull/2003#issuecomment-847637871
Turns out that mingw defines both _WIN32 and getopt.
This means that this test:
````
#ifdef _WIN32
#include "XGetopt.h"
#endif
````
fails with this error:
````
../include/XGetopt.h:38:24: error: conflicting types for 'getopt'
````
The fix is to replace
````
#ifdef _WIN32
````
with
````
#if defined(_WIN32) && !defined(__MINGW32__)
````
re: Issue https://github.com/Unidata/netcdf-c/issues/1999
The NCclosedir code was incorrect; fix it.
Note that this issue crops up when using a non-VisualStudio windows build
such as Mingw, because Mingw provides dirent.h but Visual Studio does not.
Addendum:
Fix some mingw bugs:
1. Modify XGetopt.h to be conditional on _WIN32 instead of _MSC_VER.
2. Make sure sys/stat.h is included in ncpathmgr.h
re: github issue https://github.com/Unidata/netcdf-c/issues/1982
The problem was that the libnczarr/zjson.c handling of strings with
embedded double quotes was wrong; a one-line fix.
Also added a test case.
Misc. other changes:
1. I discovered, en passant, that the handling of 64-bit constants
had an error, which was fixed.
2. Cleanup of the constant conversion code to recurse on arrays of values.
Re: https://github.com/zarr-developers/zarr-python/pull/716
The Zarr version 2 spec has been extended to include the ability
to choose the dimension separator in chunk name keys. The set of legal
separators has been extended from {'.'} to {'.' '/'}. So now it
is possible to use a key like "0/1/2/0" for chunk names.
This PR implements this for NCZarr. The V2 spec now says that
this separator can be set on a per-variable basis. For now, I
have chosen to allow it to be set only globally by adding a key
named "ZARR.DIMENSION_SEPARATOR=<char>" in the
.daprc/.dodsrc/ncrc file. Currently, the only legal separator
characters are '.' (the default) and '/'. On writing, this key
will only be written if its value is different from the default.
This change caused problems because a separator of '/'
is difficult to parse when keys/paths also use '/' as the path separator.
A test case was added for this.
Additionally, make nczarr be enabled by default. This required
some additional changes so that if zip and/or the AWS S3 sdk are unavailable,
then they are disabled for NCZarr.
In addition, the following unrelated changes were made.
1. Tested that pure-zarr mode could read an nczarr formatted store.
1. The .rc file handling now merges all known .rc files (.ncrc,.daprc, and .dodsrc) in that order and using those in HOME first, then in current directory. For duplicate entries, the later ones override the earlier ones. This change is to remove some of the conflicts inherent in the current .rc file load process. A set of test cases was also added.
1. Re-order tests in configure.ac and CMakeLists.txt so that if libcurl
   is not found, then the other options that depend upon it
   are properly disabled.
1. I decided that xarray support should be enabled by default for pure
zarr. In order to allow disabling, I added a new mode flag "noxarray".
1. Certain tests in nczarr_test depend on use of .dodsrc. In order for these
to work when testing in parallel, some inter-test dependencies needed to
be added.
1. Improved authorization testing to use changes in thredds.ucar.edu
re: https://github.com/Unidata/netcdf-c/issues/1988
There was an issue with certain shell programs (bash notably).
For certain platforms, when given a url that had an escaped
'#' character (e.g. \\#), bash would not remove the backslash. So I
had to add a hack for this. Unfortunately I overdid it and it
removed all '\\' characters. This is ok for non-windows platforms,
but obviously fails for windows.
The fix is this:
1. In a utility program (ncgen, ncdump, nccopy, etc.) there is probably a call (or calls) to NC_backslashUnescape(xxx), where xxx is a path argument from the command line.
2. Replace each such call with NC_shellUnescape(xxx).
The new NC_shellUnescape function searches only for occurrences of "\#" and replaces them with "#".
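A sketch of that narrowed behavior (not the actual library code; the helper name is illustrative):
````
#include <stdlib.h>
#include <string.h>

/* Collapse each "\#" to "#"; leave every other character,
 * including lone backslashes, untouched. */
static char*
example_shell_unescape(const char* s)
{
    char* result = (char*)malloc(strlen(s)+1);
    char* p = result;
    if(result == NULL) return NULL;
    while(*s) {
        if(s[0] == '\\' && s[1] == '#') {*p++ = '#'; s += 2;}
        else *p++ = *s++;
    }
    *p = '\0';
    return result;
}
````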
Zarr interoperability fixed. We were given a Zarr format dataset
stored as a directory+file tree. This dataset uses the XArray
conventions and was generated by some non-Unidata Zarr implementation.
In attempting to process it with NCZarr, several interoperability
problems were discovered and fixed. This gives us more confidence
that NCZarr -- using pure zarr -- can interoperate with other
Zarr implementations.
Specific changes:
* Add test nczarr_test/run_interop.sh
* Support attributes with a single value not enclosed in JSON array brackets.
* Add mode inferencing and use it in nczarr_test/run_purezarr.sh
* Reduce the size of tst_err_enddef.nc because it was more than 3 GB.