The XArray implementation that uses Zarr for storage
provides a mechanism to simulate named dimensions.
It does this by adding a per-variable attribute called
_ARRAY_DIMENSIONS. This attribute contains a list of names
that are matched, in order, against the shape of the variable.
In effect, for each i in the range 0..rank(variable)-1, a named
dimension is created with the name _ARRAY_DIMENSIONS(i) and
length shape(i).
Both read and write support is provided.
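For example, a two-dimensional variable of shape (12,5) might carry a
per-variable attribute like the following (a hypothetical illustration
of the convention; the dimension names are arbitrary):
````
{ "_ARRAY_DIMENSIONS": ["time", "level"] }
````
This would yield named dimensions time(12) and level(5).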
This XArray support is only invoked if the mode list in the
URL fragment includes the value "xarray", as in this URL:
````
https://s3.us-west-1.amazonaws.com/bucket/dataset#mode=nczarr,xarray,s3
````
Note that the "xarray" mode flag also implies mode flag "zarr", so the above
is equivalent to this URL.
````
https://s3.us-west-1.amazonaws.com/bucket/dataset#mode=nczarr,zarr,xarray,s3
````
The primary change to implement this was to unify the handling
of dimension references in libnczarr/zsync.
A test for this and other pure-zarr features was added as
nczarr_test/run_purezarr.sh.
Other changes:
* Make sure distcheck leaves no files around.
* Change the special attribute flag DIMSCALEFLAG to HIDDENATTRFLAG
to support the xarray attribute.
* Annotate the zmap implementations with feature flags such as
WRITEONCE (for zip files).
re: https://github.com/Unidata/netcdf-c/issues/1614
NetCDF requires that the type of a _FillValue attribute
be the same as the type of the containing variable. However, it is clear
that too many servers do not honor this requirement. So, change
the default for DAP2 and DAP4 to allow fill mismatch.
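As a reminder of the requirement itself, here is a minimal sketch of a
conforming _FillValue definition (the variable and function names are
hypothetical):
````
#include <netcdf.h>

/* Sketch: define a float variable and a _FillValue of the SAME type.
 * Servers that instead deliver, e.g., a double _FillValue for a float
 * variable produce the mismatch that this change now tolerates. */
static int define_with_fill(int ncid, int dimid)
{
    int varid, stat;
    float fill = -9999.0f; /* matches the variable's NC_FLOAT type */
    if ((stat = nc_def_var(ncid, "t", NC_FLOAT, 1, &dimid, &varid)))
        return stat;
    return nc_put_att_float(ncid, varid, "_FillValue", NC_FLOAT, 1, &fill);
}
````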
Primary changes:
* Add an improved cache system to speed up performance.
* Fix NCZarr to properly handle scalar variables.
Misc. Related Changes:
* Added unit tests for extendible hash and for the generic cache.
* Add config parameter to set size of the NCZarr cache.
* Add initial performance tests but leave them unused.
* Add CRC64 support.
* Move location of ncdumpchunks utility from /ncgen to /ncdump.
* Refactor auth support.
Misc. Unrelated Changes:
* More cleanup of the S3 support
* Add support for S3 authentication in .rc files: HTTP.S3.ACCESSID and HTTP.S3.SECRETKEY.
* Remove the hashkey from the struct OBJHDR since it is never used.
re: https://github.com/Unidata/netcdf-c/issues/1876
and: https://github.com/Unidata/netcdf-c/pull/1835
and: https://github.com/Unidata/netcdf4-python/issues/1041
The change in PR 1835 was correct with respect to using %20 instead of '+'
for encoding blanks. However, it was a mistake to assume everything was
unencoded and then to do encoding ourselves. The problem is that
different servers do different things, with Columbia being an outlier.
So, I have added a set of client controls that can at least give
the caller some control over this. The caller can append
the following fragment to the URL to control what gets encoded before
sending it to the server. The syntax is as follows:
````
https://<host>/<path>/<query>#encode=path|query|all|none
````
The possible values:
* path -- URL encode (i.e. %xx encode) as needed in the path part of the URL.
* query -- URL encode as needed in the query part of the URL.
* all -- equivalent to ````#encode=path,query````.
* none -- do not URL encode any part of the URL sent to the server; not strictly necessary, so mostly for completeness.
Note that if "encode=" is used, then before it is processed, all encoding
is turned off, so that ````#encode=path```` will encode only the path
and not the query.
The default is ````#encode=query````, so the path is left untouched,
but the query is always encoded.
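For example, given a (hypothetical) server URL:
````
https://remotetest.unidata.ucar.edu/dts/test.01?a,b#encode=none
````
the path and query are sent to the server exactly as given, with no
further encoding applied.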
Internally, this required changes to pass the encode flags down into
the OC2 library.
Misc. Unrelated Changes:
* Shut up those irritating warnings from putget.m4.
re: https://github.com/Unidata/netcdf-c/issues/1836
Revert the internal filter code to simplify it. From the user's
point of view, the only visible changes should be:
1. The functions that convert text to filter specs have had their signatures reverted and have been moved to netcdf_aux.h.
2. Some filter API functions now return NC_ENOFILTER when an inquiry is made about an unknown filter.
Internally, the dispatch table has been modified to get rid of the filter_actions
entry and its associated complex structures. It has been replaced with
inq_var_filter_ids and inq_var_filter_info entries, and the dispatch table
version has been bumped to 3. Corresponding NOOP and NOTNC4 functions
were added to libdispatch/dnotnc4.c. Also, the filter_actions entries
in dispatch tables were replaced for all dispatch code bases (HDF5, DAP2,
etc.). This should only impact UDF users.
In the process, it became clear that the form of the filters
field in NC_VAR_INFO_T was format dependent, so I converted it to
be of type void* and pushed its management into the various dispatch
code bases. Specifically libhdf5 and libnczarr now manage the filters
field in their own way.
The auxiliary functions for parsing textual filter specifications
were moved to netcdf_aux.h and renamed to the following:
* ncaux_h5filterspec_parse
* ncaux_h5filterspec_parselist
* ncaux_h5filterspec_free
* ncaux_h5filterspec_fix8
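As an illustration, a minimal sketch of parsing a textual filter
specification with the renamed helpers (the signature shown is an
assumption; consult netcdf_aux.h for the exact declaration):
````
#include <stdio.h>
#include <stdlib.h>
#include "netcdf_aux.h"

int main(void)
{
    /* Parse "307,9": filter id 307 (bzip2) with one parameter. */
    unsigned int id = 0;
    size_t nparams = 0;
    unsigned int* params = NULL;
    if (ncaux_h5filterspec_parse("307,9", &id, &nparams, &params))
        return 1;
    printf("id=%u nparams=%zu\n", id, nparams);
    free(params); /* assumed: caller owns the returned parameter vector */
    return 0;
}
````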
Misc. Other Changes:
1. Document NUG/filters.md updated to reflect the changes above.
2. All the old data types (structs and enums)
used by the filter_actions mechanism were deleted.
The exception is the NC_H5_Filterspec because it is needed
by ncaux_h5filterspec_parselist.
3. Clientside filters were removed -- another enhancement
for which no-one ever asked.
4. The ability to remove filters was itself removed.
5. Some functionality needed by nczarr was moved from libhdf5
to libsrc4, e.g. nc4_find_default_chunksizes.
6. All the filterx code was removed.
7. ncfilter.h and nc4filter.c are no longer used.
Misc. Unrelated Changes:
1. The nczarr_test makefile clean target was leaving some directories
behind, so a clean-local target was added to take care of them.
This change adds support for reading and writing datasets stored in the
cloud using a variant of the Zarr protocol and storage
format. This enhancement is generically referred to as "NCZarr".
The data model supported by NCZarr is netcdf-4 minus the user-defined
types and the String type. In this sense it is similar to the CDF-5
data model.
More detailed information about enabling and using NCZarr is
described in the document NUG/nczarr.md and in a
[Unidata Developer's blog entry](https://www.unidata.ucar.edu/blogs/developer/en/entry/overview-of-zarr-support-in).
WARNING: this code has had only limited testing, so do not use this version
for production work. Also, performance improvements are ongoing.
Note especially the following platform matrix of successful tests:
Platform | Build System | S3 support
--------------|--------------|-----------
Linux+gcc | Automake | yes
Linux+gcc | CMake | yes
Visual Studio | CMake | no
Additionally, and as a consequence of the addition of NCZarr,
major changes have been made to the Filter API. NOTE: NCZarr
does not yet support filters, but these changes are enablers for
that support in the future. Note that it is possible
(probable?) that there will be some accidental reversions if the
changes here did not correctly mimic the existing filter testing.
In any case, previously filter ids and parameters were of type
unsigned int. In order to support the more general zarr filter
model, this was all converted to char*. The old HDF5-specific,
unsigned int operations are still supported but they are
wrappers around the new, char* based nc_filterx_XXX functions.
This entailed at least the following changes:
1. Added the files libdispatch/dfilterx.c and include/ncfilter.h
2. Some filterx utilities have been moved to libdispatch/daux.c
3. A new entry, "filter_actions" was added to the NCDispatch table
and the version bumped.
4. An overly complex set of structs was created to support funnelling
all of the filterx operations thru a single dispatch
"filter_actions" entry.
5. Move common code from libhdf5 to libsrc4 so that it is accessible
to nczarr.
Changes directly related to Zarr:
1. Modified CMakeLists.txt and configure.ac to support both C and C++
-- this is needed for S3 support via the aws-sdk libraries.
2. Define a size64_t type to support nczarr.
3. More reworking of libdispatch/dinfermodel.c to
support zarr and to regularize the structure of the fragments
section of a URL.
Changes not directly related to Zarr:
1. Make client-side filter registration be conditional, with default off.
2. Hack include/nc4internal.h to make some flags added by Ed be unique:
e.g. NC_CREAT, NC_INDEF, etc.
3. Cleanup include/nchttp.h and libdispatch/dhttp.c.
4. Misc. changes to support compiling under Visual Studio including:
* Better testing under Windows for dirent.h and opendir and closedir.
5. Misc. changes to the oc2 code to support various libcurl CURLOPT flags
and to centralize error reporting.
6. By default, suppress the vlen tests that have unfixed memory leaks; add option to enable them.
7. Make part of the nc_test/test_byterange.sh test be contingent on remotetest.unidata.ucar.edu being accessible.
Changes Left TO-DO:
1. Fix the provenance code; it is too HDF5-specific.
re: Partly addresses issue https://github.com/Unidata/netcdf-c/issues/1712.
1. Turn on Hyrax Hack to accept Hyrax style attribute containers.
2. Support Url type as alias for String.
3. Accept the special attribute, "__DAP4_Checksum_CRC32",
to control per-variable checksums.
4. Make _DAP4_xxx attributes be reserved and only accessible
by name (as with the _SuperBlock attribute).
5. Fix handling of checksums. There is a hack in the code
that uses an extra flag in the chunk header to indicate
that all variables have checksums. This violates the spec
and will be removed once it is possible to regenerate the
test cases.
Note that checksumming with the Hyrax test server has not
been tested. This, along with some other probable inconsistencies,
needs fixing when OPeNDAP and Unidata can agree on the proper
specification. Testing will be included.
Fix Issue https://github.com/Unidata/netcdf-c/issues/1725.
Replace PR https://github.com/Unidata/netcdf-c/pull/1726
Also replace PR https://github.com/Unidata/netcdf-c/pull/1694
The general problem is that under Visual Studio, we are seeing
a number of undefined reference and other scoping errors.
The reason is that the code is not properly using Visual Studio
__declspec() declarations.
The basic solution is to ensure that when compiling the library itself,
__declspec(dllexport) is used. There
are several sets of macros to handle this, but they all rely
on the flag DLL_EXPORT being defined when the code is compiled,
but not being defined when the code is used via a .h file.
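A minimal sketch of the usual pattern (macro names are illustrative,
not necessarily the exact macros used in the library):
````
/* Illustrative export-macro pattern for Windows DLL builds.
 * DLL_EXPORT is defined only while compiling the library itself,
 * so the same header expands to dllexport when building the DLL
 * and to dllimport when consumed via the installed .h files. */
#if defined(_MSC_VER)
#  if defined(DLL_EXPORT)
#    define EXTERNL __declspec(dllexport) extern
#  else
#    define EXTERNL __declspec(dllimport) extern
#  endif
#else
#  define EXTERNL extern
#endif

EXTERNL int ocdebug; /* e.g. the oc2 global mentioned below */
````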
As a test, I modified XGetOpt.c to build properly. I also
fixed the oc2 library to properly __declspec things like ocdebug.
I also made some misc. changes to get all the tests to run
if cygwin is installed (to get bash, sed, etc).
Misc. Changes:
* Put XGetOpt.c into libsrc and copy at build time
to the other directories where it is needed.
The current ncgen does not properly handle very large
data sections. Apparently such sections are uncommon, because
the problem was only discovered while testing the new zarr code.
The fix required a new approach to processing data sections.
Unfortunately, the resulting ncgen is slower than before
but at least it is, I think, now correct.
The added test cases are in libnczarr, and so will
not show up until that is incorporated into master.
Note also that Fortran code generation changed, but
has not been tested here.
Misc. Changes:
1. Cleanup error handling in ncgen -lc and -lb output
2. Cleanup Makefiles for ncgen to remove unused code
3. Added a program, ncgen/ncdumpchunks, to print
the data of a .nc file in per-chunk format.
4. Made the XGetOpt change in PR https://github.com/Unidata/netcdf-c/pull/1694
for ncdump/ncvalidator
re: Github issue https://github.com/Unidata/netcdf-c/issues/1713
If nc_def_var_filter or nc_def_var_deflate or nc_def_var_szip is
called multiple times with the same filter id, but possibly with
different sets of parameters, then the first invocation is
sticky and later invocations are ignored. The desired behavior
is to have the last invocation be used.
This PR implements that desired behavior, with some special
cases. If you call nc_def_var_deflate multiple times, then the
last invocation rule applies with respect to deflate. However,
the shuffle filter, if enabled, is always applied just before
applying deflate.
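A minimal sketch of the new behavior (ncid and varid are assumed to
refer to an open file and a define-mode variable):
````
#include <netcdf.h>

void redefine_deflate(int ncid, int varid)
{
    /* With this change, the SECOND call determines the deflate level
     * actually applied (9); previously the first call was sticky. */
    nc_def_var_deflate(ncid, varid, /*shuffle*/1, /*deflate*/1, /*level*/2);
    nc_def_var_deflate(ncid, varid, /*shuffle*/1, /*deflate*/1, /*level*/9);
    /* Shuffle, if enabled, is still applied just before deflate. */
}
````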
Misc unrelated changes:
1. Make client-side filters be disabled by default
2. Fix the definition of uintptr_t and use in oc2 and libdap4
3. Add some test cases
4. Modify filter order tests to use plugin filters rather
than client-side filters.
re: https://github.com/Unidata/netcdf-c/issues/1684
re: e-support VZL-904142
Two issues:
1. As of libcurl 7.66, the semantics of CURLOPT_SSL_VERIFYHOST
changed so that non-zero values affect certificate processing.
2. The current library was forcing the values of VERIFYPEER
and VERIFYHOST to zero instead of leaving them to the default values.
The solution was first to leave the defaults in place for VERIFYPEER and VERIFYHOST
as long as they are not set in the .ocrc/.dodsrc file.
Second, the value of HTTP.SSL.VERIFYPEER or HTTP.SSL.VERIFYHOST
as set in .ocrc/.dodsrc is used to set the corresponding CURLOPT flags.
So for example, adding
> HTTP.SSL.VERIFYHOST=2
will set the value of CURLOPT_SSL_VERIFYHOST to 2, the default.
Using
> HTTP.SSL.VERIFYHOST=0
will set the value of CURLOPT_SSL_VERIFYHOST to 0, which disables it.
Similarly for VERIFYPEER.
Finally, the semantics of HTTP.SSL.VALIDATE are now equivalent to
> HTTP.SSL.VERIFYPEER=1
> HTTP.SSL.VERIFYHOST=2
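Internally, these rc-file settings map onto libcurl options roughly as
follows (a sketch; the wrapper function is hypothetical):
````
#include <curl/curl.h>

/* Sketch: apply .ocrc/.dodsrc SSL settings to a curl handle. This is
 * only done when the keys are present; otherwise libcurl's defaults
 * (VERIFYPEER=1, VERIFYHOST=2) are left untouched. */
static void apply_ssl_settings(CURL* curl, long verifypeer, long verifyhost)
{
    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, verifypeer);
    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, verifyhost);
}
````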
re: https://github.com/Unidata/netcdf-c/issues/1584
Support has been added for multiple filters per variable. This
affects a number of components in netcdf. The new APIs are
documented in NUG/filters.md.
The primary changes are:
* A set of new functions is provided (see __include/netcdf_filter.h__); a usage sketch appears after this list.
- Obtain a list of the filters associated with a variable
- Obtain the parameters for a specific filter.
* The existing __nc_inq_var_filter__ function now returns info
about the first defined filter.
* The utilities (ncgen, ncdump, and nccopy) now support
an extended format for specifying a sequence of filters.
The general form is __<filter>|<filter>...__.
* The ncdump **_Filter** attribute now dumps a list of all the
filters associated with a variable using the above new format.
* Filter specifications can now use a filter name instead of a number
for filters known to the netcdf library; the set of known names is taken
from the HDF5 filter registration page.
* New errors are defined: NC_EFILTER and NC_ENOFILTER. The latter
is returned if an attempt is made to access an unknown filter.
* Internally, the dispatch table has been extended with a single function
that handles all of the filter operations.
* New, filter-related, tests were added to nc_test4.
* A new plugin was added to the plugins directory to help with testing.
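As promised above, a minimal usage sketch of the list-then-inquire
pattern (the function names and signatures are assumptions based on the
description; consult include/netcdf_filter.h for the exact API):
````
#include <stdio.h>
#include <stdlib.h>
#include <netcdf.h>
#include "netcdf_filter.h"

/* Sketch: list a variable's filters, then query each one's
 * parameter count. Error handling is minimal. */
static void show_filters(int ncid, int varid)
{
    size_t nfilters = 0;
    unsigned int* ids;
    /* Assumed: a NULL vector requests only the count, per the
     * usual netcdf inquiry convention. */
    if (nc_inq_var_filter_ids(ncid, varid, &nfilters, NULL)) return;
    if (nfilters == 0) return;
    if ((ids = malloc(nfilters * sizeof(unsigned int))) == NULL) return;
    if (nc_inq_var_filter_ids(ncid, varid, &nfilters, ids) == NC_NOERR) {
        for (size_t i = 0; i < nfilters; i++) {
            size_t nparams = 0;
            if (nc_inq_var_filter_info(ncid, varid, ids[i], &nparams, NULL))
                continue; /* NC_ENOFILTER for an unknown filter id */
            printf("filter %u: %zu parameters\n", ids[i], nparams);
        }
    }
    free(ids);
}
````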
Notes:
1. The shuffle and fletcher32 filters are not part of the multifilter system.
Misc. changes:
1. A debug module was added to libhdf5 to help catch error locations.