Commit Graph

7 Commits

Author SHA1 Message Date
Ward Fisher
a89e1f73b8 Merge branch 'ncgenchunks.dmh' of https://github.com/DennisHeimbigner/netcdf-c into master 2020-09-09 10:24:33 -06:00
Ward Fisher
31dee0c4da Revert "Revert "Fix nczarr-experimental: improve build support, disengage hdf5 vs netcdf4 flags, and find AWS libraries"" 2020-08-17 19:15:47 -06:00
Ward Fisher
16c27ca13f Revert "Fix nczarr-experimental: improve build support, disengage hdf5 vs netcdf4 flags, and find AWS libraries" 2020-08-17 15:51:01 -06:00
Dennis Heimbigner
6074c8a02d Fix items in netcdf_meta.h 2020-08-04 17:31:24 -06:00
Greg Sjaardema
338ca2c212 Protect use of H5Dread_chunk function
The `H5Dread_chunk` function is only available if `HDF5_SUPPORTS_PAR_FILTERS` is defined (see CMakeLists.txt, line 745). The function was added in HDF5-1.10.3.
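A minimal sketch of such a guard, assuming HDF5's documented `H5Dread_chunk` signature (the wrapper name is illustrative):

```c
#include <stdint.h>
#include <hdf5.h>

/* H5Dread_chunk only exists in HDF5 >= 1.10.3, so compile the call in
 * only when the build detected support (HDF5_SUPPORTS_PAR_FILTERS). */
static int
read_raw_chunk(hid_t dsetid, const hsize_t *offset, void *buf)
{
#ifdef HDF5_SUPPORTS_PAR_FILTERS
    uint32_t filter_mask = 0;
    if (H5Dread_chunk(dsetid, H5P_DEFAULT, offset, &filter_mask, buf) < 0)
        return -1; /* HDF5 read failure */
    return 0;
#else
    (void)dsetid; (void)offset; (void)buf;
    return -1; /* this HDF5 version cannot read raw chunks */
#endif
}
```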
2020-07-10 15:27:54 -06:00
Dennis Heimbigner
59e04ae071 This PR adds EXPERIMENTAL support for accessing data in the
cloud using a variant of the Zarr protocol and storage
format. This enhancement is generically referred to as "NCZarr".

The data model supported by NCZarr is netcdf-4 minus the user-defined
types and the String type. In this sense it is similar to the CDF-5
data model.
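For instance, defining variables of the atomic types works as with netcdf-4, while NC_STRING variables and user-defined types are excluded; a minimal sketch (the function and variable names are illustrative):

```c
#include <netcdf.h>

/* Under the NCZarr data model, an atomic-typed variable such as this
 * NC_UBYTE one is fine; an NC_STRING or user-defined-type variable
 * would not be representable. (Illustrative sketch.) */
int
define_var(int ncid, int dimid, int *varidp)
{
    return nc_def_var(ncid, "data", NC_UBYTE, 1, &dimid, varidp);
}
```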

More detailed information about enabling and using NCZarr is
described in the document NUG/nczarr.md and in a
[Unidata Developer's blog entry](https://www.unidata.ucar.edu/blogs/developer/en/entry/overview-of-zarr-support-in).

WARNING: this code has had limited testing, so do not use this version
for production work. Also, performance improvements are ongoing.
Note especially the following platform matrix of successful tests:

Platform       | Build System | S3 support
---------------|--------------|-----------
Linux+gcc      | Automake     | yes
Linux+gcc      | CMake        | yes
Visual Studio  | CMake        | no

Additionally, and as a consequence of the addition of NCZarr,
major changes have been made to the Filter API. NOTE: NCZarr
does not yet support filters, but these changes are enablers for
that support in the future. It is possible (probable?) that there
will be some accidental regressions if the changes here did not
correctly mimic the existing filter testing.

In any case, filter ids and parameters were previously of type
unsigned int. In order to support the more general zarr filter
model, this was all converted to char*. The old HDF5-specific,
unsigned int operations are still supported, but they are now
wrappers around the new, char*-based nc_filterx_XXX functions
(see the sketch after this list). This entailed at least the
following changes:
1. Added the files libdispatch/dfilterx.c and include/ncfilter.h
2. Some filterx utilities have been moved to libdispatch/daux.c
3. A new entry, "filter_actions", was added to the NCDispatch table
   and the version bumped.
4. An overly complex set of structs was created to support funnelling
   all of the filterx operations through a single dispatch
   "filter_actions" entry.
5. Moved common code from libhdf5 to libsrc4 so that it is accessible
   to nczarr.
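As a rough illustration of the wrapper idea, the old unsigned-int entry point can render its arguments as decimal strings and delegate. The exact internal signatures differ, so the names below (nc_filterx_def, nc_def_var_filter_uint) and the conversion are hypothetical:

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for the internal char*-based nc_filterx_XXX
 * entry points; a real implementation would dispatch on the string id. */
static int
nc_filterx_def(int ncid, int varid, const char *id,
               size_t nparams, const char **params)
{
    (void)ncid; (void)varid; (void)nparams; (void)params;
    printf("filterx def: id=%s\n", id);
    return 0;
}

/* Old-style unsigned int API kept as a thin wrapper: convert the id
 * and each parameter to its decimal string form, then delegate.
 * (Sketch only; most error handling is omitted.) */
int
nc_def_var_filter_uint(int ncid, int varid, unsigned int id,
                       size_t nparams, const unsigned int *params)
{
    char idstr[32];
    char **strs;
    size_t i;
    int stat;

    snprintf(idstr, sizeof(idstr), "%u", id);
    if ((strs = malloc(nparams * sizeof(char *))) == NULL)
        return 1;
    for (i = 0; i < nparams; i++) {
        strs[i] = malloc(16);
        snprintf(strs[i], 16, "%u", params[i]);
    }
    stat = nc_filterx_def(ncid, varid, idstr, nparams, (const char **)strs);
    for (i = 0; i < nparams; i++)
        free(strs[i]);
    free(strs);
    return stat;
}
```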

Changes directly related to Zarr:
1. Modified CMakeLists.txt and configure.ac to support both C and C++
   -- this is in support of S3 access via the aws-sdk libraries.
2. Defined a size64_t type to support nczarr.
3. Reworked libdispatch/dinfermodel.c further to
   support zarr and to regularize the structure of the fragment
   section of a URL.
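For example, with the regularized fragment handling, a dataset can be opened through the ordinary API by putting the mode in the URL fragment; the path and fragment values here are illustrative (NUG/nczarr.md documents the actual forms):

```c
#include <stdio.h>
#include <netcdf.h>

/* Open an NCZarr dataset: the "mode=" key in the URL fragment selects
 * the NCZarr path and the underlying storage (illustrative values). */
int
main(void)
{
    int ncid, stat;
    const char *url = "file:///tmp/example.zarr#mode=nczarr,file";

    if ((stat = nc_open(url, NC_NOWRITE, &ncid)) != NC_NOERR) {
        fprintf(stderr, "nc_open: %s\n", nc_strerror(stat));
        return 1;
    }
    nc_close(ncid);
    return 0;
}
```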

Changes not directly related to Zarr:
1. Make client-side filter registration conditional, with default off.
2. Hack include/nc4internal.h to make some flags added by Ed unique:
   e.g. NC_CREAT, NC_INDEF, etc.
3. Clean up include/nchttp.h and libdispatch/dhttp.c.
4. Misc. changes to support compiling under Visual Studio including:
   * Better testing under windows for dirent.h and opendir and closedir.
5. Misc. changes to the oc2 code to support various libcurl CURLOPT flags
   and to centralize error reporting.
6. By default, suppress the vlen tests that have unfixed memory leaks; add an option to enable them.
7. Make part of the nc_test/test_byterange.sh test contingent on remotetest.unidata.ucar.edu being accessible.

Changes Left TO-DO:
1. Fix the provenance code; it is too HDF5-specific.
2020-06-28 18:02:47 -06:00
Dennis Heimbigner
68a98f6e81 Fix ncgen handling of big data sections
The current ncgen does not properly handle very large
data sections. Apparently such data sections are uncommon, since the
problem was only discovered while testing the new zarr code.

The fix required a new approach to processing data sections.
Unfortunately, the resulting ncgen is slower than before,
but at least it is, I think, now correct.

The added test cases are in libnczarr, and so will
not show up until that is incorporated into master.

Note also that Fortran code generation changed, but
has not been tested here.

Misc. Changes
1. Cleaned up error handling in ncgen -lc and -lb output.
2. Cleaned up the Makefiles for ncgen to remove unused code.
3. Added a program, ncgen/ncdumpchunks, to print
   the data for a .nc file on a per-chunk basis (see the sketch below).
4. Made the XGetOpt change in PR https://github.com/Unidata/netcdf-c/pull/1694
   for ncdump/ncvalidator.
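A rough sketch of the per-chunk walk such a dumper needs, here for a 1-D chunked variable using only the public API (ncdumpchunks itself may read raw chunks differently; dump_chunks_1d is illustrative):

```c
#include <stdio.h>
#include <netcdf.h>

/* Query the chunk length, then read and print one chunk-sized slab at
 * a time. (Illustrative; assumes a 1-D double variable with chunks of
 * at most 1024 values.) */
static int
dump_chunks_1d(int ncid, int varid, size_t dimlen)
{
    int storage;
    size_t chunklen, start;
    double buf[1024];

    if (nc_inq_var_chunking(ncid, varid, &storage, &chunklen) != NC_NOERR
        || storage != NC_CHUNKED || chunklen > 1024)
        return 1;
    for (start = 0; start < dimlen; start += chunklen) {
        size_t count = (dimlen - start < chunklen) ? dimlen - start : chunklen;
        if (nc_get_vara_double(ncid, varid, &start, &count, buf) != NC_NOERR)
            return 1;
        printf("chunk at [%zu]: first value = %g\n", start, buf[0]);
    }
    return 0;
}
```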
2020-05-14 11:20:46 -06:00