Commit Graph

63 Commits

Author SHA1 Message Date
Dennis Heimbigner
eb3d9eb0c9 Provide a Number of fixes/improvements to NCZarr
Primary changes:
* Add an improved cache system to speed up performance.
* Fix NCZarr to properly handle scalar variables.

Misc. Related Changes:
* Added unit tests for extendible hash and for the generic cache.
* Add config parameter to set size of the NCZarr cache.
* Add initial performance tests but leave them unused.
* Add CRC64 support.
* Move location of ncdumpchunks utility from /ncgen to /ncdump.
* Refactor auth support.

Misc. Unrelated Changes:
* More cleanup of the S3 support
* Add support for S3 authentication in .rc files: HTTP.S3.ACCESSID and HTTP.S3.SECRETKEY (see the sketch after this list).
* Remove the hashkey from the struct OBJHDR since it is never used.
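For illustration, a minimal sketch of the new .rc entries, assuming the usual KEY=VALUE format of the netcdf-c .rc files; the key names come from this change, the values are placeholders:

```
# Hypothetical ~/.ncrc (or .daprc/.dodsrc) entries; values are placeholders.
HTTP.S3.ACCESSID=AKIAXXXXXXXXXXXXXXXX
HTTP.S3.SECRETKEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```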
2020-11-19 17:01:04 -07:00
Dennis Heimbigner
aeb3ac2809 Mostly revert the filter code to reduce its complexity of use.
re: https://github.com/Unidata/netcdf-c/issues/1836

Revert the internal filter code to simplify it. From the user's
point of view, the only visible changes should be:

1. The functions that convert text to filter specs have had their signature reverted and have been moved to netcdf_aux.h
2. Some filter API functions now return NC_ENOFILTER when inquiry is made about some filter.

Internally, the dispatch table has been modified to get rid of the filter_actions
entry and associated complex structures. It has been replaced with
inq_var_filter_ids and inq_var_filter_info entries and the dispatch table
version has been bumped to 3. Corresponding NOOP and NOTNC4 functions
were added to libdispatch/dnotnc4.c. Also, the filter_action entries
in dispatch tables were replaced for all dispatch code bases (HDF5, DAP2,
etc). This should only impact UDF users.
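From the user's side, a minimal sketch of the new inquiry path, assuming the public wrappers mirroring these dispatch entries are nc_inq_var_filter_ids and nc_inq_var_filter_info as declared in netcdf_filter.h (check that header for the exact signatures):

```c
#include <stdio.h>
#include <stdlib.h>
#include <netcdf.h>
#include <netcdf_filter.h>

/* Sketch: list every filter id attached to a variable, then ask for its
 * parameter count; NC_ENOFILTER is returned for an unknown filter. */
int list_filters_sketch(int ncid, int varid)
{
    size_t nfilters = 0, i;
    if (nc_inq_var_filter_ids(ncid, varid, &nfilters, NULL)) return 1;
    unsigned int* ids = calloc(nfilters ? nfilters : 1, sizeof(unsigned int));
    if (ids == NULL || nc_inq_var_filter_ids(ncid, varid, &nfilters, ids)) {
        free(ids);
        return 1;
    }
    for (i = 0; i < nfilters; i++) {
        size_t nparams = 0;
        int stat = nc_inq_var_filter_info(ncid, varid, ids[i], &nparams, NULL);
        if (stat == NC_ENOFILTER) continue; /* inquiry about an unknown filter */
        printf("filter %u: %zu parameter(s)\n", ids[i], nparams);
    }
    free(ids);
    return 0;
}
```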

In the process, it became clear that the form of the filters
field in NC_VAR_INFO_T was format dependent, so I converted it to
be of type void* and pushed its management into the various dispatch
code bases. Specifically libhdf5 and libnczarr now manage the filters
field in their own way.

The auxiliary functions for parsing textual filter specifications
were moved to netcdf_aux.h and were renamed to the following (a usage
sketch follows the list):
* ncaux_h5filterspec_parse
* ncaux_h5filterspec_parselist
* ncaux_h5filterspec_free
* ncaux_h5filter_fix8
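A hedged usage sketch for the first of these; the exact signature should be checked in netcdf_aux.h, and the spec string here is hypothetical:

```c
#include <stdio.h>
#include <stdlib.h>
#include <netcdf_aux.h>

/* Sketch: parse a textual HDF5 filter spec of the form "<id>,<param>,...".
 * Assumes ncaux_h5filterspec_parse(text, &id, &nparams, &params). */
int parse_spec_sketch(void)
{
    unsigned int id = 0;
    size_t nparams = 0;
    unsigned int* params = NULL;
    if (ncaux_h5filterspec_parse("307,9", &id, &nparams, &params))
        return 1;
    printf("filter id=%u nparams=%zu\n", id, nparams);
    free(params);
    return 0;
}
```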

Misc. Other Changes:

1. Document NUG/filters.md updated to reflect the changes above.
2. All the old data types (structs and enums)
   used by filter_actions actions were deleted.
   The exception is the NC_H5_Filterspec because it is needed
   by ncaux_h5filterspec_parselist.
3. Clientside filters were removed -- another enhancement
   for which no-one ever asked.
4. The ability to remove filters was itself removed.
5. Some functionality needed by nczarr was moved from libhdf5
   to libsrc4 e.g. nc4_find_default_chunksizes
6. All the filterx code was removed
7. ncfilter.h and nc4filter.c no longer used

Misc. Unrelated Changes:

1. The nczarr_test makefile clean was leaving some directories; so
   add clean-local to take care of them.
2020-09-27 12:43:46 -06:00
bombipappoo
ecbb0f5bbf Convert filename from ANSI to UTF-8 before calling HDF5. 2020-07-14 22:44:42 +09:00
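For context, a minimal sketch of the conversion technique on Windows (not the code from this commit): widen the ANSI string with MultiByteToWideChar, then narrow it to UTF-8 with WideCharToMultiByte.

```c
#ifdef _WIN32
#include <windows.h>
#include <stdlib.h>

/* Sketch: convert an ANSI (current code page) path to UTF-8.
 * Caller frees the returned buffer; returns NULL on failure. */
static char* ansi_to_utf8(const char* ansi)
{
    int wlen = MultiByteToWideChar(CP_ACP, 0, ansi, -1, NULL, 0);
    if (wlen <= 0) return NULL;
    wchar_t* wide = malloc((size_t)wlen * sizeof(wchar_t));
    if (wide == NULL) return NULL;
    MultiByteToWideChar(CP_ACP, 0, ansi, -1, wide, wlen);
    int ulen = WideCharToMultiByte(CP_UTF8, 0, wide, -1, NULL, 0, NULL, NULL);
    char* utf8 = (ulen > 0) ? malloc((size_t)ulen) : NULL;
    if (utf8 != NULL)
        WideCharToMultiByte(CP_UTF8, 0, wide, -1, utf8, ulen, NULL, NULL);
    free(wide);
    return utf8;
}
#endif
```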
Dennis Heimbigner
59e04ae071 This PR adds EXPERIMENTAL support for accessing data in the
cloud using a variant of the Zarr protocol and storage
format. This enhancement is generically referred to as "NCZarr".

The data model supported by NCZarr is netcdf-4 minus the user-defined
types and the String type. In this sense it is similar to the CDF-5
data model.

More detailed information about enabling and using NCZarr is
described in the document NUG/nczarr.md and in a
[Unidata Developer's blog entry](https://www.unidata.ucar.edu/blogs/developer/en/entry/overview-of-zarr-support-in).

WARNING: this code has had limited testing, so do not use this version
for production work. Also, performance improvements are ongoing.
Note especially the following platform matrix of successful tests:

Platform | Build System | S3 support
---------------|--------------|-----------
Linux+gcc      | Automake     | yes
Linux+gcc      | CMake        | yes
Visual Studio  | CMake        | no

Additionally, and as a consequence of the addition of NCZarr,
major changes have been made to the Filter API. NOTE: NCZarr
does not yet support filters, but these changes are enablers for
that support in the future.  Note that it is possible
(probable?) that there will be some accidental reversions if the
changes here did not correctly mimic the existing filter testing.

In any case, previously filter ids and parameters were of type
unsigned int. In order to support the more general zarr filter
model, this was all converted to char*.  The old HDF5-specific,
unsigned int operations are still supported but they are
wrappers around the new, char* based nc_filterx_XXX functions.
This entailed at least the following changes:
1. Added the files libdispatch/dfilterx.c and include/ncfilter.h
2. Some filterx utilities have been moved to libdispatch/daux.c
3. A new entry, "filter_actions" was added to the NCDispatch table
   and the version bumped.
4. An overly complex set of structs was created to support funnelling
   all of the filterx operations thru a single dispatch
   "filter_actions" entry.
5. Move common code from libhdf5 to libsrc4 so that it is accessible
   to nczarr.

Changes directly related to Zarr:
1. Modified CMakeList.txt and configure.ac to support both C and C++
   -- this is in support of S3 access via the aws-sdk libraries.
2. Define a size64_t type to support nczarr.
3. More reworking of libdispatch/dinfermodel.c to
   support zarr and to regularize the structure of the fragments
   section of a URL (see the sketch after this list).
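As an illustration of the regularized fragment syntax, a hedged sketch of opening an NCZarr dataset through a URL whose #mode=... fragment selects the format and storage; the exact fragment keywords are described in NUG/nczarr.md, and the URL here is hypothetical:

```c
#include <netcdf.h>

/* Sketch: the #mode=... fragment tells the library which dispatcher
 * and storage driver to use for this (hypothetical) URL. */
int open_nczarr_sketch(void)
{
    int ncid, stat;
    stat = nc_open("https://s3.us-east-1.amazonaws.com/example-bucket/data.zarr#mode=nczarr,s3",
                   NC_NOWRITE, &ncid);
    if (stat) return stat;
    return nc_close(ncid);
}
```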

Changes not directly related to Zarr:
1. Make client-side filter registration be conditional, with default off.
2. Hack include/nc4internal.h to make some flags added by Ed be unique:
   e.g. NC_CREAT, NC_INDEF, etc.
3. cleanup include/nchttp.h and libdispatch/dhttp.c.
4. Misc. changes to support compiling under Visual Studio including:
   * Better testing under windows for dirent.h and opendir and closedir.
5. Misc. changes to the oc2 code to support various libcurl CURLOPT flags
   and to centralize error reporting.
6. By default, suppress the vlen tests that have unfixed memory leaks; add option to enable them.
7. Make part of the nc_test/test_byterange.sh test be contingent on remotetest.unidata.ucar.edu being accessible.

Changes Left TO-DO:
1. fix provenance code, it is too HDF5 specific.
2020-06-28 18:02:47 -06:00
Dennis Heimbigner
84c69afca7 Allow redefinition of variable filters
re: Github issue https://github.com/Unidata/netcdf-c/issues/1713

If nc_def_var_filter or nc_def_var_deflate or nc_def_var_szip is
called multiple times with the same filter id, but possibly with
different sets of parameters, then the first invocation is
sticky and later invocations are ignored. The desired behavior
is to have the last invocation be used.

This PR implements that desired behavior, with some special
cases.  If you call nc_def_var_deflate multiple times, then the
last invocation rule applies with respect to deflate. However,
the shuffle filter, if enabled, is always applied just before
applying deflate.
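A minimal sketch of the intended last-invocation behavior (the parameter values are illustrative):

```c
#include <netcdf.h>

/* Sketch: with this change the second nc_def_var_deflate call wins, so the
 * variable ends up with deflate level 9; shuffle, being enabled, is still
 * applied just before deflate. */
int redefine_deflate_sketch(int ncid, int varid)
{
    int stat;
    if ((stat = nc_def_var_deflate(ncid, varid, /*shuffle*/1, /*deflate*/1, /*level*/1)))
        return stat;
    if ((stat = nc_def_var_deflate(ncid, varid, /*shuffle*/1, /*deflate*/1, /*level*/9)))
        return stat;
    return NC_NOERR;
}
```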

Misc unrelated changes:
1. Make client-side filters be disabled by default
2. Fix the definition of uintptr_t and use in oc2 and libdap4
3. Add some test cases
4. modify filter order tests to use plugin filters rather
   than client-side filters
2020-05-11 09:42:31 -06:00
Edward Hartnett
e3c9e83ecf adding internal function, plus some documentation 2020-05-08 08:58:42 -06:00
Dennis Heimbigner
b0e0d81aa9 Fix reclamation of the ->format_XXX_info fields
nc4internal.c contains code to free the format_XXX_info
fields. Since these are format specific, this code
was moved to the dispatch code (libhdf5 and libhdf4
in the current case).

Additionally, there are some fields in nc4internal.h (e.g.
dimscale fields) that are specific to HDF5 and have been moved
to the corresponding HDF5 data structures and code.

Misc. other changes:
1. NC_VAR_INFO_T->hdf5_name renamed to alt_name to avoid
   implying it is necessarily HDF5 specific.
2. prefix NC_FILE_INFO_T with an instance of NC_OBJ for consistency.
   this also requires wrapping move_in_NCList() to keep
   hdr.id consistent.
2020-03-29 12:48:59 -06:00
Dennis Heimbigner
44d0dcaad2 Add support for multiple filters per variable.
re: https://github.com/Unidata/netcdf-c/issues/1584

Support has been added for multiple filters per variable.  This
affects a number of components in netcdf. The new APIs are
documented in NUG/filters.md.

The primary changes are:
* A set of new functions are provided (see __include/netcdf_filter.h__).
    - Obtain a list of the filters associated with a variable
    - Obtain the parameters for a specific filter.
* The existing __nc_inq_var_filter__ function now returns info
  about the first defined filter (see the sketch after this list).
* The utilities (ncgen, ncdump, and nccopy) now support
  an extended format for specifying a sequence of filters.
  The general form is __<filter>|<filter>...__.
* The ncdump **_Filter** attribute now dumps a list of all the
  filters associated with a variable using the above new format.
* Filter specifications can now use a filter name instead of number
  for filters known to the netcdf library, which in turn is taken
  from the HDF5 filter registration page.
* New errors are defined: NC_EFILTER and NC_ENOFILTER. The latter
  is returned if an attempt is made to access an unknown filter.
* Internally, the dispatch table has been extended to add a function
  to handle all of the filter functions.
* New, filter-related, tests were added to nc_test4.
* A new plugin was added to the plugins directory to help with testing.
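A minimal sketch of the revised __nc_inq_var_filter__ behavior; with several filters attached, only the first one is reported by this call:

```c
#include <netcdf.h>
#include <netcdf_filter.h>

/* Sketch: query the first filter defined on a variable; NC_ENOFILTER
 * means no filter (or an unknown one) is defined. */
int first_filter_sketch(int ncid, int varid)
{
    unsigned int id = 0;
    size_t nparams = 0;
    int stat = nc_inq_var_filter(ncid, varid, &id, &nparams, NULL);
    if (stat == NC_ENOFILTER) return 0; /* no filter defined */
    return stat;
}
```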

Notes:
1. The shuffle and fletcher32 filters are not part of the multifilter system.

Misc. changes:
1. A debug module was added to libhdf5 to help catch error locations.
2020-02-16 12:59:33 -07:00
Greg Sjaardema
56c0d5cf8a Spelling fixes 2019-09-18 08:03:01 -06:00
Ward Fisher
5410967b00 Merge branch 'master' into addfilter.dmh 2019-04-30 14:51:25 -06:00
Dennis Heimbigner
2eb1a8d8cf For some reason, the code for this was incorrect.
Anyway, I repaired it as follows:
1. Created NC4_write_provenance as parallel to NC4_read_provenance
2. Modified hdf5file.c to use NC4_write_provenance
3. Modified hdf5open.c to use NC4_read_provenance (was NC4_read_ncproperties).
4. The creation of the _NCProperties string was seriously hosed:
   it was using all the wrong fields.
2019-04-18 14:23:20 -06:00
Ward Fisher
c776039eba Updated call to NC4_read_provenance. 2019-04-18 10:53:16 -06:00
Dennis Heimbigner
8d0bced60d Allow in-line definition of filters
Priority: Low

re: issue https://github.com/Unidata/netcdf-c/issues/1329

HDF5 has the ability to programmatically define new filters,
as opposed to using HDF5_PLUGIN_PATH env variable.
This PR adds support for that feature.
Not clear how useful this is, though.
See docs/filters.md for details.
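For reference, the underlying HDF5 mechanism is H5Zregister with an H5Z_class2_t descriptor; a hedged, minimal sketch follows (the filter id and callback are hypothetical, and the netcdf-level interface is the one described in docs/filters.md):

```c
#include <hdf5.h>

/* Hypothetical filter callback; a real filter transforms *buf in place. */
static size_t noop_filter(unsigned int flags, size_t cd_nelmts,
                          const unsigned int cd_values[], size_t nbytes,
                          size_t* buf_size, void** buf)
{
    (void)flags; (void)cd_nelmts; (void)cd_values; (void)buf_size; (void)buf;
    return nbytes; /* pass the data through unchanged */
}

/* Sketch: register a filter programmatically instead of relying on the
 * HDF5_PLUGIN_PATH env variable; the id 32768 is purely illustrative. */
static int register_noop_filter(void)
{
    const H5Z_class2_t cls = {
        H5Z_CLASS_T_VERS,        /* version of the class struct */
        (H5Z_filter_t)32768,     /* hypothetical filter id */
        1, 1,                    /* encoder/decoder present */
        "example-noop-filter",   /* name */
        NULL, NULL,              /* can_apply, set_local callbacks */
        noop_filter              /* the filter function itself */
    };
    return (H5Zregister(&cls) < 0) ? -1 : 0;
}
```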
2019-03-21 11:33:27 -06:00
Dennis Heimbigner
88a7a1753c Simplify libhdf5/nc4info.c to move to lazy parsing
re: https://github.com/Unidata/netcdf-c/issues/1352

When nc4info.c encounters an _NCProperties attribute
with a version number it does not recognize, it does not
show it correctly.

Solution chosen is to arrange so that accessing the attribute
returns the raw value of the Attribute from the file. This way,
even if the version is unrecognized, it will return something
usable.
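A minimal sketch of fetching that raw value through the standard attribute API (the dataset path is hypothetical):

```c
#include <stdio.h>
#include <stdlib.h>
#include <netcdf.h>

/* Sketch: read the raw _NCProperties text from the global attributes;
 * with this change the value comes back even when its version is
 * unrecognized. */
int read_ncproperties_sketch(const char* path)
{
    int ncid, stat;
    size_t len = 0;
    if ((stat = nc_open(path, NC_NOWRITE, &ncid))) return stat;
    if ((stat = nc_inq_attlen(ncid, NC_GLOBAL, "_NCProperties", &len)) == NC_NOERR) {
        char* value = calloc(len + 1, 1);
        if (value != NULL &&
            (stat = nc_get_att_text(ncid, NC_GLOBAL, "_NCProperties", value)) == NC_NOERR)
            printf("_NCProperties=%s\n", value);
        free(value);
    }
    nc_close(ncid);
    return stat;
}
```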

The changes primarily ensure that the value of _NCProperties is
never parsed until actually required; since it is currently not
used, parsing never occurs.

Also modified ncdump/tst_fileinfo.sh to include some extra testing

I tested the original failure by changing the value of NCPROPS to 3.
However, there is no way to test this at build time.

Misc. Changes
* Inlined the provenance info in the NC_FILE_INFO_T structure
* Centralized stuff from elsewhere into include/nc_provenance.h

Misc. Unrelated Changes
* Removed/turned off some misc debug output left on by accident
* Fix CPPFLAGS name error in libhdf5/Makefile.am
2019-03-09 20:35:57 -07:00
Dennis Heimbigner
0c59e13bf7 Master merge, conflict resolution, cleanup 2019-02-24 16:54:13 -07:00
Dennis Heimbigner
45a8a265b8 master merge 2019-02-23 17:14:12 -07:00
Ed Hartnett
58a801c91a cleanup of whitespace in include directory 2019-02-19 06:09:10 -07:00
Ward Fisher
1fde39c8d7
Merge branch 'master' into byterange.dmh 2019-02-07 14:28:23 -07:00
Ed Hartnett
e74ec6f2a0 made function give_var_secret_name() not static, fixed warning 2019-01-27 11:10:41 -07:00
Ed Hartnett
627a55cf78 added function give_var_secret_name() 2019-01-27 11:06:02 -07:00
Ward Fisher
b27c7d899d Merge branch 'master' into byterange.dmh 2019-01-25 14:50:23 -07:00
Ed Hartnett
15e6a782db removed unneeded variable, shortened function name 2019-01-20 09:25:04 -07:00
Dennis Heimbigner
84c2bc0d78 Merge branch 'master' into byterange.dmh 2019-01-02 13:18:45 -07:00
Dennis Heimbigner
bf2746b8ea Provide byte-range reading of remote datasets
re: issue https://github.com/Unidata/netcdf-c/issues/1251

Assume that you have the URL to a remote dataset
which is a normal netcdf-3 or netcdf-4 file.

This PR allows the netcdf-c to read that dataset's
contents as a netcdf file using HTTP byte ranges
if the remote server supports byte-range access.

Originally, this PR was set up to access Amazon S3 objects,
but it can also access other remote datasets such as those
provided by a Thredds server via the HTTPServer access protocol.
It may also work for other kinds of servers.

Note that this is not intended as a true production
capability because, as is known, this kind of access
can be quite slow. In addition, the byte-range IO drivers
do not currently do any sort of optimization or caching.

An additional goal here is to gain some experience with
the Amazon S3 REST protocol.

This architecture and its use are documented in
the file docs/byterange.dox.
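A minimal sketch of byte-range access, assuming the #mode=bytes URL fragment described in docs/byterange.dox (the URL itself is hypothetical):

```c
#include <netcdf.h>

/* Sketch: open a remote netcdf-3/netcdf-4 file over HTTP byte ranges and
 * check its format, as in nc_test/tst_s3raw.c; the server must honor
 * Range requests. */
int open_byterange_sketch(void)
{
    int ncid, format, stat;
    stat = nc_open("https://example.com/data/test.nc#mode=bytes", NC_NOWRITE, &ncid);
    if (stat) return stat;
    stat = nc_inq_format(ncid, &format);
    nc_close(ncid);
    return stat;
}
```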

There are currently two test cases:

1. nc_test/tst_s3raw.c - this does a simple open, check format, close cycle
   for a remote netcdf-3 file and a remote netcdf-4 file.
2. nc_test/test_s3raw.sh - this uses ncdump to investigate some remote
   datasets.

This PR also incorporates significantly changed model inference code
(see the superseded PR https://github.com/Unidata/netcdf-c/pull/1259).

1. It centralizes the code that infers the dispatcher.
2. It adds support for byte-range URLs

Other changes:

1. NC_HDF5_finalize was not being properly called by nc_finalize().
2. Fix minor bug in ncgen3.l
3. fix memory leak in nc4info.c
4. add code to walk the .daprc triples and to replace protocol=
   fragment tag with a more general mode= tag.

Final Note:
The inference code is still way too complicated. We need to move
to the validfile() model used by netcdf Java, where each
dispatcher is asked if it can process the file. This decentralizes
the inference code. This will be done after all the major new
dispatchers (PIO, Zarr, etc) have been implemented.
2019-01-01 18:27:36 -07:00
Ed Hartnett
7249350c3f now remember whether coords att has been read for a var 2018-12-19 09:43:32 -07:00
Ed Hartnett
1b38d9aef8 lazy read of some var metadata 2018-12-18 07:48:22 -07:00
Ed Hartnett
8ca5a1ac17
Merge branch 'master' into ejh_fast_var_prep 2018-12-12 07:05:45 -07:00
Ward Fisher
30ea33435c Merge remote-tracking branch 'origin/license_update.wif' into pr-aggregation.wif 2018-12-11 17:08:21 -05:00
Ward Fisher
50fa6b4f32 Merge branch 'ejh_next_22' of https://github.com/NetCDF-World-Domination-Council/netcdf-c into pr-aggregation.wif 2018-12-11 17:06:43 -05:00
Ward Fisher
6deb77bade Merge branch 'master' into gh1207.dmh 2018-12-11 16:44:04 -05:00
Ed Hartnett
dc1115ae76 moved rec_match_dimscales() to hdf5open.c, made it faster by skipping already-identified dims 2018-12-11 09:40:59 -07:00
Ed Hartnett
3c9a141ee3 moved function detect_preserve_dimids and made it static 2018-12-11 06:15:47 -07:00
Ward Fisher
5be0126920 More standardizing of the copyright stanza. 2018-12-06 14:13:56 -07:00
Ed Hartnett
77d0922d49 starting to deal with normalized name in HDF5 attribute code 2018-11-30 08:59:58 -07:00
Ed Hartnett
1f64c66cdf rename HDF5 dispatch functions to start with NC4_HDF5 2018-11-26 08:13:57 -07:00
Ed Hartnett
aade08ee22 moving checking for lazy att reads to libhdf5 2018-11-26 07:49:58 -07:00
Ed Hartnett
5104262f6b changing types 2018-11-20 08:00:48 -07:00
Dennis Heimbigner
4bb92b77db Fix error report coming out of nc4info.c
re: issue https://github.com/Unidata/netcdf-c/issues/1207

The NC4_get_provenance is generating a spurious error message.
This properly suppresses it.
2018-11-16 15:31:37 -07:00
Ed Hartnett
8ae5ebf6bc remove unneeded params from function 2018-11-16 08:26:09 -07:00
Ed Hartnett
d7fe095066 moving rest of var stuff 2018-11-13 16:59:07 -07:00
Ed Hartnett
f7cb5ead1c moving var hdf5-specific info into libhdf5 2018-11-13 05:44:39 -07:00
Ed Hartnett
578ddb2a28 cleaning up nc4internal.h 2018-11-13 05:27:46 -07:00
Ed Hartnett
286e276390 cleaning up nc4internal.h 2018-11-13 05:18:00 -07:00
Ed Hartnett
c35aa3ccb9 changed header files to separate HDF5-specific grp info 2018-11-12 07:40:15 -07:00
Ed Hartnett
8aabd2020c moving HDF5 dim fields to hdf5internal.h from nc4internal.h 2018-11-08 07:09:11 -07:00
Ed Hartnett
9929e7acf9 moving att HDF5 stuff to libhdf5 2018-11-07 11:33:02 -07:00
Ed Hartnett
35cfaefc0c closing HDF5 objects separately 2018-10-23 05:39:00 -06:00
Ed Hartnett
8390d572ad
Merge branch 'master' into ejh_hdf5_sep_next 2018-09-06 17:30:37 -06:00
Ward Fisher
784d777bff Merge branch 'master' into provenance.dmh 2018-09-06 15:13:09 -06:00
Ed Hartnett
80dc5bc0f7 merged master 2018-09-06 12:24:29 -06:00