Commit Graph

58 Commits

Author SHA1 Message Date
Dennis Heimbigner
f6e25b695e Fix additional S3 support issues
re: https://github.com/Unidata/netcdf-c/issues/2117
re: https://github.com/Unidata/netcdf-c/issues/2119

* Modify libsrc to allow byte-range reading of netcdf-3 files in private S3 buckets; this required using the aws sdk. Also add a test case (see the sketch after this list).
* The aws sdk can sometimes cause problems if the Aws::ShutdownAPI function is not called, so add optional atexit() support to ensure it is called. This is disabled for Windows.
* Add documentation to nczarr.md on how to build and use the aws sdk under windows. Currently it builds, but testing fails.
* Switch testing from stratus to the Unidata bucket on S3.
* Improve support for the s3: URL protocol.
* Add an S3-specific utility code file: ds3util.c
* Modify NC_infermodel to attempt to read the magic number of byte-ranged files in S3.
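
A minimal sketch of how this byte-range access might be exercised from client code; the bucket and object names are hypothetical, and the "#mode=bytes" fragment requests byte-range reading of the classic-format file:

````
#include <stdio.h>
#include <netcdf.h>

int main(void) {
    int ncid, stat;
    /* Hypothetical object in a private S3 bucket; "#mode=bytes" selects byte-range reading. */
    const char* url = "s3://example-bucket/data/classic_file.nc#mode=bytes";
    if ((stat = nc_open(url, NC_NOWRITE, &ncid)) != NC_NOERR) {
        fprintf(stderr, "nc_open failed: %s\n", nc_strerror(stat));
        return 1;
    }
    /* ... inquire and read as usual ... */
    nc_close(ncid);
    nc_finalize(); /* ensures SDK shutdown even when the atexit() support is disabled */
    return 0;
}
````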

## Misc.

* Move and rename the core S3 SDK wrapper code (libnczarr/zs3sdk.cpp) to libdispatch since it is now used in libsrc as well as libnczarr.
* Add calls to nc_finalize in the utilities in case atexit is disabled.
* Add a header-only JSON parser to the distribution rather than as a built source.
2021-10-29 20:06:37 -06:00
Dennis Heimbigner
289103d2b1 Merge branch 'master' into zarrs3.dmh 2021-10-07 15:10:03 -06:00
Dennis Heimbigner
6b69b9c52c Significantly Improve Amazon S3 Cloud Storage Support
## S3 Related Fixes

* Add comprehensive support for specifying AWS profiles to provide access credentials.
* Parse the files "~/.aws/config" and "~/.aws/credentials" to provide credentials for the HDF5 ROS3 driver and to locate the default region.
* Add a function to obtain the currently active S3 credentials. The search rules are defined in docs/nczarr.md.
* Provide documentation for the new features.
* Modify the struct NCauth (in include/ncauth.h) to replace specific S3 credentials with a profile name.
* Add a unit test to test the operation of profile and credentials management.
* Add support for URLs of the form "s3://<bucket>/<key>"; this requires obtaining a default region.
* Allow the specification of a profile and/or region in a URL of the form "#mode=nczarr,...&aws.region=...&aws.profile=..." (see the sketch after this list).
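
An illustrative open call combining the new URL form with the fragment keys (the bucket, key, region, and profile name here are all hypothetical):

````
#include <netcdf.h>

/* Sketch: open an NCZarr dataset in S3 using an AWS profile for credentials. */
static int open_with_profile(int* ncidp) {
    const char* url =
        "s3://example-bucket/path/to/dataset"
        "#mode=nczarr,s3&aws.region=us-east-1&aws.profile=unidata";
    return nc_open(url, NC_NOWRITE, ncidp);
}
````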

## Misc. Fixes

* Move the ezxml code to libdispatch so that it can be used both by DAP4 and nczarr.
* Modify nclist to provide a deep clone operation.
* Modify ncuri to provide a deep clone operation.
* Modify the .rc file format to allow the specification of a path to be tested when looking for an entry in the .rc file.
* Ensure that the NC_rcload function is called.
* Modify nchttp to support setting request headers.
2021-09-27 18:36:33 -06:00
Edward Hartnett
0ce463761c
Merge branch 'main' into ejh_quantize_2 2021-09-07 10:44:45 -06:00
Dennis Heimbigner
11fe00ea05 Add filter support to NCZarr
Filter support has three goals:

1. Use the existing HDF5 filter implementations,
2. Allow filter metadata to be stored in the NumCodecs metadata format used by Zarr,
3. Allow filters to be used even when HDF5 is disabled

Detailed usage directions are defined in docs/filters.md.

For now, the existing filter API is left in place. So filters
are defined using ''nc_def_var_filter'' using the HDF5 style
where the id and parameters are unsigned integers.
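
A minimal sketch of that existing API, assuming the standard HDF5 deflate filter (id 1) with a single compression-level parameter:

````
#include <netcdf.h>
#include <netcdf_filter.h>

/* Sketch: attach the HDF5 deflate filter (id 1) at level 5 to a variable,
   using the HDF5-style id/unsigned-parameter form described above. */
static int add_deflate(int ncid, int varid) {
    unsigned int level = 5;  /* single unsigned-int parameter */
    return nc_def_var_filter(ncid, varid, 1 /* deflate */, 1, &level);
}
````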

This is a big change since filters affect many parts of the code.

In the following, the terms "compressor", "filter", and "codec" are generally
used synonymously.

### Filter-Related Changes:
* In order to support dynamic loading of shared filter libraries, a new library was added in the libncpoco directory; it helps to isolate dynamic loading across multiple platforms.
* Provide a json parsing library for use by plugins; this is created by merging libdispatch/ncjson.c with include/ncjson.h.
* Add a new _Codecs attribute to allow clients to see what codecs are being used; let ncdump -s print it out.
* Provide special headers to help support compilation of HDF5 filters when HDF5 is not enabled: netcdf_filter_hdf5_build.h and netcdf_filter_build.h.
* Add a number of new tests to test the new nczarr filters.
* Let ncgen parse _Codecs attribute, although it is ignored.

### Plugin directory changes:
* Add support for the Blosc compressor; this is essential because it is the most common compressor used in Zarr datasets. This also necessitated adding a CMake FindBlosc.cmake file.
* Add NCZarr support for the big-four filters provided by HDF5: shuffle, fletcher32, deflate (zlib), and szip
* Add a Codec defaulter (see docs/filters.md) for the big four filters.
* Make plugins work on Windows by properly adding __declspec declarations.

### Misc. Non-Filter Changes
* Replace most uses of USE_NETCDF4 (deprecated) with USE_HDF5.
* Improve support for caching
* More fixes for path conversion code
* Fix misc. memory leaks
* Add new utility -- ncdump/ncpathcvt -- that does more or less the same thing as cygpath.
* Add a number of new tests to test the non-filter fixes.
* Update the parsers
* Convert most instances of '#ifdef _MSC_VER' to '#ifdef _WIN32'
2021-09-02 17:04:26 -06:00
Edward Hartnett
0f26083f4d preparing to apply bitgroom algorithm 2021-08-25 01:31:26 -06:00
Edward Hartnett
3202b8b37c adding quantize functions to all the dispatch tables 2021-08-24 01:26:44 -06:00
Dennis Heimbigner
a58d243245 Fix library crash 2021-08-11 12:28:06 -06:00
Dennis Heimbigner
ec258bf314 Fix a number of bugs in the nczarr code.
re: Issues https://github.com/Unidata/netcdf-c/issues/2063, https://github.com/Unidata/netcdf-c/issues/2062, https://github.com/Unidata/netcdf-c/issues/2061, https://github.com/Unidata/netcdf-c/issues/2059

1. Support "fill_value: null" (https://github.com/Unidata/netcdf-c/issues/2063).
2. Handle the dtype case "|u1" (https://github.com/Unidata/netcdf-c/issues/2062).
3. When writing a pure Zarr format file, some nczarr attributes inadvertently crept in (https://github.com/Unidata/netcdf-c/issues/2061).
4. If there is no fill value, then the .zarray fill_value key should have the value null rather than being left out (https://github.com/Unidata/netcdf-c/issues/2059).

Hat tip: Even Rouault
2021-08-09 17:05:02 -06:00
Dennis Heimbigner
9417055c3e Fix bad chunkpath calculation 2021-07-18 16:20:22 -06:00
Dennis Heimbigner
d953899559 Move to Version 2 NCZarr Extended Meta-Data
re: https://github.com/zarr-developers/zarr-specs/issues/41

After discussions with the Zarr community, it was decided to
convert to a new representation of the NCZarr meta-data extensions: version 2.
These extensions store the information necessary to map the Zarr data model
to the netcdf-4 data model.

The basic change is to remove the NCZarr specific objects: .nczarr, .nczgroup, .nczarray, and .nczattr.
The contents of these objects are moved into the corresponding existing Zarr objects as special keys. The mapping is as follows:

* ''.nczarr'' => ''/.zgroup/_NCZARR_SUPERBLOCK_''
* ''.nczgroup'' => ''.zgroup/_NCZARR_GROUP_''
* ''.nczarray'' => ''.zarray/_NCZARR_ARRAY_''
* ''.nczattr'' => ''.zattr/_NCZARR_ATTR_''

Backward compatibility is maintained by looking for the object ''/.nczarr''
and if found, then assuming that the dataset is in the older version 1 format.
This compatibility only supports reading of such version 1 datasets.

Documentation and test cases are also added.

Misc. Other Changes:
1. The json parsing code was added to the general library instead of nczarr only (ncjson.c, ncjson.h).
2. Improved support for different platform paths by allowing conversion
   to a single common path representation.
3. Add some new error codes.
4. Modify nccopy usage to mention the new chunking specification.
2021-07-17 16:55:30 -06:00
Dennis Heimbigner
1c3e86440e NCZarr is outputting fill value as an array instead of a singleton.
re: https://github.com/Unidata/netcdf-c/issues/2008

The fill_value key in a .zarray should have a single value,
but it currently has a value that is a 1 element array.
Fix is to pull out the single element.

This occurred because the fill_value is taken from the _FillValue attribute,
and all netcdf attributes are stored as arrays, of length 1 in this case.

In a related change, any attribute with length 1 is now stored in .zattrs
as a singleton rather than an array of length 1. This make the generated
Zarr more consistent with other Zarr implementations.
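
For illustration (the value is hypothetical), the .zarray entry changes from the old one-element-array form to a singleton:

````
Before: "fill_value": [-9999]
After:  "fill_value": -9999
````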

Misc. other changes:

1. Fix bug in testpathcvt caused by the way various shells handle backslash escapes.
2. Fix bug in testauth where test for MSVC is wrong.
2021-06-05 14:12:21 -06:00
Greg Sjaardema
e2d0bbb8ea
Merge branch 'master' into eliminate_need_for_hdf5-1.6-API 2021-05-28 07:11:13 -06:00
Ward Fisher
cc618af959
Merge branch 'master' into badfilter.dmh 2021-05-27 12:30:39 -06:00
Dennis Heimbigner
537f41aeb3 Fix 2 for cygwin build 2021-05-19 17:41:41 -06:00
Dennis Heimbigner
453ad847b9 turn off tracing 2021-05-19 14:38:07 -06:00
Dennis Heimbigner
fba7198039 Fix NCclosedir in dpathmgr.c
re: Issue https://github.com/Unidata/netcdf-c/issues/1999

NCclosedir code is incorrect. Fix.
Note that this issue crops up when using a non-VisualStudio windows build
such as Mingw, because Mingw defines dirent.h but Visual Studio does not.

Addendum:
Fix some mingw bugs:

1. Modify XGetopt.h to be conditional on _WIN32 instead of _MSC_VER.
2. Make sure sys/stat.h is included in ncpathmgr.h
2021-05-19 14:19:28 -06:00
Dennis Heimbigner
74e7812d83 Improve error message when non-existent filter is encountered.
re: https://github.com/Unidata/netcdf-c/issues/1996

Improve the error message and the location that is reported when reading a variable that uses a filter that is not available on the reading platform.

This requires checking the availability of the filter, recording it, and failing when any attempt is made to read or write that variable. A test case was added for this in tst_filter.sh. Also, a LOG level 0 message is generated giving the variable and the filter id.

Note that by design if there is no attempt to read or write the variable, then no error is reported; this means that, for example, ncdump -h will list the filter even though it is not actually available. This is important for allowing a user to see the filter details.
2021-05-17 19:49:58 -06:00
Dennis Heimbigner
91168e33a0 Fix JSON quoted string processing in libnczarr
re: github issue https://github.com/Unidata/netcdf-c/issues/1982

The problem was that the libnczarr/zjson.c handling of strings with
embedded double quotes was wrong; a one-line fix.
Also added a test case.

Misc. other changes:

1. I discovered, en passant, that the handling of 64-bit constants
had an error that was fixed.
2. Cleanup of the constant conversion code to recurse on arrays of values.
2021-05-06 16:39:44 -06:00
Greg Sjaardema
cbcee382b0 Remove need for HDF5-1.6 API being defined 2021-04-28 13:59:24 -06:00
Dennis Heimbigner
1243c3d866 Allow .rc tests to work in parallel by isolation 2021-04-25 22:02:29 -06:00
Dennis Heimbigner
74b40fd788 Upgrade the nczarr code to match Zarr V2
Re: https://github.com/zarr-developers/zarr-python/pull/716

The Zarr version 2 spec has been extended to include the ability
to choose the dimension separator in chunk name keys. The legal
separators have been extended from {'.'} to {'.', '/'}. So now it
is possible to use a key like "0/1/2/0" for chunk names.

This PR implements this for NCZarr. The V2 spec now says that
this separator can be set on a per-variable basis. For now, I
have chosen to allow this to be set only globally by adding a key
named "ZARR.DIMENSION_SEPARATOR=<char>" in the
.daprc/.dodsrc/ncrc file. Currently, the only legal separator
characters are '.' (the default) and '/'. On writing, this key
will only be written if its value is different than the default.
This change caused problems because a separator of '/'
is difficult to parse when keys/paths also use '/' as the path separator.
A test case was added for this.
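
For example, a single line like the following in the .ncrc/.daprc/.dodsrc file would select the slash separator (the key name comes from the text above; the value shown is illustrative):

````
ZARR.DIMENSION_SEPARATOR=/
````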

Additionally, make nczarr be enabled by default. This required
some additional changes so that if zip and/or the AWS S3 SDK are unavailable,
then they are disabled for NCZarr.

In addition the following unrelated changes were made.

1. Tested that pure-zarr mode could read an nczarr formatted store.
1. The .rc file handling now merges all known .rc files (.ncrc, .daprc, and .dodsrc) in that order, using those in HOME first, then those in the current directory. For duplicate entries, the later ones override the earlier ones. This change removes some of the conflicts inherent in the current .rc file load process. A set of test cases was also added.
1. Re-order tests in configure.ac and CMakeLists.txt so that if libcurl
   is not found then the other options that depend upon it
   are properly disabled.
1. I decided that xarray support should be enabled by default for pure
   zarr. In order to allow disabling, I added a new mode flag "noxarray".
1. Certain tests in nczarr_test depend on use of .dodsrc. In order for these
   to work when testing in parallel, some inter-test dependencies needed to
   be added.
1. Improved authorization testing to use changes in thredds.ucar.edu
2021-04-24 19:48:15 -06:00
Dennis Heimbigner
0454d8e235 Addendum: This PR has been extended to include
interoperability fixes. We were given a Zarr format dataset
stored as a directory+file tree. This dataset uses the XArray
conventions and was generated by some non-Unidata Zarr implementation.
In attempting to process it with NCZarr, several interoperability
problems were discovered and fixed. This gives us more confidence
that NCZarr -- using pure zarr -- can interoperate with other
Zarr implementations.

Specific changes:
* Add test nczarr_test/run_interop.sh
* Support attributes with single value not enclosed in JSON array tags.
* Add mode inferencing and use it in nczarr_test/run_purezarr.sh
* Reduce size of tst_err_enddef.nc because it is more than 3 GB.
2021-04-02 18:39:50 -06:00
Dennis Heimbigner
e038553abe Update RELEASE_NOTES.md 2021-04-01 14:12:49 -06:00
Dennis Heimbigner
e7c4e7ead1 add zjson fix 2021-04-01 13:56:04 -06:00
Greg Sjaardema
7732ef1c88
Fix if statement to apply to fflush
Even though the `fflush()` is on the same line as the `fprintf(stderr, ...)` statement, it is not part of the `if` and is therefore executed even if `wdebug` is not active. This results in `fflush()` being called more often than it should be.

Added parentheses to properly protect the fflush call.
2021-03-18 13:49:07 -06:00
Dennis Heimbigner
0b7a5382e7 Codify cross-platform file paths
The netcdf-c code has to deal with a variety of platforms:
Windows, OSX, Linux, Cygwin, MSYS, etc.  These platforms differ
significantly in the kind of file paths that they accept.  So in
order to handle this, I have created a set of replacements for
the most common file system operations such as _open_ or _fopen_
or _access_ to manage the file path differences correctly.

A more limited version of this idea was already implemented via
the ncwinpath.h and dwinpath.c code. So this can be viewed as a
replacement for that code. And in many cases, the only
change that was required was to replace '#include <ncwinpath.h>'
with '#include <ncpathmgt.h>' and then replace file operation
calls with the NCxxx equivalent from ncpathmgr.h. Note that
recently, the ncwinpath.h was renamed ncpathmgmt.h, so this pull
request should not require dealing with winpath.

The heart of the change is include/ncpathmgmt.h, which provides
alternate operations such as NCfopen or NCaccess and which properly
parse and rebuild path arguments to work for the platform on which
the code is executing. This mostly matters for Windows because of the
way that it uses backslash and drive letters, as compared to *nix*.
One important feature is that the user can do string manipulations
on a file path without having to worry too much about the platform
because the path management code will properly handle most mixed cases.
So one can for example concatenate a path suffix that uses forward
slashes to a Windows path and have it work correctly.

The conversion code is in libdispatch/dpathmgr.c, and the
important function there is NCpathcvt which does the proper
conversions to the local path format.

As a rule, most code should just replace their file operations with
the corresponding NCxxx ones defined in include/ncpathmgmt.h. These
NCxxx functions all call NCpathcvt on their path arguments before
executing the actual file operation.
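
A hedged sketch of that replacement pattern, assuming NCfopen mirrors fopen's signature and taking the header name from the text above (the path is hypothetical):

````
#include <stdio.h>
#include "ncpathmgmt.h"   /* provides the NCxxx wrappers such as NCfopen and NCaccess */

/* Instead of calling fopen() directly, use the wrapper, which runs the path
   through NCpathcvt so it works on Windows, Cygwin, MSYS, and *nix alike. */
static FILE* open_portably(void) {
    return NCfopen("d:/data/example.nc", "r");
}
````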

In some rare cases, the client may need to directly use NCpathcvt,
but this should be avoided as much as possible. If there is a need
for supporting a new file operation not already in ncpathmgmt.h, then
use the code in dpathmgr.c as a template. Also please notify Unidata
so we can include it as a formal part of our supported operations.
Also, if you see an operation in the library that is not using the
NCxxx form, then please submit an issue so we can fix it.

Misc. Changes:
* Clean up the utf8 testing code; it is impossible to get some
  tests to work under windows using shell scripts; the args do
  not pass as utf8 but as some other encoding.
* Added an extra utf8 test case: test_unicode_path.sh
* Add a true test for HDF5 1.10.6 or later because as noted in
  PR https://github.com/Unidata/netcdf-c/pull/1794,
  HDF5 changed its Windows file path handling.
2021-03-04 13:41:31 -07:00
Dennis Heimbigner
2afbdbd18f Add support for the XArray Zarr _ARRAY_DIMENSIONS attribute
The XArray implementation that uses Zarr for storage
provides a mechanism to simulate named dimensions.
It does this by adding a per-variable attribute called
_ARRAY_DIMENSIONS. This attribute contains a list of names
to be matched against the shape values of the variable.
In effect a named dimension is created with the name
_ARRAY_DIMENSIONS(i) and length shape(i) for all i
in range 0..rank(variable).
Both read and write support is provided.
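
For illustration, a variable whose shape is [100, 180, 360] might carry the attribute like this in its .zattrs (the dimension names are hypothetical):

````
{
  "_ARRAY_DIMENSIONS": ["time", "lat", "lon"]
}
````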

This XArray support is only invoked if the mode value
of "xarray" is defined. So for example, as in this URL.
````
https://s3.us-west-1.amazonaws.com/bucket/dataset#mode=nczarr,xarray,s3
````
Note that the "xarray" mode flag also implies mode flag "zarr", so the above
is equivalent to this URL.
````
https://s3.us-west-1.amazonaws.com/bucket/dataset#mode=nczarr,zarr,xarray,s3
````

The primary change to implement this was to unify the handling
of dimension references in libnczarr/zsync.

A test for this and other pure-zarr features was added as
nczarr_test/run_purezarr.sh

Other changes:
* Make sure distcheck leaves no files around.
* Change the special attribute flag DIMSCALEFLAG to HIDDENATTRFLAG
  to support the xarray attribute.
* Annotate the zmap implementations with feature flags such as
  WRITEONCE (for zip files).
2021-02-24 13:46:11 -07:00
Dennis Heimbigner
4ae71d3d73 appveyor fix 2021-01-28 20:31:16 -07:00
Dennis Heimbigner
e7d5f24078 Add zip file support
The primary change is to support the use of a zip file as a
storage format. Simultaneously the .nz4 support is made obsolete

Use of zip requires the libzip support library, so a number of
changes to the build files (Makefile.am, CMakeLists.txt) are
necessary to locate and incorporate libzip. The nczarr_test
tests are also changed to add zip testing.
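
A sketch of opening such a zip-backed store (the path is hypothetical; the "zip" mode flag selects the new storage format):

````
#include <netcdf.h>

/* Sketch: open an NCZarr dataset stored in a zip file. */
static int open_zip_store(int* ncidp) {
    const char* url = "file:///tmp/dataset.zip#mode=nczarr,zip";
    return nc_open(url, NC_NOWRITE, ncidp);
}
````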

Other changes:
* Make sure distcheck leaves no files around.
* Add some functions to netcdf_aux to export some functions of libnetcdf.
* Add a new error NC_EFOUND as the complement of NC_EEMPTY.
* Add tracing support to nclog and use it in libnczarr.
* Modify the zmap interface to support the writeonce semantics of zip.
* Create a new s3util.c to support a variety of S3 auxiliary functions.
* EXTERNL'ize a number of functions so they can be used in s3util.
* Add support for the S3 ListObjects CommonPrefixes mechanism
  to improve search.
* Add experimental support for running nczarr X s3 tests against
  the actual Amazon S3 cloud.
2021-01-28 20:11:01 -07:00
Dan Ibanez
9ad3e49371 more missing includes for MPI without wrapper 2021-01-19 10:33:08 -07:00
Dennis Heimbigner
93e9d92778 More NCZarr optimizations
* Replace wholevar with more useful wholechunk optimization
* Add optimization to read multiple values at one time
* Replace NCDEFAULT_get/put_vars with native nczarr versions.
* Clarify chunk projection computations
* zdebdispatch.h
* Add more chunking test cases and re-enable run_chunkcases
* If !szip, then suppress deflate interference test
* Make H5Znoop(1) filter produce more information
* cleanup bzlib.c API
2021-01-06 13:35:59 -07:00
Tim Gates
451230cf03
docs: fix simple typo, maximim -> maximum
There is a small typo in libnczarr/zvar.c.

Should read `maximum` rather than `maximim`.
2020-12-28 11:21:29 +11:00
Dennis Heimbigner
4614690d61 Fix some additional edge cases for mapping slices to chunks
The code for mapping slices to chunks is wrong for some cases.
Fix code and add additional chunking tests to cover edge cases.
2020-12-19 21:17:46 -07:00
Dennis Heimbigner
b0990a24d5 Remove some potentially harmful duplicate code 2020-12-17 12:52:35 -07:00
Dennis Heimbigner
d2316f866c Additional Fixes to NCZarr
Primary Fixes:
* Add a whole variable optimization -- used in the rare case that nc_get/put_vara covers the whole of a variable and the variable has a single chunk.
* Fix chunking error when stride causes whole chunks to be skipped.
* Fix some memory leaks
* Add test cases
* Add one performance test to nczarr_test/. This uses the timer utils from unit_test: timer_utils.[ch].
* Move ncdumpchunks utility from ncdump to nczarr_test

Misc. Other Changes:
* Make check for aws libraries conditional on --enable-nczarr-s3
* Remove all but one bm tests from nczarr_test until they are working.
* Remove another dependency on HDF5 from supposedly non-HDF5 specific code; specifically hdf5_log_hdf5.
* Make the BAIL2 macro be hdf5 specific and replace elsewhere with an HDF5 independent equivalent.
* Move hdf5cache.c to libsrc4/nc4cache.c because it is used by nczarr.
* Modify unit_tests so that some of them are run even if using Windows.
* Misc. small bug fixes, refactors, and memory leak fixes.
* Rename some conflicting tests for cmake.
* Attempted to make nc_perf work with cmake and failed.
2020-12-16 20:48:02 -07:00
Dennis Heimbigner
90fd1406bc Make use of clock_gettime be conditional.
Re: GH Issue https://github.com/Unidata/netcdf-c/issues/1900

Apparently the clock_gettime() function is not always available.
It is used in unit_test/tst_exhash.c and unit_test/tst_xcache.c.

To solve this, a number of things were changed:
* Move the timing code to a new file unit_test/timer_utils.[ch]
* Modify the timing code to choose one of several timing methods
depending on availability. The prioritized order is as follows:
    1. If Windows, use the QueryPerformanceCounter mechanism else
    2. Use clock_gettime if available else
    3. Use gettimeofday if available else
    4. Use getrusage if available

Note that the resolution of 3 and 4 is less than that of 1 or 2.
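
A minimal sketch of that selection order; the HAVE_* macro names are the conventional configure-generated ones and are assumed rather than taken from the actual code:

````
#ifdef _WIN32
#include <windows.h>
#else
#include <time.h>
#include <sys/time.h>
#include <sys/resource.h>
#endif

/* Return a timestamp in nanoseconds from the best available source. */
static long long now_ns(void) {
#if defined(_WIN32)
    LARGE_INTEGER freq, count;                 /* 1. QueryPerformanceCounter */
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&count);
    return (long long)(count.QuadPart * (1000000000.0 / (double)freq.QuadPart));
#elif defined(HAVE_CLOCK_GETTIME)
    struct timespec ts;                        /* 2. clock_gettime */
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1000000000LL + ts.tv_nsec;
#elif defined(HAVE_GETTIMEOFDAY)
    struct timeval tv;                         /* 3. gettimeofday (microsecond resolution) */
    gettimeofday(&tv, NULL);
    return tv.tv_sec * 1000000000LL + tv.tv_usec * 1000LL;
#else
    struct rusage ru;                          /* 4. getrusage (CPU time, coarser) */
    getrusage(RUSAGE_SELF, &ru);
    return ru.ru_utime.tv_sec * 1000000000LL + ru.ru_utime.tv_usec * 1000LL;
#endif
}
````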

Misc. Other Changes:
* Move the test in CMakeLists.txt that disables unit tests for WIN32 to unit_test/CMakeLists.txt since some unit tests actually work under Visual Studio.
* Fix some of the unit tests to work under Visual Studio
* Fix problem with using remove() in zmap_nzf.c
* Remove some warnings about use of EXTERNL
2020-12-06 18:19:53 -07:00
Dennis Heimbigner
c25ebd7787 Fix a number of CMake problems 2020-11-19 22:24:13 -07:00
Dennis Heimbigner
eb3d9eb0c9 Provide a Number of fixes/improvements to NCZarr
Primary changes:
* Add an improved cache system to speed up performance.
* Fix NCZarr to properly handle scalar variables.

Misc. Related Changes:
* Added unit tests for extendible hash and for the generic cache.
* Add config parameter to set size of the NCZarr cache.
* Add initial performance tests but leave them unused.
* Add CRC64 support.
* Move location of ncdumpchunks utility from /ncgen to /ncdump.
* Refactor auth support.

Misc. Unrelated Changes:
* More cleanup of the S3 support
* Add support for S3 authentication in .rc files: HTTP.S3.ACCESSID and HTTP.S3.SECRETKEY.
* Remove the hashkey from the struct OBJHDR since it is never used.
2020-11-19 17:01:04 -07:00
Dennis Heimbigner
d631656966 Remove trailing comma from _NCProperties attribute value.
If NCPROPERTIES_EXTRA (see configure.ac) is defined
but is null or empty, then an extra comma is being generated
at the end of _NCProperties global attribute.

Soln: check for null/empty NCPROPERTIES_EXTRA value.
2020-11-14 15:07:08 -07:00
Dennis Heimbigner
25d2e05444 Prepare for the path management code
Rename some files in prep for eventual implementation
of more comprehensive cross-platform file path management.
2020-10-13 19:12:15 -06:00
Dennis Heimbigner
91f3e75cab Fix missing casts of var->filters 2020-10-09 21:26:29 -06:00
Dennis Heimbigner
aeb3ac2809 Mostly revert the filter code to reduce its complexity of use.
re: https://github.com/Unidata/netcdf-c/issues/1836

Revert the internal filter code to simplify it. From the user's
point of view, the only visible changes should be:

1. The functions that convert text to filter specs have had their signatures reverted and have been moved to netcdf_aux.h
2. Some filter API functions now return NC_ENOFILTER when an inquiry is made about a filter that is not defined.

Internally, the dispatch table has been modified to get rid of the filter_actions
entry and associated complex structures. It has been replaced with
inq_var_filter_ids and inq_var_filter_info entries and the dispatch table
version has been bumped to 3. Corresponding NOOP and NOTNC4 functions
were added to libdispatch/dnotnc4.c. Also, the filter_action entries
in dispatch tables were replaced for all dispatch code bases (HDF5, DAP2,
etc). This should only impact UDF users.
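
A sketch of what an inquiry looks like through the corresponding public wrappers; the error handling shown (including the NC_ENOFILTER check) is illustrative rather than taken from the library's tests:

````
#include <stdio.h>
#include <stdlib.h>
#include <netcdf.h>
#include <netcdf_filter.h>

/* Sketch: list the filters attached to a variable via the inquiry API. */
static int show_filters(int ncid, int varid) {
    size_t nfilters, nparams;
    unsigned int* ids;
    int stat;

    if ((stat = nc_inq_var_filter_ids(ncid, varid, &nfilters, NULL))) return stat;
    ids = (unsigned int*)malloc(nfilters * sizeof(unsigned int));
    if ((stat = nc_inq_var_filter_ids(ncid, varid, &nfilters, ids))) return stat;

    for (size_t i = 0; i < nfilters; i++) {
        stat = nc_inq_var_filter_info(ncid, varid, ids[i], &nparams, NULL);
        if (stat == NC_ENOFILTER) continue;   /* filter not defined for this variable */
        printf("filter id %u has %zu parameters\n", ids[i], nparams);
    }
    free(ids);
    return NC_NOERR;
}
````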

In the process, it became clear that the form of the filters
field in NC_VAR_INFO_T was format dependent, so I converted it to
be of type void* and pushed its management into the various dispatch
code bases. Specifically libhdf5 and libnczarr now manage the filters
field in their own way.

The auxiliary functions for parsing textual filter specifications
were moved to netcdf_aux.h and were renamed to the following:
* ncaux_h5filterspec_parse
* ncaux_h5filterspec_parselist
* ncaux_h5filterspec_free
* ncaux_h5filter_fix8

Misc. Other Changes:

1. Document NUG/filters.md updated to reflect the changes above.
2. All the old data types (structs and enums)
   used by filter_actions actions were deleted.
   The exception is the NC_H5_Filterspec because it is needed
   by ncaux_h5filterspec_parselist.
3. Clientside filters were removed -- another enhancement
   for which no-one ever asked.
4. The ability to remove filters was itself removed.
5. Some functionality needed by nczarr was moved from libhdf5
   to libsrc4 e.g. nc4_find_default_chunksizes
6. All the filterx code was removed
7. ncfilter.h and nc4filter.c no longer used

Misc. Unrelated Changes:

1. The nczarr_test makefile clean was leaving some directories; so
   add clean-local to take care of them.
2020-09-27 12:43:46 -06:00
Dennis Heimbigner
c07f41db7d Fix issue 1839 -- missing symbols under OSX 2020-09-15 13:19:57 -06:00
Dennis Heimbigner
2f0a6d22e9 Fix error where not converting fill data
re: Github Issue https://github.com/Unidata/netcdf-c/issues/1826

It turns out that the common get code (NC4_get_vars) in libhdf5
(and libnczarr) has an optimization where it does not attempt to
read from the file if the data being read is all fill values. Rather it
just fills the output buffer with the fill value.  The problem
is that -- in that case -- it forgets that conversion might still be
needed.  So the conversion never occurs and the raw bits of
the fill data are stored directly into the memory space.

Solution: move some code around to properly do the
conversion no matter how the data was obtained.

Added test cases nc_test4/test_fillonly.sh and
nczarr_test/test_fillonlyz.sh
2020-09-12 14:49:59 -06:00
Dennis Heimbigner
62a4cc1ae0 Fix nccopy -c dim/x to actually use the dim/x value.
As it was, nccopy -c dim/x was sometimes being ignored, so
modify nccopy to properly take it into account. This also required
a change to the nczarr code because it was not applying default
chunking in the same way as libhdf5.

Modify ncdump/tst_nccopy4.sh to test this feature properly.
Also add a similar test to nczarr_test.

Additionally, fix some other things that were causing Visual
Studio builds with testing to not work.
* fix curl testing under CMake to properly handle case
  where DAP is disabled, but byterange support is enabled.
* properly test and/or define uintptr_t
* Convert _O_XXX to O_XXX flags used by open();
2020-09-01 13:44:24 -06:00
Ward Fisher
80cf5e3d79 Modify isnan() operation 2020-08-29 18:42:20 -06:00
Ward Fisher
31dee0c4da
Revert "Revert "Fix nczarr-experimental: improve build support, disengage hdf5 vs netcdf4 flags, and find AWS libraries"" 2020-08-17 19:15:47 -06:00
Ward Fisher
16c27ca13f
Revert "Fix nczarr-experimental: improve build support, disengage hdf5 vs netcdf4 flags, and find AWS libraries" 2020-08-17 15:51:01 -06:00
Dennis Heimbigner
d85bb6fe20 The big change for this commit is to complete the
disengagement of enable-netcdf4 from enable-hdf5.
That is, with the advent of nczarr, it is possible
to turn off hdf5 but still need netcdf-4 enabled
because nczarr uses libsrc4, but not libhdf5.
This change involves a bunch of things:
1. Modify configure.ac and CMakelist to make enable_hdf5
   control if hdf5 support is provided. For back compatibility,
   disable-netcdf4 is treated as disable-hdf5. But internally,
   netcdf4 support is controlled only by the enabling of formats
   that require it.
2. In support of #1, modify .travis.yml to use enable/disable-hdf5
   instead of enable/disable-netcdf4.
3. test_common.in is modified to track selected features,
   including enable-hdf5 and enable-s3-tests. This is used in
   selected tests that mix netcdf-3 and netcdf4 tests.
4. The conflation of USE_HDF5 and USE_NETCDF4 is common in
   code, tests, and build files, so all of those had to be weeded out.
5. It turns out that some of the NC4_dim functions really are HDF5 specific,
   but are not treated as such. So they are moved from nc4dim.c to
   hdf5dim.c or hdf5dispatch.c
6. Some generic functions in libhdf5 can be (and were) moved to libsrc4.
2020-08-12 15:42:50 -06:00