Commit Graph

631 Commits

Author SHA1 Message Date
Edward Hartnett
0ce463761c
Merge branch 'main' into ejh_quantize_2 2021-09-07 10:44:45 -06:00
Ward Fisher
0a4f4e16ed
Merge pull request #2098 from DennisHeimbigner/fortcache.dmh
Make the fortran cache API always be defined.
2021-09-07 10:30:00 -06:00
Dennis Heimbigner
11fe00ea05 Add filter support to NCZarr
Filter support has three goals:

1. Use the existing HDF5 filter implementations,
2. Allow filter metadata to be stored in the NumCodecs metadata format used by Zarr,
3. Allow filters to be used even when HDF5 is disabled

Detailed usage directions are defined in docs/filters.md.

For now, the existing filter API is left in place. So filters
are defined using `nc_def_var_filter` in the HDF5 style,
where the id and parameters are unsigned integers.
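For illustration, a minimal sketch of attaching deflate through this API (the ncid/varid values are assumed to come from earlier nc_create/nc_def_var calls):

````
#include <netcdf.h>
#include <netcdf_filter.h>

/* Sketch: attach zlib deflate (HDF5 filter id 1) with one unsigned-int
 * parameter, the compression level; error checking omitted. */
unsigned int level = 5;
int stat = nc_def_var_filter(ncid, varid, 1 /*H5Z_FILTER_DEFLATE*/,
                             1 /*nparams*/, &level);
````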

This is a big change since filters affect many parts of the code.

In the following, the terms "compressor", "filter", and "codec" are
used synonymously.

### Filter-Related Changes:
* In order to support dynamic loading of shared filter libraries, a new library was added in the libncpoco directory; it helps to isolate dynamic loading across multiple platforms.
* Provide a json parsing library for use by plugins; this is created by merging libdispatch/ncjson.c with include/ncjson.h.
* Add a new _Codecs attribute to allow clients to see what codecs are being used; let ncdump -s print it out.
* Provide special headers to help support compilation of HDF5 filters when HDF5 is not enabled: netcdf_filter_hdf5_build.h and netcdf_filter_build.h.
* Add a number of new tests for the new NCZarr filters.
* Let ncgen parse _Codecs attribute, although it is ignored.

### Plugin directory changes:
* Add support for the Blosc compressor; this is essential because it is the most common compressor used in Zarr datasets. This also necessitated adding a CMake FindBlosc.cmake file.
* Add NCZarr support for the big-four filters provided by HDF5: shuffle, fletcher32, deflate (zlib), and szip
* Add a Codec defaulter (see docs/filters.md) for the big four filters.
* Make plugins work with Windows by properly adding __declspec declarations.

### Misc. Non-Filter Changes
* Replace most uses of USE_NETCDF4 (deprecated) with USE_HDF5.
* Improve support for caching
* More fixes for path conversion code
* Fix misc. memory leaks
* Add new utility -- ncdump/ncpathcvt -- that does more or less the same thing as cygpath.
* Add a number of new tests for the non-filter fixes.
* Update the parsers
* Convert most instances of '#ifdef _MSC_VER' to '#ifdef _WIN32'
2021-09-02 17:04:26 -06:00
Dennis Heimbigner
b81f8b676a Make the fortran cache API always be defined.
re: Issue https://github.com/Unidata/netcdf-c/issues/2096

The methods nc_set_var_chunk_cache_ints and nc_def_var_chunking_ints
are Fortran entry points for accessing the cache. They are not defined
if netcdf-c is built with --disable-hdf5.

Fix is to create dummy versions that do nothing and return NC_NOERR
when invoked. These dummy versions are defined when USE_HDF5 is false.
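A minimal sketch of what such a dummy version could look like (the exact signature is an assumption based on the description above):

````
/* Hypothetical sketch of a dummy Fortran entry point when HDF5 is
 * disabled; the real signature may differ. */
#ifndef USE_HDF5
int
nc_set_var_chunk_cache_ints(int ncid, int varid, int size,
                            int nelems, int preemption)
{
    return NC_NOERR; /* no-op: there is no HDF5 cache to configure */
}
#endif
````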
2021-09-01 14:10:02 -06:00
Edward Hartnett
eabbd686b0 bitgroom working for floats 2021-08-29 00:30:17 -06:00
Edward Hartnett
4f96fcc7b2 improved doxygen documentation 2021-08-26 10:46:43 -06:00
Edward Hartnett
d29436c99d improved doxygen documentation 2021-08-26 10:39:27 -06:00
Tobias Kölling
3eaecb41b7 hand over udata to h5->mem a little earlier
I think udata will be freed by the callback as soon as the callbacks are
set and H5Pclose is called, so the backward link should be set up at
that place.

Without this change, local_image_free may be called without a proper
reference to udata, which may raise an assertion:
assert(udata->fapl_image_ptr == udata->app_image_ptr);
while trying to open a broken file.
2021-08-26 16:12:24 +02:00
Edward Hartnett
b2c0bb9810 more quantize testing 2021-08-25 01:54:25 -06:00
Edward Hartnett
a02faa0cb5 more testing of quantize setting 2021-08-25 01:45:38 -06:00
Edward Hartnett
0f26083f4d preparing to apply bitgroom algorithm 2021-08-25 01:31:26 -06:00
Tobias Kölling
b58a3ff07a allow missing udata when closing file with abort=1
If a memory-backed file is closed due to an aborted open (i.e. an
attempt was made to open a broken file), udata may not be set. In this
case, the assertion checking for udata should not be triggered.
2021-08-24 17:11:05 +02:00
Edward Hartnett
c9eca4bbca moving function 2021-08-24 04:09:57 -06:00
Edward Hartnett
148706b657 now reading quantize attribute to get settings 2021-08-24 03:35:52 -06:00
Edward Hartnett
233ddfb0a4 further development 2021-08-24 02:57:49 -06:00
Edward Hartnett
24ed2a40c4 fixed comment 2021-08-24 02:09:40 -06:00
Edward Hartnett
d053418867 merged nc4hdf.c with changes from master branch 2021-08-24 02:07:43 -06:00
Edward Hartnett
d6d9825b2c now quantizing with inq function in dispatch table 2021-08-24 01:53:16 -06:00
Edward Hartnett
3202b8b37c adding quantize functions to all the dispatch tables 2021-08-24 01:26:44 -06:00
Edward Hartnett
dabe008ad0 further preparation for try 2 at quantizing 2021-08-24 00:55:51 -06:00
Ward Fisher
e06d8b744f Merge branch 'patch-48' of https://github.com/gsjaardema/netcdf-c into v4.8.1-wellspring.wif 2021-08-16 10:37:52 -06:00
Greg Sjaardema
3ce6df07ba Detect attribute creation order tracking setting
When opening an existing file for NC_WRITE access,
check whether the file was created originally with
attribute creation order tracking disabled and if
so, use that setting for subsequent attribute creation.

Also check the `mode` passed in to the open call
in case the application is explicitly disabling the
attribute creation order tracking with the NC_NOATTRCREORD flag.
2021-08-12 13:22:49 -06:00
Greg Sjaardema
a611f19b32 Finish argument name refactoring... 2021-08-10 09:03:21 -06:00
Greg Sjaardema
d2c3165664 WIP: attribute creation order on/off
Work in progress / Proof of concept:

Add a capability to disable the tracking of attribute creation order.
See #2054 for details.

This PR adds a `NC_NOATTCREORD` define which can be passed in the
`mode` argument to `nc_create`.  If it is present, then the
calls that enable attribute creation order tracking are disabled.
This should only be used for files in which you *know* that the
ordering of the attributes does not matter to *any* potential
readers of this database.
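A sketch of the intended usage (NC_NOATTCREORD is the define proposed by this PR, so treat it as provisional):

````
/* Sketch: create a netCDF-4 file without attribute creation order
 * tracking; NC_NOATTCREORD is the flag proposed in this PR. */
int ncid;
int stat = nc_create("out.nc", NC_NETCDF4 | NC_NOATTCREORD, &ncid);
````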
2021-08-06 13:30:47 -06:00
Greg Sjaardema
b9d192d0c4
Only write the coord dimids if ndims >= 1
It looks like some vars have ndims==0, in which case the coord_dimids should not be written. Modify the patch to catch those cases.
2021-08-04 09:49:48 -06:00
Greg Sjaardema
7b6f11c544
The coord dimids should be written for all variables
See discussion in #1279
2021-08-03 13:36:34 -06:00
Edward Hartnett
c77c4a9a40 fixing H5Literate() API compatibility problem 2021-08-03 02:27:57 -06:00
Ward Fisher
84f0696e7d
Merge pull request #2036 from Unidata/gh1983.wif
Address optimization issue
2021-07-29 11:15:29 -06:00
Ward Fisher
9f798e2ed6 Merge branch 'virtual_datasets' of https://github.com/d70-t/netcdf-c into gh1983.wif 2021-07-19 09:44:35 -07:00
Bruno Pagani
74fe2fe95b
libhdf5/H5FDhttp: add missing semicolons to H5Epush_ret
In HDF5 1.12.1, this was changed from optional to required per the changelog:

>H5Epush_ret() is a function-like macro that has been changed to
>contain a `do {} while(0)` loop. Consequently, a trailing semicolon
>is now required to end the `while` statement. Previously, a trailing
>semi would work, but was not mandatory.

This should be backward compatible with older versions of HDF5.
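For example (a sketch based on the changelog text above; the macro arguments are placeholders):

````
/* Compiled with HDF5 <= 1.12.0, with or without the semicolon: */
H5Epush_ret(func, H5E_ERR_CLS, H5E_PLIST, H5E_BADVALUE, "bad value", -1)

/* Required form since 1.12.1, and still valid on older versions: */
H5Epush_ret(func, H5E_ERR_CLS, H5E_PLIST, H5E_BADVALUE, "bad value", -1);
````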
2021-07-18 20:12:47 +00:00
Greg Sjaardema
e2d0bbb8ea
Merge branch 'master' into eliminate_need_for_hdf5-1.6-API 2021-05-28 07:11:13 -06:00
Dennis Heimbigner
74e7812d83 Improve error message when non-existent filter is encountered.
re: https://github.com/Unidata/netcdf-c/issues/1996

Improve the error message, and the location where it is reported, when reading a variable that uses a filter that is not available on the reading platform.

This requires checking the availability of the filter, recording it, and failing when any attempt is made to read or write that variable. A test case was added for this in tst_filter.sh. Also, a LOG level 0 message is generated giving the variable and the filter id.

Note that by design if there is no attempt to read or write the variable, then no error is reported; this means that, for example, ncdump -h will list the filter even though it is not actually available. This is important for allowing a user to see the filter details.
2021-05-17 19:49:58 -06:00
Greg Sjaardema
f92b7a9505
Fix so it works with hdf5-1.8 also
Fix a bad change so the code can still compile with hdf5-1.8.x
2021-04-28 15:14:57 -06:00
Greg Sjaardema
cbcee382b0 Remove need for HDF5-1.6 API being defined 2021-04-28 13:59:24 -06:00
Dennis Heimbigner
e038553abe Update RELEASE_NOTES.md 2021-04-01 14:12:49 -06:00
Dennis Heimbigner
e7c4e7ead1 add zjson fix 2021-04-01 13:56:04 -06:00
Ward Fisher
ffa8a7067f Merge branch '951' of https://github.com/brtnfld/netcdf-c into 4.8.0-wellspring-prs.wif 2021-03-22 11:51:54 -06:00
Dennis Heimbigner
0b7a5382e7 Codify cross-platform file paths
The netcdf-c code has to deal with a variety of platforms:
Windows, OSX, Linux, Cygwin, MSYS, etc.  These platforms differ
significantly in the kind of file paths that they accept.  So in
order to handle this, I have created a set of replacements for
the most common file system operations such as _open_ or _fopen_
or _access_ to manage the file path differences correctly.

A more limited version of this idea was already implemented via
the ncwinpath.h and dwinpath.c code, so this can be viewed as a
replacement for that code. In many cases, the only change
required was to replace '#include <ncwinpath.h>'
with '#include <ncpathmgt.h>' and then replace file operation
calls with the NCxxx equivalent from ncpathmgr.h. Note that
recently, ncwinpath.h was renamed to ncpathmgmt.h, so this pull
request should not require dealing with winpath.

The heart of the change is include/ncpathmgmt.h, which provides
alternate operations such as NCfopen or NCaccess and which properly
parse and rebuild path arguments to work for the platform on which
the code is executing. This mostly matters for Windows because of the
way that it uses backslash and drive letters, as compared to *nix*.
One important feature is that the user can do string manipulations
on a file path without having to worry too much about the platform
because the path management code will properly handle most mixed cases.
So one can for example concatenate a path suffix that uses forward
slashes to a Windows path and have it work correctly.

The conversion code is in libdispatch/dpathmgr.c, and the
important function there is NCpathcvt which does the proper
conversions to the local path format.

As a rule, most code should just replace their file operations with
the corresponding NCxxx ones defined in include/ncpathmgmt.h. These
NCxxx functions all call NCpathcvt on their path arguments before
executing the actual file operation.
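A short sketch of the substitution pattern described above (the path value is illustrative):

````
#include <stdio.h>
#include "ncpathmgr.h"

/* Instead of fopen(path, "r"): NCfopen runs NCpathcvt on the path
 * first, so a forward-slash path works even on Windows. */
FILE* f = NCfopen("d:/data/test.nc", "r");
if (f != NULL) fclose(f);
````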

In some rare cases, the client may need to directly use NCpathcvt,
but this should be avoided as much as possible. If there is a need
for supporting a new file operation not already in ncpathmgmt.h, then
use the code in dpathmgr.c as a template. Also please notify Unidata
so we can include it as a formal part of our supported operations.
Also, if you see an operation in the library that is not using the
NCxxx form, then please submit an issue so we can fix it.

Misc. Changes:
* Clean up the utf8 testing code; it is impossible to get some
  tests to work under Windows using shell scripts; the args do
  not pass as utf8 but as some other encoding.
* Added an extra utf8 test case: test_unicode_path.sh
* Add a true test for HDF5 1.10.6 or later because as noted in
  PR https://github.com/Unidata/netcdf-c/pull/1794,
  HDF5 changed its Windows file path handling.
2021-03-04 13:41:31 -07:00
Dennis Heimbigner
2afbdbd18f Add support for the XArray Zarr _ARRAY_DIMENSIONS attribute
The XArray implementation that uses Zarr for storage
provides a mechanism to simulate named dimensions.
It does this by adding a per-variable attribute called
_ARRAY_DIMENSIONS. This attribute contains a list of names
to be matched against the shape values of the variable.
In effect a named dimension is created with the name
_ARRAY_DIMENSIONS(i) and length shape(i) for all i
in range 0..rank(variable).
Both read and write support is provided.

This XArray support is only invoked if the mode value
"xarray" is specified, as in this URL:
````
https://s3.us-west-1.amazonaws.com/bucket/dataset#mode=nczarr,xarray,s3
````
Note that the "xarray" mode flag also implies the mode flag "zarr", so the above
is equivalent to this URL:
````
https://s3.us-west-1.amazonaws.com/bucket/dataset#mode=nczarr,zarr,xarray,s3
````

The primary change to implement this was to unify the handling
of dimension references in libnczarr/zsync.

A test for this and other pure-zarr features was added as
nczarr_test/run_purezarr.sh

Other changes:
* Make sure distcheck leaves no files around.
* Change the special attribute flag DIMSCALEFLAG to HIDDENATTRFLAG
  to support the xarray attribute.
* Annotate the zmap implementations with feature flags such as
  WRITEONCE (for zip files).
2021-02-24 13:46:11 -07:00
Dan Ibanez
69182dc438 Ensure MPI header found without wrapper 2021-01-19 09:38:07 -07:00
Scot Breitenfeld
a464bea84b removed the check for H5Pset_libver_bounds (HAVE_H5PSET_LIBVER_BOUNDS) since the API
function was introduced in 1.8.0 (and some tests used H5Pset_libver_bounds without
checking HAVE_H5PSET_LIBVER_BOUNDS).
2021-01-11 16:36:23 -06:00
Scot Breitenfeld
46b2e1d666 removed the use of H5_VERSION_LT 2021-01-11 10:36:53 -06:00
Scot Breitenfeld
7bec179038 fixed syntax 2021-01-11 10:22:18 -06:00
Scot Breitenfeld
e5cb3c2a45 fixed #def 2021-01-11 10:19:59 -06:00
Scot Breitenfeld
388bbf4e73 Allow for the HDF5 file format to be compatible with features beyond
HDF5 1.8. This will allow for HDF5 features (VDS, SWMR, new references, etc.)
which may require a superblock version greater than 2.

For discussion, see Ref: Update HDF5 format compatibility Unidata#951
2021-01-11 09:48:04 -06:00
Scot Breitenfeld
c4279a68b0 Allow for the HDF5 file format to be compatible with features beyond HDF5 1.8, which
may require a superblock version greater than 2.

Ref: Update HDF5 format compatibility #951
2021-01-08 11:15:10 -06:00
Dennis Heimbigner
93e9d92778 More NCZarr optimizations
* Replace wholevar with more useful wholechunk optimization
* Add optimization to read multiple values at one time
* Replace NCDEFAULT_get/put_vars with native nczarr versions.
* Clarify chunk projection computations
* zdebdispatch.h
* Add more chunking test cases and re-enable run_chunkcases
* If !szip, then suppress deflate interference test
* Make H5Znoop(1) filter produce more information
* cleanup bzlib.c API
2021-01-06 13:35:59 -07:00
Dennis Heimbigner
d2316f866c Additional Fixes to NCZarr
Primary Fixes:
* Add a whole variable optimization -- used in the rare case that nc_get/put_vara covers the whole of a variable and the variable has a single chunk.
* Fix chunking error when stride causes whole chunks to be skipped.
* Fix some memory leaks
* Add test cases
* Add one performance test to nczarr_test/. This uses the timer utils from unit_test: timer_utils.[ch].
* Move ncdumpchunks utility from ncdump to nczarr_test

Misc. Other Changes:
* Make check for aws libraries conditional on --enable-nczarr-s3
* Remove all but one bm tests from nczarr_test until they are working.
* Remove another dependency on HDF5 from supposedly non-HDF5 specific code; specifically hdf5_log_hdf5.
* Make the BAIL2 macro be hdf5 specific and replace elsewhere with an HDF5 independent equivalent.
* Move hdf5cache.c to libsrc4/nc4cache.c because it is used by nczarr.
* Modify unit_tests so that some of them are run even if using Windows.
* Misc. small bug fixes and refactors and memory leaks.
* Rename some conflicting tests for cmake.
* Attempted to make nc_perf work with cmake and failed.
2020-12-16 20:48:02 -07:00
Dennis Heimbigner
e1c470683c Fix merge error from PR https://github.com/Unidata/netcdf-c/pull/1892/files 2020-12-01 20:10:48 -07:00
Ward Fisher
c96722e001
Merge pull request #1890 from gsjaardema/patch-46
Fix undefined struct member access
2020-12-01 14:41:27 -07:00
Dennis Heimbigner
68bcd1122a Enforce that !ENABLE_BYTERANGE => !ENABLE_HDF5_ROS3
This is a follow on to PR https://github.com/Unidata/netcdf-c/pull/1890

Modify configure.ac to enforce that
!ENABLE_BYTERANGE => !ENABLE_HDF5_ROS3
2020-11-28 13:00:06 -07:00
Greg Sjaardema
db0b84252c
Fix undefined struct member access
The `http` field of the hdf5 info struct is not defined unless `ENABLE_HDF5_ROS3` or `ENABLE_BYTERANGE` or `ENABLE_S3_SDK` is defined.  Based on a quick look at the code, I think that the `ENABLE_HDF5_ROS3` define is the relevant one here.  Maybe a better fix is to check if any of them are defined...
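A sketch of the guard pattern being suggested (macro and field names are taken from the description; the surrounding code and the assigned value are assumptions):

````
/* Sketch: only reference the http field when one of the enabling
 * macros is defined; otherwise the field does not exist at all. */
#if defined(ENABLE_HDF5_ROS3) || defined(ENABLE_BYTERANGE) || defined(ENABLE_S3_SDK)
    h5->http = NULL; /* illustrative initialization */
#endif
````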
2020-11-24 08:25:29 -07:00
Dennis Heimbigner
eb3d9eb0c9 Provide a number of fixes/improvements to NCZarr
Primary changes:
* Add an improved cache system to speed up performance.
* Fix NCZarr to properly handle scalar variables.

Misc. Related Changes:
* Added unit tests for extendible hash and for the generic cache.
* Add config parameter to set size of the NCZarr cache.
* Add initial performance tests but leave them unused.
* Add CRC64 support.
* Move location of ncdumpchunks utility from /ncgen to /ncdump.
* Refactor auth support.

Misc. Unrelated Changes:
* More cleanup of the S3 support
* Add support for S3 authentication in .rc files: HTTP.S3.ACCESSID and HTTP.S3.SECRETKEY.
* Remove the hashkey from the struct OBJHDR since it is never used.
2020-11-19 17:01:04 -07:00
Dennis Heimbigner
d631656966 Remove trailing comma from _NCProperties attribute value.
If NCPROPERTIES_EXTRA (see configure.ac) is defined
but is null or empty, then an extra comma is generated
at the end of the _NCProperties global attribute.

Soln: check for null/empty NCPROPERTIES_EXTRA value.
2020-11-14 15:07:08 -07:00
Dennis Heimbigner
730aa1f6bc Improve the building of NCZARR S3 support in CMake and Autoconf
There were some irregularities in the flags for handling NCZarr S3 support.

The primary change is to regularize the flags controlling this to the following.

1. Automake: --enable-nczarr-s3 and CMake: ENABLE_NCZARR_S3
2. Automake: --enable-nczarr-s3-tests and CMake: ENABLE_NCZARR_S3_TESTS

Flag 1 indicates that NCZarr should be built with S3 support enabled.
Flag 2 indicates that the NCZarr S3 tests should be run

These two flags are separate because running the NCZarr S3 tests
requires access to protected S3 resources. Currently, running
these tests is restricted to Unidata personnel. However, users
may want to enable S3 support even if they cannot run the tests.
It is, of course, an error to specify 2 without specifying 1.

Additionally, if the AWS S3 SDK library is not found, then the NCZARR S3
support and testing must be disabled. Otherwise an error is signaled
during the build.

Some of these NCZarr and S3 changes are propagated to nc-config.

Misc. Other Changes:

1. Allow testing for CYGWIN or MSVC in shell scripts.
2. Add specific test for HDF5 library version 1.10.6.
   This is encoded as "HDF5_UTF8_PATHS" because that is the first
   version where HDF5 properly supports it under Windows. This is used
   in hdf5internal/nc4_ndf5_ansi_to_utf8.
3. Add an AM conditional -- AX_IGNORE -- for use in testing
   when it is desirable to temporarily suppress Makefile code.
4. Add MULTIFILTER flag to CMakeLists.txt
2020-10-16 15:04:51 -06:00
Ward Fisher
e4138efa9d
Merge pull request #1851 from brtnfld/master
Replaced deprecated (in 1.8.0) H5Aopen_name with H5Aopen_by_name
2020-10-01 14:25:06 -07:00
Dennis Heimbigner
aeb3ac2809 Mostly revert the filter code to reduce its complexity of use.
re: https://github.com/Unidata/netcdf-c/issues/1836

Revert the internal filter code to simplify it. From the user's
point of view, the only visible changes should be:

1. The functions that convert text to filter specs have had their signature reverted and have been moved to netcdf_aux.h
2. Some filter API functions now return NC_ENOFILTER when inquiry is made about some filter.

Internally, the dispatch table has been modified to get rid of the filter_actions
entry and associated complex structures. It has been replaced with
inq_var_filter_ids and inq_var_filter_info entries and the dispatch table
version has been bumped to 3. Corresponding NOOP and NOTNC4 functions
were added to libdispatch/dnotnc4.c. Also, the filter_action entries
in dispatch tables were replaced for all dispatch code bases (HDF5, DAP2,
etc). This should only impact UDF users.
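A sketch of how a client might use the corresponding public inquiry API (the NULL-then-allocate sizing pattern is an assumption):

````
#include <stdlib.h>
#include <netcdf.h>
#include <netcdf_filter.h>

/* Sketch: discover which filters are attached to a variable. */
size_t nfilters = 0;
unsigned int* ids = NULL;
nc_inq_var_filter_ids(ncid, varid, &nfilters, NULL); /* count only */
ids = (unsigned int*)malloc(nfilters * sizeof(unsigned int));
nc_inq_var_filter_ids(ncid, varid, &nfilters, ids);  /* fill the ids */
free(ids);
````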

In the process, it became clear that the form of the filters
field in NC_VAR_INFO_T was format dependent, so I converted it to
be of type void* and pushed its management into the various dispatch
code bases. Specifically libhdf5 and libnczarr now manage the filters
field in their own way.

The auxiliary functions for parsing textual filter specifications
were moved to netcdf_aux.h and renamed to the following:
* ncaux_h5filterspec_parse
* ncaux_h5filterspec_parselist
* ncaux_h5filterspec_free
* ncaux_h5filter_fix8

Misc. Other Changes:

1. Document NUG/filters.md updated to reflect the changes above.
2. All the old data types (structs and enums)
   used by filter_actions actions were deleted.
   The exception is the NC_H5_Filterspec because it is needed
   by ncaux_h5filterspec_parselist.
3. Clientside filters were removed -- another enhancement
   for which no-one ever asked.
4. The ability to remove filters was itself removed.
5. Some functionality needed by nczarr was moved from libhdf5
   to libsrc4 e.g. nc4_find_default_chunksizes
6. All the filterx code was removed
7. ncfilter.h and nc4filter.c are no longer used

Misc. Unrelated Changes:

1. The nczarr_test makefile clean was leaving some directories; so
   add clean-local to take care of them.
2020-09-27 12:43:46 -06:00
Scot Breitenfeld
2620c01067 Replaced deprecated (in 1.8.0) H5Aopen_name with H5Aopen_by_name 2020-09-25 12:17:20 -05:00
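The substitution pattern, roughly (the "." object name makes the lookup relative to locid itself):

````
/* Deprecated since HDF5 1.8.0: */
hid_t attid = H5Aopen_name(locid, att_name);

/* Replacement: */
hid_t attid2 = H5Aopen_by_name(locid, ".", att_name,
                               H5P_DEFAULT, H5P_DEFAULT);
````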
Dennis Heimbigner
f3218a2e2c Use the built-in HDF5 byte-range reader, if available.
re: Issue https://github.com/Unidata/netcdf-c/issues/1848

The existing Virtual File Driver built to support byte-range
read-only file access is quite old. It turns out to be extremely
slow (reason unknown at the moment).

Starting with HDF5 1.10.6, the HDF5 library has its own version
of such a file driver. The HDF5 developers have better knowledge
about building such a driver and what incantations are needed to
get good performance.

This PR modifies the byte-range code in hdf5open.c so
that if the HDF5 file driver is available, then it is used
in preference to the one written by the Netcdf group.

Misc. Other Changes:

1. Moved all of nc4print code to ncdump to keep appveyor quiet.
2020-09-24 14:33:58 -06:00
Tobias Kölling
a80d473ca8 Merge remote-tracking branch 'upstream/master' into virtual_datasets 2020-09-15 09:50:25 +02:00
Tobias Kölling
8de6398b99 check if H5D_VIRTUAL exists in installed HDF5 library
Older HDF5 libraries do not support virtual datasets but could otherwise
be supported by netCDF4. This change removes the special case to handle
HDF5 virtual datasets if the installed HDF5 version does not support
virtual datasets.
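A sketch of the conditional compilation involved (HAVE_H5D_VIRTUAL stands in for a hypothetical configure-time feature check):

````
#include <hdf5.h>

/* Sketch: recognize the virtual layout only when the installed HDF5
 * provides it (H5D_VIRTUAL appeared in HDF5 1.10). */
static int layout_is_virtual(H5D_layout_t layout)
{
#ifdef HAVE_H5D_VIRTUAL /* hypothetical configure-time check */
    if (layout == H5D_VIRTUAL)
        return 1;
#endif
    return 0;
}
````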
2020-09-14 17:25:25 +02:00
Dennis Heimbigner
2f0a6d22e9 Fix error where not converting fill data
re: Github Issue https://github.com/Unidata/netcdf-c/issues/1826

It turns out that the common get code (NC4_get_vars) in libhdf5
(and libnczarr) has an optimization where it does not attempt to
read from the file if the data is all fill values. Rather it
just fills the output buffer with the fill value.  The problem
is that -- in that case -- it forgets that conversion might still be
needed.  So the conversion never occurs and the raw bits of
the fill data are stored directly into the memory space.

Solution: move some code around to properly do the
conversion no matter how the data was obtained.

Added test cases nc_test4/test_fillonly.sh and
nczarr_test/test_fillonlyz.sh
2020-09-12 14:49:59 -06:00
Tobias Kölling
69fb44ec4b added NC_VIRTUAL storage layout 2020-09-03 17:51:46 +02:00
Tobias Kölling
a3b753d764 change H5F_CLOSE_SEMI -> H5F_CLOSE_WEAK for nc4_create_file as well
The nc_sync test fails if the settings are different for file creation
and opening.
2020-09-02 20:02:08 +02:00
Tobias Kölling
9f8897762d changed H5Pset_fclose_degree to H5F_CLOSE_WEAK
It seems to be part of the design of HDF5 virtual datasets that
objects within a file remain open while the file is already "closed".
Setting the fclose degree to SEMI would cause the library to bail out.
This commit makes nc_test4/tst_virtual_dataset succeed.

See also Unidata/netcdf-c#1799
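The call in question, as a sketch (error checking omitted):

````
#include <hdf5.h>

hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
/* WEAK lets still-open objects keep the file alive after H5Fclose,
 * where SEMI would make the close fail. */
H5Pset_fclose_degree(fapl, H5F_CLOSE_WEAK);
````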
2020-09-02 16:16:57 +02:00
Tobias Kölling
4c27730ae3 hdf5: added unknown storage specification
In case HDF5 adds more storage specifications, netcdf4 should be able to
cope with them by default. Further specializations could be added
nonetheless.
2020-09-02 16:13:23 +02:00
Ward Fisher
31dee0c4da
Revert "Revert "Fix nczarr-experimental: improve build support, disengage hdf5 vs netcdf4 flags, and find AWS libraries"" 2020-08-17 19:15:47 -06:00
Ward Fisher
16c27ca13f
Revert "Fix nczarr-experimental: improve build support, disengage hdf5 vs netcdf4 flags, and find AWS libraries" 2020-08-17 15:51:01 -06:00
Dennis Heimbigner
d85bb6fe20 The big change for this commit is to complete the
disengagement of enable-netcdf4 from enable-hdf5.
That is, with the advent of nczarr, it is possible
to turn off hdf5 but still need netcdf-4 enabled
because nczarr uses libsrc4, but not libhdf5.
This change involves a bunch of things:
1. Modify configure.ac and CMakelist to make enable_hdf5
   control if hdf5 support is provided. For back compatibility,
   disable-netcdf4 is treated as disable-hdf5. But internally,
   netcdf4 support is controlled only by the enabling of formats
   that require it.
2. In support of #1, modify .travis.yml to use enable/disable-hdf5
   instead of enable/disable-netcdf4.
3. test_common.in is modified to track selected features,
   including enable-hdf5 and enable-s3-tests. This is used in
   selected tests that mix netcdf-3 and netcdf4 tests.
4. The conflation of USE_HDF5 and USE_NETCDF4 is common in
   code, tests, and build files, so all of those had to be weeded out.
5. It turns out that some of the NC4_dim functions really are HDF5 specific,
   but are not treated as such. So they are moved from nc4dim.c to
   hdf5dim.c or hdf5dispatch.c
6. Some generic functions in libhdf5 can be (and were) moved to libsrc4.
2020-08-12 15:42:50 -06:00
bombipappoo
ecbb0f5bbf Convert filename from ANSI to UTF-8 before calling HDF5. 2020-07-14 22:44:42 +09:00
Ward Fisher
0825c9767f Merge branch 'ejh_par_test' of https://github.com/NOAA-GSD/netcdf-c into NOAA-GSD-ejh_par_test 2020-07-09 17:29:45 -06:00
Ward Fisher
7d2a646f25 Merge branch 'ejh_fix_redef' of https://github.com/NOAA-GSD/netcdf-c into NOAA-GSD-ejh_fix_redef 2020-07-09 13:55:37 -06:00
Edward Hartnett
3e60a863de fixed warning in hdf5filter.c 2020-07-08 11:24:54 -06:00
Edward Hartnett
832fbf19c8 now don't return error on second redef call for netcdf/HDF5 files 2020-07-08 11:10:15 -06:00
Edward Hartnett
ac3b77d418 merged in changes from ejh_test_szip_unlim 2020-07-04 07:43:50 -06:00
Edward Hartnett
4b78c0c4a3 merged master 2020-07-03 13:57:47 -06:00
Edward Hartnett
6c112efb8e fixed problem setting szip on var with unlimited dim and added test 2020-07-02 10:55:34 -06:00
Edward Hartnett
dc37446a5f more test development 2020-06-29 09:01:24 -06:00
Edward Hartnett
467f342ae9 further test development 2020-06-29 08:35:11 -06:00
Dennis Heimbigner
59e04ae071 This PR adds EXPERIMENTAL support for accessing data in the
cloud using a variant of the Zarr protocol and storage
format. This enhancement is generically referred to as "NCZarr".

The data model supported by NCZarr is netcdf-4 minus the user-defined
types and the String type. In this sense it is similar to the CDF-5
data model.

More detailed information about enabling and using NCZarr is
described in the document NUG/nczarr.md and in a
[Unidata Developer's blog entry](https://www.unidata.ucar.edu/blogs/developer/en/entry/overview-of-zarr-support-in).

WARNING: this code has had limited testing, so do not use this version
for production work. Also, performance improvements are ongoing.
Note especially the following platform matrix of successful tests:

Platform       | Build System | S3 support
-------------- | ------------ | ----------
Linux+gcc      | Automake     | yes
Linux+gcc      | CMake        | yes
Visual Studio  | CMake        | no

Additionally, and as a consequence of the addition of NCZarr,
major changes have been made to the Filter API. NOTE: NCZarr
does not yet support filters, but these changes are enablers for
that support in the future.  Note that it is possible
(probable?) that there will be some accidental reversions if the
changes here did not correctly mimic the existing filter testing.

In any case, previously filter ids and parameters were of type
unsigned int. In order to support the more general zarr filter
model, this was all converted to char*.  The old HDF5-specific,
unsigned int operations are still supported but they are
wrappers around the new, char* based nc_filterx_XXX functions.
This entailed at least the following changes:
1. Added the files libdispatch/dfilterx.c and include/ncfilter.h
2. Some filterx utilities have been moved to libdispatch/daux.c
3. A new entry, "filter_actions" was added to the NCDispatch table
   and the version bumped.
4. An overly complex set of structs was created to support funnelling
   all of the filterx operations thru a single dispatch
   "filter_actions" entry.
5. Move common code to from libhdf5 to libsrc4 so that it is accessible
   to nczarr.

Changes directly related to Zarr:
1. Modified CMakeList.txt and configure.ac to support both C and C++
   -- this is needed for S3 support via the aws-sdk libraries.
2. Define a size64_t type to support nczarr.
3. More reworking of libdispatch/dinfermodel.c to
   support zarr and to regularize the structure of the fragments
   section of a URL.

Changes not directly related to Zarr:
1. Make client-side filter registration be conditional, with default off.
2. Hack include/nc4internal.h to make some flags added by Ed be unique:
   e.g. NC_CREAT, NC_INDEF, etc.
3. cleanup include/nchttp.h and libdispatch/dhttp.c.
4. Misc. changes to support compiling under Visual Studio including:
   * Better testing under windows for dirent.h and opendir and closedir.
5. Misc. changes to the oc2 code to support various libcurl CURLOPT flags
   and to centralize error reporting.
6. By default, suppress the vlen tests that have unfixed memory leaks; add option to enable them.
7. Make part of the nc_test/test_byterange.sh test be contingent on remotetest.unidata.ucar.edu being accessible.

Changes Left TO-DO:
1. fix provenance code, it is too HDF5 specific.
2020-06-28 18:02:47 -06:00
Greg Sjaardema
edf0ca6c98
Avoid potential integer overrun
It is possible for the values stored in `file_value_size` to overrun the storage capacity of a 32-bit integer.  The value does potentially need to store negative values, so it cannot be `size_t` or `hsize_t`; use `hssize_t`, which is a signed 64-bit value.  Could also use `ssize_t`, but that is not used in this routine...
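In a sketch (hssize_t comes from HDF5):

````
#include <hdf5.h>

/* int can overflow for large variables, and size_t/hsize_t are
 * unsigned so they cannot hold the negative values this routine
 * needs; hssize_t is signed and 64-bit, covering both cases. */
hssize_t file_value_size = 0;
````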
2020-06-10 15:42:22 -06:00
Dennis Heimbigner
84c69afca7 Allow redefinition of variable filters
re: Github issue https://github.com/Unidata/netcdf-c/issues/1713

If nc_def_var_filter or nc_def_var_deflate or nc_def_var_szip is
called multiple times with the same filter id, but possibly with
different sets of parameters, then the first invocation is
sticky and later invocations are ignored. The desired behavior
is to have the last invocation be used.

This PR implements that desired behavior, with some special
cases.  If you call nc_def_var_deflate multiple times, then the
last invocation rule applies with respect to deflate. However,
the shuffle filter, if enabled, is always applied just before
applying deflate.
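In a sketch (the level values are illustrative):

````
/* Previously the first call was sticky; with this PR the last call
 * wins, so the variable ends up with deflate level 9. */
nc_def_var_deflate(ncid, varid, 1 /*shuffle*/, 1 /*deflate*/, 2);
nc_def_var_deflate(ncid, varid, 1 /*shuffle*/, 1 /*deflate*/, 9);
````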

Misc unrelated changes:
1. Make client-side filters be disabled by default
2. Fix the definition of uintptr_t and use in oc2 and libdap4
3. Add some test cases
4. modify filter order tests to use plugin filters rather
   than client-side filters
2020-05-11 09:42:31 -06:00
Edward Hartnett
6aa6eff710 now properly setting HDF5 file cache for files created/opened sequentially on parallel IO builds 2020-05-08 11:00:56 -06:00
Edward Hartnett
e3c9e83ecf adding internal function, plus some documentation 2020-05-08 08:58:42 -06:00
Greg Sjaardema
3e919a568f
Remove line that was missed in original patch 2020-04-30 14:00:18 -06:00
Greg Sjaardema
1db3d07beb
Proof-of-Concept: Avoid N^2 behavior in NC4_inq_dim
The current library seems to have some behavior which is N^2 in the number of vars in a file.

The `NC4_inq_dim` routine calls down to `nc4_find_dim_len` which iterates through each `var` in the file/group and calls `find_var_dim_max_length` on each var and finds the largest length of the dim on each of those vars. This is done only for unlimited vars.

I have a file with 129 dim and 1630 vars.  The unlimited dimension is of length 41.  In my test program, I am reading data from 4 files which have the same dim and var count and reading every 4th time step (unlimited dimension).  If I run a profile, I see that 98.2% of the program time is in the `nc_get_vara_float` call tree and most of that is in `find_var_dim_max_length` (94.8%).

There are 66,142 calls to `nc_get_vara_float` resulting in 107,307,290 calls to `find_var_dim_max_length` with twice that number of calls to `malloc/free` and calls to 5 HDF5 routines.  All of this, at least in my case, to return the same `41` each time.

The proof of concept patch here will check whether the file is read-only (or no_write) and if so, it will cache the value of the dim length the first time it is calculated.   With this change, my example run is sped up by a factor of 60.  The time for `NC4_inq_dim` and below drops from 97.2% down to 2.7%.

I'm not sure whether this is the correct fix, or if there is some behavior that I am overlooking, but my users would definitely like a 10 second run compared to a 10 minute run... 

This is on current Netcdf master branch.

I will try to attach some valgrind/callgrind profiles.
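A sketch of the caching idea (all struct, field, and helper names here are hypothetical, not the library's actual ones):

````
/* Hypothetical sketch: cache the unlimited-dim length once the file is
 * known to be read-only, so repeated inquiries skip the per-var scan. */
if (file_is_readonly && dim->len_is_cached)
    return dim->cached_len;

size_t len = scan_vars_for_max_dim_len(grp, dim); /* the N^2 path */
if (file_is_readonly) {
    dim->cached_len = len;
    dim->len_is_cached = 1;
}
return len;
````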
2020-04-30 11:01:10 -06:00
Scot Breitenfeld
7b1b06b5ca Merge remote-tracking branch 'upstream/master' 2020-04-23 15:36:14 -05:00
Dennis Heimbigner
b0e0d81aa9 Fix reclamation of the ->format_XXX_info fields
nc4internal.c contains code to free the format_XXX_info
fields. Since these are format specific, this code
was moved to the dispatch code (libhdf5 and libhdf4
in the current case).

Additionally, there are some fields in nc4internal.h (e.g.
dimscale fields) that are specific to HDF5 and have been moved
to the corresponding HDF5 data structures and code.

Misc. other changes:
1. NC_VAR_INFO_T->hdf5_name renamed to alt_name to avoid
   implying it is necessarily HDF5 specific.
2. Prefix NC_FILE_INFO_T with an instance of NC_OBJ for consistency.
   This also requires wrapping move_in_NCList() to keep
   hdr.id consistent.
2020-03-29 12:48:59 -06:00
Edward Hartnett
e7b9b1b587 fixed documentation of cache int functions 2020-03-24 15:02:42 -06:00
Scot Breitenfeld
c5d2e99417 Updated to use H5O_info2_t for HDF5 1.12 and the use of H5Oget_info3 instead of H5Gget_objinfo 2020-03-12 15:50:24 +00:00
Edward Hartnett
b29f9f34a0 whitespace cleanup 2020-03-08 09:10:07 -06:00
Edward Hartnett
4c7e162f34 less use of contiguous/compact field 2020-03-08 07:31:21 -06:00
Edward Hartnett
053752440b stop setting contiguous field in nc4hdf5.c 2020-03-08 07:18:52 -06:00
Edward Hartnett
04eafff166 stop setting contiguous field in hdf5filter.c 2020-03-08 07:18:11 -06:00
Edward Hartnett
5574317db7 stop setting contiguous/compact fields at file open 2020-03-08 07:17:01 -06:00
Edward Hartnett
61357cfd4d more use of storage field 2020-03-08 07:09:15 -06:00
Edward Hartnett
1761850795 continuing to switch to storage field 2020-03-08 07:05:51 -06:00
Edward Hartnett
b98a37e0b3 using storage field in nc4var.c 2020-03-08 06:38:44 -06:00
Edward Hartnett
119e8e9465 using storage in hdf5filter.c 2020-03-08 06:31:34 -06:00
Edward Hartnett
8dec9f6c99 now setting storage field when setting var storage 2020-03-08 06:29:49 -06:00