Commit Graph

128 Commits

Author SHA1 Message Date
Ward Fisher
dc6e392c9d
Merge branch 'main' into znotnc.dmh 2023-04-12 16:02:34 -06:00
Ward Fisher
91591d37a0
Merge pull request #2660 from Unidata/v4.9.2-wellspring.wif
v4.9.2 Wellspring branch
2023-04-11 15:32:26 -06:00
Dennis Heimbigner
3765d86e46 "Simplify" XGetopt usage
When "getopt()" is not available, various of the netcdf-c utilities
use XGetopt instead. This occurs primarily when building under Window,
so the build changes are restricted to CMake.

This PR tries to confine XGetopt.c to the libdispatch directory
and then builds the various utilities using this CMake idiom:
````
IF(USE_X_GETOPT)
  SET(XGETOPTSRC "${CMAKE_CURRENT_SOURCE_DIR}/../libdispatch/XGetopt.c")
ENDIF()
````

This avoids the need to copy XGetopt.c to all the directories that
use it.
2023-04-09 13:10:41 -06:00
Dennis Heimbigner
17218d788a bad file reference 2023-03-14 20:25:36 -06:00
Dennis Heimbigner
250604582a oops 2023-03-14 14:48:30 -06:00
Dennis Heimbigner
e439a0788b Fix run_jsonconvention.sh to be resilient against irrelevant changes to _NCProperties. 2023-03-14 13:46:31 -06:00
Ward Fisher
dc11c6d094 Correct jsonconvention map with netcdf version. 2023-03-14 09:23:48 -06:00
Dennis Heimbigner
5c07ebfd11 Check at nc_open if file appears to be in NCZarr/Zarr format.
re: Issue https://github.com/Unidata/netcdf-c/issues/2656

Charlie Zender notes that *nc_open()* does not immediately detect that the given path refers to a file that is not in Zarr format. Rather, it fails later when trying to read the (meta-)data.

The reason is that the Zarr format is highly decentralized. There is no easily testable magic number or superblock to look for. In effect, the only way to tell whether a directory is Zarr is to successfully read it.

It is possible to heuristically detect that a path refers to an NCZarr/Zarr file by doing a breadth-first search of the file tree starting at the given path. If the search encounters a file whose name starts with ".z", then the path is assumed to be a legitimate NCZarr/Zarr file. Of course, this test could be costly; one hopes that in practice it is not.
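
A minimal sketch of such a probe (a hypothetical helper, not the code added in this commit; for brevity it recurses depth-first rather than doing the breadth-first search described above):
````
/* Hypothetical probe, illustrative only: scan a directory tree for an
 * entry whose name starts with ".z", which suggests an NCZarr/Zarr
 * dataset. */
#include <dirent.h>
#include <stdio.h>
#include <string.h>

static int looks_like_zarr(const char *path)
{
    DIR *dir = opendir(path);
    struct dirent *e;
    int found = 0;

    if (dir == NULL) return 0;            /* not a directory (or unreadable) */
    while (!found && (e = readdir(dir)) != NULL) {
        char child[4096];
        if (strcmp(e->d_name, ".") == 0 || strcmp(e->d_name, "..") == 0)
            continue;
        if (strncmp(e->d_name, ".z", 2) == 0) {   /* e.g. .zgroup, .zarray, .zattrs */
            found = 1;
            break;
        }
        snprintf(child, sizeof(child), "%s/%s", path, e->d_name);
        found = looks_like_zarr(child);   /* descend into subdirectories */
    }
    closedir(dir);
    return found;
}

int main(int argc, char **argv)
{
    if (argc > 1)
        printf("%s: %s\n", argv[1],
               looks_like_zarr(argv[1]) ? "looks like Zarr" : "not Zarr");
    return 0;
}
````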

In addition to this fix, a corresponding test case was added.

## Other Changes

re: PR https://github.com/Unidata/netcdf-c/pull/2529

That PR introduced an error under Cygwin, which is fixed here. The fix was to convert all *noinst_* references to *check_*.
2023-03-13 13:24:14 -06:00
Dennis Heimbigner
69e84fe9f1 Fix byterange handling of some URLs
re: Issue

The byterange handling of the following URLs fails.

### Problem 1: "https://crudata.uea.ac.uk/cru/data/temperature/HadCRUT.4.6.0.0.median.nc#mode=bytes"
It turns out that byterange access in HDF5 has two possible targets: S3 and not-S3 (e.g. a THREDDS server or the crudata URL above). Each uses a different HDF5 Virtual File Driver (VFD).
I incorrectly set up the byterange code in libhdf5 so that it would choose one or the other of the two VFDs for any given netcdf-c library build. The fix is to allow it to choose either one at run-time.
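
A byterange open from C is just an *nc_open()* on the annotated URL; here is a minimal sketch (it assumes the library was built with byterange support, and error handling is minimal):
````
/* Illustrative sketch: open the (non-S3) crudata URL over HTTP byte
 * ranges by appending "#mode=bytes"; with this fix the appropriate
 * HDF5 VFD is chosen at run-time. */
#include <netcdf.h>
#include <stdio.h>

int main(void)
{
    int ncid;
    int stat = nc_open(
        "https://crudata.uea.ac.uk/cru/data/temperature/HadCRUT.4.6.0.0.median.nc#mode=bytes",
        NC_NOWRITE, &ncid);
    if (stat != NC_NOERR) {
        fprintf(stderr, "nc_open failed: %s\n", nc_strerror(stat));
        return 1;
    }
    nc_close(ncid);
    return 0;
}
````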

### Problem 2: "https://noaa-goes16.s3.amazonaws.com/ABI-L1b-RadF/2022/001/18/OR_ABI-L1b-RadF-M6C01_G16_s20220011800205_e20220011809513_c20220011809562.nc#mode=bytes,s3"
When given what appears to be an S3-related URL, the netcdf-c library code converts it into a canonical, so-called "path" format. In working through the possible input URL formats, I missed the case where the host contains the bucket ("noaa-goes16") but not the region. So the fix was to check for this case.

## Misc. Related Changes
1. Since S3 is used in more than just NCZarr, I changed the automake/cmake options to replace "--enable-nczarr-s3" with "--enable-s3", while keeping the former option as a synonym for the latter. This also entailed cleaning up libnetcdf.settings WRT S3 support.
2. Added the above URLs as additional test cases.

## Misc. Un-Related Changes
1. CURLOPT_PUT is deprecated in favor of CURLOPT_UPLOAD
2. Fix some minor warnings

## Open Problems
* Under Ubuntu, either libcrypto or aws-sdk-cpp has a memory leak.
2023-03-02 19:51:02 -07:00
Dennis Heimbigner
295c132789 Fix a distcheck failure with nczarr_test/run_interop.sh
The problem was that files were being copied
into the ${srcdir} rather than the ${builddir} directory.
2023-02-17 13:01:11 -07:00
Dennis Heimbigner
2943a78ebb Merge main and fix conflicts 2022-11-09 12:58:40 -07:00
Ward Fisher
87b50932de
Merge pull request #2530 from Unidata/v4.9.1-wellspring.wif
Merge subset of v4.9.1 files back into main development branch
2022-11-09 12:44:18 -07:00
Dennis Heimbigner
9f848c9e53 Fix race condition in ncdump (and other) tests.
re: Issue https://github.com/Unidata/netcdf-c/issues/2551

Ryan May identified the use of a common scratch file (tmp.cdl)
across multiple test shell scripts in the ncdump directory
and the nczarr_test directory.
This sometimes causes errors because of race conditions
between those scripts.

I renamed those common files to avoid the race condition.  I
also did some further checking and found some additional,
similar conflicts and fixed those. Also did some minor cleanup
of unused files.

Tests fixed:
ncdump: run_back_comp_tests.sh tst_bom.sh tst_nccopy4.sh tst_nccopy5.sh
nczarr_test: run_nccopyz.sh run_nczarr_fill.sh run_scalar.sh
2022-11-08 20:12:38 -07:00
Ward Fisher
da03c01263 Correct an issue observed in out-of-source builds. 2022-10-19 10:26:44 -06:00
Ward Fisher
614c1f764b Working on another make distcheck failure. 2022-10-18 15:12:04 -06:00
Ward Fisher
22da6b73c3 Add generated files to distclean. 2022-10-18 11:19:03 -06:00
Ward Fisher
b42ab34cec Copy zmap reference files for cmake-based tests. 2022-10-17 16:55:40 -06:00
Ward Fisher
85cfbab102 Manually bump version in diff-compare to get RC1 out the door. This will need to be automatically excluded from the test at some point; otherwise we will see this test fail every time the VERSION string changes.
DWesl
f28dcaa994 TST: Mark nczarr s3 cleanup test XFAIL on Cygwin instead of skipping.
It might be nice to be told when it starts passing.
This probably requires installing s3 on Cygwin.
2022-10-12 13:36:52 -04:00
DWesl
a71c606802 TST: Mark NCZarr plugins XFAIL on MinGW
Not sure why --disable-nczarr-filters doesn't exclude them, but let's check the rest of the functionality.
2022-10-12 12:58:01 -04:00
DWesl
0eed60a295 BLD: Get netCDF4 build working on Windows.
Most changes are to get plugins working.
libdispatchdreg.c went in with unidata/netcdf-c#2460,
after I'd done it here.

Summary of individual changes below.

BLD: Remove declspec(dllexport); in dreg.c.

By removing the explicit handling, the automatic handling
(equivalent to --export-all-symbols with recent GNU tools)
will be enabled again, so the generated library will have
more than one function exported.

BLD: Link plugins against libnetcdf on Cygwin.

BLD: Add AM_LDFLAGS to plugin _LDFLAGS to pass -no-undefined.

BLD: Link ncz*filters plugins against libnetcdf.

BLD: Add AM_LDFLAGS to test plugin _LDFLAGS.

Also move rpath from AM_LDFLAGS to test plugin _LDFLAGS.

TST: Don't run nczarr_test/run_specific_filters.sh on Cygwin.

It takes over half an hour to complete, where the others take a minute or less.

TST: Try to find the hanging Cygwin test.
2022-10-12 10:56:17 -04:00
Dennis Heimbigner
46ed3a1da7 Cleanup built test sources in nczarr_test
re: https://github.com/conda-forge/libnetcdf-feedstock/pull/140

Some tests in nczarr_test are built sources, but apparently
I did not set that up correctly, so try to clean up their construction.
2022-09-16 18:58:36 -06:00
Dennis Heimbigner
600885cb34 update file permission 2022-09-16 18:35:08 -06:00
Dennis Heimbigner
1a45ee025f Fix some additional errors in NCZarr
re: Issue https://github.com/Unidata/netcdf-c/issues/2502

H/T Charlie Zender

* Fix NCZarr handling of the endianness value NC_ENDIAN_NATIVE. This now matches how it is handled in libhdf5 (see the sketch after this list).
* Fix NCZarr handling of char-typed attributes with the value "". This now matches how it is handled in libhdf5.
* Add test for various char attribute values
* Change the mapping of NC_CHAR and NC_STRING to dtype; requires changing some test files also.
* Optimize the testing for NC_ENOTBUILT in NC_open.
* Turn off debugging left on accidentally
* Fix memory leak in tst_pnetcdf.c
* Fix blosc test
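
A minimal sketch of the endianness case (hypothetical file and variable names; error checking omitted):
````
/* Request native endianness for a variable stored in an NCZarr
 * dataset, the case fixed by this commit. */
#include <netcdf.h>

int main(void)
{
    int ncid, dimid, varid;
    nc_create("file:///tmp/endian.file#mode=nczarr,file", NC_NETCDF4 | NC_CLOBBER, &ncid);
    nc_def_dim(ncid, "x", 4, &dimid);
    nc_def_var(ncid, "v", NC_INT, 1, &dimid, &varid);
    nc_def_var_endian(ncid, varid, NC_ENDIAN_NATIVE);  /* now handled as in libhdf5 */
    nc_close(ncid);
    return 0;
}
````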
2022-09-09 14:25:24 -06:00
Dennis Heimbigner
7e48f2ad7b Fix missing files 2022-09-03 14:57:48 -06:00
Dennis Heimbigner
be88b66390 Update Release Notes 2022-09-03 14:54:18 -06:00
Dennis Heimbigner
6abaab967b Fix some problems with PR https://github.com/Unidata/netcdf-c/pull/2492
re: PR https://github.com/Unidata/netcdf-c/pull/2492
re: Issue https://github.com/Unidata/netcdf-c/issues/2494

This PR fixes some problems with the pull request https://github.com/Unidata/netcdf-c/pull/2492 in response to Issue https://github.com/Unidata/netcdf-c/issues/2494.

* Found and fixed more scalar handling problems and added a test case for scalars.
* Cleanup nczarr_test/run_string.sh test
* Document *_nczarr_default_maxstrlen* and *_nczarr_maxstrlen*.

* Support both "Nan" and *Nan* as being floating point constants
  for attributes. It is unclear from the Zarr V2 spec if
  unquoted *Nan* is legal or not, but support for reading.
  Write the quoted versions when writing an attribute.  Similar
  for Infinity constants.
  So NCZarr supports the following constants for use in Attributes
    * *Nan*, "Nan", *-Nan*, "-Nan"
    * *Nanf*, "Nanf", *-Nanf*, "-Nanf"
    * *Infinity*, "Infinity", *-Infinity*, "-Infinity"
    * *Infinityf*, "Infinityf", *-Infinityf*, "-Infinityf"
2022-09-03 14:21:48 -06:00
Dennis Heimbigner
57b1d9f7f8 update file permission 2022-09-03 14:21:36 -06:00
Dennis Heimbigner
231ae96c4b Add support for Zarr string type to NCZarr
* re: https://github.com/Unidata/netcdf-c/pull/2278
* re: https://github.com/Unidata/netcdf-c/issues/2485
* re: https://github.com/Unidata/netcdf-c/issues/2474

This PR subsumes PR https://github.com/Unidata/netcdf-c/pull/2278.
It is actually a bit of an omnibus covering several issues.

## PR https://github.com/Unidata/netcdf-c/pull/2278
Add support for the Zarr string type.
Zarr strings are currently restricted to be of fixed size.
The primary issue to be addressed is to provide a way for the user to
specify the size of the fixed-length strings. This is handled by providing
the following new special attributes (a usage sketch follows the list):
1. **_nczarr_default_maxstrlen** —
This is an attribute of the root group. It specifies the default
maximum string length for string types. If not specified, then
it has the value of 64 characters.
2. **_nczarr_maxstrlen** —
This is a per-variable attribute. It specifies the maximum
string length for the string type associated with the variable.
If not specified, then it is assigned the value of
**_nczarr_default_maxstrlen**.
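
A minimal usage sketch (hypothetical names; error checking omitted):
````
/* Set the per-variable maximum string length via the new
 * _nczarr_maxstrlen special attribute. */
#include <netcdf.h>

int main(void)
{
    int ncid, dimid, varid;
    int maxlen = 16;
    nc_create("file:///tmp/strings.file#mode=nczarr,file", NC_NETCDF4 | NC_CLOBBER, &ncid);
    nc_def_dim(ncid, "n", 8, &dimid);
    nc_def_var(ncid, "names", NC_STRING, 1, &dimid, &varid);
    nc_put_att_int(ncid, varid, "_nczarr_maxstrlen", NC_INT, 1, &maxlen);
    nc_close(ncid);
    return 0;
}
````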

This PR also requires some hacking to handle the existing netcdf-c NC_CHAR
type, which does not exist in zarr. The goal was to choose numpy types for
both the netcdf-c NC_STRING type and the netcdf-c NC_CHAR type such that
if a pure zarr implementation read them, it would still work and an
NC_CHAR type would be handled by zarr as a string of length 1.

For writing variables and NCZarr attributes, the type mapping is as follows:
* "|S1" for NC_CHAR.
* ">S1" for NC_STRING && MAXSTRLEN==1
* ">Sn" for NC_STRING && MAXSTRLEN==n

Note that it is a bit of a hack to use endianness, but it should be ok since for
string/char, the endianness has no meaning.

For reading attributes with pure zarr (i.e. with no nczarr
attribute types defined), they will always be interpreted as
being of type NC_CHAR.

## Issue: https://github.com/Unidata/netcdf-c/issues/2474
This PR partly fixes this issue because it provided more
comprehensive support for Zarr attributes that are JSON valued expressions.
This PR still does not address the problem in that issue where the
_ARRAY_DIMENSION attribute is incorrectly set. That can only be
fixed by the creator of the datasets.

## Issue: https://github.com/Unidata/netcdf-c/issues/2485
This PR also fixes the scalar failure shown in this issue.
It generally cleans up scalar handling.
It also adds a note to the documentation describing that
NCZarr supports scalars while Zarr does not and also how
scalar interoperability is achieved.

## Misc. Other Changes
1. Convert the nczarr special attributes and keys to be all lower case. So "_NCZARR_ATTR" is now "_nczarr_attr". Backward compatibility with the upper-case names is supported.
2. Cleanup my too-clever-by-half handling of scalars in libnczarr.
2022-08-27 20:21:13 -06:00
Ward Fisher
ba37c0af9f
Merge branch 'main' into enumdfalt.dmh 2022-07-26 15:23:40 -06:00
Magnus Ulimoen
aa394b5ebc Prevent cmake writing to source dir 2022-07-19 15:55:42 +02:00
Dennis Heimbigner
eeb215bf4e debug1 2022-07-17 14:43:59 -06:00
Dennis Heimbigner
d7e57d261a Update to default --with-plugin-dir to yes 2022-05-24 20:05:19 -06:00
Dennis Heimbigner
cad946cde4 merged 2022-05-24 19:59:36 -06:00
Dennis Heimbigner
d16a894458 conflicts 2022-05-24 14:40:54 -06:00
Ward Fisher
d8959f170b
Merge branch 'main' into install.dmh 2022-05-24 14:35:33 -06:00
Ward Fisher
c59f626219 Add missing file to EXTRA_DIST 2022-05-24 10:56:25 -06:00
Dennis Heimbigner
6ae3289701 I made a major update to this PR with the following changes:
## Overwriting
I think I solved the file overwrite problem by doing light name
mangling of the shared library names. With this change the probability
is very small that installing our filter wrappers in a directory will
overwrite code produced by others.

## Default Install Location
I have set up the --with-plugin-dir option to install, by default, in
the following locations, in order of preference:

1. If HDF5_PLUGIN_PATH is defined (at build time remember), then the last directory in that path will be where the filter wrapper shared libraries will be installed.
2. Otherwise the default is "/usr/local/hdf5/lib/plugin" (on *nix*) or "%ALLUSERSPROFILE%\\hdf5\\lib\\plugin" for Windows or Mingw.

Currently, --with-plugin-dir is disabled by default.
I should note that even if I enable it by default, a netcdf-c
installation will still not work "out of the box" because the hypothetical
naive user will not know which compressor libraries need to be
pre-installed before netcdf is installed. Nor will that user have any
way to find out what needs to be installed.
2022-05-19 22:00:40 -06:00
Dennis Heimbigner
e05f5c36a8 Merge master 2022-05-17 15:11:31 -06:00
Ward Fisher
c9727c2a65
Merge branch 'main' into distcheck.dmh 2022-05-17 13:26:58 -06:00
Ward Fisher
771b959cad
Merge branch 'main' into jsonconvention.dmh 2022-05-17 13:24:53 -06:00
Ward Fisher
375e5adfe4
Merge branch 'main' into alwaysxarray.dmh 2022-05-17 13:23:19 -06:00
Dennis Heimbigner
7b09290a3a Improve filter installation process to avoid use of an extra shell script
re: https://github.com/Unidata/netcdf-c/issues/2338
re: https://github.com/Unidata/netcdf-c/issues/2294

In issue https://github.com/Unidata/netcdf-c/issues/2338,
Ed Hartnett suggested a better way to install filters to a user
defined location -- for Automake, anyway.

This PR implements that suggestion. It turns out to be more
complicated than it appears, so there are a fair number of changes,
mostly to shell scripts. Most of the change is in plugins/Makefile.am.

NOTE: this PR still does NOT address the use of HDF5_PLUGIN_PATH
as the default; this turns out to be complex when dealing with NCZarr.
So this will be addressed in a subsequent post-4.9.0 PR.

## Misc. Changes
1. Record the occurrences of incomplete codecs in libnczarr so that
   they can be included in the _Codecs attribute correctly. This allows
   users to see what missing filters are referenced in the Zarr file.
   Primarily affects libnczarr/zfilter.[ch]. Also required creating a
   new no-effect filter: H5Zunknown.c. (See the sketch after this list.)
2. Move the unknown filter test to a separate test file.
3. Incorporates PR https://github.com/Unidata/netcdf-c/pull/2343
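
A minimal sketch of inspecting a variable's _Codecs attribute (hypothetical file and variable names):
````
/* Read a variable's _Codecs attribute to see which filters, including
 * incomplete/missing ones, the Zarr file references. */
#include <netcdf.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int ncid, varid;
    size_t len;
    nc_open("file:///tmp/filtered.file#mode=nczarr,file", NC_NOWRITE, &ncid);
    nc_inq_varid(ncid, "v", &varid);
    if (nc_inq_attlen(ncid, varid, "_Codecs", &len) == NC_NOERR) {
        char *codecs = (char *)malloc(len + 1);
        nc_get_att_text(ncid, varid, "_Codecs", codecs);
        codecs[len] = '\0';
        printf("_Codecs: %s\n", codecs);
        free(codecs);
    }
    nc_close(ncid);
    return 0;
}
````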
2022-05-14 16:05:48 -06:00
Dennis Heimbigner
5b400442ff Merge branch 'master' into jsonconvention.dmh 2022-05-09 12:43:52 -06:00
Dennis Heimbigner
53890fd3a0 Fix distcheck problems
re: https://github.com/Unidata/netcdf-c/issues/2342
This PR replaces PR https://github.com/Unidata/netcdf-c/pull/2342

This PR extends the distcheck corrections in PR
https://github.com/Unidata/netcdf-c/pull/2342.  That original PR
exposed some errors in the file naming in the plugins and
nczarr_test directories.  This PR corrects those problems and
should be used instead of https://github.com/Unidata/netcdf-c/pull/2342

Ed Hartnett's suggestion about how to install the plugins in the
user specified directory will be addressed in a subsequent PR.
2022-05-09 12:10:53 -06:00
Dennis Heimbigner
444024a7be Merge branch 'master' into jsonconvention.dmh 2022-05-01 13:16:58 -06:00
Dennis Heimbigner
909884ffb3 cleanup 2022-04-30 21:54:00 -06:00
Dennis Heimbigner
f897b458ea Fix szip handling 2022-04-30 19:06:01 -06:00
Dennis Heimbigner
126b3f9423 Support installation of filters into user-specified location
re: https://github.com/Unidata/netcdf-c/issues/2294

Ed Hartnett suggested that the netcdf library installation process
be extended to install the standard filters into a user specified
location. The user can then set HDF5_PLUGIN_PATH to that location.

This PR provides that capability using:
````
configure option: --with-plugin-dir=<absolute directory path>
cmake option: -DPLUGIN_INSTALL_DIR=<absolute directory path>
````

Currently, the following plugins are always installed, if
available: bzip2, zstd, blosc.
If NCZarr is enabled, then additional plugins are installed:
fletcher32, shuffle, deflate, szip.

Additionally, the necessary codec support is installed
for each of the above filters that is installed.

## Changes:
1. Cleanup handling of built-in bzip2.
2. Add documentation to docs/filters.md
3. Re-factor the NCZarr codec libraries
4. Add a test, although it can only be exercised after
   the library is installed, so it cannot be used during
   normal testing.
5. Cleanup use of HDF5_PLUGIN_PATH in the filter test cases.
2022-04-29 14:31:55 -06:00
Edward Hartnett
336b7d7222 turning off tests that depend on ncpathcvt when --disable-utilities is used 2022-04-09 13:28:01 -06:00