Commit Graph

205 Commits

Author SHA1 Message Date
Ward Fisher
7d9baced0e Correct issue with test file. 2023-10-24 10:28:51 -06:00
Ward Fisher
b41c33805a Update cdash script file. 2023-10-24 10:18:58 -06:00
Ward Fisher
df5261ce0c Correct issue with file. 2023-10-24 10:17:13 -06:00
Ward Fisher
bce53cae08 Add first script to try to embed cdash scripts into CI. 2023-10-24 10:15:25 -06:00
Dennis Heimbigner
df3636b959 Mitigate S3 test interference + Unlimited Dimensions in NCZarr
This PR started as an attempt to add unlimited dimensions to NCZarr.
It did that, but this exposed significant problems with test interference.
So this PR is mostly about fixing -- well mitigating anyway -- test
interference.

The problem of test interference is now documented in docs/internal.md.
The solutions implemented here are also described in that document.
The solution is somewhat fragile, but multiple cleanup mechanisms
are provided. Note that this feature requires that the
AWS command line utility be installed.

## Unlimited Dimensions
The existing NCZarr extensions to Zarr are modified to support unlimited dimensions.
NCZarr extends the Zarr metadata for the ".zgroup" object to include netcdf-4 model extensions. This information is stored in ".zgroup" as a dictionary named "_nczarr_group".
Inside "_nczarr_group", there is a key named "dims" that stores information about netcdf-4 named dimensions. The value of "dims" is a dictionary whose keys are the dimension names. The value associated with each dimension name has one of two forms (see the sketch below).
Form 1 is a special case of form 2 and is kept for backward compatibility. Whenever a new file is written, form 1 is used if possible, otherwise form 2.
* Form 1: an integer representing the size of the dimension; this is used for simple named dimensions.
* Form 2: a dictionary with the following keys and values:
   - "size" with an integer value representing the (current) size of the dimension.
   - "unlimited" with a value of either "1" or "0" to indicate if this dimension is an unlimited dimension.

For unlimited dimensions, the size is initially zero, and as variables extend the length of that dimension, the size value for the dimension increases.
That dimension size is shared by all arrays referencing that dimension, so if one array extends an unlimited dimension, it is implicitly extended for all other arrays that reference that dimension.
This is the standard semantics for unlimited dimensions.
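
These are the same semantics exposed through the netCDF C API; the following is a minimal sketch (error checking elided, file and variable names illustrative) showing how writing past the current end of an unlimited dimension extends it:
```c
#include <netcdf.h>

int main(void) {
    int ncid, dim_time, dim_x, varid;
    int dimids[2];
    float value = 1.0f;
    size_t index[2] = {9, 0};   /* record index 9 in an initially empty dimension */

    nc_create("example.nc", NC_NETCDF4 | NC_CLOBBER, &ncid);
    nc_def_dim(ncid, "time", NC_UNLIMITED, &dim_time);  /* size starts at zero */
    nc_def_dim(ncid, "x", 4, &dim_x);
    dimids[0] = dim_time;
    dimids[1] = dim_x;
    nc_def_var(ncid, "v", NC_FLOAT, 2, dimids, &varid);
    nc_enddef(ncid);

    /* Writing beyond the current size extends "time" to length 10; any other
       variable that references "time" sees the new size as well. */
    nc_put_var1_float(ncid, varid, index, &value);

    nc_close(ncid);
    return 0;
}
```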

Adding unlimited dimensions required a number of other changes to the NCZarr code-base. These included the following.
* Partially refactored the slice-handling code in zwalk.c to clean it up.
* Added a number of tests for unlimited dimensions derived from the same test in nc_test4.
* Added several NCZarr specific unlimited tests; more are needed.
* Added an endianness test.

## Misc. Other Changes
* Modify libdispatch/ncs3sdk_aws.cpp to optionally support use of the
   AWS Transfer Utility mechanism. This is controlled by the
   `#define TRANSFER` macro in that file. It defaults to being disabled.
* Parameterize both the standard Unidata S3 bucket (S3TESTBUCKET) and the netcdf-c test data prefix (S3TESTSUBTREE).
* Fixed an obscure memory leak in ncdump.
* Removed some obsolete unit testing code and test cases.
* Uncovered a bug in the netcdf-c handling of big-endian floats and doubles. Have not fixed yet. See tst_h5_endians.c.
* Renamed some nczarr_tests testcases to avoid name conflicts with nc_test4.
* Modify the semantics of zmap#ncsmap_write to only allow total rewrite of objects.
* Modify the semantics of zodom to properly handle stride > 1 (see the sketch after this list).
* Add a truncate operation to the libnczarr zmap code.
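
To illustrate what stride handling means here, the following is a sketch of the general odometer technique (not the actual zodom implementation): iterate every absolute index selected by a per-dimension (start, count, stride) slice.
```c
#include <stddef.h>
#include <stdio.h>

/* One slice per dimension: selects start, start+stride, ... for count entries. */
typedef struct Slice { size_t start, count, stride; } Slice;

typedef struct Odom {
    int rank;
    const Slice* slices;   /* one slice per dimension */
    size_t counter[8];     /* current position, in slice coordinates */
} Odom;

/* Absolute index in dimension i for the current position. */
static size_t odom_index(const Odom* o, int i) {
    return o->slices[i].start + o->counter[i] * o->slices[i].stride;
}

/* Advance like a car odometer; return 0 when iteration is exhausted. */
static int odom_next(Odom* o) {
    for (int i = o->rank - 1; i >= 0; i--) {
        if (++o->counter[i] < o->slices[i].count) return 1;
        o->counter[i] = 0;   /* carry into the next-slower dimension */
    }
    return 0;
}

int main(void) {
    /* Example: dimension 0 takes indices 1,3,5; dimension 1 takes indices 0,3. */
    Slice s[2] = { {1, 3, 2}, {0, 2, 3} };
    Odom o = { 2, s, {0} };
    do {
        printf("(%zu,%zu)\n", odom_index(&o, 0), odom_index(&o, 1));
    } while (odom_next(&o));
    return 0;
}
```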
2023-09-26 16:56:48 -06:00
Dennis Heimbigner
8887b5bb51 Update tinyxml and allow its use under OS/X.
re: PR https://github.com/Unidata/netcdf-c/pull/2710

Apparently (see the above PR) tinyxml2 now works under OS/X.
So this PR is a follow-on to the above PR. It modifies
our OS/X github action to test tinyxml2 under OS/X.
2023-06-12 20:16:23 -06:00
Dennis Heimbigner
cdbf04956b Provide a single option to disable all network access and testing.
Add the option "--disable-network-access" (automake)
or "-DENABLE_NETWORK_ACCESS=OFF" (cmake).
When network access is disabled, all network-access
capabilities and the associated testing are transitively disabled.
Setting this option implies the following:
* --disable-dap
* --disable-byterange
* --disable-s3

This PR answers a request for a feature from Ed Hartnett.

## Misc. Other changes
* Take the opportunity to clean up some old, unused options;
e.g. --enable-multifilters.
* Fix a bug in the use of S3 URLs.
2023-06-10 14:08:04 -06:00
Ward Fisher
dd75fa343c
Merge branch 'main' into patch-2 2023-05-23 11:15:04 -06:00
Dennis Heimbigner
6c7e668a04 Remove debugging 2023-05-09 21:18:51 -06:00
Dennis Heimbigner
98477b9f25 ## Addendum [5/9/23]
It turns out that attempting to test S3 using a github action secret is a very complex process. So, this was disabled for github actions. However, a new *run_tests_s3.yml* action file was added that will eventually encapsulate S3 testing.
2023-05-09 21:13:49 -06:00
Dennis Heimbigner
f928428680 remove push trigger 2023-05-03 16:31:39 -06:00
Dennis Heimbigner
912e76e552 Suppress S3 testing in github actions 2023-05-03 16:27:14 -06:00
Dennis Heimbigner
3ac9958ffc creds1 2023-05-02 19:38:30 -06:00
Dennis Heimbigner
c315873af6 Disable s3 tests 2023-05-02 14:51:34 -06:00
Dennis Heimbigner
ef55c327b6 secret1 2023-05-02 14:10:02 -06:00
Dennis Heimbigner
eb6c9fa40f chmod 2023-05-02 13:35:22 -06:00
Dennis Heimbigner
b5ea9616e9 profile1 2023-05-02 13:20:09 -06:00
Dennis Heimbigner
38615da3e7 enable s3 testing 2023-05-02 12:55:25 -06:00
Dennis Heimbigner
681abc3fb1 s3-off 2023-04-30 18:41:31 -06:00
Dennis Heimbigner
77ac0e052b debug 2023-04-29 21:33:45 -06:00
Dennis Heimbigner
dcc99e8d8b debug 2023-04-29 21:23:07 -06:00
Dennis Heimbigner
2e7befd209 debug15 2023-04-29 21:22:18 -06:00
Dennis Heimbigner
e8eeaf5f19 debug14 2023-04-29 21:14:57 -06:00
Dennis Heimbigner
908aa20859 debug12 2023-04-29 21:12:16 -06:00
Dennis Heimbigner
53021408a6 ub2 2023-04-27 14:50:32 -06:00
Dennis Heimbigner
dbff85af2b Merge branch 's3update.dmh' of https://github.com/DennisHeimbigner/netcdf-c into s3update.dmh 2023-04-27 14:48:10 -06:00
Dennis Heimbigner
03854bcf27 ub1 2023-04-27 14:48:00 -06:00
Dennis Heimbigner
744aa6cd25 only1 2023-04-27 14:28:21 -06:00
Dennis Heimbigner
8ee9453043 valg2 2023-04-26 13:20:54 -06:00
Dennis Heimbigner
5dd237246f fault1 2023-04-26 13:03:21 -06:00
Dennis Heimbigner
3eaa4bbb2c valgrind1 2023-04-26 12:38:11 -06:00
Dennis Heimbigner
49737888ca Improve S3 Documentation and Support
## Improvements to S3 Documentation
* Create a new document *quickstart_paths.md* that gives a summary of the legal path formats used by netcdf-c. This includes both file paths and URL paths.
* Modify *nczarr.md* to remove most of the S3 related text.
* Move the S3 text from *nczarr.md* to a new document *cloud.md*.
* Add some S3-related text to the *byterange.md* document.

Hopefully, this will make it easier for users to find the information they want.

## Rebuild NCZarr Testing
In order to avoid problems with running make check in parallel, two changes were made:
1. The *nczarr_test* test system was rebuilt. Now, for each test,
any generated files are kept in a test-specific directory, isolated
from all other test executions.
2. Similarly, since the S3 test bucket is shared, any generated S3 objects
are isolated using a test-specific key path.

## Other S3 Related Changes
* Add code to ensure that files created on S3 are reclaimed at the end of testing.
* Use the bash "trap" command to ensure S3 cleanup occurs even if a test fails.
* Clean up the S3-related configure.ac flags, since S3 is now used in several places. One should now use the option *--enable-s3* instead of *--enable-nczarr-s3*, although the latter is kept as a deprecated alias for the former.
* Get some of the github actions yml files to work with S3; this required fixing various test scripts and adding a secret to access the Unidata S3 bucket.
* Cleanup S3 portion of libnetcdf.settings.in and netcdf_meta.h.in and test_common.in.
* Merge partial S3 support into dhttp.c.
* Create an experimental s3 access library especially for use with Windows. It is enabled by using the options *--enable-s3-internal* (automake) or *-DENABLE_S3_INTERNAL=ON* (CMake). Also add a unit-test for it.
* Move some definitions from ncrc.h to ncs3sdk.h.

## Other Changes
* Provide a default implementation of strlcpy and move this and similar defaults into *dmissing.c*.
2023-04-25 17:15:06 -06:00
Ward Fisher
dc6e392c9d
Merge branch 'main' into znotnc.dmh 2023-04-12 16:02:34 -06:00
Dennis Heimbigner
2aee428ee4 ubuntufix1 2023-04-04 14:28:32 -06:00
Dennis Heimbigner
0ca921f721 ub1" 2023-04-04 13:15:27 -06:00
Dennis Heimbigner
d738e03f5b Update 2023-03-14 14:14:44 -06:00
Ward Fisher
331ed2bdab Expand CI testing with HDF5 1.14.0 2023-03-14 11:47:57 -06:00
Ward Fisher
77738e546d Add hdf5 1.14.0 to GitHub CI. 2023-03-14 11:39:14 -06:00
DWesl
2f103420f6
CI: Test --without-plugin-dir on Cygwin
This caused problems a bit ago.
This will likely take a bit of iteration.
2023-03-13 16:28:35 -04:00
Dennis Heimbigner
5c07ebfd11 Check at nc_open if file appears to be in NCZarr/Zarr format.
re: Issue https://github.com/Unidata/netcdf-c/issues/2656

Charlie Zender notes that *nc_open()* does not immediately detect that the given path refers to a file not in zarr format. Rather it fails later when trying to read the (meta-)data.

The reason is that the Zarr format is highly decentralized. There is no easily testable magic number or superblock to look for. In effect the only way to see if a directory is Zarr is to successfully read it.

It is possible to heuristically detect that a path refers to an NCZarr/Zarr file by doing a breadth-first search of the file tree starting at the given path. If the search encounters a file whose name starts with ".z", then assume it is a legitimate NCZarr/Zarr file. Of course, this test could be costly. One hopes that in practice it is not.
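
The following is an illustrative sketch of such a check (not the actual implementation; the function name is hypothetical, and for brevity it uses a depth-limited recursive walk rather than a true breadth-first search):
```c
#include <dirent.h>
#include <stdio.h>
#include <string.h>

/* Return 1 if some entry at or below `root` has a name starting with ".z"
 * (e.g. .zgroup, .zarray, .zattrs), else 0. The depth limit keeps the
 * search from becoming too costly on large trees. */
static int looks_like_zarr(const char* root, int depth) {
    DIR* dir = opendir(root);
    if (dir == NULL) return 0;          /* not a readable directory */
    struct dirent* e;
    int found = 0;
    while (!found && (e = readdir(dir)) != NULL) {
        if (strcmp(e->d_name, ".") == 0 || strcmp(e->d_name, "..") == 0)
            continue;
        if (strncmp(e->d_name, ".z", 2) == 0) { found = 1; break; }
        if (depth > 0) {
            char sub[4096];
            snprintf(sub, sizeof sub, "%s/%s", root, e->d_name);
            found = looks_like_zarr(sub, depth - 1);
        }
    }
    closedir(dir);
    return found;
}
```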

In addition to this fix, a corresponding test case was added.

## Other Changes

re: PR https://github.com/Unidata/netcdf-c/pull/2529

There was an error under Cygwin for that PR, which is fixed in this PR. The fix was to convert all *noinst_* references to *check_*.
2023-03-13 13:24:14 -06:00
Ward Fisher
34f64d4322 Update github action configuration scripts. 2023-01-27 12:06:39 -07:00
DWesl
fe67ea4224 CI: Change autotools CI build to out-of-tree build.
This addresses an issue with how most distribution packagers
run autotools (source in one directory, compile in another, install
to a third).
There was a PR to catch errors in that kind of build by running
make distcheck; this should do the relevant bits of that PR,
taking into account the preference for separate build and compile
steps.
2023-01-12 10:58:03 -05:00
Ward Fisher
341a43b5aa Correct lingering merge issue. 2023-01-09 20:27:12 -08:00
Ward Fisher
4c27c59fea Update whitespace. 2023-01-09 20:26:05 -08:00
Ward Fisher
bd0341256b Add libiconv-devel to cygwin CI 2023-01-09 14:55:30 -08:00
Ward Fisher
e02f678168 Correct libcurl development package. 2023-01-09 14:45:02 -08:00
Ward Fisher
19a1f9ec29 Add libcurl-dev to cygwin github actions 2023-01-09 14:43:50 -08:00
Ward Fisher
435f16bcb9 Merge branch 'loop.dmh' of https://github.com/DennisHeimbigner/netcdf-c into v4.9.1-wellspring.wif 2023-01-04 14:07:34 -08:00
Dennis Heimbigner
a03bb5e601 Fix infinite loop in file inferencing
re: Issue https://github.com/Unidata/netcdf-c/issues/2573

The file type inferencer in libdispatch/dinference.c has a simple
forward inference mechanism so that the occurrence of certain mode
values in a URL fragment implies inclusion of additional mode values.
This kind of inference is notorious for leading to cycles if one is not
careful. Unfortunately, this occurred in the mechanism in dinference.c.

This was fixed by providing a more complicated, but more reliable inference
mechanism.
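
One reliable way to make this kind of inference terminate is to compute a fixed point over a monotonically growing mode set. The following is a sketch of that general technique (not necessarily the exact approach taken in dinference.c); the implication rules shown are hypothetical examples.
```c
#include <stdio.h>
#include <string.h>

/* An implication rule: if `ifmode` is present, `thenmode` must also be present. */
typedef struct Rule { const char* ifmode; const char* thenmode; } Rule;

static const Rule rules[] = {        /* hypothetical mode implications */
    { "zarr",     "nczarr"   },
    { "nczarr",   "netcdf-4" },
    { "netcdf-4", "hdf5"     },
};

#define MAXMODES 64

static int contains(char** modes, int n, const char* m) {
    for (int i = 0; i < n; i++)
        if (strcmp(modes[i], m) == 0) return 1;
    return 0;
}

/* Expand `modes` in place and return the new count. This terminates even if
 * the rules contain cycles, because the mode set only grows and is bounded
 * by the finite set of rule targets. */
static int infer(char** modes, int nmodes) {
    int changed = 1;
    while (changed) {
        changed = 0;
        for (size_t r = 0; r < sizeof(rules)/sizeof(rules[0]); r++) {
            if (contains(modes, nmodes, rules[r].ifmode) &&
                !contains(modes, nmodes, rules[r].thenmode) &&
                nmodes < MAXMODES) {
                modes[nmodes++] = (char*)rules[r].thenmode;
                changed = 1;
            }
        }
    }
    return nmodes;
}

int main(void) {
    char* modes[MAXMODES] = { "zarr" };   /* modes taken from a URL fragment */
    int n = infer(modes, 1);
    for (int i = 0; i < n; i++) printf("%s\n", modes[i]);
    return 0;
}
```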

## Misc. Other Changes
* Found and fixed a couple of memory leaks.
* There is a recent problem in building HDF4 support on github actions. This was fixed by using the internal HDF4 xdr capability.
* Some filter-related code was not being properly ifdef'd with ENABLE_NCZARR_FILTERS.
2022-12-18 13:18:00 -07:00
Ward Fisher
087d3b6c37 Supported headers for hdf4 are not installed in actions, and there does not appear (currently) to be an easy way to reinstall these. 2022-11-18 11:34:09 -07:00