## This is an automake file, part of Unidata's netCDF package.
# Copyright 2018, see the COPYRIGHT file for more information.
# This file builds and runs DAP tests.
# Put together AM_CPPFLAGS and AM_LDFLAGS.
include $(top_srcdir)/lib_flags.am
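# lib_flags.am, included above, is shared across the netCDF-C subdirectories
# and is expected to supply the common AM_CPPFLAGS/AM_LDFLAGS base that the
# += lines below extend.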
# Uncomment to use a more verbose test driver
#SH_LOG_DRIVER = $(SHELL) $(top_srcdir)/test-driver-verbose
#LOG_DRIVER = $(SHELL) $(top_srcdir)/test-driver-verbose
#TEST_LOG_DRIVER = $(SHELL) $(top_srcdir)/test-driver-verbose
#TESTS_ENVIRONMENT = export SETX=1;
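# (When uncommented, the TESTS_ENVIRONMENT line above exports SETX=1, which
# the shell test scripts appear to honor by enabling "set -x" tracing.)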
# Note which tests depend on other tests. Necessary for make -j check.
TEST_EXTENSIONS = .sh
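# With automake's parallel test harness, each test records its result in a
# matching .log file; the .log dependencies declared further below rely on
# that to order dependent tests under "make -j check".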
LDADD = ${top_builddir}/liblib/libnetcdf.la
AM_CPPFLAGS += -I$(top_srcdir)/liblib
AM_CPPFLAGS += -DTOPSRCDIR=${abs_top_srcdir}
AM_CPPFLAGS += -DTOPBINDIR=${abs_top_builddir}
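# TOPSRCDIR and TOPBINDIR are presumably consumed by t_srcdir.h so that the
# test programs can locate the source and build trees at run time.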
# Set up the tests; do the .sh first, then .c
check_PROGRAMS =
TESTS =
t_dap3a_SOURCES = t_dap3a.c t_srcdir.h
test_cvt3_SOURCES = test_cvt.c t_srcdir.h
test_vara_SOURCES = test_vara.c t_srcdir.h
if NETCDF_ENABLE_DAP
check_PROGRAMS += t_dap3a test_cvt3 test_vara
TESTS += t_dap3a test_cvt3 test_vara
if NETCDF_BUILD_UTILITIES
TESTS += tst_ncdap3.sh
endif # NETCDF_BUILD_UTILITIES
# Remote tests are optional,
# because the server may be down or inaccessible.
if NETCDF_ENABLE_DAP_REMOTE_TESTS
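# Helper programs, built but not installed. Judging by their names and
# sources, findtestserver probes for a reachable DAP test server and
# pingurl checks whether a given URL responds.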
noinst_PROGRAMS = findtestserver pingurl
findtestserver_SOURCES = findtestserver.c
pingurl_SOURCES = pingurl.c
if NETCDF_BUILD_UTILITIES
TESTS += tst_ber.sh tst_remote3.sh tst_formatx.sh testurl.sh tst_fillmismatch.sh tst_zero_len_var.sh
endif # NETCDF_BUILD_UTILITIES
if NETCDF_ENABLE_EXTERNAL_SERVER_TESTS
if NETCDF_ENABLE_DAP_REMOTE_TESTS
if NETCDF_BUILD_UTILITIES
# Remote servers
# iridl.ldeo.columbia.edu
TESTS += tst_encode.sh
# test.opendap.org
TESTS += tst_hyrax.sh
TESTS += test_partvar
# Various
TESTS += tst_longremote3.sh
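# Making one test's .log depend on another's forces ordering in the
# parallel harness: tst_remote3 must complete before tst_longremote3 runs.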
tst_longremote3.log: tst_remote3.log
endif # NETCDF_BUILD_UTILITIES
if NETCDF_ENABLE_DAP_LONG_TESTS
test_manyurls_SOURCES = test_manyurls.c manyurls.h
check_PROGRAMS += test_manyurls
test_manyurls.log: tst_longremote3.log
TESTS += test_manyurls
endif # NETCDF_ENABLE_DAP_LONG_TESTS
test_partvar_SOURCES = test_partvar.c
t_misc_SOURCES = t_misc.c
#TESTS += t_ncf330
TESTS += t_misc
test_nstride_cached_SOURCES = test_nstride_cached.c
TESTS += test_nstride_cached
check_PROGRAMS += test_nstride_cached
test_varm3_SOURCES = test_varm3.c
TESTS += test_varm3
check_PROGRAMS += test_varm3
check_PROGRAMS += test_partvar
check_PROGRAMS += t_misc
check_PROGRAMS += t_ncf330
endif # NETCDF_ENABLE_DAP_REMOTE_TESTS (nested)
endif # NETCDF_ENABLE_EXTERNAL_SERVER_TESTS
if NETCDF_ENABLE_DAP_AUTH_TESTS
TESTS += testauth.sh
endif # NETCDF_ENABLE_DAP_AUTH_TESTS
endif #NETCDF_ENABLE_DAP_REMOTE_TESTS
endif #NETCDF_ENABLE_DAP
# Subdirectories holding the test data and expected results.
SUBDIRS = testdata3 expected3 expectremote3 expectedhyrax
EXTRA_DIST = tst_ncdap3.sh \
tst_remote3.sh \
tst_longremote3.sh \
tst_zero_len_var.sh \
tst_filelists.sh tst_urls.sh tst_utils.sh \
t_dap.c CMakeLists.txt tst_formatx.sh testauth.sh testurl.sh \
t_ncf330.c tst_ber.sh tst_fillmismatch.sh tst_encode.sh tst_hyrax.sh \
findtestserver.c.in
CLEANFILES = test_varm3 test_cvt3 file_results/* remote_results/* datadds* t_dap3a test_nstride_cached *.exe tmp*.txt
# This should only be left behind if using parallel io
CLEANFILES += tmp_*
DISTCLEANFILES = findtestserver.c
# This rule is used if someone wants to rebuild t_dap3a.c.
# Otherwise it is never invoked, but it records how to do it.
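# (t_dap3a.c is simply t_dap.c with NETCDF3ONLY defined first, which
# presumably restricts the test to the netCDF-3 data model.)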
t_dap3a.c: t_dap.c
echo "#define NETCDF3ONLY" > ./t_dap3a.c
	cat t_dap.c >> t_dap3a.c
# One last thing: generate a default .dodsrc for the tests.
BUILT_SOURCES = .dodsrc
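# .dodsrc is the DAP client run-control file; the HTTP.* keys written below
# tune the client's read buffer size and keep-alive behavior.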
.dodsrc:
echo "#DODSRC" >.dodsrc
echo "HTTP.READ.BUFFERSIZE=max" >>.dodsrc
echo "HTTP.KEEPALIVE=60/60" >>.dodsrc
clean-local: clean-local-check
.PHONY: clean-local-check
clean-local-check:
	-rm -rf results
	-rm -f .dodsrc
	-rm -fr testdir_* testset_*
# If valgrind is present, add valgrind targets.
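# (@VALGRIND_CHECK_RULES@ is substituted by configure, presumably via the
# autoconf-archive AX_VALGRIND_CHECK macro, and defines check-valgrind targets.)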
@VALGRIND_CHECK_RULES@