## This is an automake file, part of Unidata's netCDF package.
# Copyright 2018, see the COPYRIGHT file for more information.
# This file builds and runs the ncdump program.
# Ed Hartnett, Dennis Heimbigner, Ward Fisher
#SH_LOG_DRIVER = $(SHELL) $(top_srcdir)/test-driver-verbose
#sh_LOG_DRIVER = $(SHELL) $(top_srcdir)/test-driver-verbose
#LOG_DRIVER = $(SHELL) $(top_srcdir)/test-driver-verbose
#TESTS_ENVIRONMENT += export SETX=1;
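# To get more detail from a failing test, one option (not part of the
# normal build) is to uncomment one of the verbose log-driver lines
# above and rerun a single test under the automake harness, e.g.:
#   make check TESTS=tst_output.sh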
# Put together AM_CPPFLAGS and AM_LDFLAGS.
include $(top_srcdir)/lib_flags.am
LDADD = ${top_builddir}/liblib/libnetcdf.la
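# LDADD is the default link list for every program built in this
# directory; none of the utilities below define a per-program _LDADD,
# so all of them link against the netcdf library from this build tree.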
noinst_PROGRAMS=
# Note which tests depend on other tests. Necessary for make -j check.
TEST_EXTENSIONS = .sh
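# Under the automake parallel test harness, each test FOO.sh leaves its
# result in FOO.log. Declaring one log file as a prerequisite of another
# forces an ordering, which is how the inter-test dependencies below are
# expressed; the pattern is (hypothetical names):
#   dependent.log: prerequisite.log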
XFAIL_TESTS=""
if ! ENABLE_UNFIXED_MEMORY_LEAKS
AM_TESTS_ENVIRONMENT += export NC_VLEN_NOTEST=1;
endif
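# NC_VLEN_NOTEST is honored by the test scripts themselves, which skip
# the vlen tests with known memory leaks when it is set. A minimal
# sketch of the guard a script might use (illustrative only):
#   if test -n "$NC_VLEN_NOTEST" ; then exit 0 ; fi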
# This is the program we're building, and its sources.
bin_PROGRAMS = ncdump
ncdump_SOURCES = ncdump.c vardata.c dumplib.c indent.c nctime0.c \
ncdump.h vardata.h dumplib.h indent.h nctime0.h cdl.h utils.h \
utils.c nciter.h nciter.c nccomps.h
# Another utility program that copies any netCDF file using only the
# netCDF API
bin_PROGRAMS += nccopy
nccopy_SOURCES = nccopy.c nciter.c nciter.h chunkspec.h chunkspec.c \
utils.h utils.c dimmap.h dimmap.c list.c list.h
# Wei-keng Liao's (wkliao@eecs.northwestern.edu)
# netcdf-3 validator program
# (https://github.com/Parallel-NetCDF/PnetCDF/blob/master/src/utils/ncvalidator/ncvalidator.c)
# that prints out the structure of a netcdf-3 file.
# This program is built but not installed.
noinst_PROGRAMS += ncvalidator
ncvalidator_SOURCES = ncvalidator.c
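# Programs listed in noinst_PROGRAMS are compiled so the test scripts
# can run them from the build tree, but "make install" skips them.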
# A non-installed utility program to convert paths; similar to cygpath
noinst_PROGRAMS += ncpathcvt
ncpathcvt_SOURCES = ncpathcvt.c
# A simple netcdf-4 metadata -> xml printer. Do not install.
if USE_HDF5
bin_PROGRAMS += nc4print
noinst_PROGRAMS += nc4print
nc4print_SOURCES = nc4print.c nc4printer.c
# Create a helper program for test_scope.sh
# Program prints out the fqn of the type of
# a specified variable in the .nc file.
noinst_PROGRAMS += printfqn
printfqn_SOURCES = printfqn.c
endif
# Conditionally build the ocprint program, but do not install
if ENABLE_DAP
bin_PROGRAMS += ocprint
noinst_PROGRAMS += ocprint
ocprint_SOURCES = ocprint.c
endif
# These are the man pages.
man_MANS = ncdump.1 nccopy.1
if BUILD_TESTSETS
# C programs needed by shell scripts for classic tests.
check_PROGRAMS = rewrite-scalar ref_ctest ref_ctest64 ncdump tst_utf8 \
bom tst_dimsizes nctrunc tst_rcmerge
# Tests for classic and 64-bit offset files.
TESTS = tst_inttags.sh run_tests.sh tst_64bit.sh ref_ctest \
ref_ctest64 tst_output.sh tst_lengths.sh tst_calendars.sh \
run_utf8_tests.sh tst_nccopy3.sh tst_nccopy3_subset.sh \
tst_charfill.sh tst_iter.sh tst_formatx3.sh tst_bom.sh \
tst_dimsizes.sh run_ncgen_tests.sh tst_ncgen4_classic.sh test_radix.sh #test_rcmerge.sh
# The tst_nccopy3.sh test uses output from a bunch of other tests.
# This rule records those dependencies so parallel builds work.
tst_nccopy3.log: tst_calendars.log run_utf8_tests.log tst_output.log \
tst_64bit.log run_tests.log tst_lengths.log
TESTS += tst_null_byte_padding.sh
if USE_STRICT_NULL_BYTE_HEADER_PADDING
XFAIL_TESTS += tst_null_byte_padding.sh
endif
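# Tests named in XFAIL_TESTS are expected to fail: the harness reports
# them as XFAIL rather than FAIL, so strict-null-byte-padding builds
# still pass "make check".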
if ! ISCYGWIN
TESTS += test_unicode_directory.sh test_unicode_path.sh
endif
if LARGE_FILE_TESTS
TESTS += tst_iter.sh
endif
TESTS += testpathcvt.sh
if USE_HDF5
# HDF5 has some extra C programs to build. These will be run by
# the shell script tests.
check_PROGRAMS += tst_fileinfo tst_create_files tst_h_rdc0 \
tst_group_data tst_enum_data tst_opaque_data tst_string_data \
tst_vlen_data tst_comp tst_comp2 tst_nans tst_special_atts \
tst_unicode tst_fillbug tst_compress tst_chunking tst_h_scalar
check_PROGRAMS += tst_vlen_demo
# Tests for netCDF-4 behavior.
TESTS += tst_fileinfo.sh tst_hdf5_offset.sh tst_inttags4.sh \
tst_netcdf4.sh tst_fillbug.sh tst_netcdf4_4.sh tst_nccopy4.sh \
tst_nccopy5.sh tst_grp_spec.sh tst_mud.sh tst_h_scalar.sh tst_formatx4.sh \
run_utf8_nc4_tests.sh run_back_comp_tests.sh run_ncgen_nc4_tests.sh \
tst_ncgen4.sh test_scope.sh
# Record interscript dependencies so parallel builds work.
tst_nccopy4.log: run_ncgen_tests.log tst_output.log tst_ncgen4.log \
tst_fillbug.log tst_netcdf4_4.log tst_h_scalar.log
tst_nccopy5.log: tst_nccopy4.log
endif # USE_HDF5
TESTS += tst_inmemory_nc3.sh tst_nccopy_w3.sh
if USE_HDF5
TESTS += tst_inmemory_nc4.sh tst_nccopy_w4.sh
endif
if USE_HDF5
# Re-activate the ncgen -lc tests
TESTS += tst_ctests.sh
endif
if ENABLE_CDF5
# Test for keywords as identifiers
TESTS += test_keywords.sh
endif
endif BUILD_TESTSETS
# These files all have to be included with the distribution.
EXTRA_DIST = run_tests.sh tst_64bit.sh tst_output.sh test0.cdl \
ref_ctest1_nc4.cdl ref_ctest1_nc4c.cdl ref_tst_solar_1.cdl \
ref_tst_solar_2.cdl tst_netcdf4.sh tst_netcdf4_4.sh ref_tst_small.cdl \
tst_lengths.sh tst_ncml.cdl ref1.ncml ref_tst_group_data.cdl \
ref_tst_enum_data.cdl ref_tst_opaque_data.cdl ref_tst_string_data.cdl \
ref_tst_vlen_data.cdl ref_tst_comp.cdl ref_tst_unicode.cdl \
ref_tst_nans.cdl small.cdl small2.cdl $(man_MANS) run_utf8_tests.sh \
ref_tst_utf8.cdl ref_tst_fillbug.cdl tst_fillbug.sh tst_calendars.cdl \
tst_calendars.sh ref_times.cdl ref_tst_special_atts.cdl \
ref_tst_noncoord.cdl ref_tst_compounds2.nc ref_tst_compounds2.cdl \
ref_tst_compounds3.nc ref_tst_compounds3.cdl ref_tst_compounds4.nc \
ref_tst_compounds4.cdl ref_tst_group_data_v23.cdl tst_mslp.cdl \
tst_bug321.cdl ref_tst_format_att.cdl ref_tst_format_att_64.cdl \
tst_nccopy3.sh tst_nccopy4.sh tst_nccopy5.sh \
ref_nc_test_netcdf4_4_0.nc run_back_comp_tests.sh \
ref_nc_test_netcdf4.cdl ref_tst_special_atts3.cdl tst_brecs.cdl \
ref_tst_grp_spec0.cdl ref_tst_grp_spec.cdl tst_grp_spec.sh \
ref_tst_charfill.cdl tst_charfill.cdl tst_charfill.sh tst_iter.sh \
tst_mud.sh ref_tst_mud4.cdl ref_tst_mud4-bc.cdl \
ref_tst_mud4_chars.cdl inttags.cdl inttags4.cdl ref_inttags.cdl \
ref_inttags4.cdl ref_tst_ncf213.cdl tst_h_scalar.sh \
run_utf8_nc4_tests.sh tst_formatx3.sh tst_formatx4.sh \
ref_tst_utf8_4.cdl ref_tst_nc4_utf8_4.cdl tst_inttags.sh \
tst_inttags4.sh CMakeLists.txt tst_bom.sh tst_inmemory_nc3.sh \
tst_dimsizes.sh tst_inmemory_nc4.sh tst_fileinfo.sh \
run_ncgen_tests.sh ref_test_360_day_1900.nc ref_test_365_day_1900.nc \
ref_test_366_day_1900.nc ref_test_360_day_1900.cdl \
ref_test_365_day_1900.cdl ref_test_366_day_1900.cdl \
tst_hdf5_offset.sh run_ncgen_nc4_tests.sh tst_nccopy3_subset.sh \
ref_nccopy3_subset.nc ref_test_corrupt_magic.nc tst_ncgen_shared.sh \
tst_ncgen4.sh tst_ncgen4_classic.sh tst_ncgen4_diff.sh \
tst_ncgen4_cycle.sh tst_null_byte_padding.sh \
ref_null_byte_padding_test.nc ref_tst_irish_rover.nc \
ref_provenance_v1.nc ref_tst_radix.cdl tst_radix.cdl test_radix.sh \
ref_nccopy_w.cdl tst_nccopy_w3.sh tst_nccopy_w4.sh \
ref_no_ncproperty.nc test_unicode_directory.sh test_unicode_path.sh \
ref_roman_szip_simple.cdl ref_roman_szip_unlim.cdl ref_tst_perdimspecs.cdl \
test_keywords.sh ref_keyword1.cdl ref_keyword2.cdl ref_keyword3.cdl ref_keyword4.cdl \
ref_tst_nofilters.cdl test_scope.sh \
test_rcmerge.sh ref_rcmerge1.txt ref_rcmerge2.txt ref_rcmerge3.txt \
scope_ancestor_only.cdl scope_ancestor_subgroup.cdl scope_group_only.cdl scope_preorder.cdl
# The L512.bin file contains exactly 512 bytes, each of value 0.
# It is used for creating hdf5 files with varying offsets for testing.
EXTRA_DIST += L512.bin
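# A sketch of how such a file can be used (the real logic lives in
# tst_hdf5_offset.sh; the file names here are illustrative): prepend
# blocks of zero bytes so the HDF5 signature must be found at offsets
# 512, 1024, 2048, ...
#   cat L512.bin valid_file.nc > offset512.nc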
EXTRA_DIST += tst_ctests.sh ref_ctest_small_3.c ref_ctest_small_4.c \
ref_ctest_special_atts_4.c
EXTRA_DIST += testpathcvt.sh ref_pathcvt.txt
# CDL files and Expected results
SUBDIRS = cdl expected
CLEANFILES = tst_*.nc tmp*.nc test*.nc iter.* tmp*.cdl tmp*.txt \
tst_output_*.cdl tst_output_*.c tst_utf8_*.cdl *.tmp tst_tst8.cdl \
tst_netcdf4_*.cdl test1_ncdump.cdl test2_ncdump.cdl test1.cdl \
ctest1.cdl test1_cdf5.cdl test2_cdf5.cdl test1_offset.cdl \
test2_offset.cdl ctest0.nc ctest0_64.nc c1.cdl c1_4.cdl ctest1_64.cdl \
c0.nc c0_4.nc small.nc small2.nc c0tmp.nc c1.ncml utf8.cdl \
utf8_64.cdl utf8.nc utf8_64.nc nc4_utf8.cdl nc4_utf8.nc \
tst_unicode.cdl tst_group_data.cdl tst_compounds2.cdl tst_comp.cdl \
tst_enum_data.cdl tst_small.cdl tst_times.cdl tst_solar_2.cdl \
tst_string_data.cdl tst_fillbug.cdl tst_opaque_data.cdl \
tst_compounds4.cdl tst_utf8.cdl tst_compounds3.cdl \
tst_special_atts.cdl tst_nans.cdl tst_format_att_64.cdl \
tst_vlen_data.cdl tst_solar_1.cdl tst_format_att.cdl \
tst_nc_test_netcdf4_4_0.cdl tst_mud4.cdl tst_mud4-bc.cdl \
tst_ncf213.cdl tst_h_scalar.cdl tst_mud4_chars.cdl inttags.nc \
inttags4.nc tst_inttags.cdl tst_inttags4.cdl nc4_fileinfo.nc \
hdf5_fileinfo.hdf nccopy3_subset_out.nc c5.nc \
compound_datasize_test.nc compound_datasize_test2.nc ncf199.nc \
tst_c0.cdl tst_c0_4.cdl tst_c0_4c.cdl tst_c0_64.cdl \
tst_compound_datasize_test.cdl tst_compound_datasize_test2.cdl \
tst_ncf199.cdl tst_tst_gattenum.cdl tst_tst_usuffix.cdl ctest.c \
ctest64.c nccopy3_subset_out.nc camrun.c tst_ncf213.cdl tst_ncf213.nc \
tst_radix.nc tmp_radix.cdl ctest_small_3.c ctest_small_4.c \
ctest_special_atts_4.c tst_roman_szip_simple.cdl \
tst_roman_szip_unlim.cdl tst_perdimpspecs.nc tmppds.* \
keyword1.nc keyword2.nc keyword3.nc keyword4.nc \
tmp_keyword1.cdl tmp_keyword2.cdl tmp_keyword3.cdl tmp_keyword4.cdl \
type_*.nc copy_type_*.cdl \
scope_*.nc copy_scope_*.cdl
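# clean-local is the standard automake hook for cleanup that a plain
# CLEANFILES list cannot express, such as removing whole directories
# left behind by tests.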
# Remove directories
clean-local:
	rm -fr rcmergedir