Re: https://github.com/zarr-developers/zarr-python/pull/716
The Zarr version 2 spec has been extended to include the ability
to choose the dimension separator in chunk name keys. The set of legal
separators has been extended from {'.'} to {'.', '/'}, so it is now
possible to use a key like "0/1/2/0" for chunk names.
This PR implements this for NCZarr. The V2 spec now says that
this separator can be set on a per-variable basis. For now, I
have chosen to allow it to be set only globally, by adding an entry
of the form "ZARR.DIMENSION_SEPARATOR=<char>" in the
.daprc/.dodsrc/ncrc file. Currently, the only legal separator
characters are '.' (the default) and '/'. On writing, this key
will only be written if its value differs from the default.
This change caused problems because a separator of '/'
is difficult to parse when keys/paths also use '/' as the path separator.
A test case was added for this.
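For example, adding the following line to the user's .ncrc file (a hedged illustration; the key name comes from the paragraph above) switches chunk keys from the "0.1.2" form to the "0/1/2" form:
````
ZARR.DIMENSION_SEPARATOR=/
````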
Additionally, NCZarr is now enabled by default. This required
some additional changes so that if the zip and/or AWS S3 SDK are unavailable,
then the corresponding features are disabled for NCZarr.
In addition, the following unrelated changes were made.
1. Tested that pure-zarr mode could read an nczarr formatted store.
1. The .rc file handling now merges all known .rc files (.ncrc, .daprc, and .dodsrc) in that order, looking in HOME first and then in the current directory. For duplicate entries, the later ones override the earlier ones. This change removes some of the conflicts inherent in the previous .rc file load process. A set of test cases was also added.
1. Re-order tests in configure.ac and CMakeLists.txt so that if libcurl
is not found, then the other options that depend upon it
are properly disabled.
1. I decided that xarray support should be enabled by default for pure
zarr. In order to allow disabling, I added a new mode flag "noxarray".
1. Certain tests in nczarr_test depend on use of .dodsrc. In order for these
to work when testing in parallel, some inter-test dependencies needed to
be added.
1. Improved authorization testing to use changes in thredds.ucar.edu
re: https://github.com/Unidata/netcdf-c/issues/1988
There was an issue with certain shell programs (bash notably).
On certain platforms, when given a URL containing an escaped
'#' character (e.g. \\#), bash would not remove the backslash. So I
had to add a hack for this. Unfortunately I overdid it and it
removed all '\\' characters. This is ok for non-Windows platforms,
but obviously fails for Windows.
The fix is this.
1. In a utility program (ncgen, ncdump, nccopy, etc) there is probably a call (or calls) to NC_backslashUnescape(xxx) where xxx is a path argument from the command line.
2. Replace each such call with NC_shellUnescape(xxx).
The NC_shellUnescape function was added; it searches only for occurrences of "\#" and replaces them with "#".
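A minimal sketch of that behavior, assuming the function takes a string and returns a newly allocated unescaped copy (the real signature in the library may differ):
````
#include <stdlib.h>
#include <string.h>

/* Copy esc, dropping the backslash in each "\#" pair; all other
 * characters (including other backslashes) pass through untouched. */
char* shell_unescape(const char* esc)
{
    char* clean = (char*)malloc(strlen(esc) + 1);
    char* p = clean;
    if (clean == NULL) return NULL;
    while (*esc) {
        if (esc[0] == '\\' && esc[1] == '#') esc++; /* skip the backslash */
        *p++ = *esc++;
    }
    *p = '\0';
    return clean;
}
````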
re: https://github.com/Unidata/netcdf-c/issues/1977
PR https://github.com/Unidata/netcdf-c/pull/1753 changed ncgen
to allow certain type names to be used as identifiers in
selected situations.
An unwanted side effect was that existing type aliases were no
longer accepted by ncgen. Specifically, using the "long" type
caused an error.
I was able to figure out a better solution to the original
problem (https://github.com/Unidata/netcdf-c/issues/1750)
that also fixes this problem.
This PR fixes that problem in ncgen/ncgen.l,
and adds tests to ncdump/test_keywords.sh
re: Github issue https://github.com/Unidata/netcdf-c/issues/1956
The function NC_compare_nc_types in libdispatch/dcopy.c uses an
incorrect algorithm to search for types. The core of this is the
function NC_rec_find_nc_type in libdispatch/dcopy.c. Currently
it searches only the current group and its subtree.
Additionally, the function NC4_inq_typeid in libsrc4/nc4internal.c
has been extended to handle fully qualified names. It was originally
designed to do this, but for some reason this was never completed.
The NC_rec_find_nc_type algorithm has been altered to match the
algorithm used by NC4_inq_typeid. It operates as follows.
Given a type T in file F and group G, it searches another file F2,
starting at group G2, for a type T2 that is equivalent to T.
The search order is as follows.
1. Search G2 for a type T2 equivalent to T.
2. Search upwards in the ancestor groups of G2 for a type T2 equivalent to T.
3. Search the complete group tree of F2 in pre-order, breadth-first order to locate T2 equivalent to T.
Also add a test case to validate the algorithm: ncdump/test_scope.sh.
Note, this change may cause compatibility problems, though that is
improbable because having two different but equivalent type declarations
in one dataset is itself unlikely.
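The three-step search order above can be read as the following self-contained sketch; it is a toy model only, since the real NC_rec_find_nc_type works over the library's internal group and type structures:
````
#include <stddef.h>
#include <string.h>

typedef struct Group {
    struct Group* parent;
    struct Group** children; size_t nchildren;
    const char** typenames; size_t ntypes; /* types declared in this group */
} Group;

/* Does group g declare a type matching name? */
static const char* find_in_group(const Group* g, const char* name) {
    for (size_t i = 0; i < g->ntypes; i++)
        if (strcmp(g->typenames[i], name) == 0) return g->typenames[i];
    return NULL;
}

/* Walk a whole subtree (shown depth-first for brevity; the library's
 * traversal order differs slightly). */
static const char* find_in_tree(const Group* g, const char* name) {
    const char* t = find_in_group(g, name);
    for (size_t i = 0; t == NULL && i < g->nchildren; i++)
        t = find_in_tree(g->children[i], name);
    return t;
}

/* Steps 1-3: group G2 itself, then its ancestors, then the whole tree. */
const char* find_equivalent_type(const Group* g2, const Group* root2,
                                 const char* name) {
    const char* t;
    if ((t = find_in_group(g2, name)) != NULL) return t;        /* 1 */
    for (const Group* g = g2->parent; g != NULL; g = g->parent)
        if ((t = find_in_group(g, name)) != NULL) return t;     /* 2 */
    return find_in_tree(root2, name);                           /* 3 */
}
````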
The netcdf-c code has to deal with a variety of platforms:
Windows, OSX, Linux, Cygwin, MSYS, etc. These platforms differ
significantly in the kind of file paths that they accept. So in
order to handle this, I have created a set of replacements for
the most common file system operations such as _open_ or _fopen_
or _access_ to manage the file path differences correctly.
A more limited version of this idea was already implemented via
the ncwinpath.h and dwinpath.c code, so this can be viewed as a
replacement for that code. In many cases, the only
change that was required was to replace '#include <ncwinpath.h>'
with '#include <ncpathmgmt.h>' and then replace file operation
calls with the NCxxx equivalents from ncpathmgmt.h. Note that
recently, ncwinpath.h was renamed to ncpathmgmt.h, so this pull
request should not require dealing with winpath.
The heart of the change is include/ncpathmgmt.h, which provides
alternate operations such as NCfopen or NCaccess and which properly
parse and rebuild path arguments to work for the platform on which
the code is executing. This mostly matters for Windows because of the
way that it uses backslashes and drive letters, as compared to *nix*.
One important feature is that the user can do string manipulations
on a file path without having to worry too much about the platform
because the path management code will properly handle most mixed cases.
So one can for example concatenate a path suffix that uses forward
slashes to a Windows path and have it work correctly.
The conversion code is in libdispatch/dpathmgr.c, and the
important function there is NCpathcvt which does the proper
conversions to the local path format.
As a rule, most code should just replace their file operations with
the corresponding NCxxx ones defined in include/ncpathmgmt.h. These
NCxxx functions all call NCpathcvt on their path arguments before
executing the actual file operation.
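A hedged illustration of that replacement pattern, assuming NCfopen mirrors fopen's signature as the text implies (the path here is hypothetical):
````
#include <stdio.h>
#include <ncpathmgmt.h>

FILE* open_data(void)
{
    /* A mixed-style path: forward slashes after a Windows drive letter.
     * NCfopen runs NCpathcvt on the argument first, so this works on
     * Windows as well as on *nix. */
    return NCfopen("d:/data/subdir/file.nc", "r");
}
````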
In some rare cases, the client may need to directly use NCpathcvt,
but this should be avoided as much as possible. If there is a need
for supporting a new file operation not already in ncpathmgmt.h, then
use the code in dpathmgr.c as a template. Also please notify Unidata
so we can include it as a formal part of our supported operations.
Also, if you see an operation in the library that is not using the
NCxxx form, then please submit an issue so we can fix it.
Misc. Changes:
* Clean up the utf8 testing code; it is impossible to get some
tests to work under Windows using shell scripts; the args do
not pass as utf8 but as some other encoding.
* Added an extra utf8 test case: test_unicode_path.sh
* Add a true test for HDF5 1.10.6 or later, because as noted in
PR https://github.com/Unidata/netcdf-c/pull/1794,
HDF5 changed its Windows file path handling.
re: Issue https://github.com/Unidata/netcdf-c/issues/1936
The algorithm controlling the interaction of the -d 0, -F, and -c / options
was incorrect.
The fix:
1. make -d 0 => no deflation
2. make -F properly use the filter specs to decide.
Also added a test case to ncdump/tst_nccopy4.sh.
Primary Fixes:
* Add a whole variable optimization -- used in the rare case that nc_get/put_vara covers the whole of a variable and the variable has a single chunk; see the sketch after this list.
* Fix chunking error when stride causes whole chunks to be skipped.
* Fix some memory leaks
* Add test cases
* Add one performance test to nczarr_test/. This uses the timer utils from unit_test: timer_utils.[ch].
* Move ncdumpchunks utility from ncdump to nczarr_test
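Regarding the whole-variable optimization mentioned above, a hedged sketch of the access pattern it targets (the 2-D shape and float type are illustrative):
````
#include <netcdf.h>

/* Read an entire nx-by-ny variable in one vara call: start is all zeros
 * and count spans every dimension, so the new fast path can apply when
 * the variable consists of a single chunk. */
int read_whole(int ncid, int varid, size_t nx, size_t ny, float* buf)
{
    size_t start[2] = {0, 0};
    size_t count[2] = {nx, ny};
    return nc_get_vara_float(ncid, varid, start, count, buf);
}
````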
Misc. Other Changes:
* Make check for aws libraries conditional on --enable-nczarr-s3
* Remove all but one of the bm tests from nczarr_test until they are working.
* Remove another dependency on HDF5 from supposedly non-HDF5 specific code; specifically hdf5_log_hdf5.
* Make the BAIL2 macro be hdf5 specific and replace elsewhere with an HDF5 independent equivalent.
* Move hdf5cache.c to libsrc4/nc4cache.c because it is used by nczarr.
* Modify unit_tests so that some of them are run even if using Windows.
* Misc. small bug fixes, refactors, and memory-leak fixes.
* Rename some conflicting tests for cmake.
* Attempted to make nc_perf work with cmake and failed.
Re: GH Issue https://github.com/Unidata/netcdf-c/issues/1900
Apparently the clock_gettime() function is not always available.
It is used in unit_test/tst_exhash.c and unit_test/tst_xcache.c.
To solve this, a number of things were changed:
* Move the timing code to a new file, unit_test/timer_utils.[ch]
* Modify the timing code to choose one of several timing methods
depending on availability. The prioritized order is as follows:
1. If Windows, use the QueryPerformanceCounter mechanism else
2. Use clock_gettime if available else
3. Use gettimeofday if available else
4. Use getrusage if available
Note that the resolution of 3 and 4 is coarser than that of 1 or 2.
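A hedged sketch of that prioritized selection; the real code is in unit_test/timer_utils.[ch], and the HAVE_* macros here are assumptions standing in for whatever the build system actually defines:
````
/* Assumes HAVE_CLOCK_GETTIME / HAVE_GETTIMEOFDAY come from the build
 * system (e.g. config.h); they are stand-ins here. */
#if defined(_WIN32)
#include <windows.h>              /* 1. QueryPerformanceCounter */
#elif defined(HAVE_CLOCK_GETTIME)
#include <time.h>                 /* 2. clock_gettime */
#elif defined(HAVE_GETTIMEOFDAY)
#include <sys/time.h>             /* 3. gettimeofday */
#else
#include <sys/resource.h>         /* 4. getrusage */
#endif

double now_seconds(void)
{
#if defined(_WIN32)
    LARGE_INTEGER t, f;
    QueryPerformanceCounter(&t);
    QueryPerformanceFrequency(&f);
    return (double)t.QuadPart / (double)f.QuadPart;
#elif defined(HAVE_CLOCK_GETTIME)
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
#elif defined(HAVE_GETTIMEOFDAY)
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1e6;
#else
    struct rusage ru;             /* CPU time only; coarser resolution */
    getrusage(RUSAGE_SELF, &ru);
    return ru.ru_utime.tv_sec + ru.ru_utime.tv_usec / 1e6;
#endif
}
````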
Misc. Other Changes:
* Move the test in CMakeLists.txt that disables unit tests for WIN32 to unit_test/CMakeLists.txt since some unit tests actually work under Visual Studio.
* Fix some of the unit tests to work under Visual Studio
* Fix problem with using remove() in zmap_nzf.c
* Remove some warnings about use of EXTERNL
Primary changes:
* Add an improved cache system to speed up performance.
* Fix NCZarr to properly handle scalar variables.
Misc. Related Changes:
* Added unit tests for extendible hash and for the generic cache.
* Add config parameter to set size of the NCZarr cache.
* Add initial performance tests but leave them unused.
* Add CRC64 support.
* Move location of ncdumpchunks utility from /ncgen to /ncdump.
* Refactor auth support.
Misc. Unrelated Changes:
* More cleanup of the S3 support
* Add support for S3 authentication in .rc files: HTTP.S3.ACCESSID and HTTP.S3.SECRETKEY.
* Remove the hashkey from the struct OBJHDR since it is never used.
re: https://github.com/Unidata/netcdf-c/issues/1836
Revert the internal filter code to simplify it. From the user's
point of view, the only visible changes should be:
1. The functions that convert text to filter specs have had their signature reverted and have been moved to netcdf_aux.h
2. Some filter API functions now return NC_ENOFILTER when inquiry is made about some filter.
Internally, the dispatch table has been modified to get rid of the filter_actions
entry and associated complex structures. It has been replaced with
inq_var_filter_ids and inq_var_filter_info entries, and the dispatch table
version has been bumped to 3. Corresponding NOOP and NOTNC4 functions
were added to libdispatch/dnotnc4.c. Also, the filter_action entries
in dispatch tables were replaced for all dispatch code bases (HDF5, DAP2,
etc). This should only impact UDF users.
In the process, it became clear that the form of the filters
field in NC_VAR_INFO_T was format dependent, so I converted it to
be of type void* and pushed its management into the various dispatch
code bases. Specifically libhdf5 and libnczarr now manage the filters
field in their own way.
The auxiliary functions for parsing textual filter specifications
were moved to netcdf_aux.h and were renamed to the following:
* ncaux_h5filterspec_parse
* ncaux_h5filterspec_parselist
* ncaux_h5filterspec_free
* ncaux_h5filter_fix8
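A hedged usage sketch of the parse function; the signature assumed here (id, parameter count, and malloc'd parameter vector via out-arguments) should be verified against netcdf_aux.h:
````
#include <stdio.h>
#include <stdlib.h>
#include <netcdf_aux.h>

void show_spec(void)
{
    unsigned int id;
    size_t nparams;
    unsigned int* params = NULL;
    /* "307,9" is a textual spec: HDF5 filter id 307 (bzip2), one param. */
    if (ncaux_h5filterspec_parse("307,9", &id, &nparams, &params) == 0) {
        printf("filter %u with %zu parameter(s)\n", id, nparams);
        free(params);
    }
}
````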
Misc. Other Changes:
1. Document NUG/filters.md updated to reflect the changes above.
2. All the old data types (structs and enums)
used by the filter_actions mechanism were deleted.
The exception is the NC_H5_Filterspec because it is needed
by ncaux_h5filterspec_parselist.
3. Clientside filters were removed -- another enhancement
for which no one ever asked.
4. The ability to remove filters was itself removed.
5. Some functionality needed by nczarr was moved from libhdf5
to libsrc4 e.g. nc4_find_default_chunksizes
6. All the filterx code was removed
7. ncfilter.h and nc4filter.c are no longer used
Misc. Unrelated Changes:
1. The nczarr_test makefile clean was leaving some directories, so
a clean-local rule was added to take care of them.
re: Issue https://github.com/Unidata/netcdf-c/issues/1848
The existing Virtual File Driver built to support byte-range
read-only file access is quite old. It turns out to be extremely
slow (reason unknown at the moment).
Starting with HDF5 1.10.6, the HDF5 library has its own version
of such a file driver. The HDF5 developers have better knowledge
about building such a driver and what incantations are needed to
get good performance.
This PR modifies the byte-range code in hdf5open.c so
that if the HDF5 file driver is available, then it is used
in preference to the one written by the NetCDF group.
Misc. Other Changes:
1. Moved all of the nc4print code to ncdump to keep appveyor quiet.
In case HDF5 adds more storage specifications, netcdf4 should be able to
cope with them by default. Further specializations could be added
nonetheless.
As it was, nccopy -c dim/x was sometimes being ignored. So
modify nccopy to properly take it into account. This also required
a change to the nczarr code because it was not applying default
chunking in the same way as libhdf5.
Modify ncdump/tst_nccopy4.sh to test this feature properly.
Also add a similar test to nczarr_test.
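For example, a hedged illustration of the -c dim/x form that must now be honored (the dimension and file names are hypothetical):
````
nccopy -c time/100 in.nc out.nc
````
This requests chunks of length 100 along the time dimension for every variable that uses it.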
Additionally, fix some other things that were causing Visual
Studio builds with testing to not work.
* fix curl testing under CMake to properly handle the case
where DAP is disabled, but byterange support is enabled.
* properly test and/or define uintptr_t
* Convert _O_XXX to O_XXX flags used by open().
Disengagement of enable-netcdf4 from enable-hdf5.
That is, with the advent of nczarr, it is possible
to turn off hdf5 but still need netcdf-4 enabled,
because nczarr uses libsrc4, but not libhdf5.
This change involves a bunch of things:
1. Modify configure.ac and CMakeLists.txt to make enable_hdf5
control whether hdf5 support is provided. For back compatibility,
disable-netcdf4 is treated as disable-hdf5. But internally,
netcdf4 support is controlled only by the enabling of formats
that require it.
2. In support of #1, modify .travis.yml to use enable/disable-hdf5
instead of enable/disable-netcdf4.
3. test_common.in is modified to track selected features,
including enable-hdf5 and enable-s3-tests. This is used in
selected tests that mix netcdf-3 and netcdf4 tests.
4. The conflation of USE_HDF5 and USE_NETCDF4 is common in
code, tests, and build files, so all of those had to be weeded out.
5. It turns out that some of the NC4_dim functions really are HDF5 specific,
but were not treated as such. So they were moved from nc4dim.c to
hdf5dim.c or hdf5dispatch.c.
6. Some generic functions in libhdf5 can be (and were) moved to libsrc4.
The primary fix is to improve CMake build support.
Specific changes include:
* CMake: Provide a better solution to locating the AWS SDK
libraries; the new way is the preferred method as described in
the aws-cpp-sdk documentation.
* CMake (and Automake): allow -DENABLE_S3_SDK (default off) to suppress
looking for AWS libraries.
* CMake: add the complete set of nczarr tests
* CMake: add EXTERNL as needed to various .h files.
* Improve support for windows drive letters in paths.
* Add nczarr and s3 flags to nc-config
* For Visual Studio X NCZarr, clean up the NAN+INFINITY handling
* Convert _MSC_VER -> _WIN32 and vice versa as needed
* NCZarr - support multiple platform paths including Windows, Cygwin,
MinGW, etc.
* NCZarr - sort the test outputs because different platforms
produce directory contents in different orders.
One big change concerns netcdf-c/CMakeLists.txt and netcdf-c/configure.ac.
In the current versions, it was the case that --disable-hdf5
disabled netcdf-4 (libsrc4). With nczarr, this can no longer
be the case because nczarr requires libsrc4 even if libhdf5
is disabled. So, I modified the above files to move the
format options (HDF5, NCZarr, HDF4, etc) to a single place
near the front of the files. Now it is the case that:
* Enabling any of the formats that require libsrc4
also does an implicit --enable-netcdf4.
* --disable-netcdf4 | --disable-netcdf-4 now becomes
an alias for --disable-hdf5.
There are probably some bugs in this change in terms of
dependencies between format options.
Problems:
* CMake S3 support is still not working for Visual Studio
* A recent issue points out that there is work to do on handling
UTF8 filenames, but that will be addressed in a separate fix.
Notes:
* Consider converting all of our includes/.h files to use EXTERNL
Support has been added for accessing data in the cloud using a
variant of the Zarr protocol and storage format. This enhancement
is generically referred to as "NCZarr".
The data model supported by NCZarr is netcdf-4 minus the user-defined
types and the String type. In this sense it is similar to the CDF-5
data model.
More detailed information about enabling and using NCZarr is
described in the document NUG/nczarr.md and in a
[Unidata Developer's blog entry](https://www.unidata.ucar.edu/blogs/developer/en/entry/overview-of-zarr-support-in).
WARNING: this code has had limited testing, so do not use this version
for production work. Also, performance improvements are ongoing.
Note especially the following platform matrix of successful tests:
Platform | Build System | S3 support
------------- | ------------ | ----------
Linux+gcc | Automake | yes
Linux+gcc | CMake | yes
Visual Studio | CMake | no
Additionally, and as a consequence of the addition of NCZarr,
major changes have been made to the Filter API. NOTE: NCZarr
does not yet support filters, but these changes are enablers for
that support in the future. Note that it is possible
(probable?) that there will be some accidental reversions if the
changes here did not correctly mimic the existing filter testing.
In any case, previously filter ids and parameters were of type
unsigned int. In order to support the more general zarr filter
model, this was all converted to char*. The old HDF5-specific,
unsigned int operations are still supported but they are
wrappers around the new, char* based nc_filterx_XXX functions.
This entailed at least the following changes:
1. Added the files libdispatch/dfilterx.c and include/ncfilter.h
2. Some filterx utilities have been moved to libdispatch/daux.c
3. A new entry, "filter_actions" was added to the NCDispatch table
and the version bumped.
4. An overly complex set of structs was created to support funnelling
all of the filterx operations thru a single dispatch
"filter_actions" entry.
5. Move common code from libhdf5 to libsrc4 so that it is accessible
to nczarr.
Changes directly related to Zarr:
1. Modified CMakeLists.txt and configure.ac to support both C and C++
-- this is in support of S3 access via the aws-sdk libraries.
2. Define a size64_t type to support nczarr.
3. More reworking of libdispatch/dinfermodel.c to
support zarr and to regularize the structure of the fragments
section of a URL.
Changes not directly related to Zarr:
1. Make client-side filter registration be conditional, with default off.
2. Hack include/nc4internal.h to make some flags added by Ed be unique:
e.g. NC_CREAT, NC_INDEF, etc.
3. Clean up include/nchttp.h and libdispatch/dhttp.c.
4. Misc. changes to support compiling under Visual Studio including:
* Better testing under windows for dirent.h and opendir and closedir.
5. Misc. changes to the oc2 code to support various libcurl CURLOPT flags
and to centralize error reporting.
6. By default, suppress the vlen tests that have unfixed memory leaks; add option to enable them.
7. Make part of the nc_test/test_byterange.sh test be contingent on remotetest.unidata.ucar.edu being accessible.
Changes Left TO-DO:
1. fix provenance code, it is too HDF5 specific.
re: https://github.com/Unidata/netcdf-c/issues/1763
The nccopy program incorrectly set default chunking parameters
to use full dimension lengths. Instead, it should use the
values computed by the default chunking algorithm, as implemented
by nc4_default_chunksizes2 in the netcdf library.
Solution: Revert to the old behavior.
Built-in type-name keywords are currently flagged when used as
identifiers in formats that do not support that type. So if a
user declares a dimension named "string" in a classic .cdl file,
it causes an error.
This PR modifies ncgen to allow those format-specific type keywords
to be used as identifiers when compiling to formats that do not
support that type. Also added a test for this.
Also a couple of misc. changes to conditionalize some debug output.
Fix Issue https://github.com/Unidata/netcdf-c/issues/1725.
Replace PR https://github.com/Unidata/netcdf-c/pull/1726
Also replace PR https://github.com/Unidata/netcdf-c/pull/1694
The general problem is that under Visual Studio, we are seeing
a number of undefined reference and other scoping errors.
The reason is that the code is not properly using Visual Studio
_declspec() declarations.
The basic solution is to ensure that when compiling the code itself
one needs to ensure that _declspec(dllexport) is used. There
are several sets of macros to handle this, but they all rely
on the flag DLL_EXPORT being defined when the code is compiled,
but not being defined when the code is used via a .h file.
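A minimal sketch of that macro pattern, borrowing the EXTERNL name used elsewhere in netcdf-c; the library's actual conditional structure may differ:
````
#if defined(_WIN32)
#ifdef DLL_EXPORT                 /* defined while building the DLL itself */
#define EXTERNL __declspec(dllexport) extern
#else                             /* consumers including the installed .h */
#define EXTERNL __declspec(dllimport) extern
#endif
#else
#define EXTERNL extern
#endif

EXTERNL int ocdebug;  /* e.g. an exported global such as oc2's ocdebug */
````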
As a test, I modified XGetOpt.c to build properly. I also
fixed the oc2 library to properly _declspec things like ocdebug.
I also made some misc. changes to get all the tests to run
if cygwin is installed (to get bash, sed, etc).
Misc. Changes:
* Put XGetOpt.c into libsrc and copy at build time
to the other directories where it is needed.
The current ncgen does not properly handle very large
data sections. Apparently this is very uncommon because
it was only discovered in testing the new zarr code.
The fix required a new approach to processing data sections.
Unfortunately, the resulting ncgen is slower than before
but at least it is, I think, now correct.
The added test cases are in libnczarr, and so will
not show up until that is incorporated into master.
Note also that fortran code generation changed, but
has not been tested here.
Misc. Changes
1. Clean up error handling in ncgen -lc and -lb output
2. Clean up Makefiles for ncgen to remove unused code
3. Added a program, ncgen/ncdumpchunks, to print
the data for a .nc file in per-chunk format.
4. Made the XGetOpt change in PR https://github.com/Unidata/netcdf-c/pull/1694
for ncdump/ncvalidator
Enables ncdump -t (-i) to recognize a wider variety of time-related units
and calendar names. This brings ncdump closer to what it advertises in its
man page regarding its understanding of udunits-compliant time units.
re: Github issue https://github.com/Unidata/netcdf-c/issues/1713
If nc_def_var_filter or nc_def_var_deflate or nc_def_var_szip is
called multiple times with the same filter id, but possibly with
different sets of parameters, then the first invocation is
sticky and later invocations are ignored. The desired behavior
is to have the last invocation be used.
This PR implements that desired behavior, with some special
cases. If you call nc_def_var_deflate multiple times, then the
last invocation rule applies with respect to deflate. However,
the shuffle filter, if enabled, is always applied just before
applying deflate.
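A hedged illustration of the new rule using nc_def_var_deflate (the levels chosen are arbitrary):
````
#include <netcdf.h>

int set_deflate(int ncid, int varid)
{
    int stat;
    /* First invocation: shuffle on, deflate level 1. */
    if ((stat = nc_def_var_deflate(ncid, varid, 1, 1, 1))) return stat;
    /* Second invocation: with this PR, level 9 replaces level 1 instead
     * of being ignored; shuffle, once enabled, still runs before deflate. */
    return nc_def_var_deflate(ncid, varid, 1, 1, 9);
}
````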
Misc unrelated changes:
1. Make client-side filters be disabled by default
2. Fix the definition of uintptr_t and use in oc2 and libdap4
3. Add some test cases
4. Modify filter order tests to use plugin filters rather
than client-side filters
re: https://github.com/Unidata/netcdf-c/issues/1642
Modify ncdump, nccopy, and ncgen to support the NC_COMPACT storage option.
Added test cases and added description to the man pages for the utilities.
1. ncdump: For a compact storage variable, print the special attribute _Storage as
````
<var>: _Storage = "compact";
````
2. ncgen: parse and implement
````
<var>: _Storage = "compact";
````
in a .cdl file
3. nccopy: Extend the chunk specification (-c flag) to support
compact using the forms
````
nccopy ... -c <var>:compact
and
nccopy ... -c <var>:contiguous
````
Misc. other changes
1. Clean up the copy_chunking function in ncdump/nccopy.c
re: https://github.com/Unidata/netcdf-c/issues/1584
Support has been added for multiple filters per variable. This
affects a number of components in netcdf. The new APIs are
documented in NUG/filters.md.
The primary changes are:
* A set of new functions are provided (see __include/netcdf_filter.h__).
- Obtain a list of the filters associated with a variable
- Obtain the parameters for a specific filter.
* The existing __nc_inq_var_filter__ function now returns info
about the first defined filter.
* The utilities (ncgen, ncdump, and nccopy) now support
an extended format for specifying a sequence of filters.
The general form is __<filter>|<filter>...__.
* The ncdump **_Filter** attribute now dumps a list of all the
filters associated with a variable using the above new format.
* Filter specifications can now use a filter name instead of number
for filters known to the netcdf library, which in turn is taken
from the HDF5 filter registration page.
* New errors are defined: NC_EFILTER and NC_ENOFILTER. The latter
is returned if an attempt is made to access an unknown filter.
* Internally, the dispatch table has been extended to add a function
to handle all of the filter functions.
* New, filter-related, tests were added to nc_test4.
* A new plugin was added to the plugins directory to help with testing.
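A hedged sketch of the two new inquiry functions working together; the signatures assumed here follow include/netcdf_filter.h but should be verified there:
````
#include <stdio.h>
#include <stdlib.h>
#include <netcdf.h>
#include <netcdf_filter.h>

int list_filters(int ncid, int varid)
{
    size_t nfilters, nparams, i;
    unsigned int* ids = NULL;
    int stat;

    /* First call gets the count; second call fills the id vector. */
    if ((stat = nc_inq_var_filter_ids(ncid, varid, &nfilters, NULL))) return stat;
    if (nfilters == 0) return NC_NOERR;
    if ((ids = malloc(nfilters * sizeof(unsigned int))) == NULL) return NC_ENOMEM;
    if ((stat = nc_inq_var_filter_ids(ncid, varid, &nfilters, ids))) goto done;

    for (i = 0; i < nfilters; i++) {
        /* NC_ENOFILTER here would mean the id is unknown. */
        if ((stat = nc_inq_var_filter_info(ncid, varid, ids[i], &nparams, NULL)))
            break;
        printf("filter %u: %zu parameter(s)\n", ids[i], nparams);
    }
done:
    free(ids);
    return stat;
}
````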
Notes:
1. The shuffle and fletcher32 filters are not part of the multifilter system.
Misc. changes:
1. A debug module was added to libhdf5 to help catch error locations.
* For URL paths, the new approach essentially centralizes all information
in the URL into the "#mode=" fragment key (for example, a URL ending in
"#mode=dap4") and uses that value to determine the dispatcher for (most) URLs.
* The new approach has the following steps:
1. canonicalize the path if it is a URL.
2. use the mode= fragment key to determine the dispatcher
3. if dispatcher still not determined, then use the mode flags
argument to nc_open/nc_create to determine the dispatcher.
4. if the path points to something readable, attempt to read the
magic number at the front, and use that to determine the dispatcher;
this case may override all previous cases.
* Misc changes.
1. Update documentation
2. Moved some unit tests from libdispatch to unit_test directory.
3. Fixed use of wrong #ifdef macro in test_filter_reg.c
[I think this may fix a previously reported esupport query].
Partially address: https://github.com/Unidata/netcdf-c/issues/1056
Currently, some of the entries in the dispatch table
are conditionalized on USE_NETCDF4.
As a step in upgrading the dispatch table for use
with user-defined tables, we remove that conditional.
This means that all dispatch tables must implement the
netcdf-4 specific functions even if only to make them
return NC_ENOTNC4. To simplify this, a set of default
functions are defined in libdispatch/dnotnc4.c to provide this
behavior. The file libdispatch/dnotnc3.c is also relevant to
this.
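A hedged sketch of one such default function; the actual stub names and dispatch-slot signatures in libdispatch/dnotnc4.c may differ:
````
#include <netcdf.h>

/* Default implementation for a netcdf-4-only dispatch slot (here, the
 * group-creation slot): any table for a format without group support
 * can point this entry at a stub like this. */
static int NOTNC4_def_grp(int parent_ncid, const char* name, int* new_ncid)
{
    (void)parent_ncid; (void)name; (void)new_ncid;
    return NC_ENOTNC4; /* "attempting netcdf-4 operation on non-4 file" */
}
````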
The primary fix is to modify the various dispatch tables to
remove the conditional and use the functions in
libdispatch/dnotnc4.c as appropriate. In practice, all of the
existing tables are prepared to handle this, so the only
real change is to remove the conditionals.
Misc. Unrelated fixes
1. Fix some annoying warnings in ncvalidator.
Notes:
1. This has not been tested with either pnetcdf or hdf4 enabled.
When those are enabled, it is possible that there are still
some conditionals that need to be fixed.
re: issue https://github.com/Unidata/netcdf-c/issues/1436
It used to be that when no -c parameters were specified
(and the input was some variant of netcdf-3) that nccopy
let the netcdf-c library decide on any default chunking.
Now, it attempts to do it itself and is not doing it
correctly when unlimited dimensions are involved.
So the fix is to revert to the previous behavior.
Note: The chunking rules are getting too complicated; consider revising.
re: issue https://github.com/Unidata/netcdf-c/issues/1398
re: esupport NDY-294972
The new chunking code added to nccopy missed one case.
In the event that there are no chunking specifications
of any kind, and the input is not netcdf-4, and the output
is netcdf-4 and must be chunked, then use the default chunking
that the library computes as part of the nc_def_var() function.
Misc. changes:
1. add some chunking debug code to hdf5var.c
Add Wei-keng Liao's ncvalidator program (with his permission) as
an uninstalled (for now) tool in the ncdump directory. It has in
the past been useful for debugging netcdf-3 files.
re: Issue https://github.com/Unidata/netcdf-c/issues/1365
At some point (4.6.1) we changed the diskless handling of flags
and added a new NC_PERSIST flag. It looks like we did not fix all
occurrences of the old flag set, specifically for the 'nccopy -w' command.
Also added some tests (tst_nccopy_w{3,4}.sh) for this situation.
re: https://github.com/Unidata/netcdf-c/issues/1352
When nc4info.c encounters an _NCProperties attribute
with a version number it does not recognize, it fails to
display it correctly.
The solution chosen is to arrange for accessing the attribute to
return the raw value of the attribute from the file. This way,
even if the version is unrecognized, it will return something
usable.
The changes were primarily to never attempt to parse the value
of _NCProperties until actually required; since the parsed values
are currently not used, parsing never occurs.
Also modified ncdump/tst_fileinfo.sh to include some extra testing.
I tested the original failure by changing the value of NCPROPS to 3.
However, there is no way to test this at build time.
Misc. Changes
* Inlined the provenance info in the NC_FILE_INFO_T structure
* Centralized stuff from elsewhere into include/nc_provenance.h
Misc. Unrelated Changes
* Removed/turned off some misc debug output left on by accident
* Fix CPPFLAGS name error in libhdf5/Makefile.am