In a number of places in the netcdf-c library, some of the
high-order mode flags (the mode argument to nc_open or nc_create)
are being used to save state information. This means that the
description of the defined and open mode flags in netcdf.h
was not accurate.
This PR moves all those hack flags so that the list of mode flags
in netcdf.h is correct.
re: https://github.com/Unidata/netcdf-c/issues/541
re: https://github.com/Unidata/netcdf-c/issues/1208
re: https://github.com/Unidata/netcdf-c/issues/2078
re: https://github.com/Unidata/netcdf-c/issues/2041
re: https://github.com/Unidata/netcdf-c/issues/2143
For a long time, there have been known problems with the
management of complex types containing VLENs. This also
involves the string type because it is stored as a VLEN of
chars.
This PR (mostly) fixes this problem. But note that it adds new
functions to netcdf.h (see below) and this may require bumping
the .so number. These new functions can be removed, if desired,
in favor of functions in netcdf_aux.h, but netcdf.h seems the
better place for them because they are intended as alternatives
to the nc_free_vlen and nc_free_string functions already in
netcdf.h.
The term "complex type" refers to any type that directly or
transitively references a VLEN type: for example, an array of
VLENs, a compound with a VLEN field, and so on.
In order to properly handle instances of these complex types, it
is necessary to have functions that can recursively walk
instances of such types to perform various actions on them. The
term "deep" is also used to mean recursive.
At the moment, the two operations needed by the netcdf library are:
* free'ing an instance of the complex type
* copying an instance of the complex type.
The current library does only shallow free and shallow copy of
complex types. This means that only the top level is properly
free'd or copied, but deep internal blocks in the instance are
not touched.
Note that the term "vector" will be used to mean a contiguous (in
memory) sequence of instances of some type. An array with, say,
dimensions 2 x 3 x 4 will be stored in memory as a vector of
length 2*3*4 = 24 instances.
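To make this concrete, here is a minimal sketch (assuming only the public nc_vlen_t type from netcdf.h) of one instance of a VLEN whose base type is itself a VLEN of int; a shallow free releases only the outer block, leaking the two inner blocks:
````
#include <netcdf.h>   /* nc_vlen_t is { size_t len; void* p; } */
#include <stdlib.h>
#include <string.h>

/* Build one instance of a VLEN-of-VLEN-of-int. Freeing only outer.p
 * (a shallow free) leaks the two inner .p blocks; a deep reclaim
 * must recurse into each nested instance. */
static nc_vlen_t make_nested(void)
{
    nc_vlen_t inner[2], outer;
    inner[0].len = 3; inner[0].p = calloc(3, sizeof(int));
    inner[1].len = 1; inner[1].p = calloc(1, sizeof(int));
    outer.len = 2;
    outer.p = malloc(sizeof inner);
    memcpy(outer.p, inner, sizeof inner);
    return outer;
}
````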
The use cases are primarily these.
## nc_get_vars
Suppose one is reading a vector of instances using nc_get_vars
(or nc_get_vara or nc_get_var, etc.). These functions will
return the vector in the top-level memory provided. All
interior blocks (from nested VLENs or strings) will have been
dynamically allocated.
After using this vector of instances, it is necessary to free
(aka reclaim) the dynamically allocated memory, otherwise a
memory leak occurs. So, the recursive reclaim function is used
to walk the returned instance vector and do a deep reclaim of
the data.
Currently functions are defined in netcdf.h that are supposed to
handle this: nc_free_vlen(), nc_free_vlens(), and
nc_free_string(). Unfortunately, these functions only do a
shallow free, so deeply nested instances are not properly
handled by them.
## nc_put_vars
Suppose one is writing a vector of instances using nc_put_vars
(or nc_put_vara or nc_put_var, etc.). Internally, the provided
data is immediately written, so there is no need to copy it. But
the caller may need to reclaim the data it passed into the function.
## nc_put_att
Suppose one is writing a vector of instances as the data of an attribute
using, say, nc_put_att.
Internally, the incoming attribute data must be copied and stored
so that changes/reclamation of the input data will not affect
the attribute.
Again, the code inside the netcdf library does only shallow copying
rather than deep copying. As a result, one sees effects such as those
described in GitHub issue https://github.com/Unidata/netcdf-c/issues/2143.
Also, after defining the attribute, it may be necessary for the user
to free the data that was provided as input to nc_put_att().
## nc_get_att
Suppose one is reading a vector of instances as the data of an attribute
using, say, nc_get_att.
Internally, the existing attribute data must be copied and returned
to the caller, and the caller is responsible for reclaiming
the returned data.
Again, the code inside the netcdf library does only shallow copying
rather than deep copying. This can lead to memory leaks and errors
because the deep data is shared between the library and the user.
# Solution
The solution is to build properly recursive reclaim and copy
functions and use those as needed.
These recursive functions are defined in libdispatch/dinstance.c
and their signatures are defined in include/netcdf.h.
For backward compatibility, corresponding "ncaux_XXX" functions
are defined in include/netcdf_aux.h.
````
int nc_reclaim_data(int ncid, nc_type xtypeid, void* memory, size_t count);
int nc_reclaim_data_all(int ncid, nc_type xtypeid, void* memory, size_t count);
int nc_copy_data(int ncid, nc_type xtypeid, const void* memory, size_t count, void* copy);
int nc_copy_data_all(int ncid, nc_type xtypeid, const void* memory, size_t count, void** copyp);
````
Each function comes in two variants. The first two, nc_reclaim_data() and
nc_copy_data(), assume the top-level vector is managed by the
caller. For reclaim, this is so the user can use, for example, a
statically allocated vector. For copy, it assumes the user
provides the space into which the copy is stored.
The second two, nc_reclaim_data_all() and
nc_copy_data_all(), allow the functions to manage the
top level. So for nc_reclaim_data_all(), the top level is
assumed to be dynamically allocated and will be free'd by
nc_reclaim_data_all(). The nc_copy_data_all() function
will allocate the top level and return a pointer to it to the
user. The user can later pass that pointer to
nc_reclaim_data_all() to reclaim the instance(s).
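For illustration, here is a minimal sketch of the intended usage (error checking omitted; the file and variable names are hypothetical): read a VLEN variable with nc_get_var and then deep-reclaim the returned instances.
````
#include <netcdf.h>

/* Sketch: read 5 instances of a VLEN-typed variable into a
 * caller-managed vector, use them, then deep-reclaim them. */
static int read_and_reclaim(const char* path)
{
    int ncid, varid;
    nc_type xtype;
    nc_vlen_t data[5];                          /* caller-managed top level */

    nc_open(path, NC_NOWRITE, &ncid);
    nc_inq_varid(ncid, "ragged_array", &varid); /* hypothetical variable */
    nc_inq_vartype(ncid, varid, &xtype);
    nc_get_var(ncid, varid, data);              /* interior blocks allocated */

    /* ... use data ... */

    /* The caller manages the top-level vector, so use nc_reclaim_data,
     * not nc_reclaim_data_all. */
    nc_reclaim_data(ncid, xtype, data, 5);
    return nc_close(ncid);
}
````
Had the top-level vector instead been dynamically allocated, nc_reclaim_data_all() would free it as well.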
# Internal Changes
The netcdf-c library internals are changed to use the proper
reclaim and copy functions. It turns out that the places where
these functions are needed are quite pervasive in the netcdf-c
library code. Using these functions also allows some
simplification of the code since the stdata and vldata fields of
NC_ATT_INFO are no longer needed. Currently this code is commented
out using the SEPDATA \#define macro. Once any remaining bugs are
fixed, all of this commented-out code will be removed.
# Known Bugs
1. There is still one known failure that has not been solved.
All the failures revolve around some variant of this .cdl file.
The proximate cause of failure is the use of a VLEN FillValue.
````
netcdf x {
types:
  float(*) row_of_floats ;
dimensions:
  m = 5 ;
variables:
  row_of_floats ragged_array(m) ;
  row_of_floats ragged_array:_FillValue = {-999} ;
data:
  ragged_array = {10, 11, 12, 13, 14}, {20, 21, 22, 23}, {30, 31, 32},
    {40, 41}, _ ;
}
````
When a solution is found, I will either add it to this PR or post a new PR.
# Related Changes
* Mark nc_free_vlen(s) as deprecated in favor of ncaux_reclaim_data.
* Remove the --enable-unfixed-memory-leaks option.
* Remove the NC_VLENS_NOTEST code that suppresses some vlen tests.
* Document this change in docs/internal.md
* Disable the tst_vlen_data test in ncdump/tst_nccopy4.sh.
* Mark types as fixed size or not (transitively) to optimize the reclaim
and copy functions.
# Misc. Changes
* Make Doxygen process libdispatch/daux.c
* Make sure the NC_ATT_INFO_T.container field is set.
re:
The current netcdf-c release has some problems with the mingw platform
on windows. Mostly they are path issues.
Changes to support mingw+msys2:
-------------------------------
* Enable the option of looking in the Windows registry to find
the mingw root path, in aid of proper path handling.
* Add mingw+msys as a specific platform in configure.ac and move testing
of the platform to the front so it is available early.
* Handle the mingw + libncpoco (dynamic loader) combination properly,
even though mingw does not yet support it.
* Handle the mingw + plugins combination properly, even though mingw does not yet support it.
* Alias pwd='pwd -W' to better handle paths in shell scripts.
* Plus a number of other minor compile irritations.
* Disallow the use of multiple nc_open's on the same file for windows
(and mingw) because windows does not seem to handle these properly.
Not sure why we did not catch this earlier.
* Add mountpoint info to dpathmgr.c to help support mingw.
* Cleanup dpathmgr conversions.
Known problems:
---------------
* I have not been able to get shared libraries to work, so
plugins/filters must be disabled.
* There is some kind of problem with libcurl that I have not solved,
so all uses of libcurl (currently DAP+Byterange) must be disabled.
Misc. other fixes:
------------------
* Cleanup the relationship between ENABLE_PLUGINS and various other flags
in CMakeLists.txt and configure.ac.
* Re-arrange the TESTDIRS order in Makefile.am.
* Add pseudo-breakpoint to nclog.[ch] for debugging.
* Improve the documentation of the path manager code in ncpathmgr.h
* Add better support for relative paths in dpathmgr.c
* Default the mode args to NCfopen to include "b" (binary) for windows.
* Add optional debugging output in various places.
* Make sure that everything builds with plugins disabled.
* Fix numerous (s)printf inconsistencies between the format spec
and the arguments.
1. Issue https://github.com/Unidata/netcdf-c/issues/2043
* FreeBSD build fails because of conflicts in defining the fileno() function. So removed all extern declarations of fileno.
2. Issue https://github.com/Unidata/netcdf-c/issues/2124
* There were a couple of problems here.
* I was conflating msys with mingw and they need separate handling of paths. So treat mingw like windows.
* memio.c was not always writing the full content of the memory to file. Fix (untested) by properly accounting for zero-size writes.
* Fix bug when skipping white space in tst_xcache.c
3. Issue https://github.com/Unidata/netcdf-c/pull/2105
* On MINGW, bash and other POSIX utilities use a mounted root directory,
but executables compiled for Windows do not recognise the mount point.
Ensure that Windows paths are used in tests of Windows executables.
4. Issue https://github.com/Unidata/netcdf-c/issues/2132
* Apparently the Intel C compiler on OSX defines isnan etc.
So disable declaration in dutil.c under that condition.
5. Fix and re-enable test_rcmerge.sh by allowing override of where to
look for .rc files
6. CMakeLists.txt suppresses certain ncdump directory tests because of differences in printing floats/doubles.
* Extend the list to include those that also fail under mingw.
* Suppress the mingw tests in ncdump/Makefile.am
re: https://github.com/Unidata/netcdf-c/issues/2119
H/T to [Egbert Eich](https://github.com/e4t) and [Bas Couwenberg](https://github.com/sebastic) for this PR.
It is undesirable to make netcdf dependent on the availability
of libxml2, but it is desirable to allow its use if available.
In order to do this, a wrapper API (include/ncxml.h) was constructed
that supports either ezxml or libxml2 as the implementation.
Additionally, the xml support code was moved to a new directory
netcdf-c/libncxml.
Primary changes:
* Create a new sub-directory named netcdf-c/libncxml to hold all the xml implementation code.
* Move ezxml.c and ezxml.h to libncxml
* Create a wrapper API -- include/ncxml.h
* Create an implementation, ncxml_ezxml.c to support use of ezxml.
* Create an implementation, ncxml_xml2.c to support use of libxml2.
* Add a check for libxml2 in configure.ac and CMakeLists.txt
* Modify libdap to use the wrapper API instead of ezxml directly.
Misc. Other Changes:
* Change include/netcdf_json.h from being a built source to being part of the distribution.
re: https://github.com/Unidata/netcdf-c/issues/2117
re: https://github.com/Unidata/netcdf-c/issues/2119
* Modify libsrc to allow byte-range reading of netcdf-3 files in private S3 buckets; this required using the AWS SDK. Also add a test case.
* The AWS SDK can sometimes cause problems if the Aws::ShutdownAPI function is not called, so add optional atexit() support to ensure it is called. This is disabled for Windows.
* Add documentation to nczarr.md on how to build and use the aws sdk under windows. Currently it builds, but testing fails.
* Switch testing from stratus to the Unidata bucket on S3.
* Improve support for the s3: url protocol.
* Add a s3 specific utility code file: ds3util.c
* Modify NC_infermodel to attempt to read the magic number of byte-ranged files in S3.
## Misc.
* Move and rename the core S3 SDK wrapper code (libnczarr/zs3sdk.cpp) to libdispatch since it is now used in libsrc as well as libnczarr.
* Add calls to nc_finalize in the utilities in case atexit is disabled.
* Add the header-only JSON parser to the distribution rather than treating it as a built source.
## Examine and fix ezxml errors
re: Issue https://github.com/Unidata/netcdf-c/issues/2119
Multiple security issues were found in ezxml (see above Issue).
* CVE-2021-31598
* CVE-2021-31348 / CVE-2021-31347
* CVE-2021-31229
* CVE-2021-30485
* CVE-2021-26222
* CVE-2021-26221
* CVE-2021-26220
* CVE-2019-20202
* CVE-2019-20201
* CVE-2019-20200
* CVE-2019-20199
* CVE-2019-20198
* CVE-2019-20007
* CVE-2019-20006
* CVE-2019-20005
In addition, moved ezxml to libdispatch.
## Examine and fix selected oss-fuzz detected errors
Note that most of these errors are in the libsrc .m4-generated
code, so fixing them is difficult. It would be nice if we could tell
oss-fuzz to skip those files. They are old and crufty and
probably need a complete refactor.
Issue|Status
-----|------
35382|Fixed; old bug
35398|Closed by OSS-Fuzz
35442|Guarantee alloc > 0 or error; Old bug
35721|Assert failure; ok
35992|Fixed; old bug
36038|Fixed; old bug
36129|Unfixed; old bug
36229|Fixed by adding assert; old bug
37476|Unfixed; old bug
37824|Assert Failure; ok
38300|Closed by OSS-Fuzz
38537|Unfixed; old bug
38658|Unfixed; old bug
38699|Fixed maybe; old bug
38772|Nature of error is unclear, suspect that it results from using too large a type.
39248|Need more information
39394|Unfixed; old bug
## S3 Related Fixes
* Add comprehensive support for specifying AWS profiles to provide access credentials.
* Parse the files "~/.aws/config" and "~/.aws/credentials" to provide credentials for the HDF5 ROS3 driver and to locate the default region.
* Add a function to obtain the currently active S3 credentials. The search rules are defined in docs/nczarr.md.
* Provide documentation for the new features.
* Modify the struct NCauth (in include/ncauth.h) to replace specific S3 credentials with a profile name.
* Add a unit test to test the operation of profile and credentials management.
* Add support for URLs of the form "s3://<bucket>/<key>"; this requires obtaining a default region.
* Allow the specification of profile and/or region in a URL of the form "#mode=nczarr,...&aws.region=...&aws.profile=..." (see the examples after this list).
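For example (the bucket, key, region, and profile names here are hypothetical), such URLs might look like:
````
s3://mybucket/datasets/file.zarr#mode=nczarr,s3&aws.profile=myprofile
s3://mybucket/datasets/file.zarr#mode=nczarr,s3&aws.region=us-east-1
````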
## Misc. Fixes
* Move the ezxml code to libdispatch so that it can be used both by DAP4 and nczarr.
* Modify nclist to provide a deep clone operation.
* Modify ncuri to provide a deep clone operation.
* Modify the .rc file format to allow the specification of a path to be tested when looking for an entry in the .rc file.
* Ensure that the NC_rcload function is called.
* Modify nchttp to support setting request headers.
Filter support has three goals:
1. Use the existing HDF5 filter implementations,
2. Allow filter metadata to be stored in the NumCodecs metadata format used by Zarr,
3. Allow filters to be used even when HDF5 is disabled.
Detailed usage directions are defined in docs/filters.md.
For now, the existing filter API is left in place. So filters
are defined via ''nc_def_var_filter'' in the HDF5 style,
where the id and parameters are unsigned integers (see the sketch
after this paragraph).
This is a big change since filters affect many parts of the code.
In the following, the terms "compressor", "filter", and "codec" are generally
used synonymously.
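As a reference point, here is a minimal sketch of that style, using the HDF5 deflate filter (id 1) with a single compression-level parameter; the level value is purely illustrative:
````
#include <netcdf.h>
#include <netcdf_filter.h>

/* Sketch: attach an HDF5-style filter to a variable; the filter id
 * and its parameters are unsigned integers. Id 1 is HDF5 deflate. */
static int add_deflate(int ncid, int varid)
{
    const unsigned int level = 5;   /* illustrative parameter */
    return nc_def_var_filter(ncid, varid, 1, 1, &level);
}
````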
### Filter-Related Changes:
* In order to support dynamic loading of shared filter libraries, a new library was added in the libncpoco directory; it helps to isolate dynamic loading across multiple platforms.
* Provide a json parsing library for use by plugins; this is created by merging libdispatch/ncjson.c with include/ncjson.h.
* Add a new _Codecs attribute to allow clients to see what codecs are being used; let ncdump -s print it out.
* Provide special headers to help support compilation of HDF5 filters when HDF5 is not enabled: netcdf_filter_hdf5_build.h and netcdf_filter_build.h.
* Add a number of new tests to test the new nczarr filters.
* Let ncgen parse _Codecs attribute, although it is ignored.
### Plugin directory changes:
* Add support for the Blosc compressor; this is essential because it is the most common compressor used in Zarr datasets. This also necessitated adding a CMake FindBlosc.cmake file
* Add NCZarr support for the big-four filters provided by HDF5: shuffle, fletcher32, deflate (zlib), and szip
* Add a Codec defaulter (see docs/filters.md) for the big four filters.
* Make plugins work with windows by properly adding __declspec declaration.
### Misc. Non-Filter Changes
* Replace most uses of USE_NETCDF4 (deprecated) with USE_HDF5.
* Improve support for caching
* More fixes for path conversion code
* Fix misc. memory leaks
* Add new utility -- ncdump/ncpathcvt -- that does more or less the same thing as cygpath.
* Add a number of new tests to test the non-filter fixes.
* Update the parsers
* Convert most instances of '#ifdef _MSC_VER' to '#ifdef _WIN32'
re: Issue https://github.com/Unidata/netcdf-c/issues/2060
The path conversion code forgot to consider the case of
windows network paths of the form \\svc\x\y...
I have added support for it, but I can't really test it
since I do not have access to a network drive.
Work in progress / Proof of concept:
Add a capability to disable the tracking of attribute creation order.
See #2054 for details.
This PR adds a `NC_NOATTCREORD` define which can be passed in the
`mode` argument to `nc_create`. If it is present, then the
calls that set up attribute-creation-order tracking are disabled.
This should only be used for files in which you *know* that the
ordering of the attributes does not matter to *any* potential
readers of this database.
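A minimal usage sketch (the output file name is hypothetical; the flag is OR'd into the usual mode flags):
````
#include <netcdf.h>

/* Sketch: create a file with attribute-creation-order tracking disabled. */
static int create_no_order(void)
{
    int ncid;
    int stat = nc_create("noorder.nc", NC_NETCDF4 | NC_CLOBBER | NC_NOATTCREORD, &ncid);
    if (stat == NC_NOERR) stat = nc_close(ncid);
    return stat;
}
````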
re: https://github.com/zarr-developers/zarr-specs/issues/41
After discussions with the Zarr community, it was decided to
convert to a new representation of the NCZarr meta-data extensions: version 2.
These extensions store information necessary to mapping the Zarr data model
to the netcdf-4 data model.
The basic change is to remove the NCZarr-specific objects: .nczarr, .nczgroup, .nczarray, and .nczattr.
The contents of these objects are moved into the corresponding existing Zarr objects as special keys (illustrated after this list). The mapping is as follows:
* ''.nczarr'' => ''/.zgroup/_NCZARR_SUPERBLOCK_''
* ''.nczgroup'' => ''.zgroup/_NCZARR_GROUP_''
* ''.nczarray'' => ''.zarray/_NCZARR_ARRAY_''
* ''.nczattr'' => ''.zattr/_NCZARR_ATTR_''
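For illustration only (the key layout shown is a sketch, not normative), a group-level ''.zgroup'' object then carries the NCZarr keys alongside the standard Zarr key:
````
{
    "zarr_format": 2,
    "_NCZARR_SUPERBLOCK_": { "version": "2.0.0" },
    "_NCZARR_GROUP_": { ...netcdf-4 group metadata: dims, vars, subgroups... }
}
````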
Backward compatibility is maintained by looking for the object ''/.nczarr''
and if found, then assuming that the dataset is in the older version 1 format.
This compatibility only supports reading of such version 1 datasets.
Documentation and test cases are also added.
Misc. Other Changes:
1. The json parsing code was added to the general library instead of nczarr only (ncjson.c, ncjson.h).
2. Improved support for different platform paths by allowing conversion
to a single common path representation.
3. Add some new error codes.
4. Modify nccopy usage to mention the new chunking specification.
This is a follow-on to pull request
https://github.com/Unidata/netcdf-c/pull/1959,
which fixed up type scoping.
The primary changes are to _nc\_inq\_dimid()_ and to ncdump.
The _nc\_inq\_dimid()_ function is supposed to allow the name to be
an FQN, but this apparently never got implemented. So it was modified
to support FQNs.
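For example (a sketch; the group and dimension names are hypothetical), the name argument may now be an FQN:
````
#include <netcdf.h>

/* Sketch: look up a dimension by FQN instead of a simple name.
 * Assumes ncid is an open netcdf-4 file containing a group /g1
 * that defines dimension d. */
static int lookup_dim(int ncid)
{
    int dimid;
    return nc_inq_dimid(ncid, "/g1/d", &dimid);
}
````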
The ncdump program is supposed to output fully qualified dimension names
in its generated CDL file under certain conditions.
Suppose ncdump has a netcdf-4 file F with variable V, and V's parent group
is G. For each dimension id D referenced by V, ncdump needs to determine
whether to print its name as a simple name or as a fully qualified name (FQN).
The algorithm is as follows (a code sketch appears after this list):
1. Search up the tree of ancestor groups.
2. If one of those ancestor groups contains the dimid, then call it dimgrp.
3. If one of those ancestor groups contains a dim with the same name as the dimid, but with a different dimid, then record that as duplicate=true.
4. If dimgrp is defined and duplicate == false, then we do not need an fqn.
5. If dimgrp is defined and duplicate == true, then we do need an fqn to avoid incorrectly using the duplicate.
6. If dimgrp is undefined, then do a preorder breadth-first search of all the groups looking for the dimid.
7. If found, then use the fqn of the first found such dimension location.
8. If not found, then fail.
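In C-like pseudocode (all helper functions here are hypothetical placeholders, not the actual ncdump internals), the decision looks roughly like this:
````
/* Hypothetical helper API; placeholders, not the actual ncdump internals. */
extern int parent_group(int grp);                    /* returns -1 above root */
extern int root_group(void);
extern int group_contains_dimid(int grp, int dimid);
extern int group_has_same_name_other_dimid(int grp, int dimid);
extern int breadth_first_find(int startgrp, int dimid); /* -1 if not found */

/* Decide whether dimension id D, referenced by a variable in group G,
 * must be printed as an FQN; on return *fqngrpp is the group to use. */
static int need_fqn(int G, int D, int* fqngrpp)
{
    int g, dimgrp = -1, duplicate = 0;
    for (g = G; g >= 0; g = parent_group(g)) {       /* step 1: walk ancestors */
        if (dimgrp < 0 && group_contains_dimid(g, D))
            dimgrp = g;                              /* step 2 */
        if (group_has_same_name_other_dimid(g, D))
            duplicate = 1;                           /* step 3 */
    }
    if (dimgrp >= 0 && !duplicate) return 0;         /* step 4: simple name */
    if (dimgrp >= 0 && duplicate) { *fqngrpp = dimgrp; return 1; } /* step 5 */
    g = breadth_first_find(root_group(), D);         /* step 6 */
    if (g < 0) return -1;                            /* step 8: fail */
    *fqngrpp = g;                                    /* step 7 */
    return 1;
}
````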
Test case ncdump/test_scope.sh was modified to test the proper
operation of ncdump and _nc\_inq\_dimid()_.
Misc. Other Changes:
* Fix nc_inq_ncid (NC4_inq_ncid actually) to return root group id if the name argument is NULL.
* Modify _ncdump/printfqn_ to print out a dimid FQN; this supports verification that the resulting .nc files were properly created.
re: Issue https://github.com/Unidata/netcdf-c/issues/1999
NCclosedir code is incorrect. Fix.
Note that this issue crops up when using a non-VisualStudio windows build
such as Mingw, because Mingw defines dirent.h but Visual Studio does not.
Addendum:
Fix some mingw bugs:
1. Modify XGetopt.h to be conditional on _WIN32 instead of _MSC_VER.
2. Make sure sys/stat.h is included in ncpathmgr.h