This is a follow-up to PR https://github.com/Unidata/netcdf-c/pull/1173
Sorry that it is so big, but leak suppression can be complex.
This PR fixes all remaining memory leaks -- as determined by
-fsanitize=address -- with the exceptions noted below.
Unfortunately, there remains a significant leak that I cannot
solve. It involves vlens, and it is unclear if the leak is
occurring in the netcdf-c library or the HDF5 library.
I have added a check_PROGRAM to the ncdump directory to show the
problem. The program is called tst_vlen_demo.c. To exercise it,
build the netcdf library with -fsanitize=address enabled. Then
go into ncdump and do a "make clean check". This should build
tst_vlen_demo without actually executing it. Then do the
command "./tst_vlen_demo" to see the output of the memory
checker. Note that the lost malloc is deep in the HDF5 library
(in H5Tvlen.c).
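For reference, one way to produce such an instrumented build
(illustrative; the exact flags vary by compiler):

    CFLAGS="-fsanitize=address -g -O0" ./configure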
I am temporarily working around this error in the following way.
1. I modified several test scripts to not execute known vlen tests
that fail as described above.
2. Added an environment variable called NC_VLEN_NOTEST.
If set, then those specific tests are suppressed.
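As a rough sketch of the suppression mechanism, a test program
might guard itself like this (illustrative only, not the actual
test code):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* Illustrative guard: skip the known-leaky vlen tests when
           NC_VLEN_NOTEST is set in the environment. */
        if(getenv("NC_VLEN_NOTEST") != NULL) {
            fprintf(stderr, "NC_VLEN_NOTEST set: skipping vlen tests\n");
            return 0; /* report success without running the tests */
        }
        /* ... otherwise run the vlen tests here ... */
        return 0;
    }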
This should mean that the --disable-utilities option to
./configure should not need to be set to get a memory-leak-clean
build. This should allow for detection of any new leaks.
Note: I used an environment variable rather than a ./configure
option to control the vlen tests. This is because it is
temporary (I hope) and because it is a bit tricky for shell
scripts to access ./configure options.
Finally, as before, this has only been tested with netcdf-4 and hdf5 support.
re: github issue https://github.com/Unidata/netcdf-c/issues/1168
re: github issue https://github.com/Unidata/netcdf-c/issues/1163
re: github issue https://github.com/Unidata/netcdf-c/issues/1162
This PR partially fixes memory leaks in the netcdf-c library,
in the ncdump utility, and in some test cases.
The netcdf-c library now runs memory clean under the assumption
that the --disable-utilities option is used. The primary remaining
problem is ncgen. Once that is fixed, I believe the netcdf-c library
will run memory clean with no limitations.
Notes
-----------
1. Memory checking was performed using gcc -fsanitize=address.
Valgrind-based testing has yet to be performed.
2. The pnetcdf, hdf4, and examples code has not been tested.
Misc. Non-leak changes
1. Make tst_diskless2 only run when netcdf4 is enabled (issue 1162)
2. Fix CMakeLists.txt to turn off logging if ENABLE_NETCDF_4 is OFF
3. Isolated all my debug scripts into a single top-level directory
called debug
4. Fix some USE_NETCDF4 dependencies in nc_test and nc_test4 Makefile.am
re: issue https://github.com/Unidata/netcdf-c/issues/1151
Modify DAP2 and DAP4 code to handle the case when the _FillValue
type is not the same as the parent variable type.
Specifically:
1. Define a parameter [fillmismatch] to allow this mismatch;
the default is to disallow it.
2. If allowed, forcibly change the type of the _FillValue to match
the parent variable.
3. If allowed, convert the values to match the new type.
4. Generate a log message.
5. If not allowed, then fail.
Implementing this required some changes to ncdap_test/dapcvt.c
Also added test cases.
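For illustration, a client would request the lenient behavior
along the following lines; the bracketed prefix is the usual DAP
client-parameter syntax, and the URL is hypothetical:

    #include <stdio.h>
    #include <netcdf.h>

    int main(void)
    {
        int ncid;
        /* [fillmismatch] asks the DAP code to coerce a mismatched
           _FillValue instead of failing; the URL is hypothetical. */
        int stat = nc_open("[fillmismatch]http://host/dods/dataset",
                           NC_NOWRITE, &ncid);
        if(stat != NC_NOERR) {
            fprintf(stderr, "nc_open: %s\n", nc_strerror(stat));
            return 1;
        }
        return nc_close(ncid);
    }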
Minor Unrelated Changes:
1. There were a number of warnings about, e.g.,
assigning a const char* to a char*. Fix these.
2. In nccopy.1, replace .NP with .IP "n"
(re PR https://github.com/Unidata/netcdf-c/pull/1144)
3. Fix a minor error in ncdump/ocprint.
re: github issue https://github.com/Unidata/netcdf-c/issues/1111
One of the less common use cases for the in-memory feature is
apparently failing with HDF5-1.10.x. The fix is complicated and
requires significant changes to libhdf5/nc4memcb.c. The current
setup is detailed in the file docs/inmeminternal.dox.
Additionally, it was discovered that the program
nc_test/tst_inmemory.c, which is invoked by
nc_test/run_inmemory.sh, was actually failing because of the
above problem. But the failure was not detected since the script
does not return a non-zero exit code.
Other Changes:
1. Fix nc_test/tst_inmemory to return errors correctly.
2. Make ncdap_test/findtestserver.c and dap4_test/findtestserver4.c
be generated from ncdap_test/findtestserver.c.in.
3. Make LOG() print output to stderr instead of stdout to
avoid contaminating e.g. ncdump output.
4. Modify the handling of NC_INMEMORY and NC_DISKLESS flags
to properly handle that NC_DISKLESS => NC_INMEMORY. This
affects a number of code pieces, especially memio.c.
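Item 4 amounts to treating one flag as implying the other; a
minimal sketch of the idea (not the actual library code):

    #include <stdio.h>
    #include <netcdf.h>

    /* Sketch only: normalize the open/create mode so that
       NC_DISKLESS implies NC_INMEMORY, per item 4 above. */
    static int normalize_mode(int cmode)
    {
        if(cmode & NC_DISKLESS)
            cmode |= NC_INMEMORY;
        return cmode;
    }

    int main(void)
    {
        int cmode = normalize_mode(NC_NETCDF4 | NC_DISKLESS);
        printf("NC_INMEMORY is %s\n",
               (cmode & NC_INMEMORY) ? "set" : "clear");
        return 0;
    }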
Add the ability to set some additional curlopt values via .daprc (aka .dodsrc).
This affects both DAP2 and DAP4 protocols.
Related issues:
[1] re: esupport: KOZ-821332
[2] re: github issue https://github.com/Unidata/netcdf4-python/issues/836
[3] re: github issue https://github.com/Unidata/netcdf-c/issues/1074
1. CURLOPT_BUFFERSIZE: Relevant to [1]. Allow the user to set the
read/write buffer sizes used by curl.
This is done by adding the following to .daprc (aka .dodsrc):
HTTP.READ.BUFFERSIZE=n
where n is the buffersize in bytes. There is a built-in (to curl)
limit of 512k for this value.
2. CURLOPT_TCP_KEEPALIVE (and CURLOPT_TCP_KEEPIDLE and CURLOPT_TCP_KEEPINTVL):
Relevant (maybe) to [2] and [3]. Allow the user to turn on KEEPALIVE.
This is done by adding the following to .daprc (aka .dodsrc):
HTTP.KEEPALIVE=on|n/m
If the value is "on", then simply enable default KEEPALIVE. If the value
is n/m, then enable KEEPALIVE and set KEEPIDLE to n and KEEPINTVL to m.
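Putting the two together, a .daprc might contain entries like
these (the values are illustrative):

    HTTP.READ.BUFFERSIZE=65536
    HTTP.KEEPALIVE=60/30

Here 65536 is the requested curl buffer size in bytes, and 60/30
sets KEEPIDLE to 60 seconds and KEEPINTVL to 30 seconds.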
The jetstream-based server now becomes the first default test
server to try. This also means that the dap4 remote testing is
enabled. The only issue to watch is whether the jetstream-based
server can stay up for significant periods of time.
An uptimerobot monitor (https://uptimerobot.com) has been set up
to check this hourly, so we shall see.
Also cleaned up the tests to remove cruft and unused site tests,
and to make the tests somewhat more understandable.
Also did a fix to libdispatch/dwinpath to convert
relative paths to absolute paths. This will, I hope,
take care of some Windows path problems when using
$srcdir in shell scripts.
The file docs/indexing.dox tries to provide design
information for the refactoring.
The primary change is to replace all walking of linked
lists with the use of the NCindex data structure.
NCindex is a combination of a hash table (for name-based
lookup) and a vector (for walking the elements in the index).
Additionally, global vectors are added to NC_HDF5_FILE_INFO_T
to support direct mapping of, for example, a dimid to the
corresponding NC_DIM_INFO_T object. These global vectors exist
object. These global vectors exist for dimensions, types, and groups
because they have globally unique id numbers.
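Schematically, the combination looks something like this; the
field names are illustrative, not the actual declaration:

    #include <stddef.h>

    /* Sketch of the NCindex idea: a vector preserves insertion
       order for iteration, while a hash table gives O(1)
       name-based lookup into that vector. */
    typedef struct NCindex {
        void** vector;  /* objects in insertion order */
        size_t nelems;  /* number of objects in the vector */
        void*  namemap; /* hash table: object name -> vector position */
    } NCindex;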
WARNING:
1. since libsrc4 and libsrchdf4 share code, there are also
changes in libsrchdf4.
2. Any outstanding pull requests that change libsrc4 or libhdf4
are likely to cause conflicts with this code.
3. The original reason for doing this was performance improvement,
but as noted elsewhere, the gain may not be significant because
meta-data read performance is apparently dominated by the HDF5
library, since we do bulk meta-data reading rather than lazy
reading.
2. Fixed plugin building (nc_test4/hdf5plugins)
to be done properly by CMake and automake.
4. Duplicated part of the nc_test4 filter test code
in examples/C.
An incomplete and untested set of hooks exists
for OS-X in nc_test4/findplugins.in. They need testing.
strlcat provides better protection against buffer overflows.
Code is taken from the FreeBSD project source code. Specifically:
https://github.com/freebsd/freebsd/blob/master/lib/libc/string/strlcat.c
License appears to be acceptable, but needs to be checked by e.g. Debian.
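For reference, the strlcat semantics that provide the protection
(a minimal sketch; on systems without a native strlcat, the
library's own copy from Step 1 below is used instead):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char buf[8] = "abc";
        /* strlcat appends at most sizeof(buf)-strlen(buf)-1 bytes
           and always NUL-terminates; its return value is the total
           length it tried to create, so a result >= sizeof(buf)
           signals truncation. */
        if(strlcat(buf, "defghij", sizeof(buf)) >= sizeof(buf))
            fprintf(stderr, "truncated to: %s\n", buf); /* "abcdefg" */
        return 0;
    }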
Step 1:
1. Add code to netcdf-c/include/ncconfigure.h to use our version
if strlcat is not already available, as determined by HAVE_STRLCAT
in config.h.
2. Add the strlcat code to libdispatch/dstring.c.
3. It turns out that strlcat was already defined in several places,
so remove it from:
ncgen3/genlib.c
ncdump/dumplib.c
4. Add an extern declaration for strlcat in ncconfigure.h.
5. Modify the following directories to use strlcat:
libdap2 libdap4 ncdap_test dap4_test
Others will be done in subsequent steps.
Some parameters like stringlength actually affect a dimension
named maxStrlen. So add aliasing so that maxstrlen can be
specified as an alias for stringlength.
The affected parameters (case insensitive):
stringlength has alias maxstrlen
stringlength_<varname> has alias maxstrlen_<varname>
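For example (illustrative URL), both spellings should now select
the same limit:

    [maxstrlen=100]http://host/dods/dataset
    [stringlength=100]http://host/dods/dataset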
Also:
1. added a test case in ncdap_test/testurl.sh
2. added note to documentation
re pull request https://github.com/Unidata/netcdf-c/pull/405
re pull request https://github.com/Unidata/netcdf-c/pull/446
Notes:
1. This branch is a cleanup of the magic.dmh branch.
2. magic.dmh was originally merged, but caused problems with parallel IO.
It was re-issued as pull request https://github.com/Unidata/netcdf-c/pull/446.
3. This branch + pull request replace any previous pull requests and the magic.dmh branch.
Given an otherwise valid netCDF file that has a corrupted header,
the netcdf library currently crashes. Instead, it should return
NC_ENOTNC.
Additionally, the NC_check_file_type code does not do the
forward search required for HDF5 files: it looks only at file
position 0 instead of also at 512, 1024, 2048, .... Also, it
turns out that the HDF4 magic number is assumed to always be at
the beginning of the file (unlike HDF5).
The change is localized to libdispatch/dfile.c. See
https://support.hdfgroup.org/release4/doc/DSpec_html/DS.pdf
Also, it turns out that the code in NC_check_file_type is duplicated
(mostly) in the function libsrc4/nc4file.c#nc_check_for_hdf.
This branch does the following.
1. Make NC_check_file_type return NC_ENOTNC instead of crashing.
2. Remove nc_check_for_hdf and centralize all file format checking
in NC_check_file_type.
3. Add a proper forward search for HDF5 files (but not HDF4 files)
to look for the magic number at offsets 0, 512, 1024, ...
(see the sketch after this list).
4. Add test tst_hdf5_offset.sh. This tests that hdf5 files with
an offset are properly recognized. It does so by prefixing
a legal file with some number of zero bytes: 512, 1024, etc.
5. Off-topic: Added -N flag to ncdump to force a specific output dataset name.
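The forward search in item 3 works roughly as follows (a sketch,
not the actual dfile.c code):

    #include <stdio.h>
    #include <string.h>

    /* Probe for the 8-byte HDF5 signature at offset 0, then 512,
       doubling thereafter, until the file is exhausted. */
    static const unsigned char HDF5_MAGIC[8] =
        {0x89, 'H', 'D', 'F', '\r', '\n', 0x1a, '\n'};

    static int looks_like_hdf5(FILE* f)
    {
        unsigned char buf[8];
        long pos = 0;
        for(;;) {
            if(fseek(f, pos, SEEK_SET) != 0) return 0;
            if(fread(buf, 1, 8, f) != 8) return 0; /* past EOF */
            if(memcmp(buf, HDF5_MAGIC, 8) == 0) return 1;
            pos = (pos == 0 ? 512 : pos * 2);
        }
    }

    int main(int argc, char** argv)
    {
        FILE* f;
        if(argc < 2 || (f = fopen(argv[1], "rb")) == NULL) return 2;
        printf("%s: %s\n", argv[1],
               looks_like_hdf5(f) ? "HDF5" : "not HDF5");
        fclose(f);
        return 0;
    }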
2. Modified ncdap_test scripts to remove <cr>
from generated output before comparing it
to the expected output. This is a hack because
my other attempts to force Windows to use
binary mode have not worked.