Commit Graph

9 Commits

Author SHA1 Message Date
Dennis Heimbigner
751300ec59 Fix more memory leaks in netcdf-c library
This is a follow-up to PR https://github.com/Unidata/netcdf-c/pull/1173

Sorry that it is so big, but leak suppression can be complex.

This PR fixes all remaining memory leaks -- as determined by
-fsanitize=address, and with the exceptions noted below.

Unfortunately, there remains a significant leak that I cannot
solve. It involves vlens, and it is unclear if the leak is
occurring in the netcdf-c library or the HDF5 library.

I have added a check_PROGRAM to the ncdump directory to show the
problem.  The program is called tst_vlen_demo.c. To exercise it,
build the netcdf library with -fsanitize=address enabled. Then
go into ncdump and do a "make clean check".  This should build
tst_vlen_demo without actually executing it.  Then run
"./tst_vlen_demo" to see the output of the memory checker.
Note that the lost malloc is deep in the HDF5 library
(in H5Tvlen.c).
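
For reference, the failing pattern is an ordinary vlen round
trip. The sketch below is illustrative only (the file, dimension,
type, and variable names are made up, not the actual contents of
tst_vlen_demo.c):

    #include <stdlib.h>
    #include <netcdf.h>

    /* Illustrative vlen round trip: write a vlen variable, read it
       back, and free the client-side storage with nc_free_vlen().
       The leak reported by -fsanitize=address is nevertheless deep
       inside the HDF5 library (H5Tvlen.c). */
    int main(void) {
        int ncid, dimid, varid;
        nc_type vtype;
        int data0[3] = {17, 18, 19};
        nc_vlen_t wr[1], rd[1];

        wr[0].len = 3;
        wr[0].p = data0;

        if (nc_create("demo_vlen.nc", NC_NETCDF4, &ncid)) abort();
        if (nc_def_vlen(ncid, "vlen_t", NC_INT, &vtype)) abort();
        if (nc_def_dim(ncid, "x", 1, &dimid)) abort();
        if (nc_def_var(ncid, "v", vtype, 1, &dimid, &varid)) abort();
        if (nc_put_var(ncid, varid, wr)) abort();
        if (nc_close(ncid)) abort();

        if (nc_open("demo_vlen.nc", NC_NOWRITE, &ncid)) abort();
        if (nc_inq_varid(ncid, "v", &varid)) abort();
        if (nc_get_var(ncid, varid, rd)) abort();
        if (nc_free_vlen(&rd[0])) abort(); /* free what the read allocated */
        if (nc_close(ncid)) abort();
        return 0;
    }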

I am temporarily working around this error in the following way.
1. I modified several test scripts to not execute known vlen tests
   that fail as described above.
2. Added an environment variable called NC_VLEN_NOTEST.
   If set, those specific tests are suppressed (sketched below).
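
The gating itself is trivial: the modified shell scripts just test
whether the variable is set. An equivalent check in C (illustrative
only, not the actual script changes) would look like this:

    #include <stdio.h>
    #include <stdlib.h>

    /* Illustrative: skip the known-leaking vlen tests when
       NC_VLEN_NOTEST is set. Exit code 77 is the automake
       convention for marking a test as skipped. */
    int main(void) {
        if (getenv("NC_VLEN_NOTEST") != NULL) {
            fprintf(stderr, "NC_VLEN_NOTEST set: skipping vlen leak tests\n");
            return 77;
        }
        /* ... run the vlen tests here ... */
        return 0;
    }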

This means the --disable-utilities option to ./configure should
no longer be needed to get a memory-leak-clean build, which in
turn should allow any new leaks to be detected.

Note: I used an environment variable rather than a ./configure
option to control the vlen tests. This is because it is
temporary (I hope) and because it is a bit tricky for shell
scripts to access ./configure options.

Finally, as before, this has only been tested with netcdf-4 and hdf5 support.
2018-11-15 10:00:38 -07:00
Dennis Heimbigner
571cf6b933 Fix test run_diskless2.sh with LARGE_FILE_TESTS
re: https://github.com/Unidata/netcdf-c/issues/1162

The test nc_test/run_diskless2.sh fails when LARGE_FILE_TESTS
is enabled. Since the goal of the test is to exercise
diskless+persist on a reasonably large file, I fixed it by
limiting the file size to 1000000000L bytes.
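
For context, diskless+persist means the dataset is built entirely
in memory and flushed to disk only at nc_close(). A minimal sketch
of that pattern (file and object names are illustrative; the size
cap is the one from this fix):

    #include <stdlib.h>
    #include <netcdf.h>

    #define MAXFILESIZE 1000000000L  /* size cap from this fix */

    int main(void) {
        int ncid, dimid, varid;
        size_t nelems = MAXFILESIZE / sizeof(double);

        /* NC_DISKLESS: build the file in memory;
           NC_PERSIST: write it to disk when closed. */
        if (nc_create("diskless.nc", NC_NETCDF4 | NC_DISKLESS | NC_PERSIST,
                      &ncid)) abort();
        if (nc_def_dim(ncid, "d", nelems, &dimid)) abort();
        if (nc_def_var(ncid, "v", NC_DOUBLE, 1, &dimid, &varid)) abort();
        /* ... write data ... */
        if (nc_close(ncid)) abort();  /* the flush to disk happens here */
        return 0;
    }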
2018-11-03 11:51:10 -06:00
Ward Fisher
a996ed554e Swapped /bin/bash for /bin/sh to test on osx. 2018-08-12 23:01:08 -06:00
Ward Fisher
aeb0e90b0b When large file tests are enabled, this test fails in Docker. Fixed by switching from sh to bash. 2018-06-14 12:46:39 -06:00
Dennis Heimbigner
9983b9d911 re e-support UBS-599337
re pull request https://github.com/Unidata/netcdf-c/pull/405
re pull request https://github.com/Unidata/netcdf-c/pull/446

Notes:
1. This branch is a cleanup of the magic.dmh branch.
2. magic.dmh was originally merged, but caused problems with parallel IO.
   It was re-issued as pull request https://github.com/Unidata/netcdf-c/pull/446.
3. This branch + pull request replace any previous pull requests and the magic.dmh branch.

Given an otherwise valid netCDF file that has a corrupted header,
the netcdf library currently crashes. Instead, it should return
NC_ENOTNC.

Additionally, the NC_check_file_type code does not do the
forward search required for hdf5 files. It currently looks only
at file position 0, not also at 512, 1024, 2048, ... Also, it
turns out that the HDF4 magic number is assumed to always be at
the beginning of the file (unlike HDF5).
The change is localized to libdispatch/dfile.c. See
https://support.hdfgroup.org/release4/doc/DSpec_html/DS.pdf
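
The search itself is simple. A sketch of the idea (the function
name and I/O handling here are illustrative; the real code lives
in NC_check_file_type):

    #include <stdio.h>
    #include <string.h>

    /* HDF5 superblock signature, per the HDF5 file format spec. */
    static const char HDF5_MAGIC[8] = "\211HDF\r\n\032\n";

    /* Probe offsets 0, 512, 1024, 2048, ... (doubling) until the
       signature is found or we run off the end of the file. */
    static int is_hdf5_file(FILE *f) {
        char buf[8];
        long offset = 0;
        for (;;) {
            if (fseek(f, offset, SEEK_SET) != 0) return 0;
            if (fread(buf, 1, 8, f) != 8) return 0; /* EOF: not HDF5 */
            if (memcmp(buf, HDF5_MAGIC, 8) == 0) return 1;
            offset = (offset == 0 ? 512 : offset * 2);
        }
    }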

Also, it turns out that the code in NC_check_file_type is duplicated
(mostly) in the function libsrc4/nc4file.c#nc_check_for_hdf.

This branch does the following.
1. Make NC_check_file_type return NC_ENOTNC instead of crashing.
2. Remove nc_check_for_hdf and centralize all file format checking
   in NC_check_file_type.
3. Add proper forward search for HDF5 files (but not HDF4 files)
   to look for the magic number at offsets of 0, 512, 1024...
4. Add test tst_hdf5_offset.sh. This tests that hdf5 files with
   an offset are properly recognized. It does so by prefixing
   a legal file with some number of zero bytes: 512, 1024, etc.
5. Off-topic: Added -N flag to ncdump to force a specific output dataset name.
2017-10-24 16:25:09 -06:00
Ward Fisher
f8c7df70b3 Corrected an issue where run_diskless2.sh was not properly configured. 2017-05-10 16:55:47 -06:00
Dennis Heimbigner
6d8809100f Fix pull request https://github.com/Unidata/netcdf-c/pull/374 (dap4.dmh)
1. When running under Windows (as opposed to Cygwin)
   we need to make sure not to use /cygdrive/ file paths
   (sketched after this list). This was occurring in
   libdap4/d4read.c, but may occur elsewhere.
2. Shell scripts in the git repo were not being checked out
   with the executable mode set; core.filemode had been set to
   false. This was a major hassle to fix.
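
For item 1, the fixup amounts to rewriting a /cygdrive/ prefix
into a drive letter. A sketch (illustrative; the actual change in
libdap4/d4read.c may differ in detail):

    #include <ctype.h>
    #include <string.h>

    /* Illustrative: rewrite "/cygdrive/c/foo" to "c:/foo" in place.
       Safe in place because the result is always shorter. */
    static void decygwinify(char *path) {
        const char *prefix = "/cygdrive/";
        size_t plen = strlen(prefix);
        if (strncmp(path, prefix, plen) == 0
            && isalpha((unsigned char)path[plen])) {
            char drive = path[plen];
            size_t restlen = strlen(path + plen + 1);
            path[0] = drive;
            path[1] = ':';
            memmove(path + 2, path + plen + 1, restlen + 1); /* +1: copy NUL */
        }
    }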
2017-04-03 21:39:44 -06:00
Ward Fisher
5b40f3a27e Added a platform check for hostname on Windows. Updated how 'diff' is called in a diskless test to ignore whitespace when comparing files. 2015-02-04 09:11:27 -07:00
Dennis Heimbigner
48ca394d2e diskless: make read and write loop to read/write whole file when persisting 2012-04-09 22:03:02 +00:00