Mirror of https://github.com/Unidata/netcdf-c.git, synced 2024-11-21 03:13:42 +08:00
9983b9d911
re pull request https://github.com/Unidata/netcdf-c/pull/405
re pull request https://github.com/Unidata/netcdf-c/pull/446

Notes:
1. This branch is a cleanup of the magic.dmh branch.
2. magic.dmh was originally merged, but caused problems with parallel IO. It was re-issued as pull request https://github.com/Unidata/netcdf-c/pull/446.
3. This branch + pull request replace any previous pull requests and the magic.dmh branch.

Given an otherwise valid netCDF file that has a corrupted header, the netcdf library currently crashes. Instead, it should return NC_ENOTNC.

Additionally, the NC_check_file_type code does not do the forward search required by HDF5 files. It currently looks only at file position 0 instead of 512, 1024, 2048, ...

Also, it turns out that the HDF4 magic number is assumed to always be at the beginning of the file (unlike HDF5); see https://support.hdfgroup.org/release4/doc/DSpec_html/DS.pdf. The change is localized to libdispatch/dfile.c.

Finally, the code in NC_check_file_type is duplicated (mostly) in the function libsrc4/nc4file.c#nc_check_for_hdf.

This branch does the following:
1. Make NC_check_file_type return NC_ENOTNC instead of crashing.
2. Remove nc_check_for_hdf and centralize all file format checking in NC_check_file_type.
3. Add a proper forward search for HDF5 files (but not HDF4 files) that looks for the magic number at offsets of 0, 512, 1024, ... (sketched below).
4. Add the test tst_hdf5_offset.sh, which verifies that HDF5 files with an offset are properly recognized. It does so by prefixing a legal file with some number of zero bytes: 512, 1024, etc. (also sketched below).
5. Off-topic: add a -N flag to ncdump to force a specific output dataset name.
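To make item 3 concrete, here is a minimal sketch of the forward search, assuming only what is stated above (the signature may sit at offset 0, at 512, and at each doubling thereafter). The helper name check_hdf5_magic and the local error-code defines are hypothetical stand-ins, not the actual code in libdispatch/dfile.c:

```c
#include <stdio.h>
#include <string.h>

/* Stand-ins for the netcdf-c error codes from netcdf.h. */
#define NC_NOERR  0
#define NC_ENOTNC (-51)

/* The 8-byte HDF5 superblock signature: \211 H D F \r \n \032 \n */
static const char HDF5_SIGNATURE[8] = "\211HDF\r\n\032\n";

/*
 * Hypothetical helper: scan for the HDF5 signature at offsets
 * 0, 512, 1024, 2048, ... (doubling after 512). Returns NC_NOERR
 * if the signature is found and NC_ENOTNC otherwise; a short or
 * corrupted file yields NC_ENOTNC rather than a crash.
 */
static int check_hdf5_magic(FILE *fp)
{
    char buf[sizeof HDF5_SIGNATURE];
    long pos = 0;

    for (;;) {
        if (fseek(fp, pos, SEEK_SET) != 0)
            return NC_ENOTNC;               /* cannot seek to that offset */
        if (fread(buf, 1, sizeof buf, fp) != sizeof buf)
            return NC_ENOTNC;               /* ran off the end: not HDF5 */
        if (memcmp(buf, HDF5_SIGNATURE, sizeof buf) == 0)
            return NC_NOERR;                /* signature found */
        pos = (pos == 0) ? 512L : pos * 2;  /* 0, 512, 1024, 2048, ... */
    }
}

int main(int argc, char **argv)
{
    FILE *fp;
    if (argc != 2 || (fp = fopen(argv[1], "rb")) == NULL)
        return 1;
    printf("%s\n", check_hdf5_magic(fp) == NC_NOERR ? "HDF5" : "not HDF5");
    fclose(fp);
    return 0;
}
```

An HDF4 check, by contrast, would look only at offset 0, since per the DS.pdf spec above the HDF4 magic number is always at the start of the file.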
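The zero-prefixing idea behind tst_hdf5_offset.sh (item 4) can be pictured the same way. The real test is a shell script; the helper name and file names below are purely illustrative:

```c
#include <stdio.h>

/* Hypothetical: copy 'src' to 'dst' preceded by 'pad' zero bytes,
 * so the HDF5 signature lands at offset 'pad'. */
static int prefix_with_zeros(const char *src, const char *dst, long pad)
{
    FILE *in = fopen(src, "rb");
    FILE *out = fopen(dst, "wb");
    int c;

    if (in == NULL || out == NULL) {
        if (in) fclose(in);
        if (out) fclose(out);
        return -1;
    }
    while (pad-- > 0)
        fputc(0, out);            /* zero padding shifts the signature */
    while ((c = fgetc(in)) != EOF)
        fputc(c, out);            /* then the original file verbatim */
    fclose(in);
    fclose(out);
    return 0;
}

int main(void)
{
    /* e.g. make copies where the HDF5 signature sits at 512 and 1024
     * (input/output names are made up for this sketch) */
    return prefix_with_zeros("ref.nc", "shifted512.nc", 512)
        || prefix_with_zeros("ref.nc", "shifted1024.nc", 1024);
}
```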