re pull request https://github.com/Unidata/netcdf-c/pull/405
re pull request https://github.com/Unidata/netcdf-c/pull/446
Notes:
1. This branch is a cleanup of the magic.dmh branch.
2. magic.dmh was originally merged, but caused problems with parallel IO.
It was re-issued as pull request https://github.com/Unidata/netcdf-c/pull/446.
3. This branch and pull request replace any previous pull requests and the magic.dmh branch.
Given an otherwise valid netCDF file that has a corrupted header,
the netcdf library currently crashes. Instead, it should return
NC_ENOTNC.
Additionally, the NC_check_file_type code does not do the
forward search required by HDF5 files. It currently looks only at
file position 0 instead of also at 512, 1024, 2048, ... Also, it turns
out that the HDF4 magic number is assumed to always be at the
beginning of the file (unlike HDF5).
The change is localized to libdispatch/dfile.c. See
https://support.hdfgroup.org/release4/doc/DSpec_html/DS.pdf
Also, it turns out that the code in NC_check_file_type is duplicated
(mostly) in the function libsrc4/nc4file.c#nc_check_for_hdf.
This branch does the following.
1. Make NC_check_file_type return NC_ENOTNC instead of crashing.
2. Remove nc_check_for_hdf and centralize all file format checking
in NC_check_file_type.
3. Add a proper forward search for HDF5 files (but not HDF4 files)
to look for the magic number at offsets of 0, 512, 1024, ...
(see the sketch after this list).
4. Add test tst_hdf5_offset.sh. This tests that hdf5 files with
an offset are properly recognized. It does so by prefixing
a legal file with some number of zero bytes: 512, 1024, etc.
5. Off-topic: Added -N flag to ncdump to force a specific output dataset name.
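As a rough illustration of item 3, here is a minimal sketch (not the
actual libdispatch/dfile.c code; function and variable names are made up)
of searching for the HDF5 superblock signature at offset 0 and then at
512, 1024, 2048, ..., doubling until end of file:

    #include <stdio.h>
    #include <string.h>

    /* The 8-byte HDF5 superblock signature. */
    static const unsigned char HDF5_SIGNATURE[8] =
        {0x89, 'H', 'D', 'F', 0x0d, 0x0a, 0x1a, 0x0a};

    /* Return 1 if fp has an HDF5 superblock at offset 0, 512, 1024, ... */
    static int
    has_hdf5_signature(FILE* fp)
    {
        unsigned char buf[8];
        long offset = 0;
        for(;;) {
            if(fseek(fp, offset, SEEK_SET) != 0) return 0;
            if(fread(buf, 1, sizeof(buf), fp) != sizeof(buf)) return 0; /* hit EOF: not HDF5 */
            if(memcmp(buf, HDF5_SIGNATURE, sizeof(buf)) == 0) return 1;
            offset = (offset == 0 ? 512L : offset * 2); /* double the offset each time */
        }
    }

If the signature is never found, the caller can fall through to the other
format checks and ultimately return NC_ENOTNC instead of crashing.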
Some temporary files are being left in a tempdir (e.g. /tmp
under *nix).
The situation is described tersely in
netcdf-c/docs/auth.html#REDIR. Basically, when a URL is used that
requires redirection, a physical cookiejar file is required
to exist in the file system in order for this to work.
Since it was difficult to figure out when redirection was
being used (it was internal to libcurl), I needed to be prepared for that
eventuality. The result was that I always created a cookiejar file if one
was not specified in the rc file. This actually occurs in two places:
one inside oc2 and one inside libdap4.
The solution was two-fold:
1. Do not use a cookiejar directory; create the cookiejar file directly.
2. Ensure that all cookiejar-related files are reclaimed by nc_close().
Note that if nc_close (or nc_abort) is not called for whatever reason,
then reclamation will not occur.
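As a rough sketch of the mechanism (assuming libcurl's standard cookie
options; the function name and paths are illustrative, not the actual
oc2/libdap4 code), the cookiejar file is handed to curl so that
redirection-based authorization can carry cookies across requests, and the
file is removed again afterwards, analogous to the reclamation now done by
nc_close():

    #include <stdio.h>
    #include <curl/curl.h>

    /* Fetch a URL with a cookiejar file so redirection-based authorization
     * can round-trip cookies; remove the file when done. */
    static void
    fetch_with_cookiejar(const char* url, const char* cookiejar)
    {
        CURL* curl = curl_easy_init();
        if(curl == NULL) return;
        curl_easy_setopt(curl, CURLOPT_URL, url);
        curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);    /* allow redirects */
        curl_easy_setopt(curl, CURLOPT_COOKIEJAR, cookiejar);  /* write cookies here */
        curl_easy_setopt(curl, CURLOPT_COOKIEFILE, cookiejar); /* and read them back */
        (void)curl_easy_perform(curl);
        curl_easy_cleanup(curl);
        remove(cookiejar); /* reclaim the cookiejar file */
    }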
This is a follow-on fix: the old utf8 code was still being
used in ncgen to convert utf8->utf16 when converting CDL to Java
(see genj.c).
The new code apparently has no utf16 support, but it does have
utf32 support. Converting utf32 -> utf16 can be approximated by
truncating the 32 bits to 16 bits, unless the top 16 bits are
not zero. This latter condition should be uncommon because
it implies the use of some rather obscure characters.
So the solution is to convert to utf32 and truncate to 16 bits to
get utf16. An error is reported if the truncated high-order 16
bits are not zero. If we get complaints, then I will figure out
how to convert full utf32 to a utf16 surrogate pair.
Also removed the old code from ncgen.
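A minimal sketch of the approximation described above (illustrative names,
not the actual ncgen code): a UTF-32 code point is mapped to UTF-16 by
truncation, and an error is reported when the high-order 16 bits are
nonzero, i.e. when a surrogate pair would be needed:

    #include <stdint.h>

    /* Map one UTF-32 code point to UTF-16 by truncation.  Returns 0 on
     * success and -1 when the high-order 16 bits are nonzero, i.e. the
     * character would need a surrogate pair. */
    static int
    utf32_to_utf16_char(uint32_t cp32, uint16_t* cp16)
    {
        if((cp32 >> 16) != 0)
            return -1;              /* not representable by truncation */
        *cp16 = (uint16_t)cp32;     /* truncation is exact for these characters */
        return 0;
    }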
1. Clean up test_common.sh to expunge (mostly) the use of the VS
path value. This has the effect of making it impossible to use the
Visual Studio C compiler for shell tests.
2. There is a missing case in CMakeLists.txt, so add
defaulting for HDF5_C_LIBRARY_hdf5 using HDF5_C_LIBRARY.
Ward should probably examine this to get it fixed correctly.
3. Put back ref to esg.md in docs/Doxyfile.in
4. Fix minor warning in dut8proc.h
Specific changes:
1. Add dap4 code: libdap4 and dap4_test.
Note that until the d4ts server problem is solved, dap4 is turned off.
2. Modify various files to support dap4 flags:
configure.ac, Makefile.am, CMakeLists.txt, etc.
3. Add nc_test/test_common.sh. This centralizes
the handling of the locations of various
things in the build tree, e.g. where
ncgen.exe is located. See nc_test/test_common.sh
for details.
4. Modify .sh files to use test_common.sh
5. Obsolete the separate oc2 library by moving it to be part of
netcdf-c. This means replacing some code with netcdf-c
equivalents.
6. Add --with-testserver to configure.ac to allow
override of the servers to be used for --enable-dap-remote-tests.
7. There were multiple versions of the nctypealignment code. Try to
centralize it in libdispatch/doffset.c and include/ncoffsets.h
(see the alignment sketch after this list).
8. Add a unit test for the ncuri code because of its complexity.
9. Move the findserver code out of libdispatch and into
a separate, self-contained program in ncdap_test and dap4_test.
10. Move the dispatch header files (nc{3,4}dispatch.h) to
.../include because they are now shared by modules.
11. Revamp the handling of TOPSRCDIR and TOPBUILDDIR for shell scripts.
12. Make use of MREMAP if available.
13. Misc. minor changes, e.g.:
- #include <config.h> -> #include "config.h"
- Add some no-install headers to /include
- extern -> EXTERNL and vice versa as needed
- misc header cleanup
- clean up checking for misc. unix vs microsoft functions
14. Change copyright decls in some files to point to the LICENSE file.
15. Add notes to RELEASENOTES.md.
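The alignment sketch referenced in item 7: a common way to compute the
natural alignment of a type, and roughly the kind of computation a
centralized nctypealignment facility needs to provide, is to measure the
offset of a field placed after a single leading char. This is an
illustrative sketch, not the actual doffset.c code.

    #include <stddef.h>
    #include <stdio.h>

    /* Natural alignment shows up as the padding the compiler inserts
     * after a single leading char. */
    struct align_short  { char c; short x; };
    struct align_int    { char c; int x; };
    struct align_double { char c; double x; };

    int main(void)
    {
        printf("short:  %zu\n", offsetof(struct align_short, x));
        printf("int:    %zu\n", offsetof(struct align_int, x));
        printf("double: %zu\n", offsetof(struct align_double, x));
        return 0;
    }

On a typical x86_64 build this prints 2, 4, and 8.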
Update utf8proc.[ch] to use the version now
maintained by the Julia Language project
(https://github.com/JuliaLang/utf8proc/blob/master/LICENSE.md).
The license for the previous version was
unacceptable for the Debian and Ubuntu release
systems. The new version both updates the code
and addresses the license issue.
It turns out that the utf8proc software we are using
was turned over to the Julia Language developers,
and the license terms were changed to allow modification
(https://github.com/JuliaLang/utf8proc/blob/master/LICENSE.md).
So the fix here is as follows:
1. Wrap the library with a fixed interface: libdispatch/dutf8.c
and include/ncutf8.h (sketched after this list).
2. Replace the existing utf8proc code with the new version
from https://github.com/JuliaLang/utf8proc.
3. Add a couple more test cases: nc_test/tst_utf8_validate.c
and nc_test_utf8_phrases.c. If/when I can find a usable
normalization test, I will incorporate that later.
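A minimal sketch of what such a fixed wrapper might look like on top of
the public utf8proc entry points; the function name and option set below
are assumptions for illustration, not the actual ncutf8.h API:

    #include <stdlib.h>
    #include <utf8proc.h>

    /* Normalize a NUL-terminated UTF-8 string to NFC using utf8proc.
     * Returns a newly malloc'd string (caller frees) or NULL on error.
     * Illustrative only; not the real ncutf8.h interface. */
    static unsigned char*
    utf8_normalize_example(const unsigned char* s)
    {
        utf8proc_uint8_t* result = NULL;
        utf8proc_ssize_t n = utf8proc_map(
            (const utf8proc_uint8_t*)s, 0, &result,
            (utf8proc_option_t)(UTF8PROC_NULLTERM | UTF8PROC_STABLE | UTF8PROC_COMPOSE));
        if(n < 0) { free(result); return NULL; }
        return (unsigned char*)result;
    }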
The reported problem (Re: [netcdfgroup] nccopy fails with corrupted double link list)
shows that ncdump/nccopy was returning EPERM instead of
NC_EDAPCONSTRAINT as an error when given a malformed constraint.
Also clean up a potential bug that might occur if the user invokes
nc_set_default_format before calling nc_open on a DAP URL.
The var hash was not being computed. The fix is in nc4file.c.
Not sure how this ever worked for any variable.
What is also weird is that the dim hash is
apparently being computed.
re: github netcdf-c issue #271
This occurs for several reasons, including:
1. using H5Aopen_name instead of H5Aexists to test whether an attribute exists (see the sketch below).
2. using H5Eset_auto instead of H5Eset_auto2.
There are probably others that will have to be extinguished as encountered.
P.S. I hope I did not overdo this and kill too much.
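A minimal sketch of the replacement pattern from item 1 (assuming the
standard HDF5 C API; the function name is made up): use H5Aexists to test
for an attribute instead of probing with H5Aopen_name, and use
H5Eset_auto2 to suppress the automatic error printing:

    #include <hdf5.h>

    /* Return 1 if the attribute is present, 0 otherwise. */
    static int
    attribute_present(hid_t obj_id, const char* name)
    {
        htri_t found;
        H5Eset_auto2(H5E_DEFAULT, NULL, NULL); /* do not print the error stack */
        found = H5Aexists(obj_id, name);
        return (found > 0); /* >0 exists, 0 absent, <0 error */
    }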
The hash field for phony dimensions was not being set
(in nc4hdf.c). Also added test case (nc_test4/?).
Note that I searched for other similar failures and
did not find any, but I may have missed them.
The added provenance information consists of a persistent attribute named
_NCProperties plus two computed attributes,
_IsNetcdf4 and _SuperblockVersion.
See the 'Provenance Attributes' section
of docs/attribute_conventions.md for details.
NetCDF-C GitHub issue #178 / esupport BNL-694121
The ncgen man page says:
> Note also that the words `variable', `dimension', `data', `group', and
> `types' are legal CDL names, but be careful that there is a space
> between them and any following colon character when used as a variable
> name. This is mostly an issue with attribute declarations.
Ncdump does not obey this rule.
The fix is to modify ncdump/ncdump.c to check if a variable name is
a keyword.
Also added test case.
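A minimal sketch of the kind of check involved (illustrative names, not
the actual ncdump/ncdump.c code): the name is compared against the CDL
words quoted above so the dumper can keep a space between the name and a
following colon in attribute declarations:

    #include <string.h>

    /* The CDL words quoted above that are legal names but need care. */
    static const char* cdl_keywords[] = {
        "variable", "dimension", "data", "group", "types", NULL
    };

    /* Return 1 if name collides with a CDL keyword. */
    static int
    is_cdl_keyword(const char* name)
    {
        int i;
        for(i = 0; cdl_keywords[i] != NULL; i++)
            if(strcmp(name, cdl_keywords[i]) == 0)
                return 1;
        return 0;
    }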
that were not taking the CDF-5 format into account.
2. Had to update the ncgen.1 man page to define the new
k-flag rules to deal with CDF-5.
3. Had to fix some tests that use 'cmp' for comparison;
this really should be deprecated.
4. There was a bug in configure.ac with respect
to using the enable-netcdf-4 flag vs.
using disable-netcdf-4.
1. AC_CHECK_SIZEOF is not working because anti-virus
software will not allow very rapid creation/deletion of a
file with the same name.
2. Modified some test baselines to attempt to fix
Ward's issue.
1. Allow the rc file to be specified via the environment variable
DAPRCFILE. Note that the value of this environment
variable should be the absolute path of the rc file, not
the path to its containing directory.
2. Fix up testauth.sh and add some new tests.
3. Synch oc.
This supports better authorization
handling for DAP requests, especially redirection-based
authorization. I also added a new test case,
ncdap_tests/testauth.sh.
Specifically, suppose I have a netrc file /tmp/netrc
containing this:
    machine uat.urs.earthdata.nasa.gov login xxxxxx password yyyyyy
Also suppose I have a .ocrc file containing these lines:
    HTTP.COOKIEJAR=/tmp/cookies
    HTTP.NETRC=/tmp/netrc
Assume that .ocrc is in the local directory or in HOME.
Then this command should work (assuming a valid login and password):
ncdump -h "https://54.86.135.31/opendap/data/nc/fnoc1.nc"
The pnetcdf support was not
properly being used to provide
MPI parallel I/O for netcdf-3 classic
files. The wrong dispatch table was being
used. The fix was to modify
dfile.c#NC_check_file_type to properly
specify the pnetcdf dispatch table when
use_parallel was true.
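A minimal sketch of the shape of that selection (stand-in names; not the
actual dfile.c code or dispatch structures):

    /* Stand-in dispatch tables; in the library these are real vtables
     * of function pointers. */
    typedef struct NC_Dispatch { const char* name; } NC_Dispatch;
    static NC_Dispatch nc3_table     = {"netcdf-3 (serial)"};
    static NC_Dispatch pnetcdf_table = {"pnetcdf (parallel)"};

    /* The bug was that the serial table was chosen even when use_parallel
     * was set; the fix routes classic files to the parallel table then. */
    static NC_Dispatch*
    choose_classic_dispatch(int use_parallel)
    {
        return use_parallel ? &pnetcdf_table : &nc3_table;
    }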
If the netCDF-C library is built with the
HDF5 library but without the HDF4 library and one attempts
to open an HDF4 file, an abort occurs rather than returning
a proper error code (NC_ENOTNC).
The fix is to modify dfile.c#NC_check_file_type to properly
#ifdef the relevant tests.
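A minimal sketch of the shape of the guarded check (USE_HDF4 stands in
for the actual build-time flag; the function name is made up, and
NC_NOERR/NC_ENOTNC come from netcdf.h):

    #include <netcdf.h> /* for NC_NOERR and NC_ENOTNC */

    static int
    interpret_as_hdf4(const char* path)
    {
    #ifdef USE_HDF4
        /* HDF4 support was built in: go on to check the HDF4 magic number. */
        (void)path;
        return NC_NOERR;
    #else
        /* No HDF4 support: report "not a netcdf file" instead of aborting. */
        (void)path;
        return NC_ENOTNC;
    #endif
    }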