Some versions of some servers are returning malformed responses.
Make the library either handle them or gracefully fail.
The three server errors "fixed" here are as follows.
1. The attribute _NCProperties sometimes has a trailing nul character
in its value. Solution: elide the nul(s) (see the sketch after this list).
2. Sometimes a DAP response has no data part, only a DMR.
Solution: detect this and return an error code instead of crashing.
3. Sometimes a server returns a redirection, but our current
openmagic() function was not following the redirect. Solution:
follow redirects.
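
For #1, the fix amounts to stripping any trailing nul bytes from the
attribute value before storing it. A minimal sketch of the idea, not
the actual library code (the function name is an assumption):

    #include <stddef.h>

    /* Return the length of 'value' (which holds 'len' bytes) with any
       trailing nul bytes elided, so the attribute text can be stored
       without the spurious terminator(s) a server may have appended. */
    static size_t
    elide_trailing_nuls(const char* value, size_t len)
    {
        while (len > 0 && value[len - 1] == '\0')
            len--;
        return len;
    }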
Also, because of #2, I am temporarily making --disable-dap-remote-tests
the default.
re: https://github.com/Unidata/netcdf-c/issues/1451
The situation with the various DAP (and other) remote test
servers is currently in a state of flux. For example, Unidata
admin is planning to forcibly shift the remote test server to
remotetest.unidata.ucar.edu soon. In addition, the server
test.opendap.org has shown some recent instability.
The result is that various DAP (and byterange) tests can fail
unexpectedly. This is an irritant to users and reveals nothing
about test success or failure.
The solution is to modify the tests to report server inaccessibility
and otherwise pretend to succeed.
This puts an onus on Unidata to detect such server failures, but
will not cause users to see spurious failures. [Note: do a similar
fix for netcdf-java.] The check is:
1. export SETX=1 to cause all the shell scripts to trace their execution.
2. Search the log files for the phrase "WARNING" (in upper case)
and see whether they complain about not finding a server.
Misc. Changes
-------------
1. Added a pingurl program to check whether a server is up
(a sketch of the idea appears after this list).
2. Modified some test case URL targets.
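
The pingurl source is not reproduced here; the following is only a
rough sketch of how such a liveness check might be written with
libcurl (the function names and the choice of a HEAD request are
assumptions):

    #include <stdio.h>
    #include <curl/curl.h>

    /* Return 1 if the server answers an HTTP HEAD request, 0 otherwise. */
    static int
    server_is_up(const char* url)
    {
        CURL* curl;
        CURLcode res;
        int up = 0;

        curl = curl_easy_init();
        if (curl == NULL)
            return 0;
        curl_easy_setopt(curl, CURLOPT_URL, url);
        curl_easy_setopt(curl, CURLOPT_NOBODY, 1L);         /* HEAD request: headers only */
        curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L); /* follow redirects */
        curl_easy_setopt(curl, CURLOPT_TIMEOUT, 10L);       /* do not hang on dead servers */
        res = curl_easy_perform(curl);
        if (res == CURLE_OK)
            up = 1;
        curl_easy_cleanup(curl);
        return up;
    }

    int
    main(int argc, char** argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: pingurl <url>\n");
            return 2;
        }
        return server_is_up(argv[1]) ? 0 : 1;
    }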
Supersedes PR: https://github.com/Unidata/netcdf-c/pull/1384
Since we have an mmap user, undeprecate it and make sure
it works. Other changes:
* fix test cases to work with make -j
* fix an exposed ncgen error.
So, the following were fixed:
1. A missing check for the NC_FORMATX_PNETCDF case
in one of the switches in NC_infermodel.
2. Both the NC_64BIT_OFFSET and NC_64BIT_DATA mode flags
were accidentally turned on together (see the sketch after this list).
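
For #2, the two flags select different classic formats (CDF-2 vs.
CDF-5) and must not both be set in the same mode. A minimal sketch of
the kind of guard that catches this, purely illustrative and not the
actual library code:

    #include <netcdf.h>

    /* Reject a create mode that requests both the CDF-2 (NC_64BIT_OFFSET)
       and CDF-5 (NC_64BIT_DATA) formats at the same time. */
    static int
    check_create_mode(int cmode)
    {
        if ((cmode & NC_64BIT_OFFSET) && (cmode & NC_64BIT_DATA))
            return NC_EINVAL; /* mutually exclusive format flags */
        return NC_NOERR;
    }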
re: issue https://github.com/Unidata/netcdf-c/issues/1251
Assume that you have the URL of a remote dataset
which is a normal netcdf-3 or netcdf-4 file.
This PR allows netcdf-c to read that dataset's
contents as a netcdf file using HTTP byte ranges
if the remote server supports byte-range access
(a usage sketch follows this paragraph).
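
A minimal usage sketch, following the open/check-format/close pattern
of the test cases listed further below; the URL is hypothetical, and
the #mode=bytes fragment is assumed to be the way byte-range access is
selected (docs/byterange.dox is the authoritative description):

    #include <stdio.h>
    #include <netcdf.h>

    int
    main(void)
    {
        int ncid, format, stat;
        /* Hypothetical URL; the #mode=bytes fragment requests byte-range access. */
        const char* url = "https://example.com/data/sample.nc#mode=bytes";

        if ((stat = nc_open(url, NC_NOWRITE, &ncid)) != NC_NOERR) {
            fprintf(stderr, "nc_open failed: %s\n", nc_strerror(stat));
            return 1;
        }
        if ((stat = nc_inq_format(ncid, &format)) == NC_NOERR)
            printf("format = %d\n", format);
        nc_close(ncid);
        return 0;
    }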
Originally, this PR was set up to access Amazon S3 objects,
but it can also access other remote datasets, such as those
provided by a THREDDS server via the HTTPServer access protocol.
It may also work for other kinds of servers.
Note that this is not intended as a true production
capability because, as is known, this kind of access
can be quite slow. In addition, the byte-range IO drivers
do not currently do any sort of optimization or caching.
An additional goal here is to gain some experience with
the Amazon S3 REST protocol.
This architecture and its use are documented in
the file docs/byterange.dox.
There are currently two test cases:
1. nc_test/tst_s3raw.c - this does a simple open, check format, close cycle
for a remote netcdf-3 file and a remote netcdf-4 file.
2. nc_test/test_s3raw.sh - this uses ncdump to investigate some remote
datasets.
This PR also incorporates significantly changed model inference code
(see the superseded PR https://github.com/Unidata/netcdf-c/pull/1259).
1. It centralizes the code that infers the dispatcher.
2. It adds support for byte-range URLs.
Other changes:
1. NC_HDF5_finalize was not being properly called by nc_finalize().
2. Fix minor bug in ncgen3.l
3. Fix a memory leak in nc4info.c.
4. Add code to walk the .daprc triples and to replace the protocol=
fragment tag with a more general mode= tag.
Final Note:
The inference code is still far too complicated. We need to move
to the validfile() model used by netcdf-java, where each
dispatcher is asked whether it can process the file. This decentralizes
the inference code. This will be done after all the major new
dispatchers (PIO, Zarr, etc.) have been implemented.
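
A rough sketch of what such a decentralized model could look like;
the NC_Dispatch_probe type, the canhandle field, and infer_dispatcher()
are hypothetical and not part of the current netcdf-c dispatch table:

    #include <stddef.h>

    /* Hypothetical per-dispatcher probe: each dispatcher reports whether
       it recognizes the file (e.g. by examining the magic number or URL),
       so no central inference code is needed. */
    typedef int (*NC_Dispatch_probe)(const char* path,
                                     const void* magic, size_t magiclen);

    typedef struct NC_DispatchEntry {
        const char* name;          /* e.g. "netcdf-4", "pnetcdf", "DAP4" */
        NC_Dispatch_probe canhandle;
    } NC_DispatchEntry;

    /* Walk the registered dispatchers and return the first one that
       claims the file, or NULL if none does. */
    static const NC_DispatchEntry*
    infer_dispatcher(const NC_DispatchEntry* table, size_t n,
                     const char* path, const void* magic, size_t magiclen)
    {
        size_t i;
        for (i = 0; i < n; i++) {
            if (table[i].canhandle != NULL
                && table[i].canhandle(path, magic, magiclen))
                return &table[i];
        }
        return NULL;
    }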
Primary fixes to get -ansi to work.
1. Convert all '//' C++ style comments to /*...*/ or to use #if 0...#endif
2. It turns out that when -ansi is specified, a number of
functions are no longer declared in the headers -- but they are still
in the .so file.
The big example is strdup(). So, code was added to include/ncconfig.h to define
externs for those missing functions that occur in more than one place
(see the sketch after this list).
These are enabled if !_WIN32 && __STDC__ == 1 (__STDC__ is supposed to
be the compile-time equivalent of -ansi). Note that this requires
config.h (which references ncconfig.h) to be included in files where it is
currently not included. Functions missing in only a single place are
declared locally in the file that uses them.
3. Added mmap test for the MAP_ANONYMOUS flag to configure.ac. Apparently
this is not always defined with -ansi.
4. Fix some large integer constants in nc_test4/tst_atts3.c and nc_test4/tst_filterparser.c
to avoid compiler complaints.
5. Fix a double constant in nc_test4/tst_filterparser.c to avoid compiler complaints.
[Note: I suspect #4 and #5 will be a problem on big-endian machines, but we have no way to test.]
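
For item 2, the guarded externs added to include/ncconfig.h look
roughly like the following; strdup() is the example named above, and
the exact set of declared functions is not shown here:

    /* Under -ansi (strict ISO C90), some POSIX functions such as strdup()
       are not declared by the system headers even though they are present
       in libc, so declare them here. */
    #if !defined(_WIN32) && defined(__STDC__) && __STDC__ == 1
    extern char* strdup(const char*);
    /* ...other missing declarations would be added here as needed... */
    #endif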
Misc. Changes:
1. convert more instances of _MSC_VER to _WIN32.
2. added some debugging code to include/nctestserver.h
3. added comment about libdispatch/drc.c always being compiled.
4. modify parser generation in ncgen to remove unneeded files.
re: issue https://github.com/Unidata/netcdf-c/issues/1233
Changes:
1. Remove an exit call that was there for testing.
2. The program tst_open_mem must be netcdf-4 only.
3. Fix some diff problems:
- Change dataset name for tst_inmemory4_create to tst_inmemory4
- Modify tst_inmemory.c to reorder the variables (somewhat major rewrite)
Minor Unrelated Fixes:
1. Fix a comment problem in nc_provenance.h.
2. Fix a memory leak in tst_open_mem.c.
3. Fix ncdump/bindata.c to compile properly when netcdf-4 is disabled.
4. Minor changes to ncgen.l.