re: https://github.com/Unidata/netcdf-c/issues/1684
re: e-support VZL-904142
Two issues:
1. As of libcurl 7.66, the semantics of CURLOPT_SSL_VERIFYHOST
changed so that non-zero values affect certificate processing.
2. The current library was forcing the values of VERIFYPEER
and VERIFYHOST to zero instead of leaving them at their default values.
The solution was, first, to leave the defaults in place for VERIFYPEER and VERIFYHOST
as long as they are not set in the .ocrc/.dodsrc file.
Second, the value of HTTP.SSL.VERIFYPEER or HTTP.SSL.VERIFYHOST
as set in .ocrc/.dodsrc is used to set the corresponding CURLOPT flags.
So for example, adding
> HTTP.SSL.VERIFYHOST=2
will set the value of CURLOPT_SSL_VERIFYHOST to 2, the default.
Using
> HTTP.SSL.VERIFYHOST=0
will set the value of CURLOPT_SSL_VERIFYHOST to 0, which disables it.
Similarly for VERIFYPEER.
Finally, the semantics of HTTP.SSL.VALIDATE are now equivalent to
> HTTP.SSL.VERIFYPEER=1
> HTTP.SSL.VERIFYHOST=2
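For illustration, the mapping onto libcurl might be sketched as follows
(a sketch only, not the actual library code; it assumes the rc values
have already been parsed into longs, with -1 meaning "not set in the rc file"):

    #include <curl/curl.h>

    /* Hypothetical helper: apply parsed rc values to a curl handle.
     * A value of -1 means the key was absent from .ocrc/.dodsrc, in
     * which case curl's defaults (VERIFYPEER=1, VERIFYHOST=2) remain. */
    static void apply_ssl_verify(CURL* curl, long verifypeer, long verifyhost)
    {
        if (verifypeer >= 0)
            curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, verifypeer);
        if (verifyhost >= 0)
            curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, verifyhost);
    }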
re: https://github.com/Unidata/thredds/issues/1224
[note that this is an issue in thredds, but the fix is in netcdf-c]
A thredds server can encode a netcdf-4 file into DAP2
by flattening names to include the containing group path,
where the group names are separated by '/'.
But the '/' is prohibited in netcdf names even if escaped
(a decision before my time).
So, if the netcdf-c/libdap2 code encounters a DAP2 name containing '/'
characters, each '/' is converted to the string
"%2f". Unfortunately, there is a glitch: converting
the leading '/' produces a name that is still illegal. This PR
modifies the code to just drop the leading '/' character.
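A sketch of the repair (a hypothetical helper, not the actual libdap2 code):

    #include <stdlib.h>
    #include <string.h>

    /* Drop a leading '/', then replace each remaining '/' with "%2f".
     * Caller frees the result. */
    static char* repairname(const char* name)
    {
        char* result;
        char* p;
        if (*name == '/') name++;            /* drop the leading '/' */
        result = malloc(3*strlen(name) + 1); /* worst case: all '/' */
        if (result == NULL) return NULL;
        for (p = result; *name; name++) {
            if (*name == '/') { memcpy(p, "%2f", 3); p += 3; }
            else *p++ = *name;
        }
        *p = '\0';
        return result;
    }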
re: issue https://github.com/Unidata/netcdf-c/issues/1251
Assume that you have the URL to a remote dataset
which is a normal netcdf-3 or netcdf-4 file.
This PR allows netcdf-c to read that dataset's
contents as a netcdf file using HTTP byte ranges
if the remote server supports byte-range access.
Originally, this PR was set up to access Amazon S3 objects,
but it can also access other remote datasets such as those
provided by a Thredds server via the HTTPServer access protocol.
It may also work for other kinds of servers.
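For example, a byte-range open might look like this (the URL is a
placeholder; the #mode=bytes fragment, discussed below under the mode=
tag changes, is how this access method is requested):

    #include <stdio.h>
    #include <netcdf.h>

    int main(void)
    {
        int ncid, stat;
        /* Placeholder URL; #mode=bytes asks the library to read the
         * remote object as an ordinary netcdf file via HTTP byte ranges. */
        stat = nc_open("https://example.com/data/sample.nc#mode=bytes",
                       NC_NOWRITE, &ncid);
        if (stat != NC_NOERR) {
            fprintf(stderr, "nc_open: %s\n", nc_strerror(stat));
            return 1;
        }
        /* ... inspect metadata or read data as with a local file ... */
        return nc_close(ncid) == NC_NOERR ? 0 : 1;
    }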
Note that this is not intended as a true production
capability because, as is known, this kind of access
can be quite slow. In addition, the byte-range IO drivers
do not currently do any sort of optimization or caching.
An additional goal here is to gain some experience with
the Amazon S3 REST protocol.
This architecture and its use are documented in
the file docs/byterange.dox.
There are currently two test cases:
1. nc_test/tst_s3raw.c - this does a simple open, check format, close cycle
for a remote netcdf-3 file and a remote netcdf-4 file.
2. nc_test/test_s3raw.sh - this uses ncdump to investigate some remote
datasets.
This PR also incorporates significantly changed model inference code
(see the superseded PR https://github.com/Unidata/netcdf-c/pull/1259).
1. It centralizes the code that infers the dispatcher.
2. It adds support for byte-range URLs.
Other changes:
1. NC_HDF5_finalize was not being properly called by nc_finalize().
2. Fix a minor bug in ncgen3.l
3. Fix a memory leak in nc4info.c
4. Add code to walk the .daprc triples and to replace the protocol=
fragment tag with a more general mode= tag (example below).
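For example (illustrative URL), a fragment that was formerly written
> http://host/path#protocol=dap4
would now be written
> http://host/path#mode=dap4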
Final Note:
The inference code is still way too complicated. We need to move
to the validfile() model used by netcdf Java, where each
dispatcher is asked if it can process the file. This decentralizes
the inference code. This will be done after all the major new
dispatchers (PIO, Zarr, etc.) have been implemented.
Add the ability to set some additional curlopt values via .daprc (aka .dodsrc).
This affects both DAP2 and DAP4 protocols.
Related issues:
[1] re: esupport: KOZ-821332
[2] re: github issue https://github.com/Unidata/netcdf4-python/issues/836
[3] re: github issue https://github.com/Unidata/netcdf-c/issues/1074
1. CURLOPT_BUFFERSIZE: Relevant to [1]. Allow the user to set the read/write
buffer sizes used by curl.
This is done by adding the following to .daprc (aka .dodsrc):
HTTP.READ.BUFFERSIZE=n
where n is the buffer size in bytes. There is a built-in (to curl)
limit of 512k for this value.
2. CURLOPT_TCP_KEEPALIVE (and CURLOPT_TCP_KEEPIDLE and CURLOPT_TCP_KEEPINTVL):
Relevant (maybe) to [2] and [3]. Allow the user to turn on KEEPALIVE.
This is done by adding the following to .daprc (aka .dodsrc):
HTTP.KEEPALIVE=on|n/m
If the value is "on", then simply enable default KEEPALIVE. If the value
is n/m, then enable KEEPALIVE and set KEEPIDLE to n and KEEPINTVL to m.
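As a sketch (not the actual library code) of how these rc values
translate into libcurl options:

    #include <curl/curl.h>

    /* Hypothetical helper: buffersize <= 0 or keepalive == 0 means
     * "leave curl's default alone"; the n/m values from HTTP.KEEPALIVE
     * arrive here as keepidle/keepintvl. */
    static void apply_http_opts(CURL* curl, long buffersize,
                                int keepalive, long keepidle, long keepintvl)
    {
        if (buffersize > 0)
            curl_easy_setopt(curl, CURLOPT_BUFFERSIZE, buffersize);
        if (keepalive) {
            curl_easy_setopt(curl, CURLOPT_TCP_KEEPALIVE, 1L);
            if (keepidle > 0)
                curl_easy_setopt(curl, CURLOPT_TCP_KEEPIDLE, keepidle);
            if (keepintvl > 0)
                curl_easy_setopt(curl, CURLOPT_TCP_KEEPINTVL, keepintvl);
        }
    }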
Extend the information stored in the _NCProperties attribute to allow two things:
1. capture of additional library dependencies (over and above
hdf5)
2. Recognition of non-netcdf libraries that create netcdf-4 format
files.
To this end, the _NCProperties format has been extended to be
an arbitrary set of key=value pairs.
This new format has version = 2 and uses commas as the pair separator.
Thus the general form is:
_NCProperties = "version=2,key1=value1,key2=value2..." ;
This new version is accompanied by a new ./configure option of the form
--with-ncproperties="key1=value1,key2=value2..."
that specifies pairs to add to the _NCProperties attribute for all
files created with that netcdf library.
At this point, what is missing is some programmatic way to
specify either all the pairs or additional pairs
for the _NCProperties attribute. I am not sure of the best way
to do this.
Builders using non-netcdf libraries can specify
whatever they want in the key=value pairs (as long
as version=2 is specified first).
By convention, the primary library is expected to be
the first pair after the leading version=2 pair, but this
is convention only and is neither required nor enforced.
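A sketch of parsing a version 2 value (a hypothetical helper, not the
library's parser):

    #include <stdio.h>
    #include <string.h>

    /* Split a version=2 _NCProperties string on commas, then split
     * each pair on the first '='. The input is modified in place. */
    static void parse_ncprops(char* props)
    {
        char* save = NULL;
        char* pair;
        for (pair = strtok_r(props, ",", &save); pair != NULL;
             pair = strtok_r(NULL, ",", &save)) {
            char* eq = strchr(pair, '=');
            if (eq == NULL) continue;        /* skip malformed pairs */
            *eq = '\0';
            printf("key=%s value=%s\n", pair, eq + 1);
        }
    }

Applied to a modifiable copy of, e.g., "version=2,netcdf=4.7.0,hdf5=1.10.4"
(illustrative values), this prints one key/value line per pair.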
Related changes:
1. Fixed the tests that check _NCProperties to properly operate with version=2.
2. When reading a version 1 _NCProperties attribute, convert it to look
like a version 2 attribute.
3. Added some version 2 tests to ncdump/tst_fileinfo.c and
ncdump/tst_fileinfo.sh
Misc Changes:
1. Fix minor problem in ncdap_test/testurl.sh where a parameter to
buildurl needed to be quoted.
2. Minor fix to ncgen to swap switches -H and -h to be consistent
with other utilities.
3. Document the -M flag in nccopy usage() and the nccopy man page.
4. Modify a test case to use the nccopy -M flag.
In C, `char`, `signed char`, and `unsigned char` are three separate,
distinct types, so just because `char` happens to be signed does not
mean it is interchangeable with `signed char`.
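A small C11 illustration (not from the codebase): _Generic selects on
the static type, and it is only because the three types are distinct
that all three may appear as cases in one generic selection.

    #include <stdio.h>

    #define TYPENAME(x) _Generic((x),          \
        char:          "char",                 \
        signed char:   "signed char",          \
        unsigned char: "unsigned char",        \
        default:       "other")

    int main(void)
    {
        char c = 0; signed char sc = 0; unsigned char uc = 0;
        /* Prints: char / signed char / unsigned char, even on
         * platforms where plain char happens to be signed. */
        printf("%s / %s / %s\n", TYPENAME(c), TYPENAME(sc), TYPENAME(uc));
        return 0;
    }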
This is a first step toward a revised (I hope) metadata
mechanism. It mostly just adds new pieces of
code (e.g. nclistmap) and does some minor fixes.
It should be transparent to everything else.
The next set of changes will be the big step.
strlcat provides better protection against buffer overflows.
Code is taken from the FreeBSD project source code. Specifically:
https://github.com/freebsd/freebsd/blob/master/lib/libc/string/strlcat.c
License appears to be acceptable, but needs to be checked by e.g. Debian.
Step 1:
1. Add code to netcdf-c/include/ncconfigure.h to use our version
when strlcat is not already available, as determined by HAVE_STRLCAT in config.h.
2. Add the strlcat code to libdispatch/dstring.c.
3. It turns out that strlcat was already defined in several places,
so remove it from:
ncgen3/genlib.c
ncdump/dumplib.c
4. Add an extern declaration for strlcat to ncconfigure.h.
5. Modify the following directories to use strlcat:
libdap2 libdap4 ncdap_test dap4_test
Will do others in subsequent steps.
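A usage sketch (it assumes an strlcat implementation is available,
either from the system or from the copy added to libdispatch/dstring.c):

    #include <stdio.h>
    #include <string.h>

    /* BSD signature; on systems without strlcat this declaration is
     * satisfied by the copy in libdispatch/dstring.c. */
    size_t strlcat(char* dst, const char* src, size_t dsize);

    int main(void)
    {
        char path[16] = "/usr";
        /* Unlike strncat, the third argument is the TOTAL size of dst,
         * the result is always NUL-terminated, and a return value >=
         * sizeof(path) signals truncation. */
        if (strlcat(path, "/local/lib/netcdf", sizeof(path)) >= sizeof(path))
            fprintf(stderr, "truncated to: %s\n", path);
        return 0;
    }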
The primary change is to clean up code and remove duplicated code.
1. Unify the rc file reading into libdispatch/drc.c. Eventually extend it
if we need an rc file for netcdf itself as opposed to the DAP code.
2. Unify the extraction from the rc file of DAP authorization info.
3. Misc. other small unifications: make temp file, read file.
4. Avoid use of libcurl when reading file:// URLs because
there is some kind of problem with the Visual Studio version.
It might be related to the winpath problem.
In any case, do a direct read instead.
5. Add new error code NC_ERCFILE for errors in reading RC file.
6. Complete documentation cleanup as indicated in this comment
https://github.com/Unidata/netcdf-c/pull/472#issuecomment-325926426
7. Convert some occurrences of #ifdef _WIN32 to #ifdef _MSC_VER
Utility functions were added to provide a path name converter from e.g. cygwin
paths to e.g. windows paths. This is necessary because
the shell scripts may produce cygwin paths, but the code
may have been compiled with Visual Studio. Similar issues
arise with Mingw.
At appropriate places, and if using Visual Studio or Mingw,
I added calls to the path conversion code.
Apparently I forgot to find all the places where this
conversion was needed, so this PR does the following:
1. Push the calls to the converter to the various libXXX
directories and out of libdispatch/dfile.c.
2. Add conversion calls to other parts of the code like oc2.
It also turns out that the conversion code in dapcvt.c
had a bug when handling the DAP Byte type under Visual Studio.
Notes:
1. there may still be places I missed that need to do path conversion.
2. need to make sure that calls to e.g. H5open also use converted path.
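A sketch of the kind of conversion involved (a hypothetical helper; the
library's converter handles more cases, e.g. Mingw paths):

    #include <stdio.h>
    #include <string.h>

    /* Rewrite a cygwin path such as /cygdrive/d/git/netcdf-c into the
     * windows form d:\git\netcdf-c; other paths just get '/' mapped
     * to backslash. */
    static void cygpath_to_win(const char* cyg, char* win, size_t winlen)
    {
        const char* prefix = "/cygdrive/";
        size_t plen = strlen(prefix);
        char* p;
        if (strncmp(cyg, prefix, plen) == 0 && cyg[plen] != '\0')
            snprintf(win, winlen, "%c:%s", cyg[plen], cyg + plen + 1);
        else
            snprintf(win, winlen, "%s", cyg);
        for (p = win; *p; p++)
            if (*p == '/') *p = '\\';
    }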
Some temporary files are being left in a tempdir (e.g. /tmp
under *nix).
The situation is described tersely in
netcdf-c/docs/auth.html#REDIR. Basically, when a URL is used that
requires redirection, a physical cookiejar file is required
to exist in the file system in order for this to work.
Since it was difficult to figure out when redirection was
being used (it was internal to libcurl), I needed to be prepared for that
eventuality. The result was that I always created a cookiejar file if one
was not specified in the rc file. This actually occurs in two places:
one inside oc2 and one inside libdap4.
The solution was two-fold:
1. do not use a cookiejar directory -- create the cookiejar file directly
2. ensure that all cookiejar-related files are reclaimed by nc_close().
Note that if nc_close (or nc_abort) is not called for whatever reason,
then reclamation will not occur.
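For example (the path is a placeholder), an rc file entry such as
> HTTP.COOKIEJAR=/home/user/.dap_cookies
names the cookiejar file explicitly, so the library does not need to
create a temporary one at all.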