2. Fixed plugin building (nc_test4/hdf5plugins)
so that it is done properly by both cmake and automake.
4. Duplicated part of the nc_test4 filter test code
in examples/C
An incomplete and untested set of hooks exists
for OS-X in nc_test4/findplugins.in. They need testing.
Eliminate this warning from clang:
```
TPL/netcdf/netcdf-c/libdispatch/ncuri.c:914:31: warning: for loop has empty body [-Wempty-body]
for(p=sp,q=sp+1;(*p++=*q++););
^
TPL/netcdf/netcdf-c/libdispatch/ncuri.c:914:31: note: put the semicolon on a separate line to silence this warning
1 warning generated.
```
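One way to address it, as the compiler note suggests, is to make the empty loop body explicit. A minimal sketch, assuming the loop's purpose is to shift the rest of the string left by one character (this is not the ncuri.c code itself, and the explicit != '\0' comparison is added here only to make the intent obvious):
```
/* Remove the character at *sp by copying the remainder of the string
 * left by one position; the copy happens entirely in the loop condition. */
static void squeeze_left(char* sp)
{
    char *p, *q;
    for (p = sp, q = sp + 1; (*p++ = *q++) != '\0'; ) {
        /* intentionally empty */
    }
}
```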
strlcat provides better protection against buffer overflows.
Code is taken from the FreeBSD project source code. Specifically:
https://github.com/freebsd/freebsd/blob/master/lib/libc/string/strlcat.c
License appears to be acceptable, but needs to be checked by e.g. Debian.
Step 1:
1. Add a check to netcdf-c/include/ncconfigure.h so that our version is used
when strlcat is not already available, as determined by HAVE_STRLCAT in config.h
(see the sketch after this list).
2. Add the strlcat code to libdispatch/dstring.c.
3. It turns out that strlcat was already defined in several places,
so remove those definitions from:
ncgen3/genlib.c
ncdump/dumplib.c
4. Add an extern declaration for strlcat to ncconfigure.h.
5. Modify the following directories to use strlcat:
libdap2 libdap4 ncdap_test dap4_test
Others will be done in subsequent steps.
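A minimal sketch of what the ncconfigure.h guard in steps 1 and 4 amounts to (the exact declaration wording is an assumption, not the verbatim netcdf-c code):
```
#include "config.h"
#include <stddef.h>

/* Use the platform strlcat when configure found one (HAVE_STRLCAT);
 * otherwise declare the fallback that now lives in libdispatch/dstring.c. */
#ifndef HAVE_STRLCAT
extern size_t strlcat(char* dst, const char* src, size_t dsize);
#endif
```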
Some parameters, like stringlength, actually affect a dimension
named maxStrlen. So, add aliasing so that maxstrlen can be
specified as a parameter and acts as an alias for stringlength.
The affected parameters (case insensitive):
stringlength has alias maxstrlen
stringlength_<varname> has alias maxstrlen_<varname>
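As an illustration, assuming the bracketed client-parameter prefix form accepted by the DAP code (the host and path here are made up), the two spellings below are interchangeable:
```
[stringlength=1024]http://example.com/dap/dataset
[maxstrlen=1024]http://example.com/dap/dataset
```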
Also:
1. added a test case in ncdap_test/testurl.sh
2. added a note to the documentation
re pull request https://github.com/Unidata/netcdf-c/pull/405
re pull request https://github.com/Unidata/netcdf-c/pull/446
Notes:
1. This branch is a cleanup of the magic.dmh branch.
2. magic.dmh was originally merged, but caused problems with parallel IO.
It was re-issued as pull request https://github.com/Unidata/netcdf-c/pull/446.
3. This branch + pull request replace any previous pull requests and the magic.dmh branch.
Given an otherwise valid netCDF file that has a corrupted header,
the netcdf library currently crashes. Instead, it should return
NC_ENOTNC.
Additionally, the NC_check_file_type code does not do the
forward search required by hdf5 files. It currently looks only
at file position 0, rather than also at 512, 1024, 2048, and so on.
Also, it turns out that the HDF4 magic number is assumed to always
be at the beginning of the file (unlike HDF5).
The change is localized to libdispatch/dfile.c. See
https://support.hdfgroup.org/release4/doc/DSpec_html/DS.pdf
Also, it turns out that the code in NC_check_file_type is duplicated
(mostly) in the function libsrc4/nc4file.c#nc_check_for_hdf.
This branch does the following.
1. Make NC_check_file_type return NC_ENOTNC instead of crashing.
2. Remove nc_check_for_hdf and centralize all file format checking
in NC_check_file_type.
3. Add proper forward search for HDF5 files (but not HDF4 files)
to look for the magic number at offsets of 0, 512, 1024... (see
the sketch after this list).
4. Add test tst_hdf5_offset.sh. This tests that hdf5 files with
an offset are properly recognized. It does so by prefixing
a legal file with some number of zero bytes: 512, 1024, etc.
5. Off-topic: Added -N flag to ncdump to force a specific output dataset name.
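For reference, here is a minimal sketch (not the actual dfile.c code) of the forward search in item 3: the 8-byte HDF5 signature may appear at offset 0 or at any of the doubling offsets 512, 1024, 2048, and so on.
```
#include <stdio.h>
#include <string.h>

/* The standard 8-byte HDF5 file signature. */
static const char HDF5_SIGNATURE[] = "\211HDF\r\n\032\n";

/* Return 1 if the signature is found at offset 0, 512, 1024, 2048, ...;
 * return 0 once a read fails (i.e. we have run off the end of the file). */
static int is_hdf5(FILE* f)
{
    char buf[8];
    long offset = 0;
    for (;;) {
        if (fseek(f, offset, SEEK_SET) != 0) return 0;
        if (fread(buf, 1, sizeof(buf), f) != sizeof(buf)) return 0;
        if (memcmp(buf, HDF5_SIGNATURE, sizeof(buf)) == 0) return 1;
        offset = (offset == 0) ? 512 : offset * 2;
    }
}
```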
2. Factored out the parameter string parsing for ncgen and nccopy
into libdispatch/dfilter.c + include/ncfilter.h (see the usage
sketch after this list).
3. Allow a parameter string to use constant types other than
unsigned int. See docs/filters.md for details.
4. Moved the old content of include/netcdf_filter.h into include/netcdf.h
and removed include/netcdf_filter.h as no longer needed.
5. Force the test filter (bzip2) in nc_test4/filter_test to
be built using BUILT_SOURCES.
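As a usage sketch: the file names below are hypothetical, and the filter id 307 (the HDF-registered bzip2 filter) with parameter 9 is used only as an illustration; docs/filters.md has the authoritative spec format.
```
nccopy -F "var1,307,9" unfiltered.nc compressed.nc
```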
Primary change is to cleanup code and remove duplicated code.
1. Unify the rc file reading into libdispatch/drc.c. This can eventually
be extended if we need an rc file for netcdf itself, as opposed to the dap code.
2. Unify the extraction from the rc file of DAP authorization info.
3. Misc. other small unifications: make temp file, read file.
4. Avoid use of libcurl when reading file:// because
there is some kind of problem with the Visual Studio version.
Might be related to the winpath problem.
In any case, do direct read instead.
5. Add new error code NC_ERCFILE for errors in reading RC file.
6. Complete documentation cleanup as indicated in this comment
https://github.com/Unidata/netcdf-c/pull/472#issuecomment-325926426
7. Convert some occurrences of #ifdef _WIN32 to #ifdef _MSC_VER.
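Item 7 amounts to guarding compiler-specific code on the compiler rather than on the platform, since MinGW also defines _WIN32 but _MSC_VER is set only by Visual Studio. A schematic example (the guarded body is a placeholder):
```
#ifdef _MSC_VER            /* previously: #ifdef _WIN32 */
    /* Visual-Studio-specific code goes here */
#endif
```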
generates garbage. This in turn interferes with using .netrc
because the garbage user+pwd will override the .netrc.
Note that this may sometimes work
if the garbage happens to start with a nul character.
2. It turns out that the user:pwd combination needs to support
character escaping. One reason is the user may contain an '@' character.
The other is that modern password rules make it not unlikely that
the password will contain characters that interfere with url parsing.
So, the rule I have implemented is that all occurrences of the user:pwd
format must escape any dodgy characters. The escape format is URL escaping
of the form %XX. This applies both to user:pwd
embedded in a URL as well as the use of HTTP.CREDENTIALS.USERPASSWORD
in a .dodsrc/.daprc file. The user and password in .netrc must not
be escaped. This is now documented in docs/auth.md
The fix for #2 actually obviated #1. Now, internally, the user and pwd
are stored separately and not in the user:pwd format. They are combined
(and escaped) only when needed.
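As an example of the escaping rule (the credentials are hypothetical): a user jane@example.com with password p@ss:word would be written percent-escaped ('@' -> %40, ':' -> %3A) in a URL or in the rc-file key, while the corresponding .netrc entry would carry the same strings unescaped.
```
HTTP.CREDENTIALS.USERPASSWORD=jane%40example.com:p%40ss%3Aword
```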
The dap code will create a real temporary file in which to store the
converted metadata for the DAP .dds or .dmr.
It was assumed that the nc_close code would reclaim the
temporary file. For DAP2, reclamation occurs in the ncio
code. For DAP4, it was assumed that the libsrc4 code would do
the reclamation, but for whatever reason, this is not happening.
Thus, in this situation, a temporary file is left in the file
system. Aside from being irritating to users, this screws up
'make distcheck'.
So the DAP4 code is fixed to ensure that the temporary file is
properly reclaimed independent of the libsrc4 code.
The previous change where the `nels==0` check was removed caused problems with nc_test.
If the check is moved after the `NC_put_vara` call, then it will avoid the parallel hang problem for stride=1, but will still pass all tests in `nc_test`. Note that this is somewhat of a kluge since there will still be a parallel hang if nels==0 and stride is > 1 and the code falls into the odometer section.
2. modified ncdap_tests to remove <cr>
from generated output before comparison
to the expected output. This is a hack because
my other attempts to force windows to use
binary mode have not worked.
from e-support OYW-455599.
The problem was that in nctime.c#CDMonthDay, the month -> #days
table was being set up correctly, but it was not used because the
code only checked for Cd365 and forgot to also check for Cd366.
were added to provide a path name converter from e.g. cygwin
paths to e.g. windows paths. This is necessary because
the shell scripts may produce cygwin paths, but the code
may have been compiled with Visual Studio. Similar issues
arise with Mingw.
At appropriate places, and if using Visual Studio or Mingw,
I added calls to the path conversion code.
Apparently I forgot to find all the places where this
conversion was needed. So this pr does the following:
1. Push the calls to the converter to the various libXXX
directories and out of libdispatch/dfile.c.
2. Add conversion calls to other parts of the code like oc2.
It also turns out that the conversion code in dapcvt.c
had a bug when handling the DAP Byte type under visual studio.
Notes:
1. there may still be places I missed that need to do path conversion.
2. need to make sure that calls to e.g. H5open also use converted path.
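For context, here is a hypothetical illustration of the kind of conversion involved (this is not the netcdf-c converter itself): rewrite a cygwin-style path into the Windows form before handing it to code compiled with Visual Studio.
```
#include <stdio.h>
#include <string.h>

/* Illustration only: map "/cygdrive/c/data/t.nc" to "c:\data\t.nc". */
static void cygwin_to_windows(const char* in, char* out, size_t outlen)
{
    const char* prefix = "/cygdrive/";
    size_t n = strlen(prefix);
    if (strncmp(in, prefix, n) == 0 && in[n] != '\0')
        snprintf(out, outlen, "%c:%s", in[n], in + n + 1);
    else
        snprintf(out, outlen, "%s", in);
    for (char* p = out; *p; p++)   /* use backslashes throughout */
        if (*p == '/') *p = '\\';
}
```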
This is a follow-on in that the old utf8 code was still being
used in ncgen to convert utf8->utf16 when converting cdl to Java
(see genj.c).
The new code apparently has no utf16 support, but it does have
utf32 support. Converting utf32 -> utf16 can be approximated by
truncating the 32 bits to 16 bits, unless the top 16 bits are
not zero. This latter condition is unlikely to be common because
it implies use of some rather obscure characters.
So the solution is to convert to utf32 and truncate to 16 bits to
get utf16. An error is reported if the high-order truncated 16
bits are not zero. If we get complaints, then I will figure out
how to convert full utf32 to a utf16 pair.
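A minimal sketch of that truncation (illustrative, not the ncgen code): any character whose upper 16 bits are nonzero, i.e. anything outside the Basic Multilingual Plane, is reported as an error instead of being encoded as a surrogate pair.
```
#include <stdint.h>
#include <stddef.h>

/* Narrow UTF-32 code points to UTF-16 code units by truncation; return -1
 * if any code point would have required a surrogate pair. */
static int utf32_to_utf16_truncate(const uint32_t* in, size_t n, uint16_t* out)
{
    for (size_t i = 0; i < n; i++) {
        if (in[i] > 0xFFFFu) return -1;   /* upper 16 bits not zero */
        out[i] = (uint16_t)in[i];
    }
    return 0;
}
```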
Other changes:
1. removed the old code from ncgen.
2. changed UTF8PROC_DLLEXPORT (in utf8proc) to EXTERNL
and added appropriate includes. This should fix
issue https://github.com/Unidata/netcdf-c/issues/404,
but since we cannot duplicate the failure, I am not quite
sure.
This relies on the HDF5 capability to
dynamically load compression filters.
Note that a compression filter is just
a subcase of filters.
The primary user-visible changes are as follows:
1. Add a standard header "netcdf_filter.h" that defines
the necessary API extensions
2. Modify ncgen to support two new special attributes,
"_Filter_ID" and "_Filter_Parameters", so that compression
can be turned on when creating a file using ncgen (a CDL sketch
follows this note).
4. Add a detailed description of filtering support
to the user's guide; see the file filters.md
5. Add a test case directory for this: nc_test4/filter_test.
It is fragile, so a ./configure flag (--enable-filter-test) is
defined; it is disabled by default so that this test does not
cause spurious 'make check' failures.
Note that the HDF5 documentation is not up-to-date, so
much of what is encoded here comes from examining the
actual code in the file H5PL.c in the HDF5 source code.
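The CDL sketch referred to in item 2 is below; the attribute names are taken from this note, the id 307 (the registered HDF5 bzip2 filter) and the parameter 9 are only illustrative, and docs/filters.md has the authoritative syntax.
```
netcdf example {
dimensions:
    d0 = 1024 ;
variables:
    float var(d0) ;
        var:_Filter_ID = 307 ;          // registered bzip2 filter id (illustrative)
        var:_Filter_Parameters = 9 ;    // filter-specific parameter(s)
}
```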
1. Cleanup test_common.sh to expunge (mostly) the use of the VS
path value. This has the effect that the Visual Studio C compiler
cannot be used for shell tests.
2. There is a missing case in CMakeLists.txt so add
defaulting for HDF5_C_LIBRARY_hdf5 using HDF5_C_LIBRARY.
Ward should probably examine this to get it fixed correctly.
3. Put back ref to esg.md in docs/Doxyfile.in
4. Fix minor warning in dut8proc.h