Commit Graph

92 Commits

Author SHA1 Message Date
Dennis Heimbigner
730aa1f6bc Improve the building of NCZARR S3 support in CMake and Autoconf
There were some irregularities in the flags for handling NCZarr S3 support.

The primary change is to regularize the flags controlling this to the following.

1. Automake: --enable-nczarr-s3 and CMake: ENABLE_NCZARR_S3
2. Automake: --enable-nczarr-s3-tests and CMake: ENABLE_NCZARR_S3_TESTS

Flag 1 indicates that NCZarr should be built with S3 support enabled.
Flag 2 indicates that the NCZarr S3 tests should be run.

These two flags are separate because running the NCZarr S3 tests
requires access to protected S3 resources. Currently, running
these tests is restricted to Unidata personnel. However, users
may want to enable S3 support even if they cannot run the tests.
It is, of course, an error to specify 2 without specifying 1.

Additionally, if the AWS S3 SDK library is not found, then the NCZARR S3
support and testing must be disabled. Otherwise an error is signaled
during the build.

Some of these NCZarr and S3 changes are propagated to nc-config.

Misc. Other Changes:

1. Allow testing for CYGWIN or MSVC in shell scripts.
2. Add a specific test for HDF5 library version 1.10.6.
   This is encoded as "HDF5_UTF8_PATHS" because that is the first
   version where HDF5 properly supports UTF-8 paths under Windows.
   This is used in hdf5internal/nc4_ndf5_ansi_to_utf8.
3. Add an AM Conditional -- AX_IGNORE -- for use in testing
   when it is desirable to temporarily suppress Makefile code.
4. Add MULTIFILTER flag to CMakeLists.txt
2020-10-16 15:04:51 -06:00
Dennis Heimbigner
aeb3ac2809 Mostly revert the filter code to reduce its complexity of use.
re: https://github.com/Unidata/netcdf-c/issues/1836

Revert the internal filter code to simplify it. From the user's
point of view, the only visible changes should be:

1. The functions that convert text to filter specs have had their signature reverted and have been moved to netcdf_aux.h
2. Some filter API functions now return NC_ENOFILTER when inquiry is made about some filter.

Internally, the dispatch table has been modified to get rid of the filter_actions
entry and associated complex structures. It has been replaced with
inq_var_filter_ids and inq_var_filter_info entries and the dispatch table
version has been bumped to 3. Corresponding NOOP and NOTNC4 functions
were added to libdispatch/dnotnc4.c. Also, the filter_action entries
in dispatch tables were replaced for all dispatch code bases (HDF5, DAP2,
etc). This should only impact UDF users.
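
For illustration, a minimal sketch of the reverted public inquiry calls
(declared in netcdf_filter.h); the helper function name and error handling
here are hypothetical:

```c
#include <stdio.h>
#include <stdlib.h>
#include <netcdf.h>
#include <netcdf_filter.h>

/* List the filters attached to a variable (hypothetical helper). */
static void show_filters(int ncid, int varid)
{
    size_t nfilters = 0, nparams = 0;
    unsigned int *ids = NULL;
    int stat;

    /* First call obtains the count, second call fills the id list. */
    if ((stat = nc_inq_var_filter_ids(ncid, varid, &nfilters, NULL)))
        { fprintf(stderr, "%s\n", nc_strerror(stat)); return; }
    if (nfilters == 0) return;
    ids = malloc(nfilters * sizeof *ids);
    nc_inq_var_filter_ids(ncid, varid, &nfilters, ids);

    for (size_t i = 0; i < nfilters; i++) {
        /* Asking about a filter id that is not attached returns NC_ENOFILTER. */
        stat = nc_inq_var_filter_info(ncid, varid, ids[i], &nparams, NULL);
        printf("filter %u: %zu params (%s)\n", ids[i], nparams, nc_strerror(stat));
    }
    free(ids);
}
```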

In the process, it became clear that the form of the filters
field in NC_VAR_INFO_T was format dependent, so I converted it to
be of type void* and pushed its management into the various dispatch
code bases. Specifically libhdf5 and libnczarr now manage the filters
field in their own way.

The auxiliary functions for parsing textual filter specifications
were moved to netcdf_aux.h and were renamed to the following:
* ncaux_h5filterspec_parse
* ncaux_h5filterspec_parselist
* ncaux_h5filterspec_free
* ncaux_h5filter_fix8

Misc. Other Changes:

1. Document NUG/filters.md updated to reflect the changes above.
2. All the old data types (structs and enums)
   used by the filter_actions mechanism were deleted.
   The exception is the NC_H5_Filterspec because it is needed
   by ncaux_h5filterspec_parselist.
3. Clientside filters were removed -- another enhancement
   for which no-one ever asked.
4. The ability to remove filters was itself removed.
5. Some functionality needed by nczarr was moved from libhdf5
   to libsrc4 e.g. nc4_find_default_chunksizes
6. All the filterx code was removed
7. ncfilter.h and nc4filter.c are no longer used

Misc. Unrelated Changes:

1. The nczarr_test makefile clean was leaving some directories behind, so
   add a clean-local rule to take care of them.
2020-09-27 12:43:46 -06:00
Dennis Heimbigner
f3218a2e2c Use the built-in HDF5 byte-range reader, if available.
re: Issue https://github.com/Unidata/netcdf-c/issues/1848

The existing Virtual File Driver built to support byte-range
read-only file access is quite old. It turns out to be extremely
slow (reason unknown at the moment).

Starting with HDF5 1.10.6, the HDF5 library has its own version
of such a file driver. The HDF5 developers have better knowledge
about building such a driver and what incantations are needed to
get good performance.

This PR modifies the byte-range code in hdf5open.c so
that if the HDF5 file driver is available, then it is used
in preference to the one written by the Netcdf group.

Misc. Other Changes:

1. Moved all of the nc4print code to ncdump to keep AppVeyor quiet.
2020-09-24 14:33:58 -06:00
Dennis Heimbigner
62a4cc1ae0 Fix nccopy -c dim/x to actually use the dim/x value.
As it was, nccopy -c dim/x was sometimes being ignored. So
modify nccopy to properly take it into account. This also required
a change to the nczarr code because it was not applying default
chunking in the same way as libhdf5.

Modify ncdump/tst_nccopy4.sh to test this feature properly.
Also add a similar test to nczarr_test.

Additionally, fix some other things that were causing Visual
Studio builds with testing to not work.
* fix curl testing under CMake to properly handle case
  where DAP is disabled, but byterange support is enabled.
* properly test and/or define uintptr_t
* Convert _O_XXX to O_XXX flags used by open();
2020-09-01 13:44:24 -06:00
Ward Fisher
31dee0c4da
Revert "Revert "Fix nczarr-experimental: improve build support, disengage hdf5 vs netcdf4 flags, and find AWS libraries"" 2020-08-17 19:15:47 -06:00
Ward Fisher
16c27ca13f
Revert "Fix nczarr-experimental: improve build support, disengage hdf5 vs netcdf4 flags, and find AWS libraries" 2020-08-17 15:51:01 -06:00
Dennis Heimbigner
b3ec7140e1 Move closer to getting S3 support to work with CMake under Visual
Studio. The code will build and all the tests will run except for the
S3 tests in nczarr_test.
2020-07-14 19:24:20 -06:00
Dennis Heimbigner
59e04ae071 This PR adds EXPERIMENTAL support for accessing data in the
cloud using a variant of the Zarr protocol and storage
format. This enhancement is generically referred to as "NCZarr".

The data model supported by NCZarr is netcdf-4 minus the user-defined
types and the String type. In this sense it is similar to the CDF-5
data model.

More detailed information about enabling and using NCZarr is
described in the document NUG/nczarr.md and in a
[Unidata Developer's blog entry](https://www.unidata.ucar.edu/blogs/developer/en/entry/overview-of-zarr-support-in).
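
As a rough illustration (the details are in NUG/nczarr.md), an NCZarr store
is opened through the ordinary netCDF API by attaching a mode fragment to the
path; the store location below is a hypothetical example:

```c
#include <stdio.h>
#include <netcdf.h>

int main(void)
{
    int ncid, stat;
    /* A local directory-tree store; an S3 store would instead use an
       https:// URL with a "#mode=nczarr,s3" fragment. */
    const char *url = "file:///tmp/example.zarr#mode=nczarr,file";

    if ((stat = nc_open(url, NC_NOWRITE, &ncid))) {
        fprintf(stderr, "%s\n", nc_strerror(stat));
        return 1;
    }
    /* From here the dataset behaves like a netCDF-4 file, minus
       user-defined types and the String type. */
    return nc_close(ncid);
}
```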

WARNING: this code has had limited testing, so do not use this version
for production work. Also, performance improvements are ongoing.
Note especially the following platform matrix of successful tests:

Platform      | Build System | S3 support
--------------|--------------|-----------
Linux+gcc     | Automake     | yes
Linux+gcc     | CMake        | yes
Visual Studio | CMake        | no

Additionally, and as a consequence of the addition of NCZarr,
major changes have been made to the Filter API. NOTE: NCZarr
does not yet support filters, but these changes are enablers for
that support in the future.  Note that it is possible
(probable?) that there will be some accidental reversions if the
changes here did not correctly mimic the existing filter testing.

In any case, previously filter ids and parameters were of type
unsigned int. In order to support the more general zarr filter
model, this was all converted to char*.  The old HDF5-specific,
unsigned int operations are still supported but they are
wrappers around the new, char* based nc_filterx_XXX functions.
This entailed at least the following changes:
1. Added the files libdispatch/dfilterx.c and include/ncfilter.h
2. Some filterx utilities have been moved to libdispatch/daux.c
3. A new entry, "filter_actions" was added to the NCDispatch table
   and the version bumped.
4. An overly complex set of structs was created to support funnelling
   all of the filterx operations thru a single dispatch
   "filter_actions" entry.
5. Move common code from libhdf5 to libsrc4 so that it is accessible
   to nczarr.

Changes directly related to Zarr:
1. Modified CMakeLists.txt and configure.ac to support both C and C++
   -- this is in support of S3 access via the aws-sdk libraries.
2. Define a size64_t type to support nczarr.
3. More reworking of libdispatch/dinfermodel.c to
   support zarr and to regularize the structure of the fragments
   section of a URL.

Changes not directly related to Zarr:
1. Make client-side filter registration be conditional, with default off.
2. Hack include/nc4internal.h to make some flags added by Ed be unique:
   e.g. NC_CREAT, NC_INDEF, etc.
3. cleanup include/nchttp.h and libdispatch/dhttp.c.
4. Misc. changes to support compiling under Visual Studio including:
   * Better testing under windows for dirent.h and opendir and closedir.
5. Misc. changes to the oc2 code to support various libcurl CURLOPT flags
   and to centralize error reporting.
6. By default, suppress the vlen tests that have unfixed memory leaks; add option to enable them.
7. Make part of the nc_test/test_byterange.sh test be contingent on remotetest.unidata.ucar.edu being accessible.

Changes Left TO-DO:
1. fix provenance code, it is too HDF5 specific.
2020-06-28 18:02:47 -06:00
Sean Arms
7e2408680c Define strncasecmp as _strnicmp on Windows 2020-05-14 07:27:30 -06:00
Dennis Heimbigner
313121a229 Use proper CURLOPT values for VERIFYHOST and VERIFYPEER
re: https://github.com/Unidata/netcdf-c/issues/1684
re: e-support VZL-904142

Two issues:
1. As of libcurl 7.66, the semantics of CURLOPT_SSL_VERIFYHOST
   changed so that non-zero values affect certificate processing.
2. The current library was forcing the values of VERIFYPEER
   and VERIFYHOST to zero instead of leaving them to the default values.

The solution was first to leave the defaults in place for VERIFYPEER and VERIFYHOST
as long as they are not set in the .ocrc/.dodsrc file.
Second, the value of HTTP.SSL.VERIFYPEER or HTTP.SSL.VERIFYHOST
as set in .ocrc/.dodsrc is used to set the corresponding CURLOPT flags.
So for example, adding
> HTTP.SSL.VERIFYHOST=2
will set the value of CURLOPT_SSL_VERIFYHOST to 2, the default.
Using
> HTTP.SSL.VERIFYHOST=0
will set the value of CURLOPT_SSL_VERIFYHOST to 0, which disables it.
Similarly for VERIFYPEER.
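
For reference, a sketch of the libcurl calls that these rc-file settings
correspond to (the CURLOPT symbols are real; the handle setup is illustrative
only):

```c
#include <curl/curl.h>

static void apply_ssl_settings(CURL *curl)
{
    /* HTTP.SSL.VERIFYHOST=2 -> the libcurl default: check that the certificate
       name matches the requested host; HTTP.SSL.VERIFYHOST=0 would pass 0L. */
    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, 2L);

    /* HTTP.SSL.VERIFYPEER=1 -> verify the peer certificate (the default). */
    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 1L);
}
```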

Finally the semantics of HTTP.SSL.VALIDATE is now equivalent to
> HTTP.SSL.VERIFYPEER=1
> HTTP.SSL.VERIFYHOST=2
2020-04-10 13:42:27 -06:00
Ward Fisher
65a17399b9 Corrected parallel (mpi) testing on cmake builds. 2020-04-02 10:09:57 -06:00
Dennis Heimbigner
62e2b472b4 Minor config.h changes to support filters in Fortran 2019-04-29 16:36:08 -06:00
Dennis Heimbigner
4026323383 Fix minor --ansi warnings in dinfermodel.c and bzlib.c
re:

Needed to provide centralized definitions of fileno and fdopen;
also need to #include sys/types.h
2019-03-22 15:16:47 -06:00
Ward Fisher
e2b31ffae4
Merge branch 'master' into byterange.dmh 2019-03-19 12:05:44 -06:00
Dennis Heimbigner
0c59e13bf7 Master merge, conflict resolution, cleanup 2019-02-24 16:54:13 -07:00
Dennis Heimbigner
45a8a265b8 master merge 2019-02-23 17:14:12 -07:00
Ward Fisher
3892b439ce Syntax tweak. 2019-02-21 15:37:36 -07:00
Ward Fisher
d6c845370b Added fenceposting around H5free_memory, H5allocate_memory, H5resize_memory, as these were all introduced in hdf5 1.8.15. 2019-02-21 14:02:46 -07:00
Dennis Heimbigner
c59d5ce205 Fix handling of '/' characters in names in DAP2.
re: https://github.com/Unidata/thredds/issues/1224
[note that this is an issue in thredds, but the fix is in netcdf-c]

A thredds server can encode a netcdf-4 file into DAP2
by flattening names to include the containing group path,
where the group names are separated by '/'.

But the '/' is prohibited in netcdf names even if escaped
(a decision before my time).

So, if the netcdf-c/libdap2 code encounters a DAP2 name with '/'
characters, the '/' characters are converted to the string
%2f. Unfortunately, there is a glitch, namely that converting
the leading '/' produces a name that is still illegal. This PR
modifies the code to just drop the leading '/' character.
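
A tiny illustrative sketch of the conversion described above (buffer handling
simplified; this is not the actual library routine):

```c
#include <stdio.h>
#include <string.h>

/* Drop a leading '/' and replace remaining '/' characters with "%2f". */
static void dap2_escape_name(const char *in, char *out)
{
    if (*in == '/') in++;                      /* drop the leading slash */
    for (; *in; in++) {
        if (*in == '/') { strcpy(out, "%2f"); out += 3; }
        else *out++ = *in;
    }
    *out = '\0';
}

int main(void)
{
    char buf[64];
    dap2_escape_name("/group1/var1", buf);
    printf("%s\n", buf);                       /* prints: group1%2fvar1 */
    return 0;
}
```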
2019-02-14 20:25:40 -07:00
Dennis Heimbigner
bf2746b8ea Provide byte-range reading of remote datasets
re: issue https://github.com/Unidata/netcdf-c/issues/1251

Assume that you have the URL to a remote dataset
which is a normal netcdf-3 or netcdf-4 file.

This PR allows the netcdf-c to read that dataset's
contents as a netcdf file using HTTP byte ranges
if the remote server supports byte-range access.

Originally, this PR was set up to access Amazon S3 objects,
but it can also access other remote datasets such as those
provided by a Thredds server via the HTTPServer access protocol.
It may also work for other kinds of servers.

Note that this is not intended as a true production
capability because, as is known, this kind of access
can be quite slow. In addition, the byte-range IO drivers
do not currently do any sort of optimization or caching.

An additional goal here is to gain some experience with
the Amazon S3 REST protocol.

This architecture and its use are documented in
the file docs/byterange.dox.
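
A minimal sketch of what this enables from the user's side; the URL is
hypothetical, and the "#mode=bytes" fragment selects byte-range access:

```c
#include <stdio.h>
#include <netcdf.h>

int main(void)
{
    int ncid, format, stat;
    const char *url = "https://s3.example.com/bucket/data.nc#mode=bytes";

    if ((stat = nc_open(url, NC_NOWRITE, &ncid))) {
        fprintf(stderr, "open failed: %s\n", nc_strerror(stat));
        return 1;
    }
    nc_inq_format(ncid, &format);   /* check the format, mirroring tst_s3raw.c */
    printf("format = %d\n", format);
    return nc_close(ncid);
}
```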

There are currently two test cases:

1. nc_test/tst_s3raw.c - this does a simple open, check format, close cycle
   for a remote netcdf-3 file and a remote netcdf-4 file.
2. nc_test/test_s3raw.sh - this uses ncdump to investigate some remote
   datasets.

This PR also incorporates significantly changed model inference code
(see the superseded PR https://github.com/Unidata/netcdf-c/pull/1259).

1. It centralizes the code that infers the dispatcher.
2. It adds support for byte-range URLs

Other changes:

1. NC_HDF5_finalize was not being properly called by nc_finalize().
2. Fix minor bug in ncgen3.l
3. fix memory leak in nc4info.c
4. add code to walk the .daprc triples and to replace protocol=
   fragment tag with a more general mode= tag.

Final Note:
The inference code is still way too complicated. We need to move
to the validfile() model used by netcdf Java, where each
dispatcher is asked if it can process the file. This decentralizes
the inference code. This will be done after all the major new
dispatchers (PIO, Zarr, etc) have been implemented.
2019-01-01 18:27:36 -07:00
Ward Fisher
05818ac990 Misc. files updated with copyright stanza. 2018-12-06 15:51:35 -07:00
Wei-keng Liao
0ed70756cc Ignore flags NC_MPIIO and NC_MPIPOSIX. 2018-09-22 20:22:34 -05:00
Ward Fisher
784d777bff Merge branch 'master' into provenance.dmh 2018-09-06 15:13:09 -06:00
Dennis Heimbigner
79e38de840 Add the ability to set some additional curlopt values
Add the ability to set some additional curlopt values via .daprc (aka .dodsrc).
This affects both DAP2 and DAP4 protocols.

Related issues:
[1] re: esupport: KOZ-821332
[2] re: github issue https://github.com/Unidata/netcdf4-python/issues/836
[3] re: github issue https://github.com/Unidata/netcdf-c/issues/1074

1. CURLOPT_BUFFERSIZE: Relevant to [1]. Allow user to set the read/write
buffersizes used by curl.
This is done by adding the following to .daprc (aka .dodsrc):
	HTTP.READ.BUFFERSIZE=n
where n is the buffersize in bytes. There is a built-in (to curl)
limit of 512k for this value.

2. CURLOPT_TCP_KEEPALIVE (and CURLOPT_TCP_KEEPIDLE and CURLOPT_TCP_KEEPINTVL):
Relevant (maybe) to [2] and [3]. Allow the user to turn on KEEPALIVE.
This is done by adding the following to .daprc (aka .dodsrc):
	HTTP.KEEPALIVE=on|n/m
If the value is "on", then simply enable default KEEPALIVE. If the value
is n/m, then enable KEEPALIVE and set KEEPIDLE to n and KEEPINTVL to m.
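
As a sketch, the .daprc entries above map onto the following libcurl options
(the CURLOPT symbols are real; the concrete values are examples only):

```c
#include <curl/curl.h>

static void apply_http_settings(CURL *curl)
{
    /* HTTP.READ.BUFFERSIZE=65536 */
    curl_easy_setopt(curl, CURLOPT_BUFFERSIZE, 65536L);

    /* HTTP.KEEPALIVE=60/30 -> enable keepalive, idle 60s, interval 30s. */
    curl_easy_setopt(curl, CURLOPT_TCP_KEEPALIVE, 1L);
    curl_easy_setopt(curl, CURLOPT_TCP_KEEPIDLE, 60L);
    curl_easy_setopt(curl, CURLOPT_TCP_KEEPINTVL, 30L);
}
```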
2018-08-26 17:04:46 -06:00
Dennis Heimbigner
2ea1cf5f1b There was a request to extend the provenance information
stored in the _NCProperties attribute to allow two things:
1. capture of additional library dependencies (over and above
   hdf5)
2. Recognition of non-netcdf libraries that create netcdf-4 format
   files.

To this end, the _NCProperties format has been extended to be
an arbitrary set of key=value pairs separated by commas.
This new format has version = 2, and uses commas as the pair separator.
Thus the general form is:
    _NCProperties = "version=2,key1=value,key2=value2..." ;
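
A tiny sketch of splitting such a version-2 string into pairs (the library
names and versions shown are made-up examples):

```c
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Example of the version-2 format described above. */
    char props[] = "version=2,netcdf=4.6.2,hdf5=1.10.2";

    for (char *pair = strtok(props, ","); pair; pair = strtok(NULL, ","))
    {
        char *eq = strchr(pair, '=');
        if (eq) { *eq = '\0'; printf("key=%s value=%s\n", pair, eq + 1); }
    }
    return 0;
}
```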

This new version is accompanied by a new ./configure option of the form
    --with-ncproperties="key1=value1,key2=value2..."
that specifies pairs to add to the _NCProperties attribute for all
files created with that netcdf library.

At this point, what is missing is some programmatic way to
specify either all the pairs or additional pairs
to the _NCProperties attribute. Not sure of the best way
to do this.

Builders using non-netcdf libraries can specify
whatever they want in the key value pairs (as long
as the version=2 is specified first).

By convention, the primary library is expected to be
the first pair after the leading version=2 pair, but this
is convention only and is neither required nor enforced.

Related changes:
1. Fixed the tests that check _NCProperties to properly operate with version=2.
2. When reading a version 1 _NCProperties attribute, convert it to look
   like a version 2 attribute.
3. Added some version 2 tests to ncdump/tst_fileinfo.c and
   ncdump/tst_fileinfo.sh

Misc Changes:
1. Fix minor problem in ncdap_test/testurl.sh where a parameter to
   buildurl needed to be quoted.
2. Minor fix to ncgen to swap switches -H and -h to be consistent
   with other utilities.
3. Document the -M flag in nccopy usage() and the nccopy man page.
4. Modify a test case to use the nccopy -M flag.
2018-08-25 21:44:41 -06:00
Ed Hartnett
ef7b525ce4 merged master 2018-07-23 09:45:39 -06:00
Ed Hartnett
03de993ce5 attempting to fix ENABLE_SET_LOG_LEVEL problem with cmake build 2018-07-16 06:31:44 -06:00
Wei-keng Liao
f95d3e3325 replace USE_CDF5 with ENABLE_CDF5 2018-06-29 21:17:07 -05:00
Dennis Heimbigner
8cb1fc4cfe This is the second step in refactoring the libsrc4 code.
The first was branch newhash0.dmh.

As with newhash0.dmh, these changes should be transparent.
2018-02-24 20:36:24 -07:00
Ward Fisher
f9f27c5ab7
Merge branch 'master' into newhash0.dmh 2018-02-21 14:19:43 -07:00
Ben Boeckel
b432a527c4 c: remove __CHAR_UNSIGNED__
In C, `char`, `signed char`, and `unsigned char` are three separate,
distinct types, so just because `char` happens to be signed does not
mean it is interchangeable with `signed char`.
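
A small illustration of the point (function and variable names are
hypothetical):

```c
#include <stdio.h>

static void take_schar(signed char *p) { (void)p; }

int main(void)
{
    char c = 'x';
    (void)c;
    /* take_schar(&c);  -- invalid: char* does not convert to signed char*,
       even on platforms where plain char happens to be signed. */
    printf("plain char is %s here\n", (char)-1 < 0 ? "signed" : "unsigned");
    return 0;
}
```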
2018-02-14 17:24:49 -05:00
Ben Boeckel
a7057925d6 configure: remove unused configure checks
These checks all control variables which are unused within the codebase.
2018-02-14 17:24:45 -05:00
Dennis Heimbigner
727b613459 This is the initial step in moving to the new higher performance
(I hope) metadata mechanism. This mostly just adds new pieces of
code (e.g. nclistmap) and does some minor fixes.

It should be transparent to everything else.
The next set of changes will be the big step.
2018-02-08 19:53:40 -07:00
Ward Fisher
d02a905aa9 Updated typo. 2018-02-02 20:27:06 -07:00
Ward Fisher
0fee3b9404 Added a missing line in config.h.cmake.in 2018-02-02 20:22:49 -07:00
Ward Fisher
c1d54b0213 Added check for genlib.h 2018-02-02 20:57:55 -06:00
Dennis Heimbigner
4db4393e69 Begin changing over to use strlcat instead of strncat because
strlcat provides better protection against buffer overflows.

Code is taken from the FreeBSD project source code. Specifically:
https://github.com/freebsd/freebsd/blob/master/lib/libc/string/strlcat.c
License appears to be acceptable, but needs to be checked by e.g. Debian.
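
A small sketch of the difference in use (this assumes strlcat is available,
either natively or via the ncconfigure.h fallback described below):

```c
#include <stdio.h>
#include <string.h>   /* strlcat is declared here on BSD; ncconfigure.h supplies it elsewhere */

int main(void)
{
    char path[16] = "/tmp/";

    /* strlcat takes the total destination size, always NUL-terminates,
       and a return value >= sizeof path signals truncation. */
    size_t n = strlcat(path, "a_rather_long_name.nc", sizeof path);
    if (n >= sizeof path)
        fprintf(stderr, "warning: path truncated\n");
    printf("%s\n", path);
    return 0;
}
```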

Step 1:
1. Add to netcdf-c/include/ncconfigure.h to use our version
   if not already available as determined by HAVE_STRLCAT in config.h.
2. Add the strlcat code to libdispatch/dstring.c
3. Turns out that strlcat was already defined in several places.
   So remove it from:
	ncgen3/genlib.c
	ncdump/dumplib.c
4. Add an extern declaration for strlcat in ncconfigure.h.
5. Modify the following directories to use strlcat:
	libdap2 libdap4 ncdap_test dap4_test
   Will do others in subsequent steps.
2017-11-23 10:55:24 -07:00
Ward Fisher
e8af76c2f4 Wiring in a quick test. 2017-11-20 13:52:06 -07:00
Dennis Heimbigner
9935d54fdf Merge master and resolve conflicts 2017-10-28 13:57:23 -06:00
Ward Fisher
4299653319 Updated an issue with libcurl and dap4 on Windows 2017-09-25 17:38:48 -06:00
Ward Fisher
8b824a3dd0 Added support for probing libcurl for CURLINFO_HTTP_CODE in cmake. 2017-09-25 13:29:04 -06:00
Ward Fisher
1a56d3fdc8 Making cdf5 tests conditional on cdf5 support setting at configure time. 2017-09-14 14:18:56 -06:00
Ward Fisher
035ec80fb2 Wiring in CDF5 configure-time option. 2017-09-13 15:25:40 -06:00
Dennis Heimbigner
ad32350355 Oops. Forgot to convert over libdap4 use of NC_mktmp
and NC_readfile and NC_combinehostport.
2017-09-03 15:09:10 -06:00
Dennis Heimbigner
a2e0f069ec This PR should probably be delayed until after Version 4.5.
Primary change is to cleanup code and remove duplicated code.

1. Unify the rc file reading into libdispatch/drc.c. Eventually extend
   if we need rc file for netcdf itself as opposed to the dap code.
2. Unify the extraction from the rc file of DAP authorization info.
3. Misc. other small unifications: make temp file, read file.
4. Avoid use of libcurl when reading file:// because
   there is some kind of problem with the Visual Studio version.
   Might be related to the winpath problem.
   In any case, do direct read instead.
5. Add new error code NC_ERCFILE for errors in reading RC file.
6. Complete documentation cleanup as indicated in this comment
   https://github.com/Unidata/netcdf-c/pull/472#issuecomment-325926426
7. Convert some occurrences of #ifdef _WIN32 to #ifdef _MSC_VER
2017-09-02 18:09:36 -06:00
Ward Fisher
9e7a902dcf Merge branch 'issue435.dmh' into multi-pull 2017-07-27 12:20:11 -06:00
Ward Fisher
3e166fd26a Accommodating Windows winsock issue. 2017-07-26 13:40:03 -06:00
Dennis Heimbigner
88b3d20e4e turn debug on 2017-07-16 13:13:10 -06:00
Dennis Heimbigner
715a6fe5eb The files libdispatch/dwinpath.c and include/ncwinpath.h
were added to provide a path name converter from e.g. cygwin
paths to e.g. windows paths. This is necessary because
the shell scripts may produce cygwin paths, but the code
may have been compiled with Visual Studio. Similar issues
arise with Mingw.

At appropriate places, and if using Visual Studio or Mingw,
I added calls to the path conversion code.
Apparently I forgot to find all the places where this
conversion was needed. So this PR does the following:
1. Push the calls to the converter to the various libXXX
   directories and out of libdispatch/dfile.c.
2. Add conversion calls to other parts of the code like oc2.

It also turns out that the conversion code in dapcvt.c
had a bug when handling the DAP Byte type under Visual Studio.

Notes:
1. there may still be places I missed that need to do path conversion.
2. need to make sure that calls to e.g. H5open also use converted path.
2017-07-13 10:40:07 -06:00
Dennis Heimbigner
9719fbfbad re: github issue https://github.com/Unidata/netcdf-c/issues/435
Some temporary files are being left in a tempdir (e.g. /tmp
under *nix).

The situation is described tersely in
netcdf-c/docs/auth.html#REDIR Basically, when a url is used that
requires redirection, a physical cookiejar file is required
to exist in the file system in order for this to work.

Since it was difficult to figure out when redirection was
being used (it was internal to libcurl) I needed to be prepared for that
eventuality. The result was that I always created a cookiejar file if one
was not specified in the rc file. This actually occurs in two places:
one inside oc2 and one inside libdap4.

The solution was two-fold:
1. do not use a cookiejar directory -- create cookiejar file directly
2. ensure that all cookiejar related files are reclaimed by nc_close().
Note that if nc_close (or nc_abort) is not called for whatever reason,
then reclamation will not occur.
2017-07-05 10:03:48 -06:00