Commit Graph

834 Commits

Author SHA1 Message Date
Ward Fisher
f75c1c4d7b
Merge branch 'master' into dispatchversion.dmh 2021-03-22 12:40:09 -06:00
Dennis Heimbigner
0b7a5382e7 Codify cross-platform file paths
The netcdf-c code has to deal with a variety of platforms:
Windows, OSX, Linux, Cygwin, MSYS, etc.  These platforms differ
significantly in the kind of file paths that they accept.  So in
order to handle this, I have created a set of replacements for
the most common file system operations such as _open_ or _fopen_
or _access_ to manage the file path differences correctly.

A more limited version of this idea was already implemented via
the ncwinpath.h and dwinpath.c code. So this can be viewed as a
replacement for that code. And in many cases, the only
change that was required was to replace '#include <ncwinpath.h>'
with '#include <ncpathmgt.h>' and then replace file operation
calls with the NCxxx equivalent from ncpathmgr.h. Note that
recently, ncwinpath.h was renamed to ncpathmgmt.h, so this pull
request should not require dealing with winpath.

The heart of the change is include/ncpathmgmt.h, which provides
alternate operations such as NCfopen or NCaccess and which properly
parse and rebuild path arguments to work for the platform on which
the code is executing. This mostly matters for Windows because of the
way that it uses backslash and drive letters, as compared to *nix*.
One important feature is that the user can do string manipulations
on a file path without having to worry too much about the platform
because the path management code will properly handle most mixed cases.
So one can for example concatenate a path suffix that uses forward
slashes to a Windows path and have it work correctly.

The conversion code is in libdispatch/dpathmgr.c, and the
important function there is NCpathcvt which does the proper
conversions to the local path format.

As a rule, most code should just replace their file operations with
the corresponding NCxxx ones defined in include/ncpathmgmt.h. These
NCxxx functions all call NCpathcvt on their path arguments before
executing the actual file operation.
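A minimal sketch of that replacement pattern, assuming NCfopen mirrors fopen's signature as described above (the helper name and data path are illustrative only):

````
#include <stdio.h>
#include "ncpathmgmt.h"   /* replaces the old ncwinpath.h */

/* Open a data file whose path may mix forward slashes with a Windows
 * drive letter; per the text above, NCfopen calls NCpathcvt on the
 * path before performing the actual open. */
static FILE* open_data(const char* dir)
{
    char path[4096];
    snprintf(path, sizeof(path), "%s/data/test.nc", dir);
    return NCfopen(path, "r");   /* instead of fopen(path, "r") */
}
````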

In some rare cases, the client may need to directly use NCpathcvt,
but this should be avoided as much as possible. If there is a need
for supporting a new file operation not already in ncpathmgmt.h, then
use the code in dpathmgr.c as a template. Also please notify Unidata
so we can include it as a formal part of our supported operations.
Also, if you see an operation in the library that is not using the
NCxxx form, then please submit an issue so we can fix it.

Misc. Changes:
* Clean up the utf8 testing code; it is impossible to get some
  tests to work under Windows using shell scripts because the
  arguments are not passed as utf8 but in some other encoding.
* Added an extra utf8 test case: test_unicode_path.sh
* Add a true test for HDF5 1.10.6 or later because as noted in
  PR https://github.com/Unidata/netcdf-c/pull/1794,
  HDF5 changed its Windows file path handling.
2021-03-04 13:41:31 -07:00
Dennis Heimbigner
7a44ae9184 Unify definition of NC_DISPATCH_VERSION
re: Issue

The netcdf dispatch table version was defined in several places.
Modify to only require defining it in CMakeLists.txt and configure.ac.

Fix entailed the following changes:
* Up the NC_DISPATCH_VERSION from 2 to 3 in configure.ac and CMakeLists.txt
* Create include/netcdf_dispatch.h.in and use it to configure include/netcdf_dispatch.h (a hypothetical sketch follows this list)
* For CMAKE, make it search CMAKE_CURRENT_BINARY_DIR so code can locate the configured netcdf_dispatch.h
* Add entry to config.h.cmake.in for NC_DISPATCH_VERSION
* Move NCerror from include/ncdispatch.h to libdap2/nccommon.h
* Fix an API problem re nchttp.h
* Fix a conversion warning in libdispatch/dinfermodel.c
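A minimal sketch of what the configured template might contain, assuming the usual autoconf/CMake `@VAR@` substitution style; this is a hypothetical excerpt, not the actual contents of netcdf_dispatch.h.in:

````
/* include/netcdf_dispatch.h.in -- hypothetical excerpt.
 * configure.ac and CMakeLists.txt supply the single authoritative
 * value that replaces @NC_DISPATCH_VERSION@ at configure time. */
#ifndef NETCDF_DISPATCH_H
#define NETCDF_DISPATCH_H

/* Dispatch table version expected by this build of the library. */
#define NC_DISPATCH_VERSION @NC_DISPATCH_VERSION@

#endif /* NETCDF_DISPATCH_H */
````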
2021-01-31 21:40:08 -07:00
Dennis Heimbigner
e7d5f24078 Add zip file support
The primary change is to support the use of a zip file as a
storage format. Simultaneously, the .nz4 support is made obsolete.

Use of zip requires the libzip support library, so a number of
changes to the build files (Makefile.am, CMakeLists.txt) are
necessary to locate and incorporate libzip.  The nczarr_tests
tests are also changed to add zip testing.

Other changes:
* Make sure distcheck leaves no files around.
* Add some functions to netcdf_aux to export some functions of libnetcdf.
* Add a new error NC_EFOUND as the complement of NC_EEMPTY.
* Add tracing support to nclog and use it in libnczarr.
* Modify the zmap interface to support the writeonce semantics of zip.
* Create a new s3util.c to support a variety of S3 auxiliary functions.
* EXTERNL'ize a number of functions so they can be used in s3util.
* Add support for the S3 ListObjects CommonPrefixes mechanism
  to improve search.
* Add experimental support for running nczarr X s3 tests against
  the actual Amazon S3 cloud.
2021-01-28 20:11:01 -07:00
Rostislav Kouznetsov
74ef4eaa8c Fix for :60 seconds in ncdump
Ncdump reports times like "2015-03-12 12:19:60.000000" #1928
Imposes microsecond accuracy on the dumped time representation
2021-01-26 10:24:43 +02:00
Ward Fisher
7630f466a2
Merge pull request #1919 from gsjaardema/patch-47
Fix so setting of NC_FORMATX_NC3 in parallel is kept
2021-01-21 14:14:28 -07:00
Dennis Heimbigner
beba1c5b90 remove lgtm alert 2021-01-06 14:26:33 -07:00
Dennis Heimbigner
93e9d92778 More NCZarr optimizations
* Replace wholevar with more useful wholechunk optimization
* Add optimization to read multiple values at one time
* Replace NCDEFAULT_get/put_vars with native nczarr versions.
* Clarify chunk projection computations
* zdebdispatch.h
* Add more chunking test cases and re-enable run_chunkcases
* If !szip, then suppress deflate interference test
* Make H5Znoop(1) filter produce more information
* cleanup bzlib.c API
2021-01-06 13:35:59 -07:00
Dennis Heimbigner
efd905a323 Add tests for filter order on read and write cases
re: https://github.com/Unidata/netcdf-c/issues/1923
re: https://github.com/Unidata/netcdf-c/issues/1921

The issue was raised about the order of returned filter ids
for nc_inq_var_filter_ids() when creating a file as opposed
to later reading the file.

For creation, the order is the same as the order in which the
calls to nc_def_var_filter() occur.
However, after the file is closed and then reopened for reading,
the question was raised if the returned order is the same or the reverse.
In fact the order is the same in both cases.

This PR extends the existing filter order testcase to check the create
versus read orders. This also required changing the H5Znoop(1) filters
in the plugins directory.

Misc. Unrelated Changes
1. fix calls to fdopen under windows
2. Temporarily suppress the nczarr_tests/run_chunkcases test
   since it seems to be causing problems with github actions.
2020-12-29 20:12:35 -07:00
Greg Sjaardema
d67b8c47ea
Fix so setting of NC_FORMATX_NC3 in parallel is kept
If the user is opening an existing file for appending (NC_WRITE) in parallel and the file is in CDF5 format, the `NC_interpret_magic_number()` routine clears the `model->impl` setting of `NC_FORMATX_PNETCDF` which was set in `NC_omodeinfer` (See lines following the `done:` label in that routine which specifically set the `impl` if `useparallel` is true.)

This setting then gets overwritten when `NC_interpret_magic_number` is called which sets the `model->impl` back to `NC_FORMATX_NC3`.  This can (did) cause problems with parallel output as the `NC3` format does not correctly handle parallel writing but the `PNETCDF` does.

Not sure if this is the best place for the test, but it did fix the parallel write issues I was seeing...

If you need more details on what is happening, let me know.  But a restatement at a higher level is that I was calling `nc_open_par` with `NC_WRITE` and `NC_64BIT_DATA` mode and the existing file has `CDF5` for the magic number.  However, the dispatcher was being set to `NC3_dispatch_table` instead of `NCP_dispatch_table` which is the dispatcher which had been chosen for the original creation of the file being appended to.

I was then getting zeroes in the data being written to the vars since NC3 wasn't correctly handling multiple MPI ranks writing to different parts of the same variable...
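For reference, a minimal sketch of the kind of call that exposed the problem, assuming an initialized MPI environment and an existing CDF5 file; the helper name and error handling are illustrative only:

````
#include <mpi.h>
#include <netcdf.h>
#include <netcdf_par.h>

/* Open an existing CDF5 file for parallel appending; before this fix,
 * the NC3 dispatcher could be selected instead of the PNETCDF one. */
static int open_for_append(const char* path, int* ncidp)
{
    return nc_open_par(path, NC_WRITE | NC_64BIT_DATA,
                       MPI_COMM_WORLD, MPI_INFO_NULL, ncidp);
}
````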
2020-12-22 15:26:33 -07:00
Dennis Heimbigner
d2316f866c Additional Fixes to NCZarr
Primary Fixes:
* Add a whole variable optimization -- used in the rare case that nc_get/put_vara covers the whole of a variable and the variable has a single chunk.
* Fix chunking error when stride causes whole chunks to be skipped.
* Fix some memory leaks
* Add test cases
* Add one performance test to nczarr_test/. This uses the timer utils from unit_test: timer_utils.[ch].
* Move ncdumpchunks utility from ncdump to nczarr_test

Misc. Other Changes:
* Make check for aws libraries conditional on --enable-nczarr-s3
* Remove all but one bm tests from nczarr_test until they are working.
* Remove another dependency on HDF5 from supposedly non-HDF5 specific code; specifically hdf5_log_hdf5.
* Make the BAIL2 macro be hdf5 specific and replace elsewhere with an HDF5 independent equivalent.
* Move hdf5cache.c to libsrc4/nc4cache.c because it is used by nczarr.
* Modify unit_tests so that some of them are run even if using Windows.
* Misc. small bug fixes, refactors, and memory leak fixes.
* Rename some conflicting tests for cmake.
* Attempted to make nc_perf work with cmake and failed.
2020-12-16 20:48:02 -07:00
Dennis Heimbigner
90fd1406bc Make use of clock_gettime be conditional.
Re: GH Issue https://github.com/Unidata/netcdf-c/issues/1900

Apparently the clock_gettime() function is not always available.
It is used in unit_test/tst_exhash.c and unit_test/tst_xcache.c.

To solve this, a number of things were changed:
* Move the timing code to a new file unit_tests/timer_utils.[ch]
* Modify the timing code to choose one of several timing methods
depending on availability. The prioritized order is as follows:
    1. If Windows, use the QueryPerformanceCounter mechanism else
    2. Use clock_gettime if available else
    3. Use gettimeofday if available else
    4. Use getrusage if available

Note that the resolution of 3 and 4 is less than 1 or 2.
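A minimal sketch of the compile-time fallback described above; the HAVE_* macro names are the conventional configure-generated guards and are assumed here, not taken from timer_utils.[ch]:

````
#include <stdint.h>
#if defined(_WIN32)
#include <windows.h>
#elif defined(HAVE_CLOCK_GETTIME)
#include <time.h>
#elif defined(HAVE_GETTIMEOFDAY)
#include <sys/time.h>
#else
#include <sys/resource.h>
#endif

/* Return a timestamp in nanoseconds using the best available
 * mechanism, in the priority order listed above. */
static int64_t now_ns(void)
{
#if defined(_WIN32)
    LARGE_INTEGER freq, count;                /* 1. QueryPerformanceCounter */
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&count);
    return (int64_t)(count.QuadPart * (1000000000.0 / (double)freq.QuadPart));
#elif defined(HAVE_CLOCK_GETTIME)
    struct timespec ts;                       /* 2. clock_gettime (ns) */
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (int64_t)ts.tv_sec * 1000000000 + ts.tv_nsec;
#elif defined(HAVE_GETTIMEOFDAY)
    struct timeval tv;                        /* 3. gettimeofday (usec) */
    gettimeofday(&tv, NULL);
    return (int64_t)tv.tv_sec * 1000000000 + (int64_t)tv.tv_usec * 1000;
#else
    struct rusage ru;                         /* 4. getrusage (usec, CPU time) */
    getrusage(RUSAGE_SELF, &ru);
    return (int64_t)ru.ru_utime.tv_sec * 1000000000
         + (int64_t)ru.ru_utime.tv_usec * 1000;
#endif
}
````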

Misc. Other Changes:
* Move the test in CMakeLists.txt that disables unit tests for WIN32 to unit_test/CMakeLists.txt since some unit tests actually work under Visual Studio.
* Fix some of the unit tests to work under Visual Studio
* Fix problem with using remove() in zmap_nzf.c
* Remove some warnings about use of EXTERNL
2020-12-06 18:19:53 -07:00
Dennis Heimbigner
eb3d9eb0c9 Provide a Number of fixes/improvements to NCZarr
Primary changes:
* Add an improved cache system to speed up performance.
* Fix NCZarr to properly handle scalar variables.

Misc. Related Changes:
* Added unit tests for extendible hash and for the generic cache.
* Add config parameter to set size of the NCZarr cache.
* Add initial performance tests but leave them unused.
* Add CRC64 support.
* Move location of ncdumpchunks utility from /ncgen to /ncdump.
* Refactor auth support.

Misc. Unrelated Changes:
* More cleanup of the S3 support
* Add support for S3 authentication in .rc files: HTTP.S3.ACCESSID and HTTP.S3.SECRETKEY.
* Remove the hashkey from the struct OBJHDR since it is never used.
2020-11-19 17:01:04 -07:00
Dennis Heimbigner
793ecc8e60 Yet another fix for DAP2 double URL encoding.
re:  https://github.com/Unidata/netcdf-c/issues/1876
and: https://github.com/Unidata/netcdf-c/pull/1835
and: https://github.com/Unidata/netcdf4-python/issues/1041

The change in PR 1835 was correct with respect to using %20 instead of '+'
for encoding blanks. However, it was a mistake to assume everything was
unencoded and then to do encoding ourselves. The problem is that
different servers do different things, with Columbia being an outlier.

So, I have added a set of client controls that can at least give
the caller some control over this. The caller can append
the following fragment to his URL to control what gets encoded before
sending it to the server. The syntax is as follows:
````
https://<host>/<path>/<query>#encode=path|query|all|none
````

The possible values:
* path  -- URL encode (i.e. %xx encode) as needed in the path part of the URL.
* query -- URL encode as needed in the query part of the URL.
* all   -- equivalent to ````#encode=path,query````.
* none  -- do not url encode any part of the URL sent to the server; not strictly necessary, so mostly for completeness.

Note that if "encode=" is used, then before it is processed, all encoding
is turned off so that ````#encode=path```` will only encode the path
and not the query.

The default is ````#encode=query````, so the path is left untouched,
but the query is always encoded.
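For illustration, a hedged sketch of how a caller might disable the library's URL encoding entirely for a misbehaving server; the host, path, and constraint below are placeholders:

````
#include <netcdf.h>

/* Append the #encode fragment to the DAP2 URL before opening it;
 * "none" tells the library not to URL-encode any part of the URL. */
static int open_unencoded(int* ncidp)
{
    const char* url =
        "https://example.org/dods/dataset?var[0:10]#encode=none";
    return nc_open(url, NC_NOWRITE, ncidp);
}
````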

Internally, this required changes to pass the encode flags down into
the OC2 library.

Misc. Unrelated Changes:
* Shut up those irritating warnings from putget.m4
2020-11-05 11:04:56 -07:00
Ward Fisher
564d01beb9 Merge branch 'gh1417.allured' of https://github.com/Dave-Allured/netcdf-c into gh1866-notes.wif 2020-10-16 10:00:17 -06:00
Dave-Allured
4300d57de7 Fix time zone parser bug, github #1417 2020-10-14 15:30:49 -06:00
Dennis Heimbigner
25d2e05444 Prepare for the path management code
Rename some files in prep for eventual implementation
of more comprehensive cross-platform file path management.
2020-10-13 19:12:15 -06:00
Dennis Heimbigner
aeb3ac2809 Mostly revert the filter code to reduce its complexity of use.
re: https://github.com/Unidata/netcdf-c/issues/1836

Revert the internal filter code to simplify it. From the user's
point of view, the only visible changes should be:

1. The functions that convert text to filter specs have had their signature reverted and have been moved to netcdf_aux.h
2. Some filter API functions now return NC_ENOFILTER when inquiry is made about some filter.

Internally, the dispatch table has been modified to get rid of the filter_actions
entry and associated complex structures. It has been replaced with
inq_var_filter_ids and inq_var_filter_info entries and the dispatch table
version has been bumped to 3. Corresponding NOOP and NOTNC4 functions
were added to libdispatch/dnotnc4.c. Also, the filter_actions entries
in dispatch tables were replaced for all dispatch code bases (HDF5, DAP2,
etc). This should only impact UDF users.

In the process, it became clear that the form of the filters
field in NC_VAR_INFO_T was format dependent, so I converted it to
be of type void* and pushed its management into the various dispatch
code bases. Specifically libhdf5 and libnczarr now manage the filters
field in their own way.

The auxiliary functions for parsing textual filter specifications
were moved to netcdf_aux.h and were renamed to the following:
* ncaux_h5filterspec_parse
* ncaux_h5filterspec_parselist
* ncaux_h5filterspec_free
* ncaux_h5filter_fix8

Misc. Other Changes:

1. Document NUG/filters.md updated to reflect the changes above.
2. All the old data types (structs and enums)
   used by filter_actions were deleted.
   The exception is the NC_H5_Filterspec because it is needed
   by ncaux_h5filterspec_parselist.
3. Clientside filters were removed -- another enhancement
   for which no-one ever asked.
4. The ability to remove filters was itself removed.
5. Some functionality needed by nczarr was moved from libhdf5
   to libsrc4 e.g. nc4_find_default_chunksizes
6. All the filterx code was removed
7. ncfilter.h and nc4filter.c no longer used

Misc. Unrelated Changes:

1. The nczarr_test makefile clean was leaving some directories, so
   add clean-local to take care of them.
2020-09-27 12:43:46 -06:00
Ward Fisher
3e9293d40b Working on autoconf-based build on OSX 2020-09-15 14:56:12 -06:00
Dennis Heimbigner
c3c89693c4 Fix URL encoding in DAP2 url processing
re: Github issue https://github.com/Unidata/netcdf-c/issues/1832
and Github issue https://github.com/Unidata/netcdf4-python/issues/1041

Handling of URL escape sequences for some servers
(e.g. http://iridl.ldeo.columbia.edu) appears to be somewhat
non-standard.
In particular, certain characters need escaping that other servers
do not. Fortunately, the changes should also work with other existing servers.
2020-09-08 12:41:12 -06:00
Ward Fisher
31dee0c4da
Revert "Revert "Fix nczarr-experimental: improve build support, disengage hdf5 vs netcdf4 flags, and find AWS libraries"" 2020-08-17 19:15:47 -06:00
Ward Fisher
16c27ca13f
Revert "Fix nczarr-experimental: improve build support, disengage hdf5 vs netcdf4 flags, and find AWS libraries" 2020-08-17 15:51:01 -06:00
Dennis Heimbigner
931f6d0ad4 Define isnan and isinf for OSX 2020-08-04 19:22:42 -06:00
Dennis Heimbigner
a905c886d1
Merge branch 'master' into nczarr-update1.dmh 2020-07-27 19:17:57 -06:00
Ryan May
6ea54a76ed
Fix for cURL >7.69
Found on conda-forge (which is now running 7.71.1), that byte-range
requests would stall. It turns out this is due to
CURLOPT_NOBODY--apparently setting this to 0 disables the HEAD request,
but does not restore downloading the body. The way to fix this is to
reset to CURLOPT_HTTPGET when done with a HEAD request.
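A minimal sketch of that pattern, assuming a libcurl easy handle already configured for the byte-range request (error checking omitted):

````
#include <curl/curl.h>

/* Issue a HEAD request, then restore GET so later transfers on the
 * same handle actually download the body (needed for libcurl > 7.69,
 * where merely clearing CURLOPT_NOBODY is not enough). */
static void head_then_get(CURL* curl)
{
    curl_easy_setopt(curl, CURLOPT_NOBODY, 1L);   /* HEAD request */
    curl_easy_perform(curl);

    curl_easy_setopt(curl, CURLOPT_HTTPGET, 1L);  /* re-enable body download */
    curl_easy_perform(curl);
}
````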
2020-07-22 02:19:43 -06:00
Dennis Heimbigner
d538cf38c2 Fix nczarr-experimental to better support CMake and find AWS libraries
The primary fix is to improve CMake build support.
Specific changes include:
* CMake: Provide a better solution for locating the AWS SDK
  libraries; the new way is the preferred method as described in
  the aws-cpp-sdk documentation.
* CMake (and Automake): allow -DENABLE_S3_SDK (default off) to suppress
  looking for AWS libraries.
* CMake: add the complete set of nczarr tests
* CMake: add EXTERNL as needed to various .h files.
* Improve support for windows drive letters in paths.
* Add nczarr and s3 flags to nc-config
* For VisualStudio X nczarr, cleanup the NAN+INFINITY handling
* Convert _MSC_VER -> _WIN32 and vice versa as needed
* NCZarr - support multiple platform paths including Windows, Cygwin,
  MinGW, etc.
* NCZarr - sort the test outputs because different platforms
  produce directory contents in different orders.

One big change concerns netcdf-c/CMakeLists.txt and netcdf-c/configure.ac.
In the current versions, it was the case that --disable-hdf5
disabled netcdf-4 (libsrc4). With nczarr, this can no longer
be the case because nczarr requires libsrc4 even if libhdf5
is disabled. So, I modified the above files to move the
format options (HDF5, NCZarr, HDF4, etc) to a single place
near the front of the files. Now it is the case that:
* Enabling any of the formats that require libsrc4
  also does an implicit --enable-netcdf4.
* --disable-netcdf4 | --disable-netcdf-4 now becomes
  an alias for --disable-hdf5.

There are probably some bugs in this change in terms of
dependencies between format options.

Problems:
* CMake S3 support is still not working for Visual Studio
* A recent issue points out that there is work to do on handling
  UTF8 filenames, but that will be addressed in a separate fix.

Notes:
* Consider converting all of our includes/.h files to use EXTERNL
2020-07-12 12:21:56 -06:00
Dennis Heimbigner
c7cf0d3807 Fix LGTM errors 2020-06-28 19:07:08 -06:00
Dennis Heimbigner
59e04ae071 This PR adds EXPERIMENTAL support for accessing data in the
cloud using a variant of the Zarr protocol and storage
format. This enhancement is generically referred to as "NCZarr".

The data model supported by NCZarr is netcdf-4 minus the user-defined
types and the String type. In this sense it is similar to the CDF-5
data model.

More detailed information about enabling and using NCZarr is
described in the document NUG/nczarr.md and in a
[Unidata Developer's blog entry](https://www.unidata.ucar.edu/blogs/developer/en/entry/overview-of-zarr-support-in).

WARNING: this code has had limited testing, so do not use this version
for production work. Also, performance improvements are ongoing.
Note especially the following platform matrix of successful tests:

Platform       | Build System | S3 support
---------------|--------------|-----------
Linux+gcc      | Automake     | yes
Linux+gcc      | CMake        | yes
Visual Studio  | CMake        | no

Additionally, and as a consequence of the addition of NCZarr,
major changes have been made to the Filter API. NOTE: NCZarr
does not yet support filters, but these changes are enablers for
that support in the future.  Note that it is possible
(probable?) that there will be some accidental reversions if the
changes here did not correctly mimic the existing filter testing.

In any case, previously filter ids and parameters were of type
unsigned int. In order to support the more general zarr filter
model, this was all converted to char*.  The old HDF5-specific,
unsigned int operations are still supported but they are
wrappers around the new, char* based nc_filterx_XXX functions.
This entailed at least the following changes:
1. Added the files libdispatch/dfilterx.c and include/ncfilter.h
2. Some filterx utilities have been moved to libdispatch/daux.c
3. A new entry, "filter_actions" was added to the NCDispatch table
   and the version bumped.
4. An overly complex set of structs was created to support funnelling
   all of the filterx operations thru a single dispatch
   "filter_actions" entry.
5. Move common code from libhdf5 to libsrc4 so that it is accessible
   to nczarr.

Changes directly related to Zarr:
1. Modified CMakeList.txt and configure.ac to support both C and C++
   -- this is in support of S3 support via the aws-sdk libraries.
2. Define a size64_t type to support nczarr.
3. More reworking of libdispatch/dinfermodel.c to
   support zarr and to regularize the structure of the fragments
   section of a URL.

Changes not directly related to Zarr:
1. Make client-side filter registration be conditional, with default off.
2. Hack include/nc4internal.h to make some flags added by Ed be unique:
   e.g. NC_CREAT, NC_INDEF, etc.
3. cleanup include/nchttp.h and libdispatch/dhttp.c.
4. Misc. changes to support compiling under Visual Studio including:
   * Better testing under windows for dirent.h and opendir and closedir.
5. Misc. changes to the oc2 code to support various libcurl CURLOPT flags
   and to centralize error reporting.
6. By default, suppress the vlen tests that have unfixed memory leaks; add option to enable them.
7. Make part of the nc_test/test_byterange.sh test be contingent on remotetest.unidata.ucar.edu being accessible.

Changes Left TO-DO:
1. fix provenance code, it is too HDF5 specific.
2020-06-28 18:02:47 -06:00
Edward Hartnett
87226c4879 readded NOTNC3 varm functions to dispatch 2020-06-03 05:55:30 -06:00
Sean Arms
c37cc13dca Treat time units as case-insensitive in ncdump
Enables ncdump -t (-i) to recognize a wider variety of time related units
and calendar names. This brings ncdump closer to what it advertises in its
man page regarding its understanding of udunits compliant time units.
2020-05-14 06:48:03 -06:00
Dennis Heimbigner
84c69afca7 Allow redefinition of variable filters
re: Github issue https://github.com/Unidata/netcdf-c/issues/1713

If nc_def_var_filter or nc_def_var_deflate or nc_def_var_szip is
called multiple times with the same filter id, but possibly with
different sets of parameters, then the first invocation is
sticky and later invocations are ignored. The desired behavior
is to have the last invocation be used.

This PR implements that desired behavior, with some special
cases.  If you call nc_def_var_deflate multiple times, then the
last invocation rule applies with respect to deflate. However,
the shuffle filter, if enabled, is always applied just before
applying deflate.
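A hedged sketch of the last-invocation-wins behavior for deflate; the ncid/varid setup is omitted and the levels are arbitrary:

````
#include <netcdf.h>

/* With this PR, the second call wins: the variable ends up with
 * deflate level 9, and shuffle (enabled here) is still applied
 * just before deflate. */
static int set_compression(int ncid, int varid)
{
    int stat;
    if ((stat = nc_def_var_deflate(ncid, varid, /*shuffle*/1, /*deflate*/1, 2)))
        return stat;
    return nc_def_var_deflate(ncid, varid, 1, 1, 9); /* last call is used */
}
````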

Misc unrelated changes:
1. Make client-side filters be disabled by default
2. Fix the definition of uintptr_t and use in oc2 and libdap4
3. Add some test cases
4. modify filter order tests to use plugin filters rather
   than client-side filters
2020-05-11 09:42:31 -06:00
Ward Fisher
d772543a9b
Merge branch 'master' into dispnoop.dmh 2020-04-27 15:54:22 -06:00
Dennis Heimbigner
f0cd7f8ec1 Support no-op dispatch functions
re: https://github.com/Unidata/netcdf-c/issues/1693

1. Add functions to libdispatch/dnotnc4.c to support
   dispatch table operations that should work for any
   dispatch table, even if they do not do anything.
   Functions such as nc_inq_var_filter.
2. Modify selected dispatch tables to utilize
   the noop functions.
3. Extend nc_test/tst_formats.c to test.

This is an extension of Ed's work to do this for
chunking and deflate and szip. See PRs
https://github.com/Unidata/netcdf-c/pull/1697
and
https://github.com/Unidata/netcdf-c/pull/1692

As a side effect, elide libdispatch/dnotnc3.c since
it is no longer used.
2020-04-15 14:44:58 -06:00
Edward Hartnett
9ac441ad6a cleanup 2020-04-15 05:53:59 -06:00
Dennis Heimbigner
313121a229 Use proper CURLOPT values for VERIFYHOST and VERIFYPEER
re: https://github.com/Unidata/netcdf-c/issues/1684
re: e-support VZL-904142

Two issues:
1. As of libcurl 7.66, the semantics of CURLOPT_SSL_VERIFYHOST
   changed so that the non-zero values affect certificate processing.
2. The current library was forcing the values of VERIFYPEER
   and VERIFYHOST to zero instead of leaving them to the default values.

Solution was first to leave the defaults in place for VERIFYPEER and VERIFYHOST
as long as they are not set in .ocrc/.dodsrc file.
Second, the value of HTTP.SSL.VERIFYPEER or HTTP.SSL.VERIFYHOST
as set in .ocrc/.dodsrc is used to set the corresponding CURLOPT flags.
So for example, adding
> HTTP.SSL.VERIFYHOST=2
will set the value of CURLOPT_SSL_VERIFYHOST to 2, the default.
Using
> HTTP.SSL.VERIFYHOST=0
will set the value of CURLOPT_SSL_VERIFYHOST to 0, which disables it.
Similarly for VERIFYPEER.

Finally the semantics of HTTP.SSL.VALIDATE is now equivalent to
> HTTP.SSL.VERIFYPEER=1
> HTTP.SSL.VERIFYHOST=2
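For comparison, a hedged sketch of the libcurl calls that the rc-file settings above translate into; the handle setup is omitted and the values shown mirror the defaults noted above:

````
#include <curl/curl.h>

/* HTTP.SSL.VERIFYHOST=2 and HTTP.SSL.VERIFYPEER=1 correspond to the
 * libcurl defaults; setting either value to 0 disables that check. */
static void apply_ssl_verify(CURL* curl, long verifyhost, long verifypeer)
{
    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, verifyhost); /* 2 = default */
    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, verifypeer); /* 1 = default */
}
````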
2020-04-10 13:42:27 -06:00
Edward Hartnett
b76a0c8521 documentation improvements 2020-04-08 09:12:19 -06:00
Edward Hartnett
7366edb43f documentation improvements 2020-04-08 09:10:42 -06:00
Edward Hartnett
58e5d53e96 documentation improvements 2020-04-08 09:09:46 -06:00
Edward Hartnett
41ea23a8ac
Merge branch 'master' into ejh_fix_nc3_deflate 2020-04-08 08:54:50 -06:00
Edward Hartnett
1c189b2c56 dealing with nc_inq_var_szip(), testing, and release notes 2020-04-08 08:49:04 -06:00
Edward Hartnett
aab2f998b3 now testing that nc_inq_var_deflate() works for all formats and returns 0 deflate and deflate_level 2020-04-08 08:31:53 -06:00
Dennis Heimbigner
6f86660da8 Fix missing forward declarations
re: issue https://github.com/Unidata/netcdf-c/issues/1687

Static functions are being used before their declarations, which causes
errors. This only occurs when BIG_ENDIAN is defined.
Solution is to add the forward declarations.
2020-04-03 20:15:34 -06:00
Edward Hartnett
9b6215936b updated documentation of nc_inq_var_deflate() to describe behavior of deflate_level when deflate not in use 2020-03-17 10:33:53 -06:00
Edward Hartnett
edea5e3552 now pass 0 for deflate_level if deflate not in use 2020-03-16 11:01:13 -06:00
Dennis Heimbigner
1bce6b9b5c Fix open/create of UTF8 names
re: issue https://github.com/Unidata/netcdf-c/issues/1666

The code in NC_open and NC_create (in dfile.c)
was improperly testing for leading whitespace characters;
it was treating UTF-8 bytes as whitespace.

The fix is to do the tests using unsigned char.
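A hedged sketch of the kind of test the fix implies; this is not the dfile.c code itself, only an illustration of using unsigned char so UTF-8 lead bytes (>= 0x80) are never mistaken for whitespace:

````
#include <ctype.h>

/* Skip leading ASCII whitespace without misclassifying UTF-8 bytes. */
static const char* skip_leading_space(const char* path)
{
    const unsigned char* p = (const unsigned char*)path;
    while (*p != '\0' && *p < 0x80 && isspace(*p))
        p++;
    return (const char*)p;
}
````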
2020-03-11 11:25:57 -06:00
Edward Hartnett
7004bbc2d5 updated documentation 2020-03-06 09:54:26 -07:00
Edward Hartnett
d5aba68cec updated docs for nc_def_var_chunking WRT scalars 2020-03-02 16:41:01 -07:00
Edward Hartnett
ba0491bb40 documentation improvements for nc_var_par_access() 2020-03-02 16:36:56 -07:00
Dennis Heimbigner
b488c272d5 Fix conflicts with master 2020-02-27 14:06:45 -07:00
Dennis Heimbigner
44d0dcaad2 Add support for multiple filters per variable.
re: https://github.com/Unidata/netcdf-c/issues/1584

Support has been added for multiple filters per variable.  This
affects a number of components in netcdf. The new APIs are
documented in NUG/filters.md.

The primary changes are:
* A set of new functions are provided (see __include/netcdf_filter.h__); a usage sketch appears after this list.
    - Obtain a list of the filters associated with a variable
    - Obtain the parameters for a specific filter.
* The existing __nc_inq_var_filter__ function now returns info
  about the first defined filter.
* The utilities (ncgen, ncdump, and nccopy) now support
  an extended format for specifying a sequence of filters.
  The general form is __<filter>|<filter>...__.
* The ncdump **_Filter** attribute now dumps a list of all the
  filters associated with a variable using the above new format.
* Filter specifications can now use a filter name instead of number
  for filters known to the netcdf library, which in turn is taken
  from the HDF5 filter registration page.
* New errors are defined: NC_EFILTER and NC_ENOFILTER. The latter
  is returned if an attempt is made to access an unknown filter.
* Internally, the dispatch table has been extended to add a function
  to handle all of the filter functions.
* New, filter-related, tests were added to nc_test4.
* A new plugin was added to the plugins directory to help with testing.
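A hedged usage sketch of the new query functions, assuming the nc_inq_var_filter_ids and nc_inq_var_filter_info signatures declared in include/netcdf_filter.h; buffer handling and error reporting are simplified:

````
#include <stdio.h>
#include <stdlib.h>
#include <netcdf.h>
#include <netcdf_filter.h>

/* List every filter defined on a variable and its parameter count. */
static int show_filters(int ncid, int varid)
{
    size_t nfilters = 0, nparams = 0, i;
    unsigned int* ids = NULL;
    int stat;

    if ((stat = nc_inq_var_filter_ids(ncid, varid, &nfilters, NULL)))
        return stat;                           /* first call: get the count */
    ids = (unsigned int*)malloc(nfilters * sizeof(unsigned int));
    if ((stat = nc_inq_var_filter_ids(ncid, varid, &nfilters, ids)))
        { free(ids); return stat; }            /* second call: get the ids */
    for (i = 0; i < nfilters; i++) {
        if ((stat = nc_inq_var_filter_info(ncid, varid, ids[i], &nparams, NULL)))
            { free(ids); return stat; }
        printf("filter %u: %zu parameters\n", ids[i], nparams);
    }
    free(ids);
    return NC_NOERR;
}
````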

Notes:
1. The shuffle and fletcher32 filters are not part of the multifilter system.

Misc. changes:
1. A debug module was added to libhdf5 to help catch error locations.
2020-02-16 12:59:33 -07:00