netcdf-c/libdispatch/derror.c

/** \file
Error messages and library version.
These functions return the library version and error messages.
Copyright 2018 University Corporation for Atmospheric
Research/Unidata. See COPYRIGHT file for more info.
*/
#include "config.h"
#include "ncdispatch.h"
#ifdef USE_PNETCDF
#include <pnetcdf.h> /* for ncmpi_strerror() */
#endif
/** @internal The version string for the library, used by
* nc_inq_libvers(). */
static const char nc_libvers[] = PACKAGE_VERSION " of "__DATE__" "__TIME__" $";
/**
Return the library version.
\returns short string that contains the version information for the
library.
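
For example (a minimal sketch that just prints the version string):

\code
#include <stdio.h>
#include <netcdf.h>
...
printf("netCDF version: %s\n", nc_inq_libvers());
\endcode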
*/
const char *
nc_inq_libvers(void)
{
return nc_libvers;
}

/*! NetCDF Error Handling
\addtogroup error NetCDF Error Handling
NetCDF functions return a non-zero status code on error.

Each netCDF function returns an integer status value. If the returned
status value indicates an error, you may handle it in any way desired,
from printing an associated error message and exiting to ignoring the
error indication and proceeding (not recommended!). For simplicity,
the examples in this guide check the error status and call a separate
function, handle_err(), to handle any errors. One possible definition
of handle_err() can be found within the documentation of
nc_strerror().

The nc_strerror() function is available to convert a returned integer
error status into an error message string.

Occasionally, low-level I/O errors may occur in a layer below the
netCDF library. For example, if a write operation causes you to exceed
disk quotas or to attempt to write to a device that is no longer
available, you may get an error from a layer below the netCDF library,
but the resulting write error will still be reflected in the returned
status value.
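
For example, a call site might check the returned status like this (a
minimal sketch; "foo.nc" and ncid are hypothetical, and one possible
definition of handle_err() is shown in the nc_strerror()
documentation):

\code
#include <netcdf.h>
...
int ncid, status;
status = nc_open("foo.nc", NC_NOWRITE, &ncid);
if (status != NC_NOERR) handle_err(status);
\endcode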
*/
/** \{ */
/*! Given an error number, return an error message.
This function returns a static reference to an error message string
corresponding to an integer netCDF error status or to a system error
number, presumably returned by a previous call to some other netCDF
function. The error codes are defined in netcdf.h.

\param ncerr1 error number

\returns short string containing error message.

Here is an example of a simple error handling function that uses
nc_strerror() to print the error message corresponding to the netCDF
error status returned from any netCDF function call and then exit:
\code
#include <netcdf.h>
...
void handle_error(int status) {
if (status != NC_NOERR) {
fprintf(stderr, "%s\n", nc_strerror(status));
exit(-1);
}
}
\endcode
*/
const char *nc_strerror(int ncerr1)
{
/* System error? */
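/* (NC_ISSYSERR is true for positive error codes, which are passed
to strerror() as system errno values; netCDF's own error codes
are negative.) */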
if(NC_ISSYSERR(ncerr1))
{
const char *cp = (const char *) strerror(ncerr1);
if(cp == NULL)
return "Unknown Error";
return cp;
}
/* If we're here, this is a netcdf error code. */
switch(ncerr1)
{
case NC_NOERR:
return "No error";
case NC_EBADID:
return "NetCDF: Not a valid ID";
case NC_ENFILE:
return "NetCDF: Too many files open";
case NC_EEXIST:
return "NetCDF: File exists && NC_NOCLOBBER";
case NC_EINVAL:
return "NetCDF: Invalid argument";
case NC_EPERM:
return "NetCDF: Write to read only";
case NC_ENOTINDEFINE:
return "NetCDF: Operation not allowed in data mode";
case NC_EINDEFINE:
return "NetCDF: Operation not allowed in define mode";
case NC_EINVALCOORDS:
return "NetCDF: Index exceeds dimension bound";
case NC_EMAXDIMS:
return "NetCDF: NC_MAX_DIMS exceeded"; /* not enforced after 4.5.0 */
case NC_ENAMEINUSE:
return "NetCDF: String match to name in use";
case NC_ENOTATT:
return "NetCDF: Attribute not found";
case NC_EMAXATTS:
return "NetCDF: NC_MAX_ATTRS exceeded"; /* not enforced after 4.5.0 */
case NC_EBADTYPE:
return "NetCDF: Not a valid data type or _FillValue type mismatch";
case NC_EBADDIM:
return "NetCDF: Invalid dimension ID or name";
case NC_EUNLIMPOS:
return "NetCDF: NC_UNLIMITED in the wrong index";
case NC_EMAXVARS:
return "NetCDF: NC_MAX_VARS exceeded"; /* not enforced after 4.5.0 */
case NC_ENOTVAR:
return "NetCDF: Variable not found";
case NC_EGLOBAL:
return "NetCDF: Action prohibited on NC_GLOBAL varid";
case NC_ENOTNC:
return "NetCDF: Unknown file format";
case NC_ESTS:
return "NetCDF: In Fortran, string too short";
case NC_EMAXNAME:
return "NetCDF: NC_MAX_NAME exceeded";
case NC_EUNLIMIT:
return "NetCDF: NC_UNLIMITED size already in use";
case NC_ENORECVARS:
return "NetCDF: nc_rec op when there are no record vars";
case NC_ECHAR:
return "NetCDF: Attempt to convert between text & numbers";
case NC_EEDGE:
return "NetCDF: Start+count exceeds dimension bound";
case NC_ESTRIDE:
return "NetCDF: Illegal stride";
case NC_EBADNAME:
return "NetCDF: Name contains illegal characters";
case NC_ERANGE:
return "NetCDF: Numeric conversion not representable";
case NC_ENOMEM:
return "NetCDF: Memory allocation (malloc) failure";
case NC_EVARSIZE:
return "NetCDF: One or more variable sizes violate format constraints";
case NC_EDIMSIZE:
return "NetCDF: Invalid dimension size";
case NC_ETRUNC:
return "NetCDF: File likely truncated or possibly corrupted";
case NC_EAXISTYPE:
return "NetCDF: Illegal axis type";
case NC_EDAP:
return "NetCDF: DAP failure";
case NC_ECURL:
return "NetCDF: libcurl failure";
case NC_EIO:
return "NetCDF: I/O failure";
case NC_ENODATA:
return "NetCDF: Variable has no data";
case NC_EDAPSVC:
return "NetCDF: DAP server error";
case NC_EDAS:
return "NetCDF: Malformed or inaccessible DAP DAS";
case NC_EDDS:
return "NetCDF: Malformed or inaccessible DAP2 DDS or DAP4 DMR response";
case NC_EDATADDS:
return "NetCDF: Malformed or inaccessible DAP2 DATADDS or DAP4 DAP response";
case NC_EDAPURL:
return "NetCDF: Malformed URL";
case NC_EDAPCONSTRAINT:
return "NetCDF: Malformed or unexpected Constraint";
case NC_ETRANSLATION:
return "NetCDF: Untranslatable construct";
case NC_EACCESS:
return "NetCDF: Access failure";
case NC_EAUTH:
return "NetCDF: Authorization failure";
case NC_ENOTFOUND:
return "NetCDF: file not found";
case NC_ECANTREMOVE:
return "NetCDF: cannot delete file";
case NC_EINTERNAL:
return "NetCDF: internal library error; Please contact Unidata support";
case NC_EPNETCDF:
return "NetCDF: PnetCDF error";
case NC_EHDFERR:
return "NetCDF: HDF error";
case NC_ECANTREAD:
return "NetCDF: Can't read file";
case NC_ECANTWRITE:
return "NetCDF: Can't write file";
case NC_ECANTCREATE:
return "NetCDF: Can't create file";
case NC_EFILEMETA:
return "NetCDF: Can't add HDF5 file metadata";
case NC_EDIMMETA:
return "NetCDF: Can't define dimensional metadata";
case NC_EATTMETA:
return "NetCDF: Can't open HDF5 attribute";
case NC_EVARMETA:
return "NetCDF: Problem with variable metadata.";
case NC_ENOCOMPOUND:
return "NetCDF: Can't create HDF5 compound type";
case NC_EATTEXISTS:
return "NetCDF: Attempt to create attribute that already exists";
case NC_ENOTNC4:
return "NetCDF: Attempting netcdf-4 operation on netcdf-3 file";
case NC_ESTRICTNC3:
return "NetCDF: Attempting netcdf-4 operation on strict nc3 netcdf-4 file";
case NC_ENOTNC3:
return "NetCDF: Attempting netcdf-3 operation on netcdf-4 file";
case NC_ENOPAR:
return "NetCDF: Parallel operation on file opened for non-parallel access";
case NC_EPARINIT:
return "NetCDF: Error initializing for parallel access";
case NC_EBADGRPID:
return "NetCDF: Bad group ID";
case NC_EBADTYPID:
return "NetCDF: Bad type ID";
case NC_ETYPDEFINED:
return "NetCDF: Type has already been defined and may not be edited";
case NC_EBADFIELD:
return "NetCDF: Bad field ID";
case NC_EBADCLASS:
return "NetCDF: Bad class";
case NC_EMAPTYPE:
return "NetCDF: Mapped access for atomic types only";
case NC_ELATEFILL:
return "NetCDF: Attempt to define fill value when data already exists.";
case NC_ELATEDEF:
return "NetCDF: Attempt to define var properties, like deflate, after enddef.";
case NC_EDIMSCALE:
return "NetCDF: Problem with HDF5 dimscales.";
case NC_ENOGRP:
return "NetCDF: No group found.";
case NC_ESTORAGE:
return "NetCDF: Cannot specify both contiguous and chunking.";
case NC_EBADCHUNK:
return "NetCDF: Bad chunk sizes.";
case NC_ENOTBUILT:
return "NetCDF: Attempt to use feature that was not turned on "
"when netCDF was built.";
case NC_EDISKLESS:
return "NetCDF: Error in using diskless access";
case NC_EFILTER:
return "NetCDF: Filter error: bad id or parameters or duplicate filter";
case NC_ENOFILTER:
return "NetCDF: Filter error: undefined filter encountered";
case NC_ECANTEXTEND:
return "NetCDF: Attempt to extend dataset during NC_INDEPENDENT I/O operation. Use nc_var_par_access to set mode NC_COLLECTIVE before extending variable.";
case NC_EMPI:
return "NetCDF: MPI operation failed.";
case NC_ERCFILE:
return "NetCDF: RC File Failure.";
case NC_ENULLPAD:
return "NetCDF: File fails strict Null-Byte Header check.";
case NC_EINMEMORY:
return "NetCDF: In-memory File operation failed.";
case NC_ENCZARR:
return "NetCDF: NCZarr error";
case NC_ES3:
return "NetCDF: S3 error";
case NC_EEMPTY:
return "NetCDF: Attempt to read empty NCZarr map key";
case NC_EOBJECT:
return "NetCDF: Some object exists when it should not";
case NC_ENOOBJECT:
return "NetCDF: Some object not found";
case NC_EPLUGIN:
return "NetCDF: Unclassified failure in accessing a dynamically loaded plugin";
default:
#ifdef USE_PNETCDF
/* ncmpi_strerror() may return NULL rather than a string for
codes it does not recognize, which causes problems in (at
least) the Fortran interface, so fall back to a generic
message. */
return (ncmpi_strerror(ncerr1) ?
ncmpi_strerror(ncerr1) :
"Unknown Error");
#else
return "Unknown Error";
#endif
}
}
/** \} */