/** \file
Error messages and library version.

These functions return the library version and error messages.

Copyright 2018 University Corporation for Atmospheric
Research/Unidata. See COPYRIGHT file for more info.
*/
This PR adds EXPERIMENTAL support for accessing data in the
cloud using a variant of the Zarr protocol and storage
format. This enhancement is generically referred to as "NCZarr".

The data model supported by NCZarr is netcdf-4 minus the user-defined
types and the String type. In this sense it is similar to the CDF-5
data model.

More detailed information about enabling and using NCZarr is
described in the document NUG/nczarr.md and in a
[Unidata Developer's blog entry](https://www.unidata.ucar.edu/blogs/developer/en/entry/overview-of-zarr-support-in).

WARNING: this code has had limited testing, so do not use this version
for production work. Also, performance improvements are ongoing.

Note especially the following platform matrix of successful tests:

Platform      | Build System | S3 support
--------------|--------------|-----------
Linux+gcc     | Automake     | yes
Linux+gcc     | CMake        | yes
Visual Studio | CMake        | no

Additionally, and as a consequence of the addition of NCZarr,
major changes have been made to the Filter API. NOTE: NCZarr
does not yet support filters, but these changes are enablers for
that support in the future. Note that it is possible
(probable?) that there will be some accidental reversions if the
changes here did not correctly mimic the existing filter testing.

In any case, previously filter ids and parameters were of type
unsigned int. In order to support the more general Zarr filter
model, this was all converted to char*. The old HDF5-specific
unsigned int operations are still supported, but they are now
wrappers around the new char*-based nc_filterx_XXX functions.

This entailed at least the following changes:
1. Added the files libdispatch/dfilterx.c and include/ncfilter.h.
2. Moved some filterx utilities to libdispatch/daux.c.
3. Added a new entry, "filter_actions", to the NCDispatch table
   and bumped the table version.
4. Created an (overly complex) set of structs to support funneling
   all of the filterx operations through the single dispatch
   "filter_actions" entry.
5. Moved common code from libhdf5 to libsrc4 so that it is accessible
   to nczarr.

Changes directly related to Zarr:
1. Modified CMakeLists.txt and configure.ac to support both C and C++
   -- this is in support of S3 access via the aws-sdk libraries.
2. Defined a size64_t type to support nczarr.
3. Reworked more of libdispatch/dinfermodel.c to
   support Zarr and to regularize the structure of the fragments
   section of a URL.

Changes not directly related to Zarr:
1. Made client-side filter registration conditional, with default off.
2. Hacked include/nc4internal.h to make some flags added by Ed unique:
   e.g. NC_CREAT, NC_INDEF, etc.
3. Cleaned up include/nchttp.h and libdispatch/dhttp.c.
4. Misc. changes to support compiling under Visual Studio, including
   better testing under Windows for dirent.h and opendir and closedir.
5. Misc. changes to the oc2 code to support various libcurl CURLOPT flags
   and to centralize error reporting.
6. By default, suppressed the vlen tests that have unfixed memory leaks;
   added an option to enable them.
7. Made part of the nc_test/test_byterange.sh test contingent on
   remotetest.unidata.ucar.edu being accessible.

Changes left to do:
1. Fix the provenance code; it is too HDF5-specific.
#include "config.h"
#include "ncdispatch.h"

#ifdef USE_PNETCDF
#include <pnetcdf.h> /* for ncmpi_strerror() */
#endif
/** @internal The version string for the library, used by
 * nc_inq_libvers(). */
static const char nc_libvers[] = PACKAGE_VERSION " of " __DATE__ " " __TIME__ " $";
/**
Return the library version.

\returns short string that contains the version information for the
library.
*/
const char *
nc_inq_libvers(void)
{
    return nc_libvers;
}
/*! NetCDF Error Handling

\addtogroup error NetCDF Error Handling
NetCDF functions return a non-zero status code on error.

Each netCDF function returns an integer status value. If the returned
status value indicates an error, you may handle it in any way desired,
from printing an associated error message and exiting to ignoring the
error indication and proceeding (not recommended!). For simplicity,
the examples in this guide check the error status and call a separate
function, handle_err(), to handle any errors. One possible definition
of handle_err() can be found within the documentation of
nc_strerror().

The nc_strerror() function is available to convert a returned integer
error status into an error message string.

Occasionally, low-level I/O errors may occur in a layer below the
netCDF library. For example, if a write operation causes you to exceed
disk quotas or to attempt to write to a device that is no longer
available, you may get an error from a layer below the netCDF library,
but the resulting write error will still be reflected in the returned
status value.
*/
/** \{ */
/*! Given an error number, return an error message.

This function returns a static reference to an error message string
corresponding to an integer netCDF error status or to a system error
number, presumably returned by a previous call to some other netCDF
function. The error codes are defined in netcdf.h.

\param ncerr1 error number

\returns short string containing error message.

Here is an example of a simple error handling function that uses
nc_strerror() to print the error message corresponding to the netCDF
error status returned from any netCDF function call and then exit:

\code
#include <netcdf.h>
   ...
void handle_error(int status) {
    if (status != NC_NOERR) {
        fprintf(stderr, "%s\n", nc_strerror(status));
        exit(-1);
    }
}
\endcode
*/
const char *nc_strerror(int ncerr1)
{
    /* System error? */
    if(NC_ISSYSERR(ncerr1))
    {
        const char *cp = (const char *) strerror(ncerr1);
        if(cp == NULL)
            return "Unknown Error";
        return cp;
    }

    /* If we're here, this is a netcdf error code. */
    switch(ncerr1)
    {
    case NC_NOERR:
        return "No error";
    case NC_EBADID:
        return "NetCDF: Not a valid ID";
    case NC_ENFILE:
        return "NetCDF: Too many files open";
    case NC_EEXIST:
        return "NetCDF: File exists && NC_NOCLOBBER";
    case NC_EINVAL:
        return "NetCDF: Invalid argument";
    case NC_EPERM:
        return "NetCDF: Write to read only";
    case NC_ENOTINDEFINE:
        return "NetCDF: Operation not allowed in data mode";
    case NC_EINDEFINE:
        return "NetCDF: Operation not allowed in define mode";
    case NC_EINVALCOORDS:
        return "NetCDF: Index exceeds dimension bound";
    case NC_EMAXDIMS:
        return "NetCDF: NC_MAX_DIMS exceeded"; /* not enforced after 4.5.0 */
    case NC_ENAMEINUSE:
        return "NetCDF: String match to name in use";
    case NC_ENOTATT:
        return "NetCDF: Attribute not found";
    case NC_EMAXATTS:
        return "NetCDF: NC_MAX_ATTRS exceeded"; /* not enforced after 4.5.0 */
    case NC_EBADTYPE:
        return "NetCDF: Not a valid data type or _FillValue type mismatch";
    case NC_EBADDIM:
        return "NetCDF: Invalid dimension ID or name";
    case NC_EUNLIMPOS:
        return "NetCDF: NC_UNLIMITED in the wrong index";
    case NC_EMAXVARS:
        return "NetCDF: NC_MAX_VARS exceeded"; /* not enforced after 4.5.0 */
    case NC_ENOTVAR:
        return "NetCDF: Variable not found";
    case NC_EGLOBAL:
        return "NetCDF: Action prohibited on NC_GLOBAL varid";
    case NC_ENOTNC:
        return "NetCDF: Unknown file format";
    case NC_ESTS:
        return "NetCDF: In Fortran, string too short";
    case NC_EMAXNAME:
        return "NetCDF: NC_MAX_NAME exceeded";
    case NC_EUNLIMIT:
        return "NetCDF: NC_UNLIMITED size already in use";
    case NC_ENORECVARS:
        return "NetCDF: nc_rec op when there are no record vars";
    case NC_ECHAR:
        return "NetCDF: Attempt to convert between text & numbers";
    case NC_EEDGE:
        return "NetCDF: Start+count exceeds dimension bound";
    case NC_ESTRIDE:
        return "NetCDF: Illegal stride";
    case NC_EBADNAME:
        return "NetCDF: Name contains illegal characters";
    case NC_ERANGE:
        return "NetCDF: Numeric conversion not representable";
    case NC_ENOMEM:
        return "NetCDF: Memory allocation (malloc) failure";
    case NC_EVARSIZE:
        return "NetCDF: One or more variable sizes violate format constraints";
    case NC_EDIMSIZE:
        return "NetCDF: Invalid dimension size";
    case NC_ETRUNC:
        return "NetCDF: File likely truncated or possibly corrupted";
    case NC_EAXISTYPE:
        return "NetCDF: Illegal axis type";
    case NC_EDAP:
        return "NetCDF: DAP failure";
    case NC_ECURL:
        return "NetCDF: libcurl failure";
    case NC_EIO:
        return "NetCDF: I/O failure";
    case NC_ENODATA:
        return "NetCDF: Variable has no data";
    case NC_EDAPSVC:
        return "NetCDF: DAP server error";
    case NC_EDAS:
        return "NetCDF: Malformed or inaccessible DAP DAS";
    case NC_EDDS:
        return "NetCDF: Malformed or inaccessible DAP2 DDS or DAP4 DMR response";
    case NC_EDATADDS:
        return "NetCDF: Malformed or inaccessible DAP2 DATADDS or DAP4 DAP response";
    case NC_EDAPURL:
        return "NetCDF: Malformed URL";
    case NC_EDAPCONSTRAINT:
        return "NetCDF: Malformed or unexpected Constraint";
    case NC_ETRANSLATION:
        return "NetCDF: Untranslatable construct";
    case NC_EACCESS:
        return "NetCDF: Access failure";
    case NC_EAUTH:
        return "NetCDF: Authorization failure";
    case NC_ENOTFOUND:
        return "NetCDF: file not found";
    case NC_ECANTREMOVE:
        return "NetCDF: cannot delete file";
    case NC_EINTERNAL:
        return "NetCDF: internal library error; Please contact Unidata support";
    case NC_EPNETCDF:
        return "NetCDF: PnetCDF error";
    case NC_EHDFERR:
        return "NetCDF: HDF error";
    case NC_ECANTREAD:
        return "NetCDF: Can't read file";
    case NC_ECANTWRITE:
        return "NetCDF: Can't write file";
    case NC_ECANTCREATE:
        return "NetCDF: Can't create file";
    case NC_EFILEMETA:
        return "NetCDF: Can't add HDF5 file metadata";
    case NC_EDIMMETA:
        return "NetCDF: Can't define dimensional metadata";
    case NC_EATTMETA:
        return "NetCDF: Can't open HDF5 attribute";
    case NC_EVARMETA:
        return "NetCDF: Problem with variable metadata.";
    case NC_ENOCOMPOUND:
        return "NetCDF: Can't create HDF5 compound type";
    case NC_EATTEXISTS:
        return "NetCDF: Attempt to create attribute that already exists";
    case NC_ENOTNC4:
        return "NetCDF: Attempting netcdf-4 operation on netcdf-3 file";
    case NC_ESTRICTNC3:
        return "NetCDF: Attempting netcdf-4 operation on strict nc3 netcdf-4 file";
    case NC_ENOTNC3:
        return "NetCDF: Attempting netcdf-3 operation on netcdf-4 file";
    case NC_ENOPAR:
        return "NetCDF: Parallel operation on file opened for non-parallel access";
    case NC_EPARINIT:
        return "NetCDF: Error initializing for parallel access";
    case NC_EBADGRPID:
        return "NetCDF: Bad group ID";
    case NC_EBADTYPID:
        return "NetCDF: Bad type ID";
    case NC_ETYPDEFINED:
        return "NetCDF: Type has already been defined and may not be edited";
    case NC_EBADFIELD:
        return "NetCDF: Bad field ID";
    case NC_EBADCLASS:
        return "NetCDF: Bad class";
    case NC_EMAPTYPE:
        return "NetCDF: Mapped access for atomic types only";
    case NC_ELATEFILL:
        return "NetCDF: Attempt to define fill value when data already exists.";
    case NC_ELATEDEF:
        return "NetCDF: Attempt to define var properties, like deflate, after enddef.";
    case NC_EDIMSCALE:
        return "NetCDF: Problem with HDF5 dimscales.";
    case NC_ENOGRP:
        return "NetCDF: No group found.";
    case NC_ESTORAGE:
        return "NetCDF: Cannot specify both contiguous and chunking.";
    case NC_EBADCHUNK:
        return "NetCDF: Bad chunk sizes.";
    case NC_ENOTBUILT:
        return "NetCDF: Attempt to use feature that was not turned on "
               "when netCDF was built.";
    case NC_EDISKLESS:
        return "NetCDF: Error in using diskless access";
    case NC_EFILTER:
        return "NetCDF: Filter error: bad id or parameters or duplicate filter";
    case NC_ENOFILTER:
        return "NetCDF: Filter error: unimplemented filter encountered";
    case NC_ECANTEXTEND:
        return "NetCDF: Attempt to extend dataset during NC_INDEPENDENT I/O operation. Use nc_var_par_access to set mode NC_COLLECTIVE before extending variable.";
    case NC_EMPI:
        return "NetCDF: MPI operation failed.";
    case NC_ERCFILE:
        return "NetCDF: RC File Failure.";
    case NC_ENULLPAD:
        return "NetCDF: File fails strict Null-Byte Header check.";
    case NC_EINMEMORY:
        return "NetCDF: In-memory File operation failed.";
    case NC_ENCZARR:
        return "NetCDF: NCZarr error";
    case NC_ES3:
        return "NetCDF: AWS S3 error";
    case NC_EEMPTY:
        return "NetCDF: Attempt to read empty NCZarr map key";
    case NC_EOBJECT:
        return "NetCDF: Some object exists when it should not";
    case NC_ENOOBJECT:
        return "NetCDF: Some object not found";
    case NC_EPLUGIN:
        return "NetCDF: Unclassified failure in accessing a dynamically loaded plugin";
    default:
#ifdef USE_PNETCDF
        /* The behavior of ncmpi_strerror here is to return
           NULL, not a string. This causes problems in (at least)
           the fortran interface. */
        return (ncmpi_strerror(ncerr1) ?
                ncmpi_strerror(ncerr1) :
                "Unknown Error");
#else
        return "Unknown Error";
#endif
    }
}

/** \} */