/*
Copyright (c) 1998-2018 University Corporation for Atmospheric Research/Unidata
See COPYRIGHT for license information.
*/

/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
 * Copyright by The HDF Group.                                               *
 * Copyright by the Board of Trustees of the University of Illinois.         *
 * All rights reserved.                                                      *
 *                                                                           *
 * This file is part of HDF5. The full HDF5 copyright notice, including      *
 * terms governing use, modification, and redistribution, is contained in    *
 * the files COPYING and Copyright.html. COPYING can be found at the root    *
 * of the source code distribution tree; Copyright.html can be found at the  *
 * root level of an installed copy of the electronic HDF5 document set and   *
 * is linked from the top-level documents page. It can also be found at      *
 * http://hdfgroup.org/HDF5/doc/Copyright.html. If you do not have           *
 * access to either file, you may request a copy from help@hdfgroup.org.     *
 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */

#include "config.h"
#include <stdlib.h>
#include <string.h>
#include <assert.h>
#include "netcdf.h"
#include "netcdf_aux.h"
#include "nc4internal.h"
#include "ncoffsets.h"
#include "nclog.h"
#include "ncrc.h"
#include "netcdf_filter.h"

struct NCAUX_FIELD {
    char* name;
    nc_type fieldtype;
    size_t ndims;
    int dimsizes[NC_MAX_VAR_DIMS];
    size_t size;
    size_t offset;
    size_t alignment;
};

struct NCAUX_CMPD {
    int ncid;
    int mode;
    char* name;
    size_t nfields;
    struct NCAUX_FIELD* fields;
    size_t size;
    size_t offset; /* cumulative as fields are added */
    size_t alignment;
};

static int computefieldinfo(struct NCAUX_CMPD* cmpd);
static int filterspec_cvt(const char* txt, size_t* nparamsp, unsigned int* params);

/**************************************************/
/*
This code is a variant of the H5detect.c code from HDF5.
Author: D. Heimbigner 10/7/2008
*/
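
/* A minimal usage sketch of the compound-builder API below (illustrative
   only; "obs_t", "x", "v", and ncid are hypothetical). It builds a netCDF
   compound type equivalent to the C struct { int x; double v[3]; }:

       void* tag = NULL;
       nc_type xtype;
       int dims3[1] = {3};
       int stat = ncaux_begin_compound(ncid, "obs_t", NCAUX_ALIGN_C, &tag);
       if(stat == NC_NOERR) stat = ncaux_add_field(tag, "x", NC_INT, 0, NULL);
       if(stat == NC_NOERR) stat = ncaux_add_field(tag, "v", NC_DOUBLE, 1, dims3);
       if(stat == NC_NOERR)
           stat = ncaux_end_compound(tag, &xtype);
       else
           ncaux_abort_compound(tag);

   NCAUX_ALIGN_C is assumed to be the 'c' alignment mode from netcdf_aux.h,
   the only mode computefieldinfo() currently supports. */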
EXTERNL int
ncaux_begin_compound(int ncid, const char *name, int alignmode, void** tagp)
{
#ifdef USE_NETCDF4
    int status = NC_NOERR;
    struct NCAUX_CMPD* cmpd = NULL;

    if(tagp) *tagp = NULL;

    cmpd = (struct NCAUX_CMPD*)calloc(1,sizeof(struct NCAUX_CMPD));
    if(cmpd == NULL) {status = NC_ENOMEM; goto fail;}
    cmpd->ncid = ncid;
    cmpd->mode = alignmode;
    cmpd->nfields = 0;
    cmpd->name = strdup(name);
    if(cmpd->name == NULL) {status = NC_ENOMEM; goto fail;}

    if(tagp) {
        *tagp = (void*)cmpd;
    } else { /* No tag pointer given; reclaim cmpd (including cmpd->name) to avoid a leak. */
        ncaux_abort_compound((void*)cmpd);
    }
    return status;

fail:
    ncaux_abort_compound((void*)cmpd);
    return status;
#else
    return NC_ENOTBUILT;
#endif
}

EXTERNL int
ncaux_abort_compound(void* tag)
{
#ifdef USE_NETCDF4
    size_t i; /* cmpd->nfields is a size_t */
    struct NCAUX_CMPD* cmpd = (struct NCAUX_CMPD*)tag;
    if(cmpd == NULL) goto done;
    if(cmpd->name) free(cmpd->name);
    for(i=0;i<cmpd->nfields;i++) {
        struct NCAUX_FIELD* field = &cmpd->fields[i];
        if(field->name) free(field->name);
    }
    if(cmpd->fields) free(cmpd->fields);
    free(cmpd);

done:
    return NC_NOERR;
#else
    return NC_ENOTBUILT;
#endif
}

EXTERNL int
ncaux_add_field(void* tag, const char *name, nc_type field_type,
                int ndims, const int* dimsizes)
{
#ifdef USE_NETCDF4
    int i;
    int status = NC_NOERR;
    struct NCAUX_CMPD* cmpd = (struct NCAUX_CMPD*)tag;
    struct NCAUX_FIELD* newfields = NULL;
    struct NCAUX_FIELD* field = NULL;

    if(cmpd == NULL) goto done;
    if(ndims < 0) {status = NC_EINVAL; goto done;}
    for(i=0;i<ndims;i++) {
        if(dimsizes[i] <= 0) {status = NC_EINVAL; goto done;}
    }
    if(cmpd->fields == NULL) {
        newfields = (struct NCAUX_FIELD*)calloc(1,sizeof(struct NCAUX_FIELD));
    } else {
        /* Grow by one element; note the parentheses around (nfields+1) */
        newfields = (struct NCAUX_FIELD*)realloc(cmpd->fields,(cmpd->nfields+1)*sizeof(struct NCAUX_FIELD));
    }
    if(newfields == NULL) {status = NC_ENOMEM; goto done;}
    cmpd->fields = newfields;
    field = &cmpd->fields[cmpd->nfields]; /* the new slot is at index nfields */
    memset(field,0,sizeof(struct NCAUX_FIELD)); /* realloc does not zero the new slot */
    field->name = strdup(name);
    field->fieldtype = field_type;
    if(field->name == NULL) {status = NC_ENOMEM; goto done;}
    field->ndims = (size_t)ndims;
    if(ndims > 0)
        memcpy(field->dimsizes,dimsizes,sizeof(int)*field->ndims);
    cmpd->nfields++;

done:
    /* On success, newfields is owned by cmpd->fields and must not be freed here. */
    return status;
#else
    return NC_ENOTBUILT;
#endif
}

EXTERNL int
ncaux_end_compound(void* tag, nc_type* idp)
{
#ifdef USE_NETCDF4
    size_t i;
    int status = NC_NOERR;
    struct NCAUX_CMPD* cmpd = (struct NCAUX_CMPD*)tag;

    if(cmpd == NULL) {status = NC_EINVAL; goto done;}

    /* Compute field and compound info */
    status = computefieldinfo(cmpd);
    if(status != NC_NOERR) goto done;

    status = nc_def_compound(cmpd->ncid, cmpd->size, cmpd->name, idp);
    if(status != NC_NOERR) goto done;

    for(i=0;i<cmpd->nfields;i++) {
        struct NCAUX_FIELD* field = &cmpd->fields[i];
        if(field->ndims == 0) { /* scalar fields use the plain insert */
            status = nc_insert_compound(cmpd->ncid, *idp, field->name,
                                        field->offset, field->fieldtype);
        } else { /* fields with ndims > 0 need the array variant */
            status = nc_insert_array_compound(cmpd->ncid, *idp, field->name,
                                              field->offset, field->fieldtype,
                                              (int)field->ndims, field->dimsizes);
        }
        if(status != NC_NOERR) goto done;
    }

done:
    return status;
#else
    return NC_ENOTBUILT;
#endif
}

/**************************************************/

/**
@param ncclass type class for which alignment is requested; excludes ENUM|COMPOUND
@param alignp return the computed alignment here; may be NULL
@return NC_NOERR if the alignment could be determined, NC_EINVAL otherwise
*/
int
ncaux_class_alignment(int ncclass, size_t* alignp)
{
    int stat = NC_NOERR;
    size_t align = 0;
    if(ncclass <= NC_MAX_ATOMIC_TYPE || ncclass == NC_VLEN || ncclass == NC_OPAQUE) {
        stat = NC_class_alignment(ncclass,&align);
    } else {
        nclog(NCLOGERR,"ncaux_class_alignment: class %d; alignment cannot be determined",ncclass);
    }
    if(alignp) *alignp = align;
    if(align == 0) stat = NC_EINVAL;
    return stat;
}
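
/* Example (illustrative sketch): query the alignment of a fixed-size class:

       size_t align = 0;
       if(ncaux_class_alignment(NC_OPAQUE, &align) == NC_NOERR) {
           ... align now holds the in-memory alignment for NC_OPAQUE data ...
       }

   ENUM and COMPOUND are rejected above because their alignment depends on
   the particular type definition, not just on the type class; see
   computefieldinfo() below. */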
#ifdef USE_NETCDF4
/* Find first primitive field of a possibly nested sequence of compounds */
static nc_type
findfirstfield(int ncid, nc_type xtype)
{
    int status = NC_NOERR;
    nc_type fieldtype = xtype;
    if(xtype <= NC_MAX_ATOMIC_TYPE) goto done;

    status = nc_inq_compound_fieldtype(ncid, xtype, 0, &fieldtype);
    if(status != NC_NOERR) goto done;
    fieldtype = findfirstfield(ncid,fieldtype);

done:
    return (status == NC_NOERR?fieldtype:NC_NAT);
}
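
/* E.g., given compound { compound { char c; int i; } inner; double d; },
   field 0 is "inner" and its first primitive field is the char, so
   findfirstfield() returns NC_CHAR. computefieldinfo() below uses that
   first primitive type's alignment as the alignment of the whole compound,
   which simplifies the usual C rule (where a struct's alignment is the
   maximum alignment over all of its members). */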

static size_t
getpadding(size_t offset, size_t alignment)
{
    size_t rem = (alignment==0?0:(offset % alignment));
    size_t pad = (rem==0?0:(alignment - rem));
    return pad;
}
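
/* Worked example: for offset=5 and alignment=4, rem = 5 % 4 = 1 and
   pad = 4 - 1 = 3, so the next aligned offset is 5 + 3 = 8. An alignment
   of 0 is treated as "no padding required". */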

static size_t
dimproduct(size_t ndims, int* dimsizes)
{
    size_t i; /* match the size_t ndims to avoid a signed/unsigned mismatch */
    size_t product = 1;
    for(i=0;i<ndims;i++) product *= (size_t)dimsizes[i];
    return product;
}
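
/* E.g., ndims=3 with dimsizes {2,3,4} yields 2*3*4 = 24: the number of
   elements in one instance of such an array field. */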

static int
computefieldinfo(struct NCAUX_CMPD* cmpd)
{
    size_t i;
    int status = NC_NOERR;
    size_t offset = 0;
    size_t totaldimsize;

    /* Assign the sizes for the fields */
    for(i=0;i<cmpd->nfields;i++) {
        struct NCAUX_FIELD* field = &cmpd->fields[i];
        status = nc_inq_type(cmpd->ncid,field->fieldtype,NULL,&field->size);
        if(status != NC_NOERR) goto done;
        totaldimsize = dimproduct(field->ndims,field->dimsizes);
        field->size *= totaldimsize;
    }

    for(offset=0,i=0;i<cmpd->nfields;i++) {
        struct NCAUX_FIELD* field = &cmpd->fields[i];
        int alignment = 0;
        nc_type firsttype = findfirstfield(cmpd->ncid,field->fieldtype);

        /* only support 'c' alignment for now */
        switch (field->fieldtype) {
        case NC_OPAQUE:
            field->alignment = 1;
            break;
        case NC_ENUM:
            status = ncaux_type_alignment(firsttype,cmpd->ncid,&field->alignment);
            break;
        case NC_VLEN: /*fall thru*/
        case NC_COMPOUND:
|
            status = ncaux_type_alignment(firsttype,cmpd->ncid,&field->alignment);
            break;
        default:
            status = ncaux_type_alignment(field->fieldtype,cmpd->ncid,&field->alignment);
            break;
        }
        offset += getpadding(offset,alignment);
        field->offset = offset;
        offset += field->size;
    }
    cmpd->size = offset;
    cmpd->alignment = cmpd->fields[0].alignment;

done:
    return status;
}

#endif /*USE_NETCDF4*/
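For reference, the layout loop above pads each field out to its alignment using the standard formula; a minimal sketch of what a getpadding() helper computes (the actual helper is defined earlier in this file and may differ in detail):
````
/* Sketch: bytes needed to advance offset to the next multiple of alignment. */
static size_t
getpadding(size_t offset, size_t alignment)
{
    size_t rem = (alignment == 0 ? 0 : (offset % alignment));
    return (rem == 0 ? 0 : (alignment - rem));
}
````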
/**************************************************/
/* Forward */

#define NUMCHAR "0123456789"
#define LPAREN '('
#define RPAREN ')'
#define LBRACK '['
#define RBRACK ']'

/* Look at q0 and q1 to determine type */
static int
gettype(const char q0, const char q1, int* isunsignedp)
{
    int type = 0;
    int isunsigned = 0;
    char typechar;

    isunsigned = (q0 == 'u' || q0 == 'U');
    if(q1 == '\0')
        typechar = q0; /* we were given only a single char */
    else if(isunsigned)
        typechar = q1; /* we have something like Ux as the tag */
    else
        typechar = q1; /* look at last char for tag */
    switch (typechar) {
    case 'f': case 'F': case '.': type = 'f'; break; /* float */
    case 'd': case 'D': type = 'd'; break; /* double */
    case 'b': case 'B': type = 'b'; break; /* byte */
    case 's': case 'S': type = 's'; break; /* short */
    case 'l': case 'L': type = 'l'; break; /* long long */
    case '0': case '1': case '2': case '3': case '4':
    case '5': case '6': case '7': case '8': case '9': type = 'i'; break;
    case 'u': case 'U': type = 'i'; isunsigned = 1; break; /* unsigned int */
    case '\0': type = 'i'; break;
    default: break;
    }
    if(isunsignedp) *isunsignedp = isunsigned;
    return type;
}
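Illustrative calls (hypothetical snippet, not part of this file):
````
int isu = 0;
int t;
t = gettype('U', 'L', &isu);  /* tag "UL": t == 'l', isu == 1 (unsigned long long) */
t = gettype('7', '\0', &isu); /* bare digit, no tag: t == 'i', isu == 0 (plain int) */
````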
#ifdef WORDS_BIGENDIAN
/* Byte swap an 8-byte integer in place */
static void
byteswap8(unsigned char* mem)
{
    unsigned char c;
    c = mem[0];
    mem[0] = mem[7];
    mem[7] = c;
    c = mem[1];
    mem[1] = mem[6];
    mem[6] = c;
    c = mem[2];
    mem[2] = mem[5];
    mem[5] = c;
    c = mem[3];
    mem[3] = mem[4];
    mem[4] = c;
}

/* Byte swap a 4-byte integer in place */
static void
byteswap4(unsigned char* mem)
{
    unsigned char c;
    c = mem[0];
    mem[0] = mem[3];
    mem[3] = c;
    c = mem[1];
    mem[1] = mem[2];
    mem[2] = c;
}
#endif
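For example (hypothetical snippet): byteswap8 reverses all eight bytes, while byteswap4 reverses one 4-byte group in place.
````
unsigned char v[8] = {1,2,3,4,5,6,7,8};
byteswap8(v); /* v == {8,7,6,5,4,3,2,1} */
byteswap4(v); /* first group re-reversed: v == {5,6,7,8,4,3,2,1} */
````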
/**************************************************/
/* Moved here from netcdf_filter.h */

/*
This function implements the 8-byte conversion algorithms for HDF5.
Before calling *nc_def_var_filter* (unless *NC_parsefilterspec* was used),
the client must call this function with the decode argument set to 0.
Inside the filter code, this function should be called with the decode
argument set to 1.

* @param mem8 is a pointer to the 8-byte value to fix.
* @param decode is 1 if the function should apply the 8-byte decoding
  algorithm, else apply the encoding algorithm.
*/

void
ncaux_h5filterspec_fix8(unsigned char* mem8, int decode)
{
#ifdef WORDS_BIGENDIAN
    if(decode) { /* Apply inverse of the encode case */
        byteswap4(mem8); /* step 1: byte-swap each piece */
        byteswap4(mem8+4);
        byteswap8(mem8); /* step 2: convert to little endian format */
    } else { /* encode */
        byteswap8(mem8); /* step 1: convert to little endian format */
        byteswap4(mem8); /* step 2: byte-swap each piece */
        byteswap4(mem8+4);
    }
#else /* Little endian */
    /* No action is necessary */
#endif
}
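A minimal client-side sketch, assuming a filter that takes a single 64-bit parameter (the value is hypothetical; memcpy requires <string.h>): the raw 8 bytes are packed into two unsigned ints and encoded before being handed to nc_def_var_filter.
````
unsigned long long v = 0x0102030405060708ULL; /* hypothetical 64-bit parameter */
unsigned int params[2];
memcpy(params, &v, 8);                              /* raw 8-byte value */
ncaux_h5filterspec_fix8((unsigned char*)params, 0); /* decode==0: encode for nc_def_var_filter */
````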
/*
Parse a filter spec string into a filter id and a vector of
unsigned int parameters.
Note that this differs from the usual case in that the
function is called once to get both the number of parameters
and the parameters themselves (hence the unsigned int** paramsp).
@param txt - a string containing the spec as a sequence of
        constants separated by commas, where the first constant
        is the filter id and the rest are parameters.
@param idp - store the parsed filter id here
@param nparamsp - store the number of parameters here
@param paramsp - store the vector of parameters here; caller frees.
@return NC_NOERR if parse succeeded
@return NC_EINVAL otherwise
*/

EXTERNL int
ncaux_h5filterspec_parse(const char* txt, unsigned int* idp, size_t* nparamsp, unsigned int** paramsp)
{
    int i,stat = NC_NOERR;
    char* p;
    char* sdata0 = NULL; /* what to free */
    char* sdata = NULL; /* sdata0 with leading prefix skipped */
    size_t nparams; /* no. of comma delimited params */
    size_t nactual; /* actual number of unsigned int's */
    const char* sid = NULL;
    unsigned int filterid = 0;
    unsigned int* params = NULL;
    size_t len;

    if(txt == NULL)
        {stat = NC_EINVAL; goto done;}
    len = strlen(txt);
    if(len == 0)
        {stat = NC_EINVAL; goto done;}

    if((sdata0 = (char*)calloc(1,len+1+1))==NULL)
        {stat = NC_ENOMEM; goto done;}
    memcpy(sdata0,txt,len);
    sdata = sdata0;

    /* Count number of parameters + id and delimit */
    p = sdata;
    for(nparams=0;;nparams++) {
        char* q = strchr(p,',');
        if(q == NULL) break;
        *q++ = '\0';
        p = q;
    }
    nparams++; /* for final piece */

    if(nparams == 0)
        {stat = NC_EINVAL; goto done;} /* no id and no parameters */

    p = sdata;

    /* Extract the filter id */
    sid = p;
    if((sscanf(sid,"%u",&filterid)) != 1) {stat = NC_EINVAL; goto done;}
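    /* At this point sdata has been split in place into NUL-terminated
       pieces: e.g. the spec "307,9,4" becomes the pieces "307", "9", "4"
       with nparams == 3 (the id plus two parameters). */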
    nparams--;

    /* skip past the filter id */
    p = p + strlen(p) + 1;

    /* Allocate the max needed space (assume all params are 64 bit) */
    if((params = (unsigned int*)calloc(sizeof(unsigned int),(nparams)*2))==NULL)
        {stat = NC_ENOMEM; goto done;}

    /* walk and capture */
    for(nactual=0,i=0;i<nparams;i++) { /* step thru param strings */
        size_t count = 0;
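        /* filterspec_cvt() converts one textual constant into one or two
           unsigned ints (64-bit constants occupy two slots), which is why
           params was allocated at 2*nparams above. */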
        len = strlen(p);
        /* skip leading white space */
        while(strchr(" ",*p) != NULL) {p++; len--;}
        if((stat = filterspec_cvt(p,&count,params+nactual))) goto done;
        nactual += count;
        p = p + strlen(p) + 1; /* move to next param string */
    }

    /* Now return results */
    if(idp) *idp = filterid;
    if(nparamsp) *nparamsp = nactual;
    if(paramsp) {*paramsp = params; params = NULL;}
done:
    nullfree(params);
    nullfree(sdata0);
    return stat;
}
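
/*
Usage sketch (illustrative only; the spec string and filter id below are
hypothetical, not taken from this file). A single spec of the form
"<filterid>,<param>,..." can be parsed and the returned parameter vector
freed by the caller:

    unsigned int id = 0;
    size_t nparams = 0;
    unsigned int* params = NULL;
    if(ncaux_h5filterspec_parse("307,9,4",&id,&nparams,&params) == NC_NOERR) {
        ... use id, nparams, and params ...
        nullfree(params); // caller owns the returned vector
    }
*/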

/*
Parse a filter parameter string into a sequence of unsigned ints.

@param txt - a string containing the parameter string.
@param nuiparamsp - store the number of unsigned ints here.
@param uiparams - caller-allocated vector into which the unsigned ints are stored.
@return NC_NOERR if parse succeeded
@return NC_EINVAL otherwise
*/

EXTERNL int
ncaux_h5filterspec_parse_parameter(const char* txt, size_t* nuiparamsp, unsigned int* uiparams)
{
    int stat = NC_NOERR;
    char* p;
    char* sdata0 = NULL; /* what to free */
    char* sdata = NULL; /* sdata0 with leading prefix skipped */
    size_t nuiparams = 0;
    size_t len;

    if(txt == NULL)
        {stat = NC_EINVAL; goto done;}
    len = strlen(txt);
    if(len == 0)
        {stat = NC_EINVAL; goto done;}

    /* Make a modifiable copy; the extra byte yields a double nul terminator */
    if((sdata0 = (char*)calloc(1,len+1+1))==NULL)
        {stat = NC_ENOMEM; goto done;}
    memcpy(sdata0,txt,len);
    sdata = sdata0;

    p = sdata;

    nuiparams = 0;
    len = strlen(p);
    /* skip leading white space; test *p first so an all-blank string
       cannot run past the terminating nul (strchr matches the nul) */
    while(*p != '\0' && strchr(" ",*p) != NULL) {p++; len--;}
    if((stat = filterspec_cvt(p,&nuiparams,uiparams))) goto done;
    /* Now return results */
    if(nuiparamsp) *nuiparamsp = nuiparams;
done:
    nullfree(sdata0);
    return stat;
}
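
/*
Usage sketch (illustrative only). The caller supplies the output buffer;
its size here is an assumption, chosen because a single parameter produces
at most two unsigned ints (see filterspec_cvt below):

    unsigned int uiparams[2];
    size_t nui = 0;
    if(ncaux_h5filterspec_parse_parameter("9",&nui,uiparams) == NC_NOERR) {
        // for a simple 32-bit constant, nui == 1 and uiparams[0] == 9
    }
*/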

/*
Parse a string containing multiple '|' separated filter specs.
Use a vector of NC_H5_Filterspec structs to return results.

@param txt0 - a string containing the list of filter specs.
@param formatp - store any leading format integer here.
@param nspecsp - # of parsed specs.
@param vectorp - pointer to hold the vector of parsed specs; caller frees.
@return NC_NOERR if parse succeeded
@return NC_EINVAL if bad parameters or parse failed
*/

EXTERNL int
ncaux_h5filterspec_parselist(const char* txt0, int* formatp, size_t* nspecsp, NC_H5_Filterspec*** vectorp)
{
    int stat = NC_NOERR;
    int format = 0;
    size_t len = 0;
    size_t nspecs = 0;
    NC_H5_Filterspec** vector = NULL;
    char* spec0 = NULL; /* what to free */
    char* spec = NULL; /* without prefix */
    char* p = NULL;
    char* q = NULL;

    if(txt0 == NULL) return NC_EINVAL;
    /* Duplicate txt0 so we can modify it */
    len = strlen(txt0);
    if((spec = calloc(1,len+1+1)) == NULL) {stat = NC_ENOMEM; goto done;}
    memcpy(spec,txt0,len); /* Note double ending nul */
    spec0 = spec; /* remember the allocation; spec may be advanced below */

    /* See if there is a prefix '[format]' tag */
    if(spec[0] == LBRACK) {
        p = spec + 1;
        q = strchr(p,RBRACK);
        if(q == NULL) {stat = NC_EINVAL; goto done;}
        *q++ = '\0'; /* delimit tag */
        if(sscanf(p,"%d",&format) != 1) {stat = NC_EINVAL; goto done;}
        spec = q; /* skip tag wrt later processing */
    }

    /* pass 1: count number of specs */
    p = spec;
    nspecs = 0;
    while(*p) {
        q = strchr(p,'|');
        if(q == NULL) q = p + strlen(p); /* fake it */
        nspecs++;
        p = q + 1;
    }
    if(nspecs > 0) {
        size_t count = 0;
        if((vector = (NC_H5_Filterspec**)calloc(sizeof(NC_H5_Filterspec*),nspecs)) == NULL)
            {stat = NC_ENOMEM; goto done;}
        /* pass 2: parse */
        p = spec;
        for(count=0;count<nspecs;count++) {
            /* fsp: renamed so it does not shadow the spec string above */
            NC_H5_Filterspec* fsp = (NC_H5_Filterspec*)calloc(1,sizeof(NC_H5_Filterspec));
            if(fsp == NULL) {stat = NC_ENOMEM; goto done;}
            vector[count] = fsp;
            q = strchr(p,'|');
            if(q == NULL) q = p + strlen(p); /* fake it */
            *q = '\0';
            if((stat=ncaux_h5filterspec_parse(p,&fsp->filterid,&fsp->nparams,&fsp->params))) goto done;
            p = q+1; /* ok because of double nul */
        }
    }
    if(formatp) *formatp = format;
    if(nspecsp) *nspecsp = nspecs;
    if(vectorp) {*vectorp = vector; vector = NULL;}
done:
    nullfree(spec0); /* free the original allocation, not the possibly advanced spec */
    if(vector) {
        size_t i;
        for(i=0;i<nspecs;i++)
            ncaux_h5filterspec_free(vector[i]);
        nullfree(vector);
    }
    return stat;
}
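
/*
Usage sketch (illustrative only; the filter ids and parameters are
hypothetical). A list with a leading "[format]" tag and two '|' separated
specs is parsed, then each spec and the vector itself are released:

    int format = 0;
    size_t nspecs = 0, i;
    NC_H5_Filterspec** specs = NULL;
    if(ncaux_h5filterspec_parselist("[0]307,9|1,5",&format,&nspecs,&specs) == NC_NOERR) {
        ... use specs[0..nspecs-1] ...
        for(i=0;i<nspecs;i++)
            ncaux_h5filterspec_free(specs[i]);
        nullfree(specs);
    }
*/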

/*
Free a parsed filter spec, including its params vector.
@param f - the NC_H5_Filterspec instance to reclaim; may be NULL.
*/

EXTERNL void
ncaux_h5filterspec_free(NC_H5_Filterspec* f)
{
    if(f) nullfree(f->params);
    nullfree(f);
}

/*
Convert a parameter string to one or two unsigned ints.
@param txt - (in) string constant
@param nparamsp - (out) # of unsigned ints produced
@param params - (out) produced unsigned ints
@return NC_NOERR if parse succeeded
@return NC_EINVAL if bad parameters or parse failed
*/
|
|
|
|
|
2020-09-28 02:43:46 +08:00
|
|
|
static int
|
|
|
|
filterspec_cvt(const char* txt, size_t* nparamsp, unsigned int* params)
|
This PR adds EXPERIMENTAL support for accessing data in the
cloud using a variant of the Zarr protocol and storage
format. This enhancement is generically referred to as "NCZarr".
The data model supported by NCZarr is netcdf-4 minus the user-defined
types and the String type. In this sense it is similar to the CDF-5
data model.
More detailed information about enabling and using NCZarr is
described in the document NUG/nczarr.md and in a
[Unidata Developer's blog entry](https://www.unidata.ucar.edu/blogs/developer/en/entry/overview-of-zarr-support-in).
WARNING: this code has had limited testing, so do use this version
for production work. Also, performance improvements are ongoing.
Note especially the following platform matrix of successful tests:
Platform | Build System | S3 support
------------------------------------
Linux+gcc | Automake | yes
Linux+gcc | CMake | yes
Visual Studio | CMake | no
Additionally, and as a consequence of the addition of NCZarr,
major changes have been made to the Filter API. NOTE: NCZarr
does not yet support filters, but these changes are enablers for
that support in the future. Note that it is possible
(probable?) that there will be some accidental reversions if the
changes here did not correctly mimic the existing filter testing.
In any case, previously filter ids and parameters were of type
unsigned int. In order to support the more general zarr filter
model, this was all converted to char*. The old HDF5-specific,
unsigned int operations are still supported but they are
wrappers around the new, char* based nc_filterx_XXX functions.
This entailed at least the following changes:
1. Added the files libdispatch/dfilterx.c and include/ncfilter.h
2. Some filterx utilities have been moved to libdispatch/daux.c
3. A new entry, "filter_actions" was added to the NCDispatch table
and the version bumped.
4. An overly complex set of structs was created to support funnelling
all of the filterx operations thru a single dispatch
"filter_actions" entry.
5. Move common code to from libhdf5 to libsrc4 so that it is accessible
to nczarr.
Changes directly related to Zarr:
1. Modified CMakeList.txt and configure.ac to support both C and C++
-- this is in support of S3 support via the awd-sdk libraries.
2. Define a size64_t type to support nczarr.
3. More reworking of libdispatch/dinfermodel.c to
support zarr and to regularize the structure of the fragments
section of a URL.
Changes not directly related to Zarr:
1. Make client-side filter registration be conditional, with default off.
2. Hack include/nc4internal.h to make some flags added by Ed be unique:
e.g. NC_CREAT, NC_INDEF, etc.
3. cleanup include/nchttp.h and libdispatch/dhttp.c.
4. Misc. changes to support compiling under Visual Studio including:
* Better testing under windows for dirent.h and opendir and closedir.
5. Misc. changes to the oc2 code to support various libcurl CURLOPT flags
and to centralize error reporting.
6. By default, suppress the vlen tests that have unfixed memory leaks; add option to enable them.
7. Make part of the nc_test/test_byterange.sh test be contingent on remotetest.unidata.ucar.edu being accessible.
Changes Left TO-DO:
1. fix provenance code, it is too HDF5 specific.
2020-06-29 08:02:47 +08:00
|
|
|
{
    int stat = NC_NOERR;
    size_t nparams = 0; /* actual count */
    unsigned long long val64u;
    unsigned int val32u;
    double vald;
    float valf;
    unsigned int* vector;
    unsigned char mem[8];
    int isunsigned = 0;
    int isnegative = 0;
    int type = 0;
    const char* q;
    const char* p = txt;
    size_t len = strlen(p);
    int sstat;

    /* Skip leading white space; stop at end of string so we cannot run past it */
    while(*p == ' ') {p++; len--;}
    /* Get leading sign character, if any */
    if(*p == '-') isnegative = 1;

    /* Get trailing type tag characters */
    switch (len) {
    case 0: stat = NC_EINVAL; goto done; /* empty parameter */
    case 1: case 2:
        q = (p + len) - 1; /* point to last char */
        type = gettype(*q,'\0',&isunsigned);
        break;
    default: /* > 2 => we might have a two-letter tag */
        q = (p + len) - 2;
        type = gettype(*q,*(q+1),&isunsigned);
        break;
    }
    /* Now parse */
    switch (type) {
    case 'b': case 's': case 'i':
        /* Special case for a positive integer, for backward compatibility */
        if(!isnegative)
            sstat = sscanf(p,"%u",&val32u);
        else
            sstat = sscanf(p,"%d",(int*)&val32u);
        if(sstat != 1) {stat = NC_EINVAL; goto done;}
        switch(type) {
        case 'b': val32u = (val32u & 0xFF); break;
        case 's': val32u = (val32u & 0xFFFF); break;
        }
        params[nparams++] = val32u;
        break;
    case 'f':
        sstat = sscanf(p,"%lf",&vald);
        if(sstat != 1) {stat = NC_EINVAL; goto done;}
        valf = (float)vald;
        /* avoid type punning */
        memcpy(&params[nparams++], &valf, sizeof(unsigned int));
        break;
    /* The following are 8-byte values, so we must swap the pieces if this
       is a little-endian machine */
    case 'd':
        sstat = sscanf(p,"%lf",&vald);
        if(sstat != 1) {stat = NC_EINVAL; goto done;}
        memcpy(mem,&vald,sizeof(mem));
        ncaux_h5filterspec_fix8(mem,0);
        vector = (unsigned int*)mem;
        params[nparams++] = vector[0];
        params[nparams++] = vector[1];
        break;
    case 'l': /* long long */
        if(isunsigned)
            sstat = sscanf(p,"%llu",&val64u);
        else
            sstat = sscanf(p,"%lld",(long long*)&val64u);
        if(sstat != 1) {stat = NC_EINVAL; goto done;}
        memcpy(mem,&val64u,sizeof(mem));
        ncaux_h5filterspec_fix8(mem,0);
        vector = (unsigned int*)mem;
        params[nparams++] = vector[0];
        params[nparams++] = vector[1];
        break;
    default:
        {stat = NC_EINVAL; goto done;}
    }
    *nparamsp = nparams;

done:
    return stat;
}
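/* For illustration, the sketch below (kept disabled, and not part of this
   file's API) shows what the tagged-parameter encoding above produces.
   The swap8() helper is a hypothetical stand-in for
   ncaux_h5filterspec_fix8(), assumed here to swap the two 4-byte halves
   of an 8-byte value on little-endian machines. */
#if 0
#include <stdio.h>
#include <string.h>

static void swap8(unsigned char* mem)
{
    unsigned int one = 1;
    if(*(unsigned char*)&one == 1) { /* little-endian? */
        unsigned char tmp[4];
        memcpy(tmp, mem, 4);
        memcpy(mem, mem+4, 4);
        memcpy(mem+4, tmp, 4);
    }
}

int main(void)
{
    /* A 'd'-tagged parameter such as "1.5d" occupies two unsigned ints */
    double vald = 1.5;
    unsigned char mem[8];
    unsigned int vector[2];
    unsigned int val32u = 0;

    memcpy(mem, &vald, sizeof(mem));
    swap8(mem);
    memcpy(vector, mem, sizeof(vector));
    printf("1.5d -> 0x%08x 0x%08x\n", vector[0], vector[1]);

    /* A 'b'-tagged parameter such as "-1b" is parsed as an int and
       masked down to one byte */
    sscanf("-1", "%d", (int*)&val32u);
    printf("-1b -> %u\n", (val32u & 0xFF)); /* prints 255 */
    return 0;
}
#endif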
#if 0
/*
Parse a filter spec string into a NC_H5_Filterspec*

@param txt - a string containing the spec as a sequence of
             constants separated by commas.
@param h5specp - store the parsed filter here -- caller frees
@return NC_NOERR if parse succeeded
@return NC_EINVAL otherwise
*/

EXTERNL int
ncaux_filter_parsespec(const char* txt, NC_H5_Filterspec** h5specp)
{
    int stat = NC_NOERR;
    NC_Filterspec* spec = NULL;
    NC_H5_Filterspec* h5spec = NULL;
    size_t len;

    if(txt == NULL)
        {stat = NC_EINVAL; goto done;}
    len = strlen(txt);
    if(len == 0) {stat = NC_EINVAL; goto done;}

    /* Parse as strings */
    if((stat = ncaux_filterspec_parse(txt,&spec))) goto done;
    /* Walk and convert */
    if((stat = ncaux_filterspec_cvt(spec,&h5spec))) goto done;
    /* Now return results */
    if(h5specp != NULL) {*h5specp = h5spec; h5spec = NULL;}

done:
    ncaux_filterspec_free(spec);
    if(h5spec) nullfree(h5spec->params);
    nullfree(h5spec);
    return stat;
}
/*
Parse a string containing multiple '|' separated filter specs.

@param txt0 - a string containing the list of filter specs.
@param nspecsp - # of parsed specs
@param vectorp - pointer to hold vector of parsed specs. Caller frees
@return NC_NOERR if parse succeeded
@return NC_EINVAL if bad parameters or parse failed
*/

EXTERNL int
ncaux_filter_parselist(const char* txt0, size_t* nspecsp, NC_H5_Filterspec*** vectorp)
{
    int stat = NC_NOERR;
    size_t len = 0;
    size_t nspecs = 0;
    NC_H5_Filterspec** vector = NULL;
    char* spec0 = NULL; /* with prefix */
    char* spec = NULL; /* without prefix */
    char* p = NULL;
    char* q = NULL;

    if(txt0 == NULL) return NC_EINVAL;
    /* Duplicate txt0 so we can modify it */
    len = strlen(txt0);
    if((spec = calloc(1,len+1+1)) == NULL) return NC_ENOMEM;
    memcpy(spec,txt0,len); /* Note double ending nul */
    spec0 = spec; /* Save for later free */

    /* See if there is a prefix '[format]' tag; if so, skip it */
    if(spec[0] == LBRACK) {
        q = strchr(spec,RBRACK);
        if(q == NULL) {stat = NC_EINVAL; goto done;} /* malformed tag */
        spec = q + 1; /* skip tag wrt later processing */
    }
    /* pass 1: count number of specs */
    p = spec;
    nspecs = 0;
    while(*p) {
        q = strchr(p,'|');
        if(q == NULL) q = p + strlen(p); /* fake it */
        nspecs++;
        p = q + 1;
    }
    if(nspecs > 0) {
        size_t count = 0;
        /* use calloc so the cleanup below sees NULL for unparsed entries */
        if((vector = (NC_H5_Filterspec**)calloc(nspecs,sizeof(NC_H5_Filterspec*))) == NULL)
            {stat = NC_ENOMEM; goto done;}
        /* pass 2: parse */
        p = spec;
        for(count=0;count<nspecs;count++) {
            NC_H5_Filterspec* aspec = NULL;
            q = strchr(p,'|');
            if(q == NULL) q = p + strlen(p); /* fake it */
            *q = '\0';
            if(ncaux_filter_parsespec(p,&aspec))
                {stat = NC_EINVAL; goto done;}
            vector[count] = aspec; aspec = NULL;
            p = q + 1; /* ok because of double nul */
        }
    }
    if(nspecsp) *nspecsp = nspecs;
    if(vectorp) *vectorp = (nspecs == 0 ? NULL : vector);
    vector = NULL;
done:
    nullfree(spec0);
    if(vector != NULL) {
        size_t k;
        for(k=0;k<nspecs;k++) {
            NC_H5_Filterspec* nfs = vector[k];
            if(nfs == NULL) continue;
            if(nfs->params) free(nfs->params);
            nullfree(nfs);
        }
        free(vector);
    }
    return stat;
}
#endif
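/* The disabled parser above depends on a double terminating NUL so that
   "p = q + 1" remains in-bounds even after the last segment. The following
   standalone sketch (hypothetical helper, not part of the library API)
   isolates that splitting technique; it is kept disabled like the code
   above. */
#if 0
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Split a '|'-separated list; returns the number of segments.
   On success, *segsp holds pointers into one allocated buffer;
   the caller frees segs[0] (the buffer) and then segs itself. */
static size_t split_specs(const char* txt, char*** segsp)
{
    size_t len = strlen(txt);
    size_t nsegs = 0, i;
    char* buf;
    char** segs;
    char *p, *q;

    /* len+2 bytes: the extra byte supplies the second NUL terminator */
    if((buf = calloc(1, len + 1 + 1)) == NULL) return 0;
    memcpy(buf, txt, len);

    /* pass 1: count segments */
    for(p = buf; *p; p = q + 1) {
        q = strchr(p, '|');
        if(q == NULL) q = p + strlen(p); /* last segment */
        nsegs++;
    }
    if(nsegs == 0) {free(buf); return 0;}
    if((segs = malloc(nsegs * sizeof(char*))) == NULL) {free(buf); return 0;}
    /* pass 2: NUL-terminate each segment in place and record it */
    for(i = 0, p = buf; i < nsegs; i++, p = q + 1) { /* q+1 is safe: double NUL */
        q = strchr(p, '|');
        if(q == NULL) q = p + strlen(p);
        *q = '\0';
        segs[i] = p;
    }
    *segsp = segs;
    return nsegs;
}

int main(void)
{
    char** segs = NULL;
    size_t i, n = split_specs("307,9|32015,3", &segs);
    for(i = 0; i < n; i++) printf("spec %zu: %s\n", i, segs[i]);
    if(n > 0) {free(segs[0]); free(segs);} /* segs[0] is the buffer start */
    return 0;
}
#endif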

/**************************************************/
/* Wrappers to export selected functions from libnetcdf */

EXTERNL int
ncaux_readfile(const char* filename, size_t* sizep, void** datap)
{
    int stat = NC_NOERR;
    NCbytes* content = ncbytesnew();
    stat = NC_readfile(filename,content);
    if(stat == NC_NOERR && sizep)
        *sizep = ncbyteslength(content);
    if(stat == NC_NOERR && datap)
        *datap = ncbytesextract(content);
    ncbytesfree(content);
    return stat;
}

EXTERNL int
ncaux_writefile(const char* filename, size_t size, void* content)
{
    return NC_writefile(filename,size,content);
}
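/* A minimal usage sketch for the two wrappers above: read a file into
   memory and write it back out under another name. The file names are
   hypothetical; error handling is reduced to a single check per call. */
#if 0
#include <stdio.h>
#include <stdlib.h>
#include "netcdf.h"
#include "netcdf_aux.h"

int main(void)
{
    size_t size = 0;
    void* data = NULL;
    if(ncaux_readfile("input.bin", &size, &data) != NC_NOERR) return 1;
    printf("read %zu bytes\n", size);
    if(ncaux_writefile("copy.bin", size, data) != NC_NOERR) return 1;
    free(data); /* ncbytesextract transfers buffer ownership to the caller */
    return 0;
}
#endif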
Fix various problems around VLENs
re: https://github.com/Unidata/netcdf-c/issues/541
re: https://github.com/Unidata/netcdf-c/issues/1208
re: https://github.com/Unidata/netcdf-c/issues/2078
re: https://github.com/Unidata/netcdf-c/issues/2041
re: https://github.com/Unidata/netcdf-c/issues/2143
For a long time, there have been known problems with the
management of complex types containing VLENs. This also
involves the string type because it is stored as a VLEN of
chars.
This PR (mostly) fixes this problem. But note that it adds new
functions to netcdf.h (see below) and this may require bumping
the .so number. These new functions can be removed, if desired,
in favor of functions in netcdf_aux.h, but netcdf.h seems the
better place for them because they are intended as alternatives
to the nc_free_vlen and nc_free_string functions already in
netcdf.h.
The term complex type refers to any type that directly or
transitively references a VLEN type: an array of VLENs, a
compound with a VLEN field, and so on.
In order to properly handle instances of these complex types, it
is necessary to have functions that can recursively walk
instances of such types to perform various actions on them. The
term "deep" is also used to mean recursive.
At the moment, the two operations needed by the netcdf library are:
* free'ing an instance of the complex type
* copying an instance of the complex type.
The current library does only shallow free and shallow copy of
complex types. This means that only the top level is properly
free'd or copied, but deep internal blocks in the instance are
not touched.
Note that the term "vector" will be used to mean a contiguous (in
memory) sequence of instances of some type. Given an array with,
say, dimensions 2 X 3 X 4, this will be stored in memory as a
vector of length 2*3*4=24 instances.
The use cases are primarily these.
## nc_get_vars
Suppose one is reading a vector of instances using nc_get_vars
(or nc_get_vara or nc_get_var, etc.). These functions will
return the vector in the top-level memory provided. All
interior blocks (from nested VLENs or strings) will have been
dynamically allocated.
After using this vector of instances, it is necessary to free
(aka reclaim) the dynamically allocated memory, otherwise a
memory leak occurs. So, the recursive reclaim function is used
to walk the returned instance vector and do a deep reclaim of
the data.
Currently functions are defined in netcdf.h that are supposed to
handle this: nc_free_vlen(), nc_free_vlens(), and
nc_free_string(). Unfortunately, these functions only do a
shallow free, so deeply nested instances are not properly
handled by them.
Note that when writing, the provided data is immediately written
internally, so there is no need to copy it. But the caller may need to
reclaim the data it passed into the function.
## nc_put_att
Suppose one is writing a vector of instances as the data of an attribute
using, say, nc_put_att.
Internally, the incoming attribute data must be copied and stored
so that changes to, or reclamation of, the input data will not affect
the attribute.
Again, the code inside the netcdf library does only shallow copying
rather than deep copying. As a result, one sees effects such as described
in Github Issue https://github.com/Unidata/netcdf-c/issues/2143.
Also, after defining the attribute, it may be necessary for the user
to free the data that was provided as input to nc_put_att().
## nc_get_att
Suppose one is reading a vector of instances as the data of an attribute
using, say, nc_get_att.
Internally, the existing attribute data must be copied and returned
to the caller, and the caller is responsible for reclaiming
the returned data.
Again, the code inside the netcdf library does only shallow copying
rather than deep copying. So this can lead to memory leaks and errors
because the deep data is shared between the library and the user.
# Solution
The solution is to build properly recursive reclaim and copy
functions and use those as needed.
These recursive functions are defined in libdispatch/dinstance.c
and their signatures are declared in include/netcdf.h.
For back compatibility, corresponding "ncaux_XXX" functions
are defined in include/netcdf_aux.h.
````
int nc_reclaim_data(int ncid, nc_type xtypeid, void* memory, size_t count);
int nc_reclaim_data_all(int ncid, nc_type xtypeid, void* memory, size_t count);
int nc_copy_data(int ncid, nc_type xtypeid, const void* memory, size_t count, void* copy);
int nc_copy_data_all(int ncid, nc_type xtypeid, const void* memory, size_t count, void** copyp);
````
There are two variants. The first two, nc_reclaim_data() and
nc_copy_data(), assume the top-level vector is managed by the
caller. For reclaim, this is so the user can use, for example, a
statically allocated vector. For copy, it assumes the user
provides the space into which the copy is stored. A usage sketch
follows below.
The second two, nc_reclaim_data_all() and
nc_copy_data_all(), allow the functions to manage the
top level. So for nc_reclaim_data_all, the top level is
assumed to be dynamically allocated and will be free'd by
nc_reclaim_data_all(). The nc_copy_data_all() function
will allocate the top level and return a pointer to it to the
user. The user can later pass that pointer to
nc_reclaim_data_all() to reclaim the instance(s).
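For illustration, here is a minimal sketch (the ncid, varid, typeid, and
NROWS names are hypothetical stand-ins) of the caller-managed variant
when reading a VLEN variable:
````
/* Assume ncid names an open file and varid a variable whose type is
   the VLEN type typeid; NROWS is the number of instances read. */
nc_vlen_t data[NROWS]; /* top-level vector is caller-managed */
int stat = nc_get_var(ncid, varid, data);
if(stat == NC_NOERR) {
    /* ... use data ... */
    /* deep-reclaim the nested allocations; data[] itself is not free'd */
    stat = nc_reclaim_data(ncid, typeid, data, NROWS);
}
````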
# Internal Changes
The netcdf-c library internals are changed to use the proper
reclaim and copy functions. It turns out that the places where
these functions are needed are quite pervasive in the netcdf-c
library code. Using these functions also allows some
simplification of the code since the stdata and vldata fields of
NC_ATT_INFO are no longer needed. Currently this is commented
out using the SEPDATA #define macro. Once the remaining bugs are
fixed, all this code will be removed.
# Known Bugs
1. There is still one known failure that has not been solved.
All the failures revolve around some variant of this .cdl file.
The proximate cause of failure is the use of a VLEN FillValue.
````
netcdf x {
types:
  float(*) row_of_floats ;
dimensions:
  m = 5 ;
variables:
  row_of_floats ragged_array(m) ;
  row_of_floats ragged_array:_FillValue = {-999} ;
data:
  ragged_array = {10, 11, 12, 13, 14}, {20, 21, 22, 23}, {30, 31, 32},
    {40, 41}, _ ;
}
````
When a solution is found, I will either add it to this PR or post a new PR.
# Related Changes
* Mark nc_free_vlen(s) as deprecated in favor of ncaux_reclaim_data.
* Remove the --enable-unfixed-memory-leaks option.
* Remove the NC_VLENS_NOTEST code that suppresses some vlen tests.
* Document this change in docs/internal.md
* Disable the tst_vlen_data test in ncdump/tst_nccopy4.sh.
* Mark types as fixed size or not (transitively) to optimize the reclaim
and copy functions.
# Misc. Changes
* Make Doxygen process libdispatch/daux.c
* Make sure the NC_ATT_INFO_T.container field is set.
/**************************************************/
/**
Reclaim the output tree of data from a call
to e.g. nc_get_vara or the input to e.g. nc_put_vara.
This recursively walks the top-level instances to
reclaim any nested data such as vlens or strings.

This function is just a wrapper around nc_reclaim_data.

@param ncid file ncid
@param xtype type id
@param memory to reclaim
@param count number of instances of the type in memory
@return error code
*/

EXTERNL int
ncaux_reclaim_data(int ncid, int xtype, void* memory, size_t count)
{
    /* Defer to the internal version */
    return nc_reclaim_data(ncid, xtype, memory, count);
}

/*
This function is just a wrapper around nc_reclaim_data_all.

@param ncid file ncid
@param xtype type id
@param memory to reclaim
@param count number of instances of the type in memory
@return error code
*/

EXTERNL int
ncaux_reclaim_data_all(int ncid, int xtype, void* memory, size_t count)
{
    /* Defer to the internal version */
    return nc_reclaim_data_all(ncid, xtype, memory, count);
}
Improve performance of the nc_reclaim_data and nc_copy_data functions.
re: Issue https://github.com/Unidata/netcdf-c/issues/2685
re: PR https://github.com/Unidata/netcdf-c/pull/2179
As noted in PR https://github.com/Unidata/netcdf-c/pull/2179,
the old code did not allow for reclaiming instances of types,
nor for properly copying them. That PR provided new functions
capable of reclaiming/copying instances of arbitrary types.
However, as noted by Issue https://github.com/Unidata/netcdf-c/issues/2685, using these
most general functions resulted in a significant performance
degradation, even for common cases.
This PR attempts to mitigate the cost of using the general
reclaim/copy functions in two ways.
First, the previous functions operated at the top level, using
ncid and typeid arguments. These functions were augmented
with equivalent versions that use the netcdf-c library's internal
data structures to allow direct access to needed information.
These new functions are used internally in the library.
The second mitigation involves optimizing the internal functions
by providing early tests for common cases. This avoids
unnecessary recursive function calls (see the sketch after the
list below).
The overall result is a significant improvement in speed by a
factor of roughly twenty -- your mileage may vary. These
optimized functions are still not as fast as the original (more
limited) functions, but they are getting close. Additional optimizations are
possible, but the cost is a significant "uglification" of the
code that I deemed a step too far, at least for now.
## Misc. Changes
1. Added a test case to check the proper reclamation/copying of complex types.
2. Found and fixed some places where nc_reclaim/copy should have been used.
3. Replaced, in the netcdf-c library, (almost all) occurrences of nc_reclaim_copy with calls to NC_reclaim/copy. This plus the optimizations is the primary speed-up mechanism.
4. In DAP4, the metadata is held in a substrate in-memory file; this required some changes so that the reclaim/copy code accesses that substrate dispatcher rather than the DAP4 dispatcher.
5. Re-factored and isolated the code that computes whether a type is (transitively) variable-sized or not.
6. Cleaned up the reclamation code in ncgen; adding the use of nc_reclaim exposed some memory problems.
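For illustration only, here is a sketch of the early-test idea (hypothetical
names; not the library's actual code): if a type is known to be transitively
fixed-size, none of its instances can contain nested allocations, so the
recursive walk can be skipped outright.
````
static int reclaim_vector(const struct Typeinfo* t, void* memory, size_t count)
{
    size_t i;
    if(t->fixedsize) return 0; /* early exit: nothing nested to reclaim */
    for(i = 0; i < count; i++) /* otherwise walk each instance recursively */
        reclaim_instance(t, ((char*)memory) + i*t->size);
    return 0;
}
````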
EXTERNL int NC_inq_any_type(int ncid, nc_type typeid, char *name, size_t *size, nc_type *basetypep, size_t *nfieldsp, int *classp);

EXTERNL int
ncaux_inq_any_type(int ncid, nc_type typeid, char *name, size_t *sizep, nc_type *basetypep, size_t *nfieldsp, int *classp)
{
    return NC_inq_any_type(ncid, typeid, name, sizep, basetypep, nfieldsp, classp);
}
/**
Return the alignment of a type.

@param xtype - type for which alignment is requested
@param ncid - only needed for a compound type
@param alignp - (out) the alignment in bytes
@return error code
*/

int
ncaux_type_alignment(int xtype, int ncid, size_t* alignp)
{
    /* Defer to the internal version */
    return NC_type_alignment(ncid, xtype, alignp);
}
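/* A minimal usage sketch for the wrapper above (ncid is a hypothetical
   open-file id): query the alignment used when laying out instances of a
   type in memory. */
#if 0
size_t align = 0;
if(ncaux_type_alignment(NC_DOUBLE, ncid, &align) == NC_NOERR)
    printf("NC_DOUBLE aligns to %zu bytes\n", align);
#endif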
/**
Dump the output tree of data from a call
to e.g. nc_get_vara or the input to e.g. nc_put_vara.

This function is just a wrapper around nc_dump_data.

@param ncid file ncid
@param xtype type id
@param memory to print
@param count number of instances of the type in memory
@param bufp hold the allocated dump string; caller frees
@return error code
*/

EXTERNL int
ncaux_dump_data(int ncid, int xtype, void* memory, size_t count, char** bufp)
{
    EXTERNL int nc_dump_data(int ncid, nc_type xtype, void* memory, size_t count, char** bufp);
    return nc_dump_data(ncid, xtype, memory, count, bufp);
}
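/* A minimal usage sketch for the wrapper above (ncid, varid, typeid,
   memory, and count are hypothetical stand-ins): dump a just-read vector
   of instances as a string for debugging, then free the dump string and
   deep-reclaim the instance data. */
#if 0
char* buf = NULL;
if(nc_get_var(ncid, varid, memory) == NC_NOERR
   && ncaux_dump_data(ncid, typeid, memory, count, &buf) == NC_NOERR) {
    printf("%s\n", buf);
    free(buf);
    ncaux_reclaim_data(ncid, typeid, memory, count);
}
#endif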