netcdf-c/ncgen/dump.c

/*********************************************************************
* Copyright 2018, UCAR/Unidata
* See netcdf/COPYRIGHT file for copying and redistribution conditions.
*********************************************************************/
/* $Id: dump.c,v 1.3 2010/05/24 19:59:57 dmh Exp $ */
/* $Header: /upc/share/CVS/netcdf-3/ncgen/dump.c,v 1.3 2010/05/24 19:59:57 dmh Exp $ */
#include "includes.h"
#include "dump.h"
#undef DEBUGSRC
#define MAXELEM 8
#define MAXDEPTH 4
/* Forward */
static void dumpdataprim(NCConstant*,Bytebuffer*);
/* Return a static buffer holding n+1 spaces; not reentrant,
   the buffer is overwritten on each call. */
char*
indentstr(int n)
{
static char indentline[1024];
if(n > 1022) n = 1022; /* clamp so indentline[n+1] stays in bounds */
memset(indentline,' ',n+1);
indentline[n+1] = '\0';
return indentline;
}
void
dumpconstant(NCConstant* con, char* tag)
{
Bytebuffer* buf = bbNew();
Datalist* dl = builddatalist(1);
dlappend(dl,con);
bufdump(dl,buf);
fprintf(stderr,"%s: %s\n",tag,bbContents(buf));
bbFree(buf);
}
void
dumpdatalist(Datalist* list, char* tag)
{
Bytebuffer* buf = bbNew();
bufdump(list,buf);
fprintf(stderr,"%s: %s\n",tag,bbContents(buf));
bbFree(buf);
}
void
bufdump(Datalist* list, Bytebuffer* buf)
{
int i;
NCConstant** dpl;
unsigned int count;
if(list == NULL) {
bbCat(buf,"NULL");
return;
}
count = list->length;
for(dpl=list->data,i=0;i<count;i++,dpl++) {
NCConstant* dp = *dpl;
switch (dp->nctype) {
case NC_COMPOUND:
if(dp->subtype == NC_DIM) bbCat(buf,"("); else bbCat(buf,"{");
bufdump(dp->value.compoundv,buf);
if(dp->subtype == NC_DIM) bbCat(buf,")"); else bbCat(buf,"}");
break;
case NC_ARRAY:
bbCat(buf,"[");
bufdump(dp->value.compoundv,buf);
bbCat(buf,"]");
break;
case NC_VLEN:
bbCat(buf,"{*");
bufdump(dp->value.compoundv,buf);
bbCat(buf,"}");
break;
default:
if(isprimplus(dp->nctype) || dp->nctype == NC_FILLVALUE) {
bbCat(buf," ");
dumpdataprim(dp,buf);
} else {
char tmp[64];
snprintf(tmp,sizeof(tmp),"?%d? ",dp->nctype);
bbCat(buf,tmp);
} break;
}
}
}
static void
dumpdataprim(NCConstant* ci, Bytebuffer* buf)
{
char tmp[64];
ASSERT(isprimplus(ci->nctype) || ci->nctype == NC_FILLVALUE);
switch (ci->nctype) {
case NC_CHAR: {
bbCat(buf,"'");
escapifychar(ci->value.charv,tmp,'\'');
bbCat(buf,tmp);
bbCat(buf,"'");
} break;
case NC_BYTE:
snprintf(tmp,sizeof(tmp),"%hhd",ci->value.int8v);
bbCat(buf,tmp);
break;
case NC_SHORT:
snprintf(tmp,sizeof(tmp),"%hd",ci->value.int16v);
bbCat(buf,tmp);
break;
case NC_INT:
snprintf(tmp,sizeof(tmp),"%d",ci->value.int32v);
bbCat(buf,tmp);
break;
case NC_FLOAT:
snprintf(tmp,sizeof(tmp),"%g",ci->value.floatv);
bbCat(buf,tmp);
break;
case NC_DOUBLE:
snprintf(tmp,sizeof(tmp),"%lg",ci->value.doublev);
bbCat(buf,tmp);
break;
case NC_UBYTE:
snprintf(tmp,sizeof(tmp),"%hhu",ci->value.uint8v);
bbCat(buf,tmp);
break;
case NC_USHORT:
snprintf(tmp,sizeof(tmp),"%hu",ci->value.uint16v);
bbCat(buf,tmp);
break;
case NC_UINT:
snprintf(tmp,sizeof(tmp),"%u",ci->value.uint32v);
bbCat(buf,tmp);
break;
case NC_INT64:
snprintf(tmp,sizeof(tmp),"%lld",ci->value.int64v);
bbCat(buf,tmp);
break;
case NC_UINT64:
snprintf(tmp,sizeof(tmp),"%llu",ci->value.uint64v);
bbCat(buf,tmp);
break;
case NC_ECONST:
snprintf(tmp,sizeof(tmp),"%s",ci->value.enumv->fqn);
bbCat(buf,tmp);
break;
case NC_STRING:
bbCat(buf,"\"");
bbCat(buf,ci->value.stringv.stringv);
bbCat(buf,"\"");
break;
case NC_OPAQUE:
bbCat(buf,"0x");
bbCat(buf,ci->value.opaquev.stringv);
break;
case NC_FILLVALUE:
bbCat(buf,"_");
break;
default: PANIC1("dumpdataprim: bad type code:%d",ci->nctype);
}
}
void
dumpgroup(Symbol* g)
{
if(debug <= 1) return;
fdebug("group %s {\n",(g==NULL?"null":g->name));
if(g != NULL && g->subnodes != NULL) {
for(size_t i=0;i<listlength(g->subnodes);i++) {
Symbol* sym = (Symbol*)listget(g->subnodes,i);
char* tname;
if(sym->objectclass == NC_PRIM
|| sym->objectclass == NC_TYPE) {
tname = nctypename(sym->subclass);
} else
tname = nctypename(sym->objectclass);
fdebug(" %3zu: %s\t%s\t%s\n",
i,
sym->name,
tname,
(sym->ref.is_ref?"ref":"")
);
}
}
fdebug("}\n");
}
void
dumpconstant1(NCConstant* con)
{
switch (con->nctype) {
case NC_COMPOUND: {
Datalist* dl = con->value.compoundv;
Bytebuffer* buf = bbNew();
bufdump(dl,buf);
/* fprintf(stderr,"(0x%lx){",(unsigned long)dl);*/
if(con->subtype == NC_DIM)
fprintf(stderr,"(%s)",bbDup(buf)); /* dimension lists use parens, as in bufdump */
else
fprintf(stderr,"{%s}",bbDup(buf));
bbFree(buf);
} break;
case NC_STRING:
if(con->value.stringv.len > 0 && con->value.stringv.stringv != NULL)
fprintf(stderr,"\"%s\"",con->value.stringv.stringv);
else
fprintf(stderr,"\"\"");
break;
case NC_OPAQUE:
if(con->value.opaquev.len > 0 && con->value.opaquev.stringv != NULL)
fprintf(stderr,"0x%s",con->value.opaquev.stringv);
else
fprintf(stderr,"0x--");
break;
case NC_ECONST:
fprintf(stderr,"%s",(con->value.enumv==NULL?"?":con->value.enumv->name));
break;
case NC_FILLVALUE:
fprintf(stderr,"_");
break;
case NC_CHAR:
fprintf(stderr,"'%c'",con->value.charv);
break;
case NC_BYTE:
fprintf(stderr,"%hhd",con->value.int8v);
break;
case NC_UBYTE:
fprintf(stderr,"%hhu",con->value.uint8v);
break;
case NC_SHORT:
fprintf(stderr,"%hd",con->value.int16v);
break;
case NC_USHORT:
fprintf(stderr,"%hu",con->value.uint16v);
break;
case NC_INT:
fprintf(stderr,"%d",con->value.int32v);
break;
case NC_UINT:
fprintf(stderr,"%u",con->value.uint32v);
break;
case NC_INT64:
fprintf(stderr,"%lld",con->value.int64v);
break;
case NC_UINT64:
fprintf(stderr,"%llu",con->value.uint64v);
break;
case NC_FLOAT:
fprintf(stderr,"%g",con->value.floatv);
break;
case NC_DOUBLE:
fprintf(stderr,"%g",con->value.doublev);
break;
default:
fprintf(stderr,"<unknown>");
break;
}
fflush(stderr);
}