netcdf-c/ncgen/semantics.c

/*********************************************************************
* Copyright 2018, UCAR/Unidata
* See netcdf/COPYRIGHT file for copying and redistribution conditions.
*********************************************************************/
/* $Id: semantics.c,v 1.4 2010/05/24 19:59:58 dmh Exp $ */
/* $Header: /upc/share/CVS/netcdf-3/ncgen/semantics.c,v 1.4 2010/05/24 19:59:58 dmh Exp $ */
#include "includes.h"
#include "dump.h"
#include "ncoffsets.h"
#include "netcdf_aux.h"
/* Forward*/
static void filltypecodes(void);
static void processenums(void);
static void processeconstrefs(void);
static void processtypes(void);
static void processtypesizes(void);
static void processvars(void);
static void processattributes(void);
static void processunlimiteddims(void);
static void processeconstrefsR(Symbol*,Datalist*);
static void processroot(void);
static void computefqns(void);
static void fixeconstref(Symbol*,NCConstant* con);
static void inferattributetype(Symbol* asym);
static void validateNIL(Symbol* sym);
static void checkconsistency(void);
static int tagvlentypes(Symbol* tsym);
static Symbol* uniquetreelocate(Symbol* refsym, Symbol* root);
static char* createfilename(void);
#if 0
static Symbol* locateenumtype(Symbol* econst, Symbol* group, NCConstant*);
static List* findecmatches(char* ident);
static List* ecsearchgrp(Symbol* grp, List* candidates);
static Symbol* checkeconst(Symbol* en, const char* refname);
#endif
List* vlenconstants; /* List<Constant*>;*/
/* ptr to vlen instances across all datalists*/
/* Post-parse semantic checks and actions*/
void
processsemantics(void)
{
/* Fix up the root name to match the chosen filename */
processroot();
/* Fill in the fqn for every defining symbol */
computefqns();
/* Process each type and sort by dependency order*/
processtypes();
/* Make sure all typecodes are set if basetype is set*/
filltypecodes();
/* Process each type to compute its size*/
processtypesizes();
/* Process each var to fill in missing fields, etc*/
processvars();
/* Process attributes to connect to corresponding variable*/
processattributes();
/* Fix up enum constant values*/
processenums();
/* Fix up enum constant references*/
processeconstrefs();
/* Compute the unlimited dimension sizes */
processunlimiteddims();
/* check internal consistency*/
checkconsistency();
}
/*
Given a reference symbol, produce the corresponding
definition symbol; return NULL if there is no definition
Note that this is somewhat complicated to conform to
various scoping rules, namely:
1. look into parent hierarchy for un-prefixed dimension names.
2. look in whole group tree for un-prefixed type names;
search is depth first. MODIFIED 5/26/2009: Search is as follows:
a. search parent hierarchy for matching type names.
b. search whole tree for unique matching type name
c. complain and require prefixed name.
3. look in the same group as ref for un-prefixed variable names.
4. ditto for group references
5. look in whole group tree for un-prefixed enum constants;
result must be unique
*/
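/*
Rough illustration of the resolution order above (hypothetical names,
not from any particular .cdl input): an un-prefixed dimension "d" used
inside group /g1/g2 is searched for in g2, then g1, then the root
group; an un-prefixed type name is searched the same way and, failing
that, over the whole group tree, where the match must be unique; a
prefixed reference such as /g1/d is looked up directly in that group.
*/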
Symbol*
locate(Symbol* refsym)
{
Symbol* sym = NULL;
switch (refsym->objectclass) {
case NC_DIM:
if(refsym->is_prefixed) {
/* locate exact dimension specified*/
sym = lookup(NC_DIM,refsym);
} else { /* Search for matching dimension in all parent groups*/
Symbol* parent = lookupgroup(refsym->prefix);/*get group for refsym*/
while(parent != NULL) {
/* search this parent for matching name and type*/
sym = lookupingroup(NC_DIM,refsym->name,parent);
if(sym != NULL) break;
parent = parent->container;
}
}
break;
case NC_TYPE:
if(refsym->is_prefixed) {
/* locate exact type specified*/
sym = lookup(NC_TYPE,refsym);
} else {
Symbol* parent;
int i; /* Search for matching type in all groups (except...)*/
/* Short circuit test for primitive types*/
for(i=NC_NAT;i<=NC_STRING;i++) {
Symbol* prim = basetypefor(i);
if(prim == NULL) continue;
if(strcmp(refsym->name,prim->name)==0) {
sym = prim;
break;
}
}
if(sym == NULL) {
/* Added 5/26/09: look in parent hierarchy first */
parent = lookupgroup(refsym->prefix);/*get group for refsym*/
while(parent != NULL) {
/* search this parent for matching name and type*/
sym = lookupingroup(NC_TYPE,refsym->name,parent);
if(sym != NULL) break;
parent = parent->container;
}
}
if(sym == NULL) {
sym = uniquetreelocate(refsym,rootgroup); /* want unique */
}
}
break;
case NC_VAR:
if(refsym->is_prefixed) {
/* locate exact variable specified*/
sym = lookup(NC_VAR,refsym);
} else {
Symbol* parent = lookupgroup(refsym->prefix);/*get group for refsym*/
/* search this parent for matching name and type*/
sym = lookupingroup(NC_VAR,refsym->name,parent);
}
break;
case NC_GRP:
if(refsym->is_prefixed) {
/* locate exact group specified*/
sym = lookup(NC_GRP,refsym);
} else {
Symbol* parent = lookupgroup(refsym->prefix);/*get group for refsym*/
/* search this parent for matching name and type*/
sym = lookupingroup(NC_GRP,refsym->name,parent);
}
break;
default: PANIC1("locate: bad refsym type: %d",refsym->objectclass);
}
if(debug > 1) {
char* ncname;
if(refsym->objectclass == NC_TYPE)
ncname = ncclassname(refsym->subclass);
else
ncname = ncclassname(refsym->objectclass);
fdebug("locate: %s: %s -> %s\n",
ncname,fullname(refsym),(sym?fullname(sym):"NULL"));
}
return sym;
}
/*
Search for an object in all groups using preorder depth-first traversal.
Return NULL if symbol is not unique or not found at all.
*/
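/*
For example (hypothetical): if both /g1/T and /g2/T define a type T,
an un-prefixed reference to T matches in more than one group, so
uniquetreelocate returns NULL and the caller must fall back to
requiring a prefixed name.
*/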
static Symbol*
uniquetreelocate(Symbol* refsym, Symbol* root)
{
unsigned long i;
Symbol* sym = NULL;
/* search the root for matching name and major type*/
sym = lookupingroup(refsym->objectclass,refsym->name,root);
if(sym == NULL) {
for(i=0;i<listlength(root->subnodes);i++) {
Symbol* grp = (Symbol*)listget(root->subnodes,i);
if(grp->objectclass == NC_GRP && !grp->ref.is_ref) {
Symbol* nextsym = uniquetreelocate(refsym,grp);
if(nextsym != NULL) {
if(sym != NULL) return NULL; /* not unique */
sym = nextsym;
}
}
}
}
return sym;
}
/*
Compute the fqn for every top-level definition symbol
*/
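/*
An fqn is the absolute, slash-separated path of a definition; e.g. a
hypothetical type T defined in group g1 under the root group would get
the fqn "/g1/T", while enum constants and compound fields get fqns
nested under their defining type (via nestedfqn below).
*/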
static void
computefqns(void)
{
unsigned long i,j;
/* Groups first */
for(i=0;i<listlength(grpdefs);i++) {
Symbol* sym = (Symbol*)listget(grpdefs,i);
topfqn(sym);
}
/* Dimensions */
for(i=0;i<listlength(dimdefs);i++) {
Symbol* sym = (Symbol*)listget(dimdefs,i);
topfqn(sym);
}
/* types */
for(i=0;i<listlength(typdefs);i++) {
Symbol* sym = (Symbol*)listget(typdefs,i);
topfqn(sym);
}
/* variables */
for(i=0;i<listlength(vardefs);i++) {
Symbol* sym = (Symbol*)listget(vardefs,i);
topfqn(sym);
}
/* fill in the fqn names of econsts */
for(i=0;i<listlength(typdefs);i++) {
Symbol* sym = (Symbol*)listget(typdefs,i);
if(sym->subclass == NC_ENUM) {
for(j=0;j<listlength(sym->subnodes);j++) {
Symbol* econ = (Symbol*)listget(sym->subnodes,j);
nestedfqn(econ);
}
}
}
/* fill in the fqn names of fields */
for(i=0;i<listlength(typdefs);i++) {
Symbol* sym = (Symbol*)listget(typdefs,i);
if(sym->subclass == NC_COMPOUND) {
for(j=0;j<listlength(sym->subnodes);j++) {
Symbol* field = (Symbol*)listget(sym->subnodes,j);
nestedfqn(field);
}
}
}
/* fill in the fqn names of attributes */
for(i=0;i<listlength(gattdefs);i++) {
Symbol* sym = (Symbol*)listget(gattdefs,i);
attfqn(sym);
}
for(i=0;i<listlength(attdefs);i++) {
Symbol* sym = (Symbol*)listget(attdefs,i);
attfqn(sym);
}
}
/**
Process the root group.
Currently this means:
1. Compute and store the filename
*/
static void
processroot(void)
{
rootgroup->file.filename = createfilename();
}
/* 1. Do a topological sort of the types based on dependency*/
/* so that the least dependent are first in the typdefs list*/
/* 2. fill in type typecodes*/
/* 3. mark types that use vlen*/
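/*
Sketch of the sort with hypothetical types: given a vlen type V with a
primitive basetype and a compound type C with a field of type V, the
priming pass keeps V, and the repeated walk then keeps C once V has
been touched, so typdefs ends up ordered V before C. Any type never
touched is part of a dependency cycle and is reported below.
*/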
static void
processtypes(void)
{
unsigned long i,j;
int keep,added;
List* sorted = listnew(); /* hold re-ordered type set*/
/* Prime the walk by capturing the set*/
/* of types that are dependent on primitive types*/
/* e.g. uint vlen(*) or primitive types*/
for(i=0;i<listlength(typdefs);i++) {
Symbol* sym = (Symbol*)listget(typdefs,i);
keep=0;
switch (sym->subclass) {
case NC_PRIM: /*ignore pre-defined primitive types*/
sym->touched=1;
break;
case NC_OPAQUE:
case NC_ENUM:
keep=1;
break;
case NC_VLEN: /* keep if its basetype is primitive*/
if(sym->typ.basetype->subclass == NC_PRIM) keep=1;
break;
case NC_COMPOUND: /* keep if all fields are primitive*/
keep=1; /*assume all fields are primitive*/
for(j=0;j<listlength(sym->subnodes);j++) {
Symbol* field = (Symbol*)listget(sym->subnodes,j);
ASSERT(field->subclass == NC_FIELD);
if(field->typ.basetype->subclass != NC_PRIM) {keep=0;break;}
}
break;
default: break;/* ignore*/
}
if(keep) {
sym->touched = 1;
listpush(sorted,(void*)sym);
}
}
/* 2. repeated walk to collect level i types*/
do {
added=0;
for(i=0;i<listlength(typdefs);i++) {
Symbol* sym = (Symbol*)listget(typdefs,i);
if(sym->touched) continue; /* ignore already processed types*/
keep=0; /* assume not addable yet.*/
switch (sym->subclass) {
case NC_PRIM:
case NC_OPAQUE:
case NC_ENUM:
PANIC("type re-touched"); /* should never happen*/
break;
case NC_VLEN: /* keep if its basetype is already processed*/
if(sym->typ.basetype->touched) keep=1;
break;
case NC_COMPOUND: /* keep if all fields are processed*/
keep=1; /*assume all fields are touched*/
for(j=0;j<listlength(sym->subnodes);j++) {
Symbol* field = (Symbol*)listget(sym->subnodes,j);
ASSERT(field->subclass == NC_FIELD);
if(!field->typ.basetype->touched) {keep=0;break;}
}
break;
default: break;
}
if(keep) {
listpush(sorted,(void*)sym);
sym->touched = 1;
added++;
}
}
} while(added > 0);
/* Any untouched type => circular dependency*/
for(i=0;i<listlength(typdefs);i++) {
Symbol* tsym = (Symbol*)listget(typdefs,i);
if(tsym->touched) continue;
semerror(tsym->lineno,"Circular type dependency for type: %s",fullname(tsym));
}
listfree(typdefs);
typdefs = sorted;
/* fill in type typecodes*/
for(i=0;i<listlength(typdefs);i++) {
Symbol* sym = (Symbol*)listget(typdefs,i);
if(sym->typ.basetype != NULL && sym->typ.typecode == NC_NAT)
sym->typ.typecode = sym->typ.basetype->typ.typecode;
}
/* Identify types containing vlens */
for(i=0;i<listlength(typdefs);i++) {
Symbol* tsym = (Symbol*)listget(typdefs,i);
tagvlentypes(tsym);
}
}
/* Recursively check for vlens*/
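/* E.g. (hypothetical): a compound whose field type is, or transitively
   contains, a vlen gets typ.hasvlen set, as does the vlen itself. */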
static int
tagvlentypes(Symbol* tsym)
{
int tagged = 0;
unsigned long j;
switch (tsym->subclass) {
case NC_VLEN:
tagged = 1;
tagvlentypes(tsym->typ.basetype);
break;
case NC_COMPOUND: /* tagged if any field type contains a vlen*/
for(j=0;j<listlength(tsym->subnodes);j++) {
Symbol* field = (Symbol*)listget(tsym->subnodes,j);
ASSERT(field->subclass == NC_FIELD);
if(tagvlentypes(field->typ.basetype)) tagged = 1;
}
break;
default: break;/* ignore*/
}
if(tagged) tsym->typ.hasvlen = 1;
return tagged;
}
/* Make sure all typecodes are set if basetype is set*/
static void
filltypecodes(void)
{
int i;
for(i=0;i<listlength(symlist);i++) {
Symbol* sym = listget(symlist,i);
if(sym->typ.basetype != NULL && sym->typ.typecode == NC_NAT)
sym->typ.typecode = sym->typ.basetype->typ.typecode;
}
}
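/* Replace each enum constant's parsed value with a constant converted
   to the containing enum's base type (e.g. a hypothetical enum with a
   byte base type ends up with NC_BYTE-typed constant values). */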
static void
processenums(void)
{
unsigned long i,j;
#if 0 /* Unused? */
List* enumids = listnew();
#endif
for(i=0;i<listlength(typdefs);i++) {
Symbol* sym = (Symbol*)listget(typdefs,i);
ASSERT(sym->objectclass == NC_TYPE);
if(sym->subclass != NC_ENUM) continue;
for(j=0;j<listlength(sym->subnodes);j++) {
Symbol* esym = (Symbol*)listget(sym->subnodes,j);
ASSERT(esym->subclass == NC_ECONST);
#if 0 /* Unused? */
listpush(enumids,(void*)esym);
#endif
}
}
/* Convert enum values to match enum type*/
for(i=0;i<listlength(typdefs);i++) {
Symbol* tsym = (Symbol*)listget(typdefs,i);
ASSERT(tsym->objectclass == NC_TYPE);
if(tsym->subclass != NC_ENUM) continue;
for(j=0;j<listlength(tsym->subnodes);j++) {
Symbol* esym = (Symbol*)listget(tsym->subnodes,j);
NCConstant* newec = nullconst();
ASSERT(esym->subclass == NC_ECONST);
newec->nctype = esym->typ.typecode;
convert1(esym->typ.econst,newec);
reclaimconstant(esym->typ.econst);
esym->typ.econst = newec;
}
}
}
/* Walk all data lists looking for econst refs
and convert to point to actual definition
*/
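/*
Example (hypothetical CDL): for an attribute value written as
    v:flag = two ;
where v's type is an enum containing a constant named "two", the
parser leaves an unresolved NC_ECONST in the data list; this walk
rebinds con->value.enumv to the defining NC_ECONST symbol.
*/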
static void
processeconstrefs(void)
{
unsigned long i;
/* locate all the datalist and walk them recursively */
for(i=0;i<listlength(gattdefs);i++) {
Symbol* att = (Symbol*)listget(gattdefs,i);
if(att->data != NULL && listlength(att->data) > 0)
processeconstrefsR(att,att->data);
}
for(i=0;i<listlength(attdefs);i++) {
Symbol* att = (Symbol*)listget(attdefs,i);
if(att->data != NULL && listlength(att->data) > 0)
processeconstrefsR(att,att->data);
}
for(i=0;i<listlength(vardefs);i++) {
Symbol* var = (Symbol*)listget(vardefs,i);
if(var->data != NULL && listlength(var->data) > 0)
processeconstrefsR(var,var->data);
if(var->var.special->_Fillvalue != NULL)
processeconstrefsR(var,var->var.special->_Fillvalue);
}
}
/* Recursive helper for processeconstrefs */
static void
processeconstrefsR(Symbol* avsym, Datalist* data)
{
NCConstant** dlp = NULL;
int i;
for(i=0,dlp=data->data;i<data->length;i++,dlp++) {
NCConstant* con = *dlp;
if(con->nctype == NC_COMPOUND) {
/* Iterate over the sublists */
processeconstrefsR(avsym,con->value.compoundv);
} else if(con->nctype == NC_ECONST || con->nctype == NC_FILLVALUE) {
fixeconstref(avsym,con);
}
}
}
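/* Resolve a single econst (or enum-typed fill value) reference against
   the enum base type of the containing attribute or variable: a
   _FillValue reference is replaced by that type's fill enum constant,
   an econst name is matched against the enum's members, and anything
   that cannot be resolved is reported via semerror(). */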
static void
fixeconstref(Symbol* avsym, NCConstant* con)
{
Symbol* basetype = NULL;
Symbol* refsym = con->value.enumv;
Symbol* varsym = NULL;
int i;
/* Figure out the proper type associated with avsym */
ASSERT(avsym->objectclass == NC_VAR || avsym->objectclass == NC_ATT);
if(avsym->objectclass == NC_VAR) {
basetype = avsym->typ.basetype;
varsym = avsym;
} else { /*(avsym->objectclass == NC_ATT)*/
basetype = avsym->typ.basetype;
varsym = avsym->container;
if(varsym->objectclass == NC_GRP)
varsym = NULL;
}
/* If this is a non-econst fillvalue, then ignore it */
if(con->nctype == NC_FILLVALUE && basetype->subclass != NC_ENUM)
return;
/* If this is an econst then validate against type */
if(con->nctype == NC_ECONST && basetype->subclass != NC_ENUM)
semerror(con->lineno,"Enumconstant associated with a non-econst type");
if(con->nctype == NC_FILLVALUE) {
Datalist* filllist = NULL;
NCConstant* filler = NULL;
filllist = getfiller(varsym == NULL?basetype:varsym);
if(filllist == NULL)
semerror(con->lineno, "Cannot determine enum constant fillvalue");
filler = datalistith(filllist,0);
con->value.enumv = filler->value.enumv;
return;
}
for(i=0;i<listlength(basetype->subnodes);i++) {
Symbol* econst = listget(basetype->subnodes,i);
ASSERT(econst->subclass == NC_ECONST);
if(strcmp(econst->name,refsym->name)==0) {
con->value.enumv = econst;
return;
}
}
semerror(con->lineno,"Undefined enum or enum constant reference: %s",refsym->name);
}
#if 0
/* If we have an enum-valued group attribute, then we need to do
extra work to find the containing enum type
*/
static Symbol*
locateenumtype(Symbol* refsym, Symbol* parent, NCConstant* con)
{
Symbol* match = NULL;
List* grpmatches;
/* Locate all possible matching enum constant definitions */
List* candidates = findecmatches(refsym->name);
if(candidates == NULL) {
semerror(con->lineno,"Undefined enum or enum constant reference: %s",refsym->name);
return NULL;
}
/* One hopes that 99% of the time, the match is unique */
if(listlength(candidates) == 1) {
match = listget(candidates,0);
goto done;
}
/* If this ref has a specified group prefix, then find that group
and search only within it for matches to the candidates */
if(refsym->is_prefixed && refsym->prefix != NULL) {
parent = lookupgroup(refsym->prefix);
if(parent == NULL) {
	    semerror(con->lineno,"Undefined group reference: %s",fullname(refsym));
goto done;
}
/* Search this group only for matches */
grpmatches = ecsearchgrp(parent,candidates);
switch (listlength(grpmatches)) {
case 0:
	    semerror(con->lineno,"Undefined enum or enum constant reference: %s",refsym->name);
listfree(grpmatches);
goto done;
case 1:
break;
default:
semerror(con->lineno,"Ambiguous enum constant reference: %s", fullname(refsym));
}
match = listget(grpmatches,0);
listfree(grpmatches);
goto done;
}
/* Sigh, we have to search up the tree to see if any of our candidates are there */
assert(parent == NULL || parent->objectclass == NC_GRP);
while(parent != NULL && match == NULL) {
grpmatches = ecsearchgrp(parent,candidates);
switch (listlength(grpmatches)) {
case 0: break;
case 1: match = listget(grpmatches,0); break;
default:
semerror(con->lineno,"Ambiguous enum constant reference: %s", fullname(refsym));
match = listget(grpmatches,0);
break;
}
listfree(grpmatches);
}
if(match != NULL) goto done;
    /* Not unique and not in the parent tree, so complain and pick the first candidate */
semerror(con->lineno,"Ambiguous enum constant reference: %s", fullname(refsym));
match = (Symbol*)listget(candidates,0);
done:
listfree(candidates);
return match;
}
/*
Locate enums whose name is a prefix of ident
and which contain the suffix as an enum constant,
and capture that enum constant.
*/
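/*
   Illustrative sketch (hypothetical CDL; the names e_t and first are made up):
   given
       ubyte enum e_t {first = 1, second = 2};
   an econst reference written as "first" is matched by the direct check
   below, while "e_t.first" is matched by the prefix check, where the '.'
   separates the enum type name from the constant name; either way the
   matching econst Symbol is pushed onto the candidate list.
*/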
static List*
findecmatches(char* ident)
{
List* matches = listnew();
int i;
for(i=0;i<listlength(typdefs);i++) {
int len;
Symbol* ec;
Symbol* en = (Symbol*)listget(typdefs,i);
if(en->subclass != NC_ENUM)
continue;
/* First, assume that the ident is the econst name only */
ec = checkeconst(en,ident);
if(ec != NULL)
listpush(matches,ec);
/* Second, do the prefix check */
len = strlen(en->name);
if(strncmp(ident,en->name,len) == 0) {
Symbol *ec;
/* Find the matching ec constant, if any */
if(*(ident+len) != '.') continue;
ec = checkeconst(en,ident+len+1); /* +1 for the dot */
if(ec != NULL)
listpush(matches,ec);
}
}
if(listlength(matches) == 0) {
listfree(matches);
matches = NULL;
}
return matches;
}
static List*
ecsearchgrp(Symbol* grp, List* candidates)
{
List* matches = listnew();
int i,j;
/* do the intersection of grp subnodes and candidates */
for(i=0;i<listlength(grp->subnodes);i++) {
Symbol* sub= (Symbol*)listget(grp->subnodes,i);
if(sub->subclass != NC_ENUM)
continue;
for(j=0;j<listlength(candidates);j++) {
Symbol* ec = (Symbol*)listget(candidates,j);
if(ec->container == sub)
listpush(matches,ec);
}
}
if(listlength(matches) == 0) {
listfree(matches);
matches = NULL;
}
return matches;
}
static Symbol*
checkeconst(Symbol* en, const char* refname)
{
int i;
for(i=0;i<listlength(en->subnodes);i++) {
Symbol* ec = (Symbol*)listget(en->subnodes,i);
if(strcmp(ec->name,refname) == 0)
return ec;
}
return NULL;
}
#endif
/* Compute type sizes and compound offsets*/
void
computesize(Symbol* tsym)
{
int i;
int offset = 0;
int largealign;
unsigned long totaldimsize;
if(tsym->touched) return;
tsym->touched=1;
switch (tsym->subclass) {
case NC_VLEN: /* actually two sizes for vlen*/
computesize(tsym->typ.basetype); /* first size*/
tsym->typ.size = ncsize(tsym->typ.typecode);
tsym->typ.alignment = ncaux_class_alignment(tsym->typ.typecode);
tsym->typ.nelems = 1; /* always a single compound datalist */
break;
case NC_PRIM:
tsym->typ.size = ncsize(tsym->typ.typecode);
tsym->typ.alignment = ncaux_class_alignment(tsym->typ.typecode);
tsym->typ.nelems = 1;
break;
case NC_OPAQUE:
/* size and alignment already assigned*/
tsym->typ.nelems = 1;
break;
case NC_ENUM:
computesize(tsym->typ.basetype); /* first size*/
tsym->typ.size = tsym->typ.basetype->typ.size;
tsym->typ.alignment = tsym->typ.basetype->typ.alignment;
tsym->typ.nelems = 1;
break;
case NC_COMPOUND: /* keep if all fields are primitive*/
/* First, compute recursively, the size and alignment of fields*/
for(i=0;i<listlength(tsym->subnodes);i++) {
Symbol* field = (Symbol*)listget(tsym->subnodes,i);
ASSERT(field->subclass == NC_FIELD);
computesize(field);
if(i==0) tsym->typ.alignment = field->typ.alignment;
}
	/* now compute the size of the compound based on what the user specified*/
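	/*
	   Worked example (hypothetical compound; alignments are platform
	   dependent and come from ncaux_class_alignment(), and getpadding()
	   is assumed to round the offset up to the next multiple of the
	   field alignment):
	       compound cmpd_t {char c; int i;};
	   places c at offset 0 (size 1), inserts 3 bytes of padding, places
	   i at offset 4 (size 4); largealign becomes 4 and the computed
	   compound size is 8.
	*/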
offset = 0;
largealign = 1;
for(i=0;i<listlength(tsym->subnodes);i++) {
Symbol* field = (Symbol*)listget(tsym->subnodes,i);
/* only support 'c' alignment for now*/
int alignment = field->typ.alignment;
int padding = getpadding(offset,alignment);
offset += padding;
field->typ.offset = offset;
offset += field->typ.size;
if (alignment > largealign) {
largealign = alignment;
}
}
tsym->typ.cmpdalign = largealign; /* total structure size alignment */
offset += (offset % largealign);
tsym->typ.size = offset;
break;
case NC_FIELD: /* Compute size assume no unlimited dimensions*/
if(tsym->typ.dimset.ndims > 0) {
computesize(tsym->typ.basetype);
totaldimsize = crossproduct(&tsym->typ.dimset,0,rankfor(&tsym->typ.dimset));
tsym->typ.size = tsym->typ.basetype->typ.size * totaldimsize;
tsym->typ.alignment = tsym->typ.basetype->typ.alignment;
tsym->typ.nelems = 1;
} else {
tsym->typ.size = tsym->typ.basetype->typ.size;
tsym->typ.alignment = tsym->typ.basetype->typ.alignment;
tsym->typ.nelems = tsym->typ.basetype->typ.nelems;
}
break;
default:
PANIC1("computesize: unexpected type class: %d",tsym->subclass);
break;
}
}
void
processvars(void)
{
int i,j;
for(i=0;i<listlength(vardefs);i++) {
Symbol* vsym = (Symbol*)listget(vardefs,i);
Symbol* basetype = vsym->typ.basetype;
/* If we are in classic mode, then convert long -> int32 */
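	/* e.g. (hypothetical CDL) "int64 big(t);" has its basetype replaced
	   by NC_INT when a classic-format file is being generated */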
if(usingclassic) {
if(basetype->typ.typecode == NC_LONG || basetype->typ.typecode == NC_INT64) {
vsym->typ.basetype = primsymbols[NC_INT];
basetype = vsym->typ.basetype;
}
}
/* fill in the typecode*/
vsym->typ.typecode = basetype->typ.typecode;
/* validate uses of NIL */
validateNIL(vsym);
for(j=0;j<vsym->typ.dimset.ndims;j++) {
/* validate the dimensions*/
/* UNLIMITED must only be in first place if using classic */
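	    /* e.g. (hypothetical CDL) with "time = UNLIMITED;", classic mode
	       accepts "float v(time,lat);" but rejects "float v(lat,time);" */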
if(vsym->typ.dimset.dimsyms[j]->dim.declsize == NC_UNLIMITED) {
if(usingclassic && j != 0)
semerror(vsym->lineno,"Variable: %s: UNLIMITED must be in first dimension only",fullname(vsym));
}
}
}
}
static void
processtypesizes(void)
{
int i;
/* use touch flag to avoid circularity*/
for(i=0;i<listlength(typdefs);i++) {
Symbol* tsym = (Symbol*)listget(typdefs,i);
tsym->touched = 0;
}
for(i=0;i<listlength(typdefs);i++) {
Symbol* tsym = (Symbol*)listget(typdefs,i);
computesize(tsym); /* this will recurse*/
}
}
static void
processattributes(void)
{
int i,j;
/* process global attributes*/
for(i=0;i<listlength(gattdefs);i++) {
Symbol* asym = (Symbol*)listget(gattdefs,i);
if(asym->typ.basetype == NULL) inferattributetype(asym);
/* fill in the typecode*/
asym->typ.typecode = asym->typ.basetype->typ.typecode;
if(asym->data != NULL && asym->data->length == 0) {
NCConstant* empty = NULL;
/* If the attribute has a zero length, then default it;
note that it must be of type NC_CHAR */
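	    /* e.g. (hypothetical CDL) ":history = ;" becomes a single empty
	       string below, while an empty list on a numeric attribute is an
	       error */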
if(asym->typ.typecode != NC_CHAR)
	    semerror(asym->lineno,"Empty datalist can only be assigned to attributes of type char: %s",fullname(asym));
empty = emptystringconst(asym->lineno);
dlappend(asym->data,empty);
}
validateNIL(asym);
}
/* process per variable attributes*/
for(i=0;i<listlength(attdefs);i++) {
Symbol* asym = (Symbol*)listget(attdefs,i);
/* If no basetype is specified, then try to infer it;
the exception is _Fillvalue, whose type is that of the
containing variable.
*/
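	/* e.g. (hypothetical CDL) for "double v; v:_FillValue = -999;" the
	   attribute inherits type double from v, and its datalist is also
	   copied into the variable's special _Fillvalue slot below */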
if(strcmp(asym->name,specialname(_FILLVALUE_FLAG)) == 0) {
/* This is _Fillvalue */
	    asym->typ.basetype = asym->att.var->typ.basetype; /* its basetype is the same as its variable's */
/* put the datalist into the specials structure */
if(asym->data == NULL) {
/* Generate a default fill value */
asym->data = getfiller(asym->typ.basetype);
}
if(asym->att.var->var.special->_Fillvalue != NULL)
reclaimdatalist(asym->att.var->var.special->_Fillvalue);
asym->att.var->var.special->_Fillvalue = clonedatalist(asym->data);
} else if(asym->typ.basetype == NULL) {
inferattributetype(asym);
}
/* fill in the typecode*/
asym->typ.typecode = asym->typ.basetype->typ.typecode;
if(asym->data->length == 0) {
NCConstant* empty = NULL;
/* If the attribute has a zero length, and is char type, then default it */
if(asym->typ.typecode != NC_CHAR)
	    semerror(asym->lineno,"Empty datalist can only be assigned to attributes of type char: %s",fullname(asym));
empty = emptystringconst(asym->lineno);
dlappend(asym->data,empty);
}
validateNIL(asym);
}
/* collect per-variable attributes per variable*/
for(i=0;i<listlength(vardefs);i++) {
Symbol* vsym = (Symbol*)listget(vardefs,i);
List* list = listnew();
for(j=0;j<listlength(attdefs);j++) {
Symbol* asym = (Symbol*)listget(attdefs,j);
if(asym->att.var == NULL)
continue; /* ignore globals for now */
if(asym->att.var != vsym) continue;
listpush(list,(void*)asym);
}
vsym->var.attributes = list;
}
}
/*
Given two types, attempt to upgrade to the "bigger type"
Rules:
- type size has precedence over signed/unsigned:
e.g. NC_INT over NC_UBYTE
*/
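/*
   For example (assuming signedtype() maps each unsigned atomic type code to
   its signed counterpart): infertype(NC_UBYTE, NC_INT, ...) yields NC_INT,
   because the signed form of NC_UBYTE sorts below NC_INT, while
   infertype(NC_INT, NC_SHORT, ...) keeps NC_INT.
*/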
static nc_type
infertype(nc_type prior, nc_type next, int hasneg)
{
nc_type sp, sn;
/* assert isinttype(prior) && isinttype(next) */
if(prior == NC_NAT) return next;
if(prior == next) return next;
sp = signedtype(prior);
sn = signedtype(next);
if(sp <= sn)
return next;
if(sn < sp)
return prior;
return NC_NAT; /* all other cases illegal */
}
/*
Collect info by repeated walking of the attribute value list.
*/
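/*
   Sketch of the outcome for some hypothetical attribute datalists:
   {'a', "bc"} infers NC_CHAR; {1, 2.5} infers NC_DOUBLE; {1, 2.5f} infers
   NC_FLOAT; a uint64 constant larger than NC_MAX_INT64 forces NC_UINT64;
   otherwise the widest integer type present wins via infertype(). Any
   nested or non-atomic constant makes the inference fail with NC_NAT.
*/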
static nc_type
inferattributetype1(Datalist* adata)
{
nc_type result = NC_NAT;
int hasneg = 0;
int stringcount = 0;
int charcount = 0;
int forcefloat = 0;
int forcedouble = 0;
int forceuint64 = 0;
int i;
/* Walk the top level set of attribute values to ensure non-nesting */
for(i=0;i<datalistlen(adata);i++) {
NCConstant* con = datalistith(adata,i);
if(con == NULL) return NC_NAT;
if(con->nctype > NC_MAX_ATOMIC_TYPE) { /* illegal */
return NC_NAT;
}
}
/* Walk repeatedly to get info for inference (loops could be combined) */
/* Compute: all strings or chars? */
stringcount = 0;
charcount = 0;
for(i=0;i<datalistlen(adata);i++) {
NCConstant* con = datalistith(adata,i);
if(con->nctype == NC_STRING) stringcount++;
else if(con->nctype == NC_CHAR) charcount++;
}
if((stringcount+charcount) > 0) {
if((stringcount+charcount) < datalistlen(adata))
return NC_NAT; /* not all textual */
return NC_CHAR;
}
/* Compute: any floats/doubles? */
forcefloat = 0;
forcedouble = 0;
for(i=0;i<datalistlen(adata);i++) {
NCConstant* con = datalistith(adata,i);
if(con->nctype == NC_FLOAT) forcefloat = 1;
else if(con->nctype == NC_DOUBLE) {forcedouble=1; break;}
}
if(forcedouble) return NC_DOUBLE;
if(forcefloat) return NC_FLOAT;
/* At this point all the constants should be integers */
/* Compute: are there any uint64 values > NC_MAX_INT64? */
forceuint64 = 0;
for(i=0;i<datalistlen(adata);i++) {
NCConstant* con = datalistith(adata,i);
if(con->nctype != NC_UINT64) continue;
if(con->value.uint64v > NC_MAX_INT64) {forceuint64=1; break;}
}
if(forceuint64)
return NC_UINT64;
/* Compute: are there any negative constants? */
hasneg = 0;
for(i=0;i<datalistlen(adata);i++) {
NCConstant* con = datalistith(adata,i);
switch (con->nctype) {
case NC_BYTE : if(con->value.int8v < 0) {hasneg = 1;} break;
case NC_SHORT: if(con->value.int16v < 0) {hasneg = 1;} break;
case NC_INT: if(con->value.int32v < 0) {hasneg = 1;} break;
}
}
/* Compute: inferred integer type */
result = NC_NAT;
for(i=0;i<datalistlen(adata);i++) {
NCConstant* con = datalistith(adata,i);
result = infertype(result,con->nctype,hasneg);
if(result == NC_NAT) break; /* something wrong */
}
return result;
}
static void
inferattributetype(Symbol* asym)
{
Datalist* datalist;
nc_type nctype;
ASSERT(asym->data != NULL);
datalist = asym->data;
if(datalist->length == 0) {
/* Default for zero length attributes */
asym->typ.basetype = basetypefor(NC_CHAR);
return;
}
nctype = inferattributetype1(datalist);
if(nctype == NC_NAT) { /* Illegal attribute value list */
semerror(asym->lineno,"Non-simple list of values for untyped attribute: %s",fullname(asym));
return;
}
/* get the corresponding primitive type built-in symbol*/
/* special case for string*/
if(nctype == NC_STRING)
asym->typ.basetype = basetypefor(NC_CHAR);
else if(usingclassic) {
/* If we are in classic mode, then restrict the inferred type
to the classic or cdf5 types */
switch (nctype) {
case NC_OPAQUE:
case NC_ENUM:
nctype = NC_INT;
break;
default: /* leave as is */
break;
}
asym->typ.basetype = basetypefor(nctype);
} else
asym->typ.basetype = basetypefor(nctype);
}
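/* Illustration (hedged, assuming typical CDL input): for an untyped
   attribute the type comes from its value list, e.g.
       v:range = 0.5, 1.5 ;     // inferred as double
       v:flags = 1b, 2b ;       // inferred as byte
       v:title = "abc" ;        // string value, stored with char base type
   and in classic mode enum/opaque results are coerced to NC_INT. */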
#ifdef USE_NETCDF4
/* recursive helper for validateNIL */
static void
validateNILr(Datalist* src)
{
int i;
for(i=0;i<src->length;i++) {
NCConstant* con = datalistith(src,i);
if(isnilconst(con))
semerror(con->lineno,"NIL data can only be assigned to variables or attributes of type string");
else if(islistconst(con)) /* recurse */
validateNILr(con->value.compoundv);
}
}
#endif
static void
validateNIL(Symbol* sym)
{
#ifdef USE_NETCDF4
Datalist* datalist = sym->data;
if(datalist == NULL || datalist->length == 0) return;
if(sym->typ.typecode == NC_STRING) return;
validateNILr(datalist);
#endif
}
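/* Illustration (hedged, assuming typical CDL input): NIL constants are
   legal only for string-typed targets, e.g.
       string v:note = NIL ;    // accepted
       v:count = NIL ;          // flagged by validateNIL/validateNILr
   The check also recurses into {...} compound value lists. */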
/* Find name within group structure*/
Symbol*
lookupgroup(List* prefix)
{
#ifdef USE_NETCDF4
if(prefix == NULL || listlength(prefix) == 0)
return rootgroup;
else
return (Symbol*)listtop(prefix);
#else
return rootgroup;
#endif
}
/* Find name within given group*/
Symbol*
lookupingroup(nc_class objectclass, char* name, Symbol* grp)
{
int i;
if(name == NULL) return NULL;
if(grp == NULL) grp = rootgroup;
dumpgroup(grp);
for(i=0;i<listlength(grp->subnodes);i++) {
Symbol* sym = (Symbol*)listget(grp->subnodes,i);
if(sym->ref.is_ref) continue;
if(sym->objectclass != objectclass) continue;
if(strcmp(sym->name,name)!=0) continue;
return sym;
}
return NULL;
}
/* Find symbol within group structure*/
Symbol*
lookup(nc_class objectclass, Symbol* pattern)
{
Symbol* grp;
if(pattern == NULL) return NULL;
grp = lookupgroup(pattern->prefix);
if(grp == NULL) return NULL;
return lookupingroup(objectclass,pattern->name,grp);
}
/* return internal size for values of specified netCDF type */
size_t
nctypesize(
nc_type type) /* netCDF type code */
{
switch (type) {
case NC_BYTE: return sizeof(char);
case NC_CHAR: return sizeof(char);
case NC_SHORT: return sizeof(short);
case NC_INT: return sizeof(int);
case NC_FLOAT: return sizeof(float);
case NC_DOUBLE: return sizeof(double);
case NC_UBYTE: return sizeof(unsigned char);
case NC_USHORT: return sizeof(unsigned short);
case NC_UINT: return sizeof(unsigned int);
case NC_INT64: return sizeof(long long);
case NC_UINT64: return sizeof(unsigned long long);
case NC_STRING: return sizeof(char*);
default:
PANIC("nctypesize: bad type code");
}
return 0;
}
static int
sqContains(List* seq, Symbol* sym)
{
int i;
if(seq == NULL) return 0;
for(i=0;i<listlength(seq);i++) {
Symbol* sub = (Symbol*)listget(seq,i);
if(sub == sym) return 1;
}
return 0;
}
static void
checkconsistency(void)
{
int i;
for(i=0;i<listlength(grpdefs);i++) {
Symbol* sym = (Symbol*)listget(grpdefs,i);
if(sym == rootgroup) {
if(sym->container != NULL)
PANIC("rootgroup has a container");
} else if(sym->container == NULL && sym != rootgroup)
PANIC1("symbol with no container: %s",sym->name);
else if(sym->container->ref.is_ref != 0)
PANIC1("group with reference container: %s",sym->name);
else if(sym != rootgroup && !sqContains(sym->container->subnodes,sym))
PANIC1("group not in container: %s",sym->name);
if(sym->subnodes == NULL)
PANIC1("group with null subnodes: %s",sym->name);
}
for(i=0;i<listlength(typdefs);i++) {
Symbol* sym = (Symbol*)listget(typdefs,i);
if(!sqContains(sym->container->subnodes,sym))
PANIC1("type not in container: %s",sym->name);
}
for(i=0;i<listlength(dimdefs);i++) {
Symbol* sym = (Symbol*)listget(dimdefs,i);
if(!sqContains(sym->container->subnodes,sym))
PANIC1("dimension not in container: %s",sym->name);
}
for(i=0;i<listlength(vardefs);i++) {
Symbol* sym = (Symbol*)listget(vardefs,i);
if(!sqContains(sym->container->subnodes,sym))
PANIC1("variable not in container: %s",sym->name);
if(!(isprimplus(sym->typ.typecode)
|| sqContains(typdefs,sym->typ.basetype)))
PANIC1("variable with undefined type: %s",sym->name);
}
}
static void
computeunlimitedsizes(Dimset* dimset, int dimindex, Datalist* data, int ischar)
{
int i;
size_t xproduct, unlimsize;
int nextunlim,lastunlim;
Symbol* thisunlim = dimset->dimsyms[dimindex];
size_t length;
ASSERT(thisunlim->dim.isunlimited);
nextunlim = findunlimited(dimset,dimindex+1);
lastunlim = (nextunlim == dimset->ndims);
xproduct = crossproduct(dimset,dimindex+1,nextunlim);
if(!lastunlim) {
/* Compute candidate size of this unlimited */
length = data->length;
unlimsize = length / xproduct;
if(length % xproduct != 0)
unlimsize++; /* => fill values will be required at some point */
#ifdef GENDEBUG2
fprintf(stderr,"unlimsize: dim=%s declsize=%lu xproduct=%lu newsize=%lu\n",
thisunlim->name,
(unsigned long)thisunlim->dim.declsize,
(unsigned long)xproduct,
(unsigned long)unlimsize);
#endif
if(thisunlim->dim.declsize < unlimsize) /* want max length of the unlimited*/
thisunlim->dim.declsize = unlimsize;
/*!lastunlim => data is list of sublists, recurse on each sublist*/
for(i=0;i<data->length;i++) {
NCConstant* con = data->data[i];
if(con->nctype != NC_COMPOUND) {
semerror(con->lineno,"UNLIMITED dimension (other than first) must be enclosed in {}");
}
computeunlimitedsizes(dimset,nextunlim,con->value.compoundv,ischar);
}
} else { /* lastunlim */
if(ischar) {
/* Char case requires special computations;
compute total number of characters */
length = 0;
for(i=0;i<data->length;i++) {
NCConstant* con = data->data[i];
switch (con->nctype) {
case NC_CHAR: case NC_BYTE: case NC_UBYTE:
length++;
break;
case NC_STRING:
length += con->value.stringv.len;
break;
case NC_COMPOUND:
semwarn(datalistline(data),"Expected character constant, found {...}");
break;
default:
semwarn(datalistline(data),"Illegal character constant: %d",con->nctype);
}
}
} else { /* Data list should be a list of simple non-char constants */
length = data->length;
}
unlimsize = length / xproduct;
if(length % xproduct != 0)
unlimsize++; /* => fill values will be required at some point */
#ifdef GENDEBUG2
fprintf(stderr,"unlimsize: dim=%s declsize=%lu xproduct=%lu newsize=%lu\n",
thisunlim->name,
(unsigned long)thisunlim->dim.declsize,
(unsigned long)xproduct,
(unsigned long)unlimsize);
#endif
if(thisunlim->dim.declsize < unlimsize) /* want max length of the unlimited*/
thisunlim->dim.declsize = unlimsize;
}
}
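/* Worked example (illustrative): for
       int v(d, n) ;            // d UNLIMITED, n = 3
   with 7 values supplied, xproduct is 3, so unlimsize = 7/3 = 2 and the
   nonzero remainder bumps it to 3; d's declsize grows to 3 and the short
   final row is left to be filled. */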
static void
processunlimiteddims(void)
{
int i;
/* Set all unlimited dims to size 0; */
for(i=0;i<listlength(dimdefs);i++) {
Symbol* dim = (Symbol*)listget(dimdefs,i);
if(dim->dim.isunlimited)
dim->dim.declsize = 0;
}
/* Walk all variables */
for(i=0;i<listlength(vardefs);i++) {
Symbol* var = (Symbol*)listget(vardefs,i);
int first,ischar;
Dimset* dimset = &var->typ.dimset;
if(dimset->ndims == 0) continue; /* ignore scalars */
if(var->data == NULL) continue; /* no data list to walk */
ischar = (var->typ.basetype->typ.typecode == NC_CHAR);
first = findunlimited(dimset,0);
if(first == dimset->ndims) continue; /* no unlimited dims */
if(first == 0) {
computeunlimitedsizes(dimset,first,var->data,ischar);
} else {
int j;
for(j=0;j<var->data->length;j++) {
NCConstant* con = var->data->data[j];
if(con->nctype != NC_COMPOUND)
semerror(con->lineno,"UNLIMITED dimension (other than first) must be enclosed in {}");
else
computeunlimitedsizes(dimset,first,con->value.compoundv,ischar);
}
}
}
#ifdef GENDEBUG1
/* print unlimited dim size */
if(listlength(dimdefs) == 0)
fprintf(stderr,"unlimited: no unlimited dimensions\n");
else for(i=0;i<listlength(dimdefs);i++) {
Symbol* dim = (Symbol*)listget(dimdefs,i);
if(dim->dim.isunlimited)
fprintf(stderr,"unlimited: %s = %lu\n",
dim->name,
(unsigned long)dim->dim.declsize);
}
#endif
}
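/* Illustration (hypothetical CDL): with
       dimensions: d = UNLIMITED ; n = 2 ;
       variables:  float x(n, d) ;
       data:       x = {1, 2, 3}, {4, 5} ;
   the UNLIMITED dimension is not first, so each row must be a {...}
   compound; d ends up with declsize 3, the longest row seen. */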
/* Rules for specifying the dataset name:
1. use -o name
2. use the datasetname from the .cdl file
3. use input cdl file name (with .cdl removed)
It would be better if there were some way
to specify the datasetname independently of the
file name, but oh well.
*/
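/* Illustration (assuming binary_ext is ".nc"):
       ncgen -o out.nc foo.cdl   -> "out.nc"
       ncgen -b /tmp/foo.cdl     -> "foo.nc"   (suffix and directory stripped)
       ncgen -b -                -> datasetname + ".nc" (input from stdin)
*/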
static char*
createfilename(void)
{
char filename[4096];
filename[0] = '\0';
if(netcdf_name) { /* -o flag name */
strlcat(filename,netcdf_name,sizeof(filename));
} else { /* construct a usable output file name */
if (cdlname != NULL && strcmp(cdlname,"-") != 0) {/* cmd line name */
char* p;
strlcat(filename,cdlname,sizeof(filename));
/* remove any suffix and prefix*/
p = strrchr(filename,'.');
if(p != NULL) {*p= '\0';}
p = strrchr(filename,'/');
if(p != NULL) {
char* q = filename;
p++; /* skip the '/' */
while((*q++ = *p++));
}
} else {/* construct name from dataset name */
strlcat(filename,datasetname,sizeof(filename));
}
/* Append the proper extension */
strlcat(filename,binary_ext,sizeof(filename));
}
return strdup(filename);
}