/* netcdf-c/ncgen/main.c */
/*********************************************************************
* Copyright 2018, UCAR/Unidata
* See netcdf/COPYRIGHT file for copying and redistribution conditions.
*********************************************************************/
/* $Id: main.c,v 1.33 2010/05/26 21:43:36 dmh Exp $ */
/* $Header: /upc/share/CVS/netcdf-3/ncgen/main.c,v 1.33 2010/05/26 21:43:36 dmh Exp $ */
#include "includes.h"
#include "ncoffsets.h"
#include "ncwinpath.h"
#ifdef HAVE_GETOPT_H
#include <getopt.h>
#endif
#ifdef _MSC_VER
#include "XGetopt.h"
int opterr;
int optind;
#endif
/* Default is netcdf-3 mode 1 */
#define DFALTCMODE 0
/* For error messages */
char* progname; /* Global: not reclaimed */
char* cdlname; /* Global: not reclaimed */
/* option flags */
int nofill_flag;
char* mainname; /* name to use for main function; defaults to "main"*/
Language l_flag;
int syntax_only;
int header_only;
/* flags for tracking what output format to use */
int k_flag; /* > 0 => -k was specified on command line*/
int format_attribute; /* 1=>format came from format attribute */
int enhanced_flag; /* 1 => netcdf-4 */
int cdf5_flag; /* 1 => cdf5 | maybe netcdf-4 */
int specials_flag; /* 1=> special attributes are present */
int usingclassic;
int cmode_modifier;
int diskless;
int ncloglevel;
GlobalSpecialData globalspecials;
char* binary_ext = ".nc";
size_t nciterbuffersize;
struct Vlendata* vlendata;
char *netcdf_name = NULL; /* command line -o file name */
char *datasetname = NULL; /* name from the netcdf <name> {} || from -N */
extern FILE *ncgin;
/* Forward */
static char* ubasename(char*);
void usage( void );
int main( int argc, char** argv );
/* Define tables vs modes for legal -k values*/
struct Kvalues legalkinds[] = {
/* NetCDF-3 classic format (32-bit offsets) */
{"classic", NC_FORMAT_CLASSIC}, /* canonical format name */
{"nc3", NC_FORMAT_CLASSIC}, /* short format name */
{"1", NC_FORMAT_CLASSIC}, /* deprecated, use "-3" or "-k nc3" instead */
/* NetCDF-3 64-bit offset format */
{"64-bit offset", NC_FORMAT_64BIT_OFFSET}, /* canonical format name */
{"nc6", NC_FORMAT_64BIT_OFFSET}, /* short format name */
{"2", NC_FORMAT_64BIT_OFFSET}, /* deprecated, use "-6" or "-k nc6" instead */
{"64-bit-offset", NC_FORMAT_64BIT_OFFSET}, /* aliases */
/* NetCDF-4 HDF5-based format */
{"netCDF-4", NC_FORMAT_NETCDF4}, /* canonical format name */
{"nc4", NC_FORMAT_NETCDF4}, /* short format name */
{"3", NC_FORMAT_NETCDF4}, /* deprecated, use "-4" or "-k nc4" instead */
{"netCDF4", NC_FORMAT_NETCDF4}, /* aliases */
{"hdf5", NC_FORMAT_NETCDF4},
{"enhanced", NC_FORMAT_NETCDF4},
{"netcdf-4", NC_FORMAT_NETCDF4},
{"netcdf4", NC_FORMAT_NETCDF4},
/* NetCDF-4 HDF5-based format, restricted to classic data model */
{"netCDF-4 classic model", NC_FORMAT_NETCDF4_CLASSIC}, /* canonical format name */
{"nc7", NC_FORMAT_NETCDF4_CLASSIC}, /* short format name */
{"4", NC_FORMAT_NETCDF4_CLASSIC}, /* deprecated, use "-7" or "-k nc7" instead */
{"netCDF-4-classic", NC_FORMAT_NETCDF4_CLASSIC}, /* aliases */
{"netCDF-4_classic", NC_FORMAT_NETCDF4_CLASSIC},
{"netCDF4_classic", NC_FORMAT_NETCDF4_CLASSIC},
{"hdf5-nc3", NC_FORMAT_NETCDF4_CLASSIC},
{"enhanced-nc3", NC_FORMAT_NETCDF4_CLASSIC},
/* CDF-5 format */
{"5", NC_FORMAT_64BIT_DATA},
{"64-bit-data", NC_FORMAT_64BIT_DATA},
{"64-bit data", NC_FORMAT_64BIT_DATA},
{"nc5", NC_FORMAT_64BIT_DATA},
{"cdf5", NC_FORMAT_64BIT_DATA},
{"cdf-5", NC_FORMAT_64BIT_DATA},
/* null terminate*/
{NULL,0}
};
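/*
 * Illustrative use of the table above (file names are hypothetical):
 *   ncgen -k nc3 -o out.nc in.cdl            -> classic format
 *   ncgen -k "64-bit data" -o out.nc in.cdl  -> CDF-5
 * Any "name" entry in legalkinds is accepted as the argument of -k;
 * matching is an exact strcmp in the 'k' case of main() below.
 */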
#ifndef _MSC_VER
struct Languages {
char* name;
Language flag;
} legallanguages[] = {
{"b", L_BINARY},
{"c", L_C},
{"C", L_C},
{"f77", L_F77},
{"fortran77", L_F77},
{"Fortran77", L_F77},
{"j", L_JAVA},
{"java", L_JAVA},
{NULL,L_UNDEFINED}
};
#else
typedef struct Languages {
char* name;
Language flag;
} Languages;
struct Languages legallanguages[] = {
{"b", L_BINARY},
{"c", L_C},
{"C", L_C},
{"f77", L_F77},
{"fortran77", L_F77},
{"Fortran77", L_F77},
{"j", L_JAVA},
{"java", L_JAVA},
{NULL,L_UNDEFINED}
};
#endif
#if 0 /*not used*/
/* BOM Sequences */
static char* U8 = "\xEF\xBB\xBF"; /* UTF-8 */
static char* BE32 = "\x00\x00\xFE\xFF"; /* UTF-32; big-endian */
static char* LE32 = "\xFF\xFE\x00\x00"; /* UTF-32; little-endian */
static char* BE16 = "\xFE\xFF"; /* UTF-16; big-endian */
static char* LE16 = "\xFF\xFE"; /* UTF-16; little-endian */
#endif
/* The default minimum iterator size depends
on whether we are doing binary or language
based output.
*/
#define DFALTBINNCITERBUFFERSIZE 0x40000 /* 262,144 bytes (~256 KiB) */
#define DFALTLANGNCITERBUFFERSIZE 0x4000 /* 16,384 bytes (~16 KiB) */
/* strip off leading path */
/* result points into the argument (no allocation); caller must copy,
   e.g. via nulldup(), if it needs to keep the string */
static char *
ubasename(char *logident)
{
char* sep;
sep = strrchr(logident,'/');
#ifdef MSDOS
if(sep == NULL) sep = strrchr(logident,'\\');
#endif
if(sep == NULL) return logident;
sep++; /* skip past the separator */
return sep;
}
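/*
 * Illustrative behavior (argument strings are hypothetical):
 *   ubasename("/usr/local/bin/ncgen") -> pointer to "ncgen" within the argument
 *   ubasename("ncgen")                -> the argument itself
 * main() copies the result with nulldup() before storing it in progname.
 */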
void
usage(void)
{
derror("Usage: %s"
" [-1]"
" [-3]"
" [-4]"
" [-5]"
" [-6]"
" [-7]"
" [-b]"
" [-B buffersize]"
" [-d]"
" [-D debuglevel]"
" [-h]"
" [-k kind ]"
" [-l language=b|c|f77|java]"
" [-M <name>]"
" [-n]"
" [-o outfile]"
" [-P]"
" [-x]"
" [-N datasetname]"
" [-L loglevel]"
" [-H]"
" [file ... ]",
progname);
derror("netcdf library version %s", nc_inq_libvers());
}
int
main(
int argc,
char *argv[])
{
int code = 0;
int c;
FILE *fp;
struct Languages* langs;
#ifdef __hpux
setlocale(LC_CTYPE,"");
#endif
init_netcdf();
opterr = 1; /* print error message if bad option */
progname = nulldup(ubasename(argv[0]));
cdlname = NULL;
netcdf_name = NULL;
datasetname = NULL;
l_flag = 0;
nofill_flag = 0;
syntax_only = 0;
header_only = 0;
mainname = "main";
nciterbuffersize = 0;
k_flag = 0;
format_attribute = 0;
enhanced_flag = 0;
cdf5_flag = 0;
specials_flag = 0;
diskless = 0;
#ifdef LOGGING
ncloglevel = NC_TURN_OFF_LOGGING;
#else
ncloglevel = -1;
#endif
memset(&globalspecials,0,sizeof(GlobalSpecialData));
#if _CRAYMPP && 0
/* initialize CRAY MPP parallel-I/O library */
(void) par_io_init(32, 32);
#endif
while ((c = getopt(argc, argv, "134567bB:cdD:fhHk:l:M:no:Pv:xL:N:")) != EOF)
switch(c) {
case 'd':
debug = 1;
break;
case 'D':
debug = atoi(optarg);
break;
case 'c': /* for c output, old version of "-lc" */
if(l_flag != 0) {
fprintf(stderr,"Please specify only one language\n");
return 1;
}
l_flag = L_C;
fprintf(stderr,"-c is deprecated: please use -lc\n");
break;
case 'f': /* for f77 output, old version of "-lf" */
if(l_flag != 0) {
fprintf(stderr,"Please specify only one language\n");
return 1;
}
l_flag = L_F77;
fprintf(stderr,"-f is deprecated: please use -lf77\n");
break;
case 'b': /* for binary netcdf output, ".nc" extension */
if(l_flag != 0) {
fprintf(stderr,"Please specify only one language\n");
return 1;
}
l_flag = L_BINARY;
break;
case 'H':
header_only = 1;
break;
case 'h':
usage();
goto done;
case 'l': /* specify language, instead of using -c or -f or -b */
{
char* lang_name = NULL;
if(l_flag != 0) {
fprintf(stderr,"Please specify only one language\n");
return 1;
}
if(!optarg) {
derror("%s: output language is null", progname);
return(1);
}
#if 0
lang_name = estrdup(optarg);
#endif
lang_name = (char*) emalloc(strlen(optarg)+1);
(void)strcpy(lang_name, optarg);
for(langs=legallanguages;langs->name != NULL;langs++) {
if(strcmp(lang_name,langs->name)==0) {
l_flag = langs->flag;
break;
}
}
if(langs->name == NULL) {
derror("%s: output language %s not implemented",progname, lang_name);
nullfree(lang_name);
return(1);
}
nullfree(lang_name);
}; break;
case 'L':
ncloglevel = atoi(optarg);
break;
case 'n': /* old version of -b, uses ".cdf" extension */
if(l_flag != 0) {
fprintf(stderr,"Please specify only one language\n");
return 1;
}
l_flag = L_BINARY;
binary_ext = ".cdf";
break;
case 'o': /* to explicitly specify output name */
if(netcdf_name) efree(netcdf_name);
netcdf_name = nulldup(optarg);
break;
case 'N': /* to explicitly specify dataset name */
if(datasetname) efree(datasetname);
datasetname = nulldup(optarg);
break;
case 'x': /* set nofill mode to speed up creation of large files */
nofill_flag = 1;
break;
case 'v': /* a deprecated alias for "kind" option */
/*FALLTHRU*/
case 'k': { /* for specifying variant of netCDF format to be generated
Possible values are:
Format names:
"classic" or "nc3"
"64-bit offset" or "nc6"
"64-bit data" or "nc5" or "cdf-5"
"netCDF-4" or "nc4"
"netCDF-4 classic model" or "nc7"
"netCDF-5" or "nc5" or "cdf5"
Format version numbers (deprecated):
1 (=> classic)
2 (=> 64-bit offset)
3 (=> netCDF-4)
4 (=> netCDF-4 classic model)
5 (=> classic 64 bit data aka CDF-5)
*/
struct Kvalues* kvalue;
if(optarg == NULL) {
derror("-k flag has no value");
return 2;
}
for(kvalue=legalkinds;kvalue->name;kvalue++) {
if(strcmp(optarg,kvalue->name) == 0) {
k_flag = kvalue->k_flag;
break;
}
}
if(kvalue->name == NULL) {
derror("Invalid format: %s",optarg);
return 2;
}
} break;
case '3': /* output format is classic (netCDF-3) */
k_flag = NC_FORMAT_CLASSIC;
break;
case '6': /* output format is 64-bit-offset (netCDF-3 version 2) */
k_flag = NC_FORMAT_64BIT_OFFSET;
break;
case '4': /* output format is netCDF-4 (variant of HDF5) */
k_flag = NC_FORMAT_NETCDF4;
break;
case '5': /* output format is CDF5 */
k_flag = NC_FORMAT_CDF5;
break;
case '7': /* output format is netCDF-4 (restricted to classic model)*/
k_flag = NC_FORMAT_NETCDF4_CLASSIC;
break;
case 'M': /* Determine the name for the main function */
mainname = nulldup(optarg);
break;
case 'B':
nciterbuffersize = atoi(optarg);
break;
case 'P': /* diskless with persistence */
diskless = 1;
break;
case '?':
usage();
return(8);
}
if(l_flag == 0) {
l_flag = L_BINARY; /* default */
/* Treat -k or -o as an implicit -lb assuming no other -l flags */
if(k_flag == 0 && netcdf_name == NULL)
syntax_only = 1;
}
/* Compute/default the iterator buffer size */
if(l_flag == L_BINARY) {
if(nciterbuffersize == 0 )
nciterbuffersize = DFALTBINNCITERBUFFERSIZE;
} else {
if(nciterbuffersize == 0)
nciterbuffersize = DFALTLANGNCITERBUFFERSIZE;
}
#ifndef ENABLE_C
if(l_flag == L_C) {
fprintf(stderr,"C not currently supported\n");
code=1; goto done;
}
#endif
#ifndef ENABLE_BINARY
if(l_flag == L_BINARY) {
fprintf(stderr,"Binary netcdf not currently supported\n");
code=1; goto done;
}
#endif
#ifndef ENABLE_JAVA
if(l_flag == L_JAVA) {
fprintf(stderr,"Java not currently supported\n");
code=1; goto done;
}
#else
if(l_flag == L_JAVA && mainname != NULL && strcmp(mainname,"main")==0)
mainname = "Main";
#endif
#ifndef ENABLE_F77
if(l_flag == L_F77) {
fprintf(stderr,"F77 not currently supported\n");
code=1; goto done;
}
#endif
if(l_flag != L_BINARY)
diskless = 0;
argc -= optind;
argv += optind;
if (argc > 1) {
derror ("%s: only one input file argument permitted",progname);
return(6);
}
fp = stdin;
if (argc > 0 && strcmp(argv[0], "-") != 0) {
char bom[4];
size_t count;
if ((fp = NCfopen(argv[0], "r")) == NULL) {
derror ("can't open file %s for reading: ", argv[0]);
perror("");
return(7);
}
/* Check the leading bytes for an occurrence of a BOM */
/* re: http://www.unicode.org/faq/utf_bom.html#BOM */
/* Attempt to read the first two bytes */
memset(bom,0,sizeof(bom));
count = fread(bom,1,2,fp);
if(count == 2) {
switch (bom[0]) {
case '\x00':
case '\xFF':
case '\xFE':
	    /* UTF-16/UTF-32 BOM: only UTF-8 input is supported; complain and exit */
fprintf(stderr,"Input file contains a BOM indicating a non-UTF8 encoding\n");
return 1;
	case '\xEF':
	    /* UTF-8 BOM (EF BB BF): consume the one remaining BOM byte */
	    (void)fread(bom,1,1,fp);
break;
default: /* legal printable char, presumably; rewind */
rewind(fp);
break;
}
}
}
cdlname = nulldup(argv[0]);
if(cdlname != NULL) {
if(strlen(cdlname) > NC_MAX_NAME)
cdlname[NC_MAX_NAME] = '\0';
}
parse_init();
ncgin = fp;
if(debug >= 2) {ncgdebug=1;}
if(ncgparse() != 0)
return 1;
/* Compute the k_flag (1st pass) using rules in the man page (ncgen.1).*/
#ifndef ENABLE_CDF5
if(k_flag == NC_FORMAT_CDF5) {
derror("Output format CDF5 requested, but netcdf was built without cdf5 support.");
return 0;
}
#endif
#ifndef USE_NETCDF4
if(enhanced_flag) {
derror("CDL input is enhanced mode, but --disable-netcdf4 was specified during build");
return 0;
}
#endif
if(l_flag == L_JAVA || l_flag == L_F77) {
k_flag = NC_FORMAT_CLASSIC;
if(enhanced_flag) {
derror("Java or Fortran requires classic model CDL input");
return 0;
}
}
if(k_flag == 0)
k_flag = globalspecials._Format;
if(cdf5_flag && !enhanced_flag && k_flag == 0)
k_flag = NC_FORMAT_64BIT_DATA;
if(enhanced_flag && k_flag == 0)
k_flag = NC_FORMAT_NETCDF4;
    if(enhanced_flag && k_flag != NC_FORMAT_NETCDF4 && k_flag != NC_FORMAT_64BIT_DATA) {
	derror("-k or _Format conflicts with enhanced CDL input");
	return 0;
    }
if(specials_flag > 0 && k_flag == 0)
#ifdef USE_NETCDF4
k_flag = NC_FORMAT_NETCDF4;
#else
k_flag = NC_FORMAT_CLASSIC;
#endif
if(k_flag == 0)
k_flag = NC_FORMAT_CLASSIC;
/* Figure out usingclassic */
switch (k_flag) {
case NC_FORMAT_64BIT_DATA:
case NC_FORMAT_CLASSIC:
case NC_FORMAT_64BIT_OFFSET:
case NC_FORMAT_NETCDF4_CLASSIC:
usingclassic = 1;
break;
case NC_FORMAT_NETCDF4:
default:
usingclassic = 0;
break;
}
/* compute cmode_modifier */
switch (k_flag) {
case NC_FORMAT_CLASSIC:
cmode_modifier = 0; break;
case NC_FORMAT_64BIT_OFFSET:
cmode_modifier = NC_64BIT_OFFSET; break;
case NC_FORMAT_NETCDF4:
cmode_modifier = NC_NETCDF4; break;
case NC_FORMAT_NETCDF4_CLASSIC:
cmode_modifier = NC_NETCDF4 | NC_CLASSIC_MODEL; break;
case NC_FORMAT_64BIT_DATA:
cmode_modifier = NC_CDF5; break;
default: ASSERT(0); /* cannot happen */
}
if(diskless)
cmode_modifier |= (NC_DISKLESS|NC_NOCLOBBER);
processsemantics();
if(!syntax_only && error_count == 0)
define_netcdf();
done:
nullfree(netcdf_name);
nullfree(datasetname);
finalize_netcdf(code);
return code;
}
void
init_netcdf(void) /* initialize global counts, flags */
{
memset((void*)&nullconstant,0,sizeof(NCConstant));
fillconstant = nullconstant;
fillconstant.nctype = NC_FILLVALUE;
codebuffer = bbNew();
stmt = bbNew();
error_count = 0; /* Track # of errors */
}
void
finalize_netcdf(int retcode)
{
nc_finalize();
exit(retcode);
}