Commit Graph

312 Commits

Author SHA1 Message Date
Ward Fisher
b0f85f41df
Merge branch 'master' into authfix.dmh 2021-06-01 14:09:43 -06:00
Dennis Heimbigner
e632d02041 Re-enable DAP2 authorization tests
The thredds-test server now has some password protected datasets
that can be used to test DAP2 authorization support.
The general location is
````
https://thredds.ucar.edu/thredds/tdscapabilities/authTest.html
````
and specifically:
````
https://thredds.ucar.edu/thredds/dodsC/test3/testData.nc.html
````

This PR replaces old testcases with ncdap_test/testauth.sh.
This testcase allows us to test use of the .dodsrc file and .netrc file
and embedded user+pwd.

As part of this, I had to create a program (ncdap_test/pathcvt.c)
that is essentially equivalent to cygpath. Given a path in
windows, unix, msys, or cygwin format, it converts it to the
equivalent path in any of those four formats.  So it can be used
to convert a cygwin path to a windows path, for example. This is
needed in testpathcvt and testauth to make sure that the paths
in .daprc (e.g. the reference to .netrc) are of the proper
format.
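For illustration, here is a minimal sketch of one such conversion
(cygwin to windows); it is not the actual pathcvt.c code, which
handles all four formats in both directions:
````
#include <stdio.h>
#include <string.h>

/* Convert "/cygdrive/<d>/rest" to "<d>:\rest"; other paths are copied as-is. */
static void cygwin2windows(const char* src, char* dst, size_t dstlen)
{
    const char* prefix = "/cygdrive/";
    size_t plen = strlen(prefix);
    if (strncmp(src, prefix, plen) == 0 && src[plen] != '\0') {
        snprintf(dst, dstlen, "%c:%s", src[plen], src + plen + 1);
        for (char* p = dst; *p; p++)
            if (*p == '/') *p = '\\';
    } else {
        snprintf(dst, dstlen, "%s", src);
    }
}

int main(void)
{
    char out[1024];
    cygwin2windows("/cygdrive/c/Users/tester/.netrc", out, sizeof(out));
    printf("%s\n", out); /* prints c:\Users\tester\.netrc */
    return 0;
}
````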

Misc. Other Changes:
1. Fix some memory leaks in libdap2
2. Setting the env variable CURLOPT_VERBOSE allows tracking of curl
   operations.
3. Make tst_charvlenbug conditional on NC_VLEN_NOTEST.
2021-05-29 21:30:33 -06:00
Dennis Heimbigner
6901206927 Regularize the semantics of mkstemp.
re: https://github.com/Unidata/netcdf-c/issues/1827

The issue is partly resolved by this PR. The proximate problem appears to be that the semantics of mkstemp on *nix differ from the semantics of _mktemp_s on Windows. I had thought they were the same, but that is incorrect. The _mktemp_s function will only produce 26 different files, so the netcdf temp file code will fail after about that many iterations.

So, to solve this, I created my own version of mkstemp for windows that uses a random number generator. This appears to solve the reported issue.  I also added the testcase ncdap_test/test_manyurls but made it conditional on --enable-dap-long-tests because it is very slow.
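A rough, POSIX-flavored sketch of the idea (not the actual Windows
replacement in this PR): generate the suffix with a random number
generator instead of relying on _mktemp_s's fixed 26-name sequence.
````
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

/* Replace the trailing XXXXXX with random characters and retry with
   O_CREAT|O_EXCL until an unused name is found. */
static int random_mkstemp(char* tmpl)
{
    static const char alphabet[] =
        "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";
    size_t len = strlen(tmpl);
    if (len < 6 || strcmp(tmpl + len - 6, "XXXXXX") != 0) return -1;
    for (int attempt = 0; attempt < 1000; attempt++) {
        for (int i = 0; i < 6; i++)
            tmpl[len - 6 + i] = alphabet[rand() % (int)(sizeof(alphabet) - 1)];
        int fd = open(tmpl, O_RDWR | O_CREAT | O_EXCL, 0600);
        if (fd >= 0) return fd; /* success: the name was not in use */
    }
    return -1; /* gave up */
}

int main(void)
{
    char name[] = "/tmp/netcdfXXXXXX";
    srand((unsigned)time(NULL));
    int fd = random_mkstemp(name);
    if (fd >= 0) printf("created %s\n", name);
    return 0;
}
````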

I did note that the provided test program now fails after some 800 iterations with a libcurl error claiming it cannot resolve the host name. My belief is that the library is just running out of resources at this point: too many open curl handles or some such. I doubt if this failure is fixable.

So the bottom line is that it is really important to call nc_close when you are finished with a file.

Misc. Other Changes:

1. I took the opportunity to clean up some bad string hacks in the code. Specifically
    * change all uses of strncat to strlcat
    * remove old string hacks: occoncat and occopycat
2. Add a check to see if test.opendap.org is running and, if not, skip the test
3. Make CYGWIN use TEMP environment variable
2021-05-14 11:33:03 -06:00
Dennis Heimbigner
91168e33a0 Fix JSON quoted string processing in libnczarr
re: github issue https://github.com/Unidata/netcdf-c/issues/1982

The problem was that the libnczarr/zjson.c handling of strings with
embedded double quotes was wrong; a one-line fix.
Also added a test case.
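For context, an illustrative sketch of the rule involved (this is not
the one-line fix itself):
````
#include <stdio.h>

/* When writing a JSON string value, embedded '"' and '\' must be escaped. */
static void json_write_string(FILE* f, const char* s)
{
    fputc('"', f);
    for (; *s; s++) {
        if (*s == '"' || *s == '\\') fputc('\\', f);
        fputc(*s, f);
    }
    fputc('"', f);
}

int main(void)
{
    json_write_string(stdout, "say \"hello\""); /* prints "say \"hello\"" */
    fputc('\n', stdout);
    return 0;
}
````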

Misc. other changes:

1. I discovered, en passant, that the handling of 64-bit constants
had an error, which was fixed.
2. Cleaned up the constant conversion code to recurse on arrays of values.
2021-05-06 16:39:44 -06:00
Dennis Heimbigner
1243c3d866 Allow .rc tests to work in parallel by isolation 2021-04-25 22:02:29 -06:00
Dennis Heimbigner
391648fb8b Fix duplicate BOOL definitions 2021-02-02 22:48:22 -07:00
Dennis Heimbigner
7a44ae9184 Unify definition of NC_DISPATCH_VERSION
re: Issue

The netcdf dispatch table version was defined in several places.
Modify to only require defining it in CMakeLists.txt and configure.ac.

Fix entailed the following changes:
* Up the NC_DISPATCH_VERSION from 2 to 3 in configure.ac and CMakeLists.txt
* Create include/netcdf_dispatch.h.in and use it to configure include/netcdf_dispatch.h
* For CMAKE, make it search CMAKE_CURRENT_BINARY_DIR so code can locate the configured netcdf_dispatch.h
* Add entry to config.h.cmake.in for NC_DISPATCH_VERSION
* Move NCerror from include/ncdispatch.h to libdap2/nccommon.h
* Fix an API problem re nchttp.h
* Fix a conversion warning in libdispatch/dinfermodel.c
2021-01-31 21:40:08 -07:00
Dennis Heimbigner
016fd578b3 Merge branch 'master' into unmismatch.dmh 2021-01-07 18:30:57 -07:00
Dennis Heimbigner
5e5ff8ece9 Make fillmismatch the default for DAP2 and DAP4
re: https://github.com/Unidata/netcdf-c/issues/1614

NetCDF has a requirement that the type of a _FillValue attribute
be the same as the containing variable.  However, it is clear
that too many servers do not honor this requirement.  So, change
the default for DAP2 and DAP4 to allow fill mismatch.
2021-01-07 13:17:53 -07:00
Dennis Heimbigner
93e9d92778 More NCZarr optimizations
* Replace wholevar with more useful wholechunk optimization
* Add optimization to read multiple values at one time
* Replace NCDEFAULT_get/put_vars with native nczarr versions.
* Clarify chunk projection computations
* zdebdispatch.h
* Add more chunking test cases and re-enable run_chunkcases
* If !szip, then suppress deflate interference test
* Make H5Znoop(1) filter produce more information
* cleanup bzlib.c API
2021-01-06 13:35:59 -07:00
Dennis Heimbigner
d2316f866c Additional Fixes to NCZarr
Primary Fixes:
* Add a whole variable optimization -- used in the rare case that nc_get/put_vara covers the whole of a variable and the variable has a single chunk (see the sketch after this list).
* Fix chunking error when stride causes whole chunks to be skipped.
* Fix some memory leaks
* Add test cases
* Add one performance test to nczarr_test/. This uses the timer utils from unit_test: timer_utils.[ch].
* Move ncdumpchunks utility from ncdump to nczarr_test
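Roughly, the whole-variable case from the first bullet reduces to a check
like the following sketch (names are illustrative, not the nczarr code):
````
#include <stddef.h>

/* The request must start at the origin, cover every dimension completely,
   and the chunk must span each dimension (i.e. the variable has one chunk). */
static int is_whole_variable(int ndims,
                             const size_t* start, const size_t* count,
                             const size_t* dimlens, const size_t* chunklens)
{
    for (int i = 0; i < ndims; i++) {
        if (start[i] != 0) return 0;
        if (count[i] != dimlens[i]) return 0;
        if (chunklens[i] != dimlens[i]) return 0;
    }
    return 1;
}
````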

Misc. Other Changes:
* Make check for aws libraries conditional on --enable-nczarr-s3
* Remove all but one bm tests from nczarr_test until they are working.
* Remove another dependency on HDF5 from supposedly non-HDF5 specific code; specifically hdf5_log_hdf5.
* Make the BAIL2 macro be hdf5 specific and replace elsewhere with an HDF5 independent equivalent.
* Move hdf5cache.c to libsrc4/nc4cache.c because it is used by nczarr.
* Modify unit_tests so that some of them are run even if using Windows.
* Misc. small bug fixes, refactors, and memory leak fixes.
* Rename some conflicting tests for cmake.
* Attempted to make nc_perf work with cmake and failed.
2020-12-16 20:48:02 -07:00
Dennis Heimbigner
793ecc8e60 Yet another fix for DAP2 double URL encoding.
re:  https://github.com/Unidata/netcdf-c/issues/1876
and: https://github.com/Unidata/netcdf-c/pull/1835
and: https://github.com/Unidata/netcdf4-python/issues/1041

The change in PR 1835 was correct with respect to using %20 instead of '+'
for encoding blanks. However, it was a mistake to assume everything was
unencoded and then to do encoding ourselves. The problem is that
different servers do different things, with Columbia being an outlier.

So, I have added a set of client controls that can at least give
the caller some control over this. The caller can append
the following fragment to his URL to control what gets encoded before
sending it to the server. The syntax is as follows:
````
https://<host>/<path>/<query>#encode=path|query|all|none
````

The possible values:
* path  -- URL encode (i.e. %xx encode) as needed in the path part of the URL.
* query -- URL encode as needed in the query part of the URL.
* all   -- equivalent to ````#encode=path,query````.
* none  -- do not url encode any part of the URL sent to the server; not strictly necessary, so mostly for completeness.

Note that if "encode=" is used, then before it is processed, all encoding
is turned off so that ````#encode=path```` will only encode the path
and not the query.

The default is ````#encode=query````, so the path is left untouched,
but the query is always encoded.

Internally, this required changes to pass the encode flags down into
the OC2 library.
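As a usage sketch (the dataset path below is made up; only the #encode
fragment matters), a client that wants the URL passed through untouched
can say:
````
#include <netcdf.h>

int open_unencoded(void)
{
    int ncid, stat;
    /* #encode=none: send the URL to the server exactly as given */
    stat = nc_open("https://iridl.ldeo.columbia.edu/SOURCES/example/dods#encode=none",
                   NC_NOWRITE, &ncid);
    if (stat == NC_NOERR) nc_close(ncid);
    return stat;
}
````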

Misc. Unrelated Changes:
* Shut up those irritating warnings from putget.m4
2020-11-05 11:04:56 -07:00
Dennis Heimbigner
aeb3ac2809 Mostly revert the filter code to reduce its complexity of use.
re: https://github.com/Unidata/netcdf-c/issues/1836

Revert the internal filter code to simplify it. From the user's
point of view, the only visible changes should be:

1. The functions that convert text to filter specs have had their signature reverted and have been moved to netcdf_aux.h
2. Some filter API functions now return NC_ENOFILTER when inquiry is made about some filter.

Internally, the dispatch table has been modified to get rid of the filter_actions
entry and associated complex structures. It has been replaced with
inq_var_filter_ids and inq_var_filter_info entries and the dispatch table
version has been bumped to 3. Corresponding NOOP and NOTNC4 functions
were added to libdispatch/dnotnc4.c. Also, the filter_action entries
in dispatch tables were replaced for all dispatch code bases (HDF5, DAP2,
etc). This should only impact UDF users.

In the process, it became clear that the form of the filters
field in NC_VAR_INFO_T was format dependent, so I converted it to
be of type void* and pushed its management into the various dispatch
code bases. Specifically libhdf5 and libnczarr now manage the filters
field in their own way.

The auxiliary functions for parsing textual filter specifications
were moved to netcdf_aux.h and were renamed to the following:
* ncaux_h5filterspec_parse
* ncaux_h5filterspec_parselist
* ncaux_h5filterspec_free
* ncaux_h5filter_fix8

Misc. Other Changes:

1. Document NUG/filters.md updated to reflect the changes above.
2. All the old data types (structs and enums)
   used by filter_actions actions were deleted.
   The exception is the NC_H5_Filterspec because it is needed
   by ncaux_h5filterspec_parselist.
3. Clientside filters were removed -- another enhancement
   for which no-one ever asked.
4. The ability to remove filters was itself removed.
5. Some functionality needed by nczarr was moved from libhdf5
   to libsrc4 e.g. nc4_find_default_chunksizes
6. All the filterx code was removed
7. ncfilter.h and nc4filter.c no longer used

Misc. Unrelated Changes:

1. The nczarr_test makefile clean was leaving some directories; so
   add clean-local to take care of them.
2020-09-27 12:43:46 -06:00
Dennis Heimbigner
c3c89693c4 Fix URL encoding in DAP2 url processing
re: Github issue https://github.com/Unidata/netcdf-c/issues/1832
and Github issue https://github.com/Unidata/netcdf4-python/issues/1041

Handling of URL escape sequences for some servers
(e.g. http://iridl.ldeo.columbia.edu) appears to be somewhat
non-standard.
In particular, certain characters need escaping that other servers
do not. Fortunately, the changes should also work with other existing servers.
2020-09-08 12:41:12 -06:00
Ward Fisher
31dee0c4da
Revert "Revert "Fix nczarr-experimental: improve build support, disengage hdf5 vs netcdf4 flags, and find AWS libraries"" 2020-08-17 19:15:47 -06:00
Ward Fisher
16c27ca13f
Revert "Fix nczarr-experimental: improve build support, disengage hdf5 vs netcdf4 flags, and find AWS libraries" 2020-08-17 15:51:01 -06:00
Dennis Heimbigner
d85bb6fe20 The big change for this commit is to complete the
disengagement of enable-netcdf4 from enable-hdf5.
That is, with the advent of nczarr, it is possible
to turn off hdf5 but still need netcdf-4 enabled
because nczarr uses libsrc4, but not libhdf5.
This change involves a bunch of things:
1. Modify configure.ac and CMakeLists.txt to make enable_hdf5
   control whether hdf5 support is provided. For backward compatibility,
   disable-netcdf4 is treated as disable-hdf5. But internally,
   netcdf4 support is controlled only by the enabling of formats
   that require it.
2. In support of #1, modify .travis.yml to use enable/disable-hdf5
   instead of enable/disable-netcdf4.
3. test_common.in is modified to track selected features,
   including enable-hdf5 and enable-s3-tests. This is used in
   selected tests that mix netcdf-3 and netcdf4 tests.
4. The conflation of USE_HDF5 and USE_NETCDF4 is common in
   code, tests, and build files, so all of those had to be weeded out.
5. It turns out that some of the NC4_dim functions really are HDF5 specific,
   but are not treated as such. So they are moved from nc4dim.c to
   hdf5dim.c or hdf5dispatch.c
6. Some generic functions in libhdf5 can be (and were) moved to libsrc4.
2020-08-12 15:42:50 -06:00
Ward Fisher
cee0a9332d
Merge pull request #1759 from brianmckenna/CES_FCN
parse projection functions
2020-07-14 17:35:37 -06:00
Dennis Heimbigner
59e04ae071 This PR adds EXPERIMENTAL support for accessing data in the
cloud using a variant of the Zarr protocol and storage
format. This enhancement is generically referred to as "NCZarr".

The data model supported by NCZarr is netcdf-4 minus the user-defined
types and the String type. In this sense it is similar to the CDF-5
data model.

More detailed information about enabling and using NCZarr is
described in the document NUG/nczarr.md and in a
[Unidata Developer's blog entry](https://www.unidata.ucar.edu/blogs/developer/en/entry/overview-of-zarr-support-in).
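As a hedged usage sketch, an NCZarr dataset is addressed through a URL
whose fragment selects the NCZarr dispatcher; the path and mode tags
below are illustrative (see NUG/nczarr.md for the supported forms):
````
#include <netcdf.h>

int open_nczarr(void)
{
    int ncid, stat;
    /* illustrative local store; the #mode fragment picks the NCZarr path */
    stat = nc_open("file:///tmp/example.zarr#mode=nczarr,file", NC_NOWRITE, &ncid);
    if (stat == NC_NOERR) nc_close(ncid);
    return stat;
}
````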

WARNING: this code has had limited testing, so do not use this version
for production work. Also, performance improvements are ongoing.
Note especially the following platform matrix of successful tests:

Platform | Build System | S3 support
---------|--------------|-----------
Linux+gcc | Automake | yes
Linux+gcc | CMake | yes
Visual Studio | CMake | no

Additionally, and as a consequence of the addition of NCZarr,
major changes have been made to the Filter API. NOTE: NCZarr
does not yet support filters, but these changes are enablers for
that support in the future.  Note that it is possible
(probable?) that there will be some accidental reversions if the
changes here did not correctly mimic the existing filter testing.

In any case, previously filter ids and parameters were of type
unsigned int. In order to support the more general zarr filter
model, this was all converted to char*.  The old HDF5-specific,
unsigned int operations are still supported but they are
wrappers around the new, char* based nc_filterx_XXX functions.
This entailed at least the following changes:
1. Added the files libdispatch/dfilterx.c and include/ncfilter.h
2. Some filterx utilities have been moved to libdispatch/daux.c
3. A new entry, "filter_actions" was added to the NCDispatch table
   and the version bumped.
4. An overly complex set of structs was created to support funnelling
   all of the filterx operations thru a single dispatch
   "filter_actions" entry.
5. Move common code from libhdf5 to libsrc4 so that it is accessible
   to nczarr.

Changes directly related to Zarr:
1. Modified CMakeList.txt and configure.ac to support both C and C++
   -- this is in support of S3 support via the aws-sdk libraries.
2. Define a size64_t type to support nczarr.
3. More reworking of libdispatch/dinfermodel.c to
   support zarr and to regularize the structure of the fragments
   section of a URL.

Changes not directly related to Zarr:
1. Make client-side filter registration be conditional, with default off.
2. Hack include/nc4internal.h to make some flags added by Ed be unique:
   e.g. NC_CREAT, NC_INDEF, etc.
3. cleanup include/nchttp.h and libdispatch/dhttp.c.
4. Misc. changes to support compiling under Visual Studio including:
   * Better testing under windows for dirent.h and opendir and closedir.
5. Misc. changes to the oc2 code to support various libcurl CURLOPT flags
   and to centralize error reporting.
6. By default, suppress the vlen tests that have unfixed memory leaks; add option to enable them.
7. Make part of the nc_test/test_byterange.sh test be contingent on remotetest.unidata.ucar.edu being accessible.

Changes Left TO-DO:
1. fix provenance code, it is too HDF5 specific.
2020-06-28 18:02:47 -06:00
Brian McKenna
c0c56b1b06 parse projection functions 2020-06-15 15:20:24 -04:00
Dennis Heimbigner
518d70ef64 ocdebug 2020-05-14 14:44:48 -06:00
Dennis Heimbigner
f0cd7f8ec1 Support no-op dispatch functions
re: https://github.com/Unidata/netcdf-c/issues/1693

1. Add functions to libdispatch/dnotnc4.c to support
   dispatch table operations that should work for any
   dispatch table, even if they do not do anything.
   Functions such as nc_inq_var_filter.
2. Modify selected dispatch tables to utilize
   the noop functions.
3. Extend nc_test/tst_formats.c to test.

This is an extension of Ed's work to do this for
chunking and deflate and szip. See PRs
https://github.com/Unidata/netcdf-c/pull/1697
and
https://github.com/Unidata/netcdf-c/pull/1692

As a side effect, elide libdispatch/dnotnc3.c since
it is no longer used.
2020-04-15 14:44:58 -06:00
Dennis Heimbigner
b488c272d5 Fix conflicts with master 2020-02-27 14:06:45 -07:00
Dennis Heimbigner
44d0dcaad2 Add support for multiple filters per variable.
re: https://github.com/Unidata/netcdf-c/issues/1584

Support has been added for multiple filters per variable.  This
affects a number of components in netcdf. The new APIs are
documented in NUG/filters.md.

The primary changes are:
* A set of new functions are provided (see __include/netcdf_filter.h__ and the usage sketch after this list).
    - Obtain a list of the filters associated with a variable
    - Obtain the parameters for a specific filter.
* The existing __nc_inq_var_filter__ function now returns info
  about the first defined filter.
* The utilities (ncgen, ncdump, and nccopy) now support
  an extended format for specifying a sequence of filters.
  The general form is __<filter>|<filter>...__.
* The ncdump **_Filter** attribute now dumps a list of all the
  filters associated with a variable using the above new format.
* Filter specifications can now use a filter name instead of number
  for filters known to the netcdf library, which in turn is taken
  from the HDF5 filter registration page.
* New errors are defined: NC_EFILTER and NC_ENOFILTER. The latter
  is returned if an attempt is made to access an unknown filter.
* Internally, the dispatch table has been extended to add a function
  to handle all of the filter functions.
* New, filter-related, tests were added to nc_test4.
* A new plugin was added to the plugins directory to help with testing.
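A hedged usage sketch of the inquiry side of the new API (assuming the
usual netcdf convention that a NULL output pointer just skips that
result; error handling abbreviated):
````
#include <netcdf.h>
#include <netcdf_filter.h>
#include <stdio.h>
#include <stdlib.h>

/* List the filter ids attached to a variable and each one's parameter count. */
static int show_filters(int ncid, int varid)
{
    size_t nfilters, nparams, i;
    unsigned int* ids;
    int stat = nc_inq_var_filter_ids(ncid, varid, &nfilters, NULL);
    if (stat != NC_NOERR) return stat;
    if (nfilters == 0) return NC_NOERR;
    if ((ids = malloc(nfilters * sizeof(unsigned int))) == NULL) return NC_ENOMEM;
    if ((stat = nc_inq_var_filter_ids(ncid, varid, &nfilters, ids)) == NC_NOERR) {
        for (i = 0; i < nfilters; i++) {
            if ((stat = nc_inq_var_filter_info(ncid, varid, ids[i], &nparams, NULL)) != NC_NOERR)
                break;
            printf("filter %u: %u parameters\n", ids[i], (unsigned)nparams);
        }
    }
    free(ids);
    return stat;
}
````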

Notes:
1. The shuffle and fletcher32 filters are not part of the multifilter system.

Misc. changes:
1. A debug module was added to libhdf5 to help catch error locations.
2020-02-16 12:59:33 -07:00
Ben Boeckel
464e2953a0 windows: detect Windows using the correct define name 2019-11-07 07:55:47 -05:00
Ward Fisher
07d157d016 Whack-a-mole 2019-11-04 14:11:55 -07:00
Dennis Heimbigner
f1506d552e Change (again), and hopefully simplify, the file model inference algorithm.
* For URL paths, the new approach essentially centralizes all information
  in the URL into the "#mode=" fragment key and uses that value
  to determine the dispatcher for (most) URLs.

* The new approach has the following steps:

  1. canonicalize the path if it is a URL.
  2. use the mode= fragment key to determine the dispatcher
  3. if dispatcher still not determined, then use the mode flags
     argument to nc_open/nc_create to determine the dispatcher.
  4. if the path points to something readable, attempt to read the
     magic number at the front, and use that to determine the dispatcher.
     this case may override all previous cases.
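Step 4 amounts to something like this sketch (greatly simplified relative
to libdispatch/dinfermodel.c):
````
#include <stdio.h>
#include <string.h>

typedef enum {FMT_UNKNOWN, FMT_CLASSIC, FMT_HDF5} fmt_t;

/* Peek at the leading bytes of a readable file and guess the format. */
static fmt_t guess_format(const char* path)
{
    unsigned char magic[8] = {0};
    size_t n;
    FILE* f = fopen(path, "rb");
    if (f == NULL) return FMT_UNKNOWN;
    n = fread(magic, 1, sizeof(magic), f);
    fclose(f);
    if (n >= 8 && memcmp(magic, "\211HDF\r\n\032\n", 8) == 0) return FMT_HDF5;
    if (n >= 4 && memcmp(magic, "CDF", 3) == 0) return FMT_CLASSIC;
    return FMT_UNKNOWN;
}
````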

* Misc changes.

  1. Update documentation
  2. Moved some unit tests from libdispatch to unit_test directory.
  3. Fixed use of wrong #ifdef macro in test_filter_reg.c
     [I think this may fix a previously reported esupport query].
2019-09-29 12:59:28 -06:00
Greg Sjaardema
56c0d5cf8a Spelling fixes 2019-09-18 08:03:01 -06:00
edwardhartnett
94f1a89a40 final removal 2019-08-15 07:05:10 -06:00
edwardhartnett
2077729abc removed base_pe functions from dispatch table 2019-08-15 06:51:06 -06:00
edwardhartnett
170c5b0901 removed NC from open in dispatch table 2019-08-01 14:30:20 -06:00
edwardhartnett
659939d397 turned off tst_zero_len_var.sh in cmake build 2019-08-01 12:46:12 -06:00
Dennis Heimbigner
4c92fc3405 Remove netcdf-4 conditional on the dispatch table.
Partially address: https://github.com/Unidata/netcdf-c/issues/1056

Currently, some of the entries in the dispatch table
are conditional on USE_NETCDF4.

As a step in upgrading the dispatch table for use
with user-defined tables, we remove that conditional.
This means that all dispatch tables must implement the
netcdf-4 specific functions even if only to make them
return NC_ENOTNC4. To simplify this, a set of default
functions are defined in libdispatch/dnotnc4.c to provide this
behavior. The file libdispatch/dnotnc3.c is also relevant to
this.
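The pattern is a table full of stubs like the following sketch (the name
and signature here are illustrative, not the actual dnotnc4.c code):
````
#include <netcdf.h>

static int NOTNC4_def_var_deflate(int ncid, int varid, int shuffle,
                                  int deflate, int deflate_level)
{
    (void)ncid; (void)varid; (void)shuffle; (void)deflate; (void)deflate_level;
    return NC_ENOTNC4; /* netCDF-4 operation attempted on a non-netCDF-4 format */
}
````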

The primary fix is to modify the various dispatch tables to
remove the conditional and use the functions in
libdispatch/dnotnc4.c as appropriate. In practice, all of the
existing tables are prepared to handle this, so the only
real change is to remove the conditionals.

Misc. Unrelated fixes
1. Fix some annoying warnings in ncvalidator.

Notes:
1. This has not been tested with either pnetcdf or hdf4 enabled.
   When those are enabled, it is possible that there are still
   some conditionals that need to be fixed.
2019-07-20 13:59:40 -06:00
Dennis Heimbigner
331a1f1c63 Centralize calls to curl_global_init and curl_global_cleanup
re: https://github.com/Unidata/netcdf-c/issues/1388

1. Centralize calls to curl_global_init and curl_global_cleanup
   to libdispatch/ddispatch.c
2. Make the above calls if options require curl: currently
   any of DAP2, DAP4, or byterange.
3. Side issue: Fix obscure bug in mmapio.c involving non-persistent mmap.
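A minimal sketch of the centralization pattern in items 1 and 2 above
(not the actual ddispatch.c code):
````
#include <curl/curl.h>

static int curl_initialized = 0;

/* Called once from the library initialization path. */
static int global_curl_init(void)
{
    if (!curl_initialized) {
        if (curl_global_init(CURL_GLOBAL_DEFAULT) != CURLE_OK) return 1;
        curl_initialized = 1;
    }
    return 0;
}

/* Called once from the library finalization path. */
static void global_curl_cleanup(void)
{
    if (curl_initialized) {
        curl_global_cleanup();
        curl_initialized = 0;
    }
}
````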
2019-05-03 13:22:54 -06:00
Ward Fisher
3b34a82e19 Merge branch 'master' into threads_part1.dmh 2019-05-01 14:41:13 -06:00
Dennis Heimbigner
6934aa2e8b Thread safety: step 1: cleanup
re: https://github.com/Unidata/netcdf-c/issues/1373 (partial)

* Mark some global constants as const to make them easier to track.
* Hide direct access to the ncrc_globalstate behind a function call.
* Convert dispatch tables to constants (except the user-defined ones).
  This has some consequences in terms of function arguments needing to be marked
  as const as well.
* Remove some no longer needed global fields
* Aggregate all the globals in nclog.c
* Uniformly replace nc_sizevector{0,1} with NC_coord_{zero,one}
* Uniformly replace nc_ptrdiffvector1 with NC_stride_one
* Remove some obsolete code
2019-03-30 14:06:20 -06:00
Julien Schueller
a3c2031071 Fix MinGW build
Some Win32 includes are also needed on MinGW
2019-03-17 09:18:56 +01:00
Ward Fisher
05dadede2b
Merge pull request #1321 from Unidata/dap4endianfix2.dmh
Fix additional big-endian machine error in dap4.
2019-02-15 15:45:48 -07:00
Dennis Heimbigner
ddd324c1f0 Fix LGTM complaint 2019-02-15 12:40:30 -07:00
Dennis Heimbigner
c59d5ce205 Fix handling of '/' characters in names in DAP2.
re: https://github.com/Unidata/thredds/issues/1224
[note that this is an issue in thredds, but the fix is in netcdf-c]

A thredds server can encode a netcdf-4 file into DAP2
by flattening names to include the containing group path,
where the group names are separated by '/'.

But the '/' is prohibited in netcdf names even if escaped
(a decision before my time).

So, if the netcdf-c/libdap2 code encounters a DAP2 name with '/'
characters, the '/' characters are converted to the string
%2f. Unfortunately, there is a glitch, namely that converting
the leading '/' produces a name that is still illegal. This PR
modifies the code to just drop the leading '/' character.
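Illustratively, the renaming rule now behaves like this sketch (not the
libdap2 code itself):
````
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Replace embedded '/' with "%2f" and drop a leading '/'. */
static char* repairname(const char* name)
{
    size_t len = strlen(name);
    char* result = malloc(3 * len + 1); /* worst case: every char expands to %2f */
    char* q = result;
    const char* p = name;
    if (result == NULL) return NULL;
    if (*p == '/') p++; /* just drop the leading '/' */
    for (; *p; p++) {
        if (*p == '/') { memcpy(q, "%2f", 3); q += 3; }
        else *q++ = *p;
    }
    *q = '\0';
    return result;
}

int main(void)
{
    char* fixed = repairname("/group1/var1");
    if (fixed != NULL) printf("%s\n", fixed); /* prints group1%2fvar1 */
    free(fixed);
    return 0;
}
````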
2019-02-14 20:25:40 -07:00
Dennis Heimbigner
724128ad3b Add hack to deal with DAP2 signed byte hack.
re: issue https://github.com/Unidata/netcdf-c/issues/1316

The DAP2 data model does not have a signed byte type,
but netcdf-3 does have (only) a signed byte type.
So, when converting a netcdf-3 signed byte typed variable to
a DAP2 unsigned byte, the following actions are taken by thredds:
1. The variable type is marked as DAP2 (unsigned) byte.
2. A special attribute, "_Unsigned=false" is set for the variable
3. The corresponding "_FillValue" attribute, if any, is up-converted
   to the DAP2 Int16 type in order to hold, with sign, any signed byte
   fill value.

On the netcdf-c side, this looks like a fillvalue type mismatch and causes
an error. Instead, the netcdf-c dap2 conversion code needs to recognize
this hack and undo it locally.

So this change looks for the above case, and if found, then it properly
converts the _FillValue type to netcdf-3 signed byte.
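In outline, the recognition-and-undo step looks like this sketch (the
helper and its arguments are hypothetical, not the actual dap2 code):
````
#include <limits.h>
#include <string.h>

/* A DAP2 (unsigned) byte variable carrying _Unsigned == "false" and an
   Int16 _FillValue gets its fill value converted back to signed byte.
   Returns 1 if the hack was recognized and undone. */
static int undo_signed_byte_hack(int var_is_byte, const char* unsigned_att,
                                 short fill16, signed char* fill8)
{
    if (!var_is_byte || unsigned_att == NULL) return 0;
    if (strcmp(unsigned_att, "false") != 0) return 0;
    if (fill16 < SCHAR_MIN || fill16 > SCHAR_MAX) return 0; /* not a byte value */
    *fill8 = (signed char)fill16;
    return 1;
}
````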

Since DAP2 supports both signed and unsigned integers of sizes 16 and 32 bits,
this should be the only hack needed (famous last words).

It may later be desirable for the thredds DAP2 converter to modify its
behavior as well.
2019-02-13 14:36:14 -07:00
Dennis Heimbigner
54656b4a16 When doing prefetch in DAP2, ignore invisible variables.
re: Issue https://github.com/Unidata/netcdf-c/issues/1300

Add a check for invisible variables.
2019-01-29 13:43:02 -07:00
Dennis Heimbigner
75759ca957 Separate out the --ansi comment fixes.
re: pull request https://github.com/Unidata/netcdf-c/pull/1242

This PR should be applied before https://github.com/Unidata/netcdf-c/pull/1242.
It fixes only the -ansi '//' comment problems. There may be some
slight conflicts with that other PR when it is applied, since in some
cases I converted #if 0...#endif to /*...*/
2018-12-12 13:23:09 -07:00
Ward Fisher
c70480dc05 Updated COPYRIGHT stanza in libdap2 2018-12-06 14:21:03 -07:00
Dennis Heimbigner
751300ec59 Fix more memory leaks in netcdf-c library
This is a follow up to PR https://github.com/Unidata/netcdf-c/pull/1173

Sorry that it is so big, but leak suppression can be complex.

This PR fixes all remaining memory leaks -- as determined by
-fsanitize=address, and with the exceptions noted below.

Unfortunately, there remains a significant leak that I cannot
solve. It involves vlens, and it is unclear if the leak is
occurring in the netcdf-c library or the HDF5 library.

I have added a check_PROGRAM to the ncdump directory to show the
problem.  The program is called tst_vlen_demo.c. To exercise it,
build the netcdf library with -fsanitize=address enabled. Then
go into ncdump and do a "make clean check".  This should build
tst_vlen_demo without actually executing it.  Then do the
command "./tst_vlen_demo" to see the output of the memory
checker.  Note that the lost malloc is deep in the HDF5 library
(in H5Tvlen.c).

I am temporarily working around this error in the following way.
1. I modified several test scripts to not execute known vlen tests
   that fail as described above.
2. Added an environment variable called NC_VLEN_NOTEST.
   If set, then those specific tests are suppressed.

This should mean that the --disable-utilities option to
./configure should not need to be set to get a memory leak clean
build.  This should allow for detection of any new leaks.

Note: I used an environment variable rather than a ./configure
option to control the vlen tests. This is because it is
temporary (I hope) and because it is a bit tricky for shell
scripts to access ./configure options.

Finally, as before, this has only been tested with netcdf-4 and hdf5 support.
2018-11-15 10:00:38 -07:00
Dennis Heimbigner
245961de00 re: github issues
https://github.com/Unidata/netcdf-c/issues/1168
    https://github.com/Unidata/netcdf-c/issues/1163
    https://github.com/Unidata/netcdf-c/issues/1162

This PR partially fixes memory leaks in the netcdf-c library,
in the ncdump utility, and in some test cases.

The netcdf-c library now runs memory clean with the assumption
that the --disable-utilities option is used. The primary remaining
problem is ncgen. Once that is fixed, I believe the netcdf-c library
will run memory clean with no limitations.

Notes
-----------
1. Memory checking was performed using gcc -fsanitize=address.
   Valgrind-based testing has yet to be performed.
2. The pnetcdf, hdf4, and examples code has not been tested.

Misc. Non-leak changes
1. Make tst_diskless2 only run when netcdf4 is enabled (issue 1162)
2. Fix CMakeLists.txt to turn off logging if ENABLE_NETCDF_4 is OFF
3. Isolated all my debug scripts into a single top-level directory
   called debug
4. Fix some USE_NETCDF4 dependencies in nc_test and nc_test4 Makefile.am
2018-10-30 20:48:12 -06:00
Dennis Heimbigner
a5a34f6aba
Merge branch 'master' into nc_mpiio_nc_mpiposix 2018-10-06 13:33:55 -06:00
Dennis Heimbigner
8072d1f6bb Modify DAP2 and DAP4 to optionally allow Fillvalue/Variable mismatch
re: issue https://github.com/Unidata/netcdf-c/issues/1151

Modify DAP2 and DAP4 code to handle case when _FillValue type is not
same as the parent variable type.

Specifically:
1. Define a parameter [fillmismatch] to allow this mismatch
   (see the usage sketch after this list); the default is to disallow.
2. If allowed, forcibly change the type of the _FillValue to match
   the parent variable.
3. If allowed, convert the values to match the new type
4. Generate a log message
5. If not allowed, then fail
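A hedged usage sketch (the dataset URL is just an example, and the
bracket-prefix placement of the parameter is an assumption based on
item 1 above):
````
#include <netcdf.h>

int open_tolerant(void)
{
    int ncid, stat;
    /* [fillmismatch] asks the DAP code to tolerate _FillValue/variable type mismatches */
    stat = nc_open("[fillmismatch]http://test.opendap.org/opendap/data/nc/coads_climatology.nc",
                   NC_NOWRITE, &ncid);
    if (stat == NC_NOERR) nc_close(ncid);
    return stat;
}
````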

Implementing this required some changes to ncdap_test/dapcvt.c
Also added test cases.

Minor Unrelated Changes:
1. There were a number of warnings about e.g.
   assigning a const char* to a char*. Fix these
2. In nccopy.1, replace .NP with .IP "n"
   (re PR https://github.com/Unidata/netcdf-c/pull/1144)
3. fix minor error in ncdump/ocprint
2018-10-01 15:51:43 -06:00
Wei-keng Liao
0ed70756cc Ignore flags NC_MPIIO and NC_MPIPOSIX. 2018-09-22 20:22:34 -05:00
Dennis Heimbigner
d62a9e623c Fix the NC_INMEMORY code to work in all cases with HDF5 1.10.
re: github issue https://github.com/Unidata/netcdf-c/issues/1111

One of the less common use cases for the in-memory feature is
apparently failing with HDF5-1.10.x.  The fix is complicated and
requires significant changes to libhdf5/nc4memcb.c. The current
setup is detailed in the file docs/inmeminternal.dox.
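For context, the in-memory use case in question is roughly the following
hedged sketch (nc_open_mem is declared in netcdf_mem.h; the buffer is
assumed to already hold a complete netCDF file image):
````
#include <stddef.h>
#include <netcdf.h>
#include <netcdf_mem.h>

/* "image" holds a complete file image of "size" bytes; no file is touched. */
int open_image(void* image, size_t size)
{
    int ncid, stat;
    stat = nc_open_mem("in-memory.nc", NC_NOWRITE, size, image, &ncid);
    if (stat == NC_NOERR) stat = nc_close(ncid);
    return stat;
}
````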

Additionally, it was discovered that the program
nc_test/tst_inmemory.c, which is invoked by
nc_test/run_inmemory.sh, actually was failing because of the
above problem. But the failure is not detected since the script
does not return a non-zero value.

Other Changes:
1. Fix nc_test/tst_inmemory to return errors correctly.
2. Make ncdap_test/findtestserver.c and dap4_test/findtestserver4.c
   be generated from ncdap_test/findtestserver.c.in.
3. Make LOG() print output to stderr instead of stdout to
   avoid contaminating e.g. ncdump output.
4. Modify the handling of NC_INMEMORY and NC_DISKLESS flags
   to properly handle that NC_DISKLESS => NC_INMEMORY. This
   affects a number of code pieces, especially memio.c.
2018-09-04 11:27:47 -06:00