Commit Graph

38 Commits

Peter Hill
653e09fd6d
Try to more consistently use size_t for nclistget index argument 2023-11-28 16:28:31 +00:00
Dennis Heimbigner
8cab468169 Suppress filters on variables with non-fixed-size types.
re: Discussion https://github.com/Unidata/netcdf-c/discussions/2554
re: PR https://github.com/Unidata/netcdf-c/pull/2231
re: Issue https://github.com/Unidata/netcdf-c/issues/2189

After some discussion, the issue of applying filters to variables
whose type is not fixed-size was resolved as follows:
1. A call to nc_def_var_filter will ignore such filters, but will issue a log warning.
2. Loading (from an existing file) a variable whose type is not fixed-size and which has filters will cause the variable to be suppressed.

This PR enforces those rules.
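For illustration, a minimal sketch of rule 1 from the caller's side (the file, type, and variable names are hypothetical; filter id 1 is HDF5 deflate; error checking omitted):
````
#include <netcdf.h>

/* Sketch: attach a deflate filter (id 1) to a variable whose type is
 * a VLEN, i.e. not fixed-size. Under rule 1 the filter is ignored and
 * a log warning is issued. */
int main(void)
{
    int ncid, dimid, varid;
    nc_type vtype;
    unsigned int level = 5;

    nc_create("vlenfilter.nc", NC_NETCDF4 | NC_CLOBBER, &ncid);
    nc_def_vlen(ncid, "row_of_floats", NC_FLOAT, &vtype);
    nc_def_dim(ncid, "m", 5, &dimid);
    nc_def_var(ncid, "ragged", vtype, 1, &dimid, &varid);

    /* Ignored with a log warning: the variable's type is not fixed-size. */
    nc_def_var_filter(ncid, varid, 1 /* deflate */, 1, &level);

    nc_close(ncid);
    return 0;
}
````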

### Misc. Other changes
* Add a test case to test the vlen change.
* Make some minor clean-ups in various cmake and automake files.
* Remove an unused test.
2023-06-21 14:46:22 -06:00
Dennis Heimbigner
fb40a72b45 Improve performance of the nc_reclaim_data and nc_copy_data functions.
re: Issue https://github.com/Unidata/netcdf-c/issues/2685
re: PR https://github.com/Unidata/netcdf-c/pull/2179

As noted in PR https://github.com/Unidata/netcdf-c/pull/2179,
the old code did not allow for reclaiming instances of types,
nor for properly copying them. That PR provided new functions
capable of reclaiming/copying instances of arbitrary types.

However, as noted by Issue https://github.com/Unidata/netcdf-c/issues/2685, using these
most general functions resulted in a significant performance
degradation, even for common cases.

This PR attempts to mitigate the cost of using the general
reclaim/copy functions in two ways.

First, the previous functions operated at the top level, taking
ncid and typeid arguments. They were augmented with equivalent
versions that use the netcdf-c library's internal data structures
to access the needed information directly. These new functions
are used internally by the library.

The second mitigation involves optimizing the internal functions
by providing early tests for common cases. This avoids
unnecessary recursive function calls.
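A minimal sketch of that second mitigation, assuming a per-type fixed-size flag (the struct and function names below are illustrative, not the library's actual internals):
````
#include <stddef.h>

/* Illustrative stand-in for the library's internal per-type metadata. */
struct type_info {
    int fixed_size;   /* nonzero if the type is (transitively) fixed-size */
    size_t size;      /* in-memory size of one instance */
};

/* Per-instance recursive walk; body elided in this sketch. */
static void reclaim_instance(const struct type_info *tp, void *inst)
{
    (void)tp; (void)inst; /* would recurse through VLEN/string fields */
}

/* Early exit for the common case: a vector of fixed-size instances
 * contains no dynamically allocated interior blocks, so the
 * recursive walk can be skipped entirely. */
static int reclaim_vector(const struct type_info *tp, void *memory, size_t count)
{
    size_t i;
    if (tp->fixed_size)
        return 0; /* nothing interior to free; no recursion */
    for (i = 0; i < count; i++)
        reclaim_instance(tp, (char *)memory + i * tp->size);
    return 0;
}
````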

The overall result is a significant improvement in speed by a
factor of roughly twenty -- your mileage may vary. These
optimized functions are still not as fast as the original (more
limited) functions, but they are getting close. Additional optimizations are
possible. But the cost is a significant "uglification" of the
code that I deemed a step too far, at least for now.

## Misc. Changes
1. Added a test case to check the proper reclamation/copy of complex types.
2. Found and fixed some places where nc_reclaim/copy should have been used.
3. Replaced (almost all) occurrences of nc_reclaim/copy in the netcdf-c library with calls to the internal NC_reclaim/copy versions. This, plus the optimizations, is the primary speed-up mechanism.
4. In DAP4, the metadata is held in a substrate in-memory file; this required some changes so that the reclaim/copy code accesses that substrate dispatcher rather than the DAP4 dispatcher.
5. Re-factored and isolated the code that computes whether a type is (transitively) variable-sized or not.
6. Cleaned up the reclamation code in ncgen; adding the use of nc_reclaim exposed some memory problems.
2023-05-20 17:11:25 -06:00
Dennis Heimbigner
591e6b2f6d Fix DAP4 remotetest server
Warning: This PR is a follow on to PR https://github.com/Unidata/netcdf-c/pull/2555 and should not be merged until that prior PR has been merged. The changeset for this PR is a delta on the PR https://github.com/Unidata/netcdf-c/pull/2555.

This PR re-enables the use of the server *remotetest.unidata.ucar.edu/d4ts*
to test several features:
1. Show that access over the Internet to servers using the DAP4 protocol works.
2. Test that DAP4 support in the [Thredds Data Server](https://github.com/Unidata/tds) is operating correctly.
3. Test that the DAP4 support in the [netcdf-java library](https://github.com/Unidata/netcdf-java) and the DAP4 support in the netcdf-c library are consistent and interoperable.

The test inputs (primarily *\*.nc* files) provided in the netcdf-c library
are also used by the DAP4 Test Server (aka d4ts) to provide web access to a
collection of data files accessible via the DAP4 protocol, which can be
used to test Internet access to a working server.

To be precise, this version of d4ts currently lives in unmerged branches
of the *netcdf-java* and *tds* Github repositories and so is not actually
in the main repositories *yet*. However, the *d4ts.war* file was created
from those branches and used to populate the *remotetest.unidata.ucar.edu*
server.

The two other remote servers that were used in the past are *Hyrax* (OPeNDAP.org)
and *thredds-test*. These will remain disabled until
those servers can be fixed.

## Primary Changes

* Rebuild the *baselineremote* directory. This directory contains the validation data needed to test the remote servers.
* Re-enable using remotetest.unidata.ucar.edu as part of the DAP4 testing process.
* Fix the *dap4_test/test_remote.sh* test script to match the current available test data.
* Make some changes to libdap4 to improve the ability to catch malformed data streams [affects a lot of files in libdap4].

## Misc. Unrelated Changes

* Remove a raft of warnings, especially in nc_test4/tst_quantize.c.
* Add some additional explanatory information to the NCZarr documentation.
* Clean up some Doxygen errors in the docs files and reorder some files.
2022-11-15 20:29:21 -07:00
Dennis Heimbigner
65fd9fe1a5 Provide a default enum const when fill value does not match any enum const.
re: https://github.com/Unidata/netcdf-c/issues/982

It is possible to define an enum type that has no enum constant
with value zero. However, HDF5 has a default fill value of zero
that it uses to fill all chunks. When this situation
occurs, ncdump, say, will fail because there is no enum constant
to print for the value zero.

The solution is to create a special enum constant called "_UNDEFINED"
that has the value zero. It is only used in the case that there is
no constant in the enum that already covers zero.
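A minimal sketch of the triggering condition (the file, type, and variable names are hypothetical; error checking omitted): every constant of the enum is non-zero, so a zero-filled chunk has no printable constant until the synthetic "_UNDEFINED" is introduced.
````
#include <netcdf.h>

/* Sketch: an enum whose constants are all non-zero. An unwritten chunk
 * is zero-filled, and before this fix ncdump had no constant to print
 * for 0; now it falls back to the synthetic "_UNDEFINED" constant. */
int main(void)
{
    int ncid, dimid, varid;
    nc_type etype;
    const unsigned char low = 1, high = 2; /* no constant covers zero */

    nc_create("enumfill.nc", NC_NETCDF4 | NC_CLOBBER, &ncid);
    nc_def_enum(ncid, NC_UBYTE, "level_t", &etype);
    nc_insert_enum(ncid, etype, "LOW", &low);
    nc_insert_enum(ncid, etype, "HIGH", &high);
    nc_def_dim(ncid, "n", 4, &dimid);
    nc_def_var(ncid, "levels", etype, 1, &dimid, &varid);
    nc_close(ncid); /* no data written: chunks are filled with zero */
    return 0;
}
````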

A test case is added in netcdf-c/ncdump to validate this solution.

Note: the changes occur primarily in libsrc4, so they also work for NCZarr.
2022-07-17 14:32:31 -06:00
Dennis Heimbigner
11547ffd29 1. Fix an additional flaw in fill_value handling where non-atomic default values were not being properly handled.
2. Rename NC4_inq_any_type to NC_inq_any_type.
2022-01-10 15:27:16 -07:00
Dennis Heimbigner
8b9253fef2 Fix various problems around VLENs
re: https://github.com/Unidata/netcdf-c/issues/541
re: https://github.com/Unidata/netcdf-c/issues/1208
re: https://github.com/Unidata/netcdf-c/issues/2078
re: https://github.com/Unidata/netcdf-c/issues/2041
re: https://github.com/Unidata/netcdf-c/issues/2143

For a long time, there have been known problems with the
management of complex types containing VLENs.  This also
involves the string type because it is stored as a VLEN of
chars.

This PR (mostly) fixes this problem. But note that it adds new
functions to netcdf.h (see below) and this may require bumping
the .so number.  These new functions can be removed, if desired,
in favor of functions in netcdf_aux.h, but netcdf.h seems the
better place for them because they are intended as alternatives
to the nc_free_vlen and nc_free_string functions already in
netcdf.h.

The term complex type refers to any type that directly or
transitively references a VLEN type: an array of VLENs, a
compound with a VLEN field, and so on.

In order to properly handle instances of these complex types, it
is necessary to have functions that can recursively walk
instances of such types to perform various actions on them.  The
term "deep" is also used to mean recursive.

At the moment, the two operations needed by the netcdf library are:
* free'ing an instance of the complex type
* copying an instance of the complex type.

The current library does only shallow free and shallow copy of
complex types. This means that only the top level is properly
free'd or copied, but deep internal blocks in the instance are
not touched.

Note that the term "vector" will be used to mean a contiguous (in
memory) sequence of instances of some type. Given an array with,
say, dimensions 2 X 3 X 4, this will be stored in memory as a
vector of length 2*3*4=24 instances.

The use cases are primarily these.

## nc_get_vars
Suppose one is reading a vector of instances using nc_get_vars
(or nc_get_vara or nc_get_var, etc.).  These functions will
return the vector in the top-level memory provided.  All
interior blocks (from nested VLENs or strings) will have been
dynamically allocated.

After using this vector of instances, it is necessary to free
(aka reclaim) the dynamically allocated memory, otherwise a
memory leak occurs.  So, the recursive reclaim function is used
to walk the returned instance vector and do a deep reclaim of
the data.

Currently functions are defined in netcdf.h that are supposed to
handle this: nc_free_vlen(), nc_free_vlens(), and
nc_free_string().  Unfortunately, these functions only do a
shallow free, so deeply nested instances are not properly
handled by them.

## nc_put_vars
Suppose one is writing a vector of instances using nc_put_vars
(or nc_put_vara or nc_put_var, etc.). Internally, the provided data
is immediately written, so there is no need to copy it. But the caller
may need to reclaim the data it passed into the function.

## nc_put_att
Suppose one is writing a vector of instances as the data of an attribute
using, say, nc_put_att.

Internally, the incoming attribute data must be copied and stored
so that changes/reclamation of the input data will not affect
the attribute.

Again, the code inside the netcdf library does only shallow copying
rather than deep copy. As a result, one sees effects such as described
in Github Issue https://github.com/Unidata/netcdf-c/issues/2143.

Also, after defining the attribute, it may be necessary for the user
to free the data that was provided as input to nc_put_att().

## nc_get_att
Suppose one is reading a vector of instances as the data of an attribute
using, say, nc_get_att.

Internally, the existing attribute data must be copied and returned
to the caller, and the caller is responsible for reclaiming
the returned data.

Again, the code inside the netcdf library does only shallow copying
rather than deep copy. So this can lead to memory leaks and errors
because the deep data is shared between the library and the user.

# Solution

The solution is to build properly recursive reclaim and copy
functions and use those as needed.
These recursive functions are defined in libdispatch/dinstance.c
and their signatures are defined in include/netcdf.h.
For back compatibility, corresponding "ncaux_XXX" functions
are defined in include/netcdf_aux.h.
````
int nc_reclaim_data(int ncid, nc_type xtypeid, void* memory, size_t count);
int nc_reclaim_data_all(int ncid, nc_type xtypeid, void* memory, size_t count);
int nc_copy_data(int ncid, nc_type xtypeid, const void* memory, size_t count, void* copy);
int nc_copy_data_all(int ncid, nc_type xtypeid, const void* memory, size_t count, void** copyp);
````
There are two variants. The first two, nc_reclaim_data() and
nc_copy_data(), assume the top-level vector is managed by the
caller. For reclaim, this is so the user can use, for example, a
statically allocated vector. For copy, it assumes the user
provides the space into which the copy is stored.

The second two, nc_reclaim_data_all() and
nc_copy_data_all(), allow the library to manage the
top level.  So for nc_reclaim_data_all, the top level is
assumed to be dynamically allocated and will be free'd by
nc_reclaim_data_all().  The nc_copy_data_all() function
will allocate the top level and return a pointer to it to the
user. The user can later pass that pointer to
nc_reclaim_data_all() to reclaim the instance(s).
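A minimal usage sketch of the two variants (the file and variable names are hypothetical; error checking omitted):
````
#include <stdlib.h>
#include <netcdf.h>

/* Sketch: read a VLEN variable, deep-copy it, then deep-reclaim both. */
int main(void)
{
    int ncid, varid;
    nc_type xtype;
    nc_vlen_t data[5]; /* caller-managed top-level vector */
    void *copy = NULL;

    nc_open("vlenfile.nc", NC_NOWRITE, &ncid);
    nc_inq_varid(ncid, "ragged_array", &varid);
    nc_inq_vartype(ncid, varid, &xtype);
    nc_get_var(ncid, varid, data); /* interior blocks allocated by the library */

    /* The _all variant allocates the top-level vector itself. */
    nc_copy_data_all(ncid, xtype, data, 5, &copy);

    /* data's top level is caller-managed; copy's top level is not. */
    nc_reclaim_data(ncid, xtype, data, 5);
    nc_reclaim_data_all(ncid, xtype, copy, 5);

    nc_close(ncid);
    return 0;
}
````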

# Internal Changes
The netcdf-c library internals are changed to use the proper
reclaim and copy functions.  It turns out that the need for these
functions is quite pervasive in the netcdf-c
library code.  Using these functions also allows some
simplification of the code, since the stdata and vldata fields of
NC_ATT_INFO are no longer needed.  Currently this code is commented
out using the SEPDATA #define macro.  When the remaining bugs are
fixed, all this code will be removed.

# Known Bugs

1. There is still one known failure that has not been solved.
   All the failures revolve around some variant of this .cdl file.
   The proximate cause of failure is the use of a VLEN FillValue.
````
        netcdf x {
        types:
          float(*) row_of_floats ;
        dimensions:
          m = 5 ;
        variables:
          row_of_floats ragged_array(m) ;
              row_of_floats ragged_array:_FillValue = {-999} ;
        data:
          ragged_array = {10, 11, 12, 13, 14}, {20, 21, 22, 23}, {30, 31, 32},
                         {40, 41}, _ ;
        }
````
When a solution is found, I will either add it to this PR or post a new PR.

# Related Changes

* Mark nc_free_vlen(s) as deprecated in favor of ncaux_reclaim_data.
* Remove the --enable-unfixed-memory-leaks option.
* Remove the NC_VLENS_NOTEST code that suppresses some vlen tests.
* Document this change in docs/internal.md
* Disable the tst_vlen_data test in ncdump/tst_nccopy4.sh.
* Mark types as fixed size or not (transitively) to optimize the reclaim
  and copy functions.

# Misc. Changes

* Make Doxygen process libdispatch/daux.c
* Make sure the NC_ATT_INFO_T.container field is set.
2022-01-08 18:30:00 -07:00
Dennis Heimbigner
ec5b3f9a4f Regularize the scoping of dimensions
This is a follow-on to pull request
https://github.com/Unidata/netcdf-c/pull/1959,
which fixed up type scoping.

The primary changes are to _nc\_inq\_dimid()_ and to ncdump.

The _nc\_inq\_dimid()_ function is supposed to allow the name to be
an FQN, but this apparently never got implemented. So it was modified
to support FQNs.
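A minimal usage sketch (the group and dimension names are hypothetical):
````
#include <netcdf.h>

/* Sketch: nc_inq_dimid now accepts an FQN as well as a simple name. */
static int lookup_dims(int grpid)
{
    int dimid, stat;
    /* Simple name: resolved relative to grpid (and its ancestors) as before. */
    if ((stat = nc_inq_dimid(grpid, "time", &dimid))) return stat;
    /* Fully qualified name: resolved from the root group. */
    if ((stat = nc_inq_dimid(grpid, "/g1/g2/time", &dimid))) return stat;
    return NC_NOERR;
}
````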

The ncdump program is supposed to output fully qualified dimension names
in its generated CDL file under certain conditions.

Suppose ncdump has a netcdf-4 file F with variable V, and V's parent group
is G. For each dimension id D referenced by V, ncdump needs to determine
whether to print its name as a simple name or as a fully qualified name (FQN).

The algorithm is as follows:

1. Search up the tree of ancestor groups.
2. If one of those ancestor groups contains the dimid, then call it dimgrp.
3. If one of those ancestor groups contains a dim with the same name as the dimid, but with a different dimid, then record that as duplicate=true.
4. If dimgrp is defined and duplicate == false, then we do not need an fqn.
5. If dimgrp is defined and duplicate == true, then we do need an fqn to avoid incorrectly using the duplicate.
6. If dimgrp is undefined, then do a preorder breadth-first search of all the groups looking for the dimid.
7. If found, then use the fqn of the first found such dimension location.
8. If not found, then fail.

Test case ncdump/test_scope.sh was modified to test the proper
operation of ncdump and _nc\_inq\_dimid()_.

Misc. Other Changes:
* Fix nc_inq_ncid (NC4_inq_ncid actually) to return root group id if the name argument is NULL.
* Modify _ncdump/printfqn_ to print out a dimid FQN; this supports verification that the resulting .nc files were properly created.
2021-05-31 15:51:12 -06:00
Dennis Heimbigner
0428c38b1e Regularize the scoping of types
re: Github issue https://github.com/Unidata/netcdf-c/issues/1956

The function NC_compare_nc_types in libdispatch/dcopy.c uses an
incorrect algorithm to search for types. The core of this is the
function NC_rec_find_nc_type in libdispatch/dcopy.c. Currently
it searches only the current group and its subtree.

Additionally, the function NC4_inq_typeid in libsrc4/nc4internal.c
has been extended to handle fully qualified names. It was originally
designed to do this, but for some reason this was never completed.

The NC_rec_find_nc_type algorithm has been altered to match the
algorithm used by NC4_inq_typeid. It operates as follows.

Given a type T in a group G of file F, it searches a second file F2,
starting at group G2, for another type T2 that is equivalent to T.

The search order is as follows.
1. Search G2 for a type T2 equivalent to T.
2. Search upwards in the ancestor groups of G2 for a type T2 equivalent to T.
3. Search the complete group tree of F2 in pre-order, breadth-first order to locate T2 equivalent to T.
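A sketch of that search order in C (the struct and helper names are illustrative, not the library's internal API):
````
/* Sketch of the three-step search order; helpers are hypothetical. */
struct group;
struct type;

extern struct group *parent_of(const struct group *g);
extern struct type *find_equiv_in_group(const struct group *g, const struct type *t);
extern struct type *breadth_first_find(const struct group *root, const struct type *t);

static struct type *
find_equivalent_type(struct group *g2, struct group *root2, const struct type *t)
{
    struct type *t2;
    struct group *g;
    /* 1. Search G2 itself. */
    if ((t2 = find_equiv_in_group(g2, t)) != NULL) return t2;
    /* 2. Search upward through the ancestors of G2. */
    for (g = parent_of(g2); g != NULL; g = parent_of(g))
        if ((t2 = find_equiv_in_group(g, t)) != NULL) return t2;
    /* 3. Pre-order, breadth-first walk of the whole group tree of F2. */
    return breadth_first_find(root2, t);
}
````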

Also add a test case to validate the algorithm: ncdump/test_scope.sh.

Note, this change may cause compatibility problems, though this is
unlikely because having two different but equivalent type declarations
in one dataset is rare.
2021-03-06 14:09:37 -07:00
Ward Fisher
31dee0c4da
Revert "Revert "Fix nczarr-experimental: improve build support, disengage hdf5 vs netcdf4 flags, and find AWS libraries"" 2020-08-17 19:15:47 -06:00
Ward Fisher
16c27ca13f
Revert "Fix nczarr-experimental: improve build support, disengage hdf5 vs netcdf4 flags, and find AWS libraries" 2020-08-17 15:51:01 -06:00
Dennis Heimbigner
d85bb6fe20 The big change for this commit is to complete the
disengagement of enable-netcdf4 from enable-hdf5.
That is, with the advent of nczarr, it is possible
to turn off hdf5 but still need netcdf-4 enabled
because nczarr uses libsrc4, but not libhdf5.
This change involves a bunch of things:
1. Modify configure.ac and CMakeLists.txt to make enable_hdf5
   control whether hdf5 support is provided. For back compatibility,
   disable-netcdf4 is treated as disable-hdf5. But internally,
   netcdf4 support is controlled only by the enabling of formats
   that require it.
2. In support of #1, modify .travis.yml to use enable/disable-hdf5
   instead of enable/disable-netcdf4.
3. test_common.in is modified to track selected features,
   including enable-hdf5 and enable-s3-tests. This is used in
   selected tests that mix netcdf-3 and netcdf4 tests.
4. The conflation of USE_HDF5 and USE_NETCDF4 is common in
   code, tests, and build files, so all of those had to be weeded out.
5. It turns out that some of the NC4_dim functions really are HDF5 specific,
   but are not treated as such. So they are moved from nc4dim.c to
   hdf5dim.c or hdf5dispatch.c
6. Some generic functions in libhdf5 can be (and were) moved to libsrc4.
2020-08-12 15:42:50 -06:00
Dennis Heimbigner
59e04ae071 This PR adds EXPERIMENTAL support for accessing data in the
cloud using a variant of the Zarr protocol and storage
format. This enhancement is generically referred to as "NCZarr".

The data model supported by NCZarr is netcdf-4 minus the user-defined
types and the String type. In this sense it is similar to the CDF-5
data model.

More detailed information about enabling and using NCZarr is
described in the document NUG/nczarr.md and in a
[Unidata Developer's blog entry](https://www.unidata.ucar.edu/blogs/developer/en/entry/overview-of-zarr-support-in).

WARNING: this code has had limited testing, so do not use this version
for production work. Also, performance improvements are ongoing.
Note especially the following platform matrix of successful tests:

Platform       | Build System | S3 support
---------------|--------------|-----------
Linux+gcc      | Automake     | yes
Linux+gcc      | CMake        | yes
Visual Studio  | CMake        | no

Additionally, and as a consequence of the addition of NCZarr,
major changes have been made to the Filter API. NOTE: NCZarr
does not yet support filters, but these changes are enablers for
that support in the future.  Note that it is possible
(probable?) that there will be some accidental reversions if the
changes here did not correctly mimic the existing filter testing.

In any case, previously filter ids and parameters were of type
unsigned int. In order to support the more general zarr filter
model, this was all converted to char*.  The old HDF5-specific,
unsigned int operations are still supported but they are
wrappers around the new, char* based nc_filterx_XXX functions.
This entailed at least the following changes:
1. Added the files libdispatch/dfilterx.c and include/ncfilter.h
2. Some filterx utilities have been moved to libdispatch/daux.c
3. A new entry, "filter_actions" was added to the NCDispatch table
   and the version bumped.
4. An overly complex set of structs was created to support funnelling
   all of the filterx operations thru a single dispatch
   "filter_actions" entry.
5. Move common code from libhdf5 to libsrc4 so that it is accessible
   to nczarr.

Changes directly related to Zarr:
1. Modified CMakeList.txt and configure.ac to support both C and C++
   -- this is in support of S3 access via the aws-sdk libraries.
2. Define a size64_t type to support nczarr.
3. More reworking of libdispatch/dinfermodel.c to
   support zarr and to regularize the structure of the fragments
   section of a URL.

Changes not directly related to Zarr:
1. Make client-side filter registration be conditional, with default off.
2. Hack include/nc4internal.h to make some flags added by Ed be unique:
   e.g. NC_CREAT, NC_INDEF, etc.
3. cleanup include/nchttp.h and libdispatch/dhttp.c.
4. Misc. changes to support compiling under Visual Studio including:
   * Better testing under windows for dirent.h and opendir and closedir.
5. Misc. changes to the oc2 code to support various libcurl CURLOPT flags
   and to centralize error reporting.
6. By default, suppress the vlen tests that have unfixed memory leaks; add option to enable them.
7. Make part of the nc_test/test_byterange.sh test be contingent on remotetest.unidata.ucar.edu being accessible.

Changes Left TO-DO:
1. Fix the provenance code; it is too HDF5-specific.
2020-06-28 18:02:47 -06:00
Dennis Heimbigner
6934aa2e8b Thread safety: step 1: cleanup
re: https://github.com/Unidata/netcdf-c/issues/1373 (partial)

* Mark some global constants as const to make them easier to track.
* Hide direct access to the ncrc_globalstate behind a function call.
* Convert dispatch tables to constants (except the user-defined ones).
  This has some consequences in terms of function arguments also needing
  to be marked as const.
* Remove some no longer needed global fields
* Aggregate all the globals in nclog.c
* Uniformly replace nc_sizevector{0,1} with NC_coord_{zero,one}
* Uniformly replace nc_ptrdiffvector1 with NC_stride_one
* Remove some obsolete code
2019-03-30 14:06:20 -06:00
Ed Hartnett
ae313334b5 cleanup of whitespace in libsrc4 directory 2019-02-19 05:56:22 -07:00
Ed Hartnett
59307c405d removing unneeded function nc4_rec_find_nc_type 2018-08-22 06:29:44 -06:00
Ed Hartnett
8885c75ade removing unneeded lookups 2018-08-22 06:08:19 -06:00
Ed Hartnett
697f033823 renamed NC_HDF5_FILE_INFO to NC_FILE_INFO 2018-06-22 07:08:09 -06:00
Ed Hartnett
8996b36c66 fixed make clean in docs 2018-06-05 11:30:59 -06:00
Ed Hartnett
f2cb4678ee moving HDF5 functions to libhdf5 2018-05-24 14:27:16 -06:00
Dennis Heimbigner
25f062528b This completes (for now) the refactoring of libsrc4.
The file docs/indexing.dox tries to provide design
information for the refactoring.

The primary change is to replace all walking of linked
lists with the use of the NCindex data structure.
Ncindex is a combination of a hash table (for name-based
lookup) and a vector (for walking the elements in the index).
Additionally, global vectors are added to NC_HDF5_FILE_INFO_T
to support direct mapping of, e.g., a dimid to the corresponding
NC_DIM_INFO_T object. These global vectors exist for dimensions,
types, and groups because those have globally unique id numbers.
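A minimal sketch of the idea behind NCindex (the field and type names are illustrative; the real netcdf-c structure differs in detail):
````
#include <stddef.h>

/* Illustrative shape of the NCindex idea: a vector for ordered walking
 * plus a name-to-position hash table for fast lookup. */
struct nc_hashmap; /* stand-in for the hash-table type */

typedef struct NCindexSketch {
    void **list;            /* elements in insertion order, for iteration */
    size_t nelems;          /* number of elements in list */
    struct nc_hashmap *map; /* maps object name -> position in list */
} NCindexSketch;
````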

WARNING:
1. since libsrc4 and libsrchdf4 share code, there are also
   changes in libsrchdf4.
2. Any outstanding pull requests that change libsrc4 or libhdf4
   are likely to cause conflicts with this code.
3. The original reason for doing this was performance improvement,
   but as noted elsewhere, this may not be significant because
   the meta-data read performance apparently is dominated
   by the hdf5 library, since we do bulk meta-data reading rather
   than lazy reading.
2018-03-16 11:46:18 -06:00
Ed Hartnett
d350f82889 removed unneeded h5 checks 2018-01-18 07:46:31 -07:00
Ed Hartnett
126d34da1d more tests, error handling 2018-01-18 07:36:52 -07:00
Ed Hartnett
38c7cddf8f further test development, added documentation for uncommitted user-defined types, fixed check of return value 2018-01-18 06:47:50 -07:00
Ed Hartnett
4de61e21f2 more docs, more cleaning 2017-12-04 12:21:14 -07:00
Ed Hartnett
11e3e81489 more internal docs 2017-12-03 15:37:56 -07:00
Dennis Heimbigner
3db4f013bf Primary change: add dap4 support
Specific changes:
1. Add dap4 code: libdap4 and dap4_test.
   Note that until the d4ts server problem is solved, dap4 is turned off.
2. Modify various files to support dap4 flags:
	configure.ac, Makefile.am, CMakeLists.txt, etc.
3. Add nc_test/test_common.sh. This centralizes
   the handling of the locations of various
   things in the build tree: e.g. where is
   ncgen.exe located. See nc_test/test_common.sh
   for details.
4. Modify .sh files to use test_common.sh
5. Obsolete separate oc2 by moving it to be part of
   netcdf-c. This means replacing code with netcdf-c
   equivalents.
6. Add --with-testserver to configure.ac to allow
   override of the servers to be used for --enable-dap-remote-tests.
7. There were multiple versions of nctypealignment code. Try to
   centralize in libdispatch/doffset.c and include/ncoffsets.h
8. Add a unit test for the ncuri code because of its complexity.
9. Move the findserver code out of libdispatch and into
   a separate, self contained program in ncdap_test and dap4_test.
10. Move the dispatch header files (nc{3,4}dispatch.h) to
    .../include because they are now shared by modules.
11. Revamp the handling of TOPSRCDIR and TOPBUILDDIR for shell scripts.
12. Make use of MREMAP if available
13. Misc. minor changes e.g.
	- #include <config.h> -> #include "config.h"
	- Add some no-install headers to /include
	- extern -> EXTERNL and vice versa as needed
	- misc header cleanup
	- clean up checking for misc. unix vs microsoft functions
14. Change copyright decls in some files to point to LICENSE file.
15. Add notes to RELEASENOTES.md
2017-03-08 17:01:10 -07:00
Ward Fisher
61a7dab58f Fixed an issue preventing compilation with hdf4 support with Visual Studio. 2014-08-28 18:14:14 -06:00
Quincey Koziol
b2dfacbcfa Big clean up to type handling in libsrc4, which makes fill-values work
correctly for variables with string datatype, plus a few other minor changes.
2014-02-11 17:12:08 -06:00
Quincey Koziol
b3044de434 Refactored read_scale(), memio_new(), var_create_dataset() and makespecial()
to clean up resources properly on failure.

Refactored doubly-linked list code for objects in the libsrc4 directory,
    cleaning up the add/del routines, breaking out the common next/prev
    pointers into a struct and extracting the add/del operations on them,
    changed the list of dims to add new dims in the same order as the other
    types, made all add routines able to optionally return a pointer to the
    newly created object.

Removed some dead code (pg_var(), nc4_pg_var1(), nc4_pg_varm(), misc. small
    routines, etc)

Fixed fill value handling for string types in nc4_get_vara().

Changed many malloc()+strcpy() pairs into calls to strdup().
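For example (illustrative only; strdup is POSIX):
````
#include <stdlib.h>
#include <string.h>

/* Before: an explicit malloc()+strcpy() pair. */
static char *copy_name_old(const char *name)
{
    char *s = malloc(strlen(name) + 1);
    if (s != NULL) strcpy(s, name);
    return s;
}

/* After: strdup() does the same allocation and copy in one call. */
static char *copy_name_new(const char *name)
{
    return strdup(name);
}
````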

Cleaned up misc. other minor Coverity issues.
2013-12-08 03:29:26 -06:00
Ward Fisher
6096f6b43b Merged changes from CMake branch.
Addressed a handful of memory leaks
reported by Coverity.
2013-03-14 22:49:21 +00:00
Dennis Heimbigner
adbe2ba5f1 extend r2822 to work when logging is enabled 2012-12-18 19:44:39 +00:00
Ward Fisher
7b91248723 Merged changes from netcdf-branch.
o Changed variable names 'typeid' to 'typeid1' to avoid a namespace conflict
in visual studio.
o Cleaned up a handful of warnings in Visual Studio.
o Addressed a few Coverity-discovered issues.
o Made changes to CMake-based builds.
2012-12-13 22:09:41 +00:00
Dennis Heimbigner
5f2eb8afbf Fix a number of potential problems by changing calls to nc_XXX to NC3/4_XXX 2012-12-12 20:05:06 +00:00
Ward Fisher
afa67452f6 Took some time to address a handful of errors identified by Coverity.
Primarily focused on memory errors falling into a couple different types:

1) Static overrun errors.
2) Dereference uninitialized memory errors.

make distcheck works after applying these fixes, and coverity no longer sees an issue, so hopefully they are properly resolved.
2012-11-14 18:24:45 +00:00
Dennis Heimbigner
5ca78309cc The effect of this change is to make the struct NC structure
contain as little file-type specific info as possible.  It
modifies especially libsrc so that all of the netcdf-3 data
that used to be in struct NC is now kept in a separate chunk
of data pointed to by the struct NC. This makes all of the
current protocols consistent: netcdf-3, netcdf-4, and dap.
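A minimal sketch of the resulting shape (field names are illustrative; the real struct NC differs in detail):
````
/* Illustrative sketch: struct NC keeps only generic bookkeeping plus a
 * single opaque pointer to whatever per-format state is needed. */
struct NC_Dispatch; /* per-format function table: netcdf-3, netcdf-4, dap */

struct NC {
    int ext_ncid;                       /* externally visible ncid */
    char *path;                         /* file name or URL */
    const struct NC_Dispatch *dispatch; /* per-format operations */
    void *dispatchdata;                 /* opaque per-format state; the old
                                           netcdf-3 fields now live behind
                                           this pointer */
};
````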
2012-09-06 19:44:03 +00:00
Ed Hartnett
1d5bbb7cfa reduced memory use of type structs for netcdf-4 2010-07-01 13:17:54 +00:00
Ed Hartnett
18f4bca367 moving to trunk subdir 2010-06-03 13:24:43 +00:00