Commit Graph

139 Commits

Author SHA1 Message Date
Peter Hill
409ca579ab
Fix return type on a couple of internal utility functions 2024-01-15 15:46:13 +00:00
Peter Hill
d07dac918c
Silence conversion warnings from malloc arguments
Mostly just add an explicit cast when calling `malloc` and its
variants. Sometimes instead change the type of a local variable if
this would silence multiple warnings.
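
As an illustrative sketch (hypothetical variable names, not taken from the
actual diff), the typical pattern is:
````
#include <stdlib.h>

void example(int nvals)
{
    /* Before: passing the signed `nvals` to malloc's size_t parameter
       triggers a -Wsign-conversion/-Wconversion warning */
    double* a = malloc(nvals * sizeof(double));

    /* After: an explicit cast on the argument silences the warning */
    double* b = malloc((size_t)nvals * sizeof(double));

    free(a);
    free(b);
}
````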
2023-11-24 18:20:52 +00:00
Peter Hill
763e54f8a1
Fix most float conversion warnings
Fixes ~200 out of ~240 `-Wfloat-conversion` warnings on gcc 13.2
2023-10-26 16:01:24 +01:00
Dennis Heimbigner
3ffe7be446 Enhance/Fix filter support
re: Discussion https://github.com/Unidata/netcdf-c/discussions/2214

The primary change is to support so-called "standard filters".
A standard filter is one that is defined by the following
netcdf-c API:
````
int nc_def_var_XXX(int ncid, int varid, size_t nparams, unsigned* params);
int nc_inq_var_XXX(int ncid, int varid, int* usefilterp, unsigned* params);
````
So for example, zstandard would be a standard filter by defining
the functions *nc_def_var_zstandard* and *nc_inq_var_zstandard*.

In order to define these functions, we need a new dispatch function:
````
int nc_inq_filter_avail(int ncid, unsigned filterid);
````
This function, combined with the existing filter API, can be used
to implement arbitrary standard filters using a simple code pattern.
Note that I would have preferred that this function return a list
of all available filters, but HDF5 does not support that functionality.
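
A minimal sketch of that pattern, using bzip2 as the example (the filter id
constant and the error handling here are illustrative, not the actual
implementation):
````
#include "netcdf.h"
#include "netcdf_filter.h"

#define H5Z_FILTER_BZIP2 307 /* registered HDF5 filter id for bzip2 */

int
nc_def_var_bzip2(int ncid, int varid, size_t nparams, unsigned* params)
{
    int stat;
    /* Refuse to define the filter if no bzip2 implementation is available */
    if ((stat = nc_inq_filter_avail(ncid, H5Z_FILTER_BZIP2))) return stat;
    return nc_def_var_filter(ncid, varid, H5Z_FILTER_BZIP2, nparams, params);
}

int
nc_inq_var_bzip2(int ncid, int varid, int* usefilterp, unsigned* params)
{
    int stat;
    size_t nparams;
    /* The filter is "in use" if it is attached to this variable */
    stat = nc_inq_var_filter_info(ncid, varid, H5Z_FILTER_BZIP2, &nparams, params);
    if (usefilterp) *usefilterp = (stat == NC_NOERR);
    return (stat == NC_ENOFILTER ? NC_NOERR : stat);
}
````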

So this PR implements the dispatch function and the following
standard filters:
    + bzip2
    + zstandard
    + blosc
Specific test cases are also provided for HDF5 and NCZarr.
Over time, other specific standard filters will be defined.

## Primary Changes
* Add nc_inq_filter_avail() to the netcdf-c API.
* Add standard filter implementations to test use of *nc_inq_filter_avail*.
* Bump the dispatch table version number and add the new entry to all
   the relevant dispatch tables (libsrc, libsrcp, etc.).
* Create a program to invoke nc_inq_filter_avail so that it is accessible
  to shell scripts.
* Clean up szip support to properly support szip
  when HDF5 is disabled. This involves detecting
  libsz separately from testing whether HDF5 supports szip.
* Integrate shuffle and fletcher32 into the existing
  filter API. This means that, for example, nc_def_var_fletcher32
  is now a wrapper around nc_def_var_filter.
* Extend the Codec defaulting to allow multiple default shared libraries.

## Misc. Changes
* Modify configure.ac/CMakeLists.txt to look for the relevant
  libraries implementing standard filters.
* Modify libnetcdf.settings to list available standard filters
  (including deflate and szip).
* Add CMake test modules to locate libbz2 and libzstd.
* Cleanup the HDF5 memory manager function use in the plugins.
* Remove the unused file include/ncfilter.h.
* Remove tests for the HDF5 memory operations, e.g. H5allocate_memory.
* Add flag to ncdump to force use of _Filter instead of _Deflate
  or _Shuffle or _Fletcher32. Used for testing.
2022-03-14 12:39:37 -06:00
Ward Fisher
978a99bbea
Merge branch 'main' into vlenfix.dmh 2022-01-13 16:31:34 -07:00
Dennis Heimbigner
8b9253fef2 Fix various problems around VLENs
re: https://github.com/Unidata/netcdf-c/issues/541
re: https://github.com/Unidata/netcdf-c/issues/1208
re: https://github.com/Unidata/netcdf-c/issues/2078
re: https://github.com/Unidata/netcdf-c/issues/2041
re: https://github.com/Unidata/netcdf-c/issues/2143

For a long time, there have been known problems with the
management of complex types containing VLENs.  This also
involves the string type because it is stored as a VLEN of
chars.

This PR (mostly) fixes this problem. But note that it adds new
functions to netcdf.h (see below) and this may require bumping
the .so number.  These new functions can be removed, if desired,
in favor of functions in netcdf_aux.h, but netcdf.h seems the
better place for them because they are intended as alternatives
to the nc_free_vlen and nc_free_string functions already in
netcdf.h.

The term complex type refers to any type that directly or
transitively references a VLEN type. So an array of VLENs, a
compound with a VLEN field, and so on.

In order to properly handle instances of these complex types, it
is necessary to have functions that can recursively walk
instances of such types to perform various actions on them.  The
term "deep" is also used to mean recursive.

At the moment, the two operations needed by the netcdf library are:
* free'ing an instance of the complex type
* copying an instance of the complex type.

The current library does only shallow free and shallow copy of
complex types. This means that only the top level is properly
free'd or copied, but deep internal blocks in the instance are
not touched.

Note that the term "vector" will be used to mean a contiguous (in
memory) sequence of instances of some type. Given an array with,
say, dimensions 2 X 3 X 4, this will be stored in memory as a
vector of length 2*3*4=24 instances.

The use cases are primarily these.

## nc_get_vars
Suppose one is reading a vector of instances using nc_get_vars
(or nc_get_vara or nc_get_var, etc.).  These functions will
return the vector in the top-level memory provided.  All
interior blocks (from nested VLENs or strings) will have been
dynamically allocated.

After using this vector of instances, it is necessary to free
(aka reclaim) the dynamically allocated memory, otherwise a
memory leak occurs.  So, the recursive reclaim function is used
to walk the returned instance vector and do a deep reclaim of
the data.

Currently functions are defined in netcdf.h that are supposed to
handle this: nc_free_vlen(), nc_free_vlens(), and
nc_free_string().  Unfortunately, these functions only do a
shallow free, so deeply nested instances are not properly
handled by them.

## nc_put_vars
In the case of writing a vector of instances (using nc_put_vars,
nc_put_vara, etc.), the provided data is immediately written internally,
so there is no need to copy it. But the caller may need to reclaim the
data it passed into the function.

## nc_put_att
Suppose one is writing a vector of instances as the data of an attribute
using, say, nc_put_att.

Internally, the incoming attribute data must be copied and stored
so that changes/reclamation of the input data will not affect
the attribute.

Again, the code inside the netcdf library does only shallow copying
rather than deep copy. As a result, one sees effects such as described
in Github Issue https://github.com/Unidata/netcdf-c/issues/2143.

Also, after defining the attribute, it may be necessary for the user
to free the data that was provided as input to nc_put_att().

## nc_get_att
Suppose one is reading a vector of instances as the data of an attribute
using, say, nc_get_att.

Internally, the existing attribute data must be copied and returned
to the caller, and the caller is responsible for reclaiming
the returned data.

Again, the code inside the netcdf library does only shallow copying
rather than deep copy. So this can lead to memory leaks and errors
because the deep data is shared between the library and the user.

# Solution

The solution is to build properly recursive reclaim and copy
functions and use those as needed.
These recursive functions are defined in libdispatch/dinstance.c
and their signatures are defined in include/netcdf.h.
For backward compatibility, corresponding "ncaux_XXX" functions
are defined in include/netcdf_aux.h.
````
int nc_reclaim_data(int ncid, nc_type xtypeid, void* memory, size_t count);
int nc_reclaim_data_all(int ncid, nc_type xtypeid, void* memory, size_t count);
int nc_copy_data(int ncid, nc_type xtypeid, const void* memory, size_t count, void* copy);
int nc_copy_data_all(int ncid, nc_type xtypeid, const void* memory, size_t count, void** copyp);
````
There are two variants. The first two, nc_reclaim_data() and
nc_copy_data(), assume the top-level vector is managed by the
caller. For reclaim, this is so the user can use, for example, a
statically allocated vector. For copy, it assumes the user
provides the space into which the copy is stored.

The second two, nc_reclaim_data_all() and
nc_copy_data_all(), allow the functions to manage the
top level.  So for nc_reclaim_data_all, the top level is
assumed to be dynamically allocated and will be free'd by
nc_reclaim_data_all().  The nc_copy_data_all() function
will allocate the top level and return a pointer to it to the
user. The user can later pass that pointer to
nc_reclaim_data_all() to reclaim the instance(s).
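
A minimal usage sketch (variable and type names are hypothetical), showing the
non-_all variants where the caller owns the top-level vector:
````
#include <stdlib.h>
#include "netcdf.h"

/* Read `count` instances of a VLEN-typed variable and deep-free them. */
static int
read_and_reclaim(int ncid, int varid, nc_type xtype, size_t count)
{
    int stat;
    size_t start[1] = {0};
    size_t edges[1];
    nc_vlen_t* data;

    edges[0] = count;
    if ((data = calloc(count, sizeof(nc_vlen_t))) == NULL) return NC_ENOMEM;

    /* The library allocates the interior blocks of each instance */
    if ((stat = nc_get_vara(ncid, varid, start, edges, data)) == NC_NOERR) {
        /* ... use data ... */
    }

    /* Deep-reclaim the interior blocks; the top-level vector is ours */
    nc_reclaim_data(ncid, xtype, data, count);
    free(data);
    return stat;
}
````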

# Internal Changes
The netcdf-c library internals are changed to use the proper
reclaim and copy functions.  It turns out that the places where
these functions are needed is quite pervasive in the netcdf-c
library code.  Using these functions also allows some
simplification of the code since the stdata and vldata fields of
NC_ATT_INFO are no longer needed.  Currently this is commented
out using the SEPDATA #define macro.  Once the remaining bugs are
fixed, all this code will be removed.

# Known Bugs

1. There is still one known failure that has not been solved.
   All the failures revolve around some variant of this .cdl file.
   The proximate cause of failure is the use of a VLEN FillValue.
````
        netcdf x {
        types:
          float(*) row_of_floats ;
        dimensions:
          m = 5 ;
        variables:
          row_of_floats ragged_array(m) ;
              row_of_floats ragged_array:_FillValue = {-999} ;
        data:
          ragged_array = {10, 11, 12, 13, 14}, {20, 21, 22, 23}, {30, 31, 32},
                         {40, 41}, _ ;
        }
````
When a solution is found, I will either add it to this PR or post a new PR.

# Related Changes

* Mark nc_free_vlen(s) as deprecated in favor of ncaux_reclaim_data.
* Remove the --enable-unfixed-memory-leaks option.
* Remove the NC_VLENS_NOTEST code that suppresses some vlen tests.
* Document this change in docs/internal.md
* Disable the tst_vlen_data test in ncdump/tst_nccopy4.sh.
* Mark types as fixed size or not (transitively) to optimize the reclaim
  and copy functions.

# Misc. Changes

* Make Doxygen process libdispatch/daux.c
* Make sure the NC_ATT_INFO_T.container field is set.
2022-01-08 18:30:00 -07:00
Ward Fisher
7ec0ac0a08
Merge branch 'main' into mingw-w64-strcasecmp 2021-10-01 17:07:37 -05:00
Milton Woods
4fa91d8241 Use strcasecmp definitions from config.h 2021-09-05 17:17:30 +10:00
Dennis Heimbigner
11fe00ea05 Add filter support to NCZarr
Filter support has three goals:

1. Use the existing HDF5 filter implementations,
2. Allow filter metadata to be stored in the NumCodecs metadata format used by Zarr,
3. Allow filters to be used even when HDF5 is disabled

Detailed usage directions are defined in docs/filters.md.

For now, the existing filter API is left in place, so filters
are defined using ''nc_def_var_filter'' in the HDF5 style,
where the id and parameters are unsigned integers.

This is a big change since filters affect many parts of the code.

In the following, the terms "compressor", "filter", and "codec" are generally
used synonymously.

### Filter-Related Changes:
* In order to support dynamic loading of shared filter libraries, a new library was added in the libncpoco directory; it helps to isolate dynamic loading across multiple platforms.
* Provide a json parsing library for use by plugins; this is created by merging libdispatch/ncjson.c with include/ncjson.h.
* Add a new _Codecs attribute to allow clients to see what codecs are being used; let ncdump -s print it out.
* Provide special headers to help support compilation of HDF5 filters when HDF5 is not enabled: netcdf_filter_hdf5_build.h and netcdf_filter_build.h.
* Add a number of new tests for the new nczarr filters.
* Let ncgen parse _Codecs attribute, although it is ignored.

### Plugin directory changes:
* Add support for the Blosc compressor; this is essential because it is the most common compressor used in Zarr datasets. This also necessitated adding a CMake FindBlosc.cmake file
* Add NCZarr support for the big-four filters provided by HDF5: shuffle, fletcher32, deflate (zlib), and szip
* Add a Codec defaulter (see docs/filters.md) for the big four filters.
* Make plugins work with Windows by properly adding __declspec declarations.

### Misc. Non-Filter Changes
* Replace most uses of USE_NETCDF4 (deprecated) with USE_HDF5.
* Improve support for caching
* More fixes for path conversion code
* Fix misc. memory leaks
* Add new utility -- ncdump/ncpathcvt -- that does more or less the same thing as cygpath.
* Add a number of new tests for the non-filter fixes.
* Update the parsers
* Convert most instances of '#ifdef _MSC_VER' to '#ifdef _WIN32'
2021-09-02 17:04:26 -06:00
Dennis Heimbigner
d953899559 Move to Version 2 NCZarr Extended Meta-Data
re: https://github.com/zarr-developers/zarr-specs/issues/41

After discussions with the Zarr community, it was decided to
convert to a new representation of the NCZarr meta-data extensions: version 2.
These extensions store information necessary to mapping the Zarr data model
to the netcdf-4 data model.

The basic change is to remove the NCZarr specific objects: .nczarr, .nczgroup, .nczarray, and .nczattr.
The contents of these objects are moved into the corresponding existing Zarr objects as special keys. The mapping is as follows:

* ''.nczarr'' => ''/.zgroup/_NCZARR_SUPERBLOCK_''
* ''.nczgroup'' => ''.zgroup/_NCZARR_GROUP_''
* ''.nczarray'' => ''.zarray/_NCZARR_ARRAY_''
* ''.nczattr'' => ''.zattr/_NCZARR_ATTR_''

Backward compatibility is maintained by looking for the object ''/.nczarr''
and if found, then assuming that the dataset is in the older version 1 format.
This compatibility only supports reading of such version 1 datasets.

Documentation and test cases are also added.

Misc. Other Changes:
1. The json parsing code was added to the general library instead of nczarr only (ncjson.c, ncjson.h).
2. Improved support for different platform paths by allowing conversion
   to a single common path representation.
3. Add some new error codes.
4. Modify nccopy usage to mention the new chunking specification.
2021-07-17 16:55:30 -06:00
Dennis Heimbigner
d3f6c126b6 Fix Mingw versus XGetopt (again)
re: https://github.com/Unidata/netcdf-c/pull/2003#issuecomment-847637871

Turns out that MinGW defines _WIN32 and also defines getopt.
This means that this test:
````
#ifdef _WIN32
#include "XGetopt.h"
#endif
````
fails on this error:
````
../include/XGetopt.h:38:24: error: conflicting types for 'getopt'
````

Fix is to replace
````
#ifdef _WIN32
````
with
````
#if defined(_WIN32) && !defined(__MINGW32__)
````
2021-05-26 14:27:27 -06:00
Dennis Heimbigner
fba7198039 Fix NCclosedir in dpathmgr.c
re: Issue https://github.com/Unidata/netcdf-c/issues/1999

The NCclosedir code is incorrect; fix it.
Note that this issue crops up when using a non-Visual-Studio Windows build
such as MinGW, because MinGW provides dirent.h but Visual Studio does not.

Addendum:
Fix some mingw bugs:

1. Modify XGetopt.h to be conditional on _WIN32 instead of _MSC_VER.
2. Make sure sys/stat.h is included in ncpathmgr.h
2021-05-19 14:19:28 -06:00
Dennis Heimbigner
efd1be5d62 Fix shell handling of escapes
re: https://github.com/Unidata/netcdf-c/issues/1988

There was an issue with certain shell programs (notably bash).
On certain platforms, when given a URL that had an escaped
'#' character (e.g. \\#), bash would not remove the backslash. So I
had to add a hack for this. Unfortunately I overdid it and it
removed all '\' characters. This is ok for non-Windows platforms,
but obviously fails for Windows.

The fix is this.

1. In a utility program (ncgen, ncdump, nccopy, etc) there is probably a call (or calls) to NC_backslashUnescape(xxx) where xxx is a path argument from the command line.
2. Replace each such call with NC_shellUnescape(xxx).

The NC_shellUnescape function was added; it searches only for occurrences of "\#" and replaces them with "#".
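
A minimal sketch of that replacement logic (the signature is hypothetical; only
the "\#" -> "#" behavior is taken from the description above):
````
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch: return a copy of `path` in which each "\#" is
   replaced by "#"; all other characters, including other backslashes,
   are left untouched. */
static char*
shell_unescape_sketch(const char* path)
{
    size_t i, j, len = strlen(path);
    char* result = malloc(len + 1);
    if (result == NULL) return NULL;
    for (i = 0, j = 0; i < len; i++) {
        if (path[i] == '\\' && path[i + 1] == '#')
            continue; /* drop the backslash; the '#' is copied on the next pass */
        result[j++] = path[i];
    }
    result[j] = '\0';
    return result;
}
````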
2021-04-21 14:59:15 -06:00
Dennis Heimbigner
0b7a5382e7 Codify cross-platform file paths
The netcdf-c code has to deal with a variety of platforms:
Windows, OSX, Linux, Cygwin, MSYS, etc.  These platforms differ
significantly in the kind of file paths that they accept.  So in
order to handle this, I have created a set of replacements for
the most common file system operations such as _open_ or _fopen_
or _access_ to manage the file path differences correctly.

A more limited version of this idea was already implemented via
the ncwinpath.h and dwinpath.c code. So this can be viewed as a
replacement for that code. In many cases, the only
change that was required was to replace '#include <ncwinpath.h>'
with '#include <ncpathmgr.h>' and then replace file operation
calls with the NCxxx equivalent from ncpathmgr.h. Note that
ncwinpath.h was recently renamed to ncpathmgr.h, so this pull
request should not require dealing with winpath.

The heart of the change is include/ncpathmgr.h, which provides
alternate operations such as NCfopen or NCaccess that properly
parse and rebuild path arguments to work for the platform on which
the code is executing. This mostly matters for Windows because of the
way that it uses backslash and drive letters, as compared to *nix*.
One important feature is that the user can do string manipulations
on a file path without having to worry too much about the platform
because the path management code will properly handle most mixed cases.
So one can for example concatenate a path suffix that uses forward
slashes to a Windows path and have it work correctly.

The conversion code is in libdispatch/dpathmgr.c, and the
important function there is NCpathcvt which does the proper
conversions to the local path format.

As a rule, most code should just replace its file operations with
the corresponding NCxxx ones defined in include/ncpathmgr.h. These
NCxxx functions all call NCpathcvt on their path arguments before
executing the actual file operation.
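
For illustration, a typical replacement looks roughly like this (the file name
is hypothetical, and the header name follows the ncpathmgr.h spelling used
elsewhere in this log):
````
#include <stdio.h>
#include "ncpathmgr.h"

/* Sketch: open a data file portably.  NCfopen converts the path to the
   local platform's conventions (via NCpathcvt) before calling fopen. */
static FILE*
open_data_file(const char* path)
{
    /* Previously: FILE* f = fopen(path, "r"); */
    FILE* f = NCfopen(path, "r");
    return f;
}
````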

In some rare cases, the client may need to directly use NCpathcvt,
but this should be avoided as much as possible. If there is a need
for supporting a new file operation not already in ncpathmgr.h, then
use the code in dpathmgr.c as a template. Also please notify Unidata
so we can include it as a formal part of our supported operations.
Also, if you see an operation in the library that is not using the
NCxxx form, then please submit an issue so we can fix it.

Misc. Changes:
* Clean up the utf8 testing code; it is impossible to get some
  tests to work under windows using shell scripts; the args do
  not pass as utf8 but as some other encoding.
* Added an extra utf8 test case: test_unicode_path.sh
* Add a true test for HDF5 1.10.6 or later because as noted in
  PR https://github.com/Unidata/netcdf-c/pull/1794,
  HDF5 changed its Windows file path handling.
2021-03-04 13:41:31 -07:00
Dennis Heimbigner
467faeaaeb Fix memory leak in nccopy.c 2021-02-25 15:06:39 -07:00
Dennis Heimbigner
1d9727c616 More fixes to the nccopy filter x chunking algorithm
re: Issue https://github.com/Unidata/netcdf-c/issues/1936

The algorithm controlling the interaction of the -d 0, -F, and -c / options
is incorrect.

The fix:
1. make -d 0 => no deflation
2. make -F properly use the filter specs to decide.

Also added a test case to ncdump/tst_nccopy4.sh.
2021-01-31 15:10:39 -07:00
Dennis Heimbigner
d2316f866c Additional Fixes to NCZarr
Primary Fixes:
* Add a whole variable optimization -- used in the rare case that nc_get/put_vara covers the whole of a variable and the variable has a single chunk.
* Fix chunking error when stride causes whole chunks to be skipped.
* Fix some memory leaks
* Add test cases
* Add one performance test to nczarr_test/. This uses the timer utils from unit_test: timer_utils.[ch].
* Move ncdumpchunks utility from ncdump to nczarr_test

Misc. Other Changes:
* Make the check for the aws libraries conditional on --enable-nczarr-s3.
* Remove all but one of the bm tests from nczarr_test until they are working.
* Remove another dependency on HDF5 from supposedly non-HDF5 specific code; specifically hdf5_log_hdf5.
* Make the BAIL2 macro be hdf5 specific and replace elsewhere with an HDF5 independent equivalent.
* Move hdf5cache.c to libsrc4/nc4cache.c because it is used by nczarr.
* Modify unit_tests so that some of them are run even if using Windows.
* Misc. small bug fixes, refactors, and memory-leak fixes.
* Rename some conflicting tests for cmake.
* Attempted to make nc_perf work with cmake and failed.
2020-12-16 20:48:02 -07:00
Ward Fisher
878866c039 Merge branch 'ncgenkw.dmh' of https://github.com/DennisHeimbigner/netcdf-c into gh1753.wif 2020-12-07 11:29:12 -07:00
Dennis Heimbigner
25d2e05444 Prepare for the path management code
Rename some files in prep for eventual implementation
of more comprehensive cross-platform file path management.
2020-10-13 19:12:15 -06:00
Dennis Heimbigner
aeb3ac2809 Mostly revert the filter code to reduce its complexity of use.
re: https://github.com/Unidata/netcdf-c/issues/1836

Revert the internal filter code to simplify it. From the user's
point of view, the only visible changes should be:

1. The functions that convert text to filter specs have had their signature reverted and have been moved to netcdf_aux.h
2. Some filter API functions now return NC_ENOFILTER when an inquiry is made about a filter that is not defined for the variable.

Internally, the dispatch table has been modified to get rid of the filter_actions
entry and associated complex structures. It has been replaced with
inq_var_filter_ids and inq_var_filter_info entries and the dispatch table
version has been bumped to 3. Corresponding NOOP and NOTNC4 functions
were added to libdispatch/dnotnc4.c. Also, the filter_action entries
in dispatch tables were replaced for all dispatch code bases (HDF5, DAP2,
etc). This should only impact UDF users.

In the process, it became clear that the form of the filters
field in NC_VAR_INFO_T was format dependent, so I converted it to
be of type void* and pushed its management into the various dispatch
code bases. Specifically libhdf5 and libnczarr now manage the filters
field in their own way.

The auxiliary functions for parsing textual filter specifications
were moved to netcdf_aux.h and were renamed to the following:
* ncaux_h5filterspec_parse
* ncaux_h5filterspec_parselist
* ncaux_h5filterspec_free
* ncaux_h5filter_fix8

Misc. Other Changes:

1. The document NUG/filters.md was updated to reflect the changes above.
2. All the old data types (structs and enums)
   used by the filter_actions entry were deleted.
   The exception is the NC_H5_Filterspec because it is needed
   by ncaux_h5filterspec_parselist.
3. Clientside filters were removed -- another enhancement
   for which no-one ever asked.
4. The ability to remove filters was itself removed.
5. Some functionality needed by nczarr was moved from libhdf5
   to libsrc4 e.g. nc4_find_default_chunksizes
6. All the filterx code was removed
7. ncfilter.h and nc4filter.c no longer used

Misc. Unrelated Changes:

1. The nczarr_test makefile clean was leaving some directories behind, so
   add clean-local to take care of them.
2020-09-27 12:43:46 -06:00
Dennis Heimbigner
62a4cc1ae0 Fix nccopy -c dim/x to actually use the dim/x value.
As it was, nccopy -c dim/x was sometimes being ignored. So
modify nccopy to properly take it into account. This also required
a change to the nczarr code because it was not applying default
chunking in the same way as libhdf5.

Modify ncdump/tst_nccopy4.sh to test this feature properly.
Also add a similar test to nczarr_test.

Additionally, fix some other things that were causing Visual
Studio builds with testing to not work.
* fix curl testing under CMake to properly handle the case
  where DAP is disabled, but byterange support is enabled.
* properly test and/or define uintptr_t.
* Convert the _O_XXX flags to the O_XXX flags used by open().
2020-09-01 13:44:24 -06:00
Dennis Heimbigner
59e04ae071 This PR adds EXPERIMENTAL support for accessing data in the
cloud using a variant of the Zarr protocol and storage
format. This enhancement is generically referred to as "NCZarr".

The data model supported by NCZarr is netcdf-4 minus the user-defined
types and the String type. In this sense it is similar to the CDF-5
data model.

More detailed information about enabling and using NCZarr is
described in the document NUG/nczarr.md and in a
[Unidata Developer's blog entry](https://www.unidata.ucar.edu/blogs/developer/en/entry/overview-of-zarr-support-in).

WARNING: this code has had limited testing, so do not use this version
for production work. Also, performance improvements are ongoing.
Note especially the following platform matrix of successful tests:

Platform       | Build System | S3 support
---------------|--------------|-----------
Linux+gcc      | Automake     | yes
Linux+gcc      | CMake        | yes
Visual Studio  | CMake        | no

Additionally, and as a consequence of the addition of NCZarr,
major changes have been made to the Filter API. NOTE: NCZarr
does not yet support filters, but these changes are enablers for
that support in the future.  Note that it is possible
(probable?) that there will be some accidental reversions if the
changes here did not correctly mimic the existing filter testing.

In any case, previously filter ids and parameters were of type
unsigned int. In order to support the more general zarr filter
model, this was all converted to char*.  The old HDF5-specific,
unsigned int operations are still supported but they are
wrappers around the new, char* based nc_filterx_XXX functions.
This entailed at least the following changes:
1. Added the files libdispatch/dfilterx.c and include/ncfilter.h
2. Some filterx utilities have been moved to libdispatch/daux.c
3. A new entry, "filter_actions" was added to the NCDispatch table
   and the version bumped.
4. An overly complex set of structs was created to support funnelling
   all of the filterx operations thru a single dispatch
   "filter_actions" entry.
5. Move common code from libhdf5 to libsrc4 so that it is accessible
   to nczarr.

Changes directly related to Zarr:
1. Modified CMakeLists.txt and configure.ac to support both C and C++
   -- this is needed for S3 support via the aws-sdk libraries.
2. Define a size64_t type to support nczarr.
3. More reworking of libdispatch/dinfermodel.c to
   support zarr and to regularize the structure of the fragments
   section of a URL.

Changes not directly related to Zarr:
1. Make client-side filter registration be conditional, with default off.
2. Hack include/nc4internal.h to make some flags added by Ed be unique:
   e.g. NC_CREAT, NC_INDEF, etc.
3. cleanup include/nchttp.h and libdispatch/dhttp.c.
4. Misc. changes to support compiling under Visual Studio including:
   * Better testing under windows for dirent.h and opendir and closedir.
5. Misc. changes to the oc2 code to support various libcurl CURLOPT flags
   and to centralize error reporting.
6. By default, suppress the vlen tests that have unfixed memory leaks; add option to enable them.
7. Make part of the nc_test/test_byterange.sh test be contingent on remotetest.unidata.ucar.edu being accessible.

Changes Left TO-DO:
1. Fix the provenance code; it is too HDF5-specific.
2020-06-28 18:02:47 -06:00
Dennis Heimbigner
c195e24f2a Fix nccopy chunking to use default chunking
re: https://github.com/Unidata/netcdf-c/issues/1763

The nccopy program incorrectly set default chunking parameters
to use full dimension lengths. Instead, it should use the
values computed by the default chunking algorithm, as implemented
by nc4_default_chunksizes2 in the netcdf library.

Solution: Revert to the old behavior.
2020-06-24 16:34:32 -06:00
Dennis Heimbigner
90b912b7e8 Allow use of type keywords as identifiers in formats that do not support that type.
Built-in type-name keywords are currently flagged when used as
identifiers in formats that do not support that type.  So if a
user declares a dimension named "string" in a classic .cdl file,
it causes an error.

This PR modifies ncgen to allow those format-specific type keywords
to be used as identifiers when compiling to formats that do not
support that type. Also added a test for this.

Also a couple of misc. changes to conditionalize some debug output.
2020-06-05 17:03:29 -06:00
Dennis Heimbigner
c68c4c804d Fix undefined references when using Visual Studio
Fix Issue https://github.com/Unidata/netcdf-c/issues/1725.
Replace PR https://github.com/Unidata/netcdf-c/pull/1726
Also replace PR https://github.com/Unidata/netcdf-c/pull/1694

The general problem is that under Visual Studio, we are seeing
a number of undefined reference and other scoping errors.
The reason is that the code is not properly using Visual Studio
_declspec() declarations.

The basic solution is to ensure that _declspec(dllexport) is used
when compiling the library code itself. There are several sets of
macros to handle this, but they all rely on the flag DLL_EXPORT
being defined when the code is compiled, but not being defined
when the code is used via a .h file.
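
A hedged sketch of the usual macro pattern involved (the EXTERNL/DLL_NETCDF
names here are illustrative, not necessarily the exact ones netcdf-c uses):
````
/* When building the DLL itself, DLL_EXPORT is defined and symbols are
   exported; when a client merely includes the header, they are imported. */
#if defined(_WIN32) && defined(DLL_NETCDF)
#  ifdef DLL_EXPORT /* defined only while compiling the library sources */
#    define EXTERNL __declspec(dllexport) extern
#  else
#    define EXTERNL __declspec(dllimport) extern
#  endif
#else
#  define EXTERNL extern
#endif

/* Usage in a public header: */
EXTERNL int some_public_function(int arg);
````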

As a test, I modified XGetOpt.c to build properly. I also
fixed the oc2 library to properly _declspec things like ocdebug.

I also made some misc. changes to get all the tests to run
if cygwin is installed (to get bash, sed, etc).

Misc. Changes:
* Put XGetOpt.c into libsrc and copy it at build time
  to the other directories where it is needed.
2020-05-18 19:36:28 -06:00
Dennis Heimbigner
710039c146 ckp 2020-04-29 14:34:30 -06:00
Dennis Heimbigner
73537603e2 Make scalar X filter return an error instead of ignoring it 2020-03-02 15:10:54 -07:00
Dennis Heimbigner
420fdf4625 fix memory allocation failure in hdf5var.c 2020-03-02 11:45:41 -07:00
Dennis Heimbigner
7ff42a2848 Fix typo 2020-02-29 13:02:49 -07:00
Dennis Heimbigner
f376c23329 Make utilities support NC_COMPACT
re: https://github.com/Unidata/netcdf-c/issues/1642

Modify ncdump, nccopy, and ncgen to support the NC_COMPACT storage option.
Added test cases and added description to the man pages for the utilities.

1. ncdump: For a compact-storage variable, print the special attribute _Storage as
````
    <var>: _Storage = "compact";
````

2. ncgen: parse and implement
````
    <var>: _Storage = "compact";
````
in a .cdl file

3. nccopy: Extend the chunk specification (-c flag) to support
   compact using the forms
````
nccopy ... -c <var>:compact
and
nccopy ... -c <var>:contiguous
````

Misc. other changes
1. cleanup the copy_chunking function in ncdump/nccopy.c
2020-02-29 12:06:21 -07:00
Dennis Heimbigner
44d0dcaad2 Add support for multiple filters per variable.
re: https://github.com/Unidata/netcdf-c/issues/1584

Support has been added for multiple filters per variable.  This
affects a number of components in netcdf. The new APIs are
documented in NUG/filters.md.

The primary changes are:
* A set of new functions are provided (see __include/netcdf_filter.h__).
    - Obtain a list of the filters associated with a variable
    - Obtain the parameters for a specific filter.
* The existing __nc_inq_var_filter__ function now returns info
  about the first defined filter.
* The utilities (ncgen, ncdump, and nccopy) now support
  an extended format for specifying a sequence of filters.
  The general form is __<filter>|<filter>...__.
* The ncdump **_Filter** attribute now dumps a list of all the
  filters associated with a variable using the above new format.
* Filter specifications can now use a filter name instead of a number
  for filters known to the netcdf library; the set of known names is
  taken from the HDF5 filter registration page.
* New errors are defined: NC_EFILTER and NC_ENOFILTER. The latter
  is returned if an attempt is made to access an unknown filter.
* Internally, the dispatch table has been extended to add a function
  to handle all of the filter functions.
* New, filter-related, tests were added to nc_test4.
* A new plugin was added to the plugins directory to help with testing.
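
A hedged sketch of querying the new per-variable filter list (the function
names shown are those of current netcdf-c releases, nc_inq_var_filter_ids and
nc_inq_var_filter_info, and are assumed to correspond to the new functions
described above; error handling is abbreviated):
````
#include <stdio.h>
#include <stdlib.h>
#include "netcdf.h"
#include "netcdf_filter.h"

static int
show_filters(int ncid, int varid)
{
    int stat;
    size_t i, nfilters = 0, nparams = 0;
    unsigned int* ids = NULL;

    /* First call obtains the count, second call fills the id vector */
    if ((stat = nc_inq_var_filter_ids(ncid, varid, &nfilters, NULL))) return stat;
    if (nfilters == 0) return NC_NOERR;
    if ((ids = malloc(nfilters * sizeof(unsigned int))) == NULL) return NC_ENOMEM;
    if ((stat = nc_inq_var_filter_ids(ncid, varid, &nfilters, ids))) goto done;
    for (i = 0; i < nfilters; i++) {
        if ((stat = nc_inq_var_filter_info(ncid, varid, ids[i], &nparams, NULL))) goto done;
        printf("filter id=%u nparams=%zu\n", ids[i], nparams);
    }
done:
    free(ids);
    return stat;
}
````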

Notes:
1. The shuffle and fletcher32 filters are not part of the multifilter system.

Misc. changes:
1. A debug module was added to libhdf5 to help catch error locations.
2020-02-16 12:59:33 -07:00
Ward Fisher
6c5da1d6ee
Merge pull request #1405 from gsjaardema/patch-34
Eliminate compiler warning
2019-08-01 15:21:53 -06:00
Dennis Heimbigner
29281a2f89 nccopy must use defaulting when there are no -c parameters
re: issue https://github.com/Unidata/netcdf-c/issues/1436

It used to be that, when no -c parameters were specified
(and the input was some variant of netcdf-3), nccopy
let the netcdf-c library decide on any default chunking.
Now, it attempts to do it itself and does not do it
correctly when unlimited dimensions are involved.

So the fix is to revert to the previous behavior.

Note: The chunking rules are getting too complicated; consider revising.
2019-07-14 15:18:03 -06:00
Dennis Heimbigner
06498ff16a various fixes 2019-05-23 16:35:03 -06:00
Greg Sjaardema
7df5569d95
Eliminate compiler warning
`option_min_chunk_bytes` is `size_t`, which is unsigned, so the value can never be less than zero.
2019-05-22 08:19:22 -06:00
Dennis Heimbigner
6ebc108f00 Nccopy was overriding default chunking when it should not.
re: issue https://github.com/Unidata/netcdf-c/issues/1398
re: esupport NDY-294972

The new chunking code added to nccopy missed one case.
In the event that there are no chunking specifications
of any kind, and the input is not netcdf-4, and the output
is netcdf-4 and must be chunked, then use the default chunking
that the library computes as part of the nc_def_var() function.

Misc. changes:
1. add some chunking debug code to hdf5var.c
2019-05-21 15:59:27 -06:00
Dennis Heimbigner
2420b69a83 Fix nccopy to use NC_PERSIST so that -w actually persists the output.
re: Issue https://github.com/Unidata/netcdf-c/issues/1365

At some point (4.6.1) we changed the diskless handling of flags
and added a new NC_PERSIST flag. Looks like we did not fix all
occurrences of the old flag set, specifically for 'nccopy -w' command.
Also added some tests (tst_nccopy_w{3,4}.sh) for this situation.
2019-03-15 12:05:27 -06:00
Ward Fisher
e32e0b1eb2
Merge branch 'master' into filterexpr.dmh 2019-02-12 09:48:03 -07:00
Dennis Heimbigner
a6b04c0c66 Extend nccopy -F option syntax.
A user suggested that the nccopy -F option
syntax should be extended to support specification
of multiple (or all) variables in a single -F option.

The new syntax allows:

1. '*' as the name of the variable; this means apply the
   filter to all variables in the data set.
2. *var1|var2|...* as the variable name to indicate that the filter
   should be applied to the multiple specified variables.
2019-02-08 18:48:17 -07:00
Dennis Heimbigner
8714066b18 Fix errors when building on big-endian machine
re: issue https://github.com/Unidata/netcdf-c/issues/1278
re: issue https://github.com/Unidata/netcdf-c/issues/876
re: issue https://github.com/Unidata/netcdf-c/issues/806

* Major change to the handling of 8-byte parameters for nc_def_var_filter.
  The old code was not well thought out.
  * The new algorithm is documented in docs/filters.md.
  * Added new utility file plugins/H5Zutil.c to support the new algorithm.
  * Modified plugins/H5Zmisc.c to use the new algorithm.
  * Renamed include/ncfilter.h to include/netcdf_filter.h
    and made it an installed header so clients can access the
    new algorithm utility.
  * Fixed nc_test4/tst_filterparser.c and nc_test4/test_filter_misc.c
    to use the new algorithm
* libdap4/ fixes:
  * d4swap.c has an error in the endian pre-processing such
    that record counts were not being swapped correctly.
  * d4data.c had an error in that checksums were being computed
    after endian swapping rather than before.
* ocinitialize() was never being called, so xxdr bigendian handling
  was never set correctly.
  * Required adding debug statements to occompile
* Found and fixed memory leak in ncdump.c

Not tested:
* HDF4
* Pnetcdf
* parallel HDF5
2019-01-31 21:13:06 -07:00
Dennis Heimbigner
75759ca957 Separate out the --ansi comment fixes.
re: pull request https://github.com/Unidata/netcdf-c/pull/1242

This pr should be applied before https://github.com/Unidata/netcdf-c/pull/1242.
It fixes only the -ansi '//' comment problems. There may be some
slight conflicts with that other pr when it is applied, since in some
cases I converted #if 0...#endif to /*...*/
2018-12-12 13:23:09 -07:00
Ward Fisher
02937d2d0e ncdump, other directories updated with copyright stanza. 2018-12-06 15:36:53 -07:00
Ward Fisher
a068892ace Merge branch 'master' into gh803.wif 2018-11-27 16:09:35 -07:00
Ward Fisher
a8673c3dfe Moving provenance info out so that it doesn't depend on netCDF4 support to display. 2018-11-27 16:09:17 -07:00
Ward Fisher
4144f03a44 Added a stanza to not copy the _NCProperties attribute from an old file to a new file, and instead use the properties appropriate for the newly created file. 2018-11-26 17:56:44 -07:00
Ward Fisher
ede7c5da60
Merge branch 'master' into provenance.dmh 2018-09-04 11:22:36 -06:00
Dennis Heimbigner
2ea1cf5f1b There was a request to extend the provenance information
stored in the _NCProperties attribute to allow two things:
1. capture of additional library dependencies (over and above
   hdf5)
2. Recognition of non-netcdf libraries that create netcdf-4 format
   files.

To this end, the _NCProperties format has been extended to be
an arbitrary set of key=value pairs separated by commas.
This new format has version = 2, and uses commas as the pair separator.
Thus the general form is:
    _NCProperties = "version=2,key1=value,key2=value2..." ;

This new version is accompanied by a new ./configure option of the form
    --with-ncproperties="key1=value1,key2=value2..."
that specifies pairs to add to the _NCProperties attribute for all
files created with that netcdf library.

At this point, what is missing is some programmatic way to
specify either all the pairs or additional pairs
to the _NCProperties attribute. Not sure of the best way
to do this.

Builders using non-netcdf libraries can specify
whatever they want in the key value pairs (as long
as the version=2 is specified first).

By convention, the primary library is expected to be the
first pair after the leading version=2 pair, but this
is convention only and is neither required nor enforced.
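
As a hedged illustration of inspecting the attribute programmatically
(assuming, as ncdump -s does, that _NCProperties is readable through the
ordinary global-attribute API):
````
#include <stdio.h>
#include <stdlib.h>
#include "netcdf.h"

/* Sketch: read the _NCProperties global attribute as text and print it.
   Parsing the "version=2,key=value,..." pairs is left to the caller. */
static int
print_ncproperties(int ncid)
{
    int stat;
    size_t len = 0;
    char* text;

    if ((stat = nc_inq_attlen(ncid, NC_GLOBAL, "_NCProperties", &len))) return stat;
    if ((text = malloc(len + 1)) == NULL) return NC_ENOMEM;
    if ((stat = nc_get_att_text(ncid, NC_GLOBAL, "_NCProperties", text)) == NC_NOERR) {
        text[len] = '\0';
        printf("_NCProperties = \"%s\"\n", text);
    }
    free(text);
    return stat;
}
````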

Related changes:
1. Fixed the tests that check _NCProperties to properly operate with version=2.
2. When reading a version 1 _NCProperties attribute, convert it to look
   like a version 2 attribute.
3. Added some version 2 tests to ncdump/tst_fileinfo.c and
   ncdump/tst_fileinfo.sh

Misc Changes:
1. Fix minor problem in ncdap_test/testurl.sh where a parameter to
   buildurl needed to be quoted.
2. Minor fix to ncgen to swap switches -H and -h to be consistent
   with other utilities.
3. Document the -M flag in nccopy usage() and the nccopy man page.
4. Modify a test case to use the nccopy -M flag.
2018-08-25 21:44:41 -06:00
Greg Sjaardema
8743de7f34
Eliminate double printing of nccopy program name in usage output
The `error` macro used to output the usage information already outputs `progname`, so the `progname` argument is not needed and results in double printing of the `progname` as shown below:

```
ceerws2703a:build(master)> ../bin/nccopy
../bin/nccopy: ../bin/nccopy [-k kind] [-[3|4|6|7]] [-d n] [-s] [-c chunkspec] [-u] [-w] [-[v|V] varlist] [-[g|G] grplist] [-m n] [-h n] [-e n] [-r] infile outfile
  [-k kind] specify kind of netCDF format for output file, default same as input
            kind strings: 'classic', '64-bit offset', 'cdf5',
                          'netCDF-4', 'netCDF-4 classic model'
```
2018-08-20 11:28:55 -06:00
Dennis Heimbigner
f46b65c602 Clear up coverity complaints
with respect to this coverity report.

https://u2389337.ct.sendgrid.net/wf/click?upn=08onrYu34A-2BWcWUl-2F-2BfV0V05UPxvVjWch-2Bd2MGckcRZApv53Ry-2FReUME-2Fmei1et81WqdBwm5AxSAYBM9iFSmbw-3D-3D_YxTn3nq1kZ9CfXcyYFaB2DtJSVHTn8U-2BK0LV-2FDyLYKcFvnwYJ7pm2zFB5UovsEUh5miIfeCfQzYxpN9HYcdqs9sKVjtklOkfPIjEpg0Tj8G9wiptznJeX-2FO4PrqJA1ViQYtQfpjugq01XGMQ3-2Bll42dUMxrmbdYSqi8T7bCcANC5evyZs4MKvmAWXt1bnPaHcOYJDN-2FUt-2FJTlhwu91HyR7h-2FnkvMY9paYUNB1gxiyUA-3D
2018-08-04 13:22:29 -06:00
Dennis Heimbigner
b02703aa24 This PR primarily addresses Issue https://github.com/Unidata/netcdf-c/issues/725.
After a long discussion, I implemented the rules at the end of that issue.
They are documented in nccopy.1.

Additionally, I added a new, per-variable, -c flag that allows
for the direct setting of the chunking parameters for a variable.
The form is
    -c var:c1,c2,...ck
where var is the name of the variable (possibly a fully qualified name)
and the ci are the chunksizes for that variable. It must be the case
that the rank of the variable is k. If the new form is used as well
as the old form, then the new form overrides the old form for the
specified variable. Note that multiple occurrences of the new form
-c flag may be specified.

Misc. Other fixes
1. Added -M <size> option to nccopy to specify the minimum
   allowable chunksize.
2. Removed the unused variables from bigmeta.c
   (Issue https://github.com/Unidata/netcdf-c/issues/1079)
3. Fixed failure of nc_test4/tst_filter.sh by using the new -M
   flag (#1) to allow filter test on a small chunk size.
2018-07-26 20:16:02 -06:00