Commit Graph

43 Commits

Dennis Heimbigner
ca3dfe43b7 Fix FreeBSD fileno problem in the ncgen parsers 2021-09-28 14:03:19 -06:00
Dennis Heimbigner
11fe00ea05 Add filter support to NCZarr
Filter support has three goals:

1. Use the existing HDF5 filter implementations,
2. Allow filter metadata to be stored in the NumCodecs metadata format used by Zarr,
3. Allow filters to be used even when HDF5 is disabled

Detailed usage directions are defined in docs/filters.md.

For now, the existing filter API is left in place: filters
are defined via ''nc_def_var_filter'' using the HDF5 style,
where the id and parameters are unsigned integers.
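
For illustration only, a minimal sketch of defining a filter this way (the standard HDF5 deflate filter, id 1, with one unsigned-int parameter for the level); the file and variable names here are hypothetical and error checks are elided:
````
#include <netcdf.h>
#include <netcdf_filter.h>

int main(void)
{
    int ncid, dimid, varid;
    size_t chunks[1] = {1024};
    unsigned int level = 5;   /* the filter's single unsigned-int parameter */

    /* Hypothetical file/variable names */
    nc_create("example.nc", NC_NETCDF4 | NC_CLOBBER, &ncid);
    nc_def_dim(ncid, "x", 1024, &dimid);
    nc_def_var(ncid, "v", NC_FLOAT, 1, &dimid, &varid);
    nc_def_var_chunking(ncid, varid, NC_CHUNKED, chunks);

    /* HDF5-style call: (ncid, varid, filter id, nparams, params) */
    nc_def_var_filter(ncid, varid, 1 /* deflate */, 1, &level);

    return nc_close(ncid);
}
````
With NCZarr, the same definition is recorded in the NumCodecs metadata form described above.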

This is a big change since filters affect many parts of the code.

In the following, the terms "compressor", "filter", and "codec" are used
more or less synonymously.

### Filter-Related Changes:
* In order to support dynamic loading of shared filter libraries, a new library was added in the libncpoco directory; it helps to isolate dynamic loading across multiple platforms.
* Provide a json parsing library for use by plugins; this is created by merging libdispatch/ncjson.c with include/ncjson.h.
* Add a new _Codecs attribute to allow clients to see what codecs are being used; let ncdump -s print it out.
* Provide special headers to help support compilation of HDF5 filters when HDF5 is not enabled: netcdf_filter_hdf5_build.h and netcdf_filter_build.h.
* Add a number of new tests for the new nczarr filters.
* Let ncgen parse _Codecs attribute, although it is ignored.

### Plugin directory changes:
* Add support for the Blosc compressor; this is essential because it is the most common compressor used in Zarr datasets. This also necessitated adding a CMake FindBlosc.cmake file.
* Add NCZarr support for the big-four filters provided by HDF5: shuffle, fletcher32, deflate (zlib), and szip.
* Add a Codec defaulter (see docs/filters.md) for the big four filters.
* Make plugins work on Windows by properly adding the __declspec declarations.

### Misc. Non-Filter Changes
* Replace most uses of USE_NETCDF4 (deprecated) with USE_HDF5.
* Improve support for caching
* More fixes for path conversion code
* Fix misc. memory leaks
* Add new utility -- ncdump/ncpathcvt -- that does more or less the same thing as cygpath.
* Add a number of new tests for the non-filter fixes.
* Update the parsers
* Convert most instances of '#ifdef _MSC_VER' to '#ifdef _WIN32'
2021-09-02 17:04:26 -06:00
Dennis Heimbigner
ac421620b3 Fix the handling of certain alias types on CDL files.
re: https://github.com/Unidata/netcdf-c/issues/1977

PR https://github.com/Unidata/netcdf-c/pull/1753 changed ncgen
to allow certain type names to be used as identifiers in
selected situations.

An unwanted side effect was that existing type aliases were no
longer accepted by ncgen. Specifically, using the "long" type
caused an error.

I was able to figure out a better solution to the original
problem (https://github.com/Unidata/netcdf-c/issues/1750)
that also fixes this problem.

This PR fixes that problem in ncgen/ncgen.l,
and adds tests to ncdump/test_keywords.sh
2021-04-13 16:56:43 -06:00
Dennis Heimbigner
2afbdbd18f Add support for the XArray Zarr _ARRAY_DIMENSIONS attribute
The XArray implementation that uses Zarr for storage
provides a mechanism to simulate named dimensions.
It does this by adding a per-variable attribute called
_ARRAY_DIMENSIONS. This attribute contains a list of names
to be matched against the shape values of the variable.
In effect, a named dimension is created with the name
_ARRAY_DIMENSIONS(i) and length shape(i) for each i
in the range 0..rank(variable).
Both read and write support is provided.

This XArray support is only invoked if the mode value
"xarray" is specified, as in this URL:
````
https://s3.us-west-1.amazonaws.com/bucket/dataset#mode=nczarr,xarray,s3
````
Note that the "xarray" mode flag also implies the mode flag "zarr", so the above
is equivalent to this URL:
````
https://s3.us-west-1.amazonaws.com/bucket/dataset#mode=nczarr,zarr,xarray,s3
````
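
A minimal C sketch of opening such a dataset, using the placeholder URL from the examples above; the variable name "temperature" is hypothetical, its named dimensions come from _ARRAY_DIMENSIONS, and the build is assumed to have NCZarr/S3 support enabled:
````
#include <stdio.h>
#include <netcdf.h>

int main(void)
{
    int ncid, varid, ndims, dimids[NC_MAX_VAR_DIMS];
    char name[NC_MAX_NAME + 1];
    size_t len;
    const char *url =
        "https://s3.us-west-1.amazonaws.com/bucket/dataset#mode=nczarr,xarray,s3";

    if (nc_open(url, NC_NOWRITE, &ncid)) return 1;

    /* Hypothetical variable; its named dimensions were created from _ARRAY_DIMENSIONS */
    if (nc_inq_varid(ncid, "temperature", &varid) == NC_NOERR) {
        nc_inq_varndims(ncid, varid, &ndims);
        nc_inq_vardimid(ncid, varid, dimids);
        for (int i = 0; i < ndims; i++) {
            nc_inq_dim(ncid, dimids[i], name, &len);
            printf("dim %d: %s (length %lu)\n", i, name, (unsigned long)len);
        }
    }
    return nc_close(ncid);
}
````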

The primary change to implement this was to unify the handling
of dimension references in libnczarr/zsync.

A test for this and other pure-zarr features was added as
nczarr_test/run_purezarr.sh

Other changes:
* Make sure distcheck leaves no files around.
* Change the special attribute flag DIMSCALEFLAG to HIDDENATTRFLAG
  to support the xarray attribute.
* Annotate the zmap implementations with feature flags such as
  WRITEONCE (for zip files).
2021-02-24 13:46:11 -07:00
Ward Fisher
66a2cd371a More tweaking. 2020-12-07 14:45:14 -07:00
Ward Fisher
878866c039 Merge branch 'ncgenkw.dmh' of https://github.com/DennisHeimbigner/netcdf-c into gh1753.wif 2020-12-07 11:29:12 -07:00
Dennis Heimbigner
aeb3ac2809 Mostly revert the filter code to reduce its complexity of use.
re: https://github.com/Unidata/netcdf-c/issues/1836

Revert the internal filter code to simplify it. From the user's
point of view, the only visible changes should be:

1. The functions that convert text to filter specs have had their signatures reverted and have been moved to netcdf_aux.h
2. Some filter API functions now return NC_ENOFILTER when an inquiry is made about an undefined filter.

Internally, the dispatch table has been modified to get rid of the filter_actions
entry and associated complex structures. It has been replaced with
inq_var_filter_ids and inq_var_filter_info entries and the dispatch table
version has been bumped to 3. Corresponding NOOP and NOTNC4 functions
were added to libdispatch/dnotnc4.c. Also, the filter_action entries
in dispatch tables were replaced for all dispatch code bases (HDF5, DAP2,
etc). This should only impact UDF users.

In the process, it became clear that the form of the filters
field in NC_VAR_INFO_T was format dependent, so I converted it to
be of type void* and pushed its management into the various dispatch
code bases. Specifically libhdf5 and libnczarr now manage the filters
field in their own way.

The auxiliary functions for parsing textual filter specifications
were moved to netcdf_aux.h and were renamed to the following:
* ncaux_h5filterspec_parse
* ncaux_h5filterspec_parselist
* ncaux_h5filterspec_free
* ncaux_h5filter_fix8

Misc. Other Changes:

1. Document NUG/filters.md updated to reflect the changes above.
2. All the old data types (structs and enums)
   used by the filter_actions mechanism were deleted.
   The exception is NC_H5_Filterspec because it is needed
   by ncaux_h5filterspec_parselist.
3. Clientside filters were removed -- another enhancement
   for which no one ever asked.
4. The ability to remove filters was itself removed.
5. Some functionality needed by nczarr was moved from libhdf5
   to libsrc4 e.g. nc4_find_default_chunksizes
6. All the filterx code was removed
7. ncfilter.h and nc4filter.c no longer used

Misc. Unrelated Changes:

1. The nczarr_test makefile clean was leaving some directories behind, so
   add clean-local to take care of them.
2020-09-27 12:43:46 -06:00
Ward Fisher
a89e1f73b8 Merge branch 'ncgenchunks.dmh' of https://github.com/DennisHeimbigner/netcdf-c into master 2020-09-09 10:24:33 -06:00
Dennis Heimbigner
68a98f6e81 Fix ncgen handling of big data sections
The current ncgen does not properly handle very large
data sections. Apparently this is very uncommon because
it was only discovered in testing the new zarr code.

The fix required a new approach to processing data sections.
Unfortunately, the resulting ncgen is slower than before
but at least it is, I think, now correct.

The added test cases are in libnczarr, and so will
not show up until that is incorporated into master.

Note also that fortran code generation changed, but
has not been tested here.

Misc. Changes
1. Cleanup error handling in ncgen -lc and -lb output
2. Cleanup Makefiles for ncgen to remove unused code
3. Added a program, ncgen/ncdumpchunks, to print
   the data of a .nc file in per-chunk format.
4. Made the XGetOpt change in PR https://github.com/Unidata/netcdf-c/pull/1694
   for ncdump/ncvalidator
2020-05-14 11:20:46 -06:00
Dennis Heimbigner
44d0dcaad2 Add support for multiple filters per variable.
re: https://github.com/Unidata/netcdf-c/issues/1584

Support has been added for multiple filters per variable.  This
affects a number of components in netcdf. The new APIs are
documented in NUG/filters.md.

The primary changes are:
* A set of new functions is provided (see __include/netcdf_filter.h__); a short usage sketch follows this list.
    - Obtain a list of the filters associated with a variable
    - Obtain the parameters for a specific filter.
* The existing __nc_inq_var_filter__ function now returns info
  about the first defined filter.
* The utilities (ncgen, ncdump, and nccopy) now support
  an extended format for specifying a sequence of filters.
  The general form is __<filter>|<filter>...__.
* The ncdump **_Filter** attribute now dumps a list of all the
  filters associated with a variable using the above new format.
* Filter specifications can now use a filter name instead of number
  for filters known to the netcdf library, which in turn is taken
  from the HDF5 filter registration page.
* New errors are defined: NC_EFILTER and NC_ENOFILTER. The latter
  is returned if an attempt is made to access an unknown filter.
* Internally, the dispatch table has been extended to add a function
  to handle all of the filter functions.
* New, filter-related, tests were added to nc_test4.
* A new plugin was added to the plugins directory to help with testing.
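
A sketch of querying a variable's filter chain with the new functions; the file and variable names are hypothetical, and, following the usual netCDF convention, a NULL output pointer is assumed to be skipped so the count can be obtained first:
````
#include <stdio.h>
#include <stdlib.h>
#include <netcdf.h>
#include <netcdf_filter.h>

int main(void)
{
    int ncid, varid;
    size_t nfilters, nparams;
    unsigned int *ids;

    /* Hypothetical file/variable names; error checks mostly elided */
    nc_open("compressed.nc", NC_NOWRITE, &ncid);
    nc_inq_varid(ncid, "v", &varid);

    /* First call obtains the count, second fills the id list */
    nc_inq_var_filter_ids(ncid, varid, &nfilters, NULL);
    ids = malloc(nfilters * sizeof *ids);
    nc_inq_var_filter_ids(ncid, varid, &nfilters, ids);

    for (size_t i = 0; i < nfilters; i++) {
        /* NC_ENOFILTER would be returned for an id not defined on this variable */
        if (nc_inq_var_filter_info(ncid, varid, ids[i], &nparams, NULL) == NC_NOERR)
            printf("filter id %u: %lu parameter(s)\n", ids[i], (unsigned long)nparams);
    }
    free(ids);
    return nc_close(ncid);
}
````
The same chain can be handed to ncgen, ncdump, and nccopy using the filter-list form described above.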

Notes:
1. The shuffle and fletcher32 filters are not part of the multifilter system.

Misc. changes:
1. A debug module was added to libhdf5 to help catch error locations.
2020-02-16 12:59:33 -07:00
Dennis Heimbigner
751300ec59 Fix more memory leaks in netcdf-c library
This is a follow up to PR https://github.com/Unidata/netcdf-c/pull/1173

Sorry that it is so big, but leak suppression can be complex.

This PR fixes all remaining memory leaks -- as determined by
-fsanitize=address, and with the exceptions noted below.

Unfortunately, there remains a significant leak that I cannot
solve. It involves vlens, and it is unclear if the leak is
occurring in the netcdf-c library or the HDF5 library.

I have added a check_PROGRAM to the ncdump directory to show the
problem.  The program is called tst_vlen_demo.c. To exercise it,
build the netcdf library with -fsanitize=address enabled. Then
go into ncdump and do a "make clean check".  This should build
tst_vlen_demo without actually executing it.  Then do the
command "./tst_vlen_demo" to see the output of the memory
checker.  Note that the lost malloc is deep in the HDF5 library
(in H5Tvlen.c).

I am temporarily working around this error in the following way.
1. I modified several test scripts to not execute known vlen tests
   that fail as described above.
2. Added an environment variable called NC_VLEN_NOTEST.
   If set, then those specific tests are suppressed.
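
A minimal sketch of the kind of guard involved; the commit modifies test scripts, so this C form is only illustrative of the same check:
````
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Honor the NC_VLEN_NOTEST environment variable described above */
    if (getenv("NC_VLEN_NOTEST") != NULL) {
        printf("NC_VLEN_NOTEST is set: skipping known-leaky vlen tests\n");
        return 0;
    }
    /* ... otherwise run the vlen tests ... */
    return 0;
}
````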

This should mean that the --disable-utilities option to
./configure should not need to be set to get a memory leak clean
build.  This should allow for detection of any new leaks.

Note: I used an environment variable rather than a ./configure
option to control the vlen tests. This is because it is
temporary (I hope) and because it is a bit tricky for shell
scripts to access ./configure options.

Finally, as before, this has only been tested with netcdf-4 and hdf5 support.
2018-11-15 10:00:38 -07:00
Ed Hartnett
0c0d066927 changed macro STREQ to NCSTREQ to avoid name collision with HDF4 library 2018-05-12 08:55:51 -06:00
Dennis Heimbigner
727b613459 This is the initial step in moving to the new higher performance
(I hope) metadata mechanism. This mostly just adds new pieces of
code (e.g. nclistmap) and does some minor fixes.

It should be transparent to everything else.
The next set of changes will be the big step.
2018-02-08 19:53:40 -07:00
Dennis Heimbigner
99fccab359 1. Keep up to date by merging master
2. Fixed plugin building (nc_test4/hdf5plugins)
   to be done properly by cmake and automake.
4. Duplicated part of the nc_test4 filter test code
   in examples/C

An incomplete and untested set of hooks exists
for OS-X in nc_test4/findplugins.in; these need testing.
2018-01-16 11:00:09 -07:00
Ward Fisher
efff646587 Quick test of something to resolve conflict in generated files. 2017-11-13 12:33:19 -07:00
Ward Fisher
16d6f94f30 Merge branch 'master' into filters.dmh 2017-11-13 11:15:02 -07:00
Dennis Heimbigner
9983b9d911 re e-support UBS-599337
re pull request https://github.com/Unidata/netcdf-c/pull/405
re pull request https://github.com/Unidata/netcdf-c/pull/446

Notes:
1. This branch is a cleanup of the magic.dmh branch.
2. magic.dmh was originally merged, but caused problems with parallel IO.
   It was re-issued as pull request https://github.com/Unidata/netcdf-c/pull/446.
3. This branch + pull request replace any previous pull requests and magic.dmh branch.

Given an otherwise valid netCDF file that has a corrupted header,
the netcdf library currently crashes. Instead, it should return
NC_ENOTNC.

Additionally, the NC_check_file_type code does not do the
forward search required by HDF5 files: it currently looks only
at file position 0, not at 512, 1024, 2048,... Also, it turns
out that the HDF4 magic number is assumed to always be at the
beginning of the file (unlike HDF5).
The change is localized to libdispatch/dfile.c. See
https://support.hdfgroup.org/release4/doc/DSpec_html/DS.pdf
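
A generic sketch of that forward search (not the library's actual NC_check_file_type code): probe for the 8-byte HDF5 signature at offset 0 and then at successive power-of-two offsets starting at 512, giving up at end of file:
````
#include <stdio.h>
#include <string.h>

/* The 8-byte HDF5 superblock signature: \211 H D F \r \n \032 \n */
static const unsigned char HDF5_MAGIC[8] = "\211HDF\r\n\032\n";

/* Return 1 if the signature is found at offset 0, 512, 1024, 2048, ... */
static int has_hdf5_magic(FILE *f)
{
    unsigned char buf[8];
    long pos = 0;
    for (;;) {
        if (fseek(f, pos, SEEK_SET) != 0 || fread(buf, 1, 8, f) != 8)
            return 0;                      /* ran off the end: not HDF5 */
        if (memcmp(buf, HDF5_MAGIC, 8) == 0)
            return 1;
        pos = (pos == 0) ? 512 : pos * 2;  /* 0, 512, 1024, 2048, ... */
    }
}

int main(int argc, char **argv)
{
    FILE *f = (argc > 1) ? fopen(argv[1], "rb") : NULL;
    if (f == NULL) return 2;
    printf("%s\n", has_hdf5_magic(f)
                   ? "HDF5 signature found"
                   : "no HDF5 signature (a library check would return NC_ENOTNC)");
    fclose(f);
    return 0;
}
````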

Also, it turns out that the code in NC_check_file_type is duplicated
(mostly) in the function libsrc4/nc4file.c#nc_check_for_hdf.

This branch does the following.
1. Make NC_check_file_type return NC_ENOTNC instead of crashing.
2. Remove nc_check_for_hdf and centralize all file format checking
   in NC_check_file_type.
3. Add proper forward search for HDF5 files (but not HDF4 files)
   to look for the magic number at offsets of 0, 512, 1024...
4. Add test tst_hdf5_offset.sh. This tests that hdf5 files with
   an offset are properly recognized. It does so by prefixing
   a legal file with some number of zero bytes: 512, 1024, etc.
5. Off-topic: Added -N flag to ncdump to force a specific output dataset name.
2017-10-24 16:25:09 -06:00
Ward Fisher
399a43ae89 Updated nc_test to respect USE_CDF5 2017-09-18 13:24:11 -06:00
Ward Fisher
08c51e6064 Corrected a couple issues uncovered when revisiting https://github.com/Unidata/netcdf-c/issues/244 2017-06-14 14:01:09 -06:00
Dennis Heimbigner
d37ac215e2 Add new capabilities to filter code:
1. Allow nccopy to apply filters, especially on the output file.
   This provides a third way to do this, besides using ncgen or
   doing it programmatically.
2. Make sure that even if the filter code is not available, it is
   possible to see the filter id and parameters for variables using
    e.g. ncdump -hs.
3. Fix bug in nccopy so that the input file does
   not necessarily have to be netcdf-4.
4. At the last minute, decided to change to using a
   single "_Filter" attribute for ncgen
5. Added a test to tst_filter.sh to generate C code using ncgen.
2017-05-14 18:10:02 -06:00
Dennis Heimbigner
7c3164577e Finalize the compression support.
This relies on the HDF5 capability to
dynamically load compression filters.
Note that a compression filter is just
one kind of filter.

The primary user-visible changes are as follows:
1. Add a standard header "netcdf_filter.h" that defines
   the necessary API extensions
2. Modify ncgen to support two new special attributes
   "_Filter_ID" and "_Filter_Parameters" so that compression
   can be turned on when creating a file using ncgen.
4. Add a detailed description of filtering support
   to the user's guide; see the file filters.md
5. Add a test case directory for this: nc_test4/filter_test.
   It is fragile, so a ./configure flag (-enable-filter-test),
   disabled by default, must be used to enable this test; this
   avoids spurious 'make check' failures.

Note that the HDF5 documentation is not up-to-date, so
much of what is encoded here comes from examining the
actual code in the file H5PL.c in the HDF5 source code.
2017-04-27 13:01:59 -06:00
Ward Fisher
8dddd222a3 Merged master, DAP4 support into branch. 2017-04-19 09:29:35 -06:00
Ward Fisher
156e6a8e39 Merged master into ghpull-375 2017-03-27 15:31:34 -06:00
Dennis Heimbigner
38bf48d2ca re: github issue https://github.com/Unidata/netcdf-c/issues/380
Re: esupport ticket support-netcdf : SKS-534087
Ncgen treats an integer with just a U/u suffix as uint64 instead of uint32.
Fix is in ncgen.l
2017-03-24 18:56:14 -06:00
Ward Fisher
53c018a3f6 Attempting to fix visual studio errors in support of https://github.com/Unidata/netcdf-c/pull/375 2017-03-13 15:12:47 -06:00
Ward Fisher
287374aee4 Regenerated parser files on ARM. 2017-03-09 13:23:30 -07:00
Ward Fisher
8e3790f7ce Merged master. 2017-03-09 12:53:28 -07:00
Dennis Heimbigner
3db4f013bf Primary change: add dap4 support
Specific changes:
1. Add dap4 code: libdap4 and dap4_test.
   Note that until the d4ts server problem is solved, dap4 is turned off.
2. Modify various files to support dap4 flags:
	configure.ac, Makefile.am, CMakeLists.txt, etc.
3. Add nc_test/test_common.sh. This centralizes
   the handling of the locations of various
   things in the build tree: e.g. where is
   ncgen.exe located. See nc_test/test_common.sh
   for details.
4. Modify .sh files to use test_common.sh
5. Obsolete the separate oc2 library by moving it to be part of
   netcdf-c. This means replacing code with netcdf-c
   equivalents.
5. Add --with-testserver to configure.ac to allow
   override of the servers to be used for --enable-dap-remote-tests.
6. There were multiple versions of nctypealignment code. Try to
   centralize in libdispatch/doffset.c and include/ncoffsets.h
7. Add a unit test for the ncuri code because of its complexity.
8. Move the findserver code out of libdispatch and into
   a separate, self-contained program in ncdap_test and dap4_test.
9. Move the dispatch header files (nc{3,4}dispatch.h) to
   .../include because they are now shared by modules.
10. Revamp the handling of TOPSRCDIR and TOPBUILDDIR for shell scripts.
11. Make use of MREMAP if available
12. Misc. minor changes e.g.
	- #include <config.h> -> #include "config.h"
	- Add some no-install headers to /include
	- extern -> EXTERNL and vice versa as needed
	- misc header cleanup
	- clean up checking for misc. unix vs microsoft functions
13. Change copyright decls in some files to point to LICENSE file.
14. Add notes to RELEASENOTES.md
2017-03-08 17:01:10 -07:00
Ward Fisher
78c0f34c82 Merging master into branch. 2017-02-27 11:00:24 -07:00
Dennis Heimbigner
0415300fbc It appears that the token OPAQUE in ncgen.y
somehow interferes with something
in the HDF4 code. So, I changed
OPAQUE -> OPAQUE_ and that appears
to fix the problem with bison when HDF4
is enabled.

P.S. When Visual Studio complained about the token
'constant', it turned out that it meant the token type,
not an actual token named 'constant'. The token that was
actually causing the problem was 'OPAQUE'.
2017-02-23 22:34:11 -07:00
Ward Fisher
1cdde8bbd2 Invoked makeparser from OSX. 2017-02-23 12:22:44 -07:00
Dennis Heimbigner
d0fb4a472a 1. Added a restriction note on ncgen/Makefile.am.makeparser task.
2. invoked makeparser using bison 3.0.4 and flex 2.5.35
2017-02-22 15:56:29 -07:00
Ward Fisher
4208a875c4 Updated generated ncgen files. 2017-02-22 13:22:50 -07:00
Dennis Heimbigner
47daf33074 Resolves Github issue https://github.com/Unidata/netcdf-c/issues/349.
Update utf8proc.[ch] to use the version now
maintained by the Julia Language project
(https://github.com/JuliaLang/utf8proc/blob/master/LICENSE.md).
The license for the previous version was
unacceptable for the Debian and Ubuntu release
systems. The new version both updates the code
and addresses the license issue.

It turns out that the utf8proc software we are using
was turned over to the Julia Language developers
and the license terms changed to allow modification.
(https://github.com/JuliaLang/utf8proc/blob/master/LICENSE.md).

So the fix here is as follows:
1. Wrap the library with a fixed interface: libdispatch/dutf8.c
   and include/ncutf8.h.
2. Replace the existing utf8proc code with the new version
   from https://github.com/JuliaLang/utf8proc.
3. Add a couple more test cases: nc_test/tst_utf8_validate.c
   and nc_test_utf8_phrases.c.  If/when I can find a usable
   normalization test, I will incorporate that later.
2017-02-16 14:27:54 -07:00
Ward Fisher
20721cc46b Additional work in trying to diagnose failures in nc_test. 2017-02-07 15:16:32 -07:00
Ward Fisher
870b6c534f Working out an error with unistd.h in windows. 2017-01-30 14:54:00 -07:00
Ward Fisher
6d5a924354 Addressed coverity issue 719941, missing varargs cleanup. 2016-07-06 15:41:49 -06:00
Ward Fisher
a7b7d216b2 Corrected an issue on Linux, jumping back over to Windows to see if the issue persists. 2016-05-11 15:38:45 -06:00
Ward Fisher
ef2c6f9bc4 Things are working? 2016-05-11 15:31:17 -06:00
Ward Fisher
fc0d7d0d80 Added a tweak to prevent problems with ncdump and hdf5 trying to correct for a lack of ssize_t. 2016-05-10 15:52:46 -06:00
Dennis Heimbigner
11a259ad86 Add provenance info for netcdf-4 files.
This consists of a persistent attribute named
_NCProperties plus two computed attributes
_IsNetcdf4 and _SuperblockVersion.
See the 'Provenance Attributes' section
of docs/attribute_conventions.md for details.
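
Of the three, only _NCProperties is stored in the file; _IsNetcdf4 and _SuperblockVersion are computed, and ncdump -s is the documented way to view all of them. A minimal sketch of reading the stored one, assuming the hidden attribute can be fetched by name like an ordinary global text attribute (the file name is hypothetical):
````
#include <stdio.h>
#include <stdlib.h>
#include <netcdf.h>

int main(void)
{
    int ncid;
    size_t len;

    if (nc_open("example.nc", NC_NOWRITE, &ncid)) return 1;

    /* _NCProperties is a global text attribute written at creation time;
       this assumes the hidden attribute is retrievable by name */
    if (nc_inq_attlen(ncid, NC_GLOBAL, "_NCProperties", &len) == NC_NOERR) {
        char *props = malloc(len + 1);
        nc_get_att_text(ncid, NC_GLOBAL, "_NCProperties", props);
        props[len] = '\0';
        printf("_NCProperties = %s\n", props);
        free(props);
    }
    return nc_close(ncid);
}
````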
2016-05-07 14:32:07 -06:00
Ward Fisher
dd2201621e Addressed an API-related usage of strncmp. 2015-11-24 17:19:36 -06:00
dmh
47e10591b4 ckp 2015-11-19 13:44:55 -07:00