The `http` field of the hdf5 info struct is not defined unless `ENABLE_HDF5_ROS3`, `ENABLE_BYTERANGE`, or `ENABLE_S3_SDK` is defined. Based on a quick look at the code, I think that the `ENABLE_HDF5_ROS3` define is the relevant one here. Maybe a better fix is to check whether any of them are defined...
Primary changes:
* Add an improved cache system to speed up performance.
* Fix NCZarr to properly handle scalar variables.
Misc. Related Changes:
* Added unit tests for extendible hash and for the generic cache.
* Add config parameter to set size of the NCZarr cache.
* Add initial performance tests but leave them unused.
* Add CRC64 support.
* Move location of ncdumpchunks utility from /ncgen to /ncdump.
* Refactor auth support.
Misc. Unrelated Changes:
* More cleanup of the S3 support.
* Add support for S3 authentication in .rc files: HTTP.S3.ACCESSID and HTTP.S3.SECRETKEY.
* Remove the hashkey from the struct OBJHDR since it is never used.
If NCPROPERTIES_EXTRA (see configure.ac) is defined
but is null or empty, then an extra comma is generated
at the end of the _NCProperties global attribute.
Soln: check for a null/empty NCPROPERTIES_EXTRA value.
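A minimal sketch of that check (illustrative only, not the exact library code; the helper name is hypothetical):
```
#include <stdio.h>
#include <string.h>

/* Sketch of the fix: append ",<extra>" to the _NCProperties value only
 * when NCPROPERTIES_EXTRA is defined, non-null, and non-empty, so that
 * no dangling comma is produced. Buffer handling is simplified. */
static void append_extra(char *props, size_t propsize)
{
#ifdef NCPROPERTIES_EXTRA
    const char *extra = NCPROPERTIES_EXTRA;
    if (extra != NULL && extra[0] != '\0') {
        size_t used = strlen(props);
        snprintf(props + used, propsize - used, ",%s", extra);
    }
#endif
}
```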
There were some irregularities in the flags for handling NCZarr S3 support.
The primary change is to regularize the flags controlling this to the following.
1. Automake: --enable-nczarr-s3 and CMake: ENABLE_NCZARR_S3
2. Automake: --enable-nczarr-s3-tests and CMake: ENABLE_NCZARR_S3_TESTS
Flag 1 indicates that NCZarr should be built with S3 support enabled.
Flag 2 indicates that the NCZarr S3 tests should be run.
These two flags are separate because running the NCZarr S3 tests
requires access to protected S3 resources. Currently, running
these tests is restricted to Unidata personnel. However, users
may want to enable S3 support even if they cannot run the tests.
It is, of course, an error to specify 2 without specifying 1.
Additionally, if the AWS S3 SDK library is not found, then the NCZarr S3
support and testing must be disabled. Otherwise an error is signaled
during the build.
Some of these NCZarr and S3 changes are propagated to nc-config.
Misc. Other Changes:
1. Allow testing for CYGWIN or MSVC in shell scripts.
2. Add specific test for HDF5 library version 1.10.6.
This is encoded as "HDF5_UTF8_PATHS" because 1.10.6 is the first
version where HDF5 properly supports UTF-8 paths under Windows. This is used
in hdf5internal/nc4_ndf5_ansi_to_utf8.
3. Add an AM conditional -- AX_IGNORE -- for use in testing
when it is desirable to temporarily suppress Makefile code.
4. Add a MULTIFILTER flag to CMakeLists.txt.
re: https://github.com/Unidata/netcdf-c/issues/1836
Revert the internal filter code to simplify it. From the user's
point of view, the only visible changes should be:
1. The functions that convert text to filter specs have had their signatures reverted and have been moved to netcdf_aux.h
2. Some filter API functions now return NC_ENOFILTER when an inquiry is made about a filter that is not defined for the variable.
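Point 2 in particular means callers should treat NC_ENOFILTER as "no such filter" rather than a hard failure. A minimal sketch using the public inquiry function (it assumes the usual convention that unwanted output arguments may be NULL):
```
#include <stdio.h>
#include <netcdf.h>
#include <netcdf_filter.h>

/* Sketch: probe a variable for a filter and treat NC_ENOFILTER as
 * "no filter defined" rather than as an error. */
static int var_has_filter(int ncid, int varid)
{
    unsigned int id;
    size_t nparams;
    int stat = nc_inq_var_filter(ncid, varid, &id, &nparams, NULL);
    if (stat == NC_ENOFILTER)
        return 0;              /* no filter on this variable */
    if (stat != NC_NOERR)
        return -1;             /* some other error */
    printf("filter id=%u nparams=%zu\n", id, nparams);
    return 1;
}
```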
Internally, the dispatch table has been modified to get rid of the filter_actions
entry and associated complex structures. It has been replaced with
inq_var_filter_ids and inq_var_filter_info entries, and the dispatch table
version has been bumped to 3. Corresponding NOOP and NOTNC4 functions
were added to libdispatch/dnotnc4.c. Also, the filter_actions entries
in dispatch tables were replaced for all dispatch code bases (HDF5, DAP2,
etc.). This should only impact UDF users.
In the process, it became clear that the form of the filters
field in NC_VAR_INFO_T was format dependent, so I converted it to
be of type void* and pushed its management into the various dispatch
code bases. Specifically libhdf5 and libnczarr now manage the filters
field in their own way.
The auxiliary functions for parsing textual filter specifications
were moved to netcdf_aux.h and were renamed to the following:
* ncaux_h5filterspec_parse
* ncaux_h5filterspec_parselist
* ncaux_h5filterspec_free
* ncaux_h5filter_fix8
Misc. Other Changes:
1. Document NUG/filters.md updated to reflect the changes above.
2. All the old data types (structs and enums)
used by the filter_actions mechanism were deleted.
The exception is NC_H5_Filterspec because it is needed
by ncaux_h5filterspec_parselist.
3. Clientside filters were removed -- another enhancement
for which no one ever asked.
4. The ability to remove filters was itself removed.
5. Some functionality needed by nczarr was moved from libhdf5
to libsrc4 e.g. nc4_find_default_chunksizes
6. All the filterx code was removed.
7. ncfilter.h and nc4filter.c are no longer used.
Misc. Unrelated Changes:
1. The nczarr_test makefile clean was leaving some directories behind,
so a clean-local target was added to take care of them.
re: Issue https://github.com/Unidata/netcdf-c/issues/1848
The existing Virtual File Driver built to support byte-range
read-only file access is quite old. It turns out to be extremely
slow (reason unknown at the moment).
Starting with HDF5 1.10.6, the HDF5 library has its own version
of such a file driver. The HDF5 developers have better knowledge
about building such a driver and what incantations are needed to
get good performance.
This PR modifies the byte-range code in hdf5open.c so
that if the HDF5 file driver is available, then it is used
in preference to the one written by the Netcdf group.
Misc. Other Changes:
1. Moved all of the nc4print code to ncdump to keep AppVeyor quiet.
Older HDF5 libraries do not support virtual datasets but could otherwise
be supported by netCDF4. This change removes the special case to handle
HDF5 virtual datasets if the installed HDF5 version does not support
virtual datasets.
re: Github Issue https://github.com/Unidata/netcdf-c/issues/1826
It turns out that the common get code (NC4_get_vars) in libhdf5
(and libnczarr) has an optimization where it does not attempt to
read from the file if the variable's data is all fill values. Rather it
just fills the output buffer with the fill value. The problem
is that -- in that case -- it forgets that conversion might still be
needed. So the conversion never occurs and the raw bits of
the fill data are stored directly into the memory space.
Solution: move some code around to properly do the
conversion no matter how the data was obtained.
Added test cases nc_test4/test_fillonly.sh and
nczarr_test/test_fillonlyz.sh.
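For illustration, the scenario those tests cover looks roughly like this (a sketch, not the actual test scripts; file and variable names are hypothetical):
```
#include <assert.h>
#include <netcdf.h>

/* A double variable that was never written is read as float, so the
 * double fill value must be type-converted, not copied bit-for-bit. */
int main(void)
{
    int status, ncid, dimid, varid;
    float buf[4];

    status = nc_create("fillonly.nc", NC_NETCDF4 | NC_CLOBBER, &ncid);
    assert(status == NC_NOERR);
    status = nc_def_dim(ncid, "x", 4, &dimid);
    assert(status == NC_NOERR);
    status = nc_def_var(ncid, "v", NC_DOUBLE, 1, &dimid, &varid);
    assert(status == NC_NOERR);
    status = nc_enddef(ncid);
    assert(status == NC_NOERR);
    /* No data written: every element is the NC_DOUBLE fill value. */
    status = nc_get_var_float(ncid, varid, buf);
    assert(status == NC_NOERR);
    assert(buf[0] == (float)NC_FILL_DOUBLE); /* converted fill, not raw bits */
    status = nc_close(ncid);
    assert(status == NC_NOERR);
    return 0;
}
```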
It seems to be part of the design of HDF5 virtual datasets that
objects within a file remain open while the file is already "closed".
Setting the fclose degree to SEMI would cause the library to bail out.
This commit makes nc_test4/tst_virtual_dataset succeed.
See also Unidata/netcdf-c#1799
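For reference, the fclose degree is set on the HDF5 file access property list; a minimal sketch of the relevant HDF5 calls (not the actual library code):
```
#include <hdf5.h>

/* Sketch: open a file with a WEAK close degree so that objects left
 * open (e.g. by virtual dataset sources) do not make the final
 * H5Fclose() fail, as H5F_CLOSE_SEMI would. */
hid_t open_weak(const char *path)
{
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    if (fapl < 0) return -1;
    if (H5Pset_fclose_degree(fapl, H5F_CLOSE_WEAK) < 0) {
        H5Pclose(fapl);
        return -1;
    }
    hid_t file = H5Fopen(path, H5F_ACC_RDONLY, fapl);
    H5Pclose(fapl);
    return file;
}
```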
In case HDF5 adds more storage specifications, netcdf4 should be able to
cope with them by default. Further specializations could be added
nonetheless.
This change supports the disengagement of enable-netcdf4 from enable-hdf5.
That is, with the advent of nczarr, it is possible
to turn off hdf5 but still need netcdf-4 enabled
because nczarr uses libsrc4, but not libhdf5.
This change involves a bunch of things:
1. Modify configure.ac and CMakeLists.txt to make enable_hdf5
control whether hdf5 support is provided. For backward compatibility,
disable-netcdf4 is treated as disable-hdf5. But internally,
netcdf4 support is controlled only by the enabling of formats
that require it.
2. In support of #1, modify .travis.yml to use enable/disable-hdf5
instead of enable/disable-netcdf4.
3. test_common.in is modified to track selected features,
including enable-hdf5 and enable-s3-tests. This is used in
selected tests that mix netcdf-3 and netcdf4 tests.
4. The conflation of USE_HDF5 and USE_NETCDF4 is common in
code, tests, and build files, so all of those had to be weeded out.
5. It turns out that some of the NC4_dim functions really are HDF5 specific,
but were not treated as such. So they were moved from nc4dim.c to
hdf5dim.c or hdf5dispatch.c.
6. Some generic functions in libhdf5 can be (and were) moved to libsrc4.
This PR adds experimental support for accessing data in the
cloud using a variant of the Zarr protocol and storage
format. This enhancement is generically referred to as "NCZarr".
The data model supported by NCZarr is netcdf-4 minus the user-defined
types and the String type. In this sense it is similar to the CDF-5
data model.
More detailed information about enabling and using NCZarr is
described in the document NUG/nczarr.md and in a
[Unidata Developer's blog entry](https://www.unidata.ucar.edu/blogs/developer/en/entry/overview-of-zarr-support-in).
WARNING: this code has had limited testing, so do not use this version
for production work. Also, performance improvements are ongoing.
Note especially the following platform matrix of successful tests:
Platform | Build System | S3 support
--------------|--------------|-----------
Linux+gcc | Automake | yes
Linux+gcc | CMake | yes
Visual Studio | CMake | no
Additionally, and as a consequence of the addition of NCZarr,
major changes have been made to the Filter API. NOTE: NCZarr
does not yet support filters, but these changes are enablers for
that support in the future. Note that it is possible
(probable?) that there will be some accidental reversions if the
changes here did not correctly mimic the existing filter testing.
In any case, previously filter ids and parameters were of type
unsigned int. In order to support the more general zarr filter
model, this was all converted to char*. The old HDF5-specific,
unsigned int operations are still supported but they are
wrappers around the new, char* based nc_filterx_XXX functions.
This entailed at least the following changes:
1. Added the files libdispatch/dfilterx.c and include/ncfilter.h
2. Some filterx utilities have been moved to libdispatch/daux.c
3. A new entry, "filter_actions" was added to the NCDispatch table
and the version bumped.
4. An overly complex set of structs was created to support funnelling
all of the filterx operations through a single dispatch
"filter_actions" entry.
5. Move common code from libhdf5 to libsrc4 so that it is accessible
to nczarr.
Changes directly related to Zarr:
1. Modified CMakeLists.txt and configure.ac to support both C and C++
-- this is needed for S3 support via the aws-sdk libraries.
2. Define a size64_t type to support nczarr.
3. More reworking of libdispatch/dinfermodel.c to
support zarr and to regularize the structure of the fragments
section of a URL.
Changes not directly related to Zarr:
1. Make client-side filter registration be conditional, with default off.
2. Hack include/nc4internal.h to make some flags added by Ed be unique:
e.g. NC_CREAT, NC_INDEF, etc.
3. Clean up include/nchttp.h and libdispatch/dhttp.c.
4. Misc. changes to support compiling under Visual Studio including:
* Better testing under Windows for dirent.h and opendir and closedir.
5. Misc. changes to the oc2 code to support various libcurl CURLOPT flags
and to centralize error reporting.
6. By default, suppress the vlen tests that have unfixed memory leaks; add option to enable them.
7. Make part of the nc_test/test_byterange.sh test be contingent on remotetest.unidata.ucar.edu being accessible.
Changes Left TO-DO:
1. fix provenance code, it is too HDF5 specific.
It is possible for the value stored in `file_value_size` to overrun the storage capacity of a 32-bit integer. The value does potentially need to store negative values, so it cannot be `size_t` or `hsize_t`; therefore use `hssize_t`, which is a signed 64-bit value. Could also use `ssize_t`, but that is not used in this routine...
re: Github issue https://github.com/Unidata/netcdf-c/issues/1713
If nc_def_var_filter or nc_def_var_deflate or nc_def_var_szip is
called multiple times with the same filter id, but possibly with
different sets of parameters, then the first invocation is
sticky and later invocations are ignored. The desired behavior
is to have the last invocation be used.
This PR implements that desired behavior, with some special
cases. If you call nc_def_var_deflate multiple times, then the
last invocation rule applies with respect to deflate. However,
the shuffle filter, if enabled, is always applied just before
applying deflate.
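For example, under the new rule a variable compressed like this ends up with deflate level 9, with shuffle still applied first (a sketch; the helper name is hypothetical):
```
#include <assert.h>
#include <netcdf.h>

/* Sketch: the second nc_def_var_deflate call replaces the first, so the
 * variable is written with deflate level 9; shuffle remains enabled and
 * is applied before deflate. */
static void set_compression(int ncid, int varid)
{
    int stat;
    stat = nc_def_var_deflate(ncid, varid, /*shuffle*/1, /*deflate*/1, /*level*/1);
    assert(stat == NC_NOERR);
    stat = nc_def_var_deflate(ncid, varid, 1, 1, 9); /* last invocation wins */
    assert(stat == NC_NOERR);
}
```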
Misc unrelated changes:
1. Make client-side filters be disabled by default
2. Fix the definition of uintptr_t and use in oc2 and libdap4
3. Add some test cases
4. modify filter order tests to use plugin filters rather
than client-side filters
The current library seems to have some behavior which is O(N^2) in the number of vars in a file.
The `NC4_inq_dim` routine calls down to `nc4_find_dim_len`, which iterates through each `var` in the file/group, calls `find_var_dim_max_length` on each var, and finds the largest length of the dim on each of those vars. This is done only for unlimited dimensions.
I have a file with 129 dims and 1630 vars. The unlimited dimension is of length 41. In my test program, I am reading data from 4 files which have the same dim and var counts and reading every 4th time step (unlimited dimension). If I run a profile, I see that 98.2% of the program time is in the `nc_get_vara_float` call tree and most of that is in `find_var_dim_max_length` (94.8%).
There are 66,142 calls to `nc_get_vara_float`, resulting in 107,307,290 calls to `find_var_dim_max_length`, with twice that number of calls to `malloc/free` and calls to 5 HDF5 routines. All of this, at least in my case, to return the same `41` each time.
The proof of concept patch here will check whether the file is read-only (or no_write) and if so, it will cache the value of the dim length the first time it is calculated. With this change, my example run is sped up by a factor of 60. The time for `NC4_inq_dim` and below drops from 97.2% down to 2.7%.
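Conceptually, the patch does something like the following (a purely hypothetical sketch; the struct, field, and function names are illustrative, not the actual netcdf-c internals):
```
#include <stddef.h>

/* For a read-only file, compute the unlimited dimension length once
 * and reuse it on later calls instead of rescanning every var. */
struct dim_len_cache { int valid; size_t len; };

static size_t get_dim_len(struct dim_len_cache *cache, int read_only,
                          size_t (*compute_len)(void *ctx), void *ctx)
{
    if (read_only && cache->valid)
        return cache->len;              /* reuse the cached length */
    cache->len = compute_len(ctx);      /* expensive: walks every var */
    cache->valid = read_only;           /* only safe to cache if nothing can grow */
    return cache->len;
}
```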
I'm not sure whether this is the correct fix, or if there is some behavior that I am overlooking, but my users would definitely like a 10 second run compared to a 10 minute run...
This is on current Netcdf master branch.
I will try to attach some valgrind/callgrind profiles.
nc4internal.c contains code to free the format_XXX_info
fields. Since these are format specific, this code
was moved to the dispatch code (libhdf5 and libhdf4
in the current case).
Additionally, there are some fields in nc4internal.h (e.g.
dimscale fields) that are specific to HDF5 and have been moved
to the corresponding HDF5 data structures and code.
Misc. other changes:
1. NC_VAR_INFO_T->hdf5_name renamed to alt_name to avoid
implying it is necessarily HDF5 specific.
2. Prefix NC_FILE_INFO_T with an instance of NC_OBJ for consistency.
This also requires wrapping move_in_NCList() to keep
hdr.id consistent.
re: https://github.com/Unidata/netcdf-c/issues/1642
Modify ncdump, nccopy, and ncgen to support the NC_COMPACT storage option.
Added test cases and added description to the man pages for the utilities.
1. ncdump: For a compact storage variable, print the special attribute _Storage as
````
<var>: _Storage = "compact";
````
2. ncgen: parse and implement
````
<var>: _Storage = "compact";
````
in a .cdl file
3. nccopy: Extend the chunk specification (-c flag) to support
compact using the forms
````
nccopy ... -c <var>:compact
and
nccopy ... -c <var>:contiguous
````
Misc. other changes:
1. Clean up the copy_chunking function in ncdump/nccopy.c.
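For reference, the programmatic equivalent of `_Storage = "compact"` is to request compact storage through nc_def_var_chunking; a minimal sketch (it assumes this build defines NC_COMPACT and that the variable is small enough for compact storage):
```
#include <assert.h>
#include <netcdf.h>

/* Sketch: request compact storage for a (small) variable while the
 * file is still in define mode. */
static void make_compact(int ncid, int varid)
{
    int stat = nc_def_var_chunking(ncid, varid, NC_COMPACT, NULL);
    assert(stat == NC_NOERR);
}
```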
re: https://github.com/Unidata/netcdf-c/issues/1584
Support has been added for multiple filters per variable. This
affects a number of components in netcdf. The new APIs are
documented in NUG/filters.md.
The primary changes are:
* A set of new functions is provided (see __include/netcdf_filter.h__).
- Obtain a list of the filters associated with a variable
- Obtain the parameters for a specific filter.
* The existing __nc_inq_var_filter__ function now returns info
about the first defined filter.
* The utilities (ncgen, ncdump, and nccopy) now support
an extended format for specifying a sequence of filters.
The general form is __<filter>|<filter>...__.
* The ncdump _Filter attribute now dumps a list of all the
filters associated with a variable using the new format above.
* Filter specifications can now use a filter name instead of number
for filters known to the netcdf library, which in turn is taken
from the HDF5 filter registration page.
* New errors are defined: NC_EFILTER and NC_ENOFILTER. The latter
is returned if an attempt is made to access an unknown filter.
* Internally, the dispatch table has been extended to add a function
to handle all of the filter functions.
* New, filter-related, tests were added to nc_test4.
* A new plugin was added to the plugins directory to help with testing.
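A typical use of the new inquiry functions looks roughly like this (a sketch; the function names follow the current include/netcdf_filter.h and may differ slightly in older releases):
```
#include <stdio.h>
#include <stdlib.h>
#include <netcdf.h>
#include <netcdf_filter.h>

/* Sketch: list every filter on a variable and its parameter count. */
static int dump_filters(int ncid, int varid)
{
    size_t nfilters, nparams, i;
    int stat = nc_inq_var_filter_ids(ncid, varid, &nfilters, NULL);
    if (stat != NC_NOERR) return stat;
    unsigned int *ids = malloc(nfilters * sizeof(unsigned int));
    if (nfilters > 0 && ids == NULL) return NC_ENOMEM;
    stat = nc_inq_var_filter_ids(ncid, varid, &nfilters, ids);
    for (i = 0; stat == NC_NOERR && i < nfilters; i++) {
        stat = nc_inq_var_filter_info(ncid, varid, ids[i], &nparams, NULL);
        if (stat == NC_NOERR)
            printf("filter %u: %zu parameters\n", ids[i], nparams);
    }
    free(ids);
    return stat;
}
```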
Notes:
1. The shuffle and fletcher32 filters are not part of the multifilter system.
Misc. changes:
1. A debug module was added to libhdf5 to help catch error locations.
Some versions of some servers are returning malformed responses.
Make the library either handle them or gracefully fail.
The three server errors "fixed" here are as follows.
1. The attribute _NCProperties sometimes has a trailing nul character
in its value. Soln is to elide the nul(s); see the sketch after this list.
2. Sometimes a DAP response has no data part, only a DMR.
Soln is to detect and return an error code instead of crashing.
3. Sometimes a server returns a redirection, but our current
openmagic() function was not following the redirect. Soln
is to follow redirects.
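For fix 1, the elision amounts to something like the following (a minimal sketch, not the exact library code):
```
#include <stddef.h>

/* Sketch: drop any trailing nul characters from an attribute value of
 * known length before treating it as a string. */
static size_t elide_trailing_nuls(const char *value, size_t len)
{
    while (len > 0 && value[len - 1] == '\0')
        len--;
    return len;
}
```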
Also because of #2, I am temporarily making --disable-dap-remote-tests
be the default.
1) We have to use H5Tequal() to compare HDF5 type IDs.
2) When checking if we can re-use an NC_CHAR attribute it is enough to
compare data types (H5Tequal() takes care of the size comparison).
3) This commit adds missing code (reuse_att was set but not used).
Now an attribute in a NetCDF-4 file can be modified as many times as
necessary, as long as its type and length remain the same.
Modifications changing either the type or length of an attribute require
deleting and re-creating the attribute, which increments the attribute
creation order index. Once this index reaches 65535, all attribute
modifications (for a particular group or variable) will fail.
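The reuse test described in points 1 and 2 boils down to something like this (a sketch; the helper name and arguments are illustrative):
```
#include <hdf5.h>

/* Sketch: an existing HDF5 attribute can be rewritten in place only if
 * its datatype compares equal via H5Tequal() and its length is
 * unchanged; otherwise it must be deleted and re-created, which bumps
 * the creation-order index. */
static int can_reuse_att(hid_t existing_type, hid_t new_type,
                         hsize_t existing_len, hsize_t new_len)
{
    htri_t equal = H5Tequal(existing_type, new_type);
    if (equal < 0)
        return -1;                       /* error from HDF5 */
    return equal > 0 && existing_len == new_len;
}
```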
For reference:
Issue 350 title: NetCDF-4 limits the number of times an attribute can
be modified
Pull request 1119 title: Fix checking for HDF5 max dims, no longer
re-create atts if not needed, confirm behavior for HDF5 cyclical
files, allow user to set mpiexec
* For URL paths, the new approach essentially centralizes all information
in the URL into the "#mode=" fragment key and uses that value
to determine the dispatcher for (most) URLs.
* The new approach has the following steps:
1. canonicalize the path if it is a URL.
2. use the mode= fragment key to determine the dispatcher
3. if dispatcher still not determined, then use the mode flags
argument to nc_open/nc_create to determine the dispatcher.
4. if the path points to something readable, attempt to read the
magic number at the front, and use that to determine the dispatcher.
This case may override all previous cases.
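For example, step 2 is what lets a byte-range URL select its dispatcher purely from the fragment (a sketch; the URL is illustrative and assumes a build with byte-range support):
```
#include <netcdf.h>

/* Sketch: the "#mode=" fragment tells the library which dispatcher to
 * use, here the byte-range reader for a remote file. */
int open_byterange(int *ncidp)
{
    return nc_open("https://example.com/data/file.nc#mode=bytes",
                   NC_NOWRITE, ncidp);
}
```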
* Misc changes.
1. Update documentation
2. Moved some unit tests from libdispatch to unit_test directory.
3. Fixed use of the wrong #ifdef macro in test_filter_reg.c
[I think this may fix a previously reported esupport query].
Partially address: https://github.com/Unidata/netcdf-c/issues/1056
Currently, some of the entries in the dispatch table
are conditionalized on USE_NETCDF4.
As a step in upgrading the dispatch table for use
with user-defined tables, we remove that conditional.
This means that all dispatch tables must implement the
netcdf-4 specific functions even if only to make them
return NC_ENOTNC4. To simplify this, a set of default
functions are defined in libdispatch/dnotnc4.c to provide this
behavior. The file libdispatch/dnotnc3.c is also relevant to
this.
The primary fix is to modify the various dispatch tables to
remove the conditional and use the functions in
libdispatch/dnotnc4.c as appropriate. In practice, all of the
existing tables are prepared to handle this, so the only
real change is to remove the conditionals.
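A default implementation is just a stub that reports NC_ENOTNC4; an illustrative sketch in the spirit of libdispatch/dnotnc4.c (not the actual code):
```
#include <netcdf.h>

/* Sketch: a netCDF-4-only operation in a non-netCDF-4 dispatch table
 * simply reports NC_ENOTNC4. */
static int NOTNC4_def_grp(int parent_ncid, const char *name, int *new_ncid)
{
    (void)parent_ncid; (void)name; (void)new_ncid;
    return NC_ENOTNC4;
}
```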
Misc. Unrelated fixes:
1. Fix some annoying warnings in ncvalidator.
Notes:
1. This has not been tested with either pnetcdf or hdf4 enabled.
When those are enabled, it is possible that there are still
some conditionals that need to be fixed.
This fixes an issue hit by GDAL that is present in netcdf 4.6.3
and 4.7.0.
git bisect traced the problem back to
```
77ab979c5f is the first bad commit
commit 77ab979c5f
Author: Ed Hartnett <edwardjameshartnett@gmail.com>
Date: Sat Jun 16 09:58:48 2018 -0600
using get_vars but not put_vars
:040000 040000 8611e77aaefc9ffd1d13 M libsrc4
```
where nc_get_vara_double() started using nc4_get_vars() underneath.
It turns out that nc4_get_vars() was buggy in the situation exercised by GDAL.
This can be reproduced with the following simple test case:
```
#include <assert.h>
#include <stdlib.h>
#include <netcdf.h>

int main()
{
    int status;
    int cdfid = -1;
    int first_dim;
    int varid;
    int other_var;
    size_t anStart[NC_MAX_DIMS];
    size_t anCount[NC_MAX_DIMS];
    double* val = (double*)calloc(3, sizeof(double));
    status = nc_create("foo.nc", NC_NETCDF4, &cdfid);
    assert( status == NC_NOERR );
    status = nc_def_dim(cdfid, "unlimited_dim", NC_UNLIMITED, &first_dim);
    assert( status == NC_NOERR );
    status = nc_def_var(cdfid, "my_var", NC_DOUBLE, 1, &first_dim, &varid);
    assert( status == NC_NOERR );
    status = nc_def_var(cdfid, "other_var", NC_DOUBLE, 1, &first_dim, &other_var);
    assert( status == NC_NOERR );
    status = nc_enddef(cdfid);
    assert( status == NC_NOERR );
    /* Write 3 elements to set the size of the unlimited dim to 3 */
    anStart[0] = 0;
    anCount[0] = 3;
    status = nc_put_vara_double(cdfid, other_var, anStart, anCount, val);
    assert( status == NC_NOERR );
    /* Read 2 elements of my_var (never written) starting with index=1 */
    anStart[0] = 1;
    anCount[0] = 2;
    status = nc_get_vara_double(cdfid, varid, anStart, anCount, val);
    assert( status == NC_NOERR );
    status = nc_close(cdfid);
    assert( status == NC_NOERR );
    free(val);
    return 0;
}
```
Running it under Valgrind without this patch leads to
```
==19637==
==19637== Invalid write of size 8
==19637== at 0x4C326CB: memcpy@@GLIBC_2.14 (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==19637== by 0x4EDBE3D: NC4_get_vars (hdf5var.c:2131)
==19637== by 0x4EDA24C: NC4_get_vara (hdf5var.c:1342)
==19637== by 0x4E68878: NC_get_vara (dvarget.c:104)
==19637== by 0x4E69FDB: nc_get_vara_double (dvarget.c:815)
==19637== by 0x400C08: main (in /home/even/netcdf-c/build/test)
==19637== Address 0xb70e3e8 is 8 bytes before a block of size 24 alloc'd
==19637== at 0x4C2FB55: calloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==19637== by 0x4009E8: main (in /home/even/netcdf-c/build/test)
==19637==
```