corresponding HDF5 operations.
re: github issue https://github.com/Unidata/netcdf-c/issues/908
also in reference to https://github.com/pydata/xarray/issues/2004
The netcdf-c library has implemented the nc_get_vars and nc_put_vars
operations one element at a time, which results in very slow
operation.
This PR attempts to improve the situation for netcdf-4/hdf5 files
by using the slab (hyperslab) operations provided by the hdf5 library. The new
implementation passes the get/put vars stride information down to
the hdf5 slab operations.
The result appears to improve performance significantly: simple
tests on large 2-D arrays show speedups in excess of a factor of 150.
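As an illustration, a minimal sketch of the kind of strided read that now maps
onto a single HDF5 hyperslab selection; the file name, variable name, and
dimension sizes (assumed at least 1024x1024) are hypothetical:

    #include <stddef.h>
    #include <stdlib.h>
    #include <netcdf.h>

    int main(void)
    {
        int ncid, varid, stat;
        size_t start[2] = {0, 0};
        size_t count[2] = {512, 512};    /* number of strided elements per dimension */
        ptrdiff_t stride[2] = {2, 2};    /* read every other element */
        double *buf = malloc(count[0] * count[1] * sizeof(double));

        if (buf == NULL) return NC_ENOMEM;
        if ((stat = nc_open("example.nc", NC_NOWRITE, &ncid))) return stat;
        if ((stat = nc_inq_varid(ncid, "data", &varid))) return stat;

        /* The stride is now handed to an HDF5 hyperslab selection instead of
         * being expanded into one HDF5 access per element. */
        if ((stat = nc_get_vars_double(ncid, varid, start, count, stride, buf)))
            return stat;

        nc_close(ncid);
        free(buf);
        return 0;
    }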
Misc. other changes:
1. Fixed a bug in ncgen/semantics.c that used a list's allocated length
instead of its actual length.
2. Added a temporary hook in the netcdf library plus a performance
test case (tst_varsperf.c) to estimate the speedup. After users
have had some experience with this, I will remove it, probably
after the 4.7 release.
The file docs/indexing.dox tries to provide design
information for the refactoring.
The primary change is to replace all walking of linked
lists with the use of the NCindex data structure.
NCindex is a combination of a hash table (for name-based
lookup) and a vector (for walking the elements in the index).
Additionally, global vectors are added to NC_HDF5_FILE_INFO_T
to support direct mapping of, for example, a dimid to the corresponding
NC_DIM_INFO_T object. These global vectors exist for dimensions, types,
and groups because those objects have globally unique id numbers.
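For illustration only, a simplified sketch of that hash-table-plus-vector
shape; the real NCindex in libsrc4 differs in its types, field names, and
hash function:

    #include <stdlib.h>
    #include <string.h>

    typedef struct Entry {          /* one named metadata object */
        char *name;
        void *object;               /* e.g. an NC_DIM_INFO_T or NC_VAR_INFO_T */
        struct Entry *hashnext;     /* chain within a hash bucket */
    } Entry;

    #define NBUCKETS 64

    typedef struct IndexSketch {
        Entry **vector;             /* definition order, for walking by position */
        size_t nelems;
        Entry *buckets[NBUCKETS];   /* name -> entry, for fast lookup by name */
    } IndexSketch;

    static unsigned hashname(const char *name)
    {
        unsigned h = 5381;          /* djb2-style hash; illustrative only */
        while (*name) h = h * 33 + (unsigned char)*name++;
        return h % NBUCKETS;
    }

    /* Name-based lookup: hash into a bucket, then compare names in the chain. */
    void *index_lookup(IndexSketch *ix, const char *name)
    {
        Entry *e;
        for (e = ix->buckets[hashname(name)]; e != NULL; e = e->hashnext)
            if (strcmp(e->name, name) == 0)
                return e->object;
        return NULL;
    }

    /* Walking all elements uses the vector, preserving definition order. */
    void index_walk(IndexSketch *ix, void (*fn)(void *object))
    {
        size_t i;
        for (i = 0; i < ix->nelems; i++)
            fn(ix->vector[i]->object);
    }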
WARNING:
1. since libsrc4 and libsrchdf4 share code, there are also
changes in libsrchdf4.
2. Any outstanding pull requests that change libsrc4 or libhdf4
are likely to cause conflicts with this code.
3. The original reason for doing this was performance improvement,
but as noted elsewhere, the gain may not be significant because
meta-data read performance is dominated by the hdf5 library,
since we do bulk meta-data reading rather than lazy reading.
Re: https://github.com/Unidata/netcdf-c/issues/708
Expand the NC_INMEMORY capabilities to support writing and accessing
the final modified memory.
Three new functions have been added:
nc_open_memio, nc_create_mem, and nc_close_memio.
The following new capabilities were added.
1. nc_open_memio() allows the NC_WRITE mode flag
so a chunk of memory can be passed in and modified.
2. nc_create_mem() allows the NC_INMEMORY flag to be set
to cause the created file to be kept in memory.
3. nc_close_memio() allows the final in-memory contents to be
retrieved at the time the file is closed.
4. A special flag, NC_MEMIO_LOCK, is provided to ensure that
the provided memory will not be freed or reallocated.
Note the following.
1. If nc_open_memio() is called with NC_WRITE, and NC_MEMIO_LOCK is not set,
then the netcdf-c library will take control of the incoming memory.
This means that the original memory block should not be freed,
but the block returned by nc_close_memio() must be freed.
2. If nc_open_memio() is called with NC_WRITE, and NC_MEMIO_LOCK is set,
then modifications to the original memory may fail if the space available
is insufficient.
Documentation is provided in the file docs/inmemory.md.
A test case is provided: nc_test/tst_inmemory.c driven by
nc_test/run_inmemory.sh
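As a usage illustration, a minimal sketch that builds a dataset entirely in
memory and retrieves the final image at close time; it assumes the NC_memio
struct from netcdf_mem.h exposes memory/size fields as described in
docs/inmemory.md, and the dimension/variable names are hypothetical:

    #include <stdio.h>
    #include <stdlib.h>
    #include <netcdf.h>
    #include <netcdf_mem.h>

    int main(void)
    {
        int ncid, dimid, varid, stat;
        NC_memio finalmem;

        /* Create an in-memory netCDF-4 file; "scratch.nc" is only a nominal
         * name and nothing is written to disk. */
        if ((stat = nc_create_mem("scratch.nc", NC_NETCDF4, 0 /*size hint*/, &ncid)))
            return stat;
        if ((stat = nc_def_dim(ncid, "x", 10, &dimid))) return stat;
        if ((stat = nc_def_var(ncid, "v", NC_INT, 1, &dimid, &varid))) return stat;
        if ((stat = nc_enddef(ncid))) return stat;

        /* Closing hands back the final in-memory contents; the caller now owns
         * finalmem.memory and must free it. */
        if ((stat = nc_close_memio(ncid, &finalmem))) return stat;
        printf("final image is %zu bytes\n", finalmem.size);
        free(finalmem.memory);
        return 0;
    }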
WARNING: the dispatch table entry for close was changed
from int (*close)(int) to int (*close)(int,void*).
This relies on the HDF5 capability to
dynamically load compression filters.
Note that a compression filter is just
a subcase of filters.
The primary user-visible changes are as follows:
1. Add a standard header "netcdf_filter.h" that defines
the necessary API extensions
2. Modify ncgen to support two new special attributes
"_Filter_ID" and "_Filter_Parameters" so that compression
can be turned on when creating a file using ncgen.
4. Add a detailed description of filtering support
to the user's guide; see the file filters.md
5. Add a test case directory for this: nc_test4/filter_test.
It is fragile, so a ./configure flag (-enable-filter-test)
is defined (default disabled) to turn this test off
and avoid spurious 'make check' failures.
Note that the HDF5 documentation is not up-to-date, so
much of what is encoded here comes from examining the
actual code in the file H5PL.c in the HDF5 source code.
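As an illustration of the API in netcdf_filter.h, a hedged sketch that attaches
a filter to a variable in a netCDF-4 file; it uses HDF5 filter id 1 (deflate)
with a single level parameter, and the variable/dimension setup is hypothetical:

    #include <netcdf.h>
    #include <netcdf_filter.h>

    int define_compressed_var(int ncid, int dimid, int *varidp)
    {
        int stat;
        unsigned int level = 5;          /* single filter parameter: deflate level */

        if ((stat = nc_def_var(ncid, "data", NC_FLOAT, 1, &dimid, varidp)))
            return stat;
        /* Associate HDF5 filter id 1 (deflate) with the variable; parameters are
         * passed as an array of unsigned ints, following the HDF5 filter model. */
        return nc_def_var_filter(ncid, *varidp, 1U, 1, &level);
    }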
The nvars, ndims, and natts fields on the NC_HDF5_FILE_INFO struct are
never set. The nvars field is read, but since it is never written,
the value is always zero.
Modified the provenance code to allocate the minimal space
needed for the _NCProperties attribute in the file. This basically
required using malloc in the provenance code and in ncdump.
Otherwise this should cause no externally visible effects.
Also removed ENABLE_FILEINFO from configure.ac, since
the provenance code is no longer optional.
Provenance support consists of a persistent attribute named
_NCProperties plus two computed attributes,
_IsNetcdf4 and _SuperblockVersion.
See the 'Provenance Attributes' section
of docs/attribute_conventions.md for details.
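For illustration, a sketch of reading the persistent _NCProperties attribute,
assuming it can be inquired by name like an ordinary global attribute; the
helper name is hypothetical:

    #include <stdio.h>
    #include <stdlib.h>
    #include <netcdf.h>

    int print_ncproperties(int ncid)
    {
        int stat;
        size_t len;
        char *value;

        if ((stat = nc_inq_attlen(ncid, NC_GLOBAL, "_NCProperties", &len)))
            return stat;                 /* absent in files written by older libraries */
        if ((value = malloc(len + 1)) == NULL)
            return NC_ENOMEM;
        if ((stat = nc_get_att_text(ncid, NC_GLOBAL, "_NCProperties", value)) == NC_NOERR) {
            value[len] = '\0';
            printf("_NCProperties = %s\n", value);
        }
        free(value);
        return stat;
    }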
In non-classic netcdf-4 models, it is allowable to have
large numbers of dims and vars. In many operations, the
entire list of dims or vars is searched for a dim/var matching
a specific name, which results in *lots* of strncmp or strcmp
calls.
If we add a hash field to the var and dim structs similar to what
has already been done for the netcdf-3 formats, then we can hash the
name being searched for and numerically compare that value with
the var/dim hash value. If they match, then do a more expensive
strncmp call to ensure that the names truly match.
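A sketch of the hash-then-compare lookup described above; the struct, field
names, and hash function here are illustrative rather than the library's exact
ones:

    #include <string.h>

    typedef struct DimSketch {
        char name[64];
        unsigned int hash;      /* hash of name, computed once when the dim is created */
        struct DimSketch *next;
    } DimSketch;

    unsigned int hash_name(const char *name)
    {
        unsigned int h = 5381;
        while (*name) h = h * 33 + (unsigned char)*name++;
        return h;
    }

    /* Compare cheap integer hashes first; only call strncmp when they match. */
    DimSketch *find_dim(DimSketch *dims, const char *name)
    {
        unsigned int h = hash_name(name);
        DimSketch *d;
        for (d = dims; d != NULL; d = d->next)
            if (d->hash == h && strncmp(d->name, name, sizeof(d->name)) == 0)
                return d;
        return NULL;
    }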
to clean up resources properly on failure.
Refactored the doubly-linked list code for objects in the libsrc4 directory:
cleaned up the add/del routines, broke out the common next/prev
pointers into a struct and extracted the add/del operations on them,
changed the list of dims to add new dims in the same order as the other
types, and made all add routines able to optionally return a pointer to the
newly created object.
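For illustration, a sketch of the "common next/prev pointers broken out into a
struct" idea just described; the type and field names are illustrative, not
the exact libsrc4 declarations:

    #include <stddef.h>

    typedef struct ListNodeSketch {
        void *next;
        void *prev;
    } ListNodeSketch;

    /* Each metadata object embeds the node as its first member, so a single
     * pair of add/del helpers can service dims, vars, types, etc. */
    typedef struct DimObjSketch {
        ListNodeSketch l;        /* common list linkage; must be the first member */
        char *name;
        int dimid;
    } DimObjSketch;

    /* Generic add-to-front helper; callers pass the address of the list head. */
    void obj_list_add(void **list, ListNodeSketch *obj)
    {
        obj->next = obj->prev = NULL;
        if (*list != NULL) {
            ListNodeSketch *head = (ListNodeSketch *)*list;
            head->prev = obj;
            obj->next = head;
        }
        *list = obj;
    }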
Removed some dead code (pg_var(), nc4_pg_var1(), nc4_pg_varm(), misc. small
routines, etc.).
Fixed fill value handling for string types in nc4_get_vara().
Changed many malloc()+strcpy() pairs into calls to strdup().
Cleaned up misc. other minor Coverity issues.
Many cleanups to fix compiler warnings, streamline iteration over objects
in the HDF5 file when opening it, and generally straighten out the code
to be cleaner and simpler.
Tested on Mac OS/X with gcc 4.8 and OpenMPI (which uses clang).
an unlimited-dimension variable, and different processes don't agree on
whether to extend the underlying HDF5 dataset, or don't agree on the amount
to extend the dataset.
group renaming. The primary API
is 'nc_rename_grp(int grpid, const char* name)'.
No test cases provided yet.
This also required adding a rename_grp entry
to the dispatch tables.
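A minimal usage sketch of the new API; the group names are hypothetical, and
nc_inq_ncid is used only to locate the subgroup to rename:

    #include <netcdf.h>

    int rename_subgroup(int ncid)
    {
        int grpid, stat;

        /* Find the child group by its current name, then rename it. */
        if ((stat = nc_inq_ncid(ncid, "old_name", &grpid)))
            return stat;
        return nc_rename_grp(grpid, "new_name");
    }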
unlimited dimension hanging. Extending the size of an unlimited
dimension in HDF5 must be a collective operation, so an error is now
returned when trying to extend it in independent access mode.
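On the application side, the corresponding remedy is to switch the variable to
collective access before writes that may grow the unlimited dimension. A
hedged sketch, assuming a parallel (MPI) build; the variable shape and helper
name are hypothetical:

    #include <mpi.h>
    #include <netcdf.h>
    #include <netcdf_par.h>

    int write_next_record(int ncid, int varid, size_t rec,
                          const double *data, size_t nvals)
    {
        int stat;
        size_t start[2] = {rec, 0};
        size_t count[2] = {1, nvals};

        /* Extending the unlimited dimension is collective in HDF5, so every
         * rank must take part in the write that grows it. */
        if ((stat = nc_var_par_access(ncid, varid, NC_COLLECTIVE)))
            return stat;
        return nc_put_vara_double(ncid, varid, start, count, data);
    }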
Quincey's bug fixes for parallel build portability, particularly
OpenMPI on MacOS-X.
coordinate variables and their associated dimensions occurs in any
subgroup, rather than just the root group. If this occurs, the
variable attribute "_Netcdf4Dimid" is created for every dimension
scale.
Also add a test for this bug fix in tst_dims3.nc, based on Pedro
Vicente's demo.
Fix Jira issue NCF-29 (https://bugtracking.unidata.ucar.edu/browse/NCF-29):
making the netCDF-4 library ignore HDF5 datasets and attributes which have
datatypes (such as references) that it doesn't understand.
Tested on:
Mac OSX/64 10.8.2 (amazon) w/--enable-netcdf-4 --enable-extra-tests --enable-extra-example-tests
--disable-shared --enable-logging
o Changed variable names 'typeid' to 'typeid1' to avoid a namespace conflict
in Visual Studio.
o Cleaned up a handful of warnings in Visual Studio.
o Addressed a few Coverity-discovered issues.
o Made changes to CMake-based builds.
contain as little file-type specific info as possible. It
modifies libsrc in particular so that all of the netcdf-3 data
that used to be in struct NC is now kept in a separate chunk
of data pointed to by the struct NC. This makes all of the
current protocols consistent: netcdf-3, netcdf-4, and dap.
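Schematically, the arrangement described above looks something like the
following; all names here are illustrative, not the library's exact
declarations:

    struct NC_Dispatch;                  /* per-format function table (netcdf-3, netcdf-4, dap, ...) */

    typedef struct NC_Sketch {
        int ext_ncid;                    /* external id handed back to the user */
        char *path;
        const struct NC_Dispatch *dispatch;
        void *dispatchdata;              /* e.g. the former netcdf-3 contents of struct NC */
    } NC_Sketch;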