It turns out that HDF4 supports chunking (and compression); however, the
existing HDF4 code in netCDF did not support it. So add HDF4 support for
chunking, and add a test case. There was a problem with the distcheck
testing, since the test input files are in ${src_dir} and the test runs in
${build_dir}; so modify run_chunk_hdf4 to copy the input files as needed.
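As a rough illustration of the new capability (the file and variable names
here are hypothetical, and this assumes the HDF4 chunking support is surfaced
through the usual inquiry call), chunking information for a variable in an
HDF4 file opened through netCDF could be inspected like this:

    #include <stdio.h>
    #include <netcdf.h>

    int
    main(void)
    {
        int ncid, varid, storage, stat;
        size_t chunksizes[NC_MAX_VAR_DIMS];

        /* "chunked.hdf4" and "var" are hypothetical names. HDF4 files are
           read-only through netCDF, so open with NC_NOWRITE. */
        if ((stat = nc_open("chunked.hdf4", NC_NOWRITE, &ncid))) return stat;
        if ((stat = nc_inq_varid(ncid, "var", &varid))) return stat;

        /* Report whether the variable is chunked and, if so, its chunk sizes. */
        if ((stat = nc_inq_var_chunking(ncid, varid, &storage, chunksizes)))
            return stat;
        printf("storage is %s\n",
               storage == NC_CHUNKED ? "chunked" : "contiguous");

        return nc_close(ncid);
    }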
error occurs after an "exit:" label.
Corrected a dozen Coverity errors (mainly allocation issues, along with a few
other things):
711711, 711802, 711803, 711905, 970825, 996123, 996124, 1025787,
1047274, 1130013, 1130014, 1139538
Refactored internal fill-value code to correctly handle string types, and
especially to allow NULL pointers and null strings (i.e. "") to be
distinguished. The code now avoids partially aliasing the two together
(which only happened on the 'write' side of things and wasn't reflected on
the 'read' side, adding to the previous confusion).
Probably still weak on handling fill-values of variable-length and compound
datatypes.
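A minimal sketch of the distinction now being preserved, assuming a
hypothetical variable name and an existing dimension id; for an NC_STRING
variable the fill value is passed as a pointer to a char*, so a NULL pointer
and an empty string "" are treated as different things:

    #include <netcdf.h>

    int
    define_string_var_with_fill(int ncid, int dimid)
    {
        int varid, stat;
        const char *empty = "";   /* a null string, distinct from NULL */

        if ((stat = nc_def_var(ncid, "s", NC_STRING, 1, &dimid, &varid)))
            return stat;

        /* Use the empty string (not a NULL pointer) as the fill value.
           Passing NULL here would instead request default fill handling. */
        return nc_def_var_fill(ncid, varid, NC_FILL, &empty);
    }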
Refactored the recursive metadata reads a bit more, to process HDF5 named
datatypes and datasets immediately, avoiding chewing up memory for those
types of objects, etc.
Finished uncommenting and updating the nc_test4/tst_fills2.c code (as I'm
proceeding alphabetically through the nc_test4 code files).
Add a new function called nc_inq_format_extended that
returns more detailed format information (vis-a-vis
nc_inq_format) about an open dataset.
Note that the netcdf API will present the file as if it had
the format specified by nc_inq_format. The true file
format, however, may not even be netcdf; it might be
DAP, HDF4, or PNETCDF, for example. This function returns
that true file type. It also returns the effective mode for
the file.
signature: nc_inq_format_extended(int ncid, int* formatp, int* modep)
where
* ncid is the NetCDF ID from a previous call to nc_open() or
nc_create().
* formatp is a pointer to a location for returned true format.
* modep is a pointer to a location for returned mode flags.
Refer to the actual list in the file netcdf.h to see the
currently defined set.
Also added test cases (tst_formatx*).
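A short usage sketch of the new call ("data.nc" is a hypothetical file name;
the exact constants returned are the ones listed in netcdf.h):

    #include <stdio.h>
    #include <netcdf.h>

    int
    main(void)
    {
        int ncid, format, mode, stat;

        if ((stat = nc_open("data.nc", NC_NOWRITE, &ncid))) return stat;

        /* format receives the true underlying file format (one of the
           format constants defined in netcdf.h); mode receives the
           effective mode flags for the open file. */
        if ((stat = nc_inq_format_extended(ncid, &format, &mode)))
            return stat;
        printf("extended format: %d, mode: 0x%x\n", format, mode);

        return nc_close(ncid);
    }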
the rest of the dimension queries. Correct error in library where types that
were added to the file after a sub-group was created were not available for
sub-group variables to use. Start cleaning up the
test suite and un-commenting tests that were commented out (got up to
nc_test4/tst_fills2.c, alphabetically) before running into an error in HDF5.
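A hedged sketch of the situation the fix addresses (the group, type, and
variable names are made up): a type added to the file after a sub-group
already exists should still be usable for a variable in that sub-group:

    #include <netcdf.h>

    int
    define_type_after_subgroup(int ncid)
    {
        int grpid, varid, dimid, stat;
        nc_type etype;
        const int one = 1;

        /* Create the sub-group first... */
        if ((stat = nc_def_grp(ncid, "subgrp", &grpid))) return stat;
        if ((stat = nc_def_dim(grpid, "x", 4, &dimid))) return stat;

        /* ...then add a type to the root group (i.e. to the file) later. */
        if ((stat = nc_def_enum(ncid, NC_INT, "flag_t", &etype))) return stat;
        if ((stat = nc_insert_enum(ncid, etype, "one", &one))) return stat;

        /* With the fix, the later-defined type is visible to the sub-group. */
        return nc_def_var(grpid, "flags", etype, 1, &dimid, &varid);
    }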
many cleanups to fix compiler warnings, streamline iteration over objects in
the HDF5 file when opening the file, and generally straighten out the code to
be cleaner and simpler.
Tested on Mac OS/X with gcc 4.8 and OpenMPI (which uses clang).
an unlimited-dimension variable, and different processes don't agree on
whether to extend the underlying HDF5 dataset, or don't agree on the amount
to extend the dataset.
Also added the capability for netCDF-4 to write and read NIL values for
string type attributes and variables, so these can be read if used in HDF5
files. Included are additions to the CMakeLists files to reflect the new
tests.
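As a rough illustration of the reading side (the attribute is assumed to be a
string-typed attribute; names are hypothetical), values stored as NIL come
back as NULL pointers rather than as empty strings:

    #include <stdio.h>
    #include <stdlib.h>
    #include <netcdf.h>

    int
    print_string_att(int ncid, int varid, const char *name)
    {
        size_t i, len;
        char **strs;
        int stat;

        if ((stat = nc_inq_attlen(ncid, varid, name, &len))) return stat;
        if (!(strs = malloc(len * sizeof(char *)))) return NC_ENOMEM;

        if ((stat = nc_get_att_string(ncid, varid, name, strs))) {
            free(strs);
            return stat;
        }
        for (i = 0; i < len; i++)
            printf("%s\n", strs[i] ? strs[i] : "(NIL)");

        stat = nc_free_string(len, strs); /* free library-allocated strings */
        free(strs);
        return stat;
    }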
unlimited dimension hanging. Extending the size of an unlimited
dimension in HDF5 must be a collective operation, so now an error is
returned if trying to extend in independent access mode.
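A minimal parallel sketch, assuming a file opened with nc_create_par or
nc_open_par and a record variable whose first dimension is the unlimited one
(the helper and its parameters are illustrative): the variable is switched to
collective access before the write that grows the unlimited dimension, since
an independent write that needed to extend the dataset would now fail with an
error rather than hang:

    #include <mpi.h>
    #include <netcdf.h>
    #include <netcdf_par.h>

    /* Write one record; count[] describes the shape of a single record. */
    int
    write_record(int ncid, int varid, size_t rec,
                 const size_t *count, const double *data)
    {
        size_t start[NC_MAX_VAR_DIMS] = {0};
        int stat;

        start[0] = rec;

        /* Extending an unlimited dimension is collective in HDF5, so make
           the access collective before the write. */
        if ((stat = nc_var_par_access(ncid, varid, NC_COLLECTIVE)))
            return stat;
        return nc_put_vara_double(ncid, varid, start, count, data);
    }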
Quincey's bug fixes for parallel build portability, particularly
OpenMPI on MacOS-X.
coordinate variables and their associated dimensions occurs in any
subgroup, rather than just the root group. If this occurs, the
variable attribute "_Netcdf4Dimid" is created for every dimension
scale.
Also add a test for this bug fix in tst_dims3.nc, based on Pedro
Vicente's demo.
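A small sketch of the layout that exercises the fixed code path (group and
names are hypothetical): a dimension with a same-named coordinate variable
defined inside a subgroup rather than in the root group:

    #include <netcdf.h>

    int
    def_subgroup_coord(int ncid)
    {
        int grpid, dimid, varid, stat;

        if ((stat = nc_def_grp(ncid, "g1", &grpid))) return stat;

        /* Dimension and its coordinate variable both live in the subgroup. */
        if ((stat = nc_def_dim(grpid, "time", NC_UNLIMITED, &dimid)))
            return stat;
        return nc_def_var(grpid, "time", NC_DOUBLE, 1, &dimid, &varid);
    }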
for JIRA issue NCF-241. This is only temporary until I can make pnetcdf
operate as a separate dispatch table. Also, fix nc_test4/tst_pnetcdf to open
with nc_open_par; this is necessary because a pnetcdf-created file cannot be
opened as a netcdf classic file.
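A hedged sketch of the corresponding open call (the file name and
communicator choices are illustrative, and the exact mode flags needed may
vary by library version): the file is opened with nc_open_par rather than
nc_open:

    #include <mpi.h>
    #include <netcdf.h>
    #include <netcdf_par.h>

    int
    main(int argc, char **argv)
    {
        int ncid, stat;

        MPI_Init(&argc, &argv);

        /* "tst_pnetcdf.nc" is illustrative; a pnetcdf-created file cannot
           be opened as a classic file with plain nc_open, so open it with
           the parallel API instead. */
        stat = nc_open_par("tst_pnetcdf.nc", NC_NOWRITE,
                           MPI_COMM_WORLD, MPI_INFO_NULL, &ncid);
        if (!stat) stat = nc_close(ncid);

        MPI_Finalize();
        return stat;
    }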