2. Fixed plugin building (nc_test4/hdf5plugins)
so that it is done properly by both cmake and automake.
4. Duplicated part of the nc_test4 filter test code
in examples/C.
An incomplete and untested set of hooks exists
for OS-X in nc_test4/findplugins.in. They need testing.
Fix #299
The conditions required to trigger this error are the following (a minimal repro sketch follows the list):
* Two variables with different chunk sizes
* Both variables write on the same unlimited dimension
* The first variable has already had data written to it when the second variable is created
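A minimal repro sketch under these conditions (the file name, sizes, and chunking values are illustrative only; error checking omitted):
```
#include <netcdf.h>

int main(void)
{
    int ncid, timeid, v1, v2, dims[1];
    size_t chunks1[1] = {100}, chunks2[1] = {200}; /* different chunk sizes */
    size_t start[1] = {0}, count[1] = {1};
    double datum = 1.0;

    nc_create("repro.nc", NC_NETCDF4 | NC_CLOBBER, &ncid);
    nc_def_dim(ncid, "time", NC_UNLIMITED, &timeid); /* shared unlimited dim */
    dims[0] = timeid;
    nc_def_var(ncid, "v1", NC_DOUBLE, 1, dims, &v1);
    nc_def_var_chunking(ncid, v1, NC_CHUNKED, chunks1);
    nc_enddef(ncid);
    nc_put_vara_double(ncid, v1, start, count, &datum); /* v1 already has data... */
    nc_redef(ncid);
    nc_def_var(ncid, "v2", NC_DOUBLE, 1, dims, &v2);    /* ...when v2 is created */
    nc_def_var_chunking(ncid, v2, NC_CHUNKED, chunks2);
    nc_enddef(ncid);
    return nc_close(ncid);
}
```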
re pull request https://github.com/Unidata/netcdf-c/pull/405
re pull request https://github.com/Unidata/netcdf-c/pull/446
Notes:
1. This branch is a cleanup of the magic.dmh branch.
2. magic.dmh was originally merged, but caused problems with parallel IO.
It was re-issued as pull request https://github.com/Unidata/netcdf-c/pull/446.
3. This branch + pull request replace any previous pull requests and magic.dmh branch.
Given an otherwise valid netCDF file that has a corrupted header,
the netcdf library currently crashes. Instead, it should return
NC_ENOTNC.
Additionally, the NC_check_file_type code does not do the
forward search required by HDF5 files. It currently looks
only at file position 0, not also at 512, 1024, 2048, ... Also, it turns
out that the HDF4 magic number is assumed to always be at the
beginning of the file (unlike HDF5).
The change is localized to libdispatch/dfile.c. See
https://support.hdfgroup.org/release4/doc/DSpec_html/DS.pdf
Also, it turns out that the code in NC_check_file_type is duplicated
(mostly) in the function libsrc4/nc4file.c#nc_check_for_hdf.
This branch does the following.
1. Make NC_check_file_type return NC_ENOTNC instead of crashing.
2. Remove nc_check_for_hdf and centralize all file format checking
in NC_check_file_type.
3. Add proper forward search for HDF5 files (but not HDF4 files)
to look for the magic number at offsets of 0, 512, 1024, ... (see the sketch after this list).
4. Add test tst_hdf5_offset.sh. This tests that hdf5 files with
an offset are properly recognized. It does so by prefixing
a legal file with some number of zero bytes: 512, 1024, etc.
5. Off-topic: Added -N flag to ncdump to force a specific output dataset name.
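For reference, a sketch of the forward search (not the library's exact code): the HDF5 signature is looked for at offset 0, then 512, and each doubling thereafter.
```
#include <stdio.h>
#include <string.h>

/* Sketch only; the real implementation lives in libdispatch/dfile.c. */
static const char HDF5_MAGIC[8] = "\211HDF\r\n\032\n";

static int is_hdf5(FILE* f)
{
    char buf[8];
    long off = 0;
    for (;;) {
        if (fseek(f, off, SEEK_SET) != 0) return 0;
        if (fread(buf, 1, 8, f) != 8) return 0;  /* hit EOF: not found */
        if (memcmp(buf, HDF5_MAGIC, 8) == 0) return 1;
        off = (off == 0 ? 512L : off * 2);       /* 0, 512, 1024, 2048, ... */
    }
}
```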
The DAP code will create a real temporary file in which to store the
converted metadata for the DAP .dds or .dmr.
It was assumed that the nc_close code would reclaim the
temporary file. For DAP2, reclamation occurs in the ncio
code. For DAP4, it was assumed that the libsrc4 code would do
the reclamation, but for whatever reason, this is not happening.
Thus, in this situation, a temporary file is left in the file
system. Aside from being irritating to users, this screws up
'make distcheck'.
So the DAP4 code is fixed to ensure that the temporary file is
properly reclaimed independent of the libsrc4 code.
nc4_check_name() checks that the provided string doesn't exceed NC_MAX_NAME,
but fails to do so after calling nc_utf8_normalize(). This extra check is
needed since a caller of nc4_check_name(), like NC4_def_dim, allocates
norm_name as `char norm_name[NC_MAX_NAME + 1]`.
Fixes https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=2840
Credit to OSS-Fuzz
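A sketch of the added check (the signature of nc_utf8_normalize is assumed from include/ncutf8.h):
```
#include <stdlib.h>
#include <string.h>
#include "netcdf.h"  /* NC_MAX_NAME, NC_EMAXNAME */
#include "ncutf8.h"  /* nc_utf8_normalize(); signature assumed */

/* Sketch: re-check the length *after* normalization, since callers
 * store the result in char norm_name[NC_MAX_NAME + 1]. */
static int check_name_sketch(const char* name, char* norm_name)
{
    unsigned char* temp = NULL;
    int stat = nc_utf8_normalize((const unsigned char*)name, &temp);
    if (stat != NC_NOERR) return stat;
    if (strlen((char*)temp) > NC_MAX_NAME) { /* the previously missing check */
        free(temp);
        return NC_EMAXNAME;
    }
    strcpy(norm_name, (char*)temp);
    free(temp);
    return NC_NOERR;
}
```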
Path conversion functions were added to provide a path name converter from e.g. cygwin
paths to e.g. windows paths. This is necessary because
the shell scripts may produce cygwin paths, but the code
may have been compiled with Visual Studio. Similar issues
arise with Mingw.
At appropriate places, and if using Visual Studio or Mingw,
I added calls to the path conversion code.
Apparently I forgot to find all the places where this
conversion was needed. So this PR does the following:
1. Push the calls to the converter to the various libXXX
directories and out of libdispatch/dfile.c.
2. Add conversion calls to other parts of the code like oc2.
It also turns out that the conversion code in dapcvt.c
had a bug when handling the DAP Byte type under Visual Studio.
Notes:
1. There may still be places I missed that need to do path conversion.
2. Need to make sure that calls to e.g. H5open also use a converted path.
Current code only frees the char* text in error cases. It should
also free it in the success case.
Otherwise Valgrind reports a leak:
==28536== 64 bytes in 1 blocks are definitely lost in loss record 4 of 13
==28536== at 0x4C2DB8F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==28536== by 0xE673496: NC4_buildpropinfo (nc4info.c:239)
==28536== by 0xE67313B: NC4_put_propattr (nc4info.c:162)
==28536== by 0xE65BF35: nc4_create_file (nc4file.c:468)
==28536== by 0xE65C0AC: NC4_create (nc4file.c:564)
==28536== by 0xE608A08: NC_create (dfile.c:1773)
==28536== by 0xE607E6A: nc__create (dfile.c:511)
==28536== by 0xE607E23: nc_create (dfile.c:440)
Credit to OSS Fuzz
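The shape of the fix, sketched (not the exact nc4info.c code):
```
#include <stdlib.h>
#include "netcdf.h"

/* Sketch: free the buffer on every path, not only on error. */
static int buildprop_sketch(size_t len)
{
    int retval = NC_NOERR;
    char* text = malloc(len);
    if (text == NULL) { retval = NC_ENOMEM; goto done; }
    /* ... build and write the attribute value ... */
done:
    free(text); /* previously only reached via the error path */
    return retval;
}
```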
This relies on the HDF5 capability to
dynamically load compression filters.
Note that a compression filter is just
a subcase of the general filter mechanism.
The primary user-visible changes are as follows:
1. Add a standard header "netcdf_filter.h" that defines
the necessary API extensions.
2. Modify ncgen to support two new special attributes
"_Filter_ID" and "_Filter_Parameters" so that compression
can be turned on when creating a file using ncgen.
4. Add a detailed description of filtering support
to the user's guide; see the file filters.md.
5. Add a test case directory for this: nc_test4/filter_test.
It is fragile, so a ./configure flag (--enable-filter-test)
is defined (default disabled) to turn this test off
and avoid spurious 'make check' failures.
Note that the HDF5 documentation is not up-to-date, so
much of what is encoded here comes from examining the
actual code in the file H5PL.c in the HDF5 source code.
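For illustration, turning compression on through the new API might look like this sketch (ncid and varid are assumed to exist; filter id 307 is the registered HDF5 bzip2 filter and its single parameter is the compression level):
```
#include <netcdf.h>
#include <netcdf_filter.h>

/* Sketch: attach a dynamically loaded filter to a variable at define time. */
unsigned int params[1] = {9};  /* e.g. bzip2 level 9 */
int stat = nc_def_var_filter(ncid, varid, 307, 1, params);
if (stat != NC_NOERR) {
    /* handle error, e.g. the filter shared library was not found */
}
```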
Specific changes:
1. Add dap4 code: libdap4 and dap4_test.
Note that until the d4ts server problem is solved, dap4 is turned off.
2. Modify various files to support dap4 flags:
configure.ac, Makefile.am, CMakeLists.txt, etc.
3. Add nc_test/test_common.sh. This centralizes
the handling of the locations of various
things in the build tree: e.g. where is
ncgen.exe located. See nc_test/test_common.sh
for details.
4. Modify .sh files to use test_common.sh
5. Obsolete separate oc2 by moving it to be part of
netcdf-c. This means replacing code with netcdf-c
equivalents.
6. Add --with-testserver to configure.ac to allow
override of the servers to be used for --enable-dap-remote-tests.
7. There were multiple versions of nctypealignment code. Try to
centralize in libdispatch/doffset.c and include/ncoffsets.h.
8. Add a unit test for the ncuri code because of its complexity.
9. Move the findserver code out of libdispatch and into
a separate, self-contained program in ncdap_test and dap4_test.
10. Move the dispatch header files (nc{3,4}dispatch.h) to
.../include because they are now shared by modules.
11. Revamp the handling of TOPSRCDIR and TOPBUILDDIR for shell scripts.
12. Make use of MREMAP if available.
13. Misc. minor changes, e.g.:
- #include <config.h> -> #include "config.h"
- Add some no-install headers to /include
- extern -> EXTERNL and vice versa as needed
- misc. header cleanup
- clean up checking for misc. unix vs microsoft functions
14. Change copyright decls in some files to point to the LICENSE file.
15. Add notes to RELEASENOTES.md.
The nvars, ndims, and natts fields on the NC_HDF5_FILE_INFO struct are
never set. The nvars field is read, but since it is never written,
the value is always zero.
Update utf8proc.[ch] to use the version now
maintained by the Julia Language project
(https://github.com/JuliaLang/utf8proc/blob/master/LICENSE.md).
The license for the previous version was
unacceptable for the Debian and Ubuntu release
systems. The new version both updates the code
and addresses the license issue.
It turns out that the utf8proc software we are using
was turned over to the Julia Language developers
and the license terms changed to allow modification.
(https://github.com/JuliaLang/utf8proc/blob/master/LICENSE.md).
So the fix here is as follows:
1. Wrap the library with a fixed interface: libdispatch/dutf8.c
and include/ncutf8.h.
2. Replace the existing utf8proc code with the new version
from https://github.com/JuliaLang/utf8proc.
3. Add a couple more test cases: nc_test/tst_utf8_validate.c
and nc_test/tst_utf8_phrases.c. If/when I can find a usable
normalization test, I will incorporate that later.
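Usage of the wrapped interface might look like this sketch (signatures assumed from include/ncutf8.h):
```
#include "netcdf.h"
#include "ncutf8.h"  /* signatures assumed */

/* Sketch: validate a UTF-8 name, then obtain its normalized form;
 * the caller frees *normp. */
static int normalize_sketch(const char* name, unsigned char** normp)
{
    int stat = nc_utf8_validate((const unsigned char*)name);
    if (stat != NC_NOERR) return stat; /* not valid UTF-8 */
    return nc_utf8_normalize((const unsigned char*)name, normp);
}
```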
HDF5 does not permit a variable to have no_fill == TRUE if the variable type is variable length. This includes types NC_STRING and NC_VLEN.
The test above also excludes user-defined types; I'm not sure whether that is needed or not.
In nc4 mode, variables were ignoring the default "database" fill/no_fill mode set via a call to nc_set_fill(). This fix sets the no_fill mode on the variable to match the default database setting at the time the variable is defined.
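A sketch of the now-honored behavior (ncid and dims assumed to exist; error checking omitted):
```
/* Sketch: with this fix, a netCDF-4 variable defined while the dataset
 * default is NC_NOFILL inherits no_fill, matching classic-mode behavior. */
int old_mode, varid;
nc_set_fill(ncid, NC_NOFILL, &old_mode);        /* set the dataset default */
nc_def_var(ncid, "v", NC_INT, 1, dims, &varid); /* v now gets no_fill */
```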
Modified the provenance code to allocate the minimal space
needed for the _NCProperties attribute in the file. Basically
required using malloc in the provenance code and in ncdump.
Otherwise should cause no externally visible effects.
Also removed the ENABLE_FILEINFO from configure.ac since
the provenance code is no longer optional.
This modifies the previous change to be more pedantically correct. It should always be an NC_EINVALCOORDS error if start exceeds `fdims[d2]`; however, if start equals `fdims[d2]`, then it is only an error if count is non-zero.
The following code is in nc4hdf.c, function `nc4_put_vara`.
```
/* Check dimension bounds. Remember that unlimited dimensions can
* put data beyond their current length. */
for (d2 = 0; d2 < var->ndims; d2++)
{
dim = var->dim[d2];
assert(dim && dim->dimid == var->dimids[d2]);
if (!dim->unlimited)
{
if (start[d2] >= (hssize_t)fdims[d2])
BAIL_QUIET(NC_EINVALCOORDS);
if (start[d2] + count[d2] > fdims[d2])
BAIL_QUIET(NC_EEDGE);
}
}
```
There is an issue when the process with the highest rank has zero items to output. As an example, if I have 4 mpi processes which are each writing the following amount of data:
* rank 0: 0 items
* rank 1: 2548 items
* rank 2: 4352 items
* rank 3: 0 items.
I will define the variable to have a length of 6900 items (0 + 2548 + 4352 + 0). When I am outputting data to the variable, each rank will call nc_put_vara_longlong with the following start and count values:
* rank 0: start = 0, count = 0
* rank 1: start = 0, count = 2548
* rank 2: start = 2548, count = 4352
* rank 3: start = 6900, count = 0.
In each case, the `start` for rank N is equal to `start` for rank N-1 + `count` for rank N-1. This all works ok until the highest rank is writing 0 items. In that case, the `start` value for that rank is equal to the total size of the variable and the check in the code fragment shown above fails since `start[] == fdims[]`.
This could be fixed in the application code by checking whether the `count` is zero and if so, then set `start` to 0 also, but I think that is a kluge that should not be required.
Note that this test appears three times in this file. In one case, the check for non-zero count already exists, but not in the other two. This pull request adds the check to the other two tests.
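Per the description above, the adjusted check would look something like this sketch (against the fragment quoted earlier):
```
/* Sketch: start == fdims[d2] is allowed when nothing is written. */
if (start[d2] > (hssize_t)fdims[d2] ||
    (start[d2] == (hssize_t)fdims[d2] && count[d2] > 0))
   BAIL_QUIET(NC_EINVALCOORDS);
```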
The H5Aexists hdf5 function does the same job as the manually coded loop with much less code and fewer function calls.
Also, the H5Aopen_idx and H5Aget_num_attrs functions are deprecated.
If H5Aopen_idx on line 1964 fails, then attid will be < 0. The BAIL will goto exit at line 1989, and then the test of "if (attid ...)" at line 1995 will pass (attid != 0) and call H5Aclose(attid) with a negative attid. There is a similar issue for spaceid.
The result of the function is probably the same since there is a failure somewhere, but it is more difficult to track down if it looks like the failure is happening in the wrong place.
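A sketch of the guard described above:
```
exit:
   /* Sketch: only close handles that were actually opened; a failed
    * H5Aopen_idx/H5Screate leaves a negative hid_t behind. */
   if (attid > 0) H5Aclose(attid);
   if (spaceid > 0) H5Sclose(spaceid);
```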
The var name hash was not being computed. Fix in nc4file.c.
Not sure how this ever worked for any variable.
What is also weird is that the dim hash is
apparently being computed.
re: github netcdf-c issue #271
This occurs for several reasons, including:
1. using H5Aopen_name instead of H5Aexists to test if an attribute exists.
2. using H5Eset_auto instead of H5Eset_auto2.
There are probably others that will have to be extinguished as encountered.
P.S. I hope I did not overdo this and kill too much.
The hash field for phony dimensions was not being set
(in nc4hdf.c). Also added test case (nc_test4/?).
Note that I searched for other similar failures and
did not find any, but I may have missed them.
If in parallel mode and H5Fcreate fails, then the h5->comm field may/will point to MPI_COMM_NULL instead of a valid communicator. If that is passed to MPI_Comm_free, then the code will core dump inside MPI_Comm_free and not return a valid status to the caller routine. If the communicator is checked first, then execution proceeds back up the call tree and the caller can report a better error message about the failed file create.
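The guard amounts to something like this sketch:
```
/* Sketch: avoid MPI_Comm_free on MPI_COMM_NULL after a failed H5Fcreate. */
if (h5->comm != MPI_COMM_NULL)
   MPI_Comm_free(&h5->comm);
```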
Charlie Zender noted that we forgot to define what happens for
various netcdf API attribute operations, notably nc_inq_att()
and nc_get_att().
So, I added a list of legal and illegal api calls for the provenance
attributes in docs/attribute_conventions.md.
I also added more test cases to ncdump/tst_fileinfo.c to verify
this, and fixed the resulting errors.
This consists of a persistent attribute named
_NCProperties plus two computed attributes
_IsNetcdf4 and _SuperblockVersion.
See the 'Provenance Attributes' section
of docs/attribute_conventions.md for details.
The var struct has a 'dim' field which was not being used.
Instead, the dimids field would always be used to search for the dim
with the matching dimid. For databases with large numbers of dims,
this could be a significant time sink.
Modified the code to always set var->dim[i] when var->dimids[i] was
set (if the dim existed at that point). Then the var->dim field is
used instead of a var->dimids search whenever requested.
All var->dim accesses are protected by asserts that verify
they are non-null and that var->dim[i]->dimid == var->dimids[i].
In non-classic netcdf-4 models, it is allowable to have
large numbers of dims and vars. In many operations, the
entire list of dims or vars is searched for a dim/var matching
a specific name which results in *lots* of strncmp or strcmp
calls.
If we add a hash field to the var and dim structs similar to what
has already been done for the netcdf-3 formats, then we can hash the
name being searched for and numerically compare that value with
the var/dim hash value. If they match, then do a more expensive
strncmp call to ensure that the names truly match.
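The lookup pattern described above, sketched with illustrative types (the real structs live in libsrc4, and hash_name is a hypothetical hash function):
```
#include <stdint.h>
#include <string.h>
#include <netcdf.h>  /* NC_MAX_NAME */

/* Sketch with illustrative types, not the library's actual structs. */
typedef struct Dim {
    struct Dim* next;
    uint32_t hash;               /* hash of the dim's name */
    char name[NC_MAX_NAME + 1];
} Dim;

static Dim* find_dim(Dim* list, const char* name,
                     uint32_t (*hash_name)(const char*))
{
    uint32_t hash = hash_name(name);       /* cheap integer compare first */
    for (Dim* d = list; d != NULL; d = d->next)
        if (d->hash == hash && strncmp(d->name, name, NC_MAX_NAME) == 0)
            return d;                      /* strncmp only on a hash match */
    return NULL;
}
```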
1. Added a check to libsrc4/nc4var.c#nc_def_var_extra to
verify that no specified chunk size is greater than
the dimension size (see the sketch after this list).
2. Added test to nc_test4/tst_chunks.c
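From the API side the new check means something like this sketch (ncid, varid, and DIM_LEN assumed to exist; NC_EBADCHUNK is the expected error in current headers):
```
/* Sketch: a chunk size exceeding a fixed dimension's size is now rejected. */
size_t chunks[1] = {2 * DIM_LEN};  /* larger than the dimension */
int stat = nc_def_var_chunking(ncid, varid, NC_CHUNKED, chunks);
/* stat is now expected to be an error, e.g. NC_EBADCHUNK */
```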
re: esupport ZCL-340681 and CPW-270700
HDF4 supports compression (and chunking)
but the chunking was not being recorded
for HDF4 files. So, I modified the necessary
files to support HDF4 chunking.
It turns out that HDF4 supports chunking
(and compression). However, the existing
HDF4 code does not support it.
So add HDF4 support for chunking.
Also add a test case.
error occurs after an "exit:" label.
Corrected a dozen Coverity errors (mainly allocation issues, along with a few
other things):
711711, 711802, 711803, 711905, 970825, 996123, 996124, 1025787,
1047274, 1130013, 1130014, 1139538
Refactored internal fill-value code to correctly handle string types, and
especially to allow NULL pointers and null strings (i.e. "") to be
distinguished. The code now avoids partially aliasing the two together
(which only happened on the 'write' side of things and wasn't reflected on
the 'read' side, adding to the previous confusion).
Probably still weak on handling fill-values of variable-length and compound
datatypes.
Refactored the recursive metadata reads a bit more, to process HDF5 named
datatypes and datasets immediately, avoiding chewing up memory for those
types of objects, etc.
Finished uncommenting and updating the nc_test4/tst_fills2.c code (as I'm
proceeding alphabetically through the nc_test4 code files).
Add a new function called nc_inq_format_extended that
returns more detailed format information (vis-a-vis
nc_inq_format) about an open dataset.
Note that the netcdf API will present the file as if it had
the format specified by nc_inq_format. The true file
format, however, may not even be a netcdf file; it might be
DAP, HDF4, or PNETCDF, for example. This function returns
that true file type. It also returns the effective mode for
the file.
signature: nc_inq_format_extended(int ncid, int* formatp, int* modep)
where
* ncid is the NetCDF ID from a previous call to nc_open() or
nc_create().
* formatp is a pointer to a location for returned true format.
* modep is a pointer to a location for returned mode flags.
Refer to the actual list in the file netcdf.h to see the
currently defined set.
Also added test cases (tst_formatx*).
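A usage sketch (the NC_FORMATX_* names are those in current netcdf.h):
```
/* Sketch: distinguish the true underlying format from the API view. */
int format, mode, stat;
if ((stat = nc_inq_format_extended(ncid, &format, &mode)) != NC_NOERR)
   return stat;
if (format == NC_FORMATX_NC_HDF5)
   printf("underlying storage is HDF5\n");
```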
the rest of the dimension queries. Corrected an error in the library where
types used in sub-group variables, but added to the file after the sub-group
was created, weren't available for those variables to use. Started cleaning up
the test suite and un-commenting tests that were commented out (got up to
nc_test4/tst_fills2.c, alphabetically) before running into an error in HDF5.
to clean up resources properly on failure.
Refactored doubly-linked list code for objects in the libsrc4 directory,
cleaning up the add/del routines, breaking out the common next/prev
pointers into a struct and extracting the add/del operations on them,
changed the list of dims to add new dims in the same order as the other
types, made all add routines able to optionally return a pointer to the
newly created object.
Removed some dead code (pg_var(), nc4_pg_var1(), nc4_pg_varm(), misc. small
routines, etc.).
Fixed fill value handling for string types in nc4_get_vara().
Changed many malloc()+strcpy() pairs into calls to strdup().
Cleaned up misc. other minor Coverity issues.
many cleanups to fix compiler warnings, streamline iteration over objects
in HDF5 file when opening the file, and generally straightening out the code
to be cleaner and simpler.
Tested on Mac OS/X with gcc 4.8 and OpenMPI (which uses clang).
an unlimited-dimension variable, and different processes don't agree on
whether to extend the underlying HDF5 dataset, or don't agree on the amount
to extend the dataset.
group renaming. The primary API
is 'nc_rename_grp(int grpid, const char* name)'.
No test cases provided yet.
This also required adding a rename_grp entry
to the dispatch tables.
Also added the capability for netCDF-4 to write and read NIL
values for string type attributes and variables, so these can be read
if used in HDF5 files.
Included are additions to CMakeLists files to reflect the new tests.
unlimited dimension hanging. Extending the size of an unlimited
dimension in HDF5 must be a collective operation, so now an error is
returned if trying to extend in independent access mode.
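In application code this means switching such variables to collective access before the write, e.g.:
```
/* Sketch: writes that extend an unlimited dimension must be collective. */
int stat = nc_var_par_access(ncid, varid, NC_COLLECTIVE);
```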
Quincey's bug fixes for parallel build portability, particularly
OpenMPI on MacOS-X.
netCDF classic or 64-bit offset files that have a UINT32_MAX flag for
large last record size of a variable that has values larger than 1
byte. This problem had previously been fixed for *writing* such data,
but was only tested with an ncbyte variable. Fixed the test to
demonstrate the problem and the fix.
More updates to chunking documentation, plus cosmetic fixes for some
"--option=" documentation that doxygen turns into an mdash.
So, it turns out that just freeing
the nc4_info is not enough;
the root group must also be reclaimed.
So, it appears the best approach
is to invoke an abort on the
failed file.
coordinate variables and their associated dimensions occurs in any
subgroup, rather than just the root group. If this occurs, the
variable attribute "_Netcdf4Dimid" is created for every dimension
scale.
Also add a test for this bug fix in tst_dims3.nc, based on Pedro
Vicente's demo.
for JIRA issue NCF-241.
This is only temporary
until I can make pnetcdf
operate as a separate dispatch table.
Also, fix nc_test4/tst_pnetcdf
to open with nc_open_par;
this is necessary because a pnetcdf-created
file cannot be opened
as a netcdf classic file.
Fix Jira issue NCF-29 (https://bugtracking.unidata.ucar.edu/browse/NCF-29):
making the netCDF-4 library ignore HDF5 datasets and attributes which have
datatypes (such as references) that it doesn't understand.
Tested on:
Mac OSX/64 10.8.2 (amazon) w/--enable-netcdf-4 --enable-extra-tests --enable-extra-example-tests
--disable-shared --enable-logging
1. In nc4hdf.c, had a variable declaration located such
that Visual Studio complained and threw an error. Moved
to head of function.
2. Visual Studio was complaining about variable declarations
being made after OCVERIFY/OCDEREF macros.
CMake related changes in CMakeLists.txt files,
cmake_config.h.in. Other changes relate to
Windows-specific issues, and changes made
when regenerating generated source files.
with parallel-netcdf, to fully implement parallel-netcdf support for
other functions, and to prevent a hang in hdf5 from an early return in
an nc4_put_vara() call. Also fixed an nccopy bug when
nc_inq_var_deflate() returns a deflate_level of 0, but says the variable
is deflated.
o Changed variable names 'typeid' to 'typeid1' to avoid a namespace conflict
in visual studio.
o Cleaned up a handful of warnings in Visual Studio.
o Addressed a few Coverity-discovered issues.
o Made changes to CMake-based builds.
Primarily focused on memory errors falling into a couple of different types:
1) Static overrun errors.
2) Dereferences of uninitialized memory.
make distcheck works after applying these fixes, and Coverity no longer sees an issue, so hopefully they are properly resolved.
contain as little file-type specific info as possible. It
modifies especially libsrc so that all of the netcdf-3 data
that used to be in struct NC is now kept in a separate chunk
of data pointed to by the struct NC. This makes all of
the current protocols consistent: netcdf-3, netcdf-4, and dap.
reported by static analysis, including a memory leak in ncdump and a missing
size_t cast for the chunk cache. Fixed various doc problems, including
byte vs. char issues, missing NC_UBYTE in the type list, and a needed link
to the "Building with Windows" page.