Commit Graph

354 Commits

Author SHA1 Message Date
Dennis Heimbigner
ddfb6d6279 Make sure that the _NCProperties attr is null-terminated and stored as such 2016-08-08 21:54:23 -06:00
Dennis Heimbigner
0cf1e2c49f re: Github issue netcdf-c 300
Modified provenance code to allocate the minimal space
needed for _NCProperties attribute in file.  Basically
required using malloc in the provenance code and in ncdump.
Otherwise should cause no externally visible effects.
Also removed the ENABLE_FILEINFO from configure.ac since
the provenance code is no longer optional.
2016-08-08 09:24:19 -06:00
Greg Sjaardema
c7ccdfa543 More pedantically correct check
This modifies the previous change to be more pedantically correct.  It should always be an NC_EINVALCOORDS error if start exceeds fdims[d2]; however, if start equals fdims[d2], then it is only an error if count is non-zero.
2016-07-21 09:30:18 -06:00
Greg Sjaardema
9290b31c9d Fix variable bounds check for parallel output
The following code is in nc4hdf.c, function `nc4_put_vara`.

```
  /* Check dimension bounds. Remember that unlimited dimensions can
   * put data beyond their current length. */
  for (d2 = 0; d2 < var->ndims; d2++)
    {
      dim = var->dim[d2];
      assert(dim && dim->dimid == var->dimids[d2]);
      if (!dim->unlimited)
        {
          if (start[d2] >= (hssize_t)fdims[d2])
            BAIL_QUIET(NC_EINVALCOORDS);
          if (start[d2] + count[d2] > fdims[d2])
            BAIL_QUIET(NC_EEDGE);
        }
    }
```

There is an issue when the process with the highest rank has zero items to output.  As an example, suppose I have four MPI processes, each writing the following amount of data:
 * rank 0: 0 items
 * rank 1: 2548 items
 * rank 2: 4352 items
 * rank 3: 0 items.

I will define the variable to have a length of 6900 items (0 + 2548 + 4352 + 0).  When I am outputting data to the variable, each rank will call nc_put_vara_longlong with the following start and count values:
 * rank 0: start = 0, count = 0
 * rank 1: start = 0, count = 2548
 * rank 2: start = 2548, count = 4352
 * rank 3: start = 6900, count = 0.

In each case, the `start` for rank N is equal to the `start` for rank N-1 plus the `count` for rank N-1.  This all works fine until the highest rank is writing 0 items.  In that case, the `start` value for that rank is equal to the total size of the variable, and the check in the code fragment shown above fails since `start[] == fdims[]`.

This could be fixed in the application code by checking whether the `count` is zero and, if so, setting `start` to 0 as well, but I think that is a kludge that should not be required.

Note that this test appears three times in this file.  In one case, the check for non-zero count already exists, but not in the other two.  This pull request adds the check to the other two tests (see the sketch below).
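
A minimal sketch of the adjusted check, in the "more pedantically correct" form described in the follow-up commit above (variable names as in the fragment quoted earlier):

```
  if (!dim->unlimited)
    {
      /* start beyond the dimension extent is always invalid; start
       * equal to the extent is invalid only if this rank actually
       * writes data (count > 0). */
      if (start[d2] > (hssize_t)fdims[d2] ||
          (start[d2] == (hssize_t)fdims[d2] && count[d2] > 0))
        BAIL_QUIET(NC_EINVALCOORDS);
      if (start[d2] + count[d2] > fdims[d2])
        BAIL_QUIET(NC_EEDGE);
    }
```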
2016-07-21 09:30:18 -06:00
Ward Fisher
c1ec950d70 Merge branch 'extent-llu' of https://github.com/brtnfld/netcdf-c into consolidate-gh 2016-07-15 14:42:49 -06:00
Ward Fisher
d419b53925 Merge branch 'patch-1' of https://github.com/gsjaardema/netcdf-c into consolidate-gh 2016-07-15 14:40:32 -06:00
Greg Sjaardema
fcb1455b28 Update nc4hdf.c
If H5Aopen_idx on line 1964 fails, then attid will be < 0.  The BAIL will goto exit at line 1989, and then the test of "if (attid ...)" at line 1995 will pass (attid != 0) and call H5Aclose(attid) with a negative attid.  There is a similar issue for spaceid.

The result of the function is probably the same since there is a failure somewhere, but it is more difficult to track down if it looks like the failure is happening in the wrong place.
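
A sketch of the guarded cleanup, assuming attid and spaceid are initialized to -1 before the first BAIL can fire: HDF5 ids are negative on failure, so the test must be for a positive id, not a non-zero one.

```
exit:
  /* Only close handles that were actually opened; a failed
   * H5Aopen_idx or H5Aget_space leaves the id negative. */
  if (attid > 0)
    H5Aclose(attid);
  if (spaceid > 0)
    H5Sclose(spaceid);
```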
2016-07-12 08:59:01 -04:00
Greg Sjaardema
c361938c8e Fix att_name size
There was a mismatch between the allocated size of att_name and the size that H5Aget_name was told the buffer was.
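A sketch of the corrected pairing, assuming the buffer is sized with NC_MAX_NAME: the length passed to H5Aget_name (which includes room for the NUL terminator) must match the allocation.

```
char att_name[NC_MAX_NAME + 1];

/* Tell H5Aget_name the true buffer size, terminator included. */
if (H5Aget_name(attid, NC_MAX_NAME + 1, att_name) < 0)
  BAIL(NC_EHDFERR);
```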
2016-07-12 08:12:23 -04:00
Scot Breitenfeld
55bf63d35c Merge branch 'master' into extent-llu 2016-06-21 13:07:20 -05:00
M. Scot Breitenfeld
2bf233c9d5 This patch changes the algorithm for determining the extended size of a dataset in parallel to pass a variable of type unsigned long long to MPI_Allreduce. Despite the comment in the code on this line (removed in this patch), the current usage is not correct. For example, consider if process 0 has an extend size of 2^32 (0x100000000) and process 2 has an extend size of 1 (0x1). The current algorithm will compute the max of each 4-byte segment and then combine these into an 8-byte number, yielding a max of (2^32)+1 (0x100000001), when it should simply be 2^32.
N. Fortner
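A sketch of the corrected reduction (xtend_size, local_extent, and the communicator field are illustrative names): the full 64-bit extent is reduced in one call instead of reducing two 32-bit halves separately.

```
unsigned long long xtend_size = local_extent; /* hypothetical per-rank extent */
unsigned long long max_size = 0;

/* One 8-byte MPI_MAX reduction; no byte-segment recombination. */
MPI_Allreduce(&xtend_size, &max_size, 1, MPI_UNSIGNED_LONG_LONG,
              MPI_MAX, h5->comm);
```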
2016-06-21 13:04:15 -05:00
Ward Fisher
3e24dcd575 Removed some dead assignments reported by clang. 2016-06-20 15:28:46 -06:00
Ward Fisher
46c63344f7 Added comments where needed. 2016-06-14 10:47:24 -06:00
Ward Fisher
a499bf1ed8 Extending when attributes are copied. 2016-06-14 10:29:14 -06:00
Ward Fisher
13b088f49f Moved fix out to a separate function so that we can hopefully address a few other NCO-reported issues. 2016-06-14 10:22:06 -06:00
Ward Fisher
1ebb104f74 Tentatively fixed https://github.com/Unidata/netcdf-c/issues/239 but the test needs to be extended. 2016-06-10 17:03:08 -06:00
Ward Fisher
2edc4ce64a Added a fix as contributed by Kent at HDF group for a collective I/O, parallel issue. 2016-06-09 14:16:20 -06:00
dmh
5bfdf54263 The name hash for hdf4 variables was
not being computed. Fix in nc4file.c.
Not sure how this ever worked for any variable.
What is also weird is that the dim hash is
apparently being computed.
2016-06-01 15:20:36 -06:00
Dennis Heimbigner
835511eaeb HDF5 is generating unnecessary error messages when netcdf4 logging is enabled
re: github netcdf-c issue #271

This occurs for several reasons, including:
1. using H5Aopen_name instead of H5Aexists to test whether an attribute exists.
2. using H5Eset_auto instead of H5Eset_auto2.
There are probably others that will have to be extinguished as encountered.
P.S. Hope I did not overdo this and kill too much.
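
A sketch of the quieter pattern, assuming grpid and att_name are in scope: H5Aexists reports absence through its return value instead of pushing onto the HDF5 error stack, and H5Eset_auto2 is the non-deprecated call for suppressing automatic error printing.

```
/* Suppress HDF5's automatic error reporting (modern API). */
H5Eset_auto2(H5E_DEFAULT, NULL, NULL);

/* Negative: real error; zero: attribute absent; positive: present. */
htri_t exists = H5Aexists(grpid, att_name);
if (exists < 0)
  BAIL(NC_EHDFERR);
if (exists > 0)
  attid = H5Aopen_name(grpid, att_name);
```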
2016-05-27 10:08:01 -06:00
Dennis Heimbigner
8c1f8f57ee re: https://github.com/Unidata/netcdf-c/issues/269
The hash field for phony dimensions was not being set
(in nc4hdf.c). Also added test case (nc_test4/?).
Note that I searched for other similar failures and
did not find any, but I may have missed them.
2016-05-24 19:37:21 -06:00
Greg Sjaardema
4d037a1f62 Check for valid MPI_Comm before freeing
If in parallel mode and H5Fcreate fails, then the h5->comm field may point to MPI_COMM_NULL instead of to a valid communicator.  If that is passed to MPI_Comm_free, the code will core dump down in the MPI_Comm_free function and not return a valid status to the caller.  If the communicator is checked first, execution proceeds back up the call tree and the caller can report a better error message about the failed file create.
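A sketch of the guard, assuming h5->comm was duplicated during file open: MPI_Comm_free aborts rather than returning an error when handed MPI_COMM_NULL.

```
/* Free the duplicated communicator only if it is valid. */
if (h5->comm != MPI_COMM_NULL)
  MPI_Comm_free(&h5->comm);
```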
2016-05-24 10:50:00 -06:00
Ward Fisher
fcca7ae57d Merge branch 'master' into provfix 2016-05-16 12:39:15 -06:00
Dennis Heimbigner
4fa1470241 re: github issue https://github.com/Unidata/netcdf-c/issues/265
Charlie Zender noted that we forgot to define what happens for
various netcdf API attribute operations, notably nc_inq_att()
and nc_get_att().
So, I added a list of legal and illegal api calls for the provenance
attributes in docs/attribute_conventions.md.
I also added more test cases to ncdump/tst_fileinfo.c to verify this,
and fixed the resultant errors.
2016-05-15 18:03:04 -06:00
Dennis Heimbigner
7e0db68dce Finally get around to removing all that
obsolete pnetcdf-related code in libsrc4.
2016-05-14 22:31:41 -06:00
Ward Fisher
e4d79dbb6c Removed an unused, unneeded struct, NCdata, that was causing problems on Visual Studio because it was empty. 2016-05-10 11:11:50 -06:00
Dennis Heimbigner
11a259ad86 Add provenance info for netcdf-4 files.
This consists of a persistent attribute named
_NCProperties plus two computed attributes
_IsNetcdf4 and _SuperblockVersion.
See the 'Provenance Attributes' section
of docs/attribute_conventions.md for details.
2016-05-07 14:32:07 -06:00
Ward Fisher
493a6e5d62 Found a pre-existing call to H5Pset_libver_bounds and modified it so that the generated files would be created without the 1.10-specific features. 2016-04-08 21:36:08 +00:00
dmh
764a1c40a3 ckp 2016-04-06 19:51:40 -06:00
Ward Fisher
0bb9856880 Merge branch 'upstream' of https://github.com/gsjaardema/netcdf-c into gs-pulls 2016-03-04 14:58:10 -07:00
Greg Sjaardema
4ccebf25b5 Use the dim field of var instead of finding the dim from var->dimids.
The var struct has a 'dim' field which was not being used.
Instead, the dimids field would always be used to search for the dim
with the matching dimid.  For files with large numbers of dims,
this could be a significant time sink.

Modified code to always set var->dim[i] when var->dimids[i] was
set (if the dim existed at that point).  Then the var->dim field is
used instead of searching via var->dimids whenever the dim is requested.

All var->dim accesses are protected by asserts that verify it is
non-null and that var->dim[i]->dimid == var->dimids[i].
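
A sketch of how the dim pointer might be recorded at definition time so later accesses need no search; the nc4_find_dim helper and its signature are an assumption about the internal API.

```
var->dimids[i] = dimid;

/* Cache the dim pointer next to its id; later code reads var->dim[i]
 * directly instead of searching the group for a matching dimid. */
if ((retval = nc4_find_dim(grp, dimid, &var->dim[i], NULL)))
  return retval;
assert(var->dim[i] && var->dim[i]->dimid == var->dimids[i]);
```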
2016-03-04 10:45:36 -07:00
Greg Sjaardema
1a84a6a99e Add hash field to dim and var to facilitate fast name compare
In non-classic netcdf-4 models, it is allowable to have
large numbers of dims and vars.  In many operations, the
entire list of dims or vars is searched for a dim/var matching
a specific name, which results in *lots* of strncmp or strcmp
calls.

If we add a hash field to the var and dim structs similar to what
has already been done for the netcdf-3 formats, then we can hash the
name being searched for and numerically compare that value with
the var/dim hash value.  If they match, then do a more expensive
strncmp call to ensure that the names truly match.
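
A sketch of the fast-path comparison; hash_fast and the list-traversal field names are illustrative stand-ins, not the actual implementation.

```
uint32_t nhash = hash_fast(name, strlen(name));

for (var = grp->var; var; var = var->l.next)
  {
    /* Cheap numeric compare first; fall through to the expensive
     * string compare only on a hash match. */
    if (var->hash == nhash && !strncmp(var->name, name, NC_MAX_NAME))
      break; /* true name match */
  }
```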
2016-03-03 13:18:31 -07:00
Ward Fisher
332d71fd1a Merge branch 'master' into gh223 2016-02-19 15:32:50 -07:00
Ward Fisher
d8b65ccea1 Fix for https://github.com/Unidata/netcdf-c/issues/223 2016-02-19 15:05:39 -07:00
Ward Fisher
db84f39adc Tentative, robust fix for https://github.com/Unidata/netcdf-c/issues/221 that does not immediately introduce other issues into ncdump. Broader validation pending. 2016-02-18 15:42:03 -07:00
Ward Fisher
9791b1a397 Tentative fix for initial issue at http://github.com/Unidata/netcdf-c/issues/221 . investigating knock-on issues now. 2016-02-18 14:46:16 -07:00
Dennis Heimbigner
45572f5971 Fix github issue: https://github.com/Unidata/netcdf-c/issues/208
Return an error when specifying deflation (compression) or fletcher32 on
a file created for parallel IO in netcdf-4.
2016-02-01 16:15:58 -07:00
Dennis Heimbigner
b5ba424793 Clean up the handling of hdf5 initialization by
creating an nc4_hdf5_initialize(void) function
plus nc4_hdf5_initialized flag.
Also fix a potential null-pointer dereference in nc4internal.c
2016-01-28 16:19:38 -07:00
Ward Fisher
bb00562779 Addressed a static-analysis issue. 2015-12-31 11:47:39 -07:00
Ward Fisher
473259b772 Corrected issue where overwriting an attribute of type NC_CHAR with NC_STRING would result in dangling data. 2015-11-11 11:32:12 -07:00
Ward Fisher
c1210f4020 Merge branch 'master' into cdf5-sync-master 2015-11-09 13:45:11 -07:00
dmh
5ad26bb68f Fix github issue #140
1. Added a check to libsrc4/nc4var.c nc_def_var_extra to
   ensure that no specified chunk size is greater than
   the dimension size (see the sketch below).

2. Added test to nc_test4/tst_chunks.c
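
A minimal sketch of the added validation, assuming chunksizes[] holds the caller-specified sizes from nc_def_var_chunking:

```
for (d = 0; d < var->ndims; d++)
  {
    /* A chunk may not be larger than the (fixed) dimension it tiles;
     * unlimited dimensions are exempt since they can grow. */
    if (!var->dim[d]->unlimited && chunksizes[d] > var->dim[d]->len)
      return NC_EBADCHUNK;
  }
```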
2015-11-07 20:29:16 -07:00
Ward Fisher
612b35a84c Merge branch 'master' into cdf-5, in preparation for merging the CDF-5 functionality into the master branch. This will be the key new feature for netcdf 4.4.0. 2015-11-05 13:40:35 -07:00
Ward Fisher
519a56019f Merge branch 'fix-typos' of https://github.com/tbeu/netcdf-c into tbeu-fix-typos 2015-10-16 14:16:09 -06:00
dmh
49597a64af merge-squash 2015-10-09 10:12:11 -06:00
Russ Rew
d3d442537d Fix 1D variables with an unlimited dimension taking DEFAULT_CHUNK_SIZE (4MiB), by default, in netCDF-4 files 2015-09-29 13:58:51 -06:00
dmh
5b89b22021 missing include file 2015-09-18 11:11:57 -06:00
dmh
0a7ff043a2 re: Jira NCF-320
Partially resolve by making
string variables and attributes use
UTF-8 encoding.
Normalization is not necessarily fixed,
however.
2015-08-20 15:53:48 -06:00
tbeu
e2820e4d8a Fix common typos
Detected by https://github.com/vlajos/misspell_fixer
2015-08-20 11:42:05 +02:00
Ward Fisher
bb42e4639e Addressed conflict between master, mem2. 2015-06-02 15:03:12 -06:00
Ward Fisher
cf6d87f1dc [NCF-332] Addressed the issue in get_netcdf_type_from_hdf4() by adding case statements explicitly for the little-endian hdf4 values as defined by http://www.hdfgroup.org/training/HDFtraining/UsersGuide/Fundmtls.fm3.html. 2015-05-28 17:27:57 -06:00
Ward Fisher
afa157f918 Started adding checks for little-endian HDF data types [NCF-332] 2015-05-28 16:41:48 -06:00