Commit Graph

387 Commits

Author SHA1 Message Date
Wei-keng Liao
1508bd5529 Fix the previous commit, which missed the case of re-opening a file 2017-03-29 12:16:25 -05:00
Wei-keng Liao
73ccb364a9 To solve the NC_ELATEFILL error for NetCDF-4 files, mark all variables as written at enddef. 2017-03-24 20:55:00 -05:00
Stephan Hoyer
4dd8e380c1 Switch NC_CHAR on netCDF4 to use ASCII
Fixes GH298
2017-03-13 20:12:08 -07:00
Ward Fisher
2558fd6f9f Merge branch 'att_callbk' of https://github.com/brtnfld/netcdf-c into gh276 2017-03-06 12:56:33 -07:00
Greg Sjaardema
cbb9448ab0 Remove unused fields from struct
The nvars, ndims, and natts fields on the NC_HDF5_FILE_INFO struct are
never set.  The nvars field is read, but since it is never written,
the value is always zero.
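For illustration, a minimal sketch of the dead fields being dropped (member names are from the message; the surrounding struct contents are elided):

```
typedef struct NC_HDF5_FILE_INFO
{
   /* ... live members unchanged ... */
   int nvars;   /* removed: read in one place, never written, so always 0 */
   int ndims;   /* removed: never set */
   int natts;   /* removed: never set */
} NC_HDF5_FILE_INFO_T;
```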
2017-03-06 11:14:00 -07:00
Greg Sjaardema
473529d199 Remove unused ndims from grp struct 2017-03-06 11:14:00 -07:00
Dennis Heimbigner
47daf33074 Resolves GitHub issue https://github.com/Unidata/netcdf-c/issues/349.
Update utf8proc.[ch] to use the version now
maintained by the Julia Language project
(https://github.com/JuliaLang/utf8proc).

The utf8proc software we were using was turned over
to the Julia Language developers, and the license terms
changed to allow modification
(https://github.com/JuliaLang/utf8proc/blob/master/LICENSE.md).
The license for the previous version was unacceptable
for the Debian and Ubuntu release systems; the new
version both updates the code and addresses the
license issue.

So the fix here is as follows:
1. Wrap the library with a fixed interface: libdispatch/dutf8.c
   and include/ncutf8.h (a sketch of such an interface follows this list).
2. Replace the existing utf8proc code with the new version
   from https://github.com/JuliaLang/utf8proc.
3. Add a couple more test cases: nc_test/tst_utf8_validate.c
   and nc_test/tst_utf8_phrases.c.  If/when I can find a usable
   normalization test, I will incorporate it later.
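A plausible shape for that fixed interface, as a sketch only (the declarations below are assumptions inferred from the file names, not copied from include/ncutf8.h):

```
/* include/ncutf8.h (sketch): one stable wrapper so the rest of the
 * library never includes utf8proc headers directly */
#ifndef NCUTF8_H
#define NCUTF8_H

/* Return NC_NOERR if 'name' is valid UTF-8, an error code otherwise. */
extern int nc_utf8_validate(const unsigned char* name);

/* NFC-normalize a UTF-8 string into *normalp; the caller frees it. */
extern int nc_utf8_normalize(const unsigned char* utf8,
                             unsigned char** normalp);

#endif /* NCUTF8_H */
```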
2017-02-16 14:27:54 -07:00
Ward Fisher
0d2727d9da Merging pull request from gsjaardema; see https://github.com/Unidata/netcdf-c/pull/335 for more information. 2017-01-30 12:45:50 -07:00
Ward Fisher
34161aab69 Added fixes for Visual Studio 10 2017-01-10 13:54:09 -07:00
Greg Sjaardema
a2fdfa04ab Eliminate an MPI_Allreduce in many cases 2016-12-09 09:52:15 -07:00
Greg Sjaardema
72c1948980 Move metadata ops calls 2016-12-01 13:35:16 -07:00
Greg Sjaardema
e0269d6cac Add hdf5 collective metadata api detection to cmake build 2016-12-01 13:35:10 -07:00
Greg Sjaardema
39c90e7b76 Enable collective metadata operations for hdf5-1.10; not protected yet 2016-12-01 13:34:58 -07:00
Ward Fisher
767a5b372c Corrected an issue reported as part of the pull request at https://github.com/Unidata/netcdf-c/pull/328 2016-11-28 13:31:43 -07:00
Ward Fisher
05ceb8d471 Merge branch 'nc4-var-array' of https://github.com/gsjaardema/netcdf-c into gh328 2016-11-28 13:10:07 -07:00
Ward Fisher
24a4a230e6 Updated debugging script, fixed a problem in logging type size. 2016-11-16 12:18:20 -07:00
Greg Sjaardema
b9c50aec89 Create var correctly for hdf4 files 2016-11-16 10:37:37 -07:00
Greg Sjaardema
a55d96eba1 Clean-up build after changes -- remove unused variables 2016-11-16 08:45:28 -07:00
Greg Sjaardema
207a2ee4f9 Fix stdc violation 2016-11-16 08:45:23 -07:00
Greg Sjaardema
d16f5a8842 Whitespace cleanup 2016-11-16 08:45:19 -07:00
Greg Sjaardema
8698e57424 Compile with c89 -- eliminate init in for-loop 2016-11-16 08:45:15 -07:00
Greg Sjaardema
c84b475ccf Remove var linked list 2016-11-16 08:45:10 -07:00
Greg Sjaardema
dee1baca8e Store vars in array instead of linked list (linked list still active) 2016-11-16 08:45:06 -07:00
Ward Fisher
f4ac2f827d Merge branch 'patch-4' of https://github.com/gsjaardema/netcdf-c into gh290 2016-11-09 12:27:48 -07:00
Ward Fisher
49fb3241c6 Corrected coverity issue 1372965. 2016-09-15 11:07:11 -06:00
Greg Sjaardema
dab30468f9 Merge branch 'master' into patch-4 2016-09-15 11:04:30 -06:00
Ward Fisher
a08be0a312 Corrected issue 1372910 in coverity. 2016-09-14 16:26:08 -06:00
Ward Fisher
934bb4bd66 Corrected typo. 2016-09-14 16:20:43 -06:00
Ward Fisher
ad1220453a Corrected issue 1372911 in Coverity. 2016-09-14 16:16:06 -06:00
Ward Fisher
485faa0333 Addressed defect 1372912 in coverity. 2016-09-14 16:11:22 -06:00
Dennis Heimbigner
ddfb6d6279 Make sure that the _NCProperties attr is null-terminated and stored as such 2016-08-08 21:54:23 -06:00
Dennis Heimbigner
0cf1e2c49f re: Github issue netcdf-c 300
Modified the provenance code to allocate the minimal space
needed for the _NCProperties attribute in the file. Basically
this required using malloc in the provenance code and in ncdump;
otherwise it should cause no externally visible effects.
Also removed ENABLE_FILEINFO from configure.ac since
the provenance code is no longer optional.
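The idea in sketch form (the helper name and the exact format string are illustrative, not the library's code):

```
#include <stdio.h>
#include <stdlib.h>

/* Build the _NCProperties value in a buffer sized exactly to its content. */
static char* build_ncproperties(const char* ncver, const char* hdf5ver)
{
    /* snprintf with a NULL buffer returns the exact length required */
    int n = snprintf(NULL, 0,
                     "version=1|netcdflibversion=%s|hdf5libversion=%s",
                     ncver, hdf5ver);
    char* buf = malloc((size_t)n + 1);   /* +1 for the NUL terminator */
    if (buf != NULL)
        snprintf(buf, (size_t)n + 1,
                 "version=1|netcdflibversion=%s|hdf5libversion=%s",
                 ncver, hdf5ver);
    return buf;
}
```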
2016-08-08 09:24:19 -06:00
Greg Sjaardema
c7ccdfa543 More pedantically correct check
This modifies the previous change to be more pedantically correct.  It should always be an NC_EINVALCOORDS error if start exceeds fdims[d2]; however, if start equals fdims[d2], then it is only an error if count is non-zero.
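In sketch form, using the variable names from the nc4_put_vara fragment quoted in the commit below:

```
/* start past the end is always invalid; start exactly at the end is
 * invalid only when this rank actually writes something */
if (start[d2] > (hssize_t)fdims[d2] ||
    (start[d2] == (hssize_t)fdims[d2] && count[d2] > 0))
  BAIL_QUIET(NC_EINVALCOORDS);
```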
2016-07-21 09:30:18 -06:00
Greg Sjaardema
9290b31c9d Fix variable bounds check for parallel output
The following code is in nc4hdf.c, function `nc4_put_vara`.

```
  /* Check dimension bounds. Remember that unlimited dimensions can
   * put data beyond their current length. */
  for (d2 = 0; d2 < var->ndims; d2++)
    {
      dim = var->dim[d2];
      assert(dim && dim->dimid == var->dimids[d2]);
      if (!dim->unlimited)
        {
          if (start[d2] >= (hssize_t)fdims[d2])
            BAIL_QUIET(NC_EINVALCOORDS);
          if (start[d2] + count[d2] > fdims[d2])
            BAIL_QUIET(NC_EEDGE);
        }
    }
```

There is an issue when the process with the highest rank has zero items to output.  As an example, if I have four MPI processes, each writing the following amount of data:
 * rank 0: 0 items
 * rank 1: 2548 items
 * rank 2: 4352 items
 * rank 3: 0 items.

I will define the variable to have a length of 6900 items (0 + 2548 + 4352 + 0).  When I am outputting data to the variable, each rank will call nc_put_vara_longlong with the following start and count values:
 * rank 0: start = 0, count = 0
 * rank 1: start = 0, count = 2548
 * rank 2: start = 2548, count = 4352
 * rank 3: start = 6900, count = 0.

In each case, the `start` for rank N is equal to `start` for rank N-1 plus `count` for rank N-1.  This all works fine until the highest rank is writing 0 items.  In that case, the `start` value for that rank is equal to the total size of the variable, and the check in the code fragment shown above fails since `start[] == fdims[]`.

This could be fixed in the application code by checking whether `count` is zero and, if so, setting `start` to 0 as well, but I think that is a kluge that should not be required.

Note that this test appears three times in this file.  In one case, the check for non-zero count already exists, but not in the other two.  This pull request adds the check to the other two tests.
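A sketch of the adjusted test (this is the shape of the change described above; the follow-up commit higher in this log refines it further):

```
/* ranks that write nothing (count == 0) skip the start check */
if (count[d2] != 0 && start[d2] >= (hssize_t)fdims[d2])
  BAIL_QUIET(NC_EINVALCOORDS);
if (start[d2] + count[d2] > fdims[d2])
  BAIL_QUIET(NC_EEDGE);
```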
2016-07-21 09:30:18 -06:00
Ward Fisher
c1ec950d70 Merge branch 'extent-llu' of https://github.com/brtnfld/netcdf-c into consolidate-gh 2016-07-15 14:42:49 -06:00
Ward Fisher
d419b53925 Merge branch 'patch-1' of https://github.com/gsjaardema/netcdf-c into consolidate-gh 2016-07-15 14:40:32 -06:00
Greg Sjaardema
382ff98e6c Use an hdf5-api function that eliminates code
The H5Aexists HDF5 function does the same job as the manually coded loop, with much less code and fewer function calls.

Also, the H5Aopen_idx and H5Aget_num_attrs functions are deprecated.
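For illustration (H5Aexists is the real HDF5 call; locid, att_name, and found are stand-in names):

```
/* one call replaces the H5Aget_num_attrs()/H5Aopen_idx() search loop */
htri_t exists = H5Aexists(locid, att_name);
if (exists < 0)
    BAIL(NC_EHDFERR);      /* the existence check itself failed */
found = (exists > 0);
```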
2016-07-13 09:30:54 -04:00
Greg Sjaardema
fcb1455b28 Update nc4hdf.c
If H5Aopen_idx on line 1964 fails, then attid will be < 0.  The BAIL will goto exit at line 1989, and then the test of "if (attid ...)" at line 1995 will pass (attid != 0) and call H5Aclose(attid) with a negative attid.  A similar issue exists for spaceid.

The result of the function is probably the same, since there is a failure somewhere, but it is more difficult to track down if it looks like the failure is happening in the wrong place.
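A sketch of the guarded cleanup implied by the description (variable and label names are assumptions):

```
exit:
  /* attid/spaceid are negative when the corresponding open failed,
   * so test for a valid (positive) id rather than merely nonzero */
  if (attid > 0 && H5Aclose(attid) < 0)
      retval = NC_EHDFERR;
  if (spaceid > 0 && H5Sclose(spaceid) < 0)
      retval = NC_EHDFERR;
  return retval;
```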
2016-07-12 08:59:01 -04:00
Greg Sjaardema
c361938c8e Fix att_name size
There was a mismatch between the allocated size of att_name and the buffer size that H5Aget_name was told it had.
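In sketch form (the constant and error code are assumptions; the point is that the allocation and the size reported to H5Aget_name come from the same place):

```
/* room for the longest name plus the NUL terminator */
char att_name[NC_MAX_NAME + 1];

/* pass the true buffer size so allocation and report cannot diverge */
if (H5Aget_name(attid, sizeof(att_name), att_name) < 0)
    BAIL(NC_EATTMETA);
```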
2016-07-12 08:12:23 -04:00
Scot Breitenfeld
55bf63d35c Merge branch 'master' into extent-llu 2016-06-21 13:07:20 -05:00
M. Scot Breitenfeld
2bf233c9d5 This patch changes the algorithm for determining the extended size of a dataset in parallel so that a variable of type unsigned long long is passed to MPI_Allreduce. Despite the comment in the code on this line (removed in this patch), the current usage is not correct. For example, consider if process 0 has an extend size of 2^32 (0x100000000) and process 2 has an extend size of 1 (0x1). The current algorithm computes the max of each 4-byte segment and then combines these into an 8-byte number, yielding a max of (2^32)+1 (0x100000001), when it should simply be 2^32.
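The corrected reduction, as a minimal sketch (global_max_extent is an illustrative wrapper, not the library's function):

```
#include <mpi.h>

/* local_size is this rank's desired extent for the unlimited dimension */
unsigned long long global_max_extent(unsigned long long local_size)
{
    unsigned long long max_size = 0;
    /* reduce the full 64-bit value in one operation; taking the max of
     * each 4-byte half separately can combine halves from different
     * ranks, e.g. max(2^32, 1) coming out as 2^32 + 1 as above */
    MPI_Allreduce(&local_size, &max_size, 1, MPI_UNSIGNED_LONG_LONG,
                  MPI_MAX, MPI_COMM_WORLD);
    return max_size;
}
```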
N. Fortner
2016-06-21 13:04:15 -05:00
Ward Fisher
3e24dcd575 Removed some dead assignments reported by clang. 2016-06-20 15:28:46 -06:00
Scot Breitenfeld
b2dc48d7b3 Merge branch 'master' into att_callbk 2016-06-15 08:36:41 -05:00
Ward Fisher
46c63344f7 Added comments where needed. 2016-06-14 10:47:24 -06:00
Ward Fisher
a499bf1ed8 Extending when attributes are copied. 2016-06-14 10:29:14 -06:00
Ward Fisher
13b088f49f Moved fix out to a separate function so that we can hopefully address a few other NCO-reported issues. 2016-06-14 10:22:06 -06:00
M. Scot Breitenfeld
4f43988e84 Changed from discovering attributes and then looping through them to using H5Aiterate2 instead. 2016-06-14 09:33:52 -05:00
Ward Fisher
1ebb104f74 Tentatively fixed https://github.com/Unidata/netcdf-c/issues/239 but the test needs to be extended. 2016-06-10 17:03:08 -06:00
Ward Fisher
2edc4ce64a Added a fix as contributed by Kent at HDF group for a collective I/O, parallel issue. 2016-06-09 14:16:20 -06:00
dmh
5bfdf54263 The name hash for hdf4 variables was
not being computed. Fix in nc4file.c.
Not sure how this ever worked for any variable.
What is also weird is that the dim hash is
apparently being computed.
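A sketch of the kind of one-line fix implied (the hash_fast helper and field names are assumptions based on the dim-hash remark):

```
/* compute the name hash when the hdf4 variable is created, mirroring
 * what is already done for dimensions */
var->hash = hash_fast(var->name, strlen(var->name));
```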
2016-06-01 15:20:36 -06:00