Bug Fix
Description:
Removed the code from configure that strips the '-g' flag from CFLAGS
when in production mode. The current default CFLAGS in production mode
does not include '-g', as intended, but users should be able to
override this and enable '-g' by setting the CFLAGS environment variable
if desired. Note that this applies to FCFLAGS and CXXFLAGS as well.
Tested:
kagiso, linew, liberty
Test the correctness of the data whether the fill value is defined or not. The
library should let chunks bypass the cache depending on the size of the
chunks and whether the fill value must be written to the chunks.
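The decision can be sketched as below; the names are hypothetical and the
real library logic considers more conditions:

    #include <stdbool.h>
    #include <stddef.h>

    /* Hypothetical stand-in for the relevant chunk-cache state. */
    typedef struct {
        size_t nbytes_max;              /* capacity of the chunk cache */
    } chunk_cache_t;

    /* Bypass the cache for a chunk that could never fit in it anyway,
     * but only when no fill value has to be applied in memory first. */
    static bool
    chunk_bypass_cache(const chunk_cache_t *cache, size_t chunk_size,
                       bool must_write_fill)
    {
        return (chunk_size > cache->nbytes_max) && !must_write_fill;
    }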
Tested on jam - simple change.
Pass the chunk "user data" to H5D_chunk_unlock(), so that chunks that
already have an address aren't reallocated.
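A minimal sketch of the unlock-time check, with hypothetical stand-ins for
haddr_t and the chunk user data (the real H5D_chunk_ud_t carries more fields):

    #include <stdint.h>

    typedef uint64_t haddr_t;                 /* stand-in for HDF5's haddr_t */
    #define HADDR_UNDEF ((haddr_t)UINT64_MAX) /* "no address yet" marker     */

    /* Hypothetical slice of the chunk "user data" now passed to unlock. */
    typedef struct {
        haddr_t chunk_addr;   /* HADDR_UNDEF until the chunk is allocated */
    } chunk_ud_t;

    /* With the user data visible at unlock time, file space is only
     * allocated for chunks that do not already have an address. */
    static haddr_t
    unlock_chunk(const chunk_ud_t *udata, haddr_t (*alloc_chunk)(void))
    {
        if (udata->chunk_addr != HADDR_UNDEF)
            return udata->chunk_addr;   /* already allocated: reuse it */
        return alloc_chunk();           /* first write: allocate space */
    }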
Tested on:
FreeBSD/32 6.3 (duty) in debug mode
FreeBSD/64 6.3 (liberty) w/C++ & FORTRAN, in debug mode
Linux/64-ia64 2.4 (tg-login3) w/parallel, w/FORTRAN, in production mode
Description:
In some situations it was possible for the fill value not to be written to
parts of a chunked dataset, particularly when extending and/or shrinking it.
Prior to the chunk cache fix (1015), these bugs would have been exceedingly rare.
Tested: jam, smirom, linew (h5committest)
Due to the way h5ls prints types, it starts searching for NATIVE types first.
One solution would be for h5ls not to detect these native types, using, for
example, the same print datatype function that h5dump does; that would make the
output look the same on all platforms ("32-bit little-endian integer" would be
printed instead). The drawback is that this "native" information would not be
available. The other solution is to have not one but two expected outputs and
make the shell script detect the endianness and compare against one output or
the other.
tested: h5committest
When creation order is requested in a query function but there is no creation
order index in the file, the library tried to build and sort a table of all
links. To optimize this, let the library use the B-tree for the names of the
links.
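A hypothetical planner illustrating the optimization; the real library's
index selection is more involved:

    #include <stdbool.h>

    typedef enum {
        USE_CRT_ORDER_INDEX,   /* dense creation-order index on disk */
        USE_NAME_BTREE,        /* walk the B-tree of link names      */
        BUILD_LINK_TABLE       /* build + sort a table of all links  */
    } link_query_plan_t;

    /* Only fall back to building and sorting an in-memory table of
     * every link when no on-disk index can answer the query. */
    static link_query_plan_t
    plan_link_query(bool by_crt_order, bool has_crt_order_index,
                    bool order_matters)
    {
        if (by_crt_order && has_crt_order_index)
            return USE_CRT_ORDER_INDEX;
        if (!order_matters)
            return USE_NAME_BTREE;   /* any order will do: use names */
        return BUILD_LINK_TABLE;
    }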
Tested on jam. I tested the same change for v1.8 with h5committest.
Clean up code and eliminate resource leaks. Also avoid "null" I/O when a
chunk doesn't exist and can be skipped.
Tested on:
Mac OS X/32 10.5.6 (amazon)
(too minor to require h5committest)
Clean up (i.e. remove) more internal calls to H5E_clear_stack(), along with
some other minor code cleanups.
Tested on:
Mac OS X/32 10.5.6 (amazon)
(too minor to require h5committest)
Description:
The meaning of the "nbytes" field in H5D_rdcc_t was not clear: some places
assumed it was the maximum size of the chunk cache, while others assumed it was
the current size of the chunk cache. The end result was that only one chunk
could be held in the cache at a time. This field has been replaced by
"nbytes_max" and "nbytes_used". Performance of cached I/O should improve greatly.
Tested: jam, smirom (h5committest)
Fixed a bug where multiple opens of an attribute shared the same file handle.
Description:
An attribute's "oloc" field, which specifies the file it resides in, was located
in the attribute's "shared" structure. So when an attribute was opened multiple
times, all of the handles for that attribute pointed to the same file ID, even if
different file IDs were used to open the different handles for the attribute.
The "oloc" has been moved to the top-level H5A_t struct.
Tested: jam, smirom (h5committest)
Description:
Since the new object header format, it has been possible for a situation to be
created where none of the messages are large enough to hold a continuation
message and there are no null messages to merge with. This makes it impossible
to add a new object header chunk. This case will now be handled by moving every
message in the last chunk to the newly allocated one, except for null messages
which are deleted.
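A sketch of the fallback path, with hypothetical message and chunk types:

    #include <stddef.h>

    typedef enum { MSG_NULL, MSG_OTHER } msg_type_t;

    typedef struct {
        msg_type_t type;
        size_t     size;
    } oh_msg_t;

    /* Move every non-null message out of the last chunk into the freshly
     * allocated one (assumed large enough), dropping null messages, so the
     * vacated space can hold the continuation message. */
    static size_t
    migrate_last_chunk(oh_msg_t *last, size_t n, oh_msg_t *fresh)
    {
        size_t moved = 0;
        for (size_t i = 0; i < n; i++) {
            if (last[i].type == MSG_NULL)
                continue;               /* null messages are simply deleted */
            fresh[moved++] = last[i];   /* everything else moves wholesale  */
        }
        /* the old chunk is now one large free span, big enough for the
         * continuation message pointing at the new chunk */
        return moved;
    }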
Tested: jam, smirom (h5committest)
Description:
When an attribute was created with a datatype or dataspace that was shared in
the same object header that the attribute was in, the attribute could not be
deleted. Changes were made to ensure that the attribute can be deleted both when
it is in the object header and when it is shared in the heap. Object
header message decode routines now take an "open_oh" parameter to enable them to
avoid opening the same object header twice.
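A sketch of what the new parameter enables, with hypothetical names and an
opaque header type:

    typedef struct oh_t oh_t;   /* opaque stand-in for an object header */

    /* When the shared message lives in the header we are already decoding,
     * reuse that header instead of opening it a second time. */
    static oh_t *
    resolve_header(oh_t *open_oh, unsigned long open_addr,
                   unsigned long target_addr,
                   oh_t *(*open_header)(unsigned long))
    {
        if (open_oh != NULL && target_addr == open_addr)
            return open_oh;              /* same header: reuse, don't reopen */
        return open_header(target_addr); /* different header: open normally  */
    }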
Tested: jam, smirom (h5committest)
When a chunk did not exist on disk, the library still loaded it into the
cache, which is redundant. I changed it to bypass the cache and added a test in
dsets.c.
Tested on jam and smirom.
Solution: for compound types, recursively apply that check (see the sketch
after this list).
Two new cases are added:
1) The compound types have a different number of members. The message printed is
   <obj1> has X members <obj2> has Y members
   where X and Y are the number of members of each compound type being compared.
2) The compound types have non-comparable member types (for example, a double
   and an int at the same index). In this case the message
   Comparison not possible: object1 is of class1 and object2 is of class2
   is replaced with
   Comparison not possible: object1 has a class1 and object2 has a class2
Modified the test generator program to include these 2 cases.
Added a shell script run for these 2 cases.
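A sketch of the recursive comparability check, with hypothetical types (the
real h5diff logic covers more datatype classes):

    #include <stdbool.h>
    #include <stddef.h>

    typedef enum { CLS_INT, CLS_FLOAT, CLS_COMPOUND } type_class_t;

    typedef struct type_t {
        type_class_t    cls;
        size_t          nmembers;   /* valid when cls == CLS_COMPOUND */
        struct type_t **members;
    } type_t;

    /* Scalar classes must match, and compound types must have the same
     * member count with pairwise-comparable members, checked recursively. */
    static bool
    types_comparable(const type_t *a, const type_t *b)
    {
        if (a->cls != b->cls)
            return false;                   /* e.g. double vs. int */
        if (a->cls != CLS_COMPOUND)
            return true;
        if (a->nmembers != b->nmembers)
            return false;                   /* "X members" vs "Y members" */
        for (size_t i = 0; i < a->nmembers; i++)
            if (!types_comparable(a->members[i], b->members[i]))
                return false;               /* mismatch at the same index */
        return true;
    }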
Tested: windows, h5committest
The failure was caused by some overactive sanity checking code in
unlock_entry(). In essence, the code did not consider the possibility
that, under certain very unusual circumstances, an entry could be flushed
to disk during the H5AC_unprotect() call. Instead, it simply failed
if a dirty entry was marked clean after the call to H5AC_unprotect().
This bug in the test code was exposed by recent changes to the default
cache configuration made as part of the "metadata blizzard" bug fix.
Fixed the bug by adding code to detect when an entry is flushed during
the call to H5AC_unprotect(), and by not triggering a failure when a dirty
entry is marked clean after the call if the entry has been flushed.
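A sketch of the corrected check, using a hypothetical slice of the test's
entry record:

    #include <stdbool.h>

    typedef struct {
        bool dirtied;    /* test expected the entry to be dirty          */
        bool flushed;    /* set when a flush is observed during the call */
        bool is_dirty;   /* state reported by the cache after unprotect  */
    } entry_rec_t;

    /* A dirty entry that comes back clean is only an error if it was
     * NOT flushed during the H5AC_unprotect() call. */
    static bool
    unlock_entry_check_ok(const entry_rec_t *e)
    {
        if (e->dirtied && !e->is_dirty)
            return e->flushed;   /* clean is fine iff a flush happened */
        return true;
    }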
In passing, also found and fixed another test bug in which expunged
entries were erroneously marked as dirty in the test code's independent
register of entry status.
Tested parallel on Phoenix (AMD64 Linux) and Jam. Also ran t_cache
manually hundreds of times looking for intermittent failures.
Larry kindly tested (parallel) on Mercury.
Bring r16435 from revise_chunks branch back to trunk:
Expand object copy tests for chunked datasets to include 1-D datasets
with an unlimited dimension. (Fix typo in comment for test/links.c)
Tested on:
FreeBSD/32 6.3 (duty) in debug mode
(more thoroughly tested already on revise_chunks branch)
Description:
A user discovered that the HDF5 1.8.2 Windows release binaries were missing a
few of the HDF5 tools. This was due to the Windows install script, which simply
didn't include them. This commit fixes the install script to include h5copy,
h5mkgrp, and h5stat.
Tested:
VS2005 w/ WinXP, build and install only
Call h5_fixname() (with an array of test filenames) to generate the
filename to create, and then call h5_cleanup() when the tests pass to delete
the files created and close the FAPL from h5_fileaccess().
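A sketch of the resulting test skeleton; the helpers are the ones declared in
the test harness's h5test.h, and the base name here is made up:

    #include "h5test.h"   /* h5_fileaccess(), h5_fixname(), h5_cleanup() */

    static const char *FILENAME[] = {"example", NULL};  /* hypothetical */

    int
    main(void)
    {
        char  filename[1024];
        hid_t fapl = h5_fileaccess();      /* FAPL chosen by the harness */

        /* Map the base name to the actual on-disk name for this VFD. */
        h5_fixname(FILENAME[0], fapl, filename, sizeof(filename));

        /* ... create `filename` and run the tests ... */

        /* All passed: delete the created files and close the FAPL. */
        h5_cleanup(FILENAME, fapl);
        return 0;
    }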
Defined a macro:
#define TESTING2(WHAT) {printf(" Testing %-62s",WHAT); fflush(stdout);}
Similar to TESTING, except that it has an initial indentation space.
The effect is useful for nested loop tests:
Testing with old file format:
 Testing with fill value, no compression PASSED
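A runnable sketch of how the macros combine; PASSED() is assumed to behave
like the harness's existing macro:

    #include <stdio.h>

    #define TESTING(WHAT)  {printf("Testing %-62s", WHAT); fflush(stdout);}
    #define TESTING2(WHAT) {printf(" Testing %-62s", WHAT); fflush(stdout);}
    #define PASSED()       {puts(" PASSED"); fflush(stdout);}

    int
    main(void)
    {
        puts("Testing with old file format:");       /* outer loop banner */
        TESTING2("with fill value, no compression");  /* indented sub-test */
        /* ... run the sub-test ... */
        PASSED();
        return 0;
    }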
tested: windows, linux
Bring r16416 from revise_chunks branch to trunk:
Bring closer to the standard standalone test format, add checks for using
the latest file format, and close a dataset ID that was leaked.
Tested on:
FreeBSD/32 6.3 (duty)
(too minor to require h5committest)