Bring changes from Coverity branch back to trunk:
r19079 & 19080:
[BZ1942] When h5dump -u is used to generate XML, it does not respect the -m option.
The XML version of the dump_data function didn't check for use of the fp_format variable.
Added a new expected test file for the bug 1942 fix.
r19103, 19104 & 19105:
[BZ1821] h5repack -v did not display correct output for a selected compression. Needed new test for comparing output of -v option.
Added new test file for solution to BZ1821
BZ1821 - Bring test changes from the shell script actually used.
Tested on:
Mac OS X/32 10.6.4 (amazon) debug & production
(h5committested on branch)
Add --no-dangling-links option to h5ls.
Description:
Related to "Bug 1830 - Following an dangling external link in h5ls should set non-zero return code."
If --no-dangling-links option is specified and any dangling link is found, return exit code 1 (error).
Tested:
jam, amani and heiwa
Support following symbolic links.
Description:
Add '--follow-symlinks' option to follow symbolic links (soft and external).
Update the help page according to the RM.
Remove some compiler warning messages.
Tested:
jam, amani and linew
*sigh* Bring along another updated file after recent object header
message info update.
Tested on:
FreeBSD/32 6.3 (duty) in debug mode
FreeBSD/64 6.3 (liberty) w/C++ & FORTRAN, in debug mode
Linux/32 2.6 (jam) w/PGI compilers, w/default API=1.8.x,
w/C++ & FORTRAN, w/threadsafe, in debug mode
Linux/64-amd64 2.6 (smirom) w/Intel compilers, w/default API=1.6.x,
w/C++ & FORTRAN, in production mode
Solaris/32 2.10 (linew) w/deprecated symbols disabled, w/C++ & FORTRAN,
w/szip filter, in production mode
Linux/64-ia64 2.6 (cobalt) w/Intel compilers, w/C++ & FORTRAN,
in production mode
Linux/64-ia64 2.4 (tg-login3) w/parallel, w/FORTRAN, in debug mode
Linux/64-amd64 2.6 (abe) w/parallel, w/FORTRAN, in production mode
Mac OS X/32 10.6.2 (amazon) in debug mode
Mac OS X/32 10.6.2 (amazon) w/C++ & FORTRAN, w/threadsafe,
in production mode
Bring changes from file free space branch back to the trunk. *yay!*
Tested on:
FreeBSD/32 6.3 (duty) in debug mode
FreeBSD/64 6.3 (liberty) w/C++ & FORTRAN, in debug mode
Linux/32 2.6 (jam) w/PGI compilers, w/default API=1.8.x,
w/C++ & FORTRAN, w/threadsafe, in debug mode
Linux/64-amd64 2.6 (smirom) w/Intel compilers, w/default API=1.6.x,
w/C++ & FORTRAN, in production mode
Solaris/32 2.10 (linew) w/deprecated symbols disabled, w/C++ & FORTRAN,
w/szip filter, in production mode
Linux/64-ia64 2.6 (cobalt) w/Intel compilers, w/C++ & FORTRAN,
in production mode
Linux/64-ia64 2.4 (tg-login3) w/parallel, w/FORTRAN, in debug mode
Linux/64-amd64 2.6 (abe) w/parallel, w/FORTRAN, in production mode
Mac OS X/32 10.5.8 (amazon) in debug mode
Mac OS X/32 10.5.8 (amazon) w/C++ & FORTRAN, w/threadsafe,
in production mode
#1585). I changed it to undefined and let the caller functions decide the location
of the datatype. For H5Tdecode, it should mark the datatype as in memory. For other
callers like H5Dopen or H5Aopen, they should mark it as on disk.
Tested it on jam, smirom, linew.
ISSUE: the tools use the following formula to read by hyperslabs: hyperslab_size[i] = MIN(dim_size[i], H5TOOLS_BUFSIZE / datum_size), where H5TOOLS_BUFSIZE is a constant defined as 1024K. This is OK as long as datum_size does not exceed 1024K; otherwise we get a hyperslab size of 0 (since 1024K / (anything greater than 1024K) = 0). This affects h5dump, h5repack, and h5diff.
SOLUTION: add a check for a 0 size and set it to 1 if so.
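A minimal sketch of that check, assuming the formula above; the helper name and surrounding loop are illustrative, not the actual tools-library code:

    #include "hdf5.h"

    #define H5TOOLS_BUFSIZE (1024 * 1024)            /* 1024K, as described above */
    #define MIN(a,b)        (((a) < (b)) ? (a) : (b))

    /* Illustrative helper: compute the hyperslab size per dimension and fall
     * back to 1 element when the datum is larger than the buffer. */
    static void set_hyperslab_sizes(hsize_t *hyperslab_size, const hsize_t *dim_size,
                                    int rank, size_t datum_size)
    {
        int i;

        for (i = 0; i < rank; i++) {
            hyperslab_size[i] = MIN(dim_size[i], H5TOOLS_BUFSIZE / datum_size);
            if (hyperslab_size[i] == 0)   /* datum_size exceeds H5TOOLS_BUFSIZE */
                hyperslab_size[i] = 1;    /* read at least one element */
        }
    }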
TEST FOR H5DUMP: Defined a case in the h5dump test generator program of such a type (an array type of doubles with a large array dimension, which was the case the user reported). Since the written file committed in svn would be around 1024K, opted for not writing the data (the part of the code where the hyperslab is defined is still executed, since h5dump always reads the files). Defined a macro WRITE_ARRAY to enable such writing if needed. Added a run to the h5dump shell script. Added 2 new files to svn: tools/testfiles/tarray8.ddl, tools/testfiles/tarray8.h5. NOTE: while doing this I thought of adding this dataset case to an existing file, but that would add the large array output to those files (the ddls). The issue is that the file list keeps increasing.
TEST FOR H5DIFF: for h5diff the check for reading by hyperslabs is H5TOOLS_MALLOCSIZE (128 * H5TOOLS_BUFSIZE), or 128 MB. This makes it impossible to add such a file to svn, so the same method as h5dump was used (only write the dataset if WRITE_ARRAY is defined). As opposed to h5dump, the hyperslab code is NOT executed when the dataset is empty (the dataset is not read). Added the new dataset to existing files and a shell run (tools/h5diff/testfiles/h5diff_dset1.h5 and tools/h5diff/testfiles/h5diff_dset2.h5, with output in tools/h5diff/testfiles/h5diff_80.txt).
TEST FOR H5REPACK: similar issue as h5diff with the difference that the hyperslab code is run. Added a run to the shell script (with a filter, otherwise the code uses H5Ocopy).
tested: linux (h5committest failed; apparently it did not detect the code changes in /tools/lib that fix the bug: the error is in an assertion on the hyperslab of 0. I am sure that running h5committest --distclean will detect the new code, but I don't want to wait 3 more hours :-) )
The way h5ls prints types, it starts by searching for NATIVE types first. One solution would be for h5ls not to detect these native types, using for example the same print datatype function that h5dump uses; that would make the output look the same on all platforms ("32-bit little-endian integer" would be printed instead). The drawback is that this "native" information would not be available. Another solution is to have not one but 2 expected outputs and make the shell script detect the endianness and compare with one output or the other.
tested: h5committest
Description: When attempting to copy an object with a message shared in its own
object header, the library attempts to protect the same object header twice.
Previously, it was possible for the object header to be protected with write
access in one or both of these protects, which would be illegal. The library
should now always protect with read only access in this case. The conditions
for fixing incorrect datatype versions have been made weaker to support this
change. The version will only be corrected if the object header the datatype
is in is modified.
Tested: jam, smirom (h5committest)
Print "Storage: information not available"
when displaying storage information for VL and dataset region types.
Added 2 shell runs that display this information.
#818
Tested: windows, linux
Introduced a new feature in the tools library regarding command line parsing
In the definition of arguments, an "*" means that the switch may or may not have an optional argument. This "*" is put in the code in the letter definition and is transparent to the user (e.g. b* instead of the previous b:), where ":" denotes a required argument after the letter and neither ":" nor "*" denotes no argument.
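A hypothetical sketch of how a get_option-style parser might handle the three cases (':' required, '*' optional, otherwise none); the function and variable names are illustrative assumptions, not the actual tools-library code:

    #include <string.h>

    /* Hypothetical helper: given the option-definition string (e.g. "b*o:h"),
     * return the argument for 'letter', consuming the next argv token only
     * when the definition requires or optionally allows one. */
    static const char *get_opt_arg(const char *valid_opts, char letter,
                                   char *const argv[], int argc, int *opt_ind)
    {
        const char *spec = strchr(valid_opts, letter);

        if (spec == NULL)
            return NULL;                                   /* unknown switch */
        if (spec[1] == ':')                                /* required argument */
            return (*opt_ind + 1 < argc) ? argv[++(*opt_ind)] : NULL;
        if (spec[1] == '*' && *opt_ind + 1 < argc
                && argv[*opt_ind + 1][0] != '-')           /* optional argument present */
            return argv[++(*opt_ind)];
        return NULL;                                       /* no argument */
    }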
Used for the h5dump binary option -b
It can be now
1) -b (defaults to NATIVE)
2) -b NATIVE
3) -b FILE
4) -b LE
5) -b BE
Note: the keyword NATIVE replaces MEMORY
This feature (-b with no argument) was tested with the sequence of h5dump to binary (NATIVE) then h5import to generate an HDF5 file from the binary file and h5diff to compare the 2 HDF5 files
Tested: linux
Description: Improved external link traversal of h5dump. h5dump will now
properly avoid all cycles, even those spanning multiple files. Improvement
to the output of committed datatypes. Committed datatypes are now checked
for uniqueness (like other objects). Tests added for these cases.
Tested: kagiso, linew, smirom (h5committest)
Description: Added -E option to h5ls. When set, this allows h5ls to enter
external files (currently only through an external link). The -r option by
itself will no longer allow h5ls to traverse external links.
Tested: kagiso, linew, smirom (h5committest)
Description: Adds capability to h5ls to traverse external links when the -r
(recursive) option is given. Changes to the way absolute path names are patched
in h5trav.c. Changes to the way recursive traversal starting from a non-root
group is handled (which also fixes some preexisting issues). Tests added for
these cases.
Tested: kagiso, smirom, linew (h5committest)
datatype versions are encountered.
Description: The library now recognizes some problems with datatype versions in
H5O_decode_helper(), and, if not performing strict format checks, automatically
corrects them. Framework added for other message decode routines to
automatically correct file errors. Datatype version information added to
h5debug.
Tested: kagiso, smirom, linew (h5committest)
for now, the test script just calls the tool binary with the command line parameters, with no special
checking of the accuracy of the output contents (jpeg files)
tested: linux
Fixed bug in h5ls that prevented relative group listings (like
"h5ls foo.h5/bar") from working correctly.
Tested on:
FreeBSD/32 6.2 (duty) in debug mode
FreeBSD/64 6.2 (liberty) w/C++ & FORTRAN, in debug mode
Linux/32 2.6 (kagiso) w/PGI compilers, w/C++ & FORTRAN, w/threadsafe,
in debug mode
Linux/64-amd64 2.6 (smirom) w/default API=1.6.x, w/C++ & FORTRAN,
in production mode
Linux/64-ia64 2.6 (cobalt) w/Intel compilers, w/C++ & FORTRAN,
in production mode
Solaris/32 2.10 (linew) w/deprecated symbols disabled, w/C++ & FORTRAN,
w/szip filter, in production mode
Mac OS X/32 10.5.3 (amazon) in debug mode
Linux/64-ia64 2.4 (tg-login3) w/parallel, w/FORTRAN, in production mode
bug fixes.
Description:
Added code to create an empty HDF5 file (named h5diff_empty.h5) in order to test
whether h5diff correctly compares an empty HDF5 file vs. a non-empty one.
Tested:
Tested h5diffgentest itself on kagiso.
Verified by h5dump that h5diff_empty.h5 was indeed empty.
Then "h5diff h5diff_empty.h5 h5diff_basic1.h5" returned 0 (should have
returned non-zero).
The previous printing of
LINKCLASS 64
was removed
HDF5 "textlinksrc.h5" {
GROUP "/" {
EXTERNAL_LINK "ext_link1" {
TARGETFILE "textlinktar.h5"
TARGETPATH "dset"
DATASET "dset" {
DATATYPE H5T_STD_I32LE
DATASPACE SIMPLE { ( 6 ) / ( 6 ) }
DATA {
(0): 1, 2, 3, 4, 5, 6
}
}
}
}
}
There is no script test for this behavior so far, because the test script uses complete paths that vary from test to test, making it impossible to define a valid TARGETFILE in the file.
tested: windows, linux, solaris
Usage is
-m T, --format=T
Where T - is a string containing the floating point format, e.g. '%.3f'
The test consists of writing a number with 7 fractional digits (the default display precision of %f is 6 digits) and having the 7 digits displayed with
-m %.7f fpformat.h5
Tested: windows, linux, solaris
Note: the output file was generated in linux, it may be possible that platforms other than the ones tested have a different representation of the number
add a check for block overlap after the command line parsing
* Algorithm
*
* In an inner loop, the parameters from SSET are translated into temporary
* variables so that 1 row is printed at a time (getting the coordinate indices
* at each row).
* We define the stride, count and block to be 1 in the row dimension to achieve
* this and advance until all points are printed.
* An outer loop is made for cases where the dimensionality is greater than 2D.
* In each iteration, the 2D block is displayed in the inner loop. The remaining
* slower dimensions above the first 2 are incremented one at a time in the outer loop.
*
* The element position is obtained from the matrix according to:
* Given an index I(z,y,x) its position from the beginning of an array
* of sizes A(size_z, size_y,size_x) is given by
* Position of I(z,y,x) = index_z * size_y * size_x
* + index_y * size_x
* + index_x
*
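The position formula above, written out as a small C helper (illustrative only):

    #include "hdf5.h"

    /* Position of I(z,y,x) from the beginning of an array of sizes
     * A(size_z, size_y, size_x), as in the formula above. */
    static hsize_t element_position(hsize_t index_z, hsize_t index_y, hsize_t index_x,
                                    hsize_t size_y, hsize_t size_x)
    {
        return index_z * size_y * size_x
             + index_y * size_x
             + index_x;
    }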
tested: windows, linux
- Add hash value for skip list string types, to reduce # of string
comparisons (see the sketch after this list).
- Fixed bug with metadata/small data block aggregator adding size == 0
block into file free space list.
- Refactored metadata/small data block aggregator code into single set of
common routines.
- Changed block aggregator code to be smarter about releasing space in the
'other' block when the 'other' block has aggregated enough data.
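A minimal sketch of the hashing idea from the first item in the list above; the hash function and names are illustrative assumptions, not the actual skip list code:

    #include <string.h>

    /* Store a hash alongside each string key so that most comparisons are a
     * cheap integer check; strcmp is only needed when the hashes match. */
    static unsigned string_hash(const char *s)
    {
        unsigned h = 5381;                    /* djb2-style hash */

        while (*s)
            h = h * 33 + (unsigned char)*s++;
        return h;
    }

    static int keys_equal(unsigned hash_a, const char *key_a,
                          unsigned hash_b, const char *key_b)
    {
        return hash_a == hash_b && strcmp(key_a, key_b) == 0;
    }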
Tested on:
FreeBSD/32 6.2 (duty) in debug mode
FreeBSD/64 6.2 (liberty) w/C++ & FORTRAN, in debug mode
Linux/32 2.6 (kagiso) w/PGI compilers, w/C++ & FORTRAN, w/threadsafe,
in debug mode
Linux/64-amd64 2.6 (smirom) w/default API=1.6.x, w/C++ & FORTRAN,
in production mode
Linux/64-ia64 2.6 (cobalt) w/Intel compilers, w/C++ & FORTRAN,
in production mode
Solaris/32 2.10 (linew) w/deprecated symbols disabled, w/C++ & FORTRAN,
w/szip filter, in production mode
Mac OS X/32 10.4.10 (amazon) in debug mode
Linux/64-ia64 2.4 (tg-login3) w/parallel, w/FORTRAN, in production mode
Add H5Lvisit_by_name() API routine to library.
Eliminated all (five!) other group traversal routines and changed them
all to use the new API routine.
Cleaned up output of h5ls & h5stat:
- Issue error when requesting recursive traversal of a file
with the "group info" flag, but no group given
- Print info about root group in all(?) appropriate situations
- Don't print "verbose" information about root group until the
root group is in the list of objects to display
(mostly because h5ls & h5stat had a different twist on traversing the
groups in a file than the other utilities)
Tested on:
FreeBSD/32 6.2 (duty) in debug mode
FreeBSD/64 6.2 (liberty) w/C++ & FORTRAN, in debug mode
Linux/32 2.6 (kagiso) w/PGI compilers, w/C++ & FORTRAN, w/threadsafe,
in debug mode
Linux/64-amd64 2.6 (smirom) w/default API=1.6.x, w/C++ & FORTRAN,
in production mode
Linux/64-ia64 2.6 (cobalt) w/Intel compilers, w/C++ & FORTRAN,
in production mode
Solaris/32 2.10 (linew) w/deprecated symbols disabled, w/C++ & FORTRAN,
w/szip filter, in production mode
Mac OS X/32 10.4.10 (amazon) in debug mode
Linux/64-ia64 2.4 (tg-login3) w/parallel, w/FORTRAN, in production mode
Current behavior:
if the DCPL has creation order tracked for attributes, then sort the attributes by creation order, otherwise by name.
The sort order (ascending or descending) is whatever is requested.
tested: linux
Bug fixes
Avoid passing iteration flags in the dump_group function parameters; instead, inspect the group's property list just before calling H5Literate and H5Aiterate, in the latter case checking the creation order flags with H5Pget_attr_creation_order.
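A minimal sketch of that check, using H5Pget_attr_creation_order and the versioned H5Aiterate2 mentioned below; the helper name and omitted error handling are illustrative assumptions:

    #include <stddef.h>
    #include "hdf5.h"

    /* Illustrative helper: pick the attribute iteration index from the
     * object's creation property list, then iterate. */
    static herr_t dump_attributes(hid_t obj_id, hid_t ocpl_id,
                                  H5_iter_order_t order, H5A_operator2_t op)
    {
        unsigned   crt_order_flags = 0;
        H5_index_t idx_type        = H5_INDEX_NAME;

        H5Pget_attr_creation_order(ocpl_id, &crt_order_flags);
        if (crt_order_flags & H5P_CRT_ORDER_TRACKED)
            idx_type = H5_INDEX_CRT_ORDER;   /* sort by creation order */

        return H5Aiterate2(obj_id, idx_type, order, NULL, op, NULL);
    }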
Tested: windows, linux
Added support for displaying several iteration orders on dataset attributes, 4 new tests in test script (name ascending, name descending, creation_order ascending, creation_order descending)
A new HDF5 file is generated by the generator program.
Tested: windows, linux
Make H5Aiterate() versioned and change all internal use to H5Aiterate2()
Leave some regression tests that exercise H5Aiterate1()
Fix attribute display in h5dump & h5ls to be "by name" by default
Tested on:
FreeBSD/32 6.2 (duty) in debug mode
FreeBSD/64 6.2 (liberty) w/C++ & FORTRAN, in debug mode
Linux/32 2.6 (kagiso) w/PGI compilers, w/C++ & FORTRAN, w/threadsafe,
in debug mode
Linux/64 2.6 (smirom) w/default API=1.6.x, w/C++ & FORTRAN,
in production mode
Solaris/32 2.10 (linew) w/deprecated symbols disabled, w/C++ & FORTRAN,
w/szip filter, in production mode
Mac OS X/32 10.4.10 (amazon) in debug mode
tested: windows, linux, solaris 5.10
usage: h5dump [OPTIONS] file
OPTIONS
-h, --help Print a usage message and exit
-n, --contents Print a list of the file contents and exit
-B, --bootblock Print the content of the boot block
-H, --header Print the header only; no data is displayed
-A, --onlyattr Print the header and value of attributes
-i, --object-ids Print the object ids
-r, --string Print 1-byte integer datasets as ASCII
-e, --escape Escape non printing characters
-V, --version Print version number and exit
-a P, --attribute=P Print the specified attribute
-d P, --dataset=P Print the specified dataset
-y, --noindex Do not print array indices with the data
-p, --properties Print dataset filters, storage layout and fill value
-f D, --filedriver=D Specify which driver to open the file with
-g P, --group=P Print the specified group and all members
-l P, --soft-link=P Print the value(s) of the specified soft link
-o F, --output=F Output raw data into file F
-b B, --binary=B Binary file output, of form B
-t P, --datatype=P Print the specified named datatype
-w N, --width=N Set the number of columns of output
-q Q, --sort_by=Q Sort groups and attributes by index Q
-z Z, --sort_order=Z Sort groups and attributes by order Z
-x, --xml Output in XML using Schema
-u, --use-dtd Output in XML using DTD
-D U, --xml-dtd=U Use the DTD or schema at U
-X S, --xml-ns=S (XML Schema) Use qualified names in the XML
":": no namespace, default: "hdf5:"
E.g., to dump a file called `-f', use h5dump -- -f
Subsetting is available by using the following options with a dataset
attribute. Subsetting is done by selecting a hyperslab from the data.
Thus, the options mirror those for performing a hyperslab selection.
The START and COUNT parameters are mandatory if you do subsetting.
The STRIDE and BLOCK parameters are optional and will default to 1 in
each dimension.
-s L, --start=L Offset of start of subsetting selection
-S L, --stride=L Hyperslab stride
-c L, --count=L Number of blocks to include in selection
-k L, --block=L Size of block in hyperslab
D - is the file driver to use in opening the file. Acceptable values
are "sec2", "family", "split", "multi", "direct", and "stream". Without
the file driver flag, the file will be opened with each driver in
turn and in the order specified above until one driver succeeds
in opening the file.
F - is a filename.
P - is the full path from the root group to the object.
N - is an integer greater than 1.
L - is a list of integers, the number of which is equal to the
number of dimensions in the dataspace being queried
U - is a URI reference (as defined in [IETF RFC 2396],
updated by [IETF RFC 2732])
B - is the form of binary output: MEMORY for a memory type, FILE for the
file type, LE or BE for pre-existing little or big endian types.
Must be used with -o (output file) and it is recommended that
-d (dataset) is used
Q - is the sort index type. It can be "creation_order" or "name" (default)
Z - is the sort order type. It can be "descending" or "ascending" (default)
Examples:
1) Attribute foo of the group /bar_none in file quux.h5
h5dump -a /bar_none/foo quux.h5
2) Selecting a subset from dataset /foo in file quux.h5
h5dump -d /foo -s "0,1" -S "1,1" -c "2,3" -k "2,2" quux.h5
3) Saving dataset 'dset' in file quux.h5 to binary file 'out.bin' using a little-endian type
h5dump -d /dset -b LE -o out.bin quux.h5
bug fix
the binary option expects a full path in -o
TOOLTEST tbin1.ddl -d integer -o $TESTDIR/out1.bin -b LE tbinary.h5
and it prints it in the expected output, making it absolutely not portable.
Solution: made a special macro function TOOLTEST1, identical to TOOLTEST except that it does not print the "Expected output" header
#############################
Expected output for 'h5dump -d integer -o /home/pvn/kagiso/build_hdf5/tools/h5dump/../testfiles/out1.bin -b LE tbinary.h5'
#############################
Tested : linux
1) added 5 new tests for the group creation order
2) modified the h5dump test script to automatically generate non-existing (new) output files
3) cleanup of unused DDL files
4) new modified DDL files include tcomp-3.ddl (new form of named datatype) and the binary output files
tested : linux
first batch for displaying groups in creation order
1) Added an extra parameter of type H5_index_t to dump_group, to be passed to H5Literate.
When a group is to be displayed, its creation properties are queried. If H5P_CRT_ORDER_TRACKED is present in the group property list then the display is made by creation order, otherwise it is made by name.
2) Added to h5dumpgentest the generation of a new file with a hierarchy of groups with creation order alternately on and off.
Note : XML code was not modified
Tested : windows, linux
Move H5Gget_num_objs() and several minor macros, etc. to deprecated
symbols section, replacing it with H5Gget_info().
Tested on:
FreeBSD/32 6.2 (duty)
FreeBSD/64 6.2 (liberty)
Linux/32 2.6 (kagiso)
Linux/64 2.6 (smirom)
AIX/32 5.3 (copper)
Solaris/32 5.10 (linew)
Mac OS X/32 10.4.10 (amazon)
Regenerate the h5diff_types.h5 file, which had somehow gotten mangled
in some earlier checkin (prior to ~2-3 days ago, at least).
Clean up formatting a bit also..
Tested on:
Mac OS X/32 10.4.10 (amazon)
On Windows, Mingw interprets all parameters starting with '/' as paths, and replaces the '/' with its home directory, "C:\Windows\msys\". This was a problem in h5diff tests such as:
h5diff h5diff_101.txt $FILE1 $FILE1 /g1/d1 g1/d2 -v
I've removed the leading '/', as h5diff will interpret it the same either way.
Tested:
kagiso, linew, and smirom, via h5committest
mingw on Windows XP
Modified the current h5dump test script to use h5import/h5diff calls to validate the binary output. At this moment it can only be used with the native test, since h5import does not deal with input endianness.
tested: linux, sunos 5.10
Change H5[D|G|T]<foo>_expand() "temporary" API routines to
H5[D|G|T]<foo>2() "versioned" routines. Also added
H5[D|G|T](create|commit)_anon() routines to continue to allow "anonymous"
objects to be created in a file.
Tested on:
Mac OS X/32 10.4.9 (amazon)
FreeBSD/32 6.2 (duty)
FreeBSD/64 6.2 (liberty)
Linux/32 2.6 (chicago)
Linux/64 2.6 (chicago2)
Add version # and flags to external link format (as fields in a single
byte), in order to accommodate future changes/expansions.
Tested on:
Mac OS X/32 10.4.9 (amazon)
Linux/32 2.6 (chicago)
Linux/64 2.6 (chicago2)
FreeBSD/32 6.2 (duty)
FreeBSD/64 6.2 (liberty)
Bug fixes
Reset the external file list slots' name_offset to its initial state (0) in H5P_dcrt_copy,
so that it conforms to an assertion in H5D_update_entry_info that assumes name_offset is 0 at that point.
This fixes the problem of h5repack and external files; added a new test and files for an external file.
h5diff: check for an error return from H5D_get_storage_size.
tested linux 32, 64
Eliminate storing # of links in "link info" message, regenerate it
when the object is opened instead.
Tested on:
FreeBSD/32 6.2 (duty)
Mac OS X/32 10.4.8 (amazon)
Move ref. count of # of links to an object out of the object header's
prefix and make it a header message instead (since it's a "rare" occurrence),
eliminating some more space for each object in the file.
Inserting this "ref. count" message exposed a flaw in the library's
mechanism for locating a message to promote to another chunk and replace
with a continuation message, which required some additional work to fix.
It's still not completely robust, but it's working for more cases now and
detects failures robustly.
Reduced the minimum size of an object header chunk to just enough to
contain a header message prefix and continuation message.
Tested on:
FreeBSD/32 6.2 (duty)
Move "creation order tracked" flag from "group info" to "link info"
object header message and make the "max. creation order value" optional in the
"link info", if the creation order for links is not tracked.
Also, get rid of unused "index names" flag - names are always indexed
currently.
Tested on:
FreeBSD/32 6.2 (duty)
Eliminate message count from new version of object header prefix -
it can be computed when the header is loaded and the table of messages is
built.
Tested on:
FreeBSD/32 6.2 (duty)
Move attribute tracking information out of object header prefix and
make it into a message that is inserted only when attributes are present on
the object.
Tested on:
FreeBSD/32 6.2 (duty)
The main purpose of this checkin was to eliminate the
space used for tracking creation time indices when there is no way they
can be used (i.e. attributes can't be shared in the file and the user hasn't
turned on attribute creation tracking), however there were some other minor
changes which crept in:
- Fix a cache locking deadlock when a shared attribute and one of its
components end up in the same fractal heap direct block.
(This is fixed the "slow" way for right now, until John has time
to add support for readers/writer locking to the cache.)
- Optimize attribute copying when a copy will be kept during a v2 B-tree
search.
- When freeing a block on disk, attempt to merge it with the metadata
and "small data" aggregators.
Tested on:
Mac OS X/32 10.4.8 (amazon)
FreeBSD/32 6.2 (duty)
Revise latest form of superblock format pretty drastically, to
eliminate unused fields and move rarely used fields into superblock extension.
Finished removing the last vestiges of references to the (never used) "shared"
object header message ID.
Added object header messages for non-default v1 B-tree 'K' values
and for driver info.
Updated testfiles to reflect size changes, etc.
Various minor cleanups, etc.
Tested on:
FreeBSD/32 6.2 (duty)
Mac OS X/32 10.4.8 (amazon)
Bug fix: the macro used for percentage in the unsigned types needed a cast to signed (the difference can be negative)
Unlike the 1.6 branch, the unsigned long long conversion to float is supported by H5Tconvert, so the test that is commented out for 1.6 is run (tools/testfiles/h5diff_16_2.txt).
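An illustrative example of the pitfall behind that fix (assuming a 32-bit unsigned int); the actual percentage macro is not reproduced here:

    #include <stdio.h>

    int main(void)
    {
        unsigned a = 3, b = 5;

        /* An unsigned difference wraps around instead of going negative. */
        printf("%u\n", a - b);                                   /* 4294967294 */
        /* Casting to a signed type first gives the intended -40 percent. */
        printf("%f\n", 100.0 * (double)((long long)a - (long long)b) / (double)b);
        return 0;
    }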
Update the file offsets for files created with "latest version" flag of
h5mkgrp to reflect changes to superblock size.
Tested on:
Linux/32 2.6 (chicago)
Linux/64 2.6 (chicago2)
Mask off the storage utilization for the h5ls output, so that the
h5ls output is more portable (VL datatype size is reported as the memory size
instead of the file size, making the storage utilization incorrect - entered
in bugzilla)
Tested on:
Linux/32 2.6 (chicago)
Linux/64 2.6 (chicago2)
Blank out the modification time, to eliminate another portability issue.
Tested on:
Linux/32 2.6 (chicago)
FreeBSD/32 6.2 (duty)
Mac OS X/32 10.4.8 (amazon)
Add '-p' flag to h5copy tool, to create intermediate "parent" groups
that don't exist in destination file yet.
Add more tests to h5copy script.
Tested on:
Linux/32 2.6 (chicago)
Linux/64 2.6 (chicago2)
Add small 'h5mkgrp' tool to create groups in an HDF5 file from the command
line, allowing the group structure for a file to be created in a script. This
tool closely follows the 'mkdir' command line tool in UNIX/Linux.
Allow tool library applications to pass a FAPL to the h5tool_fopen() call,
giving some additional flexibility to tools which are adding objects to an
existing HDF5 file (like h5copy & h5mkgrp).
Fix missing files in MANIFEST from previous checkin(s).
Tested on:
Linux/32 2.6 (chicago)
Linux/64 2.6 (chicago2)
Add empty & "full" groups to source HDF5 file and test copying them.
Test renaming objects during copy
Test specifying root group path for source & destination objects
Tested on:
Linux/32 2.6 (chicago)
Too minor to require more tests
Refactor h5copy testing script to abstract out some of the common behavior,
obey the "HDF5_NOCLEANUP" environment variable, delete any output file left
over from a previous run, add a "test variation" parameter to output file name
for adding next sequence of test variations, etc.
Tested on:
Linux/32 2.6 (chicago)
Linux/64 2.6 (chicago2)
Add feature to h5copy to allow it to add an object to an existing file,
instead of blowing away existing file.
Modify h5tools_fopen() routine to take access flags, so it can be used
to open an existing file for writing.
Added check to h5copy test script that verifies it has produced a file
with the correct structure.
Tested on:
Linux/32 2.6 (chicago)
Linux/64 2.6 (chicago2)
Fix core dump for iterating over attributes and not passing in a "starting
point".
Update output files missed in previous checkin. This change essentially
reverses a previous change of attribute ordering, leaving the output of h5dump
& h5ls compatible with 1.6.x
Tested on:
Linux/32 2.6 (chicago)
Linux/64 2.6 (chicago2)
2 tests that were previously incorporated inside the array indices test file were separated from it. These are a test with a dataset with dimensions greater than 4GB and a test to read by hyperslabs.
New version of the function h5tools_dump_simple_subset, to display subsetting. The new algorithm is:
Introduced an outer loop for cases where the dimensionality is greater
than 2D. In each iteration a 2D block is displayed by rows in an inner
loop. The remaining slower dimensions above the first 2 are incremented
one at a time in the outer loop.
Note: when blocks are introduced, the display is not correct. This is a bug that requires an improvement of the algorithm.
Fixed #720 h5dump: improve how region references are displayed. h5dump now uses the new API function H5Rget_name to display the name of the referenced dataset instead of its ID. Added a case to the script test file.
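A minimal sketch of the H5Rget_name call for a dataset region reference; the buffer size, helper name, and omitted error handling are illustrative, not the actual h5dump code:

    #include <stdio.h>
    #include "hdf5.h"

    /* Illustrative helper: print the name of the dataset a region reference
     * points to, instead of its object ID. */
    static void print_referenced_name(hid_t loc_id, const hdset_reg_ref_t *ref)
    {
        char    name[1024];
        ssize_t len;

        len = H5Rget_name(loc_id, H5R_DATASET_REGION, ref, name, sizeof(name));
        if (len > 0)
            printf("%s\n", name);
    }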
Fix several bugs
1) the parsing of subsetting was using atoi to convert the parameter to an int, which caused problems for numbers greater than int. Substituted atof.
2) the printing of indices in the subsetting case was not being done. Solution: calculate the element position at the start of the subsetting using the algorithm
Given an index I(z,y,x) its position from the beginning of an array of sizes A(size_z, size_y,size_x) is given by
Position of I(z,y,x) = index_z * size_y * size_x
+ index_y * size_x
+ index_x
And pass that position to the function that dumps data, h5tools_dump_simple_data.
3) several index counters were declared as int; use hsize_t instead
4) modified the test generation program so that it includes test cases for subsetting of 1d, 2d, 3d, and 4d arrays, and added these tests to the shell script
Cleaned warnings
h5diff_array.c:804: warning: passing arg 1 of `fabs' as floating rather
than integer due to prototype
introduced double precision arithmetic when possible instead of single
precision
Added a relative error formula to deal with floating point uncertainty
in the comparison of floats and double types.
Added new tests for this feature to the file generator program and to
the shell script
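A minimal sketch of a relative-error check of this kind; the exact formula and threshold used by h5diff are not reproduced here, so treat this as an assumption-laden illustration:

    #include <math.h>

    /* Illustrative relative-error comparison: two values are treated as
     * different only when their relative error exceeds the given limit.
     * When the denominator is zero the relative error is undefined. */
    static int values_differ(double a, double b, double rel_limit)
    {
        if (b == 0.0)
            return (a != 0.0);         /* relative error not computable */
        return fabs(1.0 - a / b) > rel_limit;
    }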
h5repack revision:
1. added a new test due to the introduction of H5Ocopy in the copy of objects (compressed dataset with references, which still must go through a second sweep of the file to be regenerated).
2. Moved all the source files from the h5repack test program to a new file h5repacktst.c and removed the old ones (testh5repack*.c).
3. Renamed the binary files from test*.h5 to h5repack*.h5 for easy reference.
4. Modified the shell script to use variables for file names instead of hard coded names
h5dump bug 701. Symptom: The creation of a hardlink pointing to the root group "/" causes h5dump to display it as a link pointing to itself.
Cure: the root group was not being inserted in the table that keeps track of object names and links.
Added a test for this in the test generation program: the creation of a hardlink to the root group.
h5diff: print a message of "not comparable" in a case where the relative error
compare is not possible, due to the denominator being zero. Modified
the test file generator program to include an example for this and a new
test in the shell script.
1) added a new parameter to the h5diff function diff_array that contains
the beginning position of the hyperslab, so that the total position in
the array is printed correctly when reading by hyperslabs.
2) added a new test to h5diff that reads and diffs by hyperslabs. The
test reads a 1GB dataset, from which a 1KB hyperslab was written with
differences.
3) added the generation of 2 files to the generator program to test the
h5diff hyperslab read.
4) changed the h5diff binary pre-generated file names to be more
descriptive (e.g., instead of file1.h5, made it h5diff_basic1.h5)
5) changed the name of the h5repack options text file to info.h5repack
Fixes for bugs 676, 228
676: both h5repack and h5diff use H5Dread. In the case of a "big"
dataset, use read/write by hyperslabs the same way h5dump does. An
arbitrary value of 1GB was defined for "big", i.e., if the dataset is
greater than 1GB, then read/write by hyperslabs.
228: use the file type in read/write by default. A new switch -n was
introduced if the user wants to use a native type, which was the
previous default.
Added a new test for h5repack that repacks a 1GB dataset
Tested: heping (serial, parallel), sol, copper
Add "use the latest format" support for dataspace object header encode/
decode routines and clean up format a bit for the latest format (new to 1.8.x
releases)
Remove storing 'perm' parameter for array datatypes in memory and the file,
and add test to make certain that if any user applications are attempting to
store them, we get some reports back. (Should be unlikely, since the RefMan
says that the parameter is not implemented and is unsupported).
Carry those changes into the tests, etc.
Clean up a bunch more compiler warnings.
Tested on:
FreeBSD/32 4.11 (sleipnir) w/threadsafe
Linux/32 2.4 (heping) w/FORTRAN & C++
Linux/64 2.4 (mir) w/enable-1.6-compat
Further minor modifications to the file format for tracking links in groups.
This is tentatively the "final" file format for groups.
Tested on:
Linux/32 2.6 (chicago)
Linux/64 2.6 (chicago2)
File format is not stable, don't keep files produced!
Description:
First stage of checkins modifying the format of groups to support creation
order. Implement "dense" storage for links in groups.
Try to clarify some of the symbols for the H5L API.
Add the H5Pset_latest_format() flag for FAPLs, to choose to use the newest
file format options (including "dense" link storage in groups)
Add the H5Pset_track_creation_order() flag for GCPLs, to enable creation
order tracking in groups (although no index on creation order yet).
Remove --enable-group-revision configure flag, as file format issues are
now handled in a backwardly/forwardly compatible way.
Clean up lots of compiler warnings and other minor formatting issues.
Tested on:
FreeBSD/32 4.11 (sleipnir) w/threadsafe
Linux/32 2.4 (heping) w/FORTRAN & C++
Linux/64 2.4 (mir) w/enable-v1.6 compat
Mac OSX/32 10.4.8 (amazon)
AIX 5.3 (copper) w/parallel & FORTRAN
revised binary flags, added a new file to the test generator program to
be used in the binary tests
usage is now
-o F, --output=F Output raw data into file F
-b F, --binary=F Binary output, of form F (into file -o F).
Recommended usage is with --dataset=P
Form F of binary output is: MEMORY for memory type,
FILE for the disk file type, LE or BE for pre-existing
little or big endian types
example
./h5dump -d integer -b MEMORY -o out.bin tbinary.h5
Users can create external links using H5L_create_external(). These links
point to an object in another HDF5 file. Users can alter the behavior of
external links or create new kinds of links by registering callbacks
using the H5L interface.
Added tests, tools support, etc.
Also a number of other, minor changes have been made (some restructuring of
the H5L interface, for instance).
Additional documentation and examples are forthcoming.
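A minimal usage sketch (shown here with the H5Lcreate_external spelling; the file and object names are examples only):

    #include "hdf5.h"

    int main(void)
    {
        /* Create file1.h5 with a link "ext_link" pointing to /dset in file2.h5. */
        hid_t fid = H5Fcreate("file1.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);

        H5Lcreate_external("file2.h5", "/dset", fid, "ext_link",
                           H5P_DEFAULT, H5P_DEFAULT);
        H5Fclose(fid);
        return 0;
    }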
1. changed the -F flag option names to "BE" and "LE" for big and little endian
2. added a more verbose usage message for these options
3. add a new test
4. added *.bin files to the make clean rule
new feature
Description:
added support for h5dump to dump binary data using the file type format
added one test to the test script that tests this
Solution:
Platforms tested:
mir
shanti
copper
Misc. update:
new feature. h5dump output of binary data
Description:
a new switch -b FILE_NAME that dumps the contents of memory data to file FILE_NAME in binary form
new program binread.c that reads the contents of this file and outputs it to stdout
added a test for the h5dump shell script that does a run of -b
the binread.c program reads the data used in this run, usage is ./binread FILE_NAME
Solution:
Platforms tested:
linux
solaris
AIX
Misc. update:
new feature
Description:
modified the test case for region points, so that there is not an ordered sequence of points that differ, but one with gaps
Solution:
Platforms tested:
linux
Misc. update:
new feature
Description:
added support for the printout of dataset region references differences
added a new test for this
merged the h5diff generator of test files into a single file
Solution:
Platforms tested:
linux 32, 64
solaris
Misc. update:
"Hide" file format changes (for now)
Description:
Add ifdef's (controlled by the --enable-group-revision configure flag)
to disable group revision changes to the file format, in order to allow alpha
release to go ahead without releasing an unsupported version into the wild.
Platforms tested:
FreeBSD 4.11 (sleipnir)
Linux 2.4 32-bit (heping)
Linux 2.4 64-bit (mir)
Solaris 2.9 (shanti)
new feature
Description:
added to h5repack the printout of the compression ratio for filters, displayed after the filter name and obtained with H5Dget_storage_size before and after applying the filter
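A minimal sketch of how such a ratio could be computed from H5Dget_storage_size; the helper name and output format are illustrative, not the actual h5repack code:

    #include <stdio.h>
    #include "hdf5.h"

    /* Illustrative helper: report the compression ratio from the storage sizes
     * of the input (unfiltered) and output (filtered) datasets. */
    static void print_ratio(hid_t dset_in, hid_t dset_out)
    {
        hsize_t size_in  = H5Dget_storage_size(dset_in);
        hsize_t size_out = H5Dget_storage_size(dset_out);

        if (size_out > 0)
            printf(" (%.3f:1)", (double)size_in / (double)size_out);
    }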
Solution:
Platforms tested:
linux
solaris
AIX
Misc. update:
bug fixes
Description:
h5dump/h5ls were not displaying long doubles correctly
Solution:
1) the print datatype functions were incorrectly testing for the valid return value from H5Tequal
(TRUE), causing the display of an incorrect name of a datatype in error cases from H5Tequal
2) h5tools_print_str did not have a case for native long double
3) added a file generator for a long double dataset
4) added one script test for the long double data (commented out; some systems don't have a native long double match, and the output differs)
5) added a vms file and h5dump script test
Platforms tested:
linux 32, 64
solaris
AIX
Misc. update:
bug fix
Description:
regenerate a binary file. The previous one was not being opened on the scale offset filter dataset, probably due to a solved bug where the corresponding binary file was not updated.
Solution:
Platforms tested:
linux
Misc. update:
bug fix
Description:
percent relative error was done using integer arithmetic; use floating point instead
added the case for unsigned long long integer to float conversion
Solution:
Platforms tested:
linux (32,64)
AIX
solaris
Misc. update:
bug fix
Description:
1) added a more explanatory usage message
2) the percent relative error for the integer type (division) was being done using integer arithmetic; use floating point arithmetic instead
3) added a new test for integer percent
Solution:
Platforms tested:
linux (32,64)
AIX
solaris
Misc. update:
bug fix
Description:
the compare check for the datatype sign was not done in the correct place, causing invalid
comparisons to be made
Solution:
put it in the correct place
Platforms tested:
linux 32, 64
AIX
Misc. update:
bug fix
Description:
1) the compare flag test was not being put in the correct place, making comparison attempts that were not supposed to be made
2) some duplicate warnings were being made
Solution:
eliminate the duplicate warnings, put the compare flag check in the correct place
Platforms tested:
linux 32, 64
solaris
Misc. update:
bug fixes
Description:
1) the option logic for the print of long types was done incorrectly (extra lines of code that were not supposed to be there)
2) the print of long long types was incorrect
Solution:
1) removed the incorrect code
2) made a long long cast on the printf call
Platforms tested:
linux (32 and 64)
solaris
Misc. update:
bug fix
Description:
long long type was being incorrectly printed
Solution:
added a long_long cast to print function
Platforms tested:
linux
Misc. update:
bug fix, new features
Description:
when comparing links, the number of differences found was not being output
Solution:
print it
add 3 more tests that test the output of differences for 1) groups 2) datatypes 3) links
Platforms tested:
linux
Misc. update:
Code cleanup
Description:
Check in some of the code cleanups from working on the external link
support. (This doesn't include any of the external link features)
Platforms tested:
FreeBSD 4.11 (sleipnir)
Mac OSX.4 (amazon)
Linux 2.4
new h5diff test
Description:
added a test to the h5diff test script that compares a file to itself.
this test is done to test some features of the library that
open the same file and the root group twice
Solution:
Platforms tested:
linux
solaris
Misc. update:
h5diff new feature
Description:
a user asked for the message
"Some objects are not comparable"
to be more noticeable
Solution:
--------------------------------
Some objects are not comparable
--------------------------------
Platforms tested:
linux
Misc. update:
new test for h5repack, to synchronize tests between unix and windows
it requires a new file to be added
Description:
Solution:
Platforms tested:
linux
Misc. update:
bug fix
Description:
when 2 objects were not comparable, the final print information for the non-verbose mode printed "0 differences found"
Solution:
replaced instead with a Summary message that says
"Some objects were not comparable"
Platforms tested:
linux
solaris
Misc. update:
Description: See details in Bug #213. The family member file size wasn't saved
anywhere in the file. When a family file is opened, the first member's size determines
the member size.
Solution: This is the fourth step of checkin. A test suite is added for h5repart,
including a program to generate the test files, a script file to run h5repart,
and a program to verify repartitioned files can be opened by the library.
There's a change from the first step of checkin. Family name template is no
longer saved in the superblock because different pathname can make the name
different.
In the third step of checkin, h5repart has been modified. If h5repart is used
to change the size of the family member file, the new size (actual member size) is saved
in the superblock.
In the second step of checkin, multi driver is checked against the driver
name saved in superblock. Wrong driver will result in a failure with an error message indicating
multi driver should be used. This change includes split driver because it's a special case for multi
driver.
In the first step of checkin, the family member size and name template (unused at this stage) are saved
in the file superblock. When the file is reopened, the size passed in is compared with the size saved
in the superblock. A different size will trigger a failure with an error message indicating the right size.
The wrong driver to open a family file will cause a failure, too.
Platforms tested: h5committest and fuss.
Misc. update: MANIFEST
Bug fix/code cleanup
Description:
Add tests to determine that very long (64K+) object names are working.
Fixed a couple of bugs in h5dump where they weren't...
Platforms tested:
FreeBSD 4.11 (sleipnir)
Solaris 2.9 (shanti)
feature
Description:
h5repack support for scaleoffset compression
Checking in early to help debug the filter.
Solution:
Added messages and command line to handle new scale offset filter.
Note: TESTS ARE DISABLED FOR NOW. The filter is not
complete; repack tests may fail due to known problems.
PLEASE DO NOT MESS WITH THE SCALEOFFSET TESTS AT THIS TIME.
They will be enabled when the filter is ready.
Platforms tested:
verbena,copper,shanti
Misc. update:
MANIFEST
Purpose:
bug fix, new test file
Description:
h5dump was not properly displaying array indices > 3D
Solution:
added the same algorithm and data structure that h5diff uses to calculate the array index
from an element number position
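A minimal sketch of that index calculation (recovering per-dimension indices from a flat, row-major element position, for any rank); the helper name is illustrative:

    #include "hdf5.h"

    /* Illustrative helper: convert a flat element position into per-dimension
     * indices for a dataset with the given dimension sizes. */
    static void position_to_indices(hsize_t pos, const hsize_t *dims, int rank,
                                    hsize_t *indices /*out*/)
    {
        int i;

        for (i = rank - 1; i >= 0; i--) {
            indices[i] = pos % dims[i];
            pos       /= dims[i];
        }
    }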
Platforms tested:
linux
solaris
Misc. update:
Bug Fix/Code Cleanup/Doc Cleanup/Optimization/Branch Sync :-)
Description:
Generally speaking, this is the "signed->unsigned" change to selections.
However, in the process of merging code back, things got stickier and stickier
until I ended up doing a big "sync the two branches up" operation. So... I
brought back all the "infrastructure" fixes from the development branch to the
release branch (which I think were actually making some improvement in
performance) as well as fixed several bugs which had been fixed in one branch,
but not the other.
I've also tagged the repository before making this checkin with the label
"before_signed_unsigned_changes".
Platforms tested:
FreeBSD 4.10 (sleipnir) w/parallel & fphdf5
FreeBSD 4.10 (sleipnir) w/threadsafe
FreeBSD 4.10 (sleipnir) w/backward compatibility
Solaris 2.7 (arabica) w/"purify options"
Solaris 2.8 (sol) w/FORTRAN & C++
AIX 5.x (copper) w/parallel & FORTRAN
IRIX64 6.5 (modi4) w/FORTRAN
Linux 2.4 (heping) w/FORTRAN & C++
Misc. update:
Bug fix (#264)
Description:
h5dump did not print attribute data in ASCII format when
-r is used.
Solution:
Added the ability to print in ASCII for Attributes Data also.
Added a test for printing Attributes with -r option.
tall-2B.ddl is the standard output for printing attributes with -r option.
Platforms tested:
H5committested.
Also in heping.
Misc. update:
Update MANIFEST.
h5repack test
Description:
modified the generated contents of a test file, for easier debugging
(generated just one reference dataset and one region reference)
Solution:
Platforms tested:
linux
aix
solaris
Misc. update:
new test
Description:
added a test that generates and copies a file with a dataset with fill value
(this is to test the property list function H5Pequal)
Solution:
Platforms tested:
linux
solaris
aix
Misc. update: