Description: Multiple copies of the copyright notice appeared in Makefile.in. This was
due to automake copying the copyright notice from included files such as
config/commence.am.
Solution: Automake treats double-hash lines as comments and does not copy them
to Makefile.in. Changed all the copyright notices in config/*.am to use
double hashes.
Tested: kagiso via bin/reconfigure.
Mask off the storage utilization in the h5ls output, so that the output is
more portable (the VL datatype size is reported as the memory size instead of
the file size, making the storage utilization incorrect - entered
in Bugzilla)
Tested on:
Linux/32 2.6 (chicago)
Linux/64 2.6 (chicago2)
Blank out the modification time, to eliminate another portability issue.
Tested on:
Linux/32 2.6 (chicago)
FreeBSD/32 6.2 (duty)
Mac OS X/32 10.4.8 (amazon)
Add '-p' flag to the h5copy tool, to create intermediate "parent" groups
that don't exist in the destination file yet.
Add more tests to h5copy script.
Tested on:
Linux/32 2.6 (chicago)
Linux/64 2.6 (chicago2)
Put paths to testfile input & output directories in one place, making it
easier to modify them if we choose to re-arrange our testfile locations in
the future (this should probably be carried over to other test scripts).
Make h5copy exit more cleanly if no command line parameters are given.
Tested on:
Linux/32 2.6 (chicago)
Linux/64 2.6 (chicago2)
Add small 'h5mkgrp' tool to create groups in an HDF5 file from the command
line, allowing the group structure for a file to be created in a script. This
tool closely follows the 'mkdir' command line tool in UNIX/Linux.
Allow tool library applications to pass a FAPL to the h5tool_fopen() call,
giving some additional flexibility to tools which are adding objects to an
existing HDF5 file (like h5copy & h5mkgrp).
Fix missing files in MANIFEST from previous checkin(s).
Tested on:
Linux/32 2.6 (chicago)
Linux/64 2.6 (chicago2)
Add empty & "full" groups to source HDF5 file and test copying them.
Test renaming objects during copy
Test specifying root group path for source & destination objects
Tested on:
Linux/32 2.6 (chicago)
Too minor to require more tests
Refactor h5copy testing script to abstract out some of the common behavior,
obey the "HDF5_NOCLEANUP" environment variable, delete any output file left
over from a previous run, add a "test variation" parameter to output file name
for adding next sequence of test variations, etc.
Tested on:
Linux/32 2.6 (chicago)
Linux/64 2.6 (chicago2)
Add feature to h5copy to allow it to add an object to an existing file,
instead of blowing away the existing file.
Modify h5tools_fopen() routine to take access flags, so it can be used
to open an existing file for writing.
Added check to h5copy test script that verifies it has produced a file
with the correct structure.
Tested on:
Linux/32 2.6 (chicago)
Linux/64 2.6 (chicago2)
Fix core dump when iterating over attributes without passing in a "starting
point".
Update output files missed in previous checkin. This change essentially
reverses a previous change of attribute ordering, leaving the output of h5dump
& h5ls compatible with 1.6.x
Tested on:
Linux/32 2.6 (chicago)
Linux/64 2.6 (chicago2)
Two tests that were previously incorporated in the array indices test file were separated from it: a test with a dataset with dimensions greater than 4GB and a test that reads by hyperslabs.
Tested platform:
Kagiso only since it is only a comment block change. If it works on one
machine, it should work on all, I hope. Still need to check the parallel
build on copper.
Add support for inserting attributes into creation order index.
Also, update support for dense link & attribute storage in h5debug.
Tested on:
FreeBSD/32 6.2 (duty)
Mac OS X/32 10.4.8 (amazon)
It seems that while Cygwin supports the time command, it has trouble with
the syntax
srcdir="../../hdf5/test" time ./testhdf5
and complains.
The solution is to test the above case in configure and not to use the time
command if it fails; Cygwin is fine with
srcdir="../../hdf5/test" ./testhdf5
Tested on Cygwin and kagiso. This feature shouldn't be a major compatibility
problem since every platform but Cygwin is already fine with the current
syntax.
New version of the function h5tools_dump_simple_subset, to display subsetting. The new algorithm is:
Introduced an outer loop for cases where the dimensionality is greater
than 2. In each iteration a 2D block is displayed by rows in an inner
loop. The remaining slower dimensions above the first 2 are incremented
one at a time in the outer loop (see the sketch below).
Note: when blocks are introduced, the display is not correct. This is a bug that requires an improvement of the algorithm.
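A minimal sketch of the loop structure described above; the helper names are hypothetical and this is not the actual h5tools source:

    #include "hdf5.h"

    /* Sketch: display an n-dimensional subset as a sequence of 2D blocks,
     * row by row.  read_2d_block() and print_row() are hypothetical helpers. */
    static void dump_subset_sketch(hid_t dset, int ndims, const hsize_t *count,
                                   void *buf)
    {
        hsize_t nblocks = 1;
        for (int i = 0; i < ndims - 2; i++)
            nblocks *= count[i];                  /* one pass per 2D block */
        for (hsize_t b = 0; b < nblocks; b++) {   /* outer loop: slower dims */
            read_2d_block(dset, b, buf);          /* read the next 2D block */
            for (hsize_t r = 0; r < count[ndims - 2]; r++)   /* inner loop */
                print_row(buf, r, count[ndims - 1]);         /* one row */
        }
    }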
Modified the format for printing reference information for a string, related to the same change in h5dump to display the name of the referenced dataset.
Fixed #720 h5dump: improve how region references are displayed. h5dump now uses the new API function H5Rget_name to display the name of the referenced dataset instead of its ID. Added a case to the script test file.
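A minimal sketch of the name lookup now performed for a region reference (the buffer size and variable names are illustrative):

    /* 'ref' points at a region reference already read from the file and
     * 'dset' is the dataset containing it. */
    char    name[1024];                        /* illustrative buffer size */
    ssize_t len = H5Rget_name(dset, H5R_DATASET_REGION, ref, name, sizeof(name));
    if (len >= 0)
        printf("%s", name);                    /* name of the referenced dataset */
    else
        printf("<undefined>");                 /* fall back if no name is found */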
Fix for Bugzilla bug #551.
Several programming errors contributed to this bug:
1) the parsing of subsetting parameters was using atoi to convert the parameter to an int,
which caused problems for numbers greater than an int can hold. Substituted atof.
2) several index counters were declared as int; use hsize_t instead.
3) the numerical format passed to printf was %lu; defined one compatible with
hsize_t instead (unsigned long long).
Fix several bugs:
1) the parsing of subsetting was using atoi to convert the parameter to an int, which caused problems for numbers greater than an int can hold. Substituted atof.
2) the printing of indices in the subsetting case was not being done. Solution: calculate the element position at the start of the subsetting using the following algorithm (see the sketch after this list):
Given an index I(z,y,x), its position from the beginning of an array of sizes A(size_z, size_y, size_x) is given by
Position of I(z,y,x) = index_z * size_y * size_x
                     + index_y * size_x
                     + index_x
and pass that position to the function that dumps data, h5tools_dump_simple_data.
3) several index counters were declared as int; use hsize_t instead.
4) modified the test generation program so that it includes test cases for subsetting of 1d, 2d, 3d, and 4d arrays and added these tests to the shell script.
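A minimal sketch of the offset calculation from item 2 (variable names are illustrative):

    /* Row-major position of element (iz, iy, ix) in an array with
     * dimensions (size_z, size_y, size_x), as described above. */
    hsize_t offset = iz * size_y * size_x
                   + iy * size_x
                   + ix;
    /* 'offset' is then handed to h5tools_dump_simple_data() as the starting
     * element position, so the printed indices come out correct. */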
Take out separate memory type in the file for SOHM objects and create
aliases for existing memory types for SOHM use.
Tested on:
FreeBSD/32 4.11 (sleipnir)
Cleaned warnings:
../../../hdf5/tools/h5repack/h5repack_copy.c:615: warning: passing arg 3 of `print_dataset_info' as `float' rather than `double' due to prototype
Introduced double precision arithmetic.
Fixed warning:
../../../hdf5/tools/h5diff/h5diff_common.c: In function `usage':
../../../hdf5/tools/h5diff/h5diff_common.c:346: warning: function might be possible candidate for attribute `noreturn'
Fixed 2 initializations of char* with HDstrdup and HDcalloc:
info->prefix = HDcalloc(1, 1);
fname = HDstrdup(argv[opt_ind]);
Some were exposed by compiler warnings.
More compiler warnings:
../../../hdf5/tools/h5diff/h5diffgentest.c:111: warning: passing arg 1 of `test_hyperslab' discards qualifiers from pointer target type
Cleaned warnings
h5diff_array.c:804: warning: passing arg 1 of `fabs' as floating rather
than integer due to prototype
Introduced double precision arithmetic when possible instead of single
precision.
Added a relative error formula to deal with floating point uncertainty
in the comparison of floats and double types.
Added new tests for this feature to the file generator program and to
the shell script
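One common form of the relative-error test, as a sketch; the exact formula and tolerance used by h5diff are not reproduced here, so treat this as illustrative:

    #include <math.h>

    /* Nonzero when a and b differ by more than the relative tolerance eps;
     * a zero denominator is flagged as the "not comparable" case. */
    static int rel_diff(double a, double b, double eps, int *not_comparable)
    {
        *not_comparable = (b == 0.0);
        if (*not_comparable)
            return 1;
        return fabs(1.0 - a / b) > eps;
    }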
Make h5repacktst clean up a big file whose name was changed to "h5repack_big_out.h5"; use H5Ocopy except when the original DCPL has filters or a request is made for such; more code cleaning.
Basic support for H5Literate() routine. Still needs to be fleshed out and
refactored to simplify. Also, needs tests. :-)
Tested on:
FreeBSD/32 4.11 (sleipnir)
Linux/32 2.4 (heping)
Linux/64 2.4 (mir)
AIX/32 5.? (copper)
Mac OS X/32 10.4.8 (amazon)
The version of libtool used by HDF5 isn't directly affected by the reconfigure
script; instead, libtoolize --force must be used by hand. Libtool was the
source of the problem, so rolling its version back to 1.5.14 should solve the
issue (at least temporarily).
Reconfigure should still work on both heping and kagiso.
Tested on heping, kagiso, and tg-login3.
h5repack revision:
1. Added a new test due to the introduction of H5Ocopy in the copy of objects (a compressed dataset with references, which must still go through a second sweep of the file to be regenerated).
2. Moved all the source files from the h5repack test program to a new file h5repacktst.c and removed the old ones (testh5repack*.c).
3. Renamed the binary files from test*.h5 to h5repack*.h5 for easy reference.
4. Modified the shell script to use variables for file names instead of hard-coded names.
This feature is still in progress; Shared Object Header Messages are not
complete as a feature and are not thoroughly tested. There are still
"TODO" comments in the code (comments with the word "JAMES" in them,
so as not to be confused with other TODO comments).
Hopefully this checkin will reduce the likelihood of conflicts as I finish
implementing this feature.
All current tests pass on juniper, copper (parallel), heping, kagiso, and mir.
Add new H5Lget_val_by_idx() routine & tests.
Also includes most of changes for H5Ldelete_by_idx() routine.
Tested on:
Mac OS X/32 10.4.8 (amazon)
FreeBSD/32 4.11 (sleipnir)
Linux/32 2.4 (heping)
Linux/64 2.4 (mir)
AIX/32 5.? (copper)
Introduced the second sweep of the file for the case where a reference is present and H5Ocopy was not used.
Moved the code from file h5repack_refs.c to h5repack_copy.c and removed the first file.
Should disable linking against shared libraries in Fortran for compilers that
don't support shared libraries.
Should also fix problem when the wrong Fortran file extension was specified.
If these changes don't solve the Daily Test issues, I'll look at backing out
the autotool version change until I have time to fix them.
Tested on heping, kagiso, juniper.
h5repack support for H5Ocopy in the copy of objects. The old method
for recreating references was dropped (references are recreated in a second
traversal of the file).
The logic for using H5Ocopy or not is (see the sketch below):
if the input DCPL has filters or a non-default layout OR these are
requested by the user THEN
use the old h5repack read/write
ELSE
use H5Ocopy
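A minimal sketch of that decision; the helper names are hypothetical, not the actual h5repack code:

    /* Use H5Ocopy only when nothing about the storage has to change. */
    int can_use_h5ocopy = !dcpl_has_filters(dcpl_id)            /* hypothetical */
                       && !dcpl_has_custom_layout(dcpl_id)      /* hypothetical */
                       && !user_requested_filter_or_layout(options);
    if (can_use_h5ocopy)
        H5Ocopy(fid_in, name, fid_out, name, H5P_DEFAULT, H5P_DEFAULT);
    else
        copy_by_read_write(fid_in, fid_out, name, options);     /* old path */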
h5dump bug 701. Symptom: The creation of a hardlink pointing to the root group "/" causes h5dump to display it as a link pointing to itself.
Cure: the root group was not being inserted in the table that keeps track of object names and links.
Added a test for this in the test generation program, the creation of a hardlink to the root
Added a framework to display information about a particular object.
This option (-O object_name) is not available to the users yet.
Currently only the name of an object (or objects) is displayed.
Platforms tested:
sol, kagiso and copper.
the linkval buffer, per Elena and Frank's suggestions while revising
the documentation. Added error checking using this size, as well as a
couple of tests.
Tested on juniper, kagiso, and sol.
h5diff: print a message of "not comparable" in a case where the relative error
comparison is not possible, due to the denominator being zero. Modified
the test file generator program to include an example for this and a new
test in the shell script.
1) added a new parameter to the h5diff function diff_array that contains
the beginning position of the hyperslab, so that the total position in
the array is printed correctly when reading by hyperslabs.
2) added a new test to h5diff that reads and diffs by hyperslabs. The
test reads a 1GB dataset, from which a 1KB hyperslab was written with
differences.
3) added the generation of 2 files to the generator program to test the
h5diff hyperslab read.
4) changed the h5diff binary pre-generated file names to be more
descriptive (e.g., instead of file1.h5, it is now h5diff_basic1.h5)
5) changed the name of the h5repack options text file to info.h5repack
1. Bug fix: the h5_cleanup file names were not built properly in the h5repacktest call.
2. Added only a call for test_bigout.h5 to be cleaned, because the other files are generated anyway by the shell script. test_bigout.h5 is only made in the C program part (h5repacktst).
Fixes for bugs 676, 228
676: both h5repack and h5diff use H5Dread. In the case of a "big"
dataset, read/write by hyperslabs the same way h5dump does (see the sketch
after this entry). An arbitrary value of 1GB was defined for "big", i.e., if
the dataset is greater than 1GB, then read/write by hyperslabs.
228: use the file type in read/write by default. A new switch -n was
introduced if the user wants to use a native type, which was the
previous default.
Added a new test for h5repack that repacks a 1GB dataset
Tested: heping (serial, parallel), sol, copper
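A minimal sketch of reading and rewriting a large dataset by hyperslabs, as described for bug 676 above (the 1-D shape, pass size, and variable names are illustrative, and the output dataset is assumed to have the same shape):

    hid_t   fspace = H5Dget_space(dset_in);
    hsize_t total, offset = 0;
    H5Sget_simple_extent_dims(fspace, &total, NULL);
    while (offset < total) {
        hsize_t count = (total - offset < NELEMS_PER_PASS)
                        ? total - offset : NELEMS_PER_PASS;
        hid_t mspace = H5Screate_simple(1, &count, NULL);
        H5Sselect_hyperslab(fspace, H5S_SELECT_SET, &offset, NULL, &count, NULL);
        H5Dread(dset_in, ftype, mspace, fspace, H5P_DEFAULT, buf);
        H5Dwrite(dset_out, ftype, mspace, fspace, H5P_DEFAULT, buf);
        H5Sclose(mspace);
        offset += count;
    }
    H5Sclose(fspace);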
Overhaul usage of object header chunks to reduce I/O operations and
memory allocations. The object header prefix is now stored in the first
object header chunk's "image" in memory.
Also, lots of formatting cleanups.
Taught h5debug tool about new object header format (which isn't enabled
just yet).
Tested on:
Linux/32 2.6 (chicago)
Linux/64 2.6 (chicago2)
Added configure option --enable-direct-vfd/--disable-direct-vfd to enable/disable Direct I/O support. The default
is enabled. There's a small test in test/vfd.c. Another way to test it is to set the environment
variable HDF5_DRIVER to "direct" and run "make check" in the test/ directory. There'll be some
further improvement in the following checkin, including allowing the user to provide the memory boundary
value, file block size, and copying buffer size.
Add "use the latest format" support for dataspace object header encode/
decode routines and clean up format a bit for the latest format (new to 1.8.x
releases)
Remove storing 'perm' parameter for array datatypes in memory and the file,
and add test to make certain that if any user applications are attempting to
store them, we get some reports back. (Should be unlikely, since the RefMan
says that the parameter is not implemented and is unsupported).
Carry those changes into the tests, etc.
Clean up a bunch more compiler warnings.
Tested on:
FreeBSD/32 4.11 (sleipnir) w/threadsafe
Linux/32 2.4 (heping) w/FORTRAN & C++
Linux/64 2.4 (mir) w/enable-1.6-compat
and quote its arguments. Also checks for the 'socket' library on
Solaris.
If this patch passes the Daily Tests and makes the user happy, I'll
port it back to the 1.6 branch.
Tested on mir and sol.
Further minor modifications to the file format for tracking links in groups.
This is tentatively the "final" file format for groups.
Tested on:
Linux/32 2.6 (chicago)
Linux/64 2.6 (chicago2)
File format is not stable, don't keep files produced!
Description:
First stage of checkins modifying the format of groups to support creation
order. Implement "dense" storage for links in groups.
Try to clarify some of the symbols for the H5L API.
Add the H5Pset_latest_format() flag for FAPLs, to choose to use the newest
file format options (including "dense" link storage in groups)
Add the H5Pset_track_creation_order() flag for GCPLs, to enable creation
order tracking in groups (although no index on creation order yet).
Remove --enable-group-revision configure flag, as file format issues are
now handled in a backwardly/forwardly compatible way.
Clean up lots of compiler warnings and other minor formatting issues.
Tested on:
FreeBSD/32 4.11 (sleipnir) w/threadsafe
Linux/32 2.4 (heping) w/FORTRAN & C++
Linux/64 2.4 (mir) w/enable-v1.6 compat
Mac OSX/32 10.4.8 (amazon)
AIX 5.3 (copper) w/parallel & FORTRAN
Revised binary flags; added a new file to the test generator program to
be used in the binary tests.
Usage is now:
-o F, --output=F Output raw data into file F
-b F, --binary=F Binary output, of form F (into file -o F).
Recommended usage is with --dataset=P
Form F of binary output is: MEMORY for memory type,
FILE for the disk file type, LE or BE for pre-existing
little or big endian types
Example:
./h5dump -d integer -b MEMORY -o out.bin tbinary.h5
Review, revise & check in Peter's latest round of object copy changes,
which add basic support for datasets & attributes with reference datatypes.
Tested on:
Linux/32 2.6 (chicago)
Linux/64 2.6 (chicago2)
Improve density of the B-tree further. For greater depths of B-trees,
the gains are over 100%...
Also, don't split internal nodes with 3->4 splits; use a 1->2 split
instead, so that the density of the nodes around a split is maximized.
Tested:
Mac OS X/PPC 10.4 (amazon)
Linux/32 2.6 (chicago)
Linux/64 2.6 (chicago2)
Code cleanup.
Description:
Removed argc and argv from the function arguments of h5tools_get_fapl() and
h5tools_fopen(). They were used to call MPI_Init() which was no longer
needed.
Tested:
heping (serial and parallel).
Refactor the file storage of "twig" nodes in the B-tree to allow them to
store more records, increasing the average density of the B-tree 30-40%.
Increase # of records in "insert lots" regression test to still create
B-tree of depth 4
Update h5debug to interpret the difference between 'branch' and 'twig' internal
nodes in the B-tree correctly.
Tested on:
FreeBSD/32 4.11 (sleipnir)
Linux/32 2.4 (heping)
Linux/64 2.4 (mir)
Solaris/64 2.9 (shanti)
Since these examples need to follow filesystem paths, the Makefiles need
to create directories in the examples directory; added this to the
Makefile.am.
Tested on Windows, mir, juniper
Several changes, all mooshed together:
- Add support for "tiny" objects - which can be stored in the heap
ID itself, instead of in the heap data blocks.
- Flesh out support for compressed direct blocks, but comment it
out until John's got some metadata cache changes in place to
support it.
- Add support for applying I/O pipeline filters to 'huge' objects
- Refactor 'huge' object code to store information for 'huge' objects
directly in the heap ID, when there are I/O pipeline filters
applied to the heap (and the heap ID is large enough to hold the
information)
- Update h5debug tool to correctly handle 'huge' & 'tiny' objects.
- Misc. other code cleanups, etc.
Tested on:
FreeBSD/32 4.11 (sleipnir)
Linux/64 2.4 (mir)
Solaris/64 2.9 (shanti)
Clean up compiler warnings/failures in test/links.c, especially when the
--disable-production flag is used with --enable-group-revision.
Modify binary dumping in h5dump to clean up files created [a band-aid
solution to not actually creating the files in the srcdir, but better than
just leaving the files around... :-/ ]
Tested:
FreeBSD 4.11 (sleipnir) (w/ configure flags above)
Too minor to require h5committest
Bug fixes.
Description:
There were MPI_Init and MPI_Finalize calls in the code of h5tools_fopen in
parallel mode. But if a non-MPI tool was invoked to open a non-existing
file, it would try to open the non-existing file with different VFDs and
eventually come to try the MPIO or the MPIPOSIX VFD; then it would
try to do MPI_Init, which would fail in the MPI environment if the a.out
was not launched by MPI properly.
Solution:
MPI_Init and MPI_Finalize in general should be called by the MPI application,
not by a library subroutine in the manner that was done here.
Removed the MPI_Init and MPI_Finalize calls. Used MPI_Initialized to
verify whether this has been launched as an MPI application in the proper
manner before attempting to use the MPIO or the MPIPOSIX VFD to open
the file (see the sketch below).
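A minimal sketch of the check (not the exact h5tools source):

    /* Only attempt the MPI-based VFDs if the application itself has
     * already initialized MPI. */
    int mpi_up = 0;
    MPI_Initialized(&mpi_up);
    if (mpi_up) {
        H5Pset_fapl_mpio(fapl_id, MPI_COMM_WORLD, MPI_INFO_NULL);
        fid = H5Fopen(fname, H5F_ACC_RDONLY, fapl_id);
    }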
Tested:
In tg-ncsa parallel, where it had failed explicitly and also in Heping,
using both serial and parallel mode.
Users can create external links using H5L_create_external(). These links
point to an object in another HDF5 file. Users can alter the behavior of
external links or create new kinds of links by registering callbacks
using the H5L interface.
Added tests, tools support, etc.
Also a number of other, minor changes have been made (some restructuring of
the H5L interface, for instance).
Additional documentation and examples are forthcoming.
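A minimal sketch of creating an external link, using the public H5Lcreate_external() spelling of the routine named above; file and link names are illustrative:

    /* Create a link "ext_link" in the root group of source.h5 that points
     * to the object "/dset" inside target.h5. */
    hid_t fid = H5Fopen("source.h5", H5F_ACC_RDWR, H5P_DEFAULT);
    H5Lcreate_external("target.h5", "/dset", fid, "ext_link",
                       H5P_DEFAULT, H5P_DEFAULT);
    H5Fclose(fid);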
Refactored free space manager to use metadata cache for serialized free
space sections. This speeds up the fractal heap test considerably...
Tested:
FreeBSD 4.11 (sleipnir)
Linux 2.4/32 (chicago)
Linux 2.4/64 (mir)
Mac OS X (amazon)
the $srcdir properly. It is not right to chdir into testfiles and write
files there because in real srcdir mode, one should not change things
in the srcdir area, which could be shared by multiple builds simultaneously.
Solution: added the proper $srcdir components to the source file name.
Also clean up the indentation by cb.
Tested: only by hand in heping.
"make check-vfd" will now run all tests in the test directory with different
file drivers (at least, all of those tests that use the testing framework's
FAPL). Tests that fail will be skipped.
This is not a perfect fix, but is better than nothing.
Along with this change, check-vfd should be added to the Daily Tests.
1. changed the -F flag option names to "BE" and "LE" for big and little endian
2. added a more verbose usage message for these options
3. added a new test
4. added a make clean instruction for *.bin files
Bug fix:
calling h5tools_get_fapl with the mpio driver caused H5FD_pl_copy to
fail silently under some conditions; an MPI call was made before MPI_Init.
Solution: corrected the MPI call to be made after MPI_Init and added error return
conditions to H5FD_pl_copy and h5tools_get_fapl.
Tested on copper parallel, mir, shanti.