Revised binary flags and added a new file to the test generator program,
to be used in the binary tests.
Usage is now:
-o F, --output=F   Output raw data into file F
-b F, --binary=F   Binary output, of form F (into the file given by -o F).
                   Recommended usage is with --dataset=P.
Form F of the binary output is: MEMORY for the memory type, FILE for
the disk file type, and LE or BE for pre-existing little- or big-endian
types.
Example:
./h5dump -d integer -b MEMORY -o out.bin tbinary.h5
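As a hedged illustration of consuming that output (assuming the dataset
"integer" holds native ints and the reader shares the writer's memory
layout; the file name comes from the example above):

    #include <stdio.h>
    #include <stdlib.h>

    /* Read back the raw bytes h5dump wrote with -b MEMORY.  With the
     * MEMORY form, the file holds the dataset's in-memory representation,
     * so a matching native type can read it directly. */
    int main(void)
    {
        FILE *fp = fopen("out.bin", "rb");
        int   v;

        if (fp == NULL)
            return EXIT_FAILURE;
        while (fread(&v, sizeof(v), 1, fp) == 1)
            printf("%d\n", v);
        fclose(fp);
        return EXIT_SUCCESS;
    }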
Review, revise & check in Peter's latest round of object copy changes,
which add basic support for datasets & attributes with reference datatypes.
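A minimal usage sketch of the public API that fronts this code path (in
released HDF5 this surfaced as H5Ocopy(); the file and object names below
are made up):

    #include "hdf5.h"

    /* Copy a dataset containing a reference datatype from one file to
     * another.  Error checking omitted; names are hypothetical. */
    int main(void)
    {
        hid_t src = H5Fopen("src.h5", H5F_ACC_RDONLY, H5P_DEFAULT);
        hid_t dst = H5Fcreate("dst.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);

        H5Ocopy(src, "dset_with_refs", dst, "dset_with_refs",
                H5P_DEFAULT, H5P_DEFAULT);
        H5Fclose(src);
        H5Fclose(dst);
        return 0;
    }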
Tested on:
Linux/32 2.6 (chicago)
Linux/64 2.6 (chicago2)
Use a slightly less efficient method of computing log2() on SGI IRIX64,
in order to avoid a compiler bug when optimizations are turned on.
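For illustration, a minimal sketch of the slower-but-portable style of
integer log2 (the name here is illustrative, not HDF5's internal routine):

    #include <stdint.h>

    /* Shift-and-count log2 of a nonzero value: less efficient than the
     * usual bit-manipulation tricks, but immune to the optimizer bug. */
    static unsigned
    log2_loop(uint64_t n)
    {
        unsigned r = 0;

        while (n >>= 1)
            r++;
        return r;
    }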
Tested on:
SGI IRIX64 6.5 (atlantia)
Don't protect the direct block when removing an object from managed heap blocks -
all the information we need is available without the extra I/O.
Tested on:
Linux/32 2.6 (chicago)
Linux/64 2.6 (chicago2)
Add 'lookup3' checksum routine and switch to using it for metadata
checksums - it's just as "strong" as the CRC32 and about 40% faster in general
(with some compiler optimizations, it's nearly as fast as the fletcher-32
algorithm).
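lookup3 itself is too long to reproduce here; as a stand-in from the same
family, here is a sketch of Bob Jenkins' simpler one-at-a-time hash, which
shows the shape of these mixing-based checksums (this is explicitly not
the lookup3 routine this checkin adds):

    #include <stdint.h>
    #include <stddef.h>

    /* Jenkins one-at-a-time hash: mix each byte into the running state,
     * then finalize.  lookup3 applies the same idea, but mixes 12-byte
     * chunks at a time, which is where its speed comes from. */
    static uint32_t
    one_at_a_time(const uint8_t *key, size_t len)
    {
        uint32_t h = 0;

        for (size_t i = 0; i < len; i++) {
            h += key[i];
            h += h << 10;
            h ^= h >> 6;
        }
        h += h << 3;
        h ^= h >> 11;
        h += h << 15;
        return h;
    }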
Tested on:
Linux/32 2.6 (chicago)
Linux/64 2.6 (chicago2)
Some of the tests cannot be run on VMS since they try to open
the same file twice.
Solution:
Skip those tests based on the H5_CANNOT_OPEN_TWICE variable setting.
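A sketch of the bypass pattern in a test program (assuming
H5_CANNOT_OPEN_TWICE arrives as a configure-time macro; the test body
is a placeholder):

    #include <stdio.h>

    /* Skip tests that open the same file twice on platforms (VMS) where
     * that is not possible. */
    int main(void)
    {
    #ifdef H5_CANNOT_OPEN_TWICE
        puts("SKIPPED: this platform cannot open the same file twice");
        return 0;
    #else
        /* ... open the file twice and run the real checks ... */
        return 0;
    #endif
    }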
Platforms tested:
VMS server and heping.
If either the szip or zlib filter was not present, the batch script still tried to use the h5repack tool to test the data compression feature, so the h5repack test failed.
Fixed the bug: if a compression filter is not present, that particular repack test is now skipped.
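The availability check the fix relies on can be sketched with the public
filter API (H5Zfilter_avail() is the real library call; the skip logic
around it is illustrative):

    #include <stdio.h>
    #include "hdf5.h"

    /* Only exercise a compression filter if the library was built with it. */
    int main(void)
    {
        if (H5Zfilter_avail(H5Z_FILTER_DEFLATE) <= 0) {
            puts("deflate (zlib) filter not available -- skipping repack test");
            return 0;
        }
        if (H5Zfilter_avail(H5Z_FILTER_SZIP) <= 0) {
            puts("szip filter not available -- skipping repack test");
            return 0;
        }
        /* ... run the h5repack compression tests here ... */
        return 0;
    }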
Add "use the latest version of the file format" flag to the file access
property list and internal file data structures.
Fix bug where metadata block size was retrieved instead of the small
data block size.
Categorize property list routine prototypes in the public header file.
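For the latest-format flag above, a usage sketch from user code; in
released HDF5 this capability is exposed as H5Pset_libver_bounds(), so
that call stands in here for the internal flag this checkin adds:

    #include "hdf5.h"

    int main(void)
    {
        /* Ask the library to use the latest version of the file format
         * for everything written through this access property list. */
        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
        hid_t file;

        H5Pset_libver_bounds(fapl, H5F_LIBVER_LATEST, H5F_LIBVER_LATEST);
        file = H5Fcreate("latest.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);
        H5Fclose(file);
        H5Pclose(fapl);
        return 0;
    }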
Tested on:
Mac OS/PPC 10.4 (amazon)
Linux/32 2.6 (chicago)
Linux/64 2.6 (chicago2)
Add "op" routine to perform operation on heap object "in situ", to allow
for faster operations on dense links during B-tree traversal & lookup.
Refactor the "read" routine to use the internal version of the "op" routine,
to keep the code duplication as low as possible.
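An illustrative sketch of the pattern (hypothetical names, not HDF5's
internal API): the caller's callback runs directly on the object's bytes
inside the heap, and "read" becomes just an op whose callback copies:

    #include <string.h>
    #include <stddef.h>

    /* Callback applied to a heap object where it lives, without copying. */
    typedef int (*heap_op_t)(const void *obj, size_t len, void *udata);

    static int
    heap_op(const unsigned char *block, size_t off, size_t len,
            heap_op_t op, void *udata)
    {
        return op(block + off, len, udata);
    }

    /* "Read" refactored on top of "op": the callback is a memcpy. */
    static int
    copy_cb(const void *obj, size_t len, void *udata)
    {
        memcpy(udata, obj, len);
        return 0;
    }

    static int
    heap_read(const unsigned char *block, size_t off, size_t len, void *buf)
    {
        return heap_op(block, off, len, copy_cb, buf);
    }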
Tested on:
Mac OS X.4/PPC (amazon)
Linux/32 2.6 (chicago)
Linux/64 2.6 (chicago2)
Purify found some memory leaks in the code related to HDF5 external links.
James provided the fix and asked me to check it in.
Tested:
heping, mir, shanti, and juniper
Add a CRC algorithm to the library, initially for "small" (<256 byte)
metadata blocks.
Update checksum tests to verify it's working correctly.
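For illustration, a bitwise CRC-32 in the classic style (HDF5's internal
routine and exact polynomial/variant may differ; this is a generic sketch):

    #include <stdint.h>
    #include <stddef.h>

    /* Table-free CRC-32 (reflected form, polynomial 0xEDB88320), processed
     * one bit at a time -- compact, which suits small (<256 byte) inputs. */
    static uint32_t
    crc32_sketch(const void *buf, size_t len)
    {
        const uint8_t *p   = (const uint8_t *)buf;
        uint32_t       crc = 0xFFFFFFFFu;

        while (len--) {
            crc ^= *p++;
            for (int i = 0; i < 8; i++)
                crc = (crc >> 1) ^ (0xEDB88320u & (uint32_t)(-(int32_t)(crc & 1u)));
        }
        return ~crc;
    }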
Tested:
Linux/32 2.6 (chicago)
Linux/64 2.6 (chicago2)
(Will be testing on more platforms after checkin)
This is a VMS-only problem.
The H5Dremove_all function was modified to use HDremove. Since HDremove
is defined as HDremove_all in H5private.h, the function became recursive,
causing all kinds of resource problems.
Solution:
Use "remove" instead.
Platforms tested:
VMS server
Remove some references to "twig" and "branch" internal B-tree nodes, which
were eliminated in the previous checkin.
Tested on:
Linux/32 2.6 (chicago)
Linux/64 2.6 (chicago2)
Enable the checksums on the free space tracker's metadata.
Clean up a few compiler warnings from 64-bit machines.
Tested:
Linux/32 2.6 (chicago)
Linux/64 2.6 (chicago2)
Improve the density of the B-tree further. For greater depths of B-trees,
the gains are over 100%...
Also, don't split internal nodes with 3->4 splits; use a 1->2 split
instead, so that the density of the nodes around a split is maximized.
Tested:
Mac OS X/PPC 10.4 (amazon)
Linux/32 2.6 (chicago)
Linux/64 2.6 (chicago2)
Description:
Update copyright notice, after assignment of the HDF products to THG.
Adds "Copyright 2006 by The HDF Group (THG)."
Provides separate credits to the U of I for 'NCSA HF5' and
to THG for 'HDF5'.
Testing:
Visual inspection.
Split edge nodes in the tree with a 1->2 node split, instead of a 2->3 node
split, which creates a denser tree when records are inserted in an ordered
pattern (because a 1->2 split leaves behind full nodes instead of 2/3-full nodes).
Tested:
FreeBSD/32 4.11 (sleipnir)
Linux/64 2.4 (mir)
Linux/32 2.4 (heping)
Solaris/64 2.9 (shanti)
Improve default settings.
Use mpicc, mpif90, and mpirun as the default $CC, $FC, and $RUNPARALLEL when
--enable-parallel is used.
Tested:
in TG-NCSA both serial and parallel.
Code cleanup.
Description:
Removed argc and argv from the function arguments of h5tools_get_fapl() and
h5tools_fopen(). They were only used to call MPI_Init(), which is no longer
needed.
Tested:
heping (serial and parallel).
The thread-safe error test fails due to the changes in the error stack.
Solution:
Updated the expected error stack.
Platforms tested:
heping (the change is too minor; the Tuesday daily tests will catch any failure anyway)
Refactor the file storage of "twig" nodes in the B-tree to allow them to
store more records, increasing the average density of the B-tree by 30-40%.
Increase the number of records in the "insert lots" regression test so that
it still creates a B-tree of depth 4.
Update h5debug to correctly distinguish 'branch' and 'twig' internal
nodes in the B-tree.
Tested on:
FreeBSD/32 4.11 (sleipnir)
Linux/32 2.4 (heping)
Linux/64 2.4 (mir)
Solaris/64 2.9 (shanti)
Re-order the fheap & btree2 tests so that the btree2 test runs first,
because the fractal heaps use v2 B-trees for tracking huge objects.
Tested on:
FreeBSD/32 4.11 (sleipnir)
Linux/32 2.4 (heping)
Linux/64 2.4 (mir)
Solaris/64 2.9 (shanti)
These errors should be investigated more thoroughly later. The underlying
problem in links.c seems to be that files opened multiple times don't share
the same H5F_shared_t struct. Perhaps identifying when this is the case
would be helpful?
Tested on mir.