Bug fix
Description:
The "shared" raw B-tree node can get freed before all the B-tree nodes
had been flushed out to disk and released by the cache.
Solution:
Implement a simple reference counting wrapper for objects in the library
and use it to hold the shared raw B-tree nodes so they aren't freed before all
references to them in memory are released.
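A minimal sketch of the idea, assuming hypothetical names (the library's actual wrapper type and calls may differ):

    /* illustrative reference-counting wrapper; names are hypothetical */
    #include <stdlib.h>
    #include <assert.h>

    typedef void (*free_func_t)(void *obj);

    typedef struct {
        void        *object;    /* wrapped object, e.g. a shared raw B-tree node */
        size_t       count;     /* number of outstanding references */
        free_func_t  free_func; /* called when the last reference is released */
    } rc_t;

    rc_t *rc_create(void *object, free_func_t free_func)
    {
        rc_t *rc = malloc(sizeof(rc_t));
        if (rc) {
            rc->object    = object;
            rc->count     = 1;
            rc->free_func = free_func;
        }
        return rc;
    }

    void rc_incr(rc_t *rc)
    {
        rc->count++;
    }

    void rc_decr(rc_t *rc)
    {
        assert(rc->count > 0);
        if (--rc->count == 0) {   /* last reference: free object, then wrapper */
            rc->free_func(rc->object);
            free(rc);
        }
    }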
Platforms tested:
Solaris 2.7 (arabica)
FreeBSD 4.10 (sleipnir)
IRIX64 6.5 (modei4)
h5dump new tests
Description:
added more tests for the escape/no-escape feature for string data (with vlen, with
compound, and with char data)
Solution:
Platforms tested:
linux
solaris
AIX
Misc. update:
h5dump new tests
Description:
added new tests for the -p option, superblock, file contents, fill values, array indices.
Solution:
Platforms tested:
linux
AIX
solaris
Misc. update:
Description:
Replaced the old metadata cache with a new cache that uses a modified LRU
replacement policy. This should improve the hit rate.
Solution:
Since we want to flush cache entries in increasing address order, I
used the threaded binary B-tree code to store the cache entries.
There is a fair bit of overhead here, so we may want to consider
other options.
While the code is designed to allow the support of other replacement
algorithms, at present, only a modified version of LRU is supported.
The modified LRU algorithm requires that a user-selectable portion
of the cache entries be clean. The clean entries are evicted first
when writes are not permitted. If the pool of clean entries is used
up, the cache grows beyond its user-specified maximum size. The
cache can also exceed its maximum size if the combined size of the
protected (or locked) entries exceeds the maximum size of the cache.
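Roughly, the eviction decision behaves like the sketch below (a simplified illustration of the clean-entry preference; the entry type and list handling are hypothetical, not the cache's actual code):

    /* simplified sketch of modified-LRU victim selection; hypothetical types */
    typedef struct entry_t {
        struct entry_t *next;         /* next entry toward the most-recently-used end */
        int             is_dirty;     /* must be written to disk before eviction */
        int             is_protected; /* locked by a client, never evictable */
    } entry_t;

    /* Scan from the least-recently-used end.  When writes are permitted the
     * first unprotected entry is taken (dirty ones get flushed first); when
     * writes are forbidden only clean, unprotected entries qualify.  If no
     * victim is found, the caller lets the cache grow past its nominal
     * maximum size instead of evicting. */
    entry_t *pick_victim(entry_t *lru_end, int writes_permitted)
    {
        entry_t *e;

        for (e = lru_end; e != NULL; e = e->next) {
            if (e->is_protected)
                continue;
            if (writes_permitted || !e->is_dirty)
                return e;
        }
        return NULL;  /* nothing evictable: cache exceeds its maximum size */
    }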
Platforms tested:
eirene (serial, parallel, fp), h5committested
Misc. update:
Purpose:
HDF5 now supports SZIP with no encoder.
Description:
SZIP can be configured to have both the encoder and the decoder, or just the decoder. HDF5 can now query the configuration of any filter, and will report an error if users try to write using a filter whose encoder is disabled.
Solution:
Added the H5Zget_filter_info function and changed the APIs of H5Pget_filter and H5Pget_filter_by_id. See the SZIP RFC.
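For example, an application can now check whether the SZIP encoder is present before trying to write compressed data; a brief usage sketch:

    #include "hdf5.h"

    /* return nonzero if the SZIP filter can encode (compress), not just decode */
    int szip_can_encode(void)
    {
        unsigned int config = 0;

        if (H5Zget_filter_info(H5Z_FILTER_SZIP, &config) < 0)
            return 0;                                    /* query failed */
        return (config & H5Z_FILTER_CONFIG_ENCODE_ENABLED) != 0;
    }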
Platforms tested:
Copper (fortran, C++, parallel), Sleipnir (C++), Arabica (fortran, C++), Verbena (fortran, C++)
Misc. update:
Description: Added a new API, H5Fget_name, and a new test program called filename.c. The function
returns the name of the file to which an object (file, group, dataset, named datatype, or attribute)
belongs, given the object's ID.
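A brief usage sketch of the new call (the helper name is illustrative; the two-step length query is the usual pattern):

    #include "hdf5.h"
    #include <stdio.h>
    #include <stdlib.h>

    /* print the name of the file to which the given object belongs */
    void print_file_name(hid_t obj_id)
    {
        ssize_t len = H5Fget_name(obj_id, NULL, 0);   /* query the length only */
        char   *name;

        if (len < 0)
            return;
        name = malloc((size_t)len + 1);
        if (name && H5Fget_name(obj_id, name, (size_t)len + 1) >= 0)
            printf("object belongs to file: %s\n", name);
        free(name);
    }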
Platforms tested: h5committest and fuss.
Misc. update: MANIFEST and RELEASE.txt
h5dump output change, new tests
Description:
the storage layout output format had some changes; the same applies to the user-defined filter output
added an option (-y) for not printing the array indices (the default is to print indices)
the option for escaping non-printable characters now covers all characters (the default is not to escape)
(this might not be very portable; the test files are tstring.ddl and tstringe.ddl)
added tests for the new options
Solution:
Platforms tested:
linux
solaris
AIX
Misc. update:
Description: This is the second effort to correct the XML dumper after adding null
dataspace tests for attributes and datasets. Since the XML schema hasn't been updated
for the null dataspace, took the null dataspace tests out of tdset.h5 and tattr.h5 and put them
into a separate file, tnullspace.h5. Only h5dump tests this null dataspace file;
the XML dumper doesn't at this moment. We'll wait until the XML schema is updated
first.
Platforms tested: h5committest and RH 8(fuss)
Misc. update: MANIFEST (added two new files in tools/testfiles, tnullspace.h5
and tnullspace.ddl)
Description:
added code to print strings with newlines and to display the path of references (new source files h5tools_ref.c and .h)
added a test suite in testh5dump.sh.in for
(note: to create testh5dump.sh, one must redo ./configure; this detects the availability of filters
and generates testh5dump.sh accordingly)
1) storage layout
2) fill value
3) print reference with path
4) print strings with new lines
5) filters
Solution:
Platforms tested:
linux
solaris
AIX
Misc. update:
Update shell scripts
Description:
Switch to generating the testh5dump.sh script at configure time, so we can
determine which filters are available to test.
Platforms tested:
FreeBSD 4.9 (sleipnir)
too small to require h5committest
h5dump new version
Description:
added the changes already made for 1.6
support for dumping of
1) filters
2) storage layout
3) fill value
4) comments
5) superblock
6) file contents
7) array indices
Solution:
Platforms tested:
linux
solaris
AIX
Misc. update:
Code optimization & bug fix
Description:
When dimension information is stored in the storage layout message
on disk, it is stored as 32-bit quantities, possibly truncating the dimension
information if a dimension is larger than 32 bits.
Solution:
Fix the storage layout message problem by revising file format to not store
dimension information, since it is already available in the dataspace.
Also revise the storage layout data structures to be more compartmentalized
for the information for contiguous, chunked and compact storage.
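For reference, the truncation risk in the old format is the usual narrowing of a 64-bit extent to 32 bits, as in this stand-alone illustration (not library code):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t dim    = (1ULL << 32) + 5;   /* dimension larger than 32 bits */
        uint32_t stored = (uint32_t)dim;      /* what a 32-bit on-disk field keeps */

        /* prints "4294967301 -> 5": the upper 32 bits are silently lost */
        printf("%llu -> %u\n", (unsigned long long)dim, (unsigned)stored);
        return 0;
    }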
Platforms tested:
FreeBSD 4.9 (sleipnir) w/parallel
Solaris 2.7 (arabica)
h5committest
New Feature
Description:
Add the data transform function, H5Pset_transform().
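A brief usage sketch (the helper name and transform expression are illustrative): set the expression on a dataset transfer property list and pass that list to H5Dread/H5Dwrite.

    #include "hdf5.h"

    /* create a transfer property list that doubles every element on I/O */
    hid_t make_transform_dxpl(void)
    {
        hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);

        if (dxpl < 0)
            return -1;
        if (H5Pset_transform(dxpl, "x * 2") < 0) {  /* "x" stands for each element */
            H5Pclose(dxpl);
            return -1;
        }
        return dxpl;   /* use as the transfer property list in H5Dread/H5Dwrite */
    }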
Platforms tested:
"h5committested".
Copper was down. Ran parallel tests on sol instead.
Misc. update:
new tests for h5repack
Description:
added more tests, both to the test program and to the shell script, that exercise
a variety of different filter conversions
Solution:
Platforms tested:
linux
Misc. update:
new tests for h5repack
Description:
added tests that convert between layout types, covering the matrix of 9 combinations of compact, contiguous, and chunked storage
Solution:
Platforms tested:
linux
AFS has problems; I could not telnet to sol or copper, arabica is really slow (meaning
waiting 1 minute for a typed character), and writing a file gave an error:
arabica 181% afs: failed to store file (145)
afs: failed to store file (145)
Misc. update:
1) new function for tools library
2) new test script for h5repack
Description:
1) currently none of the tools (h5dump, h5diff, etc.) check whether a filter is available
before reading a dataset that uses a filter not available in the current configuration (the behaviour
of the tools until now was to trigger a library error, saying that the dataset cannot be read
due to the missing filter)
Solution:
1) added a new function, h5tools_canreadf, that checks whether a dataset can be read
depending on the availability of filters (see the sketch after this list).
calls to this function were added in h5diff and h5repack.
instead of triggering the library error, a message is printed saying that the dataset
cannot be read (the message is optional; it is on in verbose mode)
2) added a shell script that tests the command-line behaviour of h5repack.
the script does a series of runs of h5repack with several options on the same file (this file, test4.h5,
was added to the testfiles dir).
then, for each run, it runs the h5diff tool with the input and output files.
the goal of the test is also to check item 1); the binary file was saved with filters
that might not be available in other configurations
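The idea behind the check can be sketched with public API calls (an illustration only, using the present-day H5Pget_filter2 name and a made-up helper, not the tools-library function itself):

    #include "hdf5.h"

    /* return 1 if every filter applied to the dataset is available in this
     * build of the library (so the raw data can actually be read), else 0 */
    int can_read_dataset(hid_t dset_id)
    {
        hid_t dcpl = H5Dget_create_plist(dset_id);
        int   ok   = 1;
        int   nfilters, i;

        if (dcpl < 0)
            return 0;
        nfilters = H5Pget_nfilters(dcpl);
        if (nfilters < 0)
            ok = 0;
        for (i = 0; ok && i < nfilters; i++) {
            unsigned     flags;
            size_t       cd_nelmts = 0;
            H5Z_filter_t filter = H5Pget_filter2(dcpl, (unsigned)i, &flags,
                                                 &cd_nelmts, NULL, 0, NULL, NULL);

            if (filter < 0 || H5Zfilter_avail(filter) <= 0)
                ok = 0;   /* filter missing in the current configuration */
        }
        H5Pclose(dcpl);
        return ok;
    }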
Platforms tested:
linux (all filters enabled)
linux (some filters disabled)
solaris (some filters disabled)
AIX (some filters disabled)
windows (all filters on and off )
Misc. update:
Bug fix/optimization
Description:
Address a slowdown in MPI-I/O file metadata operations that was introduced
mid-stream. We now _require_ a POSIX-compliant parallel file system for the
MPI-I/O file driver (as well as for the MPI-POSIX file driver).
Also optimized the file open operation when the file is being created by
reducing the number of collective & synchronizing calls.
Additionally, refactored the MPI routines into a common place, eliminating
duplicated code.
Platforms tested:
FreeBSD 4.9 (sleipnir) w/parallel
h5committest
Purpose: Maintenance
Description: Mac OS X port of the Fortran APIs for the IBM XL Fortran compiler
Solution: Brought back changes from the 1.6 branch
Platforms tested: pommier, h5committested; this time h5committest
complained about copperpp directory and didn't run;
tests on verbena and sol passed.
Misc. update:
Code cleanup
Description:
Refactor library testing framework (used for the testhdf5 & ttsafe tests)
to remove almost all of the duplicated code, moving the common code into a
new 'testframe.c' source file.
Platforms tested:
FreeBSD 4.9 (sleipnir) w/ & w/o thread-safety
h5committest
h5diff new feature
Description:
added comparison of attributes
a new option flag (-a) was added to the options structure; it is 0 by default (no compare)
the output of the compare is the same as for datasets, and all the other flags also apply to attributes
(the memory compare is done in the same function, diff_array)
all the other requirements for comparing datasets (type, space) apply to attributes as well
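A rough sketch of comparing a single attribute on two objects (illustrative only; the helper name is made up, and h5diff itself feeds the buffers to diff_array rather than memcmp):

    #include "hdf5.h"
    #include <stdlib.h>
    #include <string.h>

    /* compare one attribute, by name, on two objects; returns 0 if equal */
    int diff_one_attr(hid_t obj1, hid_t obj2, const char *name)
    {
        int   ret = 1;
        hid_t a1  = H5Aopen(obj1, name, H5P_DEFAULT);
        hid_t a2  = H5Aopen(obj2, name, H5P_DEFAULT);

        if (a1 >= 0 && a2 >= 0) {
            hid_t    t1 = H5Aget_type(a1),  t2 = H5Aget_type(a2);
            hid_t    s1 = H5Aget_space(a1), s2 = H5Aget_space(a2);
            hssize_t n1 = H5Sget_simple_extent_npoints(s1);
            hssize_t n2 = H5Sget_simple_extent_npoints(s2);

            /* same requirements as for datasets: types and spaces must match */
            if (H5Tequal(t1, t2) > 0 && n1 == n2 && n1 > 0) {
                size_t size = H5Tget_size(t1) * (size_t)n1;
                void  *b1 = malloc(size), *b2 = malloc(size);

                if (b1 && b2 &&
                    H5Aread(a1, t1, b1) >= 0 && H5Aread(a2, t2, b2) >= 0)
                    ret = memcmp(b1, b2, size) != 0;
                free(b1);
                free(b2);
            }
            H5Tclose(t1); H5Tclose(t2);
            H5Sclose(s1); H5Sclose(s2);
        }
        if (a1 >= 0) H5Aclose(a1);
        if (a2 >= 0) H5Aclose(a2);
        return ret;
    }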
Platforms tested:
linux
solaris 2.7
IRIX
Misc. update:
Code cleanup
Description:
Removed "H5Git" routines, now that there are library routines which perform
the same functionality.
Platforms tested:
FreeBSD 4.9 (sleipnir)
Linux 2.4 (verbena) w/FORTRAN
too minor for h5committest
Description: The standard output from the Error API test contains some non-standard
messages, like path names and line numbers.
Solution: Use sed in testerror.sh to remove any non-standard information,
to avoid printing such messages.
Platforms tested: h5committest
Description: If enable-hdf5v1_6 is configured in, make some functions
compatible with v1.6. The Error test program prints out some error messages as
it succeeds.
Solution: Use #ifdef H5_WANT_H5_V1_6_COMPAT statements. Use a shell script
to compare the error test output with the standard one.
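The guard works roughly as below; the function and its arguments are made up, only the H5_WANT_H5_V1_6_COMPAT macro comes from this change:

    #include <stdio.h>

    #ifdef H5_WANT_H5_V1_6_COMPAT
    /* v1.6-style signature: no explicit error-stack argument */
    static int example_print(FILE *stream)
    {
        return fprintf(stream, "error stack (default)\n");
    }
    #else
    /* newer signature taking an explicit error-stack identifier */
    static int example_print(long stack_id, FILE *stream)
    {
        return fprintf(stream, "error stack %ld\n", stack_id);
    }
    #endif

    int main(void)
    {
    #ifdef H5_WANT_H5_V1_6_COMPAT
        return example_print(stderr) < 0;
    #else
        return example_print(0L, stderr) < 0;
    #endif
    }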
Platforms tested: h5committest