new test
Description:
Added a test that generates and copies a file containing a dataset with a fill value
(this is to test the property list comparison function H5Pequal).
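For illustration, a minimal sketch of the kind of H5Pequal check the test relies on (the function name and fill value below are made up for this example, not the actual test code):

    #include <stdio.h>
    #include "hdf5.h"

    /* Compare two dataset creation property lists carrying the same fill value. */
    static void check_fill_value_plists(void)
    {
        int   fill  = -1;
        hid_t dcpl1 = H5Pcreate(H5P_DATASET_CREATE);
        hid_t dcpl2 = H5Pcreate(H5P_DATASET_CREATE);

        H5Pset_fill_value(dcpl1, H5T_NATIVE_INT, &fill);
        H5Pset_fill_value(dcpl2, H5T_NATIVE_INT, &fill);

        /* H5Pequal returns a positive value when the two lists are equal. */
        if (H5Pequal(dcpl1, dcpl2) > 0)
            printf("property lists are equal\n");

        H5Pclose(dcpl1);
        H5Pclose(dcpl2);
    }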
Solution:
Platforms tested:
linux
solaris
aix
Misc. update:
Code cleanup
Description:
Fix another batch of minor differences between the development and release
branches.
Platforms tested:
FreeBSD 4.10 (sleipnir) w/parallel
Too minor to require h5committest
Code cleanup
Description:
Clean up collective chunking code a bit.
Also, add an '--enable-instrument' configure flag that provides a mechanism for
determining whether optimized operations actually happened in the library (instead
of falling back to the "normal" code path): 'flag' properties can be set from outside
the library and are set by the library when the "right" thing happens. This is mainly
for debugging and regression checks, so we can make certain we don't break optimized
I/O by accident. Instrumentation is enabled by default when --enable-debug is on (which
is on by default in the development branch and off by default in the release branch),
but it can also be controlled independently with its own configure flag.
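A rough sketch of how such a 'flag' property could be used from a test, assuming instrumentation is enabled; the property name "optimized_io_flag" is hypothetical and this is not the library's actual instrumentation code:

    #include "hdf5.h"

    /* Insert a test-only flag property on a dataset transfer property list;
     * an instrumented library would set it when the optimized path is taken,
     * and the test reads it back afterwards (hypothetical property name). */
    static int optimized_path_taken(void)
    {
        int   flag = 0;
        hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);

        H5Pinsert(dxpl, "optimized_io_flag", sizeof(int), &flag,
                  NULL, NULL, NULL, NULL, NULL);

        /* ... perform the collective chunked I/O under test with 'dxpl' ... */

        H5Pget(dxpl, "optimized_io_flag", &flag);
        H5Pclose(dxpl);
        return flag;
    }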
Platforms tested:
FreeBSD 4.10 (sleipnir) w/parallel
IBM p690 (copper) w/parallel
h5diff and h5repack changes
Description:
h5diff
introduced the following four modes of output:
Normal mode: print the number of differences found and where they occurred
Report mode: print the above plus the differences
Verbose mode: print the above plus a list of objects and warnings
Quiet mode: do not print output (h5diff always returns an exit code of 1 when differences are found)
h5repack
added an extra parameter for SZIP filter (coding method)
the new syntax is
-f SZIP=<pixels per block,coding>
(pixels per block is an even number between 2 and 32, and the coding method is 'EC' or 'NN')
Example of use:
./h5repack -i file1 -o file2 -f SZIP=8,NN -v
updated usage messages, test scripts and files accordingly
Solution:
Platforms tested:
linux
AIX
solaris
Misc. update:
Purpose:
Bug Fix
Description:
If an HDF5 file grows larger than its address space, the library dies and is unable to
write any more data. This is more likely to happen now that users are able to change
the number of bytes used to store addresses in the file.
Solution:
HDF5 now throws an error instead of dying. In addition, it "reserves" address
space for the local heap and for object headers (which do not allocate space
immediately). This ensures that after the error occurs, there is enough address
space left to flush the entire file to disk, so no data is lost.
A more complete explanation is at /doc/html/TechNotes/ReservedFileSpace.html
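A hedged sketch of the scenario (the file name is made up): creating a file whose addresses are only two bytes wide gives it a tiny address space, so writes that outgrow it should now fail with an error while the file remains flushable:

    #include "hdf5.h"

    /* Create a file with 2-byte addresses and sizes, i.e. a 64 KB address space. */
    static hid_t create_small_address_file(void)
    {
        hid_t fcpl = H5Pcreate(H5P_FILE_CREATE);
        hid_t file;

        H5Pset_sizes(fcpl, 2, 2);   /* sizeof_addr = 2, sizeof_size = 2 */
        file = H5Fcreate("tiny_addr.h5", H5F_ACC_TRUNC, fcpl, H5P_DEFAULT);
        H5Pclose(fcpl);

        /* Writes that exceed the address space should now return an error,
         * and the file can still be flushed and closed without losing the
         * data already written. */
        return file;
    }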
Platforms tested:
sleipnir, copper (parallel), verbena, arabica, Windows (Visual Studio 7)
h5repack changes
Description:
There were some requests to change some minor h5repack features.
h5repack only printed a warning about an unavailable filter in verbose mode (-v);
without -v it kept silent, and users sometimes missed this warning.
The request was that it should always print this warning, so the new output is, e.g.:
./h5repack -i test_szip.h5 -o out.h5
Warning: dataset </dset_szip> cannot be read, SZIP filter is not available
Because of this, and to avoid a lot of these messages in the shell test script, I modified
the script h5repack.sh so that it detects the presence of all filters in the environment
(previously it only detected SZIP).
The test files were also divided into more files, to make the script code easier to
follow.
Solution:
Platforms tested:
linux
AIX (no szip)
solaris (no szip, no gzip )
Misc. update:
h5dump new tests
Description:
Added new tests for the printing of array indices (nested objects, several ranks).
Solution:
Platforms tested:
linux
AIX
solaris
Misc. update:
Bug fix
Description:
The "shared" raw B-tree node can get freed before all the B-tree nodes
had been flushed out to disk and released by the cache.
Solution:
Implement a simple reference counting wrapper for objects in the library
and use it to hold the shared raw B-tree nodes so they aren't freed before all
references to them in memory are released.
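An illustrative sketch of the idea, not the library's actual data structures or names:

    #include <stdlib.h>

    /* Minimal reference-counted wrapper: the wrapped object (e.g. a shared
     * raw B-tree node) is only freed when the last reference is released. */
    typedef struct {
        void     *object;                /* the shared object              */
        unsigned  count;                 /* outstanding references         */
        void    (*free_func)(void *);    /* called when count reaches zero */
    } ref_count_t;

    static void rc_incr(ref_count_t *rc)
    {
        rc->count++;
    }

    static void rc_decr(ref_count_t *rc)
    {
        if (--rc->count == 0) {
            rc->free_func(rc->object);
            free(rc);
        }
    }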
Platforms tested:
Solaris 2.7 (arabica)
FreeBSD 4.10 (sleipnir)
IRIX64 6.5 (modei4)
h5dump new tests
Description:
Added more tests for the escape/no-escape feature for string data (with vlen, compound,
and char data).
Solution:
Platforms tested:
linux
solaris
AIX
Misc. update:
h5dump new tests
Description:
Added new tests for the -p option, the superblock, file contents, fill values, and array indices.
Solution:
Platforms tested:
linux
AIX
solaris
Misc. update:
Description:
Replaced the old metadata cache with a new cache that uses a modified LRU
replacement policy. This should improve the hit rate.
Solution:
Since we want to flush cache entries in increasing address order, I
used the threaded binary B-tree code to store the cache entries.
There is a fair bit of overhead here, so we may want to consider
other options.
While the code is designed to allow the support of other replacement
algorithms, at present, only a modified version of LRU is supported.
The modified LRU algorithm requires that a user selectable portion
of the cache entries be clean. The clean entries are evicted first
when writes are not permitted. If the pool of clean entries is used
up, the cache grows beyond its user specified maximum size. The
cache can also exceed its maximum size if the combined size of the
protected (or locked) entries exceeds the maximum size of the cache.
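A hedged C sketch of the eviction rule described above (types and names are illustrative only, not the actual cache code):

    #include <stddef.h>

    typedef struct entry {
        struct entry *lru_prev;     /* toward more recently used entries   */
        size_t        size;
        int           is_dirty;
        int           is_protected; /* locked entries are never evicted    */
    } entry_t;

    typedef struct {
        entry_t *lru_tail;          /* least recently used entry           */
        size_t   size;              /* current total size of all entries   */
        size_t   max_size;          /* user-specified maximum size         */
    } cache_t;

    /* Stand-ins for the real write-back and removal operations. */
    static void flush_entry(cache_t *c, entry_t *e) { (void)c; e->is_dirty = 0; }
    static void evict_entry(cache_t *c, entry_t *e) { c->size -= e->size; }

    /* Walk the LRU list from the least recently used end, evicting entries
     * until the new entry fits.  When writes are not permitted, only clean
     * entries may be evicted, so the cache may temporarily exceed max_size. */
    static void make_space(cache_t *cache, size_t needed, int writes_permitted)
    {
        entry_t *e = cache->lru_tail;

        while (cache->size + needed > cache->max_size && e != NULL) {
            entry_t *prev = e->lru_prev;

            if (!e->is_protected && (writes_permitted || !e->is_dirty)) {
                if (e->is_dirty)
                    flush_entry(cache, e);
                evict_entry(cache, e);
            }
            e = prev;
        }
    }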
Platforms tested:
eirene (serial, parallel, fp), h5committested
Misc. update:
Purpose:
HDF5 now supports SZIP with no encoder.
Description:
SZIP can be configured to have both encoder and decoder or just to have the decoder. HDF5 can now query the configuration of any filter, and will throw errors if users try to write using a filter with encoding disabled.
Solution:
Added the H5Zget_filter_info function and changed the APIs for H5Pget_filter and H5Pget_filter_by_id. See the SZIP RFC.
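A minimal sketch of the new query for the SZIP case (error checking omitted):

    #include <stdio.h>
    #include "hdf5.h"

    /* Ask the library how the SZIP filter is configured before writing with it. */
    static void report_szip_config(void)
    {
        unsigned int config = 0;

        H5Zget_filter_info(H5Z_FILTER_SZIP, &config);

        if (config & H5Z_FILTER_CONFIG_DECODE_ENABLED)
            printf("SZIP decoding is available\n");
        if (!(config & H5Z_FILTER_CONFIG_ENCODE_ENABLED))
            printf("SZIP encoding is disabled; writes with SZIP will fail\n");
    }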
Platforms tested:
Copper (fortran, C++, parallel), Sleipnir (C++), Arabica (fortran, C++), Verbena (fortran, C++)
Misc. update:
Description: Added a new API, H5Fget_name, and a new test program called filename.c. Given the
ID of an object that belongs to a file (file, group, dataset, named datatype, or attribute),
the function returns the name of that file.
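A small usage sketch (the object passed in could be any of the ID types listed above; the buffer size is arbitrary):

    #include <stdio.h>
    #include "hdf5.h"

    /* Given any object ID that belongs to a file, recover that file's name. */
    static void print_file_name(hid_t obj_id)
    {
        char    name[256];
        ssize_t len = H5Fget_name(obj_id, name, sizeof(name));

        if (len >= 0)
            printf("object belongs to file: %s\n", name);
    }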
Platforms tested: h5committest and fuss.
Misc. update: MANIFEST and RELEASE.txt
h5dump output change, new tests
Description:
The storage layout output format had some changes; the same applies to the user-defined filter output.
Added an option (-y) to not print the array indices (the default is to print them).
The option for escaping non-printable characters now covers all characters (the default is no escaping);
this might not be very portable (the test files are tstring.ddl and tstringe.ddl).
Added tests for the new options.
Solution:
Platforms tested:
linux
solaris
AIX
Misc. update:
Description: This is the second effort to correct the XML dumper after adding null
dataspace tests for attributes and datasets. Since the XML schema hasn't been updated
for the null dataspace, took the null dataspace tests out of tdset.h5 and tattr.h5 and put
them into a separate file, tnullspace.h5. Only h5dump tests this null dataspace file;
the XML dumper doesn't do it at this moment. We'll wait until the XML schema is updated
first.
Platforms tested: h5committest and RH 8(fuss)
Misc. update: MANIFEST(added two new files in tools/testfiles, tnullspace.h5
and tnullspace.ddl)
Description:
Added code to print strings with new lines and to display the path of references (new source files h5tools_ref.c and h5tools_ref.h).
Added a test suite in testh5dump.sh.in for:
(note: to create testh5dump.sh, one must rerun ./configure; this detects the availability of filters
and generates testh5dump.sh accordingly)
1) storage layout
2) fill value
3) print reference with path
4) print strings with new lines
5) filters
Solution:
Platforms tested:
linux
solaris
AIX
Misc. update:
Update shell scripts
Description:
Switch to generating the testh5dump.sh script at configure time, so we can
determine which filters are available to test.
Platforms tested:
FreeBSD 4.9 (sleipnir)
too small to require h5committest
h5dump new version
Description:
added the changes already made for 1.6
support for dumping of
1) filters
2) storage layout
3) fill value
4) comments
5) superblock
6) file contents
7) array indices
Solution:
Platforms tested:
linux
solaris
AIX
Misc. update:
Code optimization & bug fix
Description:
When dimension information is stored in the storage layout message
on disk, it is stored as 32-bit quantities, possibly truncating the dimension
information if a dimension is larger than 32 bits in size.
Solution:
Fix the storage layout message problem by revising the file format to not store
dimension information, since it is already available in the dataspace.
Also revise the storage layout data structures so that the information for
contiguous, chunked, and compact storage is more compartmentalized.
Platforms tested:
FreeBSD 4.9 (sleipnir) w/parallel
Solaris 2.7 (arabica)
h5committest
New Feature
Description:
Add the data transform function, H5Pset_transform().
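A minimal usage sketch; note that released HDF5 versions expose this call as H5Pset_data_transform, which takes a dataset transfer property list and an expression in which 'x' stands for each data element:

    #include "hdf5.h"

    /* Build a transfer property list that scales every element by 2 and adds 1. */
    static hid_t make_transform_dxpl(void)
    {
        hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);

        H5Pset_data_transform(dxpl, "2*x + 1");
        return dxpl;   /* pass to H5Dread or H5Dwrite */
    }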
Platforms tested:
"h5committested".
Copper was down. Ran parallel tests in sol instead.
Misc. update:
new tests for h5repack
Description:
Added more tests, to both the test program and the shell script, that exercise
a variety of different filter conversions.
Solution:
Platforms tested:
linux
Misc. update:
new tests for h5repack
Description:
Added tests that convert from each layout type to each layout type, a matrix of 9 combinations of compact, contiguous, and chunked storage.
Solution:
Platforms tested:
linux
AFS has problems; I could not telnet to sol and copper, arabica is really slow (waiting
one minute for a typed character), and writing a file gave an error:
arabica 181% afs: failed to store file (145)
afs: failed to store file (145)
Misc. update:
1) new function for tools library
2) new test script for h5repack
Description:
1) Currently, none of the tools (h5dump, h5diff, etc.) check whether a filter is available
when reading a dataset that uses a filter not available in the current configuration (the behavior
of the tools until now was to trigger a library error saying that the dataset cannot be read
due to the missing filter).
Solution:
1) Added a new function, h5tools_canreadf, that checks whether a dataset can be read,
depending on the availability of filters; calls to it were added in h5diff and h5repack.
Instead of triggering the library error, a message is printed saying that the dataset
cannot be read (the message is optional; it is on in verbose mode).
A hedged sketch of this kind of availability check appears after item 2) below.
2) Added a shell script that tests the command-line behavior of h5repack.
The script does a series of runs of h5repack with several options on the same file (this file, test4.h5,
was added to the testfiles directory); then, in each run, it runs the h5diff tool on the input and output files.
The goal of the test is also to check item 1), since the binary file was saved with filters
that might not be available on other configurations.
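For illustration, a hedged sketch of the kind of check such a function performs (this is not the actual h5tools_canreadf source): walk the dataset's filter pipeline and ask the library whether each filter is available.

    #include "hdf5.h"

    /* Return 1 if every filter in the dataset's pipeline is available, 0 otherwise. */
    static int can_read_dataset(hid_t dset_id)
    {
        hid_t dcpl     = H5Dget_create_plist(dset_id);
        int   nfilters = H5Pget_nfilters(dcpl);
        int   i, ok = 1;

        for (i = 0; i < nfilters; i++) {
            unsigned     flags;
            size_t       cd_nelmts = 0;
            char         name[64];
            H5Z_filter_t filter;

            filter = H5Pget_filter(dcpl, (unsigned)i, &flags, &cd_nelmts,
                                   NULL, sizeof(name), name);
            if (H5Zfilter_avail(filter) <= 0) {
                ok = 0;
                break;
            }
        }
        H5Pclose(dcpl);
        return ok;
    }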
Platforms tested:
linux (all filters enabled)
linux (some filters disabled)
solaris (some filters disabled)
AIX (some filters disabled)
windows (all filters on and off )
Misc. update:
Bug fix/optimization
Description:
Address a slowdown in MPI-I/O file metadata operations that was introduced
mid-stream. We now _require_ a POSIX-compliant parallel file system for the
MPI-I/O file driver (as well as for the MPI-POSIX file driver).
Also optimized the file open operation when the file is being created, by
reducing the number of collective & synchronizing calls.
Additionally, refactor the MPI routines into a common place, eliminating
duplicated code.
Platforms tested:
FreeBSD 4.9 (sleipnir) w/parallel
h5committest
Purpose: Maintenance
Description: Fortran APIs MAC OS X port for IBM XL Fortran compiler
Solution: Brought back changes from 1.6 branch
Platforms tested: pommier; tried h5committest, but this time it
complained about the copperpp directory and didn't run;
tests on verbena and sol passed.
Misc. update:
Code cleanup
Description:
Refactor library testing framework (used for the testhdf5 & ttsafe tests)
to remove almost all of the duplicated code, moving the common code into a
new 'testframe.c' source file.
Platforms tested:
FreeBSD 4.9 (sleipnir) w & w/o thread-safety
h5committest
h5diff new feature
Description:
Added comparison of attributes.
A new options flag (-a) was added to the options structure; it is 0 by default (no comparison).
The output of the comparison is the same as for datasets, and all the other flags also apply to attributes
(the memory comparison is done in the same function, diff_array).
All the other requirements for the comparison of datasets (type, space) apply identically too.
Platforms tested:
linux
solaris 2.7
IRIX
Misc. update: