New Feature
Description:
Added the -o option to h5dump. It writes the raw data of datasets to a
separate output file.
Added a feature to the h5tools library so that it uses FILE *rawdatastream
as the stream for displaying raw dataset data.
Solution:
Define an "extern FILE *rawdatastream" in h5tools.h
and declare it in h5tools.c. This way, it would work
even if an application does not explicitely declare it.
Tried to initialized it to stdout as
FILE *rawdatastream = stdout;
but Linux gcc rejected it though all other platforms+compilers
accepted it fine. For now, put in a kludge to set it right
before it is used. Need a safer way to initialize it.
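A minimal sketch of the arrangement described above (the dumping function
below is hypothetical; only rawdatastream, h5tools.h, and h5tools.c come
from the actual change):

    /* h5tools.h */
    #include <stdio.h>
    extern FILE *rawdatastream;    /* output stream for raw dataset data */

    /* h5tools.c */
    FILE *rawdatastream = NULL;    /* "= stdout" is rejected by gcc on Linux
                                    * because stdout is not a constant there */

    /* kludge: make sure the stream is set right before it is used */
    static void
    dump_raw_data(const char *text)            /* hypothetical caller */
    {
        if (!rawdatastream)
            rawdatastream = stdout;
        fprintf(rawdatastream, "%s\n", text);
    }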
Platforms tested:
arabica, eirene, modi4 -64.
Bug fix and feature
Description:
It could not find a working h5dump to process the HDF5 files.
This could be because h5dump is not installed in $PATH or
a dysfunctional one is found. (E.g. arabica:/usr/sdt/bin/h5dump
does not work.)
Setting it to ./h5dump or $PWD/h5dump does not work either, because
by the time h5dump is used the script has done "cd testfiles" and is
in a different place.
Solution:
Set H5DUMP to the current absolute path (using `pwd` instead
of $PWD, which is sometimes not set for whatever reason).
Also added a feature to allow H5DUMP to be overridden by hand.
For example, if the h5dump just built is not
working correctly, one can run "H5DUMP=/usr/local/bin/h5dump make check"
to bypass the broken h5dump.
Platforms tested:
arabica
Reformat the source
Description:
The tab stop appears to be defined as something other than 8 columns, so
the source files looked very confusing. Just reformatted the
files. No change to the source code at all.
Platforms tested:
modi4 -64.
Purpose:
Bug fix -- #445
Description:
In RM_H5D.html in the H5 Reference Manual, the H5Dget_storage_size
entry described the wrong FAILURE return value.
Solution:
Changed H5Dget_storage_size return value on FAILURE to 0 (zero).
Platforms tested:
Tested in Internet Explorer 5.
Purpose:
Add h4toh5 converter source code under the tools directory.
Description:
This is the expected HDF5 output for the h4toh5 converter.
Purpose:
Add h4toh5 converter source code under the tools directory.
Description:
This is the test file for the h4toh5 converter.
Purpose:
Add h4toh5 converter source code under the tools directory.
Purpose:
Add the h4toh5 converter tool.
Description:
Added the h4toh5 and testh4toh5 targets to the Makefile.
Platforms tested:
Tested on eirene and arabica.
Purpose:
Add the h4toh5 converter tool under the tools directory.
Description:
Added h4toh5 and testh4toh5 to the configure script.
Platforms tested:
Tested on eirene and arabica.
Rearrange code
Description:
The data sieve buffering code for contiguously stored datasets was
wedged in the H5F_arr_read/H5F_arr_write routines.
Solution:
Created a new H5Fcontig.c to hold I/O routines for contiguously stored
datasets (like H5Fistore.c for chunked dataset I/O routines) and moved
data sieving code into those routines.
Platforms tested:
Solaris 2.6 (i.e. baldric)
Code Optimization.
Description:
The optimized routines for copying regular hyperslabs in memory have been
using the same matrix routines to copy their hyperslab pieces as the
routines for irregularly shaped hyperslabs. This ends up imposing lots of
extra overhead on the optimized routine, since it basically "knows" all the
matrix information it needs.
Solution:
Keep track of the [small] amount of matrix information necessary to perform
the regular hyperslab copies in the optimized routines themselves instead of
using the matrix routines. This improves the performance for the benchmark
I'm running from ~18 seconds to ~12 seconds and should apply to parallel
I/O situations also.
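As a rough sketch of the idea (not the library's actual code; the helper
name and the fixed 2-D rank are illustrative): for a regular hyperslab the
per-dimension element counts and byte strides are known up front, so the
pieces can be copied with plain nested loops instead of going through the
general matrix-copy routines.

    #include <stddef.h>
    #include <string.h>

    /* Copy one regular 2-D hyperslab piece using only precomputed sizes and
     * byte strides (illustrative helper, not an HDF5 routine). */
    static void
    copy_regular_hyperslab_2d(void *dst, const void *src, size_t elmt_size,
                              size_t nrows, size_t ncols,
                              size_t dst_row_stride, size_t src_row_stride)
    {
        unsigned char       *d = dst;
        const unsigned char *s = src;
        size_t               r;

        for (r = 0; r < nrows; r++) {
            /* the fastest-varying dimension is contiguous: one memcpy per row */
            memcpy(d, s, ncols * elmt_size);
            d += dst_row_stride;    /* bytes between rows in the destination */
            s += src_row_stride;    /* bytes between rows in the source      */
        }
    }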
Platforms tested:
Solaris 2.6 (i.e. baldric)
Code Optimization
Description:
The matrix operations are currently the hot-spot in the library code
for regular hyperslab operations.
Solution:
Unrolled loops for 3 of the more heavily used functions
(H5V_stride_optimize2, H5V_hyper_stride & H5V_hyper_copy) for the common
cases (i.e. up to 3-D datasets). This squeezes some more blood out of
the stone (turnip? :-) and improves the h5hypers.c benchmark on baldric
by another 20-25%.
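A generic illustration of this kind of unrolling (not the actual H5V code;
the helper below is hypothetical):

    #include <stddef.h>

    /* Number of elements in an up-to-3-D block, with the common low-rank
     * cases unrolled instead of always running the general loop. */
    static size_t
    block_nelmts(unsigned rank, const size_t size[])
    {
        switch (rank) {
            case 1:  return size[0];
            case 2:  return size[0] * size[1];
            case 3:  return size[0] * size[1] * size[2];
            default: {                     /* fall back to the general loop */
                size_t   n = 1;
                unsigned u;
                for (u = 0; u < rank; u++)
                    n *= size[u];
                return n;
            }
        }
    }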
Platforms tested:
Solaris 2.6 (i.e. baldric)
Bug Fix
Description:
The core and log VFL drivers were leaking small amounts of memory when they
were used.
Solution:
Free the appropriate memory block (for the core driver) and don't allocate
a block (for the log driver).
Platforms tested:
Solaris 2.6 (i.e. baldric)
Implemented new feature
Description:
Added data sieve buffering code to raw I/O data path. This is enabled for
all the VFL drivers except the mpio & core drivers. Also added two new
API functions to control the sieve buffer size: H5Pset_sieve_buf_size() and
H5Pget_sieve_buf_size().
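Typical usage of the two new calls on a file access property list might
look like this (the 64 KB size is only an example value):

    #include "hdf5.h"

    int
    main(void)
    {
        hid_t  fapl = H5Pcreate(H5P_FILE_ACCESS);
        size_t size;

        H5Pset_sieve_buf_size(fapl, 64 * 1024);   /* set the sieve buffer to 64 KB */
        H5Pget_sieve_buf_size(fapl, &size);       /* read the setting back         */

        /* ... pass fapl to H5Fcreate()/H5Fopen(), then ... */
        H5Pclose(fapl);
        return 0;
    }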
Platforms tested:
Solaris 2.6 (i.e. baldric)
Fix compiler warning
Description:
"HUGE_VAL" (a double value) was being put into a float type and generating
a warning during compile time.
Solution:
Replaced "HUGE_VAL" with "FLT_MAX"
Platforms tested:
FreeBSD 4.1
Feature
Description:
Added a new document of all the controls (compiler macros,
environment variables, ...) that affect the functionality of
the libraries and tools.
Platforms tested:
Viewed with MS IE.
Small Code Cleanup
Description:
Code to optimize adjacent (i.e. contiguous) hyperslabs was ugly and used too
many temporary variables.
Solution:
Computed the optimized hyperslabs slightly differently and got rid of
unnecessary temporary variables.
Platforms tested:
FreeBSD 4.1
Libtool bug
Description:
The AR macro wasn't being propagated to the libtool file
correctly. When libtool was being generated, it wasn't
recognizing the AR that was set in the configure script.
Solution:
export the AR macro after it's set.
Platforms tested:
Linux
Bug fix
Description:
The old code was using count as the block size. The result was
a request for a slab of count blocks, each of 1 element. The recent
change in the hyperslab algorithm exposed this problem. (The
old algorithm merged the count blocks back into 1 big block of
count elements.) (This error crept in because the block argument
did not exist in the very early versions of the hyperslab API, and
the code was never updated since it had been "working".)
Solution:
Added the block argument to the setup and calculation of the
slab and its data. Also found a dumb error in the dataset_fill
algorithm in which stride was used in the calculation; that is not
correct for the BYROW and BYCOL cases.
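For illustration only (the dataspace and sizes below are made up), a
selection with an explicit block argument looks roughly like this; passing
NULL for block selects one element per block, which is what the buggy code
effectively did by treating count as if it were the block size:

    #include "hdf5.h"

    /* Illustrative helper: select a 4x4 grid of 2x2-element blocks in a 2-D
     * dataspace (space_id is assumed to describe at least an 8x8 extent). */
    static herr_t
    select_blocks(hid_t space_id)
    {
        hsize_t start[2]  = {0, 0};
        hsize_t stride[2] = {2, 2};
        hsize_t count[2]  = {4, 4};    /* number of blocks per dimension     */
        hsize_t block[2]  = {2, 2};    /* elements per block, not 1x1 blocks */

        /* passing block (instead of NULL) selects count[] blocks of block[]
         * elements each; NULL here would mean one element per block */
        return H5Sselect_hyperslab(space_id, H5S_SELECT_SET,
                                   start, stride, count, block);
    }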
Platforms tested:
modi4 parallel, both -64 and -n32 modes.
Bug fix (sorta)
Description:
When the stride and block size of a hyperslab selection are equal, the
blocks that are selected are contiguous in the dataset. (For example,
start=0, stride=4, count=3, block=4 in a dimension selects elements 0-11
with no gaps between the blocks.) Prior to my hyperslab optimizations,
this situation used to be detected and somewhat optimized to improve
performance. I've added more code to optimize for this situation and
integrated it with the new hyperslab optimizations, which weren't as
efficient for that case as they should have been.
Solution:
Detect contiguous hyperslab selections (i.e. block size in a dimension is
the same as the stride in that dimension) and store the optimized,
contiguous version of that hyperslab. We also store the original, un-
optimized version of the hyperslab to give back to the user if they query
the hyperslab selection they just made.
Platforms tested:
FreeBSD 4.1
Bug Fix
Description:
The prototype for the H5Pregister function has a variable named
`class'. This is a reserved word in C++ and causes the C++
compiler to freak.
Solution:
This variable's name was changed to cls_id in the .c file, so I
changed it in the header file to cls_id to match.
Platforms tested:
Linux
Bug Fix.
Description:
An assertion in the local heap code was mistakenly checking against too
large a value for the size of a newly created local heap. When used with
larger-sized (>10KB) variable-length objects, it was failing the check.
Solution:
Corrected to check against the actual size of the heap allocated, without
the heap header.
Platforms tested:
FreeBSD 4.1
Restore file
Description:
It appears that Robb's checkin earlier today erroneously overwrote this
file with an older version... *grumble*
Solution:
Found another copy of newest version, verified that it is operating
correctly and re-checked it in.
Platforms tested:
FreeBSD 4.1
Adding the Fortran interface to the HDF5 library
Description:
Fortran is now a subdirectory of the HDF5 library tree.
Platforms tested:
Solaris and IRIX (O2K)
H5FDstream.h needs to be installed.
Description:
H5FDstream.h is included in the hdf5.h file and needs to be
installed with the other public headers.
Solution:
Added it to the rest of the install headers.
Fix Irix pmake bugs
Description:
Build fails on Irix when builddir != srcdir
Solution:
* acconfig.h
* src/H5config.h.in [REGENERATED]
Added definition for HAVE_STREAM
* config/conclude.in
* config/depend1.in
* config/depend2.in
* config/depend3.in
* config/depend4.in
The `Dependencies' file is located in the source
tree. This fixes bugs for Irix pmake when compiling
outside the source tree. Hopefully it still preserves
Albert's changes which allow concurrent compilations
to not stomp on each other's Dependencies files.
* examples/Dependencies [REGENERATED]
* src/Dependencies [REGENERATED]
* test/Dependencies [REGENERATED]
* tools/Dependencies [REGENERATED]
Regenerated for testing purposes.
Platforms:
i686-pc-linux
mips-sgi-irix6.5
sparc-sun-solaris2.6
Feature
Description:
Most tests are done inside a for-loop. Whenever a test exits
with an error, the for-loop does an "exit 1" to exit the make.
"make -i" could not catch and ignore the error status.
Solution:
Replaced "exit 1" with break. At the end of the for-loop,
test if all tests have been run. If not, the for-loop is
ended by the break command, thus raise an error. Now,
'make -i' can catch and ignor it.
Also added the test of variable HDF5_Make_Ignore inside the
for-loop to indicate the desire to ignore errors when the
HDF5_Make_Ignore is set to a non-null/blank string.
Platforms:
Tested on modi4 and eirene.
I introduced a small bug when trying to fix the zlib stuff.
Description:
-lz wouldn't be specified with the compile flags if it was found
while checking for the HDF4 library.
Solution:
Removed my bad check and replaced with a better one.
Platforms:
Linux, Solaris
Added the Stream Virtual File Driver to the list of drivers
used for trying to open a file via h5dump_fopen().
Description:
The Stream VFD was added at the bottom of the driver list for h5dump_fopen().
If no other driver succeeds in opening a file given by its filename,
the Stream VFD will try to do so by parsing the filename as a
'hostname:port' argument, opening a socket to that address, and reading
the file.
This feature can be used to h5ls/h5dump streamed files.
Platforms:
All platforms (also between heterogeneous systems).
Added test program to verify the Stream Virtual File Driver.
Description:
This program tests the functionality of the Stream Virtual File Driver.
1. It spawns two new processes, a sender and a receiver.
2. The sender opens an HDF5 file for writing and writes
a sample dataset to it.
On closing the file the Stream VFD would send the file
contents to any connected client.
3. The receiver serves as a client attempting to open an
HDF5 file for reading. On opening the file the Stream VFD
would establish a socket connection to the sender process,
identified by its hostname (which is localhost in this example)
and a port number, and read the file contents via this socket.
Afterwards the dataset is read from the file into memory
and verified.
4. The main program waits for termination of its two child
processes and returns their exit code.
Platforms:
Tested so far under Linux, Irix 32/64bit, OSF1, Solaris, Cray Unicos,
Hitachi SR8000, IBM AIX.
Not tested under Windows yet.
Add the Stream VFD sources to the appropriate makefile variables.
Description:
Added H5FDstream.c to the LIB_SRC variable and H5FDstream.h
to the PUB_HDR variable for building the Stream VFD.
Define HAVE_STREAM.
Description:
If the Stream VFD is configured, the configure script
will expand this into
'#define HAVE_STREAM 1' in H5config.h and
'#define H5_STREAM 1' in H5pubconf.h.