*sigh*
Description:
Wasn't picking up a header file which is in the source directory.
Solution:
Changed some flags so that the header is now picked up.
Platforms tested:
Modi4
Bug Fix 37
Description:
Okay...this is really it now. Sorry for all the other "fixes".
This will take care of the top_builddir macro for the Fortran
interface.
Solution:
Hardcoded the path to the build directory.
Platforms tested:
Modi4.
Bug Fix
Description:
Wasn't finding the Dependencies file when doing a make.
Solution:
Modified the path to the Dependencies file by prepending a
`$(srcdir)/' to it.
Platforms tested:
Modi4
Bug Fix
Description:
The "fix" for search paths was wrong. It would try to recompute
the SEARCH macro at the end.
Solution:
Stopped it from recomputing if the $SEARCH macro already has a value. One
question: will Fortran always be built from the top directory?
Platforms tested:
Modi4
Bug Fix
Description:
When running configure on subdirectories (like fortran/), the lookup of
how make implements the SEARCH mechanism failed.
Solution:
Exporting the SEARCH macro so that subdirectories don't have to
look for it.
Platforms tested:
Modi4.
Purpose:
Bug fix
Description:
On DEC, H5Dff.f90 would not compile because of the order of variable
declarations. Other UNIX platforms, including the J90, did not care.
Solution:
Changed the order of the variable declarations.
Platforms tested:
DEC Unix (gondolin)
Purpose:
Parallel Bug Fixes
Description:
Was out of sync with header file re-arrangements I checked in last night.
Solution:
Fixed to use new header files, etc.
Platforms tested:
O2K (modi4)
Bug Fix
Description:
When parallel I/O is turned on, there were some macros used in the H5D
routines which poked around in the H5F_t structure. This breaks the
privacy of that structure and ties the H5D code too tightly to the H5F_t
struct.
Solution:
Added a small function to retrieve the value (driver_id) needed from
the H5F_t structure.
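A minimal sketch of that kind of accessor, assuming a function name of
H5F_get_driver_id and a driver_id member sitting directly inside H5F_t;
the real struct layout and function name in the library may differ:

    #include <assert.h>
    #include "H5Fprivate.h"   /* assumed internal header providing hid_t and H5F_t */

    /* Hand back the low-level file driver ID so H5D code no longer has
     * to reach into the H5F_t structure itself. */
    hid_t
    H5F_get_driver_id(const H5F_t *file)
    {
        assert(file);
        return file->driver_id;   /* assumed member name */
    }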
Platforms tested:
Eyeballed only, Albert needs this right away...
Purpose:
Updated source code to use new APIs to write/read references
Description and Solution:
The write/read subroutines now take an extra parameter: the size of the reference array.
I modified the source to reflect this change.
Platforms tested:
Solaris 2.6
No change.
Description:
Must've added some debugging printf's and then taken them out in a way
which triggered CVS.
Platforms tested:
Solaris 2.6 (baldric) & FreeBSD 4.1.1 (hawkwind)
Maintenance & performance enhancements
Description:
Re-arranged header files to protect private symbols better.
Changed the optimized regular hyperslab I/O to compute the offsets more
efficiently than the previous method of using matrix operations.
Added sequential I/O operations at a more abstract level (at the same level
as H5F_arr_read/write), to support the optimized hyperslab I/O.
Platforms tested:
Solaris 2.6 (baldric) & FreeBSD 4.1.1 (hawkwind)
Maintenance
Description:
Updated for the new files I'm adding as well as the tools/talign.c file
missing from last night's tests.
Platforms tested:
Solaris 2.6 (baldric) & FreeBSD 4.1.1 (hawkwind)
Bug fix
Description:
The predefined HDF5_PARAPREFIX had a trailing slash. The parallel
test file names ended up with two adjacent slashes, which made some
systems unhappy.
Solution:
Removed the trailing slash.
Platforms tested:
Arabica (solaris 2.7).
Purpose:
Reimplemented references to the objects and dataset regions.
Description:
The previous implementation was not portable. This implementation
should work on UNIX workstations and Crays, but it is very inefficient
since it uses memcpy to repack Fortran buffers holding references into
C buffers and vice versa.
Solution:
I used a Fortran derived datatype with integer fields. h5dwrite_f and
h5dread_f take an extra parameter when references are written or read;
this parameter describes the size of the buffer that holds the references.
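A sketch of the C-side repacking described above; all names and buffer
layouts here are assumptions for illustration only:

    #include <stddef.h>
    #include <string.h>

    /* Copy the reference payload out of each Fortran derived-type
     * element into a packed C buffer, one memcpy per element.  The
     * element count comes from the extra buffer-size parameter
     * mentioned above. */
    static void
    repack_refs(void *c_buf, const void *f_buf, size_t nrefs,
                size_t f_elmt_size, size_t ref_size)
    {
        unsigned char       *dst = (unsigned char *)c_buf;
        const unsigned char *src = (const unsigned char *)f_buf;
        size_t               i;

        for (i = 0; i < nrefs; i++)
            memcpy(dst + i * ref_size, src + i * f_elmt_size, ref_size);
    }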
Platforms tested:
J90 and Solaris 2.6
H5Pf.c
Some of the functions do not currently exist in the development branch.
Commented those out, so one does not need to apply a patch in order
to build the Fortran library.
Bug fix
Description:
H5S_hyper_select_valid would report a hyperslab as invalid if one of
the count values was zero. The verifying algorithm did not take into
consideration that block or count can contain zeros to indicate that no
elements are wanted.
Solution:
Added code to test if block or count is zero. If so, skip the rest
of the validity check.
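A sketch of the added check, with variable names and the bounds formula
assumed rather than taken from the library source:

    #include <stddef.h>

    typedef struct {
        size_t start;    /* assumed per-dimension hyperslab info layout */
        size_t stride;
        size_t count;    /* number of blocks */
        size_t block;    /* elements per block */
    } dim_info_t;

    /* Return 1 (valid) early if any dimension selects zero elements;
     * otherwise fall through to the usual extent checks. */
    static int
    hyper_select_valid(const dim_info_t *diminfo, const size_t *extent,
                       unsigned rank)
    {
        unsigned u;

        for (u = 0; u < rank; u++)
            if (diminfo[u].count == 0 || diminfo[u].block == 0)
                return 1;                   /* empty selection is valid */

        for (u = 0; u < rank; u++) {
            size_t end = diminfo[u].start
                       + diminfo[u].stride * (diminfo[u].count - 1)
                       + (diminfo[u].block - 1);
            if (end >= extent[u])
                return 0;                   /* slab falls outside the extent */
        }
        return 1;
    }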
Platforms tested:
IRIX64 -64.
Bug fix (done by Kim Yates)
Description:
The optimized MPI-IO code was broken; when a read was done, it hung.
Solution:
H5FDmpio.c:
In H5FD_mpio_write, moved the 16-line block of code in which
all procs other than p0 skip the actual write
to be just before the call to MPI_File_write_at.
Previously, the values of the local vars that controlled
"allsame" were not always set correctly when the moved block
was reached.
H5S.c:
Changed default value of H5_mpi_opt_types_g to TRUE, so that
the MPI-IO hyperslab code is executed by default in parallel HDF5,
rather than executing the serial hyperslab code.
H5Smpio.c:
In function H5S_mpio_hyper_type, added a call to free
an intermediate type. Cures a small memory leak.
Added code for cases of an empty hyperslab.
Changed displacements to be MPI_Aint.
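A small sketch of the intermediate-type cleanup mentioned for H5Smpio.c;
the type construction shown is illustrative, not the actual hyperslab
type code:

    #include <mpi.h>

    /* Build a vector-of-blocks derived type from an intermediate
     * contiguous type, commit it, and free the intermediate so it is
     * not leaked.  Strides/displacements are MPI_Aint, as noted above. */
    static MPI_Datatype
    make_block_vector(int bytes_per_block, int nblocks, MPI_Aint stride_bytes)
    {
        MPI_Datatype inner, outer;

        MPI_Type_contiguous(bytes_per_block, MPI_BYTE, &inner);
        MPI_Type_create_hvector(nblocks, 1, stride_bytes, inner, &outer);
        MPI_Type_commit(&outer);
        MPI_Type_free(&inner);   /* committed 'outer' keeps what it needs */

        return outer;
    }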
Platforms tested:
modi4 -64: worked fine with mpich 1.2.0 but failed with messages
saying it ran out of entries for MPI_Types during the collective_read
test. After tracing the code all the way to the collective read, all
MPI types had been freed properly. It aborted with the above message
when it executed the line
if (MPI_SUCCESS!= MPI_File_read_at_all(file->f, mpi_off, buf, size_i, buf_type, &mpi_stat ))
Could not see any problem with this line. It could be a bug in the
SGI version of MPI.
Adding Testing
Description:
Alignment when putting elements in a compound datatype can be
off.
Solution:
This was a bug which I'd fixed. Here's a program to exercise the
bug.
Platforms tested:
Linux
Bug fix
Description:
The documentation on how to dump attribute data was not complete
enough. Some people got confused about the command-line syntax (you
have to specify the "path" from the root group to the attribute
to dump it). I put some examples in to show how to correctly dump
attributes.
Platforms tested:
Viewed with Netscrape.
Added features
Description:
There were no automatic tests for transferring zero elements.
Solution:
t_dset.c:
Added two new patterns: ZROW (zero rows for process 0)
and ZCOL (zero columns for process 0).
The ZROW test was added, but it failed because the current library
does not accept it, so it is not compiled in for now. Need to fix the
library before turning it back on again and also to add the
ZCOL test.
t_mdset.c:
Added a statement to show progress. Also, the MPI_Barrier() call
gets the processes synchronized. It eliminates the race condition,
but this is not a permanent solution; the library code needs to
be fixed.
testphdf5.c:
Added a bunch of MPI_Type_XXX debug code. Added the -md
option to skip the multiple-dataset tests. Changed the cosmetic
appearance of the banner messages.
testphdf5.h:
When an error is detected, the old way was to call MPI_Finalize()
before exiting. This sometimes hangs because some processes
may be waiting for a message of a different tag. Changed to
call MPI_Abort() for now so that the whole MPI job would
abort rather than hanging due to exceeded resource limits.
Added the definition of ZROW and ZCOL.
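A sketch of the error-exit change (the helper name is assumed):

    #include <stdio.h>
    #include <mpi.h>

    /* On error, abort the whole MPI job instead of calling
     * MPI_Finalize(), which can hang while other ranks sit waiting
     * for messages with a different tag. */
    static void
    test_fatal(const char *msg)
    {
        fprintf(stderr, "FAILED: %s\n", msg);
        MPI_Abort(MPI_COMM_WORLD, 1);   /* was: MPI_Finalize(); exit(1); */
    }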
Platforms tested:
Modi4 -64.
Update
Description:
Added the description of the environment variable HDF5_MPI_OPT_TYPES
which controls the use of optimized MPIO routines.
Platforms tested:
Viewed via IE.
Bug Fix
Description:
The test was not detecting the hdp tool from HDF 4.1r4 correctly.
Solution:
Modified test to detect HDF4.1r[3-9] correctly.
Platforms tested:
FreeBSD 4.1.1 (hawkwind)
Portability fix
Description:
Non-portable GNU-specific features were used.
Solution:
Replaced GNU-specific features with more portable (but more difficult to
maintain) forms of the features.
Platforms tested:
FreeBSD 4.1.1 (hawkwind)
Bug fix
Description:
In the h5dump_fixtype function, when users created a COMPOUND
datatype, the alignment would be off somewhat.
Solution:
The alignment was being set after insertion. I changed this code:
    for (i = 0, offset = 0; i < nmembs; i++) {
        H5Tinsert_array(m_type, name[i], offset, ndims[i], dims + i * 4,
                        NULL, memb[i]);
        for (j = 0, nelmts = 1; j < ndims[i]; j++)
            nelmts *= dims[i * 4 + j];
        offset = ALIGN(offset, H5Tget_size(memb[i])) +
                 nelmts * H5Tget_size(memb[i]);
    }
to:
    for (i = 0, offset = 0; i < nmembs; i++) {
        if (offset)
            offset = ALIGN(offset, H5Tget_size(memb[i]));
        H5Tinsert_array(m_type, name[i], offset, ndims[i], dims + i * 4,
                        NULL, memb[i]);
        for (j = 0, nelmts = 1; j < ndims[i]; j++)
            nelmts *= dims[i * 4 + j];
        offset += nelmts * H5Tget_size(memb[i]);
    }
The alignment is now calculated before the insertion.
Platforms tested:
Solaris, Linux
Bug Fix
Description:
Use H5FD_get_eoa instead of H5FD_get_eof to check for reading off the end
of the allocated file space. Using H5FD_get_eof was causing the Stream
VFD to fail.
Solution:
Switched from using H5FD_get_eof to H5FD_get_eoa
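A sketch of the bounds check after the switch; variable names are
assumed, and the one-argument form of H5FD_get_eoa is taken from this
era of the library and may differ:

    #include "H5FDprivate.h"   /* assumed header for H5FD_get_eoa(), H5FD_t, haddr_t */

    /* Nonzero if a read of 'size' bytes at 'addr' stays within the
     * allocated address space (EOA) rather than the physical EOF,
     * which lags behind for the Stream VFD. */
    static int
    read_within_eoa(H5FD_t *lf, haddr_t addr, hsize_t size)
    {
        haddr_t eoa = H5FD_get_eoa(lf);

        return (addr + size) <= eoa;
    }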
Platforms tested:
FreeBSD 4.1.1 (hawkwind)
Bug
Description:
The testh5toh4 script was removing all .h5 files from the testfiles
directory; however, with the addition of testh4toh5, we need some
.h5 files in there.
Solution:
Changed the scripts so that an output directory is created for
all of the processed files. This is removed after the test is
finished.
Platforms tested:
Linux
Purpose:
Bugfix
Description:
The Stream VFD was leaking memory on every opened file.
Solution:
In H5FD_stream_close(), finally free the file structure used to describe
the closed file.
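A sketch of the fix, with the surrounding close logic elided and the
per-file struct reduced to a stand-in (the real one holds the socket,
buffers, and so on):

    #include <stdlib.h>

    /* Stand-in for the Stream VFD's per-file structure. */
    typedef struct stream_file_t {
        int socket;
        /* ... buffer pointers, flags, ... */
    } stream_file_t;

    /* Sketch of the close callback: after shutting things down, free
     * the per-file structure itself -- the allocation that was leaked. */
    static int
    stream_close(stream_file_t *file)
    {
        /* ... flush buffers / close the socket as before ... */
        free(file);
        return 0;
    }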
Platforms tested:
Linux, SGI
Bug Fix
Description:
zlib was not being retrieved from the place specified by the user
even if the user used the --with-zlib flag.
Solution:
Removed the automatic inclusion of /usr/ncsa/* in the macros
and used the user-specified location to try to pick up zlib. I'm
relying on the order of the -L flags on the compile line to
specify which libraries to look into first before going on to
look into the system libraries. If some compiler doesn't honor
this order, yikes...
Platforms tested:
Linux
Added site-specific/ subdirectory in config/ directory
Description:
If a machine needs site-specific configure options but those
options don't necessarily apply to all machines of that type,
place them there.
Site specific configure files
Description:
Some machines need to specify things during configure, but those
things aren't necessary for all machines of that type. Those
site-specific changes should go here. The format of the filename
is:
host-$hostname
where $hostname is the output from the `hostname' command.
Needless to say, this is optional for sites which don't need it.
Bug fix
Description:
Attempted to close rawdatastream even if it had not been
used to open a new file. Many systems tolerated the NULL
value, but not FreeBSD.
Solution:
Check for the NULL value too.
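A sketch of the added guard (the surrounding cleanup code is assumed):

    #include <stdio.h>

    extern FILE *rawdatastream;   /* stream used for raw dataset output */

    /* Only close the stream if it was actually opened; passing NULL to
     * fclose() was tolerated on many systems but not on FreeBSD. */
    static void
    close_rawdatastream(void)
    {
        if (rawdatastream != NULL) {
            fclose(rawdatastream);
            rawdatastream = NULL;
        }
    }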
Platforms tested:
hawkwind (freeBSD) and modi4 parallel.
New Feature
Description:
Add a -o option to the h5dumper. It sends the raw data of datasets to a
separate output file.
Add a feature to the h5tools library so that it uses the FILE *rawdatastream
as the stream for displaying the raw data of datasets.
Solution:
Define an "extern FILE *rawdatastream" in h5tools.h
and declare it in h5tools.c. This way, it would work
even if an application does not explicitely declare it.
Tried to initialized it to stdout as
FILE *rawdatastream = stdout;
but Linux gcc rejected it though all other platforms+compilers
accepted it fine. For now, put in a kludge to set it right
before it is used. Need a safer way to initialize it.
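A sketch of the arrangement, simplified, with the kludge shown as a
small init helper (names assumed):

    /* h5tools.h would carry the declaration:
     *     extern FILE *rawdatastream;
     * and h5tools.c the definition below.  A static initializer of
     * stdout was rejected by gcc on Linux, where stdout is not a
     * compile-time constant, so the stream is set just before use. */

    #include <stdio.h>

    FILE *rawdatastream = NULL;

    static void
    init_rawdatastream(void)
    {
        if (rawdatastream == NULL)
            rawdatastream = stdout;   /* default: raw data goes to stdout */
    }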
Platforms tested:
arabica, eirene, modi4 -64.
Bug fix and feature
Description:
It could not find a working h5dump to process the HDF5 files.
This could be because h5dump is not installed in $PATH or
a dysfunctional one is found. (E.g., arabica:/usr/sdt/bin/h5dump
does not work.)
Setting it to ./h5dump or $PWD/h5dump does not work because
by the time h5dump is used, the script has done "cd testfiles"
and is in a different place.
Solution:
Set H5DUMP to the current absolute path (used `pwd` instead
of $PWD, which is sometimes not set for whatever reason).
Also added a feature to allow H5DUMP to be set to a different
value by hand. For example, if the h5dump just built is not
working correctly, one can do "H5DUMP=/usr/local/bin/h5dump make check"
to bypass the broken h5dump.
Platforms tested:
arabica
Reformat the source
Description:
The tab stop seems to be defined as something other than 8. The
source files looked very confusing. Just reformatted the
files; no changes to the source code at all.
Platforms tested:
modi4 -64.
Purpose:
Bug fix -- #445
Description:
In RM_H5D.html in the H5 Reference Manual, the H5Dget_storage_size
entry described the wrong FAILURE return value.
Solution:
Changed H5Dget_storage_size return value on FAILURE to 0 (zero).
Platforms tested:
Tested in Internet Explorer 5.
Purpose:
Add h4toh5 converter source code under the tools directory.
Description:
This is the expected HDF5 result for the h4toh5 converter.
Purpose:
Add h4toh5 converter source code under the tools directory.
Description:
This is the test file for the h4toh5 converter.
Purpose:
Add h4toh5 converter source code under the tools directory.
Purpose:
Add h4toh5 converter tool.
Description:
Added the h4toh5 and testh4toh5 flags in the Makefile.
Platforms tested:
Tested on eirene and arabica.
Purpose:
h4toh5 converter tool under tools
Description:
Added the h4toh5 and testh4toh5 flags to the configure file.
Platforms tested:
eirene and arabica.
Rearrange code
Description:
The data sieve buffering code for contiguously stored datasets was
wedged in the H5F_arr_read/H5F_arr_write routines.
Solution:
Created a new H5Fcontig.c to hold I/O routines for contiguously stored
datasets (like H5Fistore.c for chunked dataset I/O routines) and moved
data sieving code into those routines.
Platforms tested:
Solaris 2.6 (i.e. baldric)
Code Optimization.
Description:
The optimized routines for copying regular hyperslabs in memory have been
using the same matrix routines to copy their hyperslab pieces as the
routines for irregularly shaped hyperslabs. This ends up imposing lots of
extra overhead on the optimized routine, since it basically "knows" all the
matrix information it needs.
Solution:
Keep track of the [small] amount of matrix information necessary to perform
the regular hyperslab copies in the optimized routines themselves instead of
using the matrix routines. This improves the performance for the benchmark
I'm running from ~18 seconds to ~12 seconds and should apply to parallel
I/O situations also.
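A sketch of the idea for the simple two-dimensional case, with all names
and the memory layout assumed (the real routines handle arbitrary rank):

    #include <stddef.h>
    #include <string.h>

    /* For a regular hyperslab the source offset of each row advances by
     * a fixed stride, so each row can be copied with one memcpy instead
     * of recomputing offsets through general matrix operations. */
    static void
    copy_regular_rows(unsigned char *dst, const unsigned char *src,
                      size_t nrows, size_t row_bytes, size_t src_stride)
    {
        size_t row;

        for (row = 0; row < nrows; row++)
            memcpy(dst + row * row_bytes, src + row * src_stride, row_bytes);
    }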
Platforms tested:
Solaris 2.6 (i.e. baldric)