Purpose:
Update Windows support.
Description:
1. Since we don't support Windows 98 anymore, delete the description of
Windows 98 support. (The DLL may actually still work on Windows 98.)
2. Release DLL work for the new HDF5 release.
Solution:
Platforms tested:
Bug Fix
Description:
It was possible for a parallel I/O program to create corrupted metadata
(in memory, in the file, or both) because of the way writes out of the
metadata cache were being handled.
Solution:
Added a dataset transfer property called "block before metadata write"
which is used by the MPI-I/O and MPI-posix drivers to sync up all the
processes before attempting a metadata write. This property is currently
only for metadata writes from the metadata cache.
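For illustration only, a minimal MPI sketch of the idea behind the property
(this is not the library's internal code): all processes synchronize before
the metadata write is attempted.

    /* Illustrative sketch: synchronize every rank before one of them
     * performs a metadata write, in the spirit of the "block before
     * metadata write" property described above. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* ... each rank finishes updating the metadata it holds ... */

        /* Block until every process reaches this point, so no rank writes
         * metadata while another is still modifying it. */
        MPI_Barrier(MPI_COMM_WORLD);

        if (rank == 0)
            printf("rank 0 performs the metadata write\n");

        MPI_Finalize();
        return 0;
    }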
Platforms tested:
IRIX64 6.5 (modi4) w/parallel
Bug Fix
Description:
When parallel I/O is used, the MPI-I/O VFL driver uses a "lazy" model to
call MPI_File_set_view() in order to reduce the number of calls to this
function. However, this is unsafe, because if a collective I/O which uses
MPI derived types (and thus uses MPI_File_set_view()) is immediately
followed by an independent I/O, the code will attempt to call
MPI_File_set_view() in order to switch back to the default view of the
file. However, MPI_File_set_view() is a collective call, and this causes
the application to hang.
Solution:
Removed "lazy" MPI_File_set_view() code, instead set the file view when it
is needed (with MPI derived types) and immediately set the file view back to
the default view before leaving the I/O routine.
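As a hedged sketch of the non-lazy approach (the derived-type handling here
is illustrative, not HDF5's own code): set the view only while the derived
type is needed and restore the default view before returning.

    #include <mpi.h>

    /* Collective write using a derived filetype; the view is restored to
     * the default before the routine returns, so a later independent I/O
     * never needs a collective MPI_File_set_view() call. */
    static void collective_write(MPI_File fh, const void *buf, int count,
                                 MPI_Datatype filetype)
    {
        /* Switch to the derived-type view for this collective transfer. */
        MPI_File_set_view(fh, 0, MPI_BYTE, filetype, "native", MPI_INFO_NULL);
        MPI_File_write_at_all(fh, 0, buf, count, MPI_BYTE, MPI_STATUS_IGNORE);

        /* Immediately restore the default "linear bytes" view. */
        MPI_File_set_view(fh, 0, MPI_BYTE, MPI_BYTE, "native", MPI_INFO_NULL);
    }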
Platforms tested:
IRIX64 6.5 (modi4) w/parallel. Also, tested with the latest development
and release code for the SAF library, which now works correctly with this
change. (Although the release branch of the SAF library seems to have a
bug of its own, this 1.4.4 release candidate code gets as far as the HDF5
version the SAF library is released on top of (1.4.2-patch1, I believe).)
New feature.
Description:
There is some discussion among the SAF team as to whether it is better
to use MPI derived types for raw data transfers (thus needing a
MPI_File_set_view() call), or whether it is better to use a sequence of
low-level MPI types (i.e. MPI_BYTE) for the raw data transfer.
Solution:
Added an internal flag to determine whether derived types are preferred
(the default) or whether they should be avoided. An environment variable
("HDF5_MPI_PREFER_DERIVED_TYPES") can be set by users to control whether MPI
derived types should be used or not. Set the environment variable to "0"
(i.e. 'setenv HDF5_MPI_PREFER_DERIVED_TYPES 0') to avoid using MPI derived types.
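A minimal sketch of how such a setting might be read (the helper below is
hypothetical, not HDF5's internal code):

    #include <stdlib.h>
    #include <string.h>

    /* Return non-zero if MPI derived types should be used (the default);
     * return zero if HDF5_MPI_PREFER_DERIVED_TYPES is set to "0". */
    static int prefer_mpi_derived_types(void)
    {
        const char *s = getenv("HDF5_MPI_PREFER_DERIVED_TYPES");

        if (s != NULL && strcmp(s, "0") == 0)
            return 0;
        return 1;
    }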
Platforms tested:
IRIX64 6.5 (modi4) w/parallel
Bug fix.
Description:
The chunking code was using internal allocation routines to put blocks on
a free list for reuse, instead of using the system allocation routines (i.e.
malloc, free, etc.). This causes problems when user filters attempt to
allocate/free chunks for their algorithm's use.
Solution:
Switched the chunking code back to using the system allocation routines;
we can address performance issues with them if they become a real problem.
Platforms tested:
Linux 2.2.x (eirene) && IRIX64 6.5 (modi4)
Code optimization
Description:
Avoid creating MPI types (and thus requiring an MPI_File_set_view() call)
when contiguous selections are used for dataset I/O. This should be a
performance improvement for those sorts of selections.
Platforms tested:
Linux 2.2.x (eirene) w/parallel && IRIX64 6.5 (modi4) w/parallel & FORTRAN
Bug fix
Description:
I/O on "Regular" hyperslab selections could fail to transfer correctly
if the number of elements in the selection's row did not fit "evenly"
into the buffer being used for the transfer.
Solution:
Correct the calculation of the block & count offsets within the optimized
"regular" hyperslab routines.
Platforms tested:
FreeBSD 4.5 (sleipnir)
Bug Fix
Description:
H5Dcreate and H5Tcommit allow "empty" compound and enumerated types (i.e.
ones with no members) to be stored in the file, but this causes an assertion
failure and is somewhat pointless in any case.
Solution:
Check that the datatype "makes sense" before using it in H5Dcreate and
H5Tcommit.
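A hedged example of the rejected case, written against the 1.4-era C API
(the file name is made up): committing a compound type with no members
should now fail cleanly instead of tripping an assertion.

    #include "hdf5.h"

    int main(void)
    {
        hid_t  file, empty;
        herr_t status;

        file  = H5Fcreate("empty_type.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
        empty = H5Tcreate(H5T_COMPOUND, (size_t)8);   /* no members inserted */

        /* Expected to return a negative value rather than assert. */
        status = H5Tcommit(file, "empty_compound", empty);

        H5Tclose(empty);
        H5Fclose(file);
        return (status < 0) ? 0 : 1;
    }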
Platforms tested:
FreeBSD 4.5 (sleipnir)
Bug Fix (#709)/Code improvement.
Description:
Allow chunks for chunked datasets to be cached when the file is open for
read-only access.
Platforms tested:
IRIX64 6.5 (modi4) w/parallel
Bug fix (bug #777)
Description:
The current code allows a compound datatype to be inserted into itself.
Solution:
Check if the ID for the member is the same as the ID for the compound
datatype and reject it if so.
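A hedged illustration of the rejected case (1.4-era C API):

    #include "hdf5.h"

    int main(void)
    {
        hid_t  cmpd = H5Tcreate(H5T_COMPOUND, (size_t)16);
        herr_t status;

        /* Inserting the compound datatype as a member of itself is now
         * rejected instead of being silently allowed. */
        status = H5Tinsert(cmpd, "self", (size_t)0, cmpd);

        H5Tclose(cmpd);
        return (status < 0) ? 0 : 1;
    }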
Platforms tested:
FreeBSD 4.5 (sleipnir)
Bug Fix for bug #789
Description:
Creating a 1-D dataset region reference caused the library to hang (go into
an infinite loop).
Solution:
Corrected algorithm for serializing hyperslab regions.
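A hedged example of the case that used to hang, written against the 1.4-era
C API (file and dataset names are made up): create a dataset region
reference for a hyperslab of a 1-D dataset.

    #include "hdf5.h"

    int main(void)
    {
        hid_t           file, space, dset;
        hsize_t         dims[1]  = {10};
        hssize_t        start[1] = {2};
        hsize_t         count[1] = {4};
        hdset_reg_ref_t ref;
        int             data[10] = {0};

        file  = H5Fcreate("regref.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
        space = H5Screate_simple(1, dims, NULL);
        dset  = H5Dcreate(file, "dset", H5T_NATIVE_INT, space, H5P_DEFAULT);
        H5Dwrite(dset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, H5P_DEFAULT, data);

        /* Select a 1-D hyperslab and build a region reference to it;
         * this previously sent the library into an infinite loop. */
        H5Sselect_hyperslab(space, H5S_SELECT_SET, start, NULL, count, NULL);
        H5Rcreate(&ref, file, "dset", H5R_DATASET_REGION, space);

        H5Dclose(dset);
        H5Sclose(space);
        H5Fclose(file);
        return 0;
    }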
Platforms tested:
FreeBSD 4.5 (sleipnir)
New feature.
Description:
Added a "small data" block allocation mechanism to the library, similar to
the mechanism used for allocating metadata currently.
See the RFC for more details:
http://hdf.ncsa.uiuc.edu/RFC/SmallData/SmallData.html
This reduces the number of I/O operations which hit the disk for my test
program from 19 to 15 (i.e. from 393 to 15, overall).
Platforms tested:
Solaris 2.7 (arabica) w/FORTRAN and FreeBSD 4.5 (sleipnir) w/C++
Purpose:
Bug fix (#699), fix provided by a user, approved by Quincey
Description:
When a scalar dataspace was written to the file and then
subsequently queried with the H5Sget_simple_extent_type function,
the type was reported as H5S_SIMPLE instead of H5S_SCALAR.
Solution:
Applied a fix (see bug report 699)
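A hedged example of the behavior after the fix, written against the 1.4-era
C API (names are made up): a scalar dataspace read back from the file
should now report H5S_SCALAR.

    #include "hdf5.h"

    int main(void)
    {
        hid_t       file, space, dset;
        H5S_class_t cls;

        file  = H5Fcreate("scalar.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
        space = H5Screate(H5S_SCALAR);
        dset  = H5Dcreate(file, "scalar_dset", H5T_NATIVE_INT, space, H5P_DEFAULT);
        H5Dclose(dset);
        H5Sclose(space);

        /* Re-open the dataset and query the stored dataspace class. */
        dset  = H5Dopen(file, "scalar_dset");
        space = H5Dget_space(dset);
        cls   = H5Sget_simple_extent_type(space);   /* H5S_SCALAR expected */

        H5Sclose(space);
        H5Dclose(dset);
        H5Fclose(file);
        return (cls == H5S_SCALAR) ? 0 : 1;
    }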
Platforms tested:
Solaris 2.7 and Linux 2.2.18
Code improvement
Description:
The metadata aggregation code in the library was not terribly smart about
extending contiguous regions of metadata in the file and would not extend
them as far as possible. This also causes space in the file to be wasted.
Solution:
Be smarter about extending the space used in the file for metadata by
checking whether new metadata blocks allocated in the file are at the end
of the current metadata aggregation region and append them to the metadata
region if so. This has the nice side benefit of reducing the number of
bytes we waste in the file and reducing the size of the file by a small
amount in some cases.
This reduces the number of I/O operations which hit the disk for my test
program from 53 to 19 (i.e. from 393 to 19, overall).
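A purely illustrative sketch of the idea (not the library's actual code; the
structure and function names are made up): if a newly allocated metadata
block starts exactly at the end of the current aggregation region, extend
the region instead of starting a new one.

    typedef struct {
        unsigned long long addr;   /* start of the aggregation region */
        unsigned long long size;   /* bytes currently in the region   */
    } aggr_t;

    static void note_new_block(aggr_t *aggr, unsigned long long new_addr,
                               unsigned long long new_size)
    {
        if (new_addr == aggr->addr + aggr->size)
            aggr->size += new_size;        /* append to the existing region  */
        else {
            aggr->addr = new_addr;         /* otherwise start a fresh region */
            aggr->size = new_size;
        }
    }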
Platforms tested:
Solaris 2.7 (arabica) w/FORTRAN and FreeBSD 4.5 (sleipnir) w/C++
Bug Fix
Description:
The "dirty" flag for symbol table entries and symbol table nodes was not
being cleared when they were flushed to the file, causing lots of extra
metadata I/O.
Solution:
Reset the symbol table entry & nodes' flags when they are flushed to disk.
This reduces the number of I/O operations which hit the disk for my test
program from 83 to 53 (i.e. from 393 to 53, overall).
Platforms tested:
Solaris 2.7 (arabica) w/FORTRAN & FreeBSD 4.5 (sleipnir) w/C++
Code cleanup/bug fix
Description:
The "metadata accumulator" cache in the library (which is designed to catch
small metadata writes/reads and bundle them together into larger I/O
buffers) was not correctly detecting the important case of metadata pieces
being written sequentially to the file, adjoining but not overlapping.
Additionally, the metadata accumulator was not being used to cache data
read in from disk, only caching writes.
Solution:
Fix accumulator to correctly cache adjoining metadata writes and also to
cache metadata read from disk.
Between these two fixes, the number of I/O requests which resulted in actual
reads/writes to the filesystem dropped from 393 requests to 82 for the
particular test I was using. :-)
Platforms tested:
Solaris 2.7 (arabica) w/FORTRAN & FreeBSD 4.5 (sleipnir) w/C++
Document Bug Fix
Description:
Under certain [obscure] circumstances, an object header would get paged out
of the metadata cache and later brought back in when it was accessed again.
If additional metadata (usually an attribute, or perhaps an object being
added to a group) was then immediately added to it, the header needed to be
extended with a continuation message, no existing object header chunk had
room for the continuation message, and an existing object header message
had to be moved to the new object header chunk (I told you it was obscure
:-), then the object header message moved to the new chunk (not the new
metadata being added) would get corrupted. *whew* :-)
Solution:
Actually copy the "raw" object header message information of the object
header message being moved to the new chunk, instead of relying on the
"native" object header message information being re-encoded when the object
header is flushed. This is because when an object header is paged out of
the metadata cache and subsequently brought back in, the "native"
information pointer in memory is reset to NULL and only the "raw"
information exists.
Platforms tested:
Solaris 2.7 (arabica) & FreeBSD 4.5 (sleipnir)
Document Code improvement below:
Description:
Propagated the "fill time" property into the parallel chunk allocation
routine, allowing it to avoid writing fill values to each new chunk
allocated. This improves the performance of chunked datasets in parallel
I/O to be on par with contiguous datasets again (on modi4).
Document Bug fix/Code improvement below:
Description:
Currently, the chunk data allocation routine invoked to allocate space for
the entire dataset is inefficient. It writes out each chunk in the dataset,
whether it is already allocated or not. Additionally, this happens not
only when the dataset is created, but also any time it is opened for
writing or extended. Worse, there's too much parallel I/O synchronization,
which slows things down even more.
Solution:
Only attempt to write out chunks that don't already exist. Additionally,
share the writing among all the processes instead of having process 0
write everything. Then, only block with MPI_Barrier if chunks were actually
created.
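An illustrative sketch of the approach (not HDF5's internal routine; names
are made up): write fill data only for missing chunks, divide that work
round-robin across the MPI ranks, and synchronize only if something was
written.

    #include <mpi.h>

    static void allocate_missing_chunks(int nchunks, const int *chunk_exists,
                                        int mpi_rank, int mpi_size, MPI_Comm comm)
    {
        int i, created = 0;

        for (i = 0; i < nchunks; i++) {
            if (chunk_exists[i])
                continue;                  /* skip chunks already on disk */
            if (i % mpi_size == mpi_rank) {
                /* ... this rank writes the fill value for chunk i ... */
            }
            created = 1;
        }

        /* Only pay for the barrier when new chunks were actually created. */
        if (created)
            MPI_Barrier(comm);
    }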
Purpose:
Maintenance
Description:
Added information about Parallel Fortran Support for HP-UX 11.00 SysV
and write/read overloaded subroutines (bug #670)
Bug Fix
Description:
Selection offsets were not being used correctly when iterating through
all hyperslab selections and point selections.
Solution:
Use the selection offset appropriately.
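A hedged example of a selection offset in use (1.4-era C API): the offset
set with H5Soffset_simple() shifts the selection, and iteration through the
selection must honor it.

    #include "hdf5.h"

    int main(void)
    {
        hid_t    space;
        hsize_t  dims[1]   = {100};
        hssize_t start[1]  = {0};
        hsize_t  count[1]  = {10};
        hssize_t offset[1] = {5};

        space = H5Screate_simple(1, dims, NULL);
        H5Sselect_hyperslab(space, H5S_SELECT_SET, start, NULL, count, NULL);

        /* Shift the selection by 5 elements; iteration over the selection
         * should now touch elements 5..14 instead of 0..9. */
        H5Soffset_simple(space, offset);

        H5Sclose(space);
        return 0;
    }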
Platforms tested:
FreeBSD 4.5 (sleipnir)
Purpose:
New feature
Description:
Allow H5Glink and H5Gmove to handle links across different locations.
Solution:
Added H5Glink2 and H5Gmove2 functions, which take a new parameter for the
destination location.
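A hedged usage sketch of the new routines (1.4-era C API; file and group
names are made up): both calls take separate source and destination
locations.

    #include "hdf5.h"

    int main(void)
    {
        hid_t file, grp_a, grp_b, obj;

        file  = H5Fcreate("link2.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
        grp_a = H5Gcreate(file, "A", 0);
        grp_b = H5Gcreate(file, "B", 0);
        obj   = H5Gcreate(grp_a, "obj", 0);
        H5Gclose(obj);

        /* Hard link A/obj into group B under a new name ... */
        H5Glink2(grp_a, "obj", H5G_LINK_HARD, grp_b, "obj_link");
        /* ... or move it from one location to the other. */
        H5Gmove2(grp_a, "obj", grp_b, "obj_moved");

        H5Gclose(grp_a);
        H5Gclose(grp_b);
        H5Fclose(file);
        return 0;
    }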
Platforms tested:
Linux 2.2(eirene)
Bug fix
Description:
When compound & VL datatypes nested several levels deep are used, the data
in the nested compound datatypes incorrectly shares the same "background
buffer", causing data corruption when the data is written to the file.
Solution:
Allocate a separate background buffer for each level of the nested types
to convert. (Also allocate temporary background buffers for array
datatypes, where this sort of problem could also occur.)
Added more regression tests to check for these errors.
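A hedged sketch of the kind of datatype that exercised this bug (1.4-era C
API; struct and field names are made up): a compound type containing a
variable-length field whose base type is itself a compound.

    #include "hdf5.h"

    typedef struct { int a; double b; } inner_t;
    typedef struct { int id; hvl_t seq; } outer_t;

    static hid_t make_nested_type(void)
    {
        hid_t inner, vl, outer;

        inner = H5Tcreate(H5T_COMPOUND, sizeof(inner_t));
        H5Tinsert(inner, "a", HOFFSET(inner_t, a), H5T_NATIVE_INT);
        H5Tinsert(inner, "b", HOFFSET(inner_t, b), H5T_NATIVE_DOUBLE);

        vl = H5Tvlen_create(inner);        /* VL sequence of the inner compound */

        outer = H5Tcreate(H5T_COMPOUND, sizeof(outer_t));
        H5Tinsert(outer, "id",  HOFFSET(outer_t, id),  H5T_NATIVE_INT);
        H5Tinsert(outer, "seq", HOFFSET(outer_t, seq), vl);

        H5Tclose(vl);
        H5Tclose(inner);
        return outer;   /* each nesting level now gets its own background buffer */
    }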
Platforms tested:
FreeBSD 4.5 (sleipnir) & Solaris 2.6 (baldric)
Purpose:
New feature
Description:
Fill-value behaviors for contiguous datasets have been redefined.
Basically, a dataset won't allocate space until it's necessary. Full details
are currently available at http://hdf.ncsa.uiuc.edu/RFC/Fill_Value.
Platforms tested:
Linux 2.2.
New Feature
Description:
Added a new H5Dfill() routine to fill the selected elements of a memory
buffer with a fill value. This is a user API wrapper around some
internal routines which were needed for the fill-value modifications
from Raymond as well as Pedro's code for reducing the size of a chunked
dataset.
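A hedged example of the new routine in use (the exact H5Dfill() signature
shown is our reading of the call): fill only the selected elements of a
memory buffer with a fill value.

    #include "hdf5.h"

    int main(void)
    {
        int      buf[10]  = {0};
        int      fill     = -1;
        hsize_t  dims[1]  = {10};
        hssize_t start[1] = {2};
        hsize_t  count[1] = {4};
        hid_t    space    = H5Screate_simple(1, dims, NULL);

        /* Fill only elements 2..5 of the buffer with the value -1. */
        H5Sselect_hyperslab(space, H5S_SELECT_SET, start, NULL, count, NULL);
        H5Dfill(&fill, H5T_NATIVE_INT, buf, H5T_NATIVE_INT, space);

        H5Sclose(space);
        return 0;
    }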
Platforms tested:
FreeBSD 4.5 (sleipnir) [and IRIX64 6.5 (modi4) in parallel, in a few
minutes]
Purpose:
New feature
Description:
Added a query function H5Tget_member_index for compound and enumeration
data types, to retrieve a member's index by its name.
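A hedged example of the new query (1.4-era C API; struct and member names
are made up):

    #include "hdf5.h"

    typedef struct { int x; double y; } point_t;

    int main(void)
    {
        hid_t cmpd = H5Tcreate(H5T_COMPOUND, sizeof(point_t));
        int   idx;

        H5Tinsert(cmpd, "x", HOFFSET(point_t, x), H5T_NATIVE_INT);
        H5Tinsert(cmpd, "y", HOFFSET(point_t, y), H5T_NATIVE_DOUBLE);

        idx = H5Tget_member_index(cmpd, "y");   /* expected to return 1 */

        H5Tclose(cmpd);
        return (idx == 1) ? 0 : 1;
    }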
Platforms tested:
Linux 2.2