Code cleanup
Description:
Trim trailing whitespace, which is making 'diff'ing the two branches
difficult.
Solution:
Ran this script in each directory:
foreach f (*.[ch] *.cpp)
sed 's/[[:blank:]]*$//' $f > sed.out && mv sed.out $f
end
Platforms tested:
FreeBSD 4.11 (sleipnir)
Too minor to require h5committest
Bug Fix/Code Cleanup/Doc Cleanup/Optimization/Branch Sync :-)
Description:
Generally speaking, this is the "signed->unsigned" change to selections.
However, in the process of merging code back, things got stickier and stickier
until I ended up doing a big "sync the two branches up" operation. So... I
brought back all the "infrastructure" fixes from the development branch to the
release branch (which I think were actually making some improvement in
performance), and also fixed several bugs that had been corrected in one branch
but not the other.
I've also tagged the repository before making this checkin with the label
"before_signed_unsigned_changes".
Platforms tested:
FreeBSD 4.10 (sleipnir) w/parallel & fphdf5
FreeBSD 4.10 (sleipnir) w/threadsafe
FreeBSD 4.10 (sleipnir) w/backward compatibility
Solaris 2.7 (arabica) w/"purify options"
Solaris 2.8 (sol) w/FORTRAN & C++
AIX 5.x (copper) w/parallel & FORTRAN
IRIX64 6.5 (modi4) w/FORTRAN
Linux 2.4 (heping) w/FORTRAN & C++
Misc. update:
Purpose: change feature
Description: Back out support for bitfield and time datatypes in H5Tget_native_type();
leave them for future support. For now, it returns a "not supported" error message.
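A hedged usage sketch of the behavior described above (the error handling and
output are illustrative, not part of the change): asking H5Tget_native_type()
for the native form of a bitfield type is expected to fail until support is added.

    #include <stdio.h>
    #include "hdf5.h"

    int main(void)
    {
        hid_t bf = H5Tcopy(H5T_STD_B32LE);   /* a 32-bit bitfield datatype */
        hid_t native;

        /* Suppress the expected error stack while probing. */
        H5E_BEGIN_TRY {
            native = H5Tget_native_type(bf, H5T_DIR_DEFAULT);
        } H5E_END_TRY;

        if (native < 0)
            printf("bitfield types: not yet supported by H5Tget_native_type\n");
        else
            H5Tclose(native);

        H5Tclose(bf);
        return 0;
    }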
Platforms tested: h5committest and fuss.
Misc. update: RELEASE.txt
new test for the native types test
Description:
On the Cray SV1, an INT type was wrongly converted to a SHORT type
by the get_native_integer function.
Solution:
Choose the type based on the precision; this is to support cases
like the Cray SV1, where the size of a short is 8 bytes but its precision is
32 bits (e.g. an INT (size 8, prec 64) would be converted to a SHORT
(size 8, prec 32) if size were the deciding factor).
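A hedged sketch of the idea (names are illustrative; this is not the actual
get_native_integer code): pick the first native integer whose precision covers
the requested precision, rather than matching on storage size, so that a Cray
SV1 short (size 8, prec 32) is never chosen for an INT with 64 bits of precision.

    #include "hdf5.h"

    /* Return the first native integer type with at least `prec' bits of
     * precision, or a negative value if none is wide enough. */
    static hid_t
    native_int_by_precision(size_t prec)
    {
        hid_t candidates[] = { H5T_NATIVE_SCHAR, H5T_NATIVE_SHORT,
                               H5T_NATIVE_INT,   H5T_NATIVE_LONG,
                               H5T_NATIVE_LLONG };
        size_t i;

        for (i = 0; i < sizeof(candidates) / sizeof(candidates[0]); i++)
            if (H5Tget_precision(candidates[i]) >= prec)
                return candidates[i];   /* precision, not size, decides */
        return -1;
    }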
Platforms tested:
linux
solaris
aix
Misc. update:
Code optimization
Description:
Set up datatype ID for dataset's datatype on disk. This allows us to avoid
repeatedly copying the datatype when an ID is needed.
Also, clean up a few warnings in various other places.
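A hypothetical sketch of the caching pattern (the struct and function names are
illustrative, not the real H5D internals): the first request copies the on-disk
datatype into an ID, and later requests reuse that cached ID instead of calling
H5Tcopy() again.

    #include "hdf5.h"

    typedef struct {
        hid_t disk_type;   /* handle for the dataset's on-disk datatype */
        hid_t cached_id;   /* lazily created ID, negative until first use */
    } dset_info_t;

    static hid_t
    dset_get_type_id(dset_info_t *dset)
    {
        if (dset->cached_id < 0)
            dset->cached_id = H5Tcopy(dset->disk_type);  /* copy only once */
        return dset->cached_id;                          /* reuse afterwards */
    }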
Platforms tested:
Solaris 2.7 (arabica)
FreeBSD 4.10 (sleipnir) w/parallel
Too minor to require h5committest
Code cleanup
Description:
Clean up almost all warnings from Windows builds.
Platforms tested:
FreeBSD 4.10 (sleipnir) w/parallel
Too minor to require h5committest
Bug fix.
Description:
Allow H5Tget_native_type() to handle opaque fields in compound datatypes.
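A hedged example of the case being fixed (member names and sizes are
illustrative): a compound datatype containing an opaque member is passed to
H5Tget_native_type(), which should now succeed instead of rejecting the opaque
field.

    #include "hdf5.h"

    typedef struct {
        int           id;
        unsigned char blob[16];   /* stored as an opaque field */
    } rec_t;

    int main(void)
    {
        hid_t opaque = H5Tcreate(H5T_OPAQUE, 16);
        hid_t cmpd   = H5Tcreate(H5T_COMPOUND, sizeof(rec_t));
        hid_t native;

        H5Tset_tag(opaque, "raw 16-byte blob");
        H5Tinsert(cmpd, "id",   HOFFSET(rec_t, id),   H5T_NATIVE_INT);
        H5Tinsert(cmpd, "blob", HOFFSET(rec_t, blob), opaque);

        native = H5Tget_native_type(cmpd, H5T_DIR_DEFAULT);  /* should succeed now */

        H5Tclose(native);
        H5Tclose(cmpd);
        H5Tclose(opaque);
        return 0;
    }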
Platforms tested:
FreeBSD 4.9 (sleipnir)
too minor to require h5committest
Improve test a bit
Description:
Add a small bit of testing for the array field in a compound datatype.
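A brief hedged sketch of the kind of case the new test covers (using the
current H5Tarray_create2() signature; the member names are illustrative): a
compound datatype with an array member is run through H5Tget_native_type().

    #include "hdf5.h"

    typedef struct {
        double x;
        int    v[4];
    } arec_t;

    int main(void)
    {
        hsize_t adims[1] = {4};
        hid_t   arr  = H5Tarray_create2(H5T_NATIVE_INT, 1, adims);
        hid_t   cmpd = H5Tcreate(H5T_COMPOUND, sizeof(arec_t));
        hid_t   native;

        H5Tinsert(cmpd, "x", HOFFSET(arec_t, x), H5T_NATIVE_DOUBLE);
        H5Tinsert(cmpd, "v", HOFFSET(arec_t, v), arr);

        native = H5Tget_native_type(cmpd, H5T_DIR_DEFAULT);

        H5Tclose(native);
        H5Tclose(cmpd);
        H5Tclose(arr);
        return 0;
    }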
Platforms tested:
FreeBSD 4.8 (sleipnir)
h5committest
Cray SV1 (wind)
Cray T3E (hubble)
Cray T90 (gypsy)
Description: H5Tget_native_type fails for multiple kinds of datatypes on Cray; it also
fails for fixed-length string types.
Platforms tested: Cray, h5committest
Bug fix
Description:
An earlier checkin changed some of the assumptions about single block
hyperslabs, causing them to fail in odd ways.
Solution:
Fix errors with single block hyperslabs by keying off of count==1 instead
of stride==1.
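A conceptual sketch of the new test (not the library's actual selection code):
a hyperslab described by start/stride/count/block is a single block exactly
when every count value is 1; the stride is irrelevant in that case, which is
why keying off stride==1 misclassified some selections.

    #include <stdbool.h>
    #include "hdf5.h"

    static bool
    is_single_block(unsigned rank, const hsize_t count[])
    {
        unsigned u;

        for (u = 0; u < rank; u++)
            if (count[u] != 1)
                return false;   /* more than one block in this dimension */
        return true;            /* one block per dimension, whatever the stride */
    }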
Platforms tested:
FreeBSD 4.8 (sleipnir) w/parallel
h5committested
Bug fix
Description:
Failed tests were not being reported correctly to the main test routine, so
they were not stopping a 'make check'.
Solution:
Changed '1' to '-1' for failures.
Platforms tested:
h5committested (although the Fortran tests failed for some reason)
Bug fix
Description:
Return correct value (1 instead of -1) on test failure.
Platforms tested:
FreeBSD 4.8 (sleipnir) w/C++
Linux 2.4 (burrwhite) w/FORTRAN
Solaris 2.7 (arabica) w/FORTRAN
IRIX64 6.5 (modi4) w/parallel & FORTRAN
(h5committest not run due to my ongoing difficulties with C++ on burrwhite).
Update
Description:
Updated the Copyright statement
Platforms tested:
Linux (This change is only in the comments, so I just checked that the
modules still compile.)
Misc. update:
Lots of performance improvements & a couple of new internal APIs.
Description:
Performance Improvements:
- Cached file offset & length sizes in shared file struct, to avoid
constantly looking them up in the FCPL.
- Generic property improvements:
- Added "revision" number to generic property classes to speed
up comparisons.
- Changed method of storing properties from using a hash-table
to the TBBT routines in the library.
- Share the property names between classes and the lists derived
from them.
- Removed redundant 'def_value' buffer from each property.
- Switching code to use a "copy on write" strategy for
properties in each list, where the properties in each list
are shared with the properties in the class, until a
property's value is changed in a list.
- Fixed error in layout code which was allocating too many buffers.
- Redefined public macros of the form (H5open()/H5check, <variable>)
internally to only be (<variable>), avoiding innumerable useless
calls to H5open() and H5check_version().
- Reuse already zeroed buffers in H5F_contig_fill instead of
constantly re-zeroing them.
- Don't write fill values if writing entire dataset.
- Use the gettimeofday() system call instead of the time() system call when
checking the modification time of a dataset.
- Added a reference counted string API and used it for tracking the
names of objects opened in a file (for the ID->name code); see the
sketch after this list.
- Removed redundant H5P_get() calls in B-tree routines.
- Redefine H5T datatype macros internally to the library, to avoid
calling H5check redundantly.
- Keep dataspace information for dataset locally instead of reading
from disk each time. Added new module to track open objects
in a file, to allow this (which will be useful eventually for
some FPH5 metadata caching issues).
- Remove H5AC_find macro which was inlining metadata cache lookups,
and call function instead.
- Remove redundant memset() calls from H5G_namei() routine.
- Remove redundant checking of object type when locating objects
in metadata cache and rely on the address only.
- Create default dataset object to use when default dataset creation
property list is used to create datasets, bypassing querying
for all the property list values.
- Use the default I/O vector size when performing raw data I/O with the
default dataset transfer property list, instead of querying for the
I/O vector size.
- Remove H5P_DEFAULT internally to the library, replacing it with a
more specific default property list based on the type of
property list needed.
- Remove redundant memset() calls in object header message (H5O*)
routines.
- Remove redundant memset() calls in data I/O routines.
- Split free-list allocation routines into malloc()- and calloc()-like
routines, instead of one combined routine.
- Remove lots of indirection in H5O*() routines.
- Simplify metadata cache entry comparison routine (used when
flushing entire cache out).
- Only enable metadata cache statistics when H5AC_DEBUG is turned
on, instead of always tracking them.
- Simplify address comparison macro (H5F_addr_eq).
- Remove redundant metadata cache entry protections during dataset
creation by protecting the object header once and making all
the modifications necessary for the dataset creation before
unprotecting it.
- Reduce the number of "number of elements in extent" computations performed
by computing and storing the value during dataspace creation.
- Simplify checking for a group location's file information when the file
has not been involved in file-mounting operations.
- Use binary encoding for modification time, instead of ASCII.
- Hoist H5HL_peek calls (to get information in a local heap)
out of loops in many group routines.
- Use static variables for selection iterators, instead of
dynamically allocating them each time.
- Look up & insert new entries in one step, avoiding traversing the
group's B-tree twice.
- Fixed memory leak in H5Gget_objname_idx() routine (tangential to
performance improvements, but fixed along the way).
- Use free-list for reference counted strings.
- Don't bother copying object names into cached group entries,
since they are re-created when an object is opened.
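As a rough illustration of the reference counted string idea mentioned above
(the names are illustrative, not the library's actual routines), the sketch
below shares one heap copy of the characters among owners and frees it only
when the last owner lets go.

    #include <stdlib.h>
    #include <string.h>

    typedef struct {
        char    *s;   /* shared character data */
        unsigned n;   /* number of current owners */
    } rc_str_t;

    static rc_str_t *
    rc_str_create(const char *s)          /* first owner */
    {
        rc_str_t *rs = malloc(sizeof *rs);
        if (rs != NULL) {
            rs->s = strdup(s);
            rs->n = 1;
        }
        return rs;
    }

    static rc_str_t *
    rc_str_ref(rc_str_t *rs)              /* additional owner: no copy, just count */
    {
        rs->n++;
        return rs;
    }

    static void
    rc_str_unref(rc_str_t *rs)            /* release; free on the last owner */
    {
        if (--rs->n == 0) {
            free(rs->s);
            free(rs);
        }
    }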
The benchmark I used to measure these results created several thousand
small (2K) datasets in a file and wrote out the data for them. This is
Elena's "regular.c" benchmark.
These changes resulted in a ~4.3x speedup of the
development branch when compared to the previous code in the
development branch, and a ~1.4x speedup compared to the release
branch.
Additionally, these changes reduce the total memory used (code and
data) by the development branch by ~800KB, bringing the development
branch back into the same ballpark as the release branch.
I'll send out a more detailed description of the benchmark results
as a followup note.
New internal API routines:
Added "reference counted strings" API for tracking strings that get
used by multiple owners without duplicating the strings.
Added "ternary search tree" API for text->object mappings.
Platforms tested:
Tested h5committest {arabica (fortran), eirene (fortran, C++),
modi4 (parallel, fortran)}
Other platforms/configurations tested?
FreeBSD 4.7 (sleipnir) serial & parallel
Solaris 2.6 (baldric) serial
Purpose:
bug fix
Description:
some arrays were too big, exceeding the memory limits of some machines.
Solution:
change to dynamic memory allocation.
Platforms tested:
arabica, sleipnir
Purpose:
New feature for H5Dget_offset
Description:
If a user block is set, H5Dget_offset should be able to return the absolute
offset from the beginning of the file.
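A hedged usage sketch (file and dataset names are illustrative, and the
1.8-style H5Dcreate2() call is used): with a 512-byte user block on the file
creation property list, the offset reported by H5Dget_offset() is expected to
be measured from the very beginning of the file, i.e. to include the user block.

    #include "hdf5.h"

    int main(void)
    {
        hid_t   fcpl = H5Pcreate(H5P_FILE_CREATE);
        hid_t   file, space, dset;
        hsize_t dims[1] = {10};
        int     buf[10] = {0};
        haddr_t off;

        H5Pset_userblock(fcpl, (hsize_t)512);
        file  = H5Fcreate("offset.h5", H5F_ACC_TRUNC, fcpl, H5P_DEFAULT);
        space = H5Screate_simple(1, dims, NULL);
        dset  = H5Dcreate2(file, "d", H5T_NATIVE_INT, space,
                           H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

        /* Write so that storage is actually allocated before asking for it. */
        H5Dwrite(dset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, H5P_DEFAULT, buf);

        off = H5Dget_offset(dset);   /* absolute offset: includes the user block */

        H5Dclose(dset);
        H5Sclose(space);
        H5Fclose(file);
        H5Pclose(fcpl);
        return (off >= 512) ? 0 : 1;
    }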
Platforms tested:
eirene, arabica
Bug fix
Description:
The array testing routine creates huge arrays on the function stack,
which causes a segmentation fault on Linux & FreeBSD when threadsafe
support is enabled.
Solution:
Allocate data for test dynamically instead of automatically.
In general, this should be the preferred method for all data arrays.
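An illustrative sketch of the fix (the dimensions and names are made up, not
the test's actual ones): a multi-megabyte automatic array can overflow the
much smaller per-thread stack, so the buffer is moved to the heap.

    #include <stdlib.h>

    #define NX 1000      /* example sizes only */
    #define NY 1000

    static void
    run_test(void)
    {
        /* int data[NX][NY];  -- roughly 4 MB of automatic storage: can
         *                       overflow a thread stack and segfault   */
        int (*data)[NY] = malloc(NX * sizeof *data);   /* heap instead */

        if (data == NULL)
            return;
        /* ... fill and check data[i][j] exactly as before ... */
        free(data);
    }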
Platforms tested:
FreeBSD 4.7 (sleipnir) w/threadsafe enabled.