Fix bug where library could core dump when an invalid location ID was
passed to H5Giterate() (and add test for this case).
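A minimal sketch of what the new test checks (the callback here is hypothetical, not the one added to the test suite):

    #include "hdf5.h"

    /* Hypothetical no-op iteration callback */
    static herr_t
    dummy_op(hid_t loc_id, const char *name, void *op_data)
    {
        return 0;
    }

    int
    main(void)
    {
        hid_t bad_id = (hid_t)-1;   /* deliberately invalid location ID */
        int   idx    = 0;

        /* With the fix, this fails cleanly (returns < 0) instead of
         * dumping core. */
        if (H5Giterate(bad_id, "/", &idx, dummy_op, NULL) >= 0)
            return 1;

        return 0;
    }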
Tested on:
Mac OS X/32 10.5.4 (amazon)
Too minor to require h5committest
Description: Added a new field 'app_count' to the H5I_id_info_t struct, to track
the reference count on an id due to the application; the old 'count' field
tracks the total. Generally, any id visible to the application gets counted
in app_count. Added an app_ref boolean parameter to H5I_inc_ref, H5I_dec_ref,
H5I_register, H5I_clear_type, and a few other functions, to specify whether
the operation(s) being performed on the id(s) are due to the application
(TRUE) or not (FALSE). Test added for this case.
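A self-contained sketch of the bookkeeping this describes (the struct and function below are a paraphrase for illustration, not the library's internal definitions):

    #include <stdbool.h>

    /* Paraphrase: each id tracks a total count plus the portion held
     * by the application. */
    typedef struct {
        unsigned count;     /* total reference count              */
        unsigned app_count; /* references held by the application */
    } id_info_t;

    /* Mirrors the new app_ref parameter on H5I_inc_ref() etc.: */
    static void
    inc_ref(id_info_t *info, bool app_ref)
    {
        info->count++;
        if (app_ref)        /* only application references bump app_count */
            info->app_count++;
    }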
Tested: kagiso, smirom, linew (h5committest)
Description:
Moved mount table from top file structure to shared file structure. Moved
parent out of mount table and back into top file structure. Mounted files can
now be accessed from any handle of the parent file. Changes to how files are
closed. Stricter cycle checking on mounted files. Removed unused function
H5F_has_mount().
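For example, something along these lines should now work (file and group names are hypothetical, and the mount point group is assumed to exist):

    #include "hdf5.h"

    int
    main(void)
    {
        /* Two independent handles to the same parent file */
        hid_t parent1 = H5Fopen("parent.h5", H5F_ACC_RDWR, H5P_DEFAULT);
        hid_t parent2 = H5Fopen("parent.h5", H5F_ACC_RDWR, H5P_DEFAULT);
        hid_t child   = H5Fopen("child.h5", H5F_ACC_RDWR, H5P_DEFAULT);
        hid_t grp;

        /* Mount the child under a group via the first handle... */
        H5Fmount(parent1, "/mnt", child, H5P_DEFAULT);

        /* ...and the mounted contents are visible through the second
         * handle as well. */
        grp = H5Gopen2(parent2, "/mnt/some_group", H5P_DEFAULT);

        H5Gclose(grp);
        H5Funmount(parent1, "/mnt");
        H5Fclose(child);
        H5Fclose(parent2);
        H5Fclose(parent1);
        return 0;
    }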
Tested:
committest in 1.8 branch. Committing now due to the urgency of the fix. No
changes here are specific to the trunk, but I will keep an eye on the daily
tests.
the file didn't have the data. This happened because each handle had its own
object structure, and the empty one overwrote the data with the fill value.
This is fixed by making some attribute information, such as the data, shared
in the attribute structure.
Tested on smirom, kagiso, and linew.
Add check to avoid mounting a file on a group twice, when the mounts
are done on the same HDF5 file but opened with separate H5Fopen calls.
Also add new 'mounted' flag to the H5G_info_t struct, queried with the
H5Gget_info() API call, to allow applications to detect and avoid this
situation.
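Applications can check the flag roughly like this (the group name is hypothetical):

    #include "hdf5.h"

    /* Returns 1 if a file is already mounted on the group, 0 if not,
     * and -1 on error. */
    static int
    group_is_mount_point(hid_t file_id, const char *group_name)
    {
        H5G_info_t ginfo;
        hid_t      gid = H5Gopen2(file_id, group_name, H5P_DEFAULT);

        if (gid < 0)
            return -1;
        if (H5Gget_info(gid, &ginfo) < 0) {
            H5Gclose(gid);
            return -1;
        }
        H5Gclose(gid);
        return ginfo.mounted ? 1 : 0;
    }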
This probably fixes Bz#1070 also, I'll check with Dan Anov (who reported
a different sort of behavior, but seems to have the same underlying problem).
Tested on:
Mac OS X/32 10.5.4 (amazon)
Linux/64 2.6 (chicago)
Description:
On Windows, certain users were having trouble with the "ohdr" test, which does some processing on object header messages. The errors were hard to reproduce on our machines, and we eventually determined that the errors were timezone-specific.
The bug is triggered on Windows when processing timestamps very near the "Epoch" (midnight on 1/1/1970)-- the mktime() function does some automatic adjustment on the time to correct for timezones. In the USA, the correction adds a few hours; in Europe, it subtracts, thus giving us times pre-Epoch.
This only affects Windows because the Windows mktime() function cannot handle times before 1970-- other systems seemingly can.
The fix is simply to create timestamps no earlier than 01/02/1970. This way, any timezone adjustment will still yield a post-Epoch time.
This bug only affects the ohdr test, and shouldn't be a problem in the library. The earliest timestamps that will actually be read will be around the time HDF5 was created (~1996-7, per Quincey).
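A sketch of the idea (not the actual test code): build the earliest test timestamp from 01/02/1970 so that mktime()'s timezone correction cannot push it before the Epoch.

    #include <time.h>

    /* Earliest timestamp the test should generate: Jan 2, 1970.
     * Even if mktime() subtracts several hours for the local timezone,
     * the result stays at or after the Epoch, which Windows' mktime()
     * requires. */
    static time_t
    earliest_test_time(void)
    {
        struct tm tm = {0};

        tm.tm_year  = 70;   /* years since 1900 -> 1970 */
        tm.tm_mon   = 0;    /* January */
        tm.tm_mday  = 2;    /* the 2nd, not the 1st */
        tm.tm_isdst = -1;   /* let the C library determine DST */

        return mktime(&tm);
    }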
Tested:
VS2005 on WinXP
h5committest (kagiso, linew, smirom)
Description:
As part of our Windows cleanup, we try to remove Windows-specific tweaks in the source code. There are many instances where Windows code is introduced via ifdef's. We re-evaluated whether they are still required and found that many of them are not. Others we changed to "feature"-specific code, rather than Windows-specific code.
Tested:
VS2005 on WinXP
VS.NET on WinXP
h5committest (kagiso, smirom, linew)
Description:
On Windows, the pthread_self function cannot be used to print the returned thread ID for debugging. Instead, we need a separate function, GetCurrentThreadId. To eliminate some Windows ifdef's in the code, we created two new function macros which can be used on all platforms. They are conditionally defined in H5win32defs.h, and globally in H5private.h.
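The pattern looks roughly like this (the macro name below is illustrative, not necessarily the one used in the headers):

    /* H5win32defs.h: Windows-specific definition */
    #define HDgetthreadid()  GetCurrentThreadId()

    /* H5private.h: default definition for any platform that didn't
     * already define it */
    #ifndef HDgetthreadid
    #define HDgetthreadid()  pthread_self()
    #endif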
Tested:
VS2005 w/ pthreads on WinXP
kagiso w/ pthreads
Description:
In library code, we try not to use system calls directly, but instead use the HD{function} macros. This way, we can map special versions of the calls on particular systems. Previously, this was all done in H5private.h. However, in an effort to clean up platform-specific definitions, we moved all of the Windows macros into a separate file, H5win32defs.h. This way, we can map to the non-POSIX versions and avoid the warnings Visual Studio issues for the POSIX names.
Some macros are set specifically in the platform-specific header files. Then, any macros left unset will be set by the "default" implementation in H5private.h.
This checkin also cleans up various source files to use the HD* macros when possible.
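A sketch of the pattern (the exact definitions in the headers may differ):

    /* H5win32defs.h: map the macro to the Windows-specific call */
    #define HDopen(S,F,M)   _open(S, F | _O_BINARY, M)

    /* H5private.h: any macro left unset gets the default mapping */
    #ifndef HDopen
    #define HDopen(S,F,M)   open(S, F, M)
    #endif

    /* Library code then calls HDopen() rather than open() directly. */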
Tested:
VS2005 on WinXP
VS.NET on WinXP
h5committest (kagiso, linew, smirom)
Finish omnibus chunked dataset I/O refactoring, to separate general
actions on chunked datasets from actions that are specific to using the v1
B-tree index.
Cleaned up a few bugs and added some additional tests also.
Tested on:
FreeBSD/32 6.2 (duty) in debug mode
FreeBSD/64 6.2 (liberty) w/C++ & FORTRAN, in debug mode
Linux/32 2.6 (kagiso) w/PGI compilers, w/C++ & FORTRAN, w/threadsafe,
in debug mode
Linux/64-amd64 2.6 (smirom) w/default API=1.6.x, w/C++ & FORTRAN,
in production mode
Linux/64-ia64 2.6 (cobalt) w/Intel compilers, w/C++ & FORTRAN,
in production mode
Solaris/32 2.10 (linew) w/deprecated symbols disabled, w/C++ & FORTRAN,
w/szip filter, in production mode
Mac OS X/32 10.5.2 (amazon) in debug mode
Linux/64-ia64 2.4 (tg-login3) w/parallel, w/FORTRAN, in production mode
Description:
On Linux-like systems, we can get the ID of the current thread through a call to pthread_self. However, on Windows, the return value cannot be cast to a thread ID, so we simply couldn't get the ID. Previously, we gave up and printed a message that we couldn't get an ID. Instead, we can use the Windows-specific call GetCurrentThreadId(), which achieves the same goal. This way we can provide better debug output with threadsafe features.
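Roughly the shape of the change, as a self-contained sketch (the helper name is illustrative):

    #include <stdio.h>
    #ifdef _WIN32
    #include <windows.h>
    #else
    #include <pthread.h>
    #endif

    /* Print the current thread's ID for debug output. */
    static void
    print_thread_id(void)
    {
    #ifdef _WIN32
        /* pthread_self()'s return can't be cast to an integer on Windows,
         * so use the native call instead. */
        fprintf(stderr, "thread %lu\n", (unsigned long)GetCurrentThreadId());
    #else
        fprintf(stderr, "thread %lu\n", (unsigned long)pthread_self());
    #endif
    }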
Tested:
VS2005 on WinXP
VS.NET on WinXP
(other platforms not tested because change is within _WIN32 ifdef)
Description:
The Fortran Makefile.am used HDF_FORTRAN to indicate it is part of the
Fortran API source so that conclude.am will give the Fortran API prefix in the
test output. The symbol HDF_FORTRAN is also used in configure for a different
purpose (indicating --enable-fortran). They conflicted.
A similar problem existed for the symbol HDF_CXX.
Solution:
Changed all the involved Makefile.am to use "FORTRAN_API" instead. It is
a more appropriate name. Same for CXX_API.
Along the way, discovered that the Makefile.am of hl/fortran/test and
hl/cxx/test did not have those symbols at all. Added them in.
Platform tested:
Kagiso only. It is a trivial change.
Description: The test program h52gifgentst was getting installed in the bin
directory during 'make install', and it shouldn't. Make now
builds the program for use in testing but doesn't install it
during 'make install'.
Tested: kagiso
Detect chunks that are >4GB before the dataset gets created and return an error
to the application.
Tweak lots of internal variables that hold the chunk size/dimensions to
use a 'uint32_t' instead of a 'size_t', so that the integer size is constant.
Correct a number of our tests that were creating datasets with chunks
that were >4GB and add some specific tests for >4GB chunk size detection.
Minor whitespace & other code cleanups.
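For instance, a creation attempt like the following (file and dataset names hypothetical) should now fail with an error rather than succeed:

    #include "hdf5.h"

    int
    main(void)
    {
        hid_t   file, space, dcpl, dset;
        hsize_t dims[1] = {1024ULL * 1024 * 1024};   /* 2^30 elements */

        file  = H5Fcreate("big_chunk.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
        space = H5Screate_simple(1, dims, NULL);
        dcpl  = H5Pcreate(H5P_DATASET_CREATE);
        H5Pset_chunk(dcpl, 1, dims);                 /* one chunk = whole dataset */

        /* 2^30 elements * 8-byte doubles = 8GB per chunk, so H5Dcreate2()
         * should return an error instead of creating the dataset. */
        dset = H5Dcreate2(file, "too_big", H5T_NATIVE_DOUBLE, space,
                          H5P_DEFAULT, dcpl, H5P_DEFAULT);
        if (dset >= 0)
            return 1;   /* the >4GB chunk should have been rejected */

        H5Pclose(dcpl);
        H5Sclose(space);
        H5Fclose(file);
        return 0;
    }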
Tested on:
Mac OS X/32 10.5.2 (amazon)
Forthcoming testing on other platforms...
Improvement.
Description:
src/libhdf5.settings was the initial configure summary and is installed.
Then configure was changed to dump a summary of the configure settings to
the output and also append it to src/libhdf5.settings. That created
two different output formats and duplicated information. This is the
initial attempt to clean up this confusion and unify the output format.
We decided to use the src/libhdf5.settings template as the unified means.
This requires more macro symbols to be defined. The following symbols are
all related to generating the src/libhdf5.settings file.
AC_SUBST(EXTERNAL_FILTERS)
AC_SUBST(MPE) MPE=no
AC_SUBST(STATIC_EXEC) STATIC_EXEC=no
AC_SUBST(HDF_FORTRAN) HDF_FORTRAN=no
AC_SUBST(FC) HDF_FORTRAN=no
AC_SUBST(HDF_CXX) HDF_CXX=no
AC_SUBST(CXX) HDF_CXX=no
AC_SUBST(HDF5_HL) HDF5_HL=yes
AC_SUBST(GPFS) GPFS=no
AC_SUBST(LINUX_LFS) LINUX_LFS=no
AC_SUBST(INSTRUMENT) INSTRUMENT=no
AC_SUBST(CODESTACK) CODESTACK=no
AC_SUBST(HAVE_DMALLOC) HAVE_DMALLOC=no
AC_SUBST(DIRECT_VFD) DIRECT_VFD=no
AC_SUBST(THREADSAFE) THREADSAFE=no
AC_SUBST(STATIC_SHARED)
AC_SUBST(enable_shared)
AC_SUBST(enable_static)
AC_SUBST(UNAME_INFO) UNAME_INFO=`uname -a`
The src/libhdf5.settings.in file has CONDITIONALs added to it too. The
untrue conditions turn into a "#", and these lines are cleaned out by the
post-processing script.
Platform tested:
h5committest on kagiso, smirom and linew.
Description: Applying to the trunk the autotools update that was applied to
the 1.8 branch a couple of weeks ago.
Updated bin/reconfigure script to reflect the new versions of
libtool and automake in the /home1/packages/ directory.
Rearranged configure.in script. When using libtool 2.2.2, the
libtool script isn't generated until later in the configuration
process, so I had to move a test that parses the libtool
script to a point after where it is actually generated.
Ran libtoolize on the project, and ran bin/reconfigure to
regenerate configure and Makefile.in's throughout.
Tested: kagiso, smirom, linew (h5committest)
Omnibus raw data I/O revisions, with wide-ranging changes and
refactoring, in order to prepare for implementing "fast append" feature.
These changes remove the majority of the code duplication for raw data
I/O which has crept in over the last ten years and introduce a more object-
oriented design for operating on different types of dataset storage.
Chunked storage no longer has its own I/O routines; it is now handled
as either contiguous (if the chunk is not pulled into the cache) or compact (if the
chunk is cached in memory).
No bug or feature changes, at least intentionally... :-)
Tested on:
FreeBSD/32 6.2 (duty) in debug mode
FreeBSD/64 6.2 (liberty) w/C++ & FORTRAN, in debug mode
Linux/32 2.6 (kagiso) w/PGI compilers, w/C++ & FORTRAN, w/threadsafe,
in debug mode
Linux/64-amd64 2.6 (smirom) w/default API=1.6.x, w/C++ & FORTRAN,
in production mode
Linux/64-ia64 2.6 (cobalt) w/Intel compilers, w/C++ & FORTRAN,
in production mode
Solaris/32 2.10 (linew) w/deprecated symbols disabled, w/C++ & FORTRAN,
w/szip filter, in production mode
Mac OS X/32 10.5.2 (amazon) in debug mode
Linux/64-ia64 2.4 (tg-login3) w/parallel, w/FORTRAN, in production mode
Description:
Vailin has been working on some new tests for converting Windows paths. She found a bug that is making these two tests fail, but didn't have time to fix it. We've commented out the two tests until she has time to fix the bug. This won't affect other platforms because it's Windows-specific code.
Tested:
VS2005 on WinXP
remove HDputenv() from external_link_env() test
(will add script later to set HDF5_EXT_PREFIX for running the test)
modify and add more comments
2. src/H5private.h: remove #define for HDputenv()
Tested on kagiso, linew and smirom.
Bring r14737 back from the 1.8 branch: Fix bug which would
incorrectly encode the member offsets for compound datatypes whose size was
between 256 & 511 bytes, when the "use the latest format" feature was enabled.
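The affected case looks roughly like this (file, dataset, and member names are hypothetical):

    #include "hdf5.h"

    int
    main(void)
    {
        /* A compound type whose total size (300 bytes) falls in the
         * previously mis-encoded 256-511 byte range. */
        hid_t cmpd = H5Tcreate(H5T_COMPOUND, (size_t)300);
        hid_t fapl, file, space, dset;

        H5Tinsert(cmpd, "a", (size_t)0,   H5T_NATIVE_INT);
        H5Tinsert(cmpd, "b", (size_t)280, H5T_NATIVE_DOUBLE);

        /* The bug only appeared with the "use the latest format" setting. */
        fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_libver_bounds(fapl, H5F_LIBVER_LATEST, H5F_LIBVER_LATEST);

        file  = H5Fcreate("cmpd.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);
        space = H5Screate(H5S_SCALAR);
        dset  = H5Dcreate2(file, "dset", cmpd, space,
                           H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

        H5Dclose(dset);
        H5Sclose(space);
        H5Fclose(file);
        H5Pclose(fapl);
        H5Tclose(cmpd);
        return 0;
    }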
Tested on:
Mac OS X/32 10.5.2 (amazon) w/debug
FreeBSD/32 6.2 (duty) w/production
Minor bug fix to H5Aget_num_attrs() to return error when an invalid
location ID is passed in.
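The behavior being tested, in miniature:

    #include "hdf5.h"

    int
    main(void)
    {
        hid_t bad_id = (hid_t)-1;   /* deliberately invalid location ID */

        /* With the fix, this returns a negative value instead of succeeding. */
        return (H5Aget_num_attrs(bad_id) < 0) ? 0 : 1;
    }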
Tested on:
Mac OS X/32 (amazon)
Too minor to require h5committest
callback functions modify the skip list or LRU out from under a partial
or complete flush of the cache. This is a test for a situation that
should not occur, but we test it anyway.
Also added code to skip longer tests in cache_api in express tests.
Tested serial and parallel on phoenix (Debian, x86-32), and ran the commit test.
Elena ran the commit test as well, and ran a manual test under Mac OS X.
large entry, or to a large increase in the size of an existing entry.
This required some additions to the cache configuration structure, and
thus will require changes in the metadata cache documentation.
The basic idea is to monitor the size of entries as they are loaded,
inserted, or increased in size. If the size of the entry (or increase)
exceeds some user selected fraction of the size of the cache, increase
the size of the cache.
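In rough pseudocode (the function, parameter names, and growth rule below are illustrative, not the actual cache configuration fields):

    /* Illustrative policy sketch, not the library's code. */
    static size_t
    maybe_expand_cache(size_t cache_size, size_t entry_size,
                       double threshold, double multiple)
    {
        /* If a newly loaded/inserted entry (or a size increase) exceeds
         * the chosen fraction of the cache, grow the cache. */
        if ((double)entry_size > threshold * (double)cache_size)
            cache_size += (size_t)(multiple * (double)entry_size);

        return cache_size;
    }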
Note that this fix was designed quickly -- while it deals with the
use case that exposed the problem, we may have to revisit the issue
later.
Tested serial and parallel on Phoenix, and h5committest.
Correct the prototype for H5Sselect_elements() to take an 'hsize_t *' for
the coordinates, instead of 'hsize_t **'.
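With the corrected prototype, a call looks like this (dataspace dimensions and coordinates are arbitrary examples):

    #include "hdf5.h"

    int
    main(void)
    {
        hsize_t dims[2]     = {10, 10};
        hsize_t coord[2][2] = { {1, 2}, {3, 4} };   /* (row, col) pairs */
        hid_t   space       = H5Screate_simple(2, dims, NULL);

        /* Coordinates are passed as a flat 'hsize_t *' array. */
        H5Sselect_elements(space, H5S_SELECT_SET, (size_t)2,
                           (const hsize_t *)coord);

        H5Sclose(space);
        return 0;
    }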
Tested on:
Mac OS X/32 10.5.1 (amazon)