Clean up compiler warnings and misc. style issues with new internal
tagged entry code.
Tested on:
FreeBSD/32 6.3 (duty) in debug mode
Mac OS X/32 10.6.4 (amazon) in debug mode
Mac OS X/32 10.6.4 (amazon) w/C++ & FORTRAN, w/threadsafe,
in production mode
Mac OS X/32 10.6.4 (amazon) w/parallel, in debug mode
Bring r19234 from the 1.8 branch to the trunk:
Initialize loop variable that caused failures in certain circumstances.
Also clean up compiler warnings and release MPI datatype.
Tested on:
FreeBSD/32 6.3 (duty) in debug mode
FreeBSD/64 6.3 (liberty) w/C++ & FORTRAN, in debug mode
Linux/32 2.6 (jam) w/PGI compilers, w/default API=1.8.x,
w/C++ & FORTRAN, w/threadsafe, in debug mode
Linux/64-amd64 2.6 (amani) w/Intel compilers, w/default API=1.6.x,
w/C++ & FORTRAN, in production mode
Solaris/32 2.10 (linew) w/deprecated symbols disabled, w/C++ & FORTRAN,
w/szip filter, w/threadsafe, in production mode
Linux/PPC 2.6 (heiwa) w/C++ & FORTRAN, w/threadsafe, in debug mode
Linux/64-ia64 2.6 (cobalt) w/Intel compilers, w/C++ & FORTRAN,
in production mode
Linux/64-amd64 2.6 (abe) w/parallel, w/FORTRAN, in debug mode
Mac OS X/32 10.6.4 (amazon) in debug mode
Mac OS X/32 10.6.4 (amazon) w/C++ & FORTRAN, w/threadsafe,
in production mode
Mac OS X/32 10.6.4 (amazon) w/parallel, in debug mode
Description:
honest3 v1.8 failed in the parallel test. It got stuck in the same
testpar/testphdf5 subtest (cbhsssdrpio). This is an old problem.
Upon closer inspection, testphdf5, when terminated, had clocked up
1 hr 9 min 46 sec of wall clock time. The Honest1 system also sent a message
that an MPI process had used up 30+ CPU minutes, which exceeded their login
node CPU time limit, so they killed the process. I also did a hand-run
of testphdf5. All subtests before cbhsssdrpio completed in a few minutes.
Therefore, it is safe to say the majority of the 70 minutes of wall clock
time was spent in the cbhsssdrpio subtest. It also used up lots of CPU
time. cbhsssdrpio is likely stuck in an infinite loop.
Since MPI applications are prone to infinite looping due to message deadlock,
testphdf5 has built-in protection that gives each subtest at most 20 minutes
of wall clock time to run. When the 20-minute wall clock limit is exceeded,
testphdf5 attempts to terminate itself. This prevents unnecessary CPU time
consumption by an infinite loop.
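For illustration, the protection amounts to an alarm-based watchdog. A
minimal sketch (not the actual testphdf5 source; the handler name and
message are hypothetical):

    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Hypothetical handler: fires if a subtest exceeds its wall clock limit. */
    static void timeout_handler(int sig)
    {
        (void)sig;
        fprintf(stderr, "subtest exceeded its time limit; aborting\n");
        exit(1);            /* stop instead of burning more CPU time */
    }

    int main(void)
    {
        signal(SIGALRM, timeout_handler);
        alarm(20 * 60);     /* arm a 20-minute watchdog */
        /* ... run one subtest here ... */
        alarm(0);           /* disarm on normal completion */
        return 0;
    }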
But that clock limit was changed to 30 and then 60 minutes. I should have
noticed, but failed to notice, the change mentioned by Quincey. IMO, 20
minutes of wall clock time is more than sufficient for each subtest of
testphdf5 to complete. If a subtest takes longer than 20 minutes, it is
likely stuck in an infinite loop. Giving it more time will not help.
If a subtest of testphdf5 takes more than 20 minutes, it should be broken
down into smaller tests that finish well under 20 minutes, so that it is
much easier to see progress and identify any deadlock problems.
In view of this, I am changing the testphdf5 time limit back to 20 minutes.
This will at least stop the CPU time limit from being exceeded and annoying
the system administrators.
Maybe there could be a provision, such as an environment variable like
$HDF5_ALARM_SECOND, to modify the alarm duration for an individual execution.
Even so, it should be used only temporarily, to see if an execution just
needs a little more time.
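A minimal sketch of what such a provision could look like (hypothetical;
the variable name comes from the suggestion above):

    #include <stdlib.h>

    /* Hypothetical: let $HDF5_ALARM_SECOND override the default limit. */
    static unsigned alarm_seconds(void)
    {
        const char *s = getenv("HDF5_ALARM_SECOND");
        long        v = s ? strtol(s, NULL, 10) : 0;

        return (v > 0) ? (unsigned)v : 20 * 60;  /* default: 20 minutes */
    }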
Tested: just eyeballed, as the change is trivial.
It is an error to use the condition H5_HAVE_FSEEK64 to control the definition
of HDlseek. It caused errors on AIX, where lseek64 is available.
Replaced it with H5_HAVE_LSEEK64. Also added the missing HDstrcasecmp macro.
Tested: AIX using default and --disable-largefile.
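For illustration, the corrected conditional amounts to something like this
minimal sketch (simplified; not the exact H5private.h source):

    /* Map HDlseek to lseek64 only when configure actually found lseek64. */
    #ifdef H5_HAVE_LSEEK64
        #define HDlseek(F, O, W)  lseek64(F, O, W)
    #else
        #define HDlseek(F, O, W)  lseek(F, O, W)
    #endif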
fseek64 was used to support large file access for the STDIO driver back in
version 1.2.2 in the year 2000. Somehow it was not included in version 1.4.0.
Now fseeko64 is used to support large files. There is no longer any need for
fseek64, which is not a standard call. Removed its presence from configure
and related files.
Tested: jam for configure only.
The STDIO driver only checked for fseeko and incorrectly assumed it could
support file sizes larger than 32 bits. Fixed it to use fseeko64 if supported,
else fseeko. To simplify the code, assume that fseeko, which is a POSIX
function, must be supported. Therefore, fseek is not used at all.
(Note: the above applies to Unix-like systems. The Windows platform has
hardcoded Windows functions which are NOT POSIX compliant.)
Tested: h5committested. Also tested in BP (AIX) 32/64 and enable/disable-largefile.
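A minimal sketch of the selection logic described above (assumed macro names,
simplified from the actual driver source):

    /* Prefer the 64-bit offset variant when available; otherwise fall
     * back to fseeko, which is assumed to exist on POSIX systems. */
    #ifdef H5_HAVE_FSEEKO64
        #define HDfseek(F, O, W)  fseeko64(F, O, W)
    #else
        #define HDfseek(F, O, W)  fseeko(F, O, W)
    #endif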
On some machines (Linux), when --disable-largefile is used, configure claims
fseeko64 is available even though off64_t is NOT supported. Moved the tests
for fseeko64 and ftello64 to where fseek64 was tested, so that they are
tested only if off64_t is supported.
Tested: h5committested.
Bring changes from Coverity branch back to trunk:
r19079 & 19080:
[BZ1942] When h5dump -u is used to generate XML, it does not respect the -m
option. The XML version of the dump_data function didn't check for use of the
fp_format variable.
Added a new test expected file for committed bug 1942.
r19103, 19104 & 19105:
[BZ1821] h5repack -v did not display correct output for a selected
compression. Needed a new test for comparing the output of the -v option.
Added a new test file for the solution to BZ1821.
BZ1821 - Bring test changes from the shell script actually used.
Tested on:
Mac OS X/32 10.6.4 (amazon) debug & production
(h5committested on branch)
herr_t. To minimize the change to the library's behavior: in the function
H5Z_prelude_callback of H5Z.c, if the return value of can_apply is FALSE and
the filter is MANDATORY, this function returns FAILURE. If the return value
is FALSE but the filter is OPTIONAL, this function returns SUCCEED. During
I/O, the filter will fail and return a size of zero, but the pipeline will
skip this filter.
Tested the same change for 1.8 on jam, linew, and amani. Tested on jam with szip.
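A minimal sketch of the described control flow (simplified; can_apply_status
and ret_value are stand-ins, not the actual H5Z.c variables):

    /* Inside H5Z_prelude_callback, after invoking the filter's
     * can_apply callback: */
    if (can_apply_status == FALSE) {
        if (filter->flags & H5Z_FLAG_OPTIONAL)
            ret_value = SUCCEED;  /* pipeline will skip this filter during I/O */
        else
            ret_value = FAIL;     /* mandatory filter cannot apply: error out */
    }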
The previous fix put the Windows code in H5private.h, but it should have been
in H5win32defs.h, which holds all Windows-specific definitions. Moved the fix.
Tested: BP (AIX) to confirm the fix is still valid. Windows tests will occur
in daily tests tonight.
Description:
test/big incorrectly determined that it was not able to write files larger
than 2GB and skipped the SEC2 and STDIO driver tests. The reason was that it
was using off_t while the SEC2 driver uses lseek64, which expects the
off64_t type.
Solution:
Created a new HDoff_t which is set to off_t, off64_t, or another appropriate
type depending on which of lseek or lseek64 is available. Changed the SEC2
file driver and the big test to use this common definition.
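A minimal sketch of the new definition (simplified; the actual source keys
off the configure results):

    /* Common file-offset type matching whichever seek call is in use. */
    #ifdef H5_HAVE_LSEEK64
        typedef off64_t HDoff_t;  /* pairs with lseek64 */
    #else
        typedef off_t   HDoff_t;  /* pairs with lseek   */
    #endif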
Tested:
On BP (AIX), using --enable-largefile and --disable-largefile, in both 32-
and 64-bit modes. Did not do h5committest because: 1. the error was exposed
on the remote BP machine; 2. the change is trivial.
Note that the STDIO driver failed when --disable-largefile is used. That is
an error in the STDIO driver code that is being fixed.
Bring revisions from Coverity branch back to trunk:
r19044:
Coverity #449 - Line 1560 called H5O_chunk_protect to allocate two pointers,
but when the second one failed, the first wasn't freed (H5O_chunk_unprotect).
We fixed it by freeing the pointers when an error happens.
r19045:
Fixed coverity issue # 319. Free sec_node in done if it is not NULL.
r19046:
Add intended but missing assignments to initialize pointers to NULL (coverity issue fixes).
r19049:
The hdf5_1_8_coverity branch was recreated from the hdf5_1_8 branch at revision 18839 before the fix for Coverity issue #84 had been propagated to the hdf5_1_8 branch. This revision adds the fix again.
r19060:
added parentheses to see if they will keep subversion from getting confused
r19061:
Fix coverity item 139. Fixed incorrect condition for freeing buffer on error.
Fix coverity items 20 and 21. Removed unused NTESTS facility from dtypes.c.
Cleanup in H5Shyper.c.
r19062:
Fix coverity item 450. Check to see if chk_proxy has been allocated before
attempting to free it.
Fix coverity item 454. Check to see if allocation of buf failed in
H5D_fill_refill_vl.
Fix coverity items 455-457. Initialize hid_t's to -1, check their value
before attempting to close them, and check whether the close failed (see the
sketch after this list).
r19063:
New fix to address coverity issue #84. Check that pointers in H5Z_xform_find_type are not NULL before passing them to H5T_cmp.
Tested on:
Mac OS X/32 10.6.4 (amazon) w/debug & production
(Too minor to require h5committest)
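For r19062's hid_t handling, a minimal sketch of the pattern (hypothetical
variable names, not the actual dtypes.c code):

    hid_t dtype   = -1;  /* start invalid so cleanup can test it */
    int   nerrors = 0;

    if ((dtype = H5Tcopy(H5T_NATIVE_INT)) < 0)
        nerrors++;

    /* ... use dtype ... */

    if (dtype >= 0 && H5Tclose(dtype) < 0)  /* close only if opened, */
        nerrors++;                          /* and check the close   */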
Bring changes on Coverity branch back to trunk:
r19040:
Fixed Coverity #440 - NULL check after dereference. We moved the NULL check
up into the IF block and changed it to an assertion (see the sketch after
this list).
r19041:
Maintenance: Addressed Coverity issues 441 and 449 by initializing proper
variables
r19042:
In function H5O_chunk_protect (H5Ochunk.c):
- Initialize H5O_chunk_proxy_t pointers chk_proxy and ret_value.
- Free chk_proxy on error.
r19043:
Addressed coverity issues 442 - 448 by initializing pointers to NULL.
Tested on:
Mac OS X/32 10.6.4 (amazon) w/debug & production
(Too minor to require h5committest)
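A minimal before/after sketch of the #440 pattern (hypothetical names;
HDassert is HDF5's internal assert wrapper):

    /* Before (flagged by Coverity): pointer dereferenced, then checked. */
    size = node->size;     /* dereference...         */
    if (node == NULL)      /* ...NULL check too late */
        return FAIL;

    /* After: the check moves ahead of the dereference, as an assertion. */
    HDassert(node);
    size = node->size;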
is shared now. The only situation that requires copying the data is when the metadata cache
evicts and reloads this attribute. The attribute structure will be different in that
situation.
Tested on jam.
a part of the h4h5tools distribution a long time ago, but the INSTALL file
was not updated in the development branch, and the stale instructions slipped
into the 1.8 releases and the current trunk.
Description:
In certain circumstances, the direct I/O driver did not perform correctly when
data was unaligned. The driver has been patched to fix this. Also added some
potential performance improvements for the unaligned case, and strengthened the
test for whether the data needs to be aligned.
Tested: cobalt
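A minimal sketch of the kind of alignment test involved (hypothetical helper;
the actual driver logic is more involved):

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical: true if a direct I/O transfer must be staged through
     * an aligned copy buffer. fbsize is the required boundary (e.g. the
     * file system block size). */
    static int needs_aligned_copy(const void *buf, uint64_t offset,
                                  size_t size, size_t fbsize)
    {
        return ((uintptr_t)buf % fbsize) != 0 ||
               (offset % fbsize) != 0 ||
               (size % fbsize) != 0;
    }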
Correct traversal of user-defined links (including external links) to
retain path information of object, allowing H5Iget_name() queries to work
quickly (without searching entire destination file). This required some
refactoring and addition of a mechanism to detect if a "fast" query was
performed (for the tests).
Minor code cleanups, etc.
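For context, the query this speeds up looks like the following (public HDF5
API; the file and link names are hypothetical):

    #include "hdf5.h"
    #include <stdio.h>

    int main(void)
    {
        /* Open an object through an external link in a (hypothetical) file. */
        hid_t fid = H5Fopen("source.h5", H5F_ACC_RDONLY, H5P_DEFAULT);
        hid_t gid = H5Gopen2(fid, "/ext_link_to_group", H5P_DEFAULT);
        char  name[256];

        /* With this change, the path is retained during traversal, so this
         * query no longer has to search the destination file. */
        if (H5Iget_name(gid, name, sizeof(name)) > 0)
            printf("object name: %s\n", name);

        H5Gclose(gid);
        H5Fclose(fid);
        return 0;
    }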
Tested on:
FreeBSD/32 6.3 (duty) in debug mode
FreeBSD/64 6.3 (liberty) w/C++ & FORTRAN, in debug mode
Linux/32 2.6 (jam) w/PGI compilers, w/default API=1.8.x,
w/C++ & FORTRAN, w/threadsafe, in debug mode
Linux/64-amd64 2.6 (amani) w/Intel compilers, w/default API=1.6.x,
w/C++ & FORTRAN, in production mode
Solaris/32 2.10 (linew) w/deprecated symbols disabled, w/C++ & FORTRAN,
w/szip filter, w/threadsafe, in production mode
Linux/PPC 2.6 (heiwa) w/C++ & FORTRAN, w/threadsafe, in debug mode
Linux/64-ia64 2.6 (cobalt) w/Intel compilers, w/C++ & FORTRAN,
in production mode
Linux/64-amd64 2.6 (abe) w/parallel, w/FORTRAN, in debug mode
Mac OS X/32 10.6.4 (amazon) in debug mode
Mac OS X/32 10.6.4 (amazon) w/C++ & FORTRAN, w/threadsafe,
in production mode
Mac OS X/32 10.6.4 (amazon) w/parallel, in debug mode
Rename H5AC_set() to H5AC_insert_entry()
Get rid of H5C_set_skip_flags() & related flags
Tested on:
Mac OS X/32 10.6.4 (amazon) w/debug, production & parallel
(too simple to require h5committest)
Fix const pointer issues for the projection construction routine and also
bump the time before the alarm kicks in to terminate a test from 20 minutes
to 30 minutes, to give the PGI compiler tests w/debugging enabled a chance
to finish.
Tested on:
Mac OS X/32 10.6.4 (amazon) w/debug
Linux/32 2.6.18 (jam) w/PGI & debug