[svn-r10201] Purpose:

IBM MPI-IO has a bug in its handling of the MPI derived datatypes that are
used to support collective IO. To work around it, we provide a way to turn
off collective IO support on such platforms.


Description:
A new macro, H5_MPI_COMPLEX_DERIVED_DATATYPE_WORKS, is used to turn off
irregular hyperslab collective IO support on such platforms.

Solution:
Hard-code such platforms under hdf5/config.
So far only IBM AIX has been found to have this problem.
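
As an illustration, a platform file under hdf5/config can pre-seed the
configure cache variable so the check below defaults to "no". This is only
a sketch; the file name config/powerpc-ibm-aix5.x follows HDF5's platform
file naming but is assumed here, not part of this commit:

    # Hypothetical fragment for a platform file such as
    # config/powerpc-ibm-aix5.x. Seed the cache variable unless the user
    # has already set it, so the configure check reports "no" and
    # MPI_COMPLEX_DERIVED_DATATYPE_WORKS is never defined on this platform.
    hdf5_mpi_complex_derived_datatype_works=${hdf5_mpi_complex_derived_datatype_works='no'}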

Platforms tested:

heping (Linux) and copper (AIX). A test build on tungsten is still running;
parallel HDF5 takes almost an hour to build there and we cannot wait for it
to finish, but this feature was tested on tungsten a week ago.
Misc. update:
MuQun Yang, 2005-03-11 17:11:05 -05:00
parent 941edeab91
commit 74efc1e4f5
2 changed files with 39 additions and 1 deletion

configure (vendored)

@@ -47090,6 +47090,28 @@ echo "${ECHO_T}yes" >&6
echo "${ECHO_T}no" >&6
fi
echo "$as_me:$LINENO: checking if irregular hyperslab optimization code works inside MPI-IO" >&5
echo $ECHO_N "checking if irregular hyperslab optimization code works inside MPI-IO... $ECHO_C" >&6
if test "${hdf5_mpi_complex_derived_datatype_works+set}" = set; then
echo $ECHO_N "(cached) $ECHO_C" >&6
else
hdf5_mpi_complex_derived_datatype_works=yes
fi
if test ${hdf5_mpi_complex_derived_datatype_works} = "yes"; then
cat >>confdefs.h <<\_ACEOF
#define MPI_COMPLEX_DERIVED_DATATYPE_WORKS 1
_ACEOF
echo "$as_me:$LINENO: result: yes" >&5
echo "${ECHO_T}yes" >&6
else
echo "$as_me:$LINENO: result: no" >&5
echo "${ECHO_T}no" >&6
fi
fi
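
Because the generated test above only assigns the "yes" default when
hdf5_mpi_complex_derived_datatype_works is unset (the "${var+set}" = set
test), the check should also be overridable from the environment without
editing any config file. A sketch; the --enable-parallel flag is shown
only for illustration:

    hdf5_mpi_complex_derived_datatype_works=no ./configure --enable-parallel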

configure.in

@@ -2293,6 +2293,22 @@ if test -n "$PARALLEL"; then
AC_MSG_RESULT([no])
fi
dnl ----------------------------------------------------------------------
dnl Check to see whether the complicated MPI derived datatype works.
dnl Up to now (Dec. 20th, 2004), we have found that IBM's MPI-IO
dnl implementation does not handle the displacements of complicated MPI
dnl derived datatypes correctly. So we add the check here.
AC_MSG_CHECKING([if irregular hyperslab optimization code works inside MPI-IO])
AC_CACHE_VAL([hdf5_mpi_complex_derived_datatype_works],[hdf5_mpi_complex_derived_datatype_works=yes])
if test ${hdf5_mpi_complex_derived_datatype_works} = "yes"; then
AC_DEFINE([MPI_COMPLEX_DERIVED_DATATYPE_WORKS], [1],
[Define if your system can handle complicated MPI derived datatypes correctly.])
AC_MSG_RESULT([yes])
else
AC_MSG_RESULT([no])
fi
fi
dnl ----------------------------------------------------------------------
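
After configure completes, the outcome can be verified in the generated
header. The path src/H5config.h and the H5_ prefix the macro gains during
HDF5's header post-processing are assumptions, inferred from the commit
message's reference to H5_MPI_COMPLEX_DERIVED_DATATYPE_WORKS:

    # Should print the define on platforms where the check passed (assumed path).
    grep MPI_COMPLEX_DERIVED_DATATYPE_WORKS src/H5config.h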