Installation instructions for Parallel HDF5
-------------------------------------------


1. Overview
-----------

This file contains instructions for the installation of parallel HDF5.
Platforms supported by this release are SGI Origin 2000, IBM SP2, Intel
TFLOPs, and Linux version 2.4 and greater. The steps are still somewhat
manual and will be further automated in a future release. If you have
difficulties installing the software on your system, please send mail to

    hdfparallel@ncsa.uiuc.edu

In your mail, please include the output of "uname -a". Also attach the
contents of "config.log" if you ran the "configure" command.

First, obtain and unpack the HDF5 source as described in the INSTALL
file. You also need to know the include and library paths of the MPI and
MPI-IO software installed on your system, since the parallel HDF5
library uses them for parallel I/O access.
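
For example, a typical obtain-and-unpack sequence looks like the
following (the archive name is hypothetical; substitute the release you
actually downloaded):

    gunzip -c hdf5-x.y.z.tar.gz | tar xf -
    cd hdf5-x.y.z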


2. Quick Instructions for known systems
---------------------------------------

The following shows the particular steps used to run the parallel HDF5
configure on a few machines we have tested. If your platform is not
shown, or the steps do not work for it, please go to the next section
for a more detailed explanation.

------
TFLOPS
------

Follow the instructions in INSTALL_TFLOPS.

-------
IBM SP2
-------

First of all, make sure your environment variables are set correctly
to compile and execute single-process MPI applications on the SP2
machine. They should be similar to the following:

    setenv CC mpcc_r
    setenv MP_PROCS 1
    setenv MP_NODES 1
    setenv MP_LABELIO no
    setenv MP_RMPOOL 0
    setenv RUNPARALLEL "MP_PROCS=2 MP_TASKS_PER_NODE=2 poe"
    setenv LLNL_COMPILE_SINGLE_THREADED TRUE

The shared library configuration for this version is broken, so only the
static library is supported.

An error in the install method for powerpc-ibm-aix4.3.2.0 (LLNL Blue) was
discovered after the code freeze. You need to remove the following line
from config/powerpc-ibm-aix4.3.2.0 before configuration:

    ac_cv_path_install=${ac_cv_path_install='cp -r'}
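
One portable way to remove that line (the temporary file name is
arbitrary):

    grep -v ac_cv_path_install config/powerpc-ibm-aix4.3.2.0 > config/tmp
    mv config/tmp config/powerpc-ibm-aix4.3.2.0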

Then do the following steps:

    $ ./configure --disable-shared --prefix=<install-directory>
    $ make          # build the library
    $ make check    # verify the correctness
    $ make install


---------------
SGI Origin 2000
Cray T3E
(where MPI-IO is part of the system MPI library, such as mpt 1.3)
---------------

#!/bin/sh

RUNPARALLEL="mpirun -np 3"
export RUNPARALLEL
LIBS="-lmpi"
export LIBS

./configure --enable-parallel --disable-shared --prefix=$PWD/installdir
make
make check
make install


---------------
SGI Origin 2000
Cray T3E
(where MPI-IO is not part of the system MPI library, or you want to use
your own version of MPI-IO)
---------------

mpi1_inc=""                             # MPI-1 include path
mpi1_lib=""                             # MPI-1 library path
mpio_inc="-I$HOME/ROMIO/include"        # MPI-IO include path
mpio_lib="-L$HOME/ROMIO/lib/IRIX64"     # MPI-IO library path

MPI_INC="$mpio_inc $mpi1_inc"
MPI_LIB="$mpio_lib $mpi1_lib"

# for version 1.1
CPPFLAGS=$MPI_INC
export CPPFLAGS
LDFLAGS=$MPI_LIB
export LDFLAGS
RUNPARALLEL="mpirun -np 3"
export RUNPARALLEL
LIBS="-lmpio -lmpi"
export LIBS

./configure --enable-parallel --disable-shared --prefix=$PWD/installdir
make
make check
make install


---------------------
Linux 2.4 and greater
---------------------

Be sure that your installation of MPICH was configured with the following
configuration command-line option:

    -cflags="-D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64"

This allows for files larger than 2GB on Linux systems and is only
available with Linux kernels 2.4 and greater.
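
As a sketch, a complete MPICH build with that option might look like the
following (the install prefix is arbitrary; check your MPICH version's
documentation for the exact configure syntax):

    ./configure --prefix=/usr/local/mpich \
        -cflags="-D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64"
    make
    make install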


------------------
HP V2500 and N4000
------------------

Follow the instructions in section 3.


3. Detailed explanation
-----------------------

The HDF5 library can be configured to use MPI and MPI-IO for parallelism
on a distributed multi-processor system. The easiest way to do this is to
have a properly installed parallel compiler (e.g., MPICH's mpicc or IBM's
mpcc) and supply that executable as the value of the CC environment
variable:

    $ CC=mpcc ./configure
    $ CC=/usr/local/mpi/bin/mpicc ./configure

If no such wrapper script is available, then you must specify your normal
C compiler along with the distribution of MPI/MPI-IO which is to be used
(values other than `mpich' will be added at a later date):

    $ ./configure --enable-parallel=mpich

If the MPI/MPI-IO include files and/or libraries cannot be found by the
compiler, then their directories must be given as arguments to CPPFLAGS
and/or LDFLAGS:

    $ CPPFLAGS=-I/usr/local/mpi/include \
      LDFLAGS=-L/usr/local/mpi/lib/LINUX/ch_p4 \
      ./configure --enable-parallel=mpich

If a parallel library is being built, then configure attempts to
determine how to run a parallel application on one processor and on
many processors. If the compiler is `mpicc' and the user hasn't
specified values for RUNSERIAL and RUNPARALLEL, then configure chooses
`mpirun' from the same directory as `mpicc':

    RUNSERIAL:    /usr/local/mpi/bin/mpirun -np 1
    RUNPARALLEL:  /usr/local/mpi/bin/mpirun -np $${NPROCS:=3}

The `$${NPROCS:=3}' will be substituted with the value of the NPROCS
environment variable at the time `make check' is run, or with the value
3 if NPROCS is not set.
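
For example, to run the parallel tests with 5 processes instead of the
default 3:

    NPROCS=5 make check

If configure's guess is wrong, RUNSERIAL and RUNPARALLEL can also be set
explicitly before configuring, as in the quick instructions above (the
mpirun path here is illustrative):

    RUNSERIAL="/usr/local/mpi/bin/mpirun -np 1" \
    RUNPARALLEL="/usr/local/mpi/bin/mpirun -np 3" \
    ./configure --enable-parallel=mpich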


4. Parallel tests
-----------------

The testpar/ directory contains tests for Parallel HDF5 and MPI-IO. The
t_mpi program tests the basic functionality of some MPI-IO features used
by Parallel HDF5. It usually exits with a non-zero code if a required
MPI-IO feature does not succeed as expected. One exception is the test
of accessing files larger than 2GB. If the underlying filesystem or the
MPI-IO library fails to handle file sizes larger than 2GB, the test
prints informational messages stating the failure but does not exit with
a non-zero code. Failure to support file sizes greater than 2GB is not a
fatal error for HDF5, because HDF5 can use other file drivers, such as a
family of files, to bypass the file size limit.
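
As a sketch, the MPI-IO test can also be run by hand once the test
binaries have been built (e.g., by `make check'); the mpirun invocation
is illustrative, so use whatever launcher matches your MPI installation:

    cd testpar
    mpirun -np 3 ./t_mpi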