           Installation instructions for Parallel HDF5
           -------------------------------------------


1. Overview
-----------
This file contains instructions for the installation of parallel HDF5 (PHDF5).
PHDF5 requires an MPI compiler with MPI-IO support and a parallel file system.
If you are not sure whether your system provides these, first consult your
system support staff for information on how to compile an MPI program, how to
run an MPI application, and how to access the parallel file system.  There are
sample MPI-IO C and Fortran programs in the "Sample programs" section.  You
can use them to run simple tests of your MPI compilers and the parallel file
system.
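For a quick sanity check of your MPI environment before configuring HDF5, you
can compile and run the C sample from the "Sample programs" section.  The
command names below (mpicc, mpirun) are only typical defaults; substitute your
site's compiler wrapper and job launcher:

    % mpicc Sample_mpio.c -o Sample_mpio
    % mpirun -np 2 ./Sample_mpio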

If you still have difficulties installing PHDF5 on your system, please
send mail to

        hdfhelp@ncsa.uiuc.edu

In your mail, please include the output of "uname -a".  If you have run the
"configure" command, attach the output of the command and the content of
the file "config.log".
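One convenient way to capture that information is to save the configure output
as it runs; the file names below are only examples:

    $ uname -a > sysinfo.txt
    $ ./configure --enable-parallel 2>&1 | tee configure.out   # add your usual options
    $ # then attach sysinfo.txt, configure.out, and config.log to your mail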


2. Quick Instruction for known systems
--------------------------------------
The following shows the specific steps used to configure parallel HDF5 on a
few machines we have tested.  If your platform is not listed, or these steps
do not work for you, please see the next section for a more detailed
explanation.

------
Known parallel compilers
------
HDF5 knows several parallel compilers: mpicc, hcc, mpcc, mpcc_r.
To build parallel HDF5 with one of the above, just set CC to it
and run configure.  The "--enable-parallel" option is optional in this case.

    $ CC=/usr/local/mpi/bin/mpicc ./configure --prefix=<install-directory>
    $ make
    $ make check
    $ make install

------
TFLOPS
------
Follow the instructions in INSTALL_TFLOPS.

-------
IBM SP
-------
First of all, make sure your environment variables are set correctly
to compile and execute single-process MPI applications on the SP
machine.  Unfortunately, the settings vary from machine to machine.
E.g., the following works for the Blue machine at LLNL.

    setenv MP_PROCS 1
    setenv MP_NODES 1
    setenv MP_LABELIO no
    setenv MP_RMPOOL 0
    setenv LLNL_COMPILE_SINGLE_THREADED TRUE   # for LLNL site only
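With those variables set, it is worth verifying that a one-process MPI job
runs before configuring HDF5.  The sketch below compiles the C sample from the
"Sample programs" section with mpcc_r; the poe launcher and its -procs option
are typical for POE sites but may differ on yours:

    % mpcc_r Sample_mpio.c -o Sample_mpio
    % poe ./Sample_mpio -procs 1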

The shared library configuration for this version is broken, so only the
static library is supported.

Then do the following steps:

    $ ./configure --disable-shared --prefix=<install-directory>
    $ make              # build the library
    $ make check        # verify the correctness
    $ make install

We also suggest that you add "-qxlf90=autodealloc" to FFLAGS when building
the parallel library with Fortran enabled.  This can be done by invoking:

    setenv FFLAGS -qxlf90=autodealloc             # 32-bit build

or

    setenv FFLAGS "-q64 -qxlf90=autodealloc"      # 64-bit build

prior to running configure.  Recall that "-q64" is necessary for
64-bit builds.
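As a sketch, a 64-bit, Fortran-enabled parallel configure might then look like
the following.  The compiler names (mpcc_r, mpxlf90_r) and the F9X variable are
assumptions about a typical AIX installation; check your site and the Fortran
installation notes for the exact names your HDF5 release expects:

    setenv CC mpcc_r
    setenv F9X mpxlf90_r
    setenv FFLAGS "-q64 -qxlf90=autodealloc"
    ./configure --enable-fortran --disable-shared --prefix=<install-directory>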

---------------
SGI Origin 2000
Cray T3E
(where MPI-IO is part of the system MPI library, such as the mpt module)
---------------

#!/bin/sh

RUNPARALLEL="mpirun -np 3"
export RUNPARALLEL
LIBS="-lmpi"
export LIBS
./configure --enable-parallel --prefix=$PWD/installdir
make
make check
make install

***Known problem***
Some O2K systems may encounter an error during make:

    ld32: FATAL 9: I/O error (-lmpi): No such file or directory

This is because libtool tries too hard to locate the loader 'ld'
but ends up with one that does not know where to find the right
version of libmpi.a for the particular ABI requested.
The fix is to edit the file 'libtool' at the top of the build directory.
Search for a string that looks like the following:

    LD="/opt/MIPSpro/MIPSpro_default/opt/MIPSpro/bin/ld -n32"

Replace it with something that knows how to find the right libmpi.a.
E.g.,

    LD="/opt/MIPSpro/MIPSpro_default/opt/MIPSpro/bin/cc -n32"

Or you can pre-empt it by setting LD at configure time:

    $ LD="cc" ./configure --enable-parallel ...

---------------
SGI Origin 2000
Cray T3E
(where MPI-IO is not part of the system MPI library, or you want to use
 your own version of MPI-IO)
---------------

mpi1_inc=""                             # MPI-1 include
mpi1_lib=""                             # MPI-1 library
mpio_inc=-I$HOME/ROMIO/include          # MPI-IO include
mpio_lib="-L$HOME/ROMIO/lib/IRIX64"     # MPI-IO library

MPI_INC="$mpio_inc $mpi1_inc"
MPI_LIB="$mpio_lib $mpi1_lib"

# for version 1.1
CPPFLAGS=$MPI_INC
export CPPFLAGS
LDFLAGS=$MPI_LIB
export LDFLAGS
RUNPARALLEL="mpirun -np 3"
export RUNPARALLEL
LIBS="-lmpio -lmpi"
export LIBS

./configure --enable-parallel --prefix=$PWD/installdir
make
make check
make install

---------------------
Linux 2.4 and greater
---------------------
Be sure that your installation of MPICH was configured with the following
configuration command-line option:

    -cflags="-D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64"

This allows for files larger than 2GB on Linux systems and is only available
with Linux kernels 2.4 and greater.
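To check whether an existing MPICH installation was built with those flags,
you can inspect the compile command its wrapper generates (the -show option is
specific to MPICH's mpicc):

    $ mpicc -show          # the output should include -D_FILE_OFFSET_BITS=64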

------------------
HP V2500 and N4000
------------------
Follow the instructions in section 3.


3. Detailed explanation
-----------------------

The HDF5 library can be configured to use MPI and MPI-IO for parallelism
on a distributed multi-processor system.  The easiest way to do this is to
have a properly installed parallel compiler (e.g., MPICH's mpicc or IBM's
mpcc_r) and supply that executable as the value of the CC environment
variable.  For example:

    $ CC=mpcc_r ./configure

    $ CC=/usr/local/mpi/bin/mpicc ./configure

If no such wrapper script is available, then you must specify your normal
C compiler along with the distribution of MPI/MPI-IO which is to be used
(values other than `mpich' will be added at a later date):

    $ ./configure --enable-parallel=mpich

If the MPI/MPI-IO include files and/or libraries cannot be found by the
compiler, then their directories must be given as arguments to CPPFLAGS
and/or LDFLAGS:

    $ CPPFLAGS=-I/usr/local/mpi/include \
      LDFLAGS=-L/usr/local/mpi/lib/LINUX/ch_p4 \
      ./configure --enable-parallel=mpich

If a parallel library is being built, then configure attempts to determine
how to run a parallel application on one processor and on many
processors.  If the compiler is `mpicc' and the user hasn't specified
values for RUNSERIAL and RUNPARALLEL, then configure chooses `mpirun' from
the same directory as `mpicc':

    RUNSERIAL:    /usr/local/mpi/bin/mpirun -np 1
    RUNPARALLEL:  /usr/local/mpi/bin/mpirun -np $${NPROCS:=3}

The `$${NPROCS:=3}' will be substituted with the value of the NPROCS
environment variable at the time `make check' is run (or the value 3).
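For example, to run the parallel tests with 6 processes instead of the
default 3:

    $ NPROCS=6 make check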


4. Parallel tests
-----------------

The testpar/ directory contains tests for Parallel HDF5 and MPI-IO.
The t_mpi test exercises the basic functionality of some MPI-IO features used
by Parallel HDF5.  It usually exits with a non-zero code if a required MPI-IO
feature does not succeed as expected.  One exception is the testing of
accessing files larger than 2GB.  If the underlying file system or the MPI-IO
library fails to handle file sizes larger than 2GB, the test will print
informational messages stating the failure but will not exit with a non-zero
code.  Failure to support file sizes greater than 2GB is not a fatal error
for HDF5 because HDF5 can use other file drivers, such as families of files,
to bypass the file size limit.
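The parallel tests are normally run via "make check", but t_mpi can also be
launched by hand from the build tree.  The launcher name and path below are
only an example; use your system's MPI run command and build directory:

    $ cd testpar
    $ mpirun -np 4 ./t_mpi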

By default, the parallel tests use the current directory as the test
directory.  This can be changed via the environment variable HDF5_PARAPREFIX.
For example, if the tests should use the directory /PFS/user/me, do:

    HDF5_PARAPREFIX=/PFS/user/me
    export HDF5_PARAPREFIX
    make check

(In some batch job systems, you may need to hard-set HDF5_PARAPREFIX in
shell initialization files such as .profile or .cshrc.)
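For instance, the corresponding lines would look something like this (the
directory is only an example):

    # in ~/.profile (sh/ksh):
    HDF5_PARAPREFIX=/PFS/user/me; export HDF5_PARAPREFIX

    # in ~/.cshrc (csh/tcsh):
    setenv HDF5_PARAPREFIX /PFS/user/me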


5. Sample programs
------------------

Here are sample MPI-IO C and Fortran programs.  You may use them to run simple
tests of your MPI compilers and the parallel file system.  The MPI commands
used here are mpicc, mpif90 and mpirun.  Replace them with your system's
commands.

The programs assume they are run in the parallel file system, so they create
the test data file in the current directory.  If the parallel file system is
somewhere else, you need to run the sample programs there or edit the
programs to use a different file name.

Example compiling and running:

    % mpicc Sample_mpio.c -o c.out
    % mpirun -np 4 c.out

    % mpif90 Sample_mpio.f90 -o f.out
    % mpirun -np 4 f.out
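The C sample accepts the data file name as its first program argument, so you
can also point it directly at a directory on the parallel file system instead
of editing the source; the path below is only an example:

    % mpirun -np 4 c.out /PFS/user/me/mpitest.data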

==> Sample_mpio.c <==

/* Simple MPI-IO program testing if a parallel file can be created.
 * Default filename can be specified via first program argument.
 * Each process writes something, then reads all data back.
 */

#include <stdio.h>
#include <unistd.h>     /* for gethostname() */
#include <mpi.h>
#ifndef MPI_FILE_NULL   /* MPIO may be defined in mpi.h already */
#   include <mpio.h>
#endif

#define DIMSIZE 10      /* dimension size, avoid powers of 2. */
#define PRINTID printf("Proc %d: ", mpi_rank)

int
main(int ac, char **av)
{
    char hostname[128];
    int  mpi_size, mpi_rank;
    MPI_File fh;
    char *filename = "./mpitest.data";
    char mpi_err_str[MPI_MAX_ERROR_STRING];
    int  mpi_err_strlen;
    int  mpi_err;
    char writedata[DIMSIZE], readdata[DIMSIZE];
    char expect_val;
    int  i, irank;
    int  nerrors = 0;           /* number of errors */
    MPI_Offset mpi_off;
    MPI_Status mpi_stat;

    MPI_Init(&ac, &av);
    MPI_Comm_size(MPI_COMM_WORLD, &mpi_size);
    MPI_Comm_rank(MPI_COMM_WORLD, &mpi_rank);

    /* get file name if provided */
    if (ac > 1){
        filename = *++av;
    }
    if (mpi_rank==0){
        printf("Testing simple MPIO program with %d processes accessing file %s\n",
            mpi_size, filename);
        printf(" (Filename can be specified via program argument)\n");
    }

    /* show the hostname so that we can tell where the processes are running */
    if (gethostname(hostname, 128) < 0){
        PRINTID;
        printf("gethostname failed\n");
        return 1;
    }
    PRINTID;
    printf("hostname=%s\n", hostname);

    if ((mpi_err = MPI_File_open(MPI_COMM_WORLD, filename,
            MPI_MODE_RDWR | MPI_MODE_CREATE | MPI_MODE_DELETE_ON_CLOSE,
            MPI_INFO_NULL, &fh))
            != MPI_SUCCESS){
        MPI_Error_string(mpi_err, mpi_err_str, &mpi_err_strlen);
        PRINTID;
        printf("MPI_File_open failed (%s)\n", mpi_err_str);
        return 1;
    }

    /* each process writes some data */
    for (i=0; i < DIMSIZE; i++)
        writedata[i] = mpi_rank*DIMSIZE + i;
    mpi_off = mpi_rank*DIMSIZE;
    if ((mpi_err = MPI_File_write_at(fh, mpi_off, writedata, DIMSIZE, MPI_BYTE,
            &mpi_stat))
            != MPI_SUCCESS){
        MPI_Error_string(mpi_err, mpi_err_str, &mpi_err_strlen);
        PRINTID;
        printf("MPI_File_write_at offset(%ld), bytes (%d), failed (%s)\n",
            (long) mpi_off, (int) DIMSIZE, mpi_err_str);
        return 1;
    }

    /* make sure all processes have finished writing. */
    MPI_Barrier(MPI_COMM_WORLD);

    /* each process reads all data back and verifies it. */
    for (irank=0; irank < mpi_size; irank++){
        mpi_off = irank*DIMSIZE;
        if ((mpi_err = MPI_File_read_at(fh, mpi_off, readdata, DIMSIZE, MPI_BYTE,
                &mpi_stat))
                != MPI_SUCCESS){
            MPI_Error_string(mpi_err, mpi_err_str, &mpi_err_strlen);
            PRINTID;
            printf("MPI_File_read_at offset(%ld), bytes (%d), failed (%s)\n",
                (long) mpi_off, (int) DIMSIZE, mpi_err_str);
            return 1;
        }
        for (i=0; i < DIMSIZE; i++){
            expect_val = irank*DIMSIZE + i;
            if (readdata[i] != expect_val){
                PRINTID;
                printf("read data[%d:%d] got %d, expect %d\n", irank, i,
                    readdata[i], expect_val);
                nerrors++;
            }
        }
    }
    if (nerrors)
        return 1;

    MPI_File_close(&fh);

    PRINTID;
    printf("all tests passed\n");

    MPI_Finalize();
    return 0;
}

==> Sample_mpio.f90 <==

!
! The following example demonstrates how to create and close a parallel
! file using MPI-IO calls.
!
! USE MPI is the proper way to bring in MPI definitions, but many
! MPI Fortran compilers support the pseudo-standard INCLUDE statement.
! So, HDF5 uses the INCLUDE statement instead.
!

PROGRAM MPIOEXAMPLE

  ! USE MPI

  IMPLICIT NONE

  INCLUDE 'mpif.h'

  CHARACTER(LEN=80), PARAMETER :: filename = "filef.h5"  ! File name
  INTEGER :: ierror   ! Error flag
  INTEGER :: fh       ! File handle
  INTEGER :: amode    ! File access mode

  call MPI_INIT(ierror)
  amode = MPI_MODE_RDWR + MPI_MODE_CREATE + MPI_MODE_DELETE_ON_CLOSE
  call MPI_FILE_OPEN(MPI_COMM_WORLD, filename, amode, MPI_INFO_NULL, fh, ierror)
  print *, "Trying to create ", filename
  if ( ierror .eq. MPI_SUCCESS ) then
      print *, "MPI_FILE_OPEN succeeded"
      call MPI_FILE_CLOSE(fh, ierror)
  else
      print *, "MPI_FILE_OPEN failed"
  endif

  call MPI_FINALIZE(ierror)

END PROGRAM