[svn-r368] Purpose:

Documentation (mostly)


Solution:
    Parallel HDF5 support on Intel TFLOPS machine using PFS file system
    and MPIO.


Platform tested:
    Intel TFLOPS (ASCI Red)
Paul Harten 1998-04-23 19:02:08 -05:00
parent 0b6fd0ff94
commit 304ad92a46
5 changed files with 174 additions and 19 deletions

View File

@@ -12,7 +12,11 @@ can do the following:
$ make test
$ make install # Optional
Note:
For users of the Intel TFLOPS machine, a special sequence of steps
for the install may be found in the file INSTALL.ascired.
=======
Step 0: Install optional third-party packages.
* GNU zlib compression library, version 1.0.2 or later is used for
@@ -137,4 +141,3 @@ Step 6. Subscribe to mailing lists.
* Subscribe to the mailing lists described in the README file.

View File

@@ -1,11 +1,18 @@
FOR THE ASCI RED MACHINE:
FOR THE INTEL TFLOPS MACHINE:
The setup process for building the HDF5 library for the ASCI Red machine
is done by a coordination of events from sasn100 and janus. Special
effort must be made to move the compiled and linked testers to disks
local to the processors for execution. This special effort is shown
here at steps 8) and steps 9).
Below are the step-by-step procedures for building, testing, and
installing both the sequential and parallel versions of the HDF5 library.
---------------
Sequential HDF5:
---------------
The setup process for building the sequential HDF5 library for the
ASCI Red machine requires coordinating steps between sasn100 and
janus. Special effort must be made to move the compiled and linked
testers to disks local to the processors for execution; this is
shown here at steps 9) and 10).
The required steps are roughly the following (a condensed shell
sketch of the same sequence appears after the list):
FROM SASN100,
@@ -18,33 +25,115 @@ FROM SASN100,
4) ./configure tflop
5) make H5detect
FROM JANUS,
5) cd ./hdf5
6) cd ./hdf5
6) make H5Tinit.c
7) make H5Tinit.c
FROM SASN100,
7) make >&! comp.out &
8) make >&! comp.out &
When everything is finished compiling and linking,
FROM JANUS,
8) cp -r ../hdf5 /scratch
9) cp -r ../hdf5 /scratch
9) cd /scratch/hdf5/test
10) cd /scratch/hdf5/test
10) make test >&! test.out
11) make test >&! test.out
Once satisfied with the test results, and provided you
have the correct permissions,
FROM SASN100,
11) make install
12) make install
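
For reference, the whole sequential sequence is sketched below as plain
shell commands. This is only a sketch: sh syntax is assumed (the csh
redirection ">&!" used in the steps becomes "> file 2>&1" here), and the
release name and the /scratch path are simply the ones shown above.

    # -- on sasn100 --
    uncompress hdf5-1.0.0a.tar.Z
    tar xvf hdf5-1.0.0a.tar
    cd ./hdf5
    ./configure tflop
    make H5detect
    # -- on janus --
    cd ./hdf5
    make H5Tinit.c
    # -- on sasn100 --
    make > comp.out 2>&1 &
    # wait until the compile and link have finished, then
    # -- on janus --
    cp -r ../hdf5 /scratch
    cd /scratch/hdf5/test
    make test > test.out 2>&1
    # -- on sasn100, with the correct permissions --
    make install
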
---------------
Parallel HDF5:
---------------
The setup process for building the parallel version of the HDF5 library for the
ASCI Red machine is very similar to the sequential version: it requires
coordinating steps between sasn100 and janus. Special effort must be made to
move the compiled and linked single-processor testers to disks local to the
processors for execution; this is shown here at steps 9) and 10). Following
these tests, the parallel tests are edited, compiled, linked, and executed as
described in steps 12) through 16).
The required steps are roughly the following (a shell sketch of the
parallel-specific part of this sequence appears after the list):
FROM SASN100,
1) uncompress hdf5-1.0.0a.tar.Z
2) tar xvf hdf5-1.0.0a.tar
3) cd ./hdf5
4) sh INSTALL_parallel.ascired /* this is different from the sequential version */
5) make H5detect
FROM JANUS,
6) cd ./hdf5
7) make H5Tinit.c
FROM SASN100,
8) make >&! comp.out &
When everything is finished compiling and linking,
FROM JANUS,
9) cp -r ../hdf5 /scratch
10) cd /scratch/hdf5/test
11) make test >&! test.out
Once satisfied with the single-processor test results,
FROM SASN100,
12) cd testpar
13) /* edit testphdf5.c: change the following line */
    char *filenames[]={ "pfs:/pfs/multi/tmp_1/your_own/Eg1.h5f", "pfs:/pfs/multi/tmp_1/your_own/Eg2.h5f" };
    /* replacing "your_own" with your own directory name, for example */
    char *filenames[]={ "pfs:/pfs/multi/tmp_1/my_name/Eg1.h5f", "pfs:/pfs/multi/tmp_1/my_name/Eg2.h5f" };
    /* and delete or comment out this line: */
    char *filenames[]={ "ParaEg1.h5f", "ParaEg2.h5f" };
14) make -f Makefile.ascired
When everything is finished compiling and linking,
FROM JANUS,
15) cd ./hdf5/testpar
16) make test -f Makefile.ascired >&! test.out
Once satisfied with the parallel test results, and provided you
have the correct permissions,
FROM SASN100,
17) make install
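
Steps 1) through 11) follow the same pattern as the sequential sketch shown
after the sequential list; only the tail end differs. A minimal sh rendering
of the parallel-specific steps (again, sh redirection is assumed in place of
the csh ">&!") is:

    # -- on sasn100 --
    cd testpar
    # edit testphdf5.c as shown in step 13 before building
    make -f Makefile.ascired
    # -- on janus, after the compile and link have finished --
    cd ./hdf5/testpar
    make test -f Makefile.ascired > test.out 2>&1
    # -- on sasn100, with the correct permissions --
    make install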

View File

@@ -1,14 +1,20 @@
Installation instructions for Parallel HDF5
-------------------------------------------
(last updated: Feb 15, 1998)
(last updated: April 22, 1998)
This file contains instructions for the installation of parallel
HDF5. Platforms supported by this release are SGI Origin 2000
and IBM SP2. The steps are kind of unnatural and will be more
automized in the next release. If you have difficulties installing
the software in your system, please send mail to
HDF5. Platforms supported by this release are SGI Origin 2000,
IBM SP2, and the Intel TFLOPS. The steps are somewhat unnatural and
will be more automated in the next release. If you have difficulties
installing the software on your system, please send mail to
hdfparallel@ncsa.uiuc.edu
Note:
For users of the Intel TFLOPS machine, a special sequence of steps
for the parallel install may be found in the file INSTALL.ascired.
MPI/MPIO information similar to that given below also appears in
INSTALL_parallel.ascired.
First, you must obtain and unpack the HDF5 source as
described in the file INSTALL. You also need to obtain the
information of the include and library paths of MPI and MPIO
@@ -32,3 +38,4 @@ MPI_LIB="$mpi1_lib $mpio_lib"
export MPI_LIB
./configure --enable-parallel
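
To make the above concrete, a minimal sh sketch of the set-up it describes
is shown here; the include and library paths are placeholders only and must
be replaced with the MPI and MPIO locations on your system.

    # hypothetical paths -- substitute your site's MPI and MPIO installations
    mpi1_inc="-I/usr/local/mpi/include"
    mpi1_lib="-L/usr/local/mpi/lib"
    mpio_inc="-I/usr/local/mpio/include"
    mpio_lib="-L/usr/local/mpio/lib"
    MPI_INC="$mpi1_inc $mpio_inc"
    MPI_LIB="$mpi1_lib $mpio_lib"
    export MPI_INC MPI_LIB
    ./configure --enable-parallel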

INSTALL_parallel.ascired  (new file, 53 lines)
View File

@@ -0,0 +1,53 @@
#! /bin/sh
# How to create a parallel version of HDF5 on the Intel ASCI Red system
# that uses MPI and MPI-IO.
# Read the INSTALL.ascired file to understand the configure/make process
# for the sequential (i.e., uniprocessor) version of HDF5.
# The process for creating the parallel version of HDF5 using MPI-IO
# is similar, but first you will have to set up some environment variables
# with values specific to your local installation.
# The relevant variables are shown below, with values that work for Sandia's
# ASCI Red TFLOPS machine as of the writing of these instructions (980421).
# Don't try to make a parallel version of HDF5 from the same hdf5 root
# directory where you made a sequential version of HDF5 -- start with
# a fresh copy.
# Here are the flags you must set before running the ./configure program
# to create the parallel version of HDF5.
# (We use sh here, but of course you can adapt to whatever shell you like.)
# compile for MPI jobs
CC=cicc
# The following flags are only needed when compiling/linking a user program
# for execution.
#
debug="-g -UH5O_DEBUG -DH5F_OPT_SEEK=0"
default_mode="-DDOS386 $debug -DH5F_LOW_DFLT=H5F_LOW_SEC2"
MPICH="/usr/community/mpi-io/romio/mpich_1.1.0"
ROMIO="/usr/community/mpi-io/romio/current"
mpi1_inc="-I$MPICH/include"
mpi1_lib="-L$MPICH/lib/tflops/ptls"
mpio_inc="-I$ROMIO/include"
mpio_lib="-L$ROMIO/lib/tflops"
MPI_INC="$mpi1_inc $mpio_inc"
MPI_LIB="$mpi1_lib $mpio_lib"
CFLAGS="$default_mode"
export CC CFLAGS MPI_INC MPI_LIB
# Once these variables are set to the proper values for your installation,
# you can run the configure program (i.e., ./configure tflop --enable-parallel=mpio)
# to set up the Makefiles, etc.
# After configuring, run the make as described in the INSTALL file.
# When compiling and linking your application, don't forget to compile with
# cicc and link to the MPI-IO library and the parallel version of the HDF5
# library (that was created and installed with the configure/make process).
./configure tflop --enable-parallel=mpio
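
A typical invocation, following the parallel steps listed in INSTALL.ascired
(the release name below is just the one used there), is sketched as:

    # on sasn100, from the directory above the unpacked source tree
    uncompress hdf5-1.0.0a.tar.Z
    tar xvf hdf5-1.0.0a.tar
    cd ./hdf5
    sh INSTALL_parallel.ascired    # exports CC, CFLAGS, MPI_INC, MPI_LIB and runs configure
    make H5detect                  # then continue with the janus/sasn100 steps in INSTALL.ascired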

View File

@@ -63,6 +63,9 @@ lib progs test _test install uninstall TAGS dep depend:
done
# Number format detection
H5detect:
	(cd src && $(MAKE) $@)
H5Tinit.c:
	(cd src && $(MAKE) $@)
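
These two targets simply recurse into src, which lets the cross-compilation
procedure described in INSTALL.ascired drive the generators from the top
level; a sketch of that usage (no new functionality is assumed) is:

    # on sasn100 (compile host)
    make H5detect
    # on janus (compute nodes), in the same source tree
    make H5Tinit.c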