[svn-r1557] INSTALL:

Edited for 1.2.0beta release.
INSTALL.ascired:
    Updated with simplified steps.
INSTALL_parallel:
    Updated with information that was in INSTALL and INSTALL.parallel.
INSTALL_parallel.ascired:
    Removed old setup no longer needed.
    RUNPARALLEL, RUNSERIAL, and disable-shared are specified in config/intel-osf1.
README:
    Updated mailing list subscription instructions.
RELEASE:
    Updated for 1.2.0beta release information.
INSTALL_Windows.txt:
    Contains Windows platform installation instructions.
INSTALL.parallel:
    Removed because its content has been moved to INSTALL_parallel.
This commit is contained in:
Albert Cheng 1999-08-02 14:51:13 -05:00
parent 0d1c9438f9
commit a9b97ccb0e
8 changed files with 386 additions and 195 deletions

INSTALL

@ -1,14 +1,13 @@
-*- outline -*-
This file contains instructions for the installation of HDF5 on
Unix-like systems. Users of the Intel TFLOPS machine should see the
INSTALL.ascired for instructions.
This file contains instructions for the installation of HDF5 software.
* Obtaining HDF5
The latest supported public release of HDF5 is available from
ftp://hdf.ncsa.uiuc.edu/pub/dist/HDF5 and is available in tar
format uncompressed or compressed with compress, gzip, or
bzip2.
ftp://hdf.ncsa.uiuc.edu/pub/dist/HDF5. For Unix platforms, it
is available in tar format uncompressed or compressed with
compress, gzip, or bzip2. For Microsoft Windows, it is in
ZIP format.
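For example, a Unix download might be unpacked with commands like the
following (illustrative only; the actual file name depends on the
release and compression you chose):

    $ gunzip hdf5-1.2.0.tar.gz     (or uncompress / bunzip2 as appropriate)
    $ tar xf hdf5-1.2.0.tar
    $ cd <hdf5>                    (the directory created by tar)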
The HDF team also makes snapshots of the source code available
on a regular basis. These snapshots are unsupported (that is,
@ -63,6 +62,14 @@ INSTALL.ascired for instructions.
$ make check
$ make install
** TFLOPS
Users of the Intel TFLOPS machine, after reading this file,
should see INSTALL.ascired for more instructions.
** Windows
Users of Microsoft Windows should see INSTALL_Windows.txt
for detailed instructions.
* HDF5 dependencies
** Zlib
The HDF5 library has a predefined compression filter that uses
@ -169,11 +176,17 @@ INSTALL.ascired for instructions.
$ CC=/usr/local/mpi/bin/mpicc ./configure
On Irix64 the default compiler is `cc -64'. To use an
On Irix64 the default compiler is `cc'. To use an
alternate compiler specify it with the CC variable:
$ CC='cc -o32' ./configure
One may also use various environment variables to change the
behavior of the compiler. E.g., to ask for -n32 ABI:
$ SGI_ABI=-n32
$ export SGI_ABI
$ ./configure
*** Additional compilation flags
If additional flags must be passed to the compilation commands
then specify those flags with the CFLAGS variable. For
@ -313,42 +326,9 @@ INSTALL.ascired for instructions.
*** Parallel vs. serial library
The HDF5 library can be configured to use MPI and MPI-IO for
parallelism on a distributed multi-processor system. The easy
way to do this is to have a properly installed parallel
compiler (e.g., MPICH's mpicc or IBM's mpcc) and supply that
executable as the value of the CC environment variable:
parallelism on a distributed multi-processor system. Read the
file INSTALL_parallel for detailed explanations.
$ CC=mpcc ./configure
$ CC=/usr/local/mpi/bin/mpicc ./configure
If no such wrapper script is available then you must specify
your normal C compiler along with the distribution of
MPI/MPI-IO which is to be used (values other than `mpich' will
be added at a later date):
$ ./configure --enable-parallel=mpich
If the MPI/MPI-IO include files and/or libraries cannot be
found by the compiler then their directories must be given as
arguments to CPPFLAGS and/or LDFLAGS:
$ CPPFLAGS=-I/usr/local/mpi/include \
LDFLAGS=-L/usr/local/mpi/lib/LINUX/ch_p4 \
./configure --enable-parallel=mpich
If a parallel library is being built then configure attempts
to determine how to run a parallel application on one
processor and on many processors. If the compiler is mpicc
and the user hasn't specified values for RUNSERIAL and
RUNPARALLEL then configure chooses `mpirun' from the same
directory as `mpicc':
RUNSERIAL: /usr/local/mpi/bin/mpirun -np 1
RUNPARALLEL: /usr/local/mpi/bin/mpirun -np $${NPROCS:=2}
The `$${NPROCS:=2}' will be substituted with the value of the
NPROCS environment variable at the time `make check' is run
(or the value 2).
** Building
The library, confidence tests, and programs can be built by

INSTALL.ascired

@ -10,51 +10,51 @@ Sequential HDF5:
The setup process for building the sequential HDF5 library for the
ASCI Red machine is done by a coordination of events from sasn100 and
janus. Special effort must be made to move the compiled and linked
testers to disks local to the processors for execution. This special
effort is shown here at steps 9) and 10).
janus. Though janus can do the compiling, it is better to build it
from sasn100, which has more complete build tools and runs faster.
It is also anti-social to tie up janus with compiling. The HDF5 build
still requires janus because one of the steps executes a program
to find out the run-time characteristics of the TFLOPS machine.
Assuming you have already unpacked the HDF5 tar-file into the
directory <hdf5>, follow the steps below:
The required steps are roughly as follows:
FROM SASN100,
1) uncompress hdf5-1.1.0.tar.Z
1) cd <hdf5>
2) tar xvf hdf5-1.1.0.tar
2) ./configure tflop
3) cd ./hdf5
4) ./configure tflop
5) make H5detect
3) make H5detect
FROM JANUS,
6) cd ./hdf5
4) cd <hdf5>
7) make H5Tinit.c
5) make H5Tinit.c
FROM SASN100,
8) make >&! comp.out &
6) make
When everything is finished compiling and linking,
you can run the tests by
FROM JANUS,
9) cp -r ../hdf5 /scratch
10) cd /scratch/hdf5/test
11) make test >&! test.out
7.1) Due to a bug, you must first remove the following line from
the file test/Makefile before the next step.
RUNTEST=$(LT_RUN)
7.2) make check
Once satisfied with the test results, as long as you
have the correct permission,
Once satisfied with the test results, you can install
the software by
FROM SASN100,
12) make install
8) make install
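The same sequence, condensed into the shell commands involved (a
sketch only; <hdf5> is the unpacked source directory, and the
test/Makefile edit reflects the bug noted in step 7.1):

    sasn100$ cd <hdf5>
    sasn100$ ./configure tflop
    sasn100$ make H5detect
    janus$   cd <hdf5>
    janus$   make H5Tinit.c
    sasn100$ make
    janus$   (edit test/Makefile to remove the RUNTEST=$(LT_RUN) line)
    janus$   make check
    sasn100$ make install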
---------------
@ -62,72 +62,46 @@ Parallel HDF5:
---------------
The setup process for building the parallel version of the HDF5 library for the
ASCI Red machine is very similar to the sequential version. It is done by a
coordination of events from sasn100 and janus. Special effort must be made to
move the compiled and linked single processor testers to disks local to the
processor for execution. This special effort is shown here at steps 9)
and 10). Following these tests come the edit, compile, link, and
execution of the parallel tests described in steps 12) through 16).
ASCI Red machine is very similar to the sequential version. Since TFLOPS
does not support MPIO, we have prepared a shell-script file that configures
with the appropriate MPI library.
The required steps are roughly as follows:
Assuming you have already unpacked the HDF5 tar-file into the
directory <hdf5>, follow the steps below:
FROM SASN100,
1) uncompress hdf5-1.1.0.tar.Z
1) cd <hdf5>
2) tar xvf hdf5-1.1.0.tar
2) sh INSTALL_parallel.ascired /* this is different from the sequential version */
3) cd ./hdf5
4) sh INSTALL_parallel.ascired /* this is different from the sequential version */
5) make H5detect
3) make H5detect
FROM JANUS,
6) cd ./hdf5
4) cd <hdf5>
7) make H5Tinit.c
5) make H5Tinit.c
FROM SASN100,
8) make >&! comp.out &
6) make
When everything is finished compiling and linking,
FROM JANUS,
9) cp -rp ../hdf5 /scratch
10) cd /scratch/hdf5/test
11) make test >&! test.out
Once satisfied with the single processor test results,
FROM SASN100,
12) cd testpar
13) go through the README file.
14) make -f Makefile.ascired
When everything is finished compiling and linking,
FROM JANUS,
15) cd ./hdf5/testpar
16) make test -f Makefile.ascired >&! test.out
7.1) Due to a bug, you must first remove the following line from
the file test/Makefile before the next step.
RUNTEST=$(LT_RUN)
7.2) make check
Once satisfied with the parallel test results, as long as you
have the correct permission,
FROM SASN100,
17) make install
8) make install
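The only command that differs from the sequential sketch above is the
configuration step, which for the parallel build would look roughly
like:

    sasn100$ cd <hdf5>
    sasn100$ sh INSTALL_parallel.ascired

The remaining H5detect, H5Tinit.c, make, make check, and make install
steps are issued from sasn100 and janus exactly as in the sequential
case.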

INSTALL.parallel

@ -1,28 +0,0 @@
This file contains instructions for the installation of a version of HDF5
that uses the parallel file I/O facilities of the MPI-IO library. A
parallel version of HDF5 can run in a serial environment as long as
the appropriate MPI-IO and MPI header files and libraries are
available.
The parallel version of hdf5 can be built by generally following the
instructions in the INSTALL file for building a serial version and
using `mpicc' as the C compiler. This can be done by setting the CC
environment variable before invoking configure as with:
$ CC=mpicc ./configure
If the mpicc compiler is not available then a parallel library can
still be built as long as the appropriate header files and libraries
can be found. If these files are part of the default compiler search
paths then configuration is as simple as:
$ ./configure --enable-parallel
Otherwise, if the MPI and MPI-IO header files or library cannot be
found then the compiler search paths can be corrected, the files can
be moved, or configure can be told about the file locations. The
latter is done with something like:
$ CPPFLAGS=-I/usr/local/mpi/include \
LDFLAGS=-L/usr/local/mpi/lib/LINUX/ch_p4 \
./configure --enable-parallel

INSTALL_Windows.txt

@ -0,0 +1,239 @@
HDF5 Install Instructions for Windows NT/95/98.
-------------------------------------------------------------------------
The instructions which follow assume that you will be using the source
code release 'zip' file (hdf5-1_2_0.zip).
The following sections discuss the installation procedures in detail.
Building from Source Code Release (hdf5-1_2_0.zip)
===============================================
STEP I: Preconditions
To build the HDF5 and tests, it is assumed that you have done the following:
1. Installed Microsoft Developer Studio and Visual C++ 5.0 or 6.0.
2. Set up a directory structure to unpack the library. For
example:
c:\ (any drive)
MyHDFstuff\ (any folder name)
3. Copied the source distribution archive to that directory
and unpacked it using the appropriate archiver options to
create a directory hierarchy.
Run WinZip on hdf5-1_2_0.zip (the entire source tree).
This creates a directory called 'hdf5' which
contains several files and directories.
STEP II: Building the Libraries and tests.
1. Unpack all.zip in 'hdf5'.
2. Invoke Microsoft Visual C++, go to "File" and select
the "Open Workspace" option.
Then open the c:\myHDFstuff\hdf5\proj\all\all.dsw workspace.
3. Select "Build", then select "Set Active Configuration".
Select as the active configuration
"all -- Win32 Debug" to build debug versions of the single-threaded
static libraries and tests,
or
"all -- Win32 Release" to build release versions of the single-threaded
static libraries and tests.
Select "Build" and "Build all.exe" to
build the corresponding version of the HDF5 library.
NOTE: "all" is a dummy target. You will get a link error when
"all.exe." is built :
LINK: error LNK2001: unresolved external symbol
_mainCRTStartup.....
all.exe - 2 error(s), ....
Warning messages can be ignored. The "all.exe" is never created,
so it is OK.
When the debug or release build is done the directories listed
below will contain the following files :
c:\MyHDFstuff\hdf5\proj\hdf5\debug -
c:\MyHDFstuff\hdf5\proj\hdf5\release -
hdf5.lib - the hdf5 library
c:\MyHDFstuff\hdf5\test\"test directory"-
where test directory is one of the following:
big
bittests
chunk
cmpd_dset
dsets
dtypes
extend
external
fillval
flush1
flush2
gheap
hyperslab
iopipe
istore
links
mount (not supported in this release)
mtime
overhead
ragged
shtype
testhdf5
unlink
Each test directory contains debug and release subdirectories with the
corresponding tests.
STEP III: TESTING THE BUILD
In a command prompt window, run the test batch files, which reside
in the hdf5 directory, to make sure that the library
was built correctly.
The hdf5testDBG.bat file tests the debug version of the library and
hdf5testREL.bat tests the release version of the library.
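For example, assuming the tree was unpacked under C:\MyHDFstuff
(adjust the drive and folder names to your own layout):

    C:\> cd C:\MyHDFstuff\hdf5
    C:\MyHDFstuff\hdf5> hdf5testDBG.bat
    C:\MyHDFstuff\hdf5> hdf5testREL.bat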
STEP IV: BUILDING THE EXAMPLES
1. Invoke Microsoft Visual C++, go to "File" and select
the "Open Workspace" option.
Then open the c:\myHDFstuff\hdf5\examples\allexamples.dsw
workspace.
2. Select "Build", then select "Set Active Configuration".
Select as the active configuration
"allexamples -- Win32 Debug" to build debug versions of the examples,
or
"allexamples -- Win32 Release" to build release versions of the examples.
Select "Build" and "Build allexamples.exe" to
build the corresponding version of the examples.
When the debug build or release build is done there should be the
following subdirectories in C:\myHDFstuff\hdf5\examples\
attributetest
chunkread
compoundtest
extendwritetest
grouptest
readtest
selecttest
writetest
3. Run the batch file "InstallExamples.bat", which resides in the top
level directory. This file creates 2 new directories, examplesREL and
examplesDEB, in the examples directory and places all of the
executables in them. Both the release and debug versions of the
examples should be built before this step is done. The examples should
be tested in these 2 new directories due to some dependencies between
the examples.
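A possible command sequence, assuming the batch file sits in the hdf5
directory of the C:\MyHDFstuff layout used above (illustrative only):

    C:\> cd C:\MyHDFstuff\hdf5
    C:\MyHDFstuff\hdf5> InstallExamples.bat
    C:\MyHDFstuff\hdf5> cd examples\examplesREL
    (run each example executable from this directory; repeat in
     examplesDEB for the debug versions)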
STEP V:
BUILDING AN APPLICATION USING THE HDF5 LIBRARY - SOME HELPFUL POINTERS
=====================================================================
If you are building an application that uses the HDF5 library
the following locations will need to be specified for locating
header files and linking in the HDF libraries:
<top-level HDF5 directory>\src
where <top-level HDF5 directory> may be
C:\MyHDFstuff\hdf5\
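As an illustration only (the exact options depend on your project or
makefile, the source file name here is hypothetical, and the run-time
library setting must match the Single-Threaded libraries described
below), a command-line build against the release library might look
like:

    cl /I C:\MyHDFstuff\hdf5\src myprog.c ^
       /link /LIBPATH:C:\MyHDFstuff\hdf5\proj\hdf5\release hdf5.lib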
MORE HELPFUL POINTERS
=====================
Here are some notes that may be of help if you are not familiar
with using the Visual C++ Development Environment.
Project name and location issues:
The files in all.zip must end up in the hdf5\ directory installed by
hdf5-1_2_0.zip.
If you must install all.dsw and all.dsp in another directory relative
to hdf5\, you will be asked to locate the sub-project files when you
open the project all.dsw.
If you want to rename all (the entire project), you will need to modify
two files all.dsw and all.dsp as text (contrary to the explicit warnings
in the files).
You can also modify all.dsw and all.dsp as text, to allow these 2 files
to be installed in another directory.
Settings... details:
If you create your own project, the necessary settings can be
read from the all.dsp file (as text), or from the Project Settings in
the Developer Studio project settings dialog.
    Project
      Settings
        C/C++
          Category
            PreProcessor
            Code Generation
              Use run-time Library
    These are all set to use Single-Threaded.

INSTALL_parallel

@ -1,6 +1,5 @@
Installation instructions for Parallel HDF5
-------------------------------------------
(last updated: May 21, 1999)
1. Overview
-----------
@ -79,9 +78,40 @@ make install
3. Detailed explanation
-----------------------
[Work in progress. Please send mail to hdfparallel@ncsa.uiuc.edu.]
The HDF5 library can be configured to use MPI and MPI-IO for
parallelism on a distributed multi-processor system. The easy
way to do this is to have a properly installed parallel
compiler (e.g., MPICH's mpicc or IBM's mpcc) and supply that
executable as the value of the CC environment variable:
$ CC=mpcc ./configure
$ CC=/usr/local/mpi/bin/mpicc ./configure
If no such wrapper script is available then you must specify
your normal C compiler along with the distribution of
MPI/MPI-IO which is to be used (values other than `mpich' will
be added at a later date):
$ ./configure --enable-parallel=mpich
If the MPI/MPI-IO include files and/or libraries cannot be
found by the compiler then their directories must be given as
arguments to CPPFLAGS and/or LDFLAGS:
$ CPPFLAGS=-I/usr/local/mpi/include \
LDFLAGS=-L/usr/local/mpi/lib/LINUX/ch_p4 \
./configure --enable-parallel=mpich
If a parallel library is being built then configure attempts
to determine how to run a parallel application on one
processor and on many processors. If the compiler is mpicc
and the user hasn't specified values for RUNSERIAL and
RUNPARALLEL then configure chooses `mpirun' from the same
directory as `mpicc':
RUNSERIAL: /usr/local/mpi/bin/mpirun -np 1
RUNPARALLEL: /usr/local/mpi/bin/mpirun -np $${NPROCS:=2}
The `$${NPROCS:=2}' will be substituted with the value of the
NPROCS environment variable at the time `make check' is run
(or the value 2).
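For example (a sketch; the mpirun path shown above is only typical of
an MPICH installation), the number of processes used by the parallel
tests can be changed when the tests are run:

    $ NPROCS=4 make check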

INSTALL_parallel.ascired

@ -23,8 +23,6 @@
# The following flags are only needed when compiling/linking a user program
# for execution.
#
debug="-g -UH5O_DEBUG -DH5F_OPT_SEEK=0"
default_mode="$debug -DH5F_LOW_DFLT=H5F_LOW_SEC2"
# Using the MPICH library by Daniel Sands.
# It contains both MPI-1 and MPI-IO functions.
@ -44,14 +42,6 @@ mpio_lib="-L$ROMIO/lib"
MPI_INC="$mpi1_inc $mpio_inc"
MPI_LIB="$mpi1_lib $mpio_lib"
#for version 1.1 and later
RUNSERIAL="yod -sz 1"
export RUNSERIAL
RUNPARALLEL="yod -sz 8"
export RUNPARALLEL
LIBS="-lmpich"
export LIBS
# Once these variables are set to the proper values for your installation,
# you can run the configure program (i.e., ./configure tflop --enable-parallel=mpio)
@ -62,7 +52,7 @@ export LIBS
# cicc and link to the MPI-IO library and the parallel version of the HDF5
# library (that was created and installed with the configure/make process).
CFLAGS="$default_mode" \
CPPFLAGS=$MPI_INC \
LDFLAGS=$MPI_LIB \
./configure --enable-parallel --disable-shared tflop
LIBS="-lmpich" \
./configure --enable-parallel tflop

README

@ -28,8 +28,10 @@ library.
not a discussion list.
To subscribe to a list, send mail to "majordomo@ncsa.uiuc.edu",
with "subscribe <your e-mail address> in the _body_ of the message. Messages
to be sent to the list should be sent to "<list>@ncsa.uiuc.edu".
with "subscribe <list>" in the _body_, not the Subject, of the message.
E.g., subscribe hdf5
Messages to be sent to the list should be sent to "<list>@ncsa.uiuc.edu".
Nearly daily code snapshots are now being provided at the following URL:
ftp://hdf.ncsa.uiuc.edu/pub/outgoing/hdf5/snapshots

RELEASE

@ -1,4 +1,4 @@
Release information for hdf5-1.2.0
Release information for hdf5-1.2.0beta
------------------------------------
CHANGES SINCE VERSION 1.0.1
@ -86,6 +86,15 @@ Persistent Pointers
Parallel Support
----------------
* Improved parallel I/O performance.
* Supported new platforms: Cray T3E, Linux, DEC Cluster.
* Use vendor supported version of MPIO on SGI O2K and Cray platforms.
* Improved the algorithm that translates an HDF5 hyperslab selection
into an MPI type for better collective I/O performance.
Tools
-----
@ -94,20 +103,16 @@ Tools
show file addresses for raw data, to format output more reasonably,
to show object attributes, and to perform a recursive listing,
* Enhancements to h5dump similar to h5ls.
* An hdf5 to hdf4 converter.
* Enhancements to h5dump: support new data types added since previous
versions.
* h5toh4: An hdf5 to hdf4 converter.
CHANGES SINCE THE Version 1.0.0 RELEASE
* [Improvement]: configure sets up the Makefile in the parallel test
suite (testpar/) correctly. (Tested for O2K only.)
suite (testpar/) correctly.
* [Bug-Fix]: Configure failed for all IRIX versions other than 6.3.
It now configures correctly for all IRIX 6.x versions.
@ -463,43 +468,42 @@ Ragged Arrays (alpha)
H5Rwrite - write to an array
H5Rread - read from an array
This release has been tested on UNIX platforms only; specifically:
Linux, FreeBSD, IRIX, Solaris & DEC UNIX.
RELEASE INFORMATION FOR PARALLEL HDF5
-------------------------------------
* Current release supports independent access to fixed dimension datasets
only.
* The comm and info arguments of H5Cset_mpi are not used. All parallel
I/O is done via MPI_COMM_WORLD. Access_mode for H5Cset_mpi can be
H5ACC_INDEPENDENT only.
* This release of parallel HDF5 has been tested on IBM SP2 and SGI
Origin 2000 systems. It uses the ROMIO version of the MPI-IO interface
for parallel I/O support.
* Useful URL's.
Parallel HDF webpage: "http://hdf.ncsa.uiuc.edu/Parallel_HDF/"
ROMIO webpage: "http://www.mcs.anl.gov/home/thakur/romio/"
* Some to-do items for future releases
support for Intel Teraflop platform.
support for unlimited dimension datasets.
support for file access via a communicator besides MPI_COMM_WORLD.
support for collective access to datasets.
support for independent create/open of datasets.
PLATFORMS SUPPORTED
-------------------
Platform(OS)                   C-Compiler       Fortran-Compiler
------------                   ----------       ----------------
Sun4(Solaris 2.5)              CC SC4.0         f77 SC4.0
SGI (IRIX v6.5)                CC 7.21          f77 7.21
SGI-Origin(IRIX64 v6.4-n32)    CC 7.2.1.2m      f77 7.2.1.2m
SGI-Origin(IRIX64 v6.4-64)     CC 7.2.1.2m      f77 7.2.1.2m
The operating systems listed below, with compiler information and MPI
library where applicable, are the systems on which HDF5 1.2.0beta was tested.
                             Compiler & libraries
Platform                     Information                      Comment
--------                     --------------------             -------
AIX 4.3.2 (IBM SP)
FreeBSD 3.2                  gcc 2.8.1
HP-UX 10.20                  cc A.10.32.03
IRIX 6.5                     cc 7.2.1
IRIX64 6.5 (64 & n32)        cc 7.30
                             mpt.1.3 (SGI MPI 3.2.0.0)
Linux (SuSE and RedHat)      egcs-1.1.2                       configured with
                                                              --disable-hsizet
OSF1 V4.0                    DEC-V5.2-033
SunOS 5.6                    cc 4.2                           no optimization
                             gcc 2.8.1
SunOS 5.5.1                  gcc 2.7.2                        configured with
                                                              --disable-hsizet
TFLOPS O/S 1.0.4             cicc (pgcc Rel 1.7-6i)
                             mpich-1.1.2 with local changes
Unicos 2.0.4.61 (T3E)        cc 6.2.1.0
                             mpt.1.3
Windows/NT 4.0/SP3           MSVC++ 5.0 and 6.0