Robb Matzke ea3624e133 [svn-r1115] Changes since 19990302
----------------------

./INSTALL
./configure.in
./configure		[REGENERATED]
./src/H5config.h.in	[REGENERATED]
	Improvements for parallel library.  If you have a properly
	working mpicc you should be able to just say:

	    $ CC=mpicc ./configure

	and you will see

	    checking for mpirun... /usr/local/mpi/bin/mpirun
	    checking for parallel support files... skipped
	    checking how to run on one processor...
		     /usr/local/mpi/bin/mpirun -np 1
	    checking how to run in parallel...
		     /usr/local/mpi/bin/mpirun -np $$NPROCS

	To quote from the INSTALL file....

	*** Parallel vs. serial library
	The HDF5 library can be configured to use MPI and MPI-IO for
	parallelism on a distributed multi-processor system. The easy
	way to do this is to have a properly installed parallel
	compiler (e.g., MPICH's mpicc or IBM's mpcc) and supply that
	executable as the value of the CC environment variable:
	[NOTE: mpcc is not tested yet]

	    $ CC=mpcc ./configure
	    $ CC=/usr/local/mpi/bin/mpicc ./configure

	If no such wrapper script is available then you must specify
	your normal C compiler along with the distribution of
	MPI/MPI-IO which is to be used (values other than `mpich' will
	be added at a later date):

	    $ ./configure --enable-parallel=mpich

	If the MPI/MPI-IO include files and/or libraries cannot be
	found by the compiler then their directories must be given as
	arguments to CPPFLAGS and/or LDFLAGS:

	    $ CPPFLAGS=-I/usr/local/mpi/include \
	      LDFLAGS=-L/usr/local/mpi/lib/LINUX/ch_p4 \
	      ./configure --enable-parallel=mpich

	If a parallel library is being built then configure attempts
	to determine how to run a parallel application on one
	processor and on many processors.  If the compiler is mpicc
	and the user hasn't specified values for RUNSERIAL and
	RUNPARALLEL then configure chooses `mpirun' from the same
	directory as `mpicc':

	    RUNSERIAL:    /usr/local/mpi/bin/mpirun -np 1
	    RUNPARALLEL:  /usr/local/mpi/bin/mpirun -np $${NPROCS:=2}

	The `$${NPROCS:=2}' will be substituted with the value of the
	NPROCS environment variable at the time `make check' is run
	(or the value 2).

./testpar/Makefile.in
	Saying `make check' (or `make test') will run the tests on two
	processors by default.  If you define NPROCS then that many
	processors are used instead:

	    $ NPROCS=4 make check

./configure.in
	Fixed (hopefully) bugs with detecting whether __attribute__
	and __FUNCTION__ are special keywords for the compiler.

./Makefile.in
	Saying `make install' from the top level directory shows
	instructions for using shared libraries.

./config/commence.in
./src/Makefile.in
./test/Makefile.in
./testpar/Makefile.in
./tools/Makefile.in
	Moved the @top_srcdir@ into the makefiles because it was
	expanded too early and had the wrong value.

./INSTALL
	Added a warning that if the wrong version of the hdf4 tools is
	installed then `make check' will fail in the tools directory.

-*- outline -*-
This file contains instructions for the installation of HDF5 on
Unix-like systems. Users of the Intel TFLOPS machine should see the
INSTALL.ascired file for instructions.
* Obtaining HDF5
The latest supported public release of HDF5 is available from
ftp://hdf.ncsa.uiuc.edu/pub/dist/HDF5 in tar format, either
uncompressed or compressed with compress, gzip, or bzip2.
The HDF team also makes snapshots of the source code available
on a regular basis. These snapshots are unsupported (that is,
the HDF team will not release a bug fix against a particular
snapshot; rather, any bug fixes will be rolled into the next
snapshot). Furthermore, the snapshots have only been tested on
a few machines and may not test correctly for parallel
applications. Snapshots can be found at
ftp://hdf.ncsa.uiuc.edu/pub/outgoing/hdf5/snapshots in a
limited number of formats.
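For example, a release can be fetched with a command-line ftp
client as follows (the file name shown is illustrative; pick
the release and format you want from the directory listing):
$ ftp hdf.ncsa.uiuc.edu         (log in as user `anonymous')
ftp> cd /pub/dist/HDF5
ftp> binary
ftp> get hdf5-1.0.0.tar.gz
ftp> quit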
* Warnings about compilers
OUTPUT FROM THE FOLLOWING COMPILERS SHOULD BE EXTREMELY
SUSPECT WHEN USED TO COMPILE THE HDF5 LIBRARY, ESPECIALLY IF
OPTIMIZATIONS ARE ENABLED. IN ALL CASES, HDF5 ATTEMPTS TO WORK
AROUND THE COMPILER BUGS, BUT THE HDF5 DEVELOPMENT TEAM MAKES
NO GUARANTEES THAT THERE ARE NO OTHER CODE GENERATION PROBLEMS.
** GNU (Intel platforms)
Versions before 2.8.1 have serious problems allocating
registers when functions contain operations on `long long'
data types. Supplying the `--disable-hsizet' switch to
configure (documented below) will prevent hdf5 from using
`long long' data types in situations that are known not to
work, but it limits the hdf5 address space to 2GB.
** DEC
The V5.2-038 compiler (and possibly others) occasionally
generates incorrect code for memcpy() calls when optimizations
are enabled, resulting in unaligned access faults. HDF5 works
around the problem by casting the second argument to `char*'.
** SGI (Irix64 6.2)
The Mongoose 7.00 compiler has serious optimization bugs and
should be upgraded to MIPSpro 7.2.1.2m. Patches are available
from SGI.
** Windows/NT
The Microsoft Win32 5.0 compiler is unable to cast unsigned
long long values to doubles. HDF5 works around this bug by
first casting to signed long long and then to double.
* Quick installation
For those that don't like to read ;-) the following steps can
be used to configure, build, test, and install the HDF5
library, header files, and support programs.
$ gunzip <hdf5-1.0.0.tar.gz |tar xf -
$ cd hdf5-1.0.0
$ ./configure
$ make
$ make check
$ make install
* HDF5 dependencies
** Zlib
The HDF5 library has a predefined compression filter that uses
the "deflate" method for chunked datatsets. If zlib-1.1.2 or
later is found then hdf5 will use it, otherwise HDF5's
predefined compression method will degenerate to a no-op (the
compression filter will succeed but the data will not be
compressed).
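To see which zlib version configure will find, one can check
the ZLIB_VERSION macro in the zlib header (the header location
shown is illustrative; /usr/local/include is another common
place):
$ grep ZLIB_VERSION /usr/include/zlib.h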
** MPI and MPI-IO
The parallel version of the library is built upon the
foundation provided by MPI and MPI-IO. If these libraries are
not available when HDF5 is configured then only a serial
version of HDF5 can be built.
* Full installation instructions for source distributions
** Unpacking the distribution
The HDF5 source code is distributed in a variety of formats
which can be unpacked with the following commands, each of
which creates an `hdf5-1.0.0' directory.
*** Non-compressed tar archive (*.tar)
$ tar xf hdf5-1.0.0.tar
*** Compressed tar archive (*.tar.Z)
$ uncompress -c <hdf5-1.0.0.tar.Z |tar xf -
*** Gzip'd tar archive (*.tar.gz)
$ gunzip <hdf5-1.0.0.tar.gz |tar xf -
*** Bzip'd tar archive (*.tar.bz2)
$ bunzip2 <hdf5-1.0.0.tar.bz2 |tar xf -
** Configuring
HDF5 uses the GNU autoconf system for configuration, which
detects various features of the host system and creates the
Makefiles. On most systems it should be sufficient to say:
$ ./configure
or
$ sh configure
The configuration process can be controlled through
environment variables, command-line switches, and host
configuration files. For a complete list of switches say
`./configure --help'. The host configuration files are located
in the `config' directory and are based on architecture name,
vendor name, and/or operating system which are displayed near
the beginning of the `configure' output. The host config file
influences the behavior of configure by setting or augmenting
shell variables.
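A host configuration file is simply a fragment of Bourne shell
code sourced by configure. As a sketch of what one might
contain (the settings are illustrative, not taken from any
real host file; see the files in `config' for real examples):
# Pick a default compiler and add strict-ANSI checking.
CC="${CC:-cc}"
CFLAGS="${CFLAGS} -ansi"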
*** Specifying the installation directories
Typing `make install' will install the HDF5 library, header
files, and support programs in /usr/local/lib,
/usr/local/include, and /usr/local/bin. To use a path other
than /usr/local specify the path with the `--prefix=PATH'
switch:
$ ./configure --prefix=/home/robb
If shared libraries are being built (the default) then the
final home of the shared library must be specified with this
switch before the library and executables are built.
*** Using an alternate C compiler
By default, configure will look for the C compiler by trying
`gcc' and `cc'. However, if the environment variable "CC" is
set then its value is used as the C compiler (users of csh and
derivatives will need to prefix the commands below with
`env'). For instance, to use the native C compiler on a system
which also has the GNU gcc compiler:
$ CC=cc ./configure
A parallel version of hdf5 can be built by specifying `mpicc'
as the C compiler (the `--enable-parallel' flag documented
below is optional). Using the `mpicc' compiler will ensure
that the correct MPI and MPI-IO header files and libraries are
used.
$ CC=/usr/local/mpi/bin/mpicc ./configure
On Irix64 the default compiler is `cc -64'. To use an
alternate compiler specify it with the CC variable:
$ CC='cc -o32' ./configure
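Users of csh and derivatives would prefix the variable
assignment with `env' as noted above:
$ env CC=cc ./configure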
*** Additional compilation flags
If additional flags must be passed to the compilation commands
then specify those flags with the CFLAGS variable. For
instance, to enable symbolic debugging of a production version
of HDF5 one might say:
$ CFLAGS=-g ./configure --enable-production
*** Specifying other programs
The build system has been tuned for use with GNU make but
also works with other versions of make. If the `make' command
runs a non-GNU version but a GNU version is available under a
different name (perhaps `gmake') then HDF5 can be configured
to use it by setting the MAKE variable. Note that whatever
value is used for MAKE must also be used as the make command
when building the library:
$ MAKE=gmake ./configure
$ gmake
The `AR' and `RANLIB' variables can also be set to the names
of the `ar' and `ranlib' (or `:') commands to override values
detected by configure.
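For example (the `ar' path is illustrative; `:' is a no-op
command, useful on systems where ranlib is unnecessary):
$ AR=/usr/ccs/bin/ar RANLIB=: ./configure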
The HDF5 library, include files, and utilities are installed
during `make install' (described below) with a BSD-compatible
install program detected automatically by configure. If none
is found then the shell script bin/install-sh is
used. Configure doesn't check that the install script actually
works, but if a bad install is detected on your system (e.g.,
on the ASCI blue machine as of March 2, 1999) you have two
choices:
1. Copy the bin/install-sh program to your $HOME/bin
directory, name it `install', and make sure that
$HOME/bin is searched before the system bin
directories.
2. Specify the full path name of the `install-sh' program
as the value of the INSTALL environment variable. Note:
do not use `cp' or some other program in place of
install because the HDF5 makefiles also use the install
program to change file ownership and/or access
permissions.
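As a sketch, the first choice might be carried out like this
(assuming a Bourne shell and that $HOME/bin already precedes
the system bin directories in PATH):
$ mkdir -p $HOME/bin
$ cp bin/install-sh $HOME/bin/install
$ chmod +x $HOME/bin/install
and the second choice like this, using the copy in the
unpacked source tree:
$ INSTALL=`pwd`/bin/install-sh ./configure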
*** Specifying other libraries and headers
Configure searches the standard places (those places known by
the system's compiler) for header files and
libraries. However, additional directories can be specified by
using the CPPFLAGS and/or LDFLAGS variables:
$ CPPFLAGS=-I/home/robb/include \
LDFLAGS=-L/home/robb/lib \
./configure
HDF5 uses the zlib library for two purposes: it provides
support for the HDF5 deflate data compression filter, and it
is used by the h5toh4 converter in support of HDF4. Configure
searches the standard places (plus those specified above with
CPPFLAGS and LDFLAGS variables) for the zlib headers and
library. The search can be disabled by specifying
`--without-zlib' or alternate directories can be specified
with `--with-zlib=INCDIR,LIBDIR' or through the CPPFLAGS and
LDFLAGS variables:
$ ./configure --with-zlib=/usr/unsup/include,/usr/unsup/lib
$ CPPFLAGS=-I/usr/unsup/include \
LDFLAGS=-L/usr/unsup/lib \
./configure
The HDF5-to-HDF4 conversion tool requires the HDF4 library and
header files which are detected the same way as zlib. The
switch to give to configure is `--with-hdf4'. Note that HDF5
requires a newer version of zlib than the one shipped with
some versions of HDF4. Also, unless you have the "correct"
version of hdf4 the confidence testing will fail in the tools
directory.
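For example, assuming the `--with-hdf4' switch accepts the
same INCDIR,LIBDIR form as `--with-zlib' (directories
illustrative):
$ ./configure --with-hdf4=/usr/local/hdf/include,/usr/local/hdf/lib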
*** Static versus shared linking
The build process will create static libraries on all systems
and shared libraries on systems that support dynamic linking
to a sufficient degree. Either form of library may be
suppressed by saying `--disable-static' or `--disable-shared'.
$ ./configure --disable-shared
*** Optimization versus symbolic debugging
The library can be compiled to provide symbolic debugging
support so it can be debugged with gdb, dbx, ddd, etc or it
can be compiled with various optimizations. To compile for
symbolic debugging (the default for snapshots) say
`--disable-production'; to compile with optimizations (the
default for supported public releases) say
`--enable-production'. On some systems the library can also
be compiled for profiling with gprof by saying
`--enable-production=profile'.
$ ./configure --disable-production #symbolic debugging
$ ./configure --enable-production #optimized code
$ ./configure --enable-production=profile #for use with gprof
Regardless of whether support for symbolic debugging is
enabled, the library is also able to perform runtime debugging
of certain packages (such as type conversion execution times,
and extensive invariant condition checking). To enable this
debugging supply a comma-separated list of package names to
the `--enable-debug' switch (see Debugging.html for a list of
package names). Debugging can be disabled by saying
`--disable-debug'. The default debugging level for snapshots
is a subset of the available packages; the default for
supported releases is no debugging (debugging can incur a
significant runtime penalty).
$ ./configure --enable-debug=s,t #debug only H5S and H5T
$ ./configure --enable-debug #debug normal packages
$ ./configure --enable-debug=all #debug all packages
$ ./configure --disable-debug #no debugging
HDF5 is also able to print a trace of all API function calls,
their arguments, and the return values. To enable or disable
the ability to trace the API say `--enable-trace' (the default
for snapshots) or `--disable-trace' (the default for public
releases). The tracing must also be enabled at runtime to see
any output (see Debugging.html).
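For example, one might build with tracing compiled in and then
turn it on at run time; the exact runtime setting is described
in Debugging.html (the HDF5_DEBUG variable shown below is an
assumption based on that document):
$ ./configure --enable-trace
$ make
$ HDF5_DEBUG=trace ./mytest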
*** Large (>2GB) vs. small (<2GB) file capability
In order to read or write files that could potentially be
larger than 2GB it is necessary to use the non-ANSI `long
long' data type on some platforms. However, some compilers
(e.g., GNU gcc versions before 2.8.1 on Intel platforms)
are unable to produce correct machine code for this data
type. To disable use of the `long long' type on these machines
say:
$ ./configure --disable-hsizet
*** Parallel vs. serial library
The HDF5 library can be configured to use MPI and MPI-IO for
parallelism on a distributed multi-processor system. The easy
way to do this is to have a properly installed parallel
compiler (e.g., MPICH's mpicc or IBM's mpcc) and supply that
executable as the value of the CC environment variable:
$ CC=mpcc ./configure
$ CC=/usr/local/mpi/bin/mpicc ./configure
If no such wrapper script is available then you must specify
your normal C compiler along with the distribution of
MPI/MPI-IO which is to be used (values other than `mpich' will
be added at a later date):
$ ./configure --enable-parallel=mpich
If the MPI/MPI-IO include files and/or libraries cannot be
found by the compiler then their directories must be given as
arguments to CPPFLAGS and/or LDFLAGS:
$ CPPFLAGS=-I/usr/local/mpi/include \
LDFLAGS=-L/usr/local/mpi/lib/LINUX/ch_p4 \
./configure --enable-parallel=mpich
If a parallel library is being built then configure attempts
to determine how to run a parallel application on one
processor and on many processors. If the compiler is mpicc
and the user hasn't specified values for RUNSERIAL and
RUNPARALLEL then configure chooses `mpirun' from the same
directory as `mpicc':
RUNSERIAL: /usr/local/mpi/bin/mpirun -np 1
RUNPARALLEL: /usr/local/mpi/bin/mpirun -np $${NPROCS:=2}
The `$${NPROCS:=2}' will be substituted with the value of the
NPROCS environment variable at the time `make check' is run
(or the value 2).
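For example, to run the parallel tests on four processors:
$ NPROCS=4 make check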
** Building
The library, confidence tests, and programs can be built by
saying just
$ make
Note that if you supplied some other make command via the MAKE
variable during the configuration step then that same command
must be used here.
When using GNU make you can add `-j -l6' to the make command
to compile in parallel on SMP machines. Do not give a number
after the `-j' since GNU make will turn it off for recursive
invocations of make.
$ make -j -l6
** Testing
HDF5 comes with various test suites, all of which can be run
by saying
$ make check
To run only the tests for the library change to the `test'
directory before issuing the command. Similarly, tests for the
parallel aspects of the library are in `testpar' and tests for
the support programs are in `tools'.
Temporary files will be deleted by each test when it completes,
but may continue to exist in an incomplete state if the test
fails. To prevent deletion of the files define the
HDF5_NOCLEANUP environment variable.
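For example (any value will do; the variable merely needs to
be defined):
$ HDF5_NOCLEANUP=yes make check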
** Installing
The HDF5 library, include files, and support programs can be
installed in a (semi-)public place by saying `make
install'. The files are installed under the directory
specified with `--prefix=DIR' (or '/usr/local') in directories
named `lib', `include', and `bin'. The prefix directory must
exist prior to `make install', but its subdirectories are
created automatically.
The library can be used without installing it by pointing the
compiler at the `src' directory for both include files and
libraries. However, the minimum which must be installed to
make the library publicly available is:
The library:
./src/libhdf5.a
The public header files:
./src/H5*public.h
The main header file:
./src/hdf5.h
The configuration information:
./src/H5config.h
The support programs that are useful are:
./tools/h5ls (list file contents)
./tools/h5dump (dump file contents)
./tools/h5repart (repartition file families)
./tools/h5toh4 (hdf5 to hdf4 file converter)
./tools/h5debug (low-level file debugging)
./tools/h5import (a demo)
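As a sketch of using the library straight out of the build
tree (the source path is illustrative; `-lz' is needed only
when zlib support was configured in):
$ cc -I/path/to/hdf5-1.0.0/src myprog.c \
/path/to/hdf5-1.0.0/src/libhdf5.a -lz -lm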
* Support
Support is described in the README file.