Instructions for the Installation of HDF5 Software
==================================================
CONTENTS
--------
1. Obtaining HDF5
2. Warnings about compilers
2.1. GNU (Intel platforms)
2.2. DEC
2.3. SGI (Irix64 6.2)
2.4. Windows/NT
3. Quick installation
3.1. TFLOPS
3.2. Windows
3.3. Certain Virtual File Layer (VFL) drivers
4. HDF5 dependencies
4.1. Zlib
4.2. MPI and MPI-IO
5. Full installation instructions for source distributions
5.1. Unpacking the distribution
5.1.1. Non-compressed tar archive (*.tar)
5.1.2. Compressed tar archive (*.tar.Z)
5.1.3. Gzip'd tar archive (*.tar.gz)
5.1.4. Bzip'd tar archive (*.tar.bz2)
5.2. Source vs. Build Directories
5.3. Configuring
5.3.1. Specifying the installation directories
5.3.2. Using an alternate C compiler
5.3.3. Additional compilation flags
5.3.4. Compiling HDF5 wrapper libraries
5.3.5. Specifying other programs
5.3.6. Specifying other libraries and headers
5.3.7. Static versus shared linking
5.3.8. Optimization versus symbolic debugging
5.3.9. Large (>2GB) vs. small (<2GB) file capability
5.3.10. Parallel vs. serial library
5.3.11. Threadsafe capability
5.3.12. Backward compatibility
5.3.13. Network stream capability
5.4. Building
5.5. Testing
5.6. Installing
6. Using the Library
7. Support
*****************************************************************************
1. Obtaining HDF5
The latest supported public release of HDF5 is available from
ftp://hdf.ncsa.uiuc.edu/pub/dist/HDF5. For Unix platforms, it is
available in tar format uncompressed or compressed with compress,
gzip, or bzip2. For Microsoft Windows, it is in ZIP format.
The HDF team also makes snapshots of the source code available on
a regular basis. These snapshots are unsupported (that is, the
HDF team will not release a bug-fix on a particular snapshot;
rather any bug fixes will be rolled into the next snapshot).
Furthermore, the snapshots have only been tested on a few
machines and may not test correctly for parallel applications.
Snapshots can be found at
ftp://hdf.ncsa.uiuc.edu/pub/outgoing/hdf5/snapshots in a limited
number of formats.
2. Warnings about compilers
OUTPUT FROM THE FOLLOWING COMPILERS SHOULD BE EXTREMELY SUSPECT
WHEN USED TO COMPILE THE HDF5 LIBRARY, ESPECIALLY IF
OPTIMIZATIONS ARE ENABLED. IN ALL CASES, HDF5 ATTEMPTS TO WORK
AROUND THE COMPILER BUGS BUT THE HDF5 DEVELOPMENT TEAM MAKES NO
GUARANTEES THAT THERE ARE NO OTHER CODE GENERATION PROBLEMS.
2.1. GNU (Intel platforms)
Versions before 2.8.1 have serious problems allocating registers
when functions contain operations on `long long' data types.
Supplying the `--disable-hsizet' switch to configure (documented
below) will prevent hdf5 from using `long long' data types in
situations that are known not to work, but it limits the hdf5
address space to 2GB.
2.2. DEC
The V5.2-038 compiler (and possibly others) occasionally
generates incorrect code for memcpy() calls when optimizations
are enabled, resulting in unaligned access faults. HDF5 works
around the problem by casting the second argument to `char *'.
2.3. SGI (Irix64 6.2)
The Mongoose 7.00 compiler has serious optimization bugs and
should be upgraded to MIPSpro 7.2.1.2m. Patches are available
from SGI.
2.4. Windows/NT
The Microsoft Win32 5.0 compiler is unable to cast unsigned long
long values to doubles. HDF5 works around this bug by first
casting to signed long long and then to double.
The debug version of VC++ 6.0 emits a link warning (defaultlib
"LIBC" conflicts with use of other libs). This warning does not
affect building or testing the hdf5 libraries.
3. Quick installation
For those that don't like to read ;-) the following steps can be
used to configure, build, test, and install the HDF5 library,
header files, and support programs.
$ gunzip < hdf5-1.5.x.tar.gz | tar xf -
$ cd hdf5-1.5.x
$ ./configure
$ make
$ make check
$ make install-all
3.1. TFLOPS
Users of the Intel TFLOPS machine, after reading this file,
should see the INSTALL_TFLOPS file for more instructions.
3.2. Windows
Users of Microsoft Windows should see the INSTALL_Windows file
for detailed instructions.
3.3. Certain Virtual File Layer (VFL) drivers
Users who want to build HDF5 with a special Virtual File Layer
(VFL) driver should read the INSTALL_VFL file. The SRB and
Globus-GASS drivers are documented there.
4. HDF5 dependencies
4.1. Zlib
The HDF5 library has a predefined compression filter that uses
the "deflate" method for chunked datatsets. If zlib-1.1.2 or
later is found then HDF5 will use it, otherwise HDF5's predefined
compression method will degenerate to a no-op (the compression
filter will succeed but the data will not be compressed).
4.2. MPI and MPI-IO
The parallel version of the library is built upon the foundation
provided by MPI and MPI-IO. If these libraries are not available
when HDF5 is configured then only a serial version of HDF5 can be
built.
5. Full installation instructions for source distributions
5.1. Unpacking the distribution
The HDF5 source code is distributed in a variety of formats which
can be unpacked with the following commands, each of which
creates an `hdf5-1.5.x' directory.
5.1.1. Non-compressed tar archive (*.tar)
$ tar xf hdf5-1.5.x.tar
5.1.2. Compressed tar archive (*.tar.Z)
$ uncompress -c < hdf5-1.5.x.tar.Z | tar xf -
5.1.3. Gzip'd tar archive (*.tar.gz)
$ gunzip < hdf5-1.5.x.tar.gz | tar xf -
5.1.4. Bzip'd tar archive (*.tar.bz2)
$ bunzip2 < hdf5-1.5.x.tar.bz2 | tar xf -
5.2. Source vs. Build Directories
On most systems the build can occur in a directory other than the
source directory, allowing multiple concurrent builds and/or
read-only source code. In order to accomplish this, one should
create a build directory, cd into that directory, and run the
`configure' script found in the source directory (configure
details are below).
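For example, an out-of-source build might look like this (the
directory names are only illustrative):
$ mkdir build
$ cd build
$ ../hdf5-1.5.x/configure
$ make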
Unfortunately, this does not work on recent Irix platforms (6.5?
and later) because that `make' doesn't understand the VPATH
variable. However, hdf5 also supports Irix `pmake' which has a
.PATH target which serves a similar purpose. Here's what the man
pages say about VPATH, which is the facility used by HDF5
makefiles for this feature:
The VPATH facility is a derivation of the undocumented VPATH
feature in the System V Release 3 version of make. System V
Release 4 has a new VPATH implementation, much like the
pmake(1) .PATH feature. This new feature is also undocumented
in the standard System V Release 4 manual pages. For this
reason it is not available in the IRIX version of make. The
VPATH facility should not be used with the new parallel make
option.
5.3. Configuring
HDF5 uses the GNU autoconf system for configuration, which
detects various features of the host system and creates the
Makefiles. On most systems it should be sufficient to say:
$ ./configure
or
$ sh configure
The configuration process can be controlled through environment
variables, command-line switches, and host configuration files.
For a complete list of switches type:
$ ./configure --help
The host configuration files are located in the `config'
directory and are based on architecture name, vendor name, and/or
operating system which are displayed near the beginning of the
`configure' output. The host config file influences the behavior
of configure by setting or augmenting shell variables.
5.3.1. Specifying the installation directories
Typing `make install' will install the HDF5 library, header
files, examples, and support programs in /usr/local/lib,
/usr/local/include, /usr/local/doc/hdf5/examples, and
/usr/local/bin. To use a path other than
/usr/local specify the path with the `--prefix=PATH' switch:
$ ./configure --prefix=$HOME
If shared libraries are being built (the default) then the final
home of the shared library must be specified with this switch
before the library and executables are built.
5.3.2. Using an alternate C compiler
By default, configure will look for the C compiler by trying `gcc'
and `cc'. However, if the environment variable "CC" is set then its
value is used as the C compiler (users of csh and derivatives will
need to prefix the commands below with `env'). For instance, to use
the native C compiler on a system which also has the GNU gcc
compiler:
$ CC=cc ./configure
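Under csh and its derivatives the same configuration would be
written with `env', for instance:
$ env CC=cc ./configure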
A parallel version of hdf5 can be built by specifying `mpicc' as the
C compiler (the `--enable-parallel' flag documented below is
optional). Using the `mpicc' compiler will insure that the correct
MPI and MPI-IO header files and libraries are used.
$ CC=/usr/local/mpi/bin/mpicc ./configure
On Irix64 the default compiler is `cc'. To use an alternate compiler
specify it with the CC variable:
$ CC='cc -n32' ./configure
Similarly, users compiling on a Solaris machine and desiring to build
the distribution with 64-bit support should specify the correct flags
with the CC variable:
$ CC='cc -xarch=v9' ./configure
Specifying these machine architecture flags in the CFLAGS variable
(see below) will not work correctly.
5.3.3. Additional compilation flags
If additional flags must be passed to the compilation commands
then specify those flags with the CFLAGS variable. For instance,
to enable symbolic debugging of a production version of HDF5 one
might say:
$ CFLAGS=-g ./configure --enable-production
5.3.4. Compiling HDF5 wrapper libraries
One can optionally build the Fortran and/or C++ interface to the
HDF5 C library. By default, both options are disabled. To build
them, specify `--enable-fortran' and `--enable-cxx' respectively.
$ ./configure --enable-fortran
$ ./configure --enable-cxx
Configuration will halt if a working Fortran 90 or 95 compiler or
C++ compiler is not found. Currently, configure tests for these
Fortran compilers, in order: f90, pgf90, f95. To use an
alternative compiler, specify it with the F9X variable:
$ F9X=/usr/local/bin/g95 ./configure --enable-fortran
Note: The Fortran and C++ interfaces are not supported on all the
platforms the main HDF5 library supports. Also, the Fortran
interface supports parallel HDF5 while the C++ interface does
not.
Note: On T3E and J90 the following files should be modified before
building the Fortran Library:
fortran/src/H5Dff.f90
fortran/src/H5Aff.f90
fortran/src/H5Pff.f90
Check for "Comment if on T3E ..." comment and comment out
specified lines.
5.3.5. Specifying other programs
The build system has been tuned for use with GNU make but works
also with other versions of make. If the `make' command runs a
non-GNU version but a GNU version is available under a different
name (perhaps `gmake') then HDF5 can be configured to use it by
setting the MAKE variable. Note that whatever value is used for
MAKE must also be used as the make command when building the
library:
$ MAKE=gmake ./configure
$ gmake
The `AR' and `RANLIB' variables can also be set to the names of
the `ar' and `ranlib' (or `:') commands to override values
detected by configure.
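For example (the ar path shown is only illustrative; adjust it
for your system):
$ AR=/usr/ccs/bin/ar RANLIB=: ./configure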
The HDF5 library, include files, and utilities are installed
during `make install' (described below) with a BSD-compatible
install program detected automatically by configure. If none is
found then the shell script bin/install-sh is used. Configure
doesn't check that the install script actually works, but if a
bad install is detected on your system (e.g., on the ASCI blue
machine as of March 2, 1999) you have two choices:
1. Copy the bin/install-sh program to your $HOME/bin
directory, name it `install', and make sure that $HOME/bin
is searched before the system bin directories.
2. Specify the full path name of the `install-sh' program
as the value of the INSTALL environment variable. Note: do
not use `cp' or some other program in place of install
because the HDF5 makefiles also use the install program to
change file ownership and/or access permissions.
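For example, to use the install-sh script shipped with HDF5 (the
same form appears again in section 5.6):
$ INSTALL="$PWD/bin/install-sh -c" ./configure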
5.3.6. Specifying other libraries and headers
Configure searches the standard places (those places known by the
system's compiler) for include files and libraries. However,
additional directories can be specified by using the CPPFLAGS
and/or LDFLAGS variables:
$ CPPFLAGS=-I/home/robb/include \
LDFLAGS=-L/home/robb/lib \
./configure
HDF5 uses the zlib library for two purposes: it provides support
for the HDF5 deflate data compression filter, and it is used by
the h5toh4 converter and the h4toh5 converter in support of
HDF4. Configure searches the standard places (plus those
specified above with CPPFLAGS and LDFLAGS variables) for the zlib
headers and library. The search can be disabled by specifying
`--without-zlib' or alternate directories can be specified with
`--with-zlib=INCDIR,LIBDIR' or through the CPPFLAGS and LDFLAGS
variables:
$ ./configure --with-zlib=/usr/unsup/include,/usr/unsup/lib
$ CPPFLAGS=-I/usr/unsup/include \
LDFLAGS=-L/usr/unsup/lib \
./configure
The HDF5-to-HDF4 and HDF4-to-HDF5 conversion tool requires the
HDF4 library and header files which are detected the same way as
zlib. The switch to give to configure is `--with-hdf4'. Note
that HDF5 requires a newer version of zlib than the one shipped
with some versions of HDF4. Also, unless you have the "correct"
version of hdf4 the confidence testing will fail in the tools
directory.
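For example (a sketch only; the HDF4 directories are hypothetical
and, as with zlib, can instead be supplied through the CPPFLAGS
and LDFLAGS variables):
$ CPPFLAGS=-I/usr/local/hdf4/include \
LDFLAGS=-L/usr/local/hdf4/lib \
./configure --with-hdf4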
5.3.7. Static versus shared linking
The build process will create static libraries on all systems and
shared libraries on systems that support dynamic linking to a
sufficient degree. Either form of library may be suppressed by
saying `--disable-static' or `--disable-shared'.
$ ./configure --disable-shared
The C++ and Fortran libraries are currently only available in the
static format.
To build only statically linked executables on platforms which
support shared libraries, use the `--enable-static-exec' flag.
$ ./configure --enable-static-exec
5.3.8. Optimization versus symbolic debugging
The library can be compiled to provide symbolic debugging support
so it can be debugged with gdb, dbx, ddd, etc or it can be
compiled with various optimizations. To compile for symbolic
debugging (the default for snapshots) say `--disable-production';
to compile with optimizations (the default for supported public
releases) say `--enable-production'. On some systems the library
can also be compiled for profiling with gprof by saying
`--enable-production=profile'.
$ ./configure --disable-production #symbolic debugging
$ ./configure --enable-production #optimized code
$ ./configure --enable-production=profile #for use with gprof
Regardless of whether support for symbolic debugging is enabled,
the library also is able to perform runtime debugging of certain
packages (such as type conversion execution times, and extensive
invariant condition checking). To enable this debugging supply a
comma-separated list of package names to the `--enable-debug'
switch (see Debugging.html for a list of package names).
Debugging can be disabled by saying `--disable-debug'. The
default debugging level for snapshots is a subset of the
available packages; the default for supported releases is no
debugging (debugging can incur a significant runtime penalty).
$ ./configure --enable-debug=s,t #debug only H5S and H5T
$ ./configure --enable-debug #debug normal packages
$ ./configure --enable-debug=all #debug all packages
$ ./configure --disable-debug #no debugging
HDF5 is also able to print a trace of all API function calls,
their arguments, and the return values. To enable or disable the
ability to trace the API say `--enable-trace' (the default for
snapshots) or `--disable-trace' (the default for public
releases). The tracing must also be enabled at runtime to see any
output (see Debugging.html).
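For example:
$ ./configure --enable-trace #API tracing on
$ ./configure --disable-trace #API tracing off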
5.3.9. Large (>2GB) vs. small (<2GB) file capability
In order to read or write files that could potentially be larger
than 2GB it is necessary to use the non-ANSI `long long' data
type on some platforms. However, some compilers (e.g., GNU gcc
versions before 2.8.1 on Intel platforms) are unable to produce
correct machine code for this data type. To disable use of the
`long long' type on these machines say:
$ ./configure --disable-hsizet
5.3.10. Parallel vs. serial library
The HDF5 library can be configured to use MPI and MPI-IO for
parallelism on a distributed multi-processor system. Read the
file INSTALL_parallel for detailed explanations.
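A typical parallel configuration, assuming an MPI compiler
wrapper such as `mpicc' is available (see section 5.3.2), might
be:
$ CC=mpicc ./configure --enable-parallel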
5.3.11. Threadsafe capability
The HDF5 library can be configured to be thread-safe (on a very
large scale) with the `--enable-threadsafe' flag to
configure. Read the file doc/TechNotes/ThreadSafeLibrary.html for
further details.
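For example:
$ ./configure --enable-threadsafe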
5.3.12. Backward compatibility
The 1.4 version of the HDF5 library can be configured to operate
identically to the v1.2 library with the `--enable-hdf5v1_2'
configure flag. This allows existing code to be compiled with the
v1.4 library without requiring immediate changes to the
application source code. This flag will only be supported in the
v1.4 branch of the library; it will not be available in v1.5+.
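For instance:
$ ./configure --enable-hdf5v1_2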
5.3.13. Network stream capability
The HDF5 library can be configured with a network stream file
driver with the `--enable-stream-vfd' configure flag. This option
compiles the "stream" Virtual File Driver into the main library.
See the documentation on the Virtual File Layer for more details
about the use of this driver.
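For example:
$ ./configure --enable-stream-vfd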
5.4. Building
The library, confidence tests, and programs can be built by
saying just:
$ make
Note that if you supplied some other make command via the MAKE
variable during the configuration step then that same command
must be used here.
When using GNU make you can add `-j -l6' to the make command to
compile in parallel on SMP machines. Do not give a number after
the `-j' since GNU make will turn it off for recursive invocations
of make.
$ make -j -l6
5.5. Testing
HDF5 comes with various test suites, all of which can be run by
saying
$ make check
To run only the tests for the library change to the `test'
directory before issuing the command. Similarly, tests for the
parallel aspects of the library are in `testpar' and tests for
the support programs are in `tools'.
Temporary files will be deleted by each test when it completes,
but may continue to exist in an incomplete state if the test
fails. To prevent deletion of the files define the HDF5_NOCLEANUP
environment variable.
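For example, in a Bourne-style shell (any non-empty value should
work; this is only a sketch):
$ HDF5_NOCLEANUP=yes make check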
5.6. Installing
The HDF5 library, include files, and support programs can be
installed in a (semi-)public place by saying `make install'. The
files are installed under the directory specified with
`--prefix=DIR' (or '/usr/local') in directories named `lib',
`include', and `bin'. The prefix directory must exist prior to
`make install', but its subdirectories are created automatically.
If the install command at your site somehow fails during `make
install', you may use the install-sh script that comes with the
source. You will need to run ./configure again:
$ INSTALL="$PWD/bin/install-sh -c" ./configure ...
$ make install
If you want to install HDF5 in a location other than the location
specified by the `--prefix=DIR' flag during configuration (or
instead of the default location, `/usr/local'), you can do that
by issuing the command:
$ make install prefix=NEW_DIR
where NEW_DIR is the directory in which you wish to install HDF5.
If you do this, you should also modify the installed "bin/h5cc"
script. Change the "prefix=..." line to reflect the value of
NEW_DIR.
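For instance, if NEW_DIR were /opt/hdf5 (a hypothetical path),
the prefix line in /opt/hdf5/bin/h5cc would be changed to read
something like:
prefix="/opt/hdf5"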
The library can be used without installing it by pointing the
compiler at the `src' directory for both include files and
libraries. However, the minimum which must be installed to make
the library publicly available is:
The library:
./src/libhdf5.a
The public header files:
./src/H5*public.h
The main header file:
./src/hdf5.h
The configuration information:
./src/H5pubconf.h
The support programs that are useful are:
./tools/h5ls (list file contents)
./tools/h5dump (dump file contents)
./tools/h5repart (repartition file families)
./tools/h5toh4 (hdf5 to hdf4 file converter)
./tools/h5debug (low-level file debugging)
./tools/h5import (a demo)
./tools/h4toh5 (hdf4 to hdf5 file converter)
6. Using the Library
Please see the User Manual in the doc/html directory.
Most programs will include <hdf5.h> and link with -lhdf5.
Additional libraries may also be necessary depending on whether
support for compression, etc. was compiled into the hdf5 library.
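For example, a typical compile-and-link line might look like the
following, assuming HDF5 was installed under /usr/local/hdf5 and
zlib support was compiled in (the paths and extra libraries are
illustrative):
$ cc -I/usr/local/hdf5/include myprog.c -L/usr/local/hdf5/lib -lhdf5 -lz -lm -o myprog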
A summary of the hdf5 installation can be found in the
libhdf5.settings file in the same directory as the static and/or
shared hdf5 libraries.
7. Support
Support is described in the README file.