[svn-r198] Changes since 19980129

----------------------

./INSTALL
./INSTALL_MAINT
./README
	Updated installation instructions for hdf-5.0.0a.

./RELEASE
	Updated release notes.  Needs more work.

./bin/release
	The tarballs now include the name of the root directory (e.g.,
	hdf-5.0.0a) so it doesn't have to be created explicitly when
	the files are extracted.
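The resulting layout can be sketched in shell terms (the directory
contents below are placeholders, not the real release files):

```shell
# Sketch of the new tarball layout: every entry is prefixed with
# hdf-5.0.0a/, so tar creates the root directory on extraction
# instead of dumping files into the current directory.
work=$(mktemp -d)
mkdir -p "$work/hdf-5.0.0a/src"
echo 'placeholder' > "$work/hdf-5.0.0a/src/H5.c"
(cd "$work" && tar cf hdf-5.0.0a.tar hdf-5.0.0a)
tar tf "$work/hdf-5.0.0a.tar"    # every entry starts with hdf-5.0.0a/
```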
This commit is contained in:
Robb Matzke 1998-01-29 16:56:06 -05:00
parent 28e23330df
commit cb8a986afd
5 changed files with 277 additions and 121 deletions

INSTALL

@ -1,72 +1,113 @@
UNIX-LIKE SYSTEMS
-----------------
This file contains instructions for the installation of HDF5 on
Unix-like systems. First, one must obtain a tarball of the HDF5
release from ---FIXME---->http://hdf5.ncsa.uiuc.edu<----FIXME---
repository. The files are available in uncompressed tar, gzip, bzip2,
and compress formats.
To build/install HDF5 on Unix systems from the root of the
distribution directory:
For those who like to live dangerously and don't like to read ;-) you
can do the following:
* Build the ./src/H5config.h file and Makefiles by saying:
$ tar xf hdf-5.0.0a.tar
$ cd hdf-5.0.0a
$ make test
$ make install # Optional
./configure
You can say `./configure --help' to see a list of options.
Step 1. Unpack the source tree.
One common option is to specify the prefix directory under which
public files are stored. The default prefix is `/usr/local'
resulting in the directory structure:
* The tarball will unpack into an hdf-5.0.0a directory with one of
the following commands:
/usr/local/include -- C header files.
/usr/local/lib -- The HDF5 library.
/usr/local/bin -- HDF5 support programs.
$ tar xf hdf-5.0.0a.tar OR
$ gunzip <hdf-5.0.0a.tar.gz |tar xf - OR
$ bunzip2 <hdf-5.0.0a.tar.bz2 |tar xf - OR
$ uncompress -c <hdf-5.0.0a.tar.Z | tar xf -
To install the public files in `/usr/include', `/usr/lib', and
`/usr/bin' instead say:
./configure --prefix=/usr
Step 2. Configure makefiles.
Configure will create directories `include', `lib', and `bin'
under the prefix directory if they don't already exist, but the
prefix directory must already exist and be writable.
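A quick preflight check for that requirement can be sketched as follows
(the prefix path here is a stand-in for whatever you plan to pass to
`--prefix'):

```shell
# Sketch: verify the prefix directory exists and is writable before
# running configure (configure creates include/, lib/, and bin/ itself).
PREFIX=$(mktemp -d)            # stand-in for e.g. /usr/local
if [ -d "$PREFIX" ] && [ -w "$PREFIX" ]; then
    echo "prefix ok: $PREFIX"
else
    echo "prefix missing or not writable: $PREFIX" >&2
fi
```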
* HDF5 uses the GNU autoconf program for configuration. Most
installations can be configured by typing just (from the
hdf-5.0.0a directory)
You can also override detection of certain things with
environment variables:
$ ./configure
* By default libraries, programs, and documentation are installed
under /usr/local/lib, /usr/local/bin, and /usr/local/man.
However, if you want them in some other location you can specify
a prefix to use instead of /usr/local. For instance, to install
in /usr/lib, /usr/bin, and /usr/man one would say
$ ./configure --prefix=/usr
Note: HDF5 can be used without installing it.
* You can also override detection of certain things with
environment variables:
CC Name of the C compiler to use.
CFLAGS Alternate C compiler flags.
CPPFLAGS Alternate C preprocessor flags.
MAKE Name of the make(1) program.
For instance it is common to say (add `env' to the beginning of
this command if you're running a csh-like shell)
For instance it is common to specify the name of the C compiler,
C preprocessor flags, and compiler flags (add `env' to the
beginning of this command if you're running a csh-like shell)
CPPFLAGS=-DNDEBUG CC=gcc CFLAGS=-O3 ./configure
$ CC=gcc CPPFLAGS=-DNDEBUG CFLAGS="-Wall -O3" ./configure
* Build library targets by saying (if you supplied some other
make command through the MAKE variable in the previous step then
use that command instead):
make
* You can see a list of other configuration options by saying
If you're re-building the library after changing some files and
you're not using GNU make and gcc, then you should say `make
clean' from the top directory between each build attempt since
the development Makefiles don't have complete dependency
information yet.
$ ./configure --help
* Install the library, header files, and programs by saying:
make install
Step 3. Compile library, tests, and programs.
* Build library targets by saying
$ make
Note: If you supplied some other make command through the MAKE
environment variable in the previous step then use that command
instead.
Note: If you're re-building the library after changing some
files and you're not using GNU make and gcc, then you should say
`make clean' from the top directory between each build attempt
since the development Makefiles don't have complete dependency
information yet.
Note: When using GNU make you can add `-j -l6' to the make
command to compile in parallel on SMP machines.
Step 4. Run confidence tests.
* All confidence tests should be run by saying
$ make test
The command will fail if any test fails.
Note: some versions of make will report that `test is up to
date'. If this happens then run `make test' from within the test
directory.
Step 5. Install public files.
* Install the library, header files, and programs by saying:
$ make install
This step will fail unless you have permission to write to the
installation directories. Of course, you can use the header
files, library, and programs directly out of the source tree if
you like, skipping this step.
Step 6. Subscribe to mailing lists.
NON-UNIX SYSTEMS
----------------
* Subscribe to the mailing lists described in the README file.
To build/install HDF5 on non-Unix systems from the root of the
distribution directory:
* To be written later. Basically, there will be a separate
makefile (or equivalent) for each platform.

INSTALL_MAINT

@ -13,7 +13,7 @@ Information for HDF5 maintainers:
all -- build locally.
install -- install libs, headers, progs.
uninstall -- removed installed files.
uninstall -- remove installed files.
mostlyclean -- remove temp files (eg, *.o but not *.a).
clean -- mostlyclean plus libs and progs.
distclean -- all non-distributed files.
@ -47,8 +47,8 @@ Information for HDF5 maintainers:
fi
Site configuration files are for personal preferences and should
not be distributed.
not be distributed. Run bin/config.guess to see what we think your
CPU, VENDOR, and OS values are.
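config.guess prints a single CPU-VENDOR-OS triplet; splitting it with
shell parameter expansion shows the three fields (the triplet value
below is a hypothetical example, not real config.guess output):

```shell
# Hypothetical triplet of the kind bin/config.guess prints:
triplet="i586-pc-linux-gnu"
cpu=${triplet%%-*}             # text before the first dash
rest=${triplet#*-}
vendor=${rest%%-*}             # next dash-delimited field
os=${rest#*-}                  # everything after the second dash
echo "CPU=$cpu VENDOR=$vendor OS=$os"
```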
* If you use GNU make along with gcc the Makefile will contain targets
that automatically maintain a list of source interdependencies; you
@ -57,16 +57,16 @@ Information for HDF5 maintainers:
to force an update.
To force an update of all dependency information remove the
`.depend' file from each directory and type `make depend'. For
`.depend' file from each directory and type `make'. For
instance:
$ cd $HDF5_HOME
$ find . -name .depend -exec rm {} \;
$ make depend
$ make
* Object files stay in the directory and are added to the library as a
final step instead of placing the file in the library immediately
and removing it from the directory. The reason is two-fold:
and removing it from the directory. The reason is three-fold:
1. Most versions of make don't allow `$(LIB)($(SRC:.c=.o))'
which makes it necessary to have two lists of files, one
@ -75,3 +75,20 @@ Information for HDF5 maintainers:
2. Some versions of make/ar have problems with modification
times of archive members.
3. Adding object files immediately causes problems on SMP
machines where make is doing more than one thing at a
time.
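The difference can be sketched with ar(1) directly (the file names are
stand-ins, not real HDF5 objects):

```shell
# Sketch: objects remain in the build directory and the library is
# assembled in one final ar step, rather than member-by-member.
dir=$(mktemp -d) && cd "$dir"
printf 'x' > H5.o
printf 'x' > H5F.o                # stand-ins for compiled objects
ar rc libhdf5.a H5.o H5F.o        # single archive step at the end
ar t libhdf5.a                    # lists members H5.o, H5F.o
ls H5.o H5F.o                     # objects are still on disk afterwards
```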
* When using GNU make on an SMP you can cause it to compile more than
one thing at a time. At the top of the source tree invoke make as
$ make -j -l6
which causes make to fork as many children as possible as long as
the load average doesn't go above 6. In subdirectories one can say
$ make -j2
which limits the number of children to two (this doesn't work at the
top level because the `-j2' is not passed to recursive makes).

README

@ -1,23 +1,35 @@
This is the 10/15/97 prototype release of the HDF5 library.
This is the hdf-5.0.0a prototype release of the HDF5 library.
This release is not fully functional for the entire API defined in the HTML
documentation, see the RELEASE file for information specific to this release of
the library. The INSTALL file contains instructions on compiling and
installing the library.
This release is not fully functional for the entire API defined in the
documentation; see the RELEASE file in this directory for information
specific to this release of the library. The INSTALL file contains
instructions on compiling and installing the library.
Documentation for this release is in the html directory:
H5.apiv2.html - API interface description for HDF5 interface
H5.format.html - Format description for HDF5 files
H5.intro.html - Introduction to programming with the HDF5 interface
Documentation for this release is in the html directory.
Three mailing lists are currently set up for use with the HDF5 library:
hdf5 - For general discussion of the HDF5 library with other users.
hdf5dev - For discussion of the HDF5 library development with developers
and other interested parties.
hdf5announce - For announcements of HDF5 related developments, not a
discussion list.
To subscribe to a list, send mail to "<list>-request@ncsa.uiuc.edu" (e.g.,
hdf5-request@ncsa.uiuc.edu) with "subscribe <e-mail address>" in the _body_ of
the message. Messages to be sent to the list should be sent to
"<list>@ncsa.uiuc.edu".
Three mailing lists are currently set up for use with the HDF5
library.
hdf5 - For general discussion of the HDF5 library with
other users.
hdf5dev - For discussion of the HDF5 library development
with developers and other interested parties.
hdf5announce - For announcements of HDF5 related developments,
not a discussion list.
To subscribe to a list, send mail to "<list>-request@ncsa.uiuc.edu"
(e.g., hdf5-request@ncsa.uiuc.edu) with "subscribe <your e-mail
address>" in the _body_ of the message. Messages to be sent to
the list should be sent to "<list>@ncsa.uiuc.edu".
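In shell terms the subscription message amounts to the following (the
subscriber address is hypothetical, and nothing is actually mailed in
this sketch; a real subscription would pipe the body to mail(1)):

```shell
# Sketch: compose (but do not send) a subscription message for the
# hdf5 list, per the instructions above.
list=hdf5
addr=you@example.com           # hypothetical subscriber address
printf 'To: %s-request@ncsa.uiuc.edu\n\nsubscribe %s\n' "$list" "$addr"
```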
Bugs should be reported to:
Robb Matzke <matzke@llnl.gov> All types of bugs
Quincey Koziol <koziol@ncsa.uiuc.edu> All types of bugs
Albert Cheng <acheng@ncsa.uiuc.edu> Parallel bugs
Kim Yates <rkyates@llnl.gov> Parallel bugs
Paul Harten <pharten@ncsa.uiuc.edu> Bugs specific to ASCI Red
or to the hdf5dev mailing list.

RELEASE

@ -1,53 +1,117 @@
Release information for the 10/10/97 prototype release:
Release information for hdf-5.0.0a
----------------------------------
This release is intended primarily as a proof of concept and as a method
of familiarizing users with the intended functionality. A list of the
limitations of the prototype is as follows:
o - Multiple datasets may be created/read/written in an HDF5 file, but
access must be to an entire dataset, i.e. no slab/slice/subset code is
implemented yet. The datasets must also be homogeneous, orthogonal
datasets, similar to the datasets which can be created in the
HDF4 "SD" interface. Reducing these restrictions will be a major
effort of further development on the library.
o - Datasets can be located in a hierarchy of groups in the file; however,
user-level features for traversing the groups are not
implemented in this release. Each dataset must be accessed through
its full "pathname" in the file, e.g. "/foo/bar/dataset1",
"/foo/bar/dataset2", "/foo/data".
This release is a beta release for functionality necessary for the
ASCI vector bundle project in a serial environment. Some parallel
support is also available. Other features should be considered alpha
quality.
This release has been tested on UNIX platforms only; specifically: Linux,
FreeBSD, IRIX, Solaris & DEC UNIX. Machines which do not have IEEE
floating-point representation or non big- or little-endian memory
representations aren't supported in this release; most other machines
should work correctly.
The following functions are implemented. Errors are returned if an
attempt is made to use a feature which is not implemented, and
printing the error stack will show `not implemented yet'.
A list of the API functions currently supported follows. [This list is
short, mainly as a method of providing feedback before significant direction
changes must be made]
H5 (library) interface
o - H5version
H5F (file) interface
o - H5Fis_hdf5
o - H5Fcreate
o - H5Fopen
o - H5Fclose
H5M (meta) interface
o - H5Mcreate (for datatype, dataspace & dataset objects)
o - H5Mendaccess
H5P (dataspace) interface
o - H5Pis_simple
o - H5Pset_space
o - H5Pnelem
o - H5Pget_lrank
o - H5Pget_ldims
H5T (datatype) interface
o - H5Tis_atomic
o - H5Tset_type
o - H5Tget_type
o - H5Tsize
o - H5Tarch
H5D (dataset) interface
o - H5Mfind_name (to attach to a dataset)
o - H5Dset_info
o - H5Dget_info
o - H5Dread
o - H5Dwrite
Templates
H5Cclose - release template resources
H5Ccopy - copy a template
H5Ccreate - create a new template
H5Cget_chunk - get chunked storage parameters
H5Cget_class - get template class
H5Cget_istore_k - get chunked storage parameters
H5Cget_layout - get raw data layout class
H5Cget_sizes - get address and size sizes
H5Cget_sym_k - get symbol table storage parameters
H5Cget_userblock - get user-block size
H5Cget_version - get file version numbers
H5Cset_chunk - set chunked storage parameters
H5Cset_istore_k - set chunked storage parameters
H5Cset_layout - set raw data layout class
H5Cset_sizes - set address and size sizes
H5Cset_sym_k - set symbol table storage parameters
H5Cset_userblock - set user-block size
Datasets
H5Dclose - release dataset resources
H5Dcreate - create a new dataset
H5Dget_space - get data space
H5Dopen - open an existing dataset
H5Dread - read raw data
H5Dwrite - write raw data
Errors
H5Eclear - clear the error stack
H5Eclose - release an error stack
H5Ecreate - create a new error stack
H5Eprint - print an error stack
H5Epush - push an error onto a stack
Files
H5Fclose - close a file and release resources
H5Fcreate - create a new file
H5Fget_create_template - get file creation template
H5Fis_hdf5 - determine if a file is an hdf5 file
H5Fopen - open an existing file
Groups
H5Gclose - close a group and release resources
H5Gcreate - create a new group
H5Gopen - open an existing group
H5Gpop - pop a group from the cwg stack
H5Gpush - push a group onto the cwg stack
H5Gset - set the current working group (cwg)
Data spaces
H5Pclose - release data space resources
H5Pcreate_simple - create a new simple data space
H5Pget_dims - get data space size
H5Pget_hyperslab - get data space selection
H5Pget_ndims - get data space dimensionality
H5Pget_npoints - get number of selected points
H5Pis_simple - determine if data space is simple
H5Pset_hyperslab - select data points
H5Pset_space - reset data space dimensionality and size
Data types
H5Tclose - release data type resources
H5Tcopy - copy a data type
H5Tcreate - create a new data type
H5Tequal - compare two data types
H5Tfind - find a data type conversion function
H5Tget_class - get data type class
H5Tget_cset - get character set
H5Tget_ebias - get exponent bias
H5Tget_fields - get floating point fields
H5Tget_inpad - get inter-field padding
H5Tget_member_dims - get struct member dimensions
H5Tget_member_name - get struct member name
H5Tget_member_offset - get struct member byte offset
H5Tget_member_type - get struct member type
H5Tget_nmembers - get number of struct members
H5Tget_norm - get floating point normalization
H5Tget_offset - get bit offset within type
H5Tget_order - get byte order
H5Tget_pad - get padding type
H5Tget_precision - get precision in bits
H5Tget_sign - get integer sign type
H5Tget_size - get size in bytes
H5Tget_strpad - get string padding
H5Tinsert - insert struct member
H5Tlock - lock type to prevent changes
H5Tpack - pack struct members
H5Tregister_hard - register specific type conversion function
H5Tregister_soft - register general type conversion function
H5Tset_cset - set character set
H5Tset_ebias - set exponent bias
H5Tset_fields - set floating point fields
H5Tset_inpad - set inter-field padding
H5Tset_norm - set floating point normalization
H5Tset_offset - set bit offset within type
H5Tset_order - set byte order
H5Tset_pad - set padding type
H5Tset_precision - set precision in bits
H5Tset_sign - set integer sign type
H5Tset_size - set size in bytes
H5Tset_strpad - set string padding
H5Tunregister - remove a type conversion function
This release has been tested on UNIX platforms only; specifically:
Linux, FreeBSD, IRIX, Solaris & DEC UNIX.

bin/release

@ -1,5 +1,6 @@
#! /usr/local/bin/perl -w
require 5.003;
use Cwd;
# Builds a release. Arguments are zero or more of the words.
#
@ -60,7 +61,7 @@ sub setver ($;$$$) {
#
sub release (@) {
my @types = @_;
my ($ver, $status);
my ($ver, $status, $created_symlink);
local $_;
# Make sure the version number is correct.
@ -79,16 +80,32 @@ sub release (@) {
$status = system "cp Makefile.dist Makefile";
die "cannot install default Makefile" if $status >> 8;
# Make sure release directory exists and create a name.
# Make sure release directory exists
(mkdir $releases, 0777 or die "cannot create $releases")
unless -d $releases;
die "no manifest" unless -r "MANIFEST";
my $name = "$releases/hdf-$ver";
# Build the releases
# We build the release from above the root of the source tree so the
# hdf5 directory appears as part of the name in the tar file. We create
# a temporary symlink called something like `hdf-5.0.0a' that points to
# our current working directory.
$_ = cwd;
my ($parent,$root) = m%(.*)/(.*)$% or die "cannot split directory";
if ($root ne "hdf-$ver" && ! -e "../hdf-$ver") {
symlink $root, "../hdf-$ver" or die "cannot create link";
$created_symlink = 1;
}
my $name = "$root/$releases/hdf-$ver";
# Build the releases.
@types = ("gzip") unless @types;
@types = qw/tar gzip compress bzip2/ if 1==@types && "all" eq $types[0];
my $filelist = 'Makefile `cat MANIFEST`';
$_ = `cat MANIFEST`;
s/^\.\///mg;
@filelist = ("Makefile", split /\s*\n/, $_);
$filelist = join " ", map {"hdf-$ver/$_"} @filelist;
chdir ".." or die;
for (@types) {
print "Compressing with $_...\n";
@ -114,6 +131,11 @@ sub release (@) {
} continue {
print STDERR "$_ failed\n" if $status >> 8;
}
chdir $root or die;
# Remove the temporary symlink we created above.
unlink "../hdf-$ver" if $created_symlink;
# Update version info
print <<EOF;