[svn-r5657] Updated.

This commit is contained in:
Albert Cheng 2002-06-17 15:09:38 -05:00
parent 5e12a077ab
commit 91170d97dc
2 changed files with 31 additions and 4 deletions


@@ -26,11 +26,20 @@ FROM SASN100,
1) cd <hdf5>
2) CC=/usr/community/mpich/mpich-1.2.3/bin/mpicc ./configure --host=i386-intel-osf1
2) CC=/usr/community/mpich/mpich-1.2.3/bin/mpicc \
./configure --host=i386-intel-osf1 --with-zlib=/usr/community/hdf5/ZLIB
Skip the "--with-zlib=..." option if you do not wish to include the zlib
compression feature. Without the zlib compression feature, the library
will not be able to access zlib-compressed datasets.
You may safely ignore the WARNING message,
=========
configure: WARNING: If you wanted to set the --build type, don't use --host.
If a cross compiler is detected then cross compile mode will be used.
You may add the option "--host=i386-intel-osf1" to get rid of the WARNING.
=========
You may add the option "--build=i386-intel-osf1" to get rid of the WARNING.
(The previous bugs in src/Makefile and test/Makefile have been resolved.
You don't need to edit them any more.)
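Taken together, the updated instructions above amount to a command sequence like the following (a sketch only: the mpicc path and the zlib prefix are the site-specific values quoted in the text, and --build is added per the note about silencing the configure WARNING):

```shell
# Step 1: enter the extracted HDF5 source directory (called <hdf5> above)
cd hdf5

# Step 2: configure for cross compilation with the MPICH compiler wrapper;
# drop --with-zlib=... to build without zlib compression support
CC=/usr/community/mpich/mpich-1.2.3/bin/mpicc \
    ./configure --host=i386-intel-osf1 \
                --build=i386-intel-osf1 \
                --with-zlib=/usr/community/hdf5/ZLIB
```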
@@ -92,11 +101,19 @@ FROM SASN100,
1) cd <hdf5>
2) ./configure --host=i386-intel-osf1
2) ./configure --host=i386-intel-osf1 --with-zlib=/usr/community/hdf5/ZLIB
Skip the "--with-zlib=..." option if you do not wish to include the zlib
compression feature. Without the zlib compression feature, the library
will not be able to access zlib-compressed datasets.
You may safely ignore the WARNING message,
=========
configure: WARNING: If you wanted to set the --build type, don't use --host.
If a cross compiler is detected then cross compile mode will be used.
You may add the option "--host=i386-intel-osf1" to get rid of the WARNING.
=========
You may add the option "--build=i386-intel-osf1" to get rid of the WARNING.
(The previous bugs in src/Makefile and test/Makefile have been resolved.
You don't need to edit them any more.)


@@ -202,3 +202,13 @@ will print informational messages stating the failure but will not exit
with a non-zero code. Failure to support file sizes greater than 2GB is
not a fatal error for HDF5 because HDF5 can use other file drivers, such
as families of files, to bypass the file size limit.
By default, the parallel tests use /tmp/$LOGIN as the test directory.
This can be overridden by the environment variable $HDF5_PARAPREFIX.
For example, if the tests should use directory /PFS/user/me, do
HDF5_PARAPREFIX=/PFS/user/me
export HDF5_PARAPREFIX
make check
(In some batch job systems, you may need to hard-set HDF5_PARAPREFIX in
shell initialization files such as .profile, .cshrc, etc.)
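The HDF5_PARAPREFIX steps above can be sketched as a Bourne-shell fragment suitable for a .profile; the /PFS/user/me path is the example directory from the text, and the C-shell form in the comment is an assumption based on the .cshrc mention:

```shell
# Bourne shell (.profile): direct the parallel tests at /PFS/user/me
HDF5_PARAPREFIX=/PFS/user/me
export HDF5_PARAPREFIX
echo "parallel test directory: $HDF5_PARAPREFIX"

# C shell (.cshrc) equivalent would be (assumption, untested here):
#   setenv HDF5_PARAPREFIX /PFS/user/me
```

After setting the variable, `make check` picks it up from the environment.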