[svn-r4345] Purpose:

Improvement
Description:
    Both stdout and stderr were redirected to a single output file.  This
    works fine on traditional sequential Unix machines, but on some
    parallel systems (e.g., MPI jobs on the IBM SP), stderr is merged
    with stdout, though not necessarily in the expected order.  The
    interleaving is not deterministic in parallel jobs, so all the test
    output is present, but its ordering may not be as expected.
Solution:
    Redirect stderr to a separate file and append it to the stdout
    file after the test command has executed, then compare the result
    with the expected output.  This eliminates the assumption that
    stdout and stderr are merged in chronological order.

    The .ddl files are updated by moving all stderr text to the end of
    each file.
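    The redirection scheme described above can be sketched in POSIX shell
    (the function and file names here are illustrative stand-ins, not the
    actual test-script variables):

    ```shell
    #!/bin/sh
    # Stand-in for the real test command: writes to both streams.
    run_test() {
        echo "DATASET output line"          # goes to stdout
        echo "h5dump error: example" >&2    # goes to stderr
    }

    # Old scheme (interleaving non-deterministic on some parallel systems):
    #   run_test > actual.out 2>&1

    # New scheme: capture stderr separately, then append it after stdout,
    # so the combined file always has stderr text at the end.
    run_test > actual.out 2> actual.err
    cat actual.err >> actual.out

    cat actual.out
    ```

    The combined file can then be diffed against the expected .ddl file
    without depending on how the platform interleaves the two streams.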
Platforms tested:
    eirene.
This commit is contained in:
Albert Cheng 2001-08-14 12:28:14 -05:00
parent ba1e23c18d
commit c47f724187
4 changed files with 8 additions and 8 deletions


@ -7,6 +7,6 @@ ATTRIBUTE "/attr2" {
DATASPACE SIMPLE { ( 10 ) / ( 10 ) }
}
ATTRIBUTE "/attr" {
h5dump error: unable to open attribute "/"
}
}
}
h5dump error: unable to open attribute "/"


@ -3,7 +3,6 @@ Expected output for 'h5dump -t /#5992:0 -g /group2 tcompound.h5'
#############################
HDF5 "tcompound.h5" {
DATATYPE "/#5992:0"
h5dump error: unable to open datatype "/#5992:0"
GROUP "/group2" {
DATASET "dset5" {
@ -35,3 +34,4 @@ GROUP "/group2" {
}
}
}
h5dump error: unable to open datatype "/#5992:0"


@ -11,6 +11,6 @@ DATASET "/dset2" {
DATASPACE SIMPLE { ( 30, 20 ) / ( 30, 20 ) }
}
DATASET "dset3" {
h5dump error: unable to open dataset "dset3"
}
}
}
h5dump error: unable to open dataset "dset3"


@ -41,6 +41,6 @@ GROUP "/" {
}
}
GROUP "/y" {
h5dump error: unable to open group "/y"
}
}
}
h5dump error: unable to open group "/y"