
[svn-r5130] Purpose:

Bug Fix & Feature

Description:
    The selection offset was being ignored for optimized hyperslab selection
    I/O operations.

    Additionally, I found that the restrictions on the optimized selection
    I/O operations were stricter than necessary and relaxed them so that
    more hyperslab selections can use the optimized I/O routines.

Solution:
    Incorporate the selection offset into the selection location when performing
    optimized I/O operations.

    Allow optimized I/O on any single hyperslab selection and also allow
    hyperslab operations on chunked datasets.
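
    For illustration, a minimal sketch of the user-visible case this affects:
    a single hyperslab selection shifted by a selection offset (set with
    H5Soffset_simple) now goes through the optimized I/O path and the offset
    is honored. The file and dataset names below are hypothetical, and the
    calls follow the 1.4-era C API (two-argument H5Dopen, hssize_t start).

        #include "hdf5.h"

        int main(void)
        {
            hid_t    file, dset, fspace, mspace;
            hssize_t start[2]  = {0, 0};    /* hssize_t per the 1.4-era signature */
            hsize_t  count[2]  = {10, 10};  /* one 10x10 hyperslab block          */
            hssize_t offset[2] = {5, 5};    /* selection offset, previously       */
                                            /* ignored by the optimized I/O path  */
            hsize_t  mdims[2]  = {10, 10};
            int      buf[10][10];

            file   = H5Fopen("example.h5", H5F_ACC_RDONLY, H5P_DEFAULT); /* hypothetical */
            dset   = H5Dopen(file, "/dset");                             /* hypothetical */
            fspace = H5Dget_space(dset);

            /* Select a single hyperslab block, then shift it by the offset */
            H5Sselect_hyperslab(fspace, H5S_SELECT_SET, start, NULL, count, NULL);
            H5Soffset_simple(fspace, offset);

            /* Matching memory dataspace (defaults to an "all" selection) */
            mspace = H5Screate_simple(2, mdims, NULL);

            /* With this fix, the read starts at element (5,5) of the dataset */
            H5Dread(dset, H5T_NATIVE_INT, mspace, fspace, H5P_DEFAULT, buf);

            H5Sclose(mspace);
            H5Sclose(fspace);
            H5Dclose(dset);
            H5Fclose(file);
            return 0;
        }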

Platforms tested:
    FreeBSD 4.5 (sleipnir)
Quincey Koziol 2002-04-02 15:51:41 -05:00
parent c1e44699f0
commit d2232a345f
15 changed files with 737 additions and 656 deletions

@ -35,115 +35,118 @@ Bug Fixes since HDF5-1.4.2
Library
-------
* Fixed bug with contiguous hyperslabs not being detected, causing
slower I/O than necessary.
* Fixed bug where non-aligned hyperslab I/O on chunked datasets was
causing errors during I/O
* The RCSID string in H5public.h was causing the C++ compiling problem
because when it was included multiple times, C++ did not like multiple
definitions of the same static variable. All occurrences of the RCSID
definition have been removed since it has not been used consistently before.
* Fixed bug with non-zero userblock sizes causing raw data to not write
correctly.
* Fixed build on Linux systems with --enable-static-exec flag. It now
works correctly.
* IMPORTANT: Fixed file metadata corruption bug which could cause metadata
data loss in certain situations.
* The allocation by alignment (H5Pset_alignment) feature code somehow
got dropped in some 1.3.x version. Re-implemented it with a "new and
improved" algorithm. It keeps track of "wasted" file-fragment in
the free-list too.
* Removed limitation that the data transfer buffer size needed to be
set for datasets whose dimensions were too large for the 'all' selection
code to handle. Datasets with dimensions of any size should now be
handled correctly.
* Changed behavior of H5Tget_member_type to correctly emulate HDF5 v1.2.x
when --enable-hdf5v1_2 configure flag is enabled.
* Tweaked a few API functions to use 'size_t' instead of 'unsigned' or
'hsize_t', which may cause errors in some cases.
* Fixed a bug of H5pubconf.h causing repeated definitions if it is included
more than once. hdf5.h now includes H5public.h which includes
H5pubconf.h. Applications should #include hdf5.h which handles multiple
inclusion correctly.
* Fixed H5FDmpio.h to be C++ friendly by making the Parallel HDF5 APIs
external to C++.
* Fixed a bug in H5FD_mpio_flush() that might result in negative file seek
if both MPIO and Split-file drivers are used together.
* Added new parallel HDF5 tests in t_mpi. The new test checks whether the
filesystem or MPI-IO can really handle files greater than 2GB.
If it fails, it prints an informational message without failing the
test.
* Fixed a bug when reading chunked datasets where the edge of the dataset
would be incorrectly detected and generate an assertion failure.
* Fixed bug where selection offset was being ignored for certain hyperslab
selections when optimized I/O was being performed. QAK - 2002/04/02
* Added serial multi-gigabyte file size test. "test/big -h" shows
the help page. AKC - 2002/03/29
* Fixed bug where the variable-length string type didn't behave as a
string. SLU - 2002/03/28
* Fixed bug in H5Gget_objinfo() which was not setting the 'fileno'
of the H5G_stat_t struct. QAK - 2002/03/27
* Fixed data corruption bug in hyperslab routines when a contiguous
hyperslab that spans an entire dimension and is larger than the type
conversion buffer is read. QAK - 2002/03/26
* Fixed bug where non-zero fill-value was not being read correctly from
certain chunked datasets when using an "all" or contiguous hyperslab
selection. QAK - 2002/02/14
* Fixed bug where a preempted chunk in the chunk data could still be
used by an internal pointer and cause an assertion failure or core
dump. QAK - 2002/02/13
* Fixed bug where raw data re-allocated from the free-list would sometimes
overlap with the metadata accumulator and get corrupted. QAK - 2002/01/23
* Fixed bug where variable-length datatypes for attributes were not working
correctly.
* Retired the DPSS virtual file driver (--with-gridstorage configure
option).
* Corrected behavior of H5Tinsert to not allow compound datatype fields to
be inserted past the end of the datatype.
* Fixed the internal macros used to encode & decode file metadata, to avoid
an unaligned access warning on IA64 machines.
* Fixed an off-by-one error in H5Sselect_valid which would allow hyperslab
selections that overlapped the edge of the dataspace extent by one
element to be reported as valid.
* Fixed a bug in internal B-tree code where a B-tree was not being copied
correctly.
* Fixed a bug in the 'big' test where quota limits weren't being detected
properly if they caused close() to fail.
* Fixed a bug where 'or'ing a hyperslab with a 'none' selection would
fail. Now adds that hyperslab as the first hyperslab in the selection.
* Fixed a bug where appending a point selection to the current selection
would not actually append the point when there were no points defined
currently.
* Fixed a bug where reading or writing chunked data which needed datatype
conversion could result in data values getting corrupted.
* Fixed a bug where reading an entire dataset wasn't being handled
optimally when the dataset had unlimited dimensions. The dataset is now
read in a single low-level I/O operation instead of being broken into
separate pieces internally.
* Fixed a bug where reading or writing chunked data which needed datatype
conversion could result in data values getting corrupted.
* Fixed a bug where appending a point selection to the current selection
would not actually append the point when there were no points defined
currently.
* Fixed a bug where 'or'ing a hyperslab with a 'none' selection would
fail. Now adds that hyperslab as the first hyperslab in the selection.
* Fixed a bug in the 'big' test where quota limits weren't being detected
properly if they caused close() to fail.
* Fixed a bug in internal B-tree code where a B-tree was not being copied
correctly.
* Fixed an off-by-one error in H5Sselect_valid which would allow hyperslab
selections that overlapped the edge of the dataspace extent by one
element to be reported as valid.
* Fixed the internal macros used to encode & decode file metadata, to avoid
an unaligned access warning on IA64 machines.
* Corrected behavior of H5Tinsert to not allow compound datatype fields to
be inserted past the end of the datatype.
* Retired the DPSS virtual file driver (--with-gridstorage configure
option).
* Fixed bug where variable-length datatypes for attributes were not working
correctly.
* Fixed bug where raw data re-allocated from the free-list would sometimes
overlap with the metadata accumulator and get corrupted. QAK - 1/23/02
* Fixed bug where a preempted chunk in the chunk data could still be
used by an internal pointer and cause an assertion failure or core
dump. QAK - 2/13/02
* Fixed bug where non-zero fill-value was not being read correctly from
certain chunked datasets when using an "all" or contiguous hyperslab
selection. QAK - 2/14/02
* Fixed data corruption bug in hyperslab routines when a contiguous
hyperslab that spans an entire dimension and is larger than the type
conversion buffer is read. QAK - 3/26/02
* Fixed bug in H5Gget_objinfo() which was not setting the 'fileno'
of the H5G_stat_t struct. QAK - 3/27/02
* Fixed bug where the variable-length string type didn't behave as a string.
* Added serial multi-gigabyte file size test. "test/big -h" shows
the help page. AKC - 2002/03/29
* Fixed a bug when reading chunked datasets where the edge of the dataset
would be incorrectly detected and generate an assertion failure.
* Added new parallel HDF5 tests in t_mpi. The new test checks whether the
filesystem or MPI-IO can really handle files greater than 2GB.
If it fails, it prints an informational message without failing the
test.
* Fixed a bug in H5FD_mpio_flush() that might result in negative file seek
if both MPIO and Split-file drivers are used together.
* Fixed H5FDmpio.h to be C++ friendly by making the Parallel HDF5 APIs
external to C++.
* Fixed a bug of H5pubconf.h causing repeated definitions if it is included
more than once. hdf5.h now includes H5public.h which includes
H5pubconf.h. Applications should #include hdf5.h which handles multiple
inclusion correctly.
* Tweaked a few API functions to use 'size_t' instead of 'unsigned' or
'hsize_t', which may cause errors in some cases.
* Changed behavior of H5Tget_member_type to correctly emulate HDF5 v1.2.x
when --enable-hdf5v1_2 configure flag is enabled.
* Removed limitation that the data transfer buffer size needed to be
set for datasets whose dimensions were too large for the 'all' selection
code to handle. Datasets with dimensions of any size should now be
handled correctly.
* The allocation by alignment (H5Pset_alignment) feature code somehow
got dropped in some 1.3.x version. Re-implemented it with a "new and
improved" algorithm. It keeps track of "wasted" file-fragment in
the free-list too.
* IMPORTANT: Fixed file metadata corruption bug which could cause metadata
data loss in certain situations.
* Fixed build on Linux systems with --enable-static-exec flag. It now
works correctly.
* Fixed bug with non-zero userblock sizes causing raw data to not write
correctly.
* The RCSID string in H5public.h was causing the C++ compiling problem
because when it was included multiple times, C++ did not like multiple
definitions of the same static variable. All occurrences of the RCSID
definition have been removed since it has not been used consistently before.
* Fixed bug where non-aligned hyperslab I/O on chunked datasets was
causing errors during I/O
* Fixed bug with contiguous hyperslabs not being detected, causing
slower I/O than necessary.
Configuration
-------------
* Changed the default value of $NPROCS from 2 to 3 since 3 processes
have a much better chance of catching parallel errors than just 2.
* Basic port to Compaq (nee DEC) Alpha OSF 5.
* Added --enable-linux-lfs flag to allow more control over whether to enable
or disable large file support on Linux.
* Can use just --enable-threadsafe if the C compiler has built-in pthreads
support.
* Require HDF (a.k.a. hdf4) software built with a newer version of the
zlib library that includes the compress2() function. HDF version 4.1r3
and newer meets this requirement. compress2() uses a newer compression
algorithm that is also used by the HDF5 library. Also, 4.1r3 has an hdp
tool that can handle "loops" in Vgroups.
* Can use just --enable-threadsafe if the C compiler has built-in pthreads
support.
* Added --enable-linux-lfs flag to allow more control over whether to enable
or disable large file support on Linux.
* Basic port to Compaq (nee DEC) Alpha OSF 5.
* Changed the default value of $NPROCS from 2 to 3 since 3 processes
have a much better chance of catching parallel errors than just 2.
Tools
-----
* Fixed segfault when "-v" flag was used with the h5dumper.
* Fixed so that the "-i" flag works correctly with the h5dumper.
* Fixed limitation in h5dumper with object names which reached over 1024
characters in length. We can now handle arbitrarily large sizes for
object names. BW - 2/27/02
object names. BW - 2002/02/27
* Fixed so that the "-i" flag works correctly with the h5dumper.
* Fixed segfault when "-v" flag was used with the h5dumper.
Documentation
@ -153,9 +156,85 @@ Documentation
New Features
============
* A helper script called ``h5cc'', which helps compilation of HDF5
programs, is now distributed with HDF5. See the reference manual
for information on how to use this feature.
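  For illustration, a minimal program of the kind h5cc is meant to build;
  the file name and compile line are examples, not taken from this release
  note (e.g. save as h5_min.c and run "h5cc -o h5_min h5_min.c"):

      #include "hdf5.h"

      int main(void)
      {
          /* Create an HDF5 file and close it again; just enough to verify
           * that the compile/link flags supplied by h5cc work. */
          hid_t file = H5Fcreate("h5_min.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
          if (file < 0)
              return 1;
          H5Fclose(file);
          return 0;
      }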
* Improved performance of single hyperslab I/O when datatype conversion is
unnecessary. QAK - 2002/04/02
* Added new "H5Sget_select_type" API function to determine which type of
selection is defined for a dataspace ("all", "none", "hyperslab" or
"point"). QAK - 2002/02/7
* Added support to read/write portions of chunks directly, if they are
uncompressed and too large to cache. This should speed up I/O on chunked
datasets for a few more cases. QAK - 2002/01/31
* Parallel HDF5 is now supported on HP-UX 11.00 platforms.
* Added H5Rget_obj_type() API function, which performs the same functionality
as H5Rget_object_type(), but requires the reference type as a parameter
in order to correctly handle dataset region references. Moved
H5Rget_object_type() to be only compiled into the library when v1.4
compatibility is enabled.
* Changed internal error handling macros to reduce code size of library by
about 10-20%.
* Added a new file access property, file close degree, to control file
close behavior. It has four values, H5F_CLOSE_WEAK, H5F_CLOSE_SEMI,
H5F_CLOSE_STRONG, and H5F_CLOSE_DEFAULT. Two corresponding functions,
H5Pset_fclose_degree and H5Pget_fclose_degree, are also provided. Two
new functions, H5Fget_obj_count and H5Fget_obj_ids, are offered to assist
this new feature. For full details, please refer to the reference
manual under the description of H5Fcreate, H5Fopen, H5Fclose and the
functions mentioned above.
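  A brief sketch of the new property in use, assuming the functions named
  above with their present-day signatures (file name illustrative, error
  checks omitted):

      #include <stdio.h>
      #include "hdf5.h"

      int main(void)
      {
          /* Ask for "strong" close semantics: closing the file also closes
           * any objects still open inside it. */
          hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
          H5Pset_fclose_degree(fapl, H5F_CLOSE_STRONG);

          hid_t file = H5Fcreate("close_demo.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

          /* H5Fget_obj_count reports how many objects are open in the file */
          printf("open objects: %ld\n", (long)H5Fget_obj_count(file, H5F_OBJ_ALL));

          H5Fclose(file);   /* with H5F_CLOSE_STRONG, any remaining objects close too */
          H5Pclose(fapl);
          return 0;
      }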
* Removed H5P(get|set)_hyper_cache API function, since the property is no
longer used.
* Improved performance of non-contiguous hyperslabs (built up with
several hyperslab selection calls).
* Improved performance of single, contiguous hyperslabs when reading or
writing.
* As part of the transition to using generic properties everywhere, the
parameter of H5Pcreate changed from H5P_class_t to hid_t, and the
return type of H5Pget_class changed from H5P_class_t to hid_t.
Further changes are still necessary and will be documented here as they
are made.
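  In code the change looks roughly like this (a sketch; both the class passed
  to H5Pcreate and the value returned by H5Pget_class are now plain hid_t
  identifiers):

      #include "hdf5.h"

      int main(void)
      {
          hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);  /* takes an hid_t class id  */
          hid_t cls  = H5Pget_class(dxpl);           /* returns an hid_t as well */

          /* Class identifiers obtained this way are released with H5Pclose_class */
          H5Pclose_class(cls);
          H5Pclose(dxpl);
          return 0;
      }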
* Added a new test to verify the information provided by the configure
command.
* The H5Pset_fapl_split() accepts raw and meta file names similar to the
syntax of H5Pset_fapl_multi() in addition to what it used to accept.
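  For example, the long-standing extension form still works (a sketch; the
  extensions and base name below are illustrative):

      #include "hdf5.h"

      int main(void)
      {
          hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);

          /* Metadata goes to "<base>-m.h5", raw data to "<base>-r.h5",
           * each with default member file-access properties. */
          H5Pset_fapl_split(fapl, "-m.h5", H5P_DEFAULT, "-r.h5", H5P_DEFAULT);

          hid_t file = H5Fcreate("split_demo", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

          H5Fclose(file);
          H5Pclose(fapl);
          return 0;
      }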
* Added perform programs to test the HDF5 library performance. Programs
are installed in directory perform/.
* Added new checking in H5check_version() to verify the five HDF5 version
information macros (H5_VERS_MAJOR, H5_VERS_MINOR, H5_VERS_RELEASE,
H5_VERS_SUBRELEASE and H5_VERS_INFO) are consistent.
* Added a new public macro, H5_VERS_INFO, which is a string holding
the HDF5 library version information. This string is also compiled
into all HDF5 binary code which helps to identify the version
information of the binary code. One may use the Unix strings
command on the binary file and looks for the pattern "HDF5 library
version".
* Added a parallel HDF5 example examples/ph5example.c to illustrate
the basic way of using parallel HDF5.
* Added two simple parallel performance tests as mpi-perf.c (MPI
performance) and perf.c (PHDF5 performance) in testpar.
* Improved regular hyperslab I/O by about a factor of 6 or so.
* Modified the Pablo build procedure to permit building of the instrumented
library to link either with the Trace libraries as before or with the
Pablo Performance Capture Facility.
* Verified correct operation of library on Solaris 2.8 in both 64-bit and
32-bit compilation modes. See INSTALL document for instructions on
compiling the distribution with 64-bit support.
* Parallel HDF5 now runs on the HP V2500 and HP N4000 machines.
* H5 <-> GIF convertor has been added. This is available under
tools/gifconv. The convertor supports the ability to create animated
gifs as well.
* Added a global string variable H5_lib_vers_info_g which holds the
HDF5 library version information. This can be used to identify
an hdf5 library or hdf5 application binary.
Also added a verification of the consistency between H5_lib_vers_info_g
and other version information in the source code.
* File sizes greater than 2GB are now supported on Linux systems with
version 2.4.x or higher kernels.
* F90 APIs are available on HPUX 11.00 and IBM SP platforms.
* F90 static library is available on Windows platforms. See
INSTALL_Windows.txt for details.
* F90 API:
- Added an additional parameter "dims" to the h5dread/h5dwrite and
h5aread/h5awrite subroutines. This parameter is a 1D array of size
7 and contains the sizes of the data buffer dimensions.
* C++ API:
- Added two new member functions: Exception::getFuncName() and
Exception::getCFuncName() to provide the name of the member
@ -165,83 +244,9 @@ New Features
implementation. The new operator= functions invoke H5Tcopy,
H5Scopy, and H5Pcopy to make a copy of a datatype, dataspace,
and property list, respectively.
* F90 API:
- Added an additional parameter "dims" to the h5dread/h5dwrite and
h5aread/h5awrite subroutines. This parameter is a 1D array of size
7 and contains the sizes of the data buffer dimensions.
* F90 static library is available on Windows platforms. See
INSTALL_Windows.txt for details.
* F90 APIs are available on HPUX 11.00 and IBM SP platforms.
* File sizes greater than 2GB are now supported on Linux systems with
version 2.4.x or higher kernels.
* Added a global string variable H5_lib_vers_info_g which holds the
HDF5 library version information. This can be used to identify
an hdf5 library or hdf5 application binary.
Also added a verification of the consistency between H5_lib_vers_info_g
and other version information in the source code.
* H5 <-> GIF convertor has been added. This is available under
tools/gifconv. The convertor supports the ability to create animated
gifs as well.
* Parallel HDF5 now runs on the HP V2500 and HP N4000 machines.
* Verified correct operation of library on Solaris 2.8 in both 64-bit and
32-bit compilation modes. See INSTALL document for instructions on
compiling the distribution with 64-bit support.
* Modified the Pablo build procedure to permit building of the instrumented
library to link either with the Trace libraries as before or with the
Pablo Performance Capture Facility.
* Improved regular hyperslab I/O by about a factor of 6 or so.
* Added two simple parallel performance tests as mpi-perf.c (MPI
performance) and perf.c (PHDF5 performance) in testpar.
* Added a parallel HDF5 example examples/ph5example.c to illustrate
the basic way of using parallel HDF5.
* Added a new public macro, H5_VERS_INFO, which is a string holding
the HDF5 library version information. This string is also compiled
into all HDF5 binary code which helps to identify the version
information of the binary code. One may use the Unix strings
command on the binary file and look for the pattern "HDF5 library
version".
* Added new checking in H5check_version() to verify the five HDF5 version
information macros (H5_VERS_MAJOR, H5_VERS_MINOR, H5_VERS_RELEASE,
H5_VERS_SUBRELEASE and H5_VERS_INFO) are consistent.
* Added perform programs to test the HDF5 library performance. Programs
are installed in directory perform/.
* The H5Pset_fapl_split() accepts raw and meta file names similar to the
syntax of H5Pset_fapl_multi() in addition to what it used to accept.
* Added a new test to verify the information provided by the configure
command.
* As part of the transition to using generic properties everywhere, the
parameter of H5Pcreate changed from H5P_class_t to hid_t, and the
return type of H5Pget_class changed from H5P_class_t to hid_t.
Further changes are still necessary and will be documented here as they
are made.
* Improved performance of single, contiguous hyperslabs when reading or
writing.
* Improved performance of non-contiguous hyperslabs (built up with
several hyperslab selection calls).
* Removed H5P(get|set)_hyper_cache API function, since the property is no
longer used.
* Added a new file access property, file close degree, to control file
close behavior. It has four values, H5F_CLOSE_WEAK, H5F_CLOSE_SEMI,
H5F_CLOSE_STRONG, and H5F_CLOSE_DEFAULT. Two corresponding functions,
H5Pset_fclose_degree and H5Pget_fclose_degree, are also provided. Two
new functions, H5Fget_obj_count and H5Fget_obj_ids, are offered to assist
this new feature. For full details, please refer to the reference
manual under the description of H5Fcreate, H5Fopen, H5Fclose and the
functions mentioned above.
* Changed internal error handling macros to reduce code size of library by
about 10-20%.
* Added H5Rget_obj_type() API function, which performs the same functionality
as H5Rget_object_type(), but requires the reference type as a parameter
in order to correctly handle dataset region references. Moved
H5Rget_object_type() to be only compiled into the library when v1.4
compatibility is enabled.
* Parallel HDF5 is now supported on HP-UX 11.00 platforms.
* Added support to read/write portions of chunks directly, if they are
uncompressed and too large to cache. This should speed up I/O on chunked
datasets for a few more cases. -QAK, 1/31/02
* Added new "H5Sget_select_type" API function to determine which type of
selection is defined for a dataspace ("all", "none", "hyperslab" or
"point"). -QAK, 2/7/02
* A helper script called ``h5cc'', which helps compilation of HDF5
programs, is now distributed with HDF5. See the reference manual
for information on how to use this feature.
Platforms Tested

@ -352,29 +352,20 @@ H5F_arr_read(H5F_t *f, hid_t dxpl_id, const struct H5O_layout_t *layout,
case H5D_CHUNKED:
/*
* This method is unable to access external raw data files or to copy
* into a proper hyperslab.
* This method is unable to access external raw data files
*/
if (efl && efl->nused>0) {
HRETURN_ERROR(H5E_IO, H5E_UNSUPPORTED, FAIL,
"chunking and external files are mutually exclusive");
}
for (u=0; u<layout->ndims; u++) {
if (0!=mem_offset[u] || hslab_size[u]!=mem_size[u]) {
HRETURN_ERROR(H5E_IO, H5E_UNSUPPORTED, FAIL,
"unable to copy into a proper hyperslab");
}
}
if (H5F_istore_read(f, dxpl_id, layout, pline, fill, file_offset,
hslab_size, buf)<0) {
if (efl && efl->nused>0)
HRETURN_ERROR(H5E_IO, H5E_UNSUPPORTED, FAIL, "chunking and external files are mutually exclusive");
/* Go get the data from the chunks */
if (H5F_istore_read(f, dxpl_id, layout, pline, fill, mem_size,
mem_offset, file_offset, hslab_size, buf)<0)
HRETURN_ERROR(H5E_IO, H5E_READERROR, FAIL, "chunked read failed");
}
break;
default:
assert("not implemented yet" && 0);
HRETURN_ERROR(H5E_IO, H5E_UNSUPPORTED, FAIL,
"unsupported storage layout");
HRETURN_ERROR(H5E_IO, H5E_UNSUPPORTED, FAIL, "unsupported storage layout");
}
FUNC_LEAVE(SUCCEED);
@ -628,30 +619,20 @@ H5F_arr_write(H5F_t *f, hid_t dxpl_id, const struct H5O_layout_t *layout,
case H5D_CHUNKED:
/*
* This method is unable to access external raw data files or to copy
* from a proper hyperslab.
* This method is unable to access external raw data files
*/
if (efl && efl->nused>0) {
HRETURN_ERROR(H5E_IO, H5E_UNSUPPORTED, FAIL,
"chunking and external files are mutually exclusive");
}
for (u=0; u<layout->ndims; u++) {
if (0!=mem_offset[u] || hslab_size[u]!=mem_size[u]) {
HRETURN_ERROR(H5E_IO, H5E_UNSUPPORTED, FAIL,
"unable to copy from a proper hyperslab");
}
}
if (H5F_istore_write(f, dxpl_id, layout, pline, fill, file_offset,
hslab_size, buf)<0) {
HRETURN_ERROR(H5E_IO, H5E_WRITEERROR, FAIL,
"chunked write failed");
}
if (efl && efl->nused>0)
HRETURN_ERROR(H5E_IO, H5E_UNSUPPORTED, FAIL, "chunking and external files are mutually exclusive");
/* Write the data to the chunks */
if (H5F_istore_write(f, dxpl_id, layout, pline, fill, mem_size,
mem_offset, file_offset, hslab_size, buf)<0)
HRETURN_ERROR(H5E_IO, H5E_WRITEERROR, FAIL, "chunked write failed");
break;
default:
assert("not implemented yet" && 0);
HRETURN_ERROR(H5E_IO, H5E_UNSUPPORTED, FAIL,
"unsupported storage layout");
HRETURN_ERROR(H5E_IO, H5E_UNSUPPORTED, FAIL, "unsupported storage layout");
}
FUNC_LEAVE (SUCCEED);

@ -1695,15 +1695,17 @@ H5F_istore_unlock(H5F_t *f, hid_t dxpl_id, const H5O_layout_t *layout,
* Robb Matzke, 1999-08-02
* The data transfer property list is passed as an object ID
* since that's how the virtual file layer wants it.
*
* Quincey Koziol, 2002-04-02
* Enable hyperslab I/O into memory buffer
*-------------------------------------------------------------------------
*/
herr_t
H5F_istore_read(H5F_t *f, hid_t dxpl_id, const H5O_layout_t *layout,
const H5O_pline_t *pline, const H5O_fill_t *fill,
const hsize_t size_m[], const hssize_t offset_m[],
const hssize_t offset_f[], const hsize_t size[], void *buf)
{
hssize_t offset_m[H5O_LAYOUT_NDIMS];
hsize_t size_m[H5O_LAYOUT_NDIMS];
hsize_t idx_cur[H5O_LAYOUT_NDIMS];
hsize_t idx_min[H5O_LAYOUT_NDIMS];
hsize_t idx_max[H5O_LAYOUT_NDIMS];
@ -1726,19 +1728,15 @@ H5F_istore_read(H5F_t *f, hid_t dxpl_id, const H5O_layout_t *layout,
assert(layout && H5D_CHUNKED==layout->type);
assert(layout->ndims>0 && layout->ndims<=H5O_LAYOUT_NDIMS);
assert(H5F_addr_defined(layout->addr));
assert(size_m);
assert(offset_m);
assert(offset_f);
assert(size);
assert(buf);
/*
* For now, a hyperslab of the file must be read into an array in
* memory.We do not yet support reading into a hyperslab of memory.
*/
for (u=0, chunk_size=1; u<layout->ndims; u++) {
offset_m[u] = 0;
size_m[u] = size[u];
/* Compute chunk size */
for (u=0, chunk_size=1; u<layout->ndims; u++)
chunk_size *= layout->dim[u];
} /* end for */
#ifndef NDEBUG
for (u=0; u<layout->ndims; u++) {
@ -1874,16 +1872,18 @@ H5F_istore_read(H5F_t *f, hid_t dxpl_id, const H5O_layout_t *layout,
* Robb Matzke, 1999-08-02
* The data transfer property list is passed as an object ID
* since that's how the virtual file layer wants it.
*
* Quincey Koziol, 2002-04-02
* Enable hyperslab I/O into memory buffer
*-------------------------------------------------------------------------
*/
herr_t
H5F_istore_write(H5F_t *f, hid_t dxpl_id, const H5O_layout_t *layout,
const H5O_pline_t *pline, const H5O_fill_t *fill,
const hsize_t size_m[], const hssize_t offset_m[],
const hssize_t offset_f[], const hsize_t size[],
const void *buf)
{
hssize_t offset_m[H5O_LAYOUT_NDIMS];
hsize_t size_m[H5O_LAYOUT_NDIMS];
int i, carry;
unsigned u;
hsize_t idx_cur[H5O_LAYOUT_NDIMS];
@ -1905,19 +1905,15 @@ H5F_istore_write(H5F_t *f, hid_t dxpl_id, const H5O_layout_t *layout,
assert(layout && H5D_CHUNKED==layout->type);
assert(layout->ndims>0 && layout->ndims<=H5O_LAYOUT_NDIMS);
assert(H5F_addr_defined(layout->addr));
assert(size_m);
assert(offset_m);
assert(offset_f);
assert(size);
assert(buf);
/*
* For now the source must not be a hyperslab. It must be an entire
* memory buffer.
*/
for (u=0, chunk_size=1; u<layout->ndims; u++) {
offset_m[u] = 0;
size_m[u] = size[u];
/* Compute chunk size */
for (u=0, chunk_size=1; u<layout->ndims; u++)
chunk_size *= layout->dim[u];
} /* end for */
#ifndef NDEBUG
for (u=0; u<layout->ndims; u++) {

@ -186,12 +186,14 @@ __DLL__ herr_t H5F_istore_read(H5F_t *f, hid_t dxpl_id,
const struct H5O_layout_t *layout,
const struct H5O_pline_t *pline,
const struct H5O_fill_t *fill,
const hssize_t offset[], const hsize_t size[],
const hsize_t size_m[], const hssize_t offset_m[],
const hssize_t offset_f[], const hsize_t size[],
void *buf/*out*/);
__DLL__ herr_t H5F_istore_write(H5F_t *f, hid_t dxpl_id,
const struct H5O_layout_t *layout,
const struct H5O_pline_t *pline,
const struct H5O_fill_t *fill,
const hsize_t size_m[], const hssize_t offset_m[],
const hssize_t offset[], const hsize_t size[],
const void *buf);
__DLL__ herr_t H5F_istore_allocate (H5F_t *f, hid_t dxpl_id,

@ -152,6 +152,7 @@ H5F_seq_readv(H5F_t *f, hid_t dxpl_id, const struct H5O_layout_t *layout,
hsize_t file_offset; /* Offset in dataset */
hsize_t seq_len; /* Number of bytes to read */
hsize_t dset_dims[H5O_LAYOUT_NDIMS]; /* dataspace dimensions */
hssize_t mem_offset[H5O_LAYOUT_NDIMS]; /* offset of hyperslab in memory buffer */
hssize_t coords[H5O_LAYOUT_NDIMS]; /* offset of hyperslab in dataspace */
hsize_t hslab_size[H5O_LAYOUT_NDIMS]; /* hyperslab size in dataspace*/
hsize_t down_size[H5O_LAYOUT_NDIMS]; /* Cumulative hyperslab sizes (in elements) */
@ -283,10 +284,13 @@ H5F_seq_readv(H5F_t *f, hid_t dxpl_id, const struct H5O_layout_t *layout,
HRETURN_ERROR(H5E_IO, H5E_UNSUPPORTED, FAIL, "unable to retrieve dataspace dimensions");
/* Build the array of cumulative hyperslab sizes */
/* (And set the memory offset to zero) */
for(acc=1, i=(ndims-1); i>=0; i--) {
mem_offset[i]=0;
down_size[i]=acc;
acc*=dset_dims[i];
} /* end for */
mem_offset[ndims]=0;
/* Brute-force, stupid way to implement the vectors, but too complex to do other ways... */
for(v=0; v<nseq; v++) {
@ -336,8 +340,9 @@ H5F_seq_readv(H5F_t *f, hid_t dxpl_id, const struct H5O_layout_t *layout,
hslab_size[ndims]=elmt_size; /* basic hyperslab size is the element */
/* Read in the partial hyperslab */
if (H5F_istore_read(f, dxpl_id, layout, pline, fill, coords,
hslab_size, buf)<0) {
if (H5F_istore_read(f, dxpl_id, layout, pline, fill,
hslab_size, mem_offset, coords, hslab_size,
buf)<0) {
HRETURN_ERROR(H5E_IO, H5E_READERROR, FAIL, "chunked read failed");
}
@ -396,8 +401,8 @@ H5F_seq_readv(H5F_t *f, hid_t dxpl_id, const struct H5O_layout_t *layout,
hslab_size[ndims]=elmt_size; /* basic hyperslab size is the element */
/* Read the full hyperslab in */
if (H5F_istore_read(f, dxpl_id, layout, pline, fill, coords,
hslab_size, buf)<0) {
if (H5F_istore_read(f, dxpl_id, layout, pline, fill,
hslab_size, mem_offset, coords, hslab_size, buf)<0) {
HRETURN_ERROR(H5E_IO, H5E_READERROR, FAIL, "chunked read failed");
}
@ -443,8 +448,9 @@ H5F_seq_readv(H5F_t *f, hid_t dxpl_id, const struct H5O_layout_t *layout,
hslab_size[ndims]=elmt_size; /* basic hyperslab size is the element */
/* Read in the partial hyperslab */
if (H5F_istore_read(f, dxpl_id, layout, pline, fill, coords,
hslab_size, buf)<0) {
if (H5F_istore_read(f, dxpl_id, layout, pline,
fill, hslab_size, mem_offset, coords,
hslab_size, buf)<0) {
HRETURN_ERROR(H5E_IO, H5E_READERROR, FAIL, "chunked read failed");
}
@ -478,8 +484,9 @@ H5F_seq_readv(H5F_t *f, hid_t dxpl_id, const struct H5O_layout_t *layout,
hslab_size[ndims]=elmt_size; /* basic hyperslab size is the element */
/* Read in the partial hyperslab */
if (H5F_istore_read(f, dxpl_id, layout, pline, fill, coords,
hslab_size, buf)<0) {
if (H5F_istore_read(f, dxpl_id, layout, pline, fill,
hslab_size, mem_offset, coords, hslab_size,
buf)<0) {
HRETURN_ERROR(H5E_IO, H5E_READERROR, FAIL, "chunked read failed");
}
@ -538,6 +545,7 @@ H5F_seq_writev(H5F_t *f, hid_t dxpl_id, const struct H5O_layout_t *layout,
hsize_t file_offset; /* Offset in dataset */
hsize_t seq_len; /* Number of bytes to read */
hsize_t dset_dims[H5O_LAYOUT_NDIMS]; /* dataspace dimensions */
hssize_t mem_offset[H5O_LAYOUT_NDIMS]; /* offset of hyperslab in memory buffer */
hssize_t coords[H5O_LAYOUT_NDIMS]; /* offset of hyperslab in dataspace */
hsize_t hslab_size[H5O_LAYOUT_NDIMS]; /* hyperslab size in dataspace*/
hsize_t down_size[H5O_LAYOUT_NDIMS]; /* Cumulative hyperslab sizes (in elements) */
@ -671,10 +679,13 @@ H5F_seq_writev(H5F_t *f, hid_t dxpl_id, const struct H5O_layout_t *layout,
HRETURN_ERROR(H5E_IO, H5E_UNSUPPORTED, FAIL, "unable to retrieve dataspace dimensions");
/* Build the array of cumulative hyperslab sizes */
/* (And set the memory offset to zero) */
for(acc=1, i=(ndims-1); i>=0; i--) {
mem_offset[i]=0;
down_size[i]=acc;
acc*=dset_dims[i];
} /* end for */
mem_offset[ndims]=0;
/* Brute-force, stupid way to implement the vectors, but too complex to do other ways... */
for(v=0; v<nseq; v++) {
@ -724,8 +735,9 @@ H5F_seq_writev(H5F_t *f, hid_t dxpl_id, const struct H5O_layout_t *layout,
hslab_size[ndims]=elmt_size; /* basic hyperslab size is the element */
/* Write out the partial hyperslab */
if (H5F_istore_write(f, dxpl_id, layout, pline, fill, coords,
hslab_size, buf)<0) {
if (H5F_istore_write(f, dxpl_id, layout, pline, fill,
hslab_size, mem_offset,coords, hslab_size,
buf)<0) {
HRETURN_ERROR(H5E_IO, H5E_WRITEERROR, FAIL, "chunked write failed");
}
@ -784,8 +796,8 @@ H5F_seq_writev(H5F_t *f, hid_t dxpl_id, const struct H5O_layout_t *layout,
hslab_size[ndims]=elmt_size; /* basic hyperslab size is the element */
/* Write the full hyperslab in */
if (H5F_istore_write(f, dxpl_id, layout, pline, fill, coords,
hslab_size, buf)<0) {
if (H5F_istore_write(f, dxpl_id, layout, pline, fill,
hslab_size, mem_offset, coords, hslab_size, buf)<0) {
HRETURN_ERROR(H5E_IO, H5E_WRITEERROR, FAIL, "chunked write failed");
}
@ -831,8 +843,9 @@ H5F_seq_writev(H5F_t *f, hid_t dxpl_id, const struct H5O_layout_t *layout,
hslab_size[ndims]=elmt_size; /* basic hyperslab size is the element */
/* Write out the partial hyperslab */
if (H5F_istore_write(f, dxpl_id, layout, pline, fill, coords,
hslab_size, buf)<0) {
if (H5F_istore_write(f, dxpl_id, layout, pline,
fill, hslab_size, mem_offset, coords,
hslab_size, buf)<0) {
HRETURN_ERROR(H5E_IO, H5E_WRITEERROR, FAIL, "chunked write failed");
}
@ -866,8 +879,9 @@ H5F_seq_writev(H5F_t *f, hid_t dxpl_id, const struct H5O_layout_t *layout,
hslab_size[ndims]=elmt_size; /* basic hyperslab size is the element */
/* Write out the final partial hyperslab */
if (H5F_istore_write(f, dxpl_id, layout, pline, fill, coords,
hslab_size, buf)<0) {
if (H5F_istore_write(f, dxpl_id, layout, pline, fill,
hslab_size, mem_offset, coords, hslab_size,
buf)<0) {
HRETURN_ERROR(H5E_IO, H5E_WRITEERROR, FAIL, "chunked write failed");
}

@ -1525,8 +1525,8 @@ H5S_find (const H5S_t *mem_space, const H5S_t *file_space)
/*
* Initialize direct read/write functions
*/
c1=H5S_select_contiguous(file_space);
c2=H5S_select_contiguous(mem_space);
c1=H5S_select_single(file_space);
c2=H5S_select_single(mem_space);
if(c1==FAIL || c2==FAIL)
HRETURN_ERROR(H5E_DATASPACE, H5E_BADRANGE, NULL, "invalid check for contiguous dataspace ");
@ -1563,8 +1563,8 @@ H5S_find (const H5S_t *mem_space, const H5S_t *file_space)
/*
* Initialize direct read/write functions
*/
c1=H5S_select_contiguous(file_space);
c2=H5S_select_contiguous(mem_space);
c1=H5S_select_single(file_space);
c2=H5S_select_single(mem_space);
if(c1==FAIL || c2==FAIL)
HRETURN_ERROR(H5E_DATASPACE, H5E_BADRANGE, NULL, "invalid check for contiguous dataspace ");

@ -400,18 +400,13 @@ H5S_all_read(H5F_t *f, const H5O_layout_t *layout, const H5O_pline_t *pline,
{
H5S_hyper_span_t *file_span=NULL,*mem_span=NULL; /* Hyperslab span node */
char *buf=(char*)_buf; /* Get pointer to buffer */
hsize_t mem_size,file_size;
hssize_t file_off,mem_off;
hssize_t count; /* Regular hyperslab count */
hsize_t size[H5O_LAYOUT_NDIMS];
hssize_t file_offset[H5O_LAYOUT_NDIMS];
hssize_t mem_offset[H5O_LAYOUT_NDIMS];
unsigned u;
unsigned small_contiguous=0, /* Flags for indicating contiguous hyperslabs */
large_contiguous=0;
int i;
size_t down_size[H5O_LAYOUT_NDIMS];
hsize_t acc;
hsize_t mem_elmts,file_elmts; /* Number of elements in each dimension of selection */
hssize_t file_off,mem_off; /* Offset (in elements) of selection */
hsize_t mem_size[H5O_LAYOUT_NDIMS]; /* Size of memory buffer */
hsize_t size[H5O_LAYOUT_NDIMS]; /* Size of selection */
hssize_t file_offset[H5O_LAYOUT_NDIMS]; /* Offset of selection in file */
hssize_t mem_offset[H5O_LAYOUT_NDIMS]; /* Offset of selection in memory */
unsigned u; /* Index variable */
herr_t ret_value=SUCCEED;
FUNC_ENTER(H5S_all_read, FAIL);
@ -428,196 +423,87 @@ printf("%s: check 1.0\n",FUNC);
if (mem_space->extent.u.simple.rank!=file_space->extent.u.simple.rank)
HGOTO_DONE(SUCCEED);
/* Check for a single hyperslab block defined in memory dataspace */
if (mem_space->select.type==H5S_SEL_HYPERSLABS) {
/* Check for a "regular" hyperslab selection */
if(mem_space->select.sel_info.hslab.diminfo != NULL) {
/* Check each dimension */
for(count=1,u=0; u<mem_space->extent.u.simple.rank; u++)
count*=mem_space->select.sel_info.hslab.diminfo[u].count;
/* If the regular hyperslab definition creates more than one hyperslab, fall through */
if(count>1)
HGOTO_DONE(SUCCEED);
} /* end if */
else {
/* Get the pointer to the hyperslab spans to check */
mem_span=mem_space->select.sel_info.hslab.span_lst->head;
/* Spin through the spans, checking for more than one span in each dimension */
while(mem_span!=NULL) {
/* If there are more than one span in the dimension, we can't use this routine */
if(mem_span->next!=NULL)
HGOTO_DONE(SUCCEED);
/* Advance to the next span, if it's available */
if(mem_span->down==NULL)
break;
else
mem_span=mem_span->down->head;
} /* end while */
/* Get the pointer to the hyperslab spans to use */
mem_span=mem_space->select.sel_info.hslab.span_lst->head;
} /* end else */
} /* end if */
else
if(mem_space->select.type!=H5S_SEL_ALL)
HGOTO_DONE(SUCCEED);
/* Check for a single hyperslab block defined in file dataspace */
if (file_space->select.type==H5S_SEL_HYPERSLABS) {
/* Check for a "regular" hyperslab selection */
if(file_space->select.sel_info.hslab.diminfo != NULL) {
/* Check each dimension */
for(count=1,u=0; u<file_space->extent.u.simple.rank; u++)
count*=file_space->select.sel_info.hslab.diminfo[u].count;
/* If the regular hyperslab definition creates more than one hyperslab, fall through */
if(count>1)
HGOTO_DONE(SUCCEED);
} /* end if */
else {
/* Get the pointer to the hyperslab spans to check */
file_span=file_space->select.sel_info.hslab.span_lst->head;
/* Spin through the spans, checking for more than one span in each dimension */
while(file_span!=NULL) {
/* If there are more than one span in the dimension, we can't use this routine */
if(file_span->next!=NULL)
HGOTO_DONE(SUCCEED);
/* Advance to the next span, if it's available */
if(file_span->down==NULL)
break;
else
file_span=file_span->down->head;
} /* end while */
/* Get the pointer to the hyperslab spans to use */
file_span=file_space->select.sel_info.hslab.span_lst->head;
} /* end else */
} /* end if */
else
if(file_space->select.type!=H5S_SEL_ALL)
HGOTO_DONE(SUCCEED);
/* Get information about memory and file */
for (u=0; u<mem_space->extent.u.simple.rank; u++) {
if(mem_space->select.type==H5S_SEL_HYPERSLABS) {
/* Check for a "regular" hyperslab selection */
if(mem_space->select.sel_info.hslab.diminfo != NULL) {
mem_size=mem_space->select.sel_info.hslab.diminfo[u].block;
mem_off=mem_space->select.sel_info.hslab.diminfo[u].start;
} /* end if */
else {
mem_size=(mem_span->high-mem_span->low)+1;
mem_off=mem_span->low;
mem_span=mem_span->down->head;
} /* end else */
} /* end if */
else {
mem_size=mem_space->extent.u.simple.size[u];
mem_off=0;
} /* end else */
switch(mem_space->select.type) {
case H5S_SEL_HYPERSLABS:
/* Check for a "regular" hyperslab selection */
if(mem_space->select.sel_info.hslab.diminfo != NULL) {
mem_elmts=mem_space->select.sel_info.hslab.diminfo[u].block;
mem_off=mem_space->select.sel_info.hslab.diminfo[u].start;
} /* end if */
else {
mem_elmts=(mem_span->high-mem_span->low)+1;
mem_off=mem_span->low;
mem_span=mem_span->down->head;
} /* end else */
mem_off+=mem_space->select.offset[u];
break;
if(file_space->select.type==H5S_SEL_HYPERSLABS) {
/* Check for a "regular" hyperslab selection */
if(file_space->select.sel_info.hslab.diminfo != NULL) {
file_size=file_space->select.sel_info.hslab.diminfo[u].block;
file_off=file_space->select.sel_info.hslab.diminfo[u].start;
} /* end if */
else {
file_size=(file_span->high-file_span->low)+1;
file_off=file_span->low;
file_span=file_span->down->head;
} /* end else */
} /* end if */
else {
file_size=file_space->extent.u.simple.size[u];
file_off=0;
} /* end else */
case H5S_SEL_ALL:
mem_elmts=mem_space->extent.u.simple.size[u];
mem_off=0;
break;
if (mem_size!=file_size)
case H5S_SEL_POINTS:
mem_elmts=1;
mem_off=mem_space->select.sel_info.pnt_lst->head->pnt[u]
+mem_space->select.offset[u];
break;
default:
assert(0 && "Invalid selection type!");
} /* end switch */
switch(file_space->select.type) {
case H5S_SEL_HYPERSLABS:
/* Check for a "regular" hyperslab selection */
if(file_space->select.sel_info.hslab.diminfo != NULL) {
file_elmts=file_space->select.sel_info.hslab.diminfo[u].block;
file_off=file_space->select.sel_info.hslab.diminfo[u].start;
} /* end if */
else {
file_elmts=(file_span->high-file_span->low)+1;
file_off=file_span->low;
file_span=file_span->down->head;
} /* end else */
file_off+=file_space->select.offset[u];
break;
case H5S_SEL_ALL:
file_elmts=file_space->extent.u.simple.size[u];
file_off=0;
break;
case H5S_SEL_POINTS:
file_elmts=1;
file_off=file_space->select.sel_info.pnt_lst->head->pnt[u]
+file_space->select.offset[u];
break;
default:
assert(0 && "Invalid selection type!");
} /* end switch */
if (mem_elmts!=file_elmts)
HGOTO_DONE(SUCCEED);
size[u] = file_size;
mem_size[u]=mem_space->extent.u.simple.size[u];
size[u] = file_elmts;
file_offset[u] = file_off;
mem_offset[u] = mem_off;
}
mem_size[u]=elmt_size;
size[u] = elmt_size;
file_offset[u] = 0;
mem_offset[u] = 0;
/* Disallow reading a memory hyperslab in the "middle" of a dataset which */
/* spans multiple rows in "interior" dimensions, but allow reading a */
/* hyperslab which is in the "middle" of the fastest or slowest changing */
/* dimension because a hyperslab which "fills" the interior dimensions is */
/* contiguous in memory. i.e. these are allowed: */
/* --------------------- --------------------- */
/* | | | | */
/* |*******************| | ********* | */
/* |*******************| | | */
/* | | | | */
/* | | | | */
/* --------------------- --------------------- */
/* ("large" contiguous block) ("small" contiguous block) */
/* But this is not: */
/* --------------------- */
/* | | */
/* | ********* | */
/* | ********* | */
/* | | */
/* | | */
/* --------------------- */
/* (not contiguous in memory) */
if(mem_space->select.type==H5S_SEL_HYPERSLABS) {
/* Check for a "small" contiguous block */
if(size[0]==1) {
small_contiguous=1;
/* size of block in all dimensions except the fastest must be '1' */
for (u=0; u<(mem_space->extent.u.simple.rank-1); u++) {
if(size[u]>1) {
small_contiguous=0;
break;
} /* end if */
} /* end for */
} /* end if */
/* Check for a "large" contiguous block */
else {
large_contiguous=1;
/* size of block in all dimensions except the slowest must be the */
/* full size of the dimension */
for (u=1; u<mem_space->extent.u.simple.rank; u++) {
if(size[u]!=mem_space->extent.u.simple.size[u]) {
large_contiguous=0;
break;
} /* end if */
} /* end for */
} /* end else */
/* Check for contiguous block */
if(small_contiguous || large_contiguous) {
/* Compute the "down sizes" for each dimension */
for (acc=elmt_size, i=(mem_space->extent.u.simple.rank-1); i>=0; i--) {
H5_ASSIGN_OVERFLOW(down_size[i],acc,hsize_t,size_t);
acc*=mem_space->extent.u.simple.size[i];
} /* end for */
/* Adjust the buffer offset and memory offsets by the proper amount */
for (u=0; u<mem_space->extent.u.simple.rank; u++) {
buf+=mem_offset[u]*down_size[u];
mem_offset[u]=0;
} /* end for */
} /* end if */
else {
/* Non-contiguous hyperslab block */
HGOTO_DONE(SUCCEED);
} /* end else */
} /* end if */
#ifdef QAK
printf("%s: check 2.0\n",FUNC);
for (u=0; u<mem_space->extent.u.simple.rank; u++)
printf("size[%u]=%lu\n",u,(unsigned long)size[u]);
for (u=0; u<=mem_space->extent.u.simple.rank; u++)
printf("mem_size[%u]=%lu\n",u,(unsigned long)mem_size[u]);
for (u=0; u<=mem_space->extent.u.simple.rank; u++)
printf("mem_offset[%u]=%lu\n",u,(unsigned long)mem_offset[u]);
for (u=0; u<=mem_space->extent.u.simple.rank; u++)
@ -625,7 +511,7 @@ for (u=0; u<=mem_space->extent.u.simple.rank; u++)
#endif /* QAK */
/* Read data from the file */
if (H5F_arr_read(f, dxpl_id, layout, pline, fill, efl, size,
size, mem_offset, file_offset, buf/*out*/)<0) {
mem_size, mem_offset, file_offset, buf/*out*/)<0) {
HGOTO_ERROR(H5E_IO, H5E_READERROR, FAIL,
"unable to read data from the file");
}
@ -671,23 +557,21 @@ H5S_all_write(H5F_t *f, const struct H5O_layout_t *layout,
{
H5S_hyper_span_t *file_span=NULL,*mem_span=NULL; /* Hyperslab span node */
const char *buf=(const char*)_buf; /* Get pointer to buffer */
hsize_t mem_size,file_size;
hssize_t file_off,mem_off;
hssize_t count; /* Regular hyperslab count */
hsize_t size[H5O_LAYOUT_NDIMS];
hssize_t file_offset[H5O_LAYOUT_NDIMS];
hssize_t mem_offset[H5O_LAYOUT_NDIMS];
unsigned u;
unsigned small_contiguous=0, /* Flags for indicating contiguous hyperslabs */
large_contiguous=0;
int i;
size_t down_size[H5O_LAYOUT_NDIMS];
hsize_t acc;
hsize_t mem_elmts,file_elmts; /* Number of elements in each dimension of selection */
hssize_t file_off,mem_off; /* Offset (in elements) of selection */
hsize_t mem_size[H5O_LAYOUT_NDIMS]; /* Size of memory buffer */
hsize_t size[H5O_LAYOUT_NDIMS]; /* Size of selection */
hssize_t file_offset[H5O_LAYOUT_NDIMS]; /* Offset of selection in file */
hssize_t mem_offset[H5O_LAYOUT_NDIMS]; /* Offset of selection in memory */
unsigned u; /* Index variable */
herr_t ret_value=SUCCEED;
FUNC_ENTER(H5S_all_write, FAIL);
*must_convert = TRUE;
#ifdef QAK
printf("%s: check 1.0\n",FUNC);
#endif /* QAK */
/* Check whether we can handle this */
if (H5S_SIMPLE!=mem_space->extent.type)
HGOTO_DONE(SUCCEED);
@ -696,201 +580,100 @@ H5S_all_write(H5F_t *f, const struct H5O_layout_t *layout,
if (mem_space->extent.u.simple.rank!=file_space->extent.u.simple.rank)
HGOTO_DONE(SUCCEED);
/* Check for a single hyperslab block defined in memory dataspace */
if (mem_space->select.type==H5S_SEL_HYPERSLABS) {
/* Check for a "regular" hyperslab selection */
if(mem_space->select.sel_info.hslab.diminfo != NULL) {
/* Check each dimension */
for(count=1,u=0; u<mem_space->extent.u.simple.rank; u++)
count*=mem_space->select.sel_info.hslab.diminfo[u].count;
/* If the regular hyperslab definition creates more than one hyperslab, fall through */
if(count>1)
HGOTO_DONE(SUCCEED);
} /* end if */
else {
/* Get the pointer to the hyperslab spans to check */
mem_span=mem_space->select.sel_info.hslab.span_lst->head;
/* Spin through the spans, checking for more than one span in each dimension */
while(mem_span!=NULL) {
/* If there are more than one span in the dimension, we can't use this routine */
if(mem_span->next!=NULL)
HGOTO_DONE(SUCCEED);
/* Advance to the next span, if it's available */
if(mem_span->down==NULL)
break;
else
mem_span=mem_span->down->head;
} /* end while */
/* Get the pointer to the hyperslab spans to use */
mem_span=mem_space->select.sel_info.hslab.span_lst->head;
} /* end else */
} /* end if */
else
if(mem_space->select.type!=H5S_SEL_ALL)
HGOTO_DONE(SUCCEED);
/* Check for a single hyperslab block defined in file dataspace */
if (file_space->select.type==H5S_SEL_HYPERSLABS) {
/* Check for a "regular" hyperslab selection */
if(file_space->select.sel_info.hslab.diminfo != NULL) {
/* Check each dimension */
for(count=1,u=0; u<file_space->extent.u.simple.rank; u++)
count*=file_space->select.sel_info.hslab.diminfo[u].count;
/* If the regular hyperslab definition creates more than one hyperslab, fall through */
if(count>1)
HGOTO_DONE(SUCCEED);
} /* end if */
else {
/* Get the pointer to the hyperslab spans to check */
file_span=file_space->select.sel_info.hslab.span_lst->head;
/* Spin through the spans, checking for more than one span in each dimension */
while(file_span!=NULL) {
/* If there are more than one span in the dimension, we can't use this routine */
if(file_span->next!=NULL)
HGOTO_DONE(SUCCEED);
/* Advance to the next span, if it's available */
if(file_span->down==NULL)
break;
else
file_span=file_span->down->head;
} /* end while */
/* Get the pointer to the hyperslab spans to use */
file_span=file_space->select.sel_info.hslab.span_lst->head;
} /* end else */
} /* end if */
else
if(file_space->select.type!=H5S_SEL_ALL)
HGOTO_DONE(SUCCEED);
    /* Get information about memory and file */
    for (u=0; u<mem_space->extent.u.simple.rank; u++) {
        switch(mem_space->select.type) {
            case H5S_SEL_HYPERSLABS:
                /* Check for a "regular" hyperslab selection */
                if(mem_space->select.sel_info.hslab.diminfo != NULL) {
                    mem_elmts=mem_space->select.sel_info.hslab.diminfo[u].block;
                    mem_off=mem_space->select.sel_info.hslab.diminfo[u].start;
                } /* end if */
                else {
                    mem_elmts=(mem_span->high-mem_span->low)+1;
                    mem_off=mem_span->low;
                    mem_span=mem_span->down->head;
                } /* end else */
                mem_off+=mem_space->select.offset[u];
                break;

            case H5S_SEL_ALL:
                mem_elmts=mem_space->extent.u.simple.size[u];
                mem_off=0;
                break;

            case H5S_SEL_POINTS:
                mem_elmts=1;
                mem_off=mem_space->select.sel_info.pnt_lst->head->pnt[u]
                        +mem_space->select.offset[u];
                break;

            default:
                assert(0 && "Invalid selection type!");
        } /* end switch */

        switch(file_space->select.type) {
            case H5S_SEL_HYPERSLABS:
                /* Check for a "regular" hyperslab selection */
                if(file_space->select.sel_info.hslab.diminfo != NULL) {
                    file_elmts=file_space->select.sel_info.hslab.diminfo[u].block;
                    file_off=file_space->select.sel_info.hslab.diminfo[u].start;
                } /* end if */
                else {
                    file_elmts=(file_span->high-file_span->low)+1;
                    file_off=file_span->low;
                    file_span=file_span->down->head;
                } /* end else */
                file_off+=file_space->select.offset[u];
                break;

            case H5S_SEL_ALL:
                file_elmts=file_space->extent.u.simple.size[u];
                file_off=0;
                break;

            case H5S_SEL_POINTS:
                file_elmts=1;
                file_off=file_space->select.sel_info.pnt_lst->head->pnt[u]
                        +file_space->select.offset[u];
                break;

            default:
                assert(0 && "Invalid selection type!");
        } /* end switch */

        if (mem_elmts!=file_elmts)
            HGOTO_DONE(SUCCEED);

        mem_size[u]=mem_space->extent.u.simple.size[u];
        size[u] = file_elmts;
        file_offset[u] = file_off;
        mem_offset[u] = mem_off;
    }
    mem_size[u]=elmt_size;
    size[u] = elmt_size;
    file_offset[u] = 0;
    mem_offset[u] = 0;
/* Disallow reading a memory hyperslab in the "middle" of a dataset which */
/* spans multiple rows in "interior" dimensions, but allow reading a */
/* hyperslab which is in the "middle" of the fastest or slowest changing */
/* dimension because a hyperslab which "fills" the interior dimensions is */
/* contiguous in memory. i.e. these are allowed: */
/* --------------------- --------------------- */
/* | | | | */
/* |*******************| | ********* | */
/* |*******************| | | */
/* | | | | */
/* | | | | */
/* --------------------- --------------------- */
/* ("large" contiguous block) ("small" contiguous block) */
/* But this is not: */
/* --------------------- */
/* | | */
/* | ********* | */
/* | ********* | */
/* | | */
/* | | */
/* --------------------- */
/* (not contiguous in memory) */
if(mem_space->select.type==H5S_SEL_HYPERSLABS) {
/* Check for a "small" contiguous block */
if(size[0]==1) {
small_contiguous=1;
/* size of block in all dimensions except the fastest must be '1' */
for (u=0; u<(mem_space->extent.u.simple.rank-1); u++) {
if(size[u]>1) {
small_contiguous=0;
break;
} /* end if */
} /* end for */
} /* end if */
/* Check for a "large" contiguous block */
else {
large_contiguous=1;
/* size of block in all dimensions except the slowest must be the */
/* full size of the dimension */
for (u=1; u<mem_space->extent.u.simple.rank; u++) {
if(size[u]!=mem_space->extent.u.simple.size[u]) {
large_contiguous=0;
break;
} /* end if */
} /* end for */
} /* end else */
/* Check for contiguous block */
if(small_contiguous || large_contiguous) {
/* Compute the "down sizes" for each dimension */
for (acc=elmt_size, i=(mem_space->extent.u.simple.rank-1); i>=0; i--) {
H5_ASSIGN_OVERFLOW(down_size[i],acc,hsize_t,size_t);
acc*=mem_space->extent.u.simple.size[i];
} /* end for */
/* Adjust the buffer offset and memory offsets by the proper amount */
for (u=0; u<mem_space->extent.u.simple.rank; u++) {
buf+=mem_offset[u]*down_size[u];
mem_offset[u]=0;
} /* end for */
} /* end if */
else {
/* Non-contiguous hyperslab block */
HGOTO_DONE(SUCCEED);
} /* end else */
} /* end if */
#ifdef QAK
printf("%s: check 2.0\n",FUNC);
for (u=0; u<mem_space->extent.u.simple.rank; u++)
printf("size[%u]=%lu\n",u,(unsigned long)size[u]);
for (u=0; u<=mem_space->extent.u.simple.rank; u++)
printf("mem_size[%u]=%lu\n",u,(unsigned long)mem_size[u]);
for (u=0; u<=mem_space->extent.u.simple.rank; u++)
printf("mem_offset[%u]=%lu\n",u,(unsigned long)mem_offset[u]);
for (u=0; u<=mem_space->extent.u.simple.rank; u++)
printf("file_offset[%u]=%lu\n",u,(unsigned long)file_offset[u]);
#endif /* QAK */
/* Write data to the file */
if (H5F_arr_write(f, dxpl_id, layout, pline, fill, efl, size,
                  mem_size, mem_offset, file_offset, buf)<0) {
HGOTO_ERROR(H5E_IO, H5E_WRITEERROR, FAIL,
"unable to write data to the file");
}
*must_convert = FALSE;
done:
FUNC_LEAVE(ret_value);
}
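To make the contiguity rules above concrete, here is a minimal sketch (not part of this commit; the dataspace sizes and variable names are invented, and error checking is omitted) of a memory selection the optimized path can now accept: a single hyperslab block that fills every dimension except the slowest (the "large" contiguous case pictured in the comment), combined with a selection offset, which this change now folds into the I/O location.

    hsize_t  dims[2]   = {20, 30};       /* hypothetical 20x30 memory buffer */
    hssize_t start[2]  = {4, 0};         /* block begins at row 4 */
    hsize_t  count[2]  = {10, 30};       /* 10 full rows -> one contiguous block in memory */
    hssize_t offset[2] = {2, 0};         /* selection offset, honored by the optimized path after this fix */
    hid_t    mspace;

    mspace = H5Screate_simple(2, dims, NULL);
    H5Sselect_hyperslab(mspace, H5S_SELECT_SET, start, NULL, count, NULL);
    H5Soffset_simple(mspace, offset);    /* shifts the block to rows 6..15 */
    H5Sclose(mspace);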
@ -5932,6 +5932,84 @@ H5S_hyper_select_contiguous(const H5S_t *space)
FUNC_LEAVE (ret_value);
} /* H5S_hyper_select_contiguous() */
/*--------------------------------------------------------------------------
NAME
H5S_hyper_select_single
PURPOSE
Check if a hyperslab selection is a single block within the dataspace extent.
USAGE
htri_t H5S_hyper_select_single(space)
H5S_t *space; IN: Dataspace pointer to check
RETURNS
TRUE/FALSE/FAIL
DESCRIPTION
Checks to see if the current selection in the dataspace is a single block.
This is primarily used for reading the entire selection in one swoop.
GLOBAL VARIABLES
COMMENTS, BUGS, ASSUMPTIONS
EXAMPLES
REVISION LOG
--------------------------------------------------------------------------*/
htri_t
H5S_hyper_select_single(const H5S_t *space)
{
H5S_hyper_span_info_t *spans; /* Hyperslab span info node */
H5S_hyper_span_t *span; /* Hyperslab span node */
unsigned u; /* index variable */
htri_t ret_value=FALSE; /* return value */
FUNC_ENTER (H5S_hyper_select_single, FAIL);
assert(space);
/* Check for a "regular" hyperslab selection */
if(space->select.sel_info.hslab.diminfo != NULL) {
/*
* For a regular hyperslab to be single, it must have only one
* block (i.e. count==1 in all dimensions)
*/
/* Initialize flags */
ret_value=TRUE; /* assume true and reset if any dimension has more than one block */
/* Check for a single block */
for(u=0; u<space->extent.u.simple.rank; u++) {
if(space->select.sel_info.hslab.diminfo[u].count>1) {
ret_value=FALSE;
break;
} /* end if */
} /* end for */
} /* end if */
else {
/*
* For a region to be single, it must have only one block
*/
/* Initialize flags */
ret_value=TRUE; /* assume true and reset if more than one span is found */
/* Get information for slowest changing information */
spans=space->select.sel_info.hslab.span_lst;
/* Walk down the span tree until we run out of 'down' spans or find a dimension with more than one span */
while(spans!=NULL) {
span=spans->head;
/* Check that this is the only span in this dimension */
if(span->next!=NULL) {
ret_value=FALSE;
break;
} /* end if */
else {
/* Walk down to the next span */
spans=span->down;
} /* end else */
} /* end while */
} /* end else */
FUNC_LEAVE (ret_value);
} /* H5S_hyper_select_single() */
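As a rough illustration of what this predicate accepts (a sketch under assumed dimensions, not code from this commit), the first selection below is one 4x4 block and would be reported as single, while the second places two blocks in each dimension and would not:

    hsize_t  dims[2]   = {20, 20};       /* hypothetical dataspace */
    hssize_t start[2]  = {2, 2};
    hsize_t  stride[2] = {6, 6};
    hsize_t  count[2]  = {1, 1};
    hsize_t  block[2]  = {4, 4};
    hid_t    sid = H5Screate_simple(2, dims, NULL);

    /* One 4x4 block: a "single" hyperslab selection */
    H5Sselect_hyperslab(sid, H5S_SELECT_SET, start, stride, count, block);

    /* Two 4x4 blocks in each dimension: no longer "single" */
    count[0] = count[1] = 2;
    H5Sselect_hyperslab(sid, H5S_SELECT_SET, start, stride, count, block);
    H5Sclose(sid);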
/*--------------------------------------------------------------------------
NAME
@ -132,6 +132,7 @@ __DLL__ herr_t H5S_point_select_serialize(const H5S_t *space, uint8_t *buf);
__DLL__ herr_t H5S_point_select_deserialize(H5S_t *space, const uint8_t *buf);
__DLL__ herr_t H5S_point_bounds(H5S_t *space, hsize_t *start, hsize_t *end);
__DLL__ htri_t H5S_point_select_contiguous(const H5S_t *space);
__DLL__ htri_t H5S_point_select_single(const H5S_t *space);
__DLL__ herr_t H5S_select_elements (H5S_t *space, H5S_seloper_t op,
size_t num_elem, const hssize_t **coord);
__DLL__ herr_t H5S_point_select_iterate(void *buf, hid_t type_id, H5S_t *space,
@ -174,6 +175,7 @@ __DLL__ hssize_t H5S_hyper_span_nblocks(H5S_hyper_span_info_t *spans);
__DLL__ herr_t H5S_hyper_span_blocklist(H5S_hyper_span_info_t *spans, hssize_t start[], hssize_t end[], hsize_t rank, hsize_t *startblock, hsize_t *numblocks, hsize_t **buf);
__DLL__ herr_t H5S_hyper_bounds(H5S_t *space, hsize_t *start, hsize_t *end);
__DLL__ htri_t H5S_hyper_select_contiguous(const H5S_t *space);
__DLL__ htri_t H5S_hyper_select_single(const H5S_t *space);
__DLL__ herr_t H5S_hyper_select_iterate(void *buf, hid_t type_id, H5S_t *space,
H5D_operator_t op, void *operator_data);
@ -1086,6 +1086,7 @@ H5S_point_bounds(H5S_t *space, hsize_t *start, hsize_t *end)
FUNC_LEAVE (ret_value);
} /* H5S_point_bounds() */
/*--------------------------------------------------------------------------
NAME
@ -1126,6 +1127,43 @@ H5S_point_select_contiguous(const H5S_t *space)
FUNC_LEAVE (ret_value);
} /* H5S_point_select_contiguous() */
/*--------------------------------------------------------------------------
NAME
H5S_point_select_single
PURPOSE
Check if a point selection consists of a single element within the dataspace extent.
USAGE
htri_t H5S_point_select_single(space)
H5S_t *space; IN: Dataspace pointer to check
RETURNS
TRUE/FALSE/FAIL
DESCRIPTION
Checks to see if the current selection in the dataspace is a single block.
This is primarily used for reading the entire selection in one swoop.
GLOBAL VARIABLES
COMMENTS, BUGS, ASSUMPTIONS
EXAMPLES
REVISION LOG
--------------------------------------------------------------------------*/
htri_t
H5S_point_select_single(const H5S_t *space)
{
htri_t ret_value=FAIL; /* return value */
FUNC_ENTER (H5S_point_select_single, FAIL);
assert(space);
/* A selection of exactly one point is, by definition, a single block */
if(space->select.num_elem==1)
ret_value=TRUE;
else
ret_value=FALSE;
FUNC_LEAVE (ret_value);
} /* H5S_point_select_single() */
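For comparison, a small hypothetical sketch of the two cases this check distinguishes (coordinates and names invented; the cast matches the element-selection signature declared in the header above; error checking omitted):

    hsize_t  dims[2] = {10, 10};              /* hypothetical dataspace */
    hssize_t coord[2][2] = {{3, 4}, {5, 6}};  /* element coordinates */
    hid_t    sid = H5Screate_simple(2, dims, NULL);

    /* One selected element: reported as "single" */
    H5Sselect_elements(sid, H5S_SELECT_SET, 1, (const hssize_t **)coord);

    /* Two selected elements: not "single" */
    H5Sselect_elements(sid, H5S_SELECT_SET, 2, (const hssize_t **)coord);
    H5Sclose(sid);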
/*--------------------------------------------------------------------------
NAME
@ -219,6 +219,7 @@ __DLL__ hssize_t H5S_select_serial_size(const H5S_t *space);
__DLL__ herr_t H5S_select_serialize(const H5S_t *space, uint8_t *buf);
__DLL__ herr_t H5S_select_deserialize(H5S_t *space, const uint8_t *buf);
__DLL__ htri_t H5S_select_contiguous(const H5S_t *space);
__DLL__ htri_t H5S_select_single(const H5S_t *space);
__DLL__ herr_t H5S_select_iterate(void *buf, hid_t type_id, H5S_t *space,
H5D_operator_t op, void *operator_data);
__DLL__ herr_t H5S_sel_iter_release(const H5S_t *space,
@ -1316,3 +1316,56 @@ H5Sget_select_type(hid_t space_id)
FUNC_LEAVE(space->select.type);
} /* end H5Sget_select_type() */
/*--------------------------------------------------------------------------
NAME
H5S_select_single
PURPOSE
Check if the selection is a single block within the dataspace extent.
USAGE
htri_t H5S_select_single(space)
H5S_t *space; IN: Dataspace pointer to check
RETURNS
TRUE/FALSE/FAIL
DESCRIPTION
Checks to see if the current selection in the dataspace is a single block.
This is primarily used for reading the entire selection in one swoop.
GLOBAL VARIABLES
COMMENTS, BUGS, ASSUMPTIONS
EXAMPLES
REVISION LOG
--------------------------------------------------------------------------*/
htri_t
H5S_select_single(const H5S_t *space)
{
htri_t ret_value=FAIL; /* return value */
FUNC_ENTER (H5S_select_single, FAIL);
assert(space);
switch(space->select.type) {
case H5S_SEL_POINTS: /* Sequence of points selected */
ret_value=H5S_point_select_single(space);
break;
case H5S_SEL_HYPERSLABS: /* Hyperslab selection defined */
ret_value=H5S_hyper_select_single(space);
break;
case H5S_SEL_ALL: /* Entire extent selected */
ret_value=TRUE;
break;
case H5S_SEL_NONE: /* Nothing selected */
ret_value=FALSE;
break;
case H5S_SEL_ERROR:
case H5S_SEL_N:
break;
}
FUNC_LEAVE (ret_value);
} /* H5S_select_single() */
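A minimal sketch of how an internal caller might consult this dispatch routine before attempting a single-block transfer; the control flow below is hypothetical and omits FAIL handling, and is not taken from this commit:

    /* Hypothetical caller -- illustration only */
    if(H5S_select_single(mem_space)==TRUE && H5S_select_single(file_space)==TRUE) {
        /* both selections reduce to one block -- try the optimized I/O path */
    } /* end if */
    else {
        /* fall back to the general scatter/gather path */
    } /* end else */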
@ -1227,9 +1227,7 @@ test_select_hyper_contig2(hid_t dset_type, hid_t xfer_plist)
hid_t sid1,sid2; /* Dataspace ID */
hsize_t dims2[] = {SPACE8_DIM4, SPACE8_DIM3, SPACE8_DIM2, SPACE8_DIM1};
hssize_t start[SPACE8_RANK]; /* Starting location of hyperslab */
hsize_t stride[SPACE8_RANK]; /* Stride of hyperslab */
hsize_t count[SPACE8_RANK]; /* Element count of hyperslab */
hsize_t block[SPACE8_RANK]; /* Block size of hyperslab */
uint16_t *wbuf, /* buffer to write to disk */
*rbuf, /* buffer read from disk */
*tbuf; /* temporary buffer pointer */
@ -1828,6 +1826,125 @@ test_select_hyper_offset(void)
free(rbuf);
} /* test_select_hyper_offset() */
/****************************************************************
**
** test_select_hyper_offset2(): Test basic H5S (dataspace) selection code.
** Tests optimized hyperslab I/O with selection offsets.
**
****************************************************************/
static void
test_select_hyper_offset2(void)
{
hid_t fid1; /* HDF5 File IDs */
hid_t dataset; /* Dataset ID */
hid_t sid1,sid2; /* Dataspace ID */
hsize_t dims1[] = {SPACE7_DIM1, SPACE7_DIM2};
hsize_t dims2[] = {SPACE7_DIM1, SPACE7_DIM2};
hssize_t start[SPACE7_RANK]; /* Starting location of hyperslab */
hsize_t count[SPACE7_RANK]; /* Element count of hyperslab */
hssize_t offset[SPACE7_RANK]; /* Offset of selection */
uint8_t *wbuf, /* buffer to write to disk */
*rbuf, /* buffer read from disk */
*tbuf, /* temporary buffer pointer */
*tbuf2; /* temporary buffer pointer */
int i,j; /* Counters */
herr_t ret; /* Generic return value */
htri_t valid; /* Generic boolean return value */
/* Output message about test being performed */
MESSAGE(5, ("Testing More Hyperslab Selection Functions with Offsets\n"));
/* Allocate write & read buffers */
wbuf=malloc(sizeof(uint8_t)*SPACE7_DIM1*SPACE7_DIM2);
rbuf=calloc(sizeof(uint8_t),SPACE7_DIM1*SPACE7_DIM2);
/* Initialize write buffer */
for(i=0, tbuf=wbuf; i<SPACE7_DIM1; i++)
for(j=0; j<SPACE7_DIM2; j++)
*tbuf++=(uint8_t)((i*SPACE7_DIM2)+j);
/* Create file */
fid1 = H5Fcreate(FILENAME, H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
CHECK(fid1, FAIL, "H5Fcreate");
/* Create dataspace for dataset */
sid1 = H5Screate_simple(SPACE7_RANK, dims1, NULL);
CHECK(sid1, FAIL, "H5Screate_simple");
/* Create dataspace for writing buffer */
sid2 = H5Screate_simple(SPACE7_RANK, dims2, NULL);
CHECK(sid2, FAIL, "H5Screate_simple");
/* Select 4x10 hyperslab for disk dataset */
start[0]=1; start[1]=0;
count[0]=4; count[1]=10;
ret = H5Sselect_hyperslab(sid1,H5S_SELECT_SET,start,NULL,count,NULL);
CHECK(ret, FAIL, "H5Sselect_hyperslab");
/* Set offset */
offset[0]=1; offset[1]=0;
ret = H5Soffset_simple(sid1,offset);
CHECK(ret, FAIL, "H5Soffset_simple");
valid = H5Sselect_valid(sid1);
VERIFY(valid, TRUE, "H5Sselect_valid");
/* Select 4x10 hyperslab for memory dataset */
start[0]=1; start[1]=0;
count[0]=4; count[1]=10;
ret = H5Sselect_hyperslab(sid2,H5S_SELECT_SET,start,NULL,count,NULL);
CHECK(ret, FAIL, "H5Sselect_hyperslab");
/* Choose a valid offset for the memory dataspace */
offset[0]=2; offset[1]=0;
ret = H5Soffset_simple(sid2,offset);
CHECK(ret, FAIL, "H5Soffset_simple");
valid = H5Sselect_valid(sid2);
VERIFY(valid, TRUE, "H5Sselect_valid");
/* Create a dataset */
dataset=H5Dcreate(fid1,"Dataset1",H5T_NATIVE_UCHAR,sid1,H5P_DEFAULT);
CHECK(dataset, FAIL, "H5Dcreate");
/* Write selection to disk */
ret=H5Dwrite(dataset,H5T_NATIVE_UCHAR,sid2,sid1,H5P_DEFAULT,wbuf);
CHECK(ret, FAIL, "H5Dwrite");
/* Read selection from disk */
ret=H5Dread(dataset,H5T_NATIVE_UCHAR,sid2,sid1,H5P_DEFAULT,rbuf);
CHECK(ret, FAIL, "H5Dread");
/* Compare data read with data written out */
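/* Memory selection starts at row 1 with a +2 offset, so rows 3..6 of the buffers hold the data */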
for(i=0; i<4; i++) {
tbuf=wbuf+((i+3)*SPACE7_DIM2);
tbuf2=rbuf+((i+3)*SPACE7_DIM2);
for(j=0; j<SPACE7_DIM2; j++, tbuf++, tbuf2++) {
if(*tbuf!=*tbuf2) {
printf("%d: hyperslab values don't match!, i=%d, j=%d, *tbuf=%u, *tbuf2=%u\n",__LINE__,i,j,(unsigned)*tbuf,(unsigned)*tbuf2);
num_errs++;
} /* end if */
} /* end for */
} /* end for */
/* Close memory dataspace */
ret = H5Sclose(sid2);
CHECK(ret, FAIL, "H5Sclose");
/* Close disk dataspace */
ret = H5Sclose(sid1);
CHECK(ret, FAIL, "H5Sclose");
/* Close Dataset */
ret = H5Dclose(dataset);
CHECK(ret, FAIL, "H5Dclose");
/* Close file */
ret = H5Fclose(fid1);
CHECK(ret, FAIL, "H5Fclose");
/* Free memory buffers */
free(wbuf);
free(rbuf);
} /* test_select_hyper_offset2() */
/****************************************************************
**
** test_select_point_offset(): Test basic H5S (dataspace) selection code.
@ -4345,6 +4462,7 @@ test_select(void)
test_select_hyper_copy(); /* Test hyperslab selection copying code */
test_select_point_copy(); /* Test point selection copying code */
test_select_hyper_offset(); /* Test selection offset code with hyperslabs */
test_select_hyper_offset2();/* Test more selection offset code with hyperslabs */
test_select_point_offset(); /* Test selection offset code with elements */
test_select_hyper_union(); /* Test hyperslab union code */
#ifdef NEW_HYPERSLAB_API