[svn-r6891] Purpose:

Bug fix

Description:
    Raw data I/O on chunked datasets would attempt to allocate data structures
proportional to the number of chunks in the dataset on disk, instead of just
the number of chunks that the I/O operation would interact with.  This caused
operations on datasets with large numbers of chunks to fail (or become very
slow), even though the actual I/O request was very modest.
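
    For context, here is a rough sketch (not part of this checkin) of the kind
of access that was affected, assuming the 1.6-era C API; the file name,
dataset name, and sizes below are made up.  A dataset laid out as ~1,000,000
chunks is read through a single 10x10 hyperslab:

    /* Sketch: a dataset laid out as ~1,000,000 chunks, read through a single
     * 10x10 hyperslab.  Before this fix, the read would build per-chunk
     * structures for every chunk on disk; now only the chunk(s) that the
     * selection touches need bookkeeping. */
    #include "hdf5.h"

    int main(void)
    {
        hsize_t dims[2]  = {100000, 1000};  /* dataset extent                 */
        hsize_t chunk[2] = {10, 10};        /* 10000 x 100 = 1,000,000 chunks */
        hsize_t start[2] = {0, 0};
        hsize_t count[2] = {10, 10};        /* selection fits in one chunk    */
        int     buf[10][10];

        hid_t file   = H5Fcreate("many_chunks.h5", H5F_ACC_TRUNC,
                                 H5P_DEFAULT, H5P_DEFAULT);
        hid_t fspace = H5Screate_simple(2, dims, NULL);
        hid_t dcpl   = H5Pcreate(H5P_DATASET_CREATE);
        hid_t dset, mspace;

        H5Pset_chunk(dcpl, 2, chunk);
        dset = H5Dcreate(file, "data", H5T_NATIVE_INT, fspace, dcpl);

        /* A very modest I/O request: one 10x10 block. */
        H5Sselect_hyperslab(fspace, H5S_SELECT_SET, start, NULL, count, NULL);
        mspace = H5Screate_simple(2, count, NULL);
        H5Dread(dset, H5T_NATIVE_INT, mspace, fspace, H5P_DEFAULT, buf);

        H5Sclose(mspace);
        H5Sclose(fspace);
        H5Pclose(dcpl);
        H5Dclose(dset);
        H5Fclose(file);
        return 0;
    }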

Solution:
    This is the "scalability fix" for chunked datasets that I've mentioned
we need to do, although it's not the complete fix for the issue.  Read on
for the details...
    Only create data structures for the chunks that the I/O operation will
actually act on, which normally reduces the amount of information allocated
in memory.
    I say "normally" because this algorithm has the same problem as the
original one (worse, actually, since the data structure for each chunk is
larger now) if _all_ the chunks in a dataset with a large number of chunks
are involved in the I/O operation.  If that is the case, this code will fail
in a similar way.
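
    As a rough, self-contained illustration of the idea (the names and types
below are invented for the sketch and are not the actual HDF5 internals):
compute the box of chunk indices that the selection overlaps and allocate
per-chunk bookkeeping only for those chunks:

    /* Hypothetical, self-contained sketch (not the actual HDF5 internals):
     * given a rectangular selection, find the box of chunk indices it
     * touches and allocate bookkeeping only for those chunks. */
    #include <stdio.h>
    #include <stdlib.h>

    #define RANK 2

    typedef struct {
        size_t index[RANK];   /* chunk coordinates in chunk-index space     */
        /* per-chunk file/memory selection info would live here             */
    } chunk_info_t;

    int main(void)
    {
        size_t dset_dims[RANK]  = {100000, 1000}; /* dataset extent          */
        size_t chunk_dims[RANK] = {10, 10};       /* 1,000,000 chunks total  */
        size_t sel_start[RANK]  = {0, 0};         /* small 10x10 selection   */
        size_t sel_count[RANK]  = {10, 10};
        size_t first[RANK], last[RANK], total = 1, nchunks = 1, n = 0, i, j;
        chunk_info_t *map;
        int d;

        /* Chunk-index box covered by the selection. */
        for (d = 0; d < RANK; d++) {
            first[d] = sel_start[d] / chunk_dims[d];
            last[d]  = (sel_start[d] + sel_count[d] - 1) / chunk_dims[d];
            nchunks *= last[d] - first[d] + 1;
            total   *= dset_dims[d] / chunk_dims[d];
        }

        /* Allocate entries only for the touched chunks, not for every
         * chunk in the dataset. */
        if ((map = malloc(nchunks * sizeof(*map))) == NULL)
            return 1;
        for (i = first[0]; i <= last[0]; i++)
            for (j = first[1]; j <= last[1]; j++) {
                map[n].index[0] = i;
                map[n].index[1] = j;
                n++;
            }

        printf("selection touches %lu of %lu chunks\n",
               (unsigned long)nchunks, (unsigned long)total);
        free(map);
        return 0;
    }

    For the 10x10 selection in the sketch, only one of the 1,000,000 chunk
entries is ever built.
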
    To truly fix the problem, we would need to create data structures for only
a fixed number of chunks at a time, perform the I/O on just those chunks, then
release their data structures and create data structures for the next set of
chunks to access, etc.  However, I think this case is pretty rare right now
and we should worry about it after the 1.6.0 release.
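
    For completeness, a minimal sketch of what that batched scheme might look
like (CHUNK_BATCH, chunk_task_t and do_chunk_io are invented names, not the
real HDF5 routines):

    /* Hypothetical sketch of the batched scheme: build I/O structures for at
     * most CHUNK_BATCH chunks at a time, act on them, then reuse the storage
     * for the next batch, so memory stays bounded no matter how many chunks
     * the selection hits.  All names here are invented for the sketch. */
    #include <stddef.h>
    #include <stdio.h>

    #define CHUNK_BATCH 256                  /* assumed in-memory cap        */

    typedef struct { size_t chunk_index; } chunk_task_t;

    /* Stub standing in for the real per-chunk read/write step. */
    static void do_chunk_io(const chunk_task_t *task)
    {
        (void)task;
    }

    static void batched_chunk_io(size_t n_selected_chunks)
    {
        chunk_task_t batch[CHUNK_BATCH];
        size_t base, i, n;

        for (base = 0; base < n_selected_chunks; base += CHUNK_BATCH) {
            n = n_selected_chunks - base;
            if (n > CHUNK_BATCH)
                n = CHUNK_BATCH;

            /* Build structures for just this batch of chunks... */
            for (i = 0; i < n; i++)
                batch[i].chunk_index = base + i;

            /* ...perform the I/O on them... */
            for (i = 0; i < n; i++)
                do_chunk_io(&batch[i]);

            /* ...then reuse the batch storage for the next set of chunks. */
        }
    }

    int main(void)
    {
        batched_chunk_io(1000000);   /* e.g. every chunk in the dataset hit */
        printf("done\n");
        return 0;
    }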

Platforms tested:
    h5committested

Author: Quincey Koziol, 2003-05-17 16:50:55 -05:00
Commit: ad3ace3d16 (parent: dddf167923)
