Add check for actually using the MPI file driver when caching one chunk

Better explain rationale behind chunk caching issue fix
Jordan Henderson 2018-07-13 09:26:41 -05:00
parent 30d3bc2c91
commit 0dc6c3c8b5


@@ -2943,9 +2943,23 @@ H5D__chunk_lookup(const H5D_t *dset, const hsize_t *scaled,
  * address in the file and/or gets re-allocated with a different size).
  * If we were to cache this information, subsequent reads/writes would
  * retrieve the invalid information and cause a variety of issues.
+ *
+ * This is an ugly and potentially fragile check, but the
+ * H5D__chunk_cinfo_cache_reset() function is not currently available
+ * to functions outside of this file, so outside functions cannot
+ * invalidate this single chunk cache. Even if the function were available,
+ * this check prevents us from doing the work of going through and caching
+ * each chunk in the write operation, when we're only going to invalidate
+ * the cache at the end of a parallel write anyway.
+ *
+ * - JTH
  */
 #ifdef H5_HAVE_PARALLEL
-    if ( !((H5F_INTENT(dset->oloc.file) & H5F_ACC_RDWR) && dset->shared->dcpl_cache.pline.nused) )
+    if ( !( (H5F_HAS_FEATURE(idx_info.f, H5FD_FEAT_HAS_MPI))
+            && (H5F_INTENT(dset->oloc.file) & H5F_ACC_RDWR)
+            && dset->shared->dcpl_cache.pline.nused
+          )
+       )
 #endif
         H5D__chunk_cinfo_cache_update(&dset->shared->cache.chunk.last, udata);
 } /* end if */
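
For context, the case this guard targets is a collective write to a chunked dataset that has filters applied: each write can change a chunk's compressed size, so the chunk may be re-allocated at a new file address, and any cached address/size pair goes stale. The sketch below (not part of this commit) shows such a write through the MPI-IO driver using the public parallel HDF5 API; it requires a parallel HDF5 build (1.10.2 or later for parallel filtered writes), and the file name, dataset shape, and two-rank decomposition are illustrative placeholders.

/*
 * Illustrative sketch: a collective write to a chunked, deflate-filtered
 * dataset through the MPI-IO file driver -- the case the new
 * H5F_HAS_FEATURE check excludes from the single-chunk info cache.
 * Run with exactly 2 MPI ranks, e.g.:
 *     h5pcc sketch.c -o sketch && mpiexec -n 2 ./sketch
 */
#include <mpi.h>
#include <hdf5.h>

int main(int argc, char *argv[])
{
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Access the file through the MPI-IO driver; this is what makes
     * H5F_HAS_FEATURE(file, H5FD_FEAT_HAS_MPI) true inside the library. */
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
    hid_t file = H5Fcreate("filtered.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

    /* Chunked layout with a filter: dcpl_cache.pline.nused > 0, so a
     * chunk's compressed size -- and therefore its file address -- can
     * change on every write, which is why caching its info is unsafe. */
    hsize_t dims[1]  = {1024};
    hsize_t chunk[1] = {128};
    hid_t fspace = H5Screate_simple(1, dims, NULL);
    hid_t dcpl   = H5Pcreate(H5P_DATASET_CREATE);
    H5Pset_chunk(dcpl, 1, chunk);
    H5Pset_deflate(dcpl, 6);
    hid_t dset = H5Dcreate2(file, "data", H5T_NATIVE_INT, fspace,
                            H5P_DEFAULT, dcpl, H5P_DEFAULT);

    /* Each of the two ranks writes one half of the dataset; writes to
     * filtered datasets must be collective. */
    hsize_t start[1] = {(hsize_t)rank * 512};
    hsize_t count[1] = {512};
    H5Sselect_hyperslab(fspace, H5S_SELECT_SET, start, NULL, count, NULL);
    hid_t mspace = H5Screate_simple(1, count, NULL);

    int buf[512];
    for (int i = 0; i < 512; i++)
        buf[i] = rank;

    hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
    H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);
    H5Dwrite(dset, H5T_NATIVE_INT, mspace, fspace, dxpl, buf);

    H5Pclose(dxpl);
    H5Sclose(mspace);
    H5Sclose(fspace);
    H5Dclose(dset);
    H5Pclose(dcpl);
    H5Fclose(file);
    H5Pclose(fapl);
    MPI_Finalize();
    return 0;
}

As the new comment explains, the cinfo cache would be invalidated at the end of such a parallel write anyway, so skipping it up front avoids populating it with entries that are guaranteed to be thrown away.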