Mirror of https://github.com/HDFGroup/hdf5.git (synced 2024-11-27 02:10:55 +08:00)

Commit debeaf6e64
Purpose: A new feature

Description: While testing the h4toh5 utility with real NASA files, we found an example in which one data array (SDS) is so big that it exceeds the physical memory of some machines (>128 MB), and the conversion failed. Until the smart hyperslab operation is available, I am dividing the whole SDS into smaller hyperslabs, each proportional to the original SDS array dimensions. For example, a three-dimensional array with 1000*1000*1000 elements can be divided into eight 500*500*500 pieces. Each piece is read and written separately, and its starting and ending points are recorded. This avoids the memory allocation failure, although it may not be the most efficient approach. I have tested this feature with an SDS without chunking, and it works fine. However, with a chunked SDS it is extremely slow; this turns out to be a bug in the HDF5 library. Quincey may fix it later and give me a more efficient way to handle the problem. Currently all my test files use UNLIMITED dimensions, which requires chunking in HDF5, so by default this feature is not turned on.

Solution: See the above.

Platforms tested: linux 2.2.18
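The sketch below is not the actual h4toh5 code; it only illustrates the block-by-block approach described above, using the standard HDF5 C API. It assumes a 3-D integer SDS with even dimensions, a hypothetical `read_sds_block()` helper standing in for the HDF4 `SDreaddata()` side of the conversion, and the invented dataset name `/converted_sds`.

```c
/* Minimal sketch: split a large 3-D array into eight proportional
 * halves per dimension and write each block through a hyperslab
 * selection, so only one block is ever held in memory. */
#include <stdlib.h>
#include "hdf5.h"

#define RANK 3

/* Hypothetical helper: fill `buf` with the SDS elements starting at
 * `start` with extent `count` (would wrap SDreaddata() in real code). */
extern int read_sds_block(const hsize_t *start, const hsize_t *count, int *buf);

int convert_sds_in_blocks(hid_t file_id, const hsize_t dims[RANK])
{
    hsize_t half[RANK], start[RANK], count[RANK];
    hid_t   filespace, memspace, dset;
    int    *buf;
    int     i, j, k, d;

    /* File dataspace and dataset sized to the whole SDS. */
    filespace = H5Screate_simple(RANK, dims, NULL);
    dset = H5Dcreate2(file_id, "/converted_sds", H5T_NATIVE_INT,
                      filespace, H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

    for (d = 0; d < RANK; d++)
        half[d] = dims[d] / 2;          /* assume even dimensions for brevity */

    /* One memory dataspace and buffer sized to a single block. */
    memspace = H5Screate_simple(RANK, half, NULL);
    buf = malloc((size_t)(half[0] * half[1] * half[2]) * sizeof(int));

    /* Visit the eight blocks, remembering each block's starting offset. */
    for (i = 0; i < 2; i++)
        for (j = 0; j < 2; j++)
            for (k = 0; k < 2; k++) {
                start[0] = i * half[0];
                start[1] = j * half[1];
                start[2] = k * half[2];
                count[0] = half[0];
                count[1] = half[1];
                count[2] = half[2];

                if (read_sds_block(start, count, buf) < 0)
                    return -1;

                /* Select the matching region of the file dataset and
                 * write only this block. */
                H5Sselect_hyperslab(filespace, H5S_SELECT_SET,
                                    start, NULL, count, NULL);
                H5Dwrite(dset, H5T_NATIVE_INT, memspace, filespace,
                         H5P_DEFAULT, buf);
            }

    free(buf);
    H5Sclose(memspace);
    H5Sclose(filespace);
    H5Dclose(dset);
    return 0;
}
```

Dividing every dimension in half keeps each block proportional to the original array, so the per-block buffer is 1/8 of the full dataset size regardless of which dimension is largest; a real implementation would also handle odd dimensions and non-integer element types.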
| Name |
|---|
| gifconv |
| h4toh5 |
| h5dump |
| h5ls |
| h5toh4 |
| lib |
| misc |
| testfiles |
| Dependencies |
| Makefile.in |