[svn-r8916]

Purpose: Improvement

Description: The HDF5 Library set the pixels_per_scanline parameter to the size of the
             chunk's fastest changing dimension.  As a result, the chunk's fastest
             changing dimension could not be bigger than 4K or smaller than the
             pixels_per_block value, and szip compression couldn't be used for many
             real datasets.
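
             For illustration, a minimal sketch of the old check (a simplification, not
             the actual library source; 4096 is szip's maximum number of pixels per
             scanline, the "4K" above, the helper name is made up, and hsize_t comes
             from hdf5.h):

                 /* Old rule: pixels_per_scanline came straight from the chunk's
                  * fastest changing dimension, so that dimension had to lie
                  * between pixels_per_block and 4096. */
                 static int old_szip_can_apply(const hsize_t *chunk_dims, int ndims,
                                               unsigned pixels_per_block)
                 {
                     unsigned pixels_per_scanline = (unsigned)chunk_dims[ndims - 1];

                     if(pixels_per_scanline > 4096 || pixels_per_scanline < pixels_per_block)
                         return 0;    /* szip rejected; H5Dcreate on such a dataset failed */
                     return 1;
                 }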

Solution: Reworked the algorithm by which HDF5 sets the pixels_per_scanline value; now
          only chunks whose total number of elements is less than the pixels_per_block
          value are rejected.  There is no longer any restriction on the size of the
          chunk's fastest changing dimension (illustrated in the sketch below).

          Modified the test according to the new algorithm.
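
          The new acceptance rule can be illustrated with a chunk whose fastest changing
          dimension is smaller than pixels_per_block, which the old algorithm rejected
          (a minimal sketch; the file name, dataset name, dimensions, and pixels_per_block
          value of 32 are made up, error checking is omitted, and H5Dcreate is the
          five-argument call used by the test below):

              #include "hdf5.h"

              int main(void)
              {
                  hid_t   file, sid, dcpl, dsid;
                  hsize_t dims[2]  = {256, 16};
                  hsize_t chunk[2] = {64, 4};     /* fastest changing dimension = 4 */

                  file = H5Fcreate("szip_example.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
                  sid  = H5Screate_simple(2, dims, NULL);
                  dcpl = H5Pcreate(H5P_DATASET_CREATE);
                  H5Pset_chunk(dcpl, 2, chunk);
                  H5Pset_szip(dcpl, H5_SZIP_NN_OPTION_MASK, 32);   /* pixels_per_block = 32 */

                  /* Old rule: fails, fastest changing dimension (4) < pixels_per_block (32).
                   * New rule: succeeds, the chunk has 64*4 = 256 >= 32 elements. */
                  dsid = H5Dcreate(file, "szip_dset", H5T_NATIVE_INT, sid, dcpl);

                  H5Dclose(dsid);
                  H5Pclose(dcpl);
                  H5Sclose(sid);
                  H5Fclose(file);
                  return 0;
              }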


Platforms tested: verbena, copper, sol

Misc. update:
Elena Pourmal 2004-07-21 15:41:41 -05:00
parent 0fb97eb5fd
commit 560d1127e9


@@ -2826,14 +2826,21 @@ file)
     }
     /* Create new dataset */
-    /* (Should fail because the 'can apply' filter should indicate inappropriate combination) */
+    /* (Should succeed; according to the new algorithm, scanline should be reset
+       to 2*128 satisfying 'maximum blocks per scanline' condition) */
     H5E_BEGIN_TRY {
         dsid = H5Dcreate(file, DSET_CAN_APPLY_SZIP_NAME, H5T_NATIVE_INT, sid, dcpl);
     } H5E_END_TRY;
-    if (dsid >=0) {
+    if (dsid <=0) {
         H5_FAILED();
-        printf(" Line %d: Shouldn't have created dataset!\n",__LINE__);
-        H5Dclose(dsid);
+        printf(" Line %d: Should have created dataset!\n",__LINE__);
         goto error;
     } /* end if */
+    /* Close dataset */
+    if(H5Dclose(dsid)<0) {
+        H5_FAILED();
+        printf(" Line %d: Can't close dataset\n",__LINE__);
+        goto error;
+    } /* end if */
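
Note on the "2*128" in the new comment above: 128 is szip's maximum number of blocks
per scanline, and 2 appears to be the pixels_per_block value this test uses, so the
clipped scanline is 2*128 = 256 pixels.  A hedged sketch of that clipping (illustration
only, not the actual library code; the constant and helper names are made up):

    #define MAX_BLOCKS_PER_SCANLINE 128    /* szip limit on blocks per scanline */

    static unsigned clip_scanline(unsigned pixels_per_scanline, unsigned pixels_per_block)
    {
        unsigned max_scanline = pixels_per_block * MAX_BLOCKS_PER_SCANLINE;

        /* Reset an oversized scanline to pixels_per_block * 128 (2*128 = 256 here) */
        return (pixels_per_scanline > max_scanline) ? max_scanline : pixels_per_scanline;
    }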