Fix for HDFFV-7993 - h5repack fails with error "chunk size must be <= maximum dimension size for fixed-sized dimensions"
Description:
Fixed a failure when changing the chunk size of a specified chunked dataset with unlimited max dims.
Also took care of converting such a dataset to contiguous and compact layouts.
Test cases were added and tagged with the jira#.
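For context, a minimal C sketch (file and dataset names are illustrative) of the kind of dataset involved: for an H5S_UNLIMITED dimension a chunk extent may exceed the current dimension size, and only the fixed dimensions constrain the chunk, which is the rule the fix restores for h5repack:

    #include "hdf5.h"

    int main(void)
    {
        hsize_t dims[2]    = {10, 20};
        hsize_t maxdims[2] = {H5S_UNLIMITED, 20};   /* first dim unlimited */
        hsize_t chunk[2]   = {64, 20};              /* 64 > 10 is legal: dim 0 is unlimited */

        hid_t file  = H5Fcreate("repack_chunk.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
        hid_t space = H5Screate_simple(2, dims, maxdims);
        hid_t dcpl  = H5Pcreate(H5P_DATASET_CREATE);

        H5Pset_chunk(dcpl, 2, chunk);
        hid_t dset  = H5Dcreate2(file, "dset", H5T_NATIVE_INT, space,
                                 H5P_DEFAULT, dcpl, H5P_DEFAULT);

        H5Dclose(dset);
        H5Pclose(dcpl);
        H5Sclose(space);
        H5Fclose(file);
        return 0;
    }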
Tested:
jam (linux32-LE), koala (linux64-LE), ostrich (linuxppc64-BE), tejeda (mac32-LE), linew (solaris-BE), Windows (32-LE cmake), Cmake (jam)
Add new "metadata block size" command line option ('-M <x>' or
'--metadata_block_size=<x>') for h5repack.
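A minimal sketch of the call this option presumably maps to, H5Pset_meta_block_size on the output file's access property list (file name and size are illustrative):

    #include "hdf5.h"

    int main(void)
    {
        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);

        /* aggregate metadata into 8 KiB blocks instead of the 2 KiB default */
        H5Pset_meta_block_size(fapl, (hsize_t)8192);

        hid_t file = H5Fcreate("out.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

        H5Fclose(file);
        H5Pclose(fapl);
        return 0;
    }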
Tested on:
Mac OS X/64 10.7.3 (amazon) w/debug
(h5committest upcoming)
Change to use HDxxx macros.
Description:
Originally this started as a fix for incorrect pointer usage, but that got
fixed through the Coverity merge. So this mainly changes code to use the HDxxx
macros and cleans up some related code.
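A simplified sketch of the convention (the real definitions live in HDF5's internal H5private.h and are more elaborate):

    #include <stdlib.h>
    #include <string.h>

    /* HDF5 wraps standard C calls in HD-prefixed macros so a platform can
     * substitute its own implementation in one place; simplified versions: */
    #define HDmalloc(Z)         malloc(Z)
    #define HDfree(M)           free(M)
    #define HDstrcmp(X, Y)      strcmp(X, Y)
    #define HDmemset(X, C, Z)   memset(X, C, Z)

    /* library code then calls HDstrcmp(a, b) rather than strcmp(a, b) */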
Tested:
jam (linux32-LE), amani (linux64-LE), heiwa (linuxppc64-BE), tejeda (mac32-LE), linew (solaris-BE)
Fix for bug 1726 - NPOESS: h5repack loses attributes for datasets of
type H5T_REFERENCE.
Description:
Includes test cases, among them test cases for attributes with object and
region references (see the sketch below).
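A minimal sketch of the kind of attribute that was being lost, here an object reference attribute (file, dataset, and attribute names are illustrative):

    #include "hdf5.h"

    int main(void)
    {
        hid_t file  = H5Fcreate("ref_attr.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
        hid_t space = H5Screate(H5S_SCALAR);
        hid_t dset  = H5Dcreate2(file, "dset", H5T_NATIVE_INT, space,
                                 H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

        /* object reference pointing back at the dataset */
        hobj_ref_t ref;
        H5Rcreate(&ref, file, "dset", H5R_OBJECT, -1);

        /* attach it as an attribute of type H5T_STD_REF_OBJ */
        hid_t attr = H5Acreate2(dset, "self_ref", H5T_STD_REF_OBJ, space,
                                H5P_DEFAULT, H5P_DEFAULT);
        H5Awrite(attr, H5T_STD_REF_OBJ, &ref);

        H5Aclose(attr);
        H5Dclose(dset);
        H5Sclose(space);
        H5Fclose(file);
        return 0;
    }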
Tested:
jam, amani, linew
Unify srcdir handling for test executables and allow them to use the srcdir
setting from configure time without requiring the 'srcdir' environment variable
to be set (although you still can, to override the built-in setting). Attempted
to get this right for Windows builds also.
Also add dependency between src/H5Tinit.c and src/libhdf5.settings, so
that the test/testcheck_version.sh script works correctly.
Tested on:
Linux/32 2.6 (jam)
Mac OS X/32 10.6.2 (amazon)
Bring changes from file free space branch back to the trunk. *yay!*
Tested on:
FreeBSD/32 6.3 (duty) in debug mode
FreeBSD/64 6.3 (liberty) w/C++ & FORTRAN, in debug mode
Linux/32 2.6 (jam) w/PGI compilers, w/default API=1.8.x,
w/C++ & FORTRAN, w/threadsafe, in debug mode
Linux/64-amd64 2.6 (smirom) w/Intel compilers, w/default API=1.6.x,
w/C++ & FORTRAN, in production mode
Solaris/32 2.10 (linew) w/deprecated symbols disabled, w/C++ & FORTRAN,
w/szip filter, in production mode
Linux/64-ia64 2.6 (cobalt) w/Intel compilers, w/C++ & FORTRAN,
in production mode
Linux/64-ia64 2.4 (tg-login3) w/parallel, w/FORTRAN, in debug mode
Linux/64-amd64 2.6 (abe) w/parallel, w/FORTRAN, in production mode
Mac OS X/32 10.5.8 (amazon) in debug mode
Mac OS X/32 10.5.8 (amazon) w/C++ & FORTRAN, w/threadsafe,
in production mode
ISSUE: h5repack does not handle group creation order flags.
ACTION: call the H5P(g)(s)et_link_creation_order functions when handling groups (see the sketch below), add new groups with these flags to the test generation program, and verify the results in the test program.
TEST: in the test program's function that compares property lists, added code to verify groups.
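A minimal sketch of the property calls involved (file and group names are illustrative): the flags are set on a group creation property list and read back from an existing group so a repack can reapply them:

    #include "hdf5.h"

    int main(void)
    {
        hid_t file = H5Fcreate("order.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
        hid_t gcpl = H5Pcreate(H5P_GROUP_CREATE);

        /* track (and index) the order in which links are created */
        H5Pset_link_creation_order(gcpl, H5P_CRT_ORDER_TRACKED | H5P_CRT_ORDER_INDEXED);

        hid_t grp = H5Gcreate2(file, "g", H5P_DEFAULT, gcpl, H5P_DEFAULT);

        /* read the flags back, as a repack of the group would */
        unsigned flags = 0;
        hid_t gcpl_out = H5Gget_create_plist(grp);
        H5Pget_link_creation_order(gcpl_out, &flags);

        H5Pclose(gcpl_out);
        H5Gclose(grp);
        H5Pclose(gcpl);
        H5Fclose(file);
        return 0;
    }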
tested: windows, linux, solaris
Remove trailing whitespace from C/C++ source files, with the following
script:
foreach f (*.[ch] *.cpp)
sed 's/[[:blank:]]*$//' $f > sed.out && mv sed.out $f
end
Tested on:
Mac OS X/32 10.5.5 (amazon)
No need for h5committest, just whitespace changes...
-t T, --threshold=T Threshold value for H5Pset_alignment
-a A, --alignment=A Alignment value for H5Pset_alignment
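A minimal sketch of the underlying call these two options drive (threshold and alignment values are illustrative):

    #include "hdf5.h"

    int main(void)
    {
        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);

        /* align any object of 1 MiB or more on a 4 KiB boundary */
        H5Pset_alignment(fapl, (hsize_t)(1024 * 1024), (hsize_t)4096);

        hid_t file = H5Fcreate("aligned.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

        H5Fclose(file);
        H5Pclose(fapl);
        return 0;
    }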
2) bug fix
the dataset name was not printed for references (verbose mode)
tested: windows, linux
Add a userblock to an HDF5 file during the repack. The user gives
a filename and userblock size as command line parameters to
h5repack, and the contents of that file are stored in the
userblock of the HDF5 file created by h5repack.
New flags, -u and -b, handle this.
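A minimal sketch of the property call behind the -b size (copying the -u file's bytes into the reserved space happens separately); the size must be a power of two of at least 512:

    #include "hdf5.h"

    int main(void)
    {
        hid_t fcpl = H5Pcreate(H5P_FILE_CREATE);

        H5Pset_userblock(fcpl, (hsize_t)1024);   /* reserve a 1 KiB userblock */

        hid_t file = H5Fcreate("with_ub.h5", H5F_ACC_TRUNC, fcpl, H5P_DEFAULT);

        H5Fclose(file);
        H5Pclose(fcpl);
        return 0;
    }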
Tested: windows, linux
TO DO: szip, nbit and scale offset
NOTE: the symbol H5Z_SHUFFLE_TOTAL_NPARMS was made public
Tested: windows, teragrid with icc 8.1, linux (kagiso), solaris (linew)
Following the new h5repack feature that allows multiple filters for all datasets, and the new function has_filters that checks whether the repacked file has all the requested filters, I added a new function
has_filters_obj
that does the same check for each dataset. The previous function only checked whether the user-requested filters were present in the output dataset. The new function does this but also checks that the filters are exactly the same. Currently h5repack deletes all filters present in the input dataset and replaces them with the requested ones, so they must match exactly.
We might consider adding other logical operations, like keeping the existing filters.
Additionally, the function checks whether the filter parameters match.
While doing this I noticed that for the shuffle filter the returned values do not match, and the same holds for the N-bit and scale-offset filters.
The new function that checks the filter values then fails, so I commented out the h5repack tests that do this for the N-bit and scale-offset filters (for the same bug on the shuffle filter I previously added special code in the filter comparison function, but this is temporary until I find the issue).
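A minimal sketch of the kind of per-entry comparison involved (function and parameter names are illustrative, not the actual has_filters_obj code):

    #include <string.h>
    #include "hdf5.h"

    /* return 1 if the DCPL's filter at index idx matches the expected
     * filter ID and client-data values, 0 otherwise */
    static int filter_matches(hid_t dcpl, unsigned idx, H5Z_filter_t exp_id,
                              size_t exp_nelmts, const unsigned exp_vals[])
    {
        unsigned int flags;
        size_t       cd_nelmts = 32;
        unsigned     cd_values[32];
        char         name[64];
        unsigned     config;

        H5Z_filter_t id = H5Pget_filter2(dcpl, idx, &flags, &cd_nelmts,
                                         cd_values, sizeof(name), name, &config);

        if (id != exp_id || cd_nelmts != exp_nelmts)
            return 0;
        return memcmp(cd_values, exp_vals, exp_nelmts * sizeof(unsigned)) == 0;
    }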
tested: windows, linux, solaris
Remove all plain calls to H5Gopen() from the source, replacing them with
H5Gopen2() (migration sketch below).
Add test for H5Gopen1().
Reformatted several pieces of code, to clean them up.
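The migration is mechanical; a minimal sketch, assuming a group at an illustrative path:

    #include "hdf5.h"

    static hid_t open_group(hid_t file)
    {
        /* was: return H5Gopen(file, "/group");
         * H5Gopen2 adds a group access property list argument;
         * H5P_DEFAULT reproduces the old behavior */
        return H5Gopen2(file, "/group", H5P_DEFAULT);
    }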
Tested on:
FreeBSD/32 6.2 (duty)
FreeBSD/64 6.2 (liberty)
Linux/32 2.6 (kagiso)
Linux/64 2.6 (smirom)
Solaris/32 5.10 (linew)
Mac OS X/32 10.4.10 (amazon)
Minor tunings to verbose output messages:
1) When there is no filter request, do not print a message saying the filter was not applied because the dataset was too small.
2) Avoid printing the message that lists the objects to modify when there are none.
Tested: linux
Tested platform:
Kagiso only, since it is only a comment block change. If it works on one
machine, it should work on all, I hope. Still need to check the parallel
build on copper.
h5repack support for H5Ocopy in the copying of objects. The old method
for recreating references was dropped (references were recreated in a second
traversal of the file).
The logic for using H5Ocopy or not is:
    IF the input DCPL has filters or a non-default layout, OR these are
    requested by the user, THEN
        use the old h5repack read / write
    ELSE
        use H5Ocopy
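A minimal sketch of the fast path (file and object names are illustrative): one H5Ocopy call copies the object wholesale, attributes included:

    #include "hdf5.h"

    int main(void)
    {
        hid_t fin  = H5Fopen("in.h5", H5F_ACC_RDONLY, H5P_DEFAULT);
        hid_t fout = H5Fcreate("out.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);

        /* copy the object named "dset" without re-reading its raw data
         * through the type conversion / filter pipeline */
        H5Ocopy(fin, "dset", fout, "dset", H5P_DEFAULT, H5P_DEFAULT);

        H5Fclose(fout);
        H5Fclose(fin);
        return 0;
    }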
Fixes for bugs 676, 228
676: both h5repack and h5diff use H5Dread. In the case of a "big"
dataset, use read/write by hyperslabs the same way h5dump does (see the
sketch below). An arbitrary value of 1GB was defined for "big", i.e., if
the dataset is greater than 1GB, then read/write by hyperslabs.
228: use the file type in read/write by default. A new switch -n was
introduced for users who want to use a native type, which was the
previous default.
Added a new test for h5repack that repacks a 1GB dataset
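A minimal sketch of reading a large 1-D dataset in hyperslab-sized passes instead of one H5Dread (slab size and element type are illustrative):

    #include <stdlib.h>
    #include "hdf5.h"

    static void read_by_hyperslabs(hid_t dset, hsize_t nelmts)
    {
        const hsize_t SLAB = 1024 * 1024;   /* elements per pass */
        hsize_t zero   = 0;
        int    *buf    = malloc(SLAB * sizeof(int));
        hid_t   fspace = H5Dget_space(dset);
        hid_t   mspace = H5Screate_simple(1, &SLAB, NULL);

        for (hsize_t off = 0; off < nelmts; off += SLAB) {
            hsize_t count = (nelmts - off < SLAB) ? nelmts - off : SLAB;

            H5Sselect_hyperslab(fspace, H5S_SELECT_SET, &off, NULL, &count, NULL);
            H5Sselect_hyperslab(mspace, H5S_SELECT_SET, &zero, NULL, &count, NULL);
            H5Dread(dset, H5T_NATIVE_INT, mspace, fspace, H5P_DEFAULT, buf);
            /* ...process the slab, or H5Dwrite it to the output dataset... */
        }

        H5Sclose(mspace);
        H5Sclose(fspace);
        free(buf);
    }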
Tested: heping (serial, parallel), sol, copper
Add "use the latest format" support for dataspace object header encode/
decode routines and clean up format a bit for the latest format (new to 1.8.x
releases)
Remove storing the 'perm' parameter for array datatypes in memory and the file,
and add a test to make certain that if any user applications are attempting to
store them, we get some reports back. (Should be unlikely, since the RefMan
says that the parameter is not implemented and is unsupported.)
Carry those changes into the tests, etc.
Clean up a bunch more compiler warnings.
Tested on:
FreeBSD/32 4.11 (sleipnir) w/threadsafe
Linux/32 2.4 (heping) w/FORTRAN & C++
Linux/64 2.4 (mir) w/enable-1.6-compat
Code cleanup
Description:
Trim trailing whitespace in Makefile.am and C/C++ source files to make
diffing changes easier.
Platforms tested:
None necessary, whitespace only change
new feature
Description:
some more check-ins related to the printing of compression ratios: print warning messages after printing the dataset name and compression
Solution:
Platforms tested:
linux
solaris
AIX
Misc. update:
bug fix
Description:
h5repack was not dealing with family files
Solution:
use the tools library function h5tools_open to open the file in h5repack instead of H5Fopen
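For context, a minimal sketch of why a plain H5Fopen fails on a family file: the family driver must be set on the file access property list first (member size and name pattern are illustrative); h5tools_open does this kind of driver probing for the caller:

    #include "hdf5.h"

    int main(void)
    {
        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);

        /* 1 GiB members, default access properties for each member file */
        H5Pset_fapl_family(fapl, (hsize_t)1024 * 1024 * 1024, H5P_DEFAULT);

        /* the name is a printf-style pattern naming the member files */
        hid_t file = H5Fopen("big_file_%d.h5", H5F_ACC_RDONLY, fapl);

        H5Fclose(file);
        H5Pclose(fapl);
        return 0;
    }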
Platforms tested:
linux
solaris
AIX
Misc. update:
Code cleanup
Description:
Check in some of the code cleanups from working on the external link
support. (This doesn't include any of the external link features)
Platforms tested:
FreeBSD 4.11 (sleipnir)
Mac OSX.4 (amazon)
Linux 2.4
Description: VMS doesn't like file names with more than one "."
Some h5repacktst output file names were of the form
<name>.out.h5 causing h5repacktst to choke.
Solution: Renamed output files to be of the form <name>out.h5
Platforms tested: heping, unnamed VMS machine
Misc. update:
new features
Description:
added support for the scale/offset filter
there is a new filter symbol 'SOFF'
-f SOFF=<scale_factor,scale_type>
scale_factor = integer
scale_type = 'IN' or 'DS'
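A minimal sketch of the library call this option presumably drives, assuming 'IN' maps to H5Z_SO_INT and 'DS' to H5Z_SO_FLOAT_DSCALE (file and dataset names are illustrative; the filter requires a chunked layout):

    #include "hdf5.h"

    int main(void)
    {
        hsize_t dims[1]  = {100};
        hsize_t chunk[1] = {20};

        hid_t file  = H5Fcreate("soff.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
        hid_t space = H5Screate_simple(1, dims, NULL);
        hid_t dcpl  = H5Pcreate(H5P_DATASET_CREATE);

        H5Pset_chunk(dcpl, 1, chunk);
        H5Pset_scaleoffset(dcpl, H5Z_SO_INT, 4);   /* integer data, scale factor 4 */

        hid_t dset = H5Dcreate2(file, "dset", H5T_NATIVE_INT, space,
                                H5P_DEFAULT, dcpl, H5P_DEFAULT);

        H5Dclose(dset);
        H5Pclose(dcpl);
        H5Sclose(space);
        H5Fclose(file);
        return 0;
    }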
Solution:
Platforms tested:
Linux
SunOS
Misc. update:
bug fix
Description:
during the generation of some test files, H5Fclose was not called
during the #ifdef detection of the scale offset filter, a wrong macro symbol was used
Solution:
Platforms tested:
linux
Misc. update:
Code cleanup
Description:
Trim trailing whitespace, which is making 'diff'ing the two branches
difficult.
Solution:
Ran this script in each directory:
foreach f (*.[ch] *.cpp)
sed 's/[[:blank:]]*$//' $f > sed.out && mv sed.out $f
end
Platforms tested:
FreeBSD 4.11 (sleipnir)
Too minor to require h5committest
Bug fix
Description:
The GASS VFL driver header file was bringing in the <string.h> header file,
which several other source code modules also needed but weren't including
explicitly themselves.
Solution:
Add includes for <string.h> to files which actually need them.
Platforms tested:
FreeBSD 4.11 (sleipnir) w/C++ as CC
Configuration not tested by h5committest...
feature
Description:
h5repack support for scaleoffset compression
Checking in early to help debug the filter.
Solution:
Added messages and command line handling for the new scale offset filter.
Note: TESTS ARE DISABLED FOR NOW. The filter is not
complete; repack tests may fail due to known problems.
PLEASE DO NOT MESS WITH THE SCALEOFFSET TESTS AT THIS TIME.
They will be enabled when the filter is ready.
Platforms tested:
verbena,copper,shanti
Misc. update:
MANIFEST
bug fix
Description:
one case was not handled in the combination of input options (layout and filters)
Solution:
redo the algorithm that handles all cases
Platforms tested:
linux
Misc. update:
bug fix
Description:
when specifying both a filter for an input object (e.g. -f mydset:GZIP=1) and a defined chunk (-l CHUNK=20x20),
the filter used a default chunk instead of the defined one
Solution:
add a check for the input chunk
Platforms tested:
linux (small change)
Misc. update:
new test
Description:
added a test that generates and copies a file with a dataset with a fill value
(this is to test the property list function H5Pequal)
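A minimal sketch of the comparison the test exercises (function name is illustrative): the input and output datasets' creation property lists, fill value included, should compare equal after the copy:

    #include "hdf5.h"

    /* return 1 if two datasets' creation property lists compare equal */
    static int dcpls_equal(hid_t dset_in, hid_t dset_out)
    {
        hid_t dcpl_in  = H5Dget_create_plist(dset_in);
        hid_t dcpl_out = H5Dget_create_plist(dset_out);

        htri_t eq = H5Pequal(dcpl_in, dcpl_out);   /* >0 equal, 0 not equal, <0 error */

        H5Pclose(dcpl_out);
        H5Pclose(dcpl_in);
        return eq > 0;
    }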
Solution:
Platforms tested:
linux
solaris
aix
Misc. update:
new feature
Description:
added a check that the chunk size must be smaller than the pixels per block in an SZIP request;
if the condition is not met, print a message and exit
Solution:
Platforms tested:
linux
aix
solaris
Misc. update: