Minor h5ls bugfix for compound types with array members.
(Similar changes shortly for 1.4 branch).
2002-04-04 16:39:09 Robb Matzke <matzke@arborea.spizella.com>
	* display_cmpd_type: Removed the code that displayed array dimensions,
	  since it was a holdover from the days when HDF5
	  didn't have an array datatype and it duplicated what
	  display_array_type() already does.
Code removal
Description:
Removed the HDF4 source files from the HDF5 tree. The directories
will remain. Use the "-P" option when doing a cvs checkout or update
to "prune" the empty directories from your personal tree.
Code Motion
Description:
Removal of HDF4 from the configure/Makefiles. This is a precursor to
the actual physical removal of the HDF4 tools from the HDF5 tree.
Platforms tested:
Arabica, Dangermouse
Bug Fix
Description:
Some -I paths weren't included in the h5cc script. That would cause
the compiler to fail if it needed to find the GASS header files or
the like.
Solution:
Added the CPPFLAGS macro to the h5cc.in file so that it will be there
when h5cc is generated. This also pulls in some -D options the library
was compiled with, like the LFS flags on Linux.
Also changed the configure* files so that they "chmod" the created
h5cc file to 755 (executable), since that wasn't happening all the
time...
Platforms tested:
Linux
Bug Fix
Description:
There was a problem with deeply nested groups. We could only handle
object names of up to 1024 characters, but, especially in a parallel
program, the nesting can easily produce names longer than that.
Solution:
I made the "objname" part of the obj_t structure a pointer instead of
a fixed size. Added code to allocate/deallocate the memory we need
for it. Had to fix how the "prefix" was being handled in the h5dump
program. It was also set to only 1024 characters in length. I made it
dynamic.
Added a test case...Go me!
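Roughly the shape of the change (a minimal sketch; field names other than
"objname" and the use of strdup()/free() are illustrative, not the exact
h5dump code):

    #include <stdlib.h>
    #include <string.h>

    /* Before: a fixed-size buffer capped the full object name at 1024 chars:
     *     typedef struct obj_t { char objname[1024]; ... } obj_t;
     * After: the name is a pointer, allocated to fit however deep the nesting is. */
    typedef struct obj_t {
        char *objname;                /* allocated per object, freed with the table */
        /* ... other fields (object number, type, ...) ... */
    } obj_t;

    static void set_objname(obj_t *obj, const char *name)
    {
        obj->objname = strdup(name);  /* allocate exactly what is needed */
    }

    static void free_objname(obj_t *obj)
    {
        free(obj->objname);
        obj->objname = NULL;
    }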
Platforms tested:
Linux, Solaris
Oops
Description:
I added files for testing the group comments dumping feature, but
didn't actually add it to the testh5dump.sh script.
Solution:
Added it.
Platforms tested:
Linux
Bug Fix
Description:
The library path relied upon the "exec_prefix" variable, but we
weren't including that variable in the h5cc script.
Solution:
Added it.
Platforms tested:
Linux
Purpose:
Small Fix
Description:
Changed how the list of drivers was listed, from:
{ "driver1",
"driver2",
/* ... */
"drivern",
#ifdef FOO
"driverm"
#endif
};
to
{ "driver1"
,"driver2"
/* ... */
,"drivern"
#ifdef FOO
,"driverm"
#endif
};
since it's a nicer way of doing the same thing without the annoying
warning about an extraneous trailing comma when FOO isn't defined.
Platforms tested:
Dangermouse, Modi4, Kelgia
Purpose:
Feature Add
Description:
Added support for dumping Group Comments. This involved a
modification of the DDL as well.
Solution:
Stole code from h5ls and put it in h5dump. The ddl.html file was
updated as usual, and a test was created...
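A minimal sketch of the kind of call involved (simplified; the printed DDL
keyword and the buffer size here are illustrative, not the exact h5dump
output):

    #include <stdio.h>
    #include "hdf5.h"

    /* Print a group's comment, roughly what h5ls already did and what
     * h5dump now does too (error handling omitted). */
    static void dump_group_comment(hid_t group_id)
    {
        char comment[256];

        if (H5Gget_comment(group_id, ".", sizeof(comment), comment) > 0)
            printf("   COMMENT \"%s\"\n", comment);
    }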
Platforms tested:
Dangermouse, Modi4, Kelgia
Purpose:
Bug fix
Description:
The "make depend" command wasn't propagating down into the tools/
directories.
Solution:
Added the "depend:" part to the Makefile in the tools/ subdirectory.
Platforms tested:
Linux
Removing the DPSS (gridstorage) driver source code.
Description:
The DPSS (Grid-Storage) driver is retired.
Removed the --with-gridstorage option from configure.in.
CVS-removed the following files:
./src/H5FDdpss.c
./src/H5FDdpss.h
./test/dpss_read.c
./test/dpss_write.c
Regenerated the Dependencies files (some had to be hand-edited since
'make depend' did not cover them).
Removed the reference to the DPSS virtual file driver from H5F.c.
Platforms tested:
modi4 (Parallel; -with-gass=...), eirene, arabica (fortran, cxx).
Bug Fix.
Description:
The H5Rget_object_type function could not get the object type for dataset
region references.
Solution:
Added a new function, H5Rget_obj_type, to replace H5Rget_object_type.
The new function requires the reference type as an additional parameter,
in order to allow queries on different reference types to be performed
correctly.
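A sketch of the difference from the caller's point of view (assuming a
dataset region reference "ref" read from a dataset "did"; names are
illustrative):

    #include "hdf5.h"

    /* Old API: the reference type could not be supplied, so dataset region
     * references could not be resolved:
     *     obj_type = H5Rget_object_type(did, &ref);
     * New API: the reference type is an explicit parameter. */
    static H5G_obj_t region_ref_target_type(hid_t did, hdset_reg_ref_t *ref)
    {
        return H5Rget_obj_type(did, H5R_DATASET_REGION, ref);
    }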
Platforms tested:
FreeBSD 4.4. (sleipnir)
Purpose:
parameter file
Description:
Handles parameters for various special cases. Users do not even have to
use this file if they don't have these special needs:
1) memory optimization (should always be set to 0 unless converting a huge
SDS array)
2) an unlimited dimension with a current size of zero; users can use this
file to define their chunk size.
Solution:
Platforms tested:
RedHat Zoot 6.2
Purpose:
a bug fix
a feature added
Description:
1. Conversion of unlimited-dimension data whose current dimensional size is 0.
2. Use of a parameter file to control some special cases:
1) To subdivide a huge array into hyperslabs, a memory buffer size has to
be set.
2) When the current dimensional size is 0, a default chunk size can be set.
Solution:
1. Old approach: the current dimensional size was set to H5S_UNLIMITED, which is
a huge number, and the default chunk size was set to a *FIXED* default value.
2. New approach: the current dimensional size is set to the actual current value, 0,
and users can provide the chunk size through a parameter file.
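A minimal sketch of the new approach on the HDF5 side (the dataset name,
element type, and chunk value are illustrative; in the converter the chunk
size would come from the parameter file):

    #include "hdf5.h"

    static hid_t create_extensible_dset(hid_t file_id, hsize_t chunk)
    {
        hsize_t cur_dims[1]   = {0};              /* current size really is 0 */
        hsize_t max_dims[1]   = {H5S_UNLIMITED};  /* but still extensible     */
        hsize_t chunk_dims[1] = {chunk};          /* user-supplied chunk size */
        hid_t   space, dcpl, dset;

        space = H5Screate_simple(1, cur_dims, max_dims);
        dcpl  = H5Pcreate(H5P_DATASET_CREATE);
        H5Pset_chunk(dcpl, 1, chunk_dims);        /* chunking is required for
                                                   * unlimited dimensions     */
        dset  = H5Dcreate(file_id, "converted_sds", H5T_NATIVE_INT, space, dcpl);
        H5Pclose(dcpl);
        H5Sclose(space);
        return dset;
    }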
Platforms tested:
eirene
Purpose:
bug fixed
Description:
HDF4 Vdata is extensible; when converted, the new HDF5 dataset should also be extensible.
Solution:
Give the HDF5 dataset converted from a Vdata unlimited dimensions.
The three testing files are updated accordingly.
Platforms tested:
Linux, RedHat 6.2
Backward Compatibility Fix
Description:
One of H5P[gs]et_buffer's parameters changed between v1.4 and the
development branch.
Solution:
Added v1.4 compat stuff around H5P[gs]et_buffer implementation and testing
to allow v1.4.x users to continue to use their source code without
modification.
These changes are for everything except the FORTRAN wrappers - I spoke with
Elena and she will make the FORTRAN wrapper changes.
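A sketch of the difference (assuming the changed parameter is the buffer
size, which appears to have gone from hsize_t in v1.4 to size_t on the
development branch; treat the exact types shown as illustrative):

    #include "hdf5.h"

    /* v1.4.x prototype (buffer size is an hsize_t):
     *     herr_t H5Pset_buffer(hid_t plist_id, hsize_t size, void *tconv, void *bkg);
     * development branch prototype (buffer size is a size_t):
     *     herr_t H5Pset_buffer(hid_t plist_id, size_t size, void *tconv, void *bkg);
     * With the v1.4 compat code enabled, v1.4-style calls keep compiling: */
    static void set_1mb_buffer(hid_t dxpl_id)
    {
        hsize_t size = 1024 * 1024;   /* hsize_t, as a v1.4 application would use */

        H5Pset_buffer(dxpl_id, size, NULL, NULL);
    }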
Platforms tested:
FreeBSD 4.4 (hawkwind)
Code cleanup
Description:
Windows generates hundreds of warnings from some of the practices in
the library. Mostly, they arise because size_t is 32-bit and hsize_t is
64-bit on Windows and we were carelessly casting the larger values down to
the smaller ones without checking for overflow.
Also some other small code cleanups, etc.
Solution:
Re-worked some algorithms to eliminate the casts and also added more
overflow checking for assignments and function parameters which needed
casts.
Kent did most of the work; I just went over his changes and fit them into
the library code a bit better.
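A minimal sketch of the kind of checked assignment involved (this is the
general pattern, not the library's actual macro or code):

    #include <assert.h>
    #include <stdlib.h>
    #include "hdf5.h"

    static void *alloc_from_hsize(hsize_t nbytes)
    {
        size_t small;

        /* On Windows size_t is 32-bit while hsize_t is 64-bit, so a bare
         * cast can silently truncate; verify the value fits first. */
        assert(nbytes == (hsize_t)(size_t)nbytes);
        small = (size_t)nbytes;

        return malloc(small);
    }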
Platforms tested:
FreeBSD 4.4 (hawkwind)
Purpose:
A new feature
Description:
While testing the h4toh5 utility with real NASA files, we found an example
where one data array (one SDS) is so big that it exceeds the physical memory
of some machines (>128 MB) and the conversion failed. Until smarter hyperslab
handling is available, I divide the whole SDS into smaller hyperslabs, each
hyperslab proportional to the original SDS array dimensions. For example, a
three-dimensional array with 1000*1000*1000 elements can be divided into
eight 500*500*500 pieces; I read and write each piece and keep track of its
starting and ending points. In this way the memory allocation failure can be
avoided, although it may not be the most efficient approach.
I've tested this feature using an SDS without chunking and it works fine.
However, when testing an SDS with chunking, it is extremely slow. This
happens to be a bug in the HDF5 library right now; Quincey may fix it later
and give me a more efficient way to handle the problem. Currently all my
testing files have UNLIMITED dimensions, so in HDF5 the chunking feature
is required.
So by default, this feature is not turned on.
Solution:
see the above
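Roughly the HDF5 side of the piece-by-piece write (a simplified sketch with
made-up sizes and no error checking; the real code reads the corresponding
SDS piece before each write):

    #include "hdf5.h"

    #define NROWS   1000
    #define NCOLS   1000
    #define BLKROWS 100                     /* rows per hyperslab piece */

    static void write_in_pieces(hid_t dset, hid_t filespace)
    {
        static int buf[BLKROWS][NCOLS];     /* only one piece in memory at a time */
        hsize_t    count[2] = {BLKROWS, NCOLS};
        hsize_t    start[2] = {0, 0};
        hid_t      memspace = H5Screate_simple(2, count, NULL);

        for (start[0] = 0; start[0] < NROWS; start[0] += BLKROWS) {
            /* ... read the next BLKROWS x NCOLS piece of the SDS into buf ... */
            H5Sselect_hyperslab(filespace, H5S_SELECT_SET, start, NULL, count, NULL);
            H5Dwrite(dset, H5T_NATIVE_INT, memspace, filespace, H5P_DEFAULT, buf);
        }
        H5Sclose(memspace);
    }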
Platforms tested:
linux 2.2.18
Kludge
Description:
Since we're only about halfway through converting the internal use of
property lists from the "old way" to the generic property lists, we turned
off snapshots to avoid exposing lots of API changes to users, until the
APIs settled down.
Getting the snapshots rolling again seems to have become a priority, so
some changes that were going to be postponed until we were completely
finished with the conversion have to be made now. This requires that the
old API functions be able to deal with both the old and new property lists
smoothly.
Solution:
Kludge the property list code together so that it can transparently handle
both the old and new property lists.
Platforms tested:
FreeBSD 4.4 (hawkwind)
Code cleanup for better compatibility with C++ compilers
Description:
C++ compilers are choking on our C code for various reasons:
we used our UNUSED macro incorrectly when referring to pointer types;
we used various C++ keywords as variable names, etc.;
we incremented enums with the ++ operator.
Solution:
Changed variables, etc. to avoid C++ keywords (new, class, typename, typeid,
template).
Fixed usage of UNUSED macro from this:
char UNUSED *c
to this:
char * UNUSED c
Switched the enums from x++ to x=x+1
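For example (the enum here is just illustrative, not necessarily one of the
ones that was changed):

    #include "hdf5.h"

    /* C++ compilers reject applying ++ to an enum, so loops that stepped an
     * enum with c++ were rewritten with an explicit assignment. */
    static void visit_classes(void)
    {
        H5T_class_t c;

        for (c = H5T_INTEGER; c < H5T_NCLASSES; c = c + 1) {
            /* ... per-class work ... */
        }
    }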
Platforms tested:
FreeBSD 4.4 (hawkwind)
Purpose:
Bug Fix
Description:
Some so-called "operating systems" (*cough*Windows*cough*) can't
handle large string sizes.
Solution:
Replaced the Usage string with individual strings, each printed with its
own fprintf() call.
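The shape of the change (the tool name and option lines here are made up
for illustration):

    #include <stdio.h>

    /* Before: one huge string literal passed to a single fprintf(), which
     * overflows the maximum literal length of some compilers.
     * After: one short fprintf() per line. */
    static void usage(FILE *out)
    {
        fprintf(out, "usage: tool [OPTIONS] file\n");
        fprintf(out, "   -h   Print a usage message and exit\n");
        fprintf(out, "   -i   Print the object ids\n");
        /* ... one call per remaining line of the usage text ... */
    }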
Platforms tested:
Linux
Purpose:
Bug Fix
Description:
The object IDs command-line option wasn't being picked up.
Solution:
The wrong flag was being checked for. Changed the flag from "v" to
"i", which is what the documentation says.
Platforms tested:
Linux
Purpose:
add another test file
Description:
Solution:
Platforms tested:
eirene, arabica
Purpose:
Add another test file
Description:
Solution:
Platforms tested:
eirene, arabica
Purpose:
a bug fix
Description:
Changed PIXEL_INTERLACE to INTERLACE_PIXEL and other interlace mode
descriptions to match the image specification.
Solution:
Platforms tested:
eirene, sol2.7
Purpose:
1. fix a bug
2. turn off a feature
Description:
1. Changed the output argument of GRgetiminfo from NULL to &interlace_mode.
2. Turned off the feature that changes line-interleaved images into
pixel-interleaved images, since inconsistent behaviour was found in the
GR interface.
Solution:
see above
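A sketch of change 1 (variable names, buffer size, and the include are
illustrative; ri_id is an already-opened GR raster image):

    #include "hdf.h"

    static int32 get_interlace(int32 ri_id)
    {
        char  gr_name[256];
        int32 ncomp, data_type, interlace_mode, dim_sizes[2], n_attrs;

        /* Previously the interlace argument was passed as NULL, so the
         * converter never learned how the image was interleaved. */
        GRgetiminfo(ri_id, gr_name, &ncomp, &data_type, &interlace_mode,
                    dim_sizes, &n_attrs);
        return interlace_mode;
    }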
Platforms tested:
eirene, arabica
Purpose:
Add a real 24-bit raster test for interlace mode.
Description:
1. The GR interfaces will never create an HDF4 file with an interlace mode
other than pixel-interleaved; the DF24 interfaces can create HDF4 files with
different interlacing. There are inconsistent behaviors between GRreqimageil
and GRreadimage: data read into memory does not behave properly if a new
interlace mode is requested.
2. Currently the HDF5 image spec supports pixel-interleaved and
plane-interleaved. We made a real image file to test whether the converter
is doing the right thing.
Solution:
We use the DF24 APIs to generate a real image file that can be tested with
H5view.
Platforms tested:
linux and sol2.7
Code cleanup (sorta)
Description:
When the first versions of the HDF5 library were designed, I vividly
remembered the difficulties of porting code from a 32-bit platform to a
16-bit platform and asked that people use intn & uintn instead of int &
unsigned int, respectively. However, in hindsight, this was overkill and
unnecessary, since we were never going to port the HDF5 library to
16-bit architectures.
Currently, the extra uintn & intn typedefs are causing problems for users
who'd like to include both the HDF5 and HDF4 header files in one source
module (like Kent's h4toh5 library).
Solution:
Changed the uintn & intn's to unsigned and int's respectively.
Platforms tested:
FreeBSD 4.4 (hawkwind)
Improvement
Description:
The stdout and stderr streams were both redirected to one output file. This
works fine on traditional sequential Unix machines, but on some
parallel systems (like MPI jobs on the IBM SP), stderr is merged
with stdout all right but not in the exact order expected. This
is not deterministic in parallel jobs, so the test output is
all there but the ordering may not be as expected.
Solution:
Redirect stderr to a separate file and append it to the stdout
file after the test command is executed, then compare that with
the expected output. This eliminates the assumption that
stdout and stderr are merged in "chronological order".
The .ddl files were updated by moving all stderr text to the end of
each file.
Platforms tested:
eirene.
Improvement
Description:
The stdout and stderr streams were both redirected to one output file. This
works fine on traditional sequential Unix machines, but on some
parallel systems (like MPI jobs on the IBM SP), stderr is merged
with stdout all right but not in the exact order expected. This
is not deterministic in parallel jobs, so the test output is
all there but the ordering may not be as expected.
Solution:
Redirect stderr to a separate file and append it to the stdout
file after the test command is executed, then compare that with
the expected output. This eliminates the assumption that
stdout and stderr are merged in "chronological order".
Platforms tested:
tested in v1.4. Folded it into v1.5.
Purpose:
Check in a second time to update the handling of data transfer in h4toh5.
This makes up for the CVS conflict at check-in a couple of hours ago.
Description:
Solution:
Platforms tested:
eirene
Purpose:
1) Fix the implementation of images according to the image specification.
2) Fix two bugs in the SDS implementation: the first is to handle an
unlimited SDS whose first dimensional size is set to 0; the second is to
change the way the HDF5 dataset is written.
Description:
1) Mapping 24-bit images to 3D arrays instead of a 2D compound datatype.
2) Previously forgot to consider an unlimited SDS with the size set to 0.
3) H5Pset_buffer does not seem to work well for an extremely small size.
Solution:
1) see above.
2) add a special case to deal with this.
3) don't use H5Pset_buffer.
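A sketch of change 1 (dataset name, sizes, and element type are
illustrative):

    #include "hdf5.h"

    /* Write a 24-bit image as a height x width x 3 array of unsigned chars
     * instead of a 2-D array of a 3-member compound type. */
    static hid_t create_image24(hid_t file_id, hsize_t height, hsize_t width)
    {
        hsize_t dims[3] = {height, width, 3};    /* third axis: R, G, B */
        hid_t   space   = H5Screate_simple(3, dims, NULL);
        hid_t   dset    = H5Dcreate(file_id, "Image24", H5T_NATIVE_UCHAR,
                                    space, H5P_DEFAULT);

        H5Sclose(space);
        return dset;
    }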
Platforms tested:
RedHat Zoot 6.2
More code cleanups
Description:
Wrap up the code cleanups for changing the dataset transfer property lists
over to using the generic property list code.
Platforms tested:
IRIX64 6.5 (modi4)
Code cleanups, mostly..
Description:
Work on pacifying the SGI compiler to get the generic properties working
correctly with --enable-parallel and --enable-fortran. It's not quite
fixed yet, but I need to head home and these patches help... :-/
Platforms tested:
IRIX64 6.5 (modi4)