HDF5 HISTORY
============
CONTENTS

I.    Release Information for hdf5-1.2.2
II.   Release Information for hdf5-1.2.1
III.  Release Information for hdf5-1.2.0
      A. Platforms Supported
      B. Known Problems
      C. Changes Since Version 1.0.1
         1. Documentation
         2. Configuration
         3. Debugging
         4. Datatypes
         5. Dataspaces
         6. Persistent Pointers
         7. Parallel Support
         8. New API Functions
            a. Property List Interface
            b. Dataset Interface
            c. Dataspace Interface
            d. Datatype Interface
            e. Identifier Interface
            f. Reference Interface
            g. Ragged Arrays
         9. Tools
IV.   Changes from Release 1.0.0 to Release 1.0.1
V.    Changes from the Beta 1.0.0 Release to Release 1.0.0
VI.   Changes from the Second Alpha 1.0.0 Release to the Beta 1.0.0 Release
VII.  Changes from the First Alpha 1.0.0 Release to the Second Alpha 1.0.0
      Release
[Search on the string '%%%%' for per-release section breaks.]
-----------------------------------------------------------------------
%%%%1.2.2%%%% Release Information for hdf5-1.2.2 (6/23/00)
I. Release Information for hdf5-1.2.2
INTRODUCTION
This document describes the differences between HDF5-1.2.1 and
HDF5-1.2.2, and contains information on the platforms where HDF5-1.2.2
was tested and known problems in HDF5-1.2.2.
The HDF5 documentation can be found on the NCSA ftp server
(ftp.ncsa.uiuc.edu) in the directory:
/HDF/HDF5/docs/
For more information look at the HDF5 home page at:
http://hdf.ncsa.uiuc.edu/HDF5/
If you have any questions or comments, please send them to:
hdfhelp@ncsa.uiuc.edu
CONTENTS
- Features Added since HDF5-1.2.1
- Bug Fixes since HDF5-1.2.1
- Known Problems
- Platforms Tested
Features Added since HDF5-1.2.1
===============================
* Added internal free lists to reduce the memory required by the library,
  along with the H5garbage_collect API function for releasing that memory
  on request (a usage sketch follows this list).
* h5dump displays opaque and bitfield types.
* New features added to snapshots. Use 'snapshot help' to see a
complete list of features.
* Improved configure to detect if MPIO routines are available when
parallel mode is requested.
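
A minimal, hedged sketch of how an application might use the new call; the
file name is arbitrary and error checking is omitted:

    #include "hdf5.h"

    int main(void)
    {
        hid_t file = H5Fcreate("example.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);

        /* ... create, use, and close many datasets, groups, and datatypes ... */

        /* Ask the library to release memory held on its internal free lists. */
        H5garbage_collect();

        H5Fclose(file);
        return 0;
    }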
Bug Fixes since HDF5-1.2.1
==========================
* h5dump correctly displays compound datatypes, including simple and
nested compound types.
* h5dump correctly displays the committed copy of predefined types.
* Corrected an error in h5toh4 that caused 32-bit integers to be converted
  incorrectly from HDF5 to HDF4 on the T3E platform.
* Corrected a floating-point conversion error on the Cray J90 platform that
  caused the value 0.0 to be converted incorrectly.
* Fixed an error in H5Giterate that did not update the "index" parameter
  correctly.
* Fixed an error in hyperslab iteration that walked through the wrong
  sequence of array elements when hyperslabs were staggered in certain
  patterns.
* Fixed several other problems in the hyperslab iteration code.
* Fixed another H5Giterate bug that caused groups containing large numbers
  of objects to misbehave when the callback function returned non-zero
  values.
* Changed the return type of H5Aiterate and of the H5A_operator_t typedef
  to herr_t, to align them with the dataset and group iterator functions
  (see the callback sketch after this list).
* Changed H5Screate_simple and H5Sset_extent_simple to not allow dimensions
of size 0 without the same dimension being unlimited.
* Improved metadata hashing & caching algorithms to avoid
many hash flushes and also removed some redundant I/O when moving metadata
blocks in the file.
* The libhdf5.settings file now shows the correct machine byte order.
* The "struct(opt)" type conversion function which gets invoked for
certain compound datatype conversions was fixed for nested compound
types. This required a small change in the datatype conversion
function API.
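
To illustrate the H5Aiterate change noted above, a minimal, hedged callback
sketch (the names print_attr and list_attrs are illustrative only): the
attribute operator now returns herr_t, like the group iterator.

    #include <stdio.h>
    #include "hdf5.h"

    /* Attribute iteration callback: with this release it returns herr_t. */
    static herr_t print_attr(hid_t loc_id, const char *attr_name, void *op_data)
    {
        (void)loc_id;
        (void)op_data;
        printf("attribute: %s\n", attr_name);
        return 0;   /* 0 = continue; a non-zero value stops the iteration */
    }

    /* Iterate over all attributes attached to an open object. */
    static void list_attrs(hid_t obj_id)
    {
        unsigned idx = 0;
        H5Aiterate(obj_id, &idx, print_attr, NULL);
    }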
Known Problems
==============
o SunOS 5.6 with C WorkShop Compilers 4.2: hyperslab selections will
  fail if the library is compiled with any level of optimization.
o TFLOPS: the dsets test fails if compiled with optimization turned on.
o J90: the tools fail to display data for datasets with a compound datatype.
Platforms Tested
================
Platform                       Compilers / libraries                 Comment
--------                       ---------------------                 -------
AIX 4.3.3 (IBM SP)             3.6.6                                 binaries are
                               mpicc using mpich 1.1.2               not available
                               mpicc_r using IBM MPI-IO prototype
AIX 4.3.2.0 (IBM SP)           xlc 5.0.1.0
Cray J90 10.0.0.7              cc 6.3.0.2
Cray T3E 2.0.5.29              cc 6.3.0.2
                               mpt.1.3
FreeBSD 4.0                    gcc 2.95.2
HP-UX B.10.20                  HP C HP92453-01 A.10.32
HP-UX B.11.00                  HP92453-01 A.11.00.13 HP C Compiler
                               (static library only, h5toh4 tool
                               is not available)
IRIX 6.5                       MIPSpro cc 7.30
IRIX64 6.5 (64 & n32)          MIPSpro cc 7.3.1m
                               mpt.1.4
Linux 2.2.10 SMP               gcc 2.95.1
                               mpicc (gcc-2.95.1)
                               gcc (egcs-2.91.66)
                               mpicc (egcs-2.91.66)
Linux 2.2.16 (RedHat 6.2)      gcc 2.95.2
OSF1 V4.0                      DEC-V5.2-040
SunOS 5.6                      cc WorkShop Compilers 5.0, no optimization
SunOS 5.7                      cc WorkShop Compilers 5.0
SolarisX86 SunOS 5.5.1         gcc version 2.7.2 with --disable-hsizet
TFLOPS 3.2.1                   pgcc Rel 3.1-3i
                               mpich-1.1.2 with local changes
Windows NT4.0 sp5              MSVC++ 6.0
Windows 98                     MSVC++ 6.0
Windows 2000                   MSVC++ 6.0
%%%%1.2.1%%%% Release Information for hdf5-1.2.1
II. Release Information for hdf5-1.2.1
Bug fixes since HDF5-1.2.0
==========================
Configuration
-------------
* The hdf5.h include file was fixed to allow the HDF5 Library to be compiled
with other libraries/applications that use GNU autoconf.
* Configuration for parallel HDF5 was improved. Configure now attempts to
  link with libmpi.a and/or libmpio.a as the MPI libraries by default, and
  uses "mpirun" to launch MPI tests by default. It tests linking of the
  MPIO routines during the configuration stage rather than failing later,
  as before. One can simply run "./configure --enable-parallel" if the MPI
  library is in the system library path.
Library
-------
* Fixed an error that prevented dataset region references from having
  their regions retrieved correctly.
* Added internal free lists to reduce the memory required by the library,
  along with the H5garbage_collect API function.
* Fixed an error in H5Giterate that did not update the "index" parameter
  correctly.
* Fixed an error in hyperslab iteration that walked through the wrong
  sequence of array elements when hyperslabs were staggered in certain
  patterns.
* Fixed several other problems in the hyperslab iteration code.
Tests
------
* Added additional tests for group and attribute iteration.
* Added additional test for staggered hyperslab iteration.
* Added additional test for random 5-D hyperslab selection.
Tools
------
* Added an option, -V, to show the version information of h5dump.
* Fixed a bug that caused h5toh4 to dump core when executed on platforms
  such as TFLOPS.
* The h5toh4 test script previously could not detect that the hdp dumper
  command was invalid. It now detects and reports failures of hdp
  execution.
Documentation
-------------
* User's Guide and Reference Manual were updated.
See doc/html/PSandPDF/index.html for more details.
Platforms Tested:
================
Note: Due to the nature of bug fixes, only static versions of the library and tools were tested.
Platform                       Compilers / libraries
--------                       ---------------------
AIX 4.3.2 (IBM SP)             3.6.6
Cray T3E 2.0.4.81              cc 6.3.0.1
                               mpt.1.3
FreeBSD 3.3-STABLE             gcc 2.95.2
HP-UX B.10.20                  HP C HP92453-01 A.10.32
IRIX 6.5                       MIPSpro cc 7.30
IRIX64 6.5 (64 & n32)          MIPSpro cc 7.3.1m
                               mpt.1.3 (SGI MPI 3.2.0.0)
Linux 2.2.10 SuSE              egcs-2.91.66 configured with
  (i686-pc-linux-gnu)            --disable-hsizet
                               mpich-1.2.0 egcs-2.91.66 19990314/Linux
OSF1 V4.0                      DEC-V5.2-040
SunOS 5.6                      cc WorkShop Compilers 4.2, no optimization
SunOS 5.7                      cc WorkShop Compilers 5.0
TFLOPS 2.8                     cicc (pgcc Rel 3.0-5i)
                               mpich-1.1.2 with local changes
Windows NT4.0 sp5              MSVC++ 6.0
Known Problems:
==============
o SunOS 5.6 with C WorkShop Compilers 4.2: Hyperslab selections will
  fail if the library is compiled with any level of optimization.
%%%%1.2.0%%%% Release Information for hdf5-1.2.0
III. Release Information for hdf5-1.2.0
A. Platforms Supported
-------------------
The operating systems listed below, with compiler and MPI library
information where applicable, are the systems on which HDF5 1.2.0 was
tested.

                               Compiler & libraries
Platform                       Information                      Comment
--------                       --------------------             -------
AIX 4.3.2 (IBM SP)             3.6.6
Cray J90 10.0.0.6              cc 6.3.0.0
Cray T3E 2.0.4.61              cc 6.2.1.0
                               mpt.1.3
FreeBSD 3.2                    gcc 2.95.1
HP-UX B.10.20                  HP C HP92453-01 A.10.32
                               gcc 2.8.1
IRIX 6.5                       MIPSpro cc 7.30
IRIX64 6.5 (64 & n32)          MIPSpro cc 7.3.1m
                               mpt.1.3 (SGI MPI 3.2.0.0)
Linux 2.2.10                   egcs-2.91.66 configured with     libraries:
                                 --disable-hsizet               glibc2
OSF1 V4.0                      DEC-V5.2-040
SunOS 5.6                      cc WorkShop Compilers 4.2,
                                 no optimization
                               gcc 2.8.1
SunOS 5.7                      cc WorkShop Compilers 5.0
                               gcc 2.8.1
TFLOPS 2.7.1                   cicc (pgcc Rel 3.0-4i)
                               mpich-1.1.2 with local changes
Windows NT4.0 intel            MSVC++ 5.0 and 6.0
Windows NT alpha 4.0           MSVC++ 5.0
Windows 98                     MSVC++ 5.0
B. Known Problems
--------------
* NT alpha 4.0
  The dumper utility h5dump fails if linked with the DLL.
* SunOS 5.6 with C WorkShop Compilers 4.2
  Hyperslab selections will fail if the library is compiled with any level
  of optimization.
C. Changes Since Version 1.0.1
---------------------------
1. Documentation
-------------
* More examples
* Updated user guide, reference manual, and format specification.
* Self-contained documentation for installations isolated from the
Internet.
* An HDF5 Tutorial was added to the documentation.
2. Configuration
-------------
* Better detection and support for MPI-IO.
* Recognition of compilers with known code generation problems.
* Support for various compilers on a single architecture (e.g., the
native compiler and the GNU compilers).
* Ability to build from read-only media and with different compilers
and/or options concurrently.
* Added a libhdf5.settings file which summarizes the configuration
information and is installed along with the library.
* Builds a shared library on most systems that support it.
* Support for Cray T3E, J90 and Windows/NT.
3. Debugging
---------
* Improved control and redirection of debugging and tracing messages.
4. Datatypes
---------
* Optimizations to compound datatype conversions and I/O operations.
* Added nearly 100 optimized conversion functions for native datatypes
including support for non-aligned data.
* Added support for bitfield, opaque, and enumeration types.
* Added distinctions between signed and unsigned char types to the
list of predefined native hdf5 datatypes.
* Added HDF5 type definitions for C9x types like int32_t.
* Application-defined type conversion functions can handle non-packed
data.
* Changed the H5Tunregister() function to use wildcards when matching
conversion functions. H5Tregister_hard() and H5Tregister_soft()
were combined into H5Tregister().
* Support for variable-length datatypes (arrays of varying length per
  dataset element). Variable-length strings are currently supported only
  as variable-length arrays of 1-byte integers.
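
A hedged illustration of the variable-length support described above: a
minimal sketch (dataset name and sizes are arbitrary) that builds a VL type
over native integers, writes two ragged rows using the 5-argument H5Dcreate
of this release, and notes how to reclaim library-allocated buffers after a
read:

    #include "hdf5.h"

    /* Create and write a dataset whose elements are variable-length
     * sequences of ints. */
    void vl_example(hid_t file)
    {
        int     a[2] = {1, 2};
        int     b[3] = {3, 4, 5};
        hvl_t   buf[2];
        hsize_t dim = 2;
        hid_t   vltype = H5Tvlen_create(H5T_NATIVE_INT);
        hid_t   space  = H5Screate_simple(1, &dim, NULL);
        hid_t   dset   = H5Dcreate(file, "/vl_data", vltype, space, H5P_DEFAULT);

        buf[0].len = 2;  buf[0].p = a;   /* first element: 2 ints  */
        buf[1].len = 3;  buf[1].p = b;   /* second element: 3 ints */
        H5Dwrite(dset, vltype, H5S_ALL, H5S_ALL, H5P_DEFAULT, buf);

        /* After an H5Dread into an hvl_t buffer, the library-allocated
         * memory is released with:
         *     H5Dvlen_reclaim(vltype, space, H5P_DEFAULT, buf);            */

        H5Dclose(dset);
        H5Sclose(space);
        H5Tclose(vltype);
    }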
5. Dataspaces
----------
* New query functions for selections.
* I/O operations bypass the stripmining loop and go directly to
  storage for certain contiguous selections in the absence of type
  conversions. In other cases the stripmining buffers are used more
  effectively.
* Reduced the number of I/O requests under certain circumstances,
improving performance on systems with high I/O latency.
6. Persistent Pointers
-------------------
* Object (serial and parallel) and dataset region (serial only)
references are implemented.
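
A hedged sketch of using the new reference interface (functions listed
under section 8.f below); it assumes an object named "/data" already exists
in the file, and names such as reference_example are illustrative only:

    #include "hdf5.h"

    /* Store an object reference to an existing dataset, then open the
     * referenced object again through the reference. */
    void reference_example(hid_t file)
    {
        hobj_ref_t ref;
        hsize_t    dim   = 1;
        hid_t      space = H5Screate_simple(1, &dim, NULL);
        hid_t      rdset = H5Dcreate(file, "/refs", H5T_STD_REF_OBJ, space,
                                     H5P_DEFAULT);
        hid_t      obj;

        /* Build a reference to the object named "/data" in this file. */
        H5Rcreate(&ref, file, "/data", H5R_OBJECT, -1);
        H5Dwrite(rdset, H5T_STD_REF_OBJ, H5S_ALL, H5S_ALL, H5P_DEFAULT, &ref);

        /* Later: dereference to obtain an id for the referenced object. */
        obj = H5Rdereference(rdset, H5R_OBJECT, &ref);

        H5Dclose(obj);
        H5Dclose(rdset);
        H5Sclose(space);
    }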
7. Parallel Support
----------------
* Improved parallel I/O performance.
* Supported new platforms: Cray T3E, Linux, DEC Cluster.
* Used the vendor-supported version of MPIO on SGI O2K and Cray platforms.
* Improved the algorithm that translates an HDF5 hyperslab selection
into an MPI type for better collective I/O performance.
8. New API functions
-----------------
a. Property List Interface:
------------------------
H5Pset_xfer - set data transfer properties
H5Pset_preserve - set dataset transfer property list status
H5Pget_preserve - get dataset transfer property list status
H5Pset_hyper_cache - indicates whether to cache hyperslab blocks during I/O
H5Pget_hyper_cache - returns information regarding the caching of
hyperslab blocks during I/O
H5Pget_btree_ratios - gets B-tree split ratios for a dataset
                      transfer property list
H5Pset_btree_ratios - sets B-tree split ratios for a dataset
                      transfer property list
H5Pset_vlen_mem_manager - sets the memory manager for variable-length
                      datatype allocation
H5Pget_vlen_mem_manager - gets the memory manager for variable-length
                      datatype allocation
b. Dataset Interface:
------------------
H5Diterate - iterate over all selected elements in a dataspace
H5Dget_storage_size - return the amount of storage required for a dataset
H5Dvlen_reclaim - reclaim VL datatype memory buffers
c. Dataspace Interface:
--------------------
H5Sget_select_hyper_nblocks - get number of hyperslab blocks
H5Sget_select_hyper_blocklist - get the list of hyperslab blocks
currently selected
H5Sget_select_elem_npoints - get the number of element points
in the current selection
H5Sget_select_elem_pointlist - get the list of element points
currently selected
H5Sget_select_bounds - gets the bounding box containing
the current selection
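
A hedged sketch of the new selection query functions above: a small
hyperslab is selected and then queried (argument types follow later HDF5
headers and may differ slightly from the 1.2-era prototypes; error checking
is omitted):

    #include <stdio.h>
    #include "hdf5.h"

    /* Select a 2x3 block at offset (1,1) in a 10x10 dataspace and query it. */
    void selection_query_example(void)
    {
        hsize_t dims[2]  = {10, 10};
        hsize_t start[2] = {1, 1};
        hsize_t count[2] = {2, 3};
        hsize_t bstart[2], bend[2];
        hid_t   space = H5Screate_simple(2, dims, NULL);

        H5Sselect_hyperslab(space, H5S_SELECT_SET, start, NULL, count, NULL);

        printf("points selected:  %ld\n", (long)H5Sget_select_npoints(space));
        printf("hyperslab blocks: %ld\n", (long)H5Sget_select_hyper_nblocks(space));

        H5Sget_select_bounds(space, bstart, bend);
        printf("bounding box: (%lu,%lu)-(%lu,%lu)\n",
               (unsigned long)bstart[0], (unsigned long)bstart[1],
               (unsigned long)bend[0],   (unsigned long)bend[1]);

        H5Sclose(space);
    }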
d. Datatype Interface:
-------------------
H5Tget_super - return the base datatype from which a
datatype is derived
H5Tvlen_create - creates a new variable-length datatype
H5Tenum_create - creates a new enumeration datatype
H5Tenum_insert - inserts a new enumeration datatype member
H5Tenum_nameof - returns the symbol name corresponding to a
specified member of an enumeration datatype
H5Tenum_valueof - return the value corresponding to a
                  specified member of an enumeration datatype
H5Tget_member_value - return the value of an enumeration datatype member
H5Tset_tag - tags an opaque datatype
H5Tget_tag - gets the tag associated with an opaque datatype
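
A hedged sketch combining several of the enumeration calls listed above
(the committed type name "/color_t" and the member values are arbitrary):

    #include "hdf5.h"

    /* Build a small enumeration type over native int and commit it to the
     * file so that several datasets can share it. */
    void enum_example(hid_t file)
    {
        int   val;
        hid_t etype = H5Tenum_create(H5T_NATIVE_INT);

        val = 0;  H5Tenum_insert(etype, "RED",   &val);
        val = 1;  H5Tenum_insert(etype, "GREEN", &val);
        val = 2;  H5Tenum_insert(etype, "BLUE",  &val);

        /* Commit (name) the datatype; H5Tcommit takes (loc, name, type)
         * in this release. */
        H5Tcommit(file, "/color_t", etype);
        H5Tclose(etype);
    }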
e. Identifier Interface:
---------------------
H5Iget_type - retrieve the type of an object
f. Reference Interface:
--------------------
H5Rcreate - creates a reference
H5Rdereference - open the HDF5 object referenced
H5Rget_region - retrieve a dataspace with the specified region selected
H5Rget_object_type - retrieve the type of object that an
object reference points to
g. Ragged Arrays (alpha) (names of those API functions were changed):
------------------------------------------------------------------
H5RAcreate - create a new ragged array (old name was H5Rcreate)
H5RAopen - open an existing array (old name was H5Ropen)
H5RAclose - close a ragged array (old name was H5Rclose)
H5RAwrite - write to an array (old name was H5Rwrite)
H5RAread - read from an array (old name was H5Rread)
9. Tools
-----
* Enhancements to the h5ls tool, including the ability to list objects
  from more than one file, to display raw hexadecimal data, to
  show file addresses for raw data, to format output more reasonably,
  to show object attributes, and to perform a recursive listing.
* Enhancements to h5dump: support new data types added since previous
versions.
* h5toh4: An hdf5 to hdf4 converter.
%%%%1.0.1%%%% Release Information for hdf5-1.0.1
IV. Changes from Release 1.0.0 to Release 1.0.1
* [Improvement]: configure sets up the Makefile in the parallel test
  suite (testpar/) correctly.
* [Bug-Fix]: Configure failed for all IRIX versions other than 6.3.
  It now configures correctly for all IRIX 6.x versions.
* Released Parallel HDF5
Supported Features:
------------------
HDF5 files are accessed according to the communicator and INFO
object defined in the property list set by H5Pset_mpi.
Independent read and write accesses to fixed and extendable dimension
datasets.
Collective read and write accesses to fixed dimension datasets.
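
A hedged sketch of the parallel file-access setup described above. It
assumes the H5Pset_mpi(fapl, comm, info) form implied by this section (the
exact prototype is given in the release's reference manual); error checking
is omitted:

    #include <mpi.h>
    #include "hdf5.h"

    /* Create one HDF5 file collectively across all processes in
     * MPI_COMM_WORLD. */
    hid_t parallel_create(const char *name)
    {
        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
        hid_t file;

        /* Attach the communicator and INFO object to the file access
         * property list (signature assumed, as described above). */
        H5Pset_mpi(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);

        file = H5Fcreate(name, H5F_ACC_TRUNC, H5P_DEFAULT, fapl);
        H5Pclose(fapl);
        return file;
    }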
Supported Platforms:
-------------------
Intel Red
IBM SP2
SGI Origin 2000
Changes In This Release:
-----------------------
o Support for access to extendable dimension datasets.
  Extendable dimension datasets must use chunked storage.
  A new function, H5Dextend, has been added to extend the current
  dimensions of a dataset. In this release an MPI application must make a
  collective call to H5Dextend to extend the dimensions of an extendable
  dataset before writing to the newly extended area. (The serial library
  does not require the H5Dextend call; the dimensions of an extendable
  dataset are increased automatically when data is written beyond the
  current dimensions but within the maximum dimensions.) The required
  collective call to H5Dextend may be relaxed in a future release. A
  sketch of the collective extend pattern appears after this list.
  This release supports only independent read and write access to
  extendable datasets. Collective access to extendable datasets will be
  implemented in future releases.
o Collective access to fixed dimension datasets.
Collective access to a dataset can be specified in the transfer
property list argument in H5Dread and H5Dwrite. The current
release supports collective access to fixed dimension datasets.
Collective access to extendable datasets will be implemented in
future releases.
o HDF5 files are opened according to the communicator and INFO object.
  H5Fopen now records the communicator and INFO object set up by
  H5Pset_mpi and passes them to the corresponding MPIO open-file calls
  for processing.
o This release has been tested on IBM SP2, Intel Red, and SGI Origin 2000
  systems. It uses the ROMIO version of the MPIO interface for parallel
  I/O support.
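
As mentioned in the extendable-dataset item above, a minimal, hedged sketch
of the collective extend pattern (one unlimited dimension; the dataset is
assumed to have been created with chunked storage in a file opened as in
the earlier parallel sketch; names and sizes are illustrative):

    #include "hdf5.h"

    /* All processes call this with the same new_size: H5Dextend is a
     * collective call in parallel HDF5.  Each process then writes its own
     * hyperslab independently. */
    void extend_and_write(hid_t dset, hsize_t new_size,
                          hsize_t my_start, hsize_t my_count,
                          const int *my_data)
    {
        hid_t filespace, memspace;

        /* Collective: grow the first (unlimited) dimension to new_size. */
        H5Dextend(dset, &new_size);

        /* Independent: select and write this process's portion. */
        filespace = H5Dget_space(dset);
        H5Sselect_hyperslab(filespace, H5S_SELECT_SET, &my_start, NULL,
                            &my_count, NULL);
        memspace = H5Screate_simple(1, &my_count, NULL);

        H5Dwrite(dset, H5T_NATIVE_INT, memspace, filespace, H5P_DEFAULT,
                 my_data);

        H5Sclose(memspace);
        H5Sclose(filespace);
    }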
%%%%1.0.0%%%% Release Information for hdf5-1.0.0
V. Changes from the Beta 1.0.0 Release to Release 1.0.0
* Added fill values for datasets. For contiguous datasets fill value
  performance may be quite poor, since the fill value is written to the
  entire dataset when the dataset is created. This will be remedied
  in a future version. Chunked datasets using fill values do not
  incur any additional overhead. See H5Pset_fill_value() and the sketch
  after this list.
* Multiple hdf5 files can be "mounted" on one another to create a
larger virtual file. See H5Fmount().
* Object names can be removed or changed but objects are never
actually removed from the file yet. See H5Gunlink() and H5Gmove().
* Added a tuning mechanism for B-trees to ensure that sequential
  writes to chunked datasets use less overhead. See H5Pset_btree_ratios().
* Various optimizations and bug fixes.
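
A minimal, hedged sketch of the fill-value item above (names and sizes are
arbitrary); chunked layout avoids writing the fill value across the whole
dataset at creation time:

    #include "hdf5.h"

    /* Create a chunked integer dataset whose unwritten elements read back
     * as -1. */
    hid_t create_filled_dataset(hid_t file)
    {
        int     fill     = -1;
        hsize_t dims[2]  = {100, 100};
        hsize_t chunk[2] = {10, 10};
        hid_t   space = H5Screate_simple(2, dims, NULL);
        hid_t   dcpl  = H5Pcreate(H5P_DATASET_CREATE);
        hid_t   dset;

        H5Pset_chunk(dcpl, 2, chunk);
        H5Pset_fill_value(dcpl, H5T_NATIVE_INT, &fill);

        dset = H5Dcreate(file, "/filled", H5T_NATIVE_INT, space, dcpl);

        H5Pclose(dcpl);
        H5Sclose(space);
        return dset;
    }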
%%%%1.0.0 Beta%%%% Release Information for hdf5-1.0.0 Beta
VI. Changes from the Second Alpha 1.0.0 Release to the Beta 1.0.0 Release
* Strided hyperslab selections in dataspaces now working.
* The compression API has been replaced with a more general filter
API. See doc/html/Filters.html for details.
* Alpha-quality 2d ragged arrays are implemented as a layer built on
top of other hdf5 objects. The API and storage format will almost
certainly change.
* More debugging support including API tracing. See Debugging.html.
* C and Fortran style 8-bit fixed-length character string types are
supported with space or null padding or null termination and
translations between them.
* Added function H5Fflush() to write all cached data immediately to
the file.
* Datasets maintain a modification time which can be retrieved with
H5Gstat().
* The h5ls tool can display much more information, including all the
values of a dataset.
%%%%1.0.0 Alpha 2%%%% Release Information for hdf5-1.0.0 Alpha 2
VII. Changes from the First Alpha 1.0.0 Release to
the Second Alpha 1.0.0 Release
* Two of the packages have been renamed. The data space API has been
renamed from `H5P' to `H5S' and the property list (template) API has
been renamed from `H5C' to `H5P'.
* The new attribute API `H5A' has been added. An attribute is a small
dataset which can be attached to some other object (for instance, a
4x4 transformation matrix attached to a 3-dimensional dataset, or an
English abstract attached to a group).
* The error handling API `H5E' has been completed. By default, when an
  API function returns failure, an error stack is displayed on the
  standard error stream. H5Eset_auto() controls the automatic printing,
  and the H5E_BEGIN_TRY/H5E_END_TRY macros can temporarily disable it
  (a sketch appears after this list).
* Support for large files and datasets (>2GB) has been added. There
is an html document that describes how it works. Some of the types
for function arguments have changed to support this: all arguments
pertaining to sizes of memory objects are `size_t' and all arguments
pertaining to file sizes are `hsize_t'.
* More data type conversions have been added although none of them are
fine tuned for performance. There are new converters from integer
to integer and float to float, but not between integers and floating
points. A bug has been fixed in the converter between compound
types.
* The numbered types have been removed from the API: int8, uint8,
int16, uint16, int32, uint32, int64, uint64, float32, and float64.
Use standard C types instead. Similarly, the numbered types were
removed from the H5T_NATIVE_* architecture; use unnumbered types
which correspond to the standard C types like H5T_NATIVE_INT.
* More debugging support was added. If tracing is enabled at
configuration time (the default) and the HDF5_TRACE environment
variable is set to a file descriptor then all API calls will emit
the function name, argument names and values, and return value on
that file number. There is an html document that describes this.
If appropriate debugging options are enabled at configuration time,
some packages will display performance information on stderr.
* Data types can be stored in the file as independent objects and
multiple datasets can share a data type.
* The raw data I/O stream has been implemented and the application can
control meta and raw data caches, so I/O performance should be
improved from the first alpha release.
* Group and attribute query functions have been implemented so it is
now possible to find out the contents of a file with no prior
knowledge.
* External raw data storage allows datasets to be written by other
applications or I/O libraries and described and accessed through
HDF5.
* Hard and soft (symbolic) links are implemented which allow groups to
share objects. Dangling and recursive symbolic links are supported.
* User-defined data compression is implemented although we may
generalize the interface to allow arbitrary user-defined filters
which can be used for compression, checksums, encryption,
performance monitoring, etc. The publicly-available `deflate'
method is predefined if the GNU libz.a can be found at configuration
time.
* The configuration scripts have been modified to make it easier to
build debugging vs. production versions of the library.
* The library automatically checks that the application was compiled
with the correct version of header files.
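
A hedged sketch of the error-handling facilities mentioned above: automatic
error printing is disabled around a call that is expected to fail and then
restored, using the H5Eset_auto/H5Eget_auto signatures of this release
(the function name quiet_open is illustrative only):

    #include "hdf5.h"

    /* Probe for a file without triggering the automatic error-stack
     * printout on failure. */
    hid_t quiet_open(const char *name)
    {
        hid_t      file;
        H5E_auto_t old_func;
        void      *old_data;

        /* Save and disable automatic error reporting. */
        H5Eget_auto(&old_func, &old_data);
        H5Eset_auto(NULL, NULL);

        file = H5Fopen(name, H5F_ACC_RDONLY, H5P_DEFAULT);

        /* Restore the previous reporting behavior. */
        H5Eset_auto(old_func, old_data);

        /* The macros provide the same effect more compactly:
         *     H5E_BEGIN_TRY {
         *         file = H5Fopen(name, H5F_ACC_RDONLY, H5P_DEFAULT);
         *     } H5E_END_TRY;                                               */
        return file;
    }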
Parallel HDF5 Changes
* Parallel support for fixed dimension datasets with contiguous or
  chunked storage. Unlimited dimension datasets, which must use chunked
  storage, are also supported. There is no parallel support for
  compressed datasets.
* Collective data transfer for H5Dread/H5Dwrite. Collective access
support for datasets with contiguous storage only, thus only fixed
dimension datasets for now.
* H5Pset_mpi and H5Pget_mpi no longer have the access_mode
argument. It is taken over by the data-transfer property list
of H5Dread/H5Dwrite.
* New functions H5Pset_xfer and H5Pget_xfer to handle the
specification of independent or collective data transfer_mode
in the dataset transfer properties list. The properties
list can be used to specify data transfer mode in the H5Dwrite
and H5Dread function calls.
* Added parallel support for datasets with chunked storage layout.
  When a dataset is extended in a PHDF5 file, all processes that open
  the file must collectively call H5Dextend with identical new dimension
  sizes.
LIST OF API FUNCTIONS
The following functions are implemented. Errors are returned if an
attempt is made to use some feature which is not implemented and
printing the error stack will show `not implemented yet'.
Library
H5check - check that lib version matches header version
H5open - initialize library (happens automatically)
H5close - shut down the library (happens automatically)
H5dont_atexit - don't call H5close on exit
H5get_libversion - retrieve library version info
H5check_version - check for specific library version
Property Lists
H5Pclose - release template resources
H5Pcopy - copy a template
H5Pcreate - create a new template
H5Pget_chunk - get chunked storage properties
H5Pset_chunk - set chunked storage properties
H5Pget_class - get template class
H5Pget_istore_k - get chunked storage properties
H5Pset_istore_k - set chunked storage properties
H5Pget_layout - get raw data layout class
H5Pset_layout - set raw data layout class
H5Pget_sizes - get address and size sizes
H5Pset_sizes - set address and size sizes
H5Pget_sym_k - get symbol table storage properties
H5Pset_sym_k - set symbol table storage properties
H5Pget_userblock - get user-block size
H5Pset_userblock - set user-block size
H5Pget_version - get file version numbers
H5Pget_alignment - get data alignment properties
H5Pset_alignment - set data alignment properties
H5Pget_external_count- get count of external data files
H5Pget_external - get information about an external data file
H5Pset_external - add a new external data file to the list
H5Pget_driver - get low-level file driver class
H5Pget_stdio - get properties for stdio low-level driver
H5Pset_stdio - set properties for stdio low-level driver
H5Pget_sec2 - get properties for sec2 low-level driver
H5Pset_sec2 - set properties for sec2 low-level driver
H5Pget_core - get properties for core low-level driver
H5Pset_core - set properties for core low-level driver
H5Pget_split - get properties for split low-level driver
H5Pset_split - set properties for split low-level driver
H5Pget_family - get properties for family low-level driver
H5Pset_family - set properties for family low-level driver
H5Pget_cache - get meta- and raw-data caching properties
H5Pset_cache - set meta- and raw-data caching properties
H5Pget_buffer - get raw-data I/O pipe buffer properties
H5Pset_buffer - set raw-data I/O pipe buffer properties
H5Pget_preserve - get type conversion preservation properties
H5Pset_preserve - set type conversion preservation properties
H5Pget_nfilters - get number of raw data filters
H5Pget_filter - get raw data filter properties
H5Pset_filter - set raw data filter properties
H5Pset_deflate - set deflate compression filter properties
H5Pget_mpi - get MPI-IO properties
H5Pset_mpi - set MPI-IO properties
H5Pget_xfer - get data transfer properties
+ H5Pset_xfer - set data transfer properties
+ H5Pset_preserve - set dataset transfer property list status
+ H5Pget_preserve - get dataset transfer property list status
+ H5Pset_hyper_cache - indicates whether to cache hyperslab blocks during I/O
+ H5Pget_hyper_cache - returns information regarding the caching of
hyperslab blocks during I/O
+ H5Pget_btree_ratios - gets B-tree split ratios for a dataset
  transfer property list
+ H5Pset_btree_ratios - sets B-tree split ratios for a dataset
  transfer property list
+ H5Pset_vlen_mem_manager - sets the memory manager for variable-length
  datatype allocation
+ H5Pget_vlen_mem_manager - gets the memory manager for variable-length
  datatype allocation
Datasets
H5Dclose - release dataset resources
H5Dcreate - create a new dataset
H5Dget_space - get data space
H5Dget_type - get data type
H5Dget_create_plist - get dataset creation properties
H5Dopen - open an existing dataset
H5Dread - read raw data
H5Dwrite - write raw data
H5Dextend - extend a dataset
+ H5Diterate - iterate over all selected elements in a dataspace
+ H5Dget_storage_size - return the amount of storage required for a dataset
+ H5Dvlen_reclaim - reclaim VL datatype memory buffers
Attributes
H5Acreate - create a new attribute
H5Aopen_name - open an attribute by name
H5Aopen_idx - open an attribute by number
H5Awrite - write values into an attribute
H5Aread - read values from an attribute
H5Aget_space - get attribute data space
H5Aget_type - get attribute data type
H5Aget_name - get attribute name
H5Anum_attrs - return the number of attributes for an object
H5Aiterate - iterate over an object's attributes
H5Adelete - delete an attribute
H5Aclose - close an attribute
Errors
H5Eclear - clear the error stack
H5Eprint - print an error stack
H5Eget_auto - get automatic error reporting settings
H5Eset_auto - set automatic error reporting
H5Ewalk - iterate over the error stack
H5Ewalk_cb - the default error stack iterator function
H5Eget_major - get the message for the major error number
H5Eget_minor - get the message for the minor error number
Files
H5Fclose - close a file and release resources
H5Fcreate - create a new file
H5Fget_create_plist - get file creation property list
H5Fget_access_plist - get file access property list
H5Fis_hdf5 - determine if a file is an hdf5 file
H5Fopen - open an existing file
H5Freopen - reopen an HDF5 file
H5Fmount - mount a file
H5Funmount - unmount a file
H5Fflush - flush all buffers associated with a file to disk
Groups
H5Gclose - close a group and release resources
H5Gcreate - create a new group
H5Gopen - open an existing group
H5Giterate - iterate over the contents of a group
H5Gmove - change the name of some object
H5Glink - create a hard or soft link to an object
H5Gunlink - break the link between a name and an object
H5Gget_objinfo - get information about a group entry
H5Gget_linkval - get the value of a soft link
H5Gget_comment - get the comment string for an object
H5Gset_comment - set the comment string for an object
Dataspaces
H5Screate - create a new data space
H5Scopy - copy a data space
H5Sclose - release data space
H5Screate_simple - create a new simple data space
H5Sset_space - set simple data space extents
H5Sis_simple - determine if data space is simple
H5Sset_extent_simple - set simple data space dimensionality and size
H5Sget_simple_extent_npoints - get number of points in simple extent
H5Sget_simple_extent_ndims - get simple data space dimensionality
H5Sget_simple_extent_dims - get simple data space size
H5Sget_simple_extent_type - get type of simple extent
H5Sset_extent_none - reset extent to be empty
H5Sextent_copy - copy the extent from one data space to another
H5Sget_select_npoints - get number of points selected for I/O
H5Sselect_hyperslab - set hyperslab dataspace selection
H5Sselect_elements - set element sequence dataspace selection
H5Sselect_all - select entire extent for I/O
H5Sselect_none - deselect all elements of extent
H5Soffset_simple - set selection offset
H5Sselect_valid - determine if selection is valid for extent
+ H5Sget_select_hyper_nblocks - get number of hyperslab blocks
+ H5Sget_select_hyper_blocklist - get the list of hyperslab blocks
currently selected
+ H5Sget_select_elem_npoints - get the number of element points
in the current selection
+ H5Sget_select_elem_pointlist - get the list of element points
currently selected
+ H5Sget_select_bounds - gets the bounding box containing
the current selection
Datatypes
H5Tclose - release data type resources
H5Topen - open a named data type
H5Tcommit - name a data type
H5Tcommitted - determine if a type is named
H5Tcopy - copy a data type
H5Tcreate - create a new data type
H5Tequal - compare two data types
H5Tlock - lock type to prevent changes
H5Tfind - find a data type conversion function
H5Tconvert - convert data from one type to another
H5Tregister - register a conversion function
H5Tunregister - remove a conversion function
H5Tget_overflow - get function that handles overflow conv. cases
H5Tset_overflow - set function to handle overflow conversion cases
H5Tget_class - get data type class
H5Tget_cset - get character set
H5Tget_ebias - get exponent bias
H5Tget_fields - get floating point fields
H5Tget_inpad - get inter-field padding
H5Tget_member_dims - get struct member dimensions
H5Tget_member_name - get struct member name
H5Tget_member_offset - get struct member byte offset
H5Tget_member_type - get struct member type
H5Tget_nmembers - get number of struct members
H5Tget_norm - get floating point normalization
H5Tget_offset - get bit offset within type
H5Tget_order - get byte order
H5Tget_pad - get padding type
H5Tget_precision - get precision in bits
H5Tget_sign - get integer sign type
H5Tget_size - get size in bytes
H5Tget_strpad - get string padding
H5Tinsert - insert scalar struct member
H5Tinsert_array - insert array struct member
H5Tpack - pack struct members
H5Tset_cset - set character set
H5Tset_ebias - set exponent bias
H5Tset_fields - set floating point fields
H5Tset_inpad - set inter-field padding
H5Tset_norm - set floating point normalization
H5Tset_offset - set bit offset within type
H5Tset_order - set byte order
H5Tset_pad - set padding type
H5Tset_precision - set precision in bits
H5Tset_sign - set integer sign type
H5Tset_size - set size in bytes
H5Tset_strpad - set string padding
+ H5Tget_super - return the base datatype from which a
datatype is derived
+ H5Tvlen_create - creates a new variable-length datatype
+ H5Tenum_create - creates a new enumeration datatype
+ H5Tenum_insert - inserts a new enumeration datatype member
+ H5Tenum_nameof - returns the symbol name corresponding to a
specified member of an enumeration datatype
+ H5Tenum_valueof - return the value corresponding to a
  specified member of an enumeration datatype
+ H5Tget_member_value - return the value of an enumeration datatype member
+ H5Tset_tag - tags an opaque datatype
+ H5Tget_tag - gets the tag associated with an opaque datatype
- H5Tregister_hard - register specific type conversion function
- H5Tregister_soft - register general type conversion function
Filters
H5Tregister - register a conversion function
Compression
H5Zregister - register new compression and uncompression
functions for a method specified by a method number
Identifiers
+ H5Iget_type - retrieve the type of an object
References
+ H5Rcreate - creates a reference
+ H5Rdereference - open the HDF5 object referenced
+ H5Rget_region - retrieve a dataspace with the specified region selected
+ H5Rget_object_type - retrieve the type of object that an
object reference points to
Ragged Arrays (alpha)
H5RAcreate - create a new ragged array
H5RAopen - open an existing array
H5RAclose - close a ragged array
H5RAwrite - write to an array
H5RAread - read from an array