Sync develop changes March 20 - March 25 to hdf5_1_14. (#4241)

* Call memset before stat calls (#4202)

The buffers passed to stat-like calls are only partially filled in by
the call, leaving uninitialized memory areas when the stat buffers are
created on the stack.

This change memsets the buffers to 0 before the stat calls, quieting
the -fsanitize=memory complaints.
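A minimal C sketch of the pattern this commit applies; the helper name is illustrative and not an actual HDF5 function:

```c
#include <string.h>
#include <sys/stat.h>

/* Zero the stat buffer before the call: stat() only fills in the
 * standard fields, so padding and non-standard struct members would
 * otherwise stay uninitialized on the stack, which MemorySanitizer
 * reports when the buffer is later copied or compared.
 */
int path_is_dir(const char *path)
{
    struct stat sb;

    memset(&sb, 0, sizeof(sb)); /* quiets -fsanitize=memory */
    if (stat(path, &sb) != 0)
        return 0;
    return S_ISDIR(sb.st_mode) ? 1 : 0;
}
```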

* Remove unused CMake configuration checks (#4199)

* Update link to Chunking in HDF5 page (#4203)

* Fix H5Pset_efile_prefix documentation error (#4206)

Fixes GH issue #1759

* Suggested header/footer for NEWSLETTER (#4204)

* Suggested header/footer for NEWSLETTER

* Updates

* Add NEWSLETTER.txt to h5vers script

* Fix grammar in README.md release template (#4201)

* Add back snapshot names (#4198)

* Use tar.gz extension for ABI reports (#4205)

* Fix issue with Subfiling VFD and multiple opens of same file (#4194)

* Fix issue with Subfiling VFD and multiple opens of same file

* Update H5_subfile_fid_to_context to return error value instead of ID

* Add helper routine to initialize open file mapping

* Reverts AC_SYS_LARGEFILE change (#4213)

We previously replaced local macros with AC_SYS_LARGEFILE, which is
unfortunately buggy on some systems and does not correctly set the
necessary defines, despite successfully detecting them.

This restores the previous macro hacks to acsite.m4.
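The defines in question control large-file support; a short C sketch of the effect the configure checks are meant to arrange (illustrative only; the real logic lives in acsite.m4):

```c
/* Defining _FILE_OFFSET_BITS=64 before any system header is included
 * is what the configure-time largefile checks arrange on 32-bit
 * platforms; off_t then becomes a 64-bit type, so files over 2 GiB
 * can be addressed. If the macro is detected but never defined, as
 * with the buggy AC_SYS_LARGEFILE behavior, off_t stays 32-bit.
 */
#define _FILE_OFFSET_BITS 64
#include <sys/types.h>

int off_t_is_64bit(void)
{
    return (sizeof(off_t) * 8) >= 64;
}
```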

* Propagate group creation properties to intermediate groups (#4139)

* Add clarification for current behavior of H5Gget_objtype_by_idx() (#4120)

* Addressed Fortran issues with promoted integers and reals via compilation flags (#4209)

* addressed issue with promoted integers and reals

* added option to use mpi_f08

* Summarize the library version table (#4212)

Fixes GH-3773

* Fix URLs (#4210)

Also removed Copyright.html references because they are no longer valid.

* Fix 'make check-vfd' target for Autotools (#4211)

Changes Autotools testing to use the HDF5_TEST_DRIVER environment
variable to avoid running tests that don't work well with several
VFDs.

Restores the old h5_get_vfd_fapl() testing function to set up a FAPL
with a particular VFD.

Adds a macro for the default VFD name.
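The environment-variable gating can be sketched in C as below; the helper and the "sec2" default name are assumptions for illustration, not the actual h5test.c code:

```c
#include <stdlib.h>
#include <string.h>

/* Assumed stand-in for the default-VFD-name macro this commit adds. */
#define DEFAULT_VFD_NAME "sec2"

/* Hypothetical helper mirroring the idea: a test checks whether the
 * VFD requested via HDF5_TEST_DRIVER is one it supports, and skips
 * itself otherwise. An unset variable means the default VFD is in use.
 */
int vfd_matches(const char *supported)
{
    const char *driver = getenv("HDF5_TEST_DRIVER");

    if (driver == NULL || driver[0] == '\0')
        driver = DEFAULT_VFD_NAME;

    return strcmp(driver, supported) == 0;
}
```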

* Revert "Addressed Fortran  issues with promoted integers and reals via compil…" (#4220)

This reverts commit 06c42ff038.

* Backup and clear CMAKE_C_FLAGS before performing _Float16 configure checks (#4217)

* Fix broken links (#4224)

* Fix broken URLs in documentation (#4214)

Partially fixes GH-3881; some pages still need to be recreated.

* Avoid file size checks in C++ testhdf5 for certain VFDs (#4226)

* Fix an issue with type size assumptions in h5dumpgentest (#4222)

* Fix issue with -Werror cleanup sed command in configure.ac (#4223)

* Fix Java JNI warnings (#4229)

* Rework snapshots/release workflows for consistent args (#4227)

* Fixed a cache assert with too-large metadata objects (#4231)

If the library tries to load a metadata object that is above the
library's hard-coded limits, the size will trip an assert in debug
builds. In HDF5 1.14.4, this can happen if you create a very large
number of links in an old-style group that uses local heaps.

The library will now emit a normal error when it tries to load a
metadata object that is too large.

Partially addresses GitHub #3762
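The behavioral change can be sketched as follows; the limit constant, names, and error codes are invented stand-ins, not HDF5's metadata cache internals:

```c
#include <stddef.h>

/* Invented stand-in for the library's hard-coded cache entry limit. */
#define MAX_METADATA_SIZE ((size_t)(32 * 1024 * 1024))

typedef enum {
    LOAD_OK            = 0,
    LOAD_ERR_TOO_LARGE = -1
} load_status;

/* Before this fix, a debug build would hit something like
 * assert(object_size <= MAX_METADATA_SIZE) and abort; now the loader
 * returns a normal error that callers can report and recover from.
 */
load_status load_metadata(size_t object_size)
{
    if (object_size > MAX_METADATA_SIZE)
        return LOAD_ERR_TOO_LARGE;
    return LOAD_OK;
}
```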

* Set DXPL in API context for native VOL attribute I/O calls (#4228)

* Initialize a variable in C++ testhdf5's tattr.cpp (#4232)

* Addressed Fortran issues with promoted integers and reals via compilation flags, part 2 (#4221)

* addressed issue with promoted integers and reals

* fixed h5fcreate_f

* added option to use mpi_f08

* change the kind of logical in the parallel tests

* addressed missing return value from callback

* Use cp -rp in test_plugin.sh (#4233)

When building with debug symbols on macOS, the cp -p commands in
test_plugin.sh attempt to copy the .dSYM directories containing
debugging info, which fails because -r is missing.

Using cp -rp is harmless and allows the test to run.

Fixes HDFFV-10542

* Clean up types in h5test.c (#4235)

Reduces warnings on 32-bit and LLP64 systems
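On LLP64 platforms (64-bit Windows), long is 32 bits while size_t is 64 bits, so routing sizes through long truncates and triggers warnings. A small sketch of the kind of cleanup involved; the function is illustrative, not taken from h5test.c:

```c
#include <stdio.h>
#include <stddef.h>

/* Keep sizes in size_t end to end: no narrowing cast through long,
 * and %zu prints size_t portably on both LP64 and LLP64 systems.
 */
size_t buffer_bytes(size_t n_elems, size_t elem_size)
{
    return n_elems * elem_size;
}

void report_buffer(size_t n_elems, size_t elem_size)
{
    printf("%zu bytes\n", buffer_bytes(n_elems, elem_size));
}
```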

* Fix example links (#4237)

* Fix links md files (#4239)

* Add markdown link checker action (#4219)

* Match minimum CMake version to 3.18 (#4215)
This commit is contained in:
Larry Knox 2024-03-25 17:02:21 -05:00 committed by GitHub
parent b69c6fcbf6
commit 3c86f0f0ce
GPG Key ID: B5690EEEBB952194
232 changed files with 2846 additions and 1890 deletions

View File

@ -170,7 +170,7 @@ jobs:
cp ${{ inputs.file_base }}-hdf5_cpp_compat_report.html ${{ runner.workspace }}/buildabi/hdf5
cp ${{ inputs.file_base }}-java_compat_report.html ${{ runner.workspace }}/buildabi/hdf5
cd "${{ runner.workspace }}/buildabi"
tar -zcvf ${{ inputs.file_base }}.html.abi.reports hdf5
tar -zcvf ${{ inputs.file_base }}.html.abi.reports.tar.gz hdf5
shell: bash
- name: Save output as artifact
@ -178,4 +178,4 @@ jobs:
with:
name: abi-reports
path: |
${{ runner.workspace }}/buildabi/${{ inputs.file_base }}.html.abi.reports
${{ runner.workspace }}/buildabi/${{ inputs.file_base }}.html.abi.reports.tar.gz

View File

@ -129,12 +129,6 @@ jobs:
run: |
ls -l ${{ github.workspace }}/HDF_Group/HDF5
- name: Set file base name (Linux)
id: set-file-base
run: |
FILE_NAME_BASE=$(echo "${{ inputs.file_base }}")
echo "FILE_BASE=$FILE_NAME_BASE" >> $GITHUB_OUTPUT
- name: List files for the space (Linux)
run: |
ls -l ${{ github.workspace }}
@ -187,12 +181,6 @@ jobs:
run: |
ls -l ${{ github.workspace }}/HDF_Group/HDF5
- name: Set file base name (MacOS)
id: set-file-base
run: |
FILE_NAME_BASE=$(echo "${{ inputs.file_base }}")
echo "FILE_BASE=$FILE_NAME_BASE" >> $GITHUB_OUTPUT
- name: List files for the space (MacOS)
run: |
ls ${{ github.workspace }}

View File

@ -4,6 +4,11 @@ name: hdf5 1.14 ctest runs
on:
workflow_call:
inputs:
snap_name:
description: 'The name in the source tarballs'
type: string
required: false
default: hdfsrc
file_base:
description: "The common base name of the source tarballs"
required: true
@ -46,11 +51,11 @@ jobs:
run: |
FILE_NAME_BASE=$(echo "${{ inputs.file_base }}")
echo "FILE_BASE=$FILE_NAME_BASE" >> $GITHUB_OUTPUT
if [[ '${{ inputs.use_environ }}' == 'snapshots' ]]
if [[ '${{ inputs.use_environ }}' == 'release' ]]
then
SOURCE_NAME_BASE=$(echo "hdfsrc")
SOURCE_NAME_BASE=$(echo "${{ inputs.snap_name }}")
else
SOURCE_NAME_BASE=$(echo "$FILE_NAME_BASE")
SOURCE_NAME_BASE=$(echo "hdfsrc")
fi
echo "SOURCE_BASE=$SOURCE_NAME_BASE" >> $GITHUB_OUTPUT
shell: bash
@ -129,11 +134,11 @@ jobs:
run: |
FILE_NAME_BASE=$(echo "${{ inputs.file_base }}")
echo "FILE_BASE=$FILE_NAME_BASE" >> $GITHUB_OUTPUT
if [[ '${{ inputs.use_environ }}' == 'snapshots' ]]
if [[ '${{ inputs.use_environ }}' == 'release' ]]
then
SOURCE_NAME_BASE=$(echo "hdfsrc")
SOURCE_NAME_BASE=$(echo "${{ inputs.snap_name }}")
else
SOURCE_NAME_BASE=$(echo "$FILE_NAME_BASE")
SOURCE_NAME_BASE=$(echo "hdfsrc")
fi
echo "SOURCE_BASE=$SOURCE_NAME_BASE" >> $GITHUB_OUTPUT
@ -253,11 +258,11 @@ jobs:
run: |
FILE_NAME_BASE=$(echo "${{ inputs.file_base }}")
echo "FILE_BASE=$FILE_NAME_BASE" >> $GITHUB_OUTPUT
if [[ '${{ inputs.use_environ }}' == 'snapshots' ]]
if [[ '${{ inputs.use_environ }}' == 'release' ]]
then
SOURCE_NAME_BASE=$(echo "hdfsrc")
SOURCE_NAME_BASE=$(echo "${{ inputs.snap_name }}")
else
SOURCE_NAME_BASE=$(echo "$FILE_NAME_BASE")
SOURCE_NAME_BASE=$(echo "hdfsrc")
fi
echo "SOURCE_BASE=$SOURCE_NAME_BASE" >> $GITHUB_OUTPUT
@ -333,11 +338,11 @@ jobs:
run: |
FILE_NAME_BASE=$(echo "${{ inputs.file_base }}")
echo "FILE_BASE=$FILE_NAME_BASE" >> $GITHUB_OUTPUT
if [[ '${{ inputs.use_environ }}' == 'snapshots' ]]
if [[ '${{ inputs.use_environ }}' == 'release' ]]
then
SOURCE_NAME_BASE=$(echo "hdfsrc")
SOURCE_NAME_BASE=$(echo "${{ inputs.snap_name }}")
else
SOURCE_NAME_BASE=$(echo "$FILE_NAME_BASE")
SOURCE_NAME_BASE=$(echo "hdfsrc")
fi
echo "SOURCE_BASE=$SOURCE_NAME_BASE" >> $GITHUB_OUTPUT
@ -413,11 +418,11 @@ jobs:
run: |
FILE_NAME_BASE=$(echo "${{ inputs.file_base }}")
echo "FILE_BASE=$FILE_NAME_BASE" >> $GITHUB_OUTPUT
if [[ '${{ inputs.use_environ }}' == 'snapshots' ]]
if [[ '${{ inputs.use_environ }}' == 'release' ]]
then
SOURCE_NAME_BASE=$(echo "hdfsrc")
SOURCE_NAME_BASE=$(echo "${{ inputs.snap_name }}")
else
SOURCE_NAME_BASE=$(echo "$FILE_NAME_BASE")
SOURCE_NAME_BASE=$(echo "hdfsrc")
fi
echo "SOURCE_BASE=$SOURCE_NAME_BASE" >> $GITHUB_OUTPUT
shell: bash
@ -507,11 +512,11 @@ jobs:
run: |
FILE_NAME_BASE=$(echo "${{ inputs.file_base }}")
echo "FILE_BASE=$FILE_NAME_BASE" >> $GITHUB_OUTPUT
if [[ '${{ inputs.use_environ }}' == 'snapshots' ]]
if [[ '${{ inputs.use_environ }}' == 'release' ]]
then
SOURCE_NAME_BASE=$(echo "hdfsrc")
SOURCE_NAME_BASE=$(echo "${{ inputs.snap_name }}")
else
SOURCE_NAME_BASE=$(echo "$FILE_NAME_BASE")
SOURCE_NAME_BASE=$(echo "hdfsrc")
fi
echo "SOURCE_BASE=$SOURCE_NAME_BASE" >> $GITHUB_OUTPUT

View File

@ -35,17 +35,17 @@ jobs:
call-workflow-tarball:
uses: ./.github/workflows/tarball.yml
with:
#use_tag: snapshot-1.14
use_tag: snapshot-1.14
use_environ: snapshots
call-workflow-ctest:
needs: call-workflow-tarball
uses: ./.github/workflows/cmake-ctest.yml
with:
file_base: ${{ needs.call-workflow-tarball.outputs.file_base }}
preset_name: ci-StdShar
#use_tag: snapshot-1.14
#use_environ: snapshots
file_base: ${{ needs.call-workflow-tarball.outputs.file_base }}
use_tag: snapshot-1.14
use_environ: snapshots
if: ${{ needs.call-workflow-tarball.outputs.has_changes == 'true' }}
call-workflow-abi:

View File

@ -0,0 +1,14 @@
name: Check Markdown links
on:
workflow_dispatch:
push:
pull_request:
branches: [ hdf5_1_14 ]
jobs:
markdown-link-check:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@master
- uses: gaurav-nelson/github-action-markdown-link-check@v1

View File

@ -149,18 +149,18 @@ jobs:
- name: Create sha256 sums for files
run: |
sha256sum ${{ steps.get-file-base.outputs.FILE_BASE }}.doxygen.zip > sha256sums.txt
sha256sum ${{ steps.get-file-base.outputs.FILE_BASE }}.tar.gz >> sha256sums.txt
sha256sum ${{ steps.get-file-base.outputs.FILE_BASE }}.zip >> sha256sums.txt
sha256sum ${{ steps.get-file-base.outputs.FILE_BASE }}-osx12.tar.gz >> sha256sums.txt
sha256sum ${{ steps.get-file-base.outputs.FILE_BASE }}-ubuntu-2204_gcc.tar.gz >> sha256sums.txt
sha256sum ${{ steps.get-file-base.outputs.FILE_BASE }}-ubuntu-2204_gcc.deb >> sha256sums.txt
sha256sum ${{ steps.get-file-base.outputs.FILE_BASE }}-ubuntu-2204_gcc.rpm >> sha256sums.txt
sha256sum ${{ steps.get-file-base.outputs.FILE_BASE }}-ubuntu-2204_gcc_s3.tar.gz >> sha256sums.txt
sha256sum ${{ steps.get-file-base.outputs.FILE_BASE }}-win-vs2022_cl.zip >> sha256sums.txt
sha256sum ${{ steps.get-file-base.outputs.FILE_BASE }}-ubuntu-2204_intel.tar.gz >> sha256sums.txt
sha256sum ${{ steps.get-file-base.outputs.FILE_BASE }}-win-vs2022_intel.zip >> sha256sums.txt
sha256sum ${{ steps.get-file-base.outputs.FILE_BASE }}.html.abi.reports >> sha256sums.txt
sha256sum ${{ steps.get-file-base.outputs.FILE_BASE }}.doxygen.zip > ${{ steps.get-file-base.outputs.FILE_BASE }}.sha256sums.txt
sha256sum ${{ steps.get-file-base.outputs.FILE_BASE }}.tar.gz >> ${{ steps.get-file-base.outputs.FILE_BASE }}.sha256sums.txt
sha256sum ${{ steps.get-file-base.outputs.FILE_BASE }}.zip >> ${{ steps.get-file-base.outputs.FILE_BASE }}.sha256sums.txt
sha256sum ${{ steps.get-file-base.outputs.FILE_BASE }}-osx12.tar.gz >> ${{ steps.get-file-base.outputs.FILE_BASE }}.sha256sums.txt
sha256sum ${{ steps.get-file-base.outputs.FILE_BASE }}-ubuntu-2204_gcc.tar.gz >> ${{ steps.get-file-base.outputs.FILE_BASE }}.sha256sums.txt
sha256sum ${{ steps.get-file-base.outputs.FILE_BASE }}-ubuntu-2204_gcc.deb >> ${{ steps.get-file-base.outputs.FILE_BASE }}.sha256sums.txt
sha256sum ${{ steps.get-file-base.outputs.FILE_BASE }}-ubuntu-2204_gcc.rpm >> ${{ steps.get-file-base.outputs.FILE_BASE }}.sha256sums.txt
sha256sum ${{ steps.get-file-base.outputs.FILE_BASE }}-ubuntu-2204_gcc_s3.tar.gz >> ${{ steps.get-file-base.outputs.FILE_BASE }}.sha256sums.txt
sha256sum ${{ steps.get-file-base.outputs.FILE_BASE }}-win-vs2022_cl.zip >> ${{ steps.get-file-base.outputs.FILE_BASE }}.sha256sums.txt
sha256sum ${{ steps.get-file-base.outputs.FILE_BASE }}-ubuntu-2204_intel.tar.gz >> ${{ steps.get-file-base.outputs.FILE_BASE }}.sha256sums.txt
sha256sum ${{ steps.get-file-base.outputs.FILE_BASE }}-win-vs2022_intel.zip >> ${{ steps.get-file-base.outputs.FILE_BASE }}.sha256sums.txt
sha256sum ${{ steps.get-file-base.outputs.FILE_BASE }}.html.abi.reports.tar.gz >> ${{ steps.get-file-base.outputs.FILE_BASE }}.sha256sums.txt
- name: Store snapshot name
run: |
@ -197,8 +197,8 @@ jobs:
${{ steps.get-file-base.outputs.FILE_BASE }}-win-vs2022_cl.zip
${{ steps.get-file-base.outputs.FILE_BASE }}-ubuntu-2204_intel.tar.gz
${{ steps.get-file-base.outputs.FILE_BASE }}-win-vs2022_intel.zip
${{ steps.get-file-base.outputs.FILE_BASE }}.html.abi.reports
sha256sums.txt
${{ steps.get-file-base.outputs.FILE_BASE }}.html.abi.reports.tar.gz
${{ steps.get-file-base.outputs.FILE_BASE }}.sha256sums.txt
if-no-files-found: error # 'warn' or 'ignore' are also available, defaults to `warn`
- name: Release tag
@ -221,8 +221,8 @@ jobs:
${{ steps.get-file-base.outputs.FILE_BASE }}-win-vs2022_cl.zip
${{ steps.get-file-base.outputs.FILE_BASE }}-ubuntu-2204_intel.tar.gz
${{ steps.get-file-base.outputs.FILE_BASE }}-win-vs2022_intel.zip
${{ steps.get-file-base.outputs.FILE_BASE }}.html.abi.reports
sha256sums.txt
${{ steps.get-file-base.outputs.FILE_BASE }}.html.abi.reports.tar.gz
${{ steps.get-file-base.outputs.FILE_BASE }}.sha256sums.txt
if-no-files-found: error # 'warn' or 'ignore' are also available, defaults to `warn`
- name: List files for the space (Linux)

View File

@ -31,15 +31,16 @@ jobs:
needs: log-the-inputs
uses: ./.github/workflows/tarball.yml
with:
# use_tag: ${{ inputs.use_tag }}
use_tag: ${{ needs.log-the-inputs.outputs.rel_tag }}
use_environ: release
call-workflow-ctest:
needs: call-workflow-tarball
uses: ./.github/workflows/cmake-ctest.yml
with:
file_base: ${{ needs.call-workflow-tarball.outputs.file_base }}
preset_name: ci-StdShar
file_base: ${{ needs.call-workflow-tarball.outputs.file_base }}
snap_name: hdf5-${{ needs.call-workflow-tarball.outputs.source_base }}
use_environ: release
call-workflow-abi:
@ -58,8 +59,8 @@ jobs:
uses: ./.github/workflows/release-files.yml
with:
file_base: ${{ needs.call-workflow-tarball.outputs.file_base }}
file_branch: ${{ needs.log-the-inputs.outputs.rel_tag }}
file_sha: ${{ needs.log-the-inputs.outputs.rel_tag }}
file_branch: ${{ needs.call-workflow-tarball.outputs.file_branch }}
file_sha: ${{ needs.call-workflow-tarball.outputs.file_sha }}
use_tag: ${{ needs.log-the-inputs.outputs.rel_tag }}
use_environ: release

View File

@ -45,7 +45,8 @@ jobs:
token: ${{ github.token }}
tag: "${{ inputs.use_tag }}"
assets: |
${{ steps.get-file-base.outputs.FILE_BASE }}.html.abi.reports
${{ steps.get-file-base.outputs.FILE_BASE }}.sha256sums.txt
${{ steps.get-file-base.outputs.FILE_BASE }}.html.abi.reports.tar.gz
${{ steps.get-file-base.outputs.FILE_BASE }}.doxygen.zip
${{ steps.get-file-base.outputs.FILE_BASE }}.tar.gz
${{ steps.get-file-base.outputs.FILE_BASE }}.zip

View File

@ -4,11 +4,11 @@ name: hdf5 1.14 tarball
on:
workflow_call:
inputs:
# use_tag:
# description: 'Release version tag'
# type: string
# required: false
# default: snapshot-1.14
use_tag:
description: 'Release version tag'
type: string
required: false
default: snapshot-1.14
use_environ:
description: 'Environment to locate files'
type: string
@ -18,6 +18,9 @@ on:
has_changes:
description: "Whether there were changes the previous day"
value: ${{ jobs.check_commits.outputs.has_changes }}
source_base:
description: "The common base name of the source tarballs"
value: ${{ jobs.create_tarball.outputs.source_base }}
file_base:
description: "The common base name of the source tarballs"
value: ${{ jobs.create_tarball.outputs.file_base }}
@ -80,6 +83,7 @@ jobs:
if: ${{ ((inputs.use_environ == 'snapshots') && (needs.check_commits.outputs.has_changes == 'true')) || (inputs.use_environ == 'release') }}
outputs:
file_base: ${{ steps.set-file-base.outputs.FILE_BASE }}
source_base: ${{ steps.version.outputs.SOURCE_TAG }}
steps:
# Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
- name: Get Sources
@ -96,23 +100,28 @@ jobs:
id: version
run: |
cd "$GITHUB_WORKSPACE/hdfsrc"
echo "TAG_VERSION=$(bin/h5vers)" >> $GITHUB_OUTPUT
echo "SOURCE_TAG=$(bin/h5vers)" >> $GITHUB_OUTPUT
- name: Set file base name
id: set-file-base
run: |
if [[ '${{ inputs.use_environ }}' == 'snapshots' && '${{ needs.check_commits.outputs.has_changes }}' == 'true' ]]
if [[ '${{ inputs.use_environ }}' == 'snapshots' ]]
then
FILE_NAME_BASE=$(echo "hdf5-${{ needs.check_commits.outputs.branch_ref }}-${{ needs.check_commits.outputs.branch_sha }}")
else
FILE_NAME_BASE=$(echo "hdf5-${{ steps.version.outputs.TAG_VERSION }}")
if [[ '${{ inputs.use_tag }}' == 'snapshot' ]]
then
FILE_NAME_BASE=$(echo "snapshot")
else
FILE_NAME_BASE=$(echo "hdf5-${{ steps.version.outputs.SOURCE_TAG }}")
fi
fi
echo "FILE_BASE=$FILE_NAME_BASE" >> $GITHUB_OUTPUT
shell: bash
- name: Create snapshot file base name
id: create-file-base
if: ${{ (inputs.use_environ == 'snapshots') && (needs.check_commits.outputs.has_changes == 'true') }}
if: ${{ (inputs.use_environ == 'snapshots') }}
run: |
cd "$GITHUB_WORKSPACE/hdfsrc"
bin/release -d $GITHUB_WORKSPACE --branch ${{ needs.check_commits.outputs.branch_ref }} --revision gzip zip
@ -123,7 +132,15 @@ jobs:
if: ${{ (inputs.use_environ == 'release') }}
run: |
cd "$GITHUB_WORKSPACE/hdfsrc"
bin/release -d $GITHUB_WORKSPACE gzip zip cmake-tgz cmake-zip
bin/release -d $GITHUB_WORKSPACE gzip zip
shell: bash
- name: Rename release file base name
id: ren-basename
if: ${{ (inputs.use_environ == 'release') && (inputs.use_tag == 'snapshot') }}
run: |
mv hdf5-${{ steps.version.outputs.SOURCE_TAG }}.tar.gz ${{ inputs.use_tag }}.tar.gz
mv hdf5-${{ steps.version.outputs.SOURCE_TAG }}.zip ${{ inputs.use_tag }}.zip
shell: bash
- name: List files in the repository

View File

@ -122,7 +122,7 @@ Please make sure that you check the items applicable to your pull request:
* [ ] If changes were done to Autotools build, were they added to CMake and vice versa?
* [ ] Is the pull request applicable to any other branches? If yes, which ones? Please document it in the GitHub issue.
* [ ] Is the new code sufficiently documented for future maintenance?
* [ ] Does the new feature require a change to an existing API? See "API Compatibility Macros" document (https://portal.hdfgroup.org/display/HDF5/API+Compatibility+Macros)
* [ ] Does the new feature require a change to an existing API? See "API Compatibility Macros" document (https://docs.hdfgroup.org/hdf5/v1_14/api-compat-macros.html)
* Documentation
* [ ] Was the change described in the release_docs/RELEASE.txt file?
* [ ] Was the new function documented in the corresponding public header file using [Doxygen](https://hdfgroup.github.io/hdf5/v1_14/_r_m_t.html)?

View File

@ -5,12 +5,10 @@
#
# This file is part of HDF5. The full HDF5 copyright notice, including
# terms governing use, modification, and redistribution, is contained in
# the files COPYING and Copyright.html. COPYING can be found at the root
# of the source code distribution tree; Copyright.html can be found at the
# root level of an installed copy of the electronic HDF5 document set and
# is linked from the top-level documents page. It can also be found at
# http://hdfgroup.org/HDF5/doc/Copyright.html. If you do not have
# access to either file, you may request a copy from help@hdfgroup.org.
# the COPYING file, which can be found at the root of the source code
# distribution tree, or in https://www.hdfgroup.org/licenses.
# If you do not have access to either file, you may request a copy from
# help@hdfgroup.org.
##
## Makefile.am
## Run automake to generate a Makefile.in from this file.

View File

@ -6,12 +6,10 @@
#
# This file is part of HDF5. The full HDF5 copyright notice, including
# terms governing use, modification, and redistribution, is contained in
# the files COPYING and Copyright.html. COPYING can be found at the root
# of the source code distribution tree; Copyright.html can be found at the
# root level of an installed copy of the electronic HDF5 document set and
# is linked from the top-level documents page. It can also be found at
# http://hdfgroup.org/HDF5/doc/Copyright.html. If you do not have
# access to either file, you may request a copy from help@hdfgroup.org.
# the COPYING file, which can be found at the root of the source code
# distribution tree, or in https://www.hdfgroup.org/licenses.
# If you do not have access to either file, you may request a copy from
# help@hdfgroup.org.
srcdir=@srcdir@

View File

@ -5,12 +5,10 @@
#
# This file is part of HDF5. The full HDF5 copyright notice, including
# terms governing use, modification, and redistribution, is contained in
# the files COPYING and Copyright.html. COPYING can be found at the root
# of the source code distribution tree; Copyright.html can be found at the
# root level of an installed copy of the electronic HDF5 document set and
# is linked from the top-level documents page. It can also be found at
# http://hdfgroup.org/HDF5/doc/Copyright.html. If you do not have
# access to either file, you may request a copy from help@hdfgroup.org.
# the COPYING file, which can be found at the root of the source code
# distribution tree, or in https://www.hdfgroup.org/licenses.
# If you do not have access to either file, you may request a copy from
# help@hdfgroup.org.
##
## Makefile.am
## Run automake to generate a Makefile.in from this file.

View File

@ -6,12 +6,10 @@
#
# This file is part of HDF5. The full HDF5 copyright notice, including
# terms governing use, modification, and redistribution, is contained in
# the files COPYING and Copyright.html. COPYING can be found at the root
# of the source code distribution tree; Copyright.html can be found at the
# root level of an installed copy of the electronic HDF5 document set and
# is linked from the top-level documents page. It can also be found at
# http://hdfgroup.org/HDF5/doc/Copyright.html. If you do not have
# access to either file, you may request a copy from help@hdfgroup.org.
# the COPYING file, which can be found at the root of the source code
# distribution tree, or in https://www.hdfgroup.org/licenses.
# If you do not have access to either file, you may request a copy from
# help@hdfgroup.org.
srcdir=@srcdir@

View File

@ -5,12 +5,10 @@
#
# This file is part of HDF5. The full HDF5 copyright notice, including
# terms governing use, modification, and redistribution, is contained in
# the files COPYING and Copyright.html. COPYING can be found at the root
# of the source code distribution tree; Copyright.html can be found at the
# root level of an installed copy of the electronic HDF5 document set and
# is linked from the top-level documents page. It can also be found at
# http://hdfgroup.org/HDF5/doc/Copyright.html. If you do not have
# access to either file, you may request a copy from help@hdfgroup.org.
# the COPYING file, which can be found at the root of the source code
# distribution tree, or in https://www.hdfgroup.org/licenses.
# If you do not have access to either file, you may request a copy from
# help@hdfgroup.org.
##
## Makefile.am
## Run automake to generate a Makefile.in from this file.

View File

@ -6,12 +6,10 @@
#
# This file is part of HDF5. The full HDF5 copyright notice, including
# terms governing use, modification, and redistribution, is contained in
# the files COPYING and Copyright.html. COPYING can be found at the root
# of the source code distribution tree; Copyright.html can be found at the
# root level of an installed copy of the electronic HDF5 document set and
# is linked from the top-level documents page. It can also be found at
# http://hdfgroup.org/HDF5/doc/Copyright.html. If you do not have
# access to either file, you may request a copy from help@hdfgroup.org.
# the COPYING file, which can be found at the root of the source code
# distribution tree, or in https://www.hdfgroup.org/licenses.
# If you do not have access to either file, you may request a copy from
# help@hdfgroup.org.
srcdir=@srcdir@

View File

@ -5,12 +5,10 @@
#
# This file is part of HDF5. The full HDF5 copyright notice, including
# terms governing use, modification, and redistribution, is contained in
# the files COPYING and Copyright.html. COPYING can be found at the root
# of the source code distribution tree; Copyright.html can be found at the
# root level of an installed copy of the electronic HDF5 document set and
# is linked from the top-level documents page. It can also be found at
# http://hdfgroup.org/HDF5/doc/Copyright.html. If you do not have
# access to either file, you may request a copy from help@hdfgroup.org.
# the COPYING file, which can be found at the root of the source code
# distribution tree, or in https://www.hdfgroup.org/licenses.
# If you do not have access to either file, you may request a copy from
# help@hdfgroup.org.
##
## Makefile.am
## Run automake to generate a Makefile.in from this file.

View File

@ -6,12 +6,10 @@
#
# This file is part of HDF5. The full HDF5 copyright notice, including
# terms governing use, modification, and redistribution, is contained in
# the files COPYING and Copyright.html. COPYING can be found at the root
# of the source code distribution tree; Copyright.html can be found at the
# root level of an installed copy of the electronic HDF5 document set and
# is linked from the top-level documents page. It can also be found at
# http://hdfgroup.org/HDF5/doc/Copyright.html. If you do not have
# access to either file, you may request a copy from help@hdfgroup.org.
# the COPYING file, which can be found at the root of the source code
# distribution tree, or in https://www.hdfgroup.org/licenses.
# If you do not have access to either file, you may request a copy from
# help@hdfgroup.org.
srcdir=@srcdir@

View File

@ -5,12 +5,10 @@
#
# This file is part of HDF5. The full HDF5 copyright notice, including
# terms governing use, modification, and redistribution, is contained in
# the files COPYING and Copyright.html. COPYING can be found at the root
# of the source code distribution tree; Copyright.html can be found at the
# root level of an installed copy of the electronic HDF5 document set and
# is linked from the top-level documents page. It can also be found at
# http://hdfgroup.org/HDF5/doc/Copyright.html. If you do not have
# access to either file, you may request a copy from help@hdfgroup.org.
# the COPYING file, which can be found at the root of the source code
# distribution tree, or in https://www.hdfgroup.org/licenses.
# If you do not have access to either file, you may request a copy from
# help@hdfgroup.org.
##
## Makefile.am
## Run automake to generate a Makefile.in from this file.

View File

@ -5,12 +5,10 @@
#
# This file is part of HDF5. The full HDF5 copyright notice, including
# terms governing use, modification, and redistribution, is contained in
# the files COPYING and Copyright.html. COPYING can be found at the root
# of the source code distribution tree; Copyright.html can be found at the
# root level of an installed copy of the electronic HDF5 document set and
# is linked from the top-level documents page. It can also be found at
# http://hdfgroup.org/HDF5/doc/Copyright.html. If you do not have
# access to either file, you may request a copy from help@hdfgroup.org.
# the COPYING file, which can be found at the root of the source code
# distribution tree, or in https://www.hdfgroup.org/licenses.
# If you do not have access to either file, you may request a copy from
# help@hdfgroup.org.
##
## Makefile.am
## Run automake to generate a Makefile.in from this file.

View File

@ -6,12 +6,10 @@
#
# This file is part of HDF5. The full HDF5 copyright notice, including
# terms governing use, modification, and redistribution, is contained in
# the files COPYING and Copyright.html. COPYING can be found at the root
# of the source code distribution tree; Copyright.html can be found at the
# root level of an installed copy of the electronic HDF5 document set and
# is linked from the top-level documents page. It can also be found at
# http://hdfgroup.org/HDF5/doc/Copyright.html. If you do not have
# access to either file, you may request a copy from help@hdfgroup.org.
# the COPYING file, which can be found at the root of the source code
# distribution tree, or in https://www.hdfgroup.org/licenses.
# If you do not have access to either file, you may request a copy from
# help@hdfgroup.org.
srcdir=@srcdir@

View File

@ -5,12 +5,10 @@
#
# This file is part of HDF5. The full HDF5 copyright notice, including
# terms governing use, modification, and redistribution, is contained in
# the files COPYING and Copyright.html. COPYING can be found at the root
# of the source code distribution tree; Copyright.html can be found at the
# root level of an installed copy of the electronic HDF5 document set and
# is linked from the top-level documents page. It can also be found at
# http://hdfgroup.org/HDF5/doc/Copyright.html. If you do not have
# access to either file, you may request a copy from help@hdfgroup.org.
# the COPYING file, which can be found at the root of the source code
# distribution tree, or in https://www.hdfgroup.org/licenses.
# If you do not have access to either file, you may request a copy from
# help@hdfgroup.org.
##
## Makefile.am
## Run automake to generate a Makefile.in from this file.

View File

@ -84,7 +84,9 @@ CONTAINS
CHARACTER(LEN=10) :: space
INTEGER :: spaces ! Number of whitespaces to prepend to output
INTEGER :: len
INTEGER :: ret_val_func
ret_val_func = 0
ret_val = 0
name_string(1:10) = " "
@ -140,8 +142,8 @@ CONTAINS
ptr2 = C_LOC(nextod%recurs)
funptr = C_FUNLOC(op_func)
CALL h5literate_by_name_f(loc_id, name_string, H5_INDEX_NAME_F, H5_ITER_NATIVE_F, idx, &
funptr, ptr2, ret_val, status)
funptr, ptr2, ret_val_func, status)
ret_val = INT(ret_val_func,C_INT)
ENDIF
WRITE(*,'(A)') space(1:spaces)//"}"
RETURN

View File

@ -6,12 +6,10 @@
#
# This file is part of HDF5. The full HDF5 copyright notice, including
# terms governing use, modification, and redistribution, is contained in
# the files COPYING and Copyright.html. COPYING can be found at the root
# of the source code distribution tree; Copyright.html can be found at the
# root level of an installed copy of the electronic HDF5 document set and
# is linked from the top-level documents page. It can also be found at
# http://hdfgroup.org/HDF5/doc/Copyright.html. If you do not have
# access to either file, you may request a copy from help@hdfgroup.org.
# the COPYING file, which can be found at the root of the source code
# distribution tree, or in https://www.hdfgroup.org/licenses.
# If you do not have access to either file, you may request a copy from
# help@hdfgroup.org.
srcdir=@srcdir@

View File

@ -25,9 +25,10 @@
!
! MPI definitions and calls.
!
INTEGER :: mpierror ! MPI error flag
INTEGER :: comm, info
INTEGER :: mpi_size, mpi_rank
INTEGER(KIND=MPI_INTEGER_KIND) :: mpierror ! MPI error flag
INTEGER(KIND=MPI_INTEGER_KIND) :: comm, info
INTEGER(KIND=MPI_INTEGER_KIND) :: mpi_size, mpi_rank
comm = MPI_COMM_WORLD
info = MPI_INFO_NULL
CALL MPI_INIT(mpierror)

View File

@ -18,9 +18,9 @@
!
! MPI definitions and calls.
!
INTEGER :: mpierror ! MPI error flag
INTEGER :: comm, info
INTEGER :: mpi_size, mpi_rank
INTEGER(KIND=MPI_INTEGER_KIND) :: mpierror ! MPI error flag
INTEGER(KIND=MPI_INTEGER_KIND) :: comm, info
INTEGER(KIND=MPI_INTEGER_KIND) :: mpi_size, mpi_rank
comm = MPI_COMM_WORLD
info = MPI_INFO_NULL

View File

@ -27,7 +27,7 @@ MODULE filter
INTEGER , PARAMETER :: PATH_MAX = 512
! Global variables
INTEGER :: mpi_rank, mpi_size
INTEGER(KIND=MPI_INTEGER_KIND) :: mpi_rank, mpi_size
CONTAINS
!
@ -91,10 +91,11 @@ CONTAINS
LOGICAL :: do_cleanup
INTEGER :: status
INTEGER(KIND=MPI_INTEGER_KIND) :: mpierror
CALL get_environment_variable("HDF5_NOCLEANUP", STATUS=status)
IF(status.EQ.0)THEN
CALL MPI_File_delete(filename, MPI_INFO_NULL, status)
CALL MPI_File_delete(filename, MPI_INFO_NULL, mpierror)
ENDIF
END SUBROUTINE cleanup
@ -241,18 +242,19 @@ CONTAINS
USE filter
IMPLICIT NONE
INTEGER :: comm = MPI_COMM_WORLD
INTEGER :: info = MPI_INFO_NULL
INTEGER(KIND=MPI_INTEGER_KIND) :: comm = MPI_COMM_WORLD
INTEGER(KIND=MPI_INTEGER_KIND) :: info = MPI_INFO_NULL
INTEGER(hid_t) :: file_id
INTEGER(hid_t) :: fapl_id
INTEGER(hid_t) :: dxpl_id
CHARACTER(LEN=PATH_MAX) :: par_prefix
CHARACTER(LEN=PATH_MAX) :: filename
INTEGER :: status
INTEGER(KIND=MPI_INTEGER_KIND) :: mpierror
CALL MPI_Init(status)
CALL MPI_Comm_size(comm, mpi_size, status)
CALL MPI_Comm_rank(comm, mpi_rank, status)
CALL MPI_Init(mpierror)
CALL MPI_Comm_size(comm, mpi_size, mpierror)
CALL MPI_Comm_rank(comm, mpi_rank, mpierror)
!
! Initialize HDF5 library and Fortran interfaces.
@ -349,6 +351,6 @@ CONTAINS
! ------------------------------------
CALL cleanup(filename)
CALL MPI_Finalize(status)
CALL MPI_Finalize(mpierror)
END PROGRAM main


@ -34,9 +34,9 @@
!
! MPI definitions and calls.
!
INTEGER :: mpierror ! MPI error flag
INTEGER :: comm, info
INTEGER :: mpi_size, mpi_rank
INTEGER(KIND=MPI_INTEGER_KIND) :: mpierror ! MPI error flag
INTEGER(KIND=MPI_INTEGER_KIND) :: comm, info
INTEGER(KIND=MPI_INTEGER_KIND) :: mpi_size, mpi_rank
comm = MPI_COMM_WORLD
info = MPI_INFO_NULL


@ -30,9 +30,9 @@
!
! MPI definitions and calls.
!
INTEGER :: mpierror ! MPI error flag
INTEGER :: comm, info
INTEGER :: mpi_size, mpi_rank
INTEGER(KIND=MPI_INTEGER_KIND) :: mpierror ! MPI error flag
INTEGER(KIND=MPI_INTEGER_KIND) :: comm, info
INTEGER(KIND=MPI_INTEGER_KIND) :: mpi_size, mpi_rank
comm = MPI_COMM_WORLD
info = MPI_INFO_NULL
CALL MPI_INIT(mpierror)


@ -36,9 +36,9 @@
!
! MPI definitions and calls.
!
INTEGER :: mpierror ! MPI error flag
INTEGER :: comm, info
INTEGER :: mpi_size, mpi_rank
INTEGER(KIND=MPI_INTEGER_KIND) :: mpierror ! MPI error flag
INTEGER(KIND=MPI_INTEGER_KIND) :: comm, info
INTEGER(KIND=MPI_INTEGER_KIND) :: mpi_size, mpi_rank
comm = MPI_COMM_WORLD
info = MPI_INFO_NULL


@ -35,9 +35,9 @@
!
! MPI definitions and calls.
!
INTEGER :: mpierror ! MPI error flag
INTEGER :: comm, info
INTEGER :: mpi_size, mpi_rank
INTEGER(KIND=MPI_INTEGER_KIND) :: mpierror ! MPI error flag
INTEGER(KIND=MPI_INTEGER_KIND) :: comm, info
INTEGER(KIND=MPI_INTEGER_KIND) :: mpi_size, mpi_rank
comm = MPI_COMM_WORLD
info = MPI_INFO_NULL


@ -60,8 +60,8 @@ CONTAINS
IMPLICIT NONE
INTEGER(HID_T) :: fapl_id
INTEGER :: mpi_size
INTEGER :: mpi_rank
INTEGER(KIND=MPI_INTEGER_KIND) :: mpi_size
INTEGER(KIND=MPI_INTEGER_KIND) :: mpi_rank
INTEGER, DIMENSION(:), ALLOCATABLE, TARGET :: wdata
INTEGER(hsize_t), DIMENSION(1:EXAMPLE_DSET_DIMS) :: dset_dims
@ -171,8 +171,8 @@ CONTAINS
IMPLICIT NONE
INTEGER(HID_T) :: fapl_id
INTEGER :: mpi_size
INTEGER :: mpi_rank
INTEGER(KIND=MPI_INTEGER_KIND) :: mpi_size
INTEGER(KIND=MPI_INTEGER_KIND) :: mpi_rank
INTEGER, DIMENSION(:), ALLOCATABLE, TARGET :: wdata
@ -304,8 +304,8 @@ CONTAINS
IMPLICIT NONE
INTEGER(HID_T) :: fapl_id
INTEGER :: mpi_size
INTEGER :: mpi_rank
INTEGER(KIND=MPI_INTEGER_KIND) :: mpi_size
INTEGER(KIND=MPI_INTEGER_KIND) :: mpi_rank
INTEGER, DIMENSION(:), ALLOCATABLE, TARGET :: wdata
TYPE(H5FD_subfiling_config_t) :: subf_config
@ -320,6 +320,7 @@ CONTAINS
INTEGER :: status
INTEGER(SIZE_T) :: i
TYPE(C_PTR) :: f_ptr
INTEGER(KIND=MPI_INTEGER_KIND) :: mpierror
! Make a copy of the FAPL so we don't disturb
! it for the other examples
@ -413,7 +414,7 @@ CONTAINS
CALL H5Fclose_f(file_id, status)
ENDIF
CALL MPI_Barrier(MPI_COMM_WORLD, status)
CALL MPI_Barrier(MPI_COMM_WORLD, mpierror)
!
! Use all MPI ranks to re-open the file and
@ -467,26 +468,27 @@ PROGRAM main
USE SUBF
IMPLICIT NONE
INTEGER :: comm = MPI_COMM_WORLD
INTEGER :: info = MPI_INFO_NULL
INTEGER(KIND=MPI_INTEGER_KIND) :: comm = MPI_COMM_WORLD
INTEGER(KIND=MPI_INTEGER_KIND) :: info = MPI_INFO_NULL
INTEGER(HID_T) :: fapl_id
INTEGER :: mpi_size
INTEGER :: mpi_rank
INTEGER :: required
INTEGER :: provided
INTEGER(KIND=MPI_INTEGER_KIND) :: mpi_size
INTEGER(KIND=MPI_INTEGER_KIND) :: mpi_rank
INTEGER(KIND=MPI_INTEGER_KIND) :: required
INTEGER(KIND=MPI_INTEGER_KIND) :: provided
INTEGER(KIND=MPI_INTEGER_KIND) :: mpierror
INTEGER :: status
! HDF5 Subfiling VFD requires MPI_Init_thread with MPI_THREAD_MULTIPLE
required = MPI_THREAD_MULTIPLE
provided = 0
CALL mpi_init_thread(required, provided, status)
CALL mpi_init_thread(required, provided, mpierror)
IF (provided .NE. required) THEN
WRITE(*,*) "MPI doesn't support MPI_Init_thread with MPI_THREAD_MULTIPLE *FAILED*"
CALL MPI_Abort(comm, -1, status)
CALL MPI_Abort(comm, -1_MPI_INTEGER_KIND, mpierror)
ENDIF
CALL MPI_Comm_size(comm, mpi_size, status)
CALL MPI_Comm_rank(comm, mpi_rank, status)
CALL MPI_Comm_size(comm, mpi_size, mpierror)
CALL MPI_Comm_rank(comm, mpi_rank, mpierror)
!
! Initialize HDF5 library and Fortran interfaces.
@ -516,6 +518,6 @@ PROGRAM main
IF(mpi_rank .EQ. 0) WRITE(*,"(A)") "PHDF5 example finished with no errors"
CALL MPI_Finalize(status)
CALL MPI_Finalize(mpierror)
END PROGRAM main


@ -5,12 +5,10 @@
#
# This file is part of HDF5. The full HDF5 copyright notice, including
# terms governing use, modification, and redistribution, is contained in
# the files COPYING and Copyright.html. COPYING can be found at the root
# of the source code distribution tree; Copyright.html can be found at the
# root level of an installed copy of the electronic HDF5 document set and
# is linked from the top-level documents page. It can also be found at
# http://hdfgroup.org/HDF5/doc/Copyright.html. If you do not have
# access to either file, you may request a copy from help@hdfgroup.org.
# the COPYING file, which can be found at the root of the source code
# distribution tree, or in https://www.hdfgroup.org/licenses.
# If you do not have access to either file, you may request a copy from
# help@hdfgroup.org.
##
## Makefile.am
## Run automake to generate a Makefile.in from this file.


@ -74,14 +74,14 @@ PROGRAM main
! Insert enumerated value for memtype.
!
val = i
CALL h5tenum_insert_f(memtype, TRIM(names(i+1)), val, hdferr)
f_ptr = C_LOC(val)
CALL h5tenum_insert_f(memtype, TRIM(names(i+1)), f_ptr, hdferr)
!
! Insert enumerated value for filetype. We must first convert
! the numerical value val to the base type of the destination.
!
f_ptr = C_LOC(val)
CALL h5tconvert_f (M_BASET, F_BASET, INT(1,SIZE_T), f_ptr, hdferr)
CALL h5tenum_insert_f(filetype, TRIM(names(i+1)), val, hdferr)
CALL h5tenum_insert_f(filetype, TRIM(names(i+1)), f_ptr, hdferr)
ENDDO
!
! Create dataspace. Setting maximum size to be the current size.
@ -129,7 +129,7 @@ PROGRAM main
!
! Get the name of the enumeration member.
!
CALL h5tenum_nameof_f( memtype, rdata(i,j), NAME_BUF_SIZE, name, hdferr)
CALL h5tenum_nameof_f( memtype, INT(rdata(i,j)), NAME_BUF_SIZE, name, hdferr)
WRITE(*,'(" ", A6," ")', ADVANCE='NO') TRIM(NAME)
ENDDO
WRITE(*,'("]")')


@ -75,14 +75,15 @@ PROGRAM main
! Insert enumerated value for memtype.
!
val = i
CALL h5tenum_insert_f(memtype, TRIM(names(i+1)), val, hdferr)
f_ptr = C_LOC(val)
CALL h5tenum_insert_f(memtype, TRIM(names(i+1)), f_ptr, hdferr)
!
! Insert enumerated value for filetype. We must first convert
! the numerical value val to the base type of the destination.
!
f_ptr = C_LOC(val)
CALL h5tconvert_f(M_BASET, F_BASET, INT(1,SIZE_T), f_ptr, hdferr)
CALL h5tenum_insert_f(filetype, TRIM(names(i+1)), val, hdferr)
CALL h5tenum_insert_f(filetype, TRIM(names(i+1)), f_ptr, hdferr)
ENDDO
!
! Create dataspace with a null dataspace.
@ -137,7 +138,7 @@ PROGRAM main
!
! Get the name of the enumeration member.
!
CALL h5tenum_nameof_f( memtype, rdata(i,j), NAME_BUF_SIZE, name, hdferr)
CALL h5tenum_nameof_f( memtype, INT(rdata(i,j)), NAME_BUF_SIZE, name, hdferr)
WRITE(*,'(" ",A6," ")', ADVANCE='NO') TRIM(NAME)
ENDDO
WRITE(*,'("]")')


@ -6,12 +6,10 @@
#
# This file is part of HDF5. The full HDF5 copyright notice, including
# terms governing use, modification, and redistribution, is contained in
# the files COPYING and Copyright.html. COPYING can be found at the root
# of the source code distribution tree; Copyright.html can be found at the
# root level of an installed copy of the electronic HDF5 document set and
# is linked from the top-level documents page. It can also be found at
# http://hdfgroup.org/HDF5/doc/Copyright.html. If you do not have
# access to either file, you may request a copy from help@hdfgroup.org.
# the COPYING file, which can be found at the root of the source code
# distribution tree, or in https://www.hdfgroup.org/licenses.
# If you do not have access to either file, you may request a copy from
# help@hdfgroup.org.
srcdir=@srcdir@


@ -5,12 +5,10 @@
#
# This file is part of HDF5. The full HDF5 copyright notice, including
# terms governing use, modification, and redistribution, is contained in
# the files COPYING and Copyright.html. COPYING can be found at the root
# of the source code distribution tree; Copyright.html can be found at the
# root level of an installed copy of the electronic HDF5 document set and
# is linked from the top-level documents page. It can also be found at
# http://hdfgroup.org/HDF5/doc/Copyright.html. If you do not have
# access to either file, you may request a copy from help@hdfgroup.org.
# the COPYING file, which can be found at the root of the source code
# distribution tree, or in https://www.hdfgroup.org/licenses.
# If you do not have access to either file, you may request a copy from
# help@hdfgroup.org.
##
## Makefile.am
## Run automake to generate a Makefile.in from this file.


@ -5,12 +5,10 @@
* *
* This file is part of HDF5. The full HDF5 copyright notice, including *
* terms governing use, modification, and redistribution, is contained in *
* the files COPYING and Copyright.html. COPYING can be found at the root *
* of the source code distribution tree; Copyright.html can be found at the *
* root level of an installed copy of the electronic HDF5 document set and *
* is linked from the top-level documents page. It can also be found at *
* http://hdfgroup.org/HDF5/doc/Copyright.html. If you do not have *
* access to either file, you may request a copy from help@hdfgroup.org. *
* the COPYING file, which can be found at the root of the source code *
* distribution tree, or in https://www.hdfgroup.org/licenses. *
* If you do not have access to either file, you may request a copy from *
* help@hdfgroup.org. *
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */
import hdf.hdf5lib.H5;


@ -5,12 +5,10 @@
* *
* This file is part of HDF5. The full HDF5 copyright notice, including *
* terms governing use, modification, and redistribution, is contained in *
* the files COPYING and Copyright.html. COPYING can be found at the root *
* of the source code distribution tree; Copyright.html can be found at the *
* root level of an installed copy of the electronic HDF5 document set and *
* is linked from the top-level documents page. It can also be found at *
* http://hdfgroup.org/HDF5/doc/Copyright.html. If you do not have *
* access to either file, you may request a copy from help@hdfgroup.org. *
* the COPYING file, which can be found at the root of the source code *
* distribution tree, or in https://www.hdfgroup.org/licenses. *
* If you do not have access to either file, you may request a copy from *
* help@hdfgroup.org. *
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */
import hdf.hdf5lib.H5;


@ -5,12 +5,10 @@
* *
* This file is part of HDF5. The full HDF5 copyright notice, including *
* terms governing use, modification, and redistribution, is contained in *
* the files COPYING and Copyright.html. COPYING can be found at the root *
* of the source code distribution tree; Copyright.html can be found at the *
* root level of an installed copy of the electronic HDF5 document set and *
* is linked from the top-level documents page. It can also be found at *
* http://hdfgroup.org/HDF5/doc/Copyright.html. If you do not have *
* access to either file, you may request a copy from help@hdfgroup.org. *
* the COPYING file, which can be found at the root of the source code *
* distribution tree, or in https://www.hdfgroup.org/licenses. *
* If you do not have access to either file, you may request a copy from *
* help@hdfgroup.org. *
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */
import hdf.hdf5lib.H5;


@ -4,12 +4,10 @@
#
# This file is part of HDF5. The full HDF5 copyright notice, including
# terms governing use, modification, and redistribution, is contained in
# the files COPYING and Copyright.html. COPYING can be found at the root
# of the source code distribution tree; Copyright.html can be found at the
# root level of an installed copy of the electronic HDF5 document set and
# is linked from the top-level documents page. It can also be found at
# http://hdfgroup.org/HDF5/doc/Copyright.html. If you do not have
# access to either file, you may request a copy from help@hdfgroup.org.
# the COPYING file, which can be found at the root of the source code
# distribution tree, or in https://www.hdfgroup.org/licenses.
# If you do not have access to either file, you may request a copy from
# help@hdfgroup.org.
##
## Makefile.am
## Run automake to generate a Makefile.in from this file.


@ -6,12 +6,10 @@
#
# This file is part of HDF5. The full HDF5 copyright notice, including
# terms governing use, modification, and redistribution, is contained in
# the files COPYING and Copyright.html. COPYING can be found at the root
# of the source code distribution tree; Copyright.html can be found at the
# root level of an installed copy of the electronic HDF5 document set and
# is linked from the top-level documents page. It can also be found at
# http://hdfgroup.org/HDF5/doc/Copyright.html. If you do not have
# access to either file, you may request a copy from help@hdfgroup.org.
# the COPYING file, which can be found at the root of the source code
# distribution tree, or in https://www.hdfgroup.org/licenses.
# If you do not have access to either file, you may request a copy from
# help@hdfgroup.org.
#
top_builddir=@top_builddir@


@ -5,15 +5,13 @@
#
# This file is part of HDF5. The full HDF5 copyright notice, including
# terms governing use, modification, and redistribution, is contained in
# the files COPYING and Copyright.html. COPYING can be found at the root
# of the source code distribution tree; Copyright.html can be found at the
# root level of an installed copy of the electronic HDF5 document set and
# is linked from the top-level documents page. It can also be found at
# http://hdfgroup.org/HDF5/doc/Copyright.html. If you do not have
# access to either file, you may request a copy from help@hdfgroup.org.
# the COPYING file, which can be found at the root of the source code
# distribution tree, or in https://www.hdfgroup.org/licenses.
# If you do not have access to either file, you may request a copy from
# help@hdfgroup.org.
##
## Makefile.am
## Run automake to generate a Makefile.in from this file.
##
SUBDIRS = C FORTRAN
SUBDIRS = C FORTRAN


@ -19,7 +19,7 @@ HELP AND SUPPORT
----------------
Information regarding Help Desk and Support services is available at
https://portal.hdfgroup.org/display/support/The+HDF+Help+Desk
https://hdfgroup.atlassian.net/servicedesk/customer/portals
@ -48,7 +48,7 @@ HDF5 SNAPSHOTS, PREVIOUS RELEASES AND SOURCE CODE
--------------------------------------------
Full Documentation and Programming Resources for this HDF5 can be found at
https://portal.hdfgroup.org/display/HDF5
https://portal.hdfgroup.org/documentation/index.html
Periodically development code snapshots are provided at the following URL:
@ -56,7 +56,7 @@ Periodically development code snapshots are provided at the following URL:
Source packages for current and previous releases are located at:
https://portal.hdfgroup.org/display/support/Downloads
https://portal.hdfgroup.org/downloads/
Development code is available at our Github location:


@ -30,7 +30,7 @@ I. Preconditions
1. We suggest you obtain the latest CMake for windows from the Kitware
web site. The HDF5 product requires a minimum CMake version
of 3.12.
of 3.18.
2. You have installed the HDF5 library built with CMake, by executing
the HDF Install Utility (the *.msi file in the binary package for
@ -45,7 +45,7 @@ I. Preconditions
(Note there are no quote characters used on Windows and all platforms
use forward slashes)
4. Created separate source and build directories.
4. Create separate source and build directories.
(CMake commands are executed in the build directory)


@ -7,12 +7,10 @@
#
# This file is part of HDF5. The full HDF5 copyright notice, including
# terms governing use, modification, and redistribution, is contained in
# the files COPYING and Copyright.html. COPYING can be found at the root
# of the source code distribution tree; Copyright.html can be found at the
# root level of an installed copy of the electronic HDF5 document set and
# is linked from the top-level documents page. It can also be found at
# http://hdfgroup.org/HDF5/doc/Copyright.html. If you do not have
# access to either file, you may request a copy from help@hdfgroup.org.
# the COPYING file, which can be found at the root of the source code
# distribution tree, or in https://www.hdfgroup.org/licenses.
# If you do not have access to either file, you may request a copy from
# help@hdfgroup.org.
AC_PREREQ(2.69)
AC_INIT(HDF5-examples, 0.1, help@hdfgroup.org)

acsite.m4 Normal file

@ -0,0 +1,53 @@
dnl -------------------------------------------------------------------------
dnl -------------------------------------------------------------------------
dnl
dnl Copyright by The HDF Group.
dnl All rights reserved.
dnl
dnl This file is part of HDF5. The full HDF5 copyright notice, including
dnl terms governing use, modification, and redistribution, is contained in
dnl the COPYING file, which can be found at the root of the source code
dnl distribution tree, or in https://www.hdfgroup.org/licenses.
dnl If you do not have access to either file, you may request a copy from
dnl help@hdfgroup.org.
dnl
dnl Macros for HDF5 Fortran
dnl
dnl -------------------------------------------------------------------------
dnl -------------------------------------------------------------------------
dnl -------------------------------------------------------------------------
dnl _AC_SYS_LARGEFILE_MACRO_VALUE
dnl
dnl The following macro overrides the autoconf macro of the same name
dnl with this custom definition. This macro performs the same checks as
dnl autoconf's native _AC_SYS_LARGEFILE_MACRO_VALUE, but will also set
dnl AM_CPPFLAGS with the appropriate -D defines so additional configure
dnl sizeof checks do not fail.
dnl
# _AC_SYS_LARGEFILE_MACRO_VALUE(C-MACRO, VALUE,
# CACHE-VAR,
# DESCRIPTION,
# PROLOGUE, [FUNCTION-BODY])
# ----------------------------------------------------------
m4_define([_AC_SYS_LARGEFILE_MACRO_VALUE],
[AC_CACHE_CHECK([for $1 value needed for large files], [$3],
[while :; do
m4_ifval([$6], [AC_LINK_IFELSE], [AC_COMPILE_IFELSE])(
[AC_LANG_PROGRAM([$5], [$6])],
[$3=no; break])
m4_ifval([$6], [AC_LINK_IFELSE], [AC_COMPILE_IFELSE])(
[AC_LANG_PROGRAM([@%:@define $1 $2
$5], [$6])],
[$3=$2; break])
$3=unknown
break
done])
case $$3 in #(
no | unknown) ;;
*) AC_DEFINE_UNQUOTED([$1], [$$3], [$4])
AM_CPPFLAGS="-D$1=$$3 $AM_CPPFLAGS";;
esac
rm -rf conftest*[]dnl
])# _AC_SYS_LARGEFILE_MACRO_VALUE
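The probe loop in this macro tries the test program plain first, then with the candidate `-D` define, and falls back to "unknown". A minimal Python sketch of that decision logic (the `try_compile` callback is hypothetical, standing in for autoconf's compile/link test):

```python
def largefile_macro_value(macro, value, try_compile, prologue, body=""):
    """Mimic _AC_SYS_LARGEFILE_MACRO_VALUE: decide whether a -D define is
    needed for large-file support. Returns 'no' (not needed), the value
    (needed), or 'unknown' (neither variant compiles)."""
    program = f"{prologue}\nint main(void) {{ {body}; return 0; }}"
    if try_compile(program):
        return "no"  # compiles without the define; nothing to add
    if try_compile(f"#define {macro} {value}\n{program}"):
        return value  # compiles only with the define; add -Dmacro=value
    return "unknown"

# Stub compiler check that only accepts the #define variant:
accepts_defined = lambda src: src.startswith("#define")
print(largefile_macro_value("_FILE_OFFSET_BITS", 64, accepts_defined,
                            "#include <sys/types.h>"))  # → 64
```

In the `value` case the real macro additionally prepends `-D$1=$$3` to `AM_CPPFLAGS`, which is the behavior the restored local version adds over stock autoconf.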


@ -184,6 +184,10 @@ die "unable to read file: $README\n" unless -r $file;
my $RELEASE = $file;
$RELEASE =~ s/[^\/]*$/..\/release_docs\/RELEASE.txt/;
die "unable to read file: $RELEASE\n" unless -r $file;
# release_docs/NEWSLETTER.txt
my $NEWS = $file;
$NEWS =~ s/[^\/]*$/..\/release_docs\/NEWSLETTER.txt/;
die "unable to read file: $NEWS\n" unless -r $file;
# configure.ac
my $CONFIGURE = $file;
$CONFIGURE =~ s/[^\/]*$/..\/configure.ac/;
@ -247,6 +251,7 @@ if ($set) {
# Nothing to do but print result
$README = "";
$RELEASE = "";
$NEWS = "";
$CONFIGURE = "";
$CPP_DOC_CONFIG = "";
$LT_VERS = "";
@ -329,6 +334,20 @@ if ($RELEASE) {
close FILE;
}
# Update the release_docs/NEWSLETTER.txt file
if ($NEWS) {
open FILE, $NEWS or die "$NEWS: $!\n";
my @contents = <FILE>;
close FILE;
$contents[0] = sprintf("HDF5 version %d.%d.%d%s %s",
@newver[0,1,2],
$newver[3] eq "" ? "" : "-".$newver[3],
"currently under development\n");
open FILE, ">$NEWS" or die "$NEWS: $!\n";
print FILE @contents;
close FILE;
}
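The sprintf above rewrites only the first line of NEWSLETTER.txt. A rough Python equivalent of that formatting step (function name and version-tuple layout are illustrative, following the h5vers conventions shown):

```python
def newsletter_header(major, minor, release, sub=""):
    """Build the first line of NEWSLETTER.txt, mirroring the sprintf in
    h5vers: 'HDF5 version X.Y.Z[-sub] currently under development'."""
    suffix = "" if sub == "" else "-" + sub
    return f"HDF5 version {major}.{minor}.{release}{suffix} currently under development\n"

print(newsletter_header(1, 14, 4, "3"), end="")
# → HDF5 version 1.14.4-3 currently under development
```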
# Update the c++/src/cpp_doc_config file
if ($CPP_DOC_CONFIG) {
my $data = read_file($CPP_DOC_CONFIG);


@ -1406,17 +1406,23 @@ test_attr_dtype_shared(FileAccPropList &fapl)
SUBTEST("Shared Datatypes with Attributes");
try {
h5_stat_size_t empty_filesize = 0; // Size of empty file
bool is_default_vfd_compat = false;
// Create a file
H5File fid1(FILE_DTYPE, H5F_ACC_TRUNC, FileCreatPropList::DEFAULT, fapl);
// Close file
fid1.close();
// Get size of file
h5_stat_size_t empty_filesize; // Size of empty file
empty_filesize = h5_get_file_size(FILE_DTYPE.c_str(), H5P_DEFAULT);
if (empty_filesize < 0)
TestErrPrintf("Line %d: file size wrong!\n", __LINE__);
h5_driver_is_default_vfd_compatible(H5P_DEFAULT, &is_default_vfd_compat);
if (is_default_vfd_compat) {
// Get size of file
empty_filesize = h5_get_file_size(FILE_DTYPE.c_str(), H5P_DEFAULT);
if (empty_filesize < 0)
TestErrPrintf("Line %d: file size wrong!\n", __LINE__);
}
// Open the file again
fid1.openFile(FILE_DTYPE, H5F_ACC_RDWR);
@ -1533,10 +1539,12 @@ test_attr_dtype_shared(FileAccPropList &fapl)
// Close file
fid1.close();
// Check size of file
filesize = h5_get_file_size(FILE_DTYPE.c_str(), H5P_DEFAULT);
verify_val(static_cast<long>(filesize), static_cast<long>(empty_filesize), "Checking file size",
__LINE__, __FILE__);
if (is_default_vfd_compat) {
// Check size of file
filesize = h5_get_file_size(FILE_DTYPE.c_str(), H5P_DEFAULT);
verify_val(static_cast<long>(filesize), static_cast<long>(empty_filesize), "Checking file size",
__LINE__, __FILE__);
}
PASSED();
} // end try block


@ -119,10 +119,7 @@ CHECK_INCLUDE_FILE_CONCAT ("features.h" ${HDF_PREFIX}_HAVE_FEATURES_H)
CHECK_INCLUDE_FILE_CONCAT ("dirent.h" ${HDF_PREFIX}_HAVE_DIRENT_H)
CHECK_INCLUDE_FILE_CONCAT ("unistd.h" ${HDF_PREFIX}_HAVE_UNISTD_H)
CHECK_INCLUDE_FILE_CONCAT ("pwd.h" ${HDF_PREFIX}_HAVE_PWD_H)
CHECK_INCLUDE_FILE_CONCAT ("globus/common.h" ${HDF_PREFIX}_HAVE_GLOBUS_COMMON_H)
CHECK_INCLUDE_FILE_CONCAT ("pdb.h" ${HDF_PREFIX}_HAVE_PDB_H)
CHECK_INCLUDE_FILE_CONCAT ("pthread.h" ${HDF_PREFIX}_HAVE_PTHREAD_H)
CHECK_INCLUDE_FILE_CONCAT ("srbclient.h" ${HDF_PREFIX}_HAVE_SRBCLIENT_H)
CHECK_INCLUDE_FILE_CONCAT ("dlfcn.h" ${HDF_PREFIX}_HAVE_DLFCN_H)
CHECK_INCLUDE_FILE_CONCAT ("netinet/in.h" ${HDF_PREFIX}_HAVE_NETINET_IN_H)
CHECK_INCLUDE_FILE_CONCAT ("netdb.h" ${HDF_PREFIX}_HAVE_NETDB_H)
@ -908,12 +905,24 @@ if (${HDF_PREFIX}_SIZEOF__FLOAT16)
# compile a program that will generate these functions to check for _Float16
# support. If we fail to compile this program, we will simply disable
# _Float16 support for the time being.
# Some compilers, notably AppleClang on MacOS 12, will succeed in the
# configure check below when optimization flags like -O3 are manually
# passed in CMAKE_C_FLAGS. However, the build will then fail when it
# reaches compilation of H5Tconv.c because of the issue mentioned above.
# MacOS 13 appears to have fixed this, but, just to be sure, backup and
# clear CMAKE_C_FLAGS before performing these configure checks.
set (cmake_c_flags_backup "${CMAKE_C_FLAGS}")
set (CMAKE_C_FLAGS "")
H5ConversionTests (
${HDF_PREFIX}_FLOAT16_CONVERSION_FUNCS_LINK
FALSE
"Checking if compiler can convert _Float16 type with casts"
)
set (CMAKE_C_FLAGS "${cmake_c_flags_backup}")
if (${${HDF_PREFIX}_FLOAT16_CONVERSION_FUNCS_LINK})
# Finally, MacOS 13 appears to have a bug specifically when converting
# long double values to _Float16. Release builds of the dt_arith test
@ -922,12 +931,19 @@ if (${HDF_PREFIX}_SIZEOF__FLOAT16)
# simply chopping off all the bytes of the value except for the first 2.
# These tests pass on MacOS 14, so let's perform a quick test to check
# if the hardware conversion is done correctly.
# Backup and clear CMAKE_C_FLAGS before performing configure checks
set (cmake_c_flags_backup "${CMAKE_C_FLAGS}")
set (CMAKE_C_FLAGS "")
H5ConversionTests (
${HDF_PREFIX}_LDOUBLE_TO_FLOAT16_CORRECT
TRUE
"Checking if correctly converting long double to _Float16 values"
)
set (CMAKE_C_FLAGS "${cmake_c_flags_backup}")
if (NOT ${${HDF_PREFIX}_LDOUBLE_TO_FLOAT16_CORRECT})
message (VERBOSE "Conversions from long double to _Float16 appear to be incorrect. These will be emulated through a soft conversion function.")
endif ()
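Both configure checks use the same backup/clear/restore pattern around `CMAKE_C_FLAGS`. A hedged Python sketch of the same idea applied to an environment variable (variable name illustrative, not part of the HDF5 build):

```python
import os

def with_cleared_var(name, check):
    """Run `check` with the named environment variable temporarily cleared,
    restoring its previous value afterward -- the same backup/clear/restore
    shape as the CMAKE_C_FLAGS handling around the _Float16 checks."""
    backup = os.environ.get(name)
    os.environ[name] = ""
    try:
        return check()
    finally:
        if backup is None:
            del os.environ[name]
        else:
            os.environ[name] = backup

os.environ["CFLAGS"] = "-O3"
result = with_cleared_var("CFLAGS", lambda: os.environ["CFLAGS"])
print(repr(result), os.environ["CFLAGS"])  # → '' -O3
```

The `try`/`finally` guarantees the restore happens even if the check itself fails, which the straight-line CMake version relies on the checks not erroring out to achieve.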


@ -70,8 +70,17 @@ macro (H5_SET_VFD_LIST)
split
multi
family
splitter
#log - log VFD currently has file space allocation bugs
# Splitter VFD currently can't be tested with the h5_fileaccess()
# approach due to it trying to lock the same W/O file when two
# files are created/opened with the same FAPL that has the VFD
# set on it. When tested with the environment variable and a
# default FAPL, the VFD appends "_wo" to the filename when the
# W/O path isn't specified, which works for all the tests.
#splitter
# Log VFD currently has file space allocation bugs
#log
# Onion VFD not currently tested with VFD tests
#onion
)
if (H5_HAVE_DIRECT)
@ -82,16 +91,21 @@ macro (H5_SET_VFD_LIST)
# list (APPEND VFD_LIST mpio)
endif ()
if (H5_HAVE_MIRROR_VFD)
list (APPEND VFD_LIST mirror)
# Mirror VFD needs network configuration, etc. and isn't easy to set
# reasonable defaults for that info.
# list (APPEND VFD_LIST mirror)
endif ()
if (H5_HAVE_ROS3_VFD)
list (APPEND VFD_LIST ros3)
# This would require a custom test suite
# list (APPEND VFD_LIST ros3)
endif ()
if (H5_HAVE_LIBHDFS)
list (APPEND VFD_LIST hdfs)
# This would require a custom test suite
# list (APPEND VFD_LIST hdfs)
endif ()
if (H5_HAVE_SUBFILING_VFD)
list (APPEND VFD_LIST subfiling)
# Subfiling has a few VFD test failures to be resolved
# list (APPEND VFD_LIST subfiling)
endif ()
if (H5_HAVE_WINDOWS)
list (APPEND VFD_LIST windows)


@ -50,9 +50,15 @@ macro (FORTRAN_RUN FUNCTION_NAME SOURCE_CODE RUN_RESULT_VAR1 COMPILE_RESULT_VAR1
else ()
set (_RUN_OUTPUT_VARIABLE "RUN_OUTPUT_STDOUT_VARIABLE")
endif()
if (${FUNCTION_NAME} STREQUAL "SIZEOF NATIVE KINDs")
set(TMP_CMAKE_Fortran_FLAGS "${CMAKE_Fortran_FLAGS}")
else ()
set(TMP_CMAKE_Fortran_FLAGS "")
endif ()
TRY_RUN (RUN_RESULT_VAR COMPILE_RESULT_VAR
${CMAKE_BINARY_DIR}
${CMAKE_BINARY_DIR}${CMAKE_FILES_DIRECTORY}/CMakeTmp/testFortranCompiler1.f90
CMAKE_FLAGS "${TMP_CMAKE_Fortran_FLAGS}"
LINK_LIBRARIES "${HDF5_REQUIRED_LIBRARIES}"
${_RUN_OUTPUT_VARIABLE} OUTPUT_VAR
)
@ -111,6 +117,16 @@ else ()
set (${HDF_PREFIX}_FORTRAN_C_BOOL_IS_UNIQUE 0)
endif ()
# Check if the fortran compiler supports the intrinsic module "ISO_FORTRAN_ENV" (F08)
READ_SOURCE("PROGRAM PROG_FC_ISO_FORTRAN_ENV" "END PROGRAM PROG_FC_ISO_FORTRAN_ENV" SOURCE_CODE)
check_fortran_source_compiles (${SOURCE_CODE} HAVE_ISO_FORTRAN_ENV SRC_EXT f90)
if (${HAVE_ISO_FORTRAN_ENV})
set (${HDF_PREFIX}_HAVE_ISO_FORTRAN_ENV 1)
else ()
set (${HDF_PREFIX}_HAVE_ISO_FORTRAN_ENV 0)
endif ()
## Set the sizeof function for use later in the fortran tests
if (${HDF_PREFIX}_FORTRAN_HAVE_STORAGE_SIZE)
set (FC_SIZEOF_A "STORAGE_SIZE(a, c_size_t)/STORAGE_SIZE(c_char_'a',c_size_t)")


@ -80,7 +80,8 @@ set (CHAR_ALLOC
set (ISO_FORTRAN_ENV_CODE
"
PROGRAM main
USE, INTRINSIC :: ISO_FORTRAN_ENV
USE, INTRINSIC :: ISO_FORTRAN_ENV, ONLY : atomic_logical_kind
LOGICAL(KIND=atomic_logical_kind) :: state
END PROGRAM
"
)


@ -67,7 +67,7 @@ To test the installation with the examples;
ctest -S HDF5_Examples.cmake,CTEST_SOURCE_NAME=MyExamples,INSTALLDIR=MyLocation -C Release -V -O test.log
When executed, the ctest script will save the results to the log file, test.log, as
indicated by the ctest command. If you wish the to see more build and test information,
indicated by the ctest command. If you wish to see more build and test information,
add "-VV" to the ctest command. The output should show;
100% tests passed, 0 tests failed out of 156.


@ -155,15 +155,15 @@ $(TEST_PROG_CHKEXE) $(TEST_PROG_PARA_CHKEXE) dummy.chkexe_:
if test -n "$(HDF5_VOL_CONNECTOR)"; then \
echo "VOL connector: $(HDF5_VOL_CONNECTOR)" | tee -a $${log}; \
fi; \
if test -n "$(HDF5_DRIVER)"; then \
echo "Virtual file driver (VFD): $(HDF5_DRIVER)" | tee -a $${log}; \
if test -n "$(HDF5_TEST_DRIVER)"; then \
echo "Virtual file driver (VFD): $(HDF5_TEST_DRIVER)" | tee -a $${log}; \
fi; \
else \
if test -n "$(HDF5_VOL_CONNECTOR)"; then \
echo "VOL connector: $(HDF5_VOL_CONNECTOR)" >> $${log}; \
fi; \
if test -n "$(HDF5_DRIVER)"; then \
echo "Virtual file driver (VFD): $(HDF5_DRIVER)" >> $${log}; \
if test -n "$(HDF5_TEST_DRIVER)"; then \
echo "Virtual file driver (VFD): $(HDF5_TEST_DRIVER)" >> $${log}; \
fi; \
fi; \
if test -n "$(REALTIMEOUTPUT)"; then \
@ -276,11 +276,22 @@ build-check-p: $(LIB) $(PROGS) $(chk_TESTS)
echo "===Parallel tests in `echo ${PWD} | sed -e s:.*/::` ended `date`===";\
fi
VFD_LIST = sec2 stdio core core_paged split multi family splitter
VFD_LIST = sec2 stdio core core_paged split multi family
# Splitter VFD currently can't be tested with the h5_fileaccess()
# approach due to it trying to lock the same W/O file when two
# files are created/opened with the same FAPL that has the VFD
# set on it. When tested with the environment variable and a
# default FAPL, the VFD appends "_wo" to the filename when the
# W/O path isn't specified, which works for all the tests.
# VFD_LIST += splitter
# log VFD currently has file space allocation bugs
# VFD_LIST += log
# Not currently tested with VFD tests
# VFD_LIST += onion
if DIRECT_VFD_CONDITIONAL
VFD_LIST += direct
endif
@ -302,21 +313,20 @@ if HDFS_VFD_CONDITIONAL
# VFD_LIST += hdfs
endif
if SUBFILING_VFD_CONDITIONAL
# Several VFD tests fail with Subfiling since it
# doesn't currently support collective I/O
# Subfiling has a few VFD test failures to be resolved
# VFD_LIST += subfiling
endif
# Run test with different Virtual File Driver
check-vfd: $(LIB) $(PROGS) $(chk_TESTS)
@for vfd in $(VFD_LIST) dummy; do \
if test $$vfd != dummy; then \
echo "============================"; \
echo "Testing Virtual File Driver $$vfd"; \
echo "============================"; \
$(MAKE) $(AM_MAKEFLAGS) check-clean || exit 1; \
HDF5_DRIVER=$$vfd $(MAKE) $(AM_MAKEFLAGS) check || exit 1; \
fi; \
@for vfd in $(VFD_LIST) dummy; do \
if test $$vfd != dummy; then \
echo "============================"; \
echo "Testing Virtual File Driver $$vfd"; \
echo "============================"; \
$(MAKE) $(AM_MAKEFLAGS) check-clean || exit 1; \
HDF5_TEST_DRIVER=$$vfd $(MAKE) $(AM_MAKEFLAGS) check || exit 1; \
fi; \
done
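The renamed loop above runs the whole check suite once per driver, selecting the VFD through the `HDF5_TEST_DRIVER` environment variable. A Python sketch of that per-driver environment setup (the actual `make check` invocation is only indicated in a comment):

```python
import os

def vfd_environments(vfd_list):
    """Yield (vfd, env) pairs the way the check-vfd target does: a fresh
    copy of the environment per driver, with HDF5_TEST_DRIVER set."""
    for vfd in vfd_list:
        env = dict(os.environ)
        env["HDF5_TEST_DRIVER"] = vfd
        # A real harness would run something like
        # subprocess.run(["make", "check"], env=env, check=True)
        yield vfd, env

drivers = ["sec2", "stdio", "core", "core_paged", "split", "multi", "family"]
print([vfd for vfd, _ in vfd_environments(drivers)][:3])
# → ['sec2', 'stdio', 'core']
```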
# Test with just the native connector, with a single pass-through connector


@ -198,21 +198,36 @@ saved_user_CPPFLAGS="$CPPFLAGS"
##
## Regex:
##
## -Werror Literal -Werror
## \( Start optional capturing group
## = Literal equals sign
## [^[:space:]-] Non-space characters
## \+ 1 or more of the above
## \) End optional capturing group
## \? 0 or 1 capturing group matches
## -Werror Literal -Werror
## \( Start optional capturing group
## = Literal equals sign
## [^[:space:]] Non-space characters
## \+ 1 or more of the above
## \) End optional capturing group
## \? 0 or 1 capturing group matches
##
WERROR_SED= "sed -e 's/-Werror\(=[^[:space:]]\+\)\?//g'"
CFLAGS="`echo $CFLAGS | $WERROR_SED`"
CXXFLAGS="`echo $CXXFLAGS | $WERROR_SED`"
FCFLAGS="`echo $FCFLAGS | $WERROR_SED`"
JAVACFLAGS="`echo $JAVACFLAGS | $WERROR_SED`"
CPPFLAGS="`echo $CPPFLAGS | $WERROR_SED`"
## Note that the outer pair of '[]' ends up getting removed
WERROR_SED='s/-Werror\(=[[^[:space:]]]\+\)\?//g'
CFLAGS_SED="`echo $CFLAGS | sed -e $WERROR_SED`"
if test $? -eq 0; then
CFLAGS="$CFLAGS_SED"
fi
CXXFLAGS_SED="`echo $CXXFLAGS | sed -e $WERROR_SED`"
if test $? -eq 0; then
CXXFLAGS="$CXXFLAGS_SED"
fi
FCFLAGS_SED="`echo $FCFLAGS | sed -e $WERROR_SED`"
if test $? -eq 0; then
FCFLAGS="$FCFLAGS_SED"
fi
JAVACFLAGS_SED="`echo $JAVACFLAGS | sed -e $WERROR_SED`"
if test $? -eq 0; then
JAVACFLAGS="$JAVACFLAGS_SED"
fi
CPPFLAGS_SED="`echo $CPPFLAGS | sed -e $WERROR_SED`"
if test $? -eq 0; then
CPPFLAGS="$CPPFLAGS_SED"
fi
## Support F9X variable to define Fortran compiler if FC variable is
## not used. This should be deprecated in the future.
@ -786,6 +801,15 @@ if test "X$HDF_FORTRAN" = "Xyes"; then
## See if the fortran compiler supports the intrinsic function "STORAGE_SIZE"
PAC_PROG_FC_STORAGE_SIZE
## --------------------------------------------------------------------
## Checking if the fortran compiler supports ISO_FORTRAN_ENV (Fortran 2008)
HAVE_ISO_FORTRAN_ENV="0"
PAC_PROG_FC_ISO_FORTRAN_ENV
if test "X$CHECK_ISO_FORTRAN_ENV" = "Xyes"; then
HAVE_ISO_FORTRAN_ENV="1"
AC_DEFINE([HAVE_ISO_FORTRAN_ENV], [1], [Define if Fortran supports ISO_FORTRAN_ENV (F08)])
fi
## Set the sizeof function for use later in the fortran tests
if test "X$HAVE_STORAGE_SIZE_FORTRAN" = "Xyes";then
FC_SIZEOF_A="STORAGE_SIZE(a, c_size_t)/STORAGE_SIZE(c_char_'a',c_size_t)"
@ -802,8 +826,6 @@ if test "X$HDF_FORTRAN" = "Xyes"; then
fi
fi
## See if the fortran compiler supports the intrinsic module "ISO_FORTRAN_ENV"
PAC_PROG_FC_ISO_FORTRAN_ENV
## Check KIND and size of native integer
PAC_FC_NATIVE_INTEGER
@ -829,6 +851,7 @@ if test "X$HDF_FORTRAN" = "Xyes"; then
AC_SUBST([FORTRAN_HAVE_C_LONG_DOUBLE])
AC_SUBST([FORTRAN_C_LONG_DOUBLE_IS_UNIQUE])
AC_SUBST([FORTRAN_C_BOOL_IS_UNIQUE])
AC_SUBST([HAVE_ISO_FORTRAN_ENV])
AC_SUBST([H5CONFIG_F_NUM_RKIND])
AC_SUBST([H5CONFIG_F_RKIND])
AC_SUBST([H5CONFIG_F_RKIND_SIZEOF])
@ -1576,9 +1599,33 @@ if test "X${enable_shared}" = "Xyes"; then
fi
## ----------------------------------------------------------------------
## Set up large file support
## Use the macro _AC_SYS_LARGEFILE_MACRO_VALUE to test defines
## that might need to be set for largefile support to behave
## correctly. This macro is defined in acsite.m4 and overrides
## the version provided by Autoconf (as of v2.65). The custom
## macro additionally adds the appropriate defines to AM_CPPFLAGS
## so that later configure checks have them visible.
##
## NOTE: AC_SYS_LARGEFILE is buggy on some platforms and will
## NOT set the defines, even though it correctly detects
## the necessary values. These macro hacks are annoying
## but unfortunately will be necessary until we decide
## to drop support for platforms that don't have 64-bit
## off_t defaults.
AC_SYS_LARGEFILE
## Check for _FILE_OFFSET_BITS
_AC_SYS_LARGEFILE_MACRO_VALUE([_FILE_OFFSET_BITS], [64],
[ac_cv_sys_file_offset_bits],
[Number of bits in a file offset, on hosts where this is settable.],
[_AC_SYS_LARGEFILE_TEST_INCLUDES])
## Check for _LARGE_FILES
if test "$ac_cv_sys_file_offset_bits" = unknown; then
_AC_SYS_LARGEFILE_MACRO_VALUE([_LARGE_FILES], [1],
[ac_cv_sys_large_files],
[Define for large files, on AIX-style hosts.],
[_AC_SYS_LARGEFILE_TEST_INCLUDES])
fi
## ----------------------------------------------------------------------
## Add necessary defines for Linux Systems.

View File

@ -726,7 +726,7 @@ are others in `h5test.h` if you want to emit custom text, dump the HDF5 error
stack when it would not normally be triggered, etc.
Most tests will be set up to run with arbitrary VFDs. To do this, you set the
fapl ID using the `h5_fileaccess()` function, which will check the `HDF5_DRIVER`
fapl ID using the `h5_fileaccess()` function, which will check the `HDF5_TEST_DRIVER`
environment variable and set the fapl's VFD accordingly. The `h5_fixname()`
call can then be used to get a VFD-appropriate filename for the `H5Fcreate()`,
etc. call.


@ -154,7 +154,7 @@ optimal performance out of the parallel compression feature.
### Begin with a good chunking strategy
[Starting with a good chunking strategy](https://portal.hdfgroup.org/display/HDF5/Chunking+in+HDF5)
[Starting with a good chunking strategy](https://portal.hdfgroup.org/documentation/hdf5-docs/chunking_in_hdf5.html)
will generally have the largest impact on overall application
performance. The different chunking parameters can be difficult
to fine-tune, but it is essential to start with a well-performing
@ -166,7 +166,7 @@ chosen chunk size becomes a very important factor when compression
is involved, as data chunks have to be completely read and
re-written to perform partial writes to the chunk.
[Improving I/O performance with HDF5 compressed datasets](https://portal.hdfgroup.org/display/HDF5/Improving+IO+Performance+When+Working+with+HDF5+Compressed+Datasets)
[Improving I/O performance with HDF5 compressed datasets](https://docs.hdfgroup.org/archive/support/HDF5/doc/TechNotes/TechNote-HDF5-ImprovingIOPerformanceCompressedDatasets.pdf)
is a useful reference for more information on getting good
performance when using a chunked dataset layout.


@ -234,14 +234,14 @@ ALIASES += sa_metadata_ops="\sa \li H5Pget_all_coll_metadata_ops() \li H5Pget_co
# References
################################################################################
ALIASES += ref_cons_semantics="<a href=\"https://portal.hdfgroup.org/display/HDF5/Enabling+a+Strict+Consistency+Semantics+Model+in+Parallel+HDF5\">Enabling a Strict Consistency Semantics Model in Parallel HDF5</a>"
ALIASES += ref_file_image_ops="<a href=\"https://portal.hdfgroup.org/display/HDF5/HDF5+File+Image+Operations\">HDF5 File Image Operations</a>"
ALIASES += ref_cons_semantics="<a href=\"https://docs.hdfgroup.org/hdf5/rfc/RFC%20PHDF5%20Consistency%20Semantics%20MC%20120328.docx.pdf\">Enabling a Strict Consistency Semantics Model in Parallel HDF5</a>"
ALIASES += ref_file_image_ops="<a href=\"https://docs.hdfgroup.org/hdf5/rfc/HDF5FileImageOperations.pdf\">HDF5 File Image Operations</a>"
ALIASES += ref_filter_pipe="<a href=\"https://portal.hdfgroup.org/display/HDF5/HDF5+Data+Flow+Pipeline+for+H5Dread\">Data Flow Pipeline for H5Dread()</a>"
ALIASES += ref_group_impls="<a href=\"https://portal.hdfgroup.org/display/HDF5/Groups\">Group implementations in HDF5</a>"
ALIASES += ref_h5lib_relver="<a href=\"https://portal.hdfgroup.org/display/HDF5/HDF5+Library+Release+Version+Numbers\">HDF5 Library Release Version Numbers</a>"
ALIASES += ref_group_impls="<a href=\"https://docs.hdfgroup.org/hdf5/v1_14/group___h5_g.html\">Group implementations in HDF5</a>"
ALIASES += ref_h5lib_relver="<a href=\"https://docs.hdfgroup.org/archive/support/HDF5/doc/TechNotes/Version.html\">HDF5 Library Release Version Numbers</a>"
ALIASES += ref_mdc_in_hdf5="<a href=\"https://portal.hdfgroup.org/display/HDF5/Metadata+Caching+in+HDF5\">Metadata Caching in HDF5</a>"
ALIASES += ref_mdc_logging="<a href=\"https://portal.hdfgroup.org/display/HDF5/H5F_START_MDC_LOGGING\">Metadata Cache Logging</a>"
ALIASES += ref_news_112="<a href=\"https://portal.hdfgroup.org/display/HDF5/New+Features+in+HDF5+Release+1.12\">New Features in HDF5 Release 1.12</a>"
ALIASES += ref_news_112="<a href=\"https://portal.hdfgroup.org/documentation/hdf5-docs/release_specifics/new_features_1_12.html\">New Features in HDF5 Release 1.12</a>"
ALIASES += ref_h5ocopy="<a href=\"https://portal.hdfgroup.org/display/HDF5/Copying+Committed+Datatypes+with+H5Ocopy\">Copying Committed Datatypes with H5Ocopy()</a>"
ALIASES += ref_sencode_fmt_change="<a href=\"https://portal.hdfgroup.org/pages/viewpage.action?pageId=58100093&preview=/58100093/58100094/encode_format_RFC.pdf\">RFC H5Secnode() / H5Sdecode() Format Change</a>"
ALIASES += ref_vlen_strings="\Emph{Creating variable-length string datatypes}"


@ -81,7 +81,7 @@ as a general reference.
All custom commands for this project are located in the
<a href="https://github.com/HDFGroup/hdf5/blob/hdf5_1_14/doxygen/aliases"><tt>aliases</tt></a>
file in the <a href="https://github.com/HDFGroup/hdf5/tree/develop/doxygen"><tt>doxygen</tt></a>
file in the <a href="https://github.com/HDFGroup/hdf5/tree/hdf5_1_14/doxygen"><tt>doxygen</tt></a>
subdirectory of the <a href="https://github.com/HDFGroup/hdf5">main HDF5 repo</a>.
The custom commands are grouped in sections. Find a suitable section for your command or
@ -124,4 +124,4 @@ version.
Talk to your friendly IT-team if you need write access, or you need someone to
push an updated version for you!
*/
*/


@ -42,7 +42,7 @@ Parallel HDF5, and the HDF5-1.10 VDS and SWMR new features:
<table>
<tr>
<td style="background-color:#F5F5F5">
<a href="https://portal.hdfgroup.org/display/HDF5/Introduction+to+the+HDF5+High+Level+APIs">Using the High Level APIs</a>
<a href="https://docs.hdfgroup.org/hdf5/develop/high_level.html">Using the High Level APIs</a>
</td>
<td>
\ref H5LT \ref H5IM \ref H5TB \ref H5PT \ref H5DS
@ -72,7 +72,7 @@ HDF5-1.10 New Features
</td>
<td>
\li <a href="https://portal.hdfgroup.org/display/HDF5/Introduction+to+the+Virtual+Dataset++-+VDS">Introduction to the Virtual Dataset - VDS</a>
\li <a href="https://portal.hdfgroup.org/pages/viewpage.action?pageId=48812567">Introduction to Single-Writer/Multiple-Reader (SWMR)</a>
\li <a href="https://docs.hdfgroup.org/hdf5/v1_14/group___s_w_m_r.html">Introduction to Single-Writer/Multiple-Reader (SWMR)</a>
</td>
</tr>
<tr>


@ -608,7 +608,7 @@ on the <a href="http://hdfeos.org/">HDF-EOS Tools and Information Center</a> pag
\section secHDF5Examples Examples
\li \ref LBExamples
\li \ref ExAPI
\li <a href="https://portal.hdfgroup.org/display/HDF5/Examples+in+the+Source+Code">Examples in the Source Code</a>
\li <a href="https://github.com/HDFGroup/hdf5/tree/v1_14/examples">Examples in the Source Code</a>
\li <a href="https://portal.hdfgroup.org/display/HDF5/Other+Examples">Other Examples</a>
\section secHDF5ExamplesCompile How To Compile


@ -166,7 +166,7 @@ created the dataset layout cannot be changed. The h5repack utility can be used t
to a new file with a new layout.
\section secLBDsetLayoutSource Sources of Information
<a href="https://confluence.hdfgroup.org/display/HDF5/Chunking+in+HDF5">Chunking in HDF5</a>
<a href="https://portal.hdfgroup.org/documentation/hdf5-docs/chunking_in_hdf5.html">Chunking in HDF5</a>
(See the documentation on <a href="https://confluence.hdfgroup.org/display/HDF5/Advanced+Topics+in+HDF5">Advanced Topics in HDF5</a>)
\see \ref sec_plist in the HDF5 \ref UG.
@ -184,7 +184,7 @@ certain initial dimensions, then to later increase the size of any of the initia
HDF5 requires you to use chunking to define extendible datasets. This makes it possible to extend
datasets efficiently without having to excessively reorganize storage. (To use chunking efficiently,
be sure to see the advanced topic, <a href="https://confluence.hdfgroup.org/display/HDF5/Chunking+in+HDF5">Chunking in HDF5</a>.)
be sure to see the advanced topic, <a href="https://portal.hdfgroup.org/documentation/hdf5-docs/chunking_in_hdf5.html">Chunking in HDF5</a>.)
The following operations are required in order to extend a dataset:
\li Declare the dataspace of the dataset to have unlimited dimensions for all dimensions that might eventually be extended.
@ -224,7 +224,7 @@ Navigate back: \ref index "Main" / \ref GettingStarted / \ref LearnBasics
\section secLBComDsetCreate Creating a Compressed Dataset
HDF5 requires you to use chunking to create a compressed dataset. (To use chunking efficiently,
be sure to see the advanced topic, <a href="https://confluence.hdfgroup.org/display/HDF5/Chunking+in+HDF5">Chunking in HDF5</a>.)
be sure to see the advanced topic, <a href="https://portal.hdfgroup.org/documentation/hdf5-docs/chunking_in_hdf5.html">Chunking in HDF5</a>.)
The following operations are required in order to create a compressed dataset:
\li Create a dataset creation property list.
@ -294,12 +294,12 @@ Specifically look at the \ref ExAPI.
There are examples for different languages, where examples of using #H5Literate and #H5Ovisit/#H5Lvisit are included.
The h5ex_g_traverse example traverses a file using H5Literate:
\li C: <a href="https://support.hdfgroup.org/ftp/HDF5/examples/examples-by-api/hdf5-examples/1_8/C/H5G/h5ex_g_traverse.c">h5ex_g_traverse.c</a>
\li F90: <a href="https://support.hdfgroup.org/ftp/HDF5/examples/examples-by-api/hdf5-examples/1_8/FORTRAN/H5G/h5ex_g_traverse_F03.f90">h5ex_g_traverse_F03.f90</a>
\li C: <a href="https://github.com/HDFGroup/hdf5/blob/v1_14/HDF5Examples/C/H5G/h5ex_g_traverse.c">h5ex_g_traverse.c</a>
\li F90: <a href="https://github.com/HDFGroup/hdf5/blob/v1_14/HDF5Examples/FORTRAN/H5G/h5ex_g_traverse.F90">h5ex_g_traverse_F03.f90</a>
The h5ex_g_visit example traverses a file using H5Ovisit and H5Lvisit:
\li C: <a href="https://support.hdfgroup.org/ftp/HDF5/examples/examples-by-api/hdf5-examples/1_8/C/H5G/h5ex_g_visit.c">h5ex_g_visit.c</a>
\li F90: <a href="https://support.hdfgroup.org/ftp/HDF5/examples/examples-by-api/hdf5-examples/1_8/FORTRAN/H5G/h5ex_g_visit_F03.f90">h5ex_g_visit_F03.f90</a>
\li C: <a href="https://github.com/HDFGroup/hdf5/blob/v1_14/HDF5Examples/C/H5G/h5ex_g_visit.c">h5ex_g_visit.c</a>
\li F90: <a href="https://github.com/HDFGroup/hdf5/blob/v1_14/HDF5Examples/FORTRAN/H5G/h5ex_g_visit.F90">h5ex_g_visit_F03.f90</a>
<hr>
Navigate back: \ref index "Main" / \ref GettingStarted / \ref LearnBasics


@ -7,7 +7,7 @@ This tutorial enables you to get a feel for HDF5 by using the HDFView browser. I
any programming experience.
\section sec_learn_hv_install HDFView Installation
\li Download and install HDFView. It can be downloaded from the <a href="https://portal.hdfgroup.org/display/support/Download+HDFView">Download HDFView</a> page.
\li Download and install HDFView. It can be downloaded from the <a href="https://portal.hdfgroup.org/downloads/hdfview/hdfview3_3_1.html">Download HDFView</a> page.
\li Obtain the <a href="https://support.hdfgroup.org/ftp/HDF5/examples/files/tutorial/storm1.txt">storm1.txt</a> text file, used in the tutorial.
\section sec_learn_hv_begin Begin Tutorial
@ -246,7 +246,7 @@ in the file).
Please note that the chunk sizes used in this topic are for demonstration purposes only. For
information on chunking and specifying an appropriate chunk size, see the
<a href="https://confluence.hdfgroup.org/display/HDF5/Chunking+in+HDF5">Chunking in HDF5</a> documentation.
<a href="https://portal.hdfgroup.org/documentation/hdf5-docs/chunking_in_hdf5.html">Chunking in HDF5</a> documentation.
Also see the HDF5 Tutorial topic on \ref secLBComDsetCreate.
<ul>


@ -374,7 +374,7 @@ These documents provide additional information for the use and tuning of specifi
</tr>
<tr style="height: 49.00pt;">
<td style="width: 234.000pt; border-bottom-style: solid; border-bottom-width: 1px; border-bottom-color: #228a22; vertical-align: top;padding-left: 6.00pt; padding-top: 3.00pt; padding-right: 6.00pt; padding-bottom: 3.00pt;">
<p style="font-style: italic; color: #0000ff;"><span><a href="http://www.hdfgroup.org/HDF5/doc/Advanced/DynamicallyLoadedFilters/HDF5DynamicallyLoadedFilters.pdf">HDF5 Dynamically Loaded Filters</a></span></p>
<p style="font-style: italic; color: #0000ff;"><span><a href="https://docs.hdfgroup.org/hdf5/rfc/HDF5DynamicallyLoadedFilters.pdf">HDF5 Dynamically Loaded Filters</a></span></p>
</td>
<td style="width: 198.000pt; border-bottom-style: solid; border-bottom-width: 1px; border-bottom-color: #228a22; vertical-align: top;padding-left: 6.00pt; padding-top: 3.00pt; padding-right: 6.00pt; padding-bottom: 3.00pt;">
<p>Describes how an HDF5 application can apply a filter that is not registered with the HDF5 Library.</p>
@ -382,7 +382,7 @@ These documents provide additional information for the use and tuning of specifi
</tr>
<tr style="height: 62.00pt;">
<td style="width: 234.000pt; border-bottom-style: solid; border-bottom-width: 1px; border-bottom-color: #228a22; vertical-align: top;padding-left: 6.00pt; padding-top: 3.00pt; padding-right: 6.00pt; padding-bottom: 3.00pt;">
<p style="font-style: italic; color: #0000ff;"><span><a href="http://www.hdfgroup.org/HDF5/doc/Advanced/FileImageOperations/HDF5FileImageOperations.pdf">HDF5 File Image Operations</a></span></p>
<p style="font-style: italic; color: #0000ff;"><span><a href="https://docs.hdfgroup.org/hdf5/rfc/HDF5FileImageOperations.pdf">HDF5 File Image Operations</a></span></p>
</td>
<td style="width: 198.000pt; border-bottom-style: solid; border-bottom-width: 1px; border-bottom-color: #228a22; vertical-align: top;padding-left: 6.00pt; padding-top: 3.00pt; padding-right: 6.00pt; padding-bottom: 3.00pt;">
<p>Describes how to work with HDF5 files in memory. Disk I/O is not required when file images are opened, created, read from, or written to.</p>
@ -390,7 +390,7 @@ These documents provide additional information for the use and tuning of specifi
</tr>
<tr style="height: 62.00pt;">
<td style="width: 234.000pt; border-bottom-style: solid; border-bottom-width: 1px; border-bottom-color: #228a22; vertical-align: top;padding-left: 6.00pt; padding-top: 3.00pt; padding-right: 6.00pt; padding-bottom: 3.00pt;">
<p style="font-style: italic; color: #0000ff;"><span><a href="http://www.hdfgroup.org/HDF5/doc/Advanced/ModifiedRegionWrites/ModifiedRegionWrites.pdf">Modified Region Writes</a></span></p>
<p style="font-style: italic; color: #0000ff;"><span><a href="https://docs.hdfgroup.org/archive/support/HDF5/doc/Advanced/ModifiedRegionWrites/ModifiedRegionWrites.pdf">Modified Region Writes</a></span></p>
</td>
<td style="width: 198.000pt; border-bottom-style: solid; border-bottom-width: 1px; border-bottom-color: #228a22; vertical-align: top;padding-left: 6.00pt; padding-top: 3.00pt; padding-right: 6.00pt; padding-bottom: 3.00pt;">
<p>Describes how to set write operations for in-memory files so that only modified regions are written to storage. Available when the Core (Memory) VFD is used.</p>
@ -438,4 +438,4 @@ Previous Chapter \ref sec_plist
<a href="https://github.com/HDFGroup/hdf5">HDF5 repo</a>, make changes, and create a
<a href="https://github.com/HDFGroup/hdf5/pulls">pull request</a> !\n
*/
*/


@ -92,7 +92,7 @@ Public header Files you will need to be familiar with include:
</table>
Many VOL connectors are listed on The HDF Group's VOL plugin registration page, located at:
<a href="https://portal.hdfgroup.org/display/support/Registered+VOL+Connectors">Registered VOL Connectors</a>.
<a href="https://portal.hdfgroup.org/documentation/hdf5-docs/registered_vol_connectors.html">Registered VOL Connectors</a>.
Not all of these VOL connectors are supported by The HDF Group and the level of completeness varies, but the
connectors found there can serve as examples of working implementations
@ -195,7 +195,7 @@ contact <a href="help@hdfgroup.org">help@hdfgroup.org</a> for help with this. We
name you've chosen will appear on the registered VOL connectors page.
As noted above, registered VOL connectors will be listed at:
<a href="https://portal.hdfgroup.org/display/support/Registered+VOL+Connectors">Registered VOL Connectors</a>
<a href="https://portal.hdfgroup.org/documentation/hdf5-docs/registered_vol_connectors.html">Registered VOL Connectors</a>
A new \b conn_version field has been added to the class struct for 1.13. This field is currently not used by
the library so its use is determined by the connector author. Best practices for this field will be determined


@ -48,7 +48,7 @@ Navigate back: \ref index "Main" / \ref GettingStarted
\section secViewToolsCommandObtain Obtain Tools and Files (Optional)
Pre-built binaries for Linux and Windows are distributed within the respective HDF5 binary release
packages, which can be obtained from the <a href="https://portal.hdfgroup.org/display/support/Download+HDF5">Download HDF5</a> page.
packages, which can be obtained from the <a href="https://portal.hdfgroup.org/downloads/index.html">Download HDF5</a> page.
HDF5 files can be obtained from various places such as \ref HDF5Examples and <a href="http://www.hdfeos.org/">HDF-EOS Tools and
Information Center</a>. Specifically, the following examples are used in this tutorial topic:


@ -69,6 +69,11 @@ if (H5_FORTRAN_HAVE_C_SIZEOF)
set (CMAKE_H5_FORTRAN_HAVE_C_SIZEOF 1)
endif ()
set (CMAKE_H5_HAVE_ISO_FORTRAN_ENV 0)
if (H5_HAVE_ISO_FORTRAN_ENV)
set (CMAKE_H5_HAVE_ISO_FORTRAN_ENV 1)
endif ()
set (CMAKE_H5_FORTRAN_HAVE_CHAR_ALLOC 0)
if (H5_FORTRAN_HAVE_CHAR_ALLOC)
set (CMAKE_H5_FORTRAN_HAVE_CHAR_ALLOC 1)


@ -306,19 +306,16 @@ CONTAINS
!! \param arg19 C style format control strings
!! \param arg20 C style format control strings
!!
!! \note \p arg[1-20] expects C-style format strings, similar to the
!! system and C functions printf() and fprintf().
!! Furthermore, special characters, such as ANSI escapes,
!! will only be interpreted correctly if the Fortran equivalent
!! is used. For example, to print \p msg "TEXT" in red and has
!! a space after the text would be:
!! \note \p arg[1-20] expects C-style format strings, similar to the system and C functions printf() and fprintf().
!! Furthermore, special characters, such as ANSI escapes, will only be interpreted correctly if the Fortran
!! equivalent is used. For example, to print \p msg "TEXT" in red would be:
!! <br /><br />
!! \code
!! (..., "%s TEXT %s"//C_NEW_LINE, hdferr, ..., arg1=ACHAR(27)//"[31m", arg2=ACHAR(27)//"[0m" )
!! (..., "%s TEXT %s", hdferr, ..., arg1=ACHAR(27)//"[31m"//C_NULL_CHAR, arg2=ACHAR(27)//"[0m"//C_NULL_CHAR )
!! \endcode
!! <br />Using "\n" instead of C_NEW_LINE will not be interpreted correctly, and similarly,
!! using "\x1B" instead of ACHAR(27)
!!
!! using "\x1B" instead of ACHAR(27). Also, all \p arg[1-20] character strings must be
!! NULL terminated.
!!
!! See C API: @ref H5Epush2()
!!


@ -120,10 +120,10 @@ CONTAINS
INTERFACE
INTEGER(HID_T) FUNCTION H5Fcreate(name, access_flags, &
creation_prp_default, access_prp_default) BIND(C,NAME='H5Fcreate')
IMPORT :: C_CHAR
IMPORT :: C_CHAR, C_INT
IMPORT :: HID_T
CHARACTER(KIND=C_CHAR), DIMENSION(*) :: name
INTEGER, VALUE :: access_flags
INTEGER(C_INT), VALUE :: access_flags
INTEGER(HID_T), VALUE :: creation_prp_default
INTEGER(HID_T), VALUE :: access_prp_default
END FUNCTION H5Fcreate
@ -137,7 +137,7 @@ CONTAINS
IF (PRESENT(creation_prp)) creation_prp_default = creation_prp
IF (PRESENT(access_prp)) access_prp_default = access_prp
file_id = h5fcreate(c_name, access_flags, &
file_id = h5fcreate(c_name, INT(access_flags, C_INT), &
creation_prp_default, access_prp_default)
hdferr = 0


@ -1555,7 +1555,7 @@ CONTAINS
END FUNCTION H5Lvisit
END INTERFACE
return_value_c = INT(H5Lvisit(grp_id, INT(idx_type, C_INT), INT(order, C_INT), op, op_data))
return_value_c = H5Lvisit(grp_id, INT(idx_type, C_INT), INT(order, C_INT), op, op_data)
return_value = INT(return_value_c)
IF(return_value.GE.0)THEN
@ -1624,7 +1624,7 @@ CONTAINS
lapl_id_default = H5P_DEFAULT_F
IF(PRESENT(lapl_id)) lapl_id_default = lapl_id
return_value_c = INT(H5Lvisit_by_name(loc_id, c_name, INT(idx_type, C_INT), INT(order, C_INT), op, op_data, lapl_id_default))
return_value_c = H5Lvisit_by_name(loc_id, c_name, INT(idx_type, C_INT), INT(order, C_INT), op, op_data, lapl_id_default)
return_value = INT(return_value_c)
IF(return_value.GE.0)THEN


@ -2931,7 +2931,7 @@ h5pset_fapl_multi_c(hid_t_f *prp_id, int_f *memb_map, hid_t_f *memb_fapl, _fcd m
* Check that we got correct values from Fortran for memb_addr array
*/
for (i = 0; i < H5FD_MEM_NTYPES; i++) {
if (memb_addr[i] >= 1.0f)
if (memb_addr[i] >= (real_f)1.0)
return ret_value;
}
/*
@ -4598,7 +4598,7 @@ h5pget_file_image_c(hid_t_f *fapl_id, void **buf_ptr, size_t_f *buf_len_ptr)
* SOURCE
*/
int_f
h5pset_fapl_mpio_c(hid_t_f *prp_id, int_f *comm, int_f *info)
h5pset_fapl_mpio_c(hid_t_f *prp_id, void *comm, void *info)
/******/
{
int ret_value = -1;
@ -4606,8 +4606,8 @@ h5pset_fapl_mpio_c(hid_t_f *prp_id, int_f *comm, int_f *info)
herr_t ret;
MPI_Comm c_comm;
MPI_Info c_info;
c_comm = MPI_Comm_f2c(*comm);
c_info = MPI_Info_f2c(*info);
c_comm = MPI_Comm_f2c(*((int *)comm));
c_info = MPI_Info_f2c(*((int *)info));
/*
* Call H5Pset_mpi function.
@ -4633,7 +4633,7 @@ h5pset_fapl_mpio_c(hid_t_f *prp_id, int_f *comm, int_f *info)
* SOURCE
*/
int_f
h5pget_fapl_mpio_c(hid_t_f *prp_id, int_f *comm, int_f *info)
h5pget_fapl_mpio_c(hid_t_f *prp_id, int *comm, int *info)
/******/
{
int ret_value = -1;
@ -4649,8 +4649,8 @@ h5pget_fapl_mpio_c(hid_t_f *prp_id, int_f *comm, int_f *info)
ret = H5Pget_fapl_mpio(c_prp_id, &c_comm, &c_info);
if (ret < 0)
return ret_value;
*comm = (int_f)MPI_Comm_c2f(c_comm);
*info = (int_f)MPI_Info_c2f(c_info);
*comm = (int)MPI_Comm_c2f(c_comm);
*info = (int)MPI_Info_c2f(c_info);
ret_value = 0;
return ret_value;
}
@ -4669,7 +4669,7 @@ h5pget_fapl_mpio_c(hid_t_f *prp_id, int_f *comm, int_f *info)
* SOURCE
*/
int_f
h5pset_mpi_params_c(hid_t_f *prp_id, int_f *comm, int_f *info)
h5pset_mpi_params_c(hid_t_f *prp_id, void *comm, void *info)
/******/
{
int ret_value = -1;
@ -4677,8 +4677,8 @@ h5pset_mpi_params_c(hid_t_f *prp_id, int_f *comm, int_f *info)
herr_t ret;
MPI_Comm c_comm;
MPI_Info c_info;
c_comm = MPI_Comm_f2c(*comm);
c_info = MPI_Info_f2c(*info);
c_comm = MPI_Comm_f2c(*((int *)comm));
c_info = MPI_Info_f2c(*((int *)info));
/*
* Call H5Pset_mpi_params.
@ -4705,7 +4705,7 @@ h5pset_mpi_params_c(hid_t_f *prp_id, int_f *comm, int_f *info)
* SOURCE
*/
int_f
h5pget_mpi_params_c(hid_t_f *prp_id, int_f *comm, int_f *info)
h5pget_mpi_params_c(hid_t_f *prp_id, int *comm, int *info)
/******/
{
int ret_value = -1;
@ -4721,8 +4721,8 @@ h5pget_mpi_params_c(hid_t_f *prp_id, int_f *comm, int_f *info)
ret = H5Pget_mpi_params(c_prp_id, &c_comm, &c_info);
if (ret < 0)
return ret_value;
*comm = (int_f)MPI_Comm_c2f(c_comm);
*info = (int_f)MPI_Info_c2f(c_info);
*comm = (int)MPI_Comm_c2f(c_comm);
*info = (int)MPI_Info_c2f(c_info);
ret_value = 0;
return ret_value;
}


@ -39,6 +39,13 @@
MODULE H5P
#ifdef H5_HAVE_PARALLEL
#ifdef H5_HAVE_MPI_F08
USE MPI_F08, ONLY : MPI_INTEGER_KIND
#else
USE MPI, ONLY : MPI_INTEGER_KIND
#endif
#endif
USE H5GLOBAL
USE H5fortkit
@ -50,6 +57,7 @@ MODULE H5P
PRIVATE h5pregister_integer, h5pregister_ptr
PRIVATE h5pinsert_integer, h5pinsert_char, h5pinsert_ptr
#ifdef H5_HAVE_PARALLEL
PRIVATE MPI_INTEGER_KIND
PRIVATE h5pset_fapl_mpio_f90, h5pget_fapl_mpio_f90
#ifdef H5_HAVE_MPI_F08
PRIVATE h5pset_fapl_mpio_f08, h5pget_fapl_mpio_f08
@ -5182,8 +5190,8 @@ SUBROUTINE h5pset_attr_phase_change_f(ocpl_id, max_compact, min_dense, hdferr)
SUBROUTINE h5pset_fapl_mpio_f(prp_id, comm, info, hdferr)
IMPLICIT NONE
INTEGER(HID_T), INTENT(IN) :: prp_id
INTEGER, INTENT(IN) :: comm
INTEGER, INTENT(IN) :: info
INTEGER(KIND=MPI_INTEGER_KIND), INTENT(IN) :: comm
INTEGER(KIND=MPI_INTEGER_KIND), INTENT(IN) :: info
INTEGER, INTENT(OUT) :: hdferr
END SUBROUTINE h5pset_fapl_mpio_f
!>
@ -5213,17 +5221,17 @@ SUBROUTINE h5pset_attr_phase_change_f(ocpl_id, max_compact, min_dense, hdferr)
SUBROUTINE h5pset_fapl_mpio_f90(prp_id, comm, info, hdferr)
IMPLICIT NONE
INTEGER(HID_T), INTENT(IN) :: prp_id
INTEGER, INTENT(IN) :: comm
INTEGER, INTENT(IN) :: info
INTEGER(KIND=MPI_INTEGER_KIND), INTENT(IN) :: comm
INTEGER(KIND=MPI_INTEGER_KIND), INTENT(IN) :: info
INTEGER, INTENT(OUT) :: hdferr
INTERFACE
INTEGER FUNCTION h5pset_fapl_mpio_c(prp_id, comm, info) &
BIND(C,NAME='h5pset_fapl_mpio_c')
IMPORT :: HID_T
IMPORT :: HID_T, MPI_INTEGER_KIND
IMPLICIT NONE
INTEGER(HID_T) :: prp_id
INTEGER :: comm
INTEGER :: info
INTEGER(KIND=MPI_INTEGER_KIND) :: comm
INTEGER(KIND=MPI_INTEGER_KIND) :: info
END FUNCTION h5pset_fapl_mpio_c
END INTERFACE
@ -5240,7 +5248,7 @@ SUBROUTINE h5pset_attr_phase_change_f(ocpl_id, max_compact, min_dense, hdferr)
TYPE(MPI_INFO), INTENT(IN) :: info
INTEGER, INTENT(OUT) :: hdferr
CALL h5pset_fapl_mpio_f90(prp_id, comm%mpi_val, info%mpi_val, hdferr)
CALL h5pset_fapl_mpio_f90(prp_id, INT(comm%mpi_val,MPI_INTEGER_KIND), INT(info%mpi_val,MPI_INTEGER_KIND), hdferr)
END SUBROUTINE h5pset_fapl_mpio_f08
#endif
@ -5298,21 +5306,28 @@ END SUBROUTINE h5pget_fapl_mpio_f
SUBROUTINE h5pget_fapl_mpio_f90(prp_id, comm, info, hdferr)
IMPLICIT NONE
INTEGER(HID_T), INTENT(IN) :: prp_id
INTEGER, INTENT(OUT) :: comm
INTEGER, INTENT(OUT) :: info
INTEGER(KIND=MPI_INTEGER_KIND), INTENT(OUT) :: comm
INTEGER(KIND=MPI_INTEGER_KIND), INTENT(OUT) :: info
INTEGER, INTENT(OUT) :: hdferr
INTEGER(KIND=C_INT) :: c_comm
INTEGER(KIND=C_INT) :: c_info
INTERFACE
INTEGER FUNCTION h5pget_fapl_mpio_c(prp_id, comm, info) &
BIND(C,NAME='h5pget_fapl_mpio_c')
IMPORT :: HID_T
IMPORT :: HID_T, C_INT
IMPLICIT NONE
INTEGER(HID_T) :: prp_id
INTEGER :: comm
INTEGER :: info
INTEGER(KIND=C_INT) :: comm
INTEGER(KIND=C_INT) :: info
END FUNCTION h5pget_fapl_mpio_c
END INTERFACE
hdferr = h5pget_fapl_mpio_c(prp_id, comm, info)
hdferr = h5pget_fapl_mpio_c(prp_id, c_comm, c_info)
comm = INT(c_comm,KIND=MPI_INTEGER_KIND)
info = INT(c_info,KIND=MPI_INTEGER_KIND)
END SUBROUTINE h5pget_fapl_mpio_f90
@ -5325,7 +5340,13 @@ END SUBROUTINE h5pget_fapl_mpio_f
TYPE(MPI_INFO), INTENT(OUT) :: info
INTEGER, INTENT(OUT) :: hdferr
CALL h5pget_fapl_mpio_f90(prp_id, comm%mpi_val, info%mpi_val, hdferr)
INTEGER(KIND=MPI_INTEGER_KIND) :: tmp_comm
INTEGER(KIND=MPI_INTEGER_KIND) :: tmp_info
CALL h5pget_fapl_mpio_f90(prp_id, tmp_comm, tmp_info, hdferr)
comm%mpi_val = tmp_comm
info%mpi_val = tmp_info
END SUBROUTINE h5pget_fapl_mpio_f08
#endif
@ -5532,8 +5553,8 @@ END SUBROUTINE h5pget_fapl_mpio_f
SUBROUTINE H5Pset_mpi_params_f(prp_id, comm, info, hdferr)
IMPLICIT NONE
INTEGER(HID_T), INTENT(IN) :: prp_id
INTEGER , INTENT(IN) :: comm
INTEGER , INTENT(IN) :: info
INTEGER(KIND=MPI_INTEGER_KIND), INTENT(IN) :: comm
INTEGER(KIND=MPI_INTEGER_KIND), INTENT(IN) :: info
INTEGER , INTENT(OUT) :: hdferr
END SUBROUTINE H5Pset_mpi_params_f
!>
@ -5563,18 +5584,18 @@ END SUBROUTINE h5pget_fapl_mpio_f
SUBROUTINE H5Pset_mpi_params_f90(prp_id, comm, info, hdferr)
IMPLICIT NONE
INTEGER(HID_T), INTENT(IN) :: prp_id
INTEGER , INTENT(IN) :: comm
INTEGER , INTENT(IN) :: info
INTEGER(KIND=MPI_INTEGER_KIND), INTENT(IN) :: comm
INTEGER(KIND=MPI_INTEGER_KIND), INTENT(IN) :: info
INTEGER , INTENT(OUT) :: hdferr
INTERFACE
INTEGER FUNCTION h5pset_mpi_params_c(prp_id, comm, info) &
BIND(C,NAME='h5pset_mpi_params_c')
IMPORT :: HID_T
IMPORT :: HID_T, MPI_INTEGER_KIND
IMPLICIT NONE
INTEGER(HID_T) :: prp_id
INTEGER :: comm
INTEGER :: info
INTEGER(KIND=MPI_INTEGER_KIND) :: comm
INTEGER(KIND=MPI_INTEGER_KIND) :: info
END FUNCTION H5pset_mpi_params_c
END INTERFACE
@ -5614,8 +5635,8 @@ END SUBROUTINE h5pget_fapl_mpio_f
SUBROUTINE H5Pget_mpi_params_f(prp_id, comm, info, hdferr)
IMPLICIT NONE
INTEGER(HID_T), INTENT(IN) :: prp_id
INTEGER , INTENT(OUT) :: comm
INTEGER , INTENT(OUT) :: info
INTEGER, INTENT(OUT) :: comm
INTEGER, INTENT(OUT) :: info
INTEGER , INTENT(OUT) :: hdferr
END SUBROUTINE H5Pget_mpi_params_f
!>
@ -5647,22 +5668,28 @@ END SUBROUTINE h5pget_fapl_mpio_f
SUBROUTINE H5Pget_mpi_params_f90(prp_id, comm, info, hdferr)
IMPLICIT NONE
INTEGER(HID_T), INTENT(IN) :: prp_id
INTEGER , INTENT(OUT) :: comm
INTEGER , INTENT(OUT) :: info
INTEGER(KIND=MPI_INTEGER_KIND), INTENT(OUT) :: comm
INTEGER(KIND=MPI_INTEGER_KIND), INTENT(OUT) :: info
INTEGER , INTENT(OUT) :: hdferr
INTEGER(KIND=C_INT) :: c_comm
INTEGER(KIND=C_INT) :: c_info
INTERFACE
INTEGER FUNCTION h5pget_mpi_params_c(prp_id, comm, info) &
BIND(C,NAME='h5pget_mpi_params_c')
IMPORT :: HID_T
IMPORT :: HID_T, C_INT
IMPLICIT NONE
INTEGER(HID_T) :: prp_id
INTEGER :: comm
INTEGER :: info
INTEGER(KIND=C_INT) :: comm
INTEGER(KIND=C_INT) :: info
END FUNCTION H5pget_mpi_params_c
END INTERFACE
hdferr = H5Pget_mpi_params_c(prp_id, comm, info)
hdferr = H5Pget_mpi_params_c(prp_id, c_comm, c_info)
comm = INT(c_comm,KIND=MPI_INTEGER_KIND)
info = INT(c_info,KIND=MPI_INTEGER_KIND)
END SUBROUTINE H5Pget_mpi_params_f90
@ -5675,7 +5702,13 @@ END SUBROUTINE h5pget_fapl_mpio_f
TYPE(MPI_INFO), INTENT(OUT) :: info
INTEGER , INTENT(OUT) :: hdferr
CALL H5Pget_mpi_params_f90(prp_id, comm%mpi_val, info%mpi_val, hdferr)
INTEGER(KIND=MPI_INTEGER_KIND) :: tmp_comm
INTEGER(KIND=MPI_INTEGER_KIND) :: tmp_info
CALL H5Pget_mpi_params_f90(prp_id, tmp_comm, tmp_info, hdferr)
comm%mpi_val = tmp_comm
info%mpi_val = tmp_info
END SUBROUTINE H5Pget_mpi_params_f08
#endif
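The hunks above replace default `INTEGER` declarations for MPI communicator/info handles with `INTEGER(KIND=MPI_INTEGER_KIND)` (or `C_INT` temporaries at the C boundary), because the Fortran and C sides may disagree on integer width. A minimal C sketch of the same pattern, with a hypothetical `get_handle_c` standing in for the `h5pget_mpi_params_c` shim:

```c
#include <stdint.h>

/* Hypothetical C shim that writes a 32-bit handle through a pointer,
 * analogous to h5pget_mpi_params_c writing through a C_INT argument. */
static void get_handle_c(int32_t *handle) { *handle = 42; }

/* Correct pattern from the diff: receive into a temporary of the
 * C-side width, then widen explicitly to the caller's kind, as
 * H5Pget_mpi_params_f90 now does with c_comm/c_info. Passing the
 * address of a wider integer directly would write only part of it
 * and leave the rest uninitialized. */
static int64_t get_handle_wrapped(void)
{
    int32_t tmp;          /* matches the shim's width */
    get_handle_c(&tmp);
    return (int64_t)tmp;  /* like INT(c_comm, KIND=MPI_INTEGER_KIND) */
}
```

The temporaries (`tmp_comm`/`tmp_info` in the F08 wrapper) serve the same role: the conversion point is explicit instead of relying on matching storage.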


@ -55,7 +55,7 @@ h5screate_simple_c(int_f *rank, hsize_t_f *dims, hsize_t_f *maxdims, hid_t_f *sp
c_maxdims[i] = maxdims[*rank - i - 1];
} /* end for */
c_space_id = H5Screate_simple(*rank, c_dims, c_maxdims);
c_space_id = H5Screate_simple((int)*rank, c_dims, c_maxdims);
if (c_space_id < 0)
HGOTO_DONE(FAIL);


@ -79,8 +79,13 @@
! Define if Fortran C_BOOL is different from default LOGICAL
#define H5_FORTRAN_C_BOOL_IS_UNIQUE @H5_FORTRAN_C_BOOL_IS_UNIQUE@
! Define if the intrinsic module ISO_FORTRAN_ENV exists
#define H5_HAVE_ISO_FORTRAN_ENV @H5_HAVE_ISO_FORTRAN_ENV@
! Define if Fortran supports ISO_FORTRAN_ENV (F08)
#cmakedefine01 CMAKE_H5_HAVE_ISO_FORTRAN_ENV
#if CMAKE_H5_HAVE_ISO_FORTRAN_ENV == 0
#undef H5_HAVE_ISO_FORTRAN_ENV
#else
#define H5_HAVE_ISO_FORTRAN_ENV
#endif
! Define the size of C's double
#define H5_SIZEOF_DOUBLE @H5_SIZEOF_DOUBLE@
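The config-header hunk above switches from a substituted `@...@` value to `#cmakedefine01`, which always emits the macro as `0` or `1`, and then translates that back into a presence-style macro. A minimal sketch of the same preprocessor pattern, using hypothetical macro names:

```c
/* What CMake's #cmakedefine01 would emit when the feature is found: */
#define CFG_HAVE_FEATURE 1   /* always defined, as 0 or 1 */

/* Translate the 0/1 value into a presence-style macro, as the header
 * now does for H5_HAVE_ISO_FORTRAN_ENV: */
#if CFG_HAVE_FEATURE == 0
#undef HAVE_FEATURE
#else
#define HAVE_FEATURE
#endif

static int feature_enabled(void)
{
#ifdef HAVE_FEATURE
    return 1;
#else
    return 0;
#endif
}
```

The advantage is that a misspelled or unsubstituted macro becomes a compile error at the `#if` rather than silently reading as "feature absent".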


@ -47,7 +47,7 @@
! Define if Fortran C_BOOL is different from default LOGICAL
#undef FORTRAN_C_BOOL_IS_UNIQUE
! Define if the intrinsic module ISO_FORTRAN_ENV exists
! Define if Fortran supports ISO_FORTRAN_ENV (F08)
#undef HAVE_ISO_FORTRAN_ENV
! Define the size of C's double


@ -518,10 +518,10 @@ H5_FCDLL int_f h5pget_chunk_cache_c(hid_t_f *dapl_id, size_t_f *rdcc_nslots, siz
real_f *rdcc_w0);
#ifdef H5_HAVE_PARALLEL
H5_FCDLL int_f h5pget_mpio_actual_io_mode_c(hid_t_f *dxpl_id, int_f *actual_io_mode);
H5_FCDLL int_f h5pget_fapl_mpio_c(hid_t_f *prp_id, int_f *comm, int_f *info);
H5_FCDLL int_f h5pset_fapl_mpio_c(hid_t_f *prp_id, int_f *comm, int_f *info);
H5_FCDLL int_f h5pget_mpi_params_c(hid_t_f *prp_id, int_f *comm, int_f *info);
H5_FCDLL int_f h5pset_mpi_params_c(hid_t_f *prp_id, int_f *comm, int_f *info);
H5_FCDLL int_f h5pget_fapl_mpio_c(hid_t_f *prp_id, int *comm, int *info);
H5_FCDLL int_f h5pset_fapl_mpio_c(hid_t_f *prp_id, void *comm, void *info);
H5_FCDLL int_f h5pget_mpi_params_c(hid_t_f *prp_id, int *comm, int *info);
H5_FCDLL int_f h5pset_mpi_params_c(hid_t_f *prp_id, void *comm, void *info);
H5_FCDLL int_f h5pget_dxpl_mpio_c(hid_t_f *prp_id, int_f *data_xfer_mode);
H5_FCDLL int_f h5pset_dxpl_mpio_c(hid_t_f *prp_id, int_f *data_xfer_mode);
#endif


@ -100,12 +100,11 @@ CONTAINS
CHARACTER(LEN=35), DIMENSION(2) :: aread_data ! Buffer to put read back
! string attr data
CHARACTER :: attr_character_data = 'A'
REAL(KIND=Fortran_DOUBLE), DIMENSION(1) :: attr_double_data = 3.459D0
REAL(KIND=Fortran_DOUBLE), DIMENSION(1) :: attr_double_data = 3.459_Fortran_DOUBLE
REAL, DIMENSION(1) :: attr_real_data = 4.0
INTEGER, DIMENSION(1) :: attr_integer_data = 5
INTEGER(HSIZE_T), DIMENSION(7) :: data_dims
CHARACTER :: aread_character_data ! variable to put read back Character attr data
INTEGER, DIMENSION(1) :: aread_integer_data ! variable to put read back integer attr data
INTEGER, DIMENSION(1) :: aread_null_data = 7 ! variable to put read back null attr data
@ -577,8 +576,6 @@ CONTAINS
total_error = total_error +1
END IF
CALL h5sclose_f(attr_space, error)
CALL check("h5sclose_f",error,total_error)
CALL h5sclose_f(attr2_space, error)


@ -298,7 +298,8 @@ SUBROUTINE test_error_stack(total_error)
! push a custom error message onto the stack
CALL H5Epush_f(estack_id, file, func, line, &
cls_id, major, minor, "%s ERROR TEXT %s %s %s", error, &
arg1=ACHAR(27)//"[31m", arg2=ACHAR(27)//"[0m", arg3=ACHAR(0), arg4=ACHAR(10) )
arg1=ACHAR(27)//"[31m"//C_NULL_CHAR, arg2=ACHAR(27)//"[0m"//C_NULL_CHAR, &
arg3=ACHAR(0)//C_NULL_CHAR, arg4=ACHAR(10)//C_NULL_CHAR )
CALL check("H5Epush_f", error, total_error)
CALL h5eget_num_f(estack_id, count, error)
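The `H5Epush_f` hunk appends `C_NULL_CHAR` to each string argument because Fortran `CHARACTER` data is length-counted, not NUL-terminated, while the C formatter it feeds stops at the first `'\0'`. A small C illustration of why the terminator matters:

```c
#include <string.h>

/* C string routines only "see" up to the first NUL byte. */
static size_t c_visible_length(const char *s) { return strlen(s); }

/* A buffer with an explicit terminator after the payload, like the
 * Fortran strings after the C_NULL_CHAR concatenation in the diff;
 * the trailing 'X' is invisible to C. */
static const char msg[] = { 'E', 'R', 'R', '\0', 'X' };
```

Without the terminator, the C side would read past the intended end of the Fortran string into adjacent memory.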


@ -162,7 +162,7 @@ CONTAINS
INTEGER :: nlen, i, istart, iend
op_data%n_obj = op_data%n_obj + 1
op_data%n_obj = op_data%n_obj + 1_C_INT
nlen = 1
DO i = 1, MAX_CHAR_LEN


@ -118,10 +118,10 @@ CONTAINS
IF((field .EQ. H5O_INFO_TIME_F).OR.(field .EQ. H5O_INFO_ALL_F))THEN
atime(1:8) = h5gmtime(oinfo_c%atime)
btime(1:8) = h5gmtime(oinfo_c%btime)
ctime(1:8) = h5gmtime(oinfo_c%ctime)
mtime(1:8) = h5gmtime(oinfo_c%mtime)
atime(1:8) = INT(h5gmtime(oinfo_c%atime),C_INT)
btime(1:8) = INT(h5gmtime(oinfo_c%btime),C_INT)
ctime(1:8) = INT(h5gmtime(oinfo_c%ctime),C_INT)
mtime(1:8) = INT(h5gmtime(oinfo_c%mtime),C_INT)
DO i = 1, 8
IF( (atime(i) .NE. oinfo_f%atime(i)) )THEN


@ -709,8 +709,8 @@ END SUBROUTINE test_array_compound_atomic
DO i = 1, LENGTH
DO j = 1, ALEN
cf(i)%a(j) = 100*(i+1) + j
cf(i)%b(j) = (100.*(i+1) + 0.01*j)
cf(i)%c(j) = 100.*(i+1) + 0.02*j
cf(i)%b(j) = (100._sp*REAL(i+1,sp) + 0.01_sp*REAL(j,sp))
cf(i)%c(j) = 100._dp*REAL(i+1,dp) + 0.02_dp*REAL(j,dp)
ENDDO
ENDDO
@ -855,7 +855,7 @@ END SUBROUTINE test_array_compound_atomic
! --------------------------------
DO i = 1, LENGTH
DO j = 1, ALEN
fld(i)%b(j) = 1.313
fld(i)%b(j) = 1.313_sp
cf(i)%b(j) = fld(i)%b(j)
ENDDO
ENDDO
@ -2930,8 +2930,8 @@ SUBROUTINE test_nbit(total_error )
! dataset datatype (no precision loss during datatype conversion)
!
REAL(kind=wp), DIMENSION(1:2,1:5), TARGET :: orig_data = &
RESHAPE( (/188384.00, 19.103516, -1.0831790e9, -84.242188, &
5.2045898, -49140.000, 2350.2500, -3.2110596e-1, 6.4998865e-5, -0.0000000/) , (/2,5/) )
RESHAPE( (/188384.00_wp, 19.103516_wp, -1.0831790e9_wp, -84.242188_wp, &
5.2045898_wp, -49140.000_wp, 2350.2500_wp, -3.2110596e-1_wp, 6.4998865e-5_wp, -0.0000000_wp/) , (/2,5/) )
REAL(kind=wp), DIMENSION(1:2,1:5), TARGET :: new_data
INTEGER(size_t) :: PRECISION, offset
INTEGER :: error
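The hunks above add kind suffixes (`_sp`, `_dp`, `_wp`) to real literals so each constant is evaluated at the target precision instead of default real and then widened. C has the same pitfall with `float` literals, sketched here:

```c
/* Analog of an unsuffixed Fortran constant assigned to a double-
 * precision variable: the literal is rounded to single precision
 * first, then widened, keeping the single-precision error. */
static double from_single(void) { return 0.01f; }

/* Analog of 0.01_wp: the literal is evaluated at double precision. */
static double from_double(void) { return 0.01; }
```

The two results differ in the low bits, which is exactly the kind of noise that trips exact-comparison tests like the n-bit filter check above.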


@ -13,9 +13,15 @@
! Tests async Fortran wrappers. It needs an async VOL. It will skip the tests if
! HDF5_VOL_CONNECTOR is not set or is set to a non-supporting async VOL.
!
#include <H5config_f.inc>
MODULE test_async_APIs
#ifdef H5_HAVE_MPI_F08
USE MPI_F08
#else
USE MPI
#endif
USE HDF5
USE TH5_MISC
USE TH5_MISC_GEN
@ -40,6 +46,8 @@ MODULE test_async_APIs
CHARACTER(LEN=10), TARGET :: app_func = "func_name"//C_NULL_CHAR
INTEGER :: app_line = 42
INTEGER :: mpi_ikind = MPI_INTEGER_KIND
CONTAINS
INTEGER(KIND=C_INT) FUNCTION liter_cb(group, name, link_info, op_data) bind(C)
@ -60,7 +68,7 @@ CONTAINS
CASE(0)
liter_cb = 0
CASE(2)
liter_cb = op_data%command*10
liter_cb = op_data%command*10_C_INT
END SELECT
op_data%command = op_data_command
op_data%type = op_data_type
@ -381,9 +389,14 @@ CONTAINS
INTEGER, TARGET :: fillvalue = 99
INTEGER :: error ! Error flags
INTEGER :: mpierror ! MPI error flag
INTEGER :: comm, info
INTEGER :: mpi_size, mpi_rank
INTEGER(KIND=MPI_INTEGER_KIND) :: mpierror ! MPI error flag
#ifdef H5_HAVE_MPI_F08
TYPE(MPI_COMM) :: comm
TYPE(MPI_INFO) :: info
#else
INTEGER(KIND=MPI_INTEGER_KIND) :: comm, info
#endif
INTEGER(KIND=MPI_INTEGER_KIND) :: mpi_size, mpi_rank
comm = MPI_COMM_WORLD
info = MPI_INFO_NULL
@ -399,7 +412,7 @@ CONTAINS
CALL h5pcreate_f(H5P_FILE_ACCESS_F, fapl_id, hdferror)
CALL check("h5pcreate_f", hdferror, total_error)
CALL h5pset_fapl_mpio_f(fapl_id, MPI_COMM_WORLD, MPI_INFO_NULL, hdferror)
CALL h5pset_fapl_mpio_f(fapl_id, comm, info, hdferror)
CALL check("h5pset_fapl_mpio_f", hdferror, total_error)
CALL h5fcreate_async_f(filename, H5F_ACC_TRUNC_F, file_id, es_id, error, access_prp = fapl_id )
@ -581,9 +594,14 @@ CONTAINS
TYPE(H5G_info_t), DIMENSION(1:3) :: ginfo
INTEGER :: error
INTEGER :: mpierror ! MPI error flag
INTEGER :: comm, info
INTEGER :: mpi_size, mpi_rank
INTEGER(KIND=MPI_INTEGER_KIND) :: mpierror ! MPI error flag
#ifdef H5_HAVE_MPI_F08
TYPE(MPI_COMM) :: comm
TYPE(MPI_INFO) :: info
#else
INTEGER(KIND=MPI_INTEGER_KIND) :: comm, info
#endif
INTEGER(KIND=MPI_INTEGER_KIND) :: mpi_size, mpi_rank
comm = MPI_COMM_WORLD
info = MPI_INFO_NULL
@ -709,9 +727,14 @@ CONTAINS
INTEGER(HID_T) :: ret_file_id
INTEGER :: error ! Error flags
INTEGER :: mpierror ! MPI error flag
INTEGER :: comm, info
INTEGER :: mpi_size, mpi_rank
INTEGER(KIND=MPI_INTEGER_KIND) :: mpierror ! MPI error flag
#ifdef H5_HAVE_MPI_F08
TYPE(MPI_COMM) :: comm
TYPE(MPI_INFO) :: info
#else
INTEGER(KIND=MPI_INTEGER_KIND) :: comm, info
#endif
INTEGER(KIND=MPI_INTEGER_KIND) :: mpi_size, mpi_rank
comm = MPI_COMM_WORLD
info = MPI_INFO_NULL
@ -812,12 +835,16 @@ CONTAINS
TYPE(iter_info), TARGET :: info
TYPE(C_FUNPTR) :: f1
TYPE(C_PTR) :: f2
INTEGER(C_INT) :: ret_value
INTEGER :: ret_value
INTEGER :: error ! Error flags
INTEGER :: mpierror ! MPI error flag
INTEGER :: comm
INTEGER :: mpi_size, mpi_rank
INTEGER(KIND=MPI_INTEGER_KIND) :: mpierror ! MPI error flag
#ifdef H5_HAVE_MPI_F08
TYPE(MPI_COMM) :: comm
#else
INTEGER(KIND=MPI_INTEGER_KIND) :: comm
#endif
INTEGER(KIND=MPI_INTEGER_KIND) :: mpi_size, mpi_rank
INTEGER(SIZE_T) :: count
@ -1211,10 +1238,10 @@ CONTAINS
CALL check("H5Oget_info_by_name_async_f", hdferror, total_error)
ENDIF
atime(1:8) = h5gmtime(oinfo_f%atime)
btime(1:8) = h5gmtime(oinfo_f%btime)
ctime(1:8) = h5gmtime(oinfo_f%ctime)
mtime(1:8) = h5gmtime(oinfo_f%mtime)
atime(1:8) = INT(h5gmtime(oinfo_f%atime),C_INT)
btime(1:8) = INT(h5gmtime(oinfo_f%btime),C_INT)
ctime(1:8) = INT(h5gmtime(oinfo_f%ctime),C_INT)
mtime(1:8) = INT(h5gmtime(oinfo_f%mtime),C_INT)
IF( atime(1) .LT. 2021 .OR. &
btime(1).LT. 2021 .OR. &
@ -1244,10 +1271,15 @@ PROGRAM async_test
IMPLICIT NONE
INTEGER :: total_error = 0 ! sum of the number of errors
INTEGER :: mpierror ! MPI hdferror flag
INTEGER :: mpi_size ! number of processes in the group of communicator
INTEGER :: mpi_rank ! rank of the calling process in the communicator
INTEGER :: required, provided
INTEGER(KIND=MPI_INTEGER_KIND) :: mpierror ! MPI hdferror flag
INTEGER(KIND=MPI_INTEGER_KIND) :: mpi_size ! number of processes in the group of communicator
INTEGER(KIND=MPI_INTEGER_KIND) :: mpi_rank ! rank of the calling process in the communicator
INTEGER(KIND=MPI_INTEGER_KIND) :: required, provided
#ifdef H5_HAVE_MPI_F08
TYPE(MPI_DATATYPE) :: mpi_int_type
#else
INTEGER(KIND=MPI_INTEGER_KIND) :: mpi_int_type
#endif
INTEGER(HID_T) :: vol_id
INTEGER :: hdferror
@ -1290,7 +1322,7 @@ PROGRAM async_test
IF(mpi_rank==0) CALL write_test_status(sum, &
'Testing Initializing mpi_init_thread', total_error)
CALL MPI_Barrier(MPI_COMM_WORLD, mpierror)
CALL mpi_abort(MPI_COMM_WORLD, 1, mpierror)
CALL mpi_abort(MPI_COMM_WORLD, 1_MPI_INTEGER_KIND, mpierror)
ENDIF
IF(mpi_rank==0) CALL write_test_header("ASYNC FORTRAN TESTING")
@ -1408,7 +1440,13 @@ PROGRAM async_test
!
CALL h5close_f(hdferror)
CALL MPI_ALLREDUCE(total_error, sum, 1, MPI_INTEGER, MPI_SUM, MPI_COMM_WORLD, mpierror)
IF(h5_sizeof(total_error).EQ.8_size_t)THEN
mpi_int_type=MPI_INTEGER8
ELSE
mpi_int_type=MPI_INTEGER4
ENDIF
CALL MPI_ALLREDUCE(total_error, sum, 1_MPI_INTEGER_KIND, mpi_int_type, MPI_SUM, MPI_COMM_WORLD, mpierror)
IF(mpi_rank==0) CALL write_test_footer()
@ -1422,7 +1460,7 @@ PROGRAM async_test
ENDIF
ELSE
WRITE(*,*) 'Errors detected in process ', mpi_rank
CALL mpi_abort(MPI_COMM_WORLD, 1, mpierror)
CALL mpi_abort(MPI_COMM_WORLD, 1_MPI_INTEGER_KIND, mpierror)
IF (mpierror .NE. MPI_SUCCESS) THEN
WRITE(*,*) "MPI_ABORT *FAILED* Process = ", mpi_rank
ENDIF


@ -25,8 +25,8 @@ SUBROUTINE hyper(length,do_collective,do_chunk, mpi_size, mpi_rank, nerrors)
INTEGER, INTENT(in) :: length ! array length
LOGICAL, INTENT(in) :: do_collective ! use collective I/O
LOGICAL, INTENT(in) :: do_chunk ! use chunking
INTEGER, INTENT(in) :: mpi_size ! number of processes in the group of communicator
INTEGER, INTENT(in) :: mpi_rank ! rank of the calling process in the communicator
INTEGER(KIND=MPI_INTEGER_KIND), INTENT(in) :: mpi_size ! number of processes in the group of communicator
INTEGER(KIND=MPI_INTEGER_KIND), INTENT(in) :: mpi_rank ! rank of the calling process in the communicator
INTEGER, INTENT(inout) :: nerrors ! number of errors
INTEGER :: hdferror ! HDF hdferror flag
INTEGER(hsize_t), DIMENSION(1) :: dims ! dataset dimensions


@ -25,8 +25,8 @@ SUBROUTINE multiple_dset_write(length, do_collective, do_chunk, mpi_size, mpi_ra
INTEGER, INTENT(in) :: length ! array length
LOGICAL, INTENT(in) :: do_collective ! use collective I/O
LOGICAL, INTENT(in) :: do_chunk ! use chunking
INTEGER, INTENT(in) :: mpi_size ! number of processes in the group of communicator
INTEGER, INTENT(in) :: mpi_rank ! rank of the calling process in the communicator
INTEGER(KIND=MPI_INTEGER_KIND), INTENT(in) :: mpi_size ! number of processes in the group of communicator
INTEGER(KIND=MPI_INTEGER_KIND), INTENT(in) :: mpi_rank ! rank of the calling process in the communicator
INTEGER, INTENT(inout) :: nerrors ! number of errors
INTEGER :: hdferror ! HDF hdferror flag
INTEGER(hsize_t), DIMENSION(1) :: dims ! dataset dimensions


@ -18,24 +18,32 @@
SUBROUTINE mpi_param_03(nerrors)
#ifdef H5_HAVE_ISO_FORTRAN_ENV
USE, INTRINSIC :: iso_fortran_env, ONLY : atomic_logical_kind
#endif
USE MPI
USE HDF5
USE TH5_MISC
USE TH5_MISC_GEN
IMPLICIT NONE
INTEGER, INTENT(inout) :: nerrors ! number of errors
INTEGER :: hdferror ! HDF hdferror flag
INTEGER(hid_t) :: fapl_id ! file access identifier
INTEGER :: mpi_size, mpi_size_ret ! number of processes in the group of communicator
INTEGER :: mpierror ! MPI hdferror flag
INTEGER :: mpi_rank ! rank of the calling process in the communicator
INTEGER(KIND=MPI_INTEGER_KIND) :: mpi_size, mpi_size_ret ! number of processes in the group of communicator
INTEGER(KIND=MPI_INTEGER_KIND) :: mpierror ! MPI hdferror flag
INTEGER(KIND=MPI_INTEGER_KIND) :: mpi_rank ! rank of the calling process in the communicator
INTEGER :: info, info_ret
INTEGER :: comm, comm_ret
INTEGER :: nkeys
LOGICAL :: flag
INTEGER(KIND=MPI_INTEGER_KIND) :: info, info_ret
INTEGER(KIND=MPI_INTEGER_KIND) :: comm, comm_ret
INTEGER(KIND=MPI_INTEGER_KIND) :: nkeys
#ifdef H5_HAVE_ISO_FORTRAN_ENV
LOGICAL(KIND=atomic_logical_kind) :: flag
#else
LOGICAL(KIND=MPI_INTEGER_KIND) :: flag
#endif
INTEGER :: iconfig
CHARACTER(LEN=4) , PARAMETER :: in_key="host"
CHARACTER(LEN=10), PARAMETER :: in_value="myhost.org"
@ -62,13 +70,13 @@ SUBROUTINE mpi_param_03(nerrors)
! Split the communicator
IF(mpi_rank.EQ.0)THEN
CALL MPI_Comm_split(MPI_COMM_WORLD, 1, mpi_rank, comm, mpierror)
CALL MPI_Comm_split(MPI_COMM_WORLD, 1_MPI_INTEGER_KIND, mpi_rank, comm, mpierror)
IF (mpierror .NE. MPI_SUCCESS) THEN
WRITE(*,*) "MPI_COMM_SPLIT *FAILED* Process = ", mpi_rank
nerrors = nerrors + 1
ENDIF
ELSE
CALL MPI_Comm_split(MPI_COMM_WORLD, 0, mpi_rank, comm, mpierror)
CALL MPI_Comm_split(MPI_COMM_WORLD, 0_MPI_INTEGER_KIND, mpi_rank, comm, mpierror)
IF (mpierror .NE. MPI_SUCCESS) THEN
WRITE(*,*) "MPI_COMM_SPLIT *FAILED* Process = ", mpi_rank
nerrors = nerrors + 1
@ -111,9 +119,9 @@ SUBROUTINE mpi_param_03(nerrors)
nerrors = nerrors + 1
ENDIF
IF (mpi_rank.EQ.0)THEN
CALL VERIFY("h5pget_fapl_mpio_f", mpi_size_ret, 1, hdferror)
CALL VERIFY("h5pget_fapl_mpio_f", mpi_size_ret, 1_MPI_INTEGER_KIND, hdferror)
ELSE
CALL VERIFY("h5pget_fapl_mpio_f", mpi_size_ret, mpi_size-1, hdferror)
CALL VERIFY("h5pget_fapl_mpio_f", mpi_size_ret, INT(mpi_size-1,MPI_INTEGER_KIND), hdferror)
ENDIF
! Check info returned
@ -122,9 +130,9 @@ SUBROUTINE mpi_param_03(nerrors)
WRITE(*,*) "MPI_INFO_GET_NKEYS *FAILED* Process = ", mpi_rank
nerrors = nerrors + 1
ENDIF
CALL VERIFY("h5pget_fapl_mpio_f", nkeys, 1, hdferror)
CALL VERIFY("h5pget_fapl_mpio_f", nkeys, 1_MPI_INTEGER_KIND, hdferror)
CALL MPI_Info_get_nthkey(info_ret, 0, key, mpierror)
CALL MPI_Info_get_nthkey(info_ret, 0_MPI_INTEGER_KIND, key, mpierror)
IF (mpierror .NE. MPI_SUCCESS) THEN
WRITE(*,*) "MPI_INFO_GET_NTHKEY *FAILED* Process = ", mpi_rank
nerrors = nerrors + 1
@ -136,7 +144,7 @@ SUBROUTINE mpi_param_03(nerrors)
WRITE(*,*) "MPI_INFO_GET *FAILED* Process = ", mpi_rank
nerrors = nerrors + 1
ENDIF
CALL VERIFY("h5pget_fapl_mpio_f", flag, .TRUE., hdferror)
CALL VERIFY("h5pget_fapl_mpio_f", LOGICAL(flag), .TRUE., hdferror)
CALL VERIFY("h5pget_fapl_mpio_f", TRIM(value), in_value, hdferror)
! Free the MPI resources
@ -171,6 +179,9 @@ SUBROUTINE mpi_param_08(nerrors)
#ifdef H5_HAVE_MPI_F08
#ifdef H5_HAVE_ISO_FORTRAN_ENV
USE, INTRINSIC :: iso_fortran_env, ONLY : atomic_logical_kind
#endif
USE MPI_F08
USE HDF5
USE TH5_MISC
@ -181,14 +192,18 @@ SUBROUTINE mpi_param_08(nerrors)
INTEGER :: hdferror ! HDF hdferror flag
INTEGER(hid_t) :: fapl_id ! file access identifier
INTEGER :: mpi_size, mpi_size_ret ! number of processes in the group of communicator
INTEGER :: mpierror ! MPI hdferror flag
INTEGER :: mpi_rank ! rank of the calling process in the communicator
INTEGER(KIND=MPI_INTEGER_KIND) :: mpi_size, mpi_size_ret ! number of processes in the group of communicator
INTEGER(KIND=MPI_INTEGER_KIND) :: mpierror ! MPI hdferror flag
INTEGER(KIND=MPI_INTEGER_KIND) :: mpi_rank ! rank of the calling process in the communicator
TYPE(MPI_INFO) :: info, info_ret
TYPE(MPI_COMM) :: comm, comm_ret
INTEGER :: nkeys
LOGICAL :: flag
INTEGER(KIND=MPI_INTEGER_KIND) :: nkeys
#ifdef H5_HAVE_ISO_FORTRAN_ENV
LOGICAL(KIND=atomic_logical_kind) :: flag
#else
LOGICAL(KIND=MPI_INTEGER_KIND) :: flag
#endif
INTEGER :: iconfig
CHARACTER(LEN=4) , PARAMETER :: in_key="host"
CHARACTER(LEN=10), PARAMETER :: in_value="myhost.org"
@ -215,13 +230,13 @@ SUBROUTINE mpi_param_08(nerrors)
! Split the communicator
IF(mpi_rank.EQ.0)THEN
CALL MPI_Comm_split(MPI_COMM_WORLD, 1, mpi_rank, comm, mpierror)
CALL MPI_Comm_split(MPI_COMM_WORLD, 1_MPI_INTEGER_KIND, mpi_rank, comm, mpierror)
IF (mpierror .NE. MPI_SUCCESS) THEN
WRITE(*,*) "MPI_COMM_SPLIT *FAILED* Process = ", mpi_rank
nerrors = nerrors + 1
ENDIF
ELSE
CALL MPI_Comm_split(MPI_COMM_WORLD, 0, mpi_rank, comm, mpierror)
CALL MPI_Comm_split(MPI_COMM_WORLD, 0_MPI_INTEGER_KIND, mpi_rank, comm, mpierror)
IF (mpierror .NE. MPI_SUCCESS) THEN
WRITE(*,*) "MPI_COMM_SPLIT *FAILED* Process = ", mpi_rank
nerrors = nerrors + 1
@ -264,9 +279,9 @@ SUBROUTINE mpi_param_08(nerrors)
nerrors = nerrors + 1
ENDIF
IF (mpi_rank.EQ.0)THEN
CALL VERIFY("h5pget_fapl_mpio_f", mpi_size_ret, 1, hdferror)
CALL VERIFY("h5pget_fapl_mpio_f", mpi_size_ret, 1_MPI_INTEGER_KIND, hdferror)
ELSE
CALL VERIFY("h5pget_fapl_mpio_f", mpi_size_ret, mpi_size-1, hdferror)
CALL VERIFY("h5pget_fapl_mpio_f", mpi_size_ret, INT(mpi_size-1,MPI_INTEGER_KIND), hdferror)
ENDIF
! Check info returned
@ -275,9 +290,9 @@ SUBROUTINE mpi_param_08(nerrors)
WRITE(*,*) "MPI_INFO_GET_NKEYS *FAILED* Process = ", mpi_rank
nerrors = nerrors + 1
ENDIF
CALL VERIFY("h5pget_fapl_mpio_f", nkeys, 1, hdferror)
CALL VERIFY("h5pget_fapl_mpio_f", nkeys, 1_MPI_INTEGER_KIND, hdferror)
CALL MPI_Info_get_nthkey(info_ret, 0, key, mpierror)
CALL MPI_Info_get_nthkey(info_ret, 0_MPI_INTEGER_KIND, key, mpierror)
IF (mpierror .NE. MPI_SUCCESS) THEN
WRITE(*,*) "MPI_INFO_GET_NTHKEY *FAILED* Process = ", mpi_rank
nerrors = nerrors + 1
@ -289,7 +304,7 @@ SUBROUTINE mpi_param_08(nerrors)
WRITE(*,*) "MPI_INFO_GET *FAILED* Process = ", mpi_rank
nerrors = nerrors + 1
ENDIF
CALL VERIFY("h5pget_fapl_mpio_f", flag, .TRUE., hdferror)
CALL VERIFY("h5pget_fapl_mpio_f", LOGICAL(flag), .TRUE., hdferror)
CALL VERIFY("h5pget_fapl_mpio_f", TRIM(value), in_value, hdferror)
! Free the MPI resources


@ -25,8 +25,8 @@ SUBROUTINE pmultiple_dset_hyper_rw(do_collective, do_chunk, mpi_size, mpi_rank,
LOGICAL, INTENT(in) :: do_collective ! use collective IO
LOGICAL, INTENT(in) :: do_chunk ! use chunking
INTEGER, INTENT(in) :: mpi_size ! number of processes in the group of communicator
INTEGER, INTENT(in) :: mpi_rank ! rank of the calling process in the communicator
INTEGER(KIND=MPI_INTEGER_KIND), INTENT(in) :: mpi_size ! number of processes in the group of communicator
INTEGER(KIND=MPI_INTEGER_KIND), INTENT(in) :: mpi_rank ! rank of the calling process in the communicator
INTEGER, INTENT(inout) :: nerrors ! number of errors
CHARACTER(LEN=80):: dsetname ! Dataset name
INTEGER(hsize_t), DIMENSION(1:2) :: cdims ! chunk dimensions
@ -156,6 +156,7 @@ SUBROUTINE pmultiple_dset_hyper_rw(do_collective, do_chunk, mpi_size, mpi_rank,
CALL h5dwrite_multi_f(ndsets, dset_id, mem_type_id, mem_space_id, file_space_id, buf_md, error, plist_id)
CALL check("h5dwrite_multi_f", error, nerrors)
return
CALL h5pget_dxpl_mpio_f(plist_id, data_xfer_mode, error)
CALL check("h5pget_dxpl_mpio_f", error, nerrors)


@ -21,12 +21,12 @@ PROGRAM parallel_test
IMPLICIT NONE
INTEGER :: mpierror ! MPI hdferror flag
INTEGER(KIND=MPI_INTEGER_KIND) :: mpierror ! MPI hdferror flag
INTEGER :: hdferror ! HDF hdferror flag
INTEGER :: ret_total_error = 0 ! number of errors in subroutine
INTEGER :: total_error = 0 ! sum of the number of errors
INTEGER :: mpi_size ! number of processes in the group of communicator
INTEGER :: mpi_rank ! rank of the calling process in the communicator
INTEGER(KIND=MPI_INTEGER_KIND) :: mpi_size ! number of processes in the group of communicator
INTEGER(KIND=MPI_INTEGER_KIND) :: mpi_rank ! rank of the calling process in the communicator
INTEGER :: length = 12000 ! length of array
INTEGER :: i,j, sum
! use collective MPI I/O
@ -35,6 +35,7 @@ PROGRAM parallel_test
! use chunking
LOGICAL, DIMENSION(1:2) :: do_chunk = (/.FALSE.,.TRUE./)
CHARACTER(LEN=10), DIMENSION(1:2) :: chr_chunk =(/"contiguous", "chunk "/)
INTEGER(KIND=MPI_INTEGER_KIND) :: mpi_int_type
!
! initialize MPI
@ -71,6 +72,7 @@ PROGRAM parallel_test
!
! test write/read dataset by hyperslabs (contiguous/chunk) with independent/collective MPI I/O
!
DO i = 1, 2
DO j = 1, 2
ret_total_error = 0
@ -80,10 +82,10 @@ PROGRAM parallel_test
total_error)
ENDDO
ENDDO
!
! test write/read several datasets (independent MPI I/O)
!
ret_total_error = 0
CALL multiple_dset_write(length, do_collective(1), do_chunk(1), mpi_size, mpi_rank, ret_total_error)
IF(mpi_rank==0) CALL write_test_status(ret_total_error, &
@ -105,7 +107,13 @@ PROGRAM parallel_test
!
CALL h5close_f(hdferror)
CALL MPI_ALLREDUCE(total_error, sum, 1, MPI_INTEGER, MPI_SUM, MPI_COMM_WORLD, mpierror)
IF(h5_sizeof(total_error).EQ.8_size_t)THEN
mpi_int_type=MPI_INTEGER8
ELSE
mpi_int_type=MPI_INTEGER4
ENDIF
CALL MPI_ALLREDUCE(total_error, sum, 1_MPI_INTEGER_KIND, mpi_int_type, MPI_SUM, MPI_COMM_WORLD, mpierror)
IF(mpi_rank==0) CALL write_test_footer()
@ -119,7 +127,7 @@ PROGRAM parallel_test
ENDIF
ELSE
WRITE(*,*) 'Errors detected in process ', mpi_rank
CALL mpi_abort(MPI_COMM_WORLD, 1, mpierror)
CALL mpi_abort(MPI_COMM_WORLD, 1_MPI_INTEGER_KIND, mpierror)
IF (mpierror .NE. MPI_SUCCESS) THEN
WRITE(*,*) "MPI_ABORT *FAILED* Process = ", mpi_rank
ENDIF


@ -18,6 +18,9 @@
PROGRAM subfiling_test
USE, INTRINSIC :: ISO_C_BINDING, ONLY : C_INT64_T
#ifdef H5_HAVE_ISO_FORTRAN_ENV
USE, INTRINSIC :: iso_fortran_env, ONLY : atomic_logical_kind
#endif
USE HDF5
USE MPI
USE TH5_MISC
@ -25,29 +28,33 @@ PROGRAM subfiling_test
IMPLICIT NONE
INTEGER :: total_error = 0 ! sum of the number of errors
INTEGER :: mpierror ! MPI hdferror flag
INTEGER :: mpi_rank ! rank of the calling process in the communicator
INTEGER(KIND=MPI_INTEGER_KIND) :: mpierror ! MPI hdferror flag
INTEGER(KIND=MPI_INTEGER_KIND) :: mpi_rank ! rank of the calling process in the communicator
#ifdef H5_HAVE_SUBFILING_VFD
CHARACTER(LEN=7), PARAMETER :: filename = "subf.h5"
INTEGER :: hdferror ! HDF hdferror flag
INTEGER :: mpi_size, mpi_size_ret ! number of processes in the group of communicator
INTEGER :: required, provided
INTEGER :: hdferror ! HDF hdferror flag
INTEGER(KIND=MPI_INTEGER_KIND) :: mpi_size, mpi_size_ret ! number of processes in the group of communicator
INTEGER(KIND=MPI_INTEGER_KIND) :: required, provided
LOGICAL :: file_exists
INTEGER(HID_T) :: fapl_id
INTEGER(HID_T) :: file_id
INTEGER :: comm, comm_ret
INTEGER :: info, info_ret
INTEGER(KIND=MPI_INTEGER_KIND) :: comm, comm_ret
INTEGER(KIND=MPI_INTEGER_KIND) :: info, info_ret
CHARACTER(LEN=3) :: info_val
CHARACTER(LEN=180) :: subfname
INTEGER :: i, sum
INTEGER(C_INT64_T) inode
TYPE(H5FD_subfiling_config_t) :: vfd_config
TYPE(H5FD_ioc_config_t) :: vfd_config_ioc
LOGICAL :: flag
#ifdef H5_HAVE_ISO_FORTRAN_ENV
LOGICAL(KIND=atomic_logical_kind) :: flag
#else
LOGICAL(KIND=MPI_INTEGER_KIND) :: flag
#endif
INTEGER :: nerrors = 0
@ -56,6 +63,7 @@ PROGRAM subfiling_test
CHARACTER(len=8) :: hex1, hex2
CHARACTER(len=1) :: arg
INTEGER(KIND=MPI_INTEGER_KIND) :: mpi_int_type
!
! initialize MPI
!
@ -84,7 +92,7 @@ PROGRAM subfiling_test
IF(mpi_rank==0) CALL write_test_status(sum, &
'Testing Initializing mpi_init_thread', total_error)
CALL MPI_Barrier(MPI_COMM_WORLD, mpierror)
CALL mpi_abort(MPI_COMM_WORLD, 1, mpierror)
CALL mpi_abort(MPI_COMM_WORLD, 1_MPI_INTEGER_KIND, mpierror)
ENDIF
!
@ -101,9 +109,9 @@ PROGRAM subfiling_test
IF(mpi_size.GT.2)THEN
IF (mpi_rank.LE.1)THEN
CALL MPI_Comm_split(MPI_COMM_WORLD, 1, mpi_rank, comm, mpierror)
CALL MPI_Comm_split(MPI_COMM_WORLD, 1_MPI_INTEGER_KIND, mpi_rank, comm, mpierror)
ELSE
CALL MPI_Comm_split(MPI_COMM_WORLD, 0, mpi_rank, comm, mpierror)
CALL MPI_Comm_split(MPI_COMM_WORLD, 0_MPI_INTEGER_KIND, mpi_rank, comm, mpierror)
ENDIF
CALL MPI_Info_create(info, mpierror)
@ -128,8 +136,8 @@ PROGRAM subfiling_test
nerrors = nerrors + 1
ENDIF
CALL mpi_info_get(info_ret,"foo", 3, info_val, flag, mpierror)
IF(flag .EQV. .TRUE.)THEN
CALL mpi_info_get(info_ret,"foo", 3_MPI_INTEGER_KIND, info_val, flag, mpierror)
IF(LOGICAL(flag) .EQV. .TRUE.)THEN
IF(info_val.NE."bar")THEN
IF(mpi_rank.EQ.0) &
WRITE(*,*) "Failed H5Pset_mpi_params_f and H5Pget_mpi_params_f sequence"
@ -148,7 +156,13 @@ PROGRAM subfiling_test
ENDIF
CALL MPI_REDUCE(nerrors, sum, 1, MPI_INTEGER, MPI_SUM, 0, MPI_COMM_WORLD, mpierror)
IF(h5_sizeof(total_error).EQ.8_size_t)THEN
mpi_int_type=MPI_INTEGER8
ELSE
mpi_int_type=MPI_INTEGER4
ENDIF
CALL MPI_REDUCE(nerrors, sum, 1_MPI_INTEGER_KIND, mpi_int_type, MPI_SUM, 0_MPI_INTEGER_KIND, MPI_COMM_WORLD, mpierror)
IF(mpi_rank==0) CALL write_test_status(sum, &
'Testing H5Pset/get_mpi_params_f', total_error)
@ -267,10 +281,10 @@ PROGRAM subfiling_test
! Testing modifying defaults for subfiling FD
vfd_config%magic = H5FD_SUBFILING_FAPL_MAGIC_F
vfd_config%version = H5FD_SUBFILING_CURR_FAPL_VERSION_F
vfd_config%magic = INT(H5FD_SUBFILING_FAPL_MAGIC_F,C_INT32_T)
vfd_config%version = INT(H5FD_SUBFILING_CURR_FAPL_VERSION_F,C_INT32_T)
vfd_config%require_ioc = .TRUE.
vfd_config%shared_cfg%ioc_selection = SELECT_IOC_ONE_PER_NODE_F
vfd_config%shared_cfg%ioc_selection = INT(SELECT_IOC_ONE_PER_NODE_F,C_INT)
vfd_config%shared_cfg%stripe_size = 16*1024*1024
vfd_config%shared_cfg%stripe_count = 3
@ -299,8 +313,8 @@ PROGRAM subfiling_test
IF(mpi_rank==0) CALL write_test_status(nerrors, &
'Testing H5Pset/get_fapl_subfiling_f with custom settings', total_error)
vfd_config_ioc%magic = H5FD_IOC_FAPL_MAGIC_F
vfd_config_ioc%version = H5FD_IOC_CURR_FAPL_VERSION_F
vfd_config_ioc%magic = INT(H5FD_IOC_FAPL_MAGIC_F,C_INT32_T)
vfd_config_ioc%version = INT(H5FD_IOC_CURR_FAPL_VERSION_F,C_INT32_T)
vfd_config_ioc%thread_pool_size = 2
nerrors = 0
@ -374,7 +388,13 @@ PROGRAM subfiling_test
!
CALL h5close_f(hdferror)
CALL MPI_ALLREDUCE(total_error, sum, 1, MPI_INTEGER, MPI_SUM, MPI_COMM_WORLD, mpierror)
IF(h5_sizeof(total_error).EQ.8_size_t)THEN
mpi_int_type=MPI_INTEGER8
ELSE
mpi_int_type=MPI_INTEGER4
ENDIF
CALL MPI_ALLREDUCE(total_error, sum, 1_MPI_INTEGER_KIND, mpi_int_type, MPI_SUM, MPI_COMM_WORLD, mpierror)
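The reduction hunks above pick `MPI_INTEGER8` or `MPI_INTEGER4` from `h5_sizeof(total_error)` so the MPI datatype always matches the width of the Fortran default integer, which compiler flags can promote. A hedged C sketch of the same run-time selection, with stand-in type tags instead of real MPI handles:

```c
#include <stddef.h>

/* Stand-ins for MPI_INTEGER4 / MPI_INTEGER8 (hypothetical values). */
enum { TYPE_INT4 = 4, TYPE_INT8 = 8 };

/* Choose the reduction datatype from the element size at run time,
 * mirroring the h5_sizeof(total_error) check in the diff. */
static int pick_int_type(size_t elem_size)
{
    return (elem_size == 8) ? TYPE_INT8 : TYPE_INT4;
}
```

Hard-coding `MPI_INTEGER` would silently mismatch when integers are promoted to 8 bytes, corrupting the reduced error counts.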
!
! close MPI
@ -386,7 +406,7 @@ PROGRAM subfiling_test
ENDIF
ELSE
WRITE(*,*) 'Errors detected in process ', mpi_rank
CALL mpi_abort(MPI_COMM_WORLD, 1, mpierror)
CALL mpi_abort(MPI_COMM_WORLD, 1_MPI_INTEGER_KIND, mpierror)
IF (mpierror .NE. MPI_SUCCESS) THEN
WRITE(*,*) "MPI_ABORT *FAILED* Process = ", mpi_rank
ENDIF

View File

@ -78,6 +78,7 @@ test_dataset_append_notset(hid_t fid)
} /* end for */
/* File size when not flushed */
memset(&sb1, 0, sizeof(h5_stat_t));
if (HDstat(FILENAME, &sb1) < 0)
TEST_ERROR;
@ -86,6 +87,7 @@ test_dataset_append_notset(hid_t fid)
FAIL_STACK_ERROR;
/* File size after flushing */
memset(&sb2, 0, sizeof(h5_stat_t));
if (HDstat(FILENAME, &sb2) < 0)
TEST_ERROR;
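The `memset` lines added above implement the fix described in the PR summary: stat-like calls fill only some of the buffer, so stack-allocated stat structs keep uninitialized padding that `-fsanitize=memory` flags. A minimal POSIX sketch of the pattern:

```c
#include <string.h>
#include <sys/stat.h>

/* Zero the buffer before stat() so any fields or padding the call
 * does not write are defined; this quiets MemorySanitizer without
 * changing the fields stat() does fill in. */
static int stat_zeroed(const char *path, struct stat *sb)
{
    memset(sb, 0, sizeof(*sb));
    return stat(path, sb);
}
```

The same wrapper shape applies to `fstat`/`lstat` variants behind the `HDstat` macro.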


@ -12,7 +12,7 @@
/*
* For details of the HDF libraries, see the HDF Documentation at:
* http://hdfgroup.org/HDF5/doc/
* https://portal.hdfgroup.org/documentation/index.html
*
*/


@ -12,7 +12,7 @@
/*
* For details of the HDF libraries, see the HDF Documentation at:
* http://hdfgroup.org/HDF5/doc/
* https://portal.hdfgroup.org/documentation/index.html
*
*/


@ -12,7 +12,7 @@
/*
* For details of the HDF libraries, see the HDF Documentation at:
* http://hdfgroup.org/HDF5/doc/
* https://portal.hdfgroup.org/documentation/index.html
*
*/


@ -12,7 +12,7 @@
/*
* For details of the HDF libraries, see the HDF Documentation at:
* http://hdfgroup.org/HDF5/doc/
* https://portal.hdfgroup.org/documentation/index.html
*
*/
@ -1107,12 +1107,12 @@ done:
JNIEXPORT jint JNICALL
Java_hdf_hdf5lib_H5_H5AreadVL(JNIEnv *env, jclass clss, jlong attr_id, jlong mem_type_id, jobjectArray buf)
{
jbyte *readBuf = NULL;
void *readBuf = NULL;
hsize_t dims[H5S_MAX_RANK];
hid_t sid = H5I_INVALID_HID;
size_t typeSize;
H5T_class_t type_class;
jsize vl_array_len;
jsize vl_array_len = 0;
htri_t vl_data_class;
herr_t status = FAIL;
htri_t is_variable = 0;
@ -1136,7 +1136,7 @@ Java_hdf_hdf5lib_H5_H5AreadVL(JNIEnv *env, jclass clss, jlong attr_id, jlong mem
if (NULL == (readBuf = calloc((size_t)vl_array_len, typeSize)))
H5_OUT_OF_MEMORY_ERROR(ENVONLY, "H5Aread: failed to allocate raw VL read buffer");
if ((status = H5Aread((hid_t)attr_id, (hid_t)mem_type_id, (void *)readBuf)) < 0)
if ((status = H5Aread((hid_t)attr_id, (hid_t)mem_type_id, readBuf)) < 0)
H5_LIBRARY_ERROR(ENVONLY);
if ((type_class = H5Tget_class((hid_t)mem_type_id)) < 0)
H5_LIBRARY_ERROR(ENVONLY);
@ -1173,12 +1173,12 @@ done:
JNIEXPORT jint JNICALL
Java_hdf_hdf5lib_H5_H5AwriteVL(JNIEnv *env, jclass clss, jlong attr_id, jlong mem_type_id, jobjectArray buf)
{
jbyte *writeBuf = NULL;
void *writeBuf = NULL;
hsize_t dims[H5S_MAX_RANK];
hid_t sid = H5I_INVALID_HID;
size_t typeSize;
H5T_class_t type_class;
jsize vl_array_len;
jsize vl_array_len = 0;
htri_t vl_data_class;
herr_t status = FAIL;
htri_t is_variable = 0;


@ -12,7 +12,7 @@
/*
* For details of the HDF libraries, see the HDF Documentation at:
* http://hdfgroup.org/HDF5/doc/
* https://portal.hdfgroup.org/documentation/index.html
*
*/
@ -185,7 +185,7 @@ Java_hdf_hdf5lib_H5_H5Dread(JNIEnv *env, jclass clss, jlong dataset_id, jlong me
jbyte *readBuf = NULL;
size_t typeSize;
H5T_class_t type_class;
jsize vl_array_len; // Only used by vl_data_class types
jsize vl_array_len = 0; // Only used by vl_data_class types
htri_t vl_data_class;
herr_t status = FAIL;
@ -266,7 +266,7 @@ Java_hdf_hdf5lib_H5_H5Dwrite(JNIEnv *env, jclass clss, jlong dataset_id, jlong m
jbyte *writeBuf = NULL;
size_t typeSize;
H5T_class_t type_class;
jsize vl_array_len; // Only used by vl_data_class types
jsize vl_array_len = 0; // Only used by vl_data_class types
htri_t vl_data_class;
herr_t status = FAIL;
@ -1134,7 +1134,7 @@ JNIEXPORT jint JNICALL
Java_hdf_hdf5lib_H5_H5DreadVL(JNIEnv *env, jclass clss, jlong dataset_id, jlong mem_type_id,
jlong mem_space_id, jlong file_space_id, jlong xfer_plist_id, jobjectArray buf)
{
jbyte *readBuf = NULL;
void *readBuf = NULL;
size_t typeSize;
H5T_class_t type_class;
jsize vl_array_len;
@ -1164,7 +1164,7 @@ Java_hdf_hdf5lib_H5_H5DreadVL(JNIEnv *env, jclass clss, jlong dataset_id, jlong
H5_OUT_OF_MEMORY_ERROR(ENVONLY, "H5DreadVL: failed to allocate raw VL read buffer");
if ((status = H5Dread((hid_t)dataset_id, (hid_t)mem_type_id, (hid_t)mem_space_id, (hid_t)file_space_id,
(hid_t)xfer_plist_id, (void *)readBuf)) < 0)
(hid_t)xfer_plist_id, readBuf)) < 0)
H5_LIBRARY_ERROR(ENVONLY);
if ((type_class = H5Tget_class((hid_t)mem_type_id)) < 0)
H5_LIBRARY_ERROR(ENVONLY);
@ -1194,7 +1194,7 @@ JNIEXPORT jint JNICALL
Java_hdf_hdf5lib_H5_H5DwriteVL(JNIEnv *env, jclass clss, jlong dataset_id, jlong mem_type_id,
jlong mem_space_id, jlong file_space_id, jlong xfer_plist_id, jobjectArray buf)
{
jbyte *writeBuf = NULL;
void *writeBuf = NULL;
size_t typeSize;
H5T_class_t type_class;
jsize vl_array_len; // Only used by vl_data_class types


@ -10,12 +10,6 @@
* help@hdfgroup.org. *
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */
/*
* For details of the HDF libraries, see the HDF Documentation at:
* http://hdfgroup.org/HDF5/doc/
*
*/
#ifdef __cplusplus
extern "C" {
#endif /* __cplusplus */
@ -28,7 +22,7 @@ extern "C" {
* analogous arguments and return codes.
*
* For details of the HDF libraries, see the HDF Documentation at:
* http://www.hdfgroup.org/HDF5/doc/
* https://portal.hdfgroup.org/documentation/index.html
*
*/
@ -461,7 +455,7 @@ Java_hdf_hdf5lib_H5_H5Eget_1msg(JNIEnv *env, jclass clss, jlong msg_id, jintArra
H5_LIBRARY_ERROR(ENVONLY);
namePtr[buf_size] = '\0';
theArray[0] = error_msg_type;
theArray[0] = (jint)error_msg_type;
if (NULL == (str = ENVPTR->NewStringUTF(ENVONLY, namePtr)))
CHECK_JNI_EXCEPTION(ENVONLY, JNI_FALSE);


@ -12,7 +12,7 @@
/*
* For details of the HDF libraries, see the HDF Documentation at:
* http://hdfgroup.org/HDF5/doc/
* https://portal.hdfgroup.org/documentation/index.html
*
*/


@ -12,7 +12,7 @@
/*
* For details of the HDF libraries, see the HDF Documentation at:
* http://hdfgroup.org/HDF5/doc/
* https://portal.hdfgroup.org/documentation/index.html
*
*/


@ -12,7 +12,7 @@
/*
* For details of the HDF libraries, see the HDF Documentation at:
* http://hdfgroup.org/HDF5/doc/
* https://portal.hdfgroup.org/documentation/index.html
*
*/


@ -12,7 +12,7 @@
/*
* For details of the HDF libraries, see the HDF Documentation at:
* http://hdfgroup.org/HDF5/doc/
* https://portal.hdfgroup.org/documentation/index.html
*
*/
