netcdf-c/configure.ac

# -*- Autoconf -*-
## Process this file with autoconf to produce a configure script.
# This is part of Unidata's netCDF package. Copyright 2005-2018, see
# the COPYRIGHT file for more information.
# Ed Hartnett, Ward Fisher, Dennis Heimbigner
# Running autoconf on this file will trigger a warning if
# autoconf is not at least the specified version.
AC_PREREQ([2.59])
# Initialize with name, version, and support email address.
AC_INIT([netCDF],[4.9.3-development],[support-netcdf@unidata.ucar.edu],[netcdf-c])
##
# Prefer an empty CFLAGS variable instead of the default -g -O2.
# See:
# * http://www.gnu.org/savannah-checkouts/gnu/autoconf/manual/autoconf-2.69/html_node/C-Compiler.html#C-Compiler
##
: ${CFLAGS=""}
AC_SUBST([NC_VERSION_MAJOR]) NC_VERSION_MAJOR=4
AC_SUBST([NC_VERSION_MINOR]) NC_VERSION_MINOR=9
AC_SUBST([NC_VERSION_PATCH]) NC_VERSION_PATCH=3
AC_SUBST([NC_VERSION_NOTE]) NC_VERSION_NOTE="-development"
##
# These linker flags specify libtool version info.
# See http://www.gnu.org/software/libtool/manual/libtool.html#Libtool-versioning
# for information regarding incrementing `-version-info`.
# These values should match those in CMakeLists.txt
AC_SUBST([netCDF_SO_VERSION]) netCDF_SO_VERSION=21:2:2
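# A worked example of the current:revision:age triple (see the libtool
# manual): on ELF platforms libtool installs libnetcdf.so.(c-a).(a).(r),
# so 21:2:2 produces libnetcdf.so.19.2.2.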
#####
# Set some variables used to generate a libnetcdf.settings file,
# patterned after the files generated by libhdf4, libhdf5.
#####
# Create the VERSION file, which contains the package version from AC_INIT.
# This file is apparently unused. But see the bottom of Makefile.am
# echo AC_PACKAGE_VERSION>VERSION
AC_SUBST(PACKAGE_VERSION)
AC_MSG_NOTICE([netCDF AC_PACKAGE_VERSION])
# Keep libtool macros in an m4 directory.
AC_CONFIG_MACRO_DIR([m4])
# Configuration Date
if test "x$SOURCE_DATE_EPOCH" != "x" ; then
AC_SUBST([CONFIG_DATE]) CONFIG_DATE="`date -u -d "@${SOURCE_DATE_EPOCH}"`"
else
AC_SUBST([CONFIG_DATE]) CONFIG_DATE="`date`"
fi
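# Example (assumes GNU date, which reads epoch seconds via an "@" prefix):
#   SOURCE_DATE_EPOCH=1577836800 ./configure
# pins CONFIG_DATE to a fixed UTC timestamp for reproducible builds.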
# Find out about the host we're building on.
AC_CANONICAL_HOST
# Find out about the target we're building for.
AC_CANONICAL_TARGET
AC_CONFIG_HEADERS([config.h])
##
# Check to see if the compiler supports -fno-strict-aliasing and, if so,
# add that to the C compiler flags. This is in support of https://github.com/Unidata/netcdf-c/issues/1983.
##
SAVE_CFLAGS="${CFLAGS}"
AC_LANG_PUSH([C])
AC_LANG_COMPILER_REQUIRE
CFLAGS="${CFLAGS} -fno-strict-aliasing"
AC_COMPILE_IFELSE([AC_LANG_PROGRAM([
[int i = 0;]])],
[have_no_strict_aliasing=yes],
[have_no_strict_aliasing=no])
AC_MSG_CHECKING([whether compiler supports -fno-strict-aliasing])
AC_MSG_RESULT([$have_no_strict_aliasing])
AC_LANG_POP([C])
if test $have_no_strict_aliasing = no; then
CFLAGS=$SAVE_CFLAGS
fi
##
# Some files need to exist in build directories
# that do not correspond to their source directory, or
# the test program makes an assumption about where files
# live. AC_CONFIG_LINKS provides a mechanism to link/copy files
# if an out-of-source build is happening.
##
AC_CONFIG_LINKS([nc_test4/ref_hdf5_compat1.nc:nc_test4/ref_hdf5_compat1.nc])
AC_CONFIG_LINKS([nc_test4/ref_hdf5_compat2.nc:nc_test4/ref_hdf5_compat2.nc])
AC_CONFIG_LINKS([nc_test4/ref_hdf5_compat3.nc:nc_test4/ref_hdf5_compat3.nc])
AC_CONFIG_LINKS([hdf4_test/ref_chunked.hdf4:hdf4_test/ref_chunked.hdf4])
AC_CONFIG_LINKS([hdf4_test/ref_contiguous.hdf4:hdf4_test/ref_contiguous.hdf4])
AM_INIT_AUTOMAKE([foreign dist-zip subdir-objects])
#AM_MAINTAINER_MODE()
# Check for the existence of this file before proceeding.
AC_CONFIG_SRCDIR([include/netcdf.h])
# Figure out platforms of special interest
AC_CANONICAL_HOST
AS_CASE([$host],
[*-*-cygwin], [ISCYGWIN=yes],
[*-*-darwin*], [ISOSX=yes],
[*-*-mingw*], [ISMINGW=yes],
[*-*-msys], [ISMINGW=yes],
[*-*-win*], [ISMSVC=yes],
[]
)
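# Example host triplets matched above (illustrative):
#   x86_64-w64-mingw32     => ISMINGW=yes
#   x86_64-apple-darwin21  => ISOSX=yes
#   x86_64-pc-cygwin       => ISCYGWIN=yes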
if test "x$MSYSTEM" != x ; then
ISMINGW=yes
ISMSYS=yes
fi
# Get windows version info
if test "x$ISMSVC" = xyes ; then
WINVER=`systeminfo | sed -e '/^OS Version:/p' -ed | sed -e 's|[^0-9]*\([0-9.]*\).*|\1|'`
else
WINVER="0.0.0"
fi
WINVERMAJOR=`echo $WINVER | sed -e 's|\([^.]*\)[.]\([^.]*\)[.]\(.*\)|\1|'`
WINVERBUILD=`echo $WINVER | sed -e 's|\([^.]*\)[.]\([^.]*\)[.]\(.*\)|\3|'`
if test "x$WINVERMAJOR" = x ; then WINVERMAJOR=0; fi
if test "x$WINVERBUILD" = x ; then WINVERBUILD=0; fi
AC_DEFINE_UNQUOTED([WINVERMAJOR], [$WINVERMAJOR], [windows version major])
AC_DEFINE_UNQUOTED([WINVERBUILD], [$WINVERBUILD], [windows version build])
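# Example: a WINVER of "10.0.19044" yields WINVERMAJOR=10 and
# WINVERBUILD=19044 via the sed expressions above.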
AC_MSG_NOTICE([checking supported formats])
# An explicit disable of netcdf-4 | netcdf4 is treated as if it were --disable-hdf5
AC_MSG_CHECKING([whether we should build with netcdf4 (alias for HDF5)])
AC_ARG_ENABLE([netcdf4], [AS_HELP_STRING([--disable-netcdf4],
[(Deprecated) Synonym for --disable-hdf5])])
test "x$enable_netcdf4" = xno || enable_netcdf4=yes
AC_MSG_RESULT([$enable_netcdf4 (Deprecated; please use --disable-hdf5)])
AC_MSG_CHECKING([whether we should build with netcdf-4 (alias for HDF5)])
AC_ARG_ENABLE([netcdf-4], [AS_HELP_STRING([--disable-netcdf-4],
[(synonym for --disable-netcdf4)])])
test "x$enable_netcdf_4" = xno || enable_netcdf_4=yes
AC_MSG_RESULT([$enable_netcdf_4])
# Propagate the alias
if test "x$enable_netcdf_4" = xno ; then enable_netcdf4=no; fi
if test "x$enable_netcdf4" = xno ; then enable_netcdf_4=no; fi
# Does the user want to use HDF5?
AC_MSG_CHECKING([whether we should build with HDF5])
AC_ARG_ENABLE([hdf5], [AS_HELP_STRING([--disable-hdf5],
[do not build with HDF5])])
test "x$enable_hdf5" = xno || enable_hdf5=yes
if test "x$enable_netcdf4" = xno ; then enable_hdf5=no ; fi
# disable-netcdf4 is synonym for disable-hdf5
AC_MSG_RESULT([$enable_hdf5])
# Check whether we want to enable CDF5 support.
AC_MSG_CHECKING([whether CDF5 support should be disabled])
AC_ARG_ENABLE([cdf5],
[AS_HELP_STRING([--disable-cdf5],
[build without CDF5 support. @<:@default: auto@:>@])],
[enable_cdf5=${enableval}], [enable_cdf5=auto]
)
AC_MSG_RESULT($enable_cdf5)
# Does the user want to turn on HDF4 read ability?
AC_MSG_CHECKING([whether reading of HDF4 SD files is to be enabled])
AC_ARG_ENABLE([hdf4], [AS_HELP_STRING([--enable-hdf4],
[build netcdf with HDF4 read capability (HDF4, HDF5 and zlib required)])])
test "x$enable_hdf4" = xyes || enable_hdf4=no
AC_MSG_RESULT($enable_hdf4)
AC_MSG_CHECKING([whether parallel I/O for classic files is to be enabled])
AC_ARG_ENABLE([pnetcdf], [AS_HELP_STRING([--enable-pnetcdf],
[build with parallel I/O for classic files. @<:@default: disabled@:>@])])
test "x$enable_pnetcdf" = xyes || enable_pnetcdf=no
AC_MSG_RESULT($enable_pnetcdf)
AC_MSG_CHECKING([whether any network access should be allowed])
AC_ARG_ENABLE([remote-functionality], [AS_HELP_STRING([--disable-remote-functionality],
[disable all forms of network access (default: enabled)])])
test "x$enable_remote_functionality" = xno || enable_remote_functionality=yes
AC_MSG_RESULT($enable_remote_functionality)
# We need curl for remote operations
AC_CHECK_LIB([curl],[curl_easy_setopt],[found_curl=yes],[found_curl=no])
if test "x$found_curl" = "xyes" ; then
AC_SEARCH_LIBS([curl_easy_setopt],[curl curl.dll cygcurl.dll], [],[])
fi
## Capture the state of the --enable-dap flag => enable dap2+dap4
AC_MSG_CHECKING([whether DAP client(s) are to be built])
AC_ARG_ENABLE([dap],
[AS_HELP_STRING([--disable-dap],
[build without DAP client support.])])
test "x$enable_dap" = xno || enable_dap=yes
AC_MSG_RESULT($enable_dap)
if test "x$enable_remote_functionality" = xno ; then
AC_MSG_WARN([All network access is disabled => DAP support disabled.])
enable_dap=no
fi
if test "x$enable_dap" = xyes & test "x$found_curl" = xno ; then
AC_MSG_WARN([curl required for dap access. DAP support disabled.])
enable_dap=no
fi
AC_MSG_CHECKING([whether netcdf zarr storage format should be disabled])
AC_ARG_ENABLE([nczarr],
[AS_HELP_STRING([--disable-nczarr],
[disable netcdf zarr storage support])])
test "x$enable_nczarr" = xno || enable_nczarr=yes
AC_MSG_RESULT($enable_nczarr)
# HDF5 | HDF4 | NCZarr => netcdf-4
if test "x$enable_hdf5" = xyes || test "x$enable_hdf4" = xyes || test "x$enable_nczarr" = xyes ; then
enable_netcdf_4=yes
fi
AC_MSG_CHECKING([whether netcdf-4 should be forcibly enabled])
AC_MSG_RESULT([$enable_netcdf_4])
# Synonym
enable_netcdf4=${enable_netcdf_4}
AC_MSG_NOTICE([checking user options])
# Did the user specify a default minimum blocksize (NCIO_MINBLOCKSIZE) for posixio?
AC_MSG_CHECKING([whether a NCIO_MINBLOCKSIZE was specified])
AC_ARG_WITH([minblocksize],
[AS_HELP_STRING([--with-minblocksize=<integer>],
[Specify minimum I/O blocksize for netCDF classic and 64-bit offset format files.])],
[NCIO_MINBLOCKSIZE=$with_minblocksize], [NCIO_MINBLOCKSIZE=256])
AC_MSG_RESULT([$NCIO_MINBLOCKSIZE])
AC_DEFINE_UNQUOTED([NCIO_MINBLOCKSIZE], [$NCIO_MINBLOCKSIZE], [min blocksize for posixio.])
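# Example (8192 is an illustrative value):
#   ./configure --with-minblocksize=8192
# sets NCIO_MINBLOCKSIZE to 8192 bytes for classic/64-bit offset I/O.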
# Find valgrind, if available, and add targets for it.
AX_VALGRIND_DFLT([sgcheck], [off])
AX_VALGRIND_CHECK
AM_CONDITIONAL(ENABLE_VALGRIND, [test "x$VALGRIND_ENABLED" = xyes])
###
# Doxygen and doxygen-related options.
###
AC_ARG_ENABLE([doxygen],
[AS_HELP_STRING([--enable-doxygen],
[Enable generation of documentation.])])
test "x$enable_doxygen" = xyes || enable_doxygen=no
AM_CONDITIONAL([BUILD_DOCS], [test "x$enable_doxygen" = xyes])
AC_ARG_ENABLE([doxygen-tasks],
[AS_HELP_STRING([--enable-doxygen-tasks],
[Enable Doxygen-generated test, todo and bug list documentation. Developers only.])])
test "x$enable_doxygen_tasks" = xyes || enable_doxygen_tasks=no
AM_CONDITIONAL([SHOW_DOXYGEN_TAG_LIST], [test "x$enable_doxygen_tasks" = xyes])
AC_SUBST([SHOW_DOXYGEN_TAG_LIST], [$enable_doxygen_tasks])
###
# Determine if we should build documentation
# configured for releases on the Unidata web server.
###
AC_ARG_ENABLE([doxygen-build-release-docs],
[AS_HELP_STRING([--enable-doxygen-build-release-docs],
[Build release documentation. This is of interest only to developers.])])
test "x$enable_doxygen_build_release_docs" = xyes || enable_doxygen_build_release_docs=no
AM_CONDITIONAL([DOXYGEN_BUILD_RELEASE_DOCS], [test "x$enable_doxygen_build_release_docs" = xyes])
if test $enable_doxygen_build_release_docs = yes; then
AC_SUBST([DOXYGEN_CSS_FILE], ["release.css"])
AC_SUBST([DOXYGEN_HEADER_FILE], ["release_header.html"])
AC_SUBST([DOXYGEN_SEARCHENGINE], ["NO"])
else
AC_SUBST([DOXYGEN_CSS_FILE], [])
AC_SUBST([DOXYGEN_HEADER_FILE], [])
AC_SUBST([DOXYGEN_SEARCHENGINE], ["YES"])
fi
AC_SUBST([DOXYGEN_SERVER_BASED_SEARCH], ["NO"])
AC_ARG_ENABLE([doxygen-pdf-output],
[AS_HELP_STRING([--enable-doxygen-pdf-output],
[Build netCDF library documentation in PDF format. Experimental.])])
AM_CONDITIONAL([NC_ENABLE_DOXYGEN_PDF_OUTPUT], [test "x$enable_doxygen_pdf_output" = xyes])
AC_SUBST([NC_ENABLE_DOXYGEN_PDF_OUTPUT], [$enable_doxygen_pdf_output])
AC_ARG_ENABLE([dot],
[AS_HELP_STRING([--enable-dot],
[Use dot (provided by graphviz) to generate charts and graphs in the doxygen-based documentation.])])
test "x$enable_dot" = xyes || enable_dot=no
AC_ARG_ENABLE([internal-docs],
[AS_HELP_STRING([--enable-internal-docs],
[Include documentation of library internals. This is of interest only to those developing the netCDF library.])])
test "x$enable_internal_docs" = xyes || enable_internal_docs=no
AC_SUBST([BUILD_INTERNAL_DOCS], [$enable_internal_docs])
# Doxygen is apparently buggy when trying to combine a markdown
# file with @internal. The equivalent can be faked using
# the Doxygen ENABLED_SECTIONS mechanism. See docs/testserver.dox
# to see how this is done.
sections=
if test "x$enable_internal_docs" = xyes ; then
sections="$sections INTERNAL"
fi
AC_SUBST([ENABLED_DOC_SECTIONS], [$sections])
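# A sketch of how the INTERNAL section is consumed in a .dox file
# (hypothetical snippet; see docs/testserver.dox for the real usage):
#   @if INTERNAL
#   ...developer-only documentation...
#   @endif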
AC_MSG_CHECKING([if fsync support is enabled])
AC_ARG_ENABLE([fsync],
[AS_HELP_STRING([--enable-fsync],
[enable fsync support])],
[],
[enable_fsync=no])
test "x$enable_fsync" = xno || enable_fsync=yes
AC_MSG_RESULT($enable_fsync)
if test "x$enable_fsync" = xyes ; then
AC_DEFINE([USE_FSYNC], [1], [if true, include experimental fsync code])
fi
# Temporary until JNA bug is fixed (which is probably never).
# The problem being solved is this:
# > On Windows using the microsoft runtime, it is an error
# > for one library to free memory allocated by a different library.
# This is probably only an issue when using the netcdf-c library
# via JNA under Java.
AC_MSG_CHECKING([if jna bug workaround is enabled])
AC_ARG_ENABLE([jna],
[AS_HELP_STRING([--enable-jna],
[enable jna bug workaround])],
[],
[enable_jna=no])
test "x$enable_jna" = xno || enable_jna=yes
AC_MSG_RESULT($enable_jna)
if test "x$enable_jna" = xyes ; then
AC_DEFINE([JNA], [1], [if true, include jna bug workaround code])
fi
# Does the user want to turn off unit tests (useful for test coverage
# analysis).
AC_MSG_CHECKING([if unit tests should be enabled])
AC_ARG_ENABLE([unit-tests],
[AS_HELP_STRING([--disable-unit-tests],
[Disable tests in the unit_test directory. Other tests still run.])])
test "x$enable_unit_tests" = xno || enable_unit_tests=yes
AC_MSG_RESULT($enable_unit_tests)
AM_CONDITIONAL(BUILD_UNIT_TESTS, [test "x$enable_unit_tests" = xyes])
# Does the user require dynamic loading?
# This is only for those hdf5 installs that support it.
AC_MSG_CHECKING([do we require hdf5 dynamic-loading support])
AC_ARG_ENABLE([dynamic-loading], [AS_HELP_STRING([--enable-dynamic-loading],
[enable dynamic loading for use with supported hdf5 installs (libdl, HDF5 required)])])
test "x$enable_dynamic_loading" = xno || enable_dynamic_loading=yes
AC_MSG_RESULT([$enable_dynamic_loading])
# Does the user want to turn on extra HDF4 file tests?
AC_MSG_CHECKING([whether to fetch some sample HDF4 files from Unidata ftp site to test HDF4 reading (requires wget)])
AC_ARG_ENABLE([hdf4-file-tests], [AS_HELP_STRING([--enable-hdf4-file-tests],
[get some HDF4 files from Unidata ftp site and test that they can be read])])
test "x$enable_hdf4" = xyes -a "x$enable_hdf4_file_tests" = xyes || enable_hdf4_file_tests=no
if test "x$enable_hdf4_file_tests" = xyes; then
AC_DEFINE([USE_HDF4_FILE_TESTS], 1, [If true, use wget to fetch some sample HDF4 data, and then test against it.])
fi
AC_MSG_RESULT($enable_hdf4_file_tests)
# Does the user want to run extra parallel tests when parallel netCDF-4 is built?
AC_MSG_CHECKING([whether parallel IO tests should be run])
AC_ARG_ENABLE([parallel-tests],
[AS_HELP_STRING([--enable-parallel-tests],
[Run extra parallel IO tests. Requires netCDF-4
with parallel I/O support.])])
test "x$enable_parallel_tests" = xyes || enable_parallel_tests=no
AC_MSG_RESULT($enable_parallel_tests)
# Did the user specify an MPI launcher other than mpiexec?
AC_MSG_CHECKING([whether a user specified program to run mpi programs])
AC_ARG_WITH([mpiexec],
[AS_HELP_STRING([--with-mpiexec=<command>],
[Specify command to launch MPI parallel tests.])],
[MPIEXEC=$with_mpiexec], [MPIEXEC=mpiexec])
AC_MSG_RESULT([$MPIEXEC])
AC_SUBST([MPIEXEC], [$MPIEXEC])
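# Example (srun is an illustrative alternative launcher):
#   ./configure --enable-parallel-tests --with-mpiexec=srun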
# Did the user specify a default chunk size?
AC_MSG_CHECKING([whether a default chunk size in bytes was specified])
AC_ARG_WITH([default-chunk-size],
[AS_HELP_STRING([--with-default-chunk-size=<integer>],
[Specify default size of chunks in bytes.])],
[DEFAULT_CHUNK_SIZE=$with_default_chunk_size], [DEFAULT_CHUNK_SIZE=4194304])
AC_MSG_RESULT([$DEFAULT_CHUNK_SIZE])
AC_DEFINE_UNQUOTED([DEFAULT_CHUNK_SIZE], [$DEFAULT_CHUNK_SIZE], [default chunk size in bytes])
# Did the user specify a default cache size?
AC_MSG_CHECKING([whether a default cache size was specified])
AC_ARG_WITH([default-chunk-cache-size],
[AS_HELP_STRING([--with-default-chunk-cache-size=<integer>],
[Specify default size (in bytes) for chunk cache.])],
[DEFAULT_CHUNK_CACHE_SIZE=$with_default_chunk_cache_size], [DEFAULT_CHUNK_CACHE_SIZE=16777216U])
AC_MSG_RESULT([$DEFAULT_CHUNK_CACHE_SIZE])
AC_DEFINE_UNQUOTED([DEFAULT_CHUNK_CACHE_SIZE], [$DEFAULT_CHUNK_CACHE_SIZE], [default size of the chunk cache.])
# Did the user specify a max number of chunks in default per-var cache size?
AC_MSG_CHECKING([whether a default number of entries for the chunk cache was specified])
AC_ARG_WITH([default-chunks-in-cache],
[AS_HELP_STRING([--with-default-chunks-in-cache=<integer>],
[Specify the max number of chunks to store in cache.])],
[DEFAULT_CHUNKS_IN_CACHE=$with_default_chunks_in_cache], [DEFAULT_CHUNKS_IN_CACHE=1000])
AC_MSG_RESULT([$DEFAULT_CHUNKS_IN_CACHE])
AC_DEFINE_UNQUOTED([DEFAULT_CHUNKS_IN_CACHE], [$DEFAULT_CHUNKS_IN_CACHE], [default max num chunks in chunk cache.])
# Did the user specify a default cache preemption
AC_MSG_CHECKING([whether a default cache preemption was specified])
AC_ARG_WITH([default-chunk-cache-preemption],
[AS_HELP_STRING([--with-default-chunk-cache-preemption=<float between 0 and 1 inclusive>],
[Specify default file chunk cache preemption policy (a number between 0 and 1, inclusive).])],
[DEFAULT_CHUNK_CACHE_PREEMPTION=$with_default_chunk_cache_preemption], [DEFAULT_CHUNK_CACHE_PREEMPTION=0.75])
AC_MSG_RESULT([$DEFAULT_CHUNK_CACHE_PREEMPTION])
AC_DEFINE_UNQUOTED([DEFAULT_CHUNK_CACHE_PREEMPTION], [$DEFAULT_CHUNK_CACHE_PREEMPTION], [default file chunk cache preemption policy.])
# These three options are redundant with the --with-default... options above.
# Did the user specify a default cache size for HDF5?
AC_MSG_CHECKING([whether a default file cache size for HDF5 was specified])
AC_ARG_WITH([chunk-cache-size],
[AS_HELP_STRING([--with-chunk-cache-size=<integer>],
[Specify default file cache chunk size for HDF5 files in bytes.])],
[CHUNK_CACHE_SIZE=$with_chunk_cache_size], [CHUNK_CACHE_SIZE=DEFAULT_CHUNK_CACHE_SIZE])
AC_MSG_RESULT([$CHUNK_CACHE_SIZE])
AC_DEFINE_UNQUOTED([CHUNK_CACHE_SIZE], [$CHUNK_CACHE_SIZE], [default file chunk cache size in bytes.])
# Did the user specify a default max cache entries for HDF5
AC_MSG_CHECKING([whether a default file cache maximum number of elements for HDF5 was specified])
AC_ARG_WITH([chunk-cache-nelems],
[AS_HELP_STRING([--with-chunk-cache-nelems=<integer>],
[Specify default maximum number of elements in the file chunk cache for HDF5 files (should be a prime number).])],
[CHUNK_CACHE_NELEMS=$with_chunk_cache_nelems], [CHUNK_CACHE_NELEMS=DEFAULT_CHUNKS_IN_CACHE])
AC_MSG_RESULT([$CHUNK_CACHE_NELEMS])
AC_DEFINE_UNQUOTED([CHUNK_CACHE_NELEMS], [$CHUNK_CACHE_NELEMS], [default file chunk cache nelems.])
# Did the user specify a default cache preemption for HDF5?
AC_MSG_CHECKING([whether a default cache preemption for HDF5 was specified])
AC_ARG_WITH([chunk-cache-preemption],
[AS_HELP_STRING([--with-chunk-cache-preemption=<float between 0 and 1 inclusive>],
[Specify default file chunk cache preemption policy for HDF5 files (a number between 0 and 1, inclusive).])],
[CHUNK_CACHE_PREEMPTION=$with_chunk_cache_preemption], [CHUNK_CACHE_PREEMPTION=DEFAULT_CHUNK_CACHE_PREEMPTION])
AC_MSG_RESULT([$CHUNK_CACHE_PREEMPTION])
AC_DEFINE_UNQUOTED([CHUNK_CACHE_PREEMPTION], [$CHUNK_CACHE_PREEMPTION], [default file chunk cache preemption policy.])
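# Example tying the three HDF5 cache knobs together (values illustrative):
#   ./configure --with-chunk-cache-size=67108864 \
#               --with-chunk-cache-nelems=4133 \
#               --with-chunk-cache-preemption=0.75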
# Does the user want to enable netcdf-4 logging?
AC_MSG_CHECKING([whether netCDF-4 logging is enabled])
AC_ARG_ENABLE([logging],
[AS_HELP_STRING([--enable-logging],
[enable logging capability (only applies when netCDF-4 is built). \
This debugging feature is only of interest to netCDF developers. \
Ignored if netCDF-4 is not enabled.])])
test "x$enable_logging" = xyes || enable_logging=no
AC_MSG_RESULT([$enable_logging])
# Does the user want to turn off nc_set_log_level() function? (It will
# always be defined if --enable-logging is used.)
AC_MSG_CHECKING([whether the nc_set_log_level() function is included (it does nothing unless --enable-logging is also used)])
AC_ARG_ENABLE([set_log_level_func], [AS_HELP_STRING([--disable-set-log-level-func],
[disable the nc_set_log_level function])])
test "x$enable_set_log_level_func" = xno -a "x$enable_logging" = xno || enable_set_log_level_func=yes
if test "x$enable_set_log_level_func" = xyes -a "x$enable_netcdf_4" = xyes; then
AC_DEFINE([ENABLE_SET_LOG_LEVEL], 1, [If true, define nc_set_log_level.])
fi
AC_MSG_RESULT($enable_set_log_level_func)
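# Example client-side usage once this is enabled (the level value is
# illustrative; higher values produce more verbose logging):
#   nc_set_log_level(3);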
# CURLOPT_USERNAME is not defined until curl version 7.19.1
# CURLOPT_PASSWORD is not defined until curl version 7.19.1
# CURLOPT_KEYPASSWD is not defined until curl version 7.16.4
# CURLINFO_RESPONSE_CODE is not defined until curl version 7.10.7
# CURLOPT_CHUNK_BGN_FUNCTION is not defined until curl version 7.21.0
# CURL_MAX_READ_SIZE is not defined until 7.59
# Save/restore CFLAGS
SAVECFLAGS="$CFLAGS"
CFLAGS="${curl_cflags}"
AC_COMPILE_IFELSE([AC_LANG_PROGRAM(
[#include "curl/curl.h"],
[[int x = CURLOPT_USERNAME;]])],
[haveusername=yes],
[haveusername=no])
AC_MSG_CHECKING([whether CURLOPT_USERNAME is defined])
AC_MSG_RESULT([${haveusername}])
if test $haveusername = yes; then
AC_DEFINE([HAVE_CURLOPT_USERNAME],[1],[Is CURLOPT_USERNAME defined])
fi
AC_COMPILE_IFELSE([AC_LANG_PROGRAM(
[#include "curl/curl.h"],
[[int x = CURLOPT_PASSWORD;]])],
[havepassword=yes],
[havepassword=no])
AC_MSG_CHECKING([whether CURLOPT_PASSWORD is defined])
AC_MSG_RESULT([${havepassword}])
if test $havepassword = yes; then
AC_DEFINE([HAVE_CURLOPT_PASSWORD],[1],[Is CURLOPT_PASSWORD defined])
fi
AC_COMPILE_IFELSE([AC_LANG_PROGRAM(
[#include "curl/curl.h"],
[[int x = CURLOPT_KEYPASSWD;]])],
[havekeypassword=yes],
[havekeypassword=no])
AC_MSG_CHECKING([whether CURLOPT_KEYPASSWD is defined])
AC_MSG_RESULT([${havekeypassword}])
if test $havekeypassword = yes; then
AC_DEFINE([HAVE_CURLOPT_KEYPASSWD],[1],[Is CURLOPT_KEYPASSWD defined])
fi
AC_COMPILE_IFELSE([AC_LANG_PROGRAM(
[#include "curl/curl.h"],
[[int x = CURLINFO_RESPONSE_CODE;]])],
[haveresponsecode=yes],
[haveresponsecode=no])
AC_MSG_CHECKING([whether CURLINFO_RESPONSE_CODE is defined])
AC_MSG_RESULT([${haveresponsecode}])
if test $haveresponsecode = yes; then
AC_DEFINE([HAVE_CURLINFO_RESPONSE_CODE],[1],[Is CURLINFO_RESPONSE_CODE defined])
fi
AC_COMPILE_IFELSE([AC_LANG_PROGRAM(
[#include "curl/curl.h"],
[[int x = CURLOPT_BUFFERSIZE;]])],
[havecurloption=yes],
[havecurloption=no])
AC_MSG_CHECKING([whether CURLOPT_BUFFERSIZE is defined])
AC_MSG_RESULT([${havecurloption}])
if test $havecurloption = yes; then
AC_DEFINE([HAVE_CURLOPT_BUFFERSIZE],[1],[Is CURLOPT_BUFFERSIZE defined])
fi
AC_COMPILE_IFELSE([AC_LANG_PROGRAM(
[#include "curl/curl.h"],
[[int x = CURLOPT_TCP_KEEPALIVE;]])],
[havecurloption=yes],
[havecurloption=no])
AC_MSG_CHECKING([whether CURLOPT_TCP_KEEPALIVE is defined])
AC_MSG_RESULT([${havecurloption}])
if test $havecurloption = yes; then
AC_DEFINE([HAVE_CURLOPT_KEEPALIVE],[1],[Is CURLOPT_TCP_KEEPALIVE defined])
fi
# CURLOPT_VERIFYHOST semantics differ depending on version
AC_MSG_CHECKING([whether libcurl is version 7.66 or later])
AC_COMPILE_IFELSE([AC_LANG_PROGRAM(
[#include "curl/curl.h"],
[[
#if !CURL_AT_LEAST_VERSION(7,66,0)
error "<7.66";
#endif
]])], [libcurl766=yes], [libcurl766=no])
AC_MSG_RESULT([$libcurl766])
if test x$libcurl766 = xyes; then
AC_DEFINE([HAVE_LIBCURL_766],[1],[libcurl version is 7.66 or later])
fi
CFLAGS="$SAVECFLAGS"
###
# Libxml2 control block.
###
AC_MSG_CHECKING([whether to search for and use external libxml2])
AC_ARG_ENABLE([libxml2],
[AS_HELP_STRING([--disable-libxml2],
[disable detection and use of libxml2 in favor of the bundled xml parser])])
test "x$enable_libxml2" = xno || enable_libxml2=yes
AC_MSG_RESULT([$enable_libxml2])
# We can optionally use libxml2 for DAP4 and nch5comms, if enabled
have_libxml2=no
if test "x$enable_libxml2" = xyes; then
AC_CHECK_PROGS([NC_XML2_CONFIG], [xml2-config])
if test -z "$NC_XML2_CONFIG"; then
AC_MSG_ERROR([Cannot find xml2-config utility. Either install the libxml2 development package, or re-run configure with --disable-libxml2 to use the bundled xml2 parser])
fi
AC_CHECK_LIB([xml2],[xmlReadMemory],[have_libxml2=yes],[have_libxml2=no])
if test "x$have_libxml2" = "xyes" ; then
AC_SEARCH_LIBS([xmlReadMemory],[xml2 xml2.dll cygxml2.dll], [],[])
fi
if test "x$have_libxml2" = xyes; then
XML2FLAGS=`$NC_XML2_CONFIG --cflags`
AC_SUBST([XML2FLAGS],${XML2FLAGS})
AC_DEFINE([HAVE_LIBXML2], [1], [if true, use libxml2])
fi
fi
if test "x$enable_libxml2" = xyes; then
XMLPARSER="libxml2"
else
XMLPARSER="tinyxml2 (bundled)"
fi
# Need a condition and subst for this
AM_CONDITIONAL(ENABLE_LIBXML2, [test "x$enable_libxml2" = xyes])
AC_SUBST([XMLPARSER],[${XMLPARSER}])
###
# End Libxml2 block
###
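# Example: force the bundled parser even when libxml2 is installed
# (illustrative invocation):
#   ./configure --disable-libxml2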
##
# Should quantize be enabled?
# On by default, but for testing purposes we may want to
# disable it.
##
AC_MSG_CHECKING([whether to enable quantize functionality])
AC_ARG_ENABLE([quantize],
[AS_HELP_STRING([--disable-quantize],
[disable quantize support. It is safe to leave this on unless you specifically need to disable it.])])
test "x$enable_quantize" = xno || enable_quantize=yes
AC_MSG_RESULT($enable_quantize)
if test "x${enable_quantize}" = xyes; then
AC_DEFINE([ENABLE_QUANTIZE], [1], [if true, enable quantize support])
fi
AM_CONDITIONAL(ENABLE_QUANTIZE, [test x$enable_quantize = xyes])
# --enable-dap => enable-dap4
enable_dap4=$enable_dap
AC_MSG_CHECKING([whether DAP use of the remotetest server should be enabled])
AC_ARG_ENABLE([dap-remote-tests],
[AS_HELP_STRING([--disable-dap-remote-tests],
[disable dap remote tests])])
# Default on
test "x$enable_dap_remote_tests" = xno || enable_dap_remote_tests=yes
if test "x$enable_dap" = "xno" ; then
enable_dap_remote_tests=no
fi
AC_MSG_RESULT($enable_dap_remote_tests)
AC_MSG_CHECKING([whether use of external (non-Unidata) servers should be enabled])
AC_ARG_ENABLE([external-server-tests],
[AS_HELP_STRING([--enable-external-server-tests],
  [enable external server tests (default off)])])
test "x$enable_external_server_tests" = xyes || enable_external_server_tests=no
AC_MSG_RESULT($enable_external_server_tests)
if test "x$enable_dap_remote_tests" = "xno" ; then
AC_MSG_NOTICE([--disable-dap-remote-tests => --disable-external-server-tests])
enable_external_server_tests=no
fi
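
# Illustrative only: the external-server tests default to "off" and are
# forced off again whenever the remote tests are disabled, so enabling
# them normally needs just
#   ./configure --enable-external-server-tests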
# By default, do not run the remote authorization tests.
AC_MSG_CHECKING([whether dap authorization testing should be enabled (default off)])
AC_ARG_ENABLE([dap-auth-tests],
[AS_HELP_STRING([--enable-dap-auth-tests],
[enable dap remote authorization tests])])
test "x$enable_dap_auth_tests" = xyes || enable_dap_auth_tests=no
AC_MSG_RESULT($enable_dap_auth_tests)
# DAP must be enabled in order to use the remote, authorization, and external-server test flags
if test "x$enable_dap" = "xno" ; then
AC_MSG_NOTICE([--disable-dap => --disable-dap-remote-tests --disable-dap-auth-tests --disable-external-server-tests])
enable_dap_remote_tests=no
enable_dap_auth_tests=no
enable_external_server_tests=no
fi
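
# Illustrative only: given the cascade above, a single
#   ./configure --disable-dap
# turns off the remote, authorization, and external-server tests as well,
# so none of those flags need to be passed individually.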
# Did the user specify a list of test servers to try for remote tests?
AC_MSG_CHECKING([which remote test server(s) to use])
AC_ARG_WITH([testservers],
[AS_HELP_STRING([--with-testservers=<host:port>,<host:port>...],
[Specify the testserver(s) to try for remote tests.])],
[REMOTETESTSERVERS=$with_testservers], [REMOTETESTSERVERS=no])
if test "x$REMOTETESTSERVERS" = xno ; then
dfaltsvc="remotetest.unidata.ucar.edu"
REMOTETESTSERVERS="${dfaltsvc}"
fi
msg="${REMOTETESTSERVERS}"
if test "x$dfaltsvc" != x ; then
msg="${msg} (default)"
fi
AC_MSG_RESULT([${msg}])
AC_DEFINE_UNQUOTED([REMOTETESTSERVERS], ["$REMOTETESTSERVERS"], [the testservers for remote tests.])
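
# Illustrative only (hostnames are hypothetical): an explicit server list
# such as
#   ./configure --with-testservers=myhost.example.com:8080,backup.example.com:8081
# replaces the default and is recorded in config.h as
#   #define REMOTETESTSERVERS "myhost.example.com:8080,backup.example.com:8081"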
# Set the config.h flags
if test "x$enable_dap" = xyes; then
AC_DEFINE([USE_DAP], [1], [if true, build DAP Client])
AC_DEFINE([ENABLE_DAP], [1], [if true, build DAP Client])
fi
if test "x$enable_dap_remote_tests" = xyes; then
AC_DEFINE([ENABLE_DAP_REMOTE_TESTS], [1], [if true, do remote tests])
fi
if test "x$enable_external_server_tests" = xyes; then
AC_DEFINE([ENABLE_EXTERNAL_SERVER_TESTS], [1], [if true, do remote external tests])
fi
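
# Illustrative only: C test sources can then guard network-dependent code
# on the generated config.h symbols, for example
#   #include "config.h"
#   #ifdef ENABLE_DAP_REMOTE_TESTS
#     /* code that contacts a remote test server */
#   #endif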
AC_MSG_CHECKING([whether the time-consuming dap tests should be enabled (default off)])
AC_ARG_ENABLE([dap-long-tests],
[AS_HELP_STRING([--enable-dap-long-tests],
[enable dap long tests])])
test "x$enable_dap_long_tests" = xyes || enable_dap_long_tests=no
AC_MSG_RESULT([$enable_dap_long_tests])
if test "x$enable_dap_remote_tests" = "xno" || test "x$enable_external_server_tests" = "xno" ; then
AC_MSG_NOTICE([--disable-dap-remote-tests or --disable-external-server-tests => --disable-dap-long-tests])
enable_dap_long_tests=no
fi
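
# Illustrative only: the long tests therefore need the whole chain enabled,
# roughly
#   ./configure --enable-external-server-tests --enable-dap-long-tests
# (the remote tests are already on by default).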
# Control zarr storage
if test "x$enable_nczarr" = xyes ; then
if test "x$enable_netcdf_4" = xno ; then
AC_MSG_WARN([netCDF-4 is disabled, so nczarr cannot be enabled; disabling nczarr])
enable_nczarr=no
fi
fi
if test "x$enable_nczarr" = xyes; then
AC_DEFINE([ENABLE_NCZARR], [1], [if true, build NCZarr Client])
AC_SUBST(ENABLE_NCZARR)
fi
AM_CONDITIONAL(ENABLE_NCZARR, [test x$enable_nczarr = xyes])
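
# For reference, a sketch (assumed consumers, not part of this script) of how
# the two knobs above are typically used: config.h gains
#   #define ENABLE_NCZARR 1
# and a Makefile.am can guard the NCZarr pieces with the automake conditional:
#   if ENABLE_NCZARR
#   SUBDIRS += libnczarr
#   endif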
##########
# Look for Standardized libraries
##########
##
# blosc checks
##
# See if we want to enable blosc, and if so, search for the library.
AC_MSG_CHECKING([whether to search for and enable blosc filter support])
AC_ARG_ENABLE([filter-blosc],
[AS_HELP_STRING([--disable-filter-blosc],
[disable blosc filter support.])])
test "x$enable_filter_blosc" = xno || enable_filter_blosc=yes
AC_MSG_RESULT($enable_filter_blosc)
if test "x$enable_filter_blosc" = "xyes" ; then
# See if we have libblosc
AC_CHECK_LIB([blosc],[blosc_init],[have_blosc=yes],[have_blosc=no])
if test "x$have_blosc" = "xyes" ; then
AC_SEARCH_LIBS([blosc_init],[blosc blosc.dll cygblosc.dll], [], [])
AC_DEFINE([HAVE_BLOSC], [1], [if true, blosc library is available])
fi
else
have_blosc=no
fi
##
# End blosc checks
##
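
# The enable/probe/define pattern above repeats for each standard filter in
# this section. A condensed sketch of the shared shape, using a hypothetical
# helper macro (illustrative only; not defined or invoked by this script):
#   AC_DEFUN([NC_CHECK_FILTER],
#     [AC_CHECK_LIB([$1],[$2],[have_$1=yes],[have_$1=no])
#      AS_IF([test "x$have_$1" = xyes],
#        [AC_SEARCH_LIBS([$2],[$1 $1.dll cyg$1.dll],[],[])
#         AC_DEFINE(AS_TR_CPP([HAVE_$1]),[1],[if true, lib$1 is available])])])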
##
# libzstd checks
##
# See if we want to enable libzstd, and if so, search for the library.
AC_MSG_CHECKING([whether to search for and enable libzstd filter support])
AC_ARG_ENABLE([filter-zstd],
[AS_HELP_STRING([--disable-filter-zstd],
[disable zstd filter support.])])
test "x$enable_filter_zstd" = xno || enable_filter_zstd=yes
AC_MSG_RESULT($enable_filter_zstd)
if test "x$enable_filter_zstd" = "xyes" ; then
# See if we have libzstd
AC_CHECK_LIB([zstd],[ZSTD_compress],[have_zstd=yes],[have_zstd=no])
if test "x$have_zstd" = "xyes" ; then
AC_SEARCH_LIBS([ZSTD_compress],[zstd zstd.dll cygzstd.dll], [], [])
AC_DEFINE([HAVE_ZSTD], [1], [if true, zstd library is available])
fi
AC_MSG_CHECKING([whether libzstd library is available])
AC_MSG_RESULT([${have_zstd}])
##
# Ensure that the zstd.h dev files are also available.
##
if test "x$have_zstd" = "xyes" ; then
AC_CHECK_HEADERS([zstd.h], [], [nc_zstd_h_missing=yes])
if test "x$nc_zstd_h_missing" = xyes; then
AC_MSG_WARN([zstd library detected, but zstd.h development file not found. Ensure that the zstd development files are installed in order to build zstd support.])
AC_DEFINE([HAVE_ZSTD], [0], [if true, zstd library is available])
have_zstd=no
fi
fi
else
have_zstd=no
fi
##
# End zstd checks
##
##
# Begin bz2 checks
##
# See if we want to enable BZ2, and if so, search for the library.
AC_MSG_CHECKING([whether to search for and enable external bz2 filter support])
AC_ARG_ENABLE([filter-bz2],
[AS_HELP_STRING([--disable-filter-bz2],
[disable external bz2 filter support. bz2 support defaults to internal implementation if this is disabled or if the external library is not found.])])
test "x$enable_filter_bz2" = xno || enable_filter_bz2=yes
AC_MSG_RESULT([$enable_filter_bz2])
if test "x$enable_filter_bz2" = "xyes" ; then
AC_CHECK_LIB([bz2],[BZ2_bzCompress],[have_bz2=yes],[have_bz2=no])
if test "x$have_bz2" = "xyes" ; then
AC_SEARCH_LIBS([BZ2_bzCompress],[bz2 bz2.dll cygbz2.dll], [], [])
AC_DEFINE([HAVE_BZ2], [1], [if true, bz2 library is installed])
fi
AC_MSG_CHECKING([whether libbz2 library is available])
AC_MSG_RESULT([${have_bz2}])
else
   have_bz2=no
fi
# Some platforms name this library bzip2 rather than bz2; if the external
# filter is enabled but libbz2 was not found, retry under the alternate name.
if test "x$enable_filter_bz2" = xyes && test "x$have_bz2" = xno ; then
AC_CHECK_LIB([bzip2],[BZ2_bzCompress],[have_bz2=yes],[have_bz2=no])
if test "x$have_bz2" = "xyes" ; then
AC_SEARCH_LIBS([BZ2_bzCompress],[bzip2 bzip2.dll cygbzip2.dll], [], [])
AC_DEFINE([HAVE_BZ2], [1], [if true, bzip2 library is installed])
fi
AC_MSG_CHECKING([whether libbzip2 library is available])
AC_MSG_RESULT([${have_bz2}])
fi
if test "x$have_bz2" = "xno" ; then
have_local_bz2=yes
AC_MSG_NOTICE([Defaulting to internal libbz2])
else
have_local_bz2=no
fi
AM_CONDITIONAL(HAVE_LOCAL_BZ2, [test "x$have_local_bz2" = xyes])
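
# A sketch (assumed consumer, not part of this script) of how the conditional
# above can select the bundled bzip2 sources, e.g. in a plugin Makefile.am
# (target name hypothetical):
#   if HAVE_LOCAL_BZ2
#   noinst_LTLIBRARIES += libbz2local.la
#   endif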
##
# End bz2 checks
##
# Note that szip management is tricky.
# This is because we have three things to consider:
# 1. is libsz available?
# 2. is szip enabled in HDF5?
# 3. is nczarr enabled?
# We need separate flags for cases 1 and 2
# See if we have libsz (usually via libaec)
AC_CHECK_LIB([sz],[SZ_BufftoBuffCompress],[have_sz=yes],[have_sz=no])
if test "x$have_sz" = "xyes" ; then
AC_SEARCH_LIBS([SZ_BufftoBuffCompress],[sz sz.dll cygsz.dll], [], [])
AC_DEFINE([HAVE_SZ], [1], [if true, libsz (==szip) is available])
fi
AC_MSG_CHECKING([whether libsz library is available])
AC_MSG_RESULT([${have_sz}])
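
# A sketch of the intent behind the separate flags (assumed downstream logic,
# not code from this script): case 1 is decided here; case 2 comes from HDF5.
#   if test "x$have_sz" = xyes ; then
#     std_filters="$std_filters szip"   # case 1: libsz itself is usable
#   fi
#   # case 2 (HDF5 built with szip) is probed separately, e.g. via
#   # H5Zfilter_avail(H5Z_FILTER_SZIP) at run time.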
##########
##
# Check to see if we're using NCZarr. If not, we don't need to check for dependencies and such.
##
if test "x$enable_nczarr" = xno ; then
enable_nczarr_zip=no
else
# See if we have libzip for NCZarr
AC_SEARCH_LIBS([zip_open],[zip zip.dll cygzip.dll],[have_zip=yes],[have_zip=no])
AC_MSG_CHECKING([whether libzip library is available])
AC_MSG_RESULT([${have_zip}])
enable_nczarr_zip=${have_zip} # alias
AC_MSG_CHECKING([whether nczarr zip support is enabled])
AC_MSG_RESULT([${enable_nczarr_zip}])
if test "x$enable_nczarr_zip" = xyes ; then
AC_DEFINE([ENABLE_NCZARR_ZIP], [1], [If true, then libzip found])
fi
# Check for enabling of S3 support
AC_MSG_CHECKING([whether netcdf S3 support should be enabled])
AC_ARG_ENABLE([s3],
[AS_HELP_STRING([--enable-s3],
[enable netcdf S3 support])])
test "x$enable_s3" = xyes || enable_s3=no
AC_MSG_RESULT($enable_s3)
if test "x$enable_remote_functionality" = xno ; then
AC_MSG_WARN([--disable-remote-functionality => --disable-s3])
enable_s3=no
fi
# --enable-nczarr-s3 is a deprecated synonym for --enable-s3.
AC_MSG_CHECKING([whether netcdf NCZarr S3 support should be enabled])
AC_ARG_ENABLE([nczarr-s3],
[AS_HELP_STRING([--enable-nczarr-s3],
[(Deprecated) enable netcdf NCZarr S3 support; Deprecated in favor of --enable-s3])])
AC_MSG_RESULT([$enable_nczarr_s3 (deprecated; please use --enable-s3)])
# Set enable_s3 instead of enable_nczarr_s3
if test "x$enable_s3" = xno && test "x$enable_nczarr_s3" = xyes && test "x$enable_remote_functionality" = xyes; then
enable_s3=yes # back compatibility
fi
unset enable_nczarr_s3
# Note: we check for the SDK library only after checking enable_s3, because
# for some reason the check fails if we test for the SDK unconditionally
# when it is not available. Fix someday.
S3LIBS=""
if test "x$enable_s3" = xyes ; then
# See if we have the s3 aws library
# Check for the AWS S3 SDK library
AC_LANG_PUSH([C++])
AC_CHECK_LIB([aws-c-common], [aws_string_destroy], [enable_s3_aws=yes],[enable_s3_aws=no])
if test "x$enable_s3_aws" = "xyes" ; then
S3LIBS="-laws-cpp-sdk-core -laws-cpp-sdk-s3"
fi
AC_LANG_POP
else
enable_s3_aws=no
fi
AC_MSG_CHECKING([whether AWS S3 SDK library is available])
AC_MSG_RESULT([$enable_s3_aws])
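
# For illustration (an assumption about the eventual link line, not output of
# this script): when enable_s3_aws=yes, S3LIBS is appended to LIBS below, so
# the final link resembles
#   cc ... -laws-cpp-sdk-core -laws-cpp-sdk-s3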
# Check for enabling forced use of Internal S3 library
AC_MSG_CHECKING([whether internal S3 support should be used])
AC_ARG_ENABLE([s3-internal],
[AS_HELP_STRING([--enable-s3-internal],
[enable internal S3 support])])
test "x$enable_s3_internal" = xyes || enable_s3_internal=no
AC_MSG_RESULT($enable_s3_internal)
if test "x$enable_s3_aws" = xno && test "x$enable_s3_internal" = xno ; then
AC_MSG_WARN([No S3 library available => S3 support disabled])
enable_s3=no
fi
if test "x$enable_s3_aws" = xyes && test "x$enable_s3_internal" = xyes ; then
AC_MSG_WARN([Both aws-sdk-cpp and s3-internal enabled => use s3-internal.])
enable_s3_aws=no
fi
if test "x$enable_s3_internal" = xyes ; then
if test "x$ISOSX" != xyes && test "x$ISMINGW" != xyes && test "x$ISMSVC" != xyes ; then
# Find crypto libraries if using ssl
AC_CHECK_LIB([ssl],[ssl_create_cipher_list])
AC_CHECK_LIB([crypto],[SHA256])
fi
fi
# Check for enabling S3 testing
AC_MSG_CHECKING([what level of netcdf S3 testing should be enabled])
AC_ARG_WITH([s3-testing],
[AS_HELP_STRING([--with-s3-testing=yes|no|public],
[control netcdf S3 testing])],
[], [with_s3_testing=public])
AC_MSG_RESULT($with_s3_testing)
# Disable S3 tests if S3 support is disabled
if test "x$enable_s3" = xno ; then
if test "x$with_s3_testing" != xno ; then
AC_MSG_WARN([S3 support is disabled => no testing])
with_s3_testing=no
fi
fi
if test "x$enable_s3" = xyes ; then
AC_DEFINE([ENABLE_S3], [1], [if true, build netcdf-c with S3 support enabled])
fi
if test "x$enable_s3_aws" = xyes ; then
LIBS="$LIBS$S3LIBS"
AC_DEFINE([ENABLE_S3_AWS], [1], [If true, then use aws S3 library])
fi
if test "x$enable_s3_internal" = xyes ; then
AC_DEFINE([ENABLE_S3_INTERNAL], [1], [If true, then use internal S3 library])
fi
AC_DEFINE_UNQUOTED([WITH_S3_TESTING], [$with_s3_testing], [control S3 testing.])
if test "x$with_s3_testing" = xyes ; then
AC_MSG_WARN([*** DO NOT SPECIFY WITH_S3_TESTING=YES UNLESS YOU HAVE ACCESS TO THE UNIDATA S3 BUCKET! ***])
AC_DEFINE([ENABLE_S3_TESTALL], [yes], [control S3 testing.])
fi
fi
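
# Illustrative usage (not part of the build logic; flags shown here are
# examples only): a typical invocation that enables S3 support and
# restricts testing to publicly accessible data might be
#   ./configure --enable-s3 --with-s3-testing=public
# whereas --with-s3-testing=yes also exercises the Unidata S3 bucket and
# therefore requires access credentials for it.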
# Check whether we want to enable strict null byte header padding.
# See https://github.com/Unidata/netcdf-c/issues/657 for more information.
AC_MSG_CHECKING([whether to enable strict null-byte header padding when reading (default off)])
AC_ARG_ENABLE([strict-null-byte-header-padding],
[AS_HELP_STRING([--enable-strict-null-byte-header-padding],
[enable strict null-byte header padding when reading netCDF3 files.])])
test "x$enable_strict_null_byte_header_padding" = xyes || enable_strict_null_byte_header_padding=no
AC_MSG_RESULT($enable_strict_null_byte_header_padding)
if test "x$enable_strict_null_byte_header_padding" = xyes; then
AC_DEFINE([USE_STRICT_NULL_BYTE_HEADER_PADDING], [1], [if true, enable strict null byte header padding])
fi
AM_CONDITIONAL(USE_STRICT_NULL_BYTE_HEADER_PADDING, [test x$enable_strict_null_byte_header_padding = xyes ])
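
# Illustrative usage (example only): to make the netCDF-3 reader reject
# headers whose padding bytes are not null, configure with
#   ./configure --enable-strict-null-byte-header-padding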
# Does the user want to use the ffio module?
AC_MSG_CHECKING([whether FFIO will be used])
AC_ARG_ENABLE([ffio],
[AS_HELP_STRING([--enable-ffio],
[use ffio instead of posixio (ex. on the Cray)])])
test "x$enable_ffio" = xyes || enable_ffio=no
AC_MSG_RESULT($enable_ffio)
if test "x$enable_ffio" = xyes; then
AC_DEFINE([USE_FFIO], [1], [if true, use ffio instead of posixio])
fi
AM_CONDITIONAL(USE_FFIO, [test x$enable_ffio = xyes])
# Does the user want to use the stdio module?
AC_MSG_CHECKING([whether STDIO will be used])
AC_ARG_ENABLE([stdio],
[AS_HELP_STRING([--enable-stdio],
[use stdio instead of posixio (ex. on the Cray)])])
test "x$enable_stdio" = xyes || enable_stdio=no
AC_MSG_RESULT($enable_stdio)
if test "x$enable_stdio" = xyes; then
AC_DEFINE([USE_STDIO], [1], [if true, use stdio instead of posixio])
fi
AM_CONDITIONAL(USE_STDIO, [test x$enable_stdio = xyes])
nc_build_c=yes
nc_build_v2=yes
nc_build_utilities=yes
nc_build_tests=yes
nc_build_examples=yes
# Does the user want to build examples?
AC_MSG_CHECKING([whether examples should be built])
AC_ARG_ENABLE([examples],
[AS_HELP_STRING([--disable-examples],
[don't build the netCDF examples during make check \
(examples are treated as extra tests by netCDF)])])
test "x$enable_examples" = xno && nc_build_examples=no
AC_MSG_RESULT($nc_build_examples)
AM_CONDITIONAL(BUILD_EXAMPLES, [test x$nc_build_examples = xyes])
# Does the user want to disable the V2 API?
AC_MSG_CHECKING([whether v2 netCDF API should be built])
AC_ARG_ENABLE([v2],
[AS_HELP_STRING([--disable-v2],
[turn off the netCDF version 2 API])])
test "x$enable_v2" = xno && nc_build_v2=no
AC_MSG_RESULT($nc_build_v2)
AM_CONDITIONAL(BUILD_V2, [test x$nc_build_v2 = xyes])
if test "x$nc_build_v2" = xno; then
AC_DEFINE_UNQUOTED(NO_NETCDF_2, 1, [do not build the netCDF version 2 API])
else
AC_DEFINE_UNQUOTED(USE_NETCDF_2, 1, [build the netCDF version 2 API])
fi
# Does the user want to disable ncgen/ncdump/nccopy/...?
AC_MSG_CHECKING([whether ncgen/ncdump/nccopy should be built])
AC_ARG_ENABLE([utilities],
[AS_HELP_STRING([--disable-utilities],
[don't build netCDF utilities ncgen, ncdump, and nccopy])])
test "x$nc_build_c" = xno && enable_utilities=no
test "x$enable_utilities" = xno && nc_build_utilities=no
AC_MSG_RESULT($nc_build_utilities)
AM_CONDITIONAL(BUILD_UTILITIES, [test x$nc_build_utilities = xyes])
# Does the user want to disable all tests?
AC_MSG_CHECKING([whether tests should be built and run])
AC_ARG_ENABLE([testsets],
[AS_HELP_STRING([--disable-testsets],
[don't build or run netCDF tests])])
test "x$enable_testsets" = xno || enable_testsets=yes
nc_build_tests=$enable_testsets
AC_MSG_RESULT($nc_build_tests)
AM_CONDITIONAL(BUILD_TESTSETS, [test x$nc_build_tests = xyes])
# Does the user want to run tests for large files (> 2GiB)?
AC_MSG_CHECKING([whether large file (> 2GB) tests should be run])
AC_ARG_ENABLE([large-file-tests],
[AS_HELP_STRING([--enable-large-file-tests],
[Run tests which create very large data files (~13 GB disk space
required, but it will be recovered when tests are complete). See
option --with-temp-large to specify temporary directory])])
test "x$enable_large_file_tests" = xyes || enable_large_file_tests=no
AC_MSG_RESULT($enable_large_file_tests)
AM_CONDITIONAL(LARGE_FILE_TESTS, [test x$enable_large_file_tests = xyes])
if test "x$enable_large_file_tests" = xyes; then
AC_DEFINE([LARGE_FILE_TESTS], [1], [do large file tests])
fi
# Does the user want to run benchmarks?
AC_MSG_CHECKING([whether benchmarks should be run])
AC_ARG_ENABLE([benchmarks],
[AS_HELP_STRING([--enable-benchmarks],
[Run benchmarks. This will cause sample data files from the Unidata ftp
site to be fetched. The benchmarks are a bunch of extra tests, which
are timed. We use these tests to check netCDF performance.])])
test "x$enable_benchmarks" = xyes || enable_benchmarks=no
AC_MSG_RESULT($enable_benchmarks)
if test "x$enable_HDF5" = xno -a "x$enable_benchmarks" = xyes; then
AC_MSG_ERROR([Can't use benchmarks if HDF5 is disabled.])
fi
AM_CONDITIONAL(BUILD_BENCHMARKS, [test x$enable_benchmarks = xyes])
# Does the user want to use extreme numbers in testing.
AC_MSG_CHECKING([whether extreme numbers should be used in tests])
AC_ARG_ENABLE([extreme-numbers],
[AS_HELP_STRING([--disable-extreme-numbers],
[don't use extreme numbers during testing, such as MAX_INT - 1])])
case "$host_cpu $host_os" in
*386*solaris*)
test "x$enable_extreme_numbers" = xyes || enable_extreme_numbers=no
;;
*)
test "x$enable_extreme_numbers" = xno || enable_extreme_numbers=yes
;;
esac
AC_MSG_RESULT($enable_extreme_numbers)
if test "x$enable_extreme_numbers" = xyes; then
AC_DEFINE(USE_EXTREME_NUMBERS, 1, [set this to use extreme numbers in tests])
fi
# If the env. variable TEMP_LARGE is set, or if
# --with-temp-large=<directory>, use it as a place for the large
# (i.e. > 2 GiB) files created during the large file testing.
AC_MSG_CHECKING([where to put large temp files if large file tests are run])
AC_ARG_WITH([temp-large],
[AS_HELP_STRING([--with-temp-large=<directory>],
[specify directory where large files (i.e. >2 GB) \
will be written, if large file tests are run with
--enable-large-file-tests])],
[TEMP_LARGE=$with_temp_large])
TEMP_LARGE=${TEMP_LARGE-.}
AC_MSG_RESULT($TEMP_LARGE)
#AC_SUBST(TEMP_LARGE)
AC_DEFINE_UNQUOTED([TEMP_LARGE], ["$TEMP_LARGE"], [Place to put very large netCDF test files.])
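
# Illustrative usage (the directory below is hypothetical): the large-file
# tests can be pointed at a roomy scratch area either via the option or
# via the environment variable, e.g.
#   ./configure --enable-large-file-tests --with-temp-large=/scratch/nctmp
#   TEMP_LARGE=/scratch/nctmp ./configure --enable-large-file-tests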
# Specify extra values to add to the _NCProperties attribute:
# --with-ncproperties-extra="<name>=<value>,...".
# Note: a way to set these pairs programmatically is still needed.
AC_MSG_CHECKING([Extra values for _NCProperties])
AC_ARG_WITH([ncproperties-extra],
[AS_HELP_STRING([--with-ncproperties-extra="<name>=<value>,..."],
[specify extra pairs for _NCProperties])],
[NCPROPERTIES_EXTRA=$with_ncproperties_extra],
[NCPROPERTIES_EXTRA=""])
AC_MSG_RESULT([$NCPROPERTIES_EXTRA])
AC_DEFINE_UNQUOTED([NCPROPERTIES_EXTRA], ["$NCPROPERTIES_EXTRA"], [Extra pairs for _NCProperties])
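
# Illustrative usage (the pair names below are hypothetical): extra pairs
# are appended to the version=2 _NCProperties attribute of every file
# created by this build, so
#   ./configure --with-ncproperties-extra="builder=myorg,toolchain=gcc-12"
# would produce an attribute of the form
#   _NCProperties = "version=2,netcdf=<version>,builder=myorg,toolchain=gcc-12"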
# Did the user specify a user-defined format 0?
AC_MSG_CHECKING([whether user-defined format 0 was specified])
AC_ARG_WITH([udf0],
[AS_HELP_STRING([--with-udf0=<dispatch_name>],
[Specify a dispatch table for user-defined format 0.])],
[UDF0_DISPATCH=$with_udf0])
AC_MSG_RESULT([$UDF0_DISPATCH])
if test -n "$UDF0_DISPATCH"; then
AC_DEFINE_UNQUOTED([UDF0_DISPATCH], [$UDF0_DISPATCH], [dispatch table for user-defined format 0.])
AC_DEFINE_UNQUOTED([UDF0_DISPATCH_FUNC], [get_$UDF0_DISPATCH()], [function to get dispatch table for user-defined format 0.])
AC_DEFINE([USE_UDF0], [1], [if true, use user-defined format 0 in utilities])
AC_CHECK_LIB([$UDF0_DISPATCH], [get_$UDF0_DISPATCH], [],
[AC_MSG_ERROR([Can't find or link to the user-defined format 0 library.])],
[])
fi
# Did the user specify a magic number for user-defined format 0?
AC_MSG_CHECKING([whether a magic number for user-defined format 0 was specified])
AC_ARG_WITH([udf0-magic-number],
[AS_HELP_STRING([--with-udf0-magic-number=<magic_number>],
[Specify a magic number for user-defined format 0 (ignored unless --with-udf0 is also used).])],
[UDF0_MAGIC_NUMBER=$with_udf0_magic_number])
AC_MSG_RESULT([$UDF0_MAGIC_NUMBER])
# Did the user specify a user-defined format 1?
AC_MSG_CHECKING([whether user-defined format 1 was specified])
AC_ARG_WITH([udf1],
[AS_HELP_STRING([--with-udf1=<dispatch_name>],
[Specify a dispatch table for user-defined format 1.])],
[UDF1_DISPATCH=$with_udf1])
AC_MSG_RESULT([$UDF1_DISPATCH])
if test -n "$UDF1_DISPATCH"; then
AC_DEFINE_UNQUOTED([UDF1_DISPATCH], [$UDF1_DISPATCH], [dispatch table for user-defined format 1.])
AC_DEFINE_UNQUOTED([UDF1_DISPATCH_FUNC], [get_$UDF1_DISPATCH()], [function to get dispatch table for user-defined format 1.])
AC_DEFINE([USE_UDF1], [1], [if true, use user-defined format 1 in utilities])
AC_CHECK_LIB([$UDF1_DISPATCH], [get_$UDF1_DISPATCH], [],
[AC_MSG_ERROR([Can't find or link to the user-defined format 1 library.])],
[])
fi
# Did the user specify a magic number for user-defined format 1?
AC_MSG_CHECKING([whether a magic number for user-defined format 1 was specified])
AC_ARG_WITH([udf1-magic-number],
[AS_HELP_STRING([--with-udf1-magic-number=<magic_number>],
[Specify a magic number for user-defined format 1 (ignored unless --with-udf1 is also used).])],
[UDF1_MAGIC_NUMBER=$with_udf1_magic_number])
AC_MSG_RESULT([$UDF1_MAGIC_NUMBER])
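
# Illustrative usage (all names below are hypothetical): a user-defined
# format is supplied as a dispatch table name, optionally with a magic
# number. Given --with-udf0=my_disp, configure links -lmy_disp and expects
# it to export get_my_disp(), which returns the dispatch table, e.g.
#   ./configure --with-udf0=my_disp --with-udf0-magic-number=MYFMT01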
# According to the autoconf mailing list gurus, we must test for
# compilers unconditionally. That is, we can't skip looking for the
# fortran compilers, just because the user doesn't want fortran. This
# is due to a limitation in autoconf.
# Find the C compiler.
AC_MSG_NOTICE([finding C compiler])
## Compiler with version information. This consists of the full path
## name of the compiler and the reported version number.
AC_SUBST([CC_VERSION])
## Strip anything that looks like a flag off of $CC
CC_NOFLAGS=`echo $CC | sed 's/ -.*//'`
if `echo $CC_NOFLAGS | grep ^/ >/dev/null 2>&1`; then
CC_VERSION="$CC"
else
CC_VERSION="$CC";
for x in `echo $PATH | sed -e 's/:/ /g'`; do
if test -x $x/$CC_NOFLAGS; then
CC_VERSION="$x/$CC"
break
fi
done
fi
if test -n "$cc_version_info"; then
CC_VERSION="$CC_VERSION ( $cc_version_info)"
fi
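
# Illustration of the CC_VERSION computation above (paths hypothetical):
# with CC="gcc -std=c99", the sed strips the flags to CC_NOFLAGS="gcc";
# since that is not an absolute path, the $PATH scan might yield
#   CC_VERSION="/usr/bin/gcc -std=c99"
# optionally suffixed with $cc_version_info when that variable is set.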
AC_PROG_CC
AC_PROG_CXX
AM_PROG_CC_C_O
AC_C_CONST
# Set up libtool.
AC_MSG_NOTICE([setting up libtool])
LT_PREREQ([2.2])
LT_INIT()
AC_MSG_NOTICE([finding other utilities])
# Is m4 installed? If not, bail.
AC_CHECK_PROGS([NC_M4], [m4])
if test -z "$NC_M4"; then
AC_MSG_ERROR([Cannot find m4 utility. Install m4 and try again.])
fi
if test "x$enable_doxygen" != xno; then
# Is doxygen installed? If so, have configure construct the Doxyfile.
AC_CHECK_PROGS([DOXYGEN], [doxygen])
if test -z "$DOXYGEN"; then
AC_MSG_ERROR([Doxygen not found - install doxygen or build without --enable-doxygen])
fi
fi
# Is graphviz/dot installed? If so, we'll use dot to create
# graphs in the documentation.
AC_CHECK_PROGS([DOT], [dot])
if test -z "$DOT"; then
AC_MSG_WARN([dot not found - will use simple charts in documentation])
HAVE_DOT=NO
elif test "x$enable_dot" = xno; then
HAVE_DOT=NO
else
HAVE_DOT=YES
fi
# If we have doxygen, and it's enabled, then process the file.
if test "x$enable_doxygen" != xno; then
if test -n "$DOXYGEN"; then
AC_SUBST(HAVE_DOT)
AC_CONFIG_FILES([docs/Doxyfile])
fi
# Note: the list of files to input to doxygen
# has been moved to docs/Doxyfile.in so
# that make distcheck works correctly.
# Any new inputs should be inserted into
# docs/Doxyfile.in and possibly docs/Makefile.am
fi
# Find the install program.
AC_PROG_INSTALL
# Check to see if any macros must be set to enable large (>2GB) files.
AC_SYS_LARGEFILE
AC_MSG_NOTICE([displaying some results])
## This next macro just prints some results for debugging
## support issues.
UD_DISPLAY_RESULTS
# For nightly build testing, output CC, FC, etc.
echo "CPPFLAGS=$CPPFLAGS CC=$CC CFLAGS=$CFLAGS LDFLAGS=$LDFLAGS LIBS=$LIBS" >> comps.txt
2010-06-03 21:25:25 +08:00
AC_MSG_NOTICE([checking types, headers, and functions])
AC_CHECK_HEADERS([sys/param.h])
AC_CHECK_HEADERS([libgen.h])
#AC_CHECK_HEADERS([locale.h])
2010-06-03 21:25:25 +08:00
AC_HEADER_STDC
AC_CHECK_HEADERS([locale.h stdio.h stdarg.h fcntl.h malloc.h stdlib.h string.h strings.h unistd.h sys/stat.h getopt.h sys/time.h sys/types.h time.h dirent.h stdint.h ctype.h])
# Do sys/resource.h separately
#AC_CHECK_HEADERS([sys/resource.h],[havesysresource=1],[havesysresource=0])
#if test "x$enable_dll" != xyes ; then
AC_CHECK_HEADERS([sys/resource.h])
#fi
# See if we have ftw.h to walk directory trees
AC_CHECK_HEADERS([ftw.h])
2010-06-03 21:25:25 +08:00
# Check for these functions...
AC_CHECK_FUNCS([strlcat snprintf strcasecmp fileno \
strdup strtoll strtoull \
mkstemp mktemp random \
getrlimit gettimeofday fsync MPI_Comm_f2c MPI_Info_f2c \
strncasecmp strlcpy])
# See if clock_gettime is available and its arg types.
AC_CHECK_FUNCS([clock_gettime])
AC_CHECK_TYPES([struct timespec])
# disable dap4 if hdf5 is disabled
if test "x$enable_hdf5" = "xno" ; then
AC_MSG_WARN([HDF5 (and hence netCDF-4) not enabled; disabling DAP4])
enable_dap4=no
fi
if test "x$enable_dap4" = xyes; then
AC_DEFINE([ENABLE_DAP4], [1], [if true, build DAP4 Client])
fi
# check for useful, but not essential, memio support
AC_CHECK_FUNCS([memmove getpagesize sysconf])
# Does the user want to allow use of mmap for NC_DISKLESS?
AC_MSG_CHECKING([whether mmap is enabled for in-memory files])
AC_ARG_ENABLE([mmap],
[AS_HELP_STRING([--enable-mmap],
[allow mmap for in-memory files])])
test "x$enable_mmap" = xyes || enable_mmap=no
AC_MSG_RESULT($enable_mmap)
# check for mmap availability before committing to use mmap
have_mmap="$enable_mmap"
AC_CHECK_FUNCS([mmap],[havemmapfcn=yes],[havemmapfcn=no])
if test "x$havemmapfcn" = xno ; then
have_mmap=no
fi
# check for mremap availability; not strictly needed
AC_CHECK_FUNCS([mremap],[havemremapfcn=yes],[havemremapfcn=no])
# Check for MAP_ANONYMOUS
AC_COMPILE_IFELSE([AC_LANG_PROGRAM(
[#include <sys/mman.h>],
[[int x = MAP_ANONYMOUS;]])],
[havemapanon=yes],
[havemapanon=no])
AC_MSG_CHECKING([whether MAP_ANONYMOUS is defined])
AC_MSG_RESULT([${havemapanon}])
if test "x$havemapanon" != xyes ; then
have_mmap=no
fi
if test "x$have_mmap" != xyes ; then
if test "x$enable_mmap" = xyes ; then
AC_MSG_WARN([mmap functionality is not available: disabling mmap])
else
AC_MSG_NOTICE([mmap functionality is not available: disabling mmap])
fi
enable_mmap=no
fi
if test "x$enable_mmap" = xyes; then
2012-05-16 01:10:41 +08:00
AC_DEFINE([USE_MMAP], [1], [if true, use mmap for in-memory files])
fi
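
# Summary of the decision above (illustrative):
#   ./configure --enable-mmap
# requests mmap for in-memory (NC_DISKLESS) files, but USE_MMAP is only
# defined when both the mmap() probe and the MAP_ANONYMOUS probe succeed.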
# Does the user want to allow reading of remote data via range headers?
AC_MSG_CHECKING([whether byte range support is enabled])
AC_ARG_ENABLE([byterange],
[AS_HELP_STRING([--disable-byterange],
[allow byte-range I/O])])
test "x$enable_byterange" = xno || enable_byterange=yes
AC_MSG_RESULT($enable_byterange)
if test "x$enable_remote_functionality" = xno ; then
AC_MSG_WARN([--disable-remote-functionality => --disable-byterange])
enable_byterange=no
fi
# Need curl for byte ranges
if test "x$found_curl" = xno && test "x$enable_byterange" = xyes ; then
AC_MSG_ERROR([curl required for byte range support. Install curl or build without --enable-byterange.])
enable_byterange=no
fi
if test "x$enable_byterange" = xyes; then
AC_DEFINE([ENABLE_BYTERANGE], [1], [if true, support byte-range read of remote datasets.])
fi
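
# Illustrative usage (the URL below is hypothetical): with byte-range
# support enabled (the default when curl and remote functionality are
# available), a remote netCDF file can be read by appending the byte-range
# fragment to its URL, e.g.
#   ncdump -h 'https://example.com/data/file.nc#mode=bytes'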
# Does the user want to disable atexit?
AC_MSG_CHECKING([whether nc_finalize should be invoked at exit])
AC_ARG_ENABLE([atexit-finalize],
[AS_HELP_STRING([--disable-atexit-finalize],
[disable invoking nc_finalize at exit])])
test "x$enable_atexit_finalize" = xno || enable_atexit_finalize=yes
AC_MSG_RESULT($enable_atexit_finalize)
# Check for atexit
AC_CHECK_FUNCS([atexit])
# If no atexit, then disable atexit finalize
if test "x$enable_atexit_finalize" = xyes ; then
if test "x$ac_cv_func_function" = xno ; then
enable_atexit_finalize=no
AC_MSG_ERROR([atexit() required for enable-atexit-finalize.])
fi
fi
if test "x$enable_atexit_finalize" = xyes ; then
AC_DEFINE([ENABLE_ATEXIT_FINALIZE], [1], [If true, enable nc_finalize via atexit()])
fi
# Need libdl(d) for plugins
AC_CHECK_LIB([dl],[dlopen],[have_libdld=yes],[have_libdld=no])
if test "x$have_libdld" = "xyes" ; then
AC_SEARCH_LIBS([dlopen],[dl dld], [],[])
fi
# Does the user want plugins?
AC_MSG_CHECKING([whether dynamically loaded plugins are enabled])
AC_ARG_ENABLE([plugins],
[AS_HELP_STRING([--disable-plugins],
[disallow dynamically loaded plugins])])
test "x$enable_plugins" = xno || enable_plugins=yes
AC_MSG_RESULT([$enable_plugins])
if test "x$have_libdld" = xno && test "x$ISMSVC" = xno; then
enable_plugins=no
AC_MSG_WARN([libdld not found; it is required for --enable-plugins. Disabling plugins.])
fi
if test "x$enable_plugins" = xyes && test "x$enable_shared" = xno; then
AC_MSG_WARN([--disable-shared => --disable-plugins])
enable_plugins=no
fi
if test "x$enable_plugins" = xyes; then
AC_DEFINE([ENABLE_PLUGINS], [1], [if true, support dynamically loaded plugins])
fi
AM_CONDITIONAL(ENABLE_PLUGINS, [test "x$enable_plugins" = xyes])
AC_SUBST(USEPLUGINS, [${enable_plugins}])
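# Illustrative sketch (not executed here): the ENABLE_PLUGINS automake
# conditional and the USEPLUGINS output variable defined above are meant
# to be consumed downstream, e.g. in a Makefile.am:
#
#   if ENABLE_PLUGINS
#   SUBDIRS += plugins
#   endif
#
# and, via @USEPLUGINS@, in any template listed in AC_CONFIG_FILES.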
AC_FUNC_ALLOCA
AC_CHECK_DECLS([isnan, isinf, isfinite],,,[#include <math.h>])
AC_STRUCT_ST_BLKSIZE
UD_CHECK_IEEE
AC_CHECK_TYPES([size_t, ssize_t, schar, uchar, longlong, ushort, uint, int64, uint64, size64_t, ssize64_t, _off64_t, uint64_t, ptrdiff_t])
AC_TYPE_OFF_T
AC_TYPE_UINTPTR_T
AC_C_CHAR_UNSIGNED
AC_C_BIGENDIAN
AC_CHECK_TYPES([mode_t])
AM_CONDITIONAL(ISCYGWIN, [test "x$ISCYGWIN" = xyes])
AM_CONDITIONAL(ISMSVC, [test "x$ISMSVC" = xyes])
AM_CONDITIONAL(ISOSX, [test "x$ISOSX" = xyes])
AM_CONDITIONAL(ISMINGW, [test "x$ISMINGW" = xyes])
AM_CONDITIONAL(ISMSYS, [test "x$ISMSYS" = xyes])
AC_SUBST([ISMSVC], [${ISMSVC}])
AC_SUBST([WINVERMAJOR], [${WINVERMAJOR}])
AC_SUBST([WINVERBUILD], [${WINVERBUILD}])
AC_SUBST([ISCYGWIN], [${ISCYGWIN}])
AC_SUBST([ISOSX], [${ISOSX}])
AC_SUBST([ISMINGW], [${ISMINGW}])
AC_SUBST([ISMSYS], [${ISMSYS}])
if test "x$ISMSVC" != x ; then REGEDIT=yes; fi
if test "x$ISMSYS" != x ; then REGEDIT=yes; fi
if test "x$ISCYGWIN" != x ; then REGEDIT=yes; fi
if test "x$REGEDIT" = xyes; then
AC_DEFINE([REGEDIT], 1, [dreg.c usable])
fi
AM_CONDITIONAL(REGEDIT, [test "x$REGEDIT" = xyes])
AC_SUBST([ISREGEDIT], [yes])
###
# Crude hack to work around an issue in Cygwin:
# pause briefly between the sizeof() checks below.
###
SLEEPCMD=""
if test "x$ISCYGWIN" = "xyes"; then
SLEEPCMD="sleep 5"
AC_MSG_NOTICE([Pausing between sizeof() checks to mitigate a Cygwin issue.])
fi
$SLEEPCMD
AC_CHECK_SIZEOF(short)
$SLEEPCMD
AC_CHECK_SIZEOF(int)
$SLEEPCMD
AC_CHECK_SIZEOF(long)
$SLEEPCMD
AC_CHECK_SIZEOF(long long)
$SLEEPCMD
AC_CHECK_SIZEOF(float)
$SLEEPCMD
AC_CHECK_SIZEOF(double)
$SLEEPCMD
AC_CHECK_SIZEOF(off_t)
$SLEEPCMD
AC_CHECK_SIZEOF(size_t)
$SLEEPCMD
AC_CHECK_SIZEOF(unsigned long long)
if test "$ac_cv_sizeof_size_t" -lt "8" ; then
if test "x${enable_cdf5}" = xyes ; then
dnl unable to support CDF5, but --enable-cdf5 is explicitly set
AC_MSG_ERROR([Unable to support CDF5 feature because size_t is less than 8 bytes])
fi
enable_cdf5=no
else
if test "x${enable_cdf5}" != xno ; then
enable_cdf5=yes
fi
fi
if test "x${enable_cdf5}" = xyes; then
AC_DEFINE([ENABLE_CDF5], [1], [if true, enable CDF5 Support])
fi
AM_CONDITIONAL(ENABLE_CDF5, [test x$enable_cdf5 = xyes ])
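# Worked example (illustrative): AC_CHECK_SIZEOF(size_t) caches its result
# in $ac_cv_sizeof_size_t and defines SIZEOF_SIZE_T in config.h, so the
# CDF5 logic above resolves as:
#
#   ac_cv_sizeof_size_t=4  =>  enable_cdf5=no  (error if --enable-cdf5 given)
#   ac_cv_sizeof_size_t=8  =>  enable_cdf5=yes (unless --disable-cdf5 given)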
$SLEEPCMD
if test "$ac_cv_type_uchar" = yes ; then
AC_CHECK_SIZEOF(uchar)
else
AC_CHECK_SIZEOF(unsigned char)
fi
$SLEEPCMD
if test "$ac_cv_type_ushort" = yes ; then
AC_CHECK_SIZEOF(ushort)
else
AC_CHECK_SIZEOF(unsigned short int)
fi
$SLEEPCMD
if test "$ac_cv_type_uint" = yes ; then
AC_CHECK_SIZEOF(uint)
else
AC_CHECK_SIZEOF(unsigned int)
fi
$SLEEPCMD
AC_CHECK_SIZEOF(ssize_t)
$SLEEPCMD
AC_CHECK_SIZEOF([void*])
# Check for deflate library
AC_SEARCH_LIBS([deflate], [zlibwapi zlibstat zlib zlib1 z], [have_deflate=yes], [have_deflate=no])
AC_MSG_CHECKING([whether deflate library is available])
AC_MSG_RESULT([${have_deflate}])
if test "x$have_deflate" = xno ; then
AC_MSG_ERROR([Can't find or link to the z library. Turn off the netCDF-4, \
DAP, and NCZarr clients with --disable-hdf5 --disable-dap --disable-nczarr, or see config.log for errors.])
fi
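# Note (illustrative): AC_SEARCH_LIBS first tries to link deflate() with no
# extra library, then with each candidate in turn, prepending the first one
# that succeeds to LIBS; on a typical Unix system the net effect is:
#
#   LIBS="-lz $LIBS"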
# We need the math library
AC_CHECK_LIB([m], [floor], [],
[AC_MSG_ERROR([Can't find or link to the math library.])])
if test "x$enable_netcdf_4" = xyes; then
AC_DEFINE([USE_NETCDF4], [1], [if true, build netCDF-4])
fi
# Set defaults
hdf5_parallel=no
hdf5_supports_par_filters=no
enable_hdf5_szip=no
has_hdf5_ros3=no
if test "x$enable_hdf5" = xyes; then
AC_DEFINE([USE_HDF5], [1], [if true, use HDF5])
AC_DEFINE([H5_USE_16_API], [1], [use HDF5 1.6 API])
# Check for the main hdf5 and hdf5_hl library.
AC_SEARCH_LIBS([H5Fflush], [hdf5 hdf5.dll], [],
[AC_MSG_ERROR([Can't find or link to the hdf5 library. Use --disable-hdf5, or see config.log for errors.])])
AC_SEARCH_LIBS([H5DSis_scale], [hdf5_hl hdf5_hl.dll], [],
[AC_MSG_ERROR([Can't find or link to the hdf5 high-level (hdf5_hl) library. Use --disable-hdf5, or see config.log for errors.])])
AC_CHECK_HEADERS([hdf5.h], [], [AC_MSG_ERROR([Compiling a test with HDF5 failed. Either hdf5.h cannot be found, or config.log should be checked for the underlying error.])])
# Was HDF5 built with zlib as netCDF requires?
AC_COMPILE_IFELSE([AC_LANG_PROGRAM([#include "H5public.h"],
[[#if !H5_HAVE_ZLIB_H
# error
#endif]
])], [], [AC_MSG_ERROR([HDF5 was not built with zlib, which is required. Rebuild HDF5 with zlib.])])
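# The check above relies on the H5_HAVE_ZLIB_H feature macro that HDF5
# records in H5pubconf.h (pulled in by H5public.h) when it was configured
# against zlib; a minimal standalone equivalent, under that assumption:
#
#   #include "H5public.h"
#   #if !H5_HAVE_ZLIB_H
#   #error HDF5 was built without zlib
#   #endif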
# H5Pset_fapl_mpiposix and H5Pget_fapl_mpiposix have been removed since HDF5 1.8.12.
# Use H5Pset_fapl_mpio and H5Pget_fapl_mpio, instead.
AC_CHECK_FUNCS([H5Pget_fapl_mpio H5Pset_deflate H5Z_SZIP H5Pset_all_coll_metadata_ops H5Literate])
# Check to see if HDF5 library has collective metadata APIs, (HDF5 >= 1.10.0)
if test "x$ac_cv_func_H5Pset_all_coll_metadata_ops" = xyes; then
AC_DEFINE([HDF5_HAS_COLL_METADATA_OPS], [1], [if true, use collective metadata ops in parallel netCDF-4])
fi
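# Note (illustrative): when HDF5_HAS_COLL_METADATA_OPS is defined, the
# library can request collective metadata reads on a file access property
# list, roughly:
#
#   H5Pset_all_coll_metadata_ops(fapl_id, 1);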
# If parallel is available in hdf5, enable it in the C code. Also add some stuff to netcdf.h.
if test "x$ac_cv_func_H5Pget_fapl_mpio" = xyes -o "x$ac_cv_func_H5Pget_fapl_mpiposix" = xyes; then
hdf5_parallel=yes
fi
AC_MSG_CHECKING([whether parallel io is enabled in hdf5])
AC_MSG_RESULT([$hdf5_parallel])
# See if H5Dread_chunk is available
AC_SEARCH_LIBS([H5Dread_chunk],[hdf5_hldll hdf5_hl], [has_readchunks=yes], [has_readchunks=no])
# See if hdf5 library supports Read-Only S3 (byte-range) driver
AC_SEARCH_LIBS([H5Pset_fapl_ros3],[hdf5_hldll hdf5_hl], [has_hdf5_ros3=yes], [has_hdf5_ros3=no])
if test "x$has_hdf5_ros3" = xyes && test "x$enable_byterange" = xyes; then
AC_DEFINE([ENABLE_HDF5_ROS3], [1], [if true, support byte-range using hdf5 virtual file driver.])
fi
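# Example (hypothetical invocations): byte-range access through the HDF5
# ROS3 virtual file driver needs an HDF5 that was itself configured with
# ROS3 support, plus --enable-byterange here:
#
#   (HDF5)     ./configure --enable-ros3-vfd ...
#   (netcdf-c) ./configure --enable-byterange ...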
# Check to see if HDF5 library is 1.10.3 or greater. If so, allows
# parallel I/O with filters. This allows zlib/szip compression to
# be used with parallel I/O, which is very helpful to HPC users.
if test "x$has_readchunks" = xyes; then
AC_DEFINE([HDF5_SUPPORTS_PAR_FILTERS], [1], [if true, HDF5 is at least version 1.10.3 and allows parallel I/O with filters such as zlib/szip])
hdf5_supports_par_filters=yes
fi
AC_MSG_CHECKING([whether HDF5 allows parallel filters])
AC_MSG_RESULT([$has_readchunks])
# Check to see if user asked for parallel build, but HDF5 does not support it.
if test "x$hdf5_parallel" = "xno"; then
if test "x$enable_parallel_tests" = "xyes"; then
AC_MSG_ERROR([Parallel tests requested, but no parallel HDF5 installation detected.])
fi
fi
# Check whether HDF5 was built with the SZLIB library. If so we
# must be able to link to szip library.
AC_MSG_CHECKING([whether szlib was used when building HDF5])
if test "x$ac_cv_func_H5Z_SZIP" = xyes; then
enable_hdf5_szip=yes
AC_DEFINE([HAVE_H5Z_SZIP], [1], [if true, compile in szip compression in netCDF-4 variables])
fi
AC_MSG_RESULT([$enable_hdf5_szip])
# Check to see if HDF5 library is 1.10.6 or greater.
# Used to control path name conversion
AC_COMPILE_IFELSE([AC_LANG_PROGRAM([[#include <H5public.h>]], [[
#if (H5_VERS_MAJOR*10000 + H5_VERS_MINOR*100 + H5_VERS_RELEASE < 11006)
choke me
#endif
]])], [hdf5_version_1106=yes], [hdf5_version_1106=no])
AC_MSG_CHECKING([whether HDF5 library is version 1.10.6 or later])
AC_MSG_RESULT([$hdf5_version_1106])
if test "x$hdf5_version_1106" = xyes; then
AC_DEFINE([HDF5_UTF8_PATHS], [1], [if true, HDF5 paths can be utf-8])
fi
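# Worked example: the test above folds the HDF5 version triple into a
# single integer; for HDF5 1.10.6,
#
#   1*10000 + 10*100 + 6 = 11006
#
# so 1.10.6 is the earliest release for which hdf5_version_1106=yes.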
fi
AM_CONDITIONAL(ENABLE_NCDUMPCHUNKS, [test "x$has_readchunks" = xyes ])
# If the user wants hdf4 built in, check it out.
if test "x$enable_hdf4" = xyes; then
AC_CHECK_LIB([jpeg], [jpeg_CreateCompress], [],
[AC_MSG_ERROR([Jpeg library required for --enable-hdf4 builds.])])
AC_CHECK_HEADERS([mfhdf.h], [], [nc_mfhdf_h_missing=yes])
if test "x$nc_mfhdf_h_missing" = xyes; then
AC_MSG_ERROR([Cannot find mfhdf.h, yet --enable-hdf4 was used.])
fi
AC_CHECK_LIB([df], [Hclose], [], [AC_MSG_ERROR([Can't find or link to the hdf4 df library. See config.log for errors.])])
AC_CHECK_LIB([mfhdf], [NC_arrayfill], [AC_MSG_ERROR([HDF4 library must be built with --disable-netcdf.])], [])
AC_CHECK_LIB([mfhdf], [SDcreate], [], [AC_MSG_ERROR([Can't find or link to the hdf4 mfhdf library. See config.log for errors.])])
AC_CHECK_LIB([jpeg], [jpeg_set_quality], [], [AC_MSG_ERROR([Can't find or link to the jpeg library (required by hdf4). See config.log for errors.])])
AC_DEFINE([USE_HDF4], [1], [if true, use HDF4 too])
fi
# There are several cases for parallelism:
# 1. PnetCDF enabled => we want parallelism for CDF-1, CDF-2, and CDF-5
# 2. hdf5 has mpio enabled
# a. do not want to use it for netcdf4
# b. do want to use it for netcdf4
# Should we provide parallel io for netcdf-4?
if test "x$enable_hdf5" = xyes ; then
AC_ARG_ENABLE([parallel4],
[AS_HELP_STRING([--disable-parallel4],
[disable parallel I/O for netcdf-4, even if it's enabled in libhdf5])],
[user_set_parallel4=${enableval}])
test "x$enable_parallel4" = xno || enable_parallel4=yes
# If user wants parallel IO for netCDF-4, make sure HDF5 can provide it.
if test "x$enable_parallel4" = xyes; then
if test "x$hdf5_parallel" = xno; then
# If user specifically asked for parallel4, then error out.
if test "x$user_set_parallel4" = xyes; then
AC_MSG_ERROR([Parallel IO in netCDF-4 requested, but HDF5 does not provide parallel IO.])
fi
# User didn't specify, so disable parallel4
enable_parallel4=no
AC_MSG_WARN([Parallel io disabled for netcdf-4 because hdf5 does not support it.])
fi
fi
else
enable_parallel4=no
fi
AC_MSG_CHECKING([whether parallel I/O is enabled for netcdf-4])
AC_MSG_RESULT($enable_parallel4)
# We have already tested for parallel io in netcdf4
# parallel I/O for CDF-1, 2, and 5 files can also be done through PnetCDF
# See if the PnetCDF lib is available and of the
# right version (1.6.0 or later)
if test "x$enable_pnetcdf" = xyes; then
pnetcdf_conflict=no
AC_CHECK_LIB([pnetcdf], [ncmpi_create], [],[pnetcdf_conflict=yes])
if test "x$pnetcdf_conflict" = xyes ; then
AC_MSG_ERROR([Cannot link to PnetCDF library.])
fi
# Pnetcdf did not support utf-8 until 1.6.0
AC_MSG_CHECKING([Is libpnetcdf version 1.6.0 or later?])
AC_COMPILE_IFELSE([AC_LANG_PROGRAM([[#include <pnetcdf.h>]], [[
#if (PNETCDF_VERSION_MAJOR*1000 + PNETCDF_VERSION_MINOR < 1006)
choke me
#endif
]])], [pnetcdf16=yes], [pnetcdf16=no])
AC_MSG_RESULT([$pnetcdf16])
if test x$pnetcdf16 = xno; then
AC_MSG_ERROR([--enable-pnetcdf requires version 1.6.0 or later])
fi
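# Worked example: the test above folds major/minor into one integer, so
# PnetCDF 1.6.0 yields 1*1000 + 6 = 1006, while 1.5.x yields 1005 and is
# rejected.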
fi
# Now, set enable_parallel if either enable_pnetcdf or enable_parallel4 is set
if test "x$enable_pnetcdf" = xyes -o "x$enable_parallel4" = xyes; then
enable_parallel=yes
else
enable_parallel=no
fi
AM_CONDITIONAL(ENABLE_PARALLEL, [test x$enable_parallel = xyes ])
if test "x$hdf5_parallel" = xyes; then
# Provide more precise parallel control
AC_DEFINE([HDF5_PARALLEL], [1], [if true, hdf5 has parallelism enabled])
fi
# Set config flags
if test "x$enable_parallel4" = xyes; then
# Provide more precise parallel control
AC_DEFINE([USE_PARALLEL4], [1], [if true, parallel netcdf-4 is in use])
fi
if test "x$enable_pnetcdf" = xyes; then
AC_DEFINE([USE_PNETCDF], [1], [if true, PnetCDF is used])
fi
# If enable_parallel is in use, enable it in the C code. Also add some stuff to netcdf.h.
if test "x$enable_parallel" = xyes; then
AC_DEFINE([USE_PARALLEL], [1], [if true, PnetCDF or parallel netcdf-4 is in use])
fi
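# Summary (illustrative) of the parallel-related defines produced above:
#
#   HDF5_PARALLEL  - hdf5 itself was built with MPI support
#   USE_PARALLEL4  - parallel I/O is enabled for netCDF-4 files
#   USE_PNETCDF    - parallel I/O via PnetCDF for CDF-1/2/5 files
#   USE_PARALLEL   - either of the two preceding cases holds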
AC_ARG_ENABLE([erange_fill],
[AS_HELP_STRING([--enable-erange-fill],
[Enable use of fill value when out-of-range type
conversion causes NC_ERANGE error. @<:@default: disabled@:>@])],
[enable_erange_fill=${enableval}], [enable_erange_fill=auto]
)
# check PnetCDF's settings on enable_erange_fill and relax_coord_bound
if test "x$enable_pnetcdf" = xyes; then
UD_CHECK_HEADER_PATH([pnetcdf.h])
AC_MSG_CHECKING([if erange-fill is enabled in PnetCDF])
AC_COMPILE_IFELSE([AC_LANG_PROGRAM([[#include <pnetcdf.h>]], [[
#if !defined(PNETCDF_ERANGE_FILL) || PNETCDF_ERANGE_FILL == 0
choke me
#endif]])], [enable_erange_fill_pnetcdf=yes], [enable_erange_fill_pnetcdf=no])
AC_MSG_RESULT([$enable_erange_fill_pnetcdf])
if test "x$enable_erange_fill" = xauto ; then
enable_erange_fill=$enable_erange_fill_pnetcdf
elif test "$enable_erange_fill" != "$enable_erange_fill_pnetcdf"; then
if test "$enable_erange_fill" = yes; then
AC_MSG_ERROR([Enabling erange-fill conflicts with PnetCDF setting])
else
AC_MSG_ERROR([Disabling erange-fill conflicts with PnetCDF setting])
fi
fi
AC_MSG_CHECKING([if relax-coord-bound is enabled in PnetCDF])
AC_COMPILE_IFELSE([AC_LANG_PROGRAM([[#include <pnetcdf.h>]], [[
#if !defined(PNETCDF_RELAX_COORD_BOUND) || PNETCDF_RELAX_COORD_BOUND == 0
choke me
#endif]])], [relax_coord_bound_pnetcdf=yes], [relax_coord_bound_pnetcdf=no])
AC_MSG_RESULT([$relax_coord_bound_pnetcdf])
if test x"$relax_coord_bound_pnetcdf" != xyes; then
AC_MSG_ERROR([PNetCDF must be built with relax-coord-bound])
fi
else
if test "x$enable_erange_fill" = xauto; then
# if --enable-erange-fill is not used, default setting is no
enable_erange_fill=no
fi
fi
if test "x$enable_erange_fill" = xyes ; then
if test "x$M4FLAGS" = x ; then
M4FLAGS="-DERANGE_FILL"
else
M4FLAGS="$M4FLAGS -DERANGE_FILL"
fi
AC_DEFINE([ERANGE_FILL], [1], [if true, use _FillValue for NC_ERANGE data elements])
fi
AC_SUBST(M4FLAGS)
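# Note (illustrative, assuming the usual m4-generated sources in libsrc):
# M4FLAGS is passed to m4 when .m4 sources are expanded to C, so the
# ERANGE_FILL switch reaches both config.h and the m4 layer, e.g.:
#
#   m4 $(M4FLAGS) putget.m4 > putget.c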
# No logging for netcdf-3.
if test "x$enable_netcdf_4" = xno; then
enable_logging=no
fi
if test "x$enable_logging" = xyes; then
AC_DEFINE([LOGGING], 1, [If true, turn on logging.])
fi
# Control NCZarr filters
# Does the user want to use NCZarr filters?
AC_MSG_CHECKING([whether we should enable NCZarr filters])
AC_ARG_ENABLE([nczarr-filters], [AS_HELP_STRING([--disable-nczarr-filters],
[disable NCZarr filters])])
test "x$enable_nczarr_filters" = xno || enable_nczarr_filters=yes
AC_MSG_RESULT([$enable_nczarr_filters])
# Control filter test/example
AC_MSG_CHECKING([whether filter testing should be run])
AC_ARG_ENABLE([filter-testing],
[AS_HELP_STRING([--disable-filter-testing],
[Do not run the filter tests and examples; requires shared libraries and HDF5 or NCZarr])])
test "x$enable_filter_testing" = xno || enable_filter_testing=yes
AC_MSG_RESULT([$enable_filter_testing])
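# Note: the 'test "x$enable_..." = xno || enable_...=yes' idiom above
# defaults each option to yes; only an explicit --disable-* yields
# "no", and any other value is normalized to "yes".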
# Apply constraints
if test "x$enable_plugins" = xno ; then
AC_MSG_WARN([--disable-plugins => --disable-nczarr-filters])
enable_nczarr_filters=no
fi
if test "x$enable_plugins" = xno ; then
AC_MSG_WARN([--disable-plugins => --disable-filter-testing])
enable_filter_testing=no
fi
if test "x$enable_filter_testing" = xno; then
AC_MSG_WARN([--disable-filter-testing => --disable-nczarr-filter-testing])
enable_nczarr_filter_testing=no
fi
if test "x$enable_nczarr" = xno; then
AC_MSG_WARN([--disable-nczarr => --disable-nczarr-filters])
enable_nczarr_filters=no
enable_nczarr_filter_testing=no
fi
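# Net effect of the constraints above (summary only, no new logic):
#   --disable-plugins        => no NCZarr filters and no filter testing
#   --disable-filter-testing => no NCZarr filter testing
#   --disable-nczarr         => no NCZarr filters and no NCZarr filter testing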
if test "x$enable_nczarr_filters" = xyes; then
AC_DEFINE([ENABLE_NCZARR_FILTERS], [1], [if true, enable NCZarr filters])
fi
# Client-side filter registration is permanently disabled.
enable_clientside_filters=no
AM_CONDITIONAL(ENABLE_CLIENTSIDE_FILTERS, [test x$enable_clientside_filters = xyes])
AM_CONDITIONAL(ENABLE_FILTER_TESTING, [test x$enable_filter_testing = xyes])
AM_CONDITIONAL(ENABLE_NCZARR_FILTERS, [test x$enable_nczarr_filters = xyes])
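# For reference, Makefile.am files branch on these conditionals; a
# hypothetical fragment:
#   if ENABLE_FILTER_TESTING
#   TESTS += tst_filter.sh
#   endif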
# Automake conditionals need to be called, whether the answer is yes
# or no.
AM_CONDITIONAL(BUILD_PARALLEL, [test x$enable_parallel = xyes])
AM_CONDITIONAL(TEST_PARALLEL4, [test "x$enable_parallel4" = xyes -a "x$enable_parallel_tests" = xyes])
AM_CONDITIONAL(BUILD_DAP, [test "x$enable_dap" = xyes])
AM_CONDITIONAL(USE_DAP, [test "x$enable_dap" = xyes]) # Alias
# Provide protocol specific flags
AM_CONDITIONAL(ENABLE_DAP, [test "x$enable_dap" = xyes])
AM_CONDITIONAL(ENABLE_DAP4, [test "x$enable_dap4" = xyes])
AM_CONDITIONAL(USE_STRICT_NULL_BYTE_HEADER_PADDING, [test x$enable_strict_null_byte_header_padding = xyes])
AM_CONDITIONAL(ENABLE_CDF5, [test "x$enable_cdf5" = xyes])
AM_CONDITIONAL(ENABLE_DAP_REMOTE_TESTS, [test "x$enable_dap_remote_tests" = xyes])
AM_CONDITIONAL(ENABLE_EXTERNAL_SERVER_TESTS, [test "x$enable_external_server_tests" = xyes])
AM_CONDITIONAL(ENABLE_DAP_AUTH_TESTS, [test "x$enable_dap_auth_tests" = xyes])
AM_CONDITIONAL(ENABLE_DAP_LONG_TESTS, [test "x$enable_dap_long_tests" = xyes])
AM_CONDITIONAL(USE_PNETCDF_DIR, [test ! "x$PNETCDFDIR" = x])
AM_CONDITIONAL(USE_LOGGING, [test "x$enable_logging" = xyes])
AM_CONDITIONAL(CROSS_COMPILING, [test "x$cross_compiling" = xyes])
AM_CONDITIONAL(USE_NETCDF4, [test x$enable_netcdf_4 = xyes])
AM_CONDITIONAL(USE_HDF5, [test x$enable_hdf5 = xyes])
AM_CONDITIONAL(USE_HDF4, [test x$enable_hdf4 = xyes])
AM_CONDITIONAL(USE_HDF4_FILE_TESTS, [test x$enable_hdf4_file_tests = xyes])
AM_CONDITIONAL(USE_RENAMEV3, [test x$enable_netcdf_4 = xyes -o x$enable_dap = xyes])
AM_CONDITIONAL(USE_PNETCDF, [test x$enable_pnetcdf = xyes])
AM_CONDITIONAL(USE_DISPATCH, [test x$enable_dispatch = xyes])
AM_CONDITIONAL(BUILD_MMAP, [test x$enable_mmap = xyes])
AM_CONDITIONAL(BUILD_DOCS, [test x$enable_doxygen = xyes])
AM_CONDITIONAL(SHOW_DOXYGEN_TAG_LIST, [test x$enable_doxygen_tasks = xyes])
AM_CONDITIONAL(ENABLE_METADATA_PERF, [test x$enable_metadata_perf = xyes])
AM_CONDITIONAL(ENABLE_BYTERANGE, [test "x$enable_byterange" = xyes])
# relax-coord-bound is now always enabled, so this conditional is
# always true; it is retained for use by the Makefile.am files.
AM_CONDITIONAL(RELAX_COORD_BOUND, [test "xyes" = xyes])
AM_CONDITIONAL(HAS_PAR_FILTERS, [test x$hdf5_supports_par_filters = xyes])
# We need to simplify the set of S3 and Zarr flag combinations
AM_CONDITIONAL(ENABLE_S3, [test "x$enable_s3" = xyes])
AM_CONDITIONAL(ENABLE_S3_AWS, [test "x$enable_s3_aws" = xyes])
AM_CONDITIONAL(ENABLE_S3_INTERNAL, [test "x$enable_s3_internal" = xyes])
AM_CONDITIONAL(ENABLE_NCZARR, [test "x$enable_nczarr" = xyes])
AM_CONDITIONAL(ENABLE_S3_TESTPUB, [test "x$with_s3_testing" != xno]) # all => public
AM_CONDITIONAL(ENABLE_S3_TESTALL, [test "x$with_s3_testing" = xyes])
AM_CONDITIONAL(ENABLE_NCZARR_ZIP, [test "x$enable_nczarr_zip" = xyes])
AM_CONDITIONAL(HAVE_DEFLATE, [test "x$have_deflate" = xyes])
AM_CONDITIONAL(HAVE_SZ, [test "x$have_sz" = xyes])
AM_CONDITIONAL(HAVE_H5Z_SZIP, [test "x$enable_hdf5_szip" = xyes])
AM_CONDITIONAL(HAVE_BLOSC, [test "x$have_blosc" = xyes])
AM_CONDITIONAL(HAVE_BZ2, [test "x$have_bz2" = xyes])
AM_CONDITIONAL(HAVE_ZSTD, [test "x$have_zstd" = xyes])
# If the machine doesn't have a long long, and we want netCDF-4, then
# we've got problems!
if test "x$enable_netcdf_4" = xyes; then
AC_TYPE_LONG_LONG_INT
AC_TYPE_UNSIGNED_LONG_LONG_INT
dnl if test ! "x$ac_cv_type_long_long_int" = xyes -o ! "x$ac_cv_type_unsigned_long_long_int" = xyes; then
dnl AC_MSG_ERROR([This platform does not support long long types. These are required for netCDF-4.])
dnl fi
fi
# Create the file name for a "make ftpbin" which is used to generate a
# binary distribution. For each release we generate binary releases on
# the thousands of machines in Unidata's vast underground complex at
# an undisclosed location in the Rocky Mountains. The binary
# distributions, along with the 25-foot thick cement slabs and the
# giant springs, will help distribute netCDF even after a catastrophic
# meteor strike.
AC_MSG_CHECKING([what to call the output of the ftpbin target])
BINFILE_NAME=binary-netcdf-$PACKAGE_VERSION
test "x$enable_netcdf_4" = xno && BINFILE_NAME=${BINFILE_NAME}_nc3
BINFILE_NAME=${BINFILE_NAME}.tar
AC_SUBST(BINFILE_NAME)
AC_MSG_RESULT([$BINFILE_NAME $FC $CXX])
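# Example results (derived from the rules above): binary-netcdf-<version>.tar,
# or binary-netcdf-<version>_nc3.tar when netCDF-4 is disabled.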
##
# Bugfix for Cygwin.
##
AC_MSG_CHECKING([if libtool needs -no-undefined flag to build shared libraries])
NOUNDEFINED=""
case "`uname`" in
CYGWIN*|MINGW*|AIX*)
## Add in the -no-undefined flag to LDFLAGS for libtool.
AC_MSG_RESULT([yes])
NOUNDEFINED=" -no-undefined"
;;
*)
## Don't add anything
AC_MSG_RESULT([no])
;;
esac
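# (-no-undefined tells libtool the library has no unresolved symbols,
# which Cygwin/MinGW/AIX require when building shared libraries;
# NOUNDEFINED is presumably folded into the libtool link flags
# elsewhere in the build.)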
AC_MSG_CHECKING([value of LIBS])
AC_MSG_RESULT([$LIBS])
# Flags for nc-config script; by design $prefix, $includedir, $libdir,
# etc. are left as shell variables in the script so as to facilitate
# relocation
if test "x$with_netcdf_c_lib" = x ; then
NC_LIBS="-lnetcdf"
else
NC_LIBS="$with_netcdf_c_lib"
fi
if test "x$enable_shared" != xyes; then
NC_LIBS="$LDFLAGS $NC_LIBS $LIBS"
fi
case "x$target_os" in
xsolaris*)
NEWNCLIBS=""
for x in $NC_LIBS ; do
case "$x" in
-L*) r=`echo "$x" | sed -e 's|^-L|-R|'`
NEWNCLIBS="$NEWNCLIBS $x $r"
;;
*) NEWNCLIBS="$NEWNCLIBS $x" ;;
esac
done
NC_LIBS="$NEWNCLIBS"
;;
*);;
esac
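# Example of the rewrite above: "-L/usr/local/lib -lnetcdf" becomes
# "-L/usr/local/lib -R/usr/local/lib -lnetcdf", so each link-time
# search path (-L) gains a matching Solaris runtime path (-R).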
NC_FLIBS="-lnetcdff $NC_LIBS"
AC_SUBST(NC_LIBS,[$NC_LIBS])
AC_SUBST(HAS_DAP,[$enable_dap])
AC_SUBST(HAS_DAP2,[$enable_dap])
AC_SUBST(HAS_DAP4,[$enable_dap4])
AC_SUBST(HAS_NC2,[$nc_build_v2])
AC_SUBST(HAS_NC4,[$enable_netcdf_4])
AC_SUBST(HAS_CDF5,[$enable_cdf5])
AC_SUBST(HAS_HDF4,[$enable_hdf4])
AC_SUBST(HAS_BENCHMARKS,[$enable_benchmarks])
AC_SUBST(HAS_HDF5,[$enable_hdf5])
AC_SUBST(HAS_PNETCDF,[$enable_pnetcdf])
AC_SUBST(HAS_LOGGING, [$enable_logging])
AC_SUBST(HAS_PARALLEL,[$enable_parallel])
AC_SUBST(HAS_PARALLEL4,[$enable_parallel4])
AC_SUBST(HAS_DISKLESS,[yes])
AC_SUBST(HAS_MMAP,[$enable_mmap])
AC_SUBST(HAS_JNA,[$enable_jna])
AC_SUBST(HAS_ERANGE_FILL,[$enable_erange_fill])
AC_SUBST(HAS_BYTERANGE,[$enable_byterange])
AC_SUBST(RELAX_COORD_BOUND,[yes])
AC_SUBST([HAS_PAR_FILTERS], [$hdf5_supports_par_filters])
AC_SUBST(HAS_S3,[$enable_s3])
AC_SUBST(HAS_S3_AWS,[$enable_s3_aws])
AC_SUBST(HAS_S3_INTERNAL,[$enable_s3_internal])
AC_SUBST(HAS_HDF5_ROS3,[$has_hdf5_ros3])
AC_SUBST(HAS_NCZARR,[$enable_nczarr])
AC_SUBST(ENABLE_S3_TESTING,[$with_s3_testing])
AC_SUBST(HAS_NCZARR_ZIP,[$enable_nczarr_zip])
AC_SUBST(DO_NCZARR_ZIP_TESTS,[$enable_nczarr_zip])
AC_SUBST(HAS_QUANTIZE,[$enable_quantize])
AC_SUBST(HAS_LOGGING,[$enable_logging])
AC_SUBST(DO_FILTER_TESTS,[$enable_filter_testing])
AC_SUBST(HAS_DEFLATE,[$have_deflate])
AC_SUBST(HAS_SZLIB,[$have_sz])
AC_SUBST(HAS_SZLIB_WRITE, [$have_sz])
AC_SUBST(HAS_ZSTD,[$have_zstd])
AC_SUBST(DO_LARGE_TESTS,[$enable_large_file_tests])
AC_SUBST(DO_REMOTE_FUNCTIONALITY,[$enable_remote_functionality])
if test "x$enable_s3_aws" = xyes ; then
AC_SUBST(WHICH_S3_SDK,[aws-sdk-cpp])
fi
if test "x$enable_s3_internal" = xyes; then
AC_SUBST(WHICH_S3_SDK,[internal])
fi
if test "x$enable_s3_aws" = xno && test "x$enable_s3_internal" = xno; then
AC_SUBST(WHICH_S3_SDK,[none])
fi
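# WHICH_S3_SDK (above) records which SDK was selected -- aws-sdk-cpp,
# internal, or none -- presumably so it can be reported, e.g. in
# libnetcdf.settings.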
# The Unidata testing S3 bucket
# WARNING: this must match the value in CMakeLists.txt
AC_DEFINE([S3TESTBUCKET], ["unidata-zarr-test-data"], [S3 test bucket])
AC_SUBST([S3TESTBUCKET],["unidata-zarr-test-data"])
# The working S3 path tree within the Unidata bucket.
# WARNING: this must match the value in CMakeLists.txt
AC_DEFINE([S3TESTSUBTREE], ["netcdf-c"], [S3 test path prefix])
AC_SUBST([S3TESTSUBTREE],[netcdf-c])
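# Illustration: generated test objects thus live under keys of the form
# "netcdf-c/..." within the "unidata-zarr-test-data" bucket.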
# Build a small unique id to avoid test interference between runs
# on the same platform.
# Note: $RANDOM is a bash extension; under a strict POSIX shell it may
# be empty, in which case the arithmetic below yields 1.
PLATFORMUID="$RANDOM"
# Force the uid into the range 1..1000 (always > 0)
PLATFORMUID=$((PLATFORMUID % 1000 + 1))
# Build a unique id from the current time (seconds since the epoch)
TESTUID=`date +%s`
AC_DEFINE_UNQUOTED([TESTUID], [${TESTUID}], [S3 working path])
AC_SUBST([TESTUID],${TESTUID})
AC_SUBST([PLATFORMUID],${PLATFORMUID})
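# Illustrative values: TESTUID might be 1695770000 (epoch seconds at
# configure time) and PLATFORMUID falls in 1..1000; together they let
# concurrent test runs write to distinct S3 key prefixes.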
# bzip2 (bz2) is always available as a standard filter
std_filters="bz2"
if test "x$have_deflate" = xyes ; then
std_filters="${std_filters} deflate"
fi
if test "x$have_sz" = xyes ; then
std_filters="${std_filters} szip"
fi
if test "x$have_blosc" = xyes ; then
std_filters="${std_filters} blosc"
fi
if test "x$have_zstd" = xyes ; then
std_filters="${std_filters} zstd"
fi
AC_SUBST(STD_FILTERS,[$std_filters])
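# For example, if zlib and zstd were found but szip and blosc were not,
# STD_FILTERS ends up as "bz2 deflate zstd".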
# If the user requests it, install the selected standard filters
AC_MSG_CHECKING([whether and where we should install plugins])
AC_ARG_WITH([plugin-dir], [AS_HELP_STRING([--with-plugin-dir=<absolute directory>|no|--without-plugin-dir],
[Install selected standard filters in specified or default directory])],
[],[with_plugin_dir=no])
AC_MSG_RESULT([$with_plugin_dir])
if test "x$with_plugin_dir" = xno ; then # option missing|disabled
with_plugin_dir=no
with_plugin_dir_setting="N.A."
enable_plugin_dir=no
elif test "x$with_plugin_dir" = xyes ; then # --with-plugin-dir, no argument
# Default to last dir (lowest search priority) in HDF5_PLUGIN_PATH
PLUGIN_PATH="$HDF5_PLUGIN_PATH"
if test "x${PLUGIN_PATH}" = x ; then
if test "x$ISMSVC" = xyes || test "x$ISMINGW" = xyes; then
PLUGIN_PATH="${ALLUSERSPROFILE}\\hdfd5\\lib\\plugin"
else
PLUGIN_PATH="/usr/local/hdf5/lib/plugin"
fi
fi
# Use the lowest priority dir in the path
if test "x$ISMSVC" = xyes || test "x$ISMINGW" = xyes; then
PLUGIN_DIR=`echo "$PLUGIN_PATH" | tr ';' ' '`
else
PLUGIN_DIR=`echo "$PLUGIN_PATH" | tr ':' ' '`
fi
for pp in ${PLUGIN_DIR} ; do last="$pp"; done
PLUGIN_DIR="$last"
with_plugin_dir_setting="$PLUGIN_DIR"
# canonical form is all forward slashes
with_plugin_dir=`echo "$PLUGIN_DIR" | tr '\\\\' '/'`
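   # Example (illustrative): "C:\ProgramData\hdf5\lib\plugin" becomes
   # "C:/ProgramData/hdf5/lib/plugin".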
enable_plugin_dir=yes
AC_MSG_NOTICE([Defaulting to --with-plugin-dir=$with_plugin_dir])
else # --with-plugin-dir=<dir|path>
with_plugin_dir_setting="$with_plugin_dir"
enable_plugin_dir=yes
fi
AM_CONDITIONAL([ENABLE_PLUGIN_DIR], [test "x$enable_plugin_dir" = xyes])
AC_SUBST([PLUGIN_INSTALL_DIR], [$with_plugin_dir])
# Better value for libnetcdf.settings
AC_SUBST([PLUGIN_INSTALL_DIR_SETTING], [$with_plugin_dir_setting])
# Access netcdf specific version of config.h
AH_BOTTOM([#include "ncconfigure.h"])
##################################################
# Uncomment this to keep a copy of autoconf defines at this point, for
# debugging purposes.
# cp confdefs.h my_config.h
#####
# Create output variables from various
# shell variables, for use in generating
# libnetcdf.settings.
#####
AC_SUBST([enable_shared])
AC_SUBST([enable_static])
AC_SUBST([CFLAGS])
AC_SUBST([CPPFLAGS])
AC_SUBST([LDFLAGS])
AC_SUBST([AM_CFLAGS])
AC_SUBST([AM_CPPFLAGS])
AC_SUBST([AM_LDFLAGS])
AC_SUBST([NOUNDEFINED])
# Define a helper macro used to set values in include/netcdf_meta.h.
# Args:
# 1. netcdf_meta.h variable
# 2. conditional variable that is yes or no.
# 3. the value of arg 2 that means "enabled"; the macro substitutes
#    arg 1 as 1 when args 2 and 3 are equal, and as 0 otherwise.
#
# example: AX_SET_META([NC_HAS_NC2],[$nc_build_v2],[yes])
#          AX_SET_META([NC_HAS_HDF4],[$enable_hdf4],[yes])
AC_DEFUN([AX_SET_META],[
if [ test "x$2" = "x$3" ]; then
AC_SUBST([$1]) $1=1
else
AC_SUBST([$1]) $1=0
fi
])
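# Illustration (assuming the usual @VAR@ substitution in netcdf_meta.h.in):
# AX_SET_META([NC_HAS_ZSTD],[$have_zstd],[yes]) replaces @NC_HAS_ZSTD@
# with 1 when $have_zstd is "yes", and with 0 otherwise.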
#####
# Define values used in include/netcdf_meta.h
#####
AC_SUBST([NC_VERSION]) NC_VERSION=$VERSION
AX_SET_META([NC_HAS_NC2],[$nc_build_v2],[yes])
AX_SET_META([NC_HAS_NC4],[$enable_netcdf_4],[yes])
AX_SET_META([NC_HAS_HDF4],[$enable_hdf4],[yes])
AX_SET_META([NC_HAS_BENCHMARKS],[$enable_benchmarks],[yes])
AX_SET_META([NC_HAS_HDF5],[$enable_hdf5],[yes])
AX_SET_META([NC_HAS_DAP2],[$enable_dap],[yes])
AX_SET_META([NC_HAS_DAP4],[$enable_dap4],[yes])
AX_SET_META([NC_HAS_DISKLESS],[yes],[yes])
AX_SET_META([NC_HAS_MMAP],[$enable_mmap],[yes])
AX_SET_META([NC_HAS_JNA],[$enable_jna],[yes])
AX_SET_META([NC_HAS_PNETCDF],[$enable_pnetcdf],[yes])
AX_SET_META([NC_HAS_PARALLEL],[$enable_parallel],[yes])
AX_SET_META([NC_HAS_PARALLEL4],[$enable_parallel4],[yes])
AX_SET_META([NC_HAS_CDF5],[$enable_cdf5],[yes])
AX_SET_META([NC_HAS_ERANGE_FILL], [$enable_erange_fill],[yes])
AX_SET_META([NC_HAS_PAR_FILTERS], [$hdf5_supports_par_filters],[yes])
AX_SET_META([NC_HAS_BYTERANGE],[$enable_byterange],[yes])
AX_SET_META([NC_HAS_S3],[$enable_s3],[yes])
AX_SET_META([NC_HAS_S3_AWS],[$enable_s3_aws],[yes])
AX_SET_META([NC_HAS_S3_INTERNAL],[$enable_s3_internal],[yes])
AX_SET_META([NC_HAS_HDF5_ROS3],[$has_hdf5_ros3],[yes])
AX_SET_META([NC_HAS_NCZARR],[$enable_nczarr],[yes])
AX_SET_META([NC_HAS_LOGGING],[$enable_logging],[yes])
AX_SET_META([NC_HAS_QUANTIZE],[$enable_quantize],[yes])
AX_SET_META([NC_HAS_SZIP],[$enable_hdf5_szip],[yes])
AX_SET_META([NC_HAS_ZSTD],[$have_zstd],[yes])
AX_SET_META([NC_HAS_BLOSC],[$have_blosc],[yes])
AX_SET_META([NC_HAS_BZ2],[$have_bz2],[yes])
# This is the version of the dispatch table. If the dispatch table is
# changed, this should be incremented, so that user-defined format
# applications like PIO can determine whether they have an appropriate
# dispatch table to submit. If this is changed, make sure the value in
# CMakeLists.txt also changes to match.
AC_SUBST([NC_DISPATCH_VERSION], [5])
AC_DEFINE_UNQUOTED([NC_DISPATCH_VERSION], [${NC_DISPATCH_VERSION}], [Dispatch table version.])
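# Illustrative (assumed client-side usage, not part of this build):
# a user-defined-format application can compare the NC_DISPATCH_VERSION
# it was compiled against with the value recorded in netcdf_meta.h before
# registering its dispatch table (e.g. via nc_def_user_format()).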
#####
# End netcdf_meta.h definitions.
#####
# ISCMAKE would be true for a CMake build; it is left empty for this
# autoconf build.
AC_SUBST([ISCMAKE], [])
# Provide always-false/always-true conditionals that can be used to
# temporarily suppress tests and other targets.
AM_CONDITIONAL([AX_DISABLE], [test xno = xyes])
AM_CONDITIONAL([AX_ENABLE], [test xyes = xyes])
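
# In a Makefile.am, wrapping a target in the always-false AX_DISABLE
# conditional suppresses it without deleting its rules. A sketch, where
# tst_flaky is a hypothetical test name:
#
#   if AX_DISABLE
#   TESTS += tst_flaky
#   endif
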
# Provide a conditional to identify tests that must be run manually.
AM_CONDITIONAL([AX_MANUAL], [test xno = xyes])
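
# Since AX_MANUAL is likewise always false, tests wrapped in it are
# excluded from "make check" and must be invoked by hand. A sketch with
# a hypothetical program name:
#
#   if AX_MANUAL
#   check_PROGRAMS += tst_manual_probe
#   endif
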
AC_MSG_NOTICE([generating header files and makefiles])
AC_CONFIG_FILES(test_common.sh:test_common.in)
AC_CONFIG_FILES(s3cleanup.sh:s3cleanup.in, [chmod ugo+x s3cleanup.sh])
AC_CONFIG_FILES(s3gc.sh:s3gc.in, [chmod ugo+x s3gc.sh])
AC_CONFIG_FILES(nc_test4/findplugin.sh:nc_test4/findplugin.in, [chmod ugo+x nc_test4/findplugin.sh])
AC_CONFIG_FILES(nczarr_test/findplugin.sh:nc_test4/findplugin.in, [chmod ugo+x nczarr_test/findplugin.sh])
AC_CONFIG_FILES(plugins/findplugin.sh:nc_test4/findplugin.in, [chmod ugo+x plugins/findplugin.sh])
AC_CONFIG_FILES(examples/C/findplugin.sh:nc_test4/findplugin.in, [chmod ugo+x examples/C/findplugin.sh])
AC_CONFIG_FILES(ncdap_test/findtestserver.c:ncdap_test/findtestserver.c.in, [chmod ugo+x ncdap_test/findtestserver.c])
AC_CONFIG_FILES([nc_test/run_pnetcdf_tests.sh:nc_test/run_pnetcdf_tests.sh.in],[chmod ugo+x nc_test/run_pnetcdf_tests.sh])
AC_CONFIG_FILES(dap4_test/findtestserver4.c:ncdap_test/findtestserver.c.in)
AC_CONFIG_FILES(dap4_test/pingurl4.c:ncdap_test/pingurl.c)
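
# Note the AC_CONFIG_FILES(output:template, [command]) form used above:
# several outputs may share one template (e.g. nc_test4/findplugin.in),
# and the optional command runs after each file is generated. A sketch
# with a hypothetical directory:
#
#   AC_CONFIG_FILES(newdir/findplugin.sh:nc_test4/findplugin.in,
#                   [chmod ugo+x newdir/findplugin.sh])
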
AC_CONFIG_FILES([h5_test/run_par_tests.sh], [chmod ugo+x h5_test/run_par_tests.sh])
AC_CONFIG_FILES([nc_test4/run_par_test.sh], [chmod ugo+x nc_test4/run_par_test.sh])
AC_CONFIG_FILES([nc_perf/run_par_bm_test.sh], [chmod ugo+x nc_perf/run_par_bm_test.sh])
AC_CONFIG_FILES([nc_perf/run_gfs_test.sh], [chmod ugo+x nc_perf/run_gfs_test.sh])
AC_CONFIG_FILES(nczarr_test/timer_utils.h:unit_test/timer_utils.h)
AC_CONFIG_FILES(nczarr_test/timer_utils.c:unit_test/timer_utils.c)
AC_CONFIG_FILES(nczarr_test/test_filter.c:nc_test4/test_filter.c)
AC_CONFIG_FILES(nczarr_test/test_filter_misc.c:nc_test4/test_filter_misc.c)
AC_CONFIG_FILES(nczarr_test/tst_multifilter.c:nc_test4/tst_multifilter.c)
AC_CONFIG_FILES(nczarr_test/test_filter_repeat.c:nc_test4/test_filter_repeat.c)
AC_CONFIG_FILES(nczarr_test/test_filter_order.c:nc_test4/test_filter_order.c)
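
# The copy:source entries above clone test sources from nc_test4 and
# unit_test into nczarr_test, so the same filter tests can be exercised
# against the NCZarr code path. Another shared test would follow the
# same pattern (tst_new_filter.c is hypothetical):
#
#   AC_CONFIG_FILES(nczarr_test/tst_new_filter.c:nc_test4/tst_new_filter.c)
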
AC_CONFIG_FILES([examples/C/run_par_test.sh], [chmod ugo+x examples/C/run_par_test.sh])
AC_CONFIG_FILES([nc-config], [chmod 755 nc-config])
AC_CONFIG_FILES([Makefile
netcdf.pc
libnetcdf.settings
include/netcdf_meta.h
include/netcdf_dispatch.h
include/Makefile
h5_test/Makefile
hdf4_test/Makefile
libsrc/Makefile
libsrc4/Makefile
libhdf5/Makefile
libsrcp/Makefile
ncdump/Makefile
ncgen3/Makefile
ncgen/Makefile
examples/Makefile
examples/C/Makefile
examples/CDL/Makefile
oc2/Makefile
libdap2/Makefile
libdap4/Makefile
libhdf4/Makefile
libnczarr/Makefile
libncpoco/Makefile
libncxml/Makefile
libdispatch/Makefile
liblib/Makefile
ncdump/cdl/Makefile
ncdump/expected/Makefile
docs/Makefile
docs/images/Makefile
unit_test/Makefile
nctest/Makefile
nc_test4/Makefile
nc_perf/Makefile
nc_test/Makefile
ncdap_test/Makefile
ncdap_test/testdata3/Makefile
ncdap_test/expected3/Makefile
ncdap_test/expectremote3/Makefile
ncdap_test/expectedhyrax/Makefile
dap4_test/Makefile
plugins/Makefile
nczarr_test/Makefile
])
AC_OUTPUT()
cat libnetcdf.settings