netcdf-c/libsrc/Makefile.am

## This is an automake file, part of Unidata's netCDF package.
# Copyright 2008; see the COPYRIGHT file for more information.

# This automake file is in charge of building the libsrc directory,
# which contains the classic library code.

include $(top_srcdir)/lib_flags.am
libnetcdf3_la_CPPFLAGS = ${AM_CPPFLAGS}

# These files comprise the netCDF-3 classic library code.
libnetcdf3_la_SOURCES = v1hpg.c \
putget.c attr.c nc3dispatch.c nc3internal.c var.c dim.c ncx.c \
ncx.h lookup3.c pstdint.h ncio.c ncio.h memio.c
if BUILD_MMAP
libnetcdf3_la_SOURCES += mmapio.c
endif BUILD_MMAP

# Does the user want to use ffio, a posixio replacement for Cray
# computers? If not, ncstdio may be selected instead of the default
# posixio.
if USE_FFIO
libnetcdf3_la_SOURCES += ffio.c
else !USE_FFIO
if USE_STDIO
libnetcdf3_la_SOURCES += ncstdio.c
else !USE_STDIO
libnetcdf3_la_SOURCES += posixio.c
endif !USE_STDIO
endif !USE_FFIO
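
# The I/O conditionals above (BUILD_MMAP, USE_FFIO, USE_STDIO) are
# AM_CONDITIONALs defined by configure. A minimal configure.ac sketch,
# with a hypothetical option name:
#   AM_CONDITIONAL([USE_FFIO], [test "x$enable_ffio" = xyes])

# Optionally build the byte-range I/O drivers, which read remote
# datasets using HTTP byte-range requests; see docs/byterange.dox.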
if NETCDF_ENABLE_BYTERANGE
libnetcdf3_la_SOURCES += httpio.c
if NETCDF_ENABLE_S3
# S3-specific byte-range access (enabled with configure --enable-s3).
libnetcdf3_la_SOURCES += s3io.c
endif NETCDF_ENABLE_S3
endif NETCDF_ENABLE_BYTERANGE
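
# The classic library is built as a noinst convenience library; it is
# not installed on its own, but is linked into the main netCDF library.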
noinst_LTLIBRARIES = libnetcdf3.la

# These files are cleaned on developer workstations (and then rebuilt
# with m4), but they are included in the distribution so that the user
# does not have to have m4.
MAINTAINERCLEANFILES = $(man_MANS) attr.c ncx.c putget.c
EXTRA_DIST = attr.m4 ncx.m4 putget.m4 $(man_MANS) CMakeLists.txt

# This tells make how to turn .m4 files into .c files.
.m4.c:
	m4 $(AM_M4FLAGS) $(M4FLAGS) -s $< > $(top_srcdir)/libsrc/$(@F)
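# For example, ncx.c is regenerated from ncx.m4 roughly as
#   m4 -s ncx.m4 > $(top_srcdir)/libsrc/ncx.c
# where -s makes m4 emit #line directives pointing back at the .m4 file.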

# The C API man page.
man_MANS = netcdf.3

# Decide what goes in the man page, based on user configure options.
ARGS_MANPAGE = -DAPI=C
if USE_NETCDF4
ARGS_MANPAGE += -DNETCDF4=TRUE
endif
if BUILD_DAP
ARGS_MANPAGE += -DDAP=TRUE
endif
if BUILD_PARALLEL
ARGS_MANPAGE += -DPARALLEL_IO=TRUE
endif

# This rule generates the C manpage.
netcdf.3: $(top_srcdir)/docs/netcdf.m4
	m4 $(M4FLAGS) $(ARGS_MANPAGE) $? >$@ || rm $@
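
# Note: $? expands to the prerequisites newer than the target (here
# docs/netcdf.m4), and '|| rm $@' deletes a partially written netcdf.3
# if m4 fails, so a broken man page is not left behind for later runs.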