/**
@if INTERNAL

@page byterange Remote Dataset Access Using HTTP Byte Ranges

\tableofcontents

<!-- Note that this file has the .dox extension, but is mostly markdown -->

<!-- Begin MarkDown -->

# Introduction {#byterange_intro}

Suppose that you have the URL to a remote dataset
which is a normal netcdf-3 or netcdf-4 file.

The netCDF-c library now supports read-only access to such
datasets using the HTTP byte range capability, assuming that
the remote server supports byte-range access.

Two examples:

1. An Amazon S3 object containing a netcdf classic file.
    - location: "http://noaa-goes16.s3.amazonaws.com/ABI-L1b-RadC/2017/059/03/OR_ABI-L1b-RadC-M3C13_G16_s20170590337505_e20170590340289_c20170590340316.nc#mode=bytes"
2. A Thredds Server dataset supporting the Thredds HTTPServer protocol
   and containing a netcdf enhanced file.
    - location: "https://remotetest.unidata.ucar.edu/thredds/fileServer/testdata/2004050300_eta_211.nc#mode=bytes"

Other remote servers may also provide byte-range access in a similar form.

It is important to note that this is not intended as a true
production capability because it is believed that this kind of access
can be quite slow. In addition, the byte-range IO drivers do not
currently do any sort of optimization or caching.

# Configuration {#byterange_config}

This capability is enabled using the *--enable-byterange* option
to the *./configure* command for Automake. For CMake, the option flag is
*-DENABLE_BYTERANGE=true*.

This capability requires access to *libcurl*, and an error will occur
if byterange is enabled but *libcurl* cannot be located.
In this respect, it is similar to the DAP2 and DAP4 capabilities.

Note also that here, the term "http" is often used as a synonym for *byterange*.

# Run-time Usage {#byterange_url}

In order to use this capability at run-time, with *ncdump* for
example, it is necessary to provide a URL pointing to the basic
dataset to be accessed. The URL must be annotated to tell the
netcdf-c library that byte-range access should be used. This is
indicated by appending the phrase ````#mode=bytes````
to the end of the URL.
The two examples above show how this will look.
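
For example, the same kind of URL can be passed directly to *nc_open*.
The following minimal sketch (assuming a build configured with
*--enable-byterange*) opens the first example dataset read-only;
apart from the URL suffix, the netcdf API is used unchanged.

```c
/* Minimal sketch: open a remote dataset read-only over HTTP byte ranges.
 * Assumes the netcdf-c library was built with --enable-byterange. */
#include <stdio.h>
#include <netcdf.h>

int main(void)
{
    int stat, ncid;
    const char* url =
        "http://noaa-goes16.s3.amazonaws.com/ABI-L1b-RadC/2017/059/03/"
        "OR_ABI-L1b-RadC-M3C13_G16_s20170590337505_e20170590340289_c20170590340316.nc"
        "#mode=bytes";

    if ((stat = nc_open(url, NC_NOWRITE, &ncid)) != NC_NOERR) {
        fprintf(stderr, "nc_open: %s\n", nc_strerror(stat));
        return 1;
    }
    /* ... read metadata and data as with any local file ... */
    return (nc_close(ncid) == NC_NOERR) ? 0 : 1;
}
```

Equivalently, ````ncdump -h```` accepts the same annotated URL.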

In order to determine the kind of file being accessed, the
netcdf-c library will read what is called the "magic number"
from the beginning of the remote dataset. This magic number
is a specific set of bytes that indicates the kind of file:
classic, enhanced, cdf5, etc.
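
The sketch below illustrates the idea with a hypothetical helper
(*classify_magic* is not part of the library); the magic numbers shown
are the standard ones for the netcdf classic family and for HDF5.

```c
/* Hypothetical illustration of magic-number classification; the
 * library's actual model inference code is more elaborate. */
#include <string.h>

static const char* classify_magic(const unsigned char magic[8])
{
    /* HDF5 (netcdf-4/enhanced) signature: \211 'H' 'D' 'F' \r \n \032 \n */
    if (memcmp(magic, "\211HDF\r\n\032\n", 8) == 0)
        return "enhanced (HDF5)";
    /* Classic family: 'C' 'D' 'F' followed by a version byte */
    if (memcmp(magic, "CDF", 3) == 0) {
        switch (magic[3]) {
        case 1: return "classic";
        case 2: return "64-bit offset";
        case 5: return "cdf5";
        }
    }
    return "unknown";
}
```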

# Architecture {#byterange_arch}

Internally, this capability is implemented with three files:

1. libdispatch/dhttp.c -- wrap libcurl operations.
2. libsrc/httpio.c -- provide byte-range reading to the netcdf-3 dispatcher.
3. libhdf5/H5FDhttp.c -- provide byte-range reading to the netcdf-4 dispatcher.

Both *httpio.c* and *H5FDhttp.c* are adapters that use *dhttp.c*
to do the work. Testing for the magic number is also carried out
by using the *dhttp.c* code.

## NetCDF Classic Access

The netcdf-3 code in the directory *libsrc* is built using
a secondary dispatch mechanism called *ncio*. This allows the
netcdf-3 code to be independent of the lowest level IO access mechanisms.
This is how in-memory and mmap based access is implemented.
The file *httpio.c* is the dispatcher used to provide byte-range
IO for the netcdf-3 code.

Note that *httpio.c* is mostly just an
adapter between the *ncio* API and the *dhttp.c* code.
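
Conceptually, such an adapter is a table of IO callbacks whose read
operation forwards to *dhttp.c*. The struct below is a deliberately
simplified, hypothetical illustration of that shape; the real *ncio*
interface in *libsrc/ncio.h* defines a different and richer set of
operations.

```c
/* Hypothetical sketch of the adapter shape only; the actual ncio
 * interface in libsrc/ncio.h differs in detail. */
typedef long long fileoffset_t;

typedef struct example_io {
    void* curl;           /* handle obtained from nc_http_open */
    fileoffset_t filelen; /* taken from the Content-Length header */
    /* fetch the bytes [start, start+count) from the remote dataset */
    int (*get)(struct example_io*, fileoffset_t start, fileoffset_t count,
               void* buf);
    /* release the curl handle and any other state */
    int (*close)(struct example_io*);
} example_io;
```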

## NetCDF Enhanced Access

Similar to the netcdf-3 code, the HDF5 library
provides a secondary dispatch mechanism *H5FD*. This allows the
HDF5 code to be independent of the lowest level IO access mechanisms.
The netcdf-4 code in libhdf5 is built on the HDF5 library, so
it indirectly inherits the H5FD mechanism.

The file *H5FDhttp.c* implements the H5FD dispatcher API
and provides byte-range IO for the netcdf-4 code
(and for the HDF5 library as a side effect).

Note that *H5FDhttp.c* is mostly just an
adapter between the *H5FD* API and the *dhttp.c* code.
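
For orientation, a custom H5FD driver is made known to the HDF5
library by registering its class table; the fragment below sketches
that general pattern. The symbol name *H5FD_http_g* is hypothetical,
used here only for illustration.

```c
/* Hypothetical fragment showing the usual H5FD registration pattern. */
#include "hdf5.h"

extern const H5FD_class_t H5FD_http_g;  /* the driver's callback table */

static hid_t http_driver_id = -1;

/* Register the driver once and return its HDF5 driver id. */
hid_t register_http_vfd(void)
{
    if (http_driver_id < 0)
        http_driver_id = H5FDregister(&H5FD_http_g);
    return http_driver_id;
}
```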

# The dhttp.c Code {#byterange_dhttp}

The core of all this is *dhttp.c* (and its header
*include/nchttp.h*). It is a wrapper over *libcurl*
and so exposes the libcurl handles -- albeit as _void*_.

The API for *dhttp.c* consists of the following procedures:

- int nc_http_open(const char* objecturl, void** curlp, fileoffset_t* filelenp);
- int nc_http_read(void* curl, const char* url, fileoffset_t start, fileoffset_t count, NCbytes* buf);
- int nc_http_close(void* curl);
- typedef long long fileoffset_t;

The type *fileoffset_t* is used to avoid use of *off_t* or *off64_t*,
which are too volatile. It is intended to represent file lengths
and offsets.
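
As a usage illustration, the sketch below strings the three calls
together using exactly the signatures listed above. Error checking is
elided, and the *NCbytes* helpers are assumed to be those declared in
*include/ncbytes.h*.

```c
/* Sketch of the open/read/close cycle; error checking elided. */
#include "ncbytes.h"   /* NCbytes and its helpers */
#include "nchttp.h"    /* nc_http_open/read/close, fileoffset_t */

static void read_magic_number(const char* url)
{
    void* curl = NULL;
    fileoffset_t filelen = 0;
    NCbytes* buf = ncbytesnew();

    nc_http_open(url, &curl, &filelen);  /* verify byte ranges; get length */
    nc_http_read(curl, url, 0, 8, buf);  /* first 8 bytes: the magic number */
    /* ... inspect the bytes in buf ... */
    nc_http_close(curl);                 /* free the curl handle */
    ncbytesfree(buf);
}
```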

## nc_http_open

The *nc_http_open* procedure creates a *Curl* handle and returns it
in the *curlp* argument. It also retrieves the HTTP response headers
and searches them for two headers:

1. "Accept-Ranges: bytes" -- to verify that byte-range access is supported.
2. "Content-Length: ..." -- to obtain the size of the remote dataset.

The dataset length is returned in the *filelenp* argument.

## nc_http_read

The *nc_http_read* procedure reads a contiguous range of bytes
as specified by the *start* and *count* arguments. It takes the *Curl*
handle produced by *nc_http_open* to indicate the server from which to read.

The *buf* argument is a pointer to an instance of type *NCbytes*, which
is a dynamically expandable byte vector (see the file *include/ncbytes.h*).
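
A short, hedged sketch of typical *NCbytes* usage follows; the helper
names are taken from *include/ncbytes.h* and should be checked against
that header.

```c
/* Sketch of NCbytes usage; helper names assumed from include/ncbytes.h. */
#include "ncbytes.h"

static void ncbytes_demo(void)
{
    NCbytes* buf = ncbytesnew();            /* empty, growable byte vector */
    ncbytesappendn(buf, "CDF\001", 4);      /* append raw bytes; buf grows */
    unsigned long len = ncbyteslength(buf); /* number of bytes stored */
    char* bytes = ncbytescontents(buf);     /* pointer to the stored bytes */
    (void)len; (void)bytes;                 /* silence unused warnings */
    ncbytesfree(buf);                       /* release the vector */
}
```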

This procedure reads *count* bytes from the remote dataset starting at
offset *start*. The bytes are stored in *buf*.

## nc_http_close

The *nc_http_close* function closes the *Curl* handle and does any
necessary cleanup.

# Point of Contact {#byterange_poc}

__Author__: Dennis Heimbigner<br>
__Email__: dmh at ucar dot edu<br>
__Initial Version__: 12/30/2018<br>
__Last Revised__: 12/30/2018

<!-- End MarkDown -->

@endif
*/