/*********************************************************************
 * Copyright 2018, UCAR/Unidata
 * See netcdf/COPYRIGHT file for copying and redistribution conditions.
 * ********************************************************************/

#if HAVE_CONFIG_H
#include <config.h>
#endif

#include <assert.h>
#include <stdlib.h>
#include <errno.h>
#include <string.h>
#ifdef HAVE_FCNTL_H
#include <fcntl.h>
#endif
#ifdef _MSC_VER /* Microsoft Compilers */
#include <io.h>
#endif
#ifdef HAVE_UNISTD_H
#include <unistd.h>
#endif

#include "nc3internal.h"
#include "nclist.h"
|
|
|
|
#include "ncbytes.h"
|

#undef DEBUG

#ifdef DEBUG
#include <stdio.h>
#endif

#include <curl/curl.h>

#include "ncio.h"
#include "fbits.h"
#include "rnd.h"
#include "ncbytes.h"
#include "nchttp.h"

#define DEFAULTPAGESIZE 16384

/* Private data */

typedef struct NCHTTP {
    NC_HTTP_STATE* state;
    size64_t size; /* of the S3 object */
    NCbytes* region;
} NCHTTP;
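
/* How the private state is used: nc_http_open() fills in `state` (the HTTP
 * session handle) and `size` when the dataset is opened; httpio_get()
 * allocates `region` to hold the bytes of the requested (offset,extent)
 * range; httpio_rel() frees it again. */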

/* Forward */
static int httpio_rel(ncio *const nciop, off_t offset, int rflags);
static int httpio_get(ncio *const nciop, off_t offset, size_t extent, int rflags, void **const vpp);
static int httpio_move(ncio *const nciop, off_t to, off_t from, size_t nbytes, int rflags);
static int httpio_sync(ncio *const nciop);
static int httpio_filesize(ncio* nciop, off_t* filesizep);
static int httpio_pad_length(ncio* nciop, off_t length);
static int httpio_close(ncio* nciop, int);

static long pagesize = 0;

/* Create a new ncio struct to hold info about the file. */
static int
httpio_new(const char* path, int ioflags, ncio** nciopp, NCHTTP** hpp)
{
    int status = NC_NOERR;
    ncio* nciop = NULL;
    NCHTTP* http = NULL;

    if(pagesize == 0)
        pagesize = DEFAULTPAGESIZE;
    errno = 0;

    nciop = (ncio*)calloc(1,sizeof(ncio));
    if(nciop == NULL) {status = NC_ENOMEM; goto fail;}

    nciop->ioflags = ioflags;

    *((char**)&nciop->path) = strdup(path);
    if(nciop->path == NULL) {status = NC_ENOMEM; goto fail;}

    *((ncio_relfunc**)&nciop->rel) = httpio_rel;
    *((ncio_getfunc**)&nciop->get) = httpio_get;
    *((ncio_movefunc**)&nciop->move) = httpio_move;
    *((ncio_syncfunc**)&nciop->sync) = httpio_sync;
    *((ncio_filesizefunc**)&nciop->filesize) = httpio_filesize;
    *((ncio_pad_lengthfunc**)&nciop->pad_length) = httpio_pad_length;
    *((ncio_closefunc**)&nciop->close) = httpio_close;

    http = (NCHTTP*)calloc(1,sizeof(NCHTTP));
    if(http == NULL) {status = NC_ENOMEM; goto fail;}
    *((void**)&nciop->pvt) = http;

    if(nciopp) *nciopp = nciop;
    if(hpp) *hpp = http;

done:
    return status;

fail:
    if(http != NULL) {
        if(http->region)
            ncbytesfree(http->region);
        free(http);
    }
    if(nciop != NULL) {
        if(nciop->path != NULL) free((char*)nciop->path);
        free(nciop); /* otherwise nciop itself leaks on the failure path */
    }
    goto done;
}

/* Create a file, and the ncio struct to go with it. This function is
   only called from nc__create_mp.

   path - path of file to create.
   ioflags - flags from nc_create
   initialsz - From the netcdf man page: "The argument
   initialsize sets the initial size of the file at creation time."
   igeto -
   igetsz -
   sizehintp - the size of a page of data for buffered reads and writes.
   nciopp - pointer to a pointer that will get location of newly
   created and inited ncio struct.
   mempp - pointer to pointer to the initial memory read.
*/
int
httpio_create(const char* path, int ioflags,
              size_t initialsz,
              off_t igeto, size_t igetsz, size_t* sizehintp,
              void* parameters,
              ncio* *nciopp, void** const mempp)
{
    return NC_EPERM;
}
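
/* Byte-range access is read-only: creation is rejected above with NC_EPERM,
 * and the write-oriented entry points below (httpio_pad_length, httpio_sync,
 * httpio_move) are correspondingly no-ops or also return NC_EPERM. */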

/* This function opens the data file. It is only called from nc.c,
   from nc__open_mp and nc_delete_mp.

   path - path of data file.
   ioflags - flags passed into nc_open.
   igeto - looks like this function can do an initial page get, and
   igeto is going to be the offset for that. But it appears to be
   unused.
   igetsz - the size in bytes of initial page get (a.k.a. extent). Not
   ever used in the library.
   sizehintp - the size of a page of data for buffered reads and writes.
   nciopp - pointer to pointer that will get address of newly created
   and inited ncio struct.
   mempp - pointer to pointer to the initial memory read.
*/
int
httpio_open(const char* path,
            int ioflags,
            /* ignored */ off_t igeto, size_t igetsz, size_t* sizehintp,
            /* ignored */ void* parameters,
            ncio* *nciopp,
            /* ignored */ void** const mempp)
{
    ncio* nciop;
    int status;
    NCHTTP* http = NULL;
    size_t sizehint;

    if(path == NULL || *path == 0)
        return EINVAL;

    /* Create private data */
    if((status = httpio_new(path, ioflags, &nciop, &http))) goto done;

    /* Open the path and get curl handle and object size */
    if((status = nc_http_open(path,&http->state,&http->size))) goto done;

    sizehint = pagesize;

    /* sizehint must be multiple of 8 */
    sizehint = (sizehint / 8) * 8;
    if(sizehint < 8) sizehint = 8;

    *sizehintp = sizehint;
    *nciopp = nciop;
done:
    if(status)
        httpio_close(nciop,0);
    return status;
}
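
/* Illustrative sketch (not part of this file): this driver is normally
 * reached through the public netCDF API by opening a byte-range URL,
 * typically signalled with a "#mode=bytes" fragment; the dispatch layer
 * then routes low-level reads to httpio_open/httpio_get. The URL below is
 * a hypothetical example.
 *
 *     int ncid;
 *     if(nc_open("https://example.com/data/file.nc#mode=bytes",
 *                NC_NOWRITE, &ncid) == NC_NOERR)
 *         nc_close(ncid);
 */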

/*
 * Get file size in bytes.
 */
static int
httpio_filesize(ncio* nciop, off_t* filesizep)
{
    NCHTTP* http;
    if(nciop == NULL || nciop->pvt == NULL) return NC_EINVAL;
    http = (NCHTTP*)nciop->pvt;
    if(filesizep != NULL) *filesizep = http->size;
    return NC_NOERR;
}

/*
 * Sync any changes to disk, then truncate or extend file so its size
 * is length. This is only intended to be called before close, if the
 * file is open for writing and the actual size does not match the
 * calculated size, perhaps as the result of having been previously
 * written in NOFILL mode.
 */
static int
httpio_pad_length(ncio* nciop, off_t length)
{
    return NC_NOERR; /* do nothing */
}

/* Write out any dirty buffers to disk and
   ensure that next read will get data from disk.
   Sync any changes, then close the open file associated with the ncio
   struct, and free its memory.
   nciop - pointer to ncio to close.
   doUnlink - if true, unlink file
*/
static int
httpio_close(ncio* nciop, int doUnlink)
{
    int status = NC_NOERR;
    NCHTTP* http;
    if(nciop == NULL || nciop->pvt == NULL) return NC_NOERR;

    http = (NCHTTP*)nciop->pvt;
    assert(http != NULL);

    status = nc_http_close(http->state);

    /* do cleanup */
    if(http != NULL) {
        ncbytesfree(http->region);
        free(http);
    }
    if(nciop->path != NULL) free((char*)nciop->path);
    free(nciop);
    return status;
}

/*
 * Request that the region (offset, extent)
 * be made available through *vpp.
 */
static int
httpio_get(ncio* const nciop, off_t offset, size_t extent, int rflags, void** const vpp)
{
    int status = NC_NOERR;
    NCHTTP* http;

    if(nciop == NULL || nciop->pvt == NULL) {status = NC_EINVAL; goto done;}
    http = (NCHTTP*)nciop->pvt;

    assert(http->region == NULL);
    http->region = ncbytesnew();
    ncbytessetalloc(http->region,(unsigned long)extent);
    if((status = nc_http_read(http->state,nciop->path,offset,extent,http->region)))
        goto done;
    assert(ncbyteslength(http->region) == extent);
    if(vpp) *vpp = ncbytescontents(http->region);
done:
    return status;
}
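
/* Sketch of the expected caller-side pairing (illustrative only, using the
 * ncio function pointers declared above):
 *
 *     void* p = NULL;
 *     int stat = nciop->get(nciop, offset, extent, 0, &p);
 *     if(stat == NC_NOERR) {
 *         ... use the extent bytes at p ...
 *         stat = nciop->rel(nciop, offset, 0);
 *     }
 *
 * Each get must be released before the next, which is why httpio_get()
 * asserts that http->region is NULL on entry.
 */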

/*
 * Like memmove(), safely move possibly overlapping data.
 */
static int
httpio_move(ncio* const nciop, off_t to, off_t from, size_t nbytes, int ignored)
{
    return NC_EPERM;
}

static int
httpio_rel(ncio* const nciop, off_t offset, int rflags)
{
    int status = NC_NOERR;
    NCHTTP* http;

    if(nciop == NULL || nciop->pvt == NULL) {status = NC_EINVAL; goto done;}
    http = (NCHTTP*)nciop->pvt;
    ncbytesfree(http->region);
    http->region = NULL;
done:
    return status;
}

/*
 * Write out any dirty buffers to disk and
 * ensure that next read will get data from disk.
 */
static int
httpio_sync(ncio* const nciop)
{
    return NC_NOERR; /* do nothing */
}