netcdf-c/include/ncconfigure.h
Dennis Heimbigner bf2746b8ea Provide byte-range reading of remote datasets
re: issue https://github.com/Unidata/netcdf-c/issues/1251

Assume that you have the URL to a remote dataset
which is a normal netcdf-3 or netcdf-4 file.

This PR allows the netcdf-c library to read that dataset's
contents as a netcdf file using HTTP byte ranges,
provided the remote server supports byte-range access.
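
As a minimal sketch, opening such a dataset looks like an ordinary
nc_open call, assuming the #mode=bytes URL fragment described in
docs/byterange.dox (the URL itself is illustrative):

    #include <stdio.h>
    #include <netcdf.h>

    int main(void)
    {
        int ncid, stat;
        /* Illustrative URL: any HTTP(S) server honoring Range requests */
        const char* url = "https://example.com/data/sample.nc#mode=bytes";

        if ((stat = nc_open(url, NC_NOWRITE, &ncid)) != NC_NOERR) {
            fprintf(stderr, "nc_open: %s\n", nc_strerror(stat));
            return 1;
        }
        /* ...read metadata and variables as with a local file... */
        return (nc_close(ncid) == NC_NOERR) ? 0 : 1;
    }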

Originally, this PR was set up to access Amazon S3 objects,
but it can also access other remote datasets, such as those
provided by a THREDDS server via its HTTPServer access protocol.
It may also work for other kinds of servers.

Note that this is not intended as a true production
capability because, as is well known, this kind of access
can be quite slow. In addition, the byte-range IO drivers
do not currently do any sort of optimization or caching.

An additional goal here is to gain some experience with
the Amazon S3 REST protocol.

This architecture and its use are documented in
the file docs/byterange.dox.

There are currently two test cases:

1. nc_test/tst_s3raw.c - this does a simple open, check-format, close cycle
   for a remote netcdf-3 file and a remote netcdf-4 file (sketched below).
2. nc_test/test_s3raw.sh - this uses ncdump to investigate some remote
   datasets.
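
In outline, and with an illustrative URL and helper rather than the
objects the test actually uses, the tst_s3raw.c cycle amounts to:

    #include <netcdf.h>

    /* Open a remote file, verify its format, and close it.
       expected_format is e.g. NC_FORMAT_CLASSIC or NC_FORMAT_NETCDF4. */
    static int check_remote(const char* url, int expected_format)
    {
        int ncid, format, stat;
        if (nc_open(url, NC_NOWRITE, &ncid) != NC_NOERR) return 1;
        stat = nc_inq_format(ncid, &format);
        if (nc_close(ncid) != NC_NOERR) return 1;
        return (stat == NC_NOERR && format == expected_format) ? 0 : 1;
    }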

This PR also incorporates significantly changed model inference code
(see the superseded PR https://github.com/Unidata/netcdf-c/pull/1259).

1. It centralizes the code that infers the dispatcher.
2. It adds support for byte-range URLs.

Other changes:

1. Fix nc_finalize() so that it properly calls NC_HDF5_finalize.
2. Fix a minor bug in ncgen3.l.
3. Fix a memory leak in nc4info.c.
4. Add code to walk the .daprc triples and replace the protocol=
   fragment tag with a more general mode= tag (see the example below).
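
For illustration only (URL and fragment values are hypothetical),
the rewrite turns a fragment such as

    https://example.com/dap/data.nc#protocol=dap4

into the equivalent

    https://example.com/dap/data.nc#mode=dap4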

Final Note:
The inference code is still way too complicated. We need to move
to the validfile() model used by netCDF-Java, where each
dispatcher is asked if it can process the file. This decentralizes
the inference code. This will be done after all the major new
dispatchers (PIO, Zarr, etc.) have been implemented.
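
As a purely hypothetical sketch (none of these names are actual
netcdf-c API), the decentralized model would give each dispatch
table a probe function that the open path tries in turn:

    #include <stddef.h>

    /* Hypothetical probe-based inference; all names illustrative. */
    typedef struct ProbeDispatch {
        const char* name;
        /* nonzero if this dispatcher can process the file */
        int (*validfile)(const char* path, int omode);
    } ProbeDispatch;

    static const ProbeDispatch*
    infer_dispatcher(const ProbeDispatch** tables, size_t ntables,
                     const char* path, int omode)
    {
        size_t i;
        for(i = 0; i < ntables; i++) {
            if(tables[i]->validfile(path, omode))
                return tables[i]; /* first dispatcher to claim the file wins */
        }
        return NULL; /* no dispatcher recognized the file */
    }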
2019-01-01 18:27:36 -07:00


/*
* Copyright 2018 University Corporation for Atmospheric
* Research/Unidata. See COPYRIGHT file for more info.
*
* This header file provides alternatives to missing functions
* and definitions for missing types.
*
*/
/* "$Id: netcdf_par.h,v 1.1 2010/06/01 15:46:49 ed Exp $" */
#ifndef NCCONFIGURE_H
#define NCCONFIGURE_H 1
#ifdef HAVE_STDLIB_H
#include <stdlib.h>
#endif
/*
This is included at the bottom
of config.h. It is where,
typically, alternatives to
missing functions and definitions
for missing types should be placed.
*/
#ifndef HAVE_STRDUP
extern char* strdup(const char*);
#endif
/* handle null arguments */
#ifndef nulldup
#ifdef HAVE_STRDUP
#define nulldup(s) ((s)==NULL?NULL:strdup(s))
#else
char *nulldup(const char* s);
#endif
#endif
#ifdef _MSC_VER
#ifndef HAVE_SSIZE_T
#include <basetsd.h>
typedef SSIZE_T ssize_t;
#define HAVE_SSIZE_T 1
#endif
#endif
#ifndef HAVE_STRLCAT
#ifdef _MSC_VER
/* Map strlcat onto Windows strcat_s, which takes the destination
   size as its second argument; note that strcat_s reports an error
   rather than truncating, so it is only approximately equivalent. */
#define strlcat(d,s,n) strcat_s((d),(n),(s))
#else
extern size_t strlcat(char* dst, const char* src, size_t dsize);
#endif
#endif
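/* Example usage (illustrative): strlcat(buf, suffix, sizeof(buf));
   the third argument is the total size of the destination buffer,
   not the space remaining in it. */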
#ifndef nulllen
#define nulllen(s) ((s)==NULL?0:strlen(s))
#endif
#ifndef nullfree
#define nullfree(s) {if((s)!=NULL) {free(s);}}
#endif
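/*
Example usage (illustrative) of the NULL-safe helpers above:
    char* copy = nulldup(name);      nulldup: NULL in => NULL out
    size_t len = nulllen(copy);      nulllen: NULL in => 0
    nullfree(copy);                  nullfree: no-op when copy is NULL
*/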
#ifndef HAVE_UCHAR
typedef unsigned char uchar;
#endif
#ifndef HAVE_LONGLONG
typedef long long longlong;
typedef unsigned long long ulonglong;
#endif
#ifndef HAVE_USHORT
typedef unsigned short ushort;
#endif
#ifndef HAVE_UINT
typedef unsigned int uint;
#endif
/* Provide a fixed-size alternative to off_t or off64_t */
typedef long long fileoffset_t;
#endif /* NCCONFIGURE_H */