This supports better authorization
handling for DAP requests, especially redirection-based
authorization. I also added a new test case:
ncdap_tests/testauth.sh.
Specifically, suppose I have a netrc file, /tmp/netrc,
containing this line:

machine uat.urs.earthdata.nasa.gov login xxxxxx password yyyyyy

Also suppose I have a .ocrc file containing these lines:

HTTP.COOKIEJAR=/tmp/cookies
HTTP.NETRC=/tmp/netrc

Assume that .ocrc is in the local directory or in HOME.
Then this command should work (assuming a valid login and password):

ncdump -h "https://54.86.135.31/opendap/data/nc/fnoc1.nc"
The pnetcdf support was not
being used properly to provide
MPI parallel I/O for netcdf-3 classic
files; the wrong dispatch table was being
used. The fix was to modify
dfile.c#NC_check_file_type to properly
select the pnetcdf dispatch table when
use_parallel is true.
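A minimal sketch of the corrected selection logic; the table names,
the USE_PNETCDF guard, and the helper are illustrative stand-ins,
not the actual dfile.c source:

    #include <stdio.h>

    /* Illustrative stand-ins for the real dispatch tables. */
    typedef struct NC_Dispatch { const char *name; } NC_Dispatch;
    static const NC_Dispatch NC3_table = { "netcdf-3 (serial)" };
    static const NC_Dispatch NC5_table = { "pnetcdf (parallel)" };

    /* Sketch of the decision: for a classic-format file, honor
     * use_parallel when pnetcdf support is compiled in. */
    static const NC_Dispatch *
    choose_classic_dispatch(int use_parallel)
    {
    #ifdef USE_PNETCDF
        if (use_parallel)
            return &NC5_table;
    #endif
        (void)use_parallel;
        return &NC3_table;
    }

    int main(void)
    {
        printf("%s\n", choose_classic_dispatch(1)->name);
        return 0;
    }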
If the netCDF-C library is built with the
HDF5 library but without the HDF4 library, and one attempts
to open an HDF4 file, an abort occurs rather than returning
a proper error code (NC_ENOTNC).
The fix is to modify dfile.c#NC_check_file_type to properly
#ifdef the relevant tests.
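Roughly, the guard looks like this (a sketch: the USE_HDF4 macro and
the helper are illustrative, though the HDF4 magic number and
NC_ENOTNC are real):

    #include <stdio.h>
    #include <string.h>

    #define NC_ENOTNC (-51)  /* netcdf.h: "Not a netcdf file" */

    /* Sketch: classify a file from its leading bytes. Without HDF4
     * support compiled in, an HDF4 signature must yield NC_ENOTNC
     * rather than falling into HDF4-only code and aborting. */
    static int check_file_type(const unsigned char *magic)
    {
        if (memcmp(magic, "\x0e\x03\x13\x01", 4) == 0) { /* HDF4 magic */
    #ifdef USE_HDF4
            return 0;         /* dispatch to the HDF4 handler */
    #else
            return NC_ENOTNC; /* proper error, not an abort */
    #endif
        }
        return NC_ENOTNC;     /* not recognized here */
    }

    int main(void)
    {
        const unsigned char hdf4[4] = { 0x0e, 0x03, 0x13, 0x01 };
        printf("%d\n", check_file_type(hdf4));
        return 0;
    }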
There was a problem with the
distcheck testing, since the test input files
are in ${src_dir} while the test runs in ${build_dir}.
So run_chunk_hdf4 was modified to copy the input files as necessary.
The DAP tests should be under ENABLE_DAP_REMOTE_TESTS.
Fixed to make sure that this is so.
Also attempted to fix ncdap_test/CMakeLists.txt,
but probably got it wrong.
HT to Nico Schlomer.
2. Attempted to reduce the number of conversion errors
when -Wconversion is set. Fixed oc2, but
the rest of netcdf remains to be done.
HT to Nico Schlomer.
3. While doing #2, I discovered an error in ncgen.y
that had remained hidden. Fixing it required some other
test case fixes.
times, but returning the same ncid.
2. Separated out the dap auth tests
and made them disabled by default.
3. Turned off ncdap_test/test_varm3 until
we can find a copy of coads_climatology.nc.
The code that tests whether a path is a URL
faults when the URL does not end in a slash
(e.g. http://thredds-tests.ucar.edu).
[NCF-273]/HZY-708311
The solution was to add a null pointer test.
Added a test (tst_misc).
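The failure mode, sketched; url_path is a hypothetical helper, not
the actual code, but it shows the missing null pointer test:

    #include <stdio.h>
    #include <string.h>

    /* Sketch: code that assumes a '/' follows the host dereferences
     * the NULL that strchr returns for "http://thredds-tests.ucar.edu".
     * The slash == NULL guard is the essence of the fix. */
    static const char *url_path(const char *url)
    {
        const char *host = strstr(url, "://");
        if (host == NULL)
            return NULL;           /* not a url at all */
        const char *slash = strchr(host + 3, '/');
        if (slash == NULL)         /* null pointer test */
            return "";             /* url with no trailing path */
        return slash;
    }

    int main(void)
    {
        printf("[%s]\n", url_path("http://thredds-tests.ucar.edu"));
        return 0;
    }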
Add a new function called nc_inq_format_extended that
returns more detailed format information (vis-a-vis
nc_inq_format) about an open dataset.
Note that the netcdf API will present the file as if it had
the format specified by nc_inq_format. The true file
format, however, may not even be a netcdf file; it might be
DAP, HDF4, or PNETCDF, for example. This function returns
that true file type. It also returns the effective mode for
the file.
signature: nc_inq_format_extended(int ncid, int* formatp, int* modep)
where
* ncid is the NetCDF ID from a previous call to nc_open() or
nc_create().
* formatp is a pointer to a location for the returned true format.
* modep is a pointer to a location for the returned mode flags.
Refer to the actual list in the file netcdf.h to see the
currently defined set.
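A minimal usage sketch (the file name here is just a placeholder):

    #include <stdio.h>
    #include <netcdf.h>

    /* Report both the API-level format and the true underlying
     * format/mode of an open dataset. */
    int main(void)
    {
        int ncid, fmt, xfmt, mode;
        if (nc_open("fnoc1.nc", NC_NOWRITE, &ncid)) return 1;
        nc_inq_format(ncid, &fmt);                  /* format as presented */
        nc_inq_format_extended(ncid, &xfmt, &mode); /* true format + mode */
        printf("format=%d extended=%d mode=0x%x\n", fmt, xfmt, mode);
        return nc_close(ncid);
    }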
Also added test cases (tst_formatx*).
The Columbia server does not serve up proper
opendap DDS replies: the Dataset {...} name
changes depending on whether the request has certain
kinds of constraints.
Code for a hack to handle this was not being used, so it was restored.
The fix is to effectively ignore differences in
Dataset node names when the reply is coming from
columbia.edu.
2. [NCF-278]
The ncgen code is improperly typing int64 integer constants
as uint64.
3. [NCF-279]
Empty string constants were not being properly
filled when their target array has length 1 or more.
Fix HTTP Basic Authorization.
The problem is really in oc2.0:
in order for it to work,
CURLOPT_COOKIEJAR must have
a non-null value. The code
was already there, but for some
reason was not being used.
1. Fixed the cookiejar code in oc2.0.
2. Synched oc2.0 with netcdf-c/oc2.
3. Added a test case.
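For reference, the relevant libcurl setting looks like this (a
standalone sketch, not the oc2 code itself; the jar and netrc paths
reuse the example above):

    #include <curl/curl.h>

    /* Sketch: CURLOPT_COOKIEJAR must be non-null for cookies to be
     * saved, which redirection-based authorization depends on. */
    int main(void)
    {
        CURL *curl = curl_easy_init();
        CURLcode rc;
        if (curl == NULL) return 1;
        curl_easy_setopt(curl, CURLOPT_URL,
            "https://54.86.135.31/opendap/data/nc/fnoc1.nc");
        curl_easy_setopt(curl, CURLOPT_COOKIEJAR, "/tmp/cookies"); /* non-null */
        curl_easy_setopt(curl, CURLOPT_NETRC, (long)CURL_NETRC_OPTIONAL);
        curl_easy_setopt(curl, CURLOPT_NETRC_FILE, "/tmp/netrc");
        curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
        rc = curl_easy_perform(curl);
        curl_easy_cleanup(curl);
        return rc != CURLE_OK;
    }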
Ncgen is unable to resolve
ambiguous references to an enum
constant when two different enums
have the same econstant name.
Solved by allowing more specific
forms for econstant references.
1. /.../enumname.enumconstname
2. enumname.enumconstname
3. enumconstname
Case 1 is resolved by using the econstant
in the specified enum definition. If none is
found, an error is reported.
Case 2 is resolved by:
1. finding an enclosing group with an
enum definition with the specified name
and containing the specified econstant;
if there is more than one such group, an error is reported;
2. otherwise, finding all enum definitions in the dataset that have
the specified enum name and contain the specified
econstant; if more than one is found, an error is reported.
If both methods fail, an error is reported.
Case 3 is similar to case 2, but all enums, irrespective
of name, are used if they contain the specified enum constant.
The ref_tst_econst.cdl test in ncdump is causing ncdump
to fail, so there may yet be some problem.
effectively O(n^3); modified to be
O(n^2).
2. If the list of prefetched variables is too long
(something on the order of 400 variables), then
the server may reject it. Modified the code so that
when the set of prefetched vars is
in fact all variables, it does not create a long
request. This does not actually solve the problem
when the prefetch list is long but not all-inclusive.
1. Moved the pnetcdf code (primarily from libsrc4)
into its own dispatch library
called libsrc5.
2. Fixed part of Jira NCF-253
by removing the need for the
pnetcdf_ndims field.
For some reason, the original
code tried to cache the variable
ranks rather than computing them
as needed. Fixed by doing
an ...inq_varndims call as needed.
3. Found some places where NC_MAX_DIMS-sized arrays
were being stack allocated and changed them
to heap allocation.
Some cases in nc_test4 still remain.
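A sketch of the allocation change, using the public inquiry calls
(the helper name is hypothetical):

    #include <stdlib.h>
    #include <netcdf.h>

    /* Sketch: size the dimid array by the variable's actual rank
     * (via nc_inq_varndims) and heap-allocate it, instead of
     * declaring int dimids[NC_MAX_DIMS] on the stack. */
    int get_dimids(int ncid, int varid)
    {
        int ndims, stat;
        int *dimids;
        if ((stat = nc_inq_varndims(ncid, varid, &ndims))) return stat;
        dimids = (int *)malloc(ndims * sizeof(int));
        if (dimids == NULL) return NC_ENOMEM;
        stat = nc_inq_vardimid(ncid, varid, dimids);
        /* ... use dimids ... */
        free(dimids);
        return stat;
    }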
So it turns out that just freeing
the nc4_info is not enough;
the root group must also be reclaimed.
So the best approach appears to be
to invoke an abort on the
failed file.
The problem was that the NC_create
code was not checking for the NC_CLASSIC_MODEL
mode flag when deciding which dispatch table to use.
This meant that it defaulted to
the default format, and if that had been changed
to e.g. NC_FORMAT_NETCDF4, it would try
to create a netcdf-4 format file even if the
NC_CLASSIC_MODEL mode flag was set.
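The corrected decision, sketched with the public mode flags
(illustrative, not the actual dfile.c logic):

    #include <netcdf.h>

    /* Sketch: NC_CLASSIC_MODEL without NC_NETCDF4 must force the
     * classic dispatch, even when the default format is
     * NC_FORMAT_NETCDF4. (NC_NETCDF4|NC_CLASSIC_MODEL still means a
     * netcdf-4 file restricted to the classic model.) */
    int wants_netcdf4_dispatch(int cmode, int default_format)
    {
        if (cmode & NC_NETCDF4)
            return 1;           /* explicit netcdf-4 request */
        if (cmode & NC_CLASSIC_MODEL)
            return 0;           /* classic model forces classic file */
        return default_format == NC_FORMAT_NETCDF4;
    }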
to do prefetch on either a lazy
or eager basis. Lazy means that
the prefetch does not occur
until and unless the client actually
makes a get_var request.
Also repaired a problem where
doing prefetch on a URL that
has a constraint would prefetch
a whole variable if its constrained
size was small enough, even when the
underlying variable was too large
to warrant prefetch.