Github issue https://github.com/Unidata/netcdf-c/issues/152
requested that "orphaned" DAS attributes be included in the netcdf
metadata as global attributes. The term orphaned here meant that
they were not connected to any variable in the DDS.
This was done in pull request https://github.com/Unidata/netcdf-c/pull/164
However, some servers (e.g. Thredds) include attributes for variables not
specified in a constraint expression, but which exist in the full DDS.
So I was adding these to the set of global attributes, but in retrospect
this should not have been done: they should have been elided.
Solution: modify the oc2 code to be more discriminating about
which orphaned attributes to include.
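For illustration: orphaned attributes that survive this filtering appear as
ordinary global attributes on the opened dataset. A minimal sketch using the
public netcdf-c API (the URL is a placeholder, not a real server):

    #include <stdio.h>
    #include <netcdf.h>

    int main(void)
    {
        int ncid, natts, i;
        char name[NC_MAX_NAME + 1];

        /* Open a (hypothetical) DAP2 dataset; a real URL and constraint will differ. */
        if (nc_open("http://example.com/opendap/data.nc", NC_NOWRITE, &ncid) != NC_NOERR)
            return 1;

        /* Orphaned DAS attributes, when kept, show up under NC_GLOBAL. */
        nc_inq_natts(ncid, &natts);
        for (i = 0; i < natts; i++) {
            nc_inq_attname(ncid, NC_GLOBAL, i, name);
            printf("global attribute: %s\n", name);
        }
        return nc_close(ncid);
    }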
Specific changes:
1. Add dap4 code: libdap4 and dap4_test.
Note that until the d4ts server problem is solved, dap4 is turned off.
2. Modify various files to support dap4 flags:
configure.ac, Makefile.am, CMakeLists.txt, etc.
3. Add nc_test/test_common.sh. This centralizes
the handling of the locations of various
things in the build tree, e.g. where
ncgen.exe is located. See nc_test/test_common.sh
for details.
4. Modify .sh files to use test_common.sh
5. Obsolete the separate oc2 library by moving it to be part of
netcdf-c. This means replacing code with netcdf-c
equivalents.
6. Add --with-testserver to configure.ac to allow
override of the servers to be used for --enable-dap-remote-tests.
7. There were multiple versions of nctypealignment code. Try to
centralize it in libdispatch/doffset.c and include/ncoffsets.h
(see the sketch after this list).
8. Add a unit test for the ncuri code because of its complexity.
9. Move the findserver code out of libdispatch and into
a separate, self-contained program in ncdap_test and dap4_test.
10. Move the dispatch header files (nc{3,4}dispatch.h) to
.../include because they are now shared by modules.
11. Revamp the handling of TOPSRCDIR and TOPBUILDDIR for shell scripts.
12. Make use of MREMAP if available.
13. Misc. minor changes, e.g.:
- #include <config.h> -> #include "config.h"
- Add some no-install headers to /include
- extern -> EXTERNL and vice versa as needed
- misc. header cleanup
- clean up checking for misc. unix vs microsoft functions
14. Change copyright decls in some files to point to the LICENSE file.
15. Add notes to RELEASENOTES.md.
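For item 7 above, the usual way to compute the natural alignment of a type is
the offsetof-after-a-char trick; this is only a sketch of the technique, not
the actual contents of libdispatch/doffset.c:

    #include <stddef.h>
    #include <stdio.h>

    /* The offset of a field placed right after a single char is the
     * field type's natural alignment on this platform. */
    struct align_int    { char c; int x; };
    struct align_double { char c; double x; };

    int main(void)
    {
        printf("alignment of int:    %zu\n", offsetof(struct align_int, x));
        printf("alignment of double: %zu\n", offsetof(struct align_double, x));
        return 0;
    }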
The problem was in oc2/dap.y:
in the definition of errormsg, change WORD_WORD to WORD_STRING,
since the msg field of an OPeNDAP error response is a quoted
string.
Also took the opportunity to modify ncgen to
transfer the logging level (-L flag) into the C code
generated using -lc.
User request to have all orphaned DAP2 attributes kept as netcdf
global attributes. This is primarily a change in the oc code
plus test-case dataset changes.
Result may be inconsistent with netcdf-Java output.
There is an ambiguity in the DAP2 spec. Section A.2 of the
DAP2 spec says:
"...The backslash character (.\.) MAY be used as
a single-character quoting mechanism only within
quoted-string and comment constructs.
quoted-pair = "\" CHAR
..."
The underlying intent was to allow " characters inside
strings by escaping them as \". However, this definition is overbroad.
It does not state:
1. whether the backslash is to be left in the string or not;
2. what to do about related escapes such as '\n': convert to a
newline or not.
This change conforms to libdap behavior and does the following:
1. The backslash is left in the string.
2. Escapes such as \n are left as-is; it is assumed that
higher-level code will decide what to do with them.
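A minimal sketch of the scanning rule this implies (a hypothetical helper, not
the actual oc2 lexer): the backslash is used only to decide where the quoted
string ends, and every character, including the backslash itself, is copied
through unchanged:

    #include <stddef.h>

    /* Copy a DAP2 quoted-string body into buf, leaving all backslash escapes
     * (\" as well as \n, \t, ...) exactly as they appear in the input.
     * 'p' points just past the opening quote; returns a pointer just past the
     * closing quote, or NULL if the string is unterminated or buf is too small. */
    static const char *
    scan_quoted(const char *p, char *buf, size_t buflen)
    {
        size_t i = 0;
        while (*p != '\0' && *p != '"') {
            if (*p == '\\' && p[1] != '\0') {   /* escaped char: keep both bytes */
                if (i + 2 >= buflen) return NULL;
                buf[i++] = *p++;
                buf[i++] = *p++;
            } else {
                if (i + 1 >= buflen) return NULL;
                buf[i++] = *p++;
            }
        }
        if (*p != '"') return NULL;             /* unterminated string */
        buf[i] = '\0';
        return p + 1;
    }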
It appears the problem is that synth9 was erroneously
included in testing. It involves a nested sequence which
is not translatable. Not sure why it was still there,
but the fix is to suppress the test.
DAPRCFILE. Note that the value of this environment
variable should be the absolute path of the rc file, not
the path to its containing directory.
2. fixup testauth.sh and add some new tests
3. synch oc
2. modify oc2/ocrc.c rcfilenames to look for .ocrc before .dodsrc.
3. Modify testauth.sh to avoid using names that might already
exist for cookies file and netrc file. Still must use .ocrc
to test for local/home search.
4. Modify testauth.sh to save and restore any file it creates
that already exists.
This supports better authorization
handling for DAP requests, especially redirection-based
authorization. I also added a new test case:
ncdap_test/testauth.sh.
Specifically, suppose I have a netrc file /tmp/netrc
containing this.
machine uat.urs.earthdata.nasa.gov login xxxxxx password yyyyyy
Also suppose I have a .ocrc file containing these lines
HTTP.COOKIEJAR=/tmp/cookies
HTTP.NETRC=/tmp/netrc
Assume that .ocrc is in the local directory or HOME.
Then this command should work (assuming a valid login and password).
ncdump -h "https://54.86.135.31/opendap/data/nc/fnoc1.nc"
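Roughly the same access can be made programmatically; a minimal sketch with
the public netcdf-c API, assuming the same URL and the .ocrc/netrc setup
described above:

    #include <stdio.h>
    #include <netcdf.h>

    int main(void)
    {
        int ncid, stat;

        /* Authorization settings (cookie jar, netrc) are picked up from the
         * rc file; nothing auth-related appears in the code itself. */
        stat = nc_open("https://54.86.135.31/opendap/data/nc/fnoc1.nc",
                       NC_NOWRITE, &ncid);
        if (stat != NC_NOERR) {
            fprintf(stderr, "nc_open: %s\n", nc_strerror(stat));
            return 1;
        }
        return nc_close(ncid);
    }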
When a .dodsrc file is present and
specifies a user name and password,
it was being ignored after the first use.
The fix required a major rewrite of ocrc.c because
it was mishandling a number of .dodsrc entries.
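For context, the kind of entry involved is a credentials line such as the one
below; the key name is quoted from memory of the netcdf-c authorization
documentation, so treat it as an assumption to be checked against ocrc.c:

    HTTP.CREDENTIALS.USERPASSWORD=username:password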
should be under ENABLE_DAP_REMOTE_TESTS.
Fixed to make sure that this is so.
Also attempted to fix ncdap_test/CMakeLists.txt,
but probably got it wrong.
HT to Nico Schlomer.
2. Attempted to reduce the number of conversion errors
when -Wconversion is set. Fixed oc2, but the
rest of netcdf remains to be done
(see the example after this list).
HT to Nico Schlomer.
3. When doing #2, I discovered an error in ncgen.y
that had remained hidden. This required some other
test case fixes.
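A typical instance of the kind of change -Wconversion forces (a generic
illustration, not a specific diff from oc2 or ncgen):

    #include <string.h>

    /* Without the explicit cast, gcc -Wconversion warns that the size_t
     * result of strlen() is narrowed when stored in an int. */
    static int name_length(const char *name)
    {
        return (int)strlen(name);
    }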
Some servers do not properly
implement the current DAP2 spec.
It turns out that this server is one of those:
http://nomads.ncep.noaa.gov:9090/
When a reference such as this is made:
http://nomads.ncep.noaa.gov:9090/dods/gens/gens20140123/gep_all_12z?prmslmsl[0][0][0][0:359]
it is returning this:
Dataset {
  float prmslmsl[ens=1][time=1][lat=1][lon=360];
} gens%2fgens20140123%2fgep_all_12z;
when it should be returning this:
Dataset {
  Structure {
    float prmslmsl[ens=1][time=1][lat=1][lon=360];
  } prmslmsl;
} gens%2fgens20140123%2fgep_all_12z;
The reason is that when picking fields out of a grid,
one must maintain the fully qualified name, so the grid
is converted to an enclosing structure.
It turns out that the problem was that
when I created the new structure node, I was
improperly linking it into the existing graph.
This caused a null pointer failure.
The fix is to make sure the relevant field (node->root)
is set.
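A minimal sketch of the shape of the fix, with hypothetical type and field
names except for the root field mentioned above:

    /* Hypothetical node type; only 'root' corresponds to the field named above. */
    typedef struct Node {
        struct Node *root;        /* top of the tree this node belongs to */
        struct Node *container;   /* enclosing node */
        /* ... */
    } Node;

    /* When a grid is wrapped in a new enclosing Structure node, the new node
     * must be linked into the same tree as its container; leaving node->root
     * unset produces the null pointer failure described above. */
    static void
    link_new_structure(Node *newstruct, Node *container)
    {
        newstruct->container = container;
        newstruct->root = container->root;   /* the line the fix amounts to */
    }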
error occurs after an "exit:" label.
Corrected a dozen Coverity errors (mainly allocation issues, along with a few
other things):
711711, 711802, 711803, 711905, 970825, 996123, 996124, 1025787,
1047274, 1130013, 1130014, 1139538
Refactored internal fill-value code to correctly handle string types, and
especially to allow NULL pointers and null strings (i.e. "") to be
distinguished. The code now avoids partially aliasing the two together
(which only happened on the 'write' side of things and wasn't reflected on
the 'read' side, adding to the previous confusion).
Probably still weak on handling fill-values of variable-length and compound
datatypes.
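A minimal sketch of the distinction being preserved, assuming NC_STRING fill
values are handed to nc_def_var_fill as the address of a char pointer, which
may be NULL or may point at an empty string:

    #include <netcdf.h>

    /* Define two NC_STRING variables whose fill values must remain
     * distinguishable: a non-NULL pointer to "" versus a NULL pointer. */
    static int define_string_fills(int ncid, int dimid)
    {
        int v1, v2, stat;
        const char *empty = "";      /* non-NULL pointer to a zero-length string */
        const char *absent = NULL;   /* NULL pointer: a different thing entirely */

        if ((stat = nc_def_var(ncid, "v_empty", NC_STRING, 1, &dimid, &v1))) return stat;
        if ((stat = nc_def_var(ncid, "v_null", NC_STRING, 1, &dimid, &v2))) return stat;
        if ((stat = nc_def_var_fill(ncid, v1, NC_FILL, &empty))) return stat;
        if ((stat = nc_def_var_fill(ncid, v2, NC_FILL, &absent))) return stat;
        return NC_NOERR;
    }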
Refactored the recursive metadata reads a bit more, to process HDF5 named
datatypes and datasets immediately, avoiding chewing up memory for those
types of objects, etc.
Finished uncommenting and updating the nc_test4/tst_fills2.c code (as I'm
proceeding alphabetically through the nc_test4 code files).
Fix HTTP Basic Authorization.
The problem is really in oc2.0:
in order for it to work,
CURLOPT_COOKIEJAR must have
a non-null value. The code
was already there, but was not being
used for some reason.
1. fixed cookiejar code in oc2.0
2. synched oc2.0 with netcdf-c/oc2
3. added a test case
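The CURLOPT_COOKIEJAR requirement described above looks roughly like this in
isolation (a standalone libcurl sketch, not the oc2.0 code itself; the URL and
paths are placeholders):

    #include <curl/curl.h>

    int main(void)
    {
        CURLcode rc;
        CURL *curl = curl_easy_init();
        if (curl == NULL) return 1;

        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/opendap/data.nc.dds");
        curl_easy_setopt(curl, CURLOPT_HTTPAUTH, (long)CURLAUTH_BASIC);
        curl_easy_setopt(curl, CURLOPT_NETRC, (long)CURL_NETRC_OPTIONAL);
        curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);

        /* The point of the fix: the cookie jar option must be given a
         * non-null path, otherwise the cookie engine is never enabled. */
        curl_easy_setopt(curl, CURLOPT_COOKIEJAR, "/tmp/cookies");

        rc = curl_easy_perform(curl);
        curl_easy_cleanup(curl);
        return rc == CURLE_OK ? 0 : 1;
    }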
effectively O(n^3); modified it to be
O(n^2).
2. If the list of prefetched variables is too long
(something on the order of 400 variables), then
the server may reject it. Modified the code so that
when the set of prefetched vars is
in fact all variables, it does not create a long
request. This does not actually solve the problem
if the prefetch list is long but not all-inclusive.
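A minimal sketch of the decision described in item 2, with hypothetical names
(the real code builds a DAP constraint expression from its projection list):

    #include <stddef.h>
    #include <string.h>

    /* Build the projection part of the prefetch request into 'ce'.
     * If every variable is being prefetched, leave the constraint empty so
     * the request URL stays short; otherwise list the names explicitly. */
    static void
    build_prefetch_ce(char *ce, size_t celen,
                      const char **varnames, size_t nprefetch, size_t nvars)
    {
        size_t i;
        ce[0] = '\0';
        if (nprefetch == nvars)
            return;                   /* empty constraint == whole dataset */
        for (i = 0; i < nprefetch; i++) {
            if (i > 0) strncat(ce, ",", celen - strlen(ce) - 1);
            strncat(ce, varnames[i], celen - strlen(ce) - 1);
        }
    }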
1. In nc4hdf.c, a variable declaration was located such
that Visual Studio complained and threw an error; moved it
to the head of the function.
2. Visual Studio was also complaining about variable declarations
being made after the OCVERIFY/OCDEREF macros.
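The underlying portability rule: the older Visual Studio C compilers accept
only C89, where all declarations must precede the first statement in a block.
A generic before/after illustration, not the actual nc4hdf.c code:

    static void do_something(int n) { (void)n; }

    /* Rejected by a C89-only compiler: declaration after a statement. */
    static void f_bad(int n)
    {
        do_something(n);
        int retval = 0;    /* MSVC (C89) errors out here */
        (void)retval;
    }

    /* Accepted: the declaration is moved to the head of the block. */
    static void f_good(int n)
    {
        int retval = 0;
        do_something(n);
        (void)retval;
    }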