The `typedef struct NC_Dispatch NC_Dispatch;` declaration appears in both netcdf.h and ncdispatch.h. Even though it is the same declaration, some compilers treat this as an error and compilation fails:
```
In file included from /netcdf-c/include/nc4dispatch.h:15,
from /netcdf-c/libhdf5/hdf5attr.c:16:
/netcdf-c/include/ncdispatch.h:111: error: redefinition of typedef `NC_Dispatch`
/netcdf/netcdf-c/include/netcdf.h:515: note: previous declaration of `NC_Dispatch` was here
```
This occurs with gcc 4.4.7, a very old version that is not used much anymore, but it is the default gcc compiler on RHEL6, so it might be worth fixing.
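One conventional way to avoid the redefinition (a sketch only, not necessarily the fix applied in the repository) is to guard the shared forward declaration so it is compiled only once:
```
/* Hypothetical guard name; shown only to illustrate the technique. */
#ifndef NC_DISPATCH_TYPEDEF_DEFINED
#define NC_DISPATCH_TYPEDEF_DEFINED
typedef struct NC_Dispatch NC_Dispatch;
#endif
```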
So it now becomes the first default test server to try.
This also means that dap4 remote testing is enabled.
The only issue to watch is whether the jetstream-based
server can stay up for significant periods of time.
An uptimerobot monitor (https://uptimerobot.com) has been set up
to check this hourly, so we shall see.
re: github issue https://github.com/Unidata/netcdf-c/issues/908
also in reference to https://github.com/pydata/xarray/issues/2004
The netcdf-c library has implemented the nc_get_vars and nc_put_vars
operations one element at a time. This results in very slow
operation.
This PR attempts to improve the situation for netcdf-4/hdf5 files
by using the slab operations provided by the hdf5 library. The new
implementation passes the get/put vars stride information down to
the hdf5 slab operations.
The result appears to improve performance significantly. Some simple
tests on large 2-D arrays show speedups in excess of 150x.
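As a rough sketch of the approach (illustrative only; the function below is hypothetical, not the library's internal code), a strided get_vars request maps onto a single HDF5 hyperslab selection instead of one read per element:
```
#include <hdf5.h>

/* Read every other element of a 100x100 dataset in one H5Dread call
   by encoding the vars stride as a hyperslab stride. */
static int read_strided(hid_t dsetid, double *buf)
{
    hsize_t start[2]  = {0, 0};
    hsize_t stride[2] = {2, 2};   /* the get_vars stride */
    hsize_t count[2]  = {50, 50}; /* elements selected per dimension */
    hid_t filespace, memspace;
    herr_t stat;

    filespace = H5Dget_space(dsetid);
    stat = H5Sselect_hyperslab(filespace, H5S_SELECT_SET,
                               start, stride, count, NULL);
    if (stat < 0) { H5Sclose(filespace); return -1; }

    memspace = H5Screate_simple(2, count, NULL);
    stat = H5Dread(dsetid, H5T_NATIVE_DOUBLE, memspace,
                   filespace, H5P_DEFAULT, buf);
    H5Sclose(memspace);
    H5Sclose(filespace);
    return (stat < 0) ? -1 : 0;
}
```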
Misc. other changes:
1. fix a bug in ncgen/semantics.c that used a list's allocated length
instead of its actual length.
2. Added a temporary hook in the netcdf library plus a performance
test case (tst_varsperf.c) to estimate the speedup. After users
have had some experience with this, I will remove it, probably
after the 4.7 release.
Fix https://github.com/Unidata/netcdf-c/issues/962
1. remove the --disable-diskless option since it is no
longer needed. Similarly for CMakeLists.txt.
2. Fixed nc4files.c, where BAIL and return were mixed,
leading to situations where cleanup code was not
being invoked. This probably occurs elsewhere,
but I did not find any other specific instances.
Fix github issue https://github.com/Unidata/netcdf-c/issues/899
which came from e-support UOY-859712.
The problem was that the vlen_max parameter
to libsrc/var.c#NC_check_vlen was of type size_t.
However, it is sometimes called with values
as large as X_INT64_MAX. The resulting truncation was
causing dimension failures, as noted in the e-support
report.
The fix is to change the vlen_max argument (and some
local variables in NC_check_vlen) to be declared
as unsigned long long.
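A minimal sketch of the failure mode (the helper names here are hypothetical; the real function is libsrc/var.c#NC_check_vlen):
```
#include <stdio.h>
#include <stddef.h>

/* Stand-in for the netcdf 64-bit limit. */
#define X_INT64_MAX 9223372036854775807ULL

static void check_as_size_t(size_t vlen_max) {
    /* On a platform where size_t is 32 bits, the caller's 64-bit
       limit was silently truncated before we ever see it. */
    printf("limit seen by callee: %zu\n", vlen_max);
}

static void check_as_ull(unsigned long long vlen_max) {
    printf("limit seen by callee: %llu\n", vlen_max);
}

int main(void) {
    check_as_size_t((size_t)X_INT64_MAX); /* truncated if size_t < 64 bits */
    check_as_ull(X_INT64_MAX);            /* full value preserved */
    return 0;
}
```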
The file docs/indexing.dox tries to provide design
information for the refactoring.
The primary change is to replace all walking of linked
lists with the use of the NCindex data structure.
NCindex is a combination of a hash table (for name-based
lookup) and a vector (for walking the elements in the index).
Additionally, global vectors are added to NC_HDF5_FILE_INFO_T
to support direct mapping of an e.g. dimid to the NC_DIM_INFO_T
object. These global vectors exist for dimensions, types, and groups
because they have globally unique id numbers.
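A minimal sketch of the idea (field and function names here are illustrative, not the library's actual NCindex definitions):
```
#include <stddef.h>
#include <string.h>

#define NBUCKETS 64

typedef struct Entry {
    const char *name;
    void *object;
    struct Entry *next;        /* chain for hash collisions */
} Entry;

typedef struct Index {
    Entry *buckets[NBUCKETS];  /* hash part: name-based lookup */
    void **vector;             /* vector part: insertion order */
    size_t nobjects;
} Index;

static unsigned hash_name(const char *name)
{
    unsigned h = 5381;
    while (*name) h = h * 33u + (unsigned char)*name++;
    return h % NBUCKETS;
}

/* O(1) average lookup by name, replacing a linked-list walk */
static void *index_lookup(Index *ix, const char *name)
{
    Entry *e;
    for (e = ix->buckets[hash_name(name)]; e != NULL; e = e->next)
        if (strcmp(e->name, name) == 0) return e->object;
    return NULL;
}

/* O(1) positional access, e.g. mapping a dimid to its object */
static void *index_ith(Index *ix, size_t i)
{
    return (i < ix->nobjects) ? ix->vector[i] : NULL;
}
```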
WARNING:
1. since libsrc4 and libhdf4 share code, there are also
changes in libhdf4.
2. Any outstanding pull requests that change libsrc4 or libhdf4
are likely to cause conflicts with this code.
3. The original reason for doing this was performance improvement,
but as noted elsewhere, the gain may not be significant because
meta-data read performance apparently is dominated
by the hdf5 library, since we do bulk meta-data reading rather
than lazy reading.
Also re: https://github.com/Unidata/netcdf-c/issues/708
Expand the NC_INMEMORY capabilities to support writing and accessing
the final modified memory.
Three new functions have been added:
nc_open_memio, nc_create_mem, and nc_close_memio.
The following new capabilities were added.
1. nc_open_memio() allows the NC_WRITE mode flag,
so a chunk of memory can be passed in and modified.
2. nc_create_mem() allows the NC_INMEMORY flag to be set
to cause the created file to be kept in memory.
3. nc_close_memio() allows the final in-memory contents to be
retrieved at the time the file is closed.
4. A special flag, NC_MEMIO_LOCKED, is provided to ensure that
the provided memory will not be freed or reallocated.
Note the following.
1. If nc_open_memio() is called with NC_WRITE, and NC_MEMIO_LOCKED is not set,
then the netcdf-c library will take control of the incoming memory.
This means that the original memory block should not be freed,
but the block returned by nc_close_memio() must be freed.
2. If nc_open_memio() is called with NC_WRITE, and NC_MEMIO_LOCKED is set,
then modifications to the original memory may fail if the space available
is insufficient.
Documentation is provided in the file docs/inmemory.md.
A test case is provided: nc_test/tst_inmemory.c, driven by
nc_test/run_inmemory.sh.
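A minimal usage sketch of the new functions (path and sizes illustrative; error handling abbreviated):
```
#include <stdlib.h>
#include <netcdf.h>
#include <netcdf_mem.h>

/* Create a dataset entirely in memory and retrieve the final
   image at close time. */
int main(void) {
    int ncid, dimid;
    NC_memio memio;

    /* the initial size is only a hint; the library grows the buffer */
    if (nc_create_mem("inmem.nc", NC_NETCDF4, 4096, &ncid)) abort();
    if (nc_def_dim(ncid, "x", 10, &dimid)) abort();

    /* on success, memio.memory/memio.size hold the final file image;
       the caller must free memio.memory */
    if (nc_close_memio(ncid, &memio)) abort();
    free(memio.memory);
    return 0;
}
```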
WARNING: changes were made to the dispatch table for
the close entry, from int (*close)(int) to int (*close)(int,void*).
(I hope) metadata mechanism. This mostly just adds new pieces of
code (e.g. nclistmap) and makes some minor fixes.
It should be transparent to everything else.
The next set of changes will be the big step.
strlcat provides better protection against buffer overflows.
Code is taken from the FreeBSD project source code. Specifically:
https://github.com/freebsd/freebsd/blob/master/lib/libc/string/strlcat.c
License appears to be acceptable, but needs to be checked by e.g. Debian.
Step 1:
1. Add code to netcdf-c/include/ncconfigure.h to use our version
if strlcat is not already available, as determined by HAVE_STRLCAT in config.h.
2. Add the strlcat code to libdispatch/dstring.c.
3. It turns out that strlcat was already defined in several places,
so remove it from:
ncgen3/genlib.c
ncdump/dumplib.c
4. Define the strlcat extern declaration in ncconfigure.h.
5. Modify the following directories to use strlcat:
libdap2 libdap4 ncdap_test dap4_test
Others will be converted in subsequent steps.
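For reference, a sketch of the strlcat contract this relies on (standard BSD semantics; this assumes strlcat is available, either natively or via the replacement added above): the third argument is the total destination size, the result is always NUL-terminated, and the return value is the length strlcat tried to create, so truncation is detectable.
```
#include <stdio.h>
#include <string.h>

int main(void) {
    char buf[8] = "abc";
    /* tries to build "abcdefghij" (10 chars) in an 8-byte buffer */
    size_t need = strlcat(buf, "defghij", sizeof(buf));
    if (need >= sizeof(buf))
        printf("truncated: needed %zu bytes, got \"%s\"\n", need + 1, buf);
    return 0;
}
```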
re pull request https://github.com/Unidata/netcdf-c/pull/405
re pull request https://github.com/Unidata/netcdf-c/pull/446
Notes:
1. This branch is a cleanup of the magic.dmh branch.
2. magic.dmh was originally merged, but caused problems with parallel IO.
It was re-issued as pull request https://github.com/Unidata/netcdf-c/pull/446.
3. This branch + pull request replace any previous pull requests and the magic.dmh branch.
Given an otherwise valid netCDF file that has a corrupted header,
the netcdf library currently crashes. Instead, it should return
NC_ENOTNC.
Additionally, the NC_check_file_type code does not do the
forward search required by hdf5 files. It currently looks
only at file position 0 and not also at 512, 1024, 2048, ... Also, it turns
out that the HDF4 magic number is assumed to always be at the
beginning of the file (unlike HDF5).
The change is localized to libdispatch/dfile.c. See
https://support.hdfgroup.org/release4/doc/DSpec_html/DS.pdf
Also, it turns out that the code in NC_check_file_type is duplicated
(mostly) in the function libsrc4/nc4file.c#nc_check_for_hdf.
This branch does the following.
1. Make NC_check_file_type return NC_ENOTNC instead of crashing.
2. Remove nc_check_for_hdf and centralize all file format checking
in NC_check_file_type.
3. Add a proper forward search for HDF5 files (but not HDF4 files)
to look for the magic number at offsets of 0, 512, 1024, ... (sketched below).
4. Add test tst_hdf5_offset.sh. This tests that hdf5 files with
an offset are properly recognized. It does so by prefixing
a legal file with some number of zero bytes: 512, 1024, etc.
5. Off-topic: Added -N flag to ncdump to force a specific output dataset name.
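A simplified sketch of the forward search in item 3 (illustrative only; not the actual NC_check_file_type code): the 8-byte HDF5 signature may sit at offset 0, 512, 1024, 2048, ..., each offset doubling the previous one.
```
#include <stdio.h>
#include <string.h>

static const unsigned char HDF5_MAGIC[8] = "\211HDF\r\n\032\n";

static int is_hdf5(FILE *f)
{
    unsigned char buf[8];
    long offset = 0;
    for (;;) {
        if (fseek(f, offset, SEEK_SET) != 0) return 0;
        if (fread(buf, 1, 8, f) != 8) return 0;   /* EOF: not found */
        if (memcmp(buf, HDF5_MAGIC, 8) == 0) return 1;
        offset = (offset == 0) ? 512 : offset * 2; /* 0,512,1024,2048,... */
    }
}
```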
2. Factored out the parameter string parsing for ncgen and nccopy
into libdispatch/dfilter.c + include/ncfilter.h.
3. Allow a parameter string to use constant types other than
unsigned int. See docs/filters.md for details.
4. Moved the old content of include/netcdf_filter.h into include/netcdf.h
and removed include/netcdf_filter.h as no longer needed.
5. Force the test filter (bzip2) in nc_test4/filter_test to
be built using BUILT_SOURCES.
The use of the following version-specific curl flags
is not always properly wrapped or aliased using
config.h HAVE_CURL... ifdefs.
# CURLOPT_USERNAME is not defined until curl version 7.19.1
# CURLOPT_PASSWORD is not defined until curl version 7.19.1
# CURLOPT_KEYPASSWD is not defined until curl version 7.16.4
-- aliased as needed to CURLOPT_SSLKEYPASSWD
# CURLINFO_RESPONSE_CODE is not defined until curl version 7.10.7
-- aliased as needed to CURLINFO_HTTP_CODE
# CURLOPT_CHUNK_BGN_FUNCTION is not defined until curl version 7.21.0
-- not used in our code
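The wrapping looks roughly like this (an illustrative sketch, not the library's exact guards). Since curl options are enum values, availability is keyed off LIBCURL_VERSION_NUM rather than #ifdef on the symbol itself:
```
#include <curl/curl.h>

/* Version encoding is 0xMMmmpp (major, minor, patch). */
#if LIBCURL_VERSION_NUM < 0x071301   /* CURLOPT_USERNAME: curl >= 7.19.1 */
#define HAVE_CURLOPT_USERNAME 0
#else
#define HAVE_CURLOPT_USERNAME 1
#endif

#if LIBCURL_VERSION_NUM < 0x070a07   /* CURLINFO_RESPONSE_CODE: curl >= 7.10.7 */
#define CURLINFO_RESPONSE_CODE CURLINFO_HTTP_CODE  /* alias to the older name */
#endif
```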
The primary change is to clean up code and remove duplication.
1. Unify the rc file reading into libdispatch/drc.c. This can eventually be extended
if we need an rc file for netcdf itself as opposed to the dap code.
2. Unify the extraction from the rc file of DAP authorization info.
3. Misc. other small unifications: make temp file, read file.
4. Avoid use of libcurl when reading file:// URLs, because
there is some kind of problem with the Visual Studio version.
It might be related to the winpath problem.
In any case, do a direct read instead.
5. Add new error code NC_ERCFILE for errors in reading RC file.
6. Complete documentation cleanup as indicated in this comment
https://github.com/Unidata/netcdf-c/pull/472#issuecomment-325926426
7. Convert some occurrences of #ifdef _WIN32 to #ifdef _MSC_VER
generates garbage. This in turn interferes with using .netrc,
because the garbage user+pwd will override the
.netrc. Note that this may work ok sometimes,
if the garbage happens to start with a nul character.
2. It turns out that the user:pwd combination needs to support
character escaping. One reason is the user may contain an '@' character.
The other is that modern password rules make it likely that
the password will contain characters that interfere with url parsing.
So, the rule I have implemented is that all occurrences of the user:pwd
format must escape any dodgy characters. The escape format is URL escaping
of the form %XX. This applies both to user:pwd
embedded in a URL as well as the use of HTTP.CREDENTIALS.USERPASSWORD
in a .dodsrc/.daprc file. The user and password in .netrc must not
be escaped. This is now documented in docs/auth.md.
The fix for #2 actually obviated #1. Now, internally, the user and pwd
are stored separately and not in the user:pwd format. They are combined
(and escaped) only when needed.
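A sketch of the %XX rule (illustrative; not the library's actual parser):
```
#include <stdio.h>
#include <string.h>

/* Percent-encode characters that would confuse URL parsing
   inside a user:pwd field. */
static void escape(const char *s, char *out)
{
    const char *unsafe = "@:/%?#[]";
    for (; *s; s++) {
        if (strchr(unsafe, *s)) {
            sprintf(out, "%%%02X", (unsigned char)*s);
            out += 3;
        } else
            *out++ = *s;
    }
    *out = '\0';
}

int main(void) {
    char buf[128];
    escape("p@ss:word", buf);
    printf("%s\n", buf); /* prints p%40ss%3Aword */
    return 0;
}
```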
2. Modified ncdap_tests to remove <cr>
from generated output before comparison
to the expected output. This is a hack, because
my other attempts to force windows to use
binary mode have not worked.
from e-support OYW-455599.
The problem was that nctime.c#CDMonthDay set up
the month -> #days table correctly, but then failed to use it:
the code checked only for Cd365 and forgot to check for Cd366.
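A reconstruction of the bug pattern (illustrative only; not the actual nctime.c source, and the enum values are stand-ins):
```
#include <stdio.h>

typedef enum { Cd365, Cd366 } CdCalendar;

static const int days_365[12] = {31,28,31,30,31,30,31,31,30,31,30,31};
static const int days_366[12] = {31,29,31,30,31,30,31,31,30,31,30,31};

static int days_in_month(CdCalendar cal, int month /* 0-11 */)
{
    /* fixed: consider Cd366 as well, not just Cd365 */
    const int *table = (cal == Cd366) ? days_366 : days_365;
    return table[month];
}

int main(void) {
    printf("Feb in 366-day calendar: %d\n", days_in_month(Cd366, 1));
    return 0;
}
```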
were added to provide a path name converter from e.g. cygwin
paths to e.g. windows paths. This is necessary because
the shell scripts may produce cygwin paths, but the code
may have been compiled with Visual Studio. Similar issues
arise with Mingw.
At appropriate places, and if using Visual Studio or Mingw,
I added calls to the path conversion code.
Apparently I forgot to find all the places where this
conversion was needed. So this PR does the following:
1. Push the calls to the converter to the various libXXX
directories and out of libdispatch/dfile.c.
2. Add conversion calls to other parts of the code like oc2.
It also turns out that the conversion code in dapcvt.c
had a bug when handling the DAP Byte type under Visual Studio.
Notes:
1. there may still be places I missed that need to do path conversion.
2. need to make sure that calls to e.g. H5open also use a converted path.
This is a follow-on in that the old utf8 code was still being
used in ncgen to convert utf8->utf16 when converting cdl to Java
(see genj.c).
The new code apparently has no utf16 support, but it does have
utf32 support. Converting utf32 -> utf16 can be approximated by
truncating the 32bits to 16 bits, unless the top 16 bits are
not zero. This latter condition is unlikely to be common because
it implies use of some rather obscure characters.
So the solution is to convert to utf32 and truncate to 16 bits to
get utf16. An error is reported if the truncated high-order 16
bits are not zero. If we get complaints, then I will figure out
how to convert full utf32 to a utf16 surrogate pair (see the sketch after the list below).
Other changes:
1. removed the old code from ncgen.
2. changed UTF8PROC_DLLEXPORT (in utf8proc) to EXTERNL
and added appropriate includes. This should fix
issue https://github.com/Unidata/netcdf-c/issues/404,
but since we cannot duplicate the failure, I am not quite
sure.
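A sketch of the described truncation (illustrative; not the ncgen code): a UTF-32 code point fits in one UTF-16 unit only when the high-order 16 bits are zero; otherwise report an error instead of synthesizing a surrogate pair.
```
#include <stdint.h>
#include <stdio.h>

static int utf32_to_utf16_unit(uint32_t cp, uint16_t *out)
{
    if (cp > 0xFFFFu) return -1;  /* would need a surrogate pair */
    *out = (uint16_t)cp;          /* high-order bits are zero: safe */
    return 0;
}

int main(void)
{
    uint16_t u;
    if (utf32_to_utf16_unit(0x00E9, &u) == 0)  /* U+00E9, in the BMP */
        printf("ok: 0x%04X\n", u);
    if (utf32_to_utf16_unit(0x1F600, &u) != 0) /* outside the BMP */
        printf("error: high-order 16 bits nonzero\n");
    return 0;
}
```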
This relies on the HDF5 capability to
dynamically load compression filters.
Note that a compression filter is just
a subcase of filters.
The primary user-visible changes are as follows:
1. Add a standard header "netcdf_filter.h" that defines
the necessary API extensions
2. Modify ncgen to support two new special attributes
"_Filter_ID" and "_Filter_Parameters" so that compression
can be turned on when creating a file using ncgen.
4. Add a detailed description of filtering support
to the user's guide; see the file filters.md
5. Add a test case directory for this: nc_test4/filter_test.
It is fragile, so a ./configure flag (--enable-filter-test)
is defined (default disabled) to turn this test off
and avoid spurious 'make check' failures.
Note that the HDF5 documentation is not up-to-date, so
much of what is encoded here comes from examining the
actual code in the file H5PL.c in the HDF5 source code.
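A sketch of using the API extension from item 1 (illustrative; error handling abbreviated; bzip2 is registered with HDF5 as filter id 307, which is what nc_test4/filter_test uses):
```
#include <netcdf.h>
#include <netcdf_filter.h>

#define H5Z_FILTER_BZIP2 307u

/* Attach the dynamically loaded bzip2 filter to a variable
   at define time. */
static int compress_var(int ncid, int varid)
{
    unsigned int level = 9; /* single bzip2 parameter: compression level */
    return nc_def_var_filter(ncid, varid, H5Z_FILTER_BZIP2, 1, &level);
}
```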
1. When running under windows (as opposed to cygwin)
we need to make sure not to use /cygdrive/ file paths.
This was occurring in libdap4/d4read.c, but may occur
elsewhere.
2. Shell scripts in the git repo are not being checked out
with the executable mode set; the repo had core.filemode set to false.
This was a major hassle to fix.
Specific changes:
1. Add dap4 code: libdap4 and dap4_test.
Note that until the d4ts server problem is solved, dap4 is turned off.
2. Modify various files to support dap4 flags:
configure.ac, Makefile.am, CMakeLists.txt, etc.
3. Add nc_test/test_common.sh. This centralizes
the handling of the locations of various
things in the build tree: e.g. where is
ncgen.exe located. See nc_test/test_common.sh
for details.
4. Modify .sh files to use test_common.sh
5. Obsolete the separate oc2 library by moving it to be part of
netcdf-c. This means replacing code with netcdf-c
equivalents.
6. Add --with-testserver to configure.ac to allow
override of the servers to be used for --enable-dap-remote-tests.
7. There were multiple versions of the nctypealignment code. Try to
centralize it in libdispatch/doffset.c and include/ncoffsets.h.
8. Add a unit test for the ncuri code because of its complexity.
9. Move the findserver code out of libdispatch and into
a separate, self-contained program in ncdap_test and dap4_test.
10. Move the dispatch header files (nc{3,4}dispatch.h) to
.../include because they are now shared by modules.
11. Revamp the handling of TOPSRCDIR and TOPBUILDDIR for shell scripts.
12. Make use of MREMAP if available.
13. Misc. minor changes, e.g.
- #include <config.h> -> #include "config.h"
- Add some no-install headers to /include
- extern -> EXTERNL and vice versa as needed
- misc header cleanup
- clean up checking for misc. unix vs microsoft functions
14. Change copyright decls in some files to point to LICENSE file.
15. Add notes to RELEASENOTES.md.
The nvars, ndims, and natts fields on the NC_HDF5_FILE_INFO struct are
never set. The nvars field is read, but since it is never written,
the value is always zero.
Update utf8proc.[ch] to use the version now
maintained by the Julia Language project
(https://github.com/JuliaLang/utf8proc/blob/master/LICENSE.md).
The license for the previous version was
unacceptable for the Debian and Ubuntu release
systems. The new version both updates the code
and addresses the license issue.
It turns out that the utf8proc software we are using
was turned over to the Julia Language developers
and the license terms changed to allow modification.
(https://github.com/JuliaLang/utf8proc/blob/master/LICENSE.md).
So the fix here is as follows:
1. Wrap the library with a fixed interface: libdispatch/dutf8.c
and include/ncutf8.h.
2. Replace the existing utf8proc code with the new version
from https://github.com/JuliaLang/utf8proc.
3. Add a couple more test cases: nc_test/tst_utf8_validate.c
and nc_test/tst_utf8_phrases.c. If/when I can find a usable
normalization test, I will incorporate that later.
Modified the provenance code to allocate the minimal space
needed for the _NCProperties attribute in the file. Basically this
required using malloc in the provenance code and in ncdump.
Otherwise it should cause no externally visible effects.
Also removed the ENABLE_FILEINFO from configure.ac since
the provenance code is no longer optional.
As best I can tell, this should be ENABLE_PARALLEL4 instead of ENABLE_PARALLEL. ENABLE_PARALLEL is not used other than in a couple of documentation files, but ENABLE_PARALLEL4 is set in the top-level CMakeLists.txt file if a parallel hdf5 library is detected.
The single commented-out line was the only use of a C99- or C++-style comment in the entire file. Either remove the line completely or change it to a C89-style comment to permit use with projects that still require C89-compatible code.
This consists of a persistent attribute named
_NCProperties plus two computed attributes
_IsNetcdf4 and _SuperblockVersion.
See the 'Provenance Attributes' section
of docs/attribute_conventions.md for details.
The addition of the nc_hashmap to facilitate quick
retrieval of var and dim by name did not take into
account key collisions -- two or more names hashed
to the same value. If the keys matched, it assumed
that the names matched also.
This change fixes this incorrect assumption and
checks both the key (which is the hash of the name)
and if the keys match, it also checks that the names
match.
While there have been no observed instances of duplicate keys,
they are certain to occur eventually and would cause difficult-to-debug
issues. This fix eliminates that defect.
In non-classic netcdf-4 models, it is allowable to have
large numbers of dims and vars. In many operations, the
entire list of dims or vars is searched for a dim/var matching
a specific name which results in *lots* of strncmp or strcmp
calls.
If we add a hash field to the var and dim structs similar to what
has already been done for the netcdf-3 formats, then we can hash the
name being searched for and numerically compare that value with
the var/dim hash value. If they match, then do a more expensive
strncmp call to ensure that the names truly match.
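A sketch of the combined check described above (illustrative struct and names, not the library's actual definitions): the hash comparison is a cheap numeric filter, and the string comparison runs only when the hashes match, which also guards against collisions.
```
#include <stddef.h>
#include <string.h>

typedef struct Var { unsigned long hash; char name[64]; } Var;

static Var *find_var(Var *vars, size_t nvars,
                     unsigned long hash, const char *name)
{
    size_t i;
    for (i = 0; i < nvars; i++) {
        /* cheap numeric check first; strcmp only on a hash match */
        if (vars[i].hash == hash && strcmp(vars[i].name, name) == 0)
            return &vars[i];
    }
    return NULL;
}
```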