Supersedes PR: https://github.com/Unidata/netcdf-c/pull/1384
Since we have an mmap user, undeprecate it and make sure
it works. Other changes:
* fix test cases to work with make -j
* fix an exposed ncgen error.
Priority: Low
re: issue https://github.com/Unidata/netcdf-c/issues/1329
HDF5 has the ability to programmatically define new filters,
as opposed to loading them via the HDF5_PLUGIN_PATH environment variable.
This PR adds support for that feature.
Not clear how useful this is, though.
See docs/filters.md for details.
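A minimal sketch of what this looks like on the HDF5 side
(the filter id, name, and pass-through body here are illustrative only):
````c
#include <hdf5.h>
#include <string.h>

#define MY_FILTER_ID 32768 /* illustrative id in the user-defined range */

/* Filter callback; must match H5Z_func_t. A real filter would
   transform the buffer; this one just passes it through. */
static size_t my_filter(unsigned flags, size_t cd_nelmts,
                        const unsigned cd_values[], size_t nbytes,
                        size_t *buf_size, void **buf)
{
    (void)flags; (void)cd_nelmts; (void)cd_values; (void)buf;
    *buf_size = nbytes;
    return nbytes;
}

static const H5Z_class2_t my_filter_class = {
    H5Z_CLASS_T_VERS,           /* struct version */
    (H5Z_filter_t)MY_FILTER_ID, /* filter id */
    1, 1,                       /* encoder, decoder present */
    "my_filter",                /* name, for error messages */
    NULL,                       /* can_apply callback (optional) */
    NULL,                       /* set_local callback (optional) */
    my_filter                   /* the filter function itself */
};

/* Register the filter with HDF5 at run time; no plugin library
   on HDF5_PLUGIN_PATH is involved. */
static int register_my_filter(void)
{
    return (H5Zregister(&my_filter_class) < 0) ? 1 : 0;
}
````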
re: Issue https://github.com/Unidata/netcdf-c/issues/1365
At some point (4.6.1) we changed the diskless handling of flags
and added a new NC_PERSIST flag. It looks like we did not fix all
occurrences of the old flag set, specifically for the 'nccopy -w' command.
Also added some tests (tst_nccopy_w{3,4}.sh) for this situation.
A user suggested that the nccopy -F option
syntax should be extended to support specification
of multiple (or all) variables in a single -F option.
The new syntax allows:
1. '*' as the name of the variable; this means apply the
filter to all variables in the data set.
2. 'var1|var2|...' as the variable name to indicate that the filter
should be applied to the multiple specified variables.
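For example (illustrative invocations; 307 is the bzip2 filter id
used by the test suite):
````
# apply filter 307 with parameter 9 to every variable
nccopy -F '*,307,9' in.nc out.nc
# apply it only to variables var1 and var2
nccopy -F 'var1|var2,307,9' in.nc out.nc
````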
re: issue https://github.com/Unidata/netcdf-c/issues/1278
re: issue https://github.com/Unidata/netcdf-c/issues/876
re: issue https://github.com/Unidata/netcdf-c/issues/806
* Major change to the handling of 8-byte parameters for nc_def_var_filter.
The old code was not well thought out.
* The new algorithm is documented in docs/filters.md.
* Added new utility file plugins/H5Zutil.c to support
the new algorithm (a sketch of the idea appears at the end of this entry).
* Modified plugins/H5Zmisc.c to use the new algorithm.
* Renamed include/ncfilter.h to include/netcdf_filter.h
and made it an installed header so clients can access the
new algorithm utility.
* Fixed nc_test4/tst_filterparser.c and nc_test4/test_filter_misc.c
to use the new algorithm
* libdap4/ fixes:
* d4swap.c had an error in the endian pre-processing such
that record counts were not being swapped correctly.
* d4data.c had an error in that checksums were being computed
after endian swapping rather than before.
* ocinitialize() was never being called, so xxdr bigendian handling
was never set correctly.
* Required adding debug statements to occompile
* Found and fixed memory leak in ncdump.c
Not tested:
* HDF4
* Pnetcdf
* parallel HDF5
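As noted above, here is a hedged illustration of the 8-byte parameter
idea (not the actual plugins/H5Zutil.c code; docs/filters.md has the
authoritative rules): an 8-byte value travels as two 4-byte unsigned
ints whose order must be adjusted on big-endian machines.
````c
#include <string.h>

/* Split a 64-bit filter parameter into the two unsigned ints that
   nc_def_var_filter transmits. WORDS_BIGENDIAN is the usual autoconf
   macro; the swap keeps the reassembled value identical on all
   machines. */
static void split_param8(unsigned long long value, unsigned int out[2])
{
    memcpy(out, &value, 8);
#ifdef WORDS_BIGENDIAN
    unsigned int tmp = out[0];
    out[0] = out[1];
    out[1] = tmp;
#endif
}
````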
re: issue https://github.com/Unidata/netcdf-c/issues/1251
Assume that you have the URL to a remote dataset
which is a normal netcdf-3 or netcdf-4 file.
This PR allows the netcdf-c library to read that dataset's
contents as a netcdf file using HTTP byte ranges
if the remote server supports byte-range access.
Originally, this PR was set up to access Amazon S3 objects,
but it can also access other remote datasets such as those
provided by a THREDDS server via the HTTPServer access protocol.
It may also work for other kinds of servers.
Note that this is not intended as a true production
capability because, as is known, this kind of access
can be quite slow. In addition, the byte-range IO drivers
do not currently do any sort of optimization or caching.
An additional goal here is to gain some experience with
the Amazon S3 REST protocol.
This architecture and its use are documented in
the file docs/byterange.dox.
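For example, a sketch of byte-range access (hypothetical host; the
#mode=bytes URL fragment is the trigger described in docs/byterange.dox):
````c
#include <stdio.h>
#include <netcdf.h>

int main(void)
{
    int ncid, stat;
    /* the #mode=bytes fragment selects the byte-range dispatcher */
    stat = nc_open("https://remote.example.com/data/file.nc#mode=bytes",
                   NC_NOWRITE, &ncid);
    if (stat != NC_NOERR) {
        fprintf(stderr, "%s\n", nc_strerror(stat));
        return 1;
    }
    /* ... read it like any other netcdf file ... */
    return nc_close(ncid);
}
````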
There are currently two test cases:
1. nc_test/tst_s3raw.c - this does a simple open, check format, close cycle
for a remote netcdf-3 file and a remote netcdf-4 file.
2. nc_test/test_s3raw.sh - this uses ncdump to investigate some remote
datasets.
This PR also incorporates significantly changed model inference code
(see the superseded PR https://github.com/Unidata/netcdf-c/pull/1259).
1. It centralizes the code that infers the dispatcher.
2. It adds support for byte-range URLs
Other changes:
1. NC_HDF5_finalize was not being properly called by nc_finalize().
2. Fix minor bug in ncgen3.l
3. fix memory leak in nc4info.c
4. add code to walk the .daprc triples and to replace the protocol=
fragment tag with a more general mode= tag.
Final Note:
The inference code is still way too complicated. We need to move
to the validfile() model used by netcdf Java, where each
dispatcher is asked if it can process the file. This decentralizes
the inference code. This will be done after all the major new
dispatchers (PIO, Zarr, etc) have been implemented.
This is a follow up to PR https://github.com/Unidata/netcdf-c/pull/1173
Sorry that it is so big, but leak suppression can be complex.
This PR fixes all remaining memory leaks -- as determined by
-fsanitize=address, and with the exceptions noted below.
Unfortunately, there remains a significant leak that I cannot
solve. It involves vlens, and it is unclear if the leak is
occurring in the netcdf-c library or the HDF5 library.
I have added a check_PROGRAM to the ncdump directory to show the
problem. The program is called tst_vlen_demo.c. To exercise it,
build the netcdf library with -fsanitize=address enabled. Then
go into ncdump and do a "make clean check". This should build
tst_vlen_demo without actually executing it. Then do the
command "./tst_vlen_demo" to see the output of the memory
checker. Note that the lost malloc is deep in the HDF5 library
(in H5Tvlen.c).
I am temporarily working around this error in the following way.
1. I modified several test scripts to not execute known vlen tests
that fail as described above.
2. Added an environment variable called NC_VLEN_NOTEST.
If set, then those specific tests are suppressed.
This should mean that the --disable-utilities option to
./configure should not need to be set to get a memory leak clean
build. This should allow for detection of any new leaks.
Note: I used an environment variable rather than a ./configure
option to control the vlen tests. This is because it is
temporary (I hope) and because it is a bit tricky for shell
scripts to access ./configure options.
Finally, as before, this has only been tested with netcdf-4 and hdf5 support.
re: https://github.com/Unidata/netcdf-c/issues/1154
Inadvertently, the behavior of NC_DISKLESS with nc_create() was
changed in release 4.6.1. Previously, the NC_WRITE flag needed
to be explicitly used with NC_DISKLESS in order to cause the
created file to be persisted to disk.
Additional analysis indicated that the current NC_DISKLESS
implementation was seriously flawed.
This PR attempts to clean up and regularize the situation with
respect to NC_DISKLESS control. One important aspect of diskless
operation is that there are two different notions of write.
1. The file is read-write vs read-only when using the netcdf API.
2. The file is persisted or not to disk at nc_close().
Previously, these two were conflated. The rules now are
as follows.
1. NC_DISKLESS + NC_WRITE means that the file is read/write using the netcdf API
2. NC_DISKLESS + NC_PERSIST means that the file is persisted to a disk file at nc_close.
3. NC_DISKLESS + NC_PERSIST + NC_WRITE means both 1 and 2.
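A sketch of the three combinations (error checking elided):
````c
#include <netcdf.h>

static void diskless_examples(void)
{
    int ncid;

    /* NC_DISKLESS + NC_WRITE: in-memory, modifiable via the
       netcdf API, discarded at close */
    nc_open("f1.nc", NC_DISKLESS|NC_WRITE, &ncid);
    nc_close(ncid);

    /* NC_DISKLESS + NC_PERSIST: persisted to disk at nc_close() */
    nc_create("f2.nc", NC_DISKLESS|NC_PERSIST, &ncid);
    nc_close(ncid);

    /* NC_DISKLESS + NC_PERSIST + NC_WRITE: both 1 and 2 */
    nc_open("f3.nc", NC_DISKLESS|NC_PERSIST|NC_WRITE, &ncid);
    nc_close(ncid);
}
````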
The NC_PERSIST flag is new and takes over the obsolete NC_MPIPOSIX flag.
NC_MPIPOSIX is still defined, but is now an alias for the NC_MPIIO flag.
It is also now the case that for netcdf-4, NC_DISKLESS is independent
of NC_INMEMORY and in fact it is an error to specify both flags
simultaneously.
Finally, the MMAP code was fixed to use NC_PERSIST as well.
Also marked MMAP as deprecated.
Also added a test case to test various combinations of NC_DISKLESS,
NC_PERSIST, and NC_WRITE.
This PR affects a number of files and especially test cases
that used NC_DISKLESS.
Misc. Unrelated fixes
1. fixed some warnings in ncdump/dumplib.c
re: issue https://github.com/Unidata/netcdf-c/issues/1156
Starting with HDF5 version 1.10.x, the plugin code MUST be
careful when using the standard *malloc()*, *realloc()*, and
*free()* functions.
In the event that the code is allocating, reallocating, or
freeing memory that either came from -- or will be exported to --
the calling HDF5 library, then one MUST use the corresponding
HDF5 functions *H5allocate_memory()*, *H5resize_memory()*, and
*H5free_memory()* to avoid memory failures.
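A sketch of the required pattern inside a filter callback (the
pass-through "filter" here is illustrative):
````c
#include <hdf5.h>
#include <string.h>

static size_t example_filter(unsigned flags, size_t cd_nelmts,
                             const unsigned cd_values[], size_t nbytes,
                             size_t *buf_size, void **buf)
{
    (void)flags; (void)cd_nelmts; (void)cd_values;

    /* The output buffer is handed back to HDF5, so it must come
       from H5allocate_memory(), not malloc(). */
    void *outbuf = H5allocate_memory(nbytes, 0 /* do not zero */);
    if (outbuf == NULL)
        return 0; /* failure */
    memcpy(outbuf, *buf, nbytes); /* a real filter transforms here */

    /* The incoming buffer came from HDF5, so it must be released
       with H5free_memory(), not free(). */
    H5free_memory(*buf);
    *buf = outbuf;
    *buf_size = nbytes;
    return nbytes;
}
````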
Additionally, if your filter code leaks memory, then the HDF5 library
generates a failure like this:
````
H5MM.c:232: H5MM_final_sanity_check: Assertion `0 == H5MM_curr_alloc_bytes_s' failed.
````
This PR modifies the code in the plugins directory to
conform to these new requirements.
This raises a question about the libhdf5 code where this
same problem may occur. We need to scan especially nc4hdf.c
to look for this problem.
re: issue https://github.com/Unidata/netcdf-c/issues/1151
Modify DAP2 and DAP4 code to handle the case when the _FillValue type
is not the same as the parent variable type.
Specifically:
1. Define a parameter [fillmismatch] to allow this mismatch;
the default is to disallow it.
2. If allowed, forcibly change the type of the _FillValue to match
the parent variable.
3. If allowed, convert the values to match the new type.
4. Generate a log message.
5. If not allowed, then fail.
Implementing this required some changes to ncdap_test/dapcvt.c
Also added test cases.
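A sketch of turning the parameter on from a client; this assumes the
bracketed client-parameter prefix usable on DAP URLs (an assumption),
and the host is hypothetical:
````c
#include <netcdf.h>

static int open_with_fillmismatch(int *ncidp)
{
    /* [fillmismatch] asks the DAP code to tolerate a _FillValue
       whose type differs from its parent variable's type */
    return nc_open("[fillmismatch]http://server.example.com/dap/dataset",
                   NC_NOWRITE, ncidp);
}
````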
Minor Unrelated Changes:
1. There were a number of warnings about e.g.
assigning a const char* to a char*. Fixed these.
2. In nccopy.1, replace .NP with .IP "n"
(re PR https://github.com/Unidata/netcdf-c/pull/1144)
3. fix minor error in ncdump/ocprint
re: https://github.com/Unidata/netcdf-c/issues/972
The current szip plugin code in the HDF5 library has some
unexpected behaviors that require some changes to how
nc_inq_var_szip is implemented and to the corresponding tests:
nc_test4/{test_szip,tst_vars3}.
Specifically, the following can happen:
1. The number of parameters provided by the user will be two,
but the number of parameters returned by nc_inq_var_filter
will be four because the HDF5 code (H5Zszip) will add two
extra parameters for internal use. It turns out that the two
parameters provided when calling nc_def_var_filter correspond
to the first two parameters of the four parameters returned
by nc_inq_var_filter.
2. The nc_inq_var_szip values corresponding to the ones provided
by the caller may be different from those provided to
nc_def_var_filter. The value of the options_mask argument is
known to add additional flag bits, and the pixels_per_block
parameter may be modified.
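A sketch of the observable behavior (error checking elided; the szip
filter id and NN option-mask value are taken from H5Zpublic.h):
````c
#include <netcdf.h>

#define H5Z_FILTER_SZIP 4          /* from H5Zpublic.h */
#define H5_SZIP_NN_OPTION_MASK 32  /* from H5Zpublic.h */

static void szip_roundtrip(int ncid, int varid)
{
    /* caller provides two parameters: options_mask, pixels_per_block */
    unsigned int params[2] = {H5_SZIP_NN_OPTION_MASK, 32};
    nc_def_var_filter(ncid, varid, H5Z_FILTER_SZIP, 2, params);

    /* ... later, after the file has been written and reopened ... */
    unsigned int id, got[4];
    size_t nparams;
    nc_inq_var_filter(ncid, varid, &id, &nparams, got);
    /* nparams is 4, not 2: H5Zszip appended two internal parameters;
       got[0] and got[1] correspond to params[0] and params[1] but may
       have been modified (extra options_mask bits, an adjusted
       pixels_per_block). */
}
````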
re: github issue https://github.com/Unidata/netcdf-c/issues/1111
One of the less common use cases for the in-memory feature is
apparently failing with HDF5-1.10.x. The fix is complicated and
requires significant changes to libhdf5/nc4memcb.c. The current
setup is detailed in the file docs/inmeminternal.dox.
Additionally, it was discovered that the program
nc_test/tst_inmemory.c, which is invoked by
nc_test/run_inmemory.sh, was actually failing because of the
above problem. But the failure was not detected since the script
did not return a non-zero value.
Other Changes:
1. Fix nc_test/tst_inmemory.c to return errors correctly.
2. Make ncdap_test/findtestserver.c and dap4_test/findtestserver4.c
be generated from ncdap_test/findtestserver.c.in.
3. Make LOG() print output to stderr instead of stdout to
avoid contaminating e.g. ncdump output.
4. Modify the handling of NC_INMEMORY and NC_DISKLESS flags
to properly handle that NC_DISKLESS => NC_INMEMORY. This
affects a number of code pieces, especially memio.c.
Add the ability to set some additional curlopt values via .daprc (aka .dodsrc).
This affects both the DAP2 and DAP4 protocols.
Related issues:
[1] re: esupport: KOZ-821332
[2] re: github issue https://github.com/Unidata/netcdf4-python/issues/836
[3] re: github issue https://github.com/Unidata/netcdf-c/issues/1074
1. CURLOPT_BUFFERSIZE: Relevant to [1]. Allow the user to set the
read/write buffersizes used by curl.
This is done by adding the following to .daprc (aka .dodsrc):
HTTP.READ.BUFFERSIZE=n
where n is the buffersize in bytes. There is a built-in (to curl)
limit of 512k for this value.
2. CURLOPT_TCP_KEEPALIVE (and CURLOPT_TCP_KEEPIDLE and CURLOPT_TCP_KEEPINTVL):
Relevant (maybe) to [2] and [3]. Allow the user to turn on KEEPALIVE
This is done by adding the following to .daprc (aka .dodsrc):
HTTP.KEEPALIVE=on|n/m
If the value is "on", then simply enable default KEEPALIVE. If the value
is n/m, then enable KEEPALIVE and set KEEPIDLE to n and KEEPINTVL to m.
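For example, a .daprc (aka .dodsrc) might contain (values illustrative):
````
# use a 64K curl read buffer (curl caps this at 512k)
HTTP.READ.BUFFERSIZE=65536
# enable KEEPALIVE with KEEPIDLE=60 and KEEPINTVL=30
HTTP.KEEPALIVE=60/30
````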
Extend the information stored in the _NCProperties attribute to allow two things:
1. capture of additional library dependencies (over and above
hdf5)
2. Recognition of non-netcdf libraries that create netcdf-4 format
files.
To this end, the _NCProperties format has been extended to be
an arbitrary set of key=value pairs separated by commas.
This new format has version = 2, and uses commas as the pair separator.
Thus the general form is:
_NCProperties = "version=2,key1=value1,key2=value2..." ;
This new version is accompanied by a new ./configure option of the form
--with-ncproperties="key1=value1,key2=value2..."
that specifies pairs to add to the _NCProperties attribute for all
files created with that netcdf library.
At this point, what is missing is some programmatic way to
specify either all the pairs or additional pairs
for the _NCProperties attribute. I am not sure of the best way
to do this.
Builders using non-netcdf libraries can specify
whatever they want in the key value pairs (as long
as the version=2 is specified first).
By convention, the primary library is expected to be
the first pair after the leading version=2 pair, but this
is convention only and is neither required nor enforced.
Related changes:
1. Fixed the tests that check _NCProperties to properly operate with version=2.
2. When reading a version 1 _NCProperties attribute, convert it to look
like a version 2 attribute.
3. Added some version 2 tests to ncdump/tst_fileinfo.c and
ncdump/tst_fileinfo.sh
Misc Changes:
1. Fix minor problem in ncdap_test/testurl.sh where a parameter to
buildurl needed to be quoted.
2. Minor fix to ncgen to swap switches -H and -h to be consistent
with other utilities.
3. Document the -M flag in nccopy usage() and the nccopy man page.
4. Modify a test case to use the nccopy -M flag.
Apparently there is no way to mark an entire documentation
file with @internal. The equivalent can be faked using
the Doxygen ENABLED_SECTIONS mechanism; see docs/testserver.dox
to see how this is done. So I made --enable-internal-docs
also set ENABLED_SECTIONS = INTERNAL and then used it
in docs/testserver.dox to decide if it needs to be included.
This PR makes the following changes in the docs directory:
1. Adds a new internal document -- testserver.dox -- describing
how to set up and maintain the dap test server.
2. Moves the internal documentation (internal.dox, indexing.dox,
and testserver.dox) to later in the documentation table of contents.
3. Cleans up the formatting of the internal documents.
4. Cleans up some minor doxygen issues in other files.
The file docs/indexing.dox tries to provide design
information for the refactoring.
The primary change is to replace all walking of linked
lists with the use of the NCindex data structure.
NCindex is a combination of a hash table (for name-based
lookup) and a vector (for walking the elements in the index).
Additionally, global vectors are added to NC_HDF5_FILE_INFO_T
to support direct mapping of an e.g. dimid to the NC_DIM_INFO_T
object. These global vectors exist for dimensions, types, and groups
because they have globally unique id numbers.
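Conceptually, the structure pairs the existing NClist and NC_hashmap
types (a sketch; the authoritative declarations are in the NCindex header):
````c
#include "nclist.h"
#include "nchashmap.h"

/* One index per kind of metadata object in a group. The list gives
   ordered iteration; the map gives O(1) lookup by name. */
typedef struct NCindex {
    NClist*     list; /* objects in creation (definition) order */
    NC_hashmap* map;  /* object name -> position in list */
} NCindex;
````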
WARNING:
1. since libsrc4 and libsrchdf4 share code, there are also
changes in libsrchdf4.
2. Any outstanding pull requests that change libsrc4 or libsrchdf4
are likely to cause conflicts with this code.
3. The original reason for doing this was performance improvement,
but as noted elsewhere, the gain may not be significant:
the meta-data read performance apparently is dominated by
the hdf5 library, since we do bulk meta-data reading rather
than lazy reading.
and https://github.com/Unidata/netcdf-c/issues/708
Expand the NC_INMEMORY capabilities to support writing and accessing
the final modified memory.
Three new functions have been added:
nc_open_memio, nc_create_mem, and nc_close_memio.
The following new capabilities were added.
1. nc_open_memio() allows the NC_WRITE mode flag
so a chunk of memory can be passed in and be modified
2. nc_create_mem() allows the NC_INMEMORY flag to be set
to cause the created file to be kept in memory.
3. nc_close_memio() allows the final in-memory contents to be
retrieved at the time the file is closed.
4. A special flag, NC_MEMIO_LOCKED, is provided to ensure that
the provided memory will not be freed or reallocated.
Note the following.
1. If nc_open_memio() is called with NC_WRITE, and NC_MEMIO_LOCKED is not set,
then the netcdf-c library will take control of the incoming memory.
This means that the original memory block should not be freed
but the block returned by nc_close_memio() must be freed.
2. If nc_open_memio() is called with NC_WRITE, and NC_MEMIO_LOCKED is set,
then modifications to the original memory may fail if the space available
is insufficient.
Documentation is provided in the file docs/inmemory.md.
A test case is provided: nc_test/tst_inmemory.c driven by
nc_test/run_inmemory.sh
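A sketch of the create-in-memory cycle (error checking elided; per
note 1 above, the image returned at close must be freed by the caller):
````c
#include <stdlib.h>
#include <netcdf.h>
#include <netcdf_mem.h>

static void inmemory_example(void)
{
    int ncid;
    NC_memio image;

    /* build a file entirely in memory */
    nc_create_mem("inmem.nc", NC_NETCDF4, 0 /* initial size hint */, &ncid);
    /* ... define dimensions/variables and write data ... */

    /* retrieve the final file image instead of writing to disk */
    nc_close_memio(ncid, &image);
    /* image.memory/image.size now hold the complete netcdf file */
    free(image.memory);
}
````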
WARNING: changes were made to the dispatch table for
the close entry. From int (*close)(int) to int (*close)(int,void*).
(I hope) metadata mechanism. This mostly just adds new pieces of
code (e.g. nclistmap) and does some minor fixes.
It should be transparent to everything else.
The next set of changes will be the big step.
2. Fixed plugin building (nc_test4/hdf5plugins)
to be done properly by cmake and automake.
4. Duplicated part of the nc_test4 filter test code
in examples/C
An incomplete and untested set of hooks exists
for OS-X in nc_test4/findplugins.in; they need testing.
Some parameters like stringlength actually affect a dimension
named maxStrlen. So, add aliasing so that maxstrlen can be
specified as a parameter, as an alias for stringlength.
The affected parameters (case insensitive):
stringlength has alias maxstrlen
stringlength_<varname> has alias maxstrlen_<varname>
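For example, assuming the bracketed client-parameter prefix on DAP
URLs (host hypothetical), these two URLs are now equivalent:
````
[stringlength=16]http://server.example.com/dap/dataset
[maxstrlen=16]http://server.example.com/dap/dataset
````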
Also:
1. added a test case in ncdap_test/testurl.sh
2. added note to documentation
2. Factored out the parameter string parsing for ncgen and nccopy
into libdispatch/dfilter.c + include/ncfilter.h.
3. Allow a parameter string to use constant types other than
unsigned int. See docs/filters.md for details.
4. Moved the old content of include/netcdf_filter.h into include/netcdf.h
and removed include/netcdf_filter.h as no longer needed.
5. Force the test filter (bzip2) in nc_test4/filter_test to
be built using BUILT_SOURCES.
Primary change is to cleanup code and remove duplicated code.
1. Unify the rc file reading into libdispatch/drc.c. Eventually extend
it if we need an rc file for netcdf itself, as opposed to the dap code.
2. Unify the extraction from the rc file of DAP authorization info.
3. Misc. other small unifications: make temp file, read file.
4. Avoid use of libcurl when reading file:// because
there is some kind of problem with the Visual Studio version.
Might be related to the winpath problem.
In any case, do a direct read instead.
5. Add new error code NC_ERCFILE for errors in reading RC file.
6. Complete documentation cleanup as indicated in this comment
https://github.com/Unidata/netcdf-c/pull/472#issuecomment-325926426
7. Convert some occurrences of #ifdef _WIN32 to #ifdef _MSC_VER
generates garbage. This in turn interferes with using .netrc
because the garbage user+pwd will override the
.netrc. Note that this may work ok sometimes
if the garbage happens to start with a nul character.
2. It turns out that the user:pwd combination needs to support
character escaping. One reason is the user may contain an '@' character.
The other is that modern password rules make it not unlikely that
the password will contain characters that interfere with url parsing.
So, the rule I have implemented is that all occurrences of the user:pwd
format must escape any dodgy characters. The escape format is URL escaping
of the form %XX. This applies both to user:pwd
embedded in a URL as well as the use of HTTP.CREDENTIALS.USERPASSWORD
in a .dodsrc/.daprc file. The user and password in .netrc must not
be escaped. This is now documented in docs/auth.md
The fix for #2 actually obviated #1. Now, internally, the user and pwd
are stored separately and not in the user:pwd format. They are combined
(and escaped) only when needed.
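For example, a user jane@example.com with password p#ss would be
escaped as follows (host hypothetical); the same form works in both
places:
````
https://jane%40example.com:p%23ss@server.example.com/dap/dataset
HTTP.CREDENTIALS.USERPASSWORD=jane%40example.com:p%23ss
````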
to docs/filter.md
2. Moved location of filter.md in documentation
3. Add a template file as the basis for building new filters.
4. Did some test case cleanup
1. Allow nccopy to apply filters, especially on the output file.
This provides a third way to do this other than using ncgen or
programmatically.
2. Make sure that even if the filter code is not available, it is
possible to see the filter id and parameters for variables using
e.g. ncdump -hs.
3. Fix bug in nccopy so that the input file does
not necessarily have to be netcdf-4.
4. At the last minute, decided to change to using a
single "_Filter" attribute for ncgen.
5. Added a test to tst_filter.sh to generate C code using ncgen.
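In CDL, the single-attribute form looks like this (filter 307 is the
bzip2 test filter; 9 is the compression level):
````
netcdf example {
dimensions:
    d = 1000 ;
variables:
    float var(d) ;
        var:_Filter = "307,9" ;
}
````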
This relies on the HDF5 capability to
dynamically load compression filters.
Note that a compression filter is just
a subcase of filters.
The primary user-visible changes are as follows:
1. Add a standard header "netcdf_filter.h" that defines
the necessary API extensions
2. Modify ncgen to support two new special attributes
"_Filter_ID" and "_Filter_Parameters" so that compression
can be turned on when creating a file using ncgen.
4. Add a detailed description of filtering support
to the user's guide; see the file filters.md
5. Add a test case directory for this: nc_test4/filter_test.
It is fragile, so a ./configure flag (--enable-filter-test)
is defined (default disabled) to turn this test off
to avoid spurious 'make check' failures.
Note that the HDF5 documentation is not up-to-date, so
much of what is encoded here comes from examining the
actual code in the file H5PL.c in the HDF5 source code.
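At run time, a dynamically loaded filter is located through the
HDF5_PLUGIN_PATH environment variable, e.g. (directory hypothetical):
````
export HDF5_PLUGIN_PATH=/usr/local/hdf5/plugins
````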
1. Cleanup test_common.sh to expunge (mostly) the use of the VS
path value. This has the effect of being unable to use the
Visual Studio C compiler for shell tests.
2. There is a missing case in CMakeLists.txt so add
defaulting for HDF5_C_LIBRARY_hdf5 using HDF5_C_LIBRARY.
Ward should probably examine this to get it fixed correctly.
3. Put back ref to esg.md in docs/Doxyfile.in
4. Fix minor warning in dutf8proc.h
1. When running under windows (as opposed to cygwin)
we need to make sure not to use /cygdrive/ file paths.
This was occurring in libdap4/d4read.c, but may occur
elsewhere.
2. Shell scripts in the git repo are not being checked-out
with the executable mode set; core.filemode was set to false.
It was a major hassle to fix.
Modified provenance code to allocate the minimal space
needed for _NCProperties attribute in file. Basically
required using malloc in the provenance code and in ncdump.
Otherwise this should cause no externally visible effects.
Also removed the ENABLE_FILEINFO from configure.ac since
the provenance code is no longer optional.
Charlie Zender noted that we forgot to define what happens for
various netcdf API attribute operations, notably nc_inq_att()
and nc_get_att().
So, I added a list of legal and illegal api calls for the provenance
attributes in docs/attribute_conventions.md.
I also added more test cases to ncdump/tst_fileinfo.c to verify this,
and fixed the resultant errors.
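For reference, a sketch of the legal read-side access (error checking
elided):
````c
#include <stdlib.h>
#include <netcdf.h>

static void read_ncproperties(int ncid)
{
    size_t len;
    nc_type xtype;

    /* nc_inq_att() and nc_get_att() are legal on the provenance
       attributes... */
    nc_inq_att(ncid, NC_GLOBAL, "_NCProperties", &xtype, &len);
    char *value = malloc(len + 1);
    nc_get_att_text(ncid, NC_GLOBAL, "_NCProperties", value);
    value[len] = '\0';
    /* ...but modifying or deleting them is not; see
       docs/attribute_conventions.md for the full lists. */
    free(value);
}
````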
This consists of a persistent attribute named
_NCProperties plus two computed attributes
_IsNetcdf4 and _SuperblockVersion.
See the 'Provenance Attributes' section
of docs/attribute_conventions.md for details.
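As seen by ncdump -s, they appear as global attributes, e.g. (values
illustrative):
````
// global attributes:
    :_NCProperties = "version=1|netcdflibversion=4.5.0|hdf5libversion=1.10.1" ;
    :_SuperblockVersion = 0 ;
    :_IsNetcdf4 = 1 ;
````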