#!/bin/sh

if test "x$srcdir" = x ; then srcdir=`pwd`; fi
. ../test_common.sh

. $srcdir/test_ncdump.sh

Mitigate S3 test interference + Unlimited Dimensions in NCZarr
This PR started as an attempt to add unlimited dimensions to NCZarr.
It did that, but this exposed significant problems with test interference.
So this PR is mostly about fixing -- well mitigating anyway -- test
interference.
The problem of test interference is now documented in docs/internal.md.
The solutions implemented here are also described in that document.
The solution is somewhat fragile, but multiple cleanup mechanisms
are provided. Note that this feature requires that the
AWS command line utility be installed.
## Unlimited Dimensions.
The existing NCZarr extensions to Zarr are modified to support unlimited dimensions.
NCZarr extends the Zarr meta-data for the ".zgroup" object to include netcdf-4 model extensions. This information is stored in ".zgroup" as a dictionary named "_nczarr_group".
Inside "_nczarr_group", there is a key named "dims" that stores information about netcdf-4 named dimensions. The value of "dims" is a dictionary whose keys are the named dimensions. The value associated with each dimension name has one of two forms.
Form 1 is a special case of form 2 and is kept for backward compatibility. Whenever a new file is written, it uses form 1 if possible, otherwise form 2.
* Form 1: An integer representing the size of the dimension, which is used for simple named dimensions.
* Form 2: A dictionary with the following keys and values:
- "size" with an integer value representing the (current) size of the dimension.
- "unlimited" with a value of either "1" or "0" to indicate whether this dimension is an unlimited dimension.
For unlimited dimensions, the size is initially zero, and as variables extend the length of that dimension, the size value for the dimension increases.
That dimension size is shared by all arrays referencing that dimension, so if one array extends an unlimited dimension, it is implicitly extended for all other arrays that reference that dimension.
This is the standard semantics for unlimited dimensions.
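As a schematic sketch (hypothetical dimension names and sizes; the exact JSON layout may differ), a group with a fixed dimension "lat" of size 180 and a currently empty unlimited dimension "time" might carry a "dims" entry like this:
````
"_nczarr_group": {
    "dims": {
        "lat": 180,
        "time": {"size": 0, "unlimited": 1}
    }
}
````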
Adding unlimited dimensions required a number of other changes to the NCZarr code-base. These included the following.
* Did a partial refactor of the slice handling code in zwalk.c to clean it up.
* Added a number of tests for unlimited dimensions derived from the same test in nc_test4.
* Added several NCZarr specific unlimited tests; more are needed.
* Added a test of endianness.
## Misc. Other Changes
* Modify libdispatch/ncs3sdk_aws.cpp to optionally support use of the
AWS Transfer Utility mechanism. This is controlled by the
`TRANSFER` #define in that file. It defaults to being disabled.
* Parameterize both the standard Unidata S3 bucket (S3TESTBUCKET) and the netcdf-c test data prefix (S3TESTSUBTREE).
* Fixed an obscure memory leak in ncdump.
* Removed some obsolete unit testing code and test cases.
* Uncovered a bug in the netcdf-c handling of big-endian floats and doubles. Have not fixed yet. See tst_h5_endians.c.
* Renamed some nczarr_tests testcases to avoid name conflicts with nc_test4.
* Modify the semantics of zmap#ncsmap_write to only allow total rewrite of objects.
* Modify the semantics of zodom to properly handle stride > 1.
* Add a truncate operation to the libnczarr zmap code.

isolate "testdir_nccopy4"
THISDIR=`pwd`
cd $ISOPATH

set -e

# For a netCDF-4 build, test nccopy on netCDF files in this directory

echo ""

# Create common test inputs
createtestinputs

TESTFILES0='tst_comp tst_comp2 tst_enum_data tst_fillbug
tst_group_data tst_nans tst_opaque_data tst_solar_1 tst_solar_2
tst_solar_cmp tst_special_atts'

TESTFILES="$TESTFILES0 tst_string_data"

Fix various problems around VLENs
re: https://github.com/Unidata/netcdf-c/issues/541
re: https://github.com/Unidata/netcdf-c/issues/1208
re: https://github.com/Unidata/netcdf-c/issues/2078
re: https://github.com/Unidata/netcdf-c/issues/2041
re: https://github.com/Unidata/netcdf-c/issues/2143
For a long time, there have been known problems with the
management of complex types containing VLENs. This also
involves the string type because it is stored as a VLEN of
chars.
This PR (mostly) fixes this problem. But note that it adds new
functions to netcdf.h (see below) and this may require bumping
the .so number. These new functions can be removed, if desired,
in favor of functions in netcdf_aux.h, but netcdf.h seems the
better place for them because they are intended as alternatives
to the nc_free_vlen and nc_free_string functions already in
netcdf.h.
The term complex type refers to any type that directly or
transitively references a VLEN type: for example, an array of VLENs, a
compound with a VLEN field, and so on.
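As a hedged illustration (the type and field names are hypothetical, not taken from this PR), such a complex type can be built with the existing netCDF-4 type API; any instance of it then carries interior VLEN allocations that a shallow free or copy would miss:
````
/* Hedged sketch: a compound type containing a VLEN field is a
 * "complex type" in the sense above.  Names are hypothetical. */
#include <stddef.h>
#include <netcdf.h>

typedef struct obs_t {
    int id;
    nc_vlen_t samples;   /* interior block: dynamically allocated */
} obs_t;

static int define_complex_type(int ncid, nc_type *obs_typep)
{
    nc_type vlen_type;
    int stat;
    /* VLEN of floats */
    if ((stat = nc_def_vlen(ncid, "float_vlen", NC_FLOAT, &vlen_type))) return stat;
    /* Compound with an int field and a VLEN field */
    if ((stat = nc_def_compound(ncid, sizeof(obs_t), "obs_t", obs_typep))) return stat;
    if ((stat = nc_insert_compound(ncid, *obs_typep, "id", offsetof(obs_t, id), NC_INT))) return stat;
    if ((stat = nc_insert_compound(ncid, *obs_typep, "samples", offsetof(obs_t, samples), vlen_type))) return stat;
    return NC_NOERR;
}
````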
In order to properly handle instances of these complex types, it
is necessary to have functions that can recursively walk
instances of such types to perform various actions on them. The
term "deep" is also used here to mean recursive.
At the moment, the two operations needed by the netcdf library are:
* free'ing an instance of the complex type
* copying an instance of the complex type.
The current library does only shallow free and shallow copy of
complex types. This means that only the top level is properly
free'd or copied, but deep internal blocks in the instance are
not touched.
Note that the term "vector" will be used to mean a contiguous (in
memory) sequence of instances of some type. Given an array with,
say, dimensions 2 X 3 X 4, this will be stored in memory as a
vector of length 2*3*4=24 instances.
The use cases are primarily these.
## nc_get_vars
Suppose one is reading a vector of instances using nc_get_vars
(or nc_get_vara or nc_get_var, etc.). These functions will
return the vector in the top-level memory provided. All
interior blocks (from nested VLENs or strings) will have been
dynamically allocated.
After using this vector of instances, it is necessary to free
(aka reclaim) the dynamically allocated memory, otherwise a
memory leak occurs. So, the recursive reclaim function is used
to walk the returned instance vector and do a deep reclaim of
the data.
Currently functions are defined in netcdf.h that are supposed to
handle this: nc_free_vlen(), nc_free_vlens(), and
nc_free_string(). Unfortunately, these functions only do a
shallow free, so deeply nested instances are not properly
handled by them.
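A minimal sketch of this use case (the helper and variable names are hypothetical; the reclaim call uses the nc_reclaim_data() signature introduced below):
````
/* Hedged sketch: read a vector of VLEN instances, use it, then deep-reclaim it.
 * nc_reclaim_data() leaves the caller-owned top-level vector alone, so it can
 * be a stack or malloc'd buffer that the caller frees normally. */
#include <stdlib.h>
#include <netcdf.h>

static int read_then_reclaim(int ncid, int varid, nc_type xtype, size_t nrecs)
{
    size_t start[1] = {0};
    size_t count[1];
    nc_vlen_t *data = malloc(nrecs * sizeof(nc_vlen_t));
    int stat;
    count[0] = nrecs;
    if (data == NULL) return NC_ENOMEM;
    if ((stat = nc_get_vara(ncid, varid, start, count, data)) == NC_NOERR) {
        /* ... use data[0..nrecs-1] ... */
        /* free the interior VLEN/string allocations made by the library */
        stat = nc_reclaim_data(ncid, xtype, data, nrecs);
    }
    free(data);  /* top level is caller-managed */
    return stat;
}
````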
## nc_put_vars
Suppose one is writing a vector of instances using nc_put_vars
(or nc_put_vara or nc_put_var, etc.).
Note that internally, the provided data is immediately written, so
there is no need to copy it. But the caller may need to reclaim the
data it passed into the function.
## nc_put_att
Suppose one is writing a vector of instances as the data of an attribute
using, say, nc_put_att.
Internally, the incoming attribute data must be copied and stored
so that changes/reclamation of the input data will not affect
the attribute.
Again, the code inside the netcdf library does only shallow copying
rather than deep copying. As a result, one sees effects such as those described
in GitHub issue https://github.com/Unidata/netcdf-c/issues/2143.
Also, after defining the attribute, it may be necessary for the user
to free the data that was provided as input to nc_put_att().
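A sketch of that pattern with a VLEN-valued attribute (names are hypothetical); the library keeps its own copy, and the caller then reclaims the interior memory of the data it supplied:
````
/* Hedged sketch: write a VLEN-valued attribute, then reclaim the caller's copy.
 * The variable id and the VLEN type id are assumed to come from earlier definitions. */
#include <stdlib.h>
#include <netcdf.h>

static int write_vlen_att(int ncid, int varid, nc_type vlen_type)
{
    int stat;
    float *row = malloc(3 * sizeof(float));   /* interior block owned by the caller */
    nc_vlen_t att_data[1];
    if (row == NULL) return NC_ENOMEM;
    row[0] = 1.0f; row[1] = 2.0f; row[2] = 3.0f;
    att_data[0].len = 3;
    att_data[0].p = row;
    /* the library copies (deeply, once this PR is in place) what it needs to keep */
    stat = nc_put_att(ncid, varid, "example_att", vlen_type, 1, att_data);
    /* the caller still owns its input and reclaims the interior block */
    nc_reclaim_data(ncid, vlen_type, att_data, 1);
    return stat;
}
````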
## nc_get_att
Suppose one is reading a vector of instances as the data of an attribute
using, say, nc_get_att.
Internally, the existing attribute data must be copied and returned
to the caller, and the caller is responsible for reclaiming
the returned data.
Again, the code inside the netcdf library does only shallow copying
rather than deep copying. So this can lead to memory leaks and errors
because the deep data is shared between the library and the user.
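A corresponding read-side sketch (again with hypothetical names): the attribute data returned by nc_get_att belongs to the caller and must be deep-reclaimed:
````
/* Hedged sketch: read a VLEN-valued attribute and deep-reclaim the result. */
#include <stdlib.h>
#include <netcdf.h>

static int read_vlen_att(int ncid, int varid, const char *name, nc_type vlen_type)
{
    size_t len;
    nc_vlen_t *att_data;
    int stat;
    if ((stat = nc_inq_attlen(ncid, varid, name, &len))) return stat;
    att_data = malloc(len * sizeof(nc_vlen_t));
    if (att_data == NULL) return NC_ENOMEM;
    if ((stat = nc_get_att(ncid, varid, name, att_data)) == NC_NOERR) {
        /* ... use att_data ... */
        stat = nc_reclaim_data(ncid, vlen_type, att_data, len);
    }
    free(att_data);
    return stat;
}
````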
# Solution
The solution is to build properly recursive reclaim and copy
functions and use those as needed.
These recursive functions are defined in libdispatch/dinstance.c
and their signatures are defined in include/netcdf.h.
For backward compatibility, corresponding "ncaux_XXX" functions
are defined in include/netcdf_aux.h.
````
int nc_reclaim_data(int ncid, nc_type xtypeid, void* memory, size_t count);
int nc_reclaim_data_all(int ncid, nc_type xtypeid, void* memory, size_t count);
int nc_copy_data(int ncid, nc_type xtypeid, const void* memory, size_t count, void* copy);
int nc_copy_data_all(int ncid, nc_type xtypeid, const void* memory, size_t count, void** copyp);
````
There are two variants. The first two, nc_reclaim_data() and
nc_copy_data(), assume the top-level vector is managed by the
caller. For reclaim, this is so the user can use, for example, a
statically allocated vector. For copy, it assumes the user
provides the space into which the copy is stored.
The second two, nc_reclaim_data_all() and
nc_copy_data_all(), allow the functions to manage the
top level. So for nc_reclaim_data_all, the top level is
assumed to be dynamically allocated and will be free'd by
nc_reclaim_data_all(). The nc_copy_data_all() function
will allocate the top level and return a pointer to it to the
user. The user can later pass that pointer to
nc_reclaim_data_all() to reclaim the instance(s).
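A brief sketch contrasting the _all variants (hypothetical helper name): nc_copy_data_all() allocates the top-level vector for the copy, and nc_reclaim_data_all() later frees both the interior blocks and that top level:
````
/* Hedged sketch: library-managed top level with the _all variants. */
#include <netcdf.h>

static int duplicate_then_free(int ncid, nc_type xtype, const void *src, size_t count)
{
    void *copy = NULL;
    int stat;
    /* allocates the top-level vector and deep-copies src into it */
    if ((stat = nc_copy_data_all(ncid, xtype, src, count, &copy))) return stat;
    /* ... use copy ... */
    /* frees the interior blocks and the top-level vector itself */
    return nc_reclaim_data_all(ncid, xtype, copy, count);
}
````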
# Internal Changes
The netcdf-c library internals are changed to use the proper
reclaim and copy functions. It turns out that the places where
these functions are needed is quite pervasive in the netcdf-c
library code. Using these functions also allows some
simplification of the code since the stdata and vldata fields of
NC_ATT_INFO are no longer needed. Currently this is commented
out using the SEPDATA #define macro. Once any remaining bugs are
fixed, all this code will be removed.
# Known Bugs
1. There is still one known failure that has not been solved.
All the failures revolve around some variant of this .cdl file.
The proximate cause of failure is the use of a VLEN FillValue.
````
netcdf x {
types:
float(*) row_of_floats ;
dimensions:
m = 5 ;
variables:
row_of_floats ragged_array(m) ;
row_of_floats ragged_array:_FillValue = {-999} ;
data:
ragged_array = {10, 11, 12, 13, 14}, {20, 21, 22, 23}, {30, 31, 32},
{40, 41}, _ ;
}
````
When a solution is found, I will either add it to this PR or post a new PR.
# Related Changes
* Mark nc_free_vlen(s) as deprecated in favor of ncaux_reclaim_data.
* Remove the --enable-unfixed-memory-leaks option.
* Remove the NC_VLENS_NOTEST code that suppresses some vlen tests.
* Document this change in docs/internal.md
* Disable the tst_vlen_data test in ncdump/tst_nccopy4.sh.
* Mark types as fixed size or not (transitively) to optimize the reclaim
and copy functions.
# Misc. Changes
* Make Doxygen process libdispatch/daux.c
* Make sure the NC_ATT_INFO_T.container field is set.

# Causes memory leak; source unknown
MEMLEAK="tst_vlen_data"

echo "*** Testing netCDF-4 features of nccopy on ncdump/*.nc files"
for i in $TESTFILES ; do
echo "*** Test nccopy $i.nc copy_of_$i.nc ..."
# if test "x$i" = xtst_vlen_data ; then
# ls -l tst_vlen_data*
# ls -l *.nc
# fi
${NCCOPY} $i.nc copy_of_$i.nc
${NCDUMP} -n copy_of_$i $i.nc > tmp_$i.cdl
${NCDUMP} copy_of_$i.nc > copy_of_$i.cdl
echo "*** compare with copy_of_$i.cdl"
diff copy_of_$i.cdl tmp_$i.cdl
rm copy_of_$i.nc copy_of_$i.cdl tmp_$i.cdl
done
Codify cross-platform file paths
The netcdf-c code has to deal with a variety of platforms:
Windows, OSX, Linux, Cygwin, MSYS, etc. These platforms differ
significantly in the kind of file paths that they accept. So in
order to handle this, I have created a set of replacements for
the most common file system operations such as _open_ or _fopen_
or _access_ to manage the file path differences correctly.
A more limited version of this idea was already implemented via
the ncwinpath.h and dwinpath.c code. So this can be viewed as a
replacement for that code. In many cases, the only
change that was required was to replace '#include <ncwinpath.h>'
with '#include <ncpathmgt.h>' and then replace file operation
calls with the NCxxx equivalent from ncpathmgr.h. Note that
recently, ncwinpath.h was renamed to ncpathmgmt.h, so this pull
request should not require dealing with winpath.
The heart of the change is include/ncpathmgmt.h, which provides
alternate operations such as NCfopen or NCaccess that properly
parse and rebuild path arguments to work for the platform on which
the code is executing. This mostly matters for Windows because of the
way that it uses backslashes and drive letters, as compared to *nix*.
One important feature is that the user can do string manipulations
on a file path without having to worry too much about the platform
because the path management code will properly handle most mixed cases.
So one can for example concatenate a path suffix that uses forward
slashes to a Windows path and have it work correctly.
The conversion code is in libdispatch/dpathmgr.c, and the
important function there is NCpathcvt which does the proper
conversions to the local path format.
As a rule, most code should just replace their file operations with
the corresponding NCxxx ones defined in include/ncpathmgmt.h. These
NCxxx functions all call NCpathcvt on their path arguments before
executing the actual file operation.
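A hedged example of that replacement pattern (the path and function body are hypothetical, and NCfopen is assumed to mirror fopen's argument order, as the description above implies):
````
/* Hedged sketch: use the NCxxx wrapper instead of the raw stdio call.
 * The path argument is converted internally (via NCpathcvt) before use. */
#include <stdio.h>
#include "ncpathmgmt.h"

FILE *open_data_file(void)
{
    /* A mixed-style path: forward slashes appended to a Windows-style prefix.
     * The path-management code is described as handling such cases. */
    const char *path = "c:\\data/sample.nc";   /* hypothetical path */
    return NCfopen(path, "r");                 /* instead of fopen(path, "r") */
}
````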
In some rare cases, the client may need to directly use NCpathcvt,
but this should be avoided as much as possible. If there is a need
for supporting a new file operation not already in ncpathmgmt.h, then
use the code in dpathmgr.c as a template. Also please notify Unidata
so we can include it as a formal part of our supported operations.
Also, if you see an operation in the library that is not using the
NCxxx form, then please submit an issue so we can fix it.
Misc. Changes:
* Clean up the utf8 testing code; it is impossible to get some
tests to work under Windows using shell scripts; the args do
not pass as utf8 but as some other encoding.
* Added an extra utf8 test case: test_unicode_path.sh
* Add a true test for HDF5 1.10.6 or later because as noted in
PR https://github.com/Unidata/netcdf-c/pull/1794,
HDF5 changed its Windows file path handling.
# echo "*** Testing compression of deflatable files ..."
${execdir}/tst_compress

echo "*** Test nccopy -d1 can compress a classic format file ..."
${NCCOPY} -d1 tst_inflated.nc tst_deflated.nc
if test `wc -c < tst_deflated.nc` -ge `wc -c < tst_inflated.nc`; then
exit 1
fi

echo "*** Test nccopy -d1 can compress a netCDF-4 format file ..."
${NCCOPY} -d1 tst_inflated4.nc tst_deflated.nc
if test `wc -c < tst_deflated.nc` -ge `wc -c < tst_inflated4.nc`; then
exit 1
fi

echo "*** Test nccopy -d1 -s can compress a classic model netCDF-4 file even more ..."
${NCCOPY} -d1 -s tst_inflated.nc tmp_ncc4.nc
if test `wc -c < tmp_ncc4.nc` -ge `wc -c < tst_inflated.nc`; then
exit 1
fi

echo "*** Test nccopy -d1 -s can compress a netCDF-4 file even more ..."
${NCCOPY} -d1 -s tst_inflated4.nc tmp_ncc4.nc
if test `wc -c < tmp_ncc4.nc` -ge `wc -c < tst_inflated4.nc`; then
exit 1
fi

echo "*** Test nccopy -d0 turns off compression, shuffling of compressed, shuffled file ..."
${NCCOPY} -d0 tst_inflated4.nc tmp_ncc4.nc
${NCDUMP} -sh tmp_ncc4.nc > tmp_ncc4.cdl
if fgrep '_DeflateLevel' < tmp_ncc4.cdl ; then
exit 1
fi
if fgrep '_Shuffle' < tmp_ncc4.cdl ; then
exit 1
fi

rm tst_deflated.nc tst_inflated.nc tst_inflated4.nc tmp_ncc4.nc tmp_ncc4.cdl

echo "*** Testing nccopy -d1 -s on ncdump/*.nc files"
for i in $TESTFILES0 ; do
echo "*** Test nccopy -d1 -s $i.nc copy_of_$i.nc ..."
${NCCOPY} -d1 -s $i.nc copy_of_$i.nc
${NCDUMP} -n copy_of_$i $i.nc > tmp_ncc4.cdl
${NCDUMP} copy_of_$i.nc > copy_of_$i.cdl
# echo "*** compare " with copy_of_$i.cdl
diff copy_of_$i.cdl tmp_ncc4.cdl
rm copy_of_$i.nc copy_of_$i.cdl tmp_ncc4.cdl
done

${execdir}/tst_chunking

echo "*** Test that nccopy -c can chunk and unchunk files"
${NCCOPY} -M0 tst_chunking.nc tmp_ncc4.nc
${NCDUMP} tmp_ncc4.nc > tmp_ncc4.cdl
${NCCOPY} -c dim0/,dim1/1,dim2/,dim3/1,dim4/,dim5/1,dim6/ tst_chunking.nc tmp-chunked.nc
${NCDUMP} -n tmp_ncc4 tmp-chunked.nc > tmp-chunked.cdl
diff tmp_ncc4.cdl tmp-chunked.cdl
${NCCOPY} -c dim0/,dim1/,dim2/,dim3/,dim4/,dim5/,dim6/ tmp-chunked.nc tmp-unchunked.nc
${NCDUMP} -n tmp_ncc4 tmp-unchunked.nc > tmp-unchunked.cdl
diff tmp_ncc4.cdl tmp-unchunked.cdl
${NCCOPY} -c // tmp-chunked.nc tmp-unchunked2.nc
${NCDUMP} -n tmp_ncc4 tmp-unchunked2.nc > tmp-unchunked2.cdl
diff tmp_ncc4.cdl tmp-unchunked2.cdl

echo "*** Test that nccopy -c works as intended for record dimension default (1)"
${NCGEN} -b -o tst_bug321.nc $srcdir/tst_bug321.cdl
${NCCOPY} -k nc7 -c"lat/2,lon/2" tst_bug321.nc tmp_ncc4.nc
${NCDUMP} -n tst_bug321 tmp_ncc4.nc > tmp_ncc4.cdl
diff -b $srcdir/tst_bug321.cdl tmp_ncc4.cdl

rm tst_chunking.nc tmp_ncc4.nc tmp_ncc4.cdl tmp-chunked.nc tmp-chunked.cdl tmp-unchunked.nc tmp-unchunked.cdl
echo "*** Test that nccopy -c dim/n works as intended "
${NCGEN} -4 -b -o tst_perdimspecs.nc $srcdir/ref_tst_perdimspecs.cdl
${NCCOPY} -M0 -4 -c "time/10,lat/15,lon/20" tst_perdimspecs.nc tmppds.nc
${NCDUMP} -hs tmppds.nc > tmppds.cdl
STORAGE=`cat tmppds.cdl | sed -e '/tas:_Storage/p' -ed | tr -d '\t \r'`
test "x$STORAGE" = 'xtas:_Storage="chunked";'
CHUNKSIZES=`cat tmppds.cdl | sed -e '/tas:_ChunkSizes/p' -ed | tr -d '\t \r'`
test "x$CHUNKSIZES" = 'xtas:_ChunkSizes=10,15,20;'

echo "*** Test that nccopy -F var1,none works as intended "
${NCGEN} -4 -b -o tst_nofilters.nc $srcdir/ref_tst_nofilters.cdl
${NCCOPY} -M0 -4 -F var1,none -c // tst_nofilters.nc tmp_nofilters.nc
${NCDUMP} -hs tmp_nofilters.nc > tmp_nofilters.cdl
STORAGE=`cat tmp_nofilters.cdl | sed -e '/var1:_Storage/p' -ed | tr -d '\t \r'`
test "x$STORAGE" = 'xvar1:_Storage="contiguous";'
FILTERS=`cat tmp_nofilters.cdl | sed -e '/var1:_Filters/p' -ed | tr -d '\t \r'`
test "x$FILTERS" = 'x'

echo "*** All nccopy tests passed!"
exit 0