2018-12-07 06:36:53 +08:00
# Copyright 1993, 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002,
# 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014,
# 2015, 2016, 2017, 2018
# University Corporation for Atmospheric Research/Unidata.
# See netcdf-c/COPYRIGHT file for more info.
2020-05-19 09:36:28 +08:00
#IF(BUILD_SHARED_LIBS AND WIN32)
# remove_definitions(-DDLL_EXPORT)
# remove_definitions(-DDLL_NETCDF)
#ENDIF()
2012-08-07 00:57:29 +08:00
2021-10-30 10:06:37 +08:00
SET(RCMERGE OFF)
2012-08-22 04:08:53 +08:00
SET(ncdump_FILES ncdump.c vardata.c dumplib.c indent.c nctime0.c utils.c nciter.c)
2018-07-27 10:16:02 +08:00
SET(nccopy_FILES nccopy.c nciter.c chunkspec.c utils.c dimmap.c list.c)
2018-08-01 02:51:24 +08:00
SET(ocprint_FILES ocprint.c)
2019-03-24 04:02:39 +08:00
SET(ncvalidator_FILES ncvalidator.c)
2021-03-07 05:09:37 +08:00
SET(printfqn_FILES printfqn.c)
2021-09-03 07:04:26 +08:00
SET(ncpathcvt_FILES ncpathcvt.c)
Fix various problems around VLENs
re: https://github.com/Unidata/netcdf-c/issues/541
re: https://github.com/Unidata/netcdf-c/issues/1208
re: https://github.com/Unidata/netcdf-c/issues/2078
re: https://github.com/Unidata/netcdf-c/issues/2041
re: https://github.com/Unidata/netcdf-c/issues/2143
For a long time, there have been known problems with the
management of complex types containing VLENs. This also
involves the string type because it is stored as a VLEN of
chars.
This PR (mostly) fixes this problem. But note that it adds new
functions to netcdf.h (see below) and this may require bumping
the .so number. These new functions can be removed, if desired,
in favor of functions in netcdf_aux.h, but netcdf.h seems the
better place for them because they are intended as alternatives
to the nc_free_vlen and nc_free_string functions already in
netcdf.h.
The term complex type refers to any type that directly or
transitively references a VLEN type: an array of VLENs, a
compound with a VLEN field, and so on.
In order to properly handle instances of these complex types, it
is necessary to have functions that can recursively walk
instances of such types to perform various actions on them. The
term "deep" is also used to mean recursive.
At the moment, the two operations needed by the netcdf library are:
* free'ing an instance of the complex type
* copying an instance of the complex type.
The current library does only shallow free and shallow copy of
complex types. This means that only the top level is properly
free'd or copied, but deep internal blocks in the instance are
not touched.
Note that the term "vector" will be used to mean a contiguous (in
memory) sequence of instances of some type. Given an array with,
say, dimensions 2 X 3 X 4, this will be stored in memory as a
vector of length 2*3*4=24 instances.
The use cases are primarily these.
## nc_get_vars
Suppose one is reading a vector of instances using nc_get_vars
(or nc_get_vara or nc_get_var, etc.). These functions will
return the vector in the top-level memory provided. All
interior blocks (from nested VLENs or strings) will have been
dynamically allocated.
After using this vector of instances, it is necessary to free
(aka reclaim) the dynamically allocated memory, otherwise a
memory leak occurs. So, the recursive reclaim function is used
to walk the returned instance vector and do a deep reclaim of
the data.
Currently functions are defined in netcdf.h that are supposed to
handle this: nc_free_vlen(), nc_free_vlens(), and
nc_free_string(). Unfortunately, these functions only do a
shallow free, so deeply nested instances are not properly
handled by them.
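To make the pattern concrete, here is a minimal C sketch of the
read-then-reclaim use case. It uses the nc_reclaim_data() function
introduced in the Solution section below; the file name, variable
name, and instance count are hypothetical and error handling is
abbreviated.
````
#include <stdlib.h>
#include <netcdf.h>

/* Read 5 instances of a VLEN variable, then deep-free the interior
 * allocations with nc_reclaim_data(). The top-level vector 'data' is
 * owned by the caller (here it lives on the stack), so only the
 * interior blocks are reclaimed. */
int read_and_reclaim(void)
{
    int ncid, varid, stat;
    nc_type xtype;
    size_t start[1] = {0}, count[1] = {5};
    nc_vlen_t data[5];              /* top-level vector, caller-managed */

    if ((stat = nc_open("x.nc", NC_NOWRITE, &ncid))) return stat;
    if ((stat = nc_inq_varid(ncid, "ragged_array", &varid))) goto done;
    if ((stat = nc_inq_vartype(ncid, varid, &xtype))) goto done;

    /* The library dynamically allocates the interior block of each nc_vlen_t. */
    if ((stat = nc_get_vara(ncid, varid, start, count, data))) goto done;

    /* ... use data ... */

    /* Deep reclaim of the interior blocks only. */
    stat = nc_reclaim_data(ncid, xtype, data, 5);
done:
    nc_close(ncid);
    return stat;
}
````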
Note that for the corresponding put functions (nc_put_vars,
nc_put_vara, etc.), the provided data is written immediately, so the
library does not need to copy it; the caller, however, may still need
to reclaim the data it passed into the function.
## nc_put_att
Suppose one is writing a vector of instances as the data of an attribute
using, say, nc_put_att.
Internally, the incoming attribute data must be copied and stored
so that changes/reclamation of the input data will not affect
the attribute.
Again, the code inside the netcdf library does only a shallow copy
rather than a deep copy. As a result, one sees effects such as those
described in GitHub issue https://github.com/Unidata/netcdf-c/issues/2143.
Also, after defining the attribute, it may be necessary for the user
to free the data that was provided as input to nc_put_att().
## nc_get_att
Suppose one is reading a vector of instances as the data of an attribute
using, say, nc_get_att.
Internally, the existing attribute data must be copied and returned
to the caller, and the caller is responsible for reclaiming
the returned data.
Again, the code inside the netcdf library does only a shallow copy
rather than a deep copy, which can lead to memory leaks and errors
because the deep data is shared between the library and the user.
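For illustration, here is a minimal sketch of the attribute round trip
under the deep-copy behavior described above. The type and attribute
names are hypothetical, ncid is assumed to refer to a netCDF-4 file in
define mode, and nc_reclaim_data() is the deep-free function defined in
the Solution section below.
````
#include <stdlib.h>
#include <netcdf.h>

/* Write and then read back a VLEN-valued attribute. */
int vlen_attribute_roundtrip(int ncid, int varid)
{
    int stat;
    nc_type vtype;
    float f[2] = {1.0f, 2.0f};
    nc_vlen_t in[1], out[1];

    if ((stat = nc_def_vlen(ncid, "row_of_floats", NC_FLOAT, &vtype))) return stat;

    in[0].len = 2;
    in[0].p = f;
    /* The library must make its own deep copy of 'in'. */
    if ((stat = nc_put_att(ncid, varid, "ragged_att", vtype, 1, in))) return stat;
    /* 'in' still belongs to the caller; here it points at stack data,
     * so there is nothing to reclaim on the caller's side. */

    /* On read, the returned instance contains freshly allocated interior
     * blocks that the caller must deep-free. */
    if ((stat = nc_get_att(ncid, varid, "ragged_att", out))) return stat;
    return nc_reclaim_data(ncid, vtype, out, 1);
}
````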
# Solution
The solution is to build properly recursive reclaim and copy
functions and use those as needed.
These recursive functions are defined in libdispatch/dinstance.c
and their signatures are defined in include/netcdf.h.
For backward compatibility, corresponding "ncaux_XXX" functions
are defined in include/netcdf_aux.h.
````
int nc_reclaim_data(int ncid, nc_type xtypeid, void* memory, size_t count);
int nc_reclaim_data_all(int ncid, nc_type xtypeid, void* memory, size_t count);
int nc_copy_data(int ncid, nc_type xtypeid, const void* memory, size_t count, void* copy);
int nc_copy_data_all(int ncid, nc_type xtypeid, const void* memory, size_t count, void** copyp);
````
There are two variants. The first two, nc_reclaim_data() and
nc_copy_data(), assume the top-level vector is managed by the
caller. For reclaim, this is so the user can use, for example, a
statically allocated vector. For copy, it assumes the user
provides the space into which the copy is stored.
The second two, nc_reclaim_data_all() and
nc_copy_data_all(), allow the library to manage the
top level. So for nc_reclaim_data_all(), the top level is
assumed to be dynamically allocated and will be free'd by
nc_reclaim_data_all(). The nc_copy_data_all() function
will allocate the top level and return a pointer to it to the
user. The user can later pass that pointer to
nc_reclaim_data_all() to reclaim the instance(s).
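A minimal usage sketch of the *_all variants (error handling is
abbreviated, and the source vector src is assumed to have been produced
by the caller or by one of the get functions):
````
#include <stdlib.h>
#include <netcdf.h>

/* Deep-copy a caller-owned vector of 'count' instances of type 'xtype'
 * with nc_copy_data_all(), which also allocates the top-level vector,
 * then release the copy (top level included) with nc_reclaim_data_all(). */
int copy_then_release(int ncid, nc_type xtype, const void* src, size_t count)
{
    int stat;
    void* copy = NULL;

    if ((stat = nc_copy_data_all(ncid, xtype, src, count, &copy)))
        return stat;

    /* ... use 'copy' ... */

    /* Frees the interior blocks and the top-level vector allocated above. */
    return nc_reclaim_data_all(ncid, xtype, copy, count);
}
````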
# Internal Changes
The netcdf-c library internals are changed to use the proper
reclaim and copy functions. It turns out that the places where
these functions are needed is quite pervasive in the netcdf-c
library code. Using these functions also allows some
simplification of the code since the stdata and vldata fields of
NC_ATT_INFO are no longer needed. Currently the old code is disabled
using the SEPDATA #define macro. Once the remaining bugs are
fixed, all of that code will be removed.
# Known Bugs
1. There is still one known failure that has not been solved.
All the failures revolve around some variant of this .cdl file.
The proximate cause of failure is the use of a VLEN FillValue.
````
netcdf x {
types:
float(*) row_of_floats ;
dimensions:
m = 5 ;
variables:
row_of_floats ragged_array(m) ;
row_of_floats ragged_array:_FillValue = {-999} ;
data:
ragged_array = {10, 11, 12, 13, 14}, {20, 21, 22, 23}, {30, 31, 32},
{40, 41}, _ ;
}
````
When a solution is found, I will either add it to this PR or post a new PR.
# Related Changes
* Mark nc_free_vlen(s) as deprecated in favor of ncaux_reclaim_data.
* Remove the --enable-unfixed-memory-leaks option.
* Remove the NC_VLENS_NOTEST code that suppresses some vlen tests.
* Document this change in docs/internal.md
* Disable the tst_vlen_data test in ncdump/tst_nccopy4.sh.
* Mark types as fixed size or not (transitively) to optimize the reclaim
and copy functions.
# Misc. Changes
* Make Doxygen process libdispatch/daux.c
* Make sure the NC_ATT_INFO_T.container field is set.
2022-01-09 09:30:00 +08:00
SET(nchdf5version_FILES nchdf5version.c)
2012-08-08 06:58:15 +08:00
2012-09-14 04:41:54 +08:00
IF(USE_X_GETOPT)
2014-04-03 04:26:42 +08:00
SET(ncdump_FILES ${ncdump_FILES} XGetopt.c)
SET(nccopy_FILES ${nccopy_FILES} XGetopt.c)
2018-08-01 02:51:24 +08:00
SET(ocprint_FILES ${ocprint_FILES} XGetopt.c)
2019-03-24 04:02:39 +08:00
SET(ncvalidator_FILES ${ncvalidator_FILES} XGetopt.c)
Regularize the scoping of dimensions
This is a follow-on to pull request
````https://github.com/Unidata/netcdf-c/pull/1959````,
which fixed up type scoping.
The primary changes are to _nc\_inq\_dimid()_ and to ncdump.
The _nc\_inq\_dimid()_ function is supposed to allow the name to be
an FQN, but this apparently never got implemented, so it was modified
to support FQNs.
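For example, a caller can now look up a dimension either by simple name
or by its FQN; this is only a hedged sketch, and the names "time" and
"/g1/time" are hypothetical:
````
#include <netcdf.h>

/* nc_inq_dimid() now accepts a fully qualified dimension name
 * as well as a simple name. */
int lookup_dims(int ncid)
{
    int dimid, stat;
    if ((stat = nc_inq_dimid(ncid, "time", &dimid)))     return stat; /* simple name, as before */
    if ((stat = nc_inq_dimid(ncid, "/g1/time", &dimid))) return stat; /* FQN, new behavior */
    return NC_NOERR;
}
````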
The ncdump program is supposed to output fully qualified dimension names
in its generated CDL file under certain conditions.
Suppose ncdump has a netcdf-4 file F with variable V, and V's parent group
is G. For each dimension id D referenced by V, ncdump needs to determine
whether to print its name as a simple name or as a fully qualified name (FQN).
The algorithm is as follows (a hedged C sketch appears after the list):
1. Search up the tree of ancestor groups.
2. If one of those ancestor groups contains the dimid, then call it dimgrp.
3. If one of those ancestor groups contains a dim with the same name as the dimid, but with a different dimid, then record that as duplicate=true.
4. If dimgrp is defined and duplicate == false, then we do not need an fqn.
5. If dimgrp is defined and duplicate == true, then we do need an fqn to avoid incorrectly using the duplicate.
6. If dimgrp is undefined, then do a preorder breadth-first search of all the groups looking for the dimid.
7. If found, then use the fqn of the first found such dimension location.
8. If not found, then fail.
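Below is a hedged C sketch of steps 1-5 (the ancestor walk); steps 6-8
(the breadth-first search fallback) are omitted, and the helper
need_fqn is hypothetical rather than the actual ncdump code.
````
#include <string.h>
#include <netcdf.h>

/* Decide whether dimension 'dimid' (named 'dimname'), referenced by a
 * variable whose parent group is 'vargrp', must be printed as an FQN.
 * Returns a netCDF error code; *needfqn is set on NC_NOERR. */
static int need_fqn(int vargrp, int dimid, const char* dimname, int* needfqn)
{
    int grp = vargrp, stat = NC_NOERR;
    int dimgrp_found = 0, duplicate = 0;

    for (;;) {
        int ndims, dimids[NC_MAX_DIMS];
        /* Dimensions defined directly in this group (no parents). */
        if ((stat = nc_inq_dimids(grp, &ndims, dimids, 0))) return stat;
        for (int i = 0; i < ndims; i++) {
            if (dimids[i] == dimid) {
                dimgrp_found = 1;                  /* step 2: this is dimgrp */
            } else {
                char name[NC_MAX_NAME + 1];
                if ((stat = nc_inq_dimname(grp, dimids[i], name))) return stat;
                if (strcmp(name, dimname) == 0)
                    duplicate = 1;                 /* step 3: same name, other dimid */
            }
        }
        if (dimgrp_found) break;                   /* nearest definition wins */
        int parent;
        stat = nc_inq_grp_parent(grp, &parent);    /* step 1: walk up */
        if (stat == NC_ENOGRP) break;              /* reached the root group */
        if (stat) return stat;
        grp = parent;
    }
    if (dimgrp_found) {
        *needfqn = duplicate;                      /* steps 4 and 5 */
        return NC_NOERR;
    }
    /* Steps 6-8 (breadth-first search of all groups) omitted here. */
    return NC_EBADDIM;
}
````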
Test case ncdump/test_scope.sh was modified to test the proper
operation of ncdump and _nc\_inq\_dimid()_.
Misc. Other Changes:
* Fix nc_inq_ncid (NC4_inq_ncid actually) to return root group id if the name argument is NULL.
* Modify _ncdump/printfqn_ to print out a dimid FQN; this supports verification that the resulting .nc files were properly created.
2021-06-01 05:51:12 +08:00
SET(printfqn_FILES ${printfqn_FILES} XGetopt.c)
2021-09-03 07:04:26 +08:00
SET(ncpathcvt_FILES ${ncpathcvt_FILES} XGetopt.c)
2021-06-01 05:51:12 +08:00
ENDIF(USE_X_GETOPT)
2012-09-14 02:27:23 +08:00
2012-08-04 06:24:29 +08:00
ADD_EXECUTABLE(ncdump ${ncdump_FILES})
ADD_EXECUTABLE(nccopy ${nccopy_FILES})
2019-03-24 04:02:39 +08:00
ADD_EXECUTABLE(ncvalidator ${ncvalidator_FILES})
2021-09-03 07:04:26 +08:00
ADD_EXECUTABLE(ncpathcvt ${ncpathcvt_FILES})
2018-08-02 04:15:01 +08:00
2020-09-16 04:25:38 +08:00
IF(USE_HDF5)
2020-09-25 04:33:58 +08:00
ADD_EXECUTABLE(nc4print nc4print.c nc4printer.c)
2021-03-07 05:09:37 +08:00
ADD_EXECUTABLE(printfqn ${printfqn_FILES})
2022-01-09 09:30:00 +08:00
ADD_EXECUTABLE(nchdf5version ${nchdf5version_FILES})
2020-09-16 04:25:38 +08:00
ENDIF(USE_HDF5)
2018-08-02 04:15:01 +08:00
IF(ENABLE_DAP)
ADD_EXECUTABLE(ocprint ${ocprint_FILES})
ENDIF(ENABLE_DAP)
2012-08-04 06:24:29 +08:00
2012-08-11 05:44:00 +08:00
TARGET_LINK_LIBRARIES(ncdump netcdf ${ALL_TLL_LIBS})
TARGET_LINK_LIBRARIES(nccopy netcdf ${ALL_TLL_LIBS})
2019-03-24 04:02:39 +08:00
TARGET_LINK_LIBRARIES(ncvalidator netcdf ${ALL_TLL_LIBS})
2021-09-03 07:04:26 +08:00
TARGET_LINK_LIBRARIES(ncpathcvt netcdf ${ALL_TLL_LIBS})
2018-08-02 04:15:01 +08:00
2020-09-16 04:37:54 +08:00
IF(USE_HDF5)
TARGET_LINK_LIBRARIES(nc4print netcdf ${ALL_TLL_LIBS})
2021-03-07 05:09:37 +08:00
TARGET_LINK_LIBRARIES(printfqn netcdf ${ALL_TLL_LIBS})
2022-01-09 09:30:00 +08:00
TARGET_LINK_LIBRARIES(nchdf5version netcdf ${ALL_TLL_LIBS})
2020-09-16 04:37:54 +08:00
ENDIF(USE_HDF5)
2018-08-02 04:15:01 +08:00
IF(ENABLE_DAP)
TARGET_LINK_LIBRARIES(ocprint netcdf ${ALL_TLL_LIBS})
ENDIF(ENABLE_DAP)
2012-08-04 06:24:29 +08:00
2015-01-28 04:57:51 +08:00
####
# We have to do a little tweaking
# to remove the Release/ and Debug/ directories
# in MSVC builds. This is required to get
# test scripts to work.
####
IF(MSVC)
2015-02-05 05:11:20 +08:00
SET_TARGET_PROPERTIES(ncdump PROPERTIES RUNTIME_OUTPUT_DIRECTORY
2015-01-28 04:57:51 +08:00
${CMAKE_CURRENT_BINARY_DIR})
2015-02-05 05:11:20 +08:00
SET_TARGET_PROPERTIES(ncdump PROPERTIES RUNTIME_OUTPUT_DIRECTORY_DEBUG
2015-01-28 04:57:51 +08:00
${CMAKE_CURRENT_BINARY_DIR})
SET_TARGET_PROPERTIES(ncdump PROPERTIES RUNTIME_OUTPUT_DIRECTORY_RELEASE
${CMAKE_CURRENT_BINARY_DIR})
2015-02-05 05:11:20 +08:00
SET_TARGET_PROPERTIES(nccopy PROPERTIES RUNTIME_OUTPUT_DIRECTORY
2015-01-28 04:57:51 +08:00
${CMAKE_CURRENT_BINARY_DIR})
2015-02-05 05:11:20 +08:00
SET_TARGET_PROPERTIES(nccopy PROPERTIES RUNTIME_OUTPUT_DIRECTORY_DEBUG
2015-01-28 04:57:51 +08:00
${CMAKE_CURRENT_BINARY_DIR})
SET_TARGET_PROPERTIES(nccopy PROPERTIES RUNTIME_OUTPUT_DIRECTORY_RELEASE
${CMAKE_CURRENT_BINARY_DIR})
2018-08-01 02:51:24 +08:00
2019-03-24 04:02:39 +08:00
SET_TARGET_PROPERTIES(ncvalidator PROPERTIES RUNTIME_OUTPUT_DIRECTORY
${CMAKE_CURRENT_BINARY_DIR})
SET_TARGET_PROPERTIES(ncvalidator PROPERTIES RUNTIME_OUTPUT_DIRECTORY_DEBUG
${CMAKE_CURRENT_BINARY_DIR})
SET_TARGET_PROPERTIES(ncvalidator PROPERTIES RUNTIME_OUTPUT_DIRECTORY_RELEASE
${CMAKE_CURRENT_BINARY_DIR})
2021-09-03 07:04:26 +08:00
SET_TARGET_PROPERTIES(ncpathcvt PROPERTIES RUNTIME_OUTPUT_DIRECTORY
${CMAKE_CURRENT_BINARY_DIR})
SET_TARGET_PROPERTIES(ncpathcvt PROPERTIES RUNTIME_OUTPUT_DIRECTORY_DEBUG
${CMAKE_CURRENT_BINARY_DIR})
SET_TARGET_PROPERTIES(ncpathcvt PROPERTIES RUNTIME_OUTPUT_DIRECTORY_RELEASE
${CMAKE_CURRENT_BINARY_DIR})
2021-03-07 05:09:37 +08:00
2021-09-03 07:04:26 +08:00
IF(USE_HDF5)
2021-03-07 05:09:37 +08:00
SET_TARGET_PROPERTIES(printfqn PROPERTIES RUNTIME_OUTPUT_DIRECTORY
${CMAKE_CURRENT_BINARY_DIR})
SET_TARGET_PROPERTIES(printfqn PROPERTIES RUNTIME_OUTPUT_DIRECTORY_DEBUG
${CMAKE_CURRENT_BINARY_DIR})
SET_TARGET_PROPERTIES(printfqn PROPERTIES RUNTIME_OUTPUT_DIRECTORY_RELEASE
${CMAKE_CURRENT_BINARY_DIR})
2022-01-09 09:30:00 +08:00
SET_TARGET_PROPERTIES(nchdf5version PROPERTIES RUNTIME_OUTPUT_DIRECTORY
${CMAKE_CURRENT_BINARY_DIR})
SET_TARGET_PROPERTIES(nchdf5version PROPERTIES RUNTIME_OUTPUT_DIRECTORY_DEBUG
${CMAKE_CURRENT_BINARY_DIR})
SET_TARGET_PROPERTIES(nchdf5version PROPERTIES RUNTIME_OUTPUT_DIRECTORY_RELEASE
${CMAKE_CURRENT_BINARY_DIR})
2020-09-16 04:25:38 +08:00
ENDIF(USE_HDF5)
2018-08-02 04:15:01 +08:00
IF(ENABLE_DAP)
2018-08-01 02:51:24 +08:00
SET_TARGET_PROPERTIES(ocprint PROPERTIES RUNTIME_OUTPUT_DIRECTORY
2018-08-02 04:15:01 +08:00
${CMAKE_CURRENT_BINARY_DIR})
SET_TARGET_PROPERTIES(ocprint PROPERTIES RUNTIME_OUTPUT_DIRECTORY_DEBUG
${CMAKE_CURRENT_BINARY_DIR})
SET_TARGET_PROPERTIES(ocprint PROPERTIES RUNTIME_OUTPUT_DIRECTORY_RELEASE
${CMAKE_CURRENT_BINARY_DIR})
ENDIF(ENABLE_DAP)
2015-01-28 04:57:51 +08:00
ENDIF()
2012-08-08 06:58:15 +08:00
IF(ENABLE_TESTS)
2021-12-21 06:13:08 +08:00
2014-04-03 04:26:42 +08:00
ADD_EXECUTABLE(rewrite-scalar rewrite-scalar.c)
ADD_EXECUTABLE(bom bom.c)
2016-01-06 13:26:20 +08:00
ADD_EXECUTABLE(tst_dimsizes tst_dimsizes.c)
2017-03-09 08:01:10 +08:00
ADD_EXECUTABLE(nctrunc nctrunc.c)
2021-10-30 10:06:37 +08:00
if(RCMERGE)
Upgrade the nczarr code to match Zarr V2
Re: https://github.com/zarr-developers/zarr-python/pull/716
The Zarr version 2 spec has been extended to include the ability
to choose the dimension separator in chunk name keys. The legal
separators have been extended from {'.'} to {'.', '/'}. So now it
is possible to use a key like "0/1/2/0" for chunk names.
This PR implements this for NCZarr. The V2 spec now says that
this separator can be set on a per-variable basis. For now, I
have chosen to allow it to be set only globally by adding a key
named "ZARR.DIMENSION_SEPARATOR=<char>" in the
.daprc/.dodsrc/ncrc file. Currently, the only legal separator
characters are '.' (the default) and '/'. On writing, this key
will only be written if its value is different than the default.
This change caused problems because supporting a separator of '/'
is difficult to parse when keys/paths use '/' as the path separator.
A test case was added for this.
Additionally, NCZarr is now enabled by default. This required
some additional changes so that if zip and/or AWS S3 sdk are unavailable,
then they are disabled for NCZarr.
In addition the following unrelated changes were made.
1. Tested that pure-zarr mode could read an nczarr formatted store.
1. The .rc file handling now merges all known .rc files (.ncrc,.daprc, and .dodsrc) in that order and using those in HOME first, then in current directory. For duplicate entries, the later ones override the earlier ones. This change is to remove some of the conflicts inherent in the current .rc file load process. A set of test cases was also added.
1. Re-order tests in configure.ac and CMakeLists.txt so that if libcurl
is not found then the other options that depend upon it properly
are disabled.
1. I decided that xarray support should be enabled by default for pure
zarr. In order to allow disabling, I added a new mode flag "noxarray".
1. Certain tests in nczarr_test depend on use of .dodsrc. In order for these
to work when testing in parallel, some inter-test dependencies needed to
be added.
1. Improved authorization testing to use changes in thredds.ucar.edu
2021-04-25 09:48:15 +08:00
ADD_EXECUTABLE(tst_rcmerge tst_rcmerge.c)
2021-10-30 10:06:37 +08:00
endif()
2014-04-03 04:26:42 +08:00
TARGET_LINK_LIBRARIES(rewrite-scalar netcdf)
TARGET_LINK_LIBRARIES(bom netcdf)
2016-01-06 13:26:20 +08:00
TARGET_LINK_LIBRARIES(tst_dimsizes netcdf)
2021-04-25 09:48:15 +08:00
TARGET_LINK_LIBRARIES(nctrunc netcdf)
2021-10-30 10:06:37 +08:00
if(RCMERGE)
2021-04-25 09:48:15 +08:00
TARGET_LINK_LIBRARIES(tst_rcmerge netcdf)
2021-10-30 10:06:37 +08:00
endif()
2015-01-28 05:20:24 +08:00
2020-08-18 09:15:47 +08:00
IF(USE_HDF5)
2017-04-07 04:55:11 +08:00
ADD_EXECUTABLE(tst_fileinfo tst_fileinfo.c)
TARGET_LINK_LIBRARIES(tst_fileinfo netcdf)
ENDIF()
2016-05-04 11:17:06 +08:00
2017-04-07 04:55:11 +08:00
IF(MSVC)
SET_TARGET_PROPERTIES(rewrite-scalar PROPERTIES RUNTIME_OUTPUT_DIRECTORY
${CMAKE_CURRENT_BINARY_DIR})
SET_TARGET_PROPERTIES(rewrite-scalar PROPERTIES RUNTIME_OUTPUT_DIRECTORY_DEBUG
${CMAKE_CURRENT_BINARY_DIR})
SET_TARGET_PROPERTIES(rewrite-scalar PROPERTIES RUNTIME_OUTPUT_DIRECTORY_RELEASE
${CMAKE_CURRENT_BINARY_DIR})
2015-01-28 05:20:24 +08:00
2017-04-07 04:55:11 +08:00
SET_TARGET_PROPERTIES(bom PROPERTIES RUNTIME_OUTPUT_DIRECTORY
${CMAKE_CURRENT_BINARY_DIR})
SET_TARGET_PROPERTIES(bom PROPERTIES RUNTIME_OUTPUT_DIRECTORY_DEBUG
${CMAKE_CURRENT_BINARY_DIR})
SET_TARGET_PROPERTIES(bom PROPERTIES RUNTIME_OUTPUT_DIRECTORY_RELEASE
${CMAKE_CURRENT_BINARY_DIR})
2016-01-06 13:26:20 +08:00
2017-04-07 04:55:11 +08:00
SET_TARGET_PROPERTIES(tst_dimsizes PROPERTIES RUNTIME_OUTPUT_DIRECTORY
${CMAKE_CURRENT_BINARY_DIR})
SET_TARGET_PROPERTIES(tst_dimsizes PROPERTIES RUNTIME_OUTPUT_DIRECTORY_DEBUG
${CMAKE_CURRENT_BINARY_DIR})
SET_TARGET_PROPERTIES(tst_dimsizes PROPERTIES RUNTIME_OUTPUT_DIRECTORY_RELEASE
${CMAKE_CURRENT_BINARY_DIR})
2017-01-19 12:46:47 +08:00
2017-04-07 04:55:11 +08:00
SET_TARGET_PROPERTIES(nctrunc PROPERTIES RUNTIME_OUTPUT_DIRECTORY
2017-01-20 00:59:17 +08:00
${CMAKE_CURRENT_BINARY_DIR})
2017-04-07 04:55:11 +08:00
SET_TARGET_PROPERTIES(nctrunc PROPERTIES RUNTIME_OUTPUT_DIRECTORY_DEBUG
2017-01-20 00:59:17 +08:00
${CMAKE_CURRENT_BINARY_DIR})
2017-04-07 04:55:11 +08:00
SET_TARGET_PROPERTIES(nctrunc PROPERTIES RUNTIME_OUTPUT_DIRECTORY_RELEASE
2017-01-20 00:59:17 +08:00
${CMAKE_CURRENT_BINARY_DIR})
2017-04-07 04:55:11 +08:00
2021-11-04 02:49:54 +08:00
IF(RCMERGE)
SET_TARGET_PROPERTIES(tst_rcmerge PROPERTIES RUNTIME_OUTPUT_DIRECTORY
${CMAKE_CURRENT_BINARY_DIR})
SET_TARGET_PROPERTIES(tst_rcmerge PROPERTIES RUNTIME_OUTPUT_DIRECTORY_DEBUG
${CMAKE_CURRENT_BINARY_DIR})
SET_TARGET_PROPERTIES(tst_rcmerge PROPERTIES RUNTIME_OUTPUT_DIRECTORY_RELEASE ${CMAKE_CURRENT_BINARY_DIR})
endif()
2021-04-25 09:48:15 +08:00
2020-08-18 09:15:47 +08:00
IF(USE_HDF5)
2017-04-07 04:55:11 +08:00
SET_TARGET_PROPERTIES(tst_fileinfo PROPERTIES RUNTIME_OUTPUT_DIRECTORY
${CMAKE_CURRENT_BINARY_DIR})
SET_TARGET_PROPERTIES(tst_fileinfo PROPERTIES RUNTIME_OUTPUT_DIRECTORY_DEBUG
${CMAKE_CURRENT_BINARY_DIR})
SET_TARGET_PROPERTIES(tst_fileinfo PROPERTIES RUNTIME_OUTPUT_DIRECTORY_RELEASE
${CMAKE_CURRENT_BINARY_DIR})
Codify cross-platform file paths
The netcdf-c code has to deal with a variety of platforms:
Windows, OSX, Linux, Cygwin, MSYS, etc. These platforms differ
significantly in the kind of file paths that they accept. So in
order to handle this, I have created a set of replacements for
the most common file system operations such as _open_ or _fopen_
or _access_ to manage the file path differences correctly.
A more limited version of this idea was already implemented via
the ncwinpath.h and dwinpath.c code. So this can be viewed as a
replacement for that code. In many cases, the only
change that was required was to replace '#include <ncwinpath.h>'
with '#include <ncpathmgt.h>' and then replace file operation
calls with the NCxxx equivalent from ncpathmgr.h. Note that
recently, ncwinpath.h was renamed to ncpathmgmt.h, so this pull
request should not require dealing with winpath.
The heart of the change is include/ncpathmgmt.h, which provides
alternate operations such as NCfopen or NCaccess and which properly
parse and rebuild path arguments to work for the platform on which
the code is executing. This mostly matters for Windows because of the
way that it uses backslash and drive letters, as compared to *nix*.
One important feature is that the user can do string manipulations
on a file path without having to worry too much about the platform
because the path management code will properly handle most mixed cases.
So one can for example concatenate a path suffix that uses forward
slashes to a Windows path and have it work correctly.
The conversion code is in libdispatch/dpathmgr.c, and the
important function there is NCpathcvt which does the proper
conversions to the local path format.
As a rule, most code should just replace their file operations with
the corresponding NCxxx ones defined in include/ncpathmgmt.h. These
NCxxx functions all call NCpathcvt on their path arguments before
executing the actual file operation.
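As a hedged illustration of the intended usage (the wrapper names
NCfopen/NCaccess are those mentioned above; the header name and exact
signatures are assumed here and may differ by release):
````
#include <stdio.h>
#include <ncpathmgmt.h>   /* was: #include <ncwinpath.h>; header name as given above */

/* Open a data file given a path that may be in Windows, MSYS/Cygwin,
 * or *nix form. NCaccess/NCfopen run the path through NCpathcvt before
 * calling the underlying access()/fopen(). */
FILE* open_dataset(const char* path)
{
    if (NCaccess(path, 0) != 0)      /* 0 == existence check (F_OK) */
        return NULL;
    return NCfopen(path, "r");       /* drop-in replacement for fopen() */
}
````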
In some rare cases, the client may need to directly use NCpathcvt,
but this should be avoided as much as possible. If there is a need
for supporting a new file operation not already in ncpathmgmt.h, then
use the code in dpathmgr.c as a template. Also please notify Unidata
so we can include it as a formal part of our supported operations.
Also, if you see an operation in the library that is not using the
NCxxx form, then please submit an issue so we can fix it.
Misc. Changes:
* Clean up the utf8 testing code; it is impossible to get some
tests to work under Windows using shell scripts; the args do
not pass as utf8 but as some other encoding.
* Added an extra utf8 test case: test_unicode_path.sh
* Add a true test for HDF5 1.10.6 or later because as noted in
PR https://github.com/Unidata/netcdf-c/pull/1794,
HDF5 changed its Windows file path handling.
2021-03-05 04:41:31 +08:00
ENDIF(USE_HDF5)
ENDIF(MSVC)
2015-01-28 05:20:24 +08:00
2021-12-21 06:13:08 +08:00
# Build support programs
build_bin_test_no_prefix(tst_utf8)
build_bin_test_no_prefix(tst_fillbug)
IF(USE_HDF5)
build_bin_test_no_prefix(tst_h_rdc0)
build_bin_test_no_prefix(tst_unicode)
add_bin_test_no_prefix(tst_create_files)
add_bin_test_no_prefix(tst_opaque_data)
add_bin_test_no_prefix(tst_string_data)
add_bin_test_no_prefix(tst_vlen_data)
add_bin_test_no_prefix(tst_comp2)
add_bin_test_no_prefix(tst_nans)
add_bin_test_no_prefix(tst_h_scalar)
add_bin_test_no_prefix(tst_compress)
add_bin_test_no_prefix(tst_chunking)
add_bin_test_no_prefix(tst_group_data)
add_bin_test_no_prefix(tst_enum_data)
add_bin_test_no_prefix(tst_comp)
# Add this test by hand, as it is also called from a script.
# Editing the script would break autotools compatibility.
add_bin_test_no_prefix(tst_special_atts)
ENDIF(USE_HDF5)
2014-04-03 04:26:42 +08:00
# Base tests
# The tests are set up as a combination of shell scripts and executables that
# must be run in a particular order. It is painful but will use macros to help
# keep it from being too bad.
2015-02-05 05:11:20 +08:00
2021-12-21 06:13:08 +08:00
IF(HAVE_BASH)
2014-04-03 04:26:42 +08:00
## Start adding tests in the appropriate order
2017-11-17 08:54:30 +08:00
add_bin_test_no_prefix(ref_ctest)
add_bin_test_no_prefix(ref_ctest64)
2021-12-21 06:13:08 +08:00
add_sh_test(ncdump run_tests)
add_sh_test(ncdump tst_64bit)
2014-04-03 04:26:42 +08:00
add_sh_test(ncdump tst_lengths)
add_sh_test(ncdump tst_calendars)
add_sh_test(ncdump run_utf8_tests)
2015-10-23 04:09:19 +08:00
2021-12-21 06:13:08 +08:00
add_sh_test(ncdump tst_nccopy3_subset)
add_sh_test(ncdump tst_charfill)
add_sh_test(ncdump tst_formatx3)
add_sh_test(ncdump tst_bom)
add_sh_test(ncdump tst_dimsizes)
add_sh_test(ncdump tst_inmemory_nc3)
add_sh_test(ncdump tst_nccopy_w3)
add_sh_test(ncdump run_ncgen_tests)
add_sh_test(ncdump tst_inttags)
add_sh_test(ncdump test_radix)
add_sh_test(ncdump tst_ctests)
2021-11-04 02:49:54 +08:00
2017-11-21 08:02:16 +08:00
add_sh_test(ncdump tst_null_byte_padding)
IF(USE_STRICT_NULL_BYTE_HEADER_PADDING)
SET_TESTS_PROPERTIES(ncdump_tst_null_byte_padding PROPERTIES WILL_FAIL TRUE)
ENDIF(USE_STRICT_NULL_BYTE_HEADER_PADDING)
2017-11-21 04:52:06 +08:00
2021-11-04 02:49:54 +08:00
IF(NOT MSVC AND NOT MINGW)
2021-12-21 06:13:08 +08:00
add_sh_test(ncdump tst_output)
add_sh_test(ncdump tst_nccopy3)
# Known failure on MSVC; the number of 0's padding
# is different, but the result is actually correct.
if(USE_HDF5)
add_sh_test(ncdump tst_netcdf4)
endif()
SET_TESTS_PROPERTIES(ncdump_tst_nccopy3 PROPERTIES DEPENDS
"ncdump_tst_calendars;ncdump_run_utf8_tests;ncdump_tst_output;ncdump_tst_64bit;ncdump_run_tests;ncdump_tst_lengths")
2021-11-04 02:49:54 +08:00
ENDIF()
2017-12-07 06:00:21 +08:00
|
|
|
|
2021-12-21 06:13:08 +08:00
|
|
|
IF(USE_HDF5)
|
|
|
|
add_sh_test(ncdump tst_formatx4)
add_sh_test(ncdump tst_fillbug)
add_sh_test(ncdump tst_h_scalar)
add_sh_test(ncdump tst_mud)
add_sh_test(ncdump tst_grp_spec)
add_sh_test(ncdump tst_nccopy5)
add_sh_test(ncdump tst_inttags4)
add_sh_test(ncdump run_utf8_nc4_tests)
add_sh_test(ncdump tst_fileinfo)
add_sh_test(ncdump tst_hdf5_offset)
add_sh_test(ncdump tst_inmemory_nc4)
add_sh_test(ncdump tst_nccopy_w4)
add_sh_test(ncdump run_ncgen_nc4_tests)
add_sh_test(ncdump tst_ncgen4)
add_sh_test(ncdump tst_netcdf4_4)
add_sh_test(ncdump tst_nccopy4)

SET_TESTS_PROPERTIES(ncdump_tst_nccopy4 PROPERTIES DEPENDS "ncdump_run_ncgen_tests;ncdump_tst_output;ncdump_tst_ncgen4;ncdump_tst_fillbug;ncdump_tst_netcdf4_4;ncdump_tst_h_scalar;tst_comp;tst_comp2")
SET_TESTS_PROPERTIES(ncdump_tst_nccopy5 PROPERTIES DEPENDS "ncdump_tst_nccopy4")

ENDIF(USE_HDF5)

# The following test script invokes gcc directly.
IF(CMAKE_COMPILER_IS_GNUCC OR APPLE)
IF(ENABLE_LARGE_FILE_TESTS)
add_sh_test(ncdump tst_iter)
ENDIF(ENABLE_LARGE_FILE_TESTS)
ENDIF(CMAKE_COMPILER_IS_GNUCC OR APPLE)

###
# This test fails on Visual Studio builds with bash: the output is actually
# correct, but it is reported as a failure because the scientific formatting
# omits a 0 (the exponent padding differs, e.g. e+10 vs e+010).
###
IF(EXTRA_TESTS)
IF(USE_HDF5)
IF(NOT MSVC AND NOT MINGW)
add_sh_test(ncdump run_back_comp_tests)
ENDIF()
ENDIF()
ENDIF(EXTRA_TESTS)

# The unicode tests are complicated.
IF(USE_HDF5)
IF(NOT MSVC AND NOT MINGW)
# These tests do not work under Windows: the shell scripts there do not pass
# command-line arguments as UTF-8.
add_sh_test(ncdump test_unicode_directory)
add_sh_test(ncdump test_unicode_path)
ENDIF()
ENDIF(USE_HDF5)

IF(USE_CDF5)
add_sh_test(ncdump test_keywords)
ENDIF()

IF(USE_HDF5)
add_sh_test(ncdump test_scope)
ENDIF()

if(RCMERGE)
add_sh_test(ncdump test_rcmerge)
endif()

ENDIF(HAVE_BASH)
ENDIF(ENABLE_TESTS)

#IF(MSVC)
#  SET_TARGET_PROPERTIES(ncdump
#    PROPERTIES LINK_FLAGS_DEBUG " /NODEFAULTLIB:MSVCRT"
#    )
#  SET_TARGET_PROPERTIES(nccopy
#    PROPERTIES LINK_FLAGS_DEBUG " /NODEFAULTLIB:MSVCRT"
#    )
#  SET_TARGET_PROPERTIES(ncvalidator
#    PROPERTIES LINK_FLAGS_DEBUG " /NODEFAULTLIB:MSVCRT"
#    )
#  IF(ENABLE_DAP)
#    SET_TARGET_PROPERTIES(ocprint
#      PROPERTIES LINK_FLAGS_DEBUG " /NODEFAULTLIB:MSVCRT"
#      )
#  ENDIF(ENABLE_DAP)
#ENDIF()

INSTALL(TARGETS ncdump RUNTIME DESTINATION bin COMPONENT utilities)
INSTALL(TARGETS nccopy RUNTIME DESTINATION bin COMPONENT utilities)

SET(MAN_FILES nccopy.1 ncdump.1)

# Note: the L512.bin file contains exactly 512 bytes, each of value 0.
# It is used to create HDF5 files with varying offsets for testing: a valid
# file is prefixed with 512, 1024, ... zero bytes so that the HDF5 signature
# is found at a nonzero offset (see tst_hdf5_offset.sh).
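#
# For illustration only: a sketch of how such an offset file could be built by
# prefixing an existing dataset with L512.bin. The input name tst_small.nc and
# the use of "cmake -E cat" (CMake 3.18+) are assumptions for this sketch; the
# actual tests construct their offset files inside tst_hdf5_offset.sh.
#
#   EXECUTE_PROCESS(
#     COMMAND ${CMAKE_COMMAND} -E cat
#             ${CMAKE_CURRENT_SOURCE_DIR}/L512.bin
#             ${CMAKE_CURRENT_BINARY_DIR}/tst_small.nc
#     OUTPUT_FILE ${CMAKE_CURRENT_BINARY_DIR}/tst_small_offset512.nc)
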
FILE(GLOB COPY_FILES ${CMAKE_BINARY_DIR}/ncgen/*.nc ${CMAKE_BINARY_DIR}/nc_test4/*.nc ${CMAKE_CURRENT_SOURCE_DIR}/*.ncml ${CMAKE_CURRENT_SOURCE_DIR}/*.nc ${CMAKE_CURRENT_SOURCE_DIR}/*.cdl ${CMAKE_CURRENT_SOURCE_DIR}/*.sh ${CMAKE_CURRENT_SOURCE_DIR}/*.1 ${CMAKE_CURRENT_SOURCE_DIR}/L512.bin ${CMAKE_CURRENT_SOURCE_DIR}/ref_ctest*.c )
FILE(COPY ${COPY_FILES} DESTINATION ${CMAKE_CURRENT_BINARY_DIR}/ FILE_PERMISSIONS OWNER_WRITE OWNER_READ OWNER_EXECUTE)

ADD_SUBDIRECTORY(cdl)
ADD_SUBDIRECTORY(expected)

SET_DIRECTORY_PROPERTIES(PROPERTIES ADDITIONAL_MAKE_CLEAN_FILES "${CLEANFILES}")

IF(NOT MSVC)
INSTALL(FILES ${MAN_FILES} DESTINATION "share/man/man1" COMPONENT documentation)
ENDIF()