HDF5 can depend on the Z library (zlib), which is in fact required for netCDF. Moved the detection of whether HDF5 was built with zlib up before any other tests that may need to link the HDF5 library to determine the presence or absence of symbols. Those tests require that the link line include "-lz" if the HDF5 library was built with zlib support.
This is typically handled more or less automatically when shared libraries are being used, but in the static library case the explicit dependency needs to be specified. For its internal checks, CMake uses the `CMAKE_REQUIRED_LIBRARIES` list to specify the libraries that should be used in a `CHECK_C_SOURCE_COMPILES` or `CHECK_LIBRARY_EXISTS` call. In the current CMakeLists.txt ordering, the zlib detection is done _after_ the `CHECK_LIBRARY_EXISTS` calls, which can cause them to fail and give an incorrect result about whether the function being tested for exists. With the reordering in this PR, I am able to correctly configure netCDF on a CRAY HPC system that uses static libraries by default.
Some versions of the HDF5 find_package module set `HDF5_C_LIBRARIES` and `HDF5_HL_LIBRARIES`, but do not set `HDF5_C_LIBRARY` or `HDF5_HL_LIBRARY` to anything. Control then falls out of the if block with these unset, and they get the default setting at line 792. That default does not include the path, so when the later `CHECK_LIBRARY_EXISTS` calls are run, they do not have the full path to the library and will not link correctly. Since the link fails, the code concludes that none of the symbols are defined.
I don't think this change will have any effect on other configurations since it only sets the symbols if they are unset.
## S3 Related Fixes
* Add comprehensive support for specifying AWS profiles to provide access credentials.
* Parse the files "~/.aws/config" and "~/.aws/credentials" to provide credentials for the HDF5 ROS3 driver and to locate the default region.
* Add a function to obtain the currently active S3 credentials. The search rules are defined in docs/nczarr.md.
* Provide documentation for the new features.
* Modify the struct NCauth (in include/ncauth.h) to replace specific S3 credentials with a profile name.
* Add a unit test to test the operation of profile and credentials management.
* Add support for URLS of the form "s3://<bucket>/<key>"; this requires obtaining a default region.
* Allow the specification of a profile and/or region in a URL of the form "#mode=nczarr,...&aws.region=...&aws.profile=..." (see the sketch below).
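For illustration, here is a minimal sketch of the last two items, using the public nc_open API; the bucket, key, region, and profile values are placeholders and the extra fragment flags are not shown.

```c
#include <stdio.h>
#include <netcdf.h>

int main(void) {
    int ncid, stat;
    /* "s3://<bucket>/<key>" URL; the fragment overrides the AWS region and
       names the profile to use for credentials (all values here are
       hypothetical). */
    const char *url =
        "s3://example-bucket/example/key"
        "#mode=nczarr&aws.region=us-east-1&aws.profile=default";
    if ((stat = nc_open(url, NC_NOWRITE, &ncid))) {
        fprintf(stderr, "nc_open: %s\n", nc_strerror(stat));
        return 1;
    }
    nc_close(ncid);
    return 0;
}
```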
## Misc. Fixes
* Move the ezxml code to libdispatch so that it can be used both by DAP4 and nczarr.
* Modify nclist to provide a deep clone operation.
* Modify ncuri to provide a deep clone operation.
* Modify the .rc file format to allow the specification of a path to be tested when looking for an entry in the .rc file.
* Ensure that the NC_rcload function is called.
* Modify nchttp to support setting request headers.
Filter support has three goals:
1. Use the existing HDF5 filter implementations,
2. Allow filter metadata to be stored in the NumCodecs metadata format used by Zarr,
3. Allow filters to be used even when HDF5 is disabled
Detailed usage directions are defined in docs/filters.md.
For now, the existing filter API is left in place, so filters
are defined using `nc_def_var_filter` in the HDF5 style,
where the id and parameters are unsigned integers.
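For concreteness, a minimal sketch of that HDF5-style call, assuming an HDF5-enabled build; the file name, dimension size, chunk size, and deflate level are placeholders (filter id 1 is the HDF5 deflate/zlib filter).

```c
#include <stdio.h>
#include <netcdf.h>
#include <netcdf_filter.h>

#define CHECK(e) do { int s = (e); if (s) { fprintf(stderr, "%s\n", nc_strerror(s)); return 1; } } while (0)

int main(void) {
    int ncid, dimid, varid;
    size_t chunks[1] = {100};
    unsigned int level = 5;   /* single unsigned parameter: deflate level */

    CHECK(nc_create("example.nc", NC_NETCDF4 | NC_CLOBBER, &ncid));
    CHECK(nc_def_dim(ncid, "x", 1000, &dimid));
    CHECK(nc_def_var(ncid, "v", NC_INT, 1, &dimid, &varid));
    CHECK(nc_def_var_chunking(ncid, varid, NC_CHUNKED, chunks));
    /* HDF5-style filter definition: unsigned id plus unsigned parameters */
    CHECK(nc_def_var_filter(ncid, varid, 1 /* deflate */, 1, &level));
    CHECK(nc_close(ncid));
    return 0;
}
```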
This is a big change since filters affect many parts of the code.
In the following, the terms "compressor", "filter", and "codec" are generally
used synonymously.
### Filter-Related Changes:
* In order to support dynamic loading of shared filter libraries, a new library was added in the libncpoco directory; it helps to isolate dynamic loading across multiple platforms.
* Provide a json parsing library for use by plugins; this is created by merging libdispatch/ncjson.c with include/ncjson.h.
* Add a new _Codecs attribute to allow clients to see what codecs are being used; let ncdump -s print it out (see the sketch after this list).
* Provide special headers to help support compilation of HDF5 filters when HDF5 is not enabled: netcdf_filter_hdf5_build.h and netcdf_filter_build.h.
* Add a number of new tests to exercise the new nczarr filters.
* Let ncgen parse the _Codecs attribute, although it is ignored.
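As a small illustration of the _Codecs attribute mentioned above, the sketch below reads it from a variable as text; this assumes the attribute is exposed like any other text attribute and that its content is the NumCodecs-style JSON (error handling is minimal).

```c
#include <stdlib.h>
#include <netcdf.h>

/* Return the _Codecs JSON for a variable, or NULL if it is absent or the
   read fails; the caller frees the result. */
char *get_codecs_json(int ncid, int varid)
{
    size_t len;
    char *json;
    if (nc_inq_attlen(ncid, varid, "_Codecs", &len)) return NULL;
    if ((json = malloc(len + 1)) == NULL) return NULL;
    if (nc_get_att_text(ncid, varid, "_Codecs", json)) { free(json); return NULL; }
    json[len] = '\0';
    return json;
}
```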
### Plugin directory changes:
* Add support for the Blosc compressor; this is essential because it is the most common compressor used in Zarr datasets. This also necessitated adding a CMake FindBlosc.cmake file.
* Add NCZarr support for the big-four filters provided by HDF5: shuffle, fletcher32, deflate (zlib), and szip.
* Add a Codec defaulter (see docs/filters.md) for the big four filters.
* Make plugins work with Windows by properly adding __declspec declarations.
### Misc. Non-Filter Changes
* Replace most uses of USE_NETCDF4 (deprecated) with USE_HDF5.
* Improve support for caching
* More fixes for path conversion code
* Fix misc. memory leaks
* Add new utility -- ncdump/ncpathcvt -- that does more or less the same thing as cygpath.
* Add a number of new tests to test the non-filter fixes.
* Update the parsers
* Convert most instances of '#ifdef _MSC_VER' to '#ifdef _WIN32'
A fairly vanilla build of HDF5 1.12.1 into a non-default directory ends up
with `HDF5_C_LIBRARY` set to `hdf5`, which causes all of the
`try_compile` checks to fail because `-lhdf5` cannot be found.
* Set `CMAKE_REQUIRED_INCLUDES` to include the path found for `curl.h`. The `CHECK_C_SOURCE_COMPILES` function uses this variable, not the directories added via `INCLUDE_DIRECTORIES`.
* Make the test for curl version 7.66 or later match the same test in `configure.ac`.
* If the version is 7.66 or later, then we can skip the tests for the curl symbols which were all added in versions prior to 7.66.
* If the version is earlier than 7.66, then continue to perform the tests.
The thredds-test server now has some password-protected datasets
that can be used to test DAP2 authorization support.
The general location is
````
https://thredds.ucar.edu/thredds/tdscapabilities/authTest.html
````
and specifically:
````
https://thredds.ucar.edu/thredds/dodsC/test3/testData.nc.html
````
This PR replaces old testcases with ncdap_test/testauth.sh.
This testcase allows us to test use of the .dodsrc file and .netrc file
and embedded user+pwd.
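For illustration, a rough sketch of the embedded user+pwd case (the credentials are placeholders and the .dodsrc and .netrc cases are exercised by the script rather than shown here):

```c
#include <stdio.h>
#include <netcdf.h>

int main(void) {
    int ncid, stat;
    /* Credentials embedded directly in the DAP2 URL (user and password
       here are hypothetical placeholders). */
    const char *url =
        "https://tester:secret@thredds.ucar.edu/thredds/dodsC/test3/testData.nc";
    if ((stat = nc_open(url, NC_NOWRITE, &ncid))) {
        fprintf(stderr, "nc_open: %s\n", nc_strerror(stat));
        return 1;
    }
    nc_close(ncid);
    return 0;
}
```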
As part of this, I had to create a program (ncdap_test/pathcvt.c)
that is essentially the equivalent of cygpath. Given a path in
windows, unix, msys or cygwin format, it converts it to the
equivalent format in one of those four cases. So it can be used
to convert a cygwin path to a windows path, for example. This is
needed in testpathcvt and testauth to make sure that the paths
in .daprc (e.g. the reference to .netrc) are of the proper
format.
Misc. Other Changes:
1. Fix some memory leaks in libdap2
2. Setting the env variable CURLOPT_VERBOSE allows tracking of curl
operations.
3. Make tst_charvlenbug conditional on NC_VLEN_NOTEST.
Re: https://github.com/zarr-developers/zarr-python/pull/716
The Zarr version 2 spec has been extended to include the ability
to choose the dimension separator in chunk name keys. The set of legal
separators has been extended from {'.'} to {'.', '/'}. So now it
is possible to use a key like "0/1/2/0" for chunk names.
This PR implements this for NCZarr. The V2 spec now says that
this separator can be set on a per-variable basis. For now, I
have chosen to allow this to be set only globally by adding a key
named "ZARR.DIMENSION_SEPARATOR=<char>" in the
.daprc/.dodsrc/ncrc file. Currently, the only legal separator
characters are '.' (the default) and '/'. On writing, this key
will only be written if its value is different than the default.
This change caused problems because a separator of '/' is
difficult to parse when keys/paths also use '/' as the path separator.
A test case was added for this.
Additionally, make nczarr be enabled by default. This required
some additional changes so that if zip support and/or the AWS S3 SDK are unavailable,
then they are disabled for NCZarr.
In addition, the following unrelated changes were made.
1. Tested that pure-zarr mode could read an nczarr formatted store.
1. The .rc file handling now merges all known .rc files (.ncrc, .daprc, and .dodsrc) in that order, reading those in HOME first and then those in the current directory. For duplicate entries, the later ones override the earlier ones. This change removes some of the conflicts inherent in the previous .rc file load process. A set of test cases was also added.
1. Re-order tests in configure.ac and CMakeLists.txt so that if libcurl is not found, then the other options that depend upon it are properly disabled.
1. I decided that xarray support should be enabled by default for pure
zarr. In order to allow disabling, I added a new mode flag "noxarray".
1. Certain tests in nczarr_test depend on use of .dodsrc. In order for these
to work when testing in parallel, some inter-test dependencies needed to
be added.
1. Improved authorization testing to use changes in thredds.ucar.edu
Some of the tests for HDF5 symbols are failing due to not finding the include file, which skews the test results. Added `CMAKE_REQUIRED_INCLUDES` to set the include path. In one place this was already set, but it used `HAVE_HDF5_H`, which is not an include path and isn't even set at the location where it was used. We may not need these set more than once, but I added them before each `CHECK_C_SOURCE_COMPILES` in case those blocks get moved around and the value gets unset when it should be set.
Also changed the `H5Literate_vers` check to check that it is defined instead of checking its value. As best I can tell, as long as it is defined, the `H5Literate` symbol will be defined, which is what is needed. Checked with 1.10.7 and 1.12.0.
The netcdf-c code has to deal with a variety of platforms:
Windows, OSX, Linux, Cygwin, MSYS, etc. These platforms differ
significantly in the kind of file paths that they accept. So in
order to handle this, I have created a set of replacements for
the most common file system operations such as _open_ or _fopen_
or _access_ to manage the file path differences correctly.
A more limited version of this idea was already implemented via
the ncwinpath.h and dwinpath.c code. So this can be viewed as a
replacement for that code. In many cases, the only
change that was required was to replace '#include <ncwinpath.h>'
with '#include <ncpathmgmt.h>' and then replace file operation
calls with the NCxxx equivalent from ncpathmgmt.h. Note that
recently, the ncwinpath.h was renamed ncpathmgmt.h, so this pull
request should not require dealing with winpath.
The heart of the change is include/ncpathmgmt.h, which provides
alternate operations such as NCfopen or NCaccess and which properly
parse and rebuild path arguments to work for the platform on which
the code is executing. This mostly matters for Windows because of the
way that it uses backslashes and drive letters, as compared to *nix systems.
One important feature is that the user can do string manipulations
on a file path without having to worry too much about the platform
because the path management code will properly handle most mixed cases.
So one can for example concatenate a path suffix that uses forward
slashes to a Windows path and have it work correctly.
The conversion code is in libdispatch/dpathmgr.c, and the
important function there is NCpathcvt which does the proper
conversions to the local path format.
As a rule, most code should just replace its file operations with
the corresponding NCxxx ones defined in include/ncpathmgmt.h. These
NCxxx functions all call NCpathcvt on their path arguments before
executing the actual file operation.
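A minimal sketch of that substitution, assuming NCfopen (declared in ncpathmgmt.h) mirrors the fopen signature; the path value is a placeholder:

```c
#include <stdio.h>
#include "ncpathmgmt.h"   /* formerly ncwinpath.h */

int demo(void)
{
    /* A mixed-style path: forward slashes appended to a Windows drive path.
       NCfopen runs NCpathcvt on the argument first, so the path is rewritten
       to the local platform's form before the real fopen is called. */
    const char *path = "c:/Users/example/data.nc";
    FILE *f = NCfopen(path, "r");
    if (f == NULL)
        return 1;
    fclose(f);
    return 0;
}
```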
In some rare cases, the client may need to directly use NCpathcvt,
but this should be avoided as much as possible. If there is a need
for supporting a new file operation not already in ncpathmgmt.h, then
use the code in dpathmgr.c as a template. Also please notify Unidata
so we can include it as a formal part of our supported operations.
Also, if you see an operation in the library that is not using the
NCxxx form, then please submit an issue so we can fix it.
Misc. Changes:
* Clean up the utf8 testing code; it is impossible to get some tests to work under Windows using shell scripts; the arguments are not passed as utf8 but in some other encoding.
* Added an extra utf8 test case: test_unicode_path.sh
* Add a true test for HDF5 1.10.6 or later because as noted in
PR https://github.com/Unidata/netcdf-c/pull/1794,
HDF5 changed its Windows file path handling.
re: Issue
The netcdf dispatch table version was defined in several places.
Modify to only require defining it in CMakeLists.txt and configure.ac.
Fix entailed the following changes:
* Up the NC_DISPATCH_VERSION from 2 to 3 in configure.ac and CMakeLists.txt
* Create include/netcdf_dispatch.h.in and use it to configure include/netcdf_dispatch.h
* For CMAKE, make it search CMAKE_CURRENT_BINARY_DIR so code can locate the configured netcdf_dispatch.h
* Add entry to config.h.cmake.in for NC_DISPATCH_VERSION
* Move NCerror from include/ncdispatch.h to libdap2/nccommon.h
* Fix an API problem re nchttp.h
* Fix a conversion warning in libdispatch/dinfermodel.c
The primary change is to support the use of a zip file as a
storage format. Simultaneously, the .nz4 support is made obsolete.
Use of zip requires the libzip support library, so a number of
changes to the build files (Makefile.am, CMakeLists.txt) are
necessary to locate and incorporate libzip. The nczarr_test
tests are also changed to add zip testing.
Other changes:
* Make sure distcheck leaves no files around.
* Add some functions to netcdf_aux to export some functions of libnetcdf.
* Add a new error NC_EFOUND as the complement of NC_EEMPTY.
* Add tracing support to nclog and use it in libnczarr.
* Modify the zmap interface to support the writeonce semantics of zip.
* Create a new s3util.c to support a variety of S3 auxiliary functions.
* EXTERNL'ize a number of functions so they can be used in s3util.
* Add support for the S3 ListObjects CommonPrefixes mechanism
to improve search.
* Add experimental support for running nczarr X s3 tests against
the actual Amazon S3 cloud.
The `NC_HAS_MULTIFILTERS` line in `netcdf_meta.h` was not of the correct form -- showed as
```
#define NC_HAS_MULTIFILTERS
```
Instead of the better form:
```
#define NC_HAS_MULTIFILTERS 0 (or 1 if enabled)
```
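The 0/1 form matters because client code typically tests the value of these feature macros rather than their mere presence; a minimal sketch (the surrounding declarations are illustrative):

```c
#include <netcdf_meta.h>

/* With the valueless "#define NC_HAS_MULTIFILTERS" form, this #if is a
   preprocessor error; with the 0/1 form it selects the right branch. */
#if NC_HAS_MULTIFILTERS
static const int have_multifilters = 1;
#else
static const int have_multifilters = 0;
#endif
```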
Primary Fixes:
* Add a whole variable optimization -- used in the rare case that nc_get/put_vara covers the whole of a variable and the variable has a single chunk (see the sketch after this list).
* Fix chunking error when stride causes whole chunks to be skipped.
* Fix some memory leaks
* Add test cases
* Add one performance test to nczarr_test/. This uses the timer utils from unit_test: timer_utils.[ch].
* Move ncdumpchunks utility from ncdump to nczarr_test
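A sketch of the read pattern the whole-variable optimization targets: start is zero in every dimension and count equals the full shape, so the request maps onto the variable's single chunk (the variable type and helper name are illustrative).

```c
#include <stdlib.h>
#include <netcdf.h>

/* Read an entire int variable in one nc_get_vara call. */
int read_whole(int ncid, int varid)
{
    int ndims, dimids[NC_MAX_VAR_DIMS];
    size_t start[NC_MAX_VAR_DIMS], count[NC_MAX_VAR_DIMS];
    size_t nelems = 1;
    if (nc_inq_varndims(ncid, varid, &ndims)) return 1;
    if (nc_inq_vardimid(ncid, varid, dimids)) return 1;
    for (int i = 0; i < ndims; i++) {
        start[i] = 0;                                   /* whole variable */
        if (nc_inq_dimlen(ncid, dimids[i], &count[i])) return 1;
        nelems *= count[i];
    }
    int *data = malloc(nelems * sizeof(int));
    if (data == NULL) return 1;
    int stat = nc_get_vara_int(ncid, varid, start, count, data);
    free(data);
    return stat;
}
```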
Misc. Other Changes:
* Make the check for AWS libraries conditional on --enable-nczarr-s3
* Remove all but one of the bm tests from nczarr_test until they are working.
* Remove another dependency on HDF5 from supposedly non-HDF5 specific code; specifically hdf5_log_hdf5.
* Make the BAIL2 macro hdf5-specific and replace it elsewhere with an HDF5-independent equivalent.
* Move hdf5cache.c to libsrc4/nc4cache.c because it is used by nczarr.
* Modify unit_tests so that some of them are run even if using Windows.
* Misc. small bug fixes, refactors, and memory leak fixes.
* Rename some conflicting tests for cmake.
* Attempted to make nc_perf work with cmake and failed.
Re: GH Issue https://github.com/Unidata/netcdf-c/issues/1900
Apparently the clock_gettime() function is not always available.
It is used in unit_test/tst_exhash.c and unit_test/tst_xcache.c.
To solve this, a number of things were changed:
* Move the timing code to a new file unit_tests/timer_utils.[ch]
* Modify the timing code to choose one of several timing methods
depending on availability. The prioritized order is as follows:
1. If Windows, use the QueryPerformanceCounter mechanism else
2. Use clock_gettime if available else
3. Use gettimeofday if available else
4. Use getrusage if available
Note that the resolution of 3 and 4 is lower than that of 1 or 2 (see the sketch below).
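A rough sketch of that selection order; the real implementation lives in unit_test/timer_utils.[ch], and the HAVE_* configuration macros and nanosecond return convention here are assumptions:

```c
#include <stdint.h>
#if defined(_WIN32)
#include <windows.h>
#elif defined(HAVE_CLOCK_GETTIME)
#include <time.h>
#elif defined(HAVE_GETTIMEOFDAY)
#include <sys/time.h>
#else
#include <sys/resource.h>
#endif

/* Return a timestamp in nanoseconds using the best available mechanism,
   mirroring the priority list above. */
static uint64_t now_ns(void)
{
#if defined(_WIN32)
    LARGE_INTEGER freq, ticks;                  /* 1. QueryPerformanceCounter */
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&ticks);
    return (uint64_t)(ticks.QuadPart * (1000000000.0 / freq.QuadPart));
#elif defined(HAVE_CLOCK_GETTIME)
    struct timespec ts;                         /* 2. clock_gettime */
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ULL + (uint64_t)ts.tv_nsec;
#elif defined(HAVE_GETTIMEOFDAY)
    struct timeval tv;                          /* 3. gettimeofday (usec) */
    gettimeofday(&tv, NULL);
    return (uint64_t)tv.tv_sec * 1000000000ULL + (uint64_t)tv.tv_usec * 1000ULL;
#else
    struct rusage ru;                           /* 4. getrusage fallback */
    getrusage(RUSAGE_SELF, &ru);
    return (uint64_t)ru.ru_utime.tv_sec * 1000000000ULL
         + (uint64_t)ru.ru_utime.tv_usec * 1000ULL;
#endif
}
```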
Misc. Other Changes:
* Move the test in CMakeLists.txt that disables unit tests for WIN32 to unit_test/CMakeLists.txt, since some unit tests actually work under Visual Studio.
* Fix some of the unit tests to work under Visual Studio.
* Fix a problem with using remove() in zmap_nzf.c.
* Remove some warnings about use of EXTERNL.
Primary changes:
* Add an improved cache system to speed up performance.
* Fix NCZarr to properly handle scalar variables.
Misc. Related Changes:
* Added unit tests for extendible hash and for the generic cache.
* Add a config parameter to set the size of the NCZarr cache.
* Add initial performance tests but leave them unused.
* Add CRC64 support.
* Move location of ncdumpchunks utility from /ncgen to /ncdump.
* Refactor auth support.
Misc. Unrelated Changes:
* More cleanup of the S3 support
* Add support for S3 authentication in .rc files: HTTP.S3.ACCESSID and HTTP.S3.SECRETKEY.
* Remove the hashkey from the struct OBJHDR since it is never used.
There were some irregularities in the flags for handling NCZarr S3 support.
The primary change is to regularize the flags controlling this to the following.
1. Automake: --enable-nczarr-s3 and CMake: ENABLE_NCZARR_S3
2. Automake: --enable-nczarr-s3-tests and CMake: ENABLE_NCZARR_S3_TESTS
Flag 1 indicates that NCZarr should be built with S3 support enabled.
Flag 2 indicates that the NCZarr S3 tests should be run.
These two flags are separate because running the NCZarr S3 tests
requires access to protected S3 resources. Currently, running
these tests is restricted to Unidata personnel. However, users
may want to enable S3 support even if they cannot run the tests.
It is, of course, an error to specify 2 without specifying 1.
Additionally, if the AWS S3 SDK library is not found, then the NCZARR S3
support and testing must be disabled. Otherwise an error is signaled
during the build.
Some of these NCZarr and S3 changes are propagated to nc-config.
Misc. Other Changes:
1. Allow testing for CYGWIN or MSVC in shell scripts.
2. Add specific test for HDF5 library version 1.10.6.
This is encoded as "HDF5_UTF8_PATHS" because that is the first
version in which HDF5 properly supports UTF-8 file paths under Windows. This is used
in hdf5internal/nc4_ndf5_ansi_to_utf8.
3. Add an AM conditional -- AX_IGNORE -- for use in testing
when it is desirable to temporarily suppress Makefile code.
4. Add MULTIFILTER flag to CMakeLists.txt
re: Issue https://github.com/Unidata/netcdf-c/issues/1848
The existing Virtual File Driver built to support byte-range
read-only file access is quite old. It turns out to be extremely
slow (reason unknown at the moment).
Starting with HDF5 1.10.6, the HDF5 library has its own version
of such a file driver. The HDF5 developers have better knowledge
about building such a driver and what incantations are needed to
get good performance.
This PR modifies the byte-range code in hdf5open.c so
that if the HDF5 file driver is available, then it is used
in preference to the one written by the Netcdf group.
Misc. Other Changes:
1. Moved all of the nc4print code to ncdump to keep appveyor quiet.
The primary change is the disengagement of enable-netcdf4 from enable-hdf5.
That is, with the advent of nczarr, it is possible
to turn off hdf5 but still need netcdf-4 enabled,
because nczarr uses libsrc4 but not libhdf5.
This change involves a bunch of things:
1. Modify configure.ac and CMakeLists.txt to make enable_hdf5
control whether hdf5 support is provided. For back compatibility,
disable-netcdf4 is treated as disable-hdf5. But internally,
netcdf4 support is controlled only by the enabling of formats
that require it.
2. In support of #1, modify .travis.yml to use enable/disable-hdf5
instead of enable/disable-netcdf4.
3. test_common.in is modified to track selected features,
including enable-hdf5 and enable-s3-tests. This is used in
selected tests that mix netcdf-3 and netcdf4 tests.
4. The conflation of USE_HDF5 and USE_NETCDF4 is common in
code, tests, and build files, so all of those had to be weeded out.
5. It turns out that some of the NC4_dim functions really are HDF5 specific,
but are not treated as such. So they are moved from nc4dim.c to
hdf5dim.c or hdf5dispatch.c
6. Some generic functions in libhdf5 can be (and were) moved to libsrc4.
The primary fix is to improve CMake build support.
Specific changes include:
* CMake: Provide a better solution for locating the AWS SDK
libraries; the new way is the preferred method as described in
the aws-cpp-sdk documentation.
* CMake (and Automake): allow -DENABLE_S3_SDK (default off) to suppress
looking for AWS libraries.
* CMake: add the complete set of nczarr tests
* CMake: add EXTERNL as needed to various .h files.
* Improve support for windows drive letters in paths.
* Add nczarr and s3 flags to nc-config
* For Visual Studio X nczarr, clean up the NAN+INFINITY handling
* Convert _MSC_VER -> _WIN32 and vice versa as needed
* NCZarr - support multiple platform paths including windows, cygwin,
mingw, etc.
* NCZarr - sort the test outputs because different platforms
produce directory contents in different orders.
One big change concerns netcdf-c/CMakeLists.txt and netcdf-c/configure.ac.
In the current versions, it was the case that --disable-hdf5
disabled netcdf-4 (libsrc4). With nczarr, this can no longer
be the case because nczarr requires libsrc4 even if libhdf5
is disabled. So, I modified the above files to move the
format options (HDF5, NCZarr, HDF4, etc) to a single place
near the front of the files. Now it is the case that:
* Enabling any of the formats that require libsrc4
also does an implicit --enable-netcdf4.
* --disable-netcdf4 | --disable-netcdf-4 now becomes
an alias for --disable-hdf5.
There are probably some bugs in this change in terms of
dependencies between format options.
Problems:
* CMake S3 support is still not working for Visual Studio
* A recent issue points out that there is work to do on handling
UTF8 filenames, but that will be addressed in a separate fix.
Notes:
* Consider converting all of our includes/.h files to use EXTERNL
This PR adds the ability to read and write data to and from the
cloud using a variant of the Zarr protocol and storage
format. This enhancement is generically referred to as "NCZarr".
The data model supported by NCZarr is netcdf-4 minus the user-defined
types and the String type. In this sense it is similar to the CDF-5
data model.
More detailed information about enabling and using NCZarr is
described in the document NUG/nczarr.md and in a
[Unidata Developer's blog entry](https://www.unidata.ucar.edu/blogs/developer/en/entry/overview-of-zarr-support-in).
WARNING: this code has had limited testing, so do not use this version
for production work. Also, performance improvements are ongoing.
Note especially the following platform matrix of successful tests:
Platform | Build System | S3 support
--------- | ------------ | ----------
Linux+gcc | Automake | yes
Linux+gcc | CMake | yes
Visual Studio | CMake | no
Additionally, and as a consequence of the addition of NCZarr,
major changes have been made to the Filter API. NOTE: NCZarr
does not yet support filters, but these changes are enablers for
that support in the future. Note that it is possible
(probable?) that there will be some accidental reversions if the
changes here did not correctly mimic the existing filter testing.
In any case, previously filter ids and parameters were of type
unsigned int. In order to support the more general zarr filter
model, this was all converted to char*. The old HDF5-specific,
unsigned int operations are still supported but they are
wrappers around the new, char* based nc_filterx_XXX functions.
This entailed at least the following changes:
1. Added the files libdispatch/dfilterx.c and include/ncfilter.h
2. Some filterx utilities have been moved to libdispatch/daux.c
3. A new entry, "filter_actions" was added to the NCDispatch table
and the version bumped.
4. An overly complex set of structs was created to support funneling
all of the filterx operations through a single dispatch
"filter_actions" entry.
5. Move common code from libhdf5 to libsrc4 so that it is accessible
to nczarr.
Changes directly related to Zarr:
1. Modified CMakeLists.txt and configure.ac to support both C and C++
-- this is in support of S3 access via the aws-sdk libraries.
2. Define a size64_t type to support nczarr.
3. More reworking of libdispatch/dinfermodel.c to
support zarr and to regularize the structure of the fragments
section of a URL.
Changes not directly related to Zarr:
1. Make client-side filter registration be conditional, with default off.
2. Hack include/nc4internal.h to make some flags added by Ed be unique:
e.g. NC_CREAT, NC_INDEF, etc.
3. Clean up include/nchttp.h and libdispatch/dhttp.c.
4. Misc. changes to support compiling under Visual Studio including:
* Better testing under windows for dirent.h and opendir and closedir.
5. Misc. changes to the oc2 code to support various libcurl CURLOPT flags
and to centralize error reporting.
6. By default, suppress the vlen tests that have unfixed memory leaks; add option to enable them.
7. Make part of the nc_test/test_byterange.sh test be contingent on remotetest.unidata.ucar.edu being accessible.
Changes Left TO-DO:
1. Fix the provenance code; it is too HDF5-specific.
re: Partly addresses issue https://github.com/Unidata/netcdf-c/issues/1712.
1. Turn on Hyrax Hack to accept Hyrax style attribute containers.
2. Support Url type as alias for String.
3. Accept the special attribute, "__DAP4_Checksum_CRC32",
to control per-variable checksums.
4. Make _DAP4_xxx attributes be reserved and only accessible
by name (like the _SuperBlock attribute).
5. Fix handling of checksums. There is a hack in the code
that uses an extra flag in the chunk header to indicate
that all variables have checksums. This violates the spec
and will be removed once it is possible to regenerate the
test cases.
Note that checksumming with the Hyrax test server has not
been tested. This, along with some other probable inconsistencies,
needs fixing when OPeNDAP and Unidata can agree on the proper
specification. Testing will be included.
Fix Issue https://github.com/Unidata/netcdf-c/issues/1725.
Replace PR https://github.com/Unidata/netcdf-c/pull/1726
Also replace PR https://github.com/Unidata/netcdf-c/pull/1694
The general problem is that under Visual Studio, we are seeing
a number of undefined reference and other scoping errors.
The reason is that the code is not properly using Visual Studio
__declspec() declarations.
The basic solution is to ensure that __declspec(dllexport) is used
when compiling the code itself. There
are several sets of macros to handle this, but they all rely
on the flag DLL_EXPORT being defined when the code is compiled,
but not being defined when the code is used via a .h file.
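An illustrative sketch of that macro pattern (the macro and function names here are generic placeholders, not netcdf-c's actual macros):

```c
/* DLL_EXPORT is defined only while building the library itself. */
#if defined(_WIN32)
#  ifdef DLL_EXPORT
#    define MYLIB_EXTERN __declspec(dllexport)  /* building: export symbols */
#  else
#    define MYLIB_EXTERN __declspec(dllimport)  /* consuming via .h: import */
#  endif
#else
#  define MYLIB_EXTERN extern
#endif

MYLIB_EXTERN int mylib_function(void);
```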
As a test, I modified XGetOpt.c to build properly. I also
fixed the oc2 library to properly __declspec things like ocdebug.
I also made some misc. changes to get all the tests to run
if cygwin is installed (to get bash, sed, etc).
Misc. Changes:
* Put XGetOpt.c into libsrc and copy it at build time
to the other directories where it is needed.