After a long discussion, I implemented the rules at the end of that issue.
They are documented in nccopy.1.
Additionally, I added a new per-variable form of the -c flag that
allows direct setting of the chunking parameters for a variable.
The form is

    -c var:c1,c2,...,ck

where var is the name of the variable (possibly a fully qualified name)
and the ci are the chunk sizes for that variable; k must equal the rank
of the variable. If the new form is used as well as the old form, the
new form overrides the old form for the specified variable. Note that
the new form of the -c flag may occur multiple times.
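For example, assuming an input file with a 3-D variable tas and a 1-D
variable time (hypothetical names), multiple occurrences of the new
form could be given in one invocation:

    nccopy -c time:100 -c tas:1,128,256 input.nc output.nc

Here time gets a chunk size of 100 and tas gets chunk sizes of 1, 128,
and 256 for its three dimensions; a variable inside a group would be
named with its fully qualified name, e.g. /g1/tas.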
Misc. other fixes:
1. Added a -M <size> option to nccopy to specify the minimum
allowable chunk size (see the example after this list).
2. Removed the unused variables from bigmeta.c
(Issue https://github.com/Unidata/netcdf-c/issues/1079)
3. Fixed a failure of nc_test4/tst_filter.sh by using the new -M
option (item 1 above) to allow the filter test to run with a small
chunk size.
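For example (hypothetical file names; the size argument is assumed to
be in bytes):

    nccopy -M 8 -c var:2,2 input.nc output.nc

This lowers the minimum allowable chunk size so that the small 2x2
chunking requested for var is allowed.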

Improve the performance of the nc_get_vars and nc_put_vars operations
by using the corresponding HDF5 operations.
re: github issue https://github.com/Unidata/netcdf-c/issues/908
also in reference to https://github.com/pydata/xarray/issues/2004
The netcdf-c library has implemented the nc_get_vars and nc_put_vars
operations by reading or writing one element at a time. This has
resulted in very slow operation.
This PR attempts to improve the situation for netcdf-4/HDF5 files
by using the hyperslab operations provided by the HDF5 library. The
new implementation passes the get/put vars stride information down to
the HDF5 hyperslab operations.
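As an illustration, here is a minimal C sketch of the kind of strided
access that benefits; the file name, variable name, and dimension
sizes are hypothetical:

    #include <netcdf.h>

    int main(void)
    {
        int ncid, varid, stat;
        size_t start[2]     = {0, 0};
        size_t count[2]     = {250, 250};  /* elements read per dimension */
        ptrdiff_t stride[2] = {4, 4};      /* take every 4th element */
        static double data[250 * 250];

        /* Assumes "var" is a 2-D double variable with dimensions of
           at least 1000 x 1000. */
        if ((stat = nc_open("file.nc", NC_NOWRITE, &ncid))) return stat;
        if ((stat = nc_inq_varid(ncid, "var", &varid))) return stat;
        /* Previously ~62500 element-at-a-time accesses; now the stride
           array is passed down to a single HDF5 hyperslab read. */
        if ((stat = nc_get_vars_double(ncid, varid, start, count, stride, data)))
            return stat;
        return nc_close(ncid);
    }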
The result appears to improve performance significantly. Some simple
tests on large 2-D arrays show speedups in excess of 150x.
Misc. other changes:
1. Fixed a bug in ncgen/semantics.c: a list's allocated length was
being used instead of its actual length.
2. Added a temporary hook in the netcdf library plus a performance
test case (tst_varsperf.c) to estimate the speedup. After users
have had some experience with this, I will remove it, probably
after the 4.7 release.