Typo fixes in install doc. Document nccopy chunk threshold.

This commit is contained in:
Russ Rew 2014-03-04 15:03:49 -07:00
parent 93878e8816
commit dac98c89b5
3 changed files with 23 additions and 15 deletions

@@ -12,7 +12,7 @@ library.
\page getting_and_building_netcdf Getting and Building NetCDF-C
\brief This page provides instructions for obtaining and building NetCDF-C.
\brief This page provides instructions for obtaining and building netCDF-C.
\tableofcontents
@@ -29,7 +29,7 @@ When getting netCDF from a software repository, you will wish to get
the development version of the package ("netcdf-devel"). This includes
the netcdf.h header file.
\note If you are interested in building NetCDF-C on Windows, please see \ref winbin and \ref netCDF-CMake.
\note If you are interested in building netCDF-C on Windows, please see \ref winbin and \ref netCDF-CMake.
\subsection sec_get_source Getting the latest netCDF-C Source Code
@@ -85,7 +85,7 @@ libraries. (And, optionally, the szlib library). Versions required are
at least HDF5 1.8.8, zlib 1.2.5, and curl 7.18.0 or later.
(Optionally, if building with szlib, get szip 2.0 or later.)
HDF5 1.8.9 and zlib 1.2.7 packages are available from the <a
HDF5 1.8.12 and zlib 1.2.8 packages are available from the <a
href="ftp://ftp.unidata.ucar.edu/pub/netcdf/netcdf-4">netCDF-4 ftp
site</a>. If you wish to use the remote data client code, then you
will also need libcurl, which can be obtained from the <a
@@ -146,7 +146,7 @@ FAQ</a> for more details on using shared libraries.
If you are building HDF5 with szip, then include the <CODE>--with-szlib=</CODE>
option, with the directory holding the szip library.
After HDF5 is done, build netcdf, specifying the location of the
After HDF5 is done, build netCDF, specifying the location of the
HDF5, zlib, and (if built into HDF5) the szip header files and
libraries in the CPPFLAGS and LDFLAGS environment variables. For example:
@@ -230,7 +230,7 @@ Then, when building netCDF-4, use the
option to configure. The location for the HDF4 header files and
library must be set in the CPPFLAGS and LDFLAGS options.
For HDF4 access to work, the library must be build with netCDF-4
For HDF4 access to work, the library must be built with netCDF-4
features.
Here's an example, assuming the HDF5 library has been built and

@@ -130,6 +130,10 @@ that use 'm' and 'n' dimensions might be 'm/100,n/200' to specify
100 by 200 chunks. To see the chunking resulting from copying with a
chunkspec, use the '-s' option of ncdump on the output file.
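The chunkspec syntax above ('m/100,n/200') pairs each dimension name with a chunk length. As a minimal sketch of that syntax (not nccopy's actual parser, which lives elsewhere in the source tree and handles more cases), a hypothetical helper might split one field of a chunkspec like this:

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative sketch only: split the first "name/len" field of an
 * nccopy-style chunkspec such as "m/100,n/200" into a dimension name
 * and a chunk length.  This is a hypothetical helper, not the real
 * nccopy chunkspec parser. */
static int
parse_chunkspec_field(const char *spec, char *name, size_t namelen, size_t *lenp)
{
    const char *slash = strchr(spec, '/');
    if (slash == NULL)
        return -1;                      /* no "/": not a valid field */
    size_t n = (size_t)(slash - spec);
    if (n == 0 || n + 1 > namelen)
        return -1;                      /* empty or oversized dimension name */
    memcpy(name, spec, n);
    name[n] = '\0';
    /* strtoul stops at the ',' separating fields, so only this
     * field's length is consumed. */
    *lenp = (size_t)strtoul(slash + 1, NULL, 10);
    return 0;
}
```

For 'm/100,n/200' this yields the name 'm' and length 100 for the first field; the remaining fields would be handled by advancing past the next comma.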
.IP
As an I/O optimization, \fBnccopy\fP has a threshold for the minimum size of
non-record variables that get chunked, currently 8192 bytes. In the future,
use of this threshold and its size may be settable in an option.
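The threshold rule just described can be sketched as a small predicate. This is an illustrative reconstruction under the stated assumptions (the 8192-byte cutoff applies only to non-record variables; record variables, having an unlimited dimension, must be chunked in netCDF-4), not nccopy's actual decision code:

```c
#include <stdbool.h>
#include <stddef.h>

/* Same value as CHUNK_THRESHOLD in nccopy.c. */
#define CHUNK_THRESHOLD (8192)

/* Hypothetical sketch of the rule described above: non-record
 * variables smaller than the threshold stay contiguous; record
 * variables and larger variables get chunked. */
static bool
should_chunk(size_t var_size_bytes, bool is_record_var)
{
    if (is_record_var)
        return true;    /* unlimited dimension: chunking is required */
    return var_size_bytes >= CHUNK_THRESHOLD;
}
```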
.IP
Note that \fBnccopy\fP requires variables that share a dimension to
also share the chunk size associated with that dimension, but the
programming interface has no such restriction. If you need to
@@ -186,7 +190,7 @@ performance, if the output fits in memory.
.IP "\fB -h \fP \fI chunk_cache \fP"
For netCDF-4 output, including netCDF-4 classic model, an integer or
floating-point number that specifies the size in bytes of chunk cache
for chunked variables. This is not a property of the file, but merely
for each chunked variable. This is not a property of the file, but merely
a performance tuning parameter for avoiding compressing or
decompressing the same data multiple times while copying and changing
chunk shapes. A suffix of K, M, G, or T multiplies the chunk cache
@@ -200,9 +204,9 @@ cache size has been implemented yet. Using the '-w' option may
provide better performance, if the output fits in memory.
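The K/M/G/T suffix convention used by these options scales the given number by powers of one thousand, as the text states. A hedged sketch of such a scaler (an illustrative helper, not nccopy's actual argument parser):

```c
#include <ctype.h>
#include <stdlib.h>

/* Illustrative helper (not nccopy's real option parser): scale a
 * numeric string by an optional K/M/G/T suffix, using powers of
 * 1000 as the option descriptions above specify. */
static double
scale_by_suffix(const char *s)
{
    char *end;
    double val = strtod(s, &end);        /* integer or floating-point */
    switch (toupper((unsigned char)*end)) {
    case 'K': return val * 1.0e3;
    case 'M': return val * 1.0e6;
    case 'G': return val * 1.0e9;
    case 'T': return val * 1.0e12;
    default:  return val;                /* no suffix */
    }
}
```

So '-h 4M' would request a 4,000,000-byte chunk cache under this reading.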
.IP "\fB -e \fP \fI cache_elems \fP"
For netCDF-4 output, including netCDF-4 classic model, specifies
number of elements that the chunk cache can hold. A suffix of K, M, G,
or T multiplies the copy buffer size by one thousand, million,
billion, or trillion, respectively. This is not a
number of chunks that the chunk cache can hold. A suffix of K, M, G,
or T multiplies the number of chunks that can be held in the cache
by one thousand, million, billion, or trillion, respectively. This is not a
property of the file, but merely a performance tuning parameter for
avoiding compressing or decompressing the same data multiple times
while copying and changing chunk shapes. The default is 1009 (or

@@ -34,7 +34,7 @@ int optind;
#define COPY_BUFFER_SIZE (5000000)
#define COPY_CHUNKCACHE_PREEMPTION (1.0f) /* for copying, can eject fully read chunks */
#define SAME_AS_INPUT (-1) /* default, if kind not specified */
#define CHUNK_THRESHOLD (8192) /* variables with fewer bytes don't get chunked */
#define CHUNK_THRESHOLD (8192) /* non-record variables with fewer bytes don't get chunked */
#ifndef USE_NETCDF4
#define NC_CLASSIC_MODEL 0x0100 /* Enforce classic model if netCDF-4 not available. */
@@ -1655,6 +1655,11 @@ chunk length. An example of a chunkspec for variables that use
chunks. To see the chunking resulting from copying with a chunkspec,
use the '-s' option of ncdump on the output file.
@par
As an I/O optimization, \b nccopy has a threshold for the minimum size of
non-record variables that get chunked, currently 8192 bytes. In the future,
use of this threshold and its size may be settable in an option.
@par
Note that \b nccopy requires variables that share a dimension to also
share the chunk size associated with that dimension, but the
@@ -1728,7 +1733,7 @@ performance, if the output fits in memory.
@par
For netCDF-4 output, including netCDF-4 classic model, an integer or
floating-point number that specifies the size in bytes of chunk cache
for chunked variables. This is not a property of the file, but merely
allocated for each chunked variable. This is not a property of the file, but merely
a performance tuning parameter for avoiding compressing or
decompressing the same data multiple times while copying and changing
chunk shapes. A suffix of K, M, G, or T multiplies the chunk cache
@@ -1740,13 +1745,12 @@ buffer size and divide it optimally between a copy buffer and chunk
cache, but no general algorithm for computing the optimum chunk cache
size has been implemented yet. Using the '-w' option may provide
better performance, if the output fits in memory.
@par -e \e cache_elems
@par
For netCDF-4 output, including netCDF-4 classic model, specifies
number of elements that the chunk cache can hold. A suffix of K, M, G,
or T multiplies the copy buffer size by one thousand, million,
billion, or trillion, respectively. This is not a
number of chunks that the chunk cache can hold. A suffix of K, M, G,
or T multiplies the number of chunks that can be held in the cache
by one thousand, million, billion, or trillion, respectively. This is not a
property of the file, but merely a performance tuning parameter for
avoiding compressing or decompressing the same data multiple times
while copying and changing chunk shapes. The default is 1009 (or