mirror of
https://git.postgresql.org/git/postgresql.git
synced 2025-02-23 19:39:53 +08:00
Documentation spell checking and markup improvements
commit 256f6ba78a
parent 30b5ede715
@@ -223,7 +223,7 @@ include 'filename'
 <secondary>in configuration file</secondary>
 </indexterm>
 The <filename>postgresql.conf</> file can also contain
-<firstterm>include_dir directives</>, which specify an entire directory
+<literal>include_dir</literal> directives, which specify an entire directory
 of configuration files to include. It is used similarly:
 <programlisting>
 include_dir 'directory'
@@ -234,7 +234,7 @@ include 'filename'
 names end with the suffix <literal>.conf</literal> will be included. File
 names that start with the <literal>.</literal> character are also excluded,
 to prevent mistakes as they are hidden on some platforms. Multiple files
-within an include directory are processed in filename order. The filenames
+within an include directory are processed in file name order. The file names
 are ordered by C locale rules, ie. numbers before letters, and uppercase
 letters before lowercase ones.
 </para>
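The file-selection and ordering rules described in this hunk can be sketched in Python (a simplified illustration, not PostgreSQL's actual C implementation; the file names are made up):

```python
def include_dir_files(names):
    """Sketch of how an include_dir picks configuration files:
    only names ending in .conf, no hidden (dot-prefixed) names,
    processed in C-locale order."""
    chosen = [n for n in names
              if n.endswith(".conf") and not n.startswith(".")]
    # A plain byte-wise sort approximates C locale ordering for ASCII
    # names: digits sort before uppercase, uppercase before lowercase.
    return sorted(chosen)

files = ["10-memory.conf", "Z-override.conf", "a-base.conf",
         ".hidden.conf", "notes.txt"]
print(include_dir_files(files))
# → ['10-memory.conf', 'Z-override.conf', 'a-base.conf']
```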
@@ -1211,7 +1211,7 @@ include 'filename'
 Specifies the maximum amount of disk space that a session can use
 for temporary files, such as sort and hash temporary files, or the
 storage file for a held cursor. A transaction attempting to exceed
-this limit will be cancelled.
+this limit will be canceled.
 The value is specified in kilobytes, and <literal>-1</> (the
 default) means no limit.
 Only superusers can change this setting.
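The limit rule in this hunk (kilobyte units, `-1` meaning unlimited) can be expressed as a small predicate; a sketch of the documented semantics only, not server code:

```python
def exceeds_temp_file_limit(usage_kb, temp_file_limit_kb):
    """Sketch of the temp_file_limit rule: the value is in kilobytes,
    and -1 (the default) means no limit. A transaction exceeding the
    limit is canceled."""
    if temp_file_limit_kb == -1:
        return False
    return usage_kb > temp_file_limit_kb

print(exceeds_temp_file_limit(2048, -1))    # no limit → False
print(exceeds_temp_file_limit(2048, 1024))  # over the limit → True
```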
@@ -3358,7 +3358,7 @@ local0.* /var/log/postgresql
 <para>
 When <varname>logging_collector</varname> is enabled,
 this parameter sets the file names of the created log files. The value
-is treated as a <systemitem>strftime</systemitem> pattern,
+is treated as a <function>strftime</function> pattern,
 so <literal>%</literal>-escapes can be used to specify time-varying
 file names. (Note that if there are
 any time-zone-dependent <literal>%</literal>-escapes, the computation
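Python's `time.strftime` uses the same `%`-escapes, so a fixed timestamp shows how such a log file name pattern expands (the pattern below is illustrative):

```python
import time

# Formatting a fixed timestamp shows how strftime %-escapes expand
# into a time-varying log file name.
pattern = "postgresql-%Y-%m-%d_%H%M%S.log"
ts = time.struct_time((2013, 4, 1, 9, 30, 0, 0, 91, 0))
print(time.strftime(pattern, ts))
# → postgresql-2013-04-01_093000.log
```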
@@ -4098,7 +4098,7 @@ SET xmloption TO { DOCUMENT | CONTENT };
 representations of XML values, such as in the above examples.
 This would ordinarily mean that encoding declarations contained in
 XML data can become invalid as the character data is converted
-to other encodings while travelling between client and server,
+to other encodings while traveling between client and server,
 because the embedded encoding declaration is not changed. To cope
 with this behavior, encoding declarations contained in
 character strings presented for input to the <type>xml</type> type
@@ -450,7 +450,7 @@ ExecForeignInsert (EState *estate,
 query has a <literal>RETURNING</> clause. Hence, the FDW could choose
 to optimize away returning some or all columns depending on the contents
 of the <literal>RETURNING</> clause. However, some slot must be
-returned to indicate success, or the query's reported rowcount will be
+returned to indicate success, or the query's reported row count will be
 wrong.
 </para>
@@ -495,7 +495,7 @@ ExecForeignUpdate (EState *estate,
 query has a <literal>RETURNING</> clause. Hence, the FDW could choose
 to optimize away returning some or all columns depending on the contents
 of the <literal>RETURNING</> clause. However, some slot must be
-returned to indicate success, or the query's reported rowcount will be
+returned to indicate success, or the query's reported row count will be
 wrong.
 </para>
@@ -538,7 +538,7 @@ ExecForeignDelete (EState *estate,
 query has a <literal>RETURNING</> clause. Hence, the FDW could choose
 to optimize away returning some or all columns depending on the contents
 of the <literal>RETURNING</> clause. However, some slot must be
-returned to indicate success, or the query's reported rowcount will be
+returned to indicate success, or the query's reported row count will be
 wrong.
 </para>
@@ -9928,7 +9928,7 @@ table2-mapping
 </indexterm>
 <literal>array_to_json(anyarray [, pretty_bool])</literal>
 </entry>
-<entry>json</entry>
+<entry><type>json</type></entry>
 <entry>
 Returns the array as JSON. A PostgreSQL multidimensional array
 becomes a JSON array of arrays. Line feeds will be added between
@@ -9944,7 +9944,7 @@ table2-mapping
 </indexterm>
 <literal>row_to_json(record [, pretty_bool])</literal>
 </entry>
-<entry>json</entry>
+<entry><type>json</type></entry>
 <entry>
 Returns the row as JSON. Line feeds will be added between level
 1 elements if <parameter>pretty_bool</parameter> is true.
@@ -9959,12 +9959,12 @@ table2-mapping
 </indexterm>
 <literal>to_json(anyelement)</literal>
 </entry>
-<entry>json</entry>
+<entry><type>json</type></entry>
 <entry>
-Returns the value as JSON. If the data type is not builtin, and there
-is a cast from the type to json, the cast function will be used to
+Returns the value as JSON. If the data type is not built in, and there
+is a cast from the type to <type>json</type>, the cast function will be used to
 perform the conversion. Otherwise, for any value other than a number,
-a boolean or NULL, the text representation will be used, escaped and
+a Boolean, or a null value, the text representation will be used, escaped and
 quoted so that it is legal JSON.
 </entry>
 <entry><literal>to_json('Fred said "Hi."'::text)</literal></entry>
@@ -9977,9 +9977,9 @@ table2-mapping
 </indexterm>
 <literal>json_array_length(json)</literal>
 </entry>
-<entry>int</entry>
+<entry><type>int</type></entry>
 <entry>
-Returns the number of elements in the outermost json array.
+Returns the number of elements in the outermost JSON array.
 </entry>
 <entry><literal>json_array_length('[1,2,3,{"f1":1,"f2":[5,6]},4]')</literal></entry>
 <entry><literal>5</literal></entry>
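The "outermost array" behavior in this hunk's example can be reproduced with Python's `json` module (a sketch of the documented semantics, not the server's implementation):

```python
import json

def json_array_length(text):
    """Sketch of json_array_length semantics: only the outermost
    array's elements are counted; a nested object or array counts
    as a single element."""
    return len(json.loads(text))

# The nested object {"f1":1,"f2":[5,6]} counts as one element.
print(json_array_length('[1,2,3,{"f1":1,"f2":[5,6]},4]'))
# → 5
```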
@@ -9991,9 +9991,9 @@ table2-mapping
 </indexterm>
 <literal>json_each(json)</literal>
 </entry>
-<entry>SETOF key text, value json</entry>
+<entry><type>SETOF key text, value json</type></entry>
 <entry>
-Expands the outermost json object into a set of key/value pairs.
+Expands the outermost JSON object into a set of key/value pairs.
 </entry>
 <entry><literal>select * from json_each('{"a":"foo", "b":"bar"}')</literal></entry>
 <entry>
@@ -10012,9 +10012,9 @@ table2-mapping
 </indexterm>
 <literal>json_each_text(from_json json)</literal>
 </entry>
-<entry>SETOF key text, value text</entry>
+<entry><type>SETOF key text, value text</type></entry>
 <entry>
-Expands the outermost json object into a set of key/value pairs. The
+Expands the outermost JSON object into a set of key/value pairs. The
 returned value will be of type text.
 </entry>
 <entry><literal>select * from json_each_text('{"a":"foo", "b":"bar"}')</literal></entry>
@@ -10034,9 +10034,9 @@ table2-mapping
 </indexterm>
 <literal>json_extract_path(from_json json, VARIADIC path_elems text[])</literal>
 </entry>
-<entry>json</entry>
+<entry><type>json</type></entry>
 <entry>
-Returns json object pointed to by <parameter>path_elems</parameter>.
+Returns JSON object pointed to by <parameter>path_elems</parameter>.
 </entry>
 <entry><literal>json_extract_path('{"f2":{"f3":1},"f4":{"f5":99,"f6":"foo"}}','f4')</literal></entry>
 <entry><literal>{"f5":99,"f6":"foo"}</literal></entry>
@@ -10048,9 +10048,9 @@ table2-mapping
 </indexterm>
 <literal>json_extract_path_text(from_json json, VARIADIC path_elems text[])</literal>
 </entry>
-<entry>text</entry>
+<entry><type>text</type></entry>
 <entry>
-Returns json object pointed to by <parameter>path_elems</parameter>.
+Returns JSON object pointed to by <parameter>path_elems</parameter>.
 </entry>
 <entry><literal>json_extract_path_text('{"f2":{"f3":1},"f4":{"f5":99,"f6":"foo"}}','f4', 'f6')</literal></entry>
 <entry><literal>foo</literal></entry>
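The path-walking behavior of the two extraction functions above can be sketched in Python, using the same example document as the docs (a simplified illustration, not the server's implementation):

```python
import json

def json_extract_path(from_json, *path_elems):
    """Sketch of json_extract_path: walk the parsed document one
    key at a time and return whatever sub-object is reached."""
    obj = json.loads(from_json)
    for key in path_elems:
        obj = obj[key]
    return obj

doc = '{"f2":{"f3":1},"f4":{"f5":99,"f6":"foo"}}'
print(json_extract_path(doc, "f4"))        # → {'f5': 99, 'f6': 'foo'}
print(json_extract_path(doc, "f4", "f6"))  # → foo
```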
@@ -10062,9 +10062,9 @@ table2-mapping
 </indexterm>
 <literal>json_object_keys(json)</literal>
 </entry>
-<entry>SETOF text</entry>
+<entry><type>SETOF text</type></entry>
 <entry>
-Returns set of keys in the json object. Only the "outer" object will be displayed.
+Returns set of keys in the JSON object. Only the <quote>outer</quote> object will be displayed.
 </entry>
 <entry><literal>json_object_keys('{"f1":"abc","f2":{"f3":"a", "f4":"b"}}')</literal></entry>
 <entry>
@@ -10083,11 +10083,11 @@ table2-mapping
 </indexterm>
 <literal>json_populate_record(base anyelement, from_json json, [, use_json_as_text bool=false]</literal>
 </entry>
-<entry>anyelement</entry>
+<entry><type>anyelement</type></entry>
 <entry>
-Expands the object in from_json to a row whose columns match
+Expands the object in <replaceable>from_json</replaceable> to a row whose columns match
 the record type defined by base. Conversion will be best
-effort; columns in base with no corresponding key in from_json
+effort; columns in base with no corresponding key in <replaceable>from_json</replaceable>
 will be left null. A column may only be specified once.
 </entry>
 <entry><literal>select * from json_populate_record(null::x, '{"a":1,"b":2}')</literal></entry>
@@ -10106,12 +10106,12 @@ table2-mapping
 </indexterm>
 <literal>json_populate_recordset(base anyelement, from_json json, [, use_json_as_text bool=false]</literal>
 </entry>
-<entry>SETOF anyelement</entry>
+<entry><type>SETOF anyelement</type></entry>
 <entry>
-Expands the outermost set of objects in from_json to a set
+Expands the outermost set of objects in <replaceable>from_json</replaceable> to a set
 whose columns match the record type defined by base.
 Conversion will be best effort; columns in base with no
-corresponding key in from_json will be left null. A column
+corresponding key in <replaceable>from_json</replaceable> will be left null. A column
 may only be specified once.
 </entry>
 <entry><literal>select * from json_populate_recordset(null::x, '[{"a":1,"b":2},{"a":3,"b":4}]')</literal></entry>
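The "best effort" conversion described for the two populate functions above can be sketched in Python: columns with no matching key become null, extra keys are ignored (a hypothetical helper for illustration, not the server's row-building code):

```python
import json

def populate_record(columns, from_json):
    """Sketch of json_populate_record's best-effort conversion:
    columns with no corresponding key in from_json are left null
    (None here), and keys with no matching column are ignored."""
    obj = json.loads(from_json)
    return {col: obj.get(col) for col in columns}

# Record type with columns a and b; the input's "c" key is ignored
# and the missing "b" column is left null.
print(populate_record(["a", "b"], '{"a":1,"c":3}'))
# → {'a': 1, 'b': None}
```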
@@ -10131,9 +10131,9 @@ table2-mapping
 </indexterm>
 <literal>json_array_elements(json)</literal>
 </entry>
-<entry>SETOF json</entry>
+<entry><type>SETOF json</type></entry>
 <entry>
-Expands a json array to a set of json elements.
+Expands a JSON array to a set of JSON elements.
 </entry>
 <entry><literal>json_array_elements('[1,true, [2,false]]')</literal></entry>
 <entry>
@@ -10152,8 +10152,8 @@ table2-mapping

 <note>
 <para>
-The <xref linkend="hstore"> extension has a cast from hstore to
-json, so that converted hstore values are represented as json objects,
+The <xref linkend="hstore"> extension has a cast from <type>hstore</type> to
+<type>json</type>, so that converted <type>hstore</type> values are represented as JSON objects,
 not as string values.
 </para>
 </note>
@@ -10161,7 +10161,7 @@ table2-mapping
 <para>
 See also <xref linkend="functions-aggregate"> about the aggregate
 function <function>json_agg</function> which aggregates record
-values as json efficiently.
+values as JSON efficiently.
 </para>
 </sect1>
@@ -11546,7 +11546,7 @@ SELECT NULLIF(value, '(none)') ...
 <entry>
 <type>json</type>
 </entry>
-<entry>aggregates records as a json array of objects</entry>
+<entry>aggregates records as a JSON array of objects</entry>
 </row>

 <row>
@@ -14904,7 +14904,7 @@ SELECT set_config('log_statement_stats', 'off', false);
 </sect2>

 <sect2 id="functions-admin-signal">
-<title>Server Signalling Functions</title>
+<title>Server Signaling Functions</title>

 <indexterm>
 <primary>pg_cancel_backend</primary>
@@ -14932,7 +14932,7 @@ SELECT set_config('log_statement_stats', 'off', false);
 </para>

 <table id="functions-admin-signal-table">
-<title>Server Signalling Functions</title>
+<title>Server Signaling Functions</title>
 <tgroup cols="3">
 <thead>
 <row><entry>Name</entry> <entry>Return Type</entry> <entry>Description</entry>
@@ -105,7 +105,7 @@
 Returns a palloc'd array of keys given an item to be indexed. The
 number of returned keys must be stored into <literal>*nkeys</>.
 If any of the keys can be null, also palloc an array of
-<literal>*nkeys</> booleans, store its address at
+<literal>*nkeys</> <type>bool</type> fields, store its address at
 <literal>*nullFlags</>, and set these null flags as needed.
 <literal>*nullFlags</> can be left <symbol>NULL</symbol> (its initial value)
 if all keys are non-null.
@@ -130,11 +130,11 @@
 <literal>query</> and the method it should use to extract key values.
 The number of returned keys must be stored into <literal>*nkeys</>.
 If any of the keys can be null, also palloc an array of
-<literal>*nkeys</> booleans, store its address at
+<literal>*nkeys</> <type>bool</type> fields, store its address at
 <literal>*nullFlags</>, and set these null flags as needed.
-<literal>*nullFlags</> can be left NULL (its initial value)
+<literal>*nullFlags</> can be left <symbol>NULL</symbol> (its initial value)
 if all keys are non-null.
-The return value can be NULL if the <literal>query</> contains no keys.
+The return value can be <symbol>NULL</symbol> if the <literal>query</> contains no keys.
 </para>

 <para>
@@ -168,8 +168,8 @@
 an array of <literal>*nkeys</> booleans and store its address at
 <literal>*pmatch</>. Each element of the array should be set to TRUE
 if the corresponding key requires partial match, FALSE if not.
-If <literal>*pmatch</> is set to NULL then GIN assumes partial match
-is not required. The variable is initialized to NULL before call,
+If <literal>*pmatch</> is set to <symbol>NULL</symbol> then GIN assumes partial match
+is not required. The variable is initialized to <symbol>NULL</symbol> before call,
 so this argument can simply be ignored by operator classes that do
 not support partial match.
 </para>
@@ -181,7 +181,7 @@
 To use it, <function>extractQuery</> must allocate
 an array of <literal>*nkeys</> Pointers and store its address at
 <literal>*extra_data</>, then store whatever it wants to into the
-individual pointers. The variable is initialized to NULL before
+individual pointers. The variable is initialized to <symbol>NULL</symbol> before
 call, so this argument can simply be ignored by operator classes that
 do not require extra data. If <literal>*extra_data</> is set, the
 whole array is passed to the <function>consistent</> method, and
@@ -215,7 +215,7 @@
 and so are the <literal>queryKeys[]</> and <literal>nullFlags[]</>
 arrays previously returned by <function>extractQuery</>.
 <literal>extra_data</> is the extra-data array returned by
-<function>extractQuery</>, or NULL if none.
+<function>extractQuery</>, or <symbol>NULL</symbol> if none.
 </para>

 <para>
@@ -261,7 +261,7 @@
 that generated the partial match query is provided, in case its
 semantics are needed to determine when to end the scan. Also,
 <literal>extra_data</> is the corresponding element of the extra-data
-array made by <function>extractQuery</>, or NULL if none.
+array made by <function>extractQuery</>, or <symbol>NULL</symbol> if none.
 Null keys are never passed to this function.
 </para>
 </listitem>
@@ -305,9 +305,9 @@
 </para>

 <para>
-As of <productname>PostgreSQL</productname> 9.1, NULL key values can be
-included in the index. Also, placeholder NULLs are included in the index
-for indexed items that are NULL or contain no keys according to
+As of <productname>PostgreSQL</productname> 9.1, null key values can be
+included in the index. Also, placeholder nulls are included in the index
+for indexed items that are null or contain no keys according to
 <function>extractValue</>. This allows searches that should find empty
 items to do so.
 </para>
@@ -471,11 +471,11 @@

 <para>
 <acronym>GIN</acronym> assumes that indexable operators are strict. This
-means that <function>extractValue</> will not be called at all on a NULL
+means that <function>extractValue</> will not be called at all on a null
 item value (instead, a placeholder index entry is created automatically),
-and <function>extractQuery</function> will not be called on a NULL query
+and <function>extractQuery</function> will not be called on a null query
 value either (instead, the query is presumed to be unsatisfiable). Note
-however that NULL key values contained within a non-null composite item
+however that null key values contained within a non-null composite item
 or query value are supported.
 </para>
 </sect1>
@@ -325,7 +325,7 @@ b
 <row>
 <entry><function>hstore_to_json(hstore)</function></entry>
 <entry><type>json</type></entry>
-<entry>get <type>hstore</type> as a json value</entry>
+<entry>get <type>hstore</type> as a <type>json</type> value</entry>
 <entry><literal>hstore_to_json('"a key"=>1, b=>t, c=>null, d=>12345, e=>012345, f=>1.234, g=>2.345e+4')</literal></entry>
 <entry><literal>{"a key": "1", "b": "t", "c": null, "d": "12345", "e": "012345", "f": "1.234", "g": "2.345e+4"}</literal></entry>
 </row>
@@ -333,7 +333,7 @@ b
 <row>
 <entry><function>hstore_to_json_loose(hstore)</function></entry>
 <entry><type>json</type></entry>
-<entry>get <type>hstore</type> as a json value, but attempting to distinguish numerical and boolean values so they are unquoted in the json</entry>
+<entry>get <type>hstore</type> as a <type>json</type> value, but attempting to distinguish numerical and Boolean values so they are unquoted in the JSON</entry>
 <entry><literal>hstore_to_json_loose('"a key"=>1, b=>t, c=>null, d=>12345, e=>012345, f=>1.234, g=>2.345e+4')</literal></entry>
 <entry><literal>{"a key": 1, "b": true, "c": null, "d": 12345, "e": "012345", "f": 1.234, "g": 2.345e+4}</literal></entry>
 </row>
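The "loose" guessing that distinguishes `hstore_to_json_loose` from `hstore_to_json` can be sketched in Python. This is only a rough model of the documented example output, with deliberately simplified heuristics, not the extension's actual rules:

```python
def loose_value(s):
    """Sketch of the hstore_to_json_loose idea: keep values as strings
    unless they look numeric or Boolean, in which case emit them
    unquoted. Heuristics simplified; note the documented example keeps
    "012345" (leading zero) quoted, which this reproduces."""
    if s is None:
        return None
    if s == "t":
        return True
    if s == "f":
        return False
    if s.startswith("0") and s != "0":
        return s  # leading zeros stay strings
    try:
        return int(s)
    except ValueError:
        try:
            return float(s)
        except ValueError:
            return s

row = {"b": "t", "d": "12345", "e": "012345", "f": "1.234"}
print({k: loose_value(v) for k, v in row.items()})
# → {'b': True, 'd': 12345, 'e': '012345', 'f': 1.234}
```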
@@ -113,8 +113,8 @@
 <structfield>amoptionalkey</structfield> false.
 One reason that an index AM might set
 <structfield>amoptionalkey</structfield> false is if it doesn't index
-NULLs. Since most indexable operators are
-strict and hence cannot return TRUE for NULL inputs,
+null values. Since most indexable operators are
+strict and hence cannot return true for null inputs,
 it is at first sight attractive to not store index entries for null values:
 they could never be returned by an index scan anyway. However, this
 argument fails when an index scan has no restriction clause for a given
@@ -13,7 +13,7 @@
 information schema is defined in the SQL standard and can therefore
 be expected to be portable and remain stable — unlike the system
 catalogs, which are specific to
-<productname>PostgreSQL</productname> and are modelled after
+<productname>PostgreSQL</productname> and are modeled after
 implementation concerns. The information schema views do not,
 however, contain information about
 <productname>PostgreSQL</productname>-specific features; to inquire
@@ -233,7 +233,7 @@ $ENV{PATH}=$ENV{PATH} . ';c:\some\where\bison\bin';
 spaces in the name, such as the default location on English
 installations <filename>C:\Program Files\GnuWin32</filename>.
 Consider installing into <filename>C:\GnuWin32</filename> or use the
-NTFS shortname path to GnuWin32 in your PATH environment setting
+NTFS short name path to GnuWin32 in your PATH environment setting
 (e.g. <filename>C:\PROGRA~1\GnuWin32</filename>).
 </para>
 </note>
@@ -2734,9 +2734,9 @@ char *PQresultErrorField(const PGresult *res, int fieldcode);
 <term><symbol>PG_DIAG_DATATYPE_NAME</></term>
 <listitem>
 <para>
-If the error was associated with a specific datatype, the name
-of the datatype. (When this field is present, the schema name
-field provides the name of the datatype's schema.)
+If the error was associated with a specific data type, the name
+of the data type. (When this field is present, the schema name
+field provides the name of the data type's schema.)
 </para>
 </listitem>
 </varlistentry>
@@ -2787,7 +2787,7 @@ char *PQresultErrorField(const PGresult *res, int fieldcode);

 <note>
 <para>
-The fields for schema name, table name, column name, datatype
+The fields for schema name, table name, column name, data type
 name, and constraint name are supplied only for a limited number
 of error types; see <xref linkend="errcodes-appendix">.
 </para>
@@ -33,7 +33,7 @@
 a path from the root of a hierarchical tree to a particular node. The
 length of a label path must be less than 65Kb, but keeping it under 2Kb is
 preferable. In practice this is not a major limitation; for example,
-the longest label path in the DMOZ catalogue (<ulink
+the longest label path in the DMOZ catalog (<ulink
 url="http://www.dmoz.org"></ulink>) is about 240 bytes.
 </para>
@@ -263,9 +263,9 @@
 <important>
 <para>
 Some <productname>PostgreSQL</productname> data types and functions have
-special rules regarding transactional behaviour. In particular, changes
-made to a <literal>SEQUENCE</literal> (and therefore the counter of a
-column declared using <literal>SERIAL</literal>) are immediately visible
+special rules regarding transactional behavior. In particular, changes
+made to a sequence (and therefore the counter of a
+column declared using <type>serial</type>) are immediately visible
 to all other transactions and are not rolled back if the transaction
 that made the changes aborts. See <xref linkend="functions-sequence">
 and <xref linkend="datatype-serial">.
@@ -675,7 +675,7 @@ EXPLAIN ANALYZE SELECT * FROM polygon_tbl WHERE f1 @> polygon '(0.5,2.0)';

 <para>
 <command>EXPLAIN</> has a <literal>BUFFERS</> option that can be used with
-<literal>ANALYZE</> to get even more runtime statistics:
+<literal>ANALYZE</> to get even more run time statistics:

 <screen>
 EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM tenk1 WHERE unique1 < 100 AND unique2 > 9000;
@@ -735,7 +735,7 @@ ROLLBACK;
 So above, we see the same sort of bitmap table scan we've seen already,
 and its output is fed to an Update node that stores the updated rows.
 It's worth noting that although the data-modifying node can take a
-considerable amount of runtime (here, it's consuming the lion's share
+considerable amount of run time (here, it's consuming the lion's share
 of the time), the planner does not currently add anything to the cost
 estimates to account for that work. That's because the work to be done is
 the same for every correct query plan, so it doesn't affect planning
@@ -811,7 +811,7 @@ EXPLAIN ANALYZE SELECT * FROM tenk1 WHERE unique1 < 100 AND unique2 > 9000
 the estimated cost and row count for the Index Scan node are shown as
 though it were run to completion. But in reality the Limit node stopped
 requesting rows after it got two, so the actual row count is only 2 and
-the runtime is less than the cost estimate would suggest. This is not
+the run time is less than the cost estimate would suggest. This is not
 an estimation error, only a discrepancy in the way the estimates and true
 values are displayed.
 </para>
@@ -36,7 +36,7 @@
 difference in real database throughput, especially since many database servers
 are not speed-limited by their transaction logs.
 <application>pg_test_fsync</application> reports average file sync operation
-time in microseconds for each wal_sync_method, which can also be used to
+time in microseconds for each <literal>wal_sync_method</literal>, which can also be used to
 inform efforts to optimize the value of <xref linkend="guc-commit-delay">.
 </para>
 </refsect1>
@@ -432,7 +432,7 @@ rows = (outer_cardinality * inner_cardinality) * selectivity
 <structname>tenk2</>. But this is not the case: the join relation size
 is estimated before any particular join plan has been considered. If
 everything is working well then the two ways of estimating the join
-size will produce about the same answer, but due to roundoff error and
+size will produce about the same answer, but due to round-off error and
 other factors they sometimes diverge significantly.
 </para>
@@ -201,7 +201,7 @@ select returns_array();

 <para>
 Perl passes <productname>PostgreSQL</productname> arrays as a blessed
-PostgreSQL::InServer::ARRAY object. This object may be treated as an array
+<type>PostgreSQL::InServer::ARRAY</type> object. This object may be treated as an array
 reference or a string, allowing for backward compatibility with Perl
 code written for <productname>PostgreSQL</productname> versions below 9.1 to
 run. For example:
@@ -228,7 +228,7 @@ SELECT concat_array_elements(ARRAY['PL','/','Perl']);

 <note>
 <para>
-Multi-dimensional arrays are represented as references to
+Multidimensional arrays are represented as references to
 lower-dimensional arrays of references in a way common to every Perl
 programmer.
 </para>
@@ -278,7 +278,7 @@ SELECT * FROM perl_row();
 <para>
 PL/Perl functions can also return sets of either scalar or
 composite types. Usually you'll want to return rows one at a
-time, both to speed up startup time and to keep from queueing up
+time, both to speed up startup time and to keep from queuing up
 the entire result set in memory. You can do this with
 <function>return_next</function> as illustrated below. Note that
 after the last <function>return_next</function>, you must put
@@ -1292,7 +1292,7 @@ EXECUTE 'UPDATE tbl SET '
 </para>

 <para>
-Because <function>quote_literal</function> is labelled
+Because <function>quote_literal</function> is labeled
 <literal>STRICT</literal>, it will always return null when called with a
 null argument. In the above example, if <literal>newvalue</> or
 <literal>keyvalue</> were null, the entire dynamic query string would
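The STRICT behavior this hunk describes — a null argument short-circuits the call and yields null — can be sketched in Python with a decorator (the quoting rule shown is a simplified illustration, not the server's full quoting logic):

```python
def strict(fn):
    """Sketch of a STRICT function: if any argument is null (None),
    the function body is never invoked and the result is null."""
    def wrapper(*args):
        if any(a is None for a in args):
            return None
        return fn(*args)
    return wrapper

@strict
def quote_literal(s):
    # Simplified quoting for illustration: wrap in single quotes
    # and double any embedded single quotes.
    return "'" + s.replace("'", "''") + "'"

print(quote_literal("O'Reilly"))  # → 'O''Reilly'
print(quote_literal(None))        # → None
```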
@@ -2107,11 +2107,11 @@ EXIT <optional> <replaceable>label</replaceable> </optional> <optional> WHEN <re
 When used with a
 <literal>BEGIN</literal> block, <literal>EXIT</literal> passes
 control to the next statement after the end of the block.
-Note that a label must be used for this purpose; an unlabelled
+Note that a label must be used for this purpose; an unlabeled
 <literal>EXIT</literal> is never considered to match a
 <literal>BEGIN</literal> block. (This is a change from
 pre-8.4 releases of <productname>PostgreSQL</productname>, which
-would allow an unlabelled <literal>EXIT</literal> to match
+would allow an unlabeled <literal>EXIT</literal> to match
 a <literal>BEGIN</literal> block.)
 </para>
@@ -236,11 +236,11 @@

 <para>
 When <literal>use_remote_estimate</literal> is true,
-<filename>postgres_fdw</> obtains rowcount and cost estimates from the
+<filename>postgres_fdw</> obtains row count and cost estimates from the
 remote server and then adds <literal>fdw_startup_cost</literal> and
 <literal>fdw_tuple_cost</literal> to the cost estimates. When
 <literal>use_remote_estimate</literal> is false,
-<filename>postgres_fdw</> performs local rowcount and cost estimation
+<filename>postgres_fdw</> performs local row count and cost estimation
 and then adds <literal>fdw_startup_cost</literal> and
 <literal>fdw_tuple_cost</literal> to the cost estimates. This local
 estimation is unlikely to be very accurate unless local copies of the
@@ -4813,9 +4813,9 @@ message.
 </term>
 <listitem>
 <para>
-Datatype name: if the error was associated with a specific datatype,
-the name of the datatype. (When this field is present, the schema
-name field provides the name of the datatype's schema.)
+Data type name: if the error was associated with a specific data type,
+the name of the data type. (When this field is present, the schema
+name field provides the name of the data type's schema.)
 </para>
 </listitem>
 </varlistentry>
@@ -4874,7 +4874,7 @@ message.

 <note>
 <para>
-The fields for schema name, table name, column name, datatype name, and
+The fields for schema name, table name, column name, data type name, and
 constraint name are supplied only for a limited number of error types;
 see <xref linkend="errcodes-appendix">.
 </para>
@@ -121,8 +121,8 @@ COPY { <replaceable class="parameter">table_name</replaceable> [ ( <replaceable
 <term><replaceable class="parameter">filename</replaceable></term>
 <listitem>
 <para>
-The path name of the input or output file. An input filename can be
-an absolute or relative path, but an output filename must be an absolute
+The path name of the input or output file. An input file name can be
+an absolute or relative path, but an output file name must be an absolute
 path. Windows users might need to use an <literal>E''</> string and
 double any backslashes used in the path name.
 </para>
@ -364,7 +364,7 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI
constraints copied by <literal>LIKE</> are not merged with similarly
named columns and constraints.
If the same name is specified explicitly or in another
<literal>LIKE</literal> clause, an error is signalled.
<literal>LIKE</literal> clause, an error is signaled.
</para>
<para>
The <literal>LIKE</literal> clause can also be used to copy columns from

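A minimal sketch of the error described in the hunk above (table names are hypothetical):

```sql
CREATE TABLE base (id int);
-- Specifying the same column name both explicitly and via LIKE is an error:
CREATE TABLE bad (id int, LIKE base);  -- fails with a duplicate-column error
```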
@ -136,7 +136,7 @@ CREATE TYPE <replaceable class="parameter">name</replaceable>
be any type with an associated b-tree operator class (to determine the
ordering of values for the range type). Normally the subtype's default
b-tree operator class is used to determine ordering; to use a non-default
opclass, specify its name with <replaceable
operator class, specify its name with <replaceable
class="parameter">subtype_opclass</replaceable>. If the subtype is
collatable, and you want to use a non-default collation in the range's
ordering, specify the desired collation with the <replaceable

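A sketch of a range type over a non-default operator class, assuming the built-in <literal>text_pattern_ops</literal> b-tree opclass for <type>text</type>:

```sql
CREATE TYPE textrange AS RANGE (
    subtype = text,
    subtype_opclass = text_pattern_ops
);
```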
@ -75,7 +75,7 @@ EXPLAIN [ ANALYZE ] [ VERBOSE ] <replaceable class="parameter">statement</replac

<para>
The <literal>ANALYZE</literal> option causes the statement to be actually
executed, not only planned. Then actual runtime statistics are added to
executed, not only planned. Then actual run time statistics are added to
the display, including the total elapsed time expended within each plan
node (in milliseconds) and the total number of rows it actually returned.
This is useful for seeing whether the planner's estimates

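For example (table name hypothetical), run time statistics then appear next to the planner's estimates:

```sql
EXPLAIN ANALYZE SELECT * FROM mytable WHERE id < 100;
-- each plan node gains "actual time=... rows=..." alongside the estimated costs
```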
@ -183,7 +183,7 @@ LOCK [ TABLE ] [ ONLY ] <replaceable class="PARAMETER">name</replaceable> [ * ]
the mode names involving <literal>ROW</> are all misnomers. These
mode names should generally be read as indicating the intention of
the user to acquire row-level locks within the locked table. Also,
<literal>ROW EXCLUSIVE</> mode is a sharable table lock. Keep in
<literal>ROW EXCLUSIVE</> mode is a shareable table lock. Keep in
mind that all the lock modes have identical semantics so far as
<command>LOCK TABLE</> is concerned, differing only in the rules
about which modes conflict with which. For information on how to

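A sketch of the point above (table name hypothetical): despite its name, the mode is a table-level lock that several sessions can hold concurrently.

```sql
BEGIN;
-- Takes a table-level lock; other sessions may also hold
-- ROW EXCLUSIVE on the same table at the same time.
LOCK TABLE mytable IN ROW EXCLUSIVE MODE;
COMMIT;
```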
@ -194,7 +194,7 @@ PostgreSQL documentation
<listitem>

<para>
Write a minimal recovery.conf in the output directory (or into
Write a minimal <filename>recovery.conf</filename> in the output directory (or into
the base archive file when using tar format) to ease setting
up a standby server.
</para>

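The minimal <filename>recovery.conf</filename> written by this option typically contains just enough to start a standby; roughly (host and user are placeholders):

```
standby_mode = 'on'
primary_conninfo = 'host=primary.example.com port=5432 user=replicator'
```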
@ -323,10 +323,10 @@ PostgreSQL documentation
<para>
For a consistent backup, the database server needs to support synchronized snapshots,
a feature that was introduced in <productname>PostgreSQL</productname> 9.2. With this
feature, database clients can ensure they see the same dataset even though they use
feature, database clients can ensure they see the same data set even though they use
different connections. <command>pg_dump -j</command> uses multiple database
connections; it connects to the database once with the master process and
once again for each worker job. Without the sychronized snapshot feature, the
once again for each worker job. Without the synchronized snapshot feature, the
different worker jobs wouldn't be guaranteed to see the same data in each connection,
which could lead to an inconsistent backup.
</para>

@ -156,7 +156,7 @@ gmake installcheck

<para>
The source distribution also contains regression tests of the static
behaviour of Hot Standby. These tests require a running primary server
behavior of Hot Standby. These tests require a running primary server
and a running standby server that is accepting new WAL changes from the
primary using either file-based log shipping or streaming replication.
Those servers are not automatically created for you, nor is the setup
@ -185,9 +185,9 @@ gmake standbycheck
</para>

<para>
Some extreme behaviours can also be generated on the primary using the
Some extreme behaviors can also be generated on the primary using the
script: <filename>src/test/regress/sql/hs_primary_extremes.sql</filename>
to allow the behaviour of the standby to be tested.
to allow the behavior of the standby to be tested.
</para>

<para>

@ -700,7 +700,7 @@

<listitem>
<para>
Allow a multi-row <link
Allow a multirow <link
linkend="SQL-VALUES"><literal>VALUES</></link> clause in a rule
to reference <literal>OLD</>/<literal>NEW</> (Tom Lane)
</para>
@ -911,7 +911,7 @@
<para>
Allow text <link linkend="datatype-timezones">timezone
designations</link>, e.g. <quote>America/Chicago</> when using
the <acronym>ISO</> <quote>T</> timestamptz format (Bruce Momjian)
the <acronym>ISO</> <quote>T</> <type>timestamptz</type> format (Bruce Momjian)
</para>
</listitem>

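A hedged sketch of the input form the release-note item above refers to (date and zone are arbitrary):

```sql
-- A text time zone designation following an ISO "T" timestamp:
SELECT '2013-04-05T09:30:00 America/Chicago'::timestamptz;
```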
@ -1128,7 +1128,7 @@
</para>

<para>
This allows plpy.debug(rv) to output something reasonable.
This allows <literal>plpy.debug(rv)</literal> to output something reasonable.
</para>
</listitem>

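A hypothetical PL/Python sketch of the call in question (function and query are placeholders):

```sql
CREATE FUNCTION show_rows() RETURNS void AS $$
rv = plpy.execute("SELECT 1 AS x")
plpy.debug(rv)  # with this change, logs a readable representation of the result
$$ LANGUAGE plpythonu;
```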
@ -1538,7 +1538,7 @@

<listitem>
<para>
Add emacs macro to match <productname>PostgreSQL</> perltidy
Add Emacs macro to match <productname>PostgreSQL</> perltidy
formatting (Peter Eisentraut)
</para>
</listitem>
@ -1783,7 +1783,7 @@

<listitem>
<para>
Have <application>pg_upgrade</> create unix-domain sockets in
Have <application>pg_upgrade</> create Unix-domain sockets in
the current directory (Bruce Momjian, Tom Lane)
</para>


@ -315,7 +315,7 @@ $ sudo semodule -r sepgsql-regtest
control rules as relationships between a subject entity (typically,
a client of the database) and an object entity (such as a database
object), each of which is
identified by a security label. If access to an unlabelled object is
identified by a security label. If access to an unlabeled object is
attempted, the object is treated as if it were assigned the label
<literal>unlabeled_t</>.
</para>
@ -397,7 +397,7 @@ UPDATE t1 SET x = 2, y = md5sum(y) WHERE z = 100;
user tries to execute a function as a part of query, or using fast-path
invocation. If this function is a trusted procedure, it also checks
<literal>db_procedure:{entrypoint}</> permission to check whether it
can perform as entrypoint of trusted procedure.
can perform as entry point of trusted procedure.
</para>

<para>

@ -148,7 +148,7 @@
there is little it can do to make sure the data has arrived at a truly
non-volatile storage area. Rather, it is the
administrator's responsibility to make certain that all storage components
ensure integrity for both data and filesystem metadata.
ensure integrity for both data and file-system metadata.
Avoid disk controllers that have non-battery-backed write caches.
At the drive level, disable write-back caching if the
drive cannot guarantee the data will be written before shutdown.
@ -200,8 +200,8 @@
</listitem>
<listitem>
<para>
Internal data structures such as pg_clog, pg_subtrans, pg_multixact,
pg_serial, pg_notify, pg_stat, pg_snapshots are not directly
Internal data structures such as <filename>pg_clog</filename>, <filename>pg_subtrans</filename>, <filename>pg_multixact</filename>,
<filename>pg_serial</filename>, <filename>pg_notify</filename>, <filename>pg_stat</filename>, <filename>pg_snapshots</filename> are not directly
checksummed, nor are pages protected by full page writes. However, where
such data structures are persistent, WAL records are written that allow
recent changes to be accurately rebuilt at crash recovery and those
@ -210,7 +210,7 @@
</listitem>
<listitem>
<para>
Individual state files in pg_twophase are protected by CRC-32.
Individual state files in <filename>pg_twophase</filename> are protected by CRC-32.
</para>
</listitem>
<listitem>