When executing a utility statement, we must fetch everything
we need out of the PlannedStmt data structure before calling
standard_ProcessUtility. In certain cases (possibly only ROLLBACK
in extended query protocol), that data structure will get freed
during command execution. The situation is probably often harmless
in production builds, but in debug builds we intentionally overwrite
the freed memory with garbage, leading to picking up garbage values
of statement location and length, typically causing an assertion
failure later in pg_stat_statements. In non-debug builds, if
something did go wrong it would likely lead to storing garbage
for the query string.
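For illustration, a minimal sketch of the pattern in a ProcessUtility
hook (PG 14+ hook signature; the hook name is illustrative, not the
pg_stat_statements code itself):

    #include "postgres.h"
    #include "tcop/utility.h"

    static void
    my_ProcessUtility(PlannedStmt *pstmt, const char *queryString,
                      bool readOnlyTree, ProcessUtilityContext context,
                      ParamListInfo params, QueryEnvironment *queryEnv,
                      DestReceiver *dest, QueryCompletion *qc)
    {
        /*
         * Copy out everything needed from the PlannedStmt *before* running
         * the command: standard_ProcessUtility may end up freeing pstmt
         * (e.g. ROLLBACK in the extended query protocol), so touching it
         * afterwards risks reading freed memory.
         */
        int     saved_location = pstmt->stmt_location;
        int     saved_len = pstmt->stmt_len;

        standard_ProcessUtility(pstmt, queryString, readOnlyTree,
                                context, params, queryEnv, dest, qc);

        /* from here on, only the saved values may be used, never pstmt */
        (void) saved_location;
        (void) saved_len;
    }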
Report and fix by zhaoqigui (with cosmetic adjustments by me).
It's an old problem, so back-patch to all supported versions.
Discussion: https://postgr.es/m/17663-a344fd0675f92128@postgresql.org
Discussion: https://postgr.es/m/1667307420050.56657@hundsun.com
This is similar to 7d25958, and this commit takes care of all the
remaining inconsistencies between the initial value used in the C
variable associated with a GUC and its default value stored in the GUC
tables (as reported by pg_settings.boot_val).
Some of the initial values of the GUCs updated here rely on a
compile-time default. These are refactored so that the GUC table and the
C declaration use the same value. This makes everything consistent with
other places that already do so, for example backend_flush_after,
bgwriter_flush_after, port and checkpoint_flush_after.
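As a sketch of the pattern (the GUC, macro and numbers here are
hypothetical, modeled on how backend_flush_after already does it):

    /* single compile-time source of truth, defined in a header */
    #define DEFAULT_MY_FLUSH_AFTER  (256 * 1024 / BLCKSZ)

    /* C variable backing the GUC, initialized from the macro */
    int     my_flush_after = DEFAULT_MY_FLUSH_AFTER;

    /*
     * The guc_tables.c entry then uses the same macro for its boot value:
     *
     *     &my_flush_after,
     *     DEFAULT_MY_FLUSH_AFTER, 0, WRITEBACK_MAX_PENDING_FLUSHES,
     *
     * so pg_settings.boot_val always matches the variable's initial value.
     */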
Extracted from a larger patch by Peter Smith. The spots updated in the
modules are from me.
Author: Peter Smith, Michael Paquier
Reviewed-by: Nathan Bossart, Tom Lane, Justin Pryzby
Discussion: https://postgr.es/m/CAHut+PtHE0XSfjjRQ6D4v7+dqzCw=d+1a64ujra4EX8aoc_Z+w@mail.gmail.com
Per discussion, the existing name of the routine used to initialize a
SRF function in materialize mode is unpopular, so rename it. Its flags
are renamed as well, as follows:
- SRF_SINGLE_USE_EXPECTED -> MAT_SRF_USE_EXPECTED_DESC
- SRF_SINGLE_BLESS -> MAT_SRF_BLESS
The previous function and flags introduced in 9e98583 are kept around
for compatibility purposes, so that any extension code already compiled
with v15 continues to work as-is. The declarations introduced here for
compatibility will be removed from HEAD in a follow-up commit.
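A minimal caller under the new spelling might look like this (a sketch,
assuming the renamed routine is InitMaterializedSRF(), declared in
funcapi.h alongside the renamed flags):

    #include "postgres.h"
    #include "funcapi.h"

    PG_FUNCTION_INFO_V1(my_matsrf);

    Datum
    my_matsrf(PG_FUNCTION_ARGS)
    {
        ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;

        /* formerly SetSingleFuncCall(fcinfo, SRF_SINGLE_BLESS) */
        InitMaterializedSRF(fcinfo, MAT_SRF_BLESS);

        /* ... add rows to rsinfo->setResult with tuplestore_putvalues() ... */
        (void) rsinfo;
        PG_RETURN_VOID();
    }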
The new names have been suggested by Andres Freund and Melanie
Plageman.
Discussion: https://postgr.es/m/20221013194820.ciktb2sbbpw7cljm@awork3.anarazel.de
Backpatch-through: 15
Commits cf112c12 and a0dc8271 were a little too hasty in getting rid of
the pg_ prefixes where we use pread(), pwrite() and vectored variants.
We dropped support for ancient Unixes where we needed to use lseek() to
implement replacements for those, but it turns out that Windows also
changes the current position even when you pass in an offset to
ReadFile() and WriteFile() if the file handle is synchronous, despite
its documentation saying otherwise.
Switching to asynchronous file handles would fix that, but would have other
complications. For now let's just put back the pg_ prefix and add some
comments to highlight the non-standard side-effect, which we can now
describe as Windows-only.
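The calling convention, in short (a sketch; pg_pread() is declared via
port.h, pulled in by postgres.h):

    #include "postgres.h"

    /*
     * Always pass an explicit offset and never rely on the current file
     * position afterwards.  On POSIX systems pread() leaves the position
     * alone, but the Windows path moves it, hence the pg_ prefix as a
     * reminder of the non-standard side-effect.
     */
    static ssize_t
    read_block_at(int fd, char *buf, size_t len, off_t offset)
    {
        return pg_pread(fd, buf, len, offset);
    }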
Reported-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Reviewed-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Discussion: https://postgr.es/m/20220923202439.GA1156054%40nathanxps13
pread() and pwrite() are in SUSv2, and all targeted Unix systems have
them.
Previously, we defined pg_pread and pg_pwrite to emulate these functions
with lseek() on old Unixen. The names with a pg_ prefix were a reminder
of a portability hazard: they might change the current file position.
That hazard is gone, so we can drop the prefixes.
Since the remaining replacement code is Windows-only, move it into
src/port/win32p{read,write}.c, and move the declarations into
src/include/port/win32_port.h.
No need for vestigial HAVE_PREAD, HAVE_PWRITE macros as they were only
used for declarations in port.h which have now moved into win32_port.h.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Greg Stark <stark@mit.edu>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CA+hUKGJ3LHeP9w5Fgzdr4G8AnEtJ=z=p6hGDEm4qYGEUX5B6fQ@mail.gmail.com
We've heard a couple of reports of people having trouble with
multi-gigabyte-sized query-texts files. It occurred to me that on
32-bit platforms, there could be an issue with integer overflow
of calculations associated with the total query text size.
Address that with several changes:
1. Limit pg_stat_statements.max to INT_MAX / 2 not INT_MAX.
The hashtable code will bound it to that anyway unless "long"
is 64 bits. We still need overflow guards on its use, but
this helps.
2. Add a check to prevent extending the query-texts file to
more than MaxAllocHugeSize. If it got that big, qtext_load_file
would certainly fail, so there's not much point in allowing it.
Without this, we'd need to consider whether extent, query_offset,
and related variables shouldn't be off_t not size_t.
3. Adjust the comparisons in need_gc_qtexts() to be done in 64-bit
arithmetic on all platforms. It appears possible that under duress
those multiplications could overflow 32 bits, yielding a false
conclusion that we need to garbage-collect the texts file, which
could lead to repeatedly garbage-collecting after every hash table
insertion.
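A sketch of the kind of widening done for point 3 (names and threshold
are illustrative, not the exact pg_stat_statements code):

    #include "postgres.h"

    /*
     * Decide whether the external query-texts file needs garbage collection.
     * Doing the estimate in 64-bit arithmetic keeps mean_query_len * entries
     * from wrapping around on 32-bit platforms, which would otherwise force
     * a garbage-collection cycle after every hash-table insertion.
     */
    static bool
    needs_gc(uint64 extent, uint64 mean_query_len, uint64 entries)
    {
        return extent > mean_query_len * entries * 2;
    }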
Per report from Bruno da Silva. I'm not convinced that these
issues fully explain his problem; there may be some other bug that's
contributing to the query-texts file becoming so large in the first
place. But it did get that big, so #2 is a reasonable defense,
and #3 could explain the reported performance difficulties.
(See also commit 8bbe4cbd9, which addressed some related bugs.
The second Discussion: link is the thread that led up to that.)
This issue is old, and is primarily a problem for old platforms,
so back-patch.
Discussion: https://postgr.es/m/CAB+Nuk93fL1Q9eLOCotvLP07g7RAv4vbdrkm0cVQohDVMpAb9A@mail.gmail.com
Discussion: https://postgr.es/m/5601D354.5000703@BlueTreble.com
This replaces all MemSet() calls with struct initialization where that
is easily and obviously possible. (For example, some cases have to
worry about padding bits, so I left those.)
(The same could be done with appropriate memset() calls, but this
patch is part of an effort to phase out MemSet(), so it doesn't touch
memset() calls.)
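The mechanical shape of the change, with a throwaway struct for
illustration:

    #include "postgres.h"

    typedef struct DemoState
    {
        int     count;
        char   *name;
        bool    active;
    } DemoState;

    void
    demo(void)
    {
        /* before: declare, then zero at run time */
        DemoState   old_style;
        MemSet(&old_style, 0, sizeof(old_style));

        /* after: the initializer zeroes every member at the declaration */
        DemoState   new_style = {0};

        (void) old_style;
        (void) new_style;
    }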
Reviewed-by: Ranier Vilela <ranier.vf@gmail.com>
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://www.postgresql.org/message-id/9847b13c-b785-f4e2-75c3-12ec77a3b05c@enterprisedb.com
Currently, preloaded libraries are expected to request additional
shared memory and LWLocks in _PG_init(). However, it is not unusual
for such requests to depend on MaxBackends, which won't be
initialized at that time. Such requests could also depend on GUCs
that other modules might change. This introduces a new hook where
modules can safely use MaxBackends and GUCs to request additional
shared memory and LWLocks.
Furthermore, this change restricts requests for shared memory and
LWLocks to this hook. Previously, libraries could make requests
until the size of the main shared memory segment was calculated.
Unlike before, we no longer silently ignore requests received at
invalid times. Instead, we now raise a FATAL error if someone tries to request
additional shared memory or LWLocks outside of the hook.
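A sketch of how a module adapts (the shmem_request_hook name comes from
this change; the extension-side names and sizes are illustrative):

    #include "postgres.h"
    #include "fmgr.h"
    #include "miscadmin.h"
    #include "storage/ipc.h"
    #include "storage/lwlock.h"
    #include "storage/shmem.h"

    PG_MODULE_MAGIC;

    static shmem_request_hook_type prev_shmem_request_hook = NULL;

    static void my_shmem_request(void);

    void
    _PG_init(void)
    {
        /* only install the hook here; no sizing requests in _PG_init() now */
        prev_shmem_request_hook = shmem_request_hook;
        shmem_request_hook = my_shmem_request;
    }

    static void
    my_shmem_request(void)
    {
        if (prev_shmem_request_hook)
            prev_shmem_request_hook();

        /* MaxBackends and GUCs are usable by the time this hook runs */
        RequestAddinShmemSpace(mul_size(MaxBackends, sizeof(int)));
        RequestNamedLWLockTranche("my_extension", 1);
    }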
Nathan Bossart and Julien Rouhaud
Discussion: https://postgr.es/m/20220412210112.GA2065815%40nathanxps13
Discussion: https://postgr.es/m/Yn2jE/lmDhKtkUdr@paquier.xyz
The code for unloading a library has been commented-out for over 12
years, ever since commit 602a9ef5a7, and we're
no closer to supporting it now than we were back then.
Nathan Bossart, reviewed by Michael Paquier and by me.
Discussion: http://postgr.es/m/Ynsc9bRL1caUSBSE@paquier.xyz
This commit adds two new columns to pg_stat_statements, called
temp_blk_read_time and temp_blk_write_time. Those columns respectively
show the time spent to read and write temporary file blocks on disk,
whose tracking has been added in efb0ef9. This information is
available when track_io_timing is enabled, like blk_read_time and
blk_write_time.
pg_stat_statements is updated to version 1.10 as an effect of the
newly-added columns. Tests for the upgrade path 1.9->1.10 are added.
PGSS_FILE_HEADER is bumped for the new stats file format.
Author: Masahiko Sawada
Reviewed-by: Georgios Kokolatos, Melanie Plageman, Julien Rouhaud,
Ranier Vilela
Discussion: https://postgr.es/m/CAD21AoAJgotTeP83p6HiAGDhs_9Fw9pZ2J=_tYTsiO5Ob-V5GQ@mail.gmail.com
With stats now being stored in shared memory, the GUC isn't needed
anymore. However, the pg_stat_tmp directory and PG_STAT_TMP_DIR define are
kept, as pg_stat_statements (and some out-of-core extensions) store data in
it.
Docs will be updated in a subsequent commit, together with the other pending
docs updates due to shared memory stats.
Author: Andres Freund <andres@anarazel.de>
Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-By: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://postgr.es/m/20220330233550.eiwsbearu6xhuqwe@alap3.anarazel.de
Discussion: https://postgr.es/m/20220303021600.hs34ghqcw6zcokdh@alap3.anarazel.de
Generally, if a role is granted membership in another role with NOINHERIT,
it must use SET ROLE to access the privileges of that role; however,
with predefined roles, membership and privilege were conflated. Fix that
by replacing is_member_of_role with has_privs_of_role for predefined
roles. The patch does not remove is_member_of_role from acl.h, but it does
add a warning not to use that function for privilege checking. Not
backpatched, based on hackers list discussion.
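For illustration, the shape of the change at a typical call site (a
sketch; ROLE_PG_READ_ALL_STATS stands in for any predefined role):

    #include "postgres.h"
    #include "catalog/pg_authid.h"
    #include "miscadmin.h"
    #include "utils/acl.h"

    static bool
    can_read_all_stats(void)
    {
        /*
         * is_member_of_role() is true even for a NOINHERIT grant, handing
         * out the privilege without SET ROLE; has_privs_of_role() honors
         * inheritance, which is what privilege checks should use.
         */
        return has_privs_of_role(GetUserId(), ROLE_PG_READ_ALL_STATS);
    }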
Author: Joshua Brindle
Reviewed-by: Stephen Frost, Nathan Bossart, Joe Conway
Discussion: https://postgr.es/m/flat/CAGB+Vh4Zv_TvKt2tv3QNS6tUM_F_9icmuj0zjywwcgVi4PAhFA@mail.gmail.com
9e98583 introduced a helper to centralize building the state needed by
set-returning functions in materialize mode (tuplestore, tuple
descriptors, etc.), checking for any errors. This
commit updates all places of contrib/ that can be switched to use
SetSingleFuncCall() as a drop-in replacement, resulting in the removal
of a lot of boilerplate code in all the modules updated by this commit.
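For reference, this is the kind of boilerplate removed from each
converted function; all of it collapses into a single
SetSingleFuncCall(fcinfo, 0) call (sketch of a function body, error
paths abbreviated):

    ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;
    TupleDesc   tupdesc;
    Tuplestorestate *tupstore;
    MemoryContext per_query_ctx;
    MemoryContext oldcontext;

    if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
        elog(ERROR, "return type must be a row type");

    per_query_ctx = rsinfo->econtext->ecxt_per_query_memory;
    oldcontext = MemoryContextSwitchTo(per_query_ctx);

    tupstore = tuplestore_begin_heap(true, false, work_mem);
    rsinfo->returnMode = SFRM_Materialize;
    rsinfo->setResult = tupstore;
    rsinfo->setDesc = tupdesc;

    MemoryContextSwitchTo(oldcontext);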
Per analysis, some places remain as they are:
- pg_logdir_ls() in adminpack/ has historically used TYPEFUNC_RECORD as its
return type, and I suspect that changing it may cause issues at run-time
with some of its past versions, down to 1.0.
- dblink/ uses a wrapper function doing exactly the work of
SetSingleFuncCall(). Here the switch should be possible, but it is rather
invasive, so it does not seem worth the extra backpatch maintenance cost.
- tablefunc/, similarly, uses multiple helper functions with portions of
SetSingleFuncCall() spread across the code paths of this module.
Author: Melanie Plageman
Discussion: https://postgr.es/m/CAAKRu_bvDPJoL9mH6eYwvBpPtTGQwbDzfJbCM-OjkSZDu5yTPg@mail.gmail.com
Commit 75d22069e tried to throw a warning for setting a custom GUC whose
prefix belongs to a previously-loaded extension, if there is no such GUC
defined by the extension. But that caused unstable behavior with
parallel workers, because workers don't necessarily load extensions and
GUCs in the same order their leader did. To make that work safely, we
have to completely disallow the case. We now actually remove any such
GUCs at the time of initial extension load, and then throw an error not
just a warning if you try to add one later. While this might create a
compatibility issue for a few people, the improvement in error-detection
capability seems worth it; it's hard to believe that there's any good
use-case for choosing such GUC names.
This also un-reverts 5609cc01c (Rename EmitWarningsOnPlaceholders() to
MarkGUCPrefixReserved()), since that function's old name is now even
more of a misnomer.
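A sketch of the expected extension-side usage (extension and GUC names
are illustrative):

    #include "postgres.h"
    #include "fmgr.h"
    #include "utils/guc.h"

    PG_MODULE_MAGIC;

    static int  my_extension_limit = 10;

    void
    _PG_init(void)
    {
        DefineCustomIntVariable("my_extension.limit",
                                "Illustrative custom GUC.",
                                NULL,
                                &my_extension_limit,
                                10, 1, 1000,
                                PGC_SUSET,
                                0,
                                NULL, NULL, NULL);

        /*
         * Reserve the prefix: placeholder "my_extension.*" settings that do
         * not match a defined GUC are removed now, and later attempts to
         * set one raise an error instead of a warning.
         */
        MarkGUCPrefixReserved("my_extension");
    }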
Florin Irion and Tom Lane
Discussion: https://postgr.es/m/1902182.1640711215@sss.pgh.pa.us
This routine has been a no-op since dd04e95 from 2003, with a macro kept
around for compatibility purposes. This has led to the same code
patterns being copy-pasted around for no effect, sometimes in confusing
ways like in pg_logical_slot_get_changes_guts() from logical.c where the
code was actually incorrect.
This issue has been discussed on two different threads recently, so
rather than living with this legacy, remove any uses of this routine in
the C code to simplify things. The compatibility macro is kept to avoid
breaking any out-of-core modules that depend on it.
Reported-by: Tatsuhito Kasahara, Justin Pryzby
Author: Tatsuhito Kasahara
Discussion: https://postgr.es/m/20211217200419.GQ17618@telsasoft.com
Discussion: https://postgr.es/m/CAP0=ZVJeeYfAeRfmzqAF2Lumdiv4S4FewyBnZd4DPTrsSQKJKw@mail.gmail.com
Since 4f0b096, pgss_store() does nothing if compute_query_id is disabled
or if no other module computed a query identifier, but the top comment
of this function did not reflect that. This behavior is already
documented in its own code path, and this just removes the inconsistent
comment.
Author: Kyotaro Horiguchi
Reviewed-by: Julien Rouhaud
Discussion: https://postgr.es/m/20211122.153823.1325120762360533122.horikyota.ntt@gmail.com
Backpatch-through: 14
This seems like a clearer name for what it does now.
Provide a compatibility macro so that extensions don't have to convert
to the new name right away.
Discussion: https://postgr.es/m/116024.1640111629@sss.pgh.pa.us
Windows fails on a request to read() more than INT_MAX bytes,
and perhaps other platforms could have similar issues. Let's
adjust this code to read at most 1GB per call.
(One would not have thought the file could get that big, but now
we have a field report of trouble, so it can. We likely ought to
add some mechanism to limit the size of the query-texts file
separately from the size of the hash table. That is not this
patch, though.)
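A sketch of the chunked-read pattern (simplified; error and EOF handling
abbreviated):

    #include "postgres.h"
    #include <unistd.h>

    /*
     * Read nbytes from fd into buf, issuing read() calls of at most 1GB
     * each so that platforms (notably Windows) that reject reads larger
     * than INT_MAX still work.
     */
    static bool
    read_in_chunks(int fd, char *buf, size_t nbytes)
    {
        while (nbytes > 0)
        {
            size_t  toread = Min(nbytes, (size_t) (1024 * 1024 * 1024));
            ssize_t nread = read(fd, buf, toread);

            if (nread <= 0)
                return false;   /* error or unexpected EOF */
            buf += nread;
            nbytes -= (size_t) nread;
        }
        return true;
    }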
Per bug #17254 from Yusuke Egashira. It's been like this for
awhile, so back-patch to all supported branches.
Discussion: https://postgr.es/m/17254-a926c89dc03375c2@postgresql.org
In the "simple Query" code path, it's fine for parse analysis or
execution of a utility statement to scribble on the statement's node
tree, since that'll just be thrown away afterwards. However it's
not fine if the node tree is in the plan cache, as then it'd be
corrupted for subsequent executions. Up to now we've dealt with
that by having individual utility-statement functions apply
copyObject() if they were going to modify the tree. But that's
prone to errors of omission. Bug #17053 from Charles Samborski
shows that CREATE/ALTER DOMAIN didn't get this memo, and can
crash if executed repeatedly from plan cache.
In the back branches, we'll just apply a narrow band-aid for that,
but in HEAD it seems prudent to have a more principled fix that
will close off the possibility of other similar bugs in future.
Hence, let's hoist the responsibility for doing copyObject up into
ProcessUtility from its children, thus ensuring that it happens for
all utility statement types.
Also, modify ProcessUtility's API so that its callers can tell it
whether a copy step is necessary. It turns out that in all cases,
the immediate caller knows whether the node tree is transient, so
this doesn't involve a huge amount of code thrashing. In this way,
while we lose a little bit in the execute-from-cache code path due
to sometimes copying node trees that wouldn't be mutated anyway,
we gain something in the simple-Query code path by not copying
throwaway node trees. Statements that are complex enough to be
expensive to copy are almost certainly ones that would have to be
copied anyway, so the loss in the cache code path shouldn't be much.
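A simplified sketch of the new contract (illustrative function name, not
the exact code): when readOnlyTree is true the tree belongs to the plan
cache (or some other owner that must not see it modified), so the copy
happens once, centrally, instead of in each utility-statement function.

    #include "postgres.h"
    #include "nodes/nodes.h"
    #include "tcop/utility.h"

    static void
    run_utility(PlannedStmt *pstmt, const char *queryString,
                bool readOnlyTree, ProcessUtilityContext context,
                ParamListInfo params, QueryEnvironment *queryEnv,
                DestReceiver *dest, QueryCompletion *qc)
    {
        if (readOnlyTree)
            pstmt = copyObject(pstmt);

        /* ... dispatch on nodeTag(pstmt->utilityStmt) as before ... */
        (void) pstmt;
    }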
(Note that this whole problem applies only to utility statements.
Optimizable statements don't have the issue because we long ago made
the executor treat Plan trees as read-only. Perhaps someday we will
make utility statement execution act likewise, but I'm not holding
my breath.)
Discussion: https://postgr.es/m/931771.1623893989@sss.pgh.pa.us
Discussion: https://postgr.es/m/17053-3ca3f501bbc212b4@postgresql.org
Allowing only on/off meant that either all existing configuration
guides would become obsolete if we disabled it by default, or that we
would have to accept a performance loss in the default config if we
enabled it by default. By allowing 'auto' as a middle ground, the
performance cost is only paid by those who enable pg_stat_statements and
similar modules.
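For illustration, the usual shape of an enum GUC offering an 'auto'
setting (a sketch with illustrative names, not the exact guc.c code);
with 'auto', modules such as pg_stat_statements can turn the effective
value on when they are loaded:

    #include "postgres.h"
    #include "utils/guc.h"

    typedef enum
    {
        QUERYID_OFF,
        QUERYID_ON,
        QUERYID_AUTO
    } QueryIdChoice;

    static const struct config_enum_entry compute_query_id_options[] = {
        {"off", QUERYID_OFF, false},
        {"on", QUERYID_ON, false},
        {"auto", QUERYID_AUTO, false},
        {NULL, 0, false}
    };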
I only edited the release notes to comment-out a paragraph that is now
factually wrong; further edits are probably needed to describe the
related change in more detail.
Author: Julien Rouhaud <rjuju123@gmail.com>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/20210513002623.eugftm4nk2lvvks3@nol
Also "make reformat-dat-files".
The only change worthy of note is that pgindent messed up the formatting
of launcher.c's struct LogicalRepWorkerId, which led me to notice that
that struct wasn't used at all anymore, so I just took it out.
EXPLAIN ANALYZE for an async-capable ForeignScan node associated with
postgres_fdw was instrumented only via the ExecProcNode() calls made
from the node's callbacks, causing the following problems:
1) If the remote table to scan is empty, the node is incorrectly
considered as "never executed" by the command even if the node is
executed, as ExecProcNode() isn't called from the node's callbacks at
all in that case.
2) The command fails to collect timings for things other than
ExecProcNode() done in the node, such as creating a cursor for the
node's remote query.
To fix these problems, add instrumentation for async-capable nodes, and
modify postgres_fdw accordingly.
My oversight in commit 27e1f1456.
While at it, update a comment for the AsyncRequest struct in execnodes.h
and the documentation for the ForeignAsyncRequest API in fdwhandler.sgml
to match the code in ExecAsyncAppendResponse() in nodeAppend.c, and fix
typos in comments in nodeAppend.c.
Per report from Andrey Lepikhov, though I didn't use his patch.
Reviewed-by: Andrey Lepikhov
Discussion: https://postgr.es/m/2eb662bb-105d-fc20-7412-2f027cc3ca72%40postgrespro.ru
Ignore parallel workers in pg_stat_statements
Oversight in 4f0b0966c8 which exposed queryid in parallel workers.
Counters are aggregated by the main backend process so parallel workers
would report duplicated activity, and could also report activity for the
wrong entry as they are only aware of the top level queryid.
Fix thinko in pg_stat_get_activity when retrieving the queryid.
Remove unnecessary call to pgstat_report_queryid().
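A sketch of the guard (names illustrative): entry points that accumulate
pg_stat_statements counters bail out in parallel workers, since the
leader already aggregates the whole query's counters and workers only
know the top-level queryid.

    #include "postgres.h"
    #include "access/parallel.h"    /* IsParallelWorker() */

    static bool
    should_track(bool enabled)
    {
        return enabled && !IsParallelWorker();
    }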
Reported-by: Amit Kapila, Andres Freund, Thomas Munro
Discussion: https://postgr.es/m/20210408051735.lfbdzun5zdlax5gd@alap3.anarazel.dep634GTSOqnDW86Owrn6qDAVosC5dJjXjp7BMfc5Gz1Q@mail.gmail.com
Author: Julien Rouhaud
Changing pg_stat_statements.track between 'all' and 'top' would control
if pg_stat_statements tracked just top level statements or also
statements inside functions, but when tracking all it would not
differentiate between the two. Being able to differentiate this is
useful both to track where the actual query is coming from, and to see
if there are differences in executions between the two.
To do this, add a boolean to the hash key indicating if the statement
was top level or not.
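The resulting key looks roughly like this (simplified sketch of the
pg_stat_statements hash key): the same query text now gets one entry for
top-level execution and another for execution inside a function, instead
of being folded together under track = 'all'.

    #include "postgres.h"

    typedef struct pgssHashKey
    {
        Oid         userid;     /* user OID */
        Oid         dbid;       /* database OID */
        uint64      queryid;    /* query identifier */
        bool        toplevel;   /* query executed at top level? */
    } pgssHashKey;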
Experience from the pg_stat_kcache module shows that in at least some
"reasonable worloads" only <5% of the queries show up both top level and
nested. Based on this, admittedly small, dataset, this patch does not
try to de-duplicate those query *texts*, and will just store one copy
for the top level and one for the nested.
Author: Julien Rouhaud
Reviewed-By: Magnus Hagander, Masahiro Ikeda
Discussion: https://postgr.es/m/20201202040516.GA43757@nol
Use the in-core query id computation for pg_stat_activity,
log_line_prefix, and EXPLAIN VERBOSE.
Similar to other fields in pg_stat_activity, only the queryid of top
level statements is exposed, and if the backend's status isn't active
then the queryid of the last executed statement is displayed.
Add a %Q placeholder to include the queryid in log_line_prefix, which
will also only expose top level statements.
For EXPLAIN VERBOSE, if a query identifier has been computed, either by
enabling compute_query_id or using a third-party module, display it.
Bump catalog version.
Discussion: https://postgr.es/m/20210407125726.tkvjdbw76hxnpwfi@nol
Author: Julien Rouhaud
Reviewed-by: Alvaro Herrera, Nitin Jadhav, Zhihong Yu
Add compute_query_id GUC to control whether a query identifier should be
computed by the core (off by default). It's therefore now possible to
disable core queryid computation and use pg_stat_statements with a
different algorithm to compute the query identifier by using a
third-party module.
To ensure that a single source of query identifiers can be used and is
well defined, modules that calculate a query identifier should throw an
error if compute_query_id is set to compute a query id and a query
identifier was already calculated.
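A sketch of that rule in a third-party query-id module (hook signature
simplified; error wording illustrative):

    #include "postgres.h"
    #include "nodes/parsenodes.h"
    #include "parser/analyze.h"

    static void
    my_post_parse_analyze(ParseState *pstate, Query *query)
    {
        if (query->queryId != UINT64CONST(0))
            ereport(ERROR,
                    (errmsg("a query identifier has already been computed")));

        /* ... otherwise compute and store our own query->queryId here ... */
    }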
Discussion: https://postgr.es/m/20210407125726.tkvjdbw76hxnpwfi@nol
Author: Julien Rouhaud
Reviewed-by: Alvaro Herrera, Nitin Jadhav, Zhihong Yu
Other code paths are already protected against this case, and _PG_init()
warns about that in pg_stat_statements.c. While at it, I have checked
the other extensions in the tree but did not notice any holes.
Oversight in 9fbc3f3.
Author: Jaime Casanova
Reviewed-by: Julien Rouhaud
Discussion: https://postgr.es/m/CAJKUy5gF4=_=qhJ1VX_tSGFfjKHb9BvzhRYWSApJD=Bfwp2SBw@mail.gmail.com
This commit adds "stats_reset" column into the pg_stat_statements_info
view. This column indicates the time at which all statistics in the
pg_stat_statements view were last reset.
Per discussion, this commit also changes pg_stat_statements_info code
so that "dealloc" column is reset at the same time as "stats_reset" is reset,
i.e., whenever all pg_stat_statements entries are removed, for the sake
of consistency. Previously "dealloc" was reset only when
pg_stat_statements_reset(0, 0, 0) is called and was not reset when
pg_stat_statements_reset() with non-zero value argument discards all
entries. This was confusing.
Author: Naoki Nakamichi, Yuki Seino
Reviewed-by: Yuki Seino, Kyotaro Horiguchi, Li Japin, Fujii Masao
Discussion: https://postgr.es/m/c102cf3180d0ee73c1c5a0f7f8558322@oss.nttdata.com
Invent a new flag bit HASH_STRINGS to specify C-string hashing, which
was formerly the default; and add assertions insisting that exactly
one of the bits HASH_STRINGS, HASH_BLOBS, and HASH_FUNCTION be set.
This is in hopes of preventing recurrences of the type of oversight
fixed in commit a1b8aa1e4 (i.e., mistakenly omitting HASH_BLOBS).
Also, when HASH_STRINGS is specified, insist that the keysize be
more than 8 bytes. This is a heuristic, but it should catch
accidental use of HASH_STRINGS for integer or pointer keys.
(Nearly all existing use-cases set the keysize to NAMEDATALEN or
more, so there's little reason to think this restriction should
be problematic.)
Tweak hash_create() to insist that the HASH_ELEM flag be set, and
remove the defaults it had for keysize and entrysize. Since those
defaults were undocumented and basically useless, no callers
omitted HASH_ELEM anyway.
Also, remove memset's zeroing the HASHCTL parameter struct from
those callers that had one. This has never been really necessary,
and while it wasn't a bad coding convention it was confusing that
some callers did it and some did not. We might as well save a few
cycles by standardizing on "not".
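A sketch of the resulting calling convention (throwaway table and entry
type for illustration):

    #include "postgres.h"
    #include "utils/hsearch.h"

    typedef struct DemoEntry
    {
        Oid         key;        /* hash key must be first */
        int         count;
    } DemoEntry;

    static HTAB *
    make_demo_table(void)
    {
        HASHCTL     ctl;        /* no MemSet/memset needed anymore */

        ctl.keysize = sizeof(Oid);
        ctl.entrysize = sizeof(DemoEntry);

        /*
         * HASH_ELEM is now required, and exactly one of HASH_BLOBS (binary
         * key), HASH_STRINGS (C-string key), or HASH_FUNCTION must be given.
         */
        return hash_create("demo table", 128, &ctl,
                           HASH_ELEM | HASH_BLOBS);
    }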
Also improve the documentation for hash_create().
In passing, improve reinit.c's usage of a hash table by storing
the key as a binary Oid rather than a string; and, since that's
a temporary hash table, allocate it in CurrentMemoryContext for
neatness.
Discussion: https://postgr.es/m/590625.1607878171@sss.pgh.pa.us
If more distinct statements than pg_stat_statements.max are observed,
pg_stat_statements entries about the least-executed statements are
deallocated. This commit enables us to track the total number of times
those entries were deallocated. That number can be viewed in the
pg_stat_statements_info view that this commit adds. It's useful when
tuning the pg_stat_statements.max parameter. If the count is high, i.e.,
the entries are deallocated very frequently, that might cause a
performance regression, and pg_stat_statements.max can be increased to
avoid those frequent deallocations.
The pg_stat_statements_info view is intended to display the statistics
of pg_stat_statements module itself. Currently it has only one column
"dealloc" indicating the number of times entries were deallocated.
But an upcoming patch will add other columns (for example, the time
at which pg_stat_statements statistics were last reset) into the view.
Author: Katsuragi Yuta, Yuki Seino
Reviewed-by: Fujii Masao
Discussion: https://postgr.es/m/0d9f1107772cf5c3f954e985464c7298@oss.nttdata.com
Commit 6023b7ea71 allowed pg_stat_statements to track the number
of rows retrieved or affected by some utility commands including
CREATE MATERIALIZED VIEW. However it did not track the rowcount
of REFRESH MATERIALIZED VIEW. This commit allows pg_stat_statements
to track that.
To track that, this commit changes the query completion for
REFRESH MATERIALIZED VIEW so that it saves the rowcount. But note that
the rowcount is still not displayed in the command completion tag output.
That is, the display_rowcount flag of CMDTAG_REFRESH_MATERIALIZED_VIEW
command tag is left false in cmdtaglist.h. Otherwise, the change of
completion tag output might break applications using it.
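A simplified sketch of the query-completion side of this (illustrative
helper name): the REFRESH MATERIALIZED VIEW path records how many rows
it processed in the QueryCompletion, even though the command tag still
doesn't display a rowcount, so pg_stat_statements can pick the number up.

    #include "postgres.h"
    #include "tcop/cmdtag.h"

    static void
    finish_refresh(QueryCompletion *qc, uint64 processed)
    {
        if (qc)
            SetQueryCompletion(qc, CMDTAG_REFRESH_MATERIALIZED_VIEW,
                               processed);
    }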
Author: Katsuragi Yuta, Seino Yuki
Reviewed-by: Fujii Masao
Discussion: https://postgr.es/m/71f6bc72f8bbaa06e701f8bd2562c347@oss.nttdata.com
Discussion: https://postgr.es/m/aadbfba9-e4bb-9531-6b3a-d13c31c8f4fe@oss.nttdata.com
This commit makes pg_stat_statements track the total number
of rows retrieved or affected by CREATE TABLE AS, SELECT INTO,
CREATE MATERIALIZED VIEW and FETCH commands.
Suggested-by: Pascal Legrand
Author: Fujii Masao
Reviewed-by: Asif Rehman
Discussion: https://postgr.es/m/1584293755198-0.post@n3.nabble.com
Since v13, pg_stat_statements has been able to track the planning time
of statements when the track_planning option is enabled, and its default
was on. But this feature could cause severe spinlock contention in
pg_stat_statements. As a result, Robins Tharakan reported that v13 beta1
showed a ~45% performance drop at high DB connection counts (when
compared with v12.3) during a fully-cached SELECT-only test using
pgbench.
To avoid this performance regression with the default setting, this
commit changes the default of pg_stat_statements.track_planning to off.
Back-patch to v13 where pg_stat_statements.track_planning was introduced.
Reported-by: Robins Tharakan
Author: Fujii Masao
Reviewed-by: Julien Rouhaud
Discussion: https://postgr.es/m/2895b53b033c47ccb22972b589050dd9@EX13D05UWC001.ant.amazon.com
Includes some manual cleanup of places that pgindent messed up,
most of which weren't per project style anyway.
Notably, it seems some people didn't absorb the style rules of
commit c9d297751, because there were a bunch of new occurrences
of function calls with a newline just after the left paren, all
with faulty expectations about how the rest of the call would get
indented.
In commit 33e05f89c5, we added the option to display WAL usage
statistics in Explain and auto_explain. The display format used two
spaces between fields, which is inconsistent with Buffer usage
statistics, which use one space between fields. Change the format to
make WAL usage statistics consistent with Buffer usage statistics.
This commit also changed the usage of "full page writes" to
"full page images" for WAL usage statistics to make it consistent with
other parts of code and docs.
Author: Julien Rouhaud, Amit Kapila
Reviewed-by: Justin Pryzby, Kyotaro Horiguchi and Amit Kapila
Discussion: https://postgr.es/m/CAB-hujrP8ZfUkvL5OYETipQwA=e3n7oqHFU=4ZLxWS_Cza3kQQ@mail.gmail.com