This adds a new "Memory:" line under the "Planning:" group (which
currently only has "Buffers:") when the MEMORY option is specified.
In order to make the reporting reasonably accurate, we create a separate
memory context for planner activities, to be used only when this option
is given. The total amount of memory allocated by that context is
reported as "allocated"; we subtract memory in the context's freelists
from that and report that result as "used". We use
MemoryContextStatsInternal() to obtain the quantities.
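As a hedged illustration (table and numbers made up), the text-format
output now looks roughly like this:

    EXPLAIN (MEMORY) SELECT * FROM pg_class;
    ...
     Planning:
       Memory: used=23432 bytes  allocated=65536 bytes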
The code structure to show buffer usage during planning was not in
amazing shape, so I (Álvaro) modified the patch a bit to clean that up
in passing.
Author: Ashutosh Bapat
Reviewed-by: David Rowley, Andrey Lepikhov, Jian He, Andy Fan
Discussion: https://www.postgresql.org/message-id/CAExHW5sZA=5LJ_ZPpRO-w09ck8z9p7eaYAqq3Ks9GDfhrxeWBw@mail.gmail.com
There are a lot of Perl scripts in the tree, mostly code generation
and TAP tests. Occasionally, these scripts produce warnings. These
are probably always mistakes on the developer side (true positives).
Typical examples are warnings from genbki.pl or related when you make
a mess in the catalog files during development, or warnings from tests
when they massage a config file that looks different on different
hosts, or mistakes during merges (e.g., duplicate subroutine
definitions), or just mistakes that weren't noticed because there is a
lot of output in a verbose build.
This changes all warnings into fatal errors by replacing

    use warnings;

with

    use warnings FATAL => 'all';

in all Perl files.
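For example (a minimal sketch, variable hypothetical), a script that
previously emitted a warning and kept going now dies on the spot:

    use strict;
    use warnings FATAL => 'all';

    my $val;
    print "got: $val\n";    # "Use of uninitialized value" is now fatal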
Discussion: https://www.postgresql.org/message-id/flat/06f899fd-1826-05ab-42d6-adeb1fd5e200%40eisentraut.org
Run pgindent, pgperltidy, and reformat-dat-files.
This set of diffs is a bit larger than typical. We've updated to
pg_bsd_indent 2.1.2, which properly indents variable declarations that
have multi-line initialization expressions (the continuation lines are
now indented one tab stop). We've also updated to perltidy version
20230309 and changed some of its settings, which reduces its desire to
add whitespace to lines to make assignments etc. line up. Going
forward, that should make for fewer random-seeming changes to existing
code.
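As a rough sketch of the pg_bsd_indent change (hypothetical declaration;
the continuation lines of the initializer now get one extra tab stop):

    int         total_size = header_size +
        payload_size +
        checksum_size;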
Discussion: https://postgr.es/m/20230428092545.qfb3y5wcu4cm75ur@alvherre.pgsql
When auto_explain.log_verbose is on, auto_explain should print in the
logs plans equivalent to those produced by EXPLAIN (VERBOSE). However,
when compute_query_id is on, query identifiers were not showing up, as
they were only handled by EXPLAIN (VERBOSE). This brings auto_explain on
par with EXPLAIN in that regard. Note that, like EXPLAIN, auto_explain
does not show the query identifier when compute_query_id=regress.
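A hedged sketch of the behavior (session-level setup for brevity; the
identifier value is made up):

    LOAD 'auto_explain';
    SET compute_query_id = on;
    SET auto_explain.log_min_duration = 0;
    SET auto_explain.log_verbose = on;
    SELECT 1;
    -- the logged plan now includes a line such as:
    --   Query Identifier: 2854246627340282755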
The change moves the choice of printing the query identifier into
ExplainPrintPlan() rather than ExplainOnePlan(), to avoid duplicating
the logic dealing with the query ID. auto_explain is the only in-core
caller of ExplainPrintPlan().
While looking at the area, I have noticed that more consolidation
between EXPLAIN and auto_explain would be in order for the logging of
the plan duration and the buffer usage. This refactoring is left as a
future change.
Author: Atsushi Torikoshi
Reviewed-by: Justin Pryzby, Julien Rouhaud
Discussion: https://postgr.es/m/1ea21936981f161bccfce05765c03bee@oss.nttdata.com
The generated resource files aren't exactly the same as the ones the old
build systems generate. Previously, "InternalName" and "OriginalFileName"
were mostly wrong or not set (despite being required), but that was hard to
fix in at least the make build. Additionally, the meson build falls back to
an "auto-generated" description when none is set, and doesn't set it in a
few cases - it's unlikely that anybody looks at these descriptions in
detail.
Author: Andres Freund <andres@anarazel.de>
Author: Nazir Bilal Yavuz <byavuz81@gmail.com>
Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Autoconf is showing its age; fewer and fewer contributors know how to wrangle
it. Recursive make has a lot of hard-to-resolve dependency issues and slow
incremental rebuilds. Our home-grown MSVC build system is hard to maintain for
developers not using Windows and runs tests serially. While these and other
issues could individually be addressed with incremental improvements, together
they seem best addressed by moving to a more modern build system.
After evaluating different build system choices, we chose to use meson, to a
good degree based on the adoption by other open source projects.
We decided that it's more realistic to commit a relatively early version of
the new build system and mature it in tree.
This commit adds an initial version of a meson-based build system. It supports
building postgres on at least AIX, FreeBSD, Linux, macOS, NetBSD, OpenBSD,
Solaris and Windows (however, only gcc is supported on AIX and Solaris). For
Windows/MSVC, postgres can now be built with ninja (faster, particularly for
incremental builds) and msbuild (supporting the Visual Studio GUI, but with
slower builds).
Several aspects (e.g. Windows rc file generation, PGXS compatibility, LLVM
bitcode generation, documentation adjustments) are done in subsequent commits
requiring further review. Other aspects (e.g. not installing test-only
extensions) are not yet addressed.
When building on Windows with msbuild, builds are slower when using a Visual
Studio version older than 2019, because those versions do not support
MultiToolTask, which meson requires for intra-target parallelism.
The plan is to remove the MSVC-specific build system in src/tools/msvc soon
after reaching feature parity. However, we're not planning to remove the
autoconf/make build system in the near future. Likely we're going to keep
around at least the parts required for PGXS to keep working, until all
supported versions build with meson.
Some initial help for postgres developers is at
https://wiki.postgresql.org/wiki/Meson
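For reference, a typical workflow with the new build system might look
like this (a sketch, not authoritative; see the wiki page above):

    meson setup build --prefix=/usr/local/pgsql
    ninja -C build
    meson test -C build
    ninja -C build install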
With contributions from Thomas Munro, John Naylor, Stone Tickle and others.
Author: Andres Freund <andres@anarazel.de>
Author: Nazir Bilal Yavuz <byavuz81@gmail.com>
Author: Peter Eisentraut <peter@eisentraut.org>
Reviewed-By: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Discussion: https://postgr.es/m/20211012083721.hvixq4pnh2pixr3j@alap3.anarazel.de
In commit 7c34555f8, I overlooked the need to configure SSPI
on Windows to allow login as the non-superuser role.
Fix that by adding auth_extra/--create-role incantation
(which, oddly enough, doesn't actually create the role).
Per buildfarm.
While here, upgrade the mechanism for temporarily setting
$ENV{PGUSER}, as per recommendation from ilmari.
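The upgraded mechanism is the usual Perl "local" idiom, roughly (role
name hypothetical):

    {
        local $ENV{PGUSER} = 'regress_test_user';
        $node->safe_psql('postgres', 'SELECT current_user');
    }
    # the previous value of $ENV{PGUSER} is restored here automatically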
Discussion: https://postgr.es/m/87edy7j1zz.fsf@wibble.ilmari.org
We weren't exercising the session_preload_libraries option in any
meaningful way. auto_explain is a good testbed for doing so, since
it's one of the primary use-cases for session_preload_libraries.
Hence, adjust its TAP test to load the library via
session_preload_libraries not shared_preload_libraries. While at it,
feed test-specific settings to the backend via PGOPTIONS rather than
tediously rewriting postgresql.conf.
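The pattern is roughly this (a sketch using the standard TAP APIs;
settings illustrative):

    $node->append_conf('postgresql.conf',
        "session_preload_libraries = 'auto_explain'");
    $node->restart;

    # per-test settings ride along on each connection instead of being
    # written into postgresql.conf
    local $ENV{PGOPTIONS} = '-c auto_explain.log_min_duration=0';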
Also, since auto_explain has some PGC_SUSET parameters, we can use it
to provide a test case for the permissions-checking bug just fixed
by commit b35617de3.
Back-patch to v15 so that we have coverage for the permissions issue
in that branch too. To do that, I back-patched the refactoring
recently done by commit 550bc0a6c.
Dagfinn Ilmari Mannsåker and Tom Lane
Discussion: https://postgr.es/m/CABwTF4VEpwTHhRQ+q5MiC5ucngN-whN-PdcKeufX7eLSoAfbZA@mail.gmail.com
auto_explain.log_parameter_max_length is a new GUC in the extension,
similar to the corresponding core setting, that controls the inclusion
of query parameters in the logged explain output.
More tests are added to check the behavior of this new parameter: when
parameters are logged in full (the default of -1), when disabled (value
of 0), and when partially truncated (any other value).
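A hedged usage sketch (values illustrative):

    LOAD 'auto_explain';
    SET auto_explain.log_min_duration = 0;
    SET auto_explain.log_parameter_max_length = -1;  -- full (default)
    SET auto_explain.log_parameter_max_length = 0;   -- no parameters
    SET auto_explain.log_parameter_max_length = 8;   -- truncated values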
Author: Dagfinn Ilmari Mannsåker
Discussion: https://postgr.es/m/87ee09mohb.fsf@wibble.ilmari.org
Previously, the tests were structured so that all the queries whose plans
are checked run first, followed by pattern checks using the full set of
server logs. This can be problematic when extending the tests, as it
increases the overlap between query plans: two tests can end up with
similar plan output, potentially invalidating what the tests intend to
check.
The tests are refactored so that log content matches are checked in
isolation for each query run, by grabbing the position of the server log
before running each query whose plan is generated in the logs. This
avoids issues when extending the tests, something that would become a
larger problem with a follow-up patch that adds a new GUC in
auto_explain to control the size of each parameter logged.
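The pattern is roughly the following (a sketch; query and expected
pattern are hypothetical):

    my $log_offset = -s $node->logfile;
    $node->safe_psql('postgres', 'SELECT * FROM tab');
    ok( $node->log_contains(
            qr/Query Text: SELECT \* FROM tab/, $log_offset),
        'plan of the query was logged');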
Author: Dagfinn Ilmari Mannsåker
Discussion: https://postgr.es/m/87ee09mohb.fsf@wibble.ilmari.org
The code for unloading a library has been commented-out for over 12
years, ever since commit 602a9ef5a7, and we're
no closer to supporting it now than we were back then.
Nathan Bossart, reviewed by Michael Paquier and by me.
Discussion: http://postgr.es/m/Ynsc9bRL1caUSBSE@paquier.xyz
Commit 75d22069e tried to throw a warning for setting a custom GUC whose
prefix belongs to a previously-loaded extension, if there is no such GUC
defined by the extension. But that caused unstable behavior with
parallel workers, because workers don't necessarily load extensions and
GUCs in the same order their leader did. To make that work safely, we
have to completely disallow the case. We now actually remove any such
GUCs at the time of initial extension load, and then throw an error, not
just a warning, if you try to add one later. While this might create a
compatibility issue for a few people, the improvement in error-detection
capability seems worth it; it's hard to believe that there's any good
use-case for choosing such GUC names.
This also un-reverts 5609cc01c (Rename EmitWarningsOnPlaceholders() to
MarkGUCPrefixReserved()), since that function's old name is now even
more of a misnomer.
Florin Irion and Tom Lane
Discussion: https://postgr.es/m/1902182.1640711215@sss.pgh.pa.us
Rather than doing manual bookkeeping to plan the number of tests to run
in each TAP suite, conclude each run with done_testing() summing up the
number of tests that ran. This removes the need for maintaining and
updating the plan count, at the expense of an accurate count of remaining
tests during the test suite runtime.
This patch has been discussed a number of times, often in the context of
other patches which update tests, so a larger number of discussions can
be found in the archives.
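That is, instead of declaring "use Test::More tests => NN;" up front, a
test file now ends like this:

    use Test::More;

    # ... any number of tests ...
    ok($result, 'expected result');

    done_testing();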
Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
Reviewed-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://postgr.es/m/DD399313-3D56-4666-8079-88949DAC870F@yesql.se
This seems like a clearer name for what it does now.
Provide a compatibility macro so that extensions don't have to convert
to the new name right away.
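The compatibility shim is just a macro along these lines (a sketch):

    /* for backwards compatibility with existing extensions */
    #define EmitWarningsOnPlaceholders(className) \
        MarkGUCPrefixReserved(className)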
Discussion: https://postgr.es/m/116024.1640111629@sss.pgh.pa.us
Standardize on xoroshiro128** as our basic PRNG algorithm, eliminating
a bunch of platform dependencies as well as fundamentally-obsolete PRNG
code. In addition, this API replacement will ease replacing the
algorithm again in future, should that become necessary.
xoroshiro128** is a few percent slower than the drand48 family,
but it can produce full-width 64-bit random values, not only 48-bit ones,
and it should be much more trustworthy. It's likely to be noticeably
faster than the platform's random(), depending on which platform you
are thinking about; and we can have non-global state vectors easily,
unlike with random(). It is not cryptographically strong, but neither
are the functions it replaces.
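A minimal sketch of the new API (seed value hypothetical):

    #include "common/pg_prng.h"

    pg_prng_state state;

    pg_prng_seed(&state, 42);       /* or pg_prng_strong_seed(&state) */
    uint64      r = pg_prng_uint64(&state); /* full-width 64-bit value */
    double      d = pg_prng_double(&state); /* uniform in [0, 1) */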
Fabien Coelho, reviewed by Dean Rasheed, Aleksander Alekseev, and myself
Discussion: https://postgr.es/m/alpine.DEB.2.22.394.2105241211230.165418@pseudo
The five modules in our TAP test framework all had names in the top
level namespace. This is unwise because, even though we're not
exporting them to CPAN, the names can leak, for example if they are
exported by the RPM build process. We therefore move the modules to the
PostgreSQL::Test namespace. In the process PostgresNode is renamed to
Cluster, and TestLib is renamed to Utils. PostgresVersion becomes simply
PostgreSQL::Version, to avoid possible confusion about what it's the
version of.
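After the rename, a typical TAP test preamble becomes:

    use PostgreSQL::Test::Cluster;
    use PostgreSQL::Test::Utils;
    use Test::More;

    my $node = PostgreSQL::Test::Cluster->new('primary');
    $node->init;
    $node->start;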
Discussion: https://postgr.es/m/aede93a4-7d92-ef26-398f-5094944c2504@dunslane.net
Reviewed by Erik Rijkers and Michael Paquier
There is only one constructor now for PostgresNode, with the idiomatic
name 'new'. The method is not exported by the class, and must be called
as "PostgresNode->new('name',[args])". All the TAP tests that use
PostgresNode are modified accordingly. Third party scripts will need
adjusting, which is a fairly mechanical process (I just used a sed
script).
EXPLAIN ANALYZE for an async-capable ForeignScan node associated with
postgres_fdw was done just by using the instrumentation for
ExecProcNode() called from the node's callbacks, causing the following
problems:
1) If the remote table to scan is empty, the node is incorrectly
considered as "never executed" by the command even if the node is
executed, as ExecProcNode() isn't called from the node's callbacks at
all in that case.
2) The command fails to collect timings for things other than
ExecProcNode() done in the node, such as creating a cursor for the
node's remote query.
To fix these problems, add instrumentation for async-capable nodes, and
modify postgres_fdw accordingly.
My oversight in commit 27e1f1456.
While at it, update a comment for the AsyncRequest struct in execnodes.h
and the documentation for the ForeignAsyncRequest API in fdwhandler.sgml
to match the code in ExecAsyncAppendResponse() in nodeAppend.c, and fix
typos in comments in nodeAppend.c.
Per report from Andrey Lepikhov, though I didn't use his patch.
Reviewed-by: Andrey Lepikhov
Discussion: https://postgr.es/m/2eb662bb-105d-fc20-7412-2f027cc3ca72%40postgrespro.ru
The ExecutorEnd hook is invoked in a context that could be quite
long-lived, not the executor's own per-query context as I think
we were sort of assuming. Thus, any cruft generated while producing
the EXPLAIN output could accumulate over multiple queries. This can
result in spectacular leakage if log_nested_statements is on, and
even without that I'm surprised nobody complained before.
To fix, just switch into the executor's context so that anything we
allocate will be released when standard_ExecutorEnd frees the executor
state. We might as well nuke the code's retail pfree of the explain
output string, too; that's laughably inadequate to the need.
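The fix boils down to this pattern in the ExecutorEnd hook (sketched;
details elided):

    MemoryContext oldcxt;

    oldcxt = MemoryContextSwitchTo(queryDesc->estate->es_query_cxt);
    /* ... build the ExplainState and emit the plan to the log ... */
    MemoryContextSwitchTo(oldcxt);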
Japin Li, per report from Jeff Janes. This bug is old, so
back-patch to all supported branches.
Discussion: https://postgr.es/m/CAMkU=1wCVtbeRn0s9gt12KwQ7PLXovbpM8eg25SYocKW3BT4hg@mail.gmail.com
This commit adds a new option WAL, similar to the existing option BUFFERS,
to the EXPLAIN command. This option allows including information on WAL
record generation (added by commit df3b181499) in the EXPLAIN output.
This also allows the WAL usage information to be displayed via
the auto_explain module. A new parameter auto_explain.log_wal controls
whether WAL usage statistics are printed when an execution plan is logged.
This parameter has no effect unless auto_explain.log_analyze is enabled.
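A hedged usage sketch (numbers made up; zero-valued counters are
omitted in text format):

    EXPLAIN (ANALYZE, WAL, COSTS OFF)
        INSERT INTO t SELECT generate_series(1, 1000);
    -- the output gains a line like:
    --   WAL: records=1000 bytes=59000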
Author: Julien Rouhaud
Reviewed-by: Dilip Kumar and Amit Kapila
Discussion: https://postgr.es/m/CAB-hujrP8ZfUkvL5OYETipQwA=e3n7oqHFU=4ZLxWS_Cza3kQQ@mail.gmail.com
When maintaining or merging patches, one of the most common sources
of conflicts is the lists of objects in makefiles. Especially when
the split across lines has been changed on both sides, which is
somewhat common due to attempts to stay below 80 columns, those
conflicts are unnecessarily laborious to resolve.
By splitting OBJS-style lines into one object per line and sorting them
alphabetically, conflicts should be less frequent, and easier to
resolve when they do occur.
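So, instead of wrapped multi-object lines, makefiles now read like this
(a sketch):

    OBJS = \
        brin.o \
        brin_inclusion.o \
        brin_minmax.o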
Author: Andres Freund
Discussion: https://postgr.es/m/20191029200901.vww4idgcxv74cwes@alap3.anarazel.de
This gives an alternative way of catching exceptions, for the common
case where the cleanup code is the same in the error and non-error
cases. So instead of
    PG_TRY();
    {
        ... code that might throw ereport(ERROR) ...
    }
    PG_CATCH();
    {
        cleanup();
        PG_RE_THROW();
    }
    PG_END_TRY();
    cleanup();
one can write
    PG_TRY();
    {
        ... code that might throw ereport(ERROR) ...
    }
    PG_FINALLY();
    {
        cleanup();
    }
    PG_END_TRY();
Discussion: https://www.postgresql.org/message-id/flat/95a822c3-728b-af0e-d7e5-71890507ae0c%402ndquadrant.com
A parallel worker process should not be making any decisions of its
own about whether to auto-explain. If the parent session process
passed down flags asking for instrumentation data, do that, otherwise
not. Trying to enable instrumentation anyway leads to bugs like the
"could not find key N in shm TOC" failure reported in bug #15821
from Christian Hofstaedtler.
We can implement this cheaply by piggybacking on the existing logic
for not doing anything when we've chosen not to sample a statement.
While at it, clean up some tin-eared coding related to the sampling
feature, including an off-by-one error that meant that asking for 1.0
sampling rate didn't actually result in sampling every statement.
Although the specific case reported here only manifested in >= v11,
I believe that related misbehaviors can be demonstrated in any version
that has parallel query; and the off-by-one error is certainly there
back to 9.6 where that feature was added. So back-patch to 9.6.
Discussion: https://postgr.es/m/15821-5eb422e980594075@postgresql.org
Query planning is affected by a number of configuration options, and it
may be crucial to know which of those options were set to non-default
values. With this patch you can say EXPLAIN (SETTINGS ON) to include
that information in the query plan. Only options affecting planning,
with values different from the built-in default, are printed.
This patch also adds an auto_explain.log_settings option, providing the
same capability in the auto_explain module.
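A hedged usage sketch:

    SET enable_seqscan = off;
    EXPLAIN (SETTINGS ON) SELECT * FROM t WHERE a = 1;
    -- the plan output then ends with a line like:
    --   Settings: enable_seqscan = 'off'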
Author: Tomas Vondra
Reviewed-by: Rafia Sabih, John Naylor
Discussion: https://postgr.es/m/e1791b4c-df9c-be02-edc5-7c8874944be0@2ndquadrant.com
I (Andres) was more than a bit hasty in committing 33001fd7a7
after last-minute changes, leading to a number of problems (JIT output
was only shown for JIT in parallel workers, and plain EXPLAIN without
ANALYZE didn't work). Lukas luckily found these issues quickly.
Instead of combining instrumentation in standard_ExecutorEnd(), do
so on demand in the new ExplainPrintJITSummary().
Also update a documentation example of the JIT output, changed in
52050ad8eb.
Author: Lukas Fittl, with minor changes by me
Discussion: https://postgr.es/m/CAP53PkxmgJht69pabxBXJBM+0oc6kf3KHMborLP7H2ouJ0CCtQ@mail.gmail.com
Backpatch: 11, where JIT compilation was introduced
Previously, when using parallel query, EXPLAIN (ANALYZE)'s JIT
compilation timings did not include the overhead from doing so on the
workers. Fix that.
We do so by simply aggregating the cost of doing JIT compilation on
workers and the leader together. Arguably that's not quite accurate,
because the total time spent doing so elapses in parallel - but it's
hard to do much better. For additional detail, when VERBOSE is
specified, the stats for workers are displayed separately.
Author: Amit Khandekar and Andres Freund
Discussion: https://postgr.es/m/CAJ3gD9eLrz51RK_gTkod+71iDcjpB_N8eC6vU2AW-VicsAERpQ@mail.gmail.com
Backpatch: 11-
The previous limit of INT_MAX / 1000 seems to have been cargo-culted in
from somewhere else. Or possibly the value was converted to microseconds
at some point; but in all supported releases, it's just compared to other
values, so there's no need for the restriction. This change raises the
effective limit from ~35 minutes to ~24 days, which conceivably is useful
to somebody, and anyway it's more consistent with the range of the core
log_min_duration_statement GUC.
Per complaint from Kevin Bloch. Back-patch to all supported releases.
Discussion: https://postgr.es/m/8ea82d7e-cb78-8e05-0629-73aa14d2a0ca@codingthat.com
Don't move parenthesized lines to the left, even if that means they
flow past the right margin.
By default, BSD indent lines up statement continuation lines that are
within parentheses so that they start just to the right of the preceding
left parenthesis. However, traditionally, if that resulted in the
continuation line extending to the right of the desired right margin,
then indent would push it left just far enough to not overrun the margin,
if it could do so without making the continuation line start to the left of
the current statement indent. That makes for a weird mix of indentations
unless one has been completely rigid about never violating the 80-column
limit.
This behavior has been pretty universally panned by Postgres developers.
Hence, disable it with indent's new -lpl switch, so that parenthesized
lines are always lined up with the preceding left paren.
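In other words (hypothetical identifiers; assuming an 80-column margin):

    /* with -lpl the second argument stays aligned under the open paren,
     * even if that overruns the right margin */
    result = function_with_a_rather_long_name(first_argument_expression,
                                              second_argument_expression);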
This patch is much less interesting than the first round of indent
changes, but also bulkier, so I thought it best to separate the effects.
Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
Previously, it was unsafe to execute a plan in parallel if
ExecutorRun() might be called with a non-zero row count. However,
it's quite easy to fix things up so that we can support that case,
provided that it is known that we will never call ExecutorRun() a
second time for the same QueryDesc. Add infrastructure to signal
this, and cross-checks to make sure that a caller who claims this is
true doesn't later renege.
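A hedged sketch of the resulting calling convention (as of this change,
ExecutorRun() takes the new flag as its last argument):

    /*
     * The caller promises this is the only ExecutorRun() call for this
     * QueryDesc, so fetching a single row doesn't forfeit parallelism.
     */
    ExecutorRun(queryDesc, ForwardScanDirection, 1, true);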
While that pattern never happens with queries received directly from a
client -- there's no way to know whether multiple Execute messages
will be sent unless the first one requests all the rows -- it's pretty
common for queries originating from procedural languages, which often
limit the result to a single tuple or to a user-specified number of
tuples.
This commit doesn't actually enable parallelism in any additional
cases, because currently none of the places that would be able to
benefit from this infrastructure pass CURSOR_OPT_PARALLEL_OK in the
first place, but it makes it much more palatable to pass
CURSOR_OPT_PARALLEL_OK in places where we currently don't, because it
eliminates some cases where we'd end up having to run the parallel
plan serially.
Patch by me, based on some ideas from Rafia Sabih and corrected by
Rafia Sabih based on feedback from Dilip Kumar and myself.
Discussion: http://postgr.es/m/CA+TgmobXEhvHbJtWDuPZM9bVSLiTj-kShxQJ2uM5GPDze9fRYA@mail.gmail.com
This patch widens SPI_processed, EState's es_processed field, PortalData's
portalPos field, FuncCallContext's call_cntr and max_calls fields,
ExecutorRun's count argument, PortalRunFetch's result, and the max number
of rows in a SPITupleTable to uint64, and deals with (I hope) all the
ensuing fallout. Some of these values were declared uint32 before, and
others "long".
I also removed PortalData's posOverflow field, since that logic seems
pretty useless given that portalPos is now always 64 bits.
The user-visible results are that command tags for SELECT etc. will
correctly report tuple counts larger than 4G, as will plpgsql's
GET DIAGNOSTICS ... ROW_COUNT command. Queries processing more tuples
than that are still not exactly the norm, but they're becoming more
common.
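For instance, in plpgsql (a sketch):

    DECLARE
        processed bigint;
    BEGIN
        -- ... some statement processing more than 4G rows ...
        GET DIAGNOSTICS processed = ROW_COUNT;
    END;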
Most values associated with FETCH/MOVE distances, such as PortalRun's count
argument and the count argument of most SPI functions that have one, remain
declared as "long". It's not clear whether it would be worth promoting
those to int64; but it would definitely be a large dollop of additional
API churn on top of this, and it would only help 32-bit platforms which
seem relatively less likely to see any benefit.
Andreas Scherbaum, reviewed by Christian Ullrich, additional hacking by me
New configuration parameter auto_explain.sample_ratio makes it
possible to log just a fraction of the queries meeting the configured
threshold, to reduce the amount of logging.
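A hedged configuration sketch (threshold illustrative; parameter named
as in this commit):

    # postgresql.conf
    shared_preload_libraries = 'auto_explain'
    auto_explain.log_min_duration = '100ms'
    auto_explain.sample_ratio = 0.1    # log ~10% of qualifying queries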
Author: Craig Ringer and Julien Rouhaud
Review: Petr Jelinek