When we're deleting a no-op LEFT JOIN from the query, we must remove
the join's joinclauses from surviving relations' joininfo lists.
The invention of "cloned" clauses in 2489d76c4 broke the logic for
that; it'd fail to remove clones that include OJ relids outside the
doomed join's min relid sets, which could happen if that join was
previously discovered to commute with some other join.
This accidentally failed to cause problems in the majority of cases,
because we'd never decide that such a cloned clause was evaluatable at
any surviving join. However, Richard Guo discovered a case where that
did happen, leading to "no relation entry for relid" errors later.
Also, adding assertions that a non-removed clause contains no Vars from
the doomed join shows that there are quite a few existing regression
test cases where the problem occurs but happens to escape detection.
The fix for this is just to include the target join's commute_above_r
and commute_below_l sets in the relid set we test against when
deciding whether a join clause is "pushed down" and thus not
removable.
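As a rough sketch (not the committed code; the helper below is hypothetical,
though the SpecialJoinInfo fields and Bitmapset primitives are the existing
planner ones), the widened relid set could be built like this:

    /*
     * Hypothetical helper: compute the relid set the "pushed down" test is
     * made against.  Including the OJ relids of joins the doomed join was
     * found to commute with keeps clones that mention those relids from
     * being misclassified as pushed down, so they get removed too.
     */
    static Relids
    removable_join_relids(SpecialJoinInfo *sjinfo)
    {
        Relids      joinrelids;

        joinrelids = bms_union(sjinfo->min_lefthand, sjinfo->min_righthand);
        joinrelids = bms_add_member(joinrelids, sjinfo->ojrelid);
        joinrelids = bms_add_members(joinrelids, sjinfo->commute_above_r);
        joinrelids = bms_add_members(joinrelids, sjinfo->commute_below_l);
        return joinrelids;
    }

A join clause that does not test as pushed down against this wider set can
then be dropped along with the join.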
While at it, do a little refactoring: the join's relid set can be
computed inside remove_rel_from_query rather than in the caller.
Patch by me; thanks to Richard Guo for review.
Discussion: https://postgr.es/m/CAMbWs4_PHrRqTKDNnTRsxxQy6BtYCVKsgXm1_gdN2yQ=kmcO5g@mail.gmail.com
Avoid "right sibling's left-link doesn't match" errors when vacuuming a
corrupt nbtree index. Just LOG the issue and press on. That way VACUUM
will have a decent chance of finishing off all required processing for
the index (and for the table as a whole).
This error was seen in the field from time to time (it's more than a
theoretical risk), so giving VACUUM the ability to press on like this
has real value. Nothing short of a REINDEX is expected to fix the
underlying index corruption, so giving up (by throwing an error) risks
making a bad situation far worse. Anything that blocks forward progress
by VACUUM like this might go unnoticed for a long time. This could
eventually lead to a wraparound/xidStopLimit outage.
Note that _bt_unlink_halfdead_page() has always been able to bail on
page deletion when the target page's left sibling page was in an
inconsistent state. It now does the same thing (returns false to back
out of the second phase of deletion) when it notices sibling link
corruption in the target page's right sibling page.
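In sketch form (simplified, with illustrative variable names such as opaque,
target, rightsib and rel; not the exact committed code), the new bail-out
amounts to:

    /* The right sibling's left-link should point back at the target page */
    if (opaque->btpo_prev != target)
    {
        ereport(LOG,
                (errcode(ERRCODE_INDEX_CORRUPTED),
                 errmsg_internal("right sibling %u of block %u has corrupted left-link in index \"%s\"",
                                 rightsib, target,
                                 RelationGetRelationName(rel))));
        /* release buffers, then give up on deleting this page */
        return false;
    }

Returning false here lets the caller move on to the rest of its work rather
than aborting the whole VACUUM.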
This is similar to the work from commit 5b861baa (later backpatched as
commit 43e409ce), which taught nbtree to press on with vacuuming an
index when page deletion fails to "re-find" a downlink in the target
page's parent page. The "re-find" check seems to make VACUUM bail on
page deletion more often in practice, but there is no reason to take any
chances here.
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://postgr.es/m/CAH2-Wzko2q2kP1+UvgJyP9g0mF4hopK0NtQZcxwvMv9_ytGhkQ@mail.gmail.com
Backpatch: 11- (all supported versions).
We've had multiple issues with the clause_is_computable_at logic that
I introduced in 2489d76c4: it's been known to accept more than one
clone of the same qual at the same plan node, and also to accept no
clones at all. It's looking impractical to get it 100% right on the
basis of the currently-stored information, so fix it by introducing a
new RestrictInfo field "incompatible_relids" that explicitly shows
which outer joins a given clone mustn't be pushed above.
In principle we could populate this field in every RestrictInfo, but
that would cost space and there doesn't presently seem to be a need
for it in general. Also, while deconstruct_distribute_oj_quals can
easily fill the field with the remaining members of the commutative
join set that it's considering, computing it in the general case
seems again pretty complicated. So for now, just fill it for
clone quals.
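For illustration only (the function below is a hypothetical stand-in for the
real check, and eval_relids names the relids available at the proposed
evaluation level), the new field is meant to be consulted roughly like this:

    static bool
    clone_is_computable_at(RestrictInfo *rinfo, Relids eval_relids)
    {
        /* A clone must never be pushed above an incompatible outer join */
        if (bms_overlap(rinfo->incompatible_relids, eval_relids))
            return false;

        /* ... and everything it references must be available here */
        return bms_is_subset(rinfo->required_relids, eval_relids);
    }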
Along the way, fix a bug that may or may not be only latent:
equivclass.c was generating replacement clauses with is_pushed_down
and has_clone/is_clone markings that didn't match their
required_relids. This led me to conclude that leaving the clone flags
out of make_restrictinfo's purview wasn't such a great idea after all,
so add them.
Per report from Richard Guo.
Discussion: https://postgr.es/m/CAMbWs48EYi_9-pSd0ORes1kTmTeAjT4Q3gu49hJtYCbSn2JyeA@mail.gmail.com
2dcd1578c4 left the --role option undocumented, which is
inconsistent with other deprecated options such as pg_dump's
--blobs and --no-blobs. This change adds --role back to
createuser's documentation and usage output and marks it as
deprecated.
Suggested-by: Peter Eisentraut
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/0e85c9e7-4804-1cdb-5a4a-c72c328f9ad8%40enterprisedb.com
Commit f62975b2a made headerscheck and cpluspluscheck a little
smarter than this documentation gave them credit for. Make
their docs match current reality. Also add a warning about the
layer of dust that has settled on the rest of these scripts.
Discussion: https://postgr.es/m/769950.1684797558@sss.pgh.pa.us
This file could be in the current (build) directory if we just
built it. However, when installing from a VPATH build from a
tarball, it will exist in the source directory and gmake will
therefore not rebuild it. Use the $< macro to find out where
gmake found it.
Oversight in b3a0d8324, which also exposes a buildfarm testing gap:
we test install from VPATH builds from bare source trees, but not
from tarballs.
Per report from Christoph Berg.
Discussion: https://postgr.es/m/ZGzEAqjxkkoY3ooH@msg.df7cb.de
That's how other boolean options are handled, so do likewise.
The previous coding with "enable" and "disable" was seemingly
modeled on gssencmode, but that's a three-way flag.
While at it, add PGGSSDELEGATION to the set of environment
variables cleared by pg_regress and Utils.pm.
Abhijit Menon-Sen, per gripe from Alvaro Herrera
Discussion: https://postgr.es/m/20230522091609.nlyuu4nolhycqs2p@alvherre.pgsql
This change renames --admin to --with-admin, --role to --member-of,
and --member to --with-member. Many people found the previous
names to be confusing. The --admin and --member options are new in
v16, but --role has been there for a while, so that one has been
kept (but left undocumented) for backward compatibility.
Suggested-by: Peter Eisentraut
Reviewed-by: Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/ZFvVZvQDliIWmOwg%40momjian.us
Use the clause's required_relids not clause_relids for testing
whether it is computable at the current join level, if it is a
clone clause generated by deconstruct_distribute_oj_quals().
Arguably, this is more correct and we should do it for all clauses;
that would at least remove the handwavy claim that we are doing
it to save cycles compared to inspecting Vars individually.
However, attempting to do that exposes that we are not being careful
to compute an accurate value for required_relids in all cases.
I'm unsure whether it's a good idea to attempt to do that for v16,
or leave it as future clean-up. In the meantime, this quick hack
demonstrably fixes some cases, so let's squeeze it in for beta1.
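In outline, the hack looks about like this (a hedged sketch, not the
committed code; currentrelids stands for the relid set of the join level
being considered):

    /* For clone clauses, test required_relids rather than clause_relids */
    Relids      rels_to_check;

    if (rinfo->has_clone || rinfo->is_clone)
        rels_to_check = rinfo->required_relids;
    else
        rels_to_check = rinfo->clause_relids;

    if (!bms_is_subset(rels_to_check, currentrelids))
        return false;           /* not computable at this join level */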
Patch by me, but great thanks to Richard Guo for investigation
and testing. The new test cases are all modeled on his examples.
Discussion: https://postgr.es/m/CAMbWs4-_vwkBij4XOQ5ukxUvLgwTm0kS5_DO9CicUeKbEfKjUw@mail.gmail.com
The assertion checked that the size of the relation is not "too large" - but
the code is explicitly dealing with the possibility of another backend
extending the relation concurrently. In that case the new relation size could
be bigger than what the current backend needs, wrongly triggering an assertion
failure.
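By way of illustration (names such as first_block and extend_by are made up,
and this is not the actual code being fixed), only a lower bound on the
relation's size can safely be asserted here:

    BlockNumber nblocks = RelationGetNumberOfBlocks(relation);

    /* Safe: other backends can only have made the relation bigger */
    Assert(nblocks >= first_block + extend_by);

    /*
     * Unsafe: an upper-bound check such as
     *     Assert(nblocks <= expected_size);
     * can fire spuriously when another backend has extended the relation
     * concurrently, which is exactly the case this code must allow for.
     */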
Unfortunately it is hard to write a reliable and affordable regression test
for this, as a lot of concurrency is needed to encounter the bug.
Introduced in 31966b151e.
Reported-by: Melanie Plageman <melanieplageman@gmail.com>
WalSndWakeup() currently loops through all the walsender slots, with a
spinlock acquisition and release for every iteration, to wake up waiting
walsenders.
This commonly was not a problem before e101dfac3a. But, to allow logical
decoding on standbys, we need to wake up logical walsenders after every WAL
record is applied on the standby, rather than just when flushing WAL or switching
timelines. This causes a performance regression for workloads replaying a lot
of WAL records.
To solve this, we use condition variables (CVs) to efficiently wake up
walsenders in WalSndWakeup().
Every walsender prepares to sleep on a shared memory CV. Note that it just
prepares to sleep on the CV (i.e., adds itself to the CV's waitlist), but does
not actually wait on the CV (IOW, it never calls ConditionVariableSleep()). It
still uses WaitEventSetWait() for waiting, because CV infrastructure doesn't
handle FeBe socket events currently. The processes (startup process,
walreceiver etc.) wanting to wake up walsenders use
ConditionVariableBroadcast(), which in turn calls SetLatch(), helping
walsenders come out of WaitEventSetWait().
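The waiting side follows roughly this pattern (a simplified sketch; the
wrapper name walsender_wait is made up and the real code has more going on
around the wait):

    static void
    walsender_wait(ConditionVariable *wal_cv, long timeout, uint32 wait_event)
    {
        WaitEvent   event;

        /* Join the CV's waitlist, but never sleep on the CV itself */
        ConditionVariablePrepareToSleep(wal_cv);

        /*
         * Wait via the wait-event machinery so FeBe socket events are still
         * handled.  ConditionVariableBroadcast() in the process applying WAL
         * sets our latch, popping us out of WaitEventSetWait().
         */
        (void) WaitEventSetWait(FeBeWaitSet, timeout, &event, 1, wait_event);

        /* Leave the waitlist again before going back to work */
        ConditionVariableCancelSleep();
    }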
We use separate shared memory CVs for physical and logical walsenders for
selective wake ups, see WalSndWakeup() for more details.
This approach is simple and reasonably efficient, though not very elegant. But
for 16 it seems to be a better path than a larger redesign of the CV
mechanism. A desirable future improvement would be to add support for CVs
into WaitEventSetWait().
This still leaves us with a small regression in very extreme workloads (due to
the spinlock acquisition in ConditionVariableBroadcast() when there are no
waiters) - but that seems acceptable.
Reported-by: Andres Freund <andres@anarazel.de>
Suggested-by: Andres Freund <andres@anarazel.de>
Author: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Reviewed-by: "Drouvot, Bertrand" <bertranddrouvot.pg@gmail.com>
Reviewed-by: Zhijie Hou <houzj.fnst@fujitsu.com>
Discussion: https://www.postgresql.org/message-id/20230509190247.3rrplhdgem6su6cg%40awork3.anarazel.de
Tweak column widths in a couple of tables to avoid "contents
... exceed the available area" warnings. Remove usage of
some non-Latin-1 characters.
Discussion: https://postgr.es/m/614600.1684537037@sss.pgh.pa.us
Complete the task begun in 9c0a0e2ed: we don't want to use the
abbreviation "deleg" for GSS delegation in any user-visible places.
(For consistency, this also changes most internal uses too.)
Abhijit Menon-Sen and Tom Lane
Discussion: https://postgr.es/m/949048.1684639317@sss.pgh.pa.us
* document to_reg* accepting OIDs
* document pg_log_standby_snapshot()
* document pg_input_is_valid() and pg_input_error_info()
* handle rename of function to pg_split_walfile_name()
* fix character encoding problem for Przemyslaw Sztoch
* remove partition section
Reported-by: jian he, Tom Lane, Bertrand Drouvot
Run pgindent, pgperltidy, and reformat-dat-files.
This set of diffs is a bit larger than typical. We've updated to
pg_bsd_indent 2.1.2, which properly indents variable declarations that
have multi-line initialization expressions (the continuation lines are
now indented one tab stop). We've also updated to perltidy version
20230309 and changed some of its settings, which reduces its desire to
add whitespace to lines to make assignments etc. line up. Going
forward, that should make for fewer random-seeming changes to existing
code.
Discussion: https://postgr.es/m/20230428092545.qfb3y5wcu4cm75ur@alvherre.pgsql
Run renumber_oids.pl to move high-numbered OIDs down, as per pre-beta
tasks specified by RELEASE_CHANGES. For reference, the command was
./renumber_oids.pl --first-mapped-oid 8000 --target-oid 6200
In commit 9df8f903e I (tgl) switched join_is_removable() from
using the min relid sets of the join under consideration to
using its full syntactic relid sets. This was a mistake,
as it allowed join removal in cases where a reference to the
join output would survive in some syntactically-lower join
condition. Revert to the former coding.
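For reference, the reverted-to form computes the join's relid set from the
min sets (a sketch using the usual planner field names, not a quote of the
patch):

    /* Correct: base the check on the join's minimum relid sets ... */
    joinrelids = bms_union(sjinfo->min_lefthand, sjinfo->min_righthand);
    joinrelids = bms_add_member(joinrelids, sjinfo->ojrelid);

    /*
     * ... not on bms_union(sjinfo->syn_lefthand, sjinfo->syn_righthand),
     * whose larger coverage can hide references to the join's output that
     * survive in a syntactically-lower join condition.
     */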
Richard Guo
Discussion: https://postgr.es/m/CAMbWs4-EU9uBGSP7G-iTwLBhRQ=rnZKvFDhD+n+xhajokyPCKg@mail.gmail.com
The idea of EvalPlanQual is that we replace the query's scan of the
result relation with a single injected tuple, and see if we get a
tuple out, thereby implying that the injected tuple still passes the
query quals. (In join cases, other relations in the query are still
scanned normally.) This logic was not updated when commit 86dc90056
made it possible for a single DML query plan to have multiple result
relations, when the query target relation has inheritance or partition
children. We replaced the output for the current result relation
successfully, but other result relations were still scanned normally;
thus, if any other result relation contained a tuple satisfying the
quals, we'd think the EPQ check passed, even if it did not pass for
the injected tuple itself. This would lead to update or delete
actions getting performed when they should have been skipped due to
a conflicting concurrent update in READ COMMITTED isolation mode.
Fix by blocking all sibling result relations from emitting tuples
during an EvalPlanQual recheck. In the back branches, the fix is
complicated a bit by the need to not change the size of struct
EPQState (else we'd have ABI-breaking changes in offsets in
struct ModifyTableState). Like the back-patches of 3f7836ff6
and 4b3e37993, add a separately palloc'd struct to avoid that.
The logic is the same as in HEAD otherwise.
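Conceptually, the fix behaves about like this during an EPQ recheck (a sketch
only; relsubs_blocked stands for the kind of per-result-relation flag
described above and may not match the committed field name):

    /* In the scan-fetch path, when estate->es_epq_active is set */
    EPQState   *epqstate = estate->es_epq_active;

    if (epqstate->relsubs_blocked[scanrelid - 1])
        return NULL;            /* sibling result relation: emit no tuples */

    if (epqstate->relsubs_done[scanrelid - 1])
        return NULL;            /* already returned the injected tuple */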
This is only a live bug back to v14 where 86dc90056 came in.
However, I chose to back-patch the test cases further, on the
grounds that this whole area is none too well tested. I skipped
doing so in v11 though because none of the test cases applied cleanly,
and it didn't quite seem worth extra work for a branch with only
six months to live.
Per report from Ante Krešić (via Aleksander Alekseev)
Discussion: https://postgr.es/m/CAJ7c6TMBTN3rcz4=AjYhLPD_w3FFT0Wq_C15jxCDn8U4tZnH1g@mail.gmail.com