This was previously allowed, but I think that was just an oversight.
It's a clear violation of the rule that a generated column cannot
depend on itself or other generated columns. Moreover, because the
code was relying on the assumption that no such cross-references
exist, it was pretty easy to crash ALTER TABLE and perhaps other
places. Even if you managed not to crash, you got quite unstable,
implementation-dependent results.
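For illustration, the needed check amounts to something like the
following sketch (not the committed code; it assumes the new column's
expression and the table's descriptor are at hand):

    /* Sketch only: reject references to other generated columns. */
    Bitmapset  *attrs = NULL;
    int         col = -1;

    pull_varattnos(expr, 1, &attrs);    /* columns used by the expression */
    while ((col = bms_next_member(attrs, col)) >= 0)
    {
        AttrNumber  attnum = col + FirstLowInvalidHeapAttributeNumber;
        Form_pg_attribute att;

        if (attnum <= 0)
            continue;           /* system columns are checked elsewhere */
        att = TupleDescAttr(RelationGetDescr(rel), attnum - 1);
        if (att->attgenerated)
            ereport(ERROR,
                    (errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
                     errmsg("cannot use generated column \"%s\" in column generation expression",
                            NameStr(att->attname))));
    }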
Per report from Vitaly Ustinov.
Back-patch to v12 where GENERATED came in.
Discussion: https://postgr.es/m/CAM_DEiWR2DPT6U4xb-Ehigozzd3n3G37ZB1+867zbsEVtYoJww@mail.gmail.com
We consider this supported (though I've got my doubts that it's a
good idea, because tableoid is not immutable). However, several
code paths failed to fill in the field soon enough, causing such
a GENERATED expression to see zero or the wrong value. This
occurred when ALTER TABLE added a new GENERATED column to a table
with existing rows, and during regular INSERT or UPDATE on a
foreign table with GENERATED columns.
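The shape of the fix, as a minimal sketch (assuming the evaluation of
stored generated columns reads the slot's tts_tableOid; the call shown
uses the v14 signature), is to fill the field first:

    /* Make the tuple's table OID visible before GENERATED expressions run. */
    slot->tts_tableOid = RelationGetRelid(resultRelationDesc);
    ExecComputeStoredGenerated(resultRelInfo, estate, slot, CMD_INSERT);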
Noted during investigation of a report from Vitaly Ustinov.
Back-patch to v12 where GENERATED came in.
Discussion: https://postgr.es/m/CAM_DEiWR2DPT6U4xb-Ehigozzd3n3G37ZB1+867zbsEVtYoJww@mail.gmail.com
COMMIT/ROLLBACK necessarily destroys all snapshots within the session.
The original implementation of intra-procedure transactions just
cavalierly did that, ignoring the fact that this left us executing in
a rather different environment than normal. In particular, it turns
out that handling of toasted datums depends rather critically on there
being an outer ActiveSnapshot: otherwise, when SPI or the core
executor pop whatever snapshot they used and return, it's unsafe to
dereference any toasted datums that may appear in the query result.
It's possible to demonstrate "no known snapshots" and "missing chunk
number N for toast value" errors as a result of this oversight.
Historically this outer snapshot has been held by the Portal code,
and that seems like a good plan to preserve. So add infrastructure
to pquery.c to allow re-establishing the Portal-owned snapshot if it's
not there anymore, and add enough bookkeeping support that we can tell
whether it is or not.
We can't, however, just re-establish the Portal snapshot as part of
COMMIT/ROLLBACK. As in normal transaction start, acquiring the first
snapshot should wait until after SET and LOCK commands. Hence, teach
spi.c about doing this at the right time. (Note that this patch
doesn't fix the problem for any PLs that try to run intra-procedure
transactions without using SPI to execute SQL commands.)
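In simplified form, the spi.c side looks like this sketch, with
EnsurePortalSnapshotExists() being the new pquery.c entry point:

    /*
     * Once we know this statement really needs a snapshot (i.e., we are
     * past any SET/LOCK preamble), re-establish the Portal-owned snapshot
     * if an intra-procedure COMMIT/ROLLBACK destroyed it.
     */
    if (!plan->no_snapshots)
        EnsurePortalSnapshotExists();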
This makes SPI's no_snapshots parameter rather a misnomer, so in HEAD,
rename that to allow_nonatomic.
replication/logical/worker.c also needs some fixes, because it wasn't
careful to hold a snapshot open around AFTER trigger execution.
That code doesn't use a Portal, which I suspect someday we're gonna
have to fix. But for now, just rearrange the order of operations.
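The rearrangement is essentially this, in simplified form:

    /* Before: the active snapshot was popped before AFTER triggers ran. */
    PopActiveSnapshot();
    AfterTriggerEndQuery(estate);

    /* After: keep a snapshot in place until trigger execution is done. */
    AfterTriggerEndQuery(estate);
    PopActiveSnapshot();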
This includes back-patching the recent addition of finish_estate()
to centralize the cleanup logic there.
This also back-patches commit 2ecfeda3e into v13, to improve the
test coverage for worker.c (it was that test that exposed that
worker.c's snapshot management is wrong).
Per bug #15990 from Andreas Wicht. Back-patch to v11 where
intra-procedure COMMIT was added.
Discussion: https://postgr.es/m/15990-eee2ac466b11293d@postgresql.org
While applying a truncate change, the logical apply worker acquired
only RowExclusiveLock on the relation being truncated. This allowed
two apply workers to truncate the same relation concurrently, which
led to a deadlock: one worker, after updating the pg_class tuple,
tried to acquire a SHARE lock on the relation and started waiting
for the second worker, which held RowExclusiveLock on the relation.
When the second worker then tried to update the pg_class tuple, it
in turn waited for the first worker, completing the deadlock. Fix
it by acquiring AccessExclusiveLock on the relation before applying
the truncate change, as we do for a normal truncate operation.
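A minimal sketch of the change in apply_handle_truncate() (simplified):

    /* Match the lock that a normal TRUNCATE would take on the relation. */
    rel = logicalrep_rel_open(relid, AccessExclusiveLock);  /* was RowExclusiveLock */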
Author: Peter Smith, test case by Haiying Tang
Reviewed-by: Dilip Kumar, Amit Kapila
Backpatch-through: 11
Discussion: https://postgr.es/m/CAHut+PsNm43p0jM+idTvWwiGZPcP0hGrHMPK9TOAkc+a4UpUqw@mail.gmail.com
exec_for_query() normally tries to prefetch a few rows at a time
from the query being iterated over, so as to reduce executor
entry/exit overhead. Unfortunately this is unsafe if we have
COMMIT or ROLLBACK within the loop, because there might be
TOAST references in the data that we prefetched but haven't
yet examined. Immediately after the COMMIT/ROLLBACK, we have
no snapshots in the session, meaning that VACUUM is at liberty
to remove recently-deleted TOAST rows.
This was originally reported as a case triggering the "no known
snapshots" error in init_toast_snapshot(), but even if you miss
hitting that, you can get "missing toast chunk", as illustrated
by the added isolation test case.
To fix, just disable prefetching in non-atomic contexts. Maybe
there will be performance complaints prompting us to work harder
later, but it's not clear at the moment that this really costs
much, and I doubt we'd want to back-patch any complicated fix.
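The fix is essentially one test in exec_for_query(), sketched here
assuming the existing prefetch_ok flag:

    /*
     * Disable prefetching in non-atomic contexts: a COMMIT/ROLLBACK inside
     * the loop would leave prefetched-but-unexamined rows holding TOAST
     * pointers that VACUUM is then free to remove.
     */
    if (!estate->atomic)
        prefetch_ok = false;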
In passing, adjust that error message in init_toast_snapshot()
to be a little clearer about the likely cause of the problem.
Patch by me, based on earlier investigation by Konstantin Knizhnik.
Per bug #15990 from Andreas Wicht. Back-patch to v11 where
intra-procedure COMMIT was added.
Discussion: https://postgr.es/m/15990-eee2ac466b11293d@postgresql.org
A lamentable oversight on my part meant that when PostgresVersion.pm
was added in commit 4c4eaf3d19, no provision to install it was added
to the Makefile, so it was not installed along with the other perl
modules.
"typename" is a C++ keyword, so pg_upgrade.h fails to compile in C++.
Fortunately, there seems no likely reason for somebody to need to
do that. Nonetheless, it's project policy that all .h files should
pass cpluspluscheck, so rename the argument to fix that.
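For illustration, with a prototype along these lines (details may
differ from the actual header):

    /* fails cpluspluscheck: "typename" is a C++ keyword */
    void check_for_data_type_usage(ClusterInfo *cluster,
                                   const char *typename);

    /* compiles in both languages */
    void check_for_data_type_usage(ClusterInfo *cluster,
                                   const char *type_name);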
Oversight in 57c081de0; back-patch as that was. (The policy requiring
pg_upgrade.h to pass cpluspluscheck only goes back to v12, but it
seems best to keep this code looking the same in all branches.)
Older versions of perl on Windows don't like the list form of pipe open,
and perlcritic doesn't like the string form of open, so we avoid both
with a simpler formulation using qx{}.
Per complaint from Amit Kapila.
Recently we refactored things so that pg_regress makes the
"testtablespace" subdirectory used by the core regression tests,
instead of doing that in the makefiles. That had the undesirable
side effect of making such a subdirectory in every directory that
has "input" or "output" test files. Since these subdirectories
remain empty, git doesn't complain about them, but nonetheless
they're clutter.
To fix, invent an explicit --make-testtablespace-dir switch,
so that pg_regress only makes the subdirectory when explicitly
told to.
Discussion: https://postgr.es/m/2854388.1621284789@sss.pgh.pa.us
One of the tests for the pgbench permute() function added by
6b258e3d68 fails on some 32-bit platforms, due to variations in the
floating point computations in getrand(). The remaining tests give
sufficient coverage, so just remove the failing test.
Reported by Christoph Berg. Analysis by Thomas Munro and Tom Lane.
Based on patch by Fabien Coelho.
Discussion: https://postgr.es/m/YKQnUoYV63GRJBDD@msg.df7cb.de
If a promotion is triggered while recovery is paused, the paused state
ends and promotion continues. But previously in that case
pg_get_wal_replay_pause_state() wrongly kept returning 'paused' while
the promotion was ongoing.
To fix the issue, this commit changes standby promotion so that it
marks the recovery pause state as 'not paused' as soon as promotion
is triggered.
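In sketch form (placement and surrounding logic simplified), the
recovery code now does:

    if (CheckForStandbyTrigger())
    {
        /* promotion overrides any pause request */
        SetRecoveryPause(false);
        break;
    }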
Author: Fujii Masao
Reviewed-by: Dilip Kumar, Kyotaro Horiguchi
Discussion: https://postgr.es/m/f706876c-4894-0ba5-6f4d-79803eeea21b@oss.nttdata.com
We were not waiting for the publisher to catch up with the subscriber
after creating a subscription. Because of that, the apply worker could
start replication even after we had disabled the subscription in the
test, making the test expect no active slot while one still existed.
Fix this symptom by making the publisher wait until the subscription
has caught up.
It is also not reliable to check whether the slot is still active by
testing for the walsender's existence, because we release the slot
only after cleaning up the walsender-related memory. Fix that by
checking the slot's status in pg_replication_slots instead.
Also, avoid repeated enabling/disabling of the subscription.
Finally, turn autovacuum off for this test, to avoid empty
transactions appearing while the test consumes changes.
Reported-by: as per buildfarm
Author: Vignesh C
Reviewed-by: Amit Kapila, Michael Paquier
Discussion: https://postgr.es/m/CAA4eK1+uW1UGDHDz-HWMHMen76mKP7NJebOTZN4uwbyMjaYVww@mail.gmail.com
1) Previously there were both pgstat_send_wal() and pgstat_report_wal()
for sending WAL activity to the stats collector, the former being used
by the WAL writer and the latter by most other processes. They were
somewhat redundant, so this commit merges them into pgstat_send_wal()
to simplify the code.
2) Previously, WAL global statistics counters were calculated and then
compared with a zero-filled buffer to determine whether any WAL
activity had happened since the last submission. That calculation and
comparison were not cheap, and were regularly exercised even in
read-only workloads. This commit fixes the issue by directly checking
a few WAL activity counters to determine whether there are WAL
activity stats to send (see the sketch below).
3) Previously, pgstat_report_stat() did not consider pending WAL
activity stats in the "Don't expend a clock check if nothing to do"
test at the top. It's probably rare to have pending WAL stats without
also passing one of the other conditions, but for safety this commit
makes pgstat_report_stat() also check some WAL activity counters at
the top.
This commit also adds comments about the design of the WAL stats.
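The cheap emptiness test in point 2) boils down to something like this
sketch (counter names follow v14, slightly simplified):

    /*
     * Nothing to send if no WAL records were generated and no writes or
     * syncs have been counted since the previous report.
     */
    if (pgWalUsage.wal_records == prevWalUsage.wal_records &&
        WalStats.m_wal_write == 0 && WalStats.m_wal_sync == 0)
        return;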
Reported-by: Andres Freund
Author: Masahiro Ikeda
Reviewed-by: Kyotaro Horiguchi, Atsushi Torikoshi, Andres Freund, Fujii Masao
Discussion: https://postgr.es/m/20210324232224.vrfiij2rxxwqqjjb@alap3.anarazel.de
This is an oversight from bbe0a81d, where the equivalent option exists
in pg_dump. It is useful to be able to reset the compression method
cluster-wide when restoring data, so that default_toast_compression
applies instead.
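For context, this is the usual long-option plumbing in pg_restore's
main(), sketched here on the assumption that the option mirrors
pg_dump's --no-toast-compression:

    static int  no_toast_compression = 0;

    static struct option cmdopts[] = {
        /* ... existing options ... */
        {"no-toast-compression", no_argument, &no_toast_compression, 1},
        {NULL, 0, NULL, 0}
    };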
Reviewed-by: Daniel Gustafsson, Tom Lane
Discussion: https://postgr.es/m/YKHC+qCJvzCRVCpY@paquier.xyz
Allowing only on/off meant either that all existing configuration
guides would become obsolete if we disabled it by default, or that we
would have to accept a performance loss in the default config if we
enabled it by default. By allowing 'auto' as a middle ground, the
performance cost is only paid by those who enable pg_stat_statements
and similar modules.
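In guc.c terms the change is roughly the following sketch; 'auto'
resolves to on once a module such as pg_stat_statements calls
EnableQueryId():

    static const struct config_enum_entry compute_query_id_options[] = {
        {"auto", COMPUTE_QUERY_ID_AUTO, false},
        {"on", COMPUTE_QUERY_ID_ON, false},
        {"off", COMPUTE_QUERY_ID_OFF, false},
        {NULL, 0, false}
    };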
I only edited the release notes to comment-out a paragraph that is now
factually wrong; further edits are probably needed to describe the
related change in more detail.
Author: Julien Rouhaud <rjuju123@gmail.com>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/20210513002623.eugftm4nk2lvvks3@nol
ForgetBackgroundWorker lacked any memory barrier at all, while
BackgroundWorkerStateChange had one but unaccountably did
additional manipulation of the slot after the barrier. AFAICS,
the rule must be that the barrier is immediately before setting
or clearing slot->in_use.
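Concretely, the rule looks like this sketch:

    /* Finish all other manipulations of the slot first... */
    slot->pid = 0;
    /* ...then a write barrier immediately before flipping in_use. */
    pg_write_barrier();
    slot->in_use = false;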
It looks like back in 9.6 when ForgetBackgroundWorker was first
written, there might have been some case for not needing a
barrier there, but I'm not very convinced of that --- the fact
that the load of bgw_notify_pid is in the caller doesn't seem
to guarantee no memory ordering problem. So patch 9.6 too.
It's likely that this doesn't fix any observable bug on Intel
hardware, but machines with weaker memory ordering rules could
have problems here.
Discussion: https://postgr.es/m/4046084.1620244003@sss.pgh.pa.us
Add a defensive "can't happen" error to code that handles nbtree posting
list splits (promote an existing assertion). This avoids a segfault in
the event of an insertion of a newitem that is somehow identical to an
existing non-pivot tuple in the index. An nbtree index should never
have two index tuples with identical TIDs.
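In sketch form (the condition name is hypothetical), the promotion
looks like:

    /* Used to be an Assert; corruption must not get as far as a segfault. */
    if (!posting_split_ok)
        ereport(ERROR,
                (errcode(ERRCODE_INDEX_CORRUPTED),
                 errmsg("could not position new item with duplicate TID in index \"%s\"",
                        RelationGetRelationName(rel))));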
This scenario is not particularly unlikely in the event of any kind of
corruption that leaves the index in an inconsistent state relative to
the heap relation that is indexed. There are two known reports of
preventable hard crashes. Doing nothing seems unacceptable given the
general expectation that nbtree will cope reasonably well with corrupt
data.
Discussion: https://postgr.es/m/CAH2-Wz=Jr_d-dOYEEmwz0-ifojVNWho01eAqewfQXgKfoe114w@mail.gmail.com
Backpatch: 13-, where nbtree deduplication was introduced.
Formerly we just relied on operator classes that assert longValuesOK
to eventually shorten the leaf value enough to fit on an index page.
That fails since the introduction of INCLUDE-column support (commit
09c1c6ab4), because the INCLUDE columns might alone take up more
than a page, meaning no amount of leaf-datum compaction will get
the job done. At least with spgtextproc.c, that leads to an infinite
loop, since spgtextproc.c won't throw an error for not being able
to shorten the leaf datum anymore.
To fix without breaking cases that would otherwise work, add logic
to spgdoinsert() to verify that the leaf tuple size is decreasing
after each "choose" step. Some opclasses might not decrease the
size on every single cycle, and in any case, alignment roundoff
of the tuple size could obscure small gains. Therefore, allow
up to 10 cycles without additional savings before throwing an
error. (Perhaps this number will need adjustment, but it seems
quite generous right now.)
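In outline (variable names hypothetical), the added bookkeeping is:

    if (leafSize < bestLeafSize)
    {
        bestLeafSize = leafSize;    /* made progress; reset the counter */
        numNoProgressCycles = 0;
    }
    else if (++numNoProgressCycles >= 10)
        ereport(ERROR,
                (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
                 errmsg("SP-GiST leaf tuple is not getting smaller after %d choose cycles",
                        numNoProgressCycles)));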
As long as we've developed this logic, let's back-patch it.
The back branches don't have INCLUDE columns to worry about, but
this seems like a good defense against possible bugs in operator
classes. We already know that an infinite loop here is pretty
unpleasant, so having a defense seems to outweigh the risk of
breaking things. (Note that spgtextproc.c is actually the only
known opclass with longValuesOK support, so that this is all moot
for known non-core opclasses anyway.)
Per report from Dilip Kumar.
Discussion: https://postgr.es/m/CAFiTN-uxP_soPhVG840tRMQTBmtA_f_Y8N51G7DKYYqDh7XN-A@mail.gmail.com
Knowing that a buggy opclass could cause an infinite insertion loop,
spgdoinsert() intended to allow its loop to be interrupted by query
cancel. However, that never actually worked, because in iterations
after the first, we'd be holding buffer lock(s) which would cause
InterruptHoldoffCount to be positive, preventing servicing of the
interrupt.
To fix, check if an interrupt is pending, and if so fall out of
the insertion loop and service the interrupt after we've released
the buffers. If it was indeed a query cancel, that's the end of
the matter. If it was a non-canceling interrupt reason, make use
of the existing provision to retry the whole insertion. (This isn't
as wasteful as it might seem, since any upper-level index tuples we
already created should be usable in the next attempt.)
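The control flow becomes, in sketch form (using the
INTERRUPTS_PENDING_CONDITION() macro added by the companion commit;
the buffer-release helper is hypothetical):

    for (;;)
    {
        /* ... one "choose" step; buffer lock(s) are held here ... */

        if (INTERRUPTS_PENDING_CONDITION())
            break;          /* fall out while still holding buffers */
    }
    /* release buffer lock(s), then let the interrupt be serviced */
    unlockAndReleaseBuffers();
    CHECK_FOR_INTERRUPTS();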
While there's no known instance of such a bug in existing release
branches, it still seems like a good idea to back-patch this to
all supported branches, since the behavior is fairly nasty if a
loop does happen --- not only is it uncancelable, but it will
quickly consume memory to the point of an OOM failure. In any
case, this code is certainly not working as intended.
Per report from Dilip Kumar.
Discussion: https://postgr.es/m/CAFiTN-uxP_soPhVG840tRMQTBmtA_f_Y8N51G7DKYYqDh7XN-A@mail.gmail.com
Split up CHECK_FOR_INTERRUPTS() to provide an additional macro
INTERRUPTS_PENDING_CONDITION(), which just tests whether an
interrupt is pending without attempting to service it. This is
useful in situations where the caller knows that interrupts are
blocked, and would like to find out if it's worth the trouble
to unblock them.
Also add INTERRUPTS_CAN_BE_PROCESSED(), which indicates whether
CHECK_FOR_INTERRUPTS() can be relied on to clear the pending interrupt.
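The resulting macros look roughly like this (simplified from
miscadmin.h):

    #define INTERRUPTS_PENDING_CONDITION() \
        (unlikely(InterruptPending))

    #define CHECK_FOR_INTERRUPTS() \
    do { \
        if (INTERRUPTS_PENDING_CONDITION()) \
            ProcessInterrupts(); \
    } while(0)

    #define INTERRUPTS_CAN_BE_PROCESSED() \
        (InterruptHoldoffCount == 0 && CritSectionCount == 0)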
This commit doesn't actually add any uses of the new macros,
but a follow-on bug fix will do so. Back-patch to all supported
branches to provide infrastructure for that fix.
Alvaro Herrera and Tom Lane
Discussion: https://postgr.es/m/20210513155351.GA7848@alvherre.pgsql
This seems to be leftover from ea15e1867 and from when we used to evaluate
SRFs at each node.
Since there is an unconditional "return" at the end of the loop body,
only one iteration is ever possible, so we can just change the loop
into an if condition.
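Schematically:

    /* before: a loop whose body always returns on its first pass */
    while (condition)
    {
        do_something();
        return result;
    }

    /* after: say what is actually meant */
    if (condition)
    {
        do_something();
        return result;
    }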
There is no actual bug being fixed here so no back-patch. It seems fine to
just fix this anomaly in master only.
Author: Greg Nancarrow
Discussion: https://postgr.es/m/CAJcOf-d7T1q0az-D8evWXnsuBZjigT04WkV5hCAOEJQZRWy28w@mail.gmail.com
The percentage of the table's blocks reported in autovacuum's log
output (following commit 5100010ee4) should never exceed 100%, because
it describes the state of the table back when lazy_vacuum() was called.
The value could nevertheless exceed 100% in the event of heap relation
truncation. We failed to compensate for how truncation affects
rel_pages.
Fix the faulty accounting by using the original rel_pages value instead
of the current/final rel_pages value.
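The corrected computation, in sketch form (variable names hypothetical):

    /*
     * Divide by the rel_pages value saved when this VACUUM started, not
     * the current value, which truncation may since have reduced.
     */
    pct = (orig_rel_pages == 0) ? 0.0 :
        100.0 * vacuumed_pages / orig_rel_pages;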
Reported-By: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20210423204306.5osfpkt2ggaedyvy@alap3.anarazel.de
Make the sample like_regex match string values of the root object
instead of the whole document. The corrected example seems to
represent a more relevant use case.
Backpatch to 12, when jsonpath was introduced.
Discussion: https://postgr.es/m/13440f8b-4c1f-5875-c8e3-f3f65606af2f%40xs4all.nl
Author: Erik Rijkers
Reviewed-by: Michael Paquier, Alexander Korotkov
Backpatch-through: 12
Commits 27e1f1456 and 86dc90056, which were discussed independently,
together cause a crash when executing an inherited foreign
UPDATE/DELETE query with asynchronous execution enabled, in the case
where children of an Append node that is the direct or indirect child
of the ModifyTable node are rewritten by postgresPlanDirectModify() so
as to modify foreign tables directly: the direct modifications are
then executed asynchronously, which asynchronous execution does not
currently support. Fix by disabling asynchronous execution of the
direct modifications in that function.
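The fix is a one-liner there, in sketch form:

    /*
     * Direct foreign-table modification cannot be executed asynchronously,
     * so clear the flag previously set on this ForeignScan.
     */
    fscan->scan.plan.async_capable = false;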
Author: Etsuro Fujita
Reviewed-by: Amit Langote
Discussion: https://postgr.es/m/CAPmGK158e9sJOfuWxfn%2B0ynrspXQU3JhNjSCbaoeSzMvnga%2Bbw%40mail.gmail.com
Some of the tests did not account for the possibility that the slot's
spill stats could reach the stats collector after we had reset the
stats. Remove those tests, and don't check the total bytes decoded
and sent to the output plugin in the spilled-stats test, since the
spilled stats can reach the stats collector before the changes are
actually sent to the output plugin.
Reported-by: Tom Lane as per buildfarm
Author: Vignesh C, Sawada Masahiko
Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/20210319185247.ldebgpdaxsowiflw@alap3.anarazel.de
When typed directly as DML commands, INSERT was not always completed
to "INSERT INTO", and likewise DELETE to "DELETE FROM". This makes
the completion behavior more consistent for both commands, saving a
few keystrokes.
Commands on policies, triggers, grant/revoke, etc. still require only
DELETE as the completion keyword.
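In tab-complete.c terms the rule is roughly this sketch; Matches()
requires the word to start the command line, so DELETE inside GRANT or
a policy definition is unaffected:

    /* Complete a standalone INSERT with INTO, a standalone DELETE with FROM */
    else if (Matches("INSERT"))
        COMPLETE_WITH("INTO");
    else if (Matches("DELETE"))
        COMPLETE_WITH("FROM");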
Author: Haiying Tang
Reviewed-by: Dilip Kumar, Julien Rouhaud
Discussion: https://postgr.es/m/OS0PR01MB61135AE2B07CCD1AB8C6A0F6FB549@OS0PR01MB6113.jpnprd01.prod.outlook.com