The JOIN_SEMI case Assert'ed that there are no PlaceHolderVars that
need to be evaluated at the semijoin's RHS, which is wrong because
there could be some in the semijoin's qual condition. However, there
could not be any references further up than that, and within the qual
such a PHV could not have gone to null yet, so we don't really need
the PHV and there is no reason to forgo the RHS-removal optimization.
The upshot is that there's no actual bug in production code, and we
ought to just remove this misguided Assert.
While we're here, also drop the JOIN_RIGHT case, which is dead code
because reduce_outer_joins() already got rid of JOIN_RIGHT.
Per bug #17700 from Xin Wen. Uselessness of the JOIN_RIGHT case
pointed out by Richard Guo. Back-patch to v12 where this code
was added.
Discussion: https://postgr.es/m/17700-2b5c10d917c30687@postgresql.org
When loading a plperl built against Strawberry perl or the msys2 ucrt
perl, both of which are built with gcc, a binary mismatch has been
encountered that looks like this:
loadable library and perl binaries are mismatched (got handshake key 0000000012800080, needed 0000000012900080)
To cure this we bring the handshake keys into sync by adding
NO_THREAD_SAFE_LOCALE to the defines used to build plperl.
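For reference, a minimal sketch of the idea; the committed fix adds the
define to plperl's build flags rather than to a source file, but the
effect is the same:

    /* Hypothetical source-level rendering: disable thread-safe locale
     * support before the Perl headers compute the handshake key, so it
     * matches a gcc-built Perl. */
    #define NO_THREAD_SAFE_LOCALE
    #include "EXTERN.h"
    #include "perl.h"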
Discussion: https://postgr.es/m/20211005004334.tgjmro4kuachwiuc@alap3.anarazel.de
Discussion: https://postgr.es/m/c2da86a0-2906-744c-923d-16da6047875e@dunslane.net
Backpatch to all live branches.
Commit 9b4eafcaf4 added creation of a directory to reserve TAP test
ports at the top of the build tree. In a non-vpath build this means at
the top of the source tree, so it needs to be added to .gitignore.
As suggested by Michael Paquier
Backpatch to all live branches.
Strawberry uses __builtin_expect which Visual C doesn't have. For this
case define it as a noop. Solution taken from vim sources.
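A minimal sketch of the workaround (guard and expansion per the usual
pattern; the committed form may differ slightly):

    /* Visual C has no __builtin_expect; the hint is only an
     * optimization, so expanding it to the bare expression is safe. */
    #ifndef __GNUC__
    #define __builtin_expect(expr, val) (expr)
    #endif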
Backpatch to all live branches.
Commit b663a4136, which allowed FDWs to INSERT rows in bulk, added
code to nodeModifyTable.c to flush pending inserts to the foreign-table
result relation(s) before completing processing of the ModifyTable
node, but that code failed to take into account the case where the
INSERT query has modifying CTEs, leading to incorrect results.
Also, that commit failed to flush pending inserts before firing BEFORE
ROW triggers, so that rows were not visible to such triggers.
In that commit we scanned through EState's
es_tuple_routing_result_relations or es_opened_result_relations list to
find the foreign-table result relations to which pending inserts are
flushed, but that would be inefficient in some cases. So to fix, 1) add
a List member to EState to record the insert-pending result relations,
and 2) modify nodeModifyTable.c so that it adds the foreign-table result
relation to the list in ExecInsert() if appropriate, and flushes pending
inserts properly using the list where needed.
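A rough sketch of the shape of the fix (member and variable names are
illustrative, not necessarily the committed ones):

    /* In EState: result relations that have rows buffered for a
     * batched foreign-table insert. */
    List       *es_insert_pending_result_relations;

    /* In ExecInsert(), when buffering a row for such a relation: */
    if (!list_member_ptr(estate->es_insert_pending_result_relations,
                         resultRelInfo))
        estate->es_insert_pending_result_relations =
            lappend(estate->es_insert_pending_result_relations,
                    resultRelInfo);

    /* Wherever pending inserts must become visible (e.g. before firing
     * BEFORE ROW triggers), walk just this list and flush each batch. */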
While here, fix a copy-and-pasteo in a comment in ExecBatchInsert(),
which was added by that commit.
Back-patch to v14 where that commit appeared.
Discussion: https://postgr.es/m/CAPmGK16qutyCmyJJzgQOhfBq%3DNoGDqTB6O0QBZTihrbqre%2BoxA%40mail.gmail.com
In commit 272248a0c, we introduced an InitialRunningXacts array to
remember transactions and subtransactions that were running when the
xl_running_xacts record that we decoded was written. This array was
allocated in the snapshot builder's memory context after restoring a
serialized snapshot, but we forgot to reset it while freeing that
context. So, the next time we start decoding in the same session
without restoring any serialized snapshot, we end up using the stale
array, which can lead to unpredictable behavior.
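A sketch of the fix's shape, assuming the array is tracked in static
variables pointing into the builder's context (names illustrative):

    /* When freeing the snapshot builder, also forget the array that
     * was allocated in its context, so the next decoding run in this
     * session can't read through a dangling pointer. */
    MemoryContextDelete(builder->context);
    InitialRunningXacts = NULL;
    NInitialRunningXacts = 0;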
This problem doesn't exist in HEAD as instead of using
InitialRunningXacts, we added the list of transaction IDs and
sub-transaction IDs, that have modified catalogs and are running during
snapshot serialization, to the serialized snapshot (see commit 7f13ac8123).
Reported-by: Maxim Orlov
Author: Masahiko Sawada
Reviewed-by: Amit Kapila, Maxim Orlov
Backpatch-through: 11
Discussion: https://postgr.es/m/CACG=ezZoz_KG+Ryh9MrU_g5e0HiVoHocEvqFF=NRrhrwKmEQJQ@mail.gmail.com
We've made multiple attempts at preventing get_actual_variable_range
from taking an unreasonable amount of time (3ca930fc3, fccebe421).
But there's still an issue for the very first planning attempt after
deletion of a large number of extremal-valued tuples. While that
planning attempt will set "killed" bits on the tuples it visits and
thereby reduce effort for next time, there's still a lot of work it
has to do to visit the heap and then set those bits. It's (usually?)
not worth it to do that much work at plan time to have a slightly
better estimate, especially in a context like this where the table
contents are known to be mutating rapidly.
Therefore, let's bound the amount of work to be done by giving up
after we've visited 100 heap pages. Giving up just means we'll
fall back on the extremal value recorded in pg_statistic, so it
shouldn't mean that planner estimates suddenly become worthless.
Note that this means we'll still gradually whittle down the problem
by setting a few more index "killed" bits in each planning attempt;
so eventually we'll reach a good state (barring further deletions),
even in the absence of VACUUM.
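A hedged sketch of the bound (constant and variable names are
illustrative):

    #define VISITED_PAGES_LIMIT 100

    /* inside the index scan over extremal tuples */
    BlockNumber this_block = ItemPointerGetBlockNumber(tid);

    if (this_block != last_heap_block)
    {
        last_heap_block = this_block;
        if (++n_visited_heap_pages > VISITED_PAGES_LIMIT)
            return false;   /* caller falls back on pg_statistic */
    }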
Simon Riggs, per a complaint from Jakub Wartak (with cosmetic
adjustments by me). Back-patch to all supported branches.
Discussion: https://postgr.es/m/CAKZiRmznOwi0oaV=4PHOCM4ygcH4MgSvt8=5cu_vNCfc8FSUug@mail.gmail.com
Currently there is a race condition where if concurrent TAP tests both
test that they can open a port they will assume that it is free and use
it, causing one of them to fail. To prevent this we record a reservation
using an exclusive lock, and any TAP test that discovers a reservation
checks to see if the reserving process is still alive, and looks for
another free port if it is.
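For illustration only, a rough C rendering of the reservation protocol
(the actual implementation is Perl, and its details differ):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/file.h>
    #include <unistd.h>

    /* Reserve "port" by taking an exclusive lock on a file named for
     * it and recording our PID; return the fd (the lock lives with
     * it), or -1 if a live test already holds the reservation. */
    static int
    reserve_port(const char *lockdir, int port)
    {
        char        path[1024];
        int         fd;

        snprintf(path, sizeof(path), "%s/%d", lockdir, port);
        fd = open(path, O_RDWR | O_CREAT, 0644);
        if (fd < 0)
            return -1;
        if (flock(fd, LOCK_EX | LOCK_NB) < 0)
        {
            close(fd);
            return -1;
        }
        dprintf(fd, "%d\n", (int) getpid());
        return fd;
    }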
Ports are reserved in a directory set by the environment variable
PG_TEST_PORT_DIR or, failing that, in a subdirectory of the top build
directory as set by Makefile.global, or else in the test's own
tmp_check directory.
The prove_check recipe in Makefile.global.in is extended to export
top_builddir to the TAP tests. This was already exported by the
prove_installcheck recipes.
Per complaint from Andres Freund
Backpatched from 9b4eafcaf4 to all live branches
Discussion: https://postgr.es/m/20221002164931.d57hlutrcz4d2zi7@awork3.anarazel.de
Once a logical slot has acquired a catalog_xmin, it doesn't let go of
it, even when invalidated by exceeding max_slot_wal_keep_size; this
means that, from the point of invalidation until the slot is dropped,
dead catalog tuples are no longer removed by vacuum. This could be
catastrophic if catalog churn is high.
Change the computation of Xmin to ignore invalidated slots,
to prevent dead rows from accumulating.
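A sketch of the change's shape in the xmin computation (a slot is
deemed invalidated when restart_lsn has been cleared after recording
invalidated_at):

    /* in the loop over slots computing the oldest required xmin */
    SpinLockAcquire(&s->mutex);
    invalidated = XLogRecPtrIsInvalid(s->data.restart_lsn) &&
        !XLogRecPtrIsInvalid(s->data.invalidated_at);
    SpinLockRelease(&s->mutex);

    /* invalidated slots should not hold back the horizon */
    if (invalidated)
        continue;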
Backpatch to 13, where slot invalidation appeared.
Author: Sirisha Chamarthi <sirichamarthi22@gmail.com>
Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Discussion: https://postgr.es/m/CAKrAKeUEDeqquN9vwzNeG-CN8wuVsfRYbeOUV9qKO_RHok=j+g@mail.gmail.com
The Hunspell project moved from Sourceforge to Github sometime
in 2016, so update our links to match the new URL. Backpatch
the doc changes to all supported versions.
Discussion: https://postgr.es/m/DC9A662A-360D-4125-A453-5A6CB9C6C4B4@yesql.se
Backpatch-through: v11
I just spent an annoying amount of time reverse-engineering the
100%-undocumented API between ts_headline and the text search
parser's prsheadline function. Add some commentary about that
while it's fresh in mind. Also remove some unused macros in
wparser_def.c.
While at it, I noticed that when commit 78e73e875 added a
CHECK_FOR_INTERRUPTS call in TS_execute_recurse, it missed
doing so in the parallel function TS_phrase_execute, which
surely needs one just as much.
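The fix mirrors what 78e73e875 did for TS_execute_recurse; a sketch of
the added guard at the top of TS_phrase_execute:

    /* phrase evaluation can run long on pathological inputs, so let
     * the user cancel it */
    CHECK_FOR_INTERRUPTS();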
Back-patch because of the missing CHECK_FOR_INTERRUPTS.
Might as well back-patch the rest of this too.
These functions have been marked parallel safe, but the buildfarm's
response to commit e2933a6e1 exposed the flaw in that thinking:
if you try to use them on a temporary table, and they run inside
a parallel worker, they'll fail with "cannot access temporary tables
during a parallel operation".
Fix that by marking them parallel restricted instead. Maybe someday
we'll have a better answer and can reverse this decision.
Back-patch to v15. To go back further, we'd have to devise variant
versions of pre-1.10 pageinspect versions. Given the lack of field
complaints, it doesn't seem worth the trouble. We'll just deem
this case unsupported pre-v15. (If anyone does complain, it might
be good enough to update the markings manually in their DBs.)
Discussion: https://postgr.es/m/E1ox94a-000EHu-VH@gemulon.postgresql.org
pageinspect has occasionally failed on slow buildfarm members,
with symptoms indicating that the expected effects of VACUUM
FREEZE didn't happen. This is presumably because a background
transaction such as auto-analyze was holding back global xmin.
We can work around that by using a temp table in the test.
Since commit a7212be8b, that will use an up-to-date cutoff xmin
regardless of other processes. And pageinspect itself shouldn't
really care whether the table is temp.
Back-patch to v14. There would be no point in older branches
without back-patching a7212be8b, which seems like more trouble
than the problem is worth.
Discussion: https://postgr.es/m/2892135.1668976646@sss.pgh.pa.us
ProcSleep() used a PGPROC* variable to point to PROC_QUEUE->links.next,
because that does "the right thing" with SHMQueueInsertBefore(). While
that largely works, it's certainly not correct, and it's unnecessary:
we can just use a SHM_QUEUE* to point to the insertion point.
Noticed when testing a 32-bit build of postgres with undefined behavior
sanitizer. UBSan noticed that sometimes the supposed PGPROC wasn't
sufficiently aligned (required since 46d6e5f567, ensured indirectly via
ShmemAllocRaw() guaranteeing cacheline alignment).
For now fix this by using a SHM_QUEUE* for the insertion point. Subsequently
we should replace all the use of PROC_QUEUE and SHM_QUEUE with ilist.h, but
that's a larger change that we don't want to backpatch.
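A sketch of the before/after shape (variable names illustrative):

    /* before: miscast, counting on offsetof(PGPROC, links) == 0 so
     * that &proc->links lands on the intended insertion point */
    proc = (PGPROC *) waitQueue->links.next;
    SHMQueueInsertBefore(&proc->links, &MyProc->links);

    /* after: carry the insertion point with its real type */
    SHM_QUEUE  *insertBefore = waitQueue->links.next;
    SHMQueueInsertBefore(insertBefore, &MyProc->links);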
Backpatch to all supported versions - it's useful to be able to run postgres
under UBSan.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/20221117014230.op5kmgypdv2dtqsf@awork3.anarazel.de
Backpatch: 11-
The test output varies when debug_discard_caches is enabled,
because that causes extra executions of recomputeNamespacePath.
Maybe putting a hook in that was a bad idea, but as a stopgap,
just turn off debug_discard_caches in this test.
Per buildfarm (now that we have debug_discard_caches coverage
again). Back-patch to v15 where this module was added.
Discussion: https://postgr.es/m/2267406.1668804934@sss.pgh.pa.us
basics.source is supposed to be pretty closely in step with
the examples in chapter 2 of the tutorial, but I forgot to
update it in commit f05a5e000. Fix that, and adjust a couple
of other discrepancies that had crept in over time.
(I notice that advanced.source is nowhere near being in sync
with chapter 3, but I lack the ambition to do something
about that right now.)
Version strings with unequal numbers of parts were being compared
incorrectly. We cure this by treating a missing part in the shorter
version as 0.
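For reference, a self-contained sketch of the comparison rule (the fix
itself is Perl code; this C rendering is only an illustration):

    #include <stdlib.h>

    /* Compare dotted version strings part by part; a missing part in
     * the shorter string counts as 0, so "9.6" == "9.6.0" < "9.6.1". */
    static int
    version_cmp(const char *a, const char *b)
    {
        while (*a || *b)
        {
            long        na = *a ? strtol(a, (char **) &a, 10) : 0;
            long        nb = *b ? strtol(b, (char **) &b, 10) : 0;

            if (na != nb)
                return (na < nb) ? -1 : 1;
            if (*a == '.')
                a++;
            if (*b == '.')
                b++;
        }
        return 0;
    }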
Per complaint from Jehan-Guillaume de Rorthais, but the fix is mine, not
his.
Discussion: https://postgr.es/m/20220628225325.53d97b8d@karst
Backpatch to release 14 where this code was introduced.
This comment references a struct that disappeared before MERGE was
merged ... and ExecDelete is not called by the committed MERGE anyway.
Revert to the original wording.
Backpatch to 15
During XLOG_HASH_SPLIT_ALLOCATE_PAGE replay, we were checking for a
cleanup lock on the new bucket page after acquiring an exclusive lock on
it and raising a PANIC error on failure. However, it is quite possible
that the checkpointer acquires a pin on the same page before we acquire
a lock on it, and then replay will lead to an error. So instead, directly
acquire the cleanup lock on the new bucket page during
XLOG_HASH_SPLIT_ALLOCATE_PAGE replay operation.
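A sketch of the shape of the change (the function names are the real
buffer-manager API; surrounding replay code omitted):

    /* before: take the content lock, then PANIC if a cleanup lock was
     * not attainable (a concurrent pin makes this check fail) */
    LockBuffer(newbuf, BUFFER_LOCK_EXCLUSIVE);
    if (!IsBufferCleanupOK(newbuf))
        elog(PANIC, "failed to acquire cleanup lock on new bucket page");

    /* after: wait for the cleanup lock directly; a transient pin held
     * by the checkpointer merely delays replay instead of PANICing */
    LockBufferForCleanup(newbuf);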
Reported-by: Andres Freund
Author: Robert Haas
Reviewed-By: Amit Kapila, Andres Freund, Vignesh C
Backpatch-through: 11
Discussion: https://postgr.es/m/20220810022617.fvjkjiauaykwrbse@awork3.anarazel.de
The current code looks for the sample file in the source directory, but
it seems better to test against the installed sample file.
Backpatch to release 15 where the test was introduced.
Discussion: https://postgr.es/m/73eea68e-3b6f-5f63-6024-25ed26b52016@dunslane.net
Reviewed by Tom Lane, Alvaro Herrera, Michael Paquier.
Currently this only allows for one argument, which must be present, and
always returns a single string. With this change the following now all
work:
    $all_config = $node->config_data;
    %config_map = ($node->config_data);
    $incdir = $node->config_data('--include-dir');
    ($incdir, $sharedir) = $node->config_data(
        qw(--include-dir --share-dir));
Backpatch to release 15 where this was introduced.
Discussion: https://postgr.es/m/73eea68e-3b6f-5f63-6024-25ed26b52016@dunslane.net
Reviewed by Tom Lane, Alvaro Herrera, Michael Paquier.
Test files should now ignore has_wal_read_bug() so long as
wait_for_catchup() is their only known way of reaching the bug. That's
at least five files today, a number expected to grow over time. This
commit removes skip logic from three. By doing so, systems having the
bug regain the ability to catch other kinds of defects via those three
tests. The other two, 002_databases.pl and 031_recovery_conflict.pl,
have been unprotected. Back-patch to v15, where done_testing() first
became our standard.
Discussion: https://postgr.es/m/20221030031639.GA3082137@rfd.leadboat.com
The original report was concerned with a possible inconsistency
between the heap and the visibility map, which I was unable to
confirm. The concern has been retracted.
However, there did seem to be a torn page hazard when using
checksums. By not setting the heap page LSN during redo, the
protections of minRecoveryPoint were bypassed. Fixed, along with a
misleading comment.
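A sketch of the redo-side fix, using the usual pattern for stamping a
replayed page:

    /* after applying the change to the heap page during redo */
    PageSetLSN(page, lsn);      /* was missing; restores the
                                 * minRecoveryPoint/torn-page guards */
    MarkBufferDirty(buffer);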
It may have been impossible to hit this problem in practice, because
it would require a page tear between the checksum and the flags, so I
am marking this as a theoretical risk. But, as discussed, it did
violate expectations about the page LSN, so it may have other
consequences.
Backpatch to all supported versions.
Reported-by: Konstantin Knizhnik
Reviewed-by: Konstantin Knizhnik
Discussion: https://postgr.es/m/fed17dac-8cb8-4f5b-d462-1bb4908c029e@garret.ru
Backpatch-through: 11
The stanza "SET STORAGE may need to add a TOAST table" does not
test what it's supposed to, and hasn't done so since we added
the ability to store constant column default values as metadata.
We need to use a non-constant default to get the expected table
rewrite to actually happen.
Fix that, and add the missing checks that would have exposed the
problem to begin with.
Noted while reviewing a patch that made changes in this test case.
Back-patch to v11 where the problem came in.
In commit 450ee7012 I supposed that all platforms we now care about have
snprintf(), since that's required by C99. Turns out that Microsoft did
not get around to adding that until VS2015. We've dropped support for
VS2013 as of HEAD (cf 6203583b7), but not in the back branches, so add
a hack for this in the back branches only.
There's no easy shortcut to an exact emulation of standard snprintf
in VS2013, but fortunately we don't need one: this code was just fine
with using sprintf before 450ee7012, so we can make it do so again
on that platform (and any others where the problem might crop up).
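A hedged sketch of the hack (version guard per Microsoft's compiler
numbering; the committed form may differ):

    /* VS2013 (_MSC_VER 1800) predates C99-conformant snprintf, which
     * arrived with VS2015 (_MSC_VER 1900).  This code was fine with
     * plain sprintf before 450ee7012, so fall back to that there. */
    #if defined(_MSC_VER) && _MSC_VER < 1900
    #define snprintf(buf, size, ...) sprintf(buf, __VA_ARGS__)
    #endif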
Per bug #17681 from Daisuke Higuchi. Back-patch to v12, like the
previous patch.
Discussion: https://postgr.es/m/17681-485ba2ec13e7f392@postgresql.org
Replace the stopgap fix I made in 0e758ae89 with a cleaner one.
The real problem with 4ab5dae94 is that it contorted this function's
logic substantially, by introducing a third code path that required
different behavior in the function's main loop. That seems quite
unnecessary on closer inspection: the new IsBinaryUpgrade case can
just share the behavior of the other immediate-unlink cases. Hence,
revert 4ab5dae94 and most of 0e758ae89 (keeping the latter's
save/restore errno fix), and add IsBinaryUpgrade to the set of
conditions tested to choose immediate unlink.
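A sketch of the resulting decision (conditions paraphrased from the
description above; the committed code may differ in detail):

    /* Unlink the first segment immediately whenever the
     * keep-until-checkpoint dance is unnecessary; otherwise truncate
     * it and register it for removal after the next checkpoint. */
    if (isRedo || IsBinaryUpgrade || forkNum != MAIN_FORKNUM ||
        RelFileNodeBackendIsTemp(rnode))
    {
        /* immediate unlink, as for the other cases in this branch */
    }
    else
    {
        /* truncate to zero length; remove after next checkpoint */
    }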
Also fix some additional places with sloppy handling of errno,
to ensure we have an invariant that we always continue processing
after any non-ENOENT failure of do_truncate. I doubt that that's
fixing any bug of field importance, so I don't feel it necessary to
back-patch; but we might as well get it right while we're here.
Also improve the comments, which had drifted a bit from what the
code actually does, and neglected to mention some important
considerations.
Back-patch to v15, not because this is fixing any bug but because
it doesn't seem like a good idea for v15's mdunlinkfork logic to be
significantly different from both v14 and v16.
Discussion: https://postgr.es/m/3797575.1667924888@sss.pgh.pa.us
Add a little to the header comments for these functions to make it
clearer what guarantees about commit behavior are provided to callers.
(See commit f92944137 for context.)
Although this is only a comment change, it's really documentation
aimed at authors of extensions, so it seems appropriate to back-patch.
Yugo Nagata and Tom Lane, per further discussion of bug #17434.
Discussion: https://postgr.es/m/17434-d9f7a064ce2a88a3@postgresql.org
Commit fede15417 introduced FILTER by jamming it into the existing
example introducing HAVING, which seems pedagogically poor to me;
and it added no information about what the keyword actually does.
Not to mention that the claimed output didn't match the sample
data being used in this running example.
Revert that and instead make an independent example using FILTER.
To help drive home the point that it's a per-aggregate filter,
we need to use two aggregates not just one; for consistency
expand all the examples in this segment to do that.
Also adjust the example using WHERE ... LIKE so that it'd produce
nonempty output with this sample data, and show that output.
Back-patch, as the previous patch was. (Sadly, v10 is now out
of scope.)
Discussion: https://postgr.es/m/166794307526.652.9073408178177444190@wrigleys.postgresql.org
Commit 4ab5dae94 broke mdunlinkfork's logic for removing additional
segments of a multi-gigabyte table, because it neglected to advance
"segno" after unlinking the first segment, in the code path where it
chooses to unlink that one immediately. Then the main remove loop
gets ENOENT at segment zero and figures it's done, so we never remove
whatever additional segments might exist.
The main problem here is with large temporary tables, but WAL replay
of a drop of a large regular table would also fail to remove extra
segments. The third case where this path is taken is for non-main
forks; but I doubt it matters for those since they probably never
exceed 1GB.
The simplest fix is just to increment segno after that unlink().
(Probably this logic could do with a more thorough rethink, but not
with mere hours to go before 15.1 wraps.)
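A simplified sketch of the repaired removal logic (paths and error
handling elided):

    /* segment zero was just unlinked above */
    segno++;                    /* the forgotten increment */

    /* remove any remaining segments, stopping at the first hole */
    for (;; segno++)
    {
        snprintf(segpath, sizeof(segpath), "%s.%u", path, segno);
        if (unlink(segpath) < 0)
            break;              /* ENOENT: no more segments */
    }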
While here, also fix an incautious assumption that
register_forget_request cannot change errno. I don't think that
that has any really bad consequences, as we'd end up trying to unlink
the zero'th segment either way, but it greatly complicates reasoning
about what could happen here. Also make a couple of other cosmetic
fixes.
Per bug #17679 from Balazs Szilfai. Back-patch into v15, as the
faulty patch was.
Discussion: https://postgr.es/m/17679-1095d04450cf6a6e@postgresql.org
As usual, the release notes for other branches will be made by cutting
these down, but put them up for community review first.
Also as usual for a .1 release, there are some entries here that
are not really relevant for v15 because they already appeared in 15.0.
Those'll be removed later.
Commit aa0105141 repeated one of the oldest mistakes in our book:
thinking that OID is the same as int32. It isn't of course, and
unsurprisingly the first person who came along with a database
OID above 2 billion broke it. Repair.
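The shape of the fix, for reference (atooid is the existing helper for
this; the surrounding code is illustrative):

    /* OIDs are unsigned; parsing one with atoi() misbehaves above
     * INT32_MAX.  Use the Oid-aware helper instead. */
    Oid         dboid = atooid(db_oid_str);    /* not atoi() */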
Per bug #17677 from Sergey Pankov. Back-patch to v15.
Discussion: https://postgr.es/m/17677-a99fa067d7ed71c9@postgresql.org
"Triggers on partitioned tables cannot have transition tables." is
incorrect as we allow statement-level triggers on partitioned tables to
have transition tables.
This has been wrong since commit 86f575948; back-patch to v11 where that
commit came in.
Reviewed by Tom Lane.
Discussion: https://postgr.es/m/CAPmGK17gk4vXLzz2iG%2BG4LWRWCoVyam70nZ3OuGm1hMJwDrhcg%40mail.gmail.com
In the docs, the GUC flags that pg_settings_get_flags() reports were
listed using <simplelist>. But the list rendered as separate lines
within the existing function table and didn't look good. For better
readability, this commit moves the list out of that table entry and
into its own table below the existing function table.
Author: Fujii Masao
Reviewed-by: Alvaro Herrera, Michael Paquier
Discussion: https://postgr.es/m/f093edf9-6e5a-b119-ee50-6a2c97c79ee8@oss.nttdata.com
Back-patch of f2d0c7f18 into v15.
Discussion: https://postgr.es/m/20221103123320.GQ16921@telsasoft.com
Commit f56f8f8da6 added some code in CloneFkReferencing that's way too
lax about a Constraint node it manufactures, not initializing enough
struct members -- initially_valid in particular was forgotten. This
causes some FKs in partitions added by ALTER TABLE ATTACH PARTITION to
be marked as not validated. Set initially_valid true, which fixes the
bug.
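A sketch of the fuller initialization (field list abridged; the member
names are from the Constraint node):

    Constraint *fkconstraint = makeNode(Constraint);

    fkconstraint->contype = CONSTR_FOREIGN;
    fkconstraint->conname = pstrdup(NameStr(constrForm->conname));
    fkconstraint->deferrable = constrForm->condeferrable;
    fkconstraint->initdeferred = constrForm->condeferred;
    fkconstraint->skip_validation = false;
    fkconstraint->initially_valid = true;   /* the forgotten member */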
While at it, make the struct initialization more complete. Very similar
code was added in two other places by the same commit; make them all
follow the same pattern for consistency, though no bugs are apparent
there.
This bug has never been reported: I only happened to notice while
working on commit 614a406b4f. The test case added there, whose
expected output reflected the improper result, is repaired.
Backpatch to 12.
Discussion: https://postgr.es/m/20221005105523.bhuhkdx4olajboof@alvherre.pgsql