infer_arbiter_indexes failed to renumber varnos in index expressions
or predicates that it got from the catalogs. This escaped detection
up to now because the stored varnos in such trees will be 1, and an
INSERT's result relation is usually the first rangetable entry,
so the stored varnos were correct anyway. However, in cases such as inserting through
an updatable view, it's not fine, leading to failure to match the
expressions to the query with ensuing "there is no unique or exclusion
constraint matching the ON CONFLICT specification" errors.
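For example, a sketch of the failing shape (hypothetical names; the
arbiter index must involve an expression, and the view makes the base
table something other than the first rangetable entry):

    CREATE TABLE base (k int, v text);
    CREATE UNIQUE INDEX ON base (lower(v));
    CREATE VIEW base_v AS SELECT k, v FROM base;

    -- before the fix, this could fail to match the arbiter index,
    -- since the index expression's varnos still said "1"
    INSERT INTO base_v VALUES (1, 'x')
      ON CONFLICT (lower(v)) DO NOTHING;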
Fix by copy-and-paste from get_relation_info().
Per bug #18502 from Michael Wang. Back-patch to all supported
versions.
Discussion: https://postgr.es/m/18502-545b53f5b81e54e0@postgresql.org
When a partition is being detached in concurrent mode, it is possible
for find_inheritance_children_extended() to return that partition in the
list, and immediately after that receive an invalidation message that
sets its relpartbound to NULL just before we read it. (This can happen
because table_open() reads invalidation messages.)  Currently we raise
an error about the situation:
    ERROR: missing relpartbound for relation %u
But that's bogus, because the table is no longer a
partition, so we shouldn't be complaining about it. A better reaction
is to retry the find_inheritance_children_extended call to get a new
list, which will no longer have the partition being detached.
Noticed while investigating bug #18377.
Backpatch to 14, where DETACH CONCURRENTLY appeared.
Discussion: https://postgr.es/m/202405201616.y4ht2qe5ihoy@alvherre.pgsql
test_predtest() neglected to consider the possibility that
SPI_plan_get_cached_plan would return NULL. This led to a core
dump if the input (incorrectly) contained more than one SQL
command.
While here, let's expend more than zero effort on the error
message for this case and nearby ones.
Per (half of) bug #18483 from Alexander Kozhemyakin.
Back-patch to all supported branches, not because this is
very significant (it's merely test scaffolding) but to make
our world a bit safer for fuzz testing.
Discussion: https://postgr.es/m/18483-30bfff42de238000@postgresql.org
Normally, an ALTER TABLE targeting another session's temp table isn't
even reachable by non-superusers, since permissions checks prevent
naming such a table. However, it is
possible to make it happen by altering a parent table whose child
is another session's temp table.
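For example (a sketch with hypothetical names):

    -- session 1
    CREATE TABLE parent (a int);

    -- session 2
    CREATE TEMP TABLE child () INHERITS (parent);

    -- session 1: this recurses to session 2's temp table,
    -- and is now rejected
    ALTER TABLE parent ADD COLUMN b int;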
We definitely can't support any such ALTER that requires modifying
the contents of such a table, since we lack access to the other
session's temporary-buffer pool. But there seems no good reason
to allow it even if it'd only require changing catalog contents.
One reason not to allow it is that we'd rather not expose the
implementation-dependent behavior of whether a specific ALTER
requires touching the table contents. Another is that there may
be (in future, even if not today) optimizations that assume that
a session's own temp tables won't be modified by other sessions.
Hence, add a RELATION_IS_OTHER_TEMP() check to all the places
where ALTER TABLE currently does CheckTableNotInUse(). (I looked
through all other callers of CheckTableNotInUse(), and they seem
OK already.)
Per bug #18492 from Alexander Lakhin. Back-patch to all supported
branches.
Discussion: https://postgr.es/m/18492-c7a2634bf4968763@postgresql.org
If a CALL is executed within an atomic context (e.g. there's an outer
transaction block), _SPI_execute_plan should acquire a fresh snapshot
to execute the called procedure with. We failed to do that and instead
passed them the Portal snapshot, which had been acquired at the start
of the current SQL command. This'd lead to seeing stale values of
rows modified since the start of the command.
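A sketch of the kind of scenario affected (hypothetical names; before
the fix, the procedure could report the value from before the UPDATE):

    CREATE TABLE t (x int);
    INSERT INTO t VALUES (1);

    CREATE PROCEDURE show_x() LANGUAGE plpgsql AS $$
    DECLARE v int;
    BEGIN
      SELECT x INTO v FROM t;
      RAISE NOTICE 'x = %', v;
    END $$;

    BEGIN;              -- outer transaction block: atomic context
    DO $$
    BEGIN
      UPDATE t SET x = 2;
      CALL show_x();    -- previously could print the stale value 1
    END $$;
    COMMIT;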
This is arguably a bug in 84f5c2908: I failed to see that "are we in
non-atomic mode" needs to be defined the same way as it is further
down in _SPI_execute_plan, i.e. check !_SPI_current->atomic not just
options->allow_nonatomic. Alternatively the blame could be laid on
plpgsql, which is unconditionally passing allow_nonatomic = true
for CALL/DO even when it knows it's in an atomic context. However,
fixing it in spi.c seems like a better idea since that will also fix
the problem for any extensions that may have copied plpgsql's coding
pattern.
While here, update an obsolete comment about _SPI_execute_plan's
snapshot management.
Per report from Victor Yegorov. Back-patch to all supported versions.
Discussion: https://postgr.es/m/CAGnEboiRe+fG2QxuBO2390F7P8e2MQ6UyBjZSL_w1Cej+E4=Vw@mail.gmail.com
Floris Van Nee has reported a bug in the pgstats facility where an
already-dropped stats entry would get dropped again. This case should
not happen, but the error generated offered no details about the stats
entry being dropped.
This commit improves the error message generated to report the stats
entry's kind, database OID, object OID and refcount, which should help
in debugging the reported problem. Bertrand Drouvot was independently
able to reach this error path while writing a new feature, and more
details about the failure would have been helpful for debugging.
Author: Andres Freund, Bertrand Drouvot
Discussion: https://postgr.es/m/20240505160915.6boysum4f34siqct@awork3.anarazel.de
Discussion: https://postgr.es/m/ZkM30paAD8Cr/Bix@ip-10-97-1-34.eu-west-3.compute.internal
Backpatch-through: 15
Previously, when considering LIMIT pushdown, postgres_fdw failed to
check whether the query uses the FETCH FIRST ... WITH TIES clause,
which led to pushing such queries as plain LIMIT clauses, causing
incorrect results.
WITH TIES has been supported since v13, so we would need a
remote-version check before deciding that it is safe to push such a
clause; but we do not currently have a way to make that check (without
accessing the remote server), so disable pushing such a clause for now.
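For example, with a query such as this (hypothetical foreign table):

    SELECT * FROM remote_tab ORDER BY x
      FETCH FIRST 10 ROWS WITH TIES;

the remote server would previously be sent a plain LIMIT 10, silently
dropping any rows tied with the tenth one.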
Oversight in commit 357889eb1. Back-patch to v13, where that commit
added the support.
Per bug #18467 from Onder Kalaci.
Patch by Japin Li, per a suggestion from Tom Lane, with some changes to
the comments by me. Review by Onder Kalaci, Alvaro Herrera, and me.
Discussion: https://postgr.es/m/18467-7bb89084ff03a08d%40postgresql.org
In branches predating the v13-era commit 913bbd88d,
check_sql_fn_retval fails to resolve polymorphic output types and then
just throws up its hands and assumes the check will be made at
runtime. I think that's true for
ordinary functions returning RECORD, but it doesn't happen in CALL,
potentially resulting in crashes if the actual output of the SQL
procedure's SELECT doesn't match the type inferred from polymorphism.
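A sketch of the affected shape (hypothetical names):

    CREATE PROCEDURE poly_proc(IN a anyelement, INOUT r anyelement)
    LANGUAGE sql
    AS $$ SELECT a $$;

    CALL poly_proc(42, NULL);    -- r must resolve as integer

Before this fix, a procedure whose SELECT produced output not matching
the type inferred for r could crash rather than fail cleanly.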
With a little bit of rearrangement, we can use get_call_result_type
instead of get_func_result_type and thereby infer the correct types.
I'm still unwilling to back-patch all of 913bbd88d, so if the types
don't match you'll get an error rather than perhaps silently inserting
a cast as v13 and later can. That's consistent with prior behavior
though, so it seems fine.
Prior to 70ffb27b2, you'd typically get other errors due to other
shortcomings of CALL's management of polymorphism. Nonetheless,
this is an independent bug.
Although there is no bug in v13 and up, it seems prudent to add
the test case for this to the newer branches too. It's clearly
an under-tested area.
Per report from Andrew Bille.
Discussion: https://postgr.es/m/CAJnzarw9EeWHAQRm76dXd=7j+rgw6ERqC=nCay8jeFqTwKwhqQ@mail.gmail.com
Concurrent activity around replication slot creation and drop could
cause a replication slot to use a stats entry it should not have used
when created, triggering an assertion failure when retrieving this
inconsistent entry from the dshash table used by the stats facility.
The issue is that pgstat_drop_replslot() calls pgstat_drop_entry()
without checking the result. If pgstat_drop_entry() cannot free the
entry related to the object dropped, pgstat_request_entry_refs_gc()
should be called. AtEOXact_PgStat_DroppedStats() and surrounding
routines dropping stats entries already do that.
This is documented in pgstat_internal.h, but let's add a comment at the
top of pgstat_drop_entry() as that can be easy to miss.
Reported-by: Alexander Lakhin
Author: Floris Van Nee
Analyzed-by: Andres Freund
Discussion: https://postgr.es/m/17947-b9554521ad963c9c@postgresql.org
Backpatch-through: 15
The documentation for POSIX semaphores is missing a reference to
max_wal_senders. This commit fixes that in the same way that
commit 4ebe51a5fb fixed the same issue in the documentation for
System V semaphores.
Discussion: https://postgr.es/m/20240517164452.GA1914161%40nathanxps13
Backpatch-through: 12
This HBA entry was using "local" while specifying an address, which is
incorrect. While at it, this adjusts the format of the entry to be
aligned with its surroundings.
Oversight in 8fea86830e.
Reported-by: Stéphane Schildknecht
Reviewed-by: David G. Johnston
Discussion: https://postgr.es/m/44662001-54c4-4bfd-be93-35e01ca25fa1@gmail.com
Backpatch-through: 16
In a PL/Tcl function or procedure returning tuple, we use Tcl's
list-parsing code to parse the script's result, which is supposed to
be a Tcl list. If it isn't, you get an error. Commit 26abb50c4
incautiously supposed that we could use throw_tcl_error() to report
such an error.
That doesn't actually work, because low-level functions like
Tcl_ListObjGetElements() don't fill Tcl's errorInfo variable.
The result is either a null-pointer-dereference crash or emission
of misleading context information describing the previous Tcl error.
Back off to just reporting the interpreter's result string, and
improve throw_tcl_error()'s comment to explain when to use it.
Also, although the similar code in pltcl_trigger_handler() avoided
this mistake, it was using a fairly confusing wording of the
error message. Improve that while we're here.
Per report from A. Kozhemyakin. Back-patch to all supported
branches.
Erik Wienhold and Tom Lane
Discussion: https://postgr.es/m/6a2a1c40-2b2c-4a33-8b72-243c0766fcda@postgrespro.ru
Commit faff8f8e47 allowed integer literals to contain underscores, but
failed to update the lexer's "numericfail" rule. As a result, a
decimal integer literal containing underscores would fail to parse, if
used in an integer range with no whitespace after the first number,
such as "1_001..1_003" in a PL/pgSQL FOR loop.
Fix and backpatch to v16, where support for underscores in integer
literals was added.
Report and patch by Erik Wienhold.
Discussion: https://postgr.es/m/808ce947-46ec-4628-85fa-3dd600b2c154%40ewie.name
The formulas for SEMMNI and SEMMNS do not include the archiver
process, which was converted to an auxiliary process in v14, and
the WAL summarizer process, which was introduced in v17. This
commit corrects these formulas and adds a missing reference to
max_wal_senders nearby. Since this section of the documentation
tends to be incorrect quite often, we should likely give up on
documenting the exact formulas in favor of something less fragile,
but that is left as a future exercise.
Reported-by: Sami Imseih
Reviewed-by: Sami Imseih
Discussion: https://postgr.es/m/20240517164452.GA1914161%40nathanxps13
Backpatch-through: 12
This test was failing when using wal_debug=on and -DWAL_DEBUG because
additional log entries made the test grab an LSN that did not
correspond to the error expected by the test.
Previously the test would look for the first matching line to get the
LSN to skip up to. Improve this by having the test scan the logs with
a regexp that checks for the expected ERROR string, ensuring that the
wanted LSN comes from the correct context.
Backpatch down to 15, where this test was introduced.
Author: Ian Ilyasov
Discussion: https://postgr.es/m/GV1P251MB100415F17E6B2FDD7188777ECDE32@GV1P251MB1004.EURP251.PROD.OUTLOOK.COM
Backpatch-through: 15
Coverity complains that ECPGdebug is accessing debugstream without
holding debug_mutex, which is a fair complaint: we should take
debug_mutex while changing the settings ecpg_log looks at.
In some branches it also complains about unlocked use of simple_debug.
I think it's intentional and safe to have a quick unlocked check of
simple_debug at the start of ecpg_log, since that early exit will
always be taken in non-debug cases. But we should recheck
simple_debug after acquiring the mutex. In the worst case, calling
ECPGdebug concurrently with ecpg_log in another thread could result
in a null-pointer dereference due to debugstream transiently being
NULL while simple_debug isn't 0.
This is largely hypothetical, since it's unlikely anybody uses
ECPGdebug() at all in the field, and our own regression tests
don't seem to be hitting the theoretical race conditions either.
Still, if we're going to the trouble of having mutexes here, we ought
to be using them in a way that's actually safe, not just almost safe.
Hence, back-patch to all supported branches.
Parameter column_name must be an existing column because ALTER
MATERIALIZED VIEW cannot add new columns. The old description was
likely copied from ALTER TABLE.
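For example (hypothetical names), the column being altered must
already exist in the materialized view:

    ALTER MATERIALIZED VIEW mat_view
      ALTER COLUMN existing_col SET STATISTICS 500;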
Author: Erik Wienhold
Discussion: https://postgr.es/m/6880ca53-7961-4eeb-86d5-6bd05fc2027e@ewie.name
Backpatch-through: 12
Commit 3e1a373e2 missed teaching DecodeTimeOnly the same "ptype"
manipulations it added to DecodeDateTime. While likely harmless
at the time, it became a problem after 5b3c59535 added an error check
that ptype must be zero once we exit the parsing loop (that is, there
shouldn't be any unused prefixes). The consequence was that we'd
reject time or timetz input like T12:34:56 (the "extended" format
per ISO 8601-1:2019), even though that still worked in timestamp
input.
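For example, these were rejected in v16 even though the equivalent
timestamp input still worked:

    SELECT time 'T12:34:56';
    SELECT timetz 'T12:34:56';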
Since this is clearly under-tested code, add test cases covering all
the ISO 8601 time formats. (Note: although 8601 allows just "Thh",
we have never accepted that, and this patch doesn't change that.
I'm content to leave that as-is because it seems too likely to be
a mistake rather than intended input. If anyone wants to allow
that, it should be a separate patch anyway, and not back-patched.)
Per bug #18470 from David Perez. Back-patch to v16 where we
broke it.
Discussion: https://postgr.es/m/18470-34fad4c829106848@postgresql.org
transformTableLikeClause believed that it could process extended
statistics immediately because "the representation of CreateStatsStmt
doesn't depend on column numbers". That was true when extended stats
were first introduced, but it was falsified by the addition of
extended stats on expressions: the parsed expression tree is fed
forward by the LIKE option, and that will contain Vars. So if the
new table doesn't have attnums identical to the old one's (typically
because there are some dropped columns in the old one), that doesn't
work. The CREATE goes through, but it emits invalid statistics
objects that will cause problems later.
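A sketch of the failing pattern (hypothetical names):

    CREATE TABLE src (a int, dropme int, b text);
    ALTER TABLE src DROP COLUMN dropme;    -- attnums now differ
    CREATE STATISTICS src_stats ON (lower(b)), a FROM src;

    -- went through, but produced a statistics object whose
    -- expression still referenced src's column numbers
    CREATE TABLE dst (LIKE src INCLUDING STATISTICS);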
Fortunately, we already have logic that can adapt expression trees
to the possibly-new column numbering. To use it, we have to delay
processing of CREATE_TABLE_LIKE_STATISTICS into expandTableLikeClause,
just as for other LIKE options that involve expressions.
Per bug #18468 from Alexander Lakhin. Back-patch to v14 where
extended statistics on expressions were added.
Discussion: https://postgr.es/m/18468-f5add190e3fa5902@postgresql.org
We are capable of optimizing MIN() and MAX() aggregates on indexed
columns into subqueries that exploit the index, rather than the normal
thing of scanning the whole table. When we do this, we replace the
Aggref node(s) with Params referencing subquery outputs. Such Params
really ought to be included in the per-plan-node extParam/allParam
sets computed by SS_finalize_plan. However, we've never done so
up to now because of an ancient implementation choice to perform
that substitution during set_plan_references, which runs after
SS_finalize_plan, so that SS_finalize_plan never sees these Params.
The cleanest fix would be to perform a separate tree walk to do
these substitutions before SS_finalize_plan runs. That seems
unattractive, first because a whole-tree mutation pass is expensive,
and second because we lack infrastructure for visiting expression
subtrees in a Plan tree, so that we'd need a new function knowing
as much as SS_finalize_plan knows about that. I also considered
swapping the order of SS_finalize_plan and set_plan_references,
but that fell foul of various assumptions that seem tricky to fix.
So the approach adopted here is to teach SS_finalize_plan itself
to check for such Aggrefs. I refactored things a bit in setrefs.c
to avoid having three copies of the code that does that.
Back-patch of v17 commits d0d44049d and 779ac2c74. When d0d44049d
went in, there was no evidence that it was fixing a reachable bug,
so I refrained from back-patching. Now we have such evidence.
Per bug #18465 from Hal Takahara. Back-patch to all supported
branches.
Discussion: https://postgr.es/m/18465-2fae927718976b22@postgresql.org
Discussion: https://postgr.es/m/2391880.1689025003@sss.pgh.pa.us
Specifically, DROP DATABASE FORCE terminates a background worker even
if the caller couldn't terminate that worker with
pg_terminate_backend().
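For example (hypothetical database name):

    DROP DATABASE somedb WITH (FORCE);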
Commit 3a9b18b309 neglected to update
this. Back-patch to v13, which introduced DROP DATABASE FORCE.
Reviewed by Amit Kapila. Reported by Kirill Reshke.
Discussion: https://postgr.es/m/20240429212756.60.nmisch@google.com
9a974cbcba moved the query in binary_upgrade_set_pg_class_oids to the
outer level, but left the PQclear and query buffer destruction in the
is_index conditional. 353708e1fb fixed the leak of the query buffer
but left the PGresult leak. This moves clearing the result to the
outer level, ensuring that it will always be cleared.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/374550C1-F4ED-4D9D-9498-0FD029CCF674@yesql.se
Backpatch-through: v15
Underscores were added to numeric literals in faff8f8e47. This change
also affected the positional parameters (e.g., $1) rule, which uses
the same production for its digits. But this did not actually work,
because the digits for parameters are processed using atol(), which
does not handle underscores and ignores whatever it cannot parse.
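For example (a sketch; the parameter reference is hypothetical):

    PREPARE p AS SELECT $1_500;
    -- atol("1_500") stops at the underscore and yields 1,
    -- so this was silently treated as $1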
The underscore notation is probably not useful for positional
parameters, so for simplicity revert that rule to its old form that
only accepts digits 0-9.
Author: Erik Wienhold <ewie@ewie.name>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://www.postgresql.org/message-id/flat/5d216d1c-91f6-4cbe-95e2-b4cbd930520c%40ewie.name
Most of the infrastructure for procedure arguments was already
okay with polymorphic output arguments, but it turns out that
CallStmtResultDesc() was a few bricks shy of a load here. It thought
all it needed to do was call build_function_result_tupdesc_t, but
that function specifically disclaims responsibility for resolving
polymorphic arguments. Failing to handle that doesn't seem to be
a problem for CALL in plpgsql, but CALL from plain SQL would get
errors like "cannot display a value of type anyelement", or even
crash outright.
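A sketch of the failing shape (hypothetical names):

    CREATE PROCEDURE getval(INOUT x anyelement)
    LANGUAGE plpgsql
    AS $$ BEGIN x := x; END $$;

    -- from plain SQL, this could fail with
    -- "cannot display a value of type anyelement"
    CALL getval(42);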
In v14 and later we can simply examine the exposed types of the
CallStmt.outargs nodes to get the right type OIDs. But it's a lot
more complicated to fix in v12/v13, because those versions don't
have CallStmt.outargs, nor do they do expand_function_arguments
until ExecuteCallStmt runs. We have to duplicatively run
expand_function_arguments, and then re-determine which elements
of the args list are output arguments.
Per bug #18463 from Drew Kimball. Back-patch to all supported
versions, since it's busted in all of them.
Discussion: https://postgr.es/m/18463-f8cd77e12564d8a2@postgresql.org
Presently, when pg_sequence_last_value() is called for an unlogged
sequence on a standby server, it will error out with a message like
    ERROR: could not open file "base/5/16388": No such file or directory
Since the pg_sequences system view uses pg_sequence_last_value(),
it can error similarly. To fix, modify the function to return NULL
for unlogged sequences on standby servers. Since this bug is
present on all versions since v15, this approach is preferable to
making the ERROR nicer because we need to repair the pg_sequences
view without modifying its definition on released versions. For
consistency, this commit also modifies the function to return NULL
for other sessions' temporary sequences. The pg_sequences view
already appropriately filters out such sequences, so there's no bug
there, but we might as well offer some defense in case someone
invokes this function directly.
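For example (hypothetical sequence name):

    -- now returns NULL on a standby for an unlogged sequence, and
    -- for another session's temporary sequence, instead of erroring
    SELECT pg_sequence_last_value('myseq'::regclass);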
Unlogged sequences were first introduced in v15, but temporary
sequences are much older, so while the fix for unlogged sequences
is only back-patched to v15, the temporary sequence portion is
back-patched to all supported versions.
We could also remove the privilege check in the pg_sequences view
definition in v18 if we modify this function to return NULL for
sequences for which the current user lacks privileges, but that is
left as a future exercise for when v18 development begins.
Reviewed-by: Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/20240501005730.GA594666%40nathanxps13
Backpatch-through: 12
If we recursed to a new call of the same function, with a different
coldeflist (AS clause), it would fail because the inner call would
overwrite the outer call's idea of what to return. This is vaguely
like 1d2fe56e4 and c5bec5426, but it's not due to any API decisions:
it's just that we computed the actual output rowtype at the start of
the call, and saved it in the per-procedure data structure. We can
fix it at basically zero cost by doing the computation at the end
of each call instead of the start.
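A sketch of the failing pattern (hypothetical names; the inner call's
coldeflist overwrote the outer call's saved rowtype):

    CREATE FUNCTION f(lvl int) RETURNS SETOF record
    LANGUAGE plpgsql AS $$
    BEGIN
      IF lvl > 0 THEN
        -- recursive call with a different AS clause
        PERFORM a, b FROM f(lvl - 1) AS t(a int, b int);
      END IF;
      RETURN QUERY SELECT lvl, lvl + 1;
    END $$;

    SELECT * FROM f(1) AS t(x int, y int);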
It's not clear that there's any real-world use-case for such a
function, but given that it doesn't cost anything to fix,
it'd be silly not to.
Per report from Andreas Karlsson. Back-patch to all supported
branches.
Discussion: https://postgr.es/m/1651a46d-3c15-4028-a8c1-d74937b54e19@proxel.se
json_lex_string() relies on pg_encoding_mblen_bounded() to point to the
end of a JSON string when generating an error message, and the input it
uses is not guaranteed to be null-terminated.
It was possible to walk off the end of the input buffer by a few bytes
when the last bytes consist of an incomplete multi-byte sequence, as
token_terminator would point to a location defined by
pg_encoding_mblen_bounded() rather than the end of the input. This
commit switches token_terminator so that the error message uses data
up to the end of the JSON input.
More work should be done so that this code can rely on an equivalent
of report_invalid_encoding(), letting incorrect byte sequences show up
in error messages in a readable form. This requires work for at least two
cases in the JSON parsing API: an incomplete token and an invalid escape
sequence. A more complete solution may be too invasive for a backpatch,
so this is left as a future improvement, taking care of the overread
first.
A test is added on HEAD, as test_json_parser makes this issue
straightforward to check.
Note that pg_encoding_mblen_bounded() no longer has any callers. It
will be removed from HEAD in a separate commit, as it has proven to
encourage unsafe coding.
Author: Jacob Champion
Discussion: https://postgr.es/m/CAOYmi+ncM7pwLS3AnKCSmoqqtpjvA8wmCdoBtKA3ZrB2hZG6zA@mail.gmail.com
Backpatch-through: 13
If -l was specified together with selective-restore options such as -n
or -N, dependent TOC entries such as comments would be omitted from
the listing, even when an actual restore would have selected them.
This happened because PrintTOCSummary neglected to update the te->reqs
marking of the entry they depended on.
Per report from Justin Pryzby. This has been wrong since 0d4e6ed30
taught _tocEntryRequired to sometimes look at the "reqs" marking of
other TOC entries, so back-patch to all supported branches.
Discussion: https://postgr.es/m/ZjoeirG7yxODdC4P@pryzbyj2023
If a plpython-language trigger caused another one to be invoked,
the "TD" dictionary created for the inner one would overwrite the
outer one's "TD" dictionary. This is more or less the same problem
that 1d2fe56e4 fixed for ordinary functions in plpython, so fix it
the same way, by saving and restoring "TD" during a recursive
invocation.
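A sketch of the failing pattern (hypothetical names):

    CREATE TABLE t1 (a int);
    CREATE TABLE t2 (b int);

    CREATE FUNCTION inner_trig() RETURNS trigger
    LANGUAGE plpython3u AS $$
    return None
    $$;
    CREATE TRIGGER t2_trig BEFORE INSERT ON t2
      FOR EACH ROW EXECUTE FUNCTION inner_trig();

    CREATE FUNCTION outer_trig() RETURNS trigger
    LANGUAGE plpython3u AS $$
    plpy.execute("INSERT INTO t2 VALUES (1)")  # fires inner_trig
    # before the fix, TD here was inner_trig's dictionary
    plpy.notice("fired on %s" % TD["table_name"])
    return None
    $$;
    CREATE TRIGGER t1_trig BEFORE INSERT ON t1
      FOR EACH ROW EXECUTE FUNCTION outer_trig();

    INSERT INTO t1 VALUES (1);  -- previously reported "t2", not "t1"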
This fix makes an ABI-incompatible change in struct PLySavedArgs.
I'm not too worried about that because it seems highly unlikely that
any extension is messing with those structs. We could imagine doing
something weird to preserve nominal ABI compatibility in the back
branches, like keeping the saved TD object in an extra element of
namedargs[]. However, that would only be very nominal compatibility:
if anything *is* touching PLySavedArgs, it would likely do the wrong
thing due to not knowing about the additional value. So I judge it
not worth the ugliness to do something different there.
(I also changed struct PLyProcedure, but its added field fits
into formerly-padding space, so that should be safe.)
Per bug #18456 from Jacques Combrink. This bug is very ancient,
so back-patch to all supported branches.
Discussion: https://postgr.es/m/3008982.1714853799@sss.pgh.pa.us
The catalog view pg_stats_ext fails to consider privileges for
expression statistics. The catalog view pg_stats_ext_exprs fails
to consider privileges and row-level security policies. To fix,
restrict the data in these views to table owners or roles that
inherit privileges of the table owner. It may be possible to apply
less restrictive privilege checks in some cases, but that is left
as a future exercise. Furthermore, for pg_stats_ext_exprs, do not
return data for tables with row-level security enabled, as is
already done for pg_stats_ext.
On the back-branches, a fix-CVE-2024-4317.sql script is provided
that will install into the "share" directory. This file can be
used to apply the fix to existing clusters.
Bumps catversion on 'master' branch only.
Reported-by: Lukas Fittl
Reviewed-by: Noah Misch, Tomas Vondra, Tom Lane
Security: CVE-2024-4317
Backpatch-through: 14
The documentation said that you need to pick a suitable LC_COLLATE
setting in addition to setting the DETERMINISTIC flag. This would
have been correct if the libc provider supported nondeterministic
collations, but since it doesn't, you actually need to set the LOCALE
option.
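For instance, with the ICU provider one can write something like:

    CREATE COLLATION case_insensitive (
      provider = icu,
      locale = 'und-u-ks-level2',
      deterministic = false
    );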
Reviewed-by: Kashif Zeeshan <kashi.zeeshan@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/a71023c2-0ae0-45ad-9688-cf3b93d0d65b%40eisentraut.org
When testing pg_upgrade against an old server, ignore failures on the
check to upgrade invalid databases. This is necessary because old
servers don't know to raise the appropriate error about the database
being invalid.
This change causes no reduction in coverage, because such old versions
don't know to mark databases invalid when a drop is interrupted; but
testing against such old servers is useful in some circumstances.
Backpatch to 16, where it cherry-picks with minimal conflicts.
On 16, perltidy 20230309 chooses to change an unrelated line. I let it
do that because that's the version we document as preferred for that
branch, even though it would make other changes to many other files in
the tree.
Discussion: https://postgr.es/m/202404181539.lh42llaesnv3@alvherre.pgsql
94985c210 added code to detect when WindowFuncs were monotonic and
allowed additional quals to be "pushed down" into the subquery to be
used as WindowClause runConditions in order to short-circuit execution
in nodeWindowAgg.c.
The Node representation of runConditions wasn't well chosen, and
because we do qual pushdown before planning the subquery, the planning
of the subquery could perform subquery pull-up of nested subqueries.
For WindowFuncs with arguments, those arguments could be changed after
pushing the qual down to the subquery.
This was made more difficult by the fact that the code duplicated the
WindowFunc inside an OpExpr to include in the WindowClause's runCondition
field. This could result in duplication of subqueries and a pull-up of
such a subquery could result in another initplan parameter being issued
for the 2nd version of the subplan. This could result in errors such as:
    ERROR: WindowFunc not found in subplan target lists
Here in the backbranches, we don't have the flexibility to improve the
Node representation to resolve this, so instead we just disable the
runCondition optimization for ntile() unless the argument is a Const,
(v16 only) and likewise for count(expr) (both v15 and v16). count(*) is
unaffected. All other window functions that support this optimization
take zero arguments and are therefore unaffected.
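For example, the optimization remains usable for a query of this shape
only because ntile()'s argument is a Const (hypothetical names):

    SELECT * FROM (
      SELECT x, ntile(4) OVER (ORDER BY x) AS nt FROM tab
    ) s
    WHERE nt = 1;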
Bug: #18170
Reported-by: Zuming Jiang
Discussion: https://postgr.es/m/18170-f1d17bf9a0d58b24@postgresql.org
Backpatch-through 15 (master will be fixed independently)
A parallel worker's buffer usage is accumulated to its pgBufferUsage
and then accumulated into the leader's counters at the end of the
parallel vacuum. However, since the leader process used dedicated
VacuumPage{Hit, Miss, Dirty} globals for buffer usage reporting, the
workers' buffer usage was not included, leading to an incorrect buffer
usage report.
To fix the problem, this commit makes vacuum use the pgBufferUsage
instrumentation for buffer usage reporting instead of the
VacuumPage{Hit, Miss, Dirty} globals. Those globals are still used by
the ANALYZE command and autoanalyze.
This also fixes the buffer usage report of vacuuming on temporary
tables, since the buffers dirtied by MarkLocalBufferDirty() were not
tracked by the VacuumPageDirty variable.
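For example (hypothetical table name), the buffer usage reported by
this command now includes the parallel workers' activity:

    VACUUM (VERBOSE, PARALLEL 2) big_table;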
Parallel vacuum was introduced in 13, but the buffer usage reporting
for the VACUUM command with the VERBOSE option was implemented in 15,
so backpatch to 15.
Reported-by: Anthonin Bonnefoy
Author: Anthonin Bonnefoy
Reviewed-by: Alena Rybakina, Masahiko Sawada
Discussion: https://postgr.es/m/CAO6_XqrQk+QZQcYs_C6nk0cMfHuUWk85vT9CrcA1NffFbAVE2A@mail.gmail.com
Backpatch-through: 15
As an optimization, we store "name" columns as cstrings in btree
indexes.
Here we modify the Index Only Scan code so that it converts these
cstrings back to names, padded to NAMEDATALEN bytes, rather than
storing the bare cstring in the tuple slot, as was happening
previously.
Bug: #17855
Reported-by: Alexander Lakhin
Reviewed-by: Alexander Lakhin, Tom Lane
Discussion: https://postgr.es/m/17855-5f523e0f9769a566@postgresql.org
Backpatch-through: 12, all supported versions
vac_update_datfrozenxid() did multiple loads of relfrozenxid and
relminmxid from buffer memory, and it assumed each would get the same
value. Not so if a concurrent vac_update_relstats() did an inplace
update. Commit 2d2e40e3be fixed the same
kind of bug in vac_truncate_clog(). Today's bug could cause the
rel-level field and XIDs in the rel's rows to precede the db-level
field. A cluster having such values should VACUUM affected tables.
Back-patch to v12 (all supported versions).
Discussion: https://postgr.es/m/20240423003956.e7.nmisch@google.com