"meson test" used to ignore the PG_TEST_EXTRA environment variable,
which meant that in order to run additional tests, you had to run
"meson setup -DPG_TEST_EXTRA=...". That's somewhat expensive, and not
consistent with autoconf builds. Allow the PG_TEST_EXTRA environment
variable to override the setup-time option at run time, so that you
can do "PG_TEST_EXTRA=... meson test".
To implement this, the configuration-time value is passed as an extra
"--pg-test-extra" argument to testwrap instead of adding it to the
test environment. If the environment variable is set when the tests are
run, testwrap uses the value from the environment variable and ignores
the --pg-test-extra option.
Now that "meson test" obeys the environment variable, we can remove it
from the "meson setup" steps in the CI script. It will now be picked
up from the environment variable like with "make check".
Author: Nazir Bilal Yavuz, Ashutosh Bapat
Reviewed-by: Ashutosh Bapat with inputs from Tom Lane and Andrew Dunstan
arrays.sql was already missing it before 49d6c7d8da, and I have just
noticed it thanks to this commit. The second one in test_slru has been
introduced by 768a9fd553.
A CacheInvalidateHeapTuple* callee might call
CatalogCacheInitializeCache(), which needs a relcache entry. Acquiring
a valid relcache entry might scan pg_class. Hence, to prevent
undetected LWLock self-deadlock, CacheInvalidateHeapTuple* callers must
not hold BUFFER_LOCK_EXCLUSIVE on buffers of pg_class. Move the
CacheInvalidateHeapTupleInplace() call before the BUFFER_LOCK_EXCLUSIVE
acquisition. No back-patch, since I've reverted commit 243e9b40f1 from
non-master branches.
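A minimal sketch of the resulting ordering, with arguments and
surrounding code simplified (not the actual inplace-update call site):

    /*
     * Register the invalidation first; this may build catcache state and
     * scan pg_class, so it must happen before we lock the buffer.
     */
    CacheInvalidateHeapTupleInplace(relation, tuple, NULL);

    LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE); /* only now safe to lock */
    /* ... modify the pg_class tuple in place ... */
    LockBuffer(buffer, BUFFER_LOCK_UNLOCK);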
Reported by Alexander Lakhin. Reviewed by Alexander Lakhin.
Discussion: https://postgr.es/m/10ec0bc3-5933-1189-6bb8-5dec4114558e@gmail.com
Commit a07e03fd8f enlarged the work done
here under the pg_class heap buffer lock. Two preexisting actions are
best done before holding that lock. Both RelationGetNumberOfBlocks()
and visibilitymap_count() do I/O, and the latter might exclusive-lock a
visibility map buffer. Moving these reduces contention and risk of
undetected LWLock deadlock. Back-patch to v12, like that commit.
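A sketch of the reordering, with variable names simplified (assumed
shape, not the exact code):

    /* Do the I/O-bound lookups before taking the pg_class buffer lock. */
    num_pages = RelationGetNumberOfBlocks(rel);
    visibilitymap_count(rel, &all_visible, &all_frozen); /* may lock a VM buffer */

    LockBuffer(pg_class_buf, BUFFER_LOCK_EXCLUSIVE);
    /* ... update relpages/relallvisible in the pg_class tuple ... */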
Discussion: https://postgr.es/m/20241031200139.b4@rfd.leadboat.com
Add README.non-ASCII to explain non-ASCII doc behavior; some text moved
from release.sgml.
Change UTF8 SGML characters to use HTML entities.
Remove unnecessary UTF8 spaces.
Add SVG file check for check-nbsp target.
Add dummy 'pdf' Makefile target.
Reported-by: Yugo Nagata
Discussion: https://postgr.es/m/20241011114122.c90f8a871462da36f2e2afeb@sraoss.co.jp
Backpatch-through: master
Instead of talking about setting latches, which is a pretty low-level
mechanism, emphasize that they wake up other processes.
This is in preparation for replacing Latches with a new abstraction.
That's still work in progress, but this seems a little tidier anyway,
so let's get this refactoring out of the way already.
Discussion: https://www.postgresql.org/message-id/391abe21-413e-4d91-a650-b663af49500c%40iki.fi
On closer inspection, this makes the all-zero check of the page more
expensive, so let's remove the new function call in bufpage.c. The
arithmetic of the check was also incorrect: it verified only the first
1kB of the page for zeros.
This brings the code back to the state it was in as of 49d6c7d8da.
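For illustration, the kind of length mix-up described here (assumed
form, not a quote of the reverted code): passing a word count where the
helper expects a byte count covers only BLCKSZ / sizeof(size_t) bytes,
i.e. 1kB of an 8kB page.

    pg_memory_is_all_zeros(page, BLCKSZ / sizeof(size_t)); /* only 1kB checked */
    pg_memory_is_all_zeros(page, BLCKSZ);                  /* the whole page */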
Per discussion with David Rowley and Bertrand Drouvot.
Discussion: https://postgr.es/m/CAApHDvrXzPAr3FxoBuB7b3D-okNoNA2jxLun1rW8Yw5wkbqusw@mail.gmail.com
This new function tests whether a memory region of a given length,
starting at a given location, consists only of zeroes. This unifies in a single
path the all-zero checks that were happening in a couple of places of
the backend code:
- For pgstats entries of relation, checkpointer and bgwriter, where
some "all_zeroes" variables were previously used with memcmp().
- For all-zero buffer pages in PageIsVerifiedExtended().
This new function uses the same forward scan as the check for all-zero
buffer pages, applying it to the three pgstats paths mentioned above.
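A minimal sketch of such a helper, assuming the plain byte-wise forward
scan described here (the in-tree version may be written differently):

    static inline bool
    pg_memory_is_all_zeros(const void *ptr, size_t len)
    {
        const unsigned char *p = (const unsigned char *) ptr;

        for (size_t i = 0; i < len; i++)
        {
            if (p[i] != 0)
                return false;
        }
        return true;
    }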
Author: Bertrand Drouvot
Reviewed-by: Peter Eisentraut, Heikki Linnakangas, Peter Smith
Discussion: https://postgr.es/m/ZupUDDyf1hHI4ibn@ip-10-97-1-34.eu-west-3.compute.internal
This function takes an array as input and reverses the position of all
its elements. This operation only affects the first dimension of the
array, like array_shuffle().
The implementation structure is inspired by array_shuffle(), with a
subroutine called array_reverse_n() that may come in handy in the
future, should more functions able to reverse portions of arrays be
introduced.
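As a rough sketch of the idea, swapping elements from both ends toward
the middle (hypothetical helper; the in-tree array_reverse_n() works on
the raw array representation and may differ):

    static void
    reverse_elements(Datum *elems, bool *nulls, int nelems)
    {
        for (int i = 0, j = nelems - 1; i < j; i++, j--)
        {
            Datum   d = elems[i];
            bool    isnull = nulls[i];

            elems[i] = elems[j];
            nulls[i] = nulls[j];
            elems[j] = d;
            nulls[j] = isnull;
        }
    }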
Bump catalog version.
Author: Aleksander Alekseev
Reviewed-by: Ashutosh Bapat, Tom Lane, Vladlen Popolitov
Discussion: https://postgr.es/m/CAJ7c6TMpeO_ke+QGOaAx9xdJuxa7r=49-anMh3G5476e3CX1CA@mail.gmail.com
This patch responds to a comment that I (tgl) made in the
discussion leading up to 774171c4f, that really all errors
occurring during raw parsing should provide error cursors.
Syntax errors reported by Bison will have one, and most of
the handwritten ereport() calls in gram.y already provide one,
but there were a few stragglers.
(It is not claimed that this handles every failure reachable
during raw parsing --- out-of-memory is an obvious exception.
But this makes a good start on cases that are likely to occur.)
While we're at it, clean up the reported positions for errors
associated with LIMIT/OFFSET clauses. Previously we were
relying on applying exprLocation() to the contained expressions,
but that leads to slightly odd cursor placement, e.g.
regression=# (select * from foo limit 10) limit 10;
ERROR: multiple LIMIT clauses not allowed
LINE 1: (select * from foo limit 10) limit 10;
^
We can afford to keep a little more state in the transient
SelectLimit structs in order to make that better.
Jian He and Tom Lane (extracted from a larger patch by Jian,
with some additional work by me)
Discussion: https://postgr.es/m/CACJufxEmONE3P2En=jopZy1m=cCCUs65M4+1o52MW5og9oaUPA@mail.gmail.com
This allows an error cursor to be supplied for a bunch of
bad-function-definition errors that previously lacked one,
or that cheated a bit by pointing at the contained type name
when the error isn't really about that.
Bump catversion from an abundance of caution --- I don't think
this node type can actually appear in stored views/rules, but
better safe than sorry.
Jian He and Tom Lane (extracted from a larger patch by Jian,
with some additional work by me)
Discussion: https://postgr.es/m/CACJufxEmONE3P2En=jopZy1m=cCCUs65M4+1o52MW5og9oaUPA@mail.gmail.com
Buildfarm member 'prion', which is configured with
-DRELCACHE_FORCE_RELEASE -DCATCACHE_FORCE_RELEASE, failed with errors
like this:
ERROR: could not read blocks 0..0 in file "global/2672": read only 0 of 8192 bytes
while running a parallel test group that includes VACUUM FULL on some
catalog tables among other things. I was not able to reproduce that
just by running the tests with -DRELCACHE_FORCE_RELEASE
-DCATCACHE_FORCE_RELEASE, even though 'prion' hit it on first run
after commit 2b9b8ebbf8, so there might be something else that makes
it more susceptible to the race. However, I was able to reproduce it
by adding another test to the same test group that runs "vacuum full
pg_database" repeatedly.
The problem is that RelationReloadIndexInfo() no longer calls
RelationInitPhysicalAddr() on a nailed, shared index, when an
invalidation happens early during backend startup, before the critical
relcaches have been built. Before commit 2b9b8ebbf8, that was done by
RelationReloadNailed(), but it went missing from that path. Add it
back as an explicit step.
Broken by commit 2b9b8ebbf8, which refactored these functions.
Discussion: https://www.postgresql.org/message-id/db876575-8f5b-4193-a538-df7e1f92d47a%40iki.fi
The old RelationClearRelation function did different things depending
on the arguments and circumstances. It could:
a) remove the relation completely from relcache (rebuild == false),
b) mark the entry as invalid (rebuild == true, but not in xact), or
c) rebuild the entry (rebuild == true).
Different callers used it for different purposes, and often assumed a
particular behavior, which was confusing. Split it into three
different functions, one for each of the above actions (one of them,
RelationInvalidateRelation, was already added in commit e6cd857726).
Move the responsibility of choosing the action and calling the right
function to the callers.
Reviewed-by: jian he <jian.universality@gmail.com>
Discussion: https://www.postgresql.org/message-id/9c9e8908-7b3e-4ce7-85a8-00c0e165a3d6%40iki.fi
RelationClearRelation(rebuild == true) calls RelationReloadIndexInfo()
for indexes. We can rely on that in RelationIdGetRelation(), instead
of calling RelationReloadIndexInfo() directly. That simplifies the
code a little.
In passing, add a comment in RelationBuildLocalRelation()
explaining why it doesn't call RelationInitIndexAccessInfo(): at index
creation, RelationBuildLocalRelation() is called before the pg_index row has
been created. That's also the reason that RelationClearRelation()
still needs a special case to go through the full-blown rebuild if the
index support information in the relcache entry hasn't been populated
yet.
Reviewed-by: jian he <jian.universality@gmail.com>
Discussion: https://www.postgresql.org/message-id/9c9e8908-7b3e-4ce7-85a8-00c0e165a3d6%40iki.fi
9f00edc228 disabled a permutation due to failures in the CI for FreeBSD
environments, but this is a matter of timing. Let's properly document
why this type of permutation is a bad idea when relying on a wait done
in a SQL function, so that this can be avoided when implementing new
tests (this spec also serves as a template).
Reviewed-by: Bertrand Drouvot
Discussion: https://postgr.es/m/ZyCa2qsopKaw3W3K@paquier.xyz
Follow-up to bugfix commit 763d65ae. Technically this new assertion is
redundant with the assertion recently added to _bt_readpage by that same
commit, but it seems like a good idea to have both.
The new assertion makes it clear that we expect to call _bt_readnextpage
when there's another primitive index scan scheduled, though only when
needed as the final step of ending the current primitive scan.
Strictly speaking, we only need to make sure to leave the scan's array
keys in their final positions (final for the current scan direction) to
handle SAOP array exhaustion because btgettuple might only return a
subset of the items for the final page (final for the current scan
direction), before the scan changes direction. While it's typical for
so->currPos to be invalidated shortly after the scan's arrays are first
exhausted, and while so->currPos invalidation does obviate the need to
leave the scan's arrays in any particular state, we can't rely on any of
that actually happening when handling array exhaustion. Adjust comments
to make all of that a lot clearer.
Oversight in commit 5bf748b8, which enhanced nbtree ScalarArrayOp
execution.
Presently, each iteration of the loop in sift_down() will perform
3 comparisons if both children are larger than the parent node (2
for comparing each child to the parent node, and a third to compare
the children to each other). By first comparing the children to
each other and then comparing the larger child to the parent node,
we can accomplish the same thing with just 2 comparisons (while
also not affecting the number of comparisons in any other case).
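In outline, the loop body becomes something like this (hypothetical
cmp()/swap() helpers; the real code is binaryheap's sift_down()):

    /* One comparison picks the larger child ... */
    largest = left_child;
    if (right_child < heap_size && cmp(right_child, left_child) > 0)
        largest = right_child;

    /* ... and one more decides whether it must move above the parent. */
    if (cmp(largest, parent) <= 0)
        break;                  /* heap property already holds */
    swap(parent, largest);
    parent = largest;           /* keep sifting down from the new slot */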
Author: ChangAo Chen
Reviewed-by: Robert Haas
Discussion: https://postgr.es/m/tencent_0142D8DA90940B9930BCC08348BBD6D0BB07%40qq.com
An operation like '12:34:56'::time_tz takes the UTC offset from
the prevailing time zone, which means that the results change
across DST transitions. One of the test cases added in ed055d249
failed to consider this.
Per report from Bernhard Wiedemann. Back-patch to v17, as the
test case was.
Discussion: https://postgr.es/m/ba8e1bc0-8a99-45b7-8397-3f2e94415e03@suse.de
A bug in nbtree's handling of primitive index scan scheduling could lead
to wrong answers when a scrollable cursor was used with an index scan
that had a SAOP index qual. Wrong answers were only possible when the
scan direction changed after a primitive scan was scheduled, but before
_bt_next was asked to fetch the next tuple in line (i.e. for things to
break, _bt_next had to be denied the opportunity to step off the page in
the same direction as the one used when the primscan was scheduled).
Furthermore, the issue only occurred when the page in question happened
to be the first page to be visited by the entire top-level scan; the
issue hinged upon the cursor backing up to the absolute beginning of the
key space that it returns tuples from (fetching in the opposite scan
direction across a "primitive scan boundary" always worked correctly).
To fix, make _bt_next unset the "needs primitive index scan" flag when
it detects that the current scan direction is not the one that was used
by _bt_readpage back when the primitive scan in question was scheduled.
This fixes the cases that are known to be faulty, and also seems like a
good idea on general robustness grounds.
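In outline, the fix amounts to a check of roughly this shape in
_bt_next (a sketch based on the description above; field and variable
names are assumptions, not quotes of the patch):

    if (so->needPrimScan && dir != so->currPos.dir)
        so->needPrimScan = false;   /* direction changed since scheduling */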
Affected scrollable cursor cases now avoid a spurious primitive index
scan when they fetch backwards to the absolute start of the key space to
be visited by their cursor. Fetching backwards now only returns those
tuples at the start of the scan, as expected. It'll also be okay to
once again fetch forwards from the start at that point, since the scan
will be left in a state that's exactly consistent with the state it was
in before any tuples were ever fetched, as expected.
Oversight in commit 5bf748b8, which enhanced nbtree ScalarArrayOp
execution.
Author: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAH2-Wznv49bFsE2jkt4GuZ0tU2C91dEST=50egzjY2FeOcHL4Q@mail.gmail.com
Backpatch: 17-, where commit 5bf748b8 first appears.
* In DetachPartitionFinalize() we were applying a tuple conversion map
to tuples that didn't need one, which can lead to erratic behavior if
a partitioned table has a partition with a different column order, as
reported by Alexander Lakhin. This was introduced by 53af9491a0.
Don't do that. Also, modify a recently added test case to exercise
this.
* The same function as well as CloneFkReferenced() were acquiring
AccessShareLock on a partition, only to have CreateTrigger() later
acquire ShareRowExclusiveLock on it. This can lead to deadlock by
lock escalation, unnecessarily. Avoid that by acquiring the stronger
lock to begin with. This probably dates back to branch 12, but I have
never seen a report of this being a problem in the field.
* Innocuous but wasteful: also introduced by 53af9491a0, we were
reading a pg_constraint tuple from syscache that we don't need, as
reported by Tender Wang. Don't.
Backpatch to 15.
Discussion: https://postgr.es/m/461e9c26-2076-8224-e119-84998b6a784e@gmail.com
This commit allows logical replication to publish and replicate generated
columns when explicitly listed in the column list. We also ensure that
generated columns are copied during the initial tablesync when they are
published.
We will allow generated columns to be replicated even when they are not
specified in the column list (via a new publication option) in a separate
commit.
The motivation of this work is to allow replication for cases where the
client doesn't have generated columns. For example, the case where one is
trying to replicate data from Postgres to a non-Postgres database.
Author: Shubham Khanna, Vignesh C, Hou Zhijie
Reviewed-by: Peter Smith, Hayato Kuroda, Shlok Kyal, Amit Kapila
Discussion: https://postgr.es/m/B80D17B2-2C8E-4C7D-87F2-E5B4BE3C069E@gmail.com
Commit a07e03fd8f changed inplace updates
to wait for heap_update() commands like GRANT TABLE and GRANT DATABASE.
By keeping the pin during that wait, a sequence of autovacuum workers
and an uncommitted GRANT starved one foreground LockBufferForCleanup()
for six minutes, on buildfarm member sarus. Prevent that, at the cost
of a bit of complexity. Back-patch to v12, like the earlier commit. That
commit and heap_inplace_lock() have not yet appeared in any release.
Discussion: https://postgr.es/m/20241026184936.ae.nmisch@google.com
Historical corrections for Mexico, Mongolia, and Portugal.
Notably, Asia/Choibalsan is now an alias for Asia/Ulaanbaatar
rather than being a separate zone, mainly because the differences
between those zones were found to be based on untrustworthy data.
c01743aa4 and later 161320b4b adjusted the EXPLAIN output to show which
plan nodes were chosen despite being disabled by the various enable*
GUCs. Prior to e22253467, the disabledness of a node was evident only as
a large startup cost penalty. Since we now explicitly tag disabled nodes
with a boolean property in EXPLAIN, let's add documentation detailing
why and when disabled nodes can appear in a plan.
Author: Laurenz Albe, David Rowley
Discussion: https://postgr.es/m/883729e429267214753d5e438c82c73a58c3db5d.camel@cybertec.at
There are two functions that can be used in event triggers to get more
details about a rewrite happening on a relation. Both had limited
documentation:
- pg_event_trigger_table_rewrite_reason() and
pg_event_trigger_table_rewrite_oid() were not mentioned in the main
event trigger section in the paragraph dedicated to the event
table_rewrite.
- pg_event_trigger_table_rewrite_reason() returns an integer which is a
bitmap of the reasons why a rewrite happens. There was no explanation
about the meaning of these values, forcing the reader to look at the
code to find out that these are defined in event_trigger.h.
While at it, let's add a comment in event_trigger.h, where the
AT_REWRITE_* flags are defined, reminding developers to update the
documentation when these values are changed.
Backpatch down to 13 as a consequence of 1ad23335f3, where this area
of the documentation has been heavily reworked.
Author: Greg Sabino Mullane
Discussion: https://postgr.es/m/CAKAnmmL+Z6j-C8dAx1tVrnBmZJu+BSoc68WSg3sR+CVNjBCqbw@mail.gmail.com
Backpatch-through: 13
Disabling enable_indexscan has always also disabled Index Only Scans.
Here we make that clearer in the documentation in an attempt to
prevent future complaints about this expected behavior.
Reported-by: Melanie Plageman
Author: David G. Johnston, David Rowley
Backpatch-through: 12, oldest supported version
Discussion: https://postgr.es/m/CAAKRu_atV=kovgpaLREyG68PB5+ncKvJ2UNoeRetEgyC3Yb5Sw@mail.gmail.com
A pg_depend entry between a partitioned table and its table access
method was missing when using CREATE TABLE .. USING with an unpinned
access method. DROP ACCESS METHOD could then be used even though a
partitioned table still depended on the table access method, while it
should have been blocked unless CASCADE was specified. pg_class.relam
would then hold an orphaned OID value still pointing to the dropped AM.
The problem is fixed by adding a dependency between the partitioned
table and its table access method if set when the relation is created.
A test checking the contents of pg_depend in this case is added.
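A sketch of the dependency record involved, following the usual
recordDependencyOn() pattern (simplified; the actual change lives in the
relation-creation path):

    ObjectAddress myself;
    ObjectAddress referenced;

    /* Make the partitioned table depend on its table access method. */
    ObjectAddressSet(myself, RelationRelationId, relid);
    ObjectAddressSet(referenced, AccessMethodRelationId, accessmtd);
    recordDependencyOn(&myself, &referenced, DEPENDENCY_NORMAL);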
Issue introduced in 374c7a2290, which added support for CREATE
TABLE .. USING for partitioned tables.
Reviewed-by: Alexander Lakhin
Discussion: https://postgr.es/m/18674-1ef01eceec278fab@postgresql.org
Backpatch-through: 17
I assumed that all index_drop() callers set an active snapshot
beforehand, but that is evidently not true. One counterexample is
autovacuum, which doesn't set an active snapshot when cleaning up
orphan temp indexes. To fix, unconditionally push an active
snapshot before updating pg_index in index_drop().
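The shape of the fix, per the description above (context simplified):

    /* Make sure the catalog update below runs with an active snapshot. */
    PushActiveSnapshot(GetTransactionSnapshot());

    /* ... update the pg_index row of the index being dropped ... */

    PopActiveSnapshot();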
Oversight in commit b52adbad46.
Reported-by: Masahiko Sawada
Reviewed-by: Stepan Neretin, Masahiko Sawada
Discussion: https://postgr.es/m/CAD21AoBgF9etQrXbN9or_YHsmBRJHHNUEkhHp9rGK9CyQv5aTQ%40mail.gmail.com
As threatened in the previous patch, define MaxAllocSize in
src/include/common/fe_memutils.h rather than having several
copies of it in different src/common/*.c files. This also
provides an opportunity to document it better.
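A sketch of what the shared definition looks like (the value matches
the backend's MaxAllocSize; the exact comment and surrounding guards in
fe_memutils.h are assumptions):

    /* Largest request we will attempt to allocate: 1 GB minus 1 byte. */
    #define MaxAllocSize    ((Size) 0x3fffffff)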
While this would probably be safe to back-patch, I'll refrain
(for now anyway).
Coverity complained that pg_saslprep() could suffer integer overflow,
leading to under-allocation of the output buffer, if the input string
exceeds SIZE_MAX/4. This hazard seems largely hypothetical, but it's
easy enough to defend against, so let's do so.
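A sketch of the kind of guard involved, assuming the output buffer is
sized at sizeof(pg_wchar) (four) bytes per input byte; the bound, error
code, and plain malloc() are illustrative rather than the exact code:

    input_size = strlen(input);

    /* Refuse inputs so long that the buffer-size computation could overflow. */
    if (input_size > MaxAllocSize / sizeof(pg_wchar) - 1)
        return SASLPREP_OOM;

    output = malloc((input_size + 1) * sizeof(pg_wchar));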
This patch creates a third place in src/common/ where we are locally
defining MaxAllocSize so that we can test against that in the same way
in backend and frontend compiles. That seems like about two places
too many, so the next patch will move that into common/fe_memutils.h.
I'm hesitant to do that in back branches however.
Back-patch to v14. The code looks similar in older branches, but
before commit 67a472d71 there was a separate test on the input string
length that prevented this hazard.
Per Coverity report.
Revert commit 924e03917 in favor of adding code to convert \r\n to \n
explicitly, on Windows only. The idea of letting text mode do the
work fails for a couple of reasons:
* Per Microsoft documentation, text mode also causes control-Z to be
interpreted as end-of-file. While it may be unlikely that extension
scripts contain control-Z, we've historically allowed it, and breaking
the case doesn't seem wise.
* Apparently, on some Windows configurations, "r" mode is interpreted
as binary not text mode. We could force it with "rt" but that would
be inconsistent with our code elsewhere, and it would still require
Windows-specific coding.
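A sketch of the explicit conversion mentioned at the top of this entry,
assuming the script has already been read into a NUL-terminated buffer
(names illustrative):

    #ifdef WIN32
        /* Collapse \r\n into \n in place; lone \r bytes are preserved. */
        char   *src = scriptbuf;
        char   *dst = scriptbuf;

        while (*src)
        {
            if (src[0] == '\r' && src[1] == '\n')
                src++;          /* drop the \r, keep the \n */
            *dst++ = *src++;
        }
        *dst = '\0';
    #endif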
Thanks to Alexander Lakhin for investigation.
Discussion: https://postgr.es/m/79284195-4993-7b00-f6df-8db28ca60fa3@gmail.com