Commit 5c649fe15 introduced a memory leak into pg_basebackup's
parse_compress_options. (I simplified nearby code while at it.)
Commit 9a974cbcb introduced a memory leak into pg_dump's
binary_upgrade_set_pg_class_oids.
Coverity also complained about a call of SnapBuildProcessChange that
ignored the result, unlike every other call of that function. This
is evidently intentional, so add a (void) cast to indicate that.
(It's also old, dating to b89e15105; I suppose the reason it showed
up now is 7a5f6b474's recent rearrangement of nearby code.)
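For illustration, a minimal sketch of the (void)-cast idiom (the argument
names here are invented, not copied from the actual call site):

    /* The result is intentionally ignored; the cast documents that. */
    (void) SnapBuildProcessChange(builder, xid, lsn);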
In the normal configuration where GEQO_DEBUG isn't defined,
recent clang versions have started to complain that geqo_main.c
accumulates the edge_failures count but never does anything
with it. As a minimal back-patchable fix, insert a void cast
to silence this warning. (I'd speculated about ripping out the
GEQO_DEBUG logic altogether, but I don't think we'd wish to
back-patch that.)
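As a sketch of what such a minimal fix can look like (the placement and
surrounding #ifdefs shown here are illustrative):

    #ifndef GEQO_DEBUG
        /* mark the counter as deliberately unused to silence the warning */
        (void) edge_failures;
    #endif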
Per recently-established project policy, this is a candidate
for back-patching into out-of-support branches: it suppresses
an annoying compiler warning but changes no behavior. Hence,
back-patch all the way to 9.2.
Discussion: https://postgr.es/m/CA+hUKGLTSZQwES8VNPmWO9AO0wSeLt36OCPDAZTccT1h7Q7kTQ@mail.gmail.com
In sort_inner_and_outer we iterate over a list of PathKey elements, but the
variable is declared as (List *). This mistake is benign, because we
only pass the pointer to lcons() and never dereference it.
This has existed since ~2004, but it's confusing, so fix it and backpatch
to all supported branches.
Backpatch-through: 10
Discussion: https://postgr.es/m/bf3a6ea1-a7d8-7211-0669-189d5c169374%40enterprisedb.com
The syscache lookup may return NULL even for a valid OID, for example due
to a concurrent DROP STATISTICS, so a HeapTupleIsValid check is necessary.
Without it, the code may crash with a segfault.
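A minimal sketch of the defensive pattern (variable names are
illustrative, and the error message is not the exact one used):

    HeapTuple   tup;

    tup = SearchSysCache1(STATEXTOID, ObjectIdGetDatum(stxoid));
    if (!HeapTupleIsValid(tup))
        ereport(ERROR,
                (errcode(ERRCODE_UNDEFINED_OBJECT),
                 errmsg("statistics object with OID %u does not exist",
                        stxoid)));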
Reported by Alexander Lakhin, patch by me. Backpatch to 13, where ALTER
STATISTICS ... SET STATISTICS was introduced.
Backpatch-through: 13
Discussion: https://postgr.es/m/17372-bf3b6e947e35ae77%40postgresql.org
Putting "inline" on a function that's not used anywhere in its
own file is useless unless the linker is doing global optimization,
a method we don't generally enable. Moreover, it draws warnings
from some buildfarm members (curculio at least).
Looks like this was sloppiness in cc8b25712, which moved the
function from somewhere else where the inline marker was
more appropriate.
These functions aren't exact inverses, but they're closely related;
yet we document them in two different sections. Add cross-ref
<link>s to improve that situation.
While here, move the strpos and substr entries to re-alphabetize
Table 9.10. Also, drop an ancient compatibility note about
string_to_array, which wasn't even on the right page, so probably
few people ever saw it.
Discussion: https://postgr.es/m/164287017550.704.5840412183184961677@wrigleys.postgresql.org
Previously, unless we had to add a NOT NULL constraint to the column,
this command resulted in updating only the index's relcache entry.
That's problematic when replication behavior is being driven off the
existence of a primary key: other sessions (and ours too for that
matter) failed to recalculate their opinion of whether the table can
be replicated. Add a relcache invalidation to fix it.
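A minimal sketch of the kind of fix involved (the relation variable name
is illustrative, not the exact code):

    /*
     * Invalidate the table's own relcache entry as well, so that all
     * sessions (including ours) recompute whether it has a primary key.
     */
    CacheInvalidateRelcache(rel);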
This has been broken since pg_class.relhaspkey was removed in v11.
Before that, updating the table's relhaspkey value sufficed to cause
a cache flush. Hence, backpatch to v11.
Report and patch by Hou Zhijie
Discussion: https://postgr.es/m/OS0PR01MB5716EBE01F112C62F8F9B786947B9@OS0PR01MB5716.jpnprd01.prod.outlook.com
In libpq and ecpglib, multiple threads can concurrently enter the
initialization logic for message localization. Since we set the
its-done flag before actually doing the work, it'd be possible
for some threads to reach gettext() before anyone has called
bindtextdomain(). Barring bugs in libintl itself, this would not
result in anything worse than failure to localize some early
messages. Nonetheless, it's a bug, and an easy one to fix.
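A minimal sketch of the ordering fix, with invented names (the real code
in libpq and ecpglib differs in detail):

    #include <libintl.h>
    #include <stdbool.h>

    static volatile bool message_domain_bound = false;

    static void
    bind_message_domain(void)
    {
        if (!message_domain_bound)
        {
            /* do the work first (domain and directory are illustrative) ... */
            bindtextdomain("libpq", "/usr/share/locale");
            /* ... and only then publish the flag, so no thread can reach
             * gettext() believing setup is done before it actually is */
            message_domain_bound = true;
        }
    }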
Noted while investigating bug #17299 from Clemens Zeidler
(much thanks to Liam Bowen for followup investigation on that).
It currently appears that that actually *is* a bug in libintl itself,
but that doesn't let us off the hook for this bit.
Back-patch to all supported versions.
Discussion: https://postgr.es/m/17299-7270741958c0b1ab@postgresql.org
Discussion: https://postgr.es/m/CAE7q7Eit4Eq2=bxce=Fm8HAStECjaXUE=WBQc-sDDcgJQ7s7eg@mail.gmail.com
While individual logical rewrite files were synced to disk, the directory was
not. On some filesystems that could lead to losing directory entries after a
crash.
Reported-By: Tom Lane <tgl@sss.pgh.pa.us>
Author: Nathan Bossart <bossartn@amazon.com>
Discussion: https://postgr.es/m/867F2E29-2782-4869-970E-B984C6D35A8F@amazon.com
Backpatch: 10-
Commit 27e1f1456 resets the per-connection states of connections used to
scan foreign tables asynchronously during abort cleanup at main
transaction end, but it failed to do so during subabort cleanup at
subtransaction end, leading to a segmentation fault when re-executing an
asynchronous-foreign-table-scan query in a transaction after the query had
been cancelled in a subtransaction of it.
Fix by modifying pgfdw_abort_cleanup() to reset the per-connection state
of a given connection also when called for subabort cleanup. Also,
modify that function to do the reset in both the abort-cleanup and
subabort-cleanup cases only when necessary, to save cycles, and improve a
comment on it a little bit.
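A rough sketch of the intended behavior (the function signature and field
names are simplified/invented, not the actual postgres_fdw code):

    static void
    pgfdw_abort_cleanup_sketch(ConnCacheEntry *entry, bool toplevel)
    {
        /*
         * Reset the per-connection asynchronous-scan state in both the
         * abort and subabort paths, but only when there is anything to
         * reset, to save cycles.
         */
        if (entry->state.pendingAreq)
            memset(&entry->state, 0, sizeof(entry->state));

        if (toplevel)
        {
            /* ... additional top-level-only cleanup would go here ... */
        }
    }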
Back-patch to v14 where the aforesaid commit came in.
Reviewed by Alexander Pyhalov
Discussion: https://postgr.es/m/CAPmGK14cCV-JA7kNsyt2EUTKvZ4xkr2LNRthi1U1C3cqfGppAw@mail.gmail.com
The logic in charge of writing commit timestamps (enabled with
track_commit_timestamp) for subtransactions had an off-by-one bug,
which could cause the commit timestamp of the last committed
subtransaction to go missing.
While at it, simplify a bit the iteration logic in the loop writing the
commit timestamps, as per suggestions from Kyotaro Horiguchi and Tom
Lane, so that some variable initializations are not part of the loop
itself.
Issue introduced in 73c986a.
Analyzed-by: Alex Kingsborough
Author: Alex Kingsborough, Kyotaro Horiguchi
Discussion: https://postgr.es/m/73A66172-4050-4F2A-B7F1-13508EDA2144@amazon.com
Backpatch-through: 10
The option --compress is extended to accept a compression method and an
optional compression level, using the grammar METHOD[:LEVEL]. The
methods currently supported are "none" and "gzip", for client-side
compression. Any of those methods accepts only an integer value for the
compression level, but any method implemented in the future could use
more specific keywords if necessary.
This commit keeps the logic backward-compatible. Hence, the following
compatibility rules apply for the new format of the option --compress:
* -z/--gzip is a synonym of --compress=gzip.
* --compress=NUM implies:
** --compress=none if NUM = 0.
** --compress=gzip:NUM if NUM > 0.
Note that there are also plans to extend this grammar further with
server-side compression.
Reviewed-by: Robert Haas, Magnus Hagander, Álvaro Herrera, David
G. Johnston, Georgios Kokolatos
Discussion: https://postgr.es/m/Yb3GEgWwcu4wZDuA@paquier.xyz
Commits 6c4a8903b et al. had a couple of deficiencies:
* The logic I added to Cluster::start to see if a PID file is present
could be fooled by a stale PID file left over from a previous
postmaster. To fix, if we're not sure whether we expect to find a
running postmaster or not, validate the PID using "kill 0".
* 017_shm.pl has a loop in which it just issues repeated Cluster::start
calls; this will fail if some invocation fails but leaves self->_pid
set. Per buildfarm results, the above fix is not enough to make this
safe: we might have "validated" a PID for a postmaster that exits
immediately after we look. Hence, match each failed start call with
a stop call that will get us back to the self->_pid == undef state.
Add a fail_ok option to Cluster::stop to make this work.
Discussion: https://postgr.es/m/CA+hUKGKV6fOHvfiPt8=dOKzvswjAyLoFoJF1iQXMNpi7+hD1JQ@mail.gmail.com
pg_basebackup now has a --target=TARGET[:DETAIL] option. If specified,
it is sent to the server as the value of the TARGET option to the
BASE_BACKUP command. If DETAIL is included, it is sent as the value of
the new TARGET_DETAIL option to the BASE_BACKUP command. If the
target is anything other than 'client', pg_basebackup assumes that it
will now be the server's job to write the backup in a location somehow
defined by the target, and that it therefore needs to write nothing
locally. However, the server will still send messages to the client
for progress reporting purposes.
On the server side, we now support two additional types of backup
targets. There is a 'blackhole' target, which just throws away the
backup data without doing anything at all with it. Naturally, this
should only be used for testing and debugging purposes, since you will
not actually have a backup when it finishes running. More usefully,
there is also a 'server' target, so you can now use something like
'pg_basebackup -Xnone -t server:/SOME/PATH' to write a backup to some
location on the server. We can extend this to more types of targets
in the future, and might even want to create an extensibility
mechanism for adding new target types.
Since WAL fetching is handled with separate client-side logic, it's
not part of this mechanism; thus, backups with non-default targets
must use -Xnone or -Xfetch.
Patch by me, with a bug fix by Jeevan Ladhe. The patch set of which
this is a part has also had review and/or testing from Tushar Ahuja,
Suraj Kharage, Dipesh Pandit, and Mark Dilger.
Discussion: http://postgr.es/m/CA+TgmoaYZbz0=Yk797aOJwkGJC-LK3iXn+wzzMx7KdwNpZhS5g@mail.gmail.com
Since the test requires reproducible behavior from VACUUM, and since
DISABLE_PAGE_SKIPPING doesn't actually disable all forms of page
skipping, let's use a temporary table to avoid contention.
Back-patch to 12, like commit 3414099c.
Discussion: https://postgr.es/m/20220120052404.sonrhq3f3qgplpzj%40alap3.anarazel.de
Previously, callers of pg_newlocale_from_collation() did not call it
if the collation was DEFAULT_COLLATION_OID and instead proceeded with
a pg_locale_t of 0. Instead, now we call it anyway and have it return
0 if the default collation was passed. It already did this, so we
just have to adjust the callers. This simplifies all the call sites
and also makes future enhancements easier.
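A sketch of the caller-side simplification (collid is assumed to be the
caller's collation OID; illustrative only):

    pg_locale_t mylocale;

    /* Before: each caller special-cased the default collation itself. */
    if (collid != DEFAULT_COLLATION_OID)
        mylocale = pg_newlocale_from_collation(collid);
    else
        mylocale = 0;

    /* After: call it unconditionally; it returns 0 for the default. */
    mylocale = pg_newlocale_from_collation(collid);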
After discussion and testing, the previous comment in pg_locale.c
about avoiding this for performance reasons may have been mistaken
since it was testing a very different patch version way back when.
Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
Discussion: https://www.postgresql.org/message-id/ed3baa81-7fac-7788-cc12-41e3f7917e34@enterprisedb.com
interval_out()'s results vary depending on the IntervalStyle GUC, so it
cannot be considered immutable.
This is an extremely ancient bug. AFAICT it was a sloppy mistake
in 6f58115dd, which marked it "cacheable" alongside marking several
other interval functions that way. At the time, interval_out()
depended on DateStyle not IntervalStyle, but it was still wrong.
Back-patching this change doesn't look very practical, so I won't.
Aside from the usual difficulties of getting catalog changes
applied to existing databases, people might have indexes,
generated columns, etc that depend on interval-to-text casts
being considered immutable. (This'd not really give them any
problem as long as they never change IntervalStyle.) They
wouldn't appreciate us breaking such usage in minor releases.
Per bug #17371 from Marcus Gartner.
Discussion: https://postgr.es/m/17371-8f57e6e9ca5e35bf@postgresql.org
"pg_ctl start" might start a new postmaster and then return failure
anyway, for example if PGCTLTIMEOUT is exceeded. If there is a
postmaster there, it's still incumbent on us to shut it down at
script end, so check for the PID file even though we are about
to fail.
This has been broken all along, so back-patch to all supported branches.
Discussion: https://postgr.es/m/647439.1642622744@sss.pgh.pa.us
This avoids possibly selecting Python 2.x on systems that have
both Python 2 and Python 3. We used to feel that what "python"
links to is a user choice that we should honor. However, we're
about to cease support for Python 2, so users will no longer have
any choice of that sort. This small change is being made ahead
of the big Python-2-ectomy so that we can see how much of the
buildfarm is not yet prepared for that. Systems with only
Python 2 will continue to build that way, for now.
Discussion: https://postgr.es/m/2872c9a0-4b0a-1354-d5f6-94d6f85ba354@enterprisedb.com
In adverse circumstances, the fsync calls cause this test to run for
quite a long time (multiple minutes) and even suffer timeout failures.
This seems to date from before we made an effort to disable fsync in
all our test cases; there's not a lot of point in using it if there's
not a plan to force an O/S crash during the test.
Discussion: https://postgr.es/m/440239.1642560607@sss.pgh.pa.us
The base Makefile outputs help to the user when make is invoked in an
unconfigured tree; however, that help always referred to a file which may
not be present, as it is only included in tarballs. Dynamically check
for the presence of the INSTALL file and fall back to README.git when
it's not available (which is the case for Git checkouts).
Reported-by: Tim McNamara <tim@mcnamara.nz>
Reviewed-by: Magnus Hagander <magnus@hagander.net>
Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/730dae39-abaa-4140-893b-95d732fed003@www.fastmail.com
bbsink_copystream wants to store a type byte just before the buffer,
but basebackup.c wants the buffer to be aligned so that it can call
PageIsNew() and PageGetLSN() on it. Therefore, instead of inserting
1 extra byte before the buffer, insert MAXIMUM_ALIGNOF extra bytes
and only use the last one.
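A minimal sketch of the trick (names and sizes are illustrative, not the
actual bbsink_copystream fields):

    /*
     * Allocate MAXIMUM_ALIGNOF padding bytes ahead of the data buffer
     * instead of a single byte, so the buffer itself stays maxaligned.
     */
    char   *raw = palloc(MAXIMUM_ALIGNOF + BLCKSZ);
    char   *databuf = raw + MAXIMUM_ALIGNOF;  /* safe for PageIsNew() etc. */
    char   *typebyte = databuf - 1;           /* last padding byte holds the type */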
On most machines this doesn't cause any problem (except perhaps for
performance) but some buildfarm machines with -fsanitize=alignment
dump core.
Discussion: http://postgr.es/m/CA+TgmoYx5=1A2K9JYV-9zdhyokU4KKTyNQ9q7CiXrX=YBBMWVw@mail.gmail.com
If connectivity to the server has been lost or become flaky, the
user might well try to send a query cancel. It's highly annoying
if PQcancel hangs up in such a case, but that's exactly what's likely
to happen. To ameliorate this problem, apply the PGconn's
tcp_user_timeout and keepalives settings to the TCP connection used
to send the cancel. This should be safe on Unix machines, since POSIX
specifies that setsockopt() is async-signal-safe. We are guessing
that WSAIoctl(SIO_KEEPALIVE_VALS) is similarly safe on Windows.
(Note that at least in psql and our other frontend programs, there's
no safety issue involved anyway, since we run PQcancel in its own
thread rather than in a signal handler.)
Most of the value here comes from the expectation that tcp_user_timeout
will be applied as a connection timeout. That appears to happen on
Linux, even though its tcp(7) man page claims differently. The
keepalive options probably won't help much, but as long as we can
apply them for not much code, we might as well.
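For reference, a sketch of applying such settings to the cancel socket
(values and error handling are illustrative; TCP_USER_TIMEOUT is a
Linux-specific option):

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>

    static void
    apply_cancel_socket_options(int sock, int user_timeout_ms)
    {
        int     on = 1;

        /* both setsockopt() calls are async-signal-safe per POSIX */
        (void) setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE,
                          (char *) &on, sizeof(on));
    #ifdef TCP_USER_TIMEOUT
        (void) setsockopt(sock, IPPROTO_TCP, TCP_USER_TIMEOUT,
                          (char *) &user_timeout_ms, sizeof(user_timeout_ms));
    #endif
    }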
Jelte Fennema, reviewed by Fujii Masao and myself
Discussion: https://postgr.es/m/AM5PR83MB017870DE81FC84D5E21E9D1EF7AA9@AM5PR83MB0178.EURPRD83.prod.outlook.com
In the new approach, all files across all tablespaces are sent in a
single COPY OUT operation. The CopyData messages are no longer raw
archive content; rather, each message is prefixed with a type byte
that describes its purpose, e.g. 'n' signifies the start of a new
archive and 'd' signifies archive or manifest data. This protocol
is significantly more extensible than the old approach, since we can
later create more message types, though not without concern for
backward compatibility.
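To illustrate the shape of the stream, a hypothetical client-side dispatch
on the type byte might look like this (only 'n' and 'd' are shown; the
handler bodies are left as comments):

    static void
    process_copydata_message(const char *msg, int len)
    {
        (void) len;
        switch (msg[0])
        {
            case 'n':
                /* start of a new archive; its name follows the type byte */
                break;
            case 'd':
                /* archive or manifest data follows the type byte */
                break;
            default:
                /* further message types may be added in the future */
                break;
        }
    }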
The new protocol sends a few things to the client that the old one
did not. First, it sends the name of each archive explicitly, instead
of letting the client compute it. This is intended to make it easier
to write future patches that might send archives in a format other
than tar (e.g. cpio, pax, tar.gz). Second, it sends explicit progress
messages rather than allowing the client to assume that progress is
defined by the number of bytes received. This will help with future
features where the server compresses the data, or sends it someplace
directly rather than transmitting it to the client.
The old protocol is still supported for compatibility with previous
releases. The new protocol is selected by means of a new
TARGET option to the BASE_BACKUP command. Currently, the
only supported target is 'client'. Support for additional
targets will be added in a later commit.
Patch by me. The patch set of which this is a part has had review
and/or testing from Jeevan Ladhe, Tushar Ahuja, Suraj Kharage,
Dipesh Pandit, and Mark Dilger.
Discussion: http://postgr.es/m/CA+TgmoaYZbz0=Yk797aOJwkGJC-LK3iXn+wzzMx7KdwNpZhS5g@mail.gmail.com
Where we test vacuum_truncate's effects, sometimes this is failing to
truncate as expected on the build farm. That could be explained by page
skipping, so disable it explicitly, with the theory that commit fe246d1c
didn't go far enough.
Back-patch to 12, where the vacuum_truncate tests were added.
Discussion: https://postgr.es/m/CA%2BhUKGLT2UL5_JhmBzUgkdyKfc%3D5J-gJSQJLysMs4rqLUKLAzw%40mail.gmail.com
This reverts commit e0e567a106.
On various platforms, the new approach using the sysconfig module
reported incorrect values for the include directory, and so any
Python-related compilations failed. Revert for now and revisit later.
Commit 14d755b000 added a
test case for psql's \set ECHO errors. After the test, it then reset
this to \set ECHO none, which is the default. But the regression
tests are actually run under \set ECHO all (psql -a), so that would
have been the correct way to restore the previous state. Otherwise,
test cases added after that point would not have their input lines
displayed. This was never the intention, so fix this now.
With Python 3.10, configure spits out warnings about the module
distutils.sysconfig being deprecated and scheduled for removal in
Python 3.12. Change the uses in configure to use the module sysconfig
instead. The logic stays the same.
Note that sysconfig exists since Python 2.7, so this moves the minimum
required version up from Python 2.6.
Discussion: https://www.postgresql.org/message-id/flat/c74add3c-09c4-a9dd-1a03-a846e5b2fc52%40enterprisedb.com
The previous coding was correct, but the style and commentary were a bit
vague about which operations had to happen, in what circumstances, and
in what order. Rearrange so that the epilogue does nothing in the DFA END
state. That allows turning some conditional statements in the backtracking
logic into asserts. With that, we can be more explicit about needing
to backtrack at least one byte in non-END states to ensure checking the
current byte sequence in the slow path. No change to the regression tests,
since they should already be able to catch deficiencies here.
In passing, improve the comments around DFA states where the first
continuation byte has a restricted range.
The original coding (from c33869cc3) failed with "more than one row
returned by a subquery used as an expression" if there were unrelated
triggers of the same tgname on parent partitioned tables. (That's
possible because statement-level triggers don't get inherited.) Fix
by applying LIMIT 1 after sorting the candidates by inheritance level.
Also, wrap the subquery in a CASE so that we don't have to execute it at
all when the trigger is visibly non-inherited. Aside from saving some
cycles, this avoids the need for a confusing and undocumented NULLIF().
While here, tweak the format of the emitted query to look a bit
nicer for "psql -E", and add some explanation of this subquery,
because it badly needs it.
Report and patch by Justin Pryzby (with some editing by me).
Back-patch to v13 where the faulty code came in.
Discussion: https://postgr.es/m/20211217154356.GJ17618@telsasoft.com
Most tests invoking pg_basebackup themselves did not yet use -cfast;
without it, pg_basebackup takes considerably longer. The only reason this didn't
cause the tests to take many minutes is that spread checkpoints only throttle
when writing out a buffer and there aren't that many dirty buffers in the
tests...
Discussion: https://postgr.es/m/20220117195711.xx4qbxutrrlmo2dg@alap3.anarazel.de
BufferGetBlockNumber() is not that cheap and obviously cannot change during
one heap_page_prune(), so only call it once. We might be able to do better and
pass the block number from the caller, but that'd be a larger change...
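A sketch of the shape of the change (simplified, assuming the usual
PostgreSQL headers; not the actual pruning code):

    static void
    prune_page_sketch(Buffer buffer)
    {
        BlockNumber blockno = BufferGetBlockNumber(buffer); /* call it once */
        Page        page = BufferGetPage(buffer);
        OffsetNumber offnum;
        OffsetNumber maxoff = PageGetMaxOffsetNumber(page);

        for (offnum = FirstOffsetNumber; offnum <= maxoff; offnum++)
        {
            ItemPointerData tid;

            /* reuse the cached block number rather than re-calling */
            ItemPointerSet(&tid, blockno, offnum);
            /* ... per-tuple pruning work would go here ... */
            (void) tid;
        }
    }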
Discussion: https://postgr.es/m/20211211045710.ljtuu4gfloh754rs@alap3.anarazel.de
Currently, database OIDs, relfilenodes, and tablespace OIDs can all
change when a cluster is upgraded using pg_upgrade. It seems better
to preserve them, because (1) it makes troubleshooting pg_upgrade
easier, since you don't have to do a lot of work to match up files
in the old and new clusters, (2) it allows 'rsync' to save bandwidth
when used to re-sync a cluster after an upgrade, and (3) if we ever
encrypt or sign blocks, we would likely want to use a nonce that
depends on these values.
This patch only arranges to preserve relfilenodes and tablespace
OIDs. Database OIDs have a similar issue, but there are some tricky
points in that case that do not apply here, so preserving them is left
for another patch.
Shruthi KC, based on an earlier patch from Antonin Houska, reviewed
and with some adjustments by me.
Discussion: http://postgr.es/m/CA+TgmoYgTwYcUmB=e8+hRHOFA0kkS6Kde85+UNdon6q7bt1niQ@mail.gmail.com
It seems highly unlikely that gettext() can be relied on to be
async-signal-safe. psql used to understand that, but someone got
it wrong long ago in the src/bin/scripts/ version of handle_sigint,
and then the bad idea was perpetuated when those two versions were
unified into src/fe_utils/cancel.c.
I'm unsure why there have not been field complaints about this
... maybe gettext() is signal-safe once it's translated at least
one message? But we have no business assuming any such thing.
In cancel.c (v13 and up), I preserved our ability to localize
"Cancel request sent" messages by invoking gettext() before
the signal handler is set up. In earlier branches I just made
src/bin/scripts/ not localize those messages, as psql did then.
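A sketch of the idea (simplified; plain signal()/write() are used here
for brevity, and the function names are illustrative):

    #include <libintl.h>
    #include <signal.h>
    #include <string.h>
    #include <unistd.h>

    static const char *cancel_sent_msg = "Cancel request sent\n";

    static void
    handle_sigint(int signum)
    {
        (void) signum;
        /* ... send the cancel request ..., then report it using only
         * async-signal-safe calls; the string was translated earlier */
        (void) write(STDERR_FILENO, cancel_sent_msg, strlen(cancel_sent_msg));
    }

    static void
    setup_cancel_handler_sketch(void)
    {
        /* call gettext() now, before the handler can ever run */
        cancel_sent_msg = gettext("Cancel request sent\n");
        (void) signal(SIGINT, handle_sigint);
    }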
(Just for extra unsafety, the src/bin/scripts/ version was
invoking fprintf() from a signal handler. Sigh.)
Noted while fixing signal-safety issues in PQcancel() itself.
Back-patch to all supported branches.
Discussion: https://postgr.es/m/2937814.1641960929@sss.pgh.pa.us
PQcancel() is supposed to be safe to call from a signal handler,
and indeed psql uses it that way. All of the library functions
it uses are specified to be async-signal-safe by POSIX ...
except for strerror. Neither plain strerror nor strerror_r
are considered safe. When this code was written, back in the
dark ages, we probably figured "oh, strerror will just index
into a constant array of strings" ... but in any locale except C,
that's unlikely to be true. Probably the reason we've not heard
complaints is that (a) this error-handling code is unlikely to be
reached in normal use, and (b) in many scenarios, localized error
strings would already have been loaded, after which maybe it's
safe to call strerror here. Still, this is clearly unacceptable.
The best we can do without relying on strerror is to print the
decimal value of errno, so make it do that instead. (This is
probably not much loss of user-friendliness, given that it is
hard to get a failure here.)
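A sketch of formatting errno without strerror() (illustrative; it avoids
any library call whose async-signal-safety is in doubt):

    #include <string.h>

    /*
     * Append the decimal value of "errnum" to the NUL-terminated string
     * in "buf" (of total size "bufsize"), using only arithmetic and stores.
     */
    static void
    append_errno_decimal(char *buf, size_t bufsize, int errnum)
    {
        char    digits[16];
        int     ndigits = 0;
        size_t  len = strlen(buf);

        do
        {
            digits[ndigits++] = (char) ('0' + errnum % 10);
            errnum /= 10;
        } while (errnum > 0 && ndigits < (int) sizeof(digits));

        while (ndigits > 0 && len + 1 < bufsize)
            buf[len++] = digits[--ndigits];
        buf[len] = '\0';
    }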
Back-patch to all supported branches.
Discussion: https://postgr.es/m/2937814.1641960929@sss.pgh.pa.us
The token in nodeTokenType() is actually the whole rest of the string,
so we need to take into account the length to do the correct
comparison.
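A sketch of a length-aware comparison (simplified):

    #include <stdbool.h>
    #include <string.h>

    /*
     * "token" points at the rest of the input string, so a plain strcmp()
     * would look past the token; limit the comparison to "length" and
     * require an exact length match.
     */
    static bool
    token_is_boolean(const char *token, int length)
    {
        return (length == 4 && strncmp(token, "true", 4) == 0) ||
               (length == 5 && strncmp(token, "false", 5) == 0);
    }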
Without this, postgres_fdw tests fail under
-DWRITE_READ_PARSE_PLAN_TREES.
Before, SQL-level boolean constants were represented by a string with
a cast, and internal Boolean values in DDL commands were usually
represented by Integer nodes. The new Boolean node type takes the place
of both of these uses, making the intent clearer and adding some amount
of type safety.
Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/8c1a2e37-c68d-703c-5a83-7a6077f4f997@enterprisedb.com
The logic is similar to that for default_tablespace in some ways: when
this new option is specified, no SET queries on
default_table_access_method are generated before dumping or restoring an
object (tables and materialized views support table AMs).
This option is useful to enforce the use of a default access method even
if some tables included in a dump use an AM different than the system's
default.
There are already two cases in the TAP tests of pg_dump with a table and
a materialized view that use a non-default table AM, and these are
extended to check that the new option does not generate SET clauses on
default_table_access_method.
Author: Justin Pryzby
Discussion: https://postgr.es/m/20211207153930.GR17618@telsasoft.com