IF NOT EXISTS was ignored when specified in an EXPLAIN query for CREATE
MATERIALIZED VIEW or CREATE TABLE AS. Hence, if this clause was
specified, the caller would get a failure if the relation already
existed, instead of a success with a NOTICE message.
This commit makes the behavior of IF NOT EXISTS in EXPLAIN consistent
with the non-EXPLAIN'd DDL queries, preventing a failure with IF NOT
EXISTS if the to-be-created relation already exists. The skip is done
before the SELECT query used for the relation is planned or executed,
and a "dummy" plan is generated instead, depending on the format used by
EXPLAIN.
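A minimal sketch of the shape of this change, assuming the check sits in
ExplainOneUtility(); variable names here are illustrative, not quotes of
the committed code:

    /*
     * Sketch only: bail out before the SELECT is planned or executed
     * when IF NOT EXISTS is given and the relation already exists,
     * emitting a NOTICE plus a dummy plan.
     */
    if (ctas->if_not_exists &&
        OidIsValid(get_relname_relid(relname, namespaceId)))
    {
        ereport(NOTICE,
                (errcode(ERRCODE_DUPLICATE_TABLE),
                 errmsg("relation \"%s\" already exists, skipping",
                        relname)));
        /* produce a dummy plan matching the requested EXPLAIN format */
        ExplainDummyGroup("CREATE TABLE AS", NULL, es);
        return;
    }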
Author: Bharath Rupireddy
Reviewed-by: Zhijie Hou, Michael Paquier
Discussion: https://postgr.es/m/CALj2ACVa3oJ9O_wcGd+FtHWZds04dEKcakxphGz5POVgD4wC7Q@mail.gmail.com
This adds six methods to the output plugin API, adding support for
streaming changes of two-phase transactions at prepare time.
* begin_prepare
* filter_prepare
* prepare
* commit_prepared
* rollback_prepared
* stream_prepare
Most of this is a simple extension of the existing methods, with the
semantic difference that the transaction is not yet committed and may be
aborted later.
Until now two-phase transactions were translated into regular transactions
on the subscriber, and the GID was not forwarded to it. None of the
two-phase commands were communicated to the subscriber.
This patch provides the infrastructure for logical decoding plugins to be
informed of two-phase commands like PREPARE TRANSACTION, COMMIT PREPARED
and ROLLBACK PREPARED, along with the corresponding GID.
This also extends the 'test_decoding' plugin, implementing these new
methods.
This commit simply adds these new APIs; the upcoming patch to "allow
the decoding at prepare time in ReorderBuffer" will use them.
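As a hedged sketch, a plugin would wire up the new callbacks roughly as
below; the callback signatures are abridged from memory of
output_plugin.h and should be checked against the header:

    #include "postgres.h"
    #include "replication/output_plugin.h"

    /* trivial handler bodies; a real plugin would emit output here */
    static void
    my_begin_prepare(LogicalDecodingContext *ctx, ReorderBufferTXN *txn) {}
    static bool
    my_filter_prepare(LogicalDecodingContext *ctx, TransactionId xid,
                      const char *gid)
    {
        return false;           /* false = do decode this prepared xact */
    }
    static void
    my_prepare(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
               XLogRecPtr prepare_lsn) {}
    static void
    my_commit_prepared(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
                       XLogRecPtr commit_lsn) {}
    static void
    my_rollback_prepared(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
                         XLogRecPtr prepare_end_lsn,
                         TimestampTz prepare_time) {}
    static void
    my_stream_prepare(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
                      XLogRecPtr prepare_lsn) {}

    void
    _PG_output_plugin_init(OutputPluginCallbacks *cb)
    {
        /* begin_cb, change_cb, commit_cb, etc. omitted for brevity */
        cb->begin_prepare_cb = my_begin_prepare;
        cb->filter_prepare_cb = my_filter_prepare;
        cb->prepare_cb = my_prepare;
        cb->commit_prepared_cb = my_commit_prepared;
        cb->rollback_prepared_cb = my_rollback_prepared;
        cb->stream_prepare_cb = my_stream_prepare;
    }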
Author: Ajin Cherian and Amit Kapila based on previous work by Nikhil Sontakke and Stas Kelvich
Reviewed-by: Amit Kapila, Peter Smith, Sawada Masahiko, and Dilip Kumar
Discussion:
https://postgr.es/m/02DA5F5E-CECE-4D9C-8B4B-418077E2C010@postgrespro.ru
https://postgr.es/m/CAMGcDxeqEpWj3fTXwqhSwBdXd2RS9jzwWscO-XbeCfso6ts3+Q@mail.gmail.com
Those two modules, old_snapshot and adminpack, included references to
libpq's source path without using anything from libpq. Some copy-pastos
made when each module was created are likely at the origin of those
useless references (aecf5ee for old_snapshot, fe59e56 for adminpack).
Reviewed-by: Tom Lane, David Rowley
Discussion: https://postgr.es/m/X+LQpfLyk7jgzUki@paquier.xyz
Addition of multirange info to tables 8.27 and 65.1 made them start
throwing "exceed the available area" warnings in PDF docs builds.
For 8.27, twiddling the existing column width hints was enough to
fix this. For 65.1, I twiddled the widths a little, but to really
fix it I had to insert a space after each comma in the table, to
allow a line break to occur there. (This seemed easier to read
and maintain than the alternative of inserting &zwsp; entities.)
Per buildfarm.
When the postmaster sends SIGQUIT to its children, there's no real
need for all the children to log that fact; the postmaster already
made a log entry about it, so adding perhaps dozens or hundreds of
child-process log entries adds nothing of value. So, let's introduce
a new ereport level to specify "WARNING, but never send to log" and
use that for these messages.
Such a change wouldn't have been desirable before commit 7e784d1dc,
because if someone manually SIGQUIT's a backend, we *do* want to log
that. But now we can tell, with reasonable certainty, the difference
between a signal that was issued by the postmaster and one that was
not.
While we're here, also clear error_context_stack before ereport'ing,
to prevent error callbacks from being invoked in the signal-handler
context. This should reduce the odds of getting hung up while trying
to notify the client.
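A minimal sketch of the resulting handler pattern, assuming the new
level is spelled WARNING_CLIENT_ONLY; details of quickdie() are
abridged:

    /*
     * Sketch of the reporting done from the SIGQUIT handler.  The new
     * elevel means "WARNING, but never send to the server log".
     */
    static void
    report_postmaster_sigquit(void)
    {
        /*
         * Clear error_context_stack so no error callbacks run in
         * signal-handler context while notifying the client.
         */
        error_context_stack = NULL;

        ereport(WARNING_CLIENT_ONLY,
                (errcode(ERRCODE_CRASH_SHUTDOWN),
                 errmsg("terminating connection because of crash of another server process")));
    }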
Per a suggestion from Andres Freund.
Discussion: https://postgr.es/m/20201225230331.hru3u6obyy6j53tk@alap3.anarazel.de
6df7a9698b introduced a set of operators between ranges and multiranges.
Existing GiST indexes for ranges can easily support the majority of them.
This commit adds support for the new operators to the existing range GiST
indexes. The new operators reside at the same strategy numbers as the
existing ones. The appropriate check function is determined using the
subtype.
Catversion is bumped.
There is a set of *_internal() functions exposed in
include/utils/multirangetypes.h. This commit improves the signatures of these
functions in two ways.
* Add const qualifiers where applicable.
* Replace the multirange typecache argument with a range typecache
argument. The multirange typecache was used solely to find the range
typecache, and the range typecache is easier for the caller to find in
the first place.
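An illustrative before/after for one such function; the exact name and
parameter list are an assumption, not a quote of the header:

    /* before: multirange typecache, non-const multirange */
    bool multirange_contains_elem_internal(TypeCacheEntry *mltrngtypcache,
                                           MultirangeType *mr, Datum val);

    /* after: range typecache passed directly, const-qualified input */
    bool multirange_contains_elem_internal(TypeCacheEntry *rangetyp,
                                           const MultirangeType *mr,
                                           Datum val);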
We have operators for checking if a multirange contains a range, but not
the opposite. This commit improves the completeness of the operator set
by adding two new operators: @> (anyrange,anymultirange) and
<@ (anymultirange,anyrange).
Catversion is bumped.
The functions multirange_range_overlaps_bsearch_comparison() and
multirange_range_contains_bsearch_comparison() contained bugs, returning
-1 where they should return 1. This commit fixes these bugs and adds
corresponding regression tests.
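For context, a minimal sketch of the sign convention a bsearch-style
comparison callback must follow; the names here are illustrative, not
the actual multirange code:

    /*
     * -1 steers the binary search left, 1 steers it right, 0 reports a
     * match.  Returning -1 where 1 was meant (the bug fixed here) sends
     * the search the wrong way.
     */
    static int
    key_vs_range_comparison(bool key_before_range, bool key_after_range)
    {
        if (key_before_range)
            return -1;          /* continue in the lower half */
        if (key_after_range)
            return 1;           /* continue in the upper half */
        return 0;               /* overlap/containment found */
    }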
Include details on whether GSS encryption has been activated;
since we added "hostgssenc" type HBA entries, that's relevant info.
Kyotaro Horiguchi and Tom Lane. Back-patch to v12 where
GSS encryption was introduced.
Discussion: https://postgr.es/m/e5b0b6ed05764324a2f3fe7acfc766d5@smhi.se
Unrecoverable errors detected by GSSAPI encryption can't just be
reported with elog(ERROR) or elog(FATAL), because attempting to
send the error report to the client is likely to lead to infinite
recursion or loss of protocol sync. Instead make this code do what
the SSL encryption code has long done, which is to just report any
such failure to the server log (with elevel COMMERROR), then pretend
we've lost the connection by returning errno = ECONNRESET.
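A minimal sketch of that pattern; the function and variable names are
illustrative, not the exact backend code:

    /*
     * Report an unrecoverable GSSAPI encryption failure the way the SSL
     * code does: log server-side only, then fake a connection reset.
     */
    static ssize_t
    report_gss_comm_failure(const char *msg)
    {
        ereport(COMMERROR,          /* server log only; nothing to client */
                (errcode(ERRCODE_PROTOCOL_VIOLATION),
                 errmsg("%s", msg)));
        errno = ECONNRESET;         /* caller sees an apparent disconnect */
        return -1;
    }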
Along the way, fix confusion about whether message translation is done
by pg_GSS_error() or its callers (the latter should do it), and make
the backend version of that function work more like the frontend
version.
Avoid allocating the port->gss struct until it's needed; we surely
don't need to allocate it in the postmaster.
Improve logging of "connection authorized" messages with GSS enabled.
(As part of this, I back-patched the code changes from dc11f31a1.)
Make BackendStatusShmemSize() account for the GSS-related space that
will be allocated by CreateSharedBackendStatus(). This omission
could possibly cause out-of-shared-memory problems with very high
max_connections settings.
Remove arbitrary, pointless restriction that only GSS authentication
can be used on a GSS-encrypted connection.
Improve documentation; notably, document the fact that libpq now
prefers GSS encryption over SSL encryption if both are possible.
Per report from Mikael Gustavsson. Back-patch to v12 where
this code was introduced.
Discussion: https://postgr.es/m/e5b0b6ed05764324a2f3fe7acfc766d5@smhi.se
The critical issue fixed here is that if a GSSAPI-encrypted connection
is successfully made, pqsecure_open_gss() cleared conn->allow_ssl_try,
as an admittedly-hacky way of preventing us from then trying to tunnel
SSL encryption over the already-encrypted connection. The problem
with that is that if we abandon the GSSAPI connection because of a
failure during authentication, we would not attempt SSL encryption
in the next try with the same server. This can lead to unexpected
connection failure, or silently getting a non-encrypted connection
where an encrypted one is expected.
Fortunately, we'd only manage to make a GSSAPI-encrypted connection
if both client and server hold valid tickets in the same Kerberos
infrastructure, which is a relatively uncommon environment.
Nonetheless this is a very nasty bug with potential security
consequences. To fix, don't reset the flag, instead adding a
check for conn->gssenc being already true when deciding whether
to try to initiate SSL.
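A sketch of the corrected decision, as a hypothetical helper over
libpq's PGconn fields (the real logic lives inline in fe-connect.c's
state machine):

    static bool
    should_try_ssl(PGconn *conn)
    {
        /* the fix adds the gssenc test instead of clearing allow_ssl_try */
        return conn->allow_ssl_try &&   /* SSL permitted for this attempt */
               !conn->ssl_in_use &&     /* not already negotiated */
               !conn->gssenc;           /* don't tunnel SSL over GSSAPI */
    }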
While here, fix some lesser issues in libpq's GSSAPI code:
* Use the need_new_connection stanza when dropping an attempted
GSSAPI connection, instead of partially duplicating that code.
The consequences of this are pretty minor: AFAICS it could only
lead to auth_req_received or password_needed remaining set when
they shouldn't, which is not too harmful.
* Fix pg_GSS_error() to not repeat the "mprefix" it's given multiple
times, and to notice any failure return from gss_display_status().
* Avoid gratuitous dependency on NI_MAXHOST in
pg_GSS_load_servicename().
Per report from Mikael Gustavsson. Back-patch to v12 where
this code was introduced.
Discussion: https://postgr.es/m/e5b0b6ed05764324a2f3fe7acfc766d5@smhi.se
There's a second call of get_eval_mcontext() that should also be
get_stmt_mcontext(). This is actually dead code, since no
interesting allocations happen before switching back to the
original context, but we should keep it in sync with the other
call to forestall possible future bugs.
Discussion: https://postgr.es/m/f075f7be-c654-9aa8-3ffc-e9214622f02a@enterprisedb.com
Commit a6b1f5365 intended to place the transient "target" list of
a CALL statement in the function's statement-lifespan context,
but I fat-fingered that and used get_eval_mcontext() instead of
get_stmt_mcontext(). The eval_mcontext belongs to the "simple
expression" infrastructure, which is destroyed at transaction end.
The net effect is that a CALL in a procedure to another procedure
that has OUT or INOUT parameters would fail if the called procedure
did a COMMIT.
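A sketch of the corrected context usage; the surrounding
exec_stmt_call() logic is elided and the helper name is illustrative:

    static void
    build_call_target(PLpgSQL_execstate *estate)
    {
        /*
         * The target list must live in the statement-lifespan context,
         * which survives intra-procedure COMMITs; the eval_mcontext is
         * destroyed at transaction end.
         */
        MemoryContext oldcontext =
            MemoryContextSwitchTo(get_stmt_mcontext(estate));

        /* ... build the row holding the OUT/INOUT parameter targets ... */

        MemoryContextSwitchTo(oldcontext);
    }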
Per report from Peter Eisentraut. Back-patch to v11, like the
prior patch.
Discussion: https://postgr.es/m/f075f7be-c654-9aa8-3ffc-e9214622f02a@enterprisedb.com
The code in charge of processing a single invalidation message has, since
568d413, been using the structure for relation mapping messages. This
fortunately had no consequence, as both structures locate the database ID
at the same position, but it could become a problem in the future if this
area of the code changes.
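The two structures in question, abridged from memory of
src/include/storage/sinval.h (comments approximate), show why the mixup
was harmless so far:

    typedef struct
    {
        int8        id;         /* type field --- must be first */
        Oid         dbId;       /* database ID, or 0 if a shared catalog */
        Oid         catId;      /* ID of catalog whose contents are invalid */
    } SharedInvalCatalogMsg;

    typedef struct
    {
        int8        id;         /* type field --- must be first */
        Oid         dbId;       /* database ID, or 0 for shared catalogs */
    } SharedInvalRelmapMsg;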
Author: Konstantin Knizhnik
Discussion: https://postgr.es/m/8044c223-4d3a-2cdb-42bf-29940840ce94@postgrespro.ru
Backpatch-through: 9.5
In postgres_fdw, if the user mappings or foreign servers that cached
connections depend on are dropped, those connections are not closed until
the local session exits; they are effectively leaked.
To fix that connection leak issue, after a change to a pg_foreign_server
or pg_user_mapping catalog entry, this commit makes postgres_fdw close
the connections depending on that entry immediately if the current
transaction has not used those connections yet. Otherwise, those
connections are marked as invalid and closed at the end of the current
transaction, since they cannot be closed in the midst of the transaction
that is using them.
Closed connections will be remade at the next opportunity if necessary.
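A sketch close to the committed mechanism, using plain syscache
invalidation callbacks; the field names on ConnCacheEntry are
assumptions:

    static void
    pgfdw_inval_callback(Datum arg, int cacheid, uint32 hashvalue)
    {
        HASH_SEQ_STATUS scan;
        ConnCacheEntry *entry;

        hash_seq_init(&scan, ConnectionHash);
        while ((entry = (ConnCacheEntry *) hash_seq_search(&scan)) != NULL)
        {
            if (entry->conn == NULL)
                continue;

            /* hashvalue == 0 means the whole catalog cache was flushed */
            if (hashvalue == 0 ||
                (cacheid == FOREIGNSERVEROID &&
                 entry->server_hashvalue == hashvalue) ||
                (cacheid == USERMAPPINGOID &&
                 entry->mapping_hashvalue == hashvalue))
                entry->invalidated = true; /* close at transaction end */
        }
    }

    static void
    pgfdw_register_inval_callbacks(void)
    {
        /* run once, when the connection cache is first created */
        CacheRegisterSyscacheCallback(FOREIGNSERVEROID,
                                      pgfdw_inval_callback, (Datum) 0);
        CacheRegisterSyscacheCallback(USERMAPPINGOID,
                                      pgfdw_inval_callback, (Datum) 0);
    }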
Back-patch to all supported branches.
Author: Bharath Rupireddy
Reviewed-by: Zhihong Yu, Zhijie Hou, Fujii Masao
Discussion: https://postgr.es/m/CALj2ACVNcGH_6qLY-4_tXz8JLvA+4yeBThRfxMz7Oxbk1aHcpQ@mail.gmail.com
The patch needs test cases, reorganization, and cfbot testing.
Technically reverts commits 5c31afc49d..e35b2bad1a (exclusive/inclusive)
and 08db7c63f3..ccbe34139b.
Reported-by: Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/E1ktAAG-0002V2-VB@gemulon.postgresql.org
Removing the EXPLAIN test to stabilize the buildfarm. The execution
test should still be effective at catching the bug even if the plan is
slightly different on different platforms.
Before processing tuples, agg_refill_hash_table() was setting all
pergroup pointers to NULL to signal to advance_aggregates() that it
should not attempt to advance groups that had spilled.
The problem was that it also set the pergroups for sorted grouping
sets to NULL, which caused rescanning to fail.
Instead, change agg_refill_hash_table() to only set the pergroups for
hashed grouping sets to NULL; and when compiling the expression, pass
doSort=false.
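A sketch of the corrected reset; member names follow nodeAgg.c, but this
is abridged:

    static void
    reset_hashed_pergroups(AggState *aggstate)
    {
        /*
         * Null out only the hashed grouping sets' pergroup pointers;
         * the sorted sets' pergroups must survive so rescans work.
         */
        memset(aggstate->hash_pergroup, 0,
               sizeof(AggStatePerGroup) * aggstate->num_hashes);
    }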
Reported-by: Alexander Lakhin
Discussion: https://postgr.es/m/16784-7ff169bf2c3d1588%40postgresql.org
Backpatch-through: 13
This makes existing sessions reflect "ALTER ROLE ... [NO]INHERIT" as
quickly as they have been reflecting "GRANT role_name". Back-patch to
9.5 (all supported versions).
Reviewed by Nathan Bossart.
Discussion: https://postgr.es/m/20201221095028.GB3777719@rfd.leadboat.com
This adds a key management system that stores (currently) two data
encryption keys of length 128, 192, or 256 bits. The data keys are
AES256 encrypted using a key encryption key, and validated via GCM
cipher mode. A command to obtain the key encryption key must be
specified at initdb time, and will be run at every database server
start. New parameters allow a file descriptor open to the terminal to
be passed. pg_upgrade support has also been added.
Discussion: https://postgr.es/m/CA+fd4k7q5o6Nc_AaX6BcYM9yqTbC6_pnH-6nSD=54Zp6NBQTCQ@mail.gmail.com
Discussion: https://postgr.es/m/20201202213814.GG20285@momjian.us
Author: Masahiko Sawada, me, Stephen Frost
Buildfarm members pogona and petalura have shown a failure when
pg_ctl/t/004_logrotate.pl starts just before local midnight.
The default rotate-at-midnight behavior occurs just before the
Perl script examines current_logfiles, so it figures that the
rotation it's already requested has occurred ... but in reality,
that rotation happens just after it looks, so the expected new
log data goes into a different file than the one it's examining.
In HEAD, src/test/kerberos/t/001_auth.pl has acquired similar code
that evidently has a related failure mode. Besides being quite new,
few buildfarm critters run that test, so it's unsurprising that
we've not yet seen a failure there.
Fix both cases by setting log_rotation_age = 0 so that no time-based
rotation can occur. Also absorb 004_logrotate.pl's decision to
set lc_messages = 'C' into the kerberos test, in hopes that it will
work in non-English prevailing locales.
Report: https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=pogona&dt=2020-12-24%2022%3A10%3A04
Report: https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=petalura&dt=2020-02-01%2022%3A20%3A04
ecpglib on certain platforms can't handle the pg_log_fatal calls from
libraries. This was reported by the buildfarm. It needs a refactoring
and return value change if it is later removed.
Backpatch-through: master
If a database shutdown (smart or fast) is commanded between the time
some process decides to request a new background worker and the time
that the postmaster can launch that worker, then nothing happens
because the postmaster won't launch any bgworkers once it's exited
PM_RUN state. This is fine ... unless the requesting process is
waiting for that worker to finish (or even for it to start); in that
case the requestor is stuck, and only manual intervention will get us
to the point of being able to shut down.
To fix, cancel pending requests for workers when the postmaster sends
shutdown (SIGTERM) signals, and similarly cancel any new requests that
arrive after that point. (We can optimize things slightly by only
doing the cancellation for workers that have waiters.) To fit within
the existing bgworker APIs, the "cancel" is made to look like the
worker was started and immediately stopped, causing deregistration of
the bgworker entry. Waiting processes would have to deal with
premature worker exit anyway, so this should introduce no bugs that
weren't there before. We do have a side effect that registration
records for restartable bgworkers might disappear when theoretically
they should have remained in place; but since we're shutting down,
that shouldn't matter.
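A sketch of the cancellation, in terms of postmaster.c's bgworker
bookkeeping; the details here are illustrative rather than exact:

    static void
    forget_unstarted_bgworkers(void)
    {
        slist_mutable_iter iter;

        slist_foreach_modify(iter, &BackgroundWorkerList)
        {
            RegisteredBgWorker *rw =
                slist_container(RegisteredBgWorker, rw_lnode, iter.cur);

            /* skip workers that already have a child process */
            if (rw->rw_pid != 0)
                continue;

            /* pretend it started and stopped: wake any waiter, drop entry */
            if (rw->rw_worker.bgw_notify_pid != 0)
                kill(rw->rw_worker.bgw_notify_pid, SIGUSR1);
            ForgetBackgroundWorker(&iter);
        }
    }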
Back-patch to v10. There might be value in putting this into 9.6
as well, but the management of bgworkers is a bit different there
(notably see 8ff518699) and I'm not convinced it's worth the effort
to validate the patch for that branch.
Discussion: https://postgr.es/m/661570.1608673226@sss.pgh.pa.us
Up to now, if the DBA issued "pg_ctl stop -m immediate", the message
sent to clients was the same as for a crash-and-restart situation.
This is confusing, not least because the message claims that the
database will soon be up again, something we have no business
predicting.
Improve things so that we can generate distinct messages for the two
cases (and also recognize an ad-hoc SIGQUIT, should somebody try that).
To do that, add a field to pmsignal.c's shared memory data structure
that the postmaster sets just before broadcasting SIGQUIT to its
children. No interlocking seems to be necessary; the intervening
signal-sending and signal-receipt should sufficiently serialize accesses
to the field. Hence, this isn't any riskier than the existing usages
of pmsignal.c.
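A sketch of the new pmsignal.c interface; the enum and function names
are assumptions about its spelling:

    /* reasons the postmaster may have broadcast SIGQUIT */
    typedef enum QuitSignalReason
    {
        PMQUIT_NOT_SENT = 0,    /* postmaster hasn't sent SIGQUIT */
        PMQUIT_FOR_CRASH,       /* crash-and-restart cycle */
        PMQUIT_FOR_STOP         /* immediate shutdown was commanded */
    } QuitSignalReason;

    /* postmaster: set in shared memory just before broadcasting SIGQUIT */
    extern void SetQuitSignalReason(QuitSignalReason reason);

    /* children: read in the SIGQUIT handler to pick the right message */
    extern QuitSignalReason GetQuitSignalReason(void);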
We might in future extend this idea to improve other
postmaster-to-children signal scenarios, although none of them
currently seem to be as badly overloaded as SIGQUIT.
Discussion: https://postgr.es/m/559291.1608587013@sss.pgh.pa.us
The parsing of this parameter has been using strtoul(), which is not
portable across platforms. On most Unix platforms, unsigned long has a
size of 64 bits, while on Windows it is 32 bits. It is common in
recovery scenarios to rely on the output of txid_current() or even the
newer pg_current_xact_id() to get a transaction ID for setting up
recovery_target_xid. The value returned by those functions includes the
epoch in the computed result, which would cause strtoul() to fail where
unsigned long has a size of 32 bits once the epoch is incremented.
WAL records and 2PC data include only information about 32-bit XIDs and
it is not possible to have XIDs across more than one epoch, so
discarding the high bits from the transaction ID set has no impact on
recovery. On the contrary, the use of strtoul() prevents a consistent
behavior across platforms depending on the size of unsigned long.
This commit changes the parsing of recovery_target_xid to use
pg_strtouint64() instead, which is available down to 9.6. There is one
TAP test stressing recovery with recovery_target_xid, where a tweak based
on pg_reset{xlog,wal} is added to bump the XID epoch so that this change
gets tested, as per an idea from Alexander Lakhin.
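A sketch of the parsing change; error handling is elided and the helper
name is illustrative:

    static TransactionId
    parse_recovery_target_xid(const char *value)
    {
        /* parse the full 64-bit value, so epoch-including XIDs from
         * txid_current()/pg_current_xact_id() work on all platforms */
        uint64      full_xid = pg_strtouint64(value, NULL, 10);

        /* recovery only needs the low 32 bits; discard the epoch */
        return (TransactionId) full_xid;
    }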
Reported-by: Alexander Lakhin
Discussion: https://postgr.es/m/16780-107fd0c0385b1035@postgresql.org
Backpatch-through: 9.6
Bad things happen if the DBA issues "pg_ctl stop -m fast" before
autoprewarm finishes loading its list of blocks to prewarm.
The current worker process successfully terminates early, but
(if this wasn't the last database with blocks to prewarm) the
leader process will just try to launch another worker for the
next database. Since the postmaster is now in PM_WAIT_BACKENDS
state, it ignores the launch request, and the leader just sits
until it's killed manually.
This is mostly the fault of our half-baked design for launching
background workers, but a proper fix for that is likely to be
too invasive to be back-patchable. To ameliorate the situation,
fix apw_load_buffers() to check whether SIGTERM has arrived
just before trying to launch another worker. That leaves us with
only a very narrow window in each worker launch where SIGTERM
could occur between the launch request and successful worker start.
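A sketch of the check added to apw_load_buffers()'s per-database loop;
the flag and helper names below are illustrative:

    static void
    apw_load_buffers_loop_sketch(void)
    {
        while (have_more_databases_to_prewarm())
        {
            /* got_sigterm is assumed set by the module's SIGTERM handler */
            if (got_sigterm)
                break;          /* shutting down: don't request a worker */
            launch_prewarm_worker_for_next_database();
        }
    }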
Another issue is that if the leader process does manage to exit,
it unconditionally rewrites autoprewarm.blocks with only the
blocks currently in shared buffers, thus forgetting any blocks
that we hadn't reached yet while prewarming. This seems quite
unhelpful, since the next database start will then not have the
expected prewarming benefit. Fix it to not modify the file if
we shut down before the initial load attempt is complete.
Per bug #16785 from John Thompson. Back-patch to v11 where
the autoprewarm code was introduced.
Discussion: https://postgr.es/m/16785-c0207d8c67fb5f25@postgresql.org