Expand the checks of toasted attributes to complain if the rawsize is
overlarge. For compressed attributes, also complain if compression
appears to have expanded the attribute or if the compression method is
invalid.
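These checks run as part of amcheck's heap verification. A minimal
invocation sketch; the table name is hypothetical, and the check_toast
argument is as I'd assume for verify_heapam():

    CREATE EXTENSION IF NOT EXISTS amcheck;
    SELECT * FROM verify_heapam('mytable', check_toast => true);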
Mark Dilger, reviewed by Justin Pryzby, Alexander Alekseev, Heikki
Linnakangas, Greg Stark, and me.
Discussion: http://postgr.es/m/8E42250D-586A-4A27-B317-8B062C3816A8@enterprisedb.com
pgcrypto had internal implementations of some encryption algorithms,
as an alternative to calling out to OpenSSL. These were rarely used,
since most production installations are built with OpenSSL. Moreover,
maintaining parallel code paths makes the code more complex and
difficult to maintain.
This patch removes these internal implementations. Now, pgcrypto is
only built if OpenSSL support is configured.
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://www.postgresql.org/message-id/flat/0b42f1df-8cba-6a30-77d7-acc241cc88c1%40enterprisedb.com
In the same spirit as 6301c3ada, fix some more places where we were
using list_delete_first() in a loop and thereby risking O(N^2)
behavior. It's not clear that the lists manipulated in these spots
can get long enough to be really problematic ... but it's not clear
that they can't, either, and the fixes are simple enough.
As before, back-patch to v13.
Discussion: https://postgr.es/m/CD2F0E7F-9822-45EC-A411-AE56F14DEA9F@amazon.com
Windows fails on a request to read() more than INT_MAX bytes,
and perhaps other platforms could have similar issues. Let's
adjust this code to read at most 1GB per call.
(One would not have thought the file could get that big, but now
we have a field report of trouble, so it can. We likely ought to
add some mechanism to limit the size of the query-texts file
separately from the size of the hash table. That is not this
patch, though.)
Per bug #17254 from Yusuke Egashira. It's been like this for
a while, so back-patch to all supported branches.
Discussion: https://postgr.es/m/17254-a926c89dc03375c2@postgresql.org
Commits fdd965d07 and 3cd9c3b92 tested CREATE INDEX CONCURRENTLY by
launching two separate pgbench runs concurrently. This was needed so
that only a single client thread would run CREATE INDEX CONCURRENTLY,
avoiding deadlock between two CICs. However, there's a better way,
which is to use an advisory lock to prevent concurrent CICs. That's
better in part because the test code is shorter and more readable, but
mostly because it automatically scales things to launch an appropriate
number of CICs relative to the number of INSERT transactions.
As committed, typically half to three-quarters of the CIC transactions
were pointless because the INSERT transactions had already stopped.
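A sketch of the pattern as a pgbench script; the advisory lock key and
the table/index names are hypothetical:

    SELECT pg_try_advisory_lock(42)::integer AS gotlock \gset
    \if :gotlock
        DROP INDEX CONCURRENTLY IF EXISTS idx;
        CREATE INDEX CONCURRENTLY idx ON tab (i);
        SELECT pg_advisory_unlock(42);
    \endif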
In passing, remove background_pgbench, which was added to support
these tests and isn't needed anymore. We can always put it back
if we find a use for it later.
Back-patch to v12; older pgbench versions lack the
conditional-execution features needed for this method.
Tom Lane and Andrey Borodin
Discussion: https://postgr.es/m/139687.1635277318@sss.pgh.pa.us
The foreign data wrapper's validator function provides a HINT message with
a list of valid options for the object specified in a CREATE or ALTER
command, when the option given in the command is invalid. Previously
postgresql_fdw_validator() and the validator functions for postgres_fdw and
dblink_fdw worked that way even when there were no valid options for the
object, which could lead to a HINT message with an empty list. For
example, ALTER FOREIGN DATA WRAPPER postgres_fdw OPTIONS (format 'csv')
reported the following ERROR and HINT messages. This behavior was
confusing.
    ERROR: invalid option "format"
    HINT: Valid options in this context are:
There is no such issue in file_fdw. In that case, its validator function
instead reports the HINT message "There are no valid options in this
context."
This commit improves postgresql_fdw_validator() and the validator functions
for postgres_fdw and dblink_fdw so that they do likewise. For example,
this change causes the above ALTER FOREIGN DATA WRAPPER command to
report the following messages.
    ERROR: invalid option "format"
    HINT: There are no valid options in this context.
Author: Kosei Masumura
Reviewed-by: Bharath Rupireddy, Fujii Masao
Discussion: https://postgr.es/m/557d06cebe19081bfcc83ee2affc98d3@oss.nttdata.com
The five modules in our TAP test framework all had names in the top
level namespace. This is unwise because, even though we're not
exporting them to CPAN, the names can leak, for example if they are
exported by the RPM build process. We therefore move the modules to the
PostgreSQL::Test namespace. In the process PostgresNode is renamed to
Cluster, and TestLib is renamed to Utils. PostgresVersion becomes simply
PostgreSQL::Version, to avoid possible confusion about what it's the
version of.
Discussion: https://postgr.es/m/aede93a4-7d92-ef26-398f-5094944c2504@dunslane.net
Reviewed by Erik Rijkers and Michael Paquier
The purpose of commit 8a54e12a38 was to
fix this, and it sufficed when the PREPARE TRANSACTION completed before
the CIC looked for lock conflicts. Otherwise, things still broke. As
before, in a cluster having used CIC while having enabled prepared
transactions, queries that use the resulting index can silently fail to
find rows. It may be necessary to reindex to recover from past
occurrences; REINDEX CONCURRENTLY suffices. Fix this for future index
builds by making CIC wait for arbitrarily-recent prepared transactions
and for ordinary transactions that may yet PREPARE TRANSACTION. As part
of that, have PREPARE TRANSACTION transfer locks to its dummy PGPROC
before it calls ProcArrayClearTransaction(). Back-patch to 9.6 (all
supported versions).
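Where past occurrences are suspected, a recovery sketch; the index name
is hypothetical:

    REINDEX INDEX CONCURRENTLY idx_possibly_affected;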
Andrey Borodin, reviewed (in earlier versions) by Andres Freund.
Discussion: https://postgr.es/m/01824242-AA92-4FE9-9BA7-AEBAFFEA3D0C@yandex-team.ru
CIC and REINDEX CONCURRENTLY assume backends see their catalog changes
no later than each backend's next transaction start. That failed to
hold when a backend absorbed a relevant invalidation in the middle of
running RelationBuildDesc() on the CIC index. Queries that use the
resulting index can silently fail to find rows. Fix this for future
index builds by making RelationBuildDesc() loop until it finishes
without accepting a relevant invalidation. It may be necessary to
reindex to recover from past occurrences; REINDEX CONCURRENTLY suffices.
Back-patch to 9.6 (all supported versions).
Noah Misch and Andrey Borodin, reviewed (in earlier versions) by Andres
Freund.
Discussion: https://postgr.es/m/20210730022548.GA1940096@gust.leadboat.com
Incrementing the level of the call stack reported is useful for
debugging purposes, as it makes it possible to tell which part of a test
is actually failing, especially when a test is structured with
subroutines that call routines from Test::More.
This adds more incrementations of $Test::Builder::Level where they
improve debugging (it does not make sense for some paths, like
pg_rewind, where long subroutines are used).
A note is added to src/test/perl/README about that, based on a
suggestion from Andrew Dunstan and wording coming from both of us.
Usage of Test::Builder::Level has spread in 12, so this is back-patched
down to that version.
Reviewed-by: Andrew Dunstan, Peter Eisentraut, Daniel Gustafsson
Discussion: https://postgr.es/m/YV1CCFwgM1RV1LeS@paquier.xyz
Backpatch-through: 12
Have verify_heapam.c treat unlogged relations as if they were simply
empty when in Hot Standby mode. This brings it in line with
verify_nbtree.c, which has handled unlogged relations in the same way
since bugfix commit 6754fe65a4. This was an oversight in commit
866e24d47d, which extended contrib/amcheck to check heap relations.
In passing, lower the verbosity used when reporting that a relation has
been skipped like this, from NOTICE to DEBUG1. This is appropriate
because the skipping behavior is only an implementation detail, needed
to work around the fact that unlogged tables don't have smgr-level
storage for their main fork when in Hot Standby mode.
Affected unlogged relations should be considered "trivially verified",
not skipped over. They are verified in the same sense that a totally
empty relation can be verified. This behavior seems least surprising
overall, since unlogged relations on a replica will initially be empty
if and when the replica is promoted and Hot Standby ends.
Author: Mark Dilger <mark.dilger@enterprisedb.com>
Reviewed-By: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAH2-Wzk_pukOFY7JmdiFLsrz+Pd3V8OwgC1TH2Vd5BH5ZgK4bA@mail.gmail.com
Backpatch: 14-, where heapam verification was introduced.
Commit c7b7311f6 adjusted conversion_error_callback to always use
information from the query's rangetable, to avoid doing catalog lookups
in an already-failed transaction. However, as a result of the utterly
inadequate documentation for make_tuple_from_result_row, I failed to
realize that fsstate could be NULL in some contexts. That led to a
crash if we got a conversion error in such a context. Fix by falling
back to the previous coding when fsstate is NULL. Improve the
commentary, too.
Per report from Andrey Borodin. Back-patch to 9.6, like the previous
patch.
Discussion: https://postgr.es/m/08916396-55E4-4D68-AB3A-BD6066F9E5C0@yandex-team.ru
When the newest version is loaded, the backend would load objects from
the oldest complete SQL file (here 1.4) and then update to the latest
version with transition scripts (up to 1.9 currently). This provides
some coverage for upgrades of pg_stat_statements, but there was no test
showing how things have changed across each version.
This adds a couple of tests for the upgrade paths, using objects from
each supported version and stressing the objects whose behavior has
changed across those versions.
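A sketch of the upgrade path being exercised, with the version numbers
from above:

    CREATE EXTENSION pg_stat_statements VERSION '1.4';
    ALTER EXTENSION pg_stat_statements UPDATE;  -- transition scripts up to 1.9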
Author: Erica Zhang
Reviewed-by: Julien Rouhaud, Michael Paquier
Discussion: https://postgr.es/m/tencent_BBA974AFF61379F2345E782FD6C55891950A@qq.com
It turns out that the instability complained of in commit d3c09b9b1
has an embarrassingly simple explanation. The test script waits for
the standby to flush incoming WAL to disk, but it should wait for
the WAL to be replayed, since we are testing for the effects of that
to be visible.
While at it, use wait_for_catchup instead of reinventing that logic,
and adjust $Test::Builder::Level to improve future error reports.
Back-patch to v12 where the necessary infrastructure came in
(cf. aforesaid commit). Also back-patch 7d1aa6bf1 so that the
test will actually get run.
Discussion: https://postgr.es/m/2854602.1632852664@sss.pgh.pa.us
Sequences were left out of the list of relation kinds that
verify_heapam knew how to check, though it is fairly trivial to allow
them. This commit does that and, while at it, updates pg_amcheck to
include sequences among the relations matched by table and relation
patterns.
Author: Mark Dilger <mark.dilger@enterprisedb.com>
Discussion: https://www.postgresql.org/message-id/flat/81ad4757-92c1-4aa3-7bee-f609544837e3%40enterprisedb.com
These tests were disabled back in 2018 (commit d3c09b9b1) because of
failures observed in the buildfarm. I've not been able to reproduce
any failure on longfin's host, though, so I'm curious whether or to
what extent we've fixed the problem. Let's re-enable them (in HEAD
only) and see what blows up.
Discussion: https://postgr.es/m/2769443.1632773967@sss.pgh.pa.us
Several mistakes have piled up in the code comments over time,
including incorrect grammar, function names, and simple typos. This
commit takes care of a portion of them.
No backpatch is done as this is only cosmetic.
Author: Justin Pryzby
Discussion: https://postgr.es/m/20210924215827.GS831@telsasoft.com
In postgres_fdw, the pgfdw_xact_callback() and pgfdw_subxact_callback()
callback functions do almost the same thing to roll back a remote
toplevel transaction and subtransaction, respectively. But previously
that rollback logic was implemented separately in each function, and in
different ways, which hurt the readability and maintainability of the
code.
To fix the issue, this commit creates a common function to roll back
remote transactions and makes both callback functions use it, avoiding
unnecessary code duplication.
Author: Fujii Masao
Reviewed-by: Zhihong Yu, Bharath Rupireddy
Discussion: https://postgr.es/m/62fbb63a-d46c-fb47-a56d-f6be1909aa44@oss.nttdata.com
An elog that reports the value of a transaction ID stored on a deleted
nbtree page was added by commit e5d8a999, which taught page deletion to
store full 64-bit XIDs. It seems very chatty on further reflection, so
lower its elevel from NOTICE to DEBUG2.
Author: Peter Geoghegan <pg@bowt.ie>
Backpatch: 14-, just like the nbtree XID enhancement.
This switches the default ACL to what the documentation has recommended
since CVE-2018-1058. Upgrades will carry forward any old ownership and
ACL. Sites that declined the 2018 recommendation should take a fresh
look. Recipes for commissioning a new database cluster from scratch may
need to create a schema, grant more privileges, etc. Out-of-tree test
suites may require such updates.
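Possible recipe updates, sketched here with hypothetical role and
schema names:

    GRANT CREATE ON SCHEMA public TO PUBLIC;    -- restore the pre-change default
    CREATE SCHEMA app AUTHORIZATION app_owner;  -- or give the application its own schema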
Reviewed by Peter Eisentraut.
Discussion: https://postgr.es/m/20201031163518.GB4039133@rfd.leadboat.com
The Value node struct is a weird construct. It is its own node type,
but most of the time, it actually has a node type of Integer, Float,
String, or BitString. As a consequence, the struct name and the node
type don't match most of the time, and so it has to be treated
specially a lot. There doesn't seem to be any value in the special
construct. There is very little code that wants to accept all Value
variants but nothing else (and even if it did, this doesn't provide
any convenient way to check it), and most code wants either just one
particular node type (usually String), or it accepts a broader set of
node types besides just Value.
This change removes the Value struct and node type and replaces them
by separate Integer, Float, String, and BitString node types that are
proper node types and structs of their own and behave mostly like
normal node types.
Also, this removes the T_Null node tag, which was previously also a
possible variant of Value but wasn't actually used outside of the
Value contained in A_Const. Replace that by an isnull field in
A_Const.
Reviewed-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/5ba6bc5b-3f95-04f2-2419-f8ddb4c046fb@enterprisedb.com
Commit 449ab63505 added tests that check that the
postgres_fdw.application_name GUC works as expected. But they were
unstable and caused some buildfarm members to report failures. This
commit reverts those unstable tests.
Reported-by: Tom Lane as per buildfarm
Discussion: https://postgr.es/m/3220909.1631054766@sss.pgh.pa.us
All the code paths simplified here were already using a boolean or used
an expression that led to zero or one, making the extra bits
unnecessary.
Author: Justin Pryzby
Reviewed-by: Tom Lane, Michael Paquier, Peter Smith
Discussion: https://postgr.es/m/20210428182936.GE27406@telsasoft.com
This commit adds the postgres_fdw.application_name GUC, which specifies
a value for the application_name configuration parameter used
when postgres_fdw establishes a connection to a foreign server.
This GUC setting always overrides the application_name option of
the foreign server object. It is useful when we want to
specify our own application_name per remote connection.
Previously, the application_name of a remote connection could be set
basically only via the options of a server object, which meant that
every session connecting to the same foreign server basically had to
use the same application_name. Moreover, changing the setting required
executing an "ALTER SERVER ... OPTIONS ..." command, which was
inconvenient.
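A usage sketch; the application name value is hypothetical:

    SET postgres_fdw.application_name TO 'myapp_reporting';
    -- connections established after this point use the GUC value,
    -- overriding any application_name option of the server object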
Author: Hayato Kuroda
Reviewed-by: Masahiro Ikeda, Fujii Masao
Discussion: https://postgr.es/m/TYCPR01MB5870D1E8B949DAF6D3B84E02F5F29@TYCPR01MB5870.jpnprd01.prod.outlook.com
Avoid repeating fragments of the query we're building, along the
same lines as recent cleanup in pg_dump. I got annoyed about this
because aa769f80e broke my pending patch to change postgres_fdw's
collation handling, due to each of us having incompletely done
this same refactoring. Let's finish that job in hopes of having
a more stable base.
map and grep are not intended to be used as mutators; iterating with
side effects should be done with for or foreach loops. This fixes the
one occurrence of the pattern and bumps the perlcritic policy to
severity 5 for the map and grep policies.
Author: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Andrew Dunstan <andrew@dunslane.net>
Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
Discussion: https://postgr.es/m/87fsvzhhc4.fsf@wibble.ilmari.org
Commit 325f2ec555 introduced pg_class.relwrite to skip operations on
tables created as part of a heap rewrite during DDL. It links such
transient heaps to the original relation OID via this new field in
pg_class but forgot to do anything about toast tables. So, logical
decoding was not able to skip operations on internally created toast
tables. This led to an error when decoding the WAL for the next
operation, which appeared to involve toast data when in fact there was
none.
To fix this, we set pg_class.relwrite for internally created toast tables
as well, which allows operations on them to be skipped during logical
decoding.
Author: Bertrand Drouvot
Reviewed-by: David Zhang, Amit Kapila
Backpatch-through: 11, where it was introduced
Discussion: https://postgr.es/m/b5146fb1-ad9e-7d6e-f980-98ed68744a7c@amazon.com
The current code could make unnecessary calls to isinf(): it always
called it twice, once per argument, when one call could be sufficient in
some cases. zero_is_valid was never used, but the result value was
still checked against 0 in the first position of the check.
This is similar to 607f8ce. btree_gist had simply copy-pasted the code
doing those checks from the backend float4/8 code, namely the macro
CHECKFLOATVAL().
Author: Haiying Tang
Discussion: https://postgr.es/m/OS0PR01MB611358E3A7BC3C2F874AC36BFBF39@OS0PR01MB6113.jpnprd01.prod.outlook.com
As a result of confusion about whether the "char" type is signed or
unsigned, scans for index searches like "col < 'x'" or "col <= 'x'"
would start at the middle of the index not the left end, thus missing
many or all of the entries they should find. Fortunately, this
is not a symptom of index corruption. It's only the search logic
that is broken, and we can fix it without unpleasant side-effects.
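A sketch of the affected query shape; the table and column names are
hypothetical:

    CREATE EXTENSION btree_gin;
    CREATE TABLE t (c "char");
    CREATE INDEX ON t USING gin (c);
    SELECT * FROM t WHERE c < 'x';  -- previously could miss rows via the index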
Per report from Jason Kim. This has been wrong since btree_gin's
beginning, so back-patch to all supported branches.
Discussion: https://postgr.es/m/20210810001649.htnltbh7c63re42p@jasonk.me
OpenSSL 3 introduced the concept of providers to support modularization,
and moved the outdated ciphers to the new legacy provider. In case it's
not loaded in the user's openssl.cnf file, there will be a lot of
regression test failures, so add alternative outputs covering those.
Also document the need to load the legacy provider in order to use older
ciphers with OpenSSL-enabled pgcrypto.
This will be backpatched to all supported versions once there is
sufficient testing of OpenSSL 3 in the buildfarm.
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/FEF81714-D479-4512-839B-C769D2605F8A@yesql.se
The PX layer in pgcrypto handles digest padding on its own, uniformly
for all backend implementations. Starting with OpenSSL 3.0.0,
DecryptUpdate doesn't flush the last block when padding is enabled, so
explicitly disable padding since we don't use it.
This will be backpatched to all supported versions once there is
sufficient testing of OpenSSL 3 in the buildfarm.
Reviewed-by: Peter Eisentraut, Michael Paquier
Discussion: https://postgr.es/m/FEF81714-D479-4512-839B-C769D2605F8A@yesql.se
Identifying the precise match locations for parenthesized subexpressions
is a fairly expensive task given the way our regexp engine works, both
at regexp compile time (where we must create an optimized NFA for each
parenthesized subexpression) and at runtime (where determining exact
match locations requires laborious search).
Up to now we've made little attempt to optimize this situation. This
patch identifies cases where we know at compile time that we won't
need to know subexpression match locations, and teaches the regexp
compiler to not bother creating per-subexpression regexps for
parenthesis pairs that are not referenced by backrefs elsewhere in
the regexp. (To preserve semantics, we obviously still have to
pin down the match locations of backref references.) Users could
have obtained the same results before this by being careful to
write "non capturing" parentheses wherever possible, but few people
bother with that.
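A sketch of where the optimization applies:

    SELECT 'barfoo' ~ '(foo|bar)+';  -- match locations unneeded; parens now cost nothing extra
    SELECT 'abcabc' ~ '(abc)\1';     -- a backref still forces tracking the referenced parens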
Discussion: https://postgr.es/m/2219936.1628115334@sss.pgh.pa.us
postgres_fdw imported generated columns from the remote tables as plain
columns, and caused failures like "ERROR: cannot insert a non-DEFAULT
value into column "foo"" when inserting into the foreign tables, as it
tried to insert values into the generated columns. To fix, we do the
following under the assumption that generated columns in a postgres_fdw
foreign table are defined so that they represent generated columns in
the underlying remote table:
* Send DEFAULT for the generated columns to the foreign server on insert
or update, not generated column values computed on the local server.
* Add to postgresImportForeignSchema() an option "import_generated" to
include column generated expressions in the definitions of foreign
tables imported from a foreign server. The option is true by default.
The assumption seems reasonable, because that would make a query of the
postgres_fdw foreign table return values for the generated columns that
are consistent with the generated expression.
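A sketch of the new option; the server and schema names are
hypothetical:

    IMPORT FOREIGN SCHEMA remote_schema FROM SERVER loopback
        INTO local_schema OPTIONS (import_generated 'false');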
While here, fix another issue in postgresImportForeignSchema(): it tried
to include column generated expressions as column default expressions in
the foreign table definitions when the import_default option was enabled.
Per bug #16631 from Daniel Cherniy. Back-patch to v12 where generated
columns were added.
Discussion: https://postgr.es/m/16631-e929fe9db0ffc7cf%40postgresql.org
I failed to account for the possibility that when
ExecAppendAsyncEventWait() notifies multiple async-capable nodes using
postgres_fdw, a preceding node might invoke process_pending_request() to
process a pending asynchronous request made by a succeeding node. In
that case the succeeding node should produce a tuple to return to the
parent Append node from tuples fetched by process_pending_request() when
notified. Repair.
Per buildfarm via Michael Paquier. Back-patch to v14, like the previous
commit.
Thanks to Tom Lane for testing.
Discussion: https://postgr.es/m/YQP0UPT8KmPiHTMs%40paquier.xyz
This is simple enough except for the need to check whether CaseTestExpr
nodes have a collation that is not derived from a remote Var. For that,
examine the CASE's "arg" expression and then pass that info down into the
recursive examination of the WHEN expressions.
Alexander Pyhalov, reviewed by Gilles Darold and myself
Discussion: https://postgr.es/m/fda09032e90d85d9b726a41e03f9097f@postgrespro.ru
A pending asynchronous request is handled by process_pending_request(),
which previously not only processed an in-progress remote query but
performed ExecForeignScan() to produce a tuple to return to the local
server asynchronously from the result of the remote query. But that led
to a server crash when executing a query, or to an "InstrStartNode
called twice in a row" or "InstrEndLoop called on running node" failure
when doing EXPLAIN ANALYZE of it, in cases where the plan tree contained
multiple async-capable nodes accessing the same initplan/subplan, which
in turn contained multiple async-capable nodes scanning the same foreign
tables as the parent async-capable nodes, as reported by Andrey
Lepikhov. The reason is that the second step in
process_pending_request(), invoked when executing the initplan/subplan
for one of the parent async-capable nodes, caused recursive execution of
the initplan/subplan for another of the parent async-capable nodes.
To fix, split process_pending_request() into the two steps and postpone
the second step until ForeignAsyncConfigureWait() is called for each of
the pending asynchronous requests. Also, ExecAppendAsyncEventWait()
previously assumed that FDWs would register at least one wait event in
the WaitEventSet created there when called from
ForeignAsyncConfigureWait(); now allow FDWs to register zero wait
events, and modify ExecAppendAsyncEventWait() to just return in that
case.
Oversight in commit 27e1f1456. Back-patch to v14 where that commit went
in.
Andrey Lepikhov and Etsuro Fujita
Discussion: https://postgr.es/m/fe5eaa19-1704-e4a4-76ee-3b9d37ade399@postgrespro.ru
There is only one constructor now for PostgresNode, with the idiomatic
name 'new'. The method is not exported by the class, and must be called
as "PostgresNode->new('name',[args])". All the TAP tests that use
PostgresNode are modified accordingly. Third party scripts will need
adjusting, which is a fairly mechanical process (I just used a sed
script).
This adjusts the MSVC build scripts to inspect the compile flags
mentioned in the Makefile for -D arguments in order to determine which
constants should be defined in Visual Studio builds.
One small anomaly that appeared as a result of this change is that the
Makefile for the ltree contrib module defined LOWER_NODE, but this was
not properly defined in the MSVC build scripts. This meant that MSVC
builds would differ in case sensitivity in the ltree module when
compared to builds using a make build environment. To maintain the same
behavior here we remove the -DLOWER_NODE from the Makefile and just always
define it in ltree.h for non-MSVC builds. We need to maintain the old
behavior here as this affects the on-disk compatibility of GiST indexes
when using the ltree type.
The only other resulting change here is that REFINT_VERBOSE is now defined
for the autoinc, insert_username and moddatetime contrib modules.
Previously on MSVC, this was only defined for the refint module. This
aligns the behavior to build environments using make as all 4 of these
modules share the same Makefile.
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/CAApHDvpo6g5csCTjc_0C7DMvgFPomVb0Rh-AcW5afd=Ya=LRuw@mail.gmail.com
The error messages using the word "non-negative" are confusing
because it's ambiguous about whether they accept zero or not.
This commit improves those error messages by replacing that word
with less ambiguous wording like "greater than zero" or
"greater than or equal to zero".
This commit also adds a note about the word "non-negative" to the
error message style guide, to help with writing new error messages.
Previously, when the postgres_fdw option fetch_size was set to zero,
the error message "fetch_size requires a non-negative integer value"
was reported. This error message was outright buggy. Therefore,
back-patch to all supported versions where such buggy error messages
could be thrown.
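A sketch of the previously-buggy case; the server name is hypothetical:

    ALTER SERVER loopback OPTIONS (ADD fetch_size '0');
    -- previously: ERROR: fetch_size requires a non-negative integer value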
Reported-by: Hou Zhijie
Author: Bharath Rupireddy
Reviewed-by: Kyotaro Horiguchi, Fujii Masao
Discussion: https://postgr.es/m/OS0PR01MB5716415335A06B489F1B3A8194569@OS0PR01MB5716.jpnprd01.prod.outlook.com
This patch causes EXPLAIN output to refer to objects that are in
the current session's temp schema with the "pg_temp" schema alias
rather than that schema's actual name. This is useful for our own
testing purposes since it will stabilize EXPLAIN VERBOSE output
for such cases, allowing us to use that in regression tests.
It should be less confusing for end users too.
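A sketch of the effect; the table name is hypothetical:

    CREATE TEMP TABLE tt (x int);
    EXPLAIN (VERBOSE, COSTS OFF) SELECT * FROM tt;
    -- now shows:       Seq Scan on pg_temp.tt
    -- previously e.g.: Seq Scan on pg_temp_3.tt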
Since ruleutils.c needs to change behavior for this, the change
also leaks into a few other users of ruleutils.c, for example
pg_get_viewdef(). AFAICS that won't cause any problems.
We did find that aggressively trying to change this behavior
across-the-board would cause issues, but as long as "pg_temp"
only appears within generated SQL text, I think it'll be fine.
Along the way, make get_namespace_name_or_temp conform to the
same API as get_namespace_name, ie that it returns a palloc'd
string or NULL. The current behavior hasn't caused any bugs
since no callers attempt to pfree the result, but if it gets
more widespread usage that could become a problem.
Amul Sul, reviewed and extended by me
Discussion: https://postgr.es/m/CAAJ_b97W=QaGmag9AhWNbmx3uEYsNkXWL+OVW1_E1D3BtgWvtw@mail.gmail.com
To add support for streaming transactions at prepare time into the
built-in logical replication, we need to do the following things:
* Modify the output plugin (pgoutput) to implement the new two-phase API
callbacks, by leveraging the extended replication protocol.
* Modify the replication apply worker, to properly handle two-phase
transactions by replaying them on prepare.
* Add a new SUBSCRIPTION option "two_phase" to allow users to enable
two-phase transactions. We enable two_phase once the initial data sync
is over.
We however must explicitly disable replication of two-phase transactions
during replication slot creation, even if the plugin supports it. We
don't need to replicate the changes accumulated during this phase,
and moreover, we don't have a replication connection open so we don't know
where to send the data anyway.
The streaming option is not allowed with this new two_phase option. This
can be done as a separate patch.
We don't allow toggling the two_phase option of a subscription because
it can lead to an inconsistent replica. For the same reason, we don't
allow refreshing the publication once two_phase is enabled for a
subscription, unless the copy_data option is false.
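A usage sketch; the connection string and publication name are
hypothetical:

    CREATE SUBSCRIPTION mysub
        CONNECTION 'host=primary dbname=postgres'
        PUBLICATION mypub
        WITH (two_phase = true);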
Author: Peter Smith, Ajin Cherian and Amit Kapila based on previous work by Nikhil Sontakke and Stas Kelvich
Reviewed-by: Amit Kapila, Sawada Masahiko, Vignesh C, Dilip Kumar, Takamichi Osumi, Greg Nancarrow
Tested-By: Haiying Tang
Discussion: https://postgr.es/m/02DA5F5E-CECE-4D9C-8B4B-418077E2C010@postgrespro.ru
Discussion: https://postgr.es/m/CAA4eK1+opiV4aFTmWWUF9h_32=HfPOW9vZASHarT0UA5oBrtGw@mail.gmail.com
"Result Cache" was never a great name for this node, but nobody managed
to come up with another name that anyone liked enough. That was until
David Johnston mentioned "Node Memoization", which Tom Lane revised to
just "Memoize". People seem to like "Memoize", so let's do the rename.
Reviewed-by: Justin Pryzby
Discussion: https://postgr.es/m/20210708165145.GG1176@momjian.us
Backpatch-through: 14, where Result Cache was introduced