Incrementing the level of the call stack reported is useful for
debugging purposes, as it makes it possible to know exactly which part
of a test is failing, especially when a test is structured with
subroutines that call routines from Test::More.
This adds increments of $Test::Builder::Level in the places where they
improve debugging (it does not make sense for some paths, like
pg_rewind, where long subroutines are used).
A note about this is added to src/test/perl/README, based on a
suggestion from Andrew Dunstan and with wording from both of us.
Usage of Test::Builder::Level has spread into 12, so a backpatch down
to that version is done.
Reviewed-by: Andrew Dunstan, Peter Eisentraut, Daniel Gustafsson
Discussion: https://postgr.es/m/YV1CCFwgM1RV1LeS@paquier.xyz
Backpatch-through: 12
Have verify_heapam.c treat unlogged relations as if they were simply
empty when in Hot Standby mode. This brings it in line with
verify_nbtree.c, which has handled unlogged relations in the same way
since bugfix commit 6754fe65a4. This was an oversight in commit
866e24d47d, which extended contrib/amcheck to check heap relations.
In passing, lower the verbosity used when reporting that a relation has
been skipped like this, from NOTICE to DEBUG1. This is appropriate
because the skipping behavior is only an implementation detail, needed
to work around the fact that unlogged tables don't have smgr-level
storage for their main fork when in Hot Standby mode.
Affected unlogged relations should be considered "trivially verified",
not skipped over. They are verified in the same sense that a totally
empty relation can be verified. This behavior seems least surprising
overall, since unlogged relations on a replica will initially be empty
if and when the replica is promoted and Hot Standby ends.
Author: Mark Dilger <mark.dilger@enterprisedb.com>
Reviewed-By: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAH2-Wzk_pukOFY7JmdiFLsrz+Pd3V8OwgC1TH2Vd5BH5ZgK4bA@mail.gmail.com
Backpatch: 14-, where heapam verification was introduced.
Commit c7b7311f6 adjusted conversion_error_callback to always use
information from the query's rangetable, to avoid doing catalog lookups
in an already-failed transaction. However, as a result of the utterly
inadequate documentation for make_tuple_from_result_row, I failed to
realize that fsstate could be NULL in some contexts. That led to a
crash if we got a conversion error in such a context. Fix by falling
back to the previous coding when fsstate is NULL. Improve the
commentary, too.
Per report from Andrey Borodin. Back-patch to 9.6, like the previous
patch.
Discussion: https://postgr.es/m/08916396-55E4-4D68-AB3A-BD6066F9E5C0@yandex-team.ru
When the newest version is loaded, the backend loads objects from the
oldest complete SQL file (here 1.4) and then updates to the latest
version with transition scripts (up to 1.9 currently). This provides
some coverage for upgrades of pg_stat_statements, but there is no test
to show how things have changed across each version.
This adds a couple of tests for the upgrade paths, using objects from
each supported version and stressing the objects whose behavior has
changed across versions.
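For instance, one of the upgrade paths covered here looks roughly like
the following:

    CREATE EXTENSION pg_stat_statements VERSION '1.4';
    -- exercise the objects of this version, then move to the next one
    ALTER EXTENSION pg_stat_statements UPDATE TO '1.5';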
Author: Erica Zhang
Reviewed-by: Julien Rouhaud, Michael Paquier
Discussion: https://postgr.es/m/tencent_BBA974AFF61379F2345E782FD6C55891950A@qq.com
It turns out that the instability complained of in commit d3c09b9b1
has an embarrassingly simple explanation. The test script waits for
the standby to flush incoming WAL to disk, but it should wait for
the WAL to be replayed, since we are testing for the effects of that
to be visible.
While at it, use wait_for_catchup instead of reinventing that logic,
and adjust $Test::Builder::Level to improve future error reports.
Back-patch to v12 where the necessary infrastructure came in
(cf. aforesaid commit). Also back-patch 7d1aa6bf1 so that the
test will actually get run.
Discussion: https://postgr.es/m/2854602.1632852664@sss.pgh.pa.us
Sequences were left out of the list of relation kinds that
verify_heapam knew how to check, though it is fairly trivial to allow
them. Do that, and while at it, update pg_amcheck to include sequences
in the relations matched by table and relation patterns.
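For example, with this change something like the following works for a
sequence as well (the sequence name is made up):

    CREATE SEQUENCE check_seq;
    SELECT * FROM verify_heapam('check_seq');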
Author: Mark Dilger <mark.dilger@enterprisedb.com>
Discussion: https://www.postgresql.org/message-id/flat/81ad4757-92c1-4aa3-7bee-f609544837e3%40enterprisedb.com
These tests were disabled back in 2018 (commit d3c09b9b1) because of
failures observed in the buildfarm. I've not been able to reproduce
any failure on longfin's host, though, so I'm curious whether or to
what extent we've fixed the problem. Let's re-enable them (in HEAD
only) and see what blows up.
Discussion: https://postgr.es/m/2769443.1632773967@sss.pgh.pa.us
Several mistakes have piled up in the code comments over time,
including incorrect grammar, wrong function names and simple typos.
This commit takes care of a portion of them.
No backpatch is done as this is only cosmetic.
Author: Justin Pryzby
Discussion: https://postgr.es/m/20210924215827.GS831@telsasoft.com
In postgres_fdw, the pgfdw_xact_callback() and pgfdw_subxact_callback()
callback functions do almost the same thing to roll back a remote
toplevel transaction and subtransaction. But previously that rollback
logic was implemented separately in each function, and in different
ways, which hurt the readability and maintainability of the code.
To fix this, this commit creates a common function to roll back remote
transactions and makes both callback functions use it, avoiding
unnecessary code duplication.
Author: Fujii Masao
Reviewed-by: Zhihong Yu, Bharath Rupireddy
Discussion: https://postgr.es/m/62fbb63a-d46c-fb47-a56d-f6be1909aa44@oss.nttdata.com
An elog that reports the value of a transaction ID stored on a deleted
nbtree page was added by commit e5d8a999, which taught page deletion to
store full 64-bit XIDs. It seems very chatty on further reflection, so
lower its elevel from NOTICE to DEBUG2.
Author: Peter Geoghegan <pg@bowt.ie>
Backpatch: 14-, just like the nbtree XID enhancement.
This switches the default ACL to what the documentation has recommended
since CVE-2018-1058. Upgrades will carry forward any old ownership and
ACL. Sites that declined the 2018 recommendation should take a fresh
look. Recipes for commissioning a new database cluster from scratch may
need to create a schema, grant more privileges, etc. Out-of-tree test
suites may require such updates.
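For example, a provisioning script that relied on the old default may
now need something like the following (role and schema names here are
only illustrative):

    CREATE SCHEMA app AUTHORIZATION app_user;
    -- or, to restore the previous default on the public schema:
    GRANT CREATE ON SCHEMA public TO PUBLIC;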
Reviewed by Peter Eisentraut.
Discussion: https://postgr.es/m/20201031163518.GB4039133@rfd.leadboat.com
The Value node struct is a weird construct. It is its own node type,
but most of the time, it actually has a node type of Integer, Float,
String, or BitString. As a consequence, the struct name and the node
type don't match most of the time, and so it has to be treated
specially a lot. There doesn't seem to be any value in the special
construct. There is very little code that wants to accept all Value
variants but nothing else (and even if it did, this doesn't provide
any convenient way to check it), and most code wants either just one
particular node type (usually String), or it accepts a broader set of
node types besides just Value.
This change removes the Value struct and node type and replaces them
by separate Integer, Float, String, and BitString node types that are
proper node types and structs of their own and behave mostly like
normal node types.
Also, this removes the T_Null node tag, which was previously also a
possible variant of Value but wasn't actually used outside of the
Value contained in A_Const. Replace that by an isnull field in
A_Const.
Reviewed-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/5ba6bc5b-3f95-04f2-2419-f8ddb4c046fb@enterprisedb.com
Commit 449ab63505 added tests checking that the
postgres_fdw.application_name GUC works as expected. But they were
unstable and caused some buildfarm members to report failures. This
commit reverts those unstable tests.
Reported-by: Tom Lane as per buildfarm
Discussion: https://postgr.es/m/3220909.1631054766@sss.pgh.pa.us
All the code paths simplified here were already using a boolean or used
an expression that led to zero or one, making the extra bits
unnecessary.
Author: Justin Pryzby
Reviewed-by: Tom Lane, Michael Paquier, Peter Smith
Discussion: https://postgr.es/m/20210428182936.GE27406@telsasoft.com
This commit adds the postgres_fdw.application_name GUC, which specifies
a value for the application_name configuration parameter used
when postgres_fdw establishes a connection to a foreign server.
This GUC setting always overrides the application_name option of
the foreign server object. It is useful when we want to specify
our own application_name per remote connection.
Previously the application_name of a remote connection could basically
be set only via the options of a server object, which meant that every
session connecting to the same foreign server had to use the same
application_name. Also, changing the setting required executing an
"ALTER SERVER ... OPTIONS ..." command, which was inconvenient.
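For example, each session can now do something like the following (the
value shown is just an example), and the connections that postgres_fdw
opens from that session will use it, overriding any server-level
option:

    SET postgres_fdw.application_name = 'reporting_batch';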
Author: Hayato Kuroda
Reviewed-by: Masahiro Ikeda, Fujii Masao
Discussion: https://postgr.es/m/TYCPR01MB5870D1E8B949DAF6D3B84E02F5F29@TYCPR01MB5870.jpnprd01.prod.outlook.com
Avoid repeating fragments of the query we're building, along the
same lines as recent cleanup in pg_dump. I got annoyed about this
because aa769f80e broke my pending patch to change postgres_fdw's
collation handling, due to each of us having incompletely done
this same refactoring. Let's finish that job in hopes of having
a more stable base.
map and grep are not intended to be used as mutators; iterating with
side effects should be done with for or foreach loops. This fixes the
one occurrence of the pattern, and bumps the perlcritic policy to
severity 5 for the map and grep policies.
Author: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Andrew Dunstan <andrew@dunslane.net>
Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
Discussion: https://postgr.es/m/87fsvzhhc4.fsf@wibble.ilmari.org
Commit 325f2ec555 introduced pg_class.relrewrite to skip operations on
tables created as part of a heap rewrite during DDL. It links such
transient heaps to the original relation OID via this new field in
pg_class, but forgot to do anything about toast tables. So, logical
decoding was not able to skip operations on internally created toast
tables. This led to an error when decoding the WAL for the next
operation, which appeared to contain toast data when in fact it had
none.
To fix this, set pg_class.relrewrite for internally created toast
tables as well, which allows skipping operations on them during logical
decoding.
Author: Bertrand Drouvot
Reviewed-by: David Zhang, Amit Kapila
Backpatch-through: 11, where it was introduced
Discussion: https://postgr.es/m/b5146fb1-ad9e-7d6e-f980-98ed68744a7c@amazon.com
The current code could make unnecessary calls to isinf() (always two
for the argument values, while one can be sufficient in some cases).
zero_is_valid was never used, but the result value was still checked
against 0 in the first position of the check.
This is similar to 607f8ce. btree_gist had just copy-pasted the code
doing those checks from the backend float4/8 code, in the form of the
macro CHECKFLOATVAL(), to do the work.
Author: Haiying Tang
Discussion: https://postgr.es/m/OS0PR01MB611358E3A7BC3C2F874AC36BFBF39@OS0PR01MB6113.jpnprd01.prod.outlook.com
As a result of confusion about whether the "char" type is signed or
unsigned, scans for index searches like "col < 'x'" or "col <= 'x'"
would start at the middle of the index not the left end, thus missing
many or all of the entries they should find. Fortunately, this
is not a symptom of index corruption. It's only the search logic
that is broken, and we can fix it without unpleasant side-effects.
Per report from Jason Kim. This has been wrong since btree_gin's
beginning, so back-patch to all supported branches.
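For illustration, the affected pattern looks like this (table and
column names are made up):

    CREATE EXTENSION btree_gin;
    CREATE TABLE t (c "char");
    CREATE INDEX ON t USING gin (c);
    -- an index scan here could miss matching rows before this fix:
    SELECT * FROM t WHERE c < 'x';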
Discussion: https://postgr.es/m/20210810001649.htnltbh7c63re42p@jasonk.me
OpenSSL 3 introduced the concept of providers to support modularization,
and moved the outdated ciphers to the new legacy provider. In case it's
not loaded in the user's openssl.cnf file there will be a lot of regress
test failures, so add alternative outputs covering those.
Also document the need to load the legacy provider in order to use older
ciphers with OpenSSL-enabled pgcrypto.
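For instance, Blowfish is one of the ciphers that OpenSSL 3 moves to
the legacy provider, so a call like the following fails unless that
provider is loaded:

    SELECT encrypt('data'::bytea, 'key'::bytea, 'bf');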
This will be backpatched to all supported versions once there is
sufficient testing of OpenSSL 3 in the buildfarm.
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/FEF81714-D479-4512-839B-C769D2605F8A@yesql.se
The PX layer in pgcrypto is handling digest padding on its own uniformly
for all backend implementations. Starting with OpenSSL 3.0.0,
DecryptUpdate doesn't flush the last block in case padding is enabled,
so explicitly disable padding since we don't use it.
This will be backpatched to all supported versions once there is
sufficient testing of OpenSSL 3 in the buildfarm.
Reviewed-by: Peter Eisentraut, Michael Paquier
Discussion: https://postgr.es/m/FEF81714-D479-4512-839B-C769D2605F8A@yesql.se
Identifying the precise match locations for parenthesized subexpressions
is a fairly expensive task given the way our regexp engine works, both
at regexp compile time (where we must create an optimized NFA for each
parenthesized subexpression) and at runtime (where determining exact
match locations requires laborious search).
Up to now we've made little attempt to optimize this situation. This
patch identifies cases where we know at compile time that we won't
need to know subexpression match locations, and teaches the regexp
compiler to not bother creating per-subexpression regexps for
parenthesis pairs that are not referenced by backrefs elsewhere in
the regexp. (To preserve semantics, we obviously still have to
pin down the match locations of backref references.) Users could
have obtained the same results before this by being careful to
write "non capturing" parentheses wherever possible, but few people
bother with that.
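As a rough illustration, the two query forms below should now cost
about the same, since no match locations are requested; previously only
the second form avoided the extra work:

    SELECT 'foobar' ~ '(foo)bar';    -- capturing parentheses
    SELECT 'foobar' ~ '(?:foo)bar';  -- non-capturing parentheses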
Discussion: https://postgr.es/m/2219936.1628115334@sss.pgh.pa.us
postgres_fdw imported generated columns from the remote tables as plain
columns, and caused failures like "ERROR: cannot insert a non-DEFAULT
value into column "foo"" when inserting into the foreign tables, as it
tried to insert values into the generated columns. To fix, we do the
following under the assumption that generated columns in a postgres_fdw
foreign table are defined so that they represent generated columns in
the underlying remote table:
* Send DEFAULT for the generated columns to the foreign server on insert
or update, not generated column values computed on the local server.
* Add to postgresImportForeignSchema() an option "import_generated" to
include column generated expressions in the definitions of foreign
tables imported from a foreign server. The option is true by default.
The assumption seems reasonable, because that would make a query of the
postgres_fdw foreign table return values for the generated columns that
are consistent with the generated expression.
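For example, definitions can be imported without the generated
expressions like this (server and schema names are illustrative):

    IMPORT FOREIGN SCHEMA remote_schema FROM SERVER remote_srv
      INTO local_schema OPTIONS (import_generated 'false');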
While here, fix another issue in postgresImportForeignSchema(): it tried
to include column generated expressions as column default expressions in
the foreign table definitions when the import_default option was enabled.
Per bug #16631 from Daniel Cherniy. Back-patch to v12 where generated
columns were added.
Discussion: https://postgr.es/m/16631-e929fe9db0ffc7cf%40postgresql.org
I failed to account for the possibility that when
ExecAppendAsyncEventWait() notifies multiple async-capable nodes using
postgres_fdw, a preceding node might invoke process_pending_request() to
process a pending asynchronous request made by a succeeding node. In
that case the succeeding node should produce a tuple to return to the
parent Append node from tuples fetched by process_pending_request() when
notified. Repair.
Per buildfarm via Michael Paquier. Back-patch to v14, like the previous
commit.
Thanks to Tom Lane for testing.
Discussion: https://postgr.es/m/YQP0UPT8KmPiHTMs%40paquier.xyz
This is simple enough except for the need to check whether CaseTestExpr
nodes have a collation that is not derived from a remote Var. For that,
examine the CASE's "arg" expression and then pass that info down into the
recursive examination of the WHEN expressions.
Alexander Pyhalov, reviewed by Gilles Darold and myself
Discussion: https://postgr.es/m/fda09032e90d85d9b726a41e03f9097f@postgrespro.ru
A pending asynchronous request is handled by process_pending_request(),
which previously not only processed an in-progress remote query but
performed ExecForeignScan() to produce a tuple to return to the local
server asynchronously from the result of the remote query. But that led
to a server crash when executing a query, or to an "InstrStartNode
called twice in a row" or "InstrEndLoop called on running node" failure
when doing EXPLAIN ANALYZE of it. This happened in cases where the plan
tree contained multiple async-capable nodes accessing the same
initplan/subplan, which in turn contained multiple async-capable nodes
scanning the same foreign tables as the parent async-capable nodes, as
reported by Andrey Lepikhov. The reason is that the second step in
process_pending_request() invoked when executing the initplan/subplan
for one of the parent async-capable nodes caused recursive execution of
the initplan/subplan for another of the parent async-capable nodes.
To fix, split process_pending_request() into the two steps and postpone
the second step until ForeignAsyncConfigureWait() is called for each of
the pending asynchronous requests. Also, ExecAppendAsyncEventWait()
assumed that FDWs would register at least one wait event in the
WaitEventSet created there when called from ForeignAsyncConfigureWait();
relax that so FDWs may register zero wait events in the WaitEventSet,
and modify ExecAppendAsyncEventWait() to just return in that case.
Oversight in commit 27e1f1456. Back-patch to v14 where that commit went
in.
Andrey Lepikhov and Etsuro Fujita
Discussion: https://postgr.es/m/fe5eaa19-1704-e4a4-76ee-3b9d37ade399@postgrespro.ru
There is only one constructor now for PostgresNode, with the idiomatic
name 'new'. The method is not exported by the class, and must be called
as "PostgresNode->new('name',[args])". All the TAP tests that use
PostgresNode are modified accordingly. Third party scripts will need
adjusting, which is a fairly mechanical process (I just used a sed
script).
This adjusts the MSVC build scripts to parse the compile flags
mentioned in the Makefile for -D arguments in order to determine which
constants should be defined in Visual Studio builds.
One small anomaly that appeared as a result of this change is that the
Makefile for the ltree contrib module defined LOWER_NODE, but this was
not properly defined in the MSVC build scripts. This meant that MSVC
builds would differ in case sensitivity in the ltree module when
compared to builds using a make build environment. To maintain the same
behavior here we remove the -DLOWER_NODE from the Makefile and just always
define it in ltree.h for non-MSVC builds. We need to maintain the old
behavior here as this affects the on-disk compatibility of GiST indexes
when using the ltree type.
The only other resulting change here is that REFINT_VERBOSE is now defined
for the autoinc, insert_username and moddatetime contrib modules.
Previously on MSVC, this was only defined for the refint module. This
aligns the behavior to build environments using make as all 4 of these
modules share the same Makefile.
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/CAApHDvpo6g5csCTjc_0C7DMvgFPomVb0Rh-AcW5afd=Ya=LRuw@mail.gmail.com
The error messages using the word "non-negative" are confusing
because it's ambiguous whether they accept zero or not.
This commit improves those error messages by replacing the word with
less ambiguous wording like "greater than zero" or
"greater than or equal to zero".
This commit also adds a note about the word "non-negative" to the
error message style guide, to help with writing new error messages.
When the postgres_fdw option fetch_size was set to zero, previously
the error message "fetch_size requires a non-negative integer value"
was reported. This error message was outright buggy. Therefore,
back-patch to all supported versions where such a buggy error message
could be thrown.
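For instance (the server name is illustrative), the following now
reports that the value must be greater than zero instead of claiming
that a "non-negative" value is required:

    ALTER SERVER loopback OPTIONS (ADD fetch_size '0');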
Reported-by: Hou Zhijie
Author: Bharath Rupireddy
Reviewed-by: Kyotaro Horiguchi, Fujii Masao
Discussion: https://postgr.es/m/OS0PR01MB5716415335A06B489F1B3A8194569@OS0PR01MB5716.jpnprd01.prod.outlook.com
This patch causes EXPLAIN output to refer to objects that are in
the current session's temp schema with the "pg_temp" schema alias
rather than that schema's actual name. This is useful for our own
testing purposes since it will stabilize EXPLAIN VERBOSE output
for such cases, allowing us to use that in regression tests.
It should be less confusing for end users too.
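A hypothetical example of the new behavior:

    CREATE TEMP TABLE mytemp (f1 int);
    EXPLAIN (VERBOSE, COSTS OFF) SELECT * FROM mytemp;
    -- the plan now refers to "pg_temp.mytemp" rather than the
    -- session's actual temp schema, such as pg_temp_3.mytemp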
Since ruleutils.c needs to change behavior for this, the change
also leaks into a few other users of ruleutils.c, for example
pg_get_viewdef(). AFAICS that won't cause any problems.
We did find that aggressively trying to change this behavior
across-the-board would cause issues, but as long as "pg_temp"
only appears within generated SQL text, I think it'll be fine.
Along the way, make get_namespace_name_or_temp conform to the
same API as get_namespace_name, ie that it returns a palloc'd
string or NULL. The current behavior hasn't caused any bugs
since no callers attempt to pfree the result, but if it gets
more widespread usage that could become a problem.
Amul Sul, reviewed and extended by me
Discussion: https://postgr.es/m/CAAJ_b97W=QaGmag9AhWNbmx3uEYsNkXWL+OVW1_E1D3BtgWvtw@mail.gmail.com
To add support for streaming transactions at prepare time into the
built-in logical replication, we need to do the following things:
* Modify the output plugin (pgoutput) to implement the new two-phase API
callbacks, by leveraging the extended replication protocol.
* Modify the replication apply worker, to properly handle two-phase
transactions by replaying them on prepare.
* Add a new SUBSCRIPTION option "two_phase" to allow users to enable
two-phase transactions. We enable the two_phase once the initial data sync
is over.
We however must explicitly disable replication of two-phase transactions
during replication slot creation, even if the plugin supports it. We
don't need to replicate the changes accumulated during this phase,
and moreover, we don't have a replication connection open so we don't know
where to send the data anyway.
The streaming option is not allowed with this new two_phase option. This
can be done as a separate patch.
We don't allow toggling the two_phase option of a subscription because
it can lead to an inconsistent replica. For the same reason, we don't
allow refreshing the publication once two_phase is enabled for a
subscription, unless the copy_data option is false.
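For example (connection string and names are illustrative), a
subscription can request two-phase mode like this:

    CREATE SUBSCRIPTION mysub
      CONNECTION 'host=publisher dbname=postgres'
      PUBLICATION mypub
      WITH (two_phase = true);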
Author: Peter Smith, Ajin Cherian and Amit Kapila based on previous work by Nikhil Sontakke and Stas Kelvich
Reviewed-by: Amit Kapila, Sawada Masahiko, Vignesh C, Dilip Kumar, Takamichi Osumi, Greg Nancarrow
Tested-By: Haiying Tang
Discussion: https://postgr.es/m/02DA5F5E-CECE-4D9C-8B4B-418077E2C010@postgrespro.ru
Discussion: https://postgr.es/m/CAA4eK1+opiV4aFTmWWUF9h_32=HfPOW9vZASHarT0UA5oBrtGw@mail.gmail.com
"Result Cache" was never a great name for this node, but nobody managed
to come up with another name that anyone liked enough. That was until
David Johnston mentioned "Node Memoization", which Tom Lane revised to
just "Memoize". People seem to like "Memoize", so let's do the rename.
Reviewed-by: Justin Pryzby
Discussion: https://postgr.es/m/20210708165145.GG1176@momjian.us
Backpatch-through: 14, where Result Cache was introduced
The idea behind this patch is to design out bugs like the one fixed
by commit 9d523119f. Previously, once one did RelationOpenSmgr(rel),
it was considered okay to access rel->rd_smgr directly for some
not-very-clear interval. But since that pointer will be cleared by
relcache flushes, we had bugs arising from overreliance on a previous
RelationOpenSmgr call still being effective.
Now, very little code except that in rel.h and relcache.c should ever
touch the rd_smgr field directly. The normal coding rule is to use
RelationGetSmgr(rel) and not expect the result to be valid for longer
than one smgr function call. There are a couple of places where using
the function every single time seemed like overkill, but they are now
annotated with large warning comments.
Amul Sul, after an idea of mine.
Discussion: https://postgr.es/m/CANiYTQsU7yMFpQYnv=BrcRVqK_3U3mtAzAsJCaqtzsDHfsUbdQ@mail.gmail.com
pd_upper, pd_lower, pd_special and the page size have been using
smallint as return type, which could cause those fields to return
negative values in certain cases for builds configured with a page size
of 32kB.
Bump pageinspect to 1.10. page_header() is able to handle the correct
return type of those fields at runtime when using an older version of
the extension, and some tests are added to cover that.
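For example, the affected fields are among those returned by:

    SELECT lower, upper, special, pagesize
      FROM page_header(get_raw_page('pg_class', 0));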
Author: Quan Zongliang
Reviewed-by: Michael Paquier, Bharath Rupireddy
Discussion: https://postgr.es/m/8b8ec36e-61fe-14f9-005d-07bc85aa4eed@yeah.net
Most error messages about a relkind that was not supported or
appropriate for the command were of the pattern
"relation \"%s\" is not a table, foreign table, or materialized view"
This style can become verbose and tedious to maintain. Moreover, it's
not very helpful: If I'm trying to create a comment on a TOAST table,
which is not supported, then the information that I could have created
a comment on a materialized view is pointless.
Instead, write the primary error message shorter and saying more
directly that what was attempted is not possible. Then, in the detail
message, explain that the operation is not supported for the relkind
the object was. To simplify that, add a new function
errdetail_relkind_not_supported() that does this.
In passing, make use of RELKIND_HAS_STORAGE() where appropriate,
instead of listing out the relkinds individually.
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://www.postgresql.org/message-id/flat/dc35a398-37d0-75ce-07ea-1dd71d98f8ec@2ndquadrant.com
In 86dc90056, I'd written find_modifytable_subplan with the assumption
that if the immediate child of a ModifyTable is a Result, it must be
a projecting Result with a subplan. However, if the UPDATE or DELETE
has a provably-constant-false WHERE clause, that's not so: we'll
generate a dummy subplan with a childless Result. Add the missing
null-check so we don't crash on such cases.
Per report from Alexander Pyhalov.
Discussion: https://postgr.es/m/b9a6f53549456b2f3e2fd150dcd79d72@postgrespro.ru
Previously, values such as '100$%$#$#' or '9,223,372,' were accepted
and treated as valid integers for the postgres_fdw options batch_size
and fetch_size, whereas this is not the case with the fdw_startup_cost
and fdw_tuple_cost options, for which an error is thrown. This was
because endptr was not used while converting strings to integers using
strtol.
This commit changes the logic to use the parse_int function instead of
strtol, as it serves the purpose by returning false if it is unable to
convert the string to an integer. Note that this function also rounds
off values such as '100.456' to 100 and '100.567' or '100.678' to 101.
While at it, use parse_real for the fdw_startup_cost and fdw_tuple_cost
options.
Since parse_int and parse_real are already used for reloptions and
GUCs, it is more appropriate to use them in postgres_fdw rather than
strtol and strtod directly.
Back-patch to v14.
Author: Bharath Rupireddy
Reviewed-by: Ashutosh Bapat, Tom Lane, Kyotaro Horiguchi, Fujii Masao
Discussion: https://postgr.es/m/CALj2ACVMO6wY5Pc4oe1OCgUOAtdjHuFsBDw8R5uoYR86eWFQDA@mail.gmail.com
As in 50371df26, this is a bad idea since the callback can't really
know what error is being thrown and thus whether or not it is safe
to attempt catalog accesses. Rather than pushing said accesses into
the mainline code where they'd usually be a waste of cycles, we can
look at the query's rangetable instead.
This change does mean that we'll be printing query aliases (if any
were used) rather than the table or column's true name. But that
doesn't seem like a bad thing: it's certainly a more useful definition
in self-join cases, for instance. In any case, it seems unlikely that
any applications would be depending on this detail, so it seems safe
to change.
Patch by me. Original complaint by Andres Freund; Bharath Rupireddy
noted the connection to conversion_error_callback.
Discussion: https://postgr.es/m/20210106020229.ne5xnuu6wlondjpe@alap3.anarazel.de
The queries involving ft1_nopw don't stably return the same row
anymore. I surmise that an autovacuum hitting "S 1"."T 1"
right after the updates introduced by f61db909d/5843659d0 freed
some space, changing where subsequent insertions get stored.
It's only by good luck that these results were stable before,
though, since a LIMIT without ORDER BY isn't well defined,
and it's not like we've ever treated that table as append-only
in this test script.
Since we only really care whether these commands succeed or not,
just replace "SELECT *" with "SELECT 1".
Report: https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2021-06-23%2019%3A52%3A08
For no obvious reason, isolationtester has always insisted that
session and step names be written with double quotes. This is
fairly tedious and does little for test readability, especially
since the names that people actually choose almost always look
like normal identifiers. Hence, let's tweak the lexer to allow
SQL-like identifiers, not only double-quoted strings.
(They're SQL-like, not exactly SQL, because I didn't add any
case-folding logic. Also there's no provision for U&"..." names,
not that anyone's likely to care.)
There is one incompatibility introduced by this change: if you write
"foo""bar" with no space, that used to be taken as two identifiers,
but now it's just one identifier with an embedded quote mark.
I converted all the src/test/isolation/ specfiles to remove
unnecessary double quotes, but stopped there because my
eyes were glazing over already.
Like 741d7f104, back-patch to all supported branches, so that this
isn't a stumbling block for back-patching isolation test changes.
Discussion: https://postgr.es/m/759113.1623861959@sss.pgh.pa.us
Previously, isolationtester displayed SQL query results using some
ad-hoc code that clearly hadn't had much effort expended on it.
Field values longer than 14 characters weren't separated from
the next field, and usually caused misalignment of the columns
too. Also there was no visual separation of a query's result
from subsequent isolationtester output. This made test result
files confusing and hard to read.
To improve matters, let's use libpq's PQprint() function. Although
that's long since unused by psql, it's still plenty good enough
for the purpose here.
Like 741d7f104, back-patch to all supported branches, so that this
isn't a stumbling block for back-patching isolation test changes.
Discussion: https://postgr.es/m/582362.1623798221@sss.pgh.pa.us
We've long contended with isolation test results that aren't entirely
stable. Some test scripts insert long delays to try to force stable
results, which is not terribly desirable; but other erratic failure
modes remain, causing unrepeatable buildfarm failures. I've spent a
fair amount of time trying to solve this by improving the server-side
support code, without much success: that way is fundamentally unable
to cope with diffs that stem from chance ordering of arrival of
messages from different server processes.
We can improve matters on the client side, however, by annotating
the test scripts themselves to show the desired reporting order
of events that might occur in different orders. This patch adds
three types of annotations to deal with (a) test steps that might or
might not complete their waits before the isolationtester can see them
waiting; (b) test steps in different sessions that can legitimately
complete in either order; and (c) NOTIFY messages that might arrive
before or after the completion of a step in another session. We might
need more annotation types later, but this seems to be enough to deal
with the instabilities we've seen in the buildfarm. It also lets us
get rid of all the long delays that were previously used, cutting more
than a minute off the runtime of the isolation tests.
Back-patch to all supported branches, because the buildfarm
instabilities affect all the branches, and because it seems desirable
to keep isolationtester's capabilities the same across all branches
to simplify possible future back-patching of tests.
Discussion: https://postgr.es/m/327948.1623725828@sss.pgh.pa.us