b6e1157e7d made some changes to enforce that
JsonValueExpr.formatted_expr is always set and is the expression that
gives a JsonValueExpr its runtime value, but that's not really
apparent from the comments about formatted_expr or from the code
manipulating it. This commit fixes that.
Per suggestion from Álvaro Herrera.
Discussion: https://postgr.es/m/20230718155313.3wqg6encgt32adqb%40alvherre.pgsql
Currently, the database name to connect to is initialized only when the
module is loaded with shared_preload_libraries, causing any call of
worker_spi_launch() to fail if the library is not loaded for a dynamic
bgworker launch. Rather than making the GUC defining the database to
connect to a PGC_POSTMASTER, this commit switches worker_spi.database to
PGC_SIGHUP, so that it is loaded even if the module's library is loaded
dynamically for a worker.
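For instance, a launch sequence like the following, which previously
required shared_preload_libraries, should now work (a sketch assuming
the worker_spi extension is installed; 'mydb' is a placeholder name):
    ALTER SYSTEM SET worker_spi.database = 'mydb';
    SELECT pg_reload_conf();     -- PGC_SIGHUP, so a reload is enough
    SELECT worker_spi_launch(1); -- dynamic bgworker launch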
We have been discussing the integration of more advanced tests in
this module, with and without shared_preload_libraries set, so this
eases some of the work planned in that area.
No backpatch is done as, while this is a bug, it changes the definition
of worker_spi.database.
Author: Masahiro Ikeda
Reviewed-by: Bharath Rupireddy
Discussion: https://postgr.es/m/d30d3ea7d21cb7c9e1e3cc47e301f1b6@oss.nttdata.com
If both the passed-in plan pointer and plansource->gplan are
NULL, CachedPlanIsSimplyValid would think that the plan pointer
is possibly-valid and try to dereference it. For the one extant
call site in plpgsql, this situation doesn't normally happen, which
is why we've not noticed. However, it appears to be possible
if the previous use of the cached plan failed, as per report from
Justin Pryzby. Add an extra check to prevent crashing.
Back-patch to v13 where this code was added.
Discussion: https://postgr.es/m/ZLlV+STFz1l/WhAQ@telsasoft.com
This adds the X509 attributes notBefore and notAfter to sslinfo
as well as pg_stat_ssl to allow verifying and identifying the
validity period of the current client certificate.
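As a hedged illustration (the exact function and column names are
assumptions here, to be checked against the sslinfo and pg_stat_ssl
documentation):
    SELECT ssl_client_get_notbefore(), ssl_client_get_notafter();
    SELECT pid, not_before, not_after FROM pg_stat_ssl WHERE ssl;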
Author: Cary Huang <cary.huang@highgo.ca>
Discussion: https://postgr.es/m/182b8565486.10af1a86f158715.2387262617218380588@highgo.ca
Rather than specifying a validity of 10 000 days into the future
during test certificate generation, this hardcodes the notBefore
and notAfter attributes to known values. This will allow writing
tests on the validity of the certificates without knowing when a
specific certificate was regenerated.
This is done as a prerequisite for an upcoming patch which adds
notBefore and notAfter to pg_stat_ssl and sslinfo.
Discussion: https://postgr.es/m/EE288A58-947E-479A-9D99-C46C273D7A23@yesql.se
When the cluster failed the pg_controldata check for a clean
shutdown, we only reported that it did so, not why. The state of the
cluster can be important information when diagnosing the failed
upgrade attempt, so instead include it in the error message.
Discussion: https://postgr.es/m/E0D5EA16-A085-4753-8DDC-C7055048B439@yesql.se
This essentially removes the JsonbTypeCategory enum and
jsonb_categorize_type() and integrates any jsonb-specific logic that
was in jsonb_categorize_type() into json_categorize_type(), now
moved to jsonfuncs.c. The remaining JsonTypeCategory enum and
json_categorize_type() cover the needs of the callers in both json.c
and jsonb.c. json_categorize_type() has grown a new parameter named
is_jsonb for callers to engage the jsonb-specific behavior of
json_categorize_type().
One notable change in the now-exported API of json_categorize_type()
is that it now always returns *outfuncoid, even though a caller may
currently have no need for it.
This is in preparation for later commits that will implement additional
SQL/JSON functions.
Co-authored-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/CA+HiwqE4XTdfb1nW=Ojoy_tQSRhYt-q_kb6i5d4xcKyrLC1Nbg@mail.gmail.com
Based on how postgres.h does the Oid <-> Datum conversion, there are no
existing bugs, but let's be consistent. 17 spots have been noticed as
incorrectly passing down Oids rather than Datums. Aleksander got one,
Zhang two, and I the rest.
Author: Michael Paquier, Aleksander Alekseev, Zhang Mingli
Discussion: https://postgr.es/m/ZLUhqsqQN1MOaxdw@paquier.xyz
When pg_recvlogical needs to abort on a signal like SIGINT/SIGTERM, it
is expected to exit cleanly, as the code documents. However, the code
forgot to clean up the state of the connection before leaving. This
would cause the tool to emit an error like "unexpected termination of
replication stream", which is meant for a truly unexpected termination
or a crash.
The code is refactored to apply the same abort operations to the
signal, end-LSN and keepalive termination cases, registering a "reason"
for the termination, with a message printed under --verbose adapted to
that reason.
This is arguably a bug, but it has been this way since the tool was
introduced, and the signal termination can now become slower depending
on the change being decoded when the signal is received.
Reported-by: Andres Freund
Author: Bharath Rupireddy
Reviewed-by: Andres Freund, Kyotaro Horiguchi, Cary Huang, Michael
Paquier
Discussion: https://postgr.es/m/20221019213953.htdtzikf4f45ywil@awork3.anarazel.de
b5913f6120 added a parenthesized syntax for CLUSTER, but it
requires specifying a table name. This is unlike commands such as
VACUUM and ANALYZE, which do not require specifying a table in the
parenthesized syntax. This change resolves this inconsistency.
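For example, with this change all of the following are accepted
(a sketch; my_table is a placeholder name):
    CLUSTER (VERBOSE);            -- no table name, as with VACUUM/ANALYZE
    CLUSTER (VERBOSE) my_table;   -- parenthesized form with a table
    CLUSTER my_table;             -- unparenthesized form, unchanged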
This is preparatory work for a follow-up commit that will move the
unparenthesized syntax to the "Compatibility" section of the
CLUSTER documentation.
Reviewed-by: Melanie Plageman, Michael Paquier
Discussion: https://postgr.es/m/CAAKRu_bc5uHieG1976kGqJKxyWtyQt9yvktjsVX%2Bi7NOigDjOA%40mail.gmail.com
This change moves the unparenthesized syntax for CLUSTER to the end
of the ClusterStmt rules in preparation for a follow-up commit that
will move this syntax to the "Compatibility" section of the CLUSTER
documentation. The documentation for the CLUSTER syntaxes has also
been consolidated.
Suggested-by: Melanie Plageman
Discussion: https://postgr.es/m/CAAKRu_bc5uHieG1976kGqJKxyWtyQt9yvktjsVX%2Bi7NOigDjOA%40mail.gmail.com
With the addition of INHERIT and SET options for role grants,
the historical display of role memberships in \du/\dg is woefully
inadequate. Besides those options, there are pre-existing
shortcomings: you can't see the ADMIN option or the grantor.
To fix this, remove the "Member of" column from \du/\dg altogether
(making that output usefully narrower), and invent a new meta-command
"\drg" that is specifically for displaying role memberships. It
shows one row for each role granted to the selected role(s), with
the grant options and grantor.
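A usage sketch (role names are hypothetical):
    GRANT staff TO alice WITH ADMIN TRUE, INHERIT TRUE, SET TRUE;
    -- then, in psql:
    \drg alice
    -- shows one row for the staff grant, with its options and grantor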
We would not normally back-patch such a feature addition post
feature freeze, but in this case the change is mainly driven by
v16 changes in the server, so it seems appropriate to include it
in v16.
Pavel Luzanov, with bikeshedding and review from a lot of people,
but particularly David Johnston
Discussion: https://postgr.es/m/b9be2d0e-a9bc-0a30-492f-a4f68e4f7740@postgrespro.ru
IN and NOT IN work fine on records and arrays, so just say that
they accept "expressions" not "scalar expressions". I think that
that phrasing was meant to say that they don't work on set-returning
expressions, but that's not the common meaning of "scalar".
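For instance, both of these are valid (a quick sketch):
    SELECT (1, 'a') IN ((1, 'a'), (2, 'b'));            -- row constructors: true
    SELECT ARRAY[1,2] NOT IN (ARRAY[3,4], ARRAY[1,2]);  -- arrays: false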
Revise the description of row-constructor comparisons to make it
perhaps a bit less confusing. (This partially reverts some
dubious wording changes made by commit f56651519.)
Per gripe from Ilya Nenashev. Back-patch to supported branches.
In HEAD and v16, also drop a NOTE about pre-8.2 behavior, which
is hopefully no longer of interest to anybody.
Discussion: https://postgr.es/m/168968062460.632.14303906825812821399@wrigleys.postgresql.org
By default, pg_archivecleanup does not remove backup history files.
These are just a few bytes useful for debugging purposes, but keeping
them around can bloat an archive path with history files mixed in with
the WAL segments if the path has a long history.
This patch adds a new option to control whether backup history files
are removed, depending on the oldest segment name to keep around.
While at it, the TAP tests are refactored so that they are now able to
handle lists of files. Each file has a flag to track whether it should
still exist or not, depending on the oldest segment defined with the
command run.
Author: Atsushi Torikoshi
Reviewed-by: Kyotaro Horiguchi, Fujii Masao, Michael Paquier
Discussion: https://postgr.es/m/d660ef741ce3d82f3b4283f1cafd576c@oss.nttdata.com
This commit refactors the main loop of pg_archivecleanup that handles
the removal of old segments, reducing its level of indentation by one.
This will help an upcoming patch that adds a new option related to the
segment filtering logic.
Author: Atsushi Torikoshi
Reviewed-by: Kyotaro Horiguchi, Fujii Masao, Michael Paquier
Discussion: https://postgr.es/m/d660ef741ce3d82f3b4283f1cafd576c@oss.nttdata.com
The "count" argument of SPI_exec() only limits execution when
the query is actually returning rows. This was not the case
before PG 9.0, so this example was correct when written; but
we missed updating it in commit 2ddc600f8. Extend the example
to show the behavior both with and without RETURNING.
While here, improve the commentary and markup for the rest
of the example.
David G. Johnston and Tom Lane, per report from Curt Kolovson.
Back-patch to all supported branches.
Discussion: https://postgr.es/m/CANhYJV6HWtgz_qjx_APfK0PAgLUzY-2vjLuj7i_o=TZF1LAQew@mail.gmail.com
This was missed in cb0cca1, and noticed before buildfarm member koel
was able to complain, while poking at a different patch. Like the
other commit, backpatch all the way down to limit the odds of merge
conflicts.
Backpatch-through: 11
A crash in the middle of a checkpoint, with some two-phase state data
already flushed to disk by this checkpoint, could cause a follow-up
crash recovery to recover the same transaction twice: once from what
has been found in pg_twophase/ at the beginning of recovery, and a
second time when replaying its corresponding record.
This would lead to FATAL failures in the startup process during
recovery, where the same transaction would have a state recovered twice
instead of once:
    LOG: recovering prepared transaction 731 from shared memory
    LOG: recovering prepared transaction 731 from shared memory
    FATAL: lock ExclusiveLock on object 731/0/0 is already held
This issue is fixed by skipping the addition of any 2PC state coming
from a record whose equivalent 2PC state file has already been loaded in
TwoPhaseState at the beginning of recovery by restoreTwoPhaseData(),
which is OK as long as the system has not reached a consistent state.
The timing required to mess up recovery processing is very racy, and
this would be very unlikely to happen. The thread that reported the
issue demonstrated the bug using injection points to force a PANIC in
the middle of a checkpoint.
Issue introduced in 728bd99, so backpatch all the way down.
Reported-by: "suyu.cmj" <mengjuan.cmj@alibaba-inc.com>
Author: "suyu.cmj" <mengjuan.cmj@alibaba-inc.com>
Author: Michael Paquier
Discussion: https://postgr.es/m/109e6994-b971-48cb-84f6-829646f18b4c.mengjuan.cmj@alibaba-inc.com
Backpatch-through: 11
fe-auth.c references CHAR_BIT since commit 3a465cc67, but it
did not #include <limits.h>, which per POSIX is where that
symbol is defined. This escaped notice so far because
(a) on most platforms, <sys/param.h> pulls in <limits.h>,
(b) even if yours doesn't, OpenSSL pulls it in, so compiling
with --with-openssl masks the omission.
Per bug #18026 from Marcel Hofstetter. Back-patch to v16.
Discussion: https://postgr.es/m/18026-d5bb69f79cd16203@postgresql.org
Here we reduce the block size fields in AllocSetContext, GenerationContext
and SlabContext from Size down to uint32. Ever since c6e0fe1f2, blocks
for non-dedicated palloc chunks can no longer be larger than 1GB, so
there's no need to store the various block size fields as 64-bit values.
32 bits are enough to store 2^30.
Here we also further reduce the memory context struct sizes by getting rid
of the 'keeper' field which stores a pointer to the context's keeper
block. All the context types which have this field always allocate the
keeper block in the same allocation as the memory context itself, so the
keeper block always comes right at the end of the context struct. Add
some macros to calculate that address rather than storing it in the
context.
Overall, in AllocSetContext and GenerationContext, this saves 20 bytes
on 64-bit builds, which for ALLOCSET_SMALL_SIZES can sometimes mean the
difference between having to allocate a 2nd block and storing all the
required allocations on the keeper block alone. Such contexts are used
in relcache to store cache entries for indexes, of which there can be
a large number in a single backend.
Author: Melih Mutlu
Reviewed-by: David Rowley
Discussion: https://postgr.es/m/CAGPVpCSOW3uJ1QJmsMR9_oE3X7fG_z4q0AoU4R_w+2RzvroPFg@mail.gmail.com
Now that the in-tree getopt_long() moves non-options to the end of
argv (see commit 411b720343), we can remove pg_ctl's workaround for
getopt_long() implementations that don't reorder argv.
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/20230713034903.GA991765%40nathanxps13
If the plan itself is parallel-safe, and the initPlans are too,
there's no reason anymore to prevent the plan from being marked
parallel-safe. That restriction (dating to commit ab77a5a45) was
really a special case of the fact that we couldn't transmit subplans
to parallel workers at all. We fixed that in commit 5e6d8d2bb and
follow-ons, but this case never got addressed.
We still forbid attaching initPlans to a Gather node that's
inserted pursuant to debug_parallel_query = regress. That's because,
when we hide the Gather from EXPLAIN output, we'd hide the initPlans
too, causing cosmetic regression diffs. It seems inadvisable to
kluge EXPLAIN to the extent required to make the output look the
same, so just don't do it in that case.
Along the way, this also takes care of some sloppiness about updating
path costs to match when we move initplans from one place to another
in createplan.c and setrefs.c. Since all the planning decisions
are already made by that point, this is just cosmetic; but it seems
good to keep EXPLAIN output consistent with where the initplans are.
The diff in query_planner() might be worth remarking on. I found that
one because after fixing things to allow parallel-safe initplans, one
partition_prune test case changed plans (as shown in the patch) ---
but only when debug_parallel_query was active. The reason proved to
be that we only bothered to mark Result nodes as potentially
parallel-safe when debug_parallel_query is on. This neglects the fact
that parallel-safety may be of interest for a sub-query even though
the Result itself doesn't parallelize.
Discussion: https://postgr.es/m/1129530.1681317832@sss.pgh.pa.us
We are capable of optimizing MIN() and MAX() aggregates on indexed
columns into subqueries that exploit the index, rather than the normal
thing of scanning the whole table. When we do this, we replace the
Aggref node(s) with Params referencing subquery outputs. Such Params
really ought to be included in the per-plan-node extParam/allParam
sets computed by SS_finalize_plan. However, we've never done so
up to now because of an ancient implementation choice to perform
that substitution during set_plan_references, which runs after
SS_finalize_plan, so that SS_finalize_plan never sees these Params.
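For reference, the optimization rewrites roughly (a sketch; tab and id
are placeholder names):
    SELECT min(id) FROM tab;
into an index-exploiting subquery equivalent to
    SELECT id FROM tab WHERE id IS NOT NULL ORDER BY id LIMIT 1;
whose output is then referenced through a Param in the outer plan.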
This seems like clearly a bug, yet there have been no field reports
of problems that could trace to it. This may be because the types
of Plan nodes that could contain Aggrefs do not have any of the
rescan optimizations that are controlled by extParam/allParam.
Nonetheless it seems certain to bite us someday, so let's fix it
in a self-contained patch that can be back-patched if we find a
case in which there's a live bug pre-v17.
The cleanest fix would be to perform a separate tree walk to do
these substitutions before SS_finalize_plan runs. That seems
unattractive, first because a whole-tree mutation pass is expensive,
and second because we lack infrastructure for visiting expression
subtrees in a Plan tree, so that we'd need a new function knowing
as much as SS_finalize_plan knows about that. I also considered
swapping the order of SS_finalize_plan and set_plan_references,
but that fell foul of various assumptions that seem tricky to fix.
So the approach adopted here is to teach SS_finalize_plan itself
to check for such Aggrefs. I refactored things a bit in setrefs.c
to avoid having three copies of the code that does that.
Given the lack of any currently-known bug, no test case here.
Discussion: https://postgr.es/m/2391880.1689025003@sss.pgh.pa.us
Before, if you went past about 64M array elements in array_agg() and
allied functions, you got a generic "invalid memory alloc request
size" error. This patch replaces that with "array size exceeds the
maximum allowed", which seems more user-friendly since it points you
to needing to reduce the size of your array result. (This is the
same error text you'd get from construct_md_array in the event of
overrunning the maximum physical size for the finished array.)
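A repro sketch (hedged: the series length is only illustrative, and
running this takes a lot of memory):
    SELECT array_agg(g) FROM generate_series(1, 200000000) g;
    -- before: ERROR:  invalid memory alloc request size ...
    -- after:  ERROR:  array size exceeds the maximum allowed (...)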
Per question from Shaozhong Shi. Since this hasn't come up often,
I don't feel a need to back-patch.
Discussion: https://postgr.es/m/CA+i5JwYtVS9z2E71PcNKAVPbOn4R2wuj-LqbJsYr_XOz73q7dQ@mail.gmail.com
In a61b1f7482, we failed to update transformFromClauseItem() and
buildNSItemFromLists() to set ParseNamespaceItem.p_perminfo, causing
it to point to garbage.
Pointed out by Tom Lane.
Reported-by: Farias de Oliveira <matheusfarias519@gmail.com>
Discussion: https://postgr.es/m/3173476.1689286373%40sss.pgh.pa.us
Backpatch-through: 16
Presently, the privilege check for SET SESSION AUTHORIZATION checks
whether the original authenticated role was a superuser at
connection start time. Even if the role loses the superuser
attribute, its existing sessions are permitted to change session
authorization to any role.
This commit modifies this privilege check to verify that the original
authenticated role currently has superuser. In the event that the
authenticated role loses superuser after changing the session
authorization, the change will remain in effect, which means the user
can still take advantage of the target role's privileges.
However, [RE]SET SESSION AUTHORIZATION will only permit switching
to the original authenticated role.
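A sketch of the resulting behavior (role names are hypothetical):
    -- session authenticated as admin_role, a superuser at connect time
    SET SESSION AUTHORIZATION other_role;  -- now fails if admin_role has
                                           -- since lost superuser
    RESET SESSION AUTHORIZATION;           -- switching back always works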
Author: Joseph Koshakow
Discussion: https://postgr.es/m/CAAvxfHc-HHzONQ2oXdvhFF9ayRnidPwK%2BfVBhRzaBWYYLVQL-g%40mail.gmail.com
Presently, the privilege check for SET SESSION AUTHORIZATION is
performed in session_authorization's assign_hook. A relevant
comment states, "It's OK because the check does not require catalog
access and can't fail during an end-of-transaction GUC
reversion..." However, we plan to add a catalog lookup to this
privilege check in a follow-up commit.
This commit moves this privilege check to the check_hook for
session_authorization. Like check_role(), we do not throw a hard
error for insufficient privileges when the source is PGC_S_TEST.
Author: Joseph Koshakow
Discussion: https://postgr.es/m/CAAvxfHc-HHzONQ2oXdvhFF9ayRnidPwK%2BfVBhRzaBWYYLVQL-g%40mail.gmail.com
Commit 89e46da5e5 allowed using BTREE indexes that are neither
PRIMARY KEY nor REPLICA IDENTITY on the subscriber during apply of
update/delete. This patch extends that functionality to also allow HASH
indexes.
We explored supporting other index access methods as well, but they
don't have a fixed strategy for the equality operation, which is
required by the current infrastructure in logical replication to scan
the indexes.
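A minimal sketch of the case now supported (names are placeholders):
    -- on the publisher:
    ALTER TABLE tab REPLICA IDENTITY FULL;
    -- on the subscriber, the apply worker may now pick:
    CREATE INDEX tab_id_hash ON tab USING hash (id);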
Author: Kuroda Hayato
Reviewed-by: Peter Smith, Onder Kalaci, Amit Kapila
Discussion: https://postgr.es/m/TYAPR01MB58669D7414E59664E17A5827F522A@TYAPR01MB5866.jpnprd01.prod.outlook.com
RelationReloadIndexInfo() is a fast path used for index reloads in the
relation cache, and it has always forgotten about updating
indisreplident, which is something that would happen after an index is
selected for a replica identity. This can lead to incorrect cache
information provided when executing a command in a transaction context
that updates indisreplident.
None of the code paths currently on HEAD that need to check
pg_index.indisreplident fetch its value from the relation cache; they
always rely on a fresh copy from the syscache. Unfortunately, this may
not be the case for out-of-core code, which could see an out-of-date
value.
Author: Shruthi Gowda
Reviewed-by: Robert Haas, Dilip Kumar, Michael Paquier
Discussion: https://postgr.es/m/CAASxf_PBcxax0wW-3gErUyftZ0XrCs3Lrpuhq4-Z3Fak1DoW7Q@mail.gmail.com
Backpatch-through: 11
indisvalid is switched to true for a partitioned index when, upon
attaching a new partition, all of its partitions have valid indexes;
when dealing with multiple layers of partitions, this propagates up to
the top-most parent if all its leaves are themselves valid.
The copy of the tuple from pg_index used to switch indisvalid to true
came from the relation cache, which is incorrect. Particularly, in the
case reported by Shruthi Gowda, executing a series of commands in a
single transaction would cause the validation of partitioned indexes to
use an incorrect version of a pg_index tuple, as indexes are reloaded
after an invalidation request with RelationReloadIndexInfo(), a much
faster version than a full index cache rebuild. In this case, the
limited information updated in the cache leads to an incorrect version
of the tuple being used. For instance, one of the symptoms reported
was the following error with a replica identity update:
"ERROR: attempted to update invisible tuple"
This is incorrect since 8b08f7d, so backpatch all the way down.
Reported-by: Shruthi Gowda
Author: Michael Paquier
Reviewed-by: Shruthi Gowda, Dilip Kumar
Discussion: https://postgr.es/m/CAASxf_PBcxax0wW-3gErUyftZ0XrCs3Lrpuhq4-Z3Fak1DoW7Q@mail.gmail.com
Backpatch-through: 11
The "fsync" level already flushes drive write caches on Windows (as does
"fdatasync"), so it only confuses matters to have an apparently higher
level that isn't actually different at all.
That leaves "fsync_writethrough" only for macOS, where it actually does
something different.
Reviewed-by: Magnus Hagander <magnus@hagander.net>
Discussion: https://postgr.es/m/CA%2BhUKGJ2CG2SouPv2mca2WCTOJxYumvBARRcKPraFMB6GSEMcA%40mail.gmail.com
The contents of the line whose parsing failed were not reported in the
error message produced by generate-wait_event_types.pl, making it
harder than necessary to debug incorrectly-shaped entries in the file.
Reported-by: Andres Freund
Discussion: https://postgr.es/m/ZK9S3jFEV1X797Ll@paquier.xyz
The double quotes used for the wait event names are not required, as
the values quoted are made of single words. The files generated by
generate-wait_event_types.pl (pgstat_wait_event.c, wait_event_types.h
and wait_event_types.sgml) are exactly the same before and after this
commit, hence the wait event names and the enum elements have the same
names as before.
Discussion: https://postgr.es/m/ZK9S3jFEV1X797Ll@paquier.xyz
Until now, when DROP DATABASE got interrupted at the wrong moment, the
removal of the pg_database row would also roll back, even though some
irreversible steps had already been taken. E.g. DropDatabaseBuffers()
might have thrown
out dirty buffers, or files could have been unlinked. But we continued to
allow connections to such a corrupted database.
To fix this, mark databases invalid with an in-place update, just before
starting to perform irreversible steps. As we can't add a new column in the
back branches, we use pg_database.datconnlimit = -2 for this purpose.
An invalid database cannot be connected to anymore, but can still be
dropped.
Unfortunately we can't easily add output to psql's \l to indicate that
some database is invalid, as it doesn't fit in any of the existing
columns.
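In the back branches they can be spotted manually instead, e.g. (a
hedged sketch; some_invalid_db is a placeholder name):
    SELECT datname FROM pg_database WHERE datconnlimit = -2;
    DROP DATABASE some_invalid_db;  -- still permitted; connecting is not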
Add tests verifying that an interrupted DROP DATABASE is handled
correctly in the backend and in various tools.
Reported-by: Evgeny Morozov <postgresql3@realityexists.net>
Author: Andres Freund <andres@anarazel.de>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/20230509004637.cgvmfwrbht7xm7p6@awork3.anarazel.de
Discussion: https://postgr.es/m/20230314174521.74jl6ffqsee5mtug@awork3.anarazel.de
Backpatch: 11-, bug present in all supported versions
When vac_truncate_clog() encounters bogus datfrozenxid / datminmxid values, it
returns early. Unfortunately, until now, it did not release
WrapLimitsVacuumLock. If the backend later tries to acquire
WrapLimitsVacuumLock, the session / autovacuum worker hangs in an
uncancellable way. Similarly, other sessions will hang waiting for the
lock. However, if the backend holding the lock exited or errored out for some
reason, the lock was released.
The bug was introduced as a side effect of 566372b3d6.
It is interesting that there are no production reports of this
problem. That is likely due to a mix of factors: bugs leading to bogus
values have become less common, process exit releases the locks, and
hangs are hard for "normal" users to debug.
Discussion: https://postgr.es/m/20230621221208.vhsqgduwfpzwxnpg@awork3.anarazel.de
GiST compress functions (like all GiST opclass functions) are
supposed to be called in short-lived memory contexts, so that
minor memory leaks in them are not of concern, and indeed
explicit pfree's are likely slightly counterproductive.
But this one in g_intbig_compress() is more than
slightly counterproductive, because it's guarded by
"if (in != DatumGetArrayTypeP(entry->key))" which means
that if this test succeeds, we've detoasted the datum twice.
(And to add insult to injury, the extra detoast result is
leaked.) Let's just drop the whole stanza, relying on the
GiST temporary context mechanism to clean up in good time.
The analogous bit in g_int_compress() is
if (r != (ArrayType *) DatumGetPointer(entry->key))
pfree(r);
which doesn't have the gratuitous-detoast problem so
I left it alone. Perhaps there is a case for removing
unnecessary pfree's more widely, but I'm not sure if it's
worth the code churn.
The potential extra decompress seems expensive enough to
justify calling this a (minor) performance bug and
back-patching.
Konstantin Knizhnik, Matthias van de Meent, Tom Lane
Discussion: https://postgr.es/m/CAEze2Wi86=DxErfvf+SCB2UKmU2amKOF60BKuJOX=w-RojRn0A@mail.gmail.com
Commit 89e46da5e allowed REPLICA IDENTITY FULL tables to use an index
on the subscriber during apply of update/delete. This commit clarifies
in the documentation that the leftmost field of candidate indexes must
be a column (not an expression) that references a column of the
published relation.
The source code comments are also updated accordingly.
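For instance (a sketch; tab, id and payload are placeholder names):
    CREATE INDEX idx_ok  ON tab (id, lower(payload));  -- leftmost is a column
    CREATE INDEX idx_bad ON tab (lower(payload));      -- leftmost is an expression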
Reviewed-by: Peter Smith, Amit Kapila
Discussion: https://postgr.es/m/CAD21AoDJjffEvUFKXT27Q5U8-UU9JHv4rrJ9Ke8Zkc5UPWHLvA@mail.gmail.com
Backpatch-through: 16
Unlike the other implementations of getopt_long() I could find, the
in-tree implementation does not reorder non-options to the end of
argv. Instead, it returns -1 as soon as the first non-option is
found, even if there are other options listed afterwards. By
moving non-options to the end of argv, getopt_long() can parse all
specified options and return -1 when only non-options remain.
This quirk is periodically missed by hackers (e.g., 869aa40a27,
ffd398021c, and d9ddc50baf). This commit introduces the
aforementioned non-option reordering behavior to the in-tree
getopt_long() implementation.
Special thanks to Noah Misch for his help verifying behavior on
AIX.
Reviewed-by: Kyotaro Horiguchi, Michael Paquier
Discussion: https://postgr.es/m/20230609232257.GA121461%40nathanxps13
A CaseTestExpr is currently being put into
JsonValueExpr.formatted_expr as a placeholder for the result of
evaluating JsonValueExpr.raw_expr, which in turn is evaluated
separately. However, there's no need for this indirection if
raw_expr itself can be embedded into formatted_expr and evaluated
as part of evaluating the latter, especially as there is no
special reason to evaluate it separately. So this commit makes it
so. As a result, JsonValueExpr.raw_expr no longer needs to be
evaluated in ExecInterpExpr(), eval_const_exprs_mutator() etc. and
is now only used for displaying the original "unformatted"
expression in ruleutils.c.
While at it, this also removes the function makeCaseTestExpr(),
because the code in makeJsonConstructorExpr() looks more readable
without it IMO, and it isn't used by anyone else either.
Finally, a note is added in the comment above CaseTestExpr's
definition that JsonConstructorExpr is also using it.
Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/CA+HiwqE4XTdfb1nW=Ojoy_tQSRhYt-q_kb6i5d4xcKyrLC1Nbg@mail.gmail.com
The first check on the enum values was not necessary, as the values
set in wait_event_names.txt for the classes LWLock and Lock were able
to satisfy the check. The second check, done when generating the C and
header files, is now based on a match of the class names, making it
simpler to understand.
Author: Masahiro Ikeda, Michael Paquier
Discussion: https://postgr.es/m/eaf82a85c0ef1b55dc3b651d3f7b867a@oss.nttdata.com