The decimal conversion functions dectoint and dectolong are documented
to return ECPG_INFORMIX_NUM_OVERFLOW in case of overflow, but they
always returned -1 on all errors due to incorrectly checking the return
value from the PGTYPES* functions.
Author: Aidar Imamov <a.imamov@postgrespro.ru>
Discussion: https://postgr.es/m/54d2b53327516d9454daa5fb2f893bdc@postgrespro.ru
The new combined WAL record is now used for pruning, freezing and 2nd
pass of vacuum. This is in preparation for changing VACUUM to write a
combined prune+freeze record per page, instead of two separate
records. The new WAL record format now supports that, but the code
still always writes separate records for pruning and freezing.
This reserves separate XLOG_HEAP2_* info codes for when the pruning
record is emitted for on-access pruning or VACUUM, per Peter
Geoghegan's suggestion. The record format is identical, but having
separate info codes makes it easier to analyze pruning and vacuuming with
pg_waldump.
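For example, one could then isolate just these records and tell
on-access pruning apart from VACUUM's pruning by the record's info code
alone (segment name illustrative):

    pg_waldump --rmgr=Heap2 000000010000000000000001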
The function to emit the new WAL record, log_heap_prune_and_freeze(),
is in pruneheap.c. The existing heap_log_freeze_plan() and its
subroutines are moved to pruneheap.c without changes, to keep them
together with log_heap_prune_and_freeze().
Author: Melanie Plageman <melanieplageman@gmail.com>
Discussion: https://www.postgresql.org/message-id/CAAKRu_azf-zH%3DDgVbquZ3tFWjMY1w5pO8m-TXJaMdri8z3933g@mail.gmail.com
Discussion: https://www.postgresql.org/message-id/CAAKRu_b2oE4GL%3Dq4g9mcByS9yT7wTQvEH9OLpabj28e%2BWKFi2A@mail.gmail.com
pg_createsubscriber must be run on the target server and must be able
to connect to the source server (publisher) and the target server
(subscriber). All tables in the specified database(s) are included in
the logical replication setup. A pair of publication and subscription
objects is created for each database.
The main advantage of pg_createsubscriber over a conventional logical
replication setup is that it avoids the initial data copy, since the
target already contains the data as a physical standby. It also
shortens the catchup phase.
Some prerequisites must be met to run it successfully; they are
essentially the logical replication requirements. The tool starts by
creating a publication (using FOR ALL TABLES) and a replication slot
for each specified database. It then writes recovery parameters into
the target data directory and starts the target server, specifying the
LSN of the last replication slot (the replication start point) up to
which recovery will proceed, and waits until the target server is
promoted. Next it creates one subscription per specified database
(using the publication and replication slot created in the previous
steps) on the target server, sets the replication progress to the
replication start point for each subscription, and enables the
subscription for each specified database on the target server.
Finally, it changes the system identifier on the target server.
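A sketch of a minimal invocation, assuming the -D (target data
directory), -P (publisher connection string), and -d (database)
options, with illustrative values:

    pg_createsubscriber -D /var/lib/postgresql/standby_data \
        -P 'host=source.example.com port=5432' \
        -d app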
Author: Euler Taveira <euler.taveira@enterprisedb.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: Shlok Kyal <shlok.kyal.oss@gmail.com>
Reviewed-by: Vignesh C <vignesh21@gmail.com>
Reviewed-by: Shubham Khanna <khannashubham1197@gmail.com>
Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://www.postgresql.org/message-id/flat/5ac50071-f2ed-4ace-a8fd-b892cffd33eb@www.fastmail.com
This commit adds a new property called last_inactive_time for slots. It is
set to 0 whenever a slot is made active/acquired and set to the current
timestamp whenever the slot is inactive/released or restored from disk.
Note that we don't set last_inactive_time for slots currently being
synced from the primary to the standby, because such slots are typically
inactive, as decoding is not allowed on them.
The 'last_inactive_time' will be useful on production servers to debug and
analyze inactive replication slots. It will also help determine the
lifetime of a replication slot: one can tell how long a streaming standby,
logical subscriber, or other replication slot consumer has been down.
The 'last_inactive_time' will also be useful to implement inactive
timeout-based replication slot invalidation in a future commit.
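For example, assuming the field is exposed through the
pg_replication_slots view, one could inspect idle slots with a query
along these lines:

    SELECT slot_name, active, last_inactive_time
    FROM pg_replication_slots
    WHERE NOT active;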
Author: Bharath Rupireddy
Reviewed-by: Bertrand Drouvot, Amit Kapila, Shveta Malik
Discussion: https://www.postgresql.org/message-id/CALj2ACW4aUe-_uFQOjdWCEN-xXoLGhmvRFnL8SNw_TZ5nJe+aw@mail.gmail.com
* Fix the comment of init_dummy_sjinfo() to remove references to
non-existing parameters 'rel1' and 'rel2'.
* Adjust consider_new_or_clause() to call init_dummy_sjinfo() to make
  up a SpecialJoinInfo for inner joins, like the other code sites that
  were adjusted in 6190d828cd to do so.
Author: Richard Guo <guofenglinux@gmail.com>
Reported-by: Richard Guo <guofenglinux@gmail.com>
Discussion: https://postgr.es/m/CAExHW5tHqEf3ASVqvFFcghYGPfpy7o3xnvhHwBGbJFMRH8KjNw@mail.gmail.com
This teaches build_child_join_sjinfo() to create the dummy
SpecialJoinInfos (those created for inner joins) directly for a given
child join, skipping the unnecessary overhead of translating the
parent joinrel's SpecialJoinInfo.
To that end, this commit moves the code to initialize the dummy
SpecialJoinInfos to a new function named init_dummy_sjinfo() and
changes the few existing sites that have this code and
build_child_join_sjinfo() to call this new function.
Author: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Reviewed-by: Richard Guo <guofenglinux@gmail.com>
Reviewed-by: Amit Langote <amitlangote09@gmail.com>
Reviewed-by: Andrey Lepikhov <a.lepikhov@postgrespro.ru>
Reviewed-by: Tomas Vondra <tomas.vondra@enterprisedb.com>
Discussion: https://postgr.es/m/CAExHW5tHqEf3ASVqvFFcghYGPfpy7o3xnvhHwBGbJFMRH8KjNw@mail.gmail.com
Specifically, this commit reduces the memory consumed by the
SpecialJoinInfos that are allocated for child joins in
try_partitionwise_join() by freeing them at the end of creating paths
for each child join.
A SpecialJoinInfo allocated for a given child join is a copy of the
parent join's SpecialJoinInfo, which contains the translated copies
of the various Relids bitmapsets and semi_rhs_exprs, which is a List
of Nodes. The newly added freeing step frees the struct itself and
the various bitmapsets, but not semi_rhs_exprs, because there's no
handy function to free the memory of Node trees.
Author: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Reviewed-by: Richard Guo <guofenglinux@gmail.com>
Reviewed-by: Amit Langote <amitlangote09@gmail.com>
Reviewed-by: Andrey Lepikhov <a.lepikhov@postgrespro.ru>
Reviewed-by: Tomas Vondra <tomas.vondra@enterprisedb.com>
Discussion: https://postgr.es/m/CAExHW5tHqEf3ASVqvFFcghYGPfpy7o3xnvhHwBGbJFMRH8KjNw@mail.gmail.com
This changes "make dist" to directly use "git archive", rather than
the custom shell script it currently runs.
This is to make the creation of the distribution tarball more directly
traceable to the git repository. That is why we removed the "make
distprep" step.
"make dist" continues to produce a .gz and a .bz2 tarball as before.
The archives produced this way are deterministic and reproducible,
meaning for a given commit the result file should always be
bit-for-bit identical. The exception is that if you use a git version
older than 2.38.0, gzip records the platform in the archive, so you'd
get a different output on Windows vs. macOS vs. "UNIX" (everything
else). In git 2.38.0, this was changed so that everything is recorded
as "UNIX" now. This is just something to keep in mind. This issue is
specific to the gzip format; it does not affect other compression
formats.
Meson has its own distribution building command (meson dist), but we
are not using that at this point. The main problem is that, the way
they have implemented it, it is not deterministic in the above sense.
Also, we want a "make" version for the time being. But the target
name "dist" in meson is reserved for that reason, so we call the
custom target "pgdist" (so call something like "meson compile -C build
pgdist").
Reviewed-by: Tristan Partin <tristan@neon.tech>
Discussion: https://www.postgresql.org/message-id/flat/40e80f77-a294-4f29-a16f-e21bc7bc75fc%40eisentraut.org
Coverity complained about an integer handling issue: if we start with
an arbitrary non-negative shift value, the loop may decrement it down
to something less than zero before exiting. This commit adds an
assertion to make sure the 'shift' is always 0 after the loop, and
uses 0 as the shift to get the key chunk in the following operation.
Introduced by ee1b30f12.
Reported-by: Tom Lane as per coverity
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/2089517.1711299216%40sss.pgh.pa.us
Until now, UNION queries have often been suboptimal as the planner has
only ever considered using an Append node and making the results unique
by either using a Hash Aggregate, or by Sorting the entire Append result
and running it through the Unique operator. Both of these methods
always require reading all rows from the union subqueries.
Here we adjust the union planner so that it can request that each subquery
produce results in target list order so that these can be Merge Appended
together and made unique with a Unique node. This can improve performance
significantly as the union child can make use of the likes of btree
indexes and/or Merge Joins to provide the top-level UNION with presorted
input. This is especially good if the top-level UNION contains a LIMIT
node that limits the output rows to a small subset of the unioned rows as
cheap startup plans can be used.
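As an illustrative sketch (schema assumed), a query such as:

    CREATE TABLE t1 (a int PRIMARY KEY);
    CREATE TABLE t2 (a int PRIMARY KEY);
    EXPLAIN (COSTS OFF)
    SELECT a FROM t1 UNION SELECT a FROM t2 ORDER BY a LIMIT 10;

may now be planned as a Unique over a Merge Append of two index scans,
rather than sorting or hashing the entire appended result.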
Author: David Rowley
Reviewed-by: Richard Guo, Andy Fan
Discussion: https://postgr.es/m/CAApHDvpb_63XQodmxKUF8vb9M7CxyUyT4sWvEgqeQU-GB7QFoQ@mail.gmail.com
Straightforward index-level REINDEX is not supported with multiple jobs, as
we cannot control the concurrent processing of multiple indexes that depend
on the same relation. Instead, we dedicate the whole table to a given
reindex job. Thus, if the indexes in the list belong to different tables,
that gives us a fair level of parallelism.
This commit teaches get_parallel_object_list() to fetch table names for
indexes in the case of index-level REINDEX. The same tables are grouped
together in the output order, and the list of indexes is also rebuilt to
match that order. Later, during processing of that list, we push indexes
belonging to the same table into the same job.
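For example (index and database names illustrative), the indexes below
are grouped by their tables, and the tables are spread across the jobs:

    reindexdb --jobs=2 --index=idx_a1 --index=idx_b1 --index=idx_a2 mydb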
Discussion: https://postgr.es/m/CACG%3DezZU_VwDi-1PN8RUSE6mcYG%2BYx1NH_rJO4%2BKe-mKqLp%3DNw%40mail.gmail.com
Author: Maxim Orlov, Svetlana Derevyanko, Alexander Korotkov
Reviewed-by: Michael Paquier
Check the source length before checking for the NUL terminator, to avoid
reading one byte past the end of the string. Also fix an unreachable bug
in the case where the caller does not expect a NUL-terminated result.
Add unit test coverage of convert_case() in case_test.c, which makes
it easier to reproduce the valgrind failure.
Discussion: https://postgr.es/m/7a9fd36d-7a38-4dc2-e676-fc939491a95a@gmail.com
Reported-by: Alexander Lakhin
Since commit 3d14e171e9, SET ROLE has required the current session
user to have membership with the SET option in the target role, but
the SET ROLE documentation only mentions the membership
requirement. This commit adds this important detail to the SET
ROLE page.
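For example (role names illustrative), membership alone is no longer
sufficient; the membership must carry the SET option:

    GRANT admin TO alice WITH SET TRUE;   -- alice may SET ROLE admin
    GRANT admin TO bob WITH SET FALSE;    -- bob may not, despite membership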
Reviewed-by: Robert Haas
Discussion: https://postgr.es/m/CA%2BRLCQysHtME0znk2KUMJN343ksboSRQSU-hCnOjesX6VK300Q%40mail.gmail.com
Backpatch-through: 16
Up to now we've rejected cases like
BEGIN;
CREATE TYPE rainbow AS ENUM ();
ALTER TYPE rainbow ADD VALUE 'red';
-- use the value 'red', perhaps in a constraint or index
COMMIT;
The concern is that the uncommitted enum value 'red' might get into
an index and then break the index if we roll back the ALTER ADD.
If the ALTER is in the same transaction as the CREATE then it's really
perfectly safe, but we weren't taking the trouble to identify that.
pg_dump in binary-upgrade mode will emit enum definitions that look
like the above, which up to now didn't fall foul of the unsafe-usage
check because we processed each restore command as a separate
transaction. However, an upcoming patch proposes to bundle the restore
commands into large transactions to reduce XID consumption during
pg_upgrade, and that makes this behavior a problem.
To fix, remember the OIDs of enum types created in the current
transaction, and allow use of enum values that are added to one later
in the same transaction. To do this fully correctly in the presence
of subtransactions, we'd have to track subtransaction nesting level of
the CREATE and do maintenance work at every subsequent subtransaction
exit. That seems expensive, and we don't need it to satisfy pg_dump's
usage. Hence, apply the additional optimization only when the CREATE
and ALTER are at outermost transaction level.
Patch by me, reviewed by Andrew Dunstan
Discussion: https://postgr.es/m/1548468.1711220438@sss.pgh.pa.us
It wasn't getting freed in one code path, which Coverity identified as
a resource leak. It's probably of little consequence, but re-ordering
the code into the correct sequence is no more work than dismissing the
complaint. Minor oversight in commit a145f424d.
While here, improve the unreasonably clunky coding of
FindDbnameInConnParams: use of an output parameter is unnecessary
and prone to uninitialized-variable problems.
Coverity identified this as a resource leak. It's surely of no
consequence given that the function is called only once per run, but
freeing the storage is no more work than dismissing the complaint.
Minor oversight in commit 347758b12.
We need a slightly different "cp" incantation to make sure top-level
"dot" files, such as ".git", are also copied.
This is relevant for example if a script wants to execute a git
command. This currently does not happen, but it has come up while
testing other patches.
Reviewed-by: Tristan Partin <tristan@neon.tech>
Discussion: https://www.postgresql.org/message-id/flat/40e80f77-a294-4f29-a16f-e21bc7bc75fc%40eisentraut.org
Add PERIOD clause to foreign key constraint definitions. This is
supported for range and multirange types. Temporal foreign keys check
for range containment instead of equality.
This feature matches the behavior of the SQL standard temporal foreign
keys, but it works on PostgreSQL's native ranges instead of SQL's
"periods", which don't exist in PostgreSQL (yet).
Reference actions ON {UPDATE,DELETE} {CASCADE,SET NULL,SET DEFAULT}
are not supported yet.
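A sketch of the syntax (table and column names illustrative, relying on
the separately added WITHOUT OVERLAPS primary keys):

    CREATE TABLE parent (
        id int,
        valid_at daterange,
        PRIMARY KEY (id, valid_at WITHOUT OVERLAPS)
    );

    CREATE TABLE child (
        id int,
        valid_at daterange,
        parent_id int,
        FOREIGN KEY (parent_id, PERIOD valid_at)
            REFERENCES parent (id, PERIOD valid_at)
    );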
Author: Paul A. Jungwirth <pj@illuminatedcomputing.com>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: jian he <jian.universality@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/CA+renyUApHgSZF9-nd-a0+OPGharLQLO=mDHcY4_qQ0+noCUVg@mail.gmail.com
It might happen that the varlena value wasn't compressed by
index_form_tuple() due to the current storage parameters. If compression
is currently enabled, we need to compress such values to match the index
tuple coming from the heap.
Backpatch to all supported versions.
Discussion: https://postgr.es/m/flat/7bdbe559-d61a-4ae4-a6e1-48abdf3024cc%40postgrespro.ru
Author: Andrey Borodin
Reviewed-by: Alexander Lakhin, Michael Zhilin, Jian He, Alexander Korotkov
Backpatch-through: 12
In the heap, tuples may contain short varlena datums with both 1B and 4B
headers. But the corresponding index tuple should always have such
varlenas with 1B headers. So, for fingerprinting, we need to convert them.
Backpatch to all supported versions.
Discussion: https://postgr.es/m/flat/7bdbe559-d61a-4ae4-a6e1-48abdf3024cc%40postgrespro.ru
Author: Michael Zhilin
Reviewed-by: Alexander Lakhin, Andrey Borodin, Jian He, Alexander Korotkov
Backpatch-through: 12
Up to now, all of the "catcache list" objects within a catalog cache
were just chained together on a single dlist, requiring O(N) time to
search. Remarkably, we've not had serious performance problems with
that so far; but we got a complaint of a bad performance regression
from v15 in a case with a large number of roles in the system, which
traced down to O(N^2) total time when we probed N catcache lists.
Replace that data structure with a hashtable having an enlargeable
number of dlists, in an exactly parallel way to the data structure
we've used for years for the plain CatCTup cache members. The extra
cost of maintaining a hash table seems negligible, since we were
already computing a hash value for list searches.
Normally this'd be HEAD-only material, but in view of the performance
regression it seems advisable to back-patch into v16. In the v16
version of the patch, leave the dead cc_lists field where it is and
add the new fields at the end of struct catcache, to avoid possible
ABI breakage in case any external code is looking at these structs.
(We assume no external code is actually allocating new catcache
structs.)
Per report from alex work.
Discussion: https://postgr.es/m/CAGvXd3OSMbJQwOSc-Tq-Ro1CAz=vggErdSG7pv2s6vmmTOLJSg@mail.gmail.com
This adds the X509 attributes notBefore and notAfter to sslinfo
as well as pg_stat_ssl to allow verifying and identifying the
validity period of the current client certificate. OpenSSL has
APIs for extracting notAfter and notBefore, but they are only
supported in recent versions so we have to calculate the dates
by hand in order to make this work for the older versions of
OpenSSL that we still support.
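For example, assuming the new functions are named
ssl_client_get_notbefore() and ssl_client_get_notafter(), and the
pg_stat_ssl columns not_before and not_after:

    SELECT ssl_client_get_notbefore(), ssl_client_get_notafter();
    SELECT pid, not_before, not_after FROM pg_stat_ssl;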
Original patch by Cary Huang with additional hacking by Jacob
and myself.
Author: Cary Huang <cary.huang@highgo.ca>
Co-author: Jacob Champion <jacob.champion@enterprisedb.com>
Co-author: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/182b8565486.10af1a86f158715.2387262617218380588@highgo.ca
This changes nodeToString() to not output the actual value of location
fields in nodes, but instead it writes -1. This mirrors the fact that
stringToNode() also does not read location field values but always
stores -1.
For most uses of nodeToString(), namely storing nodes in catalog
fields, this is more useful. We don't store original query texts in
catalogs, so any lingering query location values are not meaningful.
For debugging purposes, there is a new nodeToStringWithLocations(),
which mirrors the existing stringToNodeWithLocations(). This is used
for WRITE_READ_PARSE_PLAN_TREES and nodes/print.c functions, which
covers all the debugging uses.
Reviewed-by: Matthias van de Meent <boekewurm+postgres@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/CAEze2WgrCiR3JZmWyB0YTc8HV7ewRdx13j0CqD6mVkYAW+SFGQ@mail.gmail.com
Until now, the reason for replication slot invalidation was not tracked
directly in pg_replication_slots. A recent commit, 007693f2a3, added
'conflict_reason' to show the reasons for slot conflict/invalidation,
but only for logical slots.
This commit adds a new column 'invalidation_reason' to show invalidation
reasons for both physical and logical slots. It also turns the
'conflict_reason' text column into a 'conflicting' boolean column
(effectively reverting commit 007693f2a3). The 'conflicting' column is
true for the invalidation reasons 'rows_removed' and
'wal_level_insufficient', because those make the slot conflict with
recovery. When 'conflicting' is true, one can now look at the new
'invalidation_reason' column for the reason for the logical slot's
conflict with recovery.
The new 'invalidation_reason' column will also be useful for tracking
other invalidation reasons in future commits.
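For example:

    SELECT slot_name, conflicting, invalidation_reason
    FROM pg_replication_slots;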
Author: Bharath Rupireddy
Reviewed-by: Bertrand Drouvot, Amit Kapila, Shveta Malik
Discussion: https://www.postgresql.org/message-id/ZfR7HuzFEswakt/a%40ip-10-97-1-34.eu-west-3.compute.internal
Discussion: https://www.postgresql.org/message-id/CALj2ACW4aUe-_uFQOjdWCEN-xXoLGhmvRFnL8SNw_TZ5nJe+aw@mail.gmail.com
Put the fields alias and eref earlier in the struct, so that it
matches the order in _outRangeTblEntry()/_readRangeTblEntry(). This
helps if we ever want to fully automate out/read of RangeTblEntry.
Also, it makes dumps in the debugger easier to read in the same way.
Internally, this makes no difference.
Reviewed-by: Andrew Dunstan <andrew@dunslane.net>
Discussion: https://www.postgresql.org/message-id/flat/4b27fc50-8cd6-46f5-ab20-88dbaadca645@eisentraut.org
This is part of an effort to reduce the number of special cases in the
automatically generated node support functions.
This patch removes _jumbleRangeTblEntry() and instead adds per-field
query_jumble_ignore annotations to match the behavior of the previous
custom code. The pg_stat_statements test suite has some coverage of
this. It gets rid of the switch on rtekind; this should be
technically correct, since the equal and copy functions also work
this way.
The list of fields to jumble has been checked and is considered
correct as of 8b29a119fd.
Reviewed-by: Andrew Dunstan <andrew@dunslane.net>
Discussion: https://www.postgresql.org/message-id/flat/4b27fc50-8cd6-46f5-ab20-88dbaadca645@eisentraut.org
This allows us to abstract how (and whether) a table AM uses transaction
identifiers. A custom table AM can use a custom slot, which may not store
xmin directly, but instead determines whether the tuple belongs to the
current transaction in some other way.
Discussion: https://postgr.es/m/CAPpHfdurb9ycV8udYqM%3Do0sPS66PJ4RCBM1g-bBpvzUfogY0EA%40mail.gmail.com
Reviewed-by: Matthias van de Meent, Mark Dilger, Pavel Borisov
Reviewed-by: Nikita Malakhov, Japin Li
This allows a table AM to return a native tuple slot even if
VirtualTupleTableSlot is given as input. Native tuple slots have knowledge
about system attributes, which could be accessed in the future. The
table_multi_insert() method can already modify the input 'slots' array.
Discussion: https://postgr.es/m/CAPpHfdurb9ycV8udYqM%3Do0sPS66PJ4RCBM1g-bBpvzUfogY0EA%40mail.gmail.com
Reviewed-by: Matthias van de Meent, Mark Dilger, Pavel Borisov
Reviewed-by: Nikita Malakhov, Japin Li
The new table AM method free_rd_amcache is responsible for freeing all the
memory related to rd_amcache and setting rd_amcache to NULL. If the new
method is not specified, we still assume rd_amcache to be a single chunk of
memory, which can simply be pfree'd.
Discussion: https://postgr.es/m/CAPpHfdurb9ycV8udYqM%3Do0sPS66PJ4RCBM1g-bBpvzUfogY0EA%40mail.gmail.com
Reviewed-by: Matthias van de Meent, Mark Dilger, Pavel Borisov
Reviewed-by: Nikita Malakhov, Japin Li
The old text claims that HOT completely removes old row versions.
It was unclear whether it just meant the tuples themselves, or the
tuples together with their line pointers. If it meant the former,
it was wrong because we can remove dead row versions even when no
HOT updates have occurred, so it's not describing a benefit of HOT.
If it meant the latter, it was wrong because HOT doesn't allow
reclaiming the root tuple's line pointer.
This section does seem like it's intended to be more of an
informal introduction to HOT than a precise technical description
of every detail of how it works, but we still don't want it to
say things that are just not true, so update the text enough
to avoid that.
Patch by me, reviewed by James Coleman (although he would have
preferred more extensive changes) and Shubham Khanna.
Discussion: http://postgr.es/m/CA+TgmobH6DPmR-u--Xgeg8cYUwhDhypNsv38nDrAJyf_xno=TQ@mail.gmail.com
Presently, one of the lines in README.md has trailing whitespace,
which was added by commit 363eb05996 to maintain a line break that
was in the non-Markdown version. Instead of changing
.gitattributes, let's match this paragraph's style with the style
of the following paragraph, thereby removing the trailing
whitespace.
Reported-by: Peter Eisentraut
Reviewed-by: Daniel Gustafsson
Discussion: https://postgr.es/m/c0a4e906-c6fe-4519-bd15-28dfcd03fdb2%40eisentraut.org
This refactors the SASL init flow to set password_needed on the two
SCRAM exchanges currently supported. The code already required this
but was set up in such a way that all SASL exchanges required using
a password, a restriction which may not hold for all exchanges (the
example at hand being the proposed OAuthbearer exchange).
This was extracted from a larger patchset to introduce OAuthBearer
authentication and authorization.
Author: Jacob Champion <jacob.champion@enterprisedb.com>
Discussion: https://postgr.es/m/d1b467a78e0e36ed85a09adf979d04cf124a9d4b.camel@vmware.com
The SASL exchange callback returned state in two output variables:
done and success. This refactors that logic by introducing a new
return variable of type SASLStatus which makes the code easier to
read and understand, and prepares for future SASL exchanges which
operate asynchronously.
This was extracted from a larger patchset to introduce OAuthBearer
authentication and authorization.
Author: Jacob Champion <jacob.champion@enterprisedb.com>
Discussion: https://postgr.es/m/d1b467a78e0e36ed85a09adf979d04cf124a9d4b.camel@vmware.com
The buildfarm animal parula has been sporadically failing in the
partition_prune test for the past week or so. It appears that an
auto-vacuum or auto-analyze has run on one of the partitions of the "ab"
table, causing the plan to change. This is unexpected as the table is
empty.
Here we install some telemetry to find out if this is the case. This
also joins in pg_index to see if something has gone wrong with the index
which could result in the planner being unable to use that index.
We can revert this once we've figured out the cause of the plan
instability.
Reported-by: Tom Lane
Investigation-by: Tom Lane
Discussion: https://postgr.es/m/4009739.1710878318%40sss.pgh.pa.us
This introduces the following SQL/JSON functions for querying JSON
data using jsonpath expressions:
JSON_EXISTS(), which can be used to apply a jsonpath expression to a
JSON value to check if it yields any values.
JSON_QUERY(), which can be used to apply a jsonpath expression to
a JSON value to get a JSON object, an array, or a string. There are
various options to control whether a multi-value result uses array
wrappers and whether singleton scalar strings are quoted or not.
JSON_VALUE(), which can be used to apply a jsonpath expression to a
JSON value to return a single scalar value, producing an error if
multiple values are matched.
Both JSON_VALUE() and JSON_QUERY() functions have options for
handling EMPTY and ERROR conditions, which can be used to specify
the behavior when no values are matched and when an error occurs
during jsonpath evaluation, respectively.
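Some illustrative examples:

    SELECT JSON_EXISTS(jsonb '{"a": [1,2,3]}', '$.a[2]');      -- true
    SELECT JSON_QUERY(jsonb '{"a": [1,2,3]}', '$.a');          -- [1, 2, 3]
    SELECT JSON_VALUE(jsonb '{"a": 1}', '$.a' RETURNING int);  -- 1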
Author: Nikita Glukhov <n.gluhov@postgrespro.ru>
Author: Teodor Sigaev <teodor@sigaev.ru>
Author: Oleg Bartunov <obartunov@gmail.com>
Author: Alexander Korotkov <aekorotkov@gmail.com>
Author: Andrew Dunstan <andrew@dunslane.net>
Author: Amit Langote <amitlangote09@gmail.com>
Author: Peter Eisentraut <peter@eisentraut.org>
Author: Jian He <jian.universality@gmail.com>
Reviewers have included (in no particular order):
Andres Freund, Alexander Korotkov, Pavel Stehule, Andrew Alsup,
Erik Rijkers, Zihong Yu, Himanshu Upadhyaya, Daniel Gustafsson,
Justin Pryzby, Álvaro Herrera, Jian He, Anton A. Melnikov,
Nikita Malakhov, Peter Eisentraut, Tomas Vondra
Discussion: https://postgr.es/m/cd0bb935-0158-78a7-08b5-904886deac4b@postgrespro.ru
Discussion: https://postgr.es/m/20220616233130.rparivafipt6doj3@alap3.anarazel.de
Discussion: https://postgr.es/m/abd9b83b-aa66-f230-3d6d-734817f0995d%40postgresql.org
Discussion: https://postgr.es/m/CA+HiwqHROpf9e644D8BRqYvaAPmgBZVup-xKMDPk-nd4EpgzHw@mail.gmail.com
Discussion: https://postgr.es/m/CA+HiwqE4XTdfb1nW=Ojoy_tQSRhYt-q_kb6i5d4xcKyrLC1Nbg@mail.gmail.com
Commit cca97ce6a6 allowed dbname in the pg_basebackup connection string,
and this commit allows it to be written to postgresql.auto.conf when the
-R option is used. The database name in the connection string will be
used by logical replication slot synchronization on the standby.
The dbname will be recorded only if specified explicitly in the connection
string or environment variable.
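The effect (values illustrative) is an extra dbname in the generated
primary_conninfo line in postgresql.auto.conf:

    primary_conninfo = 'host=primary.example.com port=5432 user=repl dbname=postgres'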
Masahiko Sawada hasn't reviewed the code in detail but endorsed the idea.
Author: Vignesh C, Kuroda Hayato
Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/CAB8KJ=hdKdg+UeXhReeHpHA6N6v3e0qFF+ZsPFHk9_ThWKf=2A@mail.gmail.com
TIDStore is a data structure designed to efficiently store large sets
of TIDs. For TID storage, it employs a radix tree, where the key is
a block number, and the value is a bitmap representing offset
numbers. The TIDStore can be created on a DSA area and used by
multiple backend processes simultaneously.
There are potential future users such as tidbitmap.c, though it's very
likely the interface will need to evolve as we come to understand the
needs of different kinds of users. For example, we can support
updating the offset bitmap of existing values.
The TIDStore is not used for anything yet, aside from the
test code. But an upcoming patch will use it.
This includes a unit test module, in src/test/modules/test_tidstore.
Co-authored-by: John Naylor
Discussion: https://postgr.es/m/CAD21AoAfOZvmfR0j8VmZorZjL7RhTiQdVttNuC4W-Shdc2a-AA%40mail.gmail.com
This essentially reverts commit 69eb643b2, which added a fast path
in Catalog::ParseData, but neglected to preserve the behavior of
adding a line_number field in each hash. That makes it impossible
for genbki.pl to provide any localization of error reports, which is
bad enough; but actually the affected error reports failed entirely,
producing useless bleats like "use of undefined value in sprintf".
69eb643b2 claimed to get a 15% speedup, but I'm not sure I believe
that: the time to rebuild the bki files changes by less than 1% for
me. In any case, making debugging of mistakes in .dat files more
difficult would not be justified by even an order of magnitude
speedup here; it's just not that big a chunk of the total build time.
Per report from David Wheeler. Although it's also broken in v16,
I don't think this is worth a back-patch, since we're very unlikely
to touch the v16 catalog data again.
Discussion: https://postgr.es/m/19238.1710953049@sss.pgh.pa.us
In combination with to_regtype, this allows converting a string to
the "canonicalized" form emitted by format_type. That usage requires
parsing the string twice, which is slightly annoying but not really
too expensive. We considered alternatives such as returning a record
type, but that way was notationally uglier than this, and possibly
less flexible.
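A sketch of the intended usage, assuming the function added here is
to_regtypemod():

    SELECT format_type(to_regtype('varchar(32)'), to_regtypemod('varchar(32)'));
      -- character varying(32)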
Like to_regtype(), we'd rather that this return NULL for any bad
input, but the underlying type-parsing logic isn't yet capable of
not throwing syntax errors. Adjust the documentation for both
functions to point that out.
In passing, fix up a couple of nearby entries in the System Catalog
Information Functions table that had not gotten the word about our
since-v13 convention for displaying function usage examples.
David Wheeler and Erik Wienhold, reviewed by Pavel Stehule, Jim Jones,
and others.
Discussion: https://postgr.es/m/DF2324CA-2673-4ABE-B382-26B5770B6AA3@justatheory.com
This commit limits the maximum value of wal_summary_keep_time to
INT_MAX / SECS_PER_MINUTE to avoid overflow when it is converted to
seconds. In passing, use the HOURS_PER_DAY, MINS_PER_HOUR, and
SECS_PER_MINUTE macros in the code for this GUC instead of hard-
coding those values.
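For reference, with SECS_PER_MINUTE = 60 and the GUC stored in minutes,
the cap works out to:

    SELECT 2147483647 / 60;  -- 35791394 minutes, roughly 68 years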
Discussion: https://postgr.es/m/20240314210010.GA3056455%40nathanxps13