If we define ZLIB_CONST before including zlib.h, zlib augments some
interfaces with const decorations. This lets us keep our own interfaces
cleaner and remove some unconstify() calls.
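For illustration, a minimal sketch (not code from the tree) of the
effect on z_stream.next_in:
    #define ZLIB_CONST
    #include <zlib.h>
    static void
    set_input(z_stream *zs, const char *buf, size_t len)
    {
        /* with ZLIB_CONST, next_in is const-qualified, so no cast-away */
        zs->next_in = (const Bytef *) buf;
        zs->avail_in = (uInt) len;
    }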
ZLIB_CONST was introduced in zlib 1.2.5.2 (17 Dec 2011). When
compiling with older zlib releases, you might now get compiler
warnings about discarding qualifiers.
CentOS 6 has zlib 1.2.3, but in 8e278b6576, we removed support for the
OpenSSL release in CentOS 6, so it seems ok to de-support the zlib
release in CentOS 6 as well.
Reviewed-by: Tristan Partin <tristan@neon.tech>
Discussion: https://www.postgresql.org/message-id/flat/33462926-bb1e-7cc9-8d92-d86318e8ed1d%40eisentraut.org
The code in join_search_one_level was a bit convoluted. At a casual
glance, you might think that other_rels_list was being set to something
other than joinrels[1] when level == 2; however, joinrels[level - 1] is
joinrels[1] when level == 2, so nothing special needs to happen to set
other_rels_list. Let's clean that up to avoid confusing anyone.
In passing, we may as well modernize the loop in
make_rels_by_clause_joins(): instead of passing in the ListCell to start
looping from, just pass in the index to start from and make use of
for_each_from(). Ever since 1cff1b95a, Lists are arrays under the hood,
so lnext() and list_head() both seem a little too linked-list-like.
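A rough sketch of the resulting loop shape (hypothetical variable
names, not the committed code):
    ListCell   *lc;
    /* scan other_rels starting at the given list index, rather than at
     * a ListCell supplied by the caller */
    for_each_from(lc, other_rels, first_rel_idx)
    {
        RelOptInfo *other_rel = (RelOptInfo *) lfirst(lc);
        /* ... consider joining old_rel to other_rel ... */
    }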
Author: Alex Hsieh, David Rowley, Richard Guo
Reviewed-by: Julien Rouhaud
Discussion: https://postgr.es/m/CANWNU8x9P9aCXGF%3DaT-A_8mLTAT0LkcZ_ySYrGbcuHzMQw2-1g%40mail.gmail.com
This test was recently added in 3900a02c9. It appears to be unstable
with regard to the join order, presumably due to the relations on either
side of the join being equal in size. Here we add a qual to make one of
them smaller so the planner is more likely to choose to hash the smaller
of the two.
Reported-by: Nathan Bossart, Tom Lane
Discussion: https://www.postgr.es/m/20230803235403.GC1238296@nathanxps13
Here we adjust the costs for WindowAggs so that they properly take into
account how much of their subnode they must read before outputting the
first row. Without this, we always assumed that the startup cost for the
WindowAgg was not much higher than the startup cost of its subnode;
however, that's completely wrong in many cases. The WindowAgg may have
to read *all* of its subnode to output a single row with certain window
bound options.
Here we estimate how many rows we'll need to read from the WindowAgg's
subnode and proportionally add more of the subnode's run costs onto the
WindowAgg's startup costs according to how much of it we expect to have to
read in order to produce the first WindowAgg row.
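In cost terms the adjustment amounts to something like the following
sketch (hypothetical variable names, not the committed costing code):
    /* fraction of the subnode expected to be read before the first row,
     * clamped to the whole input (assumes subnode_rows > 0) */
    double  input_frac = Min(rows_for_first_row, subnode_rows) /
                         subnode_rows;
    /* charge that share of the subnode's run cost as WindowAgg startup */
    startup_cost += (subnode_total_cost - subnode_startup_cost) * input_frac;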
The reason this is more important than we might have initially thought
is that we may end up making use of a path from the lower planner that
works well as a cheap-startup plan when the query has a LIMIT clause;
however, the WindowAgg might mean we need to read far more rows than the
LIMIT specifies.
No backpatch on this so as not to cause plan changes in released
versions.
Bug: #17862
Reported-by: Tim Palmer
Author: David Rowley
Reviewed-by: Andy Fan
Discussion: https://postgr.es/m/17862-1ab8f74b0f7b0611@postgresql.org
Discussion: https://postgr.es/m/CAApHDvrB0S5BMv+0-wTTqWFE-BJ0noWqTnDu9QQfjZ2VSpLv_g@mail.gmail.com
Both apply and tablesync workers were using ApplyWorkerMain() as their
entry point. As the name implies, ApplyWorkerMain() should be considered
the main function for apply workers. The tablesync worker's path was
hidden and did not have enough in common with the apply worker to share
the same main function.
Also, most of the code shared by both worker types is already combined
in LogicalRepApplyLoop(). There is no need to combine the rest in
ApplyWorkerMain() anymore.
This patch introduces TablesyncWorkerMain() as a new entry point for
tablesync workers. This aims to increase code readability and would help
with future improvements like the reuse of tablesync workers in the
initial synchronization.
Author: Melih Mutlu based on suggestions by Melanie Plageman
Reviewed-by: Peter Smith, Kuroda Hayato, Amit Kapila
Discussion: http://postgr.es/m/CAGPVpCTq=rUDd4JUdaRc1XUWf4BrH2gdSNf3rtOMUGj9rPpfzQ@mail.gmail.com
Between 6fcda9aba and 1b6f632a3, the pg_strtoint functions became quite
a bit slower in v16, despite efforts in 6b423ec67 to speed them up.
Since the majority of inputs to these functions will contain only
base-10 digits, perhaps prefixed by a '-', it makes sense to have a fast
path for that case and fall back on the more complex version, which
processes hex, octal, binary and underscores, only when the fast path
fails to parse the string.
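Schematically, the fast path looks something like this (a rough sketch
with a hypothetical slow-path helper, not the committed code):
    const char *ptr = str;
    bool        neg = false;
    int32       result = 0;
    if (*ptr == '-')
    {
        neg = true;
        ptr++;
    }
    /* accumulate plain base-10 digits; overflow checks omitted here */
    while (*ptr >= '0' && *ptr <= '9')
        result = result * 10 + (*ptr++ - '0');
    /* anything else (hex/octal/binary prefixes, underscores, whitespace)
     * punts to the slower, more general parser */
    if (*ptr != '\0')
        return pg_strtoint32_slow(str);
    return neg ? -result : result;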
While we're here, update the header comments for these functions to
mention that hex, octal and binary formats along with underscore
separators are now supported.
Author: Andres Freund, David Rowley
Reported-by: Masahiko Sawada
Reviewed-by: Dean Rasheed, John Naylor
Discussion: https://postgr.es/m/CAD21AoDvDmUQeJtZrau1ovnT_smN940%3DKp6mszNGK3bq9yRN6g%40mail.gmail.com
Backpatch-through: 16, where 6fcda9aba and 1b6f632a3 were added
The stats regression test attempts to ensure that Buffer Access Strategy
"reuses" are being counted in pg_stat_io by vacuuming a table which is larger
than the size of the strategy ring. However, when shared buffers are in
sufficiently high demand, another backend could evict one of the blocks in the
strategy ring before the first backend has a chance to reuse the buffer. The
backend using the strategy would then evict another shared buffer and add that
buffer to the strategy ring. This counts as an eviction and not a reuse in
pg_stat_io. Count both evictions and reuses in the test to ensure it does not
fail incorrectly.
Reported-by: Jeff Davis <pgsql@j-davis.com>
Author: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-by: Alexander Lakhin <exclusion@gmail.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Discussion: https://postgr.es/m/CAAKRu_bNG27AxG9TdPtwsL6wg8AWbVckjmTL2t1HF=miDQuNtw@mail.gmail.com
This was failing for queries which try to get the .type() of a
jpiLikeRegex. For example:
select jsonb_path_query('["string", "string"]',
'($[0] like_regex ".{7}").type()');
Reported-by: Alexander Kozhemyakin
Bug: #18035
Discussion: https://postgr.es/m/18035-64af5cdcb5adf2a9@postgresql.org
Backpatch-through: 12, where SQL/JSON path was added.
The previous commit removed the "override" APIs. The surviving APIs
let plancache.c snapshot search_path and test whether the current value
equals a remembered snapshot.
Aleksander Alekseev. Reported by Alexander Lakhin and Noah Misch.
Discussion: https://postgr.es/m/8ffb4650-52c4-6a81-38fc-8f99be981130@gmail.com
Two backend routines are added to allow extensions to allocate and
define custom wait events, all of them allocated in the wait event type
"Extension":
* WaitEventExtensionNew(), which allocates a wait event ID computed
from a counter in shared memory.
* WaitEventExtensionRegisterName(), which associates a custom string
with the allocated wait event ID.
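A rough usage sketch based on the description above (the exact
signatures, flags and names used here are assumptions):
    static uint32 my_wait_event = 0;
    /* allocate the ID once and attach a name shown in pg_stat_activity */
    if (my_wait_event == 0)
    {
        my_wait_event = WaitEventExtensionNew();
        WaitEventExtensionRegisterName(my_wait_event, "MyExtensionWait");
    }
    (void) WaitLatch(MyLatch,
                     WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,
                     1000L, my_wait_event);
    ResetLatch(MyLatch);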
Note that this includes an example of how to use this new facility in
worker_spi, with TAP tests covering various scenarios, and some
documentation about how to use it.
Any code in the tree that currently uses WAIT_EVENT_EXTENSION could
switch to this new facility to define custom wait events. This is left
as work for future patches.
Author: Masahiro Ikeda
Reviewed-by: Andres Freund, Michael Paquier, Tristan Partin, Bharath
Rupireddy
Discussion: https://postgr.es/m/b9f5411acda0cf15c8fbb767702ff43e@oss.nttdata.com
MSVC's _BitScan* functions return a boolean indicating whether any
bits were set in the input, and we were previously asserting that
they returned true, per our API. This is correct. However, other
platforms simply assert that the input is non-zero, so do that to be
more consistent.
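A simplified sketch of the shape of the change (not the exact hunk):
    unsigned long result;
    unsigned char non_zero;
    /* before: assert on _BitScanReverse()'s return value */
    non_zero = _BitScanReverse(&result, word);
    Assert(non_zero);
    /* after: assert on the input, like the other implementations do */
    Assert(word != 0);
    (void) _BitScanReverse(&result, word);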
Noted while investigating a hypothesis from Ranier Vilela about
undefined behavior, but this is not his proposed patch.
Discussion: https://www.postgresql.org/message-id/CAEudQAoDhUZyKGJ1vbMGcgVUOcsixe-%3DjcVaDWarqkUg163D2w%40mail.gmail.com
These are useful to extract the class ID and the event ID associated
with a single uint32 wait_event_info. Only two code paths use them now,
but an upcoming patch will extend their use.
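Roughly, such helpers boil down to the following (macro names here are
assumptions; the class lives in the high byte of wait_event_info and the
event ID in the low 16 bits):
    #define WAIT_EVENT_CLASS_MASK   0xFF000000
    #define WAIT_EVENT_ID_MASK      0x0000FFFF
    uint32  classId = wait_event_info & WAIT_EVENT_CLASS_MASK;
    uint16  eventId = wait_event_info & WAIT_EVENT_ID_MASK;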
Discussion: https://postgr.es/m/ZMcJ7F7nkGkIs8zP@paquier.xyz
libpq_source.c would consider any result returned by
pg_tablespace_location() as a symlink, resulting in run-time errors like
this:
pg_rewind: error: file "pg_tblspc/NN" is of different type in source and target
In-place tablespaces are directories located in pg_tblspc/, returned as
relative paths instead of absolute paths, so rely on that to tell a
normal tablespace apart from an in-place one. If the path is relative,
the tablespace is handled as a directory. If the path is absolute,
consider it a symlink.
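A simplified sketch of the decision (surrounding context assumed):
    if (is_absolute_path(link_target))
        type = FILE_TYPE_SYMLINK;       /* regular tablespace */
    else
        type = FILE_TYPE_DIRECTORY;     /* in-place tablespace in pg_tblspc/ */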
In-place tablespaces are only intended for development purposes, so, as
in 363e8f9, no backpatch is done. A test is added in pg_rewind with an
in-place tablespace and some data in it.
Author: Rui Zhao, Michael Paquier
Discussion: https://postgr.es/m/2b79d2a8-b2d5-4bd7-a15b-31e485100980.xiyuan.zr@alibaba-inc.com
The second portion of the tests had a race condition: it was possible
for the startup of the dynamic workers to fail if the static workers,
started before them because the library is loaded in
shared_preload_libraries, had not yet finished creating their respective
schemas. The conflict is caused by the fact that the dynamic and static
workers used the same IDs, overlapping each other; so, for now, switch
the dynamic workers to use different IDs, leading to different schemas
being created.
Reported-by: Andres Freund
Discussion: https://postgr.es/m/20230728022332.egqzobhskmlf6ntr@awork3.anarazel.de
Commits 83dec5a712 and ff402ae11b taught vacuumdb to reuse
passwords instead of prompting repeatedly. However, the docs still
warn about repeated prompts, and this improvement was not applied
to clusterdb and reindexdb. This commit allows clusterdb and
reindexdb to reuse passwords just like vacuumdb does, and it
expunges the aforementioned warnings from the docs.
Reviewed-by: Gurjeet Singh, Zhang Mingli
Discussion: https://postgr.es/m/20230628045741.GA1813397%40nathanxps13
Commit e7cb7ee14, which introduced the infrastructure for FDWs and
custom scan providers to replace joins with scans, failed to add
handling of pseudoconstant quals assigned to replaced joins in
createplan.c, leading to an incorrect plan without a gating Result node
when postgres_fdw replaced a join with such a qual.
To fix, we could add the support by 1) modifying the ForeignPath and
CustomPath structs to store the list of RestrictInfo nodes to apply to
the join, as in JoinPaths, if they represent foreign and custom scans
replacing a join with a scan, and by 2) modifying create_scan_plan() in
createplan.c to use that list in that case, instead of the
baserestrictinfo list, to get pseudoconstant quals assigned to the join;
but #1 would cause an ABI break. So fix by modifying the infrastructure
to just disallow replacing joins with such quals.
Back-patch to all supported branches.
Reported by Nishant Sharma. Patch by me, reviewed by Nishant Sharma and
Richard Guo.
Discussion: https://postgr.es/m/CADrsxdbcN1vejBaf8a%2BQhrZY5PXL-04mCd4GDu6qm6FigDZd6Q%40mail.gmail.com
Historically, hba.c limited tokens in the authentication configuration
files (pg_hba.conf and pg_ident.conf) to less than 256 bytes. We have
seen a few reports of this limit causing problems; notably, for
moderately-complex LDAP configurations. Let's get rid of the fixed
limit by using a StringInfo instead of a fixed-size buffer.
This actually takes less code than before, since we can get rid of
a nontrivial error recovery stanza. It's doubtless a hair slower,
but parsing the content of the HBA files should in no way be
performance-critical.
Although this is a pretty straightforward patch, it doesn't seem
worth the risk to back-patch given the small number of complaints
to date. In released branches, we'll just raise MAX_TOKEN to
ameliorate the problem.
Discussion: https://postgr.es/m/1588937.1690221208@sss.pgh.pa.us
This commit moves worker_spi to use TAP tests. sql/worker_spi.sql is
gone, replaced by an equivalent set of queries in a TAP script, without
worker_spi loaded in shared_preload_libraries:
- One query to launch a worker dynamically, relying now on "postgres" as
the default database to connect to.
- Two wait queries with poll_query_until(), one to wait for the worker
schema to be initialized and a second to wait for a tuple processed by
the worker.
- Server reload to accelerate the main loop of the spawned worker.
More coverage is added for workers registered when the library is
loaded with shared_preload_libraries, checking as well that these
connect to the database set in the GUC worker_spi.database.
A local run of these tests shows that TAP is slightly faster than the
original while providing more coverage (3.7s vs 4.4s). There were also
some discussions about keeping the SQL tests, but this would require
initializing a cluster twice, increasing the runtime of the tests up to
5.6s here.
These tests will be expanded more in an upcoming patch aimed at adding
support for custom wait events for the Extension class, still under
discussion, to check the new in-core APIs with and without a library set
in shared_preload_libraries.
Bharath has written the part where shared_preload_libraries is used,
while I have migrated the existing SQL tests to TAP.
Author: Bharath Rupireddy, Michael Paquier
Reviewed-by: Masahiro Ikeda
Discussion: https://postgr.es/m/CALj2ACWR9ncAiDF73unqdJF1dmsA2R0efGXX2624X+YVxcAVWg@mail.gmail.com
9f8377f7a added code to allow COPY FROM to insert a column's default
value when the input matches the DEFAULT string specified in the COPY
command. Here we fix some inefficient code which needlessly palloc0'd
an array to store whether we should use the default value or the input
value for each column. This array was being palloc0'd and pfree'd once
per row. It's much more efficient to allocate it once and just reset
the values once per row.
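A rough sketch of the shape of the change (hypothetical variable
names):
    /* before: allocated and freed for every row */
    bool   *defaults = (bool *) palloc0(num_atts * sizeof(bool));
    /* ... consult and fill the array while processing the row ... */
    pfree(defaults);
    /* after: allocate the array once when COPY FROM starts, then per row */
    memset(defaults, false, num_atts * sizeof(bool));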
Reported-by: Masahiko Sawada
Author: Masahiko Sawada
Discussion: https://postgr.es/m/CAD21AoDvDmUQeJtZrau1ovnT_smN940%3DKp6mszNGK3bq9yRN6g%40mail.gmail.com
Backpatch-through: 16, where 9f8377f7a was introduced.
There was already a check on the relation OID, but not on its index OID
or the attributes that can be used during the syscache lookups. The two
assertions added by this commit are cheap, and actually useful for
developers to speed up the detection of incorrect data in a new entry
added to the syscache list, as these assertions are triggered during the
initial cache loading (initdb, or just backend startup), without
requiring a syscache lookup that uses the new entry.
While on it, the relation OID check is switched to use OidIsValid().
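A rough sketch of the kind of assertions involved (field names are
assumptions):
    Assert(OidIsValid(cacheinfo[cacheId].reloid));
    Assert(OidIsValid(cacheinfo[cacheId].indoid));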
Author: Aleksander Alekseev
Reviewed-by: Dagfinn Ilmari Mannsåker, Zhang Mingli, Michael Paquier
Discussion: https://postgr.es/m/CAJ7c6TOjUTJ0jxvWY6oJeP2-840OF8ch7qscZQsuVuotXTOS_g@mail.gmail.com
In pg_stat_statements, savepoint names now show up as constants with a
parameter symbol, using as the base query string the one added as a new
entry to the PGSS hash table, leading to:
RELEASE $1
ROLLBACK TO $1
SAVEPOINT $1
Applying constants to these query parts is a huge advantage for
workloads that generate randomly-named savepoints, like ORMs (Django is
at the origin of this patch). The ODBC driver is a second layer that
makes heavy use of savepoints, though it does not use a random naming
pattern.
A "location" field is added to TransactionStmt, now set only for
savepoints. The savepoint name is ignored by the query jumbling. The
location can be extended to other query patterns, if required, like 2PC
commands. Some tests are added to pg_stat_statements for all the query
patterns supported by the parser.
ROLLBACK, ROLLBACK TO SAVEPOINT and ROLLBACK TRANSACTION TO SAVEPOINT
have the same Node representation, so all of these are treated as
equivalent. The same applies to RELEASE and RELEASE SAVEPOINT.
Author: Greg Sabino Mullane
Discussion: https://postgr.es/m/CAKAnmm+2s9PA4OaumwMJReWHk8qvJ_-g1WqxDRDAN1BSUfxyTw@mail.gmail.com
psql's --echo-hidden, --log-file, and --single-step options
generate extra lines to clearly separate queries from other output.
Presently, these extra lines are not valid SQL comments, which
makes them a hazard for anyone trying to copy/paste the decorated
queries into a client or query editor. This commit replaces the
starting and ending asterisks in these extra lines with forward
slashes so that they are valid SQL comments that can be copy/pasted
without incident.
Author: Kirk Wolak
Reviewed-by: Pavel Stehule, Laurenz Albe, Tom Lane, Alvaro Herrera, Andrey Borodin
Discussion: https://postgr.es/m/CACLU5mTFJRJYtbvmZ26txGgmXWQo0hkGhH2o3hEquUPmSbGtBw%40mail.gmail.com
This patch introduces three SQL standard JSON functions:
JSON()
JSON_SCALAR()
JSON_SERIALIZE()
JSON() produces json values from text, bytea, json or jsonb values, and
has facilities for handling duplicate keys.
JSON_SCALAR() produces a json value from any scalar SQL value, including
json and jsonb.
JSON_SERIALIZE() produces text or bytea from input which contains or
represents json or jsonb.
For the most part these functions don't add any significant new
capabilities, but they will be of use to users wanting standard-compliant
JSON handling.
Catversion bumped as this changes ruleutils.c.
Author: Nikita Glukhov <n.gluhov@postgrespro.ru>
Author: Teodor Sigaev <teodor@sigaev.ru>
Author: Oleg Bartunov <obartunov@gmail.com>
Author: Alexander Korotkov <aekorotkov@gmail.com>
Author: Andrew Dunstan <andrew@dunslane.net>
Author: Amit Langote <amitlangote09@gmail.com>
Reviewers have included (in no particular order) Andres Freund, Alexander
Korotkov, Pavel Stehule, Andrew Alsup, Erik Rijkers, Zihong Yu,
Himanshu Upadhyaya, Daniel Gustafsson, Justin Pryzby, Álvaro Herrera,
Peter Eisentraut
Discussion: https://postgr.es/m/cd0bb935-0158-78a7-08b5-904886deac4b@postgrespro.ru
Discussion: https://postgr.es/m/20220616233130.rparivafipt6doj3@alap3.anarazel.de
Discussion: https://postgr.es/m/abd9b83b-aa66-f230-3d6d-734817f0995d%40postgresql.org
Discussion: https://postgr.es/m/CA+HiwqE4XTdfb1nW=Ojoy_tQSRhYt-q_kb6i5d4xcKyrLC1Nbg@mail.gmail.com
This is to export datum_to_json(), datum_to_jsonb(), and
jsonb_from_cstring(), though the last one is exported as
jsonb_from_text().
A subsequent commit to add new SQL/JSON constructor functions will
need them for calling from the executor.
Discussion: https://postgr.es/m/20230720160252.ldk7jy6jqclxfxkq%40alvherre.pgsql
Commit 5764f611e used dclist_delete_from() to remove the proc from the
wait queue. However, since that doesn't reset the dlist_node's next/prev
pointers to NULL, RemoveFromWaitQueue() could be called twice: once when
the process detects a deadlock and again when cleaning up locks while
aborting the transaction. The waiting lock information is cleared in the
first call, so the second call led to a crash.
Backpatch to v16, where the change was introduced.
Bug: #18031
Reported-by: Justin Pryzby, Alexander Lakhin
Reviewed-by: Andres Freund
Discussion: https://postgr.es/m/ZKy4AdrLEfbqrxGJ%40telsasoft.com
Discussion: https://postgr.es/m/18031-ebe2d08cb405f6cc@postgresql.org
Backpatch-through: 16
This gives a way to tell apart, in ps output or pg_stat_activity,
workers registered when the library is loaded with
shared_preload_libraries from workers launched dynamically.
Extracted from a larger patch by the same author.
Author: Bharath Rupireddy
Reviewed-by: Masahiro Ikeda
Discussion: https://postgr.es/m/CALj2ACWR9ncAiDF73unqdJF1dmsA2R0efGXX2624X+YVxcAVWg@mail.gmail.com
This commit adds a few comments about what LWLockWaitForVar() relies on
when a backend waits for a variable update on its LWLocks for WAL
insertions up to an expected LSN.
First, LWLockWaitForVar() does not include a memory barrier, relying on
a spinlock taken at the beginning of WaitXLogInsertionsToFinish(). This
was hidden behind two layers of routines in lwlock.c. This assumption
is now documented at the top of LWLockWaitForVar(), and detailed a bit
more within LWLockConflictsWithVar().
Second, document why WaitXLogInsertionsToFinish() does not include
memory barriers, relying on a spinlock at its top, which is, per Andres'
input, fine for two different reasons, both depending on the fact that
the caller of WaitXLogInsertionsToFinish() is waiting for an LSN up to a
certain value.
This area's documentation and assumptions could be improved more in the
future, but at least that's a beginning.
Author: Bharath Rupireddy, Andres Freund
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/CALj2ACVF+6jLvqKe6xhDzCCkr=rfd6upaGc3477Pji1Ke9G7Bg@mail.gmail.com
Previously, when selecting a usable index for update/delete for a
REPLICA IDENTITY FULL table, IsIndexOnlyExpression() was used to check
that no index field is an expression. However, this was not necessary,
because it is enough to check that the leftmost index field is not an
expression (and references the remote table column), and that check is
already done by RemoteRelContainsLeftMostColumnOnIdx().
This commit removes IsIndexOnlyExpression() and
RemoteRelContainsLeftMostColumnOnIdx(); all checks for usable indexes
for REPLICA IDENTITY FULL tables are now performed by
IsIndexUsableForReplicaIdentityFull().
Backpatch this to keep the code consistent.
Reported-by: Peter Smith
Reviewed-by: Amit Kapila, Önder Kalacı
Discussion: https://postgr.es/m/CAHut%2BPsGRE5WSsY0jcLHJEoA17MrbP9yy8FxdjC_ZOAACxbt%2BQ%40mail.gmail.com
Backpatch-through: 16
The WAL insertion lock variable insertingAt is currently being read and
written with the help of the LWLock wait list lock to avoid reads of
torn values. This wait list lock can become a point of contention on
highly concurrent write workloads.
This commit switches insertingAt to a 64-bit atomic variable that
provides reads and writes free from torn values. On platforms without
64-bit atomic support, the fallback implementation uses spinlocks to
provide the same guarantees for the values read. LWLockWaitForVar(),
through LWLockConflictsWithVar(), reads the new value with a u64 atomic
operation to check if it still needs to wait. LWLockUpdateVar() updates
the variable before waking up the waiters with an exchange_u64 (full
memory barrier). LWLockReleaseClearVar() now also uses an exchange_u64
to reset the variable. Before this commit, all these steps relied on
LWLockWaitListLock() and LWLockWaitListUnlock().
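A simplified sketch of the resulting access pattern (hypothetical
surroundings):
    pg_atomic_uint64 *valptr = &WALInsertLocks[i].l.insertingAt;
    /* LWLockConflictsWithVar(): plain atomic read, no wait-list lock */
    uint64  value = pg_atomic_read_u64(valptr);
    /* LWLockUpdateVar(): full-barrier write before waking up waiters */
    pg_atomic_exchange_u64(valptr, new_value);
    /* LWLockReleaseClearVar(): full-barrier reset of the variable */
    pg_atomic_exchange_u64(valptr, 0);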
This reduces contention on the LWLock wait list lock and improves
performance of highly-concurrent write workloads. Here are some numbers
using pg_logical_emit_message() (HEAD at d6677b93) with various
arbitrary record lengths and client counts up to 4k on a rather large
machine (64 vCPUs, 512GB of RAM, 16 cores per socket, 2 sockets), in
terms of TPS numbers coming from pgbench:
message_size_b | 16 | 64 | 256 | 1024
--------------------+--------+--------+--------+-------
patch_4_clients | 83830 | 82929 | 80478 | 73131
patch_16_clients | 267655 | 264973 | 250566 | 213985
patch_64_clients | 380423 | 378318 | 356907 | 294248
patch_256_clients | 360915 | 354436 | 326209 | 263664
patch_512_clients | 332654 | 321199 | 287521 | 240128
patch_1024_clients | 288263 | 276614 | 258220 | 217063
patch_2048_clients | 252280 | 243558 | 230062 | 192429
patch_4096_clients | 212566 | 213654 | 205951 | 166955
head_4_clients | 83686 | 83766 | 81233 | 73749
head_16_clients | 266503 | 265546 | 249261 | 213645
head_64_clients | 366122 | 363462 | 341078 | 261707
head_256_clients | 132600 | 132573 | 134392 | 165799
head_512_clients | 118937 | 114332 | 116860 | 150672
head_1024_clients | 133546 | 115256 | 125236 | 151390
head_2048_clients | 137877 | 117802 | 120909 | 138165
head_4096_clients | 113440 | 115611 | 120635 | 114361
Bharath has been measuring similar improvements, where the limit of the
WAL insertion lock begins to be felt when more than 256 concurrent
clients are involved in this specific workload.
An extra patch has been discussed to introduce a fast-exit path in
LWLockUpdateVar() when there are no waiters, but this does not influence
the write-heavy workload cases discussed here, as there are always
waiters. This will be considered separately.
Author: Bharath Rupireddy
Reviewed-by: Nathan Bossart, Andres Freund, Michael Paquier
Discussion: https://postgr.es/m/CALj2ACVF+6jLvqKe6xhDzCCkr=rfd6upaGc3477Pji1Ke9G7Bg@mail.gmail.com
We include the message type while displaying an error context in the
apply worker. Currently, while retrieving the message type string, we
throw an error if the message type is unknown, which hides the original
error. Instead, simply return a string indicating that the message type
is unknown.
Reported-by: Ashutosh Bapat
Author: Euler Taveira, Amit Kapila
Reviewed-by: Ashutosh Bapat
Backpatch-through: 15
Discussion: https://postgr.es/m/CAExHW5suAEDW-mBZt_qu4RVxWZ1vL54-L+ci2zreYWebpzxYsA@mail.gmail.com
Due to the bug, LimitAdditionalPins() could return 0, violating
LimitAdditionalPins()'s API ("One additional pin is always allowed").
This could be hit when setting shared_buffers very low and using a fair
amount of concurrency.
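A rough sketch of the invariant being restored (hypothetical variable
name, not the committed code):
    /* however the proportional share of shared_buffers is computed, the
     * resulting limit must never be zero, per the API contract */
    if (limit == 0)
        limit = 1;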
This bug was introduced in 31966b151e.
Author: "Anton A. Melnikov" <aamelnikov@inbox.ru>
Reported-by: "Anton A. Melnikov" <aamelnikov@inbox.ru>
Reported-by: Victoria Shepard
Discussion: https://postgr.es/m/ae46f2fb-5586-3de0-b54b-1bb0f6410ebd@inbox.ru
Backpatch: 16-
Some of the test_decoding test output was extremely wide, because it
deals with massive toasted values, and the aligned mode causes psql to
produce 200kB of whitespace and dashes. Change to unaligned mode
temporarily to avoid that behavior.
Backpatch to 14, where it applies cleanly.
Discussion: https://postgr.es/m/20230405103953.sxleixp3uz5lazst@alvherre.pgsql
Because PostgreSQL::Version is very nuanced about development version
numbers, the comparison to 16beta2 makes it think that that release is
older than 16, so it applies a database tweak that doesn't work there
(the comparison is only supposed to match when run on version 15).
As suggested by Andrew Dunstan, fix by having AdjustUpgrade.pm public
methods create a separate PostgreSQL::Version object to use for these
comparisons, that only carries the major version number.
While at it, have the same methods ensure that the objects given are of
the expected type.
Backpatch to 16. This module goes all the way back to 9.2, but there's
probably no need for this fix except where betas still live.
Co-authored-by: Andrew Dunstan <andrew@dunslane.net>
Discussion: https://postgr.es/m/20230719110504.zbu74o54bqqlsufb@alvherre.pgsql
This commit switches the client-side data generation from INSERT queries
to COPY for the two tables pgbench_branches and pgbench_tellers.
pgbench_accounts was already using COPY.
COPY is a better interface for bulk loading and for high-latency
connections (this point can be countered with the option for server-side
data generation, but client-side remains the default), and measurements
have shown that using it for these two other tables can lead to
improvements during initialization. I did not notice slowdowns at large
scale factors on a local setup either, as most of the work happens for
the accounts table.
Previously, COPY was only used for the pgbench_accounts table because
the amount of data was much larger than for the two other tables. The
code is refactored so that all three tables use the same code path to
execute the COPY queries, with a callback to build data rows.
Author: Tristan Partin
Discussion: https://postgr.es/m/CSTU5P82ONZ1.19XFUGHMXHBRY@c3po
The tables created by pgbench rely on a few assumptions for TPC-B, where
the "filler" attribute is used to comply with this benchmark's rules as
well as pgbench's historical behavior. The data generated for each table
uses this filler in a different way:
- pgbench_accounts uses it as a blank-padded empty string.
- pgbench_tellers and pgbench_branches use it as a NULL value.
There were no checks on the consistency of the data initialized, and
this has shown up while discussing a patch that changes the logic in
charge of the client-side data generation (pgbench documents all of that
already in its comments). This commit adds some checks on the data
generated for both the server-side and client-side logic.
Reviewed-by: Tristan Partin
Discussion: https://postgr.es/m/ZLik4oKnqRmVCM3t@paquier.xyz
After 3c90dcd03, try_partitionwise_join's child_joinrelids
variable is read only in an Assert, provoking a compiler
warning in non-assert builds. Rearrange code to avoid the
warning and eliminate unnecessary work in the non-assert case.
Per CI testing (via Jeff Davis and Bharath Rupireddy)
Discussion: https://postgr.es/m/ef0de9713e605451f1b60b30648c5ee900b2394c.camel@j-davis.com
Applying add_outer_joins_to_relids() to a child join doesn't actually
work, even if we've built a SpecialJoinInfo specialized to the child,
because that function will also compare the join's relids to elements
of the main join_info_list, which only deal in regular relids not
child relids. This mistake escaped detection by the existing
partitionwise join tests because they didn't test any cases where
add_outer_joins_to_relids() needs to add additional OJ relids (that
is, any cases where join reordering per identity 3 is possible).
Instead, let's apply adjust_child_relids() to the relids of the parent
join. This requires minor code reordering to collect the relevant
AppendRelInfo structures first, but that's work we'd do shortly anyway.
Report and fix by Richard Guo; cosmetic changes by me
Discussion: https://postgr.es/m/CAMbWs49NCNbyubZWgci3o=_OTY=snCfAPtMnM-32f3mm-K-Ckw@mail.gmail.com