The comment
/* On some platforms, readline is declared as readline(char *) */
is obsolete. The casting away of const can be removed.
The const in the readline() prototype was added in GNU readline 4.2,
released in 2001. BSD libedit has also had const in the prototype
since at least 2001.
(The commit that introduced this comment (187e865174) talked about
FreeBSD 4.8, which didn't have readline compatibility in libedit yet,
so it must have been talking about GNU readline in the base system.
This checks out, but FreeBSD 5 already had an updated GNU readline
with const.)
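To illustrate the kind of call site this cleans up (a minimal sketch,
not the exact code being changed):

    #include <stdio.h>
    #include <stdlib.h>
    #include <readline/readline.h>

    int
    main(void)
    {
        const char *prompt = "> ";

        /* Before: cast away const to humor pre-4.2 declarations */
        char *line_old = readline((char *) prompt);

        /* After: fine as-is, the parameter has been const since 2001 */
        char *line_new = readline(prompt);

        free(line_old);
        free(line_new);
        return 0;
    }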
Reviewed-by: Aleksander Alekseev <aleksander@timescale.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/862fc1d4-9a0c-d2b6-5451-ee3dc750bcab%40eisentraut.org
A couple of new options are added to this module to provide more control
over the way bgworkers are started:
- A new GUC called worker_spi.role to control which role to use by
default when starting a worker.
- worker_spi_launch() gains three arguments: a role OID, a database OID
and flags (currently only BGWORKER_BYPASS_ALLOWCONN). By default, the
role OID and the database OID are InvalidOid, in which case the worker
would use the related GUCs.
Workers loaded by shared_preload_libraries use the default values
provided by the GUCs, with flags at 0. The options are given to the
main bgworker routine through bgw_extra. A test case is tweaked to
start two dynamic workers with databases and roles defined by the caller
of worker_spi_launch().
These additions will have the advantage of expanding the tests for
bgworkers, for at least two cases:
- BGWORKER_BYPASS_ALLOWCONN has no coverage in the core tree.
- A new bgworker flag is under discussion, and this eases the
integration of new tests.
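As a sketch of the bgw_extra plumbing (the struct and function names
here are illustrative, not necessarily what the module uses):

    #include "postgres.h"
    #include "postmaster/bgworker.h"

    /* Hypothetical carrier for the launch options; real layout may differ. */
    typedef struct worker_spi_extra
    {
        Oid         roleoid;    /* InvalidOid => fall back to the GUC */
        Oid         dboid;      /* InvalidOid => fall back to the GUC */
        uint32      flags;      /* e.g. BGWORKER_BYPASS_ALLOWCONN */
    } worker_spi_extra;

    static void
    fill_worker_extra(BackgroundWorker *worker, Oid roleoid, Oid dboid,
                      uint32 flags)
    {
        worker_spi_extra extra;

        StaticAssertStmt(sizeof(extra) <= BGW_EXTRALEN,
                         "worker_spi_extra must fit in bgw_extra");
        extra.roleoid = roleoid;
        extra.dboid = dboid;
        extra.flags = flags;
        memcpy(worker->bgw_extra, &extra, sizeof(extra));
    }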
Reviewed-by: Bertrand Drouvot
Discussion: https://postgr.es/m/bcc36259-7850-4882-97ef-d6b905d2fc51@gmail.com
Two custom wait events are added here:
- "DblinkConnect", when waiting to establish a connection to a remote
server.
- "DblinkGetConnect", when waiting to establish a connection to a remote
server but it could not be found in the list of already-opened ones.
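The extension-side pattern is roughly this (a sketch assuming the
custom wait event API; dblink's actual call sites may differ):

    #include "postgres.h"
    #include "utils/wait_event.h"

    /* Allocated once on first use; the name shows up in pg_stat_activity. */
    static uint32 dblink_we_connect = 0;

    static void
    wait_example(void)
    {
        if (dblink_we_connect == 0)
            dblink_we_connect = WaitEventExtensionNew("DblinkConnect");

        pgstat_report_wait_start(dblink_we_connect);
        /* ... blocking connection establishment would happen here ... */
        pgstat_report_wait_end();
    }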
Author: Masahiro Ikeda
Discussion: https://postgr.es/m/197bce267fa691a0ac62c86c4ab904c4@oss.nttdata.com
Three custom wait events are added here:
- "PostgresFdwCleanupResult", waiting while cleaning up PQgetResult() on
transaction abort.
- "PostgresFdwConnect", waiting to establish a connection to a remote
server.
- "PostgresFdwGetResult", waiting to receive a result from a remote
server.
Author: Masahiro Ikeda
Discussion: https://postgr.es/m/197bce267fa691a0ac62c86c4ab904c4@oss.nttdata.com
Two custom wait events are added here:
- "TestShmMqBgWorkerStartup", when setting up a set of bgworkers in
wait_for_workers_to_become_ready().
- "TestShmMqMessageQueue", when waiting for a queued message in
test_shm_mq_pipelined().
Author: Masahiro Ikeda
Discussion: https://postgr.es/m/197bce267fa691a0ac62c86c4ab904c4@oss.nttdata.com
This naming is more consistent with all the other user-facing wait event
strings. Other in-core modules will use the same naming convention, so
let's be consistent here as well.
Extracted from a larger patch by the same author.
Author: Masahiro Ikeda
Discussion: https://postgr.es/m/197bce267fa691a0ac62c86c4ab904c4@oss.nttdata.com
Allow a line break in example output, as we have done elsewhere.
Overlength output was added in commit 1e68e43d3.
While here, adjust some shaky grammar in an adjacent note
(from a different commit, c9af05465).
Per buildfarm.
It is unnecessary to include this field in IndexInfo. It is only used
by DDL code, not during execution. It is really only used to pass
local information around between functions in index.c and indexcmds.c,
for which it is clearer to use local variables, like in similar cases.
Discussion: https://www.postgresql.org/message-id/flat/f84640e3-00d3-5abd-3f41-e6a19d33c40b@eisentraut.org
In hopes of putting these where any would-be implementer is sure to
find them, make a placeholder grammar production for ALTER DROP VALUE
and put them there. This is really just a docs patch, though.
Vik Fearing, with a bit more wordsmithing by me
Discussion: https://postgr.es/m/9fffd149-da0f-0c9c-6745-731fb688642a@postgresfriends.org
This further reduces the length and complexity of sendFile(),
hopefully making it easier to understand and modify. In addition
to moving some logic into a new function, I took this opportunity
to make a few slight adjustments to sendFile() itself, including
renaming the 'len' variable to 'bytes_done', since we use it to represent
the number of bytes we've already handled so far, not the total
length of the file.
Patch by me, reviewed by David Steele.
Discussion: http://postgr.es/m/CA+TgmoYt5jXH4U6cu1dm9Oe2FTn1aae6hBNhZzJJjyjbE_zYig@mail.gmail.com
If checksum verification fails for a particular page, we reread the
page and try one more time. The code that does this is somewhat complex
and difficult to follow. Move some of the logic into a new function
and rearrange the code a bit to try to make it clearer. This way,
we don't need the block_retry Boolean, a couple of other variables
move from sendFile() into the new function, and some code is now less
deeply indented.
Patch by me, reviewed by David Steele.
Discussion: http://postgr.es/m/CA+TgmoYt5jXH4U6cu1dm9Oe2FTn1aae6hBNhZzJJjyjbE_zYig@mail.gmail.com
The code in charge of copying the contents of PgBackendStatus to local
memory could fail on memory allocation because of an overflow on the
amount of memory to use. The overflow can happen when combining a high
value of track_activity_query_size (max at 1MB) with a large
max_connections, where their product exceeds INT32_MAX, as both
parameters are treated as signed integers. This could for example trigger
with the following functions, all calling pgstat_read_current_status():
- pg_stat_get_backend_subxact()
- pg_stat_get_backend_idset()
- pg_stat_get_progress_info()
- pg_stat_get_activity()
- pg_stat_get_db_numbackends()
The change to use MemoryContextAllocHuge() has been introduced in
8d0ddccec6, so backpatch down to 12.
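To put numbers on it: 1MB is 1048576 bytes, and 1048576 * 2048 is
already 2^31, so the int math overflows once max_connections goes past
2047 at the maximum query size. The shape of the fix (simplified, with
illustrative variable and context names):

    /* Before: int * int, evaluated in 32 bits, can wrap negative */
    char *localbuf = (char *)
        MemoryContextAlloc(caller_context,
                           NumBackendStatSlots * pgstat_track_activity_query_size);

    /* After: widen to Size first, and allow allocations beyond 1GB */
    char *localbuf2 = (char *)
        MemoryContextAllocHuge(caller_context,
                               (Size) NumBackendStatSlots *
                               (Size) pgstat_track_activity_query_size);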
Author: Jakub Wartak
Discussion: https://postgr.es/m/CAKZiRmw8QSNVw2qNK-dznsatQqz+9DkCquxP0GHbbv1jMkGHMA@mail.gmail.com
Backpatch-through: 12
Make a few newish calls to appendStringInfo() which have no special
formatting use appendStringInfoString() instead. Also, adjust usages of
appendStringInfoString() which only append a string containing a single
character to make use of appendStringInfoChar() instead.
This makes the code marginally faster, but primarily this change is so
we use the StringInfo type as it was intended to be used.
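For instance (illustrative, not a specific call site):

    StringInfoData buf;

    initStringInfo(&buf);

    /* Before: needless format parsing, and a one-character "string" */
    appendStringInfo(&buf, "Options:");
    appendStringInfoString(&buf, ",");

    /* After: says what it means, and skips the vsnprintf machinery */
    appendStringInfoString(&buf, "Options:");
    appendStringInfoChar(&buf, ',');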
Discussion: https://postgr.es/m/CAApHDvpXKQmL+r=VDNS98upqhr9yGBhv2Jw3GBFFk_wKHcB39A@mail.gmail.com
This commit changes the WAL reader routines so that a FATAL for the
backend or exit(FAILURE) for the frontend is triggered if an allocation
for a WAL record decode fails in xlogreader.c, rather than treating this
case as bogus data, which would be equivalent to the end of WAL. The
key is to avoid palloc_extended(MCXT_ALLOC_NO_OOM) in xlogreader.c,
relying instead on plain palloc() calls.
The previous behavior could make WAL replay finish earlier than it
should. For example, crash recovery finishing earlier may corrupt
clusters because not all the WAL available locally was replayed to
ensure a consistent state. Out-of-memory failures would show up
randomly depending on the memory pressure on the host, but one simple
case would be to generate a large record, then replay this record after
downsizing a host, as Ethan Mertz originally reported.
This relies on bae868caf2, as the WAL reader routines now do the
memory allocation required for a record only once its header has been
fully read and validated, making xl_tot_len trustable. Making the WAL
reader react differently on out-of-memory or bogus record data would
require ABI changes, so this is the safest choice for stable branches.
Also, it is worth noting that 3f1ce97346 has been using a plain
palloc() in this code for some time now.
Thanks to Noah Misch and Thomas Munro for the discussion.
Like the other commit, backpatch down to 12, leaving out v11, which will
be EOL'd soon. The behavior of considering a failed allocation as bogus
data comes originally from 0ffe11abd3, where the record length
retrieved from its header was not entirely trustable.
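The shape of the change in the record-decode path (simplified, with
illustrative variable names):

    /* Before: returning NULL on OOM was reported as bogus data,
     * i.e. as if the end of WAL had been reached */
    buffer = palloc_extended(total_len, MCXT_ALLOC_NO_OOM);
    if (buffer == NULL)
        return false;

    /* After: plain palloc() raises FATAL in the backend, or exits
     * with a failure in the frontend, on allocation failure */
    buffer = palloc(total_len);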
Reported-by: Ethan Mertz
Discussion: https://postgr.es/m/ZRKKdI5-RRlta3aF@paquier.xyz
Backpatch-through: 12
The list form of stat() is an inelegant API, as it relies on the
position of the file size in the list it returns. As in other places
in the tree, replace that with a -s switch instead.
Another suggestion from Dagfinn was File::stat, which we already use
for some other fields. It really comes down to a matter of taste to
choose that over -s, and the latter is more common in the tree.
Author: Bertrand Drouvot
Reviewed-by: Dagfinn Ilmari Mannsåker
Discussion: https://postgr.es/m/b2020df7-d0fc-4ea5-b2a9-7efc6d36b2ac@gmail.com
In a selective restore, ACLs for a table should be dumped if the
table is selected to be dumped. However, if the table has both
table-level and column-level ACLs, only the table-level ACL was
restored. This happened because _tocEntryRequired assumed that
an ACL could have only one dependency (the one on its table),
and punted if there was more than one. But since commit ea9125304,
column-level ACLs also depend on the table-level ACL if any, to
ensure correct ordering in parallel restores. To fix, adjust the
logic in _tocEntryRequired to ignore dependencies on ACLs.
I extended a test case in 002_pg_dump.pl so that it purports to
test for this; but in fact the test passes even without the fix.
That's because this bug only manifests during a selective restore,
while the scenarios 002_pg_dump.pl tests include only selective dumps.
Perhaps somebody would like to extend the script so that it can test
scenarios including selective restore, but I'm not touching that.
Euler Taveira and Tom Lane, per report from Kong Man.
Back-patch to all supported branches.
Discussion: https://postgr.es/m/DM4PR11MB73976902DBBA10B1D652F9498B06A@DM4PR11MB7397.namprd11.prod.outlook.com
The retry loop is needed because heap_page_prune() calls
HeapTupleSatisfiesVacuum() and then lazy_scan_prune() does the same
thing again, and they might get different answers due to concurrent
clog updates. But this patch makes heap_page_prune() return the
HeapTupleSatisfiesVacuum() results that it computed back to the
caller, which allows lazy_scan_prune() to avoid needing to recompute
those values in the first place. That's nice both because it eliminates
the need for a retry loop and also because it's cheaper.
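A sketch of the idea (the struct name and fields here are illustrative,
not necessarily the committed layout):

    /* heap_page_prune() fills this in for its caller, so that
     * lazy_scan_prune() can reuse the HeapTupleSatisfiesVacuum()
     * verdicts instead of recomputing them under a possibly-changed
     * clog state. */
    typedef struct PruneResult
    {
        int         ndeleted;                       /* tuples deleted by pruning */
        int8        htsv[MaxHeapTuplesPerPage + 1]; /* HTSV_Result per offset,
                                                     * -1 if not a tuple */
    } PruneResult;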
Melanie Plageman, reviewed by David Geier, Andres Freund, and me.
Discussion: https://postgr.es/m/CAAKRu_br124qsGJieuYA0nGjywEukhK1dKBfRdby_4yY3E9SXA%40mail.gmail.com
This adjusts the expression evaluation code for CoerceViaIO and
CoerceToDomain to handle errors softly if needed.
For CoerceViaIO, this means using InputFunctionCallSafe(), which
provides the option to handle errors softly, instead of calling the
type input function directly.
For CoerceToDomain, this simply entails replacing the ereport() in
ExecEvalConstraintCheck() by errsave().
In both cases, the ErrorSaveContext to be used when evaluating the
expression is stored by ExecInitExprRec() in the expression's
ExprEvalStep. The ErrorSaveContext is passed by
setting ExprState.escontext to point to it when calling
ExecInitExprRec() on the expression whose errors are to be handled
softly.
Note that no call site of ExecInitExprRec() has been changed in this
commit, so there's no functional change. This is intended for
implementing new SQL/JSON expression nodes in future commits that
will use it to suppress errors that may occur during type coercions.
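On the CoerceViaIO side, the change is roughly this (simplified, with
illustrative variable names):

    /* Before: the type input function simply ereport()ed on bad input */
    *op->resvalue = InputFunctionCall(&iodata->typinput, str,
                                      iodata->typioparam, -1);

    /* After: if an ErrorSaveContext was supplied, errors land there
     * instead of being thrown, and the result is NULL */
    if (!InputFunctionCallSafe(&iodata->typinput, str,
                               iodata->typioparam, -1,
                               (Node *) escontext,
                               op->resvalue))
        *op->resnull = true;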
Reviewed-by: Álvaro Herrera
Discussion: https://postgr.es/m/CA+HiwqE4XTdfb1nW=Ojoy_tQSRhYt-q_kb6i5d4xcKyrLC1Nbg@mail.gmail.com
SetResultVariables() was not getting called when "printing" a result
that failed (see around PrintQueryResult), which would cause some
variables to not be set, like ROW_COUNT, SQLSTATE or ERROR. This can be
confusing as a previous result would be retained.
This state could be reached when failing to process tuples in a few
commands, like \gset when it returns no tuples, or \crosstabview. A
test is added, based on \gset.
This is arguably a bug fix, but no backpatch is done as there is a risk
of breaking scripts that rely on the previous behavior, even if they do
so accidentally.
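The gist of the fix, simplified:

    /* On a failure to process the result (e.g. \gset with zero rows),
     * update the result-related variables before bailing out, instead
     * of leaving ROW_COUNT, SQLSTATE and ERROR at their previous
     * values. */
    if (!success)
    {
        SetResultVariables(result, false);
        ClearOrSaveResult(result);
        return false;
    }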
Reported-by: amutu
Author: Japin Li
Reviewed-by: Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/18134-87126d90cb4dd049@postgresql.org
After receiving position data for a lexeme, tsvectorrecv()
advanced its "datalen" value by (npos+1)*sizeof(WordEntry)
where the correct calculation is (npos+1)*sizeof(WordEntryPos).
This accidentally failed to render the constructed tsvector
invalid, but it did result in leaving some wasted space
approximately equal to the space consumed by the position data.
That could have several bad effects:
* Disk space is wasted if the received tsvector is stored into a
table as-is.
* A legal tsvector could get rejected with "maximum total lexeme
length exceeded" if the extra space pushes it over the MAXSTRPOS
limit.
* In edge cases, the finished tsvector could be assigned a length
larger than the allocated size of its palloc chunk, conceivably
leading to SIGSEGV when the tsvector gets copied somewhere else.
The odds of a field failure of this sort seem low, though valgrind
testing could probably have found this.
While we're here, let's express the calculation as
"sizeof(uint16) + npos * sizeof(WordEntryPos)" to avoid the type
pun implicit in the "npos + 1" formulation. It's not wrong
given that WordEntryPos had better be 2 bytes to avoid padding
problems, but it seems clearer this way.
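In code form, with WordEntry being 4 bytes and WordEntryPos 2 bytes:

    /* Before: counts 4 bytes per position entry, overshooting by
     * roughly the size of the position data itself */
    datalen += (npos + 1) * sizeof(WordEntry);

    /* After: one uint16 count, then npos 2-byte positions */
    datalen += sizeof(uint16) + npos * sizeof(WordEntryPos);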
Report and patch by Denis Erokhin. Back-patch to all supported
versions.
Discussion: https://postgr.es/m/009801d9f2d9$f29730c0$d7c59240$@datagile.ru
In recent releases, such cases fail with "cache lookup failed for
function 0" rather than complaining that the conversion function
doesn't exist as prior versions did. Seems to be a consequence of
sloppy refactoring in commit f82de5c46. Add the missing error check.
Per report from Pierre Fortin. Back-patch to v14 where the
oversight crept in.
Discussion: https://postgr.es/m/20230929163739.3bea46e5.pfortin@pfortin.com
Commit 9f8377f7a2 was a little too eager in fetching default values.
Normally this would not matter, but if the default value is not valid
for the type (e.g. a varchar that's too long) it caused an unnecessary
error.
Complaint and fix from Laurenz Albe
Backpatch to release 16.
Discussion: https://postgr.es/m/75a7b7483aeb331aa017328d606d568fc715b90d.camel@cybertec.at
-exported_symbols_list=... works on Ventura and earlier, but not on
Sonoma. The easiest way to fix it is to use -Wl,-exported_symbols_list,@0@,
which actually seems more appropriate anyway, since it's obviously a linker
argument. It is easier to use the -Wl,, syntax than to pass multiple
arguments, due to the way export_fmt is used (a single string that's
formatted), but if that turns out to be necessary, we can go for multiple
arguments as well.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/20230928222248.jw6s7yktpfsfczha@alap3.anarazel.de
Backpatch: 16-, where the meson based buildsystem was added
These options already exist, but you need to specify a column list for
them, which can be cumbersome. We already have the possibility of all
columns for FORCE QUOTE, so this is simply extending that facility to
FORCE_NULL and FORCE_NOT_NULL.
Author: Zhang Mingli
Reviewed-by: Richard Guo, Kyotaro Horiguchi, Michael Paquier
Discussion: https://postgr.es/m/CACJufxEnVqzOFtqhexF2+AwOKFrV8zHOY3y=p+gPK6eB14pn_w@mail.gmail.com
ANALYZE on a table with inheritance children analyzes all the child
tables in a loop. When stepping to the next child table, it updated the
child rel ID value in the command progress stats, but did not reset
the 'sample_blks_total' and 'sample_blks_scanned' counters.
acquire_sample_rows() updates 'sample_blks_total' as soon as the scan
starts and 'sample_blks_scanned' after processing the first block, but
until then, pg_stat_progress_analyze would display a bogus combination
of the new child table relid with old counter values from the
previously processed child table. Fix by resetting 'sample_blks_total'
and 'sample_blks_scanned' to zero at the same time that
'current_child_table_relid' is updated.
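The fix can update all three values in one atomic step, along these
lines (simplified; "childrel" stands for the next child relation):

    const int   index[] = {
        PROGRESS_ANALYZE_CURRENT_CHILD_TABLE_RELID,
        PROGRESS_ANALYZE_BLOCKS_TOTAL,
        PROGRESS_ANALYZE_BLOCKS_DONE
    };
    const int64 val[] = {
        RelationGetRelid(childrel),     /* the next child to sample */
        0,                              /* clear stale totals */
        0                               /* clear stale progress */
    };

    pgstat_progress_update_multi_param(3, index, val);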
Backpatch to v13, where pg_stat_progress_analyze view was introduced.
Reported-by: Justin Pryzby
Discussion: https://www.postgresql.org/message-id/20230122162345.GP13860%40telsasoft.com
Under some circumstances, concurrent MERGE operations could lead to
inconsistent results that varied according to the plan chosen. This was
caused by a lack of rowmarks on the source relation, which meant that
EvalPlanQual rechecking was not guaranteed to return the same source
tuples when re-running the join query.
Fix by ensuring that preprocess_rowmarks() sets up PlanRowMarks for
all non-target relations used in MERGE, in the same way that it does
for UPDATE and DELETE.
Per bug #18103. Back-patch to v15, where MERGE was introduced.
Dean Rasheed, reviewed by Richard Guo.
Discussion: https://postgr.es/m/18103-c4386baab8e355e3%40postgresql.org
Add "-gmt 1" to our test invocations of the Tcl "clock" command,
so that they do not consult the timezone environment. While it
doesn't really matter which timezone is used here, it does
matter that the command not fall over entirely. We've now
discovered that at least on FreeBSD, "clock scan" will fail if
/etc/localtime is missing. It seems worth making the test
insensitive to that.
Per Tomas Vondra's buildfarm animal dikkop. Thanks to
Thomas Munro for the diagnosis.
Discussion: https://postgr.es/m/316d304a-1dcd-cea1-3d6c-27f794727a06@enterprisedb.com
As of Xcode 15 (macOS Sonoma), the linker complains about duplicate
references to the same library. We see warnings about libpgport and
libpgcommon being duplicated in many client executables. This is a
consequence of the hack introduced in commit 6b7ef076b to list
libpgport before libpq while not removing it from $(LIBS).
(Commit 8396447cd later applied the same rule to libpgcommon.)
The concern in 6b7ef076b was to ensure that the client executable
wouldn't unintentionally depend on pgport functions from libpq.
That concern is obsolete on any platform for which we can do symbol
export control, because if we can then the pgport functions in libpq
won't be exposed anyway. Hence, we can fix this problem by just
removing libpgport and libpgcommon from $(libpq_pgport), and letting
clients depend on the occurrences in $(LIBS).
In the back branches, do that only on macOS (which we know has
symbol export control). In HEAD, let's be more aggressive and
remove the extra libraries everywhere. The only still-supported
platforms that lack export control are MinGW/Cygwin, and it
doesn't seem worth sweating over ABI stability details for those
(or if somebody does care, it'd probably be possible to perform
symbol export control for those too). As well as being simpler,
this might give some microscopic improvement in build time.
The meson build system is not changed here, as it doesn't have
this particular disease, though it does have some related issues
that we'll fix separately.
Discussion: https://postgr.es/m/467042.1695766998@sss.pgh.pa.us
Commit 7d3b7011b added a link to the statistics functions, which at the
time were anchored under the section for statistics views. aebe989477
added a separate section for statistics functions, but the link was not
updated to point to the new anchor. Fix by changing the xref.
Backpatch to all supported branches.
Author: Peter Smith <peter.b.smith@fujitsu.com>
Discussion: https://postgr.es/m/CAHut+Ptr0jKzNNtWnssLq+3jNhbyaBseqf6NPrWHk08mQFRoTg@mail.gmail.com
Backpatch-through: 11
Improve find_base_rel() and find_base_rel_ignore_join() so that they
raise an ERROR if they ever receive a negative relid value in
non-cassert builds. If either of these functions had ever received a
negative relid then they'd have attempted to access memory that does not
belong to simple_rel_array.
Because no evidence has been presented of actual cases where bugs have
caused this to happen, here we take a lightweight approach to checking
for negative values and simply cast both values to uint32 before
performing the comparison. This will cause any negative relids to be
seen as greater than simple_rel_array_size which will ERROR rather than
attempt to access a negative simple_rel_array element. Obviously, the
run-time error is better than a crash, so it makes sense to protect
against this, especially when it can be done without adding any
additional run-time overhead.
There is a slight change here if the functions are ever called with a
relid of 0. This will pass the bounds check, but that array entry
should be NULL (along with the corresponding simple_rte_array entry), so
won't pass the "if (rel)" condition and still fall through and raise an
ERROR.
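The core of the change, condensed from the two functions:

    /* Before: a negative relid reads before the array start */
    if (relid < root->simple_rel_array_size)
        rel = root->simple_rel_array[relid];

    /* After: casting both sides to uint32 makes a negative relid
     * compare as a huge value, failing the bounds check and reaching
     * the ERROR path instead of touching out-of-bounds memory */
    if ((uint32) relid < (uint32) root->simple_rel_array_size)
        rel = root->simple_rel_array[relid];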
Author: Ranier Vilela
Reviewed-by: Ashutosh Bapat, David Rowley
Discussion: https://postgr.es/m/CAEudQArQSghBu2gLojg4o_tnHj_x2HcS%3D%2BwewL3NJS8z0VnK%2Bg%40mail.gmail.com
nbtree's mark/restore processing failed to correctly handle an edge case
involving array key advancement and related search-type scan key state.
Scans with ScalarArrayOpExpr quals requiring mark/restore
processing (for a merge join) could incorrectly conclude that an
affected array/scan key must not have advanced during the time between
marking and restoring the scan's position.
As a result of all this, array key handling within btrestrpos could skip
a required call to _bt_preprocess_keys(). This confusion allowed later
primitive index scans to overlook tuples matching the true current array
keys. The scan's search-type scan keys would still have spurious values
corresponding to the final array element(s) -- not values matching the
first/now-current array element(s).
To fix, remember that "array key wraparound" has taken place during the
ongoing btrescan in a flag variable stored in the scan's state, and use
that information at the point where btrestrpos decides if another call
to _bt_preprocess_keys is required.
Oversight in commit 70bc5833, which taught nbtree to handle array keys
during mark/restore processing, but missed this subtlety. That commit
was itself a bug fix for an issue in commit 9e8da0f7, which taught
nbtree to handle ScalarArrayOpExpr quals natively.
Author: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAH2-WzkgP3DDRJxw6DgjCxo-cu-DKrvjEv_ArkP2ctBJatDCYg@mail.gmail.com
Backpatch: 11- (all supported branches).
This code was sloppy about comparison of index columns that
are expressions. It didn't reliably reject cases where one
index has an expression where the other has a plain column,
and it could index off the start of the attmap array, leading
to a Valgrind complaint (though an actual crash seems unlikely).
I'm not sure that the expression-vs-column sloppiness leads
to any visible problem in practice, because the subsequent
comparison of the two expression lists would reject cases
where the indexes have different numbers of expressions
overall. Maybe we could falsely match indexes having the
same expressions in different column positions, but it'd
require unlucky contents of the word before the attmap array.
It's not too surprising that no problem has been reported
from the field. Nonetheless, this code is clearly wrong.
Per bug #18135 from Alexander Lakhin. Back-patch to all
supported branches.
Discussion: https://postgr.es/m/18135-532f4a755e71e4d2@postgresql.org
Previously, one of the values in the struct was returned as the return
value, and another was returned via an output parameter. In
preparation for returning more stuff, consolidate both values into a
struct returned via an output parameter.
Melanie Plageman, reviewed by Andres Freund and by me.
Discussion: https://postgr.es/m/CAAKRu_br124qsGJieuYA0nGjywEukhK1dKBfRdby_4yY3E9SXA%40mail.gmail.com
For a reader unfamiliar with the postgres code it might take some
grepping to find where elevels are defined. This adds a reference
to elog.h in the text, similar to how SQLSTATE error codes are
referenced to errcodes.h on the same page.
Author: Kuwamura Masaki <kuwamura@db.is.i.nagoya-u.ac.jp>
Discussion: https://postgr.es/m/CAMyC8qqp1UDA9zothnJ9CbUYByytwpALS3LkdZ6bs1w5kZw5Xg@mail.gmail.com
Tid Range scans were added back in bb437f995. That commit forgot to add
handling for TidRangePaths in print_path().
Only people building with OPTIMIZER_DEBUG might have noticed this, which
is likely the reason it's taken 4 years for anyone to notice.
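The fix is just the missing arm in print_path()'s switch over path
types, along these lines:

    case T_TidRangePath:
        ptype = "TidRangeScan";
        break;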
Author: Andrey Lepikhov
Reported-by: Andrey Lepikhov
Discussion: https://postgr.es/m/379082d6-1b6a-4cd6-9ecf-7157d8c08635@postgrespro.ru
Backpatch-through: 14, where bb437f995 was introduced