Explain where pqsignal() came from and what problem it originally solved
(without assuming the reader is familiar with historical Unixen), why we
still need it, what it does for us now, and the key differences for
frontend code on Windows.
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://postgr.es/m/CA%2BhUKG%2BRst1h3uo%2BXRgdRVnWHBa4mmj5gFbmCzZr73s-Fh_5JA%40mail.gmail.com
When using GSSAPI encryption in non-blocking mode, libpq sometimes
failed with "GSSAPI caller failed to retransmit all data needing
to be retried". The cause is that pqPutMsgEnd rounds its transmit
request down to an even multiple of 8K, and sometimes that can lead
to not requesting a write of data that was requested to be written
(but reported as not written) earlier. That can upset pg_GSS_write's
logic for dealing with not-yet-written data, since it's possible
the data in question had already been incorporated into an encrypted
packet that we weren't able to send during the previous call.
We could fix this with a one-or-two-line hack to disable pqPutMsgEnd's
round-down behavior, but that seems like making the caller work around
a behavior that pg_GSS_write shouldn't expose in this way. Instead,
adjust pg_GSS_write to never report a partial write: it either
reports a complete write, or reflects the failure of the lower-level
pqsecure_raw_write call. The requirement still exists for the caller
to present at least as much data as on the previous call, but with
the caller-visible write start point not moving there is no temptation
for it to present less. We lose some ability to reclaim buffer space
early, but I doubt that that will make much difference in practice.
This also gets rid of a rather dubious assumption that "any
interesting failure condition (from pqsecure_raw_write) will recur
on the next try". We've not seen failure reports traceable to that,
but I've never trusted it particularly and am glad to remove it.
Make the same adjustments to the equivalent backend routine
be_gssapi_write(). It is probable that there's no bug on the backend
side, since we don't have a notion of nonblock mode there; but we
should keep the logic the same to ease future maintenance.
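In outline, the all-or-nothing reporting rule looks like the sketch
below. This is a simplified illustration, not the committed pg_GSS_write;
raw_write() and wrap_chunk() are hypothetical stand-ins for
pqsecure_raw_write and the GSSAPI wrap step, and partial sends of an
already-encrypted packet are glossed over.

    #include <stddef.h>
    #include <sys/types.h>

    extern ssize_t raw_write(const void *buf, size_t len);     /* stand-in */
    extern size_t wrap_chunk(const void *cleartext, size_t len,
                             char *packet, size_t packet_size,
                             size_t *cleartext_used);          /* stand-in */

    ssize_t
    gss_write_sketch(const void *ptr, size_t len)
    {
        size_t      done = 0;

        while (done < len)
        {
            char        packet[16384];
            size_t      used;
            size_t      plen;
            ssize_t     n;

            plen = wrap_chunk((const char *) ptr + done, len - done,
                              packet, sizeof(packet), &used);
            n = raw_write(packet, plen);
            if (n < 0)
            {
                /*
                 * The real code stashes the unsent encrypted packet and
                 * flushes it on the next call.  The key point is that we
                 * return the low-level failure here rather than "done",
                 * so the caller-visible write start point never moves.
                 */
                return n;
            }
            done += used;
        }

        return (ssize_t) len;   /* report only a complete write */
    }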
Per bug #18210 from Lars Kanis. Back-patch to all supported branches.
Discussion: https://postgr.es/m/18210-4c6d0b14627f2eb8@postgresql.org
A WaitEventSet holds file descriptors or event handles (on Windows).
If FreeWaitEventSet is not called, those fds or handles are leaked.
Use ResourceOwners to track WaitEventSets, to clean those up
automatically on error.
This was a live bug in async Append nodes, if an FDW's
ForeignAsyncRequest function failed. (In back branches, I will apply a
more localized fix for that based on PG_TRY-PG_FINALLY.)
The added test doesn't check for leaking resources, so it passed even
before this commit. But at least it covers the code path.
In passing, fix a misleading comment about what the 'nevents' argument
to WaitEventSetWait means.
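For the back branches, the localized fix mentioned above boils down to a
PG_TRY/PG_FINALLY wrapper around the step that can error out, roughly as
below; run_async_requests() is a hypothetical stand-in for the code that
invokes the FDW's ForeignAsyncRequest callback.

    PG_TRY();
    {
        run_async_requests(set);    /* may elog(ERROR) */
    }
    PG_FINALLY();
    {
        FreeWaitEventSet(set);      /* don't leak the fds/event handles */
    }
    PG_END_TRY();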
Report by Alexander Lakhin, analysis and suggestion for the fix by
Tom Lane. Fixes bug #17828.
Reviewed-by: Alexander Lakhin, Thomas Munro
Discussion: https://www.postgresql.org/message-id/472235.1678387869@sss.pgh.pa.us
The COPY command formed for initial sync used parentheses for tables
with no columns, leading to a syntax error. This patch avoids adding
parentheses for such tables.
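The gist of the fix, as a standalone sketch (the real code assembles the
command with a StringInfo in tablesync.c; the helper below and its
arguments are purely illustrative):

    #include <stdio.h>

    /*
     * Only emit a parenthesized column list when there is at least one
     * column, so that "COPY tab () TO STDOUT" is never formed.
     */
    static void
    build_copy_command(char *buf, size_t bufsize,
                       const char *table, const char *collist)
    {
        if (collist != NULL && collist[0] != '\0')
            snprintf(buf, bufsize, "COPY %s (%s) TO STDOUT", table, collist);
        else
            snprintf(buf, bufsize, "COPY %s TO STDOUT", table);
    }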
Reported-by: Justin G
Author: Vignesh C
Reviewed-by: Peter Smith, Amit Kapila
Backpatch-through: 15
Discussion: http://postgr.es/m/18203-df37fe354b626670@postgresql.org
In replorigin_session_setup(), we were needlessly looping over all
max_replication_slots entries even after finding an existing slot for
the origin. This shouldn't hurt much, except perhaps for large values
of max_replication_slots.
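The change amounts to stopping the scan once the slot is found, as in
this illustrative helper (not the committed code; the array stands in
for the shared-memory state):

    /* Return the index of the slot for this origin, or -1 if none. */
    static int
    find_origin_slot(const int *slot_origins, int nslots, int origin)
    {
        for (int i = 0; i < nslots; i++)
        {
            if (slot_origins[i] == origin)
                return i;       /* found it; no need to scan the rest */
        }
        return -1;
    }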
Author: Antonin Houska
Discussion: http://postgr.es/m/2694.1700471273@antos
As written, the query checked for an access method of type 's', which is
not an AM type supported in the core code.
Error introduced by 8586bf7ed8. As this query is not checking what it
should, backpatch all the way down.
Reviewed-by: Aleksander Alekseev
Discussion: https://postgr.es/m/ZVxJkAJrKbfHETiy@paquier.xyz
Backpatch-through: 12
The original code would miscalculate the total number of cells when the
table to print has more than ~4 billion cells, leading to an unnecessary
error. Repair by changing some computations to be 64 bits wide, and add
the necessary overflow checks.
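The essence of the repair, as a self-contained example (the cell counts
are made up; the real computation lives in psql's printing code):

    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        uint64_t    nrows = 70000;
        uint64_t    ncols = 70000;  /* 4.9 billion cells: wraps in 32 bits */

        /* explicit overflow check before the 64-bit multiplication */
        if (ncols != 0 && nrows > UINT64_MAX / ncols)
        {
            fprintf(stderr, "too many cells\n");
            return 1;
        }
        printf("total cells: %llu\n", (unsigned long long) (nrows * ncols));
        return 0;
    }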
Author: Hongxu Ma <interma@outlook.com>
Discussion: https://postgr.es/m/TYBP286MB0351B057B101C90D7C1239E6B4E2A@TYBP286MB0351.JPNP286.PROD.OUTLOOK.COM
This refactoring reduces the code in charge of creating replication
slots from two "if" blocks to a single one, making it slightly cleaner.
This change is possible since 1d04a59be3, which removed the intermediate
code between the two "if" blocks that initialized the output message
buffer.
Author: Peter Smith
Discussion: https://postgr.es/m/CAHut+PtnJzqKT41Zt8pChRzba=QgCqjtfYvcf84NMj3VFJoKfw@mail.gmail.com
This commit adds log messages (at LOG level when log_replication_commands
is set, otherwise at DEBUG1 level) when walsenders acquire and release
replication slots. These messages help track the lifetime of a
replication slot: one can tell how long a streaming standby, logical
subscriber, or replication slot consumer has been down. These messages
will be useful on production servers to debug and analyze inactive
replication slots.
Note that these messages are emitted only for walsenders, not for regular
backends. This is because walsenders are the ones that typically hold
replication slots for longer durations, unlike backends, which hold them
only while executing replication-related functions.
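The shape of the new messages is roughly as follows; this is a sketch
only, and the exact message wording, variable names, and call sites
differ in the committed patch:

    /*
     * LOG only when log_replication_commands is on, otherwise DEBUG1,
     * and only when running inside a walsender.
     */
    if (am_walsender)
        ereport(log_replication_commands ? LOG : DEBUG1,
                errmsg("acquired replication slot \"%s\"", slotname));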
Author: Bharath Rupireddy
Reviewed-by: Peter Smith, Amit Kapila, Alvaro Herrera
Discussion: http://postgr.es/m/CALj2ACX17G7F-jeLt+7KhJ6YxVeRwR8Zk0rDh4VnT546o0UpTQ@mail.gmail.com
Currently, important build targets are somewhat hard to discover. This
commit documents the important meson build targets in the sgml
documentation. But it's awkward to have to look up build targets in the
docs while hacking, so this also adds a 'help' target that prints the
same information. To avoid duplicating the information in two places,
both the docbook and the interactive output are generated from a single
source.
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/20231108232121.ww542mt6lfo6f26f@awork3.anarazel.de
This undoes the change that 969509c3f2 made to what the 'docs' target
builds. Tom was concerned with having a target to build just the html
docs, which a prior commit now provides explicitly.
A subsequent commit will overhaul the documentation for the documentation
targets.
While at it, move the "all" target in doc/src/sgml/Makefile up to just
after the default "html" target, and add a comment explaining that "all"
is *not* the default target.
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/20230209203855.njrepiupc3rmehfw@awork3.anarazel.de
Discussion: https://postgr.es/m/20231103163848.26egkh5qdgw3vmil@awork3.anarazel.de
We have toplevel 'html' and 'man' targets in the autoconf build as well.
It'd be odd to have an 'html' target but have the install target be
'install-doc-html', so rename the install targets to match.
Reviewed-by: Christoph Berg <myon@debian.org>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/20231103163848.26egkh5qdgw3vmil@awork3.anarazel.de
Recent commit f26c2368dc introduced a search_path cache, but left some
potential out-of-memory hazards. Simplify the code and make it safer
against OOM.
This change reintroduces one list_copy(), losing a small amount of the
performance gained in f26c2368dc. A future change may optimize away
the list_copy() again if it can be done in a safer way.
Discussion: https://postgr.es/m/e6fded24cb8a2c53d4ef069d9f69cc7baaafe9ef.camel@j-davis.com
As coded, the start block calculated by BufFileAppend() would overflow
once more than 16k files are used with the default block size. This issue
existed before b1e5c9fa9a, but there's no reason not to be clean about
it.
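The kind of fix involved, as a standalone illustration (131072 blocks
per segment corresponds to 1GB segment files of 8kB blocks; the numbers
are illustrative):

    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        int         nfiles = 20000;             /* more than 16k segments */
        int         blocks_per_segment = 131072;

        /* widen before multiplying so the product cannot wrap in 32 bits */
        int64_t     start_block = (int64_t) nfiles * blocks_per_segment;

        printf("start block: %lld\n", (long long) start_block);
        return 0;
    }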
Per report from Coverity, with a fix suggested by Tom Lane.
The DROP STATISTICS code failed to properly lock the table, leading to
ERROR: tuple concurrently deleted
when executed concurrently with ANALYZE.
Fixed by modifying RemoveStatisticsById() to acquire the same lock as
ANALYZE. This function is called only by DROP STATISTICS, as ANALYZE
calls RemoveStatisticsDataById() directly.
Reported by Justin Pryzby, fix by me. Backpatch through 12. The code has
been like this since it was introduced in 10, but older releases are EOL.
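Roughly, the fix has RemoveStatisticsById() take the same table lock
that ANALYZE uses before removing the catalog row. The snippet below is
a simplified sketch; the variable names are illustrative and the catalog
manipulation is elided.

    /*
     * Lock the statistics object's table the way ANALYZE does, so a
     * concurrent ANALYZE cannot delete the tuple from under us.
     */
    rel = relation_open(stats_table_oid, ShareUpdateExclusiveLock);

    /* ... remove the pg_statistic_ext row here ... */

    relation_close(rel, NoLock);    /* keep the lock until end of xact */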
Reported-by: Justin Pryzby
Reviewed-by: Tom Lane
Backpatch-through: 12
Discussion: https://postgr.es/m/ZUuk-8CfbYeq6g_u@pryzbyj2023
Commits 146604ec43 and a898b409f6 added overflow checks to
interval_mul(), but not to interval_div(), which contains almost
identical code, and so is susceptible to the same kinds of
overflows. In addition, those checks did not catch all possible
overflow conditions.
Add additional checks to the "cascade down" code in interval_mul(),
and copy all the overflow checks over to the corresponding code in
interval_div(), so that they both generate "interval out of range"
errors, rather than returning bogus results.
Given that these errors are relatively easy to hit, back-patch to all
supported branches.
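A self-contained illustration of the style of check required; the real
code operates on the interval struct's month, day, and time fields and
uses the pg_mul_s64_overflow() family, whereas this example just shows
rejecting a scaled result that no longer fits in 64 bits:

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Scale a microsecond count, reporting failure instead of wrapping. */
    static int
    scale_usecs(int64_t usecs, double factor, int64_t *result)
    {
        double      r = usecs * factor;

        if (isnan(r) || r < (double) INT64_MIN || r >= (double) INT64_MAX)
            return -1;          /* would raise "interval out of range" */
        *result = (int64_t) r;
        return 0;
    }

    int
    main(void)
    {
        int64_t     out;

        if (scale_usecs(INT64_MAX / 2, 3.0, &out) != 0)
            puts("interval out of range");
        return 0;
    }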
Per bug #18200 from Alexander Lakhin, and subsequent investigation.
Discussion: https://postgr.es/m/18200-5ea288c7b2d504b1%40postgresql.org
On the MERGE page, the description of the privileges required could be
taken to imply that the SELECT privilege is required on all columns of
the data source, whereas actually it is only required on the columns
referred to by conditions or expressions in the MERGE command. Re-word
it to make that a little clearer, and mention expressions as well as
conditions.
Also, add a glossary entry for MERGE, and nearby on the glossary page,
mention MERGE in the list of commands that cannot update a
materialized view.
Noted by Jian He. Patch by me, reviewed by Jian He.
Discussion: https://postgr.es/m/CACJufxHuSoRXKwr0MtSFLXuT2nFVWcVfEWhxg7qdP9h%2Bs3a%2BUw%40mail.gmail.com
Compute the size first, then allocate, then update the structure.
Previously, an out-of-memory failure while growing could leave the
hashtable in an inconsistent state.
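The pattern, in a simplified standalone form (a plain slot-array "grow"
rather than the actual hashtable code, but with the same ordering):

    #include <stdlib.h>
    #include <string.h>

    struct table
    {
        void      **slots;
        size_t      nslots;
    };

    /*
     * Compute the new size and allocate first; only touch the structure
     * once nothing can fail, so an OOM leaves it exactly as it was.
     */
    static int
    grow(struct table *tab)
    {
        size_t      newsize = tab->nslots ? tab->nslots * 2 : 8;
        void      **newslots = calloc(newsize, sizeof(void *));

        if (newslots == NULL)
            return -1;              /* caller sees failure, table intact */

        if (tab->nslots > 0)
            memcpy(newslots, tab->slots, tab->nslots * sizeof(void *));
        free(tab->slots);
        tab->slots = newslots;      /* update the structure last */
        tab->nslots = newsize;
        return 0;
    }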
Discussion: https://postgr.es/m/20231117201334.eyb542qr5yk4gilq@awork3.anarazel.de
Reviewed-by: Andres Freund
Reviewed-by: Gurjeet Singh
When there are no indexes on a table, we vacuum each heap block after
pruning it and then update the freespace map. Periodically, we also
vacuum the freespace map. This was done while unnecessarily holding a
lock on the heap page. Release the lock before calling
FreeSpaceMapVacuumRange() and, while we're at it, ensure the range
includes the heap block we just vacuumed.
There are no known deadlocks or other similar issues, so don't
backpatch. It's certainly not good to do all this work while holding a
lock, but this code path is not frequently reached, making it not worth
the risk of backpatching.
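In outline, the reordering looks like this; UnlockReleaseBuffer(),
RecordPageWithFreeSpace(), FreeSpaceMapVacuumRange(), and
VACUUM_FSM_EVERY_PAGES are existing routines and constants, but the
surrounding variables and loop structure are a simplified sketch, not
the committed code:

    /* Done with this page: drop the content lock before any FSM work. */
    UnlockReleaseBuffer(buf);

    RecordPageWithFreeSpace(rel, blkno, freespace);

    /* Periodically vacuum the FSM, including the block just processed. */
    if (blkno - next_fsm_block_to_vacuum >= VACUUM_FSM_EVERY_PAGES)
    {
        FreeSpaceMapVacuumRange(rel, next_fsm_block_to_vacuum, blkno + 1);
        next_fsm_block_to_vacuum = blkno + 1;
    }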
Author: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CAAKRu_YiL%3D44GvGnt1dpYouDSSoV7wzxVoXs8m3p311rp-TVQQ%40mail.gmail.com
examine_simple_variable() left this as an unimplemented case years
ago, with the result that plans for queries involving un-flattened
CTEs might be much stupider than necessary. It's not hard to extend
the existing logic for RTE_SUBQUERY cases to also be able to drill
down into CTEs, so let's do that.
There was some discussion of whether this patch breaks the idea
of a MATERIALIZED CTE being an optimization fence. We concluded
it's okay, because we already allow the outer planner level to
see the estimated width and rowcount of the CTE result, and
letting it see column statistics too seems fairly equivalent.
Basically, what we expect of the optimization fence is that the
outer query should not affect the plan chosen for the CTE query.
Once that plan is chosen, it's okay for the outer planner level
to make use of whatever information we have about it.
Jian Guo and Tom Lane, per complaint from Hans Buschmann
Discussion: https://postgr.es/m/4504e67078d648cdac3651b2960da6e7@nidsa.net
Previously --with-selinux was documented only in the sepgsql
documentation, and there was no corresponding documentation for meson.
There are further improvements that could be made, but this change seems
worthwhile even on its own.
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reported-by: Christoph Berg <myon@debian.org>
Discussion: https://postgr.es/m/20231103163848.26egkh5qdgw3vmil@awork3.anarazel.de
This adds alternative expected files for various tests.
In src/test/regress/sql/password.sql, we make a small change to the
test so that the CREATE ROLE still succeeds even if the ALTER ROLE
that attempts to set a password fails. That way, the roles are
available for the rest of the test file in either case.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://www.postgresql.org/message-id/flat/dbbd927f-ef1f-c9a1-4ec6-c759778ac852%40enterprisedb.com
A few places in array_in() and plperl would report a misleading value
(always MAXDIM+1) for the number of dimensions in the input, because
we'd error out as soon as that was clearly too large rather than
scanning the entire input. There doesn't seem to be much value in
offering the true number, at least not enough to justify the extra
complication involved in trying to get it. So just remove that
parenthetical remark. We already have other places that do it
like that, anyway.
Per suggestions from Alexander Lakhin and Heikki Linnakangas.
Discussion: https://postgr.es/m/2794005.1683042087@sss.pgh.pa.us
This adds several alternative expected files for when MD5 and 3DES are
not available. This is similar to the alternative expected files for
when the legacy provider is disabled. In fact, running the pgcrypto
tests in FIPS mode makes use of some of these existing alternative
expected files as well (e.g., for blowfish).
These new expected files currently cover the FIPS mode provided by
OpenSSL 3.x as well as the modified OpenSSL 3.x from Red Hat (e.g.,
Fedora 38), but not the modified OpenSSL 1.x from Red Hat (e.g.,
Fedora 35). (The latter will have some error message wording
differences.)
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://www.postgresql.org/message-id/flat/dbbd927f-ef1f-c9a1-4ec6-c759778ac852%40enterprisedb.com
The code previously relied on "long" as the type used to track block
numbers, which is 4 bytes on all Windows builds and on any 32-bit build.
This limited the code to handling up to 16TB of data with the default
block size of 8kB, for example during a CLUSTER. The code now relies on
the more portable int64, which should be more than enough for at least
the next 20 years.
This issue was reported back in 2017, but nothing was done about it at
the time, so here we go now.
Reported-by: Peter Geoghegan
Reviewed-by: Heikki Linnakangas
Discussion: https://postgr.es/m/CAH2-WznCscXnWmnj=STC0aSa7QG+BRedDnZsP=Jo_R9GUZvUrg@mail.gmail.com
This routine has been marked as NOT_USED since 20ad43b576 from 2000,
and a patch is planned to switch the logtape/tuplestore APIs to rely on
int64 rather than long for block numbers, which is more portable.
Keeping it is more confusing than anything at this stage, so let's get
rid of it entirely.
Thanks to Heikki Linnakangas for the poke on this one.
Discussion: https://postgr.es/m/5047be8c-7ee6-4dd5-af76-6c916c3103b4@iki.fi
In FIPS mode, these tests will fail. Having them in a separate file
makes it easier to provide an alternative output file or to selectively
disable these tests. That isn't done here; this is just some
preparation.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/2766054.1700080156@sss.pgh.pa.us
contain_mutable_functions and contain_volatile_functions give
reliable answers only after expression preprocessing (specifically
eval_const_expressions). Some places understand this, but some did
not get the memo --- which is not entirely their fault, because the
problem is documented only in places far away from those functions.
Introduce wrapper functions that allow doing the right thing easily,
and add commentary in hopes of preventing future mistakes from
copy-and-paste of code that's only conditionally safe.
Two actual bugs of this ilk are fixed here. We failed to preprocess
column GENERATED expressions before checking mutability, so that the
code could fail to detect the use of a volatile function
default-argument expression, or it could reject a polymorphic function
that is actually immutable on the datatype of interest. Likewise,
column DEFAULT expressions weren't preprocessed before determining if
it's safe to apply the attmissingval mechanism. A false negative
would just result in an unnecessary table rewrite, but a false
positive could allow the attmissingval mechanism to be used in a case
where it should not be, resulting in unexpected initial values in a
new column.
In passing, re-order the steps in ComputePartitionAttrs so that its
checks for invalid column references are done before applying
expression_planner, rather than after. The previous coding would
not complain if a partition expression contains a disallowed column
reference that gets optimized away by constant folding, which seems
to me to be a behavior we do not want.
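The wrappers boil down to the pattern below: run the expression through
expression_planner(), which applies eval_const_expressions(), before
asking about mutability. This is a sketch of the idea, not the committed
function names or signatures, and the header list is approximate.

    #include "postgres.h"
    #include "optimizer/optimizer.h"

    /*
     * Preprocess first, then test; checking the raw expression tree can
     * give the wrong answer for polymorphic or defaulted-argument cases.
     */
    static bool
    mutable_after_planning(Expr *expr)
    {
        expr = expression_planner(expr);
        return contain_mutable_functions((Node *) expr);
    }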
Per bug #18097 from Jim Keener. Back-patch to all supported versions.
Discussion: https://postgr.es/m/18097-ebb179674f22932f@postgresql.org
If the tap_tests option is disabled under Meson, the TAP tests are
currently not registered at all. But this makes it harder to see what
is going on, and why there are suddenly fewer tests than before.
Instead, run testwrap with an option that marks the test as skipped.
That way, the total list and count of tests is constant whether the
option is enabled or not.
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://www.postgresql.org/message-id/ad5ec96d-69ec-317b-a137-367ea5019b61@eisentraut.org
Currently, pg_stat_reset_shared() cannot reset the counters in the view
pg_stat_slru even though these are a type of shared stats. This patch
adds support for a new value in pg_stat_reset_shared(), called "slru",
able to do that. Note that pg_stat_reset_shared(NULL) also resets SLRU
counters.
There may be a point in removing pg_stat_reset_slru(), introduced in
28cac71bd3 (v13~), as the new option overlaps with that function, but we
would lose the ability to reset individual SLRU counters. This is left
for future consideration.
Author: Atsushi Torikoshi
Discussion: https://postgr.es/m/e3c25d72e81378e7b64f3c52e0306fc9@oss.nttdata.com
"AS" is added as a suggested keyword for CREATE TABLE for a few query
patterns, including the case where a list of columns is given in
parenthesis.
More queries can be now completed with the keywords supported for
queries in a CTAS, after:
CREATE TABLE [TEMP|TEMPORARY|UNLOGGED] <name> [ (...) ] AS
Author: Gilles Darold
Reviewed-by: Jim Jones
Discussion: https://postgr.es/m/e462b251-99a7-4abc-aedc-214688742c80@darold.net
The fallback implementation of pg_atomic_test_set_flag() that uses
atomic-exchange gives pg_atomic_exchange_u32_impl() an extra
argument. This issue has been present since the introduction of
the atomics API in commit b64d92f1a5.
Reviewed-by: Andres Freund
Discussion: https://postgr.es/m/20231114035439.GA1809032%40nathanxps13
Backpatch-through: 12
As of commit eaa5808e8e, MemoryContextResetAndDeleteChildren() is
just a backwards compatibility macro for MemoryContextReset(). Now
that some time has passed, this macro seems more likely to create
confusion.
This commit removes the macro and replaces all remaining uses with
calls to MemoryContextReset(). Any third-party code that uses this
macro will need to be adjusted to call MemoryContextReset()
instead. Since the two have behaved the same way since v9.5, such
adjustments won't produce any behavior changes on any
currently-supported version of PostgreSQL.
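For third-party code that also needs to build against older branches, a
compatibility shim matching the macro's old definition is trivial (a
suggestion, not part of this commit):

    #ifndef MemoryContextResetAndDeleteChildren
    #define MemoryContextResetAndDeleteChildren(cxt) MemoryContextReset(cxt)
    #endif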
Reviewed-by: Amul Sul, Tom Lane, Alvaro Herrera, Dagfinn Ilmari Mannsåker
Discussion: https://postgr.es/m/20231113185950.GA1668018%40nathanxps13