Previously, the archive_status directory was scanned for each file to
archive. When there are many status files, say because archive_command
has been failing for a long time, these directory scans can get very
slow. With this change, the archiver remembers several files to archive
during each directory scan, speeding things up.
To ensure timeline history files are archived as quickly as possible,
XLogArchiveNotify() forces the archiver to do a new directory scan as
soon as the .ready file for one is created.
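As a rough, self-contained sketch of the idea (this is not the actual
pgarch.c code; the batch size and helper below are made up for
illustration), the archiver can collect a batch of .ready entries in a
single scan and work through that batch before rescanning:

    #include <dirent.h>
    #include <string.h>

    #define READY_SUFFIX    ".ready"
    #define FILES_PER_SCAN  64      /* illustrative batch size */

    /*
     * Scan "dir" once and remember up to FILES_PER_SCAN names of .ready
     * status files, keeping the lexicographically smallest (i.e. oldest,
     * since WAL file names sort chronologically) when more are found.
     * Returns the number of names collected, or -1 on error.
     */
    static int
    collect_ready_files(const char *dir, char found[][256])
    {
        DIR        *d = opendir(dir);
        struct dirent *de;
        int         n = 0;

        if (d == NULL)
            return -1;

        while ((de = readdir(d)) != NULL)
        {
            size_t      len = strlen(de->d_name);

            if (len <= strlen(READY_SUFFIX) ||
                strcmp(de->d_name + len - strlen(READY_SUFFIX),
                       READY_SUFFIX) != 0)
                continue;

            if (n < FILES_PER_SCAN)
                strcpy(found[n++], de->d_name);
            else
            {
                /* Replace the largest remembered name if this one is older. */
                int         max = 0;

                for (int i = 1; i < FILES_PER_SCAN; i++)
                    if (strcmp(found[i], found[max]) > 0)
                        max = i;
                if (strcmp(de->d_name, found[max]) < 0)
                    strcpy(found[max], de->d_name);
            }
        }
        closedir(d);
        return n;
    }

The caller would archive everything in the batch, removing the
corresponding .ready files, and only rescan once the batch is empty.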
Nathan Bossart, per a long discussion involving many people. It is
not clear to me exactly who out of all those people reviewed this
particular patch.
Discussion: http://postgr.es/m/CA+TgmobhAbs2yabTuTRkJTq_kkC80-+jw=pfpypdOJ7+gAbQbw@mail.gmail.com
Discussion: http://postgr.es/m/620F3CE1-0255-4D66-9D87-0EADE866985A@amazon.com
It's a coin toss which of these is a better default assumption.
However, of the machines we have in the buildfarm, the only ones
relying on the fallback socklen_t definition are ancient HPUX,
and on that platform unsigned int is the right choice. Minor
tweak to ee3a1a5b6.
Discussion: https://postgr.es/m/1440792.1636558888@sss.pgh.pa.us
PostgreSQL 13 and newer versions are directly impacted by this problem
through the SQL function normalize(), where a call could write one byte
past its allocation when recomposing an empty input string with NFC or
NFKC. Older versions (v10~v12) are not directly affected, as the only
code path using normalization is SASLprep in SCRAM authentication,
which forbids the case of an empty string, but let's make the code more
robust there anyway so that any out-of-core callers of this function
are covered.
The solution chosen to fix this issue is simple: add a fast-exit path
if the decomposed string turns out to be empty. This can only happen
for an empty input string, since at the lowest level a codepoint
decomposes to itself if it has no entry in the decomposition table or
has a decomposition size of 0.
Some tests are added to cover this issue in v13~. Note that an empty
string has always been considered normalized (grammar "IS NF[K]{C,D}
NORMALIZED", through the SQL function is_normalized()) for all the
allowed operations (NFC, NFD, NFKC and NFKD) since this feature was
introduced in 2991ac5. This behavior is unchanged, but some tests are
added in v13~ to verify it.
While on it, I have also checked "make normalization-check" in
src/common/unicode/ (it works in 13~, and breaks in older stable
branches independently of this commit).
The release notes should just mention this commit for v13~.
Reported-by: Matthijs van der Vleuten
Discussion: https://postgr.es/m/17277-0c527a373794e802@postgresql.org
Backpatch-through: 10
clang-12 has introduced -Wcompound-token-split-by-macro, which causes a
large number of warnings when building PL/Perl because of its
interactions with upstream Perl. This commit adds one -Wno option to
CFLAGS at ./configure time, if the flag is supported by the compiler,
to silence all those warnings.
Upstream Perl has fixed this issue, but it is going to take some time
before the fix spreads across the buildfarm, and we have noticed that
it would be useful for some animals to run with an extra -Werror to
help detect incorrect placeholders (see b0cf544), dangomushi being one.
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/YYr3qYa/R3Gw+Sbg@paquier.xyz
Backpatch-through: 10
protocol.sgml documented the layout for Type messages, but completely
dropped the ball otherwise, failing to explain what they are, when
they are sent, or what they're good for. While at it, do a little
copy-editing on the description of Relation messages.
In passing, adjust the comment for apply_handle_type() to make it
clearer that we choose not to do anything when receiving a Type
message, not that we think it has no use whatsoever.
Per question from Stefen Hillman.
Discussion: https://postgr.es/m/CAPgW8pMknK5pup6=T4a_UG=Cz80Rgp=KONqJmTdHfaZb0RvnFg@mail.gmail.com
Commit 5a1007a508 tried to introduce
an assertion that the block size was at least twice the size of a
tar block, but I got the math wrong. My error was reported to me
off-list.
In XLogCtlData, rename the structure member ThisTimeLineID to
InsertTimeLineID and update the comments to make clear that it's only
expected to be set after recovery is complete.
In StartupXLOG, replace the local variables ThisTimeLineID and
PrevTimeLineID with new local variables replayTLI and newTLI. In the
old scheme, ThisTimeLineID was the replay TLI until we created a new
timeline, and after that the replay TLI was in PrevTimeLineID. Now,
replayTLI is the TLI from which we last replayed WAL throughout the
entire function, and newTLI is either that, or the new timeline created
upon promotion.
Remove some misleading comments from the comment block just above where
recoveryTargetTimeLineGoal and friends are declared. That text had
become incorrect, not only because ThisTimeLineID as a variable is now
gone, but also because the rmgr code does not care about
ThisTimeLineID, and has not since what used to be the TLI field in the
page header was repurposed to store the page checksum.
Add a comment to GetFlushRecPtr noting that it's only supposed to be
used in normal running, and an assertion to verify that this is so.
Per some ideas from Michael Paquier and some of my own. Review by
Michael Paquier also.
Discussion: http://postgr.es/m/CA+TgmoY1a2d1AnVR3tJcKmGGkhj7GGrwiNwjtKr21dxOuLBzCQ@mail.gmail.com
A couple of code paths related to logical decoding (WAL sender, slot
advancing, etc.) use XLogReadRecord(), relying on the error messages
generated by xlogreader.c on a failure. All those messages lacked any
error context, making it harder to spot where an error comes from, even
if these errors should not happen. All the other callers of
XLogReadRecord() already provide such context.
Reviewed-by: Kyotaro Horiguchi
Discussion: https://postgr.es/m/YYnTH6OyOwQcAdkw@paquier.xyz
We've seen intermittent failures in this test on slower buildfarm
machines, which I think can be explained by assuming that autovacuum
emitted some additional WAL. Disable autovacuum to stabilize it.
In passing, use stringwise not numeric comparison to compare
WAL file names. Doesn't matter at present, but they are
hex strings not decimal ...
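As an aside, a standalone C illustration of why stringwise comparison
is the right tool here (the test itself is a Perl TAP script; the names
below are shortened for the example): WAL file names are fixed-width
hexadecimal, so strcmp() orders them correctly, while a naive numeric
conversion stops at the first hex letter and gives the wrong answer.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int
    main(void)
    {
        const char *a = "00000009";     /* shortened hex "file names" */
        const char *b = "0000000A";

        /* Stringwise: correct for fixed-width hex names. */
        printf("strcmp: %s\n",
               strcmp(a, b) < 0 ? "a < b (correct)" : "a >= b (wrong)");

        /* "Numeric": atoi() parses b as 0, so the order comes out wrong. */
        printf("atoi:   %s\n",
               atoi(a) < atoi(b) ? "a < b (correct)" : "a >= b (wrong)");
        return 0;
    }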
Discussion: https://postgr.es/m/1372189.1636499287@sss.pgh.pa.us
Earlier versions of PostgreSQL featured a version of pg_basebackup
that wanted to edit tar archives but was too dumb to parse them
properly. The server made things easier for the client by failing
to add the two blocks of zero bytes that ought to end a tar file,
leaving it up to the client to do that.
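(For reference, the tar terminator is simply two 512-byte blocks of
zero bytes. A minimal, self-contained sketch of appending it, not the
actual bbstreamer code:)

    #include <stdio.h>
    #include <string.h>

    #define TAR_BLOCK_SIZE 512

    /* Append the standard tar end-of-archive marker: two zero-filled
     * 512-byte blocks. */
    static int
    terminate_tar_stream(FILE *out)
    {
        char        zeroes[2 * TAR_BLOCK_SIZE];

        memset(zeroes, 0, sizeof(zeroes));
        return fwrite(zeroes, 1, sizeof(zeroes), out) == sizeof(zeroes)
            ? 0 : -1;
    }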
But since commit 23a1c6578c, we
don't need this hack any more, because pg_basebackup is now smarter
and can parse tar files even if they are properly terminated! So
change the server to always properly terminate the tar files. Older
versions of pg_basebackup can't talk to new servers anyway, so
there's no compatibility break.
On the pg_basebackup side, we still need to add the terminating
zero bytes if we're talking to an older server, but not when the
server is v15+. Hopefully at some point we'll be able to remove
some of this compatibility cruft, but it seems best to hang on to
it for now.
In passing, add a file header comment to bbstreamer_tar.c, to make
it clearer what's going on here.
Discussion: http://postgr.es/m/CA+TgmoZbNzsWwM4BE5Jb_qHncY817DYZwGf+2-7hkMQ27ZwsMQ@mail.gmail.com
This check was used to accommodate a staggering variety of platforms,
in particular in the type of the third argument of accept(). This is no
longer of
concern on currently supported systems. We can just use socklen_t in
the code and put in a simple check that substitutes int for socklen_t
if it's missing, to cover the few stragglers.
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://www.postgresql.org/message-id/3538f4c4-1886-64f2-dcff-aaad8267fb82@enterprisedb.com
Commit 5a2832465f introduced some enums to represent all tables in schema
publications and used REL in their names. Use TABLE instead of REL in
those enums to avoid confusion with other objects like SEQUENCES that can
be part of a publication in the future.
In passing, (a) change one of the newly introduced error messages to
make it consistent between the Create and Alter commands, and (b) add a
missing alias in one of the SQL statements used to print publications
associated with the table.
Reported-by: Tomas Vondra, Peter Smith
Author: Vignesh C
Reviewed-by: Hou Zhijie, Peter Smith
Discussion: https://www.postgresql.org/message-id/CALDaNm0OANxuJ6RXqwZsM1MSY4s19nuH3734j4a72etDwvBETQ%40mail.gmail.com
Commit 23a1c6578c improved
pg_basebackup's ability to parse tar archives, but also arranged
to parse them only when we need to make some modification to the
contents of the archive. That's a problem, because the server
doesn't actually terminate tar archives. When the new parsing
logic was engaged, pg_basebackup would properly terminate the
tar file, but when it was skipped, pg_basebackup would just write
whatever it got from the server, meaning that the terminator
was missing.
Most versions of tar are willing to overlook the missing terminator, but
the AIX buildfarm animals were not. Fix by inventing a new kind of
bbstreamer that just blindly adds a terminator, and using it whenever we
don't parse the tar archive.
Discussion: http://postgr.es/m/CA+TgmoZbNzsWwM4BE5Jb_qHncY817DYZwGf+2-7hkMQ27ZwsMQ@mail.gmail.com
libpq collects up to a bufferload of data whenever it reads data from
the socket. When SSL or GSS encryption is requested during startup,
any additional data received with the server's yes-or-no reply
remained in the buffer, and would be treated as already-decrypted data
once the encryption handshake completed. Thus, a man-in-the-middle
with the ability to inject data into the TCP connection could stuff
some cleartext data into the start of a supposedly encryption-protected
database session.
This could probably be abused to inject faked responses to the
client's first few queries, although other details of libpq's behavior
make that harder than it sounds. A different line of attack is to
exfiltrate the client's password, or other sensitive data that might
be sent early in the session. That has been shown to be possible with
a server vulnerable to CVE-2021-23214.
To fix, throw a protocol-violation error if the internal buffer
is not empty after the encryption handshake.
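The shape of the fix, as a standalone sketch (libpq's real buffer
bookkeeping differs, and the names below are illustrative only): any
bytes already buffered when the handshake completes cannot have been
encrypted, so their mere presence is treated as a protocol violation.

    #include <stdbool.h>
    #include <stdio.h>

    /* Illustrative connection state: bytes between read_pos and fill_pos
     * were received from the socket but not yet consumed. */
    typedef struct
    {
        size_t      read_pos;
        size_t      fill_pos;
    } ConnBuffer;

    /* Call right after the SSL/GSS handshake succeeds; returns false if
     * cleartext data was smuggled in ahead of the encrypted stream. */
    static bool
    reject_pre_handshake_data(const ConnBuffer *buf)
    {
        if (buf->fill_pos > buf->read_pos)
        {
            fprintf(stderr,
                    "received unencrypted data after encryption response\n");
            return false;
        }
        return true;
    }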
Our thanks to Jacob Champion for reporting this problem.
Security: CVE-2021-23222
The server collects up to a bufferload of data whenever it reads data
from the client socket. When SSL or GSS encryption is requested
during startup, any additional data received with the initial
request message remained in the buffer, and would be treated as
already-decrypted data once the encryption handshake completed.
Thus, a man-in-the-middle with the ability to inject data into the
TCP connection could stuff some cleartext data into the start of
a supposedly encryption-protected database session.
This could be abused to send faked SQL commands to the server,
although that would only work if the server did not demand any
authentication data. (However, a server relying on SSL certificate
authentication might well not do so.)
To fix, throw a protocol-violation error if the internal buffer
is not empty after the encryption handshake.
Our thanks to Jacob Champion for reporting this problem.
Security: CVE-2021-23214
In v14, because we don't have a field in RestrictInfo to cache both the
left and right types' hash equality operators, we just restrict the
scope of Memoize to cases where the left and right types of a
RestrictInfo are the same.
In master we add another field to RestrictInfo and cache both hash
equality operators.
Reported-by: Jaime Casanova
Author: David Rowley
Discussion: https://postgr.es/m/20210929185544.GB24346%40ahch-to
Backpatch-through: 14
Commit 57e3c5160b added a new GiST bool opclass, but it used gbtreekey4
to store the data, which left two bytes undefined, as reported by skink,
our valgrind animal. There was a bit more confusion, because the opclass
also used gbtreekey8 in the definition.
Fix by defining a new gbtreekey2 struct, and using it in all the places.
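A plausible shape for the new struct, following the existing gbtreekeyN
pattern of storing the lower and upper bounds side by side (the actual
btree_gist definition may differ in detail):

    #include <stdbool.h>

    /* Two bytes total, one per bound, leaving nothing undefined. */
    typedef struct
    {
        bool        lower;
        bool        upper;
    } gbtreekey2;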
Discussion: https://postgr.es/m/CAE2gYzyDKJBZngssR84VGZEN=Ux=V9FV23QfPgo+7-yYnKKg4g@mail.gmail.com
Quite a few buildfarm animals are warning about this, and lapwing
is actually failing (because -Werror). It's a false positive AFAICS,
so no need to do more than zero the variable to start with.
Discussion: https://postgr.es/m/YYXJnUxgw9dZKxlX@paquier.xyz
Re-ordering the #include's is a bit problematic here because
libpq/libpq-be.h needs to include <openssl/ssl.h>. Instead,
let's #undef the unwanted macro after all the #includes.
This is definitely uglier than the other way, but it should
work despite possible future header rearrangements.
(A look at the openssl headers indicates that X509_NAME is the
only conflicting symbol that we use.)
In passing, remove a related but long-incorrect comment in
pg_backup_archiver.h.
Discussion: https://postgr.es/m/1051867.1635720347@sss.pgh.pa.us
Since 8162464a25 we do so in win32_port.h. But it likely didn't do much
before that either, because at that point windows.h was already included via
win32_port.h.
Reported-By: Tom Lane
Discussion: https://postgr.es/m/612842.1636237461@sss.pgh.pa.us
Commit db7d1a7b0 pulled out Mkvcbuild.pm's custom support for building
contrib/pgcrypto, but neglected to inform it that that module can now
be built normally. Or at least I guess it can now be built normally.
But this is definitely causing bowerbird to fail, since it's trying to
test a module it hasn't built.
The tsvector data type has always forbidden lexemes to be empty.
However, array_to_tsvector() didn't get that memo, and would
allow an empty-string array element to become an empty lexeme.
This could result in dump/restore failures later, not to mention
whatever semantic issues might be behind the original prohibition.
However, other functions that take a plain text input directly as
a lexeme value do not need a similar restriction, because they only
match the string against existing tsvector entries. In particular
it'd be a bad idea to make ts_delete() reject empty strings, since
that is the most convenient way to clean up any bad data that might
have gotten into a tsvector column via this bug.
Reflecting on that, let's also remove the prohibition against NULL
array elements in tsvector_delete_arr and tsvector_setweight_by_filter.
It seems more consistent to ignore them, as an empty-string element
would be ignored.
There's a case for back-patching this, since it's clearly a bug fix.
On balance though, it doesn't seem like something to change in a
minor release.
Jean-Christophe Arnu
Discussion: https://postgr.es/m/CAHZmTm1YVndPgUVRoag2WL0w900XcoiivDDj-gTTYBsG25c65A@mail.gmail.com
After further investigation, it seems the cause of the problem
is our recent decision to start defining WIN32_LEAN_AND_MEAN.
That causes <windows.h> to no longer include <wincrypt.h>, which
means that the OpenSSL headers are unable to prevent conflicts
with that header by #undef'ing the conflicting macros. Apparently,
some other system header that be-secure-openssl.c #includes after
the OpenSSL headers is pulling in <wincrypt.h>. It's obscure just
where that happens and why we're not seeing it on other Windows
buildfarm animals. However, it should work to move the OpenSSL
#includes to the end of the list. For the sake of future-proofing,
do likewise in fe-secure-openssl.c. In passing, remove useless
double inclusions of <openssl/ssl.h>.
Thanks to Thomas Munro for running down the relevant information.
Discussion: https://postgr.es/m/1051867.1635720347@sss.pgh.pa.us
Currently, lastOverflowedXid is never reset. It's just adjusted on new
transactions known to be overflowed. But if there are no overflowed
transactions for a long time, snapshots could be mistakenly marked as
suboverflowed due to wraparound.
This commit fixes this issue by resetting lastOverflowedXid when
needed, together with the KnownAssignedXids tracking.
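A minimal sketch of the idea (field and function names below are
illustrative, not the actual procarray.c structures): once the
known-assigned-XIDs tracking is empty, no overflowed subtransaction can
still be running, so the stale cutoff is cleared instead of being left
around until wraparound makes it look current again.

    #include <stdint.h>

    typedef struct
    {
        int         numKnownAssignedXids;
        uint32_t    lastOverflowedXid;  /* 0 means "not set" */
    } StandbyXidState;

    static void
    prune_known_assigned_xids(StandbyXidState *state)
    {
        /* ... removal of old known-assigned XIDs happens here ... */

        /* With no tracked XIDs left, no overflowed subtransaction can be
         * running either, so reset the cutoff rather than keep it forever. */
        if (state->numKnownAssignedXids == 0)
            state->lastOverflowedXid = 0;
    }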
Backpatch to all supported versions.
Reported-by: Stan Hu
Discussion: https://postgr.es/m/CAMBWrQ%3DFp5UAsU_nATY7EMY7NHczG4-DTDU%3DmCvBQZAQ6wa2xQ%40mail.gmail.com
Author: Kyotaro Horiguchi, Alexander Korotkov
Reviewed-by: Stan Hu, Simon Riggs, Nikolay Samokhvalov, Andrey Borodin, Dmitry Dolgov
Commit 23a1c6578 thought it'd be cute to refactor
pg_basebackup/Makefile with a new variable BBOBJS,
but our MSVC build system knows nothing of that.
Per buildfarm.
Adds bool opclass to btree_gist extension, to allow creating GiST
indexes on bool columns. GiST indexes on a single bool column don't seem
particularly useful, but this allows defining exclusion constraints
involving a bool column, for example.
Author: Emre Hasegeli
Reviewed-by: Andrey Borodin
Discussion: https://postgr.es/m/CAE2gYzyDKJBZngssR84VGZEN=Ux=V9FV23QfPgo+7-yYnKKg4g@mail.gmail.com
When calculating the distance between float4/float8 values, we need to
be a bit more careful about NaN values in order not to trigger an
assertion failure. We consider NaN values to be equal (distance 0.0)
and at infinite distance from all other values.
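A self-contained sketch of the intended semantics (not the actual
brin_minmax_multi code, which also has to deal with infinities and
ordering concerns):

    #include <math.h>

    /* NaNs compare equal to each other and are infinitely far from
     * everything else; otherwise use the absolute difference. */
    static double
    float8_value_distance(double a, double b)
    {
        if (isnan(a) && isnan(b))
            return 0.0;
        if (isnan(a) || isnan(b))
            return INFINITY;
        return fabs(a - b);
    }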
On builds without asserts, this issue is mostly harmless - the ranges
may be merged in less efficient order, but the index is still correct.
Per report from Andreas Seltenreich. Backpatch to 14, where this new
BRIN opclass was introduced.
Reported-by: Andreas Seltenreich
Discussion: https://postgr.es/m/87r1bw9ukm.fsf@credativ.de
Add new comments that spell out what VACUUM expects from heap pruning:
pruning must never leave behind DEAD tuples that still have tuple
storage. This has at least been the case since commit 8523492d, which
established the principle that vacuumlazy.c doesn't have to deal with
DEAD tuples that still have tuple storage directly, except perhaps by
simply retrying pruning (to handle a rare corner case involving
concurrent transaction abort).
In passing, update some references to old symbol names that were missed
by the snapshot scalability work (specifically commit dc7420c2c9).
StartupXLOG() still has ThisTimeLineID as a local variable, but the
remaining code in xlog.c now needs to get the relevant TimeLineID by
some other means. Mostly, this means that we now pass it as a function
parameter to a bunch of functions where we didn't previously.
However, a few cases require special handling:
- In functions that might be called by outside callers who
wouldn't necessarily know what timeline to specify, we get
the timeline ID from shared memory. XLogCtl->ThisTimeLineID
can be used in most cases since recovery is known to have
completed by the time those functions are called. In
xlog_redo(), we can use XLogCtl->replayEndTLI.
- XLogFileClose() needs to know the TLI of the open logfile.
Do that with a new global variable openLogTLI. While
someone could argue that this is just trading one global
variable for another, the new one has a far narrower purpose
and is referenced in just a few places.
- read_backup_label() now returns the TLI that it obtains
by parsing the backup_label file. Previously, ReadRecord()
could be called to parse the checkpoint record without
ThisTimeLineID having been initialized. Now, the timeline
is passed down, and I didn't want to pass an uninitialized
variable; this change lets us avoid that. The old coding
didn't seem to have any practical consequences that we need
to worry about, but this is cleaner.
- In BootstrapXLOG(), it's just a constant.
Patch by me, reviewed and tested by Michael Paquier, Amul Sul, and
Álvaro Herrera.
Discussion: https://postgr.es/m/CA+TgmobfAAqhfWa1kaFBBFvX+5CjM=7TE=n4r4Q1o2bjbGYBpA@mail.gmail.com
All such code deals with this global variable in one of three ways.
Sometimes the same functions use it in more than one of these ways
at the same time.
First, sometimes it's an implicit argument to one or more functions
being called in xlog.c or elsewhere, and must be set to the
appropriate value before calling those functions lest they
misbehave. In those cases, it is now passed as an explicit argument
instead.
Second, sometimes it's used to obtain the current timeline after
the end of recovery, i.e. the timeline to which WAL is being
written and flushed. Such code now calls GetWALInsertionTimeLine()
or relies on the new out parameter added to GetFlushRecPtr().
Third, sometimes it's used during recovery to store the current
replay timeline. That can change, so such code must generally
update the value before each use. It can still do that, but must
now use a local variable instead.
The net effect of these changes is to reduce considerably the amount
of code that directly accesses this global variable.
That's good, because history has shown that we don't always think
clearly about which timeline ID it's supposed to contain at any
given point in time, or indeed, whether it has been or needs to
be initialized at any given point in the code.
Patch by me, reviewed and tested by Michael Paquier, Amul Sul, and
Álvaro Herrera.
Discussion: https://postgr.es/m/CA+TgmobfAAqhfWa1kaFBBFvX+5CjM=7TE=n4r4Q1o2bjbGYBpA@mail.gmail.com
In slotfuncs.c, pg_replication_slot_advance() needs to determine
the LSN up to which the slot should be advanced, but that doesn't
require us to update ThisTimeLineID, because none of the code called
from here depends on it. If the replication slot is logical,
pg_logical_replication_slot_advance will call read_local_xlog_page,
which does use ThisTimeLineID, but also takes care of making sure
it's up to date. If the replication slot is physical, the timeline
isn't used for anything at all.
In logicalfuncs.c, pg_logical_slot_get_changes_guts() has the same
issue: the only code we're going to run that cares about timelines
is in or downstream of read_local_xlog_page, which already makes
sure that the correct value gets set. Hence, don't do it here.
Patch by me, reviewed and tested by Michael Paquier, Amul Sul, and
Álvaro Herrera.
Discussion: https://postgr.es/m/CA+TgmobfAAqhfWa1kaFBBFvX+5CjM=7TE=n4r4Q1o2bjbGYBpA@mail.gmail.com
When a role being dropped is referenced by catalog objects that are
concurrently also being dropped, a crash can result while trying to
construct the string that describes the objects. Suppress that by
ignoring objects whose descriptions are returned as NULL.
The majority of relevant code sites were already cautious about this;
we had just missed a couple.
This is an old bug, so backpatch all the way back.
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/17126-21887f04508cb5c8@postgresql.org
pg_basebackup knows how to do quite a few things with a backup that it
gets from the server, like just write out the files, or compress them
first, or even parse the tar format and inject a modified
postgresql.auto.conf file into the archive generated by the server.
Unfortunately, this makes pg_basebackup.c a very large source file, and
also somewhat difficult to enhance, because for example the knowledge
that the server is sending us a 'tar' file rather than some other sort
of archive is spread all over the place rather than centralized.
In an effort to improve this situation, this commit invents a new
'bbstreamer' abstraction. Each archive received from the server is
fed to a bbstreamer which may choose to dispose of it or pass it
along to some other bbstreamer. Chunks may also be "labelled"
according to whether they are part of the payload data of a file
in the archive or part of the archive metadata.
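As a generic, self-contained illustration of this kind of chaining (not
the actual bbstreamer API; the names here are made up), each streamer
handles a chunk and may forward it to the next streamer in the chain:

    #include <stdio.h>

    typedef struct streamer streamer;
    struct streamer
    {
        void      (*content) (streamer *self, const char *data, size_t len);
        streamer   *next;
    };

    /* A terminal streamer that just writes whatever it receives. */
    static void
    write_content(streamer *self, const char *data, size_t len)
    {
        fwrite(data, 1, len, stdout);
    }

    /* A pass-through streamer that could inspect or rewrite chunks
     * (say, parse tar headers or inject recovery settings) before
     * forwarding them downstream. */
    static void
    passthrough_content(streamer *self, const char *data, size_t len)
    {
        /* ... transform or relabel the chunk here ... */
        self->next->content(self->next, data, len);
    }

    int
    main(void)
    {
        streamer    writer = {write_content, NULL};
        streamer    filter = {passthrough_content, &writer};

        filter.content(&filter, "hello\n", 6);
        return 0;
    }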
So, for example, if we want to take a tar file, modify the
postgresql.auto.conf file it contains, and then gzip the result
and write it out, we can use a bbstreamer_tar_parser to parse the
tar file received from the server, a bbstreamer_recovery_injector
to modify the contents of postgresql.auto.conf, a
bbstreamer_tar_archiver to replace the tar headers for the file
modified in the previous step with newly-built ones that are
correct for the modified file, and a bbstreamer_gzip_writer to
gzip and write the resulting data. Only the objects with "tar"
in the name know anything about the tar archive format, and in
theory we could re-archive using some other format rather than
"tar" if somebody wanted to write the code.
These changes do add a substantial amount of code, but I think the
result is a lot more maintainable and extensible. pg_basebackup.c
itself shrinks by roughly a third, with a lot of the complexity
previously contained there moving into the newly-added files.
Patch by me. The larger patch series of which this is a part has been
reviewed and tested at various times by Andres Freund, Sumanta
Mukherjee, Dilip Kumar, Suraj Kharage, Dipesh Pandit, Tushar Ahuja,
Mark Dilger, Sergei Kornilov, and Jeevan Ladhe.
Discussion: https://postgr.es/m/CA+TgmoZGwR=ZVWFeecncubEyPdwghnvfkkdBe9BLccLSiqdf9Q@mail.gmail.com
Discussion: https://postgr.es/m/CA+TgmoZvqk7UuzxsX1xjJRmMGkqoUGYTZLDCH8SmU1xTPr1Xig@mail.gmail.com
The base backup code has accumulated a healthy number of new
features over the years, but it's becoming increasingly difficult
to maintain and further enhance that code because there's no
real separation of concerns. For example, the code that knows the
details of how we send data to the client
using the libpq protocol is scattered throughout basebackup.c,
rather than being centralized in one place.
To try to improve this situation, introduce a new 'bbsink' object
which acts as a recipient for archives generated during the base
backup process and also for the backup manifest. This commit
introduces three types of bbsink: a 'copytblspc' bbsink forwards the
backup to the client using one COPY OUT operation per tablespace and
another for the manifest, a 'progress' bbsink performs command
progress reporting, and a 'throttle' bbsink performs rate-limiting.
The 'progress' and 'throttle' bbsink types also forward the data to a
successor bbsink; at present, the last bbsink in the chain will
always be of type 'copytblspc'. There are plans to add more types
of 'bbsink' in future commits.
This abstraction is a bit leaky in the case of progress reporting,
but this still seems cleaner than what we had before.
Patch by me, reviewed and tested by Andres Freund, Sumanta Mukherjee,
Dilip Kumar, Suraj Kharage, Dipesh Pandit, Tushar Ahuja, Mark Dilger,
and Jeevan Ladhe.
Discussion: https://postgr.es/m/CA+TgmoZGwR=ZVWFeecncubEyPdwghnvfkkdBe9BLccLSiqdf9Q@mail.gmail.com
Discussion: https://postgr.es/m/CA+TgmoZvqk7UuzxsX1xjJRmMGkqoUGYTZLDCH8SmU1xTPr1Xig@mail.gmail.com