Commit 5a2832465f introduced some enums to represent all tables in schema
publications and used REL in their names. Use TABLE instead of REL in
those enums to avoid confusion with other objects like SEQUENCES that can
be part of a publication in the future.
In passing, (a) change one of the newly introduced error messages to
make it consistent for Create and Alter commands, and (b) add a missing
alias in one of the SQL statements that is used to print publications
associated with the table.
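For reference, a quick sketch of the schema-level publication syntax
these enums describe, as of this commit (publication and schema names
here are hypothetical):

    CREATE PUBLICATION pub_sales FOR ALL TABLES IN SCHEMA sales;
    ALTER PUBLICATION pub_sales ADD ALL TABLES IN SCHEMA archive;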
Reported-by: Tomas Vondra, Peter Smith
Author: Vignesh C
Reviewed-by: Hou Zhijie, Peter Smith
Discussion: https://www.postgresql.org/message-id/CALDaNm0OANxuJ6RXqwZsM1MSY4s19nuH3734j4a72etDwvBETQ%40mail.gmail.com
Commit 23a1c6578c improved
pg_basebackup's ability to parse tar archives, but also arranged
to parse them only when we need to make some modification to the
contents of the archive. That's a problem, because the server
doesn't actually terminate tar archives. When the new parsing
logic was engaged, pg_basebackup would properly terminate the
tar file, but when it was skipped, pg_basebackup would just write
whatever it got from the server, meaning that the terminator
was missing.
Most versions of tar are willing to overlook the missing terminator (two
512-byte blocks of zeros), but the AIX buildfarm animals were not. Fix
by inventing a new kind of bbstreamer that just blindly adds a
terminator, and using it whenever we don't parse the tar archive.
Discussion: http://postgr.es/m/CA+TgmoZbNzsWwM4BE5Jb_qHncY817DYZwGf+2-7hkMQ27ZwsMQ@mail.gmail.com
libpq collects up to a bufferload of data whenever it reads data from
the socket. When SSL or GSS encryption is requested during startup,
any additional data received with the server's yes-or-no reply
remained in the buffer, and would be treated as already-decrypted data
once the encryption handshake completed. Thus, a man-in-the-middle
with the ability to inject data into the TCP connection could stuff
some cleartext data into the start of a supposedly encryption-protected
database session.
This could probably be abused to inject faked responses to the
client's first few queries, although other details of libpq's behavior
make that harder than it sounds. A different line of attack is to
exfiltrate the client's password, or other sensitive data that might
be sent early in the session. That has been shown to be possible with
a server vulnerable to CVE-2021-23214.
To fix, throw a protocol-violation error if the internal buffer
is not empty after the encryption handshake.
Our thanks to Jacob Champion for reporting this problem.
Security: CVE-2021-23222
The server collects up to a bufferload of data whenever it reads data
from the client socket. When SSL or GSS encryption is requested
during startup, any additional data received with the initial
request message remained in the buffer, and would be treated as
already-decrypted data once the encryption handshake completed.
Thus, a man-in-the-middle with the ability to inject data into the
TCP connection could stuff some cleartext data into the start of
a supposedly encryption-protected database session.
This could be abused to send faked SQL commands to the server,
although that would only work if the server did not demand any
authentication data. (However, a server relying on SSL certificate
authentication might well not do so.)
To fix, throw a protocol-violation error if the internal buffer
is not empty after the encryption handshake.
Our thanks to Jacob Champion for reporting this problem.
Security: CVE-2021-23214
In v14, because we don't have a field in RestrictInfo to cache both the
left and right types' hash equality operators, we simply restrict
Memoize to cases where the left and right types of a RestrictInfo are
the same.
In master we add another field to RestrictInfo and cache both hash
equality operators.
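As a hedged illustration (table names hypothetical), a parameterized
join such as this one is only considered for Memoize in v14 when both
sides of the qual have the same type:

    -- b.x and a.x must share a type for Memoize to be considered
    EXPLAIN
    SELECT * FROM a, LATERAL (
        SELECT * FROM b WHERE b.x = a.x LIMIT 1) s;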
Reported-by: Jaime Casanova
Author: David Rowley
Discussion: https://postgr.es/m/20210929185544.GB24346%40ahch-to
Backpatch-through: 14
Commit 57e3c5160b added a new GiST bool opclass, but it used gbtreekey4
to store the data, which left two bytes undefined, as reported by skink,
our valgrind animal. There was a bit more confusion, because the opclass
also used gbtreekey8 in the definition.
Fix by defining a new gbtreekey2 struct, and using it in all those places.
Discussion: https://postgr.es/m/CAE2gYzyDKJBZngssR84VGZEN=Ux=V9FV23QfPgo+7-yYnKKg4g@mail.gmail.com
Quite a few buildfarm animals are warning about this, and lapwing
is actually failing (because -Werror). It's a false positive AFAICS,
so no need to do more than zero the variable to start with.
Discussion: https://postgr.es/m/YYXJnUxgw9dZKxlX@paquier.xyz
Re-ordering the #include's is a bit problematic here because
libpq/libpq-be.h needs to include <openssl/ssl.h>. Instead,
let's #undef the unwanted macro after all the #includes.
This is definitely uglier than the other way, but it should
work despite possible future header rearrangements.
(A look at the openssl headers indicates that X509_NAME is the
only conflicting symbol that we use.)
In passing, remove a related but long-incorrect comment in
pg_backup_archiver.h.
Discussion: https://postgr.es/m/1051867.1635720347@sss.pgh.pa.us
Since 8162464a25 we do so in win32_port.h. But it likely didn't do much
before that either, because at that point windows.h was already included via
win32_port.h.
Reported-By: Tom Lane
Discussion: https://postgr.es/m/612842.1636237461@sss.pgh.pa.us
Commit db7d1a7b0 pulled out Mkvcbuild.pm's custom support for building
contrib/pgcrypto, but neglected to inform it that that module can now
be built normally. Or at least I guess it can now be built normally.
But this is definitely causing bowerbird to fail, since it's trying to
test a module it hasn't built.
The tsvector data type has always forbidden lexemes to be empty.
However, array_to_tsvector() didn't get that memo, and would
allow an empty-string array element to become an empty lexeme.
This could result in dump/restore failures later, not to mention
whatever semantic issues might be behind the original prohibition.
However, other functions that take a plain text input directly as
a lexeme value do not need a similar restriction, because they only
match the string against existing tsvector entries. In particular
it'd be a bad idea to make ts_delete() reject empty strings, since
that is the most convenient way to clean up any bad data that might
have gotten into a tsvector column via this bug.
Reflecting on that, let's also remove the prohibition against NULL
array elements in tsvector_delete_arr and tsvector_setweight_by_filter.
It seems more consistent to ignore them, as an empty-string element
would be ignored.
There's a case for back-patching this, since it's clearly a bug fix.
On balance though, it doesn't seem like something to change in a
minor release.
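As a sketch of the resulting behavior (the docs table and tsv column
are hypothetical):

    -- now rejected with an error:
    SELECT array_to_tsvector(ARRAY['foo', '', 'bar']);
    -- still allowed, and the handiest way to clean out bad data:
    UPDATE docs SET tsv = ts_delete(tsv, '');
    -- NULL array elements are now ignored rather than raising an error:
    SELECT ts_delete('foo:1 bar:2'::tsvector, ARRAY['foo', NULL]);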
Jean-Christophe Arnu
Discussion: https://postgr.es/m/CAHZmTm1YVndPgUVRoag2WL0w900XcoiivDDj-gTTYBsG25c65A@mail.gmail.com
After further investigation, it seems the cause of the problem
is our recent decision to start defining WIN32_LEAN_AND_MEAN.
That causes <windows.h> to no longer include <wincrypt.h>, which
means that the OpenSSL headers are unable to prevent conflicts
with that header by #undef'ing the conflicting macros. Apparently,
some other system header that be-secure-openssl.c #includes after
the OpenSSL headers is pulling in <wincrypt.h>. It's obscure just
where that happens and why we're not seeing it on other Windows
buildfarm animals. However, it should work to move the OpenSSL
#includes to the end of the list. For the sake of future-proofing,
do likewise in fe-secure-openssl.c. In passing, remove useless
double inclusions of <openssl/ssl.h>.
Thanks to Thomas Munro for running down the relevant information.
Discussion: https://postgr.es/m/1051867.1635720347@sss.pgh.pa.us
Currently, lastOverflowedXid is never reset. It's just adjusted on new
transactions known to be overflowed. But if there are no overflowed
transactions for a long time, snapshots could be mistakenly marked as
suboverflowed due to wraparound.
This commit fixes the issue by resetting lastOverflowedXid, when
needed, together with KnownAssignedXids.
Backpatch to all supported versions.
Reported-by: Stan Hu
Discussion: https://postgr.es/m/CAMBWrQ%3DFp5UAsU_nATY7EMY7NHczG4-DTDU%3DmCvBQZAQ6wa2xQ%40mail.gmail.com
Author: Kyotaro Horiguchi, Alexander Korotkov
Reviewed-by: Stan Hu, Simon Riggs, Nikolay Samokhvalov, Andrey Borodin, Dmitry Dolgov
Commit 23a1c6578 thought it'd be cute to refactor
pg_basebackup/Makefile with a new variable BBOBJS,
but our MSVC build system knows nothing of that.
Per buildfarm.
Adds bool opclass to btree_gist extension, to allow creating GiST
indexes on bool columns. GiST indexes on a single bool column don't seem
particularly useful, but this allows defining exclusion constraints
involving a bool column, for example.
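A hedged sketch of the sort of constraint this enables (the table is
hypothetical): at most one active row per group, with the bool column
as part of the GiST key:

    CREATE EXTENSION btree_gist;
    CREATE TABLE jobs (
        grp    int,
        active bool,
        EXCLUDE USING gist (grp WITH =, active WITH =) WHERE (active)
    );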
Author: Emre Hasegeli
Reviewed-by: Andrey Borodin
Discussion: https://postgr.es/m/CAE2gYzyDKJBZngssR84VGZEN=Ux=V9FV23QfPgo+7-yYnKKg4g@mail.gmail.com
When calculating the distance between float4/float8 values, we need to
be a bit more careful about NaN values in order not to trigger an
assert. We consider NaN values to be equal (distance 0.0) and at
infinite distance from all other values.
On builds without asserts, this issue is mostly harmless - the ranges
may be merged in less efficient order, but the index is still correct.
Per report from Andreas Seltenreich. Backpatch to 14, where this new
BRIN opclass was introduced.
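For instance (hedged; the table is hypothetical), such values can end
up summarized by the same minmax-multi index:

    CREATE TABLE m (x float8);
    INSERT INTO m VALUES (1.5), ('NaN'), ('-Infinity');
    CREATE INDEX ON m USING brin (x float8_minmax_multi_ops);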
Reported-by: Andreas Seltenreich
Discussion: https://postgr.es/m/87r1bw9ukm.fsf@credativ.de
Add new comments that spell out what VACUUM expects from heap pruning:
pruning must never leave behind DEAD tuples that still have tuple
storage. This has at least been the case since commit 8523492d, which
established the principle that vacuumlazy.c doesn't have to deal with
DEAD tuples that still have tuple storage directly, except perhaps by
simply retrying pruning (to handle a rare corner case involving
concurrent transaction abort).
In passing, update some references to old symbol names that were missed
by the snapshot scalability work (specifically commit dc7420c2c9).
StartupXLOG() still has ThisTimeLineID as a local variable, but the
remaining code in xlog.c now needs to obtain the relevant TimeLineID by
some other means. Mostly, this means that we now pass it as a function
parameter to a bunch of functions where we didn't previously.
However, a few cases require special handling:
- In functions that might be called by outside callers who
wouldn't necessarily know what timeline to specify, we get
the timeline ID from shared memory. XLogCtl->ThisTimeLineID
can be used in most cases since recovery is known to have
completed by the time those functions are called. In
xlog_redo(), we can use XLogCtl->replayEndTLI.
- XLogFileClose() needs to know the TLI of the open logfile.
Do that with a new global variable openLogTLI. While
someone could argue that this is just trading one global
variable for another, the new one has a far narrower
purpose and is referenced in just a few places.
- read_backup_label() now returns the TLI that it obtains
by parsing the backup_label file. Previously, ReadRecord()
could be called to parse the checkpoint record without
ThisTimeLineID having been initialized. Now, the timeline
is passed down, and I didn't want to pass an uninitialized
variable; this change lets us avoid that. The old coding
didn't seem to have any practical consequences that we need
to worry about, but this is cleaner.
- In BootstrapXLOG(), it's just a constant.
Patch by me, reviewed and tested by Michael Paquier, Amul Sul, and
Álvaro Herrera.
Discussion: https://postgr.es/m/CA+TgmobfAAqhfWa1kaFBBFvX+5CjM=7TE=n4r4Q1o2bjbGYBpA@mail.gmail.com
All such code deals with this global variable in one of three ways.
Sometimes the same functions use it in more than one of these ways
at the same time.
First, sometimes it's an implicit argument to one or more functions
being called in xlog.c or elsewhere, and must be set to the
appropriate value before calling those functions lest they
misbehave. In those cases, it is now passed as an explicit argument
instead.
Second, sometimes it's used to obtain the current timeline after
the end of recovery, i.e. the timeline to which WAL is being
written and flushed. Such code now calls GetWALInsertionTimeLine()
or relies on the new out parameter added to GetFlushRecPtr().
Third, sometimes it's used during recovery to store the current
replay timeline. That can change, so such code must generally
update the value before each use. It can still do that, but must
now use a local variable instead.
The net effect of these changes is to substantially reduce the
amount of code that directly accesses this global variable.
That's good, because history has shown that we don't always think
clearly about which timeline ID it's supposed to contain at any
given point in time, or indeed, whether it has been or needs to
be initialized at any given point in the code.
Patch by me, reviewed and tested by Michael Paquier, Amul Sul, and
Álvaro Herrera.
Discussion: https://postgr.es/m/CA+TgmobfAAqhfWa1kaFBBFvX+5CjM=7TE=n4r4Q1o2bjbGYBpA@mail.gmail.com
In slotfuncs.c, pg_replication_slot_advance() needs to determine
the LSN up to which the slot should be advanced, but that doesn't
require us to update ThisTimeLineID, because none of the code called
from here depends on it. If the replication slot is logical,
pg_logical_replication_slot_advance will call read_local_xlog_page,
which does use ThisTimeLineID, but also takes care of making sure
it's up to date. If the replication slot is physical, the timeline
isn't used for anything at all.
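For context, the user-facing call whose timeline handling is at issue
looks like this (the slot name is hypothetical):

    SELECT pg_replication_slot_advance('my_slot', pg_current_wal_lsn());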
In logicalfuncs.c, pg_logical_slot_get_changes_guts() has the same
issue: the only code we're going to run that cares about timelines
is in or downstream of read_local_xlog_page, which already makes
sure that the correct value gets set. Hence, don't do it here.
Patch by me, reviewed and tested by Michael Paquier, Amul Sul, and
Álvaro Herrera.
Discussion: https://postgr.es/m/CA+TgmobfAAqhfWa1kaFBBFvX+5CjM=7TE=n4r4Q1o2bjbGYBpA@mail.gmail.com
When a role that is being dropped is referenced by catalog objects that
are concurrently also being dropped, a crash can result while trying to
construct the string that describes the objects. Suppress that by
ignoring objects whose descriptions are returned as NULL.
The majority of relevant code sites were already cautious about this;
we had just missed a couple.
This is an old bug, so backpatch all the way back.
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/17126-21887f04508cb5c8@postgresql.org
pg_basebackup knows how to do quite a few things with a backup that it
gets from the server, like just write out the files, or compress them
first, or even parse the tar format and inject a modified
postgresql.auto.conf file into the archive generated by the server.
Unfortunately, this makes pg_basebackup.c a very large source file, and
also somewhat difficult to enhance, because for example the knowledge
that the server is sending us a 'tar' file rather than some other sort
of archive is spread all over the place rather than centralized.
In an effort to improve this situation, this commit invents a new
'bbstreamer' abstraction. Each archive received from the server is
fed to a bbstreamer which may choose to dispose of it or pass it
along to some other bbstreamer. Chunks may also be "labelled"
according to whether they are part of the payload data of a file
in the archive or part of the archive metadata.
So, for example, if we want to take a tar file, modify the
postgresql.auto.conf file it contains, and then gzip the result
and write it out, we can use a bbstreamer_tar_parser to parse the
tar file received from the server, a bbstreamer_recovery_injector
to modify the contents of postgresql.auto.conf, a
bbstreamer_tar_archiver to replace the tar headers for the file
modified in the previous step with newly-built ones that are
correct for the modified file, and a bbstreamer_gzip_writer to
gzip and write the resulting data. Only the objects with "tar"
in the name know anything about the tar archive format, and in
theory we could re-archive using some other format rather than
"tar" if somebody wanted to write the code.
These changes do add a substantial amount of code, but I think the
result is a lot more maintainable and extensible. pg_basebackup.c
itself shrinks by roughly a third, with a lot of the complexity
previously contained there moving into the newly-added files.
Patch by me. The larger patch series of which this is a part has been
reviewed and tested at various times by Andres Freund, Sumanta
Mukherjee, Dilip Kumar, Suraj Kharage, Dipesh Pandit, Tushar Ahuja,
Mark Dilger, Sergei Kornilov, and Jeevan Ladhe.
Discussion: https://postgr.es/m/CA+TgmoZGwR=ZVWFeecncubEyPdwghnvfkkdBe9BLccLSiqdf9Q@mail.gmail.com
Discussion: https://postgr.es/m/CA+TgmoZvqk7UuzxsX1xjJRmMGkqoUGYTZLDCH8SmU1xTPr1Xig@mail.gmail.com
The base backup code has accumulated a healthy number of new
features over the years, but it's becoming increasingly difficult
to maintain and further enhance that code because there's no
real separation of concerns. For example, the code that
knows the details of how we send data to the client
using the libpq protocol is scattered throughout basebackup.c,
rather than being centralized in one place.
To try to improve this situation, introduce a new 'bbsink' object
which acts as a recipient for archives generated during the base
backup process and also for the backup manifest. This commit
introduces three types of bbsink: a 'copytblspc' bbsink forwards the
backup to the client using one COPY OUT operation per tablespace and
another for the manifest, a 'progress' bbsink performs command
progress reporting, and a 'throttle' bbsink performs rate-limiting.
The 'progress' and 'throttle' bbsink types also forward the data to a
successor bbsink; at present, the last bbsink in the chain will
always be of type 'copytblspc'. There are plans to add more types
of 'bbsink' in future commits.
This abstraction is a bit leaky in the case of progress reporting,
but this still seems cleaner than what we had before.
Patch by me, reviewed and tested by Andres Freund, Sumanta Mukherjee,
Dilip Kumar, Suraj Kharage, Dipesh Pandit, Tushar Ahuja, Mark Dilger,
and Jeevan Ladhe.
Discussion: https://postgr.es/m/CA+TgmoZGwR=ZVWFeecncubEyPdwghnvfkkdBe9BLccLSiqdf9Q@mail.gmail.com
Discussion: https://postgr.es/m/CA+TgmoZvqk7UuzxsX1xjJRmMGkqoUGYTZLDCH8SmU1xTPr1Xig@mail.gmail.com
Expand the checks of toasted attributes to complain if the rawsize is
overlarge. For compressed attributes, also complain if compression
appears to have expanded the attribute or if the compression method is
invalid.
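These checks run as part of amcheck's heap checking; a minimal usage
sketch (the relation name is hypothetical):

    CREATE EXTENSION amcheck;
    SELECT * FROM verify_heapam('my_table');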
Mark Dilger, reviewed by Justin Pryzby, Alexander Alekseev, Heikki
Linnakangas, Greg Stark, and me.
Discussion: http://postgr.es/m/8E42250D-586A-4A27-B317-8B062C3816A8@enterprisedb.com
pgcrypto had internal implementations of some encryption algorithms,
as an alternative to calling out to OpenSSL. These were rarely used,
since most production installations are built with OpenSSL. Moreover,
maintaining parallel code paths makes the code more complex and
difficult to maintain.
This patch removes these internal implementations. Now, pgcrypto is
only built if OpenSSL support is configured.
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://www.postgresql.org/message-id/flat/0b42f1df-8cba-6a30-77d7-acc241cc88c1%40enterprisedb.com
Completion is added for more object types, like domain constraints, text
search-ish objects or policies. Moreover, the area is reorganized,
changing the list of objects supported by COMMENT to be in the same
order as the documentation to ease future additions.
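For example, forms now known to completion include these (hedged;
object names are hypothetical):

    COMMENT ON CONSTRAINT positive_check ON DOMAIN posint IS '...';
    COMMENT ON TEXT SEARCH CONFIGURATION english IS '...';
    COMMENT ON POLICY p1 ON my_table IS '...';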
Author: Ken Kato
Reviewed-by: Fujii Masao, Shinya Kato, Suraj Khamkar, Michael Paquier
Discussion: https://postgr.es/m/6e0c2f3f657b229bea32d098d118f307@oss.nttdata.com
Add hardening to the heapam index tuple deletion path to catch TIDs in
index pages that point to a heap item that index tuples should never
point to. The corruption we're trying to catch here is particularly
tricky to detect, since it typically involves "extra" (corrupt) index
tuples, as opposed to the absence of required index tuples in the index.
For example, a heap TID from an index page that turns out to point to an
LP_UNUSED item in the heap page has a good chance of being caught by one
of the new checks. There is a decent chance that the recently fixed
parallel VACUUM bug (see commit 9bacec15) would have been caught had
that particular check been in place for Postgres 14. No backpatch of
this extra hardening for now, though.
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CAH2-Wzk-4_raTzawWGaiqNvkpwDXxv3y1AQhQyUeHfkU=tFCeA@mail.gmail.com
pg_receivewal gains a new option, --compression-method=lz4, available
when the code is compiled with --with-lz4. Similarly to gzip, this
gives the possibility to compress archived WAL segments with LZ4. This
option is not compatible with --compress.
The implementation uses LZ4 frames, and is compatible with simple lz4
commands. Like gzip, using --synchronous ensures that any data will be
flushed to disk within the current .partial segment, so that it is
possible to retrieve as much WAL data as possible even from a
non-completed segment (this requires completing the partial file with
zeros up to the WAL segment size supported by the backend after
decompression, but this is the same as gzip).
The calculation of the streaming start LSN is able to transparently find
and check LZ4-compressed segments. Contrary to gzip, where the
uncompressed size is directly stored in the object read, the LZ4 chunk
protocol does not store the uncompressed data size by default. There is
a contentSize field that can be used with LZ4 frames, but that would not
help if using an archive that includes segments compressed with the
defaults of a "lz4" command, where this is not stored. So, this commit
has taken the most extensible approach of decompressing the
already-archived segment to check its uncompressed size, through a blank
output buffer in chunks of 64kB (no actual performance difference was
noticed with 8kB, 16kB or 32kB, and the operation itself is actually
fast).
Tests have been added to verify the creation and correctness of the
generated LZ4 files. The latter is achieved by the use of command
"lz4", if found in the environment.
The tar-based WAL method in walmethods.c, used now only by
pg_basebackup, does not know yet about LZ4. Its code could be extended
for this purpose.
Author: Georgios Kokolatos
Reviewed-by: Michael Paquier, Jian Guo, Magnus Hagander, Dilip Kumar
Discussion: https://postgr.es/m/ZCm1J5vfyQ2E6dYvXz8si39HQ2gwxSZ3IpYaVgYa3lUwY88SLapx9EEnOf5uEwrddhx2twG7zYKjVeuP5MwZXCNPybtsGouDsAD1o2L_I5E=@pm.me
These assertions document (and verify) our high level assumptions about
how pruning can and cannot affect existing items from target heap pages.
For example, one of the new assertions verifies that pruning does not
set a heap-only tuple to LP_DEAD.
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CAH2-Wz=vhvBx1GjF+oueHh8YQcHoQYrMi0F0zFMHEr8yc4sCoA@mail.gmail.com
The option name was incorrect in one of the error messages, and the
short option 'I' was used in the code although we did not intend things
to be this way. While on it, fix the documentation to refer to a
"method", and not a "level".
Oversights in commit d62bcc8, that I have detected after more review of
the LZ4 patch for pg_receivewal.
pg_receivewal has included, since cada1af, the option --compress to
allow the compression of WAL segments using gzip, with a value of 0
(the default) meaning that no compression is used.
This commit introduces a new option, called --compression-method, which
accepts the values "none", the default, and "gzip", to make things more
extensible. The case of --compress=0 becomes fuzzy with this option
layer, so we have made the choice to make pg_receivewal return an error
when using "none" and a non-zero compression level, meaning that the
authorized values of --compress are now [1,9] instead of [0,9]. Not
specifying --compress with "gzip" as compression method makes
pg_receivewal use the default of zlib instead (Z_DEFAULT_COMPRESSION).
The code in charge of finding the streaming start LSN when scanning the
existing archives is refactored and made more extensible. While on it,
rename "compression" to "compression_level" in walmethods.c, to reduce
the confusion with the introduction of the compression method, even if
the tar method used by pg_basebackup does not rely on the compression
method (yet, at least), but just on the compression level (this area
could be improved more, actually).
This is in preparation for an upcoming patch that adds LZ4 support to
pg_receivewal.
Author: Georgios Kokolatos
Reviewed-by: Michael Paquier, Jian Guo, Magnus Hagander, Dilip Kumar,
Robert Haas
Discussion: https://postgr.es/m/ZCm1J5vfyQ2E6dYvXz8si39HQ2gwxSZ3IpYaVgYa3lUwY88SLapx9EEnOf5uEwrddhx2twG7zYKjVeuP5MwZXCNPybtsGouDsAD1o2L_I5E=@pm.me
If lo_export() fails to open the target file or to write to it, it leaks
the created LargeObjectDesc and its snapshot in the top-transaction
context and resource owner. That's pretty harmless, it's a small leak
after all, but it gives the user a "Snapshot reference leak" warning.
Fix by using a short-lived memory context and no resource owner for
transient LargeObjectDescs that are opened and closed within one function
call. The leak is easiest to reproduce with lo_export() on a directory
that doesn't exist, but in principle the other lo_* functions could also
fail.
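A minimal repro sketch (the OID and path are hypothetical); before this
fix, the failure left behind a "Snapshot reference leak" warning:

    SELECT lo_export(12345, '/no/such/dir/out.bin');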
Backpatch to all supported versions.
Reported-by: Andrew B
Reviewed-by: Alvaro Herrera
Discussion: https://www.postgresql.org/message-id/32bf767a-2d65-71c4-f170-122f416bab7e@iki.fi
Commit b4af70cb inverted the return value of the function
parallel_processing_is_safe(), but missed the amvacuumcleanup test.
Index AMs that don't support parallel cleanup at all were affected.
The practical consequences of this bug were not very serious. Hash
indexes are affected, but since they just return the number of blocks
during hashvacuumcleanup anyway, it can't have had much impact.
Author: Masahiko Sawada <sawada.mshk@gmail.com>
Discussion: https://postgr.es/m/CAD21AoA-Em+aeVPmBbL_s1V-ghsJQSxYL-i3JP8nTfPiD1wjKw@mail.gmail.com
Backpatch: 14-, where commit b4af70cb appears.
Buildfarm member hamerkop has been failing for the last few days
with errors that look like OpenSSL's X509-related symbols have
not been imported into be-secure-openssl.c. It's unclear why
this should be, but let's try adding an explicit #include of
<openssl/x509v3.h>, as there has long been in fe-secure-openssl.c.
Discussion: https://postgr.es/m/1051867.1635720347@sss.pgh.pa.us
Commit b4af70cb, which simplified state managed by VACUUM, performed
refactoring of parallel VACUUM in passing. Confusion about the exact
details of the tasks that the leader process is responsible for led to
code that made it possible for parallel VACUUM to miss a subset of the
table's indexes entirely. Specifically, indexes that fell under the
min_parallel_index_scan_size size cutoff were missed. These indexes are
supposed to be vacuumed by the leader (alongside any parallel unsafe
indexes), but weren't vacuumed at all. Affected indexes could easily
end up with duplicate heap TIDs, once heap TIDs were recycled for new
heap tuples. This had generic symptoms that might be seen with almost
any index corruption involving structural inconsistencies between an
index and its table.
To fix, make sure that the parallel VACUUM leader process performs any
required index vacuuming for indexes that happen to be below the size
cutoff. Also document the design of parallel VACUUM with these
below-size-cutoff indexes.
It's unclear how many users might be affected by this bug. There had to
be at least three indexes on the table to hit the bug: a smaller index,
plus at least two additional indexes that themselves exceed the size
cutoff. Cases with just one additional index would not run into
trouble, since the parallel VACUUM cost model requires two
larger-than-cutoff indexes on the table to apply any parallel
processing. Note also that autovacuum was not affected, since it never
uses parallel processing.
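A hedged repro sketch under those conditions (names hypothetical): one
below-cutoff index plus two indexes exceeding
min_parallel_index_scan_size:

    CREATE TABLE t (a int, b int, c int);
    CREATE INDEX t_small ON t (a) WHERE a = 0;  -- below the size cutoff
    CREATE INDEX t_big1 ON t (b);   -- above the cutoff once populated
    CREATE INDEX t_big2 ON t (c);
    -- populate, delete enough rows to require index vacuuming, then:
    VACUUM (PARALLEL 2) t;          -- t_small was skipped before the fix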
Test case based on tests from a larger patch to test parallel VACUUM by
Masahiko Sawada.
Many thanks to Kamigishi Rei for her invaluable help with tracking this
problem down.
Author: Peter Geoghegan <pg@bowt.ie>
Author: Masahiko Sawada <sawada.mshk@gmail.com>
Reported-By: Kamigishi Rei <iijima.yun@koumakan.jp>
Reported-By: Andrew Gierth <andrew@tao11.riddles.org.uk>
Diagnosed-By: Andres Freund <andres@anarazel.de>
Bug: #17245
Discussion: https://postgr.es/m/17245-ddf06aaf85735f36@postgresql.org
Discussion: https://postgr.es/m/20211030023740.qbnsl2xaoh2grq3d@alap3.anarazel.de
Backpatch: 14-, where the refactoring commit appears.
In walreceiver, set the publisher's relevant GUCs (datestyle,
intervalstyle, extra_float_digits) to the same values that pg_dump uses,
and for the same reason: we need the output to be read the same way
regardless of the receiver's settings. Without this, it's possible
for subscribers to misinterpret transmitted values.
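Concretely, the walreceiver now pins the publisher's session to the
same settings pg_dump forces, along these lines (a sketch of the values
involved):

    SET datestyle = ISO;
    SET intervalstyle = postgres;
    SET extra_float_digits = 3;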
Although this is clearly a bug fix, it's not without downsides:
subscribers that are storing values into some other datatype, such as
text, could get different results than before, and perhaps be unhappy
about that. Given the lack of previous complaints, it seems best
to change this only in HEAD, and to call it out as an incompatible
change in v15.
Japin Li, per report from Sadhuprasad Patro
Discussion: https://postgr.es/m/CAFF0-CF=D7pc6st-3A9f1JnOt0qmc+BcBPVzD6fLYisKyAjkGA@mail.gmail.com
This undoes a mistake in 1ec7679f1: domainval and domainnull were
meant to live across loop iterations, but they were incorrectly
moved inside the loop. The effect was only to emit useless extra
EEOP_MAKE_READONLY steps, so it's not a big deal; nonetheless,
back-patch to v13 where the mistake was introduced.
Ranier Vilela
Discussion: https://postgr.es/m/CAEudQAqXuhbkaAp-sGH6dR6Nsq7v28_0TPexHOm6FiDYqwQD-w@mail.gmail.com