Previously Catalog.pm eval'd each individual hash reference so that comments
and whitespace could be preserved when running reformat-dat-files. This is
unnecessary when building, and we can shave ~15% off the run time of
genbki.pl by simply slurping and eval'ing the whole file at once. That saves
a bit of time, especially in highly parallel builds, since most build targets
depend on this script's outputs.
Report and review by Andres Freund
Discussion: https://www.postgresql.org/message-id/CAFBsxsGW%3DWRbnxXrc8UqqR479XuxtukSFWV-hnmtgsbuNAUO6w%40mail.gmail.com
This commit raises a warning message for a combination of options
('copy_data = true' and 'origin = none') during CREATE/ALTER subscription
operations if the publication tables were also replicated from other
publishers.
During replication, we can skip data from other origins because we have that
information in the WAL, but that is not possible during the initial sync, so
we raise a warning if there is such a possibility.
Author: Vignesh C
Reviewed-by: Peter Smith, Amit Kapila, Jonathan Katz, Shi yu, Wang wei
Discussion: https://www.postgresql.org/message-id/CALDaNm0gwjY_4HFxvvty01BOT01q_fJLKQ3pWP9=9orqubhjcQ@mail.gmail.com
c6e0fe1f2 recently changed the way we store headers for allocated chunks
of memory. Prior to that commit, we stored a pointer to the owning
MemoryContext directly prior to the pointer to the allocated memory.
That's no longer true, and c6e0fe1f2 neglected to update
MemoryContextContains() so that it correctly obtains the owning context
with the new method.
A side effect of this change and c6e0fe1f2, in general, is that it's even
less safe than it was previously to pass MemoryContextContains() an
arbitrary pointer which was not allocated by one of our MemoryContexts.
Previously some comments in MemoryContextContains() seemed to indicate
that the worst that could happen by passing an arbitrary pointer would be
a false positive return value. It seems to me that this was a rather
wishful outlook as we subsequently proceeded to subtract sizeof(void *)
from the given pointer and then dereferenced that memory. So it seems
quite likely that we could have segfaulted instead of returning a false
positive. However, it's not impossible that the memory sizeof(void *)
bytes before the pointer could have been owned by the process; it's just far
less likely to be harmless now, as obtaining the owning MemoryContext is less
direct than before c6e0fe1f2 and may access memory that lies much further
away from the given pointer.
Because of this, I took the liberty of updating the comment to warn
against any future usages of the function and checked the existing core
usages to ensure that we only ever pass in a pointer to memory allocated
by a MemoryContext.
Extension authors updating their code for PG16 who are using
MemoryContextContains should check to ensure that only NULL pointers and
pointers to chunks allocated with a MemoryContext will ever be passed to
MemoryContextContains.
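As an illustrative sketch of that calling pattern (a hypothetical function,
not committed code), only NULL or a pointer returned by one of our
allocators should ever reach MemoryContextContains():

    #include "postgres.h"
    #include "utils/memutils.h"

    /* Hypothetical example of the only safe calling pattern. */
    static void
    contains_example(void)
    {
        /* OK: the pointer came from a MemoryContext allocation */
        void       *chunk = MemoryContextAlloc(CurrentMemoryContext, 64);

        Assert(MemoryContextContains(CurrentMemoryContext, chunk));
        pfree(chunk);

        /*
         * Not OK: a stack buffer (or any other non-palloc'd pointer) was
         * never allocated by a MemoryContext, so MemoryContextContains()
         * may dereference unrelated memory while locating the chunk header.
         */
    }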
Reported-by: Andres Freund
Discussion: https://postgr.es/m/20220905230949.kb3x2fkpfwtngz43@awork3.anarazel.de
Traditionally, in MEMORY_CONTEXT_CHECKING builds, we only ever wrote a
sentinel byte just beyond the requested size if there happened to be
enough space on the chunk to do so. For the Slab and Generation context
types, we only rounded the size of the chunk up to the next maxalign
boundary, so it was often unlikely that those chunks would have space for
the sentinel, given that the majority of allocation requests are for sizes
which are already maxaligned. For AllocSet, it was a little different, as
smaller allocations are rounded up to the next power-of-2 value rather than
the next maxalign boundary, so we're a bit more likely to have space for the
sentinel byte, especially once we get away from tiny allocations such as 8
or 16 bytes.
Here we make more of an effort to ensure there is room for the sentinel
byte in more cases. This makes it more likely that we'll detect when buggy
code accidentally writes beyond the end of any of its memory allocations.
Each of the 3 MemoryContext types has been changed as follows:
The Slab allocator will now always set a sentinel byte. Both of the current
usages of this MemoryContext type happen to use chunk sizes that fall on the
maxalign boundary, so they never used sentinel bytes previously.
For the Generation allocator, we now always ensure there's enough space in
the allocation for a sentinel byte.
For AllocSet, this commit makes an adjustment for allocation sizes which
are greater than allocChunkLimit. We now ensure there is always space for
a sentinel byte. We don't alter the sentinel behavior for request sizes
<= allocChunkLimit. Making way for the sentinel byte for power-of-2
request sizes would require doubling up to the next power of 2. Some
analysis done on the request sizes made during installcheck shows that a
fairly large portion of allocation requests are for power-of-2 sizes. The
amount of additional memory for the sentinel there seems prohibitive, so
we do nothing for those here.
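For illustration, the sentinel mechanism amounts to the following (a
simplified sketch, not the actual memdebug.h implementation): write a known
byte at data + requested_size when the chunk is handed out, and verify it
when the chunk is freed, reallocated or checked:

    #include <stdbool.h>
    #include <stddef.h>

    #define SENTINEL_BYTE 0x7E      /* arbitrary value for this sketch */

    /* Write the sentinel just past the caller's requested size. */
    static void
    set_sentinel_sketch(void *base, size_t requested_size)
    {
        ((unsigned char *) base)[requested_size] = SENTINEL_BYTE;
    }

    /* If the caller wrote past its request, the sentinel is clobbered. */
    static bool
    sentinel_ok_sketch(const void *base, size_t requested_size)
    {
        return ((const unsigned char *) base)[requested_size] == SENTINEL_BYTE;
    }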
Author: David Rowley
Discussion: https://postgr.es/m/3478405.1661824539@sss.pgh.pa.us
Add a new logical replication section for "Column Lists" (analogous to the
Row Filters page). This explains how the feature can be used and its
caveats.
Author: Peter Smith
Reviewed-by: Shi yu, Vignesh C, Erik Rijkers, Amit Kapila
Backpatch-through: 15, where it was introduced
Discussion: https://postgr.es/m/CAHut+PvOuc9=_4TbASc5=VUqh16UWtFO3GzcKQK_5m1hrW3vqg@mail.gmail.com
The addition of published column names forgot to filter on attisdropped,
leading to cases where you could see "........pg.dropped.1........"
or the like as a reportedly-published column.
While we're here, rewrite the new subquery to get a more efficient plan
for it.
Hou Zhijie, per report from Jaime Casanova. Back-patch to v15 where
the bug was introduced. (Sadly, this means we need a post-beta4
catversion bump before beta4 has even hit the streets. I see no
good alternative though.)
Discussion: https://postgr.es/m/Yxa1SU4nH2HfN3/i@ahch-to
Here we remove some dead code from CreateTriggerFiringOn() which was
attempting to find the relevant child partition index corresponding to the
given indexOid. As it turned out (and as revealed by
-Wshadow=compatible-local), this code was buggy: the code which was finding
the child indexes assigned them to a shadowed variable that immediately went
out of scope.
The code which thought it was looking at the List of child indexes was
always referencing an empty List.
On further investigation, this code is dead. We never call
CreateTriggerFiringOn() passing a valid indexOid in a way that the
function would actually ever execute the code in question. So, for lack
of a way to test if a fix actually works, let's just remove the dead code
instead.
In case there is ever a need to resurrect this code, an Assert() has been
added to remind future feature developers that they might need to write some
code to find the corresponding child index.
Reported-by: Justin Pryzby
Reviewed-by: Justin Pryzby
Discussion: https://postgr.es/m/20220819211824.GX26426@telsasoft.com
In a similar effort to f736e188c and 110d81728, fix up various usages of
string functions where a more appropriate function is available and better
fit for purpose.
These changes include:
1. Use cstring_to_text_with_len() instead of cstring_to_text() when
working with a StringInfoData and the length can easily be obtained.
2. Use appendStringInfoString() instead of appendStringInfo() when no
formatting is required.
3. Use pstrdup(...) instead of psprintf("%s", ...)
4. Use pstrdup(...) instead of psprintf(...) (with no formatting)
5. Use appendPQExpBufferChar() instead of appendPQExpBufferStr() when the
length of the string being appended is 1.
6. Use appendStringInfoChar() instead of appendStringInfo() when no
formatting is required and the string is 1 char long.
7. Use appendPQExpBufferStr(b, .) instead of appendPQExpBuffer(b, "%s", .)
8. Don't use pstrdup when it's fine to just point to the string constant.
I (David) did find other cases of #8 but opted to use #4 instead, as I
wasn't certain enough that applying #8 was OK (e.g. in hba.c).
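A few of these rewrites, shown as hypothetical before/after call sites (the
exact sites vary across the patch):

    #include "postgres.h"
    #include "lib/stringinfo.h"
    #include "utils/builtins.h"

    /* Hypothetical call sites illustrating a few of the rewrites above. */
    static void
    string_fixup_examples(const char *name)
    {
        StringInfoData buf;

        initStringInfo(&buf);

        /* #2: no format specifiers, so no need for the printf-style call */
        /* before: appendStringInfo(&buf, "FROM pg_class"); */
        appendStringInfoString(&buf, "FROM pg_class");

        /* #1: the StringInfo already tracks its length */
        /* before: cstring_to_text(buf.data) */
        (void) cstring_to_text_with_len(buf.data, buf.len);

        /* #3/#4: psprintf("%s", name) is just a string copy */
        (void) pstrdup(name);

        pfree(buf.data);
    }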
Author: Ranier Vilela, David Rowley
Discussion: https://postgr.es/m/CAApHDvo2j2+RJBGhNtUz6BxabWWh2Jx16wMUMWKUjv70Ver1vg@mail.gmail.com
Since these macros just cast whatever you give them to the designated
output type, and many normal uses also cast the output type further, a
number of incorrect uses go undiscovered. The fixes in this patch were
discovered by changing these macros to inline functions, which is the
subject of a future patch.
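As a sketch of why inline functions catch such mistakes (hypothetical names
here, not the committed macros), compare:

    #include "postgres.h"

    /*
     * Cast-based style (hypothetical macro, mirroring the pattern described
     * above): whatever you pass is silently cast to Datum, so handing it a
     * pointer when an int32 was intended goes unnoticed.
     */
    #define ExampleInt32GetDatum(X) ((Datum) (X))

    /*
     * Inline-function style: the argument must be (convertible to) int32,
     * so the same mistake now draws a compiler diagnostic.
     */
    static inline Datum
    example_int32_get_datum(int32 X)
    {
        return (Datum) X;
    }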
Reviewed-by: Aleksander Alekseev <aleksander@timescale.com>
Discussion: https://www.postgresql.org/message-id/flat/8528fb7e-0aa2-6b54-85fb-0c0886dbd6ed%40enterprisedb.com
Calculating similarity between large strings can be time-consuming and can
overrun configured statement timeouts. Check for interrupts in the main loop
to ensure query cancellation can be performed.
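The fix follows the usual backend pattern, sketched here with a hypothetical
loop rather than the exact similarity code:

    #include "postgres.h"
    #include "miscadmin.h"

    /* Hypothetical outer loop of a similarity computation. */
    static void
    similarity_loop_sketch(int nelems)
    {
        for (int i = 0; i < nelems; i++)
        {
            /* let statement_timeout / query cancel take effect promptly */
            CHECK_FOR_INTERRUPTS();

            /* ... per-element similarity work ... */
        }
    }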
Author: Robins Tharakan <tharakan@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAEP4nAxvmfc_XWTz73bqXRhgjONi=1HaX4_NhsopA3L6UvnN1g@mail.gmail.com
Improve documentation regarding the limitations of unique and primary key
constraints on partitioned tables. The existing documentation didn't make
it clear that the constraint columns had to be present in the partition
key as bare columns. The reader could be led to believe that it was OK to
include the constraint columns as part of a function call's parameters or as
part of an expression. Additionally, the documentation didn't mention that
we disallow unique and primary key constraints if the partition keys contain
*any* function calls or expressions, regardless of whether the constraint
columns appear as columns elsewhere in the partition key.
The confusion here was highlighted by a report on the general mailing list
by James Vanns.
Discussion: https://postgr.es/m/CAH7vdhNF0EdYZz3GLpgE3RSJLwWLhEk7A_fiKS9dPBT3Dz_3eA@mail.gmail.com
Discussion: https://postgr.es/m/CAApHDvoU-u9iTqKjteYRFfi+UNEk7dbSAcyxEQD==vZt9B1KnA@mail.gmail.com
Reviewed-by: Erik Rijkers
Backpatch-through: 11
Commit db0d67db2 tweaked sort costing, which however resulted in a couple
of plan changes in our regression tests. Most of the new plans were fine,
but the partition_aggregate ones were meant to test parallel plans, and the
new plans were serial.
Fix that by lowering parallel_setup_cost to 0, which is enough to switch
to the parallel plan again.
Commit 1349d2790 already made the plans parallel again, but do this
anyway to keep the tests in sync with 15, to make backpatching simpler.
Report and patch by David Rowley.
Author: David Rowley
Reviewed-by: Tomas Vondra
Discussion: https://postgr.es/m/CAApHDvpVFgWzXdtUQkjyOPhNrNvumRi_=ftgS79KeAZ92tnHKQ@mail.gmail.com
XLogPageRead() can retry internally after a pread() system call has
succeeded, in the case of short reads, and page validation failures
while in standby mode (see commit 0668719801). Due to an oversight in
commit 3f1ce973, these cases could leave stale data in the internal
cache of xlogreader.c without marking it invalid. The main defense
against stale cached data on failure to read a page was in the error
handling path of the calling function ReadPageInternal(), but that
wasn't quite enough for errors handled internally by XLogPageRead()'s
retry loop if we then exited with XLREAD_WOULDBLOCK.
1. ReadPageInternal() now marks the cache invalid before calling the
page_read callback, by setting state->readLen to 0 (see the sketch after
this list). It'll be set to a non-zero value only after a successful read,
and stays valid as long as the caller requests data in the cached range.
2. XLogPageRead() no longer performs internal retries while reading ahead.
While such retries should work, the general philosophy is that we should
give up prefetching if anything unusual happens, so that we can handle it
when recovery catches up; this reduces the complexity of the system. Let's
do that here too.
3. While here, a new function XLogReaderResetError() improves the
separation between xlogrecovery.c and xlogreader.c, where the former
previously clobbered the latter's internal error buffer directly.
The new function makes this more explicit, and also clears a related
flag, without which a standby would needlessly retry in the outer
function.
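A simplified sketch of point 1 (not the verbatim xlogreader.c code; the
callback arguments are abbreviated here): the cached-page length is cleared
before the page_read callback runs and set again only after a successful
read, so a failed or short read can never leave stale data looking valid:

    /* inside a hypothetical, simplified ReadPageInternal() */
    state->readLen = 0;             /* cache invalid from here on */

    readLen = state->routine.page_read(state, targetPagePtr, reqLen,
                                       targetRecPtr, state->readBuf);
    if (readLen < 0)
        return -1;                  /* cache stays invalid on failure */

    state->readLen = readLen;       /* valid again only after success */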
Thanks to Noah Misch for tracking down the conditions required for a
rare build farm failure in src/bin/pg_ctl/t/003_promote.pl, and
providing a reproducer.
Back-patch to 15.
Reported-by: Noah Misch <noah@leadboat.com>
Discussion: https://postgr.es/m/20220807003627.GA4168930%40rfd.leadboat.com
The planner has to special-case indexes on boolean columns, because
what we need for an indexscan on such a column is a qual of the shape
"boolvar = pseudoconstant". For plain bool constants, previous
simplification will have reduced this to "boolvar" or "NOT boolvar",
and we have to reverse that if we want to make an indexqual. There is
existing code to do so, but it only fires when the index's opfamily
is BOOL_BTREE_FAM_OID or BOOL_HASH_FAM_OID. Thus extension AMs, or
extension opclasses such as contrib/btree_gin, are out in the cold.
The reason for hard-wiring the set of relevant opfamilies was mostly
to avoid a catalog lookup in a hot code path. We can improve matters
while not taking much of a performance hit by relying on the
hard-wired set when the opfamily OID is visibly built-in, and only
checking the catalogs when dealing with an extension opfamily.
While here, rename IsBooleanOpfamily to IsBuiltinBooleanOpfamily
to remind future users of that macro of its limitations. At some
point we might want to make indxpath.c's improved version of the
test globally accessible, but it's not presently needed elsewhere.
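A sketch of the improved test (hypothetical function name; the committed
code in indxpath.c may differ in detail):

    #include "postgres.h"
    #include "catalog/pg_operator_d.h"
    #include "catalog/pg_opfamily_d.h"
    #include "utils/lsyscache.h"

    /*
     * Hypothetical sketch: trust the hard-wired OIDs for the built-in
     * boolean opfamilies, and fall back to a catalog lookup only for
     * extension opfamilies.
     */
    static bool
    is_boolean_opfamily_sketch(Oid opfamily)
    {
        if (opfamily == BOOL_BTREE_FAM_OID || opfamily == BOOL_HASH_FAM_OID)
            return true;

        /* extension opfamily: does boolean equality belong to it? */
        return op_in_opfamily(BooleanEqualOperator, opfamily);
    }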
Zongliang Quan and Tom Lane
Discussion: https://postgr.es/m/f293b91d-1d46-d386-b6bb-4b06ff5c667b@yeah.net
It was not strictly correct to say that a column list must always include
replica identity columns, because that is true only for updates and
deletes.
Author: Peter Smith
Reviewed-by: Vignesh C, Amit Kapila
Backpatch-through: 15, where it was introduced
Discussion: https://postgr.es/m/CAHut+PvOuc9=_4TbASc5=VUqh16UWtFO3GzcKQK_5m1hrW3vqg@mail.gmail.com
Several backend-side loops scanning one or more directories with
ReadDir() (WAL segment recycle/removal in xlog.c, backend-side directory
copy, temporary file removal, configuration file parsing, some logical
decoding logic and some pgtz stuff) already know the type of the entry
being scanned thanks to the dirent structure associated with the entry, on
platforms where we know about DT_REG, DT_DIR and DT_LNK to tell the
difference between a regular file, a directory and a symbolic link.
Relying on the dirent structure of an entry saves a few system calls to
stat() and lstat() in the loops updated here, shaving some code while at it.
The logic of the code remains the same, calling stat() or lstat() depending
on whether it is necessary to look through symlinks.
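The typical pattern (a simplified sketch, not any specific call site changed
here) looks like this:

    #include "postgres.h"

    #include <dirent.h>
    #include <sys/stat.h>

    #include "storage/fd.h"

    /* Sketch: consult d_type where available, fall back to stat(). */
    static void
    scan_directory_sketch(const char *path)
    {
        DIR        *dir = AllocateDir(path);
        struct dirent *de;

        while ((de = ReadDir(dir, path)) != NULL)
        {
    #if defined(DT_REG)
            if (de->d_type == DT_REG)
            {
                /* known to be a regular file, no stat() call needed */
            }
            else
    #endif
            {
                struct stat st;
                char        fullpath[MAXPGPATH];

                snprintf(fullpath, sizeof(fullpath), "%s/%s", path, de->d_name);
                if (stat(fullpath, &st) == 0 && S_ISREG(st.st_mode))
                {
                    /* regular file, determined the slower way */
                }
            }
        }
        FreeDir(dir);
    }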
Authors: Nathan Bossart, Bharath Rupireddy
Reviewed-by: Andres Freund, Thomas Munro, Michael Paquier
Discussion: https://postgr.es/m/CALj2ACV8n-J-f=yiLUOx2=HrQGPSOZM3nWzyQQvLPcccPXxEdg@mail.gmail.com
The sysroot determination is fairly complex and will soon also be needed when
building with meson. Instead of duplicating the logic, move it to a dedicated
shell script invoked both by configure and meson.
Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Discussion: https://postgr.es/m/2180a97c-c026-1b6c-cec8-d6e499f97017@enterprisedb.com
This reverts the following and makes some associated cleanups:
commit f79b803dc: Common SQL/JSON clauses
commit f4fb45d15: SQL/JSON constructors
commit 5f0adec25: Make STRING an unreserved_keyword.
commit 33a377608: IS JSON predicate
commit 1a36bc9db: SQL/JSON query functions
commit 606948b05: SQL JSON functions
commit 49082c2cc: RETURNING clause for JSON() and JSON_SCALAR()
commit 4e34747c8: JSON_TABLE
commit fadb48b00: PLAN clauses for JSON_TABLE
commit 2ef6f11b0: Reduce running time of jsonb_sqljson test
commit 14d3f24fa: Further improve jsonb_sqljson parallel test
commit a6baa4bad: Documentation for SQL/JSON features
commit b46bcf7a4: Improve readability of SQL/JSON documentation.
commit 112fdb352: Fix finalization for json_objectagg and friends
commit fcdb35c32: Fix transformJsonBehavior
commit 4cd8717af: Improve a couple of sql/json error messages
commit f7a605f63: Small cleanups in SQL/JSON code
commit 9c3d25e17: Fix JSON_OBJECTAGG uniquefying bug
commit a79153b7a: Claim SQL standard compliance for SQL/JSON features
commit a1e7616d6: Rework SQL/JSON documentation
commit 8d9f9634e: Fix errors in copyfuncs/equalfuncs support for JSON node types.
commit 3c633f32b: Only allow returning string types or bytea from json_serialize
commit 67b26703b: expression eval: Fix EEOP_JSON_CONSTRUCTOR and EEOP_JSONEXPR size.
The release notes are also adjusted.
Backpatch to release 15.
Discussion: https://postgr.es/m/40d2c882-bcac-19a9-754d-4299e1d87ac7@postgresql.org
Not passing -shared to gcc when building a shared library triggers linking to
the wrong libgcc (libgcc.a instead of libgcc_s.a) and prevents emitting
correct unwind information. It's somewhat surprising that this hasn't caused
known problems so far.
Doing so requires either adding the path to libgcc to the libpath or linking
statically to libgcc; as the latter increases .so size substantially (for not
entirely obvious reasons), shared linking seems preferable. It is likely
worth building executables with -shared-libgcc too, but I've not done that
here.
Discussion: https://postgr.es/m/20220820174213.d574qde4ptwdzoqz@awork3.anarazel.de
Oversight in commit 418ec3207: it's better to do it like this,
else you have to drop and recreate the extension for each
permutation. tcn.spec only has one permutation at present,
so this doesn't speed it up any, but it's still a bad example.
Buildfarm member bowerbird is (inconsistently) showing different
results for this test case since we enabled ASLR for MSVC builds.
It's not very clear whether that's a bug in its version of libxml2
or the test case is relying on nominally-undefined behavior, i.e. the
ordering of results from XPath's node(). It seems quite unlikely
that it's *our* bug though, and what's more, using node() adds
nothing to the test coverage so far as our code is concerned.
So, tweak the test to not use node().
For the moment, only change HEAD because we've only seen the
problem there. Perhaps a case will emerge for back-patching.
Discussion: https://postgr.es/m/2655387.1661695793@sss.pgh.pa.us