The solution arrived at in commit e74dd00f5 presumes that the compiler
has a suitable default -isysroot setting ... but further experience
shows that in many combinations of macOS version, Xcode version, Xcode
command line tools version, and phase of the moon, Apple's compiler
will *not* supply a default -isysroot value.
We could potentially go back to the approach used in commit 68fc227dd,
but I don't have a lot of faith in the reliability or life expectancy of
that either. Let's just revert to the approach already shipped in 11.0,
namely specifying an -isysroot switch globally. As a partial response to
the concerns raised by Jakob Egger, adjust the contents of Makefile.global
to look like
    CPPFLAGS = -isysroot $(PG_SYSROOT) ...
    PG_SYSROOT = /path/to/sysroot
This allows overriding the sysroot path at build time in a relatively
painless way.
Add documentation to installation.sgml about how to use the PG_SYSROOT
option. I also took the opportunity to document how to work around
macOS's "System Integrity Protection" feature.
As before, back-patch to all supported versions.
Discussion: https://postgr.es/m/20840.1537850987@sss.pgh.pa.us
NULL keys in left joins were skipped when building batch files.
Repair, by making the keep_nulls argument to ExecHashGetHashValue()
depend on whether this is a left outer join, as we do in other
paths.
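For illustration only, a sketch of the affected query shape (table and
column names are hypothetical); before the fix, outer rows with NULL
join keys could be lost once the hash join spilled to batch files:
    CREATE TEMP TABLE t_outer (k int);
    CREATE TEMP TABLE t_inner (k int);
    INSERT INTO t_outer VALUES (1), (NULL);
    INSERT INTO t_inner VALUES (1);
    -- The NULL-keyed outer row must still be returned, NULL-extended:
    SELECT o.k, i.k
      FROM t_outer o LEFT JOIN t_inner i ON o.k = i.k;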
Bug #15475. Thinko in 1804284042. Back-patch to 11.
Reported-by: Paul Schaap
Diagnosed-by: Andrew Gierth
Discussion: https://postgr.es/m/15475-11a7a783fed72a36%40postgresql.org
When restoring slot information from disk at startup and filling in
shared memory information, the startup process would issue a PANIC
message if more slots were found than max_replication_slots allows,
and Postgres would then generate a core dump while recommending an
increase of max_replication_slots. This gave users a way to crash
Postgres at will: create slots, lower the setting below the number of
existing slots, and restart.
Making Postgres crash hard in this case is overkill just to give users
a recommendation. Instead, use FATAL, which makes Postgres fail to
start without crashing while still giving the recommendation. This is
more consistent with what happens for prepared transactions, for
example.
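A rough sketch of the scenario, with hypothetical slot names; after
this change the server refuses to start with a FATAL error instead of
PANICking and dumping core:
    -- Create more slots than the server will later be configured for:
    SELECT pg_create_physical_replication_slot('s1');
    SELECT pg_create_physical_replication_slot('s2');
    -- Then set max_replication_slots = 1 in postgresql.conf and restart;
    -- startup now fails with FATAL, recommending a higher value.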
Author: Michael Paquier
Reviewed-by: Andres Freund
Discussion: https://postgr.es/m/20181030025109.GD1644@paquier.xyz
This has been deprecated and effectively unused for a long time.
Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
The project message style guide dictates: "When citing the name of an
object, state what kind of object it is". The parallel CREATE INDEX
patch added a worker number to most of the trace_sort messages within
tuplesort.c without specifying the object type. Bring these messages
into compliance with the style guide.
We're still treating a leader or serial Tuplesortstate as having worker
number -1. trace_sort is a developer option, and these two cases are
highly comparable, so this seems appropriate.
Per complaint from Tom Lane.
Discussion: https://postgr.es/m/8330.1540831863@sss.pgh.pa.us
Backpatch: 11-, where parallel CREATE INDEX was introduced.
Previously it was possible to create a slot, change wal_level, and
restart, even if the new wal_level was insufficient for the
slot. That's a problem for both logical and physical slots, because
the necessary WAL records are not generated.
This removes a few tests in newer versions that, somewhat
inexplicably, checked whether restarting with a too-low wal_level
worked (a buggy behaviour!).
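For example (hypothetical slot name, sketch only), a logical slot
requires wal_level = logical, and restarting with a lower setting now
fails instead of silently leaving the slot without the WAL it needs:
    -- With wal_level = logical:
    SELECT pg_create_logical_replication_slot('s1', 'test_decoding');
    -- Lower wal_level to 'replica' in postgresql.conf and restart;
    -- the server now refuses to start rather than coming up with an
    -- unusable slot.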
Reported-By: Joshua D. Drake
Author: Andres Freund
Discussion: https://postgr.es/m/20181029191304.lbsmhshkyymhw22w@alap3.anarazel.de
Backpatch: 9.4-, where replication slots were introduced
spgendscan neglected to pfree all the memory allocated by spgbeginscan.
It's possible to get away with that in most normal queries, since the
memory is allocated in the executor's per-query context which is about
to get deleted anyway; but it causes severe memory leakage during
creation or filling of large exclusion-constraint indexes.
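A hypothetical shape that exercises such an index build (a sketch
assuming an SP-GiST-backed exclusion constraint on points, not the
original report):
    CREATE TABLE pts (p point);
    -- ... bulk-load many rows ...
    -- Building the exclusion constraint runs many short index scans,
    -- which leaked memory before this fix:
    ALTER TABLE pts ADD CONSTRAINT pts_excl EXCLUDE USING spgist (p WITH ~=);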
Also, document that amendscan is supposed to free what ambeginscan
allocates. The docs' lack of clarity on that point probably caused this
bug to begin with. (There is discussion of changing that API spec going
forward, but I don't think it'd be appropriate for the back branches.)
Per report from Bruno Wolff. It's been like this since the beginning,
so back-patch to all active branches.
In HEAD, also fix an independent leak caused by commit 2a6368343
(allocating memory during spgrescan instead of spgbeginscan, which
might be all right if it got cleaned up, but it didn't). And do a bit
of code beautification on that commit, too.
Discussion: https://postgr.es/m/20181024012314.GA27428@wolff.to
This patch absorbs an upstream fix to "zic" for a recently-introduced
bug that made it output data that some 32-bit clients couldn't read.
Given the current source data, the bug only manifests in zones with
leap seconds, which we don't generate, so that there's no actual
change in our installed timezone data files from this. Still, in
case somebody uses our copy of "zic" to do something else, it seems
best to apply the fix promptly.
Also, update the README's notes about converting upstream code to
our conventions.
An array-type coercion appearing within a CASE that has a constant
(after const-folding) test expression was mangled by the planner, causing
all the elements of the resulting array to be equal to the coerced value
of the CASE's test expression. This is my oversight in commit c12d570fa:
that changed ArrayCoerceExpr to use a subexpression involving a
CaseTestExpr, and I didn't notice that eval_const_expressions needed an
adjustment to keep from folding such a CaseTestExpr to a constant when
it's inside a suitable CASE.
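As a purely hypothetical sketch of the affected shape (table and
column names invented here): an array coercion that needs per-element
work, sitting inside a CASE whose test expression folds to a constant:
    SELECT CASE 'foo'::text
               WHEN 'bar' THEN some_text_array::varchar(4)[]
               ELSE NULL
           END
      FROM some_table;
    -- Before the fix, the CaseTestExpr hidden inside the per-element
    -- coercion could be folded to the CASE's constant test value,
    -- making every element of the result equal to that value.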
This is another in what's getting to be a depressingly long line of bugs
associated with misidentification of the referent of a CaseTestExpr.
We're overdue to redesign that mechanism; but any such fix is unlikely
to be back-patchable into v11. As a stopgap, fix eval_const_expressions
to do what it must here. Also add a bunch of comments pointing out the
restrictions and assumptions that are needed to make this work at all.
Also fix a related oversight: contain_context_dependent_node() was not
aware of the relationship of ArrayCoerceExpr to CaseTestExpr. That was
somewhat fail-soft, in that the outcome of a wrong answer would be to
prevent optimizations that could have been made, but let's fix it while
we're at it.
Per bug #15471 from Matt Williams. Back-patch to v11 where the faulty
logic came in.
Discussion: https://postgr.es/m/15471-1117f49271989bad@postgresql.org
This moves one check for conflicting options from the archive restore
code to the main function where other similar checks are performed.
Also reword the error message to be consistent with other messages.
The only option combination affected is --create specified with
--single-transaction, and reporting the problem at an early step avoids
opening the archive being worked on. A TAP test is added for this
combination.
Author: Daniel Gustafsson
Reviewed-by: Fabien Coelho
Discussion: https://postgr.es/m/616808BD-4B59-4E6C-97A9-7317F62D5570@yesql.se
This new function is useful to display the full tree of partitions of
a given partitioned table, and avoids the need for complex
WITH RECURSIVE queries when looking at partition trees that are
multiple levels deep.
It returns a set of records, one for each partition, containing the
partition's name, its immediate parent's name, a boolean telling
whether the relation is a leaf of the tree, and an integer giving its
level in the partition tree with the given table considered as root,
beginning at zero for the root and incrementing by one each time the
scan goes one level down.
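A minimal usage sketch, assuming the function is named
pg_partition_tree() and that its output columns are relid, parentrelid,
isleaf and level (the partitioned table name is hypothetical):
    SELECT relid, parentrelid, isleaf, level
      FROM pg_partition_tree('measurement');
    -- One row per member of the tree, with level 0 for 'measurement'
    -- itself.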
Author: Amit Langote
Reviewed-by: Jesper Pedersen, Michael Paquier, Robert Haas
Discussion: https://postgr.es/m/8d00e51a-9a51-ad02-d53e-ba6bf50b2e52@lab.ntt.co.jp
Exclude tmp_check and tmp_install from pgindent. In a fully-built
tree, pgindent would spend a lot of time digging through these
directories and end up re-indenting installed header files.
The reference to pg_attribute is switched to a link, which is more
useful for the HTML documentation. The conditions under which a default
value is defined for a given column are made more general.
Author: Daniel Gustafsson
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/0E8748E3-8B7D-445E-9ABA-09DA5C7345CC@yesql.se
Modern versions of Perl no longer include the current directory in the
Perl search path, as it's insecure. Instead of adding the current
directory, we get around the problem by adding the directory where the
script lives.
Problem noted by Victor Wagner.
Solution adapted from buildfarm client code.
Backpatch to all live versions.
This adds tab completion of the WHEN and EXECUTE FUNCTION|PROCEDURE
clauses to CREATE EVENT TRIGGER, similar to CREATE TRIGGER in the
previous commit. The logic is version-dependent, so that FUNCTION is
chosen over PROCEDURE for version 11 and newer.
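The completed syntax looks roughly like this (names hypothetical; the
function must return event_trigger):
    CREATE EVENT TRIGGER my_evt ON ddl_command_start
        WHEN TAG IN ('CREATE TABLE')
        EXECUTE FUNCTION my_evt_func();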
Author: Dagfinn Ilmari Mannsåker
Reviewed-by: Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/d8jmur4q4yc.fsf@dalvik.ping.uio.no
The change to accept EXECUTE FUNCTION as well as EXECUTE PROCEDURE in
CREATE TRIGGER (added by 0a63f99) forgot to tell psql's tab completion
system about this. In passing, add tab completion of EXECUTE
FUNCTION/PROCEDURE after a complete WHEN ( … ) clause.
This change is version-aware: FUNCTION is selected automatically
instead of PROCEDURE depending on the backend version, PROCEDURE being
a historical spelling kept for compatibility and considered deprecated
as of v11.
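For reference, the completed form now offered looks like this sketch
(object names hypothetical), with EXECUTE PROCEDURE still accepted for
older backends:
    CREATE TRIGGER my_trg
        BEFORE UPDATE ON my_table
        FOR EACH ROW
        WHEN (OLD.val IS DISTINCT FROM NEW.val)
        EXECUTE FUNCTION my_trg_func();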
Author: Dagfinn Ilmari Mannsåker
Reviewed-by: Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/d8jmur4q4yc.fsf@dalvik.ping.uio.no
This new SQL-callable function makes it possible to promote a standby.
Execution access can be granted to non-superusers so that failover
tools can observe the principle of least privilege.
Catalog version is bumped.
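A minimal sketch, assuming the function is named pg_promote() and has
defaults for its arguments:
    -- On a standby, trigger promotion from SQL:
    SELECT pg_promote();
    -- EXECUTE on the function can then be granted to a dedicated,
    -- non-superuser failover role.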
Author: Laurenz Albe
Reviewed-by: Michael Paquier, Masahiko Sawada
Discussion: https://postgr.es/m/6e7c79b3ec916cf49742fb8849ed17cd87aed620.camel@cybertec.at
Commit 16828d5c0 incorrectly set an invalid t_self pointer for heap
tuples. This patch correctly copies it from the source tuple, and
includes a regression test that relies on it being set correctly.
Backpatch to release 11.
Fixes bug #15448 reported by Tillmann Schulz
Diagnosis and test case by Amit Langote
This changes the documentation and the related structures so that
everything is consistent.
Some wait events have not been listed alphabetically since their
introduction, and others have been added rather randomly. Keeping all
these entries in order helps with maintenance, and helps users reading
the documentation.
Author: Michael Paquier, Kuntal Ghosh
Discussion: https://postgr.es/m/20181024002539.GI1658@paquier.xyz
Backpatch-through: 10, only for the documentation part to avoid an ABI
breakage.
All existing uses can get this information more easily from the
relation descriptor, so the detour through the syscache is not
necessary.
Reviewed-by: Michael Paquier <michael@paquier.xyz>
As for relations, setting this parameter is optimistic: it is turned
on each time a partitioned index gains a partition. So seeing this
parameter set to true means that the partitioned index has, or has
had, partitions. The flag cannot yet be reset for partitioned indexes,
but that matters little anyway, as partitioned relations exist to have
partitions.
This allows partition trees of indexes to be tracked more conveniently,
which will come into use with an upcoming patch listing partition trees
through an SQL-callable function.
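A sketch of the behaviour, assuming the parameter in question is
pg_class.relhassubclass (object names hypothetical):
    CREATE TABLE parent (a int) PARTITION BY LIST (a);
    CREATE TABLE child PARTITION OF parent FOR VALUES IN (1);
    CREATE INDEX parent_a_idx ON parent (a);  -- cascades to child
    SELECT relhassubclass FROM pg_class WHERE relname = 'parent_a_idx';
    -- expected: t, as the partitioned index has (or has had) a partition.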
Author: Amit Langote
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/80306490-b5fc-ea34-4427-f29c52156052@lab.ntt.co.jp
The pg_verify_checksums test tries to create files with corrupt data
named "123." and "123_.". But on Windows a file name with a trailing
dot is the same as the file name without the trailing dot. In the
first case this creates a file with a "valid" name, which causes the
test to fail in an unexpected way, and in the second case it is
redundant, as the test already creates a file named "123_".
Bug discovered by buildfarm animal bowerbird.
On Windows this means that the regression tests can now safely and
successfully run as Administrator, which is useful in situations like
Appveyor. Elsewhere it's a no-op.
Backpatch to 9.5 - this is harder in earlier branches and not worth the
trouble.
Discussion: https://postgr.es/m/650b0c29-9578-8571-b1d2-550d7f89f307@2ndQuadrant.com
PQnotifies() is defined to just process already-read data, not try to read
any more from the socket. (This is a debatable decision, perhaps, but I'm
hesitant to change longstanding library behavior.) The documentation has
long recommended calling PQconsumeInput() before PQnotifies() to ensure
that any already-arrived message would get absorbed and processed.
However, psql did not get that memo, which explains why it's not very
reliable about reporting notifications promptly.
Also, most (not quite all) callers called PQconsumeInput() just once before
a PQnotifies() loop. Taking this recommendation seriously implies that we
should do PQconsumeInput() before each call. This is more important now
that we have "payload" strings in notification messages than it was before;
that increases the probability of having more than one packet's worth
of notify messages. Hence, adjust code as well as documentation examples
to do it like that.
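For context, a notification with a payload looks like this sketch
(channel name and payload hypothetical); the adjusted examples call
PQconsumeInput() before each PQnotifies() check so that such messages
are reported promptly:
    -- Session 1:
    LISTEN my_channel;
    -- Session 2:
    NOTIFY my_channel, 'row 42 changed';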
Back-patch to 9.5 to match related server fixes. In principle we could
probably go back further with these changes, but given lack of field
complaints I doubt it's worthwhile.
Discussion: https://postgr.es/m/CAOYf6ec-TmRYjKBXLLaGaB-jrd=mjG1Hzn1a1wufUAR39PQYhw@mail.gmail.com
Commit 4f85fde8e introduced some code that was meant to ensure that we'd
process cancel, die, sinval catchup, and notify interrupts while waiting
for client input. But there was a flaw: it supposed that the process
latch would be set upon arrival at secure_read() if any such interrupt
was pending. In reality, we might well have cleared the process latch
at some earlier point while those flags remained set -- particularly
notifyInterruptPending, which can't be handled as long as we're within
a transaction.
To fix the NOTIFY case, also attempt to process signals (except
ProcDiePending) before trying to read.
Also, if we see that ProcDiePending is set before we read, forcibly set the
process latch to ensure that we will handle that signal promptly if no data
is available. I also made it set the process latch on the way out, in case
there is similar logic elsewhere. (It remains true that we won't service
ProcDiePending here unless we need to wait for input.)
The code for handling ProcDiePending during a write needs those changes,
too.
Also be a little more careful about when to reset whereToSendOutput,
and improve related comments.
Back-patch to 9.5 where this code was added. I'm not entirely convinced
that older branches don't have similar issues, but the complaint at hand
is just about the >= 9.5 code.
Jeff Janes and Tom Lane
Discussion: https://postgr.es/m/CAOYf6ec-TmRYjKBXLLaGaB-jrd=mjG1Hzn1a1wufUAR39PQYhw@mail.gmail.com
About half of this is purely cosmetic changes to reduce the diff between
our code and theirs, like inserting "const" markers where they have them.
The other half is tracking actual code changes in zic.c and localtime.c.
I don't think any of these represent near-term compatibility hazards, but
it seems best to stay up to date.
I also fixed longstanding bugs in our code for producing the
known_abbrevs.txt list, which by chance hadn't been exposed before,
but which resulted in some garbage output after applying the upstream
changes in zic.c. Notably, because upstream removed their old phony
transitions at the Big Bang, it's now necessary to cope with TZif files
containing no DST transition times at all.
DST law changes in Chile, Fiji, and Russia (Volgograd).
Historical corrections for China, Japan, Macau, and North Korea.
Note: like the previous tzdata update, this involves a depressingly
large amount of semantically-meaningless churn in tzdata.zi. That
is a consequence of upstream's data compression method assigning
unstable abbreviations to DST rulesets. I complained about that
to them last time, and this version now uses an assignment method
that pays some heed to not changing abbreviations unnecessarily.
So hopefully, that'll be better going forward.
The original implementation of pg_verify_checksums used a blacklist to
decide which files should be skipped for scanning as they do not include
data checksums, like pg_internal.init or pg_control. However, this
missed two things:
- Some files are created by builds with EXEC_BACKEND and were not
listed, causing failures on Windows.
- Extensions may create custom files in data folders, likewise causing
the tool to fail.
This commit switches to a whitelist-like method instead, checking
whether the files to scan are legitimate relation files. This is close
to a reverse-engineering of what relpath.c does when building relation
paths, and we could consider refactoring what this patch does so that
all such routines live in a single place. This is left for later.
This is based on a suggestion from Andres Freund. TAP tests are
updated so that multiple file patterns are tested. The bug was spotted
by various buildfarm members as a result of b34e84f, which introduced
the TAP tests of pg_verify_checksums.
Author: Michael Paquier
Reviewed-by: Andrew Dunstan, Michael Banck
Discussion: https://postgr.es/m/20181012005614.GC26424@paquier.xyz
Backpatch-through: 11