Commit 587cda35c added a test to updatable_views.sql that created
tables with the same names as tables used by the concurrent inherit.sql
script. Unsurprisingly, this resulted in random failures.
Pick different names.
Per buildfarm.
In the new code for selecting sequence data from pg_sequence, set the
schema to pg_catalog instead of the sequence's own schema, and refer to
the sequence by OID instead of by name, which was missing a schema
qualification.
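A hedged sketch of what the rewritten query construction might look
like; the buffer and OID variable names here are illustrative, not the
actual ones:

    /* Query the catalog with an explicit pg_catalog qualification and
     * identify the sequence by OID, so the sequence name itself needs
     * no schema qualification at all. */
    printfPQExpBuffer(&buf,
                      "SELECT seqstart, seqincrement, seqmax, seqmin\n"
                      "FROM pg_catalog.pg_sequence\n"
                      "WHERE seqrelid = %u;",
                      sequence_oid);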
Reported-by: Stephen Frost <sfrost@snowman.net>
Formerly an alternate password file could only be selected via the
environment variable PGPASSFILE; now it can also be selected via a
new connection parameter "passfile", corresponding to the conventions
for most other connection parameters. There was some concern about
this creating a security weakness, but it was agreed that that argument
was pretty thin, and there are clear use-cases for handling password
files this way.
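For illustration, a minimal libpq program that selects a password file
through the new parameter (the file path is just an example):

    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        /* "passfile" now works like any other connection parameter;
         * previously only the PGPASSFILE environment variable could
         * override the default ~/.pgpass location. */
        PGconn *conn = PQconnectdb(
            "host=localhost dbname=postgres passfile=/tmp/test.pgpass");

        if (PQstatus(conn) != CONNECTION_OK)
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 0;
    }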
Julian Markwort, reviewed by Fabien Coelho, some adjustments by me
Discussion: https://postgr.es/m/a4b4f4f1-7b58-a0e8-5268-5f7db8e8ccaa@uni-muenster.de
This is useful infrastructure for an upcoming proposed patch to
allow the WAL segment size to be changed at initdb time; tools like
pg_basebackup need the ability to interrogate the server setting.
But it also doesn't seem like a bad thing to have independently of
that; it may find other uses in the future.
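A hedged sketch, assuming the facility is exercised with a SHOW-style
command over a walsender connection (the GUC name here is only an
example):

    /* sketch: a client such as pg_basebackup interrogating a server
     * setting over a replication connection */
    PGconn     *conn = PQconnectdb("replication=true dbname=postgres");
    PGresult   *res = PQexec(conn, "SHOW wal_segment_size");

    if (PQresultStatus(res) == PGRES_TUPLES_OK)
        printf("wal_segment_size = %s\n", PQgetvalue(res, 0, 0));
    PQclear(res);
    PQfinish(conn);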
Robert Haas and Beena Emerson. (The original patch here was by
Beena, but I rewrote it to such a degree that most of the code
being committed here is mine.)
Discussion: http://postgr.es/m/CA+TgmobNo4qz06wHEmy9DszAre3dYx-WNhHSCbU9SAwf+9Ft6g@mail.gmail.com
If you create a DestReceiver of type DestRemote and try to use it from
a replication connection that is not bound to a specific database, or
any other hypothetical type of backend that is not bound to a specific
database, it will fail because it doesn't have a pg_proc catalog to
look up properties of the types being printed. In general, that's
an unavoidable problem, but we can hardwire the properties of a few
builtin types in order to support utility commands. This new
DestReceiver of type DestRemoteSimple does just that.
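A hedged sketch of the intended usage (simplified; the surrounding
walsender code and the tuple descriptor setup are elided):

    /* A backend with no database connection can still return results
     * for a utility command, because DestRemoteSimple hardwires the
     * properties of a few builtin types instead of consulting the
     * catalogs. */
    DestReceiver *dest = CreateDestReceiver(DestRemoteSimple);

    dest->rStartup(dest, CMD_SELECT, tupdesc);  /* builtin types only */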
Patch by me, reviewed by Michael Paquier.
Discussion: http://postgr.es/m/CA+TgmobNo4qz06wHEmy9DszAre3dYx-WNhHSCbU9SAwf+9Ft6g@mail.gmail.com
This patch doesn't actually make any index AM parallel-aware, but it
provides the necessary functions at the AM layer to do so.
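A hedged sketch of the AM-layer hookup; the "myam_" names are
hypothetical placeholders for whatever an index AM would supply:

    /* in an AM's handler function: fill in the new parallel-scan
     * callbacks of IndexAmRoutine to become parallel-aware */
    amroutine->amestimateparallelscan = myam_estimateparallelscan;
    amroutine->aminitparallelscan = myam_initparallelscan;
    amroutine->amparallelrescan = myam_parallelrescan;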
Rahila Syed, Amit Kapila, Robert Haas
Previously, ExecInitModifyTable was missing handling for WITH CHECK
OPTION, and view_query_is_auto_updatable was missing handling for
RELKIND_PARTITIONED_TABLE.
Amit Langote, reviewed by me.
In 2ac3ef7a01, we changed things so that
it's possible for a different TupleTableSlot to be used for partitioned
tables at successively lower levels. If we do end up changing the slot
from the original, we must update ecxt_scantuple to point to the new one
so that the partition key of the tuple is computed correctly.
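A hedged, simplified sketch of the fix:

    ExprContext *ecxt = GetPerTupleExprContext(estate);

    /* the slot may have been switched while descending to a lower
     * partitioned level; repoint ecxt_scantuple so the partition key
     * is computed from the tuple we actually have */
    ecxt->ecxt_scantuple = slot;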
Reported by Rajkumar Raghuwanshi. Patch by Amit Langote.
Discussion: http://postgr.es/m/CAKcux6%3Dm1qyqB2k6cjniuMMrYXb75O-MB4qGQMu8zg-iGGLjDw%40mail.gmail.com
We've accumulated quite a bit of stuff with which pgindent is not
quite happy in this code; clean it up to provide a less-annoying base
for future pgindent runs.
The code here previously tried to call the partitioning operator, but
really the right thing to do (and the safe thing to do) is to use
datumIsEqual().
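A hedged sketch (field names as in PartitionKey, but simplified):

    /* compare the stored partition bound datum with the new one
     * byte-for-byte; no user-defined operator gets invoked */
    bool bounds_equal = datumIsEqual(bound1, bound2,
                                     key->parttypbyval[i],
                                     key->parttyplen[i]);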
Amit Langote, but I expanded the comment and fixed a compiler warning.
For some reason that is lost in history, a descending sequence would
default its minimum value to -2^63+1 (-PG_INT64_MAX) instead of
-2^63 (PG_INT64_MIN), even though explicitly specifying a minimum value
of -2^63 would work. Fix this inconsistency by using the full range by
default.
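A hedged sketch of the defaulting rule after this change (variable
names illustrative):

    if (seqform->seqincrement > 0)
        seqform->seqmin = 1;              /* ascending */
    else
        seqform->seqmin = PG_INT64_MIN;   /* descending; was -PG_INT64_MAX */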
Reported-by: Daniel Verite <daniel@manitou-mail.org>
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
initdb used to warn about that, but it was changed to an error in
pg_import_system_locales, and some buildfarm members failed because of
that. Change it back to a warning.
The vacuum truncation scan can be sped up on rotating media by
prefetching blocks in forward direction. That way the blocks are
already present in memory by the time they are needed, while also
letting OS read-ahead kick in.
The truncation scan has been measured to be five times faster than
without this patch (that was on a slow disk, but it shouldn't hurt
on fast disks).
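A hedged sketch of the idea; the names and the windowing scheme are
illustrative:

    /* while the truncation scan walks backwards, prefetch each new
     * window of blocks in forward (ascending) order, so the OS
     * read-ahead logic can engage */
    BlockNumber pblkno;

    for (pblkno = window_start; pblkno <= window_end; pblkno++)
        PrefetchBuffer(rel, MAIN_FORKNUM, pblkno);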
Author: Álvaro Herrera, loosely based on a submission by Claudio Freire
Discussion: https://postgr.es/m/CAGTBQpa6NFGO_6g_y_7zQx8L9GcHDSQKYdo1tGuh791z6PYgEg@mail.gmail.com
Joining three tables only takes two join nodes. I think when I (tgl)
wrote this, I was envisioning possible additional joins; but since the
example doesn't show any fourth table, it's just confusing to write
a third join node.
Etsuro Fujita
Discussion: https://postgr.es/m/e6cfbaa3-af02-1abc-c25e-8fa5c6bc4e21@lab.ntt.co.jp
This appears to be necessary to fix a failure seen on buildfarm member
sittella. It shouldn't be necessary according to the letter of the C
standard, because we don't change the values of these variables within
the PG_TRY blocks; but somehow gcc 4.7.2 is dropping the ball.
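The usual defensive pattern looks like this generic sketch (not the
exact variables involved here):

    /* volatile keeps the value out of a register across the
     * sigsetjmp() hidden inside PG_TRY(), so a longjmp back into
     * PG_CATCH() cannot clobber it */
    volatile bool did_work = false;

    PG_TRY();
    {
        did_work = true;
        /* ... code that might elog(ERROR) ... */
    }
    PG_CATCH();
    {
        if (did_work)
            cleanup_state();    /* hypothetical cleanup */
        PG_RE_THROW();
    }
    PG_END_TRY();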
Discussion: https://postgr.es/m/17555.1485179975@sss.pgh.pa.us
A destroyPQExpBuffer() was missing in the early-exit branch. The
early exits aren't really necessary anyway: most similar functions
just proceed to run the rest of the code zero times and clean up at
the end.
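A hedged sketch of the leak being fixed (condition and names are
illustrative):

    PQExpBuffer buf = createPQExpBuffer();

    if (nothing_to_do)
    {
        destroyPQExpBuffer(buf);    /* this call was missing */
        return;
    }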
Project style is to put things in this order, for the good and sufficient
reason that you often need the typedefs in the function declarations.
There was already one function declaration that needed a typedef, and
in consequence it had been placed at a random distance from all the
other static function declarations. And the submitted patch for better
json_populate_record
functionality jumped through even more hoops in order to preserve this
bad idea.
This patch only moves lines from point A to point B, no other changes.
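For illustration, the ordering convention in question (hypothetical
names):

    /* typedefs first ... */
    typedef struct MyState MyState;

    /* ... then static function declarations, which may need them */
    static void process_state(MyState *state);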
Coverity complained quite properly that commit ea15e1867 had introduced
unreachable code into ExecGather(); to wit, it was no longer possible to
iterate the final for-loop more or less than once. So remove the for().
In passing, clean up a couple of comments, and make better use of a local
variable.
Turns out this has been broken for years and we'd not noticed. The one
case that was getting exercised in the buildfarm, or probably anywhere
else, was postgres_fdw.sl's reference to libpq.sl; and it turns out that
that was always going to libpq.sl in the actual installation directory,
not the temporary install. We'd not noticed because the buildfarm script
does "make install" before it tests contrib. However, the recent addition
of a logical-replication test to the core regression scripts resulted in
trying to use libpqwalreceiver.sl before "make install" happens, and that
failed for lack of finding libpq.sl, as shown by failures on buildfarm
members gaur and pademelon.
There are two changes needed to fix it: the magic environment variable to
specify shlib search path at runtime is SHLIB_PATH not LD_LIBRARY_PATH,
and the shlib link command needs to specify the +s switch else the library
will not honor SHLIB_PATH.
I'm not quite sure why buildfarm members anole and gharial (HPUX 11) didn't
show the same failure. Consulting man pages on the web suggests that HPUX 11
honors both LD_LIBRARY_PATH and SHLIB_PATH, which would explain half of it,
and the rather confusing wording I've been able to find suggests that +s
might effectively be the default in HPUX 11. But it seems at least as
likely that there's just a libpq.so installed in /usr/lib on that machine;
as long as it's not too ancient, that would satisfy the test. In any case
I do not think this patch will break HPUX 11.
At the moment I don't see a need to back-patch this, since it only matters
for testing purposes, not to mention that HPUX 10 is probably dead in the
real world anyway.
When (1) autovacuum = off and (2) there's at least one database with
an XID age greater than autovacuum_freeze_max_age and (3) all tables
in that database that need vacuuming are already being processed by a
worker and (4) the autovacuum launcher is started, a kind of infinite
loop occurs. The launcher starts a worker and immediately exits. The
worker, finding no work to do, immediately starts the launcher,
supposedly so that the next database can be processed. But because
datfrozenxid for that database hasn't been advanced yet, the new
worker gets put right back into the same database as the old one,
where it once again starts the launcher and exits. High-speed ping
pong ensues.
There are several possible ways to break the cycle; this seems like
the safest one.
Amit Khandekar (code) and Robert Haas (comments), reviewed by
Álvaro Herrera.
Discussion: http://postgr.es/m/CAJ3gD9eWejf72HKquKSzax0r+epS=nAbQKNnykkMA0E8c+rMDg@mail.gmail.com
If either bound is infinite, then we shouldn't even try to perform a
comparison of the values themselves. Rearrange the logic so that
we don't.
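A hedged sketch of the rearranged logic (field and variable names
illustrative):

    if (bound1->infinite || bound2->infinite)
    {
        /* settle the ordering without touching the datums */
        cmpval = bound1->infinite ? (bound2->infinite ? 0 : -1) : 1;
    }
    else
        cmpval = DatumGetInt32(FunctionCall2Coll(&key->partsupfunc[i],
                                                 key->partcollation[i],
                                                 bound1->datums[i],
                                                 bound2->datums[i]));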
Per buildfarm member skink and Tom Lane.
This was forgotten in 665d1fad99 and
caused the whole buildfarm to become red for a little while.
Author: Petr Jelínek
Also fix a typo in a nearby error message.
pgoutput evidently needs to be built without -DBUILDING_DLL. (It seems
like a pretty bad idea that these makefiles need to know exactly where
all the shlibs are in the tree, or maybe what's bad is putting them under
src/backend/. But right now is not the time to redesign that.)
Also, remove "override CPPFLAGS" in pgoutput's Makefile. I don't think
that that actually has any bad consequences, but it's certainly useless
in a directory that has no .h files, and it might be contributing to the
failure somehow.
Per buildfarm.
A pgbench meta command can now be continued onto additional line(s) of a
script file by writing backslash-return. The continuation marker is
equivalent to white space in that it separates tokens.
Eventually it'd be nice to have the same thing in psql, but that will
be a much larger project.
Fabien Coelho, reviewed by Rafia Sabih
Discussion: https://postgr.es/m/alpine.DEB.2.20.1610031049310.19411@lancre
The publication test didn't drop all the publications it created,
although it apparently intended to do so. There is still a bug with
dependency tracking in there, but this should at least quiet down the
build farm.
Brown-paper-bag bug in commit ab1f0c822: the old code here coped with
null CachedPlanSource.raw_parse_tree, the new code not so much.
Per report from Dave Cramer.
No regression test, because our core testing infrastructure doesn't
provide any easy way to exercise this path. Fortunately, the JDBC
crew test it regularly.
Discussion: https://postgr.es/m/CADK3HH+Ug3xCysKqw_dZOnaNnytZ1Rh5yP05hjO-e4NoyRxVvA@mail.gmail.com
I'd somehow talked myself into believing that set_append_rel_size
doesn't need to worry about getting back an AND clause when it applies
eval_const_expressions to the result of adjust_appendrel_attrs (that is,
transposing the appendrel parent's restriction clauses for one child).
But that is nonsense, and Andreas Seltenreich's fuzz tester soon
turned up a counterexample. Put back the make_ands_implicit step
that was there before, and add a regression test covering the case.
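A hedged sketch of the restored step (variable names illustrative):

    /* eval_const_expressions() can hand back an explicit AND clause;
     * flatten it back into the implicit-AND list form expected for
     * restriction clauses */
    childquals = make_ands_implicit((Expr *)
                     eval_const_expressions(root, (Node *) childqual));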
Report: https://postgr.es/m/878tq6vja6.fsf@ansel.ydns.eu
Due to the changed costing in that commit, hash aggregates started to
be used, which results in big-endian vs. little-endian output
differences. Disable hash aggregation for those tests.
Author: Andres Freund, with input from Tom Lane
Discussion: https://postgr.es/m/22891.1484791792@sss.pgh.pa.us
Since 69f4b9c, plain expression evaluation (and thus normal projection)
can't return sets of tuples anymore. Thus remove code dealing with
that possibility.
This will require adjustments in external code using
ExecEvalExpr()/ExecProject(); that should be neither hard nor very
common.
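For external callers the adjustment is mostly mechanical; a hedged
before/after sketch:

    /* before: callers had to pass an ExprDoneCond to handle
     * set-returning expressions */
    value = ExecEvalExpr(exprstate, econtext, &isnull, &isdone);

    /* after: sets can no longer come back from plain expression
     * evaluation, so the argument is gone */
    value = ExecEvalExpr(exprstate, econtext, &isnull);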
Author: Andres Freund and Tom Lane
Discussion: https://postgr.es/m/20160822214023.aaxz5l4igypowyri@alap3.anarazel.de
If a user requests the commit timestamp for a transaction old enough
that its data is concurrently being truncated away by vacuum at just the
right time, they would receive an ugly internal file-not-found error
message from slru.c rather than the expected NULL return value.
In a primary server, the window for the race is very small: the lookup
has to occur exactly between the two calls by vacuum, and there's not a
lot that happens between them (mostly just a multixact truncate). In a
standby server, however, the window is larger because the truncation is
executed as soon as the WAL record for it is replayed, but the advance
of the oldest-Xid is not executed until the next checkpoint record.
To fix in the primary, simply reverse the order of operations in
vac_truncate_clog. To fix in the standby, augment the WAL truncation
record so that the standby is aware of the new oldest-XID value and can
apply the update immediately. WAL version bumped because of this.
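A hedged, simplified sketch of the primary-side reordering; the
function names below are placeholders for the actual calls in
vac_truncate_clog:

    advance_oldest_xid_limits(frozenXID);   /* now done first */
    truncate_commit_ts(frozenXID);          /* truncation happens after */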
No backpatch, because of the low importance of the bug and its rarity.
Author: Craig Ringer
Reviewed-By: Petr Jelínek, Peter Eisentraut
Discussion: https://postgr.es/m/CAMsr+YFhVtRQT1VAwC+WGbbxZZRzNou=N9Ed-FrCqkwQ8H8oJQ@mail.gmail.com
The previous coding did not properly quote the user name before casting
it to regrole. To avoid all that, just pass in BOOTSTRAP_SUPERUSERID
numerically.
Also fix one place where BOOTSTRAP_SUPERUSERID was hardcoded as 10.
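A hedged sketch of the safer formulation (the query fragment and the
column name are illustrative):

    appendPQExpBuffer(&buf,
                      " WHERE d.datdba = %u",   /* example column */
                      BOOTSTRAP_SUPERUSERID);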