A materialized view has a rule just like a view, and a heap and
other physical properties like a table. The rule is only used to
populate the table; references in queries refer to the
materialized data.
This is a minimal implementation, but should still be useful in
many cases. Currently data is only populated "on demand" by the
CREATE MATERIALIZED VIEW and REFRESH MATERIALIZED VIEW statements.
It is expected that future releases will add incremental updates
with various timings, and that a more refined concept of defining
what is "fresh" data will be developed. At some point it may even
be possible to have queries use a materialized view in place of
references to underlying tables, but that requires the other
above-mentioned features to be working first.
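For illustration, a minimal usage sketch of the new statements (the table and
column names are hypothetical):

    CREATE MATERIALIZED VIEW order_totals AS
        SELECT customer_id, sum(amount) AS total
        FROM orders
        GROUP BY customer_id;

    -- data is populated only on demand; re-run to pick up later changes
    REFRESH MATERIALIZED VIEW order_totals;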
Much of the documentation work by Robert Haas.
Review by Noah Misch, Thom Brown, Robert Haas, Marko Tiikkaja
Security review by KaiGai Kohei, with a decision on how best to
implement sepgsql still pending.
Also make sure other fields of the view's pg_class entry are appropriate
for a view; it shouldn't have relfrozenxid set, for instance.
This ancient omission isn't believed to have any serious consequences in
versions 8.4-9.2, so no backpatch. But let's fix it before it does bite
us in some serious way. It's just luck that the case doesn't cause
problems for autovacuum. (It did cause problems in 8.3, but that's out
of support.)
Andres Freund
fmgr_sql had been designed on the assumption that the FmgrInfo it's called
with has only query lifespan. This is demonstrably unsafe in connection
with range types, as shown in bug #7881 from Andrew Gierth. Fix things
so that we re-generate the function's cache data if the (sub)transaction
it was made in is no longer active.
Back-patch to 9.2. This might be needed further back, but it's not clear
whether the case can realistically arise without range types, so for now
I'll desist from back-patching further.
Careless use of TopMemoryContext for I/O function data meant that repeated
use of spi_prepare and spi_freeplan would leak memory at the session level,
as per report from Christian Schröder. In addition, spi_prepare
leaked a lot of transient data within the current plperl function's SPI
Proc context, which would be a problem for repeated use of spi_prepare
within a single plperl function call; and it wasn't terribly careful
about releasing permanent allocations in the event of an error, either.
In passing, clean up some copy-and-paste errors in query-lookup error messages.
Alex Hunsaker and Tom Lane
In copy-out mode, the frontend should not send any messages until the
backend has finished streaming and has ended the copy by sending a CopyDone
message. I'm not sure
if it would be legal for the client to send a new query before receiving the
CopyDone message from the backend, but trying to support that would require
bigger changes to the backend code structure.
Fixes an assertion failure reported by Fujii Masao.
This includes backend "COPY TO/FROM PROGRAM '...'" syntax, and corresponding
psql \copy syntax. As with reading/writing files, the backend version is
superuser-only, and in the psql version the program is run on the client.
In passing, the psql \copy STDIN/STDOUT syntax is subtly changed: if
stdin/stdout is quoted, it's now interpreted as a filename. For example,
"\copy foo from 'stdin'" now reads from a file called 'stdin', not from
standard input. Before this, there was no way to specify a filename called
stdin, stdout, pstdin or pstdout.
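A short sketch of the new syntax (the table name and shell commands are only
illustrative):

    -- backend syntax, superuser-only; the program runs on the server
    COPY foo TO PROGRAM 'gzip > /tmp/foo.csv.gz' WITH (FORMAT csv);

    -- psql syntax; the program runs on the client
    \copy foo from program 'gunzip < /tmp/foo.csv.gz' with (format csv)

    -- quoted 'stdin' now names a file called stdin; unquoted stdin still
    -- means standard input
    \copy foo from 'stdin'
    \copy foo from stdin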
This creates a new function in pgport, wait_result_to_str(), which can
be used to convert the exit status of a process, as returned by wait(3),
to a human-readable string.
Etsuro Fujita, reviewed by Amit Kapila.
parseqatom() failed to check for an error return (NULL result) from its
recursive call to parsebranch(), and in consequence could crash with a
null-pointer dereference after an error return. This bug has been there
since day one, but wasn't noticed before, probably because most error cases
in parsebranch() didn't actually lead to returning NULL. Add the missing
error check, and also tweak parsebranch() to exit in a less indirect
fashion after a call to parseqatom() fails.
Report by Tomasz Karlik, fix by me.
The backend grammar treats STDIN and STDOUT as completely interchangeable, so
the above is accepted. Arguably that was a mistake in the backend grammar,
but it's not ecpg's business to second-guess that.
There's no harm in excessive quoting per se, but avoiding it makes the strings
nicer to read. The values can get quite unwieldy when they're first quoted
in single-quotes for inclusion in the connection string, and then all
the single-quotes are escaped when the connection string is passed as a
shell argument.
As with pg_basebackup and pg_receivexlog, it's a bit strange to call the
option -d/--dbname, when in fact you cannot pass a database name in it.
Original patch by Amit Kapila, heavily modified by me.
You could already pass a database name just by passing it as the last
argument, without -d. This is an alias for that, for consistency with the
-d/--dbname option in psql and many other client applications.
The previous commit didn't work on MSVC editions earlier than
Visual Studio 2011, apparently. This works by copying files into the
contrib directory, and making provision to clean them up, which should
work on all editions.
Without this, there's no way to pass arbitrary libpq connection parameters
to these applications. It's a bit strange that the option is called
-d/--dbname, when in fact you can *not* pass a database name in it, but it's
consistent with other client applications where a connection string is also
passed using -d.
Original patch by Amit Kapila, heavily modified by me.
This program relies on rm_desc backend routines and the xlogreader
infrastructure to emit a human-readable rendering of WAL records.
Author: Andres Freund, with many reworks by Álvaro
Reviewed (in a much earlier version) by Peter Eisentraut
Include eval costs of local conditions in remote-estimate mode, and don't
assume the remote eval cost is zero in local-estimate mode. (The best
we can do with that at the moment is to assume a seqscan, which may well
be wildly pessimistic ... but zero won't do at all.)
To get a reasonable local estimate, we need to know the relpages count
for the remote rel, so improve the ANALYZE code to fetch that rather
than just setting the foreign table's relpages field to zero.
On reflection this method seems to be exposing an unreasonable amount of
implementation detail. It wouldn't matter when talking to a remote server
of the identical Postgres version, but it seems likely to make things worse
not better if the remote is a different version with different casting
infrastructure. Instead adopt ruleutils.c's policy of regurgitating the
cast as it was originally specified, including not showing it at all if
it was implicit to start with. (We must do that because for some datatypes
explicit and implicit casts have different semantics.)
The only place we depended on that was in sending numeric type OIDs in
PQexecParams; but we can replace that usage with explicitly casting
each Param symbol in the query string, so that the types are specified
to the remote by name not OID. This makes no immediate difference but
will be essential if we ever hope to support use of non-builtin types.
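For illustration (the table, column, and parameter type are hypothetical), a
deparsed remote query now carries the parameter's type in the SQL text itself
rather than relying on an OID passed through PQexecParams:

    -- before: parameter type conveyed only as a numeric OID in PQexecParams
    SELECT id, val FROM public.remote_tab WHERE id = $1

    -- after: the type is spelled out by name in the query string
    SELECT id, val FROM public.remote_tab WHERE id = $1::integer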
We must still initialize minRecoveryPoint if we start straight with archive
recovery, e.g. when recovering from a normal base backup taken with
pg_start/stop_backup. Otherwise we never consider the system consistent.
Set the remote session's search path to exactly "pg_catalog" at session
start, then schema-qualify only names that aren't in that schema. This
greatly reduces clutter in the generated SQL commands, as seen in the
regression test changes. Per discussion.
Also, rethink use of FirstNormalObjectId as the "built-in object" cutoff
--- FirstBootstrapObjectId is safer, since the former will accept
objects in information_schema for instance.
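A rough sketch of the search-path change described above (the queried table
and the exact deparsed SQL are hypothetical):

    -- issued once when the remote session is established
    SET search_path = pg_catalog;

    -- deparsed commands then schema-qualify only non-catalog names;
    -- built-in functions and operators need no prefix
    SELECT id, upper(name) FROM public.some_table WHERE id = 42;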
If you create a base backup using an atomic filesystem snapshot, and try to
perform PITR starting from that base backup, or if you just kill a master
server and create recovery.conf to put it into standby mode, we don't know
how far we need to recover before reaching consistency. Normally in crash
recovery, we replay all the WAL present in pg_xlog, and assume that we're
consistent after that. And normally in archive recovery, minRecoveryPoint,
backupEndRequired, or backupEndPoint is set in the control file, indicating
how far we need to replay to reach consistency. But if the server was
previously up and running normally, and you kill -9 it or take an atomic
filesystem snapshot, none of those fields are set in the control file.
The solution is to perform crash recovery first, replaying all the WAL in
pg_xlog. After that's done, we assume that the system is consistent like in
normal crash recovery, and switch to archive recovery mode after that.
Per report from Kyotaro HORIGUCHI. In his scenario, recovery.conf was
created after "pg_ctl stop -m i". I'm not sure we need to support that exact
scenario, but we should support backing up using a filesystem snapshot,
which looks identical.
This issue goes back to at least 9.0, where hot standby was introduced and
we started to track when consistency is reached. In 9.1 and 9.2, we would
open up for hot standby too early, and queries could briefly see an
inconsistent state. But 9.2 made it more visible, as we started to PANIC if
we see a reference to a non-existent page during recovery after we've already
reached consistency. This is a fairly big patch, so back-patch to 9.2 only,
where the issue is more visible. We can consider back-patching further after
this has received some more testing in 9.2 and master.
This enables non-backend code, such as pg_xlogdump, to use it easily.
The previous location, in src/backend/catalog/catalog.c, made that
essentially impossible because that file depends on many backend-only
facilities; so this needs to live separately.
There's still a lot of room for improvement, but it basically works,
and we need this to be present before we can do anything much with the
writable-foreign-tables patch. So let's commit it and get on with testing.
Shigeru Hanada, reviewed by KaiGai Kohei and Tom Lane