Although these files get cleaned up if the test runs to completion,
a failure partway through leaves trash all over the floor. The Makefile
ought to be bright enough to get rid of it when you say "make clean".
This patch introduces generic support for ordered-set and hypothetical-set
aggregate functions, as well as implementations of the instances defined in
SQL:2008 (percentile_cont(), percentile_disc(), rank(), dense_rank(),
percent_rank(), cume_dist()). We also added mode() though it is not in the
spec, as well as versions of percentile_cont() and percentile_disc() that
can compute multiple percentile values in one pass over the data.
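For illustration, a couple of the new aggregates in action (table and
column names are hypothetical):

    SELECT percentile_disc(0.5) WITHIN GROUP (ORDER BY income),
           percentile_cont(ARRAY[0.25, 0.5, 0.75]) WITHIN GROUP (ORDER BY income),
           mode() WITHIN GROUP (ORDER BY region)
    FROM households;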
Unlike the original submission, this patch puts full control of the sorting
process in the hands of the aggregate's support functions. To allow the
support functions to find out how they're supposed to sort, a new API
function AggGetAggref() is added to nodeAgg.c. This allows retrieval of
the aggregate call's Aggref node, which may have other uses beyond the
immediate need. There is also support for ordered-set aggregates to
install cleanup callback functions, so that they can be sure that
infrastructure such as tuplesort objects gets cleaned up.
In passing, make some fixes in the recently-added support for variadic
aggregates, and make some editorial adjustments in the recent FILTER
additions for aggregates. Also, simplify use of IsBinaryCoercible() by
allowing it to succeed whenever the target type is ANY or ANYELEMENT.
It was inconsistent that it dealt with other polymorphic target types
but not these.
Atri Sharma and Andrew Gierth; reviewed by Pavel Stehule and Vik Fearing,
and rather heavily editorialized upon by Tom Lane
Instead of changing the tuple xmin to FrozenTransactionId, the combination
of HEAP_XMIN_COMMITTED and HEAP_XMIN_INVALID, which were previously never
set together, is now defined as HEAP_XMIN_FROZEN. A variety of previous
proposals to freeze tuples opportunistically before vacuum_freeze_min_age
is reached have foundered on the objection that replacing xmin by
FrozenTransactionId might hinder debugging efforts when things in this
area go awry; this patch is intended to solve that problem by keeping
the XID around (but largely ignoring the value to which it is set).
Third-party code that checks for HEAP_XMIN_INVALID on tuples where
HEAP_XMIN_COMMITTED might be set will be broken by this change. To fix,
use the new accessor macros in htup_details.h rather than consulting the
bits directly. HeapTupleHeaderGetXmin has been modified to return
FrozenTransactionId when the infomask bits indicate that the tuple is
frozen; use HeapTupleHeaderGetRawXmin when you already know that the
tuple isn't marked committed or frozen, or want the raw value anyway.
We currently do this in routines that display the xmin for user consumption,
in tqual.c where it's known to be safe and important for the avoidance of
extra cycles, and in the function-caching code for various procedural
languages, which shouldn't invalidate the cache just because the tuple
gets frozen.
Robert Haas and Andres Freund
Previously, lookups of non-existent user names could report "Success";
by resetting errno, they now report "User does not exist". This also
centralizes the user name lookup code in libpgport.
Report and analysis by Nicolas Marchildon; patch by me
Integer overflow produced negative percent-done and remaining-time
values, something like this:
239300000 of 3800000000 tuples (-48%) done (elapsed 226.86 s, remaining -696.10 s).
Previous commit e5de601267 modified dblink
to ensure client encoding matched the server. However, the added
PQsetClientEncoding() call introduced significant overhead. Restore original
performance in the common case where client encoding already matches
server encoding by doing nothing in that case. Applies to all active
branches.
Issue reported and work sponsored by Zonar Systems.
The query ID is the internal hash identifier of the statement,
and was not previously available in the pg_stat_statements view.
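With this in place the identifier can be read straight out of the view;
for instance (queryid being the column added here):

    SELECT queryid, calls, query
    FROM pg_stat_statements
    ORDER BY calls DESC LIMIT 5;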
Daniel Farina, Sameer Thakur and Peter Geoghegan, reviewed by me.
Previously, a missing or invalid service file resulted in a NULL
return. Also fix
pg_upgrade to report "out of memory" for a null return from
PQconndefaults().
Patch by Steve Singer, rewritten by me
This function formerly crashed if called as a statement-level trigger,
or if a column-name argument wasn't given.
In passing, add the trigger name to all error messages from the function.
(None of them are expected cases, so this shouldn't pose any compatibility
risk.)
Marc Cousin, reviewed by Sawada Masahiko
The command we're telling people to type needs to include double-quoting
around the unfortunately-chosen extension name. Twiddle the textual
quoting so that it looks somewhat sane. Per gripe from roadrunner6.
This patch adds the ability to write TABLE( function1(), function2(), ...)
as a single FROM-clause entry. The result is the concatenation of the
first row from each function, followed by the second row from each
function, etc; with NULLs inserted if any function produces fewer rows than
others. This is believed to be a much more useful behavior than what
Postgres currently does with multiple SRFs in a SELECT list.
This syntax also provides a reasonable way to combine use of column
definition lists with WITH ORDINALITY: put the column definition list
inside TABLE(), where it's clear that it doesn't control the ordinality
column as well.
Also implement SQL-compliant multiple-argument UNNEST(), by turning
UNNEST(a,b,c) into TABLE(unnest(a), unnest(b), unnest(c)).
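As a hypothetical example, this produces three rows, NULL-padding the
shorter result:

    SELECT * FROM TABLE(unnest(ARRAY[1,2,3]), unnest(ARRAY['a','b'])) AS t(n, s);
    -- (1,'a'), (2,'b'), (3,NULL)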
The SQL standard specifies TABLE() with only a single function, not
multiple functions, and it seems to require an implicit UNNEST() which is
not what this patch does. There may be something wrong with that reading
of the spec, though, because if it's right then the spec's TABLE() is just
a pointless alternative spelling of UNNEST(). After further review of
that, we might choose to adopt a different syntax for what this patch does,
but in any case this functionality seems clearly worthwhile.
Andrew Gierth, reviewed by Zoltán Böszörményi and Heikki Linnakangas, and
significantly revised by me
This only affects upgrades from 8.3 currently, and is harmless as the
child just generates an error in the script, but we should get it right
in case we ever need this for more complex uses.
Per report from Peter Eisentraut
Previously, pg_upgrade would abort copy_file() on a short write without
setting errno, which the caller would report as an error with the
message "Success". We assume ENOSPC in that case, as we do elsewhere in
the code. Also set errno in some other error cases in copy_file() to
avoid bogus "Success" error messages.
This was broken in 6b711cf37c, so 9.2 and
before are OK.
pgbench formerly failed on lines longer than BUFSIZ, unexpectedly
splitting them into multiple commands. Allow it to work with any
length of input line.
Sawada Masahiko
The NetBSD shell apparently returns non-zero from an unset command if
the variable is already unset. This matters when, as in pg_upgrade's
test.sh, we are working under 'set -e'. To protect against this, we
first set the PG variables to an empty string before unsetting them
completely.
Error found on buildfarm member coypu, solution from Rémi Zara.
Previous commit modified the test case, but I didn't update cube.out
expected output file in previous commit because it was not needed by the
platforms I have easy access to. Buildfarm animal 'dugong', running
"Debian 4.0 icc 10.1.011 ia64", has now gone red because of that, so update
it now.
Also adjust cube_3.out. According to git history, it was added to support
64-bit MinGW. There is no such animal in the buildfarm, so I'm doing this
blindly, but it was added quite recently so maybe someone still cares.
If the lower left and upper right corners of a cube are the same, set a
flag in the cube header, and only store one copy of the coordinates. That
cuts the on-disk size into half for the common case that the cube datatype
is used to represent points rather than boxes.
The new format is backwards-compatible with the old one, so pg_upgrade
still works. However, to get the space savings, the data needs to be
rewritten. A simple VACUUM FULL or REINDEX is not enough, as the old
Datums will just be moved to the new heap/index as is. A pg_dump and
reload, or something similar like casting to text and back, will do the
trick.
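For instance, a rewrite can be forced with something along these lines
(hypothetical names):

    ALTER TABLE points ALTER COLUMN c TYPE cube USING c::text::cube;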
This patch deliberately doesn't update all the alternative expected output
files, as I don't have access to machines that produce those outputs. I'm
not sure if they are still relevant, but if they are, the buildfarm will
tell us and produce the diff required to fix it. If none of the buildfarm
animals need them, they should be removed altogether.
Patch by Stas Kelvich.
Add asprintf(), pg_asprintf(), and psprintf() to simplify string
allocation and composition. Replacement implementations taken from
NetBSD.
Reviewed-by: Álvaro Herrera <alvherre@2ndquadrant.com>
Reviewed-by: Asif Naeem <anaeem.it@gmail.com>
REFRESH MATERIALIZED VIEW CONCURRENTLY was broken for any matview
containing a column of a type without a default btree operator
class. It also did not produce results consistent with a non-
concurrent REFRESH or a normal view if any column was of a type
which allowed user-visible differences between values which
compared as equal according to the type's default btree opclass.
Concurrent matview refresh was modified to use the new operators
to solve these problems.
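With the fix in place, and given the unique index that CONCURRENTLY
requires, usage is simply:

    REFRESH MATERIALIZED VIEW CONCURRENTLY my_matview;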
Documentation was added for record comparison, both for the
default btree operator class for record, and the newly added
operators. Regression tests now check for proper behavior both
for a matview with a box column and a matview containing a citext
column.
Reviewed by Steve Singer, who suggested some of the doc language.
Isolate transaction latency (elapsed time between submitting first
command and receiving response to last command) from client-side delays
pertaining to the --rate schedule. Under --rate, report schedule lag as
defined in the documentation. Report latency standard deviation
whenever we collect the measurements to do so. All of these changes
affect --progress messages and the final report.
Fabien COELHO, reviewed by Pavel Stehule.
With the PGXS boilerplate in place, pg_xlogdump currently fails with an
ominous error message that certain targets cannot be built because
certain files do not exist. Remove that and instead throw a quick error
message alerting the user of the actual problem, which should be easier
to diagnose than the status quo.
Andres Freund
This should have been done when the json functionality was added to
hstore in 9.3.0. To handle this correctly, the upgrade script therefore
uses conditional logic by using plpgsql in a DO statement to add the two
new functions and the new cast. If hstore_to_json_loose is detected as
already present and dependent on the hstore extension nothing is done.
This will require that the database be loaded with plpgsql.
People who have installed the earlier and spurious 1.1 version of hstore
will need to do:
ALTER EXTENSION hstore UPDATE;
to pick up the new functions properly.
Previously a one-dimensional empty array was returned, but its text
representation matched a zero-dimensional array, and there is no way to
dump/reload a one-dimensional empty array.
BACKWARD INCOMPATIBILITY
Per report from elein
Using the infrastructure provided by this patch, it's possible either
to wait for the startup of a dynamically-registered background worker,
or to poll the status of such a worker without waiting. In either
case, the current PID of the worker process can also be obtained.
As usual, worker_spi is updated to demonstrate the new functionality.
Patch by me. Review by Andres Freund.
The planner largely failed to consider the possibility that a
PlaceHolderVar's expression might contain a lateral reference to a Var
coming from somewhere outside the PHV's syntactic scope. We had a previous
report of a problem in this area, which I tried to fix in a quick-hack way
in commit 4da6439bd8, but Antonin Houska
pointed out that there were still some problems, and investigation turned
up other issues. This patch largely reverts that commit in favor of a more
thoroughly thought-through solution. The new theory is that a PHV's
ph_eval_at level cannot be higher than its original syntactic level. If it
contains lateral references, those don't change the ph_eval_at level, but
rather they create a lateral-reference requirement for the ph_eval_at join
relation. The code in joinpath.c needs to handle that.
Another issue is that createplan.c wasn't handling nested PlaceHolderVars
properly.
In passing, push knowledge of lateral-reference checks for join clauses
into join_clause_is_movable_to. This is mainly so that FDWs don't need
to deal with it.
This patch doesn't fix the original join-qual-placement problem reported by
Jeremy Evans (and indeed, one of the new regression test cases shows the
wrong answer because of that). But the PlaceHolderVar problems need to be
fixed before that issue can be addressed, so committing this separately
seems reasonable.
These modules used the YYPARSE_PARAM macro, which has been deprecated
by the bison folk since 1.875, and which they finally removed in 3.0.
Adjust the code to use the replacement facility, %parse-param, which
is a much better solution anyway since it allows specification of the
type of the extra parser parameter. We can thus get rid of a lot of
unsightly casting.
Back-patch to all active branches, since somebody might try to build
a back branch with up-to-date tools.
Pg_Upgrade cannot write the command string to the log file and then call
system() to write to the same file without causing occasional file-share
errors on Windows. So instead, write the command string to the log file
after system(), in those cases.
Backpatch to 9.3.
Tuples belonging to uncommitted transactions should not be
counted as dead.
This is arguably a bug fix that should be back-patched, but
as no one ever noticed until it came time to try to get rid
of SnapshotNow, I'm only doing this in master for now.
Since pg_upgrade -j on Windows uses threads, calling umask()
before/after opening a file via fopen_priv() is no longer possible, so
set umask() as we enter the thread-creating loop, and reset it on exit.
Also adjust internal fopen_priv() calls to just use fopen().
Backpatch to 9.3beta.
This fixes the problem of passing the wrong function pointer when doing
parallel copy/link operations on Windows.
Backpatched to 9.3beta.
Found and patch supplied by Andrew Dunstan
This throttles the transaction rate to a specified target tps, rather than
running at maximum speed. Patch contributed by Fabien COELHO, reviewed by Greg Smith,
and slight editing by me.
Per discussion, it's desirable to eliminate all remaining uses of
SnapshotNow, because it has unpleasant semantics: race conditions
can result in seeing multiple versions of a concurrently updated
row, or none at all. By using GetActiveSnapshot() here, callers
will see exactly those rows that would have been visible if the
invoking query had scanned the table using, for example, a SELECT
statement.
This is slightly different from the old behavior, because commits
that happen concurrently with the scan will not affect the results.
In REPEATABLE READ or SERIALIZABLE modes, where transaction
snapshots are used, commits that have happened since the start of
the transaction will also not affect the results. It is hoped
that this minor incompatibility will be thought an improvement,
or at least no worse than what we did before.
Per discussion on pgsql-hackers, these aren't really needed. Interim
versions of the background worker patch had the worker starting with
signals already unblocked, which would have made this necessary.
But the final version does not, so we don't really need it; and it
doesn't work well with the new facility for starting dynamic background
workers, so just rip it out.
Also per discussion on pgsql-hackers, back-patch this change to 9.3.
It's best to get the API break out of the way before we do an
official release of this facility, to avoid more pain for extension
authors later.
Previously, these functions took a HeapTupleHeader, but upcoming
patches for logical replication will introduce a new snapshot
type under which the tuple's TID will be used to look up (CMIN, CMAX)
for visibility determination purposes. This makes that information
available. Code churn is minimal since HeapTupleSatisfiesVisibility
took the HeapTuple anyway, and dereferenced it before calling the
satisfies function.
Independently of logical replication, this allows t_tableOid and
t_self to be cross-checked via assertions in tqual.c. This seems
like a useful way to make sure that all callers are setting these
values properly, which has been previously put forward as
desirable.
Andres Freund, reviewed by Álvaro Herrera
This allows us to specify the target relation in several ways:
'relname', 'schemaname.relname', or OID, in all pgstattuple functions.
pgstatindex() and pg_relpages() previously could not accept an OID as
the argument.
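All of the following forms now work, for instance:

    SELECT * FROM pgstattuple('pg_catalog.pg_class');
    SELECT * FROM pgstatindex('pg_class_oid_index');
    SELECT pg_relpages('pg_class'::regclass::oid);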
Per discussion on -hackers, we decided to keep two types of interfaces,
with regclass-type and TEXT-type argument, for each pgstattuple
function for backward compatibility. The functions
which take a TEXT-type argument will be deprecated in a future release.
Patch by Satoshi Nagayasu, reviewed by Rushabh Lathia and Fujii Masao.
This is SQL-standard with a few extensions, namely support for
subqueries and outer references in clause expressions.
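For example, counting two subsets of a table in one pass (hypothetical
names):

    SELECT count(*) FILTER (WHERE flagged),
           count(*) FILTER (WHERE NOT flagged)
    FROM events;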
catversion bump due to change in Aggref and WindowFunc.
David Fetter, reviewed by Dean Rasheed.
There is a new API, RegisterDynamicBackgroundWorker, which allows
an ordinary user backend to register a new background worker during
normal running. This means that it's no longer necessary for all
background workers to be registered during processing of
shared_preload_libraries, although the option of registering workers
at that time remains available.
When a background worker exits and will not be restarted, the
slot previously used by that background worker is automatically
released and becomes available for reuse. Slots used by background
workers that are configured for automatic restart can't (yet) be
released without shutting down the system.
This commit adds a new source file, bgworker.c, and moves some
of the existing control logic for background workers there.
Previously, there was little enough logic that it made sense to
keep everything in postmaster.c, but not any more.
This commit also makes the worker_spi contrib module into an
extension and adds a new function, worker_spi_launch, which can
be used to demonstrate the new facility.
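A minimal demonstration; the integer argument selecting the worker
instance is an assumption about worker_spi_launch's signature:

    CREATE EXTENSION worker_spi;
    SELECT worker_spi_launch(1);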
This prevents the client from gobbling up too much memory when the
number of large objects to be removed is very large.
Andrew Dunstan, reviewed by Josh Kupershmidt
Treat TOAST indexes just the same as normal ones, and get the OID
of a TOAST index from pg_index rather than pg_class.reltoastidxid.
This change allows us to handle multiple TOAST indexes, and is
required infrastructure for the upcoming REINDEX CONCURRENTLY
feature.
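The TOAST index(es) of a given table can now be found with a query
along these lines (hypothetical table name):

    SELECT indexrelid::regclass
    FROM pg_index
    WHERE indrelid = (SELECT reltoastrelid FROM pg_class
                      WHERE oid = 'my_table'::regclass);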
Patch by Michael Paquier, reviewed by Andres Freund and me.
On the command line, GUC option strings are handled by the guc parser,
not by the shell parser, so '' is the proper way to represent a
zero-length string. This reverts commit
3132a9b7ab.
SnapshotNow scans have the undesirable property that, in the face of
concurrent updates, the scan can fail to see either the old or the new
versions of the row. In many cases, we work around this by requiring
DDL operations to hold AccessExclusiveLock on the object being
modified; in some cases, the existing locking is inadequate and random
failures occur as a result. This commit doesn't change anything
related to locking, but will hopefully pave the way to allowing lock
strength reductions in the future.
The major issue that has held us back from making this change in the past
is that taking an MVCC snapshot is significantly more expensive than
using a static special snapshot such as SnapshotNow. However, testing
of various worst-case scenarios reveals that this problem is not
severe except under fairly extreme workloads. To mitigate those
problems, we avoid retaking the MVCC snapshot for each new scan;
instead, we take a new snapshot only when invalidation messages have
been processed. The catcache machinery already requires that
invalidation messages be sent before releasing the related heavyweight
lock; else other backends might rely on locally-cached data rather
than scanning the catalog at all. Thus, making snapshot reuse
dependent on the same guarantees shouldn't break anything that wasn't
already subtly broken.
Patch by me. Review by Michael Paquier and Andres Freund.
Previous code had old/new prefixes on option values, e.g.
--old-datadir=OLDDATADIR. Remove them, for simplicity; now:
--old-datadir=DATADIR. Also update docs to do the same.
Change -u (user) option to -U, for consistency with other tools like
pg_dump and psql. Also expand --user to --username, again for
consistency.
BACKWARD INCOMPATIBILITY
Extend the FDW API (which we already changed for 9.3) so that an FDW can
report whether specific foreign tables are insertable/updatable/deletable.
The default assumption continues to be that they're updatable if the
relevant executor callback function is supplied by the FDW, but finer
granularity is now possible. As a test case, add an "updatable" option to
contrib/postgres_fdw.
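For example, with hypothetical server and table names:

    ALTER SERVER remote_srv OPTIONS (ADD updatable 'false');
    ALTER FOREIGN TABLE remote_tab OPTIONS (ADD updatable 'true');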
This patch also fixes the information_schema views, which previously did
not think that foreign tables were ever updatable, and fixes
view_is_auto_updatable() so that a view on a foreign table can be
auto-updatable.
initdb forced due to changes in information_schema views and the functions
they rely on. This is a bit unfortunate to do post-beta1, but if we don't
change this now then we'll have another API break for FDWs when we do
change it.
Dean Rasheed, somewhat editorialized on by Tom Lane
Autovacuum occurring while the test runs could allow some of the inserts to
go into recycled space, thus changing the output ordering of later queries.
While we could complicate those queries to force sorting of their output
rows, it doesn't seem like that would make the test better in any
meaningful way, and conceivably it could hide unexpected diffs. Instead,
tweak the affected queries so that the inserted rows aren't updated by the
following UPDATE. Per buildfarm.
Make slightly better decisions about indentation than what pgindent
is capable of. Mostly breaking out long function calls into one
line per argument, with a few other minor adjustments.
No functional changes; all whitespace.
pgindent ran cleanly (didn't change anything) after.
Passes all regressions.
The behavior is that the required sequence is created locally, which is
appropriate because the default expression will be evaluated locally.
Per gripe from Brad Nicholson that this case was refused with a confusing
error message. We could have improved the error message but it seems
better to just allow the case.
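A sketch of the now-accepted case, with hypothetical names:

    CREATE FOREIGN TABLE remote_log (
        id   serial,
        line text
    ) SERVER remote_srv;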
Also, remove ALTER TABLE's arbitrary prohibition against being applied to
foreign tables, which was pretty inconsistent considering we allow it for
views, sequences, and other relation types that aren't even called tables.
This is needed to avoid breaking pg_dump, which sometimes emits column
defaults using separate ALTER TABLE commands. (I think this can happen
even when the default is not associated with a sequence, so that was a
pre-existing bug once we allowed column defaults for foreign tables.)
Previously, the port number used in this test script was hard-wired at
pg_upgrade's default of 50432; which is not so great because parallel build
runs might conflict. Commit 3d53173e20
removed this setting for the postmasters started by the script proper
(not by pg_upgrade), which didn't do anything to fix that problem and also
guaranteed a failure if there was a live postmaster at the build's default
port number. Instead, select a non-conflicting temporary port number in
the same way that pg_regress.c does. (Its method isn't entirely
bulletproof, but given the lack of complaints I'm not going to worry
about that today.)
In passing, unset MAKEFLAGS and MAKELEVEL to avoid problems with the
script's internal invocations of make, for the same reason pg_regress.c
does: it could cause problems in a parallel make.
This helps guard against changes in the set of reserved keywords from
one version to another. In theory it should only be an issue if we
de-reserve a keyword in a newer release, since that can create the type
of problem shown in bug #8128.
Back-patch to 9.1 where the --quote-all-identifiers option was added.
This code was left over from when pg_upgrade paid attention to PGPORT.
Now it only affects the regression test run before the test run of
pg_upgrade. You can still set PGPORT for that, but there is no reason
to have the test driver default it to 50432.
Choose a saner ordering of parameters (adding a new input param after
the output params seemed a bit random), update the function's header
comment to match reality (cmon folks, is this really that hard?),
get rid of the useless and sloppily-defined distinction between
PROCESS_UTILITY_SUBCOMMAND and PROCESS_UTILITY_GENERATED.
The initial coding just descended the index if any of the target trigrams
were possibly present at the next level down. But actually we can apply
trigramsMatchGraph() so as to take advantage of AND requirements when there
are some. The input data might contain false positive matches, but that
can only result in a false positive result, not false negative, so it's
safe to do it this way.
Alexander Korotkov
This changes the behavior of the start and stop actions to exit
successfully if the server was already started or stopped.
This changes the default behavior of the start action: Before, if the
server was already running, it would print a message and succeed. Now,
that situation will result in an error. When running in idempotent
mode, no message is printed and pg_ctl exits successfully.
It was considered to just make the idempotent behavior the default and
only option, but pg_upgrade needs the old behavior.
This wasn't addressed in the original patch, but it doesn't take very
much additional code to cover the case, so let's get it done.
Since pg_trgm 1.1 hasn't been released yet, I just changed the definition
of what's in it, rather than inventing a 1.2.
Make use of some GUC variables, and add SIGHUP handling to reload
the config file. Patch submitted by Guillaume Lelarge.
Also, report to pg_stat_activity. Per report from Marc Cousin, add
setting of statement start time.
This works by extracting trigrams from the given regular expression,
in generally the same spirit as the previously-existing support for
LIKE searches, though of course the details are far more complicated.
Currently, only GIN indexes are supported. We might be able to make
it work with GiST indexes later.
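Usage mirrors the existing LIKE support; for example, with a
hypothetical table:

    CREATE INDEX docs_body_trgm ON docs USING gin (body gin_trgm_ops);
    SELECT * FROM docs WHERE body ~ 'foo(bar)+';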
The implementation includes adding API functions to backend/regex/
to provide a view of the search NFA created from a regular expression.
These functions are meant to be generic enough to be supportable in
a standalone version of the regex library, should that ever happen.
Alexander Korotkov, reviewed by Heikki Linnakangas and Tom Lane
contrib/pg_trgm's make_trigrams() was coded to ignore multibyte character
boundaries and just make trigrams from bytes if USE_WIDE_UPPER_LOWER wasn't
defined. This is a bit odd, since there's no obvious reason why trigram
compaction rules should depend on the presence of towlower() and friends.
What's more, there was an Assert() that would fail if that code path was
fed any multibyte characters.
We need to do something about this since the pending regex-indexing patch
has an assumption that you get just one "trgm" from any three characters.
The best solution seems to be to remove the USE_WIDE_UPPER_LOWER
dependency, which shouldn't really have been there in the first place.
The second loop in make_trigrams() is now just a fast path and not a
potentially incompatible algorithm.
If there is anybody still using Postgres on machines without wcstombs() or
towlower(), and they have non-ASCII data indexed by pg_trgm, they'll need
to REINDEX those indexes after pg_upgrade to 9.3, else searches may fail
incorrectly. It seems likely that there are no such installations, though.
In passing, rename cnt_trigram to compact_trigram, which seems to better
describe its functionality, and improve make_trigrams' test for whether it
has to use the slow path or not (per a suggestion from Alexander Korotkov).
Now that pg_dump no longer dumps invalid indexes, per commit
683abc73df, have pg_upgrade also skip
them. Previously pg_upgrade threw an error if invalid indexes existed.
Backpatch to 9.2, 9.1, and 9.0 (where pg_upgrade was added to git)
Windows sometimes gets upset if we rename a large directory and then try
to use the old name quickly, as seen in occasional buildfarm failures.
So we avoid that by building the old version in the intended
destination in the first place instead of renaming it, similar to the
change made for the same reason in commit b7f8465c.
The main change here is to call security_compute_create_name_raw()
rather than security_compute_create_raw(). This ups the minimum
requirement for libselinux from 2.0.99 to 2.1.10, but it looks
like most distributions will have picked that up before 9.3 is out.
KaiGai Kohei
One of the use-cases for postgres_fdw is extracting data from older PG
servers, so cross-version compatibility is important. Document what we
can do here, and further annotate some of the coding choices that create
compatibility constraints. In passing, remove one unnecessary
incompatibility with old servers, namely assuming that we didn't need to
quote the timezone name 'UTC'.
If the remote database's settings of these GUCs are different from ours,
ambiguous datetime values may be read incorrectly. To fix, temporarily
adopt the remote server's settings while we ingest a query result.
This is not a complete fix, since it doesn't do anything about ambiguous
values in commands sent to the remote server; but there seems little we
can do about that end of it given dblink's entirely textual API for
transmitted commands.
Back-patch to 9.2. The hazard exists in all versions, but this patch
would need more work to apply before 9.2. Given the lack of field
complaints about this issue, it doesn't seem worth the effort at present.
Daniel Farina and Tom Lane
Checksums are set immediately prior to flush out of shared buffers
and checked when pages are read in again. Hint bit setting will
require a full page write when the block is dirtied, which causes various
infrastructure changes. Extensive comments, docs and README.
A WARNING message is thrown if the checksum fails on a non-all-zeroes
page; an ERROR is then thrown as well, but it can be disabled with
ignore_checksum_failure = on.
Feature enabled by an initdb option, since transition from option off
to option on is long and complex and has not yet been implemented.
Default is not to use checksums.
Checksum used is WAL CRC-32 truncated to 16-bits.
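For reference, checksums are enabled at cluster creation time, and the
failure response can be relaxed per session:

    -- cluster created with: initdb --data-checksums
    SET ignore_checksum_failure = on;  -- degrade the ERROR to a WARNING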
Simon Riggs, Jeff Davis, Greg Smith
Wide input and assistance from many community members. Thank you.
This should provide some marginal overall savings, since it surely takes
many more cycles for the remote server to deal with the NULL columns than
it takes for postgres_fdw not to emit them. But really the reason is to
keep the emitted queries from looking quite so silly ...
I wasn't going to ship this without having at least some example of how
to do that. This version isn't terribly bright; in particular it won't
consider any combinations of multiple join clauses. Given the cost of
executing a remote EXPLAIN, I'm not sure we want to be very aggressive
about doing that, anyway.
In support of this, refactor generate_implied_equalities_for_indexcol
so that it can be used to extract equivalence clauses that aren't
necessarily tied to an index.
Remove use of PageSetTLI() from all page manipulation functions
and adjust README to indicate change in the way we make changes
to pages. Repurpose those bytes into the pd_checksum field and
explain how that works in comments about page header.
Refactoring ahead of actual feature patch which would make use
of the checksum field, arriving later.
Jeff Davis, with comments and doc changes by Simon Riggs
Direction suggested by Robert Haas; many others providing
review comments.
The semantics of signal(2) are more variable than one could wish; in
particular, on strict-POSIX platforms the signal handler will be reset
to SIG_DFL when the signal is delivered. This demonstrably breaks
pg_test_fsync's use of SIGALRM. The other changes I made are not
absolutely necessary today, because the called handlers all exit the
program anyway. But it seems like a good general practice to use
pqsignal() exclusively in Postgres code, now that we have it available
everywhere.
We had two copies of this function in the backend and libpq, which was
already pretty bogus, but it turns out that we need it in some other
programs that don't use libpq (such as pg_test_fsync). So put it where
it probably should have been all along. The signal-mask-initialization
support in src/backend/libpq/pqsignal.c stays where it is, though, since
we only need that in the backend.
Clarify the docs explaining what commit_delay does, and add a
recommendation about a useful value for it, namely half of the single-page
fsync time reported by pg_test_fsync. This is informed by testing of
the new-in-9.3 implementation of commit_delay; in prior versions it
was far harder to arrive at a useful setting.
In passing, do some wordsmithing and markup-fixing in the same general
area.
Also, change pg_test_fsync's default time-per-test from 2 seconds to 5.
The old value was about the minimum at which the results could be taken
seriously at all, and so seems a tad optimistic as a default.
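A hypothetical worked example: if pg_test_fsync reports that a
single-page fsync takes about 1 ms, half of that suggests:

    SET commit_delay = 500;  -- in microseconds, i.e. 0.5 ms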
Peter Geoghegan, reviewed by Noah Misch; some additional editing by me
Treat expressions as being remotely executable only if all collations used
in them are determined by Vars of the foreign table. This means that, if
the foreign server gets different answers than we do, it's the user's fault
for not having marked the foreign table columns with collations equivalent
to the remote table's. This rule allows most simple expressions such as
"var < 'constant'" to be sent to the remote side, because the constant
isn't determining the collation (the Var's collation would win). There's
still room for improvement, but it's hard to see how to do it without a
lot more knowledge and/or assumptions about what the remote side will do.
Adopt the position that only locally-defined defaults matter. Any defaults
defined in the remote database do not affect insertions performed through
a foreign table (unless they are for columns not known to the foreign
table). While it'd arguably be more useful to permit remote defaults to be
used, making that work in a consistent fashion requires far more work than
seems possible for 9.3.
A test intended to provoke an error on the remote side was coded in such
a way that multiple rows should be updated, so the output would vary
depending on which one was processed first. Per buildfarm.
For datatypes whose output formatting depends on one or more GUC settings,
we have to worry about whether the other server will interpret the value
the same way it was meant. pg_dump has been aware of this hazard for a
long time, but postgres_fdw needs to deal with it too. To fix data
retrieval from the remote server, set the necessary remote GUC settings at
connection startup. (We were already assuming that settings made then
would persist throughout the remote session.) To fix data transmission to
the remote server, temporarily force the relevant GUCs to the right values
when we're about to convert any data values to text for transmission.
This is all pretty grotty, and not very cheap either. It's tempting to
think of defining one uber-GUC that would override any settings that might
render printed data values unportable. But of course, older remote servers
wouldn't know any such thing and would still need this logic.
While at it, revert commit f7951eef89, since
this provides a real fix. (The timestamptz given in the error message
returned from the "remote" server will now reliably be shown in UTC.)
This adds the following:
json_agg(anyrecord) -> json
to_json(any) -> json
hstore_to_json(hstore) -> json (also used as a cast)
hstore_to_json_loose(hstore) -> json
The last provides heuristic treatment of numbers and booleans.
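For instance (output shown as a comment; exact formatting may vary):

    SELECT hstore_to_json_loose('"a"=>"1", "b"=>"t", "c"=>"foo"'::hstore);
    -- {"a": 1, "b": true, "c": "foo"}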
Also, in json generation, if any non-builtin type has a cast to json,
that function is used instead of the type's output function.
Andrew Dunstan, reviewed by Steve Singer.
Catalog version bumped.
We probably need to tell the remote server to use specific timezone and
datestyle settings, and maybe other things. But for now let's just hack
the postgres_fdw regression test to not provoke failures when run in
non-EST5EDT environments. Per buildfarm.
This patch adds the core-system infrastructure needed to support updates
on foreign tables, and extends contrib/postgres_fdw to allow updates
against remote Postgres servers. There's still a great deal of room for
improvement in optimization of remote updates, but at least there's basic
functionality there now.
KaiGai Kohei, reviewed by Alexander Korotkov and Laurenz Albe, and rather
heavily revised by Tom Lane.
A materialized view has a rule just like a view and a heap and
other physical properties like a table. The rule is only used to
populate the table, references in queries refer to the
materialized data.
This is a minimal implementation, but should still be useful in
many cases. Currently data is only populated "on demand" by the
CREATE MATERIALIZED VIEW and REFRESH MATERIALIZED VIEW statements.
It is expected that future releases will add incremental updates
with various timings, and that a more refined concept of defining
what is "fresh" data will be developed. At some point it may even
be possible to have queries use a materialized view in place of
references to underlying tables, but that requires the other
above-mentioned features to be working first.
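Basic usage, with hypothetical names:

    CREATE MATERIALIZED VIEW order_totals AS
        SELECT customer_id, sum(amount) AS total
        FROM orders GROUP BY customer_id;
    REFRESH MATERIALIZED VIEW order_totals;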
Much of the documentation work by Robert Haas.
Review by Noah Misch, Thom Brown, Robert Haas, Marko Tiikkaja
Security review by KaiGai Kohei, with a decision on how best to
implement sepgsql still pending.
This includes backend "COPY TO/FROM PROGRAM '...'" syntax, and corresponding
psql \copy syntax. Like with reading/writing files, the backend version is
superuser-only, and in the psql version, the program is run in the client.
In passing, the psql \copy STDIN/STDOUT syntax is subtly changed: if
the stdin/stdout is quoted, it's now interpreted as a filename. For example,
"\copy foo from 'stdin'" now reads from a file called 'stdin', not from
standard input. Before this, there was no way to specify a filename called
stdin, stdout, pstdin or pstdout.
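For example (hypothetical paths; the backend form is superuser-only as
noted above):

    COPY big_table FROM PROGRAM 'gunzip -c /tmp/big_table.gz';
    \copy big_table to program 'gzip > /tmp/big_table.gz'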
This creates a new function in pgport, wait_result_to_str(), which can
be used to convert the exit status of a process, as returned by wait(3),
to a human-readable string.
Etsuro Fujita, reviewed by Amit Kapila.
This program relies on rm_desc backend routines and the xlogreader
infrastructure to emit human-readable rendering of WAL records.
Author: Andres Freund, with many reworks by Álvaro
Reviewed (in a much earlier version) by Peter Eisentraut
Include eval costs of local conditions in remote-estimate mode, and don't
assume the remote eval cost is zero in local-estimate mode. (The best
we can do with that at the moment is to assume a seqscan, which may well
be wildly pessimistic ... but zero won't do at all.)
To get a reasonable local estimate, we need to know the relpages count
for the remote rel, so improve the ANALYZE code to fetch that rather
than just setting the foreign table's relpages field to zero.
On reflection this method seems to be exposing an unreasonable amount of
implementation detail. It wouldn't matter when talking to a remote server
of the identical Postgres version, but it seems likely to make things worse
not better if the remote is a different version with different casting
infrastructure. Instead adopt ruleutils.c's policy of regurgitating the
cast as it was originally specified; including not showing it at all, if
it was implicit to start with. (We must do that because for some datatypes
explicit and implicit casts have different semantics.)
The only place we depended on that was in sending numeric type OIDs in
PQexecParams; but we can replace that usage with explicitly casting
each Param symbol in the query string, so that the types are specified
to the remote by name not OID. This makes no immediate difference but
will be essential if we ever hope to support use of non-builtin types.
Set the remote session's search path to exactly "pg_catalog" at session
start, then schema-qualify only names that aren't in that schema. This
greatly reduces clutter in the generated SQL commands, as seen in the
regression test changes. Per discussion.
Also, rethink use of FirstNormalObjectId as the "built-in object" cutoff
--- FirstBootstrapObjectId is safer, since the former will accept
objects in information_schema for instance.
There's still a lot of room for improvement, but it basically works,
and we need this to be present before we can do anything much with the
writable-foreign-tables patch. So let's commit it and get on with testing.
Shigeru Hanada, reviewed by KaiGai Kohei and Tom Lane
If users create tablespaces inside the old cluster directory, it is
impossible for the delete script to delete _only_ the old cluster files,
so don't create a script in that case, and issue a message to the user.
Cases such as similarity('', '') produced a NaN result due to computing
0/0. Per discussion, make it return zero instead.
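That is, the degenerate case now behaves sanely:

    SELECT similarity('', '');  -- returns 0 now, formerly NaN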
This appears to be the basic cause of bug #7867 from Michele Baravalle,
although it remains unclear why her installation doesn't think Cyrillic
letters are letters.
Back-patch to all active branches.
libpgcommon is a new static library to allow sharing code among the
various frontend programs and backend; this lets us eliminate duplicate
implementations of common routines. We avoid libpgport, because that's
intended as a place for porting issues; per discussion, it seems better
to keep them separate.
The first use case, and the only implemented by this patch, is pg_malloc
and friends, which many frontend programs were already using.
At the same time, we can use this to provide palloc emulation functions
for the frontend; this way, some palloc-using files in the backend can
also be used by the frontend cleanly. To do this, we change palloc() in
the backend to be a function instead of a macro on top of
MemoryContextAlloc(). This was previously believed to cause loss of
performance, but this implementation has been tweaked by Tom and Andres
so that on modern compilers it provides a slight improvement over the
previous one.
This lets us clean up some places that were already getting by with
localized hacks.
Most of the pg_malloc/palloc changes in this patch were authored by
Andres Freund. Zoltán Böszörményi also independently provided a form of
that. libpgcommon infrastructure was authored by Álvaro.
The previous coding supposed that the first differing bytes in two varlena
datums must have the same sign difference as their overall comparison
result. This is obviously bogus for text strings in non-C locales, and
probably wrong for numeric, and even for bytea I think it was wrong on
machines where char is signed. When the assumption failed, the function
could deliver a zero or negative penalty in situations where such a result
is quite ridiculous, leading the core GiST code to make very bad page-split
decisions.
To fix, take the absolute values of the byte-level differences. Also,
switch the code to using unsigned char not just char, so that the behavior
will be consistent whether char is signed or not.
Per investigation of a trouble report from Tomas Vondra. Back-patch to all
supported branches.
gbt_var_bin_union() failed to do the right thing when the existing range
needed to be widened at both ends rather than just one end. This could
result in an invalid index in which keys that are present would not be
found by searches, because the searches would not think they need to
descend to the relevant leaf pages. This error affected all the varlena
datatypes supported by btree_gist (text, bytea, bit, numeric).
Per investigation of a trouble report from Tomas Vondra. (There is also
an issue in gbt_var_penalty(), but that should only result in inefficiency
not wrong answers. I'm committing this separately so that we have a git
state in which it can be tested that bad penalty results don't produce
invalid indexes.) Back-patch to all supported branches.
The new option specifies the length of the aggregation interval (in
seconds). It may be used only together with -l. With this option, the log
contains per-interval summary (number of transactions, min/max latency
and two additional fields useful for variance estimation).
Patch contributed by Tomas Vondra, reviewed by Pavel Stehule. Slight
change by Tatsuo Ishii, suggested by Robert Haas to emit an error
message indicating that the option is not currently supported on
Windows.
Beyond 21474, the number of accounts exceeds the range for int4. Change the
initialization code to use bigint for account id columns when scale is large
enough, and switch to using int64s for the variables in pgbench code. The
threshold where we switch to bigints is set at 20000, because that's easier
to remember and document than 21474, and ensures that there is some headroom
when int4s are used.
Greg Smith, with various changes by Euler Taveira de Oliveira, Gurjeet
Singh and Satoshi Nagayasu.
This makes 9.3 -> 9.3 upgrades work when they cross the commit that
added persistent multixacts; early 9.3 pg_controldata did not have the
required oldestMultiXact line, and so would fail to upgrade.
per Bruce Momjian
When pg_upgrade can't find required pg_controldata information, report
_which_ cluster is failing, with this message:
The %s cluster lacks some required control information:
This patch introduces two additional lock modes for tuples: "SELECT FOR
KEY SHARE" and "SELECT FOR NO KEY UPDATE". These don't block each
other, in contrast with already existing "SELECT FOR SHARE" and "SELECT
FOR UPDATE". UPDATE commands that do not modify the values stored in
the columns that are part of the key of the tuple now grab a SELECT FOR
NO KEY UPDATE lock on the tuple, allowing them to proceed concurrently
with tuple locks of the FOR KEY SHARE variety.
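The new modes are requested just like the existing ones, e.g. against a
hypothetical table:

    SELECT * FROM accounts WHERE id = 42 FOR NO KEY UPDATE;
    SELECT * FROM accounts WHERE id = 42 FOR KEY SHARE;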
Foreign key triggers now use FOR KEY SHARE instead of FOR SHARE; this
means the concurrency improvement applies to them, which is the whole
point of this patch.
The added tuple lock semantics require some rejiggering of the multixact
module, so that the locking level that each transaction is holding can
be stored alongside its Xid. Also, multixacts now need to persist
across server restarts and crashes, because they can now represent not
only tuple locks, but also tuple updates. This means we need more
careful tracking of lifetime of pg_multixact SLRU files; since they now
persist longer, we require more infrastructure to figure out when they
can be removed. pg_upgrade also needs to be careful to copy
pg_multixact files over from the old server to the new, or at least part
of multixact.c state, depending on the versions of the old and new
servers.
Tuple time qualification rules (HeapTupleSatisfies routines) need to be
careful not to consider tuples with the "is multi" infomask bit set as
being only locked; they might need to look up MultiXact values (i.e.
possibly do pg_multixact I/O) to find out the Xid that updated a tuple,
whereas they previously were assured to only use information readily
available from the tuple header. This is considered acceptable, because
the extra I/O would involve cases that would previously cause some
commands to block waiting for concurrent transactions to finish.
Another important change is the fact that locking tuples that have
previously been updated causes the future versions to be marked as
locked, too; this is essential for correctness of foreign key checks.
This causes additional WAL-logging, also (there was previously a single
WAL record for a locked tuple; now there are as many as there are
updated copies of the tuple.)
With all this in place, contention related to tuples being checked by
foreign key rules should be much reduced.
As a bonus, the old behavior that a subtransaction grabbing a stronger
tuple lock than the parent (sub)transaction held on a given tuple and
later aborting caused the weaker lock to be lost, has been fixed.
Many new spec files were added for isolation tester framework, to ensure
overall behavior is sane. There's probably room for several more tests.
There were several reviewers of this patch; in particular, Noah Misch
and Andres Freund spent considerable time in it. Original idea for the
patch came from Simon Riggs, after a problem report by Joel Jacobson.
Most code is from me, with contributions from Marti Raudsepp, Alexander
Shulgin, Noah Misch and Andres Freund.
This patch was discussed in several pgsql-hackers threads; the most
important start at the following message-ids:
AANLkTimo9XVcEzfiBR-ut3KVNDkjm2Vxh+t8kAmWjPuv@mail.gmail.com
1290721684-sup-3951@alvh.no-ip.org
1294953201-sup-2099@alvh.no-ip.org
1320343602-sup-2290@alvh.no-ip.org
1339690386-sup-8927@alvh.no-ip.org
4FE5FF020200002500048A3D@gw.wicourts.gov
4FEAB90A0200002500048B7D@gw.wicourts.gov
With AtEOXact applied, --single-transaction makes pg_restore slower, and
has the potential to require lock table configuration, so remove the
argument.
Per suggestion from Tom.
In commit 71450d7fd6, we added code to inform
suitably-intelligent compilers that ereport() doesn't return if the elevel
is ERROR or higher. This patch extends that to elog(), and also fixes a
double-evaluation hazard that the previous commit created in ereport(),
as well as reducing the emitted code size.
The elog() improvement requires the compiler to support __VA_ARGS__, which
should be available in just about anything nowadays since it's required by
C99. But our minimum language baseline is still C89, so add a configure
test for that.
The previous commit assumed that ereport's elevel could be evaluated twice,
which isn't terribly safe --- there are already counterexamples in xlog.c.
On compilers that have __builtin_constant_p, we can use that to protect the
second test, since there's no possible optimization gain if the compiler
doesn't know the value of elevel. Otherwise, use a local variable inside
the macros to prevent double evaluation. The local-variable solution is
inferior because (a) it leads to useless code being emitted when elevel
isn't constant, and (b) it increases the optimization level needed for the
compiler to recognize that subsequent code is unreachable. But it seems
better than not teaching non-gcc compilers about unreachability at all.
Lastly, if the compiler has __builtin_unreachable(), we can use that
instead of abort(), resulting in a noticeable code savings since no
function call is actually emitted. However, it seems wise to do this only
in non-assert builds. In an assert build, continue to use abort(), so that
the behavior will be predictable and debuggable if the "impossible"
happens.
These changes involve making the ereport and elog macros emit do-while
statement blocks not just expressions, which forces small changes in
a few call sites.
Andres Freund, Tom Lane, Heikki Linnakangas
This is now used by ecpg tests, and not clobbered by pg_upgrade
tests. This change won't affect anything that doesn't set this
environment variable, but will enable the buildfarm to control
exactly what port regression test installs will be running on,
and thus to detect possible rogue postmasters more easily.
Backpatch to release 9.2 where EXTRA_REGRESS_OPTS was first used.
Add a quieter logging mode to pgbench initialization (-i), producing
only one progress message per 5 seconds along with elapsed time and
estimated remaining time. Also add elapsed time and estimated remaining
time to the default logging (which prints one message every 100000 rows).
Patch contributed by Tomas Vondra, reviewed by Jeevan Chalke and
Tatsuo Ishii.
On non-Windows machines, we use the Unix socket for connections to test
postmasters, so there is no need to create a TCP socket. Furthermore,
doing so causes failures due to port conflicts if two builds are carried
out concurrently on one machine. (If the builds are done in different
chroots, which is standard practice at least in Red Hat distros, there
is no risk of conflict on the Unix socket.) Suppressing the TCP socket
by setting listen_addresses to empty has long been standard practice
for pg_regress, and pg_upgrade knows about this too ... but pg_upgrade's
test.sh didn't get the memo.
Back-patch to 9.2, and also sync the 9.2 version of the script with HEAD
as much as practical.