I failed to think much about UPDATE/DELETE when implementing LATERAL :-(.
The implemented behavior ended up being that subqueries in the FROM or
USING clause (respectively) could access the update/delete target table as
though it were a lateral reference; which seems fine if they said LATERAL,
but certainly ought to draw an error if they didn't. Fix it so you get a
suitable error when you omit LATERAL. Per report from Emre Hasegeli.
The same applies to do_pg_stop_backup: the code in do_pg_* is not allowed
to access the catalogs. For manual base backups, the permissions
check can be handled in the calling function, and for streaming
base backups only users with the required permissions can get past
the authentication step in the first place.
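As a rough sketch of the pattern (hedged: argument and result handling are
omitted, and the real wrapper in the tree may differ in details), the
SQL-callable function can do the role check up front, leaving
do_pg_stop_backup itself free of catalog access:

    #include "postgres.h"
    #include "fmgr.h"
    #include "miscadmin.h"

    /* Sketch only: the permission check lives in the SQL-callable wrapper. */
    Datum
    pg_stop_backup(PG_FUNCTION_ARGS)
    {
        if (!superuser() && !has_rolreplication(GetUserId()))
            ereport(ERROR,
                    (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
                     errmsg("must be superuser or replication role to run a backup")));

        /* ... call do_pg_stop_backup() and build the result as before ... */
        PG_RETURN_NULL();       /* placeholder for the real return value */
    }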
Reported by Antonin Houska, diagnosed by Andres Freund
If a tablespace was created inside PGDATA it was backed up both as part
of the PGDATA backup and again as the tablespace's own backup. Avoid this
by skipping any directory inside PGDATA that contains one of the active
tablespaces.
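The skip test amounts to comparing each directory against the list of active
tablespace locations; here is a standalone, purely illustrative helper (not
the actual base-backup code):

    #include <stdbool.h>
    #include <string.h>

    /*
     * Hypothetical helper: while walking PGDATA, skip any directory that is
     * itself the location of an active tablespace, since its contents are
     * already sent as that tablespace's own part of the backup.
     */
    bool
    skip_dir_inside_pgdata(const char *dirpath,
                           const char *const *tablespace_paths, int ntablespaces)
    {
        for (int i = 0; i < ntablespaces; i++)
        {
            if (strcmp(dirpath, tablespace_paths[i]) == 0)
                return true;    /* would otherwise be backed up twice */
        }
        return false;
    }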
Dimitri Fontaine and Magnus Hagander
The initial commit of ordered-set aggregates just did all the setup work
afresh each time the aggregate function was started up. But in a GROUP BY
query, the catalog lookups need not be repeated for each group, since the
column datatypes and sort information won't change. When there are many
small groups, this makes for a useful, though not huge, performance
improvement. Per suggestion from Andrew Gierth.
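The shape of the optimization is the usual fn_extra caching pattern; a hedged
sketch with hypothetical names (not the actual orderedsetaggs.c code):

    #include "postgres.h"
    #include "fmgr.h"

    /* Hypothetical query-lifespan cache for the per-query setup results. */
    typedef struct OrderedSetSetupCache
    {
        bool        initialized;
        /* resolved column datatypes, sort operators, etc. would live here */
    } OrderedSetSetupCache;

    static OrderedSetSetupCache *
    get_setup_cache(FunctionCallInfo fcinfo)
    {
        OrderedSetSetupCache *cache = fcinfo->flinfo->fn_extra;

        if (cache == NULL)
        {
            /* allocate in the fmgr context so it survives across groups */
            cache = MemoryContextAllocZero(fcinfo->flinfo->fn_mcxt,
                                           sizeof(OrderedSetSetupCache));
            /* do the catalog lookups exactly once per query here */
            cache->initialized = true;
            fcinfo->flinfo->fn_extra = cache;
        }
        return cache;           /* later groups reuse the cached lookups */
    }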
Profiling of these cases suggests that it might be profitable to avoid
duplicate lookups within tuplesort startup as well; but changing the
tuplesort APIs would have much broader impact, so I left that for
another day.
Several previous commits have added columns to various \d queries without
updating their translate_columns[] arrays, leading to potentially incorrect
translations in NLS-enabled builds. Offenders include commit 893686762
(added prosecdef to \df+), c9ac00e6e (added description to \dc+) and
3b17efdfd (added description to \dC+). Fix those cases back to 9.3 or
9.2 as appropriate.
Since this is evidently more easily missed than one would like, in HEAD
also add an Assert that the supplied array is long enough. This requires
an API change for printQuery(), so it seems inappropriate for back
branches, but presumably all future changes will be tested in HEAD anyway.
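A tiny standalone illustration of the idea (not the psql code itself;
lengthof() is redefined here so the snippet compiles on its own):

    #include <assert.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define lengthof(array) (sizeof (array) / sizeof ((array)[0]))

    int
    main(void)
    {
        /* one flag per displayed column: true => header goes through gettext */
        const bool translate_columns[] = {false, false, true, false};
        int        ncolumns = 4;    /* columns actually emitted by the query */

        /*
         * Passing lengthof(translate_columns) along with the array lets the
         * printing code check that the two stay in sync; adding a column to
         * the query without extending the array would now trip this.
         */
        assert(ncolumns <= (int) lengthof(translate_columns));
        printf("translate_columns covers all %d columns\n", ncolumns);
        return 0;
    }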
In HEAD and 9.3, also clean up a whole lot of sloppiness in the emitted
SQL for \dy (event triggers): lack of translatability due to failing to
pass words-to-be-translated through gettext_noop(), inadequate schema
qualification, and sloppy formatting resulting in unnecessarily ugly
-E output.
Peter Eisentraut and Tom Lane, per bug #8702 from Sergey Burladyan
The result is an int less than, equal to, or greater than zero, in the
style of memcmp (and, in fact, exactly the output of memcmp in some cases).
This comment previously said -1, 1, or 0, which was an overspecification,
as noted by Emre Hasegeli. All of the existing callers appear to be fine
with the actual behavior, so just fix the comment.
In passing, improve infelicitous formatting of some call sites.
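For reference, the memcmp convention promises only the sign of the result, so
callers should test it the way this standalone example does, rather than
comparing against -1/0/1:

    #include <stdio.h>
    #include <string.h>

    int
    main(void)
    {
        int         r = memcmp("abc", "abd", 3);

        /* Only the sign is specified; r may happen to be -1, but need not be. */
        if (r < 0)
            printf("first sorts before second\n");
        else if (r > 0)
            printf("first sorts after second\n");
        else
            printf("equal\n");
        return 0;
    }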
The PGSTAT_NUM_TABENTRIES macro should have been updated when new fields
were added to struct PgStat_MsgTabstat in commit 644828908, but it wasn't.
Fix that.
Also, add a static assertion that we didn't overrun the intended size limit
on stats messages. This will not necessarily catch every mistake in
computing the maximum array size for stats messages, but it will catch ones
that have practical consequences. (The assertion in fact doesn't complain
about the aforementioned error in PGSTAT_NUM_TABENTRIES, because that was
not big enough to cause the array length to increase.)
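The assertion is of this general form (a sketch; the exact expression and
message in the tree may differ, but StaticAssertStmt() and the names used
here are existing c.h/pgstat.h facilities):

    /*
     * Compile-time check that the tabstat message, sized from
     * PGSTAT_NUM_TABENTRIES, still fits within the intended message limit.
     */
    StaticAssertStmt(sizeof(PgStat_MsgTabstat) <= PGSTAT_MAX_MSG_SIZE,
                     "PgStat_MsgTabstat is larger than PGSTAT_MAX_MSG_SIZE");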
No back-patch, as there's no actual bug in existing releases; this is just
in the nature of future-proofing.
Mark Dilger and Tom Lane
Original users of slru.c were all producing 4-digit filenames, so that
was all that code was prepared to handle. Changes to multixact.c
in the course of commit 0ac5ad5134 made pg_multixact/members create
5-digit filenames once a certain threshold was reached, which
SlruScanDirectory wasn't prepared to deal with; in particular,
5-digit-name files were not removed during truncation. Change that
routine to make it aware of those files, and have it process them just
like any others.
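A standalone illustration of the filename handling (a hypothetical helper, not
the SlruScanDirectory code itself): segment names are uppercase-hex strings,
and both 4- and 5-character names must now be recognized.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Accept 4- or 5-character uppercase-hex segment names. */
    int
    parse_segment_name(const char *name, long *segno)
    {
        size_t      len = strlen(name);

        if ((len == 4 || len == 5) &&
            strspn(name, "0123456789ABCDEF") == len)
        {
            *segno = strtol(name, NULL, 16);
            return 1;
        }
        return 0;               /* not an SLRU segment file */
    }

    int
    main(void)
    {
        long        segno;

        if (parse_segment_name("10000", &segno))    /* 5-char name */
            printf("segment %ld\n", segno);
        return 0;
    }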
Right now, some pg_multixact/members directories will contain a mixture
of 4-char and 5-char filenames. A future commit is expected to fix things
so that each slru.c user declares the correct maximum width for the
files it produces, to avoid such unsightly mixtures.
Noticed while investigating bug #8673 reported by Serge Negodyuck.
In the 9.2 code for extending multixact/members, the logic was very
simple because the number of entries in a members page was a proper
divisor of 2^32, and thus at 2^32 wraparound the logic for page switch
was identical to that at any other page boundary. In commit 0ac5ad5134 I
failed to realize this and introduced code that was not able to go over
the 2^32 boundary. Fix that by ensuring that when we reach the last
page of the last segment we correctly zero the initial page of the
initial segment, using correct uint32-wraparound-safe arithmetic.
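The arithmetic point can be shown standalone: with unsigned 32-bit offsets,
ordinary comparisons give wrong answers at the wrap, while modular
(wraparound-safe) arithmetic keeps working.

    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        uint32_t    oldoff = UINT32_MAX - 1;    /* just before the wrap */
        uint32_t    newoff = oldoff + 3;        /* wraps around to 1 */

        /* A naive "did we advance?" test fails across the wrap: */
        printf("newoff > oldoff: %s\n", newoff > oldoff ? "yes" : "no (wrong)");

        /* The modular difference stays correct across the wrap: */
        printf("entries advanced: %u\n", (uint32_t) (newoff - oldoff));
        return 0;
    }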
Noticed while investigating bug #8673 reported by Serge Negodyuck, as
diagnosed by Andres Freund.
In pg_multixact/members, relying on modulo-2^32 arithmetic for
wraparound handling doesn't work all that well. Because we don't
explicitly track wraparound of the allocation counter for members, it
is possible that the "live" area exceeds 2^31 entries; trying to remove
SLRU segments that are "old" according to the original logic might lead
to removal of segments still in use. To fix, have the truncation
routine use a tailored SlruScanDirectory callback that keeps track of
the live area in actual use; that way, when the live range exceeds 2^31
entries, the oldest segments still live will not be removed prematurely.
This new SlruScanDir callback needs to take care not to remove segments
that are "in the future": if new SLRU segments appear while the
truncation is ongoing, make sure we don't remove them. This requires
examination of shared memory state to recheck for false positives, but
testing suggests that this doesn't cause a problem. The original coding
didn't suffer from this pitfall because segments created while truncation
was running were never considered to be removable.
Per Andres Freund's investigation of bug #8673 reported by Serge
Negodyuck.
We haven't wanted to do this in the past on the grounds that in rare
cases the original xmin value will be needed for forensic purposes, but
commit 37484ad2aa removes that objection,
so now we can.
Per extensive discussion, among many people, on pgsql-hackers.
Although these files get cleaned up if the test runs to completion,
a failure partway through leaves trash all over the floor. The Makefile
ought to be bright enough to get rid of it when you say "make clean".
CREATE EVENT TRIGGER forgot to mark the event trigger as a member of its
extension, and pg_dump didn't pay any attention anyway when deciding
whether to dump the event trigger. Per report from Moshe Jacobson.
Given the obvious lack of testing here, it's rather astonishing that
ALTER EXTENSION ADD/DROP EVENT TRIGGER work, but they seem to.
We don't need make_restrictinfo_from_bitmapqual() anymore at all.
generate_bitmap_or_paths() doesn't need to be exported, and we can
drop its rather klugy restriction_only flag.
It's possible to extract a restriction OR clause from a join clause that
has the form of an OR-of-ANDs, if each sub-AND includes a clause that
mentions only one specific relation. While PG has been aware of that idea
for many years, the code previously only did it if it could extract an
indexable OR clause. On reflection, though, that seems a silly limitation:
adding a restriction clause can be a win by reducing the number of rows
that have to be filtered at the join step, even if we have to test the
clause as a plain filter clause during the scan. This should be especially
useful for foreign tables, where the change can cut the number of rows that
have to be retrieved from the foreign server; but testing shows it can win
even on local tables. Per a suggestion from Robert Haas.
As a heuristic, I made the code accept an extracted restriction clause
if its estimated selectivity is less than 0.9, which will probably result
in accepting extracted clauses just about always. We might need to tweak
that later based on experience.
Since the code no longer has even a weak connection to Path creation,
remove orindxpath.c and create a new file optimizer/util/orclauses.c.
There's some additional janitorial cleanup of now-dead code that needs
to happen, but it seems like that's a fit subject for a separate commit.
There was an apparent attempt to limit the target database for
pg_restore to version 7.1.0 or later. Due to a leading zero this
was interpreted as an octal number, which allowed targets with
version numbers down to 2.87.36. The lowest actual release above
that was 6.0.0, so that was effectively the limit.
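The pitfall is easy to reproduce standalone, using the usual
major*10000 + minor*100 + revision version encoding:

    #include <stdio.h>

    int
    main(void)
    {
        int         intended = 70100;   /* decimal: version 7.1.0 */
        int         actual = 070100;    /* leading zero => octal = 28736, i.e. "2.87.36" */

        printf("intended=%d actual=%d\n", intended, actual);
        return 0;
    }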
Since the success of the restore attempt will depend primarily on
what statements were generated by the dump run, we don't want
pg_restore trying to guess whether a given target should be allowed
based on version number. Allow a connection to any version. Since
it is very unlikely that anyone would be using a recent version of
pg_restore to restore to a pre-6.0 database, this has little to no
practical impact, but it makes the code less confusing to read.
Issue reported and initial patch suggestion from Joel Jacobson
based on an article by Andrey Karpov reporting on issues found by
PVS-Studio static code analyzer. Final patch based on analysis by
Tom Lane. Back-patch to all supported branches.
Defining this symbol causes OS X 10.5 to use a buggy version of readdir(),
which can sometimes fail with EINVAL if the previously-fetched directory
entry has been deleted or renamed. In later OS X versions that bug has
been repaired, but we still don't need the #define because it's on by
default. So this is just an all-around bad idea, and we can do without it.
Instead of looking for characters that aren't valid in JSON numbers, we
simply pass the output string through the JSON number parser, and if it
fails the string is quoted. This means among other things that money and
domains over money will be quoted correctly and generate valid JSON.
Fixes bug #8676 reported by Anderson Cristian da Silva.
Backpatched to 9.2 where JSON generation was introduced.
The bug would only show up if the C sockaddr structure contained
zero in the first byte for a valid address; otherwise it would
fail to fail, which is probably why it went unnoticed for so long.
Patch submitted by Joel Jacobson after seeing an article by Andrey
Karpov in which he reports finding this through static code
analysis using PVS-Studio. While I was at it I moved a definition
of a local variable referenced in the buggy code to a more local
context.
Backpatch to all supported branches.
Most other range operations seem to work all right on domains,
but this one not so much, at least not since commit 918eee0c.
Per bug #8684 from Brett Neumeier.
Commit 37484ad2aa invalidated a good
chunk of documentation, so patch it up to reflect the new state of
play. Along the way, patch remaining documentation references to
FrozenXID to say instead FrozenTransactionId, so that they match the
way we actually spell it in the code.
Overly compact coding in makeOrderedSetArgs() led to a platform dependency:
if the compiler chose to execute the subexpressions in the wrong order,
list_length() might get applied to an already-modified List, giving a
value we didn't want. Per buildfarm.
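The underlying C pitfall, shown standalone: the order in which function
arguments are evaluated is unspecified, so an argument that reads a value and
another that modifies it may run in either order.

    #include <stdio.h>

    static int
    consume_one(int *len)       /* stand-in for code that shortens the List */
    {
        return --*len;
    }

    int
    main(void)
    {
        int         len = 3;

        /*
         * Whether the first argument sees 3 or 2 depends on which argument
         * the compiler evaluates first -- exactly the platform dependency
         * seen on the buildfarm.
         */
        printf("%d %d\n", len, consume_one(&len));
        return 0;
    }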
This patch introduces generic support for ordered-set and hypothetical-set
aggregate functions, as well as implementations of the instances defined in
SQL:2008 (percentile_cont(), percentile_disc(), rank(), dense_rank(),
percent_rank(), cume_dist()). We also added mode() though it is not in the
spec, as well as versions of percentile_cont() and percentile_disc() that
can compute multiple percentile values in one pass over the data.
Unlike the original submission, this patch puts full control of the sorting
process in the hands of the aggregate's support functions. To allow the
support functions to find out how they're supposed to sort, a new API
function AggGetAggref() is added to nodeAgg.c. This allows retrieval of
the aggregate call's Aggref node, which may have other uses beyond the
immediate need. There is also support for ordered-set aggregates to
install cleanup callback functions, so that they can be sure that
infrastructure such as tuplesort objects gets cleaned up.
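A hedged sketch of using the new API (the helper name is hypothetical;
AggGetAggref() is the function this patch adds):

    #include "postgres.h"
    #include "fmgr.h"
    #include "nodes/pg_list.h"
    #include "nodes/primnodes.h"

    /*
     * Hypothetical helper for an ordered-set aggregate's support function:
     * fetch the Aggref of the current call and inspect its ORDER BY
     * (WITHIN GROUP) clause list to learn how the input is to be sorted.
     */
    static int
    ordered_set_num_sort_keys(FunctionCallInfo fcinfo)
    {
        Aggref     *aggref = AggGetAggref(fcinfo);

        if (aggref == NULL)
            elog(ERROR, "ordered-set aggregate called in non-aggregate context");

        return list_length(aggref->aggorder);
    }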
In passing, make some fixes in the recently-added support for variadic
aggregates, and make some editorial adjustments in the recent FILTER
additions for aggregates. Also, simplify use of IsBinaryCoercible() by
allowing it to succeed whenever the target type is ANY or ANYELEMENT.
It was inconsistent that it dealt with other polymorphic target types
but not these.
Atri Sharma and Andrew Gierth; reviewed by Pavel Stehule and Vik Fearing,
and rather heavily editorialized upon by Tom Lane
Instead of changing the tuple xmin to FrozenTransactionId, the combination
of HEAP_XMIN_COMMITTED and HEAP_XMIN_INVALID, which were previously never
set together, is now defined as HEAP_XMIN_FROZEN. A variety of previous
proposals to freeze tuples opportunistically before vacuum_freeze_min_age
is reached have foundered on the objection that replacing xmin by
FrozenTransactionId might hinder debugging efforts when things in this
area go awry; this patch is intended to solve that problem by keeping
the XID around (but largely ignoring the value to which it is set).
Third-party code that checks for HEAP_XMIN_INVALID on tuples where
HEAP_XMIN_COMMITTED might be set will be broken by this change. To fix,
use the new accessor macros in htup_details.h rather than consulting the
bits directly. HeapTupleHeaderGetXmin has been modified to return
FrozenTransactionId when the infomask bits indicate that the tuple is
frozen; use HeapTupleHeaderGetRawXmin when you already know that the
tuple isn't marked committed or frozen, or want the raw value anyway.
We currently do this in routines that display the xmin for user consumption,
in tqual.c where it's known to be safe and important for the avoidance of
extra cycles, and in the function-caching code for various procedural
languages, which shouldn't invalidate the cache just because the tuple
gets frozen.
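A hedged sketch of the adjustment external code needs (the helper name is
hypothetical; the macros are the htup_details.h accessors described above):

    #include "postgres.h"
    #include "access/htup_details.h"
    #include "access/transam.h"

    /*
     * Decide whether a tuple's inserting transaction should be treated as
     * committed, using the accessor macros instead of testing HEAP_XMIN_*
     * infomask bits directly.
     */
    static bool
    xmin_treated_as_committed(HeapTupleHeader tup)
    {
        if (HeapTupleHeaderXminInvalid(tup))
            return false;       /* inserter known to have aborted */
        if (HeapTupleHeaderXminFrozen(tup))
            return true;        /* frozen; the raw XID is kept for forensics */

        /* otherwise consult the stored XID, which GetRawXmin returns as-is */
        return TransactionIdDidCommit(HeapTupleHeaderGetRawXmin(tup));
    }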
Robert Haas and Andres Freund
We can allocate the initial relations-to-drop array when first needed,
instead of at function entry; this avoids allocating it when the
function is not going to do anything, which is most of the time.
Backpatch to 9.3, where this behavior was introduced by commit
279628a0a7.
There's more that could be done here, such as possible reworking of the
code to avoid having to palloc anything, but that doesn't sound as
backpatchable as this relatively minor change.
Per complaint from Noah Misch in
20131031145234.GA621493@tornado.leadboat.com
This ensures that all stdout output is flushed immediately, to match
stderr. This eliminates the need for fflush(stdout) calls sprinkled all
over the place.
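The text above doesn't spell out the mechanism, but one standard-C way to get
this behavior is to make stdout unbuffered:

    #include <stdio.h>

    int
    main(void)
    {
        /* Make stdout unbuffered so output reaches its destination at once,
         * matching stderr and removing the need for fflush(stdout) calls. */
        setvbuf(stdout, NULL, _IONBF, 0);

        printf("visible immediately, even when redirected to a file\n");
        return 0;
    }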
Per Daniel Wood in message 519A79C6.90308@salesforce.com
Updating or locking a row that was already locked by the same
transaction under the same Xid caused a MultiXact to be created; but
this is unnecessary, because there is no use in distinguishing two
locks taken by the same transaction. In particular, if a
transaction executed SELECT FOR UPDATE followed by an UPDATE that didn't
modify columns of the key, we would dutifully represent the resulting
combination as a multixact -- even though a single key-update is
sufficient.
Optimize the case so that only the stronger of the two locks/updates is
represented in Xmax. This can save some Xmax's from becoming
MultiXacts, which can be a significant optimization.
This missed optimization opportunity was spotted by Andres Freund while
investigating a bug reported by Oliver Seemann in message
CANCipfpfzoYnOz5jj=UZ70_R=CwDHv36dqWSpwsi27vpm1z5sA@mail.gmail.com
and also directly as a performance regression reported by Dong Ye in
message
d54b8387.000012d8.00000010@YED-DEVD1.vmware.com
Reportedly, this patch fixes the performance regression.
Since the missing optimization was reported as a significant performance
regression from 9.2, backpatch to 9.3.
Andres Freund, tweaked by Álvaro Herrera
Just as backends must clean up their shared memory state (releasing
lwlocks, buffer pins, etc.) before exiting, they must also perform
any similar cleanups related to dynamic shared memory segments they
have mapped before unmapping those segments. So add a mechanism to
ensure that.
Existing on_shmem_exit hooks include both "user level" cleanup, such
as transaction abort and removal of leftover temporary relations, and
"low level" cleanup that forcibly releases leftover shared
memory resources. On-detach callbacks should run after the first
group but before the second group, so create a new before_shmem_exit
function for registering the early callbacks and keep on_shmem_exit
for the regular callbacks. (An earlier draft of this patch added an
additional argument to on_shmem_exit, but that had a much larger
footprint and probably a substantially higher risk of breaking third
party code for no real gain.)
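A hedged sketch of the registration pattern (callback names and bodies are
hypothetical; before_shmem_exit() is the function this patch introduces):

    #include "postgres.h"
    #include "storage/ipc.h"

    /* Hypothetical "user level" cleanup: runs while shared memory, including
     * any mapped dynamic segments, is still fully usable. */
    static void
    my_user_level_cleanup(int code, Datum arg)
    {
        /* e.g. abort an open transaction, drop leftover temporary state */
    }

    /* Hypothetical "low level" cleanup: final release of shared resources. */
    static void
    my_low_level_cleanup(int code, Datum arg)
    {
        /* forcibly release locks, buffer pins, and the like */
    }

    void
    my_module_startup(void)
    {
        /*
         * before_shmem_exit() callbacks run first at backend exit; dynamic
         * shared memory detach processing happens after them and before the
         * on_shmem_exit() callbacks run.
         */
        before_shmem_exit(my_user_level_cleanup, (Datum) 0);
        on_shmem_exit(my_low_level_cleanup, (Datum) 0);
    }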
Patch by me, reviewed by KaiGai Kohei and Andres Freund.