fail to provide the function itself. Not sure how we escaped testing anything
later than 7.3 on such cases, but they still exist, as per André Volpato's
report about AIX 5.3.
encoding conversion of any elog/ereport message being sent to the frontend.
This generalizes a patch that I put in last October, which suppressed
translation of only specific messages known to be associated with recursive
can't-translate-the-message behavior. As shown in bug #4680, we need a more
general answer in order to have some hope of coping with broken encoding
conversion setups. This approach seems a good deal less klugy anyway.
Patch in all supported branches.
fail on zero-length inputs. This isn't an issue in normal use because the
conversion infrastructure skips calling the converters for empty strings.
However, a problem was created by yesterday's patch to check whether the
right conversion function is supplied in CREATE CONVERSION. The most
future-proof fix seems to be to make the converters safe for this corner case.
function for the specified source and destination encodings. We do that by
calling the function with an empty string. If it can't perform the requested
conversion, it will throw an error.
Backport to 7.4 - 8.3. Per bug report #4680 by Denis Afonin.
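A minimal standalone C sketch of both points above (toy converter and encoding
IDs, not the real conversion-function API): the converter returns immediately
for zero-length input, and a CREATE CONVERSION-style check probes it with an
empty string to see whether it accepts the requested encoding pair.

    #include <stdio.h>

    /* Hypothetical encoding IDs, standing in for the server's values. */
    enum { ENC_LATIN1 = 1, ENC_UTF8 = 2 };

    /*
     * Toy converter: handles LATIN1 -> UTF8 only.  Returns 0 on success,
     * -1 if it cannot perform the requested conversion.  Note the early
     * return for len == 0, so a zero-length probe is always safe.
     */
    static int
    latin1_to_utf8(int src_enc, int dest_enc,
                   const unsigned char *src, unsigned char *dest, int len)
    {
        if (src_enc != ENC_LATIN1 || dest_enc != ENC_UTF8)
            return -1;              /* wrong encoding pair for this function */
        if (len == 0)
            return 0;               /* nothing to do; must not crash here */

        for (int i = 0; i < len; i++)
        {
            if (src[i] < 0x80)
                *dest++ = src[i];
            else
            {
                *dest++ = 0xC0 | (src[i] >> 6);
                *dest++ = 0x80 | (src[i] & 0x3F);
            }
        }
        *dest = '\0';
        return 0;
    }

    /*
     * CREATE CONVERSION-style sanity check: call the converter once with an
     * empty string; if it rejects the encoding pair, refuse to register it.
     */
    static int
    check_conversion(int (*conv)(int, int, const unsigned char *,
                                 unsigned char *, int),
                     int src_enc, int dest_enc)
    {
        unsigned char dummy[1] = "";

        return conv(src_enc, dest_enc, dummy, dummy, 0);
    }

    int
    main(void)
    {
        if (check_conversion(latin1_to_utf8, ENC_LATIN1, ENC_UTF8) == 0)
            printf("conversion function accepted for LATIN1 -> UTF8\n");
        if (check_conversion(latin1_to_utf8, ENC_UTF8, ENC_LATIN1) != 0)
            printf("conversion function rejected for UTF8 -> LATIN1\n");
        return 0;
    }
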
they are out of scope for any code after that anyway, leaving isnull true
should be harmless. However, the PL/pgSQL Debugger doesn't seem to care about
the scoping and crashes, per report by Robert Walker (bug #4635). And it's
good to be tidy for debugging purposes too.
Fix in 8.3, 8.2 and 8.1 branches, CVS HEAD was fixed earlier already.
Analysis and fix by Ashesh Vashi and Dave Page.
looks for a CaseTestExpr to figure out what the parser did, but it failed to
consider the possibility that an implicit coercion might be inserted above
the CaseTestExpr. This could result in an Assert failure in some cases
(but correct results if Asserts weren't enabled), or an "unexpected CASE WHEN
clause" error in other cases. Per report from Alan Li.
Back-patch to 8.1; problem doesn't exist before that because CASE was
implemented differently.
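The shape of the fix can be shown in a standalone C sketch (hypothetical node
tags, not the real parse-tree structures): strip any implicit coercions before
testing whether a node is the CASE placeholder.

    #include <stdio.h>

    /* Hypothetical miniature expression tree, loosely modeled on a parse tree. */
    typedef enum { NODE_CASE_TEST, NODE_IMPLICIT_COERCION, NODE_CONST } NodeTag;

    typedef struct Node
    {
        NodeTag      tag;
        struct Node *arg;           /* child, for coercion nodes */
    } Node;

    /* Strip any implicit coercions layered on top of an expression. */
    static Node *
    strip_implicit_coercions(Node *node)
    {
        while (node != NULL && node->tag == NODE_IMPLICIT_COERCION)
            node = node->arg;
        return node;
    }

    /*
     * Decide whether this expression is (after stripping coercions) the CASE
     * placeholder.  The buggy version compared node->tag directly and missed
     * the coerced case.
     */
    static int
    is_case_test(Node *node)
    {
        node = strip_implicit_coercions(node);
        return node != NULL && node->tag == NODE_CASE_TEST;
    }

    int
    main(void)
    {
        Node placeholder = {NODE_CASE_TEST, NULL};
        Node coerced = {NODE_IMPLICIT_COERCION, &placeholder};

        printf("bare placeholder:    %d\n", is_case_test(&placeholder)); /* 1 */
        printf("coerced placeholder: %d\n", is_case_test(&coerced));     /* 1 */
        return 0;
    }
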
TABLE: if the command is executed by someone other than the table owner (eg,
a superuser) and the table has a toast table, the toast table's pg_type row
ends up with the wrong typowner, ie, the command issuer not the table owner.
This is quite harmless for most purposes, since no interesting permissions
checks consult the pg_type row. However, it could lead to unexpected failures
if one later tries to drop the role that issued the command (in 8.1 or 8.2),
or strange warnings from pg_dump afterwards (in 8.3 and up, which will allow
the DROP ROLE because we don't create a "redundant" owner dependency for table
rowtypes). Problem identified by Cott Lang.
Back-patch to 8.1. The problem is actually far older --- the CLUSTER variant
can be demonstrated in 7.0 --- but it's mostly cosmetic before 8.1 because we
didn't track ownership dependencies before 8.1. Also, fixing it before 8.1
would require changing the call signature of heap_create_with_catalog(), which
seems to carry a nontrivial risk of breaking add-on modules.
encoding conversion functions. These are not can't-happen cases because
it's possible to create a conversion with the wrong conversion function
for the specified encoding pair. That would lead to an Assert crash in
an Assert-enabled build, or incorrect conversion otherwise, neither of
which is desirable. This would be a DoS issue if production databases
were customarily built with asserts enabled, but fortunately that's not so.
Per an observation by Heikki.
Back-patch to all supported branches.
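A minimal sketch of the pattern, assuming the converters gain an explicit
runtime check where previously only an Assert guarded the encoding pair (toy
names throughout, not the real converter code):

    #include <assert.h>
    #include <stdio.h>

    enum { ENC_LATIN1 = 1, ENC_UTF8 = 2 };

    /*
     * Before: the encoding pair was only Assert-checked, so an assert-disabled
     * (production) build would silently run the wrong conversion if a
     * conversion had been created with a mismatched function.
     */
    static void
    convert_before(int src_enc, int dest_enc)
    {
        assert(src_enc == ENC_LATIN1 && dest_enc == ENC_UTF8);
        /* ... conversion proper ... */
    }

    /*
     * After: an explicit runtime check turns the misconfiguration into a
     * reportable error in every build.  (The real converters raise the error
     * through the server's error-reporting machinery.)
     */
    static int
    convert_after(int src_enc, int dest_enc)
    {
        if (src_enc != ENC_LATIN1 || dest_enc != ENC_UTF8)
        {
            fprintf(stderr, "unexpected encoding pair %d -> %d\n",
                    src_enc, dest_enc);
            return -1;
        }
        /* ... conversion proper ... */
        return 0;
    }

    int
    main(void)
    {
        convert_before(ENC_LATIN1, ENC_UTF8);
        return convert_after(ENC_UTF8, ENC_LATIN1) == -1 ? 0 : 1;
    }
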
to the documented API value. The previous code was correct with respect to
the actual implementation, but it accepted either more or less than the API
documentation allows.
Per comment from Zdenek Kotala.
context long after it had been destroyed.
Per problem report from Justin Pasher. Patch by Tom Lane and me.
8.3 and later do not have this bug, because this code has been restructured for
unrelated reasons. In 8.2 it does not manifest as a crash, but it still seems
safer to fix it nonetheless.
If the table was smaller than REL_TRUNCATE_FRACTION (= 16) pages, we always
tried to acquire AccessExclusiveLock on it even if there were no empty pages
at the end.
Report by Simon Riggs. Back-patch all the way to 7.4.
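Roughly, the corrected decision looks like the sketch below (standalone C; the
server's real heuristic has more terms, and only the freeable_pages > 0 guard
is the point here):

    #include <stdbool.h>
    #include <stdio.h>

    #define REL_TRUNCATE_FRACTION 16

    /*
     * Decide whether VACUUM should try to truncate the relation, i.e. whether
     * taking AccessExclusiveLock is worthwhile.  The essential fix is the
     * "freeable_pages > 0" test: with no empty pages at the end, there is
     * nothing to gain from the lock.
     */
    static bool
    should_attempt_truncate(long rel_pages, long nonempty_pages)
    {
        long freeable_pages = rel_pages - nonempty_pages;

        return freeable_pages > 0 &&
               freeable_pages >= rel_pages / REL_TRUNCATE_FRACTION;
    }

    int
    main(void)
    {
        /* Small table with no trailing empty pages: do not take the lock. */
        printf("%d\n", should_attempt_truncate(10, 10));    /* 0 */
        /* Small table whose last pages are empty: truncation is worthwhile. */
        printf("%d\n", should_attempt_truncate(10, 6));     /* 1 */
        return 0;
    }
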
toasted values, since those could get dropped once the cursor's transaction
is over. Per bug #4553 from Andrew Gierth.
Back-patch as far as 8.1. The bug actually exists back to 7.4 when holdable
cursors were introduced, but this patch won't work before 8.1 without
significant adjustments. Given the lack of field complaints, it doesn't seem
worth the work (and risk of introducing new bugs) to try to make a patch for
the older branches.
AND, OR, or equivalent clauses: if there are too many (more than 100) just
exit without proving anything. This ensures that we don't spend O(N^2) time
trying (and most likely failing) to prove anything about very long IN lists
and similar cases.
Also, install a couple of CHECK_FOR_INTERRUPTS calls to ensure that a long
proof attempt can be interrupted.
Per gripe from Sergey Konoplev.
Back-patch the whole patch to 8.2 and just the CHECK_FOR_INTERRUPTS addition
to 8.1. (The rest of the patch doesn't apply cleanly, and since 8.1 doesn't
show the complained-of behavior anyway, it doesn't seem necessary to work
hard on it.)
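A standalone sketch of the idea, with toy integer clauses standing in for real
predicates (the names and the implication test are illustrative, not the
planner's code):

    #include <stdbool.h>
    #include <stdio.h>

    #define MAX_BRANCHES_TO_TEST 100

    /* Stand-in for the server's CHECK_FOR_INTERRUPTS macro. */
    static void
    check_for_interrupts(void)
    {
        /* In the real server this is where a cancel request is honored. */
    }

    /* Toy clauses: plain integers, where clause a implies clause b iff a == b. */
    static bool
    simple_implies(int a, int b)
    {
        return a == b;
    }

    /*
     * Can the conjunction of `premises` be shown to imply `goal`?  A sufficient
     * (and deliberately incomplete) test: some individual premise implies it.
     * With more than MAX_BRANCHES_TO_TEST premises, give up immediately;
     * "false" here means "not proven", never "disproven".
     */
    static bool
    and_clause_implies(const int *premises, int npremises, int goal)
    {
        if (npremises > MAX_BRANCHES_TO_TEST)
            return false;

        for (int i = 0; i < npremises; i++)
        {
            check_for_interrupts();     /* keep long proof attempts cancelable */
            if (simple_implies(premises[i], goal))
                return true;
        }
        return false;
    }

    int
    main(void)
    {
        int premises[3] = {1, 2, 3};

        printf("%d\n", and_clause_implies(premises, 3, 2));  /* 1: provable   */
        printf("%d\n", and_clause_implies(premises, 3, 9));  /* 0: not proven */
        return 0;
    }
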
recursion when we are unable to convert a localized error message to the
client's encoding. We've been over this ground before, but as reported by
Ibrar Ahmed, it still didn't work in the case of conversion failures for
the conversion-failure message itself :-(. Fix by installing a "circuit
breaker" that disables attempts to localize this message once we get into
recursion trouble.
Patch all supported branches, because it is in fact broken in all of them;
though I had to add some missing translations to the older branches in
order to expose the failure in the particular test case I was using.
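The circuit-breaker pattern, reduced to a standalone C sketch (toy conversion
routine and names; the real code goes through the server's error-reporting
machinery):

    #include <stdbool.h>
    #include <stdio.h>

    /*
     * Once we fail to convert an error message, stop trying to localize or
     * convert the resulting "conversion failed" message itself; otherwise
     * that report can fail the same way and recurse forever.
     */
    static bool in_error_recursion_trouble = false;

    /* Toy stand-in for encoding conversion: reject anything non-ASCII. */
    static bool
    convert_to_client_encoding(const char *msg, char *out, size_t outlen)
    {
        for (const char *p = msg; *p; p++)
            if ((unsigned char) *p >= 0x80)
                return false;
        snprintf(out, outlen, "%s", msg);
        return true;
    }

    static void
    send_error_to_client(const char *msg)
    {
        char buf[256];

        if (!in_error_recursion_trouble &&
            !convert_to_client_encoding(msg, buf, sizeof(buf)))
        {
            /*
             * Conversion failed.  Trip the circuit breaker before reporting
             * the failure, so the secondary report is sent untranslated and
             * unconverted rather than re-entering this path.
             */
            in_error_recursion_trouble = true;
            send_error_to_client("character conversion failed");
            in_error_recursion_trouble = false;
            return;
        }

        /* With the breaker tripped, fall back to the raw message. */
        printf("ERROR: %s\n", in_error_recursion_trouble ? msg : buf);
    }

    int
    main(void)
    {
        send_error_to_client("plain ascii message");
        send_error_to_client("message with a non-ascii byte: \xC3\xA9");
        return 0;
    }
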
address of afterTriggers->query_stack[afterTriggers->query_depth] and hung
onto it through all its firings of triggers. However, if a trigger causes
sufficiently many nested query executions, query_stack will get repalloc'd
bigger, leaving AfterTriggerEndQuery --- and hence afterTriggerInvokeEvents
--- using a stale pointer.
So far as I can find, the only consequence of this error is to stomp on a
couple of words of already-freed memory, which would lead to a failure only if
that chunk had already gotten re-allocated for something else. So it's hard
to exhibit a simple failure case, but this is surely a bug.
I noticed this while working on my recent patch to reduce pending-trigger
space usage. The present patch is mighty ugly, because it requires making
afterTriggerInvokeEvents know about all the possible event lists it might get
called on. Fortunately, this is only needed in back branches because CVS HEAD
avoids the problem in a different way: afterTriggerInvokeEvents only touches
the passed AfterTriggerEventList pointer once at startup. Back branches are
stable enough that wiring in knowledge of all possible call usages doesn't
seem like a killer problem.
Back-patch to 8.0. 7.4's trigger code is completely different and doesn't
seem to have the problem (it doesn't even use repalloc).
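For reference, the failure pattern in isolation, as a standalone C sketch using
realloc in place of repalloc (the names are illustrative, not the trigger
code's own):

    #include <stdio.h>
    #include <stdlib.h>

    /* Toy per-query state, standing in for the after-trigger bookkeeping. */
    typedef struct QueryEvents
    {
        int nevents;
    } QueryEvents;

    static QueryEvents *query_stack = NULL;
    static int query_depth = -1;
    static int stack_size = 0;

    static void
    push_query(void)
    {
        if (++query_depth >= stack_size)
        {
            stack_size = stack_size ? stack_size * 2 : 2;
            /* Growing the array can move it, invalidating cached pointers. */
            query_stack = realloc(query_stack, stack_size * sizeof(QueryEvents));
            if (query_stack == NULL)
                exit(1);
        }
        query_stack[query_depth].nevents = 0;
    }

    static void
    pop_query(void)
    {
        query_depth--;
    }

    int
    main(void)
    {
        push_query();

        /* BUGGY pattern: cache the address of our stack entry ... */
        void *cached = (void *) &query_stack[query_depth];

        /* ... then let nested queries grow (and possibly move) the stack. */
        for (int i = 0; i < 10; i++)
            push_query();
        for (int i = 0; i < 10; i++)
            pop_query();

        /*
         * The cached address may no longer be where our entry lives.  The safe
         * pattern is to re-derive the pointer (or keep only the index) after
         * anything that can grow the array.
         */
        void *current = (void *) &query_stack[query_depth];
        printf("cached %p, current %p%s\n", cached, current,
               cached == current ? "" : "  <- cached pointer is stale");
        free(query_stack);
        return 0;
    }
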
correctly set. As a result, killtuple() marks the wrong tuple on the page as
dead. The bug was introduced by me while fixing possible duplicates during
GiST index scans.
according to the TupleDesc's natts, not the number of physical columns in the
tuple. The previous coding would do the wrong thing in cases where natts is
different from the tuple's column count: either incorrectly report error when
it should just treat the column as null, or actually crash due to indexing off
the end of the TupleDesc's attribute array. (The second case is probably not
possible in modern PG versions, due to more careful handling of inheritance
cases than we once had. But it's still a clear lack of robustness here.)
The incorrect error indication is ignored by all callers within the core PG
distribution, so this bug has no symptoms visible within the core code, but
it might well be an issue for add-on packages. So patch all the way back.
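A standalone sketch of the intended behavior, with toy descriptor and tuple
structs (not the real definitions): attribute numbers up to the descriptor's
natts but beyond the tuple's physical column count read as null; only larger
numbers are an error.

    #include <stdbool.h>
    #include <stdio.h>

    /* Miniature stand-ins for a tuple descriptor and a heap tuple. */
    typedef struct TupleDescData
    {
        int natts;                  /* logical number of columns */
    } TupleDescData;

    typedef struct HeapTupleData
    {
        int         t_natts;        /* columns physically present in the tuple */
        const int  *values;
    } HeapTupleData;

    /*
     * Fetch attribute `attnum` (1-based).  Columns that the descriptor knows
     * about but that are missing from the physical tuple read as NULL; only
     * attribute numbers beyond the descriptor's natts are an error.
     */
    static bool
    get_attr(const HeapTupleData *tup, const TupleDescData *desc,
             int attnum, int *value, bool *isnull)
    {
        if (attnum < 1 || attnum > desc->natts)
            return false;           /* genuinely out of range */

        if (attnum > tup->t_natts)
        {
            *isnull = true;         /* column added after this tuple was stored */
            return true;
        }
        *isnull = false;
        *value = tup->values[attnum - 1];
        return true;
    }

    int
    main(void)
    {
        TupleDescData desc = {3};                 /* table now has 3 columns    */
        int           stored[2] = {10, 20};
        HeapTupleData tup = {2, stored};          /* but this row stores only 2 */
        int           v = 0;
        bool          isnull;

        if (get_attr(&tup, &desc, 3, &v, &isnull))
            printf("column 3 is %s\n", isnull ? "null" : "present");
        if (!get_attr(&tup, &desc, 4, &v, &isnull))
            printf("column 4 is out of range\n");
        return 0;
    }
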
the isTrigger state explicitly, not rely on nonzero-ness of trigrelOid
to indicate trigger-hood, because trigrelOid will be left zero when compiling
for validation. The (useless) function hash entry built by the validator
was able to match an ordinary non-trigger call later in the same session,
thereby bypassing the check that is supposed to prevent such a call.
Per report from Alvaro.
It might be worth suppressing the useless hash entry altogether, but
that's a bigger change than I want to consider back-patching.
Back-patch to 8.0. 7.4 doesn't have the problem because it doesn't
have validation mode.
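The essence of the fix, as a standalone sketch with a toy hash key (the field
names are illustrative): the trigger-ness flag becomes part of the key, so a
validation-time compilation can no longer match an ordinary call.

    #include <stdbool.h>
    #include <stdio.h>

    /*
     * Miniature function-cache key.  The point of the fix: record "compiled
     * as a trigger" explicitly instead of inferring it from trigrelOid != 0,
     * because validation compiles trigger functions with trigrelOid left zero.
     */
    typedef struct FuncHashKey
    {
        unsigned int funcOid;
        unsigned int trigrelOid;    /* zero when validating */
        bool         isTrigger;     /* explicit flag, part of the key */
    } FuncHashKey;

    static bool
    keys_match(const FuncHashKey *a, const FuncHashKey *b)
    {
        return a->funcOid == b->funcOid &&
               a->trigrelOid == b->trigrelOid &&
               a->isTrigger == b->isTrigger;
    }

    int
    main(void)
    {
        /* Key built while validating a trigger function (no target table). */
        FuncHashKey validation = {1234, 0, true};

        /* Key for an attempted ordinary, non-trigger call of the same function. */
        FuncHashKey plain_call = {1234, 0, false};

        /*
         * With only funcOid and trigrelOid in the key these would collide and
         * the ordinary call could reuse the trigger compilation; the explicit
         * flag keeps them apart.
         */
        printf("keys match: %d\n", keys_match(&validation, &plain_call)); /* 0 */
        return 0;
    }
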
use the old relfilenode in the new tablespace. There might be another relation
in the new tablespace with the same relfilenode, so we must generate a fresh
relfilenode in the new tablespace.
The 8.3 patch to let deleted relation files linger as zero-length files until
the next checkpoint made this more obvious: moving a relation from one
tablespace to another, and then back again, caused a collision with the lingering
file.
Back-patch to 8.1. The issue is present in 8.0 as well, but it doesn't seem
worth fixing there, because we didn't have protection from OID collisions
after OID wraparound before 8.1.
Report by Guillaume Lelarge.
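The idea, reduced to a standalone sketch (the server actually allocates new
relfilenodes from its OID generator; this only illustrates checking for an
existing file before a number is reused):

    #include <stdio.h>
    #include <unistd.h>

    /*
     * Pick an unused file number in the destination directory instead of
     * reusing the relation's old one, which another relation there (or a
     * lingering not-yet-checkpointed file) may already own.
     */
    static unsigned long
    choose_new_filenode(const char *tablespace_dir, unsigned long candidate)
    {
        char path[1024];

        for (;;)
        {
            snprintf(path, sizeof(path), "%s/%lu", tablespace_dir, candidate);
            if (access(path, F_OK) != 0)
                return candidate;   /* no file with this number: safe to use */
            candidate++;            /* collision: try the next number */
        }
    }

    int
    main(void)
    {
        printf("new filenode: %lu\n", choose_new_filenode("/tmp", 16384));
        return 0;
    }
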
a SubLink expression into a rule query. We missed cases where the original
query contained a sub-SELECT in a function in FROM, a multi-row VALUES list,
or a RETURNING list. Per bug #4434 from Dean Rasheed and subsequent
investigation.
Back-patch to 8.1; older releases don't have the issue because they didn't
try to be smart about setting hasSubLinks only when needed.
forestalls potential overflow when the same table (or other object, but
usually tables) is accessed by very many successive queries within a single
transaction. Per report from Michael Milligan.
Back-patch to 8.0, which is as far back as the patch conveniently applies.
There have been no reports of overflow in pre-8.3 releases, but clearly the
risk existed all along. (Michael's report suggests that 8.3 may consume lock
counts faster than prior releases, but with no test case to look at it's hard
to be sure about that. Widening the counts seems a good future-proofing
measure in any event.)
whenever possible, as per bug report from Oleg Serov. While at it, reorder
the operations in the RECORD case to avoid possible palloc failure while the
variable update is only partly complete.
Back-patch as far as 8.1. Although the code of the particular function is
similar in 8.0, 8.0's support for composite fields in rows is sufficiently
broken elsewhere that it doesn't seem worth fixing this.
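The reordering principle, shown with a toy record variable in standalone C (not
the PL/pgSQL code itself): do all allocation first, and only touch the old
state once nothing further can fail.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Toy record variable: just an owned string. */
    typedef struct RecordVar
    {
        char *value;
    } RecordVar;

    /*
     * Assign a new value.  All allocation happens before the old state is
     * touched, so if the allocation fails the variable is left unchanged
     * rather than half-updated.  Returns 0 on success, -1 on failure.
     */
    static int
    record_assign(RecordVar *var, const char *newval)
    {
        char *copy = strdup(newval);    /* may fail ... */

        if (copy == NULL)
            return -1;                  /* ... but the old value is still intact */

        free(var->value);               /* only now discard the old state */
        var->value = copy;
        return 0;
    }

    int
    main(void)
    {
        RecordVar var = {strdup("old")};

        if (record_assign(&var, "new") == 0)
            printf("value: %s\n", var.value);
        free(var.value);
        return 0;
    }
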
at once, and ItemPointers are collected in memory.
Remove the killing of a tuple by killtuple() when the tuple has been moved to
another page, since that could produce unacceptable overhead.
Back-patch to 8.1, because the bug was introduced by GiST's concurrency support.
returns NULL instead of a PGresult. The former coding would fail, which
is OK, but it neglected to give you the PQerrorMessage that might tell
you why. In the oldest branches, there was another problem: it'd sometimes
report PQerrorMessage from the wrong connection.
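The caller-side pattern being fixed looks roughly like this in plain libpq (the
connection string and query are illustrative):

    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        PGconn   *conn = PQconnectdb("dbname=postgres");   /* illustrative DSN */
        PGresult *res;

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            PQfinish(conn);
            return 1;
        }

        res = PQexec(conn, "SELECT 1");

        /*
         * PQexec can return NULL (for example, out of memory or a dropped
         * connection), not just a PGresult with an error status.  Handle that
         * case too, and take the error text from the connection that actually
         * ran the query.
         */
        if (res == NULL || PQresultStatus(res) != PGRES_TUPLES_OK)
        {
            fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
            if (res != NULL)
                PQclear(res);
            PQfinish(conn);
            return 1;
        }

        printf("result: %s\n", PQgetvalue(res, 0, 0));
        PQclear(res);
        PQfinish(conn);
        return 0;
    }
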