is PG_GETARG_BOOL(2), but should be PG_GETARG_BOOL(1).
Apply the simple fix to the back branches only; a more extensive change will be
applied to HEAD per Tom's suggestion.
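As an illustration of the argument-indexing pattern involved (a minimal sketch
with a hypothetical function, not the code touched by this fix): V1-convention
arguments are fetched by zero-based index, so the second argument is
PG_GETARG_BOOL(1), not PG_GETARG_BOOL(2).

    #include "postgres.h"
    #include "fmgr.h"

    PG_MODULE_MAGIC;

    PG_FUNCTION_INFO_V1(demo_func);     /* hypothetical function, illustration only */

    Datum
    demo_func(PG_FUNCTION_ARGS)
    {
        int32   value  = PG_GETARG_INT32(0);    /* first argument: index 0 */
        bool    invert = PG_GETARG_BOOL(1);     /* second argument: index 1, not 2 */

        PG_RETURN_INT32(invert ? -value : value);
    }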
the other major heapam.c functions. The only known consequence of this
omission is that UPDATE RETURNING failed to return the correct value for
"tableoid", as per report from KaiGai Kohei.
Back-patch to 8.2. Arguably it's wrong all the way back; but without
evidence of visible breakage before RETURNING was added, I'll desist from
patching the older branches.
when they are invoked by the parser. We had been setting up a snapshot at
plan time but really it needs to be done earlier, before parse analysis.
Per report from Dmitry Koterov.
Also fix two related problems discovered while poking at this one:
exec_bind_message called datatype input functions without establishing a
snapshot, and SET CONSTRAINTS IMMEDIATE could call trigger functions without
establishing a snapshot.
Backpatch to 8.2. The underlying problem goes much further back, but it is
masked in 8.1 and before because we didn't attempt to invoke domain check
constraints within datatype input. It would only be exposed if a C-language
datatype input function used the snapshot; which evidently none do, or we'd
have heard complaints sooner. Since this code has changed a lot over time,
a back-patch is hardly risk-free, and so I'm disinclined to patch further
than absolutely necessary.
outer join clauses. Given, say,
... from a left join b on a.a1 = b.b1 where a.a1 = 42;
we'll deduce a clause b.b1 = 42 and then mark the original join clause
redundant (we can't remove it completely for reasons I don't feel like
squeezing into this log entry). However the original implementation of
that wasn't bulletproof, because clause_selectivity() wouldn't honor
this_selec if given nonzero varRelid --- which in practice meant that
it worked as desired *except* when considering index scan quals. This
resulted in bogus underestimation of the size of the indexscan result for
an inner indexscan in an outer join, and consequently a possibly bad
choice of indexscan vs. bitmap scan. Fix by introducing an explicit test
into clause_selectivity(). Also, to make sure we don't trigger that test
in corner cases, change the convention to be that this_selec > 1, not
this_selec = 1, means it's been marked redundant. Per trouble report from
Scara Maccai.
Back-patch to 8.2, where the problem was introduced.
toasted values, since those could get dropped once the cursor's transaction
is over. Per bug #4553 from Andrew Gierth.
Back-patch as far as 8.1. The bug actually exists back to 7.4 when holdable
cursors were introduced, but this patch won't work before 8.1 without
significant adjustments. Given the lack of field complaints, it doesn't seem
worth the work (and risk of introducing new bugs) to try to make a patch for
the older branches.
This was a thinko introduced in a patch from last February; it results
in memory leakage if an SRF is shut down before the actual end of query,
because subsequent code will be running in a longer-lived context than
it's expecting to be.
AND, OR, or equivalent clauses: if there are too many (more than 100) just
exit without proving anything. This ensures that we don't spend O(N^2) time
trying (and most likely failing) to prove anything about very long IN lists
and similar cases.
Also, install a couple of CHECK_FOR_INTERRUPTS calls to ensure that a long
proof attempt can be interrupted.
Per gripe from Sergey Konoplev.
Back-patch the whole patch to 8.2 and just the CHECK_FOR_INTERRUPTS addition
to 8.1. (The rest of the patch doesn't apply cleanly, and since 8.1 doesn't
show the complained-of behavior anyway, it doesn't seem necessary to work
hard on it.)
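A rough sketch of the shape of such a guard (the names and the helper below are
assumptions, not the actual predtest.c code):

    #include "postgres.h"
    #include "miscadmin.h"              /* CHECK_FOR_INTERRUPTS() */
    #include "nodes/pg_list.h"

    #define MAX_BRANCHES_TO_TEST 100    /* assumed name for the cutoff */

    /* Assumed stand-in for the recursive proof logic. */
    extern bool clause_implies(Node *clause, Node *predicate);

    /* Give up early on very long AND/OR lists rather than spend O(N^2) time. */
    static bool
    clause_list_implies(List *clauses, Node *predicate)
    {
        ListCell   *lc;

        if (list_length(clauses) > MAX_BRANCHES_TO_TEST)
            return false;               /* too many branches: prove nothing */

        foreach(lc, clauses)
        {
            CHECK_FOR_INTERRUPTS();     /* let a long proof attempt be canceled */

            if (!clause_implies((Node *) lfirst(lc), predicate))
                return false;
        }
        return true;
    }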
we extended the appendrel mechanism to support UNION ALL optimization. The
reason nobody noticed was that we are not actually using attr_needed data for
appendrel children; hence it seems more reasonable to rip it out than fix it.
Back-patch to 8.2 because an Assert failure is possible in corner cases.
Per examination of an example from Jim Nasby.
In HEAD, also get rid of AppendRelInfo.col_mappings, which is quite inadequate
to represent UNION ALL situations; depend entirely on translated_vars instead.
it was using too soon. In a situation where pg_do_encoding_conversion is
a no-op, this led to garbage data being returned.
In HEAD, also modify the code that's ensuring null termination to make it
a tad more obvious what's happening.
recursion when we are unable to convert a localized error message to the
client's encoding. We've been over this ground before, but as reported by
Ibrar Ahmed, it still didn't work in the case of conversion failures for
the conversion-failure message itself :-(. Fix by installing a "circuit
breaker" that disables attempts to localize this message once we get into
recursion trouble.
Patch all supported branches, because it is in fact broken in all of them;
though I had to add some missing translations to the older branches in
order to expose the failure in the particular test case I was using.
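The circuit-breaker idea can be sketched in plain C roughly as follows
(illustrative only; the names and the depth threshold are assumptions, not the
actual elog.c code):

    #include <stdbool.h>
    #include <libintl.h>                /* gettext() */

    /* Incremented/decremented by the error-reporting entry and exit code. */
    static int  error_recursion_depth = 0;

    /* Are we deep enough in nested error reporting that fancy processing,
     * such as message localization, should be skipped? */
    static bool
    in_recursion_trouble(void)
    {
        return error_recursion_depth > 2;
    }

    /* Translate a message unless the circuit breaker has tripped. */
    static const char *
    err_translate(const char *msg)
    {
        if (in_recursion_trouble())
            return msg;                 /* fall back to the untranslated ASCII text */
        return gettext(msg);
    }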
treat Var and non-Var IN-list items differently. Only non-Var items are
candidates to go into an ANY(ARRAY) construct --- we put all Vars as separate
OR conditions on the grounds that that leaves more scope for optimization.
Per suggestion from Robert Haas.
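A sketch of the Var/non-Var split described above, with invented names (the
real logic lives in the parser's IN-list transformation):

    #include "postgres.h"
    #include "nodes/pg_list.h"
    #include "nodes/primnodes.h"

    /* Illustrative only: Vars stay as separate OR'ed comparisons, while the
     * remaining items are candidates for a single x = ANY(ARRAY[...]) clause. */
    static void
    split_in_list(List *items, List **or_items, List **array_items)
    {
        ListCell   *lc;

        foreach(lc, items)
        {
            Node   *item = (Node *) lfirst(lc);

            if (IsA(item, Var))
                *or_items = lappend(*or_items, item);
            else
                *array_items = lappend(*array_items, item);
        }
    }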
into an OR of equality comparisons, rather than x = ANY(ARRAY[...]), when there
are Vars in the right-hand side. This avoids a performance regression compared
to pre-8.2 releases, in cases where the OR form can be optimized into scans
of multiple indexes. Limit the possible downside by preferring this form only
when the list isn't very long (I set the cutoff at 32 elements, which is a
bit arbitrary but in the right ballpark). Per discussion with Jim Nasby.
In passing, also make it try the OR form if it cannot select a common type
for the array elements; we've seen a complaint or two about how the OR form
worked for such cases and ARRAY doesn't.
address of afterTriggers->query_stack[afterTriggers->query_depth] and hung
onto it through all its firings of triggers. However, if a trigger causes
sufficiently many nested query executions, query_stack will get repalloc'd
bigger, leaving AfterTriggerEndQuery --- and hence afterTriggerInvokeEvents
--- using a stale pointer.
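The hazard is the classic stale-pointer-after-reallocation pattern; a
standalone illustration (plain C with realloc() standing in for repalloc, and
invented names):

    #include <stdlib.h>

    typedef struct EventList { int head; int tail; } EventList;

    static EventList *query_stack = NULL;
    static int        stack_size = 0;

    static void
    grow_stack(int needed)              /* error handling omitted */
    {
        if (needed > stack_size)
        {
            stack_size = needed * 2;
            /* every previously saved &query_stack[i] is now stale */
            query_stack = realloc(query_stack, stack_size * sizeof(EventList));
        }
    }

    void
    stale_pointer_demo(int depth)
    {
        EventList *events;

        grow_stack(depth + 1);
        events = &query_stack[depth];   /* cache a pointer into the array ... */

        grow_stack(depth + 100);        /* ... nested activity enlarges it ... */
        events->head = 0;               /* ... and this now writes freed memory */
    }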
So far as I can find, the only consequence of this error is to stomp on a
couple of words of already-freed memory; which would lead to a failure only if
that chunk had already gotten re-allocated for something else. So it's hard
to exhibit a simple failure case, but this is surely a bug.
I noticed this while working on my recent patch to reduce pending-trigger
space usage. The present patch is mighty ugly, because it requires making
afterTriggerInvokeEvents know about all the possible event lists it might get
called on. Fortunately, this is only needed in back branches because CVS HEAD
avoids the problem in a different way: afterTriggerInvokeEvents only touches
the passed AfterTriggerEventList pointer once at startup. Back branches are
stable enough that wiring in knowledge of all possible call usages doesn't
seem like a killer problem.
Back-patch to 8.0. 7.4's trigger code is completely different and doesn't
seem to have the problem (it doesn't even use repalloc).
correctly set. As a result, killtuple() marks the wrong tuple on the page as
dead. The bug was introduced by me while fixing possible duplicates during
GiST index scans.
In the previous coding, the list of columns that needed to be hashed on
was allocated in the per-query context, but we reallocated it every time
the Agg node was rescanned. Since this information doesn't change over
a rescan, just construct the list of columns once during ExecInitAgg().
according to the TupleDesc's natts, not the number of physical columns in the
tuple. The previous coding would do the wrong thing in cases where natts is
different from the tuple's column count: either incorrectly report an error when
it should just treat the column as null, or actually crash due to indexing off
the end of the TupleDesc's attribute array. (The second case is probably not
possible in modern PG versions, due to more careful handling of inheritance
cases than we once had. But it's still a clear lack of robustness here.)
The incorrect error indication is ignored by all callers within the core PG
distribution, so this bug has no symptoms visible within the core code, but
it might well be an issue for add-on packages. So patch all the way back.
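The intended check can be sketched as follows (a hedged illustration, not the
patched function itself; header locations and names vary across versions):

    #include "postgres.h"
    #include "access/htup_details.h"    /* heap_getattr(), HeapTupleHeaderGetNatts() */
    #include "access/tupdesc.h"

    /* Range-check against the TupleDesc, then treat attributes present in the
     * descriptor but missing from the physical tuple (e.g. columns added later
     * by ALTER TABLE) as nulls instead of erroring or reading off the end. */
    static Datum
    get_attr_checked(HeapTuple tuple, int attnum, TupleDesc tupdesc, bool *isnull)
    {
        if (attnum <= 0 || attnum > tupdesc->natts)
            elog(ERROR, "invalid attribute number %d", attnum);

        if (attnum > HeapTupleHeaderGetNatts(tuple->t_data))
        {
            *isnull = true;             /* column not stored in this tuple */
            return (Datum) 0;
        }

        return heap_getattr(tuple, attnum, tupdesc, isnull);
    }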
the isTrigger state explicitly, not rely on nonzero-ness of trigrelOid
to indicate trigger-hood, because trigrelOid will be left zero when compiling
for validation. The (useless) function hash entry built by the validator
was able to match an ordinary non-trigger call later in the same session,
thereby bypassing the check that is supposed to prevent such a call.
Per report from Alvaro.
It might be worth suppressing the useless hash entry altogether, but
that's a bigger change than I want to consider back-patching.
Back-patch to 8.0. 7.4 doesn't have the problem because it doesn't
have validation mode.
use the old relfilenode in the new tablespace. There might be another relation
in the new tablespace with the same relfilenode, so we must generate a fresh
relfilenode in the new tablespace.
The 8.3 patch to let deleted relation files linger as zero-length files until
the next checkpoint made this more obvious: moving a relation from one
tablespace to another, and then back again, caused a collision with the lingering
file.
Back-patch to 8.1. The issue is present in 8.0 as well, but it doesn't seem
worth fixing there, because we didn't have protection from OID collisions
after OID wraparound before 8.1.
Report by Guillaume Lelarge.
a SubLink expression into a rule query. We missed cases where the original
query contained a sub-SELECT in a function in FROM, a multi-row VALUES list,
or a RETURNING list. Per bug #4434 from Dean Rasheed and subsequent
investigation.
Back-patch to 8.1; older releases don't have the issue because they didn't
try to be smart about setting hasSubLinks only when needed.
forestalls potential overflow when the same table (or other object, but
usually tables) is accessed by very many successive queries within a single
transaction. Per report from Michael Milligan.
Back-patch to 8.0, which is as far back as the patch conveniently applies.
There have been no reports of overflow in pre-8.3 releases, but clearly the
risk existed all along. (Michael's report suggests that 8.3 may consume lock
counts faster than prior releases, but with no test case to look at it's hard
to be sure about that. Widening the counts seems a good future-proofing
measure in any event.)
GetOldestXmin() instead of RecentGlobalXmin; this is safer because we do not
depend on the latter being correctly set elsewhere, and while it is more
expensive, this code path is not performance-critical. This is a real
risk for autovacuum, because it can execute whole cycles without doing
a single vacuum, which would mean that RecentGlobalXmin would stay at its
initialization value, FirstNormalTransactionId, causing a bogus value to be
inserted in pg_database. This bug could explain some recent reports of
failure to truncate pg_clog.
At the same time, change the initialization of RecentGlobalXmin to
InvalidTransactionId, and ensure that it's set to something else whenever
it's going to be used. Using it as FirstNormalTransactionId in HOT page
pruning could result in data loss. InitPostgres takes care of setting it
to a valid value, but the extra checks are there to prevent "special"
backends from behaving in unusual ways.
Per Tom Lane's detailed problem dissection in 29544.1221061979@sss.pgh.pa.us
whenever possible, as per bug report from Oleg Serov. While at it, reorder
the operations in the RECORD case to avoid possible palloc failure while the
variable update is only partly complete.
Back-patch as far as 8.1. Although the code of the particular function is
similar in 8.0, 8.0's support for composite fields in rows is sufficiently
broken elsewhere that it doesn't seem worth fixing this.
at once and ItemPointers are collected in memory.
Stop killing a tuple via killtuple() if the tuple was moved to another
page, since doing so could produce unacceptable overhead.
Back-patch to 8.1, because the bug was introduced by GiST's concurrency support.
Per Microsoft knowledge base article Q201213, early versions of
Windows fail when we do this. Later versions of Windows appear
to have a higher limit than 64KB, but do still fail on large
sends, so we unconditionally limit it for all versions.
Patch from Tom Lane.
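The capping idea, in a standalone sketch (generic sockets code with invented
names; not the actual src/backend/port implementation):

    #include <sys/types.h>
    #include <sys/socket.h>             /* send(); winsock2.h on Windows */

    #define MAX_SEND_CHUNK (64 * 1024)  /* never hand send() more than 64KB */

    static ssize_t
    send_capped(int sock, const char *buf, size_t len)
    {
        size_t  done = 0;

        while (done < len)
        {
            size_t  chunk = len - done;
            ssize_t n;

            if (chunk > MAX_SEND_CHUNK)
                chunk = MAX_SEND_CHUNK;

            n = send(sock, buf + done, chunk, 0);
            if (n < 0)
                return -1;              /* caller inspects errno / WSAGetLastError() */
            done += (size_t) n;
        }
        return (ssize_t) done;
    }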
returns NULL instead of a PGresult. The former coding would fail, which
is OK, but it neglected to give you the PQerrorMessage that might tell
you why. In the oldest branches, there was another problem: it'd sometimes
report PQerrorMessage from the wrong connection.
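For reference, the client-side pattern being fixed looks roughly like this
(a minimal libpq sketch, not psql's actual code):

    #include <stdio.h>
    #include <libpq-fe.h>

    /* Run a query; if PQexec() returns NULL (e.g. out of memory or a dead
     * connection), report PQerrorMessage() instead of failing silently. */
    static int
    run_query(PGconn *conn, const char *sql)
    {
        PGresult   *res = PQexec(conn, sql);

        if (res == NULL)
        {
            fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
            return -1;
        }

        if (PQresultStatus(res) != PGRES_TUPLES_OK &&
            PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "query failed: %s", PQresultErrorMessage(res));

        PQclear(res);
        return 0;
    }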
parent, not only those with RangeTblRefs. We need them in ExecCheckRTPerms.
Report by Brendan O'Shea. Back-patch to 8.2, where pull_up_simple_union_all
was introduced.
INSERT or UPDATE will match the target table's current rowtype. In pre-8.3
releases inconsistency can arise with stale cached plans, as reported by
Merlin Moncure. (We patched the equivalent hazard on the SELECT side in Feb
2007; I'm not sure why we thought there was no risk on the insertion side.)
In 8.3 and HEAD this problem should be impossible due to plan cache
invalidation management, but it seems prudent to make the check anyway.
Back-patch as far as 8.0. 7.x versions lack ALTER COLUMN TYPE, so there
seems no way to abuse a stale plan comparably.
would work, but in fact it didn't return the same rows when moving backwards
as when moving forwards. This would have no visible effect in a DISTINCT
query (at least assuming the column datatypes use a strong definition of
equality), but it gave entirely wrong answers for DISTINCT ON queries.
the alreadyDeleted list has to be passed down through
deleteDependentObjects(), else objects that are deleted via auto/internal
dependencies don't get reported back up to performMultipleDeletions().
Depending on the visitation order, this could cause the code to try to delete
an already-deleted object, leading to strange errors in DROP OWNED (typically
"cache lookup failed for relation NNNNN" or similar). Per bug #4289.
Patch for back branches only. This code has recently been rewritten in HEAD,
and doesn't have this particular bug anymore.
bug #4290. The fundamental bug is that masking extParam by outer_params,
as finalize_plan had been doing, caused us to lose the information that
an initPlan depended on the output of a sibling initPlan. On reflection
the best thing to do seemed to be not to try to adjust outer_params for
this case but get rid of it entirely. The only thing it was really doing
for us was to filter out param IDs associated with SubPlan nodes, and that
can be done (with greater accuracy) while processing individual SubPlan
nodes in finalize_primnode. This approach was vindicated by the discovery
that the masking method was hiding a second bug: SS_finalize_plan failed to
remove extParam bits for initPlan output params that were referenced in the
main plan tree (it only got rid of those referenced by other initPlans).
It's not clear that this caused any real problems, given the limited use
of extParam by the executor, but it's certainly not what was intended.
I originally thought that there was also a problem with needing to include
indirect dependencies on external params in initPlans' param sets, but it
turns out that the executor handles this correctly so long as the depended-on
initPlan is earlier in the initPlans list than the one using its output.
That seems a bit of a fragile assumption, but it is true at the moment,
so I just documented it in some code comments rather than making what would
be rather invasive changes to remove the assumption.
Back-patch to 8.1. Previous versions don't have the case of initPlans
referring to other initPlans' outputs, so while the existing logic is still
questionable for them, there are not any known bugs to be fixed. So I'll
refrain from changing them for now.