as per my recent proposal. release.sgml itself is now just a stub that should
change rarely; ideally, only once per major release to add a new include line.
Most editing work will occur in the release-N.N.sgml files. To update a back
branch for a minor release, just copy the appropriate release-N.N.sgml
file(s) into the back branch.
This commit doesn't change the end-product documentation at all, only the
source layout. However, it makes it easy to start omitting ancient information
from newer branches' documentation, should we ever decide to do that.

part that rounds up to exactly 1.0 second. The previous coding rejected input
like "00:12:57.9999999999999999999999999999", with the exact number of nines
needed to cause failure varying depending on the float-timestamp option and
possibly on platform. Obviously this should round up to the next integral
second, if we don't have enough precision to distinguish the value from that.
Per bug #4789 from Robert Kruus.
In passing, fix a missed check for fractional seconds in one copy of the
"is it greater than 24:00:00" code.
Broken all the way back, so patch all the way back.
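
A minimal standalone sketch of the rounding rule described above; the
helper name and layout are invented for illustration, not taken from the
actual datetime input code. When the fractional part rounds up to exactly
1.0 second, it is carried into the seconds field instead of being rejected:

    #include <math.h>
    #include <stdio.h>

    /*
     * Round fractional seconds to microsecond precision; if the result
     * reaches 1.0, carry it into the integral-seconds field rather than
     * failing on "too many nines".
     */
    static void
    round_fractional_seconds(int *sec, double *fsec)
    {
        *fsec = rint(*fsec * 1000000.0) / 1000000.0;
        if (*fsec >= 1.0)
        {
            *fsec -= 1.0;
            (*sec)++;
        }
    }

    int
    main(void)
    {
        int     sec = 57;
        double  fsec = 0.9999999999999999;      /* rounds up to 1.0 */

        round_fractional_seconds(&sec, &fsec);
        printf("%02d.%06d\n", sec, (int) (fsec * 1000000.0));   /* 58.000000 */
        return 0;
    }
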
aggregate function. By definition, such a sub-SELECT cannot reference any
variables of query levels between itself and the aggregate's semantic level
(else the aggregate would've been assigned to that lower level instead).
So the correct, most efficient implementation is to treat the sub-SELECT as
being a sub-select of that outer query level, not the level the aggregate
syntactically appears in. Not doing so also confuses the heck out of our
parameter-passing logic, as illustrated in a bug report from Daniel Grace.
Fortunately, we were already copying the whole Aggref expression up to the
outer query level, so all that's needed is to delay SS_process_sublinks
processing of the sub-SELECT until control returns to the outer level.
This has been broken since we introduced spec-compliant treatment of
outer aggregates in 7.4; so patch all the way back.

interval_eq() considers equal. I'm not sure how that fundamental requirement
escaped us through multiple revisions of this hash function, but there it is;
it's been wrong since interval_hash was first written for PG 7.1.
Per bug #4748 from Roman Kononov.
Backpatch to all supported releases.
This patch changes the contents of hash indexes for interval columns. That's
no particular problem for PG 8.4, since we've broken on-disk compatibility
of hash indexes already; but it will require a migration warning note in
the next minor releases of all existing branches: "if you have any hash
indexes on columns of type interval, REINDEX them after updating".
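
The invariant being restored is that hash(x) must equal hash(y) whenever
the type's equality function says x = y. Here is a standalone sketch using
a simplified two-field interval (the real struct and hash function differ):
hash the canonical span that the comparison logic reduces values to, never
the raw fields.

    #include <stdint.h>
    #include <stdio.h>

    typedef struct
    {
        int64_t time;           /* microseconds */
        int32_t month;
    } SimpleInterval;

    #define USECS_PER_DAY  INT64_C(86400000000)

    /* the canonical value the comparison operators effectively use */
    static int64_t
    interval_span(const SimpleInterval *iv)
    {
        return iv->time + iv->month * 30 * USECS_PER_DAY;
    }

    /* hash the canonical span, so equal-comparing values hash equal */
    static uint32_t
    interval_hash(const SimpleInterval *iv)
    {
        uint64_t span = (uint64_t) interval_span(iv);

        return (uint32_t) (span ^ (span >> 32));
    }

    int
    main(void)
    {
        SimpleInterval a = {30 * USECS_PER_DAY, 0};  /* '30 days' */
        SimpleInterval b = {0, 1};                   /* '1 month' */

        /* equal per the comparison rule, so the hashes must match */
        printf("%u %u\n", interval_hash(&a), interval_hash(&b));
        return 0;
    }
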
temporary tables of other sessions; that is unsafe because of the way our
buffer management works. Per report from Stuart Bishop.
This is redundant with the bufmgr.c checks in HEAD, but not at all redundant
in the back branches.
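
The shape of the guard, as a fragment; the macro and message below follow
more recent PostgreSQL sources, and the exact spelling and placement vary
across the back branches. Operations that would pull another session's
temp-table pages through shared buffers are rejected up front:

    /* refuse to touch another backend's temporary relation */
    if (RELATION_IS_OTHER_TEMP(rel))
        ereport(ERROR,
                (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
                 errmsg("cannot access temporary tables of other sessions")));
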
format codes are misapplied to a numeric argument. (The code still produces
a pretty bogus error message in such cases, but I'll settle for stopping the
crash for now.) Per bug #4700 from Sergey Burladyan.
Problem exists in all supported branches, so patch all the way back.
In HEAD, also clean up some ugly coding in the nearby cache management
code.
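
A toy standalone illustration of the defensive pattern (all names here are
invented, not taken from formatting.c): a format code that is only
meaningful for datetime input must fail cleanly in the numeric path instead
of dereferencing datetime state that was never set up.

    #include <stdio.h>
    #include <stdlib.h>

    typedef enum { CODE_DIGIT, CODE_MONTH_NAME } FormatCode;

    static void
    apply_numeric_code(FormatCode code, double value)
    {
        if (code == CODE_MONTH_NAME)
        {
            /* previously: crashed chasing a datetime struct that isn't there */
            fprintf(stderr, "format code not valid for numeric input\n");
            exit(1);
        }
        printf("%.0f\n", value);
    }

    int
    main(void)
    {
        apply_numeric_code(CODE_DIGIT, 42.0);       /* prints 42 */
        apply_numeric_code(CODE_MONTH_NAME, 42.0);  /* clean error, no crash */
        return 0;
    }
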
fail to provide the function itself. Not sure how we escaped testing anything
later than 7.3 in such cases, but they still exist, as per André Volpato's
report about AIX 5.3.
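
A sketch of a typical cbrt() substitute for such platforms; this is the
general shape of the fallback, not a verbatim copy of the committed code.
pow() cannot take a negative base with a fractional exponent, so the sign
must be peeled off and restored:

    #include <math.h>

    /* cube root for platforms whose libm lacks a working cbrt() */
    static double
    cbrt_substitute(double x)
    {
        int     isneg = (x < 0.0);
        double  absx = fabs(x);
        double  retval = pow(absx, 1.0 / 3.0);

        return isneg ? -retval : retval;
    }
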
encoding conversion of any elog/ereport message being sent to the frontend.
This generalizes a patch that I put in last October, which suppressed
translation of only specific messages known to be associated with recursive
can't-translate-the-message behavior. As shown in bug #4680, we need a more
general answer in order to have some hope of coping with broken encoding
conversion setups. This approach seems a good deal less klugy anyway.
Patch in all supported branches.

fail on zero-length inputs. This isn't an issue in normal use because the
conversion infrastructure skips calling the converters for empty strings.
However a problem was created by yesterday's patch to check whether the
right conversion function is supplied in CREATE CONVERSION. The most
future-proof fix seems to be to make the converters safe for this corner case.
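
The shape of the corner-case fix (illustrative signature; the real
converters live under src/backend/utils/mb/): every converter gets a cheap
early exit so that zero-length input is a no-op instead of a failure.

    /* sketch of a conversion routine hardened for empty input */
    static void
    convert_sketch(const unsigned char *src, unsigned char *dest, int len)
    {
        if (len <= 0)
        {
            *dest = '\0';       /* nothing to convert; emit empty result */
            return;
        }
        /* ... normal conversion loop over src[0..len-1] ... */
    }
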
function for the specified source and destination encodings. We do that by
calling the function with an empty string. If it can't perform the requested
conversion, it will throw an error.
Backport to 7.4 - 8.3. Per bug report #4680 by Denis Afonin.
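
A sketch of the validation call, assuming the standard conversion-function
signature (source encoding ID, destination encoding ID, source string,
destination buffer, length) and fmgr's generic OidFunctionCall5() entry
point; details differ from the committed code. A function that doesn't
implement the requested pair will ereport() and abort the CREATE CONVERSION:

    char        result[1];

    /*
     * Test the function on an empty string: it performs its own check
     * that the encoding pair matches what it implements.
     */
    OidFunctionCall5(funcoid,
                     Int32GetDatum(from_encoding),
                     Int32GetDatum(to_encoding),
                     CStringGetDatum(""),
                     CStringGetDatum(result),
                     Int32GetDatum(0));
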
encoding conversion functions. These are not can't-happen cases because
it's possible to create a conversion with the wrong conversion function
for the specified encoding pair. That would lead to an Assert crash in
an Assert-enabled build, or incorrect conversion otherwise, neither of
which is desirable. This would be a denial-of-service issue if production databases
were customarily built with asserts enabled, but fortunately that's not so.
Per an observation by Heikki.
Back-patch to all supported branches.
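
The shape of the hardened check, for a hypothetical UTF8-to-LATIN1
converter (encoding constants chosen for illustration; newer PostgreSQL
sources wrap this pattern in a macro): a plain Assert is not enough,
because CREATE CONVERSION can legitimately deliver the wrong pair at
runtime.

    /* runtime check replacing the old Assert */
    if (src_encoding != PG_UTF8 || dest_encoding != PG_LATIN1)
        ereport(ERROR,
                (errcode(ERRCODE_INTERNAL_ERROR),
                 errmsg("unexpected encoding pair (%d, %d) for this conversion function",
                        src_encoding, dest_encoding)));
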
to the documented API value. The previous code was consistent with the
actual implementation, but it accepted too much or too little relative to
what the API documentation promises.
Per comment from Zdenek Kotala.

If the table was smaller than REL_TRUNCATE_FRACTION (= 16) pages, we always
tried to acquire AccessExclusiveLock on it even if there were no empty
pages at the end.
Report by Simon Riggs. Back-patch all the way to 7.4.
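
A sketch of the corrected heuristic, using the variable and constant names
from vacuumlazy.c (treat it as a reconstruction, not a patch excerpt). The
essential change is the leading possibly_freeable > 0 test: without it, a
table of fewer than REL_TRUNCATE_FRACTION pages always satisfied
possibly_freeable >= rel_pages / 16, since the division truncates to zero.

    possibly_freeable = vacrelstats->rel_pages - vacrelstats->nonempty_pages;
    if (possibly_freeable > 0 &&
        (possibly_freeable >= REL_TRUNCATE_MINIMUM ||
         possibly_freeable >= vacrelstats->rel_pages / REL_TRUNCATE_FRACTION))
        lazy_truncate_heap(onerel, vacrelstats);
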
recursion when we are unable to convert a localized error message to the
client's encoding. We've been over this ground before, but as reported by
Ibrar Ahmed, it still didn't work in the case of conversion failures for
the conversion-failure message itself :-(. Fix by installing a "circuit
breaker" that disables attempts to localize this message once we get into
recursion trouble.
Patch all supported branches, because it is in fact broken in all of them;
though I had to add some missing translations to the older branches in
order to expose the failure in the particular test case I was using.
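
A standalone sketch of the circuit-breaker idea (the real mechanism is
recursion-depth tracking in elog.c; this is deliberately simplified): once
localization has already failed while reporting a localization failure,
hand back the untranslated text instead of trying again.

    /* depth of nested message-localization attempts */
    static int  localize_depth = 0;

    static const char *
    localize(const char *msg)
    {
        if (localize_depth > 1)
            return msg;         /* circuit breaker: give up, stay safe */

        localize_depth++;
        /* ... translation + encoding conversion; may itself fail ... */
        localize_depth--;
        return msg;
    }
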
address of afterTriggers->query_stack[afterTriggers->query_depth] and hung
onto it through all its firings of triggers. However, if a trigger causes
sufficiently many nested query executions, query_stack will get repalloc'd
bigger, leaving AfterTriggerEndQuery --- and hence afterTriggerInvokeEvents
--- using a stale pointer.
So far as I can find, the only consequence of this error is to stomp on a
couple of words of already-freed memory; which would lead to a failure only if
that chunk had already gotten re-allocated for something else. So it's hard
to exhibit a simple failure case, but this is surely a bug.
I noticed this while working on my recent patch to reduce pending-trigger
space usage. The present patch is mighty ugly, because it requires making
afterTriggerInvokeEvents know about all the possible event lists it might get
called on. Fortunately, this is only needed in back branches because CVS HEAD
avoids the problem in a different way: afterTriggerInvokeEvents only touches
the passed AfterTriggerEventList pointer once at startup. Back branches are
stable enough that wiring in knowledge of all possible call usages doesn't
seem like a killer problem.
Back-patch to 8.0. 7.4's trigger code is completely different and doesn't
seem to have the problem (it doesn't even use repalloc).
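
A standalone illustration of the bug class, using plain realloc() in place
of repalloc(): caching the address of an element inside a growable array
leaves a dangling pointer once the array moves. The safe pattern is to keep
the index and re-fetch through the base pointer.

    #include <stdio.h>
    #include <stdlib.h>

    int
    main(void)
    {
        int    *stack = malloc(4 * sizeof(int));
        int    *cached = &stack[2];     /* what the buggy code kept */
        int     idx = 2;                /* what it should keep instead */

        stack = realloc(stack, 1024 * sizeof(int)); /* may move the block */

        /* "cached" may now dangle; stack[idx] is always valid */
        stack[idx] = 42;
        printf("%d\n", stack[idx]);
        (void) cached;                  /* dereferencing it would be undefined */

        free(stack);
        return 0;
    }
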
according to the TupleDesc's natts, not the number of physical columns in the
tuple. The previous coding would do the wrong thing in cases where natts is
different from the tuple's column count: either incorrectly report an error when
it should just treat the column as null, or actually crash due to indexing off
the end of the TupleDesc's attribute array. (The second case is probably not
possible in modern PG versions, due to more careful handling of inheritance
cases than we once had. But it's still a clear lack of robustness here.)
The incorrect error indication is ignored by all callers within the core PG
distribution, so this bug has no symptoms visible within the core code, but
it might well be an issue for add-on packages. So patch all the way back.
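
The corrected bounds logic, sketched with names from the heap-access code
(an illustrative shape, not the committed diff): the attribute number is
range-checked against the TupleDesc's natts, and a column that is within
natts but beyond the fields physically present in the tuple simply reads
as null.

    if (attnum <= 0 || attnum > tupleDesc->natts)
        elog(ERROR, "invalid attribute number %d", attnum);

    if (attnum > HeapTupleHeaderGetNatts(tuple->t_data))
        return true;            /* not physically stored: treat as null */
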
the isTrigger state explicitly, not rely on nonzero-ness of trigrelOid
to indicate trigger-hood, because trigrelOid will be left zero when compiling
for validation. The (useless) function hash entry built by the validator
was able to match an ordinary non-trigger call later in the same session,
thereby bypassing the check that is supposed to prevent such a call.
Per report from Alvaro.
It might be worth suppressing the useless hash entry altogether, but
that's a bigger change than I want to consider back-patching.
Back-patch to 8.0. 7.4 doesn't have the problem because it doesn't
have validation mode.
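
A simplified sketch of the fix's shape (the real structure is plpgsql's
function hash key; fields are abbreviated here): trigger-ness becomes an
explicit key field, so a validation-time compile that leaves trigrelOid at
InvalidOid can no longer produce an entry that an ordinary call would match.

    typedef struct
    {
        Oid     funcOid;
        bool    isTrigger;      /* explicit, not inferred from trigrelOid */
        Oid     trigrelOid;     /* InvalidOid when compiling for validation */
        /* ... argument-type fields ... */
    } FuncHashKey;
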
forestalls potential overflow when the same table (or other object, but
usually tables) is accessed by very many successive queries within a single
transaction. Per report from Michael Milligan.
Back-patch to 8.0, which is as far back as the patch conveniently applies.
There have been no reports of overflow in pre-8.3 releases, but clearly the
risk existed all along. (Michael's report suggests that 8.3 may consume lock
counts faster than prior releases, but with no test case to look at it's hard
to be sure about that. Widening the counts seems a good future-proofing
measure in any event.)
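
A standalone illustration of the failure mode being forestalled: a 32-bit
count incremented once per query against the same table can plausibly wrap
within a single long-running transaction, while a 64-bit count cannot for
any realistic workload.

    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        uint32_t narrow = UINT32_MAX;   /* per-lock count about to overflow */
        int64_t  wide = UINT32_MAX;     /* the widened counter */

        narrow++;
        wide++;

        printf("32-bit: %u\n", narrow);             /* wraps to 0 */
        printf("64-bit: %lld\n", (long long) wide); /* 4294967296 */
        return 0;
    }
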
returns NULL instead of a PGresult. The former coding would fail, which
is OK, but it neglected to give you the PQerrorMessage that might tell
you why. In the oldest branches, there was another problem: it'd sometimes
report PQerrorMessage from the wrong connection.
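
A minimal libpq example of the corrected pattern (empty conninfo for
brevity): when PQexec() returns NULL, the reason must come from
PQerrorMessage() on that same connection.

    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        PGconn     *conn = PQconnectdb("");
        PGresult   *res;

        if (PQstatus(conn) != CONNECTION_OK)
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));

        res = PQexec(conn, "SELECT 1");
        if (res == NULL || PQresultStatus(res) != PGRES_TUPLES_OK)
            fprintf(stderr, "query failed: %s", PQerrorMessage(conn));

        PQclear(res);           /* safe even if res is NULL */
        PQfinish(conn);
        return 0;
    }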