last pair of parameter name/value strings, even when there are MAXPARAMS
of them. Aboriginal bug in contrib/xml2, noted while studying bug #4912
(though I'm not sure whether there's something else involved in that
report).
This might be thought a security issue, since it's a potential backend
crash; but considering that untrustworthy users shouldn't be allowed
to get their hands on xslt_process() anyway, it's probably not worth
getting excited about.
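For illustration, a minimal sketch of the sizing rule for such a NULL-terminated name/value list (the layout, names, and MAXPARAMS value here are illustrative, not the actual contrib/xml2 code):

    #include <stddef.h>

    #define MAXPARAMS 20

    /* A libxslt-style parameter list is a flat array of name/value string
     * pairs terminated by a NULL entry, so it needs 2*MAXPARAMS + 1 slots:
     * the terminator must still fit when all MAXPARAMS pairs are present. */
    static void
    build_params(const char *params[2 * MAXPARAMS + 1],
                 const char **names, const char **values, int n)
    {
        int i, j = 0;

        for (i = 0; i < n && i < MAXPARAMS; i++)
        {
            params[j++] = names[i];
            params[j++] = values[i];
        }
        params[j] = NULL;       /* terminate even when i == MAXPARAMS */
    }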
a number of other geometric operators also depend on. It miscalculated the
slope of the perpendicular to the given line segment anytime that slope was
other than 0, infinite, or +/-1. In some cases the error would be masked
because the true closest point on the line segment was one of its endpoints
rather than the intersection point, but in other cases it could give an
arbitrarily bad answer. Per bug #4872 from Nick Roosevelt.
Bug goes clear back to Berkeley days, so patch all supported branches.
Make a couple of cosmetic adjustments while at it.
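For reference, the bit of analytic geometry involved: if the segment lies on a line with finite, nonzero slope m, the perpendicular through a point (x0, y0) has slope -1/m, i.e. in LaTeX form

    m_{\perp} = -\frac{1}{m}, \qquad y - y_0 = -\frac{1}{m}\,(x - x_0)

For m = 0 the perpendicular is vertical, and for a vertical segment it is horizontal; per the description above, those were the cases the old code already got right.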
eg Japan. Report and fix by Itagaki Takahiro. Also fix CASHDEBUG printout
format for branches with 64-bit money type, and some minor comment cleanup.
Back-patch to 7.4, because it's broken all the way back.
should use GinItemPointerGetBlockNumber/GinItemPointerGetOffsetNumber,
not ItemPointerGetBlockNumber/ItemPointerGetOffsetNumber, because the latter
will Assert() on ip_posid == 0, ie a "Min" pointer. (Thus, ItemPointerIsMin
has never worked at all, but it seems unused at present.) I'm not certain
that the case can occur in normal functioning, but it's blowing up on me
while investigating Tatsuo-san's data corruption problem. In any case it
seems like a problem waiting to bite someone.
Back-patch just in case this really is a problem for somebody in the field.
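A simplified sketch of the distinction (the struct layout and the _sketch names are illustrative; the real macros live in the ItemPointer and GIN headers):

    #include <assert.h>
    #include <stdint.h>

    typedef struct
    {
        uint16_t ip_blkid_hi;   /* block number, split into two halves */
        uint16_t ip_blkid_lo;
        uint16_t ip_posid;      /* offset number; 0 means "invalid"/"Min" */
    } ItemPointerSketch;

    /* Stock-style accessor: asserts validity, i.e. ip_posid != 0 */
    #define ItemPointerGetOffsetNumber_sketch(p) \
        (assert((p)->ip_posid != 0), (p)->ip_posid)

    /* GIN-style accessor: no validity assertion, so a "Min" pointer
     * (ip_posid == 0) can be read without tripping an Assert */
    #define GinItemPointerGetOffsetNumber_sketch(p) \
        ((p)->ip_posid)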
there are no analyzable attributes or indexes. We also used to report 0 live
and dead tuples for such tables, which messed with autovacuum threshold
calculations.
This fixes bug #4812 reported by George Su. Back-patch to 8.1.
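The autovacuum connection: per the documented formula, the vacuum threshold is roughly

    \mathrm{vacuum\ threshold} = \mathrm{autovacuum\_vacuum\_threshold}
        + \mathrm{autovacuum\_vacuum\_scale\_factor} \times \mathrm{reltuples}

so a table reported as having zero tuples collapses to the bare base threshold, and zero reported dead tuples never reach even that.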
of AND/OR clause branches that predtest.c would attempt to deal with. As
noted in bug #4721, that change disabled proof attempts for problem sizes
that people actually expect it to handle. The original complaint
it was trying to solve was O(N^2) behavior for long IN-lists, so let's try
applying the limit to just ScalarArrayOpExprs rather than everything.
Another case of "foolish consistency" I fear.
Back-patch to 8.2, same as the previous patch was.
you can end up with an unrecoverable backup if you start a new base backup
right after finishing archive recovery. In that scenario, the redo pointer of
the checkpoint that pg_start_backup() writes points to the XLOG segment where
the timeline-changing end-of-archive-recovery checkpoint is. The beginning
of that segment contains pages with the old timeline ID, and we don't accept
that in recovery unless we find a history file covering the old timeline ID.
If you omit pg_xlog from the base backup and clear the archive directory
before starting the backup, there will be no such history file available.
The bug is present in all versions since PITR was introduced in 8.0, but I'm
back-patching only back to 8.2. Earlier versions didn't have XLOG switch
records, making this fix unfeasible. Given the lack of reports until now,
it doesn't seem worthwhile to spend more effort to fix 8.0 and 8.1.
Per report and suggestion by Mikael Krantz.
to make sure that the error code is reset, as a precaution in
case the API doesn't properly reset it on success. This could
be necessary, since for certain success cases we check the error value
even though the function doesn't fail.
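The same reset-before-call pattern applies to any API that only sets its error indicator on failure and never clears it on success; a self-contained illustration using errno and strtol (purely generic, not the API this fix touches):

    #include <errno.h>
    #include <limits.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const char *str = "9999999999999999999999";
        char       *end;
        long        val;

        errno = 0;                  /* reset first: strtol only sets errno
                                     * on failure, it never clears it */
        val = strtol(str, &end, 10);

        if (errno == ERANGE)        /* error value checked even though the
                                     * call still returned a result */
            printf("out of range, clamped to %ld\n", val);
        else
            printf("parsed %ld\n", val);
        return 0;
    }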
as per my recent proposal. release.sgml itself is now just a stub that should
change rarely; ideally, only once per major release to add a new include line.
Most editing work will occur in the release-N.N.sgml files. To update a back
branch for a minor release, just copy the appropriate release-N.N.sgml
file(s) into the back branch.
This commit doesn't change the end-product documentation at all, only the
source layout. However, it makes it easy to start omitting ancient information
from newer branches' documentation, should we ever decide to do that.
part that rounds up to exactly 1.0 second. The previous coding rejected input
like "00:12:57.9999999999999999999999999999", with the exact number of nines
needed to cause failure varying depending on float-timestamp option and
possibly on platform. Obviously this should round up to the next integral
second, if we don't have enough precision to distinguish the value from that.
Per bug #4789 from Robert Kruus.
In passing, fix a missed check for fractional seconds in one copy of the
"is it greater than 24:00:00" code.
Broken all the way back, so patch all the way back.
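A minimal self-contained sketch of the rounding rule being applied (the function name and the microsecond precision are illustrative; the real logic is in the datetime input code):

    #include <math.h>
    #include <stdio.h>

    /* Round a fractional-seconds value to microseconds; if it rounds up to
     * exactly 1.0 second, carry into the integral seconds instead of
     * treating it as out of range. */
    static void
    round_fsec(double frac, int *sec, int *usec)
    {
        *usec = (int) rint(frac * 1000000.0);
        if (*usec >= 1000000)
        {
            *usec -= 1000000;
            (*sec)++;
        }
    }

    int main(void)
    {
        int sec = 57, usec;

        round_fsec(0.9999999999999999999999999999, &sec, &usec);
        printf("%02d.%06d\n", sec, usec);   /* prints 58.000000 */
        return 0;
    }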
aggregate function. By definition, such a sub-SELECT cannot reference any
variables of query levels between itself and the aggregate's semantic level
(else the aggregate would've been assigned to that lower level instead).
So the correct, most efficient implementation is to treat the sub-SELECT as
being a sub-select of that outer query level, not the level the aggregate
syntactically appears in. Not doing so also confuses the heck out of our
parameter-passing logic, as illustrated in bug report from Daniel Grace.
Fortunately, we were already copying the whole Aggref expression up to the
outer query level, so all that's needed is to delay SS_process_sublinks
processing of the sub-SELECT until control returns to the outer level.
This has been broken since we introduced spec-compliant treatment of
outer aggregates in 7.4; so patch all the way back.
interval_eq() considers equal. I'm not sure how that fundamental requirement
escaped us through multiple revisions of this hash function, but there it is;
it's been wrong since interval_hash was first written for PG 7.1.
Per bug #4748 from Roman Kononov.
Backpatch to all supported releases.
This patch changes the contents of hash indexes for interval columns. That's
no particular problem for PG 8.4, since we've broken on-disk compatibility
of hash indexes already; but it will require a migration warning note in
the next minor releases of all existing branches: "if you have any hash
indexes on columns of type interval, REINDEX them after updating".
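The invariant at stake, as a hedged sketch (the helper and constants below are illustrative, not the committed code): any two intervals that interval_eq() treats as equal, for example '1 day' and '24 hours', must be reduced to the same value before hashing.

    #include <stdint.h>

    typedef struct
    {
        int64_t time;           /* microseconds */
        int32_t day;
        int32_t month;
    } IntervalSketch;

    #define USECS_PER_DAY_SKETCH    INT64_C(86400000000)
    #define DAYS_PER_MONTH_SKETCH   30

    /* Fold days and months into microseconds with the same equivalences
     * the comparison uses, and hash this value instead of the raw struct
     * fields, so that a == b implies hash(a) == hash(b). */
    static int64_t
    interval_canonical_sketch(const IntervalSketch *span)
    {
        return span->time
            + span->day * USECS_PER_DAY_SKETCH
            + span->month * (DAYS_PER_MONTH_SKETCH * USECS_PER_DAY_SKETCH);
    }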
Windows without that, but we shouldn't put bad examples where people might
copy them. Also, reformat slightly to improve the odds that pgindent
won't go nuts on this.
This method will not catch every variation, since the locale
handling in NTFS doesn't provide an easy way to do that, but it
will hopefully cover the most common cases that cause startup
problems when the backend is found in the system PATH.
Attempts to fix bug #4694.
casting effort whenever the input value was NULL. However this prevents
application of not-null domain constraints in the cases that use this
function, as illustrated in bug #4741. Since this function isn't meant
for use in performance-critical paths anyway, this certainly seems like
another case of "premature optimization is the root of all evil".
Back-patch as far as 8.2; older versions made no effort to enforce
domain constraints here anyway.
temporary tables of other sessions; that is unsafe because of the way our
buffer management works. Per report from Stuart Bishop.
This is redundant with the bufmgr.c checks in HEAD, but not at all redundant
in the back branches.
at the same instant as a new backend is spawned. Since CountActiveBackends()
doesn't hold ProcArrayLock, it needs to be prepared for the case that a
pointer at the end of the proc array is still NULL even though numProcs says
it should be valid. Backpatch to 8.1.
8.0 and earlier had this right, but it was broken in the split of PGPROC and
sinval shared memory arrays.
Per report and proposal by Marko Kreen.
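A simplified sketch of the defensive loop (stubbed types and a made-up function name, not the real procarray.c code):

    typedef struct PGPROC PGPROC;       /* opaque stub for illustration */

    /* Without holding ProcArrayLock, numProcs can already count a slot
     * whose pointer has not been filled in yet; just skip such slots. */
    static int
    count_active_sketch(PGPROC *const *procs, int numProcs)
    {
        int i, count = 0;

        for (i = 0; i < numProcs; i++)
        {
            if (procs[i] == NULL)
                continue;               /* backend still being spawned */
            count++;
        }
        return count;
    }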
TupleTableSlots. We have functions for retrieving a minimal tuple from a slot
after storing a regular tuple in it, or vice versa; but these were implemented
by converting the internal storage from one format to the other. The problem
with that is it invalidates any pass-by-reference Datums that were already
fetched from the slot, since they'll be pointing into the just-freed version
of the tuple. The known problem cases involve fetching both a whole-row
variable and a pass-by-reference value from a slot that is fed from a
tuplestore or tuplesort object. The added regression tests illustrate some
simple cases, but there may be other failure scenarios traceable to the same
bug. Note that the added tests probably only fail on unpatched code if it's
built with --enable-cassert; otherwise the bug leads to fetching from freed
memory, which will not have been overwritten without additional conditions.
Fix by allowing a slot to contain both formats simultaneously, which turns out
not to complicate the logic much at all; if anything it seems less contorted
than before.
Back-patch to 8.2, where minimal tuples were introduced.
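Conceptually, as a hedged sketch (stub types, not the real TupleTableSlot definition): the slot keeps room for both representations at once, so producing the "other" format never frees the storage that previously fetched pass-by-reference Datums still point into.

    typedef struct HeapTupleStub    *HeapTupleSketch;       /* stub */
    typedef struct MinimalTupleStub *MinimalTupleSketch;    /* stub */

    typedef struct
    {
        HeapTupleSketch    tuple;       /* regular form, built on demand */
        MinimalTupleSketch mintuple;    /* minimal form, built on demand */
        /* Both may be valid at the same time; a request for the missing
         * format builds it alongside the existing one rather than
         * converting (and freeing) in place. */
    } SlotSketch;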
not global variables of anonymous enum types. This didn't actually hurt
much because most linkers will just merge the duplicated definitions ...
but some will complain. Per bug #4731 from Ceriel Jacobs.
Backpatch to 8.1 --- the declarations don't exist before that.
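A minimal illustration of the pattern and the conventional fix (all names hypothetical):

    /* Problematic: every .c file that includes this header ends up
     * *defining* a variable of an anonymous enum type; most linkers
     * merge the duplicate definitions, but some complain. */
    enum
    {
        IDENT_CASE_STRICT,
        IDENT_CASE_RELAXED
    } identifier_case_mode;

    /* Conventional fix: name the type, declare the variable extern in
     * the header, and define it in exactly one .c file. */
    typedef enum
    {
        FIXED_IDENT_CASE_STRICT,
        FIXED_IDENT_CASE_RELAXED
    } IdentifierCaseMode;

    extern IdentifierCaseMode identifier_case_mode_fixed;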
them from degrading badly when the input is sorted or nearly so. In this
scenario the tree is unbalanced to the point of becoming a mere linked list,
so the total insertion cost becomes O(N^2). The easiest and most safely back-patchable
solution is to stop growing the tree sooner, ie limit the growth of N. We
might later consider a rebalancing tree algorithm, but it's not clear that
the benefit would be worth the cost and complexity. Per report from Sergey
Burladyan and an earlier complaint from Heikki.
Back-patch to 8.2; older versions didn't have GIN indexes.
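A self-contained demonstration of the degradation described above, using a plain unbalanced binary search tree rather than the actual GIN build code:

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct Node
    {
        int          key;
        struct Node *left;
        struct Node *right;
    } Node;

    static long comparisons = 0;

    static Node *
    bst_insert(Node *root, int key)
    {
        if (root == NULL)
        {
            Node *n = calloc(1, sizeof(Node));

            n->key = key;
            return n;
        }
        comparisons++;
        if (key < root->key)
            root->left = bst_insert(root->left, key);
        else
            root->right = bst_insert(root->right, key);
        return root;
    }

    int main(void)
    {
        Node *root = NULL;
        int   n = 10000;

        /* sorted input: the tree degenerates into a linked list, so the
         * i-th insert walks i nodes and the total work is about n^2/2 */
        for (int i = 0; i < n; i++)
            root = bst_insert(root, i);
        printf("n = %d, comparisons = %ld (~n^2/2)\n", n, comparisons);
        return 0;
    }

Capping how large the tree may grow before it is dumped into the index (i.e. limiting N) bounds this worst case without changing the data structure.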
format codes are misapplied to a numeric argument. (The code still produces
a pretty bogus error message in such cases, but I'll settle for stopping the
crash for now.) Per bug #4700 from Sergey Burladyan.
Problem exists in all supported branches, so patch all the way back.
In HEAD, also clean up some ugly coding in the nearby cache management
code.
fail to provide the function itself. Not sure how we escaped testing anything
later than 7.3 on such cases, but they still exist, as per André Volpato's
report about AIX 5.3.
encoding conversion of any elog/ereport message being sent to the frontend.
This generalizes a patch that I put in last October, which suppressed
translation of only specific messages known to be associated with recursive
can't-translate-the-message behavior. As shown in bug #4680, we need a more
general answer in order to have some hope of coping with broken encoding
conversion setups. This approach seems a good deal less klugy anyway.
Patch in all supported branches.
- pg_wchar and wchar_t can have different sizes, so char2wchar
no longer calls pg_mb2wchar_with_len; this prevents an
out-of-bounds memory bug.
- Make char2wchar/wchar2char symmetric; they should no longer be
called with the C locale, because mbstowcs/wcstombs often don't
work correctly with the C locale.
- The text parser uses pg_mb2wchar_with_len directly in the case
of the C locale with a multibyte encoding.
Per bug report by Hiroshi Inoue <inoue@tpf.co.jp> and
following discussion.
Back-patch to 8.2, when multibyte support was implemented in tsearch.
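The size mismatch is easy to demonstrate; pg_wchar is PostgreSQL's own 32-bit character type, while wchar_t is only 16 bits wide on Windows (the typedef below just mirrors that for illustration):

    #include <stdio.h>
    #include <wchar.h>

    typedef unsigned int pg_wchar;  /* 32 bits, as in PostgreSQL's pg_wchar.h */

    int main(void)
    {
        /* On Windows this prints 4 vs 2, so a pg_wchar buffer and a wchar_t
         * buffer are not interchangeable, and converting into the wrong one
         * can run off the end of the allocation. */
        printf("sizeof(pg_wchar) = %zu, sizeof(wchar_t) = %zu\n",
               sizeof(pg_wchar), sizeof(wchar_t));
        return 0;
    }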
fail on zero-length inputs. This isn't an issue in normal use because the
conversion infrastructure skips calling the converters for empty strings.
However, a problem was created by yesterday's patch to check whether the
right conversion function is supplied in CREATE CONVERSION. The most
future-proof fix seems to be to make the converters safe for this corner case.
function for the specified source and destination encodings. We do that by
calling the function with an empty string. If it can't perform the requested
conversion, it will throw an error.
Backport to 7.4 - 8.3. Per bug report #4680 by Denis Afonin.
they are out of scope for any code after that anyway, leaving isnull true
should be harmless. However, PL/pgSQL Debugger doesn't seem to care about
the scoping and crashed, per report by Robert Walker (bug #4635). And it's
good to be tidy for debugging purposes too.
Fix in 8.3, 8.2 and 8.1 branches, CVS HEAD was fixed earlier already.
Analysis and fix by Ashesh Vashi and Dave Page.