xml_parse, all arising from the same sloppy usage of parse_xml_decl.
The original coding had that function returning its output string
parameters in the libxml context, which is long-lived, and all but one
of its callers neglected to free the strings afterwards. The easiest
and most bulletproof fix is to return the strings in the local palloc
context instead, since that's short-lived. This was only costing a
dozen or two bytes per function call, but that adds up fast if the
function is called repeatedly ...
Noted while poking at the more general problem of what to do with our
libxml memory allocation hooks. Back-patch to 8.3, which has the
identical coding.
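For illustration, the idea looks roughly like this (a sketch only; the
function and parameter names are made up, not parse_xml_decl's actual
signature):

    static void
    return_decl_string(const xmlChar *xml_version, char **version)
    {
        /* pstrdup() copies into CurrentMemoryContext, the short-lived
         * local context here, so callers need not remember to pfree() */
        if (version)
            *version = xml_version
                ? pstrdup((const char *) xml_version)
                : NULL;
    }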
of AND/OR clause branches that predtest.c would attempt to deal with. As
noted in bug #4721, that change disabled proof attempts for sizes of problems
that people are actually expecting it to work for. The original complaint
it was trying to solve was O(N^2) behavior for long IN-lists, so let's try
applying the limit to just ScalarArrayOpExprs rather than everything.
Another case of "foolish consistency" I fear.
Back-patch to 8.2, same as the previous patch was.
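Loosely, the classifier check becomes something like this (a simplified
sketch; the threshold name is illustrative):

    /* Expand a ScalarArrayOpExpr into AND/OR form only if the array is
     * short enough for proof attempts to be worth the cycles. */
    if (IsA(clause, ScalarArrayOpExpr))
    {
        ScalarArrayOpExpr *saop = (ScalarArrayOpExpr *) clause;
        Node       *arraynode = (Node *) lsecond(saop->args);

        if (arraynode && IsA(arraynode, ArrayExpr) &&
            list_length(((ArrayExpr *) arraynode)->elements) >
            MAX_SAOP_ARRAY_SIZE)
            return CLASS_ATOM;  /* too long: punt instead of expanding */
    }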
you can end up with an unrecoverable backup if you start a new base backup
right after finishing archive recovery. In that scenario, the redo pointer of
the checkpoint that pg_start_backup() writes points to the XLOG segment where
the timeline-changing end-of-archive-recovery checkpoint is. The beginning
of that segment contains pages with the old timeline ID, and we don't accept
that in recovery unless we find a history file covering the old timeline ID.
If you omit pg_xlog from the base backup and clear the archive directory
before starting the backup, there will be no such history file available.
The bug is present in all versions since PITR was introduced in 8.0, but I'm
back-patching only back to 8.2. Earlier versions didn't have XLOG switch
records, making this fix unfeasible. Given the lack of reports until now,
it doesn't seem worthwhile to spend more effort to fix 8.0 and 8.1.
Per report and suggestion by Mikael Krantz
it fails because the shared memory segment already exists. This
means it can take up to 10 seconds before it reports the error
if it *does* exist, but hopefully it will make the system capable
of restarting even when the server is under high load.
to make sure that the error code is reset, as a precaution in
case the API doesn't properly reset it on success. This could
be necessary, since we examine the error value even when the
function succeeds, in order to detect specific success cases.
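A sketch of both points together (illustrative only, using the Win32
APIs; not the actual code):

    for (tries = 0; tries < 10; tries++)
    {
        /* Clear the error code up front: GetLastError() is examined
         * even on success, to detect the already-exists case. */
        SetLastError(0);
        hmap = CreateFileMapping(INVALID_HANDLE_VALUE, NULL,
                                 PAGE_READWRITE, 0, (DWORD) size, name);
        if (hmap != NULL && GetLastError() != ERROR_ALREADY_EXISTS)
            break;              /* got a fresh segment */
        if (hmap != NULL)
            CloseHandle(hmap);  /* old segment still lingering */
        hmap = NULL;
        Sleep(1000);            /* wait a second, then retry */
    }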
as per my recent proposal. release.sgml itself is now just a stub that should
change rarely; ideally, only once per major release to add a new include line.
Most editing work will occur in the release-N.N.sgml files. To update a back
branch for a minor release, just copy the appropriate release-N.N.sgml
file(s) into the back branch.
This commit doesn't change the end-product documentation at all, only the
source layout. However, it makes it easy to start omitting ancient information
from newer branches' documentation, should we ever decide to do that.
part that rounds up to exactly 1.0 second. The previous coding rejected input
like "00:12:57.9999999999999999999999999999", with the exact number of nines
needed to cause failure varying depending on float-timestamp option and
possibly on platform. Obviously this should round up to the next integral
second, if we don't have enough precision to distinguish the value from that.
Per bug #4789 from Robert Kruus.
In passing, fix a missed check for fractional seconds in one copy of the
"is it greater than 24:00:00" code.
Broken all the way back, so patch all the way back.
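The rounding rule, sketched (an illustrative helper, not the actual
datetime code):

    #include <math.h>

    /* Round fractional seconds to microsecond precision; if they round
     * up to a whole second, carry into the seconds field instead of
     * rejecting the input. */
    static void
    adjust_fract_seconds(double *fsec, int *sec)
    {
        *fsec = rint(*fsec * 1000000.0) / 1000000.0;
        if (*fsec >= 1.0)
        {
            int     carry = (int) *fsec;

            *sec += carry;
            *fsec -= carry;
        }
    }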
aggregate function. By definition, such a sub-SELECT cannot reference any
variables of query levels between itself and the aggregate's semantic level
(else the aggregate would've been assigned to that lower level instead).
So the correct, most efficient implementation is to treat the sub-SELECT as
being a sub-select of that outer query level, not the level the aggregate
syntactically appears in. Not doing so also confuses the heck out of our
parameter-passing logic, as illustrated in bug report from Daniel Grace.
Fortunately, we were already copying the whole Aggref expression up to the
outer query level, so all that's needed is to delay SS_process_sublinks
processing of the sub-SELECT until control returns to the outer level.
This has been broken since we introduced spec-compliant treatment of
outer aggregates in 7.4; so patch all the way back.
constants through full joins, as in
    select * from tenk1 a full join tenk1 b using (unique1)
    where unique1 = 42;
which should generate a fairly cheap plan where we apply the constraint
unique1 = 42 in each relation scan. This had been broken by my patch of
2008-06-27, which is now reverted in favor of a more invasive but hopefully
less incorrect approach. That patch was meant to prevent incorrect extraction
of OR'd indexclauses from OR conditions above an outer join. To do that
correctly we need more information than the outerjoin_delay flag can provide,
so add a nullable_relids field to RestrictInfo that records exactly which
relations are nulled by outer joins that are underneath a particular qual
clause. A side benefit is that we can make the test in create_or_index_quals
more specific: it is now smart enough to extract an OR'd indexclause into the
outer side of an outer join, even though it must not do so in the inner side.
The old coding couldn't distinguish these cases so it could not do either.
interval_eq() considers equal. I'm not sure how that fundamental requirement
escaped us through multiple revisions of this hash function, but there it is;
it's been wrong since interval_hash was first written for PG 7.1.
Per bug #4748 from Roman Kononov.
Backpatch to all supported releases.
This patch changes the contents of hash indexes for interval columns. That's
no particular problem for PG 8.4, since we've broken on-disk compatibility
of hash indexes already; but it will require a migration warning note in
the next minor releases of all existing branches: "if you have any hash
indexes on columns of type interval, REINDEX them after updating".
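The invariant is that equal values must hash equally; in sketch form,
assuming integer (64-bit) timestamps (the float-timestamp build needs
the double equivalent):

    static uint32
    interval_hash_sketch(const Interval *span)
    {
        int64   usecs_per_day = INT64CONST(86400000000);
        int64   value;

        /* Collapse to the same canonical value interval comparison
         * uses (months taken as 30 days), so that interval_eq(a, b)
         * implies hash(a) == hash(b). */
        value = span->time
            + span->day * usecs_per_day
            + span->month * (INT64CONST(30) * usecs_per_day);

        return DatumGetUInt32(hash_any((unsigned char *) &value,
                                       sizeof(value)));
    }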
Windows without that, but we shouldn't put bad examples where people might
copy them. Also, reformat slightly to improve the odds that pgindent
won't go nuts on this.
This method will not catch every variant spelling, since the
locale handling in NTFS doesn't provide an easy way to do that,
but it will hopefully solve the most common cases causing startup
problems when the backend is found in the system PATH.
Attempts to fix bug #4694.
we failed to assign, even in "can't happen" cases. Motivated by wondering
what's going on in a recent trouble report where "failed to commit" did
happen.
casting effort whenever the input value was NULL. However this prevents
application of not-null domain constraints in the cases that use this
function, as illustrated in bug #4741. Since this function isn't meant
for use in performance-critical paths anyway, this certainly seems like
another case of "premature optimization is the root of all evil".
Back-patch as far as 8.2; older versions made no effort to enforce
domain constraints here anyway.
temporary tables of other sessions; that is unsafe because of the way our
buffer management works. Per report from Stuart Bishop.
This is redundant with the bufmgr.c checks in HEAD, but not at all redundant
in the back branches.
at the same instant as a new backend is spawned. Since CountActiveBackends()
doesn't hold ProcArrayLock, it needs to be prepared for the case that a
pointer at the end of the proc array is still NULL even though numProcs
says it should be valid. Backpatch to 8.1.
8.0 and earlier had this right, but it was broken in the split of PGPROC and
sinval shared memory arrays.
Per report and proposal by Marko Kreen.
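The defense is just a NULL test in the scan loop, roughly (a sketch; the
real counting rules are elided):

    count = 0;
    for (index = 0; index < arrayP->numProcs; index++)
    {
        volatile PGPROC *proc = arrayP->procs[index];

        /* Without ProcArrayLock, numProcs can run ahead of the array
         * contents: a just-spawning backend's slot may still be NULL. */
        if (proc == NULL)
            continue;
        count++;
    }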
TupleTableSlots. We have functions for retrieving a minimal tuple from a slot
after storing a regular tuple in it, or vice versa; but these were implemented
by converting the internal storage from one format to the other. The problem
with that is it invalidates any pass-by-reference Datums that were already
fetched from the slot, since they'll be pointing into the just-freed version
of the tuple. The known problem cases involve fetching both a whole-row
variable and a pass-by-reference value from a slot that is fed from a
tuplestore or tuplesort object. The added regression tests illustrate some
simple cases, but there may be other failure scenarios traceable to the same
bug. Note that the added tests probably only fail on unpatched code if it's
built with --enable-cassert; otherwise the bug leads to fetching from freed
memory, which will not have been overwritten without additional conditions.
Fix by allowing a slot to contain both formats simultaneously; this turns out
not to complicate the logic much at all, and if anything it seems less
contorted than before.
Back-patch to 8.2, where minimal tuples were introduced.
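In sketch form, the slot now carries both representations side by side
(a simplified view, not the full TupleTableSlot struct):

    typedef struct SlotSketch
    {
        HeapTuple       tuple;          /* regular tuple, or NULL */
        MinimalTuple    mintuple;       /* minimal tuple, or NULL */
        bool            shouldFree;     /* pfree tuple on clear? */
        bool            shouldFreeMin;  /* pfree mintuple on clear? */
    } SlotSketch;

Fetching one format from a slot that holds only the other now builds the
missing copy alongside the existing one, so Datums fetched earlier keep
pointing at live memory until the slot is cleared.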
not global variables of anonymous enum types. This didn't actually hurt
much because most linkers will just merge the duplicated definitions ...
but some will complain. Per bug #4731 from Ceriel Jacobs.
Backpatch to 8.1 --- the declarations don't exist before that.
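To illustrate the general hazard (made-up names, not the actual
declarations):

    /* In a header, this line *defines* the variable in every file that
     * includes it, and the anonymous type can't be named again for a
     * matching extern declaration: */
    enum { FEATURE_OFF, FEATURE_ON } feature_state;

    /* Naming the type allows the usual split: declare in the header,
     * define in exactly one .c file: */
    typedef enum { MODE_OFF, MODE_ON } ModeState;
    extern ModeState mode_state;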
them from degrading badly when the input is sorted or nearly so. In this
scenario the tree is unbalanced to the point of becoming a mere linked list,
so insertions become O(N^2). The easiest and most safely back-patchable
solution is to stop growing the tree sooner, ie limit the growth of N. We
might later consider a rebalancing tree algorithm, but it's not clear that
the benefit would be worth the cost and complexity. Per report from Sergey
Burladyan and an earlier complaint from Heikki.
Back-patch to 8.2; older versions didn't have GIN indexes.
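The shape of the fix, very roughly (names invented; the real logic lives
in GIN's bulk-insert accumulator):

    /* Once the binary tree holds enough entries that further growth
     * can't pay for itself, flush it and start a fresh tree, so sorted
     * input can't drive insertion cost toward O(N^2). */
    if (accum->nentries >= GIN_TREE_LIMIT)
    {
        dumpAccumulatedEntries(accum);
        resetAccumulator(accum);
    }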
reinstalling the default signal handler doesn't work as-is on Windows.
Presumably core dumps on SIGQUIT are not a problem on Windows, so rather
than figure out what header files or other changes are required to make it
work, just don't bother.
postmaster uses for immediate shutdown. Trap SIGUSR1 as the preferred
signal for that.
Per report by Fujii Masao and subsequent discussion on -hackers.
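In outline, the handler now does something like this (names illustrative;
installed with pqsignal(SIGUSR1, ...) at startup):

    static void
    fast_exit_handler(int signum)
    {
    #ifndef WIN32
        /* Re-raise with the default action, preserving the chance of a
         * core dump where that works. */
        signal(SIGQUIT, SIG_DFL);
        raise(SIGQUIT);
    #endif
        /* On Windows, reinstalling the default handler doesn't work
         * as-is and core dumps aren't a concern, so just exit. */
        _exit(2);
    }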
the cause of the "could not write to log file: Bad file descriptor"
errors reported at
http://archives.postgresql.org//pgsql-general/2008-06/msg00193.php
Backpatch to 8.3, where the race condition was introduced by the CSV
logging patch.
Analysis and patch by Gurjeet Singh.
format codes are misapplied to a numeric argument. (The code still produces
a pretty bogus error message in such cases, but I'll settle for stopping the
crash for now.) Per bug #4700 from Sergey Burladyan.
Problem exists in all supported branches, so patch all the way back.
In HEAD, also clean up some ugly coding in the nearby cache management
code.
Only needed in 8.3 because it's already this way in HEAD, and older branches
did not support DTrace. This allows external modules to compile on Linux
machines where SystemTap support was recently added, when the required
SystemTap headers are not present on the build machine.
Approach suggested by Tom, after an RPM build trouble report by Devrim Gunduz.
by the planning process. This prevents the "failed to locate grouping columns"
error recently reported by Dickson Guedes. That happens because planning
replaces SubLinks by SubPlans in the subquery's targetlist, and exprTypmod()
is smarter about the former than the latter, causing the apparent type of
the subquery's output columns to change. This seems to be a deficiency we
should fix in exprTypmod(), but that will be a much more invasive patch
with possible side-effects elsewhere, so I'll do that only in HEAD.
Back-patch to 8.3. Arguably the lack of a copying step is broken/dangerous
all the way back, but in the absence of known problems I'll refrain from
making the older branches pay the extra cost. (The reason this particular
symptom didn't appear before is that exprTypmod() wasn't smart about SubLinks
either, until 8.3.)
fail to provide the function itself. Not sure how we escaped testing anything
later than 7.3 on such cases, but they still exist, as per André Volpato's
report about AIX 5.3.
encoding conversion of any elog/ereport message being sent to the frontend.
This generalizes a patch that I put in last October, which suppressed
translation of only specific messages known to be associated with recursive
can't-translate-the-message behavior. As shown in bug #4680, we need a more
general answer in order to have some hope of coping with broken encoding
conversion setups. This approach seems a good deal less klugy anyway.
Patch in all supported branches.
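The guard amounts to, roughly (per elog.c's recursion-depth tracking;
simplified here):

    static void
    err_sendstring(StringInfo buf, const char *str)
    {
        /* If we're already in error-recursion trouble, ship the bytes
         * untranslated rather than risk another conversion failure. */
        if (in_error_recursion_trouble())
            pq_send_ascii_string(buf, str);
        else
            pq_sendstring(buf, str);
    }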
- pg_wchar and wchar_t can have different sizes, so char2wchar
  must not call pg_mb2wchar_with_len; this prevents an out-of-bounds
  memory bug
- make char2wchar/wchar2char symmetric; they should no longer be
  called in the C locale, because mbstowcs/wcstombs often don't
  work correctly there
- The text parser uses pg_mb2wchar_with_len directly in the case of
  the C locale with a multibyte encoding
Per bug report by Hiroshi Inoue <inoue@tpf.co.jp> and
following discussion.
Backpatch to 8.2, where multibyte support was implemented in tsearch.
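For example, the parser-side choice looks roughly like this (a sketch;
lengths and error handling elided):

    if (lc_ctype_is_c())
    {
        /* C locale with a multibyte encoding: use PostgreSQL's own
         * conversion into pg_wchar; mbstowcs() is not trustworthy here */
        pg_wchar   *wstr = (pg_wchar *) palloc((len + 1) * sizeof(pg_wchar));

        pg_mb2wchar_with_len(str, wstr, len);
    }
    else
    {
        /* Otherwise convert via libc into wchar_t, which may well be a
         * different width than pg_wchar */
        wchar_t    *wcstr = (wchar_t *) palloc((len + 1) * sizeof(wchar_t));

        char2wchar(wcstr, len + 1, str, len);
    }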