constants through full joins, as in
select * from tenk1 a full join tenk1 b using (unique1)
where unique1 = 42;
which should generate a fairly cheap plan where we apply the constraint
unique1 = 42 in each relation scan. This had been broken by my patch of
2008-06-27, which is now reverted in favor of a more invasive but hopefully
less incorrect approach. That patch was meant to prevent incorrect extraction
of OR'd indexclauses from OR conditions above an outer join. To do that
correctly we need more information than the outerjoin_delay flag can provide,
so add a nullable_relids field to RestrictInfo that records exactly which
relations are nulled by outer joins that are underneath a particular qual
clause. A side benefit is that we can make the test in create_or_index_quals
more specific: it is now smart enough to extract an OR'd indexclause into the
outer side of an outer join, even though it must not do so in the inner side.
The old coding couldn't distinguish these cases so it could not do either.
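As a hedged illustration of that distinction (a made-up query, not one
taken from the patch's regression tests), consider an OR condition above
a left join:
select *
from tenk1 a left join tenk1 b using (unique1)
where (a.unique2 = 3 and b.ten = 4)
   or (a.unique2 = 7 and b.ten = 6);
Pulling the OR'd condition on a.unique2 down to a's scan is safe: an a row
satisfying neither arm cannot yield any join row that passes the WHERE.
The analogous extraction for the nullable side is what must be avoided,
since filtering the inner relation's rows at scan level changes which rows
of the outer relation get null-extended by the join.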
Fix interval_hash() so that it produces the same hash value for any two
interval values that interval_eq() considers equal. I'm not sure how that
fundamental requirement escaped us through multiple revisions of this hash
function, but there it is; it's been wrong since interval_hash was first
written for PG 7.1.
Per bug #4748 from Roman Kononov.
Backpatch to all supported releases.
This patch changes the contents of hash indexes for interval columns. That's
no particular problem for PG 8.4, since we've broken on-disk compatibility
of hash indexes already; but it will require a migration warning note in
the next minor releases of all existing branches: "if you have any hash
indexes on columns of type interval, REINDEX them after updating".
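For example (a hedged illustration, not taken from the patch itself):
interval_eq() treats '1 day' and '24 hours' as equal, so interval_hash()
must now give them the same hash code, and any pre-existing hash index on
an interval column has to be rebuilt in the back branches:
select '1 day'::interval = '24 hours'::interval;  -- true, so hashes must match
reindex index my_interval_hash_idx;  -- hypothetical index name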
Windows without that, but we shouldn't put bad examples where people might
copy them. Also, reformat slightly to improve the odds that pgindent
won't go nuts on this.
This method will not catch every possible variation, since the locale
handling in NTFS doesn't provide an easy way to do that, but it will
hopefully handle the most common cases that cause startup problems when
the backend is found in the system PATH.
Attempts to fix bug #4694.
we failed to assign, even in "can't happen" cases. Motivated by wondering
what's going on in a recent trouble report where "failed to commit" did
happen.
casting effort whenever the input value was NULL. However this prevents
application of not-null domain constraints in the cases that use this
function, as illustrated in bug #4741. Since this function isn't meant
for use in performance-critical paths anyway, this certainly seems like
another case of "premature optimization is the root of all evil".
Back-patch as far as 8.2; older versions made no effort to enforce
domain constraints here anyway.
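A minimal SQL illustration of the constraint class involved (hypothetical
domain name, and not necessarily the exact code path from the bug report):
create domain nonnull_int as integer not null;
select null::nonnull_int;  -- expected to fail with a not-null domain violation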
temporary tables of other sessions; that is unsafe because of the way our
buffer management works. Per report from Stuart Bishop.
This is redundant with the bufmgr.c checks in HEAD, but not at all redundant
in the back branches.
at the same instant as a new backend is spawned. Since CountActiveBackends()
doesn't hold ProcArrayLock, it needs to be prepared for the case that a
pointer at the end of the proc array is still NULL even though numProcs says
it should be valid. Backpatch to 8.1.
8.0 and earlier had this right, but it was broken in the split of PGPROC and
sinval shared memory arrays.
Per report and proposal by Marko Kreen.
TupleTableSlots. We have functions for retrieving a minimal tuple from a slot
after storing a regular tuple in it, or vice versa; but these were implemented
by converting the internal storage from one format to the other. The problem
with that is it invalidates any pass-by-reference Datums that were already
fetched from the slot, since they'll be pointing into the just-freed version
of the tuple. The known problem cases involve fetching both a whole-row
variable and a pass-by-reference value from a slot that is fed from a
tuplestore or tuplesort object. The added regression tests illustrate some
simple cases, but there may be other failure scenarios traceable to the same
bug. Note that the added tests probably only fail on unpatched code if it's
built with --enable-cassert; otherwise the bug leads to fetching from freed
memory, which will not have been overwritten without additional conditions.
Fix by allowing a slot to contain both formats simultaneously, which turns
out not to complicate the logic much at all; if anything it seems less
contorted than before.
Back-patch to 8.2, where minimal tuples were introduced.
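As a hedged sketch of the access pattern described above (a hypothetical
query, not the added regression test, and not claimed to be a reproducer):
both a whole-row reference and a pass-by-reference column are read from a
slot that is fed by a sort.
select t, t.typname
from (select * from pg_type order by typname) as t
limit 5;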
not global variables of anonymous enum types. This didn't actually hurt
much because most linkers will just merge the duplicated definitions ...
but some will complain. Per bug #4731 from Ceriel Jacobs.
Backpatch to 8.1 --- the declarations don't exist before that.
them from degrading badly when the input is sorted or nearly so. In this
scenario the tree is unbalanced to the point of becoming a mere linked list,
so insertions become O(N^2). The easiest and most safely back-patchable
solution is to stop growing the tree sooner, ie limit the growth of N. We
might later consider a rebalancing tree algorithm, but it's not clear that
the benefit would be worth the cost and complexity. Per report from Sergey
Burladyan and an earlier complaint from Heikki.
Back-patch to 8.2; older versions didn't have GIN indexes.
reinstalling the default signal handler doesn't work as it is on Windows.
Presumably core dumps on SIGQUIT are not a problem on Windows, so rather
than figure out what header files or other changes are required to make it
work, just don't bother.
postmaster uses for immediate shutdown. Trap SIGUSR1 as the preferred
signal for that.
Per report by Fujii Masao and subsequent discussion on -hackers.
the cause of the "could not write to log file: Bad file descriptor"
errors reported at
http://archives.postgresql.org//pgsql-general/2008-06/msg00193.php
Backpatch to 8.3; the race condition was introduced by the CSV logging
patch.
Analysis and patch by Gurjeet Singh.
format codes are misapplied to a numeric argument. (The code still produces
a pretty bogus error message in such cases, but I'll settle for stopping the
crash for now.) Per bug #4700 from Sergey Burladyan.
Problem exists in all supported branches, so patch all the way back.
In HEAD, also clean up some ugly coding in the nearby cache management
code.
Only needed in 8.3 because it's already this way in HEAD, and older branches
did not support DTrace. This allows external modules to compile on Linux
machines where SystemTap support was recently added, when the required
SystemTap headers are not present on the build machine.
Approach suggested by Tom, after an RPM build trouble report by Devrim Gunduz.
by the planning process. This prevents the "failed to locate grouping columns"
error recently reported by Dickson Guedes. That happens because planning
replaces SubLinks by SubPlans in the subquery's targetlist, and exprTypmod()
is smarter about the former than the latter, causing the apparent type of
the subquery's output columns to change. This seems to be a deficiency we
should fix in exprTypmod(), but that will be a much more invasive patch
with possible side-effects elsewhere, so I'll do that only in HEAD.
Back-patch to 8.3. Arguably the lack of a copying step is broken/dangerous
all the way back, but in the absence of known problems I'll refrain from
making the older branches pay the extra cost. (The reason this particular
symptom didn't appear before is that exprTypmod() wasn't smart about SubLinks
either, until 8.3.)
fail to provide the function itself. Not sure how we escaped testing anything
later than 7.3 on such cases, but they still exist, as per André Volpato's
report about AIX 5.3.
encoding conversion of any elog/ereport message being sent to the frontend.
This generalizes a patch that I put in last October, which suppressed
translation of only specific messages known to be associated with recursive
can't-translate-the-message behavior. As shown in bug #4680, we need a more
general answer in order to have some hope of coping with broken encoding
conversion setups. This approach seems a good deal less klugy anyway.
Patch in all supported branches.
- pg_wchar and wchar_t can have different sizes, so char2wchar
  doesn't call pg_mb2wchar_with_len; this prevents an out-of-bounds
  memory bug
- make char2wchar/wchar2char symmetric; they should no longer be
  called with the C locale, because mbstowcs/wcstombs often don't
  work correctly with the C locale
- the text parser uses pg_mb2wchar_with_len directly in the case of
  the C locale with a multibyte encoding
Per bug report by Hiroshi Inoue <inoue@tpf.co.jp> and
following discussion.
Backpatch to 8.2, when multibyte support was implemented in tsearch.
fail on zero-length inputs. This isn't an issue in normal use because the
conversion infrastructure skips calling the converters for empty strings.
However a problem was created by yesterday's patch to check whether the
right conversion function is supplied in CREATE CONVERSION. The most
future-proof fix seems to be to make the converters safe for this corner case.
function for the specified source and destination encodings. We do that by
calling the function with an empty string. If it can't perform the requested
conversion, it will throw an error.
Backport to 7.4 - 8.3. Per bug report #4680 by Denis Afonin.
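With the check in place, the mismatch is reported at DDL time; a hedged
sketch with hypothetical names (my_iso_to_utf8 standing in for a real
C-language conversion function):
-- fails immediately if my_iso_to_utf8 cannot convert LATIN1 to UTF8
create conversion my_latin1_to_utf8 for 'LATIN1' to 'UTF8' from my_iso_to_utf8;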
they are out of scope for any code after that anyway, so leaving isnull
true should be harmless. However, the PL/pgSQL Debugger doesn't seem to care
about the scoping and crashed, per report by Robert Walker (bug #4635). And
it's good to be tidy for debugging purposes too.
Fix in 8.3, 8.2 and 8.1 branches, CVS HEAD was fixed earlier already.
Analysis and fix by Ashesh Vashi and Dave Page.
looks for a CaseTestExpr to figure out what the parser did, but it failed to
consider the possibility that an implicit coercion might be inserted above
the CaseTestExpr. This could result in an Assert failure in some cases
(but correct results if Asserts weren't enabled), or an "unexpected CASE WHEN
clause" error in other cases. Per report from Alan Li.
Back-patch to 8.1; problem doesn't exist before that because CASE was
implemented differently.
TABLE: if the command is executed by someone other than the table owner (eg,
a superuser) and the table has a toast table, the toast table's pg_type row
ends up with the wrong typowner, ie, the command issuer not the table owner.
This is quite harmless for most purposes, since no interesting permissions
checks consult the pg_type row. However, it could lead to unexpected failures
if one later tries to drop the role that issued the command (in 8.1 or 8.2),
or strange warnings from pg_dump afterwards (in 8.3 and up, which will allow
the DROP ROLE because we don't create a "redundant" owner dependency for table
rowtypes). Problem identified by Cott Lang.
Back-patch to 8.1. The problem is actually far older --- the CLUSTER variant
can be demonstrated in 7.0 --- but it's mostly cosmetic before 8.1 because we
didn't track ownership dependencies before 8.1. Also, fixing it before 8.1
would require changing the call signature of heap_create_with_catalog(), which
seems to carry a nontrivial risk of breaking add-on modules.
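A hedged diagnostic sketch (hypothetical table name) for checking whether an
existing toast rowtype ended up with the wrong owner:
select t.typname,
       pg_get_userbyid(t.typowner) as rowtype_owner,
       pg_get_userbyid(c.relowner) as table_owner
from pg_class c
join pg_class tc on tc.oid = c.reltoastrelid
join pg_type t on t.oid = tc.reltype
where c.relname = 'my_table';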
any LISTEN command. This is more important than it used to be because
DISCARD ALL invokes UNLISTEN. Connection-pooled applications making heavy
use of DISCARD ALL were seeing significant contention for pg_listener,
as reported by Matteo Beccati. It seems unlikely that clients using LISTEN
would use pooled connections, so this simple tweak seems sufficient,
especially since the pg_listener implementation is slated to go away soon
anyway.
Back-patch to 8.3, where DISCARD ALL was introduced.
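As a minimal illustration, the reset sequence of a pooled session that never
issued LISTEN no longer needs to touch pg_listener at all:
discard all;  -- the implied UNLISTEN is now a no-op in such a backend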
in the string, not just at the start. Per bug #4629 from Martin Blazek.
Back-patch to 8.2; prior versions don't have the problem, at least not in
the reported case, because they don't try to recognize INTO in non-SELECT
statements. (IOW, this is really fallout from the RETURNING patch.)
from Rushabh Lathia.
Back-patch of the patch of 2009-01-08. This is necessary in 8.3, as reported
by Bjorn Munch. It's not currently necessary in 8.2, AFAICS, but seems
best to include it there too.