out that ExecEvalVar and friends don't necessarily have access to a tuple
descriptor with the correct typmod: it definitely can contain -1, and might
contain other values that differ from the Var's typmod.
Arguably this should be cleaned up someday, but it's not a simple change,
and in any case typmod discrepancies don't pose a security hazard.
Per reports from numerous people :-(
I'm not entirely sure whether the failure can occur in 8.0 --- the simple
test cases reported so far don't trigger it there. But back-patch the
change all the way anyway.
had stopped working for tables buried inside views or sub-selects. This is
because I had gotten rid of the simplify_jointree() preprocessing step, and
optimize_minmax_aggregates() wasn't smart enough to deal with a non-canonical
FromExpr. Per gripe from Bill Howe.
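
As an illustration of the construct the optimization is meant to handle (table,
view, and column names here are hypothetical), a MIN() over a view should again
be planned via the index on the underlying table rather than as a full
aggregate scan:

    CREATE TABLE t (id int PRIMARY KEY, payload text);
    CREATE VIEW v AS SELECT * FROM t;

    -- With the fix, the planner can once more turn this into an
    -- index-ordered "fetch first row" plan on t's primary key index:
    EXPLAIN SELECT min(id) FROM v;
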
threshold for tuple length. On 4-byte-MAXALIGN machines, the toast code
creates tuples that have t_len exactly TOAST_TUPLE_THRESHOLD ... but this
number is not itself maxaligned, so if heap_insert maxaligns t_len before
comparing to TOAST_TUPLE_THRESHOLD, it'll uselessly recurse back to
tuptoaster.c, wasting cycles. (It turns out that this does not happen on
8-byte-MAXALIGN machines, because for them the outer MAXALIGN in the
TOAST_MAX_CHUNK_SIZE macro reduces TOAST_MAX_CHUNK_SIZE so that toast tuples
will be less than TOAST_TUPLE_THRESHOLD in size. That MAXALIGN is really
incorrect, but we can't remove it now, see below.) There isn't any particular
value in maxaligning before comparing to the thresholds, so just don't do
that, which saves a small number of cycles in itself.
These numbers should be rejiggered to minimize wasted space on toast-relation
pages, but we can't do that in the back branches because changing
TOAST_MAX_CHUNK_SIZE would force an initdb (by changing the contents of toast
tables). We can move the toast decision thresholds a bit, though, which is
what this patch effectively does.
Thanks to Pavan Deolasee for discovering the unintended recursion.
Back-patch into 8.2, but not further, pending more testing. (HEAD is about
to get a further patch modifying the thresholds, so it won't help much
for testing this form of the patch.)
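
A purely illustrative calculation of the unintended recursion, using made-up
numbers rather than the real TOAST_TUPLE_THRESHOLD:

    -- Suppose the threshold were 2031 on a 4-byte-MAXALIGN machine.  The
    -- toaster emits a tuple with t_len = 2031; maxaligning that value before
    -- the comparison pushes it back over the threshold:
    SELECT 2031                     AS t_len,
           (2031 + 3) & ~3          AS maxaligned_t_len,        -- 2032
           ((2031 + 3) & ~3) > 2031 AS recurses_into_toaster;   -- true
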
made query plan. Use of ALTER COLUMN TYPE creates a hazard for cached
query plans: they could contain Vars that claim a column has a different
type than it now has. Fix this by checking during plan startup that Vars
at relation scan level match the current relation tuple descriptor. Since
at that point we already have at least AccessShareLock, we can be sure the
column type will not change underneath us later in the query. However,
since a backend's locks do not conflict against itself, there is still a
hole for an attacker to exploit: he could try to execute ALTER COLUMN TYPE
while a query is in progress in the current backend. Seal that hole by
rejecting ALTER TABLE whenever the target relation is already open in
the current backend.
This is a significant security hole: not only can one trivially crash the
backend, but with appropriate misuse of pass-by-reference datatypes it is
possible to read out arbitrary locations in the server process's memory,
which could allow retrieving database content the user should not be able
to see. Our thanks to Jeff Trout for the initial report.
Security: CVE-2007-0556
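
A hedged sketch of the kind of sequence the new check refuses (object names and
the exact statements are hypothetical; the point is only the shape of the
hazard):

    CREATE TABLE t (c varchar(10));
    INSERT INTO t VALUES ('1');

    BEGIN;
    DECLARE cur CURSOR FOR SELECT c FROM t;
    FETCH 1 FROM cur;                     -- a query on t is now in progress
    -- With the fix this should be rejected, because t is already open in the
    -- current backend, rather than changing c's type underneath the cursor:
    ALTER TABLE t ALTER COLUMN c TYPE text;
    ROLLBACK;
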
we should check that the function code returns the claimed result datatype
every time we parse the function for execution. Formerly, for simple
scalar result types we assumed the creation-time check was sufficient, but
this fails if the function selects from a table that's been redefined since
then, and even more obviously fails if check_function_bodies had been OFF.
This is a significant security hole: not only can one trivially crash the
backend, but with appropriate misuse of pass-by-reference datatypes it is
possible to read out arbitrary locations in the server process's memory,
which could allow retrieving database content the user should not be able
to see. Our thanks to Jeff Trout for the initial report.
Security: CVE-2007-0555
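
An illustrative reproduction of the scenario described above (names are
hypothetical):

    CREATE TABLE t (x integer);
    CREATE FUNCTION get_x() RETURNS integer AS
        'SELECT x FROM t' LANGUAGE sql;

    -- Redefine the column the function body depends on:
    ALTER TABLE t ALTER COLUMN x TYPE text USING x::text;

    -- With the re-check in place, calling the function now reports a
    -- result-type mismatch instead of handing back a text Datum as if it
    -- were an integer:
    SELECT get_x();
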
more, and standard_conforming_strings less, because in the future non-E
strings will not treat backslashes specially.
Also use E'' strings where backslashes are used in examples. (The
existing examples would have drawn warnings.)
Backpatch to 8.2.X.
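
For context, a minimal illustration of the warning the reworked examples avoid:

    -- With standard_conforming_strings = off (the default at the time):
    SET escape_string_warning = on;

    SELECT 'one\ntwo';     -- draws a nonstandard-escape warning
    SELECT E'one\ntwo';    -- explicit escape-string syntax, no warning

Once standard_conforming_strings is turned on, the first form treats the
backslash literally, which is why examples that depend on backslashes are
switched to E'' strings.
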
The original coding failed (tried to access deallocated memory) if there were
two active call sites (fn_extra pointers) for the same function and the
function definition was updated. Also, if an update of a recursive function
was detected upon nested entry to the function, the existing compiled version
was summarily deallocated, resulting in crash upon return to the outer
instance. Problem observed while studying a bug report from Sergiy
Vyshnevetskiy.
Bug does not exist before 8.1 since older versions just leaked the memory of
obsoleted compiled functions, rather than trying to reclaim it.
by plpgsql can themselves use SPI --- possibly indirectly, as in the case
of domain_in() invoking plpgsql functions in a domain check constraint.
Per bug #2945 from Sergiy Vyshnevetskiy.
Somewhat arbitrarily, I've chosen to back-patch this as far as 8.0. Given
the lack of prior complaints, it doesn't seem critical for 7.x.
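
A sketch of the indirect case mentioned above, where a domain check constraint
written in plpgsql runs while plpgsql is already inside an SPI call (all names
are hypothetical):

    CREATE FUNCTION is_positive(n int) RETURNS boolean AS $$
    BEGIN
        RETURN n > 0;       -- plpgsql, so evaluating the check uses SPI
    END;
    $$ LANGUAGE plpgsql IMMUTABLE;

    CREATE DOMAIN posint AS int CHECK (is_positive(VALUE));

    -- Coercing the result to the domain fires the check from within the
    -- outer function's own SPI usage:
    CREATE FUNCTION f() RETURNS posint AS $$
    BEGIN
        RETURN 1;
    END;
    $$ LANGUAGE plpgsql;

    SELECT f();
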
safely in the presence of subtransactions. To ensure that any ExprContext
shutdown callbacks are called at the right times, we have to have a separate
EState for each level of subtransaction. Per "TupleDesc reference leak" bug
report from Stefan Kaltenbrunner.
Although I'm convinced the code is wrong as far back as 8.0, it doesn't seem
that there are any ways for the problem to really manifest before 8.2: AFAICS,
8.0 and 8.1 only use the ExprContextCallback mechanism to handle set-returning
functions, which cannot usefully be executed in a "simple expression" anyway.
Hence, no backpatch before 8.2 --- the risk of unforeseen breakage seems
to outweigh the chance of fixing something.
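
For orientation only, the general shape of plpgsql code that evaluates "simple
expressions" at more than one subtransaction level (this is a hypothetical
sketch, not the reported test case):

    CREATE FUNCTION nested_subxact_demo() RETURNS int AS $$
    DECLARE x int := 0;
    BEGIN
        BEGIN                                   -- exception block = subxact
            BEGIN                               -- nested subxact
                x := 1 / 0;
            EXCEPTION WHEN division_by_zero THEN
                x := x + 1;
            END;
        EXCEPTION WHEN others THEN
            x := -1;
        END;
        RETURN x;                               -- returns 1 in this sketch
    END;
    $$ LANGUAGE plpgsql;

    SELECT nested_subxact_demo();
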
versus varchar[]. This oversight probably explains Ryan Holmes' recent
complaint --- he was getting a generic selectivity estimate instead of
anything intelligent.
exactly at the point where we need to insert a new item, the calculation used
the wrong size for the "high key" of the new left page. This could lead to
choosing an unworkable split, resulting in "PANIC: failed to add item to the
left sibling" (or "right sibling") failure. Although this bug has been there
a long time, it's very difficult to trigger a failure before 8.2, since there
was generally a lot of free space on both sides of a chosen split. In 8.2,
where the user-selected fill factor determines how much free space the code
tries to leave, an unworkable split is much more likely. Report by Joe
Conway, diagnosis and fix by Heikki Linnakangas.
DROP TABLE and DROP DATABASE. Should prevent unexpected "permission denied"
failures on Windows, and is cleaner on other platforms too since we no longer
have to take it on faith that ENOENT is okay during an fsync attempt.
Patched as far back as 8.1; per recent discussion I think we are not going
to worry about Windows-specific issues in 8.0 anymore.
input in the stats collector. Our select() emulation is apparently buggy
for UDP sockets :-(. This should resolve problems with stats collection
(and hence autovacuum) failing under more than minimal load. Diagnosis
and patch by Magnus Hagander.
Patch probably needs to be back-ported to 8.1 and 8.0, but first let's
see if it makes the buildfarm happy...
page about the maximum UTF8 sequence length we support (4 bytes since 8.1,
3 before that). pg_utf2wchar_with_len never got updated to support 4-byte
characters at all, and in any case had a buffer-overrun risk in that it
could produce multiple pg_wchars from what mblen claims to be just one UTF8
character. The only reason we don't have a major security hole is that most
callers allocate worst-case output buffers; the sole exception in released
versions appears to be pre-8.2 iwchareq() (ie, ILIKE), which can be crashed
due to zeroing out its return address --- but AFAICS that can't be exploited
for anything more than a crash, due to inability to control what gets written
there. Per report from James Russell and Michael Fuhr.
Pre-8.1 the risk is much less, but I still think pg_utf2wchar_with_len's
behavior given an incomplete final character risks buffer overrun, so
back-patch that logic change anyway.
This patch also makes sure that UTF8 sequences exceeding the supported
length (whichever it is) are consistently treated as error cases, rather
than being treated like a valid shorter sequence in some places.
involving unions of types having typmods. Variants of the failure are known
to occur in 8.1 and up; not sure if it's possible in 8.0 and 7.4, but since
the code exists that far back, I'll just patch 'em all. Per report from
Brian Hurt.
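
The general construct involved, with hypothetical names (the actual failing
queries are in the cited report):

    CREATE TABLE a (v varchar(10));
    CREATE TABLE b (v varchar(20));

    -- A UNION whose branches carry different typmods on the same base type:
    SELECT v FROM a UNION ALL SELECT v FROM b;
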
match the postgresql.conf file. Also add units to descriptions that
lacked them. Wording improvements. Mention pg_settings.unit as the way
to find the default units for a setting.
Backpatch to 8.2.X.
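
For reference, the view column the text now points readers at:

    -- Shows each setting's value together with its implicit unit:
    SELECT name, setting, unit FROM pg_settings WHERE unit IS NOT NULL;
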
it was checking a pg_constraint OID instead of a pg_class OID, resulting in
"relation with OID nnnnn does not exist" failures for anyone who wasn't the
owner of the table being examined. Per bug #2848 from Laurence Rowe.
Note: for existing 8.2 installations a simple version update won't fix this;
the easiest fix is to CREATE OR REPLACE this view with the corrected
definition.
standard convention the 21st century runs from 2001-2100, not 2000-2099,
so make it work like that. Per bug #2885 from Akio Iwaasa.
Backpatch to 8.2, but no further, since this is really a definitional
change; users of older branches are probably more interested in stability.
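
Assuming the change is the one visible through EXTRACT (which is how the
convention is most easily checked), the boundary behavior is:

    SELECT extract(century FROM date '2000-12-31');   -- 20
    SELECT extract(century FROM date '2001-01-01');   -- 21
    SELECT extract(century FROM date '2100-12-31');   -- 21
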
(or other types of pg_class entry): the function pgstat_vacuum_tabstat,
invoked during VACUUM startup, had runtime proportional to the number of
stats table entries times the number of pg_class rows; in other words
O(N^2) if the stats collector's information is reasonably complete.
Replace list searching with a hash table to bring it back to O(N)
behavior. Per report from kim at myemma.com.
Back-patch as far as 8.1; 8.0 and before use different coding here.
Call srandom() instead of srand().
pgbench uses random() later, so it should have seeded with srandom().
On most platforms other than Windows, srandom() is actually identical
to srand(), so the bug only bites Windows users.
Per bug report from Akio Ishida.
when collapsing of JOIN trees is stopped by join_collapse_limit. For instance
a list of 11 LEFT JOINs with limit 8 now produces something like
((1 2 3 4 5 6 7 8) 9 10 11 12)
instead of
(((1 2 3 4 5 6 7 8) (9)) 10 11 12)
The latter structure is really only required for a FULL JOIN.
Noted while studying an example from Shane Ambler.
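
A scaled-down way to look at the planning situation described above (four LEFT
JOINs with a limit of 2 instead of eleven with a limit of 8; table names are
hypothetical). The parenthesized lists above denote how the planner groups the
relations once the limit is hit:

    CREATE TABLE t1 (id int);  CREATE TABLE t2 (id int);
    CREATE TABLE t3 (id int);  CREATE TABLE t4 (id int);
    CREATE TABLE t5 (id int);

    SET join_collapse_limit = 2;
    EXPLAIN SELECT *
    FROM t1
      LEFT JOIN t2 ON t1.id = t2.id
      LEFT JOIN t3 ON t1.id = t3.id
      LEFT JOIN t4 ON t1.id = t4.id
      LEFT JOIN t5 ON t1.id = t5.id;
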
hash joins with the estimated-larger relation on the inside. There are
several cases where doing that makes perfect sense, and in cases where it
doesn't, the regular cost computation really ought to be able to figure that
out. Make some marginal tweaks in said computation to try to get results
approximating reality a bit better. Per an example from Shane Ambler.
Also, fix an oversight in the original patch to add seq_page_cost: the costs
of spilling a hash join to disk should be scaled by seq_page_cost.