This patch causes pg_upgrade to error out during its check phase if:
(1) template0 is marked connectable, or
(2) any other database is marked non-connectable.
This is done because, in the first case, the pg_dumpall --globals restore
would fail, and in the second case, the database would not be restored,
leading to data loss.
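By way of illustration, the check could be performed standalone with libpq
along these lines (a hypothetical sketch, not pg_upgrade's actual code;
error handling is simplified):

    #include <stdio.h>
    #include <string.h>
    #include <libpq-fe.h>

    /* template0 must be non-connectable; every other database must be
     * connectable, or pg_upgrade cannot restore it. */
    int
    main(void)
    {
        PGconn     *conn = PQconnectdb("");  /* use PG* environment defaults */
        PGresult   *res;
        int         bad = 0;

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            return 2;
        }
        res = PQexec(conn, "SELECT datname, datallowconn FROM pg_database");
        if (PQresultStatus(res) != PGRES_TUPLES_OK)
        {
            fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
            return 2;
        }
        for (int i = 0; i < PQntuples(res); i++)
        {
            const char *name = PQgetvalue(res, i, 0);
            int         allowconn = (strcmp(PQgetvalue(res, i, 1), "t") == 0);
            int         is_template0 = (strcmp(name, "template0") == 0);

            if (is_template0 ? allowconn : !allowconn)
            {
                fprintf(stderr, "database \"%s\" has an unexpected "
                        "datallowconn setting\n", name);
                bad++;
            }
        }
        PQclear(res);
        PQfinish(conn);
        return bad ? 1 : 0;
    }
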
Report by Matt Landry (1), Stephen Frost (2)
Backpatch through 9.0
These functions should return SETOF TEXT[], like the core functions they
are wrappers for; but they were incorrectly declared as returning just
TEXT[]. This mistake had two results. First, if there was no match you got
a scalar null result, whereas what you should get is an empty set (zero
rows). Second, the 'g' flag was effectively ignored, since you would get
only one result array even if there were multiple matches, as reported by
Jeff Certain.
While ignoring 'g' is a clear bug, the behavior for no matches might well
have been thought to be the intended behavior by people who hadn't compared
it carefully to the core regexp_matches() functions. So we should tread
carefully about introducing this change in the back branches. Still, it
clearly is a bug and so providing some fix is desirable.
After discussion, the conclusion was to introduce the change in a 1.1
version of the citext extension (as we would need to do anyway); 1.0 still
contains the incorrect behavior. 1.1 is the default and only available
version in HEAD, but it is optional in the back branches, where 1.0 remains
the default version. People wishing to adopt the fix in back branches will
need to explicitly do ALTER EXTENSION citext UPDATE TO '1.1'. (I also
provided a downgrade script in the back branches, so people could go back
to 1.0 if necessary.)
This should be called out as an incompatible change in the 9.5 release
notes, although we'll also document it in the next set of back-branch
release notes. The notes should mention that any views or rules that use
citext's regexp_matches() functions will need to be dropped before
upgrading to 1.1, and recreated afterwards.
Back-patch to 9.1. The bug goes all the way back to citext's introduction
in 8.4, but pre-9.1 there is no extension mechanism with which to manage
the change. Given the lack of previous complaints it seems unnecessary to
change this behavior in 9.0, anyway.
While gcc doesn't complain if you declare a function "static" and then
define it not-static, other compilers do; and in any case the code is
highly misleading this way. Add the missing "static" keywords to a
couple of recent patches. Per buildfarm member pademelon.
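The pattern at issue, sketched with a hypothetical function name:

    #include <stdio.h>

    static void do_cleanup(void);   /* prototype says static... */

    static void                     /* ...so the definition must repeat it;
                                     * gcc silently merges the two, but
                                     * stricter compilers reject the mismatch */
    do_cleanup(void)
    {
        printf("cleaning up\n");
    }

    int
    main(void)
    {
        do_cleanup();
        return 0;
    }
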
As with initdb these programs need to run with a restricted token, and
if they don't, pg_upgrade will fail when run as a user with Administrator
privileges.
Backpatch to all live branches. On the development branch the code is
reorganized so that the restricted token code is now in a single
location. On the stable branches a less invasive change is made by
simply copying the relevant code to pg_upgrade.c and pg_resetxlog.c.
Patches and bug report from Muhammad Asif Naeem, reviewed by Michael
Paquier, slightly edited by me.
It's all very well to claim that a simplistic sort is fast in easy
cases, but O(N^2) in the worst case is not good ... especially if the
worst case is as easy to hit as "descending order input". Replace that
bit with our standard qsort.
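For reference, the generic shape of the replacement (an illustrative
sketch, not the committed patch):

    #include <stdio.h>
    #include <stdlib.h>

    /* Comparator for qsort(); written as a difference of comparisons
     * rather than "a - b", which can overflow. */
    static int
    cmp_int(const void *a, const void *b)
    {
        int av = *(const int *) a;
        int bv = *(const int *) b;

        return (av > bv) - (av < bv);
    }

    int
    main(void)
    {
        int  data[] = {5, 4, 3, 2, 1};  /* descending input: the old worst case */

        /* O(N log N) regardless of input order */
        qsort(data, sizeof(data) / sizeof(data[0]), sizeof(data[0]), cmp_int);
        for (int i = 0; i < 5; i++)
            printf("%d ", data[i]);
        printf("\n");
        return 0;
    }
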
Per bug #12866 from Maksym Boguk. Back-patch to all active branches.
This covers alterations to buffer sizing and zeroing made between imath
1.3 and imath 1.20. Valgrind Memcheck identified the buffer overruns
and reliance on uninitialized data; their exploit potential is unknown.
Builds specifying --with-openssl are unaffected, because they use the
OpenSSL BIGNUM facility instead of imath. Back-patch to 9.0 (all
supported versions).
Security: CVE-2015-0243
Most callers pass a stack buffer. The ensuing stack smash can crash the
server, and we have not ruled out the viability of attacks that lead to
privilege escalation. Back-patch to 9.0 (all supported versions).
Marko Tiikkaja
Security: CVE-2015-0243
Coverity points out that mdc_finish returns a pointer to a local buffer
(which of course is gone as soon as the function returns), leaving open
a risk of misbehaviors possibly as bad as a stack overwrite.
In reality, the only possible call site is in process_data_packets()
which does not examine the returned pointer at all. So there's no
live bug, but nonetheless the code is confusing and risky. Refactor
to avoid the issue by letting process_data_packets() call mdc_finish()
directly instead of going through the pullf_read() API.
Although this is only cosmetic, it seems good to back-patch so that
the logic in pgp-decrypt.c stays in sync across all branches.
Marko Kreen
connectby() didn't adequately check that the constructed SQL query returns
what it's expected to; in fact, since commit 08c33c426b it wasn't
checking that at all. This could result in a null-pointer-dereference
crash if the constructed query returns only one column instead of the
expected two. Less excitingly, it could also result in surprising data
conversion failures if the constructed query returned values that were
not I/O-conversion-compatible with the types specified by the query
calling connectby().
In all branches, insist that the query return at least two columns;
this seems like a minimal sanity check that can't break any reasonable
use-cases.
In HEAD, insist that the constructed query return the types specified by
the outer query, including checking for typmod incompatibility, which the
code never did even before it got broken. This is to hide the fact that
the implementation does a conversion to text and back; someday we might
want to improve that.
In back branches, leave that alone, since adding a type check in a minor
release is more likely to break things than make people happy. Type
inconsistencies will continue to work so long as the actual type and
declared type are I/O representation compatible, and otherwise will fail
the same way they used to.
Also, in all branches, be on guard for NULL results from the constructed
query, which formerly would cause null-pointer dereference crashes.
We now print the row with the NULL but don't recurse down from it.
In passing, get rid of the rather pointless idea that
build_tuplestore_recursively() should return the same tuplestore that's
passed to it.
Michael Paquier, adjusted somewhat by me
Some fields of the sinfo struct are modified within PG_TRY and then
referenced within PG_CATCH, so as with the recent patch to async.c, "volatile"
is necessary for strict POSIX compliance; and that propagates to a couple
of subroutines as well as materializeQueryResult() itself. I think the
risk of actual issues here is probably higher than in async.c, because
storeQueryResult() is likely to get inlined into materializeQueryResult(),
leaving the compiler free to conclude that its stores into sinfo fields are
dead code.
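The underlying hazard, reduced to a standalone sketch (PG_TRY/PG_CATCH are
built on this same setjmp/longjmp mechanism):

    #include <setjmp.h>
    #include <stdio.h>

    static jmp_buf env;

    int
    main(void)
    {
        /* Without "volatile", this local's value is indeterminate after
         * longjmp(), because it was modified between setjmp() and longjmp();
         * the compiler may have kept it in a register or dropped the store
         * as dead code. */
        volatile int rows_stored = 0;

        if (setjmp(env) == 0)
        {
            rows_stored = 42;       /* work done inside the "try" block */
            longjmp(env, 1);        /* simulate elog(ERROR) */
        }
        else
            printf("kept %d rows before the error\n", rows_stored);
        return 0;
    }
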
Previously, read() might have returned a length equal to the buffer
length, and then the subsequent store to buf[len] would write a
zero-byte one byte past the end. This doesn't seem likely to be
a security issue, but there's some chance it could result in
pg_standby misbehaving.
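The corrected pattern, as a generic sketch: reserve one byte for the
terminator so that buf[len] is always in bounds.

    #include <unistd.h>

    /* Read into buf and NUL-terminate it.  The bug was reading up to
     * bufsize bytes and then storing the terminator at buf[len], which
     * is one past the end when the buffer fills completely. */
    static ssize_t
    read_terminated(int fd, char *buf, size_t bufsize)
    {
        ssize_t len = read(fd, buf, bufsize - 1);   /* leave room for '\0' */

        if (len < 0)
            len = 0;
        buf[len] = '\0';            /* now always within bounds */
        return len;
    }
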
Spotted by Coverity; patch by Michael Paquier, reviewed by me.
Previously, the computation of the total test duration, measured in
microseconds, overflowed because it accidentally used signed 32-bit
arithmetic. Since the only consequence is that pg_test_timing invocations
with such overly large durations never finish, the practical consequences
of this bug are minor.
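The arithmetic error in miniature (hypothetical values):

    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        int32_t seconds = 3000;

        /* The multiplication happens in 32 bits and overflows (INT32_MAX is
         * about 2.1e9) before the result is widened to 64 bits. */
        int64_t wrong = seconds * 1000000;

        /* Widen first, then multiply in 64 bits. */
        int64_t right = (int64_t) seconds * 1000000;

        printf("wrong=%lld right=%lld\n", (long long) wrong, (long long) right);
        return 0;
    }
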
Pointed out by Coverity.
Backpatch to 9.2 where pg_test_timing was added.
In the unlikely case of stdin (fd 0) being closed, the off-by-one
would lead to pg_xlogdump failing to open files.
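The off-by-one in miniature: once stdin is closed, open() can legitimately
return descriptor 0, so only -1 signals failure.

    #include <fcntl.h>
    #include <stdio.h>

    int
    main(int argc, char **argv)
    {
        int fd;

        if (argc < 2)
            return 1;
        fd = open(argv[1], O_RDONLY);
        if (fd < 0)             /* the buggy test was "fd <= 0", which wrongly
                                 * rejects a perfectly valid descriptor 0 */
        {
            perror(argv[1]);
            return 1;
        }
        /* ... use fd ... */
        return 0;
    }
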
Spotted by Coverity.
Backpatch to 9.3 where pg_xlogdump was introduced.
Use SSPI authentication to allow connections exclusively from the OS
user that launched the test suite. This closes on Windows the
vulnerability that commit be76a6d39e
closed on other platforms. Users of "make installcheck" or custom test
harnesses can run "pg_regress --config-auth=DATADIR" to activate the
same authentication configuration that "make check" would use.
Back-patch to 9.0 (all supported versions).
Security: CVE-2014-0067
If the called command fails to return data, runShellCommand forgot to
pclose() the pipe before returning. This is fairly harmless in the current
code, because pgbench would then abandon further processing of that client
thread; so no more than nclients descriptors could be leaked this way. But
it's not hard to imagine future improvements whereby that wouldn't be true.
In any case, it's sloppy coding, so patch all branches. Found by Coverity.
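A sketch of the leak-free pattern (hypothetical helper, not pgbench's
actual code):

    #include <stdio.h>

    /* Run a command and capture its first output line.  Both exit paths
     * close the pipe; the bug was returning early without pclose() when
     * no data came back. */
    static int
    run_command(const char *cmd, char *buf, int bufsize)
    {
        FILE *fp = popen(cmd, "r");

        if (fp == NULL)
            return -1;
        if (fgets(buf, bufsize, fp) == NULL)
        {
            pclose(fp);         /* the previously missing call */
            return -1;
        }
        pclose(fp);
        return 0;
    }
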
In commit 462bd95705, I changed postgres_fdw
to rely on get_plan_rowmark() instead of get_parse_rowmark(). I still
think that's a good idea in the long run, but as Etsuro Fujita pointed out,
it doesn't work today because planner.c forces PlanRowMarks to have
markType = ROW_MARK_COPY for all foreign tables. There's no urgent reason
to change this in the back branches, so let's just revert that part of
yesterday's commit rather than trying to design a better solution under
time pressure.
Also, add a regression test case showing what postgres_fdw does with FOR
UPDATE/SHARE. I'd blithely assumed there was one already, else I'd have
realized yesterday that this code didn't work.
Ordinarily we can omit checking of a WHERE condition that matches a partial
index's condition, when we are using an indexscan on that partial index.
However, in SELECT FOR UPDATE we must include the "redundant" filter
condition in the plan so that it gets checked properly in an EvalPlanQual
recheck. The planner got this mostly right, but improperly omitted the
filter condition if the index in question was on an inheritance child
table. In READ COMMITTED mode, this could result in incorrectly returning
just-updated rows that no longer satisfy the filter condition.
The cause of the error is using get_parse_rowmark() when get_plan_rowmark()
is what should be used during planning. In 9.3 and up, also fix the same
mistake in contrib/postgres_fdw. It's currently harmless there (for lack
of inheritance support) but wrong is wrong, and the incorrect code might
get copied to someplace where it's more significant.
Report and fix by Kyotaro Horiguchi. Back-patch to all supported branches.
We expose a function IsValidJsonNumber that internally calls the lexer
for json numbers. That allows us to use the same test everywhere,
instead of inventing a broken test for hstore conversions. The new
function is also used in datum_to_json, replacing the code that is now
moved to the new function.
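Typical use, assuming the backend environment (header locations may vary by
branch; this is a sketch of the call pattern, not the hstore_to_json_loose
code itself):

    #include "postgres.h"
    #include "lib/stringinfo.h"
    #include "utils/json.h"     /* IsValidJsonNumber(), escape_json() */

    /* Emit a value either as a bare JSON number or as a quoted,
     * escaped JSON string. */
    static void
    emit_value(StringInfo dst, const char *val, int vallen)
    {
        if (IsValidJsonNumber(val, vallen))
            appendBinaryStringInfo(dst, val, vallen);   /* unquoted number */
        else
            escape_json(dst, pnstrdup(val, vallen));    /* quoted string */
    }
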
Backpatch to 9.3 where hstore_to_json_loose was introduced.
Mark Simonetti reported that libxslt sometimes crashes for him, and that
swapping xslt_process's object-freeing calls around to do them in reverse
order of creation seemed to fix it. I've not reproduced the crash, but
valgrind clearly shows a reference to already-freed memory, which is
consistent with the idea that shutdown of the xsltTransformContext is
trying to reference the already-freed stylesheet or input document.
With this patch, valgrind is no longer unhappy.
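The reordering amounts to freeing in reverse order of creation, roughly as
below (a sketch under the assumption that these are the objects involved;
any transform context would be freed before all of them):

    #include <libxml/tree.h>
    #include <libxslt/xsltInternals.h>

    static void
    xslt_cleanup(xsltStylesheetPtr stylesheet, xmlDocPtr doctree,
                 xmlDocPtr restree)
    {
        /* free in reverse order of creation */
        if (restree != NULL)
            xmlFreeDoc(restree);            /* result document: created last */
        if (doctree != NULL)
            xmlFreeDoc(doctree);            /* input document */
        if (stylesheet != NULL)
            xsltFreeStylesheet(stylesheet); /* stylesheet: created first */
    }
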
I have an inquiry in to see if this is a libxslt bug or if we're just
abusing the library; but even if it's a library bug, we'd want to adjust
our code so it doesn't fail with unpatched libraries.
Back-patch to all supported branches, because we've been doing this in
the wrong(?) order for a long time.
postgres_fdw would send query conditions involving system columns to the
remote server, even though it makes no effort to ensure that system
columns other than CTID match what the remote side thinks. tableoid,
in particular, probably won't match the local value, yet might well be
used in queries. Hence, prevent sending conditions that include
Also, create_foreignscan_plan neglected to check local restriction
conditions while determining whether to set fsSystemCol for a foreign
scan plan node. This again would bollix the results for queries that
test a foreign table's tableoid.
Back-patch the first fix to 9.3 where postgres_fdw was introduced.
Back-patch the second to 9.2. The code is probably broken in 9.1 as
well, but the patch doesn't apply cleanly there; given the weak state
of support for FDWs in 9.1, it doesn't seem worth fixing.
Etsuro Fujita, reviewed by Ashutosh Bapat, and somewhat modified by me
This fixes a scenario in which pgp_sym_decrypt() failed with "Wrong key
or corrupt data" on messages whose length is 6 less than a power of 2.
Per bug #11905 from Connor Penhale. Fix by Marko Tiikkaja, regression
test case from Jeff Janes.
In general, datatype I/O functions are supposed to be immutable or at
worst stable. Some contrib I/O functions were, through oversight, not
marked with any volatility property at all, which made them VOLATILE.
Since (most of) these functions actually behave immutably, the erroneous
marking isn't terribly harmful; but it can be user-visible in certain
circumstances, as per a recent bug report from Joe Van Dyk in which a
cast to text was disallowed in an expression index definition.
To fix, just adjust the declarations in the extension SQL scripts. If we
were being very fussy about this, we'd bump the extension version numbers,
but that seems like more trouble (for both developers and users) than the
problem is worth.
A fly in the ointment is that chkpass_in actually is volatile, because
of its use of random() to generate a fresh salt when presented with a
not-yet-encrypted password. This is bad because of the general assumption
that I/O functions aren't volatile: the consequence is that records or
arrays containing chkpass elements may have input behavior a bit different
from a bare chkpass column. But there seems no way to fix this without
breaking existing usage patterns for chkpass, and the consequences of the
inconsistency don't seem bad enough to justify that. So for the moment,
just document it in a comment.
Since we're not bumping version numbers, there seems no harm in
back-patching these fixes; at least future installations will get the
functions marked correctly.
This back-patches commit 0c5783ff30 into the
9.3 branch. At the time, Heikki just thought he was fixing an unlikely
integer-overflow scenario, but in point of fact the original coding was
hopelessly broken: it supposed that escape_json never enlarges the data
more than 2X, which is wrong on its face. The revised code avoids
making any a-priori assumptions about the output length.
Per report from Saul Costa. The bogus code doesn't exist before 9.3,
so no other branches need fixing.
pgp_sym_encrypt's option is spelled "sess-key", not "enable-session-key".
Spotted by Jeff Janes.
In passing, improve a comment in pgp-pgsql.c to make it clearer that
the debugging options are intentionally undocumented.
If the locale names are not equal, try to canonicalize both of them by
passing them to setlocale(). Before, we only canonicalized the old cluster's
locale if upgrading from an 8.4-9.2 server, but we also need to canonicalize
when upgrading from a pre-8.4 server. That was an oversight in the code. But
we should also canonicalize on newer server versions, so that we cope if the
canonical form changes from one release to another. I'm about to do just
that to fix bug #11431, by mapping a locale name that contains non-ASCII
characters to a pure-ASCII alias of the same locale.
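Canonicalization itself is just a round-trip through setlocale(), along
these lines (an illustrative helper, not pg_upgrade's exact code):

    #include <locale.h>
    #include <stdlib.h>
    #include <string.h>

    /* Return a malloc'd canonical spelling of a locale name, or NULL if
     * the name isn't recognized.  Restores the previous locale on exit. */
    static char *
    canonicalize_locale(int category, const char *name)
    {
        char   *save = strdup(setlocale(category, NULL));
        char   *res = setlocale(category, name);
        char   *canon = res ? strdup(res) : NULL;

        setlocale(category, save);      /* put things back */
        free(save);
        return canon;
    }

For example, on some platforms canonicalize_locale(LC_COLLATE, "en_US.utf8")
might yield "en_US.UTF-8", so two differently spelled names can compare
equal after canonicalization.
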
This is partial backpatch of commit 33755e8edf
in master. Apply to 9.2, 9.3 and 9.4. The canonicalization code didn't exist
before 9.2. In 9.2 and 9.3, this effectively also back-patches the changes
from commit 58274728fb, to be more lax about
the spelling of the encoding in the locale names.
Up to now, PG has assumed that any given timezone abbreviation (such as
"EDT") represents a constant GMT offset in the usage of any particular
region; we had a way to configure what that offset was, but not for it
to be changeable over time. But, as with most things horological, this
view of the world is too simplistic: there are numerous regions that have
at one time or another switched to a different GMT offset but kept using
the same timezone abbreviation. Almost the entire Russian Federation did
that a few years ago, and later this month they're going to do it again.
And there are similar examples all over the world.
To cope with this, invent the notion of a "dynamic timezone abbreviation",
which is one that is referenced to a particular underlying timezone
(as defined in the IANA timezone database) and means whatever it currently
means in that zone. For zones that use or have used daylight-savings time,
the standard and DST abbreviations continue to have the property that you
can specify standard or DST time and get that time offset whether or not
DST was theoretically in effect at the time. However, the abbreviations
mean what they meant at the time in question (or most recently before that
time) rather than being absolutely fixed.
The standard abbreviation-list files have been changed to use this behavior
for abbreviations that have actually varied in meaning since 1970. The
old simple-numeric definitions are kept for abbreviations that have not
changed, since they are a bit faster to resolve.
While this is clearly a new feature, it seems necessary to back-patch it
into all active branches, because otherwise use of Russian zone
abbreviations is going to become even more problematic than it already was.
This change supersedes the changes in commit 513d06ded et al to modify the
fixed meanings of the Russian abbreviations; since we've not shipped that
yet, this will avoid an undesirably incompatible (not to mention incorrect)
change in behavior for timestamps between 2011 and 2014.
This patch makes some cosmetic changes in ecpglib to keep its usage of
datetime lookup tables as similar as possible to the backend code, but
doesn't do anything about the increasingly obsolete set of timezone
abbreviation definitions that are hard-wired into ecpglib. Whatever we
do about that will likely not be appropriate material for back-patching.
Also, a potential free() of a garbage pointer after an out-of-memory
failure in ecpglib has been fixed.
This patch also fixes pre-existing bugs in DetermineTimeZoneOffset() that
caused it to produce unexpected results near a timezone transition, if
both the "before" and "after" states are marked as standard time. We'd
only ever thought about or tested transitions between standard and DST
time, but that's not what's happening when a zone simply redefines their
base GMT offset.
In passing, update the SGML documentation to refer to the Olson/zoneinfo/
zic timezone database as the "IANA" database, since it's now being
maintained under the auspices of IANA.
This has been broken since commit af7914c662,
which added the EXPLAIN (TIMING) option. Although that commit included
updates to auto_explain, they evidently weren't tested very carefully,
because the code failed to print node timings even when it should, due to
failure to set es.timing in the ExplainState struct. Reported off-list by
Neelakanth Nadgir of Salesforce.
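The fix is essentially one missing assignment when the ExplainState is
filled in; schematically (paraphrasing auto_explain, not quoting it):

    ExplainState es;

    ExplainInitState(&es);
    es.analyze = (queryDesc->instrument_options && auto_explain_log_analyze);
    es.verbose = auto_explain_log_verbose;
    es.buffers = (es.analyze && auto_explain_log_buffers);
    es.timing = (es.analyze && auto_explain_log_timing);  /* previously absent */
    es.format = auto_explain_log_format;
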
In passing, clean up the documentation for auto_explain's options a
little bit, including re-ordering them into what seems to me a more
logical order.
In commit 45e02e3232, we intentionally
disallowed updates on individual elements of oidvector columns. While that
still seems like a sane idea in the abstract, we (I) forgot that citext's
"upgrade from unpackaged" script did in fact perform exactly such updates,
in order to fix the problem that citext indexes should have a collation
but would not in databases dumped or upgraded from pre-9.1 installations.
Even if we wanted to add casts to allow such updates, there's no practical
way to do so in the back branches, so the only real alternative is to make
citext's kluge even klugier. In this patch, I cast the oidvector to text,
fix its contents with regexp_replace, and cast back to oidvector. (Ugh!)
Since the aforementioned commit went into all active branches, we have to
fix this in all branches that contain the now-broken update script.
Per report from Eric Malm.
Some of the many error messages introduced in 458857cc missed 'FROM
unpackaged'. Also e016b724 and 45ffeb7e forgot to quote extension
version numbers.
Backpatch to 9.1, just like 458857cc, which introduced the messages. Do
so because the error messages thrown when the wrong command is copied and
pasted aren't easy to understand.
Previously, TOAST tables required only in the new cluster could cause
OID conflicts if they were auto-numbered and a later conflicting OID had
to be assigned.
Backpatch through 9.3
autovacuum_multixact_freeze_max_age was added as a pg_ctl start
parameter in 9.3.X to prevent autovacuum from running. However, only
some 9.3.X releases have that parameter, since it was added in a minor
PG 9.3 release. It also isn't needed because -b turns
off autovacuum in 9.1+.
Without this fix, trying to upgrade from an early 9.3 release to 9.4
would fail.
Report by EDB
Backpatch through 9.3
get_raw_page tried to validate the supplied block number against
RelationGetNumberOfBlocks(), which of course is only right when
accessing the main fork. In most cases, the main fork is longer
than the others, so that the check was too weak (allowing a
lower-level error to be reported, but no real harm to be done).
However, very small tables could have an FSM larger than their heap,
in which case the mistake prevented access to some FSM pages.
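The shape of the fix is to validate against the length of the fork
actually being read, roughly (paraphrasing the backend code):

    /* check against the requested fork, not the main fork */
    if (blkno >= RelationGetNumberOfBlocksInFork(rel, forknum))
        ereport(ERROR,
                (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
                 errmsg("block number %u is out of range for relation \"%s\"",
                        blkno, RelationGetRelationName(rel))));
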
Per report from Torsten Foertsch.
In passing, make the bad-block-number error into an ereport not elog
(since it's certainly not an internal error); and fix sloppily
maintained comment for RelationGetNumberOfBlocksInFork.
This has been wrong since we invented relation forks, so back-patch
to all supported branches.
With OpenLDAP versions 2.4.24 through 2.4.31, inclusive, PostgreSQL
backends can crash at exit. Raise a warning during "configure" based on
the compile-time OpenLDAP version number, and test the crash scenario in
the dblink test suite. Back-patch to 9.0 (all supported versions).
Previously, when calculations on the need for toast tables changed,
pg_upgrade could not handle cases where the new cluster needed a TOAST
table and the old cluster did not. (It already handled the opposite
case.) This fixes the "OID mismatch" error typically generated in this
case.
Backpatch through 9.2
The output buffer size in unaccent_lexize() was calculated as input string
length times pg_database_encoding_max_length(), which effectively assumes
that replacement strings aren't more than one character. While that was
all that we previously documented it to support, the code actually has
always allowed replacement strings of arbitrary length; so if you tried
to make use of longer strings, you were at risk of buffer overrun. To fix,
use an expansible StringInfo buffer instead of trying to determine the
maximum space needed a-priori.
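Sketch of the replacement approach, assuming the backend environment
("translated" and "translated_len" stand in for a rule's replacement
string):

    #include "postgres.h"
    #include "lib/stringinfo.h"

    /* Hypothetical helper: build the result with an expansible buffer
     * instead of a preallocated one. */
    static char *
    apply_replacement(const char *translated, int translated_len)
    {
        StringInfoData buf;

        initStringInfo(&buf);           /* starts small, grows on demand */
        appendBinaryStringInfo(&buf, translated, translated_len);
        return buf.data;                /* palloc'd; any length is safe */
    }
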
This would be a security issue if unaccent rules files could be installed
by unprivileged users; but fortunately they can't, so in the back branches
the problem can be labeled as improper configuration by a superuser.
Nonetheless, a memory stomp isn't a nice way of reacting to improper
configuration, so let's back-patch the fix.
This function continued to use it after heap_endscan() freed it. In
passing, don't explicitly create a strategy here. Instead, use the one
created by heap_beginscan_strat(), if any. Back-patch to 9.2, where use
of a BufferAccessStrategy here was introduced.