This lets us remove the large alternative results files for the main
json and jsonb tests, which makes modifying those tests simpler for
committers and patch submitters.
Backpatch to 9.4 for jsonb and 9.3 for json.
In general one may have to run both REASSIGN OWNED and DROP OWNED to get
rid of all the dependencies of a role to be dropped. This was alluded to
in the REASSIGN OWNED man page, but not really spelled out in full; and in
any case the procedure ought to be documented in a more prominent place
than that. Add a section to the "Database Roles" chapter explaining this,
and do a bit of wordsmithing in the relevant commands' man pages.
The postmaster now checks every minute or so (worst case, at most two
minutes) that postmaster.pid is still there and still contains its own PID.
If not, it performs an immediate shutdown, as though it had received
SIGQUIT.
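For illustration, the self-check can be as simple as this standalone
sketch (the function name and file handling here are hypothetical, not
the postmaster's actual code):

    /* Illustrative sketch only: check that a pid file still exists and
     * still names our own PID. */
    #include <stdio.h>
    #include <unistd.h>

    static int
    pidfile_still_ours(const char *path)
    {
        FILE *fp = fopen(path, "r");
        long  filepid = 0;

        if (fp == NULL)
            return 0;               /* file gone: caller should shut down */
        if (fscanf(fp, "%ld", &filepid) != 1)
            filepid = 0;            /* unreadable: treat the same way */
        fclose(fp);
        return filepid == (long) getpid();
    }
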
The original goal behind this change was to ensure that failed buildfarm
runs would get fully cleaned up, even if the test scripts had left a
postmaster running, which is not an infrequent occurrence. When the
buildfarm script removes a test postmaster's $PGDATA directory, its next
check on postmaster.pid will fail and cause it to exit. Previously, manual
intervention was often needed to get rid of such orphaned postmasters,
since they'd block new test postmasters from obtaining the expected socket
address.
However, by checking postmaster.pid and not something else, we can provide
additional robustness: manual removal of postmaster.pid is a frequent DBA
mistake, and now we can at least limit the damage that will ensue if a new
postmaster is started while the old one is still alive.
Back-patch to all supported branches, since we won't get the desired
improvement in buildfarm reliability otherwise.
Some of the buildfarm members are evidently miserly enough of stack space
to pass the originally-committed form of this test. Increase the
requirement 10X to hopefully ensure that it fails as-expected everywhere.
Security: CVE-2015-5289
The documentation for the autovacuum_multixact_freeze_max_age and
autovacuum_freeze_max_age relation level parameters contained:
"Note that while you can set autovacuum_multixact_freeze_max_age very
small, or even zero, this is usually unwise since it will force frequent
vacuuming."
which hasn't been true since these options were made relation options,
instead of residing in the pg_autovacuum table (834a6da4f7).
Remove the outdated sentence. Even the lowered limits from 2596d70 are
high enough that this doesn't warrant calling out the risk in the CREATE
TABLE docs.
Per discussion with Tom Lane and Alvaro Herrera
Discussion: 26377.1443105453@sss.pgh.pa.us
Backpatch: 9.0- (in parts)
The tsquery, ltxtquery and query_int data types have a common ancestor.
Having acquired check_stack_depth() calls independently, each was
missing at least one call. Back-patch to 9.0 (all supported versions).
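The guard these fixes add looks roughly like the sketch below; the
parsing function is hypothetical, but check_stack_depth() is the real
backend call that raises a clean error before the C stack can overflow:

    #include "postgres.h"
    #include "miscadmin.h"

    /* Hypothetical recursive-descent helper, shown only to illustrate
     * where the guard goes: once per recursive entry point. */
    static int
    parse_node(const char *buf, int pos)
    {
        check_stack_depth();    /* errors out cleanly if recursion is too deep */

        if (buf[pos] == '(')
            return parse_node(buf, pos + 1);    /* nested subexpression */
        return pos;
    }
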
A range type can name another range type as its subtype, and a record
type can bear a column of another record type. Consequently, functions
like range_cmp() and record_recv() are recursive. Functions at risk
include operator family members and referents of pg_type regproc
columns. Treat as recursive any such function that looks up and calls
the same-purpose function for a record column type or the range subtype.
Back-patch to 9.0 (all supported versions).
An array type's element type is never itself an array type, so array
functions are unaffected. Recursion depth proportional to array
dimensionality, found in array_dim_to_jsonb(), is fine thanks to MAXDIM.
Sufficiently-deep recursion heretofore elicited a SIGSEGV. If an
application constructs PostgreSQL json or jsonb values from arbitrary
user input, application users could have exploited this to terminate all
active database connections. That applies to 9.3, where the json parser
adopted recursive descent, and later versions. Only row_to_json() and
array_to_json() were at risk in 9.2, both in a non-security capacity.
Back-patch to 9.2, where the json type was introduced.
Oskari Saarenmaa, reviewed by Michael Paquier.
Security: CVE-2015-5289
Certain short salts crashed the backend or disclosed a few bytes of
backend memory. For existing salt-induced error conditions, emit a
message saying as much. Back-patch to 9.0 (all supported versions).
Josh Kupershmidt
Security: CVE-2015-5288
In 020235a575 I lowered the autovacuum_*freeze_max_age minimums to
allow for easier testing of wraparounds. I did not touch the
corresponding per-table limits. While those don't matter for the purpose
of wraparound, it seems more consistent to lower them as well.
It's noteworthy that the previous reloption lower limit for
autovacuum_multixact_freeze_max_age was too high by an order of
magnitude, even before 020235a575.
Discussion: 26377.1443105453@sss.pgh.pa.us
Backpatch: back to 9.0 (in parts), like the prior patch
On reflection, the submitted patch didn't really work to prevent the
request size from exceeding MaxAllocSize, because we'd happily round
nbuckets up to the next power of 2 after we'd limited it to
max_pointers. The simplest way to enforce the limit correctly is to
round max_pointers down to a power of 2 when it isn't one already.
(Note that the constraint to INT_MAX / 2, if it were doing anything useful
at all, is properly applied after that.)
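The resulting sizing rule, as a standalone sketch (the names and
constant are illustrative, not the actual ExecChooseHashTableSize()
code):

    #include <stddef.h>

    #define MAX_ALLOC_SIZE  ((size_t) 0x3fffffff)   /* roughly palloc's cap */

    static size_t
    clamp_bucket_count(size_t max_pointers)
    {
        size_t  limit = MAX_ALLOC_SIZE / sizeof(void *);
        size_t  pow2 = 1;

        if (max_pointers > limit)
            max_pointers = limit;

        /* Round DOWN to a power of 2; rounding nbuckets up later to a
         * power of 2 no larger than this cannot exceed the limit. */
        while (pow2 * 2 <= max_pointers)
            pow2 *= 2;

        return pow2;
    }
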
Limit the size of the hashtable pointer array to not more than
MaxAllocSize. We've seen reports of failures due to this in HEAD/9.5,
and it seems possible in older branches as well. The change in
NTUP_PER_BUCKET in 9.5 may have made the problem more likely, but
surely it didn't introduce it.
Tomas Vondra, slightly modified by me
DST law changes in Cayman Islands, Fiji, Moldova, Morocco, Norfolk Island,
North Korea, Turkey, Uruguay. New zone America/Fort_Nelson for Canadian
Northern Rockies.
Some of the functions in regex compilation and execution recurse, and
therefore could in principle be driven to stack overflow. The Tcl crew
has seen this happen in practice in duptraverse(), though their fix was
to put in a hard-wired limit on the number of recursive levels, which is
not too appetizing --- fortunately, we have enough infrastructure to check
the actually available stack. Greg Stark has also seen it in other places
while fuzz testing on a machine with limited stack space. Let's put guards
in to prevent crashes in all these places.
Since the regex code would leak memory if we simply threw elog(ERROR),
we have to introduce an API that checks for stack depth without throwing
such an error. Fortunately that's not difficult.
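The backend API for this is stack_is_too_deep(), which reports the
condition without throwing, so the malloc-based regex code can clean up
and return a regex error code instead. A minimal sketch of the usage
pattern, with a hypothetical recursive routine:

    #include "postgres.h"
    #include "miscadmin.h"

    /* Hypothetical recursive routine: bail out without elog(ERROR) so the
     * caller can free regex memory and report a regex error code. */
    static bool
    recurse_safely(int depth)
    {
        if (stack_is_too_deep())
            return false;           /* caller turns this into a regex error */
        if (depth <= 0)
            return true;
        return recurse_safely(depth - 1);
    }
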
In cfindloop(), if the initial call to shortest() reports that a
zero-length match is possible at the current search start point, but then
it is unable to construct any actual match to that, it'll just loop around
with the same start point, and thus make no progress. We need to force the
start point to be advanced. This is safe because the loop over "begin"
points has already tried and failed to match starting at "close", so there
is surely no need to try that again.
This bug was introduced in commit e2bd904955,
wherein we allowed continued searching after we'd run out of match
possibilities, but evidently failed to think hard enough about exactly
where we needed to search next.
Because of the way this code works, such a match failure is only possible
in the presence of backrefs --- otherwise, shortest()'s judgment that a
match is possible should always be correct. That probably explains how
come the bug has escaped detection for several years.
The actual fix is a one-liner, but I took the trouble to add/improve some
comments related to the loop logic.
After fixing that, the submitted test case "()*\1" didn't loop anymore.
But it reported failure, though it seems like it ought to match a
zero-length string; both Tcl and Perl think it does. That seems to be from
overenthusiastic optimization on my part when I rewrote the iteration match
logic in commit 173e29aa5d: we can't just
"declare victory" for a zero-length match without bothering to set match
data for capturing parens inside the iterator node.
Per fuzz testing by Greg Stark. The first part of this is a bug in all
supported branches, and the second part is a bug since 9.2 where the
iteration rewrite happened.
Commit 9662143f0c added infrastructure to
allow regular-expression operations to be terminated early in the event
of SIGINT etc. However, fuzz testing by Greg Stark disclosed that there
are still cases where regex compilation could run for a long time without
noticing a cancel request. Specifically, the fixempties() phase never
adds new states, only new arcs, so it doesn't hit the cancel check I'd put
in newstate(). Add one to newarc() as well to cover that.
Some experimentation of my own found that regex execution could also run
for a long time despite a pending cancel. We'd put a high-level cancel
check into cdissect(), but there was none inside the core text-matching
routines longest() and shortest(). Ordinarily those inner loops are very
very fast ... but in the presence of lookahead constraints, not so much.
As a compromise, stick a cancel check into the stateset cache-miss
function, which is enough to guarantee a cancel check at least once per
lookahead constraint test.
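A sketch of that placement, with hypothetical names rather than the
actual regex executor code; the cancel test sits only on the cache-miss
path, so the hot cache-hit path stays cheap:

    #include <stdbool.h>
    #include <stddef.h>

    struct stateset;                                      /* opaque here */
    extern volatile bool cancel_pending;                  /* hypothetical flag */
    extern struct stateset *compute_stateset(int state_no);   /* hypothetical */

    #define NCACHE 64

    struct cache_slot
    {
        bool             filled;
        int              state_no;
        struct stateset *value;
    };

    static struct cache_slot cache[NCACHE];

    static struct stateset *
    lookup_stateset(int state_no)
    {
        struct cache_slot *slot = &cache[state_no % NCACHE];

        if (!slot->filled || slot->state_no != state_no)
        {
            /* cache miss: the slow path, so checking here still means at
             * least one check per lookahead constraint evaluation */
            if (cancel_pending)
                return NULL;         /* caller maps this to a cancel error */
            slot->value = compute_stateset(state_no);
            slot->state_no = state_no;
            slot->filled = true;
        }
        return slot->value;
    }
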
Making this work required more attention to error handling throughout the
regex executor. Henry Spencer had apparently originally intended longest()
and shortest() to be incapable of incurring errors while running, so
neither they nor their subroutines had well-defined error reporting
behaviors. However, that was already broken by the lookahead constraint
feature, since lacon() can surely suffer an out-of-memory failure ---
which, in the code as it stood, might never be reported to the user at all,
but just silently be treated as a non-match of the lookahead constraint.
Normalize all that by inserting explicit error tests as needed. I took the
opportunity to add some more comments to the code, too.
Back-patch to all supported branches, like the previous patch.
It's not terribly hard to devise regular expressions that take large
amounts of time and/or memory to process. Recent testing by Greg Stark has
also shown that machines with small stack limits can be driven to stack
overflow by suitably crafted regexps. While we intend to fix these things
as much as possible, it's probably impossible to eliminate slow-execution
cases altogether. In any case we don't want to treat such things as
security issues. The history of that code should already discourage
prudent DBAs from allowing execution of regexp patterns coming from
possibly-hostile sources, but it seems like a good idea to warn about the
hazard explicitly.
Currently, similar_escape() allows access to enough of the underlying
regexp behavior that the warning has to apply to SIMILAR TO as well.
We might be able to make it safer if we tightened things up to allow only
SQL-mandated capabilities in SIMILAR TO; but that would be a subtly
non-backwards-compatible change, so it requires discussion and probably
could not be back-patched.
Per discussion among pgsql-security list.
This case seems to have been overlooked when unvalidated check constraints
were introduced, in 9.2. The code would attempt to dump such constraints
over again for each child table, even though adding them to the parent
table is sufficient.
In 9.2 and 9.3, also fix contrib/pg_upgrade/Makefile so that the "make
clean" target fully cleans up after a failed test. This evidently got
dealt with at some point in 9.4, but it wasn't back-patched. I ran into
it while testing this fix ...
Per bug #13656 from Ingmar Brouns.
Previously it was documented that the details of the HeapTupleHeaderData
struct could be found in htup.h. This is not correct because the struct
is now defined in htup_details.h.
Back-patch to 9.3 where the definition of HeapTupleHeaderData struct
was moved from htup.h to htup_details.h.
Michael Paquier
If some existing listener is far behind, incoming new listener sessions
would start from that session's read pointer and then need to advance over
many already-committed notification messages, which they have no interest
in. This was expensive in itself and also thrashed the pg_notify SLRU
buffers a lot more than necessary. We can improve matters considerably
in typical scenarios, without much added cost, by starting from the
furthest-ahead read pointer, not the furthest-behind one. We do have to
consider only sessions in our own database when doing this, which requires
an extra field in the data structure, but that's a pretty small cost.
Back-patch to 9.0 where the current LISTEN/NOTIFY logic was introduced.
Matt Newell, slightly adjusted by me
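A standalone sketch of the pointer-selection rule (the data structures
here are illustrative, not async.c's actual ones):

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct ListenerSlot
    {
        bool     in_use;
        uint32_t dboid;      /* database this listener belongs to */
        int64_t  read_pos;   /* how far it has read in the notify queue */
    } ListenerSlot;

    /* Start a new listener at the furthest-ahead read pointer of any
     * listener in the same database; with none, keep the old fallback. */
    static int64_t
    initial_read_pos(const ListenerSlot *slots, int nslots,
                     uint32_t mydb, int64_t fallback_pos)
    {
        int64_t pos = fallback_pos;

        for (int i = 0; i < nslots; i++)
        {
            if (slots[i].in_use && slots[i].dboid == mydb &&
                slots[i].read_pos > pos)
                pos = slots[i].read_pos;
        }
        return pos;
    }
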
We were passing error message texts to croak() verbatim, which turns out
not to work if the text contains non-ASCII characters; Perl mangles their
encoding, as reported in bug #13638 from Michal Leinweber. To fix, convert
the text into a UTF8-encoded SV first.
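A sketch of that conversion (illustrative only, not plperl's exact
helper; croak_sv() needs a reasonably modern Perl, and real code wants a
fallback for older versions):

    #include "postgres.h"
    #include "mb/pg_wchar.h"
    #include <string.h>

    #include "EXTERN.h"
    #include "perl.h"

    static void
    croak_utf8(const char *msg)
    {
        /* recode from the server encoding to UTF-8 ... */
        char *utf8 = (char *) pg_do_encoding_conversion((unsigned char *) msg,
                                                        strlen(msg),
                                                        GetDatabaseEncoding(),
                                                        PG_UTF8);
        /* ... then build an SV, mark it as UTF-8, and raise the exception */
        SV   *sv = newSVpvn(utf8, strlen(utf8));

        SvUTF8_on(sv);
        croak_sv(sv_2mortal(sv));
    }
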
It's hard to test this without risking failures in different database
encodings; but we can follow the lead of plpython, which is already
assuming that no-break space (U+00A0) has an equivalent in all encodings
we care about running the regression tests in (cf commit 2dfa15de5).
Back-patch to 9.1. The code is quite different in 9.0, and anyway it seems
too risky to put something like this into 9.0's final minor release.
Alex Hunsaker, with suggestions from Tim Bunce and Tom Lane
This replaces ill-fated commit 5ddc72887a,
which was reverted because it broke active uses of FK cache entries. In
this patch, we still do nothing more to invalidatable cache entries than
mark them as needing revalidation, so we won't break active uses. To keep
down the overhead of InvalidateConstraintCacheCallBack(), keep a list of
just the currently-valid cache entries. (The entries are large enough that
some added space for list links doesn't seem like a big problem.) This
would still be O(N^2) when there are many valid entries, though, so when
the list gets too long, just force the "sinval reset" behavior to remove
everything from the list. I set the threshold at 1000 entries, somewhat
arbitrarily. Possibly that could be fine-tuned later. Another item for
future study is whether it's worth adding reference counting so that we
could safely remove invalidated entries. As-is, problem cases are likely
to end up with large and mostly invalid FK caches.
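A standalone sketch of that strategy (structures and details are
illustrative, not ri_triggers.c verbatim):

    #include <stddef.h>
    #include <stdint.h>

    #define VALID_LIST_THRESHOLD 1000

    typedef struct FKCacheEntry
    {
        uint32_t hashvalue;                  /* identifies the constraint */
        int      valid;                      /* 0 = needs revalidation */
        struct FKCacheEntry *next_valid;     /* chains currently-valid entries */
    } FKCacheEntry;

    static FKCacheEntry *valid_list = NULL;
    static int           valid_count = 0;

    /* "sinval reset" behavior: mark everything invalid in one sweep
     * (the real code also empties the backing hash table) */
    static void
    reset_whole_cache(void)
    {
        for (FKCacheEntry *e = valid_list; e != NULL; e = e->next_valid)
            e->valid = 0;
        valid_list = NULL;
        valid_count = 0;
    }

    /* Called once per invalidation message; hashvalue == 0 means "all". */
    static void
    invalidate_callback(uint32_t hashvalue)
    {
        FKCacheEntry **prev = &valid_list;

        if (valid_count > VALID_LIST_THRESHOLD)
        {
            /* Walking a long list for every message would be O(N^2) during
             * a big restore, so give up and flush everything instead. */
            reset_whole_cache();
            return;
        }

        for (FKCacheEntry *e = valid_list; e != NULL; e = e->next_valid)
        {
            if (hashvalue == 0 || e->hashvalue == hashvalue)
            {
                e->valid = 0;            /* mark only; active callers keep using it */
                *prev = e->next_valid;   /* unlink from the valid list */
                valid_count--;
            }
            else
                prev = &e->next_valid;
        }
    }
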
Like the previous attempt, backpatch to 9.3.
Jan Wieck and Tom Lane
(Third time's the charm, I hope.)
Additional testing disclosed that this code could mangle already-localized
output from the "money" datatype. We can't very easily skip applying it
to "money" values, because the logic is tied to column right-justification
and people expect "money" output to be right-justified. Short of
decoupling that, we can fix it in what should be a safe enough way by
testing to make sure the string doesn't contain any characters that would
not be expected in plain numeric output.
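The guard can be as simple as this standalone sketch (illustrative, not
psql's actual code):

    #include <stdbool.h>
    #include <string.h>

    /* Apply locale-specific digit grouping only when the string looks like
     * plain numeric output: digits, sign, decimal point, exponent marker. */
    static bool
    looks_like_plain_number(const char *s)
    {
        return s[strspn(s, "0123456789+-.eE")] == '\0';
    }
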
On closer inspection, those seemingly redundant atoi() calls were not so
much inefficient as just plain wrong: the author of this code either had
not read, or had not understood, the POSIX specification for localeconv().
The grouping field is *not* a textual digit string but separate integers
encoded as chars.
We'll follow the existing code as well as the backend's cash.c in only
honoring the first group width, but let's at least honor it correctly.
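For reference, honoring the first group width correctly looks roughly
like this standalone sketch (not psql's actual code):

    #include <limits.h>
    #include <locale.h>

    /* lc->grouping holds small integers, not ASCII digits; an empty string
     * means no grouping, and CHAR_MAX means no further grouping. */
    static int
    first_group_width(void)
    {
        struct lconv *lc = localeconv();
        char          g = lc->grouping[0];

        if (g <= 0 || g == CHAR_MAX)
            return 0;
        return (int) g;     /* e.g. 3 for "1,234,567" in most locales */
    }
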
This doesn't actually result in any behavioral change in any of the
locales I have installed on my Linux box, which may explain why nobody's
complained; grouping width 3 is close enough to universal that it's barely
worth considering other cases. Still, wrong is wrong, so back-patch.
This code did the wrong thing entirely for numbers with an exponent
but no decimal point (e.g., '1e6'), as reported by Jeff Janes in
bug #13636. More generally, it made lots of unverified assumptions
about what the input string could possibly look like. Rearrange so
that it only fools with leading digits that it's directly verified
are there, and an immediately adjacent decimal point. While at it,
get rid of some useless inefficiencies, like converting the grouping
count string to integer over and over (and over).
This has been broken for a long time, so back-patch to all supported
branches.
If we have a local Var of say varchar type with default collation, and
we apply a RelabelType to convert that to text with default collation, we
don't want to consider that as creating an FDW_COLLATE_UNSAFE situation.
It should be okay to compare that to a remote Var, so long as the remote
Var determines the comparison collation. (When we actually ship such an
expression to the remote side, the local Var would become a Param with
default collation, meaning the remote Var would in fact control the
comparison collation, because non-default implicit collation overrides
default implicit collation in parse_collate.c.) To fix, be more precise
about what FDW_COLLATE_NONE means: it applies either to a noncollatable
data type or to a collatable type with default collation, if that collation
can't be traced to a remote Var. (When it can, FDW_COLLATE_SAFE is
appropriate.) We were essentially using that interpretation already at
the Var/Const/Param level, but we weren't bubbling it up properly.
An alternative fix would be to introduce a separate FDW_COLLATE_DEFAULT
value to describe the second situation, but that would add more code
without changing the actual behavior, so it didn't seem worthwhile.
Also, since we're clarifying the rule to be that we care about whether
operator/function input collations match, there seems no need to fail
immediately upon seeing a Const/Param/non-foreign-Var with nondefault
collation. We only have to reject if it appears in a collation-sensitive
context (for example, "var IS NOT NULL" is perfectly safe from a collation
standpoint, whatever collation the var has). So just set the state to
UNSAFE rather than failing immediately.
Per report from Jeevan Chalke. This essentially corrects some sloppy
thinking in commit ed3ddf918b, so back-patch
to 9.3 where that logic appeared.
The old minimum values are rather large, making it time-consuming to
test related behaviour. Additionally, the current limits, especially
for multixacts, can be problematic in space-constrained systems:
10000000 multixacts can contain a lot of members.
Since there's no good reason for the current limits, lower them a good
bit. Setting them to 0 would be a bad idea, triggering endless vacuums,
so still retain a limit.
While at it, fix autovacuum_multixact_freeze_max_age to refer to
multixact.c instead of varsup.c.
Reviewed-By: Robert Haas
Discussion: CA+TgmoYmQPHcrc3GSs7vwvrbTkbcGD9Gik=OztbDGGrovkkEzQ@mail.gmail.com
Backpatch: back to 9.0 (in parts)
mul_var() postpones propagating carries until it risks overflow in its
internal digit array. However, the logic failed to account for the
possibility of overflow in the carry propagation step, allowing wrong
results to be generated in corner cases. We must slightly reduce the
when-to-propagate-carries threshold to avoid that.
Discovered and fixed by Dean Rasheed, with small adjustments by me.
This has been wrong since commit d72f6c7503,
so back-patch to all supported branches.
RemoveLocalLock() must consider the possibility that LockAcquireExtended()
failed to palloc the initial space for a locallock's lockOwners array.
I had evidently meant to cope with this hazard when the code was originally
written (commit 1785acebf2), but missed that
the pfree needed to be protected with an if-test. Just to make sure things
are left in a clean state, reset numLockOwners as well.
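The fix amounts to something like this fragment (illustrative, not
lock.c verbatim):

    #include "postgres.h"
    #include "storage/lock.h"

    static void
    remove_locallock_cleanup(LOCALLOCK *locallock)
    {
        /* the array may never have been allocated if the palloc failed */
        if (locallock->lockOwners != NULL)
            pfree(locallock->lockOwners);
        locallock->lockOwners = NULL;
        locallock->numLockOwners = 0;
    }
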
Per low-memory testing by Andreas Seltenreich. Back-patch to all supported
branches.
After an internal failure in shortest() or longest() while pinning down the
exact location of a match, find() forgot to free the DFA structure before
returning. This is pretty unlikely to occur, since we just successfully
ran the "search" variant of the DFA; but it could happen, and it would
result in a session-lifespan memory leak since this code uses malloc()
directly. Problem seems to have been aboriginal in Spencer's library,
so back-patch all the way.
In passing, correct a thinko in a comment I added awhile back about the
meaning of the "ntree" field.
I happened across these issues while comparing our code to Tcl's version
of the library.
This setting contains extra configuration for the temp instance, as used
in pg_regress' --temp-config flag.
Backpatch to 9.2 where test.sh was introduced.
The docs claimed that \uhhhh would be interpreted as a Unicode value
regardless of the database encoding, but it's never been implemented
that way: \uhhhh and \xhhhh actually mean exactly the same thing, namely
the character that pg_mb2wchar translates to 0xhhhh. Moreover we were
falsely dismissive of the usefulness of Unicode code points above FFFF.
Fix that.
It's been like this for ages, so back-patch to all supported branches.
Commit 5ddc72887a does not actually work
because it will happily blow away ri_constraint_cache entries that are
in active use in outer call levels. In any case, it's a very ugly,
brute-force solution to the problem of limiting the cache size.
Revert until it can be redesigned.
Modify pg_dump to restore postgres/template1 databases to non-default
tablespaces by switching out of the database to be moved, then switching
back.
Also, to handle potential cases where the old/new tablespaces might not
match, fix pg_upgrade to process new/old tablespaces separately in all
cases.
Report by Marti Raudsepp
Patch by Marti Raudsepp, me
Backpatch through 9.0
Commit 45ba424f improved foreign key lookups during bulk updates
when the FK value does not change. When restoring a schema dump
from a database with many (say 100,000) foreign keys, this cache
would grow very big and every ALTER TABLE command triggered
InvalidateConstraintCacheCallBack(), which performs a sequential hash
table scan. This could cause a severe performance regression in
restoring a schema dump (including during pg_upgrade).
The patch uses a heuristic method of detecting when the hash table
should be destroyed and recreated.
InvalidateConstraintCacheCallBack() adds the current size of the
hash table to a counter. When that sum reaches 1,000,000, the hash
table is flushed. This fixes the regression without noticeable
harm to the bulk update use case.
Jan Wieck
Backpatch to 9.3 where the performance regression was introduced.
The "typo" alleged in commit 1e460d4bd was actually a comment that was
correct when written, but I missed updating it in commit b5282aa89.
Use a slightly less specific (and hopefully more future-proof) description
of what is collected. Back-patch to 9.2 where that commit appeared, and
revert the comment to its then-entirely-correct state before that.
The list-wrangling here was done wrong, allowing the same state to get
put into the list twice. The following loop then would clone it twice.
The second clone would wind up with no inarcs, so that there was no
observable misbehavior AFAICT, but a useless state in the finished NFA
isn't an especially good thing.