The documentation for the autovacuum_multixact_freeze_max_age and
autovacuum_freeze_max_age relation-level parameters contained:
"Note that while you can set autovacuum_multixact_freeze_max_age very
small, or even zero, this is usually unwise since it will force frequent
vacuuming."
which hasn't been true since these options were made relation options,
instead of residing in the pg_autovacuum table (834a6da4f7).
Remove the outdated sentence. Even the lowered limits from 2596d70 are
high enough that this doesn't warrant calling out the risk in the CREATE
TABLE docs.
Per discussion with Tom Lane and Alvaro Herrera
Discussion: 26377.1443105453@sss.pgh.pa.us
Backpatch: 9.0- (in parts)
The tsquery, ltxtquery and query_int data types have a common ancestor.
Having acquired check_stack_depth() calls independently, each was
missing at least one call. Back-patch to 9.0 (all supported versions).
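The guard being added is the standard backend pattern; here is a
minimal sketch, with the parser shape and helper names invented for
illustration (check_stack_depth(), from miscadmin.h, is the real API):

    #include "postgres.h"
    #include "miscadmin.h"      /* check_stack_depth() */

    /*
     * Illustrative recursive descent over a nested query expression;
     * ParserCtx and the helpers are hypothetical, not the tsquery code.
     */
    static Node *
    parse_subexpression(ParserCtx *ctx)
    {
        /*
         * Raise a clean "stack depth limit exceeded" error instead of
         * overflowing the stack on deeply nested input.
         */
        check_stack_depth();

        if (next_token_is_open_paren(ctx))
            return parse_subexpression(ctx);    /* guarded recursion */
        return parse_leaf(ctx);
    }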
A range type can name another range type as its subtype, and a record
type can bear a column of another record type. Consequently, functions
like range_cmp() and record_recv() are recursive. Functions at risk
include operator family members and referents of pg_type regproc
columns. Treat as recursive any such function that looks up and calls
the same-purpose function for a record column type or the range subtype.
Back-patch to 9.0 (all supported versions).
An array type's element type is never itself an array type, so array
functions are unaffected. Recursion depth proportional to array
dimensionality, found in array_dim_to_jsonb(), is fine thanks to MAXDIM.
Certain short salts crashed the backend or disclosed a few bytes of
backend memory. For existing salt-induced error conditions, emit a
message saying as much. Back-patch to 9.0 (all supported versions).
Josh Kupershmidt
Security: CVE-2015-5288
In 020235a575 I lowered the autovacuum_*freeze_max_age minimums to
allow for easier testing of wraparounds. I did not touch the
corresponding per-table limits. While those don't matter for the purpose
of wraparound, it seems more consistent to lower them as well.
It's noteworthy that the previous reloption lower limit for
autovacuum_multixact_freeze_max_age was too high by one order of
magnitude, even before 020235a575.
Discussion: 26377.1443105453@sss.pgh.pa.us
Backpatch: back to 9.0 (in parts), like the prior patch
On reflection, the submitted patch didn't really work to prevent the
request size from exceeding MaxAllocSize, because we'd happily round
nbuckets up to the next power of 2 after we'd limited it to
max_pointers. The simplest way to enforce the limit correctly is to
round max_pointers down to a power of 2 when it isn't one already.
(Note that the constraint to INT_MAX / 2, if it were doing anything useful
at all, is properly applied after that.)
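A standalone sketch of the fix's arithmetic (names are stand-ins for
the nodeHash.c originals):

    #include <limits.h>
    #include <stddef.h>

    /*
     * Round n (n >= 1) down to the nearest power of 2.  Afterward,
     * "round nbuckets up to the next power of 2" can no longer push
     * the allocation back above the limit just enforced.
     */
    static size_t
    round_down_pow2(size_t n)
    {
        size_t      result = 1;

        while (result <= n / 2)
            result *= 2;
        return result;
    }

    /*
     * Usage, schematically:
     *   max_pointers = MaxAllocSize / sizeof(HashJoinTuple);
     *   max_pointers = round_down_pow2(max_pointers);
     *   if (max_pointers > INT_MAX / 2)   -- applied after, as noted
     *       max_pointers = INT_MAX / 2;
     */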
Limit the size of the hashtable pointer array to not more than
MaxAllocSize. We've seen reports of failures due to this in HEAD/9.5,
and it seems possible in older branches as well. The change in
NTUP_PER_BUCKET in 9.5 may have made the problem more likely, but
surely it didn't introduce it.
Tomas Vondra, slightly modified by me
DST law changes in Cayman Islands, Fiji, Moldova, Morocco, Norfolk Island,
North Korea, Turkey, Uruguay. New zone America/Fort_Nelson for Canadian
Northern Rockies.
Some of the functions in regex compilation and execution recurse, and
therefore could in principle be driven to stack overflow. The Tcl crew
has seen this happen in practice in duptraverse(), though their fix was
to put in a hard-wired limit on the number of recursive levels, which is
not too appetizing --- fortunately, we have enough infrastructure to check
the actually available stack. Greg Stark has also seen it in other places
while fuzz testing on a machine with limited stack space. Let's put guards
in to prevent crashes in all these places.
Since the regex code would leak memory if we simply threw elog(ERROR),
we have to introduce an API that checks for stack depth without throwing
such an error. Fortunately that's not difficult.
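The non-throwing check is stack_is_too_deep(), a sibling of
check_stack_depth(); a simplified sketch of its use inside a recursive
compile step (function name and error-code choice are illustrative):

    #include "miscadmin.h"      /* stack_is_too_deep() */

    static void
    parse_node(struct vars *v)
    {
        if (stack_is_too_deep())
        {
            ERR(REG_ETOOBIG);   /* record the error in v->err ... */
            return;             /* ... and unwind; caller frees memory */
        }
        /* ... recursive descent continues ... */
    }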
In cfindloop(), if the initial call to shortest() reports that a
zero-length match is possible at the current search start point, but then
it is unable to construct any actual match to that, it'll just loop around
with the same start point, and thus make no progress. We need to force the
start point to be advanced. This is safe because the loop over "begin"
points has already tried and failed to match starting at "close", so there
is surely no need to try that again.
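Schematically, the one-liner adds the missing progress guarantee
(variable names are illustrative, not the actual cfindloop() ones):

    /*
     * shortest() said a zero-length match could start here, but no
     * actual match could be built from this start point ...
     */
    if (!matched && match_end == search_start)
        search_start++;     /* ... so force progress, or loop forever */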
This bug was introduced in commit e2bd904955,
wherein we allowed continued searching after we'd run out of match
possibilities, but evidently failed to think hard enough about exactly
where we needed to search next.
Because of the way this code works, such a match failure is only possible
in the presence of backrefs --- otherwise, shortest()'s judgment that a
match is possible should always be correct. That probably explains how
come the bug has escaped detection for several years.
The actual fix is a one-liner, but I took the trouble to add/improve some
comments related to the loop logic.
After fixing that, the submitted test case "()*\1" didn't loop anymore.
But it reported failure, though it seems like it ought to match a
zero-length string; both Tcl and Perl think it does. That seems to be from
overenthusiastic optimization on my part when I rewrote the iteration match
logic in commit 173e29aa5d: we can't just
"declare victory" for a zero-length match without bothering to set match
data for capturing parens inside the iterator node.
Per fuzz testing by Greg Stark. The first part of this is a bug in all
supported branches, and the second part is a bug since 9.2 where the
iteration rewrite happened.
Commit 9662143f0c added infrastructure to
allow regular-expression operations to be terminated early in the event
of SIGINT etc. However, fuzz testing by Greg Stark disclosed that there
are still cases where regex compilation could run for a long time without
noticing a cancel request. Specifically, the fixempties() phase never
adds new states, only new arcs, so it doesn't hit the cancel check I'd put
in newstate(). Add one to newarc() as well to cover that.
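A simplified sketch of the added check; CANCEL_REQUESTED() and
REG_CANCEL come from the 9662143f0c infrastructure:

    /*
     * Phases like fixempties() create many arcs while adding no
     * states, so the check in newstate() alone can go unvisited for
     * a very long time.
     */
    static void
    newarc(struct nfa *nfa, int t, color co,
           struct state *from, struct state *to)
    {
        if (CANCEL_REQUESTED(nfa->v->re))
        {
            NERR(REG_CANCEL);   /* flag cancellation, then unwind */
            return;
        }
        /* ... allocate and link the new arc ... */
    }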
Some experimentation of my own found that regex execution could also run
for a long time despite a pending cancel. We'd put a high-level cancel
check into cdissect(), but there was none inside the core text-matching
routines longest() and shortest(). Ordinarily those inner loops are very
very fast ... but in the presence of lookahead constraints, not so much.
As a compromise, stick a cancel check into the stateset cache-miss
function, which is enough to guarantee a cancel check at least once per
lookahead constraint test.
Making this work required more attention to error handling throughout the
regex executor. Henry Spencer had apparently originally intended longest()
and shortest() to be incapable of incurring errors while running, so
neither they nor their subroutines had well-defined error reporting
behaviors. However, that was already broken by the lookahead constraint
feature, since lacon() can surely suffer an out-of-memory failure ---
which, in the code as it stood, might never be reported to the user at all,
but just silently be treated as a non-match of the lookahead constraint.
Normalize all that by inserting explicit error tests as needed. I took the
opportunity to add some more comments to the code, too.
Back-patch to all supported branches, like the previous patch.
It's not terribly hard to devise regular expressions that take large
amounts of time and/or memory to process. Recent testing by Greg Stark has
also shown that machines with small stack limits can be driven to stack
overflow by suitably crafted regexps. While we intend to fix these things
as much as possible, it's probably impossible to eliminate slow-execution
cases altogether. In any case we don't want to treat such things as
security issues. The history of that code should already discourage
prudent DBAs from allowing execution of regexp patterns coming from
possibly-hostile sources, but it seems like a good idea to warn about the
hazard explicitly.
Currently, similar_escape() allows access to enough of the underlying
regexp behavior that the warning has to apply to SIMILAR TO as well.
We might be able to make it safer if we tightened things up to allow only
SQL-mandated capabilities in SIMILAR TO; but that would be a subtly
non-backwards-compatible change, so it requires discussion and probably
could not be back-patched.
Per discussion among pgsql-security list.
If some existing listener is far behind, incoming new listener sessions
would start from that session's read pointer and then need to advance over
many already-committed notification messages, which they have no interest
in. This was expensive in itself and also thrashed the pg_notify SLRU
buffers a lot more than necessary. We can improve matters considerably
in typical scenarios, without much added cost, by starting from the
furthest-ahead read pointer, not the furthest-behind one. We do have to
consider only sessions in our own database when doing this, which requires
an extra field in the data structure, but that's a pretty small cost.
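In sketch form (async.c flavor; structure and helper names are
simplified, not the committed ones):

    /*
     * Choose the new listener's starting read position: the
     * furthest-ahead read pointer among listeners in our own
     * database, falling back to the global tail if there are none.
     */
    QueuePosition start = QUEUE_TAIL;
    int         i;

    for (i = 1; i <= MaxBackends; i++)
    {
        QueueBackendStatus *be = &asyncQueueControl->backend[i];

        if (be->pid == InvalidPid)
            continue;           /* slot not in use */
        if (be->dboid != MyDatabaseId)
            continue;           /* only our own database counts */
        start = QUEUE_POS_MAX(start, be->pos);
    }
    QUEUE_BACKEND_POS(MyBackendId) = start;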
Back-patch to 9.0 where the current LISTEN/NOTIFY logic was introduced.
Matt Newell, slightly adjusted by me
(Third time's the charm, I hope.)
Additional testing disclosed that this code could mangle already-localized
output from the "money" datatype. We can't very easily skip applying it
to "money" values, because the logic is tied to column right-justification
and people expect "money" output to be right-justified. Short of
decoupling that, we can fix it in what should be a safe enough way by
testing to make sure the string doesn't contain any characters that would
not be expected in plain numeric output.
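A standalone sketch of the guard (the real test in psql's print.c
differs in detail):

    #include <ctype.h>
    #include <stdbool.h>

    /*
     * Locale-aware digit grouping is applied only to strings that
     * look like plain numeric output; localized "money" output
     * (currency symbols, locale-specific separators) fails this test
     * and is passed through untouched.
     */
    static bool
    looks_like_plain_number(const char *s)
    {
        for (; *s; s++)
        {
            if (isdigit((unsigned char) *s))
                continue;
            if (*s == '.' || *s == '-' || *s == '+' ||
                *s == 'e' || *s == 'E')
                continue;       /* sign, decimal point, exponent */
            return false;
        }
        return true;
    }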
On closer inspection, those seemingly redundant atoi() calls were not so
much inefficient as just plain wrong: the author of this code either had
not read, or had not understood, the POSIX specification for localeconv().
The grouping field is *not* a textual digit string but separate integers
encoded as chars.
We'll follow the existing code as well as the backend's cash.c in only
honoring the first group width, but let's at least honor it correctly.
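What localeconv() actually hands back, and the corrected reading
(standalone sketch):

    #include <limits.h>
    #include <locale.h>

    /*
     * lconv->grouping is a sequence of small integers stored one per
     * char (e.g. "\003" for groups of three, "\003\002" for 3 then 2
     * as in some Indian locales); it is NOT a digit string, so atoi()
     * on it is simply wrong.  Like the backend's cash.c, honor only
     * the first entry.
     */
    static int
    first_group_width(void)
    {
        struct lconv *lc = localeconv();
        char        w = lc->grouping[0];

        if (w <= 0 || w == CHAR_MAX)    /* CHAR_MAX: no grouping */
            return 0;
        return (int) w;
    }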
This doesn't actually result in any behavioral change in any of the
locales I have installed on my Linux box, which may explain why nobody's
complained; grouping width 3 is close enough to universal that it's barely
worth considering other cases. Still, wrong is wrong, so back-patch.
This code did the wrong thing entirely for numbers with an exponent
but no decimal point (e.g., '1e6'), as reported by Jeff Janes in
bug #13636. More generally, it made lots of unverified assumptions
about what the input string could possibly look like. Rearrange so
that it only fools with leading digits that it's directly verified
are there, and an immediately adjacent decimal point. While at it,
get rid of some useless inefficiencies, like converting the grouping
count string to integer over and over (and over).
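A sketch of the verified-digits approach (standalone; the real
print.c code edits its output buffer in place):

    #include <ctype.h>

    /*
     * Count only the leading digits we can directly see: an optional
     * sign, then digits, stopping at anything else.  '1e6' thus has
     * one leading digit and gets no thousands separators, instead of
     * being mangled.
     */
    static int
    count_leading_digits(const char *s)
    {
        int     n = 0;

        if (*s == '-' || *s == '+')
            s++;
        while (isdigit((unsigned char) *s))
        {
            n++;
            s++;
        }
        return n;
    }

Separators are then inserted only among those digits, and the decimal
point is localized only if the character immediately after them is '.'.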
This has been broken for a long time, so back-patch to all supported
branches.
The old minimum values are rather large, making it time-consuming to
test related behaviour. Additionally the current limits, especially for
multixacts, can be problematic in space-constrained systems. 10000000
multixacts can contain a lot of members.
Since there's no good reason for the current limits, lower them a good
bit. Setting them to 0 would be a bad idea, triggering endless vacuums,
so still retain a limit.
While at it, fix autovacuum_multixact_freeze_max_age to refer to
multixact.c instead of varsup.c.
Reviewed-By: Robert Haas
Discussion: CA+TgmoYmQPHcrc3GSs7vwvrbTkbcGD9Gik=OztbDGGrovkkEzQ@mail.gmail.com
Backpatch: 9.0 (in parts)
Symbols S_IRWXG and S_IRWXO became available to Windows builds in commit
1319002e2e, so PostgreSQL 9.0 cannot use
them in platform-independent code. Rewrite commit
24aed2124a to not change Windows builds.
The new umask() call had no effect on Windows, anyway. Per buildfarm.
mul_var() postpones propagating carries until it risks overflow in its
internal digit array. However, the logic failed to account for the
possibility of overflow in the carry propagation step, allowing wrong
results to be generated in corner cases. We must slightly reduce the
when-to-propagate-carries threshold to avoid that.
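The corrected bound, with the reasoning spelled out (NBASE as in
numeric.c; this shows the shape of the fix, not the verbatim code):

    #include <limits.h>

    #define NBASE 10000

    /*
     * mul_var() tracks maxdig such that no dig[] entry can exceed
     * maxdig * (NBASE - 1).  The deferred carry pass then computes
     * newdig = dig[i] + carry, where carry can reach INT_MAX / NBASE,
     * so we need maxdig * (NBASE - 1) <= INT_MAX - INT_MAX / NBASE.
     * Carries must therefore be propagated once maxdig exceeds
     */
    static const int carry_threshold =
        (INT_MAX - INT_MAX / NBASE) / (NBASE - 1);
    /* rather than the naive INT_MAX / (NBASE - 1). */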
Discovered and fixed by Dean Rasheed, with small adjustments by me.
This has been wrong since commit d72f6c7503,
so back-patch to all supported branches.
RemoveLocalLock() must consider the possibility that LockAcquireExtended()
failed to palloc the initial space for a locallock's lockOwners array.
I had evidently meant to cope with this hazard when the code was originally
written (commit 1785acebf2), but missed that
the pfree needed to be protected with an if-test. Just to make sure things
are left in a clean state, reset numLockOwners as well.
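The repaired cleanup, roughly (lock.c field names):

    /*
     * RemoveLocalLock() must tolerate a lockOwners array that was
     * never allocated because the palloc in LockAcquireExtended()
     * failed.
     */
    locallock->numLockOwners = 0;
    if (locallock->lockOwners != NULL)
        pfree(locallock->lockOwners);
    locallock->lockOwners = NULL;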
Per low-memory testing by Andreas Seltenreich. Back-patch to all supported
branches.
After an internal failure in shortest() or longest() while pinning down the
exact location of a match, find() forgot to free the DFA structure before
returning. This is pretty unlikely to occur, since we just successfully
ran the "search" variant of the DFA; but it could happen, and it would
result in a session-lifespan memory leak since this code uses malloc()
directly. Problem seems to have been aboriginal in Spencer's library,
so back-patch all the way.
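A sketch of the repaired exit path (regexec.c shapes, simplified):

    d = newdfa(v, cnfa, &v->g->cmap, &sml);
    if (ISERR())
        return v->err;
    end = longest(v, d, begin, stop, &hitend);
    if (ISERR())
    {
        freedfa(d);         /* previously leaked; this is malloc'd */
        return v->err;
    }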
In passing, correct a thinko in a comment I added awhile back about the
meaning of the "ntree" field.
I happened across these issues while comparing our code to Tcl's version
of the library.
The docs claimed that \uhhhh would be interpreted as a Unicode value
regardless of the database encoding, but it's never been implemented
that way: \uhhhh and \xhhhh actually mean exactly the same thing, namely
the character that pg_mb2wchar translates to 0xhhhh. Moreover we were
falsely dismissive of the usefulness of Unicode code points above FFFF.
Fix that.
It's been like this for ages, so back-patch to all supported branches.
In branches before 9.3, commit 8703059c6 caused join_is_legal()'s
unique_ified variable to become unused, since its only remaining
use is for LATERAL-related tests which don't exist pre-9.3.
My compiler didn't complain about that, but Peter's does.
Modify pg_dump to restore postgres/template1 databases to non-default
tablespaces by switching out of the database to be moved, then switching
back.
Also, to fix potential cases where the old/new tablespaces might not
match, fix pg_upgrade to process new/old tablespaces separately in all
cases.
Report by Marti Raudsepp
Patch by Marti Raudsepp, me
Backpatch through 9.0
The list-wrangling here was done wrong, allowing the same state to get
put into the list twice. The following loop then would clone it twice.
The second clone would wind up with no inarcs, so that there was no
observable misbehavior AFAICT, but a useless state in the finished NFA
isn't an especially good thing.
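Schematically, the repair is to make insertion idempotent
(on_list/mark_on_list/push are hypothetical helpers standing in for
the pointer-threading the real code does):

    /*
     * Push a state at most once; a membership marker keeps the
     * following clone loop from processing the same state twice.
     */
    if (!on_list(s))
    {
        mark_on_list(s);
        push(worklist, s);
    }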
This was forgotten in 8a3631f (commit that originally added the parameter)
and 0ca9907 (commit that added the documentation later that year).
Back-patch to all supported versions.
We were missing a few return checks on OpenSSL calls. Should be pretty
harmless, since we haven't seen any user reports about problems, and
this is not a high-traffic module anyway; still, a bug is a bug, so
backpatch this all the way back to 9.0.
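The missing checks are of this general form (sketch; BIO_new() is one
of the calls whose NULL return was previously ignored):

    BIO *membio = BIO_new(BIO_s_mem());

    if (membio == NULL)
        ereport(ERROR,
                (errcode(ERRCODE_OUT_OF_MEMORY),
                 errmsg("could not create OpenSSL BIO structure")));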
Author: Michael Paquier, while reviewing another sslinfo patch
The cleanup process can be invoked by an ordinary insert or update and
can take a lot of time. Add vacuum_delay_point() to make this process
interruptible. When invoked under VACUUM, the call also throttles the
process to decrease system load; when invoked from an insert or update
it does not throttle, so those operations' latency is not hurt.
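Sketchwise, the call sits in the cleanup's per-page loop (simplified):

    for (blkno = 0; blkno < npages; blkno++)
    {
        /*
         * Interruptible here; under cost-based VACUUM this also
         * sleeps when the cost budget is exhausted, while for a
         * cleanup triggered by insert/update it only checks for
         * interrupts and returns at once.
         */
        vacuum_delay_point();

        /* ... process one page ... */
    }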
Backpatch for all supported branches.
Jeff Janes <jeff.janes@gmail.com>
Move DTK_ISODOW, DTK_DOW and DTK_DOY to be type UNITS rather than
RESERV. RESERV is meant for tokens like "now" and having them in that
category throws errors like these when used as an input date:
stark=# SELECT 'doy'::timestamptz;
ERROR: unexpected dtype 33 while parsing timestamptz "doy"
LINE 1: SELECT 'doy'::timestamptz;
^
stark=# SELECT 'dow'::timestamptz;
ERROR: unexpected dtype 32 while parsing timestamptz "dow"
LINE 1: SELECT 'dow'::timestamptz;
^
Found by LLVM's Libfuzzer
Formerly, we treated only portals created in the current subtransaction as
having failed during subtransaction abort. However, if the error occurred
while running a portal created in an outer subtransaction (ie, a cursor
declared before the last savepoint), that has to be considered broken too.
To allow reliable detection of which ones those are, add a bookkeeping
field to struct Portal that tracks the innermost subtransaction in which
each portal has actually been executed. (Without this, we'd end up
failing portals containing functions that had called the subtransaction,
thereby breaking plpgsql exception blocks completely.)
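The bookkeeping amounts to one extra field alongside the existing
creation stamp (Portal struct flavor; activeSubid is my reading of the
description, not necessarily the committed name):

    SubTransactionId createSubid;   /* subxact the portal was created in */
    SubTransactionId activeSubid;   /* innermost subxact it has run in */

At subtransaction abort, a portal is now failed if either field
matches the aborting subtransaction, not just createSubid as before.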
In addition, when we fail an outer-subtransaction Portal, transfer its
resources into the subtransaction's resource owner, so that they're
released early in cleanup of the subxact. This fixes a problem reported by
Jim Nasby in which a function executed in an outer-subtransaction cursor
could cause an Assert failure or crash by referencing a relation created
within the inner subtransaction.
The proximate cause of the Assert failure is that AtEOSubXact_RelationCache
assumed it could blow away a relcache entry without first checking that the
entry had zero refcount. That was a bad idea on its own terms, so add such
a check there, and to the similar coding in AtEOXact_RelationCache. This
provides an independent safety measure in case there are still ways to
provoke the situation despite the Portal-level changes.
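The added guard, roughly (relcache.c flavor;
RelationHasReferenceCountZero() is the existing refcount test):

    if (relation->rd_createSubid == mySubid)
    {
        if (RelationHasReferenceCountZero(relation))
            RelationClearRelation(relation, false);  /* safe to drop */
        else
        {
            /* still referenced: warn, but don't destroy the entry */
            elog(WARNING,
                 "relcache entry for \"%s\" has nonzero refcount at subxact abort",
                 RelationGetRelationName(relation));
        }
    }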
This has been broken since subtransactions were invented, so back-patch
to all supported branches.
Tom Lane and Michael Paquier
On recent AIX it's necessary to configure gcc to use the native assembler
(because the GNU assembler hasn't been updated to handle AIX 6+). This
caused PG builds to fail with assembler syntax errors, because we'd try
to compile s_lock.h's gcc asm fragment for PPC, and that assembly code
relied on GNU-style local labels. We can't substitute normal labels
because it would fail in any file containing more than one inlined use of
tas(). Fortunately, that code is stable enough, and the PPC ISA is simple
enough, that it doesn't seem like too much of a maintenance burden to just
hand-code the branch offsets, removing the need for any labels.
Note that the AIX assembler only accepts "$" for the location counter
pseudo-symbol. The usual GNU convention is "."; but it appears that all
versions of gas for PPC also accept "$", so in theory this patch will not
break any other PPC platforms.
This has been reported by a few people, but Steve Underwood gets the credit
for being the first to pursue the problem far enough to understand why it
was failing. Thanks also to Noah Misch for additional testing.
The default argument, if given, has to be of exactly the same datatype
as the first argument; but this was not stated in so many words, and
the error message you get about it might not lead your thought in the
right direction. Per bug #13587 from Robert McGehee.
A quick scan says that these are the only two built-in functions with two
anyelement arguments and no other polymorphic arguments. There are plenty
of cases of, eg, anyarray and anyelement, but those seem less likely to
confuse. For instance this doesn't seem terribly hard to figure out:
"function array_remove(integer[], numeric) does not exist". So I've
contented myself with fixing these two cases.