There is a rare race condition when a transaction that inserted a tuple
aborts while vacuum is processing the page containing that tuple. Vacuum
prunes the page first, which normally removes any dead tuples, but if the
inserting transaction aborts right after pruning, the loop that follows
pruning sees a dead tuple and removes it instead. That in itself is fine,
but if the table has no indexes and the page becomes completely empty
after removing the dead tuple (or tuples), the page is immediately marked
as all-visible. The sanity check in vacuum then throws a warning, because
it thinks the page contains dead tuples and was nevertheless marked as
all-visible, even though vacuum just removed those dead tuples and the
page doesn't actually contain any.
Spotted this while reading the code. It's difficult to hit the race
condition otherwise, but it can be reproduced by putting a breakpoint
after the heap_page_prune() call.
Backpatch all the way to 8.4, where this code first appeared.
B-tree operators are not allowed to leak memory into the current memory
context. range_cmp() leaked detoasted copies of its arguments, which
caused a quick out-of-memory error when creating an index on a range
column.
Reported by Marian Krucina, bug #8468.
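The rule being enforced here is the usual convention for btree comparison
support functions: free any detoasted copies of the arguments before
returning. A minimal sketch of that convention, using a hypothetical text
comparator rather than range_cmp itself (the fmgr macros are the standard
ones):

    #include "postgres.h"
    #include "fmgr.h"
    #include "utils/builtins.h"

    PG_MODULE_MAGIC;

    PG_FUNCTION_INFO_V1(my_text_cmp);

    Datum
    my_text_cmp(PG_FUNCTION_ARGS)
    {
        /* These may be freshly detoasted (palloc'd) copies. */
        text   *a = PG_GETARG_TEXT_PP(0);
        text   *b = PG_GETARG_TEXT_PP(1);
        int32   result;

        result = varstr_cmp(VARDATA_ANY(a), VARSIZE_ANY_EXHDR(a),
                            VARDATA_ANY(b), VARSIZE_ANY_EXHDR(b),
                            PG_GET_COLLATION());

        /*
         * Free the detoasted copies.  A comparison function is called a
         * huge number of times during an index build, so leaking them
         * into the surrounding memory context adds up quickly.
         */
        PG_FREE_IF_COPY(a, 0);
        PG_FREE_IF_COPY(b, 1);

        PG_RETURN_INT32(result);
    }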
Though @libdir@ almost always matches @abs_builddir@ in this context,
the test could only fail if they differed. Back-patch to 9.1, where the
test was introduced.
Hamid Quddus Akhtar
Previously, arbitrary system columns could be mentioned in table
constraints, but they were not correctly checked at runtime, because
the values weren't actually set correctly in the tuple. Since it
seems easy enough to initialize the table OID properly, do that,
and continue allowing that column, but disallow the rest unless and
until someone figures out a way to make them work properly.
No back-patch, because this doesn't seem important enough to take the
risk of destabilizing the back branches. In fact, this will pose a
dump-and-reload hazard for those upgrading from previous versions:
constraints that were accepted before but were not correctly enforced
will now either be enforced correctly or not accepted at all. Either
could result in restore failures, but in practice I think very few
users will notice the difference, since the use case is pretty
marginal anyway and few users will be relying on features that have
not historically worked.
Amit Kapila, reviewed by Rushabh Lathia, with doc changes by me.
In libpq, we set up callback routines and pass them to OpenSSL to handle
locking. When we run out of SSL connections, we try to clean things up by
de-registering the hooks. Unfortunately, we made a few calls into the
OpenSSL library after these hooks were de-registered during SSL cleanup,
which led to deadlocks. This moves the thread callback cleanup to after
all SSL-cleanup-related OpenSSL library calls.
I've been unable to reproduce the deadlock with this fix.
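The required ordering, as a rough sketch (this is not libpq's actual
cleanup code, and it assumes the pre-1.1.0 OpenSSL thread-callback API):

    #include <openssl/ssl.h>
    #include <openssl/crypto.h>

    static void
    cleanup_one_connection(SSL *ssl)
    {
        /*
         * First, finish every OpenSSL call that might take one of the
         * shared locks and therefore still depends on the registered
         * locking callbacks.
         */
        if (ssl != NULL)
        {
            SSL_shutdown(ssl);
            SSL_free(ssl);
        }

        /*
         * Only after the last such call is it safe to de-register the
         * thread callbacks; de-registering them earlier is what allowed
         * the deadlock.
         */
        CRYPTO_set_locking_callback(NULL);
        CRYPTO_set_id_callback(NULL);
    }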
In passing, also move the close_SSL call to be after unlocking our
ssl_config mutex when in a failure state. While it looks pretty
unlikely to be an issue, it could have resulted in deadlocks if we
ended up in this code path due to something other than SSL_new
failing. Thanks to Heikki for pointing this out.
Back-patch to all supported versions; note that the close_SSL issue
only goes back to 9.0, so that hunk isn't included in the 8.4 patch.
Initially found and reported by Vesa-Matti J Kari; many thanks to
both Heikki and Andres for their help running down the specific
issue and reviewing the patch.
When a timeline history file is fetched from the server, it is initially
created with a temporary file name and then renamed into place. However,
the temporary file name was constructed using an uninitialized buffer.
Usually that meant that the file was created in the current directory
instead of the target directory, which typically goes unnoticed, but if
the target is on a different filesystem than the current directory, the
rename() would fail. Fix that.
The second issue is that pg_receivexlog would not take .partial files into
account when scanning the target directory for existing WAL files. If the
timeline has switched in the server several times in the last WAL segment,
and pg_receivexlog is restarted, it would choose a starting point that is
too old. That's not a problem as long as the old WAL segment still exists
in the server and can be streamed over, but it will cause a failure if the
segment is no longer available.
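A sketch of the scanning side of the fix (a hypothetical helper, not the
actual pg_receivexlog code): when picking the restart point, a file that
is named like a WAL segment but ends in ".partial" has to be counted too.

    #include <stdbool.h>
    #include <string.h>

    #define XLOG_FNAME_LEN 24      /* segment names are 24 hex digits */

    /* Does fname look like a completed or a partial WAL segment file? */
    static bool
    is_wal_segment_file(const char *fname, bool *ispartial)
    {
        if (strspn(fname, "0123456789ABCDEF") != XLOG_FNAME_LEN)
            return false;

        if (fname[XLOG_FNAME_LEN] == '\0')
        {
            *ispartial = false;
            return true;
        }
        if (strcmp(fname + XLOG_FNAME_LEN, ".partial") == 0)
        {
            *ispartial = true;     /* previously ignored; must count too */
            return true;
        }
        return false;
    }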
Backpatch to 9.3, where this timeline handling code was written.
Analysed by Andrew Gierth, bug #8453, based on a bug report on IRC.
It seems to make more sense to use "cutoff multixact" terminology
throughout the backend code; "freeze" is associated with replacing an Xid
with FrozenTransactionId, which is not what we do for MultiXactIds.
Andres Freund
Some adjustments by Álvaro Herrera
Once the administrator has called for an immediate shutdown or a backend
crash has triggered a reinitialization, no mere SIGINT or SIGTERM should
change that course. Such derailment remains possible when the signal
arrives before quickdie() blocks signals. That being a narrow race
affecting most PostgreSQL signal handlers in some way, leave it for
another patch. Back-patch this to all supported versions.
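The shape of the guard is roughly the following (the names are
hypothetical, not the actual postmaster code): a later, weaker shutdown
request must never overwrite a stronger one that is already in progress.

    #include <signal.h>

    /* Shutdown levels, in increasing order of severity. */
    typedef enum
    {
        SHUTDOWN_NONE = 0,
        SHUTDOWN_SMART,         /* SIGTERM */
        SHUTDOWN_FAST,          /* SIGINT */
        SHUTDOWN_IMMEDIATE      /* crash or explicit immediate shutdown */
    } ShutdownLevel;

    static volatile sig_atomic_t shutdown_level = SHUTDOWN_NONE;

    static void
    shutdown_signal_handler(int signo)
    {
        ShutdownLevel req = (signo == SIGINT) ? SHUTDOWN_FAST : SHUTDOWN_SMART;

        /* Never downgrade: an immediate shutdown stays immediate. */
        if (req > shutdown_level)
            shutdown_level = req;
    }

    int
    main(void)
    {
        signal(SIGTERM, shutdown_signal_handler);
        signal(SIGINT, shutdown_signal_handler);
        /* ... the main loop would poll shutdown_level ... */
        return 0;
    }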
Previously, a one-dimensional empty array was returned, but its text
representation matched that of a zero-dimensional array, and there is no
way to dump/reload a one-dimensional empty array.
BACKWARD INCOMPATIBILITY
Per report from elein
Doing so was helpful for some Valgrind usage and distracting for other
usage. One can achieve the same effect by changing log_statement and
pointing both PostgreSQL and Valgrind logging to stderr.
Per gripe from Andres Freund.
Commit 95ef6a3448 removed the
ability to create rules on an individual column as of 7.3, but
left some residual code which has since been useless. This cleans
up that dead code without any change in behavior other than
dropping the useless column from the catalog.
If the hash table backing a catalog cache becomes too full (fillfactor > 2),
enlarge it. A new buckets array, double the size of the old, is allocated,
and all entries in the old hash are moved to the right bucket in the new
hash.
This has two benefits. First, cache lookups don't get so expensive when
there are lots of entries in a cache, like if you access hundreds of
thousands of tables. Second, we can make the (initial) sizes of the caches
much smaller, which saves memory.
This patch dials down the initial sizes of the catcaches. The new sizes are
chosen so that a backend that only runs a few basic queries still won't need
to enlarge any of them.
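The mechanism, as a self-contained sketch with generic names (not the
actual catcache structs; assumes power-of-two bucket counts):

    #include <stdlib.h>

    typedef struct Entry
    {
        unsigned int  hashvalue;
        struct Entry *next;
        /* ... cached tuple would follow ... */
    } Entry;

    typedef struct HashTable
    {
        int     nbuckets;       /* always a power of two */
        int     nentries;
        Entry **buckets;
    } HashTable;

    static void
    maybe_enlarge(HashTable *ht)
    {
        int     newnbuckets;
        Entry **newbuckets;
        int     i;

        if (ht->nentries <= ht->nbuckets * 2)
            return;             /* fillfactor <= 2, nothing to do */

        newnbuckets = ht->nbuckets * 2;
        newbuckets = calloc(newnbuckets, sizeof(Entry *));
        if (newbuckets == NULL)
            return;             /* enlarging is optional; keep the old table */

        /* Re-link every entry into its bucket in the new array. */
        for (i = 0; i < ht->nbuckets; i++)
        {
            Entry *e = ht->buckets[i];

            while (e != NULL)
            {
                Entry *next = e->next;
                int    b = e->hashvalue & (newnbuckets - 1);

                e->next = newbuckets[b];
                newbuckets[b] = e;
                e = next;
            }
        }

        free(ht->buckets);
        ht->buckets = newbuckets;
        ht->nbuckets = newnbuckets;
    }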
This reverts commit 269e780822
and commit 5b571bb8c8.
Unfortunately, the initial patch had insufficient performance testing,
and resulted in a regression.
Per report by Thom Brown.
Performance testing shows that if the insertpos_lck spinlock and the fields
that it protects are on the same cache line as other variables that are
frequently accessed, the false sharing can hurt performance a lot. Keep
them apart by adding some padding.
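The fix is plain struct padding. A sketch of the idea with made-up field
names (the real struct layout and cache line size differ):

    #include <stdint.h>

    #define CACHE_LINE_SIZE 64  /* assumed; the real value is platform-dependent */

    typedef struct InsertStatus
    {
        /* the hot spinlock and the positions it protects ... */
        volatile int insertpos_lck;
        uint64_t     CurrBytePos;
        uint64_t     PrevBytePos;

        /*
         * ... followed by a full cache line of padding, so the fields
         * below, which other processes read frequently, never share a
         * line with the contended ones above.
         */
        char         pad[CACHE_LINE_SIZE];

        uint64_t     some_other_frequently_read_field;
    } InsertStatus;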
This GUC context value was once only used by ALTER DATABASE SET and
ALTER USER SET. That's not true anymore, though, so rewrite the
comments to be a bit more general.
Patch in HEAD only, since this is just an internal documentation issue.
The previous coding attempted to activate all the GUC settings specified
in SET clauses, so that the function validator could operate in the GUC
environment expected by the function body. However, this is problematic
when restoring a dump, since the SET clauses might refer to database
objects that don't exist yet. We already have the parameter
check_function_bodies that's meant to prevent forward references in
function definitions from breaking dumps, so let's change CREATE FUNCTION
to not install the SET values if check_function_bodies is off.
Authors of function validators were already advised not to make any
"context sensitive" checks when check_function_bodies is off, if indeed
they're checking anything at all in that mode. But extend the
documentation to point out the GUC issue in particular.
(Note that we still check the SET clauses to some extent; the behavior
with !check_function_bodies is now approximately equivalent to what ALTER
DATABASE/ROLE have been doing for a while with context-dependent GUCs.)
This problem can be demonstrated in all active branches, so back-patch
all the way.
There's no inherent reason why an aggregate function can't be variadic
(even VARIADIC ANY) if its transition function can handle the case.
Indeed, this patch to add the feature touches none of the planner or
executor, and little of the parser; the main missing stuff was DDL and
pg_dump support.
It is true that variadic aggregates can create the same sort of ambiguity
about parameters versus ORDER BY keys that was complained of when we
(briefly) had both one- and two-argument forms of string_agg(). However,
the policy formed in response to that discussion only said that we'd not
create any built-in aggregates with varying numbers of arguments, not that
we shouldn't allow users to do it. So the logical extension of that policy
is to allow users to make variadic aggregates, as long as we're wary about
shipping any such in core.
In passing, this patch allows aggregate function arguments to be named, to
the extent of remembering the names in pg_proc and dumping them in pg_dump.
You can't yet call an aggregate using named-parameter notation. That seems
like a likely future extension, but it'll take some work, and it's not what
this patch is really about. Likewise, there's still some work needed to
make window functions handle VARIADIC fully, but I left that for another
day.
initdb forced because of new aggvariadic field in Aggref parse nodes.
I started out just to fix the broken markup in commit
1c20857661, but got distracted by
copy-editing. I see Bruce already fixed the markup, but I'll
commit the wordsmithing anyway.