Previously, a missing or invalid service file caused a NULL return. Also fix
pg_upgrade to report "out of memory" for a null return from
PQconndefaults().
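As a hedged illustration of the caller-side check this implies (the error
wording and the standalone program are assumptions, not pg_upgrade's actual
code; PQconndefaults() and PQconninfoFree() are the real libpq calls), a
NULL result can now be treated as meaning only out-of-memory:

    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        /* With this change, NULL from PQconndefaults() means only
         * "out of memory"; service-file problems no longer produce
         * a NULL return. */
        PQconninfoOption *opts = PQconndefaults();

        if (opts == NULL)
        {
            fprintf(stderr, "out of memory\n");
            return 1;
        }

        for (PQconninfoOption *o = opts; o->keyword != NULL; o++)
            printf("%s = %s\n", o->keyword, o->val ? o->val : "");

        PQconninfoFree(opts);
        return 0;
    }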
Patch by Steve Singer, rewritten by me
When an external recovery command such as restore_command or
archive_cleanup_command fails, report the exit code properly,
distinguishing signals and normal exits, using the existing
wait_result_to_str() facility, instead of just reporting the return
value from system().
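For reference, a minimal standalone sketch of the kind of decoding
wait_result_to_str() performs on a system() status (illustration only, not
the actual facility):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>

    /* Decode the raw status from system() into something readable,
     * distinguishing normal exits from deaths by signal. */
    static void
    report_command_result(const char *cmd, int rc)
    {
        if (rc == 0)
            printf("command \"%s\" succeeded\n", cmd);
        else if (WIFEXITED(rc))
            printf("command \"%s\" exited with code %d\n", cmd, WEXITSTATUS(rc));
        else if (WIFSIGNALED(rc))
            printf("command \"%s\" was terminated by signal %d\n", cmd, WTERMSIG(rc));
        else
            printf("command \"%s\" failed with unrecognized status %d\n", cmd, rc);
    }

    int
    main(void)
    {
        const char *cmd = "exit 3";   /* stand-in for a restore_command */
        report_command_result(cmd, system(cmd));
        return 0;
    }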
Reviewed-by: Peter Geoghegan <pg@heroku.com>
Many committers now seem to be using a workflow in which back-patched
commits are timestamped minutes or even hours apart in different branches
(most likely because they commit in one branch before starting work on
the next one). git_changelog was failing to merge its reports in such
cases, so increase the max time it's willing to merge commits across.
I considered getting rid of the limit altogether, but that produces
some odd results in terms of how the merged commit gets sorted relative
to unrelated commits.
Remove the variable from the enclosing scopes so that nothing can be
relying on it. The net result of this refactoring is that we get rid
of a few unnecessary strlen() calls.
Original patch from Greg Jaskiewicz, substantially expanded by me.
If errordata_stack_depth < 0, we won't find that out and correct the
problem until CHECK_STACK_DEPTH() is invoked. In the meantime,
elevel will be set based on an invalid read. This is probably
harmless in practice, but it seems cleaner this way.
Xi Wang
I'm putting these up for review before I start to extract the relevant
subsets for the older branches. It'll be easier to make any suggested
wording improvements at this stage.
This is mostly to fix incorrect migration instructions: since the preceding
minor releases advised reindexing some GIST indexes, it's important that
we back-link to that advice rather than earlier instances.
Also improve some bug descriptions and fix a few typos.
No back-patch yet; these files will get copied into the back branches
later in the release process.
pg_dumpall's charter is to be able to recreate a database cluster's
contents in a virgin installation, but it was failing to honor that
contract if the cluster had any ALTER DATABASE SET
default_transaction_read_only settings. By including a SET command
for each connection opened by pg_dumpall's output, errors are avoided
and the source cluster is successfully recreated.
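A minimal sketch of the idea (the helper and the exact output text are
illustrative assumptions, not pg_dumpall's actual code):

    #include <stdio.h>

    /* After each \connect that the dump script issues, override any
     * per-database default_transaction_read_only so the restore can
     * actually create objects in the new database. */
    static void
    emit_connect(FILE *out, const char *dbname)
    {
        fprintf(out, "\\connect %s\n", dbname);
        fprintf(out, "SET default_transaction_read_only = off;\n");
    }

    int
    main(void)
    {
        emit_connect(stdout, "readonly_db");
        return 0;
    }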
There was discussion of whether to also set this for the connection
applying pg_dump output, but it was felt that it was both less
appropriate in that context, and far easier to work around.
Backpatch to all supported branches.
Remove the use of the following macros, which are obsolescent according
to the Autoconf documentation:
- AC_C_CONST
- AC_C_STRINGIZE
- AC_C_VOLATILE
- AC_FUNC_MEMCMP
Both heap_freeze_tuple() and heap_tuple_needs_freeze() neglected to look
into a multixact to check the members against cutoff_xid. This means
that a very old Xid could survive hidden within a multi, possibly
outliving its CLOG storage. In the distant future, this would cause
clog lookup failures:
ERROR: could not access status of transaction 3883960912
DETAIL: Could not open file "pg_clog/0E78": No such file or directory.
This was mostly problematic when the updating transaction aborted, since
in that case the row wouldn't get pruned away earlier in vacuum and the
multixact could possibly survive for a long time. In many cases, data
that is inaccessible for this reason can be brought back
heuristically.
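To illustrate the kind of check that was missing, here is a standalone
sketch of wraparound-aware comparison of a member XID against cutoff_xid (a
simplified stand-in for TransactionIdPrecedes(); it ignores the special
non-normal XIDs):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef uint32_t TransactionId;

    /* Modulo-2^32 ("circular") comparison: id1 precedes id2 if the signed
     * difference is negative.  PostgreSQL's real TransactionIdPrecedes()
     * additionally special-cases bootstrap/frozen XIDs. */
    static bool
    xid_precedes(TransactionId id1, TransactionId id2)
    {
        return (int32_t) (id1 - id2) < 0;
    }

    int
    main(void)
    {
        TransactionId cutoff_xid = 3000000000u;
        TransactionId member_xid = 2500000000u;

        if (xid_precedes(member_xid, cutoff_xid))
            printf("member %u precedes cutoff %u and must be frozen\n",
                   member_xid, cutoff_xid);
        return 0;
    }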
As a second bug, heap_freeze_tuple() didn't properly handle multixacts
that need to be frozen according to cutoff_multi, but whose updater xid
is still alive. Instead of preserving the update Xid, it just set Xmax
invalid, which leads to both old and new tuple versions becoming
visible. This is pretty rare in practice, but a real threat
nonetheless. Existing corrupted rows, unfortunately, cannot be repaired
in an automated fashion.
Existing physical replicas might have already incorrectly frozen tuples
because of different behavior than in master, which might only become
apparent in the future once pg_multixact/ is truncated; it is
recommended that all clones be rebuilt after upgrading.
Following code analysis prompted by a bug report by J Smith in message
CADFUPgc5bmtv-yg9znxV-vcfkb+JPRqs7m2OesQXaM_4Z1JpdQ@mail.gmail.com
and privately by F-Secure.
Backpatch to 9.3, where freezing of MultiXactIds was introduced.
Analysis and patch by Andres Freund, with some tweaks by Álvaro.
It is dangerous to do so, because some code expects to be able to see the
true Xmax even if it is aborted (particularly while traversing HOT
chains). So don't do it, and instead rely on the callers to check for
abortedness, if necessary.
Several race conditions and bugs fixed in the process. One isolation test
changes the expected output due to these.
This also reverts commit c235a6a589, which is no longer necessary.
Backpatch to 9.3, where this function was introduced.
Andres Freund
Commit 9dc842f08 of 8.2 era prevented MultiXact truncation during crash
recovery, because there was no guarantee that enough state had been
setup, and because it wasn't deemed to be a good idea to remove data
during crash recovery anyway. Since then, due to Hot-Standby, streaming
replication and PITR, the amount of time a cluster can spend doing crash
recovery has increased significantly, to the point that a cluster may
even never come out of it. This has made it no longer defensible to
skip truncating the contents of pg_multixact/.
To fix, take care to set up enough state for multixact truncation before
crash recovery starts (easy since checkpoints contain the required
information), and move the current end-of-recovery actions to a new
TrimMultiXact() function, analogous to TrimCLOG().
At some later point, this should probably be done similarly to the way
clog.c is doing it, which is to just WAL-log truncations, but we can't
do that for the back branches.
Back-patch to 9.0. 8.4 also has the problem, but since there's no hot
standby there, it's much less pressing. In 9.2 and earlier, this patch
is simpler than in newer branches, because multixact access during
recovery isn't required. Add appropriate checks to make sure that's not
happening.
Andres Freund
While autovacuum dutifully launched anti-multixact-wraparound vacuums
when the multixact "age" was reached, the vacuum code was not aware that
it needed to make them full-table vacuums. As the resulting
partial-table vacuums aren't capable of actually increasing relminmxid,
autovacuum continued to launch anti-wraparound vacuums that didn't have
the intended effect, until the age of relfrozenxid caused the vacuum to
finally be a full-table one via vacuum_freeze_table_age.
To fix, introduce logic for multixacts similar to that for plain
TransactionIds, using the same GUCs.
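A toy sketch of the added decision, under assumed names (the real logic
lives in the vacuum/autovacuum code and uses MultiXactId-aware comparisons):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef uint32_t MultiXactId;

    /* Once the table's oldest multixact (relminmxid) is older than the
     * freeze-table threshold, the vacuum must scan the whole table so
     * relminmxid can actually be advanced; a partial vacuum cannot. */
    static bool
    mxid_forces_full_table_vacuum(MultiXactId relminmxid,
                                  MultiXactId next_mxid,
                                  uint32_t freeze_table_age)
    {
        uint32_t age = next_mxid - relminmxid;   /* simplified: no wraparound care */
        return age >= freeze_table_age;
    }

    int
    main(void)
    {
        if (mxid_forces_full_table_vacuum(1000, 250000000, 150000000))
            printf("anti-multixact-wraparound vacuum must scan the full table\n");
        return 0;
    }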
Backpatch to 9.3, where permanent MultiXactIds were introduced.
Andres Freund, some cleanup by Álvaro
Parts of the code used autovacuum_freeze_max_age to determine whether
anti-multixact-wraparound vacuums are necessary, while others used a
hardcoded 200000000 value. This leads to problems when
autovacuum_freeze_max_age is set to a non-default value. Use
autovacuum_freeze_max_age everywhere.
Backpatch to 9.3, where vacuuming of multixacts was introduced.
Andres Freund
Ensure that the invocation command for postgres or pg_ctl runservice
double-quotes the executable's pathname; failure to do this leads to
trouble when the path contains spaces.
Also, ensure that the path ends in ".exe" in both cases and uses
backslashes rather than slashes as directory separators. The latter issue
is reported to confuse some third-party tools such as Symantec Backup Exec.
Also, rewrite the function to avoid buffer overrun issues by using a
PQExpBuffer instead of a fixed-size static buffer. Combinations of
very long executable pathnames and very long data directory pathnames
could have caused trouble before, for example.
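An illustrative sketch of the quoting being enforced (the paths, option list,
and format string here are examples, not the exact command pg_ctl builds; the
real code now assembles it in a PQExpBuffer rather than a fixed-size buffer):

    #include <stdio.h>

    int
    main(void)
    {
        /* Paths containing spaces break unless the executable (and the
         * other arguments) are double-quoted; note the .exe suffix and
         * backslash separators. */
        const char *exe    = "C:\\Program Files\\PostgreSQL\\bin\\pg_ctl.exe";
        const char *name   = "postgresql-service";
        const char *pgdata = "C:\\Program Files\\PostgreSQL\\data";
        char        cmd[1024];

        snprintf(cmd, sizeof(cmd), "\"%s\" runservice -N \"%s\" -D \"%s\" -w",
                 exe, name, pgdata);
        printf("%s\n", cmd);
        return 0;
    }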
Back-patch to all active branches, since this code has been like this
for a long while.
Naoya Anzai and Tom Lane, reviewed by Rajeev Rastogi
The various places that transferred fast-path locks to the main lock table
neglected to release the PGPROC's backendLock if SetupLockInTable failed
due to being out of shared memory. In most cases this is no big deal since
ensuing error cleanup would release all held LWLocks anyway. But there are
some hot-standby functions that don't consider failure of
FastPathTransferRelationLocks to be a hard error, and in those cases this
oversight could lead to system lockup. For consistency, make all of these
places look the same as FastPathTransferRelationLocks.
Noted while looking for the cause of Dan Wood's bugs --- this wasn't it,
but it's a bug anyway.
Prevent handle_sig_alarm from losing control partway through due to a query
cancel (either an asynchronous SIGINT, or a cancel triggered by one of the
timeout handler functions). That would at least result in failure to
schedule any required future interrupt, and might result in actual
corruption of timeout.c's data structures, if the interrupt happened while
we were updating those.
We could still lose control if an asynchronous SIGINT arrives just as the
function is entered. This wouldn't break any data structures, but it would
have the same effect as if the SIGALRM interrupt had been silently lost:
we'd not fire any currently-due handlers, nor schedule any new interrupt.
To forestall that scenario, forcibly reschedule any pending timer interrupt
during AbortTransaction and AbortSubTransaction. We can avoid any extra
kernel call in most cases by not doing that until we've allowed
LockErrorCleanup to kill the DEADLOCK_TIMEOUT and LOCK_TIMEOUT events.
Another hazard is that some platforms (at least Linux and *BSD) block a
signal before calling its handler and then unblock it on return. When we
longjmp out of the handler, the unblock doesn't happen, and the signal is
left blocked indefinitely. Again, we can fix that by forcibly unblocking
signals during AbortTransaction and AbortSubTransaction.
These latter two problems do not manifest when the longjmp reaches
postgres.c, because the error recovery code there kills all pending timeout
events anyway, and it uses sigsetjmp(..., 1) so that the appropriate signal
mask is restored. So errors thrown outside any transaction should be OK
already, and cleaning up in AbortTransaction and AbortSubTransaction should
be enough to fix these issues. (We're assuming that any code that catches
a query cancel error and doesn't re-throw it will do at least a
subtransaction abort to clean up; but that was pretty much required already
by other subsystems.)
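A standalone demonstration of the mask issue (not PostgreSQL code): jumping
out of a signal handler leaves the signal blocked unless the mask is
restored, which sigsetjmp(..., 1) arranges:

    #include <setjmp.h>
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static sigjmp_buf env;

    static void
    alarm_handler(int signo)
    {
        /* SIGALRM is blocked while the handler runs; siglongjmp skips the
         * normal unblock-on-return, so only the saved mask restores it. */
        siglongjmp(env, 1);
    }

    int
    main(void)
    {
        struct sigaction sa;

        sa.sa_handler = alarm_handler;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = 0;
        sigaction(SIGALRM, &sa, NULL);

        /* The second argument (1) saves the signal mask and restores it on
         * siglongjmp; with 0, SIGALRM would stay blocked after the jump. */
        if (sigsetjmp(env, 1) == 0)
        {
            alarm(1);
            pause();
        }

        /* Because the mask was restored, this second alarm also fires. */
        if (sigsetjmp(env, 1) == 0)
        {
            alarm(1);
            pause();
        }

        printf("both alarms were delivered\n");
        return 0;
    }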
Lastly, ProcSleep should not clear the LOCK_TIMEOUT indicator flag when
disabling that event: if a lock timeout interrupt happened after the lock
was granted, the ensuing query cancel is still going to happen at the next
CHECK_FOR_INTERRUPTS, and we want to report it as a lock timeout not a user
cancel.
Per reports from Dan Wood.
Back-patch to 9.3 where the new timeout handling infrastructure was
introduced. We may at some point decide to back-patch the signal
unblocking changes further, but I'll desist from that until we hear
actual field complaints about it.
Although user-defined relations can't be directly created in
pg_catalog, it's possible for them to end up there, because you can
create them in some other schema and then use ALTER TABLE .. SET SCHEMA
to move them there. Previously, such relations couldn't afterwards
be manipulated, because IsSystemRelation()/IsSystemClass() rejected
all attempts to modify objects in the pg_catalog schema, regardless
of their origin. With this patch, they now reject only those
objects in pg_catalog which were created at initdb-time, allowing
most operations on user-created tables in pg_catalog to proceed
normally.
This patch also adds new functions IsCatalogRelation() and
IsCatalogClass(), which are similar to IsSystemRelation() and
IsSystemClass() but with a slightly narrower definition: only TOAST
tables of system catalogs are included, rather than *all* TOAST tables.
This is currently used only for making decisions about when
invalidation messages need to be sent, but upcoming logical decoding
patches will find other uses for this information.
Andres Freund, with some modifications by me.
In the GIN incomplete-splits patch, I used BlockIdDatas to store the block
number of left and right children, when inserting a downlink after a split
to an internal posting tree page. But gin_desc thought they were stored
as BlockNumbers.
We have for a long time checked the head pointer of each of the backend's
proclock lists and skipped acquiring the corresponding locktable partition
lock if the head pointer was NULL. This was safe enough in the days when
proclock lists were changed only by the owning backend, but it is pretty
questionable now that the fast-path patch added cases where backends add
entries to other backends' proclock lists. However, we don't really wish
to revert to locking each partition lock every time, because in simple
transactions that would add a lot of useless lock/unlock cycles on
already-heavily-contended LWLocks. Fortunately, the only way that another
backend could be modifying our proclock list at this point would be if it
was promoting a formerly fast-path lock of ours; and any such lock must be
one that we'd decided not to delete in the previous loop over the locallock
table. So it's okay if we miss seeing it in this loop; we'd just decide
not to delete it again. However, once we've detected a non-empty list,
we'd better re-fetch the list head pointer after acquiring the partition
lock. This guards against possibly fetching a corrupt-but-non-null pointer
if pointer fetch/store isn't atomic. It's not clear if any practical
architectures are like that, but we've never assumed that before and don't
wish to start here. In any case, the situation certainly deserves a code
comment.
While at it, refactor the partition traversal loop to use a for() construct
instead of a while() loop with goto's.
Back-patch, just in case the risk is real and not hypothetical.
Instead of simply checking the KEYS_UPDATED bit, we need to check
whether each lock held on the future version of the tuple conflicts with
the lock we're trying to acquire.
Per bug report #8434 by Tomonari Katsumata
Not doing so causes us to traverse an update chain that has been broken
by concurrent page pruning. All other code that traverses update chains
uses this check as one of the cases in which to stop iterating, so
replicate it here too. Failure to do so leads to erroneous CLOG,
subtrans or multixact lookups.
Per discussion following the bug report by J Smith in
CADFUPgc5bmtv-yg9znxV-vcfkb+JPRqs7m2OesQXaM_4Z1JpdQ@mail.gmail.com
as diagnosed by Andres Freund.
If a transaction updates/deletes a tuple just before aborting, and a
concurrent transaction tries to prune the page concurrently, the pruner
may see HeapTupleSatisfiesVacuum return HEAPTUPLE_DELETE_IN_PROGRESS,
but a later call to HeapTupleGetUpdateXid() return InvalidXid. This
would cause an assertion failure in development builds, but would be
otherwise Mostly Harmless.
Fix by checking whether the updater Xid is valid before trying to apply
it as the page prune point.
Reported by Andres in 20131124000203.GA4403@alap2.anarazel.de
The reason for the fetch failure is that the tuple was removed because
it was dead; so the failure is innocuous and can be ignored. Moreover,
there's no need for further work and we can return success to the caller
immediately. EvalPlanQualFetch is doing something very similar to this
already.
Report and test case from Andres Freund in
20131124000203.GA4403@alap2.anarazel.de
When acquiring a lock in fast-path mode, we must reset the locallock
object's lock and proclock fields to NULL. They are not necessarily that
way to start with, because the locallock could be left over from a failed
lock acquisition attempt earlier in the transaction. Failure to do this
led to all sorts of interesting misbehaviors when LockRelease tried to
clean up no-longer-related lock and proclock objects in shared memory.
Per report from Dan Wood.
In passing, modify LockRelease to elog not just Assert if it doesn't find
lock and proclock objects for a formerly fast-path lock, matching the code
in FastPathGetRelationLockEntry and LockRefindAndRelease. This isn't a
bug but it will help in diagnosing any future bugs in this area.
Also, modify FastPathTransferRelationLocks and FastPathGetRelationLockEntry
to break out of their loops over the fastpath array once they've found the
sole matching entry. This was inconsistently done in some search loops
and not others.
Improve assorted related comments, too.
Back-patch to 9.2 where the fast-path mechanism was introduced.
Correct an obsolete statement that no backend touches another backend's
PROCLOCK lists. This was probably wrong even when written (the deadlock
checker looks at everybody's lists), and it's certainly quite wrong now
that fast-path locking can require creation of lock and proclock objects
on behalf of another backend. Also improve some statements in the hot
standby explanation, and do one or two other trivial bits of wordsmithing/
reformatting.
Replace it with an approach similar to what GiST uses: when a page is split,
the left sibling is marked with a flag indicating that the parent hasn't been
updated yet. When the parent is updated, the flag is cleared. If an insertion
steps on a page with the flag set, it will finish the split before proceeding
with the insertion.
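A toy model of the flag protocol (names and structure are illustrative, not
ginbtree.c's):

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct Page
    {
        bool incomplete_split;   /* set on the left half of a split until the
                                  * parent has a downlink for the right half */
    } Page;

    static void
    finish_split(Page *left)
    {
        /* insert the missing downlink into the parent, then clear the flag */
        left->incomplete_split = false;
    }

    static void
    insert_on_page(Page *page)
    {
        if (page->incomplete_split)
            finish_split(page);   /* complete the earlier split first */
        /* ... then perform the insertion itself ... */
    }

    int
    main(void)
    {
        Page left = { true };     /* e.g. a crash happened right after the split */

        insert_on_page(&left);
        printf("flag cleared: %s\n", left.incomplete_split ? "no" : "yes");
        return 0;
    }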
The post-recovery cleanup mechanism was never totally reliable, as insertion
to the parent could fail, e.g. because of running out of memory or disk space,
leaving the tree in an inconsistent state.
This also divides the responsibility of WAL-logging more clearly between
the generic ginbtree.c code, and the parts specific to entry and posting
trees. There is now a common WAL record format for insertions and deletions,
which is written by ginbtree.c, followed by tree-specific payload, which is
returned by the placetopage- and split- callbacks.
Separate the insertion payload from the more static portions of GinBtree.
GinBtree now only contains information related to searching the tree, and
the information of what to insert is passed separately.
Add root block number to GinBtree, instead of passing it around all the
functions as argument.
Split off ginFinishSplit() from ginInsertValue(). ginFinishSplit is
responsible for finding the parent and inserting the downlink to it.
I neglected this in the previous commit that updated the plpython2 output,
which I forgot to "git add" earlier.
As pointed out by Rodolfo Campero and Marko Kreen.