all the data and using posix_fadvise to nudge the OS into flushing it
earlier. This also hopefully makes CREATE DATABASE avoid spamming the
cache.
Tests show a big speedup on Linux, at least on some filesystems.
Idea and patch from Andres Freund.
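For reference, the write-then-hint pattern looks roughly like the sketch
below. This is only a minimal standalone illustration, not the actual
copydir() code; copy_file_with_hint() and the buffer size are invented
for the example.

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Illustrative helper, not PostgreSQL code: copy src to dst, then hint
     * that the kernel may write back and drop the copied pages. */
    static void
    copy_file_with_hint(const char *src, const char *dst)
    {
        char        buf[65536];
        ssize_t     nread;
        int         srcfd = open(src, O_RDONLY);
        int         dstfd = open(dst, O_WRONLY | O_CREAT | O_EXCL, 0600);

        if (srcfd < 0 || dstfd < 0)
        {
            perror("open");
            exit(1);
        }

        while ((nread = read(srcfd, buf, sizeof(buf))) > 0)
        {
            if (write(dstfd, buf, (size_t) nread) != nread)
            {
                perror("write");
                exit(1);
            }
        }

    #ifdef POSIX_FADV_DONTNEED
        /* Nudge the OS to start writing the data out now rather than
         * keeping all of it dirty in the cache. */
        (void) posix_fadvise(dstfd, 0, 0, POSIX_FADV_DONTNEED);
    #endif

        close(srcfd);
        close(dstfd);
    }

    int
    main(int argc, char **argv)
    {
        if (argc == 3)
            copy_file_with_hint(argv[1], argv[2]);
        return 0;
    }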
The purpose of this change is to eliminate the need for every caller
of SearchSysCache, SearchSysCacheCopy, SearchSysCacheExists,
GetSysCacheOid, and SearchSysCacheList to know the maximum number
of allowable keys for a syscache entry (currently 4). This will
make it far easier to increase the maximum number of keys in a
future release should we choose to do so, and it makes the code
shorter, too.
Design and review by Tom Lane.
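One way to hide the key count is a set of fixed-arity wrappers over the
general lookup routine. The snippet below is only a self-contained
illustration of that idea; the wrapper names and the pad-with-zeroes
convention are assumptions, not necessarily what the committed code does.

    #include <stdio.h>

    typedef unsigned long Datum;        /* stand-in for the real Datum */

    /* Stand-in for the general lookup routine, which takes the maximum
     * number of keys (currently 4), unused keys passed as 0. */
    static void *
    SearchSysCache(int cacheId, Datum k1, Datum k2, Datum k3, Datum k4)
    {
        printf("cache %d: keys %lu %lu %lu %lu\n", cacheId, k1, k2, k3, k4);
        return NULL;
    }

    /* Fixed-arity wrappers: callers state how many keys they supply and
     * never mention the maximum, so raising it later only touches the
     * definitions below. */
    #define SearchSysCache1(cacheId, key1) \
        SearchSysCache(cacheId, key1, 0, 0, 0)
    #define SearchSysCache2(cacheId, key1, key2) \
        SearchSysCache(cacheId, key1, key2, 0, 0)

    int
    main(void)
    {
        SearchSysCache1(17, (Datum) 1234);
        SearchSysCache2(17, (Datum) 1234, (Datum) 5678);
        return 0;
    }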
prefix, instead of assuming it will always follow the default layout.
Not all the information we need is available on Windows, but at least we
make fewer assumptions this way than before.
Based on suggestions from James William Pye.
defined. Its reference to CurrentMemoryContext causes link failures on some
platforms, evidently because the inline function gets compiled despite lack of
use. Per buildfarm member warthog.
where a database has a non-default tablespaceid. Pass through MyDatabaseId
and MyDatabaseTableSpace to allow the file path to be re-created on the
standby and correct invalidation to take place in all cases.
Update and rework xact_commit_desc() debug messages.
Bug reported by Tom from code inspection. Fix by me.
compilers, by applying a configure check to see if the compiler will accept
an unreferenced "static inline foo ..." function without warnings. It is
believed that such warnings are the only reason not to declare inlined
functions in headers, if the compiler understands "inline" at all.
Kurt Harriman
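Roughly, the pattern this enables is sketched below. The
HAVE_QUIET_STATIC_INLINE symbol and list_head_fast() are invented for
illustration, not what configure actually defines; compile with or
without -DHAVE_QUIET_STATIC_INLINE to see both branches.

    #include <stdio.h>

    #ifdef HAVE_QUIET_STATIC_INLINE
    /* Compiler accepts unreferenced "static inline" quietly, so the body
     * can live in the header (here, inline in this file). */
    static inline int
    list_head_fast(const int *list)
    {
        return list[0];
    }
    #else
    /* Fallback for pickier compilers: in the real scheme, an extern
     * declaration here with the body defined once in a .c file. */
    static int
    list_head_fast(const int *list)
    {
        return list[0];
    }
    #endif

    int
    main(void)
    {
        int nums[] = {3, 1, 4};

        printf("%d\n", list_head_fast(nums));
        return 0;
    }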
process. If the startup process waits on a buffer pin, we send a request
to all backends to cancel themselves if they are holding the required
buffer pin and are also waiting on a lock. Otherwise, startup waits until
max_standby_delay before cancelling any backend waiting for the requested
buffer pin.
before we start analyzing the parent statement. This is to make it
more clear that the WITH isn't affected by anything in the parent.
I don't believe there's any actual bug here, because the stuff that
was being done before WITH didn't affect subqueries; but it's certainly
a potential source of error (and apparently misled Marko into committing some
real errors...).
that happens to be composite itself. Per bug #5314 from Oleg Serov.
Backpatch to 8.0 --- 7.4 has got too many other shortcomings in
composite-type support to make this worth worrying about in that branch.
This patch allows the frame to start from CURRENT ROW (in either RANGE or
ROWS mode), and it also adds support for ROWS n PRECEDING and ROWS n FOLLOWING
start and end points. (RANGE value PRECEDING/FOLLOWING isn't there yet ---
the grammar works, but that's all.)
Hitoshi Harada, reviewed by Pavel Stehule
echo all the recovery.conf options. Don't emit the "initializing
recovery connections" message, which doesn't mean anything to a user.
Remove the "starting archive recovery" message and replace the
"automatic recovery in progress" message with a more informative message
saying whether the server is doing PITR, normal archive recovery, or
standby mode.
a partial WAL file, assume it's because the file is just being copied to
the archive and treat it the same as "file not found" in standby mode.
pg_standby has a similar check, so it seems reasonable to have the same
level of protection in the built-in standby mode.
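The test amounts to comparing the restored file's size against a full WAL
segment, something like the sketch below; the 16 MB segment size and
wal_segment_ready() are illustrative, not the server's actual code.

    #include <stdbool.h>
    #include <stdio.h>
    #include <sys/stat.h>

    #define WAL_SEGMENT_SIZE (16 * 1024 * 1024)   /* assumed 16 MB segments */

    /* Return true only if the file exists and is a complete segment.
     * A short file is treated the same as "file not found", on the theory
     * that it is still being copied into the archive. */
    static bool
    wal_segment_ready(const char *path)
    {
        struct stat st;

        if (stat(path, &st) != 0)
            return false;           /* not there yet */
        if (st.st_size != WAL_SEGMENT_SIZE)
            return false;           /* partial copy in progress: retry later */
        return true;
    }

    int
    main(int argc, char **argv)
    {
        if (argc > 1)
            printf("%s: %s\n", argv[1],
                   wal_segment_ready(argv[1]) ? "ready" : "not ready");
        return 0;
    }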
several places, but for now only GIN uses it during index creation.
Using a self-balanced tree greatly speeds up index creation in corner cases
with preordered data.
restoring from archive, the last WAL segment is not necessarily open at
the end of recovery. Fix an assertion that assumed it would be.
Fujii Masao, fixing the assertion failure reported by Martin Pihlak.
The previous coding missed a bet by sometimes picking the "sorted" path
from query_planner even though hashing would be preferable. To fix, we have
to be willing to make the choice sooner. This contorts things a little bit,
but I thought of a factorization that makes it not too awful.
Move rd_targblock, rd_fsm_nblocks, and rd_vm_nblocks from relcache to the smgr
relation entries, so that they will get reset to InvalidBlockNumber whenever
an smgr-level flush happens. Because we now send smgr invalidation messages
immediately (not at end of transaction) when a relation truncation occurs,
this ensures that other backends will reset their values before they next
access the relation. We no longer need the unreliable assumption that a
VACUUM that's doing a truncation will hold its AccessExclusive lock until
commit --- in fact, we can intentionally release that lock as soon as we've
completed the truncation. This patch therefore reverts (most of) Alvaro's
patch of 2009-11-10, as well as my marginal hacking on it yesterday. We can
also get rid of assorted no-longer-needed relcache flushes, which are far more
expensive than an smgr flush because they kill a lot more state.
In passing this patch fixes smgr_redo's failure to perform visibility-map
truncation, and cleans up some rather dubious assumptions in freespace.c and
visibilitymap.c about when rd_fsm_nblocks and rd_vm_nblocks can be out of
date.
sections under "High Availability, Load Balancing, and Replication"
chapter. The streaming replication chapter needs a lot more work, but this
commit just moves things around.
truncating the table and transaction commit. This doesn't really make it
safe, but at least there is no good reason to do free space map
cleanup within the risk window. Don't lock out cancel interrupts
until we have to, either.
being called as aggregates, and to get the aggregate transition state memory
context if needed. Use it instead of poking directly into AggState and
WindowAggState in places that shouldn't know so much.
We should have done this in 8.4, probably, but better late than never.
Revised version of a patch by Hitoshi Harada.
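Assuming the API in question is what later sources expose as
AggCheckCallContext() (my assumption; the text above doesn't name it), a
C-coded transition function would use it roughly as follows. my_sum_transfn
is an invented example.

    #include "postgres.h"
    #include "fmgr.h"

    PG_MODULE_MAGIC;

    PG_FUNCTION_INFO_V1(my_sum_transfn);

    /* Example transition function keeping a running int64 total; the state
     * would be declared as "internal" in CREATE AGGREGATE. */
    Datum
    my_sum_transfn(PG_FUNCTION_ARGS)
    {
        MemoryContext aggcontext;
        int64      *state;

        /* Refuse to run outside an aggregate/window call, and fetch the
         * transition-state memory context without touching AggState. */
        if (!AggCheckCallContext(fcinfo, &aggcontext))
            elog(ERROR, "my_sum_transfn called in non-aggregate context");

        if (PG_ARGISNULL(0))
            state = (int64 *) MemoryContextAllocZero(aggcontext, sizeof(int64));
        else
            state = (int64 *) PG_GETARG_POINTER(0);

        if (!PG_ARGISNULL(1))
            *state += PG_GETARG_INT32(1);

        PG_RETURN_POINTER(state);
    }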
recovery. It's zeroed out whenever a checkpoint is written, so the only
scenario where the removed code did anything is when you kill archive
recovery, remove recovery.conf, and start up the server, so that it goes
into crash recovery instead. That's a "don't do that" scenario, but it
seems better to not clear minRecoveryPoint but instead update it like we
do in archive recovery, which is what will now happen.
needed by nothing else.
The restructuring I just finished doing on cache management exposed to me how
silly this routine was. Its function was to go into the catcache and blow
away all entries related to a given relation when there was a relcache flush
on that relation. However, there is no point in removing a catcache entry
if the catalog row it represents is still valid --- and if it isn't valid,
there must have been a catcache entry flush on it, because that's triggered
directly by heap_update or heap_delete on the catalog row. So this routine
accomplished nothing except to blow away valid cache entries that we'd very
likely be wanting in the near future to help reconstruct the relcache entry.
Dumb.
On top of which, it required a subtle and easy-to-get-wrong attribute in
syscache definitions, ie, the column containing the OID of the related
relation if any. Removing that is a very useful maintenance simplification.
VACUUM FULL INPLACE), along with a boatload of subsidiary code and complexity.
Per discussion, the use case for this method of vacuuming is no longer large
enough to justify maintaining it; not to mention that we don't wish to invest
the work that would be needed to make it play nicely with Hot Standby.
Aside from the code directly related to old-style VACUUM FULL, this commit
removes support for certain WAL record types that could only be generated
within VACUUM FULL, redirect-pointer removal in heap_page_prune, and
nontransactional generation of cache invalidation sinval messages (the last
being the sticking point for Hot Standby).
We still have to retain all code that copes with finding HEAP_MOVED_OFF and
HEAP_MOVED_IN flag bits on existing tuples. This can't be removed as long
as we want to support in-place update from pre-9.0 databases.
as per my recent proposal.
First, teach IndexBuildHeapScan to not wait for INSERT_IN_PROGRESS or
DELETE_IN_PROGRESS tuples to commit unless the index build is checking
uniqueness/exclusion constraints. If it isn't, there's no harm in just
indexing the in-doubt tuple.
Second, modify VACUUM FULL/CLUSTER to suppress reverifying
uniqueness/exclusion constraint properties while rebuilding indexes of
the target relation. This is reasonable because these commands aren't
meant to deal with corrupted-data situations. Constraint properties
will still be rechecked when an index is rebuilt by a REINDEX command.
This gets us out of the problem that new-style VACUUM FULL would often
wait for other transactions while holding exclusive lock on a system
catalog, leading to probable deadlock because those other transactions
need to look at the catalogs too. Although the real ultimate cause of
the problem is a debatable choice to release locks early after modifying
system catalogs, changing that choice would require pretty serious
analysis and is not something to be undertaken lightly or on a tight
schedule. The present patch fixes the problem in a fairly reasonable
way and should also improve the speed of VACUUM FULL/CLUSTER a little bit.