currently have any better strategy for this query than re-running the
sub-select over and over; it seems unlikely that doing so 10000 times
is a more useful test than doing it a few dozen times.
entries for the victim database go away sooner rather than later. We already
did the equivalent thing at the per-relation level, not sure why it's not
been done for whole databases. With this change, pgstat_vacuum_tabstat
should usually not find anything to do, though we still need it as a backstop
in case DROPDB or TABPURGE messages get lost under load.
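For illustration, a minimal C sketch of this "message plus backstop" pattern. All names here (StatsEntry, handle_dropdb_msg, sweep) are invented for the sketch and are not the actual pgstat code:

    #include <stdbool.h>

    #define MAX_DBS 16

    typedef struct { int dbid; bool in_use; } StatsEntry;

    static StatsEntry stats[MAX_DBS];

    /* Fast path: a DROPDB-style message removes the victim's entry
     * promptly, but the message may be lost under load. */
    static void handle_dropdb_msg(int dbid) {
        for (int i = 0; i < MAX_DBS; i++)
            if (stats[i].in_use && stats[i].dbid == dbid)
                stats[i].in_use = false;
    }

    /* Backstop (pgstat_vacuum_tabstat's role): periodically drop any
     * entry whose database no longer exists. */
    static void sweep(bool (*db_exists)(int dbid)) {
        for (int i = 0; i < MAX_DBS; i++)
            if (stats[i].in_use && !db_exists(stats[i].dbid))
                stats[i].in_use = false;
    }

    static bool db_exists_stub(int dbid) { return dbid != 42; }

    int main(void) {
        stats[0] = (StatsEntry){ 42, true };
        stats[1] = (StatsEntry){ 7, true };
        handle_dropdb_msg(7);      /* prompt cleanup via message */
        sweep(db_exists_stub);     /* catches dbid 42 if its message was lost */
        return 0;
    }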
< * Merge xmin/xmax/cmin/cmax back into three header fields
<
< Before subtransactions, there used to be only three fields needed to
< store these four values. This was possible because only the current
< transaction looks at the cmin/cmax values. If the current transaction
< created and expired the row, the fields stored were xmin (same as
< xmax), cmin, cmax, and if the transaction was expiring a row from
< another transaction, the fields stored were xmin (cmin was not
< needed), xmax, and cmax. Such a system worked because a transaction
< could only see rows from another completed transaction. However,
< subtransactions can see rows from outer transactions, and once the
< subtransaction completes, the outer transaction continues, requiring
< the storage of all four fields. With subtransactions, an outer
< transaction can create a row, a subtransaction expire it, and when the
< subtransaction completes, the outer transaction still has to have
< proper visibility of the row's cmin, for example, for cursors.
<
< One possible solution is to create a phantom cid which represents a
< cmin/cmax pair and is stored in local memory. Another idea is to
< store both cmin and cmax only in local memory.
<
> * -Merge xmin/xmax/cmin/cmax back into three header fields
keeping private state in each backend that has inserted and deleted the same
tuple during its current top-level transaction. This is sufficient since
there is no need to be able to determine the cmin/cmax from any other
transaction. This gets us back down to 23-byte headers, removing a penalty
paid in 8.0 to support subtransactions. Patch by Heikki Linnakangas, with
minor revisions by moi, following a design hashed out a while back on the
pghackers list.
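For illustration, a hedged C sketch of that backend-private scheme: the single shared header field stores a small "combo" id that indexes a local table of (cmin, cmax) pairs. The names here (ComboCidPair, get_combo_cid, and so on) are invented for the sketch, not the actual implementation:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef uint32_t CommandId;

    typedef struct {
        CommandId cmin;             /* command that inserted the tuple */
        CommandId cmax;             /* command that deleted the tuple */
    } ComboCidPair;

    /* Backend-local table: no other transaction ever needs to decode
     * these ids, which is what makes the trick safe. */
    static ComboCidPair *combo_cids = NULL;
    static int n_combo_cids = 0;
    static int max_combo_cids = 0;

    /* Map a (cmin, cmax) pair to a small id that fits in the one shared
     * header field.  A linear scan keeps the sketch short; a real
     * implementation would use a hash table. */
    static CommandId get_combo_cid(CommandId cmin, CommandId cmax) {
        for (int i = 0; i < n_combo_cids; i++)
            if (combo_cids[i].cmin == cmin && combo_cids[i].cmax == cmax)
                return (CommandId) i;
        if (n_combo_cids >= max_combo_cids) {
            max_combo_cids = max_combo_cids > 0 ? max_combo_cids * 2 : 8;
            combo_cids = realloc(combo_cids,
                                 max_combo_cids * sizeof(ComboCidPair));
        }
        combo_cids[n_combo_cids].cmin = cmin;
        combo_cids[n_combo_cids].cmax = cmax;
        return (CommandId) n_combo_cids++;
    }

    int main(void) {
        CommandId id = get_combo_cid(5, 7);
        printf("combo %u -> cmin %u, cmax %u\n",
               id, combo_cids[id].cmin, combo_cids[id].cmax);
        return 0;
    }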
< * Consider placing all sequences in a single table, now that system
< tables are fully transactional
> * Consider placing all sequences in a single table
get away with not (re)initializing a local variable if the variable is marked
"isconst" and not "isnull". Unfortunately it makes this decision after having
already freed the old value, meaning that something like
    for i in 1..10 loop
      declare c constant text := 'hi there';
leads to subsequent accesses to freed memory, and hence probably crashes.
(In particular, this is why Asif Ali Rehman's bug leads to crash and not
just an unexpectedly-NULL value for SQLERRM: SQLERRM is marked CONSTANT
and so triggers this error.)
The whole thing seems wrong on its face anyway: CONSTANT means that you can't
change the variable inside the block, not that the initializer expression is
guaranteed not to change value across successive block entries. Hence,
remove the "optimization" instead of trying to fix it.
DECLARE section needs to know about it. Formerly, everyplace besides DECLARE
that created variables needed to do "plpgsql_add_initdatums(NULL)" to prevent
those variables from being sucked up as part of a subsequent DECLARE block.
This is obviously error-prone, and in fact the SQLSTATE/SQLERRM patch had
failed to do it for those two variables, leading to the bug recently exhibited
by Asif Ali Rehman: a DECLARE within an exception handler tried to reinitialize
SQLERRM.
Although the SQLSTATE/SQLERRM patch isn't in any pre-8.1 branches, and so
I can't point to a demonstrable failure there, it seems wise to back-patch
this into the older branches anyway, just to keep the logic similar to HEAD.
For win32 in general, this makes it possible to run the regression tests
as an admin user by using the same restricted token method that's used
by pg_ctl and initdb.
For vc++, it adds building of pg_regress.exe, adds a resultmap, and
fixes how it runs the install.
Magnus Hagander
pg_standby is a production-ready program that can be used to
create a Warm Standby server. Other configuration is required
as well, all of which is described in the main server manual.
Simon Riggs
where possible, and fix some sites that apparently thought that fgets()
will overwrite the buffer by one byte.
Also add some strlcpy() to eliminate some weird memory handling.
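For context, a small self-contained C example of the difference: strncpy() does not NUL-terminate when the source fills the buffer, while strlcpy() always terminates and returns the full source length so callers can detect truncation. my_strlcpy below is a local stand-in with BSD semantics, not PostgreSQL's implementation:

    #include <stdio.h>
    #include <string.h>

    static size_t my_strlcpy(char *dst, const char *src, size_t siz) {
        size_t srclen = strlen(src);
        if (siz > 0) {
            size_t copylen = (srclen >= siz) ? siz - 1 : srclen;
            memcpy(dst, src, copylen);
            dst[copylen] = '\0';       /* always NUL-terminated */
        }
        return srclen;                 /* >= siz means truncation occurred */
    }

    int main(void) {
        char buf[8];

        /* strncpy() leaves buf unterminated here: all 8 bytes are data */
        strncpy(buf, "0123456789", sizeof(buf));
        /* printing buf now would read past the end of the array */

        my_strlcpy(buf, "0123456789", sizeof(buf));
        printf("%s\n", buf);           /* prints "0123456" */
        return 0;
    }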
> Currently, an index split writes all the data on the split page to
> WAL. That's a lot of WAL traffic. The tuples that are copied to the
> right page need to be WAL logged, but the tuples that stay on the
> original page don't.
Heikki Linnakangas
already collected in the current transaction; this allows plpgsql functions to
watch for stats updates even though they are confined to a single transaction.
Use this instead of the previous kluge involving pg_stat_file() to wait for
the stats collector to update in the stats regression test. Internally,
decouple storage of stats snapshots from transaction boundaries; they'll
now stick around until someone calls pgstat_clear_snapshot --- which xact.c
still does at transaction end, to maintain the previous behavior. This makes
the logic a lot cleaner, at the price of a couple dozen cycles per transaction
exit.
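A minimal C sketch of that caching pattern, with the caveat that pgstat_fetch_snapshot and read_current_stats are illustrative names invented here (only pgstat_clear_snapshot comes from the text):

    #include <stdlib.h>

    typedef struct { long tuples_inserted; } StatsSnapshot;

    static StatsSnapshot *snapshot = NULL;

    /* Stand-in for reading fresh data from the stats collector. */
    static StatsSnapshot *read_current_stats(void) {
        StatsSnapshot *s = malloc(sizeof(*s));
        s->tuples_inserted = 0;        /* would be filled from the stats file */
        return s;
    }

    /* First call caches a snapshot; later calls reuse it, so a
     * transaction normally sees stable numbers throughout. */
    static StatsSnapshot *pgstat_fetch_snapshot(void) {
        if (snapshot == NULL)
            snapshot = read_current_stats();
        return snapshot;
    }

    /* Explicit invalidation: lets a polling function force the next
     * fetch to see fresh data mid-transaction; xact.c still calls
     * this at transaction end. */
    static void pgstat_clear_snapshot(void) {
        free(snapshot);
        snapshot = NULL;
    }

    int main(void) {
        StatsSnapshot *a = pgstat_fetch_snapshot();
        StatsSnapshot *b = pgstat_fetch_snapshot();  /* same cached copy */
        (void) a; (void) b;
        pgstat_clear_snapshot();                     /* next fetch is fresh */
        return 0;
    }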
changes (with an upper limit of 30 seconds), and record the delay time in
the postmaster log. This should give us some info about what's happening
with the intermittent stats failures in buildfarm. After an idea of
Andrew Dunstan's.
"database system is ready to accept connections", which is issued by the
postmaster when it really is ready to accept connections. Per proposal from
Markus Schiltknecht and subsequent discussion.
thought that it didn't have to reposition the underlying tuplestore if the
portal is atEnd. But this is not so, because tuplestores have separate read
and write cursors ... and the read cursor hasn't moved from the start.
This mistake explains bug #2970 from William Zhang.
Note: the coding here is pretty inefficient, but given that no one has noticed
this bug until now, I'd say hardly anyone uses the case where the cursor has
been advanced before being persisted. So maybe it's not worth worrying about.
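A minimal C sketch of the separate-cursor property that caused this; Store and its functions are invented for illustration, not the tuplestore API:

    #include <stdio.h>

    #define CAP 100

    typedef struct {
        int items[CAP];
        int nwritten;       /* write cursor */
        int nread;          /* read cursor, independent of writes */
    } Store;

    static void store_put(Store *s, int v) {
        s->items[s->nwritten++] = v;    /* read cursor is untouched */
    }

    static int store_get(Store *s, int *v) {
        if (s->nread >= s->nwritten)
            return 0;                   /* at end for reading */
        *v = s->items[s->nread++];
        return 1;
    }

    int main(void) {
        Store s = { {0}, 0, 0 };
        for (int i = 0; i < 5; i++)
            store_put(&s, i);
        /* The store has been fully written, yet reading starts at the
         * beginning: being "at end" for writing says nothing about the
         * read position, which is exactly the trap described above. */
        int v;
        while (store_get(&s, &v))
            printf("%d\n", v);
        return 0;
    }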
<P>USA daylight saving time changes are included in PostgreSQL release 8.0.[4+],
and all later major releases, e.g. 8.1. Canada and Western Australia
changes are included in 8.0.[10+], 8.1.[6+], and all later major
releases. PostgreSQL releases prior to 8.0 use the operating system's
timezone database for daylight saving information.</P>
out that ExecEvalVar and friends don't necessarily have access to a tuple
descriptor with correct typmod: it definitely can contain -1, and possibly
might contain other values that are different from the Var's typmod.
Arguably this should be cleaned up someday, but it's not a simple change,
and in any case typmod discrepancies don't pose a security hazard.
Per reports from numerous people :-(
I'm not entirely sure whether the failure can occur in 8.0 --- the simple
test cases reported so far don't trigger it there. But back-patch the
change all the way anyway.