useful than just 'failed' when there's a problem. Per gripe from
Chris Albertson.
In an unrelated change, use VACUUM FULL; VACUUM FREEZE; rather than
a single VACUUM FULL FREEZE command, to respond to my worries of a
couple days ago about the reliability of doing this in one go.
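For instance, in a libpq client this amounts to issuing the two statements
separately (a minimal sketch; the function name is made up and conn is
assumed to be an already-open connection):

#include <stdio.h>
#include <libpq-fe.h>

/*
 * Sketch only: run the two VACUUM steps as separate commands rather
 * than one combined VACUUM FULL FREEZE.  Returns 0 on success.
 */
static int
vacuum_full_then_freeze(PGconn *conn)
{
	static const char *const cmds[] = {"VACUUM FULL;", "VACUUM FREEZE;"};
	int			i;

	for (i = 0; i < 2; i++)
	{
		PGresult   *res = PQexec(conn, cmds[i]);

		if (PQresultStatus(res) != PGRES_COMMAND_OK)
		{
			fprintf(stderr, "%s failed: %s", cmds[i], PQerrorMessage(conn));
			PQclear(res);
			return -1;
		}
		PQclear(res);
	}
	return 0;
}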
/*
 * Some compilers will throw a warning knowing this test can never be
 * true because off_t can't exceed the compared maximum.
 */
if (th->fileLen > MAX_TAR_MEMBER_FILELEN)
	die_horribly(AH, modulename, "archive member too large for tar format\n");
> * Auto-vacuum
> o Move into the backend code
> o Scan the buffer cache to find free space or use background writer
> o Use free-space map information to guide refilling
prevents problems when the DECLARE is in a portal and is executed
repeatedly, as is possible in the v3 protocol. Per analysis by Oliver
Jowett, though I didn't use his patch exactly.
< information, either by name or offset from UTC
> information, either zone name or offset from UTC
>
> If the TIMESTAMP value is stored with a time zone name, interval
> computations should adjust based on the time zone rules, e.g. adding
> 24 hours to a timestamp would yield a different result from adding one
> day.
>
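To make the last quoted point concrete, here is a standalone illustration
of the 24-hours-versus-one-day distinction using ordinary libc time
functions and the 2005-04-03 US spring-forward (this is not PostgreSQL
code, just the semantics the TODO item is describing):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int
main(void)
{
	struct tm	tm = {0};
	time_t		base, plus24h, plus1day;
	char		buf[64];

	setenv("TZ", "America/New_York", 1);
	tzset();

	/* 2005-04-02 12:00, the day before the spring-forward */
	tm.tm_year = 105;
	tm.tm_mon = 3;				/* April */
	tm.tm_mday = 2;
	tm.tm_hour = 12;
	tm.tm_isdst = -1;
	base = mktime(&tm);

	/* Adding a fixed 24 hours crosses the DST gap... */
	plus24h = base + 24 * 3600;

	/* ...while adding one calendar day keeps the wall-clock time. */
	tm.tm_mday += 1;
	tm.tm_isdst = -1;
	plus1day = mktime(&tm);

	strftime(buf, sizeof(buf), "%Y-%m-%d %H:%M %Z", localtime(&plus24h));
	printf("+24 hours: %s\n", buf);		/* 2005-04-03 13:00 EDT */
	strftime(buf, sizeof(buf), "%Y-%m-%d %H:%M %Z", localtime(&plus1day));
	printf("+1 day:    %s\n", buf);		/* 2005-04-03 12:00 EDT */
	return 0;
}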
error conditions during regexp compile, but not during regexp execution;
any sort of "can't happen" errors would be treated as no-match instead
of being reported as they should be. Noticed while trying to duplicate
a reported Tcl bug.
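The principle, illustrated here with the POSIX regex API rather than the
backend's own regex code: execution has three outcomes, and only
REG_NOMATCH may be treated as no-match.

#include <regex.h>
#include <stdio.h>

/*
 * Sketch only: returns 1 on match, 0 on genuine no-match, -1 on an
 * execution error, which must be reported rather than silently
 * treated as no-match.
 */
static int
regexp_test(const regex_t *re, const char *s)
{
	int			rc = regexec(re, s, 0, NULL, 0);
	char		errbuf[128];

	if (rc == 0)
		return 1;
	if (rc == REG_NOMATCH)
		return 0;

	regerror(rc, re, errbuf, sizeof(errbuf));
	fprintf(stderr, "regexp execution failed: %s\n", errbuf);
	return -1;
}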
to be processed by GUC before InitPostgres, because any required lookup
of the encoding conversion function has to be done during InitializeClientEncoding.
So, I broke this last week by moving GUC processing to after InitPostgres :-(.
What we can do as a compromise is process non-SUSET variables during
command line scanning (the same as before), and postpone the processing
of only SUSET variables. None of the SUSET variables need to be set
before InitPostgres.
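In outline, the compromise looks like the following sketch; the option
list, option_is_suset(), and apply_option() are hypothetical names for
illustration, not PostgreSQL internals:

#include <stdbool.h>

typedef struct
{
	const char *name;
	const char *value;
} OptionToSet;

extern bool option_is_suset(const char *name);		/* hypothetical */
extern void apply_option(const OptionToSet *opt);	/* hypothetical */
extern void InitPostgres_stub(void);	/* stands in for InitPostgres */

static void
process_startup_options(OptionToSet *opts, int nopts)
{
	int			i;

	/*
	 * Pass 1: apply non-SUSET variables during command-line scanning,
	 * so that e.g. client_encoding is already recorded when
	 * InitializeClientEncoding runs inside InitPostgres.
	 */
	for (i = 0; i < nopts; i++)
		if (!option_is_suset(opts[i].name))
			apply_option(&opts[i]);

	InitPostgres_stub();

	/*
	 * Pass 2: SUSET variables need catalog access to check privileges,
	 * and none of them have to be set before InitPostgres.
	 */
	for (i = 0; i < nopts; i++)
		if (option_is_suset(opts[i].name))
			apply_option(&opts[i]);
}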
data returned from Perl. Consolidate multiple bits of code to convert
a Perl hash to a tuple, and drive the conversion off the keys present
in the hash rather than the tuple column names, so we detect an error if
the hash contains keys it shouldn't. (This means keys not in the hash
will silently default to NULL, which seems ok to me.) Fix a bunch of
reference-count leaks too.
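Schematically, the key-driven conversion works like this (a simplified
standalone model; KeyValue, column_number(), and the error handling are
stand-ins for the real Perl hash and tuple-descriptor machinery):

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct
{
	const char *key;
	const char *value;
} KeyValue;

/* Look up a column by name; -1 if there is no such column. */
static int
column_number(const char *const *colnames, int ncols, const char *key)
{
	int			i;

	for (i = 0; i < ncols; i++)
		if (strcmp(colnames[i], key) == 0)
			return i;
	return -1;
}

static void
hash_to_tuple(const KeyValue *hash, int nkeys,
			  const char *const *colnames, int ncols,
			  const char **values, bool *nulls)
{
	int			i;

	/* Columns not mentioned in the hash silently default to NULL. */
	for (i = 0; i < ncols; i++)
	{
		values[i] = NULL;
		nulls[i] = true;
	}

	/*
	 * Iterate over the hash keys, not the column names, so that an
	 * unexpected key is detected instead of being silently ignored.
	 */
	for (i = 0; i < nkeys; i++)
	{
		int			col = column_number(colnames, ncols, hash[i].key);

		if (col < 0)
		{
			fprintf(stderr, "hash contains nonexistent column \"%s\"\n",
					hash[i].key);
			exit(1);
		}
		values[col] = hash[i].value;
		nulls[col] = false;
	}
}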
fill factor has been exceeded. We usually run with ffactor == 1, but
the way the test was coded, it wouldn't split a bucket until the actual
fill factor reached 2.0, because of use of integer division. Change
from > to >= so that it will split more aggressively when the table
starts to get full.
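The arithmetic behind the change, sketched with assumed field names (the
real test compares the integer quotient of entries over buckets against
the fill factor):

/*
 * Assumed names, for illustration.  With ffactor == 1 and integer
 * division, the old test could not fire until the true fill factor
 * reached 2.0:
 *
 *   nentries = 127, nbuckets = 64:  127 / 64 == 1
 *   old test:  1 >  1  -> false; no split until 128 / 64 == 2
 *   new test:  1 >= 1  -> true; split as soon as the average bucket
 *              load reaches the fill factor
 */
if (hctl->nentries / (long) (hctl->max_bucket + 1) >= hctl->ffactor)
	expand_table(hashp);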
few cycles during transaction exit. A typical session probably
wouldn't have as many as half a dozen portals open at once, so the
original value of 64 seems far larger than needed.
subtransactions quite right either: the ReleaseCurrentSubTransaction
call should occur inside the PG_TRY, so that the proper path is taken
if an error occurs during subtransaction commit. This assumes that
AbortSubTransaction can cope with the state left behind if
CommitSubTransaction fails partway through, but we were already
requiring that.
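In outline, the corrected ordering is the standard backend subtransaction
pattern shown below (a sketch, not the exact PL code; do_operation()
stands in for the guarded work):

MemoryContext oldcontext = CurrentMemoryContext;
ResourceOwner oldowner = CurrentResourceOwner;

BeginInternalSubTransaction(NULL);
/* Run the guarded code in the caller's memory context */
MemoryContextSwitchTo(oldcontext);

PG_TRY();
{
	do_operation();				/* hypothetical: the SPI call, etc. */

	/*
	 * Commit inside PG_TRY: if ReleaseCurrentSubTransaction fails
	 * partway through, control reaches PG_CATCH and the
	 * subtransaction is aborted rather than left dangling.
	 */
	ReleaseCurrentSubTransaction();
	MemoryContextSwitchTo(oldcontext);
	CurrentResourceOwner = oldowner;
}
PG_CATCH();
{
	MemoryContextSwitchTo(oldcontext);
	RollbackAndReleaseCurrentSubTransaction();
	CurrentResourceOwner = oldowner;
	PG_RE_THROW();				/* or convert to a Perl/Tcl error */
}
PG_END_TRY();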
operations are now run as subtransactions, so that errors in them
can be reported as ordinary Perl or Tcl errors and caught by the
normal error handling convention of those languages. Also do some
minor code cleanup in pltcl.c: extract a large chunk of duplicated
code in pltcl_SPI_execute and pltcl_SPI_execute_plan into a shared
subroutine.
no need for it to be nearly as big as the global hash table, and since
it's not in shared memory it can grow if it does need to be bigger.
By reducing the size, we speed up hash_seq_search(), which saves a
significant fraction of subtransaction entry/exit overhead.
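A sketch of the sizing idea with the dynahash API (hash_create and
hash_seq_search are the real functions; the table name, sizes, and entry
type here are made up):

HASHCTL		ctl;
HTAB	   *local_hash;

MemSet(&ctl, 0, sizeof(ctl));
ctl.keysize = sizeof(uint32);
ctl.entrysize = sizeof(LocalEntry);		/* hypothetical entry type */

/*
 * Backend-local, so dynahash can enlarge it on demand; starting small
 * leaves hash_seq_search fewer mostly-empty buckets to walk at
 * subtransaction entry/exit.
 */
local_hash = hash_create("local copy of shared table",
						 16,		/* small initial size; can grow */
						 &ctl,
						 HASH_ELEM);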