eliminate unnecessary code, and force initdb because stored rules change
(limit nodes are now supposed to be int8, not int4, expressions).
Update comments and error messages, which still all said 'integer'.
< o Allow point-in-time recovery to archive partially filled
< write-ahead logs? [pitr]
> o Add command to archive partially filled write-ahead logs? [pitr]
< of a disk failure. This could be triggered by a user command or
< a timer.
> of a disk failure.
< recovery. A function call to do this would also be useful.
> recovery.
> o Add reporting of the current WAL file and offset, perhaps as
> part of partial log file archiving
>
> The offset allows parts of a WAL file to be archived using
> an external program.
>
< o Add reporting of the current WAL file and offset, perhaps as
< part of partial log file archiving
<
< The offset allows parts of a WAL file to be archived using
< an external program.
not "unset". An "unset" state doesn't really exist; all variables
read as an empty string if the string they point to has never been
initialized (see the sketch after the list below).
- predefined variable "tps"
The value of the tps variable is taken from the scaling factor
specified by the -s option.
- -D option
Variable values can be defined with the -D option.
- \set command now allows arithmetic calculations.
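A minimal sketch of the variable semantics described above; the names
(Variable, lookup_variable) are illustrative stand-ins, not pgbench's
actual code:

    #include <stddef.h>
    #include <string.h>

    /* Illustrative stand-in for a pgbench variable-table entry. */
    typedef struct Variable
    {
        const char *name;
        const char *value;          /* NULL until first assigned */
    } Variable;

    /*
     * Hypothetical lookup helper: a variable whose value string was
     * never initialized reads as an empty string, so no separate
     * "unset" state is needed.
     */
    static const char *
    lookup_variable(const Variable *vars, size_t nvars, const char *name)
    {
        for (size_t i = 0; i < nvars; i++)
        {
            if (strcmp(vars[i].name, name) == 0)
                return vars[i].value ? vars[i].value : "";
        }
        return "";                  /* unknown variables read as empty too */
    }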
Update the calling convention for all external-facing functions. By
external-facing, I mean all functions that are directly referenced in
cube.sql. Prior to my update, all functions used the older V0 calling
convention. They now use V1.
New Functions:
cube(float[]), which makes a zero-volume cube from a float array
cube(float[], float[]), which allows the user to create a cube from
two float arrays: one for the upper-right and one for the lower-left
coordinate.
cube_subset(cube, int4[]), to allow you to reorder or choose a subset of
dimensions from a cube, using index values specified in the array.
Joshua Reich
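For reference, a minimal sketch of the difference between the two
conventions; this hypothetical add_one function is illustrative and not
part of cube.sql:

    #include "postgres.h"
    #include "fmgr.h"

    PG_MODULE_MAGIC;

    /* Old V0 convention: a bare C function, no null or type checking. */
    double
    add_one_v0(double arg)
    {
        return arg + 1.0;
    }

    /* V1 convention: uniform Datum interface with argument macros. */
    PG_FUNCTION_INFO_V1(add_one);

    Datum
    add_one(PG_FUNCTION_ARGS)
    {
        float8  arg = PG_GETARG_FLOAT8(0);

        PG_RETURN_FLOAT8(arg + 1.0);
    }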
When we are about to split an index page to do an insertion, first look
to see if any entries marked LP_DELETE exist on the page, and if so remove
them to try to make enough space for the desired insert. This should reduce
index bloat in heavily-updated tables, although of course you still need
VACUUM eventually to clean up the heap.
Junji Teramoto
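A hedged sketch of the control flow; the helper names (page_free_space,
page_has_lp_delete_items, remove_lp_delete_items, split_page) are
hypothetical, and the real btree code is considerably more involved:

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct Page Page;       /* opaque, for illustration */

    extern size_t page_free_space(Page *page);
    extern bool   page_has_lp_delete_items(Page *page);
    extern void   remove_lp_delete_items(Page *page);
    extern void   split_page(Page *page);

    /*
     * Before splitting to make room for an insertion, first try to
     * reclaim space from entries already marked LP_DELETE.
     */
    static void
    make_room_for_insert(Page *page, size_t item_size)
    {
        if (page_free_space(page) < item_size &&
            page_has_lp_delete_items(page))
            remove_lp_delete_items(page);

        if (page_free_space(page) < item_size)
            split_page(page);       /* cleanup wasn't enough */
    }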
< o Add reporting of the current WAL file, perhaps as part of
< partial log file archiving
> o Add reporting of the current WAL file and offset, perhaps as
> part of partial log file archiving
configuration files that can be altered by a DBA. The australian_timezones
GUC setting disappears, replaced by a timezone_abbreviations setting (set this
to 'Australia' to get the effect of australian_timezones). The list of zone
names defined by default has undergone a bit of cleanup, too. Documentation
still needs some work --- in particular, should we fix Table B-4, or just get
rid of it? Joachim Wieland, with some editorializing by moi.
thinking that indexes of different sizes are equally attractive. Per
gripe from Jim Nasby. (I remain unconvinced that there's such a problem
in existing releases, but CVS HEAD definitely has got a problem because
of its new count-only-leaf-pages approach to indexscan costing.)
Because BufferAlloc tries to insert a new mapping entry before deleting
the old one for a buffer, we have a transient need for more than NBuffers
entries --- one more in 8.1, and as many as NUM_BUFFER_PARTITIONS more in
CVS HEAD.
In theory this could lead to an "out of shared memory" failure if shmem
had already been completely claimed by the time the extra entries were
needed.
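A hedged sketch of why the table needs headroom beyond NBuffers; the
helper names are hypothetical stand-ins for the buffer-manager hash
table calls:

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct BufferTag { uint32_t rel; uint32_t block; } BufferTag;

    extern bool insert_mapping(BufferTag tag, int buf_id);
    extern void delete_mapping(BufferTag tag);

    /*
     * The new page identity is entered before the old one is removed,
     * so every remap in progress transiently occupies one extra hash
     * entry.  With one remap possible per partition lock, the table
     * must be sized for NBuffers + NUM_BUFFER_PARTITIONS entries.
     */
    static bool
    remap_buffer(BufferTag old_tag, BufferTag new_tag, int buf_id)
    {
        if (!insert_mapping(new_tag, buf_id))   /* fails if table is full */
            return false;
        delete_mapping(old_tag);                /* old entry freed last */
        return true;
    }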
to the low-order bits of the entry hash value. Also make some incidental
cleanups in the dynahash API, such as not exporting the hash header
structs to the world.
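A minimal sketch of partition selection from the low-order hash bits,
assuming a power-of-two partition count; illustrative, not the dynahash
source:

    #include <stdint.h>

    #define NUM_PARTITIONS 16       /* must be a power of two */

    /* Pick a partition (and thus a partition lock) for an entry. */
    static inline uint32_t
    hash_to_partition(uint32_t hashvalue)
    {
        return hashvalue & (NUM_PARTITIONS - 1);
    }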
effects in a nestloop inner indexscan, I had only dealt with plain index
scans and the index portion of bitmap scans. But there will be cache
benefits for the heap accesses of bitmap scans too, so fix
cost_bitmap_heap_scan() to account for that.
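A deliberately simplified illustration of the effect; the planner's real
computation is more refined than this cap-by-table-size approximation:

    /*
     * Toy estimate of distinct heap pages fetched when a bitmap heap
     * scan repeats as the inner side of a nestloop: later iterations
     * revisit pages already in cache, so total fetches cannot keep
     * growing linearly past the table size.
     */
    static double
    repeated_scan_pages(double pages_per_scan, double loop_count,
                        double table_pages)
    {
        double  naive = pages_per_scan * loop_count;

        return (naive < table_pages) ? naive : table_pages;
    }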
-built-in mechanism through the -MP flag. Adjust the file extensions to
look more like Automake practice. This frees up the .d suffix for use by
DTrace.
opclass. This is not so much because anyone's likely to create an index
on TID, as that sorting TIDs can be useful. Also added max and min
aggregates while at it, so that one can investigate the clusteredness of
a table with queries like SELECT min(ctid), max(ctid) FROM tab WHERE ...
Greg Stark and Tom Lane
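A hedged sketch of the ordering such an opclass needs, comparing block
number first and offset second; the struct is an illustrative stand-in
for ItemPointerData:

    #include <stdint.h>

    typedef struct TidSketch
    {
        uint32_t block;             /* heap block number */
        uint16_t offset;            /* line pointer within the block */
    } TidSketch;

    /* btree-style comparator: returns <0, 0, or >0. */
    static int
    tid_cmp(TidSketch a, TidSketch b)
    {
        if (a.block != b.block)
            return (a.block < b.block) ? -1 : 1;
        if (a.offset != b.offset)
            return (a.offset < b.offset) ? -1 : 1;
        return 0;
    }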
pg_regress: there's no other way to cope with testing a relocated
installation. Seems better to call it --psqldir though, since the
only thing we need to find in that case is psql. It'd be better if
we could use find_other_exec, but that's not happening unless we are
willing to install pg_regress alongside psql, which seems unlikely
to happen.
the check on diff's exit status to test for literally 0 or 1. Someone
should look into why WIFEXITED/WEXITSTATUS don't work for this, but I've
spent more than enough time on it already.
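For reference, the POSIX way to decode a child's wait status, keeping in
mind diff's exit-code convention (0 = no differences, 1 = differences,
2 = trouble); a minimal sketch, not the pg_regress code:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>

    int
    main(void)
    {
        int status = system("diff expected.out results.out > /dev/null");

        if (WIFEXITED(status))
        {
            int code = WEXITSTATUS(status);

            if (code == 0)
                puts("files match");
            else if (code == 1)
                puts("files differ");
            else
                puts("diff reported an error");
        }
        else
            puts("diff did not exit normally");

        return 0;
    }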
recovery. In the first place, it doesn't work because slru's
latest_page_number isn't set up yet (this is why we've been hearing reports
of strange "apparent wraparound" log messages during crash recovery, but
only from people who'd managed to advance their next-mxact counters some
considerable distance from 0). In the second place, it seems a bit unwise
to be throwing away data during crash recovery anyway. This latter
consideration convinces me to just disable truncation during recovery,
rather than computing latest_page_number and pushing ahead.
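A hedged sketch of the shape of the fix; the names here (in_recovery,
truncate_slru_segments) are illustrative stand-ins for the
multixact/slru code:

    #include <stdbool.h>

    extern bool in_recovery;        /* stand-in for the recovery flag */
    extern void truncate_slru_segments(int cutoff_page);

    /*
     * Skip truncation entirely while replaying WAL: latest_page_number
     * is not valid yet, and discarding data mid-recovery is risky.
     */
    static void
    maybe_truncate(int cutoff_page)
    {
        if (in_recovery)
            return;
        truncate_slru_segments(cutoff_page);
    }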
just exec instead of creating a subprocess. This reduces process usage
from four processes per parallel test to two. I have no idea whether
a comparable optimization is possible or useful in the Windows port.
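A minimal illustration of the technique: exec replaces the current
process image instead of creating a child, saving one process per
invocation. This toy launcher is illustrative, not the pg_regress code:

    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
        /* Replace this process with the command; no fork, no wait. */
        execlp("echo", "echo", "hello from the exec'd process", (char *) NULL);

        /* Reached only if exec failed. */
        perror("execlp");
        return 1;
    }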