< * -Allow pg_dump to dump CREATE CONVERSION (Christopher)
< * -Make pg_restore continue after errors, so it acts more like pg_dump scripts
485,486d482
< * Allow pg_dumpall to use non-text output formats
< * Have pg_dump use multi-statement transactions for INSERT dumps
493,496d488
< * Allow pg_dump to use multiple -t and -n switches
<
< This should be done by allowing a '-t schema.table' syntax.
<
498a491,512
>
> * pg_dump
> o Allow pg_dumpall to use non-text output formats
> o Have pg_dump use multi-statement transactions for INSERT dumps
> o -Allow pg_dump to dump CREATE CONVERSION (Christopher)
> o -Make pg_restore continue after errors, so it acts more like pg_dump
> scripts
> o Allow pg_dump to use multiple -t and -n switches
>
> This should be done by allowing a '-t schema.table' syntax; an
> example invocation is sketched just after this list.
>
> o Add dumping of comments on composite type columns
> o Add dumping of comments on index columns
> o Replace pg_dumpall's crude DELETE FROM method for cleaning
> users and groups with separate DROP commands
> o Add dumping and restoring of LOB comments
> o Stop dumping CASCADE on DROP TYPE commands in clean mode
> o Add full object name to the tag field, e.g. for operators we
> need '=(integer, integer)' instead of just '='.
> o Add custom-format dumps to pg_dumpall. This is probably best
> done by combining pg_dump and pg_dumpall into a single binary
> o Add CSV output format
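
A hedged example of the multiple-switch invocation proposed above; the
-t and -n options themselves are real, the multi-use and schema
qualification are what is being requested, and all database, schema,
and table names are illustrative:

    pg_dump -t public.orders -t archive.orders -n reporting mydb

Each -t would name one schema-qualified table and each -n a whole
schema to include in the dump.
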
value of 'start' could be past the end of the page, if the page was
split by some concurrent inserting process since we visited it. In
this situation the code could look at bogus entries and possibly find
a match (since after all those entries still contain what they had
before the split). This would lead to 'specified item offset is too large'
followed by 'PANIC: failed to add item to the page', as reported by Joe
Conway for scenarios involving heavy concurrent insertion activity.
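
A hedged illustration of the revalidation this failure calls for, not
the actual fix; 'page' and 'start' follow the fragment above, and the
page API calls are PostgreSQL's standard ones:

    OffsetNumber maxoff = PageGetMaxOffsetNumber(page);

    if (start > maxoff)
    {
        /*
         * A concurrent inserter split the page since we last visited
         * it; the remembered position no longer exists, so force a
         * fresh search instead of reading stale pre-split entries.
         */
        start = InvalidOffsetNumber;
    }
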
to the physical layout of the rowtype, i.e., there are dummy arguments
corresponding to any dropped columns in the rowtype. We formerly had a
couple of places that did it this way and several others that did not.
Fixes Gaetano Mendola's "cache lookup failed for type 0" bug of 5-Aug.
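
The layout rule in question, illustrated in SQL (the table is
illustrative): a dropped column leaves a physical slot behind in the
rowtype, and anything constructing values of that rowtype must supply
a dummy argument in that position:

    CREATE TABLE t (a int, b int, c int);
    ALTER TABLE t DROP COLUMN b;
    -- t's rowtype still has three physical columns; internal code
    -- building a value of type t must pass a dummy (null) for b
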
< * -Allow savepoints / nested transactions [transactions] (Alvaro)
> * -Allow savepoints / nested transactions (Alvaro)
348a349,353
> * Add an option to automatically use savepoints for each statement in a
> multi-statement transaction.
>
> When enabled, this would allow errors in multi-statement transactions
> to be automatically ignored.
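
A sketch in plain SQL of what such an option would do implicitly
around every statement (the table name is illustrative):

    BEGIN;
    SAVEPOINT s;
    INSERT INTO t VALUES ('bad');   -- suppose this statement errors
    ROLLBACK TO SAVEPOINT s;        -- undo it; the transaction survives
    SAVEPOINT s;
    INSERT INTO t VALUES ('good');  -- processing continues normally
    COMMIT;
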
global variables are problematic on this platform. Simplest solution
seems to be to initialize pthread key variable to 0. Also, rename this
variable and check_sigpipe_handler to something involving "pq" to
avoid gratuitous pollution of application namespace.
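
A minimal sketch of the suggested fix; the identifier is hypothetical
(the real variable would get a "pq"-prefixed name per the rename
above):

    /*
     * Hypothetical name.  The explicit zero initializer makes this an
     * ordinary defined symbol rather than a tentative (common) one,
     * which is the simplest reading of the platform problem above.
     */
    static pthread_key_t pq_sigpipe_key = 0;
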
> * Set proper permissions on non-system schemas during db creation
>
> Currently all schemas are owned by the super-user because they are
> copied from the template1 database.
>
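
What the change would automate; until then the equivalent must be done
by hand after connecting to the new database (role and database names
are illustrative):

    CREATE DATABASE appdb OWNER appuser;
    -- connect to appdb: its schemas were copied from template1 and are
    -- still owned by the super-user, so fix them up manually
    ALTER SCHEMA public OWNER TO appuser;
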
of XLogInsert had the same sort of checkpoint interlock problem as
RecordTransactionCommit, and indeed I found some. Btree index build
and ALTER TABLE SET TABLESPACE write data outside the friendly confines
of the buffer manager, and therefore they have to take their own
responsibility for checkpoint interlock. The easiest solution seems to
be to force smgrimmedsync at the end of the index build or table copy,
even when the operation is being WAL-logged. This is sufficient since
the new index or table will be of interest to no one if we don't get
as far as committing the current transaction.
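
The shape of that solution in the btree-build case, sketched from the
description above (the structure and field names are assumptions, not
quotations from the source):

    /*
     * The pages were written outside the buffer manager, so even in
     * the WAL-logged case force them to disk before commit; otherwise
     * a checkpoint could complete believing they were already synced.
     */
    if (!wstate->index->rd_istemp)
        smgrimmedsync(wstate->index->rd_smgr);
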
therefore starting with GetCurrentTransactionId is wrong. Fixes
miscomputation of RecentGlobalXmin leading to bizarre behavior
reported by Gavin Sherry.
>
> * Allow buffered WAL writes and fsync
>
> Instead of guaranteeing recovery of all committed transactions, this
> would provide improved performance by delaying WAL writes and fsync
> so an abrupt operating system restart might lose a few seconds of
> committed transactions but still be consistent. We could perhaps
> remove the 'fsync' parameter (which results in an inconsistent
> database) in favor of this capability.
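
A sketch of how this might surface in postgresql.conf; the first
parameter name is hypothetical, while fsync is the existing setting
the proposal might retire:

    #wal_delayed_flush = on   # hypothetical: buffer WAL writes; an OS
                              # crash may lose recent commits, but the
                              # database stays consistent
    fsync = on                # existing; 'off' risks an inconsistent
                              # database after a crash
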
don't hold an open file reference to the original table at the end.
This is a good thing in any case, particularly so on Windows which
cannot drop the table file otherwise.