> why does CVS tip still give me
>
> regression=# select extract(century from now());
> date_part
> -----------
> 20
> (1 row)
> [ ... looks in code ... ]
>
> Apparently it's because you fixed only timestamp_part, and not
> timestamptz_part. I'm not too sure about what timestamp_trunc or
> timestamptz_trunc should do, but they may be wrong as well.
Sigh... as usual, what is not tested does not work :-(
> Could we have a more complete patch?
Please find a submission attached. I hope it really fixes all decade,
century and millennium issues for extract and the *_trunc functions on
interval and the other timestamp types. If someone could check that
the results are reasonable, it would be great.
I indeed overlooked the fact that there were two functions. The patch
fixes the code so that both variants agree.
I added comments to the interval extraction code, because it relies on
C division truncating toward zero, which leaves a negative remainder:
-7/10 is 0, with -7 left as the remainder.
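For the record, here is a minimal standalone C program (my
illustration, not part of the patch) demonstrating the division
behavior those comments rely on:

    #include <stdio.h>

    int
    main(void)
    {
        /* C99 integer division truncates toward zero, so the remainder
         * keeps the sign of the dividend: -7/10 is 0 and -7%10 is -7.
         * The interval extraction code depends on exactly this. */
        printf("-7/10 = %d, -7%%10 = %d\n", -7 / 10, -7 % 10);
        return 0;
    }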
As for the *_trunc functions, I have chosen to return the first year of
the century or millennium: -100, 1, 101... 1001, 2001, etc. Indeed, I
don't think it would make sense to return 2000 (the last year of the
2nd millennium) when truncating years of the third millennium.
I also fixed the code so that all decades last 10 years and decade 199
means the 1990s.
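To make those conventions concrete, here is a small standalone sketch
(an illustration I wrote, not code from the patch) of the intended
century truncation and decade numbering for positive years:

    #include <stdio.h>

    /* Illustration only: truncate a year (> 0) to the first year of its
     * century (..., 1901, 2001, ...), and number decades so that decade
     * 199 covers 1990-1999. */
    static int
    century_first_year(int year)
    {
        return ((year + 99) / 100) * 100 - 99;
    }

    static int
    decade(int year)
    {
        return year / 10;
    }

    int
    main(void)
    {
        printf("%d %d %d\n",
               century_first_year(2000),    /* 1901: 2000 still belongs
                                             * to the 20th century */
               century_first_year(2001),    /* 2001 */
               century_first_year(101));    /* 101 */
        printf("%d %d\n", decade(1990), decade(1999));    /* 199 199 */
        return 0;
    }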
I have added some tests covering the tricky cases. The formulas may be
simplified, but all these cases must still pass, so please keep them.
Fabien Coelho
presence of dropped columns. Document the already-presumed fact that
eref aliases in relation RTEs are supposed to have entries for dropped
columns; cause the user alias structs to have such entries too, so that
there's always a one-to-one mapping to the underlying physical attnums.
Adjust expandRTE() and related code to handle the case where a column
that is part of a JOIN has been dropped. Generalize expandRTE()'s API
so that it can be used in a couple of places that formerly rolled their
own implementation of the same logic. Fix ruleutils.c to suppress
display of aliases for columns that were dropped since the rule was made.
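A toy model of that invariant (not PostgreSQL source; the empty-string
placeholder for a dropped column is an assumption for illustration):
with one alias entry per physical attnum, consumers can walk the list
positionally and skip dropped columns:

    #include <stdio.h>

    int
    main(void)
    {
        /* One alias entry per physical attnum; "" marks a dropped
         * column (placeholder convention assumed for this example). */
        const char *colnames[] = {"id", "", "name"};  /* attnum 2 dropped */
        int         natts = sizeof(colnames) / sizeof(colnames[0]);

        for (int attnum = 1; attnum <= natts; attnum++)
        {
            if (colnames[attnum - 1][0] == '\0')
                printf("attnum %d: dropped, alias suppressed\n", attnum);
            else
                printf("attnum %d: %s\n", attnum, colnames[attnum - 1]);
        }
        return 0;
    }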
< * -Allow pg_dump to dump CREATE CONVERSION (Christopher)
< * -Make pg_restore continue after errors, so it acts more like pg_dump scripts
485,486d482
< * Allow pg_dumpall to use non-text output formats
< * Have pg_dump use multi-statement transactions for INSERT dumps
493,496d488
< * Allow pg_dump to use multiple -t and -n switches
<
< This should be done by allowing a '-t schema.table' syntax.
<
498a491,512
>
> * pg_dump
> o Allow pg_dumpall to use non-text output formats
> o Have pg_dump use multi-statement transactions for INSERT dumps
> o -Allow pg_dump to dump CREATE CONVERSION (Christopher)
> o -Make pg_restore continue after errors, so it acts more like pg_dump
> scripts
> o Allow pg_dump to use multiple -t and -n switches
>
> This should be done by allowing a '-t schema.table' syntax.
>
> o Add dumping of comments on composite type columns
> o Add dumping of comments on index columns
> o Replace crude DELETE FROM method of pg_dumpall for cleaning of
> users and groups with separate DROP commands
> o Add dumping and restoring of LOB comments
> o Stop dumping CASCADE on DROP TYPE commands in clean mode
> o Add full object name to the tag field. e.g. for operators we need
> '=(integer, integer)', instead of just '='.
> o Add pg_dumpall custom format dumps. This is probably best done by
> combining pg_dump and pg_dumpall into a single binary
> o Add CSV output format
value of 'start' could be past the end of the page, if the page was
split by some concurrent inserting process since we visited it. In
this situation the code could look at bogus entries and possibly find
a match (since after all those entries still contain what they had
before the split). This would lead to 'specified item offset is too large'
followed by 'PANIC: failed to add item to the page', as reported by Joe
Conway for scenarios involving heavy concurrent insertion activity.
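The shape of the race, in a standalone toy model (an assumed
simplification, not the actual fix): a cached offset must be
re-validated against the page's current item count once the page is
revisited:

    #include <stdio.h>

    int
    main(void)
    {
        int maxoff = 10;    /* items on the page at the first visit */
        int start  = 8;     /* cached position from that visit */

        maxoff = 5;         /* concurrent split moved items 6..10 away */

        if (start > maxoff) /* the kind of re-validation needed */
            start = maxoff;

        printf("resume scan at offset %d of %d\n", start, maxoff);
        return 0;
    }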
to the physical layout of the rowtype, i.e., there are dummy arguments
corresponding to any dropped columns in the rowtype. We formerly had a
couple of places that did it this way and several others that did not.
Fixes Gaetano Mendola's "cache lookup failed for type 0" bug of 5-Aug.
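A toy illustration of the convention (not the committed code): one
argument slot per physical attribute, with a NULL dummy standing in for
each dropped column:

    #include <stdio.h>

    int
    main(void)
    {
        /* Physical rowtype layout: column "b" was dropped but still
         * occupies an attribute slot, so the row value gets a dummy. */
        struct
        {
            const char *name;
            int         dropped;
        }           atts[] = {{"a", 0}, {"b", 1}, {"c", 0}};

        printf("ROW(");
        for (int i = 0; i < 3; i++)
            printf("%s%s", i ? ", " : "",
                   atts[i].dropped ? "NULL" : atts[i].name);
        printf(")\n");      /* prints: ROW(a, NULL, c) */
        return 0;
    }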
< * -Allow savepoints / nested transactions [transactions] (Alvaro)
> * -Allow savepoints / nested transactions (Alvaro)
348a349,353
> * Add an option to automatically use savepoints for each statement in a
> multi-statement transaction.
>
> When enabled, this would allow errors in multi-statement transactions
> to be automatically ignored.
global variables are problematic on this platform. Simplest solution
seems to be to initialize pthread key variable to 0. Also, rename this
variable and check_sigpipe_handler to something involving "pq" to
avoid gratuitous pollution of application namespace.
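Presumably the change reduces to something like the following (the
identifiers here are illustrative, not the real ones):

    #include <pthread.h>

    /* Assumed shape of the fix: an explicit initializer keeps the key
     * out of the uninitialized-data (common) section that this platform
     * handles badly, and the "pq" prefix keeps libpq's symbols out of
     * the application's namespace. */
    static pthread_key_t pq_thread_key = 0;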
> * Set proper permissions on non-system schemas during db creation
>
> Currently all schemas are owned by the super-user because they are
> copied from the template1 database.
>
of XLogInsert had the same sort of checkpoint interlock problem as
RecordTransactionCommit, and indeed I found some. Btree index build
and ALTER TABLE SET TABLESPACE write data outside the friendly confines
of the buffer manager, and therefore they have to take their own
responsibility for checkpoint interlock. The easiest solution seems to
be to force smgrimmedsync at the end of the index build or table copy,
even when the operation is being WAL-logged. This is sufficient since
the new index or table will be of interest to no one if we don't get
as far as committing the current transaction.
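In code terms the fix presumably boils down to a call of this shape at
the end of the build or copy (a fragment with assumed variable names,
shown out of its backend context rather than as the literal patch):

    /* Force the just-written relation to disk immediately: the pages
     * were written outside the buffer manager, so a checkpoint before
     * commit would otherwise not know to flush them. If the transaction
     * aborts, the new relfilenode is never referenced, so the extra
     * sync is harmless. */
    smgrimmedsync(rel->rd_smgr);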
therefore starting with GetCurrentTransactionId is wrong. Fixes
miscomputation of RecentGlobalXmin leading to bizarre behavior
reported by Gavin Sherry.
>
> * Allow buffered WAL writes and fsync
>
> Instead of guaranteeing recovery of all committed transactions, this
> would provide improved performance by delaying WAL writes and fsync
> so an abrupt operating system restart might lose a few seconds of
> committed transactions but still be consistent. We could perhaps
> remove the 'fsync' parameter (which results in an inconsistent
> database) in favor of this capability.