>>'127.0.0.1/32' instead of '127.0.0.1 255.255.255.255'.
>>
>>
>
>Yeah, that's probably the path of least resistance. Note that the
>comments and possibly the SGML docs need to be adjusted to match,
>however, so it's not quite a one-liner.
Andrew Dunstan
>
> The patch adds the missing "libpgport.a" file to the installation under
> "install-all-headers". It is needed by some contribs. I install the
> library in "pkglibdir", but I was wondering whether it should be "libdir"?
> I was wondering also whether it would make sense to have a "libpgport.so"?
>
> It fixes various macros which are used by contrib makefiles, especially
> libpq_*dir and LDFLAGS when used under PGXS. It seems to me that they
> are needed.
>
> It adds the ability to test and use PGXS with contribs, with "make
> USE_PGXS=1". Without the macro, this is exactly as before; there should be
> no difference, especially wrt the vpath feature that seemed broken by the
> previous submission. So it should not harm anybody, and it is useful at
> least to me.
>
> It fixes some inconsistencies in various contrib makefiles
> (useless override, ":=" instead of "=").
Fabien COELHO
< o Allow databases, schemas, and indexes to be moved to different
< tablespaces
> o Allow databases and schemas to be moved to different tablespaces
>
> One complexity is whether moving a schema should move all existing
> schema objects or just define the location for future object creation.
>
382c385
< o Add ALTER INDEX that works just like ALTER TABLE already does
> o -Add ALTER INDEX that works just like ALTER TABLE already does
384d386
< o Add ALTER INDEX syntax to work like ALTER TABLE indexname
multiline command or to rerun the command easily later.
Whereas displaying the failed SQL command is a matter of fixing the error
messages.
The latter is complicated by failed COPY commands which, with die-on-errors
off, result in the data being processed as a command, so dumping the
command will dump all of the data.
In the case of long commands, should the whole command be dumped? (eg.
several pages of function definition).
In the case of the COPY command, I'm not sure what to do. Obviously, it
would be best to avoid sending the data, but the data and command are
combined (from memory). Also, the 'data' may be in the form of INSERT
statements.
Attached patch produces the first 125 chars of the command:
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC Entry 26; 1255 16449270
FUNCTION
plpgsql_call_handler() pjw
pg_restore: [archiver (db)] could not execute query: ERROR: function
"plpgsql_call_handler" already exists with same argument types
Command was: CREATE FUNCTION plpgsql_call_handler() RETURNS
language_handler
AS '/var/lib/pgsql-8.0b1/lib/plpgsql', 'plpgsql_call_han...
pg_restore: [archiver (db)] Error from TOC Entry 27; 1255 16449271
FUNCTION
plpgsql_validator(oid) pjw
pg_restore: [archiver (db)] could not execute query: ERROR: function
"plpgsql_validator" already exists with same argument types
Command was: CREATE FUNCTION plpgsql_validator(oid) RETURNS void
AS '/var/lib/pgsql-8.0b1/lib/plpgsql', 'plpgsql_validator'
LANGU...
Philip Warner
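For illustration, a minimal standalone C sketch of the truncation shown in
the output above (the 125-character limit and trailing "..." come from the
sample; the helper name, the example command and the formatting are
hypothetical, not pg_restore's actual code):

#include <stdio.h>
#include <string.h>

/* Hypothetical helper: report at most the first 125 characters of a failed
 * command, appending "..." when the command was truncated. */
static void
report_failed_command(const char *cmd)
{
    const int limit = 125;

    if (strlen(cmd) <= (size_t) limit)
        fprintf(stderr, "    Command was: %s\n", cmd);
    else
        fprintf(stderr, "    Command was: %.*s...\n", limit, cmd);
}

int
main(void)
{
    /* A deliberately long, made-up command to show the truncation. */
    report_failed_command("CREATE FUNCTION my_long_function(integer, integer, integer) "
                          "RETURNS integer AS 'a function body that goes on for several "
                          "pages ...' LANGUAGE sql");
    return 0;
}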
as it stands - it mixes declarations in code, C++-style. The attached
patch shifts declarations to the tops of functions and enables this file
to compile cleanly as C.
Richard Poole
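As a minimal illustration of the change being described (the functions below
are made up; only the placement of the declarations matters):

/* Mixed declarations, C++/C99 style, as the file stood: */
int sum_mixed(int n)
{
    int total = 0;
    for (int i = 0; i < n; i++)   /* declaration inside the for statement */
        total += i;
    int doubled = total * 2;      /* declaration after executable code */
    return doubled;
}

/* Declarations shifted to the top of the function: compiles cleanly as C89. */
int sum_c89(int n)
{
    int total = 0;
    int i;
    int doubled;

    for (i = 0; i < n; i++)
        total += i;
    doubled = total * 2;
    return doubled;
}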
< * -Have psql \dn show only visible temp schemas using current_schemas()
< * -Have psql '\i ~/<tab><tab>' actually load files it displays from home dir
484a483,484
> * -Have psql \dn show only visible temp schemas using current_schemas()
> * -Have psql '\i ~/<tab><tab>' actually load files it displays from home dir
516a517,527
>
> * psql tab completion
>
> o Provide a list of conversions after ALTER CONVERSION?
> o Support for ALTER SEQUENCE clauses
> o Add RENAME TO to ALTER TRIGGER
> o Support for ALTER USER
> o Fix ALTER (GROUP|DOMAIN|...) <sth> DROP
> o Support for ALTER LANGUAGE <sth> RENAME TO
> o Improve support for COPY
> o Improve support for ALTER TABLE
to/in the psql-tabcomplete code. This diff includes the still missing
tab-complete support for TABLESPACE I already sent earlier. New in this
version of the patch is a small adaptation of the tab-complete code to
support the adjusted SAVEPOINT syntax committed by Tom, as well as
completion of the only half-working (and I think only by accident)
tab-complete support for "BEGIN [ TRANSACTION | WORK ]".
Below is a complete list of the things I have changed with this patch:
*) add tablespace support for CREATE/DROP/ALTER and \db
*) sync the list of possible commands following ALTER with the docs (by
adding AGGREGATE, CONVERSION, DOMAIN, FUNCTION, LANGUAGE, OPERATOR,
SEQUENCE, TABLESPACE and TYPE)
*) provide a list of valid users after "OWNER TO"
*) tab-complete support for ALTER (AGGREGATE|CONVERSION|FUNCTION)
*) basic tab-complete support for ALTER DOMAIN
*) provide a list of suitable indexes following ALTER TABLE <sth>
CLUSTER ON(?)
*) add "CLUSTER ON" and "SET" to the ALTER TABLE <sth> - tab-complete
list(fixes incorrect/wrong tab-complete with ALTER TABLE <sth> SET
+<TAB> too)
*) provide a list of possible indexes following ALTER TABLE <sth> CLUSTER ON
*) provide a list of possible commands (WITHOUT CLUSTER, WITHOUT OIDS,
TABLESPACE) following ALTER TABLE <sth> SET
*) sync "COMMENT ON" with docs by adding "CAST","CONVERSION","FUNCTION"
*) add ABSOLUTE to the list of possible commands after FETCH
*) "END" was missing from the sql-commands overview (though it had
completion support!) - I know it's deprecated but we have ABORT and
others still in ...
*) fixes small buglet with ALTER (TRIGGER|CLUSTER) ON autocomplete
(CLUSTER ON +<TAB> would produce CLUSTER ON ON - same for TRIGGER ON)
*) adapt to new SAVEPOINT syntax
*) fix incomplete support for BEGIN [ TRANSACTION | WORK ]
Stefan Kaltenbrunner
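As a rough standalone sketch of the kind of rule listed above, and not
psql's actual tab-complete.c (the function name and word-matching scheme
are hypothetical), here is how the "ALTER TABLE <sth> SET" candidates could
be chosen from the words preceding the cursor:

#include <stdio.h>
#include <strings.h>    /* strcasecmp */

/* Hypothetical sketch: given the last four words before the cursor, return
 * a NULL-terminated list of completion candidates for the
 * "ALTER TABLE <sth> SET" rule, or NULL if the rule does not apply. */
static const char **
complete_candidates(const char *prev4, const char *prev3,
                    const char *prev2, const char *prev)
{
    static const char *alter_table_set[] = {
        "WITHOUT CLUSTER", "WITHOUT OIDS", "TABLESPACE", NULL
    };

    (void) prev2;               /* the table name itself is not inspected */

    if (strcasecmp(prev4, "ALTER") == 0 &&
        strcasecmp(prev3, "TABLE") == 0 &&
        strcasecmp(prev, "SET") == 0)
        return alter_table_set;

    return NULL;
}

int
main(void)
{
    const char **list = complete_candidates("ALTER", "TABLE", "mytab", "SET");
    int i;

    if (list)
        for (i = 0; list[i] != NULL; i++)
            printf("%s\n", list[i]);
    return 0;
}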
will treat any unquoted string that starts with a $ and has no preceding
identifier chars as a potential $-quote tag; it then makes sure that the
tag chars are valid. If so, it processes the $-quote.
Philip Warner
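A standalone sketch of the scan just described (the helper names are
hypothetical and the tag-character rule is simplified; this is not the
actual pg_dump/pg_restore code):

#include <ctype.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Hypothetical: may ch appear in an identifier? */
static bool
is_ident_char(unsigned char ch)
{
    return isalnum(ch) || ch == '_';
}

/* Hypothetical sketch: does s[pos] start a dollar-quote tag such as $$ or
 * $tag$?  The '$' must not be preceded by an identifier character, and
 * every character up to the closing '$' must be a valid tag character.
 * Returns the length of the full tag (both '$' included), or 0. */
static size_t
dollar_quote_tag_len(const char *s, size_t pos)
{
    size_t i;

    if (s[pos] != '$')
        return 0;
    if (pos > 0 && is_ident_char((unsigned char) s[pos - 1]))
        return 0;

    for (i = pos + 1; s[i] != '\0' && s[i] != '$'; i++)
    {
        if (!is_ident_char((unsigned char) s[i]))
            return 0;
    }
    return (s[i] == '$') ? i - pos + 1 : 0;
}

int
main(void)
{
    const char *sql = "SELECT $body$ ... $body$";

    printf("tag length at offset 7: %zu\n", dollar_quote_tag_len(sql, 7));
    return 0;
}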
file variables:
< Another option is to allow commented values to return to their
< default values.
> This has to address environment variables that are then overridden
> by config file values. Another option is to allow commented values
> to return to their default values.
> pg_restore, as it seems that some people have scripts that rely on the
> previous "abort on error" default behavior when restoring data with a
> direct connection.
>
> Fabien Coelho
> why does CVS tip still give me
>
> regression=# select extract(century from now());
> date_part
> -----------
> 20
> (1 row)
> [ ... looks in code ... ]
>
> Apparently it's because you fixed only timestamp_part, and not
> timestamptz_part. I'm not too sure about what timestamp_trunc or
> timestamptz_trunc should do, but they may be wrong as well.
Sigh... as usual, what is not tested does not work:-(
> Could we have a more complete patch?
Please find a submission attached. I hope it really fixes all decade,
century and millennium issues for extract and *_trunc functions on
interval and other timestamp types. If someone could check that the
results are reasonable, it would be great.
I indeed overlooked the fact that there were two functions. The patch
fixes the code so that both variants agree.
I added comments to interval extractions, because the code relies on C
division having a negative remainder: -7/10 is 0 with a remainder of -7.
As for *_trunc functions, I have chosen to put the first year of the
century or millennium: -100, 1, 101... 1001, 2001, etc. Indeed, I don't
think it would make sense to put 2000 (the last year of the 2nd
millennium) for rounding all years of the third millennium.
I also fixed the code so that all decades last 10 years and decade 199
means the 1990's.
I have added some tests that are relevant to deal with tricky cases. The
formula may be simplified, but all these cases must pass. Please keep
them.
Fabien Coelho
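To make the conventions above concrete, here is a small self-contained C
illustration (the helper names are hypothetical; only the arithmetic
follows the description): C division truncates toward zero, centuries run
..., -100..-1, 1..100, ..., 1901..2000, 2001..2100, and decade 199 covers
1990-1999.

#include <assert.h>
#include <stdio.h>

/* Hypothetical helper: the century containing year y, with no year 0:
 * years -100..-1 are century -1, 1..100 are century 1, 1901..2000 are
 * century 20, and 2001..2100 are century 21. */
static int
century_of(int y)
{
    return (y > 0) ? (y + 99) / 100 : -((-y + 99) / 100);
}

/* Hypothetical helper: the decade of a non-negative year; 1990..1999 is
 * decade 199.  (Negative years need the adjustment discussed in the patch
 * so that every decade still lasts ten years.) */
static int
decade_of(int y)
{
    return y / 10;
}

int
main(void)
{
    /* C division truncates toward zero, so the remainder keeps the sign of
     * the dividend; this is what the interval extraction relies on. */
    assert(-7 / 10 == 0 && -7 % 10 == -7);

    assert(century_of(2000) == 20);    /* 2000 is still the 20th century */
    assert(century_of(2001) == 21);    /* 2001 starts the 21st century */
    assert(century_of(1) == 1);
    assert(century_of(-1) == -1);

    assert(decade_of(1999) == 199);    /* the 1990's */
    assert(decade_of(1990) == 199);

    printf("all checks pass\n");
    return 0;
}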
presence of dropped columns. Document the already-presumed fact that
eref aliases in relation RTEs are supposed to have entries for dropped
columns; cause the user alias structs to have such entries too, so that
there's always a one-to-one mapping to the underlying physical attnums.
Adjust expandRTE() and related code to handle the case where a column
that is part of a JOIN has been dropped. Generalize expandRTE()'s API
so that it can be used in a couple of places that formerly rolled their
own implementation of the same logic. Fix ruleutils.c to suppress
display of aliases for columns that were dropped since the rule was made.
< * -Allow pg_dump to dump CREATE CONVERSION (Christopher)
< * -Make pg_restore continue after errors, so it acts more like pg_dump scripts
485,486d482
< * Allow pg_dumpall to use non-text output formats
< * Have pg_dump use multi-statement transactions for INSERT dumps
493,496d488
< * Allow pg_dump to use multiple -t and -n switches
<
< This should be done by allowing a '-t schema.table' syntax.
<
498a491,512
>
> * pg_dump
> o Allow pg_dumpall to use non-text output formats
> o Have pg_dump use multi-statement transactions for INSERT dumps
> o -Allow pg_dump to dump CREATE CONVERSION (Christopher)
> o -Make pg_restore continue after errors, so it acts more like pg_dump
> scripts
> o Allow pg_dump to use multiple -t and -n switches
>
> This should be done by allowing a '-t schema.table' syntax.
>
> o Add dumping of comments on composite type columns
> o Add dumping of comments on index columns
> o Replace crude DELETE FROM method of pg_dumpall for cleaning of
> users and groups with separate DROP commands
> o Add dumping and restoring of LOB comments
> o Stop dumping CASCADE on DROP TYPE commands in clean mode
> o Add full object name to the tag field. eg. for operators we need
> '=(integer, integer)', instead of just '='.
> o Add pg_dumpall custom format dumps. This is probably best done by
> combining pg_dump and pg_dumpall into a single binary
> o Add CSV output format
value of 'start' could be past the end of the page, if the page was
split by some concurrent inserting process since we visited it. In
this situation the code could look at bogus entries and possibly find
a match (since after all those entries still contain what they had
before the split). This would lead to 'specified item offset is too large'
followed by 'PANIC: failed to add item to the page', as reported by Joe
Conway for scenarios involving heavy concurrent insertion activity.
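A highly simplified standalone sketch of the defensive pattern implied
above (the type and function are hypothetical stand-ins, not the btree
code): an offset computed before a lock was released has to be
re-validated against the page's current item count once the lock is
re-acquired, because a concurrent split may have moved items away.

#include <stdio.h>

/* Hypothetical stand-in for an index page: only the current item count. */
typedef struct
{
    unsigned int nitems;
} FakePage;

/* Hypothetical sketch: clamp a cached start offset (1-based) so that a
 * scan never walks into entries that no longer belong to this page. */
static unsigned int
revalidate_start(const FakePage *page, unsigned int cached_start)
{
    if (cached_start > page->nitems + 1)
        return page->nitems + 1;    /* nothing left to look at here */
    return cached_start;
}

int
main(void)
{
    FakePage page = { 5 };          /* the page now holds only 5 items */

    /* An offset remembered from before the split (say, 9) is stale. */
    printf("start = %u\n", revalidate_start(&page, 9));
    return 0;
}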
to the physical layout of the rowtype, ie, there are dummy arguments
corresponding to any dropped columns in the rowtype. We formerly had a
couple of places that did it this way and several others that did not.
Fixes Gaetano Mendola's "cache lookup failed for type 0" bug of 5-Aug.
< * -Allow savepoints / nested transactions [transactions] (Alvaro)
> * -Allow savepoints / nested transactions (Alvaro)
348a349,353
> * Add an option to automatically use savepoints for each statement in a
> multi-statement transaction.
>
> When enabled, this would allow errors in multi-statement transactions
> to be automatically ignored.
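For illustration, a hedged client-side libpq sketch of the behavior the
item above describes (the wrapper function is made up, and a server-side
option would not be implemented this way, but the per-statement savepoint
dance is the same idea):

#include <stdio.h>
#include <libpq-fe.h>

/* Hypothetical: run one statement inside its own savepoint so that an
 * error rolls back only that statement, not the whole transaction. */
static void
exec_with_savepoint(PGconn *conn, const char *sql)
{
    PGresult *res;

    PQclear(PQexec(conn, "SAVEPOINT per_stmt"));

    res = PQexec(conn, sql);
    if (PQresultStatus(res) == PGRES_FATAL_ERROR)
    {
        fprintf(stderr, "ignored: %s", PQerrorMessage(conn));
        PQclear(PQexec(conn, "ROLLBACK TO SAVEPOINT per_stmt"));
    }
    else
        PQclear(PQexec(conn, "RELEASE SAVEPOINT per_stmt"));
    PQclear(res);
}

int
main(void)
{
    PGconn *conn = PQconnectdb("");   /* connection settings from environment */

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    PQclear(PQexec(conn, "BEGIN"));
    exec_with_savepoint(conn, "CREATE TABLE t (i int)");
    exec_with_savepoint(conn, "CREATE TABLE t (i int)");   /* fails, ignored */
    exec_with_savepoint(conn, "INSERT INTO t VALUES (1)");
    PQclear(PQexec(conn, "COMMIT"));

    PQfinish(conn);
    return 0;
}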
global variables are problematic on this platform. Simplest solution
seems to be to initialize pthread key variable to 0. Also, rename this
variable and check_sigpipe_handler to something involving "pq" to
avoid gratuitous pollution of application namespace.
> * Set proper permissions on non-system schemas during db creation
>
> Currently all schemas are owned by the super-user because they are
> copied from the template1 database.
>
of XLogInsert had the same sort of checkpoint interlock problem as
RecordTransactionCommit, and indeed I found some. Btree index build
and ALTER TABLE SET TABLESPACE write data outside the friendly confines
of the buffer manager, and therefore they have to take their own
responsibility for checkpoint interlock. The easiest solution seems to
be to force smgrimmedsync at the end of the index build or table copy,
even when the operation is being WAL-logged. This is sufficient since
the new index or table will be of interest to no one if we don't get
as far as committing the current transaction.