and should do now that we control our own destiny for timezone handling,
but this commit gets the bulk of the picayune diffs in place.
Magnus Hagander and Tom Lane.
> * Support composite types as table columns
286,289d286
< * Python
< o Allow users to register their own types with pg_
< o Allow SELECT to return a dictionary of dictionaries
< o Allow COPY BINARY FROM
456d452
< * Support composite types as table columns
< Bracketed items "[]" have more detailed.
> Bracketed items "[]" have more detail.
35,36d34
< * Remove unreferenced table files and temp tables during database vacuum
< or postmaster startup (Bruce)
68c66
< * Allow pg_dump to dump sequences using NO_MAXVALUE and NO_MINVALUE
> * -Allow pg_dump to dump sequences using NO_MAXVALUE and NO_MINVALUE
70c68
< * Prevent whole-row references from leaking memory, e.g. SELECT COUNT(tab.*)
> * -Prevent whole-row references from leaking memory, e.g. SELECT COUNT(tab.*)
76c74
< * Make LENGTH() of CHAR() not count trailing spaces
> * -Make LENGTH() of CHAR() not count trailing spaces
145c143
< * Allow SELECT * FROM tab WHERE int2col = 4 to use int2col index, int8,
> * -Allow SELECT * FROM tab WHERE int2col = 4 to use int2col index, int8,
179c177
< * Allow more ISOLATION LEVELS to be accepted, but issue a warning for them
> * -Allow more ISOLATION LEVELS to be accepted
186c184
< * Add GUC setting to make created tables default to WITHOUT OIDS
> * -Add GUC setting to make created tables default to WITHOUT OIDS
265d262
< * Allow fastpast to pass values in portable format
271c268
< * Move psql backslash database information into the backend, use nmumonic
> * Move psql backslash database information into the backend, use nmeumonic
275,283d271
< * JDBC
< o Comprehensive test suite. This may be available already.
< o JDBC-standard BLOB support
< o Error Codes (pending backend implementation)
< o Support both 'make' and 'ant'
< o Fix LargeObject API to handle OIDs as unsigned ints
< o Use cursors implicitly to avoid large results (see setCursorName())
< o Add LISTEN/NOTIFY support to the JDBC driver (Barry)
<
332c320
< * Have pg_dump -c clear the database using dependency information
> * -Have pg_dump -c clear the database using dependency information
367,368c355,356
< * Cache last known per-tuple offsets to speed long tuple access
< * Automatically place fixed-width, NOT NULL columns first in a table
> * Cache last known per-tuple offsets to speed long tuple access, adjusting
> for NULLs and TOAST values
467c455,456
< * Change representation of whole-tuple parameters to functions
> * -Change representation of whole-tuple parameters to functions
> * Support composite types as table columns
478,479d466
< * Allow the regression tests to start postmaster with -i so the tests
< can be run on systems that don't support unix-domain sockets
a variant of the function for the 'numeric' datatype; it would be possible
to add additional variants for other datatypes, but I haven't done so yet.
This commit includes regression tests and minimal documentation; if we
want developers to actually use this function in applications, we'll
probably need to document what it does more fully.
rather than allowing them only in a few special cases as before. In
particular you can now pass a ROW() construct to a function that accepts
a rowtype parameter. Internal generation of RowExprs fixes a number of
corner cases that used to not work very well, such as referencing the
whole-row result of a JOIN or subquery. This represents a further step in
the work I started a month or so back to make rowtype values into
first-class citizens.
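For illustration, a minimal sketch of the new behavior (table, function, and
column names are hypothetical; the cast to the rowtype is shown for clarity):
    CREATE TABLE pt (a int, b text);
    CREATE FUNCTION pt_b(pt) RETURNS text AS 'SELECT $1.b' LANGUAGE SQL;
    SELECT pt_b(ROW(1, 'widget')::pt);            -- ROW() construct fed to a rowtype parameter
    SELECT count(s.*) FROM (SELECT * FROM pt) s;  -- whole-row reference to a subquery result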
o -ALTER TABLE ADD COLUMN does not honor DEFAULT and non-CHECK CONSTRAINT
o -ALTER TABLE ADD COLUMN column DEFAULT should fill existing
rows with DEFAULT value
o -Allow ALTER TABLE to modify column lengths and change to binary
compatible types
Remove:
o Allow columns to be reordered using ALTER ... POSITION i col1 [,col2];
have SELECT * and INSERT honor such ordering
* ALTER ... ADD COLUMN with defaults and NOT NULL constraints works per SQL
spec. A default is implemented by rewriting the table with the new value
stored in each row.
* ALTER COLUMN TYPE. You can change a column's datatype to anything you
want, so long as you can specify how to convert the old value. Rewrites
the table. (Possible future improvement: optimize no-op conversions such
as varchar(N) to varchar(N+1).)
* Multiple ALTER actions in a single ALTER TABLE command. You can perform
any number of column additions, type changes, and constraint additions with
only one pass over the table contents.
Basic documentation is provided in the ALTER TABLE reference page, but some
more documentation work is needed.
Original patch from Rod Taylor, additional work from Tom Lane.
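For illustration, a rough sketch combining the three capabilities in a single
command (table and column names are hypothetical):
    ALTER TABLE orders
        ADD COLUMN note text NOT NULL DEFAULT 'none',  -- fills existing rows via table rewrite
        ALTER COLUMN qty TYPE numeric(10,2),           -- datatype change, also rewrites the table
        ADD CHECK (qty >= 0);                          -- constraint added in the same pass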
Regression tests and documentation have both been updated.
SQL2003 requires that both ceiling() and ceil() be present, so I have
documented both spellings. SQL2003 doesn't mention pow() as far as I
can see, so I decided to replace pow() with power() in the documentation:
there is little reason to encourage the continued usage of a function
that isn't compliant with the standard, given a standard-compliant
alternative.
RELEASE NOTES: should state that pow() is considered deprecated
(although I don't see the need to ever remove it).
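For illustration, a quick sketch of the documented spellings:
    SELECT ceil(42.8), ceiling(42.8);   -- both spellings are documented; each returns 43
    SELECT power(2, 10);                -- standard-conforming spelling of pow(); returns 1024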
process directly. Some parameters can only be set at server start;
any changes to their entries in the configuration file will be ignored
until the server is restarted.
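For illustration, a sketch of how to list such parameters, assuming the
pg_settings view with its "context" column:
    SELECT name FROM pg_settings WHERE context = 'postmaster';
    -- These settings are read only at server start; later edits to their
    -- entries in the configuration file take effect only after a restart.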
> >> have to accept a full table scan when locating records.
> >
> > It indexes them, but "is null" is not an indexable operator, so you
> > can't directly solve the above with a 3-column index. What you can do
> > instead is use a partial index, for instance
> >
> > create index i on CUSTOMER.WCCustOrderStatusLog (WCOrderStatusID)
> > where Acknowledged is null and Processing is null;
>
> That's a very nifty trick and exactly the sort of answer I was after!
Add CREATE INDEX doc mention of using partial indexes for IS NULL
indexing; idea from Tom.
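For illustration (the WHERE value below is hypothetical), a query whose
condition implies the partial-index predicate can use that index instead of a
full table scan:
    SELECT * FROM CUSTOMER.WCCustOrderStatusLog
    WHERE WCOrderStatusID = 123
      AND Acknowledged IS NULL AND Processing IS NULL;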
reference DEALLOCATE in any way. It points to EXECUTE, but not to
DEALLOCATE. Suggested fix:
... This also means that a single prepared statement cannot be used by
multiple simultaneous database clients; however, each client can create
their own prepared statement to use. The prepared statement can be
manually cleaned up using the DEALLOCATE command.
James Robinson
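For context, a minimal sketch of the life cycle the suggested wording
describes (statement and table names are hypothetical):
    PREPARE get_order (int) AS SELECT * FROM orders WHERE id = $1;
    EXECUTE get_order(42);
    DEALLOCATE get_order;   -- manual cleanup, per the suggested wording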
o -Allow dump/load of CSV format
This adds new keywords to COPY and \copy:
CSV - enable CSV mode (comma-separated values)
QUOTE - specify quote character
ESCAPE - specify escape character
FORCE - force quoting of specified column
LITERAL - suppress null comparison for columns
Doc changes included. Regression updates coming from Andrew.
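For illustration, a sketch using the CSV options as they appear in the
released COPY syntax (table and column names are hypothetical):
    COPY orders TO '/tmp/orders.csv' WITH CSV QUOTE AS '"' FORCE QUOTE note;
    COPY orders FROM '/tmp/orders.csv' WITH CSV;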
"millennium" date part implementation in postgresql, both in the code
and the documentation, so that it conforms to the official definition.
If you do not agree with the official definition, please send your
complaint to "pope@vatican.org". I'm not responsible for them;-)
With the previous version, the centuries and millennia had the wrong
numbers and started in the wrong year. Moreover, century number 0, which does
not exist in reality, lasted 200 years, and millennium number 0 lasted
2000 years.
If you want postgresql to have its own definition of "century" and
"millennium" that does not conform to the commonly accepted one, just give
them another name. I would suggest "pgCENTURY" and "pgMILLENNIUM";-)
IMO, if someone uses these options, it means that postgresql is being used for
historical data, so it makes sense to have a historical definition. Also, if I
just want to divide the year by 100 or 1000, I can do that quite easily myself.
BACKWARD INCOMPATIBLE CHANGE
Fabien Coelho - coelho@cri.ensmp.fr
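For illustration, the corrected definition in action:
    SELECT extract(century FROM date '2000-12-31');     -- 20: year 2000 closes the 20th century
    SELECT extract(century FROM date '2001-01-01');     -- 21: the 21st century starts in 2001
    SELECT extract(millennium FROM date '2001-01-01');  -- 3: likewise for millennia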
< * Allow LOCALE on a per-column basis, default to ASCII
> * Allow locale to be set at database creation
> * Allow locale on a per-column basis, default to ASCII
> * Optimize locale to have minimal performance impact when not used (Peter E)
105d106
< * Optimize locale to have minimal performance impact when not used (Peter E)
111d111
< * Allow locale to be set at database creation
> >>with allowed values of "all, mod, ddl, none" with default "none".
OK, here is a patch that implements #1. Here is sample output:
test=> set client_min_messages = 'log';
SET
test=> set log_statement = 'mod';
SET
test=> select 1;
 ?column?
----------
        1
(1 row)
test=> update test set x=1;
LOG: statement: update test set x=1;
ERROR: relation "test" does not exist
test=> update test set x=1;
LOG: statement: update test set x=1;
ERROR: relation "test" does not exist
test=> copy test from '/tmp/x';
LOG: statement: copy test from '/tmp/x';
ERROR: relation "test" does not exist
test=> copy test to '/tmp/x';
ERROR: relation "test" does not exist
test=> prepare xx as select 1;
PREPARE
test=> prepare xx as update x set y=1;
LOG: statement: prepare xx as update x set y=1;
ERROR: relation "x" does not exist
test=> explain analyze select 1;;
                                      QUERY PLAN
------------------------------------------------------------------------------------
 Result  (cost=0.00..0.01 rows=1 width=0) (actual time=0.006..0.007 rows=1 loops=1)
 Total runtime: 0.046 ms
(2 rows)
test=> explain analyze update test set x=1;
LOG: statement: explain analyze update test set x=1;
ERROR: relation "test" does not exist
test=> explain update test set x=1;
ERROR: relation "test" does not exist
It checks PREPARE and EXPLAIN ANALYZE too. The log_statement values are
'none', 'mod', 'ddl', and 'all'. For 'all', it prints before the query
is parsed, and for ddl/mod, it does it right after parsing using the
node tag (or command tag for CREATE/ALTER/DROP), so any non-parse errors
will print after the log line.
results with tuples as ordinary varlena Datums. This commit does not
in itself do much for us, except eliminate the horrid memory leak
associated with evaluation of whole-row variables. However, it lays the
groundwork for allowing composite types as table columns, and perhaps
some other useful features as well. Per my proposal of a few days ago.
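Once that follow-on work landed, a composite column could be declared along
these lines (type and table names are hypothetical):
    CREATE TYPE complex AS (r double precision, i double precision);
    CREATE TABLE ctab (id int, val complex);   -- composite type used as a table column
    INSERT INTO ctab VALUES (1, ROW(1.0, -1.0));
    SELECT (val).r FROM ctab;                  -- field selection from the composite column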
---------------------------------------------------------------------------
1. In keeping with the recent discussion that there should be more
said about views, stored procedures, and triggers, in the tutorial, I
have added a bit of verbiage to that end.
2. Some formatting changes to the datetime discussion, as well as
addition of a citation of a relevant book on calendars.
Christopher Browne
is measured in kilobytes and checked against actual physical execution
stack depth, as per my proposal of 30-Dec. This gives us a fairly
bulletproof defense against crashing due to runaway recursive functions.
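For illustration, assuming the setting in question is the max_stack_depth GUC:
    SHOW max_stack_depth;
    SET max_stack_depth = 2048;   -- limit in kilobytes, checked against the real stack depth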
remove separate implementation of ALTER TABLE SET WITHOUT OIDS in favor
of doing a regular DROP. Also, cause CREATE TABLE to account completely
correctly for the inheritance status of the OID column. This fixes
problems with dropping OID columns that have dependencies, as noted by
Christopher Kings-Lynne, as well as making sure that you can't drop an
OID column that was inherited from a parent.
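For illustration, a sketch using the syntax of releases from that era:
    CREATE TABLE otab (x int) WITH OIDS;
    ALTER TABLE otab SET WITHOUT OIDS;   -- now handled internally as a regular column drop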
listen_addresses parameter, as per recent discussion. The default behavior
is now to listen on localhost, which eliminates the need for the -i
postmaster switch in many scenarios.
Andrew Dunstan
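For illustration, assuming the stock postgresql.conf:
    SHOW listen_addresses;   -- 'localhost' by default after this change
    -- To accept remote connections, set listen_addresses = '*' in
    -- postgresql.conf and restart, instead of starting the postmaster with -i.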
types. Update the regression tests and the documentation to reflect
this. Remove the UNSAFE_FLOATS #ifdef.
This is only half the story: we still unconditionally reject
floating point operations that result in +/- infinity. See
recent thread on -hackers for more information.
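For illustration, a sketch of the behavior described above (exact error text
may differ between releases):
    SELECT 'NaN'::float8, 'Infinity'::float8;   -- special values now accepted as input
    SELECT 1e308::float8 * 10;                  -- overflow to infinity was still rejected with an error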