< o Allow COPY to output from views
> o Allow COPY to output from SELECT
570c570
< Another idea would be to allow actual SELECT statements in a COPY.
> COPY should also be able to output views.
> o Add ALTER TABLE tab ADD/DROP INHERITS parent
>
> pg_attribute.attislocal has to be set to 'false' for ADD, and
> pg_attribute.attinhcount adjusted appropriately
>
> * Referential Integrity
>
> o Add MATCH PARTIAL referential integrity
> o Change foreign key constraint for array -> element to mean element
> in array?
> o Enforce referential integrity for system tables
>
>
< Referential Integrity
< =====================
<
< * Add MATCH PARTIAL referential integrity
> Triggers
> ========
< * Change foreign key constraint for array -> element to mean element
< in array?
801d804
< * Enforce referential integrity for system tables
< * %Disallow changing default expression of a SERIAL column?
> * %Disallow changing DEFAULT expression of a SERIAL column?
472a473,476
> * Add DEFAULT .. AS OWNER so permission checks are done as the table
> owner
>
> This would be useful for SERIAL nextval() calls and CHECK constraints.
use RESET CONNECTION:
< * Add RESET SESSION command to reset all session state
> * Add RESET CONNECTION command to reset all session state
447c447
< notify the protocol when a RESET SESSION command is used.
> notify the protocol when a RESET CONNECTION command is used.
< * Add RESET CONNECTION command to reset all session state
> * Add RESET SESSION command to reset all session state
447c447
< notify the protocol when a RESET CONNECTION command is used.
> notify the protocol when a RESET SESSION command is used.
< o %Prevent child tables from altering or dropping constraints
< like CHECK that were inherited from the parent table
< like CHECK that are inherited by child tables
<
< Dropping constraints should only be possible with CASCADE.
<
> like CHECK that are inherited by child tables unless CASCADE
> is used
> o %Prevent child tables from altering or dropping constraints
> like CHECK that were inherited from the parent table
o Support ISO INTERVAL syntax if units cannot be determined from
the string, and are supplied after the string

The SQL standard states that the units after the string specify
the units of the string, e.g. INTERVAL '2' MINUTE should
return '00:02:00'. The current behavior has the units
restrict the interval value to the specified unit or unit range,
e.g. INTERVAL '70' SECOND returns '00:00:10'.

For syntax that isn't uniquely ISO or PG syntax, like '1' or
'1:30', treat it as ISO if a range specification clause is
present, and as PG if no clause is present, e.g. interpret
'1:30' MINUTE TO SECOND as '1 minute 30 seconds', and
interpret '1:30' as '1 hour, 30 minutes'.

This makes common cases like SELECT INTERVAL '1' MONTH
return SQL-standard results. The SQL standard supports a limited
number of unit combinations and does not support unit names
in the string. The PostgreSQL syntax is more flexible in
the range of units supported, e.g. PostgreSQL supports
'1 year 1 hour', while the SQL standard does not.
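
The disambiguation rule above reduces to a small decision. A hypothetical
C sketch, with illustrative names, assuming the parser already knows whether
the string is unambiguously PG syntax and whether a range clause follows:

    #include <stdbool.h>

    /* Hypothetical sketch of the disambiguation rule above; names are illustrative. */
    typedef enum { INTERVAL_ISO, INTERVAL_PG } IntervalSyntax;

    IntervalSyntax
    choose_interval_syntax(bool string_is_unambiguous_pg, bool has_range_clause)
    {
        if (string_is_unambiguous_pg)
            return INTERVAL_PG;         /* e.g. '1 year 1 hour' is PG-only syntax */

        /* ambiguous strings such as '1' or '1:30' */
        if (has_range_clause)
            return INTERVAL_ISO;        /* '1:30' MINUTE TO SECOND -> 1 min 30 sec */
        return INTERVAL_PG;             /* bare '1:30' -> 1 hour 30 minutes */
    }
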
< * -Eventually enable escape_string_warning and standard_conforming_strings
> * -Enable escape_string_warning and standard_conforming_strings
> * Make standard_conforming_strings the default in 8.3?
>
> When this is done, backslash-quote should be prohibited in non-E''
> strings because of possible confusion over how such strings treat
> backslashes. Basically, '' is always safe for a literal single
> quote, while \' might or might not be, depending on the backslash
> handling rules.
>
permission item:
< o %Allow pg_hba.conf settings to be controlled via SQL
> o %Allow per-database permissions to be set via GRANT
< This would add a function to load the SQL table from
< pg_hba.conf, and one to write its contents to the flat file.
< The table should have a line number that is a float so rows
< can be inserted between existing rows, e.g. row 2.5 goes
< between row 2 and row 3.
> Allow database connection checks based on GRANT rules in
> addition to the existing access checks in pg_hba.conf.
>
> o Add new version of PQescapeString() that doesn't double backslashes
> that are part of a client-only multibyte sequence
>
> Single-quote is not a valid byte in any supported client-only
> encoding.
>
> o Add new version of PQescapeString() that doesn't double
> backslashes when standard_conforming_strings is true and
> non-E strings are used
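
For the two PQescapeString() items above, the essential change is in when
backslashes get doubled. A minimal sketch of that rule, as a hypothetical
standalone helper rather than the libpq API (and ignoring the multibyte
handling the first item asks for):

    #include <stdbool.h>
    #include <stddef.h>

    /*
     * Hypothetical helper, not the libpq API: single quotes are always doubled,
     * backslashes are doubled only when standard_conforming_strings is off.
     * 'to' must have room for 2 * length + 1 bytes.
     */
    size_t
    escape_literal_sketch(char *to, const char *from, size_t length,
                          bool std_conforming_strings)
    {
        size_t      out = 0;

        for (size_t i = 0; i < length; i++)
        {
            char        c = from[i];

            if (c == '\'')
                to[out++] = '\'';               /* '' is safe in either mode */
            else if (c == '\\' && !std_conforming_strings)
                to[out++] = '\\';               /* \\ only needed in escape mode */
            to[out++] = c;
        }
        to[out] = '\0';
        return out;
    }

With standard_conforming_strings on, only quote doubling remains, which is
why '' stays safe in either mode while \' does not.
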
< multiple I/O channels simultaneously.
> multiple I/O channels simultaneously. One idea is to create a
> background reader that can pre-fetch sequential and index scan
> pages needed by other backends. This could be expanded to allow
> concurrent reads from multiple devices in a partitioned table.
> * Allow log_min_messages to be specified on a per-module basis
>
> This would allow administrators to see more detailed information from
> specific sections of the backend, e.g. checkpoints, autovacuum, etc.
< * Experiment with multi-threaded backend [thread]
> * Experiment with multi-threaded backend for backend creation [thread]
1003a1004,1008
>
> * Experiment with multi-threaded backend for better resource utilization
>
> This would allow a single query to make use of multiple CPU's or
> multiple I/O channels simultaneously.
> * Allow the creation of indexes with mixed ascending/descending
> specifiers
>
> This is possible now by creating an operator class with reversed sort
> operators. One complexity is that NULLs would then appear at the start
> of the result set, and this might affect certain sort types, like
> merge join.
>
> o Prevent parent tables from altering or dropping constraints
> like CHECK that are inherited by child tables
>
> Dropping constraints should only be possible with CASCADE.
>
< * %Disallow changing sequence characteristics like INCREMENT for SERIAL columns
> * %Disallow ALTER SEQUENCE changes for SERIAL sequences because pg_dump
> does not dump the changes
> * Improve port/qsort() to handle sorts with 50% unique and 50% duplicate
> values [qsort]
>
> This involves choosing better pivot points for the quicksort.
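
As an illustration of "choosing better pivot points", the usual approach is
median-of-three selection. A small self-contained sketch (not the port/qsort()
code itself):

    #include <stdio.h>

    /*
     * Median-of-three pivot selection: take the median of the first, middle
     * and last elements so that sorted, reverse-sorted and duplicate-heavy
     * inputs do not degrade quicksort to O(n^2).  Illustrative only.
     */
    static int
    median_of_three(const int *a, int lo, int mid, int hi)
    {
        if (a[lo] < a[mid])
        {
            if (a[mid] < a[hi])
                return mid;
            return (a[lo] < a[hi]) ? hi : lo;
        }
        else
        {
            if (a[lo] < a[hi])
                return lo;
            return (a[mid] < a[hi]) ? hi : mid;
        }
    }

    int
    main(void)
    {
        int v[] = {7, 7, 7, 1, 9, 7, 3};

        /* index of the value to swap into the pivot slot before partitioning */
        printf("pivot index: %d\n", median_of_three(v, 0, 3, 6));
        return 0;
    }
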
- "Add ON COMMIT capability to CREATE TABLE AS ... SELECT" is done
- "Allow PREPARE to automatically determine parameter types" is done
- "Clean up compiler warnings (especially with gcc version 4)" is done:
AFAIK there are no remaining gcc4 compiler warnings to be fixed.
- Creating rules to do view updates is *not* an easy TODO item
>
> o Allow pg_hba.conf to specify host names along with IP addresses
>
> Host name lookup could occur when the postmaster reads the
> pg_hba.conf file, or when the backend starts. Another
> solution would be to reverse lookup the connection IP and
> check that hostname against the host names in pg_hba.conf.
> We could also then check that the host name maps to the IP
> address.
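
The reverse-lookup-plus-confirmation idea above amounts to forward-confirmed
reverse DNS. A rough sketch, using a hypothetical helper outside the real
pg_hba.conf code:

    #include <stdbool.h>
    #include <string.h>
    #include <strings.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netdb.h>

    static bool
    addrs_equal(const struct sockaddr *a, const struct sockaddr *b)
    {
        if (a->sa_family != b->sa_family)
            return false;
        if (a->sa_family == AF_INET)
        {
            const struct sockaddr_in *a4 = (const struct sockaddr_in *) a;
            const struct sockaddr_in *b4 = (const struct sockaddr_in *) b;

            return a4->sin_addr.s_addr == b4->sin_addr.s_addr;
        }
        if (a->sa_family == AF_INET6)
        {
            const struct sockaddr_in6 *a6 = (const struct sockaddr_in6 *) a;
            const struct sockaddr_in6 *b6 = (const struct sockaddr_in6 *) b;

            return memcmp(&a6->sin6_addr, &b6->sin6_addr,
                          sizeof(a6->sin6_addr)) == 0;
        }
        return false;
    }

    /* Does the client address 'sa' match the host name listed in pg_hba.conf? */
    bool
    hba_hostname_matches(const struct sockaddr *sa, socklen_t salen,
                         const char *hba_host)
    {
        char        hostname[NI_MAXHOST];
        struct addrinfo hints, *res, *ai;
        bool        confirmed = false;

        /* 1. Reverse-look-up the connection IP. */
        if (getnameinfo(sa, salen, hostname, sizeof(hostname),
                        NULL, 0, NI_NAMEREQD) != 0)
            return false;

        /* 2. Check the resolved name against the pg_hba.conf entry. */
        if (strcasecmp(hostname, hba_host) != 0)
            return false;

        /* 3. Forward-resolve the name and confirm it maps back to the IP. */
        memset(&hints, 0, sizeof(hints));
        hints.ai_family = AF_UNSPEC;
        hints.ai_socktype = SOCK_STREAM;
        if (getaddrinfo(hba_host, NULL, &hints, &res) != 0)
            return false;
        for (ai = res; ai != NULL; ai = ai->ai_next)
        {
            if (addrs_equal(ai->ai_addr, sa))
            {
                confirmed = true;
                break;
            }
        }
        freeaddrinfo(res);
        return confirmed;
    }
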
< * Allow control over which tables are WAL-logged [walcontrol]
> * Allow WAL logging to be turned off for a table, but the table
> might be dropped or truncated during crash recovery [walcontrol]
< commit. To do this, only a single writer can modify the table, and
< writes must happen only on new pages. Readers can continue accessing
< the table. This would affect COPY, and perhaps INSERT/UPDATE too.
< Another option is to avoid transaction logging entirely and truncate
< or drop the table on crash recovery. These should be implemented
< using ALTER TABLE, e.g. ALTER TABLE PERSISTENCE [ DROP | TRUNCATE |
< STABLE | DEFAULT ]. Tables using non-default logging should not use
< referential integrity with default-logging tables, and tables using
< stable logging probably can not have indexes. One complexity is
< the handling of indexes on TOAST tables.
> commit. This should be implemented using ALTER TABLE, e.g. ALTER
> TABLE PERSISTENCE [ DROP | TRUNCATE | DEFAULT ]. Tables using
> non-default logging should not use referential integrity with
> default-logging tables. A table without dirty buffers during a
> crash could perhaps avoid the drop/truncate.
>
> * Allow WAL logging to be turned off for a table, but the table would
> avoid being truncated/dropped [walcontrol]
>
> To do this, only a single writer can modify the table, and writes
> must happen only on new pages so the new pages can be removed during
> crash recovery. Readers can continue accessing the table. Such
> tables probably cannot have indexes. One complexity is the handling
> of indexes on TOAST tables.
< * Allow control over which tables are WAL-logged
> * Allow control over which tables are WAL-logged [walcontrol]
1038c1038,1039
< stable logging probably can not have indexes. [walcontrol]
> stable logging probably can not have indexes. One complexity is
> the handling of indexes on TOAST tables.
> * Allow statistics collector information to be pulled from the collector
> process directly, rather than requiring the collector to write a
> filesystem file twice a second?
>
> o Prevent tab completion of SET TRANSACTION from querying the
> database and therefore preventing the transaction isolation
> level from being set.
>
> Currently, SET <tab> causes a database lookup to check all
> supported session variables. This query causes problems
> because setting the transaction isolation level must be the
> first statement of a transaction.
< * %Prevent INET cast to CIDR if the unmasked bits are not zero, or
< zero the bits
< * %Prevent INET cast to CIDR from dropping netmask, SELECT '1.1.1.1'::inet::cidr
> * -Zero unmasked bits in conversion from INET cast to CIDR
> * -Prevent INET cast to CIDR from dropping netmask, SELECT '1.1.1.1'::inet::cidr
< o Allow an alias to be provided for the target table in
< UPDATE/DELETE
<
< This is not SQL-spec but many DBMSs allow it.
<
> o -Allow an alias to be provided for the target table in
> UPDATE/DELETE (Neil)
< STABLE | DEFAULT ]. [wallog]
> STABLE | DEFAULT ]. Tables using non-default logging should not use
> referential integrity with default-logging tables, and tables using
> stable logging probably can not have indexes. [wallog]
< the table. Another option is to avoid transaction logging entirely
< and truncate or drop the table on crash recovery. These should be
< implemented using ALTER TABLE, e.g. ALTER TABLE PERSISTENCE [ DROP |
< TRUNCATE | STABLE | DEFAULT ]. [wallog]
> the table. This would affect COPY, and perhaps INSERT/UPDATE too.
> Another option is to avoid transaction logging entirely and truncate
> or drop the table on crash recovery. These should be implemented
> using ALTER TABLE, e.g. ALTER TABLE PERSISTENCE [ DROP | TRUNCATE |
> STABLE | DEFAULT ]. [wallog]
>
> * Allow control over which tables are WAL-logged
>
> Allow tables to bypass WAL writes and just fsync() dirty pages on
> commit. To do this, only a single writer can modify the table, and
> writes must happen only on new pages. Readers can continue accessing
> the table. Another option is to avoid transaction logging entirely
> and truncate or drop the table on crash recovery. These should be
> implemented using ALTER TABLE, e.g. ALTER TABLE PERSISTENCE [ DROP |
> TRUNCATE | STABLE | DEFAULT ]. [wallog]
* %Make row-wise comparisons work per SQL spec
Right now, '(a, b) < (1, 2)' is processed as 'a < 1 and b < 2', but
the SQL standard requires it to be processed as a column-by-column
comparison, so the proper comparison is '(a < 1) OR (a = 1 AND b < 2)'.
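
The required column-by-column behavior is ordinary lexicographic comparison;
a small C sketch of the semantics (hypothetical helper, integer columns only,
NULLs ignored for simplicity):

    #include <stdbool.h>

    /*
     * SQL-spec row-wise comparison (a1,...,an) < (b1,...,bn): compare column
     * by column, the first unequal column decides, and equal rows are not
     * "less than".  This mirrors the expansion (a < 1) OR (a = 1 AND b < 2).
     */
    bool
    row_less_than(const int *a, const int *b, int ncols)
    {
        for (int i = 0; i < ncols; i++)
        {
            if (a[i] < b[i])
                return true;
            if (a[i] > b[i])
                return false;
            /* columns equal: move on to the next column */
        }
        return false;           /* all columns equal */
    }
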
< * Allow star join optimizations
<
< While our bitmap scan allows multiple indexes to be joined to get
< to heap rows, a star join allows multiple dimension _tables_ to
< be joined to index into a larger main fact table. The join is
< usually performed by either creating a cartesian product of all
< the dimension tables and doing a single join on that product or
< using subselects to create bitmaps of each dimension table match
< and merge the bitmaps to perform the join on the fact table. Some
< of these algorithms might be patented.
< * Flush cached query plans when the dependent objects change or
< when the cardinality of parameters changes dramatically
> * Flush cached query plans when the dependent objects change,
> when the cardinality of parameters changes dramatically, or
> when new ANALYZE statistics are available
Drake:
< and merge the bitmaps to perform the join on the fact table.
> and merge the bitmaps to perform the join on the fact table. Some
> of these algorithms might be patented.
* Allow star join optimizations
While our bitmap scan allows multiple indexes to be joined to get
to heap rows, a star join allows multiple dimension _tables_ to
be joined to index into a larger main fact table. The join is
usually performed by either creating a cartesian product of all
the dimension tables and doing a single join on that product or
using subselects to create bitmaps of each dimension table match
and merge the bitmaps to perform the join on the fact table.
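
The second strategy described above (bitmaps per dimension, merged against
the fact table) boils down to ANDing row bitmaps. A sketch of just that merge
step, with illustrative names:

    #include <stddef.h>
    #include <stdint.h>

    /*
     * Bitmap-merge step of a star join: each dimension contributes a bitmap
     * with one bit per fact-table row, and ANDing them leaves only the rows
     * that match every dimension.  Names and memory layout are illustrative.
     */
    void
    merge_dimension_bitmaps(uint64_t *result, uint64_t **dim_bitmaps,
                            int ndims, size_t nwords)
    {
        for (size_t w = 0; w < nwords; w++)
        {
            uint64_t    bits = ~(uint64_t) 0;   /* start with every row eligible */

            for (int d = 0; d < ndims; d++)
                bits &= dim_bitmaps[d][w];      /* drop rows not matching dimension d */

            result[w] = bits;                   /* set bits = fact rows to fetch */
        }
    }
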
< * Flush cached query plans when the dependent objects change
> * Flush cached query plans when the dependent objects change or
> when the cardinality of parameters changes dramatically
< * %Allow pooled connections to list all prepared queries
> * %Allow pooled connections to list all prepared statements
28c28
< the queries prepared in the current session.
> the statements prepared in the current session.
143c143
< o Allow a warm standby system to also allow read-only queries
> o Allow a warm standby system to also allow read-only statements
404c404
< * Add GUC to issue notice about queries that use unjoined tables
> * Add GUC to issue notice about statements that use unjoined tables
490c490
< Another idea would be to allow actual SELECT queries in a COPY.
> Another idea would be to allow actual SELECT statements in a COPY.
554c554
< o Allow function argument names to be queries from PL/PgSQL
> o Allow function argument names to be queried from PL/PgSQL
591c591
< o Improve psql's handling of multi-line queries
> o Improve psql's handling of multi-line statements
< Currently, while \e saves a single query as one entry, interactive
< queries are saved one line at a time. Ideally all queries
> Currently, while \e saves a single statement as one entry, interactive
> statements are saved one line at a time. Ideally all statements
665c665
< o Allow query results to be automatically batched to the client
> o Allow statement results to be automatically batched to the client
667c667
< Currently, all query results are transferred to the libpq
> Currently, all statement results are transferred to the libpq
672c672
< One complexity is that a query like SELECT 1/col could error
> One complexity is that a statement like SELECT 1/col could error
739c739
< * Allow queries across databases or servers with transaction
> * Allow statements across databases or servers with transaction
< inheritance, allow it to work for UPDATE and DELETE queries, and allow
< it to be used for all queries with little performance impact
> inheritance, allow it to work for UPDATE and DELETE statements, and allow
> it to be used for all statements with little performance impact
876c876
< * Consider automatic caching of queries at various levels:
> * Consider automatic caching of statements at various levels:
947c947
< a single session using multiple threads to execute a query faster.
> a single session using multiple threads to execute a statement faster.
1025c1025
< * Log queries where the optimizer row estimates were dramatically
> * Log statements where the optimizer row estimates were dramatically
1146c1146
< of result sets using new query protocol
> of result sets using new statement protocol
< Win32 API, and we have to make sure MinGW handles it.
> Win32 API, and we have to make sure MinGW handles it. Another
> option is to wait for the MinGW project to fix it, or use the
> code from the LibGW32C project as a guide.
> o Add long file support for binary pg_dump output
>
> While Win32 supports 64-bit files, the MinGW API does not,
> meaning we have to build an fseeko replacement on top of the
> Win32 API, and we have to make sure MinGW handles it.
< be cleared when a heap tuple is expired. Another idea is to maintain
< a bitmap of heap pages where all rows are visible to all backends,
< and allow index lookups to reference that bitmap to avoid heap
< lookups, perhaps the same bitmap we might add someday to determine
< which heap pages need vacuuming.
> be cleared when a heap tuple is expired.
>
> Another idea is to maintain a bitmap of heap pages where all rows
> are visible to all backends, and allow index lookups to reference
> that bitmap to avoid heap lookups, perhaps the same bitmap we might
> add someday to determine which heap pages need vacuuming. Frequently
> accessed bitmaps would have to be stored in shared memory. One 8k
> page of bitmaps could track 512MB of heap pages.
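
The 512MB figure above comes straight from the default block size: 8192 bytes
of bitmap is 65,536 bits, and at one bit per 8kB heap page that covers 512MB.
A quick arithmetic check:

    #include <stdio.h>

    int
    main(void)
    {
        const long long block = 8192;           /* default 8kB block size */
        const long long bits = block * 8;       /* 65,536 bits in one bitmap page */

        /* one bit per heap page => one bitmap page covers bits * block bytes */
        printf("%lld MB of heap per bitmap page\n",
               bits * block / (1024 * 1024));   /* prints 512 */
        return 0;
    }
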
< the heap. One way to allow this is to set a bit to index tuples
> the heap. One way to allow this is to set a bit on index tuples
< be cleared when a heap tuple is expired.
<
> be cleared when a heap tuple is expired. Another idea is to maintain
> a bitmap of heap pages where all rows are visible to all backends,
> and allow index lookups to reference that bitmap to avoid heap
> lookups, perhaps the same bitmap we might add someday to determine
> which heap pages need vacuuming.
< * Add MERGE command that does UPDATE/DELETE, or on failure, INSERT (rules,
< triggers?)
> * Add SQL-standard MERGE command, typically used to merge two tables
>
> This is similar to UPDATE, then for unmatched rows, INSERT.
> Whether concurrent access allows modifications which could cause
> row loss is implementation independent.
>
> * Add REPLACE or UPSERT command that does UPDATE, or on failure, INSERT
< #A hyphen, "-", marks changes that will appear in the upcoming 8.1 release.#
> #A hyphen, "-", marks changes that will appear in the upcoming 8.2 release.#
< so duplicate checking can be easily performed.
> so duplicate checking can be easily performed. It is possible to
> do it without a unique index if we require the user to LOCK the table
> before the MERGE.
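
Until a REPLACE/UPSERT or MERGE command exists, clients typically emulate the
behavior with an UPDATE-then-INSERT retry loop. A hedged libpq sketch (table
and column names are illustrative, the key is assumed to be already safely
escaped, and each statement runs as its own transaction):

    #include <stdio.h>
    #include <string.h>
    #include <libpq-fe.h>

    /*
     * "UPDATE, or on failure, INSERT" from the client side: try the UPDATE,
     * and if it touched no row, try the INSERT; if a concurrent INSERT wins
     * the race (unique violation, SQLSTATE 23505), retry the UPDATE.
     */
    int
    upsert_counter(PGconn *conn, const char *key)
    {
        char        sql[256];

        for (;;)
        {
            PGresult   *res;
            const char *sqlstate;

            snprintf(sql, sizeof(sql),
                     "UPDATE counters SET n = n + 1 WHERE k = '%s'", key);
            res = PQexec(conn, sql);
            if (PQresultStatus(res) == PGRES_COMMAND_OK &&
                strcmp(PQcmdTuples(res), "0") != 0)
            {
                PQclear(res);
                return 0;                       /* UPDATE hit an existing row */
            }
            PQclear(res);

            snprintf(sql, sizeof(sql),
                     "INSERT INTO counters (k, n) VALUES ('%s', 1)", key);
            res = PQexec(conn, sql);
            if (PQresultStatus(res) == PGRES_COMMAND_OK)
            {
                PQclear(res);
                return 0;                       /* INSERT succeeded */
            }

            sqlstate = PQresultErrorField(res, PG_DIAG_SQLSTATE);
            if (sqlstate == NULL || strcmp(sqlstate, "23505") != 0)
            {
                PQclear(res);
                return -1;                      /* unexpected error */
            }
            PQclear(res);                       /* lost the race; retry the UPDATE */
        }
    }
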
< * Add a libpq function to support Parse/DescribeStatement capability
< * Add PQescapeIdentifier() to libpq
< * Prevent PQfnumber() from lowercasing the unquoted column name
<
< PQfnumber() should never have been doing lowercasing, but historically
< it has, so we need a way to prevent it.
<
648a642,661
>
>
> libpq
>
> o Add a function to support Parse/DescribeStatement capability
> o Add PQescapeIdentifier()
> o Prevent PQfnumber() from lowercasing the unquoted column name
>
> PQfnumber() should never have been doing lowercasing, but
> historically it has, so we need a way to prevent it.
>
> o Allow query results to be automatically batched to the client
>
> Currently, all query results are transferred to the libpq
> client before libpq makes the results available to the
> application. This feature would allow the application to make
> use of the first result rows while the rest are transfered, or
> held on the server waiting for them to be requested by libpq.
> One complexity is that a query like SELECT 1/col could error
> out mid-way through the result set.
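
The PQfnumber() case-folding issue above is easy to see in a short program:
an unquoted argument is downcased like an SQL identifier, so only the
double-quoted form matches a mixed-case column (connection string is
illustrative):

    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        PGconn     *conn = PQconnectdb("dbname=postgres");
        PGresult   *res;

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            return 1;
        }

        res = PQexec(conn, "SELECT 1 AS \"MixedCase\"");

        /* Unquoted name is downcased before matching, so it does not match: -1 */
        printf("unquoted:      %d\n", PQfnumber(res, "MixedCase"));

        /* Double-quoted name is matched case-sensitively: 0 */
        printf("double-quoted: %d\n", PQfnumber(res, "\"MixedCase\""));

        PQclear(res);
        PQfinish(conn);
        return 0;
    }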