< so duplicate checking can be easily performed.
> so duplicate checking can be easily performed. It is possible to
> do this without a unique index if we require the user to LOCK the
> table before the MERGE.
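>
> A minimal sketch of the lock-based approach, using a hypothetical
> table "target" with columns "id" and "val":
>
>     BEGIN;
>     -- SHARE ROW EXCLUSIVE blocks concurrent INSERT/UPDATE/DELETE,
>     -- so no duplicate row can slip in between the UPDATE and INSERT
>     LOCK TABLE target IN SHARE ROW EXCLUSIVE MODE;
>     UPDATE target SET val = 1 WHERE id = 42;
>     INSERT INTO target (id, val)
>         SELECT 42, 1
>         WHERE NOT EXISTS (SELECT 1 FROM target WHERE id = 42);
>     COMMIT;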
< * Add a libpq function to support Parse/DescribeStatement capability
< * Add PQescapeIdentifier() to libpq
< * Prevent PQfnumber() from lowercasing unquoted the column name
<
< PQfnumber() should never have been doing lowercasing, but historically
< it has so we need a way to prevent it
<
648a642,661
>
>
> libpq
>
> o Add a function to support Parse/DescribeStatement capability
> o Add PQescapeIdentifier()
> o Prevent PQfnumber() from lowercasing the unquoted column name
>
> PQfnumber() should never have been doing lowercasing, but
> historically it has, so we need a way to prevent it
>
> o Allow query results to be automatically batched to the client
>
> Currently, all query results are transferred to the libpq
> client before libpq makes the results available to the
> application. This feature would allow the application to make
> use of the first result rows while the rest are transferred, or
> held on the server waiting for them to be requested by libpq.
> One complexity is that a query like SELECT 1/col could error
> out mid-way through the result set.
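>
> One way an application can approximate this today is with an
> explicit cursor, which keeps the not-yet-fetched rows on the
> server; a minimal sketch, assuming a hypothetical table "items":
>
>     BEGIN;
>     DECLARE c CURSOR FOR SELECT * FROM items;
>     FETCH 100 FROM c;    -- first batch of rows
>     FETCH 100 FROM c;    -- next batch; the rest stays on the server
>     CLOSE c;
>     COMMIT;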
< o Add a GUC variable to allow output of interval values in ISO8601
< format
212a211,223
> o Add a GUC variable to allow output of interval values in ISO8601
> format
> o Improve timestamptz subtraction to be DST-aware
>
> Currently, subtracting one timestamp from another across a
> daylight saving time transition can return '1 day 1 hour', but
> adding that interval back to the first timestamp yields a time
> one hour later than the original second timestamp. This happens
> because the raw difference of '25 hours' is adjusted to
> '1 day 1 hour', and adding '1 day' means the same wall-clock time
> on the next day, even across a daylight saving transition (see
> the example after this list).
>
> o Fix interval display to support values exceeding 2^31 hours
> o Add overflow checking to timestamp and interval arithmetic
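>
> An example of the mismatch described under the timestamptz
> subtraction item above, assuming the US/Eastern zone and its 2005
> fall-back transition on October 30:
>
>     SET timezone = 'US/Eastern';
>     SELECT timestamptz '2005-10-31 00:00' - timestamptz '2005-10-30 00:00';
>     -- 1 day 01:00:00    (that local day is 25 hours long)
>     SELECT timestamptz '2005-10-30 00:00' + interval '1 day 01:00:00';
>     -- 2005-10-31 01:00:00-05, one hour past the original endpoint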
>
> o Add auto-expanded mode so expanded output is used if the row
> length is wider than the screen width.
>
> Consider using auto-expanded mode for backslash commands like \df+.
> * Prevent PQfnumber() from lowercasing the unquoted column name
>
> PQfnumber() should never have been doing lowercasing, but historically
> it has, so we need a way to prevent it
>
< * Prevent libpq's PQfnumber() from lowercasing the column name
<
< One idea is to lowercase all identifiers except those that are
< surrounded by quotes.
<
<
< * Add code to detect an SMP machine and handle spinlocks accordingly
< from distributted.net, http://www1.distributed.net/source,
< in client/common/cpucheck.cpp
<
< On SMP machines, it is possible that locks might be released shortly,
< while on non-SMP machines, the backend should sleep so the process
< holding the lock can complete and release it.
< o %Add dumping of comments on composite type columns
< o %Add dumping of comments on index columns
< o Stop dumping CASCADE on DROP TYPE commands in clean mode
> o %Add dumping of comments on index columns and composite type columns
604a603
> o Stop dumping CASCADE on DROP TYPE commands in clean mode
< * Prevent libpq's PQfnumber() from lowercasing the column name?
> * Prevent libpq's PQfnumber() from lowercasing the column name
>
> One idea is to lowercase all identifiers except those that are
> surrounded by quotes.
>
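> The proposed rule would mirror how the server itself folds
> identifiers, e.g. with a hypothetical table:
>
>     CREATE TABLE birds ("BirdName" text);
>     SELECT BirdName   FROM birds;   -- error: name is folded to "birdname"
>     SELECT "BirdName" FROM birds;   -- quoted name keeps its case
>
> so PQfnumber(res, "BirdName") would be folded, while
> PQfnumber(res, "\"BirdName\"") would match the column exactly.
>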
> o Allow selection of individual object(s) of all types, not just
> tables
> o In a selective dump, allow dumping of an object and all its
> dependencies
< * Consider compressing indexes by storing key prefix values shared by
> * Consider compressing indexes by storing key values duplicated in
735a736,737
>
> This is difficult because it requires datatype-specific knowledge.
> * Allow protocol-level BIND parameter values to be logged
> * Allow protocol-level EXECUTE that is actually a fetch to appear
> in the logs as a fetch rather than another execute
>
> o Display IN, INOUT, and OUT parameters in \df+
>
> It probably requires psql to output newlines in the proper
> column, which is already on the TODO list.
< This would be beneficial when there are few distinct values.
> This would be beneficial when there are few distinct values.
> Hashing is already used by GROUP BY.
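>
> For example, with a hypothetical low-cardinality column, the
> planner can already choose a hash-based plan for GROUP BY but
> not for DISTINCT:
>
>     EXPLAIN SELECT state FROM people GROUP BY state;
>     --  HashAggregate  (...)
>     EXPLAIN SELECT DISTINCT state FROM people;
>     --  Unique
>     --    ->  Sort  (...)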
946d946
< * Allow DISTINCT to use hashing like GROUP BY
<
390d388
<
453c451
< removed or have its heap and index files truncated. One
> be removed or have its heap and index files truncated. One
< * Use a phantom command counter for nested subtransactions to reduce
< per-tuple overhead
< cmin/cmax pair and is stored in local memory.
> cmin/cmax pair and is stored in local memory. Another idea is to
> store both cmin and cmax only in local memory.
< have its heap and index files truncated. One issue is
< that no other backend should be able to add to the table
< at the same time, which is something that is currently
< allowed.
> removed or have its heap and index files truncated. One
> issue is that no other backend should be able to add to
> the table at the same time, which is something that is
> currently allowed.
> o Allow COPY on a newly-created table to skip WAL logging
450a452,456
> On crash recovery, the table involved in the COPY would
> have its heap and index files truncated. One issue is
> that no other backend should be able to add to the table
> at the same time, which is something that is currently
> allowed.
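>
> The target case is a bulk load into a table created earlier in
> the same transaction; a sketch, using a hypothetical table and
> file path:
>
>     BEGIN;
>     CREATE TABLE new_data (id int, payload text);
>     COPY new_data FROM '/tmp/new_data.dat';  -- the proposal: skip WAL here
>     COMMIT;
>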
> * Use UTF8 encoding for NLS messages so all server encodings can
> read them properly
< o %Add support for Unicode
<
< To fix this, the data needs to be converted to/from UTF16/UTF8
< so the Win32 wcscoll() can be used, and perhaps other functions
< like towupper(). However, UTF8 already works with normal
< locales but provides no ordering or character set classes.