> * Auto-fill the free space map by scanning the buffer cache or by
> checking pages written by the background writer
< * Auto-fill the free space map by scanning the buffer cache or by
< checking pages written by the background writer
>
> * Create a bitmap of pages that need vacuuming
>
> Instead of sequentially scanning the entire table, have the background
> writer or some other process record pages that have expired rows, then
> VACUUM can look at just those pages rather than the entire table. In
> the event of a system crash, the bitmap would probably be invalidated.
"AT TIME ZONE", and not just the shorlist previously available. For
example:
SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London';
works fine now. It will also obey whatever DST rules were in effect at
just that date, which the previous implementation did not.
It also supports AT TIME ZONE on the timetz datatype. The whole
handling of DST is a bit bogus there, so I chose to make it use whatever
DST rules are in effect at the time of executing the query. Not sure if
anybody is actually *using* timetz though; it seems pretty
unpredictable just because of this...
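For illustration, a hypothetical timetz query whose result depends on
the DST rules in effect at the moment it is executed:
SELECT TIME WITH TIME ZONE '12:00:00+00' AT TIME ZONE 'Europe/London';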
Magnus Hagander
part of service principal. If not set, any service principal matching
an entry in the keytab can be used.
NEW KERBEROS MATCHING BEHAVIOR FOR 8.1.
Todd Kover
instead of just scalar variables. Add regression tests and update the
documentation. Along the way, remove some redundant error checking
code from exec_stmt_perform().
Original patch from Pavel Stehule, reworked by Neil Conway.
nonconsecutive columns of a multicolumn index, as per discussion around
mid-May (pghackers thread "Best way to scan on-disk bitmaps"). This
turns out to require only minimal changes in btree, and so far as I can
see none at all in GiST. btcostestimate did need some work, but its
original assumption that index selectivity == heap selectivity was
quite bogus even before this.
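For illustration, a qualification can now use nonconsecutive columns of
a multicolumn index (hypothetical table and index):
CREATE INDEX t_abc_idx ON t (a, b, c);
SELECT * FROM t WHERE a = 1 AND c = 2;  -- may use t_abc_idx for both quals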
mode to only affect the presentation of normal query results, not the
output of psql slash commands. Documentation updated. I also made
some unrelated minor psql cleanup. Per suggestion from Stuart Cooper.
a descriptor that uses the current transaction snapshot, rather than
SnapshotNow as it did before (and still does if INV_WRITE is set).
This means pg_dump will now dump a consistent snapshot of large object
contents, as it never could do before. Also, add a lo_create() function
that is similar to lo_creat() but allows the desired OID of the large
object to be specified. This will simplify pg_restore considerably
(but I'll fix that in a separate commit).
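For example, a large object with a caller-chosen OID can now be made
like this (the OID here is arbitrary):
SELECT lo_create(42);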
patch adds missing checks to the call sites of malloc(), strdup(),
PQmakeEmptyPGresult(), pqResultAlloc(), and pqResultStrdup(), and updates
the documentation. Per original report from Volkan Yazici about
PQmakeEmptyPGresult() not checking for malloc() failure.
These contain the SQLSTATE and error message of the current exception,
respectively. They are scope-local variables that are only defined
in exception handlers (so attempting to reference them outside an
exception handler is an error). Update the regression tests and the
documentation.
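For illustration, a minimal PL/pgSQL function using the new variables
(hypothetical example):
CREATE FUNCTION trap_zero() RETURNS text AS $$
BEGIN
    PERFORM 1 / 0;
    RETURN 'no error';
EXCEPTION WHEN division_by_zero THEN
    RETURN 'caught ' || SQLSTATE || ': ' || SQLERRM;
END;
$$ LANGUAGE plpgsql;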
Also, do some minor related cleanup: export an unpack_sql_state()
function from the backend and use it to unpack a SQLSTATE into a
string, and add a free_var() function to pl_exec.c.
Original patch from Pavel Stehule, review by Neil Conway.
history customizable through a variable named HISTFILE, analogous to
psql's already-implemented HISTCONTROL and HISTSIZE variables and
bash's HISTFILE variable.
The motivation was to be able to get psql to maintain separate
histories for separate databases. This is now easily achievable
through a line like the following in ~/.psqlrc:
\set HISTFILE ~/.psql_history-:DBNAME
Andreas Seltenreich
pg_restore. It restores only the given schema. It can be used in
conjunction with the -t and other switches to make the selection very
fine-grained.
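A plausible invocation (assuming the new switch is spelled -n; the
archive name is hypothetical):
pg_restore -n myschema -t mytable backup.dump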
Richard van den Berg, CISSP
function that accepts a double precision argument assumed to be a Unix
epoch timestamp and returns timestamp with time zone, and accompanying
documentation.
Usage:
test=# select to_timestamp(200120400);
to_timestamp
------------------------
1976-05-05 14:00:00+09
(1 row)
Michael Glaesemann
psql. i.e. "\pset format troff-ms". The patch also corrects some
problems with the "latex" format, notably defining an extra column in
the output table, and correcting some alignment issues; it also
changes the output to match the border setting as documented in the
manual page and as shown with the "aligned" format.
The troff-ms output is mostly identical to the latex output allowing
for the differences between the two typesetters.
The output should be saved in a file and piped as follows:
cat file | tbl | troff -T ps -ms > file.ps
or
tbl file | troff -T ps -ms > file.ps
Because the output contains tabs, you'll need to redirect psql output
to a file or use "script", rather than pasting from a terminal window
where tabs may be replaced with spaces.
Roger Leigh
< o Allow databases and schemas to be moved to different tablespaces
<
< One complexity is whether moving a schema should move all existing
< schema objects or just define the location for future object creation.
<
> o Allow databases to be moved to different tablespaces
484c480
< schema. Global system tables can never be moved.
> tablespace. Global system tables can never be moved.
as well as the existing pg_catalog entries for prefix and postfix %.
These have never been documented, though they did appear in one old
regression test. This avoids surprising behavior in cases like
"SELECT -25 % -10". Per recent discussion.
Note: although there is a catalog change here, I did not force initdb
since there's no harm in leaving the inaccessible entries in one's
copy of pg_operator.
transaction IDs, rather than like subtrans; in particular, the information
now survives a database restart. Per previous discussion, this is
essential for PITR log shipping and for 2PC.
< changes made by the interface driver for its internal use. One idea is
< for this to be a protocol-only feature. Another approach is to notify
< the protocol when a RESET CONNECTION command is used.
> changes made by the interface driver for its internal use. One idea
> is for this to be a protocol-only feature. Another approach is to
> notify the protocol when a RESET CONNECTION command is used.
last nextval() or setval() performed by the current session. Update the
docs, add regression tests, and bump the catalog version. Patch from
Dennis Björklund, various improvements by Neil Conway.
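For example, the new function might be used like this (assuming it is
named lastval(); the sequence name is illustrative):
SELECT nextval('myseq');
SELECT lastval();  -- returns the same value as the nextval() above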
This allows the result of executing a SELECT to be assigned to a row
variable, record variable, or list of scalars. Docs and regression tests
updated. Per Pavel Stehule, improvements and cleanup by Neil Conway.
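A sketch of the new capability, assuming this is PL/pgSQL's EXECUTE ...
INTO (the function and table names are hypothetical):
CREATE FUNCTION count_rows() RETURNS integer AS $$
DECLARE
    n integer;
BEGIN
    EXECUTE 'SELECT count(*) FROM mytable' INTO n;
    RETURN n;
END;
$$ LANGUAGE plpgsql;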
< all temporary tables, removal of any NOTIFYs, cursors, prepared
< queries(?), currval()s, etc. This could be used for connection pooling.
< We could also change RESET ALL to have this functionality.
> temporary tables, removing any NOTIFYs, cursors, open transactions,
> prepared queries, currval()s, etc. This could be used for connection
> pooling. We could also change RESET ALL to have this functionality.
> The difficulty of this feature is allowing RESET ALL to not affect
> changes made by the interface driver for its internal use. One idea is
> for this to be a protocol-only feature. Another approach is to notify
> the protocol when a RESET CONNECTION command is used.
a new PlannerInfo struct, which is passed around instead of the bare
Query in all the planning code. This commit is essentially just a
code-beautification exercise, but it does open the door to making
larger changes to the planner data structures without having to muck
with the widely-known Query struct.
< cleaned up properly. A new signal is needed for safe termination.
> cleaned up properly. A new signal is needed for safe termination
> because backends must first do a query cancel, then exit once they
> have run the query cancel cleanup routine.
1. Rename spi_return_next to return_next.
2. Add a new test for return_next.
3. Update the expected output.
4. Update the documentation.
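For illustration, a set-returning PL/Perl function would now call
return_next like this (hypothetical function):
CREATE FUNCTION three_ints() RETURNS SETOF integer AS $$
    return_next($_) for (1, 2, 3);
    return;
$$ LANGUAGE plperl;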
Abhijit Menon-Sen
< logs
> logs [pitr]
130c130
< * Allow a warm standby system to also allow read-only queries
> * Allow a warm standby system to also allow read-only queries [pitr]
postgresql.conf.
---------------------------------------------------------------------------
Here's an updated version of the patch, with the following changes:
1) No longer uses "service name" as "application version". It's instead
hardcoded as "postgres". It could be argued that this part should be
backpatched to 8.0, but it doesn't make a big difference until you can
start changing it with GUC / connection parameters. This change only
affects kerberos 5, not 4.
2) Now downcases kerberos usernames when the client is running on win32.
3) Adds guc option for "krb_caseins_users" to make the server ignore
case mismatch which is required by some KDCs such as Active Directory.
Off by default, per discussion with Tom. This change only affects
kerberos 5, not 4.
4) Updated so it doesn't conflict with the rendezvous/bonjour patch
already in ;-)
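For example (item 3), case-insensitive matching would presumably be
enabled in postgresql.conf like this:
krb_caseins_users = on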
Magnus Hagander
> * Allow pg_ctl to work properly with configuration files located outside
> the PGDATA directory
>
> pg_ctl cannot read the pid file because it isn't located in the
> config directory but in the PGDATA directory. The solution is to
> allow pg_ctl to read and understand postgresql.conf to find the
> data_directory value.
>
and RelationNameGetTupleDesc() as deprecated; remove uses of the
latter in the contrib library. Along the way, clean up crosstab()
code and documentation a little.
< * Prevent child tables from altering constraints like CHECK that were
< inherited from the parent table
470a469,471
>
> o Prevent child tables from altering constraints like CHECK that were
> inherited from the parent table
<
< * Add XML output to pg_dump and COPY
<
< We already allow XML to be stored in the database, and XPath queries
< can be used on that data using /contrib/xml2. It also supports XSLT
< transformations.
> * Consider sorting hash buckets so entries can be found using a binary
> search, rather than a linear scan
> * In hash indexes, consider storing the hash value with or instead
> of the key itself
> * Add the features of packages
> o Make private objects accessible only to objects in the same schema
> o Allow current_schema.objname to access current schema objects
> o Add session variables
> o Allow nested schemas
<
< This will involve adding a way to respond to commit failure by either
< taking the server into offline/readonly mode or notifying the
< administrator
from Abhijit Menon-Sen, minor editorialization from Neil Conway. Also,
improve md5(text) to allocate a constant-sized buffer on the stack
rather than via palloc.
Catalog version bumped.
- make sure we always invoke user-supplied GiST methods in a short-lived
memory context. This means the backend isn't exposed to any memory leaks
that may be in those methods (in fact, it is probably a net loss for most
GiST methods to bother manually freeing memory now). This also means
we can do away with a lot of ugly manual memory management in the
GiST code itself.
- keep the current page of a GiST index scan pinned, rather than doing a
ReadBuffer() for each tuple produced by the scan. Since ReadBuffer() is
expensive, this is a performance win.
- implement dead tuple killing for GiST indexes (which is easy to do, now
that we keep a pin on the current scan page). Now all the builtin indexes
implement dead tuple killing.
- clean up a lot of ugly code in GiST
< * Add session start time and last statement time to pg_stat_activity
> * -Add session start time and last statement time to pg_stat_activity
134c134
< * Add the client IP address and port to pg_stat_activity
> * -Add the client IP address and port to pg_stat_activity
* Add session start time to pg_stat_activity
* Add the client IP address and port to pg_stat_activity
Original patch from Magnus Hagander, code review by Neil Conway. Catalog
version bumped. This patch sends the client IP address and port number in
every statistics message; that's not ideal, but will be fixed up shortly.
< Currently locale can only be set during initdb.
> Currently locale can only be set during initdb. No global tables have
> locale-aware columns. However, the database template used during
> database creation might have locale-aware indexes. The indexes would
> need to be reindexed to match the new locale.
> * Prevent to_char() on interval from returning meaningless values
>
> For example, to_char('1 month', 'mon') is meaningless. Basically,
> most date-related parameters to to_char() are meaningless for
> intervals because interval is not anchored to a date.
>
> * Allow to_char() on interval values to accumulate the highest unit
> requested
>
> o to_char(INTERVAL '1 hour 5 minutes', 'MI') => 65
> o to_char(INTERVAL '43 hours 20 minutes', 'MI' ) => 2600
> o to_char(INTERVAL '43 hours 20 minutes', 'WK:DD:HR:MI') => 0:1:19:20
> o to_char(INTERVAL '3 years 5 months','MM') => 41
>
> Some special format flag would be required to request such
> accumulation. Such functionality could also be added to EXTRACT.
> Prevent accumulation that crosses the month/day boundary because of
> the uneven number of days in a month.
>
output area as INTERNAL not CSTRING. This is to prevent people from
calling the functions by hand. This is a permanent solution for the
back branches but I hope it is just a stopgap for HEAD.
to produce when running the executor. This is consistent with the internal
executor APIs (such as ExecutorRun), which also use a long for this purpose.
It also allows FETCH_ALL to be passed -- since FETCH_ALL is defined as
LONG_MAX, this wouldn't have worked on platforms where int and long are of
different sizes. Per report from Tzahi Fadida.
only one argument. (Per recent discussion, the option to accept multiple
arguments is pretty useless for user-defined types, and would be a likely
source of security holes if it was used.) Simplify call sites of
output/send functions to not bother passing more than one argument.
to eliminate unnecessary deadlocks. This commit adds SELECT ... FOR SHARE
paralleling SELECT ... FOR UPDATE. The implementation uses a new SLRU
data structure (managed much like pg_subtrans) to represent multiple-
transaction-ID sets. When more than one transaction is holding a shared
lock on a particular row, we create a MultiXactId representing that set
of transactions and store its ID in the row's XMAX. This scheme allows
an effectively unlimited number of row locks, just as we did before,
while not costing any extra overhead except when a shared lock actually
has to be shared. Still TODO: use the regular lock manager to control
the grant order when multiple backends are waiting for a row lock.
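For illustration, two sessions can now lock the same row concurrently
(hypothetical table):
BEGIN;
SELECT * FROM accounts WHERE id = 1 FOR SHARE;
-- a second session can run the same FOR SHARE query without blocking,
-- while an UPDATE of the row would wait for both share locks
COMMIT;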
Alvaro Herrera and Tom Lane.
< * Allow ORDER BY ... LIMIT 1 to select high/low value without sort or
> * Allow ORDER BY ... LIMIT # to select high/low value without sort or
868c868
< Right now, if no index exists, ORDER BY ... LIMIT 1 requires we sort
> Right now, if no index exists, ORDER BY ... LIMIT # requires we sort
870a871
> MIN/MAX already does this, but not for LIMIT > 1.
> * Allow ORDER BY ... LIMIT 1 to select high/low value without sort or
> index using a sequential scan for highest/lowest values
>
> Right now, if no index exists, ORDER BY ... LIMIT 1 requires we sort
> all values to return the high/low value. Instead, the idea is to do a
> sequential scan to find the high/low value, thus avoiding the sort.
>
> One possible implementation is to start sequential scans from the lowest
> numbered buffer in the shared cache, and when reaching the end wrap
> around to the beginning, rather than always starting sequential scans
> at the start of the table.
< This allows vacuum to reclaim free space without requiring
< a sequential scan
> This allows vacuum to target specific pages for possible free space
> without requiring a sequential scan.
< * Consider parallel processing a single query
<
< This would involve using multiple threads or processes to do optimization,
< sorting, or execution of single query. The major advantage of such a
< feature would be to allow multiple CPUs to work together to process a
< single query.
<
< * Allow ORDER BY ... LIMIT 1 to select high/low value without sort or
< index using a sequential scan for highest/lowest values
<
< If only one value is needed, there is no need to sort the entire
< table. Instead a sequential scan could get the matching value.
<
< Solaris) might benefit from threading.
> Solaris) might benefit from threading. Also explore the idea of
> a single session using multiple threads to execute a query faster.
< Currently indexes do not have enough tuple tuple visibility
< information to allow data to be pulled from the index without
< also accessing the heap. One way to allow this is to set a bit
< to index tuples to indicate if a tuple is currently visible to
< all transactions when the first valid heap lookup happens. This
< bit would have to be cleared when a heap tuple is expired.
> Currently indexes do not have enough tuple visibility information
> to allow data to be pulled from the index without also accessing
> the heap. One way to allow this is to set a bit to index tuples
> to indicate if a tuple is currently visible to all transactions
> when the first valid heap lookup happens. This bit would have to
> be cleared when a heap tuple is expired.