< * Add session start time and last statement time to pg_stat_activity
> * -Add session start time and last statement time to pg_stat_activity
134c134
< * Add the client IP address and port to pg_stat_activity
> * -Add the client IP address and port to pg_stat_activity
* Add session start time to pg_stat_activity
* Add the client IP address and port to pg_stat_activity
Original patch from Magnus Hagander, code review by Neil Conway. Catalog
version bumped. This patch sends the client IP address and port number in
every statistics message; that's not ideal, but will be fixed up shortly.
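For reference, once both items are done the new information can be read
straight out of the view. The column names below (client_addr, client_port,
backend_start, query_start) are an assumption about how the fields end up
being named, not part of this patch:

    -- assumed column names; check the pg_stat_activity definition
    -- in your release before relying on them
    SELECT procpid, client_addr, client_port,
           backend_start, query_start
      FROM pg_stat_activity;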
< Currently locale can only be set during initdb.
> Currently locale can only be set during initdb. No global tables have
> locale-aware columns. However, the database template used during
> database creation might have locale-aware indexes. The indexes would
> need to be reindexed to match the new locale.
> * Prevent to_char() on interval from returning meaningless values
>
> For example, to_char('1 month', 'mon') is meaningless. Basically,
> most date-related parameters to to_char() are meaningless for
> intervals because interval is not anchored to a date.
>
> * Allow to_char() on interval values to accumulate the highest unit
> requested
>
> o to_char(INTERVAL '1 hour 5 minutes', 'MI') => 65
> o to_char(INTERVAL '43 hours 20 minutes', 'MI' ) => 2600
> o to_char(INTERVAL '43 hours 20 minutes', 'WK:DD:HR:MI') => 0:1:19:20
> o to_char(INTERVAL '3 years 5 months','MM') => 41
>
> Some special format flag would be required to request such
> accumulation. Such functionality could also be added to EXTRACT.
> Prevent accumulation that crosses the month/day boundary because of
> the uneven number of days in a month.
>
output area as INTERNAL not CSTRING. This is to prevent people from
calling the functions by hand. This is a permanent solution for the
back branches but I hope it is just a stopgap for HEAD.
to produce when running the executor. This is consistent with the internal
executor APIs (such as ExecutorRun), which also use a long for this purpose.
It also allows FETCH_ALL to be passed -- since FETCH_ALL is defined as
LONG_MAX, this wouldn't have worked on platforms where int and long are of
different sizes. Per report from Tzahi Fadida.
only one argument. (Per recent discussion, the option to accept multiple
arguments is pretty useless for user-defined types, and would be a likely
source of security holes if it were used.) Simplify call sites of
output/send functions to not bother passing more than one argument.
to eliminate unnecessary deadlocks. This commit adds SELECT ... FOR SHARE
paralleling SELECT ... FOR UPDATE. The implementation uses a new SLRU
data structure (managed much like pg_subtrans) to represent multiple-
transaction-ID sets. When more than one transaction is holding a shared
lock on a particular row, we create a MultiXactId representing that set
of transactions and store its ID in the row's XMAX. This scheme allows
an effectively unlimited number of row locks, just as we did before,
while not costing any extra overhead except when a shared lock actually
has to be shared. Still TODO: use the regular lock manager to control
the grant order when multiple backends are waiting for a row lock.
Alvaro Herrera and Tom Lane.
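A minimal illustration of the new syntax (table and column names are made up):

    BEGIN;
    -- take a shared lock on one row; other transactions may also hold
    -- FOR SHARE on it, but FOR UPDATE, UPDATE or DELETE will wait
    SELECT balance FROM accounts WHERE id = 1 FOR SHARE;
    COMMIT;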
< * Allow ORDER BY ... LIMIT 1 to select high/low value without sort or
> * Allow ORDER BY ... LIMIT # to select high/low value without sort or
868c868
< Right now, if no index exists, ORDER BY ... LIMIT 1 requires we sort
> Right now, if no index exists, ORDER BY ... LIMIT # requires we sort
870a871
> MIN/MAX already does this, but not for LIMIT > 1.
> * Allow ORDER BY ... LIMIT 1 to select high/low value without sort or
> index using a sequential scan for highest/lowest values
>
> Right now, if no index exists, ORDER BY ... LIMIT 1 requires we sort
> all values to return the high/low value. Instead, the idea is to do a
> sequential scan to find the high/low value, thus avoiding the sort.
>
> One possible implementation is to start sequential scans from the lowest
> numbered buffer in the shared cache, and when reaching the end wrap
> around to the beginning, rather than always starting sequential scans
> at the start of the table.
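For illustration, the kind of query the item above targets (table and column
names are hypothetical). With no index on col, the first form currently sorts
the whole table; the proposal is to satisfy it with one sequential scan that
just tracks the current lowest value, much as the aggregate form below
already does:

    SELECT col FROM tab ORDER BY col LIMIT 1;   -- lowest value of col
    SELECT MIN(col) FROM tab;                   -- single-pass equivalent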
< This allows vacuum to reclaim free space without requiring
< a sequential scan
> This allows vacuum to target specific pages for possible free space
> without requiring a sequential scan.
< * Consider parallel processing a single query
<
< This would involve using multiple threads or processes to do optimization,
< sorting, or execution of single query. The major advantage of such a
< feature would be to allow multiple CPUs to work together to process a
< single query.
<
< * Allow ORDER BY ... LIMIT 1 to select high/low value without sort or
< index using a sequential scan for highest/lowest values
<
< If only one value is needed, there is no need to sort the entire
< table. Instead a sequential scan could get the matching value.
<
< Solaris) might benefit from threading.
> Solaris) might benefit from threading. Also explore the idea of
> a single session using multiple threads to execute a query faster.
< Currently indexes do not have enough tuple tuple visibility
< information to allow data to be pulled from the index without
< also accessing the heap. One way to allow this is to set a bit
< to index tuples to indicate if a tuple is currently visible to
< all transactions when the first valid heap lookup happens. This
< bit would have to be cleared when a heap tuple is expired.
> Currently indexes do not have enough tuple visibility information
> to allow data to be pulled from the index without also accessing
> the heap. One way to allow this is to set a bit on index tuples
> to indicate if a tuple is currently visible to all transactions
> when the first valid heap lookup happens. This bit would have to
> be cleared when a heap tuple is expired.
logic operations during planning. Seems cleaner to create two new Path
node types, instead --- this avoids duplication of cost-estimation code.
Also, create an enable_bitmapscan GUC parameter to control use of bitmap
plans.
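For testing, the parameter can be toggled per session to compare bitmap and
non-bitmap plans (the query is just an illustration):

    SET enable_bitmapscan = off;
    EXPLAIN SELECT * FROM tab WHERE a = 1 OR b = 2;
    SET enable_bitmapscan = on;     -- default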
< Bitmap indexes index single columns that can be combined with other bitmap
< indexes to dynamically create a composite index to match a specific query.
< Each index is a bitmap, and the bitmaps are bitwise AND'ed or OR'ed to be
< combined. They can index by tid or can be lossy requiring a scan of the
< heap page to find matching rows, or perhaps use a mixed solution where
< tids are recorded for pages with only a few matches and per-page bitmaps
< are used for more dense pages. Another idea is to use a 32-bit bitmap
< for every page and set a bit based on the item number mod(32).
> This feature allows separate indexes to be ANDed or ORed together. This
> is particularly useful for data warehousing applications that need to
> query the database in many permutations. This feature scans an index
> and creates an in-memory bitmap, and allows that bitmap to be combined
> with other bitmaps created in a similar way. The bitmap can either index
> all TIDs, or be lossy, meaning it records just page numbers, and every
> tuple on such a page has to be checked for validity in a separate pass.
>
> No, and I think it should be in the manual as an example.
>
> You will need to enter a loop that uses exception handling to detect
> unique_violation.
Pursuant to an IRC discussion to which Dennis Bjorklund and
Christopher Kings-Lynne made most of the contributions, please find
enclosed an example patch demonstrating an UPSERT-like capability.
David Fetter
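A sketch of the loop described above, assuming a table
db(key INT PRIMARY KEY, value TEXT); it only illustrates the
retry-on-unique_violation pattern, it is not the enclosed patch:

    CREATE FUNCTION merge_db(k INT, v TEXT) RETURNS VOID AS $$
    BEGIN
        LOOP
            -- first try to update an existing row
            UPDATE db SET value = v WHERE key = k;
            IF found THEN
                RETURN;
            END IF;
            -- no row there: try to insert; if another session inserted
            -- the same key concurrently, trap the error and retry
            BEGIN
                INSERT INTO db (key, value) VALUES (k, v);
                RETURN;
            EXCEPTION WHEN unique_violation THEN
                NULL;   -- loop around and try the UPDATE again
            END;
        END LOOP;
    END;
    $$ LANGUAGE plpgsql;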
< failure.
> failure. This could be triggered by a user command or a timer.
< * Force archiving of partially-full WAL files when pg_stop_backup() is
< called or the server is stopped
> * Automatically force archiving of partially-filled WAL files when
> pg_stop_backup() is called or the server is stopped
indexes. Extend the macros in include/catalog/*.h to carry the info
about hand-assigned OIDs, and adjust the genbki script and bootstrap
code to make the relations actually get those OIDs. Remove the small
number of RelOid_pg_foo macros that we had in favor of a complete
set named like the catname.h and indexing.h macros. Next phase will
get rid of internal use of names for looking up catalogs and indexes;
but this completes the changes forcing an initdb, so it looks like a
good place to commit.
Along the way, I made the shared relations (pg_database etc) not be
'bootstrap' relations any more, so as to reduce the number of hardwired
entries and simplify changing those relations in future. I'm not
sure whether they ever really needed to be handled as bootstrap
relations, but it seems to work fine to not do so now.
avoid encroaching on the 'user' range of OIDs by allowing automatic
OID assignment to use values below 16k until we reach normal operation.
initdb not forced since this doesn't make any incompatible change;
however a lot of stuff will have different OIDs after your next initdb.
be supported for all datatypes. Add CREATE AGGREGATE and pg_dump support
too. Add specialized min/max aggregates for bpchar, instead of depending
on text's min/max, because otherwise the possible use of bpchar indexes
cannot be recognized.
initdb forced because of catalog changes.
the long-term plan for this behavior for quite some time, but it is only
possible now that DELETE has a USING clause so that the user can join
other tables in a DELETE statement without relying on this behavior.
output parameters or VOID or a set. There seems no particular reason to
insist on a RETURN in these cases, since the function return value is
determined by other elements anyway. Per recent discussion.
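For example, a PL/pgSQL function whose result is fully determined by its OUT
parameters can now simply fall off the end without a RETURN (hypothetical
function):

    CREATE FUNCTION sum_and_product(x INT, y INT, OUT s INT, OUT p INT)
    AS $$
    BEGIN
        s := x + y;
        p := x * y;
        -- no RETURN needed; the OUT parameters form the result row
    END;
    $$ LANGUAGE plpgsql;

    SELECT * FROM sum_and_product(3, 4);   -- s = 7, p = 12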
in UPDATE. We also now issue a NOTICE if a query has _any_ implicit
range table entries -- in the past, we would only warn about implicit
RTEs in SELECTs with at least one explicit RTE.
As a result of the warning change, 25 of the regression tests had to
be updated. I also took the opportunity to remove some bogus whitespace
differences between some of the float4 and float8 variants. I believe
I have correctly updated all the platform-specific variants, but let
me know if that's not the case.
Original patch for DELETE ... USING from Euler Taveira de Oliveira,
reworked by Neil Conway.
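An illustration with made-up tables: the first statement uses the new
explicit USING clause, the second relies on an implicit range table entry
and now draws the NOTICE:

    -- preferred: the joined table is listed explicitly
    DELETE FROM foo USING bar WHERE foo.id = bar.id AND bar.stale;

    -- still accepted, but now produces a NOTICE about the implicit
    -- reference to "bar" being added to the range table
    DELETE FROM foo WHERE foo.id = bar.id AND bar.stale;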
OPENed on non-SELECT commands such as EXPLAIN or SHOW (anything that
returns tuples is allowed). This flexibility already existed for
bound cursors, but OPEN was artificially restricting what it would
take. Per a gripe some months back.
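A sketch of what the relaxed check permits, per the description above
(function, cursor, and table names are hypothetical):

    CREATE FUNCTION show_plan() RETURNS void AS $$
    DECLARE
        c    refcursor;
        line text;
    BEGIN
        -- OPEN now accepts anything that returns tuples, not just SELECT
        OPEN c FOR EXPLAIN SELECT * FROM foo;
        LOOP
            FETCH c INTO line;
            EXIT WHEN NOT FOUND;
            RAISE NOTICE '%', line;
        END LOOP;
        CLOSE c;
    END;
    $$ LANGUAGE plpgsql;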
proposal for OUT parameter support. The columns don't actually *do*
anything yet, they are just left NULLs. But I thought I'd commit this
part separately as a fairly pure example of the tasks needed when adding
a column to pg_proc or one of the other core system tables.
change saves a great deal of space in pg_proc and its primary index,
and it eliminates the former requirement that INDEX_MAX_KEYS and
FUNC_MAX_ARGS have the same value. INDEX_MAX_KEYS is still embedded
in the on-disk representation (because it affects index tuple header
size), but FUNC_MAX_ARGS is not. I believe it would now be possible
to increase FUNC_MAX_ARGS at little cost, but haven't experimented yet.
There are still a lot of vestigial references to FUNC_MAX_ARGS, which
I will clean up in a separate pass. However, getting rid of it
altogether would require changing the FunctionCallInfoData struct,
and I'm not sure I want to buy into that.
access: define new index access method functions 'amgetmulti' that can
fetch multiple TIDs per call. (The functions exist but are totally
untested as yet.) Since I was modifying pg_am anyway, remove the
no-longer-needed 'rel' parameter from amcostestimate functions, and
also remove the vestigial amowner column that was creating useless
work for Alvaro's shared-object-dependencies project.
Initdb forced due to changes in pg_am.
executing a statement that fires triggers. Formerly this time was
included in "Total runtime" but not otherwise accounted for.
As a side benefit, we avoid re-opening relations when firing non-deferred
AFTER triggers, because the trigger code can re-use the main executor's
ResultRelInfo data structure.
currently does. This is now the default Win32 wal sync method because
we prefer o_datasync to fsync.
Also, change Win32 fsync to a new wal sync method called
fsync_writethrough because that is the behavior of _commit, which is
what is used for fsync on Win32.
Backpatch to 8.0.X.
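For reference, the corresponding postgresql.conf setting; the value names
below are the ones used by wal_sync_method, and fsync_writethrough is only
meaningful on platforms that support it:

    # new Win32 default
    wal_sync_method = open_datasync
    # old Win32 behavior (_commit-based fsync)
    #wal_sync_method = fsync_writethrough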
< * Add ANSI INTERVAL handling
> * Add ISO INTERVAL handling
< o Interpret syntax that isn't uniquely ANSI or PG, like '1:30' or
< '1' as ANSI syntax, e.g. interpret '1:30' MINUTE TO SECOND as
> o Interpret syntax that isn't uniquely ISO or PG, like '1:30' or
> '1' as ISO syntax, e.g. interpret '1:30' MINUTE TO SECOND as
649c649
< * Add pre-parsing phase that converts non-ANSI syntax to supported
> * Add pre-parsing phase that converts non-ISO syntax to supported
< o Process mixed ANSI/PG syntax, and round value to requested
< precision or generate an error
< o Interpret INTERVAL '1 year' MONTH as CAST (INTERVAL '1 year' AS
< INTERVAL MONTH), and this should return '12 months'
194a191,194
> o Interpret INTERVAL '1 year' MONTH as CAST (INTERVAL '1 year' AS
> INTERVAL MONTH), and this should return '12 months'
> o Round or truncate values to the requested precision, e.g.
> INTERVAL '11 months' AS YEAR should return one or zero
< o Add support for day-time syntax, INTERVAL '1 2:03:04'
> o Add support for day-time syntax, INTERVAL '1 2:03:04'
192c192,194
< o Interpret INTERVAL '1:30' MINUTE TO SECOND as '1 minute 30 seconds'
> o Interpret syntax that isn't uniquely ANSI or PG, like '1:30' or
> '1' as ANSI syntax, e.g. interpret '1:30' MINUTE TO SECOND as
> '1 minute 30 seconds'
< * Add support for ANSI time INTERVAL syntax, INTERVAL '1 2:03:04' DAY TO SECOND
< * Add support for ANSI date INTERVAL syntax, INTERVAL '20-6' YEAR TO MONTH
< * Process mixed ANSI/PG INTERVAL syntax, and round value to requested precision
<
< Interpret INTERVAL '1 year' MONTH as CAST (INTERVAL '1 year' AS INTERVAL
< MONTH), and this should return '12 months'
<
< * Interpret INTERVAL '1:30' MINUTE TO SECOND as '1 minute 30 seconds'
> * Add ANSI INTERVAL handling
> o Add support for day-time syntax, INTERVAL '1 2:03:04'
> DAY TO SECOND
> o Add support for year-month syntax, INTERVAL '50-6' YEAR TO MONTH
> o Process mixed ANSI/PG syntax, and round value to requested
> precision or generate an error
> o Interpret INTERVAL '1 year' MONTH as CAST (INTERVAL '1 year' AS
> INTERVAL MONTH), and this should return '12 months'
> o Interpret INTERVAL '1:30' MINUTE TO SECOND as '1 minute 30 seconds'
> o Support precision, CREATE TABLE foo (a INTERVAL MONTH(3))
< * Add support for ANSI date INTERVAL syntax, INTERVAL '9-3' YEAR TO MONTH
> * Add support for ANSI date INTERVAL syntax, INTERVAL '20-6' YEAR TO MONTH
< * Add support for ANSI date INTERVAL syntax, INTERVAL '1-2' YEAR TO MONTH
> * Add support for ANSI date INTERVAL syntax, INTERVAL '9-3' YEAR TO MONTH
ExclusiveLock rather than AccessExclusiveLock. This will allow concurrent
SELECT queries to proceed on the table. Per discussion with Andrew at
SuperNews.
> * Add support for ANSI time INTERVAL syntax, INTERVAL '1 2:03:04' DAY TO SECOND
> * Add support for ANSI date INTERVAL syntax, INTERVAL '1-2' YEAR TO MONTH
> * Process mixed ANSI/PG INTERVAL syntax, and round value to requested precision
184a188,189
> Interpret INTERVAL '1 year' MONTH as CAST (INTERVAL '1 year' AS INTERVAL
> MONTH), and this should return '12 months'
>
> * Support table partitioning that allows a single table to be stored
> in subtables that are partitioned based on the primary key or a WHERE
> clause
convention for isnull flags. Also, remove the useless InsertIndexResult
return struct from index AM aminsert calls --- there is no reason for
the caller to know where in the index the tuple was inserted, and we
were wasting a palloc cycle per insert to deliver this uninteresting
value (plus nontrivial complexity in some AMs).
I forced initdb because of the change in the signature of the aminsert
routines, even though nothing really looks at those pg_proc entries...
< SQL-spec compliant, so allow such handling to be disabled.
> SQL-spec compliant, so allow such handling to be disabled. However,
> disabling backslash escapes could break many third-party applications
> and tools.
of tuples when passing data up through multiple plan nodes. A slot can now
hold either a normal "physical" HeapTuple, or a "virtual" tuple consisting
of Datum/isnull arrays. Upper plan levels can usually just copy the Datum
arrays, avoiding heap_formtuple() and possible subsequent nocachegetattr()
calls to extract the data again. This work extends Atsushi Ogawa's earlier
patch, which provided the key idea of adding Datum arrays to TupleTableSlots.
(I believe however that something like this was foreseen way back in Berkeley
days --- see the old comment on ExecProject.) A test case involving many
levels of join of fairly wide tables (about 80 columns altogether) showed
about 3x overall speedup, though simple queries will probably not be
helped very much.
I have also duplicated some code in heaptuple.c in order to provide versions
of heap_formtuple and friends that use "bool" arrays to indicate null
attributes, instead of the old convention of "char" arrays containing either
'n' or ' '. This provides a better match to the convention used by
ExecEvalExpr. While I have not made a concerted effort to get rid of uses
of the old routines, I think they should be deprecated and eventually removed.
< o Disallow encodings like UTF8 which PostgreSQL supports
< but the operating system does not (already disallowed by
< pginstaller)
> o Add support for Unicode
< To fix UTF8, the data needs to be converted to UTF16 and then
< the Win32 wcscoll() can be used, and perhaps other functions
> To fix this, the data needs to be converted to/from UTF16/UTF8
> so the Win32 wcscoll() can be used, and perhaps other functions
< locales but provides no ordering.
<
> locales but provides no ordering or character set classes.
whether or not it is a security definer. Changing a function's strictness
is required by SQL2003, and the other capabilities make sense. Also, allow
an optional RESTRICT noise word to be specified, for SQL conformance.
Some trivial regression tests added and the documentation has been
updated.
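Examples of the new forms (hypothetical function signature):

    ALTER FUNCTION myfunc(integer) STRICT;             -- change strictness
    ALTER FUNCTION myfunc(integer) IMMUTABLE;          -- change volatility
    ALTER FUNCTION myfunc(integer) SECURITY DEFINER;   -- change definer flag
    ALTER FUNCTION myfunc(integer) VOLATILE RESTRICT;  -- RESTRICT is a noise word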