* Experiment with a multi-threaded backend for better I/O utilization
This would allow a single query to make use of multiple I/O channels
simultaneously. One idea is to create a background reader that can
pre-fetch sequential and index scan pages needed by other backends.
This could be expanded to allow concurrent reads from multiple devices
in a partitioned table. (A sketch of the prefetch idea appears after this list.)
* Experiment with a multi-threaded backend for better CPU utilization
This would allow several CPUs to be used for a single query, such as
for sorting or query execution. (A sketch of a two-threaded sort appears after this list.)
* Speed WAL recovery by allowing more than one page to be prefetched
This should be done using the same infrastructure used for
prefetching in general, to avoid introducing complex, error-prone code
into WAL replay.
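To make the first item above more concrete, here is a minimal sketch of read-ahead assuming only POSIX posix_fadvise() and pread(); the file name, block size, and read-ahead distance are illustrative, not an existing backend API. The same hint could be issued by a dedicated background reader on behalf of other backends.

    #define _XOPEN_SOURCE 600       /* for posix_fadvise() and pread() */
    #include <fcntl.h>
    #include <sys/types.h>
    #include <unistd.h>

    #define BLCKSZ          8192    /* assumed page size */
    #define PREFETCH_AHEAD  32      /* assumed read-ahead distance, in pages */

    /* Hint the kernel to start reading pages [blkno, blkno + PREFETCH_AHEAD). */
    static void
    prefetch_pages(int fd, long blkno)
    {
        (void) posix_fadvise(fd,
                             (off_t) blkno * BLCKSZ,
                             (off_t) PREFETCH_AHEAD * BLCKSZ,
                             POSIX_FADV_WILLNEED);
    }

    int
    main(void)
    {
        int     fd = open("relation.data", O_RDONLY);   /* hypothetical file */
        char    buf[BLCKSZ];

        if (fd < 0)
            return 1;

        for (long blkno = 0;; blkno++)
        {
            /* issue the hint well before the pages are actually needed */
            if (blkno % PREFETCH_AHEAD == 0)
                prefetch_pages(fd, blkno + PREFETCH_AHEAD);

            if (pread(fd, buf, BLCKSZ, (off_t) blkno * BLCKSZ) <= 0)
                break;              /* EOF or error: stop the "scan" */
            /* ... process the page ... */
        }
        close(fd);
        return 0;
    }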
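For the second item, a toy illustration of using two CPUs for one sort, assuming POSIX threads (compile with -pthread); this is not the executor, just the shape of the idea: each worker sorts half of the input and the halves are merged.

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct
    {
        int    *data;
        size_t  n;
    } SortTask;

    static int
    cmp_int(const void *a, const void *b)
    {
        int x = *(const int *) a;
        int y = *(const int *) b;

        return (x > y) - (x < y);
    }

    static void *
    sort_worker(void *arg)
    {
        SortTask *t = arg;

        qsort(t->data, t->n, sizeof(int), cmp_int);
        return NULL;
    }

    /* Sort 'a' of length n into 'out' using two CPUs. */
    static void
    parallel_sort(int *a, size_t n, int *out)
    {
        SortTask    left = {a, n / 2};
        SortTask    right = {a + n / 2, n - n / 2};
        pthread_t   th;
        size_t      i = 0, j = 0, k = 0;

        pthread_create(&th, NULL, sort_worker, &left);
        (void) sort_worker(&right);         /* main thread takes the other half */
        pthread_join(th, NULL);

        while (i < left.n && j < right.n)   /* ordinary two-way merge */
            out[k++] = (left.data[i] <= right.data[j]) ?
                left.data[i++] : right.data[j++];
        while (i < left.n)
            out[k++] = left.data[i++];
        while (j < right.n)
            out[k++] = right.data[j++];
    }

    int
    main(void)
    {
        int a[] = {5, 3, 8, 1, 9, 2, 7, 4};
        int out[8];

        parallel_sort(a, 8, out);
        for (int i = 0; i < 8; i++)
            printf("%d ", out[i]);
        printf("\n");
        return 0;
    }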
are declared to return set, and consist of just a single SELECT. We
can replace the FROM-item with a sub-SELECT and then optimize much as
if we were dealing with a view. Patch from Richard Rowell, cleaned up
by me.
errors in any commands, including in various clean targets that have so far
been handled inconsistently. make -i is available to ignore all errors in
a consistent and official way.
during a bitmap index scan. This cannot affect the query results
(since we're just dumping the TIDs into a bitmap) but it might offer
some advantage in locality of access to the index. Per Greg Stark.
value for a precision is negative, act as though precision weren't
specified at all; that is, the whole .* part of the format spec should
be ignored. Our previous coding took it as .0, which is certainly
wrong. Per report from Kris Jurka and local testing.
Possibly this should be back-patched, but it would be good to get
some more testing first; in any case there are no known cases where
there's really a problem on the backend side.
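A small self-contained demonstration of the rule being fixed (format_str() is a made-up helper, not the snprintf.c code): a '*' precision that evaluates to a negative number must act as though no precision was given, which is also what the C standard specifies for the library's own printf.

    #include <stdio.h>

    /*
     * Minimal illustration: a '*' precision that evaluates to a negative
     * number behaves as if no precision had been given, not as .0.
     */
    static void
    format_str(const char *s, int precision, int has_precision)
    {
        if (has_precision && precision < 0)
            has_precision = 0;          /* negative '*' => drop the precision */

        if (has_precision)
            printf("[%.*s]\n", precision, s);
        else
            printf("[%s]\n", s);
    }

    int
    main(void)
    {
        format_str("hello", 3, 1);      /* [hel]   normal precision          */
        format_str("hello", -1, 1);     /* [hello] negative => none at all   */
        format_str("hello", 0, 1);      /* []      .0 truncates everything   */

        /* The C library's own printf follows the same rule for %.*s: */
        printf("[%.*s]\n", -1, "hello");        /* prints [hello], not [] */
        return 0;
    }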
support DTrace in the future.
Switch from using DTRACE_PROBEn macros to the dynamically generated macros.
Use "dtrace -h" to create a header file that contains the dynamically
generated macros to be used in the source code instead of the DTRACE_PROBEn
macros. A dummy header file is generated for builds without DTrace support.
Author: Robert Lor <Robert.Lor@sun.com>
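A hedged illustration of the scheme; the provider, probe, and macro names below are invented for the example, not PostgreSQL's actual probes. Probes are declared in a .d file, "dtrace -h" generates a header of PROVIDER_PROBE() macros from it, and builds without DTrace substitute empty dummy macros.

    /*
     * probes.d (input to "dtrace -h"; D syntax, shown here as a comment):
     *
     *     provider myprov {
     *         probe query__start(int);
     *         probe query__done(int);
     *     };
     */

    #ifdef ENABLE_DTRACE
    #include "probes.h"                 /* generated: dtrace -h -s probes.d */
    #else
    /* dummy no-op macros for builds without DTrace support */
    #define MYPROV_QUERY_START(arg0)    ((void) 0)
    #define MYPROV_QUERY_DONE(arg0)     ((void) 0)
    #endif

    static void
    run_query(int query_id)
    {
        MYPROV_QUERY_START(query_id);   /* fires only when a script is tracing */
        /* ... execute the query ... */
        MYPROV_QUERY_DONE(query_id);
    }

    int
    main(void)
    {
        run_query(42);
        return 0;
    }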
oprofile shows that a nontrivial amount of time is being spent in
repeated calls to index_getprocinfo, which really only needs to be
called once. So do that, and inline _hash_datum2hashkey to make it
work.
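A rough sketch of the pattern being described, written against the usual backend fmgr interfaces but not taken from the committed patch (illustrative pseudo-backend code that only builds inside the server tree): look up the hash support function once and reuse the FmgrInfo for every key, rather than repeating the index_getprocinfo() call per tuple.

    #include "postgres.h"
    #include "access/genam.h"
    #include "access/hash.h"
    #include "fmgr.h"

    /* Hash a batch of keys with a single support-function lookup. */
    static void
    hash_keys_once(Relation index, Datum *keys, uint32 *hashes, int nkeys)
    {
        /* formerly repeated inside _hash_datum2hashkey() for every key */
        FmgrInfo   *procinfo = index_getprocinfo(index, 1, HASHPROC);
        int         i;

        for (i = 0; i < nkeys; i++)
            hashes[i] = DatumGetUInt32(FunctionCall1(procinfo, keys[i]));
    }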
linear search when checking child-transaction XIDs. This makes for an
important speedup in transactions that have large numbers of children,
as in a recent example from Craig Ringer. We can also get rid of an
ugly kluge that represented lists of TransactionIds as lists of OIDs.
Heikki Linnakangas
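A self-contained sketch of the child-XID membership test, assuming the XIDs are kept in a sorted array of plain 32-bit IDs and ignoring XID wraparound: bsearch() replaces the old linear scan.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef uint32_t TransactionId;     /* stand-in for the backend typedef */

    static int
    xid_cmp(const void *a, const void *b)
    {
        TransactionId x = *(const TransactionId *) a;
        TransactionId y = *(const TransactionId *) b;

        return (x > y) - (x < y);
    }

    /* Is xid one of the nchildren sorted child XIDs?  O(log n), not O(n). */
    static int
    is_child_xid(TransactionId xid,
                 const TransactionId *children, size_t nchildren)
    {
        return bsearch(&xid, children, nchildren,
                       sizeof(TransactionId), xid_cmp) != NULL;
    }

    int
    main(void)
    {
        TransactionId children[] = {101, 102, 105, 230, 231};  /* kept sorted */

        printf("%d %d\n",
               is_child_xid(105, children, 5),      /* prints 1 */
               is_child_xid(200, children, 5));     /* prints 0 */
        return 0;
    }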
bucket number, so as to ensure locality of access to the index during the
insertion step. Without this, building an index significantly larger than
available RAM takes a very long time because of thrashing. On the other
hand, sorting is just useless overhead when the index does fit in RAM.
We choose to sort when the initial index size exceeds effective_cache_size.
This is a revised version of work by Tom Raney and Shreya Bhargava.
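A simplified sketch of the decision and the sort; the struct, the size estimate, and the threshold handling are illustrative, not the committed hash build code.

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct
    {
        unsigned    bucket;     /* destination bucket, derived from the hash */
        unsigned    hashval;
    } PendingEntry;

    static int
    bucket_cmp(const void *a, const void *b)
    {
        unsigned x = ((const PendingEntry *) a)->bucket;
        unsigned y = ((const PendingEntry *) b)->bucket;

        return (x > y) - (x < y);
    }

    /*
     * Sort the pending entries by bucket only when the projected index is
     * bigger than the cache; otherwise the sort is just useless overhead.
     */
    static void
    insert_entries(PendingEntry *entries, size_t n,
                   size_t projected_index_bytes, size_t effective_cache_bytes)
    {
        if (projected_index_bytes > effective_cache_bytes)
            qsort(entries, n, sizeof(PendingEntry), bucket_cmp);

        for (size_t i = 0; i < n; i++)
            printf("bucket %u gets hash %u\n",
                   entries[i].bucket, entries[i].hashval);
    }

    int
    main(void)
    {
        PendingEntry e[] = {{3, 7}, {1, 9}, {3, 2}, {0, 5}};

        insert_entries(e, 4, 1024 * 1024, 512 * 1024);  /* "bigger than cache" */
        return 0;
    }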
deals with the queue, including locking etc., is all in sinvaladt.c. This means
that the struct definition of the queue, and the queue pointer, are now
internal "implementation details" inside sinvaladt.c.
Per my proposal dated 25-Jun-2007 and followup discussion.
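A generic sketch of this kind of encapsulation (file and type names invented for the example, and the header and .c file shown as one listing): the header exposes only a forward declaration and access functions, while the struct definition and the queue state stay private to the implementation file.

    /* queue.h -- public interface: the struct is only forward-declared */
    typedef struct MessageQueue MessageQueue;   /* opaque to callers */

    extern void QueueInsert(int msg);
    extern int  QueueRead(void);

    /* queue.c -- the "implementation details" stay private to this file */
    #include <pthread.h>

    struct MessageQueue
    {
        pthread_mutex_t lock;
        int             msgs[1024];
        int             head;
        int             tail;
    };

    static MessageQueue queue = {PTHREAD_MUTEX_INITIALIZER};    /* file-local */

    void
    QueueInsert(int msg)
    {
        pthread_mutex_lock(&queue.lock);
        queue.msgs[queue.tail++ % 1024] = msg;
        pthread_mutex_unlock(&queue.lock);
    }

    int
    QueueRead(void)             /* caller must know the queue is non-empty */
    {
        int msg;

        pthread_mutex_lock(&queue.lock);
        msg = queue.msgs[queue.head++ % 1024];
        pthread_mutex_unlock(&queue.lock);
        return msg;
    }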
two buckets at the start, we create a number of buckets appropriate for the
estimated size of the table. This avoids a lot of expensive bucket-split
actions during initial index build on an already-populated table.
This is one of the two core ideas of Tom Raney and Shreya Bhargava's patch
to reduce hash index build time. I'm committing it separately to make it
easier for people to test the effects of this separately from the effects
of their other core idea (pre-sorting the index entries by bucket number).
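A hedged sketch of the sizing idea; the fill-factor figure and the function are illustrative, not the committed code: estimate how many tuples each bucket should hold, then round the required bucket count up to a power of two instead of always starting at two.

    #include <stdio.h>

    /*
     * Pick an initial bucket count for a hash index build: size it to the
     * estimated table instead of always starting with two buckets, so that
     * loading an already-populated table does not trigger a long series of
     * bucket splits.
     */
    static unsigned long
    initial_buckets(double est_tuples, double tuples_per_bucket)
    {
        double          target = est_tuples / tuples_per_bucket;
        unsigned long   nbuckets = 2;       /* historical starting point */

        while ((double) nbuckets < target)
            nbuckets *= 2;                  /* bucket counts are powers of two */

        return nbuckets;
    }

    int
    main(void)
    {
        /* e.g. 1,000,000 rows at ~75 tuples per bucket => 16384 buckets */
        printf("%lu\n", initial_buckets(1000000.0, 75.0));
        return 0;
    }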
This accidentally failed to fail before 8.3, because the context we were
switching back to was long-lived anyway; but it sure looks as risky as can be
now. Well spotted by Pavan Deolasee.
that are reported as "equal" by wcscoll() are checked to see if they really
are bitwise equal, and are sorted per strcmp() if not. We made this happen
a couple of years ago in the regular code path, but it unaccountably got
left out of the Windows/UTF8 case (probably brain fade on my part at the
time). As in the prior set of changes, affected users may need to reindex
indexes on textual columns.
Backpatch as far as 8.2, which is the oldest release we are still supporting
on Windows.
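A self-contained sketch of the tie-break rule (not varstr_cmp itself, and glossing over encoding and buffer-size handling): compare with wcscoll(), and only when it reports equality fall back to a bitwise strcmp(), so that strings that are not bitwise equal never sort as equal.

    #include <locale.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <wchar.h>

    /* Locale-aware comparison that never calls distinct byte strings equal. */
    static int
    text_cmp(const char *a, const char *b)
    {
        wchar_t wa[256], wb[256];
        size_t  la = mbstowcs(wa, a, 255);
        size_t  lb = mbstowcs(wb, b, 255);
        int     result;

        if (la == (size_t) -1 || lb == (size_t) -1)
            return strcmp(a, b);    /* conversion failed: compare bytewise */
        wa[la] = L'\0';
        wb[lb] = L'\0';

        result = wcscoll(wa, wb);
        if (result == 0 && strcmp(a, b) != 0)
            result = strcmp(a, b);  /* "equal" per locale but not bitwise */
        return result;
    }

    int
    main(void)
    {
        setlocale(LC_ALL, "");      /* result below depends on the locale */
        printf("%d\n", text_cmp("strasse", "straße") == 0);
        return 0;
    }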