that the Mackert-Lohmann formula applies across all the repetitions of the
nestloop, not just each scan independently. We use the M-L formula to
estimate the number of pages fetched from the index as well as from the table;
that isn't what it was designed for, but it seems reasonably applicable
anyway. This makes large numbers of repetitions look much cheaper than
before, which accords with many reports we've received of overestimation
of the cost of a nestloop. Also, change the index access cost model to
charge random_page_cost per index leaf page touched, while explicitly
not counting anything for access to metapage or upper tree pages. This
may all need tweaking after we get some field experience, but in simple
tests it seems to be giving saner results than before. The main thing
is to get the infrastructure in place to let cost_index() and amcostestimate
functions take repeated scans into account at all. Per my recent proposal.
Note: this patch changes pg_proc.h, but I did not force initdb because
the changes are basically cosmetic --- the system does not look into
pg_proc to decide how to call an index amcostestimate function, and
there's no way to call such a function from SQL at all.
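For reference, a sketch of the Mackert-Lohmann approximation being applied here, restated from the optimizer's cost-model comments (treat the exact presentation as a paraphrase rather than a quote):

$$
PF =
\begin{cases}
\min\!\left(\frac{2TNs}{2T+Ns},\, T\right) & T \le b \\
\frac{2TNs}{2T+Ns} & T > b,\ Ns \le \frac{2Tb}{2T-b} \\
b + \left(Ns - \frac{2Tb}{2T-b}\right)\frac{T-b}{T} & T > b,\ Ns > \frac{2Tb}{2T-b}
\end{cases}
$$

where T is the number of pages in the table, N the number of tuples, s the
selectivity (fraction of the table fetched), and b the number of buffer pages
assumed available. The change described above applies this estimate to the
total work of all repetitions of the inner indexscan, i.e. Ns is taken across
the whole nestloop rather than per scan.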
This shouldn't affect simple indexscans much, while for bitmap scans that
are touching a lot of index rows, this seems to bring the estimates more
in line with reality. Per recent discussion.
assumed that a sequential page fetch has cost 1.0. This patch doesn't
in itself change the system's behavior at all, but it opens the door to
people adopting other units of measurement for EXPLAIN costs. Also, if
we ever decide it's worth inventing per-tablespace access cost settings,
this change provides a workable intellectual framework for that.
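As a hedged illustration of what "other units of measurement" can mean in
practice: because every other cost parameter is expressed relative to one
sequential page fetch, scaling the whole family of *_cost settings by the same
factor should change only the numbers EXPLAIN prints, not the plans chosen.
The timings below are invented for the example:

    -- Default unit: one sequential page fetch costs 1.0.
    SHOW seq_page_cost;

    -- Suppose we want EXPLAIN costs to read roughly as milliseconds on a
    -- hypothetical machine where a sequential page fetch takes 0.05 ms.
    -- Scale every cost parameter by the same factor (0.05) so that the
    -- relative costs, and hence the plan choices, stay the same.
    SET seq_page_cost        = 0.05;
    SET random_page_cost     = 0.2;       -- default 4.0    * 0.05
    SET cpu_tuple_cost       = 0.0005;    -- default 0.01   * 0.05
    SET cpu_index_tuple_cost = 0.00025;   -- default 0.005  * 0.05
    SET cpu_operator_cost    = 0.000125;  -- default 0.0025 * 0.05

    EXPLAIN SELECT count(*) FROM pg_class;  -- costs now appear in the new units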
< o Allow COPY to output from views
> o Allow COPY to output from SELECT
570c570
< Another idea would be to allow actual SELECT statements in a COPY.
> COPY should also be able to output views.
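For illustration, the kind of syntax this TODO item points toward might look
like the following; the table and view names are hypothetical, and the precise
grammar is whatever the eventual patch settles on:

    -- Copy the output of an arbitrary SELECT rather than a whole table:
    COPY (SELECT id, name FROM customers WHERE active) TO STDOUT WITH CSV;

    -- Since a view is just a stored SELECT, this also covers the old
    -- "output from views" wording of the TODO item:
    COPY (SELECT * FROM active_customers) TO STDOUT;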
as this seems only likely to create headaches for module developers. Put
the macro in the pre-existing fmgr.h file instead. Avoid being too cute
about how many fields we can cram into a word, and avoid trying to fetch
from a library we've already unlinked.
Along the way, it occurred to me that the magic block really ought to be
'const' so it can be stored in the program text area. Do the same for
the existing data blocks for PG_FUNCTION_INFO_V1 functions.
It now only checks four things:
Major version number (7.4 or 8.1 for example)
NAMEDATALEN
FUNC_MAX_ARGS
INDEX_MAX_KEYS
The three constants were chosen because:
1. We document them in the config page in the docs
2. We mark them as changeable in pg_config_manual.h
3. Changing any of these will break some of the more popular modules:
   FUNC_MAX_ARGS changes the fmgr interface, and every module uses this
   NAMEDATALEN changes the syscache interface, and every PL as well as tsearch uses this
   INDEX_MAX_KEYS breaks tsearch and anything using GiST.
Martijn van Oosterhout
---------------------------------------------------------------------------
Add dynamic record inspection to PL/pgSQL, useful for generic triggers:
tval2 := r.(cname);
or
columns := r.(*);
Titus von Boxberg
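A minimal sketch of what "useful for generic triggers" could look like with
this. Note that NEW.(cname) below uses the patch's proposed syntax, which
released PostgreSQL does not accept, and the audit_log table, log_column()
function, and trigger argument are hypothetical:

    -- Hypothetical generic trigger: log one column, named at trigger
    -- creation time, for whatever table the trigger is attached to.
    CREATE FUNCTION log_column() RETURNS trigger AS $$
    DECLARE
        cname  text := TG_ARGV[0];   -- column name from CREATE TRIGGER
        newval text;
    BEGIN
        newval := NEW.(cname);       -- proposed dynamic field access
        INSERT INTO audit_log (relname, colname, value)
            VALUES (TG_RELNAME, cname, newval);
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER customers_log AFTER UPDATE ON customers
        FOR EACH ROW EXECUTE PROCEDURE log_column('name');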
An article at WebProNews quoted the PG docs on the merits of stored
procedures. I have added a bit more material on their merits,
as well as making a few changes to improve the introductions to
PL/Perl and PL/Tcl.
Chris Browne
kept but now deprecated. Patch from Adam Sjøgren. Add regression test to
show plperl trigger data (Andrew).
TBD: apply similar changes to plpgsql, plpython and pltcl.
* some refactoring and code simplification in gistutil.c and gist.c
* a user-defined picksplit method can now be called in some cases for
  non-first columns of an index, but there is room to do more here
* small fix to the docs regarding NULL support