rid of the assumption that sizeof(Oid)==sizeof(int). This is one small
step towards someday supporting 8-byte OIDs. For the moment, it doesn't
do much except get rid of a lot of unsightly casts.
expression accepted by the regex operators, per discussion yesterday.
Along the way, reduce deadlock_timeout from the PGC_POSTMASTER to the PGC_SIGHUP
category. It is probably best to insist that all backends share the same
setting, but that doesn't mean it has to be frozen at startup.
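As a hedged illustration of what the looser category permits (pg_reload_conf()
is assumed to be available; on servers that lack it, send SIGHUP to the
postmaster by hand, and the '2s' value is only an example):

    -- after editing postgresql.conf, e.g.  deadlock_timeout = '2s'
    SELECT pg_reload_conf();    -- picks up the new value without a restart
    SHOW deadlock_timeout;      -- every backend now reports the reloaded value
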
(extracted from Tcl 8.4.1 release, as Henry still hasn't got round to
making it a separate library). This solves a performance problem for
multibyte encodings, and it upgrades our regexp support to match recent Tcl
and nearly match recent Perl.
restriction was debatable to begin with, but it has now become obvious
that it breaks forward-porting of user-defined types; contrib/lo being
the most salient example.
for type 'time without time zone', as we already did for type
'timestamp without time zone'. This patch was proposed by Tom Lockhart
on 7-Nov-02, but he never got around to applying it. Adjust regression
tests and documentation to match.
value of MAX_TIME_PRECISION in the floating-point-timestamp-storage case
from 13 to 10, which is as much as time_out is actually willing to print.
(The alternative of increasing the number of digits we are willing to
print looks risky; we might find ourselves printing roundoff garbage.)
passed to join selectivity estimators. Make use of this in eqjoinsel
to derive non-bogus selectivity for IN clauses. Further tweaking of
cost estimation for IN.
initdb forced because of pg_proc.h changes.
necessarily following the JOIN syntax to develop the query plan. The old
behavior is still available by setting GUC variable JOIN_COLLAPSE_LIMIT
to 1. Also create a GUC variable FROM_COLLAPSE_LIMIT to control the
similar decision about when to collapse sub-SELECT lists into their parent
lists. (This behavior existed already, but the limit was always
GEQO_THRESHOLD/2; now it's separately adjustable.)
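For illustration, a hedged sketch of using the two variables (the values shown
are only examples; the names are case-insensitive and usually written in lower
case):

    -- force the planner to follow the explicit JOIN syntax, i.e. the old behavior
    SET join_collapse_limit = 1;
    -- separately control how aggressively sub-SELECTs in FROM are pulled up
    -- into the parent query's join search space
    SET from_collapse_limit = 8;
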
that's selecting into a RECORD variable returns zero rows, make it
assign an all-nulls row to the RECORD; this is consistent with what
happens when the SELECT INTO target is not a RECORD. In support of
this, tweak the SPI code so that a valid tuple descriptor is returned
even when a SPI select returns no rows.
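A small hypothetical PL/pgSQL example of the new behavior (the widgets table
and its columns are invented for illustration):

    CREATE FUNCTION lookup_widget(wid int) RETURNS text AS '
    DECLARE
        r RECORD;
    BEGIN
        SELECT * INTO r FROM widgets WHERE id = wid;
        -- If no row matched, r is now a row of all nulls (consistent with
        -- non-RECORD INTO targets) instead of being left unusable.
        IF r.id IS NULL THEN
            RETURN ''not found'';
        END IF;
        RETURN r.name;
    END;
    ' LANGUAGE plpgsql;
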
There are two implementation techniques: the executor now understands a new
JOIN_IN jointype, which emits at most one matching row per left-hand row;
alternatively, the result of the IN's sub-select can be fed through a DISTINCT
filter and then joined as an ordinary relation.
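As a rough sketch with invented table names, an IN query of this shape can now
be planned either with the JOIN_IN jointype or by uniq-ifying the sub-select
and joining it normally:

    -- original form: at most one output row per matching orders row
    SELECT * FROM orders o
    WHERE o.customer_id IN (SELECT c.id FROM customers c WHERE c.active);

    -- the logically equivalent shape the planner may use internally:
    -- make the sub-select's output distinct, then join it as a plain relation
    SELECT o.*
    FROM orders o JOIN
         (SELECT DISTINCT c.id FROM customers c WHERE c.active) AS u
         ON o.customer_id = u.id;
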
Along the way, some minor code cleanup in the optimizer; notably, break
out most of the jointree-rearrangement preprocessing in planner.c and
put it in a new file prep/prepjointree.c.
including:
- replacing all the appropriate usages of <citetitle>PostgreSQL
...</citetitle> with &cite-user;, &cite-admin;, and so on
- fix an omission in the EXECUTE documentation
- add some more text to the EXPLAIN documentation
- improve the PL/PgSQL RETURN NEXT documentation (more work to do here)
- minor markup fixes
Neil Conway
containing a volatile function), rather than only on 'Var = Var' clauses
as before. This makes it practical to do flatten_join_alias_vars at the
start of planning, which in turn eliminates a bunch of klugery inside the
planner to deal with alias vars. As a free side effect, we now detect
implied equality of non-Var expressions; for example in
SELECT ... WHERE a.x = b.y and b.y = 42
we will deduce a.x = 42 and use that as a restriction qual on a. Also,
we can remove the restriction introduced 12/5/02 to prevent pullup of
subqueries whose targetlists contain sublinks.
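A hedged, concrete version of the example above (tables a and b are assumed to
exist, with joinable columns x and y); the deduced qual a.x = 42 can be applied
while scanning a, before the join:

    EXPLAIN
    SELECT *
    FROM a, b
    WHERE a.x = b.y AND b.y = 42;
    -- the plan should show the derived restriction applied to a's scan,
    -- though the exact plan shape depends on statistics
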
Still TODO: make statistical estimation routines in selfuncs.c and costsize.c
smarter about expressions that are more complex than plain Vars. The need
for this is considerably greater now that we have to be able to estimate
the suitability of merge and hash join techniques on such expressions.
costs for expression evaluation, not only per-tuple cost as before.
This extension is needed in order to deal realistically with hashed or
materialized sub-selects.
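For context, a hedged example (invented schema) of the sort of query where this
matters: the sub-select's result may be loaded into an in-memory hash table
once, so most of its evaluation cost is start-up cost rather than
per-outer-tuple cost:

    -- whether a hashed subplan is used depends on the estimated size of the
    -- sub-select's result; EXPLAIN shows the strategy actually chosen
    EXPLAIN
    SELECT *
    FROM orders o
    WHERE o.customer_id NOT IN (SELECT b.customer_id FROM blacklist b);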