into an OR of equality comparisons, rather than x = ANY(ARRAY[...]), when there
are Vars in the right-hand side. This avoids a performance regression compared
to pre-8.2 releases, in cases where the OR form can be optimized into scans
of multiple indexes. Limit the possible downside by preferring this form only
when the list isn't very long (I set the cutoff at 32 elements, which is a
bit arbitrary but in the right ballpark). Per discussion with Jim Nasby.
In passing, also make it try the OR form if it cannot select a common type
for the array elements; we've seen a complaint or two about how the OR form
worked for such cases and ARRAY doesn't.
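For illustration, the decision rule amounts to something like the following sketch (names such as Item, choose_in_list_form, and MAX_OR_ELEMS are made up here, not the parser's identifiers):

    #include <stdbool.h>
    #include <stddef.h>

    #define MAX_OR_ELEMS 32          /* the cutoff chosen in this commit */

    typedef struct
    {
        bool is_var;                 /* element references a column (Var) */
        bool coercible;              /* element fits the common array element type */
    } Item;

    typedef enum { USE_SCALAR_ARRAY, USE_OR_LIST } InListForm;

    static InListForm
    choose_in_list_form(const Item *items, size_t n)
    {
        bool any_var = false;
        bool all_coercible = true;

        for (size_t i = 0; i < n; i++)
        {
            any_var |= items[i].is_var;
            all_coercible &= items[i].coercible;
        }

        if (any_var && n <= MAX_OR_ELEMS)
            return USE_OR_LIST;      /* short list with Vars: keep multi-index ORs possible */
        if (!all_coercible)
            return USE_OR_LIST;      /* no common element type: the ARRAY form can't be built */
        return USE_SCALAR_ARRAY;     /* otherwise collapse to x = ANY(ARRAY[...]) */
    }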
recent proposal. In typical cases, we now need 12 bytes per insert or delete
event and 16 bytes per update event; previously we needed 40 bytes per
event on 32-bit hardware and 80 bytes per event on 64-bit hardware. Even
in the worst case usage pattern with a large number of distinct triggers being
fired in one query, usage is at most 32 bytes per event. It seems to be a
bit faster than the old code as well, due to reduction of palloc overhead.
This commit doesn't address the TODO item of allowing the event list to spill
to disk; rather it's trying to stave off the need for that. However, it
probably makes that task a bit easier by reducing the data structure's
dependency on pointers. It would now be practical to dump an event list to
disk by "chunks" instead of individual events.
of copy/pasted emails. Much of the content had already been migrated
into the main documentation, some was out of date, and some was just plain
wrong.
Keep the "protocol-flowchart", which can still be useful.
in their targetlists had better reset ps_TupFromTlist during ReScan calls.
There's no need to back-patch here since nodeAgg and nodeGroup didn't
even pretend to support SRFs in prior releases.
* make LDAP use this instead of the hacky previous method to specify
the DN to bind as
* make all auth options behave the same when they are not compiled
into the server
* rename "ident maps" to "user name maps", and support them for all
auth methods that provide an external username
This makes a backwards incompatible change in the format of pg_hba.conf
for the ident, PAM and LDAP authentication methods.
inputs is unique or nearly so), make eqjoinsel() clamp the ndistinct estimates
to be not more than the estimated number of rows coming from the input
relations. This allows the estimate to change in response to the selectivity
of restriction conditions on the inputs.
This is a pretty narrow patch and maybe we should be more aggressive about
similarly clamping ndistinct in other cases; but I'm worried about
double-counting the effects of the restriction conditions. However, it seems
to help for the case exhibited by Grzegorz Jaskiewicz (antijoin against a
small subset of a relation), so let's try this for a while.
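In outline the clamp is trivial (illustrative code, not the actual selectivity function):

    /* A join input cannot contribute more distinct values than it has rows,
     * so restriction clauses that shrink the input now shrink ndistinct too. */
    static double
    clamp_join_ndistinct(double ndistinct, double input_rows)
    {
        return (ndistinct > input_rows) ? input_rows : ndistinct;
    }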
until vars are distributed to rels during query_planner() startup. We don't
really need it before that, and not building it early has some advantages.
First, we don't need to put it through the various preprocessing steps, which
saves some cycles and eliminates the need for a number of routines to support
PlaceHolderInfo nodes at all. Second, this means one less unused plan for any
sub-SELECT appearing in a placeholder's expression, since we don't build
placeholder_list until after sublink expansion is complete.
correctly set. As a result, killtuple() marks the wrong tuple on the
page as dead. The bug was introduced by me while fixing possible
duplicates during GiST index scans.
that represent some expression that we desire to compute below the top level
of the plan, and then let that value "bubble up" as though it were a plain
Var (ie, a column value).
The immediate application is to allow sub-selects to be flattened even when
they are below an outer join and have non-nullable output expressions.
Formerly we couldn't flatten because such an expression wouldn't properly
go to NULL when evaluated above the outer join. Now, we wrap it in a
PlaceHolderVar and arrange for the actual evaluation to occur below the outer
join. When the resulting Var bubbles up through the join, it will be set to
NULL if necessary, yielding the correct results. This fixes a planner
limitation that's existed since 7.1.
In future we might want to use this mechanism to re-introduce some form of
Hellerstein's "expensive functions" optimization, ie place the evaluation of
an expensive function at the most suitable point in the plan tree.
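A toy model of that "bubble up as NULL" behavior (the types and functions below are purely illustrative; the real mechanism is the new PlaceHolderVar node and its handling in the planner and executor):

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct
    {
        bool isnull;
        int  value;
    } NullableInt;                   /* a value that can read as NULL */

    /* Compute "inner_col + 1" below the outer join ... */
    static NullableInt
    eval_below_join(int inner_col)
    {
        NullableInt v = { false, inner_col + 1 };
        return v;
    }

    /* ... and above the join, null it out when there was no inner match,
     * exactly as a plain Var from the nullable side would be. */
    static NullableInt
    bubble_up(NullableInt v, bool inner_matched)
    {
        if (!inner_matched)
            v.isnull = true;
        return v;
    }

    int
    main(void)
    {
        NullableInt matched   = bubble_up(eval_below_join(41), true);
        NullableInt unmatched = bubble_up(eval_below_join(41), false);

        printf("matched: %d, unmatched is NULL: %s\n",
               matched.value, unmatched.isnull ? "yes" : "no");
        return 0;
    }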
This patch eliminates the marking of subtransactions as SUBCOMMITTED in pg_clog
during their commit; instead they remain in-progress until main transaction
commit. At main transaction commit, the commit protocol is atomic-by-page
instead of one transaction at a time. To avoid a race condition with some
subtransactions appearing committed before others in the case where they span
more than one pg_clog page, we keep the logic that marks them subcommitted
before marking the parent committed.
Simon Riggs with minor help from me
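A toy outline of that ordering, with an in-memory array standing in for pg_clog (the names and the xids-per-page figure are illustrative assumptions, not the clog code):

    #include <stddef.h>
    #include <stdint.h>

    typedef uint32_t Xid;
    typedef enum { XSTAT_IN_PROGRESS, XSTAT_SUBCOMMITTED, XSTAT_COMMITTED } XidStatus;

    #define XIDS_PER_PAGE 32768              /* assumes 2 status bits per xid on an 8 kB page */
    #define XID_TO_PAGE(x) ((x) / XIDS_PER_PAGE)
    #define MAX_XID 100000

    static XidStatus clog_status[MAX_XID];   /* toy stand-in for pg_clog */

    static void
    commit_tree(Xid parent, const Xid *subxids, size_t nsub)
    {
        /* Subtransactions on *other* clog pages are marked subcommitted first,
         * so no observer can see them committed before the parent is. */
        for (size_t i = 0; i < nsub; i++)
            if (XID_TO_PAGE(subxids[i]) != XID_TO_PAGE(parent))
                clog_status[subxids[i]] = XSTAT_SUBCOMMITTED;

        /* The parent and any same-page subtransactions are then committed in a
         * single pass over that page (in the real code, under one page lock,
         * which is what makes the protocol atomic-by-page). */
        for (size_t i = 0; i < nsub; i++)
            if (XID_TO_PAGE(subxids[i]) == XID_TO_PAGE(parent))
                clog_status[subxids[i]] = XSTAT_COMMITTED;
        clog_status[parent] = XSTAT_COMMITTED;

        /* Finally, the off-page subtransactions are flipped to committed. */
        for (size_t i = 0; i < nsub; i++)
            if (XID_TO_PAGE(subxids[i]) != XID_TO_PAGE(parent))
                clog_status[subxids[i]] = XSTAT_COMMITTED;
    }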
scanning; GiST and GIN do not, and it seems like too much trouble to make
them do so. By teaching ExecSupportsBackwardScan() about this restriction,
we ensure that the planner will protect a scroll cursor from the problem
by adding a Materialize node.
In passing, fix another longstanding bug in the same area: backwards scan of
a plan with set-returning functions in the targetlist did not work either,
since the TupFromTlist expansion code pays no attention to direction (and
has no way to run a SRF backwards anyway). Again the fix is to make
ExecSupportsBackwardScan check this restriction.
Also adjust the index AM API specification to note that mark/restore support
is unnecessary if the AM can't produce ordered output.
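The shape of the check, over a simplified plan representation (the types below are illustrative; the real function walks actual plan trees):

    #include <stdbool.h>

    typedef enum { PLAN_SEQSCAN, PLAN_INDEXSCAN, PLAN_SORT } PlanKind;

    typedef struct
    {
        PlanKind kind;
        bool     am_can_backward;    /* e.g. btree: true; GiST and GIN: false */
        bool     tlist_has_srf;      /* set-returning function in the targetlist */
    } PlanLite;

    static bool
    supports_backward_scan(const PlanLite *plan)
    {
        /* A SRF in the targetlist cannot be run backwards at all. */
        if (plan->tlist_has_srf)
            return false;

        switch (plan->kind)
        {
            case PLAN_SEQSCAN:
            case PLAN_SORT:
                return true;
            case PLAN_INDEXSCAN:
                return plan->am_can_backward;
        }
        return false;
    }

When this reports false for a scroll cursor's plan, the planner interposes a Materialize node, which buffers the forward output so it can then be read in either direction.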
set_rel_width(). The code had been catering for the possibility of different
varnos in the relation targetlist, but this is impossible for a base relation
(and if it were possible, putting all the widths in the same RelOptInfo would
be wrong anyway).
is NULL but SK_SEARCHNULL is not set. Add a check for IS NULL keys during
key initialization: if a key is NULL and SK_SEARCHNULL is not set, then
nothing can be satisfied.
With assert-enabled compilation the old coding caused a core dump.
The bug was introduced in 8.3 by the support for IS NULL index scans.
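Roughly the added check (SK_ISNULL and SK_SEARCHNULL are the real flag names, but the flag values and surrounding code here are illustrative):

    #include <stdbool.h>
    #include <stdint.h>

    #define SK_ISNULL     0x0001     /* illustrative flag values, not the real ones */
    #define SK_SEARCHNULL 0x0002

    typedef struct
    {
        uint32_t sk_flags;
        /* ... comparison function, argument, etc. ... */
    } ScanKeyLite;

    /* During key initialization: a NULL comparison value that is not an
     * explicit IS NULL search can never be satisfied, so the scan can stop
     * before ever handing a NULL to the comparison function. */
    static bool
    scan_is_satisfiable(const ScanKeyLite *keys, int nkeys)
    {
        for (int i = 0; i < nkeys; i++)
            if ((keys[i].sk_flags & SK_ISNULL) &&
                !(keys[i].sk_flags & SK_SEARCHNULL))
                return false;
        return true;
    }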
In the previous coding, the list of columns that needed to be hashed on
was allocated in the per-query context, but was reallocated every time
the Agg node was rescanned. Since this information doesn't change over
a rescan, just construct the list of columns once during ExecInitAgg().
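The pattern, on a stripped-down state struct (names here are illustrative, not nodeAgg.c's):

    #include <stdlib.h>

    typedef struct
    {
        int *hash_cols;              /* column numbers to hash on */
        int  num_hash_cols;
    } AggStateLite;

    static void
    agg_init(AggStateLite *state, const int *cols, int ncols)
    {
        /* Built once per query: the set of hashed columns never changes. */
        state->hash_cols = malloc(sizeof(int) * (size_t) ncols);
        for (int i = 0; i < ncols; i++)
            state->hash_cols[i] = cols[i];
        state->num_hash_cols = ncols;
    }

    static void
    agg_rescan(AggStateLite *state)
    {
        /* Reset per-scan state here, but do not rebuild hash_cols; rebuilding
         * it on each rescan only accumulated garbage in per-query memory. */
        (void) state;
    }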
according to the TupleDesc's natts, not the number of physical columns in the
tuple. The previous coding would do the wrong thing in cases where natts is
different from the tuple's column count: either incorrectly report an error when
it should just treat the column as null, or actually crash due to indexing off
the end of the TupleDesc's attribute array. (The second case is probably not
possible in modern PG versions, due to more careful handling of inheritance
cases than we once had. But it's still a clear lack of robustness here.)
The incorrect error indication is ignored by all callers within the core PG
distribution, so this bug has no symptoms visible within the core code, but
it might well be an issue for add-on packages. So patch all the way back.
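In outline, with simplified stand-in types rather than the real tuple and descriptor structures:

    #include <stdbool.h>

    typedef struct { int natts; } TupleDescLite;                /* descriptor column count */
    typedef struct { int ncols; const int *values; } TupleLite; /* physical tuple */

    /* Fetch 1-based attribute attnum.  Validity is judged against the
     * descriptor's natts; a column within that range but missing from this
     * physical tuple (e.g. added after the tuple was stored) simply reads as
     * NULL.  Checking against tup->ncols instead would either reject valid
     * columns or index past the descriptor's attribute array. */
    static bool
    get_attr_lite(const TupleLite *tup, const TupleDescLite *desc,
                  int attnum, int *value, bool *isnull)
    {
        if (attnum < 1 || attnum > desc->natts)
            return false;            /* genuinely out of range: report an error */
        if (attnum > tup->ncols)
        {
            *isnull = true;          /* column exists but is absent from the tuple */
            *value = 0;
            return true;
        }
        *isnull = false;
        *value = tup->values[attnum - 1];
        return true;
    }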
lengthof(SysAtt) not FirstLowInvalidHeapAttributeNumber, for consistency with
the other uses of the SysAtt array, and to make it clearer that it doesn't
walk off the end of that array.
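For reference, lengthof() is just the usual sizeof-based element count, so a loop bounded by it cannot run past the array (the attribute names below are only an illustration):

    #include <stddef.h>
    #include <stdio.h>

    #define lengthof(array) (sizeof(array) / sizeof((array)[0]))

    static const char *SysAttNames[] = {
        "ctid", "xmin", "cmin", "xmax", "cmax", "tableoid"
    };

    int
    main(void)
    {
        for (size_t i = 0; i < lengthof(SysAttNames); i++)
            printf("%zu: %s\n", i, SysAttNames[i]);
        return 0;
    }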
Formerly, the lack of any opclasses that could accept such data was enough
of a defense, but now with a "record" opclass we need to check more carefully.
(You can still use that opclass for an index, but you have to store a named
composite type not an anonymous one.)
pointers. This is only a whitespace change, which ought to be ignored
by regression testing, but for some reason buildfarm member spoonbill
doesn't like it.
the timestamp types. Turns out this doesn't even reduce the available
range of dates, since the restriction to dates that work for Julian-date
arithmetic is much tighter than the int32 range anyway. Per a longstanding
TODO item.
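For illustration, the day-count conversion involved is the classic formula (compare date2j() in the backend; this standalone copy and the main() check are only a sketch). Day 0 falls in late 4714 BC (proleptic Gregorian), so it is the Julian arithmetic, not the int32 day range, that bounds usable dates at the low end:

    #include <stdio.h>

    static int
    julian_day(int y, int m, int d)
    {
        int     julian;
        int     century;

        if (m > 2)
        {
            m += 1;
            y += 4800;
        }
        else
        {
            m += 13;
            y += 4799;
        }

        century = y / 100;
        julian = y * 365 - 32167;
        julian += y / 4 - century + century / 4;
        julian += 7834 * m / 256 + d;

        return julian;
    }

    int
    main(void)
    {
        printf("2000-01-01 -> day %d\n", julian_day(2000, 1, 1));   /* 2451545 */
        return 0;
    }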