Stefan Kaltenbrunner. The most reasonable behavior (at least for the near
term) seems to be to ignore the PlaceHolderVar and examine its argument
instead. In support of this, change the API of pull_var_clause() to allow
callers to request recursion into PlaceHolderVars. Currently
estimate_num_groups() is the only customer for that behavior, but where
there's one there may be others.
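For illustration, a hedged sketch (my construction, not the reported query) of
a query shape that puts a PlaceHolderVar into a grouping expression and so
reaches estimate_num_groups():
    -- The constant output column x sits below the nullable side of the left
    -- join, so when the sub-select is pulled up it gets wrapped in a
    -- PlaceHolderVar (it must go to NULL for unmatched rows of t).  Grouping
    -- on it then hands that PlaceHolderVar to estimate_num_groups(), which
    -- now just recurses into the contained expression.  tenk1 and unique1
    -- are merely the familiar regression-test names.
    select ss.x, count(*)
      from tenk1 t
           left join (select unique1, 1 as x from tenk1) ss
           on t.unique1 = ss.unique1
     group by ss.x;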
more nearly matching the core SQL scanner. The user-visible effects are:
* Block comments (slash-star comments) now nest, as per SQL spec.
* In standard_conforming_strings mode, backslash as the last character of a
non-E string literal is now correctly taken as an ordinary character;
formerly it was misinterpreted as escaping the ending quote. (Since the
string also had to pass through the core scanner, this invariably led
to syntax errors.)
* Formerly, backslashes in the format string of RAISE were always treated as
quoting the next character, regardless of mode. Now, they are ordinary
characters with standard_conforming_strings on, while with it off, they
introduce the same set of escapes as in the core SQL scanner. Also,
escape_string_warning is now effective for RAISE format strings. These
changes make RAISE format strings work just like any other string literal
(see the sketch after this list).
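As a hedged sketch of the new behavior (the function name and message text
are invented for illustration):
    -- With standard_conforming_strings on, the backslash below is an ordinary
    -- character, so the notice prints "path is C:\data" as written.  With it
    -- off, the format string gets the same escape processing as any other
    -- non-E literal (the unrecognized escape \d simply drops the backslash)
    -- and escape_string_warning applies.  Formerly the backslash always
    -- quoted the next character, regardless of mode.
    create function report_path() returns void as $$
    begin
      raise notice 'path is C:\data';
    end;
    $$ language plpgsql;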
This is implemented by copying and pasting a lot of logic from the core
scanner. It would be a good idea to look into getting rid of plpgsql's
scanner entirely in favor of using the core scanner. However, that involves
more change than I can justify making during beta --- in particular, the core
scanner would have to become re-entrant.
In passing, remove the kluge that made the plpgsql scanner emit T_FUNCTION or
T_TRIGGER as a made-up first token. That presumably had some value once upon
a time, but now it's just useless complication for both the scanner and the
grammar.
etc are no longer guaranteed to produce sorted output; per gripe from Ian
Barwick. Also improve the release note entries about to_timestamp(), per
Brendan Jurd.
constants through full joins, as in
    select * from tenk1 a full join tenk1 b using (unique1)
     where unique1 = 42;
which should generate a fairly cheap plan where we apply the constraint
unique1 = 42 in each relation scan. This had been broken by my patch of
2008-06-27, which is now reverted in favor of a more invasive but hopefully
less incorrect approach. That patch was meant to prevent incorrect extraction
of OR'd indexclauses from OR conditions above an outer join. To do that
correctly we need more information than the outerjoin_delay flag can provide,
so add a nullable_relids field to RestrictInfo that records exactly which
relations are nulled by outer joins that are underneath a particular qual
clause. A side benefit is that we can make the test in create_or_index_quals
more specific: it is now smart enough to extract an OR'd indexclause into the
outer side of an outer join, even though it must not do so in the inner side.
The old coding couldn't distinguish these cases so it could not do either.
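As a hedged illustration (the query is mine, not taken from the report):
    -- tenk1 is the regression-test table; a is the outer side of the join.
    -- The OR'd restriction on a.unique2 can now be extracted as an
    -- indexclause for a's scan, since nullable_relids shows that a is not
    -- nulled by any outer join beneath the WHERE clause.  The analogous
    -- extraction for the nullable side b would be unsafe and is still
    -- (correctly) not done; the old outerjoin_delay test could not tell
    -- these two cases apart.
    select *
      from tenk1 a left join tenk1 b on a.unique1 = b.unique1
     where (a.unique2 = 3 and b.hundred = 4) or a.unique2 = 7;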
    select u&42 from table-with-a-u-column;
Also fix missing SET_YYLLOC() in the {dolqfailed} production that I suppose
this was based on. The latter is a pre-existing bug, but the only effect
is to misplace the error cursor by one token, so probably not worth
backpatching.
If a currently running item needs an exclusive lock on any item that the
candidate item needs any sort of lock on, or vice versa, then the candidate
item is not allowed to run now, and must wait till later.
tablespaces in an order that has some chance of working.
Per a complaint from Kevin Bailey.
This is a pre-existing bug, but given the lack of prior complaints I'm
not sure it's worth back-patching. In most cases failure of the DROP
commands wouldn't be that important anyway.
In passing, fix syntax errors in dumpCreateDB()'s queries for old servers;
these were apparently introduced in the recent binary_upgrade patch.
how this ought to behave for multi-dimensional arrays. Per discussion,
not having it at all seems better than having it with what might prove
to be the wrong behavior. We can always add it later when we have consensus
on the correct behavior.
by my patch of 2007-01-28 to use per-subtransaction ExprContexts/EStates:
since we re-prepared any expression tree when the current subtransaction ID
changed, we'd accumulate more and more leaked expression state trees in the
outermost subtransaction if the same function was executed at multiple levels
of subtransaction nesting. To fix, go back to the previous scheme where
there was only one EState per transaction for simple plpgsql expressions.
We really only need an ExprContext per subtransaction, not a whole EState,
so it's possible to keep prepared expression state trees in the one EState
throughout the transaction. This should be more efficient as well as not
leaking memory for cases involving lots of subtransactions.
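A hedged sketch of the usage pattern that leaked (names are invented; the
actual regression test differs):
    -- Each recursion level opens a new subtransaction via its EXCEPTION
    -- clause, so the same function runs at many levels of subtransaction
    -- nesting.  Under the per-subtransaction EState scheme, the simple
    -- expressions here were re-prepared at every level and the superseded
    -- state trees piled up in the outermost subtransaction; with one EState
    -- per transaction they are prepared once and reused.
    create function nested_calc(depth int) returns int as $$
    begin
      if depth <= 0 then
        return 0;
      end if;
      begin
        return depth + nested_calc(depth - 1);
      exception
        when others then return -1;
      end;
    end;
    $$ language plpgsql;
    select nested_calc(100);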
The added regression test is the case that inspired the 2007-01-28 patch in
the first place, just to make sure we didn't go backwards. The current
memory leak complaint is unfortunately hard to test for in the regression
test framework, though manual testing shows it's fixed.
Although this is a pre-existing bug, I'm not back-patching because I'd like to
see this method get some field testing first. Consider back-patching if it
gets through 8.4beta unscathed.
cstring from the output of \df. Now that the default behavior is to
exclude all system functions, the de-cluttering rationale for this behavior
seems pretty weak; and it was always quite confusing/unhelpful if you were
actually looking for I/O functions. (Not to mention if you were looking
for encoding converters or other cases that might take or return cstring.)
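For instance (a hedged illustration of the lookup that formerly came up
empty; int4in and point_out are ordinary catalog I/O functions):
    -- These were silently omitted before because they take or return
    -- cstring; now they show up like any other system function.
    \dfS int4in
    \dfS point_out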