be part of multixacts, so allocate a slot for each prepared transaction in
the "oldest member" array in multixact.c. On PREPARE TRANSACTION, transfer
the oldest member value from the current backend's slot to the prepared xact
slot. Also save and recover the value from the 2PC state file.
The symptom of the bug was that after a transaction was prepared, a shared lock
still held by the prepared transaction was sometimes ignored by other
transactions.
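For illustration, a hedged sketch of the symptom (table name hypothetical):

    -- sessions 1 and 2 both share-lock a row, creating a multixact:
    BEGIN;
    SELECT * FROM accounts WHERE id = 1 FOR SHARE;  -- run in both sessions
    -- session 1 then prepares, and must stay a live member of the multixact:
    PREPARE TRANSACTION 'p1';
    -- with the bug, once multixact.c forgot the prepared transaction's
    -- oldest member, a conflicting update could proceed instead of waiting:
    UPDATE accounts SET balance = 0 WHERE id = 1;   -- should block until 'p1' ends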
Fix back to 8.1, where both 2PC and multixact were introduced.
We have used -w for a long time as a means of reducing the reported diff
volume when one element of a result table isn't of the expected width.
However, most of the time the results just pass anyway, so this isn't as
important as it once was. Meanwhile, the risk of missing potentially
significant deviations has gone up, particularly with psql's ability to
report error cursor positions. So, let's switch over to space-sensitive
comparisons. Per my proposal of yesterday.
(All the expected files that I can test here seem to be ready for this
already, but we'll see what the buildfarm thinks about others.)
in the formerly-always-blank columns just to left and right of the data.
Different marking is used for a line break caused by a newline in the data
than for a straight wraparound. A newline break is signaled by a "+" in the
right margin column in ASCII mode, or a carriage return arrow in UNICODE mode.
Wraparound is signaled by a dot in the right margin as well as the following
left margin in ASCII mode, or an ellipsis symbol in the same places in UNICODE
mode. "\pset linestyle old-ascii" is added to make the previous behavior
available if anyone really wants it.
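For reference, the relevant psql settings (marker shapes as described above):

    \pset format wrapped        -- enable wrapped output
    \pset linestyle ascii       -- "+" and "." markers in the margin columns
    \pset linestyle unicode     -- arrow and ellipsis symbols instead
    \pset linestyle old-ascii   -- previous 8.4-style formatting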
In passing, this commit also cleans up a few regression test files that
had unintended spacing differences from the current actual output.
Roger Leigh, reviewed by Gabrielle Roth and other members of PDXPUG.
list, minus a few specific words that have to be treated specially. This
replaces a hard-wired list of keywords that would have needed manual
maintenance, and was not getting it. The 8.4 coding was already missing
these words, causing ecpg to incorrectly treat them as reserved words:
CALLED, CATALOG, DEFINER, ENUM, FOLLOWING, INVOKER, OPTIONS, PARTITION,
PRECEDING, RANGE, SECURITY, SERVER, UNBOUNDED, WRAPPER. In HEAD we were
additionally missing COMMENTS, FUNCTIONS, SEQUENCES, TABLES.
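For example, the backend accepts these words as ordinary identifiers, so a
statement like the following (table hypothetical) is valid SQL, yet the
affected ecpg versions rejected it in embedded SQL:

    CREATE TABLE config_options (partition int, range int, options text);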
Per gripe from Bosco Rama.
checked to determine whether the trigger should be fired.
For BEFORE triggers this is mostly a matter of spec compliance; but for AFTER
triggers it can provide a noticeable performance improvement, since queuing of
a deferred trigger event and re-fetching of the row(s) at end of statement can
be short-circuited if the trigger does not need to be fired.
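For example, a hedged sketch (table and function names hypothetical):

    CREATE TRIGGER check_balance_update
        AFTER UPDATE ON accounts
        FOR EACH ROW
        WHEN (OLD.balance IS DISTINCT FROM NEW.balance)
        EXECUTE PROCEDURE log_balance_change();

If the WHEN expression is false, no AFTER trigger event is queued at all.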
Takahiro Itagaki, reviewed by KaiGai Kohei.
output filename if CSV logging was enabled and only one of the two possible
output files got rotated during a particular call (which would, in fact,
typically be the case during a size-based rotation). This would amount to
about MAXPGPATH (1KB) per rotation, and it's been there since the CSV
code was put in, so it's surprising that nobody noticed it before.
Per bug #5196 from Thomas Poindessous.
strength of database passwords, and create a sample implementation of
such a hook as a new contrib module "passwordcheck".
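As a usage sketch (assuming the module has been loaded via
shared_preload_libraries, and with error texts paraphrased):

    ALTER USER alice PASSWORD 'short';       -- rejected: too short
    ALTER USER alice PASSWORD 'alice12345';  -- rejected: contains the user name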
Laurenz Albe, reviewed by Takahiro Itagaki
adopted for EXPLAIN. This will allow additional options to be implemented
in future without having to make them fully-reserved keywords. The old syntax
remains available for existing options, however.
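The generic style in question, as already used by EXPLAIN, wraps options in
parentheses, for example (query illustrative):

    EXPLAIN (ANALYZE, VERBOSE, COSTS off) SELECT * FROM accounts;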
Itagaki Takahiro
non-Var sort/group expressions using ressortgroupref labels instead of
depending entirely on equal()-ity of the upper node's tlist expressions
to the lower node's. This avoids emitting the wrong outputs in cases
where there are textually identical volatile sort/group expressions,
as for example
select distinct random(),random() from generate_series(1,10);
Per report from Andrew Gierth.
Backpatch to 8.4. Arguably this is wrong all the way back, but the only known
case where there's an observable problem is when using hash aggregation to
implement DISTINCT, which is new as of 8.4. So for the moment I'll refrain
from backpatching further.
mergejoin to shield it from doing mark/restore and refetches. Put an explicit
flag in MergePath so we can centralize the logic that knows about this,
and add costing logic that considers using Materialize even when it's not
forced by the previously-existing considerations. This is in response to
a discussion back in August that suggested that materializing an inner
indexscan can be helpful when the refetch percentage is high enough.
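Illustratively, the resulting plan shape looks like this (relation and index
names hypothetical):

    Merge Join
      Merge Cond: (a.x = b.x)
      ->  Index Scan using a_x_idx on a
      ->  Materialize
            ->  Index Scan using b_x_idx on b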
default be "throw error on conflict", as per discussions. The GUC variable
is plpgsql.variable_conflict, with values "error", "use_variable",
"use_column". The behavior can also be specified per-function by inserting
one of
    #variable_conflict error
    #variable_conflict use_variable
    #variable_conflict use_column
at the start of the function body.
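For example, a minimal sketch (table and names hypothetical):

    CREATE FUNCTION get_name(p_id int) RETURNS text AS $$
        #variable_conflict use_column
        DECLARE
            name text;   -- same name as the column users.name
        BEGIN
            -- with use_column, "name" below means the column; with
            -- use_variable it would mean the variable; with the default
            -- "error" setting the ambiguous reference raises an error
            SELECT name INTO name FROM users WHERE id = p_id;
            RETURN name;
        END;
    $$ LANGUAGE plpgsql;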
The 8.5 release notes will need to mention using "use_variable" to retain
backward-compatible behavior, although we should encourage people to migrate
to the much less mistake-prone "error" setting.
Update the plpgsql documentation to match this and other recent changes.
but the transformed ArrayExpr claimed to have a return type of "domain",
even though the domain constraint was only checked by the enclosing
CoerceToDomain node. With this fix, the ArrayExpr is correctly labeled with
the base type of the domain. Per gripe by Tom Lane.
we need to check domain constraints. We used to do it correctly, but 8.4
introduced a separate code path for the "ARRAY[]::arraytype" case to infer
the type of an empty ARRAY construct from the cast target, and forgot to take
domains into account.
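For example (domain is illustrative):

    CREATE DOMAIN nonempty_ints AS int[] CHECK (VALUE <> '{}');
    SELECT ARRAY[]::nonempty_ints;  -- 8.4 skipped the CHECK here; it must error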
Per report from Florian G. Pflug.
A user-defined consistent function could assume that the check array
contains at least one true element, but that assumption did not hold
when scanning the pending list.
Per report from Yury Don <yura@vpcit.ru>
Per discussion, this should result in defaulting to SQL_ASCII encoding.
The original coding could not support that because it conflated selection
of SQL_ASCII encoding with not being able to determine the encoding.
Adjust pg_get_encoding_from_locale()'s API to distinguish these cases,
and fix callers appropriately. Only initdb actually changes behavior,
since the other callers were perfectly content to consider these cases
equivalent.
Per bug #5178 from Boh Yap. Not going to bother back-patching, since
no one has complained before and there's an easy workaround (namely,
specify the encoding you want).
directly. This was a lot of trouble, but should be worth it in terms of
not having to keep the plpgsql lexer in step with core anymore. In addition
the handling of keywords is significantly better-structured, allowing us to
de-reserve a number of words that plpgsql formerly treated as reserved.
The main motivation for this is that it's required for Informix compatibility
in ECPG.
This patch makes the ECPG and core grammars a bit closer to one another for
these productions.
Author: Zoltan Boszormenyi
mainloop.c. This ensures that postgres_fe.h is read before including
any system headers, which is necessary to avoid problems on some platforms
where we make nondefault selections of feature macros for stdio.h or
other headers. We have had this policy for flex modules in the backend
for many years, but for some reason it was not applied to psql.
Per trouble report from Alexandra Roy and diagnosis by Albe Laurenz.
Apple has fixed that bug in 10.6.2, and we should encourage users to
update to that version rather than trusting this cosmetic patch.
As was recently noted by Stephen Tyler, this patch was only masking
the problem in the context of DROP TABLESPACE, but the failure could
occur in other places such as pg_xlog cleanup.
In VACUUM FULL, an interrupt after the initial transaction has been recorded
as committed can cause postmaster to restart with the following error message:
    PANIC: cannot abort transaction NNNN, it was already committed
This problem has been reported many times.
In lazy VACUUM, an interrupt after the table has been truncated by
lazy_truncate_heap causes other backends' relcache to still point to the
removed pages; this can cause future INSERT and UPDATE queries to error out
with the following error message:
    could not read block XX of relation 1663/NNN/MMMM: read only 0 of 8192 bytes
The window for this race condition is extremely narrow, but it has been seen in
the wild involving a cancelled autovacuum process.
The solution for both problems is to inhibit interrupts in both operations
until after the respective transactions have been committed. It's not a
complete solution, because the transaction could theoretically be aborted by
some other error, but it at least fixes the most common causes of both problems.
yytext. This is a necessary change if we're going to have a lexer interface
layer that does lookahead, since yytext won't necessarily be in step with
what the grammar thinks is the current token. yylval and yylloc should
be the only side-variables that we need to manage when doing lookahead.
of different parsers having different YYSTYPE unions that they want to use
with it. I defined a new union core_YYSTYPE that is just the (very short)
list of semantic values returned by the core scanner. I had originally
worried that this would require an extra interface layer, but actually we can
have parser.c's base_yylex (formerly filtered_base_yylex) take care of that at
no extra cost. Names associated with the core scanner are now "core_yy_foo",
with "base_yy_foo" being used in the core Bison parser and the parser.c
interface layer.
This solves the last serious stumbling block to eliminating plpgsql's separate
lexer. One restriction that will still be present is that plpgsql and the
core will have to agree on the token numbers assigned to tokens that can be
returned by the core lexer. Since Bison doesn't seem willing to accept
external assignments of those numbers, we'll have to live with decreeing that
core and plpgsql grammars declare these tokens first and in the same order.
can be the name of a plpgsql cursor variable, which formerly was converted
to $N before the core parser saw it, but that's no longer the case.
Deal with plain name references to plpgsql variables, and add a regression
test case that exposes the failure.
like the core parser's code. In particular, track locations at the character
rather than line level during parsing, allowing many more parse-time error
conditions to be reported with precise error pointers rather than just
"near line N".
Also, exploit the fact that we no longer need to substitute $N for variable
references by making extracted SQL queries and expressions be exact copies
of subranges of the function text, rather than having random whitespace
changes within them. This makes it possible to directly map parse error
positions from the core parser onto positions in the function text, which
lets us report them without the previous kluge of showing the intermediate
internal-query form. (Later it might be good to do that for core
parse-analysis errors too, but this patch is just touching plpgsql's
lexer/parser, not what happens at runtime.)
In passing, make plpgsql's lexer use palloc not malloc.
These changes make plpgsql's parse-time error reports noticeably nicer
(as illustrated by the regression test changes), and will also simplify
the planned removal of plpgsql's separate lexer by reducing the impedance
mismatch between what it does and what the core lexer does.
* Pull the responsibility for %TYPE and %ROWTYPE out of the scanner,
letting read_datatype manage it instead.
* Avoid unnecessary scanner-driven lookups of plpgsql variables in
places where it's not needed, which is actually most of the time;
we do not need it in DECLARE sections nor in text that is a SQL
query or expression.
* Rationalize the set of token types returned by the scanner:
distinguishing T_SCALAR, T_RECORD, T_ROW seems to complicate the grammar
in more places than it simplifies it, so merge these into one
token type T_DATUM; but split T_ERROR into T_DBLWORD and T_TRIPWORD
for clarity and simplicity of later processing.
Some of this will need to be revisited again when we try to make
plpgsql use the core scanner, but this patch gets some of the bigger
stumbling blocks out of the way.
into SQL expressions, to using the newly added parser callback hooks.
This allows us to do the substitutions in a more semantically-aware way:
a variable reference will only be recognized where it can validly go,
ie, a place where a column value or parameter would be legal, instead of
the former behavior that would replace any textual match including
table names and column aliases (leading to syntax errors later on).
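A hedged sketch of the difference (names hypothetical):

    CREATE FUNCTION count_accounts() RETURNS bigint AS $$
        DECLARE
            accounts int := 0;  -- same name as the table below
        BEGIN
            -- the old textual substitution would have rewritten the table
            -- name in FROM as a parameter symbol, causing a syntax error;
            -- now only positions where a value is legal are considered
            RETURN (SELECT count(*) FROM accounts);
        END;
    $$ LANGUAGE plpgsql;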
A release-note-worthy fine point is that plpgsql variable names that match
fully-reserved words will now need to be quoted.
This commit preserves the former behavior that variable references take
precedence over any possible match to a column name. The infrastructure
is in place to support the reverse precedence or throwing an error on
ambiguity, but those behaviors aren't accessible yet.
Most of the code changes here are associated with making the namespace
data structure persist so that it can be consulted at runtime, instead
of throwing it away at the end of initial function parsing.
The plpgsql scanner is still doing name lookups, but that behavior is
now irrelevant for SQL expressions. A future commit will deal with
removing unnecessary lookups.