Clear isReset before, not after, calling the context-specific alloc method,
so as to preserve the option to do a tail call in MemoryContextAlloc
(and also so this code isn't assuming that a failed alloc call won't have
changed the context's state before failing). Fix missed direct invocation
of reset method. Reformat a comment.
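For illustration, here is a minimal, self-contained sketch of the ordering described above; the type and function names are stand-ins, not the actual PostgreSQL declarations.

    #include <stddef.h>
    #include <stdbool.h>

    /* Stand-in types for illustration; not the real PostgreSQL structs. */
    struct mcxt;
    struct mcxt_methods
    {
        void *(*alloc) (struct mcxt *context, size_t size);
        void  (*reset) (struct mcxt *context);
    };
    struct mcxt
    {
        const struct mcxt_methods *methods;
        bool        isReset;        /* no allocations since last reset? */
    };

    /* Clear isReset before, not after, the context-specific alloc call: the
     * call can then be compiled as a tail call, and nothing is assumed about
     * the context's state if alloc fails partway through. */
    static void *
    mcxt_alloc(struct mcxt *context, size_t size)
    {
        context->isReset = false;
        return context->methods->alloc(context, size);
    }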
The core CREATE FUNCTION code only enforces that IN parameter names are
non-duplicate, and that OUT parameter names are separately non-duplicate.
This is because some function languages might not have any confusion
between the two. But in plpgsql, such names are all in the same namespace,
so we'd better disallow it.
Per a recent complaint from Dan S. Not back-patching since this is a small
issue and the change could cause unexpected failures if we started to
enforce it in a minor release.
The new logic is less vulnerable to transpositions.
This invalidates the contents of hash indexes built with the old
functions; hence, bump catversion.
Dean Rasheed
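The message does not spell out the old or new hashing logic, so purely as an illustration of what "vulnerable to transpositions" means, here are two hypothetical ways of combining per-field hash values (neither is claimed to be the committed code): a plain XOR is symmetric, so swapping two fields produces the same hash, while rotating the running value first is order-sensitive.

    #include <stdint.h>

    /* Hypothetical combiners, for illustration only. */
    static uint32_t
    combine_xor(uint32_t a, uint32_t b)
    {
        return a ^ b;                      /* combine_xor(a,b) == combine_xor(b,a) */
    }

    static uint32_t
    combine_rotate_xor(uint32_t a, uint32_t b)
    {
        return ((a << 1) | (a >> 31)) ^ b; /* changes if a and b are swapped */
    }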
The planner can sometimes compute very large values for numGroups, and in
cases where we have no alternative to building a hashtable, such a value
will get fed directly to BuildTupleHashTable as its nbuckets parameter.
There were two ways in which that could go bad. First, BuildTupleHashTable
declared the parameter as "int" but most callers were passing "long"s,
so on 64-bit machines undetected overflow could occur leading to a bogus
negative value. The obvious fix for that is to change the parameter to
"long", which is what I've done in HEAD. In the back branches that seems a
bit risky, though, since third-party code might be calling this function.
So for them, just put in a kluge to treat negative inputs as INT_MAX.
Second, hash_create can go nuts with extremely large requested table sizes
(notably, my_log2 becomes an infinite loop for inputs larger than
LONG_MAX/2). What seems most appropriate to avoid that is to bound the
initial table size request to work_mem.
This fixes bug #6035 reported by Daniel Schreiber. Although the reported
case only occurs back to 8.4 since it involves WITH RECURSIVE, I think
it's a good idea to install the defenses in all supported branches.
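For illustration, the two defenses described above might look roughly like the following; the function name and the work_mem-derived cap are hypothetical, not the actual code.

    #include <limits.h>

    /* Hypothetical sketch: clamp a possibly-overflowed bucket count and bound
     * the initial table size request by a work_mem-derived cap. */
    static long
    clamp_nbuckets(long nbuckets, long max_buckets_for_work_mem)
    {
        /* A negative value means the caller's "long" overflowed an "int". */
        if (nbuckets < 0)
            nbuckets = INT_MAX;
        /* Don't request an initial table larger than work_mem could hold. */
        if (nbuckets > max_buckets_for_work_mem)
            nbuckets = max_buckets_for_work_mem;
        return nbuckets;
    }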
Historically we didn't do this, even though we had the information, because
plpgsql passed its Params via SPI APIs that only include type OIDs not
typmods. Now that plpgsql uses parser callbacks to create Params, it's
easy to insert the right typmod. This should generally result in lower
surprise factors, because a plpgsql variable that is declared with a typmod
will now work more like a table column with the same typmod. In particular
it's the "right" way to fix bug #6020, in which plpgsql's attempt to return
an anonymous record type is defeated by stricter record-type matching
checks that were added in 9.0. However, it's not impossible that this
could result in subtle behavioral changes that could break somebody's
existing plpgsql code, so I'm afraid to back-patch this change into
released branches. In those branches we'll have to lobotomize the
record-type checks instead.
avoids the overhead of one function call when calling MemoryContextReset(),
and it seems like the isReset optimization would be applicable to any new
memory context we might invent in the future anyway.
This buys back the overhead I just added in the previous patch to always call
MemoryContextReset() in ExecScan, even when there are no quals or projections.
Currently this only matters for foreign scans, as none of the other scan nodes
litter the per-tuple memory context when there are no quals or projections.
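For illustration, a minimal sketch of the optimization as described, with the isReset check done in the generic reset path; the type and function names are stand-ins, not the actual PostgreSQL declarations.

    #include <stdbool.h>

    /* Stand-in type for illustration; not the real PostgreSQL struct. */
    struct mcxt
    {
        void  (*reset) (struct mcxt *context); /* context-specific reset method */
        bool    isReset;                       /* no allocations since last reset */
    };

    /* With the isReset check in the generic code, resetting an already-clean
     * context skips the call to the context-specific reset method entirely. */
    static void
    mcxt_reset(struct mcxt *context)
    {
        if (!context->isReset)
        {
            context->reset(context);
            context->isReset = true;
        }
    }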
Even though this only affects the insertion of a parenthesized word,
it's unwise to assume that parentheses can pass through untranslated.
And in any case, the new version is clearer in the code and for
translators.
Also, we earlier removed the display of the current value of
max_connections/MaxBackends from some messages because it was confusing, so do
the same in the remaining one.
This was broken in commit ef19dc6d39 (from the Bunce/Hunsaker/Dunstan team),
which moved the declaration from
plperl_create_sub to plperl_call_perl_trigger_func. This doesn't
actually work because the validator code would not find the variable
declared; and even if you manage to get past the validator, it still
doesn't work because get_sv("_TD", GV_ADD) doesn't have the expected
effect. The only reason this got beyond testing is that it only fails
in strict mode.
We need to declare it as a global just like %_SHARED; it is simpler than
trying to actually do what the patch initially intended, and is said to
have the same performance benefit.
As a more serious issue, fix $_TD not being properly local()ized,
meaning nested trigger functions would clobber $_TD.
Alex Hunsaker, per test report from Greg Mullane
pg_dump has some heuristic rules for whether to dump casts and procedural
languages, since it's not all that easy to distinguish built-in ones from
user-defined ones. However, we should not apply those rules to objects
that belong to an extension, but just use the perfectly well-defined rules
for what to do with extension member objects. Otherwise we might
mistakenly lose extension member objects during a binary upgrade (which is
the only time that we'd want to dump extension members).
This commit fixes psql, pg_dump, and the information schema to be
consistent with the backend changes which I made as part of commit
be90032e0d, and also includes a
related documentation tweak.
Shigeru Hanada, with slight adjustment.