The new logic is less vulnerable to transpositions.
This invalidates the contents of hash indexes built with the old
functions; hence, bump catversion.
Dean Rasheed
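
A hedged sketch of the order-sensitive idea (not the committed code; the
function name is invented): folding each element's hash into a running value
that is first scaled by an odd constant makes the result depend on element
positions, so swapping two elements changes the output, whereas a plain XOR
of element hashes would not.

    #include <stdint.h>

    /*
     * Illustrative only: an order-sensitive hash combiner.  XORing element
     * hashes together is invariant under transpositions; scaling the running
     * hash before adding each element's hash is not.
     */
    uint32_t
    combine_element_hash(uint32_t running, uint32_t elem_hash)
    {
        return running * 31 + elem_hash;
    }
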
The planner can sometimes compute very large values for numGroups, and in
cases where we have no alternative to building a hashtable, such a value
will get fed directly to BuildTupleHashTable as its nbuckets parameter.
There were two ways in which that could go bad. First, BuildTupleHashTable
declared the parameter as "int" but most callers were passing "long"s,
so on 64-bit machines undetected overflow could occur leading to a bogus
negative value. The obvious fix for that is to change the parameter to
"long", which is what I've done in HEAD. In the back branches that seems a
bit risky, though, since third-party code might be calling this function.
So for them, just put in a kluge to treat negative inputs as INT_MAX.
Second, hash_create can go nuts with extremely large requested table sizes
(notably, my_log2 becomes an infinite loop for inputs larger than
LONG_MAX/2). What seems most appropriate to avoid that is to bound the
initial table size request to work_mem.
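
A minimal standalone sketch of the back-branch kluge (illustrative only; the
helper name is made up): any value that does not fit in a non-negative int is
treated as INT_MAX before being used as nbuckets.

    #include <limits.h>

    /*
     * Illustrative only: clamp a planner group-count estimate that may have
     * overflowed (or may simply be huge) to something safe to pass as an
     * "int" nbuckets parameter.
     */
    int
    clamp_nbuckets(long num_groups)
    {
        if (num_groups < 0 || num_groups > INT_MAX)
            return INT_MAX;     /* treat overflow as "as large as representable" */
        return (int) num_groups;
    }
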
This fixes bug #6035 reported by Daniel Schreiber. Although the reported
case only occurs back to 8.4 since it involves WITH RECURSIVE, I think
it's a good idea to install the defenses in all supported branches.
Historically we didn't do this, even though we had the information, because
plpgsql passed its Params via SPI APIs that only include type OIDs, not
typmods. Now that plpgsql uses parser callbacks to create Params, it's
easy to insert the right typmod. This should generally result in lower
surprise factors, because a plpgsql variable that is declared with a typmod
will now work more like a table column with the same typmod. In particular
it's the "right" way to fix bug #6020, in which plpgsql's attempt to return
an anonymous record type is defeated by stricter record-type matching
checks that were added in 9.0. However, it's not impossible that this
could result in subtle behavioral changes that could break somebody's
existing plpgsql code, so I'm afraid to back-patch this change into
released branches. In those branches we'll have to lobotomize the
record-type checks instead.
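
A rough sketch of the mechanism (not the actual plpgsql source; the var_* and
ref_location names are placeholders, and other Param fields are omitted): the
parser callback that resolves a variable reference now copies the variable's
declared typmod into the Param instead of leaving it at -1.

    /* Hypothetical fragment in PostgreSQL backend style. */
    Param  *param = makeNode(Param);

    param->paramkind = PARAM_EXTERN;
    param->paramid = var_dno + 1;               /* the variable's datum number */
    param->paramtype = var_type_oid;            /* e.g. the OID of numeric */
    param->paramtypmod = var_declared_typmod;   /* e.g. the typmod of numeric(6,2) */
    param->location = ref_location;
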
avoids the overhead of one function call when calling MemoryContextReset(),
and it seems like the isReset optimization would be applicable to any new
memory context we might invent in the future anyway.
This buys back the overhead I just added in the previous patch to always call
MemoryContextReset() in ExecScan, even when there are no quals or projections.
Currently this only matters for foreign scans, as none of the other scan nodes
litter the per-tuple memory context when there are no quals or projections.
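
The point, roughly, is that MemoryContextReset() can now test the flag in the
generic context header and return without making the indirect call into the
context type's reset method; a simplified sketch based on the generic mcxt.c
structure (assertions and details omitted):

    void
    MemoryContextReset(MemoryContext context)
    {
        /* recurse into children only if there are any */
        if (context->firstchild != NULL)
            MemoryContextResetChildren(context);

        /*
         * With isReset kept in the generic header, resetting an already-reset
         * context costs only this test -- no call into the per-type method.
         */
        if (!context->isReset)
        {
            (*context->methods->reset) (context);
            context->isReset = true;
        }
    }
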
Even though this only affects the insertion of a parenthesized word,
it's unwise to assume that parentheses can pass through untranslated.
And in any case, the new version is clearer in the code and for
translators.
Also, we removed the display of the current value of
max_connections/MaxBackends from some messages earlier, because it was
confusing, so do that in the remaining one as well.
This was broken in commit ef19dc6d39 by
the Bunce/Hunsaker/Dunstan team, which moved the declaration from
plperl_create_sub to plperl_call_perl_trigger_func. This doesn't
actually work because the validator code would not find the variable
declared; and even if you manage to get past the validator, it still
doesn't work because get_sv("_TD", GV_ADD) doesn't have the expected
effect. The only reason this got beyond testing is that it only fails
in strict mode.
We need to declare it as a global just like %_SHARED; it is simpler than
trying to actually do what the patch initially intended, and is said to
have the same performance benefit.
As a more serious issue, fix $_TD not being properly local()ized,
meaning nested trigger functions would clobber $_TD.
Alex Hunsaker, per test report from Greg Mullane
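
A hedged sketch of the local()ization in C through the Perl API (the
trigger_data_sv name is a placeholder, and this is not necessarily the exact
committed code): the global is fetched, local()ized with save_item() so the
caller's value is restored when the enclosing Perl scope is left, and then set
to the current call's trigger data.

    /* Hypothetical fragment using the Perl C API inside plperl. */
    SV *TDsv = get_sv("main::_TD", 0);   /* the global declared alongside %_SHARED */

    if (!TDsv)
        elog(ERROR, "couldn't fetch $_TD");

    save_item(TDsv);                     /* the C-level equivalent of "local $_TD" */
    sv_setsv(TDsv, trigger_data_sv);     /* install this call's trigger data */
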
pg_dump has some heuristic rules for whether to dump casts and procedural
languages, since it's not all that easy to distinguish built-in ones from
user-defined ones. However, we should not apply those rules to objects
that belong to an extension, but just use the perfectly well-defined rules
for what to do with extension member objects. Otherwise we might
mistakenly lose extension member objects during a binary upgrade (which is
the only time that we'd want to dump extension members).
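
In pg_dump terms the rule is roughly this (a sketch assuming the DumpableObject
ext_member flag; the function shown is illustrative, and procedural languages
would be handled the same way): if the object belongs to an extension, bail out
before the built-in-vs-user-defined heuristic runs and let the
extension-membership machinery decide.

    /* Illustrative sketch of the selection logic for casts. */
    static void
    selectDumpableCast(CastInfo *cast)
    {
        if (cast->dobj.ext_member)
            return;             /* extension membership rules govern dumping */

        /* ... otherwise fall through to the usual heuristics ... */
    }
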
This commit fixes psql, pg_dump, and the information schema to be
consistent with the backend changes which I made as part of commit
be90032e0d, and also includes a
related documentation tweak.
Shigeru Hanada, with slight adjustment.
The code to assemble ldap_get_values_len's output into a single string
wrote the terminating null one byte past where it should. Fix that,
and make some other cosmetic adjustments to make the code a trifle more
readable and more in line with usual Postgres coding style.
Also, free the "result" string when done with it, to avoid a permanent
memory leak.
Bug report and patch by Albe Laurenz, cosmetic adjustments by me.
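
The class of bug is easy to show with a standalone sketch (not the backend
code): size the buffer so the terminating null lands inside the allocation
rather than one byte past it, and free the assembled string when finished
with it.

    #include <stdlib.h>
    #include <string.h>

    /*
     * Illustrative only: join a NULL-terminated array of strings with a
     * separator.  The allocation accounts for the trailing '\0', so the
     * terminator is written inside the buffer, not one byte past the end.
     * The caller must free() the result to avoid a permanent leak.
     */
    char *
    join_values(char **vals, char sep)
    {
        size_t  need = 1;               /* room for the terminating '\0' */
        char   *result;
        char   *p;
        int     i;

        for (i = 0; vals[i] != NULL; i++)
            need += strlen(vals[i]) + (i > 0 ? 1 : 0);

        result = malloc(need);
        if (result == NULL)
            return NULL;

        p = result;
        for (i = 0; vals[i] != NULL; i++)
        {
            size_t  len = strlen(vals[i]);

            if (i > 0)
                *p++ = sep;
            memcpy(p, vals[i], len);
            p += len;
        }
        *p = '\0';
        return result;
    }
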
Failure to distinguish these cases is the real cause behind the recent
reports of Windows builds crashing on 'infinity'::timestamp, which was
directly due to failure to establish a value of timezone_abbreviations
in postmaster child processes. The postmaster had the desired value,
but write_one_nondefault_variable() didn't transmit it to backends.
To fix that, invent a new value PGC_S_DYNAMIC_DEFAULT, and be sure to use
that or PGC_S_ENV_VAR (as appropriate) for "default" settings that are
computed during initialization. (We need both because there's at least
one variable that could receive a value from either source.)
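
For instance (a hedged example, not a literal excerpt from the patch): a
default computed at startup would be installed with the new source value, so
later GUC processing can tell it apart from a user-supplied setting.

    /*
     * Hypothetical use of the new source value: record a runtime-computed
     * default for timezone_abbreviations so that child processes and
     * configuration reloads treat it as a default, not as coming from
     * postgresql.conf.
     */
    SetConfigOption("timezone_abbreviations", "Default",
                    PGC_POSTMASTER, PGC_S_DYNAMIC_DEFAULT);
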
This commit also fixes ProcessConfigFile's failure to restore the correct
default value for certain GUC variables if they are set in postgresql.conf
and then removed/commented out of the file. We have to recompute and
reinstall the value for any GUC variable that could have received a value
from PGC_S_DYNAMIC_DEFAULT or PGC_S_ENV_VAR sources, and there were a
number of oversights. (That whole thing is a crock that needs to be
redesigned, but not today.)
However, I intentionally didn't make it work "exactly right" for the cases
of timezone and log_timezone. The exactly right behavior would involve
running select_default_timezone, which we'd have to do independently in
each postgres process, causing the whole database to become entirely
unresponsive for as much as several seconds. That didn't seem like a good
idea, especially since the variable's removal from postgresql.conf might be
just an accidental edit. Instead the behavior is to adopt the previously
active setting as if it were default.
Note that this patch creates an ABI break for extensions that use any of
the PGC_S_XXX constants; they'll need to be recompiled.
Use ColLabel in place of ColId, so that reserved words are accepted as if
they were not reserved. Also, remove BCONST and XCONST, which were never
documented as allowed. Allowing those would expose to users an implementation
detail, namely the format in which the lexer outputs such constants, which
seems unwise.
No documentation change needed, since this just makes the code act more
like you'd expect from reading the CREATE TRIGGER man page.
Per complaint from Szymon Guz and subsequent discussion.