If the existing citext type has not merely been created, but used in any
tables, then the upgrade script wasn't doing enough. We have to update
attcollation for each citext table column, and indcollation for each citext
index column, as well. Per report from Rudolf van der Leeden.
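For illustration, the column half of the needed catalog surgery looks
roughly like this (a hedged sketch; the WHERE conditions and the use of
the default collation OID 100 are assumptions, and pg_index.indcollation
needs analogous treatment for index columns):

    -- stamp pre-existing citext table columns with the default collation
    UPDATE pg_catalog.pg_attribute
    SET attcollation = 100
    WHERE atttypid = 'citext'::pg_catalog.regtype
      AND attcollation = 0;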
Since PostgreSQL 9.0, we've emitted a warning message when an operator
named => is created, because the SQL standard now reserves that token
for another use. But we've also shipped such an operator with hstore.
Use of the function hstore(text, text) has been recommended in
preference to =>(text, text). Per discussion, it's now time to take
the next step and stop shipping the operator. This will allow us to
prohibit the use of => as an operator name in a future release if and
when we wish to support the SQL standard use of this token.
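For example, both of these build the same one-entry hstore; only the
second spelling will keep working:

    SELECT 'a' => 'b';        -- operator being removed
    SELECT hstore('a', 'b');  -- recommended replacement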
The release notes should mention this incompatibility.
Patch by me, reviewed by David Wheeler, Dimitri Fontaine and Tom Lane.
Make it use t_isspace() to identify whitespace, rather than relying on
sscanf which is known to get it wrong on some platform/locale combinations.
Get rid of fixed-size buffers. Make it actually continue to parse the file
after ignoring a line with untranslatable characters, as was obviously
intended.
The first of these issues is per gripe from J Smith, though not exactly
either of his proposed patches.
Both dict_int and dict_xsyn were blithely assuming that whatever memory
palloc gives back will be pre-zeroed. This would typically work for
just about long enough to run their regression tests, and no longer :-(.
The pre-9.0 code in dict_xsyn was even lamer than that, as it would
happily give back a pointer to the result of palloc(0), encouraging
its caller to access off the end of memory. Again, this would just
barely fail to fail as long as memory contained nothing but zeroes.
Per a report from Rodrigo Hjort that code based on these examples
didn't work reliably.
We have seen one too many reports of people trying to use 9.1 extension
files in the old-fashioned way of sourcing them in psql. Not only does
that usually not work (due to failure to substitute for MODULE_PATHNAME
and/or @extschema@), but if it did work they'd get a collection of loose
objects not an extension. To prevent this, insert an \echo ... \quit
line that prints a suitable error message into each extension script file,
and teach commands/extension.c to ignore lines starting with \echo.
That should not only prevent any adverse consequences of loading a script
file the wrong way, but make it crystal clear to users that they need to
do it differently now.
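Concretely, each script now starts with a guard line of this form
(extension name varies per module):

    -- complain if script is sourced in psql, rather than via CREATE EXTENSION
    \echo Use "CREATE EXTENSION pg_trgm" to load this file. \quit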
Tom Lane, following an idea of Andrew Dunstan's. Back-patch into 9.1
... there is not going to be much value in this if we wait till 9.2.
A similar problem for pgstattuple() was fixed in April of 2010 by commit
33065ef8bc, but pgstatindex() seems to have
been overlooked.
Back-patch all the way, as with that commit, though not to 7.4 through
8.1, since those are now EOL.
Arrange for any problems with pre-existing settings to be reported as
WARNING not ERROR, so that we don't undesirably abort the loading of the
incoming add-on module. The bad setting is just discarded, as though it
had never been applied at all. (This requires a change in the API of
set_config_option. After some thought I decided the most potentially
useful addition was to allow callers to just pass in a desired elevel.)
Arrange to restore the complete stacked state of the variable, rather than
cheesily reinstalling only the active value. This ensures that custom GUCs
will behave unsurprisingly even when the module loading operation occurs
within nested subtransactions that have changed the active value. Since a
module load could occur as a result of, eg, a PL function call, this is not
an unlikely scenario.
Since gtrgm_penalty() is usually called many times in a row with the same
"newval" (to determine which item on an index page newval fits into best),
the makesign() calculation is repetitious. It's expensive enough to make
it worth caching the result, so do so. On my machine this is good for
more than a 40% savings in the time needed to build a trigram index on
/usr/share/dict/words. This is all per a suggestion of Heikki's.
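The timing scenario was an index build of roughly this shape (table and
index names illustrative):

    CREATE EXTENSION pg_trgm;
    CREATE TABLE words (w text);
    COPY words (w) FROM '/usr/share/dict/words';
    CREATE INDEX words_trgm_idx ON words USING gist (w gist_trgm_ops);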
In passing, make some mostly-cosmetic improvements in the caching logic in
the other functions in this file that rely on caching info in fn_extra.
Because these tests require root privileges, not to mention invasive
changes to the security configuration of the host system, it's not
reasonable for them to be invoked by a regular "make check" or "make
installcheck". Instead, dike out the Makefile's knowledge of the tests,
and change chkselinuxenv (now renamed "test_sepgsql") into a script that
verifies the environment is workable and then runs the tests. It's
expected that test_sepgsql will only be run manually.
While at it, do some cleanup in the error checking in the script, and
do some wordsmithing in the documentation.
This is implemented as a per-column boolean option, rather than trying
to match COPY's convention of a single option listing the column names.
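Assuming this refers to file_fdw's force_not_null column option, usage
looks about like this (server and file names hypothetical):

    CREATE FOREIGN TABLE pets (
        name text,
        nick text OPTIONS (force_not_null 'true')  -- never convert to NULL
    ) SERVER file_server
    OPTIONS (filename '/path/pets.csv', format 'csv');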
Shigeru Hanada, reviewed by KaiGai Kohei
Rewrite plancache.c so that a "cached plan" (which is rather a misnomer
at this point) can support generation of custom, parameter-value-dependent
plans, and can make an intelligent choice between using custom plans and
the traditional generic-plan approach. The specific choice algorithm
implemented here can probably be improved in future, but this commit is
all about getting the mechanism in place, not the policy.
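At the SQL level the effect shows up with prepared statements, e.g.
(table name hypothetical):

    PREPARE byid(int) AS SELECT * FROM orders WHERE customer_id = $1;
    EXECUTE byid(42);  -- may now get a custom plan built for 42, or the
                       -- traditional generic plan, per the choice logic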
In addition, restructure the API to greatly reduce the amount of extraneous
data copying needed. The main compromise needed to make that possible was
to split the initial creation of a CachedPlanSource into two steps. It's
worth noting in particular that SPI_saveplan is now deprecated in favor of
SPI_keepplan, which accomplishes the same end result with zero data
copying, and no need to then spend even more cycles throwing away the
original SPIPlan. The risk of long-term memory leaks while manipulating
SPIPlans has also been greatly reduced. Most of this improvement is based
on use of the recently-added MemoryContextSetParent primitive.
This addresses only those cases that are easy to fix by adding or
moving a const qualifier or removing an unnecessary cast. There are
many more complicated cases remaining.
Add __attribute__ decorations for printf format checking to the places that
were missing them. Fix the resulting warnings. Add
-Wmissing-format-attribute to the standard set of warnings for GCC, so these
don't happen again.
The warning fixes here are relatively harmless. The one serious problem
discovered by this was already committed earlier in
cf15fb5cab.
As per my recent proposal, this refactors things so that these typedefs and
macros are available in a header that can be included in frontend-ish code.
I also changed various headers that were undesirably including
utils/timestamp.h to include datatype/timestamp.h instead. Unsurprisingly,
this showed that half the system was getting utils/timestamp.h by way of
xlog.h.
No actual code changes here, just header refactoring.
walsender.h should depend on xlog.h, not vice versa. (Actually, the
inclusion was circular until a couple hours ago, which was even sillier;
but Bruce broke it in the expedient rather than logically correct
direction.) Because of that poor decision, plus blind application of
pgrminclude, we had a situation where half the system was depending on
xlog.h to include such unrelated stuff as array.h and guc.h. Clean up
the header inclusion, and manually revert a lot of what pgrminclude had
done so things build again.
This episode reinforces my feeling that pgrminclude should not be run
without adult supervision. Inclusion changes in header files in particular
need to be reviewed with great care. More generally, it'd be good if we
had a clearer notion of module layering to dictate which headers can sanely
include which others ... but that's a big task for another day.
Don't test whether the number of labels is numerically equal to zero;
count(*) isn't going to return zero anyway, and the current coding blows
up if it returns an empty string or an error.
There's no reason for this test to use the undocumented pg_prepared_xact()
function, when it can use the stable API pg_prepared_xacts instead.
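That is, simply:

    SELECT gid FROM pg_prepared_xacts;  -- documented, stable across versions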
Fixes breakage against 8.3, as reported by Justin Arnold.
For an empty index, the pgstatindex() function would compute 0.0/0.0 for
its avg_leaf_density and leaf_fragmentation outputs. On machines that
follow the IEEE float arithmetic standard with any care, that results in
a NaN. However, per report from Rushabh Lathia, Microsoft couldn't
manage to get this right, so you'd get a bizarre error on Windows.
Fix by forcing the results to be NaN explicitly, rather than relying on
the division operator to give that or the snprintf function to print it
correctly. I have some doubts that this is really the most useful
definition, but it seems better to remain backward-compatible with
those platforms for which the behavior wasn't completely broken.
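To see the case in question (names hypothetical):

    CREATE EXTENSION pgstattuple;
    CREATE TABLE t (a int);
    CREATE INDEX t_a_idx ON t (a);
    -- empty index: both outputs are now explicitly NaN on all platforms
    SELECT avg_leaf_density, leaf_fragmentation FROM pgstatindex('t_a_idx');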
Back-patch to 8.2, since the code is like that in all current releases.
The previous coding resulted in contrib modules unintentionally overriding
the use of CONTRIB_TESTDB. There seems no particularly good reason to
allow that (after all, the makefile can set CONTRIB_TESTDB if that's really
what it intends).
In passing, document REGRESS_OPTS where the other pgxs.mk options are
documented.
Back-patch to 9.1 --- in prior versions, there were no cases of contrib
modules setting REGRESS_OPTS without including the --dbname switch, so
while the coding was fragile there was no actual bug.
We'll have to settle for just listing the extensions' data types,
since function arguments seem to sort differently in different locales.
Per buildfarm results.
When we implemented extensions, we made findDependentObjects() treat
EXTENSION dependency links similarly to INTERNAL links. However, that
logic contained an implicit assumption that an object could have at most
one INTERNAL dependency, so it did not work correctly for objects having
both INTERNAL and DEPENDENCY links. This led to failure to drop some
extension member objects when dropping the extension. Furthermore, we'd
never actually exercised the case of recursing to an internally-referenced
(owning) object from anything other than a NORMAL dependency, and it turns
out that passing the incoming dependency's flags to the owning object is
the Wrong Thing. This led to sometimes dropping a whole extension silently
when we should have rejected the drop command for lack of CASCADE.
Since we obviously were under-testing extension drop scenarios, add some
regression test cases. Unfortunately, such test cases require some
extensions (duh), so we can't test for problems in the core regression
tests. I chose to add them to the earthdistance contrib module, which is
a good test case because it has a dependency on the cube contrib module.
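The sort of scenario the new tests cover:

    CREATE EXTENSION cube;
    CREATE EXTENSION earthdistance;  -- depends on cube
    DROP EXTENSION cube;             -- must fail for lack of CASCADE
    DROP EXTENSION cube CASCADE;     -- drops earthdistance as well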
Back-patch to 9.1. Arguably these are pre-existing bugs in INTERNAL
dependency handling, but since it appears that the cases can never arise
pre-9.1, I'll refrain from back-patching the logic changes further than
that.
Eliminate dependencies on "which", as we don't really need that to be
installed for proper testing. Don't number the tests, as that increases
the footprint of every patch that wants to add or remove tests. Make
the test output more informative, so that it's a bit easier to see what
went right (or wrong). Spelling and grammar improvements.
contrib/xml2 can get by without libxslt; the relevant features just
won't work. But if it doesn't have libxml2, or if sepgsql doesn't have
libselinux, the link succeeds but the module then fails to work at load
time. To avoid that, link the required libraries unconditionally, so
that it will be clear at link-time that there is a problem.
Per discussion with Tom Lane and KaiGai Kohei.
Also, handle failure better: don't just blindly keep trying to delete
stuff after the transaction has already failed.
Tim Lewis, reviewed by Josh Kupershmidt, with further hacking by me.
The old check against MAX_RANDOM_VALUE is clearly irrelevant since
getrand() no longer calls random(). Instead, check whether min and max
are close enough together to avoid an overflow inside getrand(), as
suggested by Tom Lane. This is still somewhat silly, because we're
using atoi(), which doesn't check for overflow anyway and (at least on
my system) will cheerfully return 0 when given "4294967296". But that's
a problem for another commit.
glibc renders random() thread-safe by wrapping a futex lock around it;
testing reveals that this limits the performance of pgbench on machines
with many CPU cores. Rather than switching to random_r(), which is
only available on GNU systems and crashes unless you use undocumented
alchemy to initialize the random state properly, switch to our built-in
implementation of erand48(), which is both thread-safe and concurrent.
Since the list of reasons not to use the operating system's erand48()
is getting rather long, rename ours to pg_erand48() (and similarly
for our implementations of lrand48() and srand48()) and just always
use those. We were already doing this on Cygwin anyway, and the
glibc implementation is not quite thread-safe, so pgbench wouldn't
be able to use that either.
Per discussion with Tom Lane.
libxml reports some errors (like invalid xmlns attributes) via the error
handler hook, but still returns a success indicator to the library caller.
This causes us to miss some errors that are important to report. Since the
"generic" error handler hook doesn't know whether the message it's getting
is for an error, warning, or notice, stop using that and instead start
using the "structured" error handler hook, which gets enough information
to be useful.
While at it, arrange to save and restore the error handler hook setting in
each libxml-using function, rather than assuming we can set and forget the
hook. This should improve the odds of working nicely with third-party
libraries that also use libxml.
In passing, volatile-ize some local variables that get modified within
PG_TRY blocks. I noticed this while testing with an older gcc version
than I'd previously tried to compile xml.c with.
Florian Pflug and Tom Lane, with extensive review/testing by Noah Misch
There may be some other places where we should use errdetail_internal,
but they'll have to be evaluated case-by-case. This commit just hits
a bunch of places where invoking gettext is obviously a waste of cycles.
Add errno-based output to error messages where appropriate, reformat
blocks to about 72 characters per line, use spaces instead of tabs for
indentation, and other style adjustments.
This was already a runtime failure condition, but it's better to check
at validation time if possible. Lightly modified version of a patch
by Shigeru Hanada.
Certain subdirectories do not get built if corresponding options are not
selected at configure time. However, "make distprep" should visit such
directories anyway, so that constructing derived files to be included in
the tarball happens without requiring all configure options to be given
in the tarball build script. Likewise, it's better if cleanup actions
unconditionally visit all directories (for example, this ensures proper
cleanup if someone has done a manual make in such a subdirectory).
To handle this, set up a convention that subdirectories that are
conditionally included in SUBDIRS should be added to ALWAYS_SUBDIRS
instead when they are excluded.
Back-patch to 9.1, so that plpython's spiexceptions.h will get provided
in 9.1 tarballs. There don't appear to be any instances where distprep
actions got missed in previous releases, and anyway this fix requires
gmake 3.80 so we don't want to apply it before 9.1.
A password containing a character with the high bit set was misprocessed
on machines where char is signed (which is most). This could cause the
preceding one to three characters to fail to affect the hashed result,
thus weakening the password. The result was also unportable, and failed
to match some other blowfish implementations such as OpenBSD's.
Since the fix changes the output for such passwords, upstream chose
to provide a compatibility hack: password salts beginning with $2x$
(instead of the usual $2a$ for blowfish) are intentionally processed
"wrong" to give the same hash as before. Stored password hashes can
thus be modified if necessary to still match, though it'd be better
to change any affected passwords.
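For instance, with pgcrypto (table and column names hypothetical; since
the bug only affected characters with the high bit set, all-ASCII
passwords should hash identically under $2a$ and $2x$, so a blanket
conversion of stored salts keeps everything matching):

    -- new hashes use the corrected algorithm
    SELECT crypt('secret', gen_salt('bf'));
    -- keep stored hashes of affected passwords matching by marking the
    -- salt as legacy-style; re-hashing at next login is still preferable
    UPDATE accounts SET pw = overlay(pw placing '$2x$' from 1 for 4)
    WHERE pw LIKE '$2a$%';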
In passing, sync a couple other upstream changes that marginally improve
performance and/or tighten error checking.
Back-patch to all supported branches. Since this issue is already
public, no reason not to commit the fix ASAP.
This is an ugly hack to get around the fact that significant parts of the
core backend assume they don't need to worry about passing collation to
equality and hashing functions. That's true for the core string datatypes,
but citext should ideally have equality behavior that depends on the
specified collation's LC_CTYPE. However, there's no chance of fixing the
core before 9.2, so we'll have to live with this compromise arrangement for
now. Per bug #6053 from Regina Obe.
The code changes in this commit should be reverted in full once the core
code is up to speed, but be careful about reverting the docs changes:
I fixed a number of obsolete statements while at it.
For consistency, have all non-ASCII characters from contributors'
names in the source be in UTF-8. But remove some other more
gratuitous uses of non-ASCII characters.
For consistency with other tools, put the options before further usage
information.
In pg_standby, remove the supposedly deprecated -l option from the
given example invocation.
the connection; also restructure the libpq connection code.
This patch also removes the unused variable postmasterPID and fixes a
libpq structure leak that was in the testing loop.
Added a new option --extra-install to pg_regress to arrange for installing
the respective contrib directory into the temporary installation.
This is not yet supported for Windows MSVC builds.
Updated the .gitignore files for contrib modules to ignore the
leftovers of a temp-install check run.
Changed the exit status of "make check" in a pgxs build (which still
does nothing) to 0 from 1.
Added "make check" in contrib to top-level "make check-world".
This option turns off autovacuum, prevents non-super-user connections,
and enables oid setting hooks in the backend. The code continues to use
the old autovacuum disable settings for servers with earlier catalog
versions.
This includes a catalog version bump to identify servers that support
the -b option.
Make use of the collation attached to the index column, instead of
hard-wiring DEFAULT_COLLATION_OID. (Note: in theory this could require
reindexing btree_gist indexes on textual columns, but I rather doubt anyone
has one with a non-default declared collation as yet.)
Using DEFAULT_COLLATION_OID in the comparePartial functions was not only
a lame hack, but outright wrong, because the compare functions for
collation-aware types were already responding to the declared index
collation. So comparePartial would have the wrong expectation about
the index's sort order, possibly leading to missing matches for prefix
searches.
If someone removes the 'postgres' database from the old cluster and the
new cluster has a 'postgres' database, the number of databases will not
match. We actually could upgrade such a setup, but it would violate the
1-to-1 mapping of database counts, so we throw an error instead.
Previously such setups got an error during the upgrade, not at the check
stage; PG 9.0.4 does the same.
... for some value of "properly". Instead of overriding REGRESS_OPTS,
set the variables ENCODING and NO_LOCALE, which is more expressive and
allows overriding by the user. Fix vcregress.pl to handle that.
Since collation is effectively an argument, not a property of the function,
FmgrInfo is really the wrong place for it; and this becomes critical in
cases where a cached FmgrInfo is used for varying purposes that might need
different collation settings. Fix by passing it in FunctionCallInfoData
instead. In particular this allows a clean fix for bug #5970 (record_cmp
not working). This requires touching a bit more code than the original
method, but nobody ever thought that collations would not be an invasive
patch...
This warning is new in gcc 4.6 and part of -Wall. This patch cleans
up most of the noise, but there are still some warnings that are
trickier to remove.
The previous functions of assign hooks are now split between check hooks
and assign hooks, where the former can fail but the latter shouldn't.
Aside from being conceptually clearer, this approach exposes the
"canonicalized" form of the variable value to guc.c without having to do
an actual assignment. And that lets us fix the problem recently noted by
Bernd Helmle that the auto-tune patch for wal_buffers resulted in bogus
log messages about "parameter "wal_buffers" cannot be changed without
restarting the server". There may be some speed advantage too, because
this design lets hook functions avoid re-parsing variable values when
restoring a previous state after a rollback (they can store a pre-parsed
representation of the value instead). This patch also resolves a
longstanding annoyance about custom error messages from variable assign
hooks: they should modify, not appear separately from, guc.c's own message
about "invalid parameter value".
Although there remains some debate about how CREATE TYPE should represent
the collation property, this doesn't really affect what we need to do in
citext's script, so go ahead and fix that.
The originally committed patch for modifying CTEs didn't interact well
with EXPLAIN, as noted by myself, and also had corner-case problems with
triggers, as noted by Dean Rasheed. Those problems show it is really not
practical for ExecutorEnd to call any user-defined code; so split the
cleanup duties out into a new function ExecutorFinish, which must be called
between the last ExecutorRun call and ExecutorEnd. Some Asserts have been
added to these functions to help verify correct usage.
It is no longer necessary for callers of the executor to call
AfterTriggerBeginQuery/AfterTriggerEndQuery for themselves, as this is now
done by ExecutorStart/ExecutorFinish respectively. If you really need to
suppress that and do it for yourself, pass EXEC_FLAG_SKIP_TRIGGERS to
ExecutorStart.
Also, refactor portal commit processing to allow for the possibility that
PortalDrop will invoke user-defined code. I think this is not actually
necessary just yet, since the portal-execution-strategy logic forces any
non-pure-SELECT query to be run to completion before we will consider
committing. But it seems like good future-proofing.
HeapTupleHeader's t_infomask and t_infomask2 are defined as 16-bit
unsigned integers, so when the 16th bit was set, heap_page_item was
returning them as negative values, which was ugly.
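For example (relation name hypothetical):

    CREATE EXTENSION pageinspect;
    -- the flag words now come back as non-negative values even when
    -- the 16th bit is set
    SELECT lp, t_infomask, t_infomask2
    FROM heap_page_items(get_raw_page('mytable', 0));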
The change to pageinspect--unpackaged--1.0.sql allows a module upgraded
from 9.0 to be cleanly updated from the previous definition.
File encodings can be specified separately from client encoding.
If not specified, client encoding is used for backward compatibility.
Cases where the file encoding doesn't match the client encoding are slower
than matched cases, because we don't have conversion procs for other
encodings; improving performance there is left as future work.
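Usage is along these lines (path hypothetical):

    -- the file is Latin-1 regardless of the session's client_encoding
    COPY mytable FROM '/tmp/data.csv' WITH (FORMAT csv, ENCODING 'latin1');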
Original patch by Hitoshi Harada, and modified by me.
This is both very useful in its own right, and an important test case
for the core FDW support.
This commit includes a small refactoring of copy.c to expose its option
checking code as a separately callable function. The original patch
submission duplicated hundreds of lines of that code, which seemed pretty
unmaintainable.
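A minimal usage sketch:

    CREATE EXTENSION file_fdw;
    CREATE SERVER file_server FOREIGN DATA WRAPPER file_fdw;
    CREATE FOREIGN TABLE words (w text) SERVER file_server
        OPTIONS (filename '/usr/share/dict/words', format 'text');
    SELECT count(*) FROM words;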
Shigeru Hanada, reviewed by Itagaki Takahiro and Tom Lane
intarray and tsearch2 both reference core support functions in their GIN
opclasses, and the signatures of those functions changed for 9.1. We added
backwards-compatible pg_proc entries for the functions in order to allow
9.0 dump files to be restored at all, but that hack leaves the opclasses
pointing at pg_proc entries different from what they'd point to if the
contrib modules were installed fresh in 9.1. To forestall any possibility
of future problems, fix the opclasses to match fresh installs via the
expedient of direct UPDATEs on pg_amproc in the update-from-unpackaged
scripts. (Yech ... but the alternatives are worse, or require far more
effort than seems justified right now.)
Note: updating pg_amproc is sufficient because there will be no pg_depend
entries corresponding to these dependencies, since the referenced functions
are all pinned.
The initial version of the update-from-unpackaged script neglected to
include the <> operators that were added to the opclasses during 9.1.
To make this script produce the same final state as the regular install
script, use the same ALTER OPERATOR FAMILY trick as in pg_trgm.
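The trick is a direct opfamily amendment of roughly this shape (the
opfamily name and strategy number here are assumptions, not the script's
actual contents):

    ALTER OPERATOR FAMILY gist_int4_ops USING gist
        ADD OPERATOR 6 <> (int4, int4);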
Take care of some loose ends in the update-from-unpackaged script, and
apply some ugly hacks to ensure that it produces the same catalog state
as the fresh-install script. Per discussion, this seems like a safer
plan than having two different catalog states that both call themselves
"pg_trgm 1.0", even if it's not immediately clear that the subtle
differences would ever matter.
Also, fix the stub function gin_extract_trgm() so that it works instead
of just bleating. Needed because this function will get called during a
regular dump and reload, if there are any indexes using its opclass.
The user won't have an opportunity to update the extension till later,
so telling him to do so is unhelpful.
These are needed to support reloading dumps of 9.0 installations containing
contrib/intarray or contrib/tsearch2. Since not only regular dump/reload
but binary upgrade would fail, it seems worth the trouble to carry these
stubs for awhile. Note that the contrib opclasses referencing these
functions will still work fine, since GIN doesn't actually pay any
attention to the declared signature of a support function.
Initially it was called int_aggregate after the old SQL file, but since
the documentation just says "intagg" and that's also the directory name,
let's conform to that instead.
From first pass of testing. Notably, there seems to be no need for
adminpack--unpackaged--1.0.sql because none of the objects that the
old module creates would ever be dumped by pg_dump anyway (they are
all in pg_catalog).
It was never terribly consistent to use OR REPLACE (because of the lack of
comparable functionality for data types, operators, etc), and
experimentation shows that it's now positively pernicious in the extension
world. We really want a failure to occur if there are any conflicts, else
it's unclear what the extension-ownership state of the conflicted object
ought to be. Most of the time, CREATE EXTENSION will fail anyway because
of conflicts on other object types, but an extension defining only
functions can succeed, with bad results.
This isn't fully tested as yet, in particular I'm not sure that the
"foo--unpackaged--1.0.sql" scripts are OK. But it's time to get some
buildfarm cycles on it.
sepgsql is not converted to an extension, mainly because it seems to
require a very nonstandard installation process.
Dimitri Fontaine and Tom Lane
relative, by creating a function path_is_relative_and_below_cwd() to
check for specific requirements. It is unclear whether this fixes an
actual security problem, but the new code is more robust.
This follows recent discussions, so it's quite a bit different from
Dimitri's original. There will probably be more changes once we get a bit
of experience with it, but let's get it in and start playing with it.
This is still just core code. I'll start converting contrib modules
shortly.
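Basic usage, once the contrib modules are converted:

    CREATE EXTENSION hstore;  -- runs the install script, records membership
    \dx                       -- list installed extensions
    DROP EXTENSION hstore;    -- drops all member objects together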
Dimitri Fontaine and Tom Lane
This follows my proposal of yesterday, namely that we try to recreate the
previous state of the extension exactly, instead of allowing CREATE
EXTENSION to run a SQL script that might create some entirely-incompatible
on-disk state. In --binary-upgrade mode, pg_dump won't issue CREATE
EXTENSION at all, but instead uses a kluge function provided by
pg_upgrade_support to recreate the pg_extension row (and extension-level
pg_depend entries) without creating any member objects. The member objects
are then restored in the same way as if they weren't members, in particular
using pg_upgrade's normal hacks to preserve OIDs that need to be preserved.
Then, for each member object, ALTER EXTENSION ADD is issued to recreate the
pg_depend entry that marks it as an extension member.
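That is, for each member object pg_dump emits a command of this form
(the particular function is illustrative):

    ALTER EXTENSION hstore ADD FUNCTION hstore(text, text);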
In passing, fix breakage in pg_upgrade's enum-type support: somebody didn't
fix it when the noise word VALUE got added to ALTER TYPE ADD. Also,
rationalize parsetree representation of COMMENT ON DOMAIN and fix
get_object_address() to allow OBJECT_DOMAIN.
This adds collation support for columns and domains, a COLLATE clause
to override it per expression, and B-tree index support.
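In brief (collation name availability varies by platform):

    CREATE TABLE messages (body text COLLATE "de_DE");
    -- per-expression override of the column's collation
    SELECT body FROM messages ORDER BY body COLLATE "C";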
Peter Eisentraut
reviewed by Pavel Stehule, Itagaki Takahiro, Robert Haas, Noah Misch