The string representation of ImportError changed. Remove printing
that; it's not necessary for the test.
The order in which members of a dict are printed changed. But this
was always implementation-dependent, so we have just been lucky for a
long time. Do the printing the hard way to ensure sorted order.
The old way of implementing slicing support by implementing
PySequenceMethods.sq_slice no longer works in Python 3. You now have
to implement PyMappingMethods.mp_subscript. Do this by simply
proxying the call to the wrapped list of result dictionaries.
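From the Python side the behavior is unchanged; a minimal usage
sketch (assuming a PL/Python function body, where plpy is provided
by the interpreter):

    rv = plpy.execute("SELECT i FROM generate_series(1, 10) AS i")
    first_three = rv[0:3]   # routed through mp_subscript on Python 3
    n = len(rv)             # the sequence protocol still works too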
Consolidate some of the subscripting regression tests.
Jan Urbański
It was already on its last legs, and it turns out that it was
accidentally broken in commit 89e850e6fd
and no one cared. So remove the rest of the support for it and update
the documentation to indicate that Python 2.3 is now required.
Add test cases for inline handler of plpython2u (when using that
language name), and for result object element assignment. There is
now at least one test case for every top-level functionality, except
plpy.Fatal (annoying to use in regression tests) and result object
slice retrieval and slice assignment (which are somewhat broken).
Allocate PLyResultObject.tupdesc in TopMemoryContext, because its
lifetime is the lifetime of the Python object and it shouldn't be
freed by some other memory context, such as one controlled by SPI. We
trust that the Python object will clean up its own memory.
Before, this would crash the included regression test case by trying
to use memory that was already freed.
reported by Asif Naeem, analysis by Tom Lane
Before 9.1, PL/Python functions returning composite types could return
a string and it would be parsed using record_in. The 9.1 changes made
PL/Python only expect dictionaries, tuples, or objects supporting
getattr as output of composite functions, resulting in a regression
and a confusing error message, as the strings were interpreted as
sequences and the code for transforming lists to database tuples was
used. Fix this by checking for strings first and treating them
separately, as before.
The reason why it's important to support string to database tuple
conversion is that trigger functions on tables with composite columns
get the composite row passed in as a string (from record_out).
Without supporting converting this back using record_in, this makes it
impossible to implement pass-through behavior for these columns, as
PL/Python no longer accepts strings for composite values.
A better solution would be to fix the code that transforms composite
inputs into Python objects to produce dictionaries that would then be
correctly interpreted by the Python->PostgreSQL counterpart code. But
that would be too invasive to backpatch to 9.1, and it is too late in
the 9.2 cycle to attempt it. It should be revisited in the future,
though.
Reported as bug #6559 by Kirill Simonov.
Jan Urbański
The header file is needed by any module that wants to use the PL/pgSQL
instrumentation plugin interface. Most notably, the pldebugger plugin needs
this. With this patch, it can be built using pgxs, without having the full
server source tree available.
The result object methods colnames() etc. would crash when called
after a command that did not produce a result set. Now they throw an
exception.
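A minimal sketch of the new behavior (hypothetical table name, inside
a PL/Python function body):

    rv = plpy.execute("CREATE TABLE IF NOT EXISTS t (a int)")
    try:
        rv.colnames()        # there is no result set to describe
    except Exception:        # now an exception instead of a crash
        plpy.notice("command returned no result set")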
discovery and initial patch by Jean-Baptiste Quenot
The parser got confused if a cursor parameter had the same name as
a plpgsql variable. Reported and diagnosed by Yeb Havinga, though
this isn't exactly his proposed fix.
Also, some mostly-but-not-entirely-cosmetic adjustments to the original
named-cursor-parameter patch, for code readability and better error
diagnostics.
An incorrect and entirely unnecessary "safety check" in exec_stmt_getdiag()
caused the code to treat an assignment to a variable with dno zero as a
no-op. Unfortunately, that's a perfectly valid dno. This has been broken
since GET DIAGNOSTICS was invented. It's not terribly surprising that the
bug went unnoticed for so long, since in most cases you probably wouldn't
use the function's first-created variable (normally its first parameter)
as a GET DIAGNOSTICS target. Nonetheless, it's broken. Per bug #6551
from Adam Buraczewski.
Making this operation look like a utility statement seems generally a good
idea, and particularly so in light of the desire to provide command
triggers for utility statements. The original choice of representing it as
SELECT with an IntoClause appendage had metastasized into rather a lot of
places, unfortunately, so that this patch is a great deal more complicated
than one might at first expect.
In particular, keeping EXPLAIN working for SELECT INTO and CREATE TABLE AS
subcommands required restructuring some EXPLAIN-related APIs. Add-on code
that calls ExplainOnePlan or ExplainOneUtility, or uses
ExplainOneQuery_hook, will need adjustment.
Also, the cases PREPARE ... SELECT INTO and CREATE RULE ... SELECT INTO,
which formerly were accepted though undocumented, are no longer accepted.
The PREPARE case can be replaced with use of CREATE TABLE AS EXECUTE.
The CREATE RULE case doesn't seem to have much real-world use (since the
rule would work only once before failing with "table already exists"),
so we'll not bother with that one.
Both SELECT INTO and CREATE TABLE AS still return a command tag of
"SELECT nnnn". There was some discussion of returning "CREATE TABLE nnnn",
but for the moment backwards compatibility wins the day.
Andres Freund and Tom Lane
Failing to do so causes trigger invocation to fail when triggers are nested
within a function invocation that changes the current package.
Backpatch to 9.1; previous releases used a different method to obtain
_TD. Per bug report from Mark Murawski (bug #6511)
Author: Alex Hunsaker
Dave Malcolm of Red Hat is working on a static code analysis tool for
Python-related C code. It reported a number of problems in plpython,
most of which were failures to check for NULL results from object-creation
functions, so would only be an issue in very-low-memory situations.
Patch in HEAD and 9.1. We could go further back but it's not clear that
these issues are important enough to justify the work.
Jan Urbański
This replaces the former global variable PLy_curr_procedure, and provides
a place to stash per-call-level information. In particular we create a
per-call-level scratch memory context.
For the moment, the scratch context is just used to avoid leaking memory
from datatype output function calls in PLyDict_FromTuple. There probably
will be more use-cases in future.
Although this is a fix for a pre-existing memory leakage bug, it seems
sufficiently invasive to not want to back-patch; it feels better as part
of the major rearrangement of plpython code that we've already done as
part of 9.2.
Jan Urbański
Datatype I/O functions are allowed to leak memory in CurrentMemoryContext,
since they are generally called in short-lived contexts. However, plpgsql
calls such functions for purposes of type conversion, and was calling them
in its procedure context. Therefore, any leaked memory would not be
recovered until the end of the plpgsql function. If such a conversion
was done within a loop, quite a bit of memory could get consumed. Fix by
calling such functions in the transient "eval_econtext", and adjust other
logic to match. Back-patch to all supported versions.
Andres Freund, Jan Urbański, Tom Lane
Don't quote the output of format_procedure(); it's already quoted quite
enough. Remove the fn_name field, which was now just dead weight. Fix
remaining expected-output files.
Add result object functions .colnames, .coltypes, .coltypmods to
obtain information about the result column names and types, which was
previously not possible in the PL/Python SPI interface.
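For example (a usage sketch inside a PL/Python function body):

    rv = plpy.execute("SELECT 1 AS a, 'x'::text AS b")
    names   = rv.colnames()     # ['a', 'b']
    typoids = rv.coltypes()     # e.g. [23, 25] for int4 and text
    typmods = rv.coltypmods()   # e.g. [-1, -1]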
reviewed by Abhijit Menon-Sen
Certain things like typeglobs or readonly things like $^V cause
perl's SvPVutf8() to die nastily and crash the backend. To avoid
that bug we make a copy of the object, which will subsequently be
garbage collected.
Back patched to 9.1 where we first started using SvPVutf8().
Per -hackers discussion. Original problem reported by David Wheeler.
This moves the code around from one huge file into hopefully logical
and more manageable modules. For the most part, the code itself was
not touched, except: PLy_function_handler and PLy_trigger_handler were
renamed to PLy_exec_function and PLy_exec_trigger, because they were
not actually handlers in the PL handler sense, and it makes the naming
more similar to the way PL/pgSQL is organized. The initialization of
the procedure caches was separated into a new function
init_procedure_caches to keep the hash tables private to
plpy_procedures.c.
Jan Urbański and Peter Eisentraut
Add a function plpy.cursor that is similar to plpy.execute but uses an
SPI cursor to avoid fetching the entire result set into memory.
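A usage sketch (hypothetical query, inside a PL/Python function body):

    cur = plpy.cursor("SELECT i FROM generate_series(1, 1000000) AS i")
    total = 0
    for row in cur:             # rows are fetched incrementally
        total += row["i"]
    cur.close()

The cursor object also exposes fetch(n) for explicit batching.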
Jan Urbański, reviewed by Steve Singer
In the previous coding, callers were faced with an awkward choice:
look up the name, do permissions checks, and then lock the table; or
look up the name, lock the table, and then do permissions checks.
The first choice was wrong because the results of the name lookup
and permissions checks might be out-of-date by the time the table
lock was acquired, while the second allowed a user with no privileges
to interfere with access to a table by users who do have privileges
(e.g. if a malicious backend queues up for an AccessExclusiveLock on
a table on which AccessShareLock is already held, further attempts
to access the table will be blocked until the AccessExclusiveLock
is obtained and the malicious backend's transaction rolls back).
To fix, allow callers of RangeVarGetRelid() to pass a callback which
gets executed after performing the name lookup but before acquiring
the relation lock. If the name lookup is retried (because
invalidation messages are received), the callback will be re-executed
as well, so we get the best of both worlds. RangeVarGetRelid() is
renamed to RangeVarGetRelidExtended(); callers not wishing to supply
a callback can continue to invoke it as RangeVarGetRelid(), which is
now a macro. Since the only caller that uses nowait = true now passes
a callback anyway, the RangeVarGetRelid() macro defaults nowait to
false as well. The callback can also be used for supplemental
locking - for
example, REINDEX INDEX needs to acquire the table lock before the index
lock to reduce deadlock possibilities.
There's a lot more work to be done here to fix all the cases where this
can be a problem, but this commit provides the general infrastructure
and fixes the following specific cases: REINDEX INDEX, REINDEX TABLE,
LOCK TABLE, and DROP TABLE/INDEX/SEQUENCE/VIEW/FOREIGN TABLE.
Per discussion with Noah Misch and Alvaro Herrera.
The old expression sed 's,$(srcdir),python3,' would normally resolve
as sed 's,.,python3,', which is not really what we wanted. While it
doesn't actually break anything right now, it's still wrong, so put in
a bit more work to make it more robust.
The original coding was
var->value = (Datum) state;
which is bogus, and then in commit 2f0f7b4bce
it was "corrected" to
var->value = PointerGetDatum(state);
which is a faithful translation but still wrong.
This seems purely cosmetic, though, so no need for a back-patch.
Pavel Stehule
distro version of perl.
David Wheeler and Alex Hunsaker.
Backpatch to 9.1 where it applies cleanly. A simple workaround is
available for earlier branches, and further effort doesn't seem
warranted.
exception handler. This was a regression in 9.1, when the capability
to catch specific SPI errors was added, so backpatch to 9.1.
Mika Eloranta, with some editing by Jan Urbański.
This seems to have been just an oversight in previous foreign-table work.
A quick grep didn't turn up any other places where RELKIND_FOREIGN_TABLE
was obviously omitted.
One change noted by Alexander Soudakov, the other by me.
Back-patch to 9.1.
The original implementation of ELSIF in plpgsql converted the construct
into nested simple IF statements. This was prone to stack overflow with
long ELSIF lists, in two different ways. First, it's difficult to generate
the parsetree without using right-recursion in the bison grammar, and
that's prone to parser stack overflow since nothing can be reduced until
the whole list has been read. Second, we'd recurse during execution, thus
creating an unnecessary risk of execution-time stack overflow. Rewrite
so that the ELSIF list is represented as a flat list, scanned via iteration
not recursion, and generated through left-recursion in the grammar.
Per a gripe from Håvard Kongsgård.
This patch restores the pre-9.1 behavior that pl/perl functions returning
VOID ignore the result value of their last Perl statement. 9.1.0
unintentionally threw an error if the last statement returned a reference,
as reported by Amit Khandekar.
Also, make sure it works to return a string value for a composite type,
so long as the string meets the type's input format. We already allowed
the equivalent behavior for arrays, so it seems inconsistent to not allow
it for composites.
In addition, ensure we throw errors for attempts to return arrays or hashes
when the function's declared result type is not an array or composite type,
respectively. Pre-9.1 versions rather uselessly returned strings like
ARRAY(0x221a9a0) or HASH(0x221aa90), while 9.1.0 threw an error for the
hash case and returned a garbage value for the array case.
Also, clean up assorted grotty coding in Perl array conversion, including
use of a session-lifespan memory context to accumulate the array value
(resulting in session-lifespan memory leak on error), failure to apply the
declared typmod if any, and failure to detect some cases of non-rectangular
multi-dimensional arrays.
Alex Hunsaker and Tom Lane
We used to just remember the GucSource, but saving GucContext too provides
a little more information --- notably, whether a SET was done by a
superuser or regular user. This allows us to rip out the fairly dodgy code
that define_custom_variable used to use to try to infer the context to
re-install a pre-existing setting with. In particular, it now works for
a superuser to SET an extension's SUSET custom variable before loading the
associated extension, because GUC can remember whether the SET was done as
a superuser or not. The plperl regression tests contain an example where
this is useful.
This variable provides only marginal error-prevention capability (since
it can only check the prefix of a qualified GUC name), and the consensus
is that that isn't worth the amount of hassle that maintaining the setting
creates for DBAs. So, let's just remove it.
With this commit, the system will silently accept a value for any qualified
GUC name at all, whether it has anything to do with any known extension or
not. (Unqualified names still have to match known built-in settings,
though; and you will get a WARNING at extension load time if there's an
unrecognized setting with that extension's prefix.)
There's still some discussion ongoing about whether to tighten that up and
if so how; but if we do come up with a solution, it's not likely to look
anything like custom_variable_classes.
plpgsql's exec_stmt_execsql was Assert'ing that a CachedPlanSource was
is_valid immediately after exec_prepare_plan. The risk factor in this case
is that after building the prepared statement, exec_prepare_plan calls
exec_simple_check_plan, which might try to generate a generic plan --- and
with CLOBBER_CACHE_ALWAYS or other unusual causes of invalidation, that
could result in an invalidation. However, that path could only be taken
for a SELECT query, for which we need not set mod_stmt. So in this case
I think it's best to just remove the Assert; it's okay to look at a
slightly-stale querytree for what we need here. Per buildfarm testing.
Now that a NULL ParamListInfo pointer causes significantly different
behavior in plancache.c, be sure to pass it that way when the expression
is known not to reference any plpgsql variables. Saves a few setup
cycles anyway.
Rewrite plancache.c so that a "cached plan" (which is rather a misnomer
at this point) can support generation of custom, parameter-value-dependent
plans, and can make an intelligent choice between using custom plans and
the traditional generic-plan approach. The specific choice algorithm
implemented here can probably be improved in future, but this commit is
all about getting the mechanism in place, not the policy.
In addition, restructure the API to greatly reduce the amount of extraneous
data copying needed. The main compromise needed to make that possible was
to split the initial creation of a CachedPlanSource into two steps. It's
worth noting in particular that SPI_saveplan is now deprecated in favor of
SPI_keepplan, which accomplishes the same end result with zero data
copying, and no need to then spend even more cycles throwing away the
original SPIPlan. The risk of long-term memory leaks while manipulating
SPIPlans has also been greatly reduced. Most of this improvement is based
on use of the recently-added MemoryContextSetParent primitive.
The $(PERL) macro will be set by configure if it finds perl at all,
but $(perl_privlibexp) isn't configured unless you said --with-perl.
This results in confusing error messages if someone cd's into
src/pl/plperl and tries to build there despite the configure omission,
as reported by Tomas Vondra in bug #6198. Add simple checks to
provide a more useful report, while not disabling other use of the
makefile such as "make clean".
Back-patch to 9.0, which is as far as the patch applies easily.
walsender.h should depend on xlog.h, not vice versa. (Actually, the
inclusion was circular until a couple hours ago, which was even sillier;
but Bruce broke it in the expedient rather than logically correct
direction.) Because of that poor decision, plus blind application of
pgrminclude, we had a situation where half the system was depending on
xlog.h to include such unrelated stuff as array.h and guc.h. Clean up
the header inclusion, and manually revert a lot of what pgrminclude had
done so things build again.
This episode reinforces my feeling that pgrminclude should not be run
without adult supervision. Inclusion changes in header files in particular
need to be reviewed with great care. More generally, it'd be good if we
had a clearer notion of module layering to dictate which headers can sanely
include which others ... but that's a big task for another day.
Module initialization functions in Python 3 must have external
linkage, because PyMODINIT_FUNC does dllexport on Windows-like
platforms. Without this change, the build with Python 3 fails on
Windows.
Dropped columns within a composite type were not handled correctly.
Also, we did not check for whether a composite result type had changed
since we cached the information about it.
Jan Urbański, per a bug report from Jean-Baptiste Quenot
To avoid having the python headers hijack various definitions,
we now include them after all the system headers we want, having
first undefined some of the things they want to define. After that's
done we restore the things they scribbled on that matter, namely our
snprintf and vsnprintf macros, if we're using them.
This is more SQL-spec-compliant, more easily extensible, and better
performing than the old method of inventing special variables.
Pavel Stehule, reviewed by Shigeru Hanada and David Wheeler
There may be some other places where we should use errdetail_internal,
but they'll have to be evaluated case-by-case. This commit just hits
a bunch of places where invoking gettext is obviously a waste of cycles.
In the previous coding, we would look up a relation in RangeVarGetRelid,
lock the resulting OID, and then AcceptInvalidationMessages(). While
this was sufficient to ensure that we noticed any changes to the
relation definition before building the relcache entry, it didn't
handle the possibility that the name we looked up no longer referenced
the same OID. This was particularly problematic in the case where a
table had been dropped and recreated: we'd latch on to the entry for
the old relation and fail later on. Now, we acquire the relation lock
inside RangeVarGetRelid, and retry the name lookup if we notice that
invalidation messages have been processed meanwhile. Many operations
that would previously have failed with an error in the presence of
concurrent DDL will now succeed.
There is a good deal of work remaining to be done here: many callers
of RangeVarGetRelid still pass NoLock for one reason or another. In
addition, nothing in this patch guards against the possibility that
the meaning of an unqualified name might change due to the creation
of a relation in a schema earlier in the user's search path than the
one where it was previously found. Furthermore, there's nothing at
all here to guard against similar race conditions for non-relations.
For all that, it's a start.
Noah Misch and Robert Haas
Certain subdirectories do not get built if corresponding options are not
selected at configure time. However, "make distprep" should visit such
directories anyway, so that constructing derived files to be included in
the tarball happens without requiring all configure options to be given
in the tarball build script. Likewise, it's better if cleanup actions
unconditionally visit all directories (for example, this ensures proper
cleanup if someone has done a manual make in such a subdirectory).
To handle this, set up a convention that subdirectories that are
conditionally included in SUBDIRS should be added to ALWAYS_SUBDIRS
instead when they are excluded.
Back-patch to 9.1, so that plpython's spiexceptions.h will get provided
in 9.1 tarballs. There don't appear to be any instances where distprep
actions got missed in previous releases, and anyway this fix requires
gmake 3.80 so we don't want to apply it before 9.1.
Since the names try_relation_openrv() and try_heap_openrv() don't seem
quite appropriate, rename the functions to relation_openrv_extended()
and heap_openrv_extended(). This is also more general, if we have a
future need for additional parameters that are of interest to only a
few callers.
This is infrastructure for a forthcoming patch to allow
get_object_address() to take a missing_ok argument as well.
Patch by me, review by Noah Misch.
The --flag argument can be used to tell xgettext the arguments of
which functions should be flagged with c-format in the PO files,
instead of guessing based on the presence of format specifiers, which
fails if no format specifiers are present but the translation
accidentally introduces one.
Appropriate flag settings have been added for each message catalog.
based on a patch by Christoph Berg for bug #6066
Put gettext trigger words that are common to the backend and backend
modules into a makefile variable to include everywhere, to avoid
error-prone repetitions.
It currently doesn't make a difference, but it's inconsistent with
most other usage, and it might interfere with a future patch, so I'll
change it all in a separate commit.
Also, replace tabs with spaces for alignment.
According to perlguts, &PL_sv_undef is not the right thing to use in
those cases because it doesn't behave the same way as an undef value via
Perl code. Seems the intuitive way to deal with undef values is subtly
enough broken that it's hard to notice when misused.
The broken uses got inadvertently introduced in commit
87bb2ade2c by Alexey Klyukin, Alex
Hunsaker and myself on 2011-02-17; no backpatch is necessary.
Per testing report from Greg Mullane.
Author: Alex Hunsaker
The core CREATE FUNCTION code only enforces that IN parameter names are
non-duplicate, and that OUT parameter names are separately non-duplicate.
This is because some function languages might not have any confusion
between the two. But in plpgsql, such names are all in the same namespace,
so we'd better disallow it.
Per a recent complaint from Dan S. Not back-patching since this is a small
issue and the change could cause unexpected failures if we started to
enforce it in a minor release.
Historically we didn't do this, even though we had the information, because
plpgsql passed its Params via SPI APIs that only include type OIDs not
typmods. Now that plpgsql uses parser callbacks to create Params, it's
easy to insert the right typmod. This should generally result in lower
surprise factors, because a plpgsql variable that is declared with a typmod
will now work more like a table column with the same typmod. In particular
it's the "right" way to fix bug #6020, in which plpgsql's attempt to return
an anonymous record type is defeated by stricter record-type matching
checks that were added in 9.0. However, it's not impossible that this
could result in subtle behavioral changes that could break somebody's
existing plpgsql code, so I'm afraid to back-patch this change into
released branches. In those branches we'll have to lobotomize the
record-type checks instead.
This was broken in commit ef19dc6d39 by
the Bunce/Hunsaker/Dunstan team, which moved the declaration from
plperl_create_sub to plperl_call_perl_trigger_func. This doesn't
actually work because the validator code would not find the variable
declared; and even if you manage to get past the validator, it still
doesn't work because get_sv("_TD", GV_ADD) doesn't have the expected
effect. The only reason this got beyond testing is that it only fails
in strict mode.
We need to declare it as a global just like %_SHARED; it is simpler than
trying to actually do what the patch initially intended, and is said to
have the same performance benefit.
As a more serious issue, fix $_TD not being properly local()ized,
meaning nested trigger functions would clobber $_TD.
Alex Hunsaker, per test report from Greg Mullane
install-sh can install multiple files at once, so for loops are not
necessary. This was already changed for the rest of the code some
time ago, but pgxs.mk was apparently forgotten, and the obsolete
coding style has now been copied to the PLs as well.
This also fixes the problem that the for loops in question did not
catch errors.
The style is set to "printf" for backwards compatibility everywhere except
on Windows, where it is set to "gnu_printf", which eliminates hundreds of
false error messages from modern versions of gcc arising from %m and %ll{d,u}
formats.
This quiets compiler warnings about redefined macros and unused Perl
variables. The redefinition of snprintf and vsnprintf is something we
want to avoid anyway, if we've gone to the bother of setting up the
macros to point to our implementation.
It assumed that the lineno from the traceback always refers to the
PL/Python function. If you created a PL/Python function that imports
some code, runs it, and that code raises an exception, PLy_traceback
would get utterly confused.
Now we look at the file name reported with the traceback and only
print the source line if it came from the PL/Python function.
Jan Urbański
This allows the usual rules for assigning a collation to a local variable
to be overridden. Per discussion, it seems appropriate to support this
rather than forcing all local variables to have the argument-derived
collation.
Since collation is effectively an argument, not a property of the function,
FmgrInfo is really the wrong place for it; and this becomes critical in
cases where a cached FmgrInfo is used for varying purposes that might need
different collation settings. Fix by passing it in FunctionCallInfoData
instead. In particular this allows a clean fix for bug #5970 (record_cmp
not working). This requires touching a bit more code than the original
method, but nobody ever thought that collations would not be an invasive
patch...
This warning is new in gcc 4.6 and part of -Wall. This patch cleans
up most of the noise, but there are still some warnings that are
trickier to remove.
The previous functions of assign hooks are now split between check hooks
and assign hooks, where the former can fail but the latter shouldn't.
Aside from being conceptually clearer, this approach exposes the
"canonicalized" form of the variable value to guc.c without having to do
an actual assignment. And that lets us fix the problem recently noted by
Bernd Helmle that the auto-tune patch for wal_buffers resulted in bogus
log messages about "parameter "wal_buffers" cannot be changed without
restarting the server". There may be some speed advantage too, because
this design lets hook functions avoid re-parsing variable values when
restoring a previous state after a rollback (they can store a pre-parsed
representation of the value instead). This patch also resolves a
longstanding annoyance about custom error messages from variable assign
hooks: they should modify, not appear separately from, guc.c's own message
about "invalid parameter value".
This fixes the gripe I made a few months ago about DO blocks getting
slower with repeated use. At least, it fixes it for the case where
the DO block isn't aborted by an error. We could try running
plpgsql_free_function_memory() even during error exit, but that seems
a bit scary since it makes a lot of presumptions about the data
structures being in good shape. It's probably reasonable to assume
that repeated failures of DO blocks isn't a performance-critical case.
Make plpgsql treat the input collation as a polymorphism variable, so
that we cache separate plans for each input collation that's used in a
particular session, as per recent discussion. Propagate the input
collation to all collatable input parameters.
I chose to also propagate the input collation to all declared variables of
collatable types, which is a bit more debatable but seems to be necessary
for non-astonishing behavior. (Copying a parameter into a separate local
variable shouldn't result in a change of behavior, for example.) There is
enough infrastructure here to support declaring a collation for each local
variable to override that default, but I thought we should wait to see what
the field demand is before adding such a feature.
In passing, remove exec_get_rec_fieldtype(), which wasn't used anywhere.
Documentation patch to follow.
All expression nodes now have an explicit output-collation field, unless
they are known to only return a noncollatable data type (such as boolean
or record). Also, nodes that can invoke collation-aware functions store
a separate field that is the collation value to pass to the function.
This avoids confusion that arises when a function has collatable inputs
and noncollatable output type, or vice versa.
Also, replace the parser's on-the-fly collation assignment method with
a post-pass over the completed expression tree. This allows us to use
a more complex (and hopefully more nearly spec-compliant) assignment
rule without paying for it in extra storage in every expression node.
Fix assorted bugs in the planner's handling of collations by making
collation one of the defining properties of an EquivalenceClass and
by converting CollateExprs into discardable RelabelType nodes during
expression preprocessing.
CollateClause is now used only in raw grammar output, and CollateExpr after
parse analysis. This is for clarity and to avoid carrying collation names
in post-analysis parse trees: that's both wasteful and possibly misleading,
since the collation's name could be changed while the parsetree still
exists.
Also, clean up assorted infelicities and omissions in processing of the
node type.
The initial collations patch treated a COLLATE spec as part of a TypeName,
following what can only be described as brain fade on the part of the SQL
committee. It's a lot more reasonable to treat COLLATE as a syntactically
separate object, so that it can be added in only the productions where it
actually belongs, rather than needing to reject it in a boatload of places
where it doesn't belong (something the original patch mostly failed to do).
In addition this change lets us meet the spec's requirement to allow
COLLATE anywhere in the clauses of a ColumnDef, and it avoids unfriendly
behavior for constructs such as "foo::type COLLATE collation".
To do this, pull collation information out of TypeName and put it in
ColumnDef instead, thus reverting most of the collation-related changes in
parse_type.c's API. I made one additional structural change, which was to
use a ColumnDef as an intermediate node in AT_AlterColumnType AlterTableCmd
nodes. This provides enough room to get rid of the "transform" wart in
AlterTableCmd too, since the ColumnDef can carry the USING expression
easily enough.
Also fix some other minor bugs that have crept in in the same areas,
like failure to copy recently-added fields of ColumnDef in copyfuncs.c.
While at it, document the formerly secret ability to specify a collation
in ALTER TABLE ALTER COLUMN TYPE, ALTER TYPE ADD ATTRIBUTE, and
ALTER TYPE ALTER ATTRIBUTE TYPE; and correct some misstatements about
what the default collation selection will be when COLLATE is omitted.
BTW, the three-parameter form of format_type() should go away too,
since it just contributes to the confusion in this area; but I'll do
that in a separate patch.
This improves reporting, as the error string now includes the actual
Python exception. As a side effect, this no longer sets the errcode to
ERRCODE_DATA_EXCEPTION, which might be considered a feature, as it's
not documented and not clear why iterator errors should be treated
differently.
Jan Urbański
Seen with an older gcc version. I'm not sure these represent any real
risk factor, but still a bit scary. Anyway we have lots of other
volatile-marked variables in this code, so a couple more won't hurt.
The original scheme for this was to symlink plpython.$DLSUFFIX to
plpython2.$DLSUFFIX, but that doesn't work on Windows, and only
accidentally failed to fail because of the way that CREATE LANGUAGE created
or didn't create new C functions. My changes of yesterday exposed the
weakness of that approach. To fix, get rid of the symlink and make
pg_pltemplate show what's really going on.
This mostly just involves creating control, install, and
update-from-unpackaged scripts for them. However, I had to adjust plperl
and plpython to not share the same support functions between variants,
because we can't put the same function into multiple extensions.
catversion bump forced due to new contents of pg_pltemplate, and because
initdb now installs plpgsql as an extension not a bare language.
Add support for regression testing these as extensions not bare
languages.
Fix a couple of other issues that popped up while testing this: my initial
hack at pg_dump binary-upgrade support didn't work right, and we don't want
an extra schema permissions test after all.
Documentation changes still to come, but I'm committing now to see
whether the MSVC build scripts need work (likely they do).
This provides a separate exception class for each error code that the
backend defines, as well as the ability to get the SQLSTATE from the
exception object.
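A sketch of the resulting interface (assuming a table t with a unique
column, inside a PL/Python function body):

    from plpy import spiexceptions
    try:
        plpy.execute("INSERT INTO t VALUES (1)")
        plpy.execute("INSERT INTO t VALUES (1)")
    except spiexceptions.UniqueViolation as e:
        plpy.notice("duplicate row, sqlstate %s" % e.sqlstate)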
Jan Urbański, reviewed by Steve Singer
Adds a context manager, obtainable by plpy.subtransaction(), to run a
group of statements in a subtransaction.
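A minimal sketch (hypothetical statements, inside a PL/Python
function body):

    try:
        with plpy.subtransaction():
            plpy.execute("INSERT INTO t VALUES (1)")
            plpy.execute("INSERT INTO t VALUES (2)")
    except plpy.SPIError:
        pass   # if either insert failed, both were rolled back together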
Jan Urbański, reviewed by Steve Singer, additional scribbling by me
We don't have complete expected coverage for Python 2.2 anyway, so it
doesn't seem worth keeping this one around, since no one appears to be
updating it. Visual inspection of the differences ought to be
good enough for those few who care about this obsolete Python version.
This allows functions with multiple OUT parameters, returning either
a single record or multiple records (RECORD or SETOF RECORD).
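A body sketch for a hypothetical function declared with
(OUT a int, OUT b text):

    # single record (RETURNS record):
    return {"a": 42, "b": "hello"}

    # set of records (RETURNS SETOF record):
    # return [{"a": 1, "b": "x"}, {"a": 2, "b": "y"}]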
Jan Urbański, reviewed by Hitoshi Harada
Add functions plpy.quote_ident, plpy.quote_literal,
plpy.quote_nullable, which wrap the equivalent SQL functions.
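A usage sketch with hypothetical values, inside a PL/Python function
body:

    tab, val, opt = "my table", "O'Reilly", None
    plpy.execute("INSERT INTO %s VALUES (%s, %s)" % (
        plpy.quote_ident(tab),       # -> "my table"
        plpy.quote_literal(val),     # -> 'O''Reilly'
        plpy.quote_nullable(opt)))   # -> NULL for None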
To be able to propagate char * constness properly, make the argument
of quote_literal_cstr() const char *. This also makes it more
consistent with quote_identifier().
Jan Urbański, reviewed by Hitoshi Harada, some refinements by Peter
Eisentraut
More generally, arrays are turned into Perl array references, and row and
composite types are turned into Perl hash references. This is done
recursively, in a way that's natural to every Perl programmer.
To avoid a backwards compatibility hit, the string representation of
each structure is also available if the function requests it.
Authors: Alexey Klyukin and Alex Hunsaker.
Some code cleanups by me.
(I'm not entirely sure that we've finished bikeshedding the syntax details,
but the functionality seems OK.)
Pavel Stehule, reviewed by Stephen Frost and Tom Lane
This adds collation support for columns and domains, a COLLATE clause
to override it per expression, and B-tree index support.
Peter Eisentraut
reviewed by Pavel Stehule, Itagaki Takahiro, Robert Haas, Noah Misch
Strings are converted to UTF8 on the way into Perl and to the
database encoding on the way back. This avoids a number of
observed anomalies, and gives Perl a consistent view of the
world.
Some minor code cleanups are also accomplished.
Alex Hunsaker, reviewed by Andy Colson.
src/pl/plpgsql/src/plerrcodes.h, src/include/utils/errcodes.h, and a
big chunk of errcodes.sgml are now automatically generated from a single
file, src/backend/utils/errcodes.txt.
Jan Urbański, reviewed by Tom Lane.
This allows the language-specific try/catch construct to catch and
handle exceptions arising from SPI calls, matching the behavior of
other PLs.
As an additional bonus you no longer get all the ugly "unrecognized
error in PLy_spi_execute_query" errors.
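A minimal sketch of the new behavior (inside a PL/Python function
body):

    try:
        plpy.execute("SELECT no_such_function()")
    except plpy.SPIError:
        plpy.warning("query failed, continuing anyway")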
Jan Urbański, reviewed by Steve Singer
This is an efficiency change, and means we now no longer have to run
"out $_TD; local $_TD = shift;", which was especially pointless in the case of
non-trigger functions where the passed value was always undef anyway.
A tiny open issue is whether we should get rid of the $prolog argument of
mkfunc, and the corresponding pushed value, which is now just a constant "false".
Tim Bunce, reviewed by Alex Hunsaker.
Previously reported as ERRCODE_ADMIN_SHUTDOWN, this case is now
reported as ERRCODE_T_R_DATABASE_DROPPED. No message text change.
Unlikely to happen on most servers, so low impact change to allow
session poolers to correctly handle this situation.
Tatsuo Ishii, edits by me, review by Robert Haas
Use the built-in TypeError, not SPIError, for errors having to do with
argument counts or types. Use SPIError, not simply plpy.Error, for
errors in PLy_spi_execute_plan. Finally, do not set a Python
exception if PyArg_ParseTuple failed, as it already sets the correct
exception.
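A sketch of the distinction (hypothetical plan, inside a PL/Python
function body):

    plan = plpy.prepare("SELECT $1::int AS x", ["int"])
    try:
        plpy.execute(plan, [1, 2])   # wrong argument count
    except TypeError:                # built-in TypeError, not SPIError
        plpy.notice("bad arguments to execute()")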
Jan Urbański
This reverts commit 740e54ca84, which seems
to have tickled an optimization bug in gcc 4.5.x, as reported upstream at
https://bugzilla.redhat.com/show_bug.cgi?id=671899
Since this patch had no purpose beyond code beautification, it's not
worth expending a lot of effort to look for another workaround.
It's not clear to me what should happen to the other plpython_unicode
variant expected files, but this patch gets things passing on my own
machines and at least some of the buildfarm.
Not sure why I'm seeing this on Fedora 14 and not earlier versions.
Seems like a regression that gcc no longer knows that DIE() doesn't return.
Still, adding a dummy return is harmless enough.
Global error handling led to confusion and was hard to manage. With
this change, errors from PostgreSQL are immediately reported to Python
as exceptions. This requires setting a Python exception after
reporting the caught PostgreSQL error as a warning, because PLy_elog
destroys the Python exception state.
Ideally, all places where PostgreSQL errors need to be reported back
to Python should be wrapped in subtransactions, to make going back to
Python from a longjmp safe. This will be handled in a separate patch.
Jan Urbański
The way the exception types were added to the module was wrong for
Python 3: exception classes were not actually available from plpy.
Fix that by factoring out the code responsible for defining new
Python exceptions and making it work with Python 3. A new regression
test makes sure the plpy module has the expected contents.
Jan Urbański, slightly revised by me
Pay attention to the attisdropped field and skip over TupleDesc fields
that have it set. Not a real problem until we get table returning
functions, but it's the right thing to do anyway.
Jan Urbański
If the function using yield to return rows fails halfway, the iterator
stays open and subsequent calls to the function will resume reading
from it. The fix is to unref the iterator and set it to NULL if there
has been an error.
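A sketch of the failure mode, as the body of a hypothetical SETOF int
function:

    def rows():
        yield 1
        raise Exception("boom")   # aborts the result set halfway
        yield 2                   # never reached
    return rows()

Previously the half-consumed generator would be resumed by the next
call of the function; now it is unreferenced and discarded on error.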
Jan Urbański
Two separate hash tables are used for regular procedures and for
trigger procedures, since the way trigger procedures work is quite
different from normal stored procedures. Change the signatures of
PLy_procedure_{get,create} to accept the function OID and a Boolean
flag indicating whether it's a trigger. This should make implementing
a PL/Python validator easier.
Using HTABs instead of Python dictionaries makes error recovery
easier, and allows for procedures to be cached based on their OIDs,
not their names. It also allows getting rid of the PyCObject field
that used to hold a pointer to PLyProcedure, since PyCObjects are
deprecated in Python 2.7 and replaced by Capsules in Python 3.
Jan Urbański
My previous commit, 85cff3ce7f on
2010-12-25, failed to update errcodes.sgml or plerrcodes.h. This patch
corrects that oversight, per a gripe from Tom Lane, and also corrects
a typographical error.
Given a column reference foo.bar, where there is a composite plpgsql
variable foo but it doesn't contain a column bar, the pre-9.0 coding would
immediately throw a "record foo has no field bar" error. In 9.0 the parser
hook instead falls through to let the core parser see if it can resolve the
reference. If not, you get a complaint about "missing FROM-clause entry
for table foo", which while in some sense correct isn't terribly helpful.
Complicate things a bit so that we can throw the old error message if
neither the core parser nor the hook are able to resolve the column
reference, while not changing the behavior in any other case.
Per bug #5757 from Andrey Galkin.
We must stay in the function's SPI context until done calling the iterator
that returns the set result. Otherwise, any attempt to invoke SPI features
in the python code called by the iterator will malfunction. Diagnosis and
patch by Jan Urbanski, per bug report from Jean-Baptiste Quenot.
Back-patch to 8.2; there was no support for SRFs in previous versions of
plpython.
Replace for loops in makefiles with proper dependencies. Parallel
make can now span across directories. Also, make -k and make -q work
properly.
GNU make 3.80 or newer is now required.
Like plperl and unlike plpgsql, there isn't any cached state that could
depend on exactly which relation the trigger is being fired for. So we
can use just one hash entry for all relations, which might save a little
something.
Alex Hunsaker
As noted by Jan Urbanski, this flag is in fact needed to ensure that the
function's input/result conversion functions are set up as expected.
Add a regression test to discourage anyone from making the same mistake
in future.
Instead of using ExecPrepareExpr, call ExecInitExpr. The net change here
is that we don't apply expression_planner() to the expression tree. There
is no need to do so, because that tree is extracted from a fully planned
plancache entry, so all the needed work is already done. This reduces
the setup costs by about a factor of 2 according to some simple tests.
Oversight noted while fooling around with the simple-expression code for
previous fix.
In general, expression execution state trees aren't re-entrantly usable,
since functions can store private state information in them.
For efficiency reasons, plpgsql tries to cache and reuse state trees for
"simple" expressions. It can get away with that most of the time, but it
can fail if the state tree is dirty from a previous failed execution (as
in an example from Alvaro) or is being used recursively (as noted by me).
Fix by tracking whether a state tree is in use, and falling back to the
"non-simple" code path if so. This results in a pretty considerable speed
hit when the non-simple path is taken, but the available alternatives seem
even more unpleasant because they add overhead in the simple path. Per
idea from Heikki.
Back-patch to all supported branches.
This patch eliminates various bizarre behaviors caused by sloppy thinking
about the difference between a domain type and its underlying array type.
In particular, the operation of updating one element of such an array
has to be considered as yielding a value of the underlying array type,
*not* a value of the domain, because there's no assurance that the
domain's CHECK constraints are still satisfied. If we're intending to
store the result back into a domain column, we have to re-cast to the
domain type so that constraints are re-checked.
For similar reasons, such a domain can't be blindly matched to an ANYARRAY
polymorphic parameter, because the polymorphic function is likely to apply
array-ish operations that could invalidate the domain constraints. For the
moment, we just forbid such matching. We might later wish to insert an
automatic downcast to the underlying array type, but such a change should
also change matching of domains to ANYELEMENT for consistency.
To ensure that all such logic is rechecked, this patch removes the original
hack of setting a domain's pg_type.typelem field to match its base type;
the typelem will always be zero instead. In those places where it's really
okay to look through the domain type with no other logic changes, use the
newly added get_base_element_type function in place of get_element_type.
catversion bumped due to change in pg_type contents.
Per bug #5717 from Richard Huxton and subsequent discussion.
This patch adds the SQL-standard concept of an INSTEAD OF trigger, which
is fired instead of performing a physical insert/update/delete. The
trigger function is passed the entire old and/or new rows of the view,
and must figure out what to do to the underlying tables to implement
the update. So this feature can be used to implement updatable views
using trigger programming style rather than rule hacking.
In passing, this patch corrects the names of some columns in the
information_schema.triggers view. It seems the SQL committee renamed
them somewhere between SQL:99 and SQL:2003.
Dean Rasheed, reviewed by Bernd Helmle; some additional hacking by me.
Various places were testing TRIGGER_FIRED_BEFORE() where what they really
meant was !TRIGGER_FIRED_AFTER(), or vice versa. This needs to be cleaned
up because there are about to be more than two possible states.
We might want to note this in the 9.1 release notes as something for
trigger authors to double-check.
For consistency's sake I also changed some places that assumed that
TRIGGER_FIRED_FOR_ROW and TRIGGER_FIRED_FOR_STATEMENT are necessarily
mutually exclusive; that's not in immediate danger of breaking, but
it's still sloppier than it should be.
Extracted from Dean Rasheed's patch for triggers on views. I'm committing
this separately since it's an identifiable separate issue, and is the
only reason for the patch to touch most of these particular files.
There are numerous methods by which a Perl or Tcl function can subvert
the behavior of another such function executed later; for example, by
redefining standard functions or operators called by the target function.
If the target function is SECURITY DEFINER, or is called by such a
function, this means that any ordinary SQL user with Perl or Tcl language
usage rights can do essentially anything with the privileges of the target
function's owner.
To close this security hole, create a separate Perl or Tcl interpreter for
each SQL userid under which plperl or pltcl functions are executed within
a session. However, all plperlu or pltclu functions run within a session
still share a single interpreter, since they all execute at the trust
level of a database superuser anyway.
Note: this change results in a functionality loss when libperl has been
built without the "multiplicity" option: it's no longer possible to call
plperl functions under different userids in one session, since such a
libperl can't support multiple interpreters in one process. However, such
a libperl already failed to support concurrent use of plperl and plperlu,
so it's likely that few people use such versions with Postgres.
Security: CVE-2010-3433
This is reproducibly possible in Python 2.7 if the user turned
PendingDeprecationWarning into an error, but it's theoretically also possible
in earlier versions in case of exceptional conditions.
backpatched to 8.0
Aside from being more forgiving, this prevents a rather surprising misbehavior
when the "wrong" order was used: the old code didn't throw a syntax error,
but absorbed the INTO clause into the last USING expression, which then did
strange things downstream.
Intentionally not changing the documentation; we'll continue to advertise
only the "standard" clause order.
Backpatch to 8.4, where the USING clause was added to EXECUTE.
It's not clear if this situation can occur in plpgsql other than via the
EXECUTE USING case Heikki illustrated, which I will shortly close off.
However, ignoring the intoClause if it's there is surely wrong, so let's
patch it for safety.
Backpatch to 8.3, which is as far back as this code has a PlannedStmt
to deal with. There might be another way to make an equivalent test
before that, but since this is just preventing hypothetical bugs,
I'm not going to obsess about it.
pointed out, it would need a 2nd pass after the whole query is processed to
correctly check that an unknown Param is coerced to the same target type
everywhere. Adding the 2nd pass would add a lot more code, which doesn't
seem worth the risk given that there isn't much of a use case for passing
unknown Params in the first place. The code would work without that check,
but it might be confusing and the behavior would be different from the
varparams case.
Instead, just coerce all unknown params in a PL/pgSQL USING clause to text.
That's simple, and is usually what users expect.
Revert the patch in CVS HEAD and master, and backpatch the new solution to
8.4. Unlike the previous solution, this applies easily to 8.4 too.