An incorrect and entirely unnecessary "safety check" in exec_stmt_getdiag()
caused the code to treat an assignment to a variable with dno zero as a
no-op. Unfortunately, that's a perfectly valid dno. This has been broken
since GET DIAGNOSTICS was invented. It's not terribly surprising that the
bug went unnoticed for so long, since in most cases you probably wouldn't
use the function's first-created variable (normally its first parameter)
as a GET DIAGNOSTICS target. Nonetheless, it's broken. Per bug #6551
from Adam Buraczewski.
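For illustration, the offending pattern looked roughly like this (names
paraphrased from pl_exec.c, not guaranteed to be the exact code); the fix
is simply to delete the check:

    /* WRONG: dno 0 identifies the function's first-created variable,
     * so this "safety check" silently turned a valid assignment into
     * a no-op */
    if (diag_item->target <= 0)
        continue;

    var = estate->datums[diag_item->target];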
Per a suggestion from Euler Taveira, it seems like a good idea to include
this information in \du (and \dg) output. This costs nothing for people
who are not using the VALID UNTIL feature, while for those who are, it's
rather critical information.
Fabrízio de Royes Mello
PGAC_PATH_COLLATEINDEX supposed that it could use AC_PATH_PROGS to search
for collateindex.pl, but that macro will only accept files that are marked
executable, and at least some DocBook installations don't mark the script
executable (a case the docs Makefile was already prepared for). Accept the
script if it's present and readable in $DOCBOOKSTYLE/bin, and otherwise
search the PATH as before.
Having fixed that up, we don't need the fallback case that was in the docs
Makefile, and instead can throw an understandable error if configure didn't
find the script. Per recent trouble report from John Lumby.
For those variables only used when asserts are enabled, use a new
macro PG_USED_FOR_ASSERTS_ONLY, which expands to
__attribute__((unused)) when asserts are not enabled.
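A minimal sketch of such a macro and a typical use (the real definition
in c.h must also allow for compilers without __attribute__ support):

    #ifdef USE_ASSERT_CHECKING
    #define PG_USED_FOR_ASSERTS_ONLY
    #else
    #define PG_USED_FOR_ASSERTS_ONLY __attribute__((unused))
    #endif

    /* the only reference to nmatches is inside the Assert, so it would
     * draw an "unused variable" warning in non-assert builds;
     * count_matches() is a hypothetical helper */
    int         nmatches PG_USED_FOR_ASSERTS_ONLY;

    nmatches = count_matches(list, key);
    Assert(nmatches == 1);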
Document that routine vacuuming is now also important for the purpose
of index-only scans; and mention, in the section that describes the
visibility map, that it is used to implement index-only scans.
Marti Raudsepp, with some changes by me.
This restores the pre-9.0 situation that it's possible to add new indexes
on pg_class and other mapped-but-not-shared catalogs, so long as you broke
the glass and flipped the big red Don't-Touch-Me switch. As before, there
are a lot of gotchas, and you'd have to be pretty desperate to try this
on a production database; but there doesn't seem to be a reason for
relmapper.c to be preventing such things all by itself. Per
experimentation with a case suggested by Cody Cutrer.
The prior coding instructs the user to pick an alternative maintenance
database, but this is overly clever, since it obscures whatever the real
cause of the failure is.
Josh Kupershmidt
I broke this in commit 337b6f5ecf, which
among other things arranged for quicksorts to CHECK_FOR_INTERRUPTS()
slightly less frequently. Sadly, it also arranged for heapsorts to
CHECK_FOR_INTERRUPTS() much less frequently. Repair.
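The intended pattern is simply to keep the cancel check inside the
per-tuple loop, roughly like this sketch (the helper names are
hypothetical, not the actual tuplesort.c code):

    for (;;)
    {
        CHECK_FOR_INTERRUPTS();     /* keep the heapsort cancellable */

        if (!tuples_remain(state))  /* hypothetical */
            break;
        sift_down_one_tuple(state); /* hypothetical */
    }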
Instead of just stopping after removing an arbitrary subset of orphaned
large objects, commit and start a new transaction after each batch of -l
objects.
This is just as effective as the original patch at limiting the number of
locks used, and it doesn't require doing the OID collection process
repeatedly to get everything. Since the option no longer changes the
fundamental behavior of vacuumlo, and it avoids a known server-side
limitation, enable it by default (with a default limit of 1000 LOs per
transaction).
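The batching logic amounts to something like this sketch (variable names
are illustrative and error handling is abbreviated; conn is an
established PGconn):

    if (lo_unlink(conn, lo_oid) < 0)
        fprintf(stderr, "failed to remove lo %u: %s",
                lo_oid, PQerrorMessage(conn));
    else if (++ndeleted % transaction_limit == 0)
    {
        /* cap the number of locks held by committing periodically */
        PQclear(PQexec(conn, "COMMIT"));
        PQclear(PQexec(conn, "BEGIN"));
    }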
In passing, be more careful about properly quoting the names of tables
and fields, and do some other cosmetic cleanup.
The old code was using either exit_horribly or die_horribly, depending on
whether it had an ArchiveHandle on which to close the connection or not;
but there were places that were passing a NULL ArchiveHandle to
die_horribly, and other places that used exit_horribly while having an
AH available. So there wasn't all that much consistency.
Improve the situation by keeping only one of the routines, and instead
of having to pass the AH down from the caller, arrange for it to be
present for an on_exit_nicely callback to operate on.
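In outline, the arrangement looks like this hedged sketch (the callback
name and registration details may differ from the committed code):

    /* exit callback: reaches the ArchiveHandle via its arg pointer */
    static void
    archive_close_connection(int code, void *arg)
    {
        ArchiveHandle *AH = (ArchiveHandle *) arg;

        if (AH && AH->connection)
            PQfinish(AH->connection);
    }

    /* registered once the connection exists, so error-exit paths no
     * longer need an AH passed down to them */
    on_exit_nicely(archive_close_connection, AH);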
Author: Joachim Wieland
Some tweaks by me
Per a suggestion from Robert Haas, in the ongoing "parallel pg_dump"
saga.
This was for demonstration only, and now that it was creating compiler
warnings from zlib without an obvious fix (see also
d923125b77), let's just remove it. The
"directory" format is presumably similar enough anyway.
Making this operation look like a utility statement seems generally a good
idea, and particularly so in light of the desire to provide command
triggers for utility statements. The original choice of representing it as
SELECT with an IntoClause appendage had metastasized into rather a lot of
places, unfortunately, so that this patch is a great deal more complicated
than one might at first expect.
In particular, keeping EXPLAIN working for SELECT INTO and CREATE TABLE AS
subcommands required restructuring some EXPLAIN-related APIs. Add-on code
that calls ExplainOnePlan or ExplainOneUtility, or uses
ExplainOneQuery_hook, will need adjustment.
Also, the cases PREPARE ... SELECT INTO and CREATE RULE ... SELECT INTO,
which formerly were accepted though undocumented, are no longer accepted.
The PREPARE case can be replaced with use of CREATE TABLE AS EXECUTE.
The CREATE RULE case doesn't seem to have much real-world use (since the
rule would work only once before failing with "table already exists"),
so we'll not bother with that one.
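For example, driven through libpq (object names illustrative; conn is an
established PGconn):

    /* formerly (no longer accepted):
     *     PREPARE q AS SELECT * INTO dest FROM src;
     *     EXECUTE q;
     * now, equivalently: */
    PQclear(PQexec(conn, "PREPARE q AS SELECT * FROM src"));
    PQclear(PQexec(conn, "CREATE TABLE dest AS EXECUTE q"));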
Both SELECT INTO and CREATE TABLE AS still return a command tag of
"SELECT nnnn". There was some discussion of returning "CREATE TABLE nnnn",
but for the moment backwards compatibility wins the day.
Andres Freund and Tom Lane
Failing to do so causes trigger invocation to fail when triggers are
nested within a function invocation that changes the current package.
Backpatch to 9.1; previous releases used a different method to obtain
_TD. Per bug report from Mark Murawski (bug #6511).
Author: Alex Hunsaker
The previous check for the pg_upgrade_support shared object relied on
tooling that might not be present in the non-development install.
Instead, use the LOAD mechanism to check for the pg_upgrade_support
shared object, like we do for other shared object checks.
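The LOAD-based check amounts to something like this sketch (the exact
library name and the reporting call are assumptions; conn is an
established PGconn):

    /* if the module is absent, LOAD fails and we can report it cleanly */
    PGresult   *res = PQexec(conn, "LOAD 'pg_upgrade_support'");

    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        report_missing_support_library();   /* hypothetical handler */
    PQclear(res);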
Backpatch to 9.1.
Report from Álvaro
When converting source files, pg_regress' inputdir and outputdir options were
ignored when computing the locations of the destination files. In consequence,
these options were effectively unusable when the regression inputs needed
to be adjusted by pg_regress. This patch makes pg_regress put the
converted files in the same place where these options specify that
non-converted input and results files are to be found. Backpatched to all
live branches.
When using connection info arrays with a conninfo string in the dbname
slot, some memory would be leaked if an error occurred while
processing the following array slots.
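For reference, the affected calling pattern looks like this (values are
illustrative):

    /* with expand_dbname = 1, a conninfo string in the dbname slot is
     * parsed and merged with the other keyword/value pairs */
    const char *keywords[] = {"dbname", "application_name", NULL};
    const char *values[] = {"host=localhost dbname=test", "myapp", NULL};
    PGconn     *conn = PQconnectdbParams(keywords, values, 1);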
found by Coverity
For a little while there I thought match_pathkeys_to_index() was broken
because it wasn't trying to match index columns to pathkeys in order.
Actually that's correct, because GiST can support ordering operators
on any random collection of index columns, but it sure needs a comment.
In commit 57664ed25e I tried to fix a bug
reported by Teodor Sigaev by making non-simple-Var output columns distinct
(by wrapping their expressions with dummy PlaceHolderVar nodes). This did
not work too well. Commit b28ffd0fcc fixed
some ensuing problems with matching to child indexes, but per a recent
report from Claus Stadler, constraint exclusion of UNION ALL subqueries was
still broken, because constant-simplification didn't handle the injected
PlaceHolderVars well either. On reflection, the original patch was quite
misguided: there is no reason to expect that EquivalenceClass child members
will be distinct. So instead of trying to make them so, we should ensure
that we can cope with the situation when they're not.
Accordingly, this patch reverts the code changes in the above-mentioned
commits (though the regression test cases they added stay). Instead, I've
added assorted defenses to make sure that duplicate EC child members don't
cause any problems. Teodor's original problem ("MergeAppend child's
targetlist doesn't match MergeAppend") is addressed more directly by
revising prepare_sort_from_pathkeys to let the parent MergeAppend's sort
list guide creation of each child's sort list.
In passing, get rid of add_sort_column; as far as I can tell, testing for
duplicate sort keys at this stage is dead code. Certainly it doesn't
trigger often enough to be worth expending cycles on in ordinary queries.
And keeping the test would've greatly complicated the new logic in
prepare_sort_from_pathkeys, because comparing pathkey list entries against
a previous output array requires that we not skip any entries in the list.
Back-patch to 9.1, like the previous patches. The only known issue in
this area that wasn't caused by the ill-advised previous patches was the
MergeAppend planning failure, which of course is not relevant before 9.1.
It's possible that we need some of the new defenses against duplicate child
EC entries in older branches, but until there's some clear evidence of that
I'm going to refrain from back-patching further.
This is intended as infrastructure to allow sepgsql to cooperate with
connection pooling software, by allowing the effective security label
to be set for each new connection.
KaiGai Kohei, reviewed by Yeb Havinga.
This is for the benefit of tools such as Coverity, which don't know that
the grammar guarantees that the case of having a query rather than a
relation cannot arise in the FROM case.
Although we often don't care about freeing all memory in pg_dump,
these functions already freed the same memory in other code paths, so
we might as well do it consistently.
found by Coverity
Dave Malcolm of Red Hat is working on a static code analysis tool for
Python-related C code. It reported a number of problems in plpython,
most of which were failures to check for NULL results from object-creation
functions, and so would only be an issue in very-low-memory situations.
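Most of the fixes follow one simple pattern, roughly (illustrative;
PLy_elog is plpython's error-reporting helper):

    PyObject   *dict = PyDict_New();

    if (dict == NULL)               /* possible under memory pressure */
        PLy_elog(ERROR, "could not create Python dictionary");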
Patch in HEAD and 9.1. We could go further back but it's not clear that
these issues are important enough to justify the work.
Jan Urbański
This replaces the former global variable PLy_curr_procedure, and provides
a place to stash per-call-level information. In particular we create a
per-call-level scratch memory context.
For the moment, the scratch context is just used to avoid leaking memory
from datatype output function calls in PLyDict_FromTuple. There probably
will be more use-cases in future.
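The usage follows the standard backend memory-context pattern, roughly
(the scratch_ctx field name and the surrounding calls are illustrative):

    MemoryContext oldcxt;
    char       *value;

    oldcxt = MemoryContextSwitchTo(exec_ctx->scratch_ctx);
    value = OutputFunctionCall(&typfunc, datum);    /* may leak */
    MemoryContextSwitchTo(oldcxt);

    /* discard everything leaked into the scratch context in one sweep */
    MemoryContextReset(exec_ctx->scratch_ctx);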
Although this is a fix for a pre-existing memory leakage bug, it seems
sufficiently invasive to not want to back-patch; it feels better as part
of the major rearrangement of plpython code that we've already done as
part of 9.2.
Jan Urbański
Extracted from a larger patch by Jaime Casanova, reviewed by Noah Misch.
I think this error message could use some more extensive revision, but
this at least makes the handling of spgist consistent with what we do for
other types of indexes that this code doesn't know how to handle.
add ability to control permissions of created files
have psql echo its queries for easier debugging
output four separate log files, and delete them on success
add -r/--retain option to keep log files after success
make log files append-only
remove -g/-G/-l logging options
suggest tailing appropriate log file on failure
enhance -v/--verbose behavior