This gets rid of a significant amount of duplicative code.
KaiGai Kohei, reviewed in earlier versions by Dimitri Fontaine, with
further review and cleanup by me.
This is merely an exercise in satisfying pedants, not a bug fix, because
in every case we were checking for failure later with ferror(), or else
there was nothing useful to be done about a failure anyway. Document
the latter cases.
An empty HBA file is surely an error, since it means there is no way to
connect to the server. We've not heard identifiable reports of people
actually doing that, but this will also close off the case Thom Brown just
complained of, namely pointing hba_file at a directory. (On at least some
platforms with some directories, it will read as an empty file.)
Perhaps this should be back-patched, but given the lack of previous
complaints, I won't add extra work for the translators.
There's no particular value in doing AssertMacro((tup) != NULL) in front
of code that's certain to crash anyway if tup is NULL. And if "tup" is
actually the address of a local variable, gcc 4.6 whinges about it. That's
arguably pretty broken on gcc's part, but we might as well remove the
useless test to silence the warnings. This gets rid of all the -Waddress
warnings in the backend; there are some in libpq and psql that are a bit
harder to avoid.
The heuristic for when to dump a cast failed for a cast between table
rowtypes, as reported by Frédéric Rejol. Fix it by setting
the "dump" flag for such a type the same way as the flag is set for the
underlying table or base type. This won't result in the auto-generated
type appearing in the output, since setting its objType to DO_DUMMY_TYPE
unconditionally suppresses that. But it will result in dumpCast doing what
was intended.
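For illustration only (these object names are invented, not taken from the
report), the kind of cast at issue looks something like this:

    CREATE TABLE parent (id int, name text);
    CREATE TABLE child  (id int, name text);
    CREATE FUNCTION child_to_parent(child) RETURNS parent
        AS 'SELECT ($1).id, ($1).name' LANGUAGE sql;
    CREATE CAST (child AS parent) WITH FUNCTION child_to_parent(child);

Before the fix, the CREATE CAST could be left out of the dump because the
table rowtypes never had their "dump" flags set.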
Back-patch to 8.3. The 8.2 code is rather different in this area, and it
doesn't seem worth any risk to fix a corner case that nobody has stumbled
on before.
In general the data returned by an index-only scan should have the
datatypes originally computed by FormIndexDatum. If the index opclasses
use "storage" datatypes different from their input datatypes, the scan
tuple will not have the same rowtype as is attributed to the index; but we had
a hard-wired assumption that that was true in nodeIndexonlyscan.c. We'd
already hacked around the issue for the one case where the types are
different in btree indexes (btree name_ops), but this would definitely
come back to bite us if we ever implement index-only scans in GiST.
To fix, require the index AM to explicitly provide the tupdesc for the
tuple it is returning. btree can just pass back the index's tupdesc, but
GiST will have to work harder when and if it supports index-only scans.
I had previously proposed fixing this by allowing the index AM to fill the
scan tuple slot directly; but on reflection that seemed like a module
layering violation, since TupleTableSlots are creatures of the executor.
At least in the btree case, it would also be less efficient, since the
tuple deconstruction work would occur even for rows later found to be
invisible to the scan's snapshot.
Rearrange text to improve clarity, and add an example of implicit reference
to a plpgsql variable in a bound cursor's query. Byproduct of some work
I'd done on the "named cursor parameters" patch before giving up on it.
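A minimal sketch of the pattern being documented (the table and variable
names here are invented, not the exact doc example):

    CREATE FUNCTION get_val(key integer) RETURNS integer AS $$
    DECLARE
        -- the bound cursor's query implicitly references the variable "key"
        curs CURSOR FOR SELECT val FROM tbl WHERE id = key;
        result integer;
    BEGIN
        OPEN curs;
        FETCH curs INTO result;
        CLOSE curs;
        RETURN result;
    END;
    $$ LANGUAGE plpgsql;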
This view was being insufficiently careful about matching the FK constraint
to the depended-on primary or unique key constraint. That could result in
failure to show an FK constraint at all, or showing it multiple times, or
claiming that it depended on a different constraint than the one it really
does. Fix by joining via pg_depend to ensure that we find only the correct
dependency.
Back-patch, but don't bump catversion because we can't force initdb in back
branches. The next minor-version release notes should explain that if you
need to fix this in an existing installation, you can drop the
information_schema schema then re-create it by sourcing
$SHAREDIR/information_schema.sql in each database (as a superuser of
course).
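Concretely, the repair in each database would look about like this (run as
a superuser; $SHAREDIR is the directory reported by pg_config --sharedir):

    DROP SCHEMA information_schema CASCADE;
    \i $SHAREDIR/information_schema.sql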
Add a column pg_class.relallvisible to remember the number of pages that
were all-visible according to the visibility map as of the last VACUUM
(or ANALYZE, or some other operations that update pg_class.relpages).
Use relallvisible/relpages, instead of an arbitrary constant, to estimate
how many heap page fetches can be avoided during an index-only scan.
This is pretty primitive and will no doubt see refinements once we've
acquired more field experience with the index-only scan mechanism, but
it's way better than using a constant.
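For instance (table name invented), the new column can be inspected
directly after a VACUUM:

    VACUUM ANALYZE my_table;
    SELECT relpages, relallvisible
    FROM pg_class
    WHERE relname = 'my_table';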
Note: I had to adjust an underspecified query in the window.sql regression
test, because it was changing answers when the plan changed to use an
index-only scan. Some of the adjacent tests perhaps should be adjusted
as well, but I didn't do that here.
This way, if a role's config setting uses the name of another role,
the validity of the dump isn't dependent on the order in which those
two roles are dumped.
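One plausible instance of the hazard (role names invented; this assumes a
per-role setting of the "role" GUC):

    CREATE ROLE bob;
    CREATE ROLE alice LOGIN;
    ALTER ROLE alice SET role = 'bob';   -- mentions another role by name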
Code by Phil Sorber, comment by me.
Nobody is using the missing_ok flag yet, but let's speculate that this will
be a better interface for future callers.
KaiGai Kohei, with some adjustments by me.
This patch restores the pre-9.1 behavior that pl/perl functions returning
VOID ignore the result value of their last Perl statement. 9.1.0
unintentionally threw an error if the last statement returned a reference,
as reported by Amit Khandekar.
Also, make sure it works to return a string value for a composite type,
so long as the string meets the type's input format. We already allowed
the equivalent behavior for arrays, so it seems inconsistent to not allow
it for composites.
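For instance (type and function names invented):

    CREATE TYPE two_ints AS (a integer, b integer);
    CREATE FUNCTION make_two_ints() RETURNS two_ints AS $$
        return '(1,2)';    # a string in the composite type's input format
    $$ LANGUAGE plperl;
    SELECT * FROM make_two_ints();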
In addition, ensure we throw errors for attempts to return arrays or hashes
when the function's declared result type is not an array or composite type,
respectively. Pre-9.1 versions rather uselessly returned strings like
ARRAY(0x221a9a0) or HASH(0x221aa90), while 9.1.0 threw an error for the
hash case and returned a garbage value for the array case.
Also, clean up assorted grotty coding in Perl array conversion, including
use of a session-lifespan memory context to accumulate the array value
(resulting in a session-lifespan memory leak on error), failure to apply the
declared typmod if any, and failure to detect some cases of non-rectangular
multi-dimensional arrays.
Alex Hunsaker and Tom Lane
Relation rowtypes and automatically-generated array types do not need to
have their own extension membership dependency entries. If we create such
then it becomes more difficult to remove items from an extension, and it's
also harder for an extension upgrade script to make sure it duplicates the
dependencies created by the extension's regular installation script.
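Roughly, the convenience at stake (extension and table names invented):
when an extension script creates a table, its rowtype and array type come
along implicitly, so detaching the table later should need only

    ALTER EXTENSION myext DROP TABLE ext_tab;

rather than additional commands to detach the auto-generated types as well.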
I made the code create such entries in commit
988cccc620, I think because of worries about
the shell-type-replacement case; but that cure was worse than the disease.
It would only matter if one extension created a shell type that was
replaced with an auto-generated type in another extension, which seems
pretty far-fetched. Better to make this work unsurprisingly in normal
cases.
Report and patch by Robert Haas, comment adjustments by me.
We have seen one too many reports of people trying to use 9.1 extension
files in the old-fashioned way of sourcing them in psql. Not only does
that usually not work (due to failure to substitute for MODULE_PATHNAME
and/or @extschema@), but if it did work they'd get a collection of loose
objects, not an extension. To prevent this, insert an \echo ... \quit
line that prints a suitable error message into each extension script file,
and teach commands/extension.c to ignore lines starting with \echo.
That should not only prevent any adverse consequences of loading a script
file the wrong way, but make it crystal clear to users that they need to
do it differently now.
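The inserted header looks roughly like this (each script names its own
extension, of course):

    \echo Use "CREATE EXTENSION hstore" to load this file. \quit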
Tom Lane, following an idea of Andrew Dunstan's. Back-patch into 9.1
... there is not going to be much value in this if we wait till 9.2.
The documentation neglected to explain its behavior in a script file
(it only ends execution of the script, not psql as a whole), and failed
to mention the long form \quit either.
It's been bothering me for several days that pretending that the cstring
data stored in a btree name_ops column is really a "name" Datum could lead
to reading past the end of memory. However, given the current memory
layout used for index-only scans in the btree code, a crash is in fact not
possible. Document that so we don't break it. I have not thought of any
other solutions that aren't fairly ugly too, and most of them lose the
functionality of index-only scans on name columns altogether, so this seems
like the way to go.
Dept. of second thoughts: as long as we've got that tlist hanging around
anyway, we can apply ExecTypeFromTL to it to get a suitable descriptor for
the ScanTupleSlot. This is a nicer solution than the previous one because
it eliminates some hard-wired knowledge about btree name_ops, and because
it avoids the somewhat shaky assumption that we needn't set up the scan
tuple descriptor in EXPLAIN_ONLY mode. It doesn't change what actually
happens at run-time though, and I'm still a bit nervous about that.
This commit changes index-only scans so that data is read directly from the
index tuple without first generating a faux heap tuple. The only immediate
benefit is that indexes on system columns (such as OID) can be used in
index-only scans, but this is necessary infrastructure if we are ever to
support index-only scans on expression indexes. The executor is now ready
for that, though the planner still needs substantial work to recognize
the possibility.
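As a quick illustration of the newly-possible case (object names and
values invented; whether the planner actually picks the plan depends on
statistics as usual):

    CREATE TABLE t (x int) WITH OIDS;
    CREATE UNIQUE INDEX t_oid_idx ON t (oid);
    INSERT INTO t SELECT generate_series(1, 1000);
    VACUUM t;
    -- can now be an Index Only Scan using t_oid_idx:
    EXPLAIN SELECT oid FROM t WHERE oid = 12345;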
To do this, Vars in index-only plan nodes have to refer to index columns
not heap columns. I introduced a new special varno, INDEX_VAR, to mark
such Vars to avoid confusion. (In passing, this commit renames the two
existing special varnos to OUTER_VAR and INNER_VAR.) This allows
ruleutils.c to handle them with logic similar to what we use for subplan
reference Vars.
Since index-only scans are now fundamentally different from regular
indexscans so far as their expression subtrees are concerned, I also chose
to change them to have their own plan node type (and hence, their own
executor source file).
There's no particular advantage to this change on its face; indeed,
it's possible that this might be slightly slower than the old way.
But it makes this information more easily accessible to other
functions, and therefore paves the way for future code consolidation.
Performance isn't critical here, so there's no need to be smart about
how we do the search.
This is a heavily cut-down version of a patch from KaiGai Kohei,
with several fixes by me. Additional review from Dimitri Fontaine.
This might help to avoid confusion between the CREATE USER command
and the deprecated CREATEUSER option to CREATE ROLE, as per a recent
complaint from Ron Adams. At any rate, having a cross-link here
seems like a good idea; two commands that are so similar should
reference each other.
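For reference, the two easily-confused spellings:

    CREATE USER alice;              -- same as CREATE ROLE alice LOGIN
    CREATE ROLE bob CREATEUSER;     -- deprecated: this actually means SUPERUSER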