value. This means that hash index lookups are always lossy and have to be
rechecked when the heap is visited; however, the gain in index compactness
outweighs this when the indexed values are wide. Also, datatype comparisons
are needed only when the hash codes match, rather than for every entry in
the hash bucket; so this can also win for datatypes that
have expensive comparison functions. A small additional win is gained by
keeping hash index pages sorted by hash code and using binary search to reduce
the number of index tuples we have to look at.
Xiao Meng
This commit also incorporates Zdenek Kotala's patch to isolate hash metapages
and hash bitmaps a bit better from the page header datastructures.
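For illustration (table, column, and query invented here), a hash index on a
wide column might be used like so:

    -- Only the 32-bit hash code is stored, so the index stays small
    -- even though the indexed values are wide.
    CREATE TABLE documents (url text, body text);
    CREATE INDEX documents_url_idx ON documents USING hash (url);

    -- An equality probe finds matching hash codes, then rechecks the
    -- actual value at the heap (the index lookup itself is lossy).
    SELECT body FROM documents WHERE url = 'http://example.com/page';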
each connection. This makes it possible to catch errors in the pg_hba
file when it's being reloaded, instead of silently reloading a broken
file and failing only when a user tries to connect.
This patch also makes the "sameuser" argument to ident authentication
optional.
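An illustrative pg_hba.conf fragment (addresses invented); with this patch
the two entries mean the same thing:

    # "sameuser" may now be omitted: ident alone implies it
    host    all    all    192.168.0.0/24    ident sameuser
    host    all    all    192.168.0.0/24    ident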
btree. We can't easily tell whether clauses generated from the equivalence
class could be used with such an index, so just assume that they might be.
This bit of over-optimization prevented use of non-btree indexes for nestloop
inner indexscans, in any case where the join uses an equality operator that
is also a btree operator --- which in particular is typically true for hash
indexes. Noted while trying to test the current hash index patch.
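For illustration (hypothetical schema), a join such as the following can now
consider a hash index for the inner indexscan:

    CREATE INDEX orders_customer_idx ON orders USING hash (customer_id);

    -- The equality clause is generated from an equivalence class; the
    -- planner may now try the hash index on the nestloop inner side.
    SELECT *
    FROM customers c
    JOIN orders o ON o.customer_id = c.id;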
and the literal syntax INTERVAL 'string' ... SECOND(n), as required by the
SQL standard. Our old syntax put (n) directly after INTERVAL, which was
a mistake, but will still be accepted for backward compatibility as well
as symmetry with the TIMESTAMP cases.
Change intervaltypmodout to show it in the spec's way, too. (This could
potentially affect clients, if there are any that analyze the typmod of an
INTERVAL in any detail.)
Also fix interval input to handle 'min:sec.frac' properly; I had overlooked
this case in my previous patch.
Document the use of the interval fields qualifier, which up to now we had
never mentioned in the docs. (I think the omission was intentional because
it didn't work per spec; but it does now, or at least close enough to be
credible.)
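A sketch of the syntaxes involved (results indicative only):

    -- SQL-standard placement: precision after the trailing field
    SELECT INTERVAL '10.123456' SECOND(3);   -- 00:00:10.123

    -- Old placement, still accepted for backward compatibility and
    -- for symmetry with TIMESTAMP(n):
    SELECT INTERVAL(3) '10.123456';

    -- The interval fields qualifier, now documented:
    CREATE TABLE t (d INTERVAL DAY TO SECOND(3));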
GetOldestXmin() instead of RecentGlobalXmin; this is safer because we do not
depend on the latter being correctly set elsewhere, and while it is more
expensive, this code path is not performance-critical. This is a real
risk for autovacuum, because it can execute whole cycles without doing
a single vacuum, which would mean that RecentGlobalXmin would stay at its
initialization value, FirstNormalTransactionId, causing a bogus value to be
inserted in pg_database. This bug could explain some recent reports of
failure to truncate pg_clog.
At the same time, change the initialization of RecentGlobalXmin to
InvalidTransactionId, and ensure that it's set to something else whenever
it's going to be used. Using it while still at FirstNormalTransactionId in
HOT page pruning could lead to data loss. InitPostgres takes care of setting it
to a valid value, but the extra checks are there to prevent "special"
backends from behaving in unusual ways.
Per Tom Lane's detailed problem dissection in 29544.1221061979@sss.pgh.pa.us
a lot closer than it was before). To do this, tweak coerce_type() to pass
through the typmod information when invoking interval_in() on an UNKNOWN
constant; then fix DecodeInterval to pay attention to the typmod when deciding
how to interpret a units-less integer value. I changed one or two other
details as well. I believe the code now reacts as expected by spec for all
the literal syntaxes that are specifically enumerated in the spec. There
are corner cases involving strings that don't exactly match the set of fields
called out by the typmod, for which we might want to tweak the behavior some
more; but I think this is an area of user friendliness rather than spec
compliance. There remain some non-compliant details about the SQL syntax
(as opposed to what's inside the literal string); but at least we'll throw
error rather than silently doing the wrong thing in those cases.
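For instance (behavior sketched from the description above), the typmod now
governs how a units-less value is read:

    SELECT INTERVAL '1' MINUTE;   -- now one minute, not one second
    SELECT INTERVAL '2' DAY;      -- two days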
when user-defined functions used in a plan are modified. Also invalidate
plans when schemas, operators, or operator classes are modified; but for these
cases we just invalidate everything rather than tracking exact dependencies,
since these types of objects seldom change in a production database.
Tom Lane; loosely based on a patch by Martin Pihlak.
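A hypothetical session showing the user-visible effect:

    CREATE FUNCTION discount(numeric) RETURNS numeric
        AS 'SELECT $1 * 0.9' LANGUAGE sql;

    PREPARE q AS SELECT discount(100);
    EXECUTE q;    -- 90.0

    -- Redefining the function invalidates the cached plan, so the
    -- next EXECUTE picks up the new definition instead of the old:
    CREATE OR REPLACE FUNCTION discount(numeric) RETURNS numeric
        AS 'SELECT $1 * 0.8' LANGUAGE sql;
    EXECUTE q;    -- 80.0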
for pg_stop_backup. First, it is possible that the history file name is not
alphabetically later than the last WAL file name, so we should explicitly
check that both have been archived. Second, the previous coding would wait
forever if a checkpoint had managed to remove the WAL file before we looked
for it.
Simon Riggs, plus some code cleanup by me.
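The wait happens in the usual base-backup sequence, e.g.:

    SELECT pg_start_backup('nightly');
    -- ... copy the data directory with an external tool ...
    -- pg_stop_backup() now waits until both the final WAL segment and
    -- the backup history file are reported as archived:
    SELECT pg_stop_backup();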
referenced tables are dumped before the referencing tables. This avoids
failures when the data is loaded with the FK constraints already active.
If no such ordering is possible because of circular or self-referential
constraints, print a NOTICE to warn the user about it.
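For example (hypothetical schema), the parent's data is now dumped first; a
circular case draws the NOTICE instead:

    CREATE TABLE parent (id int PRIMARY KEY);
    CREATE TABLE child  (id int PRIMARY KEY,
                         parent_id int REFERENCES parent(id));
    -- A data-only dump now emits parent's rows before child's, so a
    -- restore with the FK constraints already active succeeds.

    -- Adding a reference back from parent to child makes the
    -- constraints circular; no safe ordering exists, so a NOTICE is
    -- printed:
    ALTER TABLE parent ADD COLUMN pet_child int REFERENCES child(id);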
searching instead of naive matching. In the worst case this has the same
O(M*N) complexity as the naive method, but the worst case is hard to hit,
and the average case is very fast, especially with longer patterns.
David Rowley
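The affected operations are plain substring searches, e.g. (illustrative):

    -- Both forms now use Boyer-Moore-Horspool internally; longer
    -- patterns benefit most, since they allow bigger skips.
    SELECT position('needle' in 'a long haystack hiding a needle');
    SELECT strpos('a long haystack hiding a needle', 'needle');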
for editing if no function name is specified. This seems a much cleaner way
to offer that functionality than the original patch had. In passing,
de-clutter the error displays that are given for a bogus function-name
argument, and standardize on "$function$" as the default delimiter for the
function body. (The original coding would use the shortest possible
dollar-quote delimiter, which seems to create unnecessarily high risk of
later conflicts with the user-modified function body.)
In support of that, create a backend function pg_get_functiondef().
The psql command is functional but maybe a bit rough around the edges...
Abhijit Menon-Sen
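Typical usage (function name invented):

    -- In psql: edit an existing function, or a blank CREATE FUNCTION
    -- template if no name is given:
    \ef my_trigger_fn
    \ef

    -- The new backend function can also be called directly:
    SELECT pg_get_functiondef('my_trigger_fn'::regproc);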
inserting a materialize node above an inner-side sort node, when the sort is
expected to spill to disk. (The materialize protects the sort from having
to support mark/restore, allowing it to do its final merge pass on-the-fly.)
We neglected to teach cost_mergejoin about that hack, so it was failing to
include the materialize's costs in the estimated cost of the mergejoin.
The materialize's costs are generally going to be pretty negligible in
comparison to the sort's, so this is only a small error and probably not
worth back-patching; but it's still wrong.
In the similar case where a materialize is inserted to protect an inner-side
node that can't do mark/restore at all, it's still true that the materialize
should not spill to disk, and so we should cost it cheaply rather than
expensively.
Noted while thinking about a question from Tom Raney.
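For illustration, the plan shape in question looks roughly like this
(hypothetical query; the sketch is not literal EXPLAIN output):

    EXPLAIN SELECT * FROM a JOIN b ON a.k = b.k;
    -- Merge Join                  (total cost now includes the
    --   ->  Sort                   Materialize below)
    --   ->  Materialize           (shields the inner sort from
    --         ->  Sort             mark/restore, so it can do its
    --                              final merge pass on-the-fly)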
during parsing. Formerly the parser's stack was allocated with malloc
and so wouldn't be reclaimed; this patch makes it use palloc instead,
so that flushing the current context will reclaim the memory. Per
Marko Kreen.
whenever possible, as per bug report from Oleg Serov. While at it, reorder
the operations in the RECORD case so that a palloc failure cannot strike
while the variable update is only partly complete.
Back-patch as far as 8.1. Although the code of the particular function is
similar in 8.0, 8.0's support for composite fields in rows is sufficiently
broken elsewhere that it doesn't seem worth fixing this.
output that is seen when a checkpoint occurs at just the right time during
the test. Per my report of 2008-08-31.
This could be back-patched but I'm not sure it's worth the trouble.
command id is the cmin, when it can in fact be a combo cid. That made rows
incorrectly invisible to a transaction where a tuple was deleted by multiple
aborted subtransactions.
Report and patch by Karl Schnaitter. Back-patch to 8.3, where combo cids
were introduced.
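A scenario of the kind described (a sketch, not the exact reported case;
assumes a table t(id int)):

    BEGIN;
    INSERT INTO t VALUES (1);     -- sets cmin
    SAVEPOINT s1;
    DELETE FROM t WHERE id = 1;   -- same transaction: creates a combo cid
    ROLLBACK TO s1;
    SAVEPOINT s2;
    DELETE FROM t WHERE id = 1;   -- another aborted delete
    ROLLBACK TO s2;
    -- Both deletes aborted, so the row must still be visible here;
    -- misreading the combo cid as a plain cmin could hide it:
    SELECT * FROM t WHERE id = 1;
    COMMIT;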
SELECT foo.*) so that it cannot be confused with a quoted identifier "*".
Instead create a separate node type A_Star to represent this notation.
Per pgsql-hackers discussion of 2007-Sep-27.
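The two notations that must remain distinct (hypothetical table whose second
column really is named "*"):

    CREATE TABLE foo (a int, "*" int);

    SELECT foo.*   FROM foo;   -- all columns; parsed as an A_Star node
    SELECT foo."*" FROM foo;   -- only the column literally named "*"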
syntax to avoid a useless store into a global variable. Per experimentation,
this works better than my original thought of trying to push the code into
an out-of-line subroutine.