This should make it easier to identify which row is problematic when an
insert or update is processing many rows.
The formatting is similar to that for unique-index violation messages,
except that we limit field widths to 64 bytes since otherwise the message
could get unreasonably long. (Note also that there's currently no attempt
to quote or escape field values that contain commas etc.)
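An illustrative example of the new detail line (the table and values here
are invented, and the exact wording may differ slightly):
regression=# insert into t values (1, null);
ERROR:  null value in column "b" violates not-null constraint
DETAIL:  Failing row contains (1, null).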
Jan Kundrát, reviewed by Royce Ausburn, somewhat rewritten by me.
The old expression sed 's,$(srcdir),python3,' would normally resolve as
sed 's,.,python3,', since $(srcdir) is just "." in an in-tree build; and in
a sed pattern an unescaped "." matches any single character, not a literal
dot, which is not really what we wanted. While it doesn't actually break
anything right now (presumably because the text being edited begins with
the "." we meant to replace), it's still wrong, so put in a bit more work
to make it more robust.
Moving the code two full tab stops to the right requires rethinking the
cosmetic code layout choices, which pgindent isn't really able to do for
us. Whitespace and comment adjustments only; no code changes.
While the deletion in itself wouldn't break things, any further creation
of objects in the script would result in dangling pg_depend entries being
added by recordDependencyOnCurrentExtension(). An example from Phil
Sorber convinced me that this is just barely likely enough to be worth
expending a couple lines of code to defend against. The resulting error
message might be confusing, but it's better than leaving corrupted catalog
contents for the user to deal with.
This function has now grown enough cases that a switch seems appropriate.
This results in a measurable speed improvement on some platforms, and
should certainly not hurt. The code's in need of a pgindent run now,
though.
Andres Freund
The correct information appears in the text, so just remove the statement
in the table, where it did not fit nicely anyway. (Curiously, the correct
info has been there much longer than the erroneous table entry.)
Resolves problem noted by Daniele Varrazzo.
In HEAD and 9.1, also do a bit of wordsmithing on other text on the page.
The server name for a foreign table was not quoted at need, as per report
from Ronan Dunklau. Also, queries related to FDW options were inadequately
schema-qualified in places where the search path isn't just pg_catalog;
they were also inconsistently formatted, and we didn't always check that
they returned the expected number of rows.
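For instance (an illustrative sketch; the wrapper and server names are
invented), a server name containing a space has to come out double-quoted
in the dump:
CREATE SERVER "remote db" FOREIGN DATA WRAPPER dummy_fdw;
CREATE FOREIGN TABLE ft (x int) SERVER "remote db";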
The EvalPlanQual machinery assumes that whole-row Vars generated for the
outputs of non-table RTEs will be of composite types. However, for the
case where the RTE is a function call returning a scalar type, we were
doing the wrong thing, as a result of sharing code with a parser case
where the function's scalar output is wanted. (Or at least, that's what
that case has done historically; it does seem a bit inconsistent.)
To fix, extend makeWholeRowVar's API so that it can support both use-cases.
This fixes Belinda Cussen's report of crashes during concurrent execution
of UPDATEs involving joins to the result of UNNEST() --- in READ COMMITTED
mode, we'd run the EvalPlanQual machinery after a conflicting row update
commits, and it was expecting to get a HeapTuple not a scalar datum from
the "wholerowN" variable referencing the function RTE.
Back-patch to 9.0 where the current EvalPlanQual implementation appeared.
In 9.1 and up, this patch also fixes failure to attach the correct
collation to the Var generated for a scalar-result case. An example:
regression=# select upper(x.*) from textcat('ab', 'cd') x;
ERROR: could not determine which collation to use for upper() function
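With the fix, the Var generated for the scalar-result case carries the
function's type and collation, so the same query should succeed, along
these lines:
regression=# select upper(x.*) from textcat('ab', 'cd') x;
 upper
-------
 ABCD
(1 row)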
This fixes a longstanding but up to now benign bug in the way pg_dumpall
was built. The bug was exposed by recent code adjustments. The Makefile
does not use $(OBJS) to build pg_dumpall, so this fix drops those source
files from pg_dumpall's build rule and adds the one source file it
consequently needs.
Use of a randomly chosen large value was never exactly graceful, and
now that there are penalty functions that are intentionally using infinity,
it doesn't seem like a good idea for null-vs-not-null to be using something
less.
In the original implementation, a range-contained-by search had to scan
the entire index because an empty range could be lurking anywhere.
Improve that by adding a flag to upper GiST entries that says whether the
represented subtree contains any empty ranges.
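The reason an empty range can lurk anywhere is that it is contained by
every range; for instance:
SELECT 'empty'::int4range <@ int4range(1,10);  -- returns true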
Also, make a simple mod to the penalty function to discourage empty ranges
from getting pushed into subtrees without any. This needs more work, and
the picksplit function should be taught about it too, but that code can be
improved without causing an on-disk compatibility break; so we'll leave it
for another day.
Since we're breaking on-disk compatibility of range values anyway, I took
the opportunity to reorganize the range flags bits; the unused
RANGE_xB_NULL bits are now adjacent, which might open the door for using
them in some other way later.
In passing, remove the GiST range opclass entry for <>, which doesn't seem
like it can really be indexed usefully.
Alexander Korotkov, with some editorializing by Tom
It runs the regression tests, runs pg_upgrade on the populated
database, and compares the before and after dumps. While not actually
a cross-version upgrade, this does detect omissions and bugs in the
involved tools from time to time. It's also possible to do a
cross-version upgrade by manually supplying parameters.
The original coding was
var->value = (Datum) state;
which is bogus, and then in commit 2f0f7b4bce
it was "corrected" to
var->value = PointerGetDatum(state);
which is a faithful translation but still wrong.
This seems purely cosmetic, though, so no need for a back-patch.
Pavel Stehule
distro version of perl.
David Wheeler and Alex Hunsaker.
Backpatch to 9.1 where it applies cleanly. A simple workaround is
available for earlier branches, and further effort doesn't seem warranted.
In the cases where the result of the called proc is negated, we should
explicitly test both inputs for empty, to ensure we'll never return "true"
for an unsatisfiable query. In other cases we can rely on the called proc
to say the right thing.
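As an illustration with the built-in int4range (exactly which strategies
are implemented by negating another proc is an internal detail): a query
against an empty range is unsatisfiable, so e.g.
SELECT 'empty'::int4range << int4range(1,5);
must yield false, never true.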
Same bug as reported by Thom Brown for check constraints on tables: the
constraint must be dumped separately from the domain, otherwise it is
restored before the data and thus prevents potentially-violating data
from being loaded in the first place.
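A sketch of the failure mode (names invented; it takes a NOT VALID
constraint for violating data to exist at all):
CREATE DOMAIN posint AS int;
CREATE TABLE t (x posint);
INSERT INTO t VALUES (-1);
ALTER DOMAIN posint ADD CONSTRAINT pos CHECK (VALUE > 0) NOT VALID;
-- If the CHECK were emitted as part of CREATE DOMAIN, restoring the
-- INSERT of -1 would fail; the constraint must be dumped after the data.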
Per Dean Rasheed
This adds some I/O stats to the logging of autovacuum (when the
operation takes long enough that log_autovacuum_min_duration causes it
to be logged), so that it is easier to tune. Notably, it adds buffer
I/O counts (hits, misses, dirtied) and read and write rates.
Authors: Greg Smith and Noah Misch
A simple thinko in ginRedoUpdateMetapage, namely failing to increment a
loop counter, led to inserting records into the last pending-list page in
the wrong order (the opposite of that intended). So far as I can tell,
this would not upset the code that eventually flushes pending items into
the main part of the GIN index. But it did break the code that searched
the pending list for matches, resulting in transient failure to find
matching entries during index lookups, as illustrated in bug #6307 from
Maksym Boguk.
Back-patch to 8.4 where the incorrect code was introduced.
This speeds up snapshot-taking and reduces ProcArrayLock contention.
Also, the PGPROC (and PGXACT) structures used by two-phase commit are
now allocated as part of the main array, rather than in a separate
array, and we keep ProcArray sorted in pointer order. These changes
are intended to minimize the number of cache lines that must be pulled
in to take a snapshot, and testing shows a substantial increase in
performance on both read and write workloads at high concurrencies.
Pavan Deolasee, Heikki Linnakangas, Robert Haas
The WITH [NO] DATA option was not supported, nor the ability to specify
replacement column names; the former limitation wasn't even documented, as
per recent complaint from Naoya Anzai. Fix by moving the responsibility
for supporting these options into the executor. It actually takes less
code this way ...
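For example (a sketch, assuming the CREATE TABLE AS EXECUTE case this
fixes; the statement name and columns are invented), both options now
work together:
PREPARE q AS SELECT 1, 2;
CREATE TABLE t (a, b) AS EXECUTE q WITH NO DATA;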
catversion bump due to change in representation of IntoClause, which might
affect stored rules.
exception handler. This was a regression in 9.1, when the capability
to catch specific SPI errors was added, so backpatch to 9.1.
Mika Eloranta, with some editing by Jan Urbański.
The original coding would not work for discrete ranges in which the
canonicalization rule is to produce symmetric boundaries (either [] or ()
style), as noted by Jeff Davis. Florian Pflug pointed out that we could
fix that by invoking the canonicalization function to see if the range
"between" the two given ranges normalizes to empty. This implementation
of Florian's idea is a tad slower than the original code, but only in the
case where there actually is a canonicalization function --- if not, it's
essentially the same logic as before.
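For illustration with the built-in int4range (whose canonical form uses
'[)' boundaries): two ranges are adjacent exactly when the range between
them normalizes to empty:
SELECT int4range(1,3) -|- int4range(3,5);  -- true; [3,3) is empty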
Since range types can be created by non-superusers, we need to consider
their permissions. Ideally we'd check this when the type is used, not
when it's created, but that seems like much more trouble than it's worth.
The existing restriction that the support functions be immutable already
prevents most cases where an unauthorized call to a function might be
thought a security issue, and the fact that the user has no access to
the results of the system's calls to subtype_diff closes off the other
plausible reason for concern. So this check is basically pro-forma,
but let's make it anyway.
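Concretely, using the example from the documentation: to create this type,
the user now needs permission (presumably EXECUTE) on the named
subtype_diff function, float8mi here:
CREATE TYPE floatrange AS RANGE (subtype = float8, subtype_diff = float8mi);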
It's not clear that a per-datatype typanalyze function would be any more
useful than a generic typanalyze for ranges. What *is* clear is that
letting unprivileged users select typanalyze functions is a crash risk or
worse. So remove the option from CREATE TYPE AS RANGE, and instead put in
a generic typanalyze function for ranges. The generic function does
nothing as yet, but hopefully we'll improve that before 9.2 release.
Per discussion, the zero-argument forms aren't really worth the catalog
space (just write 'empty' instead). The one-argument forms have some use,
but they also have a serious problem with looking too much like functional
cast notation; to the point where in many real use-cases, the parser would
misinterpret what was wanted.
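An example of the hazard (a sketch): the one-argument constructor call
SELECT int4range(42);
is spelled exactly like functional cast notation for CAST(42 AS int4range),
so the parser can easily take it for a cast rather than the constructor.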
Committing this as a separate patch, with the thought that we might want
to revert part or all of it if we can think of some way around the cast
ambiguity.