This doesn't work as expected because the isolationtester program requires
libpq to already be installed. While it works once libpq has been
installed, having to do "make install" first defeats most of the point of
a check against a temp installation. There are also weird corner cases if
the dynamic linker picks up an old libpq.so from the system library
directories.
Remove the target (or more precisely, make it print a helpful message) so
people don't expect the case to work.
Remove random system #includes in favor of using postgres_fe.h. (The
alternative to that is letting this module grow its own configuration
testing ability...)
Also fix the "make clean" target to actually clean things up.
Per local testing.
Some compilers, such as Clang and ICC, emulate GCC, so reporting a version
string of the form "GCC $version" for them can be quite misleading.
Also, a great while ago, the version output from gcc --version started
including the string "gcc", so it is redundant to repeat that. In
order to support ancient GCC versions, we now prefix the result with
"GCC " only if the version output does not start with a letter.
The SSI patch inserted a call of RegisterPredicateLockingXid into
GetNewTransactionId, which was a bad idea on a couple of grounds. First,
it's not necessary to hold XidGenLock while manipulating that shared
memory, and doing so is bad because XidGenLock is a high-contention lock
that should be held for as short a time as possible. (Not to mention that
it adds an entirely unnecessary deadlock hazard, since we must take
SerializableXactHashLock as well.) Second, the specific place where it was
put was between extending CLOG and advancing nextXid, which could result in
unpleasant behavior in case of a failure there. Pull the call out to
AssignTransactionId, which is much safer and arguably better from a
modularity standpoint too.
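A minimal sketch of the resulting call ordering, as a hypothetical helper
rather than the actual xact.c code (subtransaction details ignored):

    #include "postgres.h"

    #include "access/transam.h"
    #include "storage/predicate.h"

    /* Get the new XID first -- GetNewTransactionId() takes and releases
     * XidGenLock internally -- and only register it with the SSI machinery
     * afterward, so the registration never runs under XidGenLock. */
    static TransactionId
    assign_and_register_xid(bool isSubXact)
    {
        TransactionId xid = GetNewTransactionId(isSubXact);

        RegisterPredicateLockingXid(xid);
        return xid;
    }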
There is more work to do to clean up the failure-before-advancing-nextXid
issue, but that is a separate change that will need to be back-patched.
So for the moment I just want to make GetNewTransactionId look the same as
it did in prior versions.
These were labeled with precedences just to avoid attaching explicit
precedences to the productions in which they were the last terminal symbol.
Since a terminal symbol precedence marking can affect many other things
too, it seems like better practice to attach precedence labels to the
productions, and not mark the terminal symbols.
Ideally we'd also remove the precedence attached to NULL_P, but it turns
out that we are actually depending on that having a precedence higher than
POSTFIXOP, else we get a shift/reduce conflict for postfix operators in
b_expr. (Which more or less proves my point about these markings having a
high risk of unexpected consequences.) For the moment, move NULL_P into
the set of keywords grouped with IDENT, so that at least it will act
similarly to non-keywords; and document the interaction.
For consistency with other tools, put the options before further usage
information.
In pg_standby, remove the supposedly deprecated -l option from the
given example invocation.
Per gripe from Grzegorz Szpetkowski.
Also, change the subsection heading from "Lexical Precedence" (which is
a contradiction in terms) to "Operator Precedence".
After finding an EXISTS or ANY sub-select that can be converted to a
semi-join or anti-join, we should recurse into the body of the sub-select.
This allows cases such as EXISTS-within-EXISTS to be optimized properly.
The original coding would leave the lower sub-select as a SubLink, which
is no better and often worse than what we can do with a join. Per example
from Wayne Conrad.
Back-patch to 8.4. There is a related issue in older versions' handling
of pull_up_IN_clauses, but they're lame enough anyway about the whole area
that it seems not worth the extra work to try to fix.
The previous coding would allow requests up to half of maxBlockSize to be
treated as "chunks", but when that actually did happen, we'd waste nearly
half of the space in the malloc block containing the chunk, if no smaller
requests came along to fill it. Avoid this scenario by limiting the
maximum size of a chunk to 1/8th maxBlockSize, so that we can waste no more
than 1/8th of the allocated space. This will not change the behavior at
all for the default context size parameters (with large maxBlockSize),
but it will change the behavior when using ALLOCSET_SMALL_MAXSIZE.
In particular, there's no longer a need for spell.c to be overly concerned
about the request size parameters it uses, so remove a rather unhelpful
comment about that.
Merlin Moncure, per an idea of Tom Lane's
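A hedged, standalone sketch of the decision this changes (illustrative
names; the real logic is in the memory-context allocator):

    #include <stdbool.h>
    #include <stddef.h>

    /* Requests above 1/8th of max_block_size get a dedicated malloc block,
     * so a single oversized chunk can strand at most 1/8th of a shared
     * block, instead of nearly half under the previous 1/2 cutoff. */
    static bool
    request_fits_as_chunk(size_t request, size_t max_block_size)
    {
        return request <= max_block_size / 8;
    }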
install-sh can install multiple files at once, so for loops are not
necessary. This was already changed for the rest of the code some
time ago, but pgxs.mk was apparently forgotten, and the obsolete coding
style had meanwhile been copied to the PLs as well.
This also fixes the problem that the for loops in question did not
catch errors.
We must lock out autovacuuming of the old toast table before computing the
OldestXmin horizon we will use. Otherwise, autovacuum could start on the
toast table later, compute a later OldestXmin horizon, and remove, as DEAD,
toast tuples that we still need (because we think their parent tuples are
only RECENTLY_DEAD). Per further thought about bug #5998.
VACUUM was willing to remove a committed-dead tuple immediately if it was
deleted by the same transaction that inserted it. The idea is that such a
tuple could never have been visible to any other transaction, so we don't
need to keep it around to satisfy MVCC snapshots. However, there was
already an exception for tuples that are part of an update chain, and this
exception created a problem: we might remove TOAST tuples (which are never
part of an update chain) while their parent tuple stayed around (if it was
part of an update chain). This didn't pose a problem for most things,
since the parent tuple is indeed dead: no snapshot will ever consider it
visible. But MVCC-safe CLUSTER had a problem, since it will try to copy
RECENTLY_DEAD tuples to the new table. It then has to copy their TOAST
data too, and would fail if VACUUM had already removed the toast tuples.
Easiest fix is to get rid of the special case for xmin == xmax. This may
delay reclaiming dead space for a little bit in some cases, but it's by far
the most reliable way to fix the issue.
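A simplified, self-contained sketch of the post-fix policy (a hypothetical
helper, not the real VACUUM visibility code, and using a plain comparison
instead of wraparound-aware XID arithmetic):

    #include <stdbool.h>

    typedef unsigned int TransactionId;

    /* With the xmin == xmax shortcut gone, a committed-dead tuple becomes
     * removable only once its deleting XID precedes OldestXmin; until then
     * it is merely RECENTLY_DEAD, and its TOAST data must survive too. */
    static bool
    committed_dead_is_removable(TransactionId xmin, TransactionId xmax,
                                TransactionId OldestXmin)
    {
        (void) xmin;            /* no longer consulted */
        return xmax < OldestXmin;
    }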
Per bug #5998 from Mark Reid. Back-patch to 8.3, which is the oldest
version with MVCC-safe CLUSTER.
Convert it to use successive shifts right instead of increasing a divisor.
This is probably a tad more efficient than the original coding, and it's
nicer-looking than the previous patch because we don't need a special case
to avoid overflow in the last branch. But the real reason to do it is to
avoid a Solaris compiler bug, as per results from buildfarm member moa.
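For illustration, a standalone sketch of the shift-based form (hypothetical
names and signature, not the actual code being converted):

    #include <stddef.h>

    /* Find the largest power-of-two chunk limit no bigger than
     * max_block_size / fraction, shifting the candidate right instead of
     * comparing against a divisor that doubles on each iteration. */
    static size_t
    compute_chunk_limit(size_t max_block_size, size_t initial_limit,
                        size_t fraction)
    {
        size_t      limit = initial_limit;

        while (limit > max_block_size / fraction)
            limit >>= 1;        /* successive shifts right */

        return limit;
    }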
Per recent -hackers discussion. The formats in question are %G and %V,
and cause warnings on MinGW at least. We assume the ecpg application
knows what it's doing if it passes these formats to the library.
The style is set to "printf" for backwards compatibility everywhere except
on Windows, where it is set to "gnu_printf", which eliminates hundreds of
false error messages from modern versions of gcc arising from %m and %ll{d,u}
formats.
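A minimal sketch of the mechanism, with made-up macro and function names
standing in for however the real headers select the archetype:

    /* On Windows builds, have gcc check format strings against glibc rules
     * ("gnu_printf", available in gcc >= 4.4) rather than the Microsoft
     * runtime's, so %m and %ll{d,u} stop drawing bogus warnings. */
    #ifdef WIN32
    #define MY_PRINTF_ARCHETYPE gnu_printf
    #else
    #define MY_PRINTF_ARCHETYPE printf
    #endif

    extern void my_log(const char *fmt,...)
        __attribute__((format(MY_PRINTF_ARCHETYPE, 1, 2)));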
Also remove the material about this being an alpha release.
The notes still need a lot of work, but they're more or less presentable
as a beta version now.
Instead of dumping them as CREATE TABLE ... OF, dump them as normal
tables with the usual special processing for dropped columns, and then
attach them to the type afterward, using ALTER TABLE ... OF. This is
analogous to the existing handling of inherited tables.
This code was accidentally part of the patch; it's only needed for the
code destined for 9.2. Not needing the timeline also removes the need to
call IDENTIFY_SYSTEM.
Noted by Peter E.
There was already one recommendation in the documentation about writing
C functions to ensure padding bytes are zeroes, but make it stronger.
Also fix an example that was still using direct assignment to a varlena
length word, which no longer works since the varvarlena changes.
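For reference, a hedged sketch of the corrected pattern (a made-up
function, not the manual's exact example):

    #include "postgres.h"

    /* Build a text value by filling in the length word with SET_VARSIZE();
     * assigning to the header field directly no longer works since the
     * variable-length header format changed. */
    static text *
    make_text_value(const char *s, int len)
    {
        text       *result = (text *) palloc(VARHDRSZ + len);

        SET_VARSIZE(result, VARHDRSZ + len);
        memcpy(VARDATA(result), s, len);
        return result;
    }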
Per recent discussion, it's important for all computed datums (not only the
results of input functions) to not contain any ill-defined (uninitialized)
bits. Failing to ensure that can result in equal() reporting that
semantically indistinguishable Consts are not equal, which in turn leads to
bizarre and undesirable planner behavior, such as in a recent example from
David Johnston. We might eventually try to fix this in a general manner by
allowing datatypes to define identity-testing functions, but for now the
path of least resistance is to expect datatypes to force all unused bits
into consistent states.
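The hazard can be shown outside the backend with a small, self-contained
program, with malloc/calloc standing in for palloc/palloc0:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Two values with identical fields can still compare unequal bytewise
     * if their padding differs -- exactly what a bitwise comparison such
     * as the one behind equal() trips over. */
    typedef struct
    {
        char        flag;       /* followed by padding on most platforms */
        double      x;
    } PaddedValue;

    int
    main(void)
    {
        PaddedValue *a = malloc(sizeof(PaddedValue));
        PaddedValue *b = calloc(1, sizeof(PaddedValue));   /* zeroed, like palloc0 */

        memset(a, 0xAA, sizeof(PaddedValue));   /* simulate reused memory */
        a->flag = 'x';
        a->x = 1.0;
        b->flag = 'x';
        b->x = 1.0;

        printf("%s\n", memcmp(a, b, sizeof(PaddedValue)) == 0 ?
               "bitwise equal" : "bitwise different");  /* typically "different" */
        free(a);
        free(b);
        return 0;
    }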
Per some testing by Noah Misch, array and path functions seem to be the
only ones presenting risks at the moment, so I looked through all the
functions in adt/array*.c and geo_ops.c and fixed them as necessary. In
the array functions, the easiest/safest fix is to allocate result arrays
with palloc0 instead of palloc. Possibly in future someone will want to
look into whether we can just zero the padding bytes, but that looks too
complex for a back-patchable fix. In the path functions, we already had a
precedent in path_in for just zeroing the one known pad field, so duplicate
that code as needed.
Back-patch to all supported branches.