After finding an EXISTS or ANY sub-select that can be converted to a
semi-join or anti-join, we should recurse into the body of the sub-select.
This allows cases such as EXISTS-within-EXISTS to be optimized properly.
The original coding would leave the lower sub-select as a SubLink, which
is no better and often worse than what we can do with a join. Per example
from Wayne Conrad.
Back-patch to 8.4. There is a related issue in older versions' handling
of pull_up_IN_clauses, but they're lame enough anyway about the whole area
that it seems not worth the extra work to try to fix.
The previous coding would allow requests up to half of maxBlockSize to be
treated as "chunks", but when that actually did happen, we'd waste nearly
half of the space in the malloc block containing the chunk, if no smaller
requests came along to fill it. Avoid this scenario by limiting the
maximum size of a chunk to 1/8th maxBlockSize, so that we can waste no more
than 1/8th of the allocated space. This will not change the behavior at
all for the default context size parameters (with large maxBlockSize),
but it will change the behavior when using ALLOCSET_SMALL_MAXSIZE.
In particular, there's no longer a need for spell.c to be overly concerned
about the request size parameters it uses, so remove a rather unhelpful
comment about that.
Merlin Moncure, per an idea of Tom Lane's
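A minimal sketch of the sizing rule (illustrative names and byte counts, not
the aset.c code): the chunk limit is halved until it is no more than 1/8th of
maxBlockSize.

    #include <stdio.h>
    #include <stddef.h>

    #define ALLOC_CHUNK_FRACTION 8      /* waste at most 1/8th of a block */

    /* Halve the requested chunk limit until chunks of that size can waste no
     * more than maxBlockSize / 8 of the malloc block that contains them. */
    static size_t
    compute_chunk_limit(size_t requestedLimit, size_t maxBlockSize)
    {
        size_t      limit = requestedLimit;

        while (limit > maxBlockSize / ALLOC_CHUNK_FRACTION)
            limit >>= 1;
        return limit;
    }

    int
    main(void)
    {
        /* Large, default-style maxBlockSize: the limit is unaffected. */
        printf("%zu\n", compute_chunk_limit(8 * 1024, 8 * 1024 * 1024));
        /* Small maxBlockSize (ALLOCSET_SMALL_MAXSIZE-style): limit drops. */
        printf("%zu\n", compute_chunk_limit(8 * 1024, 8 * 1024));
        return 0;
    }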
install-sh can install multiple files at once, so for loops are not
necessary. This was already changed for the rest of the code some
time ago, but pgxs.mk was apparently forgotten, and the obsolete
coding style has now been copied to the PLs as well.
This also fixes the problem that the for loops in question did not
catch errors.
We must lock out autovacuuming of the old toast table before computing the
OldestXmin horizon we will use. Otherwise, autovacuum could start on the
toast table later, compute a later OldestXmin horizon, and remove, as DEAD,
toast tuples that we still need (because we think their parent tuples are
only RECENTLY_DEAD). Per further thought about bug #5998.
VACUUM was willing to remove a committed-dead tuple immediately if it was
deleted by the same transaction that inserted it. The idea is that such a
tuple could never have been visible to any other transaction, so we don't
need to keep it around to satisfy MVCC snapshots. However, there was
already an exception for tuples that are part of an update chain, and this
exception created a problem: we might remove TOAST tuples (which are never
part of an update chain) while their parent tuple stayed around (if it was
part of an update chain). This didn't pose a problem for most things,
since the parent tuple is indeed dead: no snapshot will ever consider it
visible. But MVCC-safe CLUSTER had a problem, since it will try to copy
RECENTLY_DEAD tuples to the new table. It then has to copy their TOAST
data too, and would fail if VACUUM had already removed the toast tuples.
Easiest fix is to get rid of the special case for xmin == xmax. This may
delay reclaiming dead space for a little bit in some cases, but it's by far
the most reliable way to fix the issue.
Per bug #5998 from Mark Reid. Back-patch to 8.3, which is the oldest
version with MVCC-safe CLUSTER.
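A schematic sketch of the before-and-after rule, using invented types rather
than the real tuple-visibility code:

    typedef enum { DEMO_RECENTLY_DEAD, DEMO_DEAD } DemoDeadness;

    typedef struct
    {
        unsigned    xmin;           /* inserting transaction ID */
        unsigned    xmax;           /* deleting transaction ID */
    } DemoTuple;

    /*
     * Old behavior (sketch): a committed-dead tuple with xmin == xmax that was
     * not part of an update chain was reported DEAD immediately, since no other
     * transaction could ever have seen it.  TOAST tuples are never chain
     * members, so they could be reclaimed while their RECENTLY_DEAD parent
     * survived, which broke MVCC-safe CLUSTER.
     *
     * New behavior: no special case; the tuple stays RECENTLY_DEAD until its
     * xmax falls behind the OldestXmin horizon, like any other deleted tuple.
     */
    DemoDeadness
    classify_committed_dead(const DemoTuple *tup, unsigned oldest_xmin)
    {
        if (tup->xmax >= oldest_xmin)
            return DEMO_RECENTLY_DEAD;      /* some snapshot may still need it */
        return DEMO_DEAD;                   /* safe to reclaim */
    }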
Convert it to use successive shifts right instead of increasing a divisor.
This is probably a tad more efficient than the original coding, and it's
nicer-looking than the previous patch because we don't need a special case
to avoid overflow in the last branch. But the real reason to do it is to
avoid a Solaris compiler bug, as per results from buildfarm member moa.
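An illustrative sketch of the shift-based formatting (not the actual dbsize.c
code); because the running value is only ever shifted right, no large rounding
bias is ever added, so there is nothing left to overflow:

    #include <stdint.h>
    #include <stdio.h>

    /* Format a byte count with successive right shifts: step down 10 bits per
     * unit, keeping one spare low-order bit so the final value can be rounded
     * to nearest rather than truncated. */
    static void
    size_pretty(int64_t size, char *buf, size_t buflen)
    {
        const char     *units[] = {"bytes", "kB", "MB", "GB", "TB"};
        const int64_t   limit = 10 * 1024;  /* switch units above ~10x a unit */
        int             unit = 0;

        if (size >= limit)
        {
            size >>= 9;                     /* half-kB units; low bit is for rounding */
            unit++;
            while (unit < 4 && size >= limit * 2)
            {
                size >>= 10;
                unit++;
            }
            size = (size + 1) / 2;          /* round using the spare bit */
        }
        snprintf(buf, buflen, "%lld %s", (long long) size, units[unit]);
    }

    int
    main(void)
    {
        char        buf[64];

        size_pretty(INT64_C(123456789), buf, sizeof(buf));
        puts(buf);                          /* prints "118 MB" */
        return 0;
    }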
Per recent -hackers discussion. The formats in question are %G and %V,
which cause warnings on MinGW at least. We assume the ecpg application
knows what it's doing if it passes these formats to the library.
The style is set to "printf" for backwards compatibility everywhere except
on Windows, where it is set to "gnu_printf", which eliminates hundreds of
false error messages from modern versions of gcc arising from %m and %ll{d,u}
formats.
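A self-contained, gcc-specific illustration of the mechanism; DEMO_PRINTF_ARCHETYPE
and demo_log are made-up names, not what the tree actually uses. Checking the
format string against the "gnu_printf" archetype makes gcc accept %m and %lld
even on MinGW, where the plain "printf" archetype describes the native Windows
printf that knows neither.

    #include <stdarg.h>
    #include <stdio.h>

    /* Pick the archetype gcc should check printf-style calls against. */
    #ifdef __MINGW32__
    #define DEMO_PRINTF_ARCHETYPE gnu_printf
    #else
    #define DEMO_PRINTF_ARCHETYPE printf
    #endif

    static void demo_log(const char *fmt, ...)
        __attribute__((format(DEMO_PRINTF_ARCHETYPE, 1, 2)));

    static void
    demo_log(const char *fmt, ...)
    {
        va_list     args;

        va_start(args, fmt);
        vfprintf(stderr, fmt, args);
        va_end(args);
    }

    int
    main(void)
    {
        /* Checked against gnu_printf on MinGW, so %lld draws no warning. */
        demo_log("copied %lld rows\n", (long long) 42);
        return 0;
    }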
Also remove the material about this being an alpha release.
The notes still need a lot of work, but they're more or less presentable
as a beta version now.
Instead of dumping them as CREATE TABLE ... OF, dump them as normal
tables with the usual special processing for dropped columns, and then
attach them to the type afterward, using ALTER TABLE ... OF. This is
analogous to the existing handling of inherited tables.
This code was accidentally part of the patch; it's only
needed for the code that's intended for 9.2. Not needing the timeline
also removes the need to call IDENTIFY_SYSTEM.
Noted by Peter E.
There was already one recommendation in the documentation about writing
C functions to ensure padding bytes are zeroes, but make it stronger.
Also fix an example that was still using direct assignment to a varlena
length word, which no longer works since the varvarlena changes.
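A sketch of the corrected idiom (the function name and surrounding details are
illustrative, not the documentation's exact example): the length word of a
returned varlena value is set with SET_VARSIZE() instead of being assigned to
directly.

    #include "postgres.h"
    #include "fmgr.h"

    PG_MODULE_MAGIC;

    PG_FUNCTION_INFO_V1(demo_copytext);

    Datum
    demo_copytext(PG_FUNCTION_ARGS)
    {
        text       *t = PG_GETARG_TEXT_P(0);
        int32       data_len = VARSIZE(t) - VARHDRSZ;
        text       *result = (text *) palloc(data_len + VARHDRSZ);

        /* Write the length word with SET_VARSIZE(); direct assignment to the
         * length field no longer works since the varvarlena changes. */
        SET_VARSIZE(result, data_len + VARHDRSZ);
        memcpy(VARDATA(result), VARDATA(t), data_len);

        PG_RETURN_TEXT_P(result);
    }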
Per recent discussion, it's important for all computed datums (not only the
results of input functions) to not contain any ill-defined (uninitialized)
bits. Failing to ensure that can result in equal() reporting that
semantically indistinguishable Consts are not equal, which in turn leads to
bizarre and undesirable planner behavior, such as in a recent example from
David Johnston. We might eventually try to fix this in a general manner by
allowing datatypes to define identity-testing functions, but for now the
path of least resistance is to expect datatypes to force all unused bits
into consistent states.
Per some testing by Noah Misch, array and path functions seem to be the
only ones presenting risks at the moment, so I looked through all the
functions in adt/array*.c and geo_ops.c and fixed them as necessary. In
the array functions, the easiest/safest fix is to allocate result arrays
with palloc0 instead of palloc. Possibly in future someone will want to
look into whether we can just zero the padding bytes, but that looks too
complex for a back-patchable fix. In the path functions, we already had a
precedent in path_in for just zeroing the one known pad field, so duplicate
that code as needed.
Back-patch to all supported branches.
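A sketch of the hazard and the palloc0-style fix for a hypothetical
pass-by-reference type (DemoPair and make_demo_pair are invented for
illustration, not functions in the tree):

    #include "postgres.h"

    /* Hypothetical fixed-size, pass-by-reference type.  On typical platforms
     * the compiler inserts 4 pad bytes between "flag" and "value" to align the
     * double; those bytes become part of the stored datum. */
    typedef struct DemoPair
    {
        int32       flag;
        /* 4 bytes of alignment padding here */
        float8      value;
    } DemoPair;

    DemoPair *
    make_demo_pair(int32 flag, float8 value)
    {
        /* palloc0, not palloc: if the pad bytes were left uninitialized, two
         * semantically equal DemoPair datums could differ byte-for-byte, and
         * equal() would wrongly report the corresponding Consts as unequal. */
        DemoPair   *result = (DemoPair *) palloc0(sizeof(DemoPair));

        result->flag = flag;
        result->value = value;
        return result;
    }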
In a couple of places we said "not supported on this platform" for cases
that aren't really platform-specific, but could depend on configuration
options such as --with-openssl. Use "not supported by this build" instead,
as that doesn't convey the impression that you can't fix it without moving
to another OS; that's also more consistent with the wording used for an
identical error case in guc.c.
No back-patch, as the clarity gain is small enough to not be worth
burdening translators with back-branch changes.
Most commenters agreed that this is more friendly than silently failing
to match the line during actual connection attempts. Also, this will
prevent corner cases that might arise when trying to handle such a line
when the SSL code isn't turned on. An example is that specifying
clientcert=1 in such a line would formerly result in a completely
misleading complaint that root.crt wasn't present, as seen in a recent
report from Marc-Andre Laverdiere. While we could have instead fixed
that specific behavior, it seems likely that we'd have a continuing stream
of such bizarre behaviors if we keep on allowing hostssl lines when SSL is
disabled.
Back-patch to 8.4, where clientcert was introduced. Earlier versions don't
have this specific issue, and the code is different enough to make this
patch not applicable without more work than it seems worth.
the connection; also restructure the libpq connection code.
This patch also removes the unused variable postmasterPID and fixes a
libpq structure leak that was in the testing loop.
Per discussion, removing the hint seems better than correcting it because
the adjacent analogous cases in RenameRelation don't have any hints, and
nobody seems to have missed 'em.
Shigeru Hanada
Per bug #5988, reported by Marko Tiikkaja, and further analyzed by Tom
Lane, the previous coding was broken in several respects: even if the
target table already existed, a subsequent CREATE TABLE IF NOT EXISTS
might try to add additional constraints or sequences-for-serial
specified in the new CREATE TABLE statement.
In passing, this also fixes a minor information leak: it's no longer
possible to figure out whether a schema to which you don't have CREATE
access contains a sequence named like "x_y_seq" by attempting to create a
table in that schema called "x" with a serial column called "y".
Some more refactoring of this code in the future might be warranted,
but that will need to wait for a later major release.
Instead, foreign tables are treated just like views: permissions can
be granted using GRANT privilege ON [TABLE] foreign_table_name TO role,
and revoked similarly. GRANT/REVOKE .. FOREIGN TABLE is no longer
supported, just as we don't support GRANT/REVOKE .. VIEW. The set of
accepted permissions for foreign tables is now identical to the set for
regular tables and views.
Per report from Thom Brown, and subsequent discussion.
The expression that tried to round the value to the nearest TB could
overflow, leading to bogus output as reported in bug #5993 from Nicola
Cossu. This isn't likely to ever happen in the intended usage of the
function (if it could, we'd be needing to use a wider datatype instead);
but it's not hard to give the expected output, so let's do so.
Added a new option --extra-install to pg_regress to arrange for installing
the respective contrib directory into the temporary installation.
This is not yet supported for Windows MSVC builds.
Updated the .gitignore files for contrib modules to ignore the
leftovers of a temp-install check run.
Changed the exit status of "make check" in a pgxs build (which still
does nothing) to 0 from 1.
Added "make check" in contrib to top-level "make check-world".
This quiets compiler warnings about redefined macros and unused Perl __unused
variables. The redefinition of snprintf and vsnprintf is something we want to
avoid anyway, if we've gone to the bother of setting up the macros to point to
our implementation.
This option turns off autovacuum, prevents non-super-user connections,
and enables OID setting hooks in the backend. The code continues to use
the old autovacuum disable settings for servers with earlier catalog
versions.
This includes a catalog version bump to identify servers that support
the -b option.