The style is set to "printf" for backwards compatibility everywhere except
on Windows, where it is set to "gnu_printf", which eliminates hundreds of
false error messages from modern versions of gcc arising from %m and %ll{d,u}
formats.
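For illustration only, a hedged sketch of what the archetype buys us (the
declaration below is hypothetical, not an actual declaration from the tree):

    /*
     * With the "printf" archetype, MinGW's gcc checks format strings
     * against the Microsoft C runtime, which knows nothing of %m or
     * %lld/%llu; asking for "gnu_printf" makes it check against the GNU
     * semantics the code actually relies on.
     */
    extern void report_sketch(const char *fmt, ...)
                __attribute__((format(gnu_printf, 1, 2)));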
Also remove the material about this being an alpha release.
The notes still need a lot of work, but they're more or less presentable
as a beta version now.
Instead of dumping them as CREATE TABLE ... OF, dump them as normal
tables with the usual special processing for dropped columns, and then
attach them to the type afterward, using ALTER TABLE ... OF. This is
analogous to the existing handling of inherited tables.
This code was accidentally included in the patch; it's only
needed by the code intended for 9.2. Not needing the timeline
also removes the need to call IDENTIFY_SYSTEM.
Noted by Peter E.
There was already one recommendation in the documentation about writing
C functions to ensure padding bytes are zeroes, but make it stronger.
Also fix an example that was still using direct assignment to a varlena
length word, which no longer works since the varvarlena changes.
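As a hedged sketch of the recommended pattern (datalen and databuf are
illustrative names): allocate the result zeroed so any padding bytes are
well-defined, and set the length with SET_VARSIZE() rather than storing
into the length word directly.

    struct varlena *result;
    Size        len = VARHDRSZ + datalen;

    result = (struct varlena *) palloc0(len);   /* padding starts out zeroed */
    SET_VARSIZE(result, len);                   /* do not assign to the length word */
    memcpy(VARDATA(result), databuf, datalen);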
Per recent discussion, it's important for all computed datums (not only the
results of input functions) to not contain any ill-defined (uninitialized)
bits. Failing to ensure that can result in equal() reporting that
semantically indistinguishable Consts are not equal, which in turn leads to
bizarre and undesirable planner behavior, such as in a recent example from
David Johnston. We might eventually try to fix this in a general manner by
allowing datatypes to define identity-testing functions, but for now the
path of least resistance is to expect datatypes to force all unused bits
into consistent states.
Per some testing by Noah Misch, array and path functions seem to be the
only ones presenting risks at the moment, so I looked through all the
functions in adt/array*.c and geo_ops.c and fixed them as necessary. In
the array functions, the easiest/safest fix is to allocate result arrays
with palloc0 instead of palloc. Possibly in future someone will want to
look into whether we can just zero the padding bytes, but that looks too
complex for a back-patchable fix. In the path functions, we already had a
precedent in path_in for just zeroing the one known pad field, so duplicate
that code as needed.
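Hedged sketches of the two flavors of fix (not the exact committed code;
nbytes, npts, and is_closed are illustrative names):

    /* array functions: allocate the result zeroed, so alignment padding
     * between elements can't contain garbage */
    ArrayType  *result = (ArrayType *) palloc0(nbytes);

    /* path functions: follow path_in's precedent and explicitly clear
     * the one known pad field in the PATH struct */
    path->npts = npts;
    path->closed = is_closed;
    path->dummy = 0;                /* zero the pad field */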
Back-patch to all supported branches.
In a couple of places we said "not supported on this platform" for cases
that aren't really platform-specific, but could depend on configuration
options such as --with-openssl. Use "not supported by this build" instead,
as that doesn't convey the impression that you can't fix it without moving
to another OS; that's also more consistent with the wording used for an
identical error case in guc.c.
No back-patch, as the clarity gain is small enough to not be worth
burdening translators with back-branch changes.
Most commenters agreed that this is more friendly than silently failing
to match the line during actual connection attempts. Also, this will
prevent corner cases that might arise when trying to handle such a line
when the SSL code isn't turned on. An example is that specifying
clientcert=1 in such a line would formerly result in a completely
misleading complaint that root.crt wasn't present, as seen in a recent
report from Marc-Andre Laverdiere. While we could have instead fixed
that specific behavior, it seems likely that we'd have a continuing stream
of such bizarre behaviors if we keep on allowing hostssl lines when SSL is
disabled.
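A hedged sketch of the load-time check (identifiers and message text are
illustrative, not the committed hba.c code):

    if (strcmp(token, "hostssl") == 0)
    {
    #ifndef USE_SSL
        ereport(LOG,
                (errcode(ERRCODE_CONFIG_FILE_ERROR),
                 errmsg("hostssl requires SSL support, which was not compiled into this server")));
        return false;               /* reject the whole pg_hba.conf line */
    #endif
    }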
Back-patch to 8.4, where clientcert was introduced. Earlier versions don't
have this specific issue, and the code is enough different to make this
patch not applicable without more work than it seems worth.
the connection; also restructure the libpq connection code.
This patch also removes the unused variable postmasterPID and fixes a
libpq structure leak that was in the testing loop.
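As a hedged illustration of the kind of leak in question (this is not the
actual code; the connection string is a placeholder): every PGconn returned
by PQconnectdb() must be released with PQfinish(), even when the connection
attempt fails.

    const char *connstr = "dbname=postgres";    /* placeholder */

    for (;;)
    {
        PGconn     *conn = PQconnectdb(connstr);
        bool        ok = (PQstatus(conn) == CONNECTION_OK);

        PQfinish(conn);             /* previously skipped on the failure path */
        if (ok)
            break;                  /* the server accepted a connection */
        pg_usleep(1000000L);        /* wait a second and retry */
    }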
Per discussion, removing the hint seems better than correcting it because
the adjacent analogous cases in RenameRelation don't have any hints, and
nobody seems to have missed 'em.
Shigeru Hanada
Per bug #5988, reported by Marko Tiikkaja, and further analyzed by Tom
Lane, the previous coding was broken in several respects: even if the
target table already existed, a subsequent CREATE TABLE IF NOT EXISTS
might try to add additional constraints or sequences-for-serial
specified in the new CREATE TABLE statement.
In passing, this also fixes a minor information leak: it's no longer
possible to figure out whether a schema to which you don't have CREATE
access contains a sequence named like "x_y_seq" by attempting to create a
table in that schema called "x" with a serial column called "y".
Some more refactoring of this code in the future might be warranted,
but that will need to wait for a later major release.
Instead, foreign tables are treated just like views: permissions can
be granted using GRANT privilege ON [TABLE] foreign_table_name TO role,
and revoked similarly. GRANT/REVOKE .. FOREIGN TABLE is no longer
supported, just as we don't support GRANT/REVOKE .. VIEW. The set of
accepted permissions for foreign tables is now identical to the set for
regular tables and views.
Per report from Thom Brown, and subsequent discussion.
The expression that tried to round the value to the nearest TB could
overflow, leading to bogus output as reported in bug #5993 from Nicola
Cossu. This isn't likely to ever happen in the intended usage of the
function (if it could, we'd be needing to use a wider datatype instead);
but it's not hard to give the expected output, so let's do so.
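A hedged sketch of the overflow-safe rounding (variable names are
illustrative, not the committed code): computing (size + unit/2) / unit can
overflow int64 near the top of the range, so divide by half the unit first
and then round.

    int64       unit = INT64CONST(1) << 40;    /* bytes per terabyte */

    size = size / (unit / 2);                  /* half-terabyte granularity */
    size = (size + 1) / 2;                     /* round to the nearest terabyte */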
Added a new option --extra-install to pg_regress to arrange for installing
the respective contrib directory into the temporary installation.
This is currently not yet supported for Windows MSVC builds.
Updated the .gitignore files for contrib modules to ignore the
leftovers of a temp-install check run.
Changed the exit status of "make check" in a pgxs build (which still
does nothing) to 0 from 1.
Added "make check" in contrib to top-level "make check-world".
This quiets compiler warnings about redefined macros and unused Perl
variables. The redefinition of snprintf and vsnprintf is something we
want to avoid anyway, if we've gone to the bother of setting up the
macros to point to our implementation.
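A hedged sketch of the guard (not the committed plperl.h text): if
configure decided to point snprintf/vsnprintf at our replacement
implementation, make sure the Perl headers haven't quietly redefined them.

    #include <EXTERN.h>
    #include <perl.h>
    #include <XSUB.h>               /* Perl headers may #define snprintf */

    #ifdef USE_REPL_SNPRINTF
    #undef snprintf
    #undef vsnprintf
    #define snprintf    pg_snprintf
    #define vsnprintf   pg_vsnprintf
    #endif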
This option turns off autovacuum, prevents non-super-user connections,
and enables OID setting hooks in the backend. The code continues to use
the old autovacuum disable settings for servers with earlier catalog
versions.
This includes a catalog version bump to identify servers that support
the -b option.
The MSVC compiler complains if a macro is called with fewer arguments
than its definition provides for. flex generates a macro with one
argument for yywrap, but only supplies the argument for reentrant
scanners, so we remove the useless argument in the non-reentrant
case to silence the warning.
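A hedged sketch of the workaround (not necessarily the committed text):
flex emits "#define yywrap(n) 1", but a non-reentrant scanner invokes plain
yywrap(), so redefining the macro without a parameter silences MSVC.

    #undef yywrap
    #define yywrap() 1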
Strip leading and trailing whitespace and replace interior whitespace
by a single space. This avoids problems with the index generator
producing duplicate index entries for terms that differ only in
whitespace.
Commit dca30da343 actually fixed all the
indexterm elements that would cause this problem at the moment, but in
case it sneaks in again, we're set.
The previous coding failed to account properly for the costs of evaluating
the input expressions of aggregates and window functions, as seen in a
recent gripe from Claudio Freire. (I said at the time that it wasn't
counting these costs at all; but on closer inspection, it was effectively
charging these costs once per output tuple. That is completely wrong for
aggregates, and not exactly right for window functions either.)
There was also a hard-wired assumption that aggregates and window functions
had procost 1.0, which is now fixed to respect the actual cataloged costs.
The costing of WindowAgg is still pretty bogus, since it doesn't try to
estimate the effects of spilling data to disk, but that seems like a
separate issue.
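A hypothetical sketch of the intended accounting (the variable names are
illustrative, not the actual cost_agg()/cost_windowagg() code): aggregate
input expressions are evaluated once per input row, so their cost scales
with input_tuples, and the per-call cost of the aggregate itself comes from
its cataloged procost rather than an assumed 1.0.

    run_cost += (agg_transfn_cost + input_expr_cost) * input_tuples;
    run_cost += agg_finalfn_cost * numGroups;   /* final function once per group */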
Teach the program and script to deal with OID-array referencing columns,
which we now have several of. Also, modify the recommended usage process
to specify that the program should be run against the regression database
rather than template1. This lets it find numerous joins that cannot be
found in template1 because the relevant catalogs are entirely empty.
Together these changes add seventeen formerly-missed cases to the oidjoins
regression test.
Most of these cast DWORD to int or unsigned int for printf type handling.
This is safe even on 64 bit architectures because a DWORD is always 32 bits.
In one case a variable is initialised to keep the compiler happy.
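Hedged examples of the pattern (variable names are illustrative): DWORD is
a 32-bit unsigned type on both Win32 and Win64, so the casts lose nothing.

    DWORD       err = GetLastError();
    DWORD       nbytes = 0;         /* e.g. filled in by ReadFile() */

    fprintf(stderr, "operation failed: error code %u\n", (unsigned int) err);
    printf("bytes read: %d\n", (int) nbytes);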
This test should now work in any database with UTF8 encoding, regardless
of the database's default locale. The former restriction was really
"doesn't work if default locale is C", and that was because of not handling
mbstowcs/wcstombs correctly.
These functions should take a pg_locale_t, not a collation OID, and should
call mbstowcs_l/wcstombs_l where available. Where those functions are not
available, temporarily select the correct locale with uselocale().
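A hedged sketch of the strategy (not the committed char2wchar code),
assuming the HAVE_MBSTOWCS_L configure symbol:

    static size_t
    char2wchar_sketch(wchar_t *to, size_t tolen, const char *from, locale_t loc)
    {
    #ifdef HAVE_MBSTOWCS_L
        /* convert directly under the caller's locale */
        return mbstowcs_l(to, from, tolen, loc);
    #else
        /* temporarily make the caller's locale current, then restore */
        locale_t    save = uselocale(loc);
        size_t      result = mbstowcs(to, from, tolen);

        uselocale(save);
        return result;
    #endif
    }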
This change removes the bogus assumption that all locales selectable in
a given database have the same wide-character conversion method; in
particular, the collate.linux.utf8 regression test now passes with
LC_CTYPE=C, so long as the database encoding is UTF8.
I decided to move the char2wchar/wchar2char functions out of mbutils.c and
into pg_locale.c, because they work on wchar_t not pg_wchar_t and thus
don't really belong with the mbutils.c functions. Keeping them where they
were would have required importing pg_locale_t into pg_wchar.h somehow,
which did not seem like a good plan.
Make use of the collation attached to the index column, instead of
hard-wiring DEFAULT_COLLATION_OID. (Note: in theory this could require
reindexing btree_gist indexes on textual columns, but I rather doubt anyone
has one with a non-default declared collation as yet.)
Experimentation with contrib/btree_gist shows that the majority of the GIST
support functions potentially need collation information. Safest policy
seems to be to pass it to all of them, instead of making assumptions about
which ones could possibly need it.
Using DEFAULT_COLLATION_OID in the comparePartial functions was not only
a lame hack, but outright wrong, because the compare functions for
collation-aware types were already responding to the declared index
collation. So comparePartial would have the wrong expectation about
the index's sort order, possibly leading to missing matches for prefix
searches.
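A hedged sketch of the resulting pattern in a GiST support function (the
text operands a and b are placeholders, not the committed btree_gist code):
compare using the collation supplied through fcinfo instead of a hard-wired
DEFAULT_COLLATION_OID.

    int32       cmp;

    cmp = DatumGetInt32(DirectFunctionCall2Coll(bttextcmp,
                                                PG_GET_COLLATION(),
                                                PointerGetDatum(a),
                                                PointerGetDatum(b)));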
Since both tarballs and git now result in a "postgresql" directory
rather than a "pgsql" directory, adjust the example configuration to
look for the former.