uses of the long-deprecated float32 in contrib/seg; the definitions themselves
are still there, but no longer used. fmgr/README updated to match.
I added a CREATE FUNCTION for the existing seg_center() code in seg.c too,
and some tests for it and the neighbor functions. At the same time, I removed
checks for NULL that are not needed (because the functions are declared
STRICT).
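
For reference, a function declared STRICT is never called with NULL inputs;
the fmgr returns NULL on the caller's behalf, so explicit PG_ARGISNULL()
tests are dead code. A minimal sketch (the function name and body are
illustrative, not the actual seg.c source):

    #include "postgres.h"
    #include "fmgr.h"
    #include "segdata.h"        /* contrib/seg's SEG struct (float4 bounds) */

    PG_FUNCTION_INFO_V1(seg_width);

    Datum
    seg_width(PG_FUNCTION_ARGS)
    {
        /* STRICT at the SQL level, so this pointer is guaranteed non-NULL */
        SEG    *s = (SEG *) PG_GETARG_POINTER(0);

        PG_RETURN_FLOAT4(s->upper - s->lower);
    }
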
I had to make some adjustments to contrib's btree_gist too. The choices for
representation there are not ideal for changing the underlying types :-(
Original patch by Zoltan Boszormenyi, with some adjustments by me.
"consistent" functions, and remove pg_amop.opreqcheck, as per recent
discussion. The main immediate benefit of this is that we no longer need
8.3's ugly hack of requiring @@@ rather than @@ to test weight-using tsquery
searches on GIN indexes. In future it should be possible to optimize some
other queries better than is done now, by detecting at runtime whether the
index match is exact or not.
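
As a hedged sketch of the mechanism (argument order follows the 8.4 GIN
support-function convention; the opclass logic here is invented):

    #include "postgres.h"
    #include "fmgr.h"

    PG_FUNCTION_INFO_V1(my_consistent);

    Datum
    my_consistent(PG_FUNCTION_ARGS)
    {
        bool       *check = (bool *) PG_GETARG_POINTER(0);
        /* StrategyNumber n = PG_GETARG_UINT16(1); */
        /* Datum       query = PG_GETARG_DATUM(2); */
        int32       nkeys = PG_GETARG_INT32(3);
        /* Pointer   *extra_data = (Pointer *) PG_GETARG_POINTER(4); */
        bool       *recheck = (bool *) PG_GETARG_POINTER(5);
        bool        res = true;
        int32       i;

        /* require every extracted key to be present in the indexed item */
        for (i = 0; i < nkeys; i++)
        {
            if (!check[i])
            {
                res = false;
                break;
            }
        }

        /*
         * Report at runtime whether the match is exact.  A real opclass
         * would set this true only when the index data cannot prove the
         * match (e.g. weight-using tsquery conditions), replacing the old
         * pg_amop.opreqcheck / @@@ hack.
         */
        *recheck = false;

        PG_RETURN_BOOL(res);
    }
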
Tom Lane, after an idea of Heikki's, and with some help from Teodor.
results to contain uninitialized, unpredictable values. While this was okay
as far as the datatypes themselves were concerned, it's a problem for the
parser because occurrences of the "same" literal might not be recognized as
equal by datumIsEqual (and hence not by equal()). It seems sufficient to fix
this in the input functions since the only critical use of equal() is in the
parser's comparisons of ORDER BY and DISTINCT expressions.
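
As an illustration of the fix (a made-up fixed-size type, not one of the
affected datatypes): allocating the result with palloc0() instead of palloc()
zeroes any alignment padding, so equal values are also bitwise equal and
datumIsEqual() agrees with the datatype's own notion of equality.

    #include "postgres.h"
    #include "fmgr.h"

    typedef struct Complex
    {
        int32   precision;      /* 4 bytes, then 4 pad bytes before the float8 */
        float8  value;
    } Complex;

    PG_FUNCTION_INFO_V1(complex_in);

    Datum
    complex_in(PG_FUNCTION_ARGS)
    {
        char       *str = PG_GETARG_CSTRING(0);
        Complex    *result = (Complex *) palloc0(sizeof(Complex));

        /* ... parse str into result->precision and result->value ... */
        (void) str;
        PG_RETURN_POINTER(result);
    }
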
Per a trouble report from Marc Cousin.
Patch all the way back. Interestingly, array_in did not have the bug before
8.2, which may explain why the issue went unnoticed for so long.
data. This makes for a significant speedup at the cost that the results
now vary between little-endian and big-endian machines, which forces us
to add explicit ORDER BYs in a couple of regression tests to preserve
machine-independent comparison results. Also, force initdb by bumping
catversion, since the contents of hash indexes will change (at least on
big-endian machines).
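
To see why word-at-a-time hashing is endianness-dependent, note that the same
four key bytes read as a 32-bit word give different values on the two
architectures (a plain-C illustration, unrelated to the actual hash code):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int
    main(void)
    {
        const unsigned char key[4] = {'a', 'b', 'c', 'd'};
        uint32_t    word;

        memcpy(&word, key, sizeof(word));

        /* prints 0x64636261 on little-endian, 0x61626364 on big-endian */
        printf("0x%08x\n", (unsigned) word);
        return 0;
    }
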
Kenneth Marshall and Tom Lane, based on work from Bob Jenkins. This commit
does not adopt Bob's new faster mix() algorithm, however, since we still need
to convince ourselves that that doesn't degrade the quality of the hashing.
specify the cost values to use, instead of always using 1's.
Volkan Yazici
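
For reference, a generic plain-C formulation of edit distance with
caller-supplied costs (a textbook sketch, not the fuzzystrmatch code):

    #include <stdlib.h>
    #include <string.h>

    static int
    min3(int a, int b, int c)
    {
        int     m = (a < b) ? a : b;

        return (m < c) ? m : c;
    }

    int
    levenshtein_cost(const char *s, const char *t,
                     int ins_c, int del_c, int sub_c)
    {
        int     m = (int) strlen(s);
        int     n = (int) strlen(t);
        int    *prev = malloc((n + 1) * sizeof(int));
        int    *curr = malloc((n + 1) * sizeof(int));
        int     i, j, result;

        for (j = 0; j <= n; j++)
            prev[j] = j * ins_c;                /* build t[0..j-1] from scratch */

        for (i = 1; i <= m; i++)
        {
            curr[0] = i * del_c;                /* erase s[0..i-1] entirely */
            for (j = 1; j <= n; j++)
            {
                int     sub = prev[j - 1] +
                    ((s[i - 1] == t[j - 1]) ? 0 : sub_c);

                curr[j] = min3(prev[j] + del_c,     /* delete s[i-1]   */
                               curr[j - 1] + ins_c, /* insert t[j-1]   */
                               sub);                /* substitute/keep */
            }
            memcpy(prev, curr, (n + 1) * sizeof(int));
        }

        result = prev[n];
        free(prev);
        free(curr);
        return result;
    }

With all three costs set to 1 this reduces to the classic distance, e.g.
levenshtein_cost("kitten", "sitting", 1, 1, 1) == 3.
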
In passing, remove fuzzystrmatch.h, which contained a bunch of stuff that had
no business being in a .h file; fold it into its only user, fuzzystrmatch.c.
strings. This patch introduces four support functions cstring_to_text,
cstring_to_text_with_len, text_to_cstring, and text_to_cstring_buffer, and
two macros CStringGetTextDatum and TextDatumGetCString. A number of
existing macros that provided variants on these themes were removed.
Most of the places that need to make such conversions now require just one
function or macro call, in place of the multiple notational layers that used
to be needed. There are no longer any direct calls of textout or textin,
and we caught most of the places that were using handmade conversions via
memcpy (there may be a few still lurking, though).
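
The resulting idiom looks roughly like this (the helper names are the ones
introduced by the patch; the function itself is invented for illustration):

    #include "postgres.h"
    #include "fmgr.h"
    #include "utils/builtins.h"

    PG_FUNCTION_INFO_V1(reverse_text);

    Datum
    reverse_text(PG_FUNCTION_ARGS)
    {
        char   *str = text_to_cstring(PG_GETARG_TEXT_PP(0));   /* no textout call */
        int     len = (int) strlen(str);
        int     i;

        for (i = 0; i < len / 2; i++)
        {
            char    tmp = str[i];

            str[i] = str[len - 1 - i];
            str[len - 1 - i] = tmp;
        }

        PG_RETURN_TEXT_P(cstring_to_text(str));                /* no textin call */
    }

Similarly, CStringGetTextDatum("...") builds a text Datum in one step (handy
for filling a values[] array), and TextDatumGetCString() goes the other way.
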
This commit doesn't make any serious effort to eliminate transient memory
leaks caused by detoasting toasted text objects before they reach
text_to_cstring. We changed PG_GETARG_TEXT_P to PG_GETARG_TEXT_PP in a few
places where it was easy, but much more could be done.
Brendan Jurd and Tom Lane
pattern-examination heuristic method to purely histogram-driven selectivity at
histogram size 100, we compute both estimates and use a weighted average.
The weight put on the heuristic estimate decreases linearly with histogram
size, dropping to zero for 100 or more histogram entries.
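
The blending rule amounts to something like this (an illustration of the
behavior described above, not the actual selfuncs.c code):

    static double
    blend_selectivity(double heuristic_sel, double hist_sel, int hist_size)
    {
        /* weight on the histogram estimate rises linearly, capped at 1.0 */
        double  hist_weight = (hist_size >= 100) ? 1.0 : hist_size / 100.0;

        return (1.0 - hist_weight) * heuristic_sel + hist_weight * hist_sel;
    }
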
Likewise in ltreeparentsel(). After a patch by Greg Stark, though I
reorganized the logic a bit to give the caller of histogram_selectivity()
more control.
data structures and backend internal APIs. This solves problems we've seen
recently with inconsistent layout of pg_control between machines that have
32-bit time_t and those that have already migrated to 64-bit time_t. Also,
we can get out from under the problem that Windows' Unix-API emulation is not
consistent about the width of time_t.
There are a few remaining places where local time_t variables are used to hold
the current or recent result of time(NULL). I didn't bother changing these
since they do not affect any cross-module APIs and surely all platforms will
have 64-bit time_t before overflow becomes an actual risk. time_t should
be avoided for anything visible to extension modules, however.
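
The preferred pattern, sketched with a hypothetical struct (only TimestampTz
and GetCurrentTimestamp() are real APIs here):

    #include "postgres.h"
    #include "utils/timestamp.h"

    typedef struct WorkerStatus
    {
        TimestampTz last_heartbeat;     /* same width on every platform */
    } WorkerStatus;

    static void
    touch_heartbeat(WorkerStatus *ws)
    {
        /* store a TimestampTz, not time(NULL), in anything cross-module */
        ws->last_heartbeat = GetCurrentTimestamp();
    }
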
failed to cover all the ways in which a connection can be initiated in dblink.
Plug the remaining holes. Also, disallow transient connections in functions
for which that feature makes no sense (because they are only sensible as
part of a sequence of operations on the same connection).
Joe Conway
Security: CVE-2007-6601
hazards. Instead, teach these programs to prompt for a password when
necessary, just like all our other programs.
I did not bother to invent -W switches for them, since the return on
investment seems so low.
switch optional, as is the case for every other one of our programs.
I had already documented its -W as being optional, so this is bringing
the code into line with the docs ...
The original coding leaked memory (at least 8K per crosstab_hash call)
because it allocated the hash table as a child of TopMemoryContext and
then never freed it. Fix that by putting the hash table under
per_query_ctx instead. Also get rid of the use
of a static variable to point to the hash table. Aside from being
ugly, that would actively do the wrong thing in the case of re-entrant
calls to crosstab_hash, which are at least theoretically possible
since it was expecting the static variable to stay valid across
a SPI_execute call.
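
A sketch of the intended allocation pattern (identifiers are illustrative,
not the exact tablefunc.c code):

    #include "postgres.h"
    #include "utils/hsearch.h"
    #include "utils/memutils.h"

    static HTAB *
    build_category_hash(MemoryContext per_query_ctx, Size keysize, Size entrysize)
    {
        HASHCTL     ctl;

        MemSet(&ctl, 0, sizeof(ctl));
        ctl.keysize = keysize;
        ctl.entrysize = entrysize;
        ctl.hcxt = per_query_ctx;       /* hash table dies with the query */

        return hash_create("crosstab categories", 64, &ctl,
                           HASH_ELEM | HASH_CONTEXT);
    }

The returned pointer is then passed along explicitly, rather than stashed in
a static variable that a nested call (via SPI_execute) could clobber.
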
since we supported standard FOREIGN KEY constraint syntax. It was
harmless enough just sitting there, but the prospect of having to
document it is surely more work than it's worth.
but no database changes have been made since the last CommandCounterIncrement.
This should result in a significant improvement in the number of "commands"
that can typically be performed within a transaction before hitting the 2^32
CommandId size limit. In particular this buys back (and more) the possible
adverse consequences of my previous patch to fix plan caching behavior.
The implementation requires tracking whether the current CommandCounter
value has been "used" to mark any tuples. CommandCounter values stored into
snapshots are presumed not to be used for this purpose. This requires some
small executor changes, since the executor used to conflate the curcid of
the snapshot it was using with the command ID to mark output tuples with.
Separating these concepts allows some small simplifications in executor APIs.
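
Conceptually the bookkeeping looks like this (a simplified sketch, not the
actual xact.c code):

    #include "postgres.h"

    static CommandId    currentCommandId = FirstCommandId;
    static bool         currentCommandIdUsed = false;

    CommandId
    GetCurrentCommandId(bool used)
    {
        if (used)
            currentCommandIdUsed = true;    /* caller will stamp a tuple with it */
        return currentCommandId;
    }

    void
    CommandCounterIncrement(void)
    {
        if (!currentCommandIdUsed)
            return;                         /* no tuples marked: nothing to advance past */

        currentCommandId += 1;
        currentCommandIdUsed = false;
        /* ... update snapshot curcid, AcceptInvalidationMessages(), etc. ... */
    }

Snapshots copy currentCommandId without setting the "used" flag, so purely
read-only commands no longer consume CommandIds.
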
Something for the TODO list: look into having CommandCounterIncrement not do
AcceptInvalidationMessages. It seems fairly bogus to be doing it there,
but exactly where to do it instead isn't clear, and I'm disinclined to mess
with asynchronous behavior during late beta.
inappropriately generic-sounding names. This is more or less free since
we already forced initdb for the next beta, and it may prevent confusion or
name conflicts (particularly at the C-global-symbol level) down the road.
Per my proposal yesterday.
compatibility package. This supports importing dumps from past versions
using tsearch2, and provides the old names and API for most functions
that were changed. (rewrite(ARRAY[...]) is a glaring omission, though.)
Pavel Stehule and Tom Lane
about best practice for including the module creation scripts: to wit
that you should suppress NOTICE messages. This keeps added or removed
comment lines in the module scripts from causing regression failures.