* Change memory allocation for multi-byte functions so memory is
  allocated inside conversion functions

  Currently we preallocate memory based on worst-case usage.

* Consider increasing the default statistics target, and reduce
  statistics target overhead

  Also consider having a larger statistics target for indexed columns
  and expression indexes

  http://archives.postgresql.org/pgsql-general/2007-05/msg01228.php
  http://archives.postgresql.org/pgsql-general/2007-06/msg00542.php
This prevents compiler optimizations that assume overflow won't occur, which
breaks numerous overflow tests that we need to have working. It is known
that gcc 4.3 causes problems, and it is possible that 4.1 does as well. Per my
proposal of some time ago and a recent report from Kris Jurka.
Backpatch as far as 8.0, which is as far as the patch conveniently goes.
7.x was pretty short of overflow tests anyway, so it may not matter there,
even assuming that anyone cares whether 7.x builds on recent gcc.
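The kind of test that breaks is a post-hoc wraparound check. A minimal sketch,
assuming the flag in question is gcc's -fwrapv and using made-up function
names rather than anything from the backend:

    #include <limits.h>
    #include <stdio.h>

    /* Detect overflow after the fact by checking whether the result wrapped
     * around.  This only works if signed overflow actually wraps; an optimizer
     * that assumes overflow is undefined may conclude "result < a" can never
     * be true here and delete the test, which is what -fwrapv prevents. */
    static int
    add_with_check(int a, int b)
    {
        int     result = a + b;

        if ((b > 0 && result < a) || (b < 0 && result > a))
        {
            fprintf(stderr, "integer out of range\n");
            return 0;
        }
        return result;
    }

    int
    main(void)
    {
        printf("%d\n", add_with_check(INT_MAX, 1));
        return 0;
    }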
than dividing them into 1GB segments as has been our longtime practice. This
requires working support for large files in the operating system; at least for
the time being, it won't be the default.
Zdenek Kotala
variables to it. More need to be converted, but I wanted to get this in
before it conflicts with too much...
Other than just centralising the text-to-int conversion for parameters,
this allows the pg_settings view to contain a list of available options
and allows an error hint to show what values are allowed.
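A minimal, self-contained sketch of the general technique (one table drives
both validation and the error hint); the names below are illustrative, not the
actual GUC API:

    #include <stdio.h>
    #include <string.h>

    struct enum_entry
    {
        const char *name;
        int         val;
    };

    /* allowed values for one hypothetical enum parameter */
    static const struct enum_entry sync_method_options[] = {
        {"fsync", 0},
        {"fdatasync", 1},
        {"open_sync", 2},
        {NULL, 0}
    };

    /* Look up a textual setting; on failure, print a hint listing every
     * allowed value.  The same table is what lets a view expose the list. */
    static int
    enum_lookup(const struct enum_entry *tbl, const char *value, int *result)
    {
        const struct enum_entry *e;

        for (e = tbl; e->name != NULL; e++)
        {
            if (strcmp(e->name, value) == 0)
            {
                *result = e->val;
                return 1;
            }
        }
        fprintf(stderr, "invalid value \"%s\"; available values:", value);
        for (e = tbl; e->name != NULL; e++)
            fprintf(stderr, " %s", e->name);
        fprintf(stderr, "\n");
        return 0;
    }

    int
    main(void)
    {
        int     code;

        if (enum_lookup(sync_method_options, "fdatasync", &code))
            printf("parsed to %d\n", code);
        enum_lookup(sync_method_options, "bogus", &code);
        return 0;
    }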
With the addition of multiple autovacuum workers, our choices were to delete
the check, document the interaction with autovacuum_max_workers, or complicate
the check to try to hide that interaction. Since this restriction has never
been adequate to ensure backends can't run out of pinnable buffers, it doesn't
have enough value to justify the second or third choices.
Per discussion of a complaint from Andreas Kling (see also bug #3888).
This commit also removes several documentation references to this restriction,
but I'm not sure I got them all.
pattern-examination heuristic method to purely histogram-driven selectivity at
histogram size 100, we compute both estimates and use a weighted average.
The weight put on the heuristic estimate decreases linearly with histogram
size, dropping to zero for 100 or more histogram entries.
Likewise in ltreeparentsel(). After a patch by Greg Stark, though I
reorganized the logic a bit to give the caller of histogram_selectivity()
more control.
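A minimal sketch of that blending rule, assuming a simple linear ramp and the
100-entry cutoff described above (the actual selfuncs.c arithmetic may differ
in detail):

    /* Blend the pattern-based heuristic estimate with the histogram-driven
     * estimate.  The heuristic's weight falls linearly as the histogram grows,
     * reaching zero at 100 or more entries, so there is no abrupt switch. */
    double
    blend_selectivity(double heuristic_sel, double histogram_sel, int hist_size)
    {
        double      heur_weight;

        if (hist_size >= 100)
            return histogram_sel;

        heur_weight = (100.0 - hist_size) / 100.0;
        return heur_weight * heuristic_sel + (1.0 - heur_weight) * histogram_sel;
    }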
of the generated range condition var >= 'foo' AND var < 'fop' as being less
than what eqsel() would estimate for var = 'foo'. This is intuitively
reasonable and it gets rid of the need for some entirely ad-hoc coding we
formerly used to reject bogus estimates. The basic problem here is that
if the prefix is more than a few characters long, the two boundary values
are too close together to be distinguishable by comparison to the column
histogram, resulting in a selectivity estimate of zero, which is often
not very sane. Change motivated by an example from Peter Eisentraut.
Arguably this is a bug fix, but I'll refrain from back-patching it
for the moment.
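In code form the clamp is nothing more than a lower bound; a sketch, not the
actual prefix_selectivity() source:

    /* The range var >= 'foo' AND var < 'fop' matches at least every row that
     * var = 'foo' matches, so the equality estimate is a valid lower bound;
     * clamping to it discards the near-zero estimates produced when both
     * range boundaries fall into the same histogram bin. */
    double
    clamp_prefix_selectivity(double range_sel, double eq_sel)
    {
        return (range_sel < eq_sel) ? eq_sel : range_sel;
    }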
it accumulates the set of changes to be made and then applies them. It had
to accumulate the set of changes anyway to prepare a WAL record for the
pruning action, so this isn't an enormous change; the only new complexity is
to not doubly mark tuples that are visited twice in the scan. The main
advantage is that we can substantially reduce the scope of the critical
section in which the changes are applied, thus avoiding PANIC in foreseeable
cases like running out of memory in inval.c. A nice secondary advantage is
that it is now far clearer that WAL replay will actually do the same thing
that the original pruning did.
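A minimal sketch of that two-phase shape, with made-up names and a single
"mark dead" action standing in for the full redirect/dead/unused set; none of
this is the actual pruneheap.c code:

    #include <stdbool.h>

    #define MAX_ITEMS_PER_PAGE 291      /* stand-in for MaxHeapTuplesPerPage */

    typedef struct PruneState
    {
        int     ndead;
        int     nowdead[MAX_ITEMS_PER_PAGE];
        bool    marked[MAX_ITEMS_PER_PAGE + 1]; /* offsets already recorded */
    } PruneState;

    /* Scan phase: record an item as to-be-marked-dead, skipping items the
     * scan has already visited so nothing is doubly marked. */
    void
    record_dead(PruneState *prstate, int offnum)
    {
        if (prstate->marked[offnum])
            return;
        prstate->marked[offnum] = true;
        prstate->nowdead[prstate->ndead++] = offnum;
    }

    /* Apply phase: flip the accumulated items.  In the real code this runs
     * inside a short critical section, right next to writing the WAL record,
     * so no failure-prone work can PANIC mid-update, and replay applies the
     * same recorded list the original pruning applied. */
    void
    apply_prune(PruneState *prstate, char *item_flags)
    {
        int     i;

        for (i = 0; i < prstate->ndead; i++)
            item_flags[prstate->nowdead[i]] = 'D';
    }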
This commit doesn't do anything about the open problem that
CacheInvalidateHeapTuple doesn't have the right semantics for a CTID change
caused by collapsing out a redirect pointer. But whatever we do about that,
it'll be a good idea to not do it inside a critical section.
available output buffer when presented with corrupt input. Some testing
suggests that this slows the decompression loop about 1%, which seems an
acceptable price to pay for more robustness. (Curiously, the penalty
seems to be *less* on not-very-compressible data, which I didn't expect
since the overhead per output byte ought to be more in the literal-bytes
path.)
Patch from Zdenek Kotala. I fixed a corner case and did some renaming
of variables to make the routine more readable.
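The added safety boils down to checking the remaining output space before each
copy. A sketch of the back-reference case, with made-up names rather than the
actual pg_lzcompress.c loop (the real code also validates the offset of the
back-reference itself):

    #include <stddef.h>

    /* Copy a back-reference of "len" bytes starting "off" bytes behind the
     * current output position "dp", refusing (returning NULL) if it would run
     * past "dend", the end of the caller-supplied output buffer.  The copy is
     * done byte by byte because source and destination may overlap. */
    unsigned char *
    copy_match(unsigned char *dp, unsigned char *dend, size_t off, size_t len)
    {
        if ((size_t) (dend - dp) < len)
            return NULL;                /* corrupt input: would overrun */
        while (len-- > 0)
        {
            *dp = *(dp - off);
            dp++;
        }
        return dp;
    }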
were discussed last year, but we felt it was too late in the 8.3 cycle to
change the code immediately. Specifically, the patch:
* Reduces the minimum datum size to be considered for compression from
256 to 32 bytes, as suggested by Greg Stark.
* Increases the required compression rate for compressed storage from
20% to 25%, again per Greg's suggestion.
* Replaces force_input_size (size above which compression is forced)
with a maximum size to be considered for compression. It was agreed
that allowing large inputs to escape the minimum-compression-rate
requirement was not bright, and that indeed we'd rather have a knob
that acted in the other direction. I set this value to 1MB for the
moment, but it could use some performance studies to tune it.
* Adds an early-failure path to the compressor as suggested by Jan:
if it's been unable to find even one compressible substring in the
first 1KB (parameterizable), assume we're looking at incompressible
input and give up. (Possibly this logic can be improved, but I'll
commit it as-is for now.)
* Improves the toasting heuristics so that when we have very large
fields with attstorage 'x' or 'e', we will push those out to toast
storage before considering inline compression of shorter fields.
This also responds to a suggestion of Greg's, though my original
proposal for a solution was a bit off base because it didn't fix
the problem for large 'e' fields.
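Pulled together, the new knobs amount to roughly the following sketch; the
constant and function names are made up, and only the numeric thresholds come
from the list above:

    #include <stdbool.h>
    #include <stddef.h>

    #define PGLZ_MIN_INPUT      32            /* was 256: below this, skip compression */
    #define PGLZ_MAX_INPUT      (1024 * 1024) /* new cap: above this, skip compression */
    #define PGLZ_MIN_SAVINGS    25            /* was 20: required percent saved */
    #define PGLZ_FIRST_SUCCESS  1024          /* checked inside the compressor loop (not
                                               * shown): bail out if no compressible
                                               * substring is found this early */

    /* Should we bother running the compressor at all? */
    bool
    worth_trying(size_t rawsize)
    {
        return rawsize >= PGLZ_MIN_INPUT && rawsize <= PGLZ_MAX_INPUT;
    }

    /* After compressing, is the result good enough to keep?  It must save at
     * least PGLZ_MIN_SAVINGS percent relative to the raw size. */
    bool
    good_enough(size_t rawsize, size_t compressed_size)
    {
        return compressed_size <= rawsize - rawsize * PGLZ_MIN_SAVINGS / 100;
    }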
There was some discussion in the earlier threads of exposing some
of the compression knobs to users, perhaps even on a per-column
basis. I have not done anything about that here. It seems to me
that if we are changing around the parameters, we'd better get some
experience and be sure we are happy with the design before we set
things in stone by providing user-visible knobs.