An oversight in setting the boundaries of known commit timestamps during
startup caused old commit timestamps to become inaccessible after a
server restart.
Author and reporter: Julien Rouhaud
Review, test code: Craig Ringer
INSERT ... ON CONFLICT (specifically ExecCheckHeapTupleVisible) contains
another example of this unsafe coding practice. It is much harder to get
a failure out of it than the case fixed in commit 6292c2339, because in
most scenarios any hint bits that could be set would have already been set
earlier in the command. However, Konstantin Knizhnik reported a failure
with a custom transaction manager, and it's clearly possible to get a
failure via a race condition in async-commit mode.
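In outline, the safe coding pattern is to hold the buffer content lock
across the visibility test, since the test may set hint bits on the
page. A minimal sketch (illustrative, not necessarily the exact
committed code):

    LockBuffer(buffer, BUFFER_LOCK_SHARE);
    if (!HeapTupleSatisfiesVisibility(tuple, estate->es_snapshot, buffer))
        ereport(ERROR,
                (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
                 errmsg("could not serialize access due to concurrent update")));
    LockBuffer(buffer, BUFFER_LOCK_UNLOCK);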
For lack of a reproducible example, no regression test case in this
commit.
I did some testing with Asserts added to tqual.c's functions, and can say
that running "make check-world" exposed these two bugs and no others.
The Asserts are messy enough that I've not added them to the code for now.
Report: <57EE93C8.8080504@postgrespro.ru>
Related-Discussion: <CAO3NbwOycQjt2Oqy2VW-eLTq2M5uGMyHnGm=RNga4mjqcYD7gQ@mail.gmail.com>
A transaction that conflicts against itself, for example
INSERT INTO t(pk) VALUES (1),(1) ON CONFLICT DO NOTHING;
should behave the same regardless of isolation level. It certainly
shouldn't throw a serialization error, as retrying will not help.
We got this wrong due to the ON CONFLICT logic not considering the case,
as reported by Jason Dusek.
Core of this patch is by Peter Geoghegan (based on an earlier patch by
Thomas Munro), though I didn't take his proposed code refactoring for fear
that it might have unexpected side-effects. Test cases by Thomas Munro
and myself.
Report: <CAO3NbwOycQjt2Oqy2VW-eLTq2M5uGMyHnGm=RNga4mjqcYD7gQ@mail.gmail.com>
Related-Discussion: <57EE93C8.8080504@postgrespro.ru>
Despite the argumentation I wrote in commit 7a2fe85b0, it's unsafe to do
this, because in corner cases it's possible for HeapTupleSatisfiesSelf
to try to set hint bits on the target tuple; and at least since 8.2 we
have required the buffer content lock to be held while setting hint bits.
The added regression test exercises one such corner case. Unpatched, it
causes an assertion failure in assert-enabled builds, or otherwise would
cause a hint bit change in a buffer we don't hold lock on, which given
the right race condition could result in checksum failures or other data
consistency problems. The odds of a problem in the field are probably
pretty small, but nonetheless back-patch to all supported branches.
Report: <19391.1477244876@sss.pgh.pa.us>
Using the name fsync clashed with the #define we have on Windows that
redefines it to _commit. Naming it sync should remove that conflict.
Per all the Windows buildfarm members.
This will write the received transaction log into a file called
pg_wal.tar(.gz) next to the other tarfiles instead of writing it to
base.tar. When using fetch mode, the transaction log is still written to
base.tar like before, and when used against a pre-10 server, the file
is named pg_xlog.tar.
To do this, implement a new concept of a "walmethod", which is
responsible for writing the WAL. Two implementations exist, one that
writes to a plain directory (which is also used by pg_receivexlog) and
one that writes to a tar file with optional compression.
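The walmethod is essentially a small vtable of callbacks. A hedged
sketch of the idea (an illustrative subset, not the full committed
interface):

    typedef struct WalWriteMethod
    {
        /* create/open a WAL file, returning an opaque handle */
        void   *(*open_for_write) (const char *pathname,
                                   const char *temp_suffix,
                                   size_t pad_to_size);
        /* append WAL data to the handle */
        ssize_t (*write) (void *f, const void *buf, size_t count);
        /* close (and possibly rename/fsync) the file */
        int     (*close) (void *f);
        /* finalize method-level state, e.g. the tar trailer */
        bool    (*finish) (void);
    } WalWriteMethod;

The callers then drive whichever implementation they were handed,
without caring whether the bytes end up in a directory or in a
(compressed) tar file.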
Reviewed by Michael Paquier
Show how to get the system's huge page size, rather than misleadingly
referring to PAGE_SIZE (which is usually understood to be the regular
page size). Show how to confirm whether huge pages have been allocated.
Minor wordsmithing. Back-patch to 9.4 where this section appeared.
Set enable_hashagg to false for tests involving least_agg(), so that
we get the same plan regardless of local costing variances. Also,
remove a test involving sqrt(); it's there to test deparsing of
HAVING clauses containing expressions, but that's tested elsewhere
anyway, and sqrt(2) deparses with different amounts of precision on
different machines.
Per buildfarm.
Replace "Full path to ..." with "Full path name of ...". At least one
user has misinterpreted the existing wording as meaning "Directory
containing ...".
Now that the upper planner uses paths, and now that we have proper hooks
to inject paths into the upper planning process, it's possible for
foreign data wrappers to arrange to push aggregates to the remote side
instead of fetching all of the rows and aggregating them locally. This
figures to be a massive win for performance, so teach postgres_fdw to
do it.
Jeevan Chalke and Ashutosh Bapat. Reviewed by Ashutosh Bapat with
additional testing by Prabhat Sahu. Various mostly cosmetic changes
by me.
With track_io_timing = on, EXPLAIN (ANALYZE, BUFFERS) will emit fields
named like "I/O Read Time". The slash makes that invalid as an XML
element name, so that adding FORMAT XML would produce invalid XML.
We already have code in there to translate spaces to dashes, so let's
generalize that to convert anything that isn't a valid XML name character,
viz letters, digits, hyphens, underscores, and periods. We could just
reject slashes, which would run a bit faster. But the fact that this went
unnoticed for so long doesn't give me a warm feeling that we'd notice the
next creative violation, so let's make it a permanent fix.
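In outline, the escaping loop looks something like this (a sketch
assuming a StringInfo output buffer; the function name is illustrative
and the committed code may differ in detail):

    static void
    escape_xml_field_name(StringInfo buf, const char *name)
    {
        const char *p;

        for (p = name; *p; p++)
        {
            char    c = *p;

            if ((c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') ||
                (c >= '0' && c <= '9') ||
                c == '-' || c == '_' || c == '.')
                appendStringInfoChar(buf, c);
            else
                appendStringInfoChar(buf, '-');  /* "I/O Read Time" -> "I-O-Read-Time" */
        }
    }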
Reported by Markus Winand, though this isn't his initial patch proposal.
Back-patch to 9.2 where track_io_timing was added. The problem is only
latent in 9.1, so I don't feel a need to fix it there.
Discussion: <E0BF6A45-68E8-45E6-918F-741FB332C6BB@winand.at>
This absorbs a fix for a symlink-manipulation bug in zic that was
introduced in 2016g. It probably isn't interesting for our use-case,
but I'm not quite sure, so let's update while we're at it.
"xlog" is not a particularly clear abbreviation for "write-ahead log",
and it sometimes confuses users into believing that the contents of the
"pg_xlog" directory are not critical data, leading to unpleasant
consequences. So, rename the directory to "pg_wal".
This patch modifies pg_upgrade and pg_basebackup to understand both
the old and new directory layouts; the former is necessary given the
purpose of the tool, while the latter merely avoids an unnecessary
backward-compatibility break.
We may wish to consider renaming other programs, switches, and
functions which still use the old "xlog" naming to also refer to
"wal". However, that's still under discussion, so let's do just this
much for now.
Discussion: <CAB7nPqTeC-8+zux8_-4ZD46V7YPwooeFxgndfsq5Rg8ibLVm1A@mail.gmail.com>
Michael Paquier
This is mostly to absorb some corner-case fixes in zic for year-2037
timestamps. The other changes that have been made are unlikely to affect
our usage, but nonetheless we may as well take 'em.
IANA got rid of the really silly "abbreviation" and replaced it with one
that's only moderately silly. But it's still pointless, so keep on not
showing it.
DST law changes in Turkey. Historical corrections for America/Los_Angeles,
Europe/Kirov, Europe/Moscow, Europe/Samara, and Europe/Ulyanovsk.
Rename Asia/Rangoon to Asia/Yangon, with a backward compatibility link.
The IANA crew continue their campaign to replace invented time zone
abbreviations with numeric GMT offsets. This update changes numerous zones
in Antarctica and the former Soviet Union, for instance Antarctica/Casey
now reports "+08" not "AWST" in the pg_timezone_names view. I kept these
abbreviations in the tznames/ data files, however, so that we will still
accept them for input. (We may want to start trimming those files someday,
but today is not that day.)
An exception is that since IANA no longer claims that "AMT" is in use
in Armenia for GMT+4, I replaced it in the Default file with GMT-4,
corresponding to Amazon Time which is in use in South America. It may be
that that meaning is also invented and IANA will drop it in a future
update; but for now, it seems silly to give pride of place to a meaning
not traceable to IANA over one that is.
Switch TAP tests to use the new wait mode of pg_ctl promote. This
avoids extra logic with poll_query_until() to check that a
promoted standby is ready for read-write queries.
From: Michael Paquier <michael.paquier@gmail.com>
--noclean and --nosync were the only options spelled without a hyphen,
so change this for consistency with other options. The options in
pg_basebackup have not been in a release, so we just rename them. For
initdb, we retain the old variants.
Vik Fearing and me
When a relation is truncated, it is important that the FSM is truncated as
well. Otherwise, after recovery, the FSM can return a page that has been
truncated away, leading to errors like:
ERROR: could not read block 28991 in file "base/16390/572026": read only 0 of 8192 bytes
We were using MarkBufferDirtyHint() to dirty the buffer holding the last
remaining page of the FSM, but during recovery, that might in fact not
dirty the page, and the FSM update might be lost.
To fix, use the stronger MarkBufferDirty() function. MarkBufferDirty()
requires us to do WAL-logging ourselves, to protect from a torn page, if
checksumming is enabled.
Also fix an oversight in visibilitymap_truncate: it also needs to WAL-log
when checksumming is enabled.
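In outline, the fix's pattern is (a sketch; see the committed code for
the exact conditions):

    /*
     * MarkBufferDirtyHint() might not dirty the page during recovery,
     * so use MarkBufferDirty() and, when checksums or wal_log_hints
     * demand torn-page protection, WAL-log the page ourselves.
     */
    MarkBufferDirty(buf);
    if (!InRecovery && XLogHintBitIsNeeded())
        log_newpage_buffer(buf, false);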
Analysis by Pavan Deolasee.
Discussion: <CABOikdNr5vKucqyZH9s1Mh0XebLs_jRhKv6eJfNnD2wxTn=_9A@mail.gmail.com>
On my system, this improves coverage for src/backend/access/hash from
61.3% of lines to 88.2% of lines, and from 83.5% of functions to 97.5%
of functions, which is pretty good for 36 lines of tests.
Mithun Cy, reviewed by Amit Kapila and Álvaro Herrera
CommandId is declared as uint32, and values up to 4G are indeed legal.
cidout() handles them properly by treating the value as unsigned int.
But cidin() was just using atoi(), which has platform-dependent behavior
for values outside the range of signed int, as reported by Bart Lengkeek
in bug #14379. Use strtoul() instead, as xidin() does.
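To illustrate the difference (a standalone sketch, not backend code):

    #include <stdio.h>
    #include <stdlib.h>

    int
    main(void)
    {
        const char *s = "4294967295";   /* 2^32 - 1: a legal CommandId */

        /* atoi's result is platform-dependent outside signed-int range... */
        printf("atoi:    %d\n", atoi(s));
        /* ...while strtoul yields the full unsigned 32-bit value */
        printf("strtoul: %u\n", (unsigned int) strtoul(s, NULL, 10));
        return 0;
    }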
In passing, make some purely cosmetic changes to make xidin/xidout
look more like cidin/cidout; the former didn't have a monopoly on
best practice IMO.
Neither xidin nor cidin makes any attempt to throw error for invalid input.
I didn't change that here, and am not sure it's worth worrying about
since neither is really a user-facing type. The point is just to ensure
that indubitably-valid inputs work as expected.
It's been like this for a long time, so back-patch to all supported
branches.
Report: <20161018152550.1413.6439@wrigleys.postgresql.org>
This reverts commit 9e083fd468. That was a
few bricks shy of a load:
* Query cancel stopped working
* Buildfarm member pademelon stopped working, because the box doesn't have
/dev/urandom nor /dev/random.
This clearly needs some more discussion, and a quite different patch, so
revert for now.
This value might not be to everyone's taste; in particular, some
people might prefer %t to %m, and others may want %u, %d, or other
fields. However, it's a vast improvement on the old default of ''.
Christoph Berg
The old "low-level" API is deprecated, and doesn't support hardware
acceleration. And this makes the code simpler, too.
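For reference, EVP-style usage looks roughly like this (a hedged sketch
against the OpenSSL 1.1 API; the function name and error handling are
illustrative, and 'out' must have room for inlen plus one cipher block):

    #include <openssl/evp.h>

    static int
    encrypt_evp_sketch(const unsigned char *key, const unsigned char *iv,
                       const unsigned char *in, int inlen,
                       unsigned char *out, int *outlen)
    {
        EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
        int     len = 0,
                total = 0;

        if (ctx == NULL)
            return 0;
        if (EVP_EncryptInit_ex(ctx, EVP_aes_128_cbc(), NULL, key, iv) == 1 &&
            EVP_EncryptUpdate(ctx, out, &len, in, inlen) == 1)
        {
            total = len;
            if (EVP_EncryptFinal_ex(ctx, out + total, &len) == 1)
            {
                EVP_CIPHER_CTX_free(ctx);
                *outlen = total + len;
                return 1;
            }
        }
        EVP_CIPHER_CTX_free(ctx);
        return 0;
    }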
Discussion: <561274F1.1030000@iki.fi>
Have tuplesort_gettupleslot() copy the contents of its current table slot
as needed. This is based on an approach taken by tuplestore_gettupleslot().
In the future, tuplesort_gettupleslot() may also be taught to avoid copying
the tuple where the caller can determine that that is safe (the
tuplestore_gettupleslot() interface already offers this option to callers).
Patch by Peter Geoghegan. Fixes bug #14344, reported by Regina Obe.
Report: <20160929035538.20224.39628@wrigleys.postgresql.org>
Backpatch-through: 9.6
This adds a new routine, pg_strong_random() for generating random bytes,
for use in both frontend and backend. At the moment, it's only used in
the backend, but the upcoming SCRAM authentication patches need strong
random numbers in libpq as well.
pg_strong_random() is based on, and replaces, the existing implementation
in pgcrypto. It can acquire strong random numbers from a number of sources,
depending on what's available:
- OpenSSL RAND_bytes(), if built with OpenSSL
- On Windows, the native cryptographic functions are used
- /dev/urandom
- /dev/random
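In rough outline (a sketch, not the committed code: the real source
selection happens at build time and includes a Windows branch, while
this sketch falls through at runtime for brevity):

    #include <fcntl.h>
    #include <stdbool.h>
    #include <stddef.h>
    #include <unistd.h>
    #ifdef USE_OPENSSL
    #include <openssl/rand.h>
    #endif

    static bool
    random_from_file(const char *path, void *buf, size_t len)
    {
        int     fd = open(path, O_RDONLY);
        char   *p = buf;

        if (fd < 0)
            return false;
        while (len > 0)
        {
            ssize_t nread = read(fd, p, len);

            if (nread <= 0)
            {
                close(fd);
                return false;
            }
            p += nread;
            len -= nread;
        }
        close(fd);
        return true;
    }

    bool
    pg_strong_random(void *buf, size_t len)
    {
    #ifdef USE_OPENSSL
        if (RAND_bytes(buf, (int) len) == 1)
            return true;
    #endif
        if (random_from_file("/dev/urandom", buf, len))
            return true;
        return random_from_file("/dev/random", buf, len);
    }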
Original patch by Magnus Hagander, with further work by Michael Paquier
and me.
Discussion: <CAB7nPqRy3krN8quR9XujMVVHYtXJ0_60nqgVc6oUk8ygyVkZsA@mail.gmail.com>
The more efficient hashtable speeds up hash-aggregations with more than
a few hundred groups significantly. Improvements of over 120% have been
measured.
Due to the different hash table, queries whose result order is not
fully determined (e.g. GROUP BY without ORDER BY) may change their
result order.
The conversion is largely straight-forward, except that, due to the
static element types of simplehash.h type hashes, the additional data
some users store in elements (e.g. the per-group working data for hash
aggregation) is now stored in TupleHashEntryData->additional. The
meaning of BuildTupleHashTable's entrysize (renamed to additionalsize)
has been changed to only be about the additionally stored size. That
size is only used for the initial sizing of the hash-table.
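For callers, the new pattern looks roughly like this (a sketch;
"size_of_per_group_state" is an illustrative placeholder):

    TupleHashEntry entry;
    bool        isnew;

    entry = LookupTupleHashEntry(hashtable, slot, &isnew);
    if (isnew)
    {
        /* per-group working data now hangs off entry->additional */
        entry->additional = MemoryContextAllocZero(hashtable->tablecxt,
                                                   size_of_per_group_state);
    }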
Reviewed-By: Tomas Vondra
Discussion: <20160727004333.r3e2k2y6fvk2ntup@alap3.anarazel.de>
Use the new simplehash.h to speed up tidbitmap.c uses. For bitmap scan
heavy queries speedups of over 100% have been measured. Both lossy and
exact scans benefit, but the wins are bigger for mostly exact scans.
The conversion is mostly trivial, except that tbm_lossify() now restarts
lossifying at the point it previously stopped. Otherwise the hash table
becomes unbalanced because the scan is done in hash order, leaving the
end of the hashtable more densely filled than the beginning. That caused
performance issues with dynahash as well, but due to the open chaining
they were less pronounced than with the linear addressing from
simplehash.h.
Reviewed-By: Tomas Vondra
Discussion: <20160727004333.r3e2k2y6fvk2ntup@alap3.anarazel.de>
dynahash.c hash tables aren't quite fast enough for some
use-cases. There are several reasons for this lack of performance:
- the use of chaining for collision handling makes them cache
inefficient; that's especially an issue when the tables get bigger.
- as the element sizes for dynahash are only determined at runtime,
offset computations are somewhat expensive
- hash and element comparisons are indirect function calls, causing
unnecessary pipeline stalls
- its two-level structure has some benefits (somewhat natural
partitioning), but increases the number of indirections
To fix several of these, the hash tables have to be adjusted to the
individual use-case at compile time. C unfortunately doesn't provide a
good way to do compile-time code generation (as C++'s templates, for
all their weaknesses, do). Thus the somewhat ugly approach taken here is
to allow for code generation using a macro-templatized header file,
which generates functions and types based on a prefix and other
parameters.
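A hedged sketch of the usage pattern (the SH_* parameters are
simplehash.h's; the element type and hash function here are
illustrative):

    /* the element type must carry a 'status' byte for the table's use */
    typedef struct MyEntry
    {
        uint32  key;
        int     value;
        char    status;
    } MyEntry;

    #define SH_PREFIX myhash
    #define SH_ELEMENT_TYPE MyEntry
    #define SH_KEY_TYPE uint32
    #define SH_KEY key
    #define SH_HASH_KEY(tb, k) DatumGetUInt32(hash_uint32(k))
    #define SH_EQUAL(tb, a, b) ((a) == (b))
    #define SH_SCOPE static inline
    #define SH_DECLARE
    #define SH_DEFINE
    #include "lib/simplehash.h"

This expands into a full set of myhash_create(), myhash_insert(),
myhash_lookup() etc. functions specialized to MyEntry, with no runtime
indirection.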
Later patches use this infrastructure to use such hash tables for
tidbitmap.c (bitmap scans) and execGrouping.c (hash aggregation,
...). In queries where these use up a large fraction of the time, this
has been measured to lead to performance improvements of over 100%.
There are other cases where this could be useful (e.g. catcache.c).
The hash table design chosen is a variant of linear open-addressing. The
biggest disadvantages of simple linear addressing schemes are highly
variable lookup times due to clustering, and deletions leaving a lot of
tombstones around. To address these issues a variant of "robin hood"
hashing is employed. Robin hood hashing optimizes chaining lengths by
moving elements that are close to their optimal bucket ("rich" elements)
out of the way if a to-be-inserted element is further away from its optimal
position (i.e. it's "poor"). While that can make insertions slower, the
average lookup performance is a lot better, and higher fill factors can
be used in a still performant manner. To avoid tombstones - which
normally solve the issue that a deleted node's presence is relevant to
determine whether a lookup needs to continue looking or is done -
buckets following a deleted element are shifted backwards, unless
they're empty or already at their optimal position.
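The deletion side can be sketched as follows (illustrative names, not
simplehash.h's actual ones):

    typedef struct Entry
    {
        uint32  hash;       /* uint32 as in c.h */
        bool    in_use;
        /* ... key/value fields ... */
    } Entry;

    static inline uint32
    optimal_pos(uint32 hash, uint32 mask)
    {
        return hash & mask;     /* table size is a power of two */
    }

    static void
    delete_at(Entry *buckets, uint32 mask, uint32 cur)
    {
        for (;;)
        {
            uint32  next = (cur + 1) & mask;

            /* done if the next bucket is empty or already optimally placed */
            if (!buckets[next].in_use ||
                optimal_pos(buckets[next].hash, mask) == next)
            {
                buckets[cur].in_use = false;
                return;
            }
            /* shift the displaced entry one slot back toward its optimum */
            buckets[cur] = buckets[next];
            cur = next;
        }
    }

No tombstone is ever left behind, at the cost of some extra copying on
deletion.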
There are further possible improvements that can be made to this
implementation. Amongst others:
- Use distance as a termination criterion during searches. This is
generally a good idea, but I've been able to see the overhead of
distance calculations in some cases.
- Consider combining the 'empty' status into the hashvalue, and enforce
storing the hashvalue. That could, in some cases, increase memory
density and remove a few instructions.
- Experiment further with the very conservatively chosen fillfactor.
- Make maximum size of hashtable configurable, to allow storing very
very large tables. That'd require 64-bit hash values to be more common
than now, though.
- some smaller memcpy calls could be optimized to copy larger chunks
But since the new implementation is already considerably faster than
dynahash, it seems sensible to start using it.
Reviewed-By: Tomas Vondra
Discussion: <20160727004333.r3e2k2y6fvk2ntup@alap3.anarazel.de>
These are useful for very hot code paths. Because it's easy to guess
wrongly about likelihood, and because such likelihoods change over time,
they should be used sparingly.
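A sketch of the conventional definition and intended use, assuming
these are likely()/unlikely()-style branch-hint macros built on
__builtin_expect (the committed names and location may differ):

    #if defined(__GNUC__)
    #define likely(x)   __builtin_expect((x) != 0, 1)
    #define unlikely(x) __builtin_expect((x) != 0, 0)
    #else
    #define likely(x)   ((x) != 0)
    #define unlikely(x) ((x) != 0)
    #endif

    /* hint that the error branch is almost never taken */
    if (unlikely(ptr == NULL))
        elog(ERROR, "unexpected NULL pointer");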
Past tests have shown it'd be a good idea to use them in some places,
e.g. in error checks around ereports that ERROR out, but that's work for
later.
Discussion: <20160727004333.r3e2k2y6fvk2ntup@alap3.anarazel.de>
bitshiftright() and bitshiftleft() would recursively call each other
infinitely if the user passed INT_MIN for the shift amount, due to integer
overflow in negating the shift amount. To fix, clamp to -VARBITMAXLEN.
That doesn't change the results since any shift distance larger than the
input bit string's length produces an all-zeroes result.
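The hazard and the clamp, in miniature (a standalone sketch; the real
functions are fmgr-callable and live in varbit.c, and VARBITMAXLEN's
value here is a stand-in):

    #include <limits.h>
    #include <stdio.h>

    #define VARBITMAXLEN (INT_MAX - 7)      /* illustrative stand-in */

    static int
    normalize_shift(int shft)
    {
        /*
         * Negating INT_MIN is undefined (in practice it wraps back to
         * INT_MIN), so clamp before negating; any distance >= the bit
         * string's length zeroes the result anyway, so the clamp can't
         * change the answer.
         */
        if (shft < -VARBITMAXLEN)
            shft = -VARBITMAXLEN;
        return shft;
    }

    int
    main(void)
    {
        int     shft = normalize_shift(INT_MIN);

        printf("negated shift distance: %d\n", -shft);
        return 0;
    }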
Also fix some places that seemed inadequately paranoid about input typmods
exceeding VARBITMAXLEN. While a typmod accepted by anybit_typmodin() will
certainly be much less than that, at least some of these spots are
reachable with user-chosen integer values.
Andreas Seltenreich and Tom Lane
Discussion: <87d1j2zqtz.fsf@credativ.de>