While building error messages to return to the user,
BuildIndexValueDescription, ExecBuildSlotValueDescription and
ri_ReportViolation would happily include the entire key or entire row in
the result returned to the user, even if the user didn't have access to
view all of the columns being included.
Instead, include only those columns which the user is providing or which
the user has select rights on. If the user does not have any rights
to view the table or any of the columns involved then no detail is
provided and a NULL value is returned from BuildIndexValueDescription
and ExecBuildSlotValueDescription. Note that, for key cases, the user
must have access to all of the columns for the key to be shown; a
partial key will not be returned.
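Schematically, the filtering behaves like the sketch below; the helper
names and buffer handling are illustrative stand-ins, not the actual
server functions:

#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-ins for the ACL checks; not real PostgreSQL APIs. */
static bool user_supplied_column(int attnum) { return attnum == 1; }
static bool user_has_select_on(int attnum)   { return attnum == 2; }

/*
 * Build "(val1, val2, ...)" detail text for a key, but only if the user
 * may see every key column; otherwise return NULL so no detail at all
 * is attached to the error.
 */
static const char *
build_key_description(const int *attnums, const char **values, int natts)
{
    static char buf[1024];
    int         off = 0;

    for (int i = 0; i < natts; i++)
    {
        /* all-or-nothing: a partial key is never shown */
        if (!user_supplied_column(attnums[i]) &&
            !user_has_select_on(attnums[i]))
            return NULL;
    }

    off += snprintf(buf + off, sizeof(buf) - off, "(");
    for (int i = 0; i < natts; i++)
        off += snprintf(buf + off, sizeof(buf) - off, "%s%s",
                        i > 0 ? ", " : "", values[i]);
    snprintf(buf + off, sizeof(buf) - off, ")");
    return buf;
}

int
main(void)
{
    int         att[] = {1, 2};
    const char *val[] = {"42", "foo"};
    const char *d = build_key_description(att, val, 2);

    printf("%s\n", d ? d : "(no detail: insufficient privileges)");
    return 0;
}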
Back-patch all the way, as column-level privileges are now in all
supported versions.
This has been assigned CVE-2014-8161, but since the issue and the patch
have already been publicized on pgsql-hackers, there's no point in trying
to hide this commit.
Commit 145343534c arranged to store numeric
NaN values as short-header numerics, but the field access macros did not
get the memo: they thought only "SHORT" numerics have short headers.
Most of the time this makes no difference because we don't access the
weight or dscale of a NaN; but numeric_send does that. As pointed out
by Andrew Gierth, this led to fetching uninitialized bytes.
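The hazard can be shown schematically; the flag values below mirror the
on-disk flag-bit layout, but the macros are simplified stand-ins, not
the real numeric.c definitions:

/* Simplified sketch, not the actual numeric.c macros. */
#define MASK_FLAGBITS   0xC000
#define FLAG_SHORT      0x8000
#define FLAG_NAN        0xC000

/* Buggy: assumes only "SHORT" numerics have a short header, so the
 * weight/dscale accessors read uninitialized bytes for a NaN. */
#define HEADER_IS_SHORT_BUGGY(flags) \
    (((flags) & MASK_FLAGBITS) == FLAG_SHORT)

/* Fixed: NaNs are stored short-header too, so test the high bit,
 * which covers both the SHORT and NAN flag patterns. */
#define HEADER_IS_SHORT_FIXED(flags) \
    (((flags) & FLAG_SHORT) != 0)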
AFAICS this could not have any worse consequences than that; in particular,
an unaligned stored numeric would have been detoasted by PG_GETARG_NUMERIC,
so that there's no risk of a fetch off the end of memory. Still, the code
is wrong on its own terms, and it's not hard to foresee future changes that
might expose us to real risks. So back-patch to all affected branches.
Some fields of the sinfo struct are modified within PG_TRY and then
referenced within PG_CATCH, so, as with the recent patch to async.c, "volatile"
is necessary for strict POSIX compliance; and that propagates to a couple
of subroutines as well as materializeQueryResult() itself. I think the
risk of actual issues here is probably higher than in async.c, because
storeQueryResult() is likely to get inlined into materializeQueryResult(),
leaving the compiler free to conclude that its stores into sinfo fields are
dead code.
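The underlying C rule is the one for setjmp()/longjmp(), which PG_TRY
is built on. A minimal standalone illustration (plain setjmp rather
than the PG_TRY macros):

#include <setjmp.h>
#include <stdio.h>

static jmp_buf env;

static void
might_throw(int i)
{
    if (i == 2)
        longjmp(env, 1);        /* analogous to elog(ERROR) */
}

int
main(void)
{
    /*
     * Without "volatile", C99 7.13.2.1 says the value of a local that
     * was modified between setjmp() and longjmp() is indeterminate after
     * the jump: the compiler may have kept it in a register or deleted
     * the stores as dead.  "volatile" forces each store to memory.
     */
    volatile int processed = 0;

    if (setjmp(env) == 0)
    {
        for (int i = 0; i < 5; i++)
        {
            might_throw(i);
            processed = i + 1;  /* modified inside the "try" block */
        }
    }
    else
    {
        /* analogous to PG_CATCH: must see the latest value */
        printf("failed after %d items\n", processed);
    }
    return 0;
}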
The "callargs" variable is modified within PG_TRY and then referenced
within PG_CATCH, which is exactly the coding pattern we've now found
to be unsafe. Marking "callargs" volatile would be problematic because
it is passed by reference to some Tcl functions, so fix the problem
by not modifying it within PG_TRY. We can just postpone the free()
till we exit the PG_TRY construct, as is already done elsewhere in this
same file.
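A schematic version of the safe pattern, with a hypothetical worker
function standing in for the Tcl call (not the actual pltcl.c code):

#include "postgres.h"           /* PG_TRY and friends, via elog.h */

extern void do_call(char **callargs);   /* hypothetical worker */

static void
safe_call(char **callargs)
{
    PG_TRY();
    {
        /* Use callargs, but do NOT modify or free it in here... */
        do_call(callargs);
    }
    PG_CATCH();
    {
        /* ...so reading it here is safe without "volatile". */
        free(callargs);
        PG_RE_THROW();
    }
    PG_END_TRY();

    /* Success path: postpone the free until after PG_END_TRY. */
    free(callargs);
}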
Also, fix failure to free(callargs) when exiting on too-many-arguments
error. This is only a minor memory leak, but a leak nonetheless.
In passing, remove some unnecessary "volatile" markings in the same
function. Those doubtless are there because gcc 2.95.3 whinged about
them, but we now know that its algorithm for complaining is many bricks
shy of a load.
This is certainly a live bug with compilers that optimize similarly
to current gcc, so back-patch to all active branches.
The "pos" variable is modified within PG_TRY and then referenced
within PG_CATCH, so for strict POSIX conformance it must be marked
volatile. Superficially the code looked safe because pos's address
was taken, which was sufficient to force it into memory ... but it's
not sufficient to ensure that the compiler applies updates exactly
where the program text says to. The volatility marking has to extend
into a couple of subroutines too, but I think that's probably a good
thing because the risk of out-of-order updates is mostly in those
subroutines, not asyncQueueReadAllNotifications() itself. In principle
the compiler could have re-ordered operations such that an error could
be thrown while "pos" had an incorrect value.
It's unclear how real the risk is here, but for safety back-patch
to all active branches.
strncpy() has a well-deserved reputation for being unsafe, so make an
effort to get rid of nearly all occurrences in HEAD.
A large fraction of the remaining uses were passing length less than or
equal to the known strlen() of the source, in which case no null-padding
can occur and the behavior is equivalent to memcpy(), though doubtless
slower and certainly harder to reason about. So just use memcpy() in
these cases.
In other cases, use either StrNCpy() or strlcpy() as appropriate (depending
on whether padding to the full length of the destination buffer seems
useful).
I left a few strncpy() calls alone in the src/timezone/ code, to keep it
in sync with upstream (the IANA tzcode distribution). There are also a
few such calls in ecpg that could possibly do with more analysis.
AFAICT, none of these changes are more than cosmetic, except for the four
occurrences in fe-secure-openssl.c, which are in fact buggy: an overlength
source leads to a non-null-terminated destination buffer and ensuing
misbehavior. These don't seem like security issues, first because no stack
clobber is possible and second because if your values of sslcert etc are
coming from untrusted sources then you've got problems way worse than this.
Still, it's undesirable to have unpredictable behavior for overlength
inputs, so back-patch those four changes to all active branches.
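The fe-secure-openssl.c class of bug in miniature; my_strlcpy() below is
a local stand-in for the strlcpy() PostgreSQL carries in src/port:

#include <stdio.h>
#include <string.h>

/* Stand-in with strlcpy() semantics (PostgreSQL has one in src/port). */
static size_t
my_strlcpy(char *dst, const char *src, size_t siz)
{
    size_t  srclen = strlen(src);

    if (siz > 0)
    {
        size_t  n = (srclen >= siz) ? siz - 1 : srclen;

        memcpy(dst, src, n);
        dst[n] = '\0';          /* always terminated */
    }
    return srclen;
}

int
main(void)
{
    char        buf[8];
    const char *src = "much-longer-than-eight-bytes";

    /*
     * Buggy: with an overlength source, strncpy() fills all 8 bytes and
     * never writes a terminator, so any later read of buf as a string
     * runs off the end of the array.
     */
    strncpy(buf, src, sizeof(buf));

    /* Fixed: truncates but always terminates. */
    my_strlcpy(buf, src, sizeof(buf));
    printf("%s\n", buf);        /* prints "much-lo" */
    return 0;
}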
Move random() and setseed() to a separate table, to have them grouped
together. Also add a notice that random() is not cryptographically secure.
Back-patch of commit 75fdcec145 into
all supported versions, per discussion of the need to document that
random() is just a wrapper around random(3).
This results in a very substantial reduction in disk space usage during
"make check-world", since that sequence involves creation of numerous
temporary installations. It should also help a bit in the buildfarm, even
though the buildfarm script doesn't create as many temp installations,
because the current script misses deleting some of them; and anyway it
seems better to do this once in one place rather than expecting that
script to get it right every time.
In 9.4 and HEAD, also undo the unwise choice in commit b1aebbb6a8
to report strerror(errno) after an rmtree() failure. rmtree has already
reported that, possibly for multiple failures with distinct errnos; and
what's more, by the time it returns there is no good reason to assume
that errno still reflects the last reportable error. So reporting errno
here is at best redundant and at worst badly misleading.
Back-patch to all supported branches, so that future revisions of the
buildfarm script can rely on this behavior.
Per discussion, change the log level of this message to be LOG not WARNING.
The main point of this change is to avoid causing buildfarm run failures
when the stats collector is exceptionally slow to respond, which it not
infrequently is on some of the smaller/slower buildfarm members.
This change does lose notice to an interactive user when his stats query
is looking at out-of-date stats, but the majority opinion (not necessarily
that of yours truly) is that WARNING messages would probably not get
noticed anyway on heavily loaded production systems. A LOG message at
least ensures that the problem is recorded somewhere where bulk auditing
for the issue is possible.
Also, instead of an untranslated "pgstat wait timeout" message, provide
a translatable and hopefully more understandable message "using stale
statistics instead of current ones because stats collector is not
responding". The original text was written hastily under the assumption
that it would never really happen in practice, which we now know to be
unduly optimistic.
Back-patch to all active branches, since we've seen the buildfarm issue
in all branches.
pg_dump.c:dumpDatabase() called ArchiveEntry() with the results of a
query that had been PQclear()ed a couple of lines earlier.
Backpatch to 9.2, where security labels for shared objects were
introduced.
Previously, the xml value resulting from an xpath query would not have
namespace declarations if the namespace declarations were attached to
an ancestor element in the input xml value. That means the output value
was not correct XML. Fix that by running the result value through
xmlCopyNode(), which produces the correct namespace declarations.
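In libxml2 terms the fix is essentially this; the wrapper is
illustrative, not the actual xml.c code:

#include <libxml/tree.h>

/*
 * xmlCopyNode() with extended = 1 makes a recursive copy and reconciles
 * namespaces, so declarations inherited from an ancestor of "node" are
 * re-declared on the copy itself, yielding self-contained XML.
 */
static xmlNodePtr
copy_xpath_result_node(xmlNodePtr node)
{
    return xmlCopyNode(node, 1);
}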
Author: Ali Akbar <the.apaan@gmail.com>
The previous fix mapped the "Norwegian (Bokmål)" locale, which contains a non-ASCII
character, to the pure ASCII alias "norwegian-bokmal". However, it turns
out that more recent versions of the CRT library, in particular MSVCR110
(Visual Studio 2012), changed the behaviour of setlocale() so that if
you pass "norwegian-bokmal" to setlocale, it returns "Norwegian_Norway".
That meant trouble, when setlocale(..., NULL) first returned
"Norwegian (Bokmål)_Norway", which we mapped to "norwegian-bokmal_Norway",
but another call to setlocale(..., "norwegian-bokmal_Norway") returned
"Norwegian_Norway". That caused PostgreSQL to think that they are different
locales, and therefore not compatible. That caused initdb to fail at
CREATE DATABASE.
Older CRT versions seem to accept "Norwegian_Norway" too, so change the
mapping to return "Norwegian_Norway" instead of "norwegian-bokmal".
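A quick way to see the requirement is that the mapped name must
round-trip through setlocale(); a Windows-only sanity check along these
lines (illustrative, not the actual chklocale.c code):

#include <locale.h>
#include <stdio.h>
#include <string.h>

int
main(void)
{
    /* On MSVCR110 and later, "norwegian-bokmal" comes back as
     * "Norwegian_Norway", so only the latter round-trips stably. */
    const char *mapped = "Norwegian_Norway";
    const char *result = setlocale(LC_COLLATE, mapped);

    if (result == NULL || strcmp(result, mapped) != 0)
        printf("unstable mapping: setlocale() returned \"%s\"\n",
               result ? result : "(null)");
    else
        printf("round-trip OK\n");
    return 0;
}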
Backpatch to 9.2 like the previous attempt. We haven't made a release that
includes the previous fix yet, so we don't need to worry about changing the
locale of existing clusters from "norwegian-bokmal" to "Norwegian_Norway".
(Doing any mapping like this at all requires changing the locale of
existing databases; the release notes need to include instructions for
that).
Commit 894459e59f revealed this option to
be broken for NLS builds on Darwin, but "make -C contrib/unaccent check"
and the buildfarm client rely on it. Fix that configuration by
redefining the option to imply LANG=C on Darwin. In passing, use LANG=C
instead of LANG=en on Windows; since only postmaster startup uses that
value, testers are unlikely to notice the change. Back-patch to 9.0,
like the predecessor commit.
Up to now, the "child" executor state trees generated for EvalPlanQual
rechecks have simply shared the ResultRelInfo arrays used for the original
execution tree. However, this leads to dangling-pointer problems, because
ExecInitModifyTable() is all too willing to scribble on some fields of the
ResultRelInfo(s) even when it's being run in one of those child trees.
This trashes those fields from the perspective of the parent tree, because
even if the generated subtree is logically identical to what was in use in
the parent, it's in a memory context that will go away when we're done
with the child state tree.
We do however want to share information in the direction from the parent
down to the children; in particular, fields such as es_instrument *must*
be shared or we'll lose the stats arising from execution of the children.
So the simplest fix is to make a copy of the parent's ResultRelInfo array,
but not copy any fields back at end of child execution.
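In outline, the child-tree setup then does something like this (a
schematic sketch, not the exact committed code):

#include "postgres.h"
#include "nodes/execnodes.h"

/* Schematic sketch of giving the EPQ child its own ResultRelInfo copy. */
static void
give_child_its_own_resultrels(EState *parentestate, EState *estate)
{
    int     numResultRelations = parentestate->es_num_result_relations;

    if (numResultRelations > 0)
    {
        ResultRelInfo *rris = (ResultRelInfo *)
            palloc(numResultRelations * sizeof(ResultRelInfo));

        memcpy(rris, parentestate->es_result_relations,
               numResultRelations * sizeof(ResultRelInfo));
        estate->es_result_relations = rris;     /* child's private copy */
    }
    estate->es_num_result_relations = numResultRelations;
}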
Per report from Manuel Kniep. The added isolation test is based on his
example. In an unpatched memory-clobber-enabled build it will reliably
fail with "ctid is NULL" errors in all branches back to 9.1, as a
consequence of junkfilter->jf_junkAttNo being overwritten with $7f7f.
This test cannot be run as-is before that for lack of WITH syntax; but
I have no doubt that some variant of this problem can arise in older
branches, so apply the code change all the way back.
As of 9.3, ruleutils.c goes to some lengths to ensure that table and column
aliases used in its output are unique. Of course this takes more time than
was required before, which in itself isn't fatal. However, EXPLAIN was set
up so that recalculation of the unique aliases was repeated for each
subexpression printed in a plan. That results in O(N^2) time and memory
consumption for large plan trees, which did not happen in older branches.
Fortunately, the expensive work is the same across a whole plan tree,
so there is no need to repeat it; we can do most of the initialization
just once per query and re-use it for each subexpression. This buys
back most (not all) of the performance loss since 9.2.
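The shape of the fix is simple memoization; all names below are invented
for illustration, not the actual explain.c/ruleutils.c APIs:

#include <stddef.h>

/* Hypothetical types/helpers, for illustration only. */
typedef struct ExplainCtx
{
    void   *deparse_cxt;        /* NULL until first computed */
} ExplainCtx;

static void *
build_deparse_context(void)
{
    static int dummy;
    return &dummy;              /* stub for the expensive computation */
}

static void *
get_deparse_context(ExplainCtx *es)
{
    if (es->deparse_cxt == NULL)
        es->deparse_cxt = build_deparse_context();  /* once per query */
    return es->deparse_cxt;     /* reused for every subexpression */
}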
We need an extra ExplainState field to hold the precalculated deparse
context. That's no problem in HEAD, but in the back branches, expanding
sizeof(ExplainState) seems risky because third-party extensions might
have local variables of that struct type. So, in 9.4 and 9.3, introduce
an auxiliary struct to keep sizeof(ExplainState) the same. We should
refactor the APIs to avoid such local variables in future, but that's
material for a separate HEAD-only commit.
Per gripe from Alexey Bashtanov. Back-patch to 9.3 where the issue
was introduced.
Previously, read() might have returned a length equal to the buffer
length, and then the subsequent store to buf[len] would write a
zero-byte one byte past the end. This doesn't seem likely to be
a security issue, but there's some chance it could result in
pg_standby misbehaving.
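In miniature (illustrative code, not the actual pg_standby.c source):

#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

int
main(void)
{
    char    buf[128];
    int     fd = open("trigger.file", O_RDONLY);    /* illustrative path */
    ssize_t len;

    if (fd < 0)
        return 1;

    /*
     * Buggy: read(fd, buf, sizeof(buf)) may return sizeof(buf), and then
     * buf[len] = '\0' writes one byte past the end of the array.
     * Fixed: reserve a byte for the terminator.
     */
    len = read(fd, buf, sizeof(buf) - 1);
    if (len < 0)
        len = 0;
    buf[len] = '\0';
    printf("%s", buf);
    close(fd);
    return 0;
}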
Spotted by Coverity; patch by Michael Paquier, reviewed by me.
Commit b94ce6e80 reordered postmaster's startup sequence so that the
tempfile directory is only cleaned up after all the necessary state
for pg_ctl is collected. Unfortunately the chosen location is after
the syslogger has been started; which normally is fine, except for
!WIN32 EXEC_BACKEND builds, which pass information to children via
files in the temp directory.
Move the call to RemovePgTempFiles() to just before the syslogger is
started; that's the first child we fork.
Luckily EXEC_BACKEND is pretty much only used by end users on Windows,
which has a separate method to pass information to children. That
means the real world impact of this bug is very small.
Discussion: 20150113182344.GF12272@alap3.anarazel.de
Backpatch to 9.1, just as the previous commit was.
I noticed the "vacuum" regression test taking really significantly longer
than it used to on a slow machine. Investigation pointed the finger at
commit e415b469b3, which added creation of
an index using an extremely expensive index function. That function was
evidently meant to be applied only twice ... but the test re-used an
existing test table, which up till a couple lines before that had had over
two thousand rows. Depending on timing of the concurrent regression tests,
the intervening VACUUMs might have been unable to remove those
recently-dead rows, and then the index build would need to create index
entries for them too, leading to the wrap_do_analyze() function being
executed 2000+ times not twice. Avoid this by using a different table
that is guaranteed to have only the intended two rows in it.
Back-patch to 9.0, like the commit that created the problem.
The mechanism added in commit dbdf9679d7
for associating the correct translation domain with errcontext strings
potentially fails in cases where errcontext() is used within an ereport()
macro. Such usage was not originally envisioned for errcontext(), but we
do have a few places that do it. In this situation, the intended comma
expression becomes just a couple of arguments to errfinish(), which the
compiler might choose to evaluate right-to-left.
Fortunately, in such cases the textdomain for the errcontext string must
be the same as for the surrounding ereport. So we can fix this by letting
errstart initialize context_domain along with domain; then it will have
the correct value no matter which order the calls occur in. (Note that
error stack callback functions are not invoked until errfinish, so normal
usage of errcontext won't affect what happens for errcontext calls within
the ereport macro.)
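The trap is the difference between a comma expression and an argument
list, which a standalone example shows:

#include <stdio.h>

static int
first(void)  { printf("first\n");  return 1; }
static int
second(void) { printf("second\n"); return 2; }

static void
finish(int a, int b) { (void) a; (void) b; }

int
main(void)
{
    /*
     * A comma *expression* evaluates strictly left to right:
     * always prints "first" then "second".
     */
    int x = (first(), second());
    (void) x;

    /*
     * But as two *arguments*, the evaluation order is unspecified
     * (C99 6.5.2.2p10): a compiler may legally print "second" first.
     * This is what happens when errcontext() lands inside the
     * ereport() macro's argument list.
     */
    finish(first(), second());
    return 0;
}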
In passing, make sure that errcontext calls within the main backend set
context_domain to something non-NULL. This isn't a live bug because
NULL would select the current textdomain() setting which should be the
right thing anyway --- but it seems better to handle this completely
consistently with the regular domain field.
Per report from Dmitry Voronin. Backpatch to 9.3; before that, there
wasn't any attempt to ensure that errcontext strings were translated
in an appropriate domain.
Back in ed0b409, PGPROC was split and moved to static variables in
procarray.c, with procs in ProcArrayStruct replaced by an array of
integers representing process numbers (pgprocnos), with -1 indicating a
dead process which has yet to be removed. Access to procArray is
generally done under ProcArrayLock and therefore most code does not have
to concern itself with -1 entries.
However, MinimumActiveBackends intentionally does not take
ProcArrayLock, which means it has to be extra careful when accessing
procArray. Prior to ed0b409, this was handled by checking for a NULL
in the pointer array, but that check was no longer valid after the
split. Coverity pointed out that the check could never happen and so
it was removed in 5592eba. That didn't make anything worse, but it
didn't fix the issue either.
The correct fix is to check for pgprocno == -1 and skip over that entry
if it is encountered.
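The fixed scan, in schematic form (grossly simplified local types, not
the actual procarray.c declarations):

typedef struct PGPROC { int pid; } PGPROC;      /* simplified */

typedef struct ProcArrayStruct
{
    int     numProcs;
    int     pgprocnos[64];      /* -1 marks a dead, not-yet-removed slot */
} ProcArrayStruct;

/* Lock-free reader: must tolerate -1 entries rather than index with them. */
static int
count_active(const ProcArrayStruct *arrayP, const PGPROC *allProcs)
{
    int     count = 0;

    for (int i = 0; i < arrayP->numProcs; i++)
    {
        int     pgprocno = arrayP->pgprocnos[i];

        if (pgprocno == -1)
            continue;           /* skip dead entry */

        const PGPROC *proc = &allProcs[pgprocno];

        (void) proc;            /* ... read-only heuristic checks ... */
        count++;
    }
    return count;
}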
Back-patch to 9.2, since there can be attempts to access the arrays
prior to their start otherwise. Note that the changes prior to 9.4 will
look a bit different due to the change in 5592eba.
Note that MinimumActiveBackends only returns a bool for heuristic
purposes, and any pre-array accesses are strictly read-only, so there is
no security implication; the lack of complaints from the field indicates
it's very unlikely that anyone has run into issues due to this.
Pointed out by Noah.
For some reason I overlooked updating GETTEXT_TRIGGERS so that the right
argument would be read by gettext in 7fcbf6a405. This
will drop the translation percentages for the backend all the way back
to 9.3 ...
Problem reported by Heikki.
Darwin --enable-nls builds use a substitute setlocale() that may start a
thread. Buildfarm member orangutan experienced BackendList corruption
on account of different postmaster threads executing signal handlers
simultaneously. Furthermore, a multithreaded postmaster risks undefined
behavior from sigprocmask() and fork(). Emit LOG messages about the
problem and its workaround. Back-patch to 9.0 (all supported versions).
Typical server invocations already achieved that. Invalid locale
settings in the initial postmaster environment interfered, as could
malloc() failure. Setting "LC_MESSAGES=pt_BR.utf8 LC_ALL=invalid" in
the postmaster environment will now choose C-locale messages, not
Brazilian Portuguese messages. Most localized programs, including all
PostgreSQL frontend executables, do likewise. Users are unlikely to
observe changes involving locale categories other than LC_MESSAGES.
CheckMyDatabase() ensures that we successfully set LC_COLLATE and
LC_CTYPE; main() sets the remaining three categories to locale "C",
which almost cannot fail. Back-patch to 9.0 (all supported versions).
vacuum()'s static variable handling makes it non-reentrant; an ensuing
null pointer dereference crashed the backend. Back-patch to 9.0 (all
supported versions).
The corner case where a relcache invalidation tried to rebuild the
entry for a referenced relation but couldn't find it in the catalog
wasn't correct.
The code tried to RelationCacheDelete/RelationDestroyRelation the
entry. That didn't work when assertions are enabled because the latter
contains an assertion ensuring the refcount is zero. It's also more
generally a bad idea, because by virtue of being referenced somebody
might actually look at the entry, which is possible if the error is
trapped and handled via a subtransaction abort.
Instead just error out, without deleting the entry. As the entry is
marked invalid, the worst that can happen is that the invalid (and at
some point unused) entry lingers in the relcache.
Discussion: 22459.1418656530@sss.pgh.pa.us
There should be no way to hit this case before 9.4, where logical
decoding introduced a bug that can reach it. But since the code for
handling the corner case is there, it should do something halfway sane,
so backpatch all the way back. The logical decoding bug will be
handled in a separate commit.
This never worked, I think. Per report from Marc Munro.
In passing, fix funny spacing in the COMMENT ON command as a result of
excess space in the "label" string.
Commit 0e5680f473 contained a thinko
mixing LOCKMODE with LockTupleMode. This caused misbehavior in the case
where a tuple is marked with a multixact with at most a FOR SHARE lock,
and another transaction tries to acquire a FOR NO KEY EXCLUSIVE lock;
this case should block but doesn't.
Include a new isolation tester spec file to explicitly try all the
tuple lock combinations; without the fix it shows the problem:
starting permutation: s1_begin s1_lcksvpt s1_tuplock2 s2_tuplock3 s1_commit
step s1_begin: BEGIN;
step s1_lcksvpt: SELECT * FROM multixact_conflict FOR KEY SHARE; SAVEPOINT foo;
a
1
step s1_tuplock2: SELECT * FROM multixact_conflict FOR SHARE;
a
1
step s2_tuplock3: SELECT * FROM multixact_conflict FOR NO KEY UPDATE;
a
1
step s1_commit: COMMIT;
With the fixed code, step s2_tuplock3 blocks until session 1 commits,
which is the correct behavior.
All other cases behave correctly.
Backpatch to 9.3, like the commit that introduced the problem.
Previously the computation of the total test duration, measured in
microseconds, overflowed because it was accidentally done in signed
32-bit arithmetic. As the only consequence is that pg_test_timing
invocations with such overly large durations never finished, the
practical consequences of this bug are minor.
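The class of bug, in miniature (illustrative code, not the actual
pg_test_timing.c source):

#include <stdint.h>
#include <stdio.h>

int
main(void)
{
    int32_t duration_secs = 2200;           /* user-requested duration */

    /*
     * Buggy: 2200 * 1000000 is evaluated in 32-bit signed arithmetic and
     * exceeds INT32_MAX (2147483647).  That's undefined behavior in C;
     * in practice it typically wraps to a negative target, so the timing
     * loop never reaches it.
     */
    int32_t total_usecs_bad = duration_secs * 1000000;

    /* Fixed: widen before multiplying. */
    int64_t total_usecs = (int64_t) duration_secs * 1000000;

    printf("bad: %d  good: %lld\n",
           (int) total_usecs_bad, (long long) total_usecs);
    return 0;
}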
Pointed out by Coverity.
Backpatch to 9.2 where pg_test_timing was added.
In the unlikely case of stdin (fd 0) being closed, the off-by-one
would lead to pg_xlogdump failing to open files.
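The class of bug, in miniature (illustrative, not the actual pg_xlogdump
code): open() may legitimately return 0 when stdin has been closed, so
only negative values indicate failure:

#include <fcntl.h>
#include <stdio.h>

int
main(void)
{
    int     fd = open("/etc/hostname", O_RDONLY);   /* illustrative path */

    /*
     * Buggy off-by-one: treats fd 0 as failure.  If stdin has been
     * closed, open() may return 0 for a perfectly successful open.
     */
    if (fd <= 0)
        printf("wrongly reported as failure\n");

    /* Fixed: only negative return values indicate an error. */
    if (fd < 0)
        printf("open failed\n");
    return 0;
}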
Spotted by Coverity.
Backpatch to 9.3 where pg_xlogdump was introduced.
As every error in mark_file_as_archived() will lead to a failure of
pg_basebackup, the FD leak couldn't ever lead to a real problem. It
seems better to fix the leak anyway though, rather than silence
Coverity, as the usage of the function might get extended or copied at
some point in the future.
Pointed out by Coverity.
Backpatch to 9.2, like the relevant part of the previous patch.
WAL (and timeline history) files created by pg_basebackup did not
maintain the new base backup's archive status. That's currently not a
problem if the new node is used as a standby - but if that node is
promoted, all still-existing files can get archived again. With a high
wal_keep_segments setting that can happen a significant time later -
which is quite confusing.
Change both the backend (for the -x/-X fetch case) and pg_basebackup
(for -X stream) itself to always mark WAL/timeline files included in
the base backup as .done. That's in line with walreceiver.c doing so.
The verbosity of the pg_basebackup changes shows pretty clearly that it
needs some refactoring, but that'd result in changes that would not be
backpatchable.
Backpatch to 9.1 where pg_basebackup was introduced.
Discussion: 20141205002854.GE21964@awork2.anarazel.de
Backpatch to 9.3, where src/common was introduced, because a bugfix that
needs to be backpatched requires the function. Earlier branches will
have to duplicate the code.
Use the phraseology "ISO 8601 week-numbering year" in place of just
"ISO year", and make related adjustments to other terminology.
The point of this change is that it seems some people see "ISO year"
and think "standard year", whereupon they're surprised when constructs
like to_char(..., "IYYY-MM-DD") produce nonsensical results. Perhaps
hanging a few more adjectives on it will discourage them from jumping
to false conclusions. I put in an explicit warning against that
specific usage, too, though the main point is to discourage people
who haven't read this far down the page.
In passing fix some nearby markup and terminology inconsistencies.
For simple boolean variables such as ON_ERROR_STOP, psql has for a long
time recognized variant spellings of "on" and "off" (such as "1"/"0"),
and it also made a point of warning you if you'd misspelled the setting.
But these conveniences did not exist for other keyword-valued variables.
In particular, though ECHO_HIDDEN and ON_ERROR_ROLLBACK include "on" and
"off" as possible values, none of the alternative spellings for those were
recognized; and to make matters worse the code would just silently assume
"on" was meant for any unrecognized spelling. Several people have reported
getting bitten by this, so let's fix it. In detail, this patch:
* Allows all spellings recognized by ParseVariableBool() for ECHO_HIDDEN
and ON_ERROR_ROLLBACK.
* Reports a warning for unrecognized values for COMP_KEYWORD_CASE, ECHO,
ECHO_HIDDEN, HISTCONTROL, ON_ERROR_ROLLBACK, and VERBOSITY.
* Recognizes all values for all these variables case-insensitively;
previously there was a mishmash of case-sensitive and case-insensitive
behaviors.
Back-patch to all supported branches. There is a small risk of breaking
existing scripts that were accidentally failing to malfunction; but the
consensus is that the chance of detecting real problems and preventing
future mistakes outweighs this.
Document the long forms of \H \i \ir \o \p \r \w ... apparently, we have
a long and dishonorable history of leaving out the unabbreviated names of
psql backslash commands.
Avoid saying "Unix shell"; we can just say "shell" with equal clarity,
and not leave Windows users wondering whether the feature works for them.
Improve consistency of documentation of \g \o \w metacommands. There's
no reason to use slightly different wording or markup for each one.
We were trying to acquire the lock even when we were subsequently
not sleeping in some other transaction, which opens us up unnecessarily
to deadlocks. In particular, this is troublesome if an update tries to
lock an updated version of a tuple and finds itself doing EvalPlanQual
update chain walking; more than two sessions doing this concurrently
will find themselves sleeping on each other because the HW tuple lock
acquisition in heap_lock_tuple called from EvalPlanQualFetch races with
the same tuple lock being acquired in heap_update -- one of these
sessions sleeps on the other one to finish while holding the tuple lock,
and the other one sleeps on the tuple lock.
Per trouble report from Andrew Sackville-West in
http://www.postgresql.org/message-id/20140731233051.GN17765@andrew-ThinkPad-X230
His scenario can be simplified down to a relatively simple
isolationtester spec file which I don't include in this commit; the
reason is that the current isolationtester is not able to deal with more
than one blocked session concurrently and it blocks instead of raising
the expected deadlock. In the future, if we improve isolationtester, it
would be good to include the spec file in the isolation schedule. I
posted it in
http://www.postgresql.org/message-id/20141212205254.GC1768@alvh.no-ip.org
Hat tip to Mark Kirkwood, who helped diagnose the trouble.
Windows versions later than Windows Server 2003 map "localhost" to ::1.
Account for that in the generated pg_hba.conf, fixing another oversight
in commit f6dc6dd5ba. Back-patch to 9.0,
like that commit.
David Rowley and Noah Misch
For some reason this seems to have been missed when the lists in
src/timezone/tznames/ were first constructed. We can't put it in Default
because of the conflict with US CST, but we should certainly list it among
the alternative entries in Asia.txt. (I checked for other oversights, but
all the other abbreviations that are in current use according to the IANA
files seem to be accounted for.) Noted while responding to bug #12326.
json_agg was originally designed to aggregate records. However, it soon
became clear that it is useful for aggregating all kinds of values, and
that's what it does in 9.3 and 9.4, and in HEAD for both it and jsonb_agg.
The documentation suggested otherwise, so this fixes it.
Explain that you have to use "VARIADIC ARRAY[]" to pass an empty array
to a variadic parameter position. This was already implicit in the text
but it seems better to spell it out.
Per a suggestion from David Johnston, though I didn't use his proposed
wording. Back-patch to all supported branches.
We used time(NULL) to set a TimestampTz field, which gave bogus results.
Noticed while looking at pg_xlogdump output.
Backpatch to 9.3 and above, where the fast promotion was introduced.
In LWLockRelease() (and in 9.4+ LWLockUpdateVar()) we release enqueued
waiters using PGSemaphoreUnlock(). As there are other sources of such
unlocks backends only wake up if MyProc->lwWaiting is set to false;
which is only done in the aforementioned functions.
Before this commit there were dangers because the store to lwWaiting
could become visible before the store to lwWaitLink. This could happen
both due to compiler reordering (on most compilers) and, on some
platforms, due to the CPU reordering stores.
The possible consequence of this is that a backend stops waiting
before lwWaitLink is set to NULL. If that backend then tries to
acquire another lock and has to wait there the list could become
corrupted once the lwWaitLink store is finally performed.
Add a write memory barrier to prevent that issue.
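The fixed release sequence, schematically (a sketch of the pattern, not
the exact lwlock.c code):

#include "postgres.h"
#include "storage/proc.h"       /* PGPROC, PGSemaphoreUnlock */
#include "storage/barrier.h"    /* pg_write_barrier */

/* Schematic wakeup of one dequeued waiter. */
static void
wake_one(PGPROC *proc)
{
    proc->lwWaitLink = NULL;

    /*
     * Make the lwWaitLink store visible before the lwWaiting store: once
     * lwWaiting is false the backend may wake and re-enqueue itself, and
     * a delayed lwWaitLink store from here would then corrupt the new
     * wait list.
     */
    pg_write_barrier();

    proc->lwWaiting = false;
    PGSemaphoreUnlock(&proc->sem);
}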
Unfortunately, barrier support was only added in 9.2. Given that the
issue has not knowingly been observed in practice, it seems sufficient
to prohibit compiler reordering using volatile for 9.0 and 9.1; actual
problems due to compiler reordering are more likely anyway.
Discussion: 20140210134625.GA15246@awork2.anarazel.de