The recent addition of regression tests to uuid-ossp exposed the fact
that the MSVC build system wasn't being consistent about whether it was
building/testing that contrib module, ie, it would try to test the module
even when it hadn't built it. The same hazard was latent for sslinfo.
For the moment I just copied the more up-to-date logic from point A to
point B, but this is screaming for refactoring.
Per buildfarm results.
Commit 5035701e07 improved xlog.c's method
for creating a database system identifier, but I neglected to fix the
copy of that code appearing in pg_resetxlog.c. Spotted by Andres Freund.
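For reference, a minimal sketch of the kind of identifier-generation logic involved (the exact bit layout here is illustrative, not a quotation of xlog.c):

    #include <stdint.h>
    #include <sys/time.h>
    #include <unistd.h>

    /*
     * Illustrative sketch only: mix the full gettimeofday() result with a
     * few PID bits so identifiers created in the same second still differ.
     */
    static uint64_t
    make_system_identifier(void)
    {
        struct timeval tv;
        uint64_t    sysid;

        gettimeofday(&tv, NULL);
        sysid = ((uint64_t) tv.tv_sec) << 32;       /* seconds: top 32 bits */
        sysid |= ((uint64_t) tv.tv_usec) << 12;     /* microseconds: bits 12..31 */
        sysid |= (uint64_t) getpid() & 0xFFF;       /* PID: low 12 bits */
        return sysid;
    }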
Allow the contrib/uuid-ossp extension to be built atop any one of three
popular UUID libraries: OSSP UUID, the e2fsprogs/util-linux libuuid, or the
BSD libc UUID functions. (The extension's name is now arguably a
misnomer, but we'll keep it the same so as not to cause unnecessary
compatibility issues for users.)
We would not normally consider a change like this post-beta1, but the issue
has been forced by our upgrade to autoconf 2.69, whose more rigorous header
checks are causing OSSP's header files to be rejected on some platforms.
It's been foreseen for some time that we'd have to move away from depending
on OSSP UUID due to lack of upstream maintenance, so this is a down payment
on that problem.
While at it, add some simple regression tests, in hopes of catching any
major incompatibilities between the three implementations.
Matteo Beccati, with some further hacking by me
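To illustrate why a common wrapper is needed, here is a rough sketch of the three libraries' rather different C APIs for generating a time-based (V1) UUID; the HAVE_UUID_* symbols stand in for whatever configure actually defines, and note that even the natural output convention differs between them:

    #if defined(HAVE_UUID_OSSP)         /* OSSP UUID */
    #include <uuid.h>

    static void
    make_v1_uuid(char **str)            /* *str must be NULL; library allocates */
    {
        uuid_t     *u;

        uuid_create(&u);
        uuid_make(u, UUID_MAKE_V1);
        uuid_export(u, UUID_FMT_STR, str, NULL);
        uuid_destroy(u);
    }
    #elif defined(HAVE_UUID_E2FS)       /* e2fsprogs / util-linux libuuid */
    #include <uuid/uuid.h>

    static void
    make_v1_uuid(char *str)             /* caller supplies at least 37 bytes */
    {
        uuid_t      u;

        uuid_generate_time(u);
        uuid_unparse(u, str);
    }
    #else                               /* BSD libc DCE-style API */
    #include <stdint.h>
    #include <uuid.h>

    static void
    make_v1_uuid(char **str)            /* library allocates *str */
    {
        uuid_t      u;
        uint32_t    status;

        uuid_create(&u, &status);
        uuid_to_string(&u, str, &status);
    }
    #endif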
Commit 090d0f2050 added new code showing
how it can be useful to set bgw_notify_pid to a non-zero value, but it
failed to make sure that the existing call to RegisterBackgroundWorker
initialized the new field at all.
Report and patch by Shigeru Hanada.
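The defensive pattern, sketched here against the 9.4-era API (worker_spi_main is a hypothetical entry point), is to zero the whole struct before filling in the fields you care about, so that any field added later, such as bgw_notify_pid, starts out as zero:

    #include "postgres.h"
    #include "postmaster/bgworker.h"

    /* hypothetical worker entry point, defined elsewhere in the module */
    extern void worker_spi_main(Datum main_arg);

    void
    _PG_init(void)
    {
        BackgroundWorker worker;

        /* zero everything, so fields added in later releases default to 0 */
        memset(&worker, 0, sizeof(worker));
        snprintf(worker.bgw_name, BGW_MAXLEN, "example worker");
        worker.bgw_flags = BGWORKER_SHMEM_ACCESS;
        worker.bgw_start_time = BgWorkerStart_RecoveryFinished;
        worker.bgw_restart_time = BGW_NEVER_RESTART;
        worker.bgw_main = worker_spi_main;

        RegisterBackgroundWorker(&worker);
    }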
On Mingw, it seems that scanf() doesn't necessarily accept the same format
codes that printf() does, and in particular it may fail to recognize %llu
even though printf() does. Since configure only probes printf() behavior
while setting up the INT64_FORMAT macros, this means it's unsafe to use
those macros with scanf(). We had only one instance of such a coding
pattern, in contrib/pg_stat_statements, so change that code to avoid
the problem.
Per buildfarm warnings. Back-patch to 9.0 where the troublesome code
was introduced.
Michael Paquier
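One portable way to sidestep the problem, sketched here with standard C functions rather than the actual pg_stat_statements fix, is to scan the token as a string and convert it with strtoll():

    #include <stdio.h>
    #include <stdlib.h>

    /* read one whitespace-delimited 64-bit integer field; returns 1 on success */
    static int
    read_int64_field(FILE *fp, long long *result)
    {
        char        buf[32];

        if (fscanf(fp, "%31s", buf) != 1)
            return 0;
        *result = strtoll(buf, NULL, 10);   /* C99, widely available */
        return 1;
    }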
The bug was caused by omitting 'I:' from the short argument list to
getopt_long(). To make similar bugs less likely in the future, reorder the
options in the --help output and in the long and short option lists so that
all three use the same order: alphabetical within groups.
Report and fix by Michael Paquier, some additional reordering by me.
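A minimal sketch of the pitfall (option names here are illustrative): every long option that takes an argument must have a matching colon-suffixed entry in the short-option string, and keeping both lists in the same order makes an omission easy to spot:

    #include <getopt.h>

    static struct option long_options[] = {
        {"file",     required_argument, NULL, 'f'},
        {"startpos", required_argument, NULL, 'I'},
        {"verbose",  no_argument,       NULL, 'v'},
        {NULL, 0, NULL, 0}
    };

    static void
    parse_options(int argc, char **argv)
    {
        int         c;

        /* "I:" must not be forgotten here: --startpos takes an argument */
        while ((c = getopt_long(argc, argv, "f:I:v", long_options, NULL)) != -1)
        {
            /* handle c ... */
        }
    }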
The new page deletion code didn't cope with the case where the target page's
right sibling was marked half-dead. It tripped a sanity check verifying that
the downlinks in the parent page match the lower level, because a half-dead
page has no downlink. To cope, check for that condition, and just give up on
the deletion if it happens. The vacuum will finish the deletion of the
half-dead page when it gets there, and on the next vacuum after that the
empty page can be deleted.
Reported by Jeff Janes.
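In outline, the added guard looks something like this (simplified, not the exact nbtpage.c code):

    BTPageOpaque ropaque = (BTPageOpaque) PageGetSpecialPointer(rpage);

    if (P_ISHALFDEAD(ropaque))
    {
        /*
         * The right sibling is itself half-dead and thus has no downlink
         * in the parent, so the cross-check can't succeed.  Give up for
         * now and let a later vacuum finish that page's deletion first.
         */
        return false;
    }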
Change the total-transactions counters from int32 to int64 to accommodate
cases where we do more than 2^31 transactions during a run. This patch
does not change the INT_MAX limit on explicit "-t" parameters, but it
does allow the product of the -t and -c parameters to exceed INT_MAX, or
allow a -T limit that is large enough that more than 2^31 transactions
can be completed. While pgbench did not actually fail in such cases,
it did print an incorrect total-transactions count, and some of the
derived numbers such as TPS would have been wrong as well.
Tomas Vondra
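The change amounts to widening the accumulators and printing them with a 64-bit format; a standalone sketch using C99 types (pgbench itself uses int64 and INT64_FORMAT):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    static int64_t total_xacts = 0;     /* previously a 32-bit counter */

    static void
    print_results(double duration_secs)
    {
        printf("number of transactions actually processed: %" PRId64 "\n",
               total_xacts);
        printf("tps = %f\n", (double) total_xacts / duration_secs);
    }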
HeapTupleHeaderGetCmax() asserts that it is only used if the tuple has
been updated by the current transaction. That check is correct and
sensible but requires allocating memory if xmax is a multixact. When
wal_level is set to logical, cmax needs to be included in a WAL record that
is generated inside a critical section, and allocating memory there triggers
the assertion added in 4a170ee9e.
Reported-By: Steve Singer
Initialize padding bytes in SharedInvalidationMessage structs so that they
are always defined. Otherwise the sinvaladt.c ringbuffer, which is accessed
by multiple processes, will cause spurious valgrind warnings about undefined
memory being used. That's because valgrind remembers the undefined bytes
from the local process's last store, not realizing that another process has
written to them since, filling the previously uninitialized bytes.
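One way to keep the padding defined, sketched generically here (the actual commit may arrange this differently, e.g. with explicit valgrind annotations), is to zero the whole message before filling in its fields:

    SharedInvalidationMessage msg;

    /* zero the whole union first, so padding and unused arms are defined */
    memset(&msg, 0, sizeof(msg));
    msg.cc.id = cacheId;
    msg.cc.dbId = dbId;
    msg.cc.hashValue = hashValue;

    SIInsertDataEntries(&msg, 1);   /* every byte copied is now defined */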
Commit af7914c662, which introduced the
EXPLAIN (TIMING) option, for some reason coded explain.c to look at
planstate->instrument->need_timer rather than es->timing to decide
whether to print timing info. However, the former flag might get set
as a result of contrib/auto_explain wanting timing information. We
certainly don't want activation of auto_explain to change user-visible
statement behavior, so fix that.
Also fix an independent bug introduced in the same patch: in the code
path for a never-executed node with a machine-friendly output format,
if timing was selected, it would fail to print the Actual Rows and Actual
Loops items.
Per bug #10404 from Tomonari Katsumata. Back-patch to 9.2 where the
faulty code was introduced.
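In outline, the corrected text-format output keys on the user's EXPLAIN option rather than on the instrumentation flag (simplified from explain.c):

    if (es->timing)
        appendStringInfo(es->str,
                         " (actual time=%.3f..%.3f rows=%.0f loops=%.0f)",
                         startup_ms, total_ms, rows, nloops);
    else
        appendStringInfo(es->str,
                         " (actual rows=%.0f loops=%.0f)",
                         rows, nloops);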
I got the backup block numbers off-by-one in the commit that changed the
way incomplete-splits are handled. I blame the comments, which said
"backup block 1" and "backup block 2", even though the backup blocks
are numbered starting from 0, in the macros and functions used in replay.
Fix the comments and the code.
Per Jeff Janes' bug report about corruption caused by torn page writes.
The incorrect code is new in git master, but backpatch the comment change
down to 9.0, where the numbering in the redo-side macros was changed.
C89 says that compound initializers may only contain constant expressions,
a restriction violated by commit 89d00cbe. While we've had no actual field
complaints about this, C89 is still the project standard, and it's not
saving all that much code to break compatibility here. So let's adhere to
the old restriction.
In passing, replace a bunch of hardwired constants "256" with
sizeof(target-variable), just because the latter is more readable and
less breakable. And const-ify where possible.
Back-patch to 9.3 where the nonportable code was added.
Andres Freund and Tom Lane
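For illustration, the kind of construct at issue: C89 permits non-constant initializers for plain scalars, but not inside the braces of an automatic aggregate.

    #include <string.h>

    void
    example(const char *name)
    {
        char        buf[256];

        /*
         * Legal in C99 but rejected by strict C89: an initializer list for
         * an automatic aggregate must contain only constant expressions,
         * and 'name' is not one:
         *
         *    struct { const char *ptr; int len; } e = { name, 0 };
         */

        /* C89-safe: constant initializer, then runtime assignments */
        struct { const char *ptr; int len; } e = { NULL, 0 };

        e.ptr = name;
        e.len = (int) strlen(name);

        /* and prefer sizeof(target) over a hardwired "256" */
        strncpy(buf, name, sizeof(buf) - 1);
        buf[sizeof(buf) - 1] = '\0';
    }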
That's what I get for not fully retesting the final version of the patch.
The replace_allowed cross-check needs an additional special case for
bootstrapping.
RelationCacheInsert() ignored the possibility that hash_search(HASH_ENTER)
might find a hashtable entry already present for the same OID. However,
that can in fact occur during recursive relcache load scenarios. When it
did happen, we overwrote the pointer to the pre-existing Relation, causing
a session-lifespan leakage of that entire structure. As far as is known,
the pre-existing Relation would always have reference count zero by the
time we arrive back at the outer insertion, so add code that deletes the
pre-existing Relation if so. If by some chance its refcount is positive,
elog a WARNING and allow the pre-existing Relation to be leaked as before.
Also, AttrDefaultFetch() was sloppy about leaking the cstring form of the
pg_attrdef.adbin value it's copying into the relcache structure. This is
only a query-lifespan leakage, and normally not very significant, but it
adds up during CLOBBER_CACHE testing.
These bugs are of very ancient vintage, but I'll refrain from back-patching
since there's no evidence that these leaks amount to anything in ordinary
usage.
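The fix, in outline (simplified from the RelationCacheInsert logic): check hash_search()'s found flag instead of assuming the entry is fresh:

    RelIdCacheEnt *idhentry;
    bool        found;

    idhentry = (RelIdCacheEnt *) hash_search(RelationIdCache,
                                             (void *) &(relation->rd_id),
                                             HASH_ENTER, &found);
    if (found)
    {
        /* an entry was already there: reclaim it instead of leaking it */
        Relation    oldrel = idhentry->reldesc;

        if (RelationHasReferenceCountZero(oldrel))
            RelationDestroyRelation(oldrel, false);
        else
            elog(WARNING, "leaking still-referenced duplicate relcache entry");
    }
    idhentry->reldesc = relation;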
The fallback implementation involves acquiring and releasing a spinlock
variable that is otherwise unreferenced --- not even to the extent of
initializing it. This accidentally fails to fail on platforms where the
initialized state of a spinlock is all-zeroes, but elsewhere it results in
a "stuck spinlock" failure during startup.
I griped about this last July, and put in a hack that worked for gcc
on HPPA, but didn't get around to fixing the general case. Per the
discussion back then, the best thing to do seems to be to initialize
dummy_spinlock in main.c.
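For reference, the shape of the fallback and of the fix (a sketch; see barrier.h and main.c for the real code):

    /* fallback when no compiler/arch-specific barrier is available */
    extern slock_t dummy_spinlock;

    #define pg_memory_barrier() \
        do { S_LOCK(&dummy_spinlock); S_UNLOCK(&dummy_spinlock); } while (0)

    /* ... and, once at startup in main(): */
    SpinLockInit(&dummy_spinlock);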
The xl_heap_header_len structures in an XLOG_HEAP_UPDATE record aren't
necessarily aligned adequately. The regular replay function for these
records is aware of that, but decode.c didn't get the memo. I'm not
sure why the buildfarm failed to catch this; the test_decoding test
certainly blows up real good on my old HPPA box.
Also, I'm pretty sure that the address arithmetic was wrong for the
case of XLOG_HEAP_CONTAINS_OLD and not XLOG_HEAP_CONTAINS_NEW_TUPLE,
though this apparently can't happen when logical decoding is active.
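The usual defense, which the replay function already applied, is to copy the struct out of the record into aligned local storage before reading its fields, roughly:

    xl_heap_header_len xlhdr;

    /* 'data' points into the WAL record and may be unaligned: copy, don't cast */
    memcpy(&xlhdr, data, SizeOfHeapHeaderLen);
    data += SizeOfHeapHeaderLen;

    /* xlhdr.t_len and friends are now safe to read on alignment-picky machines */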
transam/README explained how B-tree incomplete splits were tracked and
fixed after recovery, as an example of handling complex actions that need
multiple WAL records, but that's not how it works anymore. Explain the new
paradigm.
Several years ago we changed chr(int) so that if the database encoding is
UTF8, it would interpret its argument as a Unicode code point and expand it
into the appropriate multibyte sequence. However, we weren't sufficiently
careful about checking validity of the input. According to RFC3629, UTF8
disallows code points above U+10FFFF (note that the predecessor standard
RFC2279 was more liberal). Also, both versions of the UTF8 spec agree
that Unicode surrogate-pair codes should never appear in UTF8. Because
our encoding validity checks follow RFC3629, our failure to enforce these
restrictions in chr() means it could be used to produce text strings that
will be rejected when the database is dumped and reloaded. To ensure
consistency with the input functions, let's actually apply
pg_utf8_islegal() to the proposed output of chr().
Per discussion, this seems like too much of a behavioral change to
back-patch, but it's not too late to squeeze it into 9.4.
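A standalone sketch of the restrictions now enforced (the server-side fix additionally runs pg_utf8_islegal() over the generated bytes):

    #include <stdbool.h>
    #include <stdint.h>

    /*
     * RFC 3629: code points above U+10FFFF and the UTF-16 surrogate range
     * U+D800..U+DFFF must never appear in UTF8.
     */
    static bool
    codepoint_ok_for_utf8(uint32_t cp)
    {
        if (cp > 0x10FFFF)
            return false;               /* beyond the RFC 3629 limit */
        if (cp >= 0xD800 && cp <= 0xDFFF)
            return false;               /* surrogate-pair range */
        return true;
    }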
gbt_macad_union also allocated 12-byte structs where we really need 16.
Per report from Andres Freund. No back-patch since there's no current
risk of a real problem.
The macaddr opclass stores two macaddr structs (each of size 6) in an
index column that's declared as being of type gbtreekey16, ie 16 bytes.
In the original coding this led to passing a palloc'd value of size 12
to the index insertion code, so that data would be fetched past the
end of the allocated value during index tuple construction. This makes
valgrind unhappy. In principle it could result in a SIGSEGV, though
with the current implementation of palloc there's no risk since
the 12-byte request size would be rounded up to 16 bytes anyway.
To fix, add a field to struct gbtree_ninfo showing the declared size of
the index datums, and use that in the palloc requests; and use palloc0
to be sure that any wasted bytes are cleanly initialized.
Per report from Andres Freund. No back-patch since there's no current
risk of a real problem.
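The resulting allocation pattern, roughly (the field name indexsize is from memory and may not match the committed code):

    /*
     * Allocate what the index type declares (16 bytes for gbtreekey16),
     * zero-filled so the trailing pad bytes are defined, rather than the
     * 12 bytes the two 6-byte macaddrs actually occupy.
     */
    GBT_NUMKEY *r = (GBT_NUMKEY *) palloc0(tinfo->indexsize);

    memcpy(r, lower, 6);            /* lower-bound macaddr */
    memcpy(r + 6, upper, 6);        /* upper-bound macaddr */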
pg_stat_replication shows connected replication clients. The ddl test case
never has any replication clients connected, so querying pg_stat_replication
is pointless. To check that a slot has been dropped correctly, query
pg_replication_slots instead.
Andres Freund
The decoding of prepared transaction commits accidentally used the XID of
the transaction performing the COMMIT PREPARED, not the XID of the prepared
transaction. Before bb38fb0d43 that led to those transactions not being
decoded; afterwards, to an assertion failure.
Let's complain about e.g. an invalid path or permission problem sooner rather
than later. Before this patch, we would only try to open the output file
after receiving the first decoded message from the server.
Commit dd428c79 added dbId and tsId to the xl_xact_commit struct but missed
that prepared transaction commits reuse that struct. Fix that.
Because those fields were left uninitialized, replaying a commit prepared WAL
record in a hot standby node would fail to remove the relcache init file.
That can lead to "could not open file" errors on the standby. The relcache
init file only needs to be removed when a system table/index is rewritten in
a transaction using two-phase commit, so that should be rare in practice. In
HEAD, the incorrect dbId/tsId values are also used for filtering in logical
replication code, causing the transaction to always be filtered out.
Analysis and fix by Andres Freund. Backpatch to 9.0 where hot standby was
introduced.
In yesterday's commit 2dc4f011fd, I tried
to force buffering of stdout/stderr in initdb to be what it is by
default when the program is run interactively on Unix (since that's how
most manual testing is done). This tripped over the fact that Windows
doesn't support _IOLBF mode. We dealt with that a long time ago in
syslogger.c by falling back to unbuffered mode on Windows. Export that
solution in port.h and use it in initdb.
Back-patch to 8.4, like the previous commit.
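A sketch of the portability wrapper (the macro name here is illustrative; the real one lives in port.h):

    #include <stdio.h>

    /* Windows has no _IOLBF; unbuffered is the closest safe fallback there */
    #ifdef WIN32
    #define LINE_BUFFERED_MODE _IONBF
    #else
    #define LINE_BUFFERED_MODE _IOLBF
    #endif

    static void
    set_interactive_buffering(void)
    {
        setvbuf(stdout, NULL, LINE_BUFFERED_MODE, 0);
        setvbuf(stderr, NULL, _IONBF, 0);   /* stderr: always unbuffered */
    }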
Don't close stdout on SIGHUP. Also, when a SIGHUP is received, close the
file immediately, rather than only after receiving some more data from
the server. Rename a variable to avoid mentally dealing with double
negatives (not unsynced means synced).
The proc array can contain duplicate XIDs, when a transaction is just being
prepared for two-phase commit. To cope, remove any duplicates in
txid_current_snapshot(). Also ignore duplicates in the input functions, so
that if e.g. you have an old pg_dump file that already contains duplicates,
it will be accepted.
Report and fix by Jan Wieck. Backpatch to all supported versions.
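The dedup step amounts to sorting the snapshot's xid array and collapsing adjacent duplicates; a standalone sketch:

    #include <stdint.h>
    #include <stdlib.h>

    typedef uint64_t txid;

    static int
    cmp_txid(const void *aa, const void *bb)
    {
        txid        a = *(const txid *) aa;
        txid        b = *(const txid *) bb;

        if (a < b)
            return -1;
        if (a > b)
            return 1;
        return 0;
    }

    /* sort xip[] and strip duplicates in place; returns the new count */
    static int
    sort_and_uniq_xids(txid *xip, int nxip)
    {
        int         i,
                    n = 0;

        qsort(xip, nxip, sizeof(txid), cmp_txid);
        for (i = 0; i < nxip; i++)
        {
            if (i == 0 || xip[i] != xip[i - 1])
                xip[n++] = xip[i];
        }
        return n;
    }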