Commit Graph

36031 Commits

Author SHA1 Message Date
Robert Haas
14de825dee Recursively fsync() the data directory after a crash.
Otherwise, if there's another crash, some writes from after the first
crash might make it to disk while writes from before the crash fail
to make it to disk.  This could lead to data corruption.
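
Purely for illustration, a standalone POSIX sketch of recursively
fsyncing a directory tree (our names, not the backend's code):

    #define _XOPEN_SOURCE 500     /* for nftw() */
    #include <fcntl.h>
    #include <ftw.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Hypothetical helper, not the backend code: fsync one entry. */
    static int
    fsync_entry(const char *path, const struct stat *sb,
                int typeflag, struct FTW *ftwbuf)
    {
        int fd = open(path, O_RDONLY);

        (void) sb; (void) typeflag; (void) ftwbuf;
        if (fd < 0)
            return 0;             /* skip unopenable entries */
        if (fsync(fd) != 0)
            perror(path);         /* report, but keep walking */
        close(fd);
        return 0;
    }

    /* fsync every file and directory under datadir. */
    int
    fsync_recursively(const char *datadir)
    {
        return nftw(datadir, fsync_entry, 64, FTW_PHYS);
    }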

Back-patch to all supported versions.

Abhijit Menon-Sen, reviewed by Andres Freund and slightly revised
by me.
2015-05-04 12:27:55 -04:00
Alvaro Herrera
e60581fdf3 Fix pg_upgrade's multixact handling (again)
We need to create the pg_multixact/offsets file deleted by pg_upgrade
much earlier than we originally did: it was in TrimMultiXact(), which
runs after we exit recovery, but it actually needs to run earlier than
the first call to SetMultiXactIdLimit (before recovery), because that
routine already wants to read the first offset segment.

Per pg_upgrade trouble report from Jeff Janes.

While at it, silence a compiler warning about a pointless assert testing
that an unsigned variable was non-negative.  This was a signed
constant in Thomas Munro's patch which I changed to unsigned before
commit.  Pointed out by Andres Freund.
2015-04-30 13:55:06 -03:00
Alvaro Herrera
cf0d888ac5 Code review for multixact bugfix
Reword messages, rename a confusingly named function.

Per Robert Haas.
2015-04-28 14:52:29 -03:00
Alvaro Herrera
e2eda4b115 Protect against multixact members wraparound
Multixact member files are subject to early wraparound overflow and
removal: if the average multixact size is above a certain threshold (see
note below), the protections against offset overflow are not enough.
During multixact truncation at checkpoint time, some
pg_multixact/members files would be removed because the server considers
them old and no longer needed.  This leads to loss of files that
are critical to interpret existing tuples' Xmax values.

To protect against this, since we don't have enough info in pg_control
and we can't modify it in old branches, we maintain shared memory state
about the oldest value that we need to keep; we use this during new
multixact creation to abort if an old still-needed file would get
overwritten.  This value is kept up to date by checkpoints, which makes
it not completely accurate but good enough.  We start emitting
warnings somewhat earlier, so that the eventual multixact shutdown
doesn't take DBAs completely by surprise (more precisely: once only 20
member SLRU segments remain before shutdown).

On troublesome average multixact size: The threshold size depends on the
multixact freeze parameters. The oldest age is related to the greater of
multixact_freeze_table_age and multixact_freeze_min_age: anything
older than that should be removed promptly by autovacuum.  If autovacuum
is keeping up with multixact freezing, the troublesome multixact average
size is
	(2^32-1) / Max(freeze table age, freeze min age)
or around 28 members per multixact.  Having an average multixact size
larger than that will eventually cause new multixact data to overwrite
the data area for older multixacts.  (If autovacuum is not able to keep
up, or there are errors in vacuuming, the actual maximum is
multixact_freeze_max_age instead, at which point multixact generation
is stopped completely.  The default value for this limit is 400 million,
which means that the multixact size that would cause trouble is about 10
members).
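
For concreteness, our arithmetic with the usual defaults (150 million
for the freeze table age, and the 400 million limit mentioned above):

	(2^32-1) / 150,000,000 ~= 28.6 members per multixact
	(2^32-1) / 400,000,000 ~= 10.7 members per multixact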

Initial bug report by Timothy Garnett, bug #12990
Backpatch to 9.3, where the problem was introduced.

Authors: Álvaro Herrera, Thomas Munro
Reviews: Thomas Munro, Amit Kapila, Robert Haas, Kevin Grittner
2015-04-28 11:32:53 -03:00
Noah Misch
723613edf1 Build libecpg with -DFRONTEND in all supported versions.
Fix an oversight in commit 151e74719b by
back-patching commit 44c5d387ea to 9.0.
2015-04-26 17:20:10 -04:00
Tom Lane
3e47d0b2a3 Prevent improper reordering of antijoins vs. outer joins.
An outer join appearing within the RHS of an antijoin can't commute with
the antijoin, but somehow I missed teaching make_outerjoininfo() about
that.  In Teodor Sigaev's recent trouble report, this manifests as a
"could not find RelOptInfo for given relids" error within eqjoinsel();
but I think silently wrong query results are possible too, if the planner
misorders the joins and doesn't happen to trigger any internal consistency
checks.  It's broken as far back as we had antijoins, so back-patch to all
supported branches.
2015-04-25 16:44:27 -04:00
Noah Misch
05c13920a1 Build every ECPG library with -DFRONTEND.
Each of the libraries incorporates src/port files, which often check
FRONTEND.  Build systems disagreed on whether to build libpgtypes this
way.  Only libecpg incorporates files that rely on it today.  Back-patch
to 9.0 (all supported versions) to forestall surprises.
2015-04-24 19:29:24 -04:00
Tom Lane
c82e13a915 Fix obsolete comment in set_rel_size().
The cross-reference to set_append_rel_pathlist() was obsoleted by
commit e2fa76d80b, which split what
had been set_rel_pathlist() and child routines into two sets of
functions.  But I (tgl) evidently missed updating this comment.

Back-patch to 9.2 to avoid unnecessary divergence among branches.

Amit Langote
2015-04-24 15:18:43 -04:00
Heikki Linnakangas
f73ebd766a Fix deadlock at startup, if max_prepared_transactions is too small.
When the startup process recovers transactions by scanning the pg_twophase
directory, it should clear MyLockedGxact after it's done processing each
transaction, as we do during normal operation at PREPARE TRANSACTION.
Otherwise, if the startup process exits due to an error, it will try to
clear the locking_backend field of the last recovered transaction. That's
usually harmless, but if the error happens in MarkAsPreparing, while
holding TwoPhaseStateLock, the shmem-exit hook will try to acquire
TwoPhaseStateLock again, and deadlock with itself.

This fixes bug #13128 reported by Grant McAlister. The bug was introduced
by commit bb38fb0d, so backpatch to all supported versions like that
commit.
2015-04-23 21:36:24 +03:00
Alvaro Herrera
7954bc5ecb Fix typo in comment
SLRU_SEGMENTS_PER_PAGE -> SLRU_PAGES_PER_SEGMENT

I introduced this ancient typo in subtrans.c and later propagated it to
multixact.c.  I fixed the latter in f741300c, but only back to 9.3;
backpatch to all supported branches for consistency.
2015-04-14 12:12:18 -03:00
Heikki Linnakangas
a800267e46 Don't archive bogus recycled or preallocated files after timeline switch.
After a timeline switch, we would leave behind recycled WAL segments that
are in the future, but on the old timeline. After promotion, and after they
become old enough to be recycled again, we would notice that they don't have
a .ready or .done file, create a .ready file for them, and archive them.
That's bogus, because the files contain garbage, recycled from an older
timeline (or preallocated as zeros). We shouldn't archive such files.

This could happen when we're following a timeline switch during replay, or
when we switch to new timeline at end-of-recovery.

To fix, whenever we switch to a new timeline, scan the data directory for
WAL segments on the old timeline, but with a higher segment number, and
remove them. Those don't belong to our timeline history, and are most
likely bogus recycled or preallocated files. They could also be valid files
that we streamed from the primary ahead of time, but in any case, they're
not needed to recover to the new timeline.
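
For illustration only (not the xlog.c code), a standalone check of the
kind described, relying on WAL segment file names being 24 hex digits
(timeline, then log, then segment):

    #include <stdbool.h>
    #include <stdio.h>

    /* Does this segment name belong to an older timeline but lie at or
     * beyond the switch point?  Illustration only. */
    static bool
    is_stale_future_segment(const char *fname, unsigned int new_tli,
                            unsigned int switch_log, unsigned int switch_seg)
    {
        unsigned int tli, log, seg;

        if (sscanf(fname, "%08X%08X%08X", &tli, &log, &seg) != 3)
            return false;       /* not a WAL segment file name */
        return tli < new_tli &&
               (log > switch_log ||
                (log == switch_log && seg >= switch_seg));
    }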
2015-04-13 17:22:35 +03:00
Heikki Linnakangas
8dfddf14c5 Remove duplicated words in comments.
David Rowley
2015-04-12 10:49:34 +03:00
Magnus Hagander
3b4da9ae99 Fix incorrect punctuation
Amit Langote
2015-04-09 13:36:07 +02:00
Alvaro Herrera
0d6c9e061b Fix autovacuum launcher shutdown sequence
It was previously possible to have the launcher re-execute its main loop
before shutting down if some other signal was received or an error
occurred after getting SIGTERM, as reported by Qingqing Zhou.

While investigating, Tom Lane further noticed that if autovacuum had
been disabled in the config file, it would misbehave by trying to start
a new worker instead of bailing out immediately -- it would consider
itself invoked in emergency mode.

Fix both problems by checking the shutdown flag in a few more places.
These problems have existed since autovacuum was introduced, so
backpatch all the way back.
2015-04-08 13:19:49 -03:00
Tom Lane
b1145ca198 Fix assorted inconsistent function declarations.
While gcc doesn't complain if you declare a function "static" and then
define it not-static, other compilers do; and in any case the code is
highly misleading this way.  Add the missing "static" keywords to a
couple of recent patches.  Per buildfarm member pademelon.
2015-04-07 16:56:21 -04:00
Fujii Masao
4e3b1e2389 Fix typo in libpq.sgml.
Back-patch to all supported versions.

Michael Paquier
2015-04-06 12:17:24 +09:00
Tom Lane
6347bdb314 Suppress clang's unhelpful gripes about -pthread switch being unused.
Considering the number of cases in which "unused" command line arguments
are silently ignored by compilers, it's fairly astonishing that anybody
thought this warning was useful; it's certainly nothing but an annoyance
when building Postgres.  One such case is that neither gcc nor clang
complain about unrecognized -Wno-foo switches, making it harder than one
could wish to figure out whether such a switch does anything.

Back-patch to 9.3, which is as far back as the patch applies conveniently
(we'd have to back-patch PGAC_PROG_CC_VAR_OPT to go further, and it doesn't
seem worth that).
2015-04-05 13:01:55 -04:00
Tom Lane
e105df208c Fix incorrect matching of subexpressions in outer-join plan nodes.
Previously we would re-use input subexpressions in all expression trees
attached to a Join plan node.  However, if it's an outer join and the
subexpression appears in the nullable-side input, this is potentially
incorrect for apparently-matching subexpressions that came from above
the outer join (ie, targetlist and qpqual expressions), because the
executor will treat the subexpression value as NULL when maybe it should
not be.

The case is fairly hard to hit because (a) you need a non-strict
subexpression (else NULL is correct), and (b) we don't usually compute
expressions in the outputs of non-toplevel plan nodes.  But we might do
so if the expressions are sort keys for a mergejoin, for example.

Probably in the long run we should make a more explicit distinction between
Vars appearing above and below an outer join, but that will be a major
planner redesign and not at all back-patchable.  For the moment, just hack
set_join_references so that it will not match any non-Var expressions
coming from nullable inputs to expressions that came from above the join.
(This is somewhat overkill, in that a strict expression could still be
matched, but it doesn't seem worth the effort to check that.)

Per report from Qingqing Zhou.  The added regression test case is based
on his example.

This has been broken for a very long time, so back-patch to all active
branches.
2015-04-04 19:55:15 -04:00
Tom Lane
cbccaf22bf Remove unnecessary variables in _hash_splitbucket().
Commit ed9cc2b5df made it unnecessary to pass
start_nblkno to _hash_splitbucket(), and for that matter unnecessary to
have the internal nblkno variable either.  My compiler didn't complain
about that, but some did.  I also rearranged the use of oblkno a bit to
make that case more parallel.

Report and initial patch by Petr Jelinek, rearranged a bit by me.
Back-patch to all branches, like the previous patch.
2015-04-03 16:49:11 -04:00
Alvaro Herrera
f4540cae10 psql: fix \connect with URIs and conninfo strings
psql was already accepting conninfo strings as the first parameter in
\connect, but the way it worked wasn't sane; some of the other
parameters would get the previous connection's values, causing it to
connect to a completely unexpected server or, more likely, not finding
any server at all because of completely wrong combinations of
parameters.

Fix by explicitly checking for a conninfo-looking parameter in the
dbname position; if one is found, use its complete specification rather
than mix with the other arguments.  Also, change tab-completion to not
try to complete conninfo/URI-looking "dbnames" and document that
conninfos are accepted as the first argument.
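
For illustration, a rough standalone approximation of the "looks like a
conninfo string" test (the real code duplicates libpq's internal checks):

    #include <stdbool.h>
    #include <string.h>

    /* Approximation only: treat the dbname as a connection string if it
     * is a URI or contains a keyword=value pair. */
    static bool
    looks_like_conninfo(const char *dbname)
    {
        if (strncmp(dbname, "postgresql://", 13) == 0 ||
            strncmp(dbname, "postgres://", 11) == 0)
            return true;
        return strchr(dbname, '=') != NULL;
    }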

There was a weak consensus to backpatch this, because while the behavior
of using the dbname as a conninfo is nowhere documented for \connect, it
is reasonable to expect that it works because it does work in many other
contexts.  Therefore this is backpatched all the way back to 9.0.

To implement this, routines previously private to libpq have been
duplicated so that psql can decide what looks like a conninfo/URI
string.  In back branches, just duplicate the same code all the way back
to 9.2, where URIs were introduced; 9.0 and 9.1 have a simpler version.
In master, the routines are moved to src/common and renamed.

Author: David Fetter, Andrew Dunstan.  Some editorialization by me
(probably earning a Gierth's "Sloppy" badge in the process.)
Reviewers: Andrew Gierth, Erik Rijkers, Pavel Stěhule, Stephen Frost,
Robert Haas, Andrew Dunstan.
2015-04-01 20:00:07 -03:00
Tom Lane
44f8f56e6d Fix incorrect markup in documentation of window frame clauses.
You're required to write either RANGE or ROWS to start a frame clause,
but the documentation incorrectly implied this is optional.  Noted by
David Johnston.
2015-03-31 20:03:50 -04:00
Heikki Linnakangas
9f06729e2a Remove spurious semicolons.
Petr Jelinek
2015-03-31 15:15:04 +03:00
Andrew Dunstan
0904eb3e19 Run pg_upgrade and pg_resetxlog with restricted token on Windows
As with initdb, these programs need to run with a restricted token, and
if they don't, pg_upgrade will fail when run as a user with Administrator
privileges.

Backpatch to all live branches. On the development branch the code is
reorganized so that the restricted token code is now in a single
location. On the stable branches a less invasive change is made by
simply copying the relevant code to pg_upgrade.c and pg_resetxlog.c.

Patches and bug report from Muhammad Asif Naeem, reviewed by Michael
Paquier, slightly edited by me.
2015-03-30 17:17:17 -04:00
Tom Lane
246bbf65ce Fix bogus concurrent use of _hash_getnewbuf() in bucket split code.
_hash_splitbucket() obtained the base page of the new bucket by calling
_hash_getnewbuf(), but it held no exclusive lock that would prevent some
other process from calling _hash_getnewbuf() at the same time.  This is
contrary to _hash_getnewbuf()'s API spec and could in fact cause failures.
In practice, we must only call that function while holding write lock on
the hash index's metapage.

An additional problem was that we'd already modified the metapage's bucket
mapping data, meaning that failure to extend the index would leave us with
a corrupt index.

Fix both issues by moving the _hash_getnewbuf() call to just before we
modify the metapage in _hash_expandtable().

Unfortunately there's still a large problem here, which is that we could
also incur ENOSPC while trying to get an overflow page for the new bucket.
That would leave the index corrupt in a more subtle way, namely that some
index tuples that should be in the new bucket might still be in the old
one.  Fixing that seems substantially more difficult; even preallocating as
many pages as we could possibly need wouldn't entirely guarantee that the
bucket split would complete successfully.  So for today let's just deal
with the base case.

Per report from Antonin Houska.  Back-patch to all active branches.
2015-03-30 16:40:05 -04:00
Tom Lane
995a664c85 Add vacuum_delay_point call in compute_index_stats's per-sample-row loop.
Slow functions in index expressions might cause this loop to take long
enough to make it worth being cancellable.  Probably it would be enough
to call CHECK_FOR_INTERRUPTS here, but for consistency with other
per-sample-row loops in this file, let's use vacuum_delay_point.

Report and patch by Jeff Janes.  Back-patch to all supported branches.
2015-03-29 15:04:24 -04:00
Tatsuo Ishii
56abebb9be Make SyncRepWakeQueue a static function
It is only used in src/backend/replication/syncrep.c.

Back-patch to all supported branches except 9.1, which already declares
the function static.
2015-03-26 10:39:18 +09:00
Tom Lane
7cd5498b37 Fix ExecOpenScanRelation to take a lock on a ROW_MARK_COPY relation.
ExecOpenScanRelation assumed that any relation listed in the ExecRowMark
list has been locked by InitPlan; but this is not true if the rel's
markType is ROW_MARK_COPY, which is possible if it's a foreign table.

In most (possibly all) cases, failure to acquire a lock here isn't really
problematic because the parser, planner, or plancache would have taken the
appropriate lock already.  In principle though it might leave us vulnerable
to working with a relation that we hold no lock on, and in any case if the
executor isn't depending on previously-taken locks otherwise then it should
not do so for ROW_MARK_COPY relations.

Noted by Etsuro Fujita.  Back-patch to all active versions, since the
inconsistency has been there a long time.  (It's almost certainly
irrelevant in 9.0, since that predates foreign tables, but the code's
still wrong on its own terms.)
2015-03-24 15:53:06 -04:00
Tom Lane
83587a075d Replace insertion sort in contrib/intarray with qsort().
It's all very well to claim that a simplistic sort is fast in easy
cases, but O(N^2) in the worst case is not good ... especially if the
worst case is as easy to hit as "descending order input".  Replace that
bit with our standard qsort.
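
For reference, the standard-qsort idiom looks like this minimal
standalone version (not the contrib/intarray code itself):

    #include <stdlib.h>

    /* Comparator written to avoid overflow in "a - b". */
    static int
    cmp_int32(const void *a, const void *b)
    {
        int av = *(const int *) a;
        int bv = *(const int *) b;

        return (av > bv) - (av < bv);
    }

    static void
    sort_ints(int *vals, size_t n)
    {
        qsort(vals, n, sizeof(int), cmp_int32);
    }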

Per bug #12866 from Maksym Boguk.  Back-patch to all active branches.
2015-03-15 23:22:03 -04:00
Tom Lane
2cb76fa6ff Remove workaround for ancient incompatibility between readline and libedit.
GNU readline defines the return value of write_history() as "zero if OK,
else an errno code".  libedit's version of that function used to have a
different definition (to wit, "-1 if error, else the number of lines
written to the file").  We tried to work around that by checking whether
errno had become nonzero, but this method has never been kosher according
to the published API of either library.  It's reportedly completely broken
in recent Ubuntu releases: psql bleats about "No such file or directory"
when saving ~/.psql_history, even though the write worked fine.
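
Under the readline convention, caller code looks like this sketch
(assuming a GNU-readline-compatible library):

    #include <stdio.h>
    #include <string.h>
    #include <readline/history.h>

    /* write_history() returns 0 on success, else an errno value. */
    static void
    save_history(const char *fname)
    {
        int rc = write_history(fname);

        if (rc != 0)
            fprintf(stderr, "could not save history to \"%s\": %s\n",
                    fname, strerror(rc));
    }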

However, libedit has been following the readline definition since somewhere
around 2006, so it seems all right to finally break compatibility with
ancient libedit releases and trust that the return value is what readline
specifies.  (I'm not sure when the various Linux distributions incorporated
this fix, but I did find that OS X has been shipping fixed versions since
10.5/Leopard.)

If anyone is still using such an ancient libedit, they will find that psql
complains it can't write ~/.psql_history at exit, even when the file was
written correctly.  This is no worse than the behavior we're fixing for
current releases.

Back-patch to all supported branches.
2015-03-14 13:43:13 -04:00
Tatsuo Ishii
089b371a39 Fix integer overflow in debug message of walreceiver
The message tries to report the replication apply delay, which fails if
the first WAL record has not been applied yet. The fix is to show "N/A",
indicating that the delay data is not yet available, instead of an
overflowed negative number. Problem reported by me and patch by
Fabrízio de Royes Mello.

Back-patched to the 9.4, 9.3 and 9.2 stable branches (9.1 and 9.0 do not
have the debug message).
2015-03-14 08:21:56 +09:00
Tom Lane
5bdf3cf5ad Ensure tableoid reads correctly in EvalPlanQual-manufactured tuples.
The ROW_MARK_COPY path in EvalPlanQualFetchRowMarks() was just setting
tableoid to InvalidOid, I think on the assumption that the referenced
RTE must be a subquery or other case without a meaningful OID.  However,
foreign tables also use this code path, and they do have meaningful
table OIDs; so failure to set the tuple field can lead to user-visible
misbehavior.  Fix that by fetching the appropriate OID from the range
table.

There's still an issue about whether CTID can ever have a meaningful
value in this case; at least with postgres_fdw foreign tables, it does.
But that is a different problem that seems to require a significantly
different patch --- it's debatable whether postgres_fdw really wants to
use this code path at all.

Simplified version of a patch by Etsuro Fujita, who also noted the
problem to begin with.  The issue can be demonstrated in all versions
having FDWs, so back-patch to 9.1.
2015-03-12 13:38:49 -04:00
Tom Lane
d16d821faa Cast to (void *) rather than (int *) when passing int64's to PQfn().
This is a possibly-vain effort to silence a Coverity warning about
bogus endianness dependency.  The code's fine, because it takes care
of endianness issues for itself, but Coverity sees an int64 being
passed to an int* argument and not unreasonably suspects something's
wrong.  I'm not sure if putting the void* cast in the way will shut it
up; but it can't hurt and seems better from a documentation standpoint
anyway, since the pointer is not used as an int* in this code path.
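
The pattern in question looks roughly like this sketch (ours, modeled on
the description above, not the exact fe-lobj.c code):

    #include <libpq-fe.h>

    /* Pass an int64 to PQfn() by reference: 8 raw bytes, not an int. */
    static void
    set_int64_arg(PQArgBlock *arg, pg_int64 *value)
    {
        arg->isint = 0;               /* by reference */
        arg->len = 8;                 /* 8 bytes on the wire */
        arg->u.ptr = (void *) value;  /* (void *), since the pointee is
                                       * not really an int */
    }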

Just for a bit of additional safety, verify that the result length
is 8 bytes as expected.

Back-patch to 9.3 where the code in question was added.
2015-03-08 13:58:39 -04:00
Tom Lane
9937f6e4c8 Fix documentation for libpq's PQfn().
The SGML docs claimed that 1-byte integers could be sent or received with
the "isint" options, but no such behavior has ever been implemented in
pqGetInt() or pqPutInt().  The in-code documentation header for PQfn() was
even less in tune with reality, and the code itself used parameter names
matching neither the SGML docs nor its libpq-fe.h declaration.  Do a bit
of additional wordsmithing on the SGML docs while at it.

Since the business about 1-byte integers is a clear documentation bug,
back-patch to all supported branches.
2015-03-08 13:35:41 -04:00
Tom Lane
d645273cff Rethink function argument sorting in pg_dump.
Commit 7b583b20b1 created an unnecessary
dump failure hazard by applying pg_get_function_identity_arguments()
to every function in the database, even those that won't get dumped.
This could result in snapshot-related problems if concurrent sessions are,
for example, creating and dropping temporary functions, as noted by Marko
Tiikkaja in bug #12832.  While this is by no means pg_dump's only such
issue with concurrent DDL, it's unfortunate that we added a new failure
mode for cases that used to work, and even more so that the failure was
created for basically cosmetic reasons (ie, to sort overloaded functions
more deterministically).

To fix, revert that patch and instead sort function arguments using
information that pg_dump has available anyway, namely the names of the
argument types.  This will produce a slightly different sort ordering for
overloaded functions than the previous coding; but applying strcmp
directly to the output of pg_get_function_identity_arguments really was
a bit odd anyway.  The sorting will still be name-based and hence
independent of possibly-installation-specific OID assignments.  A small
additional benefit is that sorting now works regardless of server version.

Back-patch to 9.3, where the previous commit appeared.
2015-03-06 13:27:46 -05:00
Alvaro Herrera
49bb34382d Fix contrib/file_fdw's expected file
I forgot to update it on yesterday's cf34e373fc.
2015-03-06 11:47:09 -03:00
Alvaro Herrera
5cf4000034 Fix user mapping object description
We were using "user mapping for user XYZ" as description for user mappings, but
that's ambiguous because users can have mappings on multiple foreign
servers; therefore change it to "for user XYZ on server UVW" instead.
Object identities for user mappings are also updated in the same way, in
branches 9.3 and above.

The incomplete description string was introduced together with the whole
SQL/MED infrastructure by commit cae565e503 of the 8.4 era, so backpatch all
the way back.
2015-03-05 18:03:16 -03:00
Alvaro Herrera
73f236f579 Add comment for "is_internal" parameter
This was missed in my commit f4c4335 of 9.3 vintage, so backpatch to
that.
2015-03-03 14:04:34 -03:00
Stephen Frost
43d81f16a3 Fix pg_dump handling of extension config tables
Since 9.1, we've provided extensions with a way to denote
"configuration" tables: tables created by an extension which the user
may modify.  By marking these as "configuration" tables, the extension
is asking for the data in these tables to be pg_dump'd (tables which
are not marked in this way are assumed to be entirely handled during
CREATE EXTENSION and are not included at all in a pg_dump).

Unfortunately, pg_dump neglected to consider foreign key relationships
between extension configuration tables and therefore could end up
trying to reload the data in an order which would cause FK violations.

This patch teaches pg_dump about these dependencies, so that the data
is dumped out in the best order possible.  Note that there's no
way to handle circular dependencies, but those have yet to be seen in
the wild.

The release notes for this should include a caution to users that
existing pg_dump-based backups may be invalid due to this issue.  The
data is all there, but restoring from it will require extracting the
data for the configuration tables and then loading them in the correct
order by hand.

Discussed initially back in bug #6738, more recently brought up by
Gilles Darold, who provided an initial patch which was further reworked
by Michael Paquier.  Further modifications and documentation updates
by me.

Back-patch to 9.1 where we added the concept of extension configuration
tables.
2015-03-02 14:12:33 -05:00
Noah Misch
585f16dc80 Unlink static libraries before rebuilding them.
When the library already exists in the build directory, "ar" preserves
members not named on its command line.  This mattered when, for example,
a "configure" rerun dropped a file from $(LIBOBJS).  libpgport carried
the obsolete member until "make clean".  Back-patch to 9.0 (all
supported versions).
2015-03-01 13:06:39 -05:00
Tom Lane
1b558782b7 Fix planning of star-schema-style queries.
Part of the intent of the parameterized-path mechanism was to handle
star-schema queries efficiently, but some overly-restrictive search
limiting logic added in commit e2fa76d80b
prevented such cases from working as desired.  Fix that and add a
regression test about it.  Per gripe from Marc Cousin.

This is arguably a bug rather than a new feature, so back-patch to 9.2
where parameterized paths were introduced.
2015-02-28 12:43:04 -05:00
Andres Freund
abce8dc7d6 Reconsider when to wait for WAL flushes/syncrep during commit.
Up to now RecordTransactionCommit() waited for WAL to be flushed (if
synchronous_commit != off) and to be synchronously replicated (if
enabled), even if a transaction did not have an xid assigned. The primary
reason for that is that a sequence's nextval() did not assign an xid, but
its effects are still worth waiting for on commit.

This can be problematic because sometimes read-only transactions do
write WAL, e.g. HOT page prune records. That could then lead to read-only
transactions having to wait during commit, which is not something people
expect in a read-only transaction.

This led to such strange symptoms as backends being seemingly stuck
during connection establishment when all synchronous replicas are
down.  That is especially annoying when the stuck connection is the
standby trying to reconnect to allow syncrep again...

This behavior is also involved in a rather complicated <= 9.4 bug where
the transaction started by catchup interrupt processing waited for
syncrep using latches, but didn't get the wakeup because it was already
running inside the same overloaded signal handler. The fix here doesn't
properly solve that issue, it merely papers over the problem. In
9.5 catchup interrupts aren't processed out of signal handlers anymore.

To fix all this, make nextval() acquire a top level xid, and only wait for
transaction commit if a transaction both acquired an xid and emitted WAL
records.  If only an xid has been assigned we don't want to wait uselessly
just because of writes to temporary/unlogged tables; if only WAL
has been written we don't want to wait just because of HOT prunes.

The xid assignment in nextval() is unlikely to cause overhead in
real-world workloads. For one, it only happens every SEQ_LOG_VALS (32)
values anyway; for another, only usage of nextval() without using the
result in an insert or similar is affected.

Discussion: 20150223165359.GF30784@awork2.anarazel.de,
    369698E947874884A77849D8FE3680C2@maumau,
    5CF4ABBA67674088B3941894E22A0D25@maumau

Per complaint from maumau and Thom Brown

Backpatch all the way back; 9.0 doesn't have syncrep, but it seems
better to have consistent behavior across all maintained branches.
2015-02-26 12:50:07 +01:00
Noah Misch
4651e37cb1 Free SQLSTATE and SQLERRM no earlier than other PL/pgSQL variables.
"RETURN SQLERRM" prompted plpgsql_exec_function() to read from freed
memory.  Back-patch to 9.0 (all supported versions).  Little code ran
between the premature free and the read, so non-assert builds are
unlikely to witness user-visible consequences.
2015-02-25 23:48:54 -05:00
Tom Lane
f864fe0740 Fix dumping of views that are just VALUES(...) but have column aliases.
The "simple" path for printing VALUES clauses doesn't work if we need
to attach nondefault column aliases, because there's noplace to do that
in the minimal VALUES() syntax.  So modify get_simple_values_rte() to
detect nondefault aliases and treat that as a non-simple case.  This
further exposes that the "non-simple" path never actually worked;
it didn't produce valid syntax.  Fix that too.  Per bug #12789 from
Curtis McEnroe, and analysis by Andrew Gierth.

Back-patch to all supported branches.  Before 9.3, this also requires
back-patching the part of commit 092d7ded29
that created get_simple_values_rte() to begin with; inserting the extra
test into the old factorization of that logic would've been too messy.
2015-02-25 12:01:12 -05:00
Andres Freund
a6ddff8122 Guard against spurious signals in LockBufferForCleanup.
When LockBufferForCleanup() has to wait to get a cleanup lock on a
buffer, it does so by setting a flag in the buffer header and then waiting
for other backends to signal it using ProcWaitForSignal().
Unfortunately LockBufferForCleanup() missed that ProcWaitForSignal() can
return for other reasons than the signal it is hoping for. If such a
spurious signal arrives the wait flags on the buffer header will still
be set. That then triggers "ERROR: multiple backends attempting to wait
for pincount 1".

The fix is simple: unset the flag if it is still set when retrying. That
implies an additional spinlock acquisition/release, but that's unlikely
to matter given the cost of waiting for a cleanup lock.  Alternatively
it'd have been possible to move responsibility for maintaining the
relevant flag to the waiter altogether, but that might have had
negative consequences due to possible floods of signals, besides being
more invasive.

This looks to be a very longstanding bug. The relevant code in
LockBufferForCleanup() hasn't changed materially since its introduction
and ProcWaitForSignal() was documented to return for unrelated reasons
since 8.2.  The master only patch series removing ImmediateInterruptOK
made it much easier to hit though, as ProcSendSignal/ProcWaitForSignal
now uses a latch shared with other tasks.

Per discussion with Kevin Grittner, Tom Lane and me.

Backpatch to all supported branches.

Discussion: 11553.1423805224@sss.pgh.pa.us
2015-02-23 16:14:15 +01:00
Heikki Linnakangas
cdf813c593 Fix potential deadlock with libpq non-blocking mode.
If libpq's output buffer is full, the pqSendSome() function tries to drain
any incoming data. This avoids deadlock in case the server e.g. sends a
lot of NOTICE messages and blocks until we read them. However, pqSendSome()
only did that in blocking mode. In non-blocking mode, the deadlock could
still happen.

To fix, take a two-pronged approach:

1. Change the documentation to instruct that when PQflush() returns 1, you
should wait for both read- and write-ready, and call PQconsumeInput() if it
becomes read-ready (see the sketch after this list). That fixes the
deadlock, but applications are not going to change overnight.

2. In pqSendSome(), drain the input buffer before returning 1. This
alleviates the problem for applications that only wait for write-ready. In
particular, a slow but steady stream of NOTICE messages during COPY FROM
STDIN will no longer cause a deadlock. The risk remains that the server
attempts to send a large burst of data and fills its output buffer, and at
the same time the client also sends enough data to fill its output buffer.
The application will deadlock if it goes to sleep, waiting for the socket
to become write-ready, before the server's data arrives. In practice,
NOTICE messages and such that the server might be sending are usually
short, so it's highly unlikely that the server would fill its output buffer
so quickly.
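
A sketch of the loop recommended in point 1 (our example; it uses only
documented libpq calls and a plain select() loop):

    #include <sys/select.h>
    #include <libpq-fe.h>

    /* Flush pending output on a non-blocking connection, waiting for
     * read- OR write-ready and consuming input when readable. */
    static int
    flush_nonblocking(PGconn *conn)
    {
        int rc;

        while ((rc = PQflush(conn)) == 1)
        {
            int    sock = PQsocket(conn);
            fd_set rfds, wfds;

            if (sock < 0)
                return -1;
            FD_ZERO(&rfds);
            FD_ZERO(&wfds);
            FD_SET(sock, &rfds);
            FD_SET(sock, &wfds);
            if (select(sock + 1, &rfds, &wfds, NULL, NULL) < 0)
                return -1;
            if (FD_ISSET(sock, &rfds) && !PQconsumeInput(conn))
                return -1;      /* connection broken */
        }
        return rc;              /* 0 = flushed, -1 = error */
    }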

Backpatch to all supported versions.
2015-02-23 13:32:42 +02:00
Tom Lane
f389b6e0a7 Fix misparsing of empty value in conninfo_uri_parse_params().
After finding an "=" character, the pointer was advanced twice when it
should only advance once.  This is harmless as long as the value after "="
has at least one character; but if it doesn't, we'd miss the terminator
character and include too much in the value.

In principle this could lead to reading off the end of memory.  It does not
seem worth treating as a security issue though, because it would happen on
client side, and besides client logic that's taking conninfo strings from
untrusted sources has much worse security problems than this.

Report and patch received off-list from Thomas Fanghaenel.
Back-patch to 9.2 where the faulty code was introduced.
2015-02-21 12:59:39 -05:00
Alvaro Herrera
a196e67f93 Fix object identities for pg_conversion objects
We were neglecting to schema-qualify them.

Backpatch to 9.3, where object identities were introduced as a concept
by commit f8348ea32e.
2015-02-18 14:28:12 -03:00
Tom Lane
a7ad5cf0cf Fix failure to honor -Z compression level option in pg_dump -Fd.
cfopen() and cfopen_write() failed to pass the compression level through
to zlib, so that you always got the default compression level if you got
any at all.

In passing, also fix these and related functions so that the correct errno
is reliably returned on failure; the original coding supposes that free()
cannot change errno, which is untrue on at least some platforms.
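
The idiom that fixes the errno hazard is small; an illustration (not the
pg_dump code):

    #include <errno.h>
    #include <stdlib.h>

    /* Report the original failure even though free() may change errno
     * on some platforms. */
    static int
    cleanup_preserving_errno(void *buf)
    {
        int save_errno = errno;     /* capture before cleanup */

        free(buf);
        errno = save_errno;         /* restore for the caller */
        return -1;
    }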

Per bug #12779 from Christoph Berg.  Back-patch to 9.1 where the faulty
code was introduced.

Michael Paquier
2015-02-18 11:43:00 -05:00
Tom Lane
4ea2d2ddbe Remove code to match IPv4 pg_hba.conf entries to IPv4-in-IPv6 addresses.
In investigating yesterday's crash report from Hugo Osvaldo Barrera, I only
looked back as far as commit f3aec2c7f5 where the breakage occurred
(which is why I thought the IPv4-in-IPv6 business was undocumented).  But
actually the logic dates back to commit 3c9bb8886d and was simply
broken by erroneous refactoring in the later commit.  A bit of archives
excavation shows that we added the whole business in response to a report
that some 2003-era Linux kernels would report IPv4 connections as having
IPv4-in-IPv6 addresses.  The fact that we've had no complaints since 9.0
seems to be sufficient confirmation that no modern kernels do that, so
let's just rip it all out rather than trying to fix it.

Do this in the back branches too, thus essentially deciding that our
effective behavior since 9.0 is correct.  If there are any platforms on
which the kernel reports IPv4-in-IPv6 addresses as such, yesterday's fix
would have made for a subtle and potentially security-sensitive change in
the effective meaning of IPv4 pg_hba.conf entries, which does not seem like
a good thing to do in minor releases.  So let's let the post-9.0 behavior
stand, and change the documentation to match it.

In passing, I failed to resist the temptation to wordsmith the description
of pg_hba.conf IPv4 and IPv6 address entries a bit.  A lot of this text
hasn't been touched since we were IPv4-only.
2015-02-17 12:49:18 -05:00
Robert Haas
9a90ec9cff Improve pg_check_dir code and comments.
Avoid losing errno if readdir() fails and closedir() works.  Consistently
return 4 rather than 3 if both a lost+found directory and other files are
found, rather than returning one value or the other depending on the
order of the directory listing.  Update comments to match the actual
behavior.
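
The errno-preservation dance looks like this sketch (ours, not the
actual pg_check_dir() code):

    #include <dirent.h>
    #include <errno.h>

    /* Return -1 on error without letting closedir() clobber the errno
     * from a failed readdir(). */
    static int
    count_entries(const char *path)
    {
        DIR           *dir;
        struct dirent *de;
        int            count = 0;
        int            readdir_errno;

        if ((dir = opendir(path)) == NULL)
            return -1;

        errno = 0;
        while ((de = readdir(dir)) != NULL)
            count++;
        readdir_errno = errno;       /* nonzero only if readdir() failed */

        closedir(dir);               /* may change errno even on success */

        if (readdir_errno != 0)
        {
            errno = readdir_errno;   /* report the readdir() failure */
            return -1;
        }
        return count;
    }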

These oversights date to commits 6f03927fce
and 17f1523932.

Marco Nenciarini
2015-02-17 10:54:29 -05:00