All functions provided by this extension are PARALLEL SAFE. Given the
general prohibition against write operations in parallel queries, it is
perhaps a bit surprising that pg_stat_statements_reset() is parallel safe.
But since it only modifies shared memory, not the database, it's OK.
Andreas Karlsson
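As a quick check (a sketch, assuming the pg_stat_statements extension is
installed), the parallel-safety marking is visible in pg_proc:

    -- proparallel = 's' means the function is marked PARALLEL SAFE
    SELECT proname, proparallel
      FROM pg_proc
     WHERE proname = 'pg_stat_statements_reset';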
Since indexes are created without valid LSNs, an index created
while a snapshot older than old_snapshot_threshold existed could
cause queries to return incorrect results when those old snapshots
were used, if any relevant rows had been subject to early pruning
before the index was built. Prevent usage of a newly created index
until all such snapshots are released, for relations where this can
happen.
Questions about the interaction of "snapshot too old" with index
creation were initially raised by Andres Freund.
Reviewed by Robert Haas.
Transmit the leader's temp-namespace state to workers. This is important
because without it, the workers do not really have the same search path
as the leader. For example, there is no good reason (and no extant code
either) to prevent a worker from executing a temp function that the
leader created previously; but as things stood it would fail to find the
temp function, and then either fail or execute the wrong function entirely.
We still prohibit a worker from creating a temp namespace on its own.
In effect, a worker can only see the session's temp namespace if the leader
had created it before starting the worker, which seems like the right
semantics.
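As a hypothetical sketch of the fixed behavior (the function and table names
here are invented), a parallel-safe temp function created by the leader can
now be resolved from a worker:

    -- the leader creates a function in its temp namespace ...
    CREATE FUNCTION pg_temp.double_it(x int) RETURNS int
        LANGUAGE plpgsql PARALLEL SAFE
        AS $$ BEGIN RETURN x * 2; END $$;
    SET max_parallel_workers_per_gather = 2;
    -- ... and a worker running part of this query can now find it in the
    -- leader's temp namespace instead of failing or calling the wrong function
    SELECT sum(pg_temp.double_it(a)) FROM big_table;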
Also, transmit the leader's BackendId to workers, and arrange for workers
to use that when determining the physical file path of a temp relation
belonging to their session. While the original intent was to prevent such
accesses entirely, there were a number of holes in that, notably in places
like dbsize.c which assume they can safely access temp rels of other
sessions anyway. We might as well get this right, as a small down payment
on someday allowing workers to access the leader's temp tables. (With
this change, directly using "MyBackendId" as a relation or buffer backend
ID is deprecated; you should use BackendIdForTempRelations() instead.
I left a couple of such uses alone though, as they're not going to be
reachable in parallel workers until we do something about localbuf.c.)
Move the thou-shalt-not-access-thy-leader's-temp-tables prohibition down
into localbuf.c, which is where it actually matters, instead of having it
in relation_open(). This amounts to recognizing that access to temp
tables' catalog entries is perfectly safe in a worker; it's only the data
in local buffers that is problematic.
Having done all that, we can get rid of the test in has_parallel_hazard()
that says that use of a temp table's rowtype is unsafe in parallel workers.
That test was unduly expensive, and if we really did need such a
prohibition, that was not even close to being a bulletproof guard for it.
(For example, any user-defined function executed in a parallel worker
might have attempted such access.)
In commits 9ff60273e3 and dbe2328959 I (tgl) fixed the
signatures of a bunch of contrib's GIN and GIST support functions so that
they would pass validation by the recently-added amvalidate functions.
The backend does not actually consult or check those signatures otherwise,
so I figured this was basically cosmetic and did not require an extension
version bump. However, Alexander Korotkov pointed out that that would
leave us in a pretty messy situation if we ever wanted to redefine those
functions later, because there wouldn't be a unique way to name them.
Since we're going to be bumping these extensions' versions anyway for
parallel-query cleanups, let's take care of this now.
Andreas Karlsson, adjusted for more search-path-safety by me
Parallel paths for relations whose outputs include parallel-restricted
expressions are unsafe. To make it cheaper to detect when this case
applies, track whether a relation's default PathTarget contains any
non-Vars. In most cases, the answer will be no, which enables us to
determine cheaply that the target list for a proposed path is
parallel-safe. However, subquery pull-up can create cases that
require us to inspect the target list more carefully.
Amit Kapila, reviewed by me.
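For instance (hypothetical table t), pulling up the subquery below leaves the
expression a + 1, not a plain Var, in the outer relation's target list, which
is the case that needs the closer look:

    -- after subquery pull-up the output column is the expression a + 1,
    -- so the proposed path's target list must be checked for parallel safety
    SELECT y
      FROM (SELECT a + 1 AS y FROM t) AS sub
     WHERE y > 10;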
Document these as "nearest integer >= argument" and "nearest integer <=
argument", which will hopefully be less confusing than the old formulation.
New wording is from Matlab via Dean Rasheed.
I changed the pg_description entries as well as the SGML docs. In the
back branches, this will only affect installations initdb'd in the future,
but it should be harmless otherwise.
Discussion: <CAEZATCW3yzJo-NMSiQs5jXNFbTsCEftZS-Og8=FvFdiU+kYuSA@mail.gmail.com>
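The distinction the new wording draws is easiest to see with a negative
argument, for example:

    SELECT ceil(-42.8) AS ceil, floor(-42.8) AS floor;
    -- ceil  => -42  (nearest integer >= -42.8)
    -- floor => -43  (nearest integer <= -42.8)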
Fix a couple of overlooked uses of "degree" terminology. Make the parallel
worker count selection logic in create_plain_partial_paths more robust (in
particular, it failed with max_parallel_workers_per_gather set to zero).
The "parallel degree" terminology provoked widespread complaints. So, instead, rename
the GUC max_parallel_degree to max_parallel_workers_per_gather
(leaving room for a possible future GUC max_parallel_workers that acts
as a system-wide limit), and rename the parallel_degree reloption to
parallel_workers. Rename structure members to match.
These changes create a dump/restore hazard for users of PostgreSQL
9.6beta1 who have set the reloption (or applied the GUC using ALTER
USER or ALTER DATABASE).
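Under the new names the settings look like this (the table name here is made
up):

    -- per-query ceiling on the number of workers under each Gather node
    SET max_parallel_workers_per_gather = 4;
    -- per-table override of the planner's chosen worker count
    ALTER TABLE big_table SET (parallel_workers = 2);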
Fix grammar, improve examples, etc.
I did not attempt to document the current behavior concerning distance-zero
matches, because I think that's broken and needs to change, so I'm not
going to use up brain cells figuring out how to explain how it works now.
One way or the other, there's still more to write here.
To make the test pass with only a small work_mem setting, ANALYZE the data
table before querying it, as suggested
by Tom Lane. On my system, this enables the test to pass with 128 kB of
work_mem (a value with which other tests fail -- so it seems good
enough).
Reported by Michaël Paquier.
In VPATH builds, the build directory was not being searched for files in
GETTEXT_FILES, leading to failure to construct the .pot files. This has
bitten me all along, but never hard enough to get it fixed; I suppose not a
lot of people use VPATH and NLS-enabled builds, and those that do
don't run "make update-po" often.
This is a longstanding problem, so backpatch all the way back.
Commit 6820094d1 mixed up types of parent object (table) with type of
sub-object being commented on. Noticed while fixing docs for
COMMENT ON ACCESS METHOD.
Backpatch to 9.5, like that commit.
This commit reverts 137805f89 as well as the associated commits 015e88942,
5306df283, and 68d704edb. We found multiple bugs in this feature, and
there was concern about possible planner slowdown (though to be fair,
exhibiting a very large slowdown proved difficult). The way forward
requires a considerable rewrite, which may or may not be possible to
accomplish in time for beta2. In my judgment reviewing the rewrite will
be easier to accomplish starting from a clean slate, so let's temporarily
revert what's there now. This also leaves us in a safe state if it turns
out to be necessary to postpone the rewrite to the next development cycle.
Discussion: <20160429102531.GA13701@huehner.biz>
It was previously suggested that "esoteric" operations such as creating
a new access method would require direct manipulation of the system
catalogs, but that example has gone away, and I can't think of a new one
to replace it, so just put in some weasel wording.
crosstabview.c was not added to nls.mk when the file was introduced. Also remove
redundant gettext markers, since psql_error() is already registered as a
gettext keyword.
dumpAccessMethod() didn't get the memo that we now have a bitfield for
the components which should be dumped instead of a simple boolean.
Correct that by checking if the relevant bit is set for each component
being dumped out (and not dumping it out if it isn't set).
This corrects an issue where CREATE ACCESS METHOD commands were being
included in non-binary-upgrade dumps when an extension included an access
method (as the bloom extension does).
Also add a regression test to make sure that, when access methods are part
of an extension, the CREATE ACCESS METHOD commands are dumped out only
during a binary upgrade.
Pointed out by Thom Brown.
In commit 5c3c3cd0a3, the new tests were
apparently just dumped into the first convenient file. Move them to a
separate file dedicated to testing that functionality and leave the
plpython_test test to test basic functionality, as it did before.
If we ANALYZE only selected columns of a table, we should not postpone
auto-analyze because of that; other columns may well still need stats
updates. As committed, the counter is left alone if a column list is
given, whether or not it includes all analyzable columns of the table.
Per complaint from Tomasz Ostrowski.
It's been like this a long time, so back-patch to all supported branches.
Report: <ef99c1bd-ff60-5f32-2733-c7b504eb960c@ato.waw.pl>
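For example (table and column names invented), a column-list ANALYZE such as
the following updates statistics only for the listed column; with this fix it
no longer clears the counter that drives autoanalyze for the rest of the
table:

    -- gathers statistics for abalance only; the changes-since-analyze
    -- counter is left alone, so other columns can still be picked up
    -- by a later automatic ANALYZE
    ANALYZE accounts (abalance);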
If a Gather node has read as many tuples as it needs (for example, due
to Limit) it may detach the queue connecting it to the worker before
reading all of the worker's tuples. Rather than let the worker
continue to generate and send all of the results, have it stop after
sending the next tuple.
More could be done here to stop the worker even quicker, but this is
about as well as we can hope to do for 9.6.
This is in response to a problem report from Andreas Seltenreich.
Commit 44339b892a should actually be
sufficient to fix that example even without this change, but it seems
better to do this, too, since we might otherwise waste quite a large
amount of effort in one or more workers.
Discussion: CAA4eK1KOKGqmz9bGu+Z42qhRwMbm4R5rfnqsLCNqFs9j14jzEA@mail.gmail.com
Amit Kapila
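The typical trigger is a LIMIT above a Gather, sketched here with a
hypothetical table and column: once enough rows have been returned, the
Gather node detaches the tuple queues and each worker now stops after sending
at most one more tuple:

    SET max_parallel_workers_per_gather = 4;
    -- once ten matching rows have been handed back, Gather can detach the
    -- queues; the workers then stop instead of scanning to completion
    SELECT * FROM big_table WHERE amount > 1000 LIMIT 10;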
Prior to this patch, it was occasionally possible, after shm_mq_sendv
had previously returned SHM_MQ_DETACHED, for a later shm_mq_sendv
operation to fail an assertion instead of just again returning
SHM_MQ_DETACHED. From the shm_mq code's point of view, it was
expecting to be called again with the same arguments, since the
previous operation had only partially completed. However, a caller
who isn't using non-blocking mode won't be prepared to repeat the call
with the same arguments, and this code shouldn't expect that they
will. Repair in such a way that we'll be OK whether the next call
uses the same arguments or not.
Found by Andreas Seltenreich. Analysis and sketch of fix by Amit
Kapila. Patch by me, reviewed by Amit Kapila.
For historical reasons, copyFile and rewriteVisibilityMap took a force
argument which was always passed as true, meaning that any existing
file should be overwritten. However, it seems much safer to instead
fail if a file we need to write already exists.
While we're at it, remove the "force" argument altogether, since it was
never passed as anything other than true (and now we would never pass
it as anything other than false, if we kept it).
Noted by Andres Freund during post-commit review of the patch that added
rewriteVisibilityMap, commit 7087166a88,
but this also changes the behavior when copying files without rewriting
them.
Patch by Masahiko Sawada.