Commit 8457d0beca introduced an example which, while not incorrect,
failed to exhibit the behavior it meant to describe, as a result of omitting
an E'' prefix that needed to be there. Noticed and fixed by Peter Geoghegan.
I (tgl) failed to resist the temptation to wordsmith nearby text a bit
while at it.
In commit a5ec86a7c7 I wrote a quick hack
that reduced the number of TeX string pool entries created while converting
our documentation to PDF form. That held the fort for a while, but as of
HEAD we're back up against the same limitation. It turns out that the
original coding of \FlowObjectSetup actually results in *three* string pool
entries being generated for every "flow object" (that is, potential
cross-reference target) in the documentation, and my previous hack only got
rid of one of them. With a little more care, we can reduce the string
count to one per flow object plus one per actually-cross-referenced flow
object (about 115000 + 5000 as of current HEAD); that should work until
the documentation volume roughly doubles from where it is today.
As a not-incidental side benefit, this change also causes pdfjadetex to
stop emitting unreferenced hyperlink anchors (bookmarks) into the PDF file.
It had been making one willy-nilly for every flow object; now it's just one
per actually-cross-referenced object. This results in close to a 2X
savings in PDF file size. We will still want to run the output through
"jpdftweak" to get it to be compressed; but we no longer need removal of
unreferenced bookmarks, so we might be able to find a quicker tool for
that step.
Although the failure only affects HEAD and US-format output at the moment,
9.5 cannot be more than a few pages short of failing likewise, so it
will inevitably fail after a few rounds of minor-version release notes.
I don't have a lot of faith that we'll never hit the limit in the older
branches; and anyway it would be nice to get rid of jpdftweak across the
board. Therefore, back-patch to all supported branches.
Per discussion, the original name was a bit misleading, and
PQsslAttributeNames() seems more apropos. It's not quite too late to
change this in 9.5, so let's change it while we can.
Also, make sure that the pointer array is const, not only the pointed-to
strings.
Minor documentation wordsmithing while at it.
Lars Kanis, slight adjustments by me
Up until now, the total amount of data that could be passed to a
background worker at startup was one datum, which can be as small as
4 bytes on some systems. That's enough to pass a dsm_handle or an
array index, but not much else. Add a bgw_extra field to the
BackgroundWorker struct, allowing up to 128 bytes to be passed to
a new worker on any platform.
Use this to fix a problem I recently discovered with the parallel
context machinery added in 9.5: the master assigns each worker an
array index, and each worker subsequently assigns itself an array
index, and there's nothing to guarantee that the two sets of indexes
match, leading to chaos.
Normally, I would not back-patch the change to add bgw_extra, since it
is basically a feature addition. However, since 9.5 is still in beta
and there seems to be no other sensible way to repair the broken
parallel context machinery, back-patch to 9.5. Existing background
worker code can ignore the bgw_extra field without a problem, but
might need to be recompiled since the structure size has changed.
Report and patch by me. Review by Amit Kapila.
The user can whitelist specified extension(s) in the foreign server's
options, whereupon we will treat immutable functions and operators of those
extensions as candidates to be sent for remote execution.
Whitelisting an extension in this way basically promises that the extension
exists on the remote server and behaves compatibly with the local instance.
We have no way to prove that formally, so we have to rely on the user to
get it right. But this seems like something that people can usually get
right in practice.
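For illustration, a whitelist of this kind might be declared roughly as follows
(a sketch only: it assumes the foreign-server option is spelled "extensions" and
uses hypothetical server and extension names):

    CREATE SERVER remote_srv FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'remote.example.com', dbname 'app');
    ALTER SERVER remote_srv OPTIONS (ADD extensions 'cube, seg');

With something like that in place, immutable functions and operators belonging
to the listed extensions become candidates for execution on the remote side.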
We might in future allow functions and operators to be whitelisted
individually, but extension granularity is a very convenient special case,
so it got done first.
The patch as-committed lacks any regression tests, which is unfortunate,
but introducing dependencies on other extensions for testing purposes
would break "make installcheck" scenarios, which is worse. I have some
ideas about klugy ways around that, but it seems like material for a
separate patch. For the moment, leave the problem open.
Paul Ramsey, hacked up a bit more by me
Standard-conforming literals have been the default for long enough that
it no longer seems necessary to go out of our way to tell people to write
regex escapes illegibly.
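For instance, with standard-conforming literals (the default for some time now),
a regex backslash can be written directly, rather than doubled inside an E''
escape string; a minimal illustration:

    -- legible: a backslash is just a backslash in a standard-conforming literal
    SELECT 'a1b2' ~ '\d';
    -- old style: escape-string syntax requires doubling the backslash
    SELECT 'a1b2' ~ E'\\d';

Both queries test the string against the same regex, \d.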
Fix some brain fade in commit a2dabf0e1d: erroneous variable names
in docs, rearrangements that made sentences less clear not more so,
undocumented and poorly-chosen-anyway API behaviors of subroutines,
bad grammar in error messages, copy-and-paste faults.
Albe Laurenz and Tom Lane
A lookbehind constraint is like a lookahead constraint in that it consumes
no text; but it checks for existence (or nonexistence) of a match *ending*
at the current point in the string, rather than one *starting* at the
current point. This has long been a requested feature, since it exists in many
other regex libraries, but Henry Spencer never got around to implementing it
in the code we use.
Just making it work is actually pretty trivial; but naive copying of the
logic for lookahead constraints leads to code that often spends O(N^2) time
to scan an N-character string, because we have to run the match engine
from string start to the current probe point each time the constraint is
checked. In typical use-cases a lookbehind constraint will be written at
the start of the regex and hence will need to be checked at every character
--- so O(N^2) work overall. To fix that, I introduced a third copy of the
core DFA matching loop, paralleling the existing longest() and shortest()
loops. This version, matchuntil(), can suspend and resume matching given
a couple of pointers' worth of storage space. So we need only run it
across the string once, stopping at each interesting probe point and then
resuming to advance to the next one.
I also put in an optimization that simplifies one-character lookahead and
lookbehind constraints, such as "(?=x)" or "(?<!\w)", into AHEAD and BEHIND
constraints, which already existed in the engine. This avoids the overhead
of the LACON machinery entirely for these rather common cases.
The net result is that lookbehind constraints run a factor of three or so
slower than Perl's for multi-character constraints, but faster than Perl's
for one-character constraints ... and they work fine for variable-length
constraints, which Perl gives up on entirely. So that's not bad from a
competitive perspective, and there's room for further optimization if
anyone cares. (In reality, raw scan rate across a large input string is
probably not that big a deal for Postgres usage anyway; so I'm happy if
it's linear.)
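A quick sketch of what this buys at the SQL level (the example strings are
purely illustrative):

    -- positive lookbehind: 'bar' only when immediately preceded by 'foo'
    SELECT 'foobar' ~ '(?<=foo)bar';   -- true
    SELECT 'bazbar' ~ '(?<=foo)bar';   -- false
    -- negative lookbehind: 'bar' not preceded by a word character
    SELECT 'foo bar' ~ '(?<!\w)bar';   -- true
    SELECT 'foobar'  ~ '(?<!\w)bar';   -- false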
Show how this can be used in practice to make queries simpler and more
flexible. Also, draw an explicit contrast to the existence operator,
which doesn't work that way.
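Assuming "this" refers to jsonb containment as contrasted with the existence
operator, the distinction looks roughly like the following (illustrative values
only):

    -- containment (@>) descends into nested structure
    SELECT '{"product": {"tags": ["red", "sale"]}}'::jsonb
           @> '{"product": {"tags": ["sale"]}}';                    -- true
    -- existence (?) looks only at the top level, so a nested key is not found
    SELECT '{"product": {"tags": ["red", "sale"]}}'::jsonb ? 'tags';  -- false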
Peter Geoghegan and Tom Lane
Once upon a time we did not have a separate CREATEROLE privilege, and
CREATEUSER effectively meant SUPERUSER. When we invented CREATEROLE
(in 8.1) we also added SUPERUSER so as to have a less confusing keyword
for this role property. However, we left CREATEUSER in place as a
deprecated synonym for SUPERUSER, because of backwards-compatibility
concerns. It's still there and is still confusing people, as for example
in bug #13694 from Justin Catterson. 9.6 will be ten years or so later,
which surely ought to be long enough to end the deprecation and just
remove these old keywords. Hence, do so.
This fixes a long-standing bug which was discovered while investigating
the interaction between the new join pushdown code and the EvalPlanQual
machinery: if a ForeignScan appears on the inner side of a parameterized
nestloop, an EPQ recheck would re-return the original tuple even if
it no longer satisfied the pushed-down quals due to changed parameter
values.
This fix adds a new member to ForeignScan and ForeignScanState and a
new argument to make_foreignscan, and requires changes to FDWs which
push down quals to populate that new argument with a list of quals they
have chosen to push down. Therefore, I'm only back-patching to 9.5,
even though the bug is not new in 9.5.
Etsuro Fujita, reviewed by me and by Kyotaro Horiguchi.
We hyphenate "fixed-length" earlier in the same sentence, and overall we
more often use "variable-length" rather than "variable length".
Nikolay Shaplov
In general one may have to run both REASSIGN OWNED and DROP OWNED to get
rid of all the dependencies of a role to be dropped. This was alluded to
in the REASSIGN OWNED man page, but not really spelled out in full; and in
any case the procedure ought to be documented in a more prominent place
than that. Add a section to the "Database Roles" chapter explaining this,
and do a bit of wordsmithing in the relevant commands' man pages.
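The procedure that section spells out amounts to something like the following
(hypothetical role names; the first two commands must be repeated in each
database that contains objects owned by, or granted to, the role):

    REASSIGN OWNED BY doomed_role TO successor_role;
    DROP OWNED BY doomed_role;
    -- once no database still has dependencies on the role:
    DROP ROLE doomed_role;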
Failure to perform garbage collection now has a user-visible effect, so
explain that and explain that reducing pgss_max is the way to prevent it.
Per gripe from Andrew Dunstan.
The documentation for the autovacuum_multixact_freeze_max_age and
autovacuum_freeze_max_age relation-level parameters contained:
"Note that while you can set autovacuum_multixact_freeze_max_age very
small, or even zero, this is usually unwise since it will force frequent
vacuuming."
which hasn't been true since these options were made relation options,
instead of residing in the pg_autovacuum table (834a6da4f7).
Remove the outdated sentence. Even the lowered limits from 2596d70 are
high enough that this doesn't warrant calling out the risk in the CREATE
TABLE docs.
Per discussion with Tom Lane and Alvaro Herrera
Discussion: 26377.1443105453@sss.pgh.pa.us
Backpatch: 9.0- (in parts)
To allow users to force RLS to always be applied, even for table owners,
add ALTER TABLE .. FORCE ROW LEVEL SECURITY.
row_security=off overrides FORCE ROW LEVEL SECURITY, to ensure pg_dump
output is complete (by default).
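In SQL terms, using a hypothetical table:

    ALTER TABLE accounts ENABLE ROW LEVEL SECURITY;    -- policies apply to other users
    ALTER TABLE accounts FORCE ROW LEVEL SECURITY;     -- ...and now to the owner as well
    ALTER TABLE accounts NO FORCE ROW LEVEL SECURITY;  -- revert to the default behavior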
Also add SECURITY_NOFORCE_RLS context to avoid data corruption when
ALTER TABLE .. FORCE ROW LEVEL SECURITY is being used. The
SECURITY_NOFORCE_RLS security context is used only during referential
integrity checks and is only considered in check_enable_rls() after we
have already checked that the current user is the owner of the relation
(which should always be the case during referential integrity checks).
Back-patch to 9.5 where RLS was added.
Specifically, make the effect of BYPASSRLS independent of the row_security
GUC, and make it affect permission checks pertinent to views the BYPASSRLS role
owns. The row_security GUC thereby ceases to change successful-query
behavior; it can only make a query fail with an error. Back-patch to
9.5, where BYPASSRLS was introduced.
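A brief sketch of how the pieces interact after this change (hypothetical role
name; the comments restate the behavior described above):

    CREATE ROLE dump_role LOGIN BYPASSRLS;  -- bypasses row-level policies regardless
                                            -- of the row_security setting
    SET row_security = off;                 -- for roles subject to RLS, queries that
                                            -- would require policies now fail with an error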
Without CASCADE, if an extension has an unfulfilled dependency on
another extension, CREATE EXTENSION errors out with "required extension
... is not installed". That is annoying, especially when that dependency
is an implementation detail of the extension, rather than something the
extension's user can make sense of.
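For example, using the in-core earthdistance/cube pairing (any extension with an
extension dependency would do):

    CREATE EXTENSION earthdistance;          -- fails: required extension "cube" is not installed
    CREATE EXTENSION earthdistance CASCADE;  -- installs cube first, then earthdistance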
In addition to CASCADE, this commit also includes a small set of regression
tests around CREATE EXTENSION.
Author: Petr Jelinek, editorialized by Michael Paquier, Andres Freund
Reviewed-By: Michael Paquier, Andres Freund, Jeff Janes
Discussion: 557E0520.3040800@2ndquadrant.com
It's not terribly hard to devise regular expressions that take large
amounts of time and/or memory to process. Recent testing by Greg Stark has
also shown that machines with small stack limits can be driven to stack
overflow by suitably crafted regexps. While we intend to fix these things
as much as possible, it's probably impossible to eliminate slow-execution
cases altogether. In any case we don't want to treat such things as
security issues. The history of that code should already discourage
prudent DBAs from allowing execution of regexp patterns coming from
possibly-hostile sources, but it seems like a good idea to warn about the
hazard explicitly.
Currently, similar_escape() allows access to enough of the underlying
regexp behavior that the warning has to apply to SIMILAR TO as well.
We might be able to make it safer if we tightened things up to allow only
SQL-mandated capabilities in SIMILAR TO; but that would be a subtly
non-backwards-compatible change, so it requires discussion and probably
could not be back-patched.
Per discussion on the pgsql-security list.
The "floatrange" example is a bit too simple because float8mi can be
used without any additional type conversion. Add an example that does
have to account for that, and do some minor other wordsmithing.
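The simple case, and the kind of example being added, look roughly like this
(the time-range variant is a sketch; the helper-function and type names are
illustrative):

    -- float8mi already yields float8, so it can serve as subtype_diff directly
    CREATE TYPE floatrange AS RANGE (
        subtype = float8,
        subtype_diff = float8mi
    );

    -- a subtype whose subtraction does not yield float8 needs a wrapper function
    CREATE FUNCTION time_subtype_diff(x time, y time) RETURNS float8 AS
        'SELECT EXTRACT(EPOCH FROM (x - y))::float8' LANGUAGE sql STRICT IMMUTABLE;
    CREATE TYPE timerange AS RANGE (
        subtype = time,
        subtype_diff = time_subtype_diff
    );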
Previously it was documented that the details on HeapTupleHeaderData
struct could be found in htup.h. This is not correct because it's now
defined in htup_details.h.
Back-patch to 9.3 where the definition of HeapTupleHeaderData struct
was moved from htup.h to htup_details.h.
Michael Paquier
A Gather executor node runs any number of copies of a plan in an equal
number of workers and merges all of the results into a single tuple
stream. It can also run the plan itself, if the workers are
unavailable or haven't started up yet. It is intended to work with
the Partial Seq Scan node which will be added in future commits.
It could also be used to implement parallel query of a different sort
by itself, without help from Partial Seq Scan, if the single_copy mode
is used. In that mode, a worker executes the plan, and the parallel
leader does not, merely collecting the worker's results. So, a Gather
node could be inserted into a plan to split the execution of that plan
across two processes. Nested Gather nodes aren't currently supported,
but we might want to add support for that in the future.
There's nothing in the planner to actually generate Gather nodes yet,
so it's not quite time to break out the champagne. But we're getting
close.
Amit Kapila. Some design suggestions were provided by me, and I also
reviewed the patch. Single-copy mode, documentation, and other minor
changes also by me.
With the arrival of the CUBE key word/feature, the index entries for the
cube extension and the CUBE feature were collapsed into one. Tweak the
entry for the cube extension so they are separate entries.