Transactions can now set their commit timestamp directly as they commit,
or an external transaction commit timestamp can be fed from an outside
system using the new function TransactionTreeSetCommitTsData(). This
data is crash-safe, and truncated at Xid freeze point, same as pg_clog.
This module is disabled by default because it causes a performance hit,
but can be enabled in postgresql.conf, requiring only a server restart.
A new test in src/test/modules is included.
Catalog version bumped due to the new subdirectory within PGDATA and a
couple of new SQL functions.
Authors: Álvaro Herrera and Petr Jelínek
Reviewed to varying degrees by Michael Paquier, Andres Freund, Robert
Haas, Amit Kapila, Fujii Masao, Jaime Casanova, Simon Riggs, Steven
Singer, Peter Eisentraut
check-world failed in a completely clean tree, because src/test/modules
fails to build unless errcodes.h is generated first. To fix this,
install a dependency in src/test/modules' Makefile so that the necessary
file is generated. Even with this, running "make check" within
individual module subdirs will still fail because the dependency is not
considered there, but this case is less interesting and would be messier
to fix.
check-world still failed with the above fix in place, this time because
dummy_seclabel used LOAD to load the dynamic library, which doesn't
work because @libdir@ is expanded by the makefile to the final install
path, not the temporary installation directory used by make check. To
fix, tweak things so that CREATE EXTENSION can be used
instead, which solves the problem because the library path is expanded
by the backend, which is aware of the true libdir.
Make the error messages issued by array_in() uniformly follow the style
    ERROR: malformed array literal: "actual input string"
    DETAIL: specific complaint here
and rewrite many of the specific complaints to be clearer.
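As a hedged illustration only (origStr is a hypothetical name for the
pointer array_in() keeps to the original input), an error site in the
new style might look like:

    ereport(ERROR,
            (errcode(ERRCODE_INVALID_TEXT_REPRESENTATION),
             errmsg("malformed array literal: \"%s\"", origStr),
             errdetail("Unexpected end of input.")));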
The immediate motivation for doing this is a complaint from Josh Berkus
that json_to_record() produced an unintelligible error message when
dealing with an array item, because it tries to feed the JSON-format
array value to array_in(). Really it ought to be smart enough to
perform JSON-to-Postgres array conversion, but that's a future feature
not a bug fix. In the meantime, this change is something we agreed
we could back-patch into 9.4, and it should help de-confuse things a bit.
The logical decoding patchset introduced the PROC_IN_LOGICAL_DECODING
PGXACT flag, which allows such backends to be skipped when computing
the xmin horizon/snapshots. That's fine and sensible for walsenders
streaming out logical changes, but not at all fine for SQL backends
doing logical decoding. If the latter set that flag, any changes they
have performed outside of logical decoding will not be regarded as
visible, which can e.g. lead to those changes being vacuumed away.
Note that not setting the flag for SQL backends isn't particularly
bothersome - the SQL backend doesn't do streaming, so it only runs for
a limited amount of time.
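A rough sketch of the resulting rule, assuming the 9.4-era PGXACT
layout and the usual ProcArrayLock protocol:

    /* Advertise the flag only in a walsender streaming changes out;
     * a SQL backend doing decoding must not, or its own changes made
     * outside of decoding could be vacuumed away from under it. */
    if (am_walsender)
    {
        LWLockAcquire(ProcArrayLock, LW_EXCLUSIVE);
        MyPgXact->vacuumFlags |= PROC_IN_LOGICAL_DECODING;
        LWLockRelease(ProcArrayLock);
    }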
Per buildfarm member 'tick' and Alvaro.
Backpatch to 9.4, where logical decoding was introduced.
Davide S. reported that json_agg() sometimes produced multiple trailing
right brackets. This turns out to be because json_agg_finalfn() attaches
the final right bracket, and was doing so by modifying the aggregate state
in-place. That's verboten, though unfortunately it seems there's no way
for nodeAgg.c to check for such mistakes.
Fix that back to 9.3 where the broken code was introduced. In 9.4 and
HEAD, likewise fix json_object_agg(), which had copied the erroneous logic.
Make some cosmetic cleanups as well.
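The shape of the fix, sketched and simplified: build the final value
in fresh memory rather than appending to the aggregate state.

    /* Return buffer's contents plus addon as a text datum, without
     * modifying the aggregate state itself, so the final function
     * can safely run more than once. */
    static text *
    catenate_stringinfo_string(StringInfo buffer, const char *addon)
    {
        int     buflen = buffer->len + strlen(addon);
        text   *result = (text *) palloc(buflen + VARHDRSZ);

        SET_VARSIZE(result, buflen + VARHDRSZ);
        memcpy(VARDATA(result), buffer->data, buffer->len);
        memcpy(VARDATA(result) + buffer->len, addon, strlen(addon));
        return result;
    }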
Get rid of PG_FUNCTION_INFO_V1() macros, which are quite inappropriate
for built-in functions (possibly leftovers from testing as a loadable
module?). Also, fix gratuitous inconsistency between SQL-level and
C-level names of the minmax support functions.
We were not checking to see if the supplied dscale was valid for the given
digit array when receiving binary-format numeric values. While dscale can
validly be more than the number of nonzero fractional digits, it shouldn't
be less; that case causes fractional digits to be hidden on display even
though they're there and participate in arithmetic.
Bug #12053 from Tommaso Sala indicates that there's at least one broken
client library out there that sometimes supplies an incorrect dscale value,
leading to strange behavior. This suggests that simply throwing an error
might not be the best response; it would lead to failures in applications
that might seem to be working fine today. What seems the least risky fix
is to truncate away any digits that would be hidden by dscale. This
preserves the existing behavior in terms of what will be printed for the
transmitted value, while preventing subsequent arithmetic from producing
results inconsistent with that.
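A minimal sketch of that fix inside numeric_recv(), assuming the
existing internal helper trunc_var(), which shortens a NumericVar to a
given number of fractional digits:

    /* If the transmitted digit array extends past dscale, truncate
     * the hidden digits so that display and arithmetic agree. */
    trunc_var(&value, value.dscale);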
In passing, throw a specific error for the case of dscale being outside
the range that will fit into a numeric's header. Previously you got
"value overflows numeric format", which is a bit misleading.
Back-patch to all supported branches.
Rather than have the core security_label regression test depend on the
dummy_seclabel module, have that part of the test be executed by
dummy_seclabel itself directly. This simplifies the testing rig a bit;
in particular it should silence the problems from the MSVC buildfarm
phylum, which hasn't yet been taught how to install src/test/modules.
We expose a function IsValidJsonNumber that internally calls the lexer
for JSON numbers. That allows us to use the same test everywhere,
instead of inventing a broken test for hstore conversions. The new
function is also used in datum_to_json, replacing the code that has
been moved into it.
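Its usage pattern, sketched from the conversion side (strval and dst
are hypothetical names):

    /* Emit a bare JSON number only if the lexer would accept it;
     * otherwise fall back to a quoted, escaped string. */
    if (IsValidJsonNumber(strval, strlen(strval)))
        appendStringInfoString(dst, strval);
    else
        escape_json(dst, strval);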
Backpatch to 9.3 where hstore_to_json_loose was introduced.
Extracted from pending inet selectivity patch. The rest of it isn't
quite ready to commit, but we might as well push this part so the patch
doesn't have to track the moving target of pg_operator.h.
Coverity complained that the "else" added to fillPGconn() was unreachable,
which it was. Remove the dead code. In passing, rearrange the tests so as
not to bother trying to fetch values for options that can't be assigned.
Pre-9.3 did not have that issue, but it did have a "return" that should be
"goto oom_error" to ensure that a suitable error message gets filled in.
This is advance preparation for introducing even more test modules; the
easy solution is to add them to contrib, but that's bloated enough that
it seems a good time to think of something different.
Moved modules are dummy_seclabel, test_shm_mq, test_parser and
worker_spi.
(test_decoding was also a candidate, but there was too much opposition
to moving that one. We can always reconsider later.)
Apart from ignoring "hostaddr" set to the empty string, this behaves
identically to its predecessor. Back-patch to 9.4, where the original
commit first appeared.
Reviewed by Fujii Masao.
This reverts commit 9f80f4835a. The
function returned the raw value of a connection parameter, a task served
by PQconninfo(). The next commit will reimplement the psql \conninfo
change that way. Back-patch to 9.4, where that commit first appeared.
The original definitions were leaving no room for cross-type operators,
so queries that compared a column of one type against something of a
different type were not taking advantage of the index. Fix by making
the opfamilies more like the ones for Btree, and by including a few
cross-type operator classes.
Catalog version bumped.
Per complaints from Hubert Lubaczewski, Mark Wong, Heikki Linnakangas.
This patch adds a function that replaces a bms_membership() test followed
by a bms_singleton_member() call, performing both the test and the
extraction of a singleton set's member in one scan of the bitmapset.
The performance advantage over the old way is probably minimal in current
usage, but it seems worthwhile on notational grounds anyway.
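A before-and-after sketch (relids and the consuming function are
hypothetical):

    int     relid;

    /* Old way: a membership test plus a second scan to extract */
    if (bms_membership(relids) == BMS_SINGLETON)
        relid = bms_singleton_member(relids);

    /* New way: one scan both tests and extracts */
    if (bms_get_singleton_member(relids, &relid))
        use_single_rel(relid);      /* hypothetical consumer */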
David Rowley
This patch adds a way of iterating through the members of a bitmapset
nondestructively, unlike the old way with bms_first_member(). While
bms_next_member() is very slightly slower than bms_first_member()
(at least for typical-size bitmapsets), eliminating the need to palloc
and pfree a temporary copy of the target bitmapset is a significant win.
So this method should be preferred in all cases where a temporary copy
would be necessary.
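The intended iteration idiom, as a short sketch (process_member is a
hypothetical stand-in for the per-member work):

    int     x = -1;

    /* Visit each member of inputset in increasing order, with no
     * temporary copy; a negative return means no members remain. */
    while ((x = bms_next_member(inputset, x)) >= 0)
        process_member(x);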
Tom Lane, with suggestions from Dean Rasheed and David Rowley
This function was initially coded on the assumption that it would not be
performance-critical, but that turns out to be wrong in workloads that
are heavily dependent on the speed of plpgsql functions. Speed it up by
hard-coding the comparison rules, thereby avoiding palloc/pfree traffic
from creating and immediately freeing an OverrideSearchPath object.
Per report from Scott Marlowe.
Previously, if the typcache had for example tried and failed to find a hash
opclass for a given data type, it would nonetheless repeat the unsuccessful
catalog lookup each time it was asked again. This can lead to a
significant amount of useless bufmgr traffic, as in a recent report from
Scott Marlowe. Like the catalog caches, typcache should be able to cache
negative results. This patch arranges that by making use of separate flag
bits to remember whether a particular item has been looked up, rather than
treating a zero OID as an indicator that no lookup has been done.
Also, install a credible invalidation mechanism, namely watching for inval
events in pg_opclass. The sole advantage of the lack of negative caching
was that the code would cope if operators or opclasses got added for a type
mid-session; to preserve that behavior we have to be able to invalidate
stale lookup results. Updates in pg_opclass should be pretty rare in
production systems, so it seems sufficient to just invalidate all the
dependent data whenever one happens.
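A hedged sketch of the flag-bit scheme, with names modeled on, though
not necessarily identical to, the real code:

    /* A flag bit records that the lookup was attempted, so a zero OID
     * now means "looked up, not found" rather than "never looked up",
     * and a failed search isn't repeated on every call. */
    if (!(typentry->flags & TCFLAGS_CHECKED_HASH_OPCLASS))
    {
        Oid     opclass = GetDefaultOpClass(typentry->type_id,
                                            HASH_AM_OID);

        if (OidIsValid(opclass))
        {
            typentry->hash_opf = get_opclass_family(opclass);
            typentry->hash_opintype = get_opclass_input_type(opclass);
        }
        /* record the attempt even if nothing was found */
        typentry->flags |= TCFLAGS_CHECKED_HASH_OPCLASS;
    }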
Adding proper invalidation also means that this code will now react sanely
if an opclass is dropped mid-session. Arguably, that's a back-patchable
bug fix, but in view of the lack of complaints from the field I'll refrain
from back-patching. (Probably, in most cases where an opclass is dropped,
the data type itself is dropped soon after, so that this misfeasance has
no bad consequences.)
InitXLogInsert() cannot be called in a critical section, because it
allocates memory. But CreateCheckPoint() did that, when called for the
end-of-recovery checkpoint by the startup process.
In passing, fix the scratch space allocation in InitXLogInsert to go to
the right memory context. Also update the comment at InitXLOGAccess, which
hasn't been totally accurate since hot standby was introduced (in a hot
standby backend, InitXLOGAccess isn't called at backend startup).
Reported by Michael Paquier
Previously, \watch always ignored the user's \pset null setting.
Ignoring that setting is reasonable for \d and similar queries; for
those, the code can reasonably have an opinion about what the
presentation should be like, since it knows what SQL query it's
issuing. That argument surely doesn't apply to \watch, so this commit
makes \watch use the user's \pset null setting.
Back-patch to 9.3 where \watch was added.
Mark Simonetti reported that libxslt sometimes crashes for him, and that
swapping xslt_process's object-freeing calls around to do them in reverse
order of creation seemed to fix it. I've not reproduced the crash, but
valgrind clearly shows a reference to already-freed memory, which is
consistent with the idea that shutdown of the xsltTransformContext is
trying to reference the already-freed stylesheet or input document.
With this patch, valgrind is no longer unhappy.
I have an inquiry in to see if this is a libxslt bug or if we're just
abusing the library; but even if it's a library bug, we'd want to adjust
our code so it doesn't fail with unpatched libraries.
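The ordering in question, sketched (variable names illustrative):

    /* Free in reverse order of creation: the transform context goes
     * first, since shutting it down may still reference the
     * stylesheet and the documents. */
    xsltFreeTransformContext(tcontext);
    xmlFreeDoc(restree);
    xmlFreeDoc(doctree);
    xsltFreeStylesheet(stylesheet);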
Back-patch to all supported branches, because we've been doing this in
the wrong(?) order for a long time.
As pointed out by Robert, we should really have named pg_rowsecurity
pg_policy, as the objects stored in that catalog are policies. This
patch fixes that and updates the column names to start with 'pol' to
match the new catalog name.
The security consideration for COPY with row level security, also
pointed out by Robert, has also been addressed by remembering and
re-checking the OID of the relation initially referenced during COPY
processing, to make sure it hasn't changed under us by the time we
finish planning out the query which has been built.
Robert and Alvaro also commented on missing OCLASS and OBJECT entries
for POLICY (formerly ROWSECURITY or POLICY, depending) in various
places. This patch fixes that too, which also happens to add the
ability to COMMENT on policies.
In passing, attempt to improve the consistency of messages, comments,
and documentation as well. This removes various incarnations of
'row-security', 'row-level security', 'Row-security', etc, in favor
of 'policy', 'row level security' or 'row_security' as appropriate.
Happy Thanksgiving!
In passing, add an Assert defending the presumption that bytes_left
is positive to start with. (I'm not exactly convinced that using an
unsigned type was such a bright thing here, but let's at least do
this much.)
Add a bit of context sensitivity to plpgsql_yylex() so that it can
recognize when the word it is looking at is the first word of a new
statement, and if so whether it is the target of an assignment statement.
When we are at start of statement and it's not an assignment, we can
prefer recognizing unreserved keywords over recognizing variable names,
thereby allowing most statements' initial keywords to be demoted from
reserved to unreserved status. This is rather useful already (there are
15 such words that get demoted here), and what's more to the point is
that future patches proposing to add new plpgsql statements can avoid
objections about having to add new reserved words.
The keywords BEGIN, DECLARE, FOR, FOREACH, LOOP, WHILE need to remain
reserved because they can be preceded by block labels, and the logic
added here doesn't understand about block labels. In principle we
could probably fix that, but it would take more than one token of
lookback and the benefit doesn't seem worth extra complexity.
Also note I didn't de-reserve EXECUTE, because it is used in more places
than just statement start. It's possible it could be de-reserved with
more work, but that would be an independent fix.
In passing, also de-reserve COLLATE and DEFAULT, which shouldn't have
been reserved in the first place since they only need to be recognized
within DECLARE sections.
These cases formerly failed with errors about "could not find array type
for data type". Now they yield arrays of the same element type and one
higher dimension.
The implementation involves creating functions with API similar to the
existing accumArrayResult() family. I (tgl) also extended the base family
by adding an initArrayResult() function, which allows callers to avoid
special-casing the zero-inputs case if they just want an empty array as
result. (Not all do, so the previous calling convention remains valid.)
This allowed simplifying some existing code in xml.c and plperl.c.
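A sketch of the extended calling convention as of this commit (later
releases added parameters, and values/nvalues here are hypothetical):

    ArrayBuildState *astate;
    Datum   result;
    int     i;

    /* An empty array falls out naturally if the loop runs zero
     * times; no special-casing of the zero-inputs case needed. */
    astate = initArrayResult(INT4OID, CurrentMemoryContext);
    for (i = 0; i < nvalues; i++)
        astate = accumArrayResult(astate, Int32GetDatum(values[i]),
                                  false, INT4OID, CurrentMemoryContext);
    result = makeArrayResult(astate, CurrentMemoryContext);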
Ali Akbar, reviewed by Pavel Stehule, significantly modified by me
Per discussion with Tom and Andrew, 64-bit integers are no longer a
problem for the catalogs, so go ahead and add the mapping from the C
int64 type to the SQL type int8 to allow using them.
Patch by Adam Brightwell
The old method of appending options to the connection string didn't work if
the primary_conninfo was a postgres:// style URI, instead of a traditional
connection string. Use PQconnectdbParams instead.
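A sketch of the replacement call; the option list is illustrative, not
the exact one the walreceiver passes, and conninfo stands for the
primary_conninfo string:

    const char *keys[] = {"dbname", "replication", "fallback_application_name", NULL};
    const char *vals[] = {conninfo, "true",        "walreceiver",               NULL};

    /* With expand_dbname = 1, a "dbname" that looks like a connection
     * string or a postgres:// URI is expanded rather than taken as a
     * literal database name. */
    PGconn *conn = PQconnectdbParams(keys, vals, 1);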
Alex Shulgin
If the "dbname" attribute in PQconnectDBParams contained a connection string
or URI (and expand_dbname = TRUE), the database name from the connection
string could not be overridden by a subsequent "dbname" keyword in the
array. That was not intentional; all other options can be overridden.
Furthermore, any subsequent "dbname" caused the connection string from the
first dbname value to be processed again, overriding any values for the same
options that were given between the connection string and the second dbname
option.
In passing, clarify in the docs that only the first dbname option in the
array is parsed as a connection string.
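Concretely, after the fix a keyword array like the following behaves
as documented: only the first dbname is expanded as a connection
string, and the second merely overrides the database name, leaving the
host setting from the first in effect.

    const char *keys[] = {"dbname", "dbname", NULL};
    const char *vals[] = {"host=primary.example.com dbname=db1", "db2", NULL};

    /* Connects to host primary.example.com, database db2. */
    PGconn *conn = PQconnectdbParams(keys, vals, 1);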
Alex Shulgin. Backpatch to all supported versions.
In the regression tests, when doing cascaded drops, we need to suppress
the notices from DROP CASCADE or there can be transient regression
failures as the order of drops can depend on the physical row order in
pg_depend.
Report and fix suggestion from Tom.
An out-of-memory failure in most of these would lead to strange behavior, like
connecting to a different database than intended, but some would lead to
an outright segfault.
Alex Shulgin and me. Backpatch to all supported versions.
Code that checks the flag no longer needs #ifdefs, which is more
convenient. In particular, it makes it easier to write extensions that
depend on it.
In passing, modify sslinfo's ssl_is_used function to check ssl_in_use
instead of the OpenSSL specific 'ssl' pointer. It doesn't make any
difference currently, as sslinfo is only compiled when built with OpenSSL,
but seems cleaner anyway.
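A sketch of what extension code can now do, with no #ifdef USE_SSL
(report_ssl_connection is hypothetical):

    /* Works however the server's SSL support was built, since the
     * flag lives in the implementation-independent Port struct. */
    if (MyProcPort != NULL && MyProcPort->ssl_in_use)
        report_ssl_connection();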
This gives an overview of what Lehman & Yao's paper is all about, so that
you can understand the rest of the README without having to read the paper.
Per discussion with Peter Geoghegan and others.