By converting to forward slashes at configure time we avoid
having to repeat the logic everywhere it is needed, such as
in the transforms modules for plpython.
This eliminates many seconds of test duration and the need to invoke
"rm -rf", which is typically unavailable on Windows.
Michael Paquier and Noah Misch
Commit c67a86f7da caught most of these,
but this negative test escaped notice. The test did pass, for the wrong
reason, under affected configurations.
Michael Paquier
coerce_type() has local variables named targetTypeId, baseTypeId, and
targetType. targetType is actually the Type structure for baseTypeId, so
rename it to baseType.
Combine the two places that set CPPFLAGS into one. Also, some settings
should be restricted to Windows only. More precisely, -Wno-comment is
a GCC-only option, but Windows in a makefile implies GCC at the moment.
Also, since -Wno-comment is more properly a preprocessor option, move it
to CPPFLAGS to simplify things a bit.
For building PL/Perl, PL/Python, and PL/Tcl, we need a shared library of
libperl, libpython, and libtcl, respectively. Previously, this was
checked in the makefiles, skipping the PL build with a warning if no
shared library was available. Now this is checked in configure, with an
error if no shared library is available.
The previous situation arose because in the olden days, the configure
options --with-perl, --with-python, and --with-tcl controlled whether
frontend interfaces for those languages would be built. The procedural
languages were added later, and shared libraries were often not
available in the beginning. So it was decided to skip the builds of the
procedural languages in those cases. The frontend interfaces have since
been removed from the tree, and shared libraries are now available most
of the time, so that setup makes much less sense now.
Also, the new setup allows contrib modules and pgxs users to rely on the
respective PLs being available based on configure flags.
This involves moving perl's CORE library to the end of the include list,
and adding other compilation settings that plperl uses. This won't
completely fix the breakage currently being seen by gcc builds on
Windows, but it will let the build get further, and should be wholly
benign, if not beneficial, on *nix.
Tom Lane pointed out that this wasn't done, and asked whether that was
intentional. Subsequent discussion was in favor of making the change,
so here we go.
Foreign data wrappers can use this capability for so-called "join
pushdown"; that is, instead of executing two separate foreign scans
and then joining the results locally, they can generate a path which
performs the join on the remote server, whose result is then scanned
locally.
This commit does not extend postgres_fdw to take advantage of this
capability; it just provides the infrastructure.
Custom scan providers can use this in a similar way. Previously,
it was only possible for a custom scan provider to scan a single
relation. Now, it can scan an entire join tree, provided of course
that it knows how to produce the same results that the join would
have produced if executed normally.
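For illustration, a hedged sketch of how a custom scan provider might
hook into join planning. The hook name set_join_pathlist_hook matches
this infrastructure; the exact signature shown (notably
JoinPathExtraData) follows the documented form and should be treated as
an assumption here:

#include "postgres.h"
#include "fmgr.h"
#include "optimizer/paths.h"

PG_MODULE_MAGIC;

static set_join_pathlist_hook_type prev_join_pathlist_hook = NULL;

/* Called for each join relation the planner considers; a provider can
 * add a CustomPath here that scans the whole join tree, as long as it
 * produces the same rows the join would have produced. */
static void
my_join_pathlist(PlannerInfo *root, RelOptInfo *joinrel,
                 RelOptInfo *outerrel, RelOptInfo *innerrel,
                 JoinType jointype, JoinPathExtraData *extra)
{
    if (prev_join_pathlist_hook)
        prev_join_pathlist_hook(root, joinrel, outerrel, innerrel,
                                jointype, extra);

    /* ... build a CustomPath for the join and add_path() it ... */
}

void
_PG_init(void)
{
    prev_join_pathlist_hook = set_join_pathlist_hook;
    set_join_pathlist_hook = my_join_pathlist;
}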
KaiGai Kohei, reviewed by Shigeru Hanada, Ashutosh Bapat, and me.
ParseCommitRecord() accessed xl_xact_origin directly. But the chunks in
the commit record's data only have 4 byte alignment, whereas
xl_xact_origin's members require 8 byte alignment on some
platforms. Update comments to make note of that and copy the record to
stack local storage before reading.
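The shape of the fix, as a minimal sketch (struct and member names per
access/xact.h; the helper itself is illustrative):

#include "postgres.h"
#include "access/xact.h"
#include <string.h>

/* "data" points at the xl_xact_origin chunk within the commit record,
 * which is only guaranteed 4-byte alignment; copy it to aligned stack
 * storage before reading its 8-byte-aligned members. */
static XLogRecPtr
read_origin_lsn(const char *data)
{
    xl_xact_origin xl_origin;

    memcpy(&xl_origin, data, sizeof(xl_xact_origin));
    return xl_origin.origin_lsn;
}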
With help from Stefan Kaltenbrunner in pinning down the buildfarm and
verifying the fix.
pg_rewind looks at the control file to determine the server's timeline. If
the standby performs a "fast promotion", the timeline ID in the control
file is not updated until the next checkpoint. The startup process requests
a checkpoint immediately after promotion, so this is unlikely to be an
issue in the real world, but the regression suite ran pg_rewind so quickly
after promotion that the checkpoint had not yet completed.
Reported by Stephen Frost
In commit 31eae6028e, some documents were not updated to show the new
capability; fix that. Also, the error message you get when CURRENT_USER
and SESSION_USER are used in a context that doesn't accept them could be
clearer about it being a problem only in those contexts; so add the
word "here".
Author: Kyotaro HORIGUCHI
His patch submission also included changes to GRANT/REVOKE, but those
seemed more controversial, so I left them out. We can reconsider these
changes later.
This does four basic things. First, it provides convenience routines
to coordinate the startup and shutdown of parallel workers. Second,
it synchronizes various pieces of state (e.g. GUCs, combo CID
mappings, transaction snapshot) from the parallel group leader to the
worker processes. Third, it prohibits various operations that would
result in unsafe changes to that state while parallelism is active.
Finally, it propagates events that would result in an ErrorResponse,
NoticeResponse, or NotifyResponse message being sent to the client
from the parallel workers back to the master, from which they can then
be sent on to the client.
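A rough sketch of the intended usage, with names from access/parallel.h;
the CreateParallelContext signature shown matches this commit's
function-pointer form and changed in later releases, so treat the
details as assumptions:

#include "postgres.h"
#include "access/parallel.h"
#include "access/xact.h"

/* Runs in each worker after the leader's GUCs, combo CIDs, and
 * transaction snapshot have been synchronized. */
static void
my_worker_main(dsm_segment *seg, shm_toc *toc)
{
    /* fetch shared state from the toc and do a slice of the work */
}

static void
run_in_parallel(void)
{
    ParallelContext *pcxt;

    EnterParallelMode();        /* forbids unsafe state changes */
    pcxt = CreateParallelContext(my_worker_main, 4);
    InitializeParallelDSM(pcxt);
    LaunchParallelWorkers(pcxt);
    /* errors and notices raised by workers propagate to the leader */
    WaitForParallelWorkersToFinish(pcxt);
    DestroyParallelContext(pcxt);
    ExitParallelMode();
}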
Robert Haas, Amit Kapila, Noah Misch, Rushabh Lathia, Jeevan Chalke.
Suggestions and review from Andres Freund, Heikki Linnakangas, Noah
Misch, Simon Riggs, Euler Taveira, and Jim Nasby.
We need to create the pg_multixact/offsets file deleted by pg_upgrade
much earlier than we originally did: it was created in TrimMultiXact(), which
runs after we exit recovery, but it actually needs to run earlier than
the first call to SetMultiXactIdLimit (before recovery), because that
routine already wants to read the first offset segment.
Per pg_upgrade trouble report from Jeff Janes.
While at it, silence a compiler warning about a pointless assert that an
unsigned variable was being tested non-negative. This was a signed
constant in Thomas Munro's patch which I changed to unsigned before
commit. Pointed out by Andres Freund.
The "check" target no longer needs to depend on "all", because it now
runs "install" directly, which in turn depends on "all". Doing both
will cause problems with parallel make, because two builds would run
concurrently and interfere with each other.
Also remove the redirection of the temp-install output into a log file.
This was appropriate when this was done from within pg_regress, but now
it's just a regular make run, and especially with the above changes this
will now take the place of running the "all" target before the test
suites.
problem report by Jeff Janes, patch in part by Michael Paquier
This file isn't entirely consistent about whether "on" and "off"
should be marked up with <literal>, but it doesn't make much sense
to be inconsistent within a single sentence.
Etsuro Fujita
When this code was written, catalog scans were normally performed using
SnapshotNow, making special handling necessary here. Now, however, all
catalog scans use MVCC snapshots, so we can change these cases to look
more like what we do for catalog scans elsewhere in the code.
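For reference, a hedged sketch of the usual catalog-scan shape referred
to above; passing a NULL snapshot makes systable_beginscan use a fresh
MVCC catalog snapshot, so no SnapshotNow-style special handling is
needed:

#include "postgres.h"
#include "access/genam.h"
#include "access/htup.h"
#include "utils/rel.h"

static void
scan_catalog(Relation catalog, Oid indexId, int nkeys, ScanKey keys)
{
    SysScanDesc scan;
    HeapTuple   tuple;

    /* NULL snapshot => an MVCC catalog snapshot is used */
    scan = systable_beginscan(catalog, indexId, true, NULL, nkeys, keys);
    while (HeapTupleIsValid(tuple = systable_getnext(scan)))
    {
        /* process tuple */
    }
    systable_endscan(scan);
}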
Per discussion with Tom Lane and a reminder from Bruce Momjian.
Currently regression tests for python 3 are disabled on MSVC, and these
tests fail with python 3, too, so we have some work to do to enable
both. Meanwhile, all the buildfarm hosts seem to be building with python
2 anyway, so this at least gets us some coverage.
Original patch from Michael Paquier, significantly modified by me.
When implementing a replication solution on top of logical decoding, two
related problems exist:
* How to safely keep track of replication progress
* How to change replication behavior, based on the origin of a row;
e.g. to avoid loops in bi-directional replication setups
The solution to these problems, as implemented here, consists of
three parts:
1) 'replication origins', which identify nodes in a replication setup.
2) 'replication progress tracking', which remembers, for each
replication origin, how far replay has progressed in an efficient and
crash safe manner.
3) The ability to filter out changes performed at the behest of a
replication origin during logical decoding; this allows complex
replication topologies, e.g. by filtering all replayed changes out
(a sketch of such a filter follows below).
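Here is a hedged sketch of such a filter in an output plugin, using the
filter_by_origin_cb callback and the types from replication/origin.h;
the plugin wiring is illustrative:

#include "postgres.h"
#include "fmgr.h"
#include "replication/logical.h"
#include "replication/origin.h"
#include "replication/output_plugin.h"

PG_MODULE_MAGIC;

/* Return true to filter a change out. Skipping everything that carries
 * a (remote) replication origin breaks replay loops in bi-directional
 * setups, while locally originated changes still stream out. */
static bool
my_filter_by_origin(LogicalDecodingContext *ctx, RepOriginId origin_id)
{
    return origin_id != InvalidRepOriginId;
}

void
_PG_output_plugin_init(OutputPluginCallbacks *cb)
{
    cb->filter_by_origin_cb = my_filter_by_origin;
    /* ... begin/change/commit callbacks omitted ... */
}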
Most of this could also be implemented in "userspace", e.g. by inserting
additional rows containing origin information, but that ends up being much
less efficient and more complicated. We don't want to require various
replication solutions to reimplement logic for this independently. The
infrastructure is intended to be generic enough to be reusable.
This infrastructure also replaces the 'nodeid' infrastructure of commit
timestamps. It is intended to provide all the former capabilities,
except that there are only 2^16 different origins; but now they integrate
with logical decoding. Additionally, more functionality is accessible via
SQL. Since the commit timestamp infrastructure has also been introduced
in 9.5 (commit 73c986add) changing the API is not a problem.
For now the number of origins for which the replication progress can be
tracked simultaneously is determined by the max_replication_slots
GUC. That GUC is not a perfect match to configure this, but there
doesn't seem to be sufficient reason to introduce a separate new one.
Bumps both catversion and wal page magic.
Author: Andres Freund, with contributions from Petr Jelinek and Craig Ringer
Reviewed-By: Heikki Linnakangas, Petr Jelinek, Robert Haas, Steve Singer
Discussion: 20150216002155.GI15326@awork2.anarazel.de,
20140923182422.GA15776@alap3.anarazel.de,
20131114172632.GE7522@alap2.anarazel.de
I thought I'd gone through all of these before, but a fresh review found
this one too. (Perhaps it would be better to just delete this test and
let the failure occur later, but for the moment I'll preserve the logic.)
The case that this was rejecting is like
CREATE FOREIGN TABLE ft (f1 int ...) ...;
CREATE TABLE c1 (UNIQUE(f1)) INHERITS(ft);
This is necessary in view of the changes to allow foreign tables to be
full members of inheritance hierarchies, but I (tgl) unaccountably missed
it in commit cb1ca4d800.
Noted by Amit Langote, patch by Etsuro Fujita
With this patch the MSVC build and installation will work correctly with
the transforms. However the python transform tests for hstore and ltree
are still disabled pending some further adjustments.
Michael Paquier with some tweaks from me.
Multixact member files are subject to early wraparound overflow and
removal: if the average multixact size is above a certain threshold (see
note below) the protections against offset overflow are not enough:
during multixact truncation at checkpoint time, some
pg_multixact/members files would be removed because the server considers
them to be old and not needed anymore. This leads to loss of files that
are critical to interpret existing tuples' Xmax values.
To protect against this, since we don't have enough info in pg_control
and we can't modify it in old branches, we maintain shared memory state
about the oldest value that we need to keep; we use this during new
multixact creation to abort if an old still-needed file would get
overwritten. This value is kept up to date by checkpoints, which makes
it not completely accurate but should be good enough. We start emitting
warnings sometime earlier, so that the eventual multixact-shutdown
doesn't take DBAs completely by surprise (more precisely: once only 20
member SLRU segments remain before shutdown).
On troublesome average multixact size: The threshold size depends on the
multixact freeze parameters. The oldest age is related to the greater of
multixact_freeze_table_age and multixact_freeze_min_age: anything
older than that should be removed promptly by autovacuum. If autovacuum
is keeping up with multixact freezing, the troublesome multixact average
size is
(2^32-1) / Max(freeze table age, freeze min age)
or, with the default multixact_freeze_table_age of 150 million, around
28 members per multixact. Having an average multixact size
larger than that will eventually cause new multixact data to overwrite
the data area for older multixacts. (If autovacuum is not able to keep
up, or there are errors in vacuuming, the actual maximum is
multixact_freeze_max_age instead, at which point multixact generation
is stopped completely. The default value for this limit is 400 million,
which means that the multixact size that would cause trouble is about 10
members).
Initial bug report by Timothy Garnett, bug #12990
Backpatch to 9.3, where the problem was introduced.
Authors: Álvaro Herrera, Thomas Munro
Reviews: Thomas Munro, Amit Kapila, Robert Haas, Kevin Grittner
Some operating systems, including the reporter's Windows, return EBADFD
or similar when fsync() is invoked on an O_RDONLY file descriptor.
Unfortunately RestoreSlotFromDisk() does exactly that, which causes
failures after restarts in at least some scenarios.
If you hit the bug, the error message will be something like
ERROR: could not fsync file "pg_replslot/$name/state": Bad file descriptor
Simply use O_RDWR instead of O_RDONLY when opening the relevant file
descriptor to fix the bug. Unfortunately I have no way of verifying the
fix, but we've seen similar problems in the past.
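A minimal sketch of the change, in plain POSIX terms (the helper and
its error handling are illustrative):

#include <fcntl.h>
#include <unistd.h>

static int
read_state_and_sync(const char *path)
{
    /* Open read-write even though we only read: fsync() on an
     * O_RDONLY descriptor fails with EBADF(D) on some platforms. */
    int fd = open(path, O_RDWR);    /* previously O_RDONLY */

    if (fd < 0)
        return -1;

    /* ... read the slot state here ... */

    if (fsync(fd) != 0)
    {
        close(fd);
        return -1;
    }
    return close(fd);
}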
This bug goes back to 9.4 where slots were introduced. Backpatch
accordingly.
Reported-By: Patrice Drolet
Bug: #13143
Discussion: 20150424101006.2556.60897@wrigleys.postgresql.org
The original security barrier view implementation, on which RLS is
built, prevented all non-leakproof functions from being pushed down to
below the view, even when the function was not receiving any data from
the view. This optimization improves on that situation: instead of
checking strictly for non-leakproof functions, it checks for Vars being
passed to non-leakproof functions, and allows functions which take no
arguments, or whose arguments do not come from the current query level
(e.g. constants, which can be particularly useful), to be pushed down.
As discussed, this does mean that a function which is pushed down might
gain some idea that there are rows meeting certain criteria based on
the number of times the function is called, but this isn't a
particularly new issue and the documentation in rules.sgml already
addressed similar covert-channel risks. That documentation is updated
to reflect that non-leakproof functions may be pushed down now, if
they meet the above-described criteria.
Author: Dean Rasheed, with a bit of rework to make things clearer,
along with comment and documentation updates from me.
Switching the Windows build scripts to use forward slashes instead of
backslashes has caused a couple of issues in VC builds:
- The file tree list was not correctly generated: the build script
generating the vcproj files missed tree dependencies when listing items
in Filter.
- VC builds do not accept file paths with forward slashes; it might be
possible to use a Condition, but it seems safer to simply enforce
backslashes in the file paths in the vcproj files.
- chkpass had an unneeded dependency on libpgport and libpgcommon to
make the build succeed, but it is actually unnecessary, as crypt.c is
already listed for this project and should be replaced with a fake name,
as it is a unique file.
Michael Paquier
Since both forms are arguably legal I wasn't sure about changing
this. But then Tom argued for 'therefore'...
Author: Dmitriy Olshevskiy
Discussion: 34789.1430067832@sss.pgh.pa.us
When displaying stats it was possible that a floating-point division by
zero occurred when no FPIs (full-page images) were issued for a type of
record.
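The shape of the fix, as a hedged sketch with illustrative names:

/* Percentage of FPI bytes attributable to one record type; returns 0
 * when no FPIs were issued at all, avoiding the division by zero. */
static double
fpi_pct(double rec_fpi_len, double total_fpi_len)
{
    if (total_fpi_len == 0)
        return 0;
    return 100.0 * rec_fpi_len / total_fpi_len;
}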
Author: Abhijit Menon-Sen
Discussion: 20150417091811.GA14008@toroid.org
This provides a mechanism for specifying conversions between SQL data
types and procedural languages. As examples, there are transforms
for hstore and ltree for PL/Perl and PL/Python.
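As a hedged illustration of the C side of a transform (all names here
are hypothetical; the real examples are the hstore and ltree transforms
for PL/Perl and PL/Python):

#include "postgres.h"
#include "fmgr.h"

PG_MODULE_MAGIC;

/* FROM SQL transform: takes the SQL datum (passed as "internal") and
 * returns a pointer to the PL's native representation of the value. */
PG_FUNCTION_INFO_V1(mytype_to_plsample);

Datum
mytype_to_plsample(PG_FUNCTION_ARGS)
{
    Datum   value = PG_GETARG_DATUM(0);
    void   *converted = NULL;   /* PL-specific object would go here */

    /* ... convert "value" into the PL's native representation ... */
    PG_RETURN_POINTER(converted);
}

Such a pair of functions (one per direction) is then attached to a
type/language combination with CREATE TRANSFORM.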
reviews by Pavel Stěhule and Andres Freund
pg_dump has historically assumed that default_with_oids affects only plain
tables and not other relkinds. Conceivably we could make it apply to some
newly invented relkind if we did so from the get-go, but changing the
behavior for existing object types will break existing dump scripts.
Add code comments warning about this interaction.
Also, make sure that default_with_oids doesn't cause parse_utilcmd.c to
think that CREATE FOREIGN TABLE will create an OID column. I think this is
only a latent bug right now, since we don't allow UNIQUE/PKEY constraints
in CREATE FOREIGN TABLE, but it's better to be consistent and future-proof.