The main change here is to call security_compute_create_name_raw()
rather than security_compute_create_raw(). This ups the minimum
requirement for libselinux from 2.0.99 to 2.1.10, but it looks
like most distributions will have picked that up before 9.3 is out.
KaiGai Kohei
On older-model gcc, the original coding of UTILITY_BEGIN_QUERY() can
draw this error because of multiple assignments to _needCleanup.
Rather than mark that variable volatile, we can suppress the warning
by arranging to have just one unconditional assignment before PG_TRY.
This event takes place just before ddl_command_end, and is fired if and
only if at least one object has been dropped by the command. (For
instance, DROP TABLE IF EXISTS of a table that does not in fact exist
will not lead to such a trigger firing.) Commands that drop multiple
objects (such as DROP SCHEMA or DROP OWNED BY) will cause a single event
to fire. Some firings might be surprising, such as
ALTER TABLE DROP COLUMN.
The trigger is fired after the drop has taken place, because that has
been deemed the safest design, to avoid exposing possibly-inconsistent
internal state (system catalogs as well as current transaction) to the
user function code. This means that careful tracking of object
identification is required during the object removal phase.
As with other currently existing events, there is support for tag
filtering.
To support the new event, add a new pg_event_trigger_dropped_objects()
set-returning function, which returns a set of rows comprising the
objects affected by the command. This is to be used within the user
function code, and is mostly modelled after the recently introduced
pg_identify_object() function.
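For example, a minimal trigger on the new sql_drop event that reports
each dropped object could look like this (function and trigger names
here are illustrative):

    CREATE FUNCTION report_dropped_objects() RETURNS event_trigger
    LANGUAGE plpgsql AS $$
    DECLARE
        obj record;
    BEGIN
        -- iterate over everything dropped by the triggering command
        FOR obj IN SELECT * FROM pg_event_trigger_dropped_objects()
        LOOP
            RAISE NOTICE 'dropped % %', obj.object_type, obj.object_identity;
        END LOOP;
    END;
    $$;

    CREATE EVENT TRIGGER report_drops ON sql_drop
        EXECUTE PROCEDURE report_dropped_objects();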
Catalog version bumped due to the new function.
Dimitri Fontaine and Álvaro Herrera
Review by Robert Haas, Tom Lane
Previously, if the postmaster initialized OpenSSL's PRNG (which it will do
when ssl=on in postgresql.conf), the same pseudo-random state would be
inherited by each forked child process. The problem is masked to a
considerable extent if the incoming connection uses SSL encryption, but
when it does not, identical pseudo-random state is made available to
functions like contrib/pgcrypto. The process's PID does get mixed into any
requested random output, but on most systems that still only results in 32K
or so distinct random sequences available across all Postgres sessions.
This might allow an attacker who has database access to guess the results
of "secure" operations happening in another session.
To fix, forcibly reset the PRNG after fork(). Each child process that has
need for random numbers from OpenSSL's generator will thereby be forced to
go through OpenSSL's normal initialization sequence, which should provide
much greater variability of the sequences. There are other ways we might
do this that would be slightly cheaper, but this approach seems the most
future-proof against SSL-related code changes.
This has been assigned CVE-2013-1900, but since the issue and the patch
have already been publicized on pgsql-hackers, there's no point in trying
to hide this commit.
Back-patch to all supported branches.
Marko Kreen
In a heap update, if the old and new tuple were on different pages, and the
new page no longer existed (because it was subsequently truncated away by
vacuum), heap_xlog_update forgot to release the pin on the old buffer. This
bug was introduced by the "Fix multiple problems in WAL replay" patch,
commit 3bbf668de9 (on master branch).
With full_page_writes=off, this triggered an "incorrect local pin count"
error later in replay, if the old page was vacuumed.
This fixes bug #7969, reported by Yunong Xiao. Backpatch to 9.0, like the
commit that introduced this bug.
Move functions used only by pg_dump and pg_restore from dumputils.c to a new
file, pg_backup_utils.c. dumputils.c is linked into psql and some programs
in bin/scripts, so it seems good to keep it slim. The parallel functionality
is moved to parallel.c, as is exit_horribly, because the interesting code in
exit_horribly is parallel-related.
This refactoring gets rid of the on_exit_msg_func function pointer. It was
problematic, because a modern gcc version with -Wmissing-format-attribute
complained if it wasn't marked with PF_PRINTF_ATTRIBUTE, but the ancient gcc
version on Tom Lane's old HP-UX box didn't accept that attribute on a
function pointer, and gave an error. We still use a similar function pointer
trick for the getLocalPQBuffer() function, to use a thread-local version of
it in parallel mode on Windows, but that dodges the problem because it
doesn't take printf-like arguments.
Dumping invalid indexes can cause problems at restore time, for example
if the reason the index creation failed was because it tried to enforce
a uniqueness condition not satisfied by the table's data. Also, if the
index creation is in fact still in progress, it seems reasonable to
consider it to be an uncommitted DDL change, which pg_dump wouldn't be
expected to dump anyway.
Back-patch to all active versions, and teach them to ignore invalid
indexes in servers back to 8.2, where the concept was introduced.
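For reference, invalid indexes are those whose pg_index.indisvalid flag
is false; a query along these lines lists them:

    -- list indexes marked invalid, e.g. left over from a failed
    -- CREATE INDEX CONCURRENTLY
    SELECT indexrelid::regclass AS index_name
    FROM pg_index
    WHERE NOT indisvalid;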
Michael Paquier
For getting the server's version in numeric form, use PQserverVersion().
It does the exact same parsing as dumputils.c's parse_version(), and has
been around in libpq for a long time. For the client's version, just use
the PG_VERSION_NUM constant.
If you have clusters of different versions pointing to the same tablespace
location, we would incorrectly include all the data belonging to the other
versions, too.
Fixes bug #7986, reported by Sergey Burladyan.
A new 'starttli' field was added to the response of the BASE_BACKUP command.
Make pg_basebackup tolerate the case that it's missing, so that it still
works with older servers.
Add an explicit check for the server version, so that you get a nicer error
message if you try to use it with a pre-9.1 server.
The streaming protocol message format changed in 9.3, so -X stream still won't
work with pre-9.3 servers. I added a version check to ReceiveXLogStream()
earlier, but this one is written slightly differently, so that in 9.4 it will
still work with a 9.3 server. (In 9.4, the error message needs to be adjusted
to "9.3 or above", though.) Also, if the version check fails, don't retry.
New infrastructure is added which creates a set number of workers
(threads on Windows, forked processes on Unix). Jobs are then
handed out to these workers by the master process as needed.
pg_restore is adjusted to use this new infrastructure in place of the
old setup which created a new worker for each step on the fly. Parallel
dumps acquire a snapshot clone, when the server supports it, in order to
stay consistent.
The parallel option is selected by the -j / --jobs command line
parameter of pg_dump.
Joachim Wieland, lightly editorialized by Andrew Dunstan.
Most (all?) of Russia has moved to what's effectively year-round daylight
saving time, so that the "standard" zone names now mean an hour later
than they used to. Update that, notably changing MSK as per recent
complaint from Sergey Konoplev, but also CHOT, GET, IRKT, KGT, KRAT,
MAGT, NOVT, OMST, VLAT, YAKT, YEKT. The corresponding DST abbreviations
are presumably now obsolete, but I left them in place with their old
definitions, just to reduce any possible breakage from this change.
Also add VOLT (Europe/Volgograd), which for some reason we never had
before, as well as MIST (Antarctica/Macquarie), and fix obsolete
definitions of MAWT, TKT, and WST.
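The updated definitions can be inspected on a running server with a
query like:

    SELECT abbrev, utc_offset, is_dst
    FROM pg_timezone_abbrevs
    WHERE abbrev IN ('MSK', 'VOLT');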
Add an option to zic.c to dump out all non-obsolete timezone abbreviations
defined in the Olson database. Comparing this list to its previous state
will clue us in when something happens that we may need to account for in
the tznames/ time zone abbreviation lists. The README file's previous
exhortation to "just grep for differences" was completely useless advice,
in my now-considerable experience; but maybe this will be a bit more
useful. As a starting point I built the same list from the tzdata files
as they existed in 2006, which is committed here as known_abbrevs.txt.
Comparison indeed turned up quite a few changes we had neglected to account
for, which I will commit separately.
This appears to cause some intermittent file system problems
on Windows 8. Instead, set up the old data directory in its
intended final location to start with.
Doing that results in a broken index entry in PDF output. We had only
a few like that, which is probably why nobody noticed before.
Standardize on putting the <term> first.
Josh Kupershmidt
One of the use-cases for postgres_fdw is extracting data from older PG
servers, so cross-version compatibility is important. Document what we
can do here, and further annotate some of the coding choices that create
compatibility constraints. In passing, remove one unnecessary
incompatibility with old servers, namely assuming that we didn't need to
quote the timezone name 'UTC'.
If the remote database's settings of these GUCs are different from ours,
ambiguous datetime values may be read incorrectly. To fix, temporarily
adopt the remote server's settings while we ingest a query result.
This is not a complete fix, since it doesn't do anything about ambiguous
values in commands sent to the remote server; but there seems little we
can do about that end of it given dblink's entirely textual API for
transmitted commands.
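To illustrate the hazard, the same date string decodes differently
under different DateStyle settings:

    SET datestyle = 'ISO, MDY';
    SELECT '03/04/2013'::date;   -- 2013-03-04 (March 4th)
    SET datestyle = 'ISO, DMY';
    SELECT '03/04/2013'::date;   -- 2013-04-03 (April 3rd)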
Back-patch to 9.2. The hazard exists in all versions, but this patch
would need more work to apply before 9.2. Given the lack of field
complaints about this issue, it doesn't seem worth the effort at present.
Daniel Farina and Tom Lane
Commit 13fe298ca0 changed this GUC to be
PGC_SUSET, but neglected to update the documentation to match.
While at it, edit and rearrange the text a little for clarity.
Checksums are set immediately prior to flush out of shared buffers
and checked when pages are read in again. Hint bit setting will
require a full page write when the block is dirtied, which causes various
infrastructure changes. Extensive comments, docs and README.
A WARNING is reported if a checksum fails on a non-all-zeroes page,
followed by an ERROR; the ERROR can be disabled with
ignore_checksum_failure = on.
Feature enabled by an initdb option, since transition from option off
to option on is long and complex and has not yet been implemented.
Default is not to use checksums.
The checksum used is the WAL CRC-32 truncated to 16 bits.
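For example, to continue reading past a damaged page rather than
erroring out (superuser only, use with care):

    SET ignore_checksum_failure = on;   -- failures now produce only WARNINGs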
Simon Riggs, Jeff Davis, Greg Smith
Wide input and assistance from many community members. Thank you.
Prior to 9.3, commit_delay affected only the current user, whereas
now only the group leader waits while holding the WALWriteLock, so a
deliberately or accidentally poor setting could seriously degrade
performance for all users. Where per-user settings are needed in
real situations, the privilege can be delegated via SECURITY DEFINER
functions.
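A sketch of such a delegation, assuming a superuser-owned wrapper
function (names are illustrative):

    -- created by a superuser; gives ordinary users a controlled way
    -- to change commit_delay for their own session
    CREATE FUNCTION set_commit_delay(integer) RETURNS text
    LANGUAGE sql SECURITY DEFINER AS
    $$ SELECT set_config('commit_delay', $1::text, false) $$;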
Request for change from Peter Geoghegan
This should provide some marginal overall savings, since it surely takes
many more cycles for the remote server to deal with the NULL columns than
it takes for postgres_fdw not to emit them. But really the reason is to
keep the emitted queries from looking quite so silly ...
I wasn't going to ship this without having at least some example of how
to do that. This version isn't terribly bright; in particular it won't
consider any combinations of multiple join clauses. Given the cost of
executing a remote EXPLAIN, I'm not sure we want to be very aggressive
about doing that, anyway.
In support of this, refactor generate_implied_equalities_for_indexcol
so that it can be used to extract equivalence clauses that aren't
necessarily tied to an index.
The statistics-based cost estimation patch for range types broke that, by
incorrectly assuming that the left operand of all range operators is a
range. That led to a "type x is not a range type" error. Because it took so
long for anyone to notice, add a regression test for that case.
We still don't do proper statistics-based cost estimation for that, so you
just get a default constant estimate. We should look into implementing that,
but this patch at least fixes the regression.
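The failing case is an "element <@ range" qualification, i.e. one where
the left operand is not itself a range; for example (table and column
names here are hypothetical):

    CREATE TABLE events (id int4, happened timestamptz);
    -- estimating the selectivity of this qual previously threw the error
    SELECT * FROM events
    WHERE happened <@ tstzrange('2013-01-01', '2013-02-01');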
Spotted by Tom Lane, when testing query from Josh Berkus.
Introduce pg_identify_object(oid,oid,int4), which is similar in spirit
to pg_describe_object but instead produces a row of machine-readable
information to uniquely identify the given object, without resorting to
OIDs or other internal representation. This is intended to be used in
the event trigger implementation, to report objects being operated on;
but it has usefulness of its own.
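For example, identifying the pg_am catalog by its OID:

    SELECT * FROM pg_identify_object('pg_class'::regclass,
                                     'pg_am'::regclass, 0);
    --  type  |   schema   | name  |     identity
    -- -------+------------+-------+------------------
    --  table | pg_catalog | pg_am | pg_catalog.pg_am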
Catalog version bumped because of the new function.
The buildfarm members using -DCLOBBER_CACHE_ALWAYS still don't like this
test. Some experimentation shows that on my machine, isolationtester's
query to check for "waiting" state takes 2 to 2.5 seconds to bind+execute
under -DCLOBBER_CACHE_ALWAYS. Set the timeouts to 5 seconds to leave some
headroom for possibly-slower buildfarm critters.
Really we ought to fix the "waiting" query, which is not only horridly
slow but outright wrong in detail; and then maybe we can back off these
timeouts. But right now I'm just trying to get the buildfarm green again.
Per report from Hadi Moshayedi of a matview regression test failure
with optimization of aggregates. A few ORDER BY clauses improve
code coverage for matviews while solving that problem.
Add a section to the Reliability documentation about what is and is not
protected for various file types.
Further edits welcome.
Designed to allow a 1-2 line change when/if checksums are committed.
Inspired by docs written by Jeff Davis, though completely different from his
patch.
The docs showed that early-January dates can be considered part of the
previous year for week-counting purposes, but failed to say explicitly
that late-December dates can also be considered part of the next year.
Fix that, and add a cross-reference to the "isoyear" field. Per bug
#7967 from Pawel Kobylak.
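For example, per the ISO 8601 week-numbering rules:

    SELECT extract(isoyear FROM date '2005-01-01');  -- 2004 (week 53 of 2004)
    SELECT extract(isoyear FROM date '2012-12-31');  -- 2013 (week 1 of 2013)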