CreateReplicationSlot() and DropReplicationSlot() were not cleaning up
the query buffer in some cases (mostly error conditions), which meant a
small leak. Not generally an issue, as the error case would result in an
immediate exit, but not difficult to fix either and reduces the number
of false positives from code analyzers.
In passing, also add appropriate PQclear() calls to RunIdentifySystem().
Pointed out by Coverity.
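For illustration, the cleanup pattern now looks roughly like this (a
sketch only; variable names, messages and control flow are illustrative):

    PQExpBuffer query = createPQExpBuffer();
    PGresult   *res;

    appendPQExpBuffer(query, "CREATE_REPLICATION_SLOT \"%s\" PHYSICAL",
                      slot_name);
    res = PQexec(conn, query->data);
    if (PQresultStatus(res) != PGRES_TUPLES_OK)
    {
        fprintf(stderr, "could not create replication slot \"%s\": %s",
                slot_name, PQerrorMessage(conn));
        destroyPQExpBuffer(query);  /* previously leaked on this path */
        PQclear(res);
        return false;
    }
    destroyPQExpBuffer(query);
    PQclear(res);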
pg_receivexlog already has the capability to use a replication slot to
reserve WAL on the upstream node. But the slot it uses currently has to
be created via SQL.
To allow using slots directly, without involving SQL, add
--create-slot and --drop-slot actions, analogous to the logical slot
manipulation support in pg_recvlogical.
Author: Michael Paquier
Discussion: CABUevEx+zrOHZOQg+dPapNPFRJdsk59b=TSVf30Z71GnFXhQaw@mail.gmail.com
A future patch (9.5 only) adds slot management to pg_receivexlog. The
verbs create/drop don't seem descriptive enough there. It seems better
to rename pg_recvlogical's commands now, in beta, than live with the
inconsistency forever.
The old form (e.g. --drop) will still be accepted, since getopt_long()
accepts unambiguous abbreviations of long options.
Backpatch to 9.4 where pg_recvlogical was introduced.
Author: Michael Paquier and Andres Freund
Discussion: CAB7nPqQtt79U6FmhwvgqJmNyWcVCbbV-nS72j_jyPEopERg9rg@mail.gmail.com
Add entries for recent changes, including noting the JSONB format change
and the recent timezone data changes. We should remove those two items
before 9.4 final: the JSONB change will be of no interest in the long
run, and it's not normally our habit to mention timezone updates in
major-release notes. But it seems important to document them temporarily
for beta testers.
I failed to resist the temptation to wordsmith a couple of existing
entries, too.
Teach sigusr1_handler() to use the same test as ServerLoop() for
whether a worker might need to be started. Aside from being perhaps
a bit simpler, this prevents a potentially-unbounded delay when
starting a background worker. On some platforms, select() doesn't
return when interrupted by a signal, but is instead restarted,
including a reset of the timeout to the originally-requested value.
If signals arrive often enough, but no connection requests arrive,
sigusr1_handler() will be executed repeatedly, but the body of
ServerLoop() won't be reached. This change ensures that, even in
that case, background workers will eventually get launched.
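A minimal sketch of the change, assuming postmaster.c's existing
StartWorkerNeeded/HaveCrashedWorker flags and its maybe_start_bgworker()
helper:

    /* in sigusr1_handler(): apply the same test ServerLoop() uses */
    if (StartWorkerNeeded || HaveCrashedWorker)
        maybe_start_bgworker();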
This is far from a perfect fix; really, we need select() to return
control to ServerLoop() after an interrupt, either via the self-pipe
trick or some other mechanism. But that's going to require more
work and discussion, so let's do this for now to at least mitigate
the damage.
Per investigation of test_shm_mq failures on buildfarm member anole.
Most zones in the Russian Federation are subtracting one or two hours
as of 2014-10-26. Update the meanings of the abbreviations IRKT, KRAT,
MAGT, MSK, NOVT, OMST, SAKT, VLAT, YAKT, YEKT to match.
The IANA timezone database has adopted abbreviations of the form AxST/AxDT
for all Australian time zones, reflecting what they believe to be current
majority practice Down Under. These names do not conflict with usage
elsewhere (other than ACST for Acre Summer Time, which has been in disuse
since 1994). Accordingly, adopt these names into our "Default" timezone
abbreviation set. The "Australia" abbreviation set now contains only
CST,EAST,EST,SAST,SAT,WST, all of which are thought to be mostly historical
usage. Note that SAST has also been changed to be South Africa Standard
Time in the "Default" abbreviation set.
Add zone abbreviations SRET (Asia/Srednekolymsk) and XJT (Asia/Urumqi),
and use WSST/WSDT for western Samoa.
Also a DST law change in the Turks & Caicos Islands (America/Grand_Turk),
and numerous corrections for historical time zone data.
This updates known_abbrevs.txt to be what it should have been already,
were my -P patch not broken; and updates some tznames/ entries that
missed getting any love in previous timezone data updates because zic
failed to flag the change of abbreviation.
The non-cosmetic updates:
* Remove references to "ADT" as "Arabia Daylight Time", an abbreviation
that's been out of use since 2007; therefore, claiming there is a conflict
with "Atlantic Daylight Time" doesn't seem especially helpful. (We have
left obsolete entries in the files when they didn't conflict with anything,
but that seems like a different situation.)
* Fix entirely incorrect GMT offsets for CKT (Cook Islands), FJT, FJST
(Fiji); we didn't even have them on the proper side of the date line.
(Seems to have been aboriginal errors in our tznames data; there's no
evidence anything actually changed recently.)
* FKST (Falkland Islands Summer Time) is now used all year round, so
don't mark it as a DST abbreviation.
* Update SAKT (Sakhalin) to mean GMT+11 not GMT+10.
In cosmetic changes, I fixed a bunch of wrong (or at least obsolete)
claims about abbreviations not being present in the zic files, and
tried to be consistent about how obsolete abbreviations are labeled.
Note the underlying timezone/data files are still at release 2014e;
this is just trying to get us in sync with what those files actually
say before we go to the next update.
Peter G pointed out that valgrind was, rightfully, complaining about
CreatePolicy() ending up copying beyond the end of the parsed policy
name. Name is a fixed-size type and we need to use namein (through
DirectFunctionCall1()) to zero-fill the entire array before we pass
it down to heap_form_tuple().
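The shape of the fix, sketched (the attribute constant here is
hypothetical, for illustration only):

    /* namein zero-pads its result to NAMEDATALEN, so no uninitialized
     * bytes end up in the tuple */
    values[Anum_policy_name - 1] =
        DirectFunctionCall1(namein, CStringGetDatum(stmt->policy_name));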
Michael Paquier pointed out that pg_dump --verbose was missing a
newline and Fabrízio de Royes Mello further pointed out that the
schema was also missing from the messages, so fix those also.
Also, based on an off-list comment from Kevin, rework the psql \d
output to facilitate copy/pasting into a new CREATE or ALTER POLICY
command.
Lastly, improve the pg_policies view and update the documentation for
it, along with a few other minor doc corrections based on an off-list
discussion with Adam Brightwell.
The quick hack I added to zic to dump out currently-in-use timezone
abbreviations turns out to have a nasty bug: within each zone, it was
printing the last "struct ttinfo" to be *defined*, not necessarily the
last one in use. This was mainly a problem in zones that had changed the
meaning of their zone abbreviation (to another GMT offset value) and later
changed it back.
As a result of this error, we'd missed out updating the tznames/ files
for some jurisdictions that have changed their zone abbreviations since
the tznames/ files were originally created. I'll address the missing data
updates in a separate commit.
When there are cost-delay-related storage options set for a table,
trying to make that table participate in the autovacuum cost-limit
balancing algorithm produces undesirable results: instead of using the
configured values, the global values are always used,
as illustrated by Mark Kirkwood in
http://www.postgresql.org/message-id/52FACF15.8020507@catalyst.net.nz
Since the mechanism is already complicated, just disable it for those
cases rather than trying to make it cope. There are undesirable
side-effects from this too, namely that the total I/O impact on the
system will be higher whenever such tables are vacuumed. However, this
is seen as less harmful than slowing down vacuum, because that would
cause bloat to accumulate. Anyway, in the new system it is possible to
tweak options to get the precise behavior one wants, whereas with the
previous system one was simply hosed.
This has been broken forever, so backpatch to all supported branches.
This might affect systems where cost_limit and cost_delay have been set
for individual tables.
The page splitting code would go into infinite recursion if you try to
insert an index tuple that doesn't fit even on an empty page.
Per analysis and suggested fix by Andrew Gierth. Fixes bug #11555, reported
by Bryan Seitz (analysis happened over IRC). Backpatch to all supported
versions.
Testing by Amit Kapila, Andres Freund, and myself, with and without
other patches that also aim to improve scalability, seems to indicate
that this change is a significant win over the current value and over
smaller values such as 64. It's not clear how high we can push this
value before it starts to have negative side-effects elsewhere, but
going this far looks OK.
As of commit a87c72915 (which later got backpatched as far as 9.1),
we're explicitly supporting the notion that append relations can be
nested; this can occur when UNION ALL constructs are nested, or when
a UNION ALL contains a table with inheritance children.
Bug #11457 from Nelson Page, as well as an earlier report from Elvis
Pranskevichus, showed that there were still nasty bugs associated with such
cases: in particular the EquivalenceClass mechanism could try to generate
"join" clauses connecting an appendrel child to some grandparent appendrel,
which would result in assertion failures or bogus plans.
Upon investigation I concluded that all current callers of
find_childrel_appendrelinfo() need to be fixed to explicitly consider
multiple levels of parent appendrels. The most complex fix was in
processing of "broken" EquivalenceClasses, which are ECs for which we have
been unable to generate all the derived equality clauses we would like to
because of missing cross-type equality operators in the underlying btree
operator family. That code path is more or less entirely untested by
the regression tests to date, because no standard opfamilies have such
holes in them. So I wrote a new regression test script to try to exercise
it a bit, which turned out to be quite a worthwhile activity as it exposed
existing bugs in all supported branches.
The present patch is essentially the same as far back as 9.2, which is
where parameterized paths were introduced. In 9.0 and 9.1, we only need
to back-patch a small fragment of commit 5b7b5518d, which fixes failure to
propagate out the original WHERE clauses when a broken EC contains constant
members. (The regression test case results show that these older branches
are noticeably stupider than 9.2+ in terms of the quality of the plans
generated; but we don't really care about plan quality in such cases,
only that the plan not be outright wrong. A more invasive fix in the
older branches would not be a good idea anyway from a plan-stability
standpoint.)
Move some more code for managing replication connection commands to
streamutil.c. A later patch will introduce replication slot manipulation
in pg_receivexlog, and this avoids duplicating the relevant code between
pg_receivexlog and pg_recvlogical.
Author: Michael Paquier, with some editing by me.
Several comments still referred to 'initiating', 'freeing', 'stopping'
replication slots. These were terms used during different phases of
the development of logical decoding, but are no longer accurate.
Also rename StreamLog() to StreamLogicalLog() and add 'void' to the
prototype.
Author: Michael Paquier, with some editing by me.
Backpatch to 9.4 where pg_recvlogical was introduced.
I left the GUC in place for the beta period, so that people could experiment
with different values. No-one's come up with any data that a different value
would be better under some circumstances, so rather than try to explain
the GUC to users, let's just hard-code the current value, 8.
DetermineSleepTime() was previously called with signals unblocked.
That's not good, because it allows signal handlers to interrupt its
workings.
DetermineSleepTime() was added in 9.3 with the addition of background
workers (da07a1e856), where it only read from
BackgroundWorkerList.
Since 9.4, where dynamic background workers were added (7f7485a0cd),
the list is also manipulated in DetermineSleepTime(). That's bad
because the list now can be persistently corrupted if modified by both
a signal handler and DetermineSleepTime().
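The shape of the fix in ServerLoop(), sketched (details differ):

    /* compute the sleep time while signals are still blocked, so a
     * handler cannot mutate BackgroundWorkerList mid-traversal */
    DetermineSleepTime(&timeout);

    PG_SETMASK(&UnBlockSig);
    selres = select(nSockets, &rmask, NULL, NULL, &timeout);
    PG_SETMASK(&BlockSig);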
This was discovered during the investigation of hangs on buildfarm
member anole. It's unclear whether this bug is the source of these
hangs or not, but it's worth fixing either way. I have confirmed that
it can cause crashes.
Luckily it looks like this can only cause problems when bgworkers are
actively used.
Discussion: 20140929193733.GB14400@awork2.anarazel.de
Backpatch to 9.3 where background workers were introduced.
This adds a new pgp_armor_headers function to extract armor headers from
an ASCII-armored blob, and a new overloaded variant of the armor function
for constructing ASCII armor with extra headers.
Marko Tiikkaja and me.
As noted in http://bugs.debian.org/763098 there is a conflict between
postgres' definition of CACHE_LINE_SIZE and the definition by various
*bsd platforms. It's debatable who has the right to define such a
name, but postgres' use was only introduced in 375d8526f2 (9.4), so
it seems like a good idea to rename it.
Discussion: 20140930195756.GC27407@msg.df7cb.de
Per complaint of Christoph Berg in the above email, although he's not
the original bug reporter.
Backpatch to 9.4 where the define was introduced.
The COPY documentation incorrectly stated, for the PROGRAM case,
that we read from stdin and wrote to stdout. Fix that, and improve
consistency by referring to the 'PostgreSQL' user instead of the
'postgres' user, as is done in the rest of the COPY documentation.
Pointed out by Peter van Dijk.
Back-patch to 9.3 where COPY .. PROGRAM was introduced.
Per discussion, revert the commit which added 'ignore_nulls' to
row_to_json. This capability would be better added as an independent
function rather than being bolted on to row_to_json. Additionally,
the implementation didn't address complex JSON objects, and so was
incomplete anyway.
Pointed out by Tom and discussed with Andrew and Robert.
The original design used an array of offsets into the variable-length
portion of a JSONB container. However, such an array is basically
uncompressible by simple compression techniques such as TOAST's LZ
compressor. That's bad enough, but because the offset array is at the
front, it tended to trigger the give-up-after-1KB heuristic in the TOAST
code, so that the entire JSONB object was stored uncompressed; which was
the root cause of bug #11109 from Larry White.
To fix without losing the ability to extract a random array element in O(1)
time, change this scheme so that most of the JEntry array elements hold
lengths rather than offsets. With data that's compressible at all, there
tend to be fewer distinct element lengths, so that there is scope for
compression of the JEntry array. Every N'th entry is still an offset.
To determine the length or offset of any specific element, we might have
to examine up to N preceding JEntrys, but that's still O(1) so far as the
total container size is concerned. Testing shows that this cost is
negligible compared to other costs of accessing a JSONB field, and that
the method does largely fix the incompressible-data problem.
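A self-contained sketch of the lookup, with a simplified encoding (the
real code uses a JEntry flag bit and a stride of JB_OFFSET_STRIDE
entries):

    #include <stdint.h>

    #define HAS_OFF (1u << 31)  /* entry stores an end offset, not a length */

    /* Start offset of element 'index': walk back at most one stride,
     * summing lengths, until an entry that stores an absolute offset. */
    static uint32_t
    element_offset(const uint32_t *entries, int index)
    {
        uint32_t    offset = 0;
        int         i;

        for (i = index - 1; i >= 0; i--)
        {
            offset += entries[i] & ~HAS_OFF;
            if (entries[i] & HAS_OFF)
                break;
        }
        return offset;
    }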
While at it, rearrange the order of elements in a JSONB object so that
it's "all the keys, then all the values" not alternating keys and values.
This doesn't really make much difference right at the moment, but it will
allow providing a fast path for extracting individual object fields from
large JSONB values stored EXTERNAL (ie, uncompressed), analogously to the
existing optimization for substring extraction from large EXTERNAL text
values.
Bump catversion to denote the incompatibility in on-disk format.
We will need to fix pg_upgrade to disallow upgrading jsonb data stored
with 9.4 betas 1 and 2.
Heikki Linnakangas and Tom Lane
Andres pointed out that there was an extra ';' in equalPolicies, which
made me realize that my prior testing with CLOBBER_CACHE_ALWAYS was
insufficient (it didn't always catch the issue, just most of the time).
Thanks to that, a different issue was discovered, specifically in
equalRSDescs. This change corrects equalRSDescs to return 'true' once
all policies have been confirmed logically identical. After stepping
through both functions to ensure correct behavior, I ran this for
about 12 hours of CLOBBER_CACHE_ALWAYS runs of the regression tests
with no failures.
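For illustration only (not the actual code), the class of bug an extra
';' creates:

    if (!equal(a->qual, b->qual));      /* stray ';' makes the if empty */
        return false;                   /* so this runs unconditionally */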
In addition, correct a few typos in the documentation which were pointed
out by Thom Brown (thanks!) and improve the policy documentation further
by adding a fleshed-out usage example based on a unix passwd file.
Lastly, clean up a few comments in the regression tests and pg_dump.h.
Without this fix, parallel restore of a schema-only dump can deadlock,
because when the dump is schema-only, the dependency will still be
pointing at the TABLE item rather than the TABLE DATA item.
Robert Haas and Tom Lane
* Don't play tricks for a more efficient pg_atomic_clear_flag() in the
generic gcc implementation. The old version was broken on gcc < 4.7
on !x86 platforms. Per buildfarm member chipmunk.
* Make usage of __atomic() fences depend on HAVE_GCC__ATOMIC_INT32_CAS
instead of HAVE_GCC__ATOMIC_INT64_CAS - there's platforms with 32bit
support that don't support 64bit atomics.
* Blindly fix two superfluous #endifs in generic-xlc.h
* Check for --disable-atomics on platforms other than x86.
That gets rid of the only -Wempty-body warning when compiling postgres
with gcc 4.8/9. As 6550b901f shows, it's useful to be able to use that
option routinely.
Without asserts there are many more warnings, but that's food for
another commit.
Some x86 32bit versions of gcc apparently generate references to the
nonexistent %sil register when using the "r" input constraint, but not
with the "=q" constraint. The latter restricts allocations to a/b/c/d,
which should all work.
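For context, the compare-exchange wrapper in question looks roughly like
this (a sketch of the x86 inline assembly; see the atomics headers for
the real thing):

    char        ret;

    /* "=q" restricts 'ret' to a/b/c/d, which have byte subregisters on
     * 32-bit x86; "=r" may pick %esi, whose byte form %sil only exists
     * in 64-bit mode */
    __asm__ __volatile__(
        "   lock                \n"
        "   cmpxchgl    %4,%5   \n"
        "   setz        %2      \n"
        :   "=a" (*expected), "+m" (ptr->value), "=q" (ret)
        :   "a" (*expected), "r" (newval), "m" (ptr->value)
        :   "memory", "cc");
    return (bool) ret;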
Several upcoming performance/scalability improvements require atomic
operations. This new API avoids the need to splatter compiler and
architecture dependent code over all the locations employing atomic
ops.
For several of the potential usages it'd be problematic to maintain
both an atomics-based implementation and one using spinlocks or
similar. In all likelihood one of the implementations would not get
tested regularly under concurrency. To avoid that scenario the new API
provides an automatic fallback of atomic operations to spinlocks. All
properties of atomic operations are maintained. This fallback -
obviously - isn't as fast as just using atomic ops, but it's not bad
either. For one of the future users the atomics-on-top-of-spinlocks
implementation was actually slightly faster than the old purely
spinlock-based implementation. That's important because it reduces the
fear of regressing older platforms when improving the scalability for
new ones.
The API, loosely modeled after the C11 atomics support, currently
provides 'atomic flags' and 32 bit unsigned integers. If the platform
efficiently supports atomic 64 bit unsigned integers those are also
provided.
To implement atomics support for a platform/architecture/compiler for
a type of atomics, only 32bit compare-and-exchange needs to be
implemented. If available and more efficient, native support for flags,
32 bit atomic addition, and the corresponding 64 bit operations may also
be provided. Additional useful atomic operations are implemented
generically on top of these.
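For instance, a generic fetch-add needs nothing beyond compare-exchange
(a sketch along the lines of the generic fallback layer):

    static inline uint32
    pg_atomic_fetch_add_u32_impl(volatile pg_atomic_uint32 *ptr, int32 add_)
    {
        uint32      old;

        old = ptr->value;   /* a stale read is fine; CAS will correct it */
        while (!pg_atomic_compare_exchange_u32_impl(ptr, &old, old + add_))
        {
            /* on failure, 'old' now holds the current value; retry */
        }
        return old;
    }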
The implementations for various versions of gcc, msvc and sun studio
have been tested. Additional existing stub implementations for
* Intel icc
* HP-UX acc
* IBM xlc
are included but have never been tested. These will likely require
fixes based on buildfarm and user feedback.
As atomic operations also require barriers for some operations, the
existing barrier support has been moved into the atomics code.
Author: Andres Freund with contributions from Oskari Saarenmaa
Reviewed-By: Amit Kapila, Robert Haas, Heikki Linnakangas and Álvaro Herrera
Discussion: CA+TgmoYBW+ux5-8Ja=Mcyuy8=VXAnVRHp3Kess6Pn3DMXAPAEA@mail.gmail.com,
20131015123303.GH5300@awork2.anarazel.de,
20131028205522.GI20248@awork2.anarazel.de
We removed a similar ban on this in json_object recently, but the ban in
datum_to_json was left, which generated spurious errors in other json
generators, notably json_build_object.
Along the way, add an assertion that datum_to_json is not passed a null
key. All current callers comply with this rule, but the assertion will
catch any possible future misbehaviour.
Previously, we used an lwlock that was held from the time we began
seeking a candidate buffer until the time when we found and pinned
one, which is disastrous for concurrency. Instead, use a spinlock
which is held just long enough to pop the freelist or advance the
clock sweep hand, and then released. If we need to advance the clock
sweep further, we reacquire the spinlock once per buffer.
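Sketched, the narrowed critical section in StrategyGetBuffer() looks
roughly like this (field names abbreviated):

    SpinLockAcquire(&StrategyControl->buffer_strategy_lock);
    if (StrategyControl->firstFreeBuffer >= 0)
    {
        /* pop the freelist */
        buf = &BufferDescriptors[StrategyControl->firstFreeBuffer];
        StrategyControl->firstFreeBuffer = buf->freeNext;
    }
    else
    {
        /* or advance the clock sweep hand by a single buffer */
        buf = &BufferDescriptors[StrategyControl->nextVictimBuffer];
        if (++StrategyControl->nextVictimBuffer >= NBuffers)
            StrategyControl->nextVictimBuffer = 0;
    }
    SpinLockRelease(&StrategyControl->buffer_strategy_lock);
    /* pin and usage-count checks happen with the spinlock released;
     * if the buffer is unsuitable, reacquire and advance again */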
This represents a significant increase in atomic operations around
buffer eviction, but it still wins on many workloads. On others, it
may result in no gain, or even cause a regression, unless the number
of buffer mapping locks is also increased. However, that seems like
material for a separate commit. We may also need to consider other
methods of mitigating contention on this spinlock, such as splitting
it into multiple locks or jumping the clock sweep hand more than one
buffer at a time, but those, too, seem like separate improvements.
Patch by me, inspired by a much larger patch from Amit Kapila.
Reviewed by Andres Freund.
Instead of trying to accurately calculate the space needed, use a StringInfo
that's enlarged as needed. This is just moving things around currently -
the old code was not wrong - but it is in preparation for a patch that
adds support for extra armor headers, which would make the space
calculation more complicated.
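The pattern, for reference (a sketch; 'key' and 'value' are placeholders):

    StringInfoData buf;

    initStringInfo(&buf);
    appendStringInfoString(&buf, "-----BEGIN PGP MESSAGE-----\n");
    /* no up-front size calculation: the buffer enlarges itself */
    appendStringInfo(&buf, "%s: %s\n", key, value);
    /* result is in buf.data, its length in buf.len */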
Marko Tiikkaja
Some compilers don't automatically search the current directory for
included files. 9cc2c182fc fixed that for builds from tarballs by
adding an include to the source directory. But that doesn't work when
the scanner is generated in the VPATH directory. Use the same search
path as the other parsers in the tree.
One compiler that definitely was affected is solaris' sun cc.
Backpatch to 9.1 which introduced using an actual parser for
replication commands.
It was confusing that to other commands, like initdb and postgres, you would
pass the data directory with "-D datadir", but pg_controldata and
pg_resetxlog would take just a plain path, without the "-D". With this patch,
pg_controldata and pg_resetxlog also accept "-D datadir".
Abhijit Menon-Sen, with minor kibitzing by me
Address a few typos in the row security update, pointed out
off-list by Adam Brightwell. Also include 'ALL' in the list
of commands supported, for completeness.