Programmers discovered decades ago that it was useful to have a simple
interface for allocating and freeing memory, which is why malloc() and
free() were invented. Unfortunately, those handy tools don't work
with dynamic shared memory segments because those are specific to
PostgreSQL and are not necessarily mapped at the same address in every
cooperating process. So invent our own allocator instead. This makes
it possible for processes cooperating as part of parallel query
execution to allocate and free chunks of memory without having to
reserve them prior to the start of execution. It could also be used
for longer lived objects; for example, we could consider storing data
for pg_stat_statements or the stats collector in shared memory using
these interfaces, rather than writing them to files. Basically,
anything that needs shared memory but can't predict in advance how
much it's going to need might find this useful.
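A hedged sketch of how a cooperating process might use the new dsa.h
interfaces (error handling, resource-owner bookkeeping, and handing the
area's handle to other workers are all omitted here):

    #include "postgres.h"
    #include "storage/lwlock.h"
    #include "utils/dsa.h"

    static void
    dsa_example(void)
    {
        dsa_area   *area;
        dsa_pointer p;
        char       *mem;

        /* Create an area backed by dynamic shared memory segments. */
        area = dsa_create(LWTRANCHE_FIRST_USER_DEFINED);

        /* Allocate 1kB; the result is a dsa_pointer that is meaningful
         * in every attached process, not a raw address. */
        p = dsa_allocate(area, 1024);

        /* Translate to a backend-local address before touching it. */
        mem = dsa_get_address(area, p);
        memset(mem, 0, 1024);

        /* Any attached process may free the chunk again. */
        dsa_free(area, p);
        dsa_detach(area);
    }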
Thomas Munro and Robert Haas. The original code (of mine) on which
Thomas based his work was actually designed to be a new backend-local
memory allocator for PostgreSQL, but that hasn't gone anywhere - or
not yet, anyway. Thomas took that work and performed major
refactoring and extensive modifications to make it work with dynamic
shared memory, including the addition of appropriate locking.
Discussion: CA+TgmobkeWptGwiNa+SGFWsTLzTzD-CeLz0KcE-y6LFgoUus4A@mail.gmail.com
Discussion: CAEepm=1z5WLuNoJ80PaCvz6EtG9dN0j-KuHcHtU6QEfcPP5-qA@mail.gmail.com
This is intended as infrastructure for a full-fledged allocator for
dynamic shared memory. The interface looks a bit like a real
allocator, but only supports allocating and freeing memory in
multiples of the 4kB page size. Further, to free memory, you must
know the size of the span you wish to free, in pages. While these
limitations make it unsuitable as an allocator in and of itself, it
still serves as very useful scaffolding for a full-fledged allocator.
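A hedged sketch of the intended usage, assuming entry points along the
lines of FreePageManagerInitialize/Get/Put ("base" would be the address
at which the containing shared memory is mapped in this process):

    #include "postgres.h"
    #include "utils/freepage.h"

    static void
    freepage_example(char *base)
    {
        FreePageManager *fpm = (FreePageManager *) base;
        Size        first_page;

        FreePageManagerInitialize(fpm, base);

        /* Hand over a run of pages to manage, identified by number. */
        FreePageManagerPut(fpm, 10, 90);    /* pages 10..99 are now free */

        /* Ask for a contiguous 4-page span. */
        if (FreePageManagerGet(fpm, 4, &first_page))
        {
            /* ... use pages first_page .. first_page + 3 ... */

            /* Freeing requires remembering the span's size ourselves. */
            FreePageManagerPut(fpm, first_page, 4);
        }
    }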
Robert Haas and Thomas Munro. This code is mostly the same as my 2014
submission, but Thomas fixed quite a few bugs and made some changes to
the interface.
Discussion: CA+TgmobkeWptGwiNa+SGFWsTLzTzD-CeLz0KcE-y6LFgoUus4A@mail.gmail.com
Discussion: CAEepm=1z5WLuNoJ80PaCvz6EtG9dN0j-KuHcHtU6QEfcPP5-qA@mail.gmail.com
C doesn't have any sort of built-in understanding of a pointer
relative to some arbitrary base address, but dynamic shared memory
segments can be mapped at different addresses in different processes,
so any sort of shared data structure stored within a dynamic shared
memory segment can't use absolute pointers. We could use something
like Size to represent a relative pointer, but then the compiler
provides no type-checking. Use stupid macro tricks to get some
type-checking.
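A hedged, self-contained sketch of the idea (the macro names here are
illustrative, not the actual relptr.h ones, which additionally use the
union's pointer member to cast accesses back to the correct type):

    #include <stdio.h>
    #include <stdlib.h>

    /* A relative pointer stores an offset from a per-process base
     * address; the unused pointer member gives the compiler something
     * to type-check against. */
    #define relptr(type)  union { type *rp_type; size_t rp_off; }

    #define relptr_store(base, rp, val) \
        ((rp).rp_off = ((val) == NULL ? 0 : \
                        (size_t) ((char *) (val) - (base))))

    #define relptr_access(base, rp) \
        ((rp).rp_off == 0 ? NULL : (void *) ((base) + (rp).rp_off))

    typedef struct { int x; } Node;

    int
    main(void)
    {
        char       *segment = malloc(1024); /* stand-in for a mapped segment */
        Node       *n = (Node *) (segment + 64);    /* offset 0 means NULL */
        relptr(Node) rel;

        n->x = 42;
        relptr_store(segment, rel, n);

        /* Another process may map the segment at a different address;
         * the stored offset stays valid relative to that base. */
        printf("%d\n", ((Node *) relptr_access(segment, rel))->x);
        free(segment);
        return 0;
    }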
Patch originally by me. Concept suggested by Andres Freund. Recently
resubmitted as part of Thomas Munro's work on dynamic shared memory
allocation.
Discussion: 20131205144434.GG12398@alap2.anarazel.de
Discussion: CAEepm=1z5WLuNoJ80PaCvz6EtG9dN0j-KuHcHtU6QEfcPP5-qA@mail.gmail.com
There was always documentation of the GUC that controlled what the
limit actually was, but previously the documentation of the field
itself made no mention of that limit.
Ian Barwick
Increase the default value of the existing max_worker_processes GUC
from 8 to 16, and add a new max_parallel_workers GUC that defaults to 8.
This way, even if the maximum amount of parallel query is
happening, there is still room for background workers that do other
things, as originally envisioned when max_worker_processes was added.
Julien Rouhaud, reviewed by Amit Kapila and revised by me.
Our documentation states that our maximum field size is 1 GB, and that
our maximum row size is 1.6 TB. However, while this might be attainable
in theory with enough contortions, it is not workable in practice; for
starters, pg_dump fails to dump tables containing rows larger than 1 GB,
even if individual columns are well below the limit; and even if one
does manage to manufacture a dump file containing a row that large, the
server refuses to load it anyway.
This commit enables dumping and reloading of such tuples, provided two
conditions are met:
1. no single column is larger than 1 GB (in output size -- for bytea
this includes the formatting overhead)
2. the whole row is not larger than 2 GB
There are three related changes to enable this:
a. StringInfo's API now has two additional functions that allow creating
a string that grows beyond the typical 1GB limit (a "long" string); a
short sketch follows this list of changes. ABI compatibility is
maintained. We still limit these strings to 2 GB, though, for reasons
explained below.
b. COPY now uses long StringInfos, so that pg_dump doesn't choke
trying to emit rows longer than 1GB.
c. heap_form_tuple now uses the MCXT_ALLOW_HUGE flag in its allocation
for the input tuple, which means that large tuples are accepted on
input. Note that at this point we do not apply any further limit to the
input tuple size.
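A hedged sketch of the new StringInfo calls (function names as added by
this commit, everything else simplified):

    #include "postgres.h"
    #include "lib/stringinfo.h"

    static void
    emit_wide_row(void)
    {
        /* Like makeStringInfo(), but the buffer may grow past the usual
         * 1 GB MaxAllocSize cap, up to a bit under 2 GB. */
        StringInfo  buf = makeLongStringInfo();

        appendStringInfoString(buf, "first column value\t");
        appendStringInfoString(buf, "second column value\n");
        /* ... COPY OUT can keep appending even past 1 GB ... */

        pfree(buf->data);
        pfree(buf);
    }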
The main reason to limit to 2 GB is that the FE/BE protocol uses 32-bit
length words to describe each row; and because the documentation is
ambiguous on its signedness and libpq does consider it signed, we cannot
use the highest-order bit. Additionally, the StringInfo API uses "int"
(which is 4 bytes wide on most platforms) in many places, so we'd need
to change that API too in order to go any further, which would have lots
of fallout.
Backpatch to 9.5, which is the oldest that has
MemoryContextAllocExtended, a necessary piece of infrastructure. We
could apply this to 9.4 with very minimal additional effort, but any further
than that would require backpatching "huge" allocations too.
This is the largest set of changes we could find that can be
back-patched without breaking compatibility with existing systems.
Fixing a bigger set of problems (for example, dumping tuples bigger than
2GB, or dumping fields bigger than 1GB) would require changing the FE/BE
protocol and/or changing the StringInfo API in an ABI-incompatible way,
neither of which would be back-patchable.
Authors: Daniel Vérité, Álvaro Herrera
Reviewed by: Tomas Vondra
Discussion: https://postgr.es/m/20160229183023.GA286012@alvherre.pgsql
The whole walreceiver API is now wrapped into a struct, like most of our
other loadable module APIs. The libpq connection is no longer a global
variable in libpqwalreceiver. Instead, it is encapsulated into a struct
that is passed around the functions. This allows multiple walreceivers
to run at the same time.
Add some rudimentary support for logical replication connections to
libpqwalreceiver.
These changes are mostly cosmetic and are going to be useful for the
future logical replication patches.
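A hedged sketch of the shape of the change (the member names here are
illustrative rather than copied from walreceiver.h):

    /* Opaque per-connection state; the PGconn now lives in here instead
     * of in a libpqwalreceiver global. */
    typedef struct WalReceiverConn WalReceiverConn;

    typedef struct WalReceiverFunctionsType
    {
        WalReceiverConn *(*walrcv_connect) (const char *conninfo,
                                            bool logical,
                                            const char *appname);
        void        (*walrcv_send) (WalReceiverConn *conn,
                                    const char *buffer, int nbytes);
        void        (*walrcv_disconnect) (WalReceiverConn *conn);
        /* ... the remaining callbacks are elided ... */
    } WalReceiverFunctionsType;

    /* Set when the module is loaded; because the connection travels in
     * a WalReceiverConn, several walreceivers can run at once. */
    extern WalReceiverFunctionsType *WalReceiverFunctions;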
From: Petr Jelinek <petr@2ndquadrant.com>
Replace use of poll()/select() by WaitLatchOrSocket(), which is more
portable and flexible.
Also change walreceiver to use its procLatch instead of a custom latch.
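A hedged sketch of the resulting wait-loop body (variable names are
invented; MyLatch points at the process's procLatch):

    int         rc;
    long        timeout_ms = 100;

    rc = WaitLatchOrSocket(MyLatch,
                           WL_LATCH_SET | WL_SOCKET_READABLE |
                           WL_TIMEOUT | WL_POSTMASTER_DEATH,
                           PQsocket(streamConn),
                           timeout_ms);
    if (rc & WL_POSTMASTER_DEATH)
        proc_exit(1);
    if (rc & WL_LATCH_SET)
        ResetLatch(MyLatch);
    if (rc & WL_SOCKET_READABLE)
    {
        /* ... data available on the replication connection; consume it ... */
    }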
From: Petr Jelinek <petr@2ndquadrant.com>
This works like function_with_argtypes, but aggregates allow slightly
different arguments.
Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
It was apparently added for use by GRANT/REVOKE, but move it closer to
where other things related to function signatures are kept.
Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Instead of sometimes referring to a function signature like func_name
func_args, use the existing function_with_argtypes symbol, which
combines the two.
Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Commit 274bb2b385 caused PQhost() to
return the value of the hostaddr parameter rather than the relevant
host when hostaddr was specified. That's wrong. Commit
9a1d0af4ad then amplified the damage by
using PQhost() in more places, so that the SSL test suite started
failing.
Report by Andreas Karlsson; patch by me.
So far the hashtable stored representative tuples in the form of its
input slot, with all columns in the hashtable that are not
needed (i.e. not grouped upon or functionally dependent) set to NULL.
That's good for saving memory, but it turns out that having tuples full
of NULLs isn't free: slot_deform_tuple is faster if there's no NULL
bitmap even when no NULLs are encountered, and skipping over leading
NULLs isn't free either.
So compute a separate tuple descriptor that only contains the needed
columns. Since columns are already moved in and out of the slot for the
hashtable, this does not imply any additional per-row overhead.
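A hedged sketch of the technique (helper names like hashedColExprs are
hypothetical, and the CreateTemplateTupleDesc() signature is the one in
this source tree):

    /* Build a descriptor containing only the columns the hashtable
     * keeps, so stored tuples have no always-NULL padding columns. */
    TupleDesc   hashDesc = CreateTemplateTupleDesc(numHashedCols, false);
    int         i;

    for (i = 0; i < numHashedCols; i++)
        TupleDescInitEntry(hashDesc, i + 1, NULL,
                           exprType((Node *) hashedColExprs[i]),
                           exprTypmod((Node *) hashedColExprs[i]),
                           0);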
Author: Andres Freund
Reviewed-By: Heikki Linnakangas
Discussion: https://postgr.es/m/20161103110721.h5i5t5saxfk5eeik@alap3.anarazel.de
Previously we did an ExecProject() for each individual aggregate
argument. That turned out to be a performance bottleneck in queries with
multiple aggregates.
Doing all the argument computations in one ExecProject() is quite a bit
cheaper because ExecProject's fastpath can do the work at once in a
relatively tight loop, and because it can get all the required columns
with a single slot_getsomeattrs() call and save some other redundant
setup costs.
Author: Andres Freund
Reviewed-By: Heikki Linnakangas
Discussion: https://postgr.es/m/20161103110721.h5i5t5saxfk5eeik@alap3.anarazel.de
Previously, the right to split a bucket was represented by a
heavyweight lock on the page number of the primary bucket page.
Unfortunately, this meant that every scan needed to take a heavyweight
lock on that bucket also, which was bad for concurrency. Instead, use
a cleanup lock on the primary bucket page to indicate the right to
begin a split, so that scans only need to retain a pin on that page,
which they would have to acquire anyway, and which is also much
cheaper.
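A hedged sketch of the new rule (simplified, with invented variable
names; not the actual hash index code):

    /* The would-be splitter needs a cleanup lock on the primary bucket
     * page: an exclusive content lock held while owning the only pin. */
    Buffer      bucket_buf = ReadBuffer(rel, bucket_blkno);

    if (!ConditionalLockBufferForCleanup(bucket_buf))
    {
        /* Some scan still holds a pin; skip splitting this bucket. */
        ReleaseBuffer(bucket_buf);
        return;
    }
    /* ... we now have the right to begin the split ... */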
In addition to reducing the locking cost, this also avoids locking out
scans and inserts for the entire lifetime of the split: while the new
bucket is being populated with copies of the appropriate tuples from
the old bucket, scans and inserts can happen in parallel. There are
minor concurrency improvements for vacuum operations as well, though
the situation there is still far from ideal.
This patch also removes the unworldly assumption that a split will
never be interrupted. With the new code, a split is done in a series
of small steps and the system can pick up where it left off if it is
interrupted prior to completion. While this patch does not itself add
write-ahead logging for hash indexes, it is clearly a necessary first
step, since one of the things that could interrupt a split is the
removal of electrical power from the machine performing it.
Amit Kapila. I wrote the original design on which this patch is
based, and did a good bit of work on the comments and README through
multiple rounds of review, but all of the code is Amit's. Also
reviewed by Jesper Pedersen, Jeff Janes, and others.
Discussion: http://postgr.es/m/CAA4eK1LfzcZYxLoXS874Ad0+S-ZM60U9bwcyiUZx9mHZ-KCWhw@mail.gmail.com
Per bug #14441 from Mark Pether, the documentation could be misread,
mainly because some of the examples failed to show what happens with
a multicharacter "characters to trim" string. Also, while the text
description in most of these entries was fairly clear that the
"characters" argument is a set of characters not a substring to match,
some of them used variant wording that was a bit less clear.
trim() itself suffered from both deficiencies and was thus pretty
misinterpretable.
Also fix failure to explain which of LEADING/TRAILING/BOTH is the
default.
Discussion: https://postgr.es/m/20161130011710.6539.53657@wrigleys.postgresql.org
Generate EUC_CN mappings from gb-18030-2000.xml, because GB2312.TXT is no
longer available.
Get UHC from windows-949-2000.xml, which is more up-to-date.
Plus tons more small changes. With these changes, the perl scripts
faithfully produce the *.map files we have in the repository, from the
external source files.
In passing, fix the Makefile to also download CP932.TXT and CP950.TXT.
Based on patches by Kyotaro Horiguchi, reviewed by Daniel Gustafsson.
Discussion: https://postgr.es/m/08e7892a-d55c-eefe-76e6-7910bc8dd1f3@iki.fi
The common style is to pad to 4 digits.
Running the current perl scripts to generate these map files would overwrite
this change, but the next commit will rewrite the perl scripts to produce
this style. I'm doing this as a separate commit, to make it more clear what
non-cosmetic changes the next commit makes to the map files.
Discussion: https://postgr.es/m/08e7892a-d55c-eefe-76e6-7910bc8dd1f3@iki.fi
PostgreSQL treats characters with a leading byte below 0x80 as plain ASCII, and
they are not even passed to the conversion routines. There is no point in
having them in the conversion tables.
Everything in the tables was a direct ASCII-to-ASCII mapping, except for
two entries:
* SHIFT_JIS_2004 code point 0x5C (backslash in ASCII) was mapped to the
Unicode YEN SIGN character.
* Unicode 0x5C (backslash again) was mapped to "REVERSE SOLIDUS" in
SHIFT_JIS_2004.
These mappings never had any effect, so there's no functional change from
removing them.
Discussion: https://postgr.es/m/08e7892a-d55c-eefe-76e6-7910bc8dd1f3@iki.fi
pgp-pubkey-DISABLED test has been unused since 2006, when support for
built-in bignum math was added (commit 1abf76e8). pgp-encrypt-DISABLED has
been unused forever, AFAICS.
Also remove a couple of unused error codes.
consider_parallel_nestloop passed the wrong jointype down to its
subroutines for JOIN_UNIQUE_INNER cases (it should pass JOIN_INNER), and it
thought that it could pass paths other than innerrel->cheapest_total_path
to create_unique_path, which create_unique_path is not on board with.
These bugs would lead to assertion failures or other errors, suggesting
that this code path hasn't been tested much.
hash_inner_and_outer's code for parallel join effectively treated both
JOIN_UNIQUE_OUTER and JOIN_UNIQUE_INNER the same as JOIN_INNER (for
different reasons :-(), leading to incorrect plans that treated a semijoin
as if it were a plain join.
Michael Day submitted a test case demonstrating that hash_inner_and_outer
failed for JOIN_UNIQUE_OUTER, and I found the other cases through code
review.
Report: https://postgr.es/m/D0E8A029-D1AC-42E8-979A-5DE4A77E4413@rcmail.com
If we don't have any MCV statistics for the inner relation, and we don't
trust its numdistinct estimate either, eqjoinsel_semi falls back to a very
conservative estimate (that 50% of the outer rows have matches). This is
particularly problematic if the inner relation is completely empty, since
then even an explicit ANALYZE won't produce any pg_statistic entries,
so there's no way to budge the planner off the bad estimate.
We'd produce a better estimate in such cases if we used the nd2/nd1
selectivity heuristic, so an easy fix is to treat the nd2 estimate as
non-default if we derive it from clamping to the inner rel's rowcount
estimate. This won't fix every related case (mainly because the rowcount
estimate might be larger than DEFAULT_NUM_DISTINCT), but it seems like a
sane extension of the existing logic, so let's apply the change in HEAD
and see if anyone complains. Per bug #14438 from Nikolay Nikitin.
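A hedged sketch of the adjusted logic (a simplification, not the actual
eqjoinsel_semi code):

    #include "postgres.h"

    static double
    semi_match_fraction(double nd1, double nd2,
                        bool isdefault1, bool isdefault2,
                        double inner_rows)
    {
        /* Clamping nd2 to the inner rel's row count now also marks it
         * as a usable, non-default estimate. */
        if (nd2 > inner_rows)
        {
            nd2 = inner_rows;
            isdefault2 = false;
        }

        if (!isdefault1 && !isdefault2)
            return (nd1 > nd2) ? nd2 / nd1 : 1.0;   /* nd2/nd1 heuristic */

        return 0.5;             /* old conservative fallback */
    }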
Report: https://postgr.es/m/20161128182113.6527.58926@wrigleys.postgresql.org
Discussion: https://postgr.es/m/31089.1480384713@sss.pgh.pa.us
I'd supposed that people would do this manually when creating new operator
classes, but the folly of that was exposed today. The tests seem fast
enough that we can just apply them during the normal regression tests.
contrib/isn fails the checks for lack of complete sets of cross-type
operators. That's a nice-to-have policy rather than a functional
requirement, so leave it as-is, but insert ORDER BY in the query to
ensure consistent cross-platform output.
Discussion: https://postgr.es/m/7076.1480446837@sss.pgh.pa.us
Commit 274bb2b385 made it possible to
specify multiple IPs in a connection string, but that's not good
enough for the case where you have a read-write master and a bunch of
read-only standbys and want to connect to whichever server is the
master at the current time. This commit allows that, by making it
possible to specify target_session_attrs=read-write as a connection
parameter.
There was extensive discussion of the best name for the connection
parameter and its values as well as the best way to distinguish master
and standbys. For now, adopt the same solution as JDBC: if the user
wants a read-write connection, issue 'show transaction_read_only' and
reject the connection if the result is 'on'. In the future, we
could add additional values of this new target_session_attrs parameter
that issue different queries; or we might have some way of
distinguishing the server type without resorting to an SQL query; but
right now, we have this, and that's (hopefully) a good start.
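A hedged sketch of client-side usage (host names invented for the
example):

    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        /* libpq tries each host in turn and keeps only a connection on
         * which "show transaction_read_only" reports off. */
        PGconn     *conn = PQconnectdb("host=node1,node2,node3 dbname=app "
                                       "target_session_attrs=read-write");

        if (PQstatus(conn) != CONNECTION_OK)
            fprintf(stderr, "no writable server found: %s",
                    PQerrorMessage(conn));
        PQfinish(conn);
        return 0;
    }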
Victor Wagner and Mithun Cy. Design review by Álvaro Herrera, Catalin
Iacob, Takayuki Tsunakawa, and Craig Ringer; code review by me. I
changed Mithun's patch to skip all remaining IPs for a host if we
reject a connection based on this new parameter, rewrote the
documentation, and did some other cosmetic cleanup.
Discussion: http://postgr.es/m/CAD__OuhqPRGpcsfwPHz_PDqAGkoqS1UvnUnOnAB-LBWBW=wu4A@mail.gmail.com
Add an option to exclude blobs when running pg_dump. By default, blobs
are included but this option can be used to exclude them while keeping
the rest of the dump.
Comment updates and regression tests from me.
Author: Guillaume Lelarge
Reviewed-by: Amul Sul
Discussion: https://postgr.es/m/VisenaEmail.48.49926ea6f91dceb6.15355a48249@tc7-visena
The documentation around the -b/--blobs option to pg_dump seemed to
imply that it might be possible to add blobs to a "schema-only" dump or
similar. Clarify that blobs are data and therefore will only be
included in dumps where data is being included, even when -b is used to
request blobs be included.
The -b option has been around since before 9.2, so back-patch to all
supported branches.
Discussion: https://postgr.es/m/20161119173316.GA13284@tamriel.snowman.net
An example in the psql documentation used a field name different from
what the command actually produces.
Pointed out by Fabien COELHO
Back-patch to 9.6 where the example was added.
Discussion: https://postgr.es/m/alpine.DEB.2.20.1611291349400.19314@lancre
Oversight in my commit 0bb51aa96. Noted while poking at a recent
bug report --- HEAD's estimates for a query using CURRENT_DATE
were unexpectedly much worse than 9.6's.
Commit a734fd5d1 exposed some race conditions that existed previously
in the autovac code, but were basically harmless because autovac would
not try to delete orphaned relations immediately. Specifically, the test
for orphaned-ness was made on a pg_class tuple that might be dead by now,
allowing autovac to try to remove a table that the owning backend had just
finished deleting. This resulted in a hard crash due to inadequate caution
about accessing the table's catalog entries without any lock. We must take
a relation lock and then recheck whether the table is still present and
still looks deletable before we do anything.
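A hedged sketch of the lock-then-recheck pattern (simplified, with relid
standing for the candidate table's OID; not the actual do_autovacuum
code):

    HeapTuple   tuple;

    LockRelationOid(relid, AccessExclusiveLock);

    tuple = SearchSysCache1(RELOID, ObjectIdGetDatum(relid));
    if (!HeapTupleIsValid(tuple))
    {
        /* The owning backend finished dropping it while we waited. */
        UnlockRelationOid(relid, AccessExclusiveLock);
        return;
    }
    /* ... recheck that it still looks like an orphaned temp table
     * before issuing the DROP ... */
    ReleaseSysCache(tuple);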
Also, it seemed to me that deleting multiple tables per transaction, and
trying to continue after errors, represented unjustifiable complexity.
We do not expect this code path to be taken often in the field, nor even
during testing, which means that prioritizing performance over correctness
is a bad tradeoff. Rip all that out in favor of just starting a new
transaction after each successful temp table deletion. If we're unlucky
enough to get an error, which shouldn't happen anyway now that we're being
more cautious, let the autovacuum worker fail as it normally would.
In passing, improve the order of operations in the initial scan loop.
Now that we don't care about whether a temp table is a wraparound hazard,
there's no need to perform extract_autovac_opts, get_pgstat_tabentry_relid,
or relation_needs_vacanalyze for temp tables.
Also, if GetTempNamespaceBackendId returns InvalidBackendId (indicating
it doesn't recognize the schema as temp), treat that as meaning it's NOT
an orphaned temp table, not that it IS one, which is what happened before
because BackendIdGetProc necessarily failed. The case really shouldn't
come up for a table that has RELPERSISTENCE_TEMP, but the consequences
if it did seem undesirable. (This might represent a back-patchable bug
fix; not sure if it's worth the trouble.)
Discussion: https://postgr.es/m/21299.1480272347@sss.pgh.pa.us
Commit 08dd23cec introduced an exception to the rule that extension member
objects can only be dropped as part of dropping the whole extension,
intending to allow such drops while running the extension's own creation or
update scripts. However, the exception was only applied at the outermost
recursion level, because it was modeled on a pre-existing check to ignore
dependencies on objects listed in pendingObjects. Bug #14434 from Philippe
Beaudoin shows that this is inadequate: in some cases we can reach an
extension member object by recursion from another one. (The bug concerns
the serial-sequence case; I'm not sure if there are other cases, but there
might well be.)
To fix, revert 08dd23cec's changes to findDependentObjects() and instead
apply the creating_extension exception regardless of stack level.
Having seen this example, I'm a bit suspicious that the pendingObjects
logic is also wrong and such cases should likewise be allowed at any
recursion level. However, changing that would interact in subtle ways
with the recursion logic (at least it would need to be moved to after the
recursing-from check). Given that the code's been like that a long time,
I'll refrain from touching it without a clear example showing it's wrong.
Back-patch to all active branches. In HEAD and 9.6, where suitable
test infrastructure exists, add a regression test case based on the
bug report.
Report: <20161125151448.6529.33039@wrigleys.postgresql.org>
Discussion: <13224.1480177514@sss.pgh.pa.us>
Instead of confusingly stating platform-dependent defaults for these
parameters in the comments in postgresql.conf.sample (with the main
entry being a lie on Linux), teach initdb to install the correct
platform-dependent value in postgresql.conf, similarly to the way
we handle other platform-dependent defaults. This won't do anything
for existing 9.6 installations, but since it's effectively only a
documentation improvement, that seems OK.
Since this requires initdb to have access to the default values,
move the #define's for those to pg_config_manual.h; the original
placement in bufmgr.h is unworkable because that file can't be
included by frontend programs.
Adjust the default value for wal_writer_flush_after so that it is 1MB
regardless of XLOG_BLCKSZ, conforming to what is stated in both the
SGML docs and postgresql.conf. (We could alternatively make it scale
with XLOG_BLCKSZ, but I'm not sure I see the point.)
Copy-edit related SGML documentation.
Fabien Coelho and Tom Lane, per a gripe from Tomas Vondra.
Discussion: <30ebc6e3-8358-09cf-44a8-578252938424@2ndquadrant.com>
Andreas Seltenreich found another case where we were being too optimistic
about allowing a plan to be considered parallelizable despite it containing
initPlans. It seems like the real issue here is that if we know we are
going to tack initPlans onto the topmost Plan node for a subquery, we
had better mark that subquery's result Paths as not-parallel-safe. That
fixes this problem and allows reversion of a kluge (added in commit
7b67a0a49 and extended in f24cf960d) to not trust the parallel_safe flag
at top level.
Discussion: <874m2w4k5d.fsf@ex.ansel.ydns.eu>
When dropping a foreign key constraint with ALTER TABLE DROP CONSTRAINT,
we refuse the drop if there are any pending trigger events on the named
table; this ensures that we won't remove the pg_trigger row that will be
consulted by those events. But we should make the same check for the
referenced relation, else we might remove a due-to-be-referenced pg_trigger
row for that relation too, resulting in "could not find trigger NNN" or
"relation NNN has no triggers" errors at commit. Per bug #14431 from
Benjie Gillam. Back-patch to all supported branches.
Report: <20161124114911.6530.31200@wrigleys.postgresql.org>
Previously, requesting commit timestamp for transactions
FrozenTransactionId and BootstrapTransactionId resulted in an error.
But since those values can validly appear in committed tuples' Xmin,
this behavior is unhelpful and error prone: each caller would have to
special-case those values before requesting timestamp data for an Xid.
We already have a perfectly good interface for returning "the Xid you
requested is too old for us to have commit TS data for it", so let's use
that instead.
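A hedged sketch of the new behavior inside the lookup (simplified):

    if (!TransactionIdIsNormal(xid))
    {
        /* Frozen and bootstrap XIDs are treated like XIDs whose commit
         * TS data has been truncated away: report "no data available"
         * rather than raising an error. */
        *ts = 0;
        return false;
    }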
Backpatch to 9.5, where commit timestamps appeared.
Author: Craig Ringer
Discussion: https://www.postgresql.org/message-id/CAMsr+YFM5Q=+ry3mKvWEqRTxrB0iU3qUSRnS28nz6FJYtBwhJg@mail.gmail.com