Walsender uses local buffers for each outgoing and incoming message.
Previously, when creating a replication slot, walsender failed to
initialize one of them, which could cause a segmentation fault. To fix
this issue, this commit changes walsender so that it always initializes
them before executing the requested replication command.
Back-patch to 9.4, where replication slots were introduced.
Problem report and initial patch by Stas Kelvich, modified by me.
Report: https://www.postgresql.org/message-id/A1E9CB90-1FAC-4CAD-8DBA-9AA62A6E97C5@postgrespro.ru
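The shape of the fix, as a minimal sketch (the buffer names and the
helper shown here are illustrative, not necessarily what walsender.c
actually uses):

    #include "postgres.h"
    #include "lib/stringinfo.h"

    static StringInfoData output_message;
    static StringInfoData reply_message;

    /* Sketch: (re)initialize every per-message buffer before dispatching
     * any replication command, so no code path can touch an
     * uninitialized StringInfo. */
    static void
    prepare_message_buffers(void)
    {
        if (output_message.data == NULL)
            initStringInfo(&output_message);
        else
            resetStringInfo(&output_message);

        if (reply_message.data == NULL)
            initStringInfo(&reply_message);
        else
            resetStringInfo(&reply_message);
    }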
Creating global objects named "foo" isn't an especially wise thing,
but especially not in a test script that has already used that name
for something else, and most especially not in a script that runs
in parallel with other scripts that use that name :-(
Per buildfarm.
Commit 04aad4018 added this check after the search for a Python shared
library, which seems to me to be a pretty unfriendly ordering. The
search might fail for what are basically version-related reasons, and
in such a case it'd be better to say "your Python is too old" than
"could not find shared library for Python".
There is no specific reason for this right now, but keeping support for
old Python versions around indefinitely increases the maintenance
burden. The oldest supported Python version is now Python 2.4, which is
still shipped in RHEL/CentOS 5 by default.
In configure, add a check for the required Python version and give a
friendly error message for an old version, instead of relying on an
obscure build error later on.
Be specific about which pattern is being complained of, and avoid saying
"it's not supported in to_date", which is just confusing if the error is
actually coming out of to_timestamp. We can phrase it as "is only
supported in to_char", instead. Also, use the term "formatting field" not
"format pattern", because other error messages in the same file prefer that
terminology. (This isn't terribly consistent with the documentation, so
maybe we should change all these error messages?)
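In formatting.c terms, the reworded report comes out roughly like this
(a sketch: the field-name accessor and surrounding condition are
assumptions; only the message wording follows the text above):

    /* Sketch: reject an output-only formatting field during input
     * parsing, naming the field and pointing at to_char as the one
     * place where it is supported. */
    ereport(ERROR,
            (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
             errmsg("formatting field \"%s\" is only supported in to_char",
                    n->key->name)));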
These are only supported in to_char, not in the other direction, but the
documentation failed to mention that. Also, describe TZ/tz as printing the
time zone "abbreviation", not "name", because what they print is elsewhere
referred to that way. Per bug #14558.
It didn't take long at all for me to become irritated that the original
choice of name for this script resulted in "warning" showing up in several
places in build logs, because I tend to grep for that. Change the script
name to avoid that.
One case in the PL/Tcl tests is observed to fail on RHEL5 with a Turkish
time zone setting. It's not clear if this is an old Tcl bug or something
odd about the zone data, but in any case that test is meant to see if the
Tcl [clock] command works at all, not what its corner-case behaviors are.
Therefore we have no need to test exactly which week a Sunday midnight is
considered to fall into. Probe the following Tuesday instead.
Discussion: https://postgr.es/m/797.1487517822@sss.pgh.pa.us
Versions of flex before 2.5.36 might generate code that results in an
"unused variable" warning, when using %option reentrant. Historically
we've worked around that by specifying -Wno-error, but that's an
unsatisfying solution. The official "fix" for this was just to insert a
dummy reference to the variable, so write a small perl script that edits
the generated C code similarly.
The MSVC side of this is untested, but the buildfarm should soon reveal
if I broke that.
Discussion: https://postgr.es/m/25456.1487437842@sss.pgh.pa.us
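The edit the script makes amounts to inserting a no-op reference to the
otherwise-unused variable, along these lines (a sketch; the variable
name and insertion point in the generated scanner are assumptions based
on the known flex warning):

    /* Added to the flex-generated code: mention the variable that old
     * flex versions declare but never use, so -Wunused-variable (and
     * hence -Werror builds) stay quiet.  The cast to void emits no code. */
    (void) yyg;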
Commit 5262f7a4fc added similar support
for parallel index scans; this extends that work to index-only scans.
As with parallel index scans, this requires support from the index AM,
so currently parallel index-only scans will only be possible for btree
indexes.
Rafia Sabih, reviewed and tested by Rahila Syed, Tushar Ahuja,
and Amit Kapila
Discussion: http://postgr.es/m/CAOGQiiPEAs4C=TBp0XShxBvnWXuzGL2u++Hm1=qnCpd6_Mf8Fw@mail.gmail.com
A new function dsa_allocate_extended now takes flags which indicate
that huge allocations should be permitted, that out-of-memory
conditions should not throw an error, and/or that the returned memory
should be zero-filled, just like MemoryContextAllocExtended.
Commit 9acb85597f, which added
dsa_allocate0, was broken because it failed to account for the
possibility that dsa_allocate() might return InvalidDsaPointer.
This fixes that problem along the way.
Thomas Munro, with some comment changes by me.
Discussion: http://postgr.es/m/CA+Tgmobt7CcF_uQP2UQwWmu4K9qCHehMJP9_9m1urwP8hbOeHQ@mail.gmail.com
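Call sites then look roughly like this (a sketch of intended usage; the
surrounding error handling is up to the caller):

    #include "utils/dsa.h"

    /* Sketch: ask for a huge, zero-filled allocation, and get
     * InvalidDsaPointer back instead of an error on OOM. */
    dsa_pointer dp = dsa_allocate_extended(area, size,
                                           DSA_ALLOC_HUGE |
                                           DSA_ALLOC_NO_OOM |
                                           DSA_ALLOC_ZERO);

    if (!DsaPointerIsValid(dp))
    {
        /* With DSA_ALLOC_NO_OOM, the failure shows up here rather than
         * via ereport(ERROR). */
    }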
The recovery.conf file that's generated is specifically for replication,
and not needed (or wanted) for regular backup restore, so indicate that
in the message.
The way the old query was written prevented some join optimizations
because the join conditions were hidden inside a CASE expression. With
a large number of constraints, the query became unreasonably slow. The
new query performs much better.
From: Alexey Bashtanov <bashtanov@imap.cc>
Reviewed-by: Ashutosh Bapat <ashutosh.bapat@enterprisedb.com>
Also expand the existing rather half-baked description of PROFILE,
which does exactly the same thing, though I think people use it differently.
Discussion: https://postgr.es/m/16461.1487361849@sss.pgh.pa.us
This causes a warning with the old html-docs toolchain, though not with the
new one. I had originally supposed that we needed both <indexterm> entries to
get both a primary index entry and a see-also link; but evidently not,
as pointed out by Fabien Coelho.
Discussion: https://postgr.es/m/alpine.DEB.2.20.1702161616060.5445@lancre
Make the typedefs for output plugins consistent with project style;
they were previously not even consistent with each other as to layout
or inclusion of parameter names. Make the documentation look the same,
and fix errors therein (missing and misdescribed parameters).
Back-patch because of the documentation bugs.
The loops in ExecHashJoinNewBatch(), ExecHashIncreaseNumBatches(), and
ExecHashRemoveNextSkewBucket() are all capable of iterating over many
tuples without ever doing a CHECK_FOR_INTERRUPTS, so that the backend
might fail to respond to SIGINT or SIGTERM for an unreasonably long time.
Fix that. In the case of ExecHashJoinNewBatch(), it seems useful to put
the added CHECK_FOR_INTERRUPTS into ExecHashJoinGetSavedTuple() rather
than directly in the loop, because that will also ensure that both
principal code paths through ExecHashJoinOuterGetTuple() will do a
CHECK_FOR_INTERRUPTS, which seems like a good idea to avoid surprises.
Back-patch to all supported branches.
Tom Lane and Thomas Munro
Discussion: https://postgr.es/m/6044.1487121720@sss.pgh.pa.us
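The pattern itself is trivial; a sketch (the helper names here are
hypothetical, and per the above the real check sits in
ExecHashJoinGetSavedTuple() rather than in the loop body):

    #include "postgres.h"
    #include "access/htup.h"
    #include "miscadmin.h"          /* CHECK_FOR_INTERRUPTS */
    #include "storage/buffile.h"

    /* Sketch: any loop that can chew through many tuples needs a
     * cancellation point so pending SIGINT/SIGTERM are serviced. */
    static void
    drain_saved_tuples(BufFile *file)
    {
        for (;;)
        {
            MinimalTuple tuple = read_next_saved_tuple(file);  /* hypothetical */

            if (tuple == NULL)
                break;

            CHECK_FOR_INTERRUPTS();
            process_tuple(tuple);                              /* hypothetical */
        }
    }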
Commit 906bfcad7 adjusted the syntax synopsis for UPDATE, but missed
the fact that the INSERT synopsis now contains a duplicate of that.
In passing, improve wording and markup about using a table alias to
dodge the conflict with use of "excluded" as a special table name.
It wouldn't complete "TO" after the variable name, which is certainly
minor enough. But since we do complete "TO" after "SET variable ...",
and since this case used to work pre-9.6, I think this is a bug.
Also, fix the query used to collect the variable names; whoever last
touched it evidently didn't understand how the pieces are supposed
to fit together. It accidentally worked anyway, because readline
ignores irrelevant completions, but it was randomly unlike the ones
around it, and could be a source of actual bugs if someone copied
it as a prototype for another query.
Jeff Janes noted that the error cursor position shown for some errors
would vary when operator_precedence_warning is turned on. We'd prefer
that option to have no undocumented effects, so this isn't desirable.
To fix, make sure that an AEXPR_PAREN node has the same exprLocation
as its child node.
(Note: it would be a little cheaper to use @2 here instead of an
exprLocation call, but there are cases where that wouldn't produce
the identical answer, so don't do it like that.)
Back-patch to 9.5 where this feature was introduced.
Discussion: https://postgr.es/m/CAMkU=1ykK+VhhcQ4Ky8KBo9FoaUJH3f3rDQB8TkTXi-ZsBRUkQ@mail.gmail.com
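In grammar-action terms the change looks about like this (a sketch of
the shape of the fix, not the literal gram.y production):

    /* Before (sketch): the AEXPR_PAREN wrapper took the opening
     * parenthesis's location, so error cursors moved when
     * operator_precedence_warning was on. */
    $$ = (Node *) makeA_Expr(AEXPR_PAREN, NIL, $2, NULL, @1);

    /* After (sketch): inherit the child's location, making the reported
     * cursor position identical with the warning on or off. */
    $$ = (Node *) makeA_Expr(AEXPR_PAREN, NIL, $2, NULL, exprLocation($2));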
In combination with 569174f1be, which
taught the btree AM how to perform parallel index scans, this allows
parallel index scan plans on btree indexes. This infrastructure
should be general enough to support parallel index scans for other
index AMs as well, if someone updates them to support parallel
scans.
Amit Kapila, reviewed and tested by Anastasia Lubennikova, Tushar
Ahuja, and Haribabu Kommi, and by me.
When min_parallel_relation_size was added, the only supported type
of parallel scan was a parallel sequential scan, but there are
pending patches for parallel index scan, parallel index-only scan,
and parallel bitmap heap scan. Those patches introduce two new
types of complications: first, what's relevant is not really the
total size of the relation but the portion of it that we will scan;
and second, index pages and heap pages shouldn't necessarily be
treated in exactly the same way. Typically, the number of index
pages will be quite small, but that doesn't necessarily mean that
a parallel index scan can't pay off.
Therefore, we introduce min_parallel_table_scan_size, which works
out a degree of parallelism for scans based on the number of table
pages that will be scanned (and which is therefore equivalent to
min_parallel_relation_size for parallel sequential scans) and also
min_parallel_index_scan_size which can be used to work out a degree
of parallelism based on the number of index pages that will be
scanned.
Amit Kapila and Robert Haas
Discussion: http://postgr.es/m/CAA4eK1KowGSYYVpd2qPpaPPA5R90r++QwDFbrRECTE9H_HvpOg@mail.gmail.com
Discussion: http://postgr.es/m/CAA4eK1+TnM4pXQbvn7OXqam+k_HZqb0ROZUMxOiL6DWJYCyYow@mail.gmail.com
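As a rough standalone illustration of how a page-count threshold can be
turned into a worker count (the tripling rule below is an assumption
made for the sketch, not necessarily the planner's exact formula):

    #include <stdio.h>

    /* Sketch: grant one more worker each time the number of pages to be
     * scanned triples, starting from the configured minimum size. */
    static int
    workers_for_pages(long pages, long min_scan_pages, int max_workers)
    {
        int     workers = 0;
        long    threshold = (min_scan_pages > 1) ? min_scan_pages : 1;

        while (pages >= threshold * 3 && workers < max_workers)
        {
            workers++;
            threshold *= 3;
        }
        return workers;
    }

    int
    main(void)
    {
        /* e.g. a 1GB table (131072 8kB pages) against an 8MB threshold
         * (1024 pages) yields 4 workers under this rule */
        printf("%d\n", workers_for_pages(131072, 1024, 8));
        return 0;
    }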
I didn't realize these would ever be visible to clients, but Michael
figured out that it can happen when using asynchronous interfaces
such as PQconnectPoll.
Michael Paquier
Commit 85c11324ca renamed pg_resetxlog
to pg_resetwal, but didn't make pg_upgrade smart enough to cope with
the situation.
Michael Paquier, per a complaint from Jeff Janes
The core of the functionality was already implemented when
pg_import_system_collations was added. This just exposes it as an
option in the SQL command.
This isn't exposed to the optimizer or the executor yet; we'll add
support for those things in a separate patch. But this puts the
basic mechanism in place: several processes can attach to a parallel
btree index scan, and each one will get a subset of the tuples that
would have been produced by a non-parallel scan. Each index page
becomes the responsibility of a single worker, which then returns
all of the TIDs on that page.
Rahila Syed, Amit Kapila, Robert Haas, reviewed and tested by
Anastasia Lubennikova, Tushar Ahuja, and Haribabu Kommi.
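Conceptually, the attached processes coordinate through a bit of shared
state from which each worker claims the next whole page; a
much-simplified sketch of that idea (the real btree code follows
sibling links and manages scan state far more carefully):

    #include "postgres.h"
    #include "storage/block.h"
    #include "storage/spin.h"

    /* Sketch: shared state for a parallel index scan, living in dynamic
     * shared memory. */
    typedef struct SketchParallelScan
    {
        slock_t     mutex;          /* protects next_page */
        BlockNumber next_page;      /* next index page to hand out */
    } SketchParallelScan;

    /* Each worker claims one page at a time and is then responsible for
     * returning every TID on that page. */
    static BlockNumber
    claim_next_page(SketchParallelScan *scan)
    {
        BlockNumber page;

        SpinLockAcquire(&scan->mutex);
        page = scan->next_page;
        scan->next_page++;          /* sketch only: real code chases the
                                     * right-sibling pointer */
        SpinLockRelease(&scan->mutex);

        return page;
    }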
This doesn't do anything to make Param nodes anything other than
parallel-restricted, so this only helps with uncorrelated subplans,
and it's not necessarily very cheap because each worker will run the
subplan separately (just as a Hash Join will build a separate copy of
the hash table in each participating process), but it's a first step
toward supporting cases that are more likely to help in practice, and
is occasionally useful on its own.
Amit Kapila, reviewed and tested by Rafia Sabih, Dilip Kumar, and
me.
Discussion: http://postgr.es/m/CAA4eK1+e8Z45D2n+rnDMDYsVEb5iW7jqaCH_tvPMYau=1Rru9w@mail.gmail.com
The xlog-specific headers need to be included in both frontend code -
specifically, pg_waldump - and the backend, but the remainder of the
private headers for each index are only needed by the backend. By
splitting the xlog stuff out into separate headers, pg_waldump pulls
in fewer backend headers, which is a good thing.
Patch by me, reviewed by Michael Paquier and Andres Freund, per a
complaint from Dilip Kumar.
Discussion: http://postgr.es/m/CA+TgmoZ=F=GkxV0YEv-A8tb+AEGy_Qa7GSiJ8deBKFATnzfEug@mail.gmail.com
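The practical effect is on what each side includes; for btree, roughly
(using the header names this split introduces):

    /* frontend (pg_waldump): just the WAL record definitions */
    #include "access/nbtxlog.h"

    /* backend btree code: the full private header */
    #include "access/nbtree.h"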