In both partitioning and traditional inheritance, require child
columns to be GENERATED if and only if their parent(s) are.
Formerly we allowed the case of an inherited column being
GENERATED when its parent isn't, but that results in inconsistent
behavior: the column can be directly updated through an UPDATE
on the parent table, leading to it containing a user-supplied
value that might not match the generation expression. This also
fixes an oversight: the rule that partition key columns can't be
GENERATED was enforced against parent tables, but not against child
tables that were dynamically attached to them.
Also, remove the restriction that the child's generation expression
be equivalent to the parent's. In the wake of commit 3f7836ff6,
there doesn't seem to be any reason that we need that restriction,
since generation expressions are always computed per-table anyway.
By removing this, we can also allow a child to merge multiple
inheritance parents with inconsistent generation expressions, by
overriding them with its own expression, much as we've long allowed
for DEFAULT expressions.
Since we're rejecting a case that we used to accept, this doesn't
seem like a back-patchable change. Given the lack of field
complaints about the inconsistent behavior, it's likely that no
one is doing this anyway, but we won't change it in minor releases.
Amit Langote and Tom Lane
Discussion: https://postgr.es/m/2793383.1672944799@sss.pgh.pa.us
Can be set to the empty string, or to either or both of "set" and
"inherit". If set to a non-empty value, a non-superuser who creates
a role (necessarily by relying on the CREATEROLE privilege) will
grant that role back to themselves with the specified options.
This isn't a security feature, because the grant that this feature
triggers can also be performed explicitly. Instead, it's a user experience
feature. A superuser would necessarily inherit the privileges of any
created role and be able to access all such roles via SET ROLE;
with this patch, you can configure createrole_self_grant = 'set, inherit'
to provide a similar experience for a user who has CREATEROLE but not
SUPERUSER.
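For example, a CREATEROLE user could configure this for their session
(role name and GUC value illustrative):
SET createrole_self_grant = 'set, inherit';
CREATE ROLE app_owner;
-- roughly equivalent to also executing:
-- GRANT app_owner TO current_user WITH SET TRUE, INHERIT TRUE;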
Discussion: https://postgr.es/m/CA+TgmobN59ct+Emmz6ig1Nua2Q-_o=r6DSD98KfU53kctq_kQw@mail.gmail.com
Previously, CREATEROLE users were permitted to make nearly arbitrary
changes to roles that they didn't create, with certain exceptions,
particularly superuser roles. Instead, allow CREATEROLE users to make such
changes to roles for which they possess ADMIN OPTION, and to
grant membership only in roles for which they possess ADMIN OPTION.
When a CREATEROLE user who is not a superuser creates a role, grant
ADMIN OPTION on the newly-created role to the creator, so that they
can administer roles they create or for which they have been given
privileges.
With these changes, CREATEROLE users still have very significant
powers that unprivileged users do not receive: they can alter, rename,
drop, comment on, change the password for, and change security labels
on roles. However, they can now do these things only for roles for
which they possess appropriate privileges, rather than all
non-superuser roles; moreover, they cannot grant a role such as
pg_execute_server_program unless they themselves possess it.
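For example, a non-superuser with CREATEROLE (here "manager", an
illustrative name) can now do:
CREATE ROLE app_user LOGIN;
-- manager implicitly holds ADMIN OPTION on app_user, so these work:
ALTER ROLE app_user PASSWORD 'secret';
GRANT app_user TO some_other_role;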
Patch by me, reviewed by Mark Dilger.
Discussion: https://postgr.es/m/CA+TgmobN59ct+Emmz6ig1Nua2Q-_o=r6DSD98KfU53kctq_kQw@mail.gmail.com
Currently, for large transactions, the publisher sends the data in
multiple streams (changes divided into chunks depending upon
logical_decoding_work_mem), and then on the subscriber-side, the apply
worker writes the changes into temporary files and once it receives the
commit, it reads from those files and applies the entire transaction. To
improve the performance of such transactions, we can instead allow them to
be applied via parallel workers.
In this approach, we assign a new parallel apply worker (if available) as
soon as the xact's first stream is received and the leader apply worker
will send changes to this new worker via shared memory. The parallel apply
worker will directly apply the change instead of writing it to temporary
files. However, if the leader apply worker times out while attempting to
send a message to the parallel apply worker, it will switch to
"partial serialize" mode - in this mode, the leader serializes all
remaining changes to a file and notifies the parallel apply workers to
read and apply them at the end of the transaction. We use a non-blocking
way to send the messages from the leader apply worker to the parallel
apply worker to avoid deadlocks. We keep the parallel apply worker
assigned until the transaction commit is received, and also wait for the
worker to finish at commit. This preserves commit ordering and avoids
writing to and reading from files in most cases. We still need to spill
if there is no worker available.
This patch also extends the SUBSCRIPTION 'streaming' parameter so that the
user can control whether to apply the streaming transaction in a parallel
apply worker or spill the changes to disk. The user can set the
streaming parameter to 'on', 'off', or 'parallel'. The value 'parallel'
means the streaming transaction will be applied via a parallel apply
worker, if available. The value 'on' means the streaming transaction
will be spilled to disk. The default value is 'off' (same as current
behaviour).
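For example, on an illustrative subscription "mysub":
ALTER SUBSCRIPTION mysub SET (streaming = parallel);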
In addition, the patch extends the logical replication STREAM_ABORT
message so that abort_lsn and abort_time can also be sent which can be
used to update the replication origin in parallel apply worker when the
streaming transaction is aborted. Because this message extension is needed
to support parallel streaming, parallel streaming is not supported for
publications on servers < PG16.
Author: Hou Zhijie, Wang wei, Amit Kapila with design inputs from Sawada Masahiko
Reviewed-by: Sawada Masahiko, Peter Smith, Dilip Kumar, Shi yu, Kuroda Hayato, Shveta Mallik
Discussion: https://postgr.es/m/CAA4eK1+wyN6zpaHUkCLorEWNx75MG0xhMwcFhvjqm2KURZEAGw@mail.gmail.com
This allows an optional "S" modifier to be added to \dp and \z, to
have them include system objects in the list.
Note that this also changes the behaviour of a bare \dp or \z without
the "S" modifier to include temp objects in the list, and exclude
information_schema objects, making them consistent with other psql
meta-commands.
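For example, to include system objects in the listings:
\dpS
\zS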
Nathan Bossart, reviewed by Maxim Orlov.
Discussion: https://postgr.es/m/20221206193606.GB3078082@nathanxps13
VACUUM normally ends by running vac_update_datfrozenxid(), which
requires a scan of pg_class. Therefore, if one attempts to vacuum a
database one table at a time --- as vacuumdb has done since v12 ---
we will spend O(N^2) time in vac_update_datfrozenxid(). That causes
serious performance problems in databases with tens of thousands of
tables, and indeed the effect is measurable with only a few hundred.
To add insult to injury, only one process per database can run
vac_update_datfrozenxid() at a time, so this behavior largely defeats
vacuumdb's -j option.
Hence, invent options SKIP_DATABASE_STATS and ONLY_DATABASE_STATS
to allow applications to postpone vac_update_datfrozenxid() until the
end of a series of VACUUM requests, and teach vacuumdb to use them.
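For example, a sketch of what vacuumdb now effectively issues (table
names illustrative):
VACUUM (SKIP_DATABASE_STATS) table1;
VACUUM (SKIP_DATABASE_STATS) table2;
VACUUM (ONLY_DATABASE_STATS);  -- update datfrozenxid once, at the end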
Per bug #17717 from Gunnar L. Sadly, this answer doesn't seem
like something we'd consider back-patching, so the performance
problem will remain in v12-v15.
Tom Lane and Nathan Bossart
Discussion: https://postgr.es/m/17717-6c50eb1c7d23a886@postgresql.org
In user-manag.sgml, document precisely what privileges are conveyed
by CREATEROLE. Make particular note of the fact that it allows
changing passwords and granting access to high-privilege roles.
Also remove the suggestion of using a user with CREATEROLE and
CREATEDB instead of a superuser, as there is no real security
advantage to this approach.
Elsewhere in the documentation, adjust text that suggests that
<literal>CREATEROLE</literal> only allows for role creation, and
refer to the documentation in user-manag.sgml as appropriate.
Patch by me, reviewed by Álvaro Herrera
Discussion: http://postgr.es/m/CA+TgmoZBsPL8nPhvYecx7iGo5qpDRqa9k_AcaW1SbOjugAY1Ag@mail.gmail.com
While on it, newlines are removed from the end of two elog() strings.
The others are simple grammar mistakes. One comment in pg_upgrade
referred incorrectly to sequences since a7e5457.
Author: Justin Pryzby
Discussion: https://postgr.es/m/20221230231257.GI1153@telsasoft.com
Backpatch-through: 11
This is like the existing bt_page_stats() function, but it can
report on a range of pages rather than just one at a time.
I don't have a huge amount of faith in the portability of the
new test cases, but they do pass in a 32-bit FreeBSD VM here.
Further adjustment may be needed depending on buildfarm results.
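For example, assuming the new function is bt_multi_page_stats() taking
a start block and a page count (index name illustrative):
SELECT * FROM bt_multi_page_stats('some_btree_index', 1, 10);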
Hamid Akhtar, reviewed by Naeem Akhter, Bertrand Drouvot,
Bharath Rupireddy, and myself
Discussion: https://postgr.es/m/CANugjht-=oGMRmNJKMqnBC69y7vr+wHDmm0ZK6-1pJsxoBKBbA@mail.gmail.com
A refcursor variable that is bound to a specific query (by declaring
it with "CURSOR FOR") now chooses a portal name in the same way as an
unbound, plain refcursor variable. Its string value starts out as
NULL, and unless that's overridden by manual assignment, it will be
replaced by a unique-within-session portal name during OPEN.
The previous behavior was to initialize such variables to contain
their own name, resulting in that also being the portal name unless
the user overwrote it before OPEN. The trouble with this is that
it causes failures due to conflicting portal names if the same
cursor variable name is used in different functions. It is pretty
non-orthogonal to have bound and unbound refcursor variables behave
differently on this point, too, so let's change it.
This change can cause compatibility problems for applications that
open a bound cursor in a plpgsql function and then use it in the
calling code without explicitly passing back the refcursor value
(portal name). If the calling code simply assumes that the portal
name matches the called function's variable name, it will now fail.
That can be fixed by explicitly assigning a string value to the
refcursor variable before OPEN, e.g.
DECLARE myc CURSOR FOR SELECT ...;
BEGIN
myc := 'myc'; -- add this
OPEN myc;
We have no documentation examples showing the troublesome usage
pattern, so we can hope it's rare in practice.
Patch by me; thanks to Pavel Stehule and Jan Wieck for review.
Discussion: https://postgr.es/m/1465101.1667345983@sss.pgh.pa.us
When collecting an ANALYZE sample on foreign tables, postgres_fdw
fetched all rows and performed the sampling locally. For large tables
this meant transferring and immediately discarding large amounts of
data.
This commit allows the sampling to be performed on the remote server,
transferring only the much smaller sample. The sampling is performed
using the built-in TABLESAMPLE methods (system, bernoulli) or random()
function, depending on the remote server version.
Remote sampling can be enabled by the analyze_sampling option on the
foreign server and/or foreign table, with supported values 'off',
'auto', 'system', 'bernoulli' and 'random'. The default value is 'auto',
which uses either 'bernoulli' (TABLESAMPLE method) or 'random' (for
remote servers without TABLESAMPLE support).
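For example (server and table names illustrative):
ALTER SERVER remote_srv OPTIONS (ADD analyze_sampling 'bernoulli');
ALTER FOREIGN TABLE remote_tab OPTIONS (ADD analyze_sampling 'off');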
Teach VACUUM to decide on whether or not to trigger freezing at the
level of whole heap pages. Individual XIDs and MXIDs fields from tuple
headers now trigger freezing of whole pages, rather than independently
triggering freezing of each individual tuple header field.
Managing the cost of freezing over time now significantly influences
when and how VACUUM freezes. The overall amount of WAL written is the
single most important freezing-related cost, in general. Freezing each
page's tuples together in batch allows VACUUM to take full advantage of
the freeze plan WAL deduplication optimization added by commit 9e540599.
Also teach VACUUM to trigger page-level freezing whenever it detects
that heap pruning generated an FPI. We'll have already written a large
amount of WAL just to do that much, so it's very likely a good idea to
get freezing out of the way for the page early. This only happens in
cases where it will directly lead to marking the page all-frozen in the
visibility map.
In most cases "freezing a page" removes all XIDs < OldestXmin, and all
MXIDs < OldestMxact. It doesn't quite work that way in certain rare
cases involving MultiXacts, though. It is convenient to define "freeze
the page" in a way that gives FreezeMultiXactId the leeway to put off
the work of processing an individual tuple's xmax whenever it happens to
be a MultiXactId that would require an expensive second pass to process
aggressively (allocating a new multi is especially worth avoiding here).
FreezeMultiXactId is eager when processing is cheap (as it usually is),
and lazy in the event of an individual multi that happens to require
expensive second pass processing. This avoids regressions related to
processing of multis that page-level freezing might otherwise cause.
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Jeff Davis <pgsql@j-davis.com>
Reviewed-By: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CAH2-WzkFok_6EAHuK39GaW4FjEFQsY=3J0AAd6FXk93u-Xq3Fg@mail.gmail.com
Given the soft-input-error feature, we can reduce these functions
to be just thin wrappers around a soft-error call of the
corresponding datatype input function. This means less code and
more certainty that the to_reg* functions match the normal input
behavior.
Notably, it also means that they will accept numeric OID input,
which they didn't before. It's not clear to me if that omission
had more than laziness behind it, but it doesn't seem like
something we need to work hard to preserve.
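For example, both of these now succeed, 1259 being pg_class's OID:
SELECT to_regclass('pg_class');
SELECT to_regclass('1259');  -- numeric OID input, now accepted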
Discussion: https://postgr.es/m/3910031.1672095600@sss.pgh.pa.us
This option extracts (decompressing them if necessary) full-page images
included in WAL records into a given target directory. These images are
subject to the same filtering rules as the normal display of the WAL
records, hence with --relation one can, for example, extract only the
FPIs issued on the specified relation. By default, the records are
printed or their stats computed (--stats); using --quiet saves only the
images, without generating any other output.
This is a tool aimed mostly at very experienced users, useful for
fixing page-level corruption or just analyzing the past state of a page,
and there was no easy way to do that with the in-core tools up to now
when looking at WAL.
Each block is saved in a separate file, to ease their manipulation, with
the file name following the format <lsn>.<ts>.<db>.<rel>.<blk>_<fork>.
For instance, 00000000-010000C0.1663.1.6117.123_main refers to:
- WAL record LSN in hexadecimal format (00000000-010000C0).
- Tablespace OID (1663).
- Database OID (1).
- Relfilenode (6117).
- Block number (123).
- Fork name of the file this block came from (_main).
Author: David Christensen
Reviewed-by: Sho Kato, Justin Pryzby, Bharath Rupireddy, Matthias van de
Meent
Discussion: https://postgr.es/m/CAOxo6XKjQb2bMSBRpePf3ZpzfNTwjQUc4Tafh21=jzjX6bX8CA@mail.gmail.com
This enables streaming or serializing changes immediately in logical
decoding. This parameter is intended to be used to test logical decoding
and replication of large transactions, for which we would otherwise need
to generate enough changes to reach logical_decoding_work_mem.
This helps reduce the runtime of existing tests related to logical
replication of in-progress transactions and will help in writing tests
for the upcoming feature for applying large in-progress transactions in
parallel.
Author: Shi yu
Reviewed-by: Sawada Masahiko, Shveta Mallik, Amit Kapila, Dilip Kumar, Kuroda Hayato, Kyotaro Horiguchi
Discussion: https://postgr.es/m/OSZPR01MB63104E7449DBE41932DB19F1FD1B9@OSZPR01MB6310.jpnprd01.prod.outlook.com
After some copy-editing I did in commit 3a06a79cd1, we have a <sect2>
that only contains a warning box. This doesn't look good. Rework by
moving the sect2 title to be the warning's title, and move the 'id' to
it as well, so that the external reference continues to work.
Backpatch to 15.
In branch master, I also take the opportunity to add titles to a couple
of other warning boxes elsewhere in the documentation.
Discussion: https://postgr.es/m/20221219164713.ccnlvtkyj6lmshqq@alvherre.pgsql
Commit 2f9661311b changed command tags from strings to numbers, but
forgot to adjust the code in the event trigger example, which
consequently failed to compile.
While fixing that, improve the indentation to adhere to pgindent style.
Backpatch to v13, where the change was introduced.
Author: Laurenz Albe
Discussion: https://postgr.es/m/81e36ac17dc80489e74dc5b6914afa6ccdb1a99d.camel@cybertec.at
The former name was discussed as being confusing, so use "split", as per
a suggestion from Magnus Hagander.
While on it, one of the output arguments is renamed from "segno" to
"segment_number", as per a suggestion from Kyotaro Horiguchi.
The documentation is updated to reflect all these changes.
Bump catalog version.
Author: Bharath Rupireddy, Michael Paquier
Discussion: https://postgr.es/m/CABUevEytQVaOOhGdoh0D7hGwe3fuKcRF6NthsSW7ww04EmtFgQ@mail.gmail.com
1349d279 added query planner support to allow more efficient execution of
aggregate functions which have an ORDER BY or a DISTINCT clause. Prior to
that commit, the planner would only request that the lower planner produce
a plan with the order required for the GROUP BY clause and it would be
left up to nodeAgg.c to perform the final sort of records within each
group so that the aggregate transition functions were called in the
correct order. Now that the planner requests the lower planner produce a
plan with the GROUP BY and the ORDER BY / DISTINCT aggregates in mind,
there is the possibility that the planner chooses a plan which could be
less efficient than what would have been produced before 1349d279.
While developing 1349d279, I had in mind that Incremental Sort would help
us in cases where an index exists only on the GROUP BY column(s).
Incremental Sort would just replace the implicit tuplesorts which are
being performed in nodeAgg.c. However, because the planner has the
flexibility to instead choose a plan which just performs a full sort on
both the GROUP BY and ORDER BY / DISTINCT aggregate columns, there is
potential for the planner to make a bad choice. The costing for
Incremental Sort is not perfect as it assumes an even distribution of rows
to sort within each sort group.
Here we add an escape hatch in the form of the enable_presorted_aggregate
GUC. This will allow users to get the pre-PG16 behavior in cases where
they have no other means to convince the query planner to produce a plan
which only sorts on the GROUP BY column(s).
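For example, to get back the pre-PG16 behavior:
SET enable_presorted_aggregate = off;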
Discussion: https://postgr.es/m/CAApHDvr1Sm+g9hbv4REOVuvQKeDWXcKUAhmbK5K+dfun0s9CvA@mail.gmail.com
This function takes as input a WAL segment name and returns a tuple
made of the segment sequence number (dependent on the WAL segment size
of the cluster) and its timeline, as a thin SQL wrapper around the
existing XLogFromFileName().
This function has multiple usages, like being able to compute an LSN
from a file name and an offset, or finding the timeline of a segment
without having to do some math based on the first eight characters of
the segment name.
Bump catalog version.
Author: Bharath Rupireddy
Reviewed-by: Nathan Bossart, Kyotaro Horiguchi, Maxim Orlov, Michael
Paquier
Discussion: https://postgr.es/m/CALj2ACWV=FCddsxcGbVOA=cvPyMr75YCFbSQT6g4KDj=gcJK4g@mail.gmail.com
A new function pg_stat_get_backend_subxact() can be used to get
information about the number of subtransactions in the cache of
a particular backend and whether that cache has overflowed. This
can be useful for tracking down performance problems that can
result from overflowed snapshots.
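For example, a sketch assuming the output columns are named
subxact_count and subxact_overflowed:
SELECT id AS backendid, s.subxact_count, s.subxact_overflowed
FROM pg_stat_get_backend_idset() AS id,
     pg_stat_get_backend_subxact(id) AS s;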
Dilip Kumar, reviewed by Zhihong Yu, Nikolay Samokhvalov,
Justin Pryzby, Nathan Bossart, Ashutosh Sharma, Julien
Rouhaud. Additional design comments from Andres Freund,
Tom Lane, Bruce Momjian, and David G. Johnston.
Discussion: http://postgr.es/m/CAFiTN-ut0uwkRJDQJeDPXpVyTWD46m3gt3JDToE02hTfONEN=Q@mail.gmail.com
This option selects the default transfer mode. Having an explicit
option is handy to make scripts and tests more explicit. It also
makes it easier to talk about a "copy" mode rather than "the default
mode" or something like that, since until now the default mode didn't
have an externally visible name.
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/50a97009-8ff9-ca4d-a0f6-6086a6775a5b%40enterprisedb.com
Add support for hexadecimal, octal, and binary integer literals:
0x42F
0o273
0b100101
per SQL:202x draft.
This adds support in the lexer as well as in the integer type input
functions.
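For example:
SELECT 0x42F, 0o273, 0b100101;  -- 1071, 187, 37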
Reviewed-by: John Naylor <john.naylor@enterprisedb.com>
Reviewed-by: Zhihong Yu <zyu@yugabyte.com>
Reviewed-by: David Rowley <dgrowleyml@gmail.com>
Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/b239564c-cad0-b23e-c57e-166d883cb97d@enterprisedb.com
Commits f92944137 et al. made IsInTransactionBlock() set the
XACT_FLAGS_NEEDIMMEDIATECOMMIT flag before returning "false",
on the grounds that that kept its API promises equivalent to those of
PreventInTransactionBlock(). This turns out to be a bad idea though,
because it allows an ANALYZE in a pipelined series of commands to
cause an immediate commit, which is unexpected.
Furthermore, if we return "false" then we have another issue,
which is that ANALYZE will decide it's allowed to do internal
commit-and-start-transaction sequences, thus possibly unexpectedly
committing the effects of previous commands in the pipeline.
To fix the latter situation, invent another transaction state flag
XACT_FLAGS_PIPELINING, which explicitly records the fact that we
have executed some extended-protocol command and not yet seen a
commit for it. Then, require that flag to not be set before allowing
IsInTransactionBlock() to return "false".
Having done that, we can remove its setting of NEEDIMMEDIATECOMMIT
without fear of causing problems. This means that the API guarantees
of IsInTransactionBlock now diverge from PreventInTransactionBlock,
which is mildly annoying, but it seems OK given the very limited usage
of IsInTransactionBlock. (In any case, a caller preferring the old
behavior could always set NEEDIMMEDIATECOMMIT for itself.)
For consistency also require XACT_FLAGS_PIPELINING to not be set
in PreventInTransactionBlock. This too is meant to prevent commands
such as CREATE DATABASE from silently committing previous commands
in a pipeline.
Per report from Peter Eisentraut. As before, back-patch to all
supported branches (which sadly no longer includes v10).
Discussion: https://postgr.es/m/65a899dd-aebc-f667-1d0a-abb89ff3abf8@enterprisedb.com
Add some cross-links between chapter "20. Server Parameters" and
"31. Logical Replication" regarding the available configuration
parameters, for easier navigation; and some more explanatory text too.
I (Álvaro) chose to duplicate max_replication_slots in Chapter 20,
because it has completely different meanings at each side of the
replication link.
Author: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: vignesh C <vignesh21@gmail.com>
Reviewed-by: samay sharma <smilingsamay@gmail.com>
Discussion: https://postgr.es/m/CAHut+PsESqpy7w3Y6cX98c255ZuCjvipkhKjy6hZBjOv4E6iJA@mail.gmail.com
This is straightforward as far as it goes. However, it does not
attempt to trap errors occurring during the execution of domain
CHECK constraints. Since those are general user-defined
expressions, the only way to do that would involve starting up a
subtransaction for each check. Of course the entire point of
the soft-errors feature is to not need subtransactions, so that
would be self-defeating. For now, we'll rely on the assumption
that domain checks are written to avoid throwing errors.
Discussion: https://postgr.es/m/1181028.1670635727@sss.pgh.pa.us
pg_input_is_valid() returns boolean, while pg_input_error_message()
returns the primary error message if the input is bad, or NULL
if the input is OK. The main reason for having two functions is
so that we can test both the details-wanted and the no-details-wanted
code paths.
Although these are primarily designed with testing in mind,
it could well be that they'll be useful to end users as well.
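For example:
SELECT pg_input_is_valid('42', 'integer');         -- true
SELECT pg_input_is_valid('forty-two', 'integer');  -- false
SELECT pg_input_error_message('forty-two', 'integer');
-- invalid input syntax for type integer: "forty-two"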
This patch is mostly by me, but it owes very substantial debt to
earlier work by Nikita Glukhov, Andrew Dunstan, and Amul Sul.
Thanks to Andres Freund for review.
Discussion: https://postgr.es/m/3bbbb0df-7382-bf87-9737-340ba096e034@postgrespro.ru
Postgres' standard mechanism for reporting errors (ereport() or elog())
is used for all sorts of error conditions. This means that throwing
an exception via ereport(ERROR) requires an expensive transaction or
subtransaction abort and cleanup, since the exception catcher dare not
make many assumptions about what has gone wrong. There are situations
where we would rather have a lighter-weight mechanism for dealing
with errors that are known to be safe to recover from without a full
transaction cleanup. This commit creates infrastructure to let us
adapt existing error-reporting code for that purpose. See the
included documentation changes for details. Follow-on commits will
provide test code and usage examples.
The near-term plan is to convert most if not all datatype input
functions to report invalid input "softly". This will enable
implementing some SQL/JSON features cleanly and without the cost
of subtransactions, and it will also allow creating COPY options
to deal with bad input without cancelling the whole COPY.
This patch is mostly by me, but it owes very substantial debt to
earlier work by Nikita Glukhov, Andrew Dunstan, and Amul Sul.
Thanks also to Andres Freund for review.
Discussion: https://postgr.es/m/3bbbb0df-7382-bf87-9737-340ba096e034@postgrespro.ru
The USER SET flag specifies that the variable should be set on behalf
of an ordinary role. That lets ordinary roles set placeholder variables
whose permission requirements are not known yet. Such a value won't be
used if the variable finally turns out to require superuser privileges.
The new flags are stored in the pg_db_role_setting.setuser array. Catversion
is bumped.
This commit is inspired by the previous work by Steve Chavez.
Discussion: https://postgr.es/m/CAPpHfdsLd6E--epnGqXENqLP6dLwuNZrPMcNYb3wJ87WR7UBOQ%40mail.gmail.com
Author: Alexander Korotkov, Steve Chavez
Reviewed-by: Pavel Borisov, Steve Chavez
Commit 7103ebb7aa added support for MERGE, which included support for
inheritance hierarchies, but didn't document the fact that ONLY could
be specified before the source and/or target tables to exclude tables
inheriting from the tables specified.
Update merge.sgml to mention this, and while at it, add some
regression tests to cover it.
Dean Rasheed, reviewed by Nathan Bossart.
Backpatch to 15, where MERGE was added.
Discussion: https://postgr.es/m/CAEZATCU0XM-bJCvpJuVRU3UYNRqEBS6g4-zH%3Dj9Ye0caX8F6uQ%40mail.gmail.com
The tests don't have much coverage of segment-related code, as we don't create
large enough tables. To make it easier to test these paths, add a new option
specifying the segment size in blocks.
Set the new option to 6 blocks in one of the CI tasks. Smaller numbers
currently fail one of the tests, for understandable reasons.
While at it, fix some segment size related issues in the meson build.
Author: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20221107171355.c23fzwanfzq2pmgt@awork3.anarazel.de
To run all tests that support running against an existing server:
$ meson test --setup running
To run just the main pg_regress tests against an existing server:
$ meson test --setup running regress-running/regress
To ensure the 'running' setup continues to work, test it as part of the
freebsd CI task.
Discussion: https://postgr.es/m/CAH2-Wz=XDQcmLoo7RR_i6FKQdDmcyb9q5gStnfuuQXrOGhB2sQ@mail.gmail.com
Provide <sect2> subdivisions in 9.26 System Information Functions and
Operators. This is useful because it adds a mini-TOC at the top of
the page to aid jumping to portions of what's become quite a long
section. Also, now that several of the subsections contain multiple
tables, it's hard to see the overall structure without headings.
Discussion: https://postgr.es/m/4026789.1670426602@sss.pgh.pa.us
For historical reasons, pg_dump refers to large objects as "BLOBs".
This term is not used anywhere else in PostgreSQL, and it also means
something different in the SQL standard and other SQL systems.
This patch renames internal functions, code comments, documentation,
etc. to use the "large object" or "LO" terminology instead. There is
no functionality change, so the archive format still uses the name
"BLOB" for the archive entry. Additional long command-line options
are added with the new naming.
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://www.postgresql.org/message-id/flat/868a381f-4650-9460-1726-1ffd39a270b4%40enterprisedb.com
Up to now we have allowed manual creation of an ON SELECT rule on
a table to convert it into a view. That was never anything but a
horrid, error-prone hack though. pg_dump used to rely on that
behavior to deal with cases involving circular dependencies,
where a dependency loop could be broken by separating the creation
of a view from installation of its ON SELECT rule. However, we
changed pg_dump to use CREATE OR REPLACE VIEW for that in commit
d8c05aff5 (which was later back-patched as far as 9.4), so there's
not a good argument anymore for continuing to support the behavior.
The proximate reason for axing it now is that we found that the
new statistics code has failure modes associated with the relkind
change caused by this behavior. We'll patch around that in v15,
but going forward it seems like a better idea to get rid of the
need to support relkind changes.
Discussion: https://postgr.es/m/CALDaNm2yXz+zOtv7y5zBd5WKT8O0Ld3YxikuU3dcyCvxF7gypA@mail.gmail.com
This rewrites the few places where first-person form was used, and
replaces it with wording more in line with the rest of the
documentation. While there, the section on editing PO files is split
between what to edit with and how to edit, and a missing colon is added.
Author: Daniel Gustafsson <daniel@yesql.se>
Reported-by: wolakk@gmail.com
Discussion: https://postgr.es/m/166692508416.630.9910522387468316027@wrigleys.postgresql.org
Datatypes in CREATE TABLE statements in the examples were vertically
aligned in most cases, but a few examples were unaligned, using a single
space. This makes sure all examples on the same page are consistently
aligned.
Patch by Laurenz Albe with some additional fixups by me.
Author: Laurenz Albe <laurenz.albe@cybertec.at>
Reported-by: lemes.marcelo26@gmail.com
Discussion: https://postgr.es/m/166870885664.635.16667004450401573487@wrigleys.postgresql.org
Compression specifications are currently used by pg_basebackup and
pg_receivewal, and are able to let the user control in an extended way
the method and level of compression used. As an effect of this commit,
pg_dump's -Z/--compress is now able to use more than just an integer, as
of the grammar "method[:detail]".
The method can be either "none" or "gzip", and can optionally take a
detail string. If the detail string is only an integer, it defines the
compression level. A comma-separated list of keywords can also be used
if the method allows for more options; the only keyword supported now
is "level".
The change is backward-compatible, hence specifying only an integer
leads to no compression for a level of 0 and gzip compression when the
level is greater than 0.
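For example, these two invocations are equivalent (database name
illustrative):
$ pg_dump -Fc -Z 6 mydb > db.dump
$ pg_dump -Fc -Z gzip:6 mydb > db.dump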
Most of the code changes are straightforward, as pg_dump was relying on
an integer tracking the compression level to check for gzip or no
compression. These are changed to use a compression specification and
the algorithm stored in it.
As of this change, note that the dump format is not bumped because there
is no need yet to track the compression algorithm in the TOC entries.
Hence, we still rely on the compression level to make the difference
when reading them. This will be mandatory once a new compression method
is added, though.
In order to keep the code simpler when parsing the compression
specification, the code is changed so that pg_dump now fails hard when
gzip is specified with -Z/--compress but its support is not compiled in,
rather than enforcing no compression without the user knowing about it
except through a warning. Like before this commit, archive and custom
formats are compressed by default when the code is compiled with gzip,
and left uncompressed without gzip.
Author: Georgios Kokolatos
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/O4mutIrCES8ZhlXJiMvzsivT7ztAMja2lkdL1LJx6O5f22I2W8PBIeLKz7mDLwxHoibcnRAYJXm1pH4tyUNC4a8eDzLn22a6Pb1S74Niexg=@pm.me
This has always worked, but you'd be unlikely to guess it
from the documentation. Add an example showing it.
Lack of docs noted by David Johnston. Back-patch to v13;
the documentation layout we used before that was not very
amenable to squeezing in multiple examples.
Discussion: https://postgr.es/m/CAKFQuwZ4Vy1Xty0G5Ok+ot=NDrU3C6hzF1JwUk-FEkwe3V9_RA@mail.gmail.com
Rearrange the discussion of user-private schemas so that details
applying only to upgraded-from-pre-v15 databases are in a follow-on
paragraph, not in the main description of how to set up this pattern.
This seems a little clearer even today, and it'll get more so as
pre-v15 systems fade into the sunset.
Wording contributions from Robert Haas, Tom Lane, Noah Misch.
Discussion: https://postgr.es/m/CA+TgmoYUHsfp90inEMAP0yNr7Y_L6EphPH1YOon1JKtBztXHyQ@mail.gmail.com
This also adds references to this new chapter at relevant sections of
our documentation. Previously, many of these internal details were
exposed to users, but not explained. This also updates RELEASE
SAVEPOINT.
Discussion: https://postgr.es/m/CANbhV-E_iy9fmrErxrCh8TZTyenpfo72Hf_XD2HLDppva4dUNA@mail.gmail.com
Author: Simon Riggs, Laurenz Albe
Reviewed-by: Bruce Momjian
Backpatch-through: 11
This commit moves the code in charge of deparsing the method and detail
strings fed later to parse_compress_specification() to a common routine,
where the backward-compatible case of only an integer being found (N
= 0 => "none", N > 0 => gzip at level N) is handled.
Note that this has a side-effect for pg_basebackup, as we now attempt to
detect "server-" and "client-" before checking for the integer-only
pre-14 grammar, where values like server-N and client-N (without the
follow-up detail string) are now valid rather than failing because of an
unsupported method name. Past grammars are still handled the same way,
but these flavors are now authorized, and would now switch to consider N
= 0 as no compression and N > 0 as gzip with the compression level used
as N, with the caller still controlling if the compression method should
be done server-side, client-side or is unspecified. The documentation
of pg_basebackup is updated to reflect that.
This benefits other code paths that would like to rely on the same logic
as pg_basebackup and pg_receivewal with option values used for
compression specifications, one area discussed lately being pg_dump.
Author: Georgios Kokolatos, Michael Paquier
Discussion: https://postgr.es/m/O4mutIrCES8ZhlXJiMvzsivT7ztAMja2lkdL1LJx6O5f22I2W8PBIeLKz7mDLwxHoibcnRAYJXm1pH4tyUNC4a8eDzLn22a6Pb1S74Niexg=@pm.me
Previously, an idle startup (recovery) process would wake up every 5
seconds to have a chance to poll for promote_trigger_file, even if that
GUC was not configured. That promotion triggering mechanism was
effectively superseded by pg_ctl promote and pg_promote() a long time
ago. There probably aren't many users left and it's very easy to change
to the modern mechanisms, so we agreed to remove the feature.
This is part of a campaign to reduce wakeups on idle systems.
Author: Simon Riggs <simon.riggs@enterprisedb.com>
Reviewed-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Ian Lawrence Barwick <barwick@gmail.com>
Discussion: https://postgr.es/m/CANbhV-FsjnzVOQGBpQ589%3DnWuL1Ex0Ykn74Nh1hEjp2usZSR5g%40mail.gmail.com
This provides two new predefined roles: pg_vacuum_all_tables and
pg_analyze_all_tables. Roles which have been granted these roles can
perform vacuum or analyze, respectively, on any or all tables as if
they were a superuser. This removes the need to grant superuser
privilege to roles just so they can perform vacuum and/or analyze.
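For example (role name illustrative):
GRANT pg_vacuum_all_tables TO maintenance_user;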
Nathan Bossart
Reviewed by: Bharath Rupireddy, Kyotaro Horiguchi, Stephen Frost, Robert
Haas, Mark Dilger, Tom Lane, Corey Huinker, David G. Johnston, Michael
Paquier.
Discussion: https://postgr.es/m/20220722203735.GB3996698@nathanxps13
Currently a table can only be vacuumed or analyzed by its owner or
a superuser. This can now be extended to any user by means of an
appropriate GRANT.
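For example (names illustrative):
GRANT VACUUM, ANALYZE ON mytable TO maintenance_user;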
Nathan Bossart
Reviewed by: Bharath Rupireddy, Kyotaro Horiguchi, Stephen Frost, Robert
Haas, Mark Dilger, Tom Lane, Corey Huinker, David G. Johnston, Michael
Paquier.
Discussion: https://postgr.es/m/20220722203735.GB3996698@nathanxps13
pg_hba.conf and pg_ident.conf gain support for three record keywords:
- "include", to include a file.
- "include_if_exists", to include a file, ignoring it if missing.
- "include_dir", to include a directory of files. These are classified
by name (C locale, mostly) and need to be prefixed by ".conf", hence
following the same rules as GUCs.
This commit relies on the refactoring pieces done in efc9816, ad6c528,
783e8c6 and 1b73d0b, adding a small wrapper to build a list of
TokenizedAuthLines (tokenize_include_file), and the code is shaped to
offer some symmetry with what is done for GUCs with the same options.
pg_hba_file_rules and pg_ident_file_mappings gain a new field called
file_name, to track from which file a record is located, taking
advantage of the addition of rule_number in c591300 to offer an
organized view of the HBA or ident records loaded.
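For example, the loaded entries and their source files can be inspected
with:
SELECT rule_number, file_name, line_number, type, database, user_name
FROM pg_hba_file_rules;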
Bump catalog version.
Author: Julien Rouhaud
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/20220223045959.35ipdsvbxcstrhya@jrouhaud
The lwlock wait queue scalability issue fixed in a4adc31f69 was quite hard to
find because of the exponential backoff and because we adjust spins_per_delay
over time within a backend.
To make it easier to find similar issues in the future, add a wait event for
the pg_usleep() in perform_spin_delay(). Showing a wait event while spinning
without sleeping would increase the overhead of spinlocks, which we do not
want.
We may at some later point want to have more granular wait events, but that'd
be a substantial amount of work. This provides at least some insights into
something currently hard to observe.
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Alexander Korotkov <aekorotkov@gmail.com>
Discussion: https://postgr.es/m/20221120204310.xywrhyxyytsajuuq@awork3.anarazel.de
The Hunspell project moved from Sourceforge to Github sometime
in 2016, so update our links to match the new URL. Backpatch
the doc changes to all supported versions.
Discussion: https://postgr.es/m/DC9A662A-360D-4125-A453-5A6CB9C6C4B4@yesql.se
Backpatch-through: v11
The postmaster normally sends SIGQUIT to force-terminate its
child processes after a child crash or immediate-stop request.
If that doesn't result in child exit within a few seconds,
we follow it up with SIGKILL. This patch provides GUC flags
that allow either of these signals to be replaced with SIGABRT.
On typically-configured Unix systems, that will result in a
core dump being produced for each such child. This can be
useful for debugging problems, although it's not something you'd
want to have on in production due to the risk of disk space
bloat from lots of core files.
The old postmaster -T switch, which sent SIGSTOP in place of
SIGQUIT, is changed to be the same as send_abort_for_crash.
As far as I can tell from the code comments, the intent of
that switch was just to block things for long enough to force
core dumps manually, which seems like an unnecessary extra step.
(Maybe at the time, there was no way to get most kernels to
produce core files with per-PID names, requiring manual core
file renaming after each one. But now it's surely the hard way.)
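For example, a sketch of enabling core dumps on crash (these are
reload-time settings):
ALTER SYSTEM SET send_abort_for_crash = on;
SELECT pg_reload_conf();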
I also took the opportunity to remove the old postmaster -n
(skip shmem reinit) switch, which hasn't actually done anything
in decades, though the documentation still claimed it did.
Discussion: https://postgr.es/m/2251016.1668797294@sss.pgh.pa.us
Similar to how the INHERIT option controls whether or not the
permissions of the granted role are automatically available to the
grantee, the new SET permission controls whether or not the grantee
may use the SET ROLE command to assume the privileges of the granted
role.
In addition, the new SET permission controls whether or not it
is possible to transfer ownership of objects to the target role
or to create new objects owned by the target role using commands
such as CREATE DATABASE .. OWNER. We could alternatively have made
this controlled by the INHERIT option, or allow it when either
option is given. An advantage of this approach is that if you
are granted a predefined role with INHERIT TRUE, SET FALSE, you
can't go and create objects owned by that role.
The underlying theory here is that the ability to create objects
as a target role is not a privilege per se, and thus does not
depend on whether you inherit the target role's privileges. However,
it's surely something you could do anyway if you could SET ROLE
to the target role, and thus making it contingent on whether you
have that ability is reasonable.
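For example (role name illustrative):
GRANT pg_read_all_data TO alice WITH INHERIT TRUE, SET FALSE;
-- alice inherits the privileges, but can neither SET ROLE to
-- pg_read_all_data nor create objects owned by it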
Design review by Nathan Bossart, Wolfgang Walther, Jeff Davis,
Peter Eisentraut, and Stephen Frost.
Discussion: http://postgr.es/m/CA+Tgmob+zDSRS6JXYrgq0NWdzCXuTNzT5eK54Dn2hhgt17nm8A@mail.gmail.com
The TAP tests can, to some degree, be controlled by a set of environment
variables. These were however only documented in a README and not in the
main documentation. This adds documentation of these variables, as well
as changes one CPAN reference to a ulink for consistency. While there,
also tag CPAN as an acronym as it's listed in the acronyms section.
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/YyPd9unV14SX2bLF@paquier.xyz
To improve readability of the TG_ variables definition lists, this moves
the datatypes up to the defined term to avoid having each entry start
with "Data type". This also removes redundant wording that that didn't
carry any information from the descriptions.
Author: Christoph Berg <myon@debian.org>
Reviewed-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Discussion: https://postgr.es/m/Yw4Noe3A2s87A0qq@msg.df7cb.de
This adds a new psql command \bind that sets query parameters and
causes the next query to be sent using the extended query protocol.
Example:
SELECT $1, $2 \bind 'foo' 'bar' \g
This may be useful for psql scripting, but one of the main purposes is
also to be able to test various aspects of the extended query protocol
from psql and to write tests more easily.
Reviewed-by: Corey Huinker <corey.huinker@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/e8dd1cd5-0e04-3598-0518-a605159fe314@enterprisedb.com
Setting archive_library and archive_command at the same time is now an
error. Before, archive_library would take precedence over
archive_command.
Author: Nathan Bossart <nathandbossart@gmail.com>
Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Reviewed-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Discussion: https://www.postgresql.org/message-id/20220914222736.GA3042279%40nathanxps13
Commit fede15417 introduced FILTER by jamming it into the existing
example introducing HAVING, which seems pedagogically poor to me;
and it added no information about what the keyword actually does.
Not to mention that the claimed output didn't match the sample
data being used in this running example.
Revert that and instead make an independent example using FILTER.
To help drive home the point that it's a per-aggregate filter,
we need to use two aggregates not just one; for consistency
expand all the examples in this segment to do that.
Also adjust the example using WHERE ... LIKE so that it'd produce
nonempty output with this sample data, and show that output.
Back-patch, as the previous patch was. (Sadly, v10 is now out
of scope.)
Discussion: https://postgr.es/m/166794307526.652.9073408178177444190@wrigleys.postgresql.org
There are some cases (e.g. when the subscription is created using the
connect = false option) where the remote replication slot was not created
automatically and the user must create it manually before the subscription
can be activated. There was not enough information in the docs for users
to do this easily.
Author: Peter Smith
Reviewed-by: Shi yu, Amit Kapila
Discussion: https://postgr.es/m/CAHut+PvqdqOanheWSHDyhQiF+Z-7w=-+k4U+bwbT=b6YQ_hrXQ@mail.gmail.com
pg_ident_file_mappings.line_number was described as a line number in
pg_ident.conf for a "rule" number, but this should refer to a "map".
The same inconsistent term was used in the main paragraph describing the
view.
Extracted from a patch by the same author. Issue introduced by
a2c8499, where this view was added.
Author: Julien Rouhaud
Discussion: https://postgr.es/m/20221026031948.cbrnzgy5e7glsq2d@jrouhaud
Backpatch-through: 15
This gives some additional advice on using row filters and column
lists on publications securely.
Author: Antonin Houska <ah@cybertec.at>
Reviewed-by: Euler Taveira <euler@eulerto.com>
Discussion: https://www.postgresql.org/message-id/flat/20330.1652105397@antos
These numbers are strictly monotone identifiers assigned to each rule
of pg_hba_file_rules and each map of pg_ident_file_mappings when loading
the HBA and ident configuration files, indicating the order in which
they are checked at authentication time, until a match is found.
With only one file loaded currently, this is equivalent to the line
numbers assigned to the entries loaded if one wants to know their order,
but this becomes mandatory once the inclusion of external files is
added to the HBA and ident files to be able to know in which order the
rules and/or maps are applied at authentication. Note that, unlike the
line number, the number is NULL when an HBA or ident entry cannot be
parsed or validated, i.e. when an error exists.
Bump catalog version.
Author: Julien Rouhaud
Discussion: https://postgr.es/m/20220223045959.35ipdsvbxcstrhya@jrouhaud
As of this commit, any database or user entry beginning with a slash (/)
is considered a regular expression. This is particularly useful for
users, as previously there was no clean way to match a pattern across
multiple HBA lines. For example, a user name mapping with a regular
expression first needs to match an HBA line, and we would skip the
follow-up HBA entries if the ident regexp does *not* match what has
matched in the HBA line.
pg_hba.conf is able to handle multiple databases and roles with a
comma-separated list of these, hence individual regular expressions that
include commas need to be double-quoted.
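For example, hypothetical pg_hba.conf entries:
# any database whose name matches the regexp
host  /^db_\d+$           all  10.0.0.0/8  scram-sha-256
# a regexp containing a comma must be double-quoted
host  "/^db_[a-z]{1,3}$"  all  10.0.0.0/8  scram-sha-256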
At authentication time, user and database names are now checked in the
following order:
- Arbitrary keywords (like "all", the ones beginning by '+' for
membership check), that we know will never have a regexp. A fancy case
is physical WAL senders: we *have* to match only "replication" for
the database.
- Regular expression matching.
- Exact match.
The previous logic did the same, but without the regexp step.
We have also discussed the possibility of supporting regexp pattern
matching for host names, but as far as I understand that leads to tricky
issues, particularly with host entries that have CIDRs.
This commit relies heavily on the refactoring done in a903971 and
fc579e1, so the amount of code required to compile and execute
regular expressions is now minimal. When parsing pg_hba.conf, all the
computed regexps need to be explicitly free()'d, same as for
pg_ident.conf.
Documentation and TAP tests are added to cover this feature, including
cases where the regexps use commas (for clarity in the docs, coverage
for the parsing logic in the tests).
Note that this introduces a breakage with older versions, where a
database or user name beginning with a slash is treated as something to
check for an equal match. Per discussion, we have discarded this as
being much of an issue in practice as it would require a cluster to
have database and/or role names that begin with a slash, as well as HBA
entries using these. Hence, the consistency gained with regexps in
pg_ident.conf is more appealing in the long term.
**This compatibility change should be mentioned in the release notes.**
Author: Bertrand Drouvot
Reviewed-by: Jacob Champion, Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/fff0d7c1-8ad4-76a1-9db3-0ab6ec338bf7@amazon.com
56788d215 adjusted the parallel seq scan code so that instead of handing
out a single block at a time to parallel workers, it now hands out ranges
of blocks.
Here we update the documentation which still claimed that workers received
just 1 block at a time.
Reported-by: Zhang Mingli
Discussion: https://postgr.es/m/17c99615-2c3b-4e4e-9d0b-424a66a7bccd@Spark
Backpatch-through: 14, where 56788d215 was added.
Various test suites use the "openssl" program as part of their setup.
There isn't a way to override which openssl program is to be used,
other than by fiddling with the path, perhaps. This has gotten
increasingly problematic because different versions of openssl have
different capabilities and do different things by default.
This patch checks for an openssl binary in configure and meson setup,
with appropriate ways to override it. This is similar to how "lz4"
and "zstd" are handled, for example. The meson build system actually
already did this, but the result was only used in some places. This
is now applied more uniformly.
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://www.postgresql.org/message-id/flat/dc638b75-a16a-007d-9e1c-d16ed6cf0ad2%40enterprisedb.com
As currently designed, with a callback registered in an ERROR_CLEANUP
block, the shutdown callback would get called twice when updating
archive_library on SIGHUP, which is something that we want to avoid to
ease the life of extension writers.
Anyway, an ERROR in the archiver process is treated as a FATAL, stopping
it immediately, hence there is no need for an ERROR_CLEANUP block.
Instead, the shutdown callback is now called on before_shmem_exit(),
giving the modules the opportunity to do any cleanup actions before the
server shuts down its subsystems.
While on it, this commit adds some testing coverage for the shutdown
callback. Neither shell_archive nor basic_archive have been using it,
and one is added to shell_archive, whose trigger is checked in a TAP
test through a shutdown sequence.
Author: Nathan Bossart, Bharath Rupireddy
Reviewed-by: Kyotaro Horiguchi, Michael Paquier
Discussion: https://postgr.es/m/20221015221328.GB1821022@nathanxps13
Backpatch-through: 15
It can be useful to know when a relation has last been used, e.g., when
evaluating whether an index is still required. It was already possible to
infer the time of the last usage by tracking, e.g.,
pg_stat_all_indexes.idx_scan over time. But far from everybody does so.
To make it easier to detect the last time a relation has been scanned,
track that time in each relation's pgstat entry. To minimize overhead,
(a) the timestamp is updated only when the backend's pending stats entry
is flushed to shared stats, and (b) the last transaction's stop
timestamp is used as the timestamp.
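For example, a sketch that looks for indexes unused for a while:
SELECT schemaname, relname, indexrelname, last_idx_scan
FROM pg_stat_user_indexes
WHERE last_idx_scan IS NULL
   OR last_idx_scan < now() - interval '90 days';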
Bumps catalog and stats format versions.
Author: Dave Page <dpage@pgadmin.org>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Bruce Momjian <bruce@momjian.us>
Reviewed-by: Vik Fearing <vik@postgresfriends.org>
Discussion: https://postgr.es/m/CA+OCxozrVHNFVEPkweUHMZje+t1tfY816d9MZYc6eZwOOusOaQ@mail.gmail.com
Using pg_buffercache_summary() is significantly cheaper than querying
pg_buffercache and summarizing in SQL.
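For example:
SELECT * FROM pg_buffercache_summary();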
Author: Melih Mutlu <m.melihmutlu@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Aleksander Alekseev <aleksander@timescale.com>
Reviewed-by: Zhang Mingli <zmlpostgres@gmail.com>
Discussion: https://postgr.es/m/CAGPVpCQAXYo54Q%3D8gqBsS%3Du0uk9qhnnq4%2B710BtUhUisX1XGEg%40mail.gmail.com
Commit 3d956d956 allowed the COPY, but it's done by inserting individual
rows into the foreign table, so it can be inefficient due to the overhead
caused by each round-trip to the foreign server. To improve performance
of the COPY in such a case, this patch allows batch insertion, by
extending the multi-insert machinery in CopyFrom() to the foreign-table
case so that we insert multiple rows to the foreign table at once using
the FDW callback routine added by commit b663a4136. This patch also
allows this for postgres_fdw. It is enabled by the "batch_size" option
added by commit b663a4136, which is disabled by default.
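For example (names illustrative; batch_size can also be set on the
foreign server):
ALTER FOREIGN TABLE remote_tab OPTIONS (ADD batch_size '100');
COPY remote_tab FROM '/tmp/data.csv' WITH (FORMAT csv);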
When doing batch insertion, we update the progress of the COPY command
after performing the FDW callback routine, to count rows not suppressed
by the FDW as well as by a BEFORE ROW INSERT trigger. For consistency,
this patch
changes the timing of updating it for plain tables: previously, we
updated it immediately after adding each row to the multi-insert buffer,
but we do so only after writing the rows stored in the buffer out to the
table using table_multi_insert(), which I think would be consistent even
with non-batching mode, because in that mode we update it after writing
each row out to the table using table_tuple_insert().
Andrey Lepikhov, heavily revised by me, with review from Ian Barwick,
Andrey Lepikhov, and Zhihong Yu.
Discussion: https://postgr.es/m/bc489202-9855-7550-d64c-ad2d83c24867%40postgrespro.ru
Add
After=network-online.target
Wants=network-online.target
to the suggested unit file for starting a Postgres server.
This delays startup until the network interfaces have been
configured; without that, any attempt to bind to a specific
IP address will fail.
If listen_addresses is set to "localhost" or "*", it might be
possible to get away with the less restrictive "network.target",
but I don't think we need to get into such detail here.
Per suggestion from Pablo Federico.
Discussion: https://postgr.es/m/166552157407.591805.10036014441784710940@wrigleys.postgresql.org
meson/ninja do not support specifying dependencies via globs (as those
make it significantly more expensive to check the current build state).
Instead, targets should emit dependency information when run, which can
then be cheaply re-checked during future builds.
To handle xmllint and xsltproc invocations in the docs, add and use a wrapper
that uses --load-trace to collect dependency information.
Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Author: Nazir Bilal Yavuz <byavuz81@gmail.com>
Author: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/c5736f70-bb6d-8d25-e35c-e3d886e4e905@enterprisedb.com
This reverts commit db0d67db24 and
several follow-on fixes. The idea of making a cost-based choice
of the order of the sorting columns is not fundamentally unsound,
but it requires cost information and data statistics that we don't
really have. For example, relying on procost to distinguish the
relative costs of different sort comparators is pretty pointless
so long as most such comparator functions are labeled with cost 1.0.
Moreover, estimating the number of comparisons done by Quicksort
requires more than just an estimate of the number of distinct values
in the input: you also need some idea of the sizes of the larger
groups, if you want an estimate that's good to better than a factor of
three or so. That's data that's often unknown or not very reliable.
Worse, to arrive at estimates of the number of calls made to the
lower-order-column comparison functions, the code needs to make
estimates of the numbers of distinct values of multiple columns,
which are necessarily even less trustworthy than per-column stats.
Even if all the inputs are perfectly reliable, the cost algorithm
as-implemented cannot offer useful information about how to order
sorting columns beyond the point at which the average group size
is estimated to drop to 1.
Close inspection of the code added by db0d67db2 shows that there
are also multiple small bugs. These could have been fixed, but
there's not much point if we don't trust the estimates to be
accurate in principle.
Finally, the changes in cost_sort's behavior made for very large
changes (often a factor of 2 or so) in the cost estimates for all
sorting operations, not only those for multi-column GROUP BY.
That naturally changes plan choices in many situations, and there's
precious little evidence to show that the changes are for the better.
Given the above doubts about whether the new estimates are really
trustworthy, it's hard to summon much confidence that these changes
are better on the average.
Since we're hard up against the release deadline for v15, let's
revert these changes for now. We can always try again later.
Note: in v15, I left T_PathKeyInfo in place in nodes.h even though
it's unreferenced. Removing it would be an ABI break, and it seems
a bit late in the release cycle for that.
Discussion: https://postgr.es/m/TYAPR01MB586665EB5FB2C3807E893941F5579@TYAPR01MB5866.jpnprd01.prod.outlook.com
Currently, PQsslAttributeNames() returns the same list of attribute
names regardless of its conn parameter. This patch changes it to
have behavior parallel to what 80a05679d installed for PQsslAttribute:
you get OpenSSL's attributes if conn is NULL or is an SSL-encrypted
connection, or an empty list if conn is a non-encrypted connection.
The point of this is to have sensible connection-dependent behavior
in case we ever support multiple SSL libraries. The behavior for
NULL can be defined as "the attributes for the default SSL library",
parallel to what PQsslAttribute(NULL, "library") does.
Since this is mostly just future-proofing, no back-patch.
Discussion: https://postgr.es/m/17625-fc47c78b7d71b534@postgresql.org
The compression parameter to PQsslAttribute has never returned the
compression method used; it has always returned "on" or "off" since
it was added in commit 91fa7b4719. Backpatch through v10.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/B9EC60EC-F665-47E8-A221-398C76E382C9@yesql.se
Backpatch-through: v10
Commit ebc8b7d44 intended to change the behavior of
PQsslAttribute(NULL, "library"), but accidentally also changed
what happens with a non-NULL conn pointer. Undo that so that
only the intended behavior change happens. Clarify some
associated documentation.
Per bug #17625 from Heath Lord. Back-patch to v15.
Discussion: https://postgr.es/m/17625-fc47c78b7d71b534@postgresql.org
Up to now, the ID values returned by pg_stat_get_backend_idset() and
used by pg_stat_get_backend_activity() and allied functions were just
indexes into a local array of sessions seen by the last stats refresh.
This is problematic for a few reasons. The "ID" of a session can vary
over its existence, which is surprising. Also, while these numbers
often match the "backend ID" used for purposes like temp schema
assignment, that isn't reliably true. We can fairly cheaply switch
things around to make these numbers actually be the sessions' backend
IDs. The added test case illustrates that with this definition, the
temp schema used by a given session can be obtained given its PID.
While here, delete some dead code that guarded against getting
a NULL return from pgstat_fetch_stat_local_beentry(). That can't
happen as long as the caller is careful to pass an in-range array
index, as all the callers are. (This code may not have been dead
when written, but it surely is now.)
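As a sketch of the new definition (the PID is hypothetical), the
returned IDs now line up with the pg_temp_N schema naming:
SELECT 'pg_temp_' || id AS temp_schema
FROM pg_stat_get_backend_idset() AS id
WHERE pg_stat_get_backend_pid(id) = 12345;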
Nathan Bossart
Discussion: https://postgr.es/m/20220815205811.GA250990@nathanxps13
SYSTEM_USER is a reserved keyword of the SQL specification that,
roughly described, is aimed at reporting some information about the
system user who has connected to the database server. It may include
implementation-specific information about the means by which the user
connected, like an authentication method.
This commit implements SYSTEM_USER as auth_method:identity, where
"auth_method" is a keyword about the authentication method used to log
into the server (like peer, md5, scram-sha-256, gss, etc.) and
"identity" is the authentication identity as introduced by 9afffcb (peer
sets authn to the OS user name, gss to the user principal, etc.). This
format has been suggested by Tom Lane.
Note that thanks to d951052, SYSTEM_USER is available to parallel
workers.
Bump catalog version.
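As a sketch, with a hypothetical peer-authenticated OS user alice:
SELECT system_user;
-- returns something like peer:alice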
Author: Bertrand Drouvot
Reviewed-by: Jacob Champion, Joe Conway, Álvaro Herrera, Michael Paquier
Discussion: https://postgr.es/m/7e692b8c-0b11-45db-1cad-3afc5b57409f@amazon.com
For some reason the "bpchar" type name was defined nowhere in
our SGML docs, although several places refer to it in passing.
Give it a proper mention under Character Types.
While here, also provide an explanation of how the text and varchar
types relate. The previous wording seemed to be doing its best
to sweep text under the rug, which doesn't seem very appropriate
given its prominence in other parts of the docs.
Minor rearrangements and word-smithing for clarity, too.
Laurenz Albe and Tom Lane, per gripe from Yanliang Lei
Discussion: https://postgr.es/m/120b3084.56b6.1833b5ffe4b.Coremail.msdnchina@163.com
There are still some alignment-related failures in the buildfarm,
which may or may not be fixable quickly, but I've also
just realized that it increased the size of many WAL records by 4 bytes
because a block reference contains a RelFileLocator. The effect of that
hasn't been studied or discussed, so revert for now.
The previous macro implementations just cast the argument to a target
type but did not check whether the input type was appropriate. The
function implementation can do better type checking of the input type.
For the *GetDatumFast() macros, converting to an inline function
doesn't work in the !USE_FLOAT8_BYVAL case, but we can use
AssertVariableIsOfTypeMacro() to get a similar level of type checking.
Reviewed-by: Aleksander Alekseev <aleksander@timescale.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/8528fb7e-0aa2-6b54-85fb-0c0886dbd6ed%40enterprisedb.com
RelFileNumbers are now assigned using a separate counter, instead of
being assigned from the OID counter. This counter never wraps around:
if all 2^56 possible RelFileNumbers are used, an internal error
occurs. As the cluster is limited to 2^64 total bytes of WAL, this
limitation should not cause a problem in practice.
If the counter were 64 bits wide rather than 56 bits wide, we would
need to increase the width of the BufferTag, which might adversely
impact buffer lookup performance. Also, this lets us use bigint for
pg_class.relfilenode and other places where these values are exposed
at the SQL level without worrying about overflow.
This should remove the need to keep "tombstone" files around until
the next checkpoint when relations are removed. We do that to keep
RelFileNumbers from being recycled, but now that won't happen
anyway. However, this patch doesn't actually change anything in
this area; it just makes it possible for a future patch to do so.
Dilip Kumar, based on an idea from Andres Freund, who also reviewed
some earlier versions of the patch. Further review and some
wordsmithing by me. Also reviewed at various points by Ashutosh
Sharma, Vignesh C, Amul Sul, Álvaro Herrera, and Tom Lane.
Discussion: http://postgr.es/m/CA+Tgmobp7+7kmi4gkq7Y+4AM9fTvL+O1oQ4-5gFTT+6Ng-dQ=g@mail.gmail.com
Previously, the transaction-property GUCs such as transaction_isolation
could be reset after starting a transaction, because we marked them
as GUC_NO_RESET_ALL but still allowed a targeted RESET. That leads to
assertion failures or worse, because those properties aren't supposed
to change after we've acquired a transaction snapshot.
There are some NO_RESET_ALL variables for which RESET is okay, so
we can't just redefine the semantics of that flag. Instead introduce
a separate GUC_NO_RESET flag. Mark "seed", as well as the transaction
property GUCs, as GUC_NO_RESET.
We have to disallow GUC_ACTION_SAVE as well as straight RESET, because
otherwise a function having a "SET transaction_isolation" clause can
still break things: the end-of-function restore action is equivalent
to a RESET.
No back-patch, as it's conceivable that someone is doing something
this patch will forbid (like resetting one of these GUCs at transaction
start, or "CREATE FUNCTION ... SET transaction_read_only = 1") and not
running into problems with it today. Given how long we've had this
issue and not noticed, the side effects in non-assert builds can't be
too serious.
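A minimal sketch of a case that is now rejected:
BEGIN ISOLATION LEVEL SERIALIZABLE;
RESET transaction_isolation;  -- now raises an error instead of succeeding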
Per bug #17385 from Andrew Bille.
Masahiko Sawada
Discussion: https://postgr.es/m/17385-9ee529fb091f0ce5@postgresql.org
We weren't jumbling the merge action list, so wildly different commands
would be considered to use the same query ID. Add that, mention it in
the docs, and add some test lines.
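For example (hypothetical tables t and s), these two commands previously
computed the same query ID despite having different actions:
MERGE INTO t USING s ON t.id = s.id WHEN MATCHED THEN UPDATE SET v = s.v;
MERGE INTO t USING s ON t.id = s.id WHEN MATCHED THEN DELETE;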
Backpatch to 15.
Author: Tatsu <bt22nakamorit@oss.nttdata.com>
Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
Discussion: https://postgr.es/m/d87e391694db75a038abc3b2597828e8@oss.nttdata.com
The extended query protocol implementation I added in commit
acb7e4eb6b has bugs when used in pipeline mode. Rather than spend
more time trying to fix it, remove that code and make the function rely
on simple query protocol only, meaning it can no longer be used in
pipeline mode.
Users can easily change their applications to use PQsendQueryParams
instead. We leave PQsendQuery in place for Postgres 14, just in case
somebody is using it and has not hit the mentioned bugs; but we should
recommend that it not be used.
Backpatch to 15.
Per bug report from Gabriele Varrazzo.
Discussion: https://postgr.es/m/CA+mi_8ZGSQNmW6-mk_iSR4JZB_LJ4ww3suOF+1vGNs3MrLsv4g@mail.gmail.com
We previously thought that allowing such cases could confuse users when
they specify DROP TABLES IN SCHEMA, but that doesn't seem to be the case
based on discussion. This helps to lift the restriction during
ALTER TABLE ... SET SCHEMA which used to ensure that we couldn't end up
with a publication having both a schema and the same schema's table.
To allow this, we need to forbid having any schema on a publication if
column lists on a table are specified (and vice versa). This is because
otherwise we still need a restriction during ALTER TABLE ... SET SCHEMA to
forbid cases where it could lead to a publication having both a schema and
the same schema's table with column list.
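A sketch of the resulting rules (names hypothetical):
CREATE PUBLICATION pub1 FOR TABLES IN SCHEMA sch, TABLE sch.t;      -- now allowed
CREATE PUBLICATION pub2 FOR TABLES IN SCHEMA sch, TABLE sch.t (a);  -- rejected: column list plus schema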
Based on suggestions by Peter Eisentraut.
Author: Hou Zhijie and Vignesh C
Reviewed-By: Peter Smith, Amit Kapila
Backpatch-through: 15, where it was introduced
Discussion: https://postgr.es/m/2729c9e2-9aac-8cda-f2f4-34f2bcc18f4e@enterprisedb.com
The pg_dump and pg_dumpall man pages referred to app-psql-patterns
as appearing "below", which I suspect was copied-and-pasted from
equivalent text in psql-ref.sgml rather than being actually thought
through. At least to me, that phrasing means "later in this same
web page/section", which this link target is not. Drop the
misleading and unnecessary-in-any-case adjective.
This may be a bit too subtle, but removing that word from there makes
this clause no longer a perfect parallel of the GRANT variant "ALL
TABLES IN SCHEMA": indeed, for publications what we record is the schema
itself, not the tables therein, which means that any tables added to the
schema in the future are also published. This is completely different
to what GRANT does, which affects only the tables that exist when the
command is executed.
There isn't resounding support for this change, but there are a few
positive votes and no opposition. Because the time to 15 RC1 is very
short, let's get this out now.
Backpatch to 15.
Discussion: https://postgr.es/m/2729c9e2-9aac-8cda-f2f4-34f2bcc18f4e
Commit 5ef1eefd76, which added
archive_library, purged most mentions of archive_command from the
documentation. This is inappropriate, since archive_command is still
a feature in use and users will want to see information about it.
This restores all the removed mentions and rephrases things so that
archive_command and archive_library are presented as alternatives of
each other.
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Discussion: https://www.postgresql.org/message-id/9366d634-a917-85a9-4991-b2a4859edaf9@enterprisedb.com
The bounds hardcoded in compression.c since ffd5365 (minimum at 1 and
maximum at 22) do not match the reality of what zstd is able to
handle, these values being available via ZSTD_maxCLevel() and
ZSTD_minCLevel() at run-time. The maximum of 22 is actually correct
in recent versions, but the minimum was not, as the library can go down
to -131720 by design. This commit changes the code to use the run-time
values instead of hardcoded ones.
Zstd seems to assume that these bounds could change in the future, and
Postgres will be able to adapt automatically to such changes thanks to
what's being done in this commit.
Reported-by: Justin Pryzby
Discussion: https://postgr.es/m/20220922033716.GL31833@telsasoft.com
Backpatch-through: 15
Autoconf is showing its age, fewer and fewer contributors know how to wrangle
it. Recursive make has a lot of hard to resolve dependency issues and slow
incremental rebuilds. Our home-grown MSVC build system is hard to maintain for
developers not using Windows and runs tests serially. While these and other
issues could individually be addressed with incremental improvements, together
they seem best addressed by moving to a more modern build system.
After evaluating different build system choices, we chose to use meson, to a
good degree based on the adoption by other open source projects.
We decided that it's more realistic to commit a relatively early version of
the new build system and mature it in tree.
This commit adds an initial version of a meson based build system. It supports
building postgres on at least AIX, FreeBSD, Linux, macOS, NetBSD, OpenBSD,
Solaris and Windows (however, only gcc is supported on AIX and Solaris).
For Windows/MSVC postgres can now be built with ninja (faster,
particularly for incremental builds) and msbuild (supporting the Visual
Studio GUI, but building slower).
Several aspects (e.g. Windows rc file generation, PGXS compatibility, LLVM
bitcode generation, documentation adjustments) are done in subsequent commits
requiring further review. Other aspects (e.g. not installing test-only
extensions) are not yet addressed.
When building on Windows with msbuild, builds are slower when using a visual
studio version older than 2019, because those versions do not support
MultiToolTask, required by meson for intra-target parallelism.
The plan is to remove the MSVC specific build system in src/tools/msvc soon
after reaching feature parity. However, we're not planning to remove the
autoconf/make build system in the near future. Likely we're going to keep at
least the parts required for PGXS to keep working around until all supported
versions build with meson.
Some initial help for postgres developers is at
https://wiki.postgresql.org/wiki/Meson
With contributions from Thomas Munro, John Naylor, Stone Tickle and others.
Author: Andres Freund <andres@anarazel.de>
Author: Nazir Bilal Yavuz <byavuz81@gmail.com>
Author: Peter Eisentraut <peter@eisentraut.org>
Reviewed-By: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Discussion: https://postgr.es/m/20211012083721.hvixq4pnh2pixr3j@alap3.anarazel.de
The parameter controlling whether two-phase transactions can be decoded
was named "two_phase" in the documentation, while the procedure defines
it as "twophase".
Author: Florin Irion
Discussion: https://postgr.es/m/5eeabd10-1aff-ea61-f92d-9fa0d9a7e207@gmail.com
Backpatch-through: 14
Check in CREATE DATABASE and initdb that the selected encoding is
supported by ICU. Before, they would pass but users would later get
an error from the server when they tried to use the database.
Also document that initdb sets the encoding to UTF8 by default if the
ICU locale provider is chosen.
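A sketch of the earlier failure (names are illustrative; SQL_ASCII is
one encoding ICU cannot work with):
CREATE DATABASE icutest TEMPLATE template0 ENCODING 'SQL_ASCII'
    LOCALE_PROVIDER icu ICU_LOCALE 'en';
-- now fails here instead of when the database is first used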
Author: Marina Polyakova <m.polyakova@postgrespro.ru>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://www.postgresql.org/message-id/6dd6db0984d86a51b7255ba79f111971@postgrespro.ru
There was an excessive amount of structure, leading to a somewhat disorganized
presentation of the information. Remove a few tags and reorder
paragraphs to make the text flow more easily. Also, reword some of it
to be more concise.
The bit about column list combination is not modified, other than to
remove an uninteresting (and IMO confusing and wrong) paragraph; I
intend to deal with it differently afterwards.
Backpatch to 15.
Discussion: https://postgr.es/m/20220913121138.yn7ekkfysxzhkm2u@alvherre.pgsql
Referring to the WAL as just "log" invites confusion with the
postmaster log, so avoid doing that in docs and error messages.
Also shorten "WAL segment file" to just "WAL file" in various
places.
Bharath Rupireddy, reviewed by Nathan Bossart and Kyotaro Horiguchi
Discussion: https://postgr.es/m/CALj2ACUeXa8tDPaiTLexBDMZ7hgvaN+RTb957-cn5qwv9zf-MQ@mail.gmail.com
Before, each documentation target that built something from
postgres.sgml ran xmllint first to validate the input. Here, we
change it so that the validation only runs once and produces an output
file, and all the other targets build from that output file. This
avoids redundant work when building multiple documentation targets
(such as html and man).
Also, when we run xmllint, we can resolve entities (included files).
This helps with tools that don't support vpath builds, such as
dbtoepub.
All this also organizes the make targets a bit better for implementing
equivalent steps in meson.
Discussion: https://www.postgresql.org/message-id/e3ae16de-c9f9-f559-2d11-70b1342ae3d1@enterprisedb.com
In pg_receivewal, compressed output is only flushed on clean exits. The
reason to support SIGTERM as well as SIGINT (which is currently handled)
is that pg_receivewal might well be running as a daemon, and systemd's
default KillSignal is SIGTERM.
Since pg_recvlogical is also supposed to run as a daemon, teach it about
SIGTERM as well and update the documentation to match. While in there,
change pg_receivewal's time_to_stop to be sig_atomic_t like it is in
pg_recvlogical.
Author: Christoph Berg <myon@debian.org>
Reviewed-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/Yvo/5No5S0c4EFMj@msg.df7cb.de
The oldest vendor-shipped Perl in the buildfarm is 5.14.2, which is
the last version that Debian Wheezy shipped. That OS is EOL, but we
keep it running because there is no other convenient way to test certain
non-mainstream 32-bit platforms. There is no bugfix in the 5.14.2 release
that is required, and yet it's also not the latest minor release --
that would be 5.14.4. To clarify the situation, we have thus arranged the
buildfarm to test 5.14.0. That allows configure scripts and documentation
to state 5.14 without fine print.
The MSVC build didn't check the version, since our previous minimum 5.8.3
was considered too old to check for on Windows. We will need a check for
Windows sometime during the v16 cycle, but that could be rendered moot
by the impending Meson conversion, so it seems safe to just document
the requirement for now.
Reviewed by Tom Lane
Discussion: https://www.postgresql.org/message-id/20220902181553.ev4pgzhubhdkguuv@awork3.anarazel.de
PG_COMPRESSION_OPTION_LEVEL is removed from the compression
specification logic, and instead the compression level is always
assigned with each library's default if nothing is directly given. This
centralizes the checks on the compression methods supported by a given
build, and always assigns a default compression level when parsing a
compression specification. This results in complaining at an earlier
stage than previously when a build does not support a compression method,
i.e. when parsing the specification in the backend or the frontend rather
than when processing it. zstd, lz4 and zlib are able to handle the case
of a default value in the routines where they set up the compression
level, hence the backend or frontend code (pg_receivewal or
pg_basebackup) no longer needs to know what the default compression level
should be if nothing is specified: the specification parsing now assigns
it. It can also be enforced by passing
down a "level" set to the default value, that the backend will accept
(the replication protocol is for example able to handle a command like
BASE_BACKUP (COMPRESSION_DETAIL 'gzip:level=-1')).
This code simplification fixes an issue with pg_basebackup --gzip
introduced by ffd5365, where the tarball of the streamed WAL segments
would be created as of pg_wal.tar.gz with uncompressed contents, while
the intention is to compress the segments with gzip at a default level.
The origin of the confusion comes from the handling of the default
compression level of gzip (-1 or Z_DEFAULT_COMPRESSION) and the value of
0 was getting assigned, which is what walmethods.c would consider
as equivalent to no compression when streaming WAL segments with its tar
methods. Always assigning the compression level removes the confusion
of some code paths that considered a value of 0 set in a specification as
either no compression or a default compression level.
Note that 010_pg_basebackup.pl has to be adjusted to skip a few tests
where the shape of the compression detail string for client and
server-side compression was checked using gzip. This is a result of the
code simplification, as gzip specifications cannot be used if a build
does not support it.
Reported-by: Tom Lane
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/1400032.1662217889@sss.pgh.pa.us
Backpatch-through: 15
Locale options can be specified for initdb, createdb, and CREATE
DATABASE. In initdb, it has always been possible to specify --locale
and then some --lc-* option to override a category. CREATE DATABASE
and createdb didn't allow that, requiring either the all-categories
option or only per-category options. In
f2553d4306, this was changed in CREATE
DATABASE (perhaps by accident?) to be more like the initdb behavior,
but createdb still had the old behavior.
Now we change createdb to match the behavior of CREATE DATABASE and
initdb, and also update the documentation of CREATE DATABASE to match
the new behavior, which was not done in the above commit.
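A sketch of the now-uniform behavior (names are illustrative):
createdb --template=template0 --locale=en_US.UTF-8 --lc-collate=C mydb
-- equivalent to:
CREATE DATABASE mydb TEMPLATE template0 LOCALE 'en_US.UTF-8' LC_COLLATE 'C';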
Author: Marina Polyakova <m.polyakova@postgrespro.ru>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://www.postgresql.org/message-id/7c99c132dc9c0ac630e0127f032ac480@postgrespro.ru
pg_walinspect uses the datatype double (double precision floating-point
number) for WAL stats percentile calculations but exposes them via
float4 (single precision floating-point number), which is an unnecessary
loss of precision and confusing. Even though it's harmless that way,
let's use float8 (double precision floating-point number) to be in
sync with what pg_walinspect does internally and what it exposes to
the users. This seems to be the pattern used elsewhere in the code.
Reported-by: Peter Eisentraut
Author: Bharath Rupireddy
Reviewed-by: Peter Eisentraut
Discussion: https://www.postgresql.org/message-id/36ee692b-232f-0484-ce94-dc39d82021ad%40enterprisedb.com
Be more clear about when and how an extension-defined GUC comes to be
visible in pg_settings. (Move the para to the bottom of the page, too;
whoever thought this point was more important than the para about the
view being updatable had odd priorities IMNSHO.)
Back-patch to v15 where archive modules were added, since that seems
to have made this more of a sore spot than it was before.
Benoit Lobréau, Nathan Bossart
Discussion: https://postgr.es/m/CAPE8EZ7KHaXMHKwT=HOim23tDVKYA1PruRuTfeYdCrYWwPGhag@mail.gmail.com
Remove no-longer-accurate claim that Windows lacks home directories.
Clarify the text by more clearly distinguishing which statements
reflect hard-wired choices versus which ones reflect overridable
defaults. Update the examples of version-specific file names,
and make them track future version changes by using "&majorversion;"
and "&version;". (BTW, in devel and beta releases this method
correctly says that you can use strings like "16devel" and "15beta4"
as minor version identifiers.)
Back-patch to v15, but not further, with the thought that in older
releases the examples with three-part version numbers still had
some historical relevance. v15 will be the first major release after
the last 9.x branch went out of support.
Robert Treat and Tom Lane, reviewed by Julien Rouhaud
Discussion: https://postgr.es/m/CAJSLCQ07F-WCYYYOY8+dWhHcVeJ1Pb01cWc-c0Hu=M3EjKT2Eg@mail.gmail.com
When using the BSD UUID functions, contrib/uuid-ossp expects
uuid_create() to produce a version-1 UUID. FreeBSD still does so,
but in recent NetBSD releases that function produces a version-4
(random) UUID instead. That's not acceptable for our purposes:
if the user wanted v4 she would have asked for v4, not v1.
Hence, check the version digit and complain if it's not '1'.
Also drop the documentation's claim that the NetBSD implementation
is usable. It might be, depending on which OS version you're using,
but we're not going to get into that kind of detail.
(Maybe someday we should ditch all these external libraries
and just write our own UUID code, but today is not that day.)
Nazir Bilal Yavuz, with cosmetic adjustments and docs by me.
Backpatch to all supported versions.
Discussion: https://postgr.es/m/3848059.1661038772@sss.pgh.pa.us
Discussion: https://postgr.es/m/17358-89806e7420797025@postgresql.org
In commit 3d895bc846 I introduced a bogus semicolon mid-statement by
careless cut-n-paste; move it. This had already been reported by Justin
Pryzby.
Also, change the styling a bit by avoiding names in CamelCase. This is
more consistent with the style we use elsewhere.
Backpatch to 15.
Author: Vitaly Burovoy <vitaly.burovoy@gmail.com>
Reviewed-by: Vik Fearing <vik@postgresfriends.org>
Discussion: https://postgr.es/m/9afe5766-5a61-7860-598c-136867fad065@gmail.com
Discussion: https://postgr.es/m/20220819133016.GV26426@telsasoft.com
Since the retirement of some older buildfarm members, the oldest Bison
that gets regular testing is 2.3. macOS ships that version, and will
continue doing so for the foreseeable future because of Apple's policy
regarding GPLv3. While Mac users could use a package manager to install
a newer version, there is no compelling reason to force them to do so at
this time.
Reviewed by Andres Freund
Discussion: https://www.postgresql.org/message-id/1097762.1662145681@sss.pgh.pa.us
This commit raises a warning message for a combination of options
('copy_data = true' and 'origin = none') during CREATE/ALTER subscription
operations if the publication tables were also replicated from other
publishers.
During replication, we can skip the data from other origins as we have that
information in WAL, but that is not possible during the initial sync, so we
raise a warning if there is such a possibility.
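A sketch of a command that now draws the warning (connection string and
names hypothetical):
CREATE SUBSCRIPTION sub1 CONNECTION 'dbname=postgres port=9999'
PUBLICATION pub1 WITH (copy_data = true, origin = none);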
Author: Vignesh C
Reviewed-By: Peter Smith, Amit Kapila, Jonathan Katz, Shi yu, Wang wei
Discussion: https://www.postgresql.org/message-id/CALDaNm0gwjY_4HFxvvty01BOT01q_fJLKQ3pWP9=9orqubhjcQ@mail.gmail.com
Add a new logical replication section for "Column Lists" (analogous to the
Row Filters page). This explains how the feature can be used and the
caveats in it.
Author: Peter Smith
Reviewed-by: Shi yu, Vignesh C, Erik Rijkers, Amit Kapila
Backpatch-through: 15, where it was introduced
Discussion: https://postgr.es/m/CAHut+PvOuc9=_4TbASc5=VUqh16UWtFO3GzcKQK_5m1hrW3vqg@mail.gmail.com
Improve documentation regarding the limitations of unique and primary key
constraints on partitioned tables. The existing documentation didn't make
it clear that the constraint columns had to be present in the partition
key as bare columns. The reader could be led to believe that it was ok to
include the constraint columns as part of a function call's parameters or
as part of an expression. Additionally, the documentation didn't mention
anything about the fact that we disallow unique and primary key
constraints if the partition keys contain *any* function calls or
expressions, regardless of whether the constraint columns appear as columns
elsewhere in the partition key.
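As a sketch of the two cases being documented (tables hypothetical):
CREATE TABLE ok (a int, b int, PRIMARY KEY (a, b))
    PARTITION BY RANGE (a, b);        -- allowed: bare columns only
CREATE TABLE bad (a int, b int, PRIMARY KEY (a, b))
    PARTITION BY RANGE (a, (b + 1));  -- rejected: key contains an expression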
The confusion here was highlighted by a report on the general mailing list
by James Vanns.
Discussion: https://postgr.es/m/CAH7vdhNF0EdYZz3GLpgE3RSJLwWLhEk7A_fiKS9dPBT3Dz_3eA@mail.gmail.com
Discussion: https://postgr.es/m/CAApHDvoU-u9iTqKjteYRFfi+UNEk7dbSAcyxEQD==vZt9B1KnA@mail.gmail.com
Reviewed-by: Erik Rijkers
Backpatch-through: 11
It was not strictly correct to say that a column list must always include
replica identity columns, because that is true only for updates and
deletes.
Author: Peter Smith
Reviewed-by: Vignesh C, Amit Kapila
Backpatch-through: 15, where it was introduced
Discussion: https://postgr.es/m/CAHut+PvOuc9=_4TbASc5=VUqh16UWtFO3GzcKQK_5m1hrW3vqg@mail.gmail.com
This reverts the following and makes some associated cleanups:
commit f79b803dc: Common SQL/JSON clauses
commit f4fb45d15: SQL/JSON constructors
commit 5f0adec25: Make STRING an unreserved_keyword.
commit 33a377608: IS JSON predicate
commit 1a36bc9db: SQL/JSON query functions
commit 606948b05: SQL JSON functions
commit 49082c2cc: RETURNING clause for JSON() and JSON_SCALAR()
commit 4e34747c8: JSON_TABLE
commit fadb48b00: PLAN clauses for JSON_TABLE
commit 2ef6f11b0: Reduce running time of jsonb_sqljson test
commit 14d3f24fa: Further improve jsonb_sqljson parallel test
commit a6baa4bad: Documentation for SQL/JSON features
commit b46bcf7a4: Improve readability of SQL/JSON documentation.
commit 112fdb352: Fix finalization for json_objectagg and friends
commit fcdb35c32: Fix transformJsonBehavior
commit 4cd8717af: Improve a couple of sql/json error messages
commit f7a605f63: Small cleanups in SQL/JSON code
commit 9c3d25e17: Fix JSON_OBJECTAGG uniquefying bug
commit a79153b7a: Claim SQL standard compliance for SQL/JSON features
commit a1e7616d6: Rework SQL/JSON documentation
commit 8d9f9634e: Fix errors in copyfuncs/equalfuncs support for JSON node types.
commit 3c633f32b: Only allow returning string types or bytea from json_serialize
commit 67b26703b: expression eval: Fix EEOP_JSON_CONSTRUCTOR and EEOP_JSONEXPR size.
The release notes are also adjusted.
Backpatch to release 15.
Discussion: https://postgr.es/m/40d2c882-bcac-19a9-754d-4299e1d87ac7@postgresql.org
SUSv3 <netinet/in.h> defines struct sockaddr_in6, and all targeted Unix
systems have it. Windows has it in <ws2ipdef.h>. Remove the configure
probe, the macro and a small amount of dead code.
Also remove a mention of IPv6-less builds from the documentation, since
there aren't any.
This is similar to commits f5580882 and 077bf2f2 for Unix sockets. Even
though AF_INET6 is an "optional" component of SUSv3, there are no known
modern operating systems without it, and it seems even less likely to be
omitted from future systems than AF_UNIX.
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CA+hUKGKErNfhmvb_H0UprEmp4LPzGN06yR2_0tYikjzB-2ECMw@mail.gmail.com
The GRANT statement can now specify WITH INHERIT TRUE or WITH
INHERIT FALSE to control whether the member inherits the granted
role's permissions. For symmetry, you can now likewise write
WITH ADMIN TRUE or WITH ADMIN FALSE to turn ADMIN OPTION on or off.
If a GRANT does not specify WITH INHERIT, the behavior is based on
whether the member role is marked INHERIT or NOINHERIT. This means
that if all roles are marked INHERIT or NOINHERIT before any role
grants are performed, the behavior is identical to what we had before;
otherwise, it's different, because ALTER ROLE [NO]INHERIT now only
changes the default behavior of future grants, and has no effect on
existing ones.
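A sketch of the new syntax (role names hypothetical):
GRANT staff TO alice WITH INHERIT TRUE;   -- alice inherits staff's privileges
GRANT staff TO bob WITH INHERIT FALSE;    -- bob must SET ROLE staff to use them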
Patch by me. Reviewed and tested by Nathan Bossart and Tushar Ahuja,
with design-level comments from various others.
Discussion: http://postgr.es/m/CA+Tgmoa5Sf4PiWrfxA=sGzDKg0Ojo3dADw=wAHOhR9dggV=RmQ@mail.gmail.com
On fast machines, it's possible for applications such as pgbench
to issue connection requests so quickly that the postmaster's
listen queue overflows in the kernel, resulting in unexpected
failures (with not-very-helpful error messages). Most modern OSes
allow the queue size to be increased, so document how to do that.
Per report from Kevin McKibbin.
Discussion: https://postgr.es/m/CADc_NKg2d+oZY9mg4DdQdoUcGzN2kOYXBu-3--RW_hEe0tUV=g@mail.gmail.com
sysctl is more portable than Linux's /proc/sys file tree, and
often easier to use too. That's why most of our docs refer to
sysctl when talking about how to adjust kernel parameters.
Bring the few stragglers into line.
Discussion: https://postgr.es/m/361175.1661187463@sss.pgh.pa.us
Previously, membership of role A in role B could be recorded in the
catalog tables only once. This meant that a new grant of role A to
role B would overwrite the previous grant. For other object types, a
new grant of permission on an object - in this case role A - exists
alongside the existing grant, provided that the grantor is different.
Either grant can be revoked independently of the other, and
permissions remain so long as at least one grant remains. Make role
grants work similarly.
Previously, when granting membership in a role, the superuser could
specify any role whatsoever as the grantor, but for other object types,
the grantor of record must be either the owner of the object, or a
role that currently has privileges to perform a similar GRANT.
Implement the same scheme for role grants, treating the bootstrap
superuser as the role owner since roles do not have owners. This means
that attempting to revoke a grant, or admin option on a grant, can now
fail if there are dependent privileges, and that CASCADE can be used
to revoke these. It also means that you can't grant ADMIN OPTION on
a role back to a user who granted it directly or indirectly to you,
similar to how you can't give WITH GRANT OPTION on a privilege back
to a role which granted it directly or indirectly to you.
Previously, only the superuser could specify GRANTED BY with a user
other than the current user. Relax that rule to allow the grantor
to be any role whose privileges the current user possesses. This
doesn't improve compatibility with what we do for other object types,
where support for GRANTED BY is entirely vestigial, but it makes this
feature more usable and seems to make sense to change at the same time
we're changing related behaviors.
Along the way, fix "ALTER GROUP group_name ADD USER user_name" to
require the same privileges as "GRANT group_name TO user_name".
Previously, CREATEROLE privileges were sufficient for either, but
only the former form was permissible with ADMIN OPTION on the role.
Now, either CREATEROLE or ADMIN OPTION on the role suffices for
either spelling.
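A sketch of how duplicate memberships now behave (roles hypothetical):
GRANT staff TO carol GRANTED BY alice;
GRANT staff TO carol GRANTED BY bob;       -- no longer overwrites alice's grant
REVOKE staff FROM carol GRANTED BY alice;  -- bob's grant keeps the membership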
Patch by me, reviewed by Stephen Frost.
Discussion: http://postgr.es/m/CA+TgmoaFr-RZeQ+WoQ5nKPv97oT9+aDgK_a5+qWHSgbDsMp1Vg@mail.gmail.com
Perform some minor wordsmithing on two sentences in the COPY documentation
to make them clearer.
While there, also wrap in <literal> a few occurrences of CSV that
were missing it.
Reported-by: Eric Mutta <eric.mutta@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/166104548566.654.11680826843612576896@wrigleys.postgresql.org
This commit adds or fixes markups used in a couple of places in the docs
(for <command>, <systemitem> and <literal>). While on it, this
clarifies some of the documentation added recently for archive modules
regarding archive_command, which is still used as the default choice if
no external module is defined (though an archive module could as well
use an archive_command).
Author: Maxim Yablokov
Discussion: https://postgr.es/m/b47ec4e8-6f6a-2aba-038e-d5db150b245e@postgrespro.ru
Backpatch-through: 15
Previously, "GRANT foo TO bar" or "GRANT foo TO bar GRANTED BY baz"
would record the OID of the grantor in pg_auth_members.grantor, but
that role could later be dropped without modifying or removing the
pg_auth_members record. That's not great, because we typically try
to avoid dangling references in catalog data.
Now, a role grant depends on the grantor, and the grantor can't be
dropped without removing the grant or changing the grantor. "DROP
OWNED BY" will remove the grant, just as it does for other kinds of
privileges. "REASSIGN OWNED BY" will not, again just like what we do
in other cases involving privileges.
pg_auth_members now has an OID column, because that is needed in order
for dependencies to work. It also now has an index on the grantor
column, because otherwise dropping a role would require a sequential
scan of the entire table to see whether the role's OID is in use as
a grantor. That probably wouldn't be too large a problem in practice,
but it seems better to have an index just in case.
A follow-on patch is planned with the goal of more thoroughly
rationalizing the behavior of role grants. This patch is just trying
to do enough to make sure that the data we store in the catalogs is at
some basic level valid.
Patch by me, reviewed by Stephen Frost
Discussion: http://postgr.es/m/CA+TgmoaFr-RZeQ+WoQ5nKPv97oT9+aDgK_a5+qWHSgbDsMp1Vg@mail.gmail.com
The table column that stores this is of type oid, but is actually limited
to uint16 and has a different path for creating new values. Some of
the documentation already referred to it as an ID, so let's standardize
on that.
While at it, most format strings already use %u, so for consistency
change the remaining stragglers that use %d.
Per suggestions from Tom Lane and Justin Pryzby
Discussion: https://www.postgresql.org/message-id/3437166.1659620465%40sss.pgh.pa.us
Backpatch to v15
While PO files can be edited in any text editor, specialized tools for
translation editing can be quite helpful with automating tasks etc. Add
a small note about PO editors to encourage new translators to research
which tool works best for them.
Reviewed-by: Bruce Momjian <bruce@momjian.us>
Discussion: https://postgr.es/m/163490116698.684.10398197970578456928@wrigleys.postgresql.org
While almost all occurrences of "case-insensitive{ly}" were spelled with
a dash, a few were using "case insensitive{ly}" with a space instead. Fix
by changing these to use a dash to be consistent.
Discussion: https://postgr.es/m/7657EDEE-5EE2-4AAB-BA95-47B4F71653E1@yesql.se
This event can happen when using SET ACCESS METHOD, as the data files of
the materialized view need a full refresh, but this command tag was not
updated to reflect that. The documentation is updated to track this
behavior.
Author: Onder Kalaci
Discussion: https://postgr.es/m/CACawEhXwHN3X34FiwoYG8vXR-oyUdrp7qcfRWSzS+NPahS5gSw@mail.gmail.com
Backpatch-through: 15
The unit of size and allocated_size was not documented. Specifying the
unit is in line with how many other (but not all) system view attributes
are documented, so fix this by adding the unit, which is bytes.
Author: Euler Taveira <euler@eulerto.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reported-by: coleman.rik@gmail.com
Discussion: https://postgr.es/m/166033703458.653.1583077816076994614@wrigleys.postgresql.org
Since HAVE_UNIX_SOCKETS is now defined unconditionally, remove the macro
and drop a small amount of dead code.
The last known systems not to have them (as far as I know at least) were
QNX, which we de-supported years ago, and Windows, which now has them.
If a new OS ever shows up with the POSIX sockets API but without working
AF_UNIX, it'll presumably still be able to compile the code, and fail at
runtime with an unsupported address family error. We might want to
consider adding a HINT that you should turn off the option to use it if
your network stack doesn't support it at that point, but it doesn't seem
worth making the relevant code conditional at compile time.
Also adjust a couple of places in the docs and comments that referred to
builds without Unix-domain sockets, since there aren't any. Windows
still gets a special mention in those places, though, because we don't
try to use them by default there yet.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Discussion: https://postgr.es/m/CA%2BhUKG%2BL_3brvh%3D8e0BW_VfX9h7MtwgN%3DnFHP5o7X2oZucY9dg%40mail.gmail.com
The messages were using 'two-phase transaction' at some places and
'prepared transaction' at other places. Make them consistently use
'prepared transaction'.
Reported-by: Ekaterina Kiryanova
Author: Peter Smith
Reviewed by: Amit Kapila
Backpatch-through: 15
Discussion: https://postgr.es/m/745414e7-efb2-a6ae-5b83-fcbdf35aabc8@postgrespro.ru
This commit addresses a few things around GUCs:
- The TCP-related parameters (the four tcp_keepalives_* and
client_connection_check_interval) are listed in postgresql.conf.sample
in a subsection called "TCP settings" of "CONNECTIONS AND
AUTHENTICATION", but they did not have their own group name in guc.c.
- enable_group_by_reordering, stats_fetch_consistency and
recovery_prefetch had an inconsistent description, missing a dot at the
end.
- In postgresql.conf.sample, "Process title" should not have a section
of its own, but it should be a subsection of "REPORTING AND LOGGING".
This impacts the contents of pg_settings, which could be seen as a
compatibility break, so no backpatch is done. This is similar to the
cleanup done in a55a984.
Author: Shinya Kato
Discussion: https://postgr.es/m/5e0c9c608624eafbba910c344282cb14@oss.nttdata.com
Previously, if an extension script did CREATE OR REPLACE and there was
an existing object not belonging to the extension, it would overwrite
the object and adopt it into the extension. This is problematic, first
because the overwrite is probably unintentional, and second because we
didn't change the object's ownership. Thus a hostile user could create
an object in advance of an expected CREATE EXTENSION command, and would
then have ownership rights on an extension object, which could be
modified for trojan-horse-type attacks.
Hence, forbid CREATE OR REPLACE of an existing object unless it already
belongs to the extension. (Note that we've always forbidden replacing
an object that belongs to some other extension; only the behavior for
previously-free-standing objects changes here.)
For the same reason, also fail CREATE IF NOT EXISTS when there is
an existing object that doesn't belong to the extension.
Our thanks to Sven Klemm for reporting this problem.
Security: CVE-2022-2625
symlink() and readlink() are in SUSv2 and all targeted Unix systems have
them. We have partial emulation on Windows. Code that raised runtime
errors on systems without it has been dead for years, so we can remove
that and also references to such systems in the documentation.
Define HAVE_READLINK and HAVE_SYMLINK macros on Unix. Our Windows
replacement functions based on junction points can't be used for
relative paths or for non-directories, so the macros can be used to
check for full symlink support. The places that deal with tablespaces
can just use symlink functions without checking the macros. (If they
did check the macros, they'd need to provide an #else branch with a
runtime or compile time error, and it'd be dead code.)
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CA+hUKGJ3LHeP9w5Fgzdr4G8AnEtJ=z=p6hGDEm4qYGEUX5B6fQ@mail.gmail.com
Previously, a byte with the high bit set was just transmitted
as-is by charin() and charout(). This is problematic if the
database encoding is multibyte, because the result of charout()
won't be validly encoded, which breaks various stuff that
expects all text strings to be validly encoded. We've
previously decided to enforce encoding validity rather than try
to individually harden each place that might have a problem with
such strings, so it's time to do something about "char".
To fix, represent high-bit-set characters as \ooo (backslash
and three octal digits), following the ancient "escape" format
for bytea. charin() will continue to accept the old way as well,
though that is only reachable in single-byte encodings.
Add some test cases just so there is coverage for this code.
We'll otherwise leave this question undocumented as it was before,
because we don't really want to encourage end-user use of "char".
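A sketch of the new output in a single-byte encoding (the value is
illustrative):
SELECT '\377'::"char";  -- now prints \377 rather than a raw high-bit-set byte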
For the moment, back-patch into v15 so that this change appears
in 15beta3. If there's not great pushback we should consider
absorbing this change into the older branches.
Discussion: https://postgr.es/m/2318797.1638558730@sss.pgh.pa.us
These two new options can be used to either process all tables in
specific schemas or to skip processing all tables in specific
schemas. This change also refactors the handling of invalid
combinations of command-line options to a new helper function.
Author: Gilles Darold
Reviewed-by: Justin Pryzby, Nathan Bossart and Michael Paquier.
Discussion: https://postgr.es/m/929fbf3c-24b8-d454-811f-1d5898ab3e91%40migops.com
There wasn't an especially nice way to read all of a file while
passing missing_ok = true. Add an additional overloaded variant
to support that use-case.
While here, refactor the C code to avoid a rats-nest of PG_NARGS
checks, instead handling the argument collection in the outer
wrapper functions. It's a bit longer this way, but far more
straightforward.
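A sketch of the new variant (file name hypothetical):
SELECT pg_read_file('nonexistent.conf', true);  -- whole file, NULL if absent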
(Upon looking at the code coverage report for genfile.c, I was
impelled to also add a test case for pg_stat_file() -- tgl)
Kyotaro Horiguchi
Discussion: https://postgr.es/m/20220607.160520.1984541900138970018.horikyota.ntt@gmail.com
... when it applies to partitioned relations. This is almost the
opposite of 0c06534bd6, which removed references to "partition" in
favour of "child".
Author: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: Ashutosh Bapat <ashutosh.bapat@enterprisedb.com>
Reviewed-by: Amit Langote <amitlangote09@gmail.com>
Discussion: https://postgr.es/m/20220525013248.GO19626@telsasoft.com
After commit 089480c07, it's necessary for background worker entry
points to be marked PGDLLEXPORT, else they aren't findable by
LookupBackgroundWorkerFunction(). Since pg_prewarm lacks any
regression tests, it's not surprising its worker entry points were
overlooked. (A quick search turned up no other such oversights.)
I added some documentation pointing out the need for this, too.
Robins Tharakan and Tom Lane
Discussion: https://postgr.es/m/CAEP4nAzndnQv3-1QKb=D-hLoK3Rko12HHMFHHtdj2GQAUXO3gw@mail.gmail.com
We have a few commands that "can't run in a transaction block",
meaning that if they complete their processing but then we fail
to COMMIT, we'll be left with inconsistent on-disk state.
However, the existing defenses for this are only watertight for
simple query protocol. In extended protocol, we didn't commit
until receiving a Sync message. Since the client is allowed to
issue another command instead of Sync, we're in trouble if that
command fails or is an explicit ROLLBACK. In any case, sitting
in an inconsistent state while waiting for a client message
that might not come seems pretty risky.
This case wasn't reachable via libpq before we introduced pipeline
mode, but it's always been an intended aspect of extended query
protocol, and there are likely other clients that could already
reach it.
To fix, set a flag in PreventInTransactionBlock that tells
exec_execute_message to force an immediate commit. This seems
to be the approach that does least damage to existing working
cases while still preventing the undesirable outcomes.
While here, add some documentation to protocol.sgml that explicitly
says how to use pipelining. That's latent in the existing docs if
you know what to look for, but it's better to spell it out; and it
provides a place to document this new behavior.
Per bug #17434 from Yugo Nagata. It's been wrong for ages,
so back-patch to all supported branches.
Discussion: https://postgr.es/m/17434-d9f7a064ce2a88a3@postgresql.org
This addresses a couple of bugs in the REINDEX grammar, introduced by
83011ce:
- A name was never specified for DATABASE/SYSTEM, even if the query
included one. This caused such REINDEX queries to always work with any
object name, but we should complain if the object name specified does
not match the name of the database we are connected to. A test is added
for this case in the main regression test suite, provided by Álvaro.
- REINDEX SYSTEM CONCURRENTLY [name] was getting rejected in the
parser. Concurrent rebuilds are not supported for catalogs but the
error provided at execution time is more helpful for the user, and
allowing this flavor results in a simplification of the parsing logic.
- REINDEX DATABASE CONCURRENTLY was rebuilding the index in a
non-concurrent way, as the option was not being appended correctly in
the list of DefElems in ReindexStmt (REINDEX (CONCURRENTLY) DATABASE was
working fine). A test is added in the TAP tests of reindexdb for this
case, where we already have a REINDEX DATABASE CONCURRENTLY query
running on a small-ish instance. This relies on the work done in
2cbc3c1 for SYSTEM, but here we check if the OIDs of the index relations
match or not after the concurrent rebuild. Note that in order to get
this part to work, I had to tweak the tests so that the index OIDs and
names are saved separately. This change does not affect the reliability
or the coverage of the existing tests.
While on it, I have implemented a tweak in the grammar to reduce the
parsing by one branch, simplifying things even more.
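A sketch of the repaired behaviors (database name hypothetical):
REINDEX DATABASE mydb;               -- name, if given, must match the current database
REINDEX SYSTEM CONCURRENTLY mydb;    -- now parses; still fails at execution for catalogs
REINDEX DATABASE CONCURRENTLY mydb;  -- now actually rebuilds indexes concurrently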
Author: Michael Paquier, Álvaro Herrera
Discussion: https://postgr.es/m/YttqI6O64wDxGn0K@paquier.xyz
The setting controls the maximum length of the header line in expanded
format output. Possible settings are full, column, page, or an integer.
The default is full, the current behaviour, in which case the header
line is the length of the widest line of output. column causes the
header to be truncated to the width of the first column, page causes it
to be truncated to the width of the terminal page, and an integer causes
it to be truncated to that value. If the full value is less than the
page or integer value, no truncation occurs. If given without an
argument, this option prints its current setting.
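A sketch, assuming the setting is exposed as a \pset option named
xheader_width:
\pset xheader_width column
\pset xheader_width 40
\pset xheader_width   -- prints the current setting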
Platon Pronko, somewhat modified by me.
Discussion: https://postgr.es/m/f03d38a3-db96-a56e-d1bc-dbbc80bbde4d@gmail.com
We didn't explicitly say that random() uses a randomly-chosen seed
if you haven't called setseed(). Do so.
Also, remove ref/set.sgml's no-longer-accurate (and never very
relevant) statement that the seed value is multiplied by 2^31-1.
Back-patch to v12 where set.sgml's claim stopped being true.
The claim that we use a source of random bits as seed was debatable
before 4203842a1, too, so v12 seems like a good place to stop.
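For instance:
SELECT setseed(0.42);
SELECT random();  -- reproducible for a given seed; otherwise the seed is chosen randomly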
Per question from Carl Sopchak.
Discussion: https://postgr.es/m/f37bb937-9d99-08f0-4de7-80c91a3cfc2e@sopchak.me
We've long held the minimum at 3.80, but that's required more than
one workaround. Commit 0f39b70a6 broke it again, because it turns
out that exporting a target-specific variable didn't work in 3.80.
Considering that 3.81 is now old enough to get a driver's license,
and that the only remaining buildfarm member testing 3.80 (prairiedog)
is likely to be retired soon, let's just stop supporting 3.80.
Adjust docs and Makefile.global's minimum-version check to match.
There are a couple of comments in the Makefiles suggesting that
random things could be done differently after we desupport 3.80,
but I couldn't get excited about changing any of them right now.
Back-patch to v15, as 0f39b70a6 was.
Discussion: https://postgr.es/m/20220720172321.GL12702@telsasoft.com
This allows users to omit the statistics name in a CREATE STATISTICS
command, letting the system auto-generate a sensible, unique name and
putting the statistics object in the same schema as the table.
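A sketch (table and columns hypothetical):
CREATE STATISTICS ON a, b FROM t;  -- name omitted; the system picks one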
Simon Riggs, reviewed by Matthias van de Meent.
Discussion: https://postgr.es/m/CANbhV-FGD2d_C3zFTfT2aRfX_TaPSgOeKES58RLZx5XzQp5NhA@mail.gmail.com
This patch adds a new SUBSCRIPTION parameter "origin". It specifies
whether the subscription will request the publisher to only send changes
that don't have an origin or send changes regardless of origin. Setting it
to "none" means that the subscription will request the publisher to only
send changes that have no origin associated. Setting it to "any" means
that the publisher sends changes regardless of their origin. The default
is "any".
Usage:
CREATE SUBSCRIPTION sub1 CONNECTION 'dbname=postgres port=9999'
PUBLICATION pub1 WITH (origin = none);
This can be used to avoid loops (infinite replication of the same data)
among replication nodes.
This feature allows filtering only the replication data originating from
WAL, but for the initial sync (initial copy of table data) we don't have
such a facility, as we can only distinguish the data based on origin from
WAL. As
a follow-up patch, we are planning to forbid the initial sync if the
origin is specified as none and we notice that the publication tables were
also replicated from other publishers to avoid duplicate data or loops.
We forbid creating origins with the names 'none' and 'any' to avoid
confusion with the options of the same name.
Author: Vignesh C, Amit Kapila
Reviewed-By: Peter Smith, Amit Kapila, Dilip Kumar, Shi yu, Ashutosh Bapat, Hayato Kuroda
Discussion: https://postgr.es/m/CALDaNm0gwjY_4HFxvvty01BOT01q_fJLKQ3pWP9=9orqubhjcQ@mail.gmail.com
In the docs, the GUC flags that pg_settings_get_flags() reported were
listed using <simplelist>. But the list was treated as separate lines
in the existing function table and didn't look good. For a better presentation,
this commit separates the list from the table entry for
pg_settings_get_flags() and adds the table for it at the bottom of
the existing function table.
Author: Fujii Masao
Reviewed-by: Alvaro Herrera, Michael Paquier
Discussion: https://postgr.es/m/f093edf9-6e5a-b119-ee50-6a2c97c79ee8@oss.nttdata.com
Upcoming custom Table Access Methods (TableAMs) need benchmarking. Although
pgbench doesn't have an explicit option for specifying a TableAM, one can
specify it using the PGOPTIONS environment variable. The present commit
documents this way of specifying a TableAM for pgbench.
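A sketch of that usage (the access method name is hypothetical):
PGOPTIONS='-c default_table_access_method=my_tableam' pgbench -i -s 10 benchdb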
Discussion: https://postgr.es/m/CAC77N6ih%3DLbhZQXV76grEsaVQkBL464Y2Foqq9o%3Df4UBfEOfEQ%40mail.gmail.com
Author: Michel Pelletier, Alexander Korotkov
Reviewed-by: Justin Pryzby, Mason Sharp, Michael Paquier
This allows aliases for sub-SELECTs and VALUES clauses in the FROM
clause to be omitted.
This is an extension of the SQL standard, supported by some other
database systems, and so eases the transition from such systems, as
well as removing the minor inconvenience caused by requiring these
aliases.
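For example, these are now accepted:
SELECT * FROM (SELECT 1 AS a, 2 AS b);       -- sub-SELECT without an alias
SELECT * FROM (VALUES (1), (2)) ORDER BY 1;  -- VALUES without an alias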
Patch by me, reviewed by Tom Lane.
Discussion: https://postgr.es/m/CAEZATCUCGCf82=hxd9N5n6xGHPyYpQnxW8HneeH+uP7yNALkWA@mail.gmail.com
Windows 10 gained support for flushing NTFS files with fdatasync()
semantics. The main advantage over open_datasync (in Windows API terms
FILE_FLAG_WRITE_THROUGH) is that the latter does not flush SATA drive
caches. The default setting is not changed, so users have to opt in to
this.
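A sketch of opting in (Windows 10 or newer; the default is unchanged):
ALTER SYSTEM SET wal_sync_method = fdatasync;
SELECT pg_reload_conf();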
Discussion: https://postgr.es/m/CA%2BhUKGJZJVO%3DiX%2Beb-PXi2_XS9ZRqnn_4URh0NUQOwt6-_51xQ%40mail.gmail.com
Per discussion, this commit includes a couple of changes to these two
flavors of REINDEX:
* The grammar is changed to make the name of the object optional, hence
one can rebuild all the indexes of the wanted area by specifying only
"REINDEX DATABASE;" or "REINDEX SYSTEM;". Previously, the object name
was mandatory and had to match the name of the database on which the
command is issued.
* REINDEX DATABASE is changed to ignore catalogs, making this task only
possible with REINDEX SYSTEM. This is a change in historical behavior,
as there was previously no way to work only on the indexes of a database
without touching the catalogs. We have discussed more approaches here,
like the addition
of an option to skip the catalogs without changing the original
behavior, but concluded that what we have here is for the best.
This builds on top of the TAP tests introduced in 5fb5b6c, showing the
change in behavior for REINDEX SYSTEM. reindexdb is updated so that we
do not issue an extra REINDEX SYSTEM when working on a database in the
non-concurrent case, something that was confusing when --concurrently
was introduced; this also simplifies the code.
Author: Simon Riggs
Reviewed-by: Ashutosh Bapat, Bernd Helmle, Álvaro Herrera, Cary Huang,
Michael Paquier
Discussion: https://postgr.es/m/CANbhV-H=NH6Om4-X6cRjDWfH_Mu1usqwkuYVp-hwdB_PSHWRfg@mail.gmail.com