Based on comments from Peter Eisentraut.
* Document CREATE DATABASE ... BUILTIN_LOCALE.
* Determine required encoding based on locale name for CREATE
COLLATION. Use -1 for "C" (requires catversion bump).
* initdb output fixups.
* Make ctype_is_c a constant true for now.
* Fixups to ICU 010_create_database.pl test.
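For reference, a minimal sketch of the CREATE DATABASE syntax being
documented above, assuming the BUILTIN_LOCALE option spelling (database
name is arbitrary):
    -- Hedged sketch: a database using the builtin locale provider.
    CREATE DATABASE regression_db
        TEMPLATE template0
        LOCALE_PROVIDER builtin
        BUILTIN_LOCALE 'C';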
Discussion: https://postgr.es/m/4135cf11-206d-40ed-96c0-9363c1232379@eisentraut.org
Commit 61461a300c introduced new functions to libpq for cancelling
queries. This replaces the use of the old ones in parts of the
codebase with the newer ones. It specifically leaves out changes to
psql and pgbench, as those would need a much larger refactor to call
the new functions, which are not signal-safe; it also leaves out
postgres_fdw, because the original code there is not clear to me
(Álvaro) and not fully tested.
Author: Jelte Fennema-Nio <postgres@jeltef.nl>
Discussion: https://postgr.es/m/CAGECzQT_VgOWWENUqvUV9xQmbaCyXjtRRAYO8W07oqashk_N+g@mail.gmail.com
I disabled it in cc6e64afda because it was unstable; hopefully the
changes here make it more stable. (Jelte proposed to submit the queries
using simple query protocol, but I chose to instead make the monitor
connection wait until the PgSleep wait event is seen. This is probably
not a terribly meaningful increase in coverage ...)
Discussion: https://postgr.es/m/CAGECzQRQh_5tSy+5cndgv08eNJ2O0Zpwn2YddJtSsmC=Wpy1BQ@mail.gmail.com
Introduce new postmaster_child_launch() function that deals with the
differences in EXEC_BACKEND mode.
Refactor the mechanism of passing information from the parent to child
process. Instead of using different command-line arguments when
launching the child process in EXEC_BACKEND mode, pass a
variable-length blob of startup data along with all the global
variables. The contents of that blob depend on the kind of child
process being launched. In !EXEC_BACKEND mode, we use the same blob,
but it's simply inherited from the parent by the child process.
Reviewed-by: Tristan Partin, Andres Freund
Discussion: https://www.postgresql.org/message-id/7a59b073-5b5b-151e-7ed3-8b01ff7ce9ef@iki.fi
This just moves the functions, with no other changes, to make the next
commits smaller and easier to review. The moved functions are related
to launching postmaster child processes in EXEC_BACKEND mode.
Reviewed-by: Tristan Partin, Andres Freund
Discussion: https://www.postgresql.org/message-id/7a59b073-5b5b-151e-7ed3-8b01ff7ce9ef@iki.fi
The next commit will move the internal_forkexec() function to a
different source file, but it makes sense to keep all the code related
to the win32 waitpid() emulation in postmaster.c. Split it off to a
separate function now, to make the next commit more mechanical.
Reviewed-by: Tristan Partin, Andres Freund
Discussion: https://www.postgresql.org/message-id/7a59b073-5b5b-151e-7ed3-8b01ff7ce9ef@iki.fi
This routine was removed in 39969e2a1e along with the exclusive backup
mode, but there were still references to it.
Issue noticed while working on 071e3ad59d.
Since commit 012460ee93, some compilers have been warning that a
couple of variables may be used uninitialized. There doesn't
appear to be any actual risk, so let's just initialize these
variables to 0 to silence the compiler warnings.
Discussion: https://postgr.es/m/20240317192927.GA3978212%40nathanxps13
Allocate memory for the error message inside memory owned by the
JsonLexContext, moving the responsibility for freeing it away from the
caller. This means that we can partially revert b44669b2ca, as this is
now safe to use in FRONTEND code. The motivation for this comes from
the OAuth and incremental JSON patchsets, but it also adds value on
its own.
Author: Jacob Champion <jacob.champion@enterprisedb.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/CAOYmi+mWdTd6ujtyF7MsvXvk7ToLRVG_tYAcaGbQLvf=N4KrQw@mail.gmail.com
This allows a RETURNING clause to be appended to a MERGE query, to
return values based on each row inserted, updated, or deleted. As with
plain INSERT, UPDATE, and DELETE commands, the returned values are
based on the new contents of the target table for INSERT and UPDATE
actions, and on its old contents for DELETE actions. Values from the
source relation may also be returned.
As with INSERT/UPDATE/DELETE, the output of MERGE ... RETURNING may be
used as the source relation for other operations such as WITH queries
and COPY commands.
Additionally, a special function merge_action() is provided, which
returns 'INSERT', 'UPDATE', or 'DELETE', depending on the action
executed for each row. The merge_action() function can be used
anywhere in the RETURNING list, including in arbitrary expressions and
subqueries, but it is an error to use it anywhere outside of a MERGE
query's RETURNING list.
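For illustration, a minimal sketch of the new syntax (table and column
names are hypothetical):
    -- Hedged sketch of MERGE ... RETURNING with merge_action().
    MERGE INTO target t
        USING source s ON t.id = s.id
        WHEN MATCHED THEN UPDATE SET val = s.val
        WHEN NOT MATCHED THEN INSERT (id, val) VALUES (s.id, s.val)
        RETURNING merge_action(), t.id, t.val;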
Dean Rasheed, reviewed by Isaac Morland, Vik Fearing, Alvaro Herrera,
Gurjeet Singh, Jian He, Jeff Davis, Merlin Moncure, Peter Eisentraut,
and Wolfgang Walther.
Discussion: http://postgr.es/m/CAEZATCWePEGQR5LBn-vD6SfeLZafzEm2Qy_L_Oky2=qw2w3Pzg@mail.gmail.com
This allows setting attstattarget when a relation is created.
We make use of this by having index_concurrently_create_copy() copy
over the attstattarget values when the new index is created, instead
of having index_concurrently_swap() fix it up later.
Reviewed-by: Tomas Vondra <tomas.vondra@enterprisedb.com>
Discussion: https://www.postgresql.org/message-id/flat/4da8d211-d54d-44b9-9847-f2a9f1184c76@eisentraut.org
DDL code uses tuple descriptors to pass around pg_attribute values
during table and index creation. But tuple descriptors don't include
the variable-length/nullable columns of pg_attribute, so they have to
be handled separately. Right now, the attoptions field is handled in
a one-off way with a separate argument passed to
InsertPgAttributeTuples(). The other affected fields of pg_attribute
are not currently needed at relation creation time.
The goal of this patch is to generalize this to allow handling
additional variable-length/nullable columns of pg_attribute in a
similar manner. For that, create a new struct
FormExtraData_pg_attribute, which is to be passed around in parallel
to the tuple descriptor and optionally supplies the additional
columns. Right now, this struct only contains one field for
attoptions, so no functionality is actually changed by this.
Reviewed-by: Tomas Vondra <tomas.vondra@enterprisedb.com>
Discussion: https://www.postgresql.org/message-id/flat/4da8d211-d54d-44b9-9847-f2a9f1184c76@eisentraut.org
Given a subplan in a MERGE query, EXPLAIN would sometimes fail to
properly display expressions involving Params referencing variables in
other parts of the plan tree.
This would affect subplans outside the topmost join plan node, for
which expansion of Params would go via the top-level ModifyTable plan
node. The problem was that "inner_tlist" for the ModifyTable node's
deparse_namespace was set to the join node's targetlist, but
"inner_plan" was set to the ModifyTable node itself, rather than the
join node, leading to incorrect results when descending to the
referenced variable.
Fix and backpatch to v15, where MERGE was introduced.
Discussion: https://postgr.es/m/CAEZATCWAv-sZuH%2BwG5xJ-%2BGt7qGNGX8wUQd3XYydMFDKgRB9nw%40mail.gmail.com
This introduces a new function equalRowTypes() that is effectively a
subset of equalTupleDescs(): it compares only the number of attributes
and the attribute names, types, typmods, and collations. This is
enough for most existing uses of equalTupleDescs(), which are changed
to use the new function. The only remaining callers of
equalTupleDescs() are those that really want to check the full tuple
descriptor as such, without concern about record or row type
semantics.
The existing function hashTupleDesc() is renamed to hashRowType(),
because it now corresponds more to equalRowTypes().
The purpose of this change is to be clearer about the semantics of the
equality asked for by each caller. (At least one caller had a comment
that questioned whether equalTupleDescs() was too restrictive.) For
example, 4f622503d6 removed attstattarget from the tuple descriptor
structure. It was not fully clear at the time how this should affect
equalTupleDescs(). Now the answer is clear: By their own definitions,
equalRowTypes() does not care, and equalTupleDescs() just compares
whatever is in the tuple descriptor but does not care why it is in
there.
Reviewed-by: Tomas Vondra <tomas.vondra@enterprisedb.com>
Discussion: https://www.postgresql.org/message-id/flat/f656d6d9-6660-4518-a006-2f65cafbebd1%40eisentraut.org
destroyStringInfo() is a counterpart to makeStringInfo(), freeing a
palloc'd StringInfo and its data. This is a convenience function to
align the StringInfo API with the PQExpBuffer API. Originally added
in the OAuth patchset, it was extracted and committed separately in
order to aid upcoming JSON work.
Author: Daniel Gustafsson <daniel@yesql.se>
Author: Jacob Champion <jacob.champion@enterprisedb.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/CAOYmi+mWdTd6ujtyF7MsvXvk7ToLRVG_tYAcaGbQLvf=N4KrQw@mail.gmail.com
psql has the :{?name} syntax for testing whether a psql variable
exists. This commit implements tab completion for this syntax.
Notably, to implement this we have to remove '{' from WORD_BREAKS. It
appears that '{' has been there from the very beginning, coming from
the default value of rl_basic_word_break_characters. Since :{?name} is
the only psql syntax using the '{' character, removing it from
WORD_BREAKS shouldn't break anything.
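For illustration, a hedged sketch of the syntax in question (the
variable name is hypothetical):
    -- :{?name} is substituted by TRUE or FALSE depending on whether
    -- the variable is defined, which is handy with \if.
    \set myvar 42
    \if :{?myvar}
        \echo 'myvar is defined'
    \endif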
Discussion: https://postgr.es/m/CAGRrpzZU48F2oV3d8eDLr%3D4TU9xFH5Jt9ED%2BqU1%2BX91gMH68Sw%40mail.gmail.com
Author: Steve Chavez
Reviewed-by: Erik Wienhold
Commit c855872074 introduced a new parameter to pg_regress to set
the directory in which to look for expected files, but accidentally
implemented it only when compiling pg_regress for the ECPG tests.
Fix by adding support for the parameter to the main regression test
build of pg_regress as well.
Backpatch to v16 where --expecteddir was introduced.
Author: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>
Discussion: https://postgr.es/m/CAO6_Xqq5yKJHcJsq__LPcKwSY0XHRqVERNWGxx5ttNXXj7+W=A@mail.gmail.com
Backpatch-through: 16
The buildfarm script attempts to run all tests marked as
NO_INSTALLCHECK under src/test/modules without paying attention to
whether they are enabled or disabled in the parent Makefile. That
hasn't been a problem so far, because all the tests marked with
NO_INSTALLCHECK ran unconditionally in "make check". But commit
e2e3b8ae9e changed that: the injection fault tests are marked as
NO_INSTALLCHECK, and also depend on --enable-injection-points.
Try to work around that by ensuring that "make check" does nothing in
those subdirectories. We can hopefully get rid of this hack soon,
after fixing the buildfarm client, or by switching to meson.
Discussion: https://www.postgresql.org/message-id/8e4cf596-dd70-432e-9068-16466ed596ed%40iki.fi
The interrupt handler run within the injection point can get stuck in
an infinite loop while handling a transaction timeout. To avoid this
situation, we reset the timeout flag before invoking the injection
point.
Author: Alexander Korotkov
Reviewed-by: Andrey Borodin
Discussion: https://postgr.es/m/ZfPchPC6oNN71X2J%40paquier.xyz
There are currently no tests for the low-level backup method where
pg_backup_start() and pg_backup_stop() are involved while taking a
file-system backup. The tests introduced in this commit rely on a
background psql process to make sure that the backup is taken while the
session doing the SQL start and stop calls remains alive.
Two cases are checked here with the backup taken:
- Recovery without a backup_label, leading to a corrupted state.
- Recovery with a backup_label, with a consistent state reached.
Both cases cross-check some patterns in the logs generated when running
recovery.
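For reference, the low-level backup sequence that these tests drive
from the background session looks roughly like this (the label is
arbitrary):
    -- Hedged sketch of the SQL start/stop calls kept in one live session.
    SELECT pg_backup_start('test backup', fast => true);
    -- ... copy the data directory at the filesystem level ...
    SELECT labelfile FROM pg_backup_stop(wait_for_archive => true);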
Compared to the first attempt in 99b4a63bef, this includes a couple of
fixes making the CI stable (5 runs succeeded here):
- Add the file to the list of tests in meson.build.
- Fix a race condition with the first WAL segment that we expect in
the primary's archives, by adding a poll on pg_stat_archiver. The
second segment, with the checkpoint record, is archived thanks to
pg_backup_stop() waiting for it.
- Fix failure of test where the backup_label does not exist. The
cluster inherits the configuration of the first node; it was attempting
to store segments in the first node's archives, triggering failures with
copy on Windows.
- Fix failure of the test on Windows because of incorrect parsing of
the backup_label in the success case. The contents of the backup_label
file are retrieved from the output of pg_backup_stop() via a
BackgroundPsql and written directly to the backup's data folder. This
output would include CRLFs (\r\n), causing the startup process to fail
at the beginning of recovery when parsing the backup_label, because
only LFs (\n) are allowed.
Author: David Steele
Discussion: https://postgr.es/m/f20fcc82-dadb-478d-beb4-1e2ffb0ace76@pgmasters.net
The same pattern is used three times in dynahash.c to retrieve a bucket
number and a hash bucket from a hash value. This came up while
discussing improvements to the type cache, where this piece of
refactoring would be useful.
Note that hash_search_with_hash_value() does not need the bucket number,
just the hash bucket.
Author: Teodor Sigaev
Reviewed-by: Aleksander Alekseev, Michael Paquier
Discussion: https://postgr.es/m/5812a6e5-68ae-4d84-9d85-b443176966a1@sigaev.ru
Similar to d8a295389, trim off any PathKeys which are for ORDER BY /
DISTINCT aggregate functions from the PathKey List for the Gather Merge
paths created by gather_grouping_paths(). These additional PathKeys are
not valid to use after grouping has taken place, as they belong to
columns which are inputs to an aggregate function and are therefore
unavailable after aggregation.
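For context, a hedged illustration of the kind of ORDER BY aggregate
involved (names are hypothetical):
    -- The ORDER BY column c feeds the aggregate itself, so its ordering
    -- is consumed by aggregation and is not available above the group.
    SELECT a, string_agg(b, ',' ORDER BY c)
    FROM t
    GROUP BY a;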
Reported-by: Alexander Lakhin
Discussion: https://postgr.es/m/cf63174c-8c89-3953-cb49-48f41f74941a@gmail.com
Backpatch-through: 16, where 1349d2790 was added
Commit a3c7a993d fixed some cases involving target columns that are
arrays or composites by applying transformAssignedExpr to the VALUES
entries, and then stripping off any assignment ArrayRefs or
FieldStores that the transformation added. But I forgot about domains
over arrays or composites :-(. Such cases would either fail with
surprising complaints about mismatched datatypes, or insert unexpected
coercions that could lead to odd results. To fix, extend the
stripping logic to get rid of CoerceToDomain if it's atop an ArrayRef
or FieldStore.
While poking at this, I realized that there's a poorly documented and
not-at-all-tested behavior nearby: we coerce each VALUES column to
the domain type separately, and rely on the rewriter to merge those
operations so that the domain constraints are checked only once.
If that merging did not happen, it's entirely possible that we'd get
unexpected domain constraint failures due to checking a
partially-updated container value. There's no bug there, but while
we're here let's improve the commentary about it and add some test
cases that explicitly exercise that behavior.
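As a hedged illustration of the general shape of such a case (not the
original report's schema):
    -- A domain over an array as the target of a multi-row VALUES list;
    -- the domain constraint should be checked once per assigned value.
    CREATE DOMAIN posints AS int[] CHECK (0 < ALL (VALUE));
    CREATE TABLE dtest (id int, vals posints);
    INSERT INTO dtest VALUES (1, ARRAY[1, 2]), (2, ARRAY[3, 4]);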
Per bug #18393 from Pablo Kharo. Back-patch to all supported
branches.
Discussion: https://postgr.es/m/18393-65fedb1a0de9260d@postgresql.org
This function returns the chunk_id of an on-disk TOASTed value. If
the value is un-TOASTed or not on-disk, it returns NULL. This is
useful for identifying which values are actually TOASTed and for
investigating "unexpected chunk number" errors.
Bumps catversion.
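For illustration, a hedged usage sketch (table and column names are
hypothetical):
    -- Identify which rows have col stored out-of-line in the TOAST table.
    SELECT id, pg_column_toast_chunk_id(col) AS chunk_id
    FROM my_table
    WHERE pg_column_toast_chunk_id(col) IS NOT NULL;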
Author: Yugo Nagata
Reviewed-by: Jian He
Discussion: https://postgr.es/m/20230329105507.d764497456eeac1ca491b5bd%40sraoss.co.jp
The parallel query infrastructure copies the leader backend's active
snapshot to the worker processes. But the BitmapHeapScan node also had
bespoke code to pass the snapshot from the leader to the workers. That
was redundant, so remove it.
The removed code was analogous to the snapshot serialization in
table_parallelscan_initialize(), but that was the wrong role model. A
parallel bitmap heap scan is more like an independent non-parallel
bitmap heap scan in each parallel worker as far as the table AM is
concerned, because the coordination is done in nodeBitmapHeapscan.c,
and the table AM doesn't need to know anything about it.
This relies on the assumption that es_snapshot ==
GetActiveSnapshot(). That's not a new assumption: things would get
weird if you used the QueryDesc's snapshot for visibility checks in
the scans but the active snapshot for evaluating quals, for
example. This could use some refactoring and cleanup, but for now,
just add some assertions.
Reviewed-by: Dilip Kumar, Robert Haas
Discussion: https://www.postgresql.org/message-id/5f3b9d59-0f43-419d-80ca-6d04c07cf61a@iki.fi
We don't determine the position at which a process waiting for a lock
should insert itself into the wait queue until we reach ProcSleep(),
and we may at that point discover that we must insert ourselves ahead
of everyone who wants a conflicting lock, in which case we obtain the
lock immediately. Up until now, a no-wait lock acquisition would fail
in such cases, erroneously claiming that the lock couldn't be obtained
immediately. Fix that by trying ProcSleep even in the no-wait case.
No back-patch for now, because I'm treating this as an improvement to
the existing no-wait feature. It could instead be argued that it's a
bug fix, on the theory that there should never be any case whatsoever
where no-wait fails to obtain a lock that would have been obtained
immediately without no-wait, but I'm reluctant to interpret the
semantics of no-wait that strictly.
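For context, hedged examples of no-wait lock requests affected by this
behavior (the table name is hypothetical):
    -- Table-level and row-level no-wait lock acquisition.
    LOCK TABLE my_table IN ACCESS EXCLUSIVE MODE NOWAIT;
    SELECT * FROM my_table WHERE id = 1 FOR UPDATE NOWAIT;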
Robert Haas and Jingxian Li
Discussion: http://postgr.es/m/CA+TgmobCH-kMXGVpb0BB-iNMdtcNkTvcZ4JBxDJows3kYM+GDg@mail.gmail.com
This commit adds new tests to verify that transaction_timeout,
idle_session_timeout, and idle_in_transaction_session_timeout work as expected.
We introduce new injection points just before a timeout FATAL error is
thrown, and check that these injection points are reached.
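For reference, the settings exercised look like this (the values are
arbitrary):
    -- Hedged sketch of the three timeouts covered by the new tests.
    SET transaction_timeout = '100ms';
    SET idle_session_timeout = '100ms';
    SET idle_in_transaction_session_timeout = '100ms';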
Discussion: https://postgr.es/m/CAAhFRxiQsRs2Eq5kCo9nXE3HTugsAAJdSQSmxncivebAxdmBjQ%40mail.gmail.com
Author: Andrey Borodin
Reviewed-by: Alexander Korotkov
Currently, pg_visibility computes its xid horizon using
GetOldestNonRemovableTransactionId(). The problem is that this horizon
can sometimes go backward, which can lead to reporting false errors.
In order to fix that, this commit implements a new function,
GetStrictOldestNonRemovableTransactionId(). This function computes an
xid horizon that is guaranteed to be newer than or equal to any xid
horizon computed before.
We have to do the following to achieve this.
1. Ignore process xmins, because they take into account connections to other
databases, which were previously ignored.
2. Ignore KnownAssignedXids, because it is not database-aware, while the
primary could compute its horizons in a database-aware way.
3. Ignore walsender xmin, because it could go backward if some replication
connections don't use replication slots.
As a result, we're using only currently running xids to compute the horizon.
This surely sacrifices some accuracy, but we have to do so to avoid reporting
false errors.
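For context, a hedged sketch of the pg_visibility checks whose false
errors this avoids (the table name is hypothetical):
    CREATE EXTENSION pg_visibility;
    -- These report TIDs that appear wrongly marked all-visible/all-frozen.
    SELECT * FROM pg_check_visible('my_table');
    SELECT * FROM pg_check_frozen('my_table');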
Inspired by an earlier patch by Daniel Shelepanov and the following
discussion with Robert Haas and Tom Lane.
Discussion: https://postgr.es/m/1649062270.289865713%40f403.i.mail.ru
Reviewed-by: Alexander Lakhin, Dmitry Koval
New provider for collations, like "libc" or "icu", but without any
external dependency.
Initially, the only locale supported by the builtin provider is "C",
which is identical to the libc provider's "C" locale. The libc
provider's "C" locale has always been treated as a special case that
uses an internal implementation, without using libc at all -- so the
new builtin provider uses the same implementation.
The builtin provider's locale is independent of the server environment
variables LC_COLLATE and LC_CTYPE. Using the builtin provider, the
database collation locale can be "C" while LC_COLLATE and LC_CTYPE are
set to "en_US", which is impossible with the libc provider.
Offering a new builtin provider clarifies that the semantics of a
collation using this provider will never depend on libc, and makes the
behavior easier to document.
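For illustration, a hedged sketch of a collation using the new provider
(the collation name is arbitrary):
    -- The builtin provider's "C" locale never depends on libc behavior.
    CREATE COLLATION c_builtin (provider = builtin, locale = 'C');
    CREATE TABLE words (w text COLLATE c_builtin);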
Discussion: https://postgr.es/m/ab925f69-5f9d-f85e-b87c-bd2a44798659@joeconway.com
Discussion: https://postgr.es/m/dd9261f4-7a98-4565-93ec-336c1c110d90@manitou-mail.org
Discussion: https://postgr.es/m/ff4c2f2f9c8fc7ca27c1c24ae37ecaeaeaff6b53.camel%40j-davis.com
Reviewed-by: Daniel Vérité, Peter Eisentraut, Jeremy Schneider