This adds a new option called BGWORKER_BYPASS_ROLELOGINCHECK to the
flags available to BackgroundWorkerInitializeConnection() and
BackgroundWorkerInitializeConnectionByOid().
This allows bgworkers to bypass the role login check, making it
possible to use a role that has no login rights without being a
superuser. InitPostgres() gains a new flag called
INIT_PG_OVERRIDE_ROLE_LOGIN, taking advantage of the refactoring done
in 4800a5dfb4.
Regression tests are added to worker_spi to check the behavior of this
new option with bgworkers.
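For illustration, a minimal sketch of a worker body using the new flag
(the database and role names are placeholders, not part of this
commit):
void
my_worker_main(Datum main_arg)
{
    BackgroundWorkerUnblockSignals();

    /* connect as a role that has NOLOGIN, which this flag now permits */
    BackgroundWorkerInitializeConnection("postgres", "nologin_role",
                                         BGWORKER_BYPASS_ROLELOGINCHECK);

    /* ... normal worker work (SPI queries, etc.) goes here ... */
}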
Author: Bertrand Drouvot
Reviewed-by: Nathan Bossart, Michael Paquier, Bharath Rupireddy
Discussion: https://postgr.es/m/bcc36259-7850-4882-97ef-d6b905d2fc51@gmail.com
In commit 3fc6e2d7f, I (tgl) argued that we only need to check for
a constant-FALSE restriction clause when there's exactly one
restriction clause, on the grounds that const-folding would have
thrown away anything ANDed with a Const FALSE. That's true just after
const-folding has been applied, but subsequent processing such as
equivalence class expansion could result in cases where a Const FALSE
is ANDed with some other stuff. (Compare for instance joinrels.c's
restriction_is_constant_false.) Hence, tweak this logic to check all
the elements of the baserestrictinfo list, not just one; that's cheap
enough to not be worth worrying about.
There is one existing test case where this visibly improves the plan.
There would not be any savings in runtime, but the planner effort and
executor startup effort will be reduced, and anyway it's odd that
we can detect related cases but not this one.
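The adjusted check then looks roughly like this sketch (not the exact
committed code):
ListCell   *lc;

foreach(lc, rel->baserestrictinfo)
{
    RestrictInfo *rinfo = (RestrictInfo *) lfirst(lc);
    Expr       *clause = rinfo->clause;

    if (clause && IsA(clause, Const) &&
        (((Const *) clause)->constisnull ||
         !DatumGetBool(((Const *) clause)->constvalue)))
        return true;        /* the rel is provably empty */
}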
Richard Guo (independently discovered by David Rowley)
Discussion: https://postgr.es/m/CAMbWs4_x3-CnVVrCboS1LkEhB5V+W7sLSCabsRiG+n7+5_kqbg@mail.gmail.com
The check is not about processing the startup packet, so the calling
function seems like a more natural place. I'm also working on a patch
that moves 'canAcceptConnections' out of the Port struct, and this
makes that refactoring more convenient.
Reviewed-by: Tristan Partin
Discussion: https://www.postgresql.org/message-id/7a59b073-5b5b-151e-7ed3-8b01ff7ce9ef@iki.fi
InitPostgres() has been using a set of boolean arguments to control its
behavior, and a patch under discussion aims to add a third one. In
preparation for that, this commit switches
all the current boolean arguments of this routine to a single bits32
argument instead. Two values are currently supported for the flags:
- INIT_PG_LOAD_SESSION_LIBS to load [session|local]_preload_libraries at
startup.
- INIT_PG_OVERRIDE_ALLOW_CONNS to allow connection to a database even if
it has !datallowconn. This is used by bgworkers.
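For illustration, a caller combines them along these lines (a sketch;
the surrounding argument list is assumed from the refactored signature
rather than quoted from the patch):
bits32      init_flags = 0;

if (load_session_libraries)
    init_flags |= INIT_PG_LOAD_SESSION_LIBS;
if (override_allow_connections)
    init_flags |= INIT_PG_OVERRIDE_ALLOW_CONNS;

InitPostgres(dbname, InvalidOid, username, InvalidOid,
             init_flags, NULL);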
Reviewed-by: Bertrand Drouvot
Discussion: https://postgr.es/m/ZSTn66_BXRZCeaqS@paquier.xyz
Since we added the PlannerInfo.all_baserels set, it's not really
necessary to grovel over the rangetable to count baserels in the
current query. So let's drop has_multiple_baserels() in favor
of a bms_membership() test. This might be microscopically
faster, but the main point is to remove some unnecessary code.
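The replacement test is essentially a one-liner along these lines:
/* old: has_multiple_baserels(root) */
bool        multiple = (bms_membership(root->all_baserels) == BMS_MULTIPLE);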
Richard Guo
Discussion: https://postgr.es/m/CAMbWs4_8RcSbbfs1ASZLrMuL0c0EQgXWcoLTQD8swBRY_pQQiA@mail.gmail.com
The present pg_resetwal code hardcodes the minimum value for -c as 2,
which is FrozenTransactionId, but it's not clear why that is allowed.
After some research, it was probably a mistake in the original patch.
Change it to FirstNormalTransactionId, which matches other xid-related
options in pg_resetwal.
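For reference, the relevant constants as defined in access/transam.h:
#define InvalidTransactionId        ((TransactionId) 0)
#define BootstrapTransactionId      ((TransactionId) 1)
#define FrozenTransactionId         ((TransactionId) 2)
#define FirstNormalTransactionId    ((TransactionId) 3)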
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://www.postgresql.org/message-id/flat/d09f0e91-8757-642b-1a92-da9a52f5589a%40eisentraut.org
While working on a8a968a82, I failed to consider that
cheapest_startup_path can be NULL when there is no non-parameterized
path in the pathlist. This is well documented in set_cheapest(); I just
failed to notice.
Here we adjust the code to just check if the RelOptInfo has a
cheapest_startup_path set before adding it to the startup_subpaths list.
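The guard amounts to something like the following sketch
(startup_subpaths_valid is a hypothetical fallback flag, named here
only for illustration):
if (childrel->cheapest_startup_path != NULL)
    accumulate_append_subpath(childrel->cheapest_startup_path,
                              &startup_subpaths, NULL);
else
    startup_subpaths_valid = false;     /* hypothetical flag */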
Reported-by: Richard Guo
Author: Richard Guo
Discussion: https://postgr.es/m/CAMbWs49w3t03V69XhdCuw+GDwivny4uQUxrkVp6Gejaspt0wMQ@mail.gmail.com
worker_spi_launch() may report that a worker has stopped when it fails
to connect to a database that does not allow connections, if the worker
exits before the SQL function checks the worker's current status. The
test is switched to use Cluster::psql instead of safe_psql() so that it
does not fail hard when this query errors out. While at it, this
removes a query that looks at pg_stat_activity to simplify the test, as
a check on the contents of the server logs achieves the same result
when the worker cannot connect to a database with !datallowconn.
Per buildfarm members kestrel, mamba and serinus. Bonus thanks to Tom
Lane for providing the logs of the failure from mamba that the
buildfarm was not able to show. Note that I have reproduced the failure
with a hardcoded stop point.
Discussion: https://postgr.es/m/3365937.1696801735@sss.pgh.pa.us
While these two built-in functions do exactly the same thing,
CURRENT_USER seems preferable to use in documentation examples.
It's easier to look up if the reader is unsure what it is.
Also, this puts these examples in sync with an adjacent example
that already used CURRENT_USER.
Per question from Kirk Parker.
Discussion: https://postgr.es/m/CANwZ8rmN_Eb0h0hoMRS8Feftaik0z89PxVsKg+cP+PctuOq=Qg@mail.gmail.com
The comment claimed that it is "called from postmaster", but it is
actually called in the child process, pretty early in the process
initialization. I guess you could interpret "called from postmaster"
to mean that, but it seems wrong to me. Rename the function to be
consistent with other functions that have a similar role.
Reviewed-by: Thomas Munro
Discussion: https://www.postgresql.org/message-id/4f95c1fc-ad3c-7974-3a8c-6faa3931804c@iki.fi
In EXEC_BACKEND or single-user mode, we process
shared_preload_libraries at postmaster startup as usual, but also at
backend startup. When a library calls RegisterBackgroundWorker() when
being loaded into a backend process, we go through the motions to add
the worker to BackgroundWorkerList, even though that is a
postmaster-private data structure. Make it return early when called in
a backend process, without changing BackgroundWorkerList.
You could argue that it was intentional: In non-EXEC_BACKEND mode, the
backend processes inherit BackgroundWorkerList at fork(), so it does
make some sense to initialize it to the same state in EXEC_BACKEND
mode, too. It's clearly a postmaster-private structure, though, and
all the functions that use it are clearly marked as "should only be
called in postmaster".
You could also argue that libraries should not call
RegisterBackgroundWorker() during backend startup. It's too late to
correctly register any static background workers at that stage. But
it's a common pattern in extensions, and it doesn't seem worth the
churn to require all extensions to change it.
Another sloppiness was the exception for "internal" background
workers. We checked that RegisterBackgroundWorker() was called during
shared_preload_libraries processing, or the background worker was an
internal one. That exception was made in commit 665d1fad99 to allow
postmaster to register the logical apply launcher in
ApplyLauncherRegister(). The way the check was written, it would not
complain if you registered an internal background worker in a regular
backend process. But it would complain if postmaster registered a
background worker defined in a shared library, outside
shared_preload_libraries processing. I think the correct rule is that
you can only register static background workers in the postmaster
process, and only before the bgworker shared memory array has been
initialized. Check for that more directly.
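In sketch form, the early return described above amounts to this (the
committed condition may be phrased differently):
void
RegisterBackgroundWorker(BackgroundWorker *worker)
{
    /*
     * Only the postmaster, before the bgworker shared memory array is
     * set up, may register static background workers; in a child
     * process this is silently a no-op.
     */
    if (IsUnderPostmaster)
        return;

    /* ... existing registration into BackgroundWorkerList ... */
}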
Reviewed-by: Thomas Munro
Discussion: https://www.postgresql.org/message-id/4f95c1fc-ad3c-7974-3a8c-6faa3931804c@iki.fi
The serialized representation of an internal aggregate state is a bytea
value. In each deserial function, in order to "receive" the bytea value
we appended it onto a short-lived StringInfoData using
appendBinaryStringInfo. This was a little wasteful, as it meant having
to palloc memory, copy a (possibly long) series of bytes, and then
later pfree that memory. Instead of going to this extra trouble, we can
just fake up
a StringInfoData and point the data directly at the bytea's payload. This
should help increase the performance of internal aggregate
deserialization.
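The trick, as it might appear inside one of the deserial functions, is
roughly this (the field setup shown is illustrative; the committed code
may spell it differently):
bytea      *sstate = PG_GETARG_BYTEA_PP(0);
StringInfoData buf;

/* point the StringInfo directly at the bytea payload; no copy made */
buf.data = VARDATA_ANY(sstate);
buf.len = VARSIZE_ANY_EXHDR(sstate);
buf.maxlen = 0;             /* nothing is ever appended to this buffer */
buf.cursor = 0;

/* ... pq_getmsgint64(&buf), pq_getmsgfloat8(&buf), etc. as before ... */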
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/CAApHDvr=e-YOigriSHHm324a40HPqcUhSp6pWWgjz5WwegR=cQ@mail.gmail.com
1349d2790 added code to adjust the PlannerInfo.group_pathkeys so that
ORDER BY / DISTINCT aggregate functions could obtain pre-sorted inputs
to allow faster execution. That commit forgot to adjust the pathkeys in
create_agg_path(). Some code in there assumed that it was always fine
to make the AggPath's pathkeys the same as its subpath's. That seems to
have been ok up until 1349d2790, but since that commit adds pathkeys for
columns which are within the aggregate function, those columns won't be
available above the aggregate node. This can result in "could not find
pathkey item to sort" during create_plan().
The fix here is to strip off the additional pathkeys added by
adjust_group_pathkeys_for_groupagg(). It seems that the pathkeys here
will only ever be group_pathkeys, so all we need to do is check whether
the pathkey list is longer than num_groupby_pathkeys and, if so, strip
off the extra ones.
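In create_agg_path(), the trimming can be expressed roughly as:
List       *pathkeys = subpath->pathkeys;

if (list_length(pathkeys) > root->num_groupby_pathkeys)
    pathkeys = list_copy_head(pathkeys, root->num_groupby_pathkeys);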
Reported-by: Justin Pryzby
Reviewed-by: Richard Guo
Discussion: https://postgr.es/m/ZQhYYRhUxpW3PSf9%40telsasoft.com
Backpatch-through: 16, where 1349d2790 was introduced
Going by c4a1933b4, b33ef397a and 05893712c (to name just a few), it seems
that maintaining debug_print_rel() is often forgotten. In the case of
c4a1933b4, it was several years before anyone noticed that a path type
was not handled by debug_print_rel(). (debug_print_rel() is only
compiled when building with OPTIMIZER_DEBUG).
After a quick survey on the pgsql-hackers mailing list, nobody came
forward to admit that they use OPTIMIZER_DEBUG. So to prevent any future
maintenance neglect, let's just remove debug_print_rel() and have
OPTIMIZER_DEBUG make use of pprint() instead (as suggested by Tom Lane).
If anyone comes forward to claim that they use OPTIMIZER_DEBUG in a way
that needs debug_print_rel(), then they have around 10 months remaining
in the v17 cycle in which we could revert this. If nobody comes forward
in that time, then we can likely safely declare debug_print_rel() not
worth keeping.
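With that, the OPTIMIZER_DEBUG output boils down to a call to the
generic node printer, e.g.:
#ifdef OPTIMIZER_DEBUG
    pprint(rel);            /* dump the RelOptInfo via the outfuncs code */
#endif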
Discussion: https://postgr.es/m/CAApHDvoCdjo8Cu2zEZF4-AxWG-90S+pYXAnoDDa9J3xH-OrczQ@mail.gmail.com
Back in the 8.3 era we discovered that it was problematic if
libpq.so had encoding ID assignments different from the backend,
which is possible because on some platforms libpq.so might be
of a different major version from the calling programs.
psql should use libpq's assignments, but initdb has to use the
backend's, else it will put wrong values into pg_database.
The solution devised in commit 8468146b0 relied on giving initdb
its own copy of encnames.c rather than relying on the functions
exported by libpq. Later, that metamorphosed into ensuring that
libpgcommon got linked before libpq -- which made things OK for
initdb but broke psql. We didn't notice for lack of any changes
in enum pg_enc since then. Commit 06843df4a reversed that, fixing
the latent bug in psql but adding one in initdb. The meson build
infrastructure is also not being sufficiently careful about link
order, and trying to make it so would be equally fragile.
Hence, let's use a new scheme based on giving the libpq-exported
symbols different real names than the same functions exported from
libpgcommon.a or libpgcommon_srv.a. (We could distinguish those
two cases as well, but there seems no need to.) libpq gets the
official names to avoid an ABI break for libpq clients, while the
other cases use #define's to make the real names "xxx_private"
rather than "xxx". By controlling where the #define's are
applied, we can force any particular client program to use one
set or the other of the encnames.c functions.
We cannot back-patch this, since it'd be an ABI break for backend
loadable modules, but there seems little need to. We're just
trying to ensure that the world is safe for hypothetical future
additions to enum pg_enc.
In passing this should fix "duplicate symbol" linker warnings
that we've been seeing on AIX buildfarm members since commit
06843df4a. It's not very clear why that linker is complaining
now, when there were strictly *more* duplicates visible before,
but in any case this should remove the reason for complaint.
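The renaming trick looks roughly like this (the guard macro and the
exact set of remapped symbols shown here are illustrative, not the
committed names):
#ifndef USE_LIBPQ_ENCNAMES          /* hypothetical guard */
#define pg_char_to_encoding         pg_char_to_encoding_private
#define pg_encoding_to_char         pg_encoding_to_char_private
#define pg_valid_server_encoding    pg_valid_server_encoding_private
#define pg_valid_server_encoding_id pg_valid_server_encoding_id_private
#endif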
Patch by me; thanks to Andres Freund for review.
Discussion: https://postgr.es/m/2385119.1696354473@sss.pgh.pa.us
There was some discussion about what the line length should be. Most
output currently targets around 80 columns, but the maximum in use is
95, so we set that as the maximum for now. This just ensures that there
is some guidance and that there are no wild deviations.
Based on a patch by Atsushi Torikoshi <torikoshia@oss.nttdata.com>
Discussion: https://www.postgresql.org/message-id/flat/50ca8ff35a8dd8f9ec89963b503571a7@oss.nttdata.com
This seemed inconsistent with the --help output of other tools.
Depending on the values, it can cause ugly formatting. Also, we're not
getting the defaults from libpq; we're just emulating the methods libpq
uses to derive these values, so they might not be 100% correct.
Author: Atsushi Torikoshi <torikoshia@oss.nttdata.com>
Discussion: https://www.postgresql.org/message-id/flat/50ca8ff35a8dd8f9ec89963b503571a7@oss.nttdata.com
Currently, the B-tree code matches every scan key to every item on the
page. Imagine an ordered B-tree scan for a query like this:
SELECT * FROM tbl WHERE col > 'a' AND col < 'b' ORDER BY col;
The (col > 'a') scan key will always be matched once we find the
location to start the scan. The (col < 'b') scan key will match every
item on the page as long as it matches the last item on the page.
This patch implements prechecking of the scan keys required for the
scan direction at the beginning of each page scan. If the precheck
succeeds, we can skip checking those scan keys for the individual items
on the page. That could lead to significant acceleration, especially if
the comparison operator is expensive.
Idea from patch by Konstantin Knizhnik.
Discussion: https://postgr.es/m/079c3f8e-3371-abe2-e93c-fc8a0ae3f571%40garret.ru
Reviewed-by: Peter Geoghegan, Pavel Borisov
A bgworker can spawn parallel workers of its own when executing queries,
and if the worker uses BGWORKER_BYPASS_ALLOWCONN while the database it
is connected to does not allow connections, a parallel worker would fail
to start up. In the case of this module, the step checking for the
presence of the schema to create was spawning a parallel worker, causing
the last test introduced by 991bb0f965 to fail.
This issue could be reproduced with debug_parallel_query = 'regress',
for example.
Per buildfarm member crake.
BuildDescForRelation()'s main job is to convert ColumnDef lists to
pg_attribute/tuple descriptor arrays, which makes it really an internal
subroutine of DefineRelation() and some related functions. That is more
the remit of tablecmds.c and doesn't have much to do with the basic
tuple descriptor interfaces in tupdesc.c. This is also supported by the
header includes we can now remove from tupdesc.c.
By moving it over, we can also (in the future) make
BuildDescForRelation() use more internals of tablecmds.c that are not
sensible to be exposed in tupdesc.c.
Discussion: https://www.postgresql.org/message-id/flat/52a125e4-ff9a-95f5-9f61-b87cf447e4da@eisentraut.org
Keep track of the used size of the array. That avoids looping through
the whole array in a few places. It doesn't matter from a performance
point of view since the array is small anyway, but this feels less
surprising and is a little less code. Now that we have an explicit
NumListenSockets variable that is statically initialized to 0, we
don't need the loop to initialize the array.
Allocate the array in PostmasterContext. The array isn't needed in
child processes, so this allows reusing that memory. We could easily
make the array resizable now, but we haven't heard any complaints
about the current 64 sockets limit.
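For instance, closing the sockets becomes a loop bounded by the new
counter (a sketch; the exact variable names in the patch may differ):
for (int i = 0; i < NumListenSockets; i++)
    StreamClose(ListenSockets[i]);
NumListenSockets = 0;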
Discussion: https://www.postgresql.org/message-id/7bb7ad65-a018-2419-742f-fa5fd877d338@iki.fi
Previously, the JSON code didn't have to worry too much about freeing
JsonLexContext, because it was never too long-lived. With new features
being added for SQL/JSON this is no longer the case. Add a routine
that knows how to free this struct and apply that to a few places, to
prevent this from becoming problematic.
At the same time, we change the API of makeJsonLexContextCstringLen to
make it receive a pointer to JsonLexContext for callers that want it to
be stack-allocated; it can also be passed as NULL to get the original
behavior of a palloc'ed one.
This also causes an ABI break due to the addition of flags to
JsonLexContext, so we can't easily backpatch it. AFAICS that's not much
of a problem; apparently some leaks might exist in JSON usage of
text-search, for example via json_to_tsvector, but I haven't seen any
complaints about that.
Per Coverity complaint about datum_to_jsonb_internal().
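The two calling styles after this change look roughly like this
(argument details beyond the new first parameter are assumed, not
quoted from the patch):
JsonLexContext lex;         /* caller-provided, stack-allocated */

makeJsonLexContextCstringLen(&lex, json, len, GetDatabaseEncoding(), true);
/* ... parse ... */
freeJsonLexContext(&lex);

/* passing NULL keeps the old behavior of returning a palloc'd context */
JsonLexContext *plex = makeJsonLexContextCstringLen(NULL, json, len,
                                                    GetDatabaseEncoding(),
                                                    true);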
Discussion: https://postgr.es/m/20230808174110.oq3iymllsv6amkih@alvherre.pgsql
6b94e7a6d did this for ordered append paths to allow fast-startup
MergeAppends; however, nothing was done for the Append case.
Here we adjust add_paths_to_append_rel() to have it build an AppendPath
containing the cheapest startup paths from each of the child relations
when the append rel has "consider_startup" set.
Author: Andy Fan, David Rowley
Discussion: https://www.postgresql.org/message-id/CAKU4AWrXSkUV=Pt-gRxQT7EbfUeNssprGyNsB=5mJibFZ6S3ww@mail.gmail.com
Ensure we switch to the per-tuple memory context to prevent any memory
leaks of detoasted Datums in MemoizeHash_hash() and MemoizeHash_equal().
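The fix follows the usual pattern: evaluate the cache key expressions
in the node's per-tuple context, roughly as below (econtext stands for
the Memoize node's ExprContext):
MemoryContext oldcontext;

oldcontext = MemoryContextSwitchTo(econtext->ecxt_per_tuple_memory);

/* hash or compare the cache keys here; any detoasted copies land in
 * the per-tuple context and go away at the next context reset */

MemoryContextSwitchTo(oldcontext);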
Reported-by: Orlov Aleksej
Author: Orlov Aleksej, David Rowley
Discussion: https://postgr.es/m/83281eed63c74e4f940317186372abfd%40cft.ru
Backpatch-through: 14, where Memoize was added
The comment
/* On some platforms, readline is declared as readline(char *) */
is obsolete. The casting away of const can be removed.
The const in the readline() prototype was added in GNU readline 4.2,
released in 2001. BSD libedit has also had const in the prototype
since at least 2001.
(The commit that introduced this comment (187e865174) talked about
FreeBSD 4.8, which didn't have readline compatibility in libedit yet,
so it must have been talking about GNU readline in the base system.
This checks out, but FreeBSD 5 already had an updated GNU readline
with const.)
Reviewed-by: Aleksander Alekseev <aleksander@timescale.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/862fc1d4-9a0c-d2b6-5451-ee3dc750bcab%40eisentraut.org
A couple of new options are added to this module to provide more control
on the ways bgworkers are started:
- A new GUC called worker_spi.role to control which role to use by
default when starting a worker.
- worker_spi_launch() gains three arguments: a role OID, a database OID
and flags (currently only BGWORKER_BYPASS_ALLOWCONN). By default, the
role OID and the database OID are InvalidOid, in which case the worker
would use the related GUCs.
Workers loaded by shared_preload_libraries use the default values
provided by the GUCs, with flags at 0. The options are given to the
main bgworker routine through bgw_extra. A test case is tweaked to
start two dynamic workers with databases and roles defined by the caller
of worker_spi_launch().
These additions will have the advantage of expanding the tests for
bgworkers, for at least two cases:
- BGWORKER_BYPASS_ALLOWCONN has no coverage in the core tree.
- A new bgworker flag is under discussion, and this eases the
integration of new tests.
Reviewed-by: Bertrand Drouvot
Discussion: https://postgr.es/m/bcc36259-7850-4882-97ef-d6b905d2fc51@gmail.com
Two custom wait events are added here:
- "DblinkConnect", when waiting to establish a connection to a remote
server.
- "DblinkGetConnect", when waiting to establish a connection to a remote
server but it could not be found in the list of already-opened ones.
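For reference, a sketch of how such a wait event is registered and used
with the custom wait event API (variable names are illustrative):
static uint32 dblink_we_connect = 0;
PGconn     *conn;

/* register lazily, once per backend */
if (dblink_we_connect == 0)
    dblink_we_connect = WaitEventExtensionNew("DblinkConnect");

/* report it while blocked establishing the connection */
conn = libpqsrv_connect(connstr, dblink_we_connect);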
Author: Masahiro Ikeda
Discussion: https://postgr.es/m/197bce267fa691a0ac62c86c4ab904c4@oss.nttdata.com