The parts of pg_hba.conf that are planned to be extended to support
regular expressions will be able to use the same error message as the
one currently used for pg_ident.conf when a regular expression cannot
be compiled, as long as the routine centralizing the logic,
regcomp_auth_token(), knows which file the regexp comes from and its
line location in that file.
This change makes the follow-up patches slightly simpler, and the logic
remains the same. I suspect that this makes the proposal to add support
for file inclusions in pg_ident.conf and pg_hba.conf slightly simpler,
as well.
Extracted from a larger patch by the same author. This is similar to
the refactoring done in fc579e1.
Author: Bertrand Drouvot
Discussion: https://postgr.es/m/fff0d7c1-8ad4-76a1-9db3-0ab6ec338bf7@amazon.com
56788d215 adjusted the parallel seq scan code so that instead of handing
out a single block at a time to parallel workers, it now hands out ranges
of blocks.
Here we update the documentation, which still claimed that workers
received just one block at a time.
Reported-by: Zhang Mingli
Discussion: https://postgr.es/m/17c99615-2c3b-4e4e-9d0b-424a66a7bccd@Spark
Backpatch-through: 14, where 56788d215 was added.
Various test suites use the "openssl" program as part of their setup.
There isn't a way to override which openssl program is to be used,
other than by fiddling with the path, perhaps. This has gotten
increasingly problematic because different versions of openssl have
different capabilities and do different things by default.
This patch checks for an openssl binary in configure and meson setup,
with appropriate ways to override it. This is similar to how "lz4"
and "zstd" are handled, for example. The meson build system actually
already did this, but the result was only used in some places. This
is now applied more uniformly.
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://www.postgresql.org/message-id/flat/dc638b75-a16a-007d-9e1c-d16ed6cf0ad2%40enterprisedb.com
This makes the choice of result scale of numeric power() for integer
exponents consistent with the choice for non-integer exponents, and
with the result scale of other numeric functions. Specifically, the
result scale will be at least as large as the scale of either input,
and sufficient to ensure that the result has at least 16 significant
digits.
Formerly, the result scale was based only on the scale of the first
input, without taking into account the weight of the result. For
results with negative weight, that could lead to results with very few
or even no non-zero significant digits (e.g., 10.0 ^ (-18) produced
0.0000000000000000).
Fix this by moving responsibility for the choice of result scale into
power_var_int(), which already has code to estimate the result weight.
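As a rough sketch of the rule (our notation, not code from numeric.c:
s_b and s_e are the scales of the base and exponent, and w is the
estimated decimal weight of the result, i.e. the power of 10 of its
first significant digit):

    rscale >= Max(s_b, s_e, 16 - w - 1)

For 10.0 ^ (-18), w = -18, so rscale >= 33, enough to show 16
significant digits.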
Per report by Adrian Klaver and suggested fix by Tom Lane.
No back-patch -- arguably this is a bug fix, but one which is easy to
work around, so it doesn't seem worth the risk of changing query
results in stable branches.
Discussion: https://postgr.es/m/12a40226-70ac-3a3b-3d3a-fdaf9e32d312%40aklaver.com
When logical decoding restarts from NEW_CID, there is no association
between the top transaction and its subtransaction, so both are
created as top transactions and have the same LSN. This caused an
assertion failure in AssertTXNLsnOrder().
This patch skips the assertion check until we reach the LSN at which we
start decoding the contents of the transaction, specifically the
start_decoding_at LSN in SnapBuild. This is okay because we don't
guarantee to make the association between a top transaction and its
subtransactions until we try to decode the actual contents of the
transaction. The ordering of the records prior to the
start_decoding_at LSN should have been checked before the restart.
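A minimal sketch of the added check at the top of AssertTXNLsnOrder()
(hedged: the real code in reorderbuffer.c is more involved):

    #ifdef USE_ASSERT_CHECKING
        LogicalDecodingContext *ctx = rb->private_data;

        /*
         * Skip the verification if we haven't yet reached the LSN at
         * which we start decoding the contents of transactions.
         */
        if (SnapBuildXactNeedsSkip(ctx->snapshot_builder,
                                   ctx->reader->EndRecPtr))
            return;
    #endif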
The other assertion failure happened because we forgot to track that
the top-level transaction id had already been added to the list of
catalog-changing transactions that were committed, when one of its
subtransactions is marked as containing a catalog change.
Reported-by: Tomas Vondra, Osumi Takamichi
Author: Masahiko Sawada, Kuroda Hayato
Reviewed-by: Amit Kapila, Dilip Kumar, Kuroda Hayato, Kyotaro Horiguchi, Masahiko Sawada
Backpatch-through: 10
Discussion: https://postgr.es/m/a89b46b6-0239-2fd5-71a9-b19b1f7a7145%40enterprisedb.com
Discussion: https://postgr.es/m/TYCPR01MB83733C6CEAE47D0280814D5AED7A9%40TYCPR01MB8373.jpnprd01.prod.outlook.com
Set up a signal handler for INT/TERM so that we run our END block if we
receive them. In END, if the exit status indicates a problem, call
_update_pid(-1) to improve the chances of the stop working in case
start() hasn't returned yet.
Also, change END's teardown_node() to pass fail_ok=>1, so that if one
node fails to stop, we still stop the other nodes in the same test.
Per complaint from Andres Freund.
This doesn't seem important enough to backpatch, at least for now.
Discussion: https://postgr.es/m/20220930040734.mbted42oiynhn2t6@awork3.anarazel.de
After commit 39969e2a1e, ->forcePageWrites is no longer very
interesting: we can just test whether runningBackups is different from 0.
This simplifies some code, so do away with it.
Reviewed-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Discussion: https://postgr.es/m/39969e2a1e4d7f5a37f3ef37d53bbfe171e7d77a
We had two copies of almost identical logic to revert shared memory
state when a running backup aborts; we can remove
pg_backup_start_callback if we adapt do_pg_abort_backup so that it can
be used for this purpose too.
However, in order for this to work, we have to repurpose the flag passed
to do_pg_abort_backup. It used to indicate whether to throw a warning
(and the only caller always passed true). It now indicates whether the
callback is being called at start time (in which case the session backup
state is known not to have been set to RUNNING yet, so action is always
taken) or shmem time (in which case action is only taken if the session
backup state is RUNNING). Thus the flag is no longer superfluous but
actually quite critical to get right. I (Álvaro) chose to change the
polarity and the code flow regarding the flag from what Bharath
submitted, for coding clarity.
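A hedged sketch of the resulting control flow (simplified; see xlog.c
for the real code):

    static void
    do_pg_abort_backup(int code, Datum arg)
    {
        bool    during_backup_start = DatumGetBool(arg);

        /*
         * If called during backup start, the session state cannot be
         * RUNNING yet, so always clean up; otherwise only do so if a
         * backup is actually running in this session.
         */
        if (during_backup_start ||
            sessionBackupState == SESSION_BACKUP_RUNNING)
        {
            /* ... decrement runningBackups in shared memory ... */
            /* ... reset sessionBackupState ... */
        }
    }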
Co-authored-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Discussion: https://www.postgresql.org/message-id/20221013111330.564fk5tkwe3ha77l%40alvherre.pgsql
As currently designed, with a callback registered in an ERROR_CLEANUP
block, the shutdown callback would get called twice when updating
archive_library on SIGHUP, which is something we want to avoid to
ease the life of extension writers.
Anyway, an ERROR in the archiver process is treated as FATAL, stopping
it immediately, hence there is no need for an ERROR_CLEANUP block.
Instead, the shutdown callback is now called via before_shmem_exit(),
giving modules the opportunity to do any cleanup actions before the
server shuts down its subsystems.
While on it, this commit adds some test coverage for the shutdown
callback. Neither shell_archive nor basic_archive was using it; one
is added to shell_archive, and its invocation is checked in a TAP test
through a shutdown sequence.
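For illustration, a shutdown callback hooked into the archive module
API could look like this (a hedged sketch following the v15-era API;
the actual code added to shell_archive.c may differ):

    static void
    shell_archive_shutdown(void)
    {
        elog(DEBUG1, "archive module \"shell\" is shutting down");
    }

    void
    _PG_archive_module_init(ArchiveModuleCallbacks *cb)
    {
        cb->check_configured_cb = shell_archive_configured;
        cb->archive_file_cb = shell_archive_file;
        cb->shutdown_cb = shell_archive_shutdown;
    }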
Author: Nathan Bossart, Bharath Rupireddy
Reviewed-by: Kyotaro Horiguchi, Michael Paquier
Discussion: https://postgr.es/m/20221015221328.GB1821022@nathanxps13
Backpatch-through: 15
make_ctags has not included field members of structs since commit
964d01ae90.
For example, in the following field of RestrictInfo:
    Selectivity norm_selec pg_node_attr(equal_ignore);
pg_node_attr was mistakenly interpreted as the name of the field.
To fix this, add the -I option to the ctags command if the command is
Exuberant ctags or Universal ctags (for plain old ctags, struct
members are not included in the tags file anyway).
Also add "-e" and "-n" options to make_ctags. The -e option invokes
ctags command with -e option, which produces TAGS file for emacs. This
allows to eliminate duplicate codes in make_etags so that make_etags
just exec make_ctags with -e option.
The -n option allows not to produce symbolic links in each
sub directory (the default is producing symbolic links).
Author: Yugo Nagata
Reviewed-by: Alvaro Herrera, Tatsuo Ishii
Discussion: https://postgr.es/m/flat/20221007154442.76233afc7c5b255c4de6528a%40sraoss.co.jp
AuthToken gains a regular expression, and IdentLine is changed so that
it uses an AuthToken rather than separately tracking the ident user
string used for the regex compilation and its generated regex_t. In
the case of pg_ident.conf, a set of AuthTokens is built in the
pre-parsing phase of the file, and an extra regular expression is
compiled when building the list of IdentLines, after checking the
sanity of the fields in a pre-parsed entry.
The logic in charge of compiling and executing regular expressions is
now handled by a new set of routines called respectively
regcomp_auth_token() and regexec_auth_token(), which are wrappers
around pg_regcomp() and pg_regexec(), working on AuthTokens. While on
it, this patch adds a routine able to free an AuthToken,
free_auth_token(), to simplify a bit the logic around the requirement
of using a specific free routine for compiled regular expressions.
Note that there are no functional or behavioral changes introduced by
this commit.
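A hedged sketch of the shape of the new routines (signatures are
illustrative; the first entry in this log later extends
regcomp_auth_token() to carry the source file name and line number):

    /* Compile the regular expression contained in an AuthToken. */
    int     regcomp_auth_token(AuthToken *token);

    /* Run an AuthToken's compiled regex against the given string. */
    int     regexec_auth_token(const char *match, AuthToken *token,
                               size_t nmatch, regmatch_t pmatch[]);

    /* Free an AuthToken, including its compiled regex, if any. */
    void    free_auth_token(AuthToken *token);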
The goal of this patch is to ease the use of regular expressions with
more items of pg_hba.conf (user list, database list, potentially
hostnames) where AuthTokens are used extensively. This will be tackled
later in a separate patch.
Author: Bertrand Drouvot, Michael Paquier
Discussion: https://postgr.es/m/fff0d7c1-8ad4-76a1-9db3-0ab6ec338bf7@amazon.com
Preprocessing of the HAVING clause will reduce havingQual to NIL
if the clause is constant-TRUE. This is one case where that
convention is rather unfortunate, because "HAVING TRUE" is not at all
the same as not having any HAVING clause at all. (Per the SQL spec,
it still forces the query to be grouped.) The planner deals with this
by having a boolean hasHavingQual that records whether havingQual was
originally nonempty; places that just want to check whether HAVING
was specified are supposed to consult that.
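Schematically (a simplified sketch, not the exact code at any of the
three sites):

    /* Wrong: havingQual may have been folded to NIL for HAVING TRUE */
    if (parse->groupClause == NIL && parse->havingQual == NULL &&
        !parse->hasAggs)
        ...                     /* treat query as ungrouped */

    /* Right: consult the flag recording whether HAVING was specified */
    if (parse->groupClause == NIL && !root->hasHavingQual &&
        !parse->hasAggs)
        ...                     /* treat query as ungrouped */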
I found three places that got that wrong. Fortunately, these could
only affect cost estimates not correctness. It'd be hard even
to demonstrate the errors; for example, the one in allpaths.c would
only matter in a query that has HAVING TRUE but no GROUP BY and no
aggregates, which would require a completely variable-free SELECT
list, making the case probably of only academic interest. Hence,
while these are worth fixing before someone copies the incorrect
coding somewhere more critical, they don't seem worth back-patching.
I didn't bother trying to devise regression tests, either.
Discussion: https://postgr.es/m/2503888.1666042643@sss.pgh.pa.us
The original hint says to use SET PUBLICATION when really ADD/DROP
PUBLICATION is called for, so this is arguably a bug fix.
Also, a very similar message elsewhere was using an inconsistent
SQLSTATE.
While at it, unwrap some strings.
Backpatch to 15.
Author: Hou zj <houzj.fnst@fujitsu.com>
Discussion: https://postgr.es/m/OS0PR01MB57160AD0E7386547BA978EB394299@OS0PR01MB5716.jpnprd01.prod.outlook.com
These routines have been renamed in a19e5ce. There is no need to keep
the compatibility declarations on HEAD: once an extension moves to the
new routine names when compiling with v16 and newer, the code will work
the same way when recompiled on v15. No backpatch to v15 for this one,
because ABI compatibility has to be maintained there.
Discussion: https://postgr.es/m/20221013194820.ciktb2sbbpw7cljm@awork3.anarazel.de
Per discussion, the existing name of the routine able to initialize a
SRF in materialize mode is unpopular, so rename it. Equally, the
flags of this function are renamed, as follows:
- SRF_SINGLE_USE_EXPECTED -> MAT_SRF_USE_EXPECTED_DESC
- SRF_SINGLE_BLESS -> MAT_SRF_BLESS
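In extension code, the change looks like this (a sketch, assuming the
routine rename from the related commit a19e5ce, i.e.
SetSingleFuncCall() becoming InitMaterializedSRF()):

    /* Before, with 9e98583's names: */
    SetSingleFuncCall(fcinfo, SRF_SINGLE_USE_EXPECTED);

    /* After: */
    InitMaterializedSRF(fcinfo, MAT_SRF_USE_EXPECTED_DESC);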
The previous function and flags introduced in 9e98583 are kept around
for compatibility purposes, so that any extension code already compiled
against v15 continues to work as-is. The declarations introduced here
for compatibility will be removed from HEAD in a follow-up commit.
The new names have been suggested by Andres Freund and Melanie
Plageman.
Discussion: https://postgr.es/m/20221013194820.ciktb2sbbpw7cljm@awork3.anarazel.de
Backpatch-through: 15
When creating a cast that uses a conversion function, we've
historically allowed the input and result types to be
binary-compatible with the function's input and result types,
rather than necessarily being identical. This means that the new
cast is logically dependent on the binary-compatible cast or casts
that it references: if those are defined by pg_cast entries, and you
try to restore the new cast without having defined them, it'll fail.
Hence, we should make pg_depend entries to record these dependencies
so that pg_dump knows that there is an ordering requirement.
This is not the only place where we allow such shortcuts; aggregate
functions for example are similarly lax, and in principle should gain
similar dependencies. However, for now it seems sufficient to fix
the cast-versus-cast case, as pg_dump's other ordering heuristics
should keep it out of trouble for other object types.
Per report from David Turoň; thanks also to Robert Haas for
preliminary investigation. I considered back-patching, but
seeing that this issue has existed for many years without
previous reports, it's not clear it's worth the trouble.
Moreover, back-patching wouldn't be enough to ensure that the
new pg_depend entries exist in existing databases anyway.
Discussion: https://postgr.es/m/OF0A160F3E.578B15D1-ONC12588DA.003E4857-C12588DA.0045A428@notes.linuxbox.cz
DefineQueryRewrite() has long required that ON SELECT rules be named
"_RETURN". But we overlooked the converse case: we should forbid
non-ON-SELECT rules that are named "_RETURN". In particular this
prevents using CREATE OR REPLACE RULE to overwrite a view's _RETURN
rule with some other kind of rule, thereby breaking the view.
Per bug #17646 from Kui Liu. Back-patch to all supported branches.
Discussion: https://postgr.es/m/17646-70c93cfa40365776@postgresql.org
The executor will dump core if it's asked to execute a seqscan on
a relation having no table AM, such as a view. While that shouldn't
really happen, it's possible to get there via catalog corruption,
such as a missing ON SELECT rule. It seems worth installing a defense
against that. There are multiple plausible places for such a defense,
but I picked the planner's get_relation_info().
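A hedged sketch of the defense (the actual test and error wording in
get_relation_info() may differ):

    /* Sanity-check that the relation has storage we know how to scan */
    if (!RELKIND_HAS_TABLE_AM(relation->rd_rel->relkind) &&
        relation->rd_rel->relkind != RELKIND_FOREIGN_TABLE)
        ereport(ERROR,
                (errcode(ERRCODE_WRONG_OBJECT_TYPE),
                 errmsg("cannot access relation \"%s\"",
                        RelationGetRelationName(relation))));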
Per discussion of bug #17646 from Kui Liu. Back-patch to v12 where
the tableam APIs were introduced; in older versions you won't get a
SIGSEGV, so it seems less pressing.
Discussion: https://postgr.es/m/17646-70c93cfa40365776@postgresql.org
The file name used for its temporary destination, before renaming it to
the real deal, has been using microseconds in a timestamp that was
originally meant to be in milliseconds. This is harmless, as it is
aimed at being a safeguard against name collisions (note MyProcPid in
the name), but let's be correct with the maths.
While on it, add a note in the module's makefile to document why
installcheck is not supported.
Author: Nathan Bossart
Reviewed-by: Bharath Rupireddy
Discussion: https://postgr.es/m/20221014044106.GA1673343@nathanxps13
Backpatch-through: 15
There is already some coverage for that in the kerberos test suite,
though it requires PG_TEST_EXTRA to be set, given its insecure nature.
This provides coverage in a default setup, as long as peer
authentication is supported on the platform where the test is run.
Author: Bertrand Drouvot
Discussion: https://postgr.es/m/7f87ca27-e184-29da-15d6-8be4325ad02e@gmail.com
If the non-recursive term of a SEARCH BREADTH FIRST recursive
query has only constants in its target list, the planner will
fold the starting RowExpr added by rewrite into a simple Const
of type RECORD. The executor doesn't have any problem with
that --- but EXPLAIN VERBOSE will encounter the Const as the
ultimate source of truth about what the field names of the
SET column are, and it didn't know what to do with that.
Fortunately, we can pull the identifying typmod out of the
Const, in much the same way that record_out would.
For reasons that remain a bit obscure to me, this only fails
with SEARCH BREADTH FIRST, not SEARCH DEPTH FIRST or CYCLE.
But I added regression test cases for both of those options
too, just to make sure we don't break it in future.
Per bug #17644 from Matthijs van der Vleuten. Back-patch
to v14 where these constructs were added.
Discussion: https://postgr.es/m/17644-3bd1f3036d6d7a16@postgresql.org
In the latest version of Apple's macOS SDK, <sys/socket.h>
fails to compile if "REF" is #define'd as something.
Apple may or may not agree that this is a bug, and even if
they do accept the bug report I filed, they probably won't
fix it very quickly. In the meantime, our back branches will all
fail to compile gram.y. v15 and HEAD currently escape the problem
thanks to the refactoring done in 98e93a1fc, but that's purely
accidental. Moreover, since that patch removed a widely-visible
inclusion of <netdb.h>, back-patching it seems too likely to break
third-party code.
Instead, change the token's code name to REF_P, following our usual
convention for naming parser tokens that are likely to have symbol
conflicts. The effects of that should be localized to the grammar
and immediately surrounding files, so it seems like a safer answer.
Per project policy that we want to keep recently-out-of-support
branches buildable on modern systems, back-patch all the way to 9.2.
Discussion: https://postgr.es/m/1803927.1665938411@sss.pgh.pa.us
snprintf.c has always fallen back on libc's *printf implementation
when printing pointers (%p) and floats. When this code originated,
we were still supporting some platforms that lacked native snprintf,
so we used sprintf for that. That's not actually unsafe in our usage,
but nonetheless builds on macOS are starting to complain about sprintf
being unconditionally deprecated; and I wouldn't be surprised if other
platforms follow suit. There seems little reason to believe that any
platform supporting C99 wouldn't have standards-compliant snprintf,
so let's just use that instead to suppress such warnings.
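The change is mechanical; for instance (a sketch of the pattern, with a
local buffer):

    char    buf[64];
    void   *ptr = buf;

    /* Before: libc sprintf, now deprecated on recent macOS SDKs */
    /* sprintf(buf, "%p", ptr); */

    /* After: same output in our usage, no deprecation warning */
    snprintf(buf, sizeof(buf), "%p", ptr);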
Back-patch to v12, which is where we started to require C99. It's
also where we started to use our snprintf.c everywhere, so this
wouldn't be enough to suppress the warning in older branches anyway
--- that is, in older branches these aren't necessarily all our
usages of libc's sprintf. It is enough in v12+ because any
deprecation annotation attached to libc's sprintf won't apply to
pg_sprintf. (Whether all our usages of pg_sprintf are adequately
safe is not a matter I intend to address here, but perhaps it could
do with some review.)
Per report from Andres Freund and local testing.
Discussion: https://postgr.es/m/20221015211955.q4cwbsfkyk3c4ty3@awork3.anarazel.de
I (Andres) missed a few recent changes to configure when merging
e6927270cd "meson: Add initial version of meson based build system". Mirror
the changes in
- ec3c9cc202 "Add definition pg_attribute_aligned() for MSVC"
- b086a47a27 "Bump minimum version of Bison to 2.3"
- 8b878bffa8 "Bump minimum version of Flex to 2.5.35"
As MSVC does not implement 128 bit integers, the oversight of not using
pg_attribute_aligned() should not have current practical consequences. But of
course the code from c.h should still be correctly mirrored.
I (Andres) also hadn't implemented the minimum perl version check. Added that
now.
Reported-by: Junwang Zhao <zhjwpku@gmail.com>
Author: Junwang Zhao <zhjwpku@gmail.com>
Author: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CAEG8a3K9c87EwAwmdOgmS0Li1J6P_7r-Uc0-zN6cJtrMr7VvPg@mail.gmail.com
While directly targeting a foreign table with MERGE was already
expressly forbidden, we failed to catch the case of a partitioned table
that has a foreign table as a partition; and the result if you try is an
incomprehensible error. Fix that by adding a specific check.
Backpatch to 15.
Reported-by: Tatsuhiro Nakamori <bt22nakamorit@oss.nttdata.com>
Discussion: https://postgr.es/m/bt22nakamorit@oss.nttdata.com
It can be useful to know when a relation has last been used, e.g., when
evaluating whether an index is still required. It was already possible to
infer the time of the last usage by tracking, e.g.,
pg_stat_all_indexes.idx_scan over time. But far from everybody does so.
To make it easier to detect the last time a relation has been scanned, track
that time in each relation's pgstat entry. To minimize overhead a) the
timestamp is updated only when the backend pending stats entry is flushed to
shared stats b) the last transaction's stop timestamp is used as the
timestamp.
Bumps catalog and stats format versions.
Author: Dave Page <dpage@pgadmin.org>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Bruce Momjian <bruce@momjian.us>
Reviewed-by: Vik Fearing <vik@postgresfriends.org>
Discussion: https://postgr.es/m/CA+OCxozrVHNFVEPkweUHMZje+t1tfY816d9MZYc6eZwOOusOaQ@mail.gmail.com
Previously, GetCurrentTransactionStopTimestamp() computed a new
timestamp on every call while xactStopTimestamp was unset, and
xactStopTimestamp was set only when a commit or abort record was
written.
An upcoming patch will add additional calls to
GetCurrentTransactionStopTimestamp() from pgstats. To avoid computing
timestamps multiple times, set xactStopTimestamp in
GetCurrentTransactionStopTimestamp() if not already set.
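A minimal sketch of the new behavior (simplified from the real function
in xact.c):

    TimestampTz
    GetCurrentTransactionStopTimestamp(void)
    {
        /* Compute the timestamp once and cache it for later callers */
        if (xactStopTimestamp == 0)
            xactStopTimestamp = GetCurrentTimestamp();
        return xactStopTimestamp;
    }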
Author: Dave Page <dpage@pgadmin.org>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Vik Fearing <vik@postgresfriends.org>
Discussion: https://postgr.es/m/20220906155325.an3xesq5o3fq36gt%40awork3.anarazel.de
When a query whose results were requested in single-row mode was the
last in the queue by the time those results were being read, the
single-row flag was not being reset, because we were returning early
from pqPipelineProcessQueue. Move that stanza up so that the flag is
always reset at the end of processing that query's results.
Add a test for the situation.
Backpatch to 14.
Author: Denis Laxalde <denis.laxalde@dalibo.com>
Discussion: https://postgr.es/m/01af18c5-dacc-a8c8-07ee-aecc7650c3e8@dalibo.com
The previous patch made addition of new GUCs cheap, but other GUC
operations aren't improved and indeed get a bit slower, because
hash_seq_search() is slower than just scanning a pointer array.
However, most performance-critical GUC operations only need
to touch a relatively small fraction of the GUCs; especially
so for AtEOXact_GUC(). We can improve matters at the cost
of a bit more space by adding dlist or slist links to the
GUC data structures. This patch invents lists that track
(1) all GUCs with non-default "source";
(2) all GUCs with nonempty state stack (implying they've
been changed in the current transaction);
(3) all GUCs due for reporting to the client.
All of guc.c's performance-critical cases can make use of one or
another of these lists to avoid searching the whole hash table.
In particular, the stack list means that transaction end
doesn't take time proportional to the number of GUCs, but
only to the number changed in the current transaction.
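A hedged sketch of the added links (field names are illustrative; see
guc_tables.h for the real declarations):

    struct config_generic
    {
        /* ... existing fields ... */
        dlist_node  nondef_link;    /* list link, if source not default */
        slist_node  stack_link;     /* list link, if stack is nonempty */
        dlist_node  report_link;    /* list link, if due for reporting */
    };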
Discussion: https://postgr.es/m/2982579.1662416866@sss.pgh.pa.us
This gets rid of bsearch() in favor of hashed lookup. The main
advantage is that it becomes far cheaper to add new GUCs, since
we needn't re-sort the pointer array. Adding N new GUCs had
been O(N^2 log N), but now it's closer to O(N). We need to
sort only in SHOW ALL and equivalent functions, which are
hopefully not performance-critical to anybody.
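Lookup then becomes a hash probe instead of a bsearch over a sorted
array (a simplified sketch; the entry layout is illustrative):

    GUCHashEntry *hentry;

    hentry = (GUCHashEntry *) hash_search(guc_hashtab,
                                          &name,
                                          HASH_FIND,
                                          NULL);
    if (hentry != NULL)
        return hentry->gucvar;
    return NULL;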
Also, merge GetNumConfigOptions() into get_guc_variables(),
because in a world where the set of GUCs isn't fairly static
you really want to consider those two results as tied together
not independent.
Discussion: https://postgr.es/m/2982579.1662416866@sss.pgh.pa.us
The only real argument for using malloc directly was that we needed
the ability to not throw error on OOM; but mcxt.c grew that feature
awhile ago.
Keeping the data in a memory context improves accountability and
debuggability --- for example, without this it's almost impossible
to detect memory leaks in the GUC code with anything less costly
than valgrind. Moreover, the next patch in this series will add a
hash table for GUC lookup, and it'd be pretty silly to be using
palloc-dependent hash facilities alongside malloc'd storage of the
underlying data.
This is a bit invasive though, in particular causing an API break
for GUC check hooks that want to modify the GUC's value or use an
"extra" data structure. They must now use guc_malloc() and
guc_free() instead of malloc() and free(). Failure to change
affected code will result in assertion failures or worse; but
thanks to recent effort in the mcxt infrastructure, it shouldn't
be too hard to diagnose such oversights (at least in assert-enabled
builds).
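For example, a check hook allocating an "extra" struct now does
something like this (a sketch; my_extra and the surrounding hook shape
are hypothetical):

    my_extra   *myextra;

    /* Before: myextra = malloc(sizeof(my_extra)); */
    myextra = (my_extra *) guc_malloc(LOG, sizeof(my_extra));
    if (myextra == NULL)
        return false;           /* soft failure on OOM */
    /* ... fill in *myextra, then hand it to the GUC machinery ... */
    *extra = (void *) myextra;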
One note is that this changes ParseLongOption() to return short-lived
palloc'd not malloc'd data. There wasn't any caller for which the
previous definition was better.
Discussion: https://postgr.es/m/2982579.1662416866@sss.pgh.pa.us
We lack a version of repalloc() that supports MCXT_ALLOC_NO_OOM
semantics, so invent repalloc_extended() with the usual set of
flags. repalloc_huge() becomes a legacy wrapper for that.
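With that in place, repalloc_huge() reduces to a legacy wrapper, along
these lines:

    void *
    repalloc_huge(void *pointer, Size size)
    {
        /* pass the huge-allocation flag; throws error on OOM */
        return repalloc_extended(pointer, size, MCXT_ALLOC_HUGE);
    }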
Also, fix dynahash.c so that it can support HASH_ENTER_NULL
requests when using the default palloc-based allocator.
The only reason it didn't do that already was the lack of the
MCXT_ALLOC_NO_OOM option when that code was written, ages ago.
While here, simplify a few overcomplicated tests in mcxt.c.
Discussion: https://postgr.es/m/2982579.1662416866@sss.pgh.pa.us
Most code prints PIDs as %d, but some code tried to print them as long
or unsigned long. While this is in theory allowed, the fact that PIDs
fit into int is deeply baked into all PostgreSQL code, so these random
deviations don't accomplish anything except confusion.
Note that we still need casts from pid_t to int, because on 64-bit
MinGW, pid_t is long long int. (But per above, actually supporting
that range in PostgreSQL code would be major surgery and probably not
useful.)
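That is, the convention remains, for example (with the cast needed
because of MinGW's pid_t):

    /* PIDs are printed as int everywhere; cast from pid_t as needed */
    elog(LOG, "server process with PID %d exited", (int) pid);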
Discussion: https://www.postgresql.org/message-id/289c2e45-c7d9-5ce4-7eff-a9e2a33e1580@enterprisedb.com
The comment talked about some Asserts which did not exist and also a
variable name which seems to have long since disappeared.
Rewrite the comment in a way that will hopefully stand the test of
time and inform people why we always write "INSERT 0 <nrows>" instead of
"INSERT <nrows>" in the command completion tag for INSERT.
Reviewed-by: Mark Dilger
Discussion: https://postgr.es/m/CAApHDvpiUg09AvvGAVopNAKemA9z-kCmt7Fi6HKauc32bKzx4w@mail.gmail.com
This is a follow-up commit to ca7f8e2 which removed the allocation
abstraction from pgcrypto and replaced px_alloc + memset calls with
palloc0 calls. The particular memset in this commit was missed in
that work though.
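The removed call followed the pattern below; since palloc0() already
zeroes the allocation, the memset was redundant:

    /* Before (after ca7f8e2's conversion): */
    ptr = palloc0(size);
    memset(ptr, 0, size);       /* redundant, removed by this commit */

    /* After: */
    ptr = palloc0(size);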
Author: Zhihong Yu <zyu@yugabyte.com>
Reviewed-by: Bruce Momjian <bruce@momjian.us>
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Discussion: https://postgr.es/m/CALNJ-vT5qRucrFMPSzQyAWods1b4MnNPG-M=_ZUzh1SoTh0vNw@mail.gmail.com
Using pg_buffercache_summary() is significantly cheaper than querying
pg_buffercache and summarizing in SQL.
Author: Melih Mutlu <m.melihmutlu@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Aleksander Alekseev <aleksander@timescale.com>
Reviewed-by: Zhang Mingli <zmlpostgres@gmail.com>
Discussion: https://postgr.es/m/CAGPVpCQAXYo54Q%3D8gqBsS%3Du0uk9qhnnq4%2B710BtUhUisX1XGEg%40mail.gmail.com
In FIPS mode, these calls will fail. By having them in a separate
file, it would make it easier to have an alternative output file or
selectively disable these tests. This isn't done here; this is just
some preparation.
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://www.postgresql.org/message-id/647f6cc1-473d-f788-ade0-c09201e5ab6a@enterprisedb.com
Commit 3d956d956 allowed the COPY, but it's done by inserting individual
rows to the foreign table, so it can be inefficient due to the overhead
caused by each round-trip to the foreign server. To improve performance
of the COPY in such a case, this patch allows batch insertion, by
extending the multi-insert machinery in CopyFrom() to the foreign-table
case so that we insert multiple rows to the foreign table at once using
the FDW callback routine added by commit b663a4136. This patch also
allows this for postgres_fdw. It is enabled by the "batch_size" option
added by commit b663a4136, which is disabled by default.
When doing batch insertion, we update the progress of the COPY command
after performing the FDW callback routine, so as to count only rows not
suppressed by the FDW or by a BEFORE ROW INSERT trigger. For
consistency, this patch
changes the timing of updating it for plain tables: previously, we
updated it immediately after adding each row to the multi-insert buffer,
but we do so only after writing the rows stored in the buffer out to the
table using table_multi_insert(), which I think would be consistent even
with non-batching mode, because in that mode we update it after writing
each row out to the table using table_tuple_insert().
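A hedged sketch of flushing the multi-insert buffer through the batch
FDW callback (heavily simplified; the real CopyFrom() code handles
partitions, triggers, and more):

    if (resultRelInfo->ri_FdwRoutine != NULL)
    {
        int     ninserted = nbuffered;

        /* Send all buffered rows to the foreign server at once */
        resultRelInfo->ri_FdwRoutine->ExecForeignBatchInsert(estate,
                                                             resultRelInfo,
                                                             slots,
                                                             NULL,
                                                             &ninserted);

        /* Only now update progress, with rows actually inserted */
        processed += ninserted;
        pgstat_progress_update_param(PROGRESS_COPY_TUPLES_PROCESSED,
                                     processed);
    }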
Andrey Lepikhov, heavily revised by me, with review from Ian Barwick,
Andrey Lepikhov, and Zhihong Yu.
Discussion: https://postgr.es/m/bc489202-9855-7550-d64c-ad2d83c24867%40postgrespro.ru