This commit adds XML_PARSE_HUGE to the libxml2 functions used in core
for the parsing of XML objects, lifting the original 10MB limit enforced
by libxml2.
In most upstream code paths, XML_MAX_TEXT_LENGTH (10^7) is the
historical limit, which is raised to XML_MAX_HUGE_LENGTH (10^9) once
XML_PARSE_HUGE is passed to the parser calls. Text is still limited by
palloc(), which caps allocations at 1GB.
This makes it possible for the backend to handle XML objects larger
than 10MB in general, and also raises the depth limit. This change
affects the contrib module xml2, the xml data type and SQL/XML.
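For illustration, a hedged sketch of what this enables (the document
size and the generate_series() construction are only an example, not
part of this commit):
-- Build an XML document slightly over 10MB and parse it; this used to
-- fail once libxml2's 10MB text-node limit was hit, and is now bounded
-- only by the 1GB palloc() limit.
SELECT xmlparse(DOCUMENT
         '<r>' || string_agg(repeat('x', 1000), '') || '</r>')
FROM generate_series(1, 11000);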
Author: Dmitry Koval
Reviewed-by: Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/18274-98d16bc03520665f@postgresql.org
Some libxml2 functions used in the tree are currently marked as
deprecated by upstream. This commit refreshes the code to use the
recommended replacements, leading to the following changes:
- xmlSubstituteEntitiesDefault() is gone; the paths doing the parsing
now pass XML_PARSE_NOENT instead.
- xmlParseMemory() -> xmlReadMemory().
These functions, as well as other functions setting global state, were
officially marked as deprecated by upstream in August 2022. Their
replacements have existed since roughly 2001, as far as I have checked,
so this should be safe.
Author: Dmitry Koval
Discussion: https://postgr.es/m/18274-98d16bc03520665f@postgresql.org
This replaces dblink's blocking libpq calls, allowing cancellation and
allowing DROP DATABASE (of a database not involved in the query). Apart
from explicit dblink_cancel_query() calls, dblink still doesn't cancel
the remote side. The replacement for the blocking calls consists of
new, general-purpose query execution wrappers in the libpqsrv facility.
Out-of-tree extensions should adopt these. Use them in postgres_fdw,
replacing a local implementation from which the libpqsrv implementation
derives. This is a bug fix for dblink. Code inspection identified the
bug at least thirteen years ago, but user complaints have not appeared.
Hence, no back-patch for now.
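As a hedged illustration of the user-visible effect (the connection
string and remote query are placeholders):
-- Previously this remote call blocked uninterruptibly inside libpq;
-- with the new libpqsrv wrappers, a query cancel now interrupts the
-- local call promptly.
SELECT * FROM dblink('dbname=postgres',
                     'SELECT count(*) FROM generate_series(1, 200000000)')
  AS t(n bigint);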
Discussion: https://postgr.es/m/20231122012945.74@rfd.leadboat.com
An array element equal to INT_MAX gave this code indigestion,
causing an infinite loop that surely ended in SIGSEGV. We fixed
some nearby problems awhile ago (cf 757c5182f) but missed this.
Report and diagnosis by Alexander Lakhin (bug #18273); patch by me
Discussion: https://postgr.es/m/18273-9a832d1da122600c@postgresql.org
New TAP tests added by commits 9a17be1e and 4710b67d missed converting
warnings to FATAL. This commit fixes that.
Author: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
There are a lot of Perl scripts in the tree, mostly code generation
and TAP tests. Occasionally, these scripts produce warnings. These
are probably always mistakes on the developer side (true positives).
Typical examples are warnings from genbki.pl or related when you make
a mess in the catalog files during development, or warnings from tests
when they massage a config file that looks different on different
hosts, or mistakes during merges (e.g., duplicate subroutine
definitions), or just mistakes that weren't noticed because there is a
lot of output in a verbose build.
This changes all warnings into fatal errors, by replacing
use warnings;
by
use warnings FATAL => 'all';
in all Perl files.
Discussion: https://www.postgresql.org/message-id/flat/06f899fd-1826-05ab-42d6-adeb1fd5e200%40eisentraut.org
This function reads a page directly from a relation, relying on
index_open() to open the index to read from. Unfortunately, this would
crash when used with partitioned indexes, as these can be opened with
index_open() but have no physical pages.
Alexander has fixed the module, while I have written the test.
Author: Alexander Lakhin, Michael Paquier
Discussion: https://postgr.es/m/18246-f4d9ff7cb3af77e6@postgresql.org
Backpatch-through: 12
As coded, the function relied on index_open() when opening an index
relation, allowing partitioned indexes to be processed by
pgstathashindex(). This was leading to a "could not open file" error
because partitioned indexes have no physical files, or to a crash with
an assertion failure (like on HEAD).
This issue is fixed by applying the same checks as the other index
statistics functions: verify both that the relkind is RELKIND_INDEX and
that the index AM is the expected one.
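A hedged sketch of the resulting behavior (relation names are
placeholders, and the exact error wording follows the new checks):
CREATE TABLE parted (a int) PARTITION BY RANGE (a);
CREATE INDEX parted_hash_idx ON parted USING hash (a);
-- Now rejected with a clean error instead of "could not open file" or
-- an assertion failure:
SELECT * FROM pgstathashindex('parted_hash_idx');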
Author: Alexander Lakhin
Discussion: https://postgr.es/m/18246-f4d9ff7cb3af77e6@postgresql.org
Backpatch-through: 12
Teach _bt_binsrch (and related helper routines like _bt_search and
_bt_compare) about the initial positioning requirements of backward
scans. Routines like _bt_binsrch already know all about "nextkey"
searches, so it seems natural to teach them about "goback"/backward
searches, too. These concepts are closely related, and are much easier
to understand when discussed together.
Now that certain implementation details are hidden from _bt_first, it's
straightforward to add a new optimization: backward scans using the <
strategy now avoid extra leaf page accesses in certain "boundary cases".
Consider the following example, which uses the tenk1 table (and its
tenk1_hundred index) from the standard regression tests:
SELECT * FROM tenk1 WHERE hundred < 12 ORDER BY hundred DESC LIMIT 1;
Before this commit, nbtree would scan two leaf pages, even though it was
only really necessary to scan one leaf page. We'll now descend straight
to the leaf page containing a (12, -inf) high key instead. The scan
will locate matching non-pivot tuples with "hundred" values starting
from the value 11. The scan won't waste a page access on the right
sibling leaf page, which cannot possibly contain any matching tuples.
You can think of the optimization added by this commit as disabling an
optimization (the _bt_compare "!pivotsearch" behavior that was added to
Postgres 12 in commit dd299df8) for a small subset of cases where it was
always counterproductive.
Equivalently, you can think of the new optimization as extending the
"pivotsearch" behavior that page deletion by VACUUM has long required
(since the aforementioned Postgres 12 commit went in) to other, similar
cases. Obviously, this isn't strictly necessary for these new cases
(unlike VACUUM, _bt_first is prepared to move the scan to the left once
on the leaf level), but the underlying principle is the same.
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Matthias van de Meent <boekewurm+postgres@gmail.com>
Discussion: https://postgr.es/m/CAH2-Wz=XPzM8HzaLPq278Vms420mVSHfgs9wi5tjFKHcapZCEw@mail.gmail.com
Allow using multiple worker processes to build a BRIN index, which until
now was supported only for BTREE indexes. For large tables this often
results in a significant speedup when the build is CPU-bound.
The work is split in a simple way - each worker builds BRIN summaries on
a subset of the table, determined by the regular parallel scan used to
read the data, and feeds them into a shared tuplesort which sorts them
by blkno (start of the range). The leader then reads this sorted stream
of ranges, merges duplicates (which may happen if the parallel scan does
not align with BRIN pages_per_range), and adds the resulting ranges into
the index.
The number of duplicate results produced by workers (requiring merging
in the leader process) should be fairly small, thanks to how parallel
scans assign chunks to workers. The likelihood of duplicate results may
increase for higher pages_per_range values, but then there are fewer
page ranges in total. In any case, we expect the merging to be much
cheaper than summarization, so this should be a win.
Most of the parallelism infrastructure is a simplified copy of the code
used by BTREE indexes, omitting the parts irrelevant for BRIN indexes
(e.g. uniqueness checks).
This also introduces a new index AM flag amcanbuildparallel, determining
whether to attempt to start parallel workers for the index build.
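A hedged usage sketch (the table, column and storage parameter values
are placeholders; whether workers are actually used depends on the
parallel configuration and the table size):
SET max_parallel_maintenance_workers = 4;
CREATE INDEX big_table_brin_idx ON big_table
    USING brin (created_at) WITH (pages_per_range = 32);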
Original patch by me, with reviews and substantial reworks by Matthias
van de Meent, certainly enough to make him a co-author.
Author: Tomas Vondra, Matthias van de Meent
Reviewed-by: Matthias van de Meent
Discussion: https://postgr.es/m/c2ee7d69-ce17-43f2-d1a0-9811edbda6e6%40enterprisedb.com
When building BRIN indexes, the brinbuildCallback only advances to the
next page range when seeing a tuple that doesn't belong to the current
one. This means that the index may end up missing ranges at the end of
the table, if those pages do not contain any indexable tuples.
We tend not to have completely empty pages at the end of a relation, but
this also applies to partial indexes, where the tuples may simply not
match the index predicate. This results in inefficient scans using the
affected BRIN index - without the summaries, the page ranges have to be
read and processed, which consumes I/O and possibly also CPU time.
The existing code already added empty ranges for earlier parts of the
table; this commit makes sure they are added for the ranges at the end
of the table too.
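For example (a hedged sketch; the names and predicate are placeholders),
a partial BRIN index whose predicate matches no tuples near the end of
the heap previously could leave those trailing ranges unsummarized until
brin_summarize_new_values() was run; they are now summarized at build
time:
CREATE INDEX orders_open_brin ON orders USING brin (order_date)
    WHERE status = 'open';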
Patch by Matthias van de Meent, with review/improvements by me.
Author: Matthias van de Meent
Reviewed-by: Tomas Vondra
Discussion: https://postgr.es/m/CAEze2WiMsPZg%3DxkvSF_jt4%3D69k6K7gz5B8V2wY3gCGZ%2B1BzCbQ%40mail.gmail.com
A semi-join is deparsed as an EXISTS subquery. It references both the
outer and inner relations, so it has to be evaluated as a condition in
the upper-level WHERE clause. The signatures of deparseFromExprForRel()
and deparseRangeTblRef() are revised so that they can add conditions to
the upper level.
PgFdwRelationInfo now has a hidden_subquery_rels field, holding the
relids used in the inner side of a semi-join. These cannot be referred
to from upper relations and are only used internally for equivalence
member searches.
The planner can create a semi-join whose target list refers to Vars of
the inner relation. However, since we deparse a semi-join as an
EXISTS() subquery, we skip pushdown in that case.
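A hedged example of a query whose semi-join can now be pushed down
(foreign table and column names are placeholders):
SELECT t1.c1 FROM ft1 t1
WHERE EXISTS (SELECT 1 FROM ft2 t2 WHERE t2.c1 = t1.c1);
-- The remote SQL is expected to carry the EXISTS condition in its WHERE
-- clause instead of fetching both foreign tables and joining locally.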
Author: Alexander Pyhalov
Reviewed-by: Ashutosh Bapat, Ian Lawrence Barwick, Yuuki Fujii, Tomas Vondra
Discussion: https://postgr.es/m/c9e2a757cf3ac2333714eaf83a9cc184@postgrespro.ru
It's not necessary to fill the key field in most cases, since
hash_search has already done that. Some existing call sites have an
assert or comment that this contract has been fulfilled, but those
are quite old and that practice seems unnecessary here.
While at it, remove a nearby redundant assignment that a smart compiler
will elide anyway.
Zhao Junwang, with some adjustments by me
Reviewed by Nathan Bossart, with additional feedback from Tom Lane
Discussion: http://postgr.es/m/CAEG8a3%2BUPF%3DR2QGPgJMF2mKh8xPd1H2TmfH77zPuVUFdBpiGUA%40mail.gmail.com
Quotes are applied to GUC names in a very inconsistent way across the
code base, with a mix of double quotes and no quotes. This commit
removes the double quotes around all the GUC names that are clearly
identifiable as parameters rather than natural English words (use of
underscores, mixed case, etc).
This is the result of a discussion with Álvaro Herrera, Nathan Bossart,
Laurenz Albe, Peter Eisentraut, Tom Lane and Daniel Gustafsson.
Author: Peter Smith
Discussion: https://postgr.es/m/CAHut+Pv-kSN8SkxSdoHano_wPubqcg5789ejhCDZAcLFceBR-w@mail.gmail.com
This patch adds 'stats_since' and 'minmax_stats_since' columns to the
pg_stat_statements view and the pg_stat_statements() function. The new
min/max reset mode for the pg_stat_statements_reset() function is
controlled by the parameter minmax_only.
The 'stats_since' column is populated with the current timestamp when a
new statement is added to the pg_stat_statements hashtable. It provides
clear information about the statistics collection time interval for each
statement. Besides, it can be used by sampling solutions to detect
situations when a statement was evicted and stored again between
samples.
Such a sampling solution could derive any pg_stat_statements statistic
values for an interval between two samples, with the exception of the
min/max statistics. To address this issue, this patch adds the ability
to reset min/max statistics independently of the statement reset, using
the new minmax_only parameter of the pg_stat_statements_reset(userid
oid, dbid oid, queryid bigint, minmax_only boolean) function. The
timestamp of such a reset is stored in the minmax_stats_since field for
each statement.
The pg_stat_statements_reset() function now returns the timestamp of
the reset as its result.
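A hedged usage sketch (the call relies on the defaults of the first
three parameters, which mean "all entries"):
-- Reset only min/max statistics for all entries, keeping the counters:
SELECT pg_stat_statements_reset(minmax_only => true);
-- Inspect the entry and min/max reset timestamps:
SELECT query, stats_since, minmax_stats_since FROM pg_stat_statements;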
Discussion: https://postgr.es/m/flat/72e80e7b160a6eb189df9ef6f068cce3765d37f8.camel%40moonset.ru
Author: Andrei Zubkov
Reviewed-by: Julien Rouhaud, Hayato Kuroda, Yuki Seino, Chengxi Sun
Reviewed-by: Anton Melnikov, Darren Rush, Michael Paquier, Sergei Kornilov
Reviewed-by: Alena Rybakina, Andrei Lepikhov
This is a preliminary patch. It adds NOT NULL checking of the result of
the pg_stat_statements_reset() function. It is needed for the upcoming
patch "Track statement entry timestamp", which will change the result
type of this function to the timestamp of the reset performed.
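A hedged sketch of the kind of check this implies in the tests (it keeps
producing stable output once the function returns a timestamp instead of
void):
SELECT pg_stat_statements_reset() IS NOT NULL AS t;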
Discussion: https://postgr.es/m/flat/72e80e7b160a6eb189df9ef6f068cce3765d37f8.camel%40moonset.ru
Author: Andrei Zubkov
Reviewed-by: Julien Rouhaud, Hayato Kuroda, Yuki Seino, Chengxi Sun
Reviewed-by: Anton Melnikov, Darren Rush, Michael Paquier, Sergei Kornilov
Reviewed-by: Alena Rybakina, Andrei Lepikhov
The brininsert code used to initialize (and destroy) BrinDesc and
BrinRevmap for each tuple, which is not free. This patch initializes
these structures only once, and reuses them for all inserts in the same
command. The data is passed through indexInfo->ii_AmCache.
This also introduces an optional AM callback "aminsertcleanup" that
allows performing custom cleanup in case simply pfree-ing ii_AmCache is
not sufficient (which is the case when the cache contains TupleDesc,
Buffers, and so on).
Author: Soumyadeep Chakraborty
Reviewed-by: Alvaro Herrera, Matthias van de Meent, Tomas Vondra
Discussion: https://postgr.es/m/CAE-ML%2B9r2%3DaO1wwji1sBN9gvPz2xRAtFUGfnffpd0ZqyuzjamA%40mail.gmail.com
A WaitEventSet holds file descriptors or event handles (on Windows).
If FreeWaitEventSet is not called, those fds or handles are leaked.
Use ResourceOwners to track WaitEventSets, to clean those up
automatically on error.
This was a live bug in async Append nodes, if an FDW's
ForeignAsyncRequest function failed. (In back branches, I will apply a
more localized fix for that based on PG_TRY-PG_FINALLY.)
The added test doesn't check for leaking resources, so it passed even
before this commit. But at least it covers the code path.
In passing, fix a misleading comment about what the 'nevents' argument
to WaitEventSetWait means.
Report by Alexander Lakhin, analysis and suggestion for the fix by
Tom Lane. Fixes bug #17828.
Reviewed-by: Alexander Lakhin, Thomas Munro
Discussion: https://www.postgresql.org/message-id/472235.1678387869@sss.pgh.pa.us
This adds alternative expected files for various tests.
In src/test/regress/sql/password.sql, we make a small change to the
test so that the CREATE ROLE still succeeds even if the ALTER ROLE
that attempts to set a password fails. That way, the roles are
available for the rest of the test file in either case.
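A hedged sketch of that adjustment (the role name and password are
placeholders):
-- Create the role first, then set the password separately, so the role
-- exists for later tests even if setting this password type fails:
CREATE ROLE regress_passwd1;
ALTER ROLE regress_passwd1 PASSWORD 'role_pwd1';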
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://www.postgresql.org/message-id/flat/dbbd927f-ef1f-c9a1-4ec6-c759778ac852%40enterprisedb.com
This adds several alternative expected files for when MD5 and 3DES are
not available. This is similar to the alternative expected files for
when the legacy provider is disabled. In fact, running the pgcrypto
tests in FIPS mode makes use of some of these existing alternative
expected files as well (e.g., for blowfish).
These new expected files currently cover the FIPS mode provided by
OpenSSL 3.x as well as the modified OpenSSL 3.x from Red Hat (e.g.,
Fedora 38), but not the modified OpenSSL 1.x from Red Hat (e.g.,
Fedora 35). (The latter will have some error message wording
differences.)
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://www.postgresql.org/message-id/flat/dbbd927f-ef1f-c9a1-4ec6-c759778ac852%40enterprisedb.com
In FIPS mode, these tests will fail. Having them in a separate file
makes it easier to provide an alternative output file or to selectively
disable these tests. This isn't done here; this is just some
preparation.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/2766054.1700080156@sss.pgh.pa.us
This adds support for infinity to the interval data type, using the
same input/output representation as the other date/time data types
that support infinity. This allows various arithmetic operations on
infinite dates, timestamps and intervals.
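A hedged sketch of the new behavior:
SELECT interval 'infinity', interval '-infinity';
SELECT timestamp '2024-01-01' + interval 'infinity';    -- infinity
SELECT interval 'infinity' > interval '1000000 years';  -- true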
The new values are represented by setting all fields of the interval
to INT32/64_MIN for -infinity, and INT32/64_MAX for +infinity. This
ensures that they compare as less/greater than all other interval
values, without the need for any special-case comparison code.
Note that, since those 2 values were formerly accepted as legal finite
intervals, pg_upgrade and dump/restore from an old database will turn
them from finite to infinite intervals. That seems OK, since those
exact values should be extremely rare in practice, and they are
outside the documented range supported by the interval type, which
gives us a certain amount of leeway.
Bump catalog version.
Joseph Koshakow, Jian He, and Ashutosh Bapat, reviewed by me.
Discussion: https://postgr.es/m/CAAvxfHea4%2BsPybKK7agDYOMo9N-Z3J6ZXf3BOM79pFsFNcRjwA%40mail.gmail.com
When we decide that we don't want to track execution time of a
specific planner or ProcessUtility call, we still have to increment
the nesting depth, or we'll make the wrong determination of whether
we are at top level when considering nested statements. (PREPARE
and EXECUTE are exceptions, for reasons explained in the code.)
Counting planner nesting depth separately from executor nesting depth
was a mistake: it causes us to make the wrong determination of whether
we are at top level when considering nested statements that get
executed during planning (as a result of constant-folding of
functions, for example). Merge those counters into one.
In passing, get rid of the PGSS_HANDLED_UTILITY macro in favor of
explicitly listing statement types. It seems somewhat coincidental
that PREPARE and EXECUTE are handled alike in each of the places where
that was used: the reasoning tends to be different for each one.
Thus, the macro seems as likely to encourage future bugs as prevent
them, since it's quite unclear whether any future statement type that
might need special-casing here would also need the same choices at
each spot.
Sergei Kornilov, Julien Rouhaud, and Tom Lane, per bug #17552 from
Maxim Boguk. This is pretty clearly a bug fix, but it's also a
behavioral change that might surprise somebody, so no back-patch.
Discussion: https://postgr.es/m/17552-213b534c56ab5d02@postgresql.org
A PostgreSQL release tarball contains a number of prebuilt files, in
particular files produced by bison, flex, perl, as well as html and
man documentation. We have done this consistent with established
practice at the time to not require these tools for building from a
tarball. Some of these tools were hard to get, or get the right
version of, from time to time, and shipping the prebuilt output was a
convenience to users.
Now this has at least two problems:
One, we have to make the build system(s) work in two modes: Building
from a git checkout and building from a tarball. This is pretty
complicated, but it works so far for autoconf/make. It does not
currently work for meson; you can currently only build with meson from
a git checkout. Making meson builds work from a tarball seems very
difficult or impossible. One particular problem is that since meson
requires a separate build directory, we cannot make the build update
files like gram.h in the source tree. So if you were to build from a
tarball and update gram.y, you will have a gram.h in the source tree
and one in the build tree, but the way things work is that the
compiler will always use the one in the source tree. So you cannot,
for example, make any gram.y changes when building from a tarball.
This seems impossible to fix in a non-horrible way.
Second, there is increased interest nowadays in precisely tracking the
origin of software. We can reasonably track contributions into the
git tree, and users can reasonably track the path from a tarball to
packages and downloads and installs. But what happens between the git
tree and the tarball is obscure and in some cases non-reproducible.
The solution for both of these issues is to get rid of the step that
adds prebuilt files to the tarball. The tarball now only contains
what is in the git tree (*). Getting the additional build
dependencies is no longer a problem nowadays, and the complications to
keep these dual build modes working are significant. And of course we
want to get the meson build system working universally.
This commit removes the make distprep target altogether. The make
dist target continues to do its job, it just doesn't call distprep
anymore.
(*) - The tarball also contains the INSTALL file that is built at make
dist time, but not by distprep. This is unchanged for now.
The make maintainer-clean target, whose job it is to remove the
prebuilt files in addition to what make distclean does, is now just an
alias for make distclean. (In practice, it is probably obsolete given
that git clean is available.)
The following programs are now hard build requirements in configure
(they were already required by meson.build):
- bison
- flex
- perl
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://www.postgresql.org/message-id/flat/e07408d9-e5f2-d9fd-5672-f53354e9305e@eisentraut.org
cac169d68 adjusted DEFAULT_FDW_TUPLE_COST and that seems to have caused
a test to become unstable on 32-bit machines.
4b14e1871 tried to fix this, as originally the plan was flipping between
a Nested Loop and a Hash Join. That commit forced the Nested Loop, but
there is still flexibility in whether or not the sort is pushed to the
remote server: 32-bit machines seem to prefer pushing it, while on
64-bit the costs prefer not to.
Here let's just turn off enable_sort to strongly encourage the sort to
take place on the remote server.
Reported-by: Michael Paquier, Richard Guo
Discussion: https://postgr.es/m/ZUM2IhA8X2lrG50K@paquier.xyz
cac169d68 adjusted DEFAULT_FDW_TUPLE_COST and that seems to have caused
a test to become unstable on 32-bit machines. Try to make it stable
again.
Reported-by: Michael Paquier
Discussion: https://postgr.es/m/ZUM2IhA8X2lrG50K@paquier.xyz
0.01 was unrealistically low as it's the same as the default
cpu_tuple_cost and 10x cheaper than the default parallel_tuple_cost.
It's hard to imagine a situation where fetching a tuple from a foreign
server would be cheaper than fetching one from a parallel worker.
After some experimentation on a loopback server, somewhere between 0.15
and 0.3 seems more realistic. Here we split the difference and set it
to 0.2.
This will cause operations that reduce the number of tuples (e.g.
aggregation) to be more likely to take place on the foreign server.
Adjusting this causes some plan changes in the postgres_fdw regression
tests. This is because penalizing each Path with the additional tuple
costs causes some dilution of the costs of the other operations being
charged for and results in various paths appearing to be closer to the
same costs such that add_path's STD_FUZZ_FACTOR is more likely to see two
paths as costing (fuzzily) the same. This isn't ideal, but it shouldn't
be reason enough to use artificially low costs.
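Users who relied on the old cost can still set it back explicitly per
server; a hedged sketch (the server name is a placeholder):
ALTER SERVER loopback_srv OPTIONS (ADD fdw_tuple_cost '0.01');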
Discussion: https://postgr.es/m/CAApHDvopVjjfh5c1Ed2HRvDdfom2dEpMwwiu5-f1AnmYprJngA@mail.gmail.com
4b8266415 added a precheck to pgrowlocks() to ensure that the given
object's pg_class.relam is HEAP_TABLE_AM_OID. However, that check was
placed before another check that verified whether the given object was
a partitioned table. Since pg_class.relam is always InvalidOid for
partitioned tables, calling pgrowlocks() on a partitioned table raised
the "only heap AM is supported" error instead of the intended error
about the object being a partitioned table.
Here we simply move the pg_class.relam check to after the check that
verifies that we are in fact working with a normal (non-partitioned)
table.
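A hedged sketch of the corrected behavior (the table name is a
placeholder, and the exact wording follows the pre-existing
partitioned-table check):
CREATE TABLE parted_tab (a int) PARTITION BY LIST (a);
-- Now reports that partitioned tables are unsupported, rather than
-- complaining that only heap AM is supported:
SELECT * FROM pgrowlocks('parted_tab');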
Reported-by: jian he
Discussion: https://postgr.es/m/CACJufxFaSp_WguFCf0X98951zFVX+dXFnF1mxAb-G3g1HiHOow@mail.gmail.com
Backpatch-through: 12, where 4b8266415 was introduced.
pgstatindex failed with ERRCODE_DATA_CORRUPTED, of the "can't-happen"
class XX. The other functions succeeded on an empty index; they might
have malfunctioned if the failed index build left torn I/O or other
complex state. Report an ERROR in statistics functions pgstatindex,
pgstatginindex, pgstathashindex, and pgstattuple. Report DEBUG1 and
skip all index I/O in maintenance functions brin_desummarize_range,
brin_summarize_new_values, brin_summarize_range, and
gin_clean_pending_list. Back-patch to v11 (all supported versions).
Discussion: https://postgr.es/m/20231001195309.a3@google.com
This prevents false-positive reports about "the first child of leftmost
target page is not leftmost of its level", "block %u is not leftmost"
and "left link/right link pair". They appeared if amcheck ran before
VACUUM cleaned things, after a cluster exited recovery between the
first-stage and second-stage WAL records of a deletion. Back-patch to
v11 (all supported versions).
Reviewed by Peter Geoghegan.
Discussion: https://postgr.es/m/20231005025232.c7.nmisch@google.com
In the test_decoding module, when the skip_empty_xacts option is
specified, add stream_start/stop around streamed transactional
messages. This makes the streamed handling of transactional messages
consistent irrespective of whether the skip_empty_xacts option is
specified.
Commit 26dd0284b9 made a similar change for non-streaming messages but
forgot to update the streaming cases.
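A hedged sketch of the affected call pattern (the slot name is a
placeholder; the option spellings follow test_decoding's hyphenated
convention, and whether the message is actually streamed depends on the
logical decoding configuration):
-- Emit a transactional message, then consume it with streaming enabled:
SELECT pg_logical_emit_message(true, 'test_prefix', 'test_content');
SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL,
    'include-xids', '0', 'skip-empty-xacts', '1', 'stream-changes', '1');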
Author: Peter Smith
Reviewed-by: Amit Kapila
Discussion: http://postgr.es/m/OS0PR01MB5716AEBD2988F8F5E9D5985794DFA@OS0PR01MB5716.jpnprd01.prod.outlook.com