The following options are added for extensions:
- TAP_TESTS, to allow an extension to run TAP tests, which are the ones
present in t/*.pl. A subset of tests can always be run with the
existing PROVE_TESTS for developers.
- ISOLATION, to define a list of isolation tests.
- ISOLATION_OPTS, to pass custom options to isolation_tester.
Custom Makefile rules to run these test suites had accumulated across
the tree over several releases to make up for the missing PGXS support;
these are all now replaced with the new flags, without reducing the
test coverage. Note that tests of contrib/bloom/ are not enabled yet,
as those are proving unstable in the buildfarm.
Author: Michael Paquier
Reviewed-by: Adam Berlin, Álvaro Herrera, Tom Lane, Nikolay Shaplov,
Arthur Zakirov
Discussion: https://postgr.es/m/20180906014849.GG2726@paquier.xyz
Re-making ecpglib's typename.o is dangerous because another make thread
could be doing that at the same time. While we've not heard field
complaints traceable to this, it seems inevitable that it'd bite someone
eventually. Instead, symlink typename.c into the preproc directory and
recompile it there. That file is small enough that compiling it twice
isn't much of a penalty. Furthermore, this way we get a .o file that's
made without shlib CFLAGS, which seems cleaner.
This requires adding more stuff to the module's -I list. The MSVC
aspect of that is untested, but I'm sure the buildfarm will tell me
if I got it wrong.
Per a suggestion from Peter Eisentraut. Although this is theoretically
a bug fix, the lack of field reports makes me feel we needn't back-patch.
Discussion: https://postgr.es/m/31364.1543511708@sss.pgh.pa.us
This should reduce confusion, and in particular make it safe to
copy typename.c into preproc/ and compile it there.
This doesn't affect anything outside ecpg, and particularly not
end users, because these files don't get installed; they just
exist to share declarations among the .c files of each subdirectory.
Discussion: https://postgr.es/m/31364.1543511708@sss.pgh.pa.us
This allows control of the directory in which the postmaster sockets
are created for the temporary postmasters started by pg_upgrade.
The default location remains the current working directory, which is
typically fine, but if it is deeply nested then its pathname might
be too long to be a socket name.
In passing, clean up some messiness in pg_upgrade's option handling,
particularly the confusing and undocumented way that configuration-only
datadirs were handled. And fix check_required_directory's substantially
under-baked cleanup of directory pathnames.
Daniel Gustafsson, reviewed by Hironobu Suzuki, some code cleanup by me
Discussion: https://postgr.es/m/E72DD5C3-2268-48A5-A907-ED4B34BEC223@yesql.se
TAP tests on msys need to run with the DTK perl, which understands msys
virtualized paths. Postgres, however, does not understand such paths,
so before a path can be used safely with CREATE TABLESPACE, it needs to
be translated into a path on the underlying file system.
Per report from buildfarm member jacana. Suggested fix is from Andrew
Dunstan.
Discussion: https://postgr.es/m/20181130053555.GF2267@paquier.xyz
Three issues are fixed in this patch:
- Base backups forgot to ignore files specific to EXEC_BACKEND, leading
to spurious warnings when checksums are enabled, per analysis from me.
- pg_verify_checksums forgot about files specific to EXEC_BACKEND,
leading to failures of the tool on any such build, particularly Windows.
This error was originally found by newly-introduced TAP tests in various
buildfarm members using EXEC_BACKEND.
- pg_verify_checksums forgot to account for temporary files and temporary
paths, which could be valid relation files without checksums, per
report from Andres Freund. More tests are added to cover this case.
A new test case which emulates corruption for a file in a different
tablespace is added, coming from Michael Banck, while I wrote the
main code and refactored the test code.
Author: Michael Banck, Michael Paquier
Reviewed-by: Stephen Frost, David Steele
Discussion: https://postgr.es/m/20181021134206.GA14282@paquier.xyz
This basically reverts commit d55241af70,
leaving around a portion of the regression tests still adapted with
empty relation files, and corrupted cases. The reverted logic was also
failing to properly check relation files located in a non-default
tablespace path.
Per discussion with various folks, including Stephen Frost, David
Steele, Andres Freund, Michael Banck and myself.
Reported-by: Michael Banck
Discussion: https://postgr.es/m/20181021134206.GA14282@paquier.xyz
Backpatch-through: 11
The source code comments documented this, but the user-facing docs, not
so much. Add a section to Appendix B that discusses it.
In passing, improve a couple other things in Appendix B --- notably,
a long-obsolete claim that time zone abbreviations are looked up in
a fixed table.
Per bug #15527 from Michael Davidson.
Discussion: https://postgr.es/m/15527-f1be0b4dc99ebbe7@postgresql.org
This makes it possible to set a lower log_min_duration_statement value
without incurring excessive log traffic (which reduces performance).
This can be useful to analyze workloads with lots of short queries.
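For illustration, the sampling decision amounts to something like this
(a sketch of the check in postgres.c, assuming the new GUC is named
log_statement_sample_rate and ranges from 0 to 1):

    /*
     * Log a statement that exceeded log_min_duration_statement only if
     * it falls into the sample; a rate of 1 keeps the old behavior.
     */
    in_sample = exceeded &&
        log_statement_sample_rate != 0 &&
        (log_statement_sample_rate == 1 ||
         random() <= log_statement_sample_rate * MAX_RANDOM_VALUE);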
Author: Adrien Nayrat
Reviewed-by: David Rowley, Vik Fearing
Discussion: https://postgr.es/m/c30ee535-ee1e-db9f-fa97-146b9f62caed@anayrat.info
In at least Apple's version of ranlib, the output file is updated to have
a mod time equal to the max of the timestamps of its components, and that
data only has seconds precision. On a filesystem with sub-second file
timestamp precision --- say, APFS --- this can result in the finished
static library appearing older than its input files, which causes useless
rebuilds and possible outright failures in parallel makes.
We've only seen this reported in the field from people using Apple's
ranlib with a non-Apple make, because Apple's make doesn't know about
sub-second timestamps either so it doesn't decide rebuilds are needed.
But Apple's ranlib presumably shares code with at least some BSDen,
so it's not that unlikely that the same problem could arise elsewhere.
To fix, just "touch" the output file after ranlib finishes.
We seem to need this in only one place. There are other calls of
ranlib in our makefiles, but they are working on intermediate files
whose timestamps are not actually important, or else on an installed
static library for which sub-second timestamp precision is unlikely
to matter either. (Also, so far as I can tell, Apple's ranlib doesn't
mess up the file timestamp in the latter usage anyhow.)
In passing, change "ranlib" to "$(RANLIB)" in one place that was
bypassing the make macro for no good reason.
Per bug #15525 from Jack Kelly (via Alyssa Ross).
Back-patch to all supported branches.
Discussion: https://postgr.es/m/15525-a30da084f17a1faa@postgresql.org
When fetching a list of tests for a given extension in contrib/ or
src/test/modules/, NO_INSTALLCHECK now gets checked first. If present,
an empty list of tests is returned to let the caller know that tests
for this module need to be bypassed.
This actually fixes a set of issues with MSVC for modules using
REGRESS_OPTS, as incorrect parsing caused the launched command
to eat the first test listed. The actual effect on the tree is that
several modules listed a single test, so regressions have been running
with no actual tests. pg_stat_statements, test_rls_hooks and commit_ts
were impacted by that. Some other modules like test_decoding (or
snapshot_too_old) don't yet use PGXS rules, but their makefiles will
soon be refactored with an upcoming patch.
Author: Michael Paquier
Reviewed-by: Andrew Dunstan
Discussion: https://postgr.es/m/20181126054302.GI1776@paquier.xyz
This bypasses installcheck if specified, which makes sense for those
modules as they require non-default configuration, something which
typical users don't have. These markings have been missing from the
start, but no back-patch is done.
This will be used by an upcoming patch adding support for NO_INSTALLCHECK
to the MSVC scripts, as installcheck is the default mode for contrib and
modules in the buildfarm for performance reasons.
Author: Michael Paquier
Reviewed-by: Andrew Dunstan
Discussion: https://postgr.es/m/20181126054302.GI1776@paquier.xyz
This fixes an oversight from c6c3334 which forgot that if a subset of
WAL senders are stopping and in a sync state, other WAL senders could
still be waiting for a WAL position to be synced while committing a
transaction. However, the subset of stopping senders would not release
waiters, potentially breaking synchronous replication guarantees. This
commit makes sure that even stopping WAL senders are able to release
waiters and are tracked properly.
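Schematically, the fix makes the candidate check in syncrep.c accept
stopping senders as well (a sketch, not the exact committed diff):

    /*
     * A WAL sender in stopping state must still be counted as a
     * synchronous candidate, so it keeps releasing waiting backends.
     */
    if (walsnd->state != WALSNDSTATE_STREAMING &&
        walsnd->state != WALSNDSTATE_STOPPING)
        continue;        /* not a candidate for sync replication */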
On 9.4, this can also trigger an assertion failure when, for example,
max_wal_senders is set to 1 and a WAL sender is unable to find itself
in a synchronous state when the instance stops.
Reported-by: Paul Guo
Author: Paul Guo, Michael Paquier
Discussion: https://postgr.es/m/CAEET0ZEv8VFqT3C-cQm6byOB4r4VYWcef1J21dOX-gcVhCSpmA@mail.gmail.com
Backpatch-through: 9.4
Move the responsibility for checking for and reporting a failure from
the only current BufFileSize() caller, logtape.c, to BufFileSize()
itself. Code within buffile.c is generally responsible for interfacing
with fd.c to report irrecoverable failures. This seems like a
convention that's worth sticking to.
Reorganizing things this way makes it easy to make the error message
raised in the event of BufFileSize() failure descriptive of the
underlying problem. We're now clear on the distinction between
temporary file name and BufFile name, and can show errno, confident that
its value actually relates to the error being reported. In passing, an
existing, similar buffile.c ereport() + errcode_for_file_access() site
is changed to follow the same conventions.
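The result looks roughly like this (a sketch of the buffile.c side;
details approximate):

    off_t
    BufFileSize(BufFile *file)
    {
        off_t    lastFileSize;

        /* buffile.c itself now reports irrecoverable fd.c failures */
        lastFileSize = FileSize(file->files[file->numFiles - 1]);
        if (lastFileSize < 0)
            ereport(ERROR,
                    (errcode_for_file_access(),
                     errmsg("could not determine size of temporary file \"%s\" from BufFile \"%s\": %m",
                            FilePathName(file->files[file->numFiles - 1]),
                            file->name)));

        return ((file->numFiles - 1) * (off_t) MAX_PHYSICAL_FILESIZE) +
               lastFileSize;
    }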
The API of the function BufFileSize() is changed by this commit, despite
already being in a stable release (Postgres 11). This seems acceptable,
since the BufFileSize() ABI was changed by commit aa55183042, which
hasn't made it into a point release yet. Besides, it's difficult to
imagine a third party BufFileSize() caller not just raising an error
anyway, since BufFile state should be considered corrupt when
BufFileSize() fails.
Per complaint from Tom Lane.
Discussion: https://postgr.es/m/26974.1540826748@sss.pgh.pa.us
Backpatch: 11-, where shared BufFiles were introduced.
The previous recovery.conf regime accepted multiple recovery_target*
settings and used the last one. This does not translate well to the
general GUC system. Specifically, under EXEC_BACKEND, the settings
are written out not in any particular order, so the order in which
they were originally set is not available to new processes.
Rather than redesign the GUC system, it was decided to abandon the old
behavior and only allow one recovery target setting. A second setting
will cause an error. However, it is allowed to set the same parameter
multiple times or unset a parameter and set a different one.
Discussion: https://www.postgresql.org/message-id/flat/27802171543235530%40iva2-6ec8f0a6115e.qloud-c.yandex.net#701a59c837ad0bf8c244344aaf3ef5a4
Since commit 2f1d2b7a we have set PAM_RHOST to "[local]" for Unix
sockets. This caused Linux PAM's libaudit integration to make DNS
requests for that name. It's not exactly clear what value PAM_RHOST
should have in that case, but it seems clear that we shouldn't set it
to an unresolvable name, so don't do that.
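Concretely, the auth code now skips PAM_RHOST for Unix sockets, along
these lines (a sketch; variable names approximate):

    /* Only set PAM_RHOST for connections that have a real peer address. */
    if (port->raddr.addr.ss_family != AF_UNIX)
    {
        retval = pam_set_item(pamh, PAM_RHOST, hostinfo);
        if (retval != PAM_SUCCESS)
            ereport(LOG,
                    (errmsg("pam_set_item(PAM_RHOST) failed: %s",
                            pam_strerror(pamh, retval))));
    }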
Back-patch to 9.6. Bug #15520.
Author: Thomas Munro
Reviewed-by: Peter Eisentraut
Reported-by: Albert Schabhuetl
Discussion: https://postgr.es/m/15520-4c266f986998e1c5%40postgresql.org
During table rewrites (VACUUM FULL and CLUSTER), the main heap is logged
using XLOG / FPI records, and thus (correctly) ignored in decoding.
But the associated TOAST table is WAL-logged as plain INSERT records,
and so was logically decoded and passed to reorder buffer.
That has severe consequences with TOAST tables of non-trivial size.
Firstly, the reorder buffer has to keep all those changes, possibly
spilling them to a file, incurring I/O costs and consuming disk space.
Secondly, ReorderBufferCommit() was stashing all those TOAST chunks into
a hash table, which got discarded only after processing the row from the
main heap. But as the main heap is not decoded for rewrites, this never
happened, so all the TOAST data accumulated in memory, resulting either
in excessive memory consumption or OOM.
The fix is simple, as commit e9edc1ba already introduced infrastructure
(namely HEAP_INSERT_NO_LOGICAL flag) to skip logical decoding of TOAST
tables, but it only applied it to system tables. So simply use it for
all TOAST data in raw_heap_insert().
That would however solve only the memory consumption issue - the TOAST
changes would still be decoded and added to the reorder buffer, and
spilled to disk (although without TOAST tuple data, so much smaller).
But we can solve that by tweaking DecodeInsert() to just ignore such
INSERT records altogether, using XLH_INSERT_CONTAINS_NEW_TUPLE flag,
instead of skipping them later in ReorderBufferCommit().
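The DecodeInsert() tweak boils down to an early exit of this shape
(a sketch, not the exact committed code):

    static void
    DecodeInsert(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
    {
        xl_heap_insert *xlrec;

        xlrec = (xl_heap_insert *) XLogRecGetData(buf->record);

        /*
         * Ignore records without a new tuple, which now includes TOAST
         * insertions from rewrites marked with HEAP_INSERT_NO_LOGICAL.
         */
        if (!(xlrec->flags & XLH_INSERT_CONTAINS_NEW_TUPLE))
            return;

        /* ... decode the insertion as before ... */
    }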
Review: Masahiko Sawada
Discussion: https://www.postgresql.org/message-id/flat/1a17c643-e9af-3dba-486b-fbe31bc1823a%402ndquadrant.com
Backpatch: 9.4-, where logical decoding was introduced
CREATE STATISTICS completion was checking manually for the start and end
of the parenthesised list of types. That works, but we now have a better
way to do that as commit 121213d9d taught word_matches() to allow '*' in
the middle of an alternative. But it only applied that to tab completion
for EXPLAIN, ANALYZE and VACUUM. Use it for CREATE STATISTICS too.
Author: Dagfinn Ilmari Mannsåker
Discussion: https://www.postgresql.org/message-id/flat/d8jwooziy1s.fsf%40dalvik.ping.uio.no
If you extend a relation, it should count as a block written, not
read (we write a zero-filled block). If you ask for a zero-filled
buffer, it shouldn't be counted as read or written.
Later we might consider counting zero-filled buffers with a separate
counter, if they become more common due to future work.
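In terms of the counters used by EXPLAIN (BUFFERS), the accounting
becomes roughly the following (a sketch for shared buffers only; local
buffers have parallel counters):

    if (isExtend)
        pgBufferUsage.shared_blks_written++;   /* zero-filled new block */
    else if (mode != RBM_ZERO_AND_LOCK &&
             mode != RBM_ZERO_AND_CLEANUP_LOCK)
        pgBufferUsage.shared_blks_read++;      /* an actual read */
    /* zero-filled buffers count as neither read nor written */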
Author: Thomas Munro
Reviewed-by: Haribabu Kommi, Kyotaro Horiguchi, David Rowley
Discussion: https://postgr.es/m/CAEepm%3D3JytB3KPpvSwXzkY%2Bdwc5zC8P8Lk7Nedkoci81_0E9rA%40mail.gmail.com
The primary purpose of this commit is to ensure pg_upgrade tests yield
comparable dumps pre/post upgrade, which got broken by 12a53c732 /
578b229718, as the order in pg_largeobject_metadata is likely to
differ pre/post upgrade.
It also seems like a generally good idea to make sure such dumps are
comparable, outside of pg_upgrade tests.
Large object metadata was already dumped in an ordered manner, as the
metadata is dumped in a well defined order via
sortDumpableObjectsByTypeName() and sortDumpableObjects(). But large
object data is currently not tracked via that mechanism.
As Tom points out, it seems possible that at some point dumpBlobs() was
assumed to dump out objects in a well defined order, due to the use of
DISTINCT, which at that time was implemented only by sorting.
Per complaint from Andrew Dunstan and discussion with him and Tom
Lane.
Author: Andres Freund
Discussion: https://postgr.es/m/2735.1543333649@sss.pgh.pa.us
The function generated to perform JIT compiled tuple deforming failed
when HeapTupleHeader's t_hoff was bigger than a signed int8. I'd
failed to realize that LLVM's getelementptr would treat an int8 index
argument as signed, rather than unsigned. That means that a hoff
larger than 127 would result in a negative offset being applied. Fix
that by widening the index to 32 bits.
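The fix is essentially a zero-extension before the offset is used as a
getelementptr index (a sketch using the LLVM C API; variable names
approximate):

    /* t_hoff is an i8; GEP would sign-extend it, so widen explicitly */
    LLVMValueRef v_hoff   = LLVMBuildLoad(b, v_hoffp, "t_hoff");
    LLVMValueRef v_hoff32 = LLVMBuildZExt(b, v_hoff, LLVMInt32Type(),
                                          "t_hoff32");
    /* with the wider index, a t_hoff above 127 stays positive */
    LLVMValueRef v_tupdata = LLVMBuildGEP(b, v_tuplep, &v_hoff32, 1,
                                          "tupdata");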
Add a testcase with a wide table. Don't drop it, as it seems useful to
verify other tools deal properly with wide tables.
Thanks to Justin Pryzby for both reporting a bug and then reducing it
to a reproducible testcase!
Reported-By: Justin Pryzby
Author: Andres Freund
Discussion: https://postgr.es/m/20181115223959.GB10913@telsasoft.com
Backpatch: 11, just as jit compilation was
Debian testing and newer now require that RSA and DHE keys are at
least 2048 bit long and no longer allow SHA-1 for signatures in
certificates. This is currently causing the ssl tests to fail there
because the test certificates and keys have been created in violation
of those conditions.
Update the parameters to create the test files and create a new set of
test files.
Author: Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>
Reported-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://www.postgresql.org/message-id/flat/20180917131340.GE31460%40paquier.xyz
Unfortunately ac218aa4f6 missed the fact that a reference to
'pg_catalog.regnamespace'::pg_catalog.regtype wouldn't work before that
type is known. Fix that by replacing the regtype usage with a join to
pg_type.
Reported-By: Tom Lane
Author: Andres Freund
Discussion: https://postgr.es/m/8863.1543297423@sss.pgh.pa.us
Backpatch: 9.5-, like ac218aa4f6
When the regrole (0c90f6769) and regnamespace (cb9fa802b) types were
added in 9.5, pg_upgrade's check for reg* types wasn't updated. While
regrole currently is safe, regnamespace is not.
It seems unlikely that anybody uses regnamespace inside catalog tables
across a pg_upgrade, but the tests should be correct nevertheless.
While at it, reorder the types checked in the query to be
alphabetical. Otherwise it's annoying to compare the existing types
with the ones tested for.
Author: Andres Freund
Discussion: https://postgr.es/m/037e152a-cb25-3bcb-4f35-bdc9988f8204@2ndQuadrant.com
Backpatch: 9.5-, as regrole/regnamespace
pg_upgrade previously copied pg_largeobject_metadata over from the old
cluster. That doesn't work, because the table has oids before
578b229718. I missed that.
As most pieces of metadata for large objects were already dumped as
DDL (except for comments overwritten by pg_upgrade, due to the copy of
pg_largeobject_metadata), it seems reasonable to just also dump grants
for large objects. If we ever consider this a relevant performance
problem, we'd need to fix the rest of the already emitted DDL
too.
There's still an open discussion about whether we'll want to force a
specific ordering for the dumped objects, as currently
pg_largeobject_metadata potentially has a different ordering
before/after pg_upgrade, which can make automated testing a bit
harder.
Reported-By: Andrew Dunstan
Author: Andres Freund
Discussion: https://postgr.es/m/91a8a980-41bc-412b-fba2-2ba71a141c2b@2ndQuadrant.com
latex_escaped_print() mistranslated \ and failed to provide any translation
for #, ^ and ~, all of which would typically lead to LaTeX document syntax
errors. In addition it didn't translate <, > and |, which would typically
render as unexpected characters.
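For reference, the corrected translations look roughly like this (a
sketch of the switch in latex_escaped_print(); the exact replacement
strings in the committed code may differ):

    case '\\': fputs("\\textbackslash{}", fout); break;
    case '#':  fputs("\\#", fout); break;
    case '^':  fputs("\\^{}", fout); break;
    case '~':  fputs("\\~{}", fout); break;
    case '<':  fputs("\\textless{}", fout); break;
    case '>':  fputs("\\textgreater{}", fout); break;
    case '|':  fputs("\\textbar{}", fout); break;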
To some extent this represents shortcomings in ancient versions of LaTeX,
which if memory serves had no easy way to render these control characters
as ASCII text. But that's been fixed for, um, decades. In any case there
is no value in emitting guaranteed-to-fail output for these characters.
Noted while fooling with test cases added by commit 9a98984f4. Back-patch
the code change to all supported versions.
I'd forgotten that in the buildfarm, parts of the regression tests
may run with psql exposed to a non-default LC_NUMERIC setting.
Hence we can't assume that C locale prevails, nor is there any
accessible way to force the setting for this single test step.
Lobotomize the test case added by commit 9a98984f4 so that it covers as
much as we can of print.c without having any locale-varying output.
"\pset format csv", or --csv, selects comma-separated values table format.
This is compliant with RFC 4180, except that we aren't too picky about
whether the record separator is LF or CRLF; also, the user may choose a
field separator other than comma.
This output format is directly compatible with the server's COPY CSV
format, and will also be useful as input to other programs. It's
considerably safer for that purpose than the old recommendation to
use "unaligned" format, since the latter couldn't handle data
containing the field separator character.
Daniel Vérité, reviewed by Fabien Coelho and David Fetter, some
tweaking by me
Discussion: https://postgr.es/m/a8de371e-006f-4f92-ab72-2bbe3ee78f03@manitou-mail.org
As penance for the "\pset format latex" silliness, add some regression
test coverage for the off-the-beaten-path output formats, which formerly
had exactly no coverage, except for some poorly-thought-out (unreadable,
repetitive, and incomplete) tests for asciidoc format.
I make no claims for the behavior exposed here actually being correct;
these test cases are just designed to ensure full code coverage in
fe_utils/print.c. This brings the line coverage for that file up
from ~60% to ~93%.
Commit eaf746a5b unintentionally made psql's "latex" output format
inaccessible, since not only "latex" but all abbreviations of it
were considered ambiguous against "latex-longtable". Let's go
back to the longstanding behavior that all shortened versions
mean "latex", and you have to write at least "latex-" to get
"latex-longtable". This leaves the only difference from pre-v12
behavior being that "\pset format a" is considered ambiguous.
The fact that the regression tests didn't expose this is pretty bad,
but fixing it is material for a separate commit.
Discussion: https://postgr.es/m/cb7e1caf-3ea6-450d-af28-f524903a030c@manitou-mail.org
A set of failures on buildfarm machines is proving that this is not
quite ready yet, because of another set of issues:
- MSVC scripts assume that REGRESS_OPTS can only use top_builddir. Some
test suites actually finish by using top_srcdir, like pg_stat_statements,
which causes the regression tests to never run.
- Trying to enforce top_builddir does not work either when using VPATH
as this is not recognized properly.
- TAP tests of bloom are unstable on various platforms, causing various
failures.
The set of scripts in charge of running the regression tests for MSVC
currently runs under the assumption that only $(top_builddir) can be
used in option values defined in REGRESS_OPTS, and that those options
need a specific format as well to be correctly parsed, so fix the
Makefile values so that they are set correctly.
Per complaints from buildfarm members dory and whelk, with some extra
testing done on my side with MSVC to check this patch.
The recent commit 03faa4a8 has enabled those tests; however, several
buildfarm members are complaining about their stability on Windows and
macOS. This will keep the buildfarm green while the root problem is
investigated.
Discussion: https://postgr.es/m/20181126003351.GE1776@paquier.xyz
The following options are added for extensions:
- TAP_TESTS, to allow an extension to run TAP tests, which are the ones
present in t/*.pl. A subset of tests can always be run with the
existing PROVE_TESTS for developers.
- ISOLATION, to define a list of isolation tests.
- ISOLATION_OPTS, to pass custom options to isolation_tester.
Custom Makefile targets to run these test suites had accumulated across
the tree over several releases to make up for the missing PGXS support;
these are all now replaced with the new flags, without reducing the test
coverage. This also fixes an issue with contrib/bloom/, which had its
own custom target to trigger its TAP tests, not run as part of the main
check.
Author: Michael Paquier
Reviewed-by: Adam Berlin, Álvaro Herrera, Tom Lane, Nikolay Shaplov,
Arthur Zakirov
Discussion: https://postgr.es/m/20180906014849.GG2726@paquier.xyz
recovery.conf settings are now set in postgresql.conf (or other GUC
sources). Currently, all the affected settings are PGC_POSTMASTER;
this could be refined in the future case by case.
Recovery is now initiated by a file recovery.signal. Standby mode is
initiated by a file standby.signal. The standby_mode setting is
gone. If a recovery.conf file is found, an error is issued.
The trigger_file setting has been renamed to promote_trigger_file as
part of the move.
The documentation chapter "Recovery Configuration" has been integrated
into "Server Configuration".
pg_basebackup -R now appends settings to postgresql.auto.conf and
creates a standby.signal file.
Author: Fujii Masao <masao.fujii@gmail.com>
Author: Simon Riggs <simon@2ndquadrant.com>
Author: Abhijit Menon-Sen <ams@2ndquadrant.com>
Author: Sergei Kornilov <sk@zsrv.org>
Discussion: https://www.postgresql.org/message-id/flat/607741529606767@web3g.yandex.ru/
Commit cfdf4dc4 added an assertion that every WaitLatch() or similar
handles postmaster death. One place did not, but was missed in
review and testing due to the need for an SSL connection. Fix, by
asking for WL_EXIT_ON_PM_DEATH.
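The fix amounts to adding the flag at the offending wait, along these
lines (a hypothetical call site, not the actual one):

    /*
     * WL_EXIT_ON_PM_DEATH makes the process exit promptly if the
     * postmaster dies, instead of waiting forever.
     */
    rc = WaitLatchOrSocket(MyLatch,
                           WL_LATCH_SET | WL_SOCKET_READABLE |
                           WL_EXIT_ON_PM_DEATH,
                           port->sock, -1L /* no timeout */,
                           WAIT_EVENT_CLIENT_READ);
    if (rc & WL_LATCH_SET)
        ResetLatch(MyLatch);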
Reported-by: Christoph Berg
Discussion: https://postgr.es/m/20181124143845.GA15039%40msg.df7cb.de
Hstore data generated on pg 8.4 and pg_upgraded to current versions
remains in its original on-disk format unless modified. The same goes
for values generated by the addon hstore-new module on pre-9.0
versions. (The hstoreUpgrade function converts old values on the fly
when read in, but the on-disk value is not modified by this.)
Since old-format empty hstores (and hstore-new hstores) have
representations compatible with the new format, hstoreUpgrade thought
it could get away without modifying such values; but this breaks
hstore_hash (and the new hstore_hash_extended) which assumes
bit-perfect matching between semantically identical hstore values.
Only one bit actually differs (the "new version" flag in the count
field) but that of course is enough to break the hash.
Fix by making hstoreUpgrade unconditionally convert all old values to
new format.
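Schematically (a sketch; the real function also handles hstore-new
values and the details of the old format):

    HStore *
    hstoreUpgrade(Datum orig)
    {
        HStore *hs = (HStore *) PG_DETOAST_DATUM_COPY(orig);

        /*
         * Previously, old-format empty hstores were returned as-is; now
         * the new-version flag is always stamped, so that semantically
         * equal values are bit-perfect equal, as hstore_hash assumes.
         */
        if (!(hs->size_ & HS_FLAG_NEWVERSION))
        {
            /* ... convert entries to the new layout ... */
            hs->size_ |= HS_FLAG_NEWVERSION;
        }
        return hs;
    }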
Backpatch all the way, even though this changes a hash value in some
cases, because in those cases the hash value is already failing - for
example, a hash join between old- and new-format empty hstores will be
failing to match, or a hash index on an hstore column containing an
old-format empty value will be failing to find the value since it will
be searching for a hash derived from a new-format datum. (There are no
known field reports of this happening, probably because hashing of
hstores has only been useful in limited circumstances and there
probably isn't much upgraded data being used this way.)
Per concerns arising from discussion of commit eb6f29141b. Original
bug is my fault.
Discussion: https://postgr.es/m/60b1fd3b-7332-40f0-7e7f-f2f04f777747%402ndquadrant.com
1. Integer overflow in internal_size could result in memory corruption
in decompression since a zero-length array would be allocated and then
written to. This leads to crashes or corruption when traversing an
index which has been populated with sufficiently sparse values. Fix by
using int64 for computations and checking for overflow (see the sketch
after item 4).
2. Integer overflow in g_int_compress could cause pessimal merge
choices, resulting in unnecessarily large ranges (which would in turn
trigger issue 1 above). Fix by using int64 again.
3. Even without overflow, array sizes could become large enough to
cause unexplained memory allocation errors. Fix by capping the sizes
to a safe limit and reporting actual errors pointing at gist__intbig_ops
as needed.
4. Large inputs to the compression function always consist of large
runs of consecutive integers, and the compression loop was processing
these one at a time in an O(N^2) manner with a lot of overhead. The
expected runtime of this function could easily exceed 6 months for a
single call as a result. Fix by performing a linear-time first pass,
which reduces the worst case to something on the order of seconds.
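For issue 1, the shape of the fix is to do the size arithmetic in int64
so it cannot wrap around (a sketch; the real function's signature and
bounds checks differ):

    static int64
    internal_size(int *a, int len)
    {
        int64    size = 0;
        int      i;

        /* sum the lengths of the stored ranges without int32 overflow */
        for (i = 0; i < len; i += 2)
            size += (int64) a[i + 1] - (int64) a[i] + 1;

        return size;    /* caller errors out if this exceeds the cap */
    }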
Backpatch all the way, since this has been wrong forever.
Per bug #15518, reported by irc user "dymk"; analysis and patch by
me.
Discussion: https://postgr.es/m/15518-799e426c3b4f8358@postgresql.org