Previously the docs just said "5.8 or later". Experimentation shows
that while you can build on Unix from a git checkout with 5.8.0,
compiling recent PL/Perl requires at least 5.8.1, and you won't be
able to run the TAP tests with less than 5.8.3 because that's when
they added "prove". (I do not have any information on just what the
MSVC build scripts require.)
Since all these versions are quite ancient, let's not split hairs
in the docs, but just say that 5.8.3 is the minimum requirement.
Discussion: https://postgr.es/m/16894.1501392088@sss.pgh.pa.us
Previously we had a mix of uses of AC_CHECK_PROG[S] and AC_PATH_PROG[S].
The only difference between those macros is that the latter emits the
full path to the program it finds, eg "/usr/bin/prove", whereas the
former emits just "prove". Let's standardize on always emitting the
full path; this is better for documentation of the build, and it might
prevent some types of failures if later build steps are done with
a different PATH setting.
I did not touch the AC_CHECK_PROG[S] calls in ax_pthread.m4 and
ax_prog_perl_modules.m4. There seems no need to make those diverge from
upstream, since we do not record the programs sought by the former, while
the latter's call to AC_CHECK_PROG(PERL,...) will never be reached.
Discussion: https://postgr.es/m/25937.1501433410@sss.pgh.pa.us
Coverity complained about this code's practice of using scalar variables
as single-element arrays. While that's really just nitpicking, it probably
is more readable to declare them as arrays, so let's do that. A more
important point is that the code was just blithely assuming that the
result tupledesc has exactly one column; if it doesn't, we'd likely get
a crash of some sort in tuplestore_putvalues. Since the tupledesc is
manufactured outside of plperl, that seems like an uncomfortably long
chain of assumptions. We can nail it down at little cost with a sanity
check earlier in the function.
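For illustration, the added check might look about like this (a sketch,
not the committed code; variable names are illustrative):

    Datum       values[1];      /* arrays now, per the Coverity nit */
    bool        nulls[1];

    /* nail down the one-column assumption instead of trusting it */
    if (tupdesc->natts != 1)
        ereport(ERROR,
                (errcode(ERRCODE_DATATYPE_MISMATCH),
                 errmsg("expected single-column result, got %d columns",
                        tupdesc->natts)));

    values[0] = (Datum) 0;      /* fill in the single column as before */
    nulls[0] = true;
    tuplestore_putvalues(tupstore, tupdesc, values, nulls);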
The comment for dumpACL() got neglected when initacls and initracls were
added, and the discussion of what 'racls' is wasn't very clear either.
Per complaint from Tom.
This allows us to add stack-depth checks the first time an executor
node is called, and skip that overhead on following
calls. Additionally it yields a nice speedup.
While it'd probably have been a good idea to have that check all
along, it has become more important after the new expression
evaluation framework in b8d7f053c5 - there's no stack depth
check in common paths anymore. We previously relied on
ExecEvalExpr() being executed somewhere.
We should move towards that model for further routines, but as this is
required for v10, it seems better to do only what is necessary (which
is already quite a large change).
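For illustration, the first-call wrapper is roughly of this shape (treat
this as a sketch rather than the exact committed code):

    static TupleTableSlot *
    ExecProcNodeFirst(PlanState *node)
    {
        /* pay the stack-depth check only on the node's first call */
        check_stack_depth();

        /* then swap in the real function so later calls skip this */
        node->ExecProcNode = node->ExecProcNodeReal;
        return node->ExecProcNode(node);
    }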
Author: Andres Freund, Tom Lane
Reported-By: Julien Rouhaud
Discussion:
https://postgr.es/m/22833.1490390175@sss.pgh.pa.us
https://postgr.es/m/b0af9eaa-130c-60d0-9e4e-7a135b1e0c76@dalibo.com
In a followup commit ExecProcNode(), and especially the large switch
it contains, will largely be replaced by a function pointer directly
to the correct node. The node functions will then get invoked by a
thin inline function wrapper. To avoid having to include miscadmin.h
(which provides CHECK_FOR_INTERRUPTS()) in headers, move the interrupt
checks into the individual executor routines.
While looking through all executor nodes, I noticed a number of
arguably missing interrupt checks; add these too.
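The per-node pattern is about like this (sketched on a scan node):

    static TupleTableSlot *
    ExecSeqScan(SeqScanState *node)
    {
        /* formerly done centrally in ExecProcNode()'s dispatch */
        CHECK_FOR_INTERRUPTS();

        return ExecScan(&node->ss,
                        (ExecScanAccessMtd) SeqNext,
                        (ExecScanRecheckMtd) SeqRecheck);
    }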
Author: Andres Freund, Tom Lane
Reviewed-By: Tom Lane
Discussion:
https://postgr.es/m/22833.1490390175@sss.pgh.pa.us
The Perl documentation is very clear that stuff calling libperl should
be built with the compiler switches shown by Perl's $Config{ccflags}.
We'd been ignoring that up to now, and mostly getting away with it,
but recent Perl versions contain ABI compatibility cross-checks that
fail on some builds because of this omission. In particular,
sizeof(PerlInterpreter) can come out different due to some fields being
added or removed, which means we have a live ABI hazard that we'd better
fix rather than continuing to sweep it under the rug.
However, it still seems like a bad idea to just absorb $Config{ccflags}
verbatim. In some environments Perl was built with a different compiler
that doesn't even use the same switch syntax. -D switch syntax is pretty
universal though, and absorbing Perl's -D switches really ought to be
enough to fix the problem.
Furthermore, Perl likes to inject stuff like -D_LARGEFILE_SOURCE and
-D_FILE_OFFSET_BITS=64 into $Config{ccflags}, which affect libc ABIs on
platforms where they're relevant. Adopting those seems dangerous too.
It's unclear whether a build wherein Perl and Postgres have different ideas
of sizeof(off_t) etc would work, or whether anyone would care about making
it work. But it's dead certain that having different stdio ABIs in
core Postgres and PL/Perl will not work; we've seen that movie before.
Therefore, let's also ignore -D switches for symbols beginning with
underscore. The symbols that we actually need to import should be the ones
mentioned in perl.h's PL_bincompat_options stanza, and none of those start
with underscore, so this seems likely to work. (If it turns out not to
work everywhere, we could consider intersecting the symbols mentioned in
PL_bincompat_options with the -D switches. But that will be much more
complicated, so let's try this way first.)
This will need to be back-patched, but first let's see what the
buildfarm makes of it.
Ashutosh Sharma, some adjustments by me
Discussion: https://postgr.es/m/CANFyU97OVQ3+Mzfmt3MhuUm5NwPU=-FtbNH5Eb7nZL9ua8=rcA@mail.gmail.com
In Perl builds that define PERL_IMPLICIT_SYS, XSUB.h defines macros
that replace a whole lot of basic libc functions with Perl functions.
We can't tolerate that in plperl.c; it breaks at least PG_TRY and
probably other stuff. The core idea of this patch is to include XSUB.h
only in the .xs files where it's really needed, and to move any code
broken by PERL_IMPLICIT_SYS out of the .xs files and into plperl.c.
The reason this hasn't been a problem before is that our build techniques
did not result in PERL_IMPLICIT_SYS appearing as a #define in PL/Perl,
even on some platforms where Perl thinks it is defined. That's about to
change in order to fix a nasty portability issue, so we need this work to
make the code safe for that.
Rather unaccountably, the Perl people chose XSUB.h as the place to provide
the versions of the aTHX/aTHX_ macros that are needed by code that's not
explicitly aware of the MULTIPLICITY API conventions. Hence, just removing
XSUB.h from plperl.c fails miserably. But we can work around that by
defining PERL_NO_GET_CONTEXT (which would make the relevant stanza of
XSUB.h a no-op anyway). As explained in perlguts.pod, that means we need
to add a "dTHX" macro call in every C function that calls a Perl API
function. In most of them we just add this at the top; but since the
macro fetches the current Perl interpreter pointer, more care is needed
in functions that switch the active interpreter. Lack of the macro is
easily recognized since it results in bleats about "my_perl" not being
defined.
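For instance, a function of this shape (a sketch, not actual plperl
code) now needs the macro:

    static void
    plperl_some_helper(void)        /* hypothetical function */
    {
        dTHX;                       /* fetch the current interpreter */
        SV         *sv;

        /*
         * Perl API macros expand to Perl_xxx(aTHX_ ...), i.e. they
         * reference my_perl, which dTHX declared above.
         */
        sv = newSVpvn("example", 7);
        SvREFCNT_dec(sv);
    }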
(A nice side benefit of this is that it significantly reduces the number
of fetches of the current interpreter pointer. On my machine, plperl.so
gets more than 10% smaller, and there's probably some performance win too.
We could reduce the number of fetches still more by decorating the code
with pTHX_/aTHX_ macros to pass the interpreter pointer around, as
explained by perlguts.pod; but that's a task for another day.)
Formatting note: pgindent seems happy to treat "dTHX;" as a declaration
so long as it's the first thing after the left brace, as we'd already
observed with respect to the similar macro "dSP;". If you try to put
it later in a set of declarations, pgindent puts ugly extra space
around it.
Having removed XSUB.h from plperl.c, we need only move the support
functions for spi_return_next and util_elog (both of which use PG_TRY)
out of the .xs files and into plperl.c. This seems sufficient to
avoid the known problems caused by PERL_IMPLICIT_SYS, although we
could move more code if additional issues emerge.
This will need to be back-patched, but first let's see what the
buildfarm makes of it.
Patch by me, with some help from Ashutosh Sharma
Discussion: https://postgr.es/m/CANFyU97OVQ3+Mzfmt3MhuUm5NwPU=-FtbNH5Eb7nZL9ua8=rcA@mail.gmail.com
Most functions in this file are content to print an empty table if there
are no matching objects. In some, the behavior is to loop over all
matching objects and print a table for each one; therefore, without any
extra logic, nothing at all would be printed if no objects match.
We accept that outcome in QUIET mode, but in normal mode it seems better
to print a helpful message. The new \dRp+ command had not gotten that
memo; fix it.
listDbRoleSettings() is out of step on this, but I think it's better for
it to print a custom message rather than an empty table, because of the
possibility that the user is confused about what the pattern arguments mean
or which is which. The original message wording was entirely useless for
clarifying that, though, not to mention being unlike the wordings used
elsewhere. Improve the text, and also print the messages with psql_error
as is the general custom here.
listTables() is also out in left field, but since it's such a heavily
used function, I'm hesitant to change its behavior so much as to print
an empty table rather than a custom message. People are probably used
to getting a message. But we can make the wording more standardized and
helpful, and print it with psql_error rather than printing to stdout.
In both listDbRoleSettings and listTables, we play dumb and emit an
empty table, not a custom message, in QUIET mode. That was true before
and I see no need to change it.
Several of the places printing such messages risked dumping core if
no pattern string had been provided; make them more wary. (This case
is presently unreachable in describeTableDetails; but it shouldn't be
assuming that command.c will never pass it a null. The text search
functions would only reach the case if a database contained no text
search objects, which is also currently impossible since we pin the
built-in objects, but again it seems unwise to assume that here.)
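The wary pattern is roughly this (message wording illustrative):

    if (!pset.quiet)
    {
        if (pattern)
            psql_error("Did not find any text search configuration named \"%s\".\n",
                       pattern);
        else
            psql_error("Did not find any text search configurations.\n");
    }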
Daniel Gustafsson, tweaked a bit by me
Discussion: https://postgr.es/m/3641F19B-336A-431A-86CE-A80562505C5E@yesql.se
Most places were already using the PQExpBuffer library for constructing
variable-length strings; bring the two stragglers into line.
describeOneTSParser was living particularly dangerously since it wasn't
even using snprintf().
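For reference, the PQExpBuffer idiom used everywhere else looks about
like this (a sketch):

    PQExpBufferData buf;
    PGresult   *res;

    initPQExpBuffer(&buf);
    printfPQExpBuffer(&buf,
                      "SELECT p.prsname FROM pg_catalog.pg_ts_parser p "
                      "WHERE p.oid = '%s'",
                      oid);     /* oid: string fetched by a prior query;
                                 * no fixed-size buffer to overrun */
    res = PSQLexec(buf.data);
    termPQExpBuffer(&buf);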
Daniel Gustafsson
Discussion: https://postgr.es/m/3641F19B-336A-431A-86CE-A80562505C5E@yesql.se
listDbRoleSettings() handled its server version check in a gratuitously
different way from every other comparable function in describe.c, not
only as to code layout but also message wording. It also leaked memory,
because its PQExpBuffer management was likewise unlike everyplace else
(and wrong).
Also fix an error-case leak in add_tablespace_footer().
In passing, standardize the format of function header comments in
describe.c --- we usually put "/*" alone on a line.
Daniel Gustafsson, memory leak fixes by me
Discussion: https://postgr.es/m/3641F19B-336A-431A-86CE-A80562505C5E@yesql.se
\drds leaked its second pattern argument if any, and \connect leaked
any empty-string or "-" arguments. These are old bugs, but it's hard
to imagine any real use-case where the leaks could amount to anything
meaningful, so not bothering with a back-patch.
Daniel Gustafsson and Tom Lane
Discussion: https://postgr.es/m/3641F19B-336A-431A-86CE-A80562505C5E@yesql.se
When output of IPC::Run::run() is redirected to scalar references, in
certain circumstances the Msys perl does not correctly detect that the
end of file has been seen, making the test hang indefinitely. One such
circumstance is when the command is 'pg_ctl start', and such a change
was made in commit f13ea95f9e. The workaround, which applies only on
Msys, is to redirect the output to temporary files and then read them in
when the process has finished.
Patch by me, reviewed and tweaked by Tom Lane.
Fix assorted places that had not bothered with the convention of
prefixing catalog and function names with "pg_catalog.". That
could possibly result in query failure when running with a nondefault
search_path. Also fix two places that weren't quoting OID literals.
I think the latter hasn't mattered much since about 7.3, but it's still
a bad idea to be doing it in 99 places and not in 2 others.
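That is, queries should be written about like this (sketch):

    appendPQExpBuffer(&buf,
                      "SELECT c.relname\n"
                      "FROM pg_catalog.pg_class c\n"   /* schema-qualified */
                      "WHERE c.oid = '%s'",            /* OID literal quoted */
                      oid);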
Also remove a useless EXISTS sub-select that someone had stuck into
describeOneTableDetails' queries for child tables. We just got the OID
out of pg_class, so I hardly see how checking that it exists in pg_class
was doing anything helpful.
In passing, try to improve the emitted formatting of a couple of
these queries, though I didn't work really hard on that. And merge
unnecessarily duplicative coding in some other places.
Much of this was new in HEAD, but some was quite old; back-patch
as appropriate.
If several sessions concurrently lock a tuple update chain with
nonconflicting lock modes using an old snapshot, and they all succeed,
it may happen that some of them restart the loop (due to a concurrent
Xmax change) and then fail in the subsequent pass while trying to
obtain a tuple lock that they already hold on some version of the
tuple.
This can only happen with very high concurrency (where a row is being
both updated and FK-checked by multiple transactions concurrently), but
it's been observed in the field and can have unpleasant consequences
such as an FK check failing to see a tuple that definitely exists:
  ERROR:  insert or update on table "child_table" violates foreign key constraint "fk_constraint_name"
  DETAIL:  Key (keyid)=(123456) is not present in table "parent_table".
(where the key is observably present in the table).
Discussion: https://postgr.es/m/20170714210011.r25mrff4nxjhmf3g@alvherre.pgsql
Early submitted versions of the functional dependencies patch ignored
row groups that were smaller than a configured size. That consideration
was removed in the late stages of the patch, just before commit, but
some comments referring to it remained. Remove them to avoid confusion.
Author: Atsushi Torikoshi
Discussion: https://postgr.es/m/7cfb23fc-4493-9c02-5da9-e505fd0115d2@lab.ntt.co.jp
This module becomes much more useful if we allow it to be used as a base
class for external projects. To achieve this, change the exported
get_new_node function into a class method instead, and use the standard
Perl idiom of accepting the class as first argument. This method works
as expected for subclasses. The standalone function is kept for
backwards compatibility, though it could be removed in pg11.
Author: Chap Flackman, based on an earlier patch from Craig Ringer
Discussion: https://postgr.es/m/CAMsr+YF8kO+4+K-_U4PtN==2FndJ+5Bn6A19XHhMiBykEwv0wA@mail.gmail.com
It is relatively easy for a replication slot to appear still active
while one process is in the midst of getting rid of it; when some other
process then tries to "acquire" the slot, it fails with an error message
of "replication slot XYZ is active for PID N".
The error message in itself is fine, except that when the intention is
to drop the slot, it is unhelpful: the useful behavior would be to wait
until the slot is no longer acquired, so that the drop can proceed. To
implement this, we use a condition variable so that slot acquisition can
be told to wait on that condition variable if the slot is already
acquired, and we make any change in active_pid broadcast a signal on the
condition variable. Thus, as soon as the slot is released, the drop
will proceed properly.
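In sketch form (the actual field and wait-event names may differ):

    /* dropping side: wait for the slot to be released */
    ConditionVariablePrepareToSleep(&slot->active_cv);
    while (slot->active_pid != 0)
        ConditionVariableSleep(&slot->active_cv, PG_WAIT_LOCK);
    ConditionVariableCancelSleep();

    /* releasing side: any change of active_pid signals waiters */
    slot->active_pid = 0;
    ConditionVariableBroadcast(&slot->active_cv);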
Reported by: Tom Lane
Discussion: https://postgr.es/m/11904.1499039688@sss.pgh.pa.us
Authors: Petr Jelínek, Álvaro Herrera
Trading a little too heavily on letting the code path be the same whether
we were creating shared data structures or only attaching to them,
InitPredicateLocks() inserted the "scratch" PredicateLockTargetHash entry
unconditionally. This is just wrong if we're in a postmaster child,
which would only reach this code in EXEC_BACKEND builds. Most of the
time, the hash_search(HASH_ENTER) call would simply report that the
entry already existed, causing no visible effect since the code did not
bother to check for that possibility. However, if this happened while
some other backend had transiently removed the "scratch" entry, then
that other backend's eventual RestoreScratchTarget would suffer an
assert failure; this appears to be the explanation for a recent failure
on buildfarm member culicidae. In non-assert builds, there would be
no visible consequences there either. But nonetheless this is a pretty
bad bug for EXEC_BACKEND builds, for two reasons:
1. Each new backend would perform the hash_search(HASH_ENTER) call
without holding any lock that would prevent concurrent access to the
PredicateLockTargetHash hash table. This creates a low but certainly
nonzero risk of corruption of that hash table.
2. In the event that the race condition occurred, by reinserting the
scratch entry too soon, we were defeating the entire purpose of the
scratch entry, namely to guarantee that transaction commit could move
hash table entries around with no risk of out-of-memory failure.
The odds of an actual OOM failure are quite low, but not zero, and if
it did happen it would again result in corruption of the hash table.
The user-visible symptoms of such corruption are a little hard to predict,
but would presumably amount to misbehavior of SERIALIZABLE transactions
that'd require a crash or postmaster restart to fix.
To fix, just skip the hash insertion if IsUnderPostmaster. I also
inserted a bunch of assertions that the expected things happen
depending on whether IsUnderPostmaster is true. That might be overkill,
since most comparable code in other functions isn't quite that paranoid,
but once burnt twice shy.
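In sketch form, the insertion is now guarded about like this:

    if (!IsUnderPostmaster)
    {
        bool        found;

        /* only the process creating the shared structures inserts it */
        (void) hash_search(PredicateLockTargetHash, &ScratchTargetTag,
                           HASH_ENTER, &found);
        Assert(!found);
    }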
In passing, also move a couple of lines to places where they seemed
to make more sense.
Diagnosis of problem by Thomas Munro, patch by me. Back-patch to
all supported branches.
Discussion: https://postgr.es/m/10593.1500670709@sss.pgh.pa.us
If the user modifies a view that has CHECK OPTIONs and this gets
translated into a modification to an underlying relation which happens
to be a foreign table, the check options should be enforced. In the
normal code path, that was happening properly, but it was not working
properly for "direct" modification because the whole operation gets
pushed to the remote side in that case and we never have an option to
enforce the constraint against individual tuples. Fix by disabling
direct modification when there is a need to enforce CHECK OPTIONs.
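The planning-time test amounts to roughly this (a sketch of the check
in postgres_fdw's direct-modify path):

    /*
     * Can't push the update down if CHECK OPTIONs must be enforced,
     * since that requires inspecting individual result tuples.
     */
    if (plan->withCheckOptionLists != NIL)
        return false;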
Etsuro Fujita, reviewed by Kyotaro Horiguchi and by me.
Discussion: http://postgr.es/m/f8a48f54-6f02-9c8a-5250-9791603171ee@lab.ntt.co.jp
Various cases involving renaming of view columns are handled by having
make_viewdef pass down the view's current relation tupledesc to
get_query_def, which then takes care to use the column names from the
tupledesc for the output column names of the SELECT. For some reason
though, we'd missed teaching make_ruledef to do similarly when it is
printing an ON SELECT rule, even though this is exactly the same case.
The results from pg_get_ruledef would then be different and arguably wrong.
In particular, this breaks pre-v10 versions of pg_dump, which in some
situations would define views by means of emitting a CREATE RULE ... ON
SELECT command. Third-party tools might not be happy either.
In passing, clean up some crufty code in make_viewdef; we'd apparently
modernized the equivalent code in make_ruledef somewhere along the way,
and missed this copy.
Per report from Gilles Darold. Back-patch to all supported versions.
Discussion: https://postgr.es/m/ec05659a-40ff-4510-fc45-ca9d965d0838@dalibo.com
Add error checks in some places that were calling get_opfamily_member
or get_opfamily_proc and just assuming that the call could never fail.
Also, standardize the wording for such errors in some other places.
None of these errors are expected in normal use, hence they're just
elog not ereport. But they may be handy for diagnosing omissions in
custom opclasses.
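The standardized pattern is roughly:

    Oid         proc;

    proc = get_opfamily_proc(opfamily, lefttype, righttype, BTORDER_PROC);
    if (!OidIsValid(proc))
        elog(ERROR, "missing support function %d(%u,%u) in opfamily %u",
             BTORDER_PROC, lefttype, righttype, opfamily);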
Rushabh Lathia found the oversight in RelationBuildPartitionKey();
I found the others by grepping for all callers of these functions.
Discussion: https://postgr.es/m/CAGPqQf2R9Nk8htpv0FFi+FP776EwMyGuORpc9zYkZKC8sFQE3g@mail.gmail.com
pg_dump with the --clean option failed to emit DROP EVENT TRIGGER
commands for event triggers. In a closely related oversight,
it also did not emit ALTER OWNER commands for event triggers.
Since only superusers can create event triggers, the latter oversight
is of little practical consequence ... but if we're going to record
an owner for event triggers, then surely pg_dump should preserve it.
Per complaint from Greg Atkins. Back-patch to 9.3 where event triggers
were introduced.
Discussion: https://postgr.es/m/20170722191142.yi4e7tzcg3iacclg@gmail.com
While I couldn't find any live bugs in commit 44ca4022f, the comments
seemed pretty far from adequate; in particular it was not made plain that
"borrowing" entries from other freelists is critical for correctness.
Try to improve the commentary. A couple of very minor code style
tweaks, as well.
Discussion: https://postgr.es/m/10593.1500670709@sss.pgh.pa.us
I believe this changed as a consequence of commit 54baa4813: trying to
clone the "C" collation now produces a true clone with collencoding -1,
hence the error message when it's a duplicate no longer specifies an encoding.
Per buildfarm member crake, which apparently hadn't been running this
test for the last few weeks.
The descriptions of PQserverVersion and PQlibVersion hadn't been updated
for the new two-part version-numbering approach. Fix that.
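For example (a sketch):

    int         sver = PQserverVersion(conn);

    /*
     * Under the new two-part scheme (v10 and later), 100001 means 10.1;
     * under the old three-part scheme, 90605 meant 9.6.5.
     */
    if (sver >= 100000)
        printf("major %d, minor %d\n", sver / 10000, sver % 10000);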
In passing, remove some trailing whitespace elsewhere in the file.
When incrementally updating a file larger than 2GB, the old code could
either fail outright (if the client asked the server for bytes beyond
the 2GB boundary) or fail to copy all the blocks that had actually
been modified (if the server reported a file size to the client in
excess of 2GB), resulting in data corruption. Generally, such files
won't occur anyway, but they might if using a non-default segment size
or if the directory contains stray files unrelated to
PostgreSQL. Fix by a more prudent choice of data types.
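The class of bug looks about like this (illustrative, not the actual
code):

    uint32      blockno = 300000;   /* a block past the 2GB boundary */

    int         bad = blockno * BLCKSZ;     /* doesn't fit in int32 */
    off_t       good = (off_t) blockno * BLCKSZ;    /* widen first: OK */

With the default BLCKSZ of 8192, 300000 * 8192 is about 2.4GB, which
does not fit in a signed 32-bit integer.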
Even with these improvements, this code still uses a mix of different
types (off_t, size_t, uint64, int64) to represent file sizes and
offsets, not all of which necessarily have the same width or
signedness, so further cleanup might be in order here. However, at
least now they all have the potential to be 64 bits wide on 64-bit
platforms.
Kuntal Ghosh and Michael Paquier, with a tweak by me.
Discussion: http://postgr.es/m/CAGz5QC+8gbkz=Brp0TgoKNqHWTzonbPtPex80U0O6Uh_bevbaA@mail.gmail.com
The new test cases added in commit 8bf58c0d9 turn out to have output
that can vary depending on the lc_messages setting prevailing on the
test server. Hide the remote end's error messages to ensure stable
output. This isn't a terribly desirable solution; we'd rather know
that the connection failed for the expected reason and not some other
one. But there seems little choice for the moment.
Per buildfarm.
Discussion: https://postgr.es/m/18419.1500658570@sss.pgh.pa.us
Previously, postgres_fdw would keep on using an existing connection even
if the user did ALTER SERVER or ALTER USER MAPPING commands that should
affect connection parameters. Teach it to watch for catcache invals
on these catalogs and re-establish connections when the relevant catalog
entries change. Per bug #14738 from Michal Lis.
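In sketch form (the callback name here is illustrative):

    static void
    InvalidateConnectionCache(Datum arg, int cacheid, uint32 hashvalue)
    {
        /* mark matching ConnCacheEntrys so the next use reconnects */
    }

    /* registered once, when the connection cache is first set up */
    CacheRegisterSyscacheCallback(FOREIGNSERVEROID,
                                  InvalidateConnectionCache, (Datum) 0);
    CacheRegisterSyscacheCallback(USERMAPPINGOID,
                                  InvalidateConnectionCache, (Datum) 0);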
In passing, clean up some rather crufty decisions in commit ae9bfc5d6
about where fields of ConnCacheEntry should be reset. We now reset
all the fields whenever we open a new connection.
Kyotaro Horiguchi, reviewed by Ashutosh Bapat and myself.
Back-patch to 9.3 where postgres_fdw appeared.
Discussion: https://postgr.es/m/20170710113917.7727.10247@wrigleys.postgresql.org
SLRU buffer lwlocks were allocated twice, through an oversight in
commit fe702a7b3f, which moved those locks to a
separate tranche. The bug has no user-visible effects other than a small
overspending of shared memory.
Backpatch to 9.6 where it was introduced.
Alexander Korotkov, with minor editing by me.
The order of partitions listed by \d+ is in general locale-dependent.
Rename the partitions in the test added by d363d42bb9 to force them to
be listed in a consistent order.
Previously, UNBOUNDED meant no lower bound when used in the FROM list,
and no upper bound when used in the TO list, which was OK for
single-column range partitioning, but problematic with multiple
columns. For example, an upper bound of (10.0, UNBOUNDED) would not be
collocated with a lower bound of (10.0, UNBOUNDED), thus making it
difficult or impossible to define contiguous multi-column range
partitions in some cases.
Fix this by using MINVALUE and MAXVALUE instead of UNBOUNDED to
represent a partition column that is unbounded below or above
respectively. This syntax removes any ambiguity, and ensures that if
one partition's lower bound equals another partition's upper bound,
then the partitions are contiguous.
Also drop the constraint prohibiting finite values after an unbounded
column, and just document the fact that any values after MINVALUE or
MAXVALUE are ignored. Previously it was necessary to repeat UNBOUNDED
multiple times, which was needlessly verbose.
Note: Forces a post-PG 10 beta2 initdb.
Report by Amul Sul, original patch by Amit Langote with some
additional hacking by me.
Discussion: https://postgr.es/m/CAAJ_b947mowpLdxL3jo3YLKngRjrq9+Ej4ymduQTfYR+8=YAYQ@mail.gmail.com
Claiming that NATURAL JOIN is equivalent to CROSS JOIN when there are
no common column names is only strictly correct if it's an inner join;
you can't say e.g. CROSS LEFT JOIN. Better to explain it as meaning
JOIN ON TRUE, instead. Per a suggestion from David Johnston.
Discussion: https://postgr.es/m/CAKFQuwb+mYszQhDS9f_dqRrk1=Pe-S6D=XMkAXcDf4ykKPmgKQ@mail.gmail.com
Normally, a JoinExpr would have empty "quals" only if it came from CROSS
JOIN syntax. However, it's possible to get to this state by specifying
NATURAL JOIN between two tables with no common column names, and there
might be other ways too. The code previously printed no ON clause if
"quals" was empty; that's right for CROSS JOIN but syntactically invalid
if it's some type of outer join. Fix by printing ON TRUE in that case.
This got broken by commit 2ffa740be, which stopped using NATURAL JOIN
syntax in ruleutils output due to its brittleness in the face of
column renamings. Back-patch to 9.3 where that commit appeared.
Per report from Tushar Ahuja.
Discussion: https://postgr.es/m/98b283cd-6dda-5d3f-f8ac-87db8c76a3da@enterprisedb.com
When pg_control was first designed, sizeof(ControlFileData) was small
enough that a comment seemed like plenty to document the assumption that
it'd fit into one disk sector. Now it's nearly 300 bytes, raising the
possibility that somebody would carelessly add enough stuff to create
a problem. Let's add a StaticAssertStmt() to ensure that the situation
doesn't pass unnoticed if it ever occurs.
While at it, rename PG_CONTROL_SIZE to PG_CONTROL_FILE_SIZE to make it
clearer what that symbol means, and convert the existing runtime
comparisons of sizeof(ControlFileData) vs. PG_CONTROL_FILE_SIZE to be
static asserts --- we didn't have that technology when this code was
first written.
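In sketch form (the exact conditions and messages are illustrative):

    StaticAssertStmt(sizeof(ControlFileData) <= 512,
                     "pg_control must fit in one 512-byte disk sector");
    StaticAssertStmt(sizeof(ControlFileData) <= PG_CONTROL_FILE_SIZE,
                     "pg_control file layout is broken");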
Discussion: https://postgr.es/m/9192.1500490591@sss.pgh.pa.us
It seemed a bit silly that each caller of make_tsvector() was laboriously
special-casing the situation where no lexemes were found, when it would
be easy and much more bullet-proof to make make_tsvector() handle that.
In an off-list followup to bug #14745, Bob Jones complained that
to_tsvector() on a 2MB jsonb value took an unreasonable amount of
time and space --- enough to draw the wrath of the OOM killer on
his machine. On my machine, his example proved to require upwards
of 18 seconds and 4GB, which seemed pretty bogus considering that
to_tsvector() on the same data treated as text took just a couple
hundred msec and 10 or so MB.
On investigation, the problem is that the implementation scans each
string element of the json(b) and converts it to tsvector separately,
then applies tsvector_concat() to join those separate tsvectors.
The unreasonable memory usage came from leaking every single one of
the transient tsvectors --- but even without that mistake, this is an
O(N^2) or worse algorithm, because tsvector_concat() has to repeatedly
process the words coming from earlier elements.
We can fix it by accumulating all the lexeme data and applying
make_tsvector() just once. As a side benefit, that also makes the
desired adjustment of lexeme positions far cheaper, because we can
just tweak the running "pos" counter between JSON elements.
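In sketch form (the iteration helpers here are hypothetical; only
make_tsvector() is the real function):

    ParsedText  prs;

    prs.words = NULL;
    prs.lenwords = 0;
    prs.curwords = 0;
    prs.pos = 0;

    /*
     * Walk the jsonb, appending each string's lexemes to prs and
     * bumping prs.pos between elements.
     */
    iterate_jsonb_strings(jb, &prs, add_lexemes_to_parsedtext);

    /* one O(N) conversion replaces N calls to tsvector_concat() */
    result = make_tsvector(&prs);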
In passing, try to make the explanation of that tweak more intelligible.
(I didn't think that a barely-readable comment far removed from the
actual code was helpful.) And do some minor other code beautification.