This data was only in separate files because it was the most convenient
way to handle it with a shell script. Now that we use a general-purpose
programming language, it's easy to assemble the data into the same format
as the rest of the catalogs and output it into postgres.bki. This allows
removal of some special-purpose code from initdb.c.
Discussion: https://www.postgresql.org/message-id/CACPNZCtVFtjHre6hg9dput0qRPp39pzuyA2A6BT8wdgrRy%2BQdA%40mail.gmail.com
Author: John Naylor
Rather than intermixing the discussion of text-string and binary-string
functions, make a clean break, moving all discussion of binary-string
operations into section 9.5. This involves some duplication of
function descriptions between 9.4 and 9.5, but it seems cleaner on the
whole since the individual descriptions are clearer (and on the other
side of the coin, it gets rid of some duplicated descriptions, too).
Move the convert*/encode/decode functions to a separate table, because
they don't quite seem to fit under the heading of "binary string
functions".
Also provide full documentation of the textual formats supported by
encode() and decode() (which was the original goal of this patch
series, many moons ago).
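For instance (results shown as comments):

    SELECT encode('\x1234'::bytea, 'hex');    -- 1234
    SELECT decode('MTIzAAE=', 'base64');      -- \x3132330001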
Also move the table of built-in encoding conversions out of section 9.4,
where it no longer had any relevance whatsoever, and put it into section
23.3 about character sets. I chose to put both that and table 23.2
(multibyte-translation-table) into a new <sect2> so as not to break up
the flow of discussion in 23.3.3.
Also do a bunch of minor copy-editing on the function descriptions
in 9.4 and 9.5.
Karl Pinc, reviewed by Fabien Coelho, further hacking by me
Discussion: https://postgr.es/m/20190304163347.7bca4897@slate.meme.com
Setting incompatible protocol bounds in the SSL context leads to
confusing error messages generated by OpenSSL which are hard to act on.
New checks are added within the GUC machinery to improve the user
experience, as they apply to any SSL implementation, not only OpenSSL,
and doing the checks beforehand avoids the creation of an SSL context
during a reload (or startup) which we know will never be used anyway.
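For example, with the parameters introduced by e73e67c, an inverted
configuration like the following (illustrative) is now reported as a
configuration error by the GUC machinery, before any SSL context is
created:

    ALTER SYSTEM SET ssl_min_protocol_version = 'TLSv1.2';
    ALTER SYSTEM SET ssl_max_protocol_version = 'TLSv1.1';
    -- inverted bounds, now caught with a clear error instead of an
    -- obscure OpenSSL one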
Backpatch down to 12, where those parameters were introduced by
e73e67c.
Author: Michael Paquier
Reviewed-by: Daniel Gustafsson
Discussion: https://postgr.es/m/20200114035420.GE1515@paquier.xyz
Backpatch-through: 12
The strategy of a GIN index scan is driven by the opclass-specific
extract_query method, which may request search mode
GIN_SEARCH_MODE_ALL. This mode means that a matching tuple may contain
none of the extracted entries. A simple example is the tsquery '!term',
which doesn't need any term to exist in the matching tsvector.
In order to handle such a scan key, GIN computes a virtual entry
containing all TIDs of all entries of the attribute. In effect this is
a full scan of the index attribute. That is typically very slow, but it
allows GIN to handle some queries correctly. However, the current
algorithm computes such a virtual entry for each GIN_SEARCH_MODE_ALL
scan key, even when there are several for the same attribute. This is
clearly not optimal.
This commit improves the situation by introducing "exclude only" scan
keys. Such scan keys cannot return a set of matching TIDs on their own;
they can only filter TIDs produced by normal scan keys. Therefore, each
attribute must have at least one normal scan key, while the rest may be
"exclude only" if their search mode is GIN_SEARCH_MODE_ALL.
The same optimization might be applied to the whole scan, not just
per-attribute, but that leads to the problem of eliminating NULL
values, with trade-offs among the possible ways to handle it. We
probably want to do this later using some cost-based decision
algorithm.
Discussion: https://postgr.es/m/CAOBaU_YGP5-BEt5Cc0%3DzMve92vocPzD%2BXiZgiZs1kjY0cj%3DXBg%40mail.gmail.com
Author: Nikita Glukhov, Alexander Korotkov, Tom Lane, Julien Rouhaud
Reviewed-by: Julien Rouhaud, Tomas Vondra, Tom Lane
Commit 9b63c13f0 turns out to have been fundamentally misguided:
the parent node's subPlan list is by no means the only way in which
a child SubPlan node can be hooked into the outer execution state.
As shown in bug #16213 from Matt Jibson, we can also get short-lived
tuple table slots added to the outer es_tupleTable list. At this point
I have little faith that there aren't other possible connections as
well; the long time it took to notice this problem shows that this
isn't a heavily-exercised situation.
Therefore, revert that fix, returning to the coding that passed a
NULL parent plan pointer down to the transiently-built subexpressions.
That gives us a pretty good guarantee that they won't hook into the
outer executor state in any way. But then we need some other solution
to make SubPlans work. Adopt the solution speculated about in the
previous commit's log message: do expression initialization at plan
startup for just those VALUES rows containing SubPlans, abandoning the
goal of reclaiming memory intra-query for those rows. In practice it
seems unlikely that queries containing a vast number of VALUES rows
would be using SubPlans in them, so this should not give up much.
(BTW, this test case also refutes my claim in connection with the prior
commit that the issue only arises with use of LATERAL. That was just
wrong: some variants of SubLink always produce SubPlans.)
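A hypothetical example of the latter, with no LATERAL anywhere:

    SELECT *
    FROM (VALUES (1, true),
                 (2, 3 IN (SELECT g FROM generate_series(1, 4) g)))
         AS v(a, b);

The IN sublink in the second row becomes a SubPlan, so under this fix
that row's expressions are initialized once at plan startup, while the
first row still goes through the transient, memory-reclaiming path.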
As with the previous patch, back-patch to all supported branches.
Discussion: https://postgr.es/m/16213-871ac3bc208ecf23@postgresql.org
... specifically, set it incrementally as each individual change is
spilled down to disk. This way, it is set correctly when the
transaction disappears without trace, ie. without leaving an XACT_ABORT
wal record. (This happens when the server crashes midway through a
transaction.)
Failing to have final_lsn prevents ReorderBufferRestoreCleanup() from
working, since it needs the final_lsn in order to know the endpoint of
its iteration through spilled files.
Commit df9f682c7b already tried to fix the problem, but it didn't set
the final_lsn in all cases. Revert that, since it's no longer needed.
Author: Vignesh C
Reviewed-by: Amit Kapila, Dilip Kumar
Discussion: https://postgr.es/m/CALDaNm2CLk+K9JDwjYST0sPbGg5AQdvhUt0jbKyX_HdAE0jk3A@mail.gmail.com
The bitmap used by SlabCheck to cross-check free chunks in a block used
to be allocated for each SlabCheck call, and was never freed. The
memory leak could be fixed by simply adding a pfree call, but it's
actually a bad idea to do any allocations in SlabCheck at all, since
doing so assumes the very memory-management state the check is supposed
to verify is sane.
So instead we allocate the bitmap as part of SlabContext, which means
we don't need to do any allocations in SlabCheck and the bitmap goes
away together with the SlabContext.
Backpatch to 10, where the Slab context was introduced.
Author: Tomas Vondra
Reported-by: Andres Freund
Reviewed-by: Tom Lane
Backpatch-through: 10
Discussion: https://www.postgresql.org/message-id/20200116044119.g45f7pmgz4jmodxj%40alap3.anarazel.de
jsonb_set_lax() is the same as jsonb_set(), except that it takes an
extra argument specifying what to do if the value argument is NULL. The
default is 'use_json_null'. Other possibilities are 'raise_exception',
'return_target' and 'delete_key', all of these behaviours having been
suggested as reasonable by various users.
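For example (results shown as comments):

    SELECT jsonb_set_lax('{"a":1,"b":2}', '{b}', NULL);
    -- {"a": 1, "b": null}
    SELECT jsonb_set_lax('{"a":1,"b":2}', '{b}', NULL, true, 'delete_key');
    -- {"a": 1}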
Discussion: https://postgr.es/m/375873e2-c957-3a8d-64f9-26c43c2b16e7@2ndQuadrant.com
Reviewed by: Pavel Stehule
Two routines have been added in OpenSSL 1.1.0 to set the protocol bounds
allowed within a given SSL context:
- SSL_CTX_set_min_proto_version
- SSL_CTX_set_max_proto_version
As Postgres supports OpenSSL down to 1.0.1 (as of HEAD), equivalent
replacements exist in the tree, which are only available for the
backend. A follow-up patch is planned to add control of the SSL
protocol bounds for libpq, so move those routines to src/common/ so
that libpq can use them.
Author: Daniel Gustafsson
Discussion: https://postgr.es/m/4F246AE3-A7AE-471E-BD3D-C799D3748E03@yesql.se
Move all the backend-only code that'd crept into wchar.c and encnames.c
into mbutils.c.
To remove the last few #ifdef dependencies from wchar.c and encnames.c,
also make the following changes:
* Adjust get_encoding_name_for_icu to return NULL, not throw an error,
for unsupported encodings. Its sole caller can perfectly well throw an
error instead. (While at it, I also made this function and its sibling
is_encoding_supported_by_icu proof against out-of-range encoding IDs.)
* Remove the overlength-name error condition from pg_char_to_encoding.
It's completely silly not to treat that just like any other
the-name-is-not-in-the-table case.
Also, get rid of pg_mic_mblen --- there's no obvious reason why
conv.c shouldn't call pg_mule_mblen instead.
Other than that, this is just code movement and comment-polishing with
no functional changes. Notably, I reordered declarations in pg_wchar.h
to show which functions are frontend-accessible and which are not.
Discussion: https://postgr.es/m/CA+TgmoYO8oq-iy8E02rD8eX25T-9SmyxKWqqks5OMHxKvGXpXQ@mail.gmail.com
Formerly, various frontend directories symlinked these two sources
and then built them locally. That's an ancient, ugly hack, and
we now have a much better way: put them into libpgcommon.
So do that. (The immediate motivation for this is the prospect
of having to introduce still more symlinking if we don't.)
This commit moves these two files absolutely verbatim, for ease of
reviewing the git history. There's some follow-on work to be done
that will modify them a bit.
Robert Haas, Tom Lane
Discussion: https://postgr.es/m/CA+TgmoYO8oq-iy8E02rD8eX25T-9SmyxKWqqks5OMHxKvGXpXQ@mail.gmail.com
Previously, check_xact_readonly() was responsible for determining
which types of queries could not be run in a read-only transaction,
standard_ProcessUtility() was responsible for prohibiting things
which were allowed in read-only transactions but not in recovery, and
utility commands were basically prohibited in bulk in parallel mode by
calls to CommandIsReadOnly() in functions.c and spi.c. This situation
was confusing and error-prone. Accordingly, move all the checks to a
new function ClassifyUtilityCommandAsReadOnly(), which determines the
degree to which a given statement is read only.
In the old code, check_xact_readonly() inadvertently failed to handle
several statement types that actually should have been prohibited,
specifically T_CreatePolicyStmt, T_AlterPolicyStmt, T_CreateAmStmt,
T_CreateStatsStmt, T_AlterStatsStmt, and T_AlterCollationStmt. As a
result, these statements were erroneously allowed in read-only
transactions, parallel queries, and standby operation. Generally, they
would fail anyway due to some lower-level error check, but we
shouldn't rely on that. In the new code structure, future omissions
of this type should cause ClassifyUtilityCommandAsReadOnly() to
complain about an unrecognized node type.
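For instance, one of the formerly-missed cases now fails cleanly up
front (hypothetical session; exact message wording aside):

    BEGIN READ ONLY;
    CREATE STATISTICS s1 ON a, b FROM t;
    ERROR:  cannot execute CREATE STATISTICS in a read-only transaction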
As a fringe benefit, this means we can allow certain types of utility
commands in parallel mode, where it's safe to do so. This allows
ALTER SYSTEM, CALL, DO, CHECKPOINT, COPY FROM, EXPLAIN, and SHOW.
It might be possible to allow additional commands with more work
and thought.
Along the way, document the thinking process behind the current set
of checks, as per discussion especially with Peter Eisentraut. There
is some interest in revising some of these rules, but that seems
like a job for another patch.
Patch by me, reviewed by Tom Lane, Stephen Frost, and Peter
Eisentraut.
Discussion: http://postgr.es/m/CA+TgmoZ_rLqJt5sYkvh+JpQnfX0Y+B2R+qfi820xNih6x-FQOQ@mail.gmail.com
We've had numerous bug reports about how (1) IF NOT EXISTS clauses in
ALTER TABLE don't behave as expected, and (2) combining certain actions
into one ALTER TABLE doesn't work, though executing the same actions as
separate statements does. This patch cleans up all of the cases so far
reported from the field, though there are still some oddities associated
with identity columns.
The core problem behind all of these bugs is that we do parse analysis
of ALTER TABLE subcommands too soon, before starting execution of the
statement. The root of the bugs in group (1) is that parse analysis
schedules derived commands (such as a CREATE SEQUENCE for a serial
column) before it's known whether the IF NOT EXISTS clause should cause
a subcommand to be skipped. The root of the bugs in group (2) is that
earlier subcommands may change the catalog state that later subcommands
need to be parsed against.
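As a concrete illustration of group (1), consider repeating this
statement against a hypothetical table t:

    ALTER TABLE t ADD COLUMN IF NOT EXISTS id serial;

Each repetition used to leave behind an orphaned sequence, because the
derived CREATE SEQUENCE was scheduled before the IF NOT EXISTS check
had a chance to skip the subcommand.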
Hence, postpone parse analysis of ALTER TABLE's subcommands, and do
that one subcommand at a time, during "phase 2" of ALTER TABLE which
is the phase that does catalog rewrites. Thus the catalog effects
of earlier subcommands are already visible when we analyze later ones.
(The sole exception is that we do parse analysis for ALTER COLUMN TYPE
subcommands during phase 1, so that their USING expressions can be
parsed against the table's original state, which is what we need.
Arguably, these bugs stem from falsely concluding that because ALTER
COLUMN TYPE must do early parse analysis, every other command subtype
can too.)
This means that ALTER TABLE itself must deal with execution of any
non-ALTER-TABLE derived statements that are generated by parse analysis.
Add a suitable entry point to utility.c to accept those recursive
calls, and create a struct to pass through the information needed by
the recursive call, rather than making the argument lists of
AlterTable() and friends even longer.
Getting this to work correctly required a little bit of fiddling
with the subcommand pass structure, in particular breaking up
AT_PASS_ADD_CONSTR into multiple passes. But otherwise it's mostly
a pretty straightforward application of the above ideas.
Fixing the residual issues for identity columns requires refactoring of
where the dependency link from an identity column to its sequence gets
set up. So that seems like suitable material for a separate patch,
especially since this one is pretty big already.
Discussion: https://postgr.es/m/10365.1558909428@sss.pgh.pa.us
This uses the progress reporting infrastructure added by c16dc1aca5,
adding support for ANALYZE.
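Progress can be watched through the new view, along these lines:

    SELECT pid, relid, phase, sample_blks_scanned, sample_blks_total
    FROM pg_stat_progress_analyze;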
Co-authored-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Co-authored-by: Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>
Reviewed-by: Julien Rouhaud, Robert Haas, Anthony Nowocien, Kyotaro Horiguchi,
Vignesh C, Amit Langote
For historical reasons, libpq used a separate libpq.rc file for the
Windows builds while all other components use a common file
win32ver.rc. With a bit of tweaking, the libpq build can also use the
win32ver.rc file. This removes a bit of duplicative code.
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://www.postgresql.org/message-id/flat/ad505e61-a923-e114-9f38-9867d161073f@2ndquadrant.com
The logic introduced in this routine as of 246a6c8 would report an
incorrect result when a session calls it to check whether the temporary
namespace owned by that session is in use. It would be possible to
optimize the routine further for this case to avoid a PGPROC lookup,
but let's keep the logic simple. As this routine is used only by
autovacuum for now, there were no live bugs, but let's be correct for
any future code involving it.
Author: Michael Paquier
Reviewed-by: Julien Rouhaud
Discussion: https://postgr.es/m/20200113093703.GA41902@paquier.xyz
Backpatch-through: 11
Introduce new fields amusemaintenanceworkmem and amparallelvacuumoptions
in IndexAmRoutine for parallel vacuum. The amusemaintenanceworkmem tells
whether a particular IndexAM uses maintenance_work_mem or not. This will
help in controlling the memory used by individual workers, as otherwise
each worker can consume memory equal to maintenance_work_mem. The
amparallelvacuumoptions tells whether a particular IndexAM participates in
a parallel vacuum and if so in which phase (bulkdelete, vacuumcleanup) of
vacuum.
Author: Masahiko Sawada and Amit Kapila
Reviewed-by: Dilip Kumar, Amit Kapila, Tomas Vondra and Robert Haas
Discussion: https://postgr.es/m/CAD21AoDTPMgzSkV4E3SFo1CH_x50bf5PqZFQf4jmqjk-C03BWg@mail.gmail.com
Discussion: https://postgr.es/m/CAA4eK1LmcD5aPogzwim5Nn58Ki+74a6Edghx4Wd8hAskvHaq5A@mail.gmail.com
If no permanent replication slot is configured using
primary_slot_name, the walreceiver now creates and uses a temporary
replication slot. A new setting wal_receiver_create_temp_slot can be
used to disable this behavior, for example, if the remote instance is
out of replication slots.
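For example, a sketch of disabling the new behavior on a standby:

    ALTER SYSTEM SET wal_receiver_create_temp_slot = off;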
Reviewed-by: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>
Discussion: https://www.postgresql.org/message-id/CA%2Bfd4k4dM0iEPLxyVyme2RAFsn8SUgrNtBJOu81YqTY4V%2BnqZA%40mail.gmail.com
A view with conditional INSTEAD rules and no unconditional INSTEAD
rules or INSTEAD OF triggers is not auto-updatable. Previously we
relied on a check in the executor to catch this, but that's
problematic since the planner may fail to properly handle such a query
and thus return a particularly unhelpful error to the user, before
reaching the executor check.
Instead, trap this in the rewriter and report the correct error there.
Doing so also allows us to include more useful error detail than the
executor check can provide. This doesn't change the existing behaviour
of updatable views; it merely ensures that useful error messages are
reported when a view isn't updatable.
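As an illustration (hypothetical schema), this view is not
auto-updatable, and the rewriter now says so directly:

    CREATE TABLE t (a int);
    CREATE VIEW v AS SELECT a FROM t;
    CREATE RULE v_ins AS ON INSERT TO v
        WHERE NEW.a > 0 DO INSTEAD INSERT INTO t VALUES (NEW.a);
    INSERT INTO v VALUES (-1);  -- now fails in the rewriter, with
                                -- detail about the conditional rule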
Per report from Pengzhou Tang, though not adopting that suggested fix.
Back-patch to all supported branches.
Discussion: https://postgr.es/m/CAG4reAQn+4xB6xHJqWdtE0ve_WqJkdyCV4P=trYr4Kn8_3_PEA@mail.gmail.com
This test was trying to exercise the mechanism that releases kernel FDs
as needed to get us under the max_safe_fds limit in case of spill
files. To do that, it needs to set max_files_per_process to a very low
value, which doesn't even permit starting the server when a few files
are already open. This test also won't work on platforms where we use
one FD per semaphore.
Backpatch-through: 10, till where this test was added
Discussion: https://postgr.es/m/CAA4eK1LHhERi06Q+MmP9qBXBBboi+7WV3910J0aUgz71LcnKAw@mail.gmail.com
Discussion: https://postgr.es/m/6485.1578583522@sss.pgh.pa.us
Previously, the core scanner's yy_transition[] array had 37045 elements.
Since that number is larger than INT16_MAX, Flex generated the array to
contain 32-bit integers. By reimplementing some of the bulkier scanner
rules, this patch reduces the array to 20495 elements. The much smaller
total length, combined with the consequent use of 16-bit integers for
the array elements, reduces the binary size by over 200kB. This was
accomplished in two ways:
1. Consolidate handling of quote continuations into a new start condition,
rather than duplicating that logic for five different string types.
2. Treat Unicode strings and identifiers followed by a UESCAPE sequence
as three separate tokens, rather than one. The logic to de-escape
Unicode strings is moved to the filter code in parser.c, which already
had the ability to provide special processing for token sequences.
While we could have implemented the conversion in the grammar, that
approach was rejected for performance and maintainability reasons.
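For reference, the construct affected by (2) looks like this (example
borrowed from the documentation's Unicode escape syntax):

    SELECT U&'d!0061t!+000061' UESCAPE '!';   -- 'data'

The string, the UESCAPE keyword, and the quoted escape character are
now lexed as three separate tokens and reassembled in parser.c.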
Performance in microbenchmarks of raw parsing seems equal or slightly
faster in most cases, and it's reasonable to expect that in real-world
usage (with more competition for the CPU cache) there will be a larger
win. The exception is UESCAPE sequences; lexing those is about 10%
slower, primarily because the scanner now has to be called three times
rather than once. This seems acceptable since that feature is very
rarely used.
The psql and ecpg lexers are likewise modified, primarily because we
want to keep them all in sync. Since those lexers don't use the
space-hogging -CF option, the space savings are much smaller, but
still good for perhaps 10kB apiece.
While at it, merge the ecpg lexer's handling of C-style comments used
in SQL and in C. Those have different rules regarding nested comments,
but since we already have the ability to keep track of the previous
start condition, we can use that to handle both cases within a single
start condition. This matches the core scanner more closely.
John Naylor
Discussion: https://postgr.es/m/CACPNZCvaoa3EgVWm5yZhcSTX6RAtaLgniCPcBVOCwm8h3xpWkw@mail.gmail.com
The use of pg_atoi() for parsing a string into an Oid fails for values
larger than INT32_MAX, since OIDs are unsigned. Instead, use
atooid(). While this has less error checking, the contents of the
data directory are expected to be trustworthy, so we don't need to go
out of our way to do full error checking.
Discussion: https://www.postgresql.org/message-id/flat/dea47fc8-6c89-a2b1-07e3-754ff1ab094b%402ndquadrant.com
Earlier, we used to postpone deleting empty pages till the second stage
of vacuum to amortize the cost of scanning internal pages. However,
that can sometimes (say, if vacuum is canceled or errors out between
the first and second stages) delay the recycling of those pages.
Another consideration is that to facilitate deleting empty pages in the
second stage, we need to share the information about internal and empty
pages between different stages of vacuum. It would be quite tricky to
share this information via the DSM that the upcoming parallel vacuum
patch requires.
Also, it will bring the logic to reclaim deleted pages closer to nbtree
where we delete empty pages in each pass.
Overall, the advantages of deleting empty pages in each pass outweigh the
advantages of postponing the same.
Author: Dilip Kumar, with changes by Amit Kapila
Reviewed-by: Sawada Masahiko and Amit Kapila
Discussion: https://postgr.es/m/CAA4eK1LGr+MN0xHZpJ2dfS8QNQ1a_aROKowZB+MPNep8FVtwAA@mail.gmail.com
Until now we've only used a single multivariate MCV list per relation,
covering the largest number of clauses. So for example given a query
SELECT * FROM t WHERE a = 1 AND b = 1 AND c = 1 AND d = 1
and extended statistics on (a,b) and (c,d), we'd only pick and use one
of them. This commit improves this by repeatedly picking and applying
the best statistics (matching the largest number of remaining clauses)
until no additional statistics object is applicable.
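For the example above, that means (sketch):

    CREATE STATISTICS s_ab (mcv) ON a, b FROM t;
    CREATE STATISTICS s_cd (mcv) ON c, d FROM t;
    ANALYZE t;
    -- both MCV lists are now applied to
    -- WHERE a = 1 AND b = 1 AND c = 1 AND d = 1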
This greedy algorithm is simple, but may not be optimal. A different
choice of statistics may leave fewer clauses unestimated and/or give
better estimates for some other reason.
This can however happen only when there are overlapping statistics, and
selecting one makes it impossible to use the other. E.g. with statistics
on (a,b), (c,d), (b,c,d), we may pick either (a,b) and (c,d) or (b,c,d).
But it's not clear which option is the best one.
However, we assume cases like this are rare, and the easiest solution is
to define statistics covering the whole group of correlated columns. In
the future we might support overlapping stats, using some of the clauses
as conditions (in conditional probability sense).
Author: Tomas Vondra
Reviewed-by: Mark Dilger, Kyotaro Horiguchi
Discussion: https://postgr.es/m/20191028152048.jc6pqv5hb7j77ocp@development
When considering functional dependencies during selectivity estimation,
it's not necessary to bother with selecting the best extended statistic
object and then use just dependencies from it. We can simply consider
all applicable functional dependencies at once.
This means we need to deserialize all (applicable) dependencies before
applying them to the clauses. This is a bit more expensive than picking
the best statistics object and deserializing only its dependencies. To
minimize the additional cost, we ignore statistics that are not
applicable.
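E.g. (sketch), with two applicable statistics objects:

    CREATE STATISTICS s1 (dependencies) ON a, b FROM t;
    CREATE STATISTICS s2 (dependencies) ON c, d FROM t;
    -- dependencies from both objects are now considered at once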
Author: Tomas Vondra
Reviewed-by: Mark Dilger
Discussion: https://postgr.es/m/20191028152048.jc6pqv5hb7j77ocp@development
When estimating the selectivity of "range_var <@ range_constant" or
"range_var @> range_constant", if the upper (or respectively lower)
bound of the range_constant was above the last bin of the range_var's
histogram, the code would access uninitialized memory and potentially
crash (though it seems the probability of a crash is quite low).
Handle the endpoint cases explicitly to fix that.
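A sketch of an affected query shape, where the constant's bound can
fall beyond the last bin of r's histogram:

    SELECT * FROM t WHERE r <@ int4range(10, 2000000000);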
While at it, be more paranoid about the possibility of getting NaN
or other silly results from the range type's subdiff function.
And improve some comments.
Ordinarily we'd probably add a regression test case demonstrating
the bug in unpatched code. But it's too hard to get it to crash
reliably because of the uninitialized-memory dependence, so skip that.
Per bug #16122 from Adam Scott. It's been broken from the beginning,
apparently, so backpatch to all supported branches.
Diagnosis by Michael Paquier, patch by Andrey Borodin and Tom Lane.
Discussion: https://postgr.es/m/16122-eb35bc248c806c15@postgresql.org
On the publisher, it was assumed that an INSERT change cannot happen for
a relation with no replica identity. However, this is true only for a
change that needs references to old rows, aka UPDATE or DELETE, so
trying to use logical replication with a relation that has no replica
identity led to an assertion failure in the publisher when issuing an
INSERT. This commit removes the incorrect assertion, and adds more
regression tests to provide coverage for relations without replica
identity.
Reported-by: Neha Sharma
Author: Dilip Kumar, Michael Paquier
Reviewed-by: Andres Freund
Discussion: https://postgr.es/m/CANiYTQsL1Hb8_Km08qd32svrqNumXLJeoGo014O7VZymgOhZEA@mail.gmail.com
Backpatch-through: 10
Fix assorted bugs in handling of non-blocking I/O when using GSSAPI
encryption. The encryption layer could return the wrong status
information to its caller, resulting in effectively dropping some data
(or possibly in aborting a not-broken connection), or in a "livelock"
situation where data remains to be sent but the upper layers think
transmission is done and just go to sleep. There were multiple small
thinkos contributing to that, as well as one big one (failure to think
through what to do when a send fails after having already transmitted
data). Note that these errors could cause failures whether the client
application asked for non-blocking I/O or not, since both libpq and
the backend always run things in non-block mode at this level.
Also get rid of use of static variables for GSSAPI inside libpq;
that's entirely not okay given that multiple connections could be
open at once inside a single client process.
Also adjust a bunch of random small discrepancies between the frontend
and backend versions of the send/receive functions -- except for error
handling, they should be identical, and now they are.
Also extend the Kerberos TAP tests to exercise cases where nontrivial
amounts of data need to be pushed through encryption. Before, those
tests didn't provide any useful coverage at all for the cases of
interest here. (They still might not, depending on timing, but at
least there's a chance.)
Per complaint from pmc@citylink and subsequent investigation.
Back-patch to v12 where this code was introduced.
Discussion: https://postgr.es/m/20200109181822.GA74698@gate.oper.dinoex.org
FileClose() failure ordinarily causes a PANIC. Suppose the user
disables that PANIC via data_sync_retry=on. After mdclose() issued a
FileClose() that failed, calls into md.c raised SIGSEGV. This fix adds
repalloc() calls during mdclose() and updates a comment about ignoring
repalloc() cost. The rate of relation segment count change is a minor
factor; more relevant to overall performance is the rate of mdclose()
and subsequent re-opening of segments. Back-patch to v10, where commit
45e191e3aa introduced the bug.
Reviewed by Kyotaro Horiguchi.
Discussion: https://postgr.es/m/20191222091930.GA1280238@rfd.leadboat.com
Experience so far suggests that getting these tests to pass on
all libedit versions that are out there may be impossible, or
require dumbing down the tests to the point of uselessness.
So we need to provide a way to skip them when the user knows they'll
fail. An environment variable is probably the most convenient way
to deal with this; it's easy for, e.g., a buildfarm animal's
configuration to set up.
Discussion: https://postgr.es/m/9594.1578586797@sss.pgh.pa.us