I (Andres) broke this in 623a9ba79b, because I didn't think about the
way snapshots are built on standbys sufficiently. Unfortunately our
existing tests did not catch this, as they are all just querying with
psql (therefore ending up with fresh snapshots).
The fix is trivial: we just need to increment the transaction
completion counter in ExpireTreeKnownAssignedTransactionIds(), which
is the equivalent of ProcArrayEndTransaction() during recovery.
This commit also adds a new test doing some basic testing of the
correctness of snapshots built on standbys. To avoid the
aforementioned issue of one-shot psql runs not exercising the snapshot
caching, the test uses a long-lived psql session, similar to
013_crash_restart.pl. It'd be good to extend the test further.
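In sketch form, the fix looks about like this (helper and counter names
as introduced by the snapshot-caching work; an outline, not the exact
hunk):

    /* in ExpireTreeKnownAssignedTransactionIds(), with ProcArrayLock held */
    KnownAssignedXidsRemoveTree(xid, nsubxids, subxids);

    /* as in ProcArrayEndTransaction(): advance latestCompletedXid ... */
    MaintainLatestCompletedXidRecovery(max_xid);

    /* ... and invalidate cached snapshots, the previously-missing step */
    ShmemVariableCache->xactCompletionCount++;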
Reported-By: Ian Barwick <ian.barwick@2ndquadrant.com>
Author: Andres Freund <andres@anarazel.de>
Author: Ian Barwick <ian.barwick@2ndquadrant.com>
Discussion: https://postgr.es/m/61291ffe-d611-f889-68b5-c298da9fb18f@2ndquadrant.com
The error message about columns in the primary key not including all of
the partition key was unclear; reword it.
Backpatch all the way to pg11, where it appeared.
Reported-by: Nagaraj Raj <nagaraj.sf@yahoo.com>
Discussion: https://postgr.es/m/64062533.78364.1601415362244@mail.yahoo.com
Previously, a conversion such as
to_date('-44-02-01','YYYY-MM-DD')
would result in '0045-02-01 BC', as the code attempted to interpret
the negative year as BC, but failed to apply the correction needed
for our internal handling of BC years. Fix the off-by-one problem.
Also, arrange for the combination of a negative year and an
explicit "BC" marker to cancel out and produce AD. This is how
the negative-century case works, so it seems sane to do likewise.
Continue to read "year 0000" as 1 BC. Oracle would throw an error,
but we've accepted that case for a long time so I'm hesitant to
change it in a back-patch.
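A sketch of the corrected logic (field names follow formatting.c's
conventions, where 1 BC is stored internally as year 0; treat this as an
outline rather than the exact patch):

    tm->tm_year = tmfc.year;
    if (tmfc.bc)
        tm->tm_year = -tm->tm_year;     /* "BC" and a negative year cancel */
    /* correct for the internal BC representation (1 BC is year 0) */
    if (tm->tm_year < 0)
        tm->tm_year++;                  /* so -44 yields 44 BC, not 45 BC */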
Per bug #16419 from Saeed Hubaishan. Back-patch to all supported
branches.
Dar Alathar-Yemen and Tom Lane
Discussion: https://postgr.es/m/16419-d8d9db0a7553f01b@postgresql.org
For some reason, the id of the description of
max_parallel_maintenance_workers has been
guc-max-parallel-workers-maintenance since the beginning. Flip that
around to make it consistent.
PostgresNode.pm set "max_wal_senders = 5" for replication testing,
but this seems to be slightly too low for our current test suite.
Slower buildfarm members frequently report "number of requested standby
connections exceeds max_wal_senders" failures, due to old walsenders
not exiting instantaneously. Usually, the test does not fail overall
because of automatic walreceiver restart, but sometimes the failure
becomes visible; and in any case such retries slow down the test.
That value came in with commit 89ac7004d, but was soon obsoleted by
f6d6d2920, which raised the built-in default from zero to 10; so that
PostgresNode.pm is actually setting it to less than the conservative
built-in default. That seems pretty pointless, so let's remove the
special setting and let the default prevail, in hopes of making
the TAP tests more robust.
Likewise, the setting "max_replication_slots = 5" is obsolete and
can be removed.
While here, reverse-engineer a comment about why we're choosing
less-than-default values for some other settings.
(Note: before v12, max_wal_senders counted against max_connections,
so that the latter setting also needs some adjustment there.)
Back-patch to v10 where the subscription tests were added.
It's likely that the older branches aren't pushing the boundaries
of max_wal_senders, but I'm disinclined to spend time trying to
figure out exactly when it started to be a problem.
Discussion: https://postgr.es/m/723911.1601417626@sss.pgh.pa.us
Explicitly mention that primary key constraints are also included in the
limitation that the constraint columns must be a superset of the partition key
columns.
Wording suggestion from Tom Lane.
Discussion: https://postgr.es/m/64062533.78364.1601415362244@mail.yahoo.com
Backpatch-through: 11, where unique constraints on partitioned tables were added
Previously we threw an error. But make_date already allowed the case,
so it was inconsistent as well as unhelpful for make_timestamp not to.
Both functions continue to reject year zero.
Code and test fixes by Peter Eisentraut, doc changes by me
Discussion: https://postgr.es/m/13c0992c-f15a-a0ca-d839-91d3efd965d9@2ndquadrant.com
When executing a CALL or DO in a non-atomic context (i.e., not inside
a function or query), plpgsql creates a new plan each time through,
as a rather hacky solution to some resource management issues. But
it failed to free this plan until exit of the current procedure or DO
block, resulting in serious memory bloat in procedures that called
other procedures many times. Fix by remembering to free the plan,
and by being more honest about restoring the previous state (otherwise,
recursive procedure calls have a problem).
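A hedged sketch of the shape of the fix in plpgsql's CALL handling
(variable names are assumptions):

    /* restore the previous state first, so recursive CALLs see sane data */
    expr->plan = orig_plan;

    /* ... and remember to free the one-shot plan built for this execution */
    SPI_freeplan(plan);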
There was also a smaller leak associated with recalculation of the
"target" list of output variables. Fix that by using the statement-
lifespan context to hold non-permanent values.
Back-patch to v11 where procedures were introduced.
Pavel Stehule and Tom Lane
Discussion: https://postgr.es/m/CAFj8pRDiiU1dqym+_P4_GuTWm76knJu7z9opWayBJTC0nQGUUA@mail.gmail.com
The SQL standard doesn't require the jsonpath .datetime() method to support the
ISO 8601 format. But our to_json[b]() functions convert timestamps to text in
the ISO 8601 format for the sake of compatibility with JavaScript. So, we add
support for ISO 8601 to the jsonpath .datetime() method, for the sake of
compatibility with to_json[b]().
The standard mode of datetime parsing currently supports just template patterns
and separators in the format string. In order to implement ISO 8601, we have to
add support for double-quoted literals in the format string to the standard
parsing mode.
Discussion: https://postgr.es/m/94321be0-cc96-1a81-b6df-796f437f7c66%40postgrespro.ru
Author: Nikita Glukhov, revised by me
Backpatch-through: 13
Commit bffe1bd684 introduced the jsonpath .datetime() method, but the default
formats for time and timestamp contained an excess space between the time and
the time zone. This commit removes that excess space, making the behavior of
the .datetime() method standard-compliant.
Discussion: https://postgr.es/m/94321be0-cc96-1a81-b6df-796f437f7c66%40postgrespro.ru
Author: Nikita Glukhov
Backpatch-through: 13
Previously the standby server didn't archive timeline history files
streamed from the primary even when archive_mode is set to "always",
although it does archive the streamed WAL files. This could cause PITR
to fail because the required timeline history file was missing from the
archive.
The cause of this issue was that walreceiver didn't mark those files as
ready for archiving.
This commit makes walreceiver mark those streamed timeline history
files as ready for archiving if archive_mode=always. Then the archiver
process archives the marked timeline history files.
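A sketch of the walreceiver change, using the existing
archive-notification primitives (placement is approximate):

    /* after writing a streamed timeline history file to pg_wal */
    if (XLogArchiveMode != ARCHIVE_MODE_ALWAYS)
        XLogArchiveForceDone(fname);    /* skip archiving, mark .done */
    else
        XLogArchiveNotify(fname);       /* create .ready for the archiver */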
Back-patch to all supported versions.
Reported-by: Grigory Smolkin
Author: Grigory Smolkin, Fujii Masao
Reviewed-by: David Zhang, Anastasia Lubennikova
Discussion: https://postgr.es/m/54b059d4-2b48-13a4-6f43-95a087c92367@postgrespro.ru
This addresses a couple of issues with the progress reporting of
REINDEX CONCURRENTLY:
- Report the correct parent relation with the index actually being
rebuilt or validated. Previously, the command status remained set to
the last index created for the progress of the index build and
validation, which would be incorrect when working on a table that has
more than one index.
- Use the correct phase when waiting before the drop of the old
indexes. Previously, this was reported with the same status as when
waiting before the old indexes are marked as dead.
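A hedged sketch of both adjustments (progress-API function names as in
the backend; the exact phase constants and struct fields are
assumptions):

    /* report the parent table of the index actually being rebuilt */
    pgstat_progress_start_command(PROGRESS_COMMAND_CREATE_INDEX,
                                  idx->tableId);

    /* use the dedicated phase while waiting to drop the old indexes */
    pgstat_progress_update_param(PROGRESS_CREATEIDX_PHASE,
                                 PROGRESS_CREATEIDX_PHASE_WAIT_4);
    WaitForLockersMultiple(lockTags, AccessExclusiveLock, true);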
Author: Matthias van de Meent, Michael Paquier
Discussion: https://postgr.es/m/CAEze2WhqFgcwe1_tv=sFYhLWV2AdpfukumotJ6JNcAOQs3jufg@mail.gmail.com
Backpatch-through: 12
We have a dozen or so places that need to iterate over all but the
first cell of a List. Prior to v13 this was typically written as
for_each_cell(lc, lnext(list_head(list)))
Commit 1cff1b95a changed these to
for_each_cell(lc, list, list_second_cell(list))
This patch introduces a new macro for_each_from() which expresses
the start point as a list index, allowing these to be written as
for_each_from(lc, list, 1)
This is marginally more efficient, since ForEachState.i can be
initialized directly instead of backing into it from a ListCell
address. It also seems clearer and less typo-prone.
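Typical usage then looks like:

    ListCell   *lc;

    for_each_from(lc, args, 1)
    {
        Node       *arg = (Node *) lfirst(lc);

        /* process every list member except the first */
    }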
Some of the remaining uses of for_each_cell() look like they could
profitably be changed to for_each_from(), but here I confined myself
to changing uses of list_second_cell().
Also, fix for_each_cell_setup() and for_both_cell_setup() to
const-ify their arguments; that's a simple oversight in 1cff1b95a.
Back-patch into v13, on the grounds that (1) the const-ification
is a minor bug fix, and (2) it's better for back-patching purposes
if we only have two ways to write these loops rather than three.
In HEAD, also remove list_third_cell() and list_fourth_cell(),
which were also introduced in 1cff1b95a, and are unused as of
cc99baa43. It seems unlikely that any third-party code would
have started to use them already; anyone who has can be directed
to list_nth_cell instead.
Discussion: https://postgr.es/m/CAApHDvpo1zj9KhEpU2cCRZfSM3Q6XGdhzuAS2v79PH7WJBkYVA@mail.gmail.com
This reverts commit e21cbb4, as the switch to EVP routines requires a
more careful design where we would need to have at least our wrapper
routines return a status instead of issuing an error by themselves to
let the caller do the error handling. The memory handling was also
incorrect and could cause leaks in the backend if a failure happened,
requiring most likely a callback to do the necessary cleanup as the only
clean way to be able to allocate an EVP context requires the use of an
allocation within OpenSSL. The potential rework of the wrappers also
impacts the fallback implementation when not building with OpenSSL.
Originally, prairiedog reported a compilation failure, but after
discussion with Tom Lane it became clear that this needs a better design.
Discussion: https://postgr.es/m/20200928073330.GC2316@paquier.xyz
Adding \d+ to the test in commit 2dfa3fea8 was ill-advised,
because the partitions' names are such that their sort order
is locale dependent. We could rename them to avoid that,
but it doesn't seem worth the trouble; just take \d+ out again.
Per buildfarm.
transformPartitionBoundValue went out of its way to do the wrong
thing: there is no reason to complain about a non-matching COLLATE
clause in a partition boundary expression. We're coercing the
bound expression to the target column type as though by an
implicit assignment, and the rules for implicit assignment say
that collations can be implicitly converted.
What we *do* need to do, and the code is not doing, is apply
assign_expr_collations() to the bound expression. While removing the
COLLATE complaint is merely a definitional disagreement, the missing
assign_expr_collations() call is a bug that needs to be back-patched,
so I'll commit it separately.
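A hedged sketch of the intended coding (parser API names as in
parse_coerce.c and parse_collate.c; surrounding details elided):

    /* coerce to the target column type, as though by implicit assignment */
    value = coerce_to_target_type(pstate, value, exprType(value),
                                  colType, colTypmod,
                                  COERCION_ASSIGNMENT,
                                  COERCE_IMPLICIT_CAST, -1);

    /* resolve collations rather than complaining about a COLLATE clause */
    value = assign_expr_collations(pstate, value);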
Discussion: https://postgr.es/m/CAJV4CdrZ5mKuaEsRSbLf2URQ3h6iMtKD=hik8MaF5WwdmC9uZw@mail.gmail.com
SQL operations such as CURRENT_DATE, CURRENT_TIME, LOCALTIME, and
conversion of "now" in a datetime input string have to obtain the
transaction start timestamp ("now()") as a broken-down struct pg_tm.
This is a remarkably expensive conversion, and since now() does not
change intra-transaction, it doesn't really need to be done more than
once per transaction. Introducing a simple cache provides visible
speedups in queries that compute these values many times, for example
insertion of many rows that use a default value of CURRENT_DATE.
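A minimal sketch of the caching idea (the statics and their placement
are hypothetical; the real patch integrates with the existing datetime
entry points):

    static TimestampTz cached_ts = 0;
    static struct pg_tm cached_tm;
    static fsec_t cached_fsec;
    static int  cached_tz;

    /* fill *tm from the transaction start time, converting at most once */
    TimestampTz cur_ts = GetCurrentTransactionStartTimestamp();

    if (cur_ts != cached_ts)
    {
        if (timestamp2tm(cur_ts, &cached_tz, &cached_tm,
                         &cached_fsec, NULL, NULL) != 0)
            ereport(ERROR,
                    (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),
                     errmsg("timestamp out of range")));
        cached_ts = cur_ts;
    }
    *tm = cached_tm;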
Peter Smith, with a bit of kibitzing by me
Discussion: https://postgr.es/m/CAHut+Pu89TWjq530V2gY5O6SWi=OEJMQ_VHMt8bdZB_9JFna5A@mail.gmail.com
The use of low-level hash routines has not been recommended by upstream
OpenSSL since 2000, and pgcrypto already switched to EVP as of 5ff4a67.
Note that this also fixes a failure with SCRAM authentication when using
FIPS in OpenSSL, but as there have been few complaints about this
problem and as this causes an ABI breakage, no backpatch is done.
Author: Michael Paquier, Alessandro Gherardi
Reviewed-by: Daniel Gustafsson
Discussion: https://postgr.es/m/20200924025314.GE7405@paquier.xyz
Discussion: https://postgr.es/m/20180911030250.GA27115@paquier.xyz
Prior to this commit, the linitial(), lsecond(), lthird(), lfourth()
macros and their int and Oid list cousins would call their corresponding
inlined function to fetch the cell of interest. Those inline functions
were kind enough to return NULL if the particular cell did not exist.
Unfortunately, the care that these functions took was of no relevance to
the calling macros as they proceeded to directly dereference the returned
value without any regard to whether that value was NULL or not. If it had
been, we'd have segfaulted.
Of course, the fact that we would have segfaulted on misuse of these
macros just goes to prove that nobody is relying on the empty-list or
list-too-small checks. So here we just get rid of those checks completely.
The existing inline functions have been left alone as someone may be using
those directly. We just replace the call within each macro to use
list_nth_cell().
For the llast*() case we require a new list_last_cell() inline function to
get away from the multiple evaluation hazard that we'd get if we fetched
->length on the macro's parameter.
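A sketch of the reshaped macros and the new inline function (as
described above; details may differ from the committed code):

    #define linitial(l)  lfirst(list_nth_cell(l, 0))
    #define lsecond(l)   lfirst(list_nth_cell(l, 1))
    #define llast(l)     lfirst(list_last_cell(l))

    static inline ListCell *
    list_last_cell(const List *list)
    {
        Assert(list != NIL);
        return &list->elements[list->length - 1];
    }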
Author: David Rowley
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/CAApHDvpo1zj9KhEpU2cCRZfSM3Q6XGdhzuAS2v79PH7WJBkYVA@mail.gmail.com
Neither tool had safeguard checks for the options provided, and it was
possible to make pg_test_fsync run for an infinite amount of time or to
pass down bogus values to pg_test_timing.
These behaviors have existed for a long time, with no actual complaints,
so no backpatch is done. Basic TAP tests are introduced for both tools.
Author: Michael Paquier
Reviewed-by: Peter Eisentraut
Discussion: https://postgr.es/m/20200806062759.GE16470@paquier.xyz
When commit bd3daddaf introduced AlternativeSubPlans, I had some
ambitions towards allowing the choice of subplan to change during
execution. That has not happened, or even been thought about, in the
ensuing twelve years; so it seems like a failed experiment. So let's
rip that out and resolve the choice of subplan at the end of planning
(in setrefs.c) rather than during executor startup. This has a number
of benefits:
* Removal of a few hundred lines of executor code, since
AlternativeSubPlans need no longer be supported there.
* Removal of executor-startup overhead (particularly, initialization
of subplans that won't be used).
* Removal of incidental costs of having a larger plan tree, such as
tree-scanning and copying costs in the plancache; not to mention
setrefs.c's own costs of processing the discarded subplans.
* EXPLAIN no longer has to print a weird (and undocumented)
representation of an AlternativeSubPlan choice; it sees only the
subplan actually used. This should mean less confusion for users.
* Since setrefs.c knows which subexpression of a plan node it's
working on at any instant, it's possible to adjust the estimated
number of executions of the subplan based on that. For example,
we should usually estimate more executions of a qual expression
than a targetlist expression. The implementation used here is
pretty simplistic, because we don't want to expend a lot of cycles
on the issue; but it's better than ignoring the point entirely,
as the executor had to.
That last point might possibly result in shifting the choice
between hashed and non-hashed EXISTS subplans in a few cases,
but in general this patch isn't meant to change planner choices.
Since we're doing the resolution so late, it's really impossible
to change any plan choices outside the AlternativeSubPlan itself.
Patch by me; thanks to David Rowley for review.
Discussion: https://postgr.es/m/1992952.1592785225@sss.pgh.pa.us
This function leaked some memory while loading qual clauses for
an RLS policy. While ordinarily negligible, that could build up
in some repeated-reload cases, as reported by Konstantin Knizhnik.
We can improve matters by borrowing the coding long used in
RelationBuildRuleLock: build stringToNode's result directly in
the target context, and remember to explicitly pfree the
input string.
This patch by no means completely guarantees zero leaks within
this function, since we have no real guarantee that the catalog-
reading subroutines it calls don't leak anything. However,
practical tests suggest that this is enough to resolve the issue.
In any case, any remaining leaks are similar to those risked by
RelationBuildRuleLock and other relcache-loading subroutines.
If we need to fix them, we should adopt a more global approach
such as that used by the RECOVER_RELATION_BUILD_MEMORY hack.
While here, let's remove the need for an expensive PG_TRY block by
using MemoryContextSetParent to reparent an initially-short-lived
context for the RLS data.
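A hedged sketch of the pattern (borrowed from RelationBuildRuleLock;
variable names are illustrative):

    MemoryContext tmpcxt = AllocSetContextCreate(CurrentMemoryContext,
                                                 "policy descriptor",
                                                 ALLOCSET_SMALL_SIZES);
    MemoryContext oldcxt = MemoryContextSwitchTo(tmpcxt);
    char       *qualstring = TextDatumGetCString(datum);

    /* build stringToNode's result directly in the target context */
    policy->qual = (Expr *) stringToNode(qualstring);
    pfree(qualstring);          /* explicitly free the input string */

    MemoryContextSwitchTo(oldcxt);

    /* keep the finished data; no PG_TRY needed to avoid leaking tmpcxt */
    MemoryContextSetParent(tmpcxt, CacheMemoryContext);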
Back-patch to all supported branches.
Discussion: https://postgr.es/m/21356c12-8917-8249-b35f-1c447231922b@postgrespro.ru
Commit 464824323e changed the logical replication protocol to allow the
streaming of in-progress transactions and used the new version of protocol
irrespective of the server version. Use the appropriate version of the
protocol based on the server version.
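A hedged sketch of the version selection in the apply worker (constant
names per the logical-replication protocol headers; the cutoff version
and connection variable are assumptions):

    options.proto.logical.proto_version =
        walrcv_server_version(wrconn) >= 140000 ?
        LOGICALREP_PROTO_STREAM_VERSION_NUM :
        LOGICALREP_PROTO_VERSION_NUM;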
Reported-by: Ashutosh Sharma
Author: Dilip Kumar
Reviewed-by: Ashutosh Sharma and Amit Kapila
Discussion: https://postgr.es/m/CAE9k0P=9OpXcNrcU5Gsvd5MZ8GFpiN833vNHzX6Uc=8+h1ft1Q@mail.gmail.com
Previously, we called fsync() after writing out individual pg_xact,
pg_multixact and pg_commit_ts pages due to cache pressure, leading to
regular I/O stalls in user backends and recovery. Collapse requests for
the same file into a single system call as part of the next checkpoint,
as we already did for relation files, using the infrastructure developed
by commit 3eb77eba. This can cause a significant improvement to
recovery performance, especially when it's otherwise CPU-bound.
Hoist ProcessSyncRequests() up into CheckPointGuts() to make it clearer
that it applies to all the SLRU mini-buffer-pools as well as the main
buffer pool. Rearrange things so that data collected in CheckpointStats
includes SLRU activity.
Also remove the Shutdown{CLOG,CommitTS,SUBTRANS,MultiXact}() functions,
because they were redundant after the shutdown checkpoint that
immediately precedes them. (I'm not sure if they were ever needed, but
they aren't now.)
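A sketch of the mechanism in slru.c's write path (macro and handler
field per this commit's own infrastructure; treat as an outline):

    /* instead of fsync'ing the segment file here, queue a request
     * that the checkpointer will absorb and act on later */
    FileTag     tag;

    INIT_SLRUFILETAG(tag, ctl->sync_handler, segno);
    RegisterSyncRequest(&tag, SYNC_REQUEST, false /* retryOnError */);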
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> (parts)
Tested-by: Jakub Wartak <Jakub.Wartak@tomtom.com>
Discussion: https://postgr.es/m/CA+hUKGLJ=84YT+NvhkEEDAuUtVHMfQ9i-N7k_o50JmQ6Rpj_OQ@mail.gmail.com
PX_OWN_ALLOC was intended as a way to disable the use of palloc(), but
over time new palloc() or equivalent calls have been added, as in
32984d8, so this extra layer has lost its original purpose. On the way,
this simplifies some code paths to use palloc0() rather than palloc()
followed by memset(0).
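For instance, the simplification looks like:

    /* before */
    ptr = palloc(size);
    memset(ptr, 0, size);

    /* after */
    ptr = palloc0(size);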
Author: Daniel Gustafsson
Discussion: https://postgr.es/m/A5BFAA1A-B2E8-4CBC-895E-7B1B9475A527@yesql.se
Parallel pg_dump failed if its -d parameter was a connection string
containing any essential information other than host, port, or username.
The same was true for pg_restore with --create.
The reason is that these scenarios failed to preserve the connection
string from the command line; the code felt free to replace that with
just the database name when reconnecting from a pg_dump parallel worker
or after creating the target database. By chance, parallel pg_restore
did not suffer this defect, as long as you didn't say --create.
In practice it seems that the error would be obvious only if the
connstring included essential, non-default SSL or GSS parameters.
This may explain why it took us so long to notice. (It also makes
it very difficult to craft a regression test case illustrating the
problem, since the test would fail in builds without those options.)
Fix by refactoring so that ConnectDatabase always receives all the
relevant options directly from the command line, rather than
reconstructed values. Inject a different database name, when necessary,
by relying on libpq's rules for handling multiple "dbname" parameters.
While here, let's get rid of the essentially duplicate _connectDB
function, as well as some obsolete nearby cruft.
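The libpq rule being relied on: with expand_dbname enabled, the first
dbname entry may be a whole connection string, and any later dbname
entry overrides just the database name. A sketch (variable names are
placeholders):

    const char *keywords[] = {"dbname", "dbname", NULL};
    const char *values[]   = {original_connstring, override_dbname, NULL};

    conn = PQconnectdbParams(keywords, values, /* expand_dbname */ 1);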
Per bug #16604 from Zsolt Ero. Back-patch to all supported branches.
Discussion: https://postgr.es/m/16604-933f4b8791227b15@postgresql.org
The previous coding was confused about whether head_timestamp was
intended to represent the timestamp for the newest bucket in the
mapping or the timestamp for the oldest bucket in the mapping.
Decide that it's intended to be the oldest one, and repair
accordingly.
To do that, we need to do two things. First, when advancing to a
new bucket, don't categorically set head_timestamp to the new
timestamp. Do this only if we're blowing out the map completely
because a lot of time has passed since we last maintained it. If
we're replacing entries one by one, advance head_timestamp by
1 minute for each; if we're filling in unused entries, don't
advance head_timestamp at all.
Second, fix the computation of how many buckets we need to advance.
The previous formula would be correct if head_timestamp were the
timestamp for the new bucket, but we're now making all the code
agree that it's the timestamp for the oldest bucket, so adjust the
formula accordingly.
This is certainly a bug fix, but I don't feel good about
back-patching it without the introspection tools added by commit
aecf5ee2bb, and perhaps also some
actual tests. Since back-patching the introspection tools might
not attract sufficient support and since there are no automated
tests of these fixes yet, I'm just committing this to master for
now.
Patch by me, reviewed by Thomas Munro, Dilip Kumar, Hamid Akhtar.
Discussion: http://postgr.es/m/CA+TgmoY=aqf0zjTD+3dUWYkgMiNDegDLFjo+6ze=Wtpik+3XqA@mail.gmail.com
Existing code used various inconsistent ways to printf struct stat's
st_size member. The type of that is off_t, which is in most cases a
signed 64-bit integer, so use the long long int format for it.
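For example:

    /* widen explicitly so the format is portable */
    printf("%lld\n", (long long int) statbuf.st_size);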
You can use this to view the contents of the time-to-XID mapping
which the server maintains when old_snapshot_threshold != -1.
Being able to view that information may be interesting for users,
and it's definitely useful for figuring out whether the mapping
is being maintained correctly. It isn't, so that will need to be
fixed in a subsequent commit.
Patch by me, reviewed by Thomas Munro, Dilip Kumar, Hamid Akhtar.
Discussion: http://postgr.es/m/CA+TgmoY=aqf0zjTD+3dUWYkgMiNDegDLFjo+6ze=Wtpik+3XqA@mail.gmail.com
This makes it possible for code outside snapmgr.c to examine the
contents of this data structure. This commit does not add any code
which actually does so; a subsequent commit will make that change.
Patch by me, reviewed by Thomas Munro, Dilip Kumar, Hamid Akhtar.
Discussion: http://postgr.es/m/CA+TgmoY=aqf0zjTD+3dUWYkgMiNDegDLFjo+6ze=Wtpik+3XqA@mail.gmail.com
Commit fbeb9da22, which added the tsearch_readline APIs, left
t_readline() in place as a compatibility measure. But that function
has been unused and deprecated for twelve years now, so that seems
like enough time to remove it. Doing so, and merging t_readline's
code into tsearch_readline, aids in making several useful
improvements:
* The hard-wired 4K limit on line length in tsearch data files is
removed, by using a StringInfo buffer instead of a fixed-size buffer.
* We can buy back the per-line palloc/pfree added by 3ea7e9550
in the common case where encoding conversion is not required.
* We no longer need a separate pg_verify_mbstr call, as that
functionality was folded into encoding conversion some time ago.
(We could have done some of this stuff while keeping t_readline as a
separate API, but there seems little point, since there's no reason
for anyone to still be using t_readline directly.)
Discussion: https://postgr.es/m/48A4FA71-524E-41B9-953A-FD04EF36E2E7@yesql.se
Harmonize behavior by moving responsibility for fsyncing directories down
into slru.c. In 10 and later, only the multixact directories were
missed (see commit 1b02be21), and in older branches all SLRUs were
missed.
Back-patch to all supported releases.
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/CA%2BhUKGLtsTUOScnNoSMZ-2ZLv%2BwGh01J6kAo_DM8mTRq1sKdSQ%40mail.gmail.com
We failed to pass down the query string to check_new_partition_bound,
so that its attempts to provide error cursor positions were for naught;
one must have the query string to get parser_errposition to do anything.
Adjust its API to require a ParseState to be passed down.
Also, improve the logic inside check_new_partition_bound so that the
cursor points at the partition bound for the specific column causing
the issue, when one can be identified.
That part is also for naught if we can't determine the query position of
the column with the problem. Improve transformPartitionBoundValue so
that it makes sure that const-simplified partition expressions will be
properly labeled with positions. In passing, skip calling evaluate_expr
if the value is already a Const, which is surely the most common case.
Alexandra Wang, Ashwin Agrawal, Amit Langote; reviewed by Ashutosh Bapat
Discussion: https://postgr.es/m/CACiyaSopZoqssfMzgHk6fAkp01cL6vnqBdmTw2C5_KJaFR_aMg@mail.gmail.com
Discussion: https://postgr.es/m/CAJV4CdrZ5mKuaEsRSbLf2URQ3h6iMtKD=hik8MaF5WwdmC9uZw@mail.gmail.com
tsearch_readline() saves the string pointer it returns to the caller
for possible use in the associated error context callback. However,
the caller will usually pfree that string sometime before it next
calls tsearch_readline(), so that there is a window where an ereport
will try to print an already-freed string.
The built-in users of tsearch_readline() happen to all do that pfree
at the bottoms of their loops, so that the window is effectively
empty for them. However, this is not documented as a requirement,
and contrib/dict_xsyn doesn't do it like that, so it seems likely
that third-party dictionaries might have live bugs here.
The practical consequences of this seem pretty limited in any case,
since production builds wouldn't clobber the freed string immediately,
besides which you'd not expect syntax errors in dictionary files
being used in production. Still, it's clearly a bug waiting to bite
somebody.
Fix by pstrdup'ing the string to be saved for the error callback,
and then pfree'ing it next time through. It's been like this for
a long time, so back-patch to all supported branches.
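A hedged sketch of the fix inside tsearch_readline() (field names are
assumptions):

    /* free the copy saved last time, then save our own copy for the
     * error context callback, independent of the caller's pfree timing */
    if (stp->curline)
        pfree(stp->curline);
    stp->curline = pstrdup(result);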
Discussion: https://postgr.es/m/48A4FA71-524E-41B9-953A-FD04EF36E2E7@yesql.se
pg_restore previously coped with overlength TOC-file lines using some
complicated logic to ignore additional bufferloads. While this isn't
wrong, since we don't expect that the interesting part of a line would
run to more than a dozen or so bytes, it's more complex than it needs
to be. Use a StringInfo instead of a fixed-size buffer so that we can
process long lines as single entities and thus not need the extra
logic.
Daniel Gustafsson
Discussion: https://postgr.es/m/48A4FA71-524E-41B9-953A-FD04EF36E2E7@yesql.se
Use a StringInfo instead of a fixed-size buffer in parseServiceInfo().
While we've not heard complaints about the existing 255-byte limit,
it certainly seems possible that complex cases could run afoul of it.
Daniel Gustafsson
Discussion: https://postgr.es/m/48A4FA71-524E-41B9-953A-FD04EF36E2E7@yesql.se
Further experience says that the appending behavior offered by
pg_get_line_append is useful to only a very small minority of callers.
For most, the requirement to reset the buffer after each line is just
an error-prone nuisance. Hence, invent another alternative call
pg_get_line_buf, which takes care of that detail.
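Typical usage, with the reset handled internally (fp and process_line
are placeholders):

    StringInfoData buf;

    initStringInfo(&buf);
    while (pg_get_line_buf(fp, &buf))
        process_line(buf.data);     /* no resetStringInfo() needed */
    pfree(buf.data);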
Noted while reviewing a patch from Daniel Gustafsson.
Discussion: https://postgr.es/m/48A4FA71-524E-41B9-953A-FD04EF36E2E7@yesql.se
pgindent messes up entries in this file if their names match
typedef names. While there's reason to avoid choosing conflicting
names, we have some historical exceptions, and there's no guarantee
that more duplicates won't appear in future. Since this is a derived
file anyway, there's little harm in just excluding it.
I said yesterday that all our derived files are pgindent-clean, or else
explicitly excluded from indentation, but I'd forgotten about this one.
Now that project is really done, as confirmed by a test run.
Discussion: https://postgr.es/m/79ed5348-be7a-b647-dd40-742207186a22@2ndquadrant.com
The existing message about "a column definition list is only allowed for
functions returning "record"" could be given in some cases where it was
fairly confusing; in particular, a function with multiple OUT parameters
*does* return record according to pg_proc. Break it down into a couple
more cases to deliver a more on-point complaint. Per complaint from
Bruce Momjian.
Discussion: https://postgr.es/m/798909.1600562993@sss.pgh.pa.us
This completes the project of making all our derived files be
pgindent-clean (or else explicitly excluded from indentation),
so that no surprises result when running pgindent in a built-out
development tree.
Discussion: https://postgr.es/m/79ed5348-be7a-b647-dd40-742207186a22@2ndquadrant.com
Since we're bypassing the buffer manager, we need to call
PageSetChecksumInplace() directly. As reported by Justin Pryzby.
In passing, add RelationOpenSmgr() calls before all smgrwrite() and
smgrextend() calls. Tom added one before the first smgrextend() call in
commit c2bb287025, which seems to be enough, but let's play it safe and
do it before each one. That's how it's done in the similar code in
nbtsort.c, too.
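A sketch of the per-page pattern (as in nbtsort.c; names abridged):

    RelationOpenSmgr(rel);                  /* ensure rel->rd_smgr is valid */
    PageSetChecksumInplace(page, blkno);    /* we're bypassing shared buffers */
    smgrextend(rel->rd_smgr, MAIN_FORKNUM, blkno, (char *) page, true);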
Discussion: https://www.postgresql.org/message-id/20200920224446.GF30557@telsasoft.com