postgres_fdw.application_name relies on MyProcPort to define the data
that should be added to escape sequences %u (user name) or %d (database
name). However, this code could be run in processes that lack a
MyProcPort, like an autovacuum process, causing crashes.
This commit makes the code generating the application name more
flexible: it now generates no data for %u and %d if MyProcPort is
missing, and a simple "unknown" if MyProcPort exists but the expected
fields are not set.
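As a hedged illustration (setting name and escapes as in the
postgres_fdw documentation):
    -- %d expands to the local database name and %u to the local user
    -- name; in a process without a MyProcPort both now expand to nothing
    SET postgres_fdw.application_name TO 'fdw_%d_%u';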
Reported-by: Alexander Lakhin
Author: Kyotaro Horiguchi, Michael Paquier
Reviewed-by: Hayato Kuroda, Masahiko Sawada
Discussion: https://postgr.es/m/17789-8b31c5a4672b74d9@postgresql.org
Backpatch-through: 15
GetWALRecordsInfo() and pg_get_wal_fpi_info() can leak memory across
WAL record iterations. Fix this by using a temporary memory context
that is reset for each WAL record iteration.
Also use a temporary context for the loops in GetXLogSummaryStats().
The number of iterations there is a small constant, so the previous
behavior was not a leak, but fix it for clarity (no need to backpatch
that part).
Backpatch the GetWALRecordsInfo() change to version 15;
pg_get_wal_fpi_info() didn't exist in version 15.
Reported-by: Peter Geoghegan
Author: Bharath Rupireddy
Discussion: https://www.postgresql.org/message-id/CAH2-WznLEJjn7ghmKOABOEZYuJvkTk%3DGKU3m0%2B-XBAH%2BerPiJQ%40mail.gmail.com
Backpatch-through: 15
This commit adds more coverage for utility statements, making it
possible to track the effects of query normalization on all the
queries that use either Const or A_Const nodes, the nodes where
normalization makes the most sense as they apply to constants (well,
most of the time, really).
This set of queries was extracted from an analysis of full dumps of
the regression database with different levels of normalization applied
to Const or A_Const nodes in utilities, reduced to a minimal set
covering the following (a few examples are sketched after the list):
- All relkinds (CREATE, ALTER, DROP)
- Policies
- Cursors
- Triggers
- Types
- Rules
- Statistics
- CALL
- Transaction statements (isolation level, options)
- EXPLAIN
- COPY
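For illustration, a few statements in the spirit of this coverage (all
object names are hypothetical):
    CREATE TABLE ctab (a int DEFAULT 123);
    CREATE POLICY pol ON ctab USING (a < 5);
    BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    DECLARE c CURSOR FOR SELECT 1;
    COMMIT;
    COPY (SELECT 2) TO STDOUT;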
Note that pg_stat_statements is not switched yet to show any
normalization for utilities; still, this improves the default coverage
of the query jumbling code (not by as much as enabling query jumbling
on the main regression test suite would, though):
- queryjumblefuncs.funcs.c: 36.8% => 48.5%
- queryjumblefuncs.switch.c: 33.2% => 43.1%
Reviewed-by: Bertrand Drouvot
Discussion: https://postgr.es/m/Y+MRdEq9W9XVa2AB@paquier.xyz
pg_stat_statements.sql acts as the main file for all the core tests of
the module, but things have become complicated to follow over the years
as some of the sub-scenarios tested in this file rely on assumptions
that come from completely different areas of it, like a GUC setup or a
relation created previously. For example, row tracking for CTAS/COPY
looked at the number of plans, which was not necessary, and level
tracking was mixed with checks on planner counts.
This commit refactors the tests of pg_stat_statements, moving test
cases out of pg_stat_statements.sql into their own files:
- Planning-related tests in planning.sql, for [re]plan counts and
top-level handling. These depend on pg_stat_statements.track_planning.
- Utilities in utility.sql (pg_stat_statements.track_utility), which
now includes the tests for:
-- Row tracking for CTAS, CREATE MATERIALIZED VIEW, COPY.
-- Basic utility statements.
-- SET statements.
- Tracking level, depending on pg_stat_statements.track. This part
covers scenarios with DO blocks, PL functions and SQL functions.
pg_stat_statements.sql (still named the same for now) keeps some
checks for role-level tracking and WAL generation metrics, which ought
to become independent in the long term for clarity.
While on it, this switches the order of the attributes when querying
pg_stat_statements, moving the query field last. This makes it much
easier to track changes related to normalization, as the query is the
only variable-length attribute queried (unaligned mode would be one
extra option, but it reduces the checks on the other fields).
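For example, a hedged sketch of the query shape this encourages:
    -- with the query text last, normalization-only differences show up
    -- at the end of each row, after the fixed-size counters
    SELECT calls, rows, query FROM pg_stat_statements
      ORDER BY query COLLATE "C";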
Test scenarios and their results match exactly what happened before
this commit in terms of calls, numbers of plans, numbers of rows,
cached data and level tracking, so this has no effect on the coverage
of what the reports in pg_stat_statements produce. A follow-up patch
will extend the tests of pg_stat_statements around utilities further,
so this split creates a foundation for that purpose without
complicating pg_stat_statements.sql even more.
Reviewed-by: Bertrand Drouvot
Discussion: https://postgr.es/m/Y+MRdEq9W9XVa2AB@paquier.xyz
This commit adds a new callback named startup_cb, called shortly after
a module is loaded, making it possible to initialize any additional
state data required by a module. This initial state can be saved in an
ArchiveModuleState, which is now passed down to all the callbacks that
a module can define. With this design, it is possible to have
per-module state, opening the door to supporting more than one archive
module.
The initialization of the callbacks is changed: _PG_archive_module_init()
no longer takes as input an ArchiveModuleCallbacks that a module has
to fill in with callback definitions. Instead, a module now needs to
return a const ArchiveModuleCallbacks.
All the structure and callback definitions of archive modules are moved
into their own header, named archive_module.h, from pgarch.h.
Command-based archiving follows the same approach, with a new set of
files named shell_archive.{c,h}.
There are a few more items under discussion to improve the design of
archive modules, like the fact that basic_archive calls sigsetjmp() by
itself to define its own error handling flow. These will be adjusted
later; the changes done here already cover a good portion of what has
been discussed.
Any modules created for v15 will need to be adjusted to this new
design.
Author: Nathan Bossart
Reviewed-by: Andres Freund
Discussion: https://postgr.es/m/20230130194810.6fztfgbn32e7qarj@awork3.anarazel.de
Like the implementations for node copy, write and read, this node
requires a custom implementation so that the query jumbling can
consider the correct value assigned to it, depending on its type (int,
float, bool, string, bitstring).
Based on a dump of pg_stat_statements from the regression database,
this would confuse the query jumbling of the following queries (a few
hedged examples follow the list):
- SET.
- COPY TO with SELECT queries.
- START TRANSACTION with different isolation levels.
- ALTER TABLE with default expressions.
- CREATE TABLE with partition bounds.
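For instance, hedged examples of such statements (object names are
hypothetical), whose constant values are now jumbled according to
their type:
    SET work_mem = '64MB';
    START TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    ALTER TABLE atab ALTER COLUMN a SET DEFAULT 123;
    CREATE TABLE ptab_1 PARTITION OF ptab
      FOR VALUES FROM (1) TO (10);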
Note that there may be a long-term argument for tracking the location
of such nodes so that query strings holding them could be normalized,
but this is left as a separate discussion.
Oversight in 3db72eb.
Discussion: https://postgr.es/m/Y9+HuYslMAP6yyPb@paquier.xyz
Thanks to Robins Tharakan for finding the issue and to Nathan Bossart
for promptly writing a test case to prevent future problems like this.
Reported-by: Nathan Bossart <nathandbossart@gmail.com>
Reported-by: Robins Tharakan <tharakan@gmail.com>
Author: Nathan Bossart <nathandbossart@gmail.com>
Discussion: https://postgr.es/m/20230130193008.GA2740781@nathanxps13
This commit changes the query jumbling code in queryjumblefuncs.c to be
generated automatically based on the information of the nodes in the
headers of src/include/nodes/ by using gen_node_support.pl. This
approach offers many advantages:
- Support for query jumbling of all the utility statements, based on
the state of their parsed Nodes and not only their query string. This
will greatly ease the switch to normalize the information of some
DDLs, like SET or CALL for example (this is left unchanged here and
should be part of a separate discussion). With this feature, the
number of entries stored for utilities in pg_stat_statements is
reduced; for example, "CHECKPOINT" and "checkpoint" now mean the same
thing and get the same query ID (see the sketch after this list).
- Documentation of query jumbling directly in the structure
definitions of the nodes. Since this code was introduced in
pg_stat_statements and then moved to core, the reasons behind the
choices of what should be included in the jumble are rather sparse.
Note that some explanation is added for the most relevant parts, as a
start.
- Overall code reduction and more consistency with the other parts
generating the read, write and copy functions for the nodes.
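To illustrate the first point, a minimal sketch with pg_stat_statements
installed:
    CHECKPOINT;
    checkpoint;
    -- both spellings now map to a single entry with calls = 2; the
    -- stored text is whichever spelling was seen first
    SELECT calls, query FROM pg_stat_statements
      WHERE lower(query) = 'checkpoint';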
The query jumbling is controlled by a set of new node attributes,
documented in nodes/nodes.h:
- custom_query_jumble, to mark a Node as having a custom
implementation.
- no_query_jumble, to ignore a Node entirely.
- query_jumble_ignore, to ignore a field in a Node.
- query_jumble_location, to mark a location in a Node for
normalization. This can apply only to int fields whose name includes
"location" (only Const as of this commit).
There should be no compatibility impact on pg_stat_statements, as the
new code applies the jumbling to the same fields for each node (its
regression tests show no changes, for one).
Benchmarks of the query jumbling between HEAD and this commit for
SELECT queries and DMLs have shown that the new code does not cause a
performance regression, with computation times close for both methods.
For utility queries, the new method is slower than the previous one,
which calculated a hash of the query string, though we are talking
about differences at the nanosecond level based on what I measured,
unnoticeable even for OLTP workloads as a query ID is calculated once
per query, at post-parse analysis.
Author: Michael Paquier
Reviewed-by: Peter Eisentraut
Discussion: https://postgr.es/m/Y5BHOUhX3zTH/ig6@paquier.xyz
EquivalenceClasses are now understood as applying within a "join
domain", which is a set of inner-joined relations (possibly underneath
an outer join). We no longer need to treat an EC from below an outer
join as a second-class citizen.
I have hopes of eventually being able to treat outer-join clauses via
EquivalenceClasses, by means of only applying deductions within the
EC's join domain. There are still problems in the way of that, though,
so for now the reconsider_outer_join_clause logic is still here.
I haven't been able to get rid of RestrictInfo.is_pushed_down either,
but I wonder if that could be recast using JoinDomains.
I had to hack one test case in postgres_fdw.sql to make it still test
what it was meant to, because postgres_fdw is inconsistent about
how it deals with quals containing non-shippable expressions; see
https://postgr.es/m/1691374.1671659838@sss.pgh.pa.us. That should
be improved, but I don't think it's within the scope of this patch
series.
Patch by me; thanks to Richard Guo for review.
Discussion: https://postgr.es/m/830269.1656693747@sss.pgh.pa.us
Remove RestrictInfo.nullable_relids, along with a good deal of
infrastructure that calculated it. One use-case for it was in
join_clause_is_movable_to, but we can now replace that usage with
a check to see if the clause's relids include any outer join
that can null the target relation. The other use-case was in
join_clause_is_movable_into, but that test can just be dropped
entirely now that the clause's relids include outer joins.
Furthermore, join_clause_is_movable_into should now be
accurate enough that it will accept anything returned by
generate_join_implied_equalities, so we can restore the Assert
that was diked out in commit 95f4e59c3.
Remove the outerjoin_delayed mechanism. We needed this before to
prevent quals from getting evaluated below outer joins that should
null some of their vars. Now that we consider varnullingrels while
placing quals, that's taken care of automatically, so throw the
whole thing away.
Teach remove_useless_result_rtes to also remove useless FromExprs.
Having done that, the delay_upper_joins flag serves no purpose any
more and we can remove it, largely reverting 11086f2f2.
Use constant TRUE for "dummy" clauses when throwing back outer joins.
This improves on a hack I introduced in commit 6a6522529. If we
have a left-join clause l.x = r.y, and a WHERE clause l.x = constant,
we generate r.y = constant and then don't really have a need for the
join clause. But we must throw the join clause back anyway after
marking it redundant, so that the join search heuristics won't think
this is a clauseless join and avoid it. That was a kluge introduced
under time pressure, and after looking at it I thought of a better
way: let's just introduce constant-TRUE "join clauses" instead,
and get rid of them at the end. This improves the generated plans for
such cases by not having to test a redundant join clause. We can also
get rid of the ugly hack used to mark such clauses as redundant for
selectivity estimation.
Patch by me; thanks to Richard Guo for review.
Discussion: https://postgr.es/m/830269.1656693747@sss.pgh.pa.us
Traditionally we used the same Var struct to represent the value
of a table column everywhere in parse and plan trees. This choice
predates our support for SQL outer joins, and it's really a pretty
bad idea with outer joins, because the Var's value can depend on
where it is in the tree: it might go to NULL above an outer join.
So expression nodes that are equal() per equalfuncs.c might not
represent the same value, which is a huge correctness hazard for
the planner.
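As a hedged illustration with hypothetical tables, the Var t2.y in the
join clause below sees t2's column before any nulling, while the
same-looking Var in the SELECT list can read as NULL above the left
join:
    SELECT t1.x, t2.y                        -- t2.y may be NULL here
      FROM t1 LEFT JOIN t2 ON t1.x = t2.y;   -- t2.y is never nulled here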
To improve this, decorate Var nodes with a bitmapset showing
which outer joins (identified by RTE indexes) may have nulled
them at the point in the parse tree where the Var appears.
This allows us to trust that equal() Vars represent the same value.
A certain amount of klugery is still needed to cope with cases
where we re-order two outer joins, but it's possible to make it
work without sacrificing that core principle. PlaceHolderVars
receive similar decoration for the same reason.
In the planner, we include these outer join bitmapsets into the relids
that an expression is considered to depend on, and in consequence also
add outer-join relids to the relids of join RelOptInfos. This allows
us to correctly perceive whether an expression can be calculated above
or below a particular outer join.
This change affects FDWs that want to plan foreign joins. They *must*
follow suit when labeling foreign joins in order to match with the
core planner, but for many purposes (if postgres_fdw is any guide)
they'd prefer to consider only base relations within the join.
To support both requirements, redefine ForeignScan.fs_relids as
base+OJ relids, and add a new field fs_base_relids that's set up by
the core planner.
Large though it is, this commit just does the minimum necessary to
install the new mechanisms and get check-world passing again.
Follow-up patches will perform some cleanup. (The README additions
and comments mention some stuff that will appear in the follow-up.)
Patch by me; thanks to Richard Guo for review.
Discussion: https://postgr.es/m/830269.1656693747@sss.pgh.pa.us
When auto_explain.log_verbose is on, auto_explain should log plans
equivalent to those of EXPLAIN (VERBOSE). However, when
compute_query_id is on, query identifiers were not showing up, as they
were only handled by EXPLAIN (VERBOSE). This brings auto_explain on
par with EXPLAIN in that regard. Note that, like EXPLAIN, auto_explain
does not show the query identifier when compute_query_id=regress.
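A minimal sketch of the resulting behavior:
    LOAD 'auto_explain';
    SET compute_query_id = on;
    SET auto_explain.log_min_duration = 0;
    SET auto_explain.log_verbose = on;
    -- the plan logged for this query now includes its Query Identifier
    SELECT 1;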
The change is made so that the choice of printing the query identifier
happens in ExplainPrintPlan() rather than in ExplainOnePlan(), avoiding
duplication of the logic dealing with the query ID. auto_explain is
the only in-core caller of ExplainPrintPlan().
While looking at the area, I have noticed that more consolidation
between EXPLAIN and auto_explain would be in order for the logging of
the plan duration and the buffer usage. This refactoring is left as a
future change.
Author: Atsushi Torikoshi
Reviewed-by: Justin Pryzby, Julien Rouhaud
Discussion: https://postgr.es/m/1ea21936981f161bccfce05765c03bee@oss.nttdata.com
This rename is in preparation for the introduction of recovery modules,
where basic_wal_module will be used as a base template for the set of
callbacks introduced. The former name did not really reflect all that.
Author: Nathan Bossart
Discussion: https://postgr.es/m/20221227192449.GA3672473@nathanxps13
Until now dblink and postgres_fdw did not process interrupts during connection
establishment. Besides preventing query cancellations etc, this can lead to
undetected deadlocks, as global barriers are not processed.
These undetected deadlocks are the reason for the recent spate of CI
test failures in the FreeBSD 'test_running' step.
Fix the bug by using the helper from libpq-be-fe-helpers.h, introduced in a
prior commit. Besides fixing the bug, this also removes duplicated code
around reserving file descriptors.
As the change is relatively large and there are no field reports of the
problem, don't backpatch for now.
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/20220925232237.p6uskba2dw6fnwj2@awork3.anarazel.de
Backpatch:
This function extracts the full page images from a range of records,
specified by the input arguments start_lsn and end_lsn. Like the other
functions of this module, an error is raised if the LSNs used do not
reflect real system values. All the FPIs stored in a single record are
extracted.
The module's version is bumped to 1.1.
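A hedged usage sketch, assuming this is the pg_get_wal_fpi_info()
function mentioned above; the LSN bounds are illustrative:
    SELECT *
      FROM pg_get_wal_fpi_info('0/1000000', '0/2000000');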
Author: Bharath Rupireddy
Reviewed-by: Bertrand Drouvot
Discussion: https://postgr.es/m/CALj2ACVCcvzd7WiWvD=6_7NBvVB_r6G0EGSxL4F8vosAi6Se4g@mail.gmail.com
This will ease a follow-up change that will generate this code
automatically. The C file is renamed, for consistency with the
node-related files whose code is generated by gen_node_support.pl:
- queryjumble.c -> queryjumblefuncs.c
- utils/queryjumble.h -> nodes/queryjumble.h
Per a suggestion from Peter Eisentraut.
Reviewed-by: Peter Eisentraut
Discussion: https://postgr.es/m/Y5BHOUhX3zTH/ig6@paquier.xyz
The rule system needs "old" and/or "new" pseudo-RTEs in rule actions
that are ON INSERT/UPDATE/DELETE. Historically it's put such entries
into the ON SELECT rules of views as well, but those are really quite
vestigial. The only thing we've used them for is to carry the
view's relid forward to AcquireExecutorLocks (so that we can
re-lock the view to verify it hasn't changed before re-using a plan)
and to carry its relid and permissions data forward to execution-time
permissions checks. What we can do instead of that is to retain
these fields of the RTE_RELATION RTE for the view even after we
convert it to an RTE_SUBQUERY RTE. This requires a tiny amount of
extra complication in the planner and AcquireExecutorLocks, but on
the other hand we can get rid of the logic that moves that data from
one place to another.
The principal immediate benefit of doing this, aside from a small
saving in the pg_rewrite data for views, is that these pseudo-RTEs
no longer trigger ruleutils.c's heuristic about qualifying variable
names when the rangetable's length is more than 1. That results
in quite a number of small simplifications in regression test outputs,
which are all to the good IMO.
Bump catversion because we need to dump a few more fields of
RTE_SUBQUERY RTEs. While those will always be zeroes anyway in
stored rules (because we'd never populate them until query rewrite)
they are useful for debugging, and it seems like we'd better make
sure to transmit such RTEs accurately in plans sent to parallel
workers. I don't think the executor actually examines these fields
after startup, but someday it might.
This is a second attempt at committing 1b4d280ea. The difference
from the first time is that now we can add some filtering rules to
AdjustUpgrade.pm to allow cross-version upgrade testing to pass
despite all the cosmetic changes in CREATE VIEW outputs.
Amit Langote (filtering rules by me)
Discussion: https://postgr.es/m/CA+HiwqEf7gPN4Hn+LoZ4tP2q_Qt7n3vw7-6fJKOf92tSEnX6Gg@mail.gmail.com
Discussion: https://postgr.es/m/891521.1673657296@sss.pgh.pa.us
Avoid explicitly grouping by columns that we know are redundant
for sorting; for example, we need only group by one of x and y in
SELECT ... WHERE x = y GROUP BY x, y
This comes up more often than you might think, as shown by the
changes in the regression tests. It's nearly free to detect too,
since we are just piggybacking on the existing logic that detects
redundant pathkeys. (In some of the existing plans that change,
it's visible that a sort step preceding the grouping step already
didn't bother to sort by the redundant column, making the old plan
a bit silly-looking.)
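A hedged sketch of the effect, with a hypothetical table:
    CREATE TABLE tab (x int, y int);
    EXPLAIN (COSTS OFF)
    SELECT x, y FROM tab WHERE x = y GROUP BY x, y;
    -- plausible output: the Group Key lists only x, not both columns
    --  HashAggregate
    --    Group Key: x
    --    ->  Seq Scan on tab
    --          Filter: (x = y)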
To do this, build processed_groupClause and processed_distinctClause
lists that omit any provably-redundant sort items, and consult those,
not the originals, where relevant. This means that within the
planner, one should usually consult root->processed_groupClause or
root->processed_distinctClause if one wants to know which columns
are to be grouped on; but to check whether grouping or distinct-ing
is happening at all, check non-NIL-ness of parse->groupClause or
parse->distinctClause. This is comparable to longstanding rules
about handling the HAVING clause, so I don't think it'll be a huge
maintenance problem.
nodeAgg.c also needs minor mods, because it's now possible to generate
AGG_PLAIN and AGG_SORTED Agg nodes with zero grouping columns.
Patch by me; thanks to Richard Guo and David Rowley for review.
Discussion: https://postgr.es/m/185315.1672179489@sss.pgh.pa.us
This makes sure that the internal logic of these functions does not
attempt to change the values of the constified arguments, and it
removes one unconstify() in basic_archive.c.
Author: Nathan Bossart
Reviewed-by: Andrew Dunstan, Peter Eisentraut
Discussion: https://postgr.es/m/20230114231126.GA2580330@nathanxps13
It's likely worth adding some automated way of preventing further
omissions. We're discussing how to best do that.
Reported-by: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/20230117173509.GV9837@telsasoft.com
This reverts commit 1b4d280ea1.
It's broken the buildfarm members that run cross-version-upgrade tests,
because they're not prepared to deal with cosmetic differences between
CREATE VIEW commands emitted by older servers and HEAD. Even if we had
a solution to that, which we don't, it'd take some time to roll it out
to the affected animals. This improvement isn't valuable enough to
justify addressing that problem on an emergency basis, so revert it
for now.
The rule system needs "old" and/or "new" pseudo-RTEs in rule actions
that are ON INSERT/UPDATE/DELETE. Historically it's put such entries
into the ON SELECT rules of views as well, but those are really quite
vestigial. The only thing we've used them for is to carry the
view's relid forward to AcquireExecutorLocks (so that we can
re-lock the view to verify it hasn't changed before re-using a plan)
and to carry its relid and permissions data forward to execution-time
permissions checks. What we can do instead of that is to retain
these fields of the RTE_RELATION RTE for the view even after we
convert it to an RTE_SUBQUERY RTE. This requires a tiny amount of
extra complication in the planner and AcquireExecutorLocks, but on
the other hand we can get rid of the logic that moves that data from
one place to another.
The principal immediate benefit of doing this, aside from a small
saving in the pg_rewrite data for views, is that these pseudo-RTEs
no longer trigger ruleutils.c's heuristic about qualifying variable
names when the rangetable's length is more than 1. That results
in quite a number of small simplifications in regression test outputs,
which are all to the good IMO.
Bump catversion because we need to dump a few more fields of
RTE_SUBQUERY RTEs. While those will always be zeroes anyway in
stored rules (because we'd never populate them until query rewrite)
they are useful for debugging, and it seems like we'd better make
sure to transmit such RTEs accurately in plans sent to parallel
workers. I don't think the executor actually examines these fields
after startup, but someday it might.
Amit Langote
Discussion: https://postgr.es/m/CA+HiwqEf7gPN4Hn+LoZ4tP2q_Qt7n3vw7-6fJKOf92tSEnX6Gg@mail.gmail.com
There are a number of places where a shell command is constructed with
percent-placeholders (like %x). It's cumbersome to have to open-code
this several times, so this commit factors the logic out into a
separate function. This also allows us to ensure consistent handling
of, and to document, some subtle behaviors, such as what to do with
unrecognized placeholders.
The unified handling is now that incorrect and unknown placeholders
are an error, where previously in most cases they were skipped or
ignored. This affects the following settings:
- archive_cleanup_command
- archive_command
- recovery_end_command
- restore_command
- ssl_passphrase_command
The following settings are part of this refactoring but already had
stricter error handling and should be unchanged in their behavior:
- basebackup_to_shell.command
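For example, with archive_command, whose documented placeholders
include %p (path of the file to archive) and %f (file name only):
    ALTER SYSTEM SET archive_command = 'cp %p /mnt/server/archive/%f';
    -- an unrecognized placeholder such as %z in this string is now an
    -- error rather than being silently skipped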
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/5238bbed-0b01-83a6-d4b2-7eb0562a054e%40enterprisedb.com
Check the remote relkind before trying to use TABLESAMPLE to acquire a
sample from the remote relation. Even if the remote server version has
TABLESAMPLE support, the foreign table may point to an incompatible
relkind (e.g. a view or a sequence).
If the relkind does not support TABLESAMPLE, error out if TABLESAMPLE
was requested specifically (as system/bernoulli), or fall back to
random() just like we do for old server versions.
We currently end up disabling sampling for such relkind values anyway,
due to reltuples being -1 or 1, but that seems rather accidental and
might get broken by improvements to reltuples estimates, etc. So
better to make the check explicit.
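A hedged sketch of the behavior, with remote_view_ft a hypothetical
foreign table pointing at a remote view:
    ALTER FOREIGN TABLE remote_view_ft
      OPTIONS (ADD analyze_sampling 'auto');
    ANALYZE remote_view_ft;  -- no TABLESAMPLE for this relkind: random()
    ALTER FOREIGN TABLE remote_view_ft
      OPTIONS (SET analyze_sampling 'system');
    ANALYZE remote_view_ft;  -- TABLESAMPLE requested specifically: error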
Reported-by: Tom Lane
Discussion: https://postgr.es/m/951485.1672461744%40sss.pgh.pa.us
A comment was left behind referencing the sample rate adjustment
removed in 8ad51b5f44, so clean that up. While at it, also remove the
sample rate clamping, which should not be necessary without the
adjustment, and just check that with an assert.
Reported-by: Tom Lane
Discussion: https://postgr.es/m/951485.1672461744%40sss.pgh.pa.us
This is like the existing bt_page_stats() function, but it can
report on a range of pages rather than just one at a time.
I don't have a huge amount of faith in the portability of the
new test cases, but they do pass in a 32-bit FreeBSD VM here.
Further adjustment may be needed depending on buildfarm results.
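A hedged usage sketch, assuming the new function is named
bt_multi_page_stats and takes a relation name, a starting block number
and a block count:
    CREATE EXTENSION pageinspect;
    SELECT blkno, type, live_items, avg_item_size
      FROM bt_multi_page_stats('some_btree_index', 1, 3);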
Hamid Akhtar, reviewed by Naeem Akhter, Bertrand Drouvot,
Bharath Rupireddy, and myself
Discussion: https://postgr.es/m/CANugjht-=oGMRmNJKMqnBC69y7vr+wHDmm0ZK6-1pJsxoBKBbA@mail.gmail.com
When collecting an ANALYZE sample on foreign tables, postgres_fdw
fetched all rows and performed the sampling locally. For large tables
this meant transferring and then immediately discarding large amounts
of data.
This commit allows the sampling to be performed on the remote server,
transferring only the much smaller sample. The sampling is performed
using the built-in TABLESAMPLE methods (system, bernoulli) or random()
function, depending on the remote server version.
Remote sampling can be enabled by analyze_sampling on the foreign server
and/or foreign table, with supported values 'off', 'auto', 'system',
'bernoulli' and 'random'. The default value is 'auto' which uses either
'bernoulli' (TABLESAMPLE method) or 'random' (for remote servers without
TABLESAMPLE support).
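A hedged example; server and table names are hypothetical, and the
table-level option overrides the server-level one:
    ALTER SERVER remote_srv
      OPTIONS (ADD analyze_sampling 'auto');
    ALTER FOREIGN TABLE remote_tab
      OPTIONS (ADD analyze_sampling 'bernoulli');
    ANALYZE remote_tab;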
Fix incorrect code which tried to convert a Bitmapset of columns whose
attnums are numbered according to a parent table into the equivalent
Bitmapset with attnums numbered according to the given child table.
This code is new as of a61b1f748 and was failing to do the correct
translation when there was an intermediate parent table between 'rel' and
'top_parent_rel'.
Reported-by: Ranier Vilela
Author: Richard Guo, Amit Langote
Discussion: https://postgr.es/m/CAEudQArohfB_Gy%2BhcH2-bANUkxgjJiP%3DABq01_LgTNTbcNijag%40mail.gmail.com
seg stores the number of significant digits in an input number
in a "char" field. If char is signed, and the input is more than
127 digits long, the count can read out as negative, causing
seg_out() to print garbage (or, if you're really unlucky,
even crash).
To fix, clamp the digit count to be not more than FLT_DIG.
(In theory this loses some information about what the original
input was, but it doesn't seem like useful information; it would
not survive dump/restore in any case.)
Also, in case there are stored values of the seg type containing
bad data, add a clamp in seg_out's restore() subroutine.
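A hedged reproduction; the cast from text goes through seg's input
function:
    CREATE EXTENSION seg;
    -- 130 significant digits: the signed "char" counter previously
    -- wrapped negative; the count is now clamped to FLT_DIG
    SELECT ('1.' || repeat('5', 129))::seg;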
Per bug #17725 from Robins Tharakan. It's been like this
forever, so back-patch to all supported branches.
Discussion: https://postgr.es/m/17725-0a09313b67fbe86e@postgresql.org
This shaves some code by replacing the combinations of
CreateTemplateTupleDesc()/TupleDescInitEntry(), which hardcoded a
mapping of the attributes listed in pg_proc.dat, with
get_call_result_type() to build the TupleDesc needed for the rows
generated.
get_call_result_type() is more expensive than the former style, but
this removes some duplication between the lists of OUT parameters
(pg_proc.dat and the attributes hardcoded in these code paths). This
is applied to functions that are not considered critical (i.e., ones
that could be called repeatedly for monitoring purposes).
Author: Bharath Rupireddy
Reviewed-by: Robert Haas, Álvaro Herrera, Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/CALj2ACV23HW5HP5hFjd89FNS-z5X8r2jNXdMXcpN2BgTtKd87w@mail.gmail.com