A new function pg_stat_get_backend_subxact() can be used to get
information about the number of subtransactions in the cache of
a particular backend and whether that cache has overflowed. This
can be useful for tracking down performance problems that can
result from overflowed snapshots.
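For instance, a monitoring query along these lines (a sketch: it
assumes the function's output columns are named subxact_count and
subxact_overflowed, and pairs it with the existing backend-ID
functions) can list any backends whose cache has overflowed:

    SELECT id AS backend_id,
           pg_stat_get_backend_pid(id) AS pid,
           s.subxact_count,
           s.subxact_overflowed
    FROM pg_stat_get_backend_idset() AS id,
         LATERAL pg_stat_get_backend_subxact(id) AS s
    WHERE s.subxact_overflowed;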
Dilip Kumar, reviewed by Zhihong Yu, Nikolay Samokhvalov,
Justin Pryzby, Nathan Bossart, Ashutosh Sharma, Julien
Rouhaud. Additional design comments from Andres Freund,
Tom Lane, Bruce Momjian, and David G. Johnston.
Discussion: http://postgr.es/m/CAFiTN-ut0uwkRJDQJeDPXpVyTWD46m3gt3JDToE02hTfONEN=Q@mail.gmail.com
While fooling with my pet outer-join-variables patch, I discovered
that the test case I added in commit 11086f2f2 no longer demonstrates
what it's supposed to. The idea is to tempt the planner to reverse
the order of the two outer joins, which would leave no place to
correctly evaluate the WHERE clause that's inserted between them.
Before the addition of the delay_upper_joins mechanism, it would
have taken the bait.
However, subsequent improvements broke the test in two different ways.
First, we now recognize the IS NULL coding pattern as an antijoin, and
we won't re-order antijoins; even if we did, the IS NULL test clauses
get removed so there would be no opportunity for them to misbehave.
Second, the planner now discovers that nested parameterized indexscans
are a lot cheaper than the double hash join it used back in the day,
and that approach doesn't want to re-order the joins anyway. Thus,
in HEAD the test passes even if one dikes out delay_upper_joins.
To fix, change the IS NULL tests to COALESCE clauses, which produce
the same results but the planner isn't smart enough to convert them
to antijoins. It'll still go for parameterized indexscans though,
so drop the index enabling that (don't know why I added that in the
first place), and disable nestloop joining just to be sure.
This time around, add an EXPLAIN to make the choice of plan visible.
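For illustration, the rewrite amounts to this kind of change
(sketched with made-up table and column names; in the test the data
contains no zeroes, so the two forms produce the same rows):

    -- recognized as an antijoin, defeating the test:
    SELECT a.id FROM a LEFT JOIN b ON a.id = b.id
      WHERE b.id IS NULL;
    -- same result here, but no antijoin conversion:
    SELECT a.id FROM a LEFT JOIN b ON a.id = b.id
      WHERE COALESCE(b.id, 0) = 0;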
Such references failed with "cache lookup failed for type 0"
because we didn't resolve the type of the CYCLE column until after
analyzing the CTE's query. We can just move that processing
to before the recursive parse_sub_analyze call, though.
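For reference, a query of roughly this shape (a sketch, not
necessarily the reporter's exact case) hit the failure, because the
recursive term references the CYCLE mark column:

    WITH RECURSIVE t(n) AS (
        SELECT 1
      UNION ALL
        SELECT n % 3 + 1 FROM t
        WHERE NOT is_cycle          -- reference to the CYCLE column
    ) CYCLE n SET is_cycle USING path
    SELECT * FROM t;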
While here, invent a couple of local variables to make this
code less egregiously wider-than-80-columns.
Per bug #17723 from Vik Fearing. Back-patch to v14 where
the CYCLE feature was added.
Discussion: https://postgr.es/m/17723-2c4985ff111e7bba@postgresql.org
The environment variable PG_TEST_PG_UPGRADE_MODE can be set to
override the default transfer mode for the pg_upgrade tests.
(Automatically running the pg_upgrade tests for all supported modes
would be too slow.)
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/50a97009-8ff9-ca4d-a0f6-6086a6775a5b%40enterprisedb.com
This option selects the default transfer mode.  Having a dedicated
option makes scripts and tests more explicit.  It also
makes it easier to talk about a "copy" mode rather than "the default
mode" or something like that, since until now the default mode didn't
have an externally visible name.
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/50a97009-8ff9-ca4d-a0f6-6086a6775a5b%40enterprisedb.com
This ancient bit of code was summarily trapping any ereport longjmp
whatsoever and assuming that it must represent an invalid-XML report.
It's not really appropriate to handle OOM-like situations that way:
maybe the input is valid or maybe not, but we couldn't find out.
And it'd be a seriously bad idea to ignore, say, a query cancel
error that way. (Perhaps that can't happen because there is no
CHECK_FOR_INTERRUPTS anywhere within xml_parse, but even if that's
true today it's obviously a very fragile assumption.)
But in the wake of the previous commit, we can drop the PG_TRY
here altogether, and use the soft error mechanism to catch only
the kinds of errors that are legitimate to treat as invalid-XML.
(This is our first use of the soft error mechanism for something
not directly related to a datatype input function. It won't be
the last.)
xml_is_document can be converted in the same way. That one is
not actively broken, because it was checking specifically for
ERRCODE_INVALID_XML_DOCUMENT rather than trapping everything;
but the code is still shorter and probably faster this way.
Discussion: https://postgr.es/m/3564577.1671142683@sss.pgh.pa.us
The key idea here is that xml_parse must distinguish hard errors
from soft errors. We want to throw a hard error for libxml
initialization failures: those might be out-of-memory, or something
else, but in any case they are not the fault of the input string.
If we get to the point of parsing the input, and something goes
wrong, we can fairly consider that to mean bad input.
One thing that arguably does mean bad input, but I didn't trouble
to handle softly, is encoding conversion failure while converting
the server encoding to UTF8. This might be something to improve
later, but it seems like a pretty low-probability scenario.
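With both patches in place, bad XML can be probed without aborting
the transaction, e.g. using the pg_input_is_valid() helper added as
part of this patch series (a sketch; requires a --with-libxml build):

    SELECT pg_input_is_valid('<a>valid</a>', 'xml');    -- true
    SELECT pg_input_is_valid('<a>broken</b>', 'xml');   -- false, softly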
Discussion: https://postgr.es/m/3564577.1671142683@sss.pgh.pa.us
When incremental sorts were added in v13, a 1.5x pessimism factor was
added to the cost model.  Seemingly this was done because the cost
model only has an estimate of the total number of input rows and the
number of presorted groups.  It assumes that the input rows will be
evenly distributed throughout the presorted groups.  The 1.5x
pessimism factor was added to slightly reduce the likelihood of
incremental sorts being used, in the hope of avoiding performance
regressions where an incremental sort plan was picked and turned out
slower due to a large skew in the number of rows in the presorted
groups.
An additional quirk with the path generation code meant that we could
consider both a sort and an incremental sort on paths with presorted keys.
This meant that with the pessimism factor, it was possible that we opted
to perform a sort rather than an incremental sort when the given path had
presorted keys.
Here we remove the 1.5x pessimism factor to allow incremental sorts to
have a fairer chance at being chosen against a full sort.
Previously we would generally create a sort path on the cheapest input
path (if that wasn't sorted already) and incremental sort paths on any
path which had presorted keys. This meant that if the cheapest input path
wasn't completely sorted but happened to have presorted keys, we would
create a full sort path *and* an incremental sort path on that input path.
Here we change this logic so that if there are presorted keys, we only
create an incremental sort path, and create sort paths only when a full
sort is required.
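As a sketch (names invented; the exact plan of course depends on
statistics and costs), a query ordering by a superset of an index's
keys should now get the incremental sort directly rather than having
it compete against a full sort of the same input path:

    CREATE TABLE t (a int, b int);
    CREATE INDEX t_a_idx ON t (a);
    EXPLAIN (COSTS OFF) SELECT * FROM t ORDER BY a, b;
    --  Incremental Sort
    --    Sort Key: a, b
    --    Presorted Key: a
    --    ->  Index Scan using t_a_idx on t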
Both the removal of the cost pessimism factor and the changes made to the
path generation make it more likely that incremental sorts will now be
chosen.  Of course, as with teaching the planner any new tricks, this
means an increased likelihood that the planner will perform an
incremental sort when it's not the best method.  Our standard escape
hatch for these cases is an enable_* GUC, and enable_incremental_sort
already exists for this.
This came out of a report by Pavel Luzanov, who mentioned that the
master branch was choosing to perform a Seq Scan -> Sort -> Group
Aggregate for his query with an ORDER BY aggregate function.  The v15
plan for his query performed an Index Scan -> Group Aggregate; the
aggregate performed the final sort internally in nodeAgg.c for the
aggregate's ORDER BY.  The ideal plan would have been to use the
index, which provided partially sorted input, then use an incremental
sort to provide the aggregate with fully sorted input.  This was not
being chosen due to the pessimism in the incremental sort cost model,
so here we remove that and rationalize the path generation so that
sort and incremental sort plans don't have to needlessly compete.  We
assume that it's senseless to ever use a full sort on a given input
path where an incremental sort can be performed instead.
Reported-by: Pavel Luzanov
Reviewed-by: Richard Guo
Discussion: https://postgr.es/m/9f61ddbf-2989-1536-b31e-6459370a6baa%40postgrespro.ru
It seems that drop-index-concurrently-1 has started to forget what it was
originally meant to be testing. d2d8a229b, which added incremental sorts
changed the expected plan to be an Index Scan plan instead of a Seq Scan
plan. This occurred as the primary key index of the table in question
provided presorted input and, because that index happened to be the
cheapest input path due to enable_seqscan being disabled, the incremental
sort changes just added a Sort on top of that.  Based on the name of
the PREPAREd statement, the intention here seems to be that the query
should produce a seqscan plan.
The reason this test has become broken seems to be due to how the test was
originally coded. The test was trying to force a seqscan plan by
performing some casting to make it so the test_dc index couldn't be used
to perform the required filtering. Trying to coax the planner into using
a plan which has costed in a disable_cost seems like it's always going
to be flaky, as small changes in costs are drowned out by the large
disable_cost combined with add_path's STD_FUZZ_FACTOR.  Here we get rid of
the casts that we're using to try to trick the planner into a seqscan and
instead toggle enable_seqscan as and when required to get the desired
plan.
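That is, roughly (a simplified sketch of the isolation-spec steps,
not the spec's exact queries):

    SET enable_seqscan = off;
    EXPLAIN (COSTS OFF) SELECT * FROM test_dc WHERE data = 34;
    -- expect an Index Scan
    SET enable_seqscan = on;
    EXPLAIN (COSTS OFF) SELECT * FROM test_dc WHERE data = 34;
    -- expect a Seq Scan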
Additionally, rename a few things in the test and add some additional
wording to the comments to make it clearer what we expect this test
to be doing.
Discussion: https://postgr.es/m/CAApHDvrbDhObhLV+=U_K_-t+2Av2av1aL9d+2j_3AO-XndaviA@mail.gmail.com
Backpatch-through: 13, where d2d8a229b changed the expected test output
The building of command completion tags could often be seen showing up
in profiles when running high-TPS workloads.
The query completion tags were being built with snprintf, which is slow at
the best of times when compared with more manual ways of formatting
strings. Here we introduce BuildQueryCompletionString() to do this job
for us. We also now store the completion tag's strlen in the
CommandTagBehavior struct so that we can quickly memcpy this number of
bytes into the completion tag string. Appending the rows affected is done
via pg_ulltoa_n. BuildQueryCompletionString returns the length of the
built string. This saves us having to call strlen to figure out how many
bytes to pass to pq_putmessage().
Author: David Rowley, Andres Freund
Reviewed-by: Andres Freund
Discussion: https://postgr.es/m/CAHoyFK-Xwqc-iY52shj0G+8K9FJpse+FuZ36XBKy78wDVnd=Qg@mail.gmail.com
This is mostly straightforward, except that if the range type
has a canonical function, that might throw an error during range
input. (Such errors probably only occur for edge cases: in the
in-core canonical functions, it happens only if a bound has the
maximum valid value for the underlying type.) Hence, this patch
extends the soft-error regime to allow canonical functions to
return errors softly as well. Extensions implementing range
canonical functions will need modification anyway because of the
API change for range_serialize(); while at it, they might want
to do something similar to what's been done here in the in-core
canonical functions.
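The edge case is visible with int4range, whose canonical function
must increment an inclusive upper bound (a sketch, using the
pg_input_is_valid() helper from this patch series):

    SELECT pg_input_is_valid('[1,10]', 'int4range');          -- true
    SELECT pg_input_is_valid('[1,2147483647]', 'int4range');
    -- false: canonicalizing the upper bound would overflow int4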
Discussion: https://postgr.es/m/3284599.1671075185@sss.pgh.pa.us
Because we added StaticAssertStmt() first before StaticAssertDecl(),
some uses as well as the instructions in c.h are now a bit backwards
from the "native" way static assertions are meant to be used in C.
This updates the guidance and moves some static assertions to better
places.
Specifically, since the addition of StaticAssertDecl(), we can put
static assertions at the file level. This moves a number of static
assertions out of function bodies, where they might have been stuck
out of necessity, to perhaps better places at the file level or in
header files.
Also, when the static assertion appears in a position where a
declaration is allowed, then using StaticAssertDecl() is more native
than StaticAssertStmt().
Reviewed-by: John Naylor <john.naylor@enterprisedb.com>
Discussion: https://www.postgresql.org/message-id/flat/941a04e7-dd6f-c0e4-8cdf-a33b3338cbda%40enterprisedb.com
35f059e9bd put the provariadic sanity
check into type_sanity.sql, even though it's not about types, and
moreover in the middle of some connected test group, which makes it
all very confusing. Move it to opr_sanity.sql, where it is in better
company.
Convert the input functions of assorted internal-ish datatypes,
namely aclitemin, int2vectorin, oidin, oidvectorin, pg_lsn_in,
pg_snapshot_in, and tidin, to the new style.
(Some others you might expect to find in this group, such as
cidin and xidin, need no changes because they never throw
errors at all. That seems a little cheesy ... but it is not in
the charter of this patch series to add new error conditions.)
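The user-visible effect is that these types now cooperate with
soft-error callers, for example (a sketch):

    SELECT pg_input_is_valid('16/B374D848', 'pg_lsn');  -- true
    SELECT pg_input_is_valid('16/garbage', 'pg_lsn');   -- false, softly
    SELECT pg_input_is_valid('1 2 3', 'int2vector');    -- true
    SELECT pg_input_is_valid('1 2 x', 'int2vector');    -- false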
Amul Sul, minor mods by me
Discussion: https://postgr.es/m/CAAJ_b97KeDWUdpTKGOaFYPv0OicjOu6EW+QYWj-Ywrgj_aEy1g@mail.gmail.com
Convert box_in, circle_in, line_in, lseg_in, path_in, point_in,
and poly_in to the new style.
line_in still throws hard errors for overflows/underflows that can occur
when the input is specified as two points rather than in the canonical
"Ax + By + C = 0" style. I'm not too concerned about that: it won't be
reached in normal dump/restore cases, and it's fairly debatable that
such conversion should ever have been made part of a type input function
in the first place. But in any case, I don't want to extend the soft
error conventions into float.h without more discussion than this patch
has had.
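For example (a sketch):

    SELECT pg_input_is_valid('(1,2),(3,4)', 'box');   -- true
    SELECT pg_input_is_valid('(1,2),(3', 'box');      -- false, softly
    SELECT pg_input_is_valid('{1,0,5}', 'line');      -- true
    -- a line given as two points can still throw a hard error if
    -- float overflow occurs while converting to Ax+By+C form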
Amul Sul, minor mods by me
Discussion: https://postgr.es/m/CAAJ_b97KeDWUdpTKGOaFYPv0OicjOu6EW+QYWj-Ywrgj_aEy1g@mail.gmail.com
Add support for hexadecimal, octal, and binary integer literals:

    0x42F
    0o273
    0b100101

per SQL:202x draft.
This adds support in the lexer as well as in the integer type input
functions.
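For example:

    SELECT 0x42F AS hex, 0o273 AS octal, 0b100101 AS binary;
    -- -> 1071, 187, 37
    SELECT int4 '0x42F';   -- the input functions accept it too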
Reviewed-by: John Naylor <john.naylor@enterprisedb.com>
Reviewed-by: Zhihong Yu <zyu@yugabyte.com>
Reviewed-by: David Rowley <dgrowleyml@gmail.com>
Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/b239564c-cad0-b23e-c57e-166d883cb97d@enterprisedb.com
This referred to the size of the buffers for k_ipad and k_opad in HMAC
computations. This is unused since e6bdfd9, where SCRAM has switched to
the cryptohash routines for its HMAC calculations rather than its own
maths.
Reviewed-by: Jacob Champion
Discussion: https://postgr.es/m/Y5gGMjXhyp0oK0mH@paquier.xyz
Commits f92944137 et al. made IsInTransactionBlock() set the
XACT_FLAGS_NEEDIMMEDIATECOMMIT flag before returning "false",
on the grounds that that kept its API promises equivalent to those of
PreventInTransactionBlock(). This turns out to be a bad idea though,
because it allows an ANALYZE in a pipelined series of commands to
cause an immediate commit, which is unexpected.
Furthermore, if we return "false" then we have another issue,
which is that ANALYZE will decide it's allowed to do internal
commit-and-start-transaction sequences, thus possibly unexpectedly
committing the effects of previous commands in the pipeline.
To fix the latter situation, invent another transaction state flag
XACT_FLAGS_PIPELINING, which explicitly records the fact that we
have executed some extended-protocol command and not yet seen a
commit for it. Then, require that flag to not be set before allowing
IsInTransactionBlock() to return "false".
Having done that, we can remove its setting of NEEDIMMEDIATECOMMIT
without fear of causing problems. This means that the API guarantees
of IsInTransactionBlock now diverge from PreventInTransactionBlock,
which is mildly annoying, but it seems OK given the very limited usage
of IsInTransactionBlock. (In any case, a caller preferring the old
behavior could always set NEEDIMMEDIATECOMMIT for itself.)
For consistency also require XACT_FLAGS_PIPELINING to not be set
in PreventInTransactionBlock. This too is meant to prevent commands
such as CREATE DATABASE from silently committing previous commands
in a pipeline.
Per report from Peter Eisentraut. As before, back-patch to all
supported branches (which sadly no longer includes v10).
Discussion: https://postgr.es/m/65a899dd-aebc-f667-1d0a-abb89ff3abf8@enterprisedb.com
Instead of half a dozen mostly-duplicate ExecGrant_Foo() functions,
write one common function ExecGrant_generic() that can handle most of
them.  We already have all the information we need, such as which
system cache corresponds to which catalog table and which column is
the ACL column.
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Antonin Houska <ah@cybertec.at>
Discussion: https://www.postgresql.org/message-id/flat/22c7e802-4e7d-8d87-8b71-cba95e6f4bcf%40enterprisedb.com
jsonb_get_element() was incautious enough to use VARDATA() and
VARSIZE() directly on an arbitrary text Datum. That of course
fails if the Datum is short-header, compressed, or out-of-line.
The typical result would be failing to match any element of a
jsonb object, though matching the wrong one seems possible as well.
setPathObject() was slightly brighter, in that it used VARDATA_ANY
and VARSIZE_ANY_EXHDR, but that only kept it out of trouble for
short-header Datums. push_path() had the same issue. This could
result in faulty subscripted insertions, though keys long enough to
cause a problem are likely rare in the wild.
Having seen these, I looked around for unsafe usages in the rest
of the adt/json* files. There are a couple of places where it's not
immediately obvious that the Datum can't be compressed or out-of-line,
so I added pg_detoast_datum_packed() to cope if it is. Also, remove
some other usages of VARDATA/VARSIZE on Datums we just extracted from
a text array. Those aren't actively broken, but they will become so
if we ever start allowing short-header array elements, which does not
seem like a terribly unreasonable thing to do. In any case they are
not great coding examples, and they could also do with comments
pointing out that we're assuming we don't need pg_detoast_datum_packed.
Per report from exe-dealer@yandex.ru. Patch by me, but thanks to
David Johnston for initial investigation. Back-patch to v14 where
jsonb subscripting was introduced.
Discussion: https://postgr.es/m/205321670615953@mail.yandex.ru
Clang 16 is still in development, but seawasp reveals that it has
started warning about many of our casts of function pointers (those
introduced by commit 1c27d16e, and some older ones). Disable the new
warning for now, since otherwise buildfarm animal seawasp fails, and we
have no current plans to change our strategy for these callback function
types.
May be back-patched with other Clang/LLVM 16 changes around release
time.
Discussion: https://postgr.es/m/CA%2BhUKGJvX%2BL3aMN84ksT-cGy08VHErRNip3nV-WmTx7f6Pqhyw%40mail.gmail.com
Add some cross-links between chapter "20. Server Parameters" and
"31. Logical Replication" regarding the available configuration
parameters, for easier navigation; and some more explanatory text too.
I (Álvaro) chose to duplicate max_replication_slots in Chapter 20,
because it has completely different meanings at each side of the
replication link.
Author: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: vignesh C <vignesh21@gmail.com>
Reviewed-by: samay sharma <smilingsamay@gmail.com>
Discussion: https://postgr.es/m/CAHut+PsESqpy7w3Y6cX98c255ZuCjvipkhKjy6hZBjOv4E6iJA@mail.gmail.com
If sendFileWithContent were used to send a file larger than the
bbsink buffer size, this would result in corruption. The only
files that are sent via sendFileWithContent are the backup label
file, the tablespace map file, and .done files for WAL segments
included in the backup. Of these, it seems that only the
tablespace_map file can become large enough to cause a problem,
and then only if you have a lot of tablespaces. If you do have
that situation, you might end up with a corrupted
tablespace_map file, which would be bad.
My commit bef47ff85d introduced
this problem.
Report and patch by Antonin Houska.
Discussion: http://postgr.es/m/15764.1670528645@antos
During ALTER TABLE execution, when prep-time handling of subcommands
of certain types determines that execution-time handling requires
recursion, it signals this by changing the subcommand type to a
special value.
This can be done in a simpler way by using a separate flag introduced by
commit ec0925c22a, so do that.
Catversion bumped. It's not clear to me that ALTER TABLE subcommands
are stored anywhere in catalogs (CREATE FUNCTION rejects it in BEGIN
ATOMIC function bodies), but we do have both write and read support for
them, so be safe.
Discussion: https://postgr.es/m/20220929090033.zxuaezcdwh2fgfjb@alvherre.pgsql
3d14e17 added support for this query, but psql was not able to
tab-complete it.  Spotted while working on a different patch in the
same area.
Reviewed-by: Robert Haas
Discussion: https://postgr.es/m/Y3hw7yvG0VwpC1jq@paquier.xyz
This is straightforward as far as it goes. However, it does not
attempt to trap errors occurring during the execution of domain
CHECK constraints. Since those are general user-defined
expressions, the only way to do that would involve starting up a
subtransaction for each check. Of course the entire point of
the soft-errors feature is to not need subtransactions, so that
would be self-defeating. For now, we'll rely on the assumption
that domain checks are written to avoid throwing errors.
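A sketch of the resulting behavior (the domain is invented for the
example):

    CREATE DOMAIN posint AS int CHECK (VALUE > 0);
    SELECT pg_input_is_valid('-5', 'posint');
    -- false: the CHECK violation itself is reported softly
    -- (an error raised from inside the CHECK expression, by
    -- contrast, would still be thrown the hard way)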
Discussion: https://postgr.es/m/1181028.1670635727@sss.pgh.pa.us
This requires a bit of further infrastructure-extension to allow
trapping errors reported by numeric_in and pg_unicode_to_server,
but otherwise it's pretty straightforward.
In the case of jsonb_in, we are only capturing errors reported
during the initial "parse" phase. The value-construction phase
(JsonbValueToJsonb) can also throw errors if assorted implementation
limits are exceeded. We should improve that, but it seems like a
separable project.
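So, for example (a sketch):

    SELECT pg_input_is_valid('{"a": [1, 2]}', 'jsonb');  -- true
    SELECT pg_input_is_valid('{"a": [1, 2}', 'jsonb');
    -- false: parse-phase error, caught softly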
Andrew Dunstan and Tom Lane
Discussion: https://postgr.es/m/3bac9841-fe07-713d-fa42-606c225567d6@dunslane.net
Formerly, semantic action functions for the JSON parser returned void,
so that there was no way for them to affect the parser's behavior.
That meant in particular that they couldn't force an error exit
except by longjmp'ing.  That won't do in the context of our project
to make input
functions return errors softly. Hence, change them to return the same
JsonParseErrorType enum value as the parser itself uses. If an action
function returns anything besides JSON_SUCCESS, the parse is abandoned
and that error code is returned.
Action functions can thus easily return the same error conditions that
the parser already knows about. As an escape hatch for expansion, also
invent a code JSON_SEM_ACTION_FAILED that the core parser does not know
the exact meaning of. When returning this code, an action function
must use some out-of-band mechanism for reporting the error details.
This commit simply makes the API change and causes all the existing
action functions to return JSON_SUCCESS, so that there is no actual
change in behavior here. This is long enough and boring enough that
it seemed best to commit it separately from the changes that make
real use of the new mechanism.
In passing, remove a duplicate assignment of
transform_string_values_scalar.
Discussion: https://postgr.es/m/1436686.1670701118@sss.pgh.pa.us
We chose a specific wording of the not-implemented errors for
pseudotype I/O functions and other cases where there's little
value in implementing input and/or output. gtsvectorin never
got that memo though, nor did most of contrib. Make these all
fall in line, mostly because I'm a neatnik but also to remove
unnecessary translatable strings.
gbtreekey_in needs a bit of extra love since it supports
multiple SQL types. Sadly, gbtreekey_out doesn't have the
ability to do that, but I think it's unreachable anyway.
Noted while surveying datatype input functions to see what we
have left to fix.
Apparently anymultirange_in was written before we converted all
these pseudotype input functions to use a common macro, and it didn't
get fixed before committing. Sloppy merging probably explains its
unintuitive ordering, too, so rearrange.
Noted while surveying datatype input functions to see what we
have left to fix. I'm inclined to leave the pseudotypes as
throwing hard errors, because it's difficult to see a reason why
anyone would need something else. But in any case, if we want
to change that, we shouldn't have to change multiple copies of
the code.
9d9c02ccd added code to allow WindowAgg to take some shortcuts when a
monotonic WindowFunc reached some value that it could never come back
from due to the function's monotonic nature. That commit added a
runCondition field to WindowClause to store the condition; once that
condition becomes false, nodeWindowAgg.c can start taking those
shortcuts.
Here we fix an issue where subquery pullups didn't update the Vars in
the runCondition to reference the new query level.
Here we also add a missing call to preprocess_expression() for the
WindowClause's runCondition.  The WindowFuncs in the targetlist will
have had this process done, so we must also do it for the WindowFuncs
in the runCondition so that they can be correctly found in the
targetlist during setrefs.c processing.
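For reference, the shortcut applies to queries of this shape (names
invented), and it's the rn Var inside the runCondition that the
pullup must now adjust:

    SELECT * FROM (SELECT x,
                          row_number() OVER (ORDER BY x) AS rn
                   FROM tab) AS s
    WHERE rn <= 10;
    -- row_number() is monotonically increasing, so WindowAgg can
    -- stop evaluating once rn <= 10 becomes false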
Bug: #17709
Reported-by: Alexey Makhmutov
Author: Richard Guo, David Rowley
Discussion: https://postgr.es/m/17709-4f557160e3e8ee9a@postgresql.org
Backpatch-through: 15, where 9d9c02ccd was introduced
Buildfarm member wrasse has been complaining about empty declarations
as an effect of 8018ffb and 83a1a1b due to extra semicolons.
While on it, also remove the trailing backslash from these macro
definitions, which caused them to eat more lines than necessary, per
comment from Tom Lane.
Reported-by: Tom Lane, and buildfarm member wrasse
Author: Nathan Bossart, Michael Paquier
Discussion: https://postgr.es/m/1188769.1670640236@sss.pgh.pa.us
Replace the error trapping scheme introduced in 5bc450629 with our
shiny new errsave/ereturn mechanism. This doesn't have any real
functional impact (although I think that the new coding is able
to report a few more errors softly than v15 did). And I doubt
there's any measurable performance difference either. But this
gets rid of an ad-hoc, one-of-a-kind design in favor of a mechanism
that will be widely used going forward, so it should be a net win
for code readability.
Discussion: https://postgr.es/m/3bbbb0df-7382-bf87-9737-340ba096e034@postgrespro.ru
This patch converts the input functions for date, time, timetz,
timestamp, timestamptz, and interval to the new soft-error style.
There's some related stuff in formatting.c that remains to be
cleaned up, but that seems like a separable project.
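For example (a sketch):

    SELECT pg_input_is_valid('2022-12-32', 'date');  -- false, softly
    SELECT pg_input_is_valid('25:00:00', 'time');    -- false, softly
    SELECT pg_input_is_valid('1 hour', 'interval');  -- true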
Discussion: https://postgr.es/m/3bbbb0df-7382-bf87-9737-340ba096e034@postgrespro.ru
Pay down some ancient technical debt (dating to commit 022fd9966):
fix a couple of places in datetime parsing that were throwing
ereport's immediately instead of returning a DTERR code that could be
interpreted by DateTimeParseError. The reason for that was that there
was no mechanism for passing any auxiliary data (such as a zone name)
to DateTimeParseError, and these errors seemed to really need it.
Up to now it didn't matter that much just where the error got thrown,
but now we'd like to have a hard policy that datetime parse errors
get thrown from just the one place.
Hence, invent a "DateTimeErrorExtra" struct that can be used to
carry any extra values needed for specific DTERR codes. Perhaps
in the future somebody will be motivated to use this to improve
the specificity of other DateTimeParseError messages, but for now
just deal with the timezone-error cases.
This is on the way to making the datetime input functions report
parse errors softly; but it's really an independent change, so
commit separately.
Discussion: https://postgr.es/m/3bbbb0df-7382-bf87-9737-340ba096e034@postgrespro.ru
Previously, the host operating system and whether the build was 32-
or 64-bit were not included, and the build machine's CPU was used,
which is potentially wrong for cross builds.
Author: Juan José Santamaría Flecha <juanjo.santamaria@gmail.com>
Author: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CAC+AXB16gwYhKCdS+t0pk3U7kKtpVj5L-ynmhK3Gbea330At3w@mail.gmail.com