verify_heapam() wasn't being careful to sanity-check tuple line
pointers before using them, resulting in SIGBUS on alignment-picky
architectures. Fix that, add some more test coverage.
Mark Dilger, some tweaking by me
Discussion: https://postgr.es/m/30B8E99A-2D9C-48D4-A55C-741C9D5F1563@enterprisedb.com
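A rough sketch of the kind of validation now needed before dereferencing
a line pointer (names abbreviated and hypothetical; not the committed
code, which lives in verify_heapam.c):

    /* Sketch only: reject bogus line pointers instead of dereferencing them */
    ItemId      itemid = PageGetItemId(page, offnum);

    if (ItemIdIsUsed(itemid) && ItemIdHasStorage(itemid))
    {
        unsigned    off = ItemIdGetOffset(itemid);
        unsigned    len = ItemIdGetLength(itemid);

        /* reject offsets that are out of range or not MAXALIGN'd */
        if (off != MAXALIGN(off) || off + len > BLCKSZ)
            /* report corruption and skip the tuple, don't SIGBUS */ ;
        else
            tuple = (HeapTupleHeader) PageGetItem(page, itemid);
    }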
Instead of immediately PQfinish'ing a dead connection, save it aside
so that we can still extract its parameters for \connect attempts.
(This works because PQconninfo doesn't care whether the PGconn is in
CONNECTION_BAD state.) This allows developers to reconnect with
just \c after a database crash and restart.
It's tempting to use the same approach instead of closing the old
connection after a failed non-interactive \connect command. However,
that would not be very safe: consider a script containing
    \c db1 user1 live_server
    \c db2 user2 dead_server
    \c db3
The script would be expecting to connect to db3 at dead_server, but
if we re-use parameters from the first connection then it might
successfully connect to db3 at live_server. This'd defeat the goal
of not letting a script accidentally execute commands against the
wrong database.
Discussion: https://postgr.es/m/38464.1603394584@sss.pgh.pa.us
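A simplified sketch of how \connect can harvest parameters from the
saved-aside dead connection (the key point being that PQconninfo() works
on a PGconn in CONNECTION_BAD state):

    /* Sketch: extract parameters from a dead, saved-aside connection */
    PQconninfoOption *cinfo = PQconninfo(o_conn);   /* o_conn may be dead */

    if (cinfo)
    {
        PQconninfoOption *ci;

        for (ci = cinfo; ci->keyword != NULL; ci++)
        {
            /* copy ci->keyword / ci->val into the new \connect attempt */
        }
        PQconninfoFree(cinfo);
    }
    PQfinish(o_conn);           /* only now let go of the dead PGconn */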
The ExplainCloseGroup arguments for incremental sort usage data
didn't match the corresponding ExplainOpenGroup. This only matters
for XML-format output, which is probably why we'd not noticed.
Daniel Gustafsson, per bug #16683 from Frits Jalvingh
Discussion: https://postgr.es/m/16683-8005033324ad34e9@postgresql.org
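The invariant is simply that each ExplainOpenGroup call must be paired
with an ExplainCloseGroup carrying identical arguments, e.g. (labels
illustrative, not the exact committed hunk):

    ExplainOpenGroup("Incremental Sort Groups", groupLabel, true, es);
    /* ... emit the sort-groups usage data ... */
    ExplainCloseGroup("Incremental Sort Groups", groupLabel, true, es);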
The order of AuthenticationGSSContinue and AuthenticationSSPI was
swapped, based on the other Authentication* protocol messages being
listed in subcode order.
This replaces the existing binary search with two perfect hash functions
for the composition and the decomposition in the backend code, at the
cost of slightly larger binaries there (35kB in libpgcommon_srv.a). Per
the measurements done, this improves the speed of the recomposition and
decomposition by up to 30~40 times for the NFC and NFKC conversions,
while all other operations get at least 40% faster. This is not as
"good" as what libicu has, but it closes the gap a lot as per the
feedback from Daniel Verite.
The decomposition table remains the same, getting used for the binary
search in the frontend code, where we care more about the size of
libraries like libpq than about performance, as this gets involved only
in code paths related to SCRAM authentication. In consequence, note that
the perfect hash function for the recomposition needs to use a new
inverse lookup array back to the existing decomposition table.
The size of all frontend deliverables remains unchanged, even with
--enable-debug, including libpq.
Author: John Naylor
Reviewed-by: Michael Paquier, Tom Lane
Discussion: https://postgr.es/m/CAFBsxsHUuMFCt6-pU+oG-F1==CmEp8wR+O+bRouXWu6i8kXuqA@mail.gmail.com
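The lookup shape, roughly (a sketch with invented names, not the actual
generated header contents): the perfect hash yields one candidate slot,
and a single comparison against the stored key confirms or rejects it.

    /* Sketch: perfect-hash probe for a composed character (names invented) */
    static inline uint32
    recompose_code(uint32 start, uint32 code)
    {
        uint64      key = ((uint64) start << 32) | code;
        int         h = recomp_hash_func(&key);    /* from PerfectHash.pm */

        /* one probe suffices: verify, then map back to the decomp table */
        if (h >= 0 && h < RECOMP_TABLE_SIZE && recomp_keys[h] == key)
            return recomp_inverse[h];              /* new inverse array */
        return 0;                                  /* pair not composable */
    }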
There's no functional change at all here, but I'm curious to see
whether this change successfully shuts up Coverity's warning about
a useless strcmp(), which appeared with the previous update.
Discussion: http://mm.icann.org/pipermail/tz/2020-October/029370.html
ECPG's PREPARE ... FROM and EXECUTE IMMEDIATE can optionally take
the target query as a simple literal, rather than the more usual
string-variable reference. This was previously documented as
being a C string literal, but that's a lie in one critical respect:
you can't write a data double quote as \" in such literals. That's
because the lexer is in SQL mode at this point, so it'll parse
double-quoted strings as SQL identifiers, within which backslash
is not special, so \" ends the literal.
I looked into making this work as documented, but getting the lexer
to switch behaviors at just the right point is somewhere between
very difficult and impossible. It's not really worth the trouble,
because these cases are next to useless: if you have a fixed SQL
statement to execute or prepare, you might as well write it as
a direct EXEC SQL, saving the messiness of converting it into
a string literal and gaining the opportunity for compile-time
SQL syntax checking.
Instead, let's just document (and test) the workaround of writing
a double quote as an octal escape (\042) in such cases.
There's no code behavioral change here, so in principle this could
be back-patched, but it's such a niche case I doubt it's worth
the trouble.
Per report from 1250kv.
Discussion: https://postgr.es/m/673825.1603223178@sss.pgh.pa.us
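For instance, the documented workaround looks roughly like this:

    /* \" would end the literal here (the lexer is in SQL mode), so a
     * data double quote must be written as the octal escape \042: */
    EXEC SQL PREPARE st FROM "SELECT * FROM \042My Table\042";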
If you write the literal 'abc''def' in an EXEC SQL command, that will
come out the other end as 'abc'def', triggering a syntax error in the
backend. Likewise, "abc""def" is reduced to "abc"def" which is wrong
syntax for a quoted identifier.
The cause is that the lexer thinks it should emit just one quote
mark, whereas what it really should do is keep the string as-is.
Add some docs and test cases, too.
Although this seems clearly a bug, I fear users wouldn't appreciate
changing it in minor releases. Some may well be working around it
by applying an extra doubling of affected quotes, as for example
sql/dyntest.pgc has been doing.
Per investigation of a report from 1250kv, although this isn't
exactly what he/she was on about.
Discussion: https://postgr.es/m/673825.1603223178@sss.pgh.pa.us
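With the fix, doubled quotes now pass through to the server as written,
e.g. (illustrative, not the committed test):

    EXEC SQL INSERT INTO tab VALUES ('abc''def');   /* server sees 'abc''def' */
    EXEC SQL SELECT * FROM "tab""le";               /* server sees "tab""le" */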
The check for whether to complain about not having an old connection
to get parameters from was seriously out of date: it had not been
rethought when we invented connstrings, nor when we invented the
-reuse-previous option. Replace it with a check that throws an
error if reuse-previous is active and we lack an old connection to
reuse. While that doesn't move the goalposts very far in terms of
easing reconnection after a server crash, at least it's consistent.
If the user specifies a connstring plus additional parameters
(which is invalid per the documentation), the extra parameters were
silently ignored. That seems like it could be really confusing,
so let's throw a syntax error instead.
Teach the connstring code path to re-use the old connection's password
in the same cases as the old-style-syntax code path would, i.e., if we
are reusing parameters and the values of username, host/hostaddr, and
port are not being changed. Document this behavior, too, since it was
unmentioned before. Also simplify the implementation a bit, giving
rise to two new and useful properties: if there's a "password=xxx" in
the connstring, we'll use it rather than ignore it, and by default (i.e.,
except with --no-password) we will prompt for a password if the
re-used password or connstring password doesn't work. The previous
code just failed if the re-used password didn't work.
Given the paucity of field complaints about these issues, I don't
think that they rise to the level of back-patchable bug fixes,
and in any case they might represent undesirable behavior changes
in minor releases. So no back-patch.
Discussion: https://postgr.es/m/235210.1603321144@sss.pgh.pa.us
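As an example of the new syntax error, a script line like
    \c "host=server1 dbname=db1" otheruser
(a connstring plus a separate parameter) is now rejected, instead of
silently ignoring otheruser.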
There are a handful of places where we called list_delete_ptr() to remove
some element from a List. In many of these places we already know, or can
know with very little additional effort, the index of the ListCell that we
need to remove.
Here we change all of those places to instead use one of
list_delete_nth_cell(), foreach_delete_current(), or list_delete_last().
Each of these saves having to iterate over the list to search for the
element to remove by its pointer value.
There are some small performance gains to be had by doing this, but in the
general case, none of these lists are likely to be very large, so the
lookup was probably never that expensive anyway. However, some of the
calls are in fairly hot code paths, e.g. process_equivalence(). So any
small gains there are useful.
Author: Zhijie Hou and David Rowley
Discussion: https://postgr.es/m/b3517353ec7c4f87aa560678fbb1034b@G08CNEXMBPEKD05.g08.fujitsu.local
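For illustration (a sketch, not one of the changed call sites), deleting
during iteration now avoids the pointer search entirely:

    /* Sketch: remove the current cell without re-scanning the list */
    ListCell   *lc;

    foreach(lc, mylist)
    {
        Node       *item = lfirst(lc);

        if (should_remove(item))        /* hypothetical predicate */
            mylist = foreach_delete_current(mylist, lc);
    }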
psql's \connect claims to be able to re-use previous connection
parameters, but in fact it only re-uses the database name, user name,
host name (and possibly hostaddr, depending on version), and port.
This is problematic for assorted use cases. Notably, pg_dump[all]
emits "\connect databasename" commands which we would like to have
re-use all other parameters. If such a script is loaded in a psql run
that initially had "-d connstring" with some non-default parameters,
those other parameters would be lost, potentially causing connection
failure. (Thus, this is the same kind of bug addressed in commits
a45bc8a4f and 8e5793ab6, although the details are much different.)
To fix, redesign do_connect() so that it pulls out all properties
of the old PGconn using PQconninfo(), and then replaces individual
properties in that array. In the case where we don't wish to re-use
anything, get libpq's default settings using PQconndefaults() and
replace entries in that, so that we don't need different code paths
for the two cases.
This does result in an additional behavioral change for cases where
the original connection parameters allowed multiple hosts, say
"psql -h host1,host2", and the \connect request allows re-use of the
host setting. Because the previous coding relied on PQhost(), it
would only permit reconnection to the same host originally selected.
Although one can think of scenarios where that's a good thing, there
are others where it is not. Moreover, that behavior doesn't seem to
meet the principle of least surprise, nor was it documented; nor is
it even clear it was intended, since that coding long pre-dates the
addition of multi-host support to libpq. Hence, this patch is content
to drop it and re-use the host list as given.
Per Peter Eisentraut's comments on bug #16604. Back-patch to all
supported branches.
Discussion: https://postgr.es/m/16604-933f4b8791227b15@postgresql.org
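In outline, the new do_connect() logic looks like this (heavily
simplified sketch; replace_option() is a hypothetical helper):

    /* Sketch: one array, two possible sources, then targeted replacements */
    PQconninfoOption *cinfo;

    cinfo = reuse_previous ? PQconninfo(o_conn) : PQconndefaults();

    if (dbname)
        replace_option(cinfo, "dbname", dbname);    /* hypothetical helper */
    if (user)
        replace_option(cinfo, "user", user);
    if (host)
        replace_option(cinfo, "host", host);
    if (port)
        replace_option(cinfo, "port", port);
    /* ... then build keywords/values arrays and call PQconnectdbParams() */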
Theoretically one could go into src/test/thread and build/run this
program there. In practice, that hasn't worked since 96bf88d52,
and probably much longer on some platforms (likely including just
the sort of hoary leftovers where this test might be of interest).
While it wouldn't be too hard to repair the breakage, the fact that
nobody has noticed for two years shows that there is zero usefulness
in maintaining this build pathway. Let's get rid of it and decree
that thread_test.c is *only* meant to be built/used in configure.
Given that decision, it makes sense to put thread_test.c under config/
and get rid of src/test/thread altogether, so that's what I did.
In passing, update src/test/README, which had been ignored by some
not-so-recent additions of subdirectories.
Discussion: https://postgr.es/m/227659.1603041612@sss.pgh.pa.us
After de8feb1f3a, some warnings remained
that were only visible when using GCC on Windows. Fix those as well.
Note that the ecpg test source files don't use the full pg_config.h,
so we can't use pg_funcptr_t there but have to do it the long way.
80f8eb7 has added to the normalization quick check headers some code
generated by PerfectHash.pm that is incompatible with the settings of
gitattributes for this repository, as whitespace followed a set of tabs
for the first element of a line in the table. Instead of adding a new
exception to gitattributes, rework the generated format so that right
padding with spaces is used instead of left padding. This keeps the
generated table in a readable shape with its set of columns, and makes an
update of gitattributes unnecessary.
Reported-by: Peter Eisentraut
Author: John Naylor
Discussion: https://postgr.es/m/d601b3b5-a3c7-5457-2f84-3d6513d690fc@2ndquadrant.com
More precisely, correctly handle the ONLY flag indicating not to
recurse. This was implemented in 86f575948c by recursing in
trigger.c, but that's the wrong place; use ATSimpleRecursion instead,
which behaves properly. However, because legacy inheritance has never
recursed in that situation, make sure to do that only for new-style
partitioning.
I noticed this problem while testing a fix for another bug in the
vicinity.
This has been wrong all along, so backpatch to 11.
Discussion: https://postgr.es/m/20201016235925.GA29829@alvherre.pgsql
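A sketch of the shape of the fix (simplified; the exact switch arms in
tablecmds.c differ):

    /* Sketch: recurse at prep time via ATSimpleRecursion, which honors
     * ONLY via "recurse"; do so only for new-style partitioning, since
     * legacy inheritance has never recursed here */
    case AT_EnableTrig:     /* ... and the related ENABLE/DISABLE cases */
        if (rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)
            ATSimpleRecursion(wqueue, rel, cmd, recurse, lockmode, context);
        pass = AT_PASS_MISC;
        break;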
Change the attribute 'name' to 'slot_name' in the pg_stat_replication_slots
view to make it clearer and consistent with other places, like the
pg_stat_wal_receiver view, where we display the same attribute.
In passing, fix a typo in one of the macros in the related code.
Bump the catversion, as we have modified the name in the catalog as well.
Reported-by: Noriyoshi Shinoda
Author: Noriyoshi Shinoda
Reviewed-by: Sawada Masahiko and Amit Kapila
Discussion: https://postgr.es/m/CA+fd4k5_pPAYRTDrO2PbtTOe0eHQpBvuqmCr8ic39uTNmR49Eg@mail.gmail.com
When told to process all databases, clusterdb, reindexdb, and vacuumdb
would reconnect by replacing their --maintenance-db parameter with the
name of the target database. If that parameter is a connstring (which
has been allowed for a long time, though we failed to document that
before this patch), we'd lose any other options it might specify, for
example SSL or GSS parameters, possibly resulting in failure to connect.
Thus, this is the same bug as commit a45bc8a4f fixed in pg_dump and
pg_restore. We can fix it in the same way, by using libpq's rules for
handling multiple "dbname" parameters to add the target database name
separately. I chose to apply the same refactoring approach as in that
patch, with a struct to handle the command line parameters that need to
be passed through to connectDatabase. (Maybe someday we can unify the
very similar functions here and in pg_dump/pg_restore.)
Per Peter Eisentraut's comments on bug #16604. Back-patch to all
supported branches.
Discussion: https://postgr.es/m/16604-933f4b8791227b15@postgresql.org
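The libpq rule being relied on: with expand_dbname set, the first dbname
entry may be a whole connstring, and a later plain dbname entry overrides
just the database name. A minimal sketch:

    /* Sketch: merge a connstring with an overriding database name */
    const char *keywords[] = {"dbname", "dbname", NULL};
    const char *values[]   = {maintenance_db, target_dbname, NULL};
    PGconn     *conn;

    /* first dbname expands as a connstring; second overrides the db name */
    conn = PQconnectdbParams(keywords, values, /* expand_dbname */ 1);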
Since commit 913bbd88d, check_sql_fn_retval() can either insert type
coercion steps in-line in the Query that produces the SQL function's
results, or generate a new top-level Query to perform the coercions,
if modifying the Query's output in-place wouldn't be safe. However,
it appears that the latter case has never actually worked, because
the code tried to inject the new Query back into the query list it was
passed ... which is not the list that will be used for later processing
when we execute the SQL function "normally" (without inlining it).
So we ended up with no coercion happening at run-time, leading to
wrong results or crashes depending on the datatypes involved.
While the regression tests look like they cover this area well enough,
by sheer bad luck all the test cases that exercise the
separate-Query path were checking either inline-able cases (which
accidentally didn't have the bug) or cases that are no-ops at runtime
(e.g., varchar to text), so that the failure to perform the coercion
wasn't obvious. The fact that the cases that don't work weren't
allowed at all before v13 probably contributed to not noticing the
problem sooner, too.
To fix, get rid of the separate "flat" list of Query nodes and instead
pass the real two-level list that is going to be used later. I chose
to make the same change in check_sql_fn_statements(), although that has
no actual bug, just so that we don't need that data structure at all.
This is an API change, as evidenced by the adjustments needed to
callers outside functions.c. That's a bit scary to be doing in a
released branch, but so far as I can tell from a quick search,
there are no outside callers of these functions (and they are
sufficiently specific to our semantics for SQL-language functions that
it's not apparent why any extension would need to call them). In any
case, v13 already changed the API of check_sql_fn_retval() compared to
prior branches.
Per report from pinker. Back-patch to v13 where this code came in.
Discussion: https://postgr.es/m/1603050466566-0.post@n3.nabble.com
- Misc grammar and punctuation fixes.
- Stylistic cleanup: use spaces between function arguments and JSON fields
in examples. For example "foo(a,b)" -> "foo(a, b)". Add a semicolon after
the last END in a few PL/pgSQL examples that were missing one.
- Make the sentence that talked about the "..." and ".." operators clearer
by not ending the sentence with "..", which made it look the same as
"...".
- Fix syntax description for HAVING: HAVING conditions cannot be repeated
Patch by Justin Pryzby, per Yaroslav Schekin's report. Backpatch to all
supported versions, to the extent that the patch applies easily.
Discussion: https://www.postgresql.org/message-id/20201005191922.GE17626%40telsasoft.com
Store the tuple conversion map to convert a tuple from a child table's
format to the root format in a new ri_ChildToRootMap field in
ResultRelInfo. It is initialized if transition tuple capture for FOR
STATEMENT triggers or INSERT tuple routing on a partitioned table is
needed. Previously, ModifyTable kept the maps in the per-subplan
ModifyTableState->mt_per_subplan_tupconv_maps array, or when tuple
routing was used, in
ResultRelInfo->ri_PartitionInfo->pi_PartitionToRootMap. The new field
replaces both of those.
Now that the child-to-root tuple conversion map is always available in
ResultRelInfo (when needed), remove the TransitionCaptureState.tcs_map
field. The callers of Exec*Trigger() functions no longer need to set or
save it, which is much less confusing and bug-prone. Also, as a future
optimization, this will allow us to delay creating the map for a given
result relation until the relation is actually processed during
execution.
Author: Amit Langote
Discussion: https://www.postgresql.org/message-id/CA%2BHiwqHtCWLdK-LO%3DNEsvOdHx%2B7yv4mE_zYK0i3BH7dXb-wxog%40mail.gmail.com
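With the new field, usage in the trigger code reduces to a simple
pattern, roughly (sketch):

    /* Sketch: convert a child tuple to the root table's format, if needed
     * (the map is NULL when no conversion is required) */
    TupleConversionMap *map = relinfo->ri_ChildToRootMap;

    if (map != NULL)
        slot = execute_attr_map_slot(map->attrMap, slot, rootslot);
    /* slot is now in root format for transition-table capture */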
When executing DDL on a partitioned table or on a table with inheritance
children, statement-level triggers must be fired against the table given
in the original statement. The code to look that up was a bit messy and
duplicative. Commit 501ed02cf6 added a helper function,
getASTriggerResultRelInfo() (later renamed to getTargetResultRelInfo())
for it, but for some reason it was only used when firing AFTER STATEMENT
triggers and the code to fire BEFORE STATEMENT triggers duplicated the
logic.
Determine the target relation in ExecInitModifyTable(), and set it always
in ModifyTableState. Code that used to call getTargetResultRelInfo() can
now use ModifyTableState->rootResultRelInfo directly.
Discussion: https://www.postgresql.org/message-id/CA%2BHiwqFViT47Zbr_ASBejiK7iDG8%3DQ1swQ-tjM6caRPQ67pT%3Dw%40mail.gmail.com
In shm_mq_receive(), a huge payload could trigger an unjustified
"invalid memory alloc request size" error due to the way the buffer
size is increased.
Add error checks (documenting the upper limit) and avoid the error by
limiting the allocation size to MaxAllocSize.
Author: Markus Wanner <markus.wanner@2ndquadrant.com>
Discussion: https://www.postgresql.org/message-id/flat/3bb363e7-ac04-0ac4-9fe8-db1148755bfa%402ndquadrant.com
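The growth loop now clamps the doubling at MaxAllocSize, roughly like
this (sketch of the idea, not the exact committed hunk):

    /* Sketch: geometric growth, capped at what palloc can satisfy */
    Size        newbuflen = mqh->mqh_buflen;

    while (newbuflen < nbytes)
        newbuflen *= 2;
    /* a message larger than MaxAllocSize is rejected with a proper error
     * instead, so the cap cannot lose data */
    newbuflen = Min(newbuflen, MaxAllocSize);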
The rules for choosing the number of parallel workers to perform a
parallel vacuum operation were not clearly specified.
Reported-by: Peter Eisentraut
Author: Amit Kapila
Backpatch-through: 13, where it was introduced
Discussion: https://postgr.es/m/36aa8aea-61b7-eb3c-263b-648e0cb117b7@2ndquadrant.com
When allocating an EVP context, it would have been possible to leak some
memory allocated directly by OpenSSL, which PostgreSQL lost track of if
the initialization of the allocated context failed. The cleanup can be
done with EVP_MD_CTX_destroy().
Note that EVP APIs exist since OpenSSL 0.9.7 and we have in the tree
equivalent implementations for older versions since ce9b75d (code
removed with 9b7cd59a as of 10~). However, in 9.5 and 9.6, the existing
code makes use of EVP_MD_CTX_destroy() and EVP_MD_CTX_create() without
an equivalent implementation when building the tree with OpenSSL 0.9.6
or older, meaning that this code is in reality broken with such versions
since it got introduced in e2838c5. As we have heard no complaints about
that, it does not seem worth bothering with in 9.5 and 9.6, so I have
left that out for simplicity.
Author: Michael Paquier
Discussion: https://postgr.es/m/20201015072212.GC2305@paquier.xyz
Backpatch-through: 9.5
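The fixed pattern, roughly (sketch):

    /* Sketch: on init failure, free what OpenSSL itself allocated */
    EVP_MD_CTX *ctx = EVP_MD_CTX_create();

    if (ctx == NULL)
        return -1;
    if (EVP_DigestInit_ex(ctx, md, NULL) == 0)
    {
        EVP_MD_CTX_destroy(ctx);        /* previously leaked */
        return -1;
    }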
Given a query with enough joins, it was possible that, after the query
planner multiplied the row estimates by the join selectivity, the
estimated number of rows would exceed the limits of the double data type
and become infinite.
To give an indication of how extreme a case is required to hit this, the
particular example case reported required 379 joins to a table without any
statistics, which resulted in 1.0/DEFAULT_NUM_DISTINCT being used for
the join selectivity. This eventually caused the row estimates to go
infinite and resulted in an assert failure in initial_cost_mergejoin()
where the infinite row estimate was multiplied by an outerstartsel of 0.0,
resulting in NaN. The failing assert verified that NaN <= Inf, which is
false.
To get around this we use clamp_row_est() to cap row estimates at a
maximum of 1e100. This value is thought to be low enough that costs
derived from it would remain within the bounds of what the double type can
represent.
Aside from fixing the failing Assert, this also has the added benefit of
making it so that add_path() still receives proper numerical values as
costs, which allows it to make saner choices when determining the
cheaper path in extreme cases such as the one described above.
Additionally, we also get rid of the isnan() checks in the join costing
functions. The actual case which originally triggered those checks to be
added in the first place never made it to the mailing lists. It seems
likely that the new code being added to clamp_row_est() will make those
checks redundant, so just remove them.
The fairly harmless assert failure problem also exists in the back
branches; however, a more minimalistic fix will be applied there.
Reported-by: Onder Kalaci
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/DM6PR21MB1211FF360183BCA901B27F04D80B0@DM6PR21MB1211.namprd21.prod.outlook.com
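The clamp now looks approximately like this (the MAXIMUM_ROWCOUNT part is
what this commit adds):

    #define MAXIMUM_ROWCOUNT 1e100

    double
    clamp_row_est(double nrows)
    {
        /* keep row estimates finite and sane for cost arithmetic */
        if (nrows > MAXIMUM_ROWCOUNT)
            nrows = MAXIMUM_ROWCOUNT;
        else if (nrows <= 1.0)
            nrows = 1.0;
        else
            nrows = rint(nrows);
        return nrows;
    }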
According to Microsoft's documentation, 2.2 has been the current
version since Windows 98 or so. Moreover, that's what the Postgres
backend has been requesting since 2004 (cf commit 4cdf51e64).
So there seems no reason for libpq to keep asking for 1.1.
Bring thread_test along, too, so that we're uniformly asking for 2.2
in all our WSAStartup calls.
It's not clear whether there's any point in back-patching this,
so for now I didn't.
Discussion: https://postgr.es/m/132799.1602960277@sss.pgh.pa.us
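The call itself is trivial; the only change is the version constant:

    WSADATA     wsaData;

    if (WSAStartup(MAKEWORD(2, 2), &wsaData) != 0)  /* was MAKEWORD(1, 1) */
        return -1;                      /* sketch; real code reports an error */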
Failure to write data to a large object during restore led to an ugly
and uninformative error message. To add insult to injury, it then
fatal'd out, where other SQL-level errors usually result in pressing on.
Report the underlying error condition, rather than just giving not-very-
useful byte counts, and use warn_or_exit_horribly() so as to adhere to
pg_restore's general policy about whether to continue or not.
Also recognize that lo_write() returns int not size_t.
Per report from Justin Pryzby, though I didn't use his patch.
Given the lack of comparable complaints, I'm not sure this is
worth back-patching.
Discussion: https://postgr.es/m/20201018010232.GF9241@telsasoft.com
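The improved error path, in sketch form (not the exact committed hunk):

    /* Sketch: lo_write() returns int; report the real error on failure */
    int         written = lo_write(AH->connection, loFd, buf, len);

    if (written != (int) len)
        warn_or_exit_horribly(AH, "could not write to large object: %s",
                              PQerrorMessage(AH->connection));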
The Windows documentation insists that every WSAStartup call should
have a matching WSACleanup call. However, if that ever had actual
relevance, it wasn't in this century. Every remotely-modern Windows
kernel is capable of cleaning up when a process exits without doing
that, and must be so to avoid resource leaks in case of a process
crash. Moreover, Postgres backends have done WSAStartup without
WSACleanup since commit 4cdf51e64 in 2004, and we've never seen any
indication of a problem with that.
libpq's habit of doing WSAStartup during connection start and
WSACleanup during shutdown is also rather inefficient, since a
series of non-overlapping connection requests leads to repeated,
quite expensive DLL unload/reload cycles. We document a workaround
for that (having the application call WSAStartup for itself), but
that's just a kluge. It's also worth noting that it's far from
uncommon for applications to exit without doing PQfinish, and
we've not heard reports of trouble from that either.
However, the real reason for acting on this is that recent
experiments by Alexander Lakhin suggest that calling WSACleanup
during PQfinish might be triggering the symptom we occasionally see
that a process using libpq fails to emit expected stdio output.
Therefore, let's change libpq so that it calls WSAStartup only
once per process, during the first connection attempt, and never
calls WSACleanup at all.
While at it, get rid of the only other WSACleanup call in our code
tree, in pg_dump/parallel.c; that presumably is equally useless.
If this proves to suppress the fairly-common ecpg test failures
we see on Windows, I'll back-patch, but for now let's just do it
in HEAD and see what happens.
Discussion: https://postgr.es/m/ac976d8c-03df-d6b8-025c-15a2de8d9af1@postgrespro.ru
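The new arrangement amounts to this (simplified sketch with a
hypothetical function name; the real code also worries about reporting
failures):

    /* Sketch: initialize Winsock at most once, and never call WSACleanup */
    static int
    pqWinsockStartup(void)              /* hypothetical name */
    {
        static bool wsastartup_done = false;
        WSADATA     wsaData;

        if (!wsastartup_done)
        {
            if (WSAStartup(MAKEWORD(2, 2), &wsaData) != 0)
                return -1;
            wsastartup_done = true;
        }
        return 0;
    }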
Section 8.5.1.4, which defines these literals, made only a vague
reference to the fact that they might be evaluated too soon to be
safe in non-interactive contexts. Provide a more explicit caution
against misuse. Also, generalize the wording in the related tip in
section 9.9.4: while it clearly described this problem, it implied
(or really, stated outright) that the problem only applies to table
DEFAULT clauses.
Per gripe from Tijs van Dam. Back-patch to all supported branches.
Discussion: https://postgr.es/m/c2LuRv9BiRT3bqIo5mMQiVraEXey_25B4vUn0kDqVqilwOEu_iVF1tbtvLnyQK7yDG3PFaz_GxLLPil2SDkj1MCObNRVaac-7j1dVdFERk8=@thalex.com
This changes zic's default output format from "-b fat" to "-b slim".
We were already using "slim" in v13/HEAD, so those branches drop
the explicit -b switch in the Makefiles. Instead, add an explicit
"-b fat" in v12 and before, so that we don't change the output file
format in those branches. (This is perhaps excessively conservative,
but we decided not to do so in a12079109, and I'll stick with that.)
Other non-cosmetic changes are to drop support for zic's long-obsolete
"-y" switch, and to ensure that strftime() does not change errno
unless it fails.
As usual with tzcode changes, back-patch to all supported branches.
In theory, the second px_find_digest call in px_crypt_md5 could fail
even though the first one succeeded, since resource allocation is
required. Don't skip testing for a failure. (If one did happen,
the likely result would be a crash rather than clean recovery from
an OOM failure.)
The code's been like this all along, so back-patch to all supported
branches.
Daniel Gustafsson
Discussion: https://postgr.es/m/AA8D6FE9-4AB2-41B4-98CB-AE64BA668C03@yesql.se
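The fix is simply to test the second call the same way as the first
(sketch; error handling abbreviated):

    /* Sketch: both lookups allocate resources, so both can fail */
    PX_MD      *ctx;
    PX_MD      *ctx1;

    if (px_find_digest("md5", &ctx) < 0)
        return NULL;
    if (px_find_digest("md5", &ctx1) < 0)   /* previously unchecked */
        return NULL;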
Commit a97e85f2b caused "exceed the available area" warnings in PDF
builds. Fine-tune colwidth values to avoid that.
Back-patch to 9.6, like the prior patch. (This is of dubious value
before v13, since we were far from free of such warnings in older
branches. But we might as well keep the SGML looking the same in all
branches.)
Per buildfarm.
Commit 32a9c0bdf introduced connection retry logic into postgres_fdw.
Previously it used a goto statement for the retry. This commit gets rid of
that goto to make the code simpler and easier to read.
When getting out of PG_CATCH() for the retry, the error state should be
cleaned up and the memory context should be reset. But commit 32a9c0bdf
forgot to do that. This commit also fixes this bug.
Previously only PQstatus()==CONNECTION_BAD was checked to detect
connection failure. But this could cause false detection in the case where
an error other than connection failure (e.g., out-of-memory) was thrown
after a broken connection was detected in libpq and CONNECTION_BAD was set.
To fix this issue, this commit changes the logic so that it also checks
that the error's sqlstate is ERRCODE_CONNECTION_FAILURE.
Author: Fujii Masao
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/2943611.1602375376@sss.pgh.pa.us
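In outline, the reworked logic looks like this (simplified sketch; names
approximate):

    /* Sketch: retry once, but only on a genuine connection failure */
    PG_TRY();
    {
        res = pgfdw_get_result(conn, query);
    }
    PG_CATCH();
    {
        ErrorData  *errdata;

        /* clean up the error state and memory, as the commit now does */
        MemoryContextSwitchTo(oldcontext);
        errdata = CopyErrorData();
        FlushErrorState();

        if (errdata->sqlerrcode != ERRCODE_CONNECTION_FAILURE)
            ReThrowError(errdata);      /* e.g. out-of-memory: don't retry */
        FreeErrorData(errdata);
        retry = true;
    }
    PG_END_TRY();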