When libpq encounters a connection-level error, for example running out
of memory while forming a result, there will be no error message
associated with the PGresult, but a message will be placed into the
PGconn's error buffer. postgres_fdw takes care to use the PGconn error
message when the PGresult does not have one, but dblink has been
negligent in that regard. Modify dblink to mirror what postgres_fdw
has been doing.
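A minimal sketch of the fallback pattern, using only documented libpq
calls (the helper name is illustrative, not dblink's actual code):

    #include <libpq-fe.h>

    /* Prefer the PGresult's message; fall back to the PGconn's buffer. */
    static const char *
    result_error_message(PGconn *conn, PGresult *res)
    {
        const char *msg = (res != NULL) ? PQresultErrorMessage(res) : NULL;

        /* A connection-level failure leaves the PGresult messageless. */
        if (msg == NULL || msg[0] == '\0')
            msg = PQerrorMessage(conn);
        return msg;
    }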
Back-patch to all supported branches.
Author: Joe Conway
Reviewed-By: Tom Lane
Discussion: https://postgr.es/m/02fa2d90-2efd-00bc-fefc-c23c00eb671e%40joeconway.com
When dblink uses a postgres_fdw server name for its connection, it
is possible for the connection to have options that are invalid
with dblink (e.g. "updatable"). The recommended way to avoid this
problem is to use dblink_fdw servers instead. However there are use
cases for using postgres_fdw, and possibly other FDWs, for dblink
connection options; therefore, protect against trying to use any
options that do not apply by using is_valid_dblink_option() when
building the connection string from the options.
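A hedged sketch of the filtering idea (not dblink's exact logic):
libpq's PQconndefaults() supplies the set of keywords libpq itself
accepts.

    #include <stdbool.h>
    #include <string.h>
    #include <libpq-fe.h>

    /* Return true if "keyword" is a connection option libpq understands. */
    static bool
    is_libpq_connection_option(const char *keyword)
    {
        PQconninfoOption *opts = PQconndefaults();
        bool        found = false;

        for (PQconninfoOption *o = opts; o != NULL && o->keyword != NULL; o++)
        {
            if (strcmp(o->keyword, keyword) == 0)
            {
                found = true;
                break;
            }
        }
        PQconninfoFree(opts);
        return found;
    }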
Back-patch to 9.3. Although 9.2 supports FDWs for connection info,
is_valid_dblink_option() did not yet exist, and neither did
postgres_fdw, at least in the postgres source tree. Given the lack
of previous complaints, fixing that seems too invasive/not worth it.
Author: Corey Huinker
Reviewed-By: Joe Conway
Discussion: https://postgr.es/m/CADkLM%3DfWyXVEyYcqbcRnxcHutkP45UHU9WD7XpdZaMfe7S%3DRwA%40mail.gmail.com
When dblink or postgres_fdw detects an error on the remote side of the
connection, it will try to construct a local error message as best it
can using libpq's PQresultErrorField(). When no primary message is
available, it was bailing out with an unhelpful "unknown error". Make
that message better and more compliant with the message style guide.
Per discussion on hackers.
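The extraction pattern in question, as a backend-side fragment ("res"
is a PGresult from the remote server; the fallback wording below is
illustrative, not necessarily the committed text):

    const char *primary = PQresultErrorField(res, PG_DIAG_MESSAGE_PRIMARY);
    const char *detail = PQresultErrorField(res, PG_DIAG_MESSAGE_DETAIL);

    ereport(ERROR,
            (errcode(ERRCODE_CONNECTION_FAILURE),
             primary ? errmsg_internal("%s", primary) :
             errmsg("could not obtain message string for remote error"),
             detail ? errdetail_internal("%s", detail) : 0));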
Backpatch to 9.2, except for postgres_fdw, which didn't exist before 9.3.
Discussion: https://postgr.es/m/19872.1482338965%40sss.pgh.pa.us
Some fields of the sinfo struct are modified within PG_TRY and then
referenced within PG_CATCH, so, as with the recent patch to async.c,
"volatile" is necessary for strict POSIX compliance; and that propagates
to a couple of subroutines as well as materializeQueryResult() itself.
I think the
risk of actual issues here is probably higher than in async.c, because
storeQueryResult() is likely to get inlined into materializeQueryResult(),
leaving the compiler free to conclude that its stores into sinfo fields are
dead code.
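A minimal sketch of the hazard, assuming a pared-down version of the
struct (backend extension fragment):

    typedef struct storeInfo
    {
        PGresult   *cur_res;            /* filled inside PG_TRY */
    } storeInfo;

    volatile storeInfo sinfo = {NULL};

    PG_TRY();
    {
        sinfo.cur_res = PQgetResult(conn);      /* store into the struct... */
        /* ... stash the rows in a tuplestore ... */
    }
    PG_CATCH();
    {
        /*
         * ...and read it back here.  Without "volatile", longjmp() may
         * restore pre-store register state, or the compiler may treat
         * the store as dead code once the callee is inlined.
         */
        if (sinfo.cur_res != NULL)
            PQclear(sinfo.cur_res);
        PG_RE_THROW();
    }
    PG_END_TRY();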
Use SSPI authentication to allow connections exclusively from the OS
user that launched the test suite. This closes on Windows the
vulnerability that commit be76a6d39e
closed on other platforms. Users of "make installcheck" or custom test
harnesses can run "pg_regress --config-auth=DATADIR" to activate the
same authentication configuration that "make check" would use.
Back-patch to 9.0 (all supported versions).
Security: CVE-2014-0067
Some of the many error messages introduced in 458857cc missed 'FROM
unpackaged'. Also e016b724 and 45ffeb7e forgot to quote extension
version numbers.
Backpatch to 9.1, just like 458857cc, which introduced the messages. Do
so because the error messages thrown when the wrong command is copied
and pasted aren't easy to understand.
With OpenLDAP versions 2.4.24 through 2.4.31, inclusive, PostgreSQL
backends can crash at exit. Raise a warning during "configure" based on
the compile-time OpenLDAP version number, and test the crash scenario in
the dblink test suite. Back-patch to 9.0 (all supported versions).
dblink uses a short-lived data conversion memory context. However it
was not deleted when no longer needed, leading to a noticeable memory
leak under some circumstances. Plug the hole, along with minor
refactoring. Backpatch to 9.2 where the leak was introduced.
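The shape of the fix in outline (backend fragment; identifier names and
context parameters are illustrative, not the committed code):

    MemoryContext convcxt = AllocSetContextCreate(CurrentMemoryContext,
                                                  "dblink conversion context",
                                                  ALLOCSET_DEFAULT_SIZES);
    MemoryContext oldcxt = MemoryContextSwitchTo(convcxt);

    /* ... perform the per-row data conversion ... */

    MemoryContextSwitchTo(oldcxt);
    MemoryContextDelete(convcxt);       /* the step that was missing */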
Report and initial patch by MauMau. Reviewed/modified slightly by
Tom Lane and me.
This was not changed in HEAD, but will be done later as part of a
pgindent run. Future pgindent runs will also do this.
Report by Tom Lane
Backpatch through all supported branches, but not HEAD
Previous commit e5de601267 modified dblink
to ensure client encoding matched the server. However the added
PQsetClientEncoding() call added significant overhead. Restore original
performance in the common case where client encoding already matches
server encoding by doing nothing in that case. Applies to all active
branches.
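The fast path amounts to one check before the call (backend-side
sketch):

    /* Only issue "set client_encoding" when something would change. */
    if (PQclientEncoding(conn) != GetDatabaseEncoding())
        PQsetClientEncoding(conn, GetDatabaseEncodingName());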
Issue reported and work sponsored by Zonar Systems.
If the remote database's settings of these GUCs are different from ours,
ambiguous datetime values may be read incorrectly. To fix, temporarily
adopt the remote server's settings while we ingest a query result.
This is not a complete fix, since it doesn't do anything about ambiguous
values in commands sent to the remote server; but there seems little we
can do about that end of it given dblink's entirely textual API for
transmitted commands.
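A hedged sketch of the adopt-and-restore approach (set_config_option()'s
signature has varied across branches; this follows the newer one):

    int         nestlevel = NewGUCNestLevel();
    const char *datestyle = PQparameterStatus(conn, "DateStyle");

    /* Temporarily adopt the remote DateStyle while parsing the result. */
    if (datestyle != NULL)
        (void) set_config_option("datestyle", datestyle,
                                 PGC_USERSET, PGC_S_SESSION,
                                 GUC_ACTION_SAVE, true, 0, false);

    /* ... convert the query result to tuples ... */

    AtEOXact_GUC(false, nestlevel);     /* revert to our own settings */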
Back-patch to 9.2. The hazard exists in all versions, but this patch
would need more work to apply before 9.2. Given the lack of field
complaints about this issue, it doesn't seem worth the effort at present.
Daniel Farina and Tom Lane
Normally each module is tested in a database named contrib_regression,
which is dropped and recreated at the beginning of each pg_regress run.
This new mode, enabled by adding USE_MODULE_DB=1 to the make command
line, runs most modules in a database with the module name embedded in
it.
This will make testing pg_upgrade on clusters with the contrib modules
a lot easier.
Second attempt at this, this time accommodating make versions older
than 3.82.
Still to be done: adapt to the MSVC build system.
Backpatch to 9.0, which is the earliest version it is reasonably
possible to test upgrading from.
Normally each module is tested in a database named contrib_regression,
which is dropped and recreated at the beginning of each pg_regress run.
This mode, enabled by adding USE_MODULE_DB=1 to the make command line,
runs most modules in a database with the module name embedded in it.
This will make testing pg_upgrade on clusters with the contrib modules
a lot easier.
Still to be done: adapt to the MSVC build system.
Backpatch to 9.0, which is the earliest version it is reasonably possible
to test upgrading from.
The HINTs generated for these error cases vary across builds. We
could try to work around that, but the test cases aren't really useful
enough to justify taking any trouble.
Per buildfarm.
dblink now has its own validator function dblink_fdw_validator(), which is
better than the core function postgresql_fdw_validator() because it gets
the list of legal options from libpq instead of having a hard-wired list.
Make the dblink extension module provide a standard foreign data wrapper
dblink_fdw that encapsulates use of this validator, and recommend use of
that wrapper instead of making up wrappers on the fly.
Unfortunately, because ad-hoc wrappers *were* recommended practice
previously, it's not clear when we can get rid of postgresql_fdw_validator
without causing upgrade problems. But this is a step in the right
direction.
Shigeru Hanada, reviewed by KaiGai Kohei
This reduces unnecessary exposure of other headers through htup.h, which
is very widely included by many files.
I have chosen to move the function prototypes to the new file as well,
because that means htup.h no longer needs to include tupdesc.h. In
itself this doesn't have much effect on indirect inclusion of tupdesc.h
throughout the tree, because it's also required by execnodes.h; but it's
something to explore in the future, and it seemed best to do the htup.h
change now while I'm busy with it.
After taking a while to digest the row-processor feature that was added to
libpq in commit 92785dac2e, we've concluded
it is over-complicated and too hard to use. Leave the core infrastructure
changes in place (that is, there's still a row processor function inside
libpq), but remove the exposed API pieces, and instead provide a "single
row" mode switch that causes PQgetResult to return one row at a time in
separate PGresult objects.
This approach incurs more overhead than proper use of a row processor
callback would, since construction of a PGresult per row adds extra cycles.
However, it is far easier to use and harder to break. The single-row mode
still affords applications the primary benefit that the row processor API
was meant to provide, namely not having to accumulate large result sets in
memory before processing them. Preliminary testing suggests that we can
probably buy back most of the extra cycles by micro-optimizing construction
of the extra results, but that task will be left for another day.
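Typical client-side use of the new mode looks like this (self-contained
libpq sketch; the query text is arbitrary):

    #include <stdio.h>
    #include <libpq-fe.h>

    static void
    fetch_row_by_row(PGconn *conn)
    {
        PGresult   *res;

        if (!PQsendQuery(conn, "SELECT generate_series(1, 1000000)"))
            return;
        if (!PQsetSingleRowMode(conn))  /* must directly follow PQsendQuery */
            fprintf(stderr, "could not enter single-row mode\n");

        while ((res = PQgetResult(conn)) != NULL)
        {
            /* each data row arrives as its own PGRES_SINGLE_TUPLE result */
            if (PQresultStatus(res) == PGRES_SINGLE_TUPLE)
                printf("%s\n", PQgetvalue(res, 0, 0));
            /* a final zero-row PGRES_TUPLES_OK (or an error) ends the query */
            PQclear(res);
        }
    }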
Marko Kreen
This patch provides a test case for libpq's row processor API.
contrib/dblink can deal with very large result sets by dumping them into
a tuplestore (which can spill to disk) --- but until now, the intermediate
storage of the query result in a PGresult meant memory bloat for any large
result. Now we use a row processor to convert the data to tuple form and
dump it directly into the tuplestore.
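In outline, each incoming row is converted and appended immediately
(backend fragment with illustrative names; tupdesc and tupstore come
from the surrounding set-returning-function machinery):

    int         natts = tupdesc->natts;
    Datum      *values = (Datum *) palloc(natts * sizeof(Datum));
    bool       *nulls = (bool *) palloc(natts * sizeof(bool));

    /* ... fill values[]/nulls[] from one row's text fields ... */
    tuplestore_putvalues(tupstore, tupdesc, values, nulls);
    /* the tuplestore can spill to disk; no whole-result PGresult needed */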
A limitation is that this only works for plain dblink() queries, not
dblink_send_query() followed by dblink_get_result(). In the latter
case we don't know the desired tuple rowtype soon enough. While hack
solutions to that are possible, a different user-level API would
probably be a better answer.
Kyotaro Horiguchi, reviewed by Marko Kreen and Tom Lane
dblink_exec leaked temporary database connections if any error occurred
after connection setup, for example
SELECT dblink_exec('...connect string...', 'select 1/0');
Add a PG_TRY block to ensure PQfinish gets done when it is needed.
(dblink_record_internal is on the hairy edge of needing similar treatment,
but seems not to be actively broken at the moment.)
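The fix follows the usual backend cleanup pattern (sketch; note that
conn is assigned before PG_TRY and not changed inside it, so no
"volatile" is needed):

    conn = PQconnectdb(connstr);

    PG_TRY();
    {
        /* ... execute the command; failures here ereport() and longjmp ... */
    }
    PG_CATCH();
    {
        PQfinish(conn);                 /* drop the temporary connection */
        PG_RE_THROW();
    }
    PG_END_TRY();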
Also, in 9.0 and up, only one of the three functions using tuplestore
return mode was properly checking that the query context would allow
a tuplestore result.
Noted while reviewing dblink patch. Back-patch to all supported branches.
The DBLINK_GET_CONN and DBLINK_GET_NAMED_CONN macros did not set the
surrounding function's conname variable, causing errors to be incorrectly
reported as having occurred on the "unnamed" connection in some cases.
This bug was actually visible in two cases in the regression tests,
but apparently whoever added those cases wasn't paying attention.
Noted by Kyotaro Horiguchi, though this is different from his proposed
patch.
Back-patch to 8.4; 8.3 does not have the same type of error reporting
so the patch is not relevant.
We have seen one too many reports of people trying to use 9.1 extension
files in the old-fashioned way of sourcing them in psql. Not only does
that usually not work (due to failure to substitute for MODULE_PATHNAME
and/or @extschema@), but if it did work they'd get a collection of loose
objects, not an extension. To prevent this, insert into each extension
script file an \echo ... \quit line that prints a suitable error message,
and teach commands/extension.c to ignore lines starting with \echo.
That should not only prevent any adverse consequences of loading a script
file the wrong way, but make it crystal clear to users that they need to
do it differently now.
Tom Lane, following an idea of Andrew Dunstan's. Back-patch into 9.1
... there is not going to be much value in this if we wait till 9.2.
There may be some other places where we should use errdetail_internal,
but they'll have to be evaluated case-by-case. This commit just hits
a bunch of places where invoking gettext is obviously a waste of cycles.
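For reference, the difference is only which reporting function is
called (sketch with illustrative message text):

    /* errdetail() feeds its format string through gettext;
     * errdetail_internal() does not, saving the lookup for details
     * that are not worth translating. */
    ereport(ERROR,
            (errcode(ERRCODE_INTERNAL_ERROR),
             errmsg("widget state is inconsistent"),
             errdetail_internal("Widget flags: %d.", flags)));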
Added a new option --extra-install to pg_regress to arrange for the
respective contrib directory to be installed into the temporary
installation.
This is currently not yet supported for Windows MSVC builds.
Updated the .gitignore files for contrib modules to ignore the
leftovers of a temp-install check run.
Changed the exit status of "make check" in a pgxs build (which still
does nothing) to 0 from 1.
Added "make check" in contrib to top-level "make check-world".
It was never terribly consistent to use OR REPLACE (because of the lack of
comparable functionality for data types, operators, etc), and
experimentation shows that it's now positively pernicious in the extension
world. We really want a failure to occur if there are any conflicts, else
it's unclear what the extension-ownership state of the conflicted object
ought to be. Most of the time, CREATE EXTENSION will fail anyway because
of conflicts on other object types, but an extension defining only
functions can succeed, with bad results.
This isn't fully tested as yet; in particular, I'm not sure that the
"foo--unpackaged--1.0.sql" scripts are OK. But it's time to get some
buildfarm cycles on it.
sepgsql is not converted to an extension, mainly because it seems to
require a very nonstandard installation process.
Dimitri Fontaine and Tom Lane
This eliminates the need for inefficient implementations of this
functionality in both contrib/dblink and contrib/tablefunc, so remove
them. The upcoming patch implementing an in-core format() function
will also require this functionality.
In passing, add some regression tests.
Replace for loops in makefiles with proper dependencies. Parallel
make can now span across directories. Also, make -k and make -q work
properly.
GNU make 3.80 or newer is now required.
dblink_build_sql_insert() and related functions now treat the column
numbers as logical, not physical, column numbers. This will provide saner
behavior in the presence of dropped columns; furthermore, if we ever get
around to allowing rearrangement of logical column ordering, the original
definition would become nearly untenable from a usability standpoint.
Per recent discussion of dblink's handling of dropped columns.
Not back-patched for fear of breaking existing applications.
Fix dblink_build_sql_insert() and related functions to handle dropped
columns correctly. In passing, get rid of some dead logic in the
underlying get_sql_insert() etc. functions --- there is no caller that
will pass null value-arrays to them.
Per bug report from Robert Voinea.