Fix PL/Python so that it can handle domains over composite, and so that
it enforces domain constraints correctly in other cases that were not
always handled properly before. Notably, it didn't do arrays of domains
right (oversight in commit c12d570fa), and it failed to enforce domain
constraints when returning a composite type containing a domain field,
and if a transform function is being used for a domain's base type then
it failed to enforce domain constraints on the result. Also, in many
places it missed checking domain constraints on null values, because
the plpy_typeio code simply wasn't called for Py_None.
Rather than try to band-aid these problems, I made a significant
refactoring of the plpy_typeio logic. The existing design of recursing
for array and composite members is extended to also treat domains as
containers requiring recursion, and the APIs for the module are cleaned
up and simplified.
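As a rough, self-contained illustration of the container-recursion idea (the
types and function names below are invented for the sketch, not the actual
plpy_typeio API): a domain is handled by recursing to its base type and then
applying the domain's constraints, even when the input is NULL.
    #include <stdbool.h>
    #include <stdio.h>

    typedef enum { T_SCALAR, T_DOMAIN } Kind;

    typedef struct Type
    {
        Kind         kind;
        struct Type *base;                      /* domain's base type */
        bool       (*check)(const int *value);  /* domain constraint, or NULL */
    } Type;

    /* Validate an input (an int, or NULL) against a type description. */
    static bool
    coerce_value(const Type *t, const int *value)
    {
        if (t->kind == T_DOMAIN)
        {
            /* recurse into the base type first ... */
            if (!coerce_value(t->base, value))
                return false;
            /* ... then apply the domain constraint, including for NULL input */
            return t->check == NULL || t->check(value);
        }
        return true;                            /* plain scalar: nothing to enforce */
    }

    static bool
    not_null(const int *value)
    {
        return value != NULL;
    }

    int
    main(void)
    {
        Type integer = { T_SCALAR, NULL, NULL };
        Type not_null_int = { T_DOMAIN, &integer, not_null };
        int  v = 42;

        printf("42   -> %s\n", coerce_value(&not_null_int, &v) ? "ok" : "violates domain");
        printf("NULL -> %s\n", coerce_value(&not_null_int, NULL) ? "ok" : "violates domain");
        return 0;
    }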
The patch also modifies plpy_typeio to rely on the typcache more than
it did before (which was pretty much not at all). This reduces the
need for repetitive lookups, and lets us get rid of an ad-hoc scheme
for detecting changes in composite types. I added a couple of small
features to typcache to help with that.
Although some of this is fixing bugs that long predate v11, I don't
think we should risk a back-patch: it's a significant amount of code
churn, and there've been no complaints from the field about the bugs.
Tom Lane, reviewed by Anthony Bykov
Discussion: https://postgr.es/m/24449.1509393613@sss.pgh.pa.us
The pending list must (for correctness) always be cleaned up by vacuum, and
should (for the avoidance of surprising behavior) always be cleaned up
by an explicit call to gin_clean_pending_list, but cleanup is optional
when inserting. The old logic got this backward: cleanup was forced
if (stats == NULL), but that's going to be *false* when vacuuming and
*true* for inserts.
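A hedged, self-contained sketch of that condition (illustrative names, not the
actual ginInsertCleanup() code): vacuum supplies a stats struct while plain
insertion passes NULL, so forcing full cleanup has to key off stats being
non-NULL.
    #include <stdbool.h>
    #include <stdio.h>

    typedef struct GinStats { long pages_deleted; } GinStats;   /* stand-in */

    static void
    pending_list_cleanup(GinStats *stats)
    {
        bool forceCleanup = (stats != NULL);   /* the fix; old code tested == NULL */

        if (forceCleanup)
            printf("vacuum: drain the entire pending list\n");
        else
            printf("insert: opportunistic cleanup, may stop early\n");
    }

    int
    main(void)
    {
        GinStats stats = {0};

        pending_list_cleanup(&stats);   /* called from vacuum */
        pending_list_cleanup(NULL);     /* called from insertion */
        return 0;
    }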
Masahiko Sawada, reviewed by me.
Discussion: http://postgr.es/m/CAD21AoBLUSyiYKnTYtSAbC+F=XDjiaBrOUEGK+zUXdQ8owfPKw@mail.gmail.com
A handful of settings, most notably shared_preload_libraries, were
simply listed in the wrong section of postgresql.conf.sample compared to
their assigned config_group value in guc.c (and thus pg_settings). In
other cases the names of
the sections in postgresql.conf.sample were mildly different from
the corresponding entries in config_group_names[]. Make it all
consistent.
Adrián Escoms, reviewed by me.
Discussion: http://postgr.es/m/CACksPC2veEmFRYqwYepWYO9U7aFhAx6sYq+WqjTyHw7uV=E=pw@mail.gmail.com
If a PARAM_EXEC parameter is used below a Gather (Merge) but the InitPlan
that computes it is attached to or above the Gather (Merge), force the
value to be computed before starting parallelism and pass it down to all
workers. This allows us to use parallelism in cases where it previously
would have had to be rejected as unsafe. In this case, we do lose the
optimization that the value is only computed if it's actually used. An
alternative strategy would be to have the first worker that needs the value
compute it, but one downside of that approach is that we'd then need to
select a parallel-safe path to compute the parameter value; it couldn't for
example contain a Gather (Merge) node. At some point in the future, we
might want to consider both approaches.
Independent of that consideration, there is a great deal more work that
could be done to make more kinds of PARAM_EXEC parameters parallel-safe.
This infrastructure could be used to allow a Gather (Merge) on the inner
side of a nested loop (although that's not a very appealing plan) and
cases where the InitPlan is attached below the Gather (Merge) could be
addressed as well using various techniques. But this is a good start.
Amit Kapila, reviewed and revised by me. Additional review and testing by
Kuntal Ghosh, Haribabu Kommi, and Tushar Ahuja.
Discussion: http://postgr.es/m/CAA4eK1LV0Y1AUV4cUCdC+sYOx0Z0-8NAJ2Pd9=UKsbQ5Sr7+JQ@mail.gmail.com
Commit 0fb54de9a thought that this was only needed in VS2015 and later,
but buildfarm member woodlouse shows that at least VS2013 whines as
well. Let's just define it regardless of MSVC version; it should be
harmless enough in older releases.
Also, in the wake of ed9b3606d, it seems better to put it in win32_port.h
where <winsock2.h> is included.
Since this is only suppressing a pedantic compiler warning, I don't
feel a need for a back-patch.
Discussion: https://postgr.es/m/20124.1510850225@sss.pgh.pa.us
It's become apparent during testing that there are problems with at
least the testing regime. I don't think we should have it without a
working test regime, and the difficulties might indicate implementation
problems anyway, so I'm backing out the whole thing until that's sorted
out.
This reverts commits 74594849989f92cd8ce3a
Commit 9be95ef15 failed to cure all of the redundancy here: we were
actually calling get_major_server_version() three times for each
of the old and new data directories. While that's not enormously
expensive, it's still sloppy.
A. Akenteva
Discussion: https://postgr.es/m/f9266a85d918a3cf3a386b5148aee666@postgrespro.ru
This continues the work of commit 91aec93e6 by getting rid of a lot of
Windows-specific funny business in "section 0". Instead of including
pg_config_os.h in different places depending on platform, let's
standardize on putting it before the system headers, and in consequence
reduce win32.h to just what has to appear before the system headers or
the body of c.h (the latter category seems to include only PGDLLIMPORT
and PGDLLEXPORT). The rest of what was in win32.h is moved to a new
sub-include of port.h, win32_port.h. Some of what was in port.h seems
to better belong there too.
It's possible that I missed some declaration ordering dependency that
needs to be preserved, but hopefully the buildfarm will find that
out in short order.
Unlike the previous commit, no back-patch, since this is just cleanup
not a prerequisite for a bug fix.
Discussion: https://postgr.es/m/29650.1510761080@sss.pgh.pa.us
Move the subroutine wrappers used to check whether a connection to a server
works into the main test module. This is useful for other tests that need
to check connectivity to a server.
Author: Michael Paquier <michael@paquier.xyz>
The module requires a preloaded library and the defect can't be cured by
a LOAD instruction in the test script. To achieve this we override the
installcheck target in the module's Makefile, and exclude the module in
vcregress.pl.
Along the way, revert commit 9989f92aab.
Code should be using true and false. Existing code can be changed to
those in a backward compatible way.
The definitions in the ecpg header files are left around to avoid
upsetting those users unnecessarily.
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
Some code is moved from partition.c, which has grown very quickly lately;
splitting the executor parts out might help to keep it from getting
totally out of control. Other code is moved from execMain.c. All is
moved to a new file execPartition.c. get_partition_for_tuple now has
a new interface that more clearly separates executor concerns from
generic concerns.
Amit Langote. A slight comment tweak by me.
Discussion: http://postgr.es/m/1f0985f8-3b61-8bc4-4350-baa6d804cb6d@lab.ntt.co.jp
Our initial work with int128 neglected alignment considerations, an
oversight that came back to bite us in bug #14897 from Vincent Lachenal.
It is unsurprising that int128 might have a 16-byte alignment requirement;
what's slightly more surprising is that even notoriously lax Intel chips
sometimes enforce that.
Raising MAXALIGN seems out of the question: the costs in wasted disk and
memory space would be significant, and there would also be an on-disk
compatibility break. Nor does it seem very practical to try to allow some
data structures to have more-than-MAXALIGN alignment requirement, as we'd
have to push knowledge of that throughout various code that copies data
structures around.
The only way out of the box is to make type int128 conform to the system's
alignment assumptions. Fortunately, gcc supports that via its
__attribute__((aligned)) attribute; and since we don't currently support
int128 on non-gcc-workalike compilers, we shouldn't be losing any platform
support this way.
Although we could have just done pg_attribute_aligned(MAXIMUM_ALIGNOF) and
called it a day, I did a little bit of extra work to make the code more
portable than that: it will also support int128 on compilers without
__attribute__((aligned)), if the native alignment of their 128-bit-int
type is no more than that of int64.
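A minimal sketch of the idea, assuming a gcc-compatible compiler with native
128-bit integers; MAXIMUM_ALIGNOF is hard-wired to 8 here purely for
illustration, and the names mirror the description above rather than the
actual c.h contents.
    #include <stdalign.h>
    #include <stdio.h>

    #define MAXIMUM_ALIGNOF 8              /* illustrative; platform-dependent in reality */

    #if defined(__SIZEOF_INT128__)
    #define pg_attribute_aligned(a) __attribute__((aligned(a)))
    /* on a typedef, the aligned attribute may reduce alignment, not just raise it */
    typedef __int128 int128 pg_attribute_aligned(MAXIMUM_ALIGNOF);
    #endif

    int
    main(void)
    {
    #if defined(__SIZEOF_INT128__)
        /* Natively, alignof(__int128) is typically 16, i.e. more than MAXALIGN. */
        printf("native  alignment: %zu\n", alignof(__int128));
        printf("reduced alignment: %zu\n", alignof(int128));
    #endif
        return 0;
    }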
Add a regression test case that exercises the one known instance of the
problem, in parallel aggregation over a bigint column.
This will need to be back-patched, along with the preparatory commit
91aec93e6. But let's see what the buildfarm makes of it first.
Discussion: https://postgr.es/m/20171110185747.31519.28038@wrigleys.postgresql.org
Generalize section 1 to handle stuff that is principally about the
compiler (not libraries), such as attributes, and collect stuff there
that had been dropped into various other parts of c.h. Also, push
all the gettext macros into section 8, so that section 0 is really
just inclusions rather than inclusions and random other stuff.
The primary goal here is to get pg_attribute_aligned() defined before
section 3, so that we can use it with int128. But this seems like good
cleanup anyway.
This patch just moves macro definitions around, and shouldn't result
in any changes in generated code. But I'll push it out separately
to see if the buildfarm agrees.
Discussion: https://postgr.es/m/20171110185747.31519.28038@wrigleys.postgresql.org
Commit 5ecc0d738 removed the hard-wired superuser checks in lo_import
and lo_export in favor of protecting them with SQL permissions, but
failed to adjust the documentation to match. Fix that, and add a
<caution> paragraph pointing out the nontrivial security hazards
involved with actually granting such permissions. (It's still better
than ALLOW_DANGEROUS_LO_FUNCTIONS, though.)
Also, commit ae20b23a9 caused large object read/write privilege to
be checked during lo_open() rather than in the actual read or write
calls. Document that.
Discussion: https://postgr.es/m/CAB7nPqRHmNOYbETnc_2EjsuzSM00Z+BWKv9sy6tnvSd5gWT_JA@mail.gmail.com
Instead of passing large swaths of boolean arguments, define some flags
that can be used in a bitmask. This makes it easier not only to figure
out what each call site is doing, but also to add some new flags.
The flags are split in two -- one set for index_create directly and
another for constraints. index_create() itself receives both, and then
passes down the latter to index_constraint_create(), which can also be
called standalone.
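A generic sketch of the booleans-to-bitmask pattern being described; the flag
names below are invented for illustration rather than taken from the actual
index_create()/index_constraint_create() headers.
    #include <stdint.h>
    #include <stdio.h>

    /* flags consumed by the creation routine itself (names invented) */
    #define IDX_CREATE_IS_PRIMARY    (1 << 0)
    #define IDX_CREATE_SKIP_BUILD    (1 << 1)
    #define IDX_CREATE_CONCURRENT    (1 << 2)

    /* a second flag word for constraint options, passed further down */
    #define IDX_CONSTR_DEFERRABLE    (1 << 0)
    #define IDX_CONSTR_INIT_DEFERRED (1 << 1)

    static void
    create_index(uint32_t flags, uint32_t constr_flags)
    {
        /* call sites now read as named options instead of a row of booleans */
        if (flags & IDX_CREATE_CONCURRENT)
            printf("building concurrently\n");
        if (constr_flags & IDX_CONSTR_DEFERRABLE)
            printf("constraint is deferrable\n");
    }

    int
    main(void)
    {
        create_index(IDX_CREATE_IS_PRIMARY | IDX_CREATE_CONCURRENT,
                     IDX_CONSTR_DEFERRABLE);
        return 0;
    }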
Discussion: https://postgr.es/m/20171023151251.j75uoe27gajdjmlm@alvherre.pgsql
Reviewed-by: Simon Riggs
This feature caters to specialized use-cases such as running the normal
pgbench scenario with nonstandard indexes, or inserting other actions
between steps of the initialization sequence. The normal sequence of
initialization actions is broken down into half a dozen steps which can
be executed in a user-specified order, so far as that's
sensible. The actions themselves aren't changed, except to make them
more robust against nonstandard uses:
* all four tables are now dropped in one DROP command, to reduce
assumptions about what foreign key relationships exist;
* all four tables are now truncated at the start of the data load
step, for consistency;
* the foreign key creation commands now specify constraint names, to
prevent accidentally creating duplicate constraints by executing the
'f' step twice.
Make some cosmetic adjustments in the messages emitted by pgbench
so that it's clear which steps are getting run, and so that the
messages agree with the documented names of the steps.
In passing, fix failure to enforce that the -v option is used only
in benchmarking mode.
Masahiko Sawada, reviewed by Fabien Coelho, editorialized a bit by me
Discussion: https://postgr.es/m/CAD21AoCsz0ZzfCFcxYZ+PUdpkDd5VsCSG0Pre_-K1EgokCDFYA@mail.gmail.com
Up until now, we only tracked the number of parameters, which was
sufficient to allocate an array of Datums of the appropriate size,
but not sufficient to, for example, know how to serialize a Datum
stored in one of those slots. An upcoming patch wants to do that,
so add this tracking to make it possible.
Patch by me, reviewed by Tom Lane and Amit Kapila.
Discussion: http://postgr.es/m/CA+TgmoYqpxDKn8koHdW8BEKk8FMUL0=e8m2Qe=M+r0UBjr3tuQ@mail.gmail.com
Apart from calling write_stderr() on failure, the handler depends on no
PostgreSQL facilities. We have experienced crashes before reaching the
former call site. Given such an early crash, this change cannot hurt
and may produce a helpful dump. Absent an early crash, this change has
no effect. Back-patch to 9.3 (all supported versions).
Takayuki Tsunakawa
Discussion: https://postgr.es/m/0A3221C70F24FB45833433255569204D1F80CD13@G01JPEXMBYT05
PostgreSQL running as a Windows service crashed upon calling
write_stderr() before MemoryContextInit(). This fix completes work
started in 5735efee15. Messages this
early contain only ASCII bytes; if we removed the CurrentMemoryContext
requirement, the ensuing conversions would have no effect. Back-patch
to 9.3 (all supported versions).
Takayuki Tsunakawa, reviewed by Michael Paquier.
Discussion: https://postgr.es/m/0A3221C70F24FB45833433255569204D1F80CC73@G01JPEXMBYT05
This suite had been a proper superset of the regular ecpg test suite,
but the three newest tests didn't reach it. To make this less likely to
recur, delete the extra schedule file and pass the TCP-specific test on
the command line. Back-patch to 9.3 (all supported versions).
Since commit 868898739a, it has assumed
"localhost" resolves to both ::1 and 127.0.0.1. We gain nothing from
that assumption, and it does not hold in a default installation of Red
Hat Enterprise Linux 5. Back-patch to 9.3 (all supported versions).
When a value contained an XML declaration naming some other encoding,
this function interpreted UTF8 bytes as the named encoding, yielding
mojibake. xml_parse() already has similar logic. This would be
necessary but not sufficient for non-UTF8 databases, so preserve
behavior there until the xpath facility can support such databases
comprehensively. Back-patch to 9.3 (all supported versions).
Pavel Stehule and Noah Misch
Discussion: https://postgr.es/m/CAFj8pRC-dM=tT=QkGi+Achkm+gwPmjyOayGuUfXVumCxkDgYWg@mail.gmail.com
An LDAP URL without a host name such as "ldap://" or without a base DN
such as "ldap://localhost" would cause a crash when reading pg_hba.conf.
If no binddn is configured, an error message might end up trying to print a
null pointer, which could crash on some platforms.
Author: Thomas Munro <thomas.munro@enterprisedb.com>
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
Make bloom WAL test compare psql output text, not just result codes;
this was evidently the intent all along, but it was mis-coded.
In passing, make sure we will notice any failure in setup steps.
Alexander Korotkov, reviewed by Michael Paquier and Masahiko Sawada
Discussion: https://postgr.es/m/CAPpHfdtohPdQ9rc5mdWjxq+3VsBNw534KV_5O65dTQrSdVJNgw@mail.gmail.com
Hash partitioning is useful when you want to partition a growing data
set evenly. This can be useful to keep table sizes reasonable, which
makes maintenance operations such as VACUUM faster, or to enable
partition-wise join.
At present, we still depend on constraint exclusion for partition
pruning, and the shape of the partition constraints for hash
partitioning is such that that doesn't work. Work is underway to fix
that, which should both improve performance and make partition
pruning work with hash partitioning.
Amul Sul, reviewed and tested by Dilip Kumar, Ashutosh Bapat, Yugo
Nagata, Rajkumar Raghuwanshi, Jesper Pedersen, and by me. A few
final tweaks also by me.
Discussion: http://postgr.es/m/CAAJ_b96fhpJAP=ALbETmeLk1Uni_GFZD938zgenhF49qgDTjaQ@mail.gmail.com
Up to now, ACL checks for large objects happened at the level of
the SQL-callable functions, which led to CVE-2017-7548 because of a
missing check. Push them down to be enforced in inv_api.c as much
as possible, in hopes of preventing future bugs. This does have the
effect of moving read and write permission errors to happen at lo_open
time not loread or lowrite time, but that seems acceptable.
Michael Paquier and Tom Lane
Discussion: https://postgr.es/m/CAB7nPqRHmNOYbETnc_2EjsuzSM00Z+BWKv9sy6tnvSd5gWT_JA@mail.gmail.com
While it's generally unwise to give permissions on these functions to
anyone but a superuser, we've been moving away from hard-wired permission
checks inside functions in favor of using the SQL permission system to
control access. Bring lo_import() and lo_export() into compliance with
that approach.
In particular, this removes the manual configuration option
ALLOW_DANGEROUS_LO_FUNCTIONS. That dates back to 1999 (commit 4cd4a54c8);
it's unlikely anyone has used it in many years. Moreover, if you really
want such behavior, now you can get it with GRANT ... TO PUBLIC instead.
Michael Paquier
Discussion: https://postgr.es/m/CAB7nPqRHmNOYbETnc_2EjsuzSM00Z+BWKv9sy6tnvSd5gWT_JA@mail.gmail.com
Somebody messed up a refactoring here. As it stood, we'd check pg_ctl's
--version output twice for each cluster. Worse, the first check for the
new cluster's version happened before we'd done any validate_exec checks
there, breaking the check ordering the code intended.
A. Akenteva
Discussion: https://postgr.es/m/f9266a85d918a3cf3a386b5148aee666@postgrespro.ru
Upon further review, our Bonjour code doesn't actually work with the
Avahi not-too-compatible compatibility library. While you can get it
to work on non-macOS platforms if you link to Apple's own mDNSResponder
code, there don't seem to be many people who care about that. Leaving in
the AC_SEARCH_LIBS call seems more likely to encourage people to build
broken configurations than to do anything very useful.
Hence, remove the AC_SEARCH_LIBS call and put in a warning comment instead.
Discussion: https://postgr.es/m/2D8331C5-D64F-44C1-8717-63EDC6EAF7EB@brightforge.com
On macOS the relevant functions require no special library, but elsewhere
we need to pull in libdns_sd.
Back-patch to supported branches. No docs change since the docs do not
suggest that this is a Mac-only feature.
Luke Lonergan
Discussion: https://postgr.es/m/2D8331C5-D64F-44C1-8717-63EDC6EAF7EB@brightforge.com
The point of having separate ResourceOwnerEnlargeFoo and
ResourceOwnerRememberFoo functions is so that resource allocation
can happen in between. Doing it in some other order is just wrong.
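A self-contained sketch of that protocol with stand-in names (not the real
ResourceOwner API): the enlarge step, which is the only one allowed to fail,
runs before the resource is acquired, so the remember step can never fail and
leak an already-acquired resource.
    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct Tracker
    {
        int *items;
        int  nitems;
        int  capacity;
    } Tracker;

    /* Reserve room for one more entry; this is the only step that may fail. */
    static bool
    tracker_enlarge(Tracker *t)
    {
        if (t->nitems < t->capacity)
            return true;
        int  newcap = t->capacity ? t->capacity * 2 : 4;
        int *p = realloc(t->items, newcap * sizeof(int));
        if (p == NULL)
            return false;
        t->items = p;
        t->capacity = newcap;
        return true;
    }

    /* Record an already-acquired resource; cannot fail, room was reserved. */
    static void
    tracker_remember(Tracker *t, int resource)
    {
        t->items[t->nitems++] = resource;
    }

    int
    main(void)
    {
        Tracker t = {0};

        /* correct order: enlarge, acquire, remember */
        if (!tracker_enlarge(&t))
            return 1;                   /* nothing acquired yet, nothing leaked */
        int fd = 42;                    /* stand-in for open(), pinning a buffer, etc. */
        tracker_remember(&t, fd);
        printf("tracking %d resource(s)\n", t.nitems);
        free(t.items);
        return 0;
    }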
OpenTemporaryFile() did open(), enlarge, remember, which would leak the
open file if the enlarge step ran out of memory. Because fd.c has its own
layer of resource-remembering, the consequences look like they'd be limited
to an intratransaction FD leak, but it's still not good.
IncrBufferRefCount() did enlarge, remember, incr-refcount, which would blow
up if the incr-refcount step ever failed. It was safe enough when written,
but since the introduction of PrivateRefCountHash, I think the assumption
that no error could happen there is pretty shaky.
The odds of real problems from either bug are probably small, but still,
back-patch to supported branches.
Thomas Munro and Tom Lane, per a comment from Andres Freund