There used to be a lot of different *Type and *Kind symbol groups to
address objects within different commands, most of which have been
replaced by ObjectType, starting with
b256f24264. But this conversion was never
done for the ACL commands until now.
This change ends up being just a plain replacement of the types and
symbols, without any code restructuring needed, except for deleting some
now-redundant code.
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
Reviewed-by: Stephen Frost <sfrost@snowman.net>
When CREATE INDEX is run on a partitioned table, create catalog entries
for an index on the partitioned table (which is just a placeholder since
the table proper has no data of its own), and recurse to create actual
indexes on the existing partitions; create them in future partitions
also.
As a convenience gadget, if the new index definition matches some
existing index in partitions, these are picked up and used instead of
creating new ones. Whichever way these indexes come about, they become
attached to the index on the parent table and are dropped alongside it,
and cannot be dropped in isolation unless they are detached first.
To support pg_dump'ing these indexes, add commands
CREATE INDEX ON ONLY <table>
(which creates the index on the parent partitioned table, without
recursing) and
ALTER INDEX ATTACH PARTITION
(which is used after the indexes have been created individually on each
partition, to attach them to the parent index). These reconstruct prior
database state exactly.
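As a rough illustration of how these commands fit together (table and
index names here are hypothetical):

    -- placeholder index on the partitioned parent, without recursing
    CREATE INDEX parent_a_idx ON ONLY parent (a);

    -- concrete index built on an existing partition
    CREATE INDEX part1_a_idx ON part1 (a);

    -- attach the partition's index to the parent index
    ALTER INDEX parent_a_idx ATTACH PARTITION part1_a_idx;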
Reviewed-by: (in alphabetical order) Peter Eisentraut, Robert Haas, Amit
Langote, Jesper Pedersen, Simon Riggs, David Rowley
Discussion: https://postgr.es/m/20171113170646.gzweigyrgg6pwsg4@alvherre.pgsql
For no apparent reason, this function was using a 16-bit-wide inhseqno
value, rather than the correct 32-bit width, which is what is stored in
the pg_inherits catalog. This becomes evident if you try to create a
table with more than 65535 parents, because this error appears:
ERROR: duplicate key value violates unique constraint «pg_inherits_relid_seqno_index»
DETAIL: Key (inhrelid, inhseqno)=(329371, 0) already exists.
Needless to say, having so many parents is an uncommon situation, which
explains why this error has never been reported despite having been
introduced with the Postgres95 1.01 sources in commit d31084e9d1:
https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/commands/creatinh.c;hb=d31084e9d111#l349
Backpatch all the way back.
David Rowley noticed this while reviewing a patch of mine.
Discussion: https://postgr.es/m/CAKJS1f8Dn7swSEhOWwzZzssW7747YB=2Hi+T7uGud40dur69-g@mail.gmail.com
This was hardly tested at all. The trigger case was lightly tested by
the logical replication tests, but rules and event triggers were not
tested at all.
On Sparc64, use of __attribute__(aligned(8)) with __int128 causes faulty
code generation in gcc versions at least through 5.5.0. We can work around
that by disabling use of __int128, so teach configure to test for the bug.
This solution doesn't fix things for the case of cross-compiling with a
buggy compiler; to support that nicely, we'd need to add a manual disable
switch. Unless more such cases turn up, it doesn't seem worth the work.
Affected users could always edit pg_config.h manually.
In passing, fix some typos in the existing configure test for __int128.
They're harmless because we only compile that code, not run it, but
they're still confusing for anyone looking at it closely.
This is needed in support of commit 751804998, so back-patch to 9.5
as that was.
Marina Polyakova, Victor Wagner, Tom Lane
Discussion: https://postgr.es/m/0d3a9fa264cebe1cb9966f37b7c06e86@postgrespro.ru
When pushing down a join to a foreign server, postgres_fdw constructs
an alternative plan to be used for any EvalPlanQual rechecks that
prove to be necessary. This plan is stored as the outer subplan of
the Foreign Scan implementing the pushed-down join. Previously, this
alternative plan could have a different nominal sort ordering than its
parent, which seemed OK since there will only be one tuple per base
table anyway in the case of an EvalPlanQual recheck. Actually,
though, it caused a problem if that path was used as a building block
for the EvalPlanQual recheck plan of a higher-level foreign join,
because we could end up with a merge join one of whose inputs was not
labelled with the correct sort order. Repair by injecting an extra
Sort node into the EvalPlanQual recheck plan whenever it would
otherwise fail to be sorted at least as well as its parent Foreign
Scan.
Report by Jeff Janes. Patch by me, reviewed by Tom Lane, who also
provided the test case and comment text.
Discussion: http://postgr.es/m/CAMkU=1y2G8VOVBHv3iXU2TMAj7-RyBFFW1uhkr5sm9LQ2=X35g@mail.gmail.com
node->partitioned_rels is only set in UPDATE/DELETE cases, but
ExecInitModifyTable only uses its "rel" variable in INSERT cases,
so the extra logic to find the root rel is just a waste of complexity
and cycles.
Etsuro Fujita, reviewed by Amit Langote
Discussion: https://postgr.es/m/93cf9816-2f7d-0f67-8ed2-4a4e497a6ab8@lab.ntt.co.jp
Ability to advance both physical and logical replication slots using a
new user function pg_replication_slot_advance().
For a logical advance, that means records are consumed as fast as possible
and changes are not given to the output plugin for sending. This makes the
2nd phase (after we reached SNAPBUILD_FULL_SNAPSHOT) of replication slot
creation faster, especially when there are big transactions, as the reorder
buffer does not have to deal with data changes and does not have to spill
to disk.
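A minimal usage sketch (the slot name is hypothetical); the function takes
a slot name and a target LSN, and reports the position the slot was
actually advanced to:

    SELECT * FROM pg_replication_slot_advance('myslot', pg_current_wal_lsn());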
Author: Petr Jelinek
Reviewed-by: Simon Riggs
This creates a single function JsonEncodeDateTime which will format these
data types in an efficient and consistent manner. This will be all the
more important when we come to jsonpath, so we don't have to implement yet
more code doing the same thing in two more places.
This also extends the code to handle time and timetz types which were
not previously handled specially. This requires exposing the time2tm and
timetz2tm functions.
Patch from Nikita Glukhov
Commit 7012b132d, which added the ability to push down aggregates and
grouping to the remote server, wasn't careful to ensure that the remote
server would have the same idea we do about which columns are the grouping
columns, in cases where there are textually identical GROUP BY expressions.
Such cases typically led to "targetlist item has multiple sortgroupref
labels" errors.
To fix this reliably, switch over to using "GROUP BY column-number" syntax
rather than "GROUP BY expression" in transmitted queries, and adjust
foreign_grouping_ok() to be more careful about duplicating the sortgroupref
labeling of the local pathtarget.
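For illustration (foreign table and column names are hypothetical), a
grouped expression is now referenced by its output-column number in the
deparsed remote query:

    -- local query against foreign table ft
    SELECT x + 1, sum(y) FROM ft GROUP BY x + 1;

    -- remote query is now deparsed along the lines of
    --   SELECT (x + 1), sum(y) FROM remote_tab GROUP BY 1
    -- rather than repeating the expression in the GROUP BY clause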
Per bug #14890 from Sean Johnston. Back-patch to v10 where the buggy code
was introduced.
Jeevan Chalke, reviewed by Ashutosh Bapat
Discussion: https://postgr.es/m/20171107134948.1508.94783@wrigleys.postgresql.org
If a query against an inheritance tree runs concurrently with an ALTER
TABLE that's disinheriting one of the tree members, it's possible to get
a "could not find inherited attribute" error because after obtaining lock
on the removed member, make_inh_translation_list sees that its columns
have attinhcount=0 and decides they aren't the columns it's looking for.
An ideal fix, perhaps, would avoid including such a just-removed member
table in the query at all; but there seems no way to accomplish that
without adding expensive catalog rechecks or creating a likelihood of
deadlocks. Instead, let's just drop the check on attinhcount. In this
way, a query that includes a just-disinherited child will still
succeed, which is not a completely unreasonable behavior.
This problem has existed for a long time, so back-patch to all supported
branches. Also add an isolation test verifying related behaviors.
Patch by me; the new isolation test is based on Kyotaro Horiguchi's work.
Discussion: https://postgr.es/m/20170626.174612.23936762.horiguchi.kyotaro@lab.ntt.co.jp
If we flatten a subquery whose target list contains constants or
expressions, and those output columns are used in GROUPING SET columns,
the planner was capable of doing the wrong thing by merging a pulled-up
expression into the surrounding expression during const-simplification.
Then the late processing that attempts to match subexpressions to
grouping sets would fail to find a match, with the effect that those
expressions would not go to null when expected.
To fix, wrap such subquery outputs in PlaceHolderVars, ensuring that
they preserve their separate identity throughout the planner's expression
processing. This is a bit of a band-aid, because the wrapper defeats
const-simplification even in places where it would be safe to allow it.
But a nicer fix would likely be too invasive to back-patch, and the
consequences of the missed optimizations probably aren't large in most
cases.
Back-patch to 9.5 where grouping sets were introduced.
Heikki Linnakangas, with small mods and better test cases by me;
additional review by Andrew Gierth
Discussion: https://postgr.es/m/7dbdcf5c-b5a6-ef89-4958-da212fe10176@iki.fi
Add the ability to label a column's default value in the catalog header,
and implement this for pg_attribute. A new function in Catalog.pm is
used to fill in a tuple with defaults. The build process will complain
loudly if a catalog entry is incomplete.
Commit 8137f2c323 labeled variable-length columns for the C preprocessor.
Expose that label to genbki.pl so we can exclude those columns from schema
macros in a general fashion. Also, format schema macro entries according
to their types.
This means slightly less code maintenance, but more importantly it's a
proving ground for mechanisms intended to be used in later commits.
While at it, I (Álvaro) couldn't resist making some changes in
genbki.pl: rename some functions to actually indicate their purpose
instead of actively misleading onlookers; and don't iterate on the whole
of pg_type to find the entry for each catalog row, using a hash instead
of an array.
Author: John Naylor, some changes by Álvaro Herrera
Discussion: https://postgr.es/m/CAJVSVGVJHwD8sfDfZW9TbCHWKf=C1YDRM-rF=2JenRU_y+VcFg@mail.gmail.com
This was nearly the same code. Extend wait_for_catchup to allow waiting
for pg_current_wal_lsn() and use that in the subscription tests. Also
change one use in the pg_rewind tests to use this.
Also remove some broken code in wait_for_catchup and
wait_for_slot_catchup. The error message in case the waiting failed
was meant to show the current LSN, but the way it was written it never
worked. Since nobody ever cared, just remove it.
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
This should have been done in commit 18ce3a4ab, which added that parameter
to ExplainOneQuery, but it was overlooked. This makes it impossible for
a user of the hook to pass the queryEnv down to ExplainOnePlan.
It's too late to change this API in v10, I suppose, but fortunately
passing NULL to ExplainOnePlan will work in nearly all interesting
cases in v10. That might not be true forever, so we'd better fix it.
Tatsuro Yamada, reviewed by Thomas Munro
Discussion: https://postgr.es/m/890e8dd9-c1c7-a422-6892-874f5eaee048@lab.ntt.co.jp
The ~> (cube, int) operator was especially designed for KNN-GiST search.
However, KNN-GiST supports only ascending ordering of results. Nevertheless
it would be useful to support descending ordering by the ~> (cube, int)
operator as well. We provide a workaround for that: a negative coordinate
gives the negated value of the corresponding cube bound. Therefore, a KNN
search using a negative coordinate has the effect of descending ordering by
that cube bound.
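A sketch of the workaround (table and column names are hypothetical):

    -- ascending KNN-GiST ordering by the bound selected by coordinate 1
    SELECT * FROM cubes ORDER BY c ~> 1 LIMIT 10;

    -- a negative coordinate returns the negated bound, so the same
    -- ascending scan yields results in descending order of that bound
    SELECT * FROM cubes ORDER BY c ~> -1 LIMIT 10;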
Author: Alexander Korotkov
Reviewed by: Tomas Vondra, Andrey Borodin
Discussion: https://www.postgresql.org/message-id/flat/a9657f6a-b497-36ff-e56-482a2c7e3292@2ndquadrant.com
The ~> (cube, int) operator was especially designed for KNN-GiST search.
However, it appears that KNN-GiST search can't work correctly with the
current behavior of this operator when the dataset contains cubes of
variable dimensionality. In that case, the same value of the second
operator argument can point to a different dimension depending on the
dimensionality of the particular cube. Such behavior is incompatible with
GiST indexing of cubes, and KNN-GiST doesn't work correctly for it.
This patch changes the behavior of the ~> (cube, int) operator by
introducing a dimension numbering scheme in which the value of the second
argument unambiguously identifies the dimension. With the new behavior,
this operator can be correctly supported by KNN-GiST. Relevant changes to
the cube operator class are also included.
Backpatch to v9.6 where the operator was introduced.
Since the behavior of the ~> (cube, int) operator is changed, dependent
objects must be refreshed after an upgrade: expression indexes using this
operator must be reindexed, materialized views must be rebuilt, and stored
procedures and client code must be revised to correctly use the new
behavior. That should be mentioned in the release notes.
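For example (object names are hypothetical), the post-upgrade maintenance
would look something like:

    REINDEX INDEX cube_bound_idx;             -- expression index using ~>
    REFRESH MATERIALIZED VIEW cube_bound_mv;  -- materialized view using ~>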
Noticed by: Tomas Vondra
Author: Alexander Korotkov
Reviewed by: Tomas Vondra, Andrey Borodin
Discussion: https://www.postgresql.org/message-id/flat/a9657f6a-b497-36ff-e56-482a2c7e3292@2ndquadrant.com
These functions are stated to be Oracle-compatible, but they weren't.
Yugo Nagata noticed that while our code returns zero for a zero or
negative fourth parameter (occur_index), Oracle throws an error.
Further testing by me showed that there was also a discrepancy in the
interpretation of a negative third parameter (beg_index): Oracle thinks
that a negative beg_index indicates the last place where the target
substring can *begin*, whereas our code thinks it is the last place
where the target can *end*.
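To illustrate the beg_index discrepancy, assuming these are the instr()
sample functions from the documentation, and borrowing the example values
from Oracle's documentation of INSTR:

    SELECT instr('CORPORATE FLOOR', 'OR', -3, 2);

Oracle returns 2: counting three characters back from the end gives
position 13, and the second occurrence of 'OR' that *begins* at or before
position 13 starts at position 2. The old sample code instead looked for
occurrences that *end* by that position, so it could return a different
answer.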
Adjust the sample code to behave like Oracle in both these respects.
Also change it to be a CDATA[] section, to simplify copying and pasting
out of the documentation source file. And fix minor problems in the
introductory comment, which wasn't very complete or accurate.
Back-patch to all supported branches. Although this patch only touches
documentation, we should probably call it out as a bug fix in the next
minor release notes, since users who have adopted the functions will
likely want to update their versions.
Yugo Nagata and Tom Lane
Discussion: https://postgr.es/m/20171229191705.c0b43a8c.nagata@sraoss.co.jp
PL/pgSQL "pins" internally generated portals so that user code cannot
close them by guessing their names. Add this functionality to PL/Perl
and PL/Python as well, preventing users from manually closing cursors
created by spi_query and plpy.cursor, respectively. (PL/Tcl does not
currently offer any cursor functionality.)
It seems incorrect to assume that the list of CkptSortItems can never
contain duplicate page numbers: concurrent activity could result in some
page getting dropped from a low-numbered buffer and later loaded into a
high-numbered buffer while BufferSync is scanning the buffer pool.
If that happened, the comparator would give self-inconsistent results,
potentially confusing qsort(). Saving one comparison step is not worth
possibly getting the sort wrong.
So far as I can tell, nothing would actually go wrong given our current
implementation of qsort(). It might get a bit slower than expected
if there were a large number of duplicates of one value, but that's
surely a probability-epsilon case. Still, the comment is wrong,
and if we ever switched to another sort implementation it might be
less forgiving.
In passing, avoid casting away const-ness of the argument pointers;
I've not seen any compiler complaints from that, but it seems likely
that some compilers would not like it.
Back-patch to 9.6 where this code came in, just in case I've underestimated
the possible consequences.
Discussion: https://postgr.es/m/18437.1515607610@sss.pgh.pa.us
PL/pgSQL "pins" internally generated (unnamed) portals so that user code
cannot close them by guessing their names. This logic is also useful in
other languages and really for any code. So move that logic into SPI.
An unnamed portal obtained through SPI_cursor_open() and related
functions is now automatically pinned, and SPI_cursor_close()
automatically unpins a portal that is pinned.
In the core distribution, this affects PL/Perl and PL/Python, preventing
users from manually closing cursors created by spi_query and
plpy.cursor, respectively. (PL/Tcl does not currently offer any cursor
functionality.)
Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com>
The previous code gave the same error message for attempting to drop
pinned and active portals, but those are separate states, so give
separate error messages.
Previously, aggregate transition and combination functions were invoked
by special-case code in nodeAgg.c, evaluating input and filters
separately using the expression evaluation machinery. That turns out
not to be great for performance, for several reasons:
- repeated expression evaluations have some cost
- the transition function invocations are poorly predicted, as
  commonly there are multiple aggregates in a query, resulting in the
  same call stack invoking different functions.
- filter and input computation had to be done separately
- the special case code made it hard to implement JITing of the whole
transition function invocation
Address this by building one large expression that computes input,
evaluates filters, and invokes transition functions.
This leads to moderate speedups in queries bottlenecked by aggregate
computations, and enables large speedups for similar cases once JITing
is done.
There's potential for further improvement:
- It'd be nice if we could simplify the somewhat expensive
aggstate->all_pergroups lookups.
- Right now there's still an advance_transition_function invocation in
nodeAgg.c, leading to some code duplication.
Author: Andres Freund
Discussion: https://postgr.es/m/20170901064131.tazjxwus3k2w3ybh@alap3.anarazel.de
As src/backend/access/transam/README says, PageGetLSN may only be called
by processes holding either an exclusive lock on the buffer, or a shared
lock on the buffer plus the buffer header lock. Therefore, any place that
only holds
a shared buffer lock must use BufferGetLSNAtomic instead of PageGetLSN,
which internally obtains buffer header lock prior to reading the LSN.
A few callsites failed to comply with this rule. This was detected by
running all tests under a new (not committed) assertion that verifies
PageGetLSN locking contract. All but one of the callsites that failed
the assertion are fixed by this patch. Remaining callsites were
inspected manually and determined not to need any change.
The exception (unfixed callsite) is in TestForOldSnapshot, which only
has a Page argument, making it impossible to access the corresponding
Buffer from it. Fixing that seems like a much larger patch that will have to
be done separately; and that's just as well, since it was only
introduced in 9.6 and other bugs are much older.
Some of these bugs are ancient; backpatch all the way back to 9.3.
Authors: Jacob Champion, Asim Praveen, Ashwin Agrawal
Reviewed-by: Michaël Paquier
Discussion: https://postgr.es/m/CABAq_6GXgQDVu3u12mK9O5Xt5abBZWQ0V40LZCE+oUf95XyNFg@mail.gmail.com
These are compatible with Oracle and required for the datetime template
language for jsonpath in an upcoming patch.
Nikita Glukhov and Andrew Dunstan, reviewed by Pavel Stehule.
After having gotten rid of PortalGetHeapMemory(), there seems little
reason to keep one Portal access macro around that offers no actual
abstraction and isn't consistently used anyway.
Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com>
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Rename PortalMemory to TopPortalContext, to avoid confusion with
PortalContext and align naming with similar top-level memory contexts.
Rename PortalData's "heap" field to portalContext. The "heap" naming
seems quite antiquated and confusing. Also get rid of the
PortalGetHeapMemory() macro and access the field directly, as we do
for other portal fields, since this abstraction doesn't buy anything.
Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com>
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
The initial implementation of list_qsort(), from commit ab7271677,
re-used the ListCells of the input list while not touching the List
header. This meant that anybody who still had a pointer to the
original header would now be in possession of a corrupted list,
a problem that seems sure to bite us eventually.
One possible solution is to re-use the original List header as well,
giving the function the semantics of update-in-place. However, that
doesn't seem like a very good idea either given the way that the
function is used in the planner: create_path functions aren't normally
supposed to modify their input lists. It doesn't look like there would
be a problem today, but it's not hard to foresee a time when modifying
a list of Paths in-place could have side-effects on some other append
path.
On the whole, and in view of the likelihood that this function might
be used in other contexts in the future, it seems best to get rid of
the micro-optimization of re-using the input list cells. Just build
a new list.
Discussion: https://postgr.es/m/16912.1515449066@sss.pgh.pa.us