Unlike "make" itself, the MSVC build process recognized a continuation
even with whitespace after the backslash. (Due to a typo, some code
sites accepted the letter "s" instead of whitespace.) Also, it would
consume any number of newlines following a single backslash. This is
mere cleanup; those behaviors were unlikely to cause bugs.
Achieve this by consistently using four-argument Solution::AddProject()
calls. Remove ad hoc Makefile parsing made redundant by doing that.
Michael Paquier and Noah Misch, reviewed by MauMau.
EXPLAIN ANALYZE shows the numbers of exact/lossy blocks that a bitmap heap
scan processes. Previously, when both numbers were zero, it still displayed
the bare prefix "Heap Blocks:" in TEXT output format, which is strange and
could confuse users. This commit suppresses that unnecessary output.
Backpatch to 9.4, where EXPLAIN ANALYZE was changed to display this
information.
Etsuro Fujita
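For illustration, a minimal sketch of the guard this implies in explain.c's
TEXT-format output (variable names here are assumptions, not the exact patch):

    /* Print the "Heap Blocks:" line only when there is something to say. */
    if (exact_pages > 0 || lossy_pages > 0)
    {
        appendStringInfoSpaces(es->str, es->indent * 2);
        appendStringInfoString(es->str, "Heap Blocks:");
        if (exact_pages > 0)
            appendStringInfo(es->str, " exact=%ld", exact_pages);
        if (lossy_pages > 0)
            appendStringInfo(es->str, " lossy=%ld", lossy_pages);
        appendStringInfoChar(es->str, '\n');
    }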
Complete SET search_path = ... to non-temporary and non-toast
schemas. Since there is pretty much no use case for adding those to the
search path, and there can be many of them, it's helpful to exclude them.
It'd be nicer to complete multiple search path elements, but that's
not easy.
Jeff Janes
Commit 1b86c81d2d fixed the decoding of toasted columns for the rows
contained in one xl_heap_multi_insert record. But that's not enough,
because heap_multi_insert() will first toast all the passed-in rows and
then emit several *_multi_insert records, one for each page it fills
with tuples.
Add an XLOG_HEAP_LAST_MULTI_INSERT flag, which is set in
xl_heap_multi_insert->flag to denote that this multi_insert record is
the last one emitted by a heap_multi_insert() call. Then use that flag
in decode.c to set clear_toast_afterwards only in the right situation.
Expand the number of rows inserted via COPY in the corresponding
regression test to make sure that more than one heap page is filled
with tuples by one heap_multi_insert() call.
Backpatch to 9.4 like the previous commit.
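A self-contained toy model of the decision (the flag's bit value and the
struct layout are assumptions; only the logic mirrors the description above):

    #include <stdbool.h>
    #include <stdio.h>

    #define XLOG_HEAP_LAST_MULTI_INSERT (1 << 1)    /* bit value assumed */

    typedef struct
    {
        int ntuples;
        int flags;
    } MultiInsertRecord;

    /*
     * clear_toast_afterwards may be set only on the last tuple of the
     * record flagged as last, i.e. once per heap_multi_insert() call,
     * not once per xl_heap_multi_insert record.
     */
    static bool
    clear_toast_afterwards(const MultiInsertRecord *rec, int tupno)
    {
        return (rec->flags & XLOG_HEAP_LAST_MULTI_INSERT) != 0 &&
               tupno == rec->ntuples - 1;
    }

    int
    main(void)
    {
        /* one COPY filling two pages: only tuple 2 of record 1 clears toast */
        MultiInsertRecord recs[] = {{3, 0}, {3, XLOG_HEAP_LAST_MULTI_INSERT}};

        for (int r = 0; r < 2; r++)
            for (int t = 0; t < recs[r].ntuples; t++)
                printf("record %d tuple %d: clear=%d\n",
                       r, t, clear_toast_afterwards(&recs[r], t));
        return 0;
    }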
ExecEvalWholeRowVar incorrectly supposed that it could "bless" the source
TupleTableSlot just once per query. But if the input is coming from an
Append (or, perhaps, other cases?) more than one slot might be returned
over the query run. This led to "record type has not been registered"
errors when a composite datum was extracted from a non-blessed slot.
This bug has been there a long time; I guess it escaped notice because when
dealing with subqueries the planner tends to expand whole-row Vars into
RowExprs, which don't have the same problem. It is possible to trigger
the problem in all active branches, though, as illustrated by the added
regression test.
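The shape of the fix, roughly (a sketch, not the exact patch): bless
whichever slot arrives on every call instead of only the first one.

    TupleDesc   slot_tupdesc = slot->tts_tupleDescriptor;

    /* If the slot's rowtype is anonymous RECORD, register it now so the
     * composite datum we build is self-identifying; a previously-seen
     * slot already has a registered typmod, making this cheap. */
    if (slot_tupdesc->tdtypeid == RECORDOID && slot_tupdesc->tdtypmod < 0)
        assign_record_type_typmod(slot_tupdesc);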
This command provides an automated way to create foreign table definitions
that match remote tables, thereby reducing tedium and chances for error.
In this patch, we provide the necessary core-server infrastructure and
implement the feature fully in the postgres_fdw foreign-data wrapper.
Other wrappers will throw a "feature not supported" error until/unless
they are updated.
Ronan Dunklau and Michael Paquier, additional work by me
When the psql variable ECHO is set to 'errors', only failed SQL commands
are printed to standard error output. This patch also adds the -b option
to psql, which is equivalent to setting the variable ECHO to 'errors'.
Pavel Stehule, reviewed by Fabrízio de Royes Mello, Samrat Revagade,
Kumar Rajeev Rastogi, Abhijit Menon-Sen, and me.
While the x output of "select x from t group by x" can be presumed unique,
this does not hold for "select x, generate_series(1,10) from t group by x",
because we may expand the set-returning function after the grouping step.
(Perhaps that should be re-thought; but considering all the other oddities
involved with SRFs in targetlists, it seems unlikely we'll change it.)
Put a check in query_is_distinct_for() so it's not fooled by such cases.
Back-patch to all supported branches.
David Rowley
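Roughly, the added guard (a sketch; the exact placement inside
query_is_distinct_for() may differ):

    /* Grouped output can't be presumed distinct if a set-returning
     * function in the targetlist may be expanded after grouping. */
    if (query->groupClause &&
        !expression_returns_set((Node *) query->targetList))
    {
        /* existing logic: match colnos against the GROUP BY columns */
    }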
We used to print this information only in verbose mode, but it's argued
that it's useful enough to print always; one reason being that this
provides some documentation about which Postgres versions the dump is
meant to reload into.
Jing Wang, reviewed by Jeevan Chalke
Previously, when calculations on the need for toast tables changed,
pg_upgrade could not handle cases where the new cluster needed a TOAST
table and the old cluster did not. (It already handled the opposite
case.) This fixes the "OID mismatch" error typically generated in this
case.
Backpatch through 9.2.
Per recent analysis by Andres Freund, this implementation is in fact
unsafe, because ARMv5 has weak memory ordering, which means that the
CPU could move loads or stores across the volatile store performed by
the default S_UNLOCK. We could try to fix this, but have no ARMv5
hardware to test on, so removing support seems better. We can still
support ARMv5 systems on GCC versions new enough to have built-in
atomics support for this platform, and can also re-add support for
the old way if someone has hardware that can be used to test a fix.
However, since the requirement to use a relatively-new GCC hasn't
been an issue for ARMv6 or ARMv7, which lack the swpb instruction
altogether, perhaps it won't be an issue for ARMv5 either.
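For illustration (a simplified sketch, with slock_t reduced to int): the
difference between the removed pattern and a builtin that carries the
needed barrier.

    /* Unsafe on a weakly ordered CPU: the volatile store constrains the
     * compiler, but nothing stops the CPU from moving critical-section
     * loads/stores past the unlocking store. */
    #define S_UNLOCK_UNSAFE(lock)  (*((volatile int *) (lock)) = 0)

    /* With a new-enough GCC, the atomic builtin emits a release barrier. */
    #define S_UNLOCK_SAFE(lock)    __sync_lock_release((int *) (lock))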
When decoding the results of a HEAP2_MULTI_INSERT (currently only
generated by COPY FROM) toast columns for all but the last tuple
weren't replaced by their actual contents before being handed to the
output plugin. The reassembled toast datums were discarded after
every REORDER_BUFFER_CHANGE_(INSERT|UPDATE|DELETE), which is correct
for plain inserts, updates, and deletes, but not for multi inserts -
there we
generate several REORDER_BUFFER_CHANGE_INSERTs for a single
xl_heap_multi_insert record.
To solve the problem add a clear_toast_afterwards boolean to
ReorderBufferChange's union member that's used by modifications. All
row changes except multi_inserts always set it to true; multi_insert
sets it only for the last change generated.
Add a regression test covering decoding of multi_inserts - there was
none at all before.
Backpatch to 9.4 where logical decoding was introduced.
Bug found by Petr Jelinek.
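The consuming side, roughly (a sketch of reorderbuffer.c's apply loop;
names as in the description above, details assumed):

    case REORDER_BUFFER_CHANGE_INSERT:
    case REORDER_BUFFER_CHANGE_UPDATE:
    case REORDER_BUFFER_CHANGE_DELETE:
        rb->apply_change(rb, txn, relation, change);

        /* Previously unconditional; now reassembled toast chunks survive
         * until the change that marks the end of the multi_insert. */
        if (change->data.tp.clear_toast_afterwards)
            ReorderBufferToastReset(rb, txn);
        break;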
The isxdigit() calls relied on undefined behavior: a plain char argument
can be negative, and the ctype.h functions require a value representable
as an unsigned char (or EOF). The isascii() call was well-defined, but
our prevailing style is to include the cast.
Back-patch to 9.4, where the isxdigit() calls were introduced.
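The well-defined idiom, as a standalone example (generic code, not the
patched call sites):

    #include <ctype.h>
    #include <stdio.h>

    int
    main(void)
    {
        const char *p = "7f\xe9";   /* 0xe9 is negative as a signed char */

        for (; *p; p++)
        {
            /* Cast first: the argument must be representable as an
             * unsigned char (or be EOF); a negative plain char is UB. */
            if (isxdigit((unsigned char) *p))
                printf("0x%02x is a hex digit\n", (unsigned char) *p);
        }
        return 0;
    }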
Previously the code for receiving data and for polling the socket was
included inline in pg_receivexlog's main loop. This commit splits it
out into separate functions, improving the readability of the main
loop and making future pg_receivexlog-related patches simpler.
Although nodeAgg.c currently uses the same per-group memory context for
all groups of a query, that might change in future. Avoid assuming it.
This costs us an extra AggCheckCallContext() call per group, but that's
pretty cheap and is probably good from a safety standpoint anyway.
Back-patch to 9.4 in case any third-party code copies this logic.
Andrew Gierth
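The pattern this settles on, sketched as a hypothetical transition
function (my_transfn is illustrative, not code from the patch):

    #include "postgres.h"
    #include "fmgr.h"

    PG_FUNCTION_INFO_V1(my_transfn);

    Datum
    my_transfn(PG_FUNCTION_ARGS)
    {
        MemoryContext aggcontext;

        /* Look the context up on every call; with per-group contexts it
         * may differ from one group to the next, so never cache it. */
        if (!AggCheckCallContext(fcinfo, &aggcontext))
            elog(ERROR, "my_transfn called in non-aggregate context");

        /* long-lived transition state would be allocated in aggcontext */
        PG_RETURN_INT32(PG_GETARG_INT32(0) + PG_GETARG_INT32(1));
    }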
The previous design exposed the input and output ExprContexts of the
Agg plan node, but work on grouping sets has suggested that we'll regret
doing that. Instead provide more narrowly-defined APIs that can be
implemented in multiple ways, namely a way to get a short-term memory
context and a way to register an aggregate shutdown callback.
Back-patch to 9.4 where the bad APIs were introduced, since we don't
want third-party code using these APIs and then having to change in 9.5.
Andrew Gierth
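Usage of the narrower APIs, roughly (a sketch; my_cleanup and my_state
are illustrative names):

    /* Short-term scratch space, reset between calls: */
    MemoryContext tmpcontext = AggGetTempMemoryContext(fcinfo);
    MemoryContext oldcontext = MemoryContextSwitchTo(tmpcontext);

    /* ... build transient data here ... */

    MemoryContextSwitchTo(oldcontext);

    /* Run my_cleanup(my_state) when the aggregation is shut down: */
    AggRegisterCallback(fcinfo, my_cleanup, PointerGetDatum(my_state));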
Allow PL/Python functions to return arrays of composite types.
Also, fix the restriction that plpy.prepare/plpy.execute couldn't
handle query parameters or result columns of composite types.
In passing, adopt a saner arrangement for where to release the
tupledesc reference counts acquired via lookup_rowtype_tupdesc.
The callers of PLyObject_ToCompositeDatum were doing the lookups,
but then the releases happened somewhere down inside subroutines
of PLyObject_ToCompositeDatum, which is bizarre and bug-prone.
Instead release in the same function that acquires the refcount.
Ed Behn and Ronan Dunklau, reviewed by Abhijit Menon-Sen
Creating the Unix-domain socket in the build directory can run into
name-length limitations. Therefore, create the socket file in the
default temporary directory of the operating system. Keep the temporary
data directory etc. in the build tree.
If a connection committed or rolled back any transactions within a
PGSTAT_STAT_INTERVAL pacing interval without accessing any tables,
the reporting of those statistics would be held up until the
connection closed or until it ended a PGSTAT_STAT_INTERVAL interval
in which it had accessed a table. This could result in under-
reporting of transactions for an extended period, followed by a
spike in reported transactions.
While this is arguably a bug, the impact is minimal, primarily
affecting, and being affected by, monitoring software. It might
cause more confusion than benefit to change the existing behavior
in released stable branches, so apply only to master and the 9.4
beta.
Gurjeet Singh, with review and editing by Kevin Grittner,
incorporating suggested changes from Abhijit Menon-Sen and Tom
Lane.
The old name wasn't very descriptive of the actual contents of the
directory, which are historical snapshots in the snapshots/
subdirectory and mapping data for rewritten tuples in
mappings/. There's been a fair amount of discussion about what would be a
good name. I'm settling for pg_logical because it's likely that
further data around logical decoding and replication will need saving
in the future.
Also add the missing entry for the directory into storage.sgml's list
of PGDATA contents.
Bumps catversion as the data directories won't be compatible.
This function wasn't originally thought to be really user-facing,
because converting a table to a view isn't something we expect people
to do manually. So not all that much effort was spent on the error
messages; in particular, while the code will complain that you got
the column types wrong it won't say exactly what they are. But since
we repurposed the code to also check compatibility of rule RETURNING
lists, it's definitely user-facing. It now seems worthwhile to add
errdetail messages showing exactly what the conflict is when there's
a mismatch of column names or types. This is prompted by bug #10836
from Matthias Raffelsieper, which might have been forestalled if the
error message had reported the wrong column type as being "record".
Back-patch to 9.4, but not into older branches where the set of
translatable error strings is supposed to be stable.
The autocommit-off mode works by issuing an implicit BEGIN just before
any command that is not already in a transaction block and is not itself
a BEGIN or other transaction-control command, nor a command that
cannot be executed inside a transaction block. This commit prevents psql
from issuing such an implicit BEGIN before ALTER SYSTEM because it's
not allowed inside a transaction block.
Backpatch to 9.4 where ALTER SYSTEM was added.
Report by Feike Steenbergen
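The shape of the check in psql's common.c, simplified (helper names
approximate):

    /* Inside the function that decides whether a command may get an
     * implicit BEGIN: ALTER SYSTEM joins VACUUM & friends in refusing. */
    if (wordlen == 5 && pg_strncasecmp(query, "alter", 5) == 0)
    {
        query += wordlen;
        query = skip_white_space(query);        /* helper in common.c */
        wordlen = word_length(query);           /* name approximate */

        if (wordlen == 6 && pg_strncasecmp(query, "system", 6) == 0)
            return true;                        /* no implicit BEGIN */
    }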
Historically these database properties could be manipulated only by
manually updating pg_database, which is error-prone and only possible for
superusers. But there seems no good reason not to allow database owners to
set them for their databases, so invent CREATE/ALTER DATABASE options to do
that. Adjust a couple of places that were doing it the hard way to use the
commands instead.
Vik Fearing, reviewed by Pavel Stehule
Most of the existing option names are keywords anyway, but we can get rid
of LC_COLLATE and LC_CTYPE as keywords known to the lexer/grammar. This
immediately reduces the size of the grammar tables by about 8KB, and will
save more when we add additional CREATE/ALTER DATABASE options in future.
A side effect of the implementation is that the CONNECTION LIMIT option
can now also be spelled CONNECTION_LIMIT. We choose not to document this,
however.
Vik Fearing, based on a suggestion by me; reviewed by Pavel Stehule
Almost ten years ago, commit e48322a6d6 broke
the logic in ACX_PTHREAD by looping through all the possible flags rather
than stopping with the first one that would work. This meant that
$acx_pthread_ok was no longer meaningful after the loop; it would usually
be "no", whether or not we'd found working thread flags. The reason nobody
noticed is that Postgres doesn't actually use any of the symbols set up
by the code after the loop. Rather than complicate things some more to
make it work as designed, let's just remove all that dead code, and thereby
save a few cycles in each configure run.
The output buffer size in unaccent_lexize() was calculated as input string
length times pg_database_encoding_max_length(), which effectively assumes
that replacement strings aren't more than one character. While that was
all that we previously documented it to support, the code actually has
always allowed replacement strings of arbitrary length; so if you tried
to make use of longer strings, you were at risk of buffer overrun. To fix,
use an expansible StringInfo buffer instead of trying to determine the
maximum space needed a priori.
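The shape of the fix (a sketch; lookup_replacement is a placeholder for
the rules lookup, not a function in the patch):

    StringInfoData buf;

    initStringInfo(&buf);       /* grows on demand; no precomputed bound */
    while (srclen > 0)
    {
        int         char_len = pg_mblen(src);
        const char *repl = lookup_replacement(src, char_len); /* placeholder */

        if (repl != NULL)
            appendStringInfoString(&buf, repl);  /* any length is safe */
        else
            appendBinaryStringInfo(&buf, src, char_len);

        src += char_len;
        srclen -= char_len;
    }
    /* result: buf.data, length buf.len */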
This would be a security issue if unaccent rules files could be installed
by unprivileged users; but fortunately they can't, so in the back branches
the problem can be labeled as improper configuration by a superuser.
Nonetheless, a memory stomp isn't a nice way of reacting to improper
configuration, so let's back-patch the fix.