"\pset format csv", or --csv, selects comma-separated values table format.
This is compliant with RFC 4180, except that we aren't too picky about
whether the record separator is LF or CRLF; also, the user may choose a
field separator other than comma.
This output format is directly compatible with the server's COPY CSV
format, and will also be useful as input to other programs. It's
considerably safer for that purpose than the old recommendation to
use "unaligned" format, since the latter couldn't handle data
containing the field separator character.
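For illustration, here is a minimal sketch of the RFC 4180 quoting rule
with a configurable separator (illustrative C, not psql's actual code):
a field is quoted only if it contains the separator, a double quote, or
a CR/LF, and any embedded double quote is doubled.

    #include <stdio.h>
    #include <string.h>

    /* Sketch of RFC 4180 field quoting; not psql's implementation. */
    static void
    emit_csv_field(FILE *out, const char *field, char sep)
    {
        if (strchr(field, sep) || strchr(field, '"') ||
            strchr(field, '\n') || strchr(field, '\r'))
        {
            fputc('"', out);
            for (; *field; field++)
            {
                if (*field == '"')
                    fputc('"', out);    /* double embedded quotes */
                fputc(*field, out);
            }
            fputc('"', out);
        }
        else
            fputs(field, out);
    }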
Daniel Vérité, reviewed by Fabien Coelho and David Fetter, some
tweaking by me
Discussion: https://postgr.es/m/a8de371e-006f-4f92-ab72-2bbe3ee78f03@manitou-mail.org
As penance for the "\pset format latex" silliness, add some regression
test coverage for the off-the-beaten-path output formats, which formerly
had exactly no coverage, except for some poorly-thought-out (unreadable,
repetitive, and incomplete) tests for asciidoc format.
I make no claims for the behavior exposed here actually being correct;
these test cases are just designed to ensure full code coverage in
fe_utils/print.c. This brings the line coverage for that file up
from ~60% to ~93%.
Commit eaf746a5b unintentionally made psql's "latex" output format
inaccessible, since not only "latex" but all abbreviations of it
were considered ambiguous against "latex-longtable". Let's go
back to the longstanding behavior that all shortened versions
mean "latex", and you have to write at least "latex-" to get
"latex-longtable". This leaves the only difference from pre-v12
behavior being that "\pset format a" is considered ambiguous.
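The intended matching rule, as a hedged sketch (illustrative C, not the
actual psql code):

    #include <string.h>

    /*
     * Sketch of the rule: any nonempty prefix of "latex" selects latex;
     * selecting latex-longtable requires writing at least "latex-".
     */
    static const char *
    match_latex_format(const char *value)
    {
        size_t      len = strlen(value);

        if (len > 0 && len <= strlen("latex") &&
            strncmp(value, "latex", len) == 0)
            return "latex";
        if (len > strlen("latex") &&
            strncmp(value, "latex-longtable", len) == 0)
            return "latex-longtable";
        return NULL;            /* no match */
    }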
The fact that the regression tests didn't expose this is pretty bad,
but fixing it is material for a separate commit.
Discussion: https://postgr.es/m/cb7e1caf-3ea6-450d-af28-f524903a030c@manitou-mail.org
A set of failures on buildfarm machines is proving that this is not
quite ready yet because of another set of issues:
- MSVC scripts assume that REGRESS_OPTS can only use top_builddir. Some
test suites actually finish by using top_srcdir, like pg_stat_statements,
which causes its regression tests to never run.
- Trying to enforce top_builddir does not work either when using VPATH,
as this is not recognized properly.
- TAP tests of bloom are unstable on various platforms, causing various
failures.
The set of scripts in charge of running the regression tests for MSVC
currently runs under the assumption that only $(top_builddir) can be used
in option values defined in REGRESS_OPTS, and those options need to have a
specific format as well to be parsed correctly, so fix the Makefile
values so that those are correctly set.
Per complaints from buildfarm members dory and whelk, with some extra
testing done on my side with MSVC to check this patch.
The recent commit 03faa4a8 enabled those tests; however, several
buildfarm members are complaining about their stability on Windows and
macOS. This will keep the buildfarm green while the root problem is
investigated.
Discussion: https://postgr.es/m/20181126003351.GE1776@paquier.xyz
The following options are added for extensions:
- TAP_TESTS, to allow an extension to run TAP tests, which are the ones
present in t/*.pl. A subset of tests can always be run with the
existing PROVE_TESTS for developers.
- ISOLATION, to define a list of isolation tests.
- ISOLATION_OPTS, to pass custom options to isolation_tester.
Custom Makefile targets have accumulated across the tree over the past
couple of releases to cover this lack of facility in PGXS when using
those test suites; they are all now replaced with the new flags, without
reducing the test coverage. This also fixes an issue with
contrib/bloom/, which had its own custom target to trigger its TAP
tests that was not part of the main check runs.
Author: Michael Paquier
Reviewed-by: Adam Berlin, Álvaro Herrera, Tom Lane, Nikolay Shaplov,
Arthur Zakirov
Discussion: https://postgr.es/m/20180906014849.GG2726@paquier.xyz
recovery.conf settings are now set in postgresql.conf (or other GUC
sources). Currently, all the affected settings are PGC_POSTMASTER;
this could be refined in the future case by case.
Recovery is now initiated by a file recovery.signal. Standby mode is
initiated by a file standby.signal. The standby_mode setting is
gone. If a recovery.conf file is found, an error is issued.
The trigger_file setting has been renamed to promote_trigger_file as
part of the move.
The documentation chapter "Recovery Configuration" has been integrated
into "Server Configuration".
pg_basebackup -R now appends settings to postgresql.auto.conf and
creates a standby.signal file.
Author: Fujii Masao <masao.fujii@gmail.com>
Author: Simon Riggs <simon@2ndquadrant.com>
Author: Abhijit Menon-Sen <ams@2ndquadrant.com>
Author: Sergei Kornilov <sk@zsrv.org>
Discussion: https://www.postgresql.org/message-id/flat/607741529606767@web3g.yandex.ru/
Commit cfdf4dc4 added an assertion that every WaitLatch() or similar
handles postmaster death. One place did not, but was missed in
review and testing due to the need for an SSL connection. Fix, by
asking for WL_EXIT_ON_PM_DEATH.
Reported-by: Christoph Berg
Discussion: https://postgr.es/m/20181124143845.GA15039%40msg.df7cb.de
Hstore data generated on pg 8.4 and pg_upgraded to current versions
remains in its original on-disk format unless modified. The same goes
for values generated by the addon hstore-new module on pre-9.0
versions. (The hstoreUpgrade function converts old values on the fly
when read in, but the on-disk value is not modified by this.)
Since old-format empty hstores (and hstore-new hstores) have
representations compatible with the new format, hstoreUpgrade thought
it could get away without modifying such values; but this breaks
hstore_hash (and the new hstore_hash_extended) which assumes
bit-perfect matching between semantically identical hstore values.
Only one bit actually differs (the "new version" flag in the count
field) but that of course is enough to break the hash.
Fix by making hstoreUpgrade unconditionally convert all old values to
new format.
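The heart of the fix, as a simplified sketch (HS_FLAG_NEWVERSION is
hstore's real new-format marker bit, but the helper below is
illustrative, not hstoreUpgrade's actual code):

    #include <stdint.h>

    #define HS_FLAG_NEWVERSION 0x80000000

    /*
     * Unconditionally stamp the header word with the new-format flag, so
     * that e.g. an old-format empty hstore becomes bit-identical to a
     * new-format one and hashes identically.
     */
    static void
    force_new_format(uint32_t *count_word)
    {
        *count_word |= HS_FLAG_NEWVERSION;
    }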
Backpatch all the way, even though this changes a hash value in some
cases, because in those cases the hash value is already failing - for
example, a hash join between old- and new-format empty hstores will be
failing to match, or a hash index on an hstore column containing an
old-format empty value will be failing to find the value since it will
be searching for a hash derived from a new-format datum. (There are no
known field reports of this happening, probably because hashing of
hstores has only been useful in limited circumstances and there
probably isn't much upgraded data being used this way.)
Per concerns arising from discussion of commit eb6f29141b. Original
bug is my fault.
Discussion: https://postgr.es/m/60b1fd3b-7332-40f0-7e7f-f2f04f777747%402ndquadrant.com
1. Integer overflow in internal_size could result in memory corruption
in decompression since a zero-length array would be allocated and then
written to. This leads to crashes or corruption when traversing an
index which has been populated with sufficiently sparse values. Fix by
using int64 for computations and checking for overflow (see the sketch
after this list).
2. Integer overflow in g_int_compress could cause pessimal merge
choices, resulting in unnecessarily large ranges (which would in turn
trigger issue 1 above). Fix by using int64 again.
3. Even without overflow, array sizes could become large enough to
cause unexplained memory allocation errors. Fix by capping the sizes
to a safe limit and reporting actual errors pointing at gist__intbig_ops
as needed.
4. Large inputs to the compression function always consist of large
runs of consecutive integers, and the compression loop was processing
these one at a time in an O(N^2) manner with a lot of overhead. The
expected runtime of this function could easily exceed 6 months for a
single call as a result. Fix by performing a linear-time first pass,
which reduces the worst case to something on the order of seconds.
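A minimal sketch of the int64 fix for issues 1 and 2 (names are made
up; the real code lives in contrib/intarray):

    #include <stdbool.h>
    #include <stdint.h>

    /*
     * Compute the size of a decompressed range in 64-bit arithmetic,
     * since hi - lo + 1 can overflow int32 for sufficiently sparse
     * values; also cap it (per issue 3) instead of attempting a huge
     * allocation.  Illustrative only.
     */
    static bool
    range_size_ok(int32_t lo, int32_t hi, int64_t limit, int64_t *size)
    {
        *size = (int64_t) hi - (int64_t) lo + 1;
        return *size <= limit;  /* caller reports an error when false */
    }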
Backpatch all the way, since this has been wrong forever.
Per bug #15518, reported by irc user "dymk"; analysis and patch by
me.
Discussion: https://postgr.es/m/15518-799e426c3b4f8358@postgresql.org
Early returns from the buildfarm say that most critters are good with
commit cbdb8b4c0, but gaur gives unexpected results with the test case
involving a float8 that's one-ULP-less-than-2^63. It appears that that
platform's version of rint() rounds that value up to 2^63 instead of
leaving it be. This is possibly a bug, and it's also possible that no
other platform anybody is using anywhere behaves likewise. Still, the
point of the test is not to insist that everybody's rint() behaves exactly
the same. Let's use two-ULPs-less-than-2^63 instead, which I've tested
to act the same on gaur as on more modern hardware.
(This is, more or less, exactly the portability issue I'd feared might
arise...)
Discussion: https://postgr.es/m/15519-4fc785b483201ff1@postgresql.org
ftoi4 and its sibling coercion functions did their overflow checks in
a way that looked superficially plausible, but actually depended on an
assumption that the MIN and MAX comparison constants can be represented
exactly in the float4 or float8 domain. That fails in ftoi4, ftoi8,
and dtoi8, resulting in a possibility that values near the MAX limit will
be wrongly converted (to negative values) when they need to be rejected.
Also, because we compared before rounding off the fractional part,
the other three functions threw errors for values that really ought
to get rounded to the min or max integer value.
Fix by doing rint() first (requiring an assumption that it handles
NaN and Inf correctly; but dtoi8 and ftoi8 were assuming that already),
and by comparing to values that should coerce to float exactly, namely
INTxx_MIN and -INTxx_MIN. Also remove some random cosmetic discrepancies
between these six functions.
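As a standalone sketch of the corrected pattern (not the backend code
itself):

    #include <math.h>
    #include <stdbool.h>
    #include <stdint.h>

    /*
     * Round first, then range-check against bounds that are exactly
     * representable: INT32_MIN is a power of two, so (float) INT32_MIN
     * is exact, and -((float) INT32_MIN) is exactly 2^31, the first
     * out-of-range value.
     */
    static bool
    float4_to_int32(float num, int32_t *result)
    {
        num = rintf(num);
        if (isnan(num) ||
            num < (float) INT32_MIN || num >= -((float) INT32_MIN))
            return false;       /* caller reports "integer out of range" */
        *result = (int32_t) num;
        return true;
    }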
Per bug #15519 from Victor Petrovykh. This should get back-patched,
but first let's see what the buildfarm thinks of it --- I'm not too
sure about portability of some of the regression test cases.
Patch by me; thanks to Andrew Gierth for analysis and discussion.
Discussion: https://postgr.es/m/15519-4fc785b483201ff1@postgresql.org
We should never estimate the output of a semijoin to be more rows than
we estimate for an inner join with the same input rels and join condition;
it's obviously impossible for that to happen. However, given the
relatively poor quality of our semijoin selectivity estimates ---
particularly, but not only, in cases where we punt and return a default
estimate --- we did often deliver such estimates. To improve matters,
calculate both estimates inside eqjoinsel() and take the smaller one.
The bulk of this patch is just mechanical refactoring to avoid repetitive
information lookup when we call both eqjoinsel_semi and eqjoinsel_inner.
The actual new behavior is just
selec = Min(selec, inner_rel->rows * selec_inner);
which looks a bit odd but is correct because of our different definitions
for inner and semi join selectivity: inner selectivity is a fraction of
the cross product of the two input rels, while semijoin selectivity is a
fraction of the outer rel's rows alone, so multiplying by inner_rel->rows
converts between the two scales.
There is one ensuing plan change in the regression tests, but it looks
reasonable enough (and checking the actual row counts shows that the
estimate moved closer to reality, not further away).
Per bug #15160 from Alexey Ermakov. Although this is arguably a bug fix,
I won't risk destabilizing plan choices in stable branches by
back-patching.
Tom Lane, reviewed by Melanie Plageman
Discussion: https://postgr.es/m/152395805004.19366.3107109716821067806@wrigleys.postgresql.org
Users of the WaitEventSet and WaitLatch() APIs can now choose between
asking for WL_POSTMASTER_DEATH and then handling it explicitly, or asking
for WL_EXIT_ON_PM_DEATH to trigger immediate exit on postmaster death.
This reduces code duplication, since almost all callers want the latter.
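A typical wait loop changes roughly like this (sketch; the wait-event
identifier is illustrative):

    /* Before: each caller had to notice postmaster death itself. */
    rc = WaitLatch(MyLatch, WL_LATCH_SET | WL_TIMEOUT | WL_POSTMASTER_DEATH,
                   timeout_ms, WAIT_EVENT_SOMETHING);
    if (rc & WL_POSTMASTER_DEATH)
        proc_exit(1);

    /* After: the latch code exits for us on postmaster death. */
    rc = WaitLatch(MyLatch, WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,
                   timeout_ms, WAIT_EVENT_SOMETHING);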
Repair all code that was previously ignoring postmaster death completely,
or requesting the event but ignoring it, or requesting the event but then
doing an unconditional PostmasterIsAlive() call every time through its
event loop (which is an expensive syscall on platforms for which we don't
have USE_POSTMASTER_DEATH_SIGNAL support).
Assert that callers of WaitLatchXXX() under the postmaster remember to
ask for either WL_POSTMASTER_DEATH or WL_EXIT_ON_PM_DEATH, to prevent
future bugs.
The only process that doesn't handle postmaster death is syslogger. It
waits until all backends holding the write end of the syslog pipe
(including the postmaster) have closed it by exiting, to be sure to
capture any parting messages. By using the WaitEventSet API directly
it avoids the new assertion, and as a by-product it may be slightly
more efficient on platforms that have epoll().
Author: Thomas Munro
Reviewed-by: Kyotaro Horiguchi, Heikki Linnakangas, Tom Lane
Discussion: https://postgr.es/m/CAEepm%3D1TCviRykkUb69ppWLr_V697rzd1j3eZsRMmbXvETfqbQ%40mail.gmail.com,
https://postgr.es/m/CAEepm=2LqHzizbe7muD7-2yHUbTOoF7Q+qkSD5Q41kuhttRTwA@mail.gmail.com
The documentation of CREATE/ALTER ROLE has been missing two things
related to PASSWORD:
- The password value provided needs to be quoted; some places in the
documentation marked the field with quotes, but others did not, which
led to confusion.
- PASSWORD NULL was not mentioned consistently, and ENCRYPTED is not
compatible with it.
Reported-by: Steven Winfield
Author: Michael Paquier
Reviewed-by: David G. Johnston
Discussion: https://postgr.es/m/154282901979.1316.7418475422120496802@wrigleys.postgresql.org
populate_recordset_worker() failed to consider the possibility that the
supplied JSON data contains no rows, so that update_cached_tupdesc never
got called. This led to a null-pointer dereference since commit 9a5e8ed28;
before that it led to a bogus "set-valued function called in context that
cannot accept a set" error. Fix by forcing the update to happen.
Per bug #15514. Back-patch to v11 as 9a5e8ed28 was. (If we were excited
about the bogus error, we could perhaps go back further, but it'd take more
work to figure out how to fix it in older branches. Given the lack of
field complaints about that aspect, I'm not excited.)
Discussion: https://postgr.es/m/15514-59d5b4c4065b178b@postgresql.org
Documenting INCLUDE in the section about unique indexes is confusing,
as complained of by Emilio Platzer. Furthermore, it entirely failed
to explain why you might want to use the feature. The section about
index-only scans is really the right place; it already talked about
making such things the hard way. Rewrite that text to describe INCLUDE
as the normal way to make a covering index.
Also, move that section up a couple of places, as it now seems more
important than some of the stuff we had before it. It still has to
be after expression and partial indexes, since otherwise some of it
would involve forward references.
Discussion: https://postgr.es/m/154031939560.30897.14677735588262722042@wrigleys.postgresql.org
This commit continues the code improvements started by commit
12788ae49e. With this commit, state machine transitions are better
contained in the routine that was called doCustom() and is now called
advanceConnectionState -- the resulting code is easier to reason about,
since there are no state changes occurring in the outer layer.
This change is prompted by future patches to add more features to
pgbench, which will need to effect some more surgery to this code.
Fabien's original had all the machine state changes inside one routine,
but I (Álvaro) thought that a subroutine to handle command execution is
more straightforward to review, so this commit does not match Fabien's
submission closely. If something is broken, it's probably my fault.
Author: Fabien Coelho, Álvaro Herrera
Reviewed-by: Kirk Jamison
Discussion: https://postgr.es/m/alpine.DEB.2.21.1808111104320.1705@lancre
Previously tables declared WITH OIDS, including a significant fraction
of the catalog tables, stored the oid column not as a normal column,
but as part of the tuple header.
This special column was not shown by default, which was somewhat odd,
as it's often (consider e.g. pg_class.oid) one of the more important
parts of a row. Neither pg_dump nor COPY included the contents of the
oid column by default.
The fact that the oid column was not an ordinary column necessitated a
significant amount of special-case code to support oid columns. That
was already painful for the existing code, but upcoming work aiming to
make table storage pluggable would have required expanding and
duplicating that "specialness" significantly.
WITH OIDS has been deprecated since 2005 (commit ff02d0a05280e0).
Remove it.
Removing includes:
- CREATE TABLE and ALTER TABLE syntax for declaring the table to be
WITH OIDS has been removed (WITH (oids[ = true]) will error out)
- pg_dump does not support dumping tables declared WITH OIDS and will
issue a warning when dumping one (and ignore the oid column).
- restoring a pg_dump archive with pg_restore will warn when
restoring a table with oid contents (and ignore the oid column)
- COPY will refuse to load a binary dump that includes oids.
- pg_upgrade will error out when encountering tables declared WITH
OIDS; they have to be altered to remove the oid column first.
- Functionality to access the oid of the last inserted row (like
plpgsql's RESULT_OID, spi's SPI_lastoid, ...) has been removed.
The syntax for declaring a table WITHOUT OIDS (or WITH (oids = false)
for CREATE TABLE) is still supported. While that requires a bit of
support code, it seems unnecessary to break applications / dumps that
do not use oids, and are explicit about not using them.
The biggest user of WITH OID columns was postgres' catalog. This
commit changes all 'magic' oid columns to be columns that are normally
declared and stored. To reduce unnecessary query breakage all the
newly added columns are still named 'oid', even if a table's column
naming scheme would indicate 'reloid' or such. This obviously
requires adapting a lot of code, mostly replacing oid access via
HeapTupleGetOid() with access to the underlying Form_pg_*->oid column.
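For example, code that used to fetch a catalog row's oid from the tuple
header now reads the column directly (in-tree sketch, error handling
elided):

    /* Before: the oid lived in the tuple header. */
    Oid         relid = HeapTupleGetOid(tuple);

    /* After: it is an ordinary column of the catalog's row type. */
    Form_pg_class classForm = (Form_pg_class) GETSTRUCT(tuple);
    Oid         relid = classForm->oid;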
The bootstrap process now assigns oids for all oid columns in
genbki.pl that do not have an explicit value (starting at the largest
oid previously used); only oids assigned later will be above
FirstBootstrapObjectId. As the oid column is now a normal column, the
special bootstrap syntax for oids has been removed.
Oids are not automatically assigned during insertion anymore; all
backend code explicitly assigns oids with GetNewOidWithIndex(). For
the rare cases in which insertions into the catalog via SQL are called
for, the new pg_nextoid() function can be used (which only works on
catalog tables).
The fact that oid columns on system tables are now normal columns
means that they will be included in the set of columns expanded
by * (i.e. SELECT * FROM pg_class will now include the table's oid,
previously it did not). It'd not technically be hard to hide the oid
column by default, but that'd mean the confusing behavior would either
have to be carried forward forever, or it'd cause breakage down the
line.
While it's not unlikely that further adjustments are needed, the
scope/invasiveness of the patch makes it worthwhile to merge this
now. It's painful to maintain externally, too complicated to commit
after the code freeze, and a dependency of a number of other
patches.
Catversion bump, for obvious reasons.
Author: Andres Freund, with contributions by John Naylor
Discussion: https://postgr.es/m/20180930034810.ywp2c7awz7opzcfr@alap3.anarazel.de
As already explained in configure.in, using the OpenSSL version number
to detect presence of functions doesn't work, because LibreSSL reports
incompatible version numbers. Fortunately, the functions we need here
are actually macros, so we can just test for them directly.
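That reduces the configure probe to a simple preprocessor test, along
these lines (sketch; the macro name is illustrative only):

    #include <openssl/ssl.h>

    /*
     * A macro, unlike a function, is visible to the preprocessor, so its
     * presence can be tested directly instead of trusting
     * OPENSSL_VERSION_NUMBER, which LibreSSL reports incompatibly.
     */
    #ifndef SOME_OPENSSL_MACRO
    #error "macro not available"
    #endif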
The documentation claimed that an enum type requires "one or more"
labels, but since 1fd9883ff4, zero labels are also allowed.
Reported-by: Lukas Eder <lukas.eder@gmail.com>
Bug: #15356
Two issues have been spotted and get fixed here:
- When checking for corrupted files, make sure that pg_verify_checksums
complains about the correct file. In order to make the logic more
robust, all files created are immediately removed once checks on them
are done. The error message generated by pg_verify_checksums also now
includes the file name it sees as corrupted.
- Before running corruption-related tests, empty files are generated
with names matching those of the corrupted files, potentially leading
to conflicts. So use a different set of names for both.
Author: Michael Banck
Discussion: https://postgr.es/m/20181119181119.GC23740@nighthawk.caipicrew.dd-dns.de
Per POSIX, WIFSIGNALED and related macros are provided by <sys/wait.h>.
Apparently on Linux they're also pulled in by some other inclusions,
but BSD-ish systems are pickier. Fixes portability issue in ffa4cbd62.
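As a sketch, the fix amounts to including the header explicitly:

    #include <sys/wait.h>       /* WIFSIGNALED, WTERMSIG, etc. per POSIX */

    static int
    exited_on_signal(int status)
    {
        return WIFSIGNALED(status);     /* works on BSD-ish systems too */
    }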
Per buildfarm.
Previously, any program launched by COPY TO/FROM PROGRAM inherited the
server's setting of SIGPIPE handling, i.e. SIG_IGN. Hence, if we were
doing COPY FROM PROGRAM and closed the pipe early, the child process
would see EPIPE on its output file and typically would treat that as
a fatal error, in turn causing the COPY to report an error. Similarly,
one could get a failure report from a query that didn't read all of
the output from a contrib/file_fdw foreign table that uses file_fdw's
PROGRAM option.
To fix, ensure that child programs inherit SIG_DFL not SIG_IGN processing
of SIGPIPE. This seems like an all-around better situation since if
the called program wants some non-default treatment of SIGPIPE, it would
expect to have to set that up for itself. Then in COPY, if it's COPY
FROM PROGRAM and we stop reading short of detecting EOF, treat a SIGPIPE
exit from the called program as a non-error condition. This still allows
us to report an error for any case where the called program gets SIGPIPE
on some other file descriptor.
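A standalone sketch of the child-side fix (illustrative; the server's
real code uses its own spawning infrastructure):

    #include <signal.h>
    #include <sys/types.h>
    #include <unistd.h>

    /*
     * Between fork() and exec*(), restore default SIGPIPE handling so
     * the called program dies on SIGPIPE instead of seeing EPIPE, even
     * though the server itself runs with SIGPIPE ignored.
     */
    static void
    spawn_program(const char *command)
    {
        pid_t       pid = fork();

        if (pid == 0)
        {
            signal(SIGPIPE, SIG_DFL);   /* undo the server's SIG_IGN */
            execl("/bin/sh", "sh", "-c", command, (char *) NULL);
            _exit(127);                 /* exec failed */
        }
        /* parent: transfer data through a pipe, then wait for the child */
    }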
As coded, we won't report a SIGPIPE if we stop reading as a result of
seeing an in-band EOF marker (e.g. COPY BINARY EOF marker). It's
somewhat debatable whether we should complain if the called program
continues to transmit data after an EOF marker. However, it seems like
we should avoid throwing error in any questionable cases, especially in a
back-patched fix, and anyway it would take additional code to make such
an error get reported consistently.
Back-patch to v10. We could go further back, since COPY FROM PROGRAM
has been around a while, but AFAICS the only way to reach this situation
using core or contrib is via file_fdw, which has only supported PROGRAM
sources since v10. The COPY statement per se has no feature whereby
it'd stop reading without having hit EOF or an error already. Therefore,
I don't see any upside to back-patching further that'd outweigh the
risk of complaints about behavioral change.
Per bug #15449 from Eric Cyr.
Patch by me, review by Etsuro Fujita and Kyotaro Horiguchi
Discussion: https://postgr.es/m/15449-1cf737dd5929450e@postgresql.org
In \d and \z, instead of conflating partitioned tables and indexes with
plain ones, set the "type" column and table title differently to make
the distinction obvious. A simple ease-of-use improvement.
Author: Pavel Stehule, Michaël Paquier, Álvaro Herrera
Reviewed-by: Amit Langote
Discussion: https://postgr.es/m/CAFj8pRDMWPgijpt_vPj1t702PgLG4Ls2NCf+rEcb+qGPpossmg@mail.gmail.com
This change doesn't fix any bugs that we've heard about, but it seems
like a good idea on general principles to track upstream occasionally.
Discussion: https://postgr.es/m/3320.1542647565@sss.pgh.pa.us
Calling AC_CHECK_DECLS before we've finished setting up the compiler's
CFLAGS seems like a pretty risky proposition, especially now that the
first use of that macro will result in a test to see whether the compiler
gives a warning or an error for undeclared built-in functions. That
answer could very easily be changed later in the script than where
PGAC_LLVM_SUPPORT is called; furthermore, it's hardly unlikely that
flags such as -D_GNU_SOURCE
could change visibility of declarations. Hence, be a little less cavalier
about where to do LLVM-related tests. This results in v11 and HEAD doing
the warning-or-error check at the same place in the script as older
branches are doing it, which seems like a good thing.
Per further thought about commits 0b59b0e8b and 16fbac39f.