This CPU architecture has been discontinued. We already removed HP-UX
support, we never supported Windows/Itanium, and the open source
operating systems that a vintage hardware owner might hope to run have
all either ended Itanium support or never fully released support (NetBSD
may eventually). The extra code we carry for this rare ISA is now
untested. It seems like a good time to remove it.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/1415825.1656893299%40sss.pgh.pa.us
HP-UX hardware is no longer produced, build farm coverage recently
ended, and there are no known active maintainers targeting this OS.
Since there is a major rewrite of the build system in the pipeline for
PostgreSQL 16, and that requires development, testing and maintenance
for each OS and tool chain, it seems like a good time to drop support
for:
* HP-UX, the operating system.
* HP aCC, the HP-UX native compiler.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Discussion: https://postgr.es/m/1415825.1656893299%40sss.pgh.pa.us
This commit bumps the runtime value of _WIN32_WINNT to 0x0A00 for any
builds on Windows. Hence, this makes Windows 10 the minimum requirement
when running PostgreSQL under WIN32, be it for builds with Cygwin,
MinGW or Visual Studio.
The previous minimum runtime version was Windows Vista when building
with at least Visual Studio 2015, and Windows XP otherwise.
Windows 10 is the most modern version supported by Microsoft, and per
discussion, as we no longer have buildfarm members running older
versions, it is the minimum supported version that best suits our
needs. This will also make the development of some patches easier, two
examples being async I/O and large page handling, by avoiding a lot of
compatibility gotchas on platforms that most likely have few users
anyway.
This allows the removal of MIN_WINNT in win32.h and of the
IsWindowsXXXOrGreater() macros that were used at runtime to check
which version of Windows was in use. The change in pg_locale.c comes
from Juan. Note that all my tests passed and that the CI is green.
The buildfarm will quickly tell if this needs more adjustments.
Author: Michael Paquier, Juan José Santamaría Flecha
Reviewed-by: Thomas Munro
Discussion: https://postgr.es/m/Yo7tHKD8VCkeNi71@paquier.xyz
auto_explain.log_parameter_max_length is a new GUC in the extension,
similar to the corresponding core setting, that controls the inclusion
of query parameter values in the logged explain output.
More tests are added to check the behavior of this new parameter:
parameters logged in full (the default of -1), disabled (value of 0),
and partially truncated (any other value).
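For example, a minimal setup could look like this (a sketch; values
are illustrative only):

    -- auto_explain must be loaded, e.g. via shared_preload_libraries
    LOAD 'auto_explain';
    SET auto_explain.log_min_duration = 0;           -- log all plans
    SET auto_explain.log_parameter_max_length = 32;  -- truncate values
    -- -1 (the default) logs parameter values in full, 0 disables them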
Author: Dagfinn Ilmari Mannsåker
Discussion: https://postgr.es/m/87ee09mohb.fsf@wibble.ilmari.org
Amendment to 84ad713cf8: Not all
prepared statements have a result descriptor. As previously coded,
reading pg_prepared_statements would crash in such cases. Make those
cases return null for result_types instead. Also add a test case for
it.
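For example, an INSERT without RETURNING has no result descriptor
(a sketch, with a made-up table):

    CREATE TABLE t1 (a int);
    PREPARE q AS INSERT INTO t1 VALUES ($1);
    SELECT name, result_types FROM pg_prepared_statements;
    -- result_types is now null for q, where this previously crashed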
Attempting such an operation would already fail, but in various and
confusing ways. For example, while in recovery, some elog() messages
would be reported, but these should never be user-facing. This commit
restricts any write operations done on large objects in a read-only
context, so that the errors generated are more user-friendly. This is
per discussion with Tom Lane and Robert Haas.
Regression tests are added to check all the SQL functions working on
large objects (including an update of the test's alternate output).
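For example, in a read-only transaction (a sketch; the error wording
is paraphrased):

    SELECT lo_create(42);
    BEGIN READ ONLY;
    SELECT lo_unlink(42);
    -- now fails cleanly with a "cannot execute ... in a read-only
    -- transaction" kind of error, rather than an internal elog()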
Author: Yugo Nagata
Discussion: https://postgr.es/m/20220527153028.61a4608f66abcd026fd3806f@sraoss.co.jp
Interpret its privileges argument as a comma-separated list of
privilege names, as in has_table_privilege and other functions.
This is actually a net reduction of code, since the support routine
to parse such lists already exists, and we can drop
convert_priv_string(), which had no other use-case.
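For example (a sketch):

    -- true only if the current user holds both SET and ALTER SYSTEM
    -- on the parameter
    SELECT has_parameter_privilege('work_mem', 'SET, ALTER SYSTEM');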
Robins Tharakan
Discussion: https://postgr.es/m/e5a05dc54ba64408b3dd260171c1abaf@EX13D05UWC001.ant.amazon.com
POSIX shm_open() can sleep for a long time and fail spuriously because
of contention on an internal lock file on Solaris (and presumably
illumos). Commit 389869af fixed the main problem with this, namely that
we could crash, but it's now clear that "posix" is not a good default.
Therefore, choose "sysv" at initdb time on Solaris and illumos. Other
choices are still available by editing the postgresql.conf file.
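For example, reverting to the previous default is a one-line change in
postgresql.conf (a sketch):

    dynamic_shared_memory_type = posix   # initdb now picks 'sysv' here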
Back-patch only to 15, because contention is much less likely further
back, and it doesn't seem like a good idea to change this in released
branches. This should clear up the failures on build farm animal
margay.
Discussion: https://postgr.es/m/CA%2BhUKGKqKrCV5xKWfh9rnm%3Do%3DDwZLTLtnsj_XpUi9g5%3DV%2B9oyg%40mail.gmail.com
We have had a working and tunable autovacuum
for at least a decade now, so remove the recommendation to
manually vacuum tables at least every night.
Autovacuum is now also triggered by INSERTs, so we can remove
the recommendation to run VACUUM (ANALYZE) after lots
of INSERTs or DELETEs.
Instead, suggest using autovacuum by moving the respective
paragraph up to where the importance of VACUUM is emphasized.
Author: Laurenz Albe <laurenz.albe@cybertec.at>
Reviewed-By: Magnus Hagander, Peter Geoghegan
Discussion: https://postgr.es/m/6f5e3da98fec14640f389d7b84c3b413833697f4.camel@cybertec.at
This patch documents that the initial data synchronization (tablesync) for
logical replication does not take into account the publication 'publish'
parameter when copying the existing table data.
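For example (a sketch):

    CREATE PUBLICATION pub_ins FOR TABLE t WITH (publish = 'insert');
    -- A subscriber's initial tablesync still copies all existing rows
    -- of t; 'publish' only filters changes streamed after the copy.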
Author: Peter Smith
Reviewed-by: Shi yu, Euler Taveira, Robert Haas, Amit Kapila
Discussion: https://postgr.es/m/CAHut+PtbfALjFpS2MkrvQ+wWQKByP7CNh9RtFta-r=BHEU3S3w@mail.gmail.com
072132f0 used the attnum offset to access the raw_fields array when
checking that the attribute names of the header and of the relation
match, leading to incorrect results or even crashes if the attribute
numbers of a relation are changed, for example after an attribute has
been dropped. This fixes the logic to use the correct attribute names
for the header matching requirements.
Also, this commit disallows HEADER MATCH in COPY TO as there is no
validation that can be done in this case.
The tests are expanded for HEADER MATCH with COPY FROM and dropped
columns, with cases where a relation has a dropped and re-added column,
as well as a reduced set of columns.
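For example (a sketch):

    CREATE TABLE t (a int, b text, c int);
    ALTER TABLE t DROP COLUMN b;
    -- The header line must match the live columns "a,c"; previously
    -- the dropped column could shift the comparison.
    COPY t FROM stdin WITH (FORMAT csv, HEADER match);
    -- HEADER match is now rejected for COPY TO:
    COPY t TO stdout WITH (FORMAT csv, HEADER match);  -- ERROR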
Author: Julien Rouhaud
Reviewed-by: Peter Eisentraut, Michael Paquier
Discussion: https://postgr.es/m/20220607154744.vvmitnqhyxrne5ms@jrouhaud
Three parameters have been using "int" rather than "integer" to describe
their type:
auth_delay.milliseconds
max_logical_replication_workers
pg_prewarm.autoprewarm_interval
This is inconsistent with all the other integer GUCs listed in the
docs (148 of them, as far as I can see).
Author: Peter Smith
Discussion: https://postgr.es/m/CAHut+Pv6X5T-veN2abUDUvBxZm+SSm-9otfi3LZPGyOc6u6hiA@mail.gmail.com
This reverts commits 5753d4ee32 and fe60b67250 that modified HOT to
ignore BRIN indexes. The commit message for 5753d4ee32 claims that:
    When determining whether an index update may be skipped by using
    HOT, we can ignore attributes indexed only by BRIN indexes. There
    are no index pointers to individual tuples in BRIN, and the page
    range summary will be updated anyway as it relies on visibility
    info.
This is partially incorrect - it's true that BRIN indexes don't point
to individual tuples, so HOT chains are not an issue, but the
visibility info is not sufficient to keep the index up to date. This
can easily result in corrupted indexes, as demonstrated in the hackers
thread.
This does not mean relaxing the HOT restrictions for BRIN is a lost
cause, but it needs to handle the two aspects (allowing HOT chains and
updating the page range summaries) separately. That requires major
changes, and it's too late for that in the current dev cycle.
Reported-by: Tomas Vondra
Discussion: https://postgr.es/m/05ebcb44-f383-86e3-4f31-0a97a55634cf@enterprisedb.com
In addition, this moves the new paragraph in the MVCC page upwards, for
a more consistent flow; some minor markup mistakes, style issues and
typos are fixed too.
Per comments from Justin Pryzby.
Discussion: https://postgr.es/m/20220511163350.GL19626@telsasoft.com
This commit, in completion of 157f873, forces a ROLLBACK for
--single-transaction only when ON_ERROR_STOP is used and one of the
steps defined by -f/-c fails. Hence, COMMIT is always used when
ON_ERROR_STOP is not set, ignoring the status code of the last action
taken in the set of switches specified by -c/-f. (Previously, ROLLBACK
would have been issued even without ON_ERROR_STOP if the last step
failed, while COMMIT was issued if a step in-between failed as long as
the last step succeeded, leading to more inconsistency.)
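For example (a sketch; the table and path are made up):

    # Last step fails client-side; without ON_ERROR_STOP, COMMIT is
    # still issued and the INSERT persists:
    psql -1 -c "INSERT INTO t VALUES (1)" -c "\copy t FROM '/bad/path'" db
    # With ON_ERROR_STOP, the failure forces a ROLLBACK instead:
    psql -1 -v ON_ERROR_STOP=1 \
        -c "INSERT INTO t VALUES (1)" -c "\copy t FROM '/bad/path'" db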
While on it, this adds much more test coverage in this area when not
using ON_ERROR_STOP, with multiple switch patterns involving -c and -f
for query files, single queries and slash commands.
The behavior of ON_ERROR_STOP is arguably a bug, but there was not
much support for a backpatch to force a ROLLBACK on a step failure, so
this change is done only on HEAD for now.
Per discussion with Tom Lane and Kyotaro Horiguchi.
Discussion: https://postgr.es/m/Yqbc8bAdwnP02na4@paquier.xyz
The previous wording was "the underlying data type's default collation
is used", which is wrong or at least misleading. The domain inherits
the base type's collation behavior, which, if "default", can actually
mean that some non-default collation obtained from elsewhere is used.
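For example (a sketch):

    CREATE DOMAIN dtext AS text; -- "default" collation behavior
    CREATE TABLE t (c dtext COLLATE "C");
    -- The column's collation is "C", obtained from the column
    -- definition, not the database's default collation.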
Per complaint from Jian He.
Discussion: https://postgr.es/m/CACJufxHMR8_4WooDPjjvEdaxB2hQ5a49qthci8fpKP0MKemVRQ@mail.gmail.com
The patch introducing jsonpath dropped a para about that between
two related examples, and didn't bother updating the introductory
sentences that it falsified. The grammar was pretty shaky as well.
38bfae3 has moved the contents written to files by pg_upgrade under a
new directory called pg_upgrade_output.d/ located in the new cluster's
data folder, using a simple, fixed structure made of two
subdirectories: log/ and dump/. This design made pg_upgrade more
fragile on repeated calls, as we could get failures when creating one
or more of those directories, while potentially losing the logs of a
previous run (logs are retained automatically on failure, and cleaned
up on success unless --retain is specified). So a user would need to
clean up pg_upgrade_output.d/ as an extra step for any repeated call
of pg_upgrade. The most common scenario here is --check followed by
the actual upgrade, but one could also see a failure when specifying
an incorrect input argument value. Removing the logs entirely would
have the disadvantage of losing all the past information, even if
--retain was specified at some past step.
This is annoying for a lot of users and automated upgrade flows. So,
rather than requiring a manual removal of pg_upgrade_output.d/, this
redesigns the set of output directories in a more dynamic way, based
on a suggestion from Tom Lane and Daniel Gustafsson.
pg_upgrade_output.d/ is still the base path, but a second directory
level is added, named after an ISO-8601-formatted timestamp (in short,
human-readable, with milliseconds appended to the name to avoid any
conflicts). The logs and dumps are saved within the same
subdirectories as previously, log/ and dump/, but these are located
inside the subdirectory named after the timestamp.
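For illustration, the layout now looks roughly like this (the
timestamp is made up):

    pg_upgrade_output.d/
      20220719T104521.371/
        log/
        dump/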
The logs of a given run are removed only after a successful run if
--retain is not used, and pg_upgrade_output.d/ is kept if there are any
logs from a previous run. Note that previously, pg_upgrade would have
kept the logs even after a successful --check, but that was
inconsistent with the case without --check when using --retain. The code in
charge of the removal of the output directories is now refactored into a
single routine.
Two TAP tests are added with some --check commands (one failure case
and one success case) to cover the issue fixed here. Note that the
tests had to be tweaked a bit to fit with the new directory structure,
so that they can find any logs generated on failure. This will still
require a change in the buildfarm client for the case where pg_upgrade
is tested without the TAP test; I'll tackle that with a separate patch
where needed.
Reported-by: Tushar Ahuja
Author: Michael Paquier
Reviewed-by: Daniel Gustafsson, Justin Pryzby
Discussion: https://postgr.es/m/77e6ecaa-2785-97aa-f229-4b6e047cbd2b@enterprisedb.com
psql --single-transaction is able to handle multiple -c and -f
switches in a single transaction since d5563d7d, but this had the
surprising behavior of forcing a transaction COMMIT even if psql
failed with an error on the client side (for example, an incorrect
path given to \copy), which would generate an error but still commit
any changes already applied in the backend. This commit makes the
behavior more consistent by enforcing a transaction ROLLBACK if any
command fails, be it client-side or backend-side, so that no changes
are applied if any of them errors out.
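For example (a sketch; the table and path are made up):

    # The \copy fails client-side because of the bad path; previously
    # the INSERT was still committed, while a ROLLBACK is now issued:
    psql -1 -c "INSERT INTO t VALUES (1)" -c "\copy t FROM '/bad/path'" db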
Some tests are added on HEAD to provide coverage for all that.
Backend-side errors are unreliable to test, as IPC::Run can complain
about SIGPIPE if psql quits before reading a query result, but testing
works reliably in the case where the errors come from psql itself,
which is what the original report is about.
Reported-by: Christoph Berg
Author: Kyotaro Horiguchi, Michael Paquier
Discussion: https://postgr.es/m/17504-76b68018e130415e@postgresql.org
Backpatch-through: 10
The previous entry invited confusion between what uniq() does
by itself and what it does when combined with sort(). The latter
usage is pretty useful so we should show it, but add an additional
example to clarify the results of uniq() alone.
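For example (a sketch; requires the intarray extension):

    SELECT uniq('{1,2,2,3,1,1}'::int[]);
    -- {1,2,3,1}: only adjacent duplicates are removed
    SELECT uniq(sort('{1,2,2,3,1,1}'::int[]));
    -- {1,2,3}: sort first to remove all duplicates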
Per suggestion from Martin Kalcher. Back-patch to v13, where
we switched to formatting that supports multiple examples.
Discussion: https://postgr.es/m/165407884456.573551.8779012279828726162@wrigleys.postgresql.org
Currently, we simply combine the column lists when publishing tables
via multiple publications, and that can sometimes lead to unexpected
behavior. Say, if a column is published in any row-filtered
publication, then the values for that column are sent to the
subscriber even for rows that don't match that row filter, as long as
the row matches the row filter of any other publication, even if that
other publication doesn't include the column.
The main purpose of introducing column lists is to have statically
different shapes on publisher and subscriber, or to hide sensitive
column data. In both cases, it doesn't seem to make sense to combine
column lists.
So, we disallow the cases where the column list is different for the
same table when combining publications. This can later be extended to
combine the column lists for selective cases where required.
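For example (a sketch):

    CREATE PUBLICATION p1 FOR TABLE t (a, b);
    CREATE PUBLICATION p2 FOR TABLE t (a, c);
    -- Subscribing to both p1 and p2 now raises an error, since the
    -- column lists for t differ between the two publications.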
Reported-by: Alvaro Herrera
Author: Hou Zhijie
Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/202204251548.mudq7jbqnh7r@alvherre.pgsql
The example given for anyelement <@ anymultirange didn't return
true as claimed; adjust it so it does.
In passing, change a couple of sample results where the modern
numeric-based logic produces a different number of trailing zeroes
than before.
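For instance, an example of this shape does return true (a sketch, not
necessarily the exact one used in the docs):

    SELECT 4 <@ '{[1,5)}'::int4multirange;  -- true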
Erik Rijkers
Discussion: https://postgr.es/m/cc35735d-1ec1-5bb3-9e27-cddbab7afa23@xs4all.nl
The PostgreSQL limitations section of the docs mentioned the limit on
the number of columns that can exist in a table. Users might be
surprised to find that there's also a limit on the number of columns
that can exist in a targetlist, which they may hit when selecting a
large number of columns from several tables with many columns. Here
we document that this limitation exists and mention what the limit
actually is.
Wording proposal by Alvaro Herrera
Reported-by: Vladimir Sitnikov
Author: Dave Cramer
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/CAB=Je-E18aTYpNqje4mT0iEADpeGLSzwUvo3H9kRRuDdsNo4aQ@mail.gmail.com
Backpatch-through: 12, where the limitations section was added
This reverts commit d9d076222f "VACUUM: ignore indexing operations
with CONCURRENTLY".
These changes caused indexes created with the CONCURRENTLY option to
miss heap tuples that were HOT-updated and HOT-pruned during the index
creation. Before these changes, HOT pruning would have been prevented
by the Xmin of the transaction creating the index, but because this
change was precisely to allow the Xmin to move forward ignoring that
backend, now other backends scanning the table can prune them. This is
not a problem for VACUUM (which requires a lock that conflicts with a
CREATE INDEX CONCURRENTLY operation), but HOT-prune can definitely
occur. In other words, Xmin advancement was sped up, but at the cost of
corrupting the resulting index.
Regrettably, this means losing the new feature in PG14 whereby RIC/CIC
on very large tables no longer forced VACUUM to retain very old
tuples. We might try to implement it again in a later release, but for
now the risk of indexes missing tuples is too high and there's no easy
fix.
Backpatch to 14, where this change appeared.
Reported-by: Peter Slavov <pet.slavov@gmail.com>
Diagnosed-by: Andrey Borodin <x4mmm@yandex-team.ru>
Diagnosed-by: Michael Paquier <michael@paquier.xyz>
Diagnosed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/17485-396609c6925b982d%40postgresql.org