Create common infrastructure for cross-version upgrade testing.
To test pg_upgrade across major PG versions, we have to be able to modify or
drop any old objects with no-longer-supported properties, and we have to be
able to deal with cosmetic changes in pg_dump output.  Up to now, the
buildfarm and pg_upgrade's own test infrastructure had separate
implementations of the former, and we had nothing but very ad-hoc rules for
the latter (including an arbitrary threshold on how many lines of unchecked
diff were okay!).  This patch creates a Perl module that can be shared by
both those use-cases, and adds logic that deals with pg_dump output diffs in
a much more tightly defined fashion.

This largely supersedes previous efforts in commits 0df9641d3, 9814ff550,
and 62be9e4cd, which developed a SQL-script-based solution for the task of
dropping old objects.  There was nothing fundamentally wrong with that work
in itself, but it had no basis for solving the output-formatting problem.
The most plausible way to deal with formatting is to build a Perl module
that can perform editing on the dump files; and once we commit to that, it
makes more sense for the same module to also embed the knowledge of what
has to be done for dropping old objects.

Back-patch versions of the helper module as far as 9.2, to support buildfarm
animals that still test that far back.  It's also necessary to back-patch
PostgreSQL/Version.pm, because the new code depends on that.  I fixed up
pg_upgrade's 002_pg_upgrade.pl in v15, but did not look into back-patching
it further than that.

Tom Lane and Andrew Dunstan

Discussion: https://postgr.es/m/891521.1673657296@sss.pgh.pa.us
parent ac01fa647f
commit 4ad0896bca
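For orientation, the new module's SYNOPSIS (part of the file added below)
implies a three-step flow: adjust the old databases, dump both clusters,
then normalize both dumps before diffing.  The following is a minimal,
hypothetical sketch of a caller, not code from the patch; the version
string, database names, and dump file names are illustrative only.

    use strict;
    use warnings;
    use PostgreSQL::Version;
    use PostgreSQL::Test::Utils;    # provides slurp_file()
    use PostgreSQL::Test::AdjustUpgrade;

    # Illustrative inputs: the branch being upgraded from, and the
    # databases present in that installation.
    my $old_version = PostgreSQL::Version->new('11');
    my %dbnames = (postgres => 1, regression => 1);

    # 1. Build per-database SQL to run against the old cluster before dumping.
    my $adjust_cmds = adjust_database_contents($old_version, %dbnames);
    foreach my $db (sort keys %$adjust_cmds)
    {
        my $sql = join(";\n", @{ $adjust_cmds->{$db} });
        # ... run $sql in database $db, e.g. via psql ...
    }

    # 2. Normalize both pg_dumpall outputs before comparing them.
    my $old_dump = adjust_old_dumpfile($old_version, slurp_file('dump1.sql'));
    my $new_dump = adjust_new_dumpfile($old_version, slurp_file('dump2.sql'));
    # ... 3. diff $old_dump against $new_dump; they should now match ...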
@@ -3,21 +3,21 @@ THE SHORT VERSION

On non-Windows machines, you can execute the testing process
described below by running the following command in this directory:

make check

This will run the TAP tests to run pg_upgrade, performing an upgrade
from the version in this source tree to a new instance of the same
version.

Testing an upgrade from a different version requires a dump to set up
the contents of this instance, with its set of binaries. The following
variables are available to control the test (see DETAILS below about
the creation of the dump):
Testing an upgrade from a different PG version is also possible, and
provides a more thorough test that pg_upgrade does what it's meant for.
This requires both a source tree and an installed tree for the old
version, as well as a dump file to set up the instance to be upgraded.
The following environment variables must be set to enable this testing:
export olddump=...somewhere/dump.sql (old version's dump)
export oldinstall=...otherversion/ (old version's install base path)

Finally, the tests can be done by running
make check
See DETAILS below for more information about creation of the dump.

DETAILS
-------
@@ -26,22 +26,32 @@ The most effective way to test pg_upgrade, aside from testing on user
data, is by upgrading the PostgreSQL regression database.

This testing process first requires the creation of a valid regression
database dump that can be then used for $olddump. Such files contain
database dump that can then be used for $olddump. Such files contain
most database features and are specific to each major version of Postgres.

Here are the steps needed to create a dump file:

1) Create and populate the regression database in the old cluster.
This database can be created by running 'make installcheck' from
src/test/regress using its source code tree.
src/test/regress in the old version's source code tree.

If you like, you can also populate regression databases for one or
more contrib modules by running 'make installcheck USE_MODULE_DB=1'
in their directories. (USE_MODULE_DB is essential so that the
pg_upgrade test script will understand which database is which.)

2) Use pg_dumpall to dump out the contents of the instance, including the
regression database, in the shape of a SQL file. This requires the *old*
cluster's pg_dumpall so as the dump created is compatible with the
version of the cluster it is dumped into.
regression database(s), into a SQL file. Use the *old* version's
pg_dumpall so that the dump created is compatible with that version.

Once the dump is created, it can be repeatedly used with $olddump and
`make check`, that automates the dump of the old database, its upgrade,
the dump out of the new database and the comparison of the dumps between
the old and new databases. The contents of the dumps can also be manually
compared.
Once the dump file is created, it can be used repeatedly. Set $olddump
to point to the dump file and run 'make check' or 'make installcheck'
in the new version's src/bin/pg_upgrade directory. (If you included any
contrib databases in the old dump, you must use 'make installcheck' and
ensure that the corresponding contrib modules have been installed in
the new version's installation tree.) This will build a temporary cluster
using the old installation's executables, populate it from the dump file,
and then try to pg_upgrade it to the new version. Success is reported
if pg_dumpall output matches between the pre-upgrade and post-upgrade
databases. In case of trouble, manually comparing those dump files may
help to isolate the problem.
@@ -10,6 +10,7 @@ use File::Path qw(rmtree);

use PostgreSQL::Test::Cluster;
use PostgreSQL::Test::Utils;
use PostgreSQL::Test::AdjustUpgrade;
use Test::More;

# Generate a database with a name made of a range of ASCII characters.
@@ -30,11 +31,36 @@ sub generate_db
	"created database with ASCII characters from $from_char to $to_char");
}

# Filter the contents of a dump before its use in a content comparison.
# This returns the path to the filtered dump.
sub filter_dump
{
	my ($is_old, $old_version, $dump_file) = @_;
	my $dump_contents = slurp_file($dump_file);

	if ($is_old)
	{
		$dump_contents = adjust_old_dumpfile($old_version, $dump_contents);
	}
	else
	{
		$dump_contents = adjust_new_dumpfile($old_version, $dump_contents);
	}

	my $dump_file_filtered = "${dump_file}_filtered";
	open(my $dh, '>', $dump_file_filtered)
	  || die "opening $dump_file_filtered";
	print $dh $dump_contents;
	close($dh);

	return $dump_file_filtered;
}

# The test of pg_upgrade requires two clusters, an old one and a new one
# that gets upgraded. Before running the upgrade, a logical dump of the
# old cluster is taken, and a second logical dump of the new one is taken
# after the upgrade. The upgrade test passes if there are no differences
# in these two dumps.
# (after filtering) in these two dumps.

# Testing upgrades with an older version of PostgreSQL requires setting up
# two environment variables, as of:
@@ -49,8 +75,10 @@ if ( (defined($ENV{olddump}) && !defined($ENV{oldinstall}))
	die "olddump or oldinstall is undefined";
}

# Temporary location for the dumps taken
my $tempdir = PostgreSQL::Test::Utils::tempdir;
# Paths to the dumps taken during the tests.
my $tempdir = PostgreSQL::Test::Utils::tempdir;
my $dump1_file = "$tempdir/dump1.sql";
my $dump2_file = "$tempdir/dump2.sql";

# Initialize node to upgrade
my $oldnode =
@@ -60,7 +88,10 @@ my $oldnode =
# To increase coverage of non-standard segment size and group access without
# increasing test runtime, run these tests with a custom setting.
# --allow-group-access and --wal-segsize have been added in v11.
$oldnode->init(extra => [ '--wal-segsize', '1', '--allow-group-access' ]);
my %node_params = ();
$node_params{extra} = [ '--wal-segsize', '1', '--allow-group-access' ]
  if $oldnode->pg_version >= 11;
$oldnode->init(%node_params);
$oldnode->start;

# The default location of the source code is the root of this directory.
@@ -129,37 +160,52 @@ else
	is($rc, 0, 'regression tests pass');
}

# Initialize a new node for the upgrade.
my $newnode = PostgreSQL::Test::Cluster->new('new_node');
$newnode->init(%node_params);

my $newbindir = $newnode->config_data('--bindir');
my $oldbindir = $oldnode->config_data('--bindir');

# Before dumping, get rid of objects not existing or not supported in later
# versions. This depends on the version of the old server used, and matters
# only if different major versions are used for the dump.
if (defined($ENV{oldinstall}))
{
	# Note that upgrade_adapt.sql from the new version is used, to
	# cope with an upgrade to this version.
	$oldnode->command_ok(
		[
			'psql', '-X',
			'-f', "$srcdir/src/bin/pg_upgrade/upgrade_adapt.sql",
			'regression'
		],
		'ran adapt script');
}
	# Consult AdjustUpgrade to find out what we need to do.
	my $dbnames =
	  $oldnode->safe_psql('postgres', qq(SELECT datname FROM pg_database));
	my %dbnames;
	do { $dbnames{$_} = 1; }
	  foreach split /\s+/s, $dbnames;
	my $adjust_cmds =
	  adjust_database_contents($oldnode->pg_version, %dbnames);

# Initialize a new node for the upgrade.
my $newnode = PostgreSQL::Test::Cluster->new('new_node');
$newnode->init(extra => [ '--wal-segsize', '1', '--allow-group-access' ]);
my $newbindir = $newnode->config_data('--bindir');
my $oldbindir = $oldnode->config_data('--bindir');
	foreach my $updb (keys %$adjust_cmds)
	{
		my $upcmds = join(";\n", @{ $adjust_cmds->{$updb} });

		# For simplicity, use the newer version's psql to issue the commands.
		$newnode->command_ok(
			[
				'psql', '-X',
				'-v', 'ON_ERROR_STOP=1',
				'-c', $upcmds,
				'-d', $oldnode->connstr($updb),
			],
			"ran version adaptation commands for database $updb");
	}
}
# Take a dump before performing the upgrade as a base comparison. Note
# that we need to use pg_dumpall from the new node here.
$newnode->command_ok(
	[
		'pg_dumpall', '--no-sync',
		'-d', $oldnode->connstr('postgres'),
		'-f', "$tempdir/dump1.sql"
	],
	'dump before running pg_upgrade');
my @dump_command = (
	'pg_dumpall', '--no-sync', '-d', $oldnode->connstr('postgres'),
	'-f', $dump1_file);
# --extra-float-digits is needed when upgrading from a version older than 11.
push(@dump_command, '--extra-float-digits', '0')
  if ($oldnode->pg_version < 12);
$newnode->command_ok(\@dump_command, 'dump before running pg_upgrade');

# After dumping, update references to the old source tree's regress.so
# to point to the new tree.
@@ -173,7 +219,7 @@ if (defined($ENV{oldinstall}))
	chomp($output);
	my @libpaths = split("\n", $output);

	my $dump_data = slurp_file("$tempdir/dump1.sql");
	my $dump_data = slurp_file($dump1_file);

	my $newregresssrc = "$srcdir/src/test/regress";
	foreach (@libpaths)
@@ -183,7 +229,7 @@ if (defined($ENV{oldinstall}))
		$dump_data =~ s/$libpath/$newregresssrc/g;
	}

	open my $fh, ">", "$tempdir/dump1.sql" or die "could not open dump file";
	open my $fh, ">", $dump1_file or die "could not open dump file";
	print $fh $dump_data;
	close $fh;
@@ -284,24 +330,34 @@ if (-d $log_path)
}

# Second dump from the upgraded instance.
$newnode->command_ok(
	[
		'pg_dumpall', '--no-sync',
		'-d', $newnode->connstr('postgres'),
		'-f', "$tempdir/dump2.sql"
	],
	'dump after running pg_upgrade');
@dump_command = (
	'pg_dumpall', '--no-sync', '-d', $newnode->connstr('postgres'),
	'-f', $dump2_file);
# --extra-float-digits is needed when upgrading from a version older than 11.
push(@dump_command, '--extra-float-digits', '0')
  if ($oldnode->pg_version < 12);
$newnode->command_ok(\@dump_command, 'dump after running pg_upgrade');

# No need to apply filters on the dumps if working on the same version
# for the old and new nodes.
my $dump1_filtered = $dump1_file;
my $dump2_filtered = $dump2_file;
if ($oldnode->pg_version != $newnode->pg_version)
{
	$dump1_filtered = filter_dump(1, $oldnode->pg_version, $dump1_file);
	$dump2_filtered = filter_dump(0, $oldnode->pg_version, $dump2_file);
}

# Compare the two dumps, there should be no differences.
my $compare_res = compare("$tempdir/dump1.sql", "$tempdir/dump2.sql");
my $compare_res = compare($dump1_filtered, $dump2_filtered);
is($compare_res, 0, 'old and new dumps match after pg_upgrade');

# Provide more context if the dumps do not match.
if ($compare_res != 0)
{
	my ($stdout, $stderr) =
	  run_command([ 'diff', "$tempdir/dump1.sql", "$tempdir/dump2.sql" ]);
	print "=== diff of $tempdir/dump1.sql and $tempdir/dump2.sql\n";
	  run_command([ 'diff', '-u', $dump1_filtered, $dump2_filtered ]);
	print "=== diff of $dump1_filtered and $dump2_filtered\n";
	print "=== stdout ===\n";
	print $stdout;
	print "=== stderr ===\n";
@@ -1,91 +0,0 @@
--
-- SQL queries for upgrade tests across different major versions.
--
-- This file includes a set of SQL queries to make a cluster to-be-upgraded
-- compatible with the version this file is based on. Note that this
-- requires psql, as per-version queries are controlled with a set of \if
-- clauses.

-- This script is backward-compatible, so it is able to work with any version
-- newer than 9.2 we are upgrading from, up to the branch this script is stored
-- on (even if this would not run if running pg_upgrade with the same version
-- for the origin and the target).

-- \if accepts a simple boolean value, so all the version checks are
-- saved based on this assumption.
SELECT
  ver <= 902 AS oldpgversion_le92,
  ver <= 904 AS oldpgversion_le94,
  ver <= 906 AS oldpgversion_le96,
  ver <= 1000 AS oldpgversion_le10,
  ver <= 1100 AS oldpgversion_le11,
  ver <= 1300 AS oldpgversion_le13
  FROM (SELECT current_setting('server_version_num')::int / 100 AS ver) AS v;
\gset

-- Objects last appearing in 9.2.
\if :oldpgversion_le92
-- Note that those tables are removed from the regression tests in 9.3
-- and newer versions.
DROP TABLE abstime_tbl;
DROP TABLE reltime_tbl;
DROP TABLE tinterval_tbl;
\endif

-- Objects last appearing in 9.4.
\if :oldpgversion_le94
-- This aggregate has been fixed in 9.5 and later versions, so drop
-- and re-create it.
DROP AGGREGATE array_cat_accum(anyarray);
CREATE AGGREGATE array_larger_accum (anyarray) (
  sfunc = array_larger,
  stype = anyarray,
  initcond = $${}$$);
-- This operator has been fixed in 9.5 and later versions, so drop and
-- re-create it.
DROP OPERATOR @#@ (NONE, bigint);
CREATE OPERATOR @#@ (PROCEDURE = factorial,
  RIGHTARG = bigint);
\endif
-- Objects last appearing in 9.6.
\if :oldpgversion_le96
DROP FUNCTION public.oldstyle_length(integer, text);
\endif

-- Objects last appearing in 10.
\if :oldpgversion_le10
DROP FUNCTION IF EXISTS boxarea(box);
DROP FUNCTION IF EXISTS funny_dup17();
\endif

-- Objects last appearing in 11.
\if :oldpgversion_le11
-- WITH OIDS is supported until v11, so remove its support for any
-- relations marked as such.
DO $stmt$
  DECLARE
    rec text;
  BEGIN
  FOR rec in
    SELECT oid::regclass::text
    FROM pg_class
    WHERE relname !~ '^pg_'
      AND relhasoids
      AND relkind in ('r', 'f')
    ORDER BY 1
  LOOP
    EXECUTE 'ALTER TABLE ' || quote_ident(rec) || ' SET WITHOUT OIDS';
  END LOOP;
  END; $stmt$;
\endif

-- Objects last appearing in 13.
\if :oldpgversion_le13
-- Until v10, operators could only be dropped one at a time, so be careful
-- to stick with one command for each drop here.
DROP OPERATOR public.#@# (pg_catalog.int8, NONE);
DROP OPERATOR public.#%# (pg_catalog.int8, NONE);
DROP OPERATOR public.!=- (pg_catalog.int8, NONE);
DROP OPERATOR public.#@%# (pg_catalog.int8, NONE);
\endif
src/test/perl/PostgreSQL/Test/AdjustUpgrade.pm (new file, 500 lines)
@@ -0,0 +1,500 @@
# Copyright (c) 2023, PostgreSQL Global Development Group

=pod

=head1 NAME

PostgreSQL::Test::AdjustUpgrade - helper module for cross-version upgrade tests

=head1 SYNOPSIS

  use PostgreSQL::Test::AdjustUpgrade;

  # Build commands to adjust contents of old-version database before dumping
  $statements = adjust_database_contents($old_version, %dbnames);

  # Adjust contents of old pg_dumpall output file to match newer version
  $dump = adjust_old_dumpfile($old_version, $dump);

  # Adjust contents of new pg_dumpall output file to match older version
  $dump = adjust_new_dumpfile($old_version, $dump);

=head1 DESCRIPTION

C<PostgreSQL::Test::AdjustUpgrade> encapsulates various hacks needed to
compare the results of cross-version upgrade tests.

=cut

package PostgreSQL::Test::AdjustUpgrade;

use strict;
use warnings;

use Exporter 'import';
use PostgreSQL::Version;

our @EXPORT = qw(
  adjust_database_contents
  adjust_old_dumpfile
  adjust_new_dumpfile
);
=pod

=head1 ROUTINES

=over

=item $statements = adjust_database_contents($old_version, %dbnames)

Generate SQL commands to perform any changes to an old-version installation
that are needed before we can pg_upgrade it into the current PostgreSQL
version.

Typically this involves dropping or adjusting no-longer-supported objects.

Arguments:

=over

=item C<old_version>: Branch we are upgrading from, represented as a
PostgreSQL::Version object.

=item C<dbnames>: Hash of database names present in the old installation.

=back

Returns a reference to a hash, wherein the keys are database names and the
values are arrayrefs to lists of statements to be run in those databases.

=cut
sub adjust_database_contents
{
	my ($old_version, %dbnames) = @_;
	my $result = {};

	# remove dbs of modules known to cause pg_upgrade to fail
	# anything not builtin and incompatible should clean up its own db
	foreach my $bad_module ('test_ddl_deparse', 'tsearch2')
	{
		if ($dbnames{"contrib_regression_$bad_module"})
		{
			_add_st($result, 'postgres',
				"drop database contrib_regression_$bad_module");
			delete($dbnames{"contrib_regression_$bad_module"});
		}
	}

	# avoid version number issues with test_ext7
	if ($dbnames{contrib_regression_test_extensions})
	{
		_add_st(
			$result,
			'contrib_regression_test_extensions',
			'drop extension if exists test_ext7');
	}

	# stuff not supported from release 14
	if ($old_version < 14)
	{
		# postfix operators (some don't exist in very old versions)
		_add_st(
			$result,
			'regression',
			'drop operator #@# (bigint,NONE)',
			'drop operator #%# (bigint,NONE)',
			'drop operator if exists !=- (bigint,NONE)',
			'drop operator if exists #@%# (bigint,NONE)');

		# get rid of dblink's dependencies on regress.so
		my $regrdb =
		  $old_version le '9.4'
		  ? 'contrib_regression'
		  : 'contrib_regression_dblink';

		if ($dbnames{$regrdb})
		{
			_add_st(
				$result, $regrdb,
				'drop function if exists public.putenv(text)',
				'drop function if exists public.wait_pid(integer)');
		}
	}
	# user table OIDs are gone from release 12 on
	if ($old_version < 12)
	{
		my $nooid_stmt = q{
			DO $stmt$
			DECLARE
				rec text;
			BEGIN
			FOR rec in
				select oid::regclass::text
				from pg_class
				where relname !~ '^pg_'
					and relhasoids
					and relkind in ('r','m')
				order by 1
			LOOP
				execute 'ALTER TABLE ' || rec || ' SET WITHOUT OIDS';
				RAISE NOTICE 'removing oids from table %', rec;
			END LOOP;
			END; $stmt$;
		};

		foreach my $oiddb ('regression', 'contrib_regression_btree_gist')
		{
			next unless $dbnames{$oiddb};
			_add_st($result, $oiddb, $nooid_stmt);
		}

		# this table had OIDs too, but we'll just drop it
		if ($old_version >= 10 && $dbnames{'contrib_regression_postgres_fdw'})
		{
			_add_st(
				$result,
				'contrib_regression_postgres_fdw',
				'drop foreign table ft_pg_type');
		}
	}

	# abstime+friends are gone from release 12 on; but these tables
	# might or might not be present depending on regression test vintage
	if ($old_version < 12)
	{
		_add_st($result, 'regression',
			'drop table if exists abstime_tbl, reltime_tbl, tinterval_tbl');
	}

	# some regression functions gone from release 11 on
	if ($old_version < 11)
	{
		_add_st(
			$result, 'regression',
			'drop function if exists public.boxarea(box)',
			'drop function if exists public.funny_dup17()');
	}
	# version-0 C functions are no longer supported
	if ($old_version < 10)
	{
		_add_st($result, 'regression',
			'drop function oldstyle_length(integer, text)');
	}

	if ($old_version lt '9.5')
	{
		# cope with changes of underlying functions
		_add_st(
			$result,
			'regression',
			'drop operator @#@ (NONE, bigint)',
			'CREATE OPERATOR @#@ ('
			  . 'PROCEDURE = factorial, RIGHTARG = bigint )',
			'drop aggregate public.array_cat_accum(anyarray)',
			'CREATE AGGREGATE array_larger_accum (anyarray) ' . ' ( '
			  . ' sfunc = array_larger, '
			  . ' stype = anyarray, '
			  . ' initcond = $${}$$ ' . ' ) ');

		# "=>" is no longer valid as an operator name
		_add_st($result, 'regression',
			'drop operator if exists public.=> (bigint, NONE)');
	}

	return $result;
}

# Internal subroutine to add statement(s) to the list for the given db.
sub _add_st
{
	my ($result, $db, @st) = @_;

	$result->{$db} ||= [];
	push(@{ $result->{$db} }, @st);
}
=pod

=item adjust_old_dumpfile($old_version, $dump)

Edit a dump output file, taken from the adjusted old-version installation
by current-version C<pg_dumpall -s>, so that it will match the results of
C<pg_dumpall -s> on the pg_upgrade'd installation.

Typically this involves coping with cosmetic differences in the output
of backend subroutines used by pg_dump.

Arguments:

=over

=item C<old_version>: Branch we are upgrading from, represented as a
PostgreSQL::Version object.

=item C<dump>: Contents of dump file

=back

Returns the modified dump text.

=cut
sub adjust_old_dumpfile
{
	my ($old_version, $dump) = @_;

	# use Unix newlines
	$dump =~ s/\r\n/\n/g;

	# Version comments will certainly not match.
	$dump =~ s/^-- Dumped from database version.*\n//mg;

	if ($old_version < 14)
	{
		# Remove mentions of extended hash functions.
		$dump =~ s {^(\s+OPERATOR\s1\s=\(integer,integer\))\s,\n
			\s+FUNCTION\s2\s\(integer,\sinteger\)\spublic\.part_hashint4_noop\(integer,bigint\);}
			{$1;}mxg;
		$dump =~ s {^(\s+OPERATOR\s1\s=\(text,text\))\s,\n
			\s+FUNCTION\s2\s\(text,\stext\)\spublic\.part_hashtext_length\(text,bigint\);}
			{$1;}mxg;
	}

	# Change trigger definitions to say ... EXECUTE FUNCTION ...
	if ($old_version < 12)
	{
		# would like to use lookbehind here but perl complains
		# so do it this way
		$dump =~ s/
			(^CREATE\sTRIGGER\s.*?)
			\sEXECUTE\sPROCEDURE
			/$1 EXECUTE FUNCTION/mgx;
	}

	if ($old_version lt '9.6')
	{
		# adjust some places where we don't print so many parens anymore

		my $prefix =
		  "'New York'\tnew & york | big & apple | nyc\t'new' & 'york'\t";
		my $orig = "( 'new' & 'york' | 'big' & 'appl' ) | 'nyc'";
		my $repl = "'new' & 'york' | 'big' & 'appl' | 'nyc'";
		$dump =~ s/(?<=^\Q$prefix\E)\Q$orig\E/$repl/mg;

		$prefix =
		  "'Sanct Peter'\tPeterburg | peter | 'Sanct Peterburg'\t'sanct' & 'peter'\t";
		$orig = "( 'peterburg' | 'peter' ) | 'sanct' & 'peterburg'";
		$repl = "'peterburg' | 'peter' | 'sanct' & 'peterburg'";
		$dump =~ s/(?<=^\Q$prefix\E)\Q$orig\E/$repl/mg;
	}
	if ($old_version lt '9.5')
	{
		# adjust some places where we don't print so many parens anymore

		my $prefix = "CONSTRAINT (?:sequence|copy)_con CHECK [(][(]";
		my $orig = "((x > 3) AND (y <> 'check failed'::text))";
		my $repl = "(x > 3) AND (y <> 'check failed'::text)";
		$dump =~ s/($prefix)\Q$orig\E/$1$repl/mg;

		$prefix = "CONSTRAINT insert_con CHECK [(][(]";
		$orig = "((x >= 3) AND (y <> 'check failed'::text))";
		$repl = "(x >= 3) AND (y <> 'check failed'::text)";
		$dump =~ s/($prefix)\Q$orig\E/$1$repl/mg;

		$orig = "DEFAULT ((-1) * currval('public.insert_seq'::regclass))";
		$repl =
		  "DEFAULT ('-1'::integer * currval('public.insert_seq'::regclass))";
		$dump =~ s/\Q$orig\E/$repl/mg;

		my $expr =
		  "(rsl.sl_color = rsh.slcolor) AND (rsl.sl_len_cm >= rsh.slminlen_cm)";
		$dump =~ s/WHERE \(\(\Q$expr\E\)/WHERE ($expr/g;

		$expr =
		  "(rule_and_refint_t3.id3a = new.id3a) AND (rule_and_refint_t3.id3b = new.id3b)";
		$dump =~ s/WHERE \(\(\Q$expr\E\)/WHERE ($expr/g;

		$expr =
		  "(rule_and_refint_t3_1.id3a = new.id3a) AND (rule_and_refint_t3_1.id3b = new.id3b)";
		$dump =~ s/WHERE \(\(\Q$expr\E\)/WHERE ($expr/g;
	}
	if ($old_version lt '9.3')
	{
		# CREATE VIEW/RULE statements were not pretty-printed before 9.3.
		# To cope, reduce all whitespace sequences within them to one space.
		# This must be done on both old and new dumps.
		$dump = _mash_view_whitespace($dump);

		# _mash_view_whitespace doesn't handle multi-command rules;
		# rather than trying to fix that, just hack the exceptions manually.

		my $prefix =
		  "CREATE RULE rtest_sys_del AS ON DELETE TO public.rtest_system DO (DELETE FROM public.rtest_interface WHERE (rtest_interface.sysname = old.sysname);";
		my $line2 = " DELETE FROM public.rtest_admin";
		my $line3 = " WHERE (rtest_admin.sysname = old.sysname);";
		$dump =~
		  s/(?<=\Q$prefix\E)\Q$line2$line3\E \);/\n$line2\n $line3\n);/mg;

		$prefix =
		  "CREATE RULE rtest_sys_upd AS ON UPDATE TO public.rtest_system DO (UPDATE public.rtest_interface SET sysname = new.sysname WHERE (rtest_interface.sysname = old.sysname);";
		$line2 = " UPDATE public.rtest_admin SET sysname = new.sysname";
		$line3 = " WHERE (rtest_admin.sysname = old.sysname);";
		$dump =~
		  s/(?<=\Q$prefix\E)\Q$line2$line3\E \);/\n$line2\n $line3\n);/mg;

		# and there's one place where pre-9.3 uses a different table alias
		$dump =~ s {^(CREATE\sRULE\srule_and_refint_t3_ins\sAS\s
			ON\sINSERT\sTO\spublic\.rule_and_refint_t3\s
			WHERE\s\(EXISTS\s\(SELECT\s1\sFROM\spublic\.rule_and_refint_t3)\s
			(WHERE\s\(\(rule_and_refint_t3)
			(\.id3a\s=\snew\.id3a\)\sAND\s\(rule_and_refint_t3)
			(\.id3b\s=\snew\.id3b\)\sAND\s\(rule_and_refint_t3)}
			{$1 rule_and_refint_t3_1 $2_1$3_1$4_1}mx;

		# Also fix old use of NATURAL JOIN syntax
		$dump =~ s {NATURAL JOIN public\.credit_card r}
			{JOIN public.credit_card r USING (cid)}mg;
		$dump =~ s {NATURAL JOIN public\.credit_usage r}
			{JOIN public.credit_usage r USING (cid)}mg;
	}

	# Suppress blank lines, as some places in pg_dump emit more or fewer.
	$dump =~ s/\n\n+/\n/g;

	return $dump;
}
# Internal subroutine to mangle whitespace within view/rule commands.
# Any consecutive sequence of whitespace is reduced to one space.
sub _mash_view_whitespace
{
	my ($dump) = @_;

	foreach my $leader ('CREATE VIEW', 'CREATE RULE')
	{
		my @splitchunks = split $leader, $dump;

		$dump = shift(@splitchunks);
		foreach my $chunk (@splitchunks)
		{
			my @thischunks = split /;/, $chunk, 2;
			my $stmt = shift(@thischunks);

			# now $stmt is just the body of the CREATE VIEW/RULE
			$stmt =~ s/\s+/ /sg;
			# we also need to smash these forms for sub-selects and rules
			$stmt =~ s/\( SELECT/(SELECT/g;
			$stmt =~ s/\( INSERT/(INSERT/g;
			$stmt =~ s/\( UPDATE/(UPDATE/g;
			$stmt =~ s/\( DELETE/(DELETE/g;

			$dump .= $leader . $stmt . ';' . $thischunks[0];
		}
	}
	return $dump;
}
=pod

=item adjust_new_dumpfile($old_version, $dump)

Edit a dump output file, taken from the pg_upgrade'd installation
by current-version C<pg_dumpall -s>, so that it will match the old
dump output file as adjusted by C<adjust_old_dumpfile>.

Typically this involves deleting data not present in the old installation.

Arguments:

=over

=item C<old_version>: Branch we are upgrading from, represented as a
PostgreSQL::Version object.

=item C<dump>: Contents of dump file

=back

Returns the modified dump text.

=cut
sub adjust_new_dumpfile
{
	my ($old_version, $dump) = @_;

	# use Unix newlines
	$dump =~ s/\r\n/\n/g;

	# Version comments will certainly not match.
	$dump =~ s/^-- Dumped from database version.*\n//mg;

	if ($old_version < 14)
	{
		# Suppress noise-word uses of IN in CREATE/ALTER PROCEDURE.
		$dump =~ s/^(CREATE PROCEDURE .*?)\(IN /$1(/mg;
		$dump =~ s/^(ALTER PROCEDURE .*?)\(IN /$1(/mg;
		$dump =~ s/^(CREATE PROCEDURE .*?), IN /$1, /mg;
		$dump =~ s/^(ALTER PROCEDURE .*?), IN /$1, /mg;
		$dump =~ s/^(CREATE PROCEDURE .*?), IN /$1, /mg;
		$dump =~ s/^(ALTER PROCEDURE .*?), IN /$1, /mg;

		# Remove SUBSCRIPT clauses in CREATE TYPE.
		$dump =~ s/^\s+SUBSCRIPT = raw_array_subscript_handler,\n//mg;

		# Remove multirange_type_name clauses in CREATE TYPE AS RANGE.
		$dump =~ s {,\n\s+multirange_type_name = .*?(,?)$} {$1}mg;

		# Remove mentions of extended hash functions.
		$dump =~
		  s {^ALTER\sOPERATOR\sFAMILY\spublic\.part_test_int4_ops\sUSING\shash\sADD\n
			\s+FUNCTION\s2\s\(integer,\sinteger\)\spublic\.part_hashint4_noop\(integer,bigint\);} {}mxg;
		$dump =~
		  s {^ALTER\sOPERATOR\sFAMILY\spublic\.part_test_text_ops\sUSING\shash\sADD\n
			\s+FUNCTION\s2\s\(text,\stext\)\spublic\.part_hashtext_length\(text,bigint\);} {}mxg;
	}

	# pre-v12 dumps will not say anything about default_table_access_method.
	if ($old_version < 12)
	{
		$dump =~ s/^SET default_table_access_method = heap;\n//mg;
	}
	# dumps from pre-9.6 dblink may include redundant ACL settings
	if ($old_version lt '9.6')
	{
		my $comment =
		  "-- Name: FUNCTION dblink_connect_u\(.*?\); Type: ACL; Schema: public; Owner: .*";
		my $sql =
		  "REVOKE ALL ON FUNCTION public\.dblink_connect_u\(.*?\) FROM PUBLIC;";
		$dump =~ s/^--\n$comment\n--\n+$sql\n+//mg;
	}

	if ($old_version lt '9.3')
	{
		# CREATE VIEW/RULE statements were not pretty-printed before 9.3.
		# To cope, reduce all whitespace sequences within them to one space.
		# This must be done on both old and new dumps.
		$dump = _mash_view_whitespace($dump);
	}

	# Suppress blank lines, as some places in pg_dump emit more or fewer.
	$dump =~ s/\n\n+/\n/g;

	return $dump;
}

=pod

=back

=cut

1;