<!--
|
|
$PostgreSQL: pgsql/doc/src/sgml/backup.sgml,v 2.69 2005/06/25 22:47:28 tgl Exp $
|
|
-->
|
|
<chapter id="backup">
|
|
<title>Backup and Restore</title>
|
|
|
|
<indexterm zone="backup"><primary>backup</></>
|
|
|
|
<para>
|
|
As with everything that contains valuable data, <productname>PostgreSQL</>
|
|
databases should be backed up regularly. While the procedure is
|
|
essentially simple, it is important to have a basic understanding of
|
|
the underlying techniques and assumptions.
|
|
</para>
|
|
|
|
<para>
|
|
There are three fundamentally different approaches to backing up
|
|
<productname>PostgreSQL</> data:
|
|
<itemizedlist>
|
|
<listitem><para><acronym>SQL</> dump</para></listitem>
|
|
<listitem><para>File system level backup</para></listitem>
|
|
<listitem><para>On-line backup</para></listitem>
|
|
</itemizedlist>
|
|
Each has its own strengths and weaknesses.
|
|
</para>
|
|
|
|
<sect1 id="backup-dump">
|
|
<title><acronym>SQL</> Dump</title>
|
|
|
|
<para>
|
|
The idea behind the SQL-dump method is to generate a text file with SQL
|
|
commands that, when fed back to the server, will recreate the
|
|
database in the same state as it was at the time of the dump.
|
|
<productname>PostgreSQL</> provides the utility program
|
|
<xref linkend="app-pgdump"> for this purpose. The basic usage of this
|
|
command is:
|
|
<synopsis>
|
|
pg_dump <replaceable class="parameter">dbname</replaceable> > <replaceable class="parameter">outfile</replaceable>
|
|
</synopsis>
|
|
As you see, <application>pg_dump</> writes its results to the
|
|
standard output. We will see below how this can be useful.
|
|
</para>
|
|
|
|
<para>
|
|
<application>pg_dump</> is a regular <productname>PostgreSQL</>
|
|
client application (albeit a particularly clever one). This means
|
|
that you can do this backup procedure from any remote host that has
|
|
access to the database. But remember that <application>pg_dump</>
|
|
does not operate with special permissions. In particular, it must
|
|
have read access to all tables that you want to back up, so in
|
|
practice you almost always have to run it as a database superuser.
|
|
</para>
|
|
|
|
<para>
|
|
To specify which database server <application>pg_dump</> should
|
|
contact, use the command line options <option>-h
|
|
<replaceable>host</></> and <option>-p <replaceable>port</></>. The
|
|
default host is the local host or whatever your
|
|
<envar>PGHOST</envar> environment variable specifies. Similarly,
|
|
the default port is indicated by the <envar>PGPORT</envar>
|
|
environment variable or, failing that, by the compiled-in default.
|
|
(Conveniently, the server will normally have the same compiled-in
|
|
default.)
|
|
</para>
|
|
|
|
<para>
Like any other <productname>PostgreSQL</> client application,
<application>pg_dump</> will by default connect with the database
user name that is equal to the current operating system user name. To override
this, either specify the <option>-U</option> option or set the
environment variable <envar>PGUSER</envar>. Remember that
<application>pg_dump</> connections are subject to the normal
client authentication mechanisms (which are described in <xref
linkend="client-authentication">).
</para>
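<para>
For example, to take the dump from a server on another host, at a
nonstandard port, and under a different database user name, the options
described above can simply be combined:
<programlisting>
pg_dump -h <replaceable>host</> -p <replaceable>port</> -U <replaceable>username</> <replaceable class="parameter">dbname</replaceable> > <replaceable class="parameter">outfile</replaceable>
</programlisting>
</para>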
|
|
|
|
<para>
|
|
Dumps created by <application>pg_dump</> are internally consistent,
|
|
that is, updates to the database while <application>pg_dump</> is
|
|
running will not be in the dump. <application>pg_dump</> does not
|
|
block other operations on the database while it is working.
|
|
(Exceptions are those operations that need to operate with an
|
|
exclusive lock, such as <command>VACUUM FULL</command>.)
|
|
</para>
|
|
|
|
<important>
|
|
<para>
|
|
When your database schema relies on OIDs (for instance as foreign
|
|
keys) you must instruct <application>pg_dump</> to dump the OIDs
|
|
as well. To do this, use the <option>-o</option> command line
|
|
option.
|
|
</para>
|
|
</important>
|
|
|
|
<sect2 id="backup-dump-restore">
|
|
<title>Restoring the dump</title>
|
|
|
|
<para>
|
|
The text files created by <application>pg_dump</> are intended to
|
|
be read in by the <application>psql</application> program. The
|
|
general command form to restore a dump is
|
|
<synopsis>
|
|
psql <replaceable class="parameter">dbname</replaceable> < <replaceable class="parameter">infile</replaceable>
|
|
</synopsis>
|
|
where <replaceable class="parameter">infile</replaceable> is what
you used as <replaceable class="parameter">outfile</replaceable>
for the <application>pg_dump</> command. The database <replaceable
class="parameter">dbname</replaceable> will not be created by this
command; you must create it yourself from <literal>template0</> before executing
<application>psql</> (e.g., with <literal>createdb -T template0
<replaceable class="parameter">dbname</></literal>).
<application>psql</> supports options similar to <application>pg_dump</>'s
for controlling the database server location and the user name. See
<xref linkend="app-psql">'s reference page for more information.
</para>
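<para>
For example, the two steps for <replaceable
class="parameter">dbname</replaceable> might look like:
<programlisting>
createdb -T template0 <replaceable class="parameter">dbname</replaceable>
psql <replaceable class="parameter">dbname</replaceable> < <replaceable class="parameter">infile</replaceable>
</programlisting>
</para>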
|
|
|
|
<para>
|
|
Not only must the target database already exist before starting to
|
|
run the restore, but so must all the users who own objects in the
|
|
dumped database or were granted permissions on the objects. If they
|
|
do not, then the restore will fail to recreate the objects with the
|
|
original ownership and/or permissions. (Sometimes this is what you want,
|
|
but usually it is not.)
|
|
</para>
|
|
|
|
<para>
|
|
Once restored, it is wise to run <xref linkend="sql-analyze"
|
|
endterm="sql-analyze-title"> on each database so the optimizer has
|
|
useful statistics. An easy way to do this is to run
<command>vacuumdb -a -z</>; this is equivalent to running
<command>VACUUM ANALYZE</command> in each database manually.
|
|
</para>
|
|
|
|
<para>
|
|
The ability of <application>pg_dump</> and <application>psql</> to
|
|
write to or read from pipes makes it possible to dump a database
|
|
directly from one server to another; for example:
|
|
<programlisting>
|
|
pg_dump -h <replaceable>host1</> <replaceable>dbname</> | psql -h <replaceable>host2</> <replaceable>dbname</>
|
|
</programlisting>
|
|
</para>
|
|
|
|
<important>
|
|
<para>
|
|
The dumps produced by <application>pg_dump</> are relative to
|
|
<literal>template0</>. This means that any languages, procedures,
|
|
etc. added to <literal>template1</> will also be dumped by
|
|
<application>pg_dump</>. As a result, when restoring, if you are
|
|
using a customized <literal>template1</>, you must create the
|
|
empty database from <literal>template0</>, as in the example
|
|
above.
|
|
</para>
|
|
</important>
|
|
|
|
<para>
|
|
For advice on how to load large amounts of data into
|
|
<productname>PostgreSQL</productname> efficiently, refer to <xref
|
|
linkend="populate">.
|
|
</para>
|
|
</sect2>
|
|
|
|
<sect2 id="backup-dump-all">
|
|
<title>Using <application>pg_dumpall</></title>
|
|
|
|
<para>
|
|
The above mechanism is cumbersome and inappropriate when backing
|
|
up an entire database cluster. For this reason the <xref
|
|
linkend="app-pg-dumpall"> program is provided.
|
|
<application>pg_dumpall</> backs up each database in a given
|
|
cluster, and also preserves cluster-wide data such as users and
|
|
groups. The basic usage of this command is:
|
|
<synopsis>
|
|
pg_dumpall > <replaceable>outfile</>
|
|
</synopsis>
|
|
The resulting dump can be restored with <application>psql</>:
|
|
<synopsis>
|
|
psql -f <replaceable class="parameter">infile</replaceable> postgres
|
|
</synopsis>
|
|
(Actually, you can specify any existing database name to start from,
|
|
but if you are reloading in an empty cluster then <literal>postgres</>
|
|
should generally be used.) It is always necessary to have
|
|
database superuser access when restoring a <application>pg_dumpall</>
|
|
dump, as that is required to restore the user and group information.
|
|
</para>
|
|
</sect2>
|
|
|
|
<sect2 id="backup-dump-large">
|
|
<title>Handling large databases</title>
|
|
|
|
<para>
Since <productname>PostgreSQL</productname> allows tables larger
than the maximum file size on your system, it can be problematic
to dump such a table to a file, because the resulting file will likely
be larger than the maximum size allowed by your system. Since
<application>pg_dump</> can write to the standard output, you can
just use standard Unix tools to work around this possible problem.
</para>
|
|
|
|
<formalpara>
|
|
<title>Use compressed dumps.</title>
|
|
<para>
|
|
You can use your favorite compression program, for example
|
|
<application>gzip</application>.
|
|
|
|
<programlisting>
|
|
pg_dump <replaceable class="parameter">dbname</replaceable> | gzip > <replaceable class="parameter">filename</replaceable>.gz
|
|
</programlisting>
|
|
|
|
Reload with
|
|
|
|
<programlisting>
|
|
createdb <replaceable class="parameter">dbname</replaceable>
|
|
gunzip -c <replaceable class="parameter">filename</replaceable>.gz | psql <replaceable class="parameter">dbname</replaceable>
|
|
</programlisting>
|
|
|
|
or
|
|
|
|
<programlisting>
|
|
cat <replaceable class="parameter">filename</replaceable>.gz | gunzip | psql <replaceable class="parameter">dbname</replaceable>
|
|
</programlisting>
|
|
</para>
|
|
</formalpara>
|
|
|
|
<formalpara>
|
|
<title>Use <command>split</>.</title>
|
|
<para>
|
|
The <command>split</command> command
|
|
allows you to split the output into pieces that are
|
|
acceptable in size to the underlying file system. For example, to
|
|
make chunks of 1 megabyte:
|
|
|
|
<programlisting>
|
|
pg_dump <replaceable class="parameter">dbname</replaceable> | split -b 1m - <replaceable class="parameter">filename</replaceable>
|
|
</programlisting>
|
|
|
|
Reload with
|
|
|
|
<programlisting>
|
|
createdb <replaceable class="parameter">dbname</replaceable>
|
|
cat <replaceable class="parameter">filename</replaceable>* | psql <replaceable class="parameter">dbname</replaceable>
|
|
</programlisting>
|
|
</para>
|
|
</formalpara>
|
|
|
|
<formalpara>
|
|
<title>Use the custom dump format.</title>
|
|
<para>
|
|
If <productname>PostgreSQL</productname> was built on a system with the
|
|
<application>zlib</> compression library installed, the custom dump
|
|
format will compress data as it writes it to the output file. This will
|
|
produce dump file sizes similar to using <command>gzip</command>, but it
|
|
has the added advantage that tables can be restored selectively. The
|
|
following command dumps a database using the custom dump format:
|
|
|
|
<programlisting>
|
|
pg_dump -Fc <replaceable class="parameter">dbname</replaceable> > <replaceable class="parameter">filename</replaceable>
|
|
</programlisting>
|
|
|
|
A custom-format dump is not a script for <application>psql</>, but
|
|
instead must be restored with <application>pg_restore</>.
|
|
See the <xref linkend="app-pgdump"> and <xref
|
|
linkend="app-pgrestore"> reference pages for details.
|
|
</para>
|
|
</formalpara>
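<para>
For instance, a custom-format dump might be restored in its entirety,
or a single table might be restored out of it, along these lines:
<programlisting>
pg_restore -d <replaceable class="parameter">dbname</replaceable> <replaceable class="parameter">filename</replaceable>
pg_restore -d <replaceable class="parameter">dbname</replaceable> -t <replaceable class="parameter">tablename</replaceable> <replaceable class="parameter">filename</replaceable>
</programlisting>
See the reference pages just mentioned for the full set of options.
</para>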
|
|
|
|
</sect2>
|
|
</sect1>
|
|
|
|
<sect1 id="backup-file">
|
|
<title>File system level backup</title>
|
|
|
|
<para>
|
|
An alternative backup strategy is to directly copy the files that
|
|
<productname>PostgreSQL</> uses to store the data in the database. In
|
|
<xref linkend="creating-cluster"> it is explained where these files
|
|
are located, but you have probably found them already if you are
|
|
interested in this method. You can use whatever method you prefer
|
|
for doing usual file system backups, for example
|
|
|
|
<programlisting>
|
|
tar -cf backup.tar /usr/local/pgsql/data
|
|
</programlisting>
|
|
</para>
|
|
|
|
<para>
|
|
There are two restrictions, however, which make this method
|
|
impractical, or at least inferior to the <application>pg_dump</>
|
|
method:
|
|
|
|
<orderedlist>
|
|
<listitem>
|
|
<para>
The database server <emphasis>must</> be shut down in order to
get a usable backup. Half-way measures such as disallowing all
connections will <emphasis>not</emphasis> work
(mainly because <command>tar</command> and similar tools do not take an
atomic snapshot of the state of the file system at a point in
time). Information about stopping the server can be found in
<xref linkend="postmaster-shutdown">. Needless to say, you
also need to shut down the server before restoring the data.
</para>
|
|
</listitem>
|
|
|
|
<listitem>
|
|
<para>
|
|
If you have dug into the details of the file system layout of the
|
|
database, you may be tempted to try to back up or restore only certain
|
|
individual tables or databases from their respective files or
|
|
directories. This will <emphasis>not</> work because the
|
|
information contained in these files contains only half the
|
|
truth. The other half is in the commit log files
|
|
<filename>pg_clog/*</filename>, which contain the commit status of
|
|
all transactions. A table file is only usable with this
|
|
information. Of course it is also impossible to restore only a
|
|
table and the associated <filename>pg_clog</filename> data
|
|
because that would render all other tables in the database
|
|
cluster useless. So file system backups only work for complete
|
|
restoration of an entire database cluster.
|
|
</para>
|
|
</listitem>
|
|
</orderedlist>
|
|
</para>
|
|
|
|
<para>
|
|
An alternative file-system backup approach is to make a
|
|
<quote>consistent snapshot</quote> of the data directory, if the
|
|
file system supports that functionality (and you are willing to
|
|
trust that it is implemented correctly). The typical procedure is
|
|
to make a <quote>frozen snapshot</> of the volume containing the
|
|
database, then copy the whole data directory (not just parts, see
|
|
above) from the snapshot to a backup device, then release the frozen
|
|
snapshot. This will work even while the database server is running.
|
|
However, a backup created in this way saves
|
|
the database files in a state where the database server was not
|
|
properly shut down; therefore, when you start the database server
|
|
on the backed-up data, it will think the server had crashed
|
|
and replay the WAL log. This is not a problem, just be aware of
|
|
it (and be sure to include the WAL files in your backup).
|
|
</para>
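<para>
As an illustration only (not a recommendation), if the data directory
lives on a Linux <application>LVM2</> logical volume, a frozen-snapshot
backup could be taken roughly like this; the volume, mount point, and
backup paths are placeholders:
<programlisting>
# take a point-in-time snapshot of the volume holding the data directory
lvcreate --snapshot --size 1G --name pgsnap /dev/vg0/pgdata
# copy the whole data directory out of the snapshot
mount /dev/vg0/pgsnap /mnt/pgsnap
tar -cf /backup/pgdata-snapshot.tar -C /mnt/pgsnap .
# release the snapshot
umount /mnt/pgsnap
lvremove -f /dev/vg0/pgsnap
</programlisting>
Consult your volume manager's documentation before relying on such a
procedure.
</para>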
|
|
|
|
<para>
|
|
If your database is spread across multiple file systems, there may not
|
|
be any way to obtain exactly-simultaneous frozen snapshots of all
|
|
the volumes. For example, if your data files and WAL log are on different
|
|
disks, or if tablespaces are on different file systems, it might
|
|
not be possible to use snapshot backup because the snapshots must be
|
|
simultaneous.
|
|
Read your file system documentation very carefully before trusting
|
|
to the consistent-snapshot technique in such situations. The safest
|
|
approach is to shut down the database server for long enough to
|
|
establish all the frozen snapshots.
|
|
</para>
|
|
|
|
<para>
|
|
Another option is to use <application>rsync</> to perform a file
|
|
system backup. This is done by first running <application>rsync</>
|
|
while the database server is running, then shutting down the database
|
|
server just long enough to do a second <application>rsync</>. The
|
|
second <application>rsync</> will be much quicker than the first,
|
|
because it has relatively little data to transfer, and the end result
|
|
will be consistent because the server was down. This method
|
|
allows a file system backup to be performed with minimal downtime.
|
|
</para>
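<para>
A minimal sketch of this approach might look as follows; the directory
names are placeholders, and the commands used to stop and start the
server depend on your installation:
<programlisting>
# first pass, with the server still running: copies the bulk of the data
rsync -a /usr/local/pgsql/data/ /backup/pgsql/data/
# brief shutdown, second pass copies only what changed, then restart
pg_ctl stop -D /usr/local/pgsql/data
rsync -a --delete /usr/local/pgsql/data/ /backup/pgsql/data/
pg_ctl start -D /usr/local/pgsql/data
</programlisting>
</para>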
|
|
|
|
<para>
|
|
Note that a file system backup will not necessarily be
|
|
smaller than an SQL dump. On the contrary, it will most likely be
|
|
larger. (<application>pg_dump</application> does not need to dump
|
|
the contents of indexes for example, just the commands to recreate
|
|
them.)
|
|
</para>
|
|
</sect1>
|
|
|
|
<sect1 id="backup-online">
|
|
<title>On-line backup and point-in-time recovery (PITR)</title>
|
|
|
|
<indexterm zone="backup">
|
|
<primary>on-line backup</primary>
|
|
</indexterm>
|
|
|
|
<indexterm zone="backup">
|
|
<primary>point-in-time recovery</primary>
|
|
</indexterm>
|
|
|
|
<indexterm zone="backup">
|
|
<primary>PITR</primary>
|
|
</indexterm>
|
|
|
|
<para>
|
|
At all times, <productname>PostgreSQL</> maintains a
|
|
<firstterm>write ahead log</> (WAL) in the <filename>pg_xlog/</>
|
|
subdirectory of the cluster's data directory. The log describes
|
|
every change made to the database's data files. This log exists
|
|
primarily for crash-safety purposes: if the system crashes, the
|
|
database can be restored to consistency by <quote>replaying</> the
|
|
log entries made since the last checkpoint. However, the existence
|
|
of the log makes it possible to use a third strategy for backing up
|
|
databases: we can combine a file-system-level backup with backup of
|
|
the WAL files. If recovery is needed, we restore the backup and
|
|
then replay from the backed-up WAL files to bring the backup up to
|
|
current time. This approach is more complex to administer than
|
|
either of the previous approaches, but it has some significant
|
|
benefits:
|
|
<itemizedlist>
|
|
<listitem>
|
|
<para>
|
|
We do not need a perfectly consistent backup as the starting point.
|
|
Any internal inconsistency in the backup will be corrected by log
|
|
replay (this is not significantly different from what happens during
|
|
crash recovery). So we don't need file system snapshot capability,
|
|
just <application>tar</> or a similar archiving tool.
|
|
</para>
|
|
</listitem>
|
|
<listitem>
|
|
<para>
|
|
Since we can string together an indefinitely long sequence of WAL files
|
|
for replay, continuous backup can be achieved simply by continuing to archive
|
|
the WAL files. This is particularly valuable for large databases, where
|
|
it may not be convenient to take a full backup frequently.
|
|
</para>
|
|
</listitem>
|
|
<listitem>
|
|
<para>
|
|
There is nothing that says we have to replay the WAL entries all the
|
|
way to the end. We could stop the replay at any point and have a
|
|
consistent snapshot of the database as it was at that time. Thus,
|
|
this technique supports <firstterm>point-in-time recovery</>: it is
|
|
possible to restore the database to its state at any time since your base
|
|
backup was taken.
|
|
</para>
|
|
</listitem>
|
|
<listitem>
|
|
<para>
|
|
If we continuously feed the series of WAL files to another
|
|
machine that has been loaded with the same base backup file, we
|
|
have a <quote>hot standby</> system: at any point we can bring up
|
|
the second machine and it will have a nearly-current copy of the
|
|
database.
|
|
</para>
|
|
</listitem>
|
|
</itemizedlist>
|
|
</para>
|
|
|
|
<para>
|
|
As with the plain file-system-backup technique, this method can only
|
|
support restoration of an entire database cluster, not a subset.
|
|
Also, it requires a lot of archival storage: the base backup may be bulky,
|
|
and a busy system will generate many megabytes of WAL traffic that
|
|
have to be archived. Still, it is the preferred backup technique in
|
|
many situations where high reliability is needed.
|
|
</para>
|
|
|
|
<para>
|
|
To recover successfully using an on-line backup, you need a continuous
|
|
sequence of archived WAL files that extends back at least as far as the
|
|
start time of your backup. So to get started, you should set up and test
|
|
your procedure for archiving WAL files <emphasis>before</> you take your
|
|
first base backup. Accordingly, we first discuss the mechanics of
|
|
archiving WAL files.
|
|
</para>
|
|
|
|
<sect2 id="backup-archiving-wal">
|
|
<title>Setting up WAL archiving</title>
|
|
|
|
<para>
|
|
In an abstract sense, a running <productname>PostgreSQL</> system
|
|
produces an indefinitely long sequence of WAL records. The system
|
|
physically divides this sequence into WAL <firstterm>segment
|
|
files</>, which are normally 16MB apiece (although the size can be
|
|
altered when building <productname>PostgreSQL</>). The segment
|
|
files are given numeric names that reflect their position in the
|
|
abstract WAL sequence. When not using WAL archiving, the system
|
|
normally creates just a few segment files and then
|
|
<quote>recycles</> them by renaming no-longer-needed segment files
|
|
to higher segment numbers. It's assumed that a segment file whose
|
|
contents precede the checkpoint-before-last is no longer of
|
|
interest and can be recycled.
|
|
</para>
|
|
|
|
<para>
|
|
When archiving WAL data, we want to capture the contents of each segment
|
|
file once it is filled, and save that data somewhere before the segment
|
|
file is recycled for reuse. Depending on the application and the
|
|
available hardware, there could be many different ways of <quote>saving
|
|
the data somewhere</>: we could copy the segment files to an NFS-mounted
|
|
directory on another machine, write them onto a tape drive (ensuring that
|
|
you have a way of restoring the file with its original file name), or batch
|
|
them together and burn them onto CDs, or something else entirely. To
|
|
provide the database administrator with as much flexibility as possible,
|
|
<productname>PostgreSQL</> tries not to make any assumptions about how
|
|
the archiving will be done. Instead, <productname>PostgreSQL</> lets
|
|
the administrator specify a shell command to be executed to copy a
|
|
completed segment file to wherever it needs to go. The command could be
|
|
as simple as a <application>cp</>, or it could invoke a complex shell
|
|
script — it's all up to you.
|
|
</para>
|
|
|
|
<para>
|
|
The shell command to use is specified by the <xref
|
|
linkend="guc-archive-command"> configuration parameter, which in practice
|
|
will always be placed in the <filename>postgresql.conf</filename> file.
|
|
In this string,
|
|
any <literal>%p</> is replaced by the absolute path of the file to
|
|
archive, while any <literal>%f</> is replaced by the file name only.
|
|
Write <literal>%%</> if you need to embed an actual <literal>%</>
|
|
character in the command. The simplest useful command is something
|
|
like
|
|
<programlisting>
|
|
archive_command = 'cp -i %p /mnt/server/archivedir/%f </dev/null'
|
|
</programlisting>
|
|
which will copy archivable WAL segments to the directory
|
|
<filename>/mnt/server/archivedir</>. (This is an example, not a
|
|
recommendation, and may not work on all platforms.)
|
|
</para>
|
|
|
|
<para>
|
|
The archive command will be executed under the ownership of the same
|
|
user that the <productname>PostgreSQL</> server is running as. Since
|
|
the series of WAL files being archived contains effectively everything
|
|
in your database, you will want to be sure that the archived data is
|
|
protected from prying eyes; for example, archive into a directory that
|
|
does not have group or world read access.
|
|
</para>
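<para>
For instance, if you archive into a local or NFS-mounted directory as
in the example above, the archive area might be prepared along these
lines (assuming the server runs as the operating system user
<literal>postgres</>):
<programlisting>
mkdir /mnt/server/archivedir
chown postgres /mnt/server/archivedir
chmod 700 /mnt/server/archivedir
</programlisting>
</para>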
|
|
|
|
<para>
|
|
It is important that the archive command return zero exit status if and
|
|
only if it succeeded. Upon getting a zero result,
|
|
<productname>PostgreSQL</> will assume that the WAL segment file has been
|
|
successfully archived, and will remove or recycle it.
|
|
However, a nonzero status tells
|
|
<productname>PostgreSQL</> that the file was not archived; it will try
|
|
again periodically until it succeeds.
|
|
</para>
|
|
|
|
<para>
|
|
The archive command should generally be designed to refuse to overwrite
|
|
any pre-existing archive file. This is an important safety feature to
|
|
preserve the integrity of your archive in case of administrator error
|
|
(such as sending the output of two different servers to the same archive
|
|
directory).
|
|
It is advisable to test your proposed archive command to ensure that it
|
|
indeed does not overwrite an existing file, <emphasis>and that it returns
|
|
nonzero status in this case</>. We have found that <literal>cp -i</> does
|
|
this correctly on some platforms but not others. If the chosen command
|
|
does not itself handle this case correctly, you should add a command
|
|
to test for pre-existence of the archive file. For example, something
|
|
like
|
|
<programlisting>
|
|
archive_command = 'test ! -f .../%f && cp %p .../%f'
|
|
</programlisting>
|
|
works correctly on most Unix variants.
|
|
</para>
|
|
|
|
<para>
|
|
While designing your archiving setup, consider what will happen if
|
|
the archive command fails repeatedly because some aspect requires
|
|
operator intervention or the archive runs out of space. For example, this
|
|
could occur if you write to tape without an autochanger; when the tape
|
|
fills, nothing further can be archived until the tape is swapped.
|
|
You should ensure that any error condition or request to a human operator
|
|
is reported appropriately so that the situation can be
|
|
resolved relatively quickly. The <filename>pg_xlog/</> directory will
|
|
continue to fill with WAL segment files until the situation is resolved.
|
|
</para>
|
|
|
|
<para>
|
|
The speed of the archiving command is not important, so long as it can keep up
|
|
with the average rate at which your server generates WAL data. Normal
|
|
operation continues even if the archiving process falls a little behind.
|
|
If archiving falls significantly behind, this will increase the amount of
|
|
data that would be lost in the event of a disaster. It will also mean that
|
|
the <filename>pg_xlog/</> directory will contain large numbers of
|
|
not-yet-archived segment files, which could eventually exceed available
|
|
disk space. You are advised to monitor the archiving process to ensure that
|
|
it is working as you intend.
|
|
</para>
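<para>
One simple way to watch for a growing backlog is to count the completed
segments that are still waiting to be archived, which the server marks
with <filename>.ready</> files under
<filename>pg_xlog/archive_status/</>. For example (the data directory
path is a placeholder):
<programlisting>
ls /usr/local/pgsql/data/pg_xlog/archive_status/ | grep -c '\.ready$'
</programlisting>
</para>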
|
|
|
|
<para>
|
|
If you are concerned about being able to recover right up to the
|
|
current instant, you may want to take additional steps to ensure that
|
|
the current, partially-filled WAL segment is also copied someplace.
|
|
This is particularly important if your server generates only a small amount
of WAL traffic (or has slack periods where it does so), since it could take a
|
|
long time before a WAL segment file is completely filled and ready to
|
|
archive. One possible way to handle this is to set up a
|
|
<application>cron</> job that periodically (once a minute, perhaps)
|
|
identifies the current WAL segment file and saves it someplace safe.
|
|
Then the combination of the archived WAL segments and the saved current
|
|
segment will be enough to ensure you can always restore to within a
|
|
minute of current time. This behavior is not presently built into
|
|
<productname>PostgreSQL</> because we did not want to complicate the
|
|
definition of the <xref linkend="guc-archive-command"> by requiring it
|
|
to keep track of successively archived, but different, copies of the
|
|
same WAL file. The <xref linkend="guc-archive-command"> is only
|
|
invoked on completed WAL segments. Except in the case of retrying a
|
|
failure, it will be called only once for any given file name.
|
|
</para>
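<para>
A crude sketch of such a <application>cron</> job is to copy the most
recently modified segment file out of <filename>pg_xlog/</>; the paths
below are placeholders, and the heuristic is not foolproof:
<programlisting>
#!/bin/sh
# copy the newest (hence current, partially-filled) WAL segment
# to a separate safe location, overwriting the previous copy
cd /usr/local/pgsql/data/pg_xlog || exit 1
current=`ls -t 0* | head -n 1`
cp "$current" /mnt/server/currentseg/"$current"
</programlisting>
Keep such copies separate from the main WAL archive area, since the
same file name will be saved repeatedly as the segment fills.
</para>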
|
|
|
|
<para>
|
|
In writing your archive command, you should assume that the filenames to
|
|
be archived may be up to 64 characters long and may contain any
|
|
combination of ASCII letters, digits, and dots. It is not necessary to
|
|
remember the original full path (<literal>%p</>) but it is necessary to
|
|
remember the file name (<literal>%f</>).
|
|
</para>
|
|
|
|
<para>
|
|
Note that although WAL archiving will allow you to restore any
|
|
modifications made to the data in your <productname>PostgreSQL</> database,
it will not restore changes made to configuration files (that is,
|
|
<filename>postgresql.conf</>, <filename>pg_hba.conf</> and
|
|
<filename>pg_ident.conf</>), since those are edited manually rather
|
|
than through SQL operations.
|
|
You may wish to keep the configuration files in a location that will
|
|
be backed up by your regular file system backup procedures. See
|
|
<xref linkend="runtime-config-file-locations"> for how to relocate the
|
|
configuration files.
|
|
</para>
|
|
</sect2>
|
|
|
|
<sect2 id="backup-base-backup">
|
|
<title>Making a Base Backup</title>
|
|
|
|
<para>
|
|
The procedure for making a base backup is relatively simple:
|
|
<orderedlist>
|
|
<listitem>
|
|
<para>
|
|
Ensure that WAL archiving is enabled and working.
|
|
</para>
|
|
</listitem>
|
|
<listitem>
|
|
<para>
|
|
Connect to the database as a superuser, and issue the command
|
|
<programlisting>
|
|
SELECT pg_start_backup('label');
|
|
</programlisting>
|
|
where <literal>label</> is any string you want to use to uniquely
|
|
identify this backup operation. (One good practice is to use the
|
|
full path where you intend to put the backup dump file.)
|
|
<function>pg_start_backup</> creates a <firstterm>backup label</> file,
|
|
called <filename>backup_label</>, in the cluster directory with
|
|
information about your backup.
|
|
</para>
|
|
|
|
<para>
|
|
It does not matter which database within the cluster you connect to when
issuing this command. You can ignore the result returned by the function;
|
|
but if it reports an error, deal with that before proceeding.
|
|
</para>
|
|
</listitem>
|
|
<listitem>
|
|
<para>
|
|
Perform the backup, using any convenient file-system-backup tool
|
|
such as <application>tar</> or <application>cpio</>. It is neither
|
|
necessary nor desirable to stop normal operation of the database
|
|
while you do this.
|
|
</para>
|
|
</listitem>
|
|
<listitem>
|
|
<para>
|
|
Again connect to the database as a superuser, and issue the command
|
|
<programlisting>
|
|
SELECT pg_stop_backup();
|
|
</programlisting>
|
|
This should return successfully.
|
|
</para>
|
|
</listitem>
|
|
<listitem>
|
|
<para>
|
|
Once the WAL segment files used during the backup are archived as part
|
|
of normal database activity, you are done.
|
|
</para>
|
|
</listitem>
|
|
</orderedlist>
|
|
</para>
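<para>
Put together, the steps above might be scripted roughly as follows;
the paths and the label are placeholders, and error checking is omitted:
<programlisting>
psql -c "SELECT pg_start_backup('/backup/pgsql-base.tar');" postgres
tar -cf /backup/pgsql-base.tar /usr/local/pgsql/data
psql -c "SELECT pg_stop_backup();" postgres
</programlisting>
</para>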
|
|
|
|
<para>
|
|
It is not necessary to be very concerned about the amount of time elapsed
|
|
between <function>pg_start_backup</> and the start of the actual backup,
|
|
nor between the end of the backup and <function>pg_stop_backup</>; a
|
|
few minutes' delay won't hurt anything. You
|
|
must however be quite sure that these operations are carried out in
|
|
sequence and do not overlap.
|
|
</para>
|
|
|
|
<para>
|
|
Be certain that your backup dump includes all of the files underneath
|
|
the database cluster directory (e.g., <filename>/usr/local/pgsql/data</>).
|
|
If you are using tablespaces that do not reside underneath this directory,
|
|
be careful to include them as well (and be sure that your backup dump
|
|
archives symbolic links as links, otherwise the restore will mess up
|
|
your tablespaces).
|
|
</para>
|
|
|
|
<para>
|
|
You may, however, omit from the backup dump the files within the
|
|
<filename>pg_xlog/</> subdirectory of the cluster directory. This
|
|
slight complication is worthwhile because it reduces the risk
|
|
of mistakes when restoring. This is easy to arrange if
|
|
<filename>pg_xlog/</> is a symbolic link pointing to someplace outside
|
|
the cluster directory, which is a common setup anyway for performance
|
|
reasons.
|
|
</para>
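<para>
If <filename>pg_xlog/</> is not a symbolic link, its contents can still
be left out using your backup tool's exclusion mechanism, for example
something like:
<programlisting>
tar -cf backup.tar --exclude=pg_xlog /usr/local/pgsql/data
</programlisting>
(The exact option spelling varies between <application>tar</>
implementations, so check your tool's documentation.)
</para>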
|
|
|
|
<para>
|
|
To make use of this backup, you will need to keep around all the WAL
|
|
segment files generated during and after the file system backup.
|
|
To aid you in doing this, the <function>pg_stop_backup</> function
|
|
creates a <firstterm>backup history file</> that is immediately
|
|
stored into the WAL archive area. This file is named after the first
|
|
WAL segment file that you need to have to make use of the backup.
|
|
For example, if the starting WAL file is
|
|
<literal>0000000100001234000055CD</> the backup history file will be
|
|
named something like
|
|
<literal>0000000100001234000055CD.007C9330.backup</>. (The second
|
|
number in the file name stands for an exact position within the WAL
|
|
file, and can ordinarily be ignored.) Once you have safely archived
|
|
the file system backup and the WAL segment files used during the
|
|
backup (as specified in the backup history file), all archived WAL
|
|
segments with names numerically less are no longer needed to recover
|
|
the file system backup and may be deleted. However, you should
|
|
consider keeping several backup sets to be absolutely certain that
|
|
you can recover your data. Keep in mind that only completed WAL
segment files are archived, so there will be a delay between running
|
|
<function>pg_stop_backup</> and the archiving of all WAL segment
|
|
files needed to make the file system backup consistent.
|
|
</para>
|
|
<para>
|
|
The backup history file is just a small text file. It contains the
|
|
label string you gave to <function>pg_start_backup</>, as well as
|
|
the starting and ending times of the backup. If you used the label
|
|
to identify where the associated dump file is kept, then the
|
|
archived history file is enough to tell you which dump file to
|
|
restore, should you need to do so.
|
|
</para>
|
|
|
|
<para>
|
|
Since you have to keep around all the archived WAL files back to your
|
|
last base backup, the interval between base backups should usually be
|
|
chosen based on how much storage you want to expend on archived WAL
|
|
files. You should also consider how long you are prepared to spend
|
|
recovering, if recovery should be necessary — the system will have to
|
|
replay all those WAL segments, and that could take a while if it has
|
|
been a long time since the last base backup.
|
|
</para>
|
|
|
|
<para>
|
|
It's also worth noting that the <function>pg_start_backup</> function
|
|
makes a file named <filename>backup_label</> in the database cluster
|
|
directory, which is then removed again by <function>pg_stop_backup</>.
|
|
This file will of course be archived as a part of your backup dump file.
|
|
The backup label file includes the label string you gave to
|
|
<function>pg_start_backup</>, as well as the time at which
|
|
<function>pg_start_backup</> was run, and the name of the starting WAL
|
|
file. In case of confusion it will
|
|
therefore be possible to look inside a backup dump file and determine
|
|
exactly which backup session the dump file came from.
|
|
</para>
|
|
|
|
<para>
|
|
It is also possible to make a backup dump while the postmaster is
|
|
stopped. In this case, you obviously cannot use
|
|
<function>pg_start_backup</> or <function>pg_stop_backup</>, and
|
|
you will therefore be left to your own devices to keep track of which
|
|
backup dump is which and how far back the associated WAL files go.
|
|
It is generally better to follow the on-line backup procedure above.
|
|
</para>
|
|
</sect2>
|
|
|
|
<sect2 id="backup-pitr-recovery">
|
|
<title>Recovering with an On-line Backup</title>
|
|
|
|
<para>
|
|
Okay, the worst has happened and you need to recover from your backup.
|
|
Here is the procedure:
|
|
<orderedlist>
|
|
<listitem>
|
|
<para>
|
|
Stop the postmaster, if it's running.
|
|
</para>
|
|
</listitem>
|
|
<listitem>
|
|
<para>
|
|
If you have the space to do so,
|
|
copy the whole cluster data directory and any tablespaces to a temporary
|
|
location in case you need them later. Note that this precaution will
|
|
require that you have enough free space on your system to hold two
|
|
copies of your existing database. If you do not have enough space,
|
|
you need at the least to copy the contents of the <filename>pg_xlog</>
|
|
subdirectory of the cluster data directory, as it may contain logs which
|
|
were not archived before the system went down.
|
|
</para>
|
|
</listitem>
|
|
<listitem>
|
|
<para>
|
|
Clean out all existing files and subdirectories under the cluster data
|
|
directory and under the root directories of any tablespaces you are using.
|
|
</para>
|
|
</listitem>
|
|
<listitem>
|
|
<para>
|
|
Restore the database files from your backup dump. Be careful that they
|
|
are restored with the right ownership (the database system user, not
|
|
root!) and with the right permissions. If you are using tablespaces,
|
|
you may want to verify that the symbolic links in <filename>pg_tblspc/</>
|
|
were correctly restored.
|
|
</para>
|
|
</listitem>
|
|
<listitem>
|
|
<para>
|
|
Remove any files present in <filename>pg_xlog/</>; these came from the
|
|
backup dump and are therefore probably obsolete rather than current.
|
|
If you didn't archive <filename>pg_xlog/</> at all, then re-create it,
|
|
and be sure to re-create the subdirectory
|
|
<filename>pg_xlog/archive_status/</> as well.
|
|
</para>
|
|
</listitem>
|
|
<listitem>
|
|
<para>
|
|
If you had unarchived WAL segment files that you saved in step 2,
|
|
copy them into <filename>pg_xlog/</>. (It is best to copy them,
|
|
not move them, so that you still have the unmodified files if a
|
|
problem occurs and you have to start over.)
|
|
</para>
|
|
</listitem>
|
|
<listitem>
|
|
<para>
|
|
Create a recovery command file <filename>recovery.conf</> in the cluster
|
|
data directory (see <xref linkend="recovery-config-settings">). You may
|
|
also want to temporarily modify <filename>pg_hba.conf</> to prevent
|
|
ordinary users from connecting until you are sure the recovery has worked.
|
|
</para>
|
|
</listitem>
|
|
<listitem>
|
|
<para>
|
|
Start the postmaster. The postmaster will go into recovery mode and
|
|
proceed to read through the archived WAL files it needs. Upon completion
|
|
of the recovery process, the postmaster will rename
|
|
<filename>recovery.conf</> to <filename>recovery.done</> (to prevent
|
|
accidentally re-entering recovery mode in case of a crash later) and then
|
|
commence normal database operations.
|
|
</para>
|
|
</listitem>
|
|
<listitem>
|
|
<para>
|
|
Inspect the contents of the database to ensure you have recovered to
|
|
where you want to be. If not, return to step 1. If all is well,
|
|
let in your users by restoring <filename>pg_hba.conf</> to normal.
|
|
</para>
|
|
</listitem>
|
|
</orderedlist>
|
|
</para>
|
|
|
|
<para>
|
|
The key part of all this is to set up a recovery command file that
|
|
describes how you want to recover and how far the recovery should
|
|
run. You can use <filename>recovery.conf.sample</> (normally
|
|
installed in the installation <filename>share/</> directory) as a
|
|
prototype. The one thing that you absolutely must specify in
|
|
<filename>recovery.conf</> is the <varname>restore_command</>,
|
|
which tells <productname>PostgreSQL</> how to get back archived
|
|
WAL file segments. Like the <varname>archive_command</>, this is
|
|
a shell command string. It may contain <literal>%f</>, which is
|
|
replaced by the name of the desired log file, and <literal>%p</>,
|
|
which is replaced by the absolute path to copy the log file to.
|
|
Write <literal>%%</> if you need to embed an actual <literal>%</>
|
|
character in the command. The simplest useful command is
|
|
something like
|
|
<programlisting>
|
|
restore_command = 'cp /mnt/server/archivedir/%f %p'
|
|
</programlisting>
|
|
which will copy previously archived WAL segments from the directory
|
|
<filename>/mnt/server/archivedir</>. You could of course use something
|
|
much more complicated, perhaps even a shell script that requests the
|
|
operator to mount an appropriate tape.
|
|
</para>
|
|
|
|
<para>
|
|
It is important that the command return nonzero exit status on failure.
|
|
The command <emphasis>will</> be asked for log files that are not present
|
|
in the archive; it must return nonzero when so asked. This is not an
|
|
error condition. Be aware also that the base name of the <literal>%p</>
|
|
path will be different from <literal>%f</>; do not expect them to be
|
|
interchangeable.
|
|
</para>
|
|
|
|
<para>
|
|
WAL segments that cannot be found in the archive will be sought in
|
|
<filename>pg_xlog/</>; this allows use of recent un-archived segments.
|
|
However segments that are available from the archive will be used in
|
|
preference to files in <filename>pg_xlog/</>. The system will not
|
|
overwrite the existing contents of <filename>pg_xlog/</> when retrieving
|
|
archived files.
|
|
</para>
|
|
|
|
<para>
|
|
Normally, recovery will proceed through all available WAL segments,
|
|
thereby restoring the database to the current point in time (or as
|
|
close as we can get given the available WAL segments). But if you want
|
|
to recover to some previous point in time (say, right before the junior
|
|
DBA dropped your main transaction table), just specify the required
|
|
stopping point in <filename>recovery.conf</>. You can specify the stop
|
|
point, known as the <quote>recovery target</>, either by date/time or
|
|
by completion of a specific transaction ID. As of this writing only
|
|
the date/time option is very usable, since there are no tools to help
|
|
you identify with any accuracy which transaction ID to use.
|
|
</para>
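<para>
For example, a <filename>recovery.conf</> that stops replay just before
the table was dropped might contain (with an illustrative timestamp):
<programlisting>
restore_command = 'cp /mnt/server/archivedir/%f %p'
recovery_target_time = '2005-06-21 17:14:00'
</programlisting>
</para>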
|
|
|
|
<note>
|
|
<para>
|
|
The stop point must be after the ending time of the base backup (the
|
|
time of <function>pg_stop_backup</>). You cannot use a base backup
|
|
to recover to a time when that backup was still going on. (To
|
|
recover to such a time, you must go back to your previous base backup
|
|
and roll forward from there.)
|
|
</para>
|
|
</note>
|
|
|
|
<sect3 id="recovery-config-settings" xreflabel="Recovery Settings">
|
|
<title>Recovery Settings</title>
|
|
|
|
<para>
|
|
These settings can only be made in the <filename>recovery.conf</>
|
|
file, and apply only for the duration of the recovery. They must be
|
|
reset for any subsequent recovery you wish to perform. They cannot be
|
|
changed once recovery has begun.
|
|
</para>
|
|
|
|
<variablelist>
|
|
|
|
<varlistentry id="restore-command" xreflabel="restore_command">
|
|
<term><varname>restore_command</varname> (<type>string</type>)</term>
|
|
<listitem>
|
|
<para>
|
|
The shell command to execute to retrieve an archived segment of
|
|
the WAL file series. This parameter is required.
|
|
Any <literal>%f</> in the string is
|
|
replaced by the name of the file to retrieve from the archive,
|
|
and any <literal>%p</> is replaced by the absolute path to copy
|
|
it to on the server.
|
|
Write <literal>%%</> to embed an actual <literal>%</> character
|
|
in the command.
|
|
</para>
|
|
<para>
|
|
It is important for the command to return a zero exit status if and
|
|
only if it succeeds. The command <emphasis>will</> be asked for file
|
|
names that are not present in the archive; it must return nonzero
|
|
when so asked. Examples:
|
|
<programlisting>
|
|
restore_command = 'cp /mnt/server/archivedir/%f "%p"'
|
|
restore_command = 'copy /mnt/server/archivedir/%f "%p"' # Windows
|
|
</programlisting>
|
|
</para>
|
|
</listitem>
|
|
</varlistentry>
|
|
|
|
<varlistentry id="recovery-target-time" xreflabel="recovery_target_time">
|
|
<term><varname>recovery_target_time</varname>
|
|
(<type>timestamp</type>)
|
|
</term>
|
|
<listitem>
|
|
<para>
|
|
This parameter specifies the time stamp up to which recovery
|
|
will proceed.
|
|
At most one of <varname>recovery_target_time</> and
|
|
<xref linkend="recovery-target-xid"> can be specified.
|
|
The default is to recover to the end of the WAL log.
|
|
The precise stopping point is also influenced by
|
|
<xref linkend="recovery-target-inclusive">.
|
|
</para>
|
|
</listitem>
|
|
</varlistentry>
|
|
|
|
<varlistentry id="recovery-target-xid" xreflabel="recovery_target_xid">
|
|
<term><varname>recovery_target_xid</varname> (<type>string</type>)</term>
|
|
<listitem>
|
|
<para>
|
|
This parameter specifies the transaction ID up to which recovery
|
|
will proceed. Keep in mind
|
|
that while transaction IDs are assigned sequentially at transaction
|
|
start, transactions can complete in a different numeric order.
|
|
The transactions that will be recovered are those that committed
|
|
before (and optionally including) the specified one.
|
|
At most one of <varname>recovery_target_xid</> and
|
|
<xref linkend="recovery-target-time"> can be specified.
|
|
The default is to recover to the end of the WAL log.
|
|
The precise stopping point is also influenced by
|
|
<xref linkend="recovery-target-inclusive">.
|
|
</para>
|
|
</listitem>
|
|
</varlistentry>
|
|
|
|
<varlistentry id="recovery-target-inclusive"
|
|
xreflabel="recovery_target_inclusive">
|
|
<term><varname>recovery_target_inclusive</varname>
|
|
(<type>boolean</type>)
|
|
</term>
|
|
<listitem>
|
|
<para>
|
|
Specifies whether we stop just after the specified recovery target
|
|
(<literal>true</literal>), or just before the recovery target
|
|
(<literal>false</literal>).
|
|
Applies to both <xref linkend="recovery-target-time">
|
|
and <xref linkend="recovery-target-xid">, whichever one is
|
|
specified for this recovery. This indicates whether transactions
|
|
having exactly the target commit time or ID, respectively, will
|
|
be included in the recovery. Default is <literal>true</>.
|
|
</para>
|
|
</listitem>
|
|
</varlistentry>
|
|
|
|
<varlistentry id="recovery-target-timeline"
|
|
xreflabel="recovery_target_timeline">
|
|
<term><varname>recovery_target_timeline</varname>
|
|
(<type>string</type>)
|
|
</term>
|
|
<listitem>
|
|
<para>
|
|
Specifies recovering into a particular timeline. The default is
|
|
to recover along the same timeline that was current when the
|
|
base backup was taken. You would only need to set this parameter
|
|
in complex re-recovery situations, where you need to return to
|
|
a state that itself was reached after a point-in-time recovery.
|
|
See <xref linkend="backup-timelines"> for discussion.
|
|
</para>
|
|
</listitem>
|
|
</varlistentry>
|
|
|
|
</variablelist>
|
|
|
|
</sect3>
|
|
|
|
</sect2>
|
|
|
|
<sect2 id="backup-timelines">
|
|
<title>Timelines</title>
|
|
|
|
<indexterm zone="backup">
|
|
<primary>timelines</primary>
|
|
</indexterm>
|
|
|
|
<para>
|
|
The ability to restore the database to a previous point in time creates
|
|
some complexities that are akin to science-fiction stories about time
|
|
travel and parallel universes. In the original history of the database,
|
|
perhaps you dropped a critical table at 5:15PM on Tuesday evening.
|
|
Unfazed, you get out your backup, restore to the point-in-time 5:14PM
|
|
Tuesday evening, and are up and running. In <emphasis>this</> history of
|
|
the database universe, you never dropped the table at all. But suppose
|
|
you later realize this wasn't such a great idea after all, and would like
|
|
to return to some later point in the original history. You won't be able
|
|
to if, while your database was up-and-running, it overwrote some of the
|
|
sequence of WAL segment files that led up to the time you now wish you
|
|
could get back to. So you really want to distinguish the series of
|
|
WAL records generated after you've done a point-in-time recovery from
|
|
those that were generated in the original database history.
|
|
</para>
|
|
|
|
<para>
|
|
To deal with these problems, <productname>PostgreSQL</> has a notion
|
|
of <firstterm>timelines</>. Each time you recover to a point-in-time
|
|
earlier than the end of the WAL sequence, a new timeline is created
|
|
to identify the series of WAL records generated after that recovery.
|
|
(If recovery proceeds all the way to the end of WAL, however, we do not
|
|
start a new timeline: we just extend the existing one.) The timeline
|
|
ID number is part of WAL segment file names, and so a new timeline does
|
|
not overwrite the WAL data generated by previous timelines. It is
|
|
in fact possible to archive many different timelines. While that might
|
|
seem like a useless feature, it's often a lifesaver. Consider the
|
|
situation where you aren't quite sure what point-in-time to recover to,
|
|
and so have to do several point-in-time recoveries by trial and error
|
|
until you find the best place to branch off from the old history. Without
|
|
timelines this process would soon generate an unmanageable mess. With
|
|
timelines, you can recover to <emphasis>any</> prior state, including
|
|
states in timeline branches that you later abandoned.
|
|
</para>
|
|
|
|
<para>
|
|
Each time a new timeline is created, <productname>PostgreSQL</> creates
|
|
a <quote>timeline history</> file that shows which timeline it branched
|
|
off from and when. These history files are necessary to allow the system
|
|
to pick the right WAL segment files when recovering from an archive that
|
|
contains multiple timelines. Therefore, they are archived into the WAL
|
|
archive area just like WAL segment files. The history files are just
|
|
small text files, so it's cheap and appropriate to keep them around
|
|
indefinitely (unlike the segment files which are large). You can, if
|
|
you like, add comments to a history file to make your own notes about
|
|
how and why this particular timeline came to be. Such comments will be
|
|
especially valuable when you have a thicket of different timelines as
|
|
a result of experimentation.
|
|
</para>
|
|
|
|
<para>
|
|
The default behavior of recovery is to recover along the same timeline
|
|
that was current when the base backup was taken. If you want to recover
|
|
into some child timeline (that is, you want to return to some state that
|
|
was itself generated after a recovery attempt), you need to specify the
|
|
target timeline ID in <filename>recovery.conf</>. You cannot recover into
|
|
timelines that branched off earlier than the base backup.
|
|
</para>
|
|
</sect2>
|
|
|
|
<sect2 id="backup-online-caveats">
|
|
<title>Caveats</title>
|
|
|
|
<para>
|
|
At this writing, there are several limitations of the on-line backup
|
|
technique. These will probably be fixed in future releases:
|
|
|
|
<itemizedlist>
|
|
<listitem>
|
|
<para>
|
|
Operations on hash and R-tree indexes are
|
|
not presently WAL-logged, so replay will not update these index types.
|
|
The recommended workaround is to manually <command>REINDEX</> each
|
|
such index after completing a recovery operation.
|
|
</para>
|
|
</listitem>
|
|
|
|
<listitem>
|
|
<para>
|
|
If a <command>CREATE DATABASE</> command is executed while a base
|
|
backup is being taken, and then the template database that the
|
|
<command>CREATE DATABASE</> copied is modified while the base backup
|
|
is still in progress, it is possible that recovery will cause those
|
|
modifications to be propagated into the created database as well.
|
|
This is of course undesirable. To avoid this risk, it is best not to
|
|
modify any template databases while taking a base backup.
|
|
</para>
|
|
</listitem>
|
|
|
|
<listitem>
|
|
<para>
|
|
<command>CREATE TABLESPACE</> commands are WAL-logged with the literal
|
|
absolute path, and will therefore be replayed as tablespace creations
|
|
with the same absolute path. This might be undesirable if the log is
|
|
being replayed on a different machine. It can be dangerous even if
|
|
the log is being replayed on the same machine, but into a new data
|
|
directory: the replay will still overwrite the contents of the original
|
|
tablespace. To avoid potential gotchas of this sort, the best practice
|
|
is to take a new base backup after creating or dropping tablespaces.
|
|
</para>
|
|
</listitem>
|
|
</itemizedlist>
|
|
</para>
|
|
|
|
<para>
|
|
It should also be noted that the present <acronym>WAL</acronym>
|
|
format is extremely bulky since it includes many disk page
|
|
snapshots. This is appropriate for crash recovery purposes,
|
|
since we may need to fix partially-written disk pages. It is not
|
|
necessary to store so many page copies for PITR operations, however.
|
|
An area for future development is to compress archived WAL data by
|
|
removing unnecessary page copies. In the meantime, administrators
|
|
may wish to reduce the number of page snapshots included in WAL by
|
|
increasing the checkpoint interval parameters as much as feasible.
|
|
</para>
|
|
</sect2>
|
|
</sect1>
|
|
|
|
<sect1 id="migration">
|
|
<title>Migration Between Releases</title>
|
|
|
|
<indexterm zone="migration">
|
|
<primary>upgrading</primary>
|
|
</indexterm>
|
|
|
|
<indexterm zone="migration">
|
|
<primary>version</primary>
|
|
<secondary>compatibility</secondary>
|
|
</indexterm>
|
|
|
|
<para>
|
|
This section discusses how to migrate your database data from one
|
|
<productname>PostgreSQL</> release to a newer one.
|
|
The software installation procedure <foreignphrase>per se</> is not the
|
|
subject of this section; those details are in <xref linkend="installation">.
|
|
</para>
|
|
|
|
<para>
|
|
As a general rule, the internal data storage format is subject to
|
|
change between major releases of <productname>PostgreSQL</> (where
|
|
the number after the first dot changes). This does not apply to
|
|
different minor releases under the same major release (where the
|
|
number after the second dot changes); these always have compatible
|
|
storage formats. For example, releases 7.0.1, 7.1.2, and 7.2 are
|
|
not compatible, whereas 7.1.1 and 7.1.2 are. When you update
|
|
between compatible versions, you can simply replace the executables
|
|
and reuse the data directory on disk. Otherwise you need to back
|
|
up your data and restore it on the new server. This has to be done
|
|
using <application>pg_dump</>; file system level backup methods
|
|
obviously won't work. There are checks in place that prevent you
|
|
from using a data directory with an incompatible version of
|
|
<productname>PostgreSQL</productname>, so no great harm can be done by
|
|
trying to start the wrong server version on a data directory.
|
|
</para>
|
|
|
|
<para>
|
|
It is recommended that you use the <application>pg_dump</> and
|
|
<application>pg_dumpall</> programs from the newer version of
|
|
<productname>PostgreSQL</>, to take advantage of any enhancements
|
|
that may have been made in these programs. Current releases of the
|
|
dump programs can read data from any server version back to 7.0.
|
|
</para>
|
|
|
|
<para>
|
|
The least downtime can be achieved by installing the new server in
|
|
a different directory and running both the old and the new servers
|
|
in parallel, on different ports. Then you can use something like
|
|
|
|
<programlisting>
|
|
pg_dumpall -p 5432 | psql -d postgres -p 6543
|
|
</programlisting>
|
|
|
|
to transfer your data. Or use an intermediate file if you want.
|
|
Then you can shut down the old server and start the new server at
|
|
the port the old one was running at. You should make sure that the
|
|
old database is not updated after you run <application>pg_dumpall</>,
|
|
otherwise you will obviously lose that data. See <xref
|
|
linkend="client-authentication"> for information on how to prohibit
|
|
access.
|
|
</para>
|
|
|
|
<para>
|
|
In practice you probably want to test your client
|
|
applications on the new setup before switching over completely.
|
|
This is another reason for setting up concurrent installations
|
|
of old and new versions.
|
|
</para>
|
|
|
|
<para>
|
|
If you cannot or do not want to run two servers in parallel, you can
do the backup step before installing the new version, bring down
the server, move the old version out of the way, install the new
version, start the new server, and restore the data. For example:
|
|
|
|
<programlisting>
|
|
pg_dumpall > backup
|
|
pg_ctl stop
|
|
mv /usr/local/pgsql /usr/local/pgsql.old
|
|
cd ~/postgresql-&version;
|
|
gmake install
|
|
initdb -D /usr/local/pgsql/data
|
|
postmaster -D /usr/local/pgsql/data
|
|
psql -f backup postgres
|
|
</programlisting>
|
|
|
|
See <xref linkend="runtime"> about ways to start and stop the
|
|
server and other details. The installation instructions will advise
|
|
you of strategic places to perform these steps.
|
|
</para>
|
|
|
|
<note>
|
|
<para>
|
|
When you <quote>move the old installation out of the way</quote>
|
|
it may no longer be perfectly usable. Some of the executable programs
|
|
contain absolute paths to various installed programs and data files.
|
|
This is usually not a big problem but if you plan on using two
|
|
installations in parallel for a while you should assign them
|
|
different installation directories at build time. (This problem
|
|
is rectified in <productname>PostgreSQL</> 8.0 and later, but you
|
|
need to be wary of moving older installations.)
|
|
</para>
|
|
</note>
|
|
</sect1>
|
|
</chapter>
|
|
|
|
<!-- Keep this comment at the end of the file
|
|
Local variables:
|
|
mode:sgml
|
|
sgml-omittag:nil
|
|
sgml-shorttag:t
|
|
sgml-minimize-attributes:nil
|
|
sgml-always-quote-attributes:t
|
|
sgml-indent-step:1
|
|
sgml-indent-tabs-mode:nil
|
|
sgml-indent-data:t
|
|
sgml-parent-document:nil
|
|
sgml-default-dtd-file:"./reference.ced"
|
|
sgml-exposed-tags:nil
|
|
sgml-local-catalogs:("/usr/share/sgml/catalog")
|
|
sgml-local-ecat-files:nil
|
|
End:
|
|
-->
|