Add "High Availability, Load Balancing, and Replication Feature Matrix"
table to docs.
parent 5db1c58a1a
commit 621e14dcb2
@@ -1,4 +1,4 @@
<!-- $PostgreSQL: pgsql/doc/src/sgml/high-availability.sgml,v 1.17 2007/11/04 19:23:24 momjian Exp $ -->
<!-- $PostgreSQL: pgsql/doc/src/sgml/high-availability.sgml,v 1.18 2007/11/08 19:16:30 momjian Exp $ -->

<chapter id="high-availability">
<title>High Availability, Load Balancing, and Replication</title>
@@ -92,16 +92,23 @@
</para>

<para>
Shared hardware functionality is common in network storage
devices. Using a network file system is also possible, though
care must be taken that the file system has full POSIX behavior.
One significant limitation of this method is that if the shared
disk array fails or becomes corrupt, the primary and standby
servers are both nonfunctional. Another issue is that the
standby server should never access the shared storage while
Shared hardware functionality is common in network storage devices.
Using a network file system is also possible, though care must be
taken that the file system has full POSIX behavior (see <xref
linkend="creating-cluster-nfs">). One significant limitation of this
method is that if the shared disk array fails or becomes corrupt, the
primary and standby servers are both nonfunctional. Another issue is
that the standby server should never access the shared storage while
the primary server is running.
</para>

</listitem>
</varlistentry>
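[Editorial note: the wording above stresses that the standby must never touch the shared storage while the primary is running. That rule is normally enforced by cluster fencing software rather than by PostgreSQL itself; the Python sketch below only illustrates the idea with an advisory lock on a file kept on the shared volume. The lock path and server command are assumptions, not anything this patch or PostgreSQL provides, and advisory locks over NFS only work when the file system really does offer full POSIX locking semantics.]

#!/usr/bin/env python3
"""Toy fencing sketch: refuse to start the database server unless an
exclusive advisory lock on a file on the shared volume can be taken.
Paths and commands are hypothetical; real clusters use proper fencing."""

import fcntl
import subprocess
import sys

LOCK_FILE = "/shared/pgdata/.cluster.lock"      # assumed file on the shared array
SERVER_CMD = ["postgres", "-D", "/shared/pgdata"]  # run in the foreground

def main() -> int:
    lock = open(LOCK_FILE, "w")
    try:
        # Non-blocking exclusive lock: fails if the other node already holds it.
        fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        print("shared storage is in use by the other node; not starting",
              file=sys.stderr)
        return 1
    # Keep this process (and therefore the lock) alive for the server's lifetime.
    return subprocess.call(SERVER_CMD)

if __name__ == "__main__":
    sys.exit(main())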

<varlistentry>
<term>File System Replication</term>
<listitem>

<para>
A modified version of shared hardware functionality is file system
replication, where all changes to a file system are mirrored to a file
@@ -125,7 +132,7 @@ protocol to make nodes agree on a serializable transactional order.
</varlistentry>

<varlistentry>
<term>Warm Standby Using Point-In-Time Recovery</term>
<term>Warm Standby Using Point-In-Time Recovery (<acronym>PITR</>)</term>
<listitem>

<para>
@@ -190,6 +197,21 @@ protocol to make nodes agree on a serializable transactional order.
</listitem>
</varlistentry>

<varlistentry>
<term>Asynchronous Multi-Master Replication</term>
<listitem>

<para>
For servers that are not regularly connected, like laptops or
remote servers, keeping data consistent among servers is a
challenge. Using asynchronous multi-master replication, each
server works independently, and periodically communicates with
the other servers to identify conflicting transactions. The
conflicts can be resolved by users or conflict resolution rules.
</para>
</listitem>
</varlistentry>
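[Editorial note: the entry above leaves conflict resolution abstract. As a purely illustrative sketch (not the behavior of any particular PostgreSQL replication tool), the Python fragment below applies one common automatic rule, last-update-wins, when two servers have independently updated the same row; the row layout and timestamps are invented for the example.]

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class RowVersion:
    """One server's copy of a row, tagged with when it was last updated."""
    key: int
    data: dict
    updated_at: datetime

def reconcile(local: RowVersion, remote: RowVersion) -> RowVersion:
    """Resolve a write-write conflict with a last-update-wins rule.

    Real systems may instead queue the conflict for a user to resolve,
    as the documentation above notes.
    """
    if local.key != remote.key:
        raise ValueError("can only reconcile versions of the same row")
    return local if local.updated_at >= remote.updated_at else remote

# Example: the same row was changed on two disconnected servers.
a = RowVersion(1, {"email": "old@example.com"},
               datetime(2007, 11, 8, 10, 0, tzinfo=timezone.utc))
b = RowVersion(1, {"email": "new@example.com"},
               datetime(2007, 11, 8, 12, 0, tzinfo=timezone.utc))
print(reconcile(a, b).data)   # the later update wins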

<varlistentry>
<term>Synchronous Multi-Master Replication</term>
<listitem>
@@ -222,21 +244,6 @@ protocol to make nodes agree on a serializable transactional order.
</listitem>
</varlistentry>

<varlistentry>
<term>Asynchronous Multi-Master Replication</term>
<listitem>

<para>
For servers that are not regularly connected, like laptops or
remote servers, keeping data consistent among servers is a
challenge. Using asynchronous multi-master replication, each
server works independently, and periodically communicates with
the other servers to identify conflicting transactions. The
conflicts can be resolved by users or conflict resolution rules.
</para>
</listitem>
</varlistentry>

<varlistentry>
<term>Data Partitioning</term>
<listitem>
@@ -253,23 +260,6 @@ protocol to make nodes agree on a serializable transactional order.
</listitem>
</varlistentry>

<varlistentry>
<term>Multi-Server Parallel Query Execution</term>
<listitem>

<para>
Many of the above solutions allow multiple servers to handle
multiple queries, but none allow a single query to use multiple
servers to complete faster. This solution allows multiple
servers to work concurrently on a single query. This is usually
accomplished by splitting the data among servers and having
each server execute its part of the query and return results
to a central server where they are combined and returned to
the user. Pgpool-II has this capability.
</para>
</listitem>
</varlistentry>

<varlistentry>
<term>Commercial Solutions</term>
<listitem>
@@ -285,4 +275,139 @@ protocol to make nodes agree on a serializable transactional order.

</variablelist>

<para>
The table below (<xref linkend="high-availability-matrix">) summarizes
the capabilities of the various solutions listed above.
</para>

<table id="high-availability-matrix">
<title>High Availability, Load Balancing, and Replication Feature Matrix</title>
<tgroup cols="9">
<thead>
<row>
<entry>Feature</entry>
<entry>Shared Disk Failover</entry>
<entry>File System Replication</entry>
<entry>Warm Standby Using PITR</entry>
<entry>Master-Slave Replication</entry>
<entry>Statement-Based Replication Middleware</entry>
<entry>Asynchronous Multi-Master Replication</entry>
<entry>Synchronous Multi-Master Replication</entry>
<entry>Data Partitioning</entry>
</row>
</thead>

<tbody>

<row>
<entry>No special hardware required</entry>
<entry align="center"></entry>
<entry align="center">•</entry>
<entry align="center">•</entry>
<entry align="center">•</entry>
<entry align="center">•</entry>
<entry align="center">•</entry>
<entry align="center">•</entry>
<entry align="center">•</entry>
</row>

<row>
<entry>Allows multiple master servers</entry>
<entry align="center"></entry>
<entry align="center"></entry>
<entry align="center"></entry>
<entry align="center"></entry>
<entry align="center">•</entry>
<entry align="center">•</entry>
<entry align="center">•</entry>
<entry align="center"></entry>
</row>

<row>
<entry>No master server overhead</entry>
<entry align="center">•</entry>
<entry align="center"></entry>
<entry align="center">•</entry>
<entry align="center"></entry>
<entry align="center"></entry>
<entry align="center"></entry>
<entry align="center"></entry>
<entry align="center"></entry>
</row>

<row>
<entry>Master server never locks others</entry>
<entry align="center">•</entry>
<entry align="center">•</entry>
<entry align="center">•</entry>
<entry align="center">•</entry>
<entry align="center">•</entry>
<entry align="center">•</entry>
<entry align="center"></entry>
<entry align="center">•</entry>
</row>

<row>
<entry>Master failure will never lose data</entry>
<entry align="center">•</entry>
<entry align="center">•</entry>
<entry align="center"></entry>
<entry align="center"></entry>
<entry align="center">•</entry>
<entry align="center"></entry>
<entry align="center">•</entry>
<entry align="center"></entry>
</row>

<row>
<entry>Slaves accept read-only queries</entry>
<entry align="center"></entry>
<entry align="center"></entry>
<entry align="center"></entry>
<entry align="center">•</entry>
<entry align="center">•</entry>
<entry align="center">•</entry>
<entry align="center">•</entry>
<entry align="center">•</entry>
</row>

<row>
<entry>Per-table granularity</entry>
<entry align="center"></entry>
<entry align="center"></entry>
<entry align="center"></entry>
<entry align="center">•</entry>
<entry align="center"></entry>
<entry align="center">•</entry>
<entry align="center">•</entry>
<entry align="center">•</entry>
</row>

<row>
<entry>No conflict resolution necessary</entry>
<entry align="center">•</entry>
<entry align="center">•</entry>
<entry align="center">•</entry>
<entry align="center">•</entry>
<entry align="center"></entry>
<entry align="center"></entry>
<entry align="center">•</entry>
<entry align="center">•</entry>
</row>

</tbody>
</tgroup>
</table>

<para>
Many of the above solutions allow multiple servers to handle multiple
queries, but none allow a single query to use multiple servers to
complete faster. Multi-server parallel query execution allows multiple
servers to work concurrently on a single query. This is usually
accomplished by splitting the data among servers and having each server
execute its part of the query and return results to a central server
where they are combined and returned to the user. Pgpool-II has this
capability.
</para>
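[Editorial note: as a rough illustration of the split/execute/combine pattern this paragraph describes (an assumption-level sketch, not how Pgpool-II is actually implemented), the Python snippet below scatters an aggregate over several data partitions standing in for separate servers and combines the partial results centrally.]

from concurrent.futures import ThreadPoolExecutor

# Hypothetical partitions of one table, each held by a different server.
PARTITIONS = [
    [("alice", 10), ("bob", 25)],
    [("carol", 40), ("dave", 5)],
    [("erin", 17)],
]

def partial_sum(partition):
    """Each 'server' computes its part of SELECT sum(amount)."""
    return sum(amount for _name, amount in partition)

def parallel_total(partitions):
    """Central server: scatter the work, then gather and combine the results."""
    with ThreadPoolExecutor(max_workers=len(partitions)) as pool:
        return sum(pool.map(partial_sum, partitions))

print(parallel_total(PARTITIONS))   # 97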

</chapter>