diff --git a/doc/src/FAQ/FAQ.html b/doc/src/FAQ/FAQ.html
index 9d1bf656d4..04b3803b6a 100644
--- a/doc/src/FAQ/FAQ.html
+++ b/doc/src/FAQ/FAQ.html
Last updated: Thu Jan 10 18:35:15 EST 2002
Current maintainer: Bruce Momjian (pgman@candle.pha.pa.us)
The most recent version of this document can be viewed at http://www.PostgreSQL.org/docs/faq-english.html.
Platform-specific questions are answered at http://www.PostgreSQL.org/users-lounge/docs/faq.html.
PostgreSQL is pronounced Post-Gres-Q-L.
PostgreSQL is an enhancement of the POSTGRES database management system, a next-generation DBMS research prototype. While PostgreSQL retains the powerful data model and rich data types of POSTGRES, it replaces the PostQuel query language with an extended subset of SQL. PostgreSQL is free and the complete source is available.
PostgreSQL development is performed by a team of Internet developers who all subscribe to the PostgreSQL development mailing list. The current coordinator is Marc G. Fournier (scrappy@PostgreSQL.org). (See below on how to join.) This team is now responsible for all development of PostgreSQL.
The authors of PostgreSQL 1.01 were Andrew Yu and Jolly Chen. Many others have contributed to the porting, testing, debugging, and enhancement of the code. The original Postgres code, from which PostgreSQL is derived, was the effort of many graduate students, undergraduate students, and staff programmers working under the direction of Professor Michael Stonebraker at the University of California, Berkeley.
The original name of the software at Berkeley was Postgres. When SQL functionality was added in 1995, its name was changed to Postgres95. The name was changed at the end of 1996 to PostgreSQL.
PostgreSQL is subject to the following COPYRIGHT:
PostgreSQL Data Base Management System
Portions copyright (c) 1996-2002, PostgreSQL Global Development Group
Portions Copyright (c) 1994-6 Regents of the University of California
Permission to use, copy, modify, and distribute this software and its documentation for any purpose, without fee, and without a written agreement is hereby granted, provided that the above copyright notice and this paragraph and the following two paragraphs appear in all copies.
IN NO EVENT SHALL THE UNIVERSITY OF CALIFORNIA BE LIABLE TO ANY PARTY FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
THE UNIVERSITY OF CALIFORNIA SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE SOFTWARE PROVIDED HEREUNDER IS ON AN "AS IS" BASIS, AND THE UNIVERSITY OF CALIFORNIA HAS NO OBLIGATIONS TO PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.
In general, a modern Unix-compatible platform should be able to run PostgreSQL. The platforms that had received explicit testing at the time of release are listed in the installation instructions.
Client
It is possible to compile the libpq C library, psql, and other interfaces and binaries to run on MS Windows platforms. In this case, the client is running on MS Windows, and communicates via TCP/IP to a server running on one of our supported Unix platforms. A file win31.mak is included in the distribution for making a Win32 libpq library and psql. PostgreSQL also communicates with ODBC clients.
Server
The database server can run on Windows NT and Win2k using Cygwin, the Cygnus Unix/NT porting library. See pgsql/doc/FAQ_MSWIN in the distribution or the MS Windows FAQ on our web site. We have no plan to do a native port to any Microsoft platform.
The primary anonymous ftp site for PostgreSQL is ftp://ftp.PostgreSQL.org/pub. For mirror sites, see our main web site.
The main mailing list is: pgsql-general@PostgreSQL.org. It is available for discussion of matters pertaining to PostgreSQL. To subscribe, send mail with the following lines in the body (not the subject line):
subscribe
end

to pgsql-general-request@PostgreSQL.org.
There is also a digest list available. To subscribe to this list, send email to: pgsql-general-digest-request@PostgreSQL.org with a body of:
subscribe
end

Digests are sent out to members of this list whenever the main list has received around 30k of messages.
The bugs mailing list is available. To subscribe to this list, send email to pgsql-bugs-request@PostgreSQL.org with a body of:
subscribe
end

There is also a developers discussion mailing list available. To subscribe to this list, send email to pgsql-hackers-request@PostgreSQL.org with a body of:
subscribe
end
Additional mailing lists and information about PostgreSQL can be found via the PostgreSQL WWW home page at:
http://www.PostgreSQL.org
There is also an IRC channel on EFNet, channel #PostgreSQL. I use the Unix command irc -c '#PostgreSQL' "$USER" irc.phoenix.net.
A list of commercial support companies is available at http://www.postgresql.org/users-lounge/commercial-support.html.
Several manuals, manual pages, and some small test examples are included in the distribution. See the /doc directory. You can also browse the manual online at http://www.PostgreSQL.org/users-lounge/docs/.
There is a PostgreSQL book available at http://www.PostgreSQL.org/docs/awbook.html.
psql has some nice \d commands to show information about types, operators, functions, aggregates, etc.
Our web site contains even more documentation.
PostgreSQL supports an extended subset of SQL-92. See our TODO list for known bugs, missing features, and future plans.
The PostgreSQL book at http://www.PostgreSQL.org/docs/awbook.html teaches SQL. There is a nice tutorial at http://www.intermedia.net/support/sql/sqltut.shtm and at http://ourworld.compuserve.com/homepages/graeme_birchall/HTM_COOK.HTM.
Another one is "Teach Yourself SQL in 21 Days, Second Edition" at http://members.tripod.com/er4ebus/sql/index.htm.
Many of our users like The Practical SQL Handbook, Bowman, Judith S., et al., Addison-Wesley. Others like The Complete Reference SQL, Groff et al., McGraw-Hill.
Yes, we easily handle dates past the year 2000 AD, and before 2000 BC.
First, download the latest source and read the PostgreSQL Developers documentation on our web site, or in the distribution. Second, subscribe to the pgsql-hackers and pgsql-patches mailing lists. Third, submit high quality patches to pgsql-patches.
There are about a dozen people who have commit privileges to the PostgreSQL CVS archive. They have each submitted so many high-quality patches that it was impossible for the existing committers to keep up, and we had confidence that patches they committed were of high quality.
Please visit the PostgreSQL BugTool page, which gives guidelines and directions on how to submit a bug.
Also check out our ftp site ftp://ftp.PostgreSQL.org/pub to see if there is a more recent PostgreSQL version or patches.
There are several ways of measuring software: features, performance, reliability, support, and price.
PostgreSQL has had a first-class infrastructure since we started six years ago. This is all thanks to Marc Fournier, who has created and managed this infrastructure over the years.
Quality infrastructure is very important to an open-source project. It prevents disruptions that can greatly delay forward movement of the project.
Of course, this infrastructure is not cheap. There are a variety of monthly and one-time expenses that are required to keep it going. If you or your company has money it can donate to help fund this effort, please go to http://www.pgsql.com/pg_goodies and make a donation.
Although the web page mentions PostgreSQL, Inc., the "contributions" item is solely to support the PostgreSQL project and does not fund any specific company. If you prefer, you can also send a check to the contact address.
There are two ODBC drivers available, PsqlODBC and OpenLink ODBC.
PsqlODBC is included in the distribution. More information about it is available at ftp://ftp.PostgreSQL.org/pub/odbc/.
OpenLink ODBC is available from http://www.openlinksw.com. It works with their standard ODBC client software, so you'll have PostgreSQL ODBC available on every client platform they support (Win, Mac, Unix, VMS).
They will probably be selling this product to people who need commercial-quality support, but a freeware version will always be available. Please send questions to postgres95@openlink.co.uk.
See also the ODBC chapter of the Programmer's Guide.
A nice introduction to Database-backed Web pages can be seen at: http://www.webreview.com
There is also one at http://www.phone.net/home/mwm/hotlist/.
For Web integration, PHP is an excellent interface. It is at http://www.php.net.
For complex cases, many use the Perl interface and CGI.pm.
We have a nice graphical user interface called pgaccess, which is shipped as part of the distribution. pgaccess also has a report generator. The Web page is http://www.flex.ro/pgaccess
We also include ecpg, which is an embedded SQL query language interface for C.
We have:
Specify the --prefix option when running configure.
It could be a variety of problems, but first check to see that you have System V extensions installed in your kernel. PostgreSQL requires kernel support for shared memory and semaphores.
You either do not have shared memory configured properly in your kernel or you need to enlarge the shared memory available in the kernel. The exact amount you need depends on your architecture and how many buffers and backend processes you configure for postmaster. For most systems, with default numbers of buffers and processes, you need a minimum of ~1 MB. See the PostgreSQL Administrator's Guide for more detailed information about shared memory and semaphores.
If the error message is IpcSemaphoreCreate: semget failed (No space left on device) then your kernel is not configured with enough semaphores. Postgres needs one semaphore per potential backend process. A temporary solution is to start postmaster with a smaller limit on the number of backend processes. Use -N with a parameter less than the default of 32. A more permanent solution is to increase your kernel's SEMMNS and SEMMNI parameters.
Inoperative semaphores can also cause crashes during heavy database access.
If the error message is something else, you might not have semaphore support configured in your kernel at all. See the PostgreSQL Administrator's Guide for more detailed information about shared memory and semaphores.
By default, PostgreSQL only allows connections from the local machine using Unix domain sockets. Other machines will not be able to connect unless you add the -i flag to postmaster, and enable host-based authentication by modifying the file $PGDATA/pg_hba.conf accordingly. This will allow TCP/IP connections.
The default configuration allows only Unix domain socket connections from the local machine. To enable TCP/IP connections, make sure postmaster has been started with the -i option, and add an appropriate host entry to the file pgsql/data/pg_hba.conf.
Certainly, indexes can speed up queries. The EXPLAIN command allows you to see how PostgreSQL is interpreting your query, and which indexes are being used.
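For example (table and column names here are illustrative):

EXPLAIN SELECT * FROM tab WHERE col = 42;

The plan that is printed shows whether an index scan or a sequential scan was chosen.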
If you are doing many INSERTs, consider doing them in a large batch using the COPY command. This is much faster than individual INSERTs. Second, statements not in a BEGIN WORK/COMMIT transaction block are considered to be in their own transaction. Consider performing several statements in a single transaction block. This reduces the transaction overhead. Also, consider dropping and recreating indexes when making large data changes.
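As a sketch (the table and file names are illustrative), compare the three approaches:

-- each statement is its own transaction (slowest)
INSERT INTO mytab VALUES (1, 'one');
INSERT INTO mytab VALUES (2, 'two');

-- one transaction for the whole batch (faster)
BEGIN WORK;
INSERT INTO mytab VALUES (1, 'one');
INSERT INTO mytab VALUES (2, 'two');
COMMIT;

-- bulk load from a file on the server (fastest)
COPY mytab FROM '/tmp/mytab.data';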
There are several tuning options. You can disable fsync() by starting postmaster with a -o -F option. This will prevent fsync()s from flushing to disk after every transaction.
You can also use the postmaster -B option to increase the number of shared memory buffers used by the backend processes. If you make this parameter too high, the postmaster may not start because you have exceeded your kernel's limit on shared memory space. Each buffer is 8K and the default is 64 buffers.
You can also use the backend -S option to increase the maximum amount of memory used by the backend process for temporary sorts. The -S value is measured in kilobytes, and the default is 512 (i.e., 512K).
You can also use the CLUSTER command to group data in tables to match an index. See the CLUSTER manual page for more details.
PostgreSQL has several features that report status information that can be valuable for debugging purposes.
First, by running configure with the --enable-cassert option, many assert()s monitor the progress of the backend and halt the program when something unexpected occurs.
Both postmaster and postgres have several debug options available. First, whenever you start postmaster, make sure you send the standard output and error to a log file, like:
cd /usr/local/pgsql
./bin/postmaster >server.log 2>&1 &
This will put a server.log file in the top-level PostgreSQL directory. This file contains useful information about problems or errors encountered by the server. Postmaster has a -d option that allows even more detailed information to be reported. The -d option takes a number that specifies the debug level. Be warned that high debug level values generate large log files.
If postmaster is not running, you can actually run the postgres backend from the command line, and type your SQL statement directly. This is recommended only for debugging purposes. Note that a newline terminates the query, not a semicolon. If you have compiled with debugging symbols, you can use a debugger to see what is happening. Because the backend was not started from postmaster, it is not running in an identical environment and locking/backend interaction problems may not be duplicated.
If postmaster is running, start psql in one window, then find the PID of the postgres process used by psql. Use a debugger to attach to the postgres PID. You can set breakpoints in the debugger and issue queries from psql. If you are debugging postgres startup, you can set PGOPTIONS="-W n", then start psql. This will cause startup to delay for n seconds so you can attach to the process with the debugger, set any breakpoints, and continue through the startup sequence.
The postgres program has -s, -A, and -t options that can be very useful for debugging and performance measurements.
You can also compile with profiling to see what functions are taking execution time. The backend profile files will be deposited in the pgsql/data/base/dbname directory. The client profile file will be put in the client's current directory.
You need to increase postmaster's limit on how many concurrent backend processes it can start.
The default limit is 32 processes. You can increase it by restarting postmaster with a suitable -N value or modifying postgresql.conf.
Note that if you make -N larger than 32, you must also increase -B beyond its default of 64; -B must be at least twice -N, and probably should be more than that for best performance. For large numbers of backend processes, you are also likely to find that you need to increase various Unix kernel configuration parameters. Things to check include the maximum size of shared memory blocks, SHMMAX; the maximum number of semaphores, SEMMNS and SEMMNI; the maximum number of processes, NPROC; the maximum number of processes per user, MAXUPRC; and the maximum number of open files, NFILE and NINODE. The reason that PostgreSQL has a limit on the number of allowed backend processes is so your system won't run out of resources.
In PostgreSQL versions prior to 6.5, the maximum number of backends was 64, and changing it required a rebuild after altering the MaxBackendId constant in include/storage/sinvaladt.h.
They are temporary files generated by the query executor. For example, if a sort needs to be done to satisfy an ORDER BY, and the sort requires more space than the backend's -S parameter allows, then temporary files are created to hold the extra data.
The temporary files should be deleted automatically, but might not if a backend crashes during a sort. If you have no backends running at the time, it is safe to delete the pg_tempNNN.NN files.
See the DECLARE manual page for a description.
See the FETCH manual page, or use SELECT ... LIMIT....
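For example, either of these retrieves only a few rows (table and cursor names are illustrative):

SELECT * FROM tab ORDER BY col LIMIT 10;

BEGIN WORK;
DECLARE c CURSOR FOR SELECT * FROM tab ORDER BY col;
FETCH 10 FROM c;
CLOSE c;
COMMIT;

Note that cursors must be used inside a transaction block.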
The entire query may have to be evaluated, even if you only want the first few rows. Consider a query that has an ORDER BY. If there is an index that matches the ORDER BY, PostgreSQL may be able to evaluate only the first few records requested, or the entire query may have to be evaluated until the desired rows have been generated.
You can read the source code for psql in file pgsql/src/bin/psql/describe.c. It contains SQL commands that generate the output for psql's backslash commands. You can also start psql with the -E option so it will print out the queries it uses to execute the commands you give.
We do not support ALTER TABLE DROP COLUMN, but do this:
SELECT ...  -- select all columns but the one you want to remove
INTO TABLE new_table
FROM old_table;
DROP TABLE old_table;
ALTER TABLE new_table RENAME TO old_table;
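For instance, to remove column b from a hypothetical table t with columns a, b, and c:

SELECT a, c INTO TABLE t_new FROM t;
DROP TABLE t;
ALTER TABLE t_new RENAME TO t;

Note that indexes and constraints on the old table are not copied by SELECT INTO and must be recreated.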
These are the limits:
Maximum number of columns in a table?  250-1600 depending on column types
Maximum number of indexes on a table?  unlimited

Of course, these are not actually unlimited, but limited to available disk space and memory/swap space. Performance may suffer when these values get unusually large.
The maximum table size of 16 TB does not require large file support from the operating system. Large tables are stored as multiple 1 GB files so file system size limits are not important.
The maximum table size and maximum number of columns can be increased if the default block size is increased to 32k.
A PostgreSQL database may need six-and-a-half times the disk space required to store the data in a flat file.
Consider a file of 300,000 lines with two integers on each line. The flat file is 2.4 MB. The size of the PostgreSQL database file containing this data can be estimated at 14 MB:
  36 bytes: each row header (approximate)
+  8 bytes: two int fields @ 4 bytes each
+  4 bytes: pointer on page to tuple
----------------------------------------
  48 bytes per row

The data page size in PostgreSQL is 8192 bytes (8 KB), so:

8192 bytes per page
-------------------  =  171 rows per database page (rounded up)
 48 bytes per row

300000 data rows
-----------------  =  1755 database pages (rounded up)
171 rows per page

1755 database pages * 8192 bytes per page  =  14,376,960 bytes (14 MB)
Indexes do not require as much overhead, but do contain the data that is being indexed, so they can be large also.
psql has a variety of backslash commands to show such information. Use \? to see them.
Also try the file pgsql/src/tutorial/syscat.source. It illustrates many of the SELECTs needed to get information from the database system tables.
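For example, this small sketch against the system catalogs lists the user-created tables:

SELECT relname
FROM pg_class
WHERE relkind = 'r' AND relname !~ '^pg_';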
PostgreSQL does not automatically maintain statistics. VACUUM must be run to update the statistics. After statistics are updated, the optimizer knows how many rows are in the table, and can better decide if it should use indexes. Note that the optimizer does not use indexes in cases when the table is small because a sequential scan would be faster.
For column-specific optimization statistics, use VACUUM ANALYZE. VACUUM ANALYZE is important for complex multijoin queries, so the optimizer can estimate the number of rows returned from each table, and choose the proper join order. The backend does not keep track of column statistics on its own, so VACUUM ANALYZE must be run to collect them periodically.
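For example, to refresh the statistics for a single table (name illustrative), run:

VACUUM ANALYZE tab;

Plain VACUUM reclaims space and updates the row-count statistics; VACUUM ANALYZE also gathers the per-column statistics described above.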
Indexes are usually not used for ORDER BY or joins. A sequential scan followed by an explicit sort is faster than an indexscan of all tuples of a large table. This is because random disk access is very slow.
When using wild-card operators such as LIKE or ~, indexes can only be used if the beginning of the search is anchored to the start of the string. So, to use indexes, LIKE searches should not begin with %, and ~ (regular expression) searches should start with ^.
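For example (table and column names are illustrative):

SELECT * FROM tab WHERE col LIKE 'abc%';  -- anchored, can use an index
SELECT * FROM tab WHERE col ~ '^abc';     -- anchored, can use an index
SELECT * FROM tab WHERE col LIKE '%abc';  -- unanchored, cannot use an index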
See the EXPLAIN manual page.
An R-tree index is used for indexing spatial data. A hash index can't handle range searches. A B-tree index only handles range searches in a single dimension. R-trees can handle multi-dimensional data. For example, if an R-tree index can be built on an attribute of type point, the system can more efficiently answer queries such as "select all points within a bounding rectangle."
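As a sketch (the table and column names are illustrative), an R-tree index on a box column can speed up overlap queries:

CREATE TABLE rects (b box);
CREATE INDEX rects_b_idx ON rects USING rtree (b);
SELECT * FROM rects WHERE b && box '((0,0),(10,10))';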
The canonical paper that describes the original R-tree design is:
Guttman, A. "R-trees: A Dynamic Index Structure for Spatial Searching." Proceedings of the 1984 ACM SIGMOD Int'l Conf on Mgmt of Data, 45-57.
You can also find this paper in Stonebraker's "Readings in Database Systems".
Built-in R-trees can handle polygons and boxes. In theory, R-trees can be extended to handle a higher number of dimensions. In practice, extending R-trees requires a bit of work, and we don't currently have any documentation on how to do it.
The GEQO module speeds query optimization when joining many tables by means of a Genetic Algorithm (GA). It allows the handling of large join queries through nonexhaustive search.
The ~ operator does regular expression matching, and ~* does case-insensitive regular expression matching. The case-insensitive variant of LIKE is called ILIKE in PostgreSQL 7.1 and later.
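For example (illustrative names):

SELECT * FROM tab WHERE col ~* 'apple';       -- case-insensitive regular expression
SELECT * FROM tab WHERE col ILIKE '%apple%';  -- case-insensitive LIKE, 7.1 and later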
Case-insensitive equality comparisons are normally expressed as:
SELECT * FROM tab WHERE lower(col) = 'abc'

This will not use a standard index. However, if you create a functional index, it will be used:
CREATE INDEX tabindex ON tab (lower(col));
You test the column with IS NULL and IS NOT NULL.
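For example (illustrative names):

SELECT * FROM tab WHERE col IS NULL;
SELECT * FROM tab WHERE col IS NOT NULL;

In SQL, a comparison like col = NULL does not test for null values; use IS NULL instead.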
Type            Internal Name   Notes
--------------------------------------------------
"char"          char            1 character
CHAR(#)         bpchar          blank padded to the specified fixed length
VARCHAR(#)      varchar         size specifies maximum length, no padding
TEXT            text            no specific upper limit on length
BYTEA           bytea           variable-length byte array (null-safe)
You will see the internal name when examining system catalogs and in some error messages.
The last four types above are "varlena" types (i.e., the first four bytes on disk are the length, followed by the data). Thus the actual space used is slightly greater than the declared size. However, these data types are also subject to compression or being stored out-of-line by TOAST, so the space on disk might also be less than expected.
CHAR() is best when storing strings that are usually the same length. VARCHAR() is best when storing variable-length strings, but it limits how long a string can be. TEXT is for strings of unlimited length, maximum 1 gigabyte. BYTEA is for storing binary data, particularly values that include NULL bytes.
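For example, a table (names illustrative) using each of these types:

CREATE TABLE item (
    code  CHAR(8),      -- always blank padded to 8 characters
    name  VARCHAR(40),  -- at most 40 characters, no padding
    notes TEXT,         -- no declared length limit
    image BYTEA         -- binary data, NULL bytes allowed
);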
PostgreSQL supports a SERIAL data type. It auto-creates a sequence and index on the column. For example, this:
CREATE TABLE person (
    id   SERIAL,
    name TEXT
);

is automatically translated into this:

CREATE SEQUENCE person_id_seq;
CREATE TABLE person (
    id   INT4 NOT NULL DEFAULT nextval('person_id_seq'),
    name TEXT
);
CREATE UNIQUE INDEX person_id_key ON person ( id );

See the create_sequence manual page for more information about sequences. You can also use each row's OID field as a unique value. However, if you need to dump and reload the database, you need to use pg_dump's -o option or COPY WITH OIDS option to preserve the OIDs.
One approach is to retrieve the next SERIAL value from the sequence object with the nextval() function before inserting and then insert it explicitly. Using the example table in 4.15.1, that might look like this in Perl:
new_id = output of "SELECT nextval('person_id_seq')"
INSERT INTO person (id, name) VALUES (new_id, 'Blaise Pascal');

You would then also have the new value stored in new_id for use in other queries (e.g., as a foreign key to the person table). Note that the automatically created SEQUENCE object will be named <table>_<serialcolumn>_seq, where table and serialcolumn are the names of your table and your SERIAL column, respectively.
Alternatively, you could retrieve the assigned SERIAL value with the currval() function after it was inserted by default, e.g.,

INSERT INTO person (name) VALUES ('Blaise Pascal');
new_id = output of "SELECT currval('person_id_seq')";

Finally, you could use the OID returned from the INSERT statement to look up the default value, though this is probably the least portable approach. In Perl, using DBI with Edmund Mergl's DBD::Pg module, the oid value is made available via $sth->{pg_oid_status} after $sth->execute().
No. Currval() returns the current value assigned by your backend, not by all users.
OIDs are PostgreSQL's answer to unique row ids. Every row that is created in PostgreSQL gets a unique OID. All OIDs generated during initdb are less than 16384 (from backend/access/transam.h). All user-created OIDs are equal to or greater than this. By default, all these OIDs are unique not only within a table or database, but unique within the entire PostgreSQL installation.
PostgreSQL uses OIDs in its internal system tables to link rows between tables. These OIDs can be used to identify specific user rows and used in joins. It is recommended you use column type OID to store OID values. You can create an index on the OID field for faster access.
OIDs are assigned to all new rows from a central area that is used by all databases. If you want to change the OID to something else, or if you want to make a copy of the table, with the original OIDs, there is no reason you can't do it:
CREATE TABLE new_table(old_oid oid, mycol int);
SELECT old_oid, mycol INTO new FROM old;
COPY new TO '/tmp/pgtable';
DELETE FROM new;
COPY new WITH OIDS FROM '/tmp/pgtable';
OIDs are stored as 4-byte integers, and will overflow at 4 billion. No one has reported this ever happening, and we plan to have the limit removed before anyone does.
TIDs are used to identify specific physical rows with block and offset values. TIDs change after rows are modified or reloaded. They are used by index entries to point to physical rows.
Some of the source code and older documentation use terms that have more common usage. Here are some:
A list of general database terms can be found at: http://www.comptechnews.com/~reaster/dbdesign.html
If you are running a version older than 7.1, an upgrade may fix the problem. Also it is possible you have run out of virtual memory on your system, or your kernel has a low limit for certain resources. Try this before starting postmaster:
ulimit -d 262144
limit datasize 256m

Depending on your shell, only one of these may succeed, but it will set your process data segment limit much higher and perhaps allow the query to complete. This command applies to the current process, and all subprocesses created after the command is run. If you are having a problem with the SQL client because the backend is returning too much data, try it before starting the client.
From psql, type select version();
You need to put BEGIN WORK and COMMIT around any use of a large object handle, that is, surrounding lo_open ... lo_close.
Currently PostgreSQL enforces the rule by closing large object handles at transaction commit, so the first attempt to do anything with the handle will draw the error invalid large obj descriptor. Code that used to work (at least most of the time) will now generate that error message if you fail to use a transaction.
If you are using a client interface like ODBC you may need to set auto-commit off.
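A minimal SQL-level sketch of the required transaction wrapping (the numeric mode value is an assumption taken from libpq's INV_READ and INV_WRITE flags):

BEGIN WORK;
-- create a large object and open it; 393216 = INV_READ | INV_WRITE
SELECT lo_open(lo_creat(393216), 393216);
-- read or write through the returned descriptor here
COMMIT;  -- the descriptor is closed and becomes invalid at commit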
Use CURRENT_TIMESTAMP:
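For example, as a column default that records the time of insertion (table and column names are illustrative):

CREATE TABLE test (x int, modtime timestamp DEFAULT CURRENT_TIMESTAMP);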
4.22) Why are my subqueries using IN so slow?

Currently, we join subqueries to outer queries by sequentially scanning the result of the subquery for each row of the outer query. A workaround is to replace IN with EXISTS:
SELECT *
FROM tab
WHERE col1 IN (SELECT col2 FROM TAB2)

becomes:

SELECT *
FROM tab
WHERE EXISTS (SELECT col2 FROM TAB2 WHERE col1 = col2)
4.23) How do I perform an outer join?
PostgreSQL 7.1 and later supports outer joins using the SQL standard syntax. Here are two examples:
SELECT *
FROM t1 LEFT OUTER JOIN t2 ON (t1.col = t2.col);
or

SELECT *
FROM t1 LEFT OUTER JOIN t2 USING (col);
These identical queries join t1.col to t2.col, and also return any unjoined rows in t1 (those with no match in t2). A RIGHT join would add unjoined rows of t2. A FULL join would return the matched rows plus all unjoined rows from t1 and t2. The word OUTER is optional and is assumed in LEFT, RIGHT, and FULL joins. Ordinary joins are called INNER joins.
In previous releases, outer joins can be simulated using UNION and NOT IN. For example, when joining tab1 and tab2, the following query does an outer join of the two tables:
SELECT tab1.col1, tab2.col2
FROM tab1, tab2
WHERE tab1.col1 = tab2.col1
UNION ALL
SELECT tab1.col1, NULL
FROM tab1
WHERE tab1.col1 NOT IN (SELECT tab2.col1 FROM tab2)
ORDER BY col1
4.24) How do I perform queries using multiple databases?
There is no way to query any database except the current one. Because PostgreSQL loads database-specific system catalogs, it is uncertain how a cross-database query should even behave.
Of course, a client can make simultaneous connections to different databases and merge the information that way.
Extending PostgreSQL
5.1) I wrote a user-defined function. When I run it in psql, why does it dump core?
The problem could be a number of things. Try testing your user-defined function in a stand-alone test program first.
5.2) How can I contribute some nifty new types and functions to PostgreSQL?
Send your extensions to the pgsql-hackers mailing list, and they will eventually end up in the contrib/ subdirectory.
5.3) How do I write a C function to return a tuple?
This requires wizardry so extreme that the authors have never tried it, though in principle it can be done.
5.4) I have changed a source file. Why does the recompile not see the change?
The Makefiles do not have the proper dependencies for include files. You have to do a make clean and then another make. If you are using GCC you can use the --enable-depend option of configure to have the compiler compute the dependencies automatically.