diff --git a/doc/src/sgml/pgbench.sgml b/doc/src/sgml/pgbench.sgml
index cb94e30c01..50c98b658f 100644
--- a/doc/src/sgml/pgbench.sgml
+++ b/doc/src/sgml/pgbench.sgml
@@ -1,640 +1,640 @@
-
-
-
- pgbench
-
-
- pgbench
-
-
-
- pgbench is a simple program for running benchmark
- tests on PostgreSQL>. It runs the same sequence of SQL
- commands over and over, possibly in multiple concurrent database sessions,
- and then calculates the average transaction rate (transactions per second).
- By default, pgbench tests a scenario that is
- loosely based on TPC-B, involving five SELECT>,
- UPDATE>, and INSERT> commands per transaction.
- However, it is easy to test other cases by writing your own transaction
- script files.
-
-
-
- Typical output from pgbench looks like:
-
-
-transaction type: TPC-B (sort of)
-scaling factor: 10
-query mode: simple
-number of clients: 10
-number of threads: 1
-number of transactions per client: 1000
-number of transactions actually processed: 10000/10000
-tps = 85.184871 (including connections establishing)
-tps = 85.296346 (excluding connections establishing)
-
-
- The first six lines report some of the most important parameter
- settings. The next line reports the number of transactions completed
- and intended (the latter being just the product of number of clients
- and number of transactions per client); these will be equal unless the run
- failed before completion. The last two lines report the TPS rate,
- figured with and without counting the time to start database sessions.
-
-
-
- Overview
-
-
- The default TPC-B-like transaction test requires specific tables to be
- set up beforehand. pgbench> should be invoked with
- the -i> (initialize) option to create and populate these
- tables. (When you are testing a custom script, you don't need this
- step, but will instead need to do whatever setup your test needs.)
- Initialization looks like:
-
-
-pgbench -i other-options> dbname>
-
-
- where dbname> is the name of the already-created
- database to test in. (You may also need -h>,
- -p>, and/or -U> options to specify how to
- connect to the database server.)
-
-
-
-
- pgbench -i> creates four tables pgbench_accounts>,
- pgbench_branches>, pgbench_history>, and
- pgbench_tellers>,
- destroying any existing tables of these names.
- Be very careful to use another database if you have tables having these
- names!
-
-
-
-
- At the default scale factor> of 1, the tables initially
- contain this many rows:
-
-
-table # of rows
----------------------------------
-pgbench_branches 1
-pgbench_tellers 10
-pgbench_accounts 100000
-pgbench_history 0
-
-
- You can (and, for most purposes, probably should) increase the number
- of rows by using the -s> (scale factor) option. The
- -F> (fillfactor) option might also be used at this point.
-
-
-
- Once you have done the necessary setup, you can run your benchmark
- with a command that doesn't include -i>, that is
-
-
-pgbench options> dbname>
-
-
- In nearly all cases, you'll need some options to make a useful test.
- The most important options are -c> (number of clients),
- -t> (number of transactions), -T> (time limit),
- and -f> (specify a custom script file).
- See below for a full list.
-
-
-
- The first table below shows options that are used
- during database initialization, while the second
- shows options that are used
- while running benchmarks, and the third
- shows options that are useful
- in both cases.
-
-
-
- pgbench initialization options
-
-
-
- Option
- Description
-
-
-
-
-
- -i
-
- Required to invoke initialization mode.
-
-
-
- -s scale_factor>
-
- Multiply the number of rows generated by the scale factor.
- For example, -s 100> will create 10,000,000 rows
- in the pgbench_accounts> table. Default is 1.
-
-
-
- -F fillfactor>
-
- Create the pgbench_accounts>,
- pgbench_tellers> and
- pgbench_branches> tables with the given fillfactor.
- Default is 100.
-
-
-
-
-
-
-
- pgbench benchmarking options
-
-
-
- Option
- Description
-
-
-
-
-
- -c clients>
-
- Number of clients simulated, that is, number of concurrent database
- sessions. Default is 1.
-
-
-
- -j threads>
-
- Number of worker threads within pgbench.
- Using more than one thread can be helpful on multi-CPU machines.
- The number of clients must be a multiple of the number of threads,
- since each thread is given the same number of client sessions to manage.
- Default is 1.
-
-
-
- -t transactions>
-
- Number of transactions each client runs. Default is 10.
-
-
-
- -T seconds>
-
- Run the test for this many seconds, rather than a fixed number of
- transactions per client. -t and
- -T are mutually exclusive.
-
-
-
- -M querymode>
-
- Protocol to use for submitting queries to the server:
-
-
- simple>: use simple query protocol.
-
-
- extended>: use extended query protocol.
-
-
- prepared>: use extended query protocol with prepared statements.
-
-
- The default is simple query protocol. (See the description of the
- frontend/backend protocol for more information.)
-
-
-
- -N
-
- Do not update pgbench_tellers> and
- pgbench_branches>.
- This will avoid update contention on these tables, but
- it makes the test case even less like TPC-B.
-
-
-
- -S
-
- Perform select-only transactions instead of TPC-B-like test.
-
-
-
- -f filename>
-
- Read transaction script from filename>.
- See below for details.
- -N, -S, and -f
- are mutually exclusive.
-
-
-
- -n
-
- Perform no vacuuming before running the test.
- This option is necessary>
- if you are running a custom test scenario that does not include
- the standard tables pgbench_accounts>,
- pgbench_branches>, pgbench_history>, and
- pgbench_tellers>.
-
-
-
- -v
-
- Vacuum all four standard tables before running the test.
- With neither -n> nor -v>, pgbench will vacuum the
- pgbench_tellers> and pgbench_branches>
- tables, and will truncate pgbench_history>.
-
-
-
- -D varname>=>value>
-
- Define a variable for use by a custom script (see below).
- Multiple -D> options are allowed.
-
-
-
- -C
-
- Establish a new connection for each transaction, rather than
- doing it just once per client session.
- This is useful to measure the connection overhead.
-
-
-
- -l
-
- Write the time taken by each transaction to a logfile.
- See below for details.
-
-
-
- -s scale_factor>
-
- Report the specified scale factor in pgbench>'s
- output. With the built-in tests, this is not necessary; the
- correct scale factor will be detected by counting the number of
- rows in the pgbench_branches> table. However, when testing
- custom benchmarks (-f> option), the scale factor
- will be reported as 1 unless this option is used.
-
-
-
- -d
-
- Print debugging output.
-
-
-
-
-
-
-
- pgbench common options
-
-
-
- Option
- Description
-
-
-
-
-
- -h hostname>
- database server's host
-
-
- -p port>
- database server's port
-
-
- -U login>
- username to connect as
-
-
-
-
-
-
-
- What is the transaction> actually performed in pgbench?
-
-
- The default transaction script issues seven commands per transaction:
-
-
-
- BEGIN;
- UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
- SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
- UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
- UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
- INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
- END;
-
-
-
- If you specify -N>, steps 4 and 5 aren't included in the
- transaction. If you specify -S>, only the SELECT> is
- issued.
-
-
-
-
- Custom Scripts
-
-
- pgbench has support for running custom
- benchmark scenarios by replacing the default transaction script
- (described above) with a transaction script read from a file
- (-f option). In this case a transaction>
- counts as one execution of a script file. You can even specify
- multiple scripts (multiple -f options), in which
- case a random one of the scripts is chosen each time a client session
- starts a new transaction.
-
-
-
- The format of a script file is one SQL command per line; multi-line
- SQL commands are not supported. Empty lines and lines beginning with
- --> are ignored. Script file lines can also be
- meta commands>, which are interpreted by pgbench>
- itself, as described below.
-
-
-
- There is a simple variable-substitution facility for script files.
- Variables can be set by the command-line -D> option,
- explained above, or by the meta commands explained below.
- In addition to any variables preset by -D> command-line options,
- the variable scale> is preset to the current scale factor.
- Once set, a variable's
- value can be inserted into a SQL command by writing
- :>variablename>. When running more than
- one client session, each session has its own set of variables.
-
-
-
- Script file meta commands begin with a backslash (\>).
- Arguments to a meta command are separated by white space.
- These meta commands are supported:
-
-
-
-
-
- \set varname> operand1> [ operator> operand2> ]
-
-
-
-
- Sets variable varname> to a calculated integer value.
- Each operand> is either an integer constant or a
- :>variablename> reference to a variable
- having an integer value. The operator> can be
- +>, ->, *>, or />.
-
-
-
- Example:
-
-\set ntellers 10 * :scale
-
-
-
-
-
-
-
- \setrandom varname> min> max>
-
-
-
-
- Sets variable varname> to a random integer value
- between the limits min> and max> inclusive.
- Each limit can be either an integer constant or a
- :>variablename> reference to a variable
- having an integer value.
-
-
-
- Example:
-
-\setrandom aid 1 :naccounts
-
-
-
-
-
-
-
- \sleep number> [ us | ms | s ]
-
-
-
-
- Causes script execution to sleep for the specified duration in
- microseconds (us>), milliseconds (ms>) or seconds
- (s>). If the unit is omitted then seconds are the default.
- number> can be either an integer constant or a
- :>variablename> reference to a variable
- having an integer value.
-
-
-
- Example:
-
-\sleep 10 ms
-
-
-
-
-
-
-
- \setshell varname> command> [ argument> ... ]
-
-
-
-
- Sets variable varname> to the result of the shell command
- command>. The command must return an integer value
- through its standard output.
-
-
-
- argument> can be either a text constant or a
- :>variablename> reference to a variable of
- any type. If you want an argument> to start with
- a colon, you need to write an additional colon at the beginning of
- argument>.
-
-
-
- Example:
-
-\setshell variable_to_be_assigned command literal_argument :variable ::literal_starting_with_colon
-
-
-
-
-
-
-
- \shell command> [ argument> ... ]
-
-
-
-
- Same as \setshell, but the result is ignored.
-
-
-
- Example:
-
-\shell command literal_argument :variable ::literal_starting_with_colon
-
-
-
-
-
-
-
- As an example, the full definition of the built-in TPC-B-like
- transaction is:
-
-
-\set nbranches :scale
-\set ntellers 10 * :scale
-\set naccounts 100000 * :scale
-\setrandom aid 1 :naccounts
-\setrandom bid 1 :nbranches
-\setrandom tid 1 :ntellers
-\setrandom delta -5000 5000
-BEGIN;
-UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
-SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
-UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
-UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
-INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
-END;
-
-
- This script allows each iteration of the transaction to reference
- different, randomly-chosen rows. (This example also shows why it's
- important for each client session to have its own variables —
- otherwise they'd not be independently touching different rows.)
-
-
-
-
-
- Per-transaction logging
-
-
- With the -l> option, pgbench> writes the time
- taken by each transaction to a logfile. The logfile will be named
- pgbench_log.nnn>, where
- nnn> is the PID of the pgbench process.
- If the -j> option is 2 or higher, creating multiple worker
- threads, each will have its own log file. The first worker will use
- the same name for its log file as in the standard single worker case.
- The additional log files for the other workers will be named
- pgbench_log.nnn>.mmm>,
- where mmm> is a sequential number for each worker starting
- with 1.
-
-
-
- The format of the log is:
-
-
- client_id> transaction_no> time> file_no> time_epoch> time_us>
-
-
- where time> is the elapsed transaction time in microseconds,
- file_no> identifies which script file was used
- (useful when multiple scripts were specified with -f>),
- and time_epoch>/time_us> are a
- UNIX epoch format timestamp and an offset
- in microseconds (suitable for creating an ISO 8601
- timestamp with fractional seconds) showing when
- the transaction completed.
-
-
-
- Here are example outputs:
-
- 0 199 2241 0 1175850568 995598
- 0 200 2465 0 1175850568 998079
- 0 201 2513 0 1175850569 608
- 0 202 2038 0 1175850569 2663
-
-
-
-
-
- Good Practices
-
-
- It is very easy to use pgbench> to produce completely
- meaningless numbers. Here are some guidelines to help you get useful
- results.
-
-
-
- In the first place, never> believe any test that runs
- for only a few seconds. Use the -t> or -T> option
- to make the run last at least a few minutes, so as to average out noise.
- In some cases you may need hours to get reproducible numbers.
- It's a good idea to repeat the test run a few times, to find out whether
- your numbers are reproducible.
-
-
-
- For the default TPC-B-like test scenario, the initialization scale factor
- (-s>) should be at least as large as the largest number of
- clients you intend to test (-c>); else you'll mostly be
- measuring update contention. There are only -s> rows in
- the pgbench_branches> table, and every transaction wants to
- update one of them, so -c> values in excess of -s>
- will undoubtedly result in lots of transactions blocked waiting for
- other transactions.
-
-
-
- The default test scenario is also quite sensitive to how long it's been
- since the tables were initialized: accumulation of dead rows and dead space
- in the tables changes the results. To understand the results you must keep
- track of the total number of updates and when vacuuming happens. If
- autovacuum is enabled it can result in unpredictable changes in measured
- performance.
-
-
-
- A limitation of pgbench> is that it can itself become
- the bottleneck when trying to test a large number of client sessions.
- This can be alleviated by running pgbench> on a different
- machine from the database server, although low network latency will be
- essential. It might even be useful to run several pgbench>
- instances concurrently, on several client machines, against the same
- database server.
-
-
-
-
+
+
+
+ pgbench
+
+
+ pgbench
+
+
+
+ pgbench is a simple program for running benchmark
+ tests on PostgreSQL>. It runs the same sequence of SQL
+ commands over and over, possibly in multiple concurrent database sessions,
+ and then calculates the average transaction rate (transactions per second).
+ By default, pgbench tests a scenario that is
+ loosely based on TPC-B, involving five SELECT>,
+ UPDATE>, and INSERT> commands per transaction.
+ However, it is easy to test other cases by writing your own transaction
+ script files.
+
+
+
+ Typical output from pgbench looks like:
+
+
+transaction type: TPC-B (sort of)
+scaling factor: 10
+query mode: simple
+number of clients: 10
+number of threads: 1
+number of transactions per client: 1000
+number of transactions actually processed: 10000/10000
+tps = 85.184871 (including connections establishing)
+tps = 85.296346 (excluding connections establishing)
+
+
+ The first six lines report some of the most important parameter
+ settings. The next line reports the number of transactions completed
+ and intended (the latter being just the product of number of clients
+ and number of transactions per client); these will be equal unless the run
+ failed before completion. The last two lines report the TPS rate,
+ figured with and without counting the time to start database sessions.
+
+
+
+ Overview
+
+
+ The default TPC-B-like transaction test requires specific tables to be
+ set up beforehand. pgbench> should be invoked with
+ the -i> (initialize) option to create and populate these
+ tables. (When you are testing a custom script, you don't need this
+ step, but will instead need to do whatever setup your test needs.)
+ Initialization looks like:
+
+
+pgbench -i other-options> dbname>
+
+
+ where dbname> is the name of the already-created
+ database to test in. (You may also need -h>,
+ -p>, and/or -U> options to specify how to
+ connect to the database server.)
+
+
+
+
+ pgbench -i> creates four tables pgbench_accounts>,
+ pgbench_branches>, pgbench_history>, and
+ pgbench_tellers>,
+ destroying any existing tables of these names.
+ Be very careful to use another database if you have tables having these
+ names!
+
+
+
+
+ At the default scale factor> of 1, the tables initially
+ contain this many rows:
+
+
+table # of rows
+---------------------------------
+pgbench_branches 1
+pgbench_tellers 10
+pgbench_accounts 100000
+pgbench_history 0
+
+
+ You can (and, for most purposes, probably should) increase the number
+ of rows by using the -s> (scale factor) option. The
+ -F> (fillfactor) option might also be used at this point.
+
+
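+ For example, a larger initialization run might look like this (the
+ scale factor, fillfactor, and database name shown are illustrative only):
+
+pgbench -i -s 50 -F 90 mydb
+
+ With -s 50>, the pgbench_accounts> table starts out
+ with 5,000,000 rows.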
+
+ Once you have done the necessary setup, you can run your benchmark
+ with a command that doesn't include -i>, that is
+
+
+pgbench options> dbname>
+
+
+ In nearly all cases, you'll need some options to make a useful test.
+ The most important options are -c> (number of clients),
+ -t> (number of transactions), -T> (time limit),
+ and -f> (specify a custom script file).
+ See below for a full list.
+
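+ As a simple illustration, a run against the tables initialized above might
+ be started like this (client count, thread count, duration, and database
+ name are examples only):
+
+pgbench -c 10 -j 2 -T 300 mydb
+
+ This runs ten concurrent client sessions, spread over two worker threads,
+ for five minutes.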
+
+
+ The first table below shows options that are used
+ during database initialization, while the second
+ shows options that are used
+ while running benchmarks, and the third
+ shows options that are useful
+ in both cases.
+
+
+
+ pgbench initialization options
+
+
+
+ Option
+ Description
+
+
+
+
+
+ -i
+
+ Required to invoke initialization mode.
+
+
+
+ -s scale_factor>
+
+ Multiply the number of rows generated by the scale factor.
+ For example, -s 100> will create 10,000,000 rows
+ in the pgbench_accounts> table. Default is 1.
+
+
+
+ -F fillfactor>
+
+ Create the pgbench_accounts>,
+ pgbench_tellers> and
+ pgbench_branches> tables with the given fillfactor.
+ Default is 100.
+
+
+
+
+
+
+
+ pgbench benchmarking options
+
+
+
+ Option
+ Description
+
+
+
+
+
+ -c clients>
+
+ Number of clients simulated, that is, number of concurrent database
+ sessions. Default is 1.
+
+
+
+ -j threads>
+
+ Number of worker threads within pgbench.
+ Using more than one thread can be helpful on multi-CPU machines.
+ The number of clients must be a multiple of the number of threads,
+ since each thread is given the same number of client sessions to manage.
+ Default is 1.
+
+
+
+ -t transactions>
+
+ Number of transactions each client runs. Default is 10.
+
+
+
+ -T seconds>
+
+ Run the test for this many seconds, rather than a fixed number of
+ transactions per client. -t and
+ -T are mutually exclusive.
+
+
+
+ -M querymode>
+
+ Protocol to use for submitting queries to the server:
+
+
+ simple>: use simple query protocol.
+
+
+ extended>: use extended query protocol.
+
+
+ prepared>: use extended query protocol with prepared statements.
+
+
+ The default is simple query protocol. (See the description of the
+ frontend/backend protocol for more information.)
+
+
+
+ -N
+
+ Do not update pgbench_tellers> and
+ pgbench_branches>.
+ This will avoid update contention on these tables, but
+ it makes the test case even less like TPC-B.
+
+
+
+ -S
+
+ Perform select-only transactions instead of TPC-B-like test.
+
+
+
+ -f filename>
+
+ Read transaction script from filename>.
+ See below for details.
+ -N, -S, and -f
+ are mutually exclusive.
+
+
+
+ -n
+
+ Perform no vacuuming before running the test.
+ This option is necessary>
+ if you are running a custom test scenario that does not include
+ the standard tables pgbench_accounts>,
+ pgbench_branches>, pgbench_history>, and
+ pgbench_tellers>.
+
+
+
+ -v
+
+ Vacuum all four standard tables before running the test.
+ With neither -n> nor -v>, pgbench will vacuum the
+ pgbench_tellers> and pgbench_branches>
+ tables, and will truncate pgbench_history>.
+
+
+
+ -D varname>=>value>
+
+ Define a variable for use by a custom script (see below).
+ Multiple -D> options are allowed.
+
+
+
+ -C
+
+ Establish a new connection for each transaction, rather than
+ doing it just once per client session.
+ This is useful to measure the connection overhead.
+
+
+
+ -l
+
+ Write the time taken by each transaction to a logfile.
+ See below for details.
+
+
+
+ -s scale_factor>
+
+ Report the specified scale factor in pgbench>'s
+ output. With the built-in tests, this is not necessary; the
+ correct scale factor will be detected by counting the number of
+ rows in the pgbench_branches> table. However, when testing
+ custom benchmarks (-f> option), the scale factor
+ will be reported as 1 unless this option is used.
+
+
+
+ -d
+
+ Print debugging output.
+
+
+
+
+
+
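+ As an illustration of combining several of these options, a select-only
+ run using prepared statements could be started like this (the values shown
+ are examples only):
+
+pgbench -c 10 -j 2 -T 60 -S -M prepared mydb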
+
+ pgbench common options
+
+
+
+ Option
+ Description
+
+
+
+
+
+ -h hostname>
+ database server's host
+
+
+ -p port>
+ database server's port
+
+
+ -U login>
+ username to connect as
+
+
+
+
+
+
+
+ What is the transaction> actually performed in pgbench?
+
+
+ The default transaction script issues seven commands per transaction:
+
+
+
+ BEGIN;
+ UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
+ SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
+ UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
+ UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
+ INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
+ END;
+
+
+
+ If you specify -N>, steps 4 and 5 aren't included in the
+ transaction. If you specify -S>, only the SELECT> is
+ issued.
+
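+ Based on the description above, the select-only transaction used by
+ -S> amounts to roughly the following script (a sketch, not
+ necessarily the literal built-in definition):
+
+\set naccounts 100000 * :scale
+\setrandom aid 1 :naccounts
+SELECT abalance FROM pgbench_accounts WHERE aid = :aid;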
+
+
+
+ Custom Scripts
+
+
+ pgbench has support for running custom
+ benchmark scenarios by replacing the default transaction script
+ (described above) with a transaction script read from a file
+ (-f option). In this case a transaction>
+ counts as one execution of a script file. You can even specify
+ multiple scripts (multiple -f options), in which
+ case a random one of the scripts is chosen each time a client session
+ starts a new transaction.
+
+
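+ For example, a run mixing two hypothetical script files could look like:
+
+pgbench -c 10 -T 300 -f deposits.sql -f withdrawals.sql mydb
+
+ Each time a client starts a new transaction, one of the two scripts is
+ chosen at random.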
+
+ The format of a script file is one SQL command per line; multi-line
+ SQL commands are not supported. Empty lines and lines beginning with
+ --> are ignored. Script file lines can also be
+ meta commands>, which are interpreted by pgbench>
+ itself, as described below.
+
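+ A minimal script file following these rules might look like this (the
+ table and column names are hypothetical; \setrandom> is a meta
+ command described below):
+
+-- fetch one row, then record the access
+\setrandom id 1 100000
+SELECT data FROM test_table WHERE id = :id;
+INSERT INTO test_log (id, accessed_at) VALUES (:id, CURRENT_TIMESTAMP);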
+
+
+ There is a simple variable-substitution facility for script files.
+ Variables can be set by the command-line -D> option,
+ explained above, or by the meta commands explained below.
+ In addition to any variables preset by -D> command-line options,
+ the variable scale> is preset to the current scale factor.
+ Once set, a variable's
+ value can be inserted into a SQL command by writing
+ :>variablename>. When running more than
+ one client session, each session has its own set of variables.
+
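+ For instance, a variable defined on the command line like this (the script
+ and variable names are hypothetical):
+
+pgbench -c 4 -T 60 -D row_limit=20 -f myscript.sql mydb
+
+ can then be referenced inside the script as :row_limit>,
+ for example in a LIMIT :row_limit> clause.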
+
+
+ Script file meta commands begin with a backslash (\>).
+ Arguments to a meta command are separated by white space.
+ These meta commands are supported:
+
+
+
+
+
+ \set varname> operand1> [ operator> operand2> ]
+
+
+
+
+ Sets variable varname> to a calculated integer value.
+ Each operand> is either an integer constant or a
+ :>variablename> reference to a variable
+ having an integer value. The operator> can be
+ +>, ->, *>, or />.
+
+
+
+ Example:
+
+\set ntellers 10 * :scale
+
+
+
+
+
+
+
+ \setrandom varname> min> max>
+
+
+
+
+ Sets variable varname> to a random integer value
+ between the limits min> and max> inclusive.
+ Each limit can be either an integer constant or a
+ :>variablename> reference to a variable
+ having an integer value.
+
+
+
+ Example:
+
+\setrandom aid 1 :naccounts
+
+
+
+
+
+
+
+ \sleep number> [ us | ms | s ]
+
+
+
+
+ Causes script execution to sleep for the specified duration in
+ microseconds (us>), milliseconds (ms>) or seconds
+ (s>). If the unit is omitted then seconds are the default.
+ number> can be either an integer constant or a
+ :>variablename> reference to a variable
+ having an integer value.
+
+
+
+ Example:
+
+\sleep 10 ms
+
+
+
+
+
+
+
+ \setshell varname> command> [ argument> ... ]
+
+
+
+
+ Sets variable varname> to the result of the shell command
+ command>. The command must return an integer value
+ through its standard output.
+
+
+
+ argument> can be either a text constant or a
+ :>variablename> reference to a variable of
+ any type. If you want an argument> to start with
+ a colon, you need to write an additional colon at the beginning of
+ argument>.
+
+
+
+ Example:
+
+\setshell variable_to_be_assigned command literal_argument :variable ::literal_starting_with_colon
+
+
+
+
+
+
+
+ \shell command> [ argument> ... ]
+
+
+
+
+ Same as \setshell, but the result is ignored.
+
+
+
+ Example:
+
+\shell command literal_argument :variable ::literal_starting_with_colon
+
+
+
+
+
+
+
+ As an example, the full definition of the built-in TPC-B-like
+ transaction is:
+
+
+\set nbranches :scale
+\set ntellers 10 * :scale
+\set naccounts 100000 * :scale
+\setrandom aid 1 :naccounts
+\setrandom bid 1 :nbranches
+\setrandom tid 1 :ntellers
+\setrandom delta -5000 5000
+BEGIN;
+UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
+SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
+UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
+UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
+INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
+END;
+
+
+ This script allows each iteration of the transaction to reference
+ different, randomly-chosen rows. (This example also shows why it's
+ important for each client session to have its own variables —
+ otherwise they'd not be independently touching different rows.)
+
+
+
+
+
+ Per-transaction logging
+
+
+ With the -l> option, pgbench> writes the time
+ taken by each transaction to a logfile. The logfile will be named
+ pgbench_log.nnn>, where
+ nnn> is the PID of the pgbench process.
+ If the -j> option is 2 or higher, creating multiple worker
+ threads, each will have its own log file. The first worker will use
+ the same name for its log file as in the standard single worker case.
+ The additional log files for the other workers will be named
+ pgbench_log.nnn>.mmm>,
+ where mmm> is a sequential number for each worker starting
+ with 1.
+
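+ For example, a run with -j 3> whose pgbench>
+ process happened to have PID 1234 (an arbitrary example) would produce
+ three log files:
+
+pgbench_log.1234
+pgbench_log.1234.1
+pgbench_log.1234.2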
+
+
+ The format of the log is:
+
+
+ client_id> transaction_no> time> file_no> time_epoch> time_us>
+
+
+ where time> is the elapsed transaction time in microseconds,
+ file_no> identifies which script file was used
+ (useful when multiple scripts were specified with -f>),
+ and time_epoch>/time_us> are a
+ UNIX epoch format timestamp and an offset
+ in microseconds (suitable for creating an ISO 8601
+ timestamp with fractional seconds) showing when
+ the transaction completed.
+
+
+
+ Here are example outputs:
+
+ 0 199 2241 0 1175850568 995598
+ 0 200 2465 0 1175850568 998079
+ 0 201 2513 0 1175850569 608
+ 0 202 2038 0 1175850569 2663
+
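+ Because the third field is the elapsed transaction time in microseconds,
+ the log lends itself to simple post-processing; for example, a rough
+ average-latency calculation (the log file name is just an example) is:
+
+awk '{ total += $3; count++ } END { print total/count, "us average" }' pgbench_log.1234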
+
+
+
+
+ Good Practices
+
+
+ It is very easy to use pgbench> to produce completely
+ meaningless numbers. Here are some guidelines to help you get useful
+ results.
+
+
+
+ In the first place, never> believe any test that runs
+ for only a few seconds. Use the -t> or -T> option
+ to make the run last at least a few minutes, so as to average out noise.
+ In some cases you may need hours to get reproducible numbers.
+ It's a good idea to repeat the test run a few times, to find out whether
+ your numbers are reproducible.
+
+
+
+ For the default TPC-B-like test scenario, the initialization scale factor
+ (-s>) should be at least as large as the largest number of
+ clients you intend to test (-c>); else you'll mostly be
+ measuring update contention. There are only -s> rows in
+ the pgbench_branches> table, and every transaction wants to
+ update one of them, so -c> values in excess of -s>
+ will undoubtedly result in lots of transactions blocked waiting for
+ other transactions.
+
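+ For example, to test with up to 64 client sessions, it would make sense to
+ initialize with at least that scale factor and then run something like
+ (values illustrative only):
+
+pgbench -i -s 64 mydb
+pgbench -c 64 -j 4 -T 300 mydb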
+
+
+ The default test scenario is also quite sensitive to how long it's been
+ since the tables were initialized: accumulation of dead rows and dead space
+ in the tables changes the results. To understand the results you must keep
+ track of the total number of updates and when vacuuming happens. If
+ autovacuum is enabled it can result in unpredictable changes in measured
+ performance.
+
+
+
+ A limitation of pgbench> is that it can itself become
+ the bottleneck when trying to test a large number of client sessions.
+ This can be alleviated by running pgbench> on a different
+ machine from the database server, although low network latency will be
+ essential. It might even be useful to run several pgbench>
+ instances concurrently, on several client machines, against the same
+ database server.
+
+
+
+