This allows logging only some fraction of transactions, greatly reducing
the amount of log generated.
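For example, with something like the following (invocation illustrative):

    $ pgbench --sampling-rate=0.01 -l bench

only about one percent of transactions would be written to the log.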
Tomas Vondra, reviewed by Robert Haas and Jeff Janes.
On some platforms these functions return NULL, rather than the more common
practice of returning a pointer to a zero-sized block of memory. Hack our
various wrapper functions to hide the difference by substituting a size
request of 1. This is probably not so important for the callers, who
should never touch the block anyway if they asked for size 0 --- but it's
important for the wrapper functions themselves, which mistakenly treated
the NULL result as an out-of-memory failure. This broke at least pg_dump
for the case of no user-defined aggregates, as per report from
Matthew Carrington.
Back-patch to 9.2 to fix the pg_dump issue. Given the lack of previous
complaints, it seems likely that there is no live bug in previous releases,
even though some of these functions were in place before that.
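A minimal sketch of the wrapper pattern described above, assuming
frontend-style error handling:

    #include <stdio.h>
    #include <stdlib.h>

    void *
    pg_malloc(size_t size)
    {
        void *tmp;

        /* avoid unportable behavior of malloc(0): request 1 byte instead */
        if (size == 0)
            size = 1;
        tmp = malloc(size);
        if (tmp == NULL)
        {
            fprintf(stderr, "out of memory\n");
            exit(EXIT_FAILURE);
        }
        return tmp;
    }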
We had a number of variants on the theme of "malloc or die", with the
majority named like "pg_malloc", but by no means all. Standardize on the
names pg_malloc, pg_malloc0, pg_realloc, pg_strdup. Get rid of pg_calloc
entirely in favor of using pg_malloc0.
This is an essentially cosmetic change, so no back-patch. (I did find
a couple of places where psql and pg_dump were using plain malloc or
strdup instead of the pg_ versions, but they don't look significant
enough to bother back-patching.)
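Under this scheme pg_malloc0 can be a thin zeroing wrapper over pg_malloc;
roughly (a sketch, not the exact code):

    #include <string.h>

    void *
    pg_malloc0(size_t size)
    {
        void *tmp = pg_malloc(size);    /* allocate-or-die, as above */

        memset(tmp, 0, size);
        return tmp;
    }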
This is apparently faster than doing things the other way around when
the scale factor is large.
Along the way, adjust -n to suppress vacuuming during initialization
as well as during test runs.
Jeff Janes, with some small changes by me.
These days, even a wimpy system can insert 10000 tuples in the blink of
an eye, so there's no real need for this much verbosity.
Per complaint from Tatsuo Ishii.
The option --foreign-keys, used at initialization time, will create foreign
key constraints for the columns that represent references to other tables'
primary keys. This can help in benchmarking FK performance.
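For example (invocation illustrative):

    $ pgbench -i --foreign-keys -s 10 bench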
Jeff Janes
Before, some places didn't document the short options (-? and -V),
some documented both, some documented nothing, and they were listed in
various orders. Now this is hopefully more consistent and complete.
The old check against MAX_RANDOM_VALUE is clearly irrelevant since
getrand() no longer calls random(). Instead, check whether min and max
are close enough together to avoid an overflow inside getrand(), as
suggested by Tom Lane. This is still somewhat silly, because we're
using atoi(), which doesn't check for overflow anyway and (at least on
my system) will cheerfully return 0 when given "4294967296". But that's
a problem for another commit.
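The shape of the check is roughly as follows (a sketch; names assumed),
ensuring the range width used inside getrand() fits in an int:

    #include <limits.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* getrand() computes min + (int) ((max - min + 1) * frac), so the
     * width max - min + 1 must not overflow an int */
    static void
    check_range(int min, int max)
    {
        if (max < min || (int64_t) max - (int64_t) min + 1 > INT_MAX)
        {
            fprintf(stderr, "invalid range: %d..%d\n", min, max);
            exit(1);
        }
    }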
glibc renders random() thread-safe by wrapping a futex lock around it;
testing reveals that this limits the performance of pgbench on machines
with many CPU cores. Rather than switching to random_r(), which is
only available on GNU systems and crashes unless you use undocumented
alchemy to initialize the random state properly, switch to our built-in
implementation of erand48(), which is both thread-safe and concurrent.
Since the list of reasons not to use the operating system's erand48()
is getting rather long, rename ours to pg_erand48() (and similarly
for our implementations of lrand48() and srand48()) and just always
use those. We were already doing this on Cygwin anyway, and the
glibc implementation is not quite thread-safe, so pgbench wouldn't
be able to use that either.
Per discussion with Tom Lane.
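For reference, an erand48-style generator is just a 48-bit linear
congruential generator whose state lives in caller-supplied storage, which
is what makes it trivially thread-safe. A sketch (not the actual pg_erand48
code):

    #include <math.h>
    #include <stdint.h>

    /* advance x = (0x5DEECE66D * x + 0xB) mod 2^48, the standard rand48
     * constants, keeping the state in the caller's array */
    static uint64_t
    advance48(unsigned short xseed[3])
    {
        uint64_t x = ((uint64_t) xseed[2] << 32) |
                     ((uint64_t) xseed[1] << 16) |
                      (uint64_t) xseed[0];

        x = (x * UINT64_C(0x5DEECE66D) + 0xB) & ((UINT64_C(1) << 48) - 1);
        xseed[0] = (unsigned short) (x & 0xFFFF);
        xseed[1] = (unsigned short) ((x >> 16) & 0xFFFF);
        xseed[2] = (unsigned short) ((x >> 32) & 0xFFFF);
        return x;
    }

    /* uniform double in [0, 1), like erand48() */
    static double
    my_erand48(unsigned short xseed[3])
    {
        return ldexp((double) advance48(xseed), -48);
    }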
Make pgbench bail out immediately on any out-of-memory condition. It's rather pointless to
imagine that pgbench will be able to continue usefully after a malloc
failure, and in any case there were a number of unchecked mallocs.
Use a separate log file for each pgbench worker, to avoid interleaved
writes. The first worker still uses "pgbench_log.<pid>" for the name, but
additional workers use "pgbench_log.<pid>.<serial-number>" instead.
Reported by Greg Smith.
Variable names must consist only of letters, digits, and underscores.
We had allowed setting variables with invalid names, but such variables
could not be referred to in queries.
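For example (illustrative):

    \set ntellers 10     (accepted: letters, digits, underscores only)
    \set n-tellers 10    (rejected: "-" is not a valid name character)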
Thanks to Robert Haas for the review.
The \shell command runs an external shell command.
\setshell does the same and additionally assigns the result to a variable.
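Illustrative usage in a custom script (the commands themselves are made up):

    \shell echo starting a transaction
    \setshell answer echo 42

after which :answer contains 42.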
Original patch by Michael Paquier, with some editorialization by Itagaki,
and reviewed by Greg Smith.
Re-seed the random number generator in each of the child
processes of a pgbench run, when we are using -j > 1 and are emulating
threads via fork(). Otherwise the children all inherit the same random
sequence state and produce the same random-number sequence.
In the threaded case the different threads will share one RNG state, so
they will produce different subsets of one sequence, which is maybe more
correlated than a purist would like but will not be "the same". So we
leave that case alone.
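A sketch of the kind of fix involved (the exact seeding expression is
assumed, not taken from the patch):

    #include <stdlib.h>
    #include <time.h>
    #include <unistd.h>

    /* called in each forked child so its random() stream diverges
     * from its siblings' */
    static void
    reseed_child_rng(void)
    {
        srandom((unsigned int) getpid() ^ (unsigned int) time(NULL));
    }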
First noticed by Takahiro Itagaki; this is also part of the explanation
for the pgbench misbehavior recently reported by Jaime Casanova.
The #defines for the numbers of branches, tellers, and accounts
are used to populate the tables with -i, but when running the actual benchmark
pgbench had those values separately hard-coded in the query metacommands. This patch
makes the metacommands obtain their values from the relevant #defines.
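One way to do that in C is to stringify the #defines directly into the
script text; a sketch (the helper macros are hypothetical, not pgbench's):

    #define ntellers 10

    /* two-step stringification turns the macro's value into "10" */
    #define Str(x) #x
    #define Xstr(x) Str(x)

    static const char setrandom_tid[] =
        "\\setrandom tid 1 " Xstr(ntellers) "\n";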
Patch provided by Jeff Janes.
reviewed by Greg Smith and Josh Williams.
Following is the proposal from ITAGAKI Takahiro:
pgbench is a well-known tool for measuring Postgres performance, but nowadays
it does not perform well because it cannot use multiple CPUs. On the other
hand, the Postgres server can use CPUs very well, so the bottleneck of the
workload is *in pgbench*.
Multi-threading would be a solution. The attached patch adds a -j
(number of jobs) option to pgbench. If the value N is greater than 1,
pgbench runs with N threads. Connections are divided equally among
them (e.g., -c64 -j4 => 4 threads with 16 connections each). It can
run on POSIX platforms with pthread and on Windows with win32 threads.
Here are results of multi-threaded pgbench runs on Fedora 11 with an Intel
Core i7 (8 logical cores = 4 physical cores * HT). -j8 (8 threads) was
the best, and its tps is 4.5 times that of -j1 (the traditional
single-threaded behavior).
$ pgbench -i -s10
$ pgbench -n -S -c64 -j1 => tps = 11600.158593
$ pgbench -n -S -c64 -j2 => tps = 17947.100954
$ pgbench -n -S -c64 -j4 => tps = 26571.124001
$ pgbench -n -S -c64 -j8 => tps = 52725.470403
$ pgbench -n -S -c64 -j16 => tps = 38976.675319
$ pgbench -n -S -c64 -j32 => tps = 28998.499601
$ pgbench -n -S -c64 -j64 => tps = 26701.877815
Is it acceptable to use pthread in contrib module?
If ok, I will add the patch to the next commitfest.
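The scheme is straightforward to sketch with pthreads (structure assumed
and heavily simplified relative to the actual patch):

    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 4
    #define NCLIENTS 64

    typedef struct
    {
        int first_client;   /* index of this thread's first connection */
        int nclients;       /* how many connections this thread drives */
    } ThreadArg;

    static void *
    worker(void *arg)
    {
        ThreadArg *ta = (ThreadArg *) arg;

        /* drive ta->nclients client sessions here */
        printf("thread handles clients %d..%d\n",
               ta->first_client, ta->first_client + ta->nclients - 1);
        return NULL;
    }

    int
    main(void)
    {
        pthread_t threads[NTHREADS];
        ThreadArg args[NTHREADS];
        int per_thread = NCLIENTS / NTHREADS;   /* -c64 -j4 => 16 each */
        int i;

        for (i = 0; i < NTHREADS; i++)
        {
            args[i].first_client = i * per_thread;
            args[i].nclients = per_thread;
            pthread_create(&threads[i], NULL, worker, &args[i]);
        }
        for (i = 0; i < NTHREADS; i++)
            pthread_join(threads[i], NULL);
        return 0;
    }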
Rename pgbench's tables to pgbench_accounts, pgbench_branches,
pgbench_history, and pgbench_tellers, rather than just accounts, branches,
history, and tellers. This is to prevent accidental conflicts with real
application tables, as has been reported to happen at least once. Also
remove the automatic "SET search_path = public" that it did at startup,
as this seems to restrict testing flexibility without actually buying much.
Per proposal by Joshua Drake and ensuing discussion.
Joshua Drake and Tom Lane
1. The -i option should run vacuum analyze only on pgbench tables, not on *all*
tables in the database.
2. The pre-run cleanup step was DELETE FROM HISTORY then VACUUM HISTORY.
This is just a slow version of TRUNCATE HISTORY.
Simon Riggs
Passing passwords on the command line is a security hazard. Instead teach
these programs to prompt for a password when
necessary, just like all our other programs.
I did not bother to invent -W switches for them, since the return on
investment seems so low.
couldn't possibly HAVE_GETOPT. I believe this is the most appropriate
form of the patch submitted 2007-08-07 by Hiroshi Saito, though not
having a Windows build environment I won't know for sure till I see
the buildfarm results.
Support building against installations whose pg_config program does not
appear first in the PATH.
Per gripe from Eddie Stanley and subsequent discussions with Fabien Coelho
and others.
Patch from Greg Smith, along with Japanese doc updates by Tatsuo Ishii.
> This patch changes the way pgbench outputs its latency log files so that
> every transaction gets a timestamp and notes which transaction type was
> executed. It's a one-line change that just dumps some additional
> information that was already sitting in that area of code. I also made a
> couple of documentation corrections and clarifications on some of the more
> confusing features of pgbench.
>
> It's straightforward to parse log files in this format to analyze what
> happened during the test at a higher level than was possible with the
> original format. You can find some rough sample code to convert this
> latency format into CSV files and then into graphs at
> http://www.westnet.com/~gsmith/content/postgresql/pgbench.htm which I'll
> be expanding on once I get all my little patches sent in here.
Also tweak README.pgbench/README.pgbench_jis:
Remove the history of changes made after pgbench was added to the PostgreSQL
contrib module. That information was not only redundant, since it is already
in the CVS log, but also incomplete.
--------------------------------------------------------------------------
The attached is a patch to optimize contrib/pgbench using new 8.3 features.
- Use DROP IF EXISTS to suppress errors during initial loading.
- Use a combination of TRUNCATE and COPY to reduce WAL on creating
the accounts table.
Also, there are some cosmetic changes.
- Change the output of the -v option from "starting full vacuum..."
to "starting vacuum accounts..." to reflect what it actually does.
- Factor duplicated error checks into executeStatement().
There is a big performance win from the "COPY with no WAL" feature.
Thanks for the efforts!
--------------------------------------------------------------------------
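The executeStatement() helper mentioned in the quoted patch might look
roughly like this (a sketch against libpq; details assumed):

    #include <stdio.h>
    #include <stdlib.h>
    #include <libpq-fe.h>

    /* run one SQL command, exiting with the server's message on error */
    static void
    executeStatement(PGconn *con, const char *sql)
    {
        PGresult *res = PQexec(con, sql);

        if (PQresultStatus(res) != PGRES_COMMAND_OK)
        {
            fprintf(stderr, "%s", PQerrorMessage(con));
            exit(1);
        }
        PQclear(res);
    }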
pgbench calls random(), so it should have called srandom() rather than
srand(). On most platforms other than Windows, srandom() is actually
identical to srand(), so the bug only bites Windows users.
Per bug report from Akio Ishida.
Fix handling of the scaling factor in multi-client
scenarios. With multiple clients, only the first client got the right
scaling factor, which gave an illusion of better performance when the
scaling factor was greater than 1.
Check that max_stack_depth is not set to an unsafe value.
This commit also provides configure-time checking for <sys/resource.h>,
and cleans up some perhaps-unportable code associated with use of that
include file and getrlimit().
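The runtime check can be done along these lines (a sketch; error handling
and any slop allowance are assumed):

    #include <sys/resource.h>

    /* return the kernel's stack rlimit in kilobytes, or -1 if it cannot
     * be determined, so the caller can verify that max_stack_depth stays
     * safely below it */
    static long
    get_stack_limit_kb(void)
    {
        struct rlimit rlim;

        if (getrlimit(RLIMIT_STACK, &rlim) != 0 ||
            rlim.rlim_cur == RLIM_INFINITY)
            return -1;
        return (long) (rlim.rlim_cur / 1024);
    }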
- predefined variable "tps"
The value of the variable tps is taken from the scaling factor
specified by the -s option.
- -D option
Variable values can be defined with the -D option.
- \set command now allows arithmetic calculations.
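For example (illustrative; the script line assumes the syntax described
above):

    $ pgbench -D foo=10 -f script.sql bench

where script.sql could contain arithmetic such as

    \set naccounts 100000 * :tps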
Fix cases where a comment line was output as too long, and update typedefs
for the /lib directory. Also fix cases where identifiers were used as variable names
in the backend, but as typedefs in ecpg (favor the backend for
indenting).
Backpatch to 8.1.X.
Remove unportable use of tfind/tsearch in favor of bsearch. Fix up
random number generator to use random() not rand() and to actually honor
its min/max arguments properly. That wasn't so important before, but
with the newly exposed capability to ask for general ranges, it will be.
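A getrand() that uses random() and honors its bounds can be sketched as
follows (MAX_RANDOM_VALUE standing for random()'s ceiling; the exact code
is assumed):

    #include <stdlib.h>

    #define MAX_RANDOM_VALUE 0x7FFFFFFF    /* assumed ceiling of random() */

    /* uniform integer in [min, max] */
    static int
    getrand(int min, int max)
    {
        return min + (int) (((max - min + 1) * (double) random()) /
                            ((double) MAX_RANDOM_VALUE + 1.0));
    }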