BigBench - benchmark groups of opcodes under different versions/conditions
Usage: ./bb [options]

Options:
  --help                 print this screen and exit
  --accuracy=digits      round results to so many digits
  --base=number          print relative summary based on number
  --code=sourcecode      bench code snippet and ignore definitions
  --definitions=file     from where to read benchmark definitions
  --duration=seconds     run each op for at least this time
  --nodetails            don't print details while benchmarking
  --nosummary            don't print summary
  --nointeger            don't round results to integer
  --nounlink             don't unlink temporary files (for debug)
  --path=libpath         path to libraries used by templates
  --reuse=bb_out.dat     name of DB file from where to re-use results
                         Set to empty string to disable this.
  --runs=number          run benchmark more than once (see --take)
  --simulate=srand       simulate results by using srand(srand)
  --skew=factor          scale reported numbers by factor
  --store=bb_out.dat     name of DB file where to store results
                         Set to empty string to disable this.
  --take=run             take lowest|average|highest|last
  --templates=path       path to templates to be used
  --terse                terse summary (unless --nosummary)
  --tight                more tight summary (smaller spacing)
  --version              print version and exit

Options may be abbreviated; their case does not matter.

Examples:
  ./bb --def=math.def --terse --skew=2.1       # better printable?
  ./bb --def=str.def --inc=math --duration=5   # really fine-grained
  ./bb --def=some.def --nosummary              # detailed
  ./bb --def=some.def --terse --base=100       # simulate perlbench
  ./bb --code='"ababba" =~ /a+/;'              # only this
  ./bb --runs=2 --take=last                    # cache, then bench
  ./bb --reuse=bb_out.dat --terse              # display reused results
BigBench lets you define groups of opcodes (source code snippets), which are placed into different templates and then benchmarked. The definitions are stored in a file and can thus be re-used, allowing benchmarks to be re-run with reliable results. It is also possible to run a short benchmark snippet directly from the command line.
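For example, a one-off snippet can be benchmarked without any definitions file (this line also appears in the usage screen above):

  ./bb --code='"ababba" =~ /a+/;'   # bench only this snippet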
The templates can load different modules or Perl versions, or just set different flags. This allows comparisons between versions.
Unlike with use Benchmark;, you can specify on a per-op basis what the empty loop should look like and how the benchmark should be set up.
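For comparison, here is a minimal sketch using the core Benchmark module, which calibrates against one fixed empty loop rather than a per-op one (the snippet names here are made up):

  use Benchmark qw(timethese);

  # Negative count: run each snippet for at least 3 CPU seconds.
  timethese(-3, {
      'regex'  => sub { "ababba" =~ /a+/ },
      'concat' => sub { my $s = 'a' . 'b' },
  });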
The benchmark results are rounded, and a nice summary is printed.
There are many options that control how long to run the benchmarks, how the results are rounded, what the summary should look like, and so on.
The following command-line options exist:
--help

Print this screen and exit. All other options are ignored.
--accuracy=digits

Round results to this many digits. Default is 3.
--base=number

Print a relative summary based on number, not with absolute results. If you do --base=100, you can simulate perlbench.
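For example (taken from the examples above):

  ./bb --def=some.def --terse --base=100   # relative numbers, perlbench-style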
--code=sourcecode

Benchmark the given source code snippet. Any --definitions=file setting will be ignored.
--definitions=file

The file from which to read the benchmark definitions. Defaults to 'bigint.def'.
--duration=seconds

Run each op for at least this time (in seconds). Must be at least one second. The timings are quite accurate with only 2 or 3 seconds, so going higher will not give you much better results. If you just want a quick peek at what the output will look like, use --simulate (see there).
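For instance, to get finer-grained timings with a longer minimum duration per op (the definitions file name here is just a placeholder):

  ./bb --def=some.def --duration=5   # run each op for at least 5 seconds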
--nodetails

Don't print details while running the benchmarks. This is useful when all the results come from the database.
--nosummary

Don't print the summary. When given, --terse and --tight are ignored.
--nointeger

Don't round the results to integers.

--nounlink

Don't unlink temporary files. If something goes wrong, you can have a look at what bb creates.
--path=libpath

Specify the path to the libraries used by the templates. You can put unpacked modules or Perl versions there. This string will be interpolated into ##path## in the template files.
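As a purely hypothetical illustration (the actual template files ship with BigBench and may look different), a template could pull in the libraries like this:

  # hypothetical template fragment: '##path##' is replaced with the
  # value given to --path before the benchmark code is run
  use lib '##path##';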
--reuse=bb_out.dat

Name of the DB file from which to re-use results. Set to an empty string to disable this. The default is bb_out.dat. See also --store.
--runs=number

Run each benchmark this many times. See also --take for which run's result will be taken.
--simulate=srand

Simulate results by seeding the random number generator with srand(srand). This is a quick way to see what the output will look like; it is also used by the test suite.
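For example (the seed value here is arbitrary):

  ./bb --simulate=42 --terse   # fake but reproducible output, no real benchmarking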
--skew=factor

Scale all reported numbers by this factor, for nicer-looking output.
--store=bb_out.dat

Name of the DB file where to store results. Set to an empty string to disable storage entirely. The default is bb_out.dat. The results from previous runs in the database will be merged in, unless you set --reuse to ''.
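A typical two-step workflow stores the results once and later redisplays them without re-running anything:

  ./bb --def=some.def --store=bb_out.dat   # run the benchmarks and store results
  ./bb --reuse=bb_out.dat --terse          # later: display the reused results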
--take=run

Take the run that is 'lowest', 'average', 'highest' or 'last'.
Generally, when doing multiple runs, you want to take the highest run. Any noise introduced by the system while the benchmarks run makes the numbers worse, and since the results are in ops/s, the highest number represents the peak performance, which comes as close as possible to the real performance.
Take the average of all runs when you want to know the average performance that can be expected on a typical system.
The worst performance can be found by taking the lowest run.
The last run is for benchmarks that involve things that get cached (for instance, filesystem-dependent benchmarks). In this case, --take=last --runs=2 will use the first run to warm the caches, discard its result, and use the result from the second run.
See also "--runs".
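From the examples above:

  ./bb --runs=2 --take=last   # first run warms the caches, the second one counts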
--templates=path

Specifies the path to the templates used for the actual benchmarking.
--terse

Create a more terse summary. It will only print out the group averages and suppress the individual op results.
--tight

Create a tighter summary with smaller spacing.
--version

Print the version string and exit.
This program is free software; you may redistribute it and/or modify it under the same terms as Perl itself.
See the BUGS file.
Tels http://bloodgate.com in late 2001. (C) Tels 2001-2004.
To install BigBench, copy and paste the appropriate command into your terminal.

With cpanm:

  cpanm BigBench

With the CPAN shell:

  perl -MCPAN -e shell
  install BigBench

For more information on module installation, please visit the detailed CPAN module installation guide.