

Benchmark::Forking - Run benchmarks in separate processes


  use Benchmark::Forking qw( timethis timethese cmpthese );

  timethis ($count, "code");

  timethese($count, {
      'Name1' => sub { ...code1... },
      'Name2' => sub { ...code2... },
  });

  cmpthese($count, {
      'Name1' => sub { ...code1... },
      'Name2' => sub { ...code2... },
  });

  Benchmark::Forking->enabled(0);  # Stop using forking feature
  Benchmark::Forking->enabled(1);  # Begin using forking again


The Benchmark::Forking module changes the behavior of the standard Benchmark module, running each piece of code to be timed in a separate forked process. Because each child exits after running its timing loop, the computations it performs can't propagate back to affect subsequent test cases.

This can make benchmark comparisons more accurate, because the separate test cases are mostly isolated from side-effects caused by the others. Benchmark scripts typically don't depend on those side-effects, so in most cases you can simply use or require this module at the top of your existing code without having to change anything else. (A few key exceptions are noted in "BUGS".)


The standard Benchmark module can sometimes report inaccurate or misleading results, in part because it doesn't isolate its test cases from one another. This means that the order that cases are run in can influence the results, because side effects, either obvious or obscure, can accumulate and affect later tests.

Data in global variables is an obvious source of side effects; in the example below, the grep takes longer as more items are pushed onto the array, so the test functions that run later are reported by Benchmark as slower, despite their code being identical:

  cmpthese( 1000, {
    "test_1" => sub { push @global, scalar grep 1, @global },
    "test_2" => sub { push @global, scalar grep 1, @global },
    "test_3" => sub { push @global, scalar grep 1, @global },
  } );

More cryptic sources of side effects can include cache priming, idiosyncrasies of the underlying Perl implementation, or the state of the operating system and environment. For example, if the code to be benchmarked requires a lot of in-process RAM, earlier tests may be slowed down by having to allocate the memory for the first time, while later tests may be slowed down by having to pick through the heap looking for free space. These effects are difficult to predict and can be laborious to identify and compensate for.

This module provides a solution to most aspects of this problem. Once you use Benchmark::Forking, the example benchmark above will report the correct conclusion that the three identical tests run at approximately the same speed.


Benchmark::Forking replaces the private runloop() function in the Benchmark module with a wrapper that forks before calling the original function. Forking is accomplished by the special open(F,"-|") call described in "open" in perlfunc, and the results are passed back as text from the child to the parent through an interprocess filehandle.
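The fork-and-read pattern described above can be sketched as follows. This is a simplified illustration, not the module's actual code, and the arithmetic inside the child is just a stand-in for a timing loop:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# open(FH, "-|") forks; in the child, STDOUT is connected to the
# parent's filehandle, so the child can pass its result back as text.
my $pid = open(my $fh, "-|");
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {
    # Child: do the work, print the result, and exit so that no
    # side effects survive into later test cases.
    my $sum = 0;
    $sum += $_ for 1 .. 100;
    print "$sum\n";
    exit 0;
}

# Parent: read the child's result back through the pipe.
chomp(my $result = <$fh>);
close $fh;
print "child computed: $result\n";
```

Because the child process exits after printing, any memory it allocated or globals it modified vanish with it; only the textual result crosses back to the parent.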

When comparing several test cases with the timethese or cmpthese functions, the main process will fork off a child and wait for it to complete its timing of all of the repetitions of one piece of code, then fork off a new child to handle the next case and wait again.


You can use this module in the same way you would use the standard Benchmark module.
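For example, a minimal benchmark script needs only the changed use line; everything else is ordinary Benchmark code. (The fallback to the core Benchmark module below is only there so this sketch still runs where Benchmark::Forking is not installed.)

```perl
use strict;
use warnings;

# Drop-in usage: the only change from an ordinary Benchmark script
# is loading Benchmark::Forking instead of Benchmark. The eval
# fallback is for illustration only.
BEGIN {
    eval { require Benchmark::Forking; Benchmark::Forking->import('cmpthese'); 1 }
        or do { require Benchmark; Benchmark->import('cmpthese') };
}

my @global;
cmpthese( 200, {
    "test_1" => sub { push @global, scalar grep 1, @global },
    "test_2" => sub { push @global, scalar grep 1, @global },
} );
```

With forking enabled, the pushes onto @global happen in child processes and never accumulate in the parent, so neither test case is penalized by the other's leftovers.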


This module re-exports the same functions provided by Benchmark: countit, timeit, timethis, timethese, and cmpthese.

For a description of these functions, see Benchmark.


The benchmark forking functionality is automatically enabled once you load this module, but you can also disable and re-enable it at run-time using the following class methods.


Called without arguments, the enabled class method reports the current status:

    my $boolean = Benchmark::Forking->enabled;

If passed an additional argument, it enables or disables forking:

    Benchmark::Forking->enabled( 1 );
    $t = timeit(10, '$Global = 5 * $Global');
    Benchmark::Forking->enabled( 0 );

The enable class method turns benchmark forking on.


The disable class method turns benchmark forking off.



Because this module depends on Perl's implementation of fork, it will not work as expected on platforms that lack this feature, notably Microsoft Windows.

Some external resources may not work when opened in the parent process and then accessed from multiple forked instances. If using this module causes your file, network, or database code to fail with an unusual error, this issue may be the culprit.

Some Benchmark scripts either accidentally or deliberately rely on the side-effects that this module avoids. If using this module causes your Perl code to behave differently than expected, you may be relying on this behavior; you can either revise your code to remove the dependency or continue to use the non-forking Benchmark.

If the standard Benchmark module were more fully object-oriented, this functionality could be added via subclassing rather than by fiddling with Benchmark's internals, but the current implementation doesn't seem to allow for this.


For documentation of the timing functions, see Benchmark.


This is version 1.01 of Benchmark::Forking.


  2010-02-01: Released version 1.01 to CPAN.
  2010-02-01: Adjusted META.yml to include license. 
  2010-02-01: Released version 1.00 to CPAN.
  2010-02-01: Updated documentation, rebuilt meta.yml, merged in the ReadMe.pod.
  2004-09-05: Released version 0.99 to CPAN.
  2004-09-05: Expanded documentation and packaged for distribution. 
  2004-09-03: First version written.


This module should work with any version of Perl 5, without platform dependencies or additional modules beyond the core distribution.

You should be able to install this module using the CPAN shell interface:

  perl -MCPAN -e 'install Benchmark::Forking'

Alternately, you may retrieve this package from CPAN (http://search.cpan.org/~evo/) and follow the normal procedure to unpack and install it, using the commands shown below or their local equivalents on your system:

  tar xzf Benchmark-Forking-*.tar.gz
  cd Benchmark-Forking-*
  perl Makefile.PL
  make test && sudo make install


Once installed, this module's documentation is available as a manual page via perldoc Benchmark::Forking or on CPAN sites such as http://search.cpan.org/dist/Benchmark-Forking.

If you have questions or feedback about this module, please feel free to contact the author at the address shown below. Although there is no formal support program, I do attempt to answer email promptly. Bug reports that contain a failing test case are greatly appreciated, and suggested patches will be promptly considered for inclusion in future releases.

To report bugs via the CPAN web tracking system, go to http://rt.cpan.org/NoAuth/Bugs.html?Dist=Benchmark-Forking or send mail to Dist=Benchmark-Forking#rt.cpan.org, replacing # with @.

If you've found this module useful or have feedback about your experience with it, consider sharing your opinion with other Perl users by posting your comment to CPAN's ratings system (http://cpanratings.perl.org/rate/?distribution=Benchmark-Forking).

For more general discussion, you may wish to post a message on PerlMonks (http://perlmonks.org/?node=Seekers%20of%20Perl%20Wisdom) or on the comp.lang.perl.misc newsgroup (http://groups.google.com/group/comp.lang.perl.misc/topics).


Developed by Matthew Simon Cavalletto. You may contact the author directly at evo@cpan.org or simonm@cavalletto.org.

Inspired by a discussion with Jim Keenan in the Perl Monks community.

My thanks also to other members of the Perl Monks community for feedback on this module, including graff, tachyon, Aristotle, pbeckingham, and others. http://perlmonks.org/?node_id=388481


Copyright 2010, 2004 Matthew Simon Cavalletto.

You may use, modify, and distribute this software under the same terms as Perl.

See http://dev.perl.org/licenses/ for more information.