Test2::Aggregate - Aggregate tests for increased speed
    use Test2::Aggregate;

    Test2::Aggregate::run_tests(
        dirs => \@test_dirs
    );

    done_testing();
Version 0.11_1
Aggregates all tests specified with dirs (which can even be individual tests) to avoid forking, module reloading etc., which can help performance (dramatically if you have numerous small tests) and also facilitates group profiling. Test files are expected to end in .t and are run as subtests of a single aggregate test.
Similar in concept to Test::Aggregate, but simpler in execution, which makes it more likely to work with your test suite (especially if you use modern tools like Test2). It does not even try to package each test by default, which may be good or bad (e.g. for redefines), depending on your requirements.
Generally, the way to use this module is to try to aggregate sets of quick tests (e.g. unit tests). Try to iteratively add tests to the aggregator, dropping those that do not work. Trying an entire suite in one go is a bad idea, as a single incompatible test can break the run, failing all subsequent tests. The module can usually work with Test::More suites, but will have more issues than when you use the more modern Test2::Suite (see notes).
run_tests
    my $stats = Test2::Aggregate::run_tests(
        dirs          => \@dirs,             # optional if lists defined
        lists         => \@lists,            # optional if dirs defined
        excludes      => \@exclude_regexes,  # optional
        root          => '/testroot/',       # optional
        load_modules  => \@modules,          # optional
        package       => 0,                  # optional
        shuffle       => 0,                  # optional
        sort          => 0,                  # optional
        reverse       => 0,                  # optional
        unique        => 1,                  # optional
        repeat        => 1,                  # optional, requires Test2::Plugin::BailOnFail for < 0
        slow          => 0,                  # optional
        override      => \%override,         # optional, requires Sub::Override
        stats_output  => $stats_output_path, # optional
        extend_stats  => 0,                  # optional
        test_warnings => 0,                  # optional
        dry_run       => 0                   # optional
    );
Runs the aggregate tests. Returns a hashref with stats like this:
    $stats = {
        'test.t' => {
            'test_no'   => 1,                 # numbering starts at 1
            'pass_perc' => 100,               # for single runs pass/fail is 100/0
            'timestamp' => '20190705T145043', # start of test
            'time'      => '0.1732',          # seconds - only with stats_output
            'warnings'  => $STDERR            # only with test_warnings on non-empty STDERR
        }
    };
The parameters to pass:
dirs (either this or lists is required)
An arrayref containing directories, which will be searched recursively, or even individual tests. Unless shuffle or reverse is true, the directories will be processed and the tests run in the order specified. Test files are expected to end in .t.
lists (either this or dirs is required)
Arrayref of flat files, each line of which will be pushed to dirs (so they have a lower precedence - note that root still applies, so don't include it in the paths inside the list files). If a path does not exist, it will be silently ignored; however, the "official" way to skip a line without checking it as a path is to start it with a # (comment).
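For example, given a hypothetical list file xt/aggregate.list containing one test path per line (with # comments for skipped entries), the call might look like:

```perl
# xt/aggregate.list (hypothetical) might contain:
#   t/unit/math.t
#   t/unit/strings.t
#   # t/unit/flaky.t   <- skipped via comment
Test2::Aggregate::run_tests(
    root  => './',
    lists => ['xt/aggregate.list'],
);
```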
excludes (optional)
Arrayref of strings with regex patterns to filter out tests that you want excluded.
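The patterns are applied as regexes against the test paths. A rough pure-Perl sketch of the filtering effect (hypothetical paths; not the module's actual implementation):

```perl
my @tests    = ('t/unit/math.t', 't/integration/db.t', 't/unit/db_slow.t');
my @excludes = ('integration', 'slow');

# Keep only tests whose path matches none of the exclude patterns
my @kept = grep {
    my $test = $_;
    !grep { $test =~ /$_/ } @excludes;
} @tests;

print "@kept\n";  # t/unit/math.t
```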
root (optional)
If defined, must be a valid root directory that will prefix all dirs and lists items. You may want to set it to './' if you want dirs relative to the current directory and the dot is not in your @INC.
load_modules (optional)
Arrayref with modules to be loaded (with eval "use ...") at the start of the test. Useful for testing modules with special namespace requirements.
package (optional)
Will package each test in its own namespace. While it will help avoid things like redefine warnings, it may break some tests when aggregating them, so it is disabled by default.
override (optional)
Pass Sub::Override key/values as a hashref.
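For instance, to stub out a sub across all aggregated tests (a sketch with hypothetical package and sub names, assuming Sub::Override is installed):

```perl
my %override = (
    # hypothetical: stub a network call for the whole aggregate run
    'My::Module::remote_call' => sub { return 'stubbed' },
);

Test2::Aggregate::run_tests(
    dirs     => ['t/unit'],
    override => \%override,
);
```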
repeat (optional)
Number of times to repeat the test(s) (the default is 1, for a single run). If repeat is negative, the tests will repeat until they fail (or until they produce a warning, if test_warnings is also set).
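For example, a soak run that repeats until something breaks (a sketch with a hypothetical directory; repeat => -1 loads Test2::Plugin::BailOnFail):

```perl
Test2::Aggregate::run_tests(
    dirs          => ['t/unit'], # hypothetical directory
    repeat        => -1,         # repeat until a test fails
    test_warnings => 1,          # also stop on a warning
);
```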
unique (optional)
From v0.11, duplicate tests are removed from the running list by default, as duplicates could mess up the stats output. You can still set this to false to allow duplicate tests in the list.
shuffle (optional)
Random order of tests if set to true. Will override sort.
sort (optional)
Sort tests alphabetically if set to true. Provides a way to fix the test order across systems.
reverse (optional)
Reverse order of tests if set to true.
slow (optional)
When true, tests will be skipped if the environment variable SKIP_SLOW is set.
test_warnings (optional)
Tests for warnings over all the tests if set to true: a final test is added which expects zero as the number of tests that had STDERR output. The STDERR output of each test is printed at the end of the test run (and included in the test run result hash), so if you want to see warnings the moment they are generated, leave this option disabled.
dry_run (optional)
Instead of running the tests, will do ok($testname) for each one. Test order, stats files etc. will still be produced normally.
stats_output (optional)
stats_output specifies a path where a file will be created to print out the running time per test (averaged over multiple iterations) and the passing percentage. Output is sorted from slowest test to fastest. On negative repeat, the stats of each successful run are written separately instead of averages. The name of the file is caller_script-YYYYMMDDTHHmmss.txt. If '-' is passed instead of a path, the output is written to STDOUT. The timing stats are useful because the test harness doesn't normally measure time per subtest (remember, your individual aggregated tests become subtests).
extend_stats (optional)
This option keeps the default output of stats_output fixed, while still allowing additions in future versions that will only be written when extend_stats is enabled. Additions with extend_stats as of the current version:
- starting date/time in ISO_8601.
Not all tests can be modified to run under the aggregator; it is not intended for tests that require an isolated environment, do overrides, etc. For other tests, which can potentially run under the aggregator, sometimes very simple changes may be needed, like giving unique names to subs (or not warning on redefines, or trying the package option), replacing things that complain, restoring the environment at the end of the test, etc.
Unit tests are usually great for aggregating. You could use the hash that run_tests returns in a script that tries to add more tests automatically to an aggregate list to see which added tests passed and keep them, dropping failures.
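A sketch of such a script (hypothetical paths, assuming the stats hash is keyed by the test path as shown above), growing an aggregate list by keeping only candidates that still pass when added:

```perl
use Test2::Aggregate;

my @candidates = glob('t/unit/*.t');  # hypothetical candidate tests
my @keep;

for my $candidate (@candidates) {
    my $stats = Test2::Aggregate::run_tests(
        dirs => [@keep, $candidate],
    );
    # Keep the candidate only if it passed in the aggregate run
    push @keep, $candidate
        if $stats->{$candidate} && $stats->{$candidate}{pass_perc} == 100;
}
```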
Trying to aggregate too many tests into a single one can be counter-productive, as you would ideally want to parallelize your test suite (a super-long aggregated test that keeps running after the rest are done will slow down the suite). In general, more tests will run aggregated if they are grouped so that tests which can't be aggregated together are in different groups.
In general, you can call Test2::Aggregate::run_tests multiple times in a test, and even load run_tests with tests that already contain another run_tests. The only real issue with multiple calls is that if you use repeat < 0 on one call, Test2::Plugin::BailOnFail is loaded, so any subsequent failure, on any following run_tests call, will trigger a bail.
If you haven't switched to the Test2::Suite you are generally advised to do so for a number of reasons, compatibility with this module being only a very minor one. If you are stuck with a Test::More suite, Test2::Aggregate can still probably help you more than the similarly-named Test::Aggregate... modules.
Although the module tries to load Test2 in a way that does not interfere, it is generally better to do use Test::More; in your aggregating test (i.e. alongside use Test2::Aggregate).
One more caveat is that Test2::Aggregate::run_tests uses subtest from Test2::Suite, which on rare occasions can return a true value when a Test::More subtest fails by running no tests, so a failed test could show up as having a 100 pass_perc in the Test2::Aggregate::run_tests output.
The environment variable AGGREGATE_TESTS will be set while the tests are running, for your convenience. Example uses: making a test that you know cannot run under the aggregator croak if it is run under it, or loading a module that can only be loaded once in the aggregating test file and then using something like this in the individual test files:
eval 'use My::Module' unless $ENV{AGGREGATE_TESTS};
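Conversely, a test that you know cannot run aggregated can guard itself with a croak, as suggested above (a sketch):

```perl
use Carp;

# Refuse to run if aggregated (AGGREGATE_TESTS is set by Test2::Aggregate)
croak 'This test cannot run under the aggregator'
    if $ENV{AGGREGATE_TESTS};
```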
Dimitrios Kechagias, <dkechag at cpan.org>
Please report any bugs or feature requests to bug-test2-aggregate at rt.cpan.org, or through the web interface at https://rt.cpan.org/NoAuth/ReportBug.html?Queue=Test2-Aggregate. I will be notified, and then you'll automatically be notified of progress on your bug as I make changes.
https://github.com/SpareRoom/Test2-Aggregate
Copyright (C) 2019, SpareRoom.com
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
To install Test2::Aggregate, copy and paste the appropriate command into your terminal.

cpanm:

    cpanm Test2::Aggregate

CPAN shell:

    perl -MCPAN -e shell
    install Test2::Aggregate

For more information on module installation, please visit the detailed CPAN module installation guide.