Test::Timer - test module to test/assert response times
The documentation describes version 2.05 of Test::Timer
Test subroutines to implement unit tests asserting that your code executes before a specified threshold is reached
Test subroutines to implement unit tests asserting that your code execution exceeds a specified threshold
A test subroutine to implement unit tests asserting that your code executes within a specified time frame
Supports measurements in seconds
Implements a configurable alarm signal handler to make sure that your tests do not execute forever
    use Test::Timer;

    time_ok( sub { doYourStuffButBeQuickAboutIt(); }, 1, 'threshold of one second');

    time_atmost( sub { doYourStuffYouHave10Seconds(); }, 10, 'threshold of 10 seconds');

    time_between( sub { doYourStuffYouHave5To10Seconds(); }, 5, 10,
        'lower threshold of 5 seconds and upper threshold of 10 seconds');

    # Will succeed
    time_nok( sub { sleep(2); }, 1, 'threshold of one second');

    time_atleast( sub { sleep(2); }, 2, 'threshold of two seconds');

    # Will fail after 5 (threshold) + 2 seconds (default alarm)
    time_ok( sub { while(1) { sleep(1); } }, 5, 'threshold of 5 seconds');

    $Test::Timer::alarm = 6; # default is 2 seconds

    # Will fail after 5 (threshold) + 6 seconds (specified alarm)
    time_ok( sub { while(1) { sleep(1); } }, 5, 'threshold of 5 seconds');
Test::Timer implements a set of test primitives to test and assert execution times of bodies of code.
The key features are subroutines to assert or test the following:
that a given piece of code does not exceed a specified time limit
that a given piece of code takes longer than a specified time limit and does not exceed another
Test::Timer exports:
time_ok
time_nok
time_atleast
time_atmost
time_between
Takes the following parameters:
a reference to a block of code (anonymous sub)
a threshold specified as an integer indicating a number of seconds
a string specifying a test name
time_ok( sub { sleep(2); }, 1, 'threshold of one second');
If the execution of the code exceeds the specified threshold, the test fails with the following diagnostic message
Test ran 2 seconds and exceeded specified threshold of 1 seconds
This is the inverted variant of time_ok; it passes if the threshold is exceeded and fails if the benchmark of the code is within the specified timing threshold.
The API is the same as for time_ok.
time_nok( sub { sleep(1); }, 2, 'threshold of two seconds');
If the code executes faster than the specified threshold, the test fails with the following diagnostic message
Test ran 1 seconds and did not exceed specified threshold of 2 seconds
This is syntactic sugar for time_ok
time_atmost( sub { doYourStuffButBeQuickAboutIt(); }, 1, 'threshold of one second');
Test ran N seconds and exceeded specified threshold of 1 seconds
N will be the actual measured execution time of the specified code
time_atleast( sub { doYourStuffAndTakeYourTimeAboutIt(); }, 1, 'threshold of 1 second');
The test succeeds if the code takes at least the number of seconds specified by the timing threshold.
If the code executes faster, the test fails with the following diagnostic message

    Test ran N seconds and did not exceed specified threshold of 1 seconds
Please be aware that Test::Timer breaks the execution with an alarm set to trigger after the specified threshold + 2 seconds (default), so if you expect your execution to run longer, set the alarm accordingly.
$Test::Timer::alarm = $my_alarm_in_seconds;
See also diagnostics.
This method is a more extensive variant of time_atmost and time_ok. You can specify a lower and an upper threshold; the code has to execute within this interval in order for the test to succeed.
time_between( sub { sleep(2); }, 5, 10, 'lower threshold of 5 seconds and upper threshold of 10 seconds');
If the code executes faster than the lower threshold or exceeds the upper threshold, the test fails with the following diagnostic message
Test ran 2 seconds and did not execute within specified interval 5 - 10 seconds
Or
Test ran 12 seconds and did not execute within specified interval 5 - 10 seconds
This method handles the result from _benchmark; it initiates the benchmark by calling _benchmark and, based on whether the result is within the provided interval, returns true (1) or false (0).
This is the method doing the actual benchmark; if a better method is located, this is the place to do the handiwork.
Currently Benchmark is used. An alternative could be Devel::Timer, but I do not know this module very well and Benchmark is core, so this is used for now.
The method takes two parameters:
a code block via a code reference
a threshold (the upper threshold, since this is added to the default alarm)
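The measurement itself can be sketched with the core Benchmark module (a minimal illustration of the approach; the variable names are illustrative and this is not Test::Timer's actual internals):

```perl
use strict;
use warnings;
use Benchmark;

# Take a timestamp before and after the code under measurement,
# then compute the difference, as the core Benchmark module allows.
my $t0 = Benchmark->new;
sleep(1);                                  # the code under measurement
my $t1 = Benchmark->new;

my $timestring = timestr( timediff( $t1, $t0 ) );
print "$timestring\n";                     # e.g. " 1 wallclock secs ( 0.00 usr + ...)"
```

The timestring produced here is the same format _timestring2time (described below) has to parse.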
This method extracts the seconds from Benchmark's timestring and returns it as an integer.
It takes the timestring from _benchmark (Benchmark) and returns the seconds part.
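As an illustration, the seconds can be pulled out of a Benchmark timestring with a simple regular expression (a sketch assuming the standard timestr format; not the module's actual code):

```perl
use strict;
use warnings;

# A typical timestring produced by Benchmark's timestr():
my $timestring = ' 2 wallclock secs ( 0.00 usr +  0.00 sys =  0.00 CPU)';

# The leading integer is the wallclock seconds part.
my ($seconds) = $timestring =~ m/(\d+) wallclock secs/;

print "$seconds\n";   # prints 2
```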
Test::Builder's required import is used to do some import hocus-pocus for the test methods exported from Test::Timer. Please refer to the documentation of Test::Builder.
All tests either fail or succeed, but a few exceptions are implemented, these are listed below.
Test did not exceed specified threshold: this message is the diagnostic for time_atleast and time_nok tests which do not exceed their specified threshold.
Test exceeded specified threshold, this message is a diagnostic for time_atmost and time_ok, if the specified threshold is surpassed.
This is the key point of the module: either your code is too slow and you should address this, or your threshold is too low, in which case you can set it a bit higher and run the test again.
Test did not execute within specified interval, this is the diagnostic from time_between, it is the diagnosis if the execution of the code is not between the specified lower and upper thresholds.
Insufficient parameters: this is the message if a specified test is not provided with a sufficient number of parameters; consult this documentation and correct accordingly.
Execution exceeded threshold and timed out: this exception is thrown if the execution of the tested code exceeds even the alarm, which defaults to 2 seconds but can be set by the user, or is equal to the upper threshold + 2 seconds.
The exception results in a diagnostic for the failing test. This is a failsafe to avoid code running forever. If you get this diagnostic, either your code is too slow and you should address this, or it might be erroneous, e.g. stuck in an endless loop. If neither is the case, adjust the alarm setting to suit your situation.
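The failsafe described above can be sketched with a plain alarm and a localized $SIG{ALRM} handler (a simplified, Unix-oriented illustration of the technique, not Test::Timer's exact implementation):

```perl
use strict;
use warnings;

my $timed_out = 0;
eval {
    local $SIG{ALRM} = sub { die "timeout\n" };   # turn SIGALRM into an exception
    alarm(1);                                     # failsafe: fire after 1 second
    1 while 1;                                    # runaway code that never returns
    alarm(0);                                     # cancel the alarm (not reached here)
};
$timed_out = 1 if $@ && $@ eq "timeout\n";

print $timed_out ? "timed out\n" : "completed\n";
```

Test::Timer turns such a timeout into a failing test with a diagnostic rather than letting the test suite hang.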
This module requires no special configuration or environment.
Tests are sensitive and can be configured using environment variables and configuration files; please see the section on test and quality.
Carp
Benchmark
Error
Test::Builder
Test::Builder::Module
This module holds no known incompatibilities.
This module holds no known bugs.
The current implementation is limited to a resolution of seconds; a higher resolution would be desirable.
On occasion, failing tests have been observed with CPAN testers. This seems to be related to the test suite not taking into account that some smoke testers do not prioritize resources for the test run and that additional processes/jobs are running. The test suite has been adjusted to accommodate this, but these issues might reoccur.
Coverage report for the release described in this documentation (see VERSION).
    ---------------------------- ------ ------ ------ ------ ------ ------ ------
    File                           stmt   bran   cond    sub    pod   time  total
    ---------------------------- ------ ------ ------ ------ ------ ------ ------
    blib/lib/Test/Timer.pm        100.0   95.0   66.6  100.0  100.0   99.9   98.0
    ...Timer/TimeoutException.pm  100.0    n/a    n/a  100.0  100.0    0.0  100.0
    Total                         100.0   95.0   66.6  100.0  100.0  100.0   98.4
    ---------------------------- ------ ------ ------ ------ ------ ------ ------
The Test::Perl::Critic test runs with severity 5 (gentle) for now, please refer to t/critic.t and t/perlcriticrc.
Set TEST_POD to enable Test::Pod test in t/pod.t and Test::Pod::Coverage test in t/pod-coverage.t.
Set TEST_CRITIC to enable Test::Perl::Critic test in t/critic.t
This distribution uses Travis for continuous integration testing; the Travis reports are publicly available.
Test::Benchmark
Please report any bugs or feature requests using Github
Github Issues
You can find documentation for this module with the perldoc command.
perldoc
perldoc Test::Timer
You can also look for information at:
Homepage
MetaCPAN
AnnoCPAN: Annotated CPAN documentation
CPAN Ratings
Github Repository, please see the guidelines for contributing.
Jonas B. Nielsen (jonasbn) <jonasbn at cpan.org>
<jonasbn at cpan.org>
Gregor Herrmann, PR #16 fixes to spelling mistakes
Nigel Horne, issue #15 suggestion for better assertion in time_atleast
Nigel Horne, issue #10/#12 suggestion for improvement to diagnostics
p-alik, PR #4 eliminating warnings during test
Kent Fredric, PR #7 addressing file permissions
Nick Morrott, PR #5 corrections to POD
Bartosz Jakubski, reporting issue #3
Gabor Szabo (GZABO), suggestion for specification of interval thresholds even though this was obsoleted by the later introduced time_between
Paul Leonerd Evans (PEVANS), suggestions for time_atleast and time_atmost and the handling of $SIG{ALRM}. Also bug report for addressing issue with Debian packaging resulting in release 0.10
brian d foy (BDFOY), for patch to _run_test
Test::Timer and related modules are (C) by Jonas B. Nielsen, (jonasbn) 2007-2017
Test::Timer and related modules are released under the Artistic License 2.0
Used distributions are under copyright of their respective authors and designated licenses
Image used on website is under copyright by Veri Ivanova
To install Test::Timer, copy and paste the appropriate command into your terminal.
cpanm
cpanm Test::Timer
CPAN shell
    perl -MCPAN -e shell
    install Test::Timer
For more information on module installation, please visit the detailed CPAN module installation guide.