
NAME

Running and Developing Tests with the Apache::Test Framework

Description

The title is self-explanatory :)

The Apache::Test framework was designed for creating test suites for products running on the Apache httpd web server (not necessarily mod_perl). Originally designed for the mod_perl Apache module, it was extended to work with any Apache module.

This chapter discusses the Apache::Test framework, and in particular explains how to:

1. run existing tests
2. set up a testing environment for a new project
3. develop new tests

Basics of Perl Modules Testing

The tests themselves are written in Perl. The framework provides extensive functionality which makes writing tests a simple and therefore enjoyable process.

If you have ever written or looked at the tests most Perl modules come with, you will find that Apache::Test uses the same concept. The script t/TEST runs all the files ending with .t that it finds in the t/ directory. When executed, a typical test prints the following:

  1..3     # going to run 3 tests
  ok 1     # the first  test has passed
  ok 2     # the second test has passed
  not ok 3 # the third  test has failed

Every ok or not ok is followed by a number which tells which sub-test has succeeded or failed.

t/TEST uses the Test::Harness module, which intercepts the STDOUT stream, parses it and at the end of the run prints the results: how many tests and sub-tests were run, how many succeeded, were skipped or failed.

Some tests may be skipped by printing:

  1..0 # all tests in this file are going to be skipped.

Usually a test is skipped when some feature is optional and/or its prerequisites are not installed on the system, but this is not critical for the usefulness of the test. Once you discover that you cannot proceed with the tests, and passing them is not a must, you simply skip them.
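
For example, here is a minimal sketch of such a skip; the module name is purely hypothetical:

  # skip the whole test if an optional prerequisite is not installed
  eval { require Some::Optional::Module };
  if ($@) {
      print "1..0 # skipped: Some::Optional::Module is not installed\n";
      exit;
  }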

By default print() statements in the test script are filtered out by Test::Harness. If you want the test to print what it does (e.g. when you decide to debug some test) use the -verbose option. So for example if your test does this:

  print "# testing : feature foo\n";
  print "# expected: $expected\n";
  print "# received: $received\n";
  ok $expected eq $received;

in the normal mode, you won't see any of these prints. But if you run the test with t/TEST -verbose, you will see something like this:

  # testing : feature foo
  # expected: 2
  # received: 2
  ok 2

When you develop the test you should always put the debug statements there, and once the test works for you, do not comment out or delete these debug statements. This is because if some user reports a failure in some test, you can ask them to run the failing test in the verbose mode and send you back the report. It'll be much easier to understand the problem if you get these debug printouts from the user.

The section Writing Tests discusses several helper functions which make writing tests easier.

For more details about the Test::Harness module please refer to its manpage. Also see the Test manpage about Perl's test suite.

Prerequisites

In order to use Apache::Test it has to be installed first.

Install Apache::Test using the familiar procedure:

  % cd Apache-Test
  % perl Makefile.PL
  % make && make test && make install

If you install mod_perl 2.0, you get Apache::Test installed as well.

Running Tests

It's much easier to imitate existing examples than to create things from scratch. It's much easier to develop tests when you have some existing system that you can test, see how it works and build your own testing environment in a similar fashion. Therefore let's first look at how the existing test environments work.

You can look at the modperl-2.0 or httpd-test (perl-framework) testing environments, which both use Apache::Test for their test suites.

Testing Options

Run:

  % t/TEST -help

to get the list of options you can use during testing. Most options are covered further in this document.

Basic Testing

Running tests is just like for any other CPAN Perl module; first we generate the Makefile and build everything with make:

  % perl Makefile.PL [options]
  % make

Now we can do the testing. You can run the tests in two ways. The first one is the usual:

  % make test

but it adds quite an overhead, since it has to check that everything is up to date (the usual make source change control). Therefore you only need to run it once after make; for re-running the tests it's faster to run them directly via:

  % t/TEST

When make test or t/TEST is run, all tests found in the t directory (files ending with .t are recognized as tests) will be run.

Individual Testing

To run a single test, simply specify it at the command line. For example to run the test file t/protocol/echo.t, execute:

  % t/TEST protocol/echo

Notice that you don't have to add the t/ prefix and the .t extension to the test filenames if you specify them explicitly, but you can have these as well. Therefore the following are all valid commands:

  % t/TEST   protocol/echo.t
  % t/TEST t/protocol/echo
  % t/TEST t/protocol/echo.t

The server will be stopped if it was already running and a new one will be started before running the t/protocol/echo.t test. At the end of the test the server will be shut down.

When you run specific tests you may want to run them in the verbose mode, and depending on how the test was written, you may get more debug information under this mode. This mode is turned on with the -verbose option:

  % t/TEST -verbose protocol/echo

You can run groups of tests at once. This command:

  % ./t/TEST modules protocol/echo

will run all the tests in t/modules/ directory, followed by t/protocol/echo.t test.

Repetitive Testing

By default when you run tests without the -run-tests option, the server will be started before the testing and stopped at the end. If during a debugging process you need to re-run tests without restarting the server each time, you can start the server once:

  % t/TEST -start-httpd

and then run the test(s) with -run-tests option many times:

  % t/TEST -run-tests

without waiting for the server to restart.

When you are done with tests, stop the server with:

  % t/TEST -stop-httpd

When the server is started you can modify .t files and rerun the tests without restarting it. However if you modify response handlers, you must restart the server for the changes to take effect. If Apache::Reload is used and configured to automatically reload the handlers when they change, you don't have to restart the server. For example to automatically reload all TestDirective::* modules when they change on disk, add to t/conf/extra.conf.in:

  PerlModule Apache::Reload
  PerlInitHandler Apache::Reload
  PerlSetVar ReloadAll Off
  PerlSetVar ReloadModules "TestDirective::*"

and restart the server.

The -start-httpd option always stops the server first if any is running.

Normally when t/TEST is run without specifying which tests to run, the tests are sorted alphabetically. If tests are explicitly passed as arguments to t/TEST, they will be run in the specified order.

Parallel Testing

Sometimes you need to run more than one instance of the Apache::Test framework at the same time. In this case you have to use a different port for each instance. You can specify explicitly which port to use with the -port configuration option. For example to run the server on port 34343:

  % t/TEST -start-httpd -port=34343

or by setting the environment variable APACHE_PORT to the desired value before starting the server.
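
For example, under bash(1):

  % APACHE_PORT=34343 t/TEST -start-httpd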

Specifying the port explicitly may not be the most convenient option if you happen to run many instances of the Apache::Test framework. The -port=select option comes to help: it automatically picks the next available port. For example if you run:

  % t/TEST -start-httpd -port=select

and there is already one server from a different test suite which uses the default port 8529, the new server will try to use a higher port.

There is one problem that remains to be resolved though. It's possible that two or more servers running with -port=select will still decide to use the same port, because when the server is configured it only tests whether the port is available but doesn't call bind() immediately. Therefore there is a race condition here, which needs to be resolved. Currently the workaround is to start the instances of the Apache::Test framework with a slight delay between each other. Depending on the speed of your machine, 4-5 seconds can be a good choice; that's approximately the time it takes to configure and start the server on a fairly slow machine.

Verbose Mode

In case something goes wrong you should run the tests in the verbose mode:

  % t/TEST -verbose

In this case the test may print useful information, like what values it expects and what values it receives, provided that the test is written to report these. In the silent mode (without -verbose) these printouts are filtered out by Test::Harness. When running in the verbose mode it's usually a good idea to run only the problematic tests, to minimize the size of the generated output.

When debugging problems it helps to keep the error_log file open in another console, and watch the debug output in real time via tail(1):

  % tail -f t/logs/error_log

Of course this file gets created only when the server starts, so you cannot run tail(1) on it before the server starts. Every time t/TEST -clean is run, t/logs/error_log gets deleted, therefore you have to run the tail(1) command again, when the server is started.

Colored Trace Mode

If your terminal supports colored text you may want to set the environment variable APACHE_TEST_COLOR to 1 to enable colored tracing when running in the non-batch mode, which makes it easier to tell the reported errors and warnings from the rest of the notifications.
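
For example, under bash(1):

  % APACHE_TEST_COLOR=1 t/TEST -verbose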

Controlling the Apache::Test's Signal to Noise Ratio

In addition to controlling the verbosity of the test scripts, you can control the amount of information printed by the Apache::Test framework itself. Similar to Apache's log levels, Apache::Test uses these levels for controlling its signal to noise ratio:

  emerg alert crit error warning notice info debug

where emerg is for the most important messages and debug for the least important ones.

Currently the default level is info, therefore any messages which fall into the info category and above (notice, warning, etc.) are printed. If for example you want to see the debug messages, you can change the default level using the -trace option:

  % t/TEST -trace=debug ...

or if you want to get only warning messages and above, use:

  % t/TEST -trace=warning ...

Stress Testing

The Problem

When we test a stateless machine (i.e. all tests are independent), running all tests once ensures that all the tested features work properly. However when a state machine is tested (i.e. where a run of one test may influence another test), it's not enough to run all the tests once to know that the tested features actually work. It's quite possible that if the same tests are run in a different order and/or repeated a few times, some tests may fail. This usually happens when some tests don't restore the system under test to its pristine state at the end of the run, which may influence other tests that rely on starting in a pristine state, when in fact that's no longer true. It's even possible that a single test may fail when run two or three times in a sequence.

The Solution

To reduce the possibility of such dependency errors, it's important to run randomized testing, repeated many times with many different pseudo-random engine initialization seeds. Of course if no failures get spotted, that doesn't mean that there are no test inter-dependencies, unless all possible combinations were run (the exhaustive approach). Therefore it's possible that some problems may still be seen in production, but this testing greatly minimizes such a possibility.

The Apache::Test framework provides a few options useful for stress testing.

-times

You can run the tests N times by using the -times option. For example to run all the tests 3 times specify:

  % t/TEST -times=3

-order

It's possible that certain tests don't clean up after themselves and modify the state of the server, which may influence other tests. But since normally all the tests are run in the same order, the potential problem may not be discovered until the code is used in production, where real-world usage hits the problem. Therefore in order to detect as many problems as possible during the testing process, it may be useful to run tests in different orders.

This is of course mostly useful in conjunction with the -times=N option.

Assuming that we have tests a, b and c:

  • -order=rotate

    rotate the tests: a, b, c, a, b, c

  • -order=repeat

    repeat the tests: a, a, b, b, c, c

  • -order=random

    run in the random order, e.g.: a, c, c, b, a, b

    In this mode the seed picked by srand() is printed to STDOUT, so it then can be used to rerun the tests in exactly the same order (remember to log the output).

  • -order=SEED

used to initialize the pseudo-random algorithm, which makes it possible to reproduce the same sequence of tests. For example if we run:

      % t/TEST -order=random -times=5

    and the seed 234559 is used, we can repeat the same order of tests, by running:

      % t/TEST -order=234559 -times=5

    Alternatively, the environment variable APACHE_TEST_SEED can be set to the value of a seed when -order=random is used. e.g. under bash(1):

      % APACHE_TEST_SEED=234559 t/TEST -order=random -times=5

    or with any shell program if you have the env(1) utility:

      $ env APACHE_TEST_SEED=234559 t/TEST -order=random -times=5

Resolving Sequence Problems

When this kind of testing is used and a failure is detected there are two problems:

  1. The first is being able to reproduce the problem, so that once we think we have fixed it, we can verify the fix. This one is easy: just remember the sequence of tests run up to and including the failed test, and rerun the same sequence once again after the problem has been fixed.

  2. The second is being able to understand the cause of the problem. If during the random run the failure happened after running 400 tests, how can we possibly know which of the previously run tests caused the failure of test 401? Chances are that most of the tests were clean and don't have inter-dependency problems. Therefore it'd be very helpful if we could reduce the long sequence to a minimum, preferably 1 or 2 tests. That's when we can try to understand the cause of the detected problem.

Apache::TestSmoke Solution

Apache::TestSmoke attempts to solve both problems. When it's run, at the end of each iteration it reports the minimal sequence of tests causing a failure. This doesn't always succeed, but works in many cases.

You should create a small script to drive Apache::TestSmoke, usually t/SMOKE.PL. If you don't have it already, create it:

  file:t/SMOKE.PL
  ---------------
  #!perl
  
  use strict;
  use warnings FATAL => 'all';
  
  use FindBin;
  use lib "$FindBin::Bin/../Apache-Test/lib";
  use lib "$FindBin::Bin/../lib";
  
  use Apache::TestSmoke ();
  
  Apache::TestSmoke->new(@ARGV)->run;

Usually Makefile.PL converts it into t/SMOKE while adjusting the perl path, but you can create t/SMOKE in the first place as well.
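
For example, assuming your Makefile.PL already uses Apache::TestMM as shown later in this chapter, a line like the following (a sketch, not a required incantation) would generate t/SMOKE from t/SMOKE.PL during perl Makefile.PL:

  Apache::TestMM::generate_script('t/SMOKE');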

t/SMOKE performs the following operations:

  1. It runs the tests randomly until the first failure is detected, or non-randomly if the option -order is set to repeat or rotate.

  2. Then it tries to reduce that sequence of tests to a minimal sequence that still causes the same failure.

  3. It reports all the successful reductions as it goes to STDOUT and to a report file of the format smoke-report-<date>.txt.

    In addition the system's build parameters are logged into the report file, so the detected problems can be reproduced.

  4. It goes back to step 1 and runs again using a new random seed, which should potentially detect different failures.

Currently for each reduction path, the following reduction algorithms are applied:

  1. Binary search: first try the upper half then the lower.

  2. Random window: randomize the left item, then the right item and return the items between these two points.

You can get the usage information by executing:

  % t/SMOKE -help

By default you don't need to supply any arguments to run it, simply execute:

  % t/SMOKE

If you want to work on certain tests you can specify them in the same way you do with t/TEST:

  % t/SMOKE foo/bar foo/tar

If you already have a sequence of tests that you want to reduce (perhaps because a previous run of the smoke testing didn't reduce the sequence enough to be able to diagnose the problem), you can request to do just that:

  % t/SMOKE -order=rotate -times=1 foo/bar foo/tar

-order=rotate is used just to override the default -order=random, since in this case we want to preserve the order. We also specify -times=1 for the same reason (to override the default, which is 50).

You can override the number of srand() iterations to perform (read: how many times to randomize the sequence), the number of times to repeat the tests (the default is 10) and the path to the file to use for reports:

  % t/SMOKE -times=5 -iterations=20 -report=../myreport.txt

Finally, any other options passed will be forwarded to t/TEST as is.

RunTime Configuration Overriding

After the server is configured during make test or with t/TEST -config, it's possible to explicitly override certain configuration parameters. The override-able parameters are listed when executing:

  % t/TEST -help

Probably the most useful parameters are:

  • -preamble

    configuration directives to add at the beginning of httpd.conf. For example to turn the tracing on:

      % t/TEST -preamble "PerlTrace all"
  • -postamble

    configuration directives to add at the end of httpd.conf. For example to load a certain Perl module:

      % t/TEST -postamble "PerlModule MyDebugMode"
  • -user

    run as user nobody:

      % t/TEST -user nobody
  • -port

    run on a different port:

      % t/TEST -port 8799
  • -servername

    run on a different server:

      % t/TEST -servername test.example.com
  • -httpd

    configure an httpd other than the default (that apxs figures out):

      % t/TEST -httpd ~/httpd-2.0/httpd
  • -apxs

    switch to another apxs:

      % t/TEST -apxs ~/httpd-2.0-prefork/bin/apxs

For a complete list of override-able configuration parameters see the output of t/TEST -help.

Request Generation and Response Options

We have mentioned already the most useful run-time options. Here are some other options that you may find useful during testing.

  • -ping

    Ping the server to see whether it runs

      % t/TEST -ping

    Ping the server and wait until the server starts, report waiting time.

      % t/TEST -ping=block

    This can be useful in conjunction with -run-tests option during debugging:

      % t/TEST -ping=block -run-tests

    Normally, -run-tests will immediately quit if it detects that the server is not running, but with -ping=block in effect, it'll wait indefinitely for the server to start up.

  • -head

    Issue a HEAD request. For example to request /server-info:

      % t/TEST -head /server-info
  • -get

    Request the body of a certain URL via GET.

      % t/TEST -get /server-info

    If no URL is specified / is used.

    Also, you can issue a GET request but retrieve only the headers of the response (e.g. useful to just check Content-length):

      % t/TEST -head -get /server-info

    GET URL with authentication credentials:

      % t/TEST -get /server-info -username dougm -password domination

    (please keep the password secret!)

  • -post

    Generate a POST request.

    Read content to POST from string:

      % t/TEST -post /TestApache::post -content 'name=dougm&company=covalent'

    Read content to POST from STDIN:

      % t/TEST -post /TestApache::post -content - < foo.txt

    Generate a content body of 1024 bytes in length:

      % t/TEST -post /TestApache::post -content x1024

    The same but print only the response headers, e.g. useful to just check Content-length:

      % t/TEST -post -head /TestApache::post -content x1024
  • -header

    Add headers to (-get|-post|-head) request:

      % t/TEST -get -header X-Test=10 -header X-Host=example.com /server-info
  • -ssl

    Run all tests through mod_ssl:

      % t/TEST -ssl
  • -http11

    Run all tests with HTTP/1.1 (KeepAlive) requests:

      % t/TEST -http11
  • -proxy

    Run all tests through mod_proxy:

      % t/TEST -proxy

The debugging options -debug and -breakpoint are covered in the Debugging Tests section.

For a complete list of available switches see the output of t/TEST -help.

Batch Mode

When running in batch mode and redirecting STDOUT, this state is automagically detected and the no-color mode is turned on, under which the program generates minimal output to make the log files useful. If this doesn't work and you still get all the escape-character mess you'd get during an interactive run, set the APACHE_TEST_NO_COLOR=1 environment variable.
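
For example, under bash(1) (the log filename here is arbitrary):

  % APACHE_TEST_NO_COLOR=1 t/TEST > test.log 2>&1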

Setting Up Testing Environment

We will assume that you set up your testing environment even before you have started coding the project, which is a very smart thing to do. Of course it'll take you more time upfront, but it will save you a lot of time during the development and debugging stages of the project. The extreme programming methodology says that tests should be written before starting the code development.

Basic Testing Environment

So the first thing is to create a package and all the helper files, so later on we can distribute it on CPAN. We are going to develop an Apache::Amazing module as an example.

  % h2xs -AXn Apache::Amazing
  Writing Apache/Amazing/Amazing.pm
  Writing Apache/Amazing/Makefile.PL
  Writing Apache/Amazing/README
  Writing Apache/Amazing/test.pl
  Writing Apache/Amazing/Changes
  Writing Apache/Amazing/MANIFEST

h2xs is a nifty utility that gets installed together with Perl and helps us to create some of the files we will need later.

However we are going to use a slightly different file layout, therefore we are going to move things around a bit.

We want our module to live in the Apache-Amazing directory, so we do:

  % mv Apache/Amazing Apache-Amazing
  % rmdir Apache

From now on the Apache-Amazing directory is our working directory.

  % cd Apache-Amazing

We don't need test.pl, as we are going to create a whole testing environment:

  % rm test.pl

We want our package to reside under the lib directory:

  % mkdir lib
  % mkdir lib/Apache
  % mv Amazing.pm lib/Apache

Now we adjust the lib/Apache/Amazing.pm to look like this:

  file:lib/Apache/Amazing.pm
  --------------------------
  package Apache::Amazing;
  
  use strict;
  use warnings;
  
  use Apache::RequestRec ();
  use Apache::RequestIO ();
  
  $Apache::Amazing::VERSION = '0.01';
  
  use Apache::Const -compile => 'OK';
  
  sub handler {
      my $r = shift;
      $r->content_type('text/plain');
      $r->print("Amazing!");
      return Apache::OK;
  }
  1;
  __END__
  ... pod documentation goes here...

The only thing it does is set the text/plain content type and respond with "Amazing!".

Next adjust or create the Makefile.PL file:

  file:Makefile.PL
  ----------------
  require 5.6.1;
  
  use ExtUtils::MakeMaker;
  
  use lib qw(../blib/lib lib );
  
  use Apache::TestMM qw(test clean); #enable 'make test'
  
  # prerequisites
  my %require =
    (
     "Apache::Test" => "", # any version will do
    );
  my @scripts = qw(t/TEST);

  # accept the configs from command line
  Apache::TestMM::filter_args();
  Apache::TestMM::generate_script('t/TEST');

  WriteMakefile(
      NAME         => 'Apache::Amazing',
      VERSION_FROM => 'lib/Apache/Amazing.pm',
      PREREQ_PM    => \%require,
      clean        => {
                       FILES => "@{ clean_files() }",
                      },
      ($] >= 5.005 ?
          (ABSTRACT_FROM => 'lib/Apache/Amazing.pm',
           AUTHOR        => 'Stas Bekman <stas (at) stason.org>',
          ) : ()
      ),
  );
  
  sub clean_files {
      return [@scripts];
  }

Apache::TestMM will do a lot of things for us, such as building a complete Makefile with proper 'test' and 'clean' targets, automatically converting .PL and conf/*.in files and more.

As you can see, we specify a prerequisites hash with Apache::Test in it, so if the package gets distributed on CPAN, the CPAN.pm shell will know to fetch and install this required package.

Next we create the test suite, which will reside in the t directory:

  % mkdir t

First we create t/TEST.PL which will be automatically converted into t/TEST during perl Makefile.PL stage:

  file:t/TEST.PL
  --------------
  #!perl
  
  use strict;
  use warnings FATAL => 'all';
  
  use lib qw(lib);
  
  use Apache::TestRunPerl ();
  
  Apache::TestRunPerl->new->run(@ARGV);

This assumes that Apache::Test is already installed on your system and Perl can find it. If not, you should tell Perl where to find it. For example you could add:

  use lib qw(Apache-Test/lib);

to t/TEST.PL, if Apache::Test is located in a parallel directory.

As you can see we didn't write the real path to the Perl executable, but #!perl. When t/TEST is created the correct path will be placed there automatically.

Next we need to prepare extra Apache configuration bits, which will reside in t/conf:

  % mkdir t/conf

We create the t/conf/extra.conf.in file, which will be automatically converted into t/conf/extra.conf before the server starts. If the file has any placeholders like @documentroot@, these will be replaced with the real values specific to the server used. In our case we put the following configuration bits into this file:

  file:t/conf/extra.conf.in
  -------------------------
  # this file will be Include-d by @ServerRoot@/httpd.conf
  
  # where Apache::Amazing can be found
  PerlSwitches -I@ServerRoot@/../lib
  # preload the module
  PerlModule Apache::Amazing
  <Location /test/amazing>
      SetHandler modperl
      PerlResponseHandler Apache::Amazing
  </Location>

As you can see, we just add a simple <Location> container and tell Apache that the namespace /test/amazing should be handled by the Apache::Amazing module running as a mod_perl handler. Notice that:

      SetHandler modperl

is mod_perl 2.0 configuration; if you are running under mod_perl 1.0, use the appropriate setting.
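
For example, under mod_perl 1.0 the equivalent configuration would typically look like the following sketch (the module code itself would also need the 1.0 API, e.g. Apache::Constants):

  <Location /test/amazing>
      SetHandler perl-script
      PerlHandler Apache::Amazing
  </Location>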

As mentioned before you can use Apache::Reload to automatically reload the modules under development when they change. The setup for this module goes into t/conf/extra.conf.in as well.

  file:t/conf/extra.conf.in
  -------------------------
  PerlModule Apache::Reload
  PerlPostReadRequestHandler Apache::Reload
  PerlSetVar ReloadAll Off
  PerlSetVar ReloadModules "Apache::Amazing"

For more information about Apache::Reload refer to its manpage.

Now we can create a simple test:

  file:t/basic.t
  --------------
  use strict;
  use warnings FATAL => 'all';
  
  use Apache::Amazing;
  use Apache::Test;
  use Apache::TestUtil;
  use Apache::TestRequest 'GET_BODY';
  
  plan tests => 2;
  
  ok 1; # simple load test
  
  my $url = '/test/amazing';
  my $data = GET_BODY $url;
  
  ok t_cmp(
           "Amazing!",
           $data,
           "basic test",
          );

Now create the README file.

  % touch README

Don't forget to put in the relevant information about your module, or arrange for ExtUtils::MakeMaker::WriteMakefile() to do this for you with:

  file:Makefile.PL
  ----------------
  WriteMakefile(
               ...
      dist  => {
                PREOP => 'pod2text lib/Apache/Amazing.pm > $(DISTVNAME)/README',
               },
               ...
               );

In this case README will be created from the documentation POD sections in lib/Apache/Amazing.pm, but the file has to exist for make dist to succeed.

And finally we adjust or create the MANIFEST file, so we can prepare a complete distribution. Here we list all the files that should enter the distribution, including the MANIFEST file itself:

  file:MANIFEST
  -------------
  lib/Apache/Amazing.pm
  t/TEST.PL
  t/basic.t
  t/conf/extra.conf.in
  Makefile.PL
  Changes
  README
  MANIFEST

That's it. Now we can build the package. But first we need to know the location of the apxs utility from the installed httpd server. We pass its path as an option to Makefile.PL:

  % perl Makefile.PL -apxs ~/httpd/prefork/bin/apxs
  % make
  % make test

  basic...........ok
  All tests successful.
  Files=1, Tests=2,  1 wallclock secs ( 0.52 cusr +  0.02 csys =  0.54 CPU)

To install the package run:

  % make install

Now we are ready to distribute the package on CPAN:

  % make dist

will create the package which can be immediately uploaded to CPAN. In this example the generated source package with all the required files will be called: Apache-Amazing-0.01.tar.gz.

The only thing that we haven't done and hope that you will do is to write the POD sections for the Apache::Amazing module, explaining how amazingly it works and how amazingly it can be deployed by other users.

Extending Configuration Setup

Sometimes you need to add extra httpd.conf configuration and Perl startup code specific to your project that uses Apache::Test. This can be accomplished by creating the desired files with an .in extension in the t/conf/ directory and running:

  % t/TEST -config

which for each file with the .in extension will create a new file without this extension, convert any template placeholders into real values and link it from the main httpd.conf. The latter happens only if the file has one of the following extensions:

  • .conf.in

    will add to t/conf/httpd.conf:

      Include foo.conf
  • .pl.in

    will add to t/conf/httpd.conf:

      PerlRequire foo.pl
  • other

    other files with .in extension will be processed as well, but not linked from httpd.conf.

As mentioned before, when the converted files are created, any special tokens in them are replaced with the appropriate values. For example the token @ServerRoot@ will be replaced with the value defined by the ServerRoot directive, so you can write a file that does the following:

  file:my-extra.conf.in
  ---------------------
  PerlSwitches -I@ServerRoot@/../lib

and assuming that the ServerRoot is ~/modperl-2.0/t/, when my-extra.conf will be created, it'll look like:

  file:my-extra.conf
  ------------------
  PerlSwitches -I~/modperl-2.0/t/../lib

The valid tokens are defined in %Apache::TestConfig::Usage and can also be seen in the configuration options section of the t/TEST -help output. The tokens are case insensitive.
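
For example, since the tokens are case insensitive, the following two lines are equivalent:

  PerlSwitches -I@ServerRoot@/../lib
  PerlSwitches -I@serverroot@/../lib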

Special Configuration Files

Some of the files in the t/conf directory have a special meaning, since the Apache::Test framework uses them for the minimal configuration setup. But they can be overridden:

  • if the file t/conf/httpd.conf.in exists, it will be used instead of the default template (in Apache/TestConfig.pm).

  • if the file t/conf/extra.conf.in exists, it will be used to generate t/conf/extra.conf with @variable@ substitutions.

  • if the file t/conf/extra.conf exists, it will be included by httpd.conf.

  • if the file t/conf/modperl_extra.pl exists, it will be included by httpd.conf as a mod_perl file (PerlRequire).

Apache::Test Framework's Architecture

In the previous section we have written a basic test, which doesn't do much. In the following sections we will explain how to write more elaborate tests.

When you write a test for Apache, unless you want to test some static resource, like fetching a file, usually you have to write a response handler and a corresponding test that will generate a request to exercise this response handler and verify that the response is as expected. From now on we may refer to these two parts as the client and server parts of the test, or the request and response parts of the test.

In some cases the response part of the test runs the test inside itself, so all it requires from the request part of the test, is to generate the request and print out a complete response without doing anything else. In such cases Apache::Test can auto-generate the client part of the test for you.

Developing Response-only Part of a Test

If you write only the response part of the test, Apache::Test will automatically generate the corresponding request part. In this case your test should print 'ok 1', 'not ok 2' responses as usual tests do. The autogenerated request part will receive the response and print it out, automatically fulfilling the Test::Harness expectations.

The corresponding request part of the test is named just like the response part, using the following translation:

  $response_test =~ s|t/[^/]+/Test([^/]+)/(.*).pm$|t/\L$1\E/$2.t|;

so for example t/response/TestApache/write.pm becomes: t/apache/write.t.

If we look at the autogenerated test t/apache/write.t, we can see that it starts with a warning that it has been autogenerated, so that you won't attempt to change it. Then comes the trace of the calls that generated this test, in case you want to figure out how the test was generated. And finally the test loads the Apache::TestRequest module, imports the GET_BODY shortcut and prints the response to the generated request using GET_BODY:

  use Apache::TestRequest 'GET_BODY';
  print GET_BODY "/TestApache::write";

As you can see the request URI is autogenerated from the response test name:

  $response_test =~ s|.*/([^/]+)/(.*).pm$|/$1::$2|;

So t/response/TestApache/write.pm becomes: /TestApache::write.

Now a simple response test may look like this:

  package TestApache::write;
  
  use strict;
  use warnings FATAL => 'all';
  
  use constant BUFSIZ => 512; #small for testing
  use Apache::Const -compile => 'OK';
  
  sub handler {
      my $r = shift;
      $r->content_type('text/plain');
  
      $r->write("1..2\n");
      $r->write("ok 1")
      $r->write("not ok 2")
  
      Apache::OK;
  }
  1;

Note: Apache::Const is mod_perl 2.0's package; if you test under 1.0, use the Apache::Constants module instead.

The configuration part for this test will be autogenerated by the Apache::Test framework and added to the autogenerated file t/conf/httpd.conf. In our case the following configuration section will be added.

  <Location /TestApache::write>
     SetHandler modperl
     PerlResponseHandler TestApache::write
  </Location>

You should remember to run:

  % t/TEST -clean

so when you run your new tests the new configuration will be added.

Developing Response and Request Parts of a Test

But in most cases you want to write a two-part test where the client (request) part generates various requests and tests the responses.

It's possible that the client part tests a static file or some other feature that doesn't require a dynamic response. In this case, only the request part of the test should be written.

If you need to write a complete test, with two parts, you proceed just like in the previous section, but now you write the client part of the test by yourself. It's quite easy; all you have to do is generate requests and check the responses. So a typical test will look like this:

  file:t/apache/cool.t
  --------------------
  use strict;
  use warnings FATAL => 'all';

  use Apache::Test;
  use Apache::TestUtil;
  use Apache::TestRequest 'GET_BODY';

  plan tests => 1; # plan one test.

  Apache::TestRequest::module('default');

  my $config   = Apache::Test::config();
  my $hostport = Apache::TestRequest::hostport($config) || '';
  t_debug("connecting to $hostport");
  
  my $received = GET_BODY "/TestApache::cool";
  my $expected = "COOL";
  
  ok t_cmp(
           $expected,
           $received,
           "testing TestApache::cool",
            );

See the Apache::TestUtil manpage for more info on the t_cmp() function (e.g. it works with regexes as well).
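
For example, here is a sketch of a pattern-match comparison, assuming t_cmp() accepts a qr// pattern as the expected value and using the same argument order as above:

  ok t_cmp(
           qr/^COOL/,
           $received,
           "testing TestApache::cool with a pattern",
          );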

And the corresponding response part:

  file:t/response/TestApache/cool.pm
  ----------------------------------
  package TestApache::cool;
  
  use strict;
  use warnings FATAL => 'all';
  
  use Apache::Const -compile => 'OK';
  
  sub handler {
      my $r = shift;
      $r->content_type('text/plain');
  
      $r->write("COOL");
  
      Apache::OK;
  }
  1;

Again, remember to run t/TEST -clean before running the new test so the configuration will be created for it.

As you can see the test generates a request to /TestApache::cool, and expects it to return "COOL". If we run the test:

  % ./t/TEST t/apache/cool

We see:

  apache/cool....ok
  All tests successful.
  Files=1, Tests=1,  1 wallclock secs ( 0.52 cusr +  0.02 csys =  0.54 CPU)

But if we run it in the debug (verbose) mode, we can actually see what we are testing, what was expected and what was received:

  apache/cool....1..1
  # connecting to localhost:8529
  # testing : testing TestApache::cool
  # expected: COOL
  # received: COOL
  ok 1
  ok
  All tests successful.
  Files=1, Tests=1,  1 wallclock secs ( 0.49 cusr +  0.03 csys =  0.52 CPU)

So if in our simple test we had received something different from COOL, or nothing at all, we could immediately see what the problem is.

The name of the request part of the test is very important. If Apache::Test cannot find the corresponding request part for the response part, it'll automatically generate one, and in that case it's probably not what you want. Therefore when you choose the filename for the test, make sure to pick the same name Apache::Test will pick. So if the response part is named t/response/TestApache/cool.pm, the request part should be named t/apache/cool.t. See the regular expression that does this translation in the previous section.

Developing Test Response Handlers in C

If you need to exercise some C API and you don't have a Perl glue for it, you can still use Apache::Test for the testing. It allows you to write response handlers in C and makes it easy to integrate these with other Perl tests and to use Perl for the request part which will exercise the C module.

The C modules look just like standard Apache C modules, with a couple of differences which:

  a. help them fit into the test suite

  b. allow them to compile nicely with Apache 1.x or 2.x.

The httpd-test ASF project is a good example to look at. The C modules are located under httpd-test/perl-framework/c-modules/. Look at c-modules/echo_post/echo_post.c for a nice simple example. mod_echo_post simply echoes data that is POSTed to it.

The differences between various tests may be summarized as follows:

  • If the first line is:

      #define HTTPD_TEST_REQUIRE_APACHE 1

    or

      #define HTTPD_TEST_REQUIRE_APACHE 2

    then the test will be skipped unless the version matches. If a module is compatible with the version of Apache used then it will be automatically compiled by t/TEST with -DAPACHE1 or -DAPACHE2 so you can conditionally compile it to suit different httpd versions.

  • If there is a section bounded by:

      #if CONFIG_FOR_HTTPD_TEST
      ...
      #endif

    in the .c file then that section will be inserted verbatim into t/conf/httpd.conf by t/TEST.

There is a certain amount of magic which hopefully allows most modules to be compiled for Apache 1.3 or Apache 2.0 without any conditional stuff. Replace XXX with the module name, for example echo_post or random_chunk:

  • You should:

      #include "apache_httpd_test.h" 

    which should be preceded by an:

      #define APACHE_HTTPD_TEST_HANDLER XXX_handler

    apache_httpd_test.h pulls in a lot of required includes and defines some constants and types that are not defined for Apache 1.3.

  • The handler function should be:

      static int XXX_handler(request_rec *r);
  • At the end of the file should be an:

      APACHE_HTTPD_TEST_MODULE(XXX)

    where XXX is the same as that in APACHE_HTTPD_TEST_HANDLER. This will generate the hooks and stuff.

Request and Response Methods

If you have LWP (libwww-perl) installed, its LWP::UserAgent serves as a user agent in tests; otherwise Apache::TestClient tries to emulate partial LWP functionality. So most of the LWP documentation applies here, but the Apache::Test framework provides shortcuts that hide many details, making test writing a simple and swift task. Before using these shortcuts Apache::TestRequest should be loaded, and its import() method will export the shortcuts into the caller's namespace:

  use Apache::TestRequest;

Request generation methods issue a request and return a response object (HTTP::Response if LWP is available). They are documented in the HTTP::Request::Common manpage. The following methods are available:

  • GET

    Issues the GET request. For example, issue a request and retrieve the response content:

      $url = "$location?foo=1&bar=2";
      $res = GET $url;
      $str = $res->content;

    To set request headers, supply them after the $url, e.g.:

      $res = GET $url, 'Content-type' => 'text/html';
  • HEAD

    Issues the HEAD request. For example issue a request and check that the response's Content-type is text/plain:

      $url = "$location?foo=1&bar=2";
      $res = HEAD $url;
      ok $res->content_type() eq 'text/plain';
  • POST

    Issues the POST request. For example:

      $content = 'PARAM=%33';
      $res = POST $location, content => $content;

    The second argument to POST can be a reference to an array or a hash with key/value pairs to emulate HTML <form> POSTing (see the sketch after this list).

  • PUT

    Issues the PUT request.

  • OPTIONS

    META: ???
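
For example, here is a minimal sketch of form-style POSTing with the POST shortcut; the URL is the one used in the -post command-line example earlier, and the field names and values are purely illustrative:

  my $res = POST '/TestApache::post', [name => 'dougm', company => 'covalent'];
  ok $res->is_success;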

These are two special methods added by the Apache::Test framework:

  • UPLOAD

    This special method allows you to upload a file, or a string which will appear to the server as an uploaded file. To upload a file use:

      UPLOAD $location, filename => $filename;

    You can add extra request headers as well:

      UPLOAD $location, filename => $filename, 'X-Header-Test' => 'Test';

    To upload a string as a file, use:

      UPLOAD $location, content => 'some data';
  • UPLOAD_BODY

    Retrieves the content from the response resulting from doing UPLOAD. It's equivalent to:

      my $body = UPLOAD(@_)->content;

    For example, this code retrieves the content of the response resulting from a file upload request:

      my $str = UPLOAD_BODY $location, filename => $filename;

Once the response object is returned, various response object methods can be applied to it. Probably the most useful ones are:

  $content = $res->content;

to retrieve the content of the response and:

  $content_type = $res->header('Content-type');

to retrieve specific headers.

Refer to the HTTP::Response manpage for a complete reference of these and other methods.

A few response retrieval shortcuts can be used to retrieve the wanted parts of the response. To apply these simply add the shortcut name to one of the request shortcuts listed earlier. For example instead of retrieving the content part of the response via:

  $res = GET $url;
  $str = $res->content;

simply use:

  $str = GET_BODY $url;

The following response-part retrieval shortcuts are available:
  • RC

    returns the response code, equivalent to:

      $res->code;

    For example to test whether some URL is bogus:

      use Apache::Const 'NOT_FOUND';
      ok GET_RC('/bogus_url') == NOT_FOUND;

    You usually need to import and use Apache::Const constants for the response code comparisons, rather than using the codes' corresponding numerical values directly. You can import groups of constants as well. For example:

      use Apache::Const ':common';

    Refer to the Apache::Const manpage for a complete reference. Also you may need to use APR and mod_perl constants, which reside in APR::Const and ModPerl::Const modules respectively.

  • OK

    tests whether the response was successful, equivalent to:

      $res->is_success;

    For example:

      ok GET_OK '/foo';
  • STR

    returns the whole response (both headers and body) as a string and is equivalent to:

      $res->as_string;

    Mostly useful for debugging, for example:

      use Apache::TestUtil;
      t_debug POST_STR '/test.pl';
  • HEAD

    returns the headers part of the response as a multi-line string.

    For example, this code dumps all the response headers:

      use Apache::TestUtil;
      t_debug GET_HEAD '/index.html';
  • BODY

    returns the response body and is equivalent to:

      $res->content;

    For example, this code validates that the response's body is the one that was expected:

      use Apache::TestUtil;
      ok GET_BODY('/index.html') eq $expect;

Other Request Generation helpers

META: these methods need documentation

Request part:

   Apache::TestRequest::scheme('http'); #force http for t/TEST -ssl 
   Apache::TestRequest::module($module);
   my $config = Apache::Test::config();
   my $hostport = Apache::TestRequest::hostport($config);

Getting the request object? Apache::TestRequest::user_agent()

Starting Multiple Servers

By default the Apache::Test framework sets up only a single server to test against.

In some cases you need to have more than one server instance. If this is the case, you have to override the maxclients configuration directive, whose default is 1. Usually this is done in t/TEST.PL by subclassing the parent test run class and overriding the new_test_config() method. For example if the parent class is Apache::TestRunPerl, you can change your t/TEST.PL to be:

  use strict;
  use warnings FATAL => 'all';
  
  use lib "../lib"; # test against the source lib for easier dev
  use lib map {("../blib/$_", "../../blib/$_")} qw(lib arch);
  
  use Apache::TestRunPerl ();
  
  package MyTest;
  
  our @ISA = qw(Apache::TestRunPerl);
  
  # subclass new_test_config to add some config vars which will be
  # replaced in generated httpd.conf
  sub new_test_config {
      my $self = shift;
  
      $self->{conf_opts}->{maxclients} = 2;
  
      return $self->SUPER::new_test_config;
  }
  
  MyTest->new->run(@ARGV);

Multiple User Agents

By default the Apache::Test framework uses a single user agent which talks to the server (this is the LWP user agent, if you have LWP installed). You almost never use this agent directly in the tests, but via various wrappers. However if you need a second user agent, you can clone the existing one. For example:

  my $ua2 = Apache::TestRequest::user_agent()->clone;

Hitting the Same Interpreter (Server Thread/Process Instance)

When a single instance of the server thread/process is running, all the tests go through the same server. However if the Apache::Test framework was configured to run a few instances, two subsequent sub-tests may not hit the same server instance. In certain tests (e.g. testing the closure effect or BEGIN blocks) it's important to make sure that a sequence of sub-tests is run against the same server instance. The Apache::Test framework supports this internally.

Here is an example from the ModPerl::Registry closure tests, using the counter closure problem under ModPerl::Registry:

  file:cgi-bin/closure.pl
  -----------------------
  #!perl -w
  print "Content-type: text/plain\r\n\r\n";
  
  # this is a closure (when compiled inside handler()):
  my $counter = 0;
  counter();
  
  sub counter {
      #warn "$$";
      print ++$counter;
  }

If this script gets invoked twice in a row and we make sure that both invocations are executed by the same server instance, the first time it'll return 1 and the second time 2. So here is the gist of the request part that makes sure that its two subsequent requests hit the same server instance:

  file:closure.t
  --------------
  ...
  my $url = "/same_interp/cgi-bin/closure.pl";
  my $same_interp = Apache::TestRequest::same_interp_tie($url);
  
  # should be no closure effect, always returns 1
  my $first  = req($same_interp, $url);
  my $second = req($same_interp, $url);
  ok t_cmp(
      1,
      $first && $second && ($second - $first),
      "the closure problem is there",
  );
  sub req {
      my($same_interp, $url) = @_;
      my $res = Apache::TestRequest::same_interp_do($same_interp,
                                                    \&GET, $url);
      return $res ? $res->content : undef;
  }

In this test we generate two requests to cgi-bin/closure.pl and expect the returned value to increment for each new request, because of the closure problem generated by ModPerl::Registry. Since we don't know whether some other test has already called this script, we simply check whether the subtraction of the two subsequent requests' outputs gives a value of 1.

The test starts by requesting the server to tie a single instance to all requests made with a certain identifier. This is done using the same_interp_tie() function, which returns a unique server instance identifier. From now on any requests made through same_interp_do() that supply this identifier as the first argument will be served by the same server instance. The second argument to same_interp_do() is the method to use for generating the request and the third is the URL to use. Extra arguments can be supplied if needed by the request generation method (e.g. headers).

This technique works for testing purposes where we know that we have just a few server instances. What happens internally is that when same_interp_tie() is called, the server instance that served the request returns its unique UUID, and when we want to hit the same server instance in subsequent requests we keep generating the same request until we learn that we are being served by the server instance that we want. This magic is done by using a fixup handler which returns OK only if it sees that its unique id matches. As you can understand, this technique would be very inefficient in production with many server instances.

Writing Tests

All the communication between tests and Test::Harness, which executes them, is done via STDOUT. That is, whatever the tests want to report, they do so by printing something to STDOUT. If a test wants to print some debug comment, it should do so starting on a separate line, and each debug line should start with #. The t_debug() function from the Apache::TestUtil package should be used for that purpose.
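
For example:

  use Apache::TestUtil;
  t_debug("expected: $expected"); # prints "# expected: ..." to STDOUT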

Defining How Many Sub-Tests Are to Be Run

Before the sub-tests of a certain test can be run, the test has to declare how many sub-tests it is going to run. In some cases the test may decide to skip some of its sub-tests or not to run any at all. Therefore the first thing the test has to print is:

  1..M\n

where M is a positive integer. So if the test plans to run 5 sub-tests it should do:

  print "1..5\n";

In Apache::Test this is done as follows:

  use Apache::Test;
  plan tests => 5;

Skipping a Whole Test

Sometimes a test cannot be run because certain prerequisites are missing. To tell Test::Harness that the whole test is to be skipped do:

  print "1..0 # skipped because of foo is missing\n";

The optional comment after # skipped will be used as the reason for skipping the test. Under Apache::Test the optional last argument to the plan() function can be used to define prerequisites and skip the test:

  use Apache::Test;
  plan tests => 5, $test_skipping_prerequisites;

This last argument can be:

  • a SCALAR

    the test is skipped if the scalar has a false value. For example:

      plan tests => 5, 0;
  • an ARRAY reference

    have_module() is called for each value in this array. The test is skipped if have_module() returns false (which happens when at least one C or Perl module from the list cannot be found). For example:

      plan tests => 5, [qw(mod_index mod_mime)];
  • a CODE reference

    the tests will be skipped if the function returns a false value. For example:

        plan tests => 5, \&have_lwp;

    the test will be skipped if LWP is not available

There are a number of useful functions whose return value can be used as the last argument for plan():

  • have_module()

    have_module() tests for the presence of Perl modules or C modules mod_*. It accepts a list of modules or a reference to the list. If at least one of the modules is not found it returns a false value, otherwise it returns a true value. For example:

      plan tests => 5, have_module qw(Chatbot::Eliza CGI mod_proxy);

    will skip the whole test unless both Perl modules Chatbot::Eliza and CGI and the C module mod_proxy.c are available.

  • have

    have() called as a last argument of plan() can impose multiple requirements at once.

    have()'s arguments can include scalars, which are passed to have_module(), and hash references. The hash references have a condition code reference as the value and a reason for failure as the key. The condition code is run, and if it fails the provided reason is used to tell the user why the test was skipped.

    For example:

      plan tests => 5,
          have 'LWP',
               { "perl >= 5.7.3 is required" => sub { $] >= 5.007003   } },
               { "not Win32"                 => sub { $^O eq 'MSWin32' } },
               'cgid';

    In this example, we require the presence of the LWP Perl module and mod_cgid, that we run under perl >= 5.7.3 and that we run on Win32. If any of the requirements from this list fail, the test will be skipped and each failed requirement will print a reason for its failure.

  • have_perl()

    have_perl('foo') checks whether the value of $Config{foo} or $Config{usefoo} is equal to 'define'. For example:

      plan tests => 2, have_perl 'ithreads';

    if Perl wasn't compiled with -Duseithreads the condition will be false and the test will be skipped.

    Also it checks for Perl extensions. For example:

      plan tests => 5, have_perl 'iolayers';

    tests whether PerlIO is available.

  • have_lwp()

    Tests whether the Perl module LWP is installed.

  • have_http11()

    Tries to tell LWP that sub-tests need to be run under HTTP 1.1 protocol. Fails if the installed version of LWP is not capable of doing that.

  • have_cgi()

    tests whether mod_cgi or mod_cgid is available.

  • have_apache()

    tests for a specific version of httpd. For example:

      plan tests => 2, have_apache 2;

    will skip the test if not run under httpd-2.x.

      plan tests => 2, have_apache 1;

    will run the test only if run under Apache version 1.x.

Skipping Numerous Tests

Just like you can tell Apache::Test to run only specific tests, you can tell it to run all but a few tests.

If all files in a directory t/foo should be skipped, create:

  file:t/foo/all.t
  ----------------
  print "1..0\n";

Alternatively, you can specify which tests should be skipped in a single file, t/SKIP. This file includes a list of tests to be skipped. You can include comments starting with # and you can use the * wildcard character to match multiple files.

For example, in the mod_perl 2.0 test suite we could create the following file:

  file:t/SKIP
  -----------
  # skip all files in protocol
  protocol
  
  # skip basic cgi test
  modules/cgi.t
  
  # skip all filter/input_* files
  filter/input*.t

In our example the first pattern specifies the directory name protocol, since we want to skip all the tests in it. But since the skipping is done by matching the skip patterns from t/SKIP against the list of potential tests to be run, some other tests may be skipped as well if they match the pattern. Therefore it's safer to use a pattern like this:

  protocol/*.t

The second pattern skips a single test, modules/cgi.t. Note that you shouldn't specify the leading t/. The .t extension is optional, so you can also write:

  # skip basic cgi test
  modules/cgi

The last pattern tells Apache::Test to skip all the tests starting with filter/input.

Reporting a Success or a Failure of Sub-tests

After printing the number of planned sub-tests, and assuming that the test is not skipped, the test runs its sub-tests, and each sub-test is expected to report its success or failure by printing ok or not ok respectively, followed by its sequential number and a newline. For example:

  print "ok 1\n";
  print "not ok 2\n";
  print "ok 3\n";

In Apache::Test this is done using the ok() function, which prints ok if its argument is a true value and not ok otherwise. In addition it keeps track of how many times it was called, and every time it prints an incremented number; therefore you can move sub-tests around without needing to adjust the sub-tests' sequential numbers, since you don't need them at all. For example this test snippet:

  use Apache::Test;
  use Apache::TestUtil;
  plan tests => 3;
  ok "success";
  t_debug("expecting to fail next test");
  ok "";
  ok 0;

will print:

  1..3
  ok 1
  # expecting to fail next test
  not ok 2
  not ok 3

Most sub-tests do one of the following things:

  • test whether some variable is defined:

      ok defined $object;

  • test whether some variable is a true value:

      ok $value;

    or a false value:

      ok !$value;

  • test whether a value received from somewhere is equal to an expected value:

      $expected = "a good value";
      $received = get_value();
      ok defined $received && $received eq $expected;

Skipping Sub-tests

If the standard output line contains the substring # Skip (with variations in spacing and case) after ok or ok NUMBER, it is counted as a skipped test. Test::Harness reports the text after # Skip\S*\s+ as the reason for skipping. So you can report a sub-test as skipped as follows:

  print "ok 3 # Skip for some reason\n";

or by using Apache::Test's skip() function, which works similarly to ok():

  skip $should_skip, $test_me;

so if $should_skip is true, the sub-test will be reported as skipped. The second argument is the one that's passed to ok(), so if $should_skip is false, a normal ok() sub-test is run. The following example represents the four possible outcomes of using the skip() function:

  file:skip_subtest_1.t
  ---------------------
  use Apache::Test;
  plan tests => 4;
  
  my $ok     = 1;
  my $not_ok = 0;
  
  my $should_skip = "foo is missing";
  skip $should_skip, $ok;
  skip $should_skip, $not_ok;
  
  $should_skip = '';
  skip $should_skip, $ok;
  skip $should_skip, $not_ok;

Now we run the test:

  % ./t/TEST -run-tests -verbose skip_subtest_1
  skip_subtest_1....1..4
  ok 1 # skip foo is missing
  ok 2 # skip foo is missing
  ok 3
  not ok 4
  # Failed test 4 in skip_subtest_1.t at line 13
  Failed 1/1 test scripts, 0.00% okay. 1/4 subtests failed, 75.00% okay.

As you can see, since $should_skip had a true value, the first two sub-tests were explicitly skipped (using $should_skip as the reason), so the second argument to skip() didn't matter. In the last two sub-tests $should_skip had a false value, therefore the second argument was passed to the ok() function. Basically the following code:

  $should_skip = '';
  skip $should_skip, $ok;
  skip $should_skip, $not_ok;

is equivalent to:

  ok $ok;
  ok $not_ok;

Apache::Test also allows you to write tests in such a way that only selected sub-tests will be run. The test simply needs to switch from using ok() to sok(). The argument to sok() is a CODE reference or a BLOCK whose return value will be passed to ok(). If sub-tests are specified on the command line, only those will be run and passed to ok(); the rest will be skipped. If no sub-tests are specified, sok() works just like ok(). For example, you can write this test:

  file:skip_subtest_2.t
  ---------------------
  use Apache::Test;
  plan tests => 4;
  sok {1};
  sok {0};
  sok sub {'true'};
  sok sub {''};

and then ask to run only sub-tests 1 and 3 and to skip the rest.

  % ./t/TEST -verbose skip_subtest_2 1 3
  skip_subtest_2....1..4
  ok 1
  ok 2 # skip skipping this subtest
  ok 3
  ok 4 # skip skipping this subtest
  ok, 2/4 skipped:  skipping this subtest
  All tests successful, 2 subtests skipped.

Only the sub-tests 1 and 3 get executed.

A range of sub-tests to run can be given using Perl's range operator:

  % ./t/TEST -verbose skip_subtest_2 2..4
  skip_subtest_2....1..4
  ok 1 # skip skipping this subtest
  not ok 2
  # Failed test 2
  ok 3
  not ok 4
  # Failed test 4
  Failed 1/1 test scripts, 0.00% okay. 2/4 subtests failed, 50.00% okay.

In this run only sub-tests 2 to 4 get executed, while the first one is skipped.

Todo Sub-tests

In a similar fashion to skipping specific sub-tests, it's possible to declare some sub-tests as todo. This distinction is useful when we know that some sub-test is failing but for some reason we want to flag it as a todo sub-test rather than as a broken test. Test::Harness recognizes a todo sub-test if the standard output line contains the substring # TODO after not ok or not ok NUMBER. The text that follows explains what has to be done before this sub-test will succeed. For example:

  print "not ok 42 # TODO not implemented\n";

In Apache::Test this is done by passing a reference to a list of the sub-test numbers that should be marked as todo sub-tests:

  plan tests => 7, todo => [3, 6];

In this example sub-tests 3 and 6 will be marked as todo sub-tests.
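
Here is a minimal sketch of a complete test using this feature; the failing sub-tests are hard-coded just for illustration:

  file:todo_subtest.t
  -------------------
  use Apache::Test;

  # sub-tests 3 and 6 are known to fail, so flag them as todo
  plan tests => 7, todo => [3, 6];

  ok 1;  # 1
  ok 1;  # 2
  ok 0;  # 3: todo -- feature not implemented yet
  ok 1;  # 4
  ok 1;  # 5
  ok 0;  # 6: todo -- feature not implemented yet
  ok 1;  # 7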

Making it Easy to Debug

Ideally we want all the tests to pass, reporting minimal noise or none at all. But when some sub-tests fail we want to know the reason for their failure. If you are a developer you can dive into the code and easily find out what the problem is, but when a user has a problem with the test suite it'll make both your lives much easier if you make it easy for the user to report the exact problem to you.

Usually this is done by printing a comment saying what the sub-test does, what the expected value is and what the received value is. This is a good example of a debug-friendly sub-test:

  file:debug_comments.t
  ---------------------
  use Apache::Test;
  use Apache::TestUtil;
  plan tests => 1;
  
  t_debug("testing feature foo");
  $expected = "a good value";
  $received = "a bad value";
  t_debug("expected: $expected");
  t_debug("received: $received");
  ok defined $received && $received eq $expected;

Since in this example $received gets assigned the string "a bad value", which differs from the expected value, the test will print the following:

  % t/TEST debug_comments
  debug_comments....FAILED test 1

No debug help here, since in non-verbose mode the debug comments aren't printed. If we run the same test in verbose mode, enabled with -verbose:

  % t/TEST -verbose debug_comments
  debug_comments....1..1
  # testing feature foo
  # expected: a good value
  # received: a bad value
  not ok 1

we can see exactly what the problem is, by visually inspecting the expected and received values.

It's true that adding a few print statements for each sub-test is cumbersome and adds a lot of noise, when you could simply write:

  ok "a good value" eq "a bad value";

but never fear, Apache::TestUtil comes to the rescue. The function t_cmp() does all the work for you:

  use Apache::Test;
  use Apache::TestUtil;
  ok t_cmp(
      "a good value",
      "a bad value",
      "testing feature foo");

t_cmp() will handle undef'ined values as well, so you can do:

  my $expected;
  ok t_cmp(undef, $expected, "should be undef");

Finally, you can use t_cmp() for regular expression comparisons. This feature is mostly useful when there is more than one valid expected value, which can be described with a regex. For example, this can be useful to inspect the value of $@ when eval() is expected to fail:

  eval {foo();};
  if ($@) {
      ok t_cmp(qr/^expecting foo/, $@, "func eval");
  }

which is the same as:

  eval {foo();};
  if ($@) {
      t_debug("func eval");
      ok $@ =~ /^expecting foo/ ? 1 : 0;
  }

Tie-ing STDOUT to a Response Handler Object

It's possible to run the sub-tests in the response handler and simply return them as a response to the client, which in turn will print them out. Unfortunately in this case you cannot use ok() and the other functions, since they print the results rather than returning them, so you have to do it manually. For example:

  sub handler {
      my $r = shift;
  
      $r->print("1..2\n");
      $r->print("ok 1\n");
      $r->print("not ok 2\n");
    
      return Apache::OK;
  }

Now the client should print the response to STDOUT for Test::Harness to process.
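
The client side of such a test can then be as simple as fetching the response and passing it through. A minimal sketch, assuming the response handler above lives in a hypothetical package TestResponse::tie, so that the auto-generated <Location> is /TestResponse::tie (see the Auto Configuration section below):

  use Apache::TestRequest 'GET_BODY';

  # the body already contains the plan and the ok/not ok lines,
  # so it only has to reach STDOUT for Test::Harness to parse it
  print GET_BODY '/TestResponse::tie';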

If the response handler is configured as:

  SetHandler perl-script

STDOUT is already tied to the request object $r. Therefore you can now rewrite the handler as:

  use Apache::Test;
  sub handler {
      my $r = shift;
  
      Apache::Test::test_pm_refresh();
      plan tests => 2;
      ok "true";
      ok "";
    
      return Apache::OK;
  }

However, to be on the safe side, you also have to call Apache::Test::test_pm_refresh(), allowing plan() and friends to be called more than once per process.

Under different settings STDOUT is not tied to the request object. If the first argument to plan() is an object, such as an Apache::RequestRec object, STDOUT will be tied to it. The Test.pm global state will also be refreshed by calling Apache::Test::test_pm_refresh. For example:

  use Apache::Test;
  sub handler {
      my $r = shift;
  
      plan $r, tests => 2;
      ok "true";
      ok "";
    
      return Apache::OK;
  }

Yet another alternative for handling the test framework's printing inside a response handler is to use the Apache::TestToString class.

The Apache::TestToString class is used to capture Test.pm output into a string. Example:

  use Apache::Test;
  sub handler {
      my $r = shift;
  
      Apache::TestToString->start;
  
      plan tests => 2;
      ok "true";
      ok "";
    
      my $output = Apache::TestToString->finish;
      $r->print($output);
  
      return Apache::OK;
  }

In this example Apache::TestToString intercepts and buffers all the output from Test.pm; the buffer can be retrieved with the finish() method and then printed to the client in one shot. Internally it calls Apache::Test::test_pm_refresh() to make sure plan(), ok() and the other functions will work correctly when more than one test is run under the same interpreter.

Auto Configuration

If the test consists only of the request part, you have to manually configure the targets you are going to use. This is usually done in t/conf/extra.conf.in.

If your tests consist of request and response parts, Apache::Test automatically adds a configuration section for each response handler it finds. For example, for the response handler:

  package TestResponse::nice;
  ... some code
  1;

it will put into t/conf/httpd.conf:

  <Location /TestResponse::nice>
      SetHandler modperl
      PerlResponseHandler TestResponse::nice
  </Location>

If you want to add some extra configuration directives, use the __DATA__ section, as in this example:

  package TestResponse::nice;
  ... some code
  1;
  __DATA__
  PerlSetVar Foo Bar

These directives will be wrapped into the <Location> section and placed into t/conf/httpd.conf:

  <Location /TestResponse::nice>
      SetHandler modperl
      PerlResponseHandler TestResponse::nice
      PerlSetVar Foo Bar
  </Location>

This autoconfiguration feature was added to:

  • simplify (less lines) test configuration.

  • ensure unique namespace for <Location ...>'s.

  • force <Location ...> names to be consistent.

  • prevent clashes within main configuration.

If some directives are supposed to go into the base configuration, i.e. not be automatically wrapped into a <Location> block, you should use a special <Base>..</Base> block:

  __DATA__
  <Base>
      PerlSetVar Config ServerConfig
  </Base>
  PerlSetVar Config LocalConfig

Now the autogenerated section will look like this:

  PerlSetVar Config ServerConfig
  <Location /TestResponse::nice>
     SetHandler modperl
     PerlResponseHandler TestResponse::nice
     PerlSetVar Config LocalConfig
  </Location>

As you can see, the <Base>..</Base> block is gone. As you can imagine, this block was added to support our virtue of laziness: most tests don't need to add directives to the base configuration, and we want to keep the configuration sections in tests to a minimum and let Perl do the rest of the job for us.

META: Virtual host?

META: a special -configure time method in response part: APACHE_TEST_CONFIGURE

Threaded versus Non-threaded Perl Test's Compatibility

Since the tests are supposed to run properly under both non-threaded and threaded Perl, you have to take care to enclose the threaded-Perl-specific configuration bits in:

  <IfDefine PERL_USEITHREADS>
      ... configuration bits
  </IfDefine>

Apache::Test will start the server with -DPERL_USEITHREADS if Perl was built with ithreads.

For example, PerlOptions +Parent is valid only for threaded Perl, therefore you have to write:

  <IfDefine PERL_USEITHREADS>
      # a new interpreter pool
      PerlOptions +Parent
  </IfDefine>

Just like the configuration, the test's code has to work under both versions as well. Therefore you should wrap code specific to threaded Perl in:

  if (have_perl 'ithreads'){
      # ithread specific code
  }

which essentially does a lookup in $Config{useithreads}.

Retrieving the Server Configuration Data

The server configuration data can be retrieved and used in the tests via the configuration object:

  use Apache::Test;
  my $cfg  = Apache::Test::config();
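
The returned object is a hash reference holding the parsed configuration. For example, a hedged sketch (the exact set of keys depends on your Apache::Test version, but entries under $cfg->{vars} such as documentroot and serverroot are commonly available):

  use Apache::Test;
  use Apache::TestUtil;

  my $cfg = Apache::Test::config();

  # print a couple of configuration values in verbose mode
  t_debug("DocumentRoot: $cfg->{vars}{documentroot}");
  t_debug("ServerRoot:   $cfg->{vars}{serverroot}");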

Module Magic Number

The following code retrieves the major and minor components of the MMN.

  my $cfg  = Apache::Test::config();
  my $info = $cfg->{httpd_info};
  
  my $major = $info->{MODULE_MAGIC_NUMBER_MAJOR};
  my $minor = $info->{MODULE_MAGIC_NUMBER_MINOR};
  
  print "major=$major, minor=$minor\n";

For example for MMN 20011218:0, this code prints:

  major=20011218, minor=0
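
This can be used, for example, to skip a test that relies on an API available only from a certain MMN on (a sketch; the threshold below is purely illustrative):

  use Apache::Test;

  my $cfg   = Apache::Test::config();
  my $major = $cfg->{httpd_info}{MODULE_MAGIC_NUMBER_MAJOR};

  # run the sub-tests only if the server's MMN is recent enough
  plan tests => 3, $major >= 20011218;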

Debugging Tests

Sometimes your tests won't run properly or, even worse, will segfault. There are cases where it's possible to debug broken tests with simple print statements, but usually it's very time-consuming and ineffective. Therefore it's a good idea to familiarize yourself with the Perl and C debuggers; this knowledge will save you a lot of time and grief in the long run.

Under C debugger

mod_perl 2.0 provides a built-in 'make test' debug facility. So in case you get a core dump during make test, or just for fun, run in one shell:

  % t/TEST -debug

in another shell:

  % t/TEST -run-tests

then the -debug shell will have a (gdb) prompt; type where to get a stack trace:

  (gdb) where

You can change the default debugger by supplying the name of the debugger as an argument to -debug. E.g. to run the server under ddd:

  % ./t/TEST -debug=ddd

META: list supported debuggers

If you debug mod_perl internals you can set breakpoints using the -breakpoint option, which can be repeated as many times as needed. When you set at least one breakpoint, the server will start running until it reaches the ap_run_pre_config breakpoint. At this point we can set breakpoints in the mod_perl code, something we cannot do earlier if mod_perl was built as a DSO. For example:

  % ./t/TEST -debug -breakpoint=modperl_cmd_switches \
     -breakpoint=modperl_cmd_options

will set the modperl_cmd_switches and modperl_cmd_options breakpoints and run the debugger.

If you want to tell the debugger to jump to the start of the mod_perl code you may run:

  % ./t/TEST -debug -breakpoint=modperl_hook_init

In fact -breakpoint automatically turns on the debug mode, so you can run:

  % ./t/TEST -breakpoint=modperl_hook_init

Under Perl debugger

When the Perl code misbehaves it's best to run it under the Perl debugger. Normally it is started as:

  % perl -d program.pl

so that flow control gets passed to the Perl debugger, which allows you to run the program in single steps and examine its state and variables after every executed statement. Of course you can set breakpoints and watches to skip irrelevant code sections and to watch certain variables. The perldebug and perldebtut manpages cover the Perl debugger in fine detail.

The Apache::Test framework extends the Perl debugger and plugs in LWP's debug features, so you can debug the requests. Let's take test apache/read from mod_perl 2.0 and present the features as we go:

META: to be completed

Run a .t test under the Perl debugger:

  % t/TEST -debug perl t/modules/access.t

Run a .t test under the Perl debugger (non-stop mode, output to t/logs/perldb.out):

  % t/TEST -debug perl=nostop t/modules/access.t

Turn on -v and LWP trace mode (level 1, the default) in Apache::TestRequest:

  % t/TEST -debug lwp t/modules/access.t

Turn on -v and LWP trace mode (level 2) in Apache::TestRequest:

  % t/TEST -debug lwp=2 t/modules/access.t

Tracing

To start the server under strace(1):

  % t/TEST -debug strace

The output goes to t/logs/strace.log.

Now in a second terminal run:

  % t/TEST -run-tests

Beware that t/logs/strace.log is going to be very big.

META: can we provide strace(1) opts if we want to see only certain syscalls?

Writing Tests Methodology

META: to be completed

When Tests Should Be Written

  • A New Feature is Added

    Every time a new feature is added, new tests should be added to cover it.

  • A Bug is Reported

    Every time a bug gets reported, before you even attempt to fix the bug, write a test that exposes it. This will make it much easier for you to verify whether your fix actually fixes the bug.

    Now fix the bug and make sure that test passes ok.

    It's possible that several tests can be written to expose the same bug. Write them all -- the more tests you have, the smaller the chance that a bug remains in your code.

    If the person reporting the bug is a programmer, you may try to ask her to write the test for you. But usually, if the report includes simple code that reproduces the bug, it should be easy to convert that code into a test, as in the sketch following this list.
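
Here is a minimal sketch of what such a regression test could look like; the URI, the expected string and the file name are hypothetical placeholders:

  file:modules/bugreport.t
  ------------------------
  use Apache::Test;
  use Apache::TestUtil;
  use Apache::TestRequest 'GET_BODY';

  plan tests => 1, have_lwp;

  # the reported bug: the handler returned a truncated response body
  my $received = GET_BODY '/TestBug::truncate';
  ok t_cmp("the complete body", $received, "body is not truncated");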

Other Webserver Regression Testing Frameworks

  • Puffin

    Puffin is a web application regression testing system. It allows you to test any web application from end to end as if it were a "black box" accepting inputs and returning outputs.

    It's available from http://puffin.sourceforge.net/

References

  • extreme programming methodology

    Extreme Programming: A Gentle Introduction: http://www.extremeprogramming.org/.

    Extreme Programming: http://www.xprogramming.com/.

    See also other sites linked from these URLs.

Maintainers

Maintainer is the person(s) you should contact with updates, corrections and patches.

  • Stas Bekman <stas (at) stason.org>

Authors

  • Stas Bekman <stas (at) stason.org>

Only the major authors are listed above. For contributors see the Changes file.