NAME

  Test - provides a simple framework for writing test scripts

SYNOPSIS

  use strict;
  use Test;
  BEGIN { plan tests => 13, todo => [3,4] }

  ok(0); # failure
  ok(1); # success

  ok(0); # ok, expected failure (see todo list, above)
  ok(1); # surprise success!

  ok(0,1);             # failure: '0' ne '1'
  ok('broke','fixed'); # failure: 'broke' ne 'fixed'
  ok('fixed','fixed'); # success: 'fixed' eq 'fixed'

  ok(sub { 1+1 }, 2);  # success: '2' eq '2'
  ok(sub { 1+1 }, 3);  # failure: '2' ne '3'
  ok(0, int(rand(2)));  # (just kidding! :-)

  my @list = (0,0);
  ok @list, 3, "\@list=".join(',',@list);      #extra diagnostics
  ok 'segmentation fault', '/(?i)success/';    #regex match

  skip($feature_is_missing, ...);    # do platform-specific test

DESCRIPTION

Test::Harness expects to see particular output when it executes tests. This module aims to make writing proper test scripts just a little bit easier (and less error prone :-).
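
For example, plan() prints the "1..N" header and each call to ok() prints an "ok N" or "not ok N" line, which is what Test::Harness counts. The sketch below is a hypothetical two-test script; the exact diagnostics printed after a failure vary between versions.

  use strict;
  use Test;
  BEGIN { plan tests => 2 }   # prints "1..2"

  ok(1);                      # prints "ok 1"
  ok('broke', 'fixed');       # prints "not ok 2" plus a short diagnostic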

TEST TYPES

  • NORMAL TESTS

    These tests are expected to succeed. If they don't, something's screwed up!

  • SKIPPED TESTS

    Skip tests are for platform-specific features that might or might not be available. The first argument should evaluate to true if the required feature is NOT available, in which case the test is skipped rather than run. After the first argument, skip tests work exactly the same way as normal tests do (a short sketch follows this list).

  • TODO TESTS

    TODO tests are designed for maintaining an executable TODO list. These tests are expected NOT to succeed (otherwise the feature they test would be on the new feature list, not the TODO list).

    Packages should NOT be released with successful TODO tests. As soon as a TODO test starts working, it should be promoted to a normal test and the newly minted feature should be documented in the release notes.
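
The sketch below (with hypothetical test numbering and an illustrative feature probe) shows both kinds of test in one script: test 2 is skipped when the probed feature is missing, while tests 3 and 4 sit on the executable TODO list and are expected to fail.

  use strict;
  use Test;
  BEGIN { plan tests => 4, todo => [3,4] }   # tests 3 and 4 are TODO

  ok(1);                                     # 1: a normal test

  # Illustrative probe: true when the optional module cannot be loaded.
  my $feature_is_missing = !eval { require Sys::Hostname; 1 };
  skip($feature_is_missing, 1);              # 2: skipped if the feature is missing

  ok(0);                                     # 3: expected failure (still TODO)
  ok(0);                                     # 4: expected failure (still TODO)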

ONFAIL

  BEGIN { plan tests => 4, onfail => sub { warn "CALL 911!" } }

Test failures can trigger extra diagnostics at the end of the test run. The onfail subroutine is passed an array ref of hash refs that describe each test failure. Each hash contains at least the following fields: package, repetition, and result. (The file, line, and test number are not included because their correspondence to a particular test is fairly weak.) If the test had an expected value or a diagnostic string, these are also included.

This optional feature can be used simply to print out the version of your package and/or how to report problems. It can also be used to generate extremely sophisticated diagnostics for a particular test failure. It is not a panacea, however: core dumps or other unrecoverable errors will prevent the onfail hook from running (it is run inside an END block). Besides, onfail is probably overkill in the majority of cases. (Your test code should be simpler than the code it is testing, yes?)
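
A minimal sketch of such a hook, assuming the hash keys match the field names listed above; the wording of the report is only illustrative.

  use strict;
  use Test;
  BEGIN {
      plan tests => 2, onfail => sub {
          my ($failures) = @_;   # one hash ref per failed test
          warn "Please report these failures to the maintainer:\n";
          for my $f (@$failures) {
              warn "  package $f->{package}: got '$f->{result}'",
                   exists $f->{expected} ? ", expected '$f->{expected}'" : '',
                   "\n";
          }
      };
  }

  ok(1);                 # passes
  ok('broke', 'fixed');  # fails; the hook runs at the end, in an END block

Because the hook runs after all tests have been attempted, it is a reasonable place to print version information or bug-reporting instructions, as described above.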

SEE ALSO

Test::Harness and various test coverage analysis tools.

AUTHOR

Copyright © 1998 Joshua Nathaniel Pritikin. All rights reserved.

This package is free software and is provided "as is" without express or implied warranty. It may be used, redistributed, and/or modified under the terms of the Perl Artistic License (see http://www.perl.com/perl/misc/Artistic.html).
