Chapter 1. Beginning Testing

You’ve heard about the benefits of testing. You know that it can improve your code’s reliability and maintainability as well as your development processes. You may even know about the wide range of available modules and idioms that Perl offers for testing Perl and non-Perl programs. In short, you may know everything except where to start.

The labs in this chapter walk through the most basic steps of running and writing automated tests with Perl. By the end of the chapter, you’ll know how to start and continue testing, how Perl’s testing libraries work, and where to find more libraries to ease your workload.

Installing Test Modules

One of Perl’s greatest strengths is the CPAN, an archive of thousands of reusable code libraries—generally called modules—for almost any programming problem anyone has ever solved with Perl. This includes writing and running tests. Before you can use these modules, however, you must install them. Fortunately, Perl makes this easy.

How do I do that?

The best way to install modules from the CPAN is through a packaging system that can handle the details of finding, downloading, building, and installing the modules and their dependencies.

Through the CPAN shell

On Unix-like platforms (including Mac OS X) as well as on Windows platforms if you have a C compiler available, the easiest way to install modules is by using the CPAN module that comes with Perl. To install a new version of the Test::Simple distribution, launch the CPAN shell with the cpan script:

    % cpan
    cpan shell -- CPAN exploration and modules installation (v1.7601)
    ReadLine support enabled

    cpan> install Test::Simple
    Running install for module Test::Simple
    Running make for M/MS/MSCHWERN/Test-Simple-0.54.tar.gz

    <...>

    Appending installation info to /usr/lib/perl5/5.8.6/powerpc-linux/perllocal.pod
      /usr/bin/make install UNINST=1 -- OK

Note

You can also run the CPAN shell manually with perl -MCPAN -e shell.

If Test::Simple had any dependencies (it doesn’t), the shell would have detected them and tried to install them first.

If you haven’t used the CPAN module before, it will prompt you for all sorts of information about your machine and network configuration as well as your installation preferences. Usually the defaults are fine.

Through PPM

By far, most Windows Perl installations use ActiveState’s ActivePerl distribution (http://www.activestate.com/Products/ActivePerl/), which includes the ppm utility to download, configure, build, and install modules. With ActivePerl installed, open a console window and type:

    C:\>PPM
    PPM> install Test-Simple

Note

ActivePerl also has distributions for Linux and Solaris, so these instructions also work there.

If the configuration is correct, ppm will download and install the latest Test::Simple distribution from ActiveState’s repository.

If the module that you want isn’t in the repository at all or if the version in the repository is older than you like, you have a few options.

First, you can search alternate repositories. See PodMaster’s list of ppm repositories at http://crazyinsomniac.perlmonk.org/perl/misc/Repositories.pm. For example, to use dada’s Win32 repository permanently, use the set repository command within ppm:

    C:\>PPM
    PPM> set repository dada http://dada.perl.it/PPM
    PPM> set save

By hand

If you want to install a pure-Perl module or are working on a platform that has an appropriate compiler, you can download and install the module by hand. First, find the appropriate module—perhaps by browsing http://search.cpan.org/. Then download the file and extract it to its own directory:

    $ tar xvzf Test-Simple-0.54.tar.gz
    Test-Simple-0.54/
    <...>

Note

To set up a compilation environment for Perl on Windows, consult the README.win32 file that ships with Perl.

Run the Makefile.PL program, and then issue the standard commands to build and test the module:

    $ perl Makefile.PL
    Checking if your kit is complete...
    Looks good
    Writing Makefile for Test::Simple
    $ make
    cp lib/Test/Builder.pm blib/lib/Test/Builder.pm
    cp lib/Test/Simple.pm blib/lib/Test/Simple.pm
    $ make test

Note

Be sure to download the file marked This Release, not the Latest Dev. Release, unless you plan to help develop the code.

If all of the tests pass, great! Otherwise, do what you can to figure out what failed, why, and if it will hurt you. (See “Running Tests” and “Interpreting Test Results,” later in this chapter, for more information.) Finally, install the module by running make install (as root, if you’re installing the module system-wide).
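For example, on a typical Unix-like system with sudo available, the final step looks like this:

    $ sudo make install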

Makefile.PL uses a module called ExtUtils::MakeMaker to configure and install other modules. Some modules use Module::Build instead of ExtUtils::MakeMaker. From the installation standpoint, there are two main differences: first, these modules require you to have Module::Build installed; second, the installation commands are instead:

    $ perl Build.PL
    $ perl Build
    $ perl Build test
    # perl Build install

Note

Unix users can use ./Build instead of perl Build in the instructions.

Otherwise, they work almost identically.

Windows users may need to install Microsoft’s nmake to install modules by hand. Wherever these instructions say make, type nmake instead: nmake, nmake test, and nmake install.

Note

Consult the README.win32 file from the Perl source code distribution for links to nmake.exe.

What about...

Q: How do I know the name to type when installing modules through PPM? I tried install Test-More, but it couldn’t find it!

A: Type the name of the distribution, not the module within the distribution. To find the name of the distribution, search http://search.cpan.org/ for the name of the module that you want. In this example, Test::More is part of the Test-Simple distribution. Remove the version and use that name within PPM.
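For example, to find candidate distribution names without leaving the PPM shell, try its search command (the exact output varies between PPM versions):

    C:\>PPM
    PPM> search Test-Simple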

Q: I’m not an administrator on the machine, or I don’t want to install the modules for everyone. How can I install a module to a specific directory?

A: Set the PREFIX appropriately when installing the module. For example, a PREFIX of ~/perl/lib will install these modules to that directory (at least on Unix-like machines). Then set the PERL5LIB environment variable to point there or remember to use the lib pragma to add that directory to @INC in all programs in which you want to use your locally installed modules.
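For example, with modules installed under ~/perl/lib, either of the following makes them visible to your programs (a sketch, assuming a Unix-like shell):

    $ export PERL5LIB=~/perl/lib

or, within an individual program:

    #!perl
    use lib "$ENV{HOME}/perl/lib";    # add the local module directory to @INC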

Note

See perlfaq8 to learn more about keeping your own module directory.

If you build the module by hand, run Makefile.PL like this:

    $ perl Makefile.PL PREFIX=~/perl/lib

Note

The MakeMaker 6.26 release will support the INSTALLBASE parameter; use that instead of PREFIX.

If you use CPAN, configure it to install modules to a directory under your control. Launch the CPAN shell with your own user account and answer the configuration questions. When it prompts for the PREFIX:

    Every Makefile.PL is run by perl in a separate process. Likewise we
    run 'make' and 'make install' in processes. If you have any
    parameters (e.g. PREFIX, LIB, UNINST or the like) you want to pass
    to the calls, please specify them here.

    If you don't understand this question, just press ENTER.

    Parameters for the 'perl Makefile.PL' command?
    Typical frequently used settings:

        PREFIX=~/perl      non-root users (please see manual for more hints)

        Your choice:  [  ]

enter a PREFIX pointing to the directory where you’d like to store your own modules.

If the module uses Module::Build, pass the installbase parameter instead:

    $ perl Build.PL --installbase=~/perl

See the documentation for ExtUtils::MakeMaker, CPAN, and Module::Build for more details.

Running Tests

Before you can gain any benefit from writing tests, you must be able to run them. Fortunately, there are several ways to do this, depending on what you need to know.

How do I do that?

To see real tests in action, download the latest version of Test::Harness (see http://search.cpan.org/dist/Test-Harness) from the CPAN and extract it to its own directory. Change to this directory and build the module as usual (see “Installing Test Modules,” earlier in this chapter). To run all of the tests at once, type make test:

    $ make test
    PERL_DL_NONLAZY=1 /usr/bin/perl5.8.6 "-MExtUtils::Command::MM" "-e" \
        "test_harness(0, 'blib/lib', 'blib/arch')" t/*.t
    t/00compile.........ok 1/5# Testing Test::Harness 2.46
    t/00compile.........ok
    t/assert............ok
    t/base..............ok
    t/callback..........ok
    t/harness...........ok
    t/inc_taint.........ok
    t/nonumbers.........ok
    t/ok................ok
    t/pod...............ok
    t/prove-globbing....ok
    t/prove-switches....ok
    t/strap-analyze.....ok
    t/strap.............ok
    t/test-harness......ok
            56/208 skipped: various reasons
    All tests successful, 56 subtests skipped.
    Files=14, Tests=551,  6 wallclock secs ( 4.52 cusr +  0.97 csys =  5.49 CPU)

What just happened?

make test is the third step of nearly every Perl module installation. This command runs all of the test files it can find through Test::Harness, which summarizes and reports the results. It also takes care of setting the paths appropriately for as-yet-uninstalled modules.

What about...

Q: How do I run tests for distributions that don’t use Makefile.PL?

A: make test comes from ExtUtils::MakeMaker, an old and venerable module. Module::Build is easier to use in some cases. If there’s a Build.PL file, use the commands perl Build.PL, perl Build, and perl Build test instead. Everything will behave as described here.

Q: How do I run tests individually?

A: Sometimes you don’t want to run everything through make test, as it runs all of the tests for a distribution in a specific order. If you want to run a few tests individually, use prove instead. It runs the test files you pass as command-line arguments, and then summarizes and prints the results.

Note

If you don’t have prove installed, you’re using an old version of Test::Harness. Use bin/prove instead. Then upgrade.

    $ prove t/strap*.t
    t/strap-analyze....ok
    t/strap............ok
    All tests successful.
    Files=2, Tests=284,  1 wallclock secs ( 0.66 cusr +  0.14 csys =  0.80
        CPU)

If you want the raw details, not just a summary, use prove’s verbose (-v) flag:

    $ prove -v t/assert.t
    t/assert....1..7
    ok 1 - use Test::Harness::Assert;
    ok 2 - assert() exported
    ok 3 - assert( FALSE ) causes death
    ok 4 -   with the right message
    ok 5 - assert( TRUE ) does nothing
    ok 6 - assert( FALSE, NAME )
    ok 7 -   has the name
    ok
    All tests successful.
    Files=1, Tests=7,  0 wallclock secs ( 0.06 cusr +  0.01 csys =  0.07
        CPU)

This flag prevents prove from eating the results. Instead, it prints them directly along with a short summary. This is very handy for development and debugging (see “Interpreting Test Results,” later in this chapter).

Q: How do I run tests individually without prove?

A: You can run most test files manually; they’re normally just Perl files.

    $ perl t/00compile.t
    1..5
    ok 1 - use Test::Harness;
    # Testing Test::Harness 2.42
    ok 2 - use Test::Harness::Straps;
    ok 3 - use Test::Harness::Iterator;
    ok 4 - use Test::Harness::Assert;
    ok 5 - use Test::Harness;

Oops! This ran the test against Test::Harness 2.42, the installed version, instead of Version 2.46, the new version. All of the other solutions set Perl’s @INC path correctly. When running tests manually, use the blib module to pick up the modules as built by make or perl Build:

Note

Confused about @INC and why it matters? See perldoc perlvar for enlightenment.

    $ perl -Mblib t/00compile.t
    1..5
    ok 1 - use Test::Harness;
    # Testing Test::Harness 2.46
    ok 2 - use Test::Harness::Straps;
    ok 3 - use Test::Harness::Iterator;
    ok 4 - use Test::Harness::Assert;
    ok 5 - use Test::Harness;

The -M switch causes Perl to load the given module just as if the program file contained a use blib; line.
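In other words, the command above behaves as if the test file began with these lines (a sketch of the equivalence):

    #!perl
    use blib;    # find the blib/ directory that make built and add
                 # blib/lib and blib/arch to @INC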

The TEST_FILES argument to make test can simplify this:

Note

TEST_FILES can also take a file pattern, such as TEST_FILES=t/strap*.t.

    $ make test TEST_FILES=t/00compile.t
    t/00compile....ok 1/5# Testing Test::Harness 2.46
    t/00compile....ok
    All tests successful.
    Files=1, Tests=5,  0 wallclock secs ( 0.13 cusr +  0.02 csys =  0.15
        CPU)

For verbose output, add TEST_VERBOSE=1.
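For example:

    $ make test TEST_FILES=t/00compile.t TEST_VERBOSE=1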

Interpreting Test Results

Perl has a wealth of good testing modules that interoperate smoothly through a common protocol (the Test Anything Protocol, or TAP) and common libraries (Test::Builder). You’ll probably never have to write your own testing protocol, but understanding TAP will help you interpret your test results and write better tests.

Note

All of the test modules in this book produce TAP output. Test::Harness interprets that output. Think of it as a minilanguage about test successes and failures.

How do I do that?

Save the following program to sample_output.pl:

    #!perl

    print <<END_HERE;
    1..9
    ok 1
    not ok 2
    #     Failed test (t/sample_output.t at line 10)
    #          got: '2'
    #     expected: '4'
    ok 3
    ok 4 - this is test 4
    not ok 5 - test 5 should look good too
    not ok 6 # TODO fix test 6
    # I haven't had time to add the feature for test 6
    ok 7 # skip these tests never pass in examples
    ok 8 # skip these tests never pass in examples
    ok 9 # skip these tests never pass in examples
    END_HERE

Note

Using Windows and seeing an error about END_HERE? Add a newline to the end of sample_output.pl, then read perldoc perlfaq8.

Now run it through prove (see “Running Tests,” earlier in this chapter):

    $ prove sample_output.pl
    sample_output....FAILED tests 2, 5
        Failed 2/9 tests, 77.78% okay (less 3 skipped tests: 4 okay, 44.44%)
    Failed Test      Stat Wstat Total Fail  Failed  List of Failed
    ------------------------------------------------------------------------
    sample_output.pl                9    2  22.22%  2 5
    3 subtests skipped.
    Failed 1/1 test scripts, 0.00% okay. 2/9 subtests failed, 77.79% okay.

What just happened?

prove interpreted the output of the script as it would the output of a real test. In fact, there’s no effective difference—a real test might produce that exact output.

The lines of the test correspond closely to the results. The first line of the output is the test plan. In this case, it tells the harness to plan to run 9 tests. The second line of the report shows that 9 tests ran, but two failed: tests 2 and 5, both of which start with not ok.

The report also mentions three skipped tests. These are tests 7 through 9, all of which contain the text # skip. They count as successes, not failures. (See "Skipping Tests" in Chapter 2 to learn why.)

That leaves one curious line, test 6. It starts with not ok, but it does not count as a failure because of the text # TODO. The test author expected this test to fail but left it in and marked it appropriately. (See "Marking Tests as TODO" in Chapter 2.)
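Chapter 2 covers both directives in detail, but as a preview, here is a minimal sketch of how a test file might emit them, using the SKIP and TODO blocks of Test::More (introduced later in this chapter):

    use Test::More tests => 2;

    SKIP: {
        # emits "ok 1 # skip these tests never pass in examples"
        skip 'these tests never pass in examples', 1;
        ok( 0, 'this test never even runs' );
    }

    TODO: {
        # emits "not ok 2 # TODO fix test 6", an expected failure
        local $TODO = 'fix test 6';
        ok( 0, 'this failure does not count' );
    }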

The test harness ignored all of the rest of the output, which consists of developer diagnostics. When developing, it’s often useful to look at the test output in its entirety, whether by using prove -v or running the tests directly through perl (see “Running Tests,” earlier in this chapter). This prevents the harness from suppressing the diagnostic output, as found with the second test in the sample output.

What about...

Q: What happens when the actual number of tests differs from the number planned?

A: Running the wrong number of tests counts as a failure. Save the following test as too_few_tests.t:

    use Test::More tests => 3;

    pass( 'one test'  );
    pass( 'two tests' );

Run it with prove:

    $ prove too_few_tests.t
    too_few_tests....ok 2/3# Looks like you planned 3 tests but only ran 2.
    too_few_tests....dubious
            Test returned status 1 (wstat 256, 0x100)
    DIED. FAILED test 3
            Failed 1/3 tests, 66.67% okay
    Failed Test     Stat Wstat Total Fail  Failed  List of Failed
    ------------------------------------------------------------------------
    too_few_tests.t    1   256     3    2  66.67%  3
    Failed 1/1 test scripts, 0.00% okay. 1/3 subtests failed, 66.67% okay.

Test::More complained about the mismatch between the test plan and the number of tests that actually ran. The same goes for running too many tests. Save the following code as too_many_tests.t:

    use Test::More tests => 2;

    pass( 'one test'    );
    pass( 'two tests'   );
    pass( 'three tests' );

Run it with prove:

    $ prove -v too_many_tests.t
    too_many_tests....ok 3/2# Looks like you planned 2 tests but ran 1 extra.
    too_many_tests....dubious
            Test returned status 1 (wstat 256, 0x100)
    DIED. FAILED test 3
            Failed 1/2 tests, 50.00% okay
    Failed Test      Stat Wstat Total Fail  Failed  List of Failed
    ------------------------------------------------------------------------
    too_many_tests.t    1   256     2    1  50.00%  3
    Failed 1/1 test scripts, 0.00% okay. -1/2 subtests failed, 150.00% okay.

This time, the harness interpreted the presence of the third test as a failure and reported it as such. Again, Test::More warned about the mismatch.

Writing Your First Test

This lab introduces the most basic features of Test::Simple, the simplest testing module. You’ll see how to write your own test for a simple “Hello, world!”-style program.

How do I do that?

Open your favorite text editor and create a file called hello.t. Enter the following code:

    #!perl

    use strict;
    use warnings;

    use Test::Simple tests => 1;

    sub hello_world
    {
        return "Hello, world!";
    }

    ok( hello_world() eq "Hello, world!" );

Save it. Now you have a simple Perl test file. Run it from the command line with prove:

    $ prove hello.t

You’ll see the following output:

    hello....ok
    All tests successful.
    Files=1, Tests=1,  0 wallclock secs ( 0.09 cusr +  0.00 csys =  0.09 CPU)

What just happened?

hello.t looks like a normal Perl program; it uses a couple of pragmas to catch misbehavior as well as the Test::Simple module. It defines a simple subroutine. There’s no special syntax a decent Perl programmer doesn’t already know.

The first potential twist is the use of Test::Simple. By convention, all test files need a plan to declare how many tests you expect to run. If you run the test file with perl and not prove, you’ll notice that the plan output comes before the test output:

    $ perl hello.t
    1..1
    ok 1

The other interesting piece is the ok() subroutine. It comes from Test::Simple and is the module’s only export. ok() is very, very simple. It reports a passed or a failed test, depending on the truth of its first argument. In the example, if whatever hello_world() returns is equal to the string Hello, world!, ok() will report that the test has passed.

Note

Anything that can go in an if statement is fair game for ok().
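For instance, each of the following is a reasonable test (a small sketch with made-up values):

    #!perl

    use Test::Simple tests => 3;

    my $greeting = 'Hello, world!';
    my @words    = split ' ', $greeting;

    ok( 1 + 1 == 2,        'arithmetic should still work'      );
    ok( defined $greeting, 'greeting should be defined'        );
    ok( @words == 2,       'greeting should contain two words' );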

As the output shows, there’s one test in the file, and it passed. Congratulations!

What about...

Note

In some cases, the number of tests you run is important, so providing a real plan is a good habit to cultivate.

Q: How do I avoid changing the plan number every time I add a test?

A: Writing 'no_plan' on the use line lets Test::Simple know that you’re playing it by ear. In this case, it’ll keep its own count of tests and report however many tests actually ran.

    #!perl

    use strict;
    use warnings;

    use Test::Simple 'no_plan';

    sub hello_world
    {
        return "Hello, world!";
    }

    ok( hello_world() eq "Hello, world!" );

When you declare no_plan, the test plan comes after the test output.

    $ perl hello.t
    ok 1
    1..1

This is very handy for developing, when you don’t know how many tests you’ll add. Having a plan is a nice sanity check against unexpected occurrences, though, so consider switching back to using a plan when you finish adding a batch of tests.

Q: How do I make it easier to track down which tests are failing?

A: When there are multiple tests in a file and some of them fail, descriptions help to explain what should have happened. Hopefully that will help you track down why the tests failed. It’s easy to add a description; just change the ok line.

    ok( hello_world() eq "Hello, world!",
        'hello_world() output should be sane' );

Note

Having tests is good. Having tests that make sense is even better.

You should see the same results as before when running it through prove. Running it with the verbose flag will show the test description:

    $ prove -v hello.t
    1..1
    ok 1 - hello_world() output should be sane

Q: How do I make more detailed comparisons?

A: Don’t worry; though you can define an entire test suite in terms of ok(), dozens of freely available testing modules work together nicely to provide much more powerful testing functions. That list starts with the aptly named Test::More.

Loading Modules

Most of the Perl testing libraries assume that you use them to test Perl modules. Modules are the building blocks of larger Perl programs, and well-designed code uses them appropriately. Loading modules for testing seems simple, but it has two complications: how do you know you’ve loaded the right version of the module you want to test, and how do you know that you’ve loaded it successfully?

This lab explains how to test both questions, with a little help from Test::More.

How do I do that?

Imagine that you’re developing a module to analyze sentences to prove that so-called professional writers have poor grammar skills. You’ve started by writing a module named AnalyzeSentence that performs some basic word counting. Save the following code in your library directory as AnalyzeSentence.pm:

Note

Perl is popular among linguists, so someone somewhere may be counting misplaced commas in Perl books.

    package AnalyzeSentence;

    use strict;
    use warnings;

    use base 'Exporter';

    our $WORD_SEPARATOR = qr/\s+/;
    our @EXPORT_OK      = qw( $WORD_SEPARATOR count_words words );

    sub words
    {
        my $sentence = shift;
        return split( $WORD_SEPARATOR, $sentence );
    }

    sub count_words
    {
        my $sentence = shift;
        return scalar words( $sentence );
    }

    1;

Besides checking that words() and count_words() do the right thing, a good test file should verify that the module loads and imports the two subroutines correctly. Save the following test file as analyze_sentence.t:

    #!perl

    use strict;
    use warnings;

    use Test::More tests => 5;

    my @subs = qw( words count_words );

    use_ok( 'AnalyzeSentence', @subs   );
    can_ok( __PACKAGE__, 'words'       );
    can_ok( __PACKAGE__, 'count_words' );

    my $sentence =
        'Queen Esther, ruler of the Frog-Human Alliance, briskly devours a
        monumental ice cream sundae in her honor.';

    my @words = words( $sentence );
    ok( @words == 17, 'words() should return all words in sentence' );

    $sentence = 'Rampaging ideas flutter greedily.';
    my $count = count_words( $sentence );

    ok( $count == 4, 'count_words() should handle simple sentences' );

Run it with prove:

    $ prove analyze_sentence.t
    analyze_sentence....ok
    All tests successful.
    Files=1, Tests=5,  0 wallclock secs ( 0.08 cusr +  0.01 csys =  0.09 CPU)

What just happened?

Instead of starting with Test::Simple, the test file uses Test::More. As the name suggests, Test::More does everything that Test::Simple does—and more! In particular, it provides the use_ok() and can_ok() functions shown in the test file.

use_ok() takes the name of a module to load, AnalyzeSentence in this case, and an optional list of symbols to pass to the module’s import() method. It attempts to load the module and import the symbols and passes or fails a test based on the results. It’s the test equivalent of writing:

    use AnalyzeSentence qw( words count_words );

Note

See perldoc perlmod and perldoc -f use to learn more about import().

can_ok() is the test equivalent of the can() method. The tests use it here to see if the module has exported words() and count_words() functions into the current namespace. These tests aren’t entirely necessary, as the ok() functions later in the file will fail if the functions are missing, but the import tests can fail for only two reasons: either the import has failed or someone mistyped their names in the test file.
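Here, can_ok() is roughly the same as checking can() by hand with ok() (a sketch, not what Test::More actually does internally):

    ok( __PACKAGE__->can( 'words' ),       'words() should be available'       );
    ok( __PACKAGE__->can( 'count_words' ), 'count_words() should be available' );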

Note

See perldoc UNIVERSAL to learn more about can().

What about...

Q: I don’t want to use use; I want to use require. Can I do that? How?

A: See the Test::More documentation for require_ok().

Q: What if I need to import symbols from the module as it loads?

A: If the test file depends on variables defined in the module being tested, for example, wrap the use_ok() line in a BEGIN block. Consider adding tests for the behavior of $WORD_SEPARATOR. Modify the use_ok() line and add the following lines to the end of analyze_sentence.t:

    use_ok( 'AnalyzeSentence', @subs, '$WORD_SEPARATOR' ) or exit;

    ...

    $WORD_SEPARATOR = qr/(?:\s|-)+/;
    @words    = words( $sentence );
    ok( @words == 18, '... respecting $WORD_SEPARATOR, if set' );

Run the test:

    $ prove t/analyze_sentence.t
    t/analyze_sentence....Global symbol "$WORD_SEPARATOR" requires explicit
        package name at t/analyze_sentence.t line 28.
    Execution of t/analyze_sentence.t aborted due to compilation errors.
    # Looks like your test died before it could output anything.
    t/analyze_sentence....dubious
            Test returned status 255 (wstat 65280, 0xff00)
               FAILED--1 test script could be run, alas--no output ever seen

With the strict pragma enabled, when Perl reaches the last lines of the test file in its compile stage, it hasn’t seen the variable named $WORD_SEPARATOR yet. Only when it runs the use_ok() line at runtime will it import the variable.

Change the use_ok() line once more:

    BEGIN { use_ok( 'AnalyzeSentence', @subs, '$WORD_SEPARATOR' ) or exit; }

Note

See perldoc perlmod for more information about BEGIN and compile time.

Then run the test again:

    $ prove t/analyze_sentence.t
    t/analyze_sentence....ok
    All tests successful.
    Files=1, Tests=6,  0 wallclock secs ( 0.09 cusr +  0.00 csys =  0.09
        CPU)

Q: What if Perl can’t find AnalyzeSentence or it fails to compile?

A: If there’s a syntax error somewhere in the module, some of your tests will pass and others will fail mysteriously. The successes and failures depend on what Perl has already compiled by the time it reaches the error. It’s difficult to recover from this kind of failure.

The best thing you can do may be to quit the test altogether:

    use_ok( 'AnalyzeSentence' ) or exit;

Note

Some testers prefer to use die() with an informative error message.

If you’ve specified a plan, Test::Harness will note the mismatch between the number of tests run (probably one) and the number of tests expected. Either way, it’s much easier to see the compilation failure if it’s the last failure reported.
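If you prefer the die() approach, the line might read like this (the message is purely illustrative):

    use_ok( 'AnalyzeSentence' ) or die "Cannot continue without AnalyzeSentence\n";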

Improving Test Comparisons

ok() may be the basis of all testing, but it can be inconvenient to have to reduce every test in your system to a single conditional expression. Fortunately, Test::More provides several other testing functions that can make your work easier. You’ll likely end up using these functions more often than ok().

This lab demonstrates how to use the most common testing functions found in Test::More.

How do I do that?

The following listing tests a class named Greeter, which takes the name and age of a person and allows her to greet other people. Save this code as greeter.t:

    #!perl

    use strict;
    use warnings;

    use Test::More tests => 4;

    use_ok( 'Greeter' ) or exit;

    my $greeter = Greeter->new( name => 'Emily', age => 21 );
    isa_ok( $greeter, 'Greeter' );

    is(   $greeter->age(),   21,
        'age() should return age for object' );
    like( $greeter->greet(), qr/Hello, .+ is Emily!/,
        'greet() should include object name' );

Note

The examples in “Writing Your First Test,” earlier in this chapter, will work the same way if you substitute Test::More for Test::Simple; Test::More is a superset of Test::Simple.

Now save the module being tested in your library directory as Greeter.pm:

    package Greeter;

    sub new
    {
        my ($class, %args) = @_;
        bless \%args, $class;
    }

    sub name
    {
        my $self = shift;
        return $self->{name};
    }

    sub age
    {
        my $self = shift;
        return $self->{age};
    }

    sub greet
    {
        my $self = shift;
        return "Hello, my name is " . $self->name() . "!";
    }

    1;

Running the file from the command line with prove should reveal four successful tests:

    $ prove greeter.t
    greeter.t....ok
    All tests successful.
    Files=1, Tests=4,  0 wallclock secs ( 0.07 cusr +  0.03 csys =  0.10 CPU)

What just happened?

This program starts by loading the Greeter module and creating a new Greeter object for Emily, age 21. The first test checks to see if the constructor returned an actual Greeter object. isa_ok() performs several checks: for example, it verifies that the variable is actually a defined reference. It fails if the variable is undefined, is not a reference, or is an object of any class other than the appropriate class or a derived class.
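Written by hand with plain ok(), that object test might look something like this sketch (isa_ok()’s actual implementation produces far better diagnostics):

    use Scalar::Util 'blessed';

    ok( blessed( $greeter ) && $greeter->isa( 'Greeter' ),
        'The object isa Greeter' );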

The next test checks that the object’s age matches the age set for Emily in the constructor. Where a test using Test::Simple would have to perform this comparison manually, Test::More provides the is() function that takes two arguments to compare, along with the test description. It compares the values, reporting a successful test if they match and a failed test if they don’t.

Note

Test::More::is() uses a string comparison. This isn’t always the right choice for your data. See Test::More::cmp_ok() to perform other comparisons.
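For example, a strictly numeric version of the age test might use cmp_ok() with an explicit operator:

    cmp_ok( $greeter->age(), '==', 21,
        'age() should return age for object' );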

Similarly, the final test uses like() to compare the first two arguments. The second argument is a regular expression compiled with the qr// operator. like() compares this regular expression against the first argument—in this case, the result of the call to $greeter->greet()—and reports a successful test if it matches and a failed test if it doesn’t.

Avoiding the need to write the comparisons manually is helpful, but the real improvement in this case is how these functions behave when tests fail. Add two more tests to the file and remember to change the test plan to declare six tests instead of four. The new code is:

    use Test::More tests => 6;

    ...

    is(   $greeter->age(),   22,
        'Emily just had a birthday' );
    like( $greeter->greet(), qr/Howdy, pardner!/,
        '... and she talks like a cowgirl' );

Note

See “Regexp Quote-Like Operators” in perlop to learn more about qr//.

Run the tests again with prove’s verbose mode:

    $ prove -v greeter.t
    greeter.t....1..6
    ok 1 - use Greeter;
    ok 2 - The object isa Greeter
    ok 3 - age() should return age for object
    ok 4 - greet() should include object name
    not ok 5 - Emily just had a birthday
    #     Failed test (greeter.t at line 18)
    #          got: '21'
    #     expected: '22'
    not ok 6 - ... and she talks like a cowgirl
    #     Failed test (greeter.t at line 20)
    #                   'Hello, my name is Emily!'
    #     doesn't match '(?-xism:Howdy, pardner!)'
    # Looks like you failed 2 tests of 6.
    dubious
            Test returned status 2 (wstat 512, 0x200)
    DIED. FAILED tests 5-6
            Failed 2/6 tests, 66.67% okay
    Failed Test Stat Wstat Total Fail  Failed  List of Failed
    ----------------------------------------------------------------------------
    greeter.t      2   512     6    2  33.33%  5-6
    Failed 1/1 test scripts, 0.00% okay. 2/6 subtests failed, 66.67% okay.

Note

The current version of prove doesn’t display the descriptions of failing tests, but it does display diagnostic output.

Notice that the output for the new tests—those that shouldn’t pass—contains debugging information, including what the test saw, what it expected to see, and the line number of the test. If there’s only one benefit to using functions such as is() and like() over plain ok(), it’s these diagnostics.

What about...

Q: How do I test things that shouldn’t match?

A: Test::More provides isnt() and unlike(), which work the same way as is() and like(), except that the tests pass if the arguments do not match. Changing the fifth test to use isnt() and the sixth test to use unlike() will make them pass, though the test descriptions will seem weird.
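Here is a sketch of the rewritten tests:

    isnt(   $greeter->age(),   22,
        'Emily just had a birthday' );
    unlike( $greeter->greet(), qr/Howdy, pardner!/,
        '... and she talks like a cowgirl' );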
