Perl Testing: A Developer's Notebook by Chromatic, Ian Langworth

Loading Modules

Most of the Perl testing libraries assume that you use them to test Perl modules. Modules are the building blocks of larger Perl programs and well-designed code uses them appropriately. Loading modules for testing seems simple, but it has two complications: how do you know you’ve loaded the right version of the module you want to test, and how do you know that you’ve loaded it successfully?

This lab explains how to answer both questions, with a little help from Test::More.

How do I do that?

Imagine that you’re developing a module to analyze sentences to prove that so-called professional writers have poor grammar skills. You’ve started by writing a module named AnalyzeSentence that performs some basic word counting. Save the following code in your library directory as AnalyzeSentence.pm:


Perl is popular among linguists, so someone somewhere may be counting misplaced commas in Perl books.

    package AnalyzeSentence;

    use strict;
    use warnings;

    use base 'Exporter';

    our $WORD_SEPARATOR = qr/\s+/;
    our @EXPORT_OK      = qw( $WORD_SEPARATOR count_words words );

    sub words
    {
        my $sentence = shift;
        return split( $WORD_SEPARATOR, $sentence );
    }

    sub count_words
    {
        my $sentence = shift;
        return scalar words( $sentence );
    }

    1;


Besides checking that words() and count_words() do the right thing, a good test file should verify that the module loads and imports the two subroutines correctly. Save the following test file as analyze_sentence.t:


    use strict;
    use warnings;

    use Test::More tests => 5;

    my @subs = qw( words count_words );

    use_ok( 'AnalyzeSentence', @subs   );
    can_ok( __PACKAGE__, 'words'       );
    can_ok( __PACKAGE__, 'count_words' );

    my $sentence =
        'Queen Esther, ruler of the Frog-Human Alliance, briskly devours a
        monumental ice cream sundae in her honor.';

    my @words = words( $sentence );
    ok( @words == 17, 'words() should return all words in sentence' );

    $sentence = 'Rampaging ideas flutter greedily.';
    my $count = count_words( $sentence );

    ok( $count == 4, 'count_words() should handle simple sentences' );

Run it with prove:

    $ prove analyze_sentence.t
    All tests successful.
    Files=1, Tests=5,  0 wallclock secs ( 0.08 cusr +  0.01 csys =  0.09 CPU)

What just happened?

Instead of starting with Test::Simple, the test file uses Test::More. As the name suggests, Test::More does everything that Test::Simple does—and more! In particular, it provides the use_ok() and can_ok() functions shown in the test file.

use_ok() takes the name of a module to load, AnalyzeSentence in this case, and an optional list of symbols to pass to the module’s import() method. It attempts to load the module and import the symbols and passes or fails a test based on the results. It’s the test equivalent of writing:

    use AnalyzeSentence qw( words count_words );


See perldoc perlmod and perldoc -f use to learn more about import().

can_ok() is the test equivalent of the can() method. The tests use it here to check that the module has exported the words() and count_words() functions into the current namespace. These tests aren't strictly necessary, as the ok() tests later in the file will fail if the functions are missing, but they can fail for only two reasons: either the import failed or someone mistyped the function names in the test file. That narrowness makes their failures easy to diagnose.


See perldoc UNIVERSAL to learn more about can().
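can_ok() isn't limited to checking imports; it accepts a class name or a blessed object, followed by any number of method names. A minimal sketch, using a made-up Counter class that is not part of this lab:

```perl
use strict;
use warnings;
use Test::More tests => 2;

# A throwaway class for demonstration purposes only.
package Counter;
sub new       { my $class = shift; return bless { count => 0 }, $class }
sub increment { my $self  = shift; return ++$self->{count} }

package main;

# can_ok() takes a class name or a blessed object, then method names.
can_ok( 'Counter', 'new', 'increment' );
can_ok( Counter->new(), 'increment' );
```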

What about...

Q: I don’t want to use use; I want to use require. Can I do that? How?

A: See the Test::More documentation for require_ok().
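require_ok() behaves like use_ok() without the import step: it loads the module at runtime and passes or fails based on the result, but exports nothing. A short self-contained sketch, using the core module File::Spec only as a convenient target:

```perl
use strict;
use warnings;
use Test::More tests => 2;

# require_ok() loads a module at runtime without calling import(),
# so nothing is exported into the current namespace.
require_ok( 'File::Spec' );

# Nothing was imported, so reach the loaded code through the package name.
ok( File::Spec->can( 'catfile' ), 'File::Spec loaded and provides catfile()' );
```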

Q: What if I need to import symbols from the module as it loads?

A: If the test file depends on variables that the tested module defines, wrap the use_ok() line in a BEGIN block. Consider adding tests for the behavior of $WORD_SEPARATOR. Modify the use_ok() line and add the following lines to the end of analyze_sentence.t:

    use_ok( 'AnalyzeSentence', @subs, '$WORD_SEPARATOR' ) or exit;


    $WORD_SEPARATOR = qr/(?:\s|-)+/;
    @words    = words( $sentence );
    ok( @words == 18, '... respecting $WORD_SEPARATOR, if set' );

Run the test:

    $ prove t/analyze_sentence.t
    t/analyze_sentence....Global symbol "$WORD_SEPARATOR" requires explicit
        package name at t/analyze_sentence.t line 28.
    Execution of t/analyze_sentence.t aborted due to compilation errors.
    # Looks like your test died before it could output anything.
            Test returned status 255 (wstat 65280, 0xff00)
               FAILED--1 test script could be run, alas--no output ever seen

With the strict pragma enabled, when Perl reaches the last lines of the test file in its compile stage, it hasn’t seen the variable named $WORD_SEPARATOR yet. Only when it runs the use_ok() line at runtime will it import the variable.

Change the use_ok() line once more:

    BEGIN { use_ok( 'AnalyzeSentence', @subs, '$WORD_SEPARATOR' ) or exit; }


See perldoc perlmod for more information about BEGIN and compile time.

Then run the test again:

    $ prove t/analyze_sentence.t
    All tests successful.
    Files=1, Tests=6,  0 wallclock secs ( 0.09 cusr +  0.00 csys =  0.09 CPU)

Q: What if Perl can’t find AnalyzeSentence or it fails to compile?

A: If there’s a syntax error somewhere in the module, some of your tests will pass and others will fail mysteriously. The successes and failures depend on what Perl has already compiled by the time it reaches the error. It’s difficult to recover from this kind of failure.

The best thing you can do may be to quit the test altogether:

    use_ok( 'AnalyzeSentence' ) or exit;


Some testers prefer to use die() with an informative error message.
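A minimal sketch of the die() style; the message text is an example, not from the lab:

```perl
use strict;
use warnings;
use Test::More tests => 5;

# If the module fails to load, abort with a clear message rather than
# letting every remaining test fail confusingly.
use_ok( 'AnalyzeSentence' )
    or die "AnalyzeSentence failed to load; aborting this test file\n";
```

Recent versions of Test::More also provide BAIL_OUT(), which stops not just the current test file but the entire test run.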

If you’ve specified a plan, Test::Harness will note the mismatch between the number of tests run (probably one) and the number of tests expected. Either way, it’s much easier to see the compilation failure if it’s the last failure reported.
