Chapter 7. Measuring Coverage with Coverage.py

How confident are you in a code change when your tests pass?

If you look at tests as a way to detect bugs, you can describe their sensitivity and specificity.

The sensitivity of your test suite is the probability of a test failure when there’s a defect in the code. If large parts of the code are untested, or if the tests don’t check for expected behavior, you have low sensitivity.

The specificity of your tests is the probability that they will pass if the code is free of defects. If your tests are flaky (they fail intermittently) or brittle (they fail when you change implementation details), then you have low specificity. Invariably, people stop paying attention to failing tests. This chapter isn’t about specificity, though.
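Stated in the language of conditional probability (this is just a restatement of the two definitions above):

    sensitivity = P(test failure | code has a defect)
    specificity = P(tests pass   | code has no defect)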

There’s a great way to boost the sensitivity of your tests: when you add or change behavior, write a failing test before the code that makes it pass. If you do this, your test suite will capture your expectations for the code.
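For example, suppose you want a function that joins a list of words into a sentence. In a test-first workflow, you write the failing test before the code that satisfies it. This is only a sketch; the module and function names (sentences, join_words) are invented for illustration:

    # test_sentences.py -- written first; fails until join_words exists
    from sentences import join_words

    def test_join_words_builds_a_sentence():
        assert join_words(["hello", "world"]) == "hello world."

    # sentences.py -- written second, with just enough code to pass
    def join_words(words):
        return " ".join(words) + "."

Running the test suite before writing sentences.py fails with an import error; afterward it passes, and the test now captures your expectation of the code's behavior.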

Another effective strategy is to test your software with the various inputs and environmental constraints that you expect it to encounter in the real world. Cover the edge cases of a function, like empty lists or negative numbers. Test common error scenarios, not just the “happy path.”
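Here is a sketch of what this might look like with pytest, covering the happy path, an edge case, and two error scenarios. The parse_age function and all test names are hypothetical, chosen for illustration:

    import pytest

    def parse_age(value):
        age = int(value)  # raises ValueError for non-numeric input
        if age < 0:
            raise ValueError("age cannot be negative")
        return age

    def test_happy_path():
        assert parse_age("42") == 42

    def test_edge_case_zero():
        assert parse_age("0") == 0

    def test_rejects_negative_numbers():
        with pytest.raises(ValueError):
            parse_age("-1")

    def test_rejects_non_numeric_input():
        with pytest.raises(ValueError):
            parse_age("not a number")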

Code coverage is a measure of the extent to which the test suite exercises your code. Full coverage doesn’t guarantee high sensitivity: if your tests cover every line in your code, you can still have bugs. ...
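As a minimal preview of the tool this chapter covers, and assuming your test suite runs under pytest, a typical coverage.py session runs the suite under coverage measurement and then prints a report:

    $ coverage run -m pytest
    $ coverage report --show-missing

The --show-missing flag adds a column listing the line numbers your tests never executed.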
