10.0 Introduction
Developer-level testing is critical to the development lifecycle; it catches potential bugs at the earliest possible moment. Numerous studies show that the cost of fixing an issue escalates rapidly as a project progresses. Catch bugs early and you save big money; catch them late and you lose big money (and sleep) as you rework potentially significant portions of your system.
In the not-too-distant past, software testing was mostly handled in one of the following ways:
Developers used debuggers to step through the application (which took way too much time and often wasn’t done at all).
Developers used the application’s GUI and stepped through a bit of functionality to confirm that everything appeared to be working correctly.
Developers relied on scripts or applications to test the application’s user interface.
These user-interface tests were often extremely brittle because they were very tightly coupled to the functionality driving the interface. That meant a small change in the underlying code often drove large changes in the automation—thereby eliminating any efficiency gained from that automation.
Unit testing, where a separate piece of code exercises a small, specific portion of the system under test, has been around for quite some time, but for whatever reason it wasn't widely practiced. Fortunately, the concept of unit testing has undergone a sea change in the last decade, driven mostly by an ever-growing community ...
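To make the idea concrete, here is a minimal sketch of what such a test might look like in C# with an NUnit-style framework. The Calculator class, its Add method, and the test names are hypothetical stand-ins, not examples from this chapter; the point is simply that a small, separate piece of test code exercises one specific behavior of the system under test and verifies the result.

using NUnit.Framework;

// Hypothetical class under test, standing in for some small piece of your system.
public class Calculator
{
    public int Add(int a, int b)
    {
        return a + b;
    }
}

[TestFixture]
public class CalculatorTests
{
    [Test]
    public void Add_TwoPositiveNumbers_ReturnsSum()
    {
        // Arrange: create the small piece of the system under test.
        var calculator = new Calculator();

        // Act: exercise one specific behavior.
        int result = calculator.Add(2, 3);

        // Assert: verify the expected outcome.
        Assert.AreEqual(5, result);
    }
}

A test runner executes each method marked [Test] and reports a pass or fail, so a broken assumption in the code surfaces immediately rather than weeks later in the user interface.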