Chapter 5. Testing with Copilot
Now that you understand how Copilot works and how to interact with it through the editor and chat interfaces, we can move on to other ways it can increase productivity. Copilot simplifies routine tasks that would otherwise consume significant time and resources. Automating that work lets you devote your cycles, thinking, and focus to the more complex tasks involved in creating software.
This chapter focuses on one particular capability: using Copilot to generate tests. In the following sections, you’ll see how Copilot can do the following:
- Provide guidance on testing
- Create standard test cases for unit testing and integration testing
- Build out edge cases
- Utilize custom testing instructions
- Write tests using the framework of your choice
- Help implement best practices, like test-driven development
- Use Copilot’s Agent mode to help drive test creation
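As a preview of the kinds of output the sections ahead discuss, here is a hypothetical illustration. Given a simple function like `divide` below, Copilot might suggest unit tests covering both standard cases and an edge case. The function, test names, and the choice of Python's `unittest` framework here are illustrative assumptions, not examples taken from Copilot itself:

```python
import unittest

# A simple function we might ask Copilot to generate tests for.
def divide(a, b):
    """Return a divided by b, raising ValueError on division by zero."""
    if b == 0:
        raise ValueError("Cannot divide by zero")
    return a / b

# Tests of the kind Copilot typically suggests: standard cases,
# a negative-number case, and an edge case for division by zero.
class TestDivide(unittest.TestCase):
    def test_standard_case(self):
        self.assertEqual(divide(10, 2), 5)

    def test_negative_numbers(self):
        self.assertEqual(divide(-9, 3), -3)

    def test_divide_by_zero(self):
        with self.assertRaises(ValueError):
            divide(1, 0)

if __name__ == "__main__":
    unittest.main()
```

Note how the suggested tests go beyond the "happy path" to include a divide-by-zero edge case; later sections look at prompting Copilot to build out such edge cases deliberately.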
Generative AI and Testing
When generating tests, Copilot’s results may vary significantly in content, suitability, and even accuracy. This usually depends on the amount of context provided, the interface, and the prompt.
Given the nature of generative AI, nothing is guaranteed to be exactly what you want. It is therefore important to review the suggested tests to ensure that they are valid and a good fit. If they're not what you expected, you may need to edit them, or refine your prompt and try again.
After reading this chapter, you’ll have a solid framework for harnessing this capability. Using that, you’ll be able to leverage ...