Chapter 5. Increasing Test Coverage Through AI Generation
Our previous chapter focused on how to use the SDA for one of the main tasks engineers perform during the SDLC: developing code. As emphasized in the other chapters, code developed with the help of an SDA still needs the same kinds of validation as any other code. After initial review and approval by the SDA user, the code needs to undergo thorough testing. Unfortunately, creating test cases is not always as straightforward, or as prioritized, as it should be.
In this chapter, we look at how to use the SDA to help with testing your code, ensuring good test coverage, and simplifying and automating best practices.
GenAI and Testing
By the nature of GenAI, nothing is guaranteed to be exactly what you want. So, as with code suggestions and chat answers, it is important to review the suggested tests to ensure they are valid and a good fit. If they are not, edit them, or refine your prompt and iterate.
Testing Context
A key area where an SDA can help engineers is in learning a skill, language, or framework that is new to them. This applies whether you are creating new content for a project or maintaining existing content. The AI can be used to fill gaps in knowledge and bootstrap functionality. This holds true for testing as much as for any other coding. For example, if you are not familiar with how to do unit testing in the context you are working in, you can ask the AI.
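To make this concrete, here is a minimal sketch of the kind of unit tests an assistant might suggest when asked "how do I unit test this function with pytest?" The function and test names (`parse_price`, `test_parse_price_*`) are illustrative assumptions, not from this chapter, and a real assistant's output will vary:

```python
# Hypothetical function under test (name and behavior are assumptions for illustration).
def parse_price(text: str) -> float:
    """Parse a price string like '$1,234.56' into a float."""
    return float(text.replace("$", "").replace(",", ""))


# Tests in the style an SDA might suggest for pytest: plain functions
# named test_*, using bare assert statements (pytest discovers them automatically).
def test_parse_price_simple():
    assert parse_price("$10.00") == 10.0


def test_parse_price_with_commas():
    assert parse_price("$1,234.56") == 1234.56
```

Note what review would catch here: the suggested tests cover only the happy path. Edge cases such as empty strings, negative amounts, or malformed input are missing, which is exactly the kind of gap you should spot when validating AI-generated tests.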
Suppose you are tasked ...