Chapter 4. Testing with Copilot

Now that we understand how Copilot works and how to interact with the editor and chat interfaces, we can move on to other ways it can increase your productivity. Copilot simplifies routine tasks - tasks that, while not difficult, can consume a lot of your time and resources. By using Copilot to automate such work, you can devote more of your cycles, thinking, and focus to the more complex tasks involved in creating software.

This chapter focuses on one particular capability - using Copilot to generate tests. In the sections that follow, you’ll see Copilot provide guidance for testing in new languages, generate test cases on demand and through prompts, build out edge cases for testing, write tests using the framework of your choice, and make it simple to implement best practices, like Test-Driven Development.

Generative AI and Testing

As with all of Copilot’s functionality, there is a standard disclaimer: the results Copilot produces when generating tests may vary significantly in content, suitability, and even accuracy. How much they vary depends largely on the amount of context provided and on the interface and prompt used.

Per the nature of generative AI, nothing is guaranteed to be exactly like you want. So, as with the code suggestions and chat answers, it is important to review the suggested tests, ensure they are valid and a good fit, and edit them or refactor your prompt and try again if needed.

After reading this chapter, you’ll have a solid framework for using this capability - one that lets you leverage Copilot to reduce the need to create tests from scratch.

Let’s start with the broadest question we can ask Copilot about testing: “How do I test my code?”

How do I test?

The ability to pivot and learn new programming languages and frameworks quickly is a staple expectation for most software professionals today. You are likely already comfortable with how to write tests for any language you’ve been using regularly. But having to switch or migrate code to a different environment can present a substantial learning curve. As discussed in previous chapters, one of the features of Copilot that is helpful in these instances is the ability to ask how to write or translate code in the new language. The same can be done for testing.

We’re using our Python program again as a quick example. Suppose that you are new to Python, have implemented the function, but have little or no idea how to test the code you’ve just produced. Certainly, you could research on Stack Overflow or any number of other forums or sites, or ask colleagues. But you can also simply ask Copilot:

How do I add tests to my project? 

From this simple query, Copilot generates extensive information as shown in Figure 4-2.

Figure 4-2. Output from how to test chat query

The output includes a natural language (conversational) set of instructions. It also produces example code and (not shown in the figure) simple guidance on how to run the tests. Note that the Chat output also suggested a framework to use (unittest), mentioning that it is built into Python. Copilot then tells you how to use it, along with providing a class implementation with example tests - all directly relevant to your code. This is pretty impressive for one general query!

For reference, here’s the full text of the output from this conversation with Copilot.

Me: How do I add tests to my project?

GitHub Copilot: To add tests to your project, you can use a testing framework like `unittest` which is built-in in Python. Here's how you can do it:

1. Create a new Python file in your project directory, let's call it `test_prime.py`.
2. In this file, import the `unittest` module and the function you want to test from your `prime.py` file.
3. Create a new class that inherits from `unittest.TestCase`.
4. Within this class, define methods that start with the word `test`. Each of these methods should represent one test case.
5. In each test method, use the `assertEqual()` or `assertTrue()` or `assertFalse()` methods of the `unittest.TestCase` class to check the output of your function against the expected output.

Here's an example of what your `test_prime.py` file could look like:

```python
import unittest
from prime import is_prime

class TestPrime(unittest.TestCase):
    def test_is_prime(self):
        self.assertFalse(is_prime(1))
        self.assertTrue(is_prime(2))
        self.assertTrue(is_prime(3))
        self.assertFalse(is_prime(4))
        self.assertTrue(is_prime(5))

if __name__ == '__main__':
    unittest.main()
```

To run the tests, you can simply run the `test_prime.py` file.

From here, you can use the available options in the Chat dialog that we discussed in Chapter 3 to copy the code or put it in a new file.
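
Copilot’s suggestion is a perfectly workable starting point, but you’re free to refine it before adopting it. For instance, one common refinement (not something Copilot proposed here) is to table-drive the cases with unittest’s subTest, so that every failing value is reported individually instead of the run stopping at the first failed assertion:

import unittest
from prime import is_prime

class TestPrime(unittest.TestCase):
    def test_known_values(self):
        # Each (input, expected) pair runs in its own subTest, so a failure
        # for one value doesn't hide failures for the others.
        cases = [(1, False), (2, True), (3, True), (4, False), (5, True)]
        for number, expected in cases:
            with self.subTest(number=number):
                self.assertEqual(is_prime(number), expected)

if __name__ == '__main__':
    unittest.main()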

For an additional example, let’s tackle something that may be less familiar to you - testing SQL code. For some of my training courses, I use a large demo file of SQL statements that creates a number of tables to populate a database for a university curriculum, schedule, faculty, etc.

If I open that up in the editor and ask Copilot how to test the content of that file, Copilot replies as shown in Figure 4-3.

Figure 4-3. Copilot answering how I can test the SQL code

Since the screenshot is not readable, here’s the complete text of the output from Copilot.

Me: How do I test the code in #file:create-tables.sql?

GitHub Copilot: Testing SQL code, especially DDL (Data Definition Language) statements like `CREATE TABLE`, is not as straightforward as testing code in languages like Python or Java. However, there are a few ways you can ensure your SQL code is working as expected:

1. **Run the SQL script and check for errors**: The simplest way to test your SQL code is to run it in your SQL server and check if the tables are created without any errors. If there are any syntax errors or other issues, the SQL server will return an error message.

2. **Check the structure of the created tables**: After running the script, you can use a `DESCRIBE` statement to check the structure of the created tables. This will allow you to verify that the tables have the correct columns, data types, and constraints.

3. **Insert test data and run test queries**: You can insert some test data into the tables and run some test queries to ensure the tables are functioning as expected. This can include testing the `FOREIGN KEY` constraints, `NOT NULL` constraints, etc.

Here's an example of how you can do this:

```sql
-- Run the create-tables.sql script
\i create-tables.sql

-- Check the structure of the created tables
DESCRIBE training.categories;
DESCRIBE curriculum.vendors;
DESCRIBE curriculum.subjects;
DESCRIBE courses.students;

-- Insert test data
INSERT INTO training.categories (training_name) VALUES 
('Test Training');
INSERT INTO curriculum.vendors (brand_name) VALUES ('Test Brand');
-- ... and so on for the other tables

-- Run test queries
SELECT * FROM training.categories;
SELECT * FROM curriculum.vendors;
-- ... and so on for the other tables
```

Please note that the exact syntax for these commands may vary depending on the SQL server you are using. The above examples are for PostgreSQL.

Again, pretty impressive for a simple query against a single file.
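
If you want something you can rerun automatically rather than typing these statements into a SQL client by hand, one option (not something Copilot suggested here) is to drive a throwaway database from a small test script. Here’s a minimal sketch using Python’s built-in sqlite3 module; the categories table is a simplified, hypothetical stand-in, since the real tables in create-tables.sql use schema-qualified names and data types that would need adjusting for SQLite:

import sqlite3
import unittest

class TestCreateTables(unittest.TestCase):
    def setUp(self):
        # An in-memory database gives every test a clean slate
        self.conn = sqlite3.connect(":memory:")
        # In a real project you might read and execute create-tables.sql here;
        # this simplified DDL stands in for it
        self.conn.executescript(
            "CREATE TABLE categories ("
            "  id INTEGER PRIMARY KEY,"
            "  training_name TEXT NOT NULL);"
        )

    def tearDown(self):
        self.conn.close()

    def test_table_has_expected_columns(self):
        columns = [row[1] for row in
                   self.conn.execute("PRAGMA table_info(categories)")]
        self.assertIn("training_name", columns)

    def test_not_null_constraint(self):
        # Inserting NULL into a NOT NULL column should be rejected
        with self.assertRaises(sqlite3.IntegrityError):
            self.conn.execute(
                "INSERT INTO categories (training_name) VALUES (NULL)")

if __name__ == "__main__":
    unittest.main()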

This section has shown how you can utilize Copilot to create instructions and code for testing when you need to start from scratch. More commonly, you may be coding in a language that you are already familiar with, and just want Copilot to help create the boilerplate code for things like unit testing. In the next section, you’ll see several ways to use Copilot for those cases.

Creating Tests

Given the necessity of creating tests for practices like Continuous Integration and Test-Driven Development (TDD), manually crafting tests can represent a significant portion of your time and workload on a project. With Copilot, you can generate tests through several different approaches, and at varying degrees of complexity.

Most commonly, the tests you generate with Copilot will be in the form of unit tests - designed to do basic testing for a single function or procedure. But the range and depth of Copilot-generated tests can vary depending on the prompt and interface used. We’ll learn more about the different approaches in the next few parts of this section.

The /tests command

Probably the easiest way to have Copilot generate tests for you is the built-in slash command /tests, used inline from the IDE’s editor. This command operates on code you select in the IDE and then attempts to create some basic unit tests appropriate to that code. You enter the command in the inline Chat interface, invoked with the CMD+I key combination.

Assume you’re once again working with a simple function to determine whether a number is prime. The particular implementation doesn’t matter, but here’s a listing of the code used for the examples in this section.

def is_prime(number):
    if number <= 1:
        return False
    for n in range(2, int(number**0.5) + 1):
        if number % n == 0:
            return False
    return True

Figure 4-4 shows an example of using the /tests command. For the simplest usage, you highlight the code to be tested, and then enter the command in the interface as shown.

Figure 4-4. Invoking Copilot to generate tests via the /tests command

After running that command, Copilot generates some basic assert-based tests. The next listing shows one example of generated output from this action.

    assert is_prime(2) == True
    assert is_prime(3) == True
    assert is_prime(5) == True
    assert is_prime(7) == True
    assert is_prime(11) == True
    assert is_prime(13) == True
    assert is_prime(17) == True
    assert is_prime(19) == True
    assert is_prime(23) == True
    assert is_prime(29) == True
    assert is_prime(31) == True
    assert is_prime(37) == True
    assert is_prime(41) == True
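
Notice that every assertion in this particular suggestion checks a prime number, so the tests would still pass if is_prime simply returned True for every input. Before accepting output like this, it’s worth adding a few composite values as well (for example, assert is_prime(4) == False) so that both outcomes of the function are exercised.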

These tests may first be shown in a pop-up dialog window. You can then add them into a separate file by accepting them from the resulting dialog, or by using the checkmark control as shown in Figure 4-5.

Figure 4-5. Proposed tests from running /tests

You can achieve the same results as running /tests in the inline Chat interface by invoking Generate Tests from the Copilot section of the context menu - see Figure 4-6.

Figure 4-6. Generating tests via the context menu

As noted in Chapter 3, when making requests through the inline Chat interface, Copilot tends to favor simple or routine suggestions that are quick to generate. When the same prompt is given through the separate Chat interface, the results are usually more complex and more comprehensive. This is because Copilot can take more time crafting suggestions and has a larger space in which to output results. It can be more verbose, just as an email to a person explaining how to do something is usually more verbose than code written to tell a computer how to do it.

For example, if you provide the same /tests command in the separate Chat interface, you’ll usually get a more elaborate and varied set of test cases. The start of an example output is shown in Figure 4-7.

Figure 4-7. Suggested test cases from using the /tests command in the dedicated Chat interface

Using #selection

If you have a larger set of code and you want to limit test generation (or any other action) to a smaller set, you can highlight and select the code in the editor and then use the #selection qualifier in the Chat query to ensure Copilot only considers the selected code in its processing.
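
For example, after selecting just the function you care about, a prompt along these lines (the exact wording is up to you) keeps the focus on it:

generate unit tests for the code in #selection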

This response from the separate Chat interface to the same /tests command provided wider coverage of test cases, along with supporting imports and a class definition. For comparison, the listing below shows the full set of test cases Copilot provided.

import unittest
from prime import is_prime

class TestPrime(unittest.TestCase):
    def test_negative_number(self):
        self.assertFalse(is_prime(-1))

    def test_zero(self):
        self.assertFalse(is_prime(0))

    def test_one(self):
        self.assertFalse(is_prime(1))

    def test_two(self):
        self.assertTrue(is_prime(2))

    def test_prime_number(self):
        self.assertTrue(is_prime(7))

    def test_non_prime_number(self):
        self.assertFalse(is_prime(9))

if __name__ == '__main__':
    unittest.main()

These responses cover a range of use cases and can be inserted into the same file or a separate file as desired. But what if we want additional tests generated and/or more cases covered? We can approach that by supplying more explicit prompts - the topic of the next section.

Creating tests from explicit prompts

Another way to create tests is by providing prompts through the chat interface. A simple prompt like “Create tests for the selected code” will usually result in the generation of tests similar to the ones generated via the /tests command. In fact, you may see a message from Copilot that it is using /tests when it creates tests from your prompt. In Figure 4-8, the “using /tests” reference is visible in the gray bar next to GitHub Copilot.

Figure 4-8. Copilot generating tests from a basic prompt

However, the prompt interface gives you an extended ability to have Copilot suggest tests for other cases. For example, if you’ve incorporated the tests already generated, you can prompt Copilot via the chat interface about any other edge cases.

are there any other edge cases that should be tested

Copilot responds by identifying some additional tests and generating the explanations and code for them, as shown in Figure 4-9.

Figure 4-9. Additional edge cases generated by Copilot

In the output from this prompt, Copilot provides an updated set of test cases, including the new edge cases.

import unittest
from prime import is_prime

class TestPrime(unittest.TestCase):
    def test_negative_number(self):
        self.assertFalse(is_prime(-1))

    def test_zero(self):
        self.assertFalse(is_prime(0))

    def test_one(self):
        self.assertFalse(is_prime(1))

    def test_two(self):
        self.assertTrue(is_prime(2))

    def test_small_prime_numbers(self):
        for num in [3, 5, 7, 11, 13, 17, 19, 23, 29]:
            self.assertTrue(is_prime(num))

    def test_small_non_prime_numbers(self):
        for num in [4, 6, 8, 10, 12, 14, 15, 16, 18, 20]:
            self.assertFalse(is_prime(num))

    def test_large_prime_number(self):
        self.assertTrue(is_prime(104729))  # 104729 is a known large prime number

    def test_large_non_prime_number(self):
        self.assertFalse(is_prime(1000000))  # 1000000 is not a prime number

if __name__ == '__main__':
    unittest.main()


This might make you wonder if we can extend the prompt further to cover even more use cases.  Indeed we can!  For example, we can be more prescriptive, telling Copilot to add test cases for other kinds of inputs. 

add test cases for other kinds of inputs

With this prompt, Copilot adds some additional test cases for what we asked.

    def test_float_input(self):
        with self.assertRaises(TypeError):
            is_prime(7.1)

    def test_string_input(self):
        with self.assertRaises(TypeError):
            is_prime("7")

Copilot also helpfully provides a basic explanation of what it did, pulling in some overlapping description of other test cases.

In this code, `test_large_prime_number` checks if the function correctly identifies a large prime number (104729), `test_large_non_prime_number` checks if the function correctly identifies a large non-prime number (1000000), `test_float_input` checks if the function raises a `TypeError` when passed a floating point number, and `test_string_input` checks if the function raises a `TypeError` when passed a string.
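
A word of caution on the last two: they assume is_prime rejects invalid input. With the implementation shown earlier in this chapter, passing 7.1 doesn’t raise a TypeError at all - the function quietly computes an answer - so test_float_input would fail as written. The string case only passes because Python happens to raise a TypeError when comparing a string to an integer. Tests like these really belong alongside input validation in the function itself, which is exactly what we’ll add in the “Validating inputs” section later in this chapter.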

If you prefer to craft your comments inline with your code, there is a variation on the prompt approach that may be a better fit. We’ll look at that next.

Creating tests from comments

Sometimes you may not want (or need) to engage the chat feature to create your tests. You may be actively working in your editor, in the flow, and just want to do the equivalent of telling Copilot “insert some tests here”. That can be done by creating a comment in the form of a directive to Copilot. Then, you just need to work through the suggestions for tests or testing code that it returns. The process follows the same kind of interactive flow for acceptance, editing, and so on discussed in Chapter 2. The advantage of this method is that you can initiate it from a simple (or complex) comment inline with the code. The disadvantage is that you may have to repeatedly work through accepting parts of suggestions, or you may get generic suggestions that aren’t as useful.

Here’s a simple example of a generic comment placed inline, after the code for the is_prime function.

# Create tests for the code above

In response, Copilot supplies a generic testing routine. 

def test_is_prime(number, expected):
    result = is_prime(number)
    assert result == expected, f"Expected {expected} but got {result}"
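
Note that this is just a helper - nothing calls it yet. You would still invoke it yourself for whatever values you care about, for example:

test_is_prime(7, True)
test_is_prime(8, False)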

Depending on your particular comment and context, Copilot may produce a more generic testing function or a set of individual test cases.  If you want to ensure you get the latter, you can try a comment like this:

# Create a set of 10 unit tests for the code above

In this case, Copilot will usually generate a more explicit set of tests wrapped in a testing function. One example set of output is shown in the next listing.

def test_is_prime():

    # Test if 2 is prime
    assert is_prime(2) == True

    # Test if 3 is prime
    assert is_prime(3) == True

    # Test if 4 is not prime
    assert is_prime(4) == False

    # Test if 5 is prime
    assert is_prime(5) == True

    # Test if 6 is not prime
    assert is_prime(6) == False

    # Test if 7 is prime
    assert is_prime(7) == True

    # Test if 8 is not prime
    assert is_prime(8) == False

    # Test if 9 is not prime
    assert is_prime(9) == False

    # Test if 10 is not prime
    assert is_prime(10) == False

    # Test if 11 is prime
    assert is_prime(11) == True

    print("All tests pass")

One final approach to getting generated tests involves tricking Copilot a bit by intentionally leveraging its core functionality for generating code. Read on to see how to do that.

Generating tests through code suggestions and corrections

A final approach to generating tests is allowing Copilot to suggest completions or fixes for tests that are started but not complete. “Not complete” here may refer to code you’ve started typing and haven’t finished yet, or to partially written code left deliberately incomplete so that Copilot will suggest a completion or fix. This differs from the previous approaches because we’re not asking Copilot to produce a function or set of tests from scratch. Instead, we’re coding enough of the function or tests so that Copilot can fill in all, or part, of the missing pieces for us.

The advantage of this approach is that you can lead Copilot more in the style and approach you want to take for testing. You’re not as dependent on how it may choose to create a function, or how it creates discrete tests. Also, you don’t have to craft a different prompt for the testing you want. The disadvantage (or a better word might be challenge) is finding the right balance and amount of code to write as context so that Copilot comes up with the completion or fix that you expect.   

As an example, suppose you type in a line under your function such as

def test_is_prime(number):

Leaving that as-is causes a wavy line to appear on the next line, meaning there’s an issue with the code, as shown on line 10 in Figure 4-10.

Figure 4-10. Error detected in test function

Note the AI symbol (the two stars) showing up on line 9 in Figure 4-10. Clicking on the symbol gives us an option to have Copilot provide a fix for the problem (see Figure 4-11).

Figure 4-11. Fix using Copilot option

If you select that option, Copilot will generate a draft completion for the function using the /fix command. An example is shown in Figure 4-12.

Figure 4-12. Draft fix from Copilot for test function

Generate using Copilot

You may have noticed that the context menu we brought up for the fix action included another option titled Generate using Copilot. That option simply invokes the standard inline Chat dialog box.

This general test function is only a simple if/else expression and not really what we would think of as a true test function. But it might provide a starting point that could then be added to. Mostly, it serves here to show how Copilot’s fix capability can be used to achieve some code completion.

To get something more direct, such as specific use cases, we could add additional tokens and/or keywords to the context. For example, we could add an assert keyword, and then utilize the other approach of having Copilot suggest the remaining part of the test.

A simple example of this is shown in Figure 4-13.

Figure 4-13. Tests generated via code completion
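
The idea is simply to type the start of an assertion - for example, assert is_prime( on a new line below the function - and accept whichever completion Copilot offers, repeating for as many cases as you want. The exact completions you see will vary.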

This is again a very simple example, but it does illustrate the concept. Overall, this complete/fix approach represents something of a middle ground among the approaches we’ve discussed. It can help generate content fairly easily given enough context. But it does require more effort to seed the context and may require more of your time to get to a meaningful fix or completion.

Finally, in this section, let’s look at a different way of using Copilot to help us test - validating inputs to functions inline.

Validating inputs

As another angle on testing, Copilot can also assist with generating code to check that inputs going into a function are valid. With our example function, we can tell Copilot to

generate asserts to ensure that the inputs to the function are valid

From here, Copilot responds and suggests asserts, as requested, to validate the inputs.

To ensure that the function inputs are valid, we can add assertions
at the start of the function. We want to make sure that the input 
is an integer and that it is not negative. Here's how you can do it:


def is_prime(num):
    assert isinstance(num, int), "Input must be an integer"
    assert num >= 0, "Input must be a non-negative integer"

    if num < 2:
        return False
    for i in range(2, int(num ** 0.5) + 1):
        if num % i == 0:
            return False
    return True

We can also be less prescriptive about using asserts and simply ask Copilot to generate checks.

generate checks to ensure that the inputs to the function are valid

This allows Copilot to generate code to validate the inputs, but with a more standard coding mechanism to surface any issues.

To ensure that the inputs to the `is_prime` function are valid, you can add checks at the start of the function. Here's how you can do it:

```python
def is_prime(num):
    if not isinstance(num, int):
        raise TypeError("Input must be an integer")
    if num < 0:
        raise ValueError("Input must be a non-negative integer")

    if num < 2:
        return False
    for i in range(2, int(num ** 0.5) + 1):
        if num % i == 0:
            return False
    return True
```

In this code, the `if` statements will raise a `TypeError` if the input is not an integer or a `ValueError` if it's a negative number. This way, you can ensure that the inputs to the function are valid.
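
One side effect to be aware of: introducing these checks changes how is_prime behaves for bad input, which in turn affects the tests generated earlier. For example, test_negative_number expected is_prime(-1) to simply return False, but with this version of the function the call raises a ValueError instead, so that test would need updating along these lines:

    def test_negative_number(self):
        with self.assertRaises(ValueError):
            is_prime(-1)

This is another reminder to review generated tests and generated code together.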

Each of these approaches may be useful in different cases, depending on how simple you want the results to be versus the need for wider coverage. Remember that since you are interacting with an AI, prompting and setting the context are the keys to getting explicit results (versus generic results or ones rendered quickly inline). Depending on where you are with your overall progress on a project and your experience with the languages and frameworks you are using, you may find that a combination of these approaches works well.

For the final part of this chapter, let’s look at how Copilot can help with using frameworks and creating tests before the coding.

Testing before the coding and leveraging frameworks

Test-Driven Development has been mentioned a couple of times in this chapter already. If you’re not familiar with the term, it’s an approach to software development that emphasizes writing test cases for code before writing the actual code itself. TDD is considered a best practice, or even a requirement, in many coding projects and groups.

More on TDD

If you are new to TDD, or interested in learning more about the practice, there are many references on the web. A fairly quick but informative read can be found here.

You can use Copilot starting from the point of creating test cases for TDD and then implementing the code to be tested. Consider a simple example where we want to create a test class and tests for students at a university. We’ll use the Mockito testing framework.

About Mockito

Mockito is a Java framework that specializes in managing dependencies and simulating external interactions with other components without requiring those components to actually be implemented or present - i.e., mocking.

In our case, let’s start out with a pom.xml file for a Java project. In the editor for that file, we’ll invoke the inline Chat interface with the CMD+I key combination and ask it to 

add mockito dependency

As shown in Figure 4-14, Copilot suggests the necessary XML to add a Mockito dependency, which you can Accept or Discard.

Figure 4-14. Adding a Mockito dependency in POM file.

The actual suggested text is

<dependencies>
    <!-- Other dependencies... -->
    <dependency>
        <groupId>org.mockito</groupId>
        <artifactId>mockito-core</artifactId>
        <version>3.12.4</version>
        <scope>test</scope>
    </dependency>
</dependencies>
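
Note that the version suggested here (3.12.4) is not the latest Mockito release, so you may want to check the current version on Maven Central and bump the number before accepting the suggestion - an example of the tendency toward back-level dependency versions noted in this chapter’s conclusion.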

To build out our testing code, let’s create an appropriate test class and initial set of tests. We’ll do this in the separate Chat interface since we can expect a significant amount of output and we may want to put it in a separate file.

We can formulate a basic prompt as 

create a StudentTest class for students enrolled at a university 
and add tests

But since we want to ensure it uses the pom.xml file where we added the Mockito dependency, we can be very precise and tell it to reference that file by adding #file:pom.xml to our query.

This then makes our initial query something like:

Referencing #file:pom.xml, create a StudentTest class for students
 enrolled at a university and add tests

Copilot returns the following in response.

Me: Referencing #file:pom.xml, create a StudentTest class for a 
student enrolled at a university

GitHub Copilot: Sure, here's a basic example of a `StudentTest` 
class using JUnit and Mockito. This class tests a hypothetical 
`Student` class that might have methods like `enrollCourse()` and 
`getEnrolledCourses()`. 

```java
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.mockito.Mockito;

import java.util.Arrays;
import java.util.List;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.when;

public class StudentTest {
    private Student student;
    private University university;

    @BeforeEach
    public void setup() {
        university = Mockito.mock(University.class);
        student = new Student(university);
    }

    @Test
    public void testEnrollCourse() {
        Course course = new Course("CS101");
        when(university.addCourse(course)).thenReturn(true);

        boolean result = student.enrollCourse(course);

        assertEquals(true, result);
    }

    @Test
    public void testGetEnrolledCourses() {
        Course course1 = new Course("CS101");
        Course course2 = new Course("CS102");
        List<Course> courses = Arrays.asList(course1, course2);
        when(university.getEnrolledCourses(student)).thenReturn
(courses);

        List<Course> result = student.getEnrolledCourses();

        assertEquals(courses, result);
    }
}
```

Please replace `Student`, `University`, and `Course` with your 
actual classes and methods. This is just a basic example and might
 not fit your actual use case.

This is very helpful and a good set of suggested starting tests. We can take this and save it off to a src/test/StudentTest.java file if we want. However, if what we really want is just a simple test case for a Student class, this may be overkill, since it tests items like enrolled courses. In that case, we may need to restructure our prompt and be more specific about what we want. Here’s one option for that.

Referencing #file:pom.xml, create a StudentTest class for a student
 enrolled at a university. A student will have personal attributes
 such as a first and last name, a phone number, an address, and a
 contact email.

In response, Copilot generates:

GitHub Copilot: Sure, here's a basic example of a `StudentTest`
 class using JUnit and Mockito. This class tests a hypothetical
 `Student` class that might have methods like `getFirstName()`, 
`getLastName()`, `getPhoneNumber()`, `getAddress()`, and `getEmail()`.

```java
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

public class StudentTest {
    private Student student;

    @BeforeEach
    public void setup() {
        student = new Student("John", "Doe", "1234567890", 
"123 Main St", "john.doe@example.com");
    }

    @Test
    public void testGetFirstName() {
        assertEquals("John", student.getFirstName());
    }

    @Test
    public void testGetLastName() {
        assertEquals("Doe", student.getLastName());
    }

    @Test
    public void testGetPhoneNumber() {
        assertEquals("1234567890", student.getPhoneNumber());
    }

    @Test
    public void testGetAddress() {
        assertEquals("123 Main St", student.getAddress());
    }

    @Test
    public void testGetEmail() {
        assertEquals("john.doe@example.com", student.getEmail());
    }
}
```

Please replace `Student` with your actual class and methods. This 
is just a basic example and might not fit your actual use case.

This looks more like what we wanted as a starting point, so we can save it in our workspace as src/test/StudentTest.java. If we then execute the test, it should fail since we don’t have any implementation of the Student class yet.
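
If the project follows a standard Maven layout, you can run the tests with mvn test from the project root. At this point the run fails - in fact, the test code won’t even compile, because there is no Student class yet - which is exactly the failing starting point that TDD expects.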

Following the TDD methodology, we should next create the minimum code to make this test pass. Since we have Copilot available, we’ll have it take a pass at that for us. While not necessary, if you have this code open in the editor, you can tell Copilot to use that as the key reference.

Referencing #editor, create a student class. 

Copilot returns a reasonable response.

Me: Referencing #editor, create a student class. 

GitHub Copilot: Sure, based on the `StudentTest` class in your
 editor, here's a basic `Student` class that matches the constructor
 and methods being tested:

```java
public class Student {
    private String firstName;
    private String lastName;
    private String phoneNumber;
    private String address;
    private String email;

    public Student(String firstName, String lastName, String
 phoneNumber, String address, String email) {
        this.firstName = firstName;
        this.lastName = lastName;
        this.phoneNumber = phoneNumber;
        this.address = address;
        this.email = email;
    }

    public String getFirstName() {
        return firstName;
    }

    public String getLastName() {
        return lastName;
    }

    public String getPhoneNumber() {
        return phoneNumber;
    }

    public String getAddress() {
        return address;
    }

    public String getEmail() {
        return email;
    }
}
```

This `Student` class has a constructor that takes five parameters
 (first name, last name, phone number, address, and email) and 
getter methods for each of these attributes.

This is a good start for our code, which we can save and run our test case against. While we’re at it, though, I’d like to make sure this code is well commented. There’s no reason we can’t also use Copilot for that change. All it takes is a slight modification to my query, as shown in the next set of output.

 
Me: Referencing #editor, create a student class with verbose
 comments. 

GitHub Copilot: Sure, here's a `Student` class with verbose 
comments:

```java
/**
 * The Student class represents a student in a university.
 * It holds the student's personal details like name, phone number, 
address, and email.
 */
public class Student {
    // The first name of the student
    private String firstName;
    
    // The last name of the student
    private String lastName;
    
    // The phone number of the student
    private String phoneNumber;
    
    // The address of the student
    private String address;
    
    // The email of the student
    private String email;

    /**
     * This constructor initializes the student object.
     * The constructor provides no user input validation. That 
should be handled in the method that calls this constructor.
     *
     * @param firstName The first name of the student.
     * @param lastName The last name of the student.
     * @param phoneNumber The phone number of the student.
     * @param address The address of the student.
     * @param email The email of the student.
     */
    public Student(String firstName, String lastName, 
String phoneNumber, String address, String email) {
        this.firstName = firstName;
        this.lastName = lastName;
        this.phoneNumber = phoneNumber;
        this.address = address;
        this.email = email;
    }

    // Getter for the first name
    public String getFirstName() {
        return firstName;
    }

    // Getter for the last name
    public String getLastName() {
        return lastName;
    }

    // Getter for the phone number
    public String getPhoneNumber() {
        return phoneNumber;
    }

    // Getter for the address
    public String getAddress() {
        return address;
    }

    // Getter for the email
    public String getEmail() {
        return email;
    }
}
```

This `Student` class has a constructor that takes five parameters 
(first name, last name, phone number, address, and email) and 
getter methods for each of these attributes. Each method and field 
is documented with comments.

From here, I can save this file into my workspace and incorporate it along with my test code as the start of the project. Other frameworks and dependencies can be introduced the same way. As always, the more context you can give Copilot through the prompt, workspace files, editor, and so on, the more likely it is to produce good suggestions for you to incorporate. If not, though, don’t be afraid to iterate on your prompt/query until you get the results you want.

Conclusion

In this chapter, we’ve focused on how to apply Copilot’s completion suggestions and chat capabilities in the context of creating tests for code. As shown in the various sections, there are a number of approaches to have Copilot generate tests relevant to code in your workspace. Both Copilot’s inline completions and suggestions and its chat capabilities can be used.

Most of the options for generating tests rely on passing the request to the Chat interface in some form - either by using shortcut (slash) commands or by entering a prompt through the inline or separate chat interfaces. For any substantial, complex, or lengthy tests, working through the chat functionality will likely provide the best results.

For quick, boilerplate unit tests or simple unit testing functions, using inline comments as prompts can work well. Since this interface favors fast turnaround for completions and suggestions, you may still end up with partial results, simplified approaches, and/or back-level versions of dependencies. The same holds true if you opt to start writing testing code yourself and rely on the completion/fix approach to build it out.

Copilot can also be used to support best practices such as Test-Driven Development (TDD): it can generate initial testing code and then later generate the appropriate implementation to pass the tests. As with all approaches mentioned in this chapter, it’s important to review the results to make sure they are usable and what you intended. You should always feel free to disregard suggestions or code from Copilot, and/or reframe prompts and queries to get more accurate results. You can also consider defining more explicit context for Copilot to consider when generating the results.

In the last example in the chapter, I opted to use Copilot to verbosely comment the code it generated. This makes the code more readable and easily understandable. That exercise highlights another of Copilot’s powerful features - helping to explain and document code. These are the next set of capabilities that we’ll explore in Chapter 5.
