How (not) to approach persistence testing in Java and Groovy

Manage resources and fixtures with Spock's lifecycle hooks.

By Rob Fletcher
July 24, 2017

Testing persistence is one of the most frequently encountered types of integration test.
If done incorrectly it can mean death to a suite of tests because they will run slowly and be incredibly brittle.

One of the most common antipatterns encountered is to test everything by reference to a single monolithic fixture containing an ever-growing volume of data that attempts to cater to every corner-case.
The fixture will soon become the victim of combinatorial explosion—there are so many combinations of entities in various states required that the sheer amount of data becomes overwhelming and impossible to monitor.
Tests based on monolithic fixtures tend to be replete with false moniker testing—“I know the pantoffler widget is the one set up with three variants, one of which has a negative price modifier and is zero-rated for tax.”


If any kind of write-testing is done, it’s almost certainly going to mean restoring the database between each test.
Given the complexity of all the foreign-key constraints in that huge fixture, it’s going to be a practical impossibility to do that in any other way than by truncating all of the tables and setting everything up again.
Before you know it, the test suite takes hours to run and no one has any idea whether it works anymore.

This might sound like an exaggeration, but I once worked on a project that did exactly this.
A gargantuan XML file would populate an entire database before each and every test.
Each of the test classes extended AbstractTransactionalSpringContextTests and thus would also initialize the entire Spring application context before each test.
Teams had to coordinate data changes because adding new fixtures or modifying existing ones could break unrelated tests.
The “integration suite” job on the continuous integration server took more than three-and-a-half hours to run, and I don’t recall it ever passing.

Don’t do that.

You should always try to set up the minimum amount of data required for the specific test.
That doesn’t mean not sharing fixtures between tests for which there are commonalities but only where appropriate.

As far as possible you should try to keep everything in-memory, as well.
When testing the peculiarities of a particular database’s SQL syntax, that’s obviously not going to work, but a lightweight in-memory database such as H2 is an excellent option for the majority of persistence tests.
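For example, with H2 the choice between an in-memory database and a file-backed one is simply a matter of the JDBC URL. The following fragment is illustrative only; the DBI class is JDBI's entry point, which the examples later in this chapter also use:

import org.skife.jdbi.v2.DBI

// an in-memory H2 database; the data disappears when the last connection closes
def inMemory = new DBI("jdbc:h2:mem:test")

// a file-backed H2 database stored as files under the project directory
def onDisk = new DBI("jdbc:h2:./build/testdb")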

Testing a Persistence Layer

The first thing we need to test is persisting User instances to a database.
We’ll create a data access object (DAO) class DataStore with methods for writing to and reading from the database.

We’ll begin with a feature method that tests storing a user object.
Don’t worry about how the handle and dataStore fields are initialized right now: we’re coming to that.
All you need to know at the moment is that handle is the direct connection to the database, and dataStore is the DAO we’re testing.

Handle handle
@Subject DataStore dataStore

def "can insert a user object"() {
  given:
  def clock = Clock.fixed(now(), UTC) // (1)

  and:
  def user = new User("spock", clock.instant()) // (2)

  when:
  dataStore.insert(user) // (3)

  then:
  def iterator = handle.createQuery("select username, registered from user")
                       .iterator() // (4)
  iterator.hasNext() // (5)
  with(iterator.next()) {
    username == user.username
    registered.time == clock.instant().toEpochMilli()
  } // (6)

  and:
  !iterator.hasNext() // (7)
}

1. Because the test needs to ensure the registered timestamp is inserted to the database correctly, we'll use a fixed clock to generate the timestamps.
2. Create a user object.
3. Invoke the insert method of our DAO passing the user.
4. Query the database directly.
5. Assert that we get a first row back.
6. Assert that the username and registered values on the first row correspond to the values on the user object.
7. Assert that no further rows were found.

The feature method is reasonably straightforward; however, one thing merits discussion. The test queries the database directly to verify the insert operation has worked.
It would likely result in more concise code if the test used another DAO method to read the database back again.
This feels wrong, though.
The DAO class is the subject of the test so if any of the assertions failed, it would not be possible to determine whether the problem lies in inserting the data or reading it back.
The least ambiguity in any potential failure arises if the test reads the database directly.

That’s not to say that reading the database is always the right thing to do.
The question, as always, is: what behavior am I trying to test here?
In this case, the questions we're interested in are: does the persistence layer work? Is the correct data being written to the database?
Given that, it’s appropriate to directly read the database.
A browser-based end-to-end test that fills in a form should almost certainly not then look directly in the database to see if the data was persisted correctly.
Even a test for a different aspect of the DAO might use the insertion and query methods if their behavior is adequately covered elsewhere.

Similarly, we will want to test that data is read from the database correctly.
In that case, it’s appropriate to insert data directly to the database because we’re interested in the translation between database rows and objects.

Let’s add that feature method to the specification:

def "can retrieve a list of user objects"() {
  given:
  def timestamp = LocalDateTime.of(1966, 9, 8, 20, 0).toInstant(UTC)
  ["kirk", "spock"].each {
    handle.createStatement("""insert into user (username, registered)
                              values (?, ?)""")
          .bind(0, it)
          .bind(1, timestamp)
          .execute()
  }

  when:
  def users = dataStore.findAllUsers()

  then:
  with(users.toList()) {
    username == ["kirk", "spock"]
    registered.every {
      it == timestamp
    }
  }
}

Managing Resources with the Spock Lifecycle

The feature method in the previous example uses a dataStore object, which is a DAO wrapping a database connection.
A database connection is a classic example of a managed resource that needs to be acquired and disposed of correctly.
We saw setup and cleanup methods in Chapter 2; now we’ll take a look at lifecycle management in a little more depth.

Before the feature method can run, there are some things that you need to do:

  • Acquire a connection to the database

  • Configure the object-relational mapping (ORM)

  • Create the DAO we are testing

  • Ensure that the tables needed to store our data in fact exist

Afterward, to clean up, the specification must do the following:

  • Clean up any data that was created

  • Dispose of the database connection properly

Warning

Test Leakage

A very important feature of any unit test is that it should be idempotent.
That is to say, the test should produce the same result regardless of whether it is run alone or with other tests in a suite and regardless of the order in which the tests in that suite are run.
When side effects from a test affect subsequent tests in the suite, we can describe that test as leaking.

Test leakage is caused by badly managed resources. Typical causes of leakage include data in a persistent store that is not removed, changes to a class’ metaclass that are unexpectedly still in place later, mocks injected into objects reused between tests, and uncontrolled changes to global state such as the system clock.

Test leakage can be very difficult to track down.
Simply identifying which test is leaking can be time consuming.
For example, the leaking test might not affect the one running directly after it, or continuous integration servers might run test suites in a different order from developers' computers, leading to protests of "but it works on my machine!"
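As a contrived illustration (not from the book), here's a feature method that leaks a global metaclass change unless it explicitly undoes it in a cleanup: block:

def "formats a timestamp"() {
  given: "a metaclass change -- this is global, so it affects every later test in the JVM"
  Date.metaClass.format = { String pattern -> "stubbed" }

  expect:
  new Date().format("yyyy-MM-dd") == "stubbed"

  cleanup: "without this, the stubbed method leaks into subsequent tests"
  GroovySystem.metaClassRegistry.removeMetaClass(Date)
}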

As a starting point, we'll use a setup and cleanup method, as we saw in the Block Taxonomy section:

@Subject DataStore dataStore

def dbi = new DBI("jdbc:h2:mem:test")
Handle handle

def setup() {
  dbi.registerArgumentFactory(new TimeTypesArgumentFactory())
  dbi.registerMapper(new TimeTypesMapperFactory())

  handle = dbi.open()
  dataStore = handle.attach(DataStore)

  dataStore.createUserTable()
}

def cleanup() {
  handle.execute("drop table user if exists")
  handle.close()
}

This means that the database connection is acquired and disposed of before and after each feature method.
Given that we’re using a lightweight in-memory database, this is probably not much overhead.
Still, there’s no reason why we can’t reuse the same database connection for every feature method.

In JUnit, we could accomplish this by using a static field managed via methods annotated with @BeforeClass and @AfterClass.
Spock specifications can contain static fields, but the same thing is better accomplished using the spock.lang.Shared annotation.
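For comparison, here's a rough JUnit 4 sketch (illustrative only, not from the book); the fields must be static because @BeforeClass and @AfterClass methods are themselves static:

import org.junit.AfterClass
import org.junit.BeforeClass

class UserPersistenceTest {

  static DBI dbi
  static Handle handle

  @BeforeClass
  static void openDatabase() {
    dbi = new DBI("jdbc:h2:mem:test")
    handle = dbi.open()
  }

  @AfterClass
  static void closeDatabase() {
    handle.close()
  }
}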

Note

Notice that when the cleanup method drops the tables, it does so by using drop table user if exists.
It’s a good idea to try to avoid potential errors in cleanup methods because they can muddy the waters of debugging problems.

Here, if anything fundamental went wrong with initializing the DataStore class, the specification might not get as far as creating the table; without the if exists clause, a SQLException would then be thrown when cleanup tried to drop it.

Fields annotated with @Shared have a different lifecycle to regular fields.
Instead of being reinitialized before each feature method is run they are initialized only once—when the specification is created, before the first feature method is run.
@Shared fields are not declared static.
They are regular instance fields, but the annotation causes Spock to manage their lifecycle differently.
As we’ll see later, they are also useful when parameterizing feature methods using the where: block.

It doesn’t make sense to manage @Shared fields with the setup and cleanup method.
Instead, Spock provides setupSpec and cleanupSpec methods.
As you’d expect, these are run, respectively, before the first and after the last feature method.
Again, they are not static, unlike methods that use JUnit’s @BeforeClass and @AfterClass annotations.
Just like setup and cleanup, setupSpec and cleanupSpec are typed def or void and do not have parameters.

We can make the dbi field in the specification @Shared and then only perform the ORM configuration once in a setupSpec method.

@Subject DataStore dataStore

@Shared dbi = new DBI("jdbc:h2:mem:test") // (1)
Handle handle

def setupSpec() { // (2)
  dbi.registerArgumentFactory(new TimeTypesArgumentFactory())
  dbi.registerMapper(new TimeTypesMapperFactory())
}

def setup() {
  handle = dbi.open()
  dataStore = handle.attach(DataStore)

  dataStore.createUserTable()
}

def cleanup() {
  handle.execute("drop table user if exists")
  handle.close()
}

1. The dbi field is now annotated @Shared.
2. A setupSpec method now handles class-wide setup.

At this stage, we’re still opening a connection and creating tables before each test and then dropping the tables and releasing the connection after.
Even though each feature method will need its own data, it seems like the table itself could persist between features.

@Subject @Shared DataStore dataStore // (1)

@Shared dbi = new DBI("jdbc:h2:mem:test")
@Shared Handle handle // (2)

def setupSpec() {
  dbi.registerArgumentFactory(new TimeTypesArgumentFactory())
  dbi.registerMapper(new TimeTypesMapperFactory())

  handle = dbi.open() // (3)
  dataStore = handle.attach(DataStore)
  dataStore.createUserTable()
}

def cleanupSpec() { // (4)
  handle.execute("drop table user if exists")
  handle.close()
}

def cleanup() {
  handle.execute("delete from user") 5
}

1. Now, the DAO instance is @Shared so that we can use it to create the tables it requires in setupSpec.
2. The database handle we need to create the DAO also needs to be @Shared.
3. We now create the handle and the DAO in setupSpec rather than setup.
4. Instead of dropping the tables in cleanup we do so in cleanupSpec.
5. In cleanup, we'll ensure that all data is removed from the user table so that each feature method is running in a clean environment.

Using @Shared in this way results in some tradeoffs.
It’s important to manage shared fields very carefully to ensure state does not leak between feature methods.
In the preceding example, we had to add a cleanup step to ensure that any data persisted by the feature methods is deleted.

Note

In this specification, we’ve made the test subject @Shared, meaning that it is not reinitialized before each feature method.
Although generally this is not a good idea, it’s reasonable if—like in this case—the test subject is stateless.

Yes, the database is stateful, but we need to manage that anyway, regardless of the lifecycle of the DAO instance.

It’s not always obvious that state is leaking between feature methods until you restructure the specification or run things in a different order.
As we saw in the Basic Block Usage section, an expect: block can appear before a when: block as a way of verifying preconditions before the action of the test starts.
If there’s any danger of state leakage, using an expect: block at the start of the feature method to verify the initial state is a good option.
Let’s add that to the feature method we saw earlier:

def "can insert a user object"() {
  given:
  def clock = Clock.fixed(now(), UTC)

  and:
  def user = new User("spock", clock.instant())

  expect:
  rowCount("user") == 0 1

  when:
  dataStore.insert(user)

  then:
  def iterator = handle.createQuery("select username, registered from user")
                       .iterator()
  iterator.hasNext()
  with(iterator.next()) {
    username == user.username
    registered.time == clock.instant().toEpochMilli()
  }

  and:
  !iterator.hasNext()
}

private int rowCount(String table) {
  handle.createQuery("select count(*) from $table")
        .map(IntegerColumnMapper.PRIMITIVE)
        .first()
} // (2)

1. The feature method now ensures that the database is in the expected state before performing the tested action.
2. A helper method allows for a concise assertion in the expect: block.

Specifications and Inheritance

The lifecycle management that the specification is doing is probably applicable not just to tests for persisting users, but also to similar tests that need to integrate with the database.
So far we’ve made the User class persistent, but we need to do the same for the Message class.
We’ll add some methods to the DataStore DAO with a specification that tests reading from and writing to the database:

class MessagePersistenceSpec extends Specification {

  @Subject @Shared DataStore dataStore
  User kirk, spock

  @Shared dbi = new DBI("jdbc:h2:mem:test")
  @Shared Handle handle

  def setupSpec() {
    dbi.registerArgumentFactory(new TimeTypesArgumentFactory())
    dbi.registerMapper(new TimeTypesMapperFactory())

    handle = dbi.open()

    dataStore = handle.attach(DataStore)

    dataStore.createUserTable()
    dataStore.createMessageTable()
  }

  def cleanupSpec() {
    handle.execute("drop table message if exists")
    handle.execute("drop table user if exists")
    handle.close()
  }

  def setup() {
    kirk = new User("kirk")
    spock = new User("spock")
    [kirk, spock].each { dataStore.insert(it) }
  }

  def cleanup() {
    handle.execute("delete from message")
    handle.execute("delete from user")
  }

  def "can retrieve a list of messages posted by a user"() {
    given:
    insertMessage(kirk, "@khan KHAAANNN!")
    insertMessage(spock, "Fascinating!")
    insertMessage(spock, "@kirk That is illogical, Captain.")

    when:
    def posts = dataStore.postsBy(spock)

    then:
    with(posts) {
      size() == 2
      postedBy.every { it == spock }
    }
  }

  def "can insert a message"() {
    given:
    def clock = Clock.fixed(now(), UTC)
    def message = spock.post(
      "@bones I was merely stating a fact, Doctor.",
      clock.instant()
    )

    when:
    dataStore.insert(message)

    then:
    def iterator = handle.createQuery("""select u.username, m.text, m.posted_at
                                         from message m, user u
                                         where m.posted_by_id = u.id""")
                         .iterator()
    iterator.hasNext()
    with(iterator.next()) {
      text == message.text
      username == message.postedBy.username
      posted_at.time == clock.instant().toEpochMilli()
    }

    and:
    !iterator.hasNext()
  }

  private void insertMessage(User postedBy, String text) {
    handle.createStatement("""insert into message
                              (posted_by_id, text, posted_at)
                              select id, ?, ? from user where username = ?""")
          .bind(0, text)
          .bind(1, now())
          .bind(2, postedBy.username)
          .execute()
  }
}

This code is doing an awful lot of the same work as the test for user persistence.
It would make sense to extract a common superclass that can do some of the lifecycle management and provide some utility methods, such as the rowCount method we used earlier.

One of the advantages of the fact that @Shared fields and the setupSpec and cleanupSpec methods are nonstatic is that they can participate in inheritance hierarchies.
Let’s refactor and extract a superclass:

abstract class BasePersistenceSpec extends Specification {

  @Shared DataStore dataStore

  @Shared dbi = new DBI("jdbc:h2:mem:test")
  @Shared Handle handle

  def setupSpec() {
    dbi.registerArgumentFactory(new TimeTypesArgumentFactory())
    dbi.registerMapper(new TimeTypesMapperFactory())

    handle = dbi.open()
    dataStore = handle.attach(DataStore)
    dataStore.createUserTable()
  }

  def cleanupSpec() {
    handle.execute("drop table user if exists")
    handle.close()
  }

  def cleanup() {
    handle.execute("delete from user")
  }

  protected int rowCount(String table) {
    handle.createQuery("select count(*) from $table")
          .map(IntegerColumnMapper.PRIMITIVE)
          .first()
  }
}

Here, we’ve simply moved all the lifecycle methods and fields up from MessagePersistenceSpec.
The @Subject annotation is gone from the dataStore field because it’s no longer appropriate, and the rowCount method is now protected rather than private.
Otherwise, the code is unchanged.

We don’t need anything else for the UserPersistenceSpec class, but MessagePersistenceSpec has to manage the message table as well as the user table.

The feature methods remain unchanged, but we can now remove the common parts of the lifecycle management code, which are handled by the superclass:

class MessagePersistenceSpec extends BasePersistenceSpec {
  User kirk, spock
  
  def setupSpec() {
    dataStore.createMessageTable()
  }
  
  def cleanupSpec() {
    handle.execute("drop table message if exists")
  }
  
  def setup() {
    kirk = new User("kirk")
    spock = new User("spock")
    [kirk, spock].each { dataStore.insert(it) }
  }
  
  def cleanup() {
    handle.execute("delete from message")
  }

  // feature methods unchanged from the earlier listing...
}

If you’re paying attention, you might notice something missing from the lifecycle methods in this derived class.
None of them are invoking the superclass method they override!
Forgetting to call the superclass methods would be a likely source of hard-to-debug problems and copy-and-paste errors, so Spock helps you by doing the right thing automatically.

If a specification’s superclass has any of the lifecycle management methods, they are automatically executed along with those of the specification itself.
It is not necessary to call super.setup() from a specification’s setup method, for example.

Execution Order of Lifecycle Methods in an Inheritance Hierarchy

Thinking about the order in which the lifecycle methods execute, you might also notice a couple of interesting things:

  • The base class’ setupSpec method initializes the dataStore DAO field, and the subclass setupSpec method uses it to create the message table.

  • The base class’ cleanupSpec method calls handle.close() (which is JDBI’s way of closing the database connection), but the subclass cleanupSpec method uses the handle field to drop the message table.

Spock treats the lifecycle methods like an onion skin.
Execution of the setupSpec and setup methods proceeds down the inheritance tree, whereas the cleanupSpec and cleanup methods execute in the opposite order up the inheritance tree.

Let’s look at a simple example of an inheritance hierarchy that prints something to standard output in each lifecycle method:

abstract class SuperSpec extends Specification {
  def setupSpec() {
    println "> super setupSpec"
  }

  def cleanupSpec() {
    println "> super cleanupSpec"
  }

  def setup() {
    println "--> super setup"
  }

  def cleanup() {
    println "--> super cleanup"
  }
}

class SubSpec extends SuperSpec {
  def setupSpec() {
    println "-> sub setupSpec"
  }

  def cleanupSpec() {
    println "-> sub cleanupSpec"
  }

  def setup() {
    println "---> sub setup"
  }

  def cleanup() {
    println "---> sub cleanup"
  }

  def "feature method 1"() {
    println "----> feature method 1"
    expect:
    2 * 2 == 4
  }

  def "feature method 2"() {
    println "----> feature method 2"
    expect:
    3 * 2 == 6
  }
}

The output generated is as follows:

> super setupSpec
-> sub setupSpec
--> super setup
---> sub setup
----> feature method 1
---> sub cleanup
--> super cleanup
--> super setup
---> sub setup
----> feature method 2
---> sub cleanup
--> super cleanup
-> sub cleanupSpec
> super cleanupSpec

This means that the setupSpec method in BasePersistenceSpec executes before the setupSpec method in MessagePersistenceSpec.
Therefore, dataStore has been acquired before it’s used to create the message table.
Conversely, the cleanupSpec method of BasePersistenceSpec is executed after the one in MessagePersistenceSpec, so handle has not been closed when we try to use it to drop the message table.

Of course, if you have more complex requirements for execution order, there’s nothing to prevent you from defining abstract methods in the base class that are referenced from the lifecycle methods and implemented in different ways in the subclasses.
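For instance (a sketch of that approach, not code from the book), the base class could own the ordering while each subclass fills in the tables it needs:

abstract class BasePersistenceSpec extends Specification {

  @Shared DataStore dataStore

  def setupSpec() {
    // ... register mappers, open the handle, and attach the DAO as before ...
    createTables() // the base class decides when, each subclass decides what
  }

  protected abstract void createTables()
}

class MessagePersistenceSpec extends BasePersistenceSpec {
  @Override
  protected void createTables() {
    dataStore.createUserTable()
    dataStore.createMessageTable()
  }
}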

Summary

In this chapter, we covered how to manage resources and fixtures with Spock’s lifecycle hooks. You learned about the following:

  • The four lifecycle methods setupSpec, setup, cleanup and cleanupSpec

  • Using @Shared fields for objects that are not reinitialized between each feature method

  • Structuring specifications in inheritance hierarchies and what that means for the execution order of the lifecycle methods
