jOOQ Tuesdays: Richard North Makes Database Testing More Reproducible with Testcontainers

Welcome to the jOOQ Tuesdays series. In this series, we’ll publish an article on the third Tuesday of every other month, interviewing someone we find exciting in our industry from a jOOQ perspective. This includes people who work with SQL, Java, Open Source, and a variety of other related topics.

Today, I’m very excited to feature Richard North, creator of Testcontainers, a promising new database and UI testing tool for reproducible tests.

Hi Richard – you work at Skyscanner, which means you have tons of travel data to work with. What’s the most exciting thing about working with this data?

Skyscanner is a really data-led company, using data at all levels in decision-making. The volume of data we gather and process really helps us understand what travellers want and serve them better. For example, our destination recommender system helps people discover new and interesting places to go, based upon a vast amount of data, algorithms and experiments. But it’s interesting how this varies from recommendations in internet media companies – there are far fewer possible destinations than books, songs and movies, yet users’ reasons for travelling and tastes can be more nuanced and varied.

There’s yet more data-oriented work under the surface, too – for example, the infrastructure needed to gather and analyse such large amounts of data, and the running of experiments to help us improve. There are a lot of smart people working hard to make this all happen, and it’s an exciting place to be!

You’ve created an increasingly popular testing framework, Testcontainers. What made you do it? What itch does it scratch?

Well, I think it’s something that scratches several itches at the same time – things that previously we only had isolated solutions to. The common element, though, is reproducibility of test environments. In all my time developing and writing tests for JVM-based systems, it’s always been the non-JVM dependencies that caused the most complexity, unreliability and maintenance overhead.

I remember my first day as a developer, years ago: I was given a desktop machine and 2 days’ worth of step-by-step instructions that I needed to follow – just so that I’d be able to develop and run tests with all dependencies in place. A few months later I had to repeat the same task many times over when building new CI servers.

A lot has changed since then in terms of how we deploy and manage our production infrastructure, and thankfully Docker has done a lot to further bring prod-parity to developers’ machines.

Testcontainers started out as my effort to bring the full power of Docker to integrated testing on the JVM in the two areas where I’ve experienced the most pain: testing against a clean, representative database, and making browser-based Selenium testing more reproducible, both for developers and on CI.

Being mostly curious about testing databases ourselves, we noticed that your documentation mentions Testcontainers as an alternative to using H2 as a test database. What are the disadvantages of emulating a database, e.g. with H2? Do you have any personal experience with that?

Yes, definitely – it was one of the tipping point factors that triggered me to create Testcontainers. I do think H2 is a fantastic piece of work in what it manages to deliver, and it’s something I’ve used on a number of projects to good effect.

However, compatibility with real databases has often been a sticking point. Back in 2015, before I started Testcontainers, we were struggling with a few MySQL features that didn’t have equivalents in H2. We were facing the unpleasant prospect of having to constrain our implementation to what H2 would allow us to test against. It became fairly obvious that there was a gap in the market for an H2-like tool that was actually a facade to a Docker-based database container – and Testcontainers was born.

What do you think about mocking the database at any layer, including the DAO layer, the service layer, etc.?

I’m all in favour of keeping tests small, light and layered, and using mocks to accomplish this. This might sound strange coming from somebody who has developed an integrated testing tool, but it’s true!

Still, I feel that we need to be pragmatic about how we approach automated tests and how we make sure we’re testing the right thing – especially when crossing boundaries. Are we testing how this code behaves against reality, or are we testing against our own (potentially false) understanding of how external components work?

My feeling is that it’s quite straightforward to mock layers of your system that you yourself wrote, or where you can easily jump into the source code, a spec or documentation. With an external component, you can still produce a mock that behaves how you expect, or how you witness the real thing behaving. But does that mock continue to represent the real thing, especially after accretion of other features, or the additional perils of state that a database entails – schema changes and actual data?

My ideal is to mock the data access layer for consumption by higher layers, but to be quite careful about what the data access layer itself talks to in my tests. It should probably be a real database. Hopefully Testcontainers is one tool that helps make this particular thing a little less painful – so that when you find yourself needing to do this, there’s a way to do it easily.

What’s the biggest challenge you’ve faced when testing databases, or other things?

It’s not databases, but I’d say that by far the hardest testing challenge I’ve faced as a developer is mobile apps, especially iOS. I’ve always enjoyed mobile development as a whole, but when switching from a Java server-side/web project to mobile, it really feels like you’re going back in time. Some of the challenges are harder – such as asynchronicity and platform APIs that make it harder to structure software in a testable way. But it also feels like the tooling is much further behind and, until quite recently, received far less attention. I feel the net result has been that developers have been discouraged from investing in automated tests, which is sad given that we know how valuable they can be.

Things are getting better, but I do greatly prefer the testing aspects of working on server-side JVM projects. For all its difficulties, we are actually quite lucky!

Static, Non-Static. Mockable, Non-Mockable… Instead, Let’s Focus on Real Added Value…

The never-ending topic of testability… Dogmatic discussions about stuff being static, non-static. Mockable, non-mockable. Testable, non-testable. Here’s an article that was recently syndicated on DZone, about the evilness of making things static:

http://java.dzone.com/articles/why-static-bad-and-how-avoid

While the article itself is still somewhat focused on simple means of making something mockable through dependency injection, the sheer volume of comments and rants is just amazing. If you look closely at the comments, you’ll even find yourself reading gibberish about whether genderless “she” or singular “they” should be preferred. Off-topic troll alarm!

No one doubts the general usefulness of code being testable. If it’s feasible to add automated tests with a reasonable amount of effort, no one sane will question those tests. But where does this anti-static dogma come from? Every project manager will love engineers who follow the 80/20 rule. At the end of the day, good software is defined by the value it adds for all stakeholders. There’s no right or wrong. Instead, there are “50 shades of mockable”. And with a bit of humour, we’ll get something in between project day 1 and day 238.

Just face it. Static is a tool like any other tool. It has its merits. And its drawbacks. Choose the tool where it fits and review your overly strict rule set where needed. Being dogmatic will eventually lead to greater chaos than being pragmatic. Try to be efficient rather than to fight “evil”. Mocks have their place, as do integration tests.

For those looking for more rants and trolling comments, there are plenty in this article, where more mocking is advertised in a database context:
http://architects.dzone.com/articles/easy-mocking-your-database-0

And after that, let’s get back to work and produce something that focuses on adding value!

Easy Mocking of Your Database

Test-driven development is something wonderful! Once you’ve established it in your organisation, you will start to:

  • Greatly improve your quality (things break less often)
  • Greatly improve your processes (things can be changed more easily)
  • Greatly improve your developer atmosphere (things are more fun to do)

The key to doing test-driven development right is to find a good ratio of what kind of code is to be covered…

  • by automated unit tests
  • by automated integration tests
  • by manual “smoke tests”
  • by manual “acceptance tests”
  • not at all

Finding that ratio can be grounds for heated, religious discussions. I will soon blog about my own opinion on that subject. In this post, however, we will focus on the first kind of test: unit tests.

Unit testing your data access

When databases are involved, people will probably quickly jump to writing integration tests, because all they have to do is create a little Derby, H2 or HSQLDB (or other) test database, and run a couple of data-setup queries prior to the actual test. Their code module will then hopefully not notice the difference from a production environment, and the whole system can be tested as a black box. The advantage of this is that your tests can be written in a way that verifies your business requirements, your user stories, or whatever you call them. So much for the theory.
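
In code, that classic setup often looks roughly like the following minimal sketch, which seeds an in-memory H2 database before the test runs. The AUTHOR table and its rows are invented for illustration, and the H2 driver is assumed to be on the test classpath.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// A minimal sketch of the classic approach: seed an in-memory H2 database before the test.
// Table name, columns and data are invented for illustration.
Connection connection = DriverManager.getConnection(
    "jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1", "sa", "");

try (Statement stmt = connection.createStatement()) {
    stmt.execute("create table AUTHOR (ID int primary key, NAME varchar(50))");
    stmt.execute("insert into AUTHOR values (1, 'George Orwell'), (2, 'Paulo Coelho')");
}

// ... now run the module under test against this connection and assert on the outcome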

When these database integration tests pile up, it becomes increasingly difficult to shield them from one another. Avoiding inter-dependencies while also avoiding costly database setups is hard. You won’t be able to run the whole test suite immediately after building / committing; you’ll need nightly builds, weekly builds. But unit testing the data access layer isn’t that much easier, because JDBC is an awful API to mock. There are so many different ways of configuring and executing queries through this highly stateful API that your unit tests quickly become unmanageable.

There are several libraries that help you with database testing. Just to name a few:

  • MockRunner: This one has some JDBC-specific extensions that allow for simulating JDBC ResultSets, as well as for checking whether actual queries are executed
  • jMock: An “ordinary” Java mocking library
  • mockito: An “ordinary” Java mocking library
  • DBUnit: This one doesn’t mock your database, it’s good for testing your database. Another use-case, but still worth mentioning here

Some of the above libraries will not get you around the fact that JDBC is an awkward API to mock, specifically if you need to support several (incompatible!) versions of JDBC at the same time.
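
To give an idea of the ceremony involved, here is a rough sketch of stubbing a single query through plain JDBC with mockito. The column names and values are invented for illustration.

import static org.mockito.Mockito.*;

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Rough sketch: every interface in the JDBC chain must be stubbed for a single SELECT.
ResultSet rs = mock(ResultSet.class);
when(rs.next()).thenReturn(true, false);           // exactly one row
when(rs.getInt("ID")).thenReturn(1);
when(rs.getString("NAME")).thenReturn("George Orwell");

PreparedStatement stmt = mock(PreparedStatement.class);
when(stmt.executeQuery()).thenReturn(rs);

Connection connection = mock(Connection.class);
when(connection.prepareStatement(anyString())).thenReturn(stmt);

// ... and this still doesn't verify the actual SQL string, bind values,
// resource handling, or the many other executeXXX() paths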

Mocking the database with jOOQ

When you’re using jOOQ in your application, mocking your database just became really easy in jOOQ 3.0. jOOQ now also ships with a Mock JDBC Connection. Unlike with other frameworks, however, you only have to implement a single functional interface with jOOQ and provide that implementation to your MockConnection: the MockDataProvider. Here’s a simple implementation example:

MockDataProvider provider = new MockDataProvider() {

    // Your contract is to return execution results, given a context
    // object, which contains SQL statement(s), bind values, and some
    // other context values
    @Override
    public MockResult[] execute(MockExecuteContext context)
            throws SQLException {

        // Use ordinary jOOQ API to create an org.jooq.Result object.
        // You can also use ordinary jOOQ API to load CSV files or
        // other formats, here!
        DSLContext create = DSL.using(configuration);
        Result<MyTableRecord> result = create.newResult(MY_TABLE);
        result.add(create.newRecord(MY_TABLE));

        // Now, return 1-many results, depending on whether this is
        // a batch/multi-result context
        return new MockResult[] {
            new MockResult(1, result)
        };
    }
};

// Put your provider into a MockConnection and use that connection
// in your application. In this case, with a jOOQ DSLContext:
Connection connection = new MockConnection(provider);
DSLContext create = DSL.using(connection, dialect);

// Done! Just use the regular jOOQ API. It will return the values
// that you've specified in your MockDataProvider
assertEquals(1, create.selectOne().fetch().size());

The above implementation acts as a callback for JDBC’s various executeXXX() methods. Through a very simple MockExecuteContext API, you can thus:

  • Get access to the executed SQL and bind values (Use general jOOQ API to inline bind values into the SQL statement)
  • Distinguish between regular SQL statements and both single-statement/multi-bind-value and multi-statement/no-bind-value batch executions
  • Return one or several results using jOOQ’s org.jooq.Result objects (which you can easily import from CSV, XML, JSON, TEXT formats)
  • Return “generated keys” results through the same API
  • Let jOOQ’s MockStatement take care of the serialisation of your mock data through the JDBC API
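
To illustrate, here’s a minimal sketch of a provider that branches on the executed SQL, written against a reasonably recent jOOQ version. The NAME column and the returned row are made up for this example.

import java.sql.SQLException;

import org.jooq.DSLContext;
import org.jooq.Field;
import org.jooq.Record1;
import org.jooq.Result;
import org.jooq.SQLDialect;
import org.jooq.impl.DSL;
import org.jooq.tools.jdbc.MockDataProvider;
import org.jooq.tools.jdbc.MockResult;

// Minimal sketch: a MockDataProvider that inspects the executed SQL.
// ctx.sql() is the executed statement, ctx.bindings() contains its bind values.
MockDataProvider provider = ctx -> {
    DSLContext create = DSL.using(SQLDialect.H2);

    if (ctx.sql().toLowerCase().startsWith("select")) {
        Field<String> name = DSL.field("NAME", String.class);
        Result<Record1<String>> result = create.newResult(name);

        Record1<String> record = create.newRecord(name);
        record.set(name, "Alice");
        result.add(record);

        return new MockResult[] { new MockResult(1, result) };
    }

    // Anything unexpected should fail loudly rather than return a silent default
    throw new SQLException("Statement not supported: " + ctx.sql());
};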

There is also an experimental implementation of a MockFileDatabase, a text-based mock database that uses the following format:

# This is a sample test database for MockFileDatabase
# Its syntax is inspired from H2's test script files

# When this query is executed...
select 'A' from dual;
# ... then, return the following result
> A
> -
> A
@ rows: 1

# Just list all possible query / result combinations
select 'A', 'B' from dual;
> A B
> - -
> A B
@ rows: 1

select "TABLE1"."ID1", "TABLE1"."NAME1" from "TABLE1";
> ID1 NAME1
> --- -----
> 1   X
> 2   Y
@ rows: 2
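
Wiring such a file into a test could look like the following sketch, inside a test method that declares throws Exception. The resource name mock-data.txt and the MyTest class are placeholders for this example.

import java.sql.Connection;

import org.jooq.DSLContext;
import org.jooq.Record;
import org.jooq.Result;
import org.jooq.SQLDialect;
import org.jooq.impl.DSL;
import org.jooq.tools.jdbc.MockConnection;
import org.jooq.tools.jdbc.MockFileDatabase;

// Sketch: load the mock script above from the classpath and query it through jOOQ.
MockFileDatabase db = new MockFileDatabase(
    MyTest.class.getResourceAsStream("/mock-data.txt"));

try (Connection connection = new MockConnection(db)) {
    DSLContext create = DSL.using(connection, SQLDialect.MYSQL);

    // Matches the last statement in the script above and returns the two mocked rows
    Result<Record> result = create.fetch(
        "select \"TABLE1\".\"ID1\", \"TABLE1\".\"NAME1\" from \"TABLE1\"");
}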

MockFileDatabase implements MockDataProvider, so it’s dead-simple to provide your unit tests with sample data. Future versions of jOOQ will allow for:

  • Matching SQL statements against regex patterns to provide mock results
  • Loading these results from other formats, such as jOOQ’s supported export formats
  • Specifying the behaviour of batch statements, multi-result statements, etc.

Using jOOQ’s MockConnection in other contexts

Things don’t stop here. As jOOQ’s MockConnection is the entry point for this mocking sub-API of jOOQ, you can also use it in other environments, such as when running JPA queries, Hibernate queries, iBatis or just your plain old legacy JDBC queries.
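
For example, the very same connection can back plain, legacy JDBC code. Here is a sketch reusing a MockDataProvider like the one above; the AUTHOR query is invented.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

import org.jooq.tools.jdbc.MockConnection;

// Sketch: the same MockConnection behind plain JDBC code.
// jOOQ's MockStatement serialises the mock data through the standard JDBC API.
try (Connection connection = new MockConnection(provider);
     PreparedStatement stmt = connection.prepareStatement("select NAME from AUTHOR");
     ResultSet rs = stmt.executeQuery()) {

    while (rs.next())
        System.out.println(rs.getString("NAME"));
}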

jOOQ has just become your preferred JDBC mock framework! 😉