Deep Stack Traces Can Be a Sign of Good Code Quality

The term “leaky abstractions” has been around for a while. Coining it is most often attributed to Joel Spolsky, who wrote an often-cited article about it. I’ve now stumbled upon another interpretation of a leaky abstraction in a Geek & Poke cartoon, which measures it by the depth of a stack trace.

So, long stack traces are bad according to Geek & Poke. I’ve seen this argument before on Igor Polevoy’s blog (he’s the creator of ActiveJDBC, a Java implementation of the popular Ruby ActiveRecord query interface). Much like Joel Spolsky’s argumentation was often used to criticise ORMs, Igor’s argument was also used to compare ActiveJDBC with Hibernate. I’m citing:

One might say: so what, why do I care about the size of dependencies, depth of stack trace, etc. I think a good developer should care about these things. The thicker the framework, the more complex it is, the more memory it allocates, the more things can go wrong.

I completely agree that a framework with a certain amount of complexity tends to have longer stack traces. So if we run these axioms through our mental Prolog processors:

  • if Hibernate is a leaky abstraction, and
  • if Hibernate is complex, and
  • if complexity leads to long stack traces, then
  • leaky abstractions and long stack traces correlate

I wouldn’t go as far as claiming there’s a formal, causal connection. But a correlation seems logical.

But these things aren’t necessarily bad. In fact, long stack traces can be a good sign in terms of software quality. They can mean that the internals of a piece of software show a high amount of cohesion and a high degree of DRY-ness, which in turn means that there is little risk of subtle bugs deep down in your framework. Remember that high cohesion and high DRY-ness lead to a large portion of the code being extremely relevant within the whole framework, which in turn means that any low-level bug will pretty much blow up the whole framework, as it will cause things to go wrong everywhere. If you do test-driven development, you’ll be rewarded by noticing immediately that your silly mistake fails 90% of your test cases.

A real-world example

Let’s use jOOQ as an example to illustrate this, as we’re already comparing Hibernate and ActiveJDBC. Some of the longest stack traces in a database access abstraction can be observed by putting a breakpoint at the interface between that abstraction and JDBC, for instance when fetching data from a JDBC ResultSet.

Utils.getFromResultSet(ExecuteContext, Class<T>, int) line: 1945
Utils.getFromResultSet(ExecuteContext, Field<U>, int) line: 1938
CursorImpl$CursorIterator$CursorRecordInitialiser.setValue(AbstractRecord, Field<T>, int) line: 1464
CursorImpl$CursorIterator$CursorRecordInitialiser.operate(AbstractRecord) line: 1447
CursorImpl$CursorIterator$CursorRecordInitialiser.operate(Record) line: 1
RecordDelegate<R>.operate(RecordOperation<R,E>) line: 119
CursorImpl$CursorIterator.fetchOne() line: 1413
CursorImpl$CursorIterator.next() line: 1389
CursorImpl$CursorIterator.next() line: 1
CursorImpl<R>.fetch(int) line: 202
CursorImpl<R>.fetch() line: 176
SelectQueryImpl<R>(AbstractResultQuery<R>).execute(ExecuteContext, ExecuteListener) line: 274
SelectQueryImpl<R>(AbstractQuery).execute() line: 322
T_2698Record(UpdatableRecordImpl<R>).refresh(Field<?>...) line: 438
T_2698Record(UpdatableRecordImpl<R>).refresh() line: 428
H2Test.testH2T2698InsertRecordWithDefault() line: 931

Compared to ActiveJDBC’s stack traces, that’s quite a bit more, but still less than Hibernate’s (which uses lots of reflection and instrumentation). And it involves rather cryptic inner classes with quite a bit of method overloading. How should we interpret that? Let’s go through it bottom-up (or top-down in the stack trace).

CursorRecordInitialiser

The CursorRecordInitialiser is an inner class that encapsulates the initialisation of a Record by a Cursor, and it ensures that the relevant parts of the ExecuteListener SPI are covered in a single place. It is the gateway to JDBC’s various ResultSet methods. It is a generic internal RecordOperation implementation that is called by…

RecordDelegate

… a RecordDelegate. While the class name is pretty meaningless, its purpose is to shield and wrap all direct record operations in a way that a central implementation of the RecordListener SPI can be achieved. This SPI can be implemented by client code to listen to active record lifecycle events. The price for keeping the implementation of this SPI DRY is a couple of elements on the stack trace, as such callbacks are the standard way to implement closures in the Java language. But keeping this logic DRY guarantees that no matter how a Record is initialised, the SPI will always be invoked. There are (almost) no forgotten corner-cases.
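
To illustrate the idea, here is a minimal sketch of how a delegate can funnel every record operation through a single place where the listener callbacks are guaranteed to fire. The names loosely mirror jOOQ’s SPI, but the code is illustrative only, not jOOQ’s actual internals:

// Minimal sketch of the delegate / callback pattern described above
interface RecordOperation<R> {
    R operate(R record) throws Exception;
}

interface RecordLifecycleListener {
    void start(Object record);
    void end(Object record);
}

class RecordDelegateSketch<R> {
    private final R record;
    private final RecordLifecycleListener listener;

    RecordDelegateSketch(R record, RecordLifecycleListener listener) {
        this.record = record;
        this.listener = listener;
    }

    R operate(RecordOperation<R> operation) throws Exception {

        // No matter how the record is initialised, stored or refreshed,
        // the listener SPI is invoked from this one place only.
        listener.start(record);
        try {
            return operation.operate(record);
        }
        finally {
            listener.end(record);
        }
    }
}

The extra stack frames come from the operation callback, but in exchange, no code path can forget to notify the listener.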

But we were initialising a Record in…

CursorImpl

… a CursorImpl, an implementation of a Cursor. This might appear odd, as jOOQ Cursors are used for “lazy fetching”, i.e. for fetching Records one-by-one from JDBC.

On the other hand, the SELECT query from this stack trace simply refreshes a single UpdatableRecord, jOOQ’s equivalent of an active record. Still, all the lazy fetching logic is executed just as if we were fetching a large, complex data set. This is again to keep things DRY when fetching data. Of course, around six levels of stack trace could have been saved by simply reading the single record, as we know there can be only one. But again, any subtle bug in the cursor will likely show up in some test case, even in a remote one like the test case for refreshing records.
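
For illustration (a sketch, assuming a jOOQ-generated BOOK table with a BookRecord and a DSLContext called create), both of the following user-facing calls end up in the same cursor-based fetching code seen in the stack trace above:

// 1. Lazy fetching of potentially many records goes through a Cursor...
Cursor<BookRecord> cursor = create.selectFrom(BOOK).fetchLazy();
try {
    while (cursor.hasNext()) {
        BookRecord book = cursor.fetchOne();
        // ... process records one by one
    }
}
finally {
    cursor.close();
}

// 2. ... and so does refreshing a single active record:
BookRecord single = create.fetchOne(BOOK, BOOK.ID.eq(1));
single.refresh();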

Some may claim that all of this is wasting memory and CPU cycles. The opposite is more likely to be true. Modern JVM implementations are so good at managing and garbage-collecting short-lived objects and method calls that the slight additional complexity imposes almost no additional work on your runtime environment.

TL;DR: Long stack traces may indicate high cohesion and DRY-ness

The claim that a long stack trace is a bad thing is not necessarily correct. A long stack trace is what happens when complex frameworks are well implemented. Complexity will inevitably lead to “leaky abstractions”. But only well-designed complexity will lead to long stack traces.

Conversely, short stack traces can mean two things:

  • Lack of complexity: The framework is simple, with few features. This matches Igor’s claim for ActiveJDBC, as he is advertising ActiveJDBC as a “simple framework”.
  • Lack of cohesion and DRY-ness: The framework is poorly written, and probably has poor test coverage and lots of bugs.

Tree data structures

As a final note, it’s worth mentioning that another case where long stack traces are inevitable is when tree structures / composite pattern structures are traversed using visitors. Anyone who has ever debugged XPath or XSLT will know how deep these traces are.
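
As a rough illustration (generic example code, unrelated to any specific library), every level of nesting in a composite structure adds another accept() / visit() pair to the call stack:

import java.util.ArrayList;
import java.util.List;

interface Node {
    void accept(Visitor visitor);
}

interface Visitor {
    void visitLeaf(Leaf leaf);
    void visitBranch(Branch branch);
}

class Leaf implements Node {
    @Override
    public void accept(Visitor visitor) {
        visitor.visitLeaf(this);
    }
}

class Branch implements Node {
    final List<Node> children = new ArrayList<Node>();

    @Override
    public void accept(Visitor visitor) {
        visitor.visitBranch(this);

        // Each nested branch adds another accept() frame to the stack,
        // which is why deeply nested trees produce deep stack traces.
        for (Node child : children) {
            child.accept(visitor);
        }
    }
}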

Easy Mocking of Your Database

Test-driven development is something wonderful! Once you’ve established it in your organisation, you will start to:

  • Greatly improve your quality (things break less often)
  • Greatly improve your processes (things can be changed more easily)
  • Greatly improve your developer atmosphere (things are more fun to do)

An important part of doing test-driven development right is finding a good ratio of which kinds of code should be covered…

  • by automated unit tests
  • by automated integration tests
  • by manual “smoke tests”
  • by manual “acceptance tests”
  • not at all

Finding that ratio can be grounds for heated, religious discussions. I will soon blog about my own opinion on that subject. In this post, however, we will focus on the first kind of test: unit tests.

Unit testing your data access

When databases are involved, people will probably quickly jump to writing integration tests, because all they have to do is create a little Derby, H2 or HSQLDB (or other) test database, and run a couple of data-setup queries prior to the actual test. Their code module will then hopefully not notice the difference from a production environment, and the whole system can be tested as a black box. The advantage of this is that your tests can be written in a way that verifies your business requirements, your user stories, or whatever you call them. So much for the theory.

When these database integration tests pile up, it becomes increasingly difficult to isolate them from one another. Avoiding inter-dependencies while at the same time avoiding costly database setups is hard. You won’t be able to run the whole test suite immediately after building / committing. You need nightly builds, weekly builds. But unit testing the data access layer isn’t that much easier, because JDBC is an awful API to mock. There are so many different ways of configuring and executing queries through this highly stateful API that your unit tests quickly become unmanageable.

There are a few libraries that help you with database testing. Just to name a few:

  • MockRunner: This one has some JDBC-specific extensions that allow for simulating JDBC ResultSets, as well as for checking whether actual queries are executed
  • jMock: An “ordinary” Java mocking library
  • mockito: An “ordinary” Java mocking library
  • DBUnit: This one doesn’t mock your database, it’s good for testing your database. Another use-case, but still worth mentioning here

Some of the above libraries will not get you around the fact that JDBC is an awkward API to mock, specifically if you need to support several (incompatible!) versions of JDBC at the same time.
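
To give a rough idea of the ceremony involved, here is a hedged sketch of what stubbing even a trivial single-row, single-column query with a general-purpose mocking library such as mockito can look like (the stubbed query and values are made up):

import static org.mockito.Mockito.*;

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class PlainJdbcMockSketch {

    public Connection mockConnection() throws SQLException {

        // Every step of the stateful JDBC call chain must be stubbed explicitly
        Connection connection = mock(Connection.class);
        PreparedStatement stmt = mock(PreparedStatement.class);
        ResultSet rs = mock(ResultSet.class);

        when(connection.prepareStatement(anyString())).thenReturn(stmt);
        when(stmt.executeQuery()).thenReturn(rs);

        // Simulate a single row with one column containing "A"
        when(rs.next()).thenReturn(true, false);
        when(rs.getString(1)).thenReturn("A");

        // ... and this still doesn't cover Statement, CallableStatement,
        // batches, generated keys, metadata, or the other execute() variants
        return connection;
    }
}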

Mocking the database with jOOQ

If you’re using jOOQ in your application, mocking your database has just become really easy with jOOQ 3.0. jOOQ now also ships with a Mock JDBC Connection. Unlike with other frameworks, however, you only have to implement a single functional interface with jOOQ and provide that implementation to your MockConnection: the MockDataProvider. Here’s a simple implementation example:

MockDataProvider provider = new MockDataProvider() {

    // Your contract is to return execution results, given a context
    // object, which contains SQL statement(s), bind values, and some
    // other context values
    @Override
    public MockResult[] execute(MockExecuteContext context) 
    throws SQLException {

        // Use ordinary jOOQ API to create an org.jooq.Result object.
        // You can also use ordinary jOOQ API to load CSV files or
        // other formats, here! (configuration and MY_TABLE stand for
        // your application's jOOQ Configuration and a generated table)
        DSLContext create = DSL.using(configuration);
        Result<MyTableRecord> result = create.newResult(MY_TABLE);
        result.add(create.newRecord(MY_TABLE));

        // Now, return 1-many results, depending on whether this is
        // a batch/multi-result context
        return new MockResult[] {
            new MockResult(1, result)
        };
    }
};

// Put your provider into a MockConnection and use that connection
// in your application. In this case, with a jOOQ DSLContext:
Connection connection = new MockConnection(provider);
DSLContext create = DSL.using(connection, dialect);

// Done! Just use the regular jOOQ API. It will return the values
// that you've specified in your MockDataProvider
assertEquals(1, create.selectOne().fetch().size());

The above implementation acts as a callback for JDBC’s various executeXXX() methods. Through a very simple MockExecuteContext API, you can thus do all of the following (a small sketch follows this list):

  • Get access to the executed SQL and bind values (Use general jOOQ API to inline bind values into the SQL statement)
  • Distinguish between regular SQL statements and both single-statement/multi-bind-value and multi-statement/no-bind-value batch executions
  • Return one or several results using jOOQ’s org.jooq.Result objects (which you can easily import from CSV, XML, JSON, TEXT formats)
  • Return “generated keys” results through the same API
  • Let jOOQ’s MockStatement take care of the serialisation of your mock data through the JDBC API
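
For instance, here is a hedged sketch (reusing the MockDataProvider SPI from above; the dispatching logic and the H2 dialect are arbitrary choices) that inspects the executed SQL to decide what kind of result to return:

MockDataProvider provider = new MockDataProvider() {
    @Override
    public MockResult[] execute(MockExecuteContext ctx) throws SQLException {
        DSLContext create = DSL.using(SQLDialect.H2);

        // The executed SQL and its bind values are available from the context
        String sql = ctx.sql().toLowerCase();
        Object[] bindings = ctx.bindings();

        // Return an (empty) result for queries...
        if (sql.startsWith("select")) {
            return new MockResult[] {
                new MockResult(0, create.newResult())
            };
        }

        // ... and a plain update count for everything else
        return new MockResult[] { new MockResult(1) };
    }
};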

There is also an experimental implementation of a MockFileDatabase, a text-based mock database that uses the following format:

# This is a sample test database for MockFileDatabase
# Its syntax is inspired from H2's test script files

# When this query is executed...
select 'A' from dual;
# ... then, return the following result
> A
> -
> A
@ rows: 1

# Just list all possible query / result combinations
select 'A', 'B' from dual;
> A B
> - -
> A B
@ rows: 1

select "TABLE1"."ID1", "TABLE1"."NAME1" from "TABLE1";
> ID1 NAME1
> --- -----
> 1   X
> 2   Y
@ rows: 2
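
For illustration, a file like the one above might be wired into a test roughly as follows (the file name is made up, and the exact MockFileDatabase constructor may vary, as the feature is experimental):

// Load the mock script into the experimental MockFileDatabase
MockDataProvider db = new MockFileDatabase(
    new File("src/test/resources/mock-data.txt"));

// Use it like any other MockDataProvider
Connection connection = new MockConnection(db);
DSLContext create = DSL.using(connection, SQLDialect.H2);

// Statements matching an entry in the file return the declared results
assertEquals(2, create.fetch(
    "select \"TABLE1\".\"ID1\", \"TABLE1\".\"NAME1\" from \"TABLE1\"").size());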

MockFileDatabase implements MockDataProvider, so it’s dead-simple to provide your unit tests with sample data. Future versions of jOOQ will allow for:

  • Matching SQL statements against regex patterns to provide mock results
  • Loading these results from other formats, such as jOOQ’s supported export formats
  • Specifying the behaviour of batch statements, multi-result statements, etc.

Using jOOQ’s MockConnection in other contexts

Things don’t stop here. As jOOQ’s MockConnection is the entry point for this mocking sub-API of jOOQ, you can also use it in other environments, such as when running JPA queries, Hibernate queries, iBatis or just your plain old legacy JDBC queries.
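
For example, since MockConnection is a regular java.sql.Connection, even plain JDBC code can run against the same provider (a sketch reusing the provider from above; the query text is made up):

// Plain JDBC code running against the mock connection
Connection connection = new MockConnection(provider);

PreparedStatement stmt = connection.prepareStatement(
    "select FIRST_NAME, LAST_NAME from AUTHOR");
ResultSet rs = stmt.executeQuery();

while (rs.next()) {
    System.out.println(rs.getString(1) + " " + rs.getString(2));
}

rs.close();
stmt.close();
connection.close();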

jOOQ has just become your preferred JDBC mock framework! ;-)