jOOQ Newsletter November 29, 2013

subscribe to the newsletter here

New jOOQ Express Edition

We have been listening to you, our valued customers, and will now start offering a budget license for those wanting to leverage the free Oracle Express or SQL Server Express editions. Inspired by those powerful databases, we call this the jOOQ Express Edition.

This competitively priced license can be used by up to three workstations within the same legal entity, which makes it a perfect tool for freelancers and small startups, where control over the budget is of the essence. Obviously, all of jOOQ’s supported Open Source databases can be used, too.

New Open Source Blog Category

Since we started pursuing a dual-licensing model for jOOQ, we have made lots of new acquaintances and gathered new experiences, both in the entrepreneurial world in general and in the Open Source world in particular. Dual-licensing software is a very adventurous endeavour with at least these stakeholders involved:

  • “Enterprise customers” who see this move as a step towards even better quality and higher professionalism.
  • “Everyday customers” who regret losing the option of a free middleware product but who are willing to spend money on high quality software that helps them increase quality and developer effectiveness in their code.
  • “Idealistic customers” who prefer Open Source to be “Free”, as understood by Richard Stallman.

As you can see, every project has its set of stakeholders. This is also true for the jOOQ dual-licensing project. We try our best to provide all of our stakeholders with an optimal product, without losing our vision of making Open Source a viable business model.

An important part of being “open” is to share experience and vision, and we’re doing so on our blog. In the recent past, we have blogged quite a bit about Open Source, to an extent that we have now created a new category for that subject. Some of these articles may be more controversial:

Others will be simple but interesting experience reports. In that context, we’re reaching out to major Open Source players, such as Oracle, Red Hat and others. So, meet us soon on, because as Red Hat puts it: “Open Source is changing the world!”

Upcoming Events

The schedule for the awesome Java2Days conference in Sofia is now published! The same is true for the great abstract of the JUG-BB event in Berlin. Here is an overview of other, upcoming events:

Stay informed about 2014 events on

SQL Zone – SQL Antipatterns

We’ve advertised excellent books on SQL before. This time we want to make you aware of SQL Antipatterns by Bill Karwin. Bill has a very refreshing style of writing. One can really feel the pain he must have suffered upon discovering the code madness he has seen in his work as a consultant. We can feel for him, as we have created Code That Made Me Cry. This book is an excellent read for architects who need new input on how to explain bad ideas to their teams, just as it is an excellent read for SQL developers who want to learn a thing or two about how not to do things.

We list these books on our website at

If you’ve written a book, a blog post or a tutorial about SQL and want us to review and/or advertise it, please contact us.

SQL Zone – Keyset Paging

We’ve written about the concept of keyset paging (also known as the “seek method”) before. Applying keyset paging is not so trivial, which is why the upcoming jOOQ 3.3’s native support for it will be even more awesome. One of our blog readers was eager to see more concrete examples for keyset paging, and we heard them. Read about how to pre-calculate all page boundaries when applying keyset paging:

Evil Field Injection vs. Annotatiomania™ with Lombok

Spring was the framework that originally brought us dependency injection, e.g. through the utterly magic @Autowired, or the equally magic JEE @Inject. This week, I’ve come across an interesting blog post by Oliver Gierke, a prominent Spring developer. Oliver claims that “field injection is evil.” Hear ye! So, dependency injection isn’t unanimously promoted even within Spring / Pivotal? Oliver makes good points about testability and about the fact that injected fields may be null under circumstances that don’t resemble productive environments. He then elaborates on creating immutable state through the usage of constructors. This is also advertised by JIRM, the Java Immutable object Relational Mapper by Adam Gent. And obviously, immutability is also advertised by the creators of the Scala language, who systematically distinguish mutable from immutable collections. Now, Oliver solves this problem by using Project Lombok, a tool that takes advantage of Java’s annotation processing capabilities, as can be seen in this tutorial. Here’s his solution in code:

@RequiredArgsConstructor(
    onConstructor = @__(@Inject))
class MyComponent {

  final @NonNull MyCollaborator collaborator;

  public void myBusinessMethod() {
    collaborator.doSomething();
  }
}

As you might guess, the above example will generate a constructor taking all final members as arguments and annotate that constructor with the JEE @Inject annotation. Now, this solution certainly works and helps avoid the boilerplate that you would have if you wrote the constructor code yourself. But is this really our “brave new world”? Did we really want to put those annotations all over the place? Or is it just me, feeling that Annotatiomania™ is getting out of hand? Check out this example from Code That Made Me Cry that makes use of the JPA 2.1 @SqlResultSetMapping annotation: all that code has been moved from readable, imperative writing into declarative annotations.
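To see what that boilerplate would look like, here is a hand-written sketch of roughly what Lombok generates in the example above. MyCollaborator is reduced to a hypothetical interface, and the null check mirrors @NonNull:

```java
// Hypothetical collaborator interface, for illustration only
interface MyCollaborator {
    String doSomething();
}

class MyComponent {

    private final MyCollaborator collaborator;

    // The constructor that Lombok's @RequiredArgsConstructor would
    // generate for the final member, including the @NonNull check.
    // With onConstructor, Lombok would also add @Inject here.
    MyComponent(MyCollaborator collaborator) {
        if (collaborator == null)
            throw new NullPointerException("collaborator");
        this.collaborator = collaborator;
    }

    public String myBusinessMethod() {
        return collaborator.doSomething();
    }
}

public class ConstructorInjectionDemo {
    public static void main(String[] args) {
        // Constructor injection needs no container for testing:
        MyComponent component = new MyComponent(new MyCollaborator() {
            @Override
            public String doSomething() {
                return "collaborated";
            }
        });
        System.out.println(component.myBusinessMethod());
        // prints: collaborated
    }
}
```

Note how the immutable, constructor-initialised field can never be observed as null, which is precisely Oliver’s point about testability.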

Solving things on a language level

Now we can argue whether dependency injection is good or prone to abuse. But this third-party annotation-processing code generation is just crazy. Let’s have a look at how the Scala language resolves such things (as can be seen in this forum thread):

class MyComponent @Inject() 
     (collaborator : MyCollaborator) {

  def myBusinessMethod() {
    collaborator.doSomething()
  }
}
Now, this looks much leaner, doesn’t it? I really wish we could have “Scala, the good parts” in Java too. Merging constructors and immutable state into the class declaration is just plain awesome. This is how an object-oriented language should be designed. And it has been repeated in Red Hat’s Ceylon. Stay tuned for a future blog post about borrowing Scala’s and Ceylon’s good parts.

Detect JDBC API Misusage with JDBCLint

I’ve recently seen an advertisement for JDBCLint on the H2 User Group. JDBCLint is an Apache-licensed JDBC proxy implementation that performs plausibility checks on the lifecycles of your JDBC objects. For instance, it
  • Checks if a ResultSet is closed twice
  • Checks if a ResultSet is not closed at all (in the finalizer)
  • Checks if a ResultSet yields unread columns
All of these checks can be disabled by specifying relevant properties on the proxy. And the best thing is that this proxy is so easy to integrate:

import com.maginatics.jdbclint.ConnectionProxy;

Connection connection = ...
connection = ConnectionProxy.newInstance(
    connection, new Properties());

connection.close();
// reports error and optionally throws exception
connection.close();

In addition to static code analysis tools like FindBugs or Alvor, this tool can help you find very subtle memory leaks in your large legacy application. Certainly a tool to have on your tool chain!
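The proxy technique behind such a tool is simple enough to sketch with nothing but the JDK. The following toy example (all names hypothetical, this is not JDBCLint’s actual implementation) wraps a resource in a java.lang.reflect.Proxy that flags double-close:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class LintProxyDemo {

    // A toy resource interface standing in for java.sql.Connection
    interface Resource {
        void close();
    }

    // Wraps a Resource in a dynamic proxy that detects double-close,
    // much like a lint proxy can wrap JDBC objects
    static Resource lint(final Resource delegate) {
        return (Resource) Proxy.newProxyInstance(
            Resource.class.getClassLoader(),
            new Class<?>[] { Resource.class },
            new InvocationHandler() {
                boolean closed = false;

                @Override
                public Object invoke(Object proxy, Method method, Object[] args)
                throws Throwable {
                    if ("close".equals(method.getName())) {
                        if (closed)
                            throw new IllegalStateException(
                                "Resource closed twice");
                        closed = true;
                    }
                    return method.invoke(delegate, args);
                }
            });
    }

    public static void main(String[] args) {
        Resource resource = lint(new Resource() {
            @Override
            public void close() {}
        });

        resource.close();
        try {
            resource.close();
        }
        catch (IllegalStateException e) {
            System.out.println("Lint caught: " + e.getMessage());
        }
        // prints: Lint caught: Resource closed twice
    }
}
```

The real thing obviously tracks much more state (unread columns, finalizers, etc.), but the interception mechanism is the same.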

Using jOOQ with JAX-RS to Build a Simple License Server

In some use-cases, having a lean, single-tier server-side architecture is desirable. Typically, such architectures expose a RESTful API and implement the client code and UI using something like AngularJS. In Java, the standard API for RESTful applications is JAX-RS, which is part of JEE 7, along with a standard JSON implementation. But you can also use JAX-RS outside of a JEE container. The following example shows how to set up a simple license server using these technologies:
  • Maven for building and running
  • Jetty as a lightweight Servlet implementation
  • Jersey, the JAX-RS (JSR 311 & JSR 339) reference implementation
  • jOOQ as a data access layer
For the example, we’ll use a PostgreSQL database.

Example code

Now, before you go and copy-paste all the code off this blog post, consider downloading it from here instead. The sample code is licensed under the terms of the Apache Software License 2.0.

Creating the license server database

We’ll keep the example simple and use a LICENSE table to store all license keys and associated information, whereas a LOG_VERIFY table is used to log access to the license server. Here’s the DDL:

CREATE TABLE license_server.license (
  ID           SERIAL8      NOT NULL,

  -- The date when the license was issued
  LICENSE_DATE TIMESTAMP    NOT NULL,
  -- The e-mail address of the licensee
  LICENSEE     TEXT         NOT NULL,
  -- The license key
  LICENSE      TEXT         NOT NULL,
  -- The licensed version(s), a regular expression
  VERSION      TEXT         NOT NULL DEFAULT '.*',

  CONSTRAINT pk_license PRIMARY KEY (ID)
);

CREATE TABLE license_server.log_verify (
  ID           SERIAL8      NOT NULL,

  -- The licensee whose license is being verified
  LICENSEE     TEXT         NOT NULL,
  -- The license key that is being verified
  LICENSE      TEXT         NOT NULL,
  -- The request IP verifying the license
  REQUEST_IP   TEXT,
  -- The version that is being verified
  VERSION      TEXT         NOT NULL,
  -- Whether the verification was successful
  MATCH        BOOLEAN      NOT NULL DEFAULT false,

  CONSTRAINT pk_log_verify PRIMARY KEY (ID)
);

To make things a bit more interesting (and secure), we’ll also push license key generation into the database, by generating it from a stored function as such:

CREATE OR REPLACE FUNCTION license_server.generate_key (
    IN license_date TIMESTAMP,
    IN email TEXT
) RETURNS VARCHAR AS $$
BEGIN
    RETURN 'license-key';
END;
$$ LANGUAGE plpgsql;

The actual algorithm might be using a secret salt to hash the function arguments. For the sake of a tutorial, a constant string will suffice.
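As a rough illustration of what such an algorithm could look like, here is a hypothetical Java sketch that hashes the two function arguments together with a secret salt. Both the salt and the choice of SHA-256 are assumptions for the sketch, not part of the example above:

```java
import java.security.MessageDigest;
import java.sql.Timestamp;

public class KeyGenerator {

    // Hypothetical secret salt; a real server would keep
    // this out of the source code
    private static final String SALT = "change-me";

    // Hashes the salt and both "function arguments" and renders
    // the digest as a hex string, yielding a deterministic,
    // hard-to-forge license key
    static String generateKey(Timestamp licenseDate, String email)
    throws Exception {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        byte[] hash = digest.digest(
            (SALT + licenseDate + email).getBytes("UTF-8"));

        StringBuilder key = new StringBuilder();
        for (byte b : hash)
            key.append(String.format("%02x", b));
        return key.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(generateKey(
            new Timestamp(0), "test@example.com"));
    }
}
```

Keeping the same logic inside the database (e.g. with pgcrypto) has the advantage that the salt never leaves the server.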

Setting up the project

We’re going to be setting up the jOOQ code generator using Maven:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0">

  <!-- ... -->

  <!-- See GitHub for details -->

  <!-- ... -->

</project>

With the above setup, we’re now pretty ready to start developing our license service as a JAX-RS service.

The license service class

Once we’ve run the jOOQ code generator using Maven, we can write the following service class:

/**
 * The license server.
 */
@Path("/license/")
public class LicenseService {

  /**
   * <code>/license/generate</code> generates
   * and returns a new license key.
   *
   * @param mail The input email of the licensee.
   */
  @GET
  @Produces("text/plain")
  @Path("/generate")
  public String generate(
    final @QueryParam("mail") String mail
  ) {
    return run(new CtxRunnable() {

      @Override
      public String run(DSLContext ctx) {
        Timestamp licenseDate = new Timestamp(
            System.currentTimeMillis());

        return
        ctx.insertInto(LICENSE)
           .set(LICENSE.LICENSE_, generateKey(
                inline(licenseDate), inline(mail)))
           .set(LICENSE.LICENSE_DATE, licenseDate)
           .set(LICENSE.LICENSEE, mail)
           .returning()
           .fetchOne()
           .getLicense();
      }
    });
  }

  /**
   * <code>/license/verify</code> checks if a given
   * licensee has access to version using a license.
   *
   * @param request
   *   The servlet request from the JAX-RS context.
   * @param mail
   *   The input email address of the licensee.
   * @param license
   *   The license used by the licensee.
   * @param version
   *   The product version being accessed.
   */
  @GET
  @Produces("text/plain")
  @Path("/verify")
  public String verify(
    final @Context HttpServletRequest request,
    final @QueryParam("mail") String mail,
    final @QueryParam("license") String license,
    final @QueryParam("version") String version
  ) {
    return run(new CtxRunnable() {

      @Override
      public String run(DSLContext ctx) {
        String v = (version == null
                 || version.equals(""))
          ? ""
          : version;

        return
        ctx.insertInto(LOG_VERIFY)
           .set(LOG_VERIFY.LICENSE, license)
           .set(LOG_VERIFY.LICENSEE, mail)
           .set(LOG_VERIFY.REQUEST_IP,
                request.getRemoteAddr())
           .set(LOG_VERIFY.MATCH, field(
                selectCount()
               .from(LICENSE)
               .where(LICENSE.LICENSEE.eq(mail))
               .and(LICENSE.LICENSE_.eq(license))
               .and(val(v).likeRegex(LICENSE.VERSION))
               .asField().gt(0)))
           .set(LOG_VERIFY.VERSION, v)
           .returning(LOG_VERIFY.MATCH)
           .fetchOne()
           .getValue(LOG_VERIFY.MATCH, String.class);
      }
    });
  }

  // [...]

The INSERT INTO LOG_VERIFY query is actually rather interesting. In plain SQL, it would look like this:

INSERT INTO LOG_VERIFY (
  LICENSE, LICENSEE, MATCH, VERSION
)
VALUES (
  :license, :mail,
  (SELECT COUNT(*) FROM LICENSE
    WHERE LICENSEE = :mail
      AND LICENSE = :license
      AND :version ~ VERSION) > 0,
  :version
)
RETURNING MATCH

Apart from the foregoing, the LicenseService also contains a couple of simple utilities:

/**
 * This method encapsulates a transaction and
 * initialises a jOOQ DSLContext. This could also be
 * achieved with Spring and DBCP for connection
 * pooling.
 */
private String run(CtxRunnable runnable) {
  Connection c = null;

  try {
    c = getConnection(
        "jdbc:postgresql:postgres", "postgres",
        System.getProperty("pw", "test"));
    DSLContext ctx =
    DSL.using(new DefaultConfiguration()
       .set(new DefaultConnectionProvider(c))
       .set(SQLDialect.POSTGRES)
       .set(new Settings()
           .withExecuteLogging(false)));

    return runnable.run(ctx);
  }
  catch (Exception e) {
    e.printStackTrace();
    return "Service Unavailable";
  }
  finally {
    JDBCUtils.safeClose(c);
  }
}

private interface CtxRunnable {
  String run(DSLContext ctx);
}

Configuring Spring and Jetty

All we need now is to configure Spring…

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:context="http://www.springframework.org/schema/context">

  <context:component-scan
      base-package="org.jooq.example.jaxrs" />

</beans>
… and Jetty …

<?xml version="1.0" encoding="UTF-8"?>
<web-app version="2.4"
         xmlns="http://java.sun.com/xml/ns/j2ee">

  <servlet>
    <servlet-name>Jersey Spring Web Application</servlet-name>
    <!-- Servlet class and init params: see GitHub for details -->
  </servlet>

  <servlet-mapping>
    <servlet-name>Jersey Spring Web Application</servlet-name>
    <!-- URL pattern: see GitHub for details -->
  </servlet-mapping>

</web-app>

… and we’re done! We can now run the server with the following command:
mvn jetty:run
Or if you need a custom port:
mvn jetty:run -Djetty.port=8088

Using the license server

You can now use the license server at the following URLs:
-> license-key

-> true

-> false
Let’s verify what happened in the database:
select * from license_server.license
-- id | licensee         | license     | version
--  3 | | license-key | .*

select * from license_server.log_verify
-- id | licensee         | license     | match
--  2 | | license-key | t
--  5 | | wrong       | f

Downloading the complete example

The complete example can be downloaded for free and under the terms of the Apache Software License 2.0 from here:

Squel – A SQL Query Builder for JavaScript

… Yes! You’ve read correctly. For JavaScript. OK, there has been quite a bit of traction around server-side JavaScript through node.js. And for those among you brave enough to actually write JavaScript, writing SQL in JavaScript might then seem like a good idea. So I have discovered this library called squel.js, which has a nice-looking GitHub-style website and a big fat disclaimer almost at the top:

NOTE: It is recommended that you do NOT create queries browser-side to run on the server as this massively increases your exposure to SQL Injection attacks.

Again: if such a disclaimer needs to be added at the top of your website, is it really a good idea to proceed? But it may be, for the node.js folks. So let’s have a look at the syntax of Squel:

squel.select()
     .from("students")
/* SELECT FROM students */

Does this look familiar? So far, it could also be jOOQ code. With this SQL builder API, you can also create select from derived tables:

squel.select()
     .from(squel.select()
        .from('students'), 's')
/* SELECT FROM (SELECT * FROM students) `s` */

Or perform JOINs:

squel.select()
     .from("students")
     .left_join("teachers", null, 
          "students.id = teachers.student_id")
     .right_join("jailed", "j", 
          "j.student_id = students.id")

/*  SELECT FROM students
        LEFT JOIN teachers 
        ON (students.id = teachers.student_id)
        RIGHT JOIN jailed `j` 
        ON (j.student_id = students.id) */

Obviously, unlike Java SQL builders, this API is not typesafe, but it’s still interesting to see fluent APIs in other languages as well.
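Just to illustrate the pattern, here is a deliberately minimal, equally non-typesafe sketch of such a string-based fluent builder in Java. All names are hypothetical; this is neither Squel’s nor jOOQ’s actual API:

```java
public class FluentSqlDemo {

    // A minimal, string-based SQL builder in the style of Squel.
    // Like Squel, and unlike jOOQ, it is not typesafe: it only
    // concatenates strings, each method returning `this` for chaining.
    static class Select {
        private final StringBuilder sql =
            new StringBuilder("SELECT *");

        Select from(String table) {
            sql.append(" FROM ").append(table);
            return this;
        }

        Select leftJoin(String table, String condition) {
            sql.append(" LEFT JOIN ").append(table)
               .append(" ON (").append(condition).append(")");
            return this;
        }

        @Override
        public String toString() {
            return sql.toString();
        }
    }

    public static void main(String[] args) {
        String sql = new Select()
            .from("students")
            .leftJoin("teachers",
                 "students.id = teachers.student_id")
            .toString();

        System.out.println(sql);
        // prints: SELECT * FROM students LEFT JOIN teachers
        //         ON (students.id = teachers.student_id)
    }
}
```

The chaining reads nicely in both languages; what a typesafe builder adds on top is that the compiler, not the database, rejects a malformed query.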

A Lesser-Known Java 8 Feature: Generalized Target-Type Inference

Going through the list of Java 8 features, Generalized Target-Type Inference struck me as a particularly interesting, lesser-known gem. It looks as though the Java language designers will ease some of the pain that we’ve been having with generics in the past (Java 5-7). Let’s have a look at their example:

class List<E> {
  static <Z> List<Z> nil() {..}
  static <Z> List<Z> cons(Z head, List<Z> tail) {..}
  E head() {..}
}

Given the above example, the JEP 101 feature claims that it would be nice to be able to write:

// This:
List.cons(42, List.nil());
String s = List.nil().head();

// ... instead of this:
List.cons(42, List.<Integer>nil());
String s = List.<String>nil().head();

Being a fluent API designer myself, I was thrilled to see that such an improvement is on the roadmap, particularly the latter. What’s so exciting about these changes? Let me comment on that more in detail:

// In addition to inferring generic types from
// assignments
List<String> l = List.nil();

// ... it would be nice for the compiler to be able
// to infer types from method argument types
List.cons(42, List.nil());

// ... or from "subsequent" method calls
String s = List.nil().head();

So in the last example where methods are chained, the type inference would be delayed until the whole assignment expression has been evaluated. From the left-hand side of the assignment, the compiler could infer that <Z> binds to String on the head() call. This information could then be used again to infer that <Z> binds again to String on the nil() call. Sounds like a lot of trickery to me, as the nil() call’s AST evaluations would need to be delayed until a “dependent” sub-AST is evaluated. Is that a good idea?

Yes, this is so awesome!

… you may think. Because a fluent API like jOOQ or the Streams API could be designed in a much, much more fluent style, delaying type inference until the end of the call chain. So I downloaded the latest evaluation distribution of JDK 8 to test this with the following program:

public class InferenceTest {
    public static void main(String[] args) {
        List<String> ls = List.nil();
        List.cons(42, List.nil());
        String s = List.nil().head();
    }
}
I compiled this and I got:
C:\Users\Lukas\java8>javac InferenceTest.java
InferenceTest.java: error: incompatible types: 
    Object cannot be converted to String
        String s = List.nil().head();
1 error
So, the type inference based on the method argument type is implemented (and thus compiles), but not the type inference for chained method calls. I searched the internet for an explanation and found this Stack Overflow question, linking to this interesting thread on the lambda-dev mailing list. It appears that the Java type system has become quite complex. Too complex to implement such crazy type inference stuff. But still, this is a slight improvement that will be greatly valued when writing everyday Java 8 code. And maybe, in Java 9, we’ll get val and var, like everyone else ;-)
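For the curious, the List sketch from JEP 101’s example can be fleshed out into a compilable class. The implementation bodies below are my own assumption (backed by ArrayList), added only so the explicit type witnesses that pre-Java-8 compilers require for chained calls can be demonstrated:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class InferenceDemo {

    // Factory methods shaped like List.nil() / List.cons()
    // in the JEP 101 example above
    static <Z> List<Z> nil() {
        return Collections.emptyList();
    }

    static <Z> List<Z> cons(Z head, List<Z> tail) {
        List<Z> result = new ArrayList<Z>(tail);
        result.add(0, head);
        return result;
    }

    public static void main(String[] args) {
        // Inference from the assignment target already
        // works pre-Java 8:
        List<String> l = nil();

        // But chained or nested calls need an explicit
        // type witness on older compilers:
        List<Integer> numbers =
            cons(42, InferenceDemo.<Integer>nil());

        System.out.println(numbers);
        // prints: [42]
    }
}
```

Removing the `<Integer>` witness is exactly what the generalized target-type inference of JEP 101 permits for the method-argument case.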

Using jOOQ With Groovy

Some people may be using jOOQ with Groovy for easy scripting. As with the existing jOOQ / Scala integration, some Groovy language features can be leveraged. Take the following example, for instance:

package org.jooq.groovy

import static org.jooq.impl.DSL.*
import static org.jooq.groovy.example.h2.Tables.*

import groovy.sql.Sql
import org.jooq.*
import org.jooq.impl.DSL

sql = Sql.newInstance(
    'sa', '', 'org.h2.Driver')

a = T_AUTHOR.as("a")
b = T_BOOK.as("b")

DSL.using(sql.connection)
   .select(a.FIRST_NAME, a.LAST_NAME, b.TITLE)
   .from(a)
   .join(b).on(a.ID.eq(b.AUTHOR_ID))
   .fetchInto ({ 
       r -> println(
           "${r.getValue(a.FIRST_NAME)} " +
           "${r.getValue(a.LAST_NAME)} " +
           "has written ${r.getValue(b.TITLE)}"
       )
   } as RecordHandler)

Groovy is not such a typesafe language. When we miss the .on() clause in the above query, Groovy’s Eclipse IDE integration would indicate that the subsequent call to fetchInto() might not work at run time. But Groovy cannot be sure, just as much as the getValue() calls cannot be guaranteed to work in Groovy’s interpretation of what closures are. This is how Eclipse displays the above code:
jOOQ code written in Groovy
What’s also interesting to see is that Groovy cannot infer the SAM (Single Abstract Method) interface type that would best match the fetchInto() call. We have to explicitly tell Groovy to coerce the closure to a jOOQ RecordHandler, and inside that RecordHandler, we cannot access the well-known type of r, which would be:
Record3<String, String, String>
Using jOOQ with Groovy is certainly possible, but also not as powerful as with Scala or with Java 8.
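The difference can be made concrete with a little Java. Given a generic SAM type (a hypothetical stand-in for jOOQ’s RecordHandler below), the Java compiler binds the record type statically, which is exactly the information Groovy’s closure coercion loses:

```java
import java.util.Arrays;

public class SamDemo {

    // Hypothetical stand-in for a SAM type like jOOQ's RecordHandler
    interface Handler<R> {
        void handle(R record);
    }

    // A fetchInto-style method: the compiler infers R from the
    // records argument, so the handler's parameter type is known
    static <R> void fetchInto(Iterable<R> records, Handler<R> handler) {
        for (R r : records)
            handler.handle(r);
    }

    public static void main(String[] args) {
        // Unlike Groovy's `as RecordHandler` coercion, no cast is
        // needed, and `record` is statically typed as String:
        fetchInto(Arrays.asList("a", "b"), new Handler<String>() {
            @Override
            public void handle(String record) {
                System.out.println(record.toUpperCase());
            }
        });
        // prints: A and B, each on its own line
    }
}
```

With Java 8 lambdas, the anonymous class collapses to `fetchInto(records, r -> ...)` while keeping the same static typing.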

Alternative ways of writing SQL with Groovy

Apart from using a SQL query builder like jOOQ (obviously, as this is the jOOQ blog, or a syndication thereof), you can also use other ways of writing SQL in Groovy. The standard way is to use Groovy’s own SQL support, which is a much more convenient, string-based approach than using JDBC directly. In fact, Groovy SQL is how JDBC should have been implemented in the first place:

import groovy.sql.Sql

sql = Sql.newInstance(
    'sa', '', 'org.h2.Driver')
sql.eachRow('select * from t_author') { 
    println "${it.first_name} ${it.last_name}" 
}
Another interesting approach is to leverage Groovy’s advanced internal DSL capabilities. Here’s an example by Ilya Sterin, where he created a DSL for SQL creation in Groovy:

Select select = ("table1") {
    join("table2", type: "INNER") {
        using(table1: "col1", table2: "col1")
    }
    join("table3", type: "OUTER") {
        using(table1: "col2", table2: "col2")
        using(table1: "col3", table2: "col3")
    }
    where("table1.col1 = 'test'")
    groupBy(table1: "col1", table2: "col1")
    orderBy(table1: "col1", table2: "col1")
}

Read the full blog post here:

Deep Stack Traces Can be a Sign for Good Code Quality

The term “leaky abstractions” has been around for a while. Coining it is most often attributed to Joel Spolsky, who wrote this often-cited article about it. I’ve now stumbled upon another interpretation of a leaky abstraction, measured by the depth of a stack trace:

Leaky Abstractions as understood by Geek and Poke (Licensed CC-BY)
Leaky Abstractions as understood by Geek and Poke (Licensed CC-BY)

So, long stack traces are bad according to Geek & Poke. I’ve seen this argument before on Igor Polevoy’s blog (he’s the creator of ActiveJDBC, a Java implementation of the popular Ruby ActiveRecord query interface). Much like Joel Spolsky’s argumentation was often used to criticise ORMs, Igor’s argument was also used to compare ActiveJDBC with Hibernate. I’m citing:

One might say: so what, why do I care about the size of dependencies, depth of stack trace, etc. I think a good developer should care about these things. The thicker the framework, the more complex it is, the more memory it allocates, the more things can go wrong.

I completely agree that a framework with a certain amount of complexity tends to have longer stack traces. So if we run these axioms through our mental Prolog processors:

  • if Hibernate is a leaky abstraction, and
  • if Hibernate is complex, and
  • if complexity leads to long stack traces, then
  • leaky abstractions and long stack traces correlate

I wouldn’t go as far as claiming there’s a formal, causal connection. But a correlation seems logical.

But these things aren’t necessarily bad. In fact, long stack traces can be a good sign in terms of software quality. It can mean that the internals of a piece of software show a high amount of cohesion, a high degree of DRY-ness, which again means that there is little risk for subtle bugs deep down in your framework. Remember that a high cohesion and high DRY-ness lead to a large portion of the code being extremely relevant within the whole framework, which again means that any low-level bug will pretty much blow up the whole framework as it will lead to everything going wrong. If you do test-driven development, you’ll be rewarded by noticing immediately that your silly mistake fails 90% of your test cases.

A real-world example

Let’s use jOOQ as an example to illustrate this, as we’re already comparing Hibernate and ActiveJDBC. Some of the longest stack traces in a database access abstraction can be achieved by putting a breakpoint at the interface of that abstraction with JDBC. For instance, when fetching data from a JDBC ResultSet.

Utils.getFromResultSet(ExecuteContext, Class<T>, int) line: 1945
Utils.getFromResultSet(ExecuteContext, Field<U>, int) line: 1938
CursorImpl$CursorIterator$CursorRecordInitialiser.setValue(AbstractRecord, Field<T>, int) line: 1464
CursorImpl$CursorIterator$CursorRecordInitialiser.operate(AbstractRecord) line: 1447
CursorImpl$CursorIterator$CursorRecordInitialiser.operate(Record) line: 1
RecordDelegate<R>.operate(RecordOperation<R,E>) line: 119
CursorImpl$CursorIterator.fetchOne() line: 1413
CursorImpl$ line: 1389
CursorImpl$ line: 1
CursorImpl<R>.fetch(int) line: 202
CursorImpl<R>.fetch() line: 176
SelectQueryImpl<R>(AbstractResultQuery<R>).execute(ExecuteContext, ExecuteListener) line: 274
SelectQueryImpl<R>(AbstractQuery).execute() line: 322
T_2698Record(UpdatableRecordImpl<R>).refresh(Field<?>...) line: 438
T_2698Record(UpdatableRecordImpl<R>).refresh() line: 428
H2Test.testH2T2698InsertRecordWithDefault() line: 931

Compared to ActiveJDBC’s stack traces, that’s quite a bit more, but still less than what you would get from Hibernate (which uses lots of reflection and instrumentation). And it involves rather cryptic inner classes with quite a bit of method overloading. How should we interpret that? Let’s go through this, bottom-up (or top-down in the stack trace):


The CursorRecordInitialiser is an inner class that encapsulates the initialisation of a Record by a Cursor, and it ensures that the relevant parts of the ExecuteListener SPI are covered in a single place. It is the gateway to JDBC’s various ResultSet methods. It is a generic internal RecordOperation implementation that is called by…


… a RecordDelegate. While the class name is pretty meaningless, its purpose is to shield and wrap all direct record operations in a way that a central implementation of the RecordListener SPI can be achieved. This SPI can be implemented by client code to listen to active record lifecycle events. The price for keeping the implementation of this SPI DRY is a couple of elements on the stack trace, as such callbacks are the standard way to implement closures in the Java language. But keeping this logic DRY guarantees that no matter how a Record is initialised, the SPI will always be invoked. There are (almost) no forgotten corner-cases.

But we were initialising a Record in…


… a CursorImpl, an implementation of a Cursor. This might appear odd, as jOOQ Cursors are used for “lazy fetching”, i.e. for fetching Records one-by-one from JDBC.

On the other hand, the SELECT query from this stack trace simply refreshes a single UpdatableRecord, jOOQ’s equivalent of an active record. Yet, still, all the lazy fetching logic is executed just as if we were fetching a large, complex data set. This is again to keep things DRY when fetching data. Of course, around 6 levels of stack trace could have been saved by simply reading the single record as we know there can be only one. But again, any subtle bug in the cursor will likely show up in some test case, even in a remote one like the test case for refreshing records.

Some may claim that all of this is wasting memory and CPU cycles. The opposite is more likely to be true. Modern JVM implementations are so good with managing and garbage-collecting short-lived objects and method calls, the slight additional complexity imposes almost no additional work to your runtime environment.

TL;DR: Long stack traces may indicate high cohesion and DRY-ness

The claim that a long stack trace is a bad thing is not necessarily correct. A long stack trace is what happens when complex frameworks are well implemented. Complexity will inevitably lead to “leaky abstractions”. But only well-designed complexity will lead to long stack traces.

Conversely, short stack traces can mean two things:

  • Lack of complexity: The framework is simple, with few features. This matches Igor’s claim for ActiveJDBC, as he is advertising ActiveJDBC as a “simple framework”.
  • Lack of cohesion and DRY-ness: The framework is poorly written, and probably has poor test coverage and lots of bugs.

Tree data structures

As a final note, it’s worth mentioning that another case where long stack traces are inevitable is when tree structures / composite pattern structures are traversed using visitors. Anyone who has ever debugged XPath or XSLT will know how deep these traces are.

#CTMMC, the New Hashtag for Code That Made Me Cry

Our recent article about Code That Made Me Cry was really well received and had quite a few readers, both on our blog and on our syndication partners:

In each of us programmers is a little geek with a little geek humour. This is reflected by funny comics like the one about

The only valid measurement of code quality: WTF/minute

Other platforms ridiculing bad code include

But one thing is missing: a hashtag (and acronym) on Twitter. It should be called #CTMMC, for Code That Made Me Cry.

To promote this hashtag, we have created a website, where we will list our collective programmer pain. Behold:

On Friday Dec 13th 2013, Things *WILL* go Wrong

We’re writing for @JavaAdvent on Friday, Dec 13th 2013. Superstitious? We are, and we’ll give some fun and scary insights! Stay tuned and follow @JavaAdvent to be ready for an interesting, geeky Holiday season!

See also posts from 2012.