jOOQ 4.0’s New API Will Use Annotations Only for Truly Declarative Java/SQL Programming


SQL is the only really popular and mature 4GL (Fourth-Generation Programming Language). That is, it is the only popular declarative language.

At the same time, SQL has proven that Turing completeness is not reserved to lesser languages like C, C++, or Java. Since SQL:1999 and its recursive common table expressions, SQL can be safely considered “Turing complete”. This means that any program can be written in SQL. Don’t believe it? Take, for instance, this SQL Mandelbrot set calculation, as seen in this Stack Overflow question.

(Image: the Mandelbrot set, rendered in SQL)

Source: User Elie on http://stackoverflow.com/q/314864/521799

Wonderful! No more need for procedural and object-oriented cruft.

How we’ve been wrong so far…

At Data Geekery (the company behind jOOQ), we love SQL. And we love Java. But one thing has always bothered us in the past: Java is not really a purely declarative language. A lot of Java language constructs are real anti-patterns for the enlightened declarative programmer. For instance:

// This is bad
for (String string : strings)
    System.out.println(string);

// This is even worse
try {
    someSQLStatements();
}
catch (SQLException e) {
    someRecovery();
}

The imperative style of the above code is hardly ever useful. Programmers need to tediously tell the Java compiler and the JVM what algorithm they meant to implement, down to the single statement, when in reality, using the JIT and other advanced optimisation techniques, they don’t really have to.

Luckily, there are annotations

Since Java 5, however, there have been farsighted people in expert groups who have added a powerful new concept to the Java language: annotations. At first, experiments were made with only a handful of limited-use annotations, like:

  • @Override
  • @SuppressWarnings

But then, even more farsighted people proceeded to combine these annotations to form completely declarative things, like a component:

@Path("/MonsterRest")
@Stateless
@WebServlet(urlPatterns = "/MonsterServlet")
@Entity
@Table(name = "MonsterEntity")
@XmlRootElement
@XmlAccessorType(XmlAccessType.FIELD)
@NamedQuery(name = "findAll", query = "SELECT c FROM Book c")
public class Book extends HttpServlet {
 
    // ======================================
    // =             Attributes             =
    // ======================================
 
    @Id
    @GeneratedValue
    private Long id;
    private String isbn;
    private Integer nbOfPage;
    private Boolean illustrations;
    private String contentLanguage;
    @Column(nullable = false)
    @Size(min = 5, max = 50)
    @XmlElement(nillable = false)
    private String title;
    private Float price;
    @Column(length = 2000)
    @Size(max = 2000)
    private String description;
    @ElementCollection
    @CollectionTable(name = "tags")
    private List<String> tags = new ArrayList<>();

Look at this beauty. Credits to Antonio Goncalves.

However, we still think that there is a lot of unnecessary object-oriented bloat in the above. Luckily, recent innovations that make Java annotations Turing complete (or even sentient?) will now finally allow us to improve upon this situation, specifically for jOOQ, which aims to model the declarative SQL language in Java. Finally, annotations are a perfect fit!

These innovations allow us to completely re-implement the entire jOOQ 4.0 API, so that users can write SQL as follows:

@Select({
    @Column("FIRST_NAME"),
    @Column("LAST_NAME")
})
@From(
    table = @Table("AUTHOR"),
    join = @Join("BOOK"),
    predicate = @On(
        left = @Column("AUTHOR.ID"),
        op = @Eq,
        right = @Column("BOOK.AUTHOR_ID")
    )
)
@Where(
    predicate = @Predicate(
        left = @Column("BOOK.TITLE"),
        op = @Like,
        right = @Value("%Annotations in a Nutshell%")
    )
)
class SQLStatement {}

Just like JPA, this makes jOOQ now fully transparent and declarative, by using annotations. Developers will now be able to completely effortlessly translate their medium to highly complex SQL queries into the exact equivalent in jOOQ annotations.

Don’t worry, we’ll provide migration scripts to upgrade your legacy jOOQ 3.x application to 4.0. A working prototype is on the way and is expected to be released soon; early-adopter feedback is very welcome, so stay tuned for more exciting SQL goodness!

How to Support Java 6, 8, 9 in a Single API


With jOOQ 3.7, we have finally added formal support for Java 8 features. This opened the door to a lot of nice improvements, such as:

Creating result streams

try (Stream<Record2<String, String>> stream =
     DSL.using(configuration)
        .select(FIRST_NAME, LAST_NAME)
        .from(PERSON)
        .stream()) {

    List<String> people =
    stream.map(p -> p.value1() + " " + p.value2())
          .collect(Collectors.toList());
}

Calling statements asynchronously (jOOQ 3.8+)

CompletionStage<Record> result =
DSL.using(configuration)
   .select(...)
   .from(COMPLEX_TABLE)
   .fetchAsync();

result.thenCompose(r -> ...);
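
For illustration, here’s a more complete sketch of such an asynchronous pipeline, re-using the PERSON table from the streaming example above. fetchAsync() completes with the jOOQ Result, which can then be transformed like any other collection:

// Assumes the same static imports as the streaming example above
CompletionStage<List<String>> people =
DSL.using(configuration)
   .select(FIRST_NAME, LAST_NAME)
   .from(PERSON)
   .fetchAsync()
   .thenApply(result -> result.stream()
       .map(r -> r.value1() + " " + r.value2())
       .collect(Collectors.toList()));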

But obviously, we didn’t want to disappoint our paying customers who are stuck with Java 6, because they’re using an older application server, etc.

How to support several Java versions in a single API

This is why we continue publishing a Java 6 version of jOOQ for our commercial customers. How did we do it? Very easily. Our commercial code base (which is our main code base) contains tons of “flags” as in the following example:

public interface Query 
extends 
    QueryPart, 
    Attachable 
    /* [java-8] */, AutoCloseable /* [/java-8] */ 
{

    int execute() throws DataAccessException;

    /* [java-8] */
    CompletionStage<Integer> executeAsync();
    CompletionStage<Integer> executeAsync(Executor executor);
    /* [/java-8] */

}

(Sure, AutoCloseable was already available in Java 7, but we don’t have a Java 7 version.)

When we build jOOQ, we build it several times, using a preprocessor to strip logic from the source files (a minimal sketch of such a preprocessor follows after the list):

  • The commercial Java 8 version is built first, as is
  • The commercial Java 6 version is built next, by stripping all the code between [java-8] and [/java-8] markers
  • The commercial free trial version is built by adding some code to the commercial version
  • The open source version is built last, by stripping all the code between [pro] and [/pro] markers
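
For illustration, here’s a minimal sketch of such a marker-stripping preprocessor. Ours is more elaborate; the class name and I/O handling here are hypothetical:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class StripMarkers {

    // Matches a region between /* [java-8] */ and /* [/java-8] */
    // markers, including the markers. DOTALL lets "." span line breaks
    static final Pattern JAVA_8 = Pattern.compile(
        "/\\* \\[java-8\\] \\*/.*?/\\* \\[/java-8\\] \\*/",
        Pattern.DOTALL);

    public static void main(String[] args) throws IOException {
        String source = new String(Files.readAllBytes(Paths.get(args[0])));
        Matcher m = JAVA_8.matcher(source);
        StringBuffer sb = new StringBuffer();

        // Replace each stripped region by its own line breaks only, so
        // that line numbers stay identical in all generated versions
        while (m.find())
            m.appendReplacement(sb, m.group().replaceAll("[^\n]", ""));

        m.appendTail(sb);
        Files.write(Paths.get(args[1]), sb.toString().getBytes());
    }
}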

Advantages of this approach

There are several advantages of this approach compared to others:

  • We only have a single source of truth, the original commercial source code
  • The line numbers are the same in all the different versions
  • The APIs are compatible to a certain extent
  • No magic is involved via class loading or reflection

The disadvantages are:

  • Committing to repositories is a bit slower, as we have several repositories
  • Publishing releases takes longer, as the different versions need to be built and integration tested several times
  • Sometimes, we simply forget to add a marker and have to re-build again when the Java 6 nightly build crashes
  • We still cannot use lambda expressions in ordinary code that is contained in the Java 6 version (i.e. most code)

In our opinion, the advantages clearly outweigh the disadvantages. It’s OK if we can’t implement top-notch Java features as long as our customers can, and as long as those customers who are stuck with old versions can still upgrade to the latest jOOQ version.

We’re looking forward to supporting JDK 9 features, like modularity and the new Flow API, without any compromise for existing users.

What about you?

How do you approach cross-JDK version compatibility?

Using Oracle AQ via Java 8 Streams


One of the most awesome features of the Oracle database is Oracle AQ: Oracle Database Advanced Queuing. The AQ API implements a full-fledged, transactional messaging system directly in the database.

In a classic architecture where the database is at the center of your system, with multiple applications (some of which are written in Java, others in Perl or PL/SQL, etc.) accessing the same database, using AQ for inter-process communication is just great. If you’re more on the Java EE side, you might purchase a Java-based MQ solution and put that message bus / middleware at the center of your system architecture. But why not use the database instead?

How to use the PL/SQL AQ API with jOOQ

The PL/SQL API for AQ message enqueuing and dequeuing is rather simple, and it can be accessed very easily from Java using jOOQ’s OracleDSL.DBMS_AQ API.

The queue configuration used here would look something like:

CREATE OR REPLACE TYPE message_t AS OBJECT (
  ID         NUMBER(7),
  title      VARCHAR2(100 CHAR)
)
/

BEGIN
  DBMS_AQADM.CREATE_QUEUE_TABLE(
    queue_table => 'message_aq_t',
    queue_payload_type => 'message_t'
  );

  DBMS_AQADM.CREATE_QUEUE(
    queue_name => 'message_q',
    queue_table => 'message_aq_t'
  );

  DBMS_AQADM.START_QUEUE(
    queue_name => 'message_q'
  );
  COMMIT;
END;
/

And the jOOQ code generator would generate the useful classes with all the type information directly associated with them (simplified example):

class Queues {
    static final Queue<MessageTRecord> MESSAGE_Q = 
        new QueueImpl<>("NEW_AUTHOR_AQ", MESSAGE_T);
}

class MessageTRecord {
    void setId(Integer id) { ... }
    Integer getId() { ... }
    void setTitle(String title) { ... }
    String getTitle() { ... }
    MessageTRecord(
        Integer id, String title
    ) { ... }
}

These classes can then be used to enqueue and dequeue messages type-safely, directly on the generated queue references:

// The jOOQ configuration
Configuration c = ...

// Enqueue a message
DBMS_AQ.enqueue(c, MESSAGE_Q, 
    new MessageTRecord(1, "test"));

// Dequeue it again
MessageTRecord message = DBMS_AQ.dequeue(c, MESSAGE_Q);

Easy, isn’t it?

Now, let’s leverage Java 8 features

A message queue is nothing other than an infinite (blocking) stream of messages. Since Java 8, we have a formidable API for such message streams: the Stream API.

This is why we have added (for the upcoming jOOQ 3.8) a new API that combines the existing jOOQ AQ API with Java 8 Streams:

// The jOOQ configuration
Configuration c = ...

DBMS_AQ.dequeueStream(c, MESSAGE_Q)
       .filter(m -> "test".equals(m.getTitle()))
       .forEach(System.out::println);

The above stream pipeline will listen on the MESSAGE_Q queue, consume all messages, filter out messages whose title is not "test", and print the remaining messages.

Blocking streams

The interesting thing is the fact that this is a blocking, infinite stream. As long as there is no new message in the queue, the stream pipeline processing will simply block on the queue, waiting for new messages. This is not an issue for sequential streams, but what happens when calling Stream.parallel()?

jOOQ will consume each message in a transaction. A jOOQ 3.8 transaction runs in a ForkJoinPool.ManagedBlocker:

static <T> Supplier<T> blocking(Supplier<T> supplier) {
    return new Supplier<T>() {
        volatile T result;

        @Override
        public T get() {
            try {
                ForkJoinPool.managedBlock(new ManagedBlocker() {
                    @Override
                    public boolean block() {
                        result = supplier.get();
                        return true;
                    }

                    @Override
                    public boolean isReleasable() {
                        return result != null;
                    }
                });
            }
            catch (InterruptedException e) {
                throw new RuntimeException(e);
            }

            return result;
        }
    };
}

This isn’t a lot of magic. A ManagedBlocker runs some special code when it is run by a ForkJoinWorkerThread, making sure the thread’s ForkJoinPool won’t suffer from thread exhaustion and thus from deadlocks. For more info, read this interesting article here:
http://zeroturnaround.com/rebellabs/java-parallel-streams-are-bad-for-your-health

Or this Stack Overflow answer:
http://stackoverflow.com/a/35272153/521799
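
To illustrate how such a wrapper can be used (a sketch, where expensiveJdbcCall() stands for any hypothetical blocking operation):

// When the supplier runs on a ForkJoinWorkerThread (e.g. in the common
// pool used by CompletableFuture), managedBlock() lets the pool spawn
// compensation threads while this task is blocked on JDBC
CompletableFuture<List<String>> future =
    CompletableFuture.supplyAsync(blocking(() -> expensiveJdbcCall()));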

So, if you want a super-fast parallel AQ dequeuing process, just run:

// The jOOQ configuration. Make sure its referenced
// ConnectionPool has enough connections
Configuration c = ...

DBMS_AQ.dequeueStream(c, MESSAGE_Q)
       .parallel()
       .filter(m -> "test".equals(m.getTitle()))
       .forEach(System.out::println);

And you’ll have several threads that will dequeue messages in parallel.

Don’t want to wait for jOOQ 3.8?

No problem. Use the current version and wrap the dequeue operation in your own Stream:

// Assuming a static import of the generated DBMS_AQ.dequeue method
Stream<MessageTRecord> stream = Stream.generate(() ->
    DSL.using(config).transactionResult(c ->
        dequeue(c, MESSAGE_Q)
    )
);

Done.
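
Note that Stream.generate() produces an infinite stream, so make sure to bound its consumption somewhere, e.g.:

// Consume at most 10 messages, then stop
stream.limit(10)
      .forEach(System.out::println);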

Bonus: Asynchronous dequeuing

While we were at it, another very nice feature of queuing systems is their asynchronicity. With Java 8, a very useful type to model (and compose) asynchronous algorithms is the CompletionStage, and its default implementation, the CompletableFuture, which again executes tasks in the ForkJoinPool.

Using jOOQ 3.8, you can again simply call:

// The jOOQ configuration. Make sure its referenced
// ConnectionPool has enough connections
Configuration c = ...

CompletionStage<MessageTRecord> stage =
DBMS_AQ.dequeueAsync(c, MESSAGE_Q)
       .thenCompose(m -> ...)
       ...;

Stay tuned for another article on the jOOQ blog soon, where we will look into more sophisticated use-cases for asynchronous, blocking SQL statements with jOOQ 3.8 and Java 8.

How to Detect Slow Queries with jOOQ


Just now, we implemented a nice little feature in jOOQ’s code generator:
https://github.com/jOOQ/jOOQ/issues/4974

It detects whenever the jOOQ code generator runs a slow query to reverse engineer schema meta information. Why?

In our development and integration test environment, we don’t have huge schemas with all the different performance edge cases put in place. For instance, we don’t have 5000 Oracle synonyms. Or 10000 procedures with 500 parameters each. We do cover some common edge cases, but not in all databases.

Users, on the other hand, tend to accept the status quo after a while. The code generator is slow-ish? Sure, because we have a huge schema. This lazy acceptance is an impediment to our product quality. We’d rather have our users report all sorts of issues they encounter, so we want to encourage them to do so.

And we did

In the upcoming jOOQ version 3.8 (and patch releases for 3.5.5, 3.6.5, and 3.7.3), we’ve added a nice little ExecuteListener to jOOQ-meta, which looks roughly like this:

class PerformanceListener 
    extends DefaultExecuteListener {

    StopWatch watch;
    class SQLPerformanceWarning 
        extends Exception {}

    @Override
    public void executeStart(ExecuteContext ctx) {
        super.executeStart(ctx);
        watch = new StopWatch();
    }

    @Override
    public void executeEnd(ExecuteContext ctx) {
        super.executeEnd(ctx);
        // watch.split() measures nanoseconds: 5_000_000_000L is 5 seconds
        if (watch.split() > 5_000_000_000L)
            log.warn(
                "Slow SQL",
                "jOOQ Meta executed a slow query"
              + "\n\n"
              + "Please report this bug here: "
              + "https://github.com/jOOQ/jOOQ/issues/new\n\n"
              + formatted(ctx.query()),
                new SQLPerformanceWarning());
    }
}

It’s very simple. Every time we start executing a query, a “stop watch” is started. Every time execution ends, we check whether the watch elapsed more than 5 seconds. If so, we log a warning, a link to our issue tracker, a formatted version of the SQL query, and a stack trace that helps find the exact spot where the slow statement was executed.

Let’s run this

The reason why we did this is that we’ve seen for ourselves that the PostgreSQL code generator runs a slow query to fetch all the stored procedures (and to generate overload indexes). The produced error message is:

[WARNING] Slow SQL                 : jOOQ Meta executed a slow query (slower than 5 seconds)

Please report this bug here: https://github.com/jOOQ/jOOQ/issues/new

select
  "r1"."routine_schema",
  "r1"."routine_name",
  "r1"."specific_name",
  case when exists (
        select 1 as "one"
        from "information_schema"."parameters"
        where (
          "information_schema"."parameters"."specific_schema" = "r1"."specific_schema"
          and "information_schema"."parameters"."specific_name" = "r1"."specific_name"
          and upper("information_schema"."parameters"."parameter_mode")  'IN'
        )
      ) then 'void'
       else "r1"."data_type"
  end as "data_type",
  "r1"."character_maximum_length",
  "r1"."numeric_precision",
  "r1"."numeric_scale",
  "r1"."type_udt_schema",
  "r1"."type_udt_name",
  case when exists (
        select 1 as "one"
        from "information_schema"."routines" as "r2"
        where (
          "r2"."routine_schema" in (
            'public', 'multi_schema', 'pg_catalog'
          )
          and "r2"."routine_schema" = "r1"."routine_schema"
          and "r2"."routine_name" = "r1"."routine_name"
          and "r2"."specific_name"  "r1"."specific_name"
        )
      ) then (
        select count(*)
        from "information_schema"."routines" as "r2"
        where (
          "r2"."routine_schema" in (
            'public', 'multi_schema', 'pg_catalog'
          )
          and "r2"."routine_schema" = "r1"."routine_schema"
          and "r2"."routine_name" = "r1"."routine_name"
          and "r2"."specific_name" <= "r1"."specific_name"
        )
      ) end as "overload",
  "pg_catalog"."pg_proc"."proisagg"
from "information_schema"."routines" as "r1"
  join "pg_catalog"."pg_namespace"
  on "pg_catalog"."pg_namespace"."nspname" = "r1"."specific_schema"
  join "pg_catalog"."pg_proc"
  on (
    "pg_catalog"."pg_proc"."pronamespace" = "pg_catalog"."pg_namespace".oid
    and (("pg_catalog"."pg_proc"."proname" || '_') || cast("pg_catalog"."pg_proc".oid as varchar)) = "r1"."specific_name"
  )
where (
  "r1"."routine_schema" in (
    'public', 'multi_schema', 'pg_catalog'
  )
  and not("pg_catalog"."pg_proc"."proretset")
)
order by
  "r1"."routine_schema" asc,
  "r1"."routine_name" asc,
  "overload" asc
org.jooq.util.AbstractDatabase$1$SQLPerformanceWarning
        at org.jooq.util.AbstractDatabase$1.executeEnd(AbstractDatabase.java:230)
        at org.jooq.impl.ExecuteListeners.executeEnd(ExecuteListeners.java:163)
        at org.jooq.impl.AbstractResultQuery.execute(AbstractResultQuery.java:269)
        at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:346)
        at org.jooq.impl.AbstractResultQuery.fetch(AbstractResultQuery.java:308)
        at org.jooq.impl.SelectImpl.fetch(SelectImpl.java:2703)
        at org.jooq.util.postgres.PostgresDatabase.getRoutines0(PostgresDatabase.java:707)
        at org.jooq.util.AbstractDatabase.getRoutines(AbstractDatabase.java:1131)
        at org.jooq.util.JavaGenerator.generate(JavaGenerator.java:417)
        at org.jooq.util.JavaGenerator.generate(JavaGenerator.java:314)
        at org.jooq.util.JavaGenerator.generate(JavaGenerator.java:279)
        at org.jooq.util.GenerationTool.run(GenerationTool.java:490)
        at org.jooq.util.GenerationTool.generate(GenerationTool.java:193)
        at org.jooq.util.maven.Plugin.execute(Plugin.java:131)
        ...

We can now easily proceed with fixing the query.

You can do the same!

The implementation of the ExecuteListener was straightforward. You can do the same, very easily: just hook a simple execute listener into your jOOQ Configuration, measure execution speeds, and log a warning once a threshold is exceeded. Done.
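
For example, here’s a minimal sketch, re-using the PerformanceListener shown above (the connection and dialect are placeholders):

// Register the listener with the Configuration via a provider
Configuration configuration = new DefaultConfiguration()
    .set(connection)
    .set(SQLDialect.POSTGRES)
    .set(new DefaultExecuteListenerProvider(new PerformanceListener()));

// Every query executed through this configuration is now measured
DSL.using(configuration)
   .selectOne()
   .fetch();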

Happy debugging!

Further reading

Coincidentally, a very similar approach has been documented by the engineering team at Square – The Query Sniper:

https://corner.squareup.com/2016/01/query-sniper.html

Reactive Database Access – Part 3 – Using jOOQ with Scala, Futures and Actors


We’re very happy to continue our guest post series on the jOOQ blog by Manuel Bernhardt. In this blog series, Manuel explains the motivation behind so-called reactive technologies and, after introducing the concepts of Futures and Actors, uses them to access a relational database in combination with jOOQ.

Manuel Bernhardt is an independent software consultant with a passion for building web-based systems, both back-end and front-end. He is the author of “Reactive Web Applications” (Manning) and he started working with Scala, Akka and the Play Framework in 2010, after spending a long time with Java. He lives in Vienna, where he is co-organiser of the local Scala User Group. He is enthusiastic about Scala-based technologies and the vibrant community, and is looking for ways to spread their usage in the industry. He has also been scuba-diving since the age of 6, and can’t quite get used to the lack of sea in Austria.

This series is split into three parts, which we have published over the past months.

Introduction

In the previous two posts of this series we introduced the benefits of reactive programming, as well as two tools for working with asynchronous computations: Futures and Actors. Now it is time to get your hands dirty, dear reader, and to create a simple application featuring reactive database access. Fear not, I will be there along the whole way to guide you through the process.

Also, the source code of this example is available on GitHub.

Getting the tools

In order to build the application, you will need a few tools. If you haven’t worked with Scala yet, the simplest option may be to go and grab the Typesafe Activator, which is a standalone project that brings in the necessary tools to build a Scala project from scratch.

Since this is about reactive database access, you will also need a database. For the purpose of this simple example, we’re going to use Oracle Database 12c Enterprise Edition. Nah, just kidding – it might be a bit cumbersome to get this one to run on your machine. Instead we will use the excellent PostgreSQL. Make sure to install it so that you can run the psql utility from your console.

Ready? Great! Let’s have a look at what we’re going to build.

The application

The goal of our application is to fetch mentions from Twitter and store them locally so as to be able to visualize them and perform analytics on them.

(Image: http://manuel.bernhardt.io/wp-content/Mentions.png)

The core of this mechanism will be a MentionsFetcher actor, which will periodically fetch mentions from Twitter and save them in the database. Once there, we can display useful information on a view.

Creating the database

The first step we’re going to take is to create the database. Create a mentions.sql file somewhere with the following content:

CREATE USER "play" NOSUPERUSER INHERIT CREATEROLE;

CREATE DATABASE mentions WITH OWNER = "play" ENCODING 'UTF8';

GRANT ALL PRIVILEGES ON DATABASE mentions to "play";

\connect mentions play

CREATE TABLE twitter_user (
  id bigserial primary key,
  created_on timestamp with time zone NOT NULL,
  twitter_user_name varchar NOT NULL
);

CREATE TABLE mentions (
  id bigserial primary key,
  tweet_id varchar NOT NULL,
  user_id bigint NOT NULL,
  created_on timestamp with time zone NOT NULL,
  text varchar NOT NULL
);

This script will create a play user, a mentions database as well as two tables, twitter_user and mentions.

In order to execute it, run the following command in a terminal:

psql -f mentions.sql

(note: you might need to explicitly declare which user runs this command, depending on how you have configured PostgreSQL)

Bootstrapping the project

Let’s create the reactive-mentions project, shall we? Assuming that you have installed the activator, run the following command in a terminal:

~/workspace » activator new reactive-mentions

This will prompt a list of templates, we are going to use the play-scala project template:

Fetching the latest list of templates...

Browse the list of templates: http://typesafe.com/activator/templates
Choose from these featured templates or enter a template name:
  1) minimal-akka-java-seed
  2) minimal-akka-scala-seed
  3) minimal-java
  4) minimal-scala
  5) play-java
  6) play-scala
(hit tab to see a list of all templates)
> 6
OK, application "reactive-mentions" is being created using the "play-scala" template.
[...]

At this point, a simple Play Framework project has been created in the reactive-mentions directory. If you want to, you can run this project by navigating to it and running the command activator run.

In order to work on the project, you can use one of the many IDEs that have Scala support. My personal favourite to this day is IntelliJ IDEA, which does a pretty good job at this and also has built-in support for the Play Framework itself.

Setting up jOOQ

I wrote about database access in Scala about two years ago. To this day, there are still quite a few alternatives for relational database access in Scala, but at least personally, I have now reached the conclusion that for the type of projects I work on, jOOQ beats them all when it comes to writing type-safe SQL. So, without further ado, let’s integrate it with our project.

There is an SBT plugin available for this, if you would like to use one; however, for this application we will settle for a minimal, hand-crafted solution.

Bring up the build.sbt file in an editor and adjust the libraryDependencies to look like so:

libraryDependencies ++= Seq(
  jdbc,
  cache,
  ws,
  "org.postgresql" % "postgresql" % "9.4-1201-jdbc41",
  "org.jooq" % "jooq" % "3.7.0",
  "org.jooq" % "jooq-codegen-maven" % "3.7.0",
  "org.jooq" % "jooq-meta" % "3.7.0",
  specs2 % Test
)

If you are running the project’s console (which you can do by executing the activator command in the project’s directory) you will need to call the reload command in order for the new dependencies to be pulled in. This is true of any change you are doing to the build.sbt file. Don’t forget about it in the remainder of this article!

(note: make sure to use the version of the PostgreSQL driver that fits your version of PostgreSQL!)

Next, we need to set up jOOQ itself. For this purpose, create the file conf/mentions.xml, where conf is the directory used in the Play Framework for storing configuration-related files:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<configuration xmlns="http://www.jooq.org/xsd/jooq-codegen-3.7.0.xsd">
  <jdbc>
    <driver>org.postgresql.Driver</driver>
    <url>jdbc:postgresql://localhost/mentions</url>
    <user>play</user>
    <password></password>
  </jdbc>
  <generator>
    <name>org.jooq.util.ScalaGenerator</name>
    <database>
      <name>org.jooq.util.postgres.PostgresDatabase</name>
      <inputSchema>public</inputSchema>
      <includes>.*</includes>
      <excludes></excludes>
    </database>
    <target>
      <packageName>generated</packageName>
      <directory>target/scala-2.11/src_managed/main</directory>
    </target>
  </generator>
</configuration>

This configuration will allow us to run jOOQ’s ScalaGenerator, which will read the database schema and generate Scala-specific classes for it, storing them in a directory accessible on the classpath and meant for generated sources.

All that is left to do is to have a way to run jOOQ’s code generation. A simple solution is to create a custom SBT task in our project build. Go back to build.sbt and add the following at the end:

val generateJOOQ = taskKey[Seq[File]]("Generate JooQ classes")

val generateJOOQTask = (sourceManaged, fullClasspath in Compile, runner in Compile, streams) map { (src, cp, r, s) =>
  toError(r.run("org.jooq.util.GenerationTool", cp.files, Array("conf/mentions.xml"), s.log))
  ((src / "main/generated") ** "*.scala").get
}

generateJOOQ <<= generateJOOQTask


unmanagedSourceDirectories in Compile += sourceManaged.value / "main/generated"

The generateJOOQ task will run the GenerationTool using the mentions.xml file we have set up earlier on. Let’s run it!

Start the SBT console by running the activator command in your terminal window, in the reactive-mentions directory, and then run the generateJOOQ task:

[reactive-mentions] $ generateJOOQ
[info] Running org.jooq.util.GenerationTool conf/mentions.xml
[success] Total time: 1 s, completed Dec 11, 2015 2:55:08 PM

That’s it! If you want a bit more verbosity, add the following logger configuration to conf/logback.xml:

 <logger name="org.jooq" level="INFO" />

Alright, we are now ready to get to the core of our endeavour: creating the actor that will pull mentions from Twitter!

Creating the MentionsFetcher actor

For the purpose of fetching mentions at regular intervals from Twitter, we will be using a simple Akka actor. Actors are meant to do much more powerful things than this, but for the sake of introducing the concept, this example will do (or so I hope).

Go ahead and add Akka as well as its logging facility as library dependencies in build.sbt:

libraryDependencies ++= Seq(
  ...
  "com.typesafe.akka" %% "akka-actor" % "2.4.1",
  "com.typesafe.akka" %% "akka-slf4j" % "2.4.1"
)

Next, create the file app/actors/MentionsFetcher.scala with the following content:

package actors

import actors.MentionsFetcher._
import akka.actor.{ActorLogging, Actor}
import org.joda.time.DateTime
import scala.concurrent.duration._

class MentionsFetcher extends Actor with ActorLogging {

  val scheduler = context.system.scheduler.schedule(
    initialDelay = 5.seconds,
    interval = 10.minutes,
    receiver = self,
    message = CheckMentions
  )

  override def postStop(): Unit = {
    scheduler.cancel()
  }

  def receive = {
    case CheckMentions => checkMentions
    case MentionsReceived(mentions) => storeMentions(mentions)
  }

  def checkMentions = ???

  def storeMentions(mentions: Seq[Mention]) = ???

}

object MentionsFetcher {

  case object CheckMentions
  case class Mention(id: String, created_at: DateTime, text: String, from: String, users: Seq[User])
  case class User(handle: String, id: String)
  case class MentionsReceived(mentions: Seq[Mention])

}

The first thing you may notice in this code are the unimplemented methods checkMentions and storeMentions with the triple question mark ???. That’s actually valid Scala syntax: ??? is a method available by default which throws a scala.NotImplementedError.

The second thing I want you to notice is the companion object to the MentionsFetcher class, which holds the protocol of our actor. Actors communicate using messages, and even though our actor will only communicate with itself in this example, it is a good idea to place the protocol in a companion object and to import its members (via the wildcard import import actors.MentionsFetcher._) so as to keep things organized as the project grows.

Other than this, what we are doing for the moment is quite simple: we are setting up a scheduler that wakes up every 10 minutes in order to send the actor itself the CheckMentions message. Upon receiving this message in the main receive method, we proceed to fetching the mentions from Twitter. Finally, when a MentionsReceived message is received, we simply invoke the storeMentions method.

Simple enough, isn’t it? Don’t worry, things are about to get a little bit more complicated.

Fetching the mentions from Twitter

Twitter does not have an API that lets us directly fetch recent mentions. However it has an API that lets us search for Tweets and that will have to do.

Before you can go any further, if you intend to run this project, you will need to get yourself a set of keys and access tokens at apps.twitter.com. If you don’t, you will have to trust me that the following works.

Once you have them, add them in the file conf/application.conf like so:

# Twitter
twitter.apiKey="..."
twitter.apiSecret="..."
twitter.accessToken="..."
twitter.accessTokenSecret="..."

Then, create the credentials method in MentionsFetcher:

// ...
import play.api.Play
import play.api.Play.current
import play.api.libs.oauth.{RequestToken, ConsumerKey}

class MentionsFetcher extends Actor with ActorLogging {

  // ...

  def credentials = for {
    apiKey <- Play.configuration.getString("twitter.apiKey")
    apiSecret <- Play.configuration.getString("twitter.apiSecret")
    token <- Play.configuration.getString("twitter.accessToken")
    tokenSecret <- Play.configuration.getString("twitter.accessTokenSecret")
  } yield (ConsumerKey(apiKey, apiSecret), RequestToken(token, tokenSecret))

}

This will allow us to place a call to Twitter’s API using the correct OAuth credentials.

Next, let’s get ready to fetch those mentions:

// ...
import akka.pattern.pipe
import org.joda.time.DateTime
import scala.util.control.NonFatal

class MentionsFetcher extends Actor with ActorLogging {

  // ...

  var lastSeenMentionTime: Option[DateTime] = Some(DateTime.now)

  def checkMentions = {
      val maybeMentions = for {
        (consumerKey, requestToken) <- credentials
        time <- lastSeenMentionTime
      } yield fetchMentions(consumerKey, requestToken, "<yourTwitterHandleHere>", time)

      maybeMentions.foreach { mentions =>
        mentions.map { m =>
          MentionsReceived(m)
        } recover { case NonFatal(t) =>
          log.error(t, "Could not fetch mentions")
          MentionsReceived(Seq.empty)
        } pipeTo self
      }
  }

  def fetchMentions(consumerKey: ConsumerKey, requestToken: RequestToken, user: String, time: DateTime): Future[Seq[Mention]] = ???

Do you remember the pipe pattern we talked about in the previous post about Actors? Well, here it is again!

The call we are going to make against Twitter’s API is going to be asynchronous. In other words, we will not simply get a Seq[Mention] but a Future[Seq[Mention]] to work with, and the best way to deal with that one is to send ourselves a message once the Future has completed, with the contents of the result.

Since things can go wrong, though, we also need to think about error recovery, which we do here by heroically logging the fact that we could not fetch the mentions.

You may also notice that we have introduced a lastSeenMentionTime variable. This is the means by which we are going to keep in memory the timestamp of the last mention we have seen.

In order to go ahead, one thing we need to do is to use a more recent version of the async-http-client library, since there is a bug in Play 2.4.x. Add the following dependency to build.sbt:

libraryDependencies ++= Seq(
  ...
  "com.ning" % "async-http-client" % "1.9.29"
)

Alright, now that we are all set, let’s finally fetch those mentions!

// ...
import java.util.Locale

import scala.util.control.NonFatal
import org.joda.time.format.DateTimeFormat
import play.api.libs.json.JsArray
import play.api.libs.oauth.OAuthCalculator
import play.api.libs.ws.WS
import scala.concurrent.Future


class MentionsFetcher extends Actor with ActorLogging {

  // ...

  def fetchMentions(consumerKey: ConsumerKey, requestToken: RequestToken, user: String, time: DateTime): Future[Seq[Mention]] = {
    val df = DateTimeFormat.forPattern("EEE MMM dd HH:mm:ss Z yyyy").withLocale(Locale.ENGLISH)

    WS.url("https://api.twitter.com/1.1/search/tweets.json")
      .sign(OAuthCalculator(consumerKey, requestToken))
      .withQueryString("q" -> s"@$user")
      .get()
      .map { response =>
        val mentions = (response.json \ "statuses").as[JsArray].value.map { status =>
          val id = (status \ "id_str").as[String]
          val text = (status \ "text").as[String]
          val from = (status \ "user" \ "screen_name").as[String]
          val created_at = df.parseDateTime((status \ "created_at").as[String])
          val userMentions = (status \ "entities" \ "user_mentions").as[JsArray].value.map { user =>
            User((user \ "screen_name").as[String], ((user \ "id_str").as[String]))
          }

          Mention(id, created_at, text, from, userMentions)

        }
        mentions.filter(_.created_at.isAfter(time))
    }
  }

}

Fetching the mentions is rather straightforward, thanks to Play’s WebService library. We create a signed OAuth request using our credentials and run an HTTP GET request against the search API, passing the @userName as a query string, which will (hopefully) give us a list of Tweets mentioning that user. Lastly, we only keep those mentions that occurred after our last check time. Since we check every 10 minutes and the API only returns recent tweets, this should do fine (unless you are very popular on Twitter and get an insane amount of replies – but this really is your own fault, then).

Setting the ExecutionContext

If you try to compile the project now (using the compile command), you will be greeted with a few compilation errors complaining about a missing ExecutionContext. Futures are a way to abstract tasks, and they need something to run them. The ExecutionContext is the missing bit, which will schedule the tasks to be executed.

Since we are inside of an actor we can borrow the actor’s own dispatcher:

class MentionsFetcher extends Actor with ActorLogging {

  implicit val executionContext = context.dispatcher

  // ...
}

We’ll talk more about Execution Contexts later on when it comes to fine-tuning the connection with the database. For the moment let us focus on actually getting to talk with the database at all.

Setting up a reactive database connection

Configuring the database connection

In order to connect to the database, we will first need to configure the connection information in conf/application.conf like so:

// ...

db.default.driver="org.postgresql.Driver"
db.default.url="jdbc:postgresql://localhost/mentions?user=play"

Creating a helper class to access the database

Play’s Database API lets us access the configured database. We now need to do two things:

  • use jOOQ (rather than plain JDBC) to talk with the database
  • make sure we are not going to jeopardize our application by blocking while waiting for the database interaction to happen (JDBC is blocking)

For this purpose we will wrap the database operations in a Future that will run on its own ExecutionContext rather than sharing the one used by the actor or by the Play application itself.

Go ahead and create the file app/database/DB.scala:

package database

import javax.inject.Inject

import akka.actor.ActorSystem
import org.jooq.{SQLDialect, DSLContext}
import org.jooq.impl.DSL
import play.api.db.Database

import scala.concurrent.{ExecutionContext, Future}

class DB @Inject() (db: Database, system: ActorSystem) {

  val databaseContext: ExecutionContext = system.dispatchers.lookup("contexts.database")

  def query[A](block: DSLContext => A): Future[A] = Future {
    db.withConnection { connection =>
      val sql = DSL.using(connection, SQLDialect.POSTGRES_9_4)
      block(sql)
    }
  }(databaseContext)

  def withTransaction[A](block: DSLContext => A): Future[A] = Future {
    db.withTransaction { connection =>
      val sql = DSL.using(connection, SQLDialect.POSTGRES_9_4)
      block(sql)
    }
  }(databaseContext)

}

We define two methods, query and withTransaction that:

  • use a Future block in order to wrap the underlying code as a Future, thus running it asynchronously
  • use a custom databaseContext ExecutionContext in order to execute this Future
  • initialize jOOQ’s DSLContext and give access to it in the body of the expected functions

The databaseContext ExecutionContext is created using Akka’s configuration capabilities. We need to add the configuration of the database dispatcher to conf/application.conf:

contexts {
    database {
        fork-join-executor {
          parallelism-max = 9
        }
    }
}

The magic number 9 doesn’t come out of nowhere. Check the excellent explanation about connection pool sizing provided by the HikariCP connection pool documentation for more details. Those considerations are also discussed at length in Chapters 5 and 7 of Reactive Web Applications.
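
(If you are wondering where exactly the 9 comes from: it presumably follows the pool sizing formula suggested in that explanation, connections = (core_count * 2) + effective_spindle_count, which yields (4 * 2) + 1 = 9 on a four-core machine with a single disk.)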

Wiring everything using dependency injection

Next, let’s use Play’s built-in dependency injection mechanism in order to provide our MentionsFetcher actor with a DB class. Adjust the constructor of our MentionsFetcher actor in app/actors/MentionsFetcher.scala to look like so:

// ...
import javax.inject.Inject
import database.DB


class MentionsFetcher @Inject() (db: DB) extends Actor with ActorLogging { ... }

We just need one more thing in order to bootstrap our MentionsFetcher actor: let Play know that we want to use it.

For this purpose we will declare a module and leverage the plumbing that Play provides when it comes to interacting with Akka actors. At the end of MentionsFetcher.scala (or in a new file, if you like), declare the following MentionsFetcherModule:

import com.google.inject.AbstractModule
import play.api.libs.concurrent.AkkaGuiceSupport

class MentionsFetcherModule extends AbstractModule with AkkaGuiceSupport {
  def configure(): Unit =
    bindActor[MentionsFetcher]("fetcher")
}

Last but not least, we need to tell Play that we would like to use this module. In conf/application.conf, add the following line to do so:

play.modules.enabled += "actors.MentionsFetcherModule"

That’s it! When Play starts up it will initialize the enabled modules which in turn will lead to the actor being initialized.

We now can go ahead and use the database in order to store the fetched mentions.

Storing the mentions in the database

Thanks to jOOQ, writing the statements for storing the mentions is rather easy. Since we do not want to risk storing users or mentions twice, we will upsert them using the WHERE NOT EXISTS SQL clause. For the sake of recording as much data as possible, we will also store all mentioned users of a Tweet.

// ...
import java.sql.Timestamp

import generated.Tables._
import org.jooq.impl.DSL._

class MentionsFetcher @Inject() (db: DB) extends Actor with ActorLogging {

  // ...

  def storeMentions(mentions: Seq[Mention]) = db.withTransaction { sql =>
    log.info("Inserting potentially {} mentions into the database", mentions.size)
    val now = new Timestamp(DateTime.now.getMillis)

    def upsertUser(handle: String) = {
      sql.insertInto(TWITTER_USER, TWITTER_USER.CREATED_ON, TWITTER_USER.TWITTER_USER_NAME)
        .select(
          select(value(now), value(handle))
            .whereNotExists(
              selectOne()
                .from(TWITTER_USER)
                .where(TWITTER_USER.TWITTER_USER_NAME.equal(handle))
            )
        )
        .execute()
    }

    mentions.foreach { mention =>
      // upsert the mentioning users
      upsertUser(mention.from)

      // upsert the mentioned users
      mention.users.foreach { user =>
        upsertUser(user.handle)
      }

      // upsert the mention
      sql.insertInto(MENTIONS, MENTIONS.CREATED_ON, MENTIONS.TEXT, MENTIONS.TWEET_ID, MENTIONS.USER_ID)
        .select(
          select(
            value(now),
            value(mention.text),
            value(mention.id),
            TWITTER_USER.ID
          )
            .from(TWITTER_USER)
            .where(TWITTER_USER.TWITTER_USER_NAME.equal(mention.from))
            .andNotExists(
              selectOne()
                .from(MENTIONS)
                .where(MENTIONS.TWEET_ID.equal(mention.id))
            )
        )
        .execute()
    }
  }

}

Et voilà! If you execute this code (and generate some mentions, or use an earlier timestamp for filtering) you will get some data into your database!

Let’s now query and display a few statistics in the browser.

Displaying the mentions

In order to show our mentions, we will adjust the default view shown when launching the application, as well as the Application controller. Start by adjusting the template app/views/index.scala.html to look as follows:

@(mentionsCount: Int)

@main("Reactive mentions") {

    <p>You have been mentioned @mentionsCount times in the past days</p>

}

Next, edit the Application controller located in app/controllers/Application.scala:

package controllers

import java.sql.Timestamp
import javax.inject.Inject

import database.DB
import org.joda.time.DateTime
import play.api._
import play.api.mvc._

class Application @Inject() (db: DB) extends Controller {

  def index = Action.async { implicit request =>

    import generated.Tables._
    import org.jooq.impl.DSL._

    db.query { sql =>
      // Fetch the actual count value. Note: .execute() would only return
      // the number of fetched records, not the count itself.
      val mentionsCount = sql.select(
        count()
      ).from(MENTIONS)
       .where(
         MENTIONS.CREATED_ON.gt(value(new Timestamp(DateTime.now.minusDays(1).getMillis)))
       ).fetchOne(0, classOf[Int])

      Ok(views.html.index(mentionsCount))
    }

  }

}

This time, we are using the query method that we built in our DB helper. Since the result of this operation is a Future, we need to use the Action.async method, which expects a function of type Request => Future[Result]. The execution of this query is performed by the custom ExecutionContext that we have set up for database operations and does not interfere with the default ExecutionContext of the Play framework itself.

In case anything goes wrong and the database operations hang forever on the threads offered by that context, the rest of the application will not be affected (this principle is called “bulkheading” and is described in a bit more detail in Chapter 5 of Reactive Web Applications).

Conclusion

In this series we have explored the “Why?” of reactive applications and of asynchronous programming. In particular, we have talked about Futures and Actors, two powerful abstractions that make asynchronous programming easier to think about.

Most relational databases do not have asynchronous drivers available yet, and even though there are some projects aiming at this, I think it will still take some time before we have a standard that will hopefully be implemented by many vendors. In the meanwhile, we have seen that we can use a custom ExecutionContext in order to isolate otherwise blocking database operations.

(Image: http://manuel.bernhardt.io/wp-content/Bernhardt-ReactiveWA-MEAP-HI.jpeg)

If you liked this series and are interested in learning more on the topic, consider checking out my book, which provides an introduction to building reactive web applications on the JVM. Futures are covered in Chapter 5, Actors in Chapter 6 and database access in Chapter 7.

Read on

Read the previous chapters of this series.

A Curious Incidence of a jOOQ API Design Flaw


jOOQ is an internal domain-specific language (DSL), modelling the SQL language (external DSL) in Java (the host language). The main mechanism of the jOOQ API is described in this popular article:

The Java Fluent API Designer Crash Course.

Anyone can implement an internal DSL in Java (or in most other host languages) according to the rules from that article.

An example SQL language feature: BOOLEANs

One of the nice things about the SQL language, however, is the BOOLEAN type, which was introduced late into the language, as of SQL:1999. Sure, without booleans, you can just model TRUE and FALSE values via 1 and 0, and transform predicates into such values using CASE:

CASE WHEN A = B THEN 1 ELSE 0 END

But with true BOOLEAN support, you can do awesome queries like the following PostgreSQL query that is run against the Sakila database:

SELECT
  f.title, 
  string_agg(a.first_name, ', ') AS actors
FROM film AS f
JOIN film_actor AS fa USING (film_id)
JOIN actor AS a USING (actor_id)
GROUP BY film_id
HAVING every(a.first_name LIKE '%A%')

The above yields:

TITLE                    ACTORS
-----------------------------------------------------
AMISTAD MIDSUMMER        CARY, DARYL, SCARLETT, SALMA
ANNIE IDENTITY           CATE, ADAM, GRETA
ANTHEM LUKE              MILLA, OPRAH
ARSENIC INDEPENDENCE     RITA, CUBA, OPRAH
BIRD INDEPENDENCE        FAY, JAYNE
...

In other words, we’re looking for all the films where all the actors who played in the film contain the letter “A” in their first names. This is done via an aggregation on the boolean expression / predicate first_name LIKE '%A%':

HAVING every(a.first_name LIKE '%A%')

Now, in the terms of the jOOQ API, this means we’ll have to provide overloads of the having() method that take different argument types, such as:

// These accept "classic" predicates
having(Condition... conditions);
having(Collection<? extends Condition> conditions);

// These accept a BOOLEAN type
having(Field<Boolean> condition);

Of course, these overloads are available for any API method that accepts predicates / boolean values, not just for the HAVING clause.

As mentioned before, since SQL:1999, jOOQ’s Condition and Field<Boolean> are really the same thing. jOOQ allows for converting between the two via explicit API:

Condition condition1 = FIRST_NAME.like("%A%");
Field<Boolean> field = field(condition1);
Condition condition2 = condition(field);

… and the overloads make conversion more conveniently implicit.
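
For example (a sketch, where select stands for a query being constructed with the API from above):

// Explicit conversion from Field<Boolean> to Condition
select.having(condition(field));

// Implicit conversion, thanks to the having(Field<Boolean>) overload
select.having(field);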

So, what’s the problem?

The problem is that we thought it might be a good idea to add yet another convenience overload, the having(Boolean) method, with which constant, nullable BOOLEAN values could be introduced into the query. This can be useful when building dynamic SQL, or when commenting out some predicates:

DSL.using(configuration)
   .select()
   .from(TABLE)
   .where(true)
// .and(predicate1)
   .and(predicate2)
// .and(predicate3)
   .fetch();

The idea is that the WHERE keyword will never be commented out, regardless of which predicate you want to temporarily remove.

Unfortunately, adding this overload introduced a nuisance for developers using IDE auto-completion. Consider the following two method calls:

// Using jOOQ API
Condition condition1 = FIRST_NAME.eq   ("ADAM");
Condition condition2 = FIRST_NAME.equal("ADAM");

// Using Object.equals (accident)
boolean b = FIRST_NAME.equals("ADAM");

By (accidentally) adding a letter “s” to the equal() method – mostly because of IDE auto-completion – the whole predicate expression changes semantics drastically: from a jOOQ expression tree element that can be used to generate SQL, to an “ordinary” boolean value (which always yields false here, obviously).

Prior to adding that last overload, this wasn’t a problem. The equals() method usage wouldn’t compile, as there was no applicable overload taking a Java boolean type:

// These accept "classic" predicates
having(Condition condition);
having(Condition... conditions);
having(Collection<? extends Condition> conditions);

// These accept a BOOLEAN type
having(Field<Boolean> condition);

// This method didn't exist prior to jOOQ 3.7
// having(Boolean condition);

Since jOOQ 3.7, this accident goes unnoticed in user code, as the compiler no longer complains, leading to wrong SQL.

Conclusion: Be careful when designing an internal DSL. You inherit the host language’s “flaws”

Java is “flawed” in that every type is guaranteed to inherit from java.lang.Object, and with it, its methods: getClass(), clone(), finalize(), equals(), hashCode(), toString(), notify(), notifyAll(), and wait().

In most APIs, this isn’t really that much of a problem. You don’t really need to re-use any of the above method names (please, don’t).

But when designing an internal DSL, these Object method names (just like the language keywords) limit you in your design space. This is particularly obvious in the case of equal(s).

We’ve learned our lesson: we’ve deprecated the having(Boolean) overload and all the similar overloads, and we will remove them again.

Oracle LONG and LONG RAW Causing “Stream has already been closed” Exception


Like many old databases, Oracle has legacy data types, which are rather nasty to work with in every day SQL. Usually, you don’t run into wild encounters of LONG and LONG RAW data types anymore, but when you’re working with an old database, or with the dictionary views, you might just have to deal with LONG.

These data types are pretty much the same thing as the “newer” LOB representations:

  • LONG and CLOB are somewhat the same thing, except they aren’t
  • LONG RAW and BLOB are somewhat the same thing, except they aren’t

Reading LONG or LONG RAW from JDBC causes a “Stream has already been closed” exception

When you have the following schema:

CREATE TABLE t_long_raw_and_blob (
  id        NUMBER(7),
  blob1     BLOB,
  longx     LONG RAW,
  blob2     BLOB,

  CONSTRAINT pk_t_long_raw_and_blob PRIMARY KEY (id)
);

CREATE TABLE t_long_and_clob (
  id        NUMBER(7),
  clob1     CLOB,
  longx     LONG,
  clob2     CLOB,

  CONSTRAINT pk_t_long_and_clob PRIMARY KEY (id)
);

… you cannot simply select all columns via JDBC (or other APIs) like this:

try (PreparedStatement s = con.prepareStatement(
        "SELECT * FROM t_long_raw_and_blob");
     ResultSet rs = s.executeQuery()) {

    while (rs.next()) {
        System.out.println();
        System.out.println("ID    = " + rs.getInt(1));
        System.out.println("BLOB1 = " + rs.getBytes(2));
        System.out.println("LONGX = " + rs.getBytes(3));
        System.out.println("BLOB2 = " + rs.getBytes(4));
    }
}

If you’re doing the above, you’ll run into something along the lines of:

Caused by: java.sql.SQLException: Stream has already been closed
    at oracle.jdbc.driver.LongRawAccessor.getBytes(LongRawAccessor.java:162)
    at oracle.jdbc.driver.OracleResultSetImpl.getBytes(OracleResultSetImpl.java:708)
    ... 33 more

The “correct” solution would be to run the following instead:

try (PreparedStatement s = con.prepareStatement(
        "SELECT * FROM t_long_raw_and_blob");
     ResultSet rs = s.executeQuery()) {

    while (rs.next()) {
        byte[] longx = rs.getBytes(3);

        System.out.println();
        System.out.println("ID    = " + rs.getInt(1));
        System.out.println("BLOB1 = " + rs.getBytes(2));
        System.out.println("LONGX = " + longx);
        System.out.println("BLOB2 = " + rs.getBytes(4));
    }
}

In short: All LONG or LONG RAW columns have to be retrieved from the ResultSet prior to all the other columns.

That’s nasty

Indeed! Some sort of low-level Oracle protocol flaw has leaked outside of the JDBC API, which is very unfortunate. We don’t care about these details; we should be able to fetch resources in any order.

In jOOQ, we’ve fixed this issue (#4820), so you can run your statement and order the columns in whatever order you want them to be:

DSL.using(configuration)
   .select(
       T_LONG_RAW_AND_BLOB.ID,
       T_LONG_RAW_AND_BLOB.BLOB1,
       T_LONG_RAW_AND_BLOB.LONGX,
       T_LONG_RAW_AND_BLOB.BLOB2
   )
   .from(T_LONG_RAW_AND_BLOB)
   .fetch();

jOOQ will internally reorder the columns when fetching them from the ResultSet, transparently.
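
For illustration, here’s a minimal sketch of the general technique (not jOOQ’s actual implementation): read the LONG / LONG RAW columns first, buffer all values, and hand them back in the caller’s order.

import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.sql.Types;

public class LongFirstReader {

    // Reads one row into an Object[], fetching LONG (Types.LONGVARCHAR)
    // and LONG RAW (Types.LONGVARBINARY) columns before all the others,
    // as required by the Oracle JDBC driver
    static Object[] readRow(ResultSet rs) throws SQLException {
        ResultSetMetaData meta = rs.getMetaData();
        int columns = meta.getColumnCount();
        Object[] row = new Object[columns];

        // First pass: the "stream" types that must be read eagerly
        for (int i = 1; i <= columns; i++) {
            int type = meta.getColumnType(i);
            if (type == Types.LONGVARCHAR || type == Types.LONGVARBINARY)
                row[i - 1] = rs.getObject(i);
        }

        // Second pass: all the other columns, in their original order
        for (int i = 1; i <= columns; i++) {
            int type = meta.getColumnType(i);
            if (type != Types.LONGVARCHAR && type != Types.LONGVARBINARY)
                row[i - 1] = rs.getObject(i);
        }

        return row;
    }
}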