Tag Archive | jooq

Querying Your Database from Millions of Fibers (Rather than Thousands of Threads)


jOOQ is a great way to do SQL in Java, and Quasar fibers bring much improved concurrency

We’re excited to announce another very interesting guest post on the jOOQ Blog by Fabio Tudone from Parallel Universe.

Parallel Universe develops an open-source stack that allows developers to easily code extremely concurrent applications on the JVM. With the Parallel Universe stack you build software that works in harmony with modern hardware rather than fighting it at every turn, while keeping your programming language and your simple, familiar programming styles.

Fabio Tudone develops and maintains Quasar integration modules as part of the Comsat project. He was part of, and then led, the development of a cloud-based enterprise content governance platform for several years before joining the Parallel Universe team, and he's been writing mostly JVM software throughout his professional journey. His interests include Dev and DevOps practices, scalability, concurrent and functional programming, as well as runtime platforms. Naturally curious and leaning towards exploration, he enjoys gathering knowledge and understanding from people, places and cultures. He’s also interested in awareness practices and likes writing all sorts of stuff.

Quasar features an integration for JDBC and jOOQ as part of the Comsat project, so let’s have a look inside the box.

JDBC, jOOQ and Quasar

comsat-jdbc provides a fiber-blocking wrapper of the JDBC API, so that you can use your connection inside fibers rather than regular Java threads.

Why would you do that? Because fibers are lightweight threads and you can have many more fibers than threads in your running JVM. “Many more” means we’re talking millions versus a handful of thousands.

This means that you have a lot more concurrency capacity in your system to do other things in parallel while you wait for JDBC execution, be it concurrent / parallel calculations (like exchanging actor messages in your highly reliable Quasar Erlang-like actor system) or fiber-blocking I/O (e.g. serving web requests, invoking micro-services, reading files through fiber NIO or accessing other fiber-enabled data sources like MongoDB).

If your DB can stand it and a few more regular threads won’t blow up your system (yet), you can even increase your fiber-JDBC pool (see Where’s the waiting line? below) and send more concurrent jOOQ commands.

Since jOOQ uses JDBC connections to access the database, having jOOQ run on fibers is as easy as bringing in the comsat-jooq dependency and handing your fiber-enabled JDBC connection to the jOOQ context:

import java.sql.Connection;
import static org.jooq.impl.DSL.*;

// ...

Connection conn = FiberDataSource.wrap(dataSource)
                                 .getConnection();
DSLContext create = DSL.using(conn);

// ...

Of course you can also configure a ConnectionProvider to fetch connections from your FiberDataSource.

From this moment on you can use regular jOOQ and everything will happen in fiber-blocking mode rather than thread-blocking. That’s it.

No, really, there’s absolutely nothing more to it: you keep using the excellent jOOQ, only with much more efficient fibers rather than threads. Quasar is a good citizen and won’t force you into a new API (which is nice especially when the original one is great already).
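
For instance, running a query inside a fiber could look like the following sketch (assuming the fiber-enabled create context from above and a generated AUTHOR table; the Fiber launch is illustrative, not part of the comsat-jooq API):

import co.paralleluniverse.fibers.Fiber;

// The fetch() call blocks only this fiber, not an OS thread
new Fiber<Void>(() -> {
    create.select(AUTHOR.FIRST_NAME, AUTHOR.LAST_NAME)
          .from(AUTHOR)
          .fetch()
          .forEach(System.out::println);
}).start();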

Since the JVM at present supports neither native green threads nor continuations, which could be used to implement lightweight threads, Quasar implements continuations (and fibers on top of them) via bytecode instrumentation. This can be done at compile time, but often it’s just more convenient to use Quasar’s agent (especially when instrumenting third-party libraries), so here’s an example Gradle project based on Dropwizard that also includes the Quasar agent setup (don’t forget about Capsule, a really great Java deployment tool for every need, which, needless to say, makes using Quasar and agents in general a breeze). The example doesn’t use all of the jOOQ features; rather, it falls into the SQL-building use case (both for querying and for CRUD), but you’re encouraged to change it to suit your needs. The without-comsat branch contains a thread-blocking version so you can compare and see the (minimal) differences with the Comsat version.

Where’s the waiting line?

You might be wondering now: OK, but JDBC is a thread-blocking API, so how can Quasar turn it into a fiber-blocking one? Because JDBC doesn’t have an asynchronous mode, Quasar uses a thread pool behind the scenes: fibers dispatch JDBC operations to it, and when a JDBC operation completes, the pool unfreezes the originating fiber and schedules it for resumption (have a look at Quasar’s integration patterns for more info).
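
Conceptually, the dispatch looks like this sketch (ours, not Quasar’s actual implementation; with Quasar, the calling fiber is parked at the get() call and resumed on completion instead of blocking a thread):

import java.util.concurrent.*;

ExecutorService jdbcPool = Executors.newFixedThreadPool(10);

<T> T dispatchToJdbcPool(Callable<T> jdbcWork) throws Exception {
    // The thread-blocking JDBC work runs on a pool thread...
    Future<T> future = jdbcPool.submit(jdbcWork);
    // ...while the caller waits for the result
    return future.get();
}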

Yes, here’s the nasty waiting line: JDBC commands waiting to be executed by the thread pool. Although you’re not improving DB parallelism beyond your JDBC thread-pool size, you’re not hurting your fibers either, even though you’re still using a simple and familiar blocking API. You can still have millions of fibers.

Is it possible to improve the overall situation? Without a standard asynchronous Java RDBMS API there isn’t much we can do. However, this may not matter at all if the database is your bottleneck. There are several nice posts and discussions about this topic and the argument amounts to deciding where you want to move the waiting line.

Bonus: how does this neat jOOQ integration work under the covers?

At present Quasar needs the developer (or integrator) to tell it what to instrument, although fully automatic instrumentation is in the works (this feature depends on some minor JRE changes that won’t be released before Java 9). If you can conveniently alter the source code (or the compiled classes), it’s enough to annotate methods with @Suspendable or to let them declare throws SuspendExecution, but this is usually not the case with libraries. Methods with fixed, well-known names can instead be listed in META-INF/suspendables and META-INF/suspendable-supers, respectively for concrete methods and for abstract / interface methods that can have suspendable implementations.
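
For your own code, the marking might look like this minimal sketch (the Repository class and its methods are hypothetical):

import co.paralleluniverse.fibers.SuspendExecution;
import co.paralleluniverse.fibers.Suspendable;

class Repository {

    // Option 1: annotate the method
    @Suspendable
    void saveAll() { /* fiber-blocking calls here */ }

    // Option 2: declare the marker exception (it is never actually thrown)
    void loadAll() throws SuspendExecution { /* fiber-blocking calls here */ }
}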

If there are a lot of them (or code generation is involved), you can write a SuspendableClassifier to ship with your integration and register it with Quasar’s SPI to provide additional instrumentation logic (see jOOQ’s). A SuspendableClassifier‘s job is to examine signature information about each and every method on your runtime classpath during the instrumentation phase and to tell whether it’s suspendable, whether it can have suspendable implementations, whether for sure neither is the case, or whether it doesn’t know (some other classifier could still say “suspendable” or “suspendable-super” later on).

Summing it all up

Well… Just enjoy the excellent jOOQ on efficient fibers!

Type Safe Queries for JPA’s Native Query API


When you’re using JPA – sometimes – JPQL won’t do the trick and you’ll have to resort to native SQL. From the very beginning, ORMs like Hibernate kept an open “backdoor” for these cases and offered a similar API to Spring’s JdbcTemplate, to Apache DbUtils, or to jOOQ for plain SQL. This is useful as you can continue using your ORM as your single point of entry for database interaction.

However, writing complex, dynamic SQL using string concatenation is tedious and error-prone, and an open door for SQL injection vulnerabilities. Using a type safe API like jOOQ would be very useful, but you may find it hard to maintain two different connection, transaction, and session models within the same application just for 10-15 native queries.

But the truth is:

You can use jOOQ for your JPA native queries!

That’s true! There are several ways to achieve this.

Fetching tuples (i.e. Object[])

The simplest way will not make use of any of JPA’s advanced features and will simply fetch tuples in JPA’s native Object[] form for you. Assuming this simple utility method:

public static List<Object[]> nativeQuery(
    EntityManager em, 
    org.jooq.Query query
) {

    // Extract the SQL statement from the jOOQ query:
    Query result = em.createNativeQuery(query.getSQL());

    // Extract the bind values from the jOOQ query:
    List<Object> values = query.getBindValues();
    for (int i = 0; i < values.size(); i++) {
        result.setParameter(i + 1, values.get(i));
    }

    return result.getResultList();
}

Using the API

This is all you need to bridge the two APIs in their simplest form to run “complex” queries via an EntityManager:

List<Object[]> books =
nativeQuery(em, DSL.using(configuration)
    .select(
        AUTHOR.FIRST_NAME, 
        AUTHOR.LAST_NAME, 
        BOOK.TITLE
    )
    .from(AUTHOR)
    .join(BOOK)
        .on(AUTHOR.ID.eq(BOOK.AUTHOR_ID))
    .orderBy(BOOK.ID));

books.forEach((Object[] book) -> 
    System.out.println(book[0] + " " + 
                       book[1] + " wrote " + 
                       book[2]));

Agreed, not a lot of type safety in the results – as we’re only getting an Object[]. We’re looking forward to a future Java that supports tuple (or even record) types like Scala or Ceylon.

So a better solution might be the following:

Fetching entities

Let’s assume you have the following, very simple entities:

@Entity
@Table(name = "book")
public class Book {

    @Id
    public int id;

    @Column(name = "title")
    public String title;

    @ManyToOne
    public Author author;
}

@Entity
@Table(name = "author")
public class Author {

    @Id
    public int id;

    @Column(name = "first_name")
    public String firstName;

    @Column(name = "last_name")
    public String lastName;

    @OneToMany(mappedBy = "author")
    public Set<Book> books;
}

And let’s assume we’ll add an additional utility method that also passes a Class reference to the EntityManager:

public static <E> List<E> nativeQuery(
    EntityManager em, 
    org.jooq.Query query,
    Class<E> type
) {

    // Extract the SQL statement from the jOOQ query:
    Query result = em.createNativeQuery(
        query.getSQL(), type);

    // Extract the bind values from the jOOQ query:
    List<Object> values = query.getBindValues();
    for (int i = 0; i < values.size(); i++) {
        result.setParameter(i + 1, values.get(i));
    }

    // There's an unsafe cast here, but we can be sure
    // that we'll get the right type from JPA
    return result.getResultList();
}

Using the API

This is now rather slick: just put your jOOQ query into that API and get JPA entities back from it – the best of both worlds, as you can easily add/remove nested collections from the fetched entities as if you had fetched them via JPQL:

List<Author> authors =
nativeQuery(em,
    DSL.using(configuration)
       .select()
       .from(AUTHOR)
       .orderBy(AUTHOR.ID)
, Author.class); // This is our entity class here

authors.forEach(author -> {
    System.out.println(author.firstName + " " + 
                       author.lastName + " wrote");
    
    author.books.forEach(book -> {
        System.out.println("  " + book.title);

        // Manipulate the entities here. Your
        // changes will be persistent!
    });
});

Fetching EntityResults

If you’re extra-daring and have a strange affection for annotations, or you just want to crack a joke for your coworkers just before you leave on vacation, you can also resort to using JPA’s javax.persistence.SqlResultSetMapping. Imagine the following mapping declaration:

@SqlResultSetMapping(
    name = "bookmapping",
    entities = {
        @EntityResult(
            entityClass = Book.class,
            fields = {
                @FieldResult(name = "id", column = "b_id"),
                @FieldResult(name = "title", column = "b_title"),
                @FieldResult(name = "author", column = "b_author_id")
            }
        ),
        @EntityResult(
            entityClass = Author.class,
            fields = {
                @FieldResult(name = "id", column = "a_id"),
                @FieldResult(name = "firstName", column = "a_first_name"),
                @FieldResult(name = "lastName", column = "a_last_name")
            }
        )
    }
)

Essentially, the above declaration maps database columns (@SqlResultSetMapping -> entities -> @EntityResult -> fields -> @FieldResult -> column) onto entities and their corresponding attributes. With this powerful technique, you can generate entity results from any sort of SQL query result.

Again, we’ll be creating a small utility method:

public static <E> List<E> nativeQuery(
    EntityManager em, 
    org.jooq.Query query,
    String resultSetMapping
) {

    // Extract the SQL statement from the jOOQ query:
    Query result = em.createNativeQuery(
        query.getSQL(), resultSetMapping);

    // Extract the bind values from the jOOQ query:
    List<Object> values = query.getBindValues();
    for (int i = 0; i < values.size(); i++) {
        result.setParameter(i + 1, values.get(i));
    }

    // This implicit cast is a lie, but let's risk it
    return result.getResultList();
}

Note that the above API makes use of an anti-pattern, which is OK in this case, because JPA is not a type safe API in the first place.

Using the API

Now, again, you can pass your type safe jOOQ query to the EntityManager via the above API, passing the name of the SqlResultSetMapping along like so:

List<Object[]> result =
nativeQuery(em,
    DSL.using(configuration
       .select(
           AUTHOR.ID.as("a_id"),
           AUTHOR.FIRST_NAME.as("a_first_name"),
           AUTHOR.LAST_NAME.as("a_last_name"),
           BOOK.ID.as("b_id"),
           BOOK.AUTHOR_ID.as("b_author_id"),
           BOOK.TITLE.as("b_title")
       )
       .from(AUTHOR)
       .join(BOOK).on(BOOK.AUTHOR_ID.eq(AUTHOR.ID))
       .orderBy(BOOK.ID)), 
    "bookmapping" // The name of the SqlResultSetMapping
);

result.forEach((Object[] entities) -> {
    Author author = (Author) entities[1];
    Book book = (Book) entities[0];

    System.out.println(author.firstName + " " + 
                       author.lastName + " wrote " + 
                       book.title);
});

The result in this case is again an Object[], but this time, the Object[] doesn’t represent a tuple with individual columns, but it represents the entities as declared by the SqlResultSetMapping annotation.

This approach is intriguing and probably has its use when you need to map arbitrary results from queries, but still want managed entities. If you want to know more, we can only recommend Thorben Janssen‘s interesting blog series about these advanced JPA features.

Conclusion

Choosing between an ORM and SQL (or between Hibernate and jOOQ, in particular) isn’t always easy.

  • ORMs shine when it comes to applying object graph persistence, i.e. when you have a lot of complex CRUD, involving complex locking and transaction strategies.
  • SQL shines when it comes to running bulk SQL, both for read and write operations, and when running analytics and reporting.

When you’re “lucky” (as in – the job is easy), your application is only on one side of the fence, and you can make a choice between ORM and SQL. When you’re “lucky” (as in – ooooh, this is an interesting problem), you will have to use both. (See also Mike Hadlow’s interesting article on the subject)

The message here is: You can! Using JPA’s native query API, you can run complex queries leveraging the full power of your RDBMS, and still map results to JPA entities. You’re not restricted to using JPQL.

Side-note

While we’ve been critical of some aspects of JPA in the past (read How JPA 2.1 has become the new EJB 2.0 for details), our criticism has been mainly focused on JPA’s (ab-)use of annotations. When you’re using a type safe API like jOOQ, you can easily provide the compiler with all the type information required to construct results. We’re convinced that a future version of JPA will engage more heavily in using Java’s type system, allowing a more fluent integration of SQL, JPQL, and entity persistence.

PostgreSQL’s Best-Kept Secret, and how to Use it with jOOQ


PostgreSQL has a lot of secret data types. In recent times, PostgreSQL’s JSON and JSONB support was hyped as being the NoSQL-on-SQL secret (e.g. as advertised by ToroDB) that allows you to get the best out of both worlds. But there are many other useful data types, among which are the range types.

How does the range type work?

Ranges are very useful for things like age, date, or price intervals. Let’s assume the following table:

CREATE TABLE age_categories (
  name VARCHAR(50),
  ages INT4RANGE
);

We can now fill the above table as such:

INSERT INTO age_categories
VALUES ('Foetus',   int4range(-1, 0)),
       ('Newborn',  int4range(0, 1)),
       ('Infant',   int4range(1, 2)),
       ('Kid',      int4range(2, 12)),
       ('Teenager', int4range(12, 20)),
       ('Tween',    int4range(20, 30)),
       ('Adult',    int4range(30, 60)),
       ('Senior',   int4range(60, 90)),
       ('Ancient',  int4range(90, 999));

And query it, e.g. by taking my age:

SELECT name
FROM age_categories
WHERE range_contains_elem(ages, 33);

… yielding

name
-----
Adult

There are a lot of other useful functions involved with range calculations. For instance, we can get the age span of a set of categories. Let’s assume we want to have the age span of Kid, Teenager, Senior. Let’s query:

SELECT int4range(min(lower(ages)), max(upper(ages)))
FROM age_categories
WHERE name IN ('Kid', 'Teenager', 'Senior');

… yielding

int4range
---------
[2,90)

And now, let’s go back to fetching all the categories that are within that range:

SELECT name
FROM age_categories
WHERE range_overlaps(ages, (
  SELECT int4range(min(lower(ages)), max(upper(ages)))
  FROM age_categories
  WHERE name IN ('Kid', 'Teenager', 'Senior')
));

… yielding

name
--------
Kid
Teenager
Tween
Adult
Senior

All of this could have been implemented with values contained in two separate columns, but using ranges is just much more expressive for range arithmetic.

How to use these types with jOOQ?

jOOQ doesn’t include built-in support for these advanced data types, but it allows you to bind these data types to your own, custom representation.

A good representation of PostgreSQL’s range types in Java would be jOOλ’s org.jooq.lambda.tuple.Range type, but you could also simply use int[] or Map.Entry for ranges. When we’re using jOOλ’s Range type, the idea is to be able to run the following statements using jOOQ:

// Assuming this static import:
import static org.jooq.lambda.tuple.Tuple.*;

DSL.using(configuration)
   .insertInto(AGE_CATEGORIES)
   .columns(AGE_CATEGORIES.NAME, AGE_CATEGORIES.AGES)
   .values("Foetus",   range(-1, 0))
   .values("Newborn",  range(0, 1))
   .values("Infant",   range(1, 2))
   .values("Kid",      range(2, 12))
   .values("Teenager", range(12, 20))
   .values("Tween",    range(20, 30))
   .values("Adult",    range(30, 60))
   .values("Senior",   range(60, 90))
   .values("Ancient",  range(90, 999))
   .execute();

And querying…

DSL.using(configuration)
   .select(AGE_CATEGORIES.NAME)
   .from(AGE_CATEGORIES)
   .where(rangeContainsElem(AGE_CATEGORIES.AGES, 33))
   .fetch();

DSL.using(configuration)
   .select(AGE_CATEGORIES.NAME)
   .from(AGE_CATEGORIES)
   .where(rangeOverlaps(AGE_CATEGORIES.AGES,
      select(int4range(min(lower(AGE_CATEGORIES.AGES)),
                       max(upper(AGE_CATEGORIES.AGES))))
     .from(AGE_CATEGORIES)
     .where(AGE_CATEGORIES.NAME.in(
       "Kid", "Teenager", "Senior"
     ))
   ))
   .fetch();

As always, the idea with jOOQ is that the SQL you want to get as output is the SQL you want to write in Java, type safely. In order to be able to write the above, we will need to implement 1-2 missing pieces. First off, we’ll need to create a data type binding (org.jooq.Binding) as described in this section of the manual. The binding can be written as such, using the following Converter:

public class Int4RangeConverter 
implements Converter<Object, Range<Integer>> {
    private static final Pattern PATTERN = 
        Pattern.compile("\\[(.*?),(.*?)\\)");

    @Override
    public Range<Integer> from(Object t) {
        if (t == null)
            return null;

        Matcher m = PATTERN.matcher("" + t);
        if (m.find())
            return Tuple.range(
                Integer.valueOf(m.group(1)), 
                Integer.valueOf(m.group(2)));

        throw new IllegalArgumentException(
            "Unsupported range : " + t);
    }

    @Override
    public Object to(Range<Integer> u) {
        return u == null 
            ? null 
            : "[" + u.v1 + "," + u.v2 + ")";
    }

    @Override
    public Class<Object> fromType() {
        return Object.class;
    }

    @SuppressWarnings({ "unchecked", "rawtypes" })
    @Override
    public Class<Range<Integer>> toType() {
        return (Class) Range.class;
    }
}

… and the Converter can then be re-used in a Binding:

public class PostgresInt4RangeBinding 
implements Binding<Object, Range<Integer>> {

    @Override
    public Converter<Object, Range<Integer>> converter() {
        return new Int4RangeConverter();
    }

    @Override
    public void sql(BindingSQLContext<Range<Integer>> ctx) throws SQLException {
        ctx.render()
           .visit(DSL.val(ctx.convert(converter()).value()))
           .sql("::int4range");
    }

    // ...
}

The important thing in the binding is just that every bind variable needs to be encoded in PostgreSQL’s string format for range types (i.e. [lower, upper)) and cast explicitly to ?::int4range, that’s all.
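
For completeness, the methods elided by the // ... comment above essentially just delegate to the Converter; here’s a sketch along the lines of the jOOQ manual’s binding examples (double-check the exact overrides against your jOOQ version):

    @Override
    public void set(BindingSetStatementContext<Range<Integer>> ctx) throws SQLException {
        // Send the encoded "[lower,upper)" string to JDBC
        ctx.statement().setString(
            ctx.index(),
            Objects.toString(ctx.convert(converter()).value(), null));
    }

    @Override
    public void get(BindingGetResultSetContext<Range<Integer>> ctx) throws SQLException {
        // Parse the string coming back from the database
        ctx.convert(converter()).value(
            ctx.resultSet().getString(ctx.index()));
    }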

You can then configure your code generator to use those types, e.g. on all columns that are called [xxx]_RANGE:

<customType>
    <name>com.example.PostgresInt4RangeBinding</name>
    <type>org.jooq.lambda.tuple.Range&lt;Integer></type>
    <binding>com.example.PostgresInt4RangeBinding</binding>
</customType>
<forcedType>
    <name>com.example.PostgresInt4RangeBinding</name>
    <expression>.*?_RANGE</expression>
</forcedType>

The last missing thing now are the functions that you need to compare ranges, i.e.:

  • rangeContainsElem()
  • rangeOverlaps()
  • int4range()
  • lower()
  • upper()

The first two are written quickly:

static <T extends Comparable<T>> Condition 
    rangeContainsElem(Field<Range<T>> f1, T e) {
    return DSL.condition("range_contains_elem({0}, {1})", f1, val(e));
}

static <T extends Comparable<T>> Condition 
    rangeOverlaps(Field<Range<T>> f1, Range<T> f2) {
    return DSL.condition("range_overlaps({0}, {1})", f1, val(f2, f1.getDataType()));
}
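
The remaining helpers, int4range(), lower() and upper(), can be sketched along the same lines as plain SQL templates (shown for Integer ranges only, to keep the generics simple; the class literal cast mirrors the Converter above):

@SuppressWarnings({ "unchecked", "rawtypes" })
static Field<Range<Integer>> int4range(Field<Integer> lower, Field<Integer> upper) {
    return DSL.field("int4range({0}, {1})", (Class) Range.class, lower, upper);
}

static Field<Integer> lower(Field<Range<Integer>> f) {
    return DSL.field("lower({0})", Integer.class, f);
}

static Field<Integer> upper(Field<Range<Integer>> f) {
    return DSL.field("upper({0})", Integer.class, f);
}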

Conclusion

Writing an extension for jOOQ data types takes a bit of time and effort, but it is really easy to achieve. Once you’ve set up all the data types and the bindings, your generated source code will reflect the actual data types from your database, and you will be able to write very powerful queries in a type safe way, directly in Java. Let’s consider again the plain SQL and the jOOQ version:

SQL

SELECT name
FROM age_categories
WHERE range_overlaps(ages, (
  SELECT int4range(min(lower(ages)), max(upper(ages)))
  FROM age_categories
  WHERE name IN ('Kid', 'Teenager', 'Senior')
));

jOOQ

DSL.using(configuration)
   .select(AGE_CATEGORIES.NAME)
   .from(AGE_CATEGORIES)
   .where(rangeOverlaps(AGE_CATEGORIES.AGES,
      select(int4range(min(lower(AGE_CATEGORIES.AGES)),
                       max(upper(AGE_CATEGORIES.AGES))))
     .from(AGE_CATEGORIES)
     .where(AGE_CATEGORIES.NAME.in(
       "Kid", "Teenager", "Senior"
     ))
   ))
   .fetch();

More information about data type bindings can be found in the jOOQ manual.

How JPA 2.1 has become the new EJB 2.0


Beauty lies in the eye of the beholder. So does “ease”:

Thorben writes very good and useful articles about JPA, and he’s recently started an excellent series about JPA 2.1’s new features. Among which: Result set mapping. You may know result set mapping from websites like CTMMC, or annotatiomania.com. We can summarise this mapping procedure as follows:

a) define the mapping

@SqlResultSetMapping(
    name = "BookAuthorMapping",
    entities = {
        @EntityResult(
            entityClass = Book.class,
            fields = {
                @FieldResult(name = "id", column = "id"),
                @FieldResult(name = "title", column = "title"),
                @FieldResult(name = "author", column = "author_id"),
                @FieldResult(name = "version", column = "version")}),
        @EntityResult(
            entityClass = Author.class,
            fields = {
                @FieldResult(name = "id", column = "authorId"),
                @FieldResult(name = "firstName", column = "firstName"),
                @FieldResult(name = "lastName", column = "lastName"),
                @FieldResult(name = "version", column = "authorVersion")})})

The above mapping is rather straightforward. It specifies how database columns should be mapped to entity fields and to entities as a whole. You then give this mapping a name ("BookAuthorMapping"), which you can reuse across your application, e.g. with native JPA queries.

I specifically like the fact that Thorben then writes:

If you don’t like to add such a huge block of annotations to your entity, you can also define the mapping in an XML file

… So, we’re back to replacing huge blocks of annotations with huge blocks of XML – a technique that many of us had hoped to avoid by using annotations… :-)

b) apply the mapping

Once the mapping has been statically defined on some Java type, you can then fetch those entities by applying the above BookAuthorMapping:

List<Object[]> results = this.em.createNativeQuery(
    "SELECT b.id, b.title, b.author_id, b.version, " +
    "       a.id as authorId, a.firstName, a.lastName, " + 
    "       a.version as authorVersion " + 
    "FROM Book b " +
    "JOIN Author a ON b.author_id = a.id", 
    "BookAuthorMapping"
).getResultList();

results.stream().forEach((record) -> {
    Book book = (Book)record[0];
    Author author = (Author)record[1];
});

Notice how you still have to remember the Book and Author types and cast explicitly as no verifiable type information is really attached to anything.

The definition of “complex”

Now, the article claims that this is “complex” mapping, and no doubt, I would agree. This very simple query with only a simple join already triggers such an annotation mess if you want to really map your entities via JPA. You don’t want to see Thorben’s mapping annotations, once the queries get a little more complex. And remember, @SqlResultSetMapping is about mapping (native!) SQL results, so we’re no longer in object-graph-persistence land, we’re in SQL land, where bulk fetching, denormalising, aggregating, and other “fancy” SQL stuff is king.

The problem is here:

Java 5 introduced annotations. Annotations were originally intended to be used as “artificial modifiers”, i.e. things like static, final, protected (interestingly enough, Ceylon only knows annotations, no modifiers). This makes sense. Java language designers could introduce new modifiers / “keywords” without breaking existing code – because “real” keywords are reserved words, which are hard to introduce in a language. Remember enum?

So, good use-cases for annotations (and there are only a few) are:

  • @Override
  • @Deprecated (although, a comment attribute would’ve been fancy)
  • @FunctionalInterface

JPA (and other Java EE APIs, as well as Spring) have gone completely wacko on their use of annotations. Repeat after me:

No language @Before or @After Java ever abused annotations as much as Java

(the @Before / @After idea was lennoff’s, on reddit)

There is a strong déjà vu in me when reading the above. Do you remember the following?

No language before or after Java ever abused checked exceptions as much as Java

We will all deeply regret Java annotations by 2020.

Annotations are a big wart in the Java type system. They have an extremely limited justified use and what we Java Enterprise developers are doing these days is absolutely not within the limits of “justified”. We’re abusing them for configuration for things that we should really be writing code for.

Here’s how you’d run the same query with jOOQ (or any other API that leverages generics and type safety for SQL):

Book b = BOOK.as("b");
Author a = AUTHOR.as("a");

DSL.using(configuration)
   .select(b.ID, b.TITLE, b.AUTHOR_ID, b.VERSION,
           a.ID, a.FIRST_NAME, a.LAST_NAME,
           a.VERSION)
   .from(b)
   .join(a).on(b.AUTHOR_ID.eq(a.ID))
   .fetch()
   .forEach(record -> {
       BookRecord book = record.into(b);
       AuthorRecord author = record.into(a);
   });

This example replaces both JPA 2.1’s annotations AND querying. All the meta information about projected “entities” is already contained in the query and thus in the Result that is produced by the fetch() method. But it doesn’t really matter, the point here is that this lambda expression …

record -> {
    BookRecord book = record.into(b);
    AuthorRecord author = record.into(a);
}

… it can be anything you want! Like the more sophisticated examples we’ve shown in previous blog posts:

Mapping can be defined ad-hoc, on the fly, using functions. Functions are the ideal mappers, because they take an input, produce an output, and are completely stateless. And the best thing about functions in Java 8 is, they’re compiled by the Java compiler and can be used to type-check your mapping. And you can assign functions to objects, which allows you to reuse the functions, when a given mapping algorithm can be used several times.

In fact, the SQL SELECT clause itself is such a function. A function that transforms input tuples / rows into output tuples / rows, and you can adapt that function on the fly using additional expressions.
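
To make that concrete, here’s a hedged sketch of such a reusable mapping function (BookRecord being the generated record type, titleMapper our own name):

// A reusable, compiler-checked mapper, assigned to an object so
// the same mapping can be shared across queries
RecordMapper<BookRecord, String> titleMapper =
    book -> book.getTitle().toUpperCase();

List<String> titles =
DSL.using(configuration)
   .selectFrom(BOOK)
   .fetch(titleMapper);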

There is absolutely no way to type-check anything in the previous JPA 2.1 native SQL statement and @SqlResultSetMapping example. Imagine changing a column name:

List<Object[]> results = this.em.createNativeQuery(
    "SELECT b.id, b.title as book_title, " +
    "       b.author_id, b.version, " +
    "       a.id as authorId, a.firstName, a.lastName, " + 
    "       a.version as authorVersion " + 
    "FROM Book b " +
    "JOIN Author a ON b.author_id = a.id", 
    "BookAuthorMapping"
).getResultList();

Did you notice the difference? The b.title column was renamed to book_title. In a SQL string. Which blows up at run time! How to remember that you have to also adapt

@FieldResult(name = "title", column = "title")

… to be

@FieldResult(name = "title", column = "book_title")

Conversely, how to remember that once you rename the column in your @FieldResult, you’ll also have to go check wherever this "BookAuthorMapping" is used, and also change the column names in those queries.

@SqlResultSetMapping(
    name = "BookAuthorMapping",
    ...
)

Annotations are evil

You may or may not agree with some of the above. You may or may not like jOOQ as an alternative to JPA, that’s perfectly fine. But it is really hard to disagree with the fact that:

  • Java 5 introduced very useful annotations
  • Java EE / Spring heavily abused those annotations to replace XML
  • We now have a parallel universe type system in Java
  • This parallel universe type system is completely useless because the compiler cannot introspect it
  • Java SE 8 introduces functional programming and lots of type inference
  • Java SE 9-10 will introduce more awesome language features
  • It now becomes clear that what was configuration (XML or annotations) should have been code in the first place
  • JPA 2.1 has become the new EJB 2.0: Obsolete

As I said. Hard to disagree. Or in other words:

Code is much better at expressing algorithms than configuration

I’ve met Thorben personally on a number of occasions at conferences. This rant here wasn’t meant personally, Thorben :-) Your articles about JPA are very interesting. If you, readers of this article, are using JPA, please check out Thorben’s blog: http://www.thoughts-on-java.org.

In the meantime, I would love to nominate Thorben for the respected title “The Annotatiomaniac of the Year 2015”.

How to FlatMap a JDBC ResultSet with Java 8?


You’re not into the functional mood yet? Then the title might not resonate with you – but the article will! Trust me.

Essentially, we want this:

+------+------+------+
| col1 | col2 | col3 |
+------+------+------+
| A    | B    | C    | row 1
| D    | E    | F    | row 2
| G    | H    | I    | row 3
+------+------+------+

to be “flat mapped” into this:

+------+
| cols |
+------+
| A    |\ 
| B    | | row 1
| C    |/
| D    |\
| E    | | row 2
| F    |/
| G    |\
| H    | | row 3
| I    |/
+------+

How to do it with Java 8?

It’s easy, when you’re using jOOQ. Let’s create the database first:

CREATE TABLE t (
  col1 VARCHAR2(1),
  col2 VARCHAR2(1),
  col3 VARCHAR2(1)
);

INSERT INTO t VALUES ('A', 'B', 'C');
INSERT INTO t VALUES ('D', 'E', 'F');
INSERT INTO t VALUES ('G', 'H', 'I');

Now let’s add some jOOQ and Java 8!

List<String> list =
DSL.using(connection)
   .fetch("SELECT col1, col2, col3 FROM t")
   .stream()
   .flatMap(r -> Arrays.stream(r.into(String[].class)))
   .collect(Collectors.toList());

System.out.println(list);

… and that’s it! The output is:

[A, B, C, D, E, F, G, H, I]

(I’ve also given this solution to this Stack Overflow question)

How do you read the above? Simply like this:

List<String> list =

// Get a Result<Record>, which is essentially a List
// from the database query
DSL.using(connection)
   .fetch("SELECT col1, col2, col3 FROM t")

// Stream its records
   .stream()

// And generate a new stream of each record's String[]
// representation, "flat mapping" that again into a
// single stream
   .flatMap(r -> Arrays.stream(r.into(String[].class)))
   .collect(Collectors.toList());

Note that if you’re not using jOOQ to render and execute your query, you can still use jOOQ to transform the JDBC ResultSet into a jOOQ Result to produce the same output:

try (ResultSet rs = ...) {
    List<String> list =
    DSL.using(connection)
       .fetch(rs) // unwind the ResultSet here
       .stream()
       .flatMap(r -> Arrays.stream(r.into(String[].class)))
       .collect(Collectors.toList());

    System.out.println(list);
}

Bonus: The SQL way

The SQL way to produce the same result is trivial:

SELECT col1 FROM t UNION ALL
SELECT col2 FROM t UNION ALL
SELECT col3 FROM t
ORDER BY 1

Or, of course, if you’re using Oracle or SQL Server, you can use the magic UNPIVOT clause (the opposite of the PIVOT clause):

SELECT c
FROM t
UNPIVOT (
  c FOR col in (col1, col2, col3)
)

How to Avoid the Dreaded Dead Lock when Pessimistic Locking – And some Awesome Java 8 Usage!


Sometimes you simply cannot avoid it: Pessimistic locking via SQL. In fact, it’s an awesome tool when you want to synchronise several applications on a shared, global lock.

Some may think this is abusing the database. We think: use the tools you have if they can solve the problem at hand. For instance, the RDBMS can be the perfect implementation for a message queue.

Let’s assume you do have that pessimistic locking use-case and you do want to choose the RDBMS. Now, how to get it right? Because it is really easy to produce a deadlock. Imagine the following setup (and I’m using Oracle for this):

CREATE TABLE locks (v NUMBER(18));

INSERT INTO locks
SELECT level
FROM dual
CONNECT BY level <= 10;

This generates 10 records, which we’ll use as 10 distinct row-level locks.

Now, let’s connect to the database from two sqlplus clients:

Instance 1

SQL> SELECT *
  2  FROM locks
  3  WHERE v = 1
  4  FOR UPDATE;

         V
----------
         1

Instance 2

SQL> SELECT *
  2  FROM locks
  3  WHERE v = 2
  4  FOR UPDATE;

         V
----------
         2

We’ve now acquired two different locks from two different sessions.

And then, let’s invert things:

Instance 1

SQL> SELECT *
  2  FROM locks
  3  WHERE v = 2
  4  FOR UPDATE;

Instance 2

SQL> SELECT *
  2  FROM locks
  3  WHERE v = 1
  4  FOR UPDATE;

Both sessions are now deadlocked and, luckily, Oracle will detect this and fail one of the sessions:

ORA-00060: deadlock detected while waiting for resource

Avoiding deadlocks

This is a very explicit example where it is easy to see why it happens, and potentially, how to avoid it. A simple way to avoid deadlocks is to establish a rule that all locks will always have to be acquired in ascending order. If you know you need lock number 1 and 2, you must acquire them in that order. This way, you will still produce locking and thus contention, but at least the contention will eventually (probably) get resolved once load decreases. Here’s an example that shows what happens when you have more clients. This time, written as Java threads.

In the example, we’re using jOOλ for simpler lambda expressions (e.g. lambdas throwing checked exceptions). And of course, we’ll be abusing Java 8, heavily!

Class.forName("oracle.jdbc.OracleDriver");

// We want a collection of 4 threads and their
// associated execution counters
List<Tuple2<Thread, AtomicLong>> list =
IntStream
    .range(0, 4)

    // Let's use jOOλ here to wrap checked exceptions
    // we'll map the thread index to the actual tuple
    .mapToObj(Unchecked.intFunction(i -> {
        final Connection con = DriverManager.getConnection(
            "jdbc:oracle:thin:@localhost:1521:xe", 
            "TEST", "TEST");

        final AtomicLong counter = new AtomicLong();
        final Random rnd = new Random();

        return Tuple.tuple(

            // Each thread acquires a random number of
            // locks in ascending order
            new Thread(Unchecked.runnable(() -> {
                for (;;) {
                    String sql =
                      " SELECT *"
                    + " FROM locks"
                    + " WHERE v BETWEEN ? AND ?"
                    + " ORDER BY v"
                    + " FOR UPDATE";

                    try (PreparedStatement stmt = 
                             con.prepareStatement(sql)) {
                        stmt.setInt(1, rnd.nextInt(10));
                        stmt.setInt(2, rnd.nextInt(10));
                        stmt.executeQuery();

                        counter.incrementAndGet();
                        con.commit();
                    }
                }
            })),
            counter
        );
    }))
    .collect(Collectors.toList());

// Starting each thread
list.forEach(tuple -> tuple.v1.start());

// Printing execution counts
for (;;) {
    list.forEach(tuple -> {
        System.out.print(String.format(
            "%1s:%2$-10s",
            tuple.v1.getName(),
            tuple.v2.get()
        ));
    });

    System.out.println();
    Thread.sleep(1000);
}

As the program runs, you can see that it continues progressively, with each thread taking approximately the same load as the other threads:

Thread-1:0         Thread-2:0         Thread-3:0         Thread-4:0
Thread-1:941       Thread-2:966       Thread-3:978       Thread-4:979
Thread-1:2215      Thread-2:2206      Thread-3:2244      Thread-4:2253
Thread-1:3422      Thread-2:3400      Thread-3:3466      Thread-4:3418
Thread-1:4756      Thread-2:4720      Thread-3:4855      Thread-4:4847
Thread-1:6095      Thread-2:5987      Thread-3:6250      Thread-4:6173
Thread-1:7537      Thread-2:7377      Thread-3:7644      Thread-4:7503
Thread-1:9122      Thread-2:8884      Thread-3:9176      Thread-4:9155

Now, for the sake of the argument, let’s do the forbidden thing and ORDER BY DBMS_RANDOM.VALUE:

String sql =
  " SELECT *"
+ " FROM locks"
+ " WHERE v BETWEEN ? AND ?"
+ " ORDER BY DBMS_RANDOM.VALUE"
+ " FOR UPDATE";

It won’t take long and your application explodes:

Thread-1:0         Thread-2:0         Thread-3:0         Thread-4:0         
Thread-1:72        Thread-2:79        Thread-3:79        Thread-4:90        
Thread-1:72        Thread-2:79        Thread-3:79        Thread-4:90        
Thread-1:72        Thread-2:79        Thread-3:79        Thread-4:90        
Exception in thread "Thread-3" org.jooq.lambda.UncheckedException: 
java.sql.SQLException: ORA-00060: deadlock detected while waiting for resource

Thread-1:72        Thread-2:79        Thread-3:79        Thread-4:93        
Thread-1:72        Thread-2:79        Thread-3:79        Thread-4:93        
Thread-1:72        Thread-2:79        Thread-3:79        Thread-4:93        
Exception in thread "Thread-1" org.jooq.lambda.UncheckedException: 
java.sql.SQLException: ORA-00060: deadlock detected while waiting for resource

Thread-1:72        Thread-2:1268      Thread-3:79        Thread-4:1330      
Thread-1:72        Thread-2:3332      Thread-3:79        Thread-4:3455      
Thread-1:72        Thread-2:5691      Thread-3:79        Thread-4:5841      
Thread-1:72        Thread-2:8663      Thread-3:79        Thread-4:8811      
Thread-1:72        Thread-2:11307     Thread-3:79        Thread-4:11426     
Thread-1:72        Thread-2:12231     Thread-3:79        Thread-4:12348     
Thread-1:72        Thread-2:12231     Thread-3:79        Thread-4:12348     
Thread-1:72        Thread-2:12231     Thread-3:79        Thread-4:12348     
Exception in thread "Thread-4" org.jooq.lambda.UncheckedException: 
java.sql.SQLException: ORA-00060: deadlock detected while waiting for resource

Thread-1:72        Thread-2:13888     Thread-3:79        Thread-4:12348     
Thread-1:72        Thread-2:17037     Thread-3:79        Thread-4:12348     
Thread-1:72        Thread-2:20234     Thread-3:79        Thread-4:12348     
Thread-1:72        Thread-2:23495     Thread-3:79        Thread-4:12348     

And in the end, all but one of your threads have been killed (at least in our example) because of deadlock exceptions.

Beware of execution plans

The above example worked because, in this case, the execution plan applied locking AFTER ordering, as can be seen here:

SQL_ID  bcyyxqyubp4v8, child number 0
-------------------------------------
SELECT * FROM locks WHERE v BETWEEN :v1 AND :v2 ORDER BY v FOR UPDATE
 
Plan hash value: 2944215640
 
--------------------------------------
| Id  | Operation            | Name  |
--------------------------------------
|   0 | SELECT STATEMENT     |       |
|   1 |  FOR UPDATE          |       |
|   2 |   SORT ORDER BY      |       | <-- happens before FOR UPDATE
|*  3 |    FILTER            |       |
|*  4 |     TABLE ACCESS FULL| LOCKS |
--------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   3 - filter(TO_NUMBER(:V1)<=TO_NUMBER(:V2))
   4 - filter(("V"=TO_NUMBER(:V1)))

(see this article to learn how to get Oracle execution plans like the above)

You should obviously not rely on this in a more real world scenario.

Beware of contention, though

The above examples have also been impressive in displaying the other negative side-effect of pessimistic locking (or locking in general): contention. The single thread that continued executing in the “bad example” was almost as fast as the four threads before. Our silly example, where we used random lock ranges, meant that on average almost every attempt to acquire locks blocked at least briefly. How can you figure this out? By looking out for enq: TX – row lock contention events in your sessions. For instance:

SELECT blocking_session, event
FROM v$session
WHERE username = 'TEST'

Here, the above query returns a catastrophic result:

BLOCKING_SESSION   EVENT
-------------------------------------
48                 enq: TX - row lock contention
54                 enq: TX - row lock contention
11                 enq: TX - row lock contention
11                 enq: TX - row lock contention

Conclusion

The conclusion can only be: Use pessimistic locking sparingly and always expect the unexpected. When doing pessimistic locking, both deadlocks and heavy contention are quite possible problems that you can run into. As a general rule of thumb, follow these rules (in order):

  • Avoid pessimistic locking if you can
  • Avoid locking more than one row per session if you can
  • Avoid locking rows in random order if you can
  • Avoid going to work to see what happened

Don’t be Fooled by Generics and Backwards-Compatibility. Use Generic Generic Types


I’ve recently had a very interesting discussion with Sebastian Gruber from Ergon, a very early jOOQ customer, whom we’re in close touch with. Talking to Sebastian has led our engineering team to the conclusion that we should completely rewrite the jOOQ API. Right now, we already have lots of generics for various purposes, e.g.

  • Generics for column types, such as
    interface Field<T> { ... }
    Field<String> field = BOOK.TITLE;
    
  • Generics for table types, such as
    interface Table<R extends Record> { ... }
    Table<BookRecord> books = BOOK;
    
  • Combined generics where both <T> and <R> are used
  • … and much more

Sometimes, you just cannot anticipate how many different generic types you’ll need on your classes and interfaces two years down the line, and the problem with Java is: You can generify your classes only exactly once. Let’s assume that you’ve always had a type like this:

class Foo {}

Now you happen to know that you need two generic type parameters right now:

// Still compatible
class Foo<Bar, Baz> {}

That’ll work and all the existing client code will still compile, with a rawtype warning. But once you’ve published Foo<Bar, Baz>, you can no longer add more type variables to it, or remove them. Every modification will break client code!

// Breaking change
class Foo<Bar, Baz, Fizz> {}

The solution: Generic generic types

We don’t want to place that burden upon our customers, the heavy burden of backwards-incompatibility. This is why we’re now publishing our next release of jOOQ with a new feature that we call generic generic types. How does it work? It’s easy. We’ve learned from the best database designers who have already been using generic column types all along. In SQL, if you run into this kind of problem, you’d simply write:

CREATE TABLE foo (
    bar int,
    baz int,
    fizz int,

    generic_1 varchar(4000),
    generic_2 varchar(4000),
    generic_3 varchar(4000),
    generic_4 varchar(4000),
    -- [...]
);

Now your SQL schema is safe for ages to come. We’ll do the same in Java:

class Foo<
    Bar, 
    Baz, 
    Fizz,

    Generic1,
    Generic2,
    Generic3,
    Generic4,
    // [...]
> {}

We’ll thus generify all our types to have exactly 256 generic type parameters. 256 was the sensible limit that MS Access chose for the number of possible columns. That way, our customers will only have to upgrade to the new version of jOOQ once and from then on, generic type backwards-compatibility will be guaranteed forever.

Happy coding!

Hack up a Simple JDBC ResultSet Cache Using jOOQ’s MockDataProvider


Some queries shouldn’t hit the database all the time. When you query for master data (such as system settings, languages, translations, etc.), for instance, you may want to avoid sending the same silly query (and the results) over the wire all the time. For example:

SELECT * FROM languages

Most databases maintain buffer caches to accelerate these queries, so you don’t always hit the disk. Some databases maintain result set caches per cursor, or their JDBC drivers might even implement result set caches directly in the driver – a little-known feature in Oracle, for instance:

SELECT /*+ RESULT_CACHE */ * FROM languages

But you may not be using Oracle, and because patching JDBC is a pain, you might have resorted to implementing the cache one or two layers up in the data access or service layer:

class LanguageService {
    private Cache cache;

    List<Language> getLanguages() {
        List<Language> result = cache.get();

        if (result == null) {
            result = doGetLanguages();
            cache.put(result);
        }

        return result;
    }
}

Doing it in the JDBC layer, instead

While this might work fine on a per-service-and-method level, it might quickly become tedious when you query only parts of those results. E.g. what happens when you add an additional filter? Should you cache that query as well? Should you perform the filter on the cache, or hit the database at least once per filter?

class LanguageService {
    private Cache cache;

    List<Language> getLanguages() { ... }
    List<Language> getLanguages(Country country) {
        // Another cache?
        // Query the cache only and delegate to
        //     getLanguages()?
        // Or don't cache this at all?
    }
}

Wouldn’t it be nice if we had a cache of the form:

Map<String, ResultSet> cache;

… which caches re-usable JDBC ResultSets (or better: jOOQ Results) and returns the same results every time an identical query string is encountered.

Use jOOQ’s MockDataProvider for this

jOOQ ships with a MockConnection, which implements the JDBC Connection API for you, mocking all other objects, such as PreparedStatement, ResultSet, etc. We’ve already introduced this useful tool for unit testing in a previous blog post.

But you can also “mock” your connection in order to implement a cache! Consider the following, very simple MockDataProvider:

class ResultCache implements MockDataProvider {
    final Map<String, Result<?>> cache = 
        new ConcurrentHashMap<>();
    final Connection connection;

    ResultCache(Connection connection) {
        this.connection = connection;
    }

    @Override
    public MockResult[] execute(MockExecuteContext ctx)
    throws SQLException {
        Result<?> result;

        // Add more sophisticated caching criteria
        if (ctx.sql().contains("from language")) {

            // We're using this very useful new Java 8
            // API for atomic cache value calculation
            result = cache.computeIfAbsent(
                ctx.sql(),
                sql -> DSL.using(connection).fetch(
                    ctx.sql(),
                    ctx.bindings()
                )
            );
        }

        // All other queries go to the database
        else {
            result = DSL.using(connection).fetch(
                ctx.sql(), 
                ctx.bindings()
            );
        }

        return new MockResult[] { 
            new MockResult(result.size(), result)
        };
    }
}

Obviously, this is a very simplistic example. A real cache would involve invalidation (time-based, update-based, etc.) as well as more selective caching criteria than just matching on from language.
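
For illustration, hypothetical time-based invalidation could be layered on the same idea like this (all names here, such as timedCache and ttlMillis, are ours, not jOOQ API):

// Entries older than the TTL are re-fetched instead of served.
// Needs java.util.*, java.util.concurrent.ConcurrentHashMap,
// java.util.function.Supplier and org.jooq.Result imports.
final Map<String, Map.Entry<Long, Result<?>>> timedCache =
    new ConcurrentHashMap<>();
final long ttlMillis = 60_000;

Result<?> cachedResult(String sql, Supplier<Result<?>> fetchFromDb) {
    return timedCache.compute(sql, (key, entry) ->
        entry != null
            && System.currentTimeMillis() - entry.getKey() < ttlMillis
            ? entry
            : new AbstractMap.SimpleEntry<>(
                  System.currentTimeMillis(), fetchFromDb.get())
    ).getValue();
}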

But the fact is that using the above ResultCache, we can now wrap all JDBC connections and prevent hitting the database more than once for all queries that query from the language table! An example using jOOQ API:

DSLContext normal = DSL.using(connection);
DSLContext cached = DSL.using(
    new MockConnection(new ResultCache(connection))
);

// This executes a select count(*) from language query
assertEquals(4, cached.fetchCount(LANGUAGE));
assertEquals(4, normal.fetchCount(LANGUAGE));

// Let's add another language (using normal config):
LanguageRecord lang = normal.newRecord(LANGUAGE);
lang.setName("German");
lang.store();

// Checking again on the language table:
assertEquals(4, cached.fetchCount(LANGUAGE));
assertEquals(5, normal.fetchCount(LANGUAGE));

The cache works like a charm! Note that the current cache implementation is merely SQL string based (as it should be). If you modify the SQL string even only slightly, you’ll experience another cache miss and the query goes back to the database:

// This query is not the same as the cached one, it
// fetches two count(*) expressions. Thus we go back
// to the database and get the latest result.
assertEquals(5, (int) cached
    .select(
        count(),
        count())
    .from(LANGUAGE)
    .fetchOne()
    .value1());

// This still has the "stale" previous result
assertEquals(4, cached.fetchCount(LANGUAGE));

Conclusion

Caching is hard. Very hard. Apart from concurrency, naming things and off-by-one errors, it’s one of the three hardest problems in software.

This article doesn’t recommend implementing a cache at the JDBC level. You may or may not make that decision yourself. But when you do, you can see how easy it is to implement such a cache using jOOQ.

jOOQ is the best way to write SQL in Java

And the best thing is that you don’t have to use jOOQ in all of your application. You can use it just for this particular use-case (and for mocking JDBC), and continue using JDBC, MyBatis, Hibernate, etc., as long as you patch the other frameworks’ JDBC Connections with the jOOQ MockConnection.

jOOQ vs. Hibernate: When to Choose Which


Hibernate has become a de-facto standard in the Java ecosystem, and after the fact, also an actual JavaEE standard implementation if standards matter to you, and if you put the JCP on the same level with ISO, ANSI, IEEE, etc.

This article is not intended to discuss standards, but visions. Hibernate shares JPA’s vision of ORM. jOOQ shares SQL’s vision of powerful querying, so for the sake of the argument, let’s use Hibernate / JPA / ORM interchangeably, much like jOOQ / JDBC / SQL.

The question of why anyone should not use Hibernate these days shows up frequently – precisely because Hibernate is a de-facto standard, and the first framework choice in many other frameworks such as Grails (which uses GORM, which again uses Hibernate).

However, even Gavin King, the creator of Hibernate, doesn’t believe that Hibernate should be used for everything.

If that’s the case, are there any objective criteria that could help you decide when to use an ORM and when to use SQL?

Discussing on a high level

First off, let’s bring this discussion to a higher level. Instead of deciding between Hibernate and jOOQ as concrete implementations of their own domains, let’s think about ORM vs. SQL, and their different use-cases.

When deciding between an ORM (e.g. Hibernate) and SQL (e.g. jOOQ), the driving question that you should ask yourself is not the question of project complexity. Some of our most demanding customers are using jOOQ on medium-sized schemas with thousands of tables / views. Often, those schemas are extremely normalised and sometimes even deployed on as many as six different RDBMS. jOOQ was specifically designed to work in these scenarios, while keeping the simple use-case in mind as well.

So, instead of thinking about project complexity, ask yourself the following questions:

  • 1. Will your data model drive your application design, or will your application design drive your data model(s)?

    A main aspect here is the question whether you “care” about your database in the sense of whether it might survive your application. Very often, applications come and go. They may be re-written in Python / JavaScript, etc. 5 years down the line. Or you have multiple applications accessing the same database: Your Java application, some Perl scripts, stored procedures, etc. If this is the case, database design is a priority in your project, and jOOQ works extremely well in these setups.

    If you don’t necessarily “care” about your database in the sense that you just want to “persist” your Java domain somewhere, and this happens to be a relational database, then Hibernate might be a better choice – at least in early stages of your project, because you can easily generate your database schema from your Entity model.

  • 2. Will you do mostly complex reading and simple writing, or will you engage in complex writing?

    SQL really shines when reading is complex. When you join many tables, when you aggregate data in your database, when you do reporting, when you do bulk reading and writing. You think of your data in terms of set theory, i.e. your data as a whole. Writing CRUD with SQL is boring, though. This is why jOOQ also provides you with an ActiveRecord-style API that handles the boring parts when you’re operating on single tables (Jason mentioned this).

    If, however, your writing becomes complex, i.e. you have to load a complex object graph with 20 entities involved into memory, perform optimistic locking on it, modify it in many different ways and then persist it again in one go, then SQL / jOOQ will not help you. This is what Hibernate has originally been created for.

Opinion

I believe that data is forever. You should *always* assume that your database survives your application. It is much easier to rewrite (parts of) an application than to migrate a database. Having a clean and well-designed database schema will always pay off down the line, especially in a complex project. See also our previous article about the fallacy of “schemaless” databases.

Also, most projects really do 90% reading and 10% writing, and the writing is often not complex (2-3 tables modified within a transaction). This means that most of the time, the complexity addressed by Hibernate / JPA’s first and second level caches is not needed. People often misunderstand these features and simply turn off caching, flushing Hibernate’s cache to the server all the time, and thus using Hibernate the wrong way.

If, however, you’re undecided about the above two axes of decision, you can go the middle way and use jOOQ only for reporting, batch processing, etc., and Hibernate for your CRUD – in a CQRS (Command Query Responsibility Segregation: http://martinfowler.com/bliki/CQRS.html) style, as sketched below. Quite a few jOOQ users have chosen this path.
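
A minimal sketch of such a split, assuming constructor-wired dependencies; the service itself and the generated AUTHOR / BOOK classes are illustrative, not part of any library mentioned here:

import javax.persistence.EntityManager;
import org.jooq.DSLContext;
import org.jooq.Record2;
import org.jooq.Result;
import static org.jooq.impl.DSL.count;
import static com.example.generated.Tables.AUTHOR; // hypothetical generated code
import static com.example.generated.Tables.BOOK;   // hypothetical generated code

public class BookService {
    private final EntityManager em;   // Hibernate: complex writing
    private final DSLContext create;  // jOOQ: complex reading

    public BookService(EntityManager em, DSLContext create) {
        this.em = em;
        this.create = create;
    }

    // Command side: persist a managed entity graph via Hibernate
    public void save(Object entity) {
        em.persist(entity);
    }

    // Query side: a reporting query via jOOQ
    public Result<Record2<String, Integer>> booksPerAuthor() {
        return create.select(AUTHOR.LAST_NAME, count())
                     .from(AUTHOR)
                     .join(BOOK).on(BOOK.AUTHOR_ID.eq(AUTHOR.ID))
                     .groupBy(AUTHOR.LAST_NAME)
                     .fetch();
    }
}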


Integrating jOOQ with Grails Featuring the UWS-jOOQ Plugin


Introduction

Grails is a web framework aimed at boosting development productivity. One of its main features is domain-centric database schema generation: applications built with Grails are able to update the existing schema just before they start, using built-in domain mappers or, in more advanced cases, migrations. The goal of the UWS-jOOQ Grails plugin is to integrate jOOQ into the existing Grails lifecycle in order to leverage the features of jOOQ without compromising the ones provided by Grails.

This article is part of a series brought to you by the Germany-based jOOQ integration partner UWS Software Service (UWS). UWS specialises in custom software development, application modernisation and outsourcing, with a distinct focus on the Java Enterprise ecosystem.

Why should I use jOOQ with Grails?

In enterprise applications we often face issues with Hibernate performance, lack of support for some statements, or just too much hassle caused by the Hibernate model. Hibernate’s query language HQL is often not sophisticated enough to cope with some requirements, which forces us to use plain SQL. That is not bad in itself and helps solve specific business problems. However, in big projects, where a larger group of people is involved and a product continuously evolves, type-safety is very precious, and it is thrown away when using plain SQL. That’s where the jOOQ framework excels and the UWS-jOOQ Grails plugin comes into play.

How can I Integrate jOOQ into Grails?

We tried to provide a simple integration of jOOQ into Grails using Grails’ built-in dependency resolution. Just add the following line to the plugins section of your BuildConfig.groovy:

compile ':uws-jooq:0.1.1'

Add plugin configuration to your Config.groovy:

jooq.dataSource = ['dataSource']
jooq.xmlConfigDir = "src/resources/jooq/"
jooq.generatedClassOutputDirectory = 'src/java'
jooq.generatedClassPackageName.dataSource = 'ie.uws.example'

As shown in this example, the plugin allows you to configure the datasources to be used as well as some of the key paths. You may also want to take a look at a customised version of the Config.groovy in our sample integration project.

Next, jOOQ needs an XML configuration file, which can be generated by the plugin using the following command. The plugin will use your existing datasources and their configuration to generate the jOOQ configuration:

grails jooq-generate-config
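
The generated file is a standard jOOQ code generator configuration. Its exact content depends on your datasource; for an H2 datasource it is roughly of this shape (driver, URL and credentials below are placeholders):

<configuration>
  <jdbc>
    <driver>org.h2.Driver</driver>
    <url>jdbc:h2:mem:example</url>
    <user>sa</user>
    <password></password>
  </jdbc>
  <generator>
    <database>
      <name>org.jooq.util.h2.H2Database</name>
      <inputSchema>PUBLIC</inputSchema>
      <includes>.*</includes>
    </database>
    <target>
      <!-- matches the Config.groovy settings above -->
      <packageName>ie.uws.example</packageName>
      <directory>src/java</directory>
    </target>
  </generator>
</configuration>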

Now all the configuration is ready and it is time to get to one of the main features of jOOQ: type-safe SQL. With the following command, jOOQ will generate the Java classes that you will use when writing SQL, as they give you compiler-based autocompletion:

grails jooq-init

Now that everything is in place, let’s say you would like to insert a new record into your database via jOOQ in one of your controllers. It’s as simple as this:

class ExampleController {
  JooqService jooqService // injected by Grails

  def insert() {
    DSLContext dsl = jooqService.dataSource // DSL context for the default datasource
    BookRecord record = dsl.newRecord(Tables.BOOK)
    record.author = "John"
    record.name = "Uws"
    record.version = 1
    record.store() // persists the record with an INSERT
  }
}

How does the integration of jOOQ with Grails work under the hood?

In the example above you noticed the JooqService, which is dependency-injected by Grails. The JooqService is your entry point when it comes to using jOOQ within Grails, as it picks up your datasource and provides the jOOQ DSL context for you. When you have multiple datasources, it also allows you to select a different one just by appending its name:

DSLContext dsl = jooqService.dataSource_custom

Note that autocompletion won’t tell you about the existence of a dataSource_custom field, but JooqService will resolve it for you at runtime.
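
For such a setup the plugin configuration presumably needs one entry per datasource. The following Config.groovy sketch is an assumption extrapolated from the configuration keys shown above, not taken from the plugin’s documentation:

// assumed multi-datasource configuration (hypothetical)
jooq.dataSource = ['dataSource', 'dataSource_custom']
jooq.generatedClassPackageName.dataSource = 'ie.uws.example'
jooq.generatedClassPackageName.dataSource_custom = 'ie.uws.example.custom'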

In version 0.1 we added the JooqService, but the DSLContext has to be generated based on your database schema. It is therefore important to execute the jooq-init command every time you change your Grails domain model, since this command compiles your code and executes all migrations so that the latest Java classes are generated from the latest database schema. Thanks to this approach it is even possible to generate structures from an in-memory H2 database, which would no longer be available once the application is shut down.

Best-practices for using jOOQ with Grails

Integrate legacy databases

You might face the situation where you have to connect to a legacy database using the Grails framework. It’s doable, for sure, but you have to create the right Hibernate mapping first or – with some luck – let Hibernate generate the right one for you. With this plugin you can just let jOOQ generate its Java classes and you’re ready to communicate with the database using a fully type-safe DSL.

Let a database schema change break your code

It’s one of the most precious gifts when you know that something will break early. When using jOOQ in Grails, this happens at compile time. When you execute jooq-init, your application is compiled and the Java classes are regenerated against the latest database schema. If the generated classes change, you will be notified that your code no longer compiles. You can then fix your SQL statements and ensure that your application won’t break at runtime.
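
As an example (the generated Tables.BOOK class and its NAME field are hypothetical), consider this query against the generated classes:

import org.jooq.Record1;
import org.jooq.Result;

// ...

// Compiles today, because the generated classes still contain NAME:
Result<Record1<String>> names = dsl.select(Tables.BOOK.NAME)
                                   .from(Tables.BOOK)
                                   .fetch();
// After renaming the column in the domain model and re-running
// jooq-init, Tables.BOOK.NAME no longer exists, so this line fails
// at compile time instead of blowing up at runtime.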

Keep generated classes in your version control system

We recommend checking jOOQ’s generated classes into your VCS along with the rest of your application’s source code. When you use jOOQ classes to communicate with the database, those classes must already exist for your code to compile. Do you use a different approach? Please let us know!

Roadmap

We’re planning to simplify the integration even more and to attach jooq-init to the regular Grails compilation process. We’d also like to make our plugin harder to misuse (see the JooqService section above) and add the possibility to use jOOQ not only in services or controllers but also in plain Java classes.

Contribution to UWS-jOOQ Grails-Plugin

This software is distributed under the Apache License, Version 2.0. We want to keep this software free and provide services for people who integrate jOOQ and Grails. If you’re interested in this project, feel free to submit issues or pull requests to the project’s Git repository.
