
jOOQ Newsletter: jOOLY 23, 2014 – Only 8 Days Left in jOOLY


Subscribe to this newsletter here

Only 8 Days Left in jOOLY

Time is running so fast! The month of jOOLY is almost over – have you taken advantage of our limited-time promotional discount of 20%, which we’re offering on all of your purchases in July 2014? And that’s not all: you will also get a free copy of the popular e-book SQL Performance Explained by Markus Winand, a book that we believe belongs on the shelf of every SQL developer.

Act now to get 20% off your next jOOQ purchase!

Tweet of the Day

Our customers, users, and followers are sharing their love for jOOQ with the world and we can hardly catch up with them! Here are:

Álvaro Hernández Tortosa, who has the final word on frameworks that hide SQL, because SQL is really powerful

Calvin Thomas, whose search for the stack he direly needs has come to an end: AngularJS, Bootstrap, Play, Scala, and jOOQ. Well done!

The famous Adam Bien, who explains how to properly use jOOQ in a Java EE context.

Thanks for the shouts, guys! You make the jOOQ experience rock!

New tiered pricing model

In recent months, we have had a lot of interesting discussions about our workstation-based pricing model, and how it fits larger organisations with more fluctuation among team members.

We think of our workstation-based model as particularly fair because the price increases when more value is added – but we have heard the various concerns about simplifying the administration effort for large volumes. This is why we’re now officially offering a tiered pricing model on all subscriptions larger than 10 workstations.

If this is interesting for your organisation, please consider the updated license text containing prices (on page 17), or contact us directly.

Of course, if you act quickly, this offering can be combined with the “jOOLY” promotional discount to help you get even more value out of your next purchase!

jOOQ 3.5: Oracle AQ Support

The upcoming jOOQ 3.5 will ship with an extension to the code generator and the API that will make using Oracle AQ with jOOQ as easy as everything else!

Oracle AQ is a very powerful feature when you need to notify your database clients of data changes. A typical use-case is an update trigger that needs to invalidate a UI cache for an “expensive” value.

If you’re using Oracle AQ with JDBC directly, however, you might be put off by the complexity of binding / loading OBJECT types from CallableStatements. Not with jOOQ.

This is what an enqueue call will look like:

DBMS_AQ.enqueue(conf, QUEUE_NAME, object);

And this is what a dequeue call will look like:

MyObjectType object = DBMS_AQ.dequeue(conf, QUEUE_NAME);

Both the MyObjectType and QUEUE_NAME references are generated objects with type information associated with them. This means you can enqueue / dequeue just as if Java were the same as PL/SQL. Excited? We are!
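To illustrate, here’s a sketch of what a full roundtrip might look like with such generated code. Note that the TEXT attribute and its accessors are made up for this example; the actual members of MyObjectType depend on your OBJECT type definition:

// Hypothetical OBJECT type with a single TEXT attribute
MyObjectType message = new MyObjectType();
message.setText("Please invalidate the cache");

DBMS_AQ.enqueue(conf, QUEUE_NAME, message);

// ... and on the consuming side:
MyObjectType received = DBMS_AQ.dequeue(conf, QUEUE_NAME);
System.out.println(received.getText());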

Community Zone – The jOOQ aficionados have been active!

The jOOQ community has been very active again in the last month. We’re happy to point out these editor’s picks from our radar:

Tired of building with Maven? We’re very happy to announce Etienne Studer’s publication of a fully-functional gradle-jooq-plugin. This is a great community effort for those of you working with jOOQ and Gradle – or even Groovy in general.

Bert van Langen is a passionate DB2 DBA who has given us this excellent introduction to jOOQ on his blog. An alternative tutorial that should get new users started very quickly.

Marco Behler has published a treatise about the Java persistence ghetto (and how jOOQ might change that). We’re very glad to see that Marco is also coming to the same conclusion that we try to repeat time and again: Nothing keeps you from using JPA and SQL (e.g. in the form of jOOQ) in the same project.

Feedback zone

You’ve read to the end of this newsletter, that’s great! Did you like it? What did we do great? What can we improve? What other subjects would you like us to cover?

We’d love to hear from you, so if you want to reach out to us, just drop a message to contact@datageekery.com. Looking forward to hearing from you!

Java 8 Friday: More Functional Relational Transformation


In the past, we’ve been providing you with a new article every Friday about what’s new in Java 8. It has been a very exciting blog series, but we would like to focus more on our core content again, which is Java and SQL. We will still be blogging about Java 8 occasionally, but no longer every Friday (as some of you may have already noticed).

In this last, short post of the Java 8 Friday series, we’d like to reiterate the fact that we believe the future belongs to functional relational data transformation (as opposed to ORM). We’ve spent about 20 years now using the object-oriented software development paradigm. Many of us have been very dogmatic about it. In the last 10 years, however, a “new” paradigm has started to get increasing traction in programming communities: Functional programming.

Functional programming is not that new, however. Lisp was one of the earliest functional programming languages, and XSLT and SQL are also somewhat functional (and declarative!). As we’re big fans of SQL’s functional (and declarative!) nature, we’re quite excited about the fact that we now have sophisticated tools in Java to transform tabular data that has been extracted from SQL databases. Streams!

SQL ResultSets are very similar to Streams

As we’ve pointed out before, JDBC ResultSets and Java 8 Streams are quite similar. This is even more true when you’re using jOOQ, which replaces the JDBC ResultSet by an org.jooq.Result, which extends java.util.List, and thus automatically inherits all Streams functionality. Consider the following query that allows fetching a one-to-many relationship between BOOK and AUTHOR records:

Map<Record2<String, String>, 
    List<Record2<Integer, String>>> booksByAuthor =

// This work is performed in the database
// --------------------------------------
ctx.select(
        BOOK.ID,
        BOOK.TITLE,
        AUTHOR.FIRST_NAME,
        AUTHOR.LAST_NAME
    )
   .from(BOOK)
   .join(AUTHOR)
   .on(BOOK.AUTHOR_ID.eq(AUTHOR.ID))
   .orderBy(BOOK.ID)
   .fetch()

// This work is performed in Java memory
// -------------------------------------
   .stream()

   // Group BOOKs by AUTHOR
   .collect(groupingBy(

        // This is the grouping key      
        r -> r.into(AUTHOR.FIRST_NAME, 
                    AUTHOR.LAST_NAME),

        // This is the target data structure
        LinkedHashMap::new,

        // This is the value to be produced for each
        // group: A list of BOOK
        mapping(
            r -> r.into(BOOK.ID, BOOK.TITLE),
            toList()
        )
    ));

The fluency of the Java 8 Streams API feels very familiar to anyone who is used to writing SQL with jOOQ. Obviously, you can also use something other than jOOQ, e.g. Spring’s JdbcTemplate, or Apache Commons DbUtils, or just wrap the JDBC ResultSet in an Iterator…
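To illustrate that last option, here’s a minimal sketch wrapping a plain JDBC ResultSet in an Iterator to obtain a Stream without jOOQ. The connection URL and schema are made up for this example, the usual java.sql / java.util imports are assumed, and exception handling is reduced to a bare minimum:

try (Connection c = DriverManager.getConnection("jdbc:h2:~/test", "sa", "");
     Statement s = c.createStatement();
     ResultSet rs = s.executeQuery("SELECT id, title FROM book")) {

    // Adapt the forward-only ResultSet to the Iterator contract
    // (assuming strictly alternating hasNext() / next() calls)
    Iterator<String> it = new Iterator<String>() {
        @Override
        public boolean hasNext() {
            try {
                return rs.next();
            }
            catch (SQLException e) {
                throw new IllegalStateException(e);
            }
        }

        @Override
        public String next() {
            try {
                return rs.getInt("id") + ": " + rs.getString("title");
            }
            catch (SQLException e) {
                throw new IllegalStateException(e);
            }
        }
    };

    // ... and turn the Iterator into a sequential Stream
    StreamSupport
        .stream(Spliterators.spliteratorUnknownSize(
             it, Spliterator.ORDERED), false)
        .forEach(System.out::println);
}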

What’s very nice about this approach, compared to ORM, is the fact that there is no magic happening at all. Every piece of mapping logic is explicit and, thanks to Java generics, fully typesafe. The type of the booksByAuthor output is complex, and a bit hard to read / write in this example, but it is also fully descriptive and useful.

The same functional transformation with POJOs

If you aren’t too happy with using jOOQ’s Record2 tuple types, no problem. You can specify your own data transfer objects like so:

static class Book {
    public int id;
    public String title;

    @Override
    public String toString() { ... }

    @Override
    public int hashCode() { ... }

    @Override
    public boolean equals(Object obj) { ... }
}

static class Author {
    public String firstName;
    public String lastName;

    @Override
    public String toString() { ... }

    @Override
    public int hashCode() { ... }

    @Override
    public boolean equals(Object obj) { ... }
}

With the above DTOs, you can now leverage jOOQ’s built-in POJO mapping to transform the jOOQ records into your own domain classes:

Map<Author, List<Book>> booksByAuthor =
ctx.select(
        BOOK.ID,
        BOOK.TITLE,
        AUTHOR.FIRST_NAME,
        AUTHOR.LAST_NAME
    )
   .from(BOOK)
   .join(AUTHOR)
   .on(BOOK.AUTHOR_ID.eq(AUTHOR.ID))
   .orderBy(BOOK.ID)
   .fetch()
   .stream()
   .collect(groupingBy(

        // This is the grouping key      
        r -> r.into(Author.class),
        LinkedHashMap::new,

        // This is the grouping value list
        mapping(
            r -> r.into(Book.class),
            toList()
        )
    ));

Explicitness vs. implicitness

At Data Geekery, we believe that a new time has started for Java developers. A time where Annotatiomania™ (finally!) ends and people stop assuming all that implicit behaviour will just work through annotation magic. ORMs depend on a huge amount of specification to explain how each annotation interacts with every other annotation. It is hard to reverse-engineer (or debug!) this kind of not-so-well-understood annotation language that JPA has brought to us.

On the flip side, SQL is pretty well understood. Tables are an easy-to-handle data structure, and if you need to transform those tables into something more object-oriented, or more hierarchically structured, you can simply apply functions to those tables and group values yourself! By grouping those values explicitly, you stay in full control of your mapping, just as with jOOQ, you stay in full control of your SQL.

This is why we believe that in the next 5 years, ORMs will lose relevance, and people will start embracing explicit, stateless and magicless data transformation techniques again, using Java 8 Streams.

PostgreSQL’s Table-Valued Functions


Table-valued functions are an awesome thing. Many databases support them in one way or another and so does PostgreSQL. In PostgreSQL, (almost) everything is a table. For instance, we can write:

CREATE OR REPLACE FUNCTION 
    f_1 (v1 INTEGER, v2 OUT INTEGER)
AS $$
BEGIN
    v2 := v1;
END
$$ LANGUAGE plpgsql;

… and believe it or not, this is a table! We can write:

select * from f_1(1);

And the above will return:

+----+
| v2 |
+----+
|  1 |
+----+

It’s kind of intuitive if you think about it. We’re just pushing out a single record with a single column. If we wanted two columns, we could’ve written:

CREATE OR REPLACE FUNCTION 
    f_2 (v1 INTEGER, v2 OUT INTEGER, v3 OUT INTEGER)
AS $$
BEGIN
    v2 := v1;
    v3 := v1 + 1;
END
$$ LANGUAGE plpgsql;

… and then:

select * from f_2(1);

And the above will return:

+----+----+
| v2 | v3 |
+----+----+
|  1 |  2 |
+----+----+

That’s useful, but those are just single records. What if we wanted to produce a whole table? It’s easy, just change your functions to actually return TABLE types, instead of using OUT parameters:

CREATE OR REPLACE FUNCTION f_3 (v1 INTEGER)
RETURNS TABLE(v2 INTEGER, v3 INTEGER)
AS $$
BEGIN
    RETURN QUERY
    SELECT *
    FROM (
        VALUES(v1, v1 + 1), 
              (v1 * 2, (v1 + 1) * 2)
    ) t(a, b);
END
$$ LANGUAGE plpgsql;

When selecting from the above very useful function, we’ll get a table like so:

select * from f_3(1);

And the above will return:

+----+----+
| v2 | v3 |
+----+----+
|  1 |  2 |
|  2 |  4 |
+----+----+

And we can LATERAL join that function to other tables if we want:

select *
from book, lateral f_3(book.id)

… which might yield, for example:

+----+--------------+----+----+
| id | title        | v2 | v3 |
+----+--------------+----+----+
|  1 | 1984         |  1 |  2 |
|  1 | 1984         |  2 |  4 |
|  2 | Animal Farm  |  2 |  4 |
|  2 | Animal Farm  |  4 |  6 |
+----+--------------+----+----+

In fact, it appears that the keyword LATERAL is optional in this case, at least for PostgreSQL.
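In other words, the following query should produce the same result:

select *
from book, f_3(book.id)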

Table-valued functions are very powerful!

Discovering table-valued functions

From jOOQ’s schema reverse-engineering perspective, things might get a bit tricky as can be seen in this Stack Overflow question. PostgreSQL deals with OUT parameters in a very similar way as with TABLE return types. This can be seen in the following query against the INFORMATION_SCHEMA:

SELECT r.routine_name, r.data_type, p.parameter_name, p.data_type
FROM   information_schema.routines r
JOIN   information_schema.parameters p
USING (specific_catalog, specific_schema, specific_name);

… and the output:

routine_name | data_type | parameter_name | data_type
-------------+-----------+----------------+----------
f_1          | integer   | v1             | integer
f_1          | integer   | v2             | integer
f_2          | record    | v1             | integer
f_2          | record    | v2             | integer
f_2          | record    | v3             | integer
f_3          | record    | v1             | integer
f_3          | record    | v2             | integer
f_3          | record    | v3             | integer

As you can see, the output is really indistinguishable from that perspective. Luckily, we can also join the pg_catalog.pg_proc table, which contains the relevant flag to indicate if a function returns a set or not:

SELECT   r.routine_name, 
         r.data_type, 
         p.parameter_name, 
         p.data_type, 
         pg_p.proretset
FROM     information_schema.routines r
JOIN     information_schema.parameters p
USING   (specific_catalog, specific_schema, specific_name)
JOIN     pg_namespace pg_n
ON       r.specific_schema = pg_n.nspname
JOIN     pg_proc pg_p
ON       pg_p.pronamespace = pg_n.oid
AND      pg_p.proname = r.routine_name
ORDER BY routine_name, parameter_name;

Now, we’re getting:

routine_name | data_type | parameter_name | data_type | proretset
-------------+-----------+----------------+-----------+----------
f_1          | integer   | v1             | integer   | f
f_1          | integer   | v2             | integer   | f
f_2          | record    | v1             | integer   | f
f_2          | record    | v2             | integer   | f
f_2          | record    | v3             | integer   | f
f_3          | record    | v1             | integer   | t
f_3          | record    | v2             | integer   | t
f_3          | record    | v3             | integer   | t

We can see that f_3 is the only function actually returning a set of record, unlike f_1 and f_2, which only return a single record.

Now, remove all those parameters that are not OUT parameters, and you have your table type:

SELECT   r.routine_name, 
         p.parameter_name,
         p.data_type,
         row_number() OVER (
           PARTITION BY r.specific_name 
           ORDER BY p.ordinal_position
         ) AS ordinal_position
FROM     information_schema.routines r
JOIN     information_schema.parameters p
USING   (specific_catalog, specific_schema, specific_name)
JOIN     pg_namespace pg_n
ON       r.specific_schema = pg_n.nspname
JOIN     pg_proc pg_p
ON       pg_p.pronamespace = pg_n.oid
AND      pg_p.proname = r.routine_name
WHERE    pg_p.proretset
AND      p.parameter_mode = 'OUT'
ORDER BY routine_name, parameter_name;

Which will give us:

routine_name | parameter_name | data_type | ordinal_position
-------------+----------------+-----------+-----------------
f_3          | v2             | integer   |                1
f_3          | v3             | integer   |                2

How to run such queries in jOOQ?

Once the above code is generated, you can easily call the table-valued function in any jOOQ query. Consider again the BOOK example (in SQL):

select *
from book, lateral f_3(book.id)

… and with jOOQ:

DSL.using(configuration)
   .select()
   .from(BOOK, lateral(F_3.call(BOOK.ID)))
   .fetch();

The returned records then contain values for:

record.getValue(F_3.V2);
record.getValue(F_3.V3);

All that typesafety is only available in the upcoming jOOQ 3.5, for free! (SQL Server, Oracle, and HSQLDB table-valued functions are already supported!)

jOOQ is the best way to write SQL in Java

jOOQ Newsletter: July 2, 2014 – jOOLY 20% Discount Offering


Subscribe to this newsletter here

jOOLY 2014 20% Discount Offering

Have you been evaluating jOOQ for a while now, still hesitating to purchase licenses? Or are you an existing customer and looking into licensing more workstations for your team? This is your chance!

Get 20% off all your jOOQ purchases in the month of jOOLY 2014!

Google has their summer of code, we have our month of jOOLY, where you get 20% off all your purchases for jOOQ Professional and Enterprise Edition licenses. Just enter the “jOOLY” discount code with your next purchase and start coding awesome Java / SQL code! Click here to download and buy jOOQ now! The discount code is only valid until the end of jOOLY (July 31, 2014), so act quickly!

In fact, if you purchase jOOQ in the month of jOOLY (July), we’ll offer you a free PDF e-book copy of SQL Performance Explained with your purchase to help you get even more out of your jOOQ experience.

This discount cannot be combined with other discounts. Only the first period of a monthly or yearly subscription is discounted. Existing subscriptions are not eligible for this discount.

Tweet of the Day

Our customers, users, and followers are sharing their love for jOOQ with the world and we can hardly catch up with them! Here are:

“Henk”, who challenges the JavaEE folks to add jOOQ to the standards. It’s about time!

Roland Tepp, who sees the added value of jOOQ in “database-first” applications. That’s very true.

Chris Raastad, who … well. Who puts it bluntly and is looking forward to using the “last missing features” from .NET also in Java. We’re honoured to be compared to LINQ. Thanks Chris!

Thanks for the shouts, guys! You make the jOOQ experience rock!

Community Zone – The jOOQ aficionados have been active!

The jOOQ community has been very active again in the last month. We’re happy to point out these editor’s picks from our radar:

Speaking of LINQ, you may have heard of JINQ. JINQ now also has a jOOQ integration, allowing you to use the awesome Java 8 Streams API to query your database as if it were a Java collection. With jOOQ being the backing SQL implementation, you get all the advantages like typesafety, SQL transformation and standardisation, etc. So, let’s hear it for Dr. Ming-Yee Iu, from whom we’ll certainly hear more in the near future.

Instil Software from Belfast is hosting this great jOOQ and Flyway workshop on July 16, 2014. “Unfortunately,” it is already sold out, but we’re sure that this killer productivity combination will be interesting for numerous future workshops. If you’re missing out on it, read this blog post to get an overview of jOOQ and Flyway. If you’re hosting such a workshop yourself, let us know. We’ll be more than happy to advertise it in our events section of the jOOQ website.

If you haven’t seen it already, consider reading the ZeroTurnaround Java Tools and Technologies Landscape for 2014, a survey of 2164 Java professionals (slightly biased towards ZeroTurnaround’s RebelLabs audience). jOOQ is listed and compared to other Java / SQL integration patterns. We’re clearly catching up with well-established brands like MyBatis – so stay tuned for more community news as we keep growing.

Feedback zone

You’ve read to the end of this newsletter, that’s great! Did you like it? What did we do great? What can we improve? What other subjects would you like us to cover?

We’d love to hear from you, so if you want to reach out to us, just drop a message to contact@datageekery.com. Looking forward to hearing from you!

Flyway and jOOQ for Unbeatable SQL Development Productivity


When performing database migrations, we at Data Geekery recommend using jOOQ with Flyway – Database Migrations Made Easy. In this post, we’re going to look into a simple way to get started with the two frameworks.

Philosophy

There are a variety of ways in which jOOQ and Flyway could interact with each other in various development setups. In this tutorial we’re going to show just one variant of such framework team play, a variant that we find particularly compelling for most use cases.

The general philosophy and workflow behind the following approach can be summarised as follows:

  • 1. Database increment
  • 2. Database migration
  • 3. Code re-generation
  • 4. Development

The four steps above can be repeated time and again, every time you need to modify something in your database. More concretely, let’s consider:

  • 1. Database increment – You need a new column in your database, so you write the necessary DDL in a Flyway script
  • 2. Database migration – This Flyway script is now part of your deliverable, which you can share with all developers who can migrate their databases with it, the next time they check out your change
  • 3. Code re-generation – Once the database is migrated, you regenerate all jOOQ artefacts (see code generation), locally
  • 4. Development – You continue developing your business logic, writing code against the updated, generated database schema

0.1. Maven Project Configuration – Properties

The following properties are defined in our pom.xml, to be able to reuse them between plugin configurations:

<properties>
    <db.url>jdbc:h2:~/flyway-test</db.url>
    <db.username>sa</db.username>
</properties>

0.2. Maven Project Configuration – Dependencies

While jOOQ and Flyway could be used in standalone migration scripts, in this tutorial, we’ll be using Maven for the standard project setup. You will also find the source code of this tutorial on GitHub, and the full pom.xml file here.

These are the dependencies that we’re using in our Maven configuration:

<!-- We'll add the latest version of jOOQ 
     and our JDBC driver - in this case H2 -->
<dependency>
    <groupId>org.jooq</groupId>
    <artifactId>jooq</artifactId>
    <version>3.4.0</version>
</dependency>
<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <version>1.4.177</version>
</dependency>

<!-- For improved logging, we'll be using 
     log4j via slf4j to see what's going
     on during migration and code generation -->
<dependency>
    <groupId>log4j</groupId>
    <artifactId>log4j</artifactId>
    <version>1.2.16</version>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <version>1.7.5</version>
</dependency>

<!-- To ensure our code is working, we're
     using JUnit -->
<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.11</version>
    <scope>test</scope>
</dependency>

0.3. Maven Project Configuration – Plugins

After the dependencies, let’s simply add the Flyway and jOOQ Maven plugins like so. The Flyway plugin:

<plugin>
    <groupId>org.flywaydb</groupId>
    <artifactId>flyway-maven-plugin</artifactId>
    <version>3.0</version>

    <!-- Note that we're executing the Flyway
         plugin in the "generate-sources" phase -->
    <executions>
        <execution>
            <phase>generate-sources</phase>
            <goals>
                <goal>migrate</goal>
            </goals>
        </execution>
    </executions>

    <!-- Note that we need to prefix the db/migration
         path with filesystem: to prevent Flyway
         from looking for our migration scripts
         only on the classpath -->
    <configuration>
        <url>${db.url}</url>
        <user>${db.username}</user>
        <locations>
            <location>filesystem:src/main/resources/db/migration</location>
        </locations>
    </configuration>
</plugin>

The above Flyway Maven plugin configuration will read and execute all database migration scripts from src/main/resources/db/migration prior to compiling Java source code. While the official Flyway documentation suggests that migrations be done in the compile phase, the jOOQ code generator relies on such migrations having been done prior to code generation.

After the Flyway plugin, we’ll add the jOOQ Maven Plugin. For more details, please refer to the manual’s section about the code generation configuration.

<plugin>
    <groupId>org.jooq</groupId>
    <artifactId>jooq-codegen-maven</artifactId>
    <version>${org.jooq.version}</version>

    <!-- The jOOQ code generation plugin is also 
         executed in the generate-sources phase,
         prior to compilation -->
    <executions>
        <execution>
            <phase>generate-sources</phase>
            <goals>
                <goal>generate</goal>
            </goals>
        </execution>
    </executions>

    <!-- This is a minimal working configuration. 
         See the manual's section about the code
         generator for more details -->
    <configuration>
        <jdbc>
            <url>${db.url}</url>
            <user>${db.username}</user>
        </jdbc>
        <generator>
            <database>
                <includes>.*</includes>
                <inputSchema>FLYWAY_TEST</inputSchema>
            </database>
            <target>
                <packageName>org.jooq.example.flyway.db.h2</packageName>
                <directory>target/generated-sources/jooq-h2</directory>
            </target>
        </generator>
    </configuration>
</plugin>

This configuration will now read the FLYWAY_TEST schema and reverse-engineer it into the target/generated-sources/jooq-h2 directory, and within that, into the org.jooq.example.flyway.db.h2 package.

1. Database increments

Now, let’s start developing our database. For that, we’ll create database increment scripts, which we put into the src/main/resources/db/migration directory, as previously configured for the Flyway plugin. We’ll add these files:

  • V1__initialise_database.sql
  • V2__create_author_table.sql
  • V3__create_book_table_and_records.sql

These three scripts model our schema versions 1-3 (note the capital V!). Here are the scripts’ contents:

-- V1__initialise_database.sql
DROP SCHEMA flyway_test IF EXISTS;

CREATE SCHEMA flyway_test;

-- V2__create_author_table.sql
CREATE SEQUENCE flyway_test.s_author_id START WITH 1;

CREATE TABLE flyway_test.author (
  id INT NOT NULL,
  first_name VARCHAR(50),
  last_name VARCHAR(50) NOT NULL,
  date_of_birth DATE,
  year_of_birth INT,
  address VARCHAR(50),

  CONSTRAINT pk_t_author PRIMARY KEY (ID)
);

-- V3__create_book_table_and_records.sql
CREATE TABLE flyway_test.book (
  id INT NOT NULL,
  author_id INT NOT NULL,
  title VARCHAR(400) NOT NULL,

  CONSTRAINT pk_t_book PRIMARY KEY (id),
  CONSTRAINT fk_t_book_author_id FOREIGN KEY (author_id) REFERENCES flyway_test.author(id)
);


INSERT INTO flyway_test.author VALUES (next value for flyway_test.s_author_id, 'George', 'Orwell', '1903-06-25', 1903, null);
INSERT INTO flyway_test.author VALUES (next value for flyway_test.s_author_id, 'Paulo', 'Coelho', '1947-08-24', 1947, null);

INSERT INTO flyway_test.book VALUES (1, 1, '1984');
INSERT INTO flyway_test.book VALUES (2, 1, 'Animal Farm');
INSERT INTO flyway_test.book VALUES (3, 2, 'O Alquimista');
INSERT INTO flyway_test.book VALUES (4, 2, 'Brida');

2. Database migration and 3. Code regeneration

The above three scripts are picked up by Flyway and executed in the order of the versions. This can be seen very simply by executing:

mvn clean install

And then observing the log output from Flyway…

[INFO] --- flyway-maven-plugin:3.0:migrate (default) @ jooq-flyway-example ---
[INFO] Database: jdbc:h2:~/flyway-test (H2 1.4)
[INFO] Validated 3 migrations (execution time 00:00.004s)
[INFO] Creating Metadata table: "PUBLIC"."schema_version"
[INFO] Current version of schema "PUBLIC": <>
[INFO] Migrating schema "PUBLIC" to version 1
[INFO] Migrating schema "PUBLIC" to version 2
[INFO] Migrating schema "PUBLIC" to version 3
[INFO] Successfully applied 3 migrations to schema "PUBLIC" (execution time 00:00.073s).

… and from jOOQ on the console:

[INFO] --- jooq-codegen-maven:3.5.0-SNAPSHOT:generate (default) @ jooq-flyway-example ---
[INFO] Using this configuration:
...
[INFO] Generating schemata      : Total: 1
[INFO] Generating schema        : FlywayTest.java
[INFO] ----------------------------------------------------------
[....]
[INFO] GENERATION FINISHED!     : Total: 337.576ms, +4.299ms

4. Development

Note that all of the previous steps are executed automatically, every time someone adds new migration scripts to the Maven module. For instance, a team member might have committed a new migration script, you check it out, rebuild and get the latest jOOQ-generated sources for your own development or integration-test database.

Now that these steps are done, you can proceed with writing your database queries. Imagine the following test case:

import org.jooq.Result;
import org.jooq.impl.DSL;
import org.junit.Test;

import java.sql.Connection;
import java.sql.DriverManager;

import static java.util.Arrays.asList;
import static org.jooq.example.flyway.db.h2.Tables.*;
import static org.junit.Assert.assertEquals;

public class AfterMigrationTest {

    @Test
    public void testQueryingAfterMigration() throws Exception {
        try (Connection c = DriverManager.getConnection("jdbc:h2:~/flyway-test", "sa", "")) {
            Result<?> result =
            DSL.using(c)
               .select(
                   AUTHOR.FIRST_NAME,
                   AUTHOR.LAST_NAME,
                   BOOK.ID,
                   BOOK.TITLE
               )
               .from(AUTHOR)
               .join(BOOK)
               .on(AUTHOR.ID.eq(BOOK.AUTHOR_ID))
               .orderBy(BOOK.ID.asc())
               .fetch();

            assertEquals(4, result.size());
            assertEquals(asList(1, 2, 3, 4), result.getValues(BOOK.ID));
        }
    }
}

If you run mvn clean install again, the above integration test will now compile and pass!

Reiterate

The power of this approach becomes clear once you start performing database modifications this way. Let’s assume that the French guy on our team prefers to have things his way (no offense intended ;-) ):

-- V4__le_french.sql
ALTER TABLE flyway_test.book 
  ALTER COLUMN title RENAME TO le_titre;

They check it in, you check out the new database migration script, and run

mvn clean install

And then observe the log output:

[INFO] --- flyway-maven-plugin:3.0:migrate (default) @ jooq-flyway-example ---
[INFO] Database: jdbc:h2:~/flyway-test (H2 1.4)
[INFO] Validated 4 migrations (execution time 00:00.005s)
[INFO] Current version of schema "PUBLIC": 3
[INFO] Migrating schema "PUBLIC" to version 4
[INFO] Successfully applied 1 migration to schema "PUBLIC" (execution time 00:00.016s).

So far so good, but later on:

[ERROR] COMPILATION ERROR :
[INFO] -------------------------------------------------------------
[ERROR] C:\...\AfterMigrationTest.java:[24,19] error: cannot find symbol
[INFO] 1 error

When we go back to our Java integration test, we can immediately see that the TITLE column is still being referenced, but it no longer exists:

public class AfterMigrationTest {

    @Test
    public void testQueryingAfterMigration() throws Exception {
        try (Connection c = DriverManager.getConnection("jdbc:h2:~/flyway-test", "sa", "")) {
            Result<?> result =
            DSL.using(c)
               .select(
                   AUTHOR.FIRST_NAME,
                   AUTHOR.LAST_NAME,
                   BOOK.ID,
                   BOOK.TITLE
                   //   ^^^^^ This column no longer exists. 
                   //   We'll have to rename it to LE_TITRE
               )
               .from(AUTHOR)
               .join(BOOK)
               .on(AUTHOR.ID.eq(BOOK.AUTHOR_ID))
               .orderBy(BOOK.ID.asc())
               .fetch();

            assertEquals(4, result.size());
            assertEquals(asList(1, 2, 3, 4), result.getValues(BOOK.ID));
        }
    }
}

Conclusion

This tutorial shows how easily you can build a rock-solid development process using Flyway and jOOQ, preventing SQL-related errors very early in your development lifecycle – immediately at compile time, rather than in production!

Visit the Flyway website and the jOOQ website.

SQL Server ALTER TABLE SET DEFAULT


In most databases that support default values in their column DDL, it is also possible to alter that default afterwards. An Oracle example:

CREATE TABLE t (
  val NUMBER(7) DEFAULT 1 NOT NULL
);

-- Oops, wrong default, let us change it
ALTER TABLE t MODIFY val DEFAULT -1;

-- Now that is better

Unfortunately, this isn’t possible in SQL Server, where the DEFAULT column property is really a constraint, and probably a constraint whose name you don’t know because it was system generated.
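To make the pain concrete, this is roughly what you’d have to do by hand. The constraint name DF__t__val__1234ABCD below is a made-up placeholder for whatever name SQL Server generated on your machine:

-- Drop the DEFAULT constraint (whose name you'd first have to look
-- up in sys.default_constraints), then re-create the default
ALTER TABLE t DROP CONSTRAINT DF__t__val__1234ABCD;
ALTER TABLE t ADD DEFAULT -1 FOR val;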

But luckily, jOOQ 3.4 now supports DDL and can abstract this information away from you by generating the following Transact-SQL program:

DECLARE @constraint NVARCHAR(max);
DECLARE @command NVARCHAR(max);

SELECT @constraint = name
FROM sys.default_constraints
WHERE parent_object_id = object_id('t')
AND parent_column_id = columnproperty(
    object_id('t'), 'val', 'ColumnId');

IF @constraint IS NOT NULL
BEGIN
  SET @command = 'ALTER TABLE t DROP CONSTRAINT ' 
    + @constraint;
  EXECUTE sp_executeSQL @command

  SET @command = 'ALTER TABLE t ADD CONSTRAINT ' 
    + @constraint + ' DEFAULT -1 FOR val';
  EXECUTE sp_executeSQL @command
END
ELSE
BEGIN
  SET @command = 'ALTER TABLE t ADD DEFAULT -1 FOR val';
  EXECUTE sp_executeSQL @command
END

This program will either drop and create a new constraint with the same name, or create an entirely new constraint with a system-generated name.

With jOOQ, you can execute this statement as such:

DSL.using(configuration)
   .alterTable(T)
   .alter(T.VAL)
   .defaultValue(-1)
   .execute();

jOOQ Newsletter: June 18, 2014 – jOOQ goes JavaOne™


jOOQ Goes JavaOne™

We’re super excited to announce that our CEO and Head of R&D Lukas will be heading to San Francisco this fall to talk about jOOQ at JavaOne™! This is not just great for Data Geekery and jOOQ, but also for the whole Java / SQL ecosystem, as we believe that the Java / SQL integration deserves much more focus at conferences, where buzzwords like Big Data and NoSQL disproportionately dominate the agenda.

From our perspective, the JVM is the best platform for general purpose languages, whereas SQL is the best tool for database interaction – with Oracle SQL being a leader in the industry. So…

2014 will be a great year for Java and SQL

Prior to JavaOne™, we have also been talking at the awesome 33rd Degree and GeekOut conferences, the latter having been hosted by our friends over at ZeroTurnaround who have launched XRebel, a very promising tool to help you find rogue SQL statements in your application. Stay tuned as we’ll be trying out XRebel to compare jOOQ with Hibernate on our blog, soon.

Stay informed about 2014 events on www.jooq.org/news.

Tweet of the Day

Our customers, users, and followers are sharing their love for jOOQ with the world and we can hardly catch up with them! Here are:

Alessio Harri, who is having a great time using OpenJPA (the workhorse) with jOOQ (the artist)

The Vertabelo team over at e-point, who like using jOOQ as a database consumer for their SaaS, collaborative ERD tool.

Petri Kainulainen, who is working around performance problems by getting back in control of his SQL.

Thanks for the shouts, guys! You make the jOOQ experience rock!

Upcoming License Improvements

From our recent negotiations with site license leads, we’ve come to two conclusions that will benefit the entire jOOQ Professional and jOOQ Enterprise customer base.

  • When you buy a car or a TV, you probably don’t run to the supplier every time you encounter a small defect that prevents you from fully enjoying your product. You’ll fix it yourself. We want to do the same in the future. As we trust our customers, and as we already ship our sources, we will soon allow you to implement urgent fixes to jOOQ yourselves, as we believe that this will improve the jOOQ experience for everyone and add further value to your own experience.
  • We understand the requirements of purchasing departments in large organisations. Often, it is easier to purchase a site license from a supplier rather than going through the hassles of adapting workstation-based subscriptions all the time. To respond to this need, we’ll soon publish a discounted, tiered pricing model for large-volume purchases of our perpetual licenses.

Both of these improvements will be rolled out to our entire customer base at the beginning of July.

Do you already have any questions regarding what will change / improve? Do not hesitate to contact us.

Community Zone – The jOOQ aficionados have been active!

The jOOQ community has been very active again in the last month. We’re happy to point out these editor’s picks from our radar:

Vlad Mihalcea is a very active blogger on the subject of Hibernate integrations, transaction management and connection pooling performance. We’re looking forward to his future blog posts about how to integrate ORMs with SQL/jOOQ, e.g. by applying emerging architecture patterns such as CQRS. One of his most recent, very interesting blog posts deals precisely with that subject.

Micha Kops has been blogging about a variety of Java tool integrations and has now published this comprehensive and very useful jOOQ tutorial. It is great to see fresh opinions from people just getting to know the platform and blogging about it.

SQL Zone – More common SQL mistakes

Our popular blog series “Top 10 mistakes Java developers make when writing SQL” has been enhanced with yet another must-read article for the Java/SQL community.

Did you know about all those common mistakes? We bet you didn’t. If you haven’t seen enough, see also our previous articles.

SQL Zone – Don’t roll your own OFFSET pagination emulation

One of the great reasons why you should use jOOQ is the fact that jOOQ abstracts away all the hard parts of your SQL dialect. If you’re using Oracle (prior to 12c), SQL Server (prior to 2012), or DB2, you might need to emulate what other databases know as OFFSET pagination. While most people get the simple use-cases right, we’ve tried to outline all of the other issues that may arise when you try to do it yourself in our blog post:

A must-read for all SQL transformation aficionados.

Feedback zone

You’ve read to the end of this newsletter, that’s great! Did you like it? What did we do great? What can we improve? What other subjects would you like us to cover?

We’d love to hear from you, so if you want to reach out to us, just drop a message to contact@datageekery.com. Looking forward to hearing from you!

Conference Legal Disclaimers


After deep consideration with our lawyers, we would like to follow suit with Oracle and provide you with the following legal disclaimer about our jOOQ-related conference talks, as presented at the awesome GeekOut conference in Tallinn. Please do read them and take them seriously.

THE FOLLOWING IS COMMUNICATED TO YOU SOLELY FOR ENTERTAINMENT PURPOSES. NO ONE SANE WOULD BELIEVE A GUY WHO CLAIMS HE IS A SQL AFICIONADO OR WORSE WHO CLAIMS THAT SQL IS ANYTHING NEAR BEAUTIFUL. IF YOU STILL FIND THE FOLLOWING INTERESTING AND IF YOU BASE YOUR PURCHASING DECISIONS UPON THAT, YOU DEFINITELY NEED PROFESSIONAL HELP. WE ACTUALLY PROVIDE SUCH HELP.

Java 8 Friday: 10 Subtle Mistakes When Using the Streams API


At Data Geekery, we love Java. And as we’re really into jOOQ’s fluent API and query DSL, we’re absolutely thrilled about what Java 8 will bring to our ecosystem.

Java 8 Friday

Every Friday, we’re showing you a couple of nice new tutorial-style Java 8 features, which take advantage of lambda expressions, extension methods, and other great stuff. You’ll find the source code on GitHub.

10 Subtle Mistakes When Using the Streams API

We’ve done all the SQL mistakes lists.

But we haven’t done a top 10 mistakes list with Java 8 yet! For today’s occasion (it’s Friday the 13th), we’ll catch up with what will go wrong in YOUR application when you’re working with Java 8. (It won’t happen to us, as we’re stuck with Java 6 for another while.)

1. Accidentally reusing streams

Wanna bet this will happen to everyone at least once? Like the existing “streams” (e.g. InputStream), you can consume a stream only once. The following code won’t work:

IntStream stream = IntStream.of(1, 2);
stream.forEach(System.out::println);

// That was fun! Let's do it again!
stream.forEach(System.out::println);

You’ll get a

java.lang.IllegalStateException: 
  stream has already been operated upon or closed

So be careful when consuming your stream. It can be done only once.
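If you do need to traverse the same data twice, one option is to hold on to a way of re-creating the stream, rather than to the stream itself, e.g. via a Supplier:

Supplier<IntStream> supplier = () -> IntStream.of(1, 2);

supplier.get().forEach(System.out::println);

// That was fun! Let's do it again!
supplier.get().forEach(System.out::println);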

2. Accidentally creating “infinite” streams

You can create infinite streams quite easily without noticing. Take the following example:

// Will run indefinitely
IntStream.iterate(0, i -> i + 1)
         .forEach(System.out::println);

The whole point of streams is the fact that they can be infinite, if you design them to be. The only problem is that you might not have wanted that. So, be sure to always put proper limits:

// That's better
IntStream.iterate(0, i -> i + 1)
         .limit(10)
         .forEach(System.out::println);

3. Accidentally creating “subtle” infinite streams

We can’t say this enough. You WILL eventually create an infinite stream, accidentally. Take the following stream, for instance:

IntStream.iterate(0, i -> ( i + 1 ) % 2)
         .distinct()
         .limit(10)
         .forEach(System.out::println);

So…

  • we generate alternating 0’s and 1’s
  • then we keep only distinct values, i.e. a single 0 and a single 1
  • then we limit the stream to a size of 10
  • then we consume it

Well… the distinct() operation doesn’t know that the function supplied to the iterate() method will produce only two distinct values. It might expect more than that. So it’ll forever consume new values from the stream, and the limit(10) will never be reached. Tough luck, your application stalls.

4. Accidentally creating “subtle” parallel infinite streams

We really need to insist that you might accidentally try to consume an infinite stream. Let’s assume you believe that the distinct() operation should be performed in parallel. You might be writing this:

IntStream.iterate(0, i -> ( i + 1 ) % 2)
         .parallel()
         .distinct()
         .limit(10)
         .forEach(System.out::println);

Now, we’ve already seen that this will run forever. But previously, at least, you only consumed one CPU on your machine. Now, you’ll probably consume four of them, potentially occupying pretty much all of your system with an accidental infinite stream consumption. That’s pretty bad. You’ll probably have to hard-reboot your server / development machine after that. Have a last look at what my laptop looked like prior to exploding:

If I were a laptop, this is how I’d like to go.

5. Mixing up the order of operations

So, why did we insist that you’ll definitely, accidentally create infinite streams? It’s simple. Because you may just accidentally do it. The above stream can be perfectly well consumed if you switch the order of limit() and distinct():

IntStream.iterate(0, i -> ( i + 1 ) % 2)
         .limit(10)
         .distinct()
         .forEach(System.out::println);

This now yields:

0
1

Why? Because we first limit the infinite stream to 10 values (0 1 0 1 0 1 0 1 0 1), before we reduce the limited stream to the distinct values contained in it (0 1).

Of course, this may no longer be semantically correct, because you really wanted the first 10 distinct values from a set of data (you just happened to have “forgotten” that the data is infinite). Surely, no one really wants 10 arbitrary values, reducing them to distinct values only afterwards.

If you’re coming from a SQL background, you might not expect such differences. Take SQL Server 2012, for instance. The following two SQL statements are the same:

-- Using TOP
SELECT DISTINCT TOP 10 *
FROM i
ORDER BY ..

-- Using FETCH
SELECT *
FROM i
ORDER BY ..
OFFSET 0 ROWS
FETCH NEXT 10 ROWS ONLY

So, as a SQL person, you might not be as aware of the importance of the order of streams operations.

jOOQ, the best way to write SQL in Java

6. Mixing up the order of operations (again)

Speaking of SQL, if you’re a MySQL or PostgreSQL person, you might be used to the LIMIT .. OFFSET clause. SQL is full of subtle quirks, and this is one of them. The OFFSET clause is applied FIRST, as suggested in SQL Server 2012’s (i.e. the SQL:2008 standard’s) syntax.

If you translate MySQL / PostgreSQL’s dialect directly to streams, you’ll probably get it wrong:

IntStream.iterate(0, i -> i + 1)
         .limit(10) // LIMIT
         .skip(5)   // OFFSET
         .forEach(System.out::println);

The above yields:

5
6
7
8
9

Yes. It doesn’t continue after 9, because the limit() is now applied first, producing (0 1 2 3 4 5 6 7 8 9). skip() is applied after, reducing the stream to (5 6 7 8 9). Not what you may have intended.
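To actually get the LIMIT 10 OFFSET 5 semantics, the two operations have to be flipped, skipping first and limiting afterwards:

IntStream.iterate(0, i -> i + 1)
         .skip(5)   // OFFSET
         .limit(10) // LIMIT
         .forEach(System.out::println);

This yields (5 6 7 8 9 10 11 12 13 14), i.e. ten values starting at offset five.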

BEWARE of the LIMIT .. OFFSET vs. "OFFSET .. LIMIT" trap!

7. Walking the file system with filters

We’ve blogged about this before. What appears to be a good idea is to walk the file system using filters:

Files.walk(Paths.get("."))
     .filter(p -> !p.toFile().getName().startsWith("."))
     .forEach(System.out::println);

The above stream appears to walk only through non-hidden directories, i.e. directories whose names do not start with a dot. Unfortunately, you’ve again made mistakes #5 and #6. walk() has already produced the whole stream of subdirectories of the current directory. Lazily, though, but logically containing all sub-paths. Now, the filter will correctly filter out paths whose names start with a dot “.”. E.g. .git or .idea will not be part of the resulting stream. But these paths will be: .\.git\refs, or .\.idea\libraries. Not what you intended.

Now, don’t fix this by writing the following:

Files.walk(Paths.get("."))
     .filter(p -> !p.toString().contains(File.separator + "."))
     .forEach(System.out::println);

While that will produce the correct output, it will still do so by traversing the complete directory subtree, recursing into all subdirectories of “hidden” directories.

I guess you’ll have to resort to good old JDK 1.0 File.list() again. The good news is that FilenameFilter and FileFilter are both functional interfaces.
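For instance, File.list() operates on a single directory level only, so a lambda-based FilenameFilter can discard hidden entries before any manual recursion would descend into them. A quick sketch:

// Hidden entries are filtered out before any recursion happens
String[] visible = new File(".").list(
    (dir, name) -> !name.startsWith("."));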

8. Modifying the backing collection of a stream

While you’re iterating a List, you must not modify that same list in the iteration body. That was true before Java 8, but it might become more tricky with Java 8 streams. Consider the following list from 0..9:

// Of course, we create this list using streams:
List<Integer> list = 
IntStream.range(0, 10)
         .boxed()
         .collect(toCollection(ArrayList::new));

Now, let’s assume that we want to remove each element while consuming it:

list.stream()
    // remove(Object), not remove(int)!
    .peek(list::remove)
    .forEach(System.out::println);

Interestingly enough, this will work for some of the elements! The output you might get is this one:

0
2
4
6
8
null
null
null
null
null
java.util.ConcurrentModificationException

If we introspect the list after catching that exception, there’s a funny finding. We’ll get:

[1, 3, 5, 7, 9]

Heh, it “worked” for all the odd numbers. Is this a bug? No, it looks like a feature. If you’re delving into the JDK code, you’ll find this comment in ArrayList.ArrayListSpliterator:

/*
 * If ArrayLists were immutable, or structurally immutable (no
 * adds, removes, etc), we could implement their spliterators
 * with Arrays.spliterator. Instead we detect as much
 * interference during traversal as practical without
 * sacrificing much performance. We rely primarily on
 * modCounts. These are not guaranteed to detect concurrency
 * violations, and are sometimes overly conservative about
 * within-thread interference, but detect enough problems to
 * be worthwhile in practice. To carry this out, we (1) lazily
 * initialize fence and expectedModCount until the latest
 * point that we need to commit to the state we are checking
 * against; thus improving precision.  (This doesn't apply to
 * SubLists, that create spliterators with current non-lazy
 * values).  (2) We perform only a single
 * ConcurrentModificationException check at the end of forEach
 * (the most performance-sensitive method). When using forEach
 * (as opposed to iterators), we can normally only detect
 * interference after actions, not before. Further
 * CME-triggering checks apply to all other possible
 * violations of assumptions for example null or too-small
 * elementData array given its size(), that could only have
 * occurred due to interference.  This allows the inner loop
 * of forEach to run without any further checks, and
 * simplifies lambda-resolution. While this does entail a
 * number of checks, note that in the common case of
 * list.stream().forEach(a), no checks or other computation
 * occur anywhere other than inside forEach itself.  The other
 * less-often-used methods cannot take advantage of most of
 * these streamlinings.
 */

Now, check out what happens when we tell the stream to produce sorted() results:

list.stream()
    .sorted()
    .peek(list::remove)
    .forEach(System.out::println);

This will now produce the following, “expected” output:

0
1
2
3
4
5
6
7
8
9

And the list after stream consumption? It is empty:

[]

So, all elements are consumed, and removed correctly. The sorted() operation is a “stateful intermediate operation”, which means that subsequent operations no longer operate on the backing collection, but on an internal state. It is now “safe” to remove elements from the list!

Well… can we really? Let’s proceed with parallel(), sorted() removal:

list.stream()
    .sorted()
    .parallel()
    .peek(list::remove)
    .forEach(System.out::println);

This now yields:

7
6
2
5
8
4
1
0
9
3

And the list contains

[8]

Eek. We didn’t remove all elements!? Free beers (and jOOQ stickers) go to anyone who solves this streams puzzler!

This all appears quite random and subtle, so we can only suggest that you never actually modify a backing collection while consuming a stream. It just doesn’t work.
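If what you actually want is to remove all elements matching some condition, Java 8 has added a dedicated method for that, which doesn’t suffer from any interference issues:

// Safely removes all even numbers from the list
list.removeIf(i -> i % 2 == 0);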

9. Forgetting to actually consume the stream

What do you think the following stream does?

IntStream.rangeClosed(1, 5)
         .peek(System.out::println)
         .peek(i -> { 
              if (i == 5) 
                  throw new RuntimeException("bang");
          });

When you read this, you might think that it will print (1 2 3 4 5) and then throw an exception. But that’s not correct. It won’t do anything. The stream just sits there, never having been consumed.

As with any fluent API or DSL, you might actually forget to call the “terminal” operation. This might be particularly true when you use peek(), as peek() looks an awful lot like forEach().
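For the record, replacing the second peek() with the forEach() terminal operation makes the pipeline actually run, printing the numbers and then throwing:

IntStream.rangeClosed(1, 5)
         .peek(System.out::println)
         .forEach(i -> {
             if (i == 5)
                 throw new RuntimeException("bang");
         });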

This can happen with jOOQ just the same, when you forget to call execute() or fetch():

DSL.using(configuration)
   .update(TABLE)
   .set(TABLE.COL1, 1)
   .set(TABLE.COL2, "abc")
   .where(TABLE.ID.eq(3));

Oops. No execute().
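Again, only the terminal method call renders the statement effective:

DSL.using(configuration)
   .update(TABLE)
   .set(TABLE.COL1, 1)
   .set(TABLE.COL2, "abc")
   .where(TABLE.ID.eq(3))
   .execute(); // This time, for real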

jOOQ, the best way to write SQL in Java

Yes, the “best” way – with 1-2 caveats ;-)

10. Parallel stream deadlock

This is now a real goodie for the end!

All concurrent systems can run into deadlocks, if you don’t properly synchronise things. While finding a real-world example isn’t obvious, finding a forced example is. The following parallel() stream is guaranteed to run into a deadlock:

Object[] locks = { new Object(), new Object() };

IntStream
    .range(1, 5)
    .parallel()
    .peek(Unchecked.intConsumer(i -> {
        synchronized (locks[i % locks.length]) {
            Thread.sleep(100);

            synchronized (locks[(i + 1) % locks.length]) {
                Thread.sleep(50);
            }
        }
    }))
    .forEach(System.out::println);

Note the use of Unchecked.intConsumer(), which wraps an org.jooq.lambda.fi.util.function.CheckedIntConsumer, i.e. an IntConsumer variant that is allowed to throw checked exceptions, into a plain IntConsumer for the Streams API.

Well. Tough luck for your machine. Those threads will be blocked forever :-)

The good news is, it has never been easier to produce a schoolbook example of a deadlock in Java!

For more details, see also Brian Goetz’s answer to this question on Stack Overflow.

Conclusion

With streams and functional thinking, we’ll run into a massive amount of new, subtle bugs. Few of these bugs can be prevented, except through practice and staying focused. You have to think about how to order your operations. You have to think about whether your streams may be infinite.

Streams (and lambdas) are a very powerful tool. But a tool which we need to get the hang of, first.

Stay tuned for more exciting Java 8 articles on this blog.

Stop Trying to Emulate SQL OFFSET Pagination with Your In-House DB Framework!


I’m pretty sure you’ve gotten it wrong in numerous ways, so far. And you probably won’t get it right any time soon. So why waste your precious time on SQL tweaking, when you could be implementing business logic?

Let me explain…

It wasn’t until the recent SQL:2008 standard that what MySQL users know as LIMIT .. OFFSET was standardised into the following simple statement:

SELECT * 
FROM BOOK 
OFFSET 2 ROWS 
FETCH NEXT 1 ROWS ONLY

Yes. So many keywords.

SQL is indeed a very verbose language. Personally, we really like the conciseness of MySQL’s / PostgreSQL’s LIMIT .. OFFSET clause, which is why we chose that syntax for the jOOQ DSL API.

In SQL:

SELECT * FROM BOOK LIMIT 1 OFFSET 2

In jOOQ:

select().from(BOOK).limit(1).offset(2);

Now, when you’re a SQL framework vendor, or when you’re rolling your own, in-house SQL abstraction, you might think about standardising this neat little clause. Here are a couple of flavours from databases that natively support offset pagination:

-- MySQL, H2, HSQLDB, Postgres, and SQLite
SELECT * FROM BOOK LIMIT 1 OFFSET 2

-- CUBRID supports a MySQL variant of the 
-- LIMIT .. OFFSET clause
SELECT * FROM BOOK LIMIT 2, 1

-- Derby, SQL Server 2012, Oracle 12, SQL:2008
SELECT * FROM BOOK 
OFFSET 2 ROWS FETCH NEXT 1 ROWS ONLY

-- Ingres. Eek, almost the standard. Almost!
SELECT * FROM BOOK 
OFFSET 2 FETCH FIRST 1 ROWS ONLY

-- Firebird
SELECT * FROM BOOK ROWS 2 TO 3

-- Sybase SQL Anywhere
SELECT TOP 1 ROWS START AT 3 * FROM BOOK

-- DB2 (without OFFSET)
SELECT * FROM BOOK FETCH FIRST 1 ROWS ONLY

-- Sybase ASE, SQL Server 2008 (without OFFSET)
SELECT TOP 1 * FROM BOOK

So far, so good. These can all be handled. Some databases put offsets before limits, others put limits before offsets, and the T-SQL family puts the whole TOP clause before the SELECT list. This is easy to emulate. Now what about:

  • Oracle 11g and less
  • SQL Server 2008 and less
  • DB2 with OFFSET

(note that you can enable various alternative syntaxes in DB2)

When you google for this, you will find millions of ways to emulate OFFSET .. FETCH in those older databases. The optimal solutions always involve:

  • Using doubly-nested derived tables with ROWNUM filtering in Oracle
  • Using single-nested derived tables with ROW_NUMBER() filtering in SQL Server and DB2

So you’re emulating it.

Do you think you will get it right?

;-)

Let us go through a couple of issues that you may not have thought about.

First off, Oracle. Oracle obviously wanted to create maximum vendor lock-in, which is only exceeded by Apple’s recent introduction of Swift. This is why ROWNUM solutions perform best, even better than SQL:2003 standard window function based solutions. Don’t believe it? Read this very interesting article on Oracle offset pagination performance.

So, the optimal solution in Oracle is:

-- PostgreSQL syntax:
SELECT ID, TITLE 
FROM BOOK 
LIMIT 1 OFFSET 2

-- Oracle equivalent:
SELECT *
FROM (
  SELECT b.*, ROWNUM rn
  FROM (
    SELECT ID, TITLE
    FROM BOOK
  ) b
  WHERE ROWNUM <= 3 -- (1 + 2)
)
WHERE rn > 2

So that’s really the equivalent?

Of course not. You’re selecting an additional column, the rn column. You might just not care in most cases, but what if you wanted to make a limited subquery to be used with an IN predicate?

-- PostgreSQL syntax:
SELECT *
FROM BOOK
WHERE AUTHOR_ID IN (
  SELECT ID
  FROM AUTHOR
  LIMIT 1 OFFSET 2
)

-- Oracle equivalent:
SELECT *
FROM BOOK
WHERE AUTHOR_ID IN (
  SELECT * -- Ouch. These are two columns!
  FROM (
    SELECT b.*, ROWNUM rn
    FROM (
      SELECT ID
      FROM AUTHOR
    ) b
    WHERE ROWNUM <= 3
  )
  WHERE rn > 2
)

So, as you can see, you’ll have to do some more sophisticated SQL transformation. If you’re manually emulating LIMIT .. OFFSET, then you might just patch the ID column into the subquery:

SELECT *
FROM BOOK
WHERE AUTHOR_ID IN (
  SELECT ID -- better
  FROM (
    SELECT b.ID, ROWNUM rn -- better
    FROM (
      SELECT ID
      FROM AUTHOR
    ) b
    WHERE ROWNUM <= 3
  )
  WHERE rn > 2
)

So, that’s more like it, right? But since you’re not writing this manually every time, you’re about to start creating your own nifty in-house SQL framework covering the 2-3 use cases that you’ve encountered so far, right?

You can do it. So you’ll regex-search-replace column names automagically to produce the above.

So now, is it correct?

Of course not! Because you can have ambiguous column names in top-level SELECTs, but not in nested selects. What if you want to do this:

-- PostgreSQL syntax:
-- Perfectly valid repetition of two ID columns
SELECT BOOK.ID, AUTHOR.ID
FROM BOOK
JOIN AUTHOR
ON BOOK.AUTHOR_ID = AUTHOR.ID
LIMIT 1 OFFSET 2

-- Oracle equivalent:
SELECT *
FROM (
  SELECT b.*, ROWNUM rn
  FROM (
    -- Ouch! ORA-00918: column ambiguously defined
    SELECT BOOK.ID, AUTHOR.ID
    FROM BOOK
    JOIN AUTHOR
    ON BOOK.AUTHOR_ID = AUTHOR.ID
  ) b
  WHERE ROWNUM <= 3
)
WHERE rn > 2

Nope. And the trick of manually patching ID columns from the previous example doesn’t work, because you have multiple ID instances. And renaming the columns to random values is nasty, because the user of your home-grown in-house database framework wants to receive well-defined column names. I.e. ID and… ID.

So, the solution is to rename the columns twice. Once in each derived table:

-- Oracle equivalent:
-- Rename synthetic column names back to original
SELECT c1 ID, c2 ID
FROM (
  SELECT b.c1, b.c2, ROWNUM rn
  FROM (
    -- synthetic column names here
    SELECT BOOK.ID c1, AUTHOR.ID c2
    FROM BOOK
    JOIN AUTHOR
    ON BOOK.AUTHOR_ID = AUTHOR.ID
  ) b
  WHERE ROWNUM <= 3
)
WHERE rn > 2

But now, we’re done?

Of course not! What if you doubly nest such a query? Will you remember to rename the ID columns to synthetic names, and back, at every level? … ;-)
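
Here’s a sketch of the idea, for a hypothetical scenario in which an already-paginated derived table is paginated again. Every nesting level needs its own round of synthetic column names and its own ROWNUM alias:

-- Oracle sketch: two levels of pagination, two rounds of renaming
SELECT c1 ID, c2 ID
FROM (
  SELECT b2.c1, b2.c2, ROWNUM rn2
  FROM (
    -- the inner pagination, already renamed once
    SELECT c1, c2
    FROM (
      SELECT b1.c1, b1.c2, ROWNUM rn1
      FROM (
        SELECT BOOK.ID c1, AUTHOR.ID c2
        FROM BOOK
        JOIN AUTHOR
        ON BOOK.AUTHOR_ID = AUTHOR.ID
      ) b1
      WHERE ROWNUM <= 10 -- inner LIMIT 5 OFFSET 5
    )
    WHERE rn1 > 5
  ) b2
  WHERE ROWNUM <= 3 -- outer LIMIT 1 OFFSET 2
)
WHERE rn2 > 2

Let’s leave it here and talk about something entirely different: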

Does the same thing work for SQL Server 2008?

Of course not! In SQL Server 2008, the most popular approach is to use window functions. Namely, ROW_NUMBER(). So, let’s consider:

-- PostgreSQL syntax:
SELECT ID, TITLE 
FROM BOOK 
LIMIT 1 OFFSET 2

-- SQL Server equivalent:
SELECT b.*
FROM (
  SELECT ID, TITLE, 
    ROW_NUMBER() OVER (ORDER BY ID) rn
  FROM BOOK
) b
WHERE rn > 2 AND rn <= 3

So that’s it, right?

Of course not! ;-)

OK, we’ve already had this issue. We should not select *, because that would project too many columns whenever we use this as a subquery in an IN predicate. So let’s consider the correct solution with synthetic column names:

-- SQL Server equivalent:
SELECT b.c1 ID, b.c2 TITLE
FROM (
  SELECT ID c1, TITLE c2,
    ROW_NUMBER() OVER (ORDER BY ID) rn
  FROM BOOK
) b
WHERE rn > 2 AND rn <= 3

But now we got it, right?

Make an educated guess: Nope!

What happens, if you add an ORDER BY clause to the original query?

-- PostgreSQL syntax:
SELECT ID, TITLE 
FROM BOOK 
ORDER BY SOME_COLUMN
LIMIT 1 OFFSET 2

-- Naive SQL Server equivalent:
SELECT b.c1 ID, b.c2 TITLE
FROM (
  SELECT ID c1, TITLE c2,
    ROW_NUMBER() OVER (ORDER BY ID) rn
  FROM BOOK
  ORDER BY SOME_COLUMN
) b
WHERE rn > 2 AND rn <= 3

Now, that doesn’t work in SQL Server. Subqueries are not allowed to have an ORDER BY clause, unless they also have a TOP clause (or an OFFSET .. FETCH clause in SQL Server 2012).

OK, we can probably tweak this using TOP 100 PERCENT to make SQL Server happy.

-- Better SQL Server equivalent:
SELECT b.c1 ID, b.c2 TITLE
FROM (
  SELECT TOP 100 PERCENT
    ID c1, TITLE c2,
    ROW_NUMBER() OVER (ORDER BY ID) rn
  FROM BOOK
  ORDER BY SOME_COLUMN
) b
WHERE rn > 2 AND rn <= 3

Now, that’s correct SQL according to SQL Server, although you have no guarantee that the ordering of the derived table survives query execution. The optimiser may well reorder rows again further up in the plan.

If you wanted to order by SOME_COLUMN in the outer query, you’d have to again transform the SQL statement to add another synthetic column:

-- Better SQL Server equivalent:
SELECT b.c1 ID, b.c2 TITLE
FROM (
  SELECT TOP 100 PERCENT
    ID c1, TITLE c2,
    SOME_COLUMN c99,
    ROW_NUMBER() OVER (ORDER BY ID) rn
  FROM BOOK
) b
WHERE rn > 2 AND rn <= 3
ORDER BY b.c99

That does start getting a bit nasty. So, take a guess:

Is this the correct solution?

Of course not! What if the original query had DISTINCT in it?

-- PostgreSQL syntax:
SELECT DISTINCT AUTHOR_ID
FROM BOOK 
LIMIT 1 OFFSET 2

-- Naive SQL Server equivalent:
SELECT b.c1 AUTHOR_ID
FROM (
  SELECT DISTINCT AUTHOR_ID c1,
    ROW_NUMBER() OVER (ORDER BY AUTHOR_ID) rn
  FROM BOOK
) b
WHERE rn > 2 AND rn <= 3

Now, what happens if an author has written several books? Yes, the DISTINCT keyword should remove such duplicates, and effectively, the PostgreSQL query will correctly remove duplicates first, and then apply LIMIT and OFFSET.

However, the ROW_NUMBER() window function generates distinct row numbers before DISTINCT can remove anything, making every tuple unique. In other words, DISTINCT has no effect.
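
You can see this directly by running the derived table on its own. A quick illustration, assuming hypothetical data in which AUTHOR_ID takes the values 1, 1, 2:

-- The derived table, run standalone:
SELECT DISTINCT AUTHOR_ID,
  ROW_NUMBER() OVER (ORDER BY AUTHOR_ID) rn
FROM BOOK

-- Yields (1, 1), (1, 2), (2, 3). The rn column makes every tuple
-- unique before DISTINCT gets a chance to remove anything.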

Luckily, we can tweak this SQL again, using this neat little trick:

-- Better SQL Server equivalent:
SELECT b.c1 AUTHOR_ID
FROM (
  SELECT DISTINCT AUTHOR_ID c1,
    DENSE_RANK() OVER (ORDER BY AUTHOR_ID) rn
  FROM BOOK
) b
WHERE rn > 2 AND rn <= 3
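
With the same hypothetical data (AUTHOR_ID values 1, 1, 2), running the new derived table on its own shows why this version works:

-- The derived table, run standalone:
SELECT DISTINCT AUTHOR_ID,
  DENSE_RANK() OVER (ORDER BY AUTHOR_ID) rn
FROM BOOK

-- Yields (1, 1) and (2, 2). Equal values get equal ranks, so the
-- duplicate tuples really are duplicates, and DISTINCT removes them.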

Read more about this trick here:

SQL Trick: row_number() is to SELECT what dense_rank() is to SELECT DISTINCT.

Watch out: for this trick to work, the window function’s ORDER BY clause must contain all columns from the SELECT DISTINCT field list. Obviously, this limits the acceptable columns in that list to columns that are allowed in a window function’s ORDER BY clause (e.g. no other window functions).
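
As an illustration of that constraint, here is a sketch with a second column (LANGUAGE_ID stands in for any other column you might want to select):

-- Both DISTINCT columns must appear in the window's ORDER BY:
SELECT b.c1 AUTHOR_ID, b.c2 LANGUAGE_ID
FROM (
  SELECT DISTINCT AUTHOR_ID c1, LANGUAGE_ID c2,
    DENSE_RANK() OVER (ORDER BY AUTHOR_ID, LANGUAGE_ID) rn
  FROM BOOK
) b
WHERE rn > 2 AND rn <= 3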

We could of course try to fix that as well using common table expressions, or we could simply consider…

Yet another issue??

Yes, of course!

Do you even know what the column(s) in the window function’s ORDER BY clause should be? Have you just picked any column, at random? What if that column doesn’t have an index on it, will your window function still perform?

The answer is easy when your original SELECT statement also has an ORDER BY clause: you should probably take that one (plus all the columns from the SELECT DISTINCT clause, if applicable).
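
For instance, revisiting the earlier SOME_COLUMN example (a sketch; SOME_COLUMN stands in for whatever the user ordered by), the original ORDER BY simply moves into the window function:

-- SQL Server sketch: reuse the original ORDER BY inside the
-- window function, so row numbers follow the requested ordering
SELECT b.c1 ID, b.c2 TITLE
FROM (
  SELECT ID c1, TITLE c2,
    ROW_NUMBER() OVER (ORDER BY SOME_COLUMN) rn
  FROM BOOK
) b
WHERE rn > 2 AND rn <= 3
ORDER BY rn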

But what if you don’t have any ORDER BY clause?

Yet another trick! Use a “constant” variable:

-- Better SQL Server equivalent:
SELECT b.c1 AUTHOR_ID
FROM (
  SELECT AUTHOR_ID c1,
    ROW_NUMBER() OVER (ORDER BY @@version) rn
  FROM BOOK
) b
WHERE rn > 2 AND rn <= 3

Yes, you need to use a variable, because SQL Server does not allow constants in those ORDER BY clauses. Painful, we know.

Read more about this @@version trick here.
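
If the variable feels too hacky, another widely cited variant (an assumption on our part that it suits your SQL Server version; it works because the parser accepts a constant wrapped in a subquery) looks like this:

-- Alternative sketch: wrap the constant in a subquery
SELECT b.c1 AUTHOR_ID
FROM (
  SELECT AUTHOR_ID c1,
    ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) rn
  FROM BOOK
) b
WHERE rn > 2 AND rn <= 3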

Are we done yet!?!?

Probably not ;-) But we have covered around 99% of the common cases and edge cases. We can sleep nicely now.

Note that all of these SQL transformations are implemented in jOOQ. jOOQ is the only SQL abstraction framework that takes SQL seriously (with all its warts and caveats), standardising over all of this madness.

As mentioned in the beginning, with jOOQ, you just write:

// Don't worry about general emulation
select().from(BOOK).limit(1).offset(2);

// Don't worry about duplicate column names
// in subselects
select(BOOK.ID, AUTHOR.ID)
.from(BOOK)
.join(AUTHOR)
.on(BOOK.AUTHOR_ID.eq(AUTHOR.ID))
.limit(1).offset(2);

// Don't worry about invalid IN predicates
select()
.from(BOOK)
.where(BOOK.AUTHOR_ID.in(
    select(AUTHOR.ID)
    .from(AUTHOR)
    .limit(1).offset(2)
));

// Don't worry about the ROW_NUMBER() vs.
// DENSE_RANK() distinction
selectDistinct(BOOK.AUTHOR_ID)
    .from(BOOK).limit(1).offset(2);

With jOOQ, you can just write your Oracle SQL or Transact-SQL as if it were as awesome as PostgreSQL! … without jumping the SQL ship entirely and moving on to JPA.

jOOQ, the best way to write SQL in Java

Keyset paging

Now, of course, if you have been reading our blog, or our partner blog SQL Performance Explained, you should know by now that OFFSET pagination is often a bad choice in the first place, and that keyset pagination almost always outperforms it.
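
To give the idea in plain (PostgreSQL-style) SQL, here is a minimal keyset pagination sketch, assuming the last row of the previous page had ID = 42:

-- Keyset ("seek") pagination: instead of skipping OFFSET rows,
-- seek directly past the last key seen on the previous page
SELECT ID, TITLE
FROM BOOK
WHERE ID > 42 -- 42 is the hypothetical last ID of the previous page
ORDER BY ID
LIMIT 1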

Read about how jOOQ natively supports keyset pagination using the SEEK clause, here.
