The jOOQ Parser Ignore Comment Syntax

jOOQ’s parser can’t parse every possible SQL syntax. Try this random PostgreSQL syntax:

ALTER SYSTEM RESET ALL

And the jOOQ parser will complain:

DOMAIN, INDEX, SCHEMA, SEQUENCE, SESSION, TABLE, TYPE, or VIEW expected: [1:7] ALTER [*]SYSTEM RESET ALL

That’s perfectly fine. The goal of the jOOQ parser isn’t to understand all vendor specific syntax. The goal is to offer parser functionality for all of the syntax supported by jOOQ so far, and to be able to translate it between dialects. For a lot of syntax like the above, there isn’t an equivalent in other dialects anyway.

But if you’re using the jOOQ parser to simulate a database migration, e.g. to create a schema diff, or to generate jOOQ code using the DDLDatabase, then you may have some bits of vendor specific SQL in your scripts that you want jOOQ to ignore. For example, the above command may be in the middle of a migration script:

CREATE TABLE a (i int);
ALTER SYSTEM RESET ALL;
CREATE TABLE b (i int);

Luckily, with jOOQ, you can add special markers around the commands you wish jOOQ to ignore. For this, just enable the Settings.parseIgnoreComments flag, and you can use a special comment syntax:

CREATE TABLE a (i int);

/* [jooq ignore start] */
ALTER SYSTEM RESET ALL;
/* [jooq ignore stop] */

CREATE TABLE b (i int);

The syntax is transparent to your RDBMS, because the markers are just comments. The RDBMS will see and execute this, just like before, so your actual database migration isn’t affected:

CREATE TABLE a (i int);

/*                     */
ALTER SYSTEM RESET ALL;
/*                    */

CREATE TABLE b (i int);

But jOOQ, on the other hand, will interpret the comments a bit differently, and will see this, instead:

CREATE TABLE a (i int);

/*

                      */

CREATE TABLE b (i int);

Note, if you don’t like the [jooq ignore start] and [jooq ignore stop] tokens, you can customise them as well via Settings.parseIgnoreCommentStart and Settings.parseIgnoreCommentStop.
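
If you’re configuring this through the jOOQ API, here’s a minimal sketch of what that could look like, assuming the usual Settings fluent setters, and a migrationScript string holding the script above:

Settings settings = new Settings()
    .withParseIgnoreComments(true)

    // Optional: customise the marker tokens
    .withParseIgnoreCommentStart("[custom ignore start]")
    .withParseIgnoreCommentStop("[custom ignore stop]");

DSLContext ctx = DSL.using(connection, SQLDialect.POSTGRES, settings);

// The parser now skips everything between the markers
Queries queries = ctx.parser().parse(migrationScript);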

Like any purely text-based preprocessor syntax, this works completely transparently, even within a single statement. Assuming you’re using some vendor-specific syntax in a CREATE TABLE’s DEFAULT expression, you can just exclude that DEFAULT expression from jOOQ’s parsing:

CREATE TABLE t (
  a int 
    /* [jooq ignore start] */ 
    DEFAULT some_fancy_expression() 
    /* [jooq ignore stop] */
);

Obviously, you don’t have to format things this way; that’s just for illustration purposes on this blog. Now, again, the RDBMS will see and execute this:

CREATE TABLE t (
  a int 
    /*                     */ 
    DEFAULT some_fancy_expression() 
    /*                    */
);

Whereas jOOQ’s parser will see and execute this:

CREATE TABLE t (
  a int 
    /*

                          */
);

For more information about this topic, please refer to the jOOQ manual.

Vendor Agnostic, Dynamic Procedural Logic with jOOQ

One of the strengths of modern RDBMS is the capability to mix the powerful SQL language with procedural code.

SQL is a 4th generation programming language (4GL), and as such, extremely well suited for querying and bulk data manipulation. Its functional-declarative nature allows for it to be optimised in highly efficient ways using cost-based optimisation, but also statically as we’ve blogged before.

Sometimes, however, an imperative 3GL is better suited for a given task. That’s where stored procedures shine, or more specifically, procedural languages of RDBMS.

Among the dialects that jOOQ supports, at least these support procedures:

  • BigQuery
  • Db2
  • Exasol
  • Firebird
  • HANA
  • HSQLDB
  • Informix
  • MariaDB
  • MySQL
  • Oracle
  • PostgreSQL
  • SQL Server
  • Vertica

Others may, too, but jOOQ doesn’t support their dialects yet.

Many have implemented their own procedural languages, some following the ISO/IEC 9075-4 Persistent Stored Modules (SQL/PSM) standard, others rolling their own.

jOOQ support for procedural logic

Since jOOQ 3.12, our commercial distributions have supported anonymous blocks and the procedural statements they contain, such as the IF statement, LOOP statements, etc. Starting with jOOQ 3.15, we also support 3 types of statements to manage storing procedural logic in the catalog:

  • The CREATE FUNCTION statement
  • The CREATE PROCEDURE statement
  • The CREATE TRIGGER statement

Using these statements via jOOQ may not be your every day use-case. You may prefer managing that logic via the native syntax, which is still more powerful than what jOOQ 3.15 supports (especially when you’re using Oracle’s PL/SQL), in which case you’ll use jOOQ purely to call your procedure from Java in the usual type safe manner.

But maybe, you have one of these use-cases?

  • You’re a product vendor, and you profit from procedural logic being vendor agnostic in order to support several of your clients’ RDBMS
  • Your procedural logic is dynamic, just like your SQL logic (and what other than jOOQ to use for that?)
  • You don’t have the necessary privileges to create procedures, functions, or triggers in your schema

In all of those cases, jOOQ is here to help you.

How does it work?

The first building block is the anonymous block, which isn’t supported by all of the above dialects, regrettably. jOOQ can emulate it on MySQL as discussed here, but not currently in other dialects.

Here’s a simple, empty anonymous block:

-- Db2
BEGIN
END

-- Firebird
EXECUTE BLOCK AS
BEGIN
END

-- MariaDB
BEGIN NOT ATOMIC
END;

-- Oracle
BEGIN
  NULL;
END;

-- PostgreSQL
DO $$
BEGIN
  NULL;
END;
$$

It doesn’t really do much, but you can try executing it as follows, with jOOQ:

ctx.begin().execute();
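
In case you’re wondering where ctx comes from: it’s an ordinary DSLContext. A minimal sketch, assuming a plain JDBC java.sql.Connection:

// Any configured DSLContext will do
DSLContext ctx = DSL.using(connection, SQLDialect.POSTGRES);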

Now, let’s do something more interesting, such as:

// Assuming the usual static imports:
import static org.jooq.impl.DSL.*;
import static org.jooq.impl.SQLDataType.*;

// Then write
Variable<Integer> i = variable(unquotedName("i"), INTEGER);
Table<?> t = table(unquotedName("t"));
Field<Integer> c = field(unquotedName("c"), INTEGER);

ctx.begin(
    declare(i).set(1),

    while_(i.le(10)).loop(
        insertInto(t).columns(c).values(i),
        i.set(i.plus(1))
    )
).execute();

The above block executes:

-- Db2
BEGIN
  DECLARE i integer;
  SET i = 1;
  WHILE i <= 10 DO
    INSERT INTO t (c)
    VALUES (i);
    SET i = (i + 1);
  END WHILE;
END

-- FIREBIRD
EXECUTE BLOCK AS
  DECLARE i integer;
BEGIN
  :i = 1;
  WHILE (:i <= 10) DO BEGIN
    INSERT INTO t (c)
    VALUES (:i);
    :i = (:i + 1);
  END
END

-- MariaDB
BEGIN NOT ATOMIC
  DECLARE i int;
  SET i = 1;
  WHILE i <= 10 DO
    INSERT INTO t (c)
    VALUES (i);
    SET i = (i + 1);
  END WHILE;
END;

-- Oracle
DECLARE
  i number(10);
BEGIN
  i := 1;
  WHILE i <= 10 LOOP
    INSERT INTO t (c)
    VALUES (i);
    i := (i + 1);
  END LOOP;
END;

-- PostgreSQL
DO $$
DECLARE
  i int;
BEGIN
  i := 1;
  WHILE i <= 10 LOOP
    INSERT INTO t (c)
    VALUES (i);
    i := (i + 1);
  END LOOP;
END;
$$

-- SQL Server
BEGIN
  DECLARE @i int;
  SET @i = 1;
  WHILE @i <= 10 BEGIN
    INSERT INTO t (c)
    VALUES (@i);
    SET @i = (@i + 1);
  END;
END;

Easy as pie. Perhaps you prefer a FOR loop, instead? Try this:

ctx.begin(
    for_(i).in(1, 10).loop(
        insertInto(t).columns(c).values(i)
    )
).execute();

It produces the necessary emulations, if required, because regrettably, not all dialects support FOR:

-- Db2
BEGIN
  DECLARE i integer;
  SET i = 1;
  WHILE i <= 10 DO
    INSERT INTO t (c)
    VALUES (i);
    SET i = (i + 1);
  END WHILE;
END

-- Firebird
EXECUTE BLOCK AS
  DECLARE i integer;
BEGIN
  :i = 1;
  WHILE (:i <= 10) DO BEGIN
    INSERT INTO t (c)
    VALUES (:i);
    :i = (:i + 1);
  END
END

-- MariaDB
BEGIN NOT ATOMIC
  FOR i IN 1 .. 10 DO
    INSERT INTO t (c)
    VALUES (i);
  END FOR;
END;

-- Oracle
BEGIN
  FOR i IN 1 .. 10 LOOP
    INSERT INTO t (c)
    VALUES (i);
  END LOOP;
END;

-- PostgreSQL
DO $$
BEGIN
  FOR i IN 1 .. 10 LOOP
    INSERT INTO t (c)
    VALUES (i);
  END LOOP;
END;
$$

-- SQL Server
BEGIN
  DECLARE @i int;
  BEGIN
    SET @i = 1;
    WHILE @i <= 10 BEGIN
      INSERT INTO t (c)
      VALUES (@i);
      SET @i = (@i + 1);
    END;
  END;
END;

SQL vs procedures

Of course, this particular SQL statement would be better implemented using a single bulk insertion statement, purely with SQL, not with procedural logic:

ctx.insertInto(t, c)
   .select(selectFrom(generateSeries(1, 10)))
   .execute();

Which translates to:

-- Db2
INSERT INTO t (c)
SELECT generate_series.generate_series
FROM (
  WITH
    generate_series(generate_series) AS (
      SELECT 1
      FROM SYSIBM.DUAL
      UNION ALL
      SELECT (generate_series + 1)
      FROM generate_series
      WHERE generate_series < 10
    )
  SELECT generate_series
  FROM generate_series
) generate_series;

-- Firebird
INSERT INTO t (c)
SELECT generate_series.generate_series
FROM (
  WITH RECURSIVE
    generate_series(generate_series) AS (
      SELECT 1
      FROM RDB$DATABASE
      UNION ALL
      SELECT (generate_series + 1)
      FROM generate_series
      WHERE generate_series < 10
    )
  SELECT generate_series
  FROM generate_series
) generate_series;

-- MariaDB
INSERT INTO t (c)
SELECT generate_series.generate_series
FROM (
  WITH RECURSIVE
    generate_series(generate_series) AS (
      SELECT 1
      UNION ALL
      SELECT (generate_series + 1)
      FROM generate_series
      WHERE generate_series < 10
    )
  SELECT generate_series
  FROM generate_series
) AS generate_series;

-- Oracle
INSERT INTO t (c)
SELECT generate_series.generate_series
FROM (
  SELECT (level + (1 - 1)) generate_series
  FROM DUAL
  CONNECT BY level <= ((10 + 1) - 1)
) generate_series;

-- PostgreSQL
INSERT INTO t (c)
SELECT generate_series.generate_series
FROM generate_series(1, 10);

-- SQL Server
WITH
  generate_series(generate_series) AS (
    SELECT 1
    UNION ALL
    SELECT (generate_series + 1)
    FROM generate_series
    WHERE generate_series < 10
  )
INSERT INTO t (c)
SELECT generate_series.generate_series
FROM (
  SELECT generate_series
  FROM generate_series
) generate_series

… but you get the point.

Storing the procedural logic

If you have the necessary privileges, and your procedural logic isn’t super dynamic, you may choose to store your logic in a procedure or function directly in your database. In some databases, this means a compiler will be able to eagerly translate the logic to something very efficient (e.g. machine code), instead of interpreting the logic on the fly.

Take the above WHILE loop, for example. You may want to store that as a procedure P:

Name p = unquotedName("p");

ctx.createProcedure(p)
   .modifiesSQLData()
   .as(
        declare(i).set(1),

        while_(i.le(10)).loop(
            insertInto(t).columns(c).values(i),
            i.set(i.plus(1))
        )
   )
   .execute();

This produces the following statements:

-- Db2
CREATE PROCEDURE p()
MODIFIES SQL DATA
BEGIN
  DECLARE i integer;
  SET i = 1;
  WHILE i <= 10 DO
    INSERT INTO t (c)
    VALUES (i);
    SET i = (i + 1);
  END WHILE;
END;

-- Firebird
CREATE PROCEDURE p()
AS
  DECLARE i integer;
BEGIN
  :i = 1;
  WHILE (:i <= 10) DO BEGIN
    INSERT INTO t (c)
    VALUES (:i);
    :i = (:i + 1);
  END
END

-- MariaDB
CREATE PROCEDURE p()
MODIFIES SQL DATA
BEGIN
  DECLARE i int;
  SET i = 1;
  WHILE i <= 10 DO
    INSERT INTO t (c)
    VALUES (i);
    SET i = (i + 1);
  END WHILE;
END;

-- Oracle
CREATE PROCEDURE p
AS
  i number(10);
BEGIN
  i := 1;
  WHILE i <= 10 LOOP
    INSERT INTO t (c)
    VALUES (i);
    i := (i + 1);
  END LOOP;
END;

-- PostgreSQL
CREATE PROCEDURE p()
LANGUAGE plpgsql
AS
$$
DECLARE
  i int;
BEGIN
  i := 1;
  WHILE i <= 10 LOOP
    INSERT INTO t (c)
    VALUES (i);
    i := (i + 1);
  END LOOP;
END;
$$

-- SQL Server
CREATE PROCEDURE p
AS
BEGIN
  DECLARE @i int;
  SET @i = 1;
  WHILE @i <= 10 BEGIN
    INSERT INTO t (c)
    VALUES (@i);
    SET @i = (@i + 1);
  END;
END;

And now, what better way to call this procedure than, again, an anonymous block?

ctx.begin(call(unquotedName("p"))).execute();

Producing:

-- Db2
BEGIN
  CALL p();
END

-- Firebird
EXECUTE BLOCK AS
BEGIN
  EXECUTE PROCEDURE p;
END

-- MariaDB
BEGIN NOT ATOMIC
  CALL p();
END;

-- Oracle
BEGIN
  p();
END;

-- PostgreSQL
DO $$
BEGIN
  CALL p();
END;
$$

-- SQL Server
BEGIN
  EXEC p ;
END;

If you’re using jOOQ in Flyway or Liquibase to generate procedures during your database migrations, you can obviously generate jOOQ procedure stubs to call in a more type safe manner, instead of the above dynamic procedure call.
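
For example, assuming the code generator picked up the procedure P and produced the usual Routines class (the package name below is hypothetical; adjust it to your generator configuration), such a stub call could look like this:

// Hypothetical generated package
import static org.jooq.example.generated.Routines.p;

// Executes CALL p() in the usual type safe manner
p(ctx.configuration());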

Parsing procedural logic

This jOOQ feature is no exception: you can play around with our parser / translator here: https://www.jooq.org/translate. It can definitely help you translate your (simpler) stored procedures between dialects, such as PL/SQL, T-SQL, PL/pgSQL, etc.
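
For instance, here’s a minimal sketch of doing the same programmatically, assuming a commercial jOOQ distribution (the procedural parser is a commercial feature):

// Parse a PL/SQL block...
Queries queries = DSL.using(SQLDialect.ORACLE)
    .parser()
    .parse("BEGIN NULL; END;");

// ... and render each statement in T-SQL
for (Query query : queries)
    System.out.println(DSL.using(SQLDialect.SQLSERVER).render(query));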

Conclusion

As a rule of thumb, if you can do it with SQL (the 4GL), do it with SQL alone. But sometimes, you can’t. A 3GL is a better choice for an algorithm. When using jOOQ, you’ll naturally think of using Java to implement that 3GL algorithm. But wait, you could move the logic to the server for (drastically) increased performance!

Thanks to jOOQ, you can generate procedural logic that is:

  • Dynamic
  • Vendor agnostic
  • Anonymous or stored

Just like you’re used to from jOOQ, for SQL.

Formatting ASCII Charts With jOOQ

A little-known feature in jOOQ is the Formattable.formatChart() capability, which allows for formatting any jOOQ result as an ASCII chart. This can be useful for quick plotting of results in your console application.

Assuming you have a result set of this form (which is what you get when you call result.format() or just result.toString()):

+---+---+---+---+
|c  | v1| v2| v3|
+---+---+---+---+
|a  |  1|  2|  3|
|b  |  2|  1|  1|
|c  |  4|  0|  1|
+---+---+---+---+

This result can be produced by any query, or you can construct it without executing a query like this:

Field<String>  c  = field("c", VARCHAR);
Field<Integer> v1 = field("v1", INTEGER);
Field<Integer> v2 = field("v2", INTEGER);
Field<Integer> v3 = field("v3", INTEGER);

Result<Record4<String, Integer, Integer, Integer>> result = 
    create.newResult(c, v1, v2, v3); 
                      
result.add(create.newRecord(c, v1, v2, v3).values("a", 1, 2, 3));
result.add(create.newRecord(c, v1, v2, v3).values("b", 2, 1, 1));
result.add(create.newRecord(c, v1, v2, v3).values("c", 4, 0, 1));

Then, calling result.formatChart() will produce a nice ASCII chart like this, by default:

4.00|                                                  █████████████████████████
3.86|                                                  █████████████████████████
3.73|                                                  █████████████████████████
3.59|                                                  █████████████████████████
3.45|                                                  █████████████████████████
3.32|                                                  █████████████████████████
3.18|                                                  █████████████████████████
3.05|                                                  █████████████████████████
2.91|                                                  █████████████████████████
2.77|                                                  █████████████████████████
2.64|                                                  █████████████████████████
2.50|                                                  █████████████████████████
2.36|                                                  █████████████████████████
2.23|                                                  █████████████████████████
2.09|                                                  █████████████████████████
1.95|                         ██████████████████████████████████████████████████
1.82|                         ██████████████████████████████████████████████████
1.68|                         ██████████████████████████████████████████████████
1.55|                         ██████████████████████████████████████████████████
1.41|                         ██████████████████████████████████████████████████
1.27|                         ██████████████████████████████████████████████████
1.14|                         ██████████████████████████████████████████████████
1.00|███████████████████████████████████████████████████████████████████████████
----+---------------------------------------------------------------------------
    |            a                        b                        c            

It includes the first column as the category label on the x-axis, and the second column as the value to be plotted on the y-axis. You can tweak all sorts of configuration, including height and width:

result.formatChart(new ChartFormat().dimensions(6, 20));

To get this much smaller chart:

4.00|          █████
3.00|          █████
2.00|     ██████████
1.00|███████████████
----+---------------
    |  a    b    c  

To include the values of the other value columns in a stacked chart (which is the default), write:

result.formatChart(new ChartFormat()
    .dimensions(40, 8)
    .values(1, 2, 3)
);

Producing:

6.00|▒▒▒▒▒▒▒▒▒▒▒▒                       
5.00|▒▒▒▒▒▒▒▒▒▒▒▒            ▒▒▒▒▒▒▒▒▒▒▒
4.00|▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒███████████
3.00|▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓███████████
2.00|▓▓▓▓▓▓▓▓▓▓▓▓███████████████████████
1.00|███████████████████████████████████
----+-----------------------------------
    |     a           b          c      

If those default ASCII characters from the sweet old 90s MS-DOS and BBS times don’t align well with your font (e.g. as on this blog), you can switch them like this:

result.formatChart(new ChartFormat()
    .dimensions(40, 8)
    .values(1, 2, 3)
    .shades('@', 'o', '.')
);

And now you’re getting:

6.00|............                       
5.00|............            ...........
4.00|........................@@@@@@@@@@@
3.00|oooooooooooooooooooooooo@@@@@@@@@@@
2.00|oooooooooooo@@@@@@@@@@@@@@@@@@@@@@@
1.00|@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
----+-----------------------------------
    |     a           b          c      

Prefer displaying 100% charts? No problem:

result.formatChart(new ChartFormat()
    .dimensions(40, 8)
    .values(1, 2, 3)
    .shades('@', 'o', '.')
    .display(Display.HUNDRED_PERCENT_STACKED)
);

And now, the result is:

100.00%|................................
 80.00%|......................@@@@@@@@@@
 60.00%|...........ooooooooooo@@@@@@@@@@
 40.00%|ooooooooooo@@@@@@@@@@@@@@@@@@@@@
 20.00%|ooooooooooo@@@@@@@@@@@@@@@@@@@@@
  0.00%|@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
-------+--------------------------------
       |     a          b         c     

Who needs MS Excel? No one, if you have jOOQ!

Standard SQL/JSON – The Sobering Parts

It’s been almost 1 year now since jOOQ 3.14 was released on October 19, 2020 with SQL/JSON (and SQL/XML) support. Half a year later, we released jOOQ 3.15 with MULTISET support, which builds on top of these features to offer type-safe nested collections, the way every ORDBMS should implement them.

Building (dogfooding) on top of our own SQL/JSON API has revealed a lot of caveats of the various SQL/JSON implementations across vendors, and to be frank, it’s been a bit of a sobering experience. Despite there now being the ISO/IEC TR 19075-6 standard (mostly driven by Oracle this time), many vendors have already implemented some kind of JSON support, and it looks different in every dialect – to the extent that writing vendor agnostic SQL/JSON is almost impossible with hand written native SQL. You’ll need an API like jOOQ or another abstraction to standardise the different dialects.

In this article, I’d like to share a few of the biggest caveats encountered over the past year. There are many more, just try to translate some standard SQL/JSON to various dialects with jOOQ.

JSON Type or String?

JSON documents can be seen as simple strings with formatted content. Sure, I mean you can parse a string into a JSON document, so why complicate the SQL type system with new types?

Unfortunately, treating JSON as plain strings proves to be a poor decision, and most vendors made it. PostgreSQL got this right by offering the JSON and JSONB types, the latter being quite performant even when storing and indexing JSON documents.

First off, and this comes as no surprise to anyone working with almost any language other than SQL: types are semantic. Stringly typed environments are bad. Or as the quote goes:

The Bitterness of Poor Quality Remains Long After the Sweetness of Low Price is Forgotten — Benjamin Franklin

With “Low Price” being the low price of the quick and dirty stringly-typed feature addition without a formal type. We already had this time and again with BOOLEAN types, and dialects like MySQL that pretend that supporting this is a good idea:

SELECT *
FROM t
WHERE 1 OR 0 -- TRUE OR FALSE

Let’s look at examples of what I mean.

MariaDB and MySQL

Let’s look at MariaDB’s and MySQL’s syntax first. Here’s how to create a JSON array:

select json_array(1, 2)

Producing

[1, 2]

That’s great! And it even conforms to standard SQL syntax. A JSON array can also be nested easily:

select json_array(json_array(1, 2), 3)

To produce:

[[1, 2], 3]

Now, what if that nested array originates from a derived table?

select json_array(nested, 3)
from (select json_array(1, 2) nested) as t

It results in:

-- MySQL, MariaDB 10.6
[[1, 2], 3]

-- MariaDB 10.5
["[1, 2]", 3]

Bummer. In MariaDB 10.5, the nested array lost its “JSON type annotation” and went back to the stringly typed version of itself. As a string, it needs to be quoted. This seems to be a bug (https://jira.mariadb.org/browse/MDEV-26134), which apparently has been fixed already under a different issue in MariaDB 10.6. It’s not the only one, though. The bug tracker is full of similar issues: https://jira.mariadb.org/browse/MDEV-13701.

MySQL seems to fare a bit better by now, though there are caveats when aggregating (see further down).

The only workaround I’ve found so far for the above bugs is extremely laborious:

select json_array(json_merge_preserve('[]', nested), 3)
from (select json_array(1, 2) nested) as t

Imagine doing that every time you nest JSON. It works, but it really needs to be automated (e.g. through jOOQ).
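
For what it’s worth, here’s a sketch of that automation with jOOQ’s standard JSON API (assuming the usual static org.jooq.impl.DSL imports); jOOQ can emit the JSON_MERGE_PRESERVE workaround for you where a dialect needs it:

// Nest JSON arrays using the standard API
Field<JSON> nested = jsonArray(val(1), val(2));

ctx.select(jsonArray(nested, val(3))).fetch();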

Oracle

Oracle defined most of the SQL standard, and I really like how SQL-idiomatic its syntax feels, just like SQL/XML. Unfortunately, they waited with the introduction of the new JSON type until Oracle 21c (can’t wait to play with it). As such, we have to choose whether to represent JSON as VARCHAR2 (the default, limited to 4000 bytes on some systems, or 32kb at most!) or as CLOB. If you’re doing serious JSON, you can probably always add the RETURNING CLOB clause to every single JSON function call of yours. Effectively:

select json_array(1, 2 returning clob)
from dual

The nested version looks like this:

select json_array(nested, 3 returning clob)
from (select json_array(1, 2 returning clob) nested from dual) t;

This works much better in Oracle, but there are still many edge cases that it doesn’t get right. Try this on Oracle 18c:

select json_arrayagg(coalesce(json_object(), json_object()))
from dual

And it produces another instance of erroneously stringly-typed JSON:

["{}"]

The workaround is to add FORMAT JSON just about everywhere, just to be sure, e.g.

select json_arrayagg(
  coalesce(json_object(), json_object()) format json
)
from dual

Now, the result is as expected:

[{}]

To play it safe, you’ll probably have to write FORMAT JSON just about everywhere again, like RETURNING CLOB.

Truncation

Truncation is probably the most sobering part of using JSON. Why do we still have size-limited data types in SQL? It hardly ever makes sense from an application perspective. Yet, here we are.

Oracle

Run this query in Oracle:

select json_arrayagg(owner) from all_objects;

And you’ll get:

SQL Error [40478] [99999]: ORA-40478: output value too large (maximum: 4000)

Increasing the maximum VARCHAR2 size to 32kb will only postpone the problem. There is no “reasonable” size limit to such documents, so again, you’ll have to RETURNING CLOB all the time.

select json_arrayagg(owner returning clob) from all_objects;

The price is the same as always. CLOBs are just a bit more annoying to work with than VARCHAR2, both within Oracle as well as in the client application (e.g. JDBC based), because you have to work with yet another resource when you just wanted a string. jOOQ will just add the clause everywhere for you; there’s hardly a reason not to do that with jOOQ, as the fetching of CLOB values is completely transparent to jOOQ users.
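
A sketch of the earlier aggregation via jOOQ, using plain SQL identifiers for ALL_OBJECTS since no generated code is assumed here (static DSL and SQLDataType imports assumed, too):

// jOOQ appends RETURNING CLOB in its Oracle rendering
ctx.select(jsonArrayAgg(field(name("owner"), VARCHAR)))
   .from(table(name("all_objects")))
   .fetch();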

MySQL

Up until recently, you couldn’t call JSON_ARRAYAGG() in MySQL, and the MariaDB version crashed the server (https://jira.mariadb.org/browse/MDEV-21912). At the time of writing, neither implementation supported the ORDER BY clause in JSON_ARRAYAGG(), which I find quite essential, so the workaround is to use GROUP_CONCAT:

select concat('[', group_concat(id), ']')
from t_book

Of course, that is very wrong if the concatenated values are not numbers, so we need to also use JSON_QUOTE, e.g.

select concat('[', group_concat(json_quote(title)), ']')
from t_book

And, if you’re embedding that stuff in other JSON structures, you have to turn what is really a string now back into JSON, using JSON_MERGE (which used to work, but is now deprecated) or JSON_MERGE_PRESERVE, e.g.

select json_object(
  'titles', 
  json_merge_preserve(
    '[]', 
    concat('[', group_concat(json_quote(title)), ']')
  )
)
from t_book

To produce a document like this:

{"titles": ["1984", "Animal Farm", "O Alquimista", "Brida"]}

Without that JSON_MERGE_PRESERVE, you’d be getting:

{"titles": "[\"1984\",\"Animal Farm\",\"O Alquimista\",\"Brida\"]"}

Definitely not something you can ever remember.

Anyway. This section was about truncation! What happens with large, aggregated JSON documents in MySQL? Try this:

select 
  concat('[', group_concat(json_quote(table_name)), ']')
from information_schema.tables

It produces (on my machine):

["innodb_table_stats","innodb_index_stats","CHARACTER_SETS","CHECK_CONSTRAINTS","COLLATIONS","COLLATION_CHARACTER_SET_APPLICABILITY","COLUMNS","COLUMNS_EXTENSIONS","COLUMN_STATISTICS","EVENTS","FILES","INNODB_DATAFILES","INNODB_FOREIGN","INNODB_FOREIGN_COLS","INNODB_FIELDS","INNODB_TABLESPACES_BRIEF","KEY_COLUMN_USAGE","KEYWORDS","PARAMETERS","PARTITIONS","REFERENTIAL_CONSTRAINTS","RESOURCE_GROUPS","ROUTINES","SCHEMATA","SCHEMATA_EXTENSIONS","ST_SPATIAL_REFERENCE_SYSTEMS","ST_UNITS_OF_MEASURE","ST_GEOMETRY_COLUMNS","STATISTICS","TABLE_CONSTRAINTS","TABLE_CONSTRAINTS_EXTENSIONS","TABLES","TABLES_EXTENSIONS","TABLESPACES_EXTENSIONS","TRIGGERS","VIEW_ROUTINE_USAGE","VIEW_TABLE_USAGE","VIEWS","COLUMN_PRIVILEGES","ENGINES","OPTIMIZER_TRACE","PLUGINS","PROCESSLIST","PROFILING","SCHEMA_PRIVILEGES","TABLESPACES","TABLE_PRIVILEGES","USER_PRIVILEGES","cond_instances","error_log","events_waits_current","events_waits_history","events_waits_history_long","events_waits_summary_by_host_by_event_name","events_waits_summary_by]

Wait, what is that at the end?

,"events_waits_summary_by]

Invalid JSON, because the GROUP_CONCAT string got truncated. We could set the following:

set @@group_concat_max_len = 4294967295;

And now the output is correct, and much longer:

["innodb_table_stats",...,"t_identity_pk","t_triggers"]

An API like jOOQ will be able to set this session variable automatically for you, but do you really want to think about this all the time when writing native SQL?

Data Type Support

JSON knows a few data types. Namely:

  • string
  • number
  • boolean
  • null
  • object
  • array

That’s less than SQL, but often good enough (anything can be encoded as a string, after all). But when you don’t have BOOLEAN types in SQL (e.g. MySQL, Oracle), then you have to manually encode your already manual encoding to JSON BOOLEAN.

MySQL

MySQL makes you believe that this works:

select json_array(true, false)

Because the above produces

[true, false]

But it doesn’t really work. That seems to be hard-coded in the parser. As soon as your true and false values are expressions instead of literals, e.g. originating from a derived table:

select json_array(t, f)
from (select true as t, false as f) as t

You’ll get:

[1, 0]

There are different ways to emulate this. One being:

select json_array(
  json_extract(case when t = 1 then 'true' when t = 0 then 'false' end, '$'),
  json_extract(case when f = 1 then 'true' when f = 0 then 'false' end, '$')
)
from (select true as t, false as f) as t

Now, we’re getting again:

[true, false]

Oracle

Unlike MySQL / MariaDB, Oracle SQL doesn’t pretend it has a BOOLEAN type. Instead, people encode it as a NUMBER(1) or CHAR(1) or some other thing. Irrespective of the encoding, this is the solution:

select json_array(
  case when t = 1 then 'true' when t = 0 then 'false' end format json,
  case when f = 1 then 'true' when f = 0 then 'false' end format json
)
from (
  select 1 as t, 0 as f, null as n
  from dual
) t

Producing

[true,false]

NULL handling

When using NULL with SQL/JSON, there are various caveats. First off, SQL NULL is not the same thing as JSON NULL. Using PostgreSQL:

select 
  a, 
  b, 
  a = b as equal, 
  a is null as a_is_null, 
  b is null as b_is_null
from (
  select null::jsonb as a, 'null'::jsonb as b
) as t

Produces:

|a  |b   |equal|a_is_null|b_is_null|
|---|----|-----|---------|---------|
|   |null|     |true     |false    |

Where the empty cells are SQL NULL values, and the null value is a JSON null value that is not “SQL NULL”. It’s the only reasonable thing to do, really. Do dialects agree?

MySQL

select 
  a, 
  b,
  a = b as equal, 
  a is null as a_is_null, 
  b is null as b_is_null
from (
  select null as a, json_extract('null', '$') as b
) as t

Producing also:

|a  |b   |equal|a_is_null|b_is_null|
|---|----|-----|---------|---------|
|   |null|     |1        |0        |

So, yes!

Oracle

Let’s see, this could be how I can create a JSON NULL value:

select 
  a, 
  b, 
  case when a = b then 1 else 0 end as equal, 
  case when a is null then 1 else 0 end as a_is_null, 
  case when b is null then 1 else 0 end as b_is_null
from (
  select null as a, json_value('[null]', '$[0]') as b
  from dual
) t

Yet, Oracle is known to have a funky relationship with NULL strings. This is the result:

|A  |B  |EQUAL|A_IS_NULL|B_IS_NULL|
|---|---|-----|---------|---------|
|   |   |0    |1        |1        |

There does not seem to be an actual JSON NULL representation! I couldn’t find a workaround for this yet. Probably I will. But this is very unfortunate, leading to many conversion edge cases.

On the other hand, Oracle is the only dialect among the ones presented in this article that introduced a very useful NULL handling clause for aggregate functions. Check this out:

select
  json_arrayagg(a),
  json_arrayagg(a absent on null),
  json_arrayagg(a null on null)
from (
  select 1 as a from dual union all
  select null from dual
) t

Producing

|A  |B  |C       |
|---|---|--------|
|[1]|[1]|[1,null]|

Note the above query doesn’t produce the correct result on Oracle 18c yet, because of a parser / optimiser bug. Use this to work around the bug for this example:

select
  json_arrayagg(a) a,
  json_arrayagg(a + 0 absent on null) b,
  json_arrayagg(a - 0 null on null) c
from (
  select 1 as a from dual union all
  select null from dual
) t

Other dialects have different opinions on how to aggregate NULL values to JSON documents. In SQL, aggregate functions tend to ignore NULL values, as Oracle does above, but with JSON, it is often essential to include the value, especially when creating a JSON_OBJECT, where an absent key is not strictly the same thing as an absent value.
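
With jOOQ, these standard clauses are available on the aggregate function, and can be emulated where a dialect lacks them. A minimal sketch, using plain SQL identifiers for a table t with column a:

ctx.select(
      jsonArrayAgg(field(name("a"))),
      jsonArrayAgg(field(name("a"))).absentOnNull(),
      jsonArrayAgg(field(name("a"))).nullOnNull())
   .from(table(name("t")))
   .fetch();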

Db2

Db2 has a very limited implementation of SQL/JSON. It is very standards compliant in terms of syntax, but suffers from severe bugs that make it quite unusable for now. Surely, this will improve in the near future.

SQL Server

I’ve also left out SQL Server in this article. SQL Server has had JSON and XML support for a while, but implemented it completely differently. You can’t just form arbitrary JSON objects or arrays easily, but you can transform result sets into JSON in an intuitive way.

This offers a quick win when streaming results, but doesn’t compose very well. E.g. you can’t create a JSON_ARRAY with scalar, non-object content, though a JSON_OBJECT can be created like this:

select 1 as a, 2 as b
for json path, without_array_wrapper

Producing

{"a":1,"b":2}

A limited number of JSON features can be emulated also in SQL Server, and as with all the dialects that don’t support an actual JSON type, constant escaping / unescaping may be required.

Conclusion

SQL/JSON was standardised relatively late, mostly by Oracle. The standard is very sound. But regrettably, a lot of dialects disagree on syntax and behaviour, and some are still quite buggy.

The soundest of all implementations is that of PostgreSQL, which introduced proper data types and offers a rich set of vendor-specific functions to manipulate JSON directly in SQL. In the near future, PostgreSQL will embrace standard SQL/JSON, just like Oracle, and enhance its own implementation. I don’t think there will be anything fundamentally new, just more portable standard syntax.

There is nothing insurmountable for a library like jOOQ, which abstracts over all the dialects and offers the SQL/JSON (and SQL/XML) functionality using a uniform, standards-inspired API. As dialects improve, jOOQ will adopt new, better syntax by individual vendors, so you can implement your SQL/JSON queries forwards compatibly.

So, as always, use jOOQ to work around the subtle and boring differences of the various vendor syntaxes and start using SQL/JSON already today. Despite the caveats presented here, SQL/JSON is one of the most exciting things about modern SQL!

Reactive SQL with jOOQ 3.15 and R2DBC

One of the biggest new features of the recently released jOOQ 3.15 is its new support for reactive querying via R2DBC. This has been a highly popular feature request, and we finally delivered on it.

You can continue using jOOQ the way you were used to, providing you with type safe, embedded SQL in Java, Kotlin, or Scala, but your query executions are no longer blocking. Instead, your jOOQ ResultQuery or Query can be used as a Publisher<R> or Publisher<Integer> in the reactive-streams implementation of your choice.

Instead of (or in addition to) configuring your jOOQ DSLContext with a JDBC java.sql.Connection or javax.sql.DataSource, just configure it with an R2DBC io.r2dbc.spi.Connection or io.r2dbc.spi.ConnectionFactory:

ConnectionFactory connectionFactory = ConnectionFactories.get(
    ConnectionFactoryOptions
        .parse("r2dbc:h2:file://localhost/~/r2dbc-test")
        .mutate()
        .option(ConnectionFactoryOptions.USER, "sa")
        .option(ConnectionFactoryOptions.PASSWORD, "")
        .build()
);

DSLContext ctx = DSL.using(connectionFactory);

Alternatively, use Spring Boot to auto-configure jOOQ like this:
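
A minimal sketch of such a configuration, assuming Spring Boot’s R2DBC auto-configuration contributes the ConnectionFactory bean from your spring.r2dbc.* properties:

@Configuration
public class JooqConfiguration {

    // Wrap Spring Boot's auto-configured ConnectionFactory
    @Bean
    DSLContext dsl(ConnectionFactory connectionFactory) {
        return DSL.using(connectionFactory);
    }
}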

Starting from this DSLContext, you can build your queries as always, but instead of calling the usual blocking execute() or fetch() methods, you’ll just wrap the query in a Flux, for example. Assuming you ran the jOOQ code generator on your H2 INFORMATION_SCHEMA, you can now write:

record Table(String schema, String table) {}

Flux.from(ctx
        .select(
            INFORMATION_SCHEMA.TABLES.TABLE_SCHEMA,
            INFORMATION_SCHEMA.TABLES.TABLE_NAME)
        .from(INFORMATION_SCHEMA.TABLES))

    // Type safe mapping from Record2<String, String> to Table::new
    .map(Records.mapping(Table::new))
    .doOnNext(System.out::println)
    .subscribe();

jOOQ will acquire an R2DBC Connection from your ConnectionFactory, and release it again after the query has been executed, allowing for optimised resource management, something that is otherwise a bit tricky with R2DBC and Reactor. In other words, the above execution corresponds to this manually written query:

Flux.usingWhen(
        connectionFactory.create(),
        c -> c.createStatement(
                """
                SELECT table_schema, table_name
                FROM information_schema.tables
                """
             ).execute(),
        c -> c.close()
    )
    .flatMap(it -> it.map((r, m) -> 
         new Table(r.get(0, String.class), r.get(1, String.class))
    ))
    .doOnNext(System.out::println)
    .subscribe();

Both will print something like the following:

Table[schema=INFORMATION_SCHEMA, table=TABLE_PRIVILEGES]
Table[schema=INFORMATION_SCHEMA, table=REFERENTIAL_CONSTRAINTS]
Table[schema=INFORMATION_SCHEMA, table=TABLE_TYPES]
Table[schema=INFORMATION_SCHEMA, table=QUERY_STATISTICS]
Table[schema=INFORMATION_SCHEMA, table=TABLES]
Table[schema=INFORMATION_SCHEMA, table=SESSION_STATE]
Table[schema=INFORMATION_SCHEMA, table=HELP]
Table[schema=INFORMATION_SCHEMA, table=COLUMN_PRIVILEGES]
Table[schema=INFORMATION_SCHEMA, table=SYNONYMS]
Table[schema=INFORMATION_SCHEMA, table=SESSIONS]
Table[schema=INFORMATION_SCHEMA, table=IN_DOUBT]
Table[schema=INFORMATION_SCHEMA, table=USERS]
Table[schema=INFORMATION_SCHEMA, table=COLLATIONS]
Table[schema=INFORMATION_SCHEMA, table=SCHEMATA]
Table[schema=INFORMATION_SCHEMA, table=TABLE_CONSTRAINTS]
Table[schema=INFORMATION_SCHEMA, table=INDEXES]
Table[schema=INFORMATION_SCHEMA, table=ROLES]
Table[schema=INFORMATION_SCHEMA, table=FUNCTION_COLUMNS]
Table[schema=INFORMATION_SCHEMA, table=CONSTANTS]
Table[schema=INFORMATION_SCHEMA, table=SEQUENCES]
Table[schema=INFORMATION_SCHEMA, table=RIGHTS]
Table[schema=INFORMATION_SCHEMA, table=FUNCTION_ALIASES]
Table[schema=INFORMATION_SCHEMA, table=CATALOGS]
Table[schema=INFORMATION_SCHEMA, table=CROSS_REFERENCES]
Table[schema=INFORMATION_SCHEMA, table=SETTINGS]
Table[schema=INFORMATION_SCHEMA, table=DOMAINS]
Table[schema=INFORMATION_SCHEMA, table=KEY_COLUMN_USAGE]
Table[schema=INFORMATION_SCHEMA, table=LOCKS]
Table[schema=INFORMATION_SCHEMA, table=COLUMNS]
Table[schema=INFORMATION_SCHEMA, table=TRIGGERS]
Table[schema=INFORMATION_SCHEMA, table=VIEWS]
Table[schema=INFORMATION_SCHEMA, table=TYPE_INFO]
Table[schema=INFORMATION_SCHEMA, table=CONSTRAINTS]

Note that if you’re using JDBC and not R2DBC, you can continue to use the jOOQ API with your reactive streams libraries in a blocking manner in exactly the same way as above, e.g. if your favourite RDBMS does not yet support a reactive R2DBC driver. The drivers currently listed on r2dbc.io are all integration tested with jOOQ 3.15+.

A runnable example

Go play with the example here: https://github.com/jOOQ/jOOQ/tree/main/jOOQ-examples/jOOQ-r2dbc-example

It uses the following schema:

CREATE TABLE r2dbc_example.author (
  id INT NOT NULL AUTO_INCREMENT,
  first_name VARCHAR(100) NOT NULL,
  last_name VARCHAR(100) NOT NULL,
  
  CONSTRAINT pk_author PRIMARY KEY (id)
);

CREATE TABLE r2dbc_example.book (
  id INT NOT NULL AUTO_INCREMENT,
  author_id INT NOT NULL,
  title VARCHAR(100) NOT NULL,
  
  CONSTRAINT pk_book PRIMARY KEY (id),
  CONSTRAINT fk_book_author FOREIGN KEY (author_id) 
    REFERENCES r2dbc_example.author
);

And runs this code:

Flux.from(ctx
        .insertInto(AUTHOR)
        .columns(AUTHOR.FIRST_NAME, AUTHOR.LAST_NAME)
        .values("John", "Doe")
        .returningResult(AUTHOR.ID))
    .flatMap(id -> ctx
        .insertInto(BOOK)
        .columns(BOOK.AUTHOR_ID, BOOK.TITLE)
        .values(id.value1(), "Fancy Book"))
    .thenMany(ctx
        .select(
             BOOK.author().FIRST_NAME, 
             BOOK.author().LAST_NAME, 
             BOOK.TITLE)
        .from(BOOK))
    .doOnNext(System.out::println)
    .subscribe();

To insert two records and fetch the joined result as follows:

+----------+---------+----------+
|FIRST_NAME|LAST_NAME|TITLE     |
+----------+---------+----------+
|John      |Doe      |Fancy Book|
+----------+---------+----------+

jOOQ 3.15’s New Multiset Operator Will Change How You Think About SQL

This is how SQL should have been used all along.

They called it The Third Manifesto, ORDBMS, or other things. Regrettably, it never really took off. Because most vendors didn’t adopt it. And those who did, didn’t agree on syntax.

But this is about to change. Thanks to the now ubiquitous SQL/JSON support (which jOOQ 3.14 has already covered), we can now emulate the most powerful ORDBMS feature that you will want to use everywhere: Nested collections!

How We Used to do Things: With Joins

We’re going to be using the Sakila database for this example. It’s a DVD rental store with things like ACTOR, FILM, CATEGORY (of films) and other nice relational things. Let’s write a query for this requirement:

Get me all the films with their actors and their categories

Classically, we’d go ahead and use jOOQ to write:

ctx.select(
      FILM.TITLE,
      ACTOR.FIRST_NAME,
      ACTOR.LAST_NAME,
      CATEGORY.NAME
   )
   .from(ACTOR)
   .join(FILM_ACTOR)
     .on(ACTOR.ACTOR_ID.eq(FILM_ACTOR.ACTOR_ID))
   .join(FILM)
     .on(FILM_ACTOR.FILM_ID.eq(FILM.FILM_ID))
   .join(FILM_CATEGORY)
     .on(FILM.FILM_ID.eq(FILM_CATEGORY.FILM_ID))
   .join(CATEGORY)
     .on(FILM_CATEGORY.CATEGORY_ID.eq(CATEGORY.CATEGORY_ID))
   .orderBy(1, 2, 3, 4)
   .fetch();

And the result? Not so nice. A denormalised, flat table containing tons of repetition:

+----------------+----------+---------+-----------+
|title           |first_name|last_name|name       |
+----------------+----------+---------+-----------+
|ACADEMY DINOSAUR|CHRISTIAN |GABLE    |Documentary|
|ACADEMY DINOSAUR|JOHNNY    |CAGE     |Documentary|
|ACADEMY DINOSAUR|LUCILLE   |TRACY    |Documentary|
|ACADEMY DINOSAUR|MARY      |KEITEL   |Documentary|
|ACADEMY DINOSAUR|MENA      |TEMPLE   |Documentary|
|ACADEMY DINOSAUR|OPRAH     |KILMER   |Documentary|
|ACADEMY DINOSAUR|PENELOPE  |GUINESS  |Documentary|
|ACADEMY DINOSAUR|ROCK      |DUKAKIS  |Documentary|
|ACADEMY DINOSAUR|SANDRA    |PECK     |Documentary|
|ACADEMY DINOSAUR|WARREN    |NOLTE    |Documentary|
|ACE GOLDFINGER  |BOB       |FAWCETT  |Horror     |
|ACE GOLDFINGER  |CHRIS     |DEPP     |Horror     |
|ACE GOLDFINGER  |MINNIE    |ZELLWEGER|Horror     |
 ...

If we don’t consume this result unmodified as it is (e.g. when displaying tabular data to a user), we’d then go and de-duplicate things, shoehorning them back into some nested data structures (e.g. for consumption by some JSON based UI), and spend hours trying to untangle the cartesian products between the 2 nested collections FILM -> ACTOR and FILM -> CATEGORY (because ACTOR and CATEGORY now form a cartesian product, which we didn’t want!)

In the worst case, we don’t even notice! This example database only has 1 category per film, but it is designed to support multiple categories.

jOOQ can help with that deduplication, but just look at the number of questions for jOOQ many to many on Stack Overflow! You’ll probably still have to write at least 2 queries to separate the nested collections.

ENTER the Stage: Multiset

The standard SQL <multiset value constructor> operator allows for collecting the data from a correlated subquery into a nested data structure, a MULTISET. Everything in SQL is a MULTISET, so the operator isn’t too surprising. But the nesting is where it shines. The previous query can now be re-written in jOOQ as follows:

var result =
dsl.select(
      FILM.TITLE,
      multiset(
        select(
          FILM_ACTOR.actor().FIRST_NAME, 
          FILM_ACTOR.actor().LAST_NAME)
        .from(FILM_ACTOR)
        .where(FILM_ACTOR.FILM_ID.eq(FILM.FILM_ID))
      ).as("actors"),
      multiset(
        select(FILM_CATEGORY.category().NAME)
        .from(FILM_CATEGORY)
        .where(FILM_CATEGORY.FILM_ID.eq(FILM.FILM_ID))
      ).as("films")
   )
   .from(FILM)
   .orderBy(FILM.TITLE)
   .fetch();

Note, it’s not relevant to this task, but I’m using jOOQ’s type safe implicit to-one join feature, which further helps tame the joins, syntactically. A matter of taste.

How to read this query? Easy:

  • Get all the films
  • For each FILM, get all the actors as a nested collection
  • For each FILM, get all the categories as a nested collection

You’re going to like Java 10’s var keyword even more after this :) Because what type is result? It’s of this type:

Result<Record3<
  String,                          // FILM.TITLE
  Result<Record2<String, String>>, // ACTOR.FIRST_NAME, ACTOR.LAST_NAME
  Result<Record1<String>>          // CATEGORY.NAME
>>

That’s quite something. Not too complex if you think of it. There’s a result with 3 columns:

  • TITLE
  • A first nested result that has 2 string columns: ACTOR.FIRST_NAME and ACTOR.LAST_NAME
  • A second nested result that has 1 string column: CATEGORY.NAME

Using var or other type inference mechanisms, you don’t have to denote this type. Or even better (stay tuned): We’ll type-safely map the structural type to our nominal DTO type hierarchy, with just a few additional lines of code. I’ll explain that later.

What Does the Result Look Like?

Calling toString() on the above Result type yields something like this:

+---------------------------+--------------------------------------------------+---------------+
|title                      |actors                                            |films          |
+---------------------------+--------------------------------------------------+---------------+
|ACADEMY DINOSAUR           |[(PENELOPE, GUINESS), (CHRISTIAN, GABLE), (LUCI...|[(Documentary)]|
|ACE GOLDFINGER             |[(BOB, FAWCETT), (MINNIE, ZELLWEGER), (SEAN, GU...|[(Horror)]     |
|ADAPTATION HOLES           |[(NICK, WAHLBERG), (BOB, FAWCETT), (CAMERON, ST...|[(Documentary)]|
|AFFAIR PREJUDICE           |[(JODIE, DEGENERES), (SCARLETT, DAMON), (KENNET...|[(Horror)]     |
|AFRICAN EGG                |[(GARY, PHOENIX), (DUSTIN, TAUTOU), (MATTHEW, L...|[(Family)]     |
|AGENT TRUMAN               |[(KIRSTEN, PALTROW), (SANDRA, KILMER), (JAYNE, ...|[(Foreign)]    |
 ...

Notice how we’re back to getting each FILM.TITLE entry only once (no duplication), and nested in each row are the subquery results. There’s no denormalisation happening!

When calling result.formatJSON() with the appropriate formatting options, we’ll get this representation:

[
  {
    "title": "ACADEMY DINOSAUR",
    "actors": [
      {
        "first_name": "PENELOPE",
        "last_name": "GUINESS"
      },
      {
        "first_name": "CHRISTIAN",
        "last_name": "GABLE"
      },
      {
        "first_name": "LUCILLE",
        "last_name": "TRACY"
      },
      {
        "first_name": "SANDRA",
        "last_name": "PECK"
      },

      ...
    ],
    "films": [
      { "name": "Documentary" }
    ]
  },
  {
    "title": "ACE GOLDFINGER",
    "actors": [
      {
        "first_name": "BOB",
        "last_name": "FAWCETT"
      },
  ...

Calling result.formatXML() would produce this:

<result>
  <record>
    <title>ACADEMY DINOSAUR</title>
    <actors>
      <result>
        <record>
          <first_name>PENELOPE</first_name>
          <last_name>GUINESS</last_name>
        </record>
        <record>
          <first_name>CHRISTIAN</first_name>
          <last_name>GABLE</last_name>
        </record>
        <record>
          <first_name>LUCILLE</first_name>
          <last_name>TRACY</last_name>
        </record>
        <record>
          <first_name>SANDRA</first_name>
          <last_name>PECK</last_name>
        </record>
        ...
      </result>
    </actors>
    <films>
      <result>
        <record>
          <name>Documentary</name>
        </record>
      </result>
    </films>
  </record>
  <record>
    <title>ACE GOLDFINGER</title>
    <actors>
      <result>
        <record>
          <first_name>BOB</first_name>
          <last_name>FAWCETT</last_name>
        </record>
        ...

You get the idea!

What’s the Generated SQL?

Just turn on jOOQ’s DEBUG logging and observe a query like this one (in PostgreSQL):

select
  film.title,
  (
    select coalesce(
      jsonb_agg(jsonb_build_object(
        'first_name', t.first_name,
        'last_name', t.last_name
      )),
      jsonb_build_array()
    )
    from (
      select 
        alias_78509018.first_name, 
        alias_78509018.last_name
      from (
        film_actor
          join actor as alias_78509018
            on film_actor.actor_id = alias_78509018.actor_id
        )
      where film_actor.film_id = film.film_id
    ) as t
  ) as actors,
  (
    select coalesce(
      jsonb_agg(jsonb_build_object('name', t.name)),
      jsonb_build_array()
    )
    from (
      select alias_130639425.name
      from (
        film_category
          join category as alias_130639425
            on film_category.category_id = alias_130639425.category_id
        )
      where film_category.film_id = film.film_id
    ) as t
  ) as films
from film
order by film.title

The Db2, MySQL, Oracle, SQL Server versions would look similar. Just try it on your Sakila database installation. It runs fast, too.

Mapping the Results to DTOs

Now, I promised to get rid of that lengthy structural type with all the generics. Check this out!

We used to call them POJOs (Plain Old Java Objects). Then DTOs (Data Transfer Objects). Now records. Yes, let’s try some Java 16 records here. (Note, records aren’t required for these examples. Any POJOs with appropriate constructors would do).

Wouldn’t it be nice if result was of this type:

record Actor(String firstName, String lastName) {}
record Film(
  String title,
  List<Actor> actors,
  List<String> categories
) {}

List<Film> result = ...

Structural typing is essential to jOOQ and its type safe query system, but in your code, you probably don’t want to have those merge-conflict lookalike generics all the time, and even var won’t help you if your data needs to be returned from a method.

So, let’s transform our jOOQ query to one that produces List<Film>, step by step. We’re starting with the original query, untouched:

Result<Record3<
    String, 
    Result<Record2<String, String>>, 
    Result<Record1<String>>
>> result =
dsl.select(
      FILM.TITLE,
      multiset(
        select(
          FILM_ACTOR.actor().FIRST_NAME, 
          FILM_ACTOR.actor().LAST_NAME)
        .from(FILM_ACTOR)
        .where(FILM_ACTOR.FILM_ID.eq(FILM.FILM_ID))
      ).as("actors"),
      multiset(
        select(FILM_CATEGORY.category().NAME)
        .from(FILM_CATEGORY)
        .where(FILM_CATEGORY.FILM_ID.eq(FILM.FILM_ID))
      ).as("films")
   )
   .from(FILM)
   .orderBy(FILM.TITLE)
   .fetch();

Now, we’re going to map the Actor in the first MULTISET expression. This can be done as follows, using the new Field.convertFrom() convenience method, which allows turning a Field<T> into any read-only Field<U> for ad-hoc usage. A simple example would be this:

record Title(String title) {}

// Use this field in any query
Field<Title> title = FILM.TITLE.convertFrom(Title::new);

It’s just an easy, new way to attach a read-only Converter to a Field for single usage, instead of doing that with the code generator.
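
For example, a quick usage sketch, reusing the generated FILM table from above:

// Fetch the wrapped titles directly
List<Title> titles = ctx
    .select(title)
    .from(FILM)
    .fetch(Record1::value1);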

Applied to the original query:

Result<Record3<
    String, 
    List<Actor>,  // A bit nicer already
    Result<Record1<String>>
>> result =
dsl.select(
      FILM.TITLE,
      multiset(
        select(
          FILM_ACTOR.actor().FIRST_NAME, 
          FILM_ACTOR.actor().LAST_NAME)
        .from(FILM_ACTOR)
        .where(FILM_ACTOR.FILM_ID.eq(FILM.FILM_ID))

      // Magic here: vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
      ).as("actors").convertFrom(r -> r.map(mapping(Actor::new))),

      multiset(
        select(FILM_CATEGORY.category().NAME)
        .from(FILM_CATEGORY)
        .where(FILM_CATEGORY.FILM_ID.eq(FILM.FILM_ID))
      ).as("films")
   )
   .from(FILM)
   .orderBy(FILM.TITLE)
   .fetch();

What are we doing here?

  • The method convertFrom() takes a lambda Result<Record2<String, String>> -> Actor.
  • The Result type is the usual jOOQ Result, which has a Result.map(RecordMapper<R, E>) method.
  • The mapping() method is the new Records.mapping(), which isn’t doing much, just turning a constructor reference of type Function2<String, String, Actor> into a RecordMapper, which can then be used to turn a Result<Record2<String, String>> into a List<Actor>.

And it type checks! Try it yourself. If you add a column to the multiset, you’ll get a compilation error. If you add/remove an attribute from the Actor record, you’ll get a compilation error. No reflection here, just declarative mapping of jOOQ results/records to custom List<UserType>. If you prefer the “old” reflection approach using jOOQ’s ubiquitous into() methods, you can still do that, too:

Result<Record3<
    String, 
    List<Actor>,  // A bit nicer already
    Result<Record1<String>>
>> result =
dsl.select(
      FILM.TITLE,
      multiset(
        select(
          FILM_ACTOR.actor().FIRST_NAME, 
          FILM_ACTOR.actor().LAST_NAME)
        .from(FILM_ACTOR)
        .where(FILM_ACTOR.FILM_ID.eq(FILM.FILM_ID))

      // Magic here: vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
      ).as("actors").convertFrom(r -> r.into(Actor.class))),

      multiset(
        select(FILM_CATEGORY.category().NAME)
        .from(FILM_CATEGORY)
        .where(FILM_CATEGORY.FILM_ID.eq(FILM.FILM_ID))
      ).as("films")
   )
   .from(FILM)
   .orderBy(FILM.TITLE)
   .fetch();

The result still type checks, but the conversion from Result<Record2<String, String>> to List<Actor> no longer does; it uses reflection.

Let’s continue. Let’s remove the clumsy category Result<Record1<String>>. We could’ve added another record, but in this case, a List<String> will suffice.

Result<Record3<
    String, 
    List<Actor>,
    List<String>  // Begone, jOOQ structural type!
>> result =
dsl.select(
      FILM.TITLE,
      multiset(
        select(
          FILM_ACTOR.actor().FIRST_NAME, 
          FILM_ACTOR.actor().LAST_NAME)
        .from(FILM_ACTOR)
        .where(FILM_ACTOR.FILM_ID.eq(FILM.FILM_ID))
      ).as("actors").convertFrom(r -> r.map(mapping(Actor::new))),
      multiset(
        select(FILM_CATEGORY.category().NAME)
        .from(FILM_CATEGORY)
        .where(FILM_CATEGORY.FILM_ID.eq(FILM.FILM_ID))

      // Magic.     vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
      ).as("films").convertFrom(r -> r.map(Record1::value1))
   )
   .from(FILM)
   .orderBy(FILM.TITLE)
   .fetch();

And finally, the outer-most Result<Record3<...>> to List<Film> conversion:

List<Film> result =
dsl.select(
      FILM.TITLE,
      multiset(
        select(
          FILM_ACTOR.actor().FIRST_NAME, 
          FILM_ACTOR.actor().LAST_NAME)
        .from(FILM_ACTOR)
        .where(FILM_ACTOR.FILM_ID.eq(FILM.FILM_ID))
      ).as("actors").convertFrom(r -> r.map(mapping(Actor::new))),
      multiset(
        select(FILM_CATEGORY.category().NAME)
        .from(FILM_CATEGORY)
        .where(FILM_CATEGORY.FILM_ID.eq(FILM.FILM_ID))
      ).as("films").convertFrom(r -> r.map(Record1::value1))
   )
   .from(FILM)
   .orderBy(FILM.TITLE)

   //     vvvvvvvvvvvvvvvvvv grande finale
   .fetch(mapping(Film::new));

This time, we don’t need the Field.convertFrom() method. Just using the Records.mapping() auxiliary will be sufficient.

A More Complex Example

The previous example showed nesting of two independent collections, which is quite hard to do with classic JOIN based SQL or ORMs. How about a much more complex example, where we nest things 2 levels deep, one of which is even an aggregation? The requirement is:

Give me all the films, the actors that played in the film, the categories that categorise the film, the customers that have rented the film, and all the payments per customer for that film

I won’t even show a JOIN based approach. Let’s dive directly into MULTISET and the also new, synthetic MULTISET_AGG aggregate function. Here’s how to do this with jOOQ. Now, check out that beautiful result type:

Result<Record4<
    String,                   // FILM.TITLE
    Result<Record2<
        String,               // ACTOR.FIRST_NAME
        String                // ACTOR.LAST_NAME
    >>,                       // "actors"
    Result<Record1<String>>,  // CATEGORY.NAME
    Result<Record4<
        String,               // CUSTOMER.FIRST_NAME
        String,               // CUSTOMER.LAST_NAME
        Result<Record2<
            LocalDateTime,    // PAYMENT.PAYMENT_DATE
            BigDecimal        // PAYMENT.AMOUNT
        >>, 
        BigDecimal            // "total"
    >>                        // "customers"
>> result = 
dsl.select(

        // Get the films
        FILM.TITLE,

        // ... and all actors that played in the film
        multiset(
            select(
                FILM_ACTOR.actor().FIRST_NAME,
                FILM_ACTOR.actor().LAST_NAME
            )
            .from(FILM_ACTOR)
            .where(FILM_ACTOR.FILM_ID.eq(FILM.FILM_ID))
        ).as("actors"),

        // ... and all categories that categorise the film
        multiset(
            select(FILM_CATEGORY.category().NAME)
            .from(FILM_CATEGORY)
            .where(FILM_CATEGORY.FILM_ID.eq(FILM.FILM_ID))
        ).as("categories"),

        // ... and all customers who rented the film, as well
        // as their payments
        multiset(
            select(
                PAYMENT.rental().customer().FIRST_NAME,
                PAYMENT.rental().customer().LAST_NAME,
                multisetAgg(
                    PAYMENT.PAYMENT_DATE,
                    PAYMENT.AMOUNT
                ).as("payments"),
                sum(PAYMENT.AMOUNT).as("total"))
            .from(PAYMENT)
            .where(PAYMENT
                .rental().inventory().FILM_ID.eq(FILM.FILM_ID))
            .groupBy(
                PAYMENT.rental().customer().CUSTOMER_ID,
                PAYMENT.rental().customer().FIRST_NAME,
                PAYMENT.rental().customer().LAST_NAME)
        ).as("customers")
    )
    .from(FILM)
    .where(FILM.TITLE.like("A%"))
    .orderBy(FILM.TITLE)
    .limit(5)
    .fetch();

You’ll be using var, of course, rather than spelling out this insane type, but I wanted to denote it explicitly for the sake of the example.

Note again how helpful implicit joins are here. Since we want to aggregate all the payments per customer, we can just select from PAYMENT and group by the payment’s PAYMENT.rental().customer(), and correlate the subquery via PAYMENT.rental().inventory().FILM_ID, all without any extra effort.

The executed SQL looks like this, where you can see the generated implicit joins (run it on your PostgreSQL Sakila database!):

select
  film.title,
  (
    select coalesce(
      jsonb_agg(jsonb_build_object(
        'first_name', t.first_name,
        'last_name', t.last_name
      )),
      jsonb_build_array()
    )
    from (
      select alias_78509018.first_name, alias_78509018.last_name
      from (
        film_actor
          join actor as alias_78509018
            on film_actor.actor_id = alias_78509018.actor_id
        )
      where film_actor.film_id = film.film_id
    ) as t
  ) as actors,
  (
    select coalesce(
      jsonb_agg(jsonb_build_object('name', t.name)),
      jsonb_build_array()
    )
    from (
      select alias_130639425.name
      from (
        film_category
          join category as alias_130639425
            on film_category.category_id = 
               alias_130639425.category_id
        )
      where film_category.film_id = film.film_id
    ) as t
  ) as categories,
  (
    select coalesce(
      jsonb_agg(jsonb_build_object(
        'first_name', t.first_name,
        'last_name', t.last_name,
        'payments', t.payments,
        'total', t.total
      )),
      jsonb_build_array()
    )
    from (
      select
        alias_63965917.first_name,
        alias_63965917.last_name,
        jsonb_agg(jsonb_build_object(
          'payment_date', payment.payment_date,
          'amount', payment.amount
        )) as payments,
        sum(payment.amount) as total
      from (
        payment
          join (
            rental as alias_102068213
              join customer as alias_63965917
                on alias_102068213.customer_id = 
                   alias_63965917.customer_id
              join inventory as alias_116526225
                on alias_102068213.inventory_id = 
                   alias_116526225.inventory_id
          )
            on payment.rental_id = alias_102068213.rental_id
        )
      where alias_116526225.film_id = film.film_id
      group by 
        alias_63965917.customer_id, 
        alias_63965917.first_name, 
        alias_63965917.last_name
    ) as t
  ) as customers
from film
where film.title like 'A%'
order by film.title
fetch next 5 rows only

The result, in JSON, now looks something like this:

[
  {
    "title": "ACADEMY DINOSAUR",
    "actors": [
      {
        "first_name": "PENELOPE",
        "last_name": "GUINESS"
      },
      {
        "first_name": "CHRISTIAN",
        "last_name": "GABLE"
      },
      {
        "first_name": "LUCILLE",
        "last_name": "TRACY"
      },
      ...
    ],
    "categories": [{ "name": "Documentary" }],
    "customers": [
      {
        "first_name": "SUSAN",
        "last_name": "WILSON",
        "payments": [
          {
            "payment_date": "2005-07-31T22:08:29",
            "amount": 0.99
          }
        ],
        "total": 0.99
      },
      {
        "first_name": "REBECCA",
        "last_name": "SCOTT",
        "payments": [
          {
            "payment_date": "2005-08-18T18:36:16",
            "amount": 0.99
          }
        ],
        "total": 0.99
      },
      ...

That’s it. Nesting collections in arbitrary ways is completely effortless and intuitive. No N+1, no deduplication. Just declare the results in exactly the form you require in your client.

There is no other way to pull off this complexity with such ease than to let your RDBMS do the heavy lifting of planning and running such a query, and to let jOOQ do the mapping.

Conclusion

We had this kind of functionality all along. We just never used it, or not enough. Why? Because client APIs did not make it accessible enough. Because RDBMS didn’t agree on syntax enough.

That’s now over. jOOQ uses standard SQL MULTISET syntax in its DSL API, enhances it with the synthetic MULTISET_AGG aggregate function, the way all RDBMS should have implemented it (go Informix, Oracle). We can wait for another 40 years for the other RDBMS to implement this, or we just use jOOQ today.

And, I cannot stress this enough:

  • This is all type safe
  • There is no reflection
  • The nesting is done in the database using SQL (via SQL/XML or SQL/JSON for now)
  • … So, execution planners can optimise your entire query
  • … No extra columns or extra queries or other extra work is performed in the database

This works on all dialects that have either SQL/XML or SQL/JSON support (or both), including the major popular dialects:

  • MySQL
  • Oracle
  • PostgreSQL
  • SQL Server

And it is offered under the usual license terms of jOOQ. So, happy nesting of collections.

Addendum: Using SQL/XML or SQL/JSON directly

You may be tempted to use this everywhere, and rightfully so. But beware: if your SQL client is consuming XML or JSON directly, there’s no need to use MULTISET. Use jOOQ’s native SQL/XML or SQL/JSON support that was introduced in jOOQ 3.14. That way, you won’t convert from JSON to jOOQ results and back to JSON, but stream the JSON (or XML) to your frontend directly.
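
For illustration, here’s a hedged sketch of what that might look like, assuming the same Sakila tables, org.jooq.JSON, and the usual static imports from org.jooq.impl.DSL. A single JSON document per film is produced in the database and can be streamed to the frontend as-is:

// One JSON document per film, no intermediate Java objects required
Result<Record1<JSON>> result =
dsl.select(
        jsonObject(
            key("title").value(FILM.TITLE),
            key("actors").value(field(
                select(jsonArrayAgg(jsonObject(
                    key("first_name").value(FILM_ACTOR.actor().FIRST_NAME),
                    key("last_name").value(FILM_ACTOR.actor().LAST_NAME)
                )))
                .from(FILM_ACTOR)
                .where(FILM_ACTOR.FILM_ID.eq(FILM.FILM_ID))
            ))
        ))
   .from(FILM)
   .orderBy(FILM.TITLE)
   .fetch();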

How to Get an RDBMS Server Version with SQL

Do you need to know what RDBMS server version you’re on, and you only have SQL at your disposal? No problem. Most RDBMS provide you with that information in some form of meta data table.

Here’s how:

-- CockroachDB
select version();

-- Db2
select service_level from table (sysproc.env_get_inst_info()) t;

-- Derby
select getdatabaseproductversion() from (values (1)) t (a);

-- Exasol
select param_value 
from exa_metadata 
where param_name = 'databaseProductVersion';

-- Firebird
select rdb$get_context('SYSTEM', 'ENGINE_VERSION')
from rdb$database;

-- H2
select h2version(); 

-- HANA
select * from m_database;

-- HSQLDB
select character_value
from information_schema.sql_implementation_info
where implementation_info_name = 'DBMS VERSION';

-- Informix
select dbinfo('version', 'full') from systables where tabid = 1;

-- MariaDB
select version();

-- MemSQL (SingleStore)
select version();

-- MySQL
select version();

-- Oracle
select * from v$version;

-- PostgreSQL
select version();

-- Snowflake
select current_version();

-- SQL Server
select @@version;

-- SQLite
select sqlite_version();

-- Teradata
select infodata from dbc.dbcinfov where infokey = 'VERSION';

Missing this info for your own RDBMS? Feel free to comment.
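
As an aside: if you’re connecting through JDBC anyway, you don’t need any vendor specific SQL at all; the standard DatabaseMetaData API exposes the same information (url, user, and password being placeholders here):

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;

// Works with any JDBC driver, no vendor specific SQL required
try (Connection c = DriverManager.getConnection(url, user, password)) {
    DatabaseMetaData meta = c.getMetaData();
    System.out.println(meta.getDatabaseProductName());
    System.out.println(meta.getDatabaseProductVersion());
}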

Add IN List Padding to Your JDBC Application to Avoid Cursor Cache Contention Problems

A problem few developers are aware of is the possibility of running into “cursor cache contention” or “execution plan cache contention” problems when using IN lists in SQL. The problem, which has been described at length in previous articles, can be summarised as follows.

All of these are distinct SQL queries and need to be parsed / planned / cached as possibly distinct execution plans in RDBMS that have strong plan caches (e.g. Db2, Oracle, SQL Server):

SELECT * FROM t WHERE id IN (?);
SELECT * FROM t WHERE id IN (?, ?);
SELECT * FROM t WHERE id IN (?, ?, ?);
SELECT * FROM t WHERE id IN (?, ?, ?, ?);
SELECT * FROM t WHERE id IN (?, ?, ?, ?, ?);

While this is never a problem on developer machines, it can produce significant problems in production. I’ve seen this take down entire Oracle instances during peak loads. While RDBMS vendors should work on avoiding the severe problems this can cause, you can work around it using a trick that we invented at jOOQ (and Hibernate also has it now):

IN list padding

The trick is very simple. Just “pad” your IN lists to the nearest power of 2, and repeat the last value until the end:

SELECT * FROM t WHERE id IN (?);                      -- Left as it is
SELECT * FROM t WHERE id IN (?, ?);                   -- Left as it is
SELECT * FROM t WHERE id IN (?, ?, ?, ?);             -- Padded 3 to 4
SELECT * FROM t WHERE id IN (?, ?, ?, ?);             -- Left as it is
SELECT * FROM t WHERE id IN (?, ?, ?, ?, ?, ?, ?, ?); -- Padded 5 to 8
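
The padding computation itself is simple. Here’s a sketch (pad() is a hypothetical helper, not jOOQ API):

import java.util.ArrayList;
import java.util.List;

// Pad a bind value list to the next power of 2, repeating the last value
static <T> List<T> pad(List<T> values) {
    int n = values.size();
    int padded = n <= 1 ? n : Integer.highestOneBit(n - 1) << 1;

    List<T> result = new ArrayList<>(values);
    while (result.size() < padded)
        result.add(values.get(n - 1));

    return result;
}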

It’s really a hack, and there are better solutions to this problem, including using arrays or temporary tables, but your production system may be down and you need a quick fix.
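
For example, on PostgreSQL, one of those better solutions is to bind a single array instead of N individual values, so the SQL string (and thus the cached plan) stays the same regardless of the number of values. A sketch using plain JDBC, where c is an open java.sql.Connection:

import java.sql.PreparedStatement;
import java.sql.ResultSet;

// One SQL string, one cached plan, for any number of ids
Integer[] ids = { 1, 2, 3 };

try (PreparedStatement s = c.prepareStatement(
    "select * from t where id = any(?)")) {
    s.setArray(1, c.createArrayOf("integer", ids));

    try (ResultSet rs = s.executeQuery()) {
        while (rs.next())
            System.out.println(rs.getInt("id"));
    }
}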

jOOQ has supported IN list padding for years now, since jOOQ 3.9 (late 2016), but with the relatively new parser and the ParsingConnection, you can now also apply this technique to any arbitrary SQL query in your non-jOOQ based system. Here’s a simple example:

// Any arbitrary JDBC Connection is wrapped by jOOQ here and replaced
// by a "ParsingConnection", which is also a JDBC Connection
DSLContext ctx = DSL.using(connection);
ctx.settings().setInListPadding(true);
Connection c = ctx.parsingConnection();

// Your remaining code is left untouched. It is unaware of jOOQ
for (int i = 0; i < 10; i++) {
    try (PreparedStatement s = c.prepareStatement(

        // This alone is reason enough to use jOOQ instead, 
        // but one step at a time :)
        "select 1 from dual where 1 in (" +
            IntStream.rangeClosed(0, i)
                     .mapToObj(x -> "?")
                     .collect(Collectors.joining(", ")) +
        ")")
    ) {
        for (int j = 0; j <= i; j++)
            s.setInt(j + 1, j + 1);

        try (ResultSet rs = s.executeQuery()) {
            while (rs.next())
                System.out.println(rs.getInt(1));
        }
    }
}

The above example just generates and runs 10 queries of this form:

select 1 from dual where 1 in (?)
select 1 from dual where 1 in (?, ?)
select 1 from dual where 1 in (?, ?, ?)
select 1 from dual where 1 in (?, ?, ?, ?)
select 1 from dual where 1 in (?, ?, ?, ?, ?)
select 1 from dual where 1 in (?, ?, ?, ?, ?, ?)
select 1 from dual where 1 in (?, ?, ?, ?, ?, ?, ?)
select 1 from dual where 1 in (?, ?, ?, ?, ?, ?, ?, ?)
select 1 from dual where 1 in (?, ?, ?, ?, ?, ?, ?, ?, ?)
select 1 from dual where 1 in (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)

But that’s not what is being executed. In the DEBUG logs, we can see the following:

Translating from         : select 1 from dual where 1 in (?)
Translating to           : select 1 from dual where 1 in (?)
Translating from         : select 1 from dual where 1 in (?, ?)
Translating to           : select 1 from dual where 1 in (?, ?)
Translating from         : select 1 from dual where 1 in (?, ?, ?)
Translating to           : select 1 from dual where 1 in (?, ?, ?, ?)
Translating from         : select 1 from dual where 1 in (?, ?, ?, ?)
Translating to           : select 1 from dual where 1 in (?, ?, ?, ?)
Translating from         : select 1 from dual where 1 in (?, ?, ?, ?, ?)
Translating to           : select 1 from dual where 1 in (?, ?, ?, ?, ?, ?, ?, ?)
Translating from         : select 1 from dual where 1 in (?, ?, ?, ?, ?, ?)
Translating to           : select 1 from dual where 1 in (?, ?, ?, ?, ?, ?, ?, ?)
Translating from         : select 1 from dual where 1 in (?, ?, ?, ?, ?, ?, ?)
Translating to           : select 1 from dual where 1 in (?, ?, ?, ?, ?, ?, ?, ?)
Translating from         : select 1 from dual where 1 in (?, ?, ?, ?, ?, ?, ?, ?)
Translating to           : select 1 from dual where 1 in (?, ?, ?, ?, ?, ?, ?, ?)
Translating from         : select 1 from dual where 1 in (?, ?, ?, ?, ?, ?, ?, ?, ?)
Translating to           : select 1 from dual where 1 in (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
Translating from         : select 1 from dual where 1 in (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
Translating to           : select 1 from dual where 1 in (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)

And just like that, our legacy application can run in production again, and you’ll have time to fix this more thoroughly.

Conclusion

While jOOQ is mostly an internal DSL for writing type safe, embedded SQL in Java, you can use it for a lot of other things, too, on any JDBC based application. The above example used the ParsingConnection, which can parse all your SQL statements and translate / transform them to anything else, including other dialects.

Calculating Pagination Metadata Without Extra Roundtrips in SQL

When paginating results in SQL, we use standard SQL OFFSET .. FETCH or a vendor specific version of it, such as LIMIT .. OFFSET. For example:

SELECT first_name, last_name
FROM actor
ORDER BY actor_id
OFFSET 10 ROWS FETCH NEXT 10 ROWS ONLY

As always, we’re using the Sakila database for this example.

This is rather straightforward. It will give us page 2 out of N pages, with a page size of 10. But how do we calculate these values? How do we know we’re on page 2? How do we know the number of pages N? Can we calculate all of this without an extra round-trip, e.g. one to calculate the total number of actors:

-- Yuck, a second round-trip!
SELECT COUNT(*)
FROM actor

We can do it with a single SQL query and window functions, but before I explain how to do this, please consider reading this article on why OFFSET pagination is a bad thing for your performance.
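
For contrast, here’s a minimal keyset pagination sketch using jOOQ’s SEEK clause, where lastActorId is a hypothetical variable holding the ACTOR_ID of the last row of the previous page (its Java type must match your generated column type):

// Keyset ("seek") pagination: no OFFSET, so later pages don't get slower
int lastActorId = 10; // remembered from the previous page

ctx.select(ACTOR.FIRST_NAME, ACTOR.LAST_NAME)
   .from(ACTOR)
   .orderBy(ACTOR.ACTOR_ID)
   .seek(lastActorId) // renders a WHERE actor_id > ? predicate
   .limit(10)
   .fetch();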

If you’re still convinced OFFSET pagination is what you need, as opposed to keyset pagination, let’s look at how to calculate the above meta data with SQL.

What Metadata Do We Need?

The metadata we typically need to paginate using OFFSET are these:

  • TOTAL_ROWS: The total number of records if we hadn’t paginated
  • CURRENT_PAGE: The current page we’re on
  • MAX_PAGE_SIZE: The maximum page size
  • ACTUAL_PAGE_SIZE: The actual page size (when on the last page)
  • ROW: The actual offsets of the returned rows
  • LAST_PAGE: Whether we are on the last page

The maximum page size is something we set on the query, so it doesn’t have to be calculated. Everything else does. And here’s how to do that in a single query:

SELECT 
  t.first_name, 
  t.last_name,

  -- Calculate some meta data
  COUNT(*) OVER () AS actual_page_size,
  MAX(row) OVER () = total_rows AS last_page,

  -- Metadata from the subquery
  total_rows,
  row,
  ((row - 1) / :max_page_size) + 1 AS current_page
FROM (
  SELECT
    u.*,

    -- Calculate some meta data, repeating the ORDER BY from
    -- the original query
    COUNT(*) OVER () AS total_rows,
    ROW_NUMBER () OVER (ORDER BY u.actor_id) AS row

  -- Original query with all the predicates, joins, as a derived table
  FROM (
    SELECT *
    FROM actor
  ) AS u

  -- Ordering and pagination done here, where :offset is
  -- the maximum ROW value of the previous page
  ORDER BY u.actor_id
  OFFSET :offset ROWS
  FETCH NEXT :max_page_size ROWS ONLY
) AS t
ORDER BY t.actor_id

That’s it. Impressive? Don’t be scared: I’ll walk you through it step by step. And if you ever get confused by SQL syntax, consider this article explaining the logical order of SQL operations, which, for our example, is:

  • FROM (recurse ordering for derived tables)
  • WHERE (which the example omitted)
  • WINDOW calculations
  • SELECT (the projection)
  • ORDER BY
  • OFFSET .. FETCH

Annotating our query, ordering operations logically as 1.1, 1.2, 2.1, 2.2, 2.3, 2.4, 2.5, 3.1, 3.2, 3.3, 3.4:

-- 3.3
SELECT 
  t.first_name, 
  t.last_name,

  -- 3.2
  COUNT(*) OVER () AS actual_page_size,
  MAX(row) OVER () = total_rows AS last_page,

  -- 3.3
  total_rows,
  row,
  ((row - 1) / :max_page_size) + 1 AS current_page
-- 3.1
FROM (
  -- 2.3
  SELECT
    u.*,

    -- 2.2
    COUNT(*) OVER () AS total_rows,
    ROW_NUMBER () OVER (ORDER BY u.actor_id) AS row

  -- 2.1
  FROM (

    -- 1.2
    SELECT *
    -- 1.1
    FROM actor
  ) AS u

  -- 2.4
  ORDER BY u.actor_id

  -- 2.5
  OFFSET :offset ROWS
  FETCH NEXT :max_page_size ROWS ONLY
) AS t

-- 3.4
ORDER BY t.actor_id

Step by step explanation

First off, the original query SELECT * FROM actor is wrapped as a derived table called u. You can do almost anything you want with this original query, applying only a few transformations:

  • 1.1, 1.2, 2.1: You need to project (SELECT clause) the columns that your original query projected, plus the columns that you need for ORDER BY. Because I’m projecting the right things in the outermost query, and because there’s no DISTINCT clause in the original query, I conveniently projected *. Alternatively, I could have projected FIRST_NAME, LAST_NAME (because that is projected in the original query), and ACTOR_ID (because that’s what we ORDER BY).
  • 2.2: On that derived table u, we’re now able to calculate some metadata, including TOTAL_ROWS as COUNT(*) OVER () and ROW as ROW_NUMBER () OVER (ORDER BY u.actor_id). The COUNT(*) OVER () window function has an empty window specification OVER (), meaning it counts all the rows that result from the FROM, WHERE, GROUP BY, HAVING clauses, i.e. from u in our particular example. Without a second round-trip! The ROW_NUMBER () OVER (ORDER BY u.actor_id) orders all the rows in u by u.actor_id and assigns unique row numbers to them, according to that ordering.
  • 2.3: The window functions are calculated implicitly because they’re located in the projection of this derived table. We’re also again going to conveniently project everything from u.*, because the outer-most query is the one that projects columns explicitly.
  • 2.4: The original ordering has been moved here because there is no guarantee that the ordering would have been maintained if we had ordered the contents of u. But we need the ordering to calculate OFFSET .. FETCH right after.
  • 2.5: This is where we paginate. The OFFSET corresponds to the maximum ROW value that we’ve encountered on the previous page. We start at 0, and with a page size of 15, we use 15 on the next page. Remember that while ROW_NUMBER() values are 1 based, OFFSET is 0 based.
  • 3.1: All of the above is wrapped again in a derived table, in order to make further calculations on it, namely:
  • 3.2: We can again calculate COUNT(*) OVER (), calculating the total number of rows that result from the FROM, WHERE, GROUP BY, HAVING clauses, i.e. from t in our particular example. This time, the number of rows can be no more than MAX_PAGE_SIZE, because that’s what the FETCH (or LIMIT) clause inside of t says. But it can be less, too, so this is what we use to calculate the ACTUAL_PAGE_SIZE. Finally, we compare MAX(row) OVER () = total_rows to see if we’re on the last page, meaning the highest value for row in the current page resulting from t is compared to the total row count. Another way to calculate the LAST_PAGE value would be if ACTUAL_PAGE_SIZE < MAX_PAGE_SIZE, i.e. COUNT(*) OVER () < :MAX_PAGE_SIZE.
  • 3.3: In addition to the usual projection of the original columns FIRST_NAME, LAST_NAME (we’re no longer projecting * now!), we’re doing some final calculations, including deriving CURRENT_PAGE from ROW and MAX_PAGE_SIZE as ((ROW - 1) / :max_page_size) + 1, using integer division. You can calculate more things, such as TOTAL_ROWS / MAX_PAGE_SIZE (rounded up) to get the TOTAL_PAGES value. See the worked example right after this list.
  • 3.4: Finally, we have to ORDER BY t.actor_id again; don’t let anyone tell you otherwise. In SQL, if you do not ORDER BY, then the ordering is undefined. Sure, it would be silly for an optimiser to re-order things without any good reason. We’ve already ordered the contents of our subqueries in 2.4, but there is no guarantee that this ordering is stable. Just add DISTINCT, UNION, or a JOIN resulting in a hash join or some random other operator to your query, and the ordering breaks. So, always ORDER BY if ordering is important to you.
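
(A quick sanity check with the numbers from the jOOQ example below: TOTAL_ROWS = 200 and MAX_PAGE_SIZE = 15 yield TOTAL_PAGES = ceil(200 / 15) = 14. Row 31, the first row of the first result set shown further down, lands on page ((31 - 1) / 15) + 1 = 3, and row 200 lands on page ((200 - 1) / 15) + 1 = 14, the last page.)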

And we’re done!

How to do it in jOOQ?

This is the kind of use-case where jOOQ really, really shines, because all of this is about dynamic SQL. Your actual business logic is contained in the deeply nested u table. Everything else is “presentation logic”, which is implemented in SQL for a very obvious reason: to improve performance.

And because you want to implement all of this only once in some library of yours, instead of having to play this game on every query, you make this kind of query dynamic. The utility will look like this:

// Assuming as always the usual static imports, including:
// import static org.jooq.impl.DSL.*;
// import com.generated.code.Tables.*;

static Select<?> paginate(
    DSLContext ctx,
    Select<?> original,
    Field<?>[] sort,
    int limit,
    int offset
) {
    Table<?> u = original.asTable("u");
    Field<Integer> totalRows = count().over().as("total_rows");
    Field<Integer> row = rowNumber().over().orderBy(u.fields(sort))
        .as("row");

    Table<?> t = ctx
        .select(u.asterisk())
        .select(totalRows, row)
        .from(u)
        .orderBy(u.fields(sort))
        .limit(limit)
        .offset(offset)
        .asTable("t");

    Select<?> result = ctx
        .select(t.fields(original.getSelect().toArray(Field[]::new)))
        .select(
            count().over().as("actual_page_size"),
            field(max(t.field(row)).over().eq(t.field(totalRows)))
                .as("last_page"),
            t.field(totalRows),
            t.field(row),
            t.field(row).minus(inline(1)).div(limit).plus(inline(1))
                .as("current_page"))
        .from(t)
        .orderBy(t.fields(sort));

    // System.out.println(result);
    return result;
}

Notice the commented-out println for debugging? When enabled, it will print something like our original query (but you’ll also see that in your debug log output, by default, with jOOQ):

select
  t.ACTOR_ID,
  t.FIRST_NAME,
  t.LAST_NAME,
  count(*) over () as actual_page_size,
  (max(t.row) over () = t.total_rows) as last_page,
  t.total_rows,
  t.row,
  (((t.row - 1) / 15) + 1) as current_page
from (
  select
    u.*,
    count(*) over () as total_rows,
    row_number() over (order by u.ACTOR_ID) as row
  from (
    select
      ACTOR.ACTOR_ID,
      ACTOR.FIRST_NAME,
      ACTOR.LAST_NAME
    from ACTOR
  ) as u
  order by u.ACTOR_ID
  offset 30 rows
  fetch next 15 rows only
) as t
order by t.ACTOR_ID

And here’s how you call the utility:

System.out.println(
    paginate(
        ctx,
        ctx.select(ACTOR.ACTOR_ID, ACTOR.FIRST_NAME, ACTOR.LAST_NAME)
           .from(ACTOR),
        new Field[] { ACTOR.ACTOR_ID },
        15,
        30
    ).fetch()
);

Notice that you can plug arbitrary SQL fragments into that utility and paginate them. Irrespective of the complexity (including joins, other window functions, grouping, recursion, and what not), jOOQ will have you covered and will now paginate things for you (see the additional example after the result tables below).

The result of the above is:

+--------+----------+---------+----------------+---------+----------+----+------------+
|ACTOR_ID|FIRST_NAME|LAST_NAME|actual_page_size|last_page|total_rows| row|current_page|
+--------+----------+---------+----------------+---------+----------+----+------------+
|      31|SISSY     |SOBIESKI |              15|false    |       200|  31|           3|
|      32|TIM       |HACKMAN  |              15|false    |       200|  32|           3|
|      33|MILLA     |PECK     |              15|false    |       200|  33|           3|
|      34|AUDREY    |OLIVIER  |              15|false    |       200|  34|           3|
|      35|JUDY      |DEAN     |              15|false    |       200|  35|           3|
|      36|BURT      |DUKAKIS  |              15|false    |       200|  36|           3|
|      37|VAL       |BOLGER   |              15|false    |       200|  37|           3|
|      38|TOM       |MCKELLEN |              15|false    |       200|  38|           3|
|      39|GOLDIE    |BRODY    |              15|false    |       200|  39|           3|
|      40|JOHNNY    |CAGE     |              15|false    |       200|  40|           3|
|      41|JODIE     |DEGENERES|              15|false    |       200|  41|           3|
|      42|TOM       |MIRANDA  |              15|false    |       200|  42|           3|
|      43|KIRK      |JOVOVICH |              15|false    |       200|  43|           3|
|      44|NICK      |STALLONE |              15|false    |       200|  44|           3|
|      45|REESE     |KILMER   |              15|false    |       200|  45|           3|
+--------+----------+---------+----------------+---------+----------+----+------------+

Or, on the last page, with offset 195:

+--------+----------+---------+----------------+---------+----------+----+------------+
|ACTOR_ID|FIRST_NAME|LAST_NAME|actual_page_size|last_page|total_rows| row|current_page|
+--------+----------+---------+----------------+---------+----------+----+------------+
|     196|BELA      |WALKEN   |               5|true     |       200| 196|          14|
|     197|REESE     |WEST     |               5|true     |       200| 197|          14|
|     198|MARY      |KEITEL   |               5|true     |       200| 198|          14|
|     199|JULIA     |FAWCETT  |               5|true     |       200| 199|          14|
|     200|THORA     |TEMPLE   |               5|true     |       200| 200|          14|
+--------+----------+---------+----------------+---------+----------+----+------------+
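
And as promised, the original query can be arbitrarily complex without the utility changing at all. A quick sketch with a hypothetical filter:

// Paginate a filtered query by title. Only the original query and the
// sort fields change; paginate() is reused as-is.
System.out.println(
    paginate(
        ctx,
        ctx.select(FILM.FILM_ID, FILM.TITLE)
           .from(FILM)
           .where(FILM.TITLE.like("A%")),
        new Field[] { FILM.TITLE },
        15,
        0
    ).fetch()
);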

Conclusion

jOOQ is all about dynamic SQL. There’s hardly any SQL feature left that jOOQ doesn’t support. This includes window functions, for example, but also making sure that your dynamic SQL works on a large number of SQL dialects, irrespective of the little syntactic details.

You can build your own libraries to construct re-usable SQL elements from other SQL building blocks as this article has shown, to dynamically create single-query OFFSET pagination meta data calculation, without performing additional database round trips.

Translating Stored Procedures Between Dialects

In the past years, we’ve invested a lot of effort into improving our procedural language capabilities in jOOQ. What started with a simple internal API to support the emulation of DDL clauses like these:

-- Some dialect that supports this
create table if not exists t (i varchar(10));

-- Db2
begin
  declare continue handler for sqlstate '42710' begin end;
  execute immediate 'create table T (I varchar(10))';
end

-- Oracle
begin
  execute immediate 'create table T (I varchar2(10))';
exception
  when others then
    if sqlerrm like 'ORA-00955%' then null;
    else raise;
    end if;
end;

-- SQL Server
begin try
  create table T (I varchar(10))
end try
begin catch
  if error_number() != 2714 throw;
end catch

… evolved into a full fledged API for all sorts of procedural logic executed in your database server.

Anonymous blocks

The above examples show what most RDBMS call “anonymous blocks”, similar to Java’s anonymous classes, i.e. elements of procedural logic that do not have a name.

Depending on the database, these blocks are interpreted on the fly, or compiled and cached like ordinary SQL statements. They can be very useful for a variety of reasons:

  • To create atomic ad-hoc units of code, executed in a single server round trip, similar to the above DDL scripts with integrated exception handling
  • To create dynamic procedural code. This may be esoteric for many, but exactly the right thing to do for others. All of jOOQ is about dynamic SQL, so why not also dynamic PL/SQL, T-SQL, PL/pgSQL, etc.?
  • To work around limitations imposed by Conway’s Law, when there is no way you can get the necessary GRANT or other bureaucratic token to deploy your procedure in production at your discretion. I mean, this is still a thing in a lot of companies.
  • A lesser case of Conway’s Law may be when you’re a product vendor, and you don’t know if you can create procedures on your clients’ production system. Just don’t. Run your procedural logic as an anonymous block if you can’t, or as a procedure if you can. Same jOOQ code.
  • If your procedural code changes very frequently (even dynamically), storing it might cause tricky issues. If you’ve ever worked with Oracle and ran into the dreaded latch free event, you know what I mean.

I’m by no means advocating you should use anonymous blocks over stored procedures in general. If you can, store your code in the database for better performance and re-use. But sometimes you can’t, and sometimes you shouldn’t.

So, jOOQ supports – as always – a mix of various procedural logic elements, including:

  • Blocks with variable declarations
  • IF statement
  • Loops including LOOP, WHILE, REPEAT, FOR loops
  • EXIT (or LEAVE) and CONTINUE (or ITERATE) for loop control flow
  • RETURN to return from procedures or functions
  • GOTO (gasp!)
  • SIGNAL or RAISE
  • Labels
  • CALL statement to call other stored procedures
  • EXECUTE statement (for running dynamic SQL from within procedural logic. Which level of inception is that?)

And we’re adding more support all the time. The Java code might look something like this:

Variable<Integer> i = var(name("i"), INTEGER);
ctx.begin(
  for_(i).in(1, 10).loop(
    insertInto(T).columns(T.COL).values(i)
  )
).execute();

Assuming you cannot run a bulk insert statement for some reason, this might be the way to go. It translates to various dialects as follows.

Db2 and MySQL (the latter doesn’t support anonymous blocks, but it does support statement batches)

begin
  declare I bigint;
  set I = 1;
  while I <= 10 do
    insert into T (COL) values (I);
    set I = (I + 1);
  end while;
end;

PostgreSQL

do $$
begin
  for I in 1 .. 10 loop
    insert into T (COL) values (I);
  end loop;
end;
$$

Oracle

begin
  for I in 1 .. 10 loop
    insert into T (COL) values (I);
  end loop;
end;

SQL Server

begin
  declare @I bigint = 1;
  while @I <= 10 begin
    insert into T (COL) values (@I);
    set @I = (@I + 1);
  end;
end;

As always with jOOQ, you don’t have to start out with writing jOOQ API based code. While this is the recommended approach when your procedural (or SQL) logic is dynamic, jOOQ can also parse and translate static SQL in string form. The babelfish of SQL. Play around with it here to learn more: https://www.jooq.org/translate/
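
If you’d rather do that translation programmatically than on the website, here’s a hedged sketch of the parser API (assuming a jOOQ distribution that includes procedural language support):

import org.jooq.Queries;
import org.jooq.Query;
import org.jooq.SQLDialect;
import org.jooq.impl.DSL;

// Parse an Oracle-flavoured anonymous block...
Queries queries = DSL.using(SQLDialect.ORACLE).parser().parse(
    "begin for i in 1 .. 10 loop insert into t (col) values (i); end loop; end;"
);

// ... and re-render each parsed statement for PostgreSQL
for (Query query : queries)
    System.out.println(DSL.using(SQLDialect.POSTGRES).render(query));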

Storing the code as a procedure

If you don’t have any of the above use-cases, you will want to store this code as a procedure (or function):

  • For greater re-use
  • For better performance

In that case, starting from jOOQ 3.15, you can use our CREATE PROCEDURE, CREATE FUNCTION, or even CREATE TRIGGER support.

Note: CREATE PACKAGE is high on our wish list, but might not make it into 3.15. If packages are used for namespacing only, they might be emulated using schemas in other dialects. Other package level features, such as package state, may be more difficult to translate.

The previous anonymous block can be easily wrapped in a DSLContext.createProcedure() call:

Variable<Integer> i = var("i", INTEGER);
Parameter<Integer> i1 = in("i1", INTEGER);
Parameter<Integer> i2 = in("i2", INTEGER);

ctx.createProcedure("insert_into_t")
   .parameters(i1, i2)
   // You may or may not wrap your block in BEGIN .. END.
   // jOOQ will figure it out per dialect...
   .as(for_(i).in(i1, i2).loop(
     insertInto(T).columns(T.COL).values(i)
   ))
   .execute();

Which would produce the following procedures:

Db2 and MySQL

create procedure INSERT_INTO_T(
  I1 integer,
  I2 integer
)
begin
  declare I bigint;
  set I = I1;
  while I <= I2 do
    insert into T (COL) values (I);
    set I = (I + 1);
  end while;
end;

MariaDB

create procedure INSERT_INTO_T(
  I1 int,
  I2 int
)
begin
  for I in I1 .. I2 do
    insert into T (COL) values (I);
  end for;
end;

Oracle

create procedure INSERT_INTO_T(
  I1 number,
  I2 number
)
as
begin
  for I in I1 .. I2 loop
    insert into T (COL) values (I);
  end loop;
end;

PostgreSQL

create procedure INSERT_INTO_T(
  I1 int,
  I2 int
)
language plpgsql
as
$$
begin
  for I in I1 .. I2 loop
    insert into T (COL) values (I);
  end loop;
end;
$$

SQL Server

create procedure INSERT_INTO_T
  @I1 int,
  @I2 int
as
begin
  declare @I bigint = @I1;
  while @I <= @I2 begin
    insert into T (COL) values (@I);
    set @I = (@I + 1);
  end;
end;

Play around with it here to learn more: https://www.jooq.org/translate/. We’re also looking forward to your bug reports and/or feature requests here: https://github.com/jOOQ/jOOQ/issues/new/choose.

Tricky transformations

Procedural languages are standardised via the ISO/IEC 9075-4 standard, and some RDBMS agree with the standard to a surprisingly large extent, including:

  • Db2
  • HSQLDB
  • MariaDB
  • MySQL

Others do so to a lesser extent, but all procedural languages agree on the fact that they are very simple languages, without “fancy” things like subtype or parametric polymorphism (OK, PL/SQL has some subtype polymorphism, but not a very sophisticated one. We won’t support it for now), lambda expressions, dynamic dispatch, algebraic data types, etc.

What they do have in common is a tight integration with the SQL language, which is where they shine.

But there are subtle differences, nonetheless. For example, they differ in where you can declare variables. Some have block scope, others don’t. And some adhere to the standard, where LEAVE requires a label, while others don’t.

Imagine you write this “fantasy” jOOQ code:

Name t = unquotedName("t");
Name a = unquotedName("a");
Variable<Integer> i = var(unquotedName("i"), INTEGER);

ctx.begin(
     insertInto(t).columns(a).values(1),
     declare(i).set(2),
     loop(
       insertInto(t).columns(a).values(i),
       i.set(i.plus(1)),
       if_(i.gt(10)).then(loop(exit()), exit())
     )
   )
   .execute();

This is just a more complicated version of the original loop, which inserts the values 1-10 into a table. The nesting of loop(exit()), as well as the use of the infinite LOOP with EXIT rather than the indexed FOR loop, serves no purpose other than to show off the transformation capabilities.

There are a few things that don’t always work exactly like this in some dialects!

Let’s look at what Db2 does with this.

begin
  -- Variable declarations need to be "pulled up" to the beginning
  -- of the block, i.e. before the INSERT statement
  declare i integer;
  insert into t (a) values (1);

  -- While irrelevant to this example, the init value for the
  -- variable declaration must remain at the original location
  set i = 2;

  -- We need a label to be able to leave this loop
  alias_1:
  loop
    insert into t (a) values (i);
    set i = (i + 1);
    if i > 10 then

      -- Same here, a label is required
      alias_2:
      loop
        leave alias_2;
      end loop;
      leave alias_1;
    end if;
  end loop;
end

If we don’t use EXIT on a loop, then there won’t be a label. Or, you can obviously label your loops explicitly, which is always recommended. But sometimes, you don’t have that in your original source code.

What does Oracle do with this?

Oracle has a slightly different syntax here:

declare
  i number(10);
begin
  insert into t (a) values (1);
  i := 2;
  loop
    insert into t (a) values (i);
    i := (i + 1);
    if i > 10 then
      loop
        exit;
      end loop;
      exit;
    end if;
  end loop;
end;

The main difference is that the declaration is also pulled up, but into a separate DECLARE block, which is required to declare variables outside of BEGIN .. END. Label-less EXIT is supported natively, so nothing needs to be transformed here.

If you’re interested in how these transformations work, read this post.

Conclusion

Whether you’re migrating off one dialect onto another, or whether you’re supporting several dialects at once, or you’re writing dynamic SQL and dynamic procedural logic, or you just like writing things in Java rather than native SQL, or you suffer from Conway’s Law and cannot store your procedural code easily, jOOQ can help you with those endeavours.

For a while now, jOOQ has supported procedural statements as anonymous blocks for the most popular dialects. Starting from jOOQ 3.15, we’ll also support storing this logic in the database in a dialect agnostic way, as well as parsing / translating procedural code on our website, or as a library / CLI or JDBC proxy to replace your SQL / procedural code ad-hoc in a legacy JDBC application.

Stay tuned for more in this exciting area of jOOQ development!