How to Write Multiset Conditions With Oracle VARRAY Types

Oracle is one of the few databases that implement the SQL standard ORDBMS extensions, which essentially allow for nested collections. Other databases that have these features to some extent are CUBRID, Informix, and PostgreSQL.

Oracle has two types of nested collections:

-- Nested tables
CREATE TYPE t1 AS TABLE OF VARCHAR2(10);
/

-- Varrays
CREATE TYPE t2 AS VARRAY(10) OF VARCHAR2(10);
/

The main difference at first is that a nested table can be of arbitrary size, whereas a varray has a fixed maximum size. Other than that, they behave in similar ways.
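
The maximum size is enforced at runtime, when a varray is constructed or extended. A quick sketch (given the VARRAY(10) limit from above, the second constructor call should raise ORA-22909):

-- Fine: 3 elements fit into VARRAY(10)
SELECT t2('a', 'b', 'c') FROM dual;

-- Raises ORA-22909 (exceeded maximum VARRAY limit): 11 elements
SELECT t2('a','b','c','d','e','f','g','h','i','j','k') FROM dual;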

When storing a nested collection in a table, there is another difference. Varrays can be inlined into the table just like any other data type, whereas nested tables have to be accompanied by an additional storage clause:

CREATE TABLE t (
  id NUMBER(10),
  t1 t1,
  t2 t2
)
NESTED TABLE t1 STORE AS t1_nt;

This is a minor hassle in terms of DDL. The differences at query time, however, are more significant.

Multiset Conditions

The most important difference is the fact that all the useful multiset conditions are not available with varrays. For instance, consider running these statements:

INSERT INTO t VALUES (1, NULL, NULL);
INSERT INTO t VALUES (2, t1(), t2());
INSERT INTO t VALUES (
  3, 
  t1('abc', 'xyz', 'zzz'), 
  t2('abc', 'xyz', 'zzz')
);
INSERT INTO t VALUES (
  4, 
  t1('dup', 'dup', 'dup'), 
  t2('dup', 'dup', 'dup')
);

SELECT * FROM t WHERE 'abc' MEMBER OF t1;
SELECT * FROM t WHERE 'abc' MEMBER OF t2;

The result of these queries is:

ID  T1                        T2
-----------------------------------------------------
3   T1('abc', 'xyz', 'zzz')   T2('abc', 'xyz', 'zzz')

ORA-00932: inconsistent datatypes: expected UDT got TEST.T2

Bummer. The documentation is a bit unclear about this. It reads:

The return value is TRUE if expr is equal to a member of the specified nested table or varray. The return value is NULL if expr is null or if the nested table is empty.

There is some explicit mention of varrays supporting these operations (as in the quote above), but in most of the documentation, varrays are not mentioned at all. So, how can we write such operations with varrays? Here's a list of translations from each nested table condition to an equivalent SQL expression for use with varrays.

These are the multiset conditions:

IS A SET condition

In SQL, everything is a (partially ordered) multiset by default. Sometimes, however, we want to work with sets, i.e. a special type of multiset that has no duplicate values. We can easily check whether nested tables are sets (or whether they aren’t):

-- Nested table version
SELECT * FROM t WHERE t1 IS A SET;

-- Varray version
SELECT * 
FROM t 
WHERE t2 IS NOT NULL
AND (SELECT count(*) FROM TABLE(t2)) 
  = (SELECT count(DISTINCT column_value) FROM TABLE(t2));

The IS A SET operation yields UNKNOWN if the nested table is NULL, so we have to take that into account as well. If it isn’t NULL, we can count the total values in the varray and compare that with the total distinct values in the varray.

The result is:

ID  T1                        T2
-----------------------------------------------------
2   T1()                      T2()
3   T1('abc', 'xyz', 'zzz')   T2('abc', 'xyz', 'zzz')

IS EMPTY condition

This predicate needs no explanation. It can be written as such:

-- Nested table version
SELECT * FROM t WHERE t1 IS EMPTY;

-- Varray version
SELECT * 
FROM t 
WHERE t2 IS NOT NULL
AND NOT EXISTS (
  SELECT * FROM TABLE (t2)
);

The result being:

ID  T1                 T2
---------------------------------------
2   T1()               T2()

MEMBER condition

This handy predicate can help check if a specific value is contained in a nested collection. It can be written as such:

-- Nested table version
SELECT * FROM t WHERE 'abc' MEMBER OF t1;

-- Varray version
SELECT *
FROM t
WHERE t2 IS NOT NULL
AND EXISTS (
  SELECT 1 FROM TABLE(t2) WHERE column_value = 'abc'
);

Yielding:

ID  T1                        T2
-----------------------------------------------------
3   T1('abc', 'xyz', 'zzz')   T2('abc', 'xyz', 'zzz')

SUBMULTISET condition

Just like the previous MEMBER condition, this predicate can help check if specific values (more than one) are contained in a nested collection. This is a bit more tricky than the previous emulations. The MEMBER condition works the same way for sets and multisets, as we’re checking if exactly one element is contained in the (multi)set.

When working with multisets, duplicates are allowed, and in the case of the SUBMULTISET operation, the following can be observed:

-- Equal multisets
t1() SUBMULTISET OF t1();
t1('a', 'a') SUBMULTISET OF t1('a', 'a');

-- Subsets
t1('a') SUBMULTISET OF t1('a', 'a');

-- But this is not true
t1('a', 'a') SUBMULTISET OF t1('a');

If we ignore the fact that nested collections can be multisets and pretend we're working with sets only, then the emulation of the SUBMULTISET operator is relatively easy:

-- Nested table version
SELECT * FROM t WHERE t1('abc', 'xyz') SUBMULTISET OF t1;

-- Varray version
SELECT *
FROM t
WHERE t2 IS NOT NULL
AND EXISTS (
  SELECT 1 FROM TABLE(t2) 
  WHERE column_value = 'abc'
  INTERSECT
  SELECT 1 FROM TABLE(t2) 
  WHERE column_value = 'xyz'
);

Yielding, once more:

ID  T1                        T2
-----------------------------------------------------
3   T1('abc', 'xyz', 'zzz')   T2('abc', 'xyz', 'zzz')

If we’re really working with multisets, things are a bit more tricky:

-- Nested table version
SELECT * FROM t WHERE t1('dup', 'dup') SUBMULTISET OF t1;

-- Varray version
SELECT *
FROM t
WHERE t2 IS NOT NULL
AND NOT EXISTS (
  SELECT column_value, count(*)
  FROM TABLE (t2('dup', 'dup')) x
  GROUP BY column_value
  HAVING count(*) > (
    SELECT count(*)
    FROM TABLE (t2) y
    WHERE y.column_value = x.column_value
  )
);

Yielding:

ID  T1                        T2
-----------------------------------------------------
4   T1('dup', 'dup', 'dup')   T2('dup', 'dup', 'dup')

How does it work? In the NOT EXISTS correlated subquery, we’re counting the number of duplicate values in the potential SUBMULTISET, effectively turning that SUBMULTISET into a SET using the GROUP BY operation.

We’re then comparing that count value from the left operand with the corresponding count value from the right operand. If there is no value in the left operand whose number of occurrences is bigger than the number of occurrences of that value in the right operand, then the whole left operand is a SUBMULTISET of the right operand.
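
To see the two counts that are being compared, we can evaluate the counting logic standalone, with both operands as varray literals (my illustration, using the data from above):

SELECT 
  x.column_value,
  count(*) left_count,
  (SELECT count(*) 
   FROM TABLE (t2('dup', 'dup', 'dup')) y
   WHERE y.column_value = x.column_value) right_count
FROM TABLE (t2('dup', 'dup')) x
GROUP BY x.column_value;

-- COLUMN_VALUE  LEFT_COUNT  RIGHT_COUNT
-- --------------------------------------
-- dup           2           3

Since no LEFT_COUNT exceeds its RIGHT_COUNT, the left operand is a submultiset of the right one.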

Cool, eh? We’ll talk about performance another time :-)

MULTISET operators

Also very interesting, the multiset operators:

  • MULTISET EXCEPT [ ALL | DISTINCT ]
  • MULTISET INTERSECT [ ALL | DISTINCT ]
  • MULTISET UNION [ ALL | DISTINCT ]

Notice how there are some differences to the ordinary set operators that can be used in SELECT statements. In particular:

  • EXCEPT is used as defined in the standard, not MINUS
  • ALL is supported on all three operators, not just on UNION
  • ALL is the default, not DISTINCT

How can we work with these operators? Consider these queries:

SELECT id, t1 MULTISET EXCEPT t1('aaa', 'abc', 'dup', 'dup') r 
FROM t;

SELECT id, t1 MULTISET EXCEPT ALL t1('aaa', 'abc', 'dup', 'dup') r 
FROM t;

Both yielding:

ID   R
---------------------
1    (null)
2    T1()
3    T1('xyz', 'zzz')
4    T1('dup')

With this operator, we’re removing each element of the right operand once from the left operand:

  • 'aaa' does not appear in the left operand, so nothing happens
  • 'abc' appears on row with ID = 3 and we remove it
  • 'dup' appears on row with ID = 4, 3 times, and we remove it twice, leaving one value

Conversely, when adding DISTINCT, we’ll get:

SELECT id, t1 MULTISET EXCEPT DISTINCT t1('aaa', 'abc', 'dup') r FROM t;

Yielding:

ID   R
---------------------
1    (null)
2    T1()
3    T1('xyz', 'zzz')
4    T1()

The only difference is on the row with ID = 4, where all 'dup' values were removed, regardless of how many there were on either side of the MULTISET EXCEPT DISTINCT operator.

How to emulate this for varrays?

DISTINCT version

This is a bit easier, because we can now use MINUS:

-- Nested table version
SELECT t1 MULTISET EXCEPT DISTINCT t1('aaa', 'abc', 'dup', 'dup') 
FROM t;

-- Varray version
SELECT 
  id,
  CASE 
    WHEN t2 IS NULL THEN NULL 
    ELSE 
      CAST(MULTISET(
        SELECT column_value
        FROM TABLE (t2)
        MINUS
        SELECT column_value
        FROM TABLE (t2('aaa', 'abc', 'dup', 'dup'))
      ) AS t2)
  END r
FROM t;

Luckily, we can still cast a structural MULTISET type that we can obtain using the MULTISET() operator to a varray type. This greatly simplifies the task.
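
In isolation, the pattern looks like this (a minimal sketch):

SELECT CAST(MULTISET(
  SELECT column_value
  FROM TABLE (t2('a', 'b', 'b'))
) AS t2) r
FROM dual;

-- R
-- -----------------
-- T2('a', 'b', 'b')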

ALL version

If we want the MULTISET EXCEPT or MULTISET EXCEPT ALL semantics, things are trickier. Here’s a solution that resorts to using window functions, in order to turn a MULTISET back into a SET:

-- Nested table version
SELECT t1 MULTISET EXCEPT ALL t1('aaa', 'abc', 'dup', 'dup') 
FROM t;

-- Varray version
SELECT 
  id,
  CASE 
    WHEN t2 IS NULL THEN NULL 
    ELSE 
      CAST(MULTISET(
        SELECT column_value
        FROM (
          SELECT 
            column_value,
            row_number() OVER (
              PARTITION BY column_value 
              ORDER BY column_value) rn
          FROM TABLE (t2)
          MINUS
          SELECT 
            column_value, 
            row_number() OVER (
              PARTITION BY column_value 
              ORDER BY column_value) rn
          FROM TABLE (t2('aaa', 'abc', 'dup', 'dup'))
        )
      ) AS t2)
  END r
FROM t;

How does this work? Let's look at what this ROW_NUMBER() evaluates to on each row. For this, we use OUTER APPLY:

SELECT id, t2, column_value, rn
FROM t
OUTER APPLY (
  SELECT 
    column_value,
    row_number() OVER (
      PARTITION BY column_value
      ORDER BY column_value) rn
  FROM TABLE (t2)
);

The result is:

ID      T2                       COLUMN_VALUE  RN
-----------------------------------------------------
1       (null)                   (null)        (null)
2       T2()                     (null)        (null)
3       T2('abc', 'xyz', 'zzz')  abc           1
3       T2('abc', 'xyz', 'zzz')  xyz           1
3       T2('abc', 'xyz', 'zzz')  zzz           1
4       T2('dup', 'dup', 'dup')  dup           1
4       T2('dup', 'dup', 'dup')  dup           2
4       T2('dup', 'dup', 'dup')  dup           3

As can be seen, each duplicate value gets assigned a unique row number due to the nature of how ROW_NUMBER() works (this property can be very useful for solving the gaps-and-islands problem; see trick #4).

Now that we turned our (COLUMN_VALUE) multiset into a (COLUMN_VALUE, RN) set (without duplicates), we can use MINUS again.

MULTISET INTERSECT and MULTISET UNION

MULTISET INTERSECT works exactly the same way as MULTISET EXCEPT, with the same window function based emulation in the MULTISET INTERSECT ALL case. MULTISET UNION is simpler, because Oracle knows UNION ALL, so we do not need to resort to such trickery.
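
For completeness, here's a sketch of the varray emulations of MULTISET INTERSECT DISTINCT and MULTISET UNION ALL, following the same CAST(MULTISET(...)) pattern as before (the right-hand operands are just example literals; MULTISET INTERSECT ALL would again need the ROW_NUMBER() trick shown above):

-- Varray version of t2 MULTISET INTERSECT DISTINCT t2('abc', 'dup')
SELECT 
  id,
  CASE 
    WHEN t2 IS NULL THEN NULL 
    ELSE 
      CAST(MULTISET(
        SELECT column_value FROM TABLE (t2)
        INTERSECT
        SELECT column_value FROM TABLE (t2('abc', 'dup'))
      ) AS t2)
  END r
FROM t;

-- Varray version of t2 MULTISET UNION ALL t2('abc', 'dup')
SELECT 
  id,
  CASE 
    WHEN t2 IS NULL THEN NULL 
    ELSE 
      CAST(MULTISET(
        SELECT column_value FROM TABLE (t2)
        UNION ALL
        SELECT column_value FROM TABLE (t2('abc', 'dup'))
      ) AS t2)
  END r
FROM t;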

Conclusion

Nested collections are a very powerful tool in Oracle SQL. Oracle knows two types of nested collections:

  • Nested tables
  • Varrays

Nested tables are trickier to maintain as you have to think of their storage more explicitly. Varrays can just be embedded into ordinary tables like any other column. But there's a price to pay for using varrays: Oracle regrettably doesn't support any of the above very useful multiset conditions and multiset operators on them.

Luckily, when you encounter a situation where you have varrays and cannot change that, you can still emulate each of the operators using more traditional SQL.

How SQL DISTINCT and ORDER BY are Related

One of the things that confuse SQL users all the time is how DISTINCT and ORDER BY are related in a SQL query.

The Basics

Running some queries against the Sakila database, most people quickly understand:

SELECT DISTINCT length FROM film

This returns results in an arbitrary order, because the database can (and might) apply hashing rather than ordering to remove duplicates:

length |
-------|
129    |
106    |
120    |
171    |
138    |
80     |
...

Most people also understand:

SELECT length FROM film ORDER BY length

This will give us duplicates, but in order:

length |
-------|
46     |
46     |
46     |
46     |
46     |
47     |
47     |
47     |
47     |
47     |
47     |
47     |
48     |
...

And, of course, we can combine the two:

SELECT DISTINCT length FROM film ORDER BY length

Resulting in…

length |
-------|
46     |
47     |
48     |
49     |
50     |
51     |
52     |
53     |
54     |
55     |
56     |
...

Then why doesn’t this work?

Somewhat intuitively, we may want to order the lengths differently, e.g. by title:

SELECT DISTINCT length FROM film ORDER BY title

Most databases fail this query with an exception like Oracle’s:

ORA-01791: not a SELECTed expression

At first sight, this seems funny, because this works after all:

SELECT length FROM film ORDER BY title

Yielding:

length |
-------|
86     |
48     |
50     |
117    |
130    |
...

We could add the title to illustrate the ordering:

length |title                       |
-------|----------------------------|
86     |ACADEMY DINOSAUR            |
48     |ACE GOLDFINGER              |
50     |ADAPTATION HOLES            |
117    |AFFAIR PREJUDICE            |
130    |AFRICAN EGG                 |

So, how are these different?

We have to rewind and check out the logical order of SQL operations (as opposed to the syntactic order). And always remember, this is the logical order, not the actual order executed by the optimiser.

When we write something like this:

SELECT DISTINCT length FROM film ORDER BY length

The logical order of operations is:

  • FROM clause, loading the FILM table
  • SELECT clause, projecting the LENGTH column
  • DISTINCT clause, removing duplicate tuples (with the projected LENGTH column)
  • ORDER BY clause, ordering by the LENGTH column

If we look at this step by step, we have:

Step 1: SELECT * FROM film

The intermediary data set is something like:

film_id |title                       |length | ...
--------|----------------------------|-------| ...
1       |ACADEMY DINOSAUR            |86     | ...
2       |ACE GOLDFINGER              |48     | ...
3       |ADAPTATION HOLES            |50     | ...
4       |AFFAIR PREJUDICE            |117    | ...
5       |AFRICAN EGG                 |130    | ...
...     |...                         |...    | ...

Step 2: SELECT length …

The intermediary data set is something like:

length |
-------|
86     |
48     |
50     |
117    |
130    |
...
86     | <-- duplicate

Step 3: SELECT DISTINCT length …

Now we’re getting a new random order (due to hashing) and no duplicates anymore:

length |
-------|
129    |
106    |
120    |
171    |
138    |
...

Step 4: … ORDER BY length

And we’re getting:

length |
-------|
46     |
47     |
48     |
49     |
50     |
...

It seems obvious.

So why did this work?

Remember, this query worked:

SELECT length FROM film ORDER BY title

Even if, after projecting the LENGTH column, it may seem as though the TITLE column were no longer available for sorting, it really is, according to the SQL standard and to common sense. There is a concept called extended sort key columns in the SQL standard, which means the above query has a slightly different order of operations (apart from the fact that there is no DISTINCT operation):

  • FROM clause, loading the FILM table
  • SELECT clause, projecting the LENGTH column from the select list and the TITLE from the extended sort key columns
  • ORDER BY clause, ordering by the TITLE column
  • SELECT clause (implicit), projecting only the LENGTH column, discarding the TITLE column

Again, this is what happens logically. Database optimisers may choose other ways to implement this. For example:

Step 1: SELECT * FROM film

Same as before

film_id |title                       |length | ...
--------|----------------------------|-------| ...
1       |ACADEMY DINOSAUR            |86     | ...
2       |ACE GOLDFINGER              |48     | ...
3       |ADAPTATION HOLES            |50     | ...
4       |AFFAIR PREJUDICE            |117    | ...
5       |AFRICAN EGG                 |130    | ...
...     |...                         |...    | ...

Step 2: SELECT length, title…

We get that synthetic extended sort key column TITLE along with the LENGTH column that we requested

length |title                       |
-------|----------------------------|
86     |ACADEMY DINOSAUR            |
114    |ALABAMA DEVIL               |
50     |ADAPTATION HOLES            |
117    |AFFAIR PREJUDICE            |
168    |ANTITRUST TOMATOES          |
...

Step 3: … ORDER BY title

… we can now order by that column

length |title                       |
-------|----------------------------|
86     |ACADEMY DINOSAUR            |
48     |ACE GOLDFINGER              |
50     |ADAPTATION HOLES            |
117    |AFFAIR PREJUDICE            |
130    |AFRICAN EGG                 |
...

Step 4: SELECT length

… and finally discard it, because we never wanted it

length |
-------|
86     |
48     |
50     |
117    |
130    |

So why can’t we use DISTINCT?

If we try to run this:

SELECT DISTINCT length FROM film ORDER BY title

We would get an additional DISTINCT operation in our logical set of operations:

  • FROM clause, loading the FILM table
  • SELECT clause, projecting the LENGTH column from the select list and the TITLE from the extended sort key columns
  • DISTINCT clause, removing duplicate (LENGTH, TITLE) values… Ooops
  • ORDER BY clause, ordering by the TITLE column
  • SELECT clause (implicit), projecting only the LENGTH column, discarding the TITLE column

The problem is, since we have synthetically added the extended sort key column TITLE to the projection in order to be able to ORDER BY it, DISTINCT wouldn’t have the same semantics anymore as can be seen here:

SELECT count(*)
FROM (
  SELECT DISTINCT length FROM film
) t;

SELECT count(*)
FROM (
  SELECT DISTINCT length, title FROM film
) t;

Yielding

140
1000

All titles are distinct. There is no way this query can be executed reasonably. Either DISTINCT doesn’t work (because the added extended sort key column changes its semantics), or ORDER BY doesn’t work (because after DISTINCT we can no longer access the extended sort key column).

A more constructed example. T contains this data:

CREATE TABLE t (a INT, b INT);
INSERT INTO t VALUES (1, 1);
INSERT INTO t VALUES (1, 2);
INSERT INTO t VALUES (2, 3);
INSERT INTO t VALUES (1, 4);
INSERT INTO t VALUES (2, 5);

A   B
-----
1   1
1   2
2   3
1   4
2   5

What would this query produce?

SELECT DISTINCT a FROM t ORDER BY b;

Clearly, we should only get 2 rows with values 1, 2, because of DISTINCT a:

A 
--
1
2

Now, how do we order these by B? There are 3 values of B associated with A = 1 and 2 values of B associated with A = 2:

A   B
------------------
1   Any of 1, 2, 4
2   Any of 3, 5

Should we get 1, 2 or 2, 1 as a result? Impossible to tell.
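
If we do need a deterministic result, we have to state explicitly which B value represents each group, e.g. with an aggregate function (my suggested workaround; the standard simply rejects the original query):

SELECT a
FROM t
GROUP BY a
ORDER BY min(b);

-- A
-- --
-- 1    (min(b) = 1)
-- 2    (min(b) = 3)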

But there are some exceptions

The way I read the SQL standard, the following exception should be possible. The SQL standard ISO/IEC 9075-2:2016(E), 7.17 <query expression>, Syntax Rules 28) d) i) 6) references the “Left normal form derivation”. But I may be reading this wrong, see also a discussion on the PostgreSQL mailing list:
https://www.postgresql.org/message-id/20030819103859.L69440-100000%40megazone.bigpanda.com

In any case, it still makes sense to me. For instance, we can form expressions on the columns in the select list. This is totally fine in MySQL (strict mode) and Oracle:

SELECT DISTINCT length 
FROM film 
ORDER BY mod(length, 10), length;

It will produce

length |
-------|
50     |
60     |
70     |
80     |
90     |
100    |
110    |
120    |
130    |
140    |
150    |
160    |
170    |
180    |
51     |
61     |
71     |

PostgreSQL doesn’t allow this because the expression MOD(LENGTH, 10) is not in the select list. How to interpret this? We’re looking again at the order of SQL operations:

  • FROM clause, loading the FILM table
  • SELECT clause, projecting the LENGTH column from the select list. MOD(LENGTH, 10) does not have to be put in the extended sort key columns, because it can be fully derived from the select list.
  • DISTINCT clause, removing duplicate LENGTH values … all fine, because we don’t have the verboten extended sort key columns
  • ORDER BY clause, ordering by the mod(LENGTH, 10), LENGTH columns. Totally fine, because we can derive all of these order by expressions from expressions in the select list

Makes sense, right?
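
If the same query also has to run on PostgreSQL, one portable workaround (my suggestion, not something the standard mandates) is to apply DISTINCT in a derived table and put the expressions in the outer ORDER BY:

SELECT length
FROM (
  SELECT DISTINCT length FROM film
) t
ORDER BY mod(length, 10), length;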

Back to our constructed table T:

A   B
-----
1   1
1   2
2   3
1   4
2   5

We are allowed to write:

SELECT DISTINCT a, b FROM t ORDER BY a - b;

We would get:

A   B
-----
1   4
2   5
2   3
1   2
1   1

Again, the order by expressions can be derived completely from the select list. This also works in Oracle:

SELECT DISTINCT a - b FROM t ORDER BY abs(a - b);

The select list contains a column A - B, so we can derive any ORDER BY expression from it. But these don’t work:

SELECT DISTINCT a - b FROM t ORDER BY a;
SELECT DISTINCT a - b FROM t ORDER BY b;
SELECT DISTINCT a - b FROM t ORDER BY b - a;

It is easy to build an intuition for why these don’t work. Clearly, the data set we want is:

A - B  A             B             B - A
------------------------------------------
-3     Any of 1, 2   Any of 4, 5   3
-1     Any of 2, 1   Any of 3, 2   1
 0     Any of 1      Any of 1      0

Now, how are we supposed to order these by A, B or B - A? It looks as though we should be able to sort by B - A in this case. We could derive a complicated transformation of expressions that can be reasonably transformed into each other, such as A - B = -(B - A), but this simply isn’t practical. The expression in the projection is A - B, and that’s the only expression we can re-use in the ORDER BY. For example, we could even do this in Oracle:

SELECT DISTINCT a - b FROM t ORDER BY abs((a - b) + (a - b));

Or start using aliases:

SELECT DISTINCT a - b AS x FROM t ORDER BY abs(x + x);

PostgreSQL DISTINCT ON

PostgreSQL has a nice feature for when you want to order by something from within a group of non-distinct values. Remember how this wasn’t possible?

SELECT DISTINCT length FROM film ORDER BY title

Well, this is:

SELECT DISTINCT ON (title) length FROM film ORDER BY title

And we now get:

length |
-------|
86     |
48     |
50     |
117    |
130    |
169    |
62     |
...

What we're essentially doing is, we take all distinct lengths, and for each group of identical lengths, we're taking the top title as a criterion to order by. In a way, this is syntax sugar for this:

SELECT length
FROM (
  SELECT length, MIN(title) title
  FROM film
  GROUP BY length
) t
ORDER BY title

Which is what most people really want, when they ORDER BY something they cannot really order by.
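
In databases without DISTINCT ON, the same effect can be achieved with a window function (a sketch, equivalent to the GROUP BY version above):

SELECT length
FROM (
  SELECT 
    length,
    title,
    row_number() OVER (PARTITION BY length ORDER BY title) rn
  FROM film
) t
WHERE rn = 1
ORDER BY title;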

Conclusion

The SQL language is quirky. This is mostly because the syntactical order of operations doesn’t match the logical order of operations. The syntax is meant to be human readable (remember Structured English Query Language?) but when reasoning about a SQL statement, we would often like to directly write down the logical order of operations.

In this article, we haven’t even touched the implications of adding

  • GROUP BY
  • TOP / LIMIT / FETCH
  • UNION

Which add more fun rules to what’s possible and what isn’t. Our previous article on the true logical order of SQL operations explains this completely.


PostgreSQL 11’s Support for SQL Standard GROUPS and EXCLUDE Window Function Clauses

Exciting discovery when playing around with PostgreSQL 11! The new SQL standard window function clauses are now supported. If you want to play with this, you can do so very easily using docker:

docker pull postgres:11
docker run --name POSTGRES11 -e POSTGRES_PASSWORD=postgres -d postgres:11
docker run -it --rm --link POSTGRES11:postgres postgres psql -h postgres -U postgres

See also: https://hub.docker.com/r/_/postgres

The frame clause

When working with window functions, in some cases you want to add the optional frame clause. For example, to get a sliding average over your data, you will write:

SELECT 
  payment_date,
  amount,
  avg(amount) OVER (
    ORDER BY payment_date, payment_id
    ROWS BETWEEN 2 PRECEDING AND 2 FOLLOWING
  )::DECIMAL(10, 2),
  array_agg(amount) OVER (
    ORDER BY payment_date, payment_id
    ROWS BETWEEN 2 PRECEDING AND 2 FOLLOWING
  )
FROM payment;

As always I will be running queries against the Sakila database. The above query yields:

payment_date        |amount |avg  |array_agg                   |
--------------------|-------|-----|----------------------------|
2005-05-24 22:53:30 |2.99   |3.32 |          {2.99,2.99,3.99}  |
2005-05-24 22:54:33 |2.99   |3.74 |     {2.99,2.99,3.99,4.99}  |
2005-05-24 23:03:39 |3.99   |4.39 |{2.99,2.99,3.99,4.99,6.99}  |
2005-05-24 23:04:41 |4.99   |3.99 |{2.99,3.99,4.99,6.99,0.99}  |
2005-05-24 23:05:21 |6.99   |3.79 |{3.99,4.99,6.99,0.99,1.99}  |
2005-05-24 23:08:07 |0.99   |3.99 |{4.99,6.99,0.99,1.99,4.99}  |
2005-05-24 23:11:53 |1.99   |3.99 |{6.99,0.99,1.99,4.99,4.99}  |
2005-05-24 23:31:46 |4.99   |3.79 |{0.99,1.99,4.99,4.99,5.99}  |

The array_agg function helps display how the sliding average came to be. For each average value, we’re looking 2 rows ahead and 2 rows behind in the ordered window.

In the above query, I’m using the optional frame clause to specify the frame size. It has three “modes” or “units”:

<window frame units> ::=
  ROWS
| RANGE
| GROUPS

Almost all databases that support window functions support the first two unit types. To my knowledge, only PostgreSQL 11 now also supports GROUPS. The difference is rather simple to explain:

  • ROWS counts the exact number of rows in the frame.
  • RANGE performs logical windowing where we don’t count the number of rows, but look for a value offset.
  • GROUPS counts all groups of tied rows within the window.

I think this is best explained by example. Let’s look at payments with payment timestamps truncated to the hour:

WITH hourly_payment AS (
  SELECT 
    payment_id,
    date_trunc('h', payment_date) AS hour,
    amount
  FROM payment
)
SELECT *
FROM hourly_payment
ORDER BY hour;

This gives us:

payment_id |hour                |amount |
-----------|--------------------|-------|
12377      |2005-05-24 22:00:00 |2.99   | \  Tied group
3504       |2005-05-24 22:00:00 |2.99   | /

6440       |2005-05-24 23:00:00 |4.99   | \
11032      |2005-05-24 23:00:00 |3.99   |  |
8987       |2005-05-24 23:00:00 |4.99   |  | Tied group
6003       |2005-05-24 23:00:00 |6.99   |  |
14728      |2005-05-24 23:00:00 |0.99   |  |
7274       |2005-05-24 23:00:00 |1.99   | /

12025      |2005-05-25 00:00:00 |0.99   | \
3831       |2005-05-25 00:00:00 |8.99   |  |
7044       |2005-05-25 00:00:00 |4.99   |  |
8623       |2005-05-25 00:00:00 |9.99   |  | Tied group
3386       |2005-05-25 00:00:00 |4.99   |  |
8554       |2005-05-25 00:00:00 |4.99   |  |
10785      |2005-05-25 00:00:00 |5.99   |  |
9014       |2005-05-25 00:00:00 |6.99   | /

15394      |2005-05-25 01:00:00 |2.99   | \
10499      |2005-05-25 01:00:00 |4.99   |  |
5020       |2005-05-25 01:00:00 |2.99   |  | Tied group
490        |2005-05-25 01:00:00 |0.99   |  |
12305      |2005-05-25 01:00:00 |4.99   | /

11796      |2005-05-25 02:00:00 |4.99   | \
9463       |2005-05-25 02:00:00 |4.99   |  | Tied group
13711      |2005-05-25 02:00:00 |4.99   | /

Now we can see that for each hour, we have several payments. When we order payments by hour, there are some “tied” payments within that hour (or “group”), i.e. the payments on 2005-05-24 22:00:00 are not ordered deterministically among themselves. The payment ids are pretty random.

Now, if we look at the three window frame units again, how do they behave?

ROWS

WITH hourly_payment AS (
  SELECT 
    payment_id,
    date_trunc('h', payment_date) AS hour
  FROM payment
)
SELECT 
  payment_id,
  hour,
  array_agg(payment_id) OVER (
    ORDER BY hour
    ROWS BETWEEN 2 PRECEDING AND 2 FOLLOWING
  )
FROM hourly_payment
ORDER BY hour;

We can see that the size of the window is always precisely 5 rows (except at the beginning and end of the data set):

payment_id |hour                |array_agg                      |
-----------|--------------------|-------------------------------|
12377      |2005-05-24 22:00:00 |{12377,3504,6440}              |
3504       |2005-05-24 22:00:00 |{12377,3504,6440,11032}        |
6440       |2005-05-24 23:00:00 |{12377,3504,6440,11032,8987}   |
11032      |2005-05-24 23:00:00 |{3504,6440,11032,8987,6003}    |
8987       |2005-05-24 23:00:00 |{6440,11032,8987,6003,14728}   |
6003       |2005-05-24 23:00:00 |{11032,8987,6003,14728,7274}   |
14728      |2005-05-24 23:00:00 |{8987,6003,14728,7274,12025}   |
7274       |2005-05-24 23:00:00 |{6003,14728,7274,12025,3831}   |
12025      |2005-05-25 00:00:00 |{14728,7274,12025,3831,7044}   |
3831       |2005-05-25 00:00:00 |{7274,12025,3831,7044,8623}    |
7044       |2005-05-25 00:00:00 |{12025,3831,7044,8623,3386}    |
8623       |2005-05-25 00:00:00 |{3831,7044,8623,3386,8554}     |
3386       |2005-05-25 00:00:00 |{7044,8623,3386,8554,10785}    |
8554       |2005-05-25 00:00:00 |{8623,3386,8554,10785,9014}    |
10785      |2005-05-25 00:00:00 |{3386,8554,10785,9014,15394}   |
9014       |2005-05-25 00:00:00 |{8554,10785,9014,15394,10499}  |
15394      |2005-05-25 01:00:00 |{10785,9014,15394,10499,5020}  |
10499      |2005-05-25 01:00:00 |{9014,15394,10499,5020,490}    |
5020       |2005-05-25 01:00:00 |{15394,10499,5020,490,12305}   |
490        |2005-05-25 01:00:00 |{10499,5020,490,12305,11796}   |
12305      |2005-05-25 01:00:00 |{5020,490,12305,11796,9463}    |
11796      |2005-05-25 02:00:00 |{490,12305,11796,9463,13711}   |
9463       |2005-05-25 02:00:00 |{12305,11796,9463,13711,8167}  |
13711      |2005-05-25 02:00:00 |{11796,9463,13711,8167,1011}   |

There is no notion of a “group” among the rows in the window. But the problem is that we’re getting random PAYMENT_ID values unless we also add the PAYMENT_ID to the ORDER BY clause. This isn’t really what we want, most of the time, so we use:

RANGE

WITH hourly_payment AS (
  SELECT 
    payment_id,
    date_trunc('h', payment_date) AS hour
  FROM payment
)
SELECT 
  payment_id,
  hour,
  EXTRACT(epoch FROM hour) / 3600,
  array_agg(payment_id) OVER (
    ORDER BY EXTRACT(epoch FROM hour) / 3600
    RANGE BETWEEN 2 PRECEDING AND 2 FOLLOWING
  )
FROM hourly_payment
ORDER BY hour;

I have switched from ROWS to RANGE and now the ORDER BY clause works on a number based on the epoch of the hour. What happens now?

This now yields:

payment_id |hour                |?column? |array_agg                                                                                                                                                              
-----------|--------------------|---------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
12377      |2005-05-24 22:00:00 |310270   |{12377,3504,  6440,11032,8987,6003,14728,7274,  12025,3831,7044,8623,3386,8554,10785,9014}
3504       |2005-05-24 22:00:00 |310270   |{12377,3504,  6440,11032,8987,6003,14728,7274,  12025,3831,7044,8623,3386,8554,10785,9014}

6440       |2005-05-24 23:00:00 |310271   |{12377,3504,  6440,11032,8987,6003,14728,7274,  12025,3831,7044,8623,3386,8554,10785,9014,  15394,10499,5020,490,12305}
11032      |2005-05-24 23:00:00 |310271   |{12377,3504,  6440,11032,8987,6003,14728,7274,  12025,3831,7044,8623,3386,8554,10785,9014,  15394,10499,5020,490,12305}
8987       |2005-05-24 23:00:00 |310271   |{12377,3504,  6440,11032,8987,6003,14728,7274,  12025,3831,7044,8623,3386,8554,10785,9014,  15394,10499,5020,490,12305}
6003       |2005-05-24 23:00:00 |310271   |{12377,3504,  6440,11032,8987,6003,14728,7274,  12025,3831,7044,8623,3386,8554,10785,9014,  15394,10499,5020,490,12305}
14728      |2005-05-24 23:00:00 |310271   |{12377,3504,  6440,11032,8987,6003,14728,7274,  12025,3831,7044,8623,3386,8554,10785,9014,  15394,10499,5020,490,12305}
7274       |2005-05-24 23:00:00 |310271   |{12377,3504,  6440,11032,8987,6003,14728,7274,  12025,3831,7044,8623,3386,8554,10785,9014,  15394,10499,5020,490,12305}

12025      |2005-05-25 00:00:00 |310272   |{12377,3504,  6440,11032,8987,6003,14728,7274,  12025,3831,7044,8623,3386,8554,10785,9014,  15394,10499,5020,490,12305,  11796,9463,13711}
3831       |2005-05-25 00:00:00 |310272   |{12377,3504,  6440,11032,8987,6003,14728,7274,  12025,3831,7044,8623,3386,8554,10785,9014,  15394,10499,5020,490,12305,  11796,9463,13711}
7044       |2005-05-25 00:00:00 |310272   |{12377,3504,  6440,11032,8987,6003,14728,7274,  12025,3831,7044,8623,3386,8554,10785,9014,  15394,10499,5020,490,12305,  11796,9463,13711}
8623       |2005-05-25 00:00:00 |310272   |{12377,3504,  6440,11032,8987,6003,14728,7274,  12025,3831,7044,8623,3386,8554,10785,9014,  15394,10499,5020,490,12305,  11796,9463,13711}
3386       |2005-05-25 00:00:00 |310272   |{12377,3504,  6440,11032,8987,6003,14728,7274,  12025,3831,7044,8623,3386,8554,10785,9014,  15394,10499,5020,490,12305,  11796,9463,13711}
8554       |2005-05-25 00:00:00 |310272   |{12377,3504,  6440,11032,8987,6003,14728,7274,  12025,3831,7044,8623,3386,8554,10785,9014,  15394,10499,5020,490,12305,  11796,9463,13711}
10785      |2005-05-25 00:00:00 |310272   |{12377,3504,  6440,11032,8987,6003,14728,7274,  12025,3831,7044,8623,3386,8554,10785,9014,  15394,10499,5020,490,12305,  11796,9463,13711}
9014       |2005-05-25 00:00:00 |310272   |{12377,3504,  6440,11032,8987,6003,14728,7274,  12025,3831,7044,8623,3386,8554,10785,9014,  15394,10499,5020,490,12305,  11796,9463,13711}

15394      |2005-05-25 01:00:00 |310273   |{6440,11032,8987,6003,14728,7274,  12025,3831,7044,8623,3386,8554,10785,9014,  15394,10499,5020,490,12305,  11796,9463,13711,  8167,1011,1203,10019,6245}
10499      |2005-05-25 01:00:00 |310273   |{6440,11032,8987,6003,14728,7274,  12025,3831,7044,8623,3386,8554,10785,9014,  15394,10499,5020,490,12305,  11796,9463,13711,  8167,1011,1203,10019,6245}
5020       |2005-05-25 01:00:00 |310273   |{6440,11032,8987,6003,14728,7274,  12025,3831,7044,8623,3386,8554,10785,9014,  15394,10499,5020,490,12305,  11796,9463,13711,  8167,1011,1203,10019,6245}
490        |2005-05-25 01:00:00 |310273   |{6440,11032,8987,6003,14728,7274,  12025,3831,7044,8623,3386,8554,10785,9014,  15394,10499,5020,490,12305,  11796,9463,13711,  8167,1011,1203,10019,6245}
12305      |2005-05-25 01:00:00 |310273   |{6440,11032,8987,6003,14728,7274,  12025,3831,7044,8623,3386,8554,10785,9014,  15394,10499,5020,490,12305,  11796,9463,13711,  8167,1011,1203,10019,6245}

11796      |2005-05-25 02:00:00 |310274   |{12025,3831,7044,8623,3386,8554,10785,9014,  15394,10499,5020,490,12305,  11796,9463,13711,  8167,1011,1203,10019,6245,14396,13055,15984,9975,8188,5596,2388,7347,11598,6186}
9463       |2005-05-25 02:00:00 |310274   |{12025,3831,7044,8623,3386,8554,10785,9014,  15394,10499,5020,490,12305,  11796,9463,13711,  8167,1011,1203,10019,6245,14396,13055,15984,9975,8188,5596,2388,7347,11598,6186}
13711      |2005-05-25 02:00:00 |310274   |{12025,3831,7044,8623,3386,8554,10785,9014,  15394,10499,5020,490,12305,  11796,9463,13711,  8167,1011,1203,10019,6245,14396,13055,15984,9975,8188,5596,2388,7347,11598,6186}

I’ve visually separated the rows by their hour and the array aggregation by the “tied” payment_ids, i.e. the payment IDs that have the same hour.

Observations:

  1. We get the same aggregation value for the entire set of tied rows, so if in two rows, HOUR is the same, then ARRAY_AGG is the same as well
  2. The window size is now a logical size, no longer an offset size, so we’re going back 2 hours and ahead 2 hours (instead of 2 rows). This is why I’ve extracted epoch and divided it by hour, so I will get consecutive integer values for consecutive hours

The same result could have been achieved using interval types:

WITH hourly_payment AS (
  SELECT 
    payment_id,
    date_trunc('h', payment_date) AS hour
  FROM payment
)
SELECT 
  payment_id,
  hour,
  EXTRACT(epoch FROM hour) / 3600,
  array_agg(payment_id) OVER (
    ORDER BY hour
    RANGE BETWEEN INTERVAL '2 hours' PRECEDING 
              AND INTERVAL '2 hours' FOLLOWING
  )
FROM hourly_payment
ORDER BY hour;

See also this article for details:
https://blog.jooq.org/2016/10/31/a-little-known-sql-feature-use-logical-windowing-to-aggregate-sliding-ranges/

GROUPS

The third frame unit is quite useful, as we can now frame the window to a number of groups of same values. In our case, all payments of the same hour are in the same group. So, in order to get a similar result again, we can write:

WITH hourly_payment AS (
  SELECT 
    payment_id,
    payment_date,
    date_trunc('h', payment_date) AS hour
  FROM payment
)
SELECT 
  payment_id,
  hour,
  array_agg(payment_id) OVER (
    ORDER BY hour
    GROUPS BETWEEN 2 PRECEDING AND 2 FOLLOWING
  )
FROM hourly_payment
ORDER BY hour;

In fact, this is not exactly the same result, because if we have gaps in the hours, GROUPS will simply jump over the gaps, whereas RANGE will not.
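
A minimal sketch of that difference, using constructed values with a gap (my example, not Sakila data):

WITH t(v) AS (
  VALUES (1), (3), (5)
)
SELECT
  v,
  array_agg(v) OVER (ORDER BY v 
    RANGE  BETWEEN 1 PRECEDING AND 1 FOLLOWING) by_range,
  array_agg(v) OVER (ORDER BY v 
    GROUPS BETWEEN 1 PRECEDING AND 1 FOLLOWING) by_groups
FROM t;

-- v |by_range |by_groups |
-- --|---------|----------|
-- 1 |{1}      |{1,3}     |
-- 3 |{3}      |{1,3,5}   |
-- 5 |{5}      |{3,5}     |

RANGE looks for values within an offset of 1 and finds none besides the current value, whereas GROUPS happily includes the neighbouring groups across the gaps.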

Summary of ROWS, RANGE, GROUPS

The above case was a real world use-case. A more constructed example that might be easier to digest, visually, can be seen here:

WITH t(id, v) AS (
  VALUES (1, 1), (2, 1), (3, 3), (4, 5), (5, 5), (6, 5), (7, 6)
)
SELECT
  id,
  v,
  array_agg(id) OVER rows,
  array_agg(v)  OVER rows,
  array_agg(id) OVER range,
  array_agg(v)  OVER range,
  array_agg(id) OVER groups,
  array_agg(v)  OVER groups
FROM t
WINDOW 
  o AS (ORDER BY v),
  rows AS (o ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING),
  range AS (o RANGE BETWEEN 1 PRECEDING AND 1 FOLLOWING),
  groups AS (o GROUPS BETWEEN 1 PRECEDING AND 1 FOLLOWING)

Notice, I'm using the SQL standard WINDOW clause to be able to name and reuse a repeated window specification. I've seen this clause supported in:

  • MySQL 8.0
  • PostgreSQL
  • Sybase SQL Anywhere

The query yields:

id |v |array_agg |array_agg |array_agg |array_agg |array_agg     |array_agg     |
---|--|----------|----------|----------|----------|--------------|--------------|
1  |1 |{1,2}     |{1,1}     |{1,2}     |{1,1}     |{1,2,3}       |{1,1,3}       |
2  |1 |{1,2,3}   |{1,1,3}   |{1,2}     |{1,1}     |{1,2,3}       |{1,1,3}       |
3  |3 |{2,3,4}   |{1,3,5}   |{3}       |{3}       |{1,2,3,4,5,6} |{1,1,3,5,5,5} |
4  |5 |{3,4,5}   |{3,5,5}   |{4,5,6,7} |{5,5,5,6} |{3,4,5,6,7}   |{3,5,5,5,6}   |
5  |5 |{4,5,6}   |{5,5,5}   |{4,5,6,7} |{5,5,5,6} |{3,4,5,6,7}   |{3,5,5,5,6}   |
6  |5 |{5,6,7}   |{5,5,6}   |{4,5,6,7} |{5,5,5,6} |{3,4,5,6,7}   |{3,5,5,5,6}   |
7  |6 |{6,7}     |{5,6}     |{4,5,6,7} |{5,5,5,6} |{4,5,6,7}     |{5,5,5,6}     |

Observation:

  • The ROWS framed window is of size 3 max in this case (1 row preceding, the current row, and 1 row following)
  • The RANGE framed window is a logical window that looks behind a value of 1 and ahead a value of 1
  • The GROUPS framed window is of size 3 groups max in this case (1 group preceding, the current group, and 1 group following)

Neat, huh?

jOOQ 3.12 will add support for this feature: https://github.com/jOOQ/jOOQ/issues/7646

EXCLUDE clause

This is probably a bit less frequently useful than the new GROUPS clause. There is now a new window frame exclusion clause:

<window frame exclusion> ::=
  EXCLUDE CURRENT ROW
| EXCLUDE GROUP
| EXCLUDE TIES
| EXCLUDE NO OTHERS

It can be used to exclude some rows around the current row from being in the window. I have yet to think of a use case for this. Here's how it works for each of the three frame units:

ROWS

WITH t(v) AS (
  VALUES (1), (1), (3), (5), (5), (5), (6)
)
SELECT
  v,
  array_agg(v) OVER (o ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING
                       EXCLUDE CURRENT ROW) AS current_row,
  array_agg(v) OVER (o ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING 
                       EXCLUDE GROUP) AS group,
  array_agg(v) OVER (o ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING 
                       EXCLUDE TIES) AS ties,
  array_agg(v) OVER (o ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING 
                       EXCLUDE NO OTHERS) AS no_others
FROM t
WINDOW o AS (ORDER BY v)

Resulting in:

v |current_row |group |ties    |no_others |
--|------------|------|--------|----------|
1 |{1}         |NULL  |{1}     |{1,1}     |
1 |{1,3}       |{3}   |{1,3}   |{1,1,3}   |
3 |{1,5}       |{1,5} |{1,3,5} |{1,3,5}   |
5 |{3,5}       |{3}   |{3,5}   |{3,5,5}   |
5 |{5,5}       |NULL  |{5}     |{5,5,5}   |
5 |{5,6}       |{6}   |{5,6}   |{5,5,6}   |
6 |{5}         |{5}   |{5,6}   |{5,6}     |

As you can see, the window may now be completely empty, which results in NULL being emitted.

  • Excluding the current row seems obvious
  • Excluding the current group also
  • Excluding ties excludes all other rows of the current row’s group, but retains the current row itself
  • Excluding no others is the default, just like when you don’t put this EXCLUDE clause

RANGE

The exclusion can be applied to logical windowing as well:

WITH t(v) AS (
  VALUES (1), (1), (3), (5), (5), (5), (6)
)
SELECT
  v,
  array_agg(v) OVER (o RANGE BETWEEN 1 PRECEDING AND 1 FOLLOWING 
                       EXCLUDE CURRENT ROW) AS current_row,
  array_agg(v) OVER (o RANGE BETWEEN 1 PRECEDING AND 1 FOLLOWING 
                       EXCLUDE GROUP) AS group,
  array_agg(v) OVER (o RANGE BETWEEN 1 PRECEDING AND 1 FOLLOWING 
                       EXCLUDE TIES) AS ties,
  array_agg(v) OVER (o RANGE BETWEEN 1 PRECEDING AND 1 FOLLOWING 
                       EXCLUDE NO OTHERS) AS no_others
FROM t
WINDOW o AS (ORDER BY v)

Resulting in:

v |current_row |group   |ties      |no_others |
--|------------|--------|----------|----------|
1 |{1}         |NULL    |{1}       |{1,1}     |
1 |{1}         |NULL    |{1}       |{1,1}     |
3 |NULL        |NULL    |{3}       |{3}       |
5 |{5,5,6}     |{6}     |{5,6}     |{5,5,5,6} |
5 |{5,5,6}     |{6}     |{5,6}     |{5,5,5,6} |
5 |{5,5,6}     |{6}     |{5,6}     |{5,5,5,6} |
6 |{5,5,5}     |{5,5,5} |{5,5,5,6} |{5,5,5,6} |

GROUPS

Same for grouped windows:

WITH t(v) AS (
  VALUES (1), (1), (3), (5), (5), (5), (6)
)
SELECT
  v,
  array_agg(v) OVER (o GROUPS BETWEEN 1 PRECEDING AND 1 FOLLOWING 
                       EXCLUDE CURRENT ROW) AS current_row,
  array_agg(v) OVER (o GROUPS BETWEEN 1 PRECEDING AND 1 FOLLOWING 
                       EXCLUDE GROUP) AS group,
  array_agg(v) OVER (o GROUPS BETWEEN 1 PRECEDING AND 1 FOLLOWING 
                       EXCLUDE TIES) AS ties,
  array_agg(v) OVER (o GROUPS BETWEEN 1 PRECEDING AND 1 FOLLOWING 
                       EXCLUDE NO OTHERS) AS no_others
FROM t
WINDOW o AS (ORDER BY v)

Resulting in:

v |current_row |group       |ties          |no_others     |
--|------------|------------|--------------|--------------|
1 |{1,3}       |{3}         |{1,3}         |{1,1,3}       |
1 |{1,3}       |{3}         |{1,3}         |{1,1,3}       |
3 |{1,1,5,5,5} |{1,1,5,5,5} |{1,1,3,5,5,5} |{1,1,3,5,5,5} |
5 |{3,5,5,6}   |{3,6}       |{3,5,6}       |{3,5,5,5,6}   |
5 |{3,5,5,6}   |{3,6}       |{3,5,6}       |{3,5,5,5,6}   |
5 |{3,5,5,6}   |{3,6}       |{3,5,6}       |{3,5,5,5,6}   |
6 |{5,5,5}     |{5,5,5}     |{5,5,5,6}     |{5,5,5,6}     |

Needless to say that this clause will be supported in jOOQ 3.12 as well: https://github.com/jOOQ/jOOQ/issues/7647

Bonus points for the reader who can think of a real world use-case for this clause, please leave a comment!

Using UNPIVOT to Traverse a Configuration Table’s Rows and Columns

Imagine you have a configuration table like the following:

CREATE TABLE rule (
  name     VARCHAR2(50)         NOT NULL PRIMARY KEY,
  enabled  NUMBER(1)  DEFAULT 1 NOT NULL CHECK (enabled IN (0,1)),
  priority NUMBER(10) DEFAULT 0 NOT NULL,
  flag1    NUMBER(3)  DEFAULT 0 NOT NULL,
  flag2    NUMBER(3)  DEFAULT 0 NOT NULL,
  flag3    NUMBER(3)  DEFAULT 0 NOT NULL,
  flag4    NUMBER(3)  DEFAULT 0 NOT NULL,
  flag5    NUMBER(3)  DEFAULT 0 NOT NULL
);

It specifies a set of rules that

  • Can be enabled / disabled
  • Can be given a priority among themselves
  • Include a set of flags which correspond to the thing you want to configure (e.g. some check to execute)
  • Those flags can be ordered as well

So, given the following data:

INSERT INTO rule (name, priority, flag1, flag5) 
  VALUES ('RULE 1', 1, 1, 2);
INSERT INTO rule (name, priority, flag2, flag5) 
  VALUES ('RULE 2', 2, 2, 1);
INSERT INTO rule (name, priority, flag3, flag4, flag5) 
  VALUES ('RULE 3', 3, 3, 1, 2);
INSERT INTO rule (name, priority, flag3) 
  VALUES ('RULE 4', 4, 1);

SELECT * FROM rule;

We’ll get our configuration “spreadsheet”:

NAME    ENABLED  PRIORITY  FLAG1  FLAG2  FLAG3  FLAG4  FLAG5
------------------------------------------------------------
RULE 1  1        1         1      0      0      0      2
RULE 2  1        2         0      2      0      0      1
RULE 3  1        3         0      0      3      1      2
RULE 4  1        4         0      0      1      0      0

This form is really useful to edit the configuration. If we want to activate FLAG2 in RULE 1, we just go to that cell in some SQL tool like Oracle SQL Developer, and change the value.

But reading the configuration is a bit different. FLAG1 through FLAG5 are not nicely normalised. How to read the data as though it were normalised?

Using UNPIVOT

In Oracle and SQL Server, we can use UNPIVOT for this use case. I’m using Oracle syntax in this blog post. SQL Server’s is just slightly different. Consider the following query:

SELECT name, flag, value
FROM rule
UNPIVOT (
  value FOR flag IN (
    flag1,  
    flag2,  
    flag3,  
    flag4,  
    flag5
  )
)
WHERE enabled = 1
AND value > 0
ORDER BY priority, value;

This will result in the following result set:

NAME    FLAG    VALUE
---------------------
RULE 1  FLAG1   1
RULE 1  FLAG5   2
RULE 2  FLAG5   1
RULE 2  FLAG2   2
RULE 3  FLAG4   1
RULE 3  FLAG5   2
RULE 3  FLAG3   3
RULE 4  FLAG3   1

In this representation, the rules are ordered by priority, and the flags are ordered by their respective value within a rule. The flags that are not turned on (value 0) are simply omitted. This form is much easier to traverse procedurally, when “consuming” the configuration.
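
For reference, the SQL Server flavour differs mainly in requiring a table alias after the UNPIVOT clause (a sketch; I haven't run this one):

SELECT name, flag, value
FROM rule
UNPIVOT (
  value FOR flag IN (flag1, flag2, flag3, flag4, flag5)
) AS u
WHERE enabled = 1
AND value > 0
ORDER BY priority, value;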

How does it work?

In principle, UNPIVOT is just syntax sugar for a bunch of UNION ALL subqueries. We could have written our query like this, instead:

SELECT name, flag, value
FROM (
  SELECT rule.*, 'FLAG1' AS flag, FLAG1 AS value FROM rule
  UNION ALL
  SELECT rule.*, 'FLAG2' AS flag, FLAG2 AS value FROM rule
  UNION ALL
  SELECT rule.*, 'FLAG3' AS flag, FLAG3 AS value FROM rule
  UNION ALL
  SELECT rule.*, 'FLAG4' AS flag, FLAG4 AS value FROM rule
  UNION ALL
  SELECT rule.*, 'FLAG5' AS flag, FLAG5 AS value FROM rule
) rule
WHERE enabled = 1
AND value > 0
ORDER BY priority, value;

Which is decidedly more code. It’s also more work for the database. The execution plans are different (I’m using Oracle 12.2.0.1.0):

UNPIVOT version – single table access

---------------------------------------------
| Id  | Operation            | Name | Rows  |
---------------------------------------------
|   0 | SELECT STATEMENT     |      |       |
|   1 |  SORT ORDER BY       |      |     5 |
|*  2 |   VIEW               |      |     5 |
|   3 |    UNPIVOT           |      |       |
|*  4 |     TABLE ACCESS FULL| RULE |     1 |
---------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - filter(("unpivot_view_005"."VALUE">0 AND 
              "unpivot_view_005"."ENABLED"=1))
   4 - filter("RULE"."ENABLED"=1)

UNION ALL version – multi table access

---------------------------------------------
| Id  | Operation            | Name | Rows  |
---------------------------------------------
|   0 | SELECT STATEMENT     |      |       |
|   1 |  SORT ORDER BY       |      |     8 |
|   2 |   VIEW               |      |     8 |
|   3 |    UNION-ALL         |      |       |
|*  4 |     TABLE ACCESS FULL| RULE |     1 |
|*  5 |     TABLE ACCESS FULL| RULE |     1 |
|*  6 |     TABLE ACCESS FULL| RULE |     2 |
|*  7 |     TABLE ACCESS FULL| RULE |     1 |
|*  8 |     TABLE ACCESS FULL| RULE |     3 |
---------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   4 - filter(("RULE"."ENABLED"=1 AND "FLAG1">0))
   5 - filter(("RULE"."ENABLED"=1 AND "FLAG2">0))
   6 - filter(("RULE"."ENABLED"=1 AND "FLAG3">0))
   7 - filter(("RULE"."ENABLED"=1 AND "FLAG4">0))
   8 - filter(("RULE"."ENABLED"=1 AND "FLAG5">0))

We can also measure the time it takes to execute these queries thousands of times. The following shows resulting times relative to the fastest execution (1):

Run 1, Statement 1 : 1.155
Run 1, Statement 2 : 1.88056

Run 2, Statement 1 : 1.04333
Run 2, Statement 2 : 1.95148

Run 3, Statement 1 : 1.02185
Run 3, Statement 2 : 1.86074

Run 4, Statement 1 : 1
Run 4, Statement 2 : 1.85241

Run 5, Statement 1 : 1.0263
Run 5, Statement 2 : 1.82944

The UNION ALL version is consistently about 2x slower on this very small data set. This is significant in the use case presented here, as a configuration table is probably read many times per day.

Knowing when a rule starts and when it ends

The real world use case that is behind this blog post also needed to know when a rule started and when it ended. I.e., which flag entry was the first and which was the last of the rule. This was easy in the non-normalised representation where each rule was a single row.

In the normalised version, we can use LEAD() and LAG().

Using this query:

SELECT 
  CASE WHEN lag(name, 1, 'NULL') 
            OVER (ORDER BY priority, value) != name 
       THEN 1 ELSE 0 END rule_begin,
  CASE WHEN lead(name, 1, 'NULL') 
            OVER (ORDER BY priority, value) != name 
       THEN 1 ELSE 0 END rule_end,
  name, flag, value
FROM rule
UNPIVOT (
  value FOR flag IN (
    flag1,  
    flag2,  
    flag3,  
    flag4,  
    flag5
  )
)
WHERE enabled = 1
AND value > 0
ORDER BY priority, value;

We’re now getting (with some visual emphasis):

RULE_BEGIN  RULE_END  NAME    FLAG    VALUE
-------------------------------------------
1           0         RULE 1  FLAG1   1
0           1         RULE 1  FLAG5   2

1           0         RULE 2  FLAG5   1
0           1         RULE 2  FLAG2   2

1           0         RULE 3  FLAG4   1
0           0         RULE 3  FLAG5   2
0           1         RULE 3  FLAG3   3

1           1         RULE 4  FLAG3   1

LEAD() looks ahead one row to see if the rule name there is different from the rule name on the current row.

LAG() looks behind one row to see if the rule name there is different from the rule name on the current row.

That’s it – very simple. The window functions part of this example is part of my 10 SQL Tricks talk, which I highly recommend you watch.

Oracle’s OFFSET .. FETCH Can be Slower than Classic ROWNUM Filtering

One of Oracle 12c’s coolest features was the introduction of the SQL standard OFFSET .. FETCH clause, as we can now write things like:

SELECT *
FROM film 
ORDER BY film_id
FETCH FIRST 1 ROW ONLY

This is querying the Sakila database. Most other databases have had this clause (or a non-standard version of it) for ages, e.g. MySQL with LIMIT. For all the different LIMIT syntaxes, check out the jOOQ manual.
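
For comparison, the non-standard equivalent in e.g. MySQL or PostgreSQL:

SELECT *
FROM film
ORDER BY film_id
LIMIT 1;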

Implementation wise, the Oracle folks chose to rewrite this clause as a simple window function filter. In principle, behind the scenes, the following is executed:

Teradata syntax

SELECT *
FROM film
QUALIFY row_number() OVER (ORDER BY film_id) = 1
ORDER BY film_id

Standard syntax

SELECT * -- Except rn
FROM (
  SELECT film.*, row_number() OVER (ORDER BY film_id) rn
  FROM film
) t
WHERE rn = 1
ORDER BY film_id

This definitely looks much better than the “old” approach using ROWNUM filtering, which many of us have written for years:

Legacy Oracle syntax

SELECT t.*
FROM (
  SELECT *
  FROM film 
  ORDER BY film_id
) t
WHERE ROWNUM = 1

What I don’t like about this “old” approach is that we’re relying on the ORDER BY clause of a derived table, which, in my opinion, should not be guaranteed to be retained in the outermost query (although it is, in Oracle, in this case).

So, having the SQL standard syntax is definitely good.

What’s the problem?

Now, while the SQL transformation from FETCH FIRST to ROW_NUMBER() filtering is certainly correct, the execution plan doesn’t really make me happy. Consider the ROWNUM based query:

---------------------------------------------------------
| Id  | Operation                     | Name    | Rows  |
---------------------------------------------------------
|   0 | SELECT STATEMENT              |         |       |
|*  1 |  COUNT STOPKEY                |         |       |
|   2 |   VIEW                        |         |     1 |
|   3 |    TABLE ACCESS BY INDEX ROWID| FILM    |  1000 |
|   4 |     INDEX FULL SCAN           | PK_FILM |     1 |
---------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - filter(ROWNUM=1)

And compare that to the FETCH FIRST query:

-------------------------------------------------
| Id  | Operation                | Name | Rows  |
-------------------------------------------------
|   0 | SELECT STATEMENT         |      |       |
|*  1 |  VIEW                    |      |     1 |
|*  2 |   WINDOW SORT PUSHED RANK|      |  1000 |
|   3 |    TABLE ACCESS FULL     | FILM |  1000 |
-------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - filter("from$_subquery$_002"."rowlimit_$$_rownumber"<=1)
   2 - filter(ROW_NUMBER() OVER ( ORDER BY "FILM"."FILM_ID")<=1)

The cardinalities are both correct, but one of the queries seems to traverse the entire table to find the top FILM_ID, which the other query found in the index directly. A workaround would be to hint the number of rows to the FETCH FIRST query:

SELECT /*+FIRST_ROWS(1)*/ *
FROM film
ORDER BY film_id
FETCH FIRST 1 ROW ONLY;

… in which case we’ll get a similar plan to that of the ROWNUM filtering query:

---------------------------------------------------------
| Id  | Operation                     | Name    | Rows  |
---------------------------------------------------------
|   0 | SELECT STATEMENT              |         |       |
|*  1 |  VIEW                         |         |     1 |
|*  2 |   WINDOW NOSORT STOPKEY       |         |     1 |
|   3 |    TABLE ACCESS BY INDEX ROWID| FILM    |  1000 |
|   4 |     INDEX FULL SCAN           | PK_FILM |     1 |
---------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - filter("from$_subquery$_002"."rowlimit_$$_rownumber"<=1)
   2 - filter(ROW_NUMBER() OVER ( ORDER BY "FILM"."FILM_ID")<=1)

Measuring this using our measurement technique yields quite devastating results:

Run 1, Statement 1 :  1.11230  -- ROWNUM
Run 1, Statement 2 :  1.15508  -- FETCH FIRST +FIRST_ROWS
Run 1, Statement 3 : 46.92781  -- FETCH FIRST

Run 2, Statement 1 :  1.68449
Run 2, Statement 2 :  1.99465
Run 2, Statement 3 : 47.32620

Run 3, Statement 1 :  1.10428
Run 3, Statement 2 :  1.13904
Run 3, Statement 3 : 68.06417

Run 4, Statement 1 :  1
Run 4, Statement 2 :  6.00535
Run 4, Statement 3 : 44.88235

The above results don’t show actual time measurements, but numbers relative to the fastest execution (1).

There is a 40x performance difference between the approaches, with ROWNUM based filtering being the fastest, FETCH FIRST plus +FIRST_ROWS hint being slightly slower, and “naked” FETCH FIRST being terribly slow, when repeating the measurement 5x and running each query 10000x on my machine on Oracle 12.2.0.1.0 in Docker.

Things get worse when joining tables. Let’s run a query that fetches the first customer and their full address information:

-- Legacy Oracle syntax
SELECT t.*
FROM (
  SELECT *
  FROM customer 
  JOIN address USING (address_id)
  JOIN city USING (city_id)
  JOIN country USING (country_id)
  ORDER BY customer_id
) t
WHERE ROWNUM = 1;

-- Standard syntax with hint
SELECT /*+FIRST_ROWS(1)*/ *
FROM customer 
JOIN address USING (address_id)
JOIN city USING (city_id)
JOIN country USING (country_id)
ORDER BY customer_id
FETCH FIRST 1 ROW ONLY;

-- Standard syntax without hint
SELECT *
FROM customer 
JOIN address USING (address_id)
JOIN city USING (city_id)
JOIN country USING (country_id)
ORDER BY customer_id
FETCH FIRST 1 ROW ONLY;

The three queries are equivalent; they all produce the same result. Yet, the plans are very different.

Oracle’s legacy syntax

-----------------------------------------------------------------
| Id  | Operation                         | Name        | Rows  |
-----------------------------------------------------------------
|   0 | SELECT STATEMENT                  |             |       |
|*  1 |  COUNT STOPKEY                    |             |       |
|   2 |   VIEW                            |             |     1 |
|   3 |    NESTED LOOPS                   |             |     1 |
|   4 |     NESTED LOOPS                  |             |     1 |
|   5 |      NESTED LOOPS                 |             |     1 |
|   6 |       NESTED LOOPS                |             |     1 |
|   7 |        TABLE ACCESS BY INDEX ROWID| CUSTOMER    |   302 |
|   8 |         INDEX FULL SCAN           | PK_CUSTOMER |     1 |
|   9 |        TABLE ACCESS BY INDEX ROWID| ADDRESS     |     1 |
|* 10 |         INDEX UNIQUE SCAN         | PK_ADDRESS  |     1 |
|  11 |       TABLE ACCESS BY INDEX ROWID | CITY        |     1 |
|* 12 |        INDEX UNIQUE SCAN          | PK_CITY     |     1 |
|* 13 |      INDEX UNIQUE SCAN            | PK_COUNTRY  |     1 |
|  14 |     TABLE ACCESS BY INDEX ROWID   | COUNTRY     |     1 |
-----------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - filter(ROWNUM=1)
  10 - access("CUSTOMER"."ADDRESS_ID"="ADDRESS"."ADDRESS_ID")
  12 - access("ADDRESS"."CITY_ID"="CITY"."CITY_ID")
  13 - access("CITY"."COUNTRY_ID"="COUNTRY"."COUNTRY_ID")

We’re seeing tons of nested loop joins because that’s what we’ll expect given the low cardinalities imposed by the COUNT STOPKEY operation.

SQL standard syntax with hint

-----------------------------------------------------------------
| Id  | Operation                         | Name        | Rows  |
-----------------------------------------------------------------
|   0 | SELECT STATEMENT                  |             |       |
|*  1 |  VIEW                             |             |     1 |
|*  2 |   WINDOW NOSORT STOPKEY           |             |     1 |
|   3 |    NESTED LOOPS                   |             |     1 |
|   4 |     NESTED LOOPS                  |             |     1 |
|   5 |      NESTED LOOPS                 |             |     1 |
|   6 |       NESTED LOOPS                |             |     1 |
|   7 |        TABLE ACCESS BY INDEX ROWID| CUSTOMER    |   302 |
|   8 |         INDEX FULL SCAN           | PK_CUSTOMER |     1 |
|   9 |        TABLE ACCESS BY INDEX ROWID| ADDRESS     |     1 |
|* 10 |         INDEX UNIQUE SCAN         | PK_ADDRESS  |     1 |
|  11 |       TABLE ACCESS BY INDEX ROWID | CITY        |     1 |
|* 12 |        INDEX UNIQUE SCAN          | PK_CITY     |     1 |
|* 13 |      INDEX UNIQUE SCAN            | PK_COUNTRY  |     1 |
|  14 |     TABLE ACCESS BY INDEX ROWID   | COUNTRY     |     1 |
-----------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - filter("from$_subquery$_008"."rowlimit_$$_rownumber"<=1)
   2 - filter(ROW_NUMBER() OVER ( ORDER BY "CUSTOMER"."CUSTOMER_ID")<=1)
  10 - access("CUSTOMER"."ADDRESS_ID"="ADDRESS"."ADDRESS_ID")
  12 - access("ADDRESS"."CITY_ID"="CITY"."CITY_ID")
  13 - access("CITY"."COUNTRY_ID"="COUNTRY"."COUNTRY_ID")

Quite a similar plan to the legacy one.

SQL standard syntax without hint

---------------------------------------------------------------
| Id  | Operation                        | Name       | Rows  |
---------------------------------------------------------------
|   0 | SELECT STATEMENT                 |            |       |
|*  1 |  VIEW                            |            |     1 |
|*  2 |   WINDOW SORT PUSHED RANK        |            |   599 |
|*  3 |    HASH JOIN                     |            |   599 |
|   4 |     TABLE ACCESS FULL            | CUSTOMER   |   599 |
|*  5 |     HASH JOIN                    |            |   603 |
|   6 |      MERGE JOIN                  |            |   600 |
|   7 |       TABLE ACCESS BY INDEX ROWID| COUNTRY    |   109 |
|   8 |        INDEX FULL SCAN           | PK_COUNTRY |   109 |
|*  9 |       SORT JOIN                  |            |   600 |
|  10 |        TABLE ACCESS FULL         | CITY       |   600 |
|  11 |      TABLE ACCESS FULL           | ADDRESS    |   603 |
---------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - filter("from$_subquery$_008"."rowlimit_$$_rownumber"<=1)
   2 - filter(ROW_NUMBER() OVER ( ORDER BY "CUSTOMER"."CUSTOMER_ID")<=1)
   3 - access("CUSTOMER"."ADDRESS_ID"="ADDRESS"."ADDRESS_ID")
   5 - access("ADDRESS"."CITY_ID"="CITY"."CITY_ID")
   9 - access("CITY"."COUNTRY_ID"="COUNTRY"."COUNTRY_ID")
       filter("CITY"."COUNTRY_ID"="COUNTRY"."COUNTRY_ID")

Oops. Lots of HASH JOIN, MERGE JOIN, and TABLE ACCESS FULL operations, as well as a WINDOW SORT operation rather than a WINDOW NOSORT operation. That cannot be good. Let’s measure this again.

Run 1, Statement 1 :  1.26157  -- ROWNUM
Run 1, Statement 2 :  1.32394  -- FETCH FIRST +FIRST_ROWS
Run 1, Statement 3 : 66.97384  -- FETCH FIRST

Run 2, Statement 1 :  1.31992
Run 2, Statement 2 :  1.76459
Run 2, Statement 3 : 72.76056

Run 3, Statement 1 :  1
Run 3, Statement 2 :  1.36419
Run 3, Statement 3 : 74.06439

Run 4, Statement 1 :  1.08451
Run 4, Statement 2 :  1.64990
Run 4, Statement 3 : 66.83702

The difference is even worse: a factor of almost 70x. And make no mistake, these are trivial data set sizes. As we can see in the last execution plan, the cardinality of the CUSTOMER table is 599. This can get much worse for larger tables.

Why even use this syntax?

The SQL standard syntax is much nicer to write, and it allows for elegant TOP-N style queries using CROSS APPLY / OUTER APPLY or LATERAL, e.g. to find the TOP 3 longest film titles per actor:

SELECT actor_id, first_name, last_name, title
FROM actor a
OUTER APPLY (
  SELECT /*+FIRST_ROWS(1)*/ title
  FROM film f
  JOIN film_actor fa USING (film_id)
  WHERE fa.actor_id = a.actor_id
  ORDER BY length(title) DESC
  FETCH FIRST 3 ROWS ONLY
) t
ORDER BY actor_id, length(title) DESC;

This would have been much harder with the ROWNUM approach. In older Oracle versions, it was even impossible, because we could not reference A.ACTOR_ID from doubly nested derived tables / correlated subqueries. Luckily, this is no longer the case. So, syntactically, it is definitely a much better way to do paginated queries or TOP-N queries, but the price is very high.
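
For comparison, here’s a sketch of one possible ROWNUM-era workaround, assuming the same sakila schema. Since a ROWNUM filter cannot be correlated with the outer ACTOR row, the per-actor ranking has to be done with ROW_NUMBER() in a derived table instead:

SELECT a.actor_id, a.first_name, a.last_name, t.title
FROM actor a
LEFT JOIN (
  SELECT
    fa.actor_id,
    f.title,
    ROW_NUMBER() OVER (
      PARTITION BY fa.actor_id
      ORDER BY length(f.title) DESC
    ) rn
  FROM film f
  JOIN film_actor fa USING (film_id)
) t
ON t.actor_id = a.actor_id AND t.rn <= 3
ORDER BY a.actor_id, length(t.title) DESC;

Notice how the derived table has to rank all films for all actors before the join, whereas the APPLY version can, in principle, stop after three rows per actor.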

Disclaimer

The optimiser might make much better choices when:

The base data set is much bigger than the above tables of merely 600 to 1000 rows

Indeed, when fetching the first row from the PAYMENT table (~16000 rows), the difference becomes smaller or even nonexistent:

Run 1, Statement 1 : 1        -- ROWNUM
Run 1, Statement 2 : 1.72246  -- FETCH FIRST +FIRST_ROWS
Run 1, Statement 3 : 1.76165  -- FETCH FIRST

Run 2, Statement 1 : 1.03919
Run 2, Statement 2 : 1.78284
Run 2, Statement 3 : 1.75742

Run 3, Statement 1 : 1.2553
Run 3, Statement 2 : 1.86441
Run 3, Statement 3 : 2.39089

Run 4, Statement 1 : 2.28814
Run 4, Statement 2 : 3.02436
Run 4, Statement 3 : 2.39407

Run 5, Statement 1 : 1.31462
Run 5, Statement 2 : 2.27225
Run 5, Statement 3 : 1.70975

As can be seen, the measurement error starts to outweigh the performance difference, which isn’t really significant anymore.
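
For reference, the three statements measured here are the same as before, just against the PAYMENT table (a sketch, assuming a PAYMENT_ID primary key as in the sakila database):

-- ROWNUM
SELECT t.*
FROM (
  SELECT * FROM payment ORDER BY payment_id
) t
WHERE ROWNUM = 1;

-- FETCH FIRST +FIRST_ROWS
SELECT /*+FIRST_ROWS(1)*/ *
FROM payment
ORDER BY payment_id
FETCH FIRST 1 ROW ONLY;

-- FETCH FIRST
SELECT *
FROM payment
ORDER BY payment_id
FETCH FIRST 1 ROW ONLY;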

The limit is not 1 or 3, but 10 or 50

When fetching the top 50 rows from the joined customer/address query, the measurements changed quite a bit. Suddenly, the ROWNUM query wasn’t optimal anymore and behaved like the un-hinted FETCH FIRST query. Adding a /*+FIRST_ROWS(1)*/ hint (not /*+FIRST_ROWS(50)*/ !) did help; a sketch of the hinted ROWNUM query follows the measurements:

Run 1, Statement 1 : 1.00545  -- ROWNUM +FIRST_ROWS
Run 1, Statement 2 : 7.24842  -- ROWNUM
Run 1, Statement 3 : 1.35691  -- FETCH FIRST +FIRST_ROWS
Run 1, Statement 4 : 7.15264  -- FETCH FIRST

Run 2, Statement 1 : 1.08054
Run 2, Statement 2 : 6.51922
Run 2, Statement 3 : 1.35960
Run 2, Statement 4 : 7.94527

Run 3, Statement 1 : 1.02824
Run 3, Statement 2 : 7.16228
Run 3, Statement 3 : 1.19702
Run 3, Statement 4 : 7.55008

Run 4, Statement 1 : 1.08364
Run 4, Statement 2 : 6.66652
Run 4, Statement 3 : 1.18559
Run 4, Statement 4 : 7.36938

Run 5, Statement 1 : 1
Run 5, Statement 2 : 6.89051
Run 5, Statement 3 : 1.24211
Run 5, Statement 4 : 7.15167
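
For the record, the hinted ROWNUM query (Statement 1 above) is just the derived table query from before, with the hint and a top-50 filter applied:

SELECT /*+FIRST_ROWS(1)*/ t.*
FROM (
  SELECT *
  FROM customer 
  JOIN address USING (address_id)
  JOIN city USING (city_id)
  JOIN country USING (country_id)
  ORDER BY customer_id
) t
WHERE ROWNUM <= 50;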

Conclusion

What we’ve seen here is a bit unfortunate. For some cases, one approach is better than the other, performance wise. For others, it’s vice versa. Paginated queries are still a bit tricky for Oracle to get right and we have to measure things explicitly.

Workaround in jOOQ

Until this is fixed by Oracle, if you’re using jOOQ, you can use the SQLDialect.ORACLE11G dialect to run classic ROWNUM filtering queries even on Oracle 12c. Alternatively, a future version of jOOQ will optionally generate a +FIRST_ROWS hint with a reasonably approximated cardinality: https://github.com/jOOQ/jOOQ/issues/5793
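
For instance, a minimal sketch, assuming a jOOQ-generated FILM table and an open JDBC connection:

// Picking the ORACLE11G dialect makes jOOQ emulate pagination with
// classic ROWNUM filtering, even against an Oracle 12c database:
DSLContext ctx = DSL.using(connection, SQLDialect.ORACLE11G);

ctx.selectFrom(FILM)
   .orderBy(FILM.FILM_ID)
   .limit(1)
   .fetch();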

How to Patch Your IDE to Fix an Urgent Bug

Clock’s ticking. JDK 11 will remove a bunch of deprecated modules through JEP 320, which includes the Java EE modules, which again includes JAXB, a dependency of many libraries, including jOOQ. Thus far, few people have upgraded to Java 9 or 10, as these aren’t LTS releases. Unlike in the old days, however, people will be forced much earlier to upgrade to Java 11, because Java 8 (the free version) will reach end of life soon after Java 11 is released:

End of Public Updates for Oracle JDK 8
As outlined in the Oracle JDK Support Roadmap below, Oracle will not post further updates of Java SE 8 to its public download sites for commercial use after January 2019

So, we library developers must act and finally modularise our libraries. Which is, quite frankly, a pain. Not because of the module system itself, which works surprisingly well. But because of the toolchain, which is far from being production ready. This mostly includes:

It’s still almost not possible to maintain a modularised project in an IDE (I’ve tried Eclipse and IntelliJ, not Netbeans so far) as there are still tons of bugs. Some of which are showstoppers, halting compilation in the IDE (despite compilation working in Maven). For example:

But rather than just complaining, let’s complain and fix it.

Let’s fix our own IDE by patching it

Disclaimer: The following procedure assumes that you have the right to modify your IDE’s source and binaries. To my understanding, this is the case with the EPL licensed Eclipse. It may not be the case for other IDEs.

Disclaimer 2: Note, as reddit user fubarbazqux so eloquently put it, there are cleaner ways to apply patches (and contribute them) to the Eclipse community, if you have more time. This article just shows a very easy way to do things without spending too much time figuring out how the Eclipse development processes work internally. It shows a QUICK FIX recipe.

The first bug was already discovered and fixed for Eclipse 4.8, but its RC4 version seems to have tons of other problems, so let’s not upgrade to that yet. Instead, let’s apply the fix that can be seen here to our own distribution:

https://github.com/eclipse/eclipse.jdt.core/commit/e60c4f1f36f7efd5fbc1bbc661872b78c6939230#diff-e517e5944661053f0fcff49d9432b74e

It’s just a single line:

How do we do this?

First off, go to the Eclipse Packages Download page:

http://www.eclipse.org/downloads/eclipse-packages

And download the “Eclipse IDE for Eclipse Committers” distribution.

It will contain all the Eclipse source code, which we’ll need to compile the above class. In the new workspace, create a new empty plugin project.

Specify the correct execution environment (in our case Java 10) and add all the Java Development Tools (JDT) dependencies.

Or just add all the available dependencies; it doesn’t really matter.

You can now open the type that you want to edit.

Now, simply copy the source code from the editor and paste it into a new class inside your project, which you put in the same package as the original (split packages are still possible in this case, yay).

Inside your copy, apply the desired patch and build the project. Since you already included all the dependencies, it is easy to compile your copy of the class, and you don’t have to build the entirety of Eclipse.

Now, go to your Windows Explorer, macOS Finder, or Linux shell and find the compiled class.

This class can now be copied into the Eclipse plugin. How do you find the appropriate plugin? Just go to your plugin dependencies and check out the location of the class you’ve opened earlier.

Open that plugin from your Eclipse distribution’s /plugins folder using 7zip or whatever zipping tool you prefer, and overwrite the original class file(s). You may need to close Eclipse first before you can write to the plugin zip file. And it’s always a good idea to make backup copies of the original plugin(s).

Careful: if your class has any nested classes, you will need to copy all of them, e.g.:

MyClass.class
MyClass$1.class // Anonymous class
MyClass$Nested.class // Named, nested class

Restart Eclipse, and your bug should be fixed!

How to fix my own bugs?

You may not always be lucky enough to find a bug with an existing fix in the bug tracker, as in this second case:
https://bugs.eclipse.org/bugs/show_bug.cgi?id=535927

No problemo, we can hack our way around that as well. Launch your normal Eclipse instance (not the “Eclipse IDE for Eclipse Committers” one) with a debug agent running, by adding the following lines to your eclipse.ini file:

-Xdebug 
-Xnoagent 
-Djava.compiler=NONE 
-Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=5005

Launch Eclipse again, then connect to it from your other “Eclipse IDE for Eclipse Committers” instance by attaching a debugger.

And start setting breakpoints wherever you need, e.g. here, in my case:

java.lang.NullPointerException
	at org.eclipse.jdt.internal.compiler.problem.ProblemHandler.handle(ProblemHandler.java:145)
	at org.eclipse.jdt.internal.compiler.problem.ProblemHandler.handle(ProblemHandler.java:226)
	at org.eclipse.jdt.internal.compiler.problem.ProblemReporter.handle(ProblemReporter.java:2513)
	at org.eclipse.jdt.internal.compiler.problem.ProblemReporter.deprecatedType(ProblemReporter.java:1831)
	at org.eclipse.jdt.internal.compiler.problem.ProblemReporter.deprecatedType(ProblemReporter.java:1808)
	at org.eclipse.jdt.internal.compiler.lookup.CompilationUnitScope.checkAndRecordImportBinding(CompilationUnitScope.java:960)
	at org.eclipse.jdt.internal.compiler.lookup.CompilationUnitScope.faultInImports(CompilationUnitScope.java:471)
	at org.eclipse.jdt.internal.compiler.lookup.CompilationUnitScope.faultInTypes(CompilationUnitScope.java:501)
	at org.eclipse.jdt.internal.compiler.Compiler.process(Compiler.java:878)
	at org.eclipse.jdt.internal.compiler.ProcessTaskManager.run(ProcessTaskManager.java:141)
	at java.lang.Thread.run(Unknown Source)

And start analysing the problem as if it were your own bug. The nice thing is, you don’t have to fix the problem; just find it, and possibly comment out some lines of code if you think they’re not really needed. In my case, luckily, the regression was introduced by a new method that is applied to JDK 9+ projects only:

String deprecatedSinceValue(Supplier<AnnotationBinding[]> annotations) {
    // ...
}

The method checks for the new since attribute of the @Deprecated annotation, as in @Deprecated(since="9"). It’s not an essential feature, so let’s just turn it off by adding an early return to the source file:

String deprecatedSinceValue(Supplier<AnnotationBinding[]> annotations) {
    if (true) return null; // Workaround: skip the faulty logic entirely
    // ...
}

This will effectively prevent the faulty logic from ever running. It’s not a fix, but a workaround. For more details about this specific issue, see the report linked above. Of course, never forget to actually report the issue to Eclipse (or whatever your IDE is), so it can be fixed thoroughly for everyone else as well.

Compile. Patch. Restart. Done!

Conclusion

Java is a cool platform. It has always been very dynamic at runtime: compiled class files can be replaced by new versions at any moment and re-loaded by the class loaders. This makes patching other vendors’ code very easy. Just:

  • Create a project containing the vendors’ code (or if you don’t have the code, the binaries)
  • Apply a fix / workaround to the Java class that is faulty (or if you don’t have the code, decompile the binaries if you are allowed to)
  • Compile your own version
  • Replace the vendor’s version of the class file with yours
  • Restart

This works with all software, including IDEs. In the case of jOOQ, all our customers have the right to modification, and they get the sources as well. We know how useful it is to be able to patch someone else’s code. This article shows how. Now, I can continue modularising jOOQ, and as a by-product, improve the toolchain for everybody else as well.

Again, this article displayed a QUICK FIX approach (some call it a “hack”). There are more thorough ways to apply patches / fixes and contribute them back to the vendor.

Another, very interesting option would be to instrument your runtime and apply the fix to the byte code only:

https://www.sitepoint.com/fixing-bugs-in-running-java-code-with-dynamic-attach/

A note on IntelliJ and NetBeans

Again, I haven’t tried NetBeans yet (although I’ve heard its Java 9 support has been working very well for quite a while).

While IntelliJ’s Jigsaw support seems more advanced than Eclipse’s (still with a few flaws as well), it currently has a couple of performance issues when compiling projects like jOOQ or jOOλ. In a future blog post, I will show how to “fix” those by using a profiler, like:

  • Java Mission Control (can be used as a profiler, too)
  • YourKit
  • JProfiler

Profilers make it very easy to track down the main source of a performance problem. I’ve reported a ton of them to Eclipse already. For instance, this one:

https://bugs.eclipse.org/bugs/show_bug.cgi?id=474686

… where a lot of time is being spent processing Task Tags, like:

  • TODO
  • FIXME
  • XXX

The great thing about profiling this is:

  • You can report a precise bug to the vendor
  • You can find the flawed feature and turn it off as a workaround. Turning off the above task tag feature was a no-brainer. I’m not even using the feature.

So, stay tuned for another blog post, soon.

jOOQ 3.11 Released With 4 New Databases, Implicit Joins, Diagnostics, and Much More

Today, jOOQ 3.11 has been released with support for 4 new databases, implicit joins, diagnostics, and much more.

New Databases Supported

At last, 4 new SQL dialects have been added to jOOQ! These are:

jOOQ Professional Edition

  • Aurora MySQL Edition
  • Aurora PostgreSQL Edition
  • Azure SQL Data Warehouse

jOOQ Enterprise Edition

  • Teradata

Implicit Joins

One of the really cool features in ORMs like Hibernate, Doctrine, and others, is
the capability of using a relationship graph notation to access another entity’s
columns through what is often called “implicit joins”.

Instead of explicitly joining a to-one relationship to access its columns:

SELECT author.first_name, author.last_name, book.title
FROM book
JOIN author ON book.author_id = author.id

We would like to be able to access those columns directly, using this notation:

SELECT book.author.first_name, book.author.last_name, book.title
FROM book

The join is implied and should be added implicitly. jOOQ now allows for this to
happen when you use the code generator:

ctx.select(BOOK.author().FIRST_NAME, BOOK.author().LAST_NAME, BOOK.TITLE)
   .from(BOOK)
   .fetch();

When rendering this query, the implicit join graph will be calculated on the fly
and added behind the scenes to the BOOK table. This works for queries of
arbitrary complexity and on any level of nested SELECT.

More details in this blog post:
https://blog.jooq.org/2018/02/20/type-safe-implicit-join-through-path-navigation-in-jooq-3-11/

DiagnosticsListener SPI

A new DiagnosticsListener SPI has been added to jOOQ:
https://github.com/jOOQ/jOOQ/issues/5960

The purpose of this SPI is to sanitise your SQL, JDBC, and jOOQ API usage.
Listeners can listen to events such as:

  • duplicateStatements (similar SQL is executed, bind variables should be used)
  • repeatedStatements (identical SQL is executed, should be batched or rewritten)
  • tooManyColumnsFetched (not all projected columns were needed)
  • tooManyRowsFetched (not all fetched rows were needed)

The great thing about this SPI is that it can be exposed to clients through the
JDBC API, in which case the diagnostics feature can reverse engineer your JDBC
or even JPA generated SQL. Ever wanted to detect N+1 queries from Hibernate?
Pass those Hibernate-generated queries through this SPI.

Want to find missing bind variables leading to cursor cache contention or SQLi?
Let jOOQ find similar SQL statements and report them. E.g.

  • SELECT name FROM person WHERE id = 1
  • SELECT name FROM person WHERE id = 2

Or also:

  • SELECT name FROM person WHERE id IN (?, ?)
  • SELECT name FROM person WHERE id IN (?, ?, ?)
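
What might an implementation look like? Here is a minimal sketch, assuming
that the SPI’s callback methods mirror the event names listed above, that
org.jooq.impl.DefaultDiagnosticsListener provides no-op defaults to extend
from, and that DiagnosticsContext exposes the offending statement via
actualStatement():

import org.jooq.DiagnosticsContext;
import org.jooq.impl.DefaultDiagnosticsListener;

public class LoggingDiagnosticsListener extends DefaultDiagnosticsListener {

    // Similar SQL was executed repeatedly; bind variables should be used:
    @Override
    public void duplicateStatements(DiagnosticsContext ctx) {
        System.out.println("Duplicate statement: " + ctx.actualStatement());
    }

    // More rows were fetched than the client actually consumed:
    @Override
    public void tooManyRowsFetched(DiagnosticsContext ctx) {
        System.out.println("Too many rows fetched: " + ctx.actualStatement());
    }
}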

Anonymous blocks

Many databases support anonymous blocks to run several statements in a single
block scope. For example, Oracle:

DECLARE
  l_var NUMBER(10);
BEGIN
  l_var := 10;
  dbms_output.put_line(l_var);
END;

jOOQ now supports the new org.jooq.Block API to allow for wrapping DDL and DML
statements in such a block (see the sketch after the following list). This is a
first step towards a future jOOQ providing support for:

  • Abstractions over procedural languages
  • CREATE PROCEDURE and CREATE FUNCTION statements
  • Trigger support
  • And much more
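
Here’s a rough idea of what using that API might look like: a sketch,
assuming a begin() method on DSLContext that wraps the given statements into
an org.jooq.Block, static imports of org.jooq.impl.DSL.*, and a
jOOQ-generated AUTHOR table:

// Run both inserts in a single anonymous block, i.e. a single round
// trip, rendered e.g. as BEGIN ... END; on Oracle:
ctx.begin(
    insertInto(AUTHOR, AUTHOR.ID, AUTHOR.NAME).values(1, "George"),
    insertInto(AUTHOR, AUTHOR.ID, AUTHOR.NAME).values(2, "Paulo")
).execute();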

Parser

jOOQ’s parser support is an ongoing effort. This release has added support for
a lot of new SQL clauses and functions from various vendors and in various DDL
and DML statements.

The parser is now also exposed through a public website and API, where SQL can
be translated from one dialect to another:
https://www.jooq.org/translate

This website will help further drive jOOQ API development by helping to find
missing functionality that is used in real-world SQL.

Another way to access this API is through the new org.jooq.ParserCLI command
line tool. For example, run:

$ java -cp jooq-3.11.0.jar org.jooq.ParserCLI -f -t ORACLE -s "SELECT * FROM (VALUES(1),(2)) AS t(a)"

To get:

select *
from (
  (
    select null a
    from dual
    where 1 = 0
  )
  union all (
    select *
    from (
      (
        select 1
        from dual
      )
      union all (
        select 2
        from dual
      )
    ) t
  )
) t;

Formal Java 10 Support

jOOQ 3.11 is the first release that is formally integration tested with Java 10.
To use jOOQ with Java 10, use the Java 8 distribution, which has not yet been
modularised, but contains an Automatic-Module-Name specification to be forward
compatible with future, modularised jOOQ distributions.

Additionally, package names between jOOQ, jOOQ-meta, and jOOQ-codegen have been
cleaned up to prevent duplicate package names, and the JAXB dependency has been
added explicitly to the various artefacts.

Other great improvements

  • Finally, asterisks (SELECT * or SELECT t.*) are formally supported in the API.
  • Collations can now be specified on a variety of syntax elements
  • The org.jooq.Comment type has been added, along with DDL statements for it
  • The DefaultBinding implementation has been rewritten for better performance
  • Several performance improvements in jOOQ’s internals
  • Many more DDL statements are supported including GRANT and REVOKE
  • Support for the EXPLAIN statement
  • FETCH n PERCENT ROWS and TOP n PERCENT clauses are supported
  • Better org.jooq.Name and org.jooq.Named API for identifier handling
  • Support for PostgreSQL 10
  • Support for SQL Server 2017
  • Support for DB2 11
  • Upgraded MariaDB support for window functions, inverse distribution functions, and WITH
  • jOOU dependency updated to 0.9.3
  • jOOR dependency updated to 0.9.8
  • Server output (e.g. DBMS_OUTPUT) can now be fetched automatically, by jOOQ
  • Code generation support for PL/SQL TABLE types
  • SQL Keywords Can Now Be Rendered In Pascal Style If You Must
  • Emulate PostgreSQL’s ON CONFLICT clause using MERGE

The complete list can be seen here:
https://www.jooq.org/notes/?version=3.11