Using JDK 10’s Local Variable Type Inference with jOOQ

After the successful release of JDK 9, we can already look forward, and play around with early access releases of JDK 10. The list of JEPs currently targeted for JDK 10 is quite manageable so far. JEP 286 is probably the most exciting one for most Java developers: Local variable type inference (which we’ve blogged about before). You can read the JEP yourself, or just go get the early access release and play around with it.

One of the nice things about this new feature is the fact that we now get access to non-denotable types that were previously rather clumsy to work with. For example, this is now possible:
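var o = new Object() {
    int a = 42;
};

// "o" has the anonymous class's type, so its member is accessible:
System.out.println(o.a);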

The type of “o” is non-denotable: we cannot give it a name (we could assign it to an Object variable, though, which would be useless). But the new “var” keyword can “capture” it (my wording) to make it usable within a local scope. This could already be done prior to Java 10, when chaining methods (or attribute references).

A rarely used feature is methods in anonymous classes that do not override / implement a super type’s method. They are available only in a very narrow scope. Prior to Java 10, we could call either m() or n() on such a class, but not both, using the following syntax:

(new Object() {
    void m() { 
        System.out.println("m"); 
    }
    void n() { 
        System.out.println("n"); 
    }
}).m();

// Now, how to call n()?

So, again, this is like “chaining methods”, where the m() call is chained to the constructor call.

The language feature of adding methods to anonymous classes wasn’t too useful. Only one method could be called from the “outside” of the anonymous class, as the instance reference is lost right after the call. With Java 10, we can assign the whole expression to a local variable, without losing the anonymous type.
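For example:

var obj = new Object() {
    void m() {
        System.out.println("m");
    }
    void n() {
        System.out.println("n");
    }
};

// Both methods are now reachable through the variable:
obj.m();
obj.n();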

On a side-note, Java always had a funky and weird love-hate relationship with structural typing, trying to be a mostly nominally typed language. Yet, as we can see in this example, another new kind of structural type has snuck into the language. Cool!

What does this mean for jOOQ?

jOOQ has some cool types. Just look at the API:
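For instance, DSLContext.select() is overloaded once per degree, roughly like this (simplified):

<T1, T2> SelectSelectStep<Record2<T1, T2>> select(
    SelectField<T1> field1, SelectField<T2> field2);

<T1, T2, T3> SelectSelectStep<Record3<T1, T2, T3>> select(
    SelectField<T1> field1, SelectField<T2> field2, SelectField<T3> field3);

// ... and so on, up to degree 22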

Ultimately, depending on how many columns you want to project in your SELECT statement, you’ll get a different RecordN<T1, T2, ..., TN> type (N being the degree), e.g.

for (Record3<String, String, String> r : using(con)
        .select(c.TABLE_SCHEMA, c.TABLE_NAME, c.COLUMN_NAME)
        .from(c))
  System.out.println(
    r.value1() + "." + r.value2() + "." + r.value3());

What’s nice is the fact that there is record-level type safety, i.e. you know that the record has 3 columns and that they’re all of type String. What’s less nice is that in order to profit from this type safety, you have to actually write down the type, which can get laborious (both when writing and when reading it), e.g. when you select 16 columns or more.

Java 10 changes this. It’s now possible to simply write

for (var r : using(con)
        .select(c.TABLE_SCHEMA, c.TABLE_NAME, c.COLUMN_NAME)
        .from(c))
  System.out.println(
    r.value1() + "." + r.value2() + "." + r.value3());

I.e. using the keyword “var” (or “final var”, if you prefer) to create the loop variable. And it will still be type safe. For instance, you cannot call r.value4() on it:

jshell> for (var r : using(con)
   ...>         .select(c.TABLE_SCHEMA, c.TABLE_NAME, c.COLUMN_NAME)
   ...>         .from(c))
   ...>   System.out.println(r.value1() + "." + r.value2() + "." + r.value4());
|  Error:
|  cannot find symbol
|    symbol:   method value4()
|      System.out.println(r.value1() + "." + r.value2() + "." + r.value4());
|                                                               ^------^

This isn’t a game changer, but for folks coming from Kotlin or Scala, it is a big relief to see that this option is now given to Java developers too.

And this isn’t just useful for results in jOOQ. You can also use it for creating dynamic SQL, e.g.:

// Create a subquery listing all tables called TABLES in any schema
var subq = select(t.TABLE_SCHEMA, t.TABLE_NAME)
          .from(t)
          .where(t.TABLE_NAME.eq("TABLES"));

// Create a predicate that uses the above subquery:
var pred = row(c.TABLE_SCHEMA, c.TABLE_NAME).in(subq);

// use the above predicate in an actual query
var q = using(con).selectFrom(c).where(pred);
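The inferred types remain fully intact, so such a query can still be executed and consumed as usual:

// fetch() still returns a fully typed Result:
var result = q.fetch();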

So, clearly, this is going to be a really really useful Java release for jOOQ folks.

How to Avoid Excessive Sorts in Window Functions

Usually, this blog is 100% pro window functions and advocates using them at every occasion. But like any tool, window functions come at a price, and we must carefully evaluate whether that’s a price we’re willing to pay. That price can be a sort operation. And as we all know, sort operations are expensive. They follow O(n log n) complexity, which must be avoided at all costs for large data sets.

In a previous post, I’ve described how to calculate a running total with window functions (among other ways). In this post, we’re going to calculate the cumulative revenue at each payment in our Sakila database.

SELECT
  customer_id,
  payment_date,
  amount,
  SUM(amount) OVER (
    PARTITION BY customer_id
    ORDER BY payment_date, payment_id
  ) cumulative_amount
FROM payment
ORDER BY customer_id, payment_date, payment_id;

The above will yield something like this:

customer_id |payment_date        |amount |cumulative_amount 
------------|--------------------|-------|------------------
1           |2005-05-25 11:30:37 |2.99   |2.99              
1           |2005-05-28 10:35:23 |0.99   |3.98              
1           |2005-06-15 00:54:12 |5.99   |9.97              
1           |2005-06-15 18:02:53 |0.99   |10.96             
1           |2005-06-15 21:08:46 |9.99   |20.95             
1           |2005-06-16 15:18:57 |4.99   |25.94             
...

As can be seen, in spreadsheet notation, cumulative_amount[N] = cumulative_amount[N-1] + amount (e.g. 9.97 = 3.98 + 5.99 in the third row).

Reusing this calculation in several queries

As in any other language, we don’t want to repeat ourselves, so the SQL way of doing DRY is to create a view or a table valued function. Let’s create a view, first. Something like this:

CREATE VIEW payment_with_revenue AS
SELECT
  customer_id,
  payment_date,
  amount,
  SUM(amount) OVER (
    PARTITION BY customer_id
    ORDER BY payment_date, payment_id
  ) cumulative_amount
FROM payment;

Now, we can do nice things like this:

SELECT 
  customer_id,
  payment_date,
  amount,
  cumulative_amount
FROM payment_with_revenue
WHERE customer_id IN (1, 2, 3)
AND payment_date 
  BETWEEN DATE '2005-05-25'
  AND     DATE '2005-05-29'
ORDER BY customer_id, payment_date;

yielding:

customer_id |payment_date        |amount |cumulative_amount 
------------|--------------------|-------|------------------
1           |2005-05-25 11:30:37 |2.99   |2.99              
1           |2005-05-28 10:35:23 |0.99   |3.98              
2           |2005-05-27 00:09:24 |4.99   |4.99              
3           |2005-05-27 17:17:09 |1.99   |1.99              

What about performance?

Now, if we have an index on (CUSTOMER_ID, PAYMENT_DATE), we’d expect to be able to use it, right? Because our predicates seem quite selective:

SELECT 
  count(*),
  count(*) FILTER (
    WHERE customer_id IN (1, 2, 3)
  ),
  count(*) FILTER (
    WHERE customer_id IN (1, 2, 3)
    AND payment_date < DATE '2005-05-29'
  ) 
FROM payment;

yielding:

count |count |count 
------|------|-----
16049 |85    |4     

(To learn more about the cool FILTER clause, read this article here)

How could we best use the index? Let’s look again at our original query, but this time, with an inlined view (“inlined”):

SELECT 
  customer_id,
  payment_date,
  amount,
  cumulative_amount
FROM (
  SELECT
    customer_id,
    payment_date,
    amount,
    SUM(amount) OVER (
      PARTITION BY customer_id
      ORDER BY payment_date, payment_id
    ) cumulative_amount
  FROM payment
) inlined
WHERE customer_id IN (1, 2, 3)
AND payment_date 
  BETWEEN DATE '2005-05-25'
  AND     DATE '2005-05-29'
ORDER BY customer_id, payment_date;

We should be able to apply two transformations so that the query can benefit from the index:

CUSTOMER_ID IN (1, 2, 3) predicate

The CUSTOMER_ID IN (1, 2, 3) predicate should be pushed down into the view, “past” the window function, because it does not affect the window function calculation, which partitions the data set by CUSTOMER_ID. By being pushed “past” the window function, I mean that window functions are calculated late in the logical order of a SELECT statement’s clauses.

This means that our original query should be equivalent to this one:

SELECT 
  customer_id,
  payment_date,
  amount,
  cumulative_amount
FROM (
  SELECT
    customer_id,
    payment_date,
    amount,
    SUM(amount) OVER (
      PARTITION BY customer_id
      ORDER BY payment_date, payment_id
    ) cumulative_amount
  FROM payment
  WHERE customer_id IN (1, 2, 3) -- Pushed down
) inlined
WHERE payment_date 
  BETWEEN DATE '2005-05-25'
  AND     DATE '2005-05-29'
ORDER BY customer_id, payment_date;

The PAYMENT_DATE predicate

The PAYMENT_DATE predicate is a bit trickier. It cannot be pushed “past” the window function completely, because that would alter the semantics of the window function, which calculates the cumulative amount in the RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW frame (which is the default, if we do not specify it).
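Spelled out explicitly, the window specification from above reads:

SUM(amount) OVER (
  PARTITION BY customer_id
  ORDER BY payment_date, payment_id
  RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
) cumulative_amount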

But intuitively (and if you want to spend the time: formally as well), we can show that we can at least push the upper bound of our range predicate into the view: since the frame only extends backwards, rows beyond the upper bound can never contribute to the cumulative amount of any row we retain. Like this:

SELECT 
  customer_id,
  payment_date,
  amount,
  cumulative_amount
FROM (
  SELECT
    customer_id,
    payment_date,
    amount,
    SUM(amount) OVER (
      PARTITION BY customer_id
      ORDER BY payment_date, payment_id
    ) cumulative_amount
  FROM payment
  WHERE customer_id IN (1, 2, 3)
  AND payment_date <= DATE '2005-05-29' -- Pushed down
) inlined
WHERE payment_date >= DATE '2005-05-25'
ORDER BY customer_id, payment_date;

And now, we can profit from the index very easily! But is this transformation being done by any database? Unfortunately not. Some databases manage to push down the “more obvious” CUSTOMER_ID predicate past the window function, but none can do the same with the “less obvious” range predicate on PAYMENT_DATE:

DB2 LUW 10.5

The CUSTOMER_ID predicate is pushed down into the view, which generates an index scan (operation 8) on the pre-existing foreign key index (which doesn’t contain the PAYMENT_DATE column), but the PAYMENT_DATE predicate itself is only applied much later, using an in-memory filter (operation 2):

Explain Plan                                                       
-------------------------------------------------------------------
ID | Operation                       |                  Rows | Cost
 1 | RETURN                          |                       |   40
 2 |  FILTER                         |     4 of 80 (  5.00%) |   40
 3 |   TBSCAN                        |    80 of 80 (100.00%) |   40
 4 |    SORT                         |    80 of 80 (100.00%) |   40
 5 |     NLJOIN                      |               80 of 3 |   40
 6 |      TBSCAN GENROW              |      3 of 3 (100.00%) |    0
 7 |      FETCH PAYMENT              |    27 of 27 (100.00%) |   13
 8 |       IXSCAN IDX_FK_CUSTOMER_ID | 27 of 16049 (   .17%) |    6
                                                                   
Predicate Information                                              
 2 - RESID (Q5.PAYMENT_DATE <= '2005-05-29')                       
     RESID ('2005-05-25' <= Q5.PAYMENT_DATE)                       
 5 - JOIN (Q3.CUSTOMER_ID = Q2.$C0)                                
 8 - START (Q3.CUSTOMER_ID = Q2.$C0)                               
      STOP (Q3.CUSTOMER_ID = Q2.$C0)                               

Conversely, see the plan of the manually optimised query:

Explain Plan                                                  
--------------------------------------------------------------
ID | Operation                   |                 Rows | Cost
 1 | RETURN                      |                      |   40
 2 |  FILTER                     |     4 of 4 (100.00%) |   40
 3 |   TBSCAN                    |     4 of 4 (100.00%) |   40
 4 |    SORT                     |     4 of 4 (100.00%) |   40
 5 |     NLJOIN                  |               4 of 1 |   40
 6 |      TBSCAN GENROW          |     3 of 3 (100.00%) |    0
 7 |      FETCH PAYMENT          |     1 of 1 (100.00%) |   13
 8 |       IXSCAN IDX_PAYMENT_I1 | 1 of 16049 (   .01%) |    6
                                                              
Predicate Information                                         
 2 - RESID ('2005-05-25' <= Q5.PAYMENT_DATE)                  
 5 - JOIN (Q3.CUSTOMER_ID = Q2.$C0)                           
 8 - START (Q3.CUSTOMER_ID = Q2.$C0)                          
      STOP (Q3.CUSTOMER_ID = Q2.$C0)                          
      STOP (Q3.PAYMENT_DATE <= '2005-05-29')                  

This is certainly a better plan.

MySQL 8.0.2

MySQL, very regrettably, doesn’t seem to show any effort at all in optimising this. We’re accessing the entire payment table to get this result.

id   table        type  rows    filtered    Extra
-----------------------------------------------------------------------
1    <derived2>   ALL   16086    3.33       Using where
2    payment      ALL   16086  100.00       Using filesort

Here’s the manually optimised plan:

id   table        type  key             rows  filtered    Extra
-------------------------------------------------------------------------------
1    <derived2>   ALL                   4     3.33        Using where
2    payment      range idx_payment_i1  4      100.00     Using index condition

Oracle 12.2.0.1

Oracle also cannot do this beyond pushing the more obvious CUSTOMER_ID predicate into the view:

-------------------------------------------------------------------------------
| Id  | Operation                              | Name                 | Rows  |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                       |                      |       |
|*  1 |  VIEW                                  | PAYMENT_WITH_REVENUE |    80 |
|   2 |   WINDOW SORT                          |                      |    80 |
|   3 |    INLIST ITERATOR                     |                      |       |
|   4 |     TABLE ACCESS BY INDEX ROWID BATCHED| PAYMENT              |    80 |
|*  5 |      INDEX RANGE SCAN                  | IDX_FK_CUSTOMER_ID   |    80 |
-------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - filter(("PAYMENT_DATE">=TO_DATE('2005-05-25 00:00:00') AND 
              "PAYMENT_DATE"<=TO_DATE('2005-05-29 00:00:00')))
   5 - access(("CUSTOMER_ID"=1 OR "CUSTOMER_ID"=2 OR "CUSTOMER_ID"=3))

The manually optimised plan looks better:

-------------------------------------------------------------------------
| Id  | Operation                              | Name           | Rows  |
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT                       |                |       |
|*  1 |  VIEW                                  |                |     1 |
|   2 |   WINDOW SORT                          |                |     1 |
|   3 |    INLIST ITERATOR                     |                |       |
|   4 |     TABLE ACCESS BY INDEX ROWID BATCHED| PAYMENT        |     1 |
|*  5 |      INDEX RANGE SCAN                  | IDX_PAYMENT_I1 |     1 |
-------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - filter("PAYMENT_DATE">=TO_DATE('2005-05-25 00:00:00'))
   5 - access(("CUSTOMER_ID" IN (1, 2, 3)) AND 
              "PAYMENT_DATE"<=TO_DATE('2005-05-29 00:00:00'))

Much better cardinality estimates!

PostgreSQL 10

PostgreSQL’s version of the Sakila database uses a partitioned payment table, but that should be irrelevant for this analysis. The CUSTOMER_ID predicate could be pushed down…

QUERY PLAN                                                                                          
---------------------------------------------------------------------------------------------------
Subquery Scan on payment_with_revenue  (cost=117.06..124.45 rows=8 width=52)                       
  Filter: ((payment_date >= '2005-05-25') AND (payment_date <= '2005-05-29'))
-> WindowAgg  (cost=117.06..121.49 rows=197 width=56)                                               
   -> Sort  (cost=117.06..117.55 rows=197 width=24)                                              
      Sort Key: payment.customer_id, payment.payment_date, payment.payment_id                  
      -> Result  (cost=0.29..109.55 rows=197 width=24)                                        
         -> Append  (cost=0.29..107.58 rows=197 width=24)                                  
            -> Index Scan using idx_fk.. on payment  (cost=0.29..18.21 rows=77 width=20)
               Index Cond: (customer_id = ANY ('{1,2,3}'::integer[]))
            -> Bitmap Heap Scan on payment_p2007_01  (cost=4.62..14.90 rows=20 width=26)
               Recheck Cond: (customer_id = ANY ('{1,2,3}'::integer[]))
               -> Bitmap Index Scan on idx_fk.. (cost=0.00..4.61 rows=20 width=0)
                  Index Cond: (customer_id = ANY ('{1,2,3}'::integer[]))
            -> Bitmap Heap Scan on payment_p2007_02  (cost=4.62..14.90 rows=20 width=26)
               Recheck Cond: (customer_id = ANY ('{1,2,3}'::integer[]))
               -> Bitmap Index Scan on idx_fk.. (cost=0.00..4.61 rows=20 width=0)
                  Index Cond: (customer_id = ANY ('{1,2,3}'::integer[]))
              ...

But manual optimisation is required to get better behaviour for the date range:

QUERY PLAN                                                                                           
-----------------------------------------------------------------------------------------------------
Subquery Scan on inlined  (cost=18.46..18.56 rows=3 width=48)                                        
  Filter: (inlined.payment_date >= '2005-05-25'::date)                    
-> WindowAgg  (cost=18.46..18.52 rows=3 width=52)                                                 
   -> Sort  (cost=18.46..18.46 rows=3 width=20)                                                
      Sort Key: payment.customer_id, payment.payment_date, payment.payment_id                
      -> Result  (cost=0.29..18.43 rows=3 width=20)                                         
         -> Append  (cost=0.29..18.40 rows=3 width=20)                                   
            -> Index Scan using idx_fk.. on payment  (cost=0.29..18.40 rows=3 width=20)
                Index Cond: (customer_id = ANY ('{1,2,3}'::integer[]))
                Filter: (payment_date <= '2005-05-29'::date)

Interestingly, the index still isn’t used optimally on both columns, which has nothing to do with the current discussion on window functions. PostgreSQL seems to be unable to think of the IN predicate as an equality predicate. See also this article about other optimisations (such as predicate merging) that are not possible (yet) in PostgreSQL.

But still, this is much better as it brings down the estimated cardinalities (in case this query is a subquery in a more sophisticated context), and more importantly, it filters out many, many rows prior to calculating the window function.

SQL Server 2014

Another database that cannot push down this predicate past the window function optimally. Only the “obvious” part is pushed down:

|--Sort(ORDER BY:([payment_date] ASC))
   |--Filter(WHERE:([payment_date]>='2005-05-25' AND [payment_date]<='2005-05-29'))
      |--Compute Scalar(DEFINE:([Expr1003]=CASE WHEN [Expr1004]=(0) THEN NULL ELSE [Expr1005] END))
         |--Stream Aggregate(GROUP BY:([WindowCount1009]) DEFINE:(..))
            |--Window Spool(RANGE BETWEEN:(UNBOUNDED, [[payment_date], [payment_id]]))
               |--Segment
                  |--Segment
                     |--Sort(ORDER BY:([customer_id] ASC, [payment_date] ASC, [payment_id] ASC))
                        |--Table Scan(OBJECT:([payment]), WHERE:([customer_id] IN (1, 2, 3)))

Interestingly, this doesn’t even use the index at all, but at least the data is filtered out prior to the calculation that relies on sorting. With the manual optimisation, we again get the same, much better effect:

|--Filter(WHERE:([payment_date]>='2005-05-25'))
   |--Compute Scalar(DEFINE:([Expr1003]=CASE WHEN [Expr1004]=(0) THEN NULL ELSE [Expr1005] END))
      |--Stream Aggregate(GROUP BY:([WindowCount1011]) DEFINE:(..))
         |--Window Spool(RANGE BETWEEN:(UNBOUNDED, [[payment_date], [payment_id]]))
            |--Segment
               |--Segment
                  |--Sort(ORDER BY:([payment_date] ASC, [payment_id] ASC))
                     |--Nested Loops(Inner Join, OUTER REFERENCES:([Bmk1000]))
                        |--Nested Loops(Inner Join, OUTER REFERENCES:([Expr1007], [Expr1008], [Expr1006]))
                        |  |--Compute Scalar(DEFINE:(([Expr1007],[Expr1008],[Expr1006])=GetRangeWithMismatchedTypes(NULL,'2005-05-29',(42))))
                        |  |  |--Constant Scan
                        |  |--Index Seek(OBJECT:([idx_payment_i1]), SEEK:([customer_id] IN (1, 2, 3) AND [payment_date] > [Expr1007] AND [payment_date] < [Expr1008]))
                        |--RID Lookup(OBJECT:([payment]))

Certainly, this is a bit cryptic to read but it really means the same thing as always: The manual optimisation worked and we got a better plan.

Meh, does it matter?

I hope so! Let’s benchmark these things against each other! Some info about our benchmarking technique can be found in our previous post and on this page here. Specifically, we don’t publish actual execution times, only relative times within the benchmark, as we do not want to compare databases against each other but only against themselves.

DB2 LUW 10.5

RUN |STMT |RATIO  |
----|-----|-------|
1   |1    |3.0890 |
1   |2    |1.2272 |
2   |1    |3.0624 |
2   |2    |1.0100 |
3   |1    |3.0389 |
3   |2    |1.0000 |
4   |1    |3.1566 |
4   |2    |1.0948 |
5   |1    |3.1817 |
5   |2    |1.0905 |

The manually optimised statement is 3x faster in our benchmark. Do bear in mind that we’re operating on a rather small data set of a few thousand rows! This difference gets worse on larger data sets.

MySQL 8.0.2

The difference is devastating in MySQL 8.0.2, which just recently introduced window functions. Surely, the MySQL team will be able to apply some further optimisations prior to GA – I’ve filed an issue for review:

RUN |STMT |RATIO    |
----|-----|---------|
0   |1    |431.1905 |
0   |2    |1.0000   |
1   |1    |372.4286 |
1   |2    |1.0000   |
2   |1    |413.4762 |
2   |2    |1.0000   |
3   |1    |381.2857 |
3   |2    |1.0000   |
4   |1    |400.1429 |
4   |2    |1.2857   |

Oracle 12.2.0.1

Another factor 4x can be observed in Oracle:

Run 1, Statement 1 : 4.58751
Run 1, Statement 2 : 1.37639
Run 2, Statement 1 : 4.71833
Run 2, Statement 2 : 1.03693
Run 3, Statement 1 : 4.05729
Run 3, Statement 2 : 1.04719
Run 4, Statement 1 : 3.86653
Run 4, Statement 2 : 1
Run 5, Statement 1 : 3.99603
Run 5, Statement 2 : 1.0212

PostgreSQL 10

PostgreSQL is quite bad too, here. A factor 7x can be observed:

RUN 1, Statement 1: 7.23373
RUN 1, Statement 2: 1.01438
RUN 2, Statement 1: 6.62028
RUN 2, Statement 2: 1.26183
RUN 3, Statement 1: 8.40322
RUN 3, Statement 2: 1.04074
RUN 4, Statement 1: 6.33401
RUN 4, Statement 2: 1.06750
RUN 5, Statement 1: 6.41649
RUN 5, Statement 2: 1.00000

SQL Server 2014

Another very significant penalty in SQL Server for the unoptimised version:

Run 1, Statement 1: 29.50000
Run 1, Statement 2: 1.07500
Run 2, Statement 1: 28.15000
Run 2, Statement 2: 1.00000
Run 3, Statement 1: 28.00000
Run 3, Statement 2: 1.00000
Run 4, Statement 1: 28.00000
Run 4, Statement 2: 1.00000
Run 5, Statement 1: 31.07500
Run 5, Statement 2: 1.00000

Bad news for views. Is there a better solution?

This is rather bad news for window functions inside of reusable views. None of the databases, not even DB2 or Oracle, can push down range predicates past a derived table’s window function if the column that is part of the range predicate doesn’t correspond to the window function’s PARTITION BY clause.

The problem described above can be easily fixed when the query is written manually, expanding all possible views into their calling SQL, but that kind of sucks – we’d love to make our code reusable. There’s one solution in databases that support inline table valued functions. Among the tested databases, these include:

  • DB2
  • PostgreSQL
  • SQL Server

MySQL doesn’t have table valued functions, and Oracle’s (very regrettably) are not inlineable because they have to be written in PL/SQL.

Here’s how to write these functions:

DB2

Function definition:

CREATE OR REPLACE FUNCTION f_payment_with_revenue (
  p_customer_id BIGINT,
  p_from_date DATE,
  p_to_date DATE
)
RETURNS TABLE (
  customer_id BIGINT,
  payment_date DATE,
  amount DECIMAL(10, 2),
  cumulative_amount DECIMAL(10, 2)
)
LANGUAGE SQL
RETURN
SELECT *
FROM (
  SELECT
    customer_id,
    payment_date,
    amount,
    SUM(amount) OVER (
      PARTITION BY customer_id
      ORDER BY payment_date, payment_id
    ) cumulative_amount
  FROM payment
  WHERE customer_id = p_customer_id
  AND payment_date <= p_to_date
) t
WHERE payment_date >= p_from_date;

Function call:

SELECT 
  payment_date,
  amount,
  cumulative_amount
FROM (
  SELECT customer_id FROM customer WHERE customer_id IN (1, 2, 3)
) c(customer_id),
TABLE(sakila.f_payment_with_revenue(
  c.customer_id,
  CAST('2005-05-25' AS DATE),
  CAST('2005-05-29' AS DATE)
))
ORDER BY payment_date;

Execution plan:

Explain Plan                                                    
----------------------------------------------------------------
ID | Operation                     |                 Rows | Cost
 1 | RETURN                        |                      |   33
 2 |  TBSCAN                       |     4 of 4 (100.00%) |   33
 3 |   SORT                        |     4 of 4 (100.00%) |   33
 4 |    NLJOIN                     |               4 of 1 |   33
 5 |     NLJOIN                    |               3 of 1 |   20
 6 |      TBSCAN GENROW            |     3 of 3 (100.00%) |    0
 7 |      IXSCAN PK_CUSTOMER       |   1 of 599 (   .17%) |    6
 8 |     FILTER                    |     1 of 1 (100.00%) |   13
 9 |      TBSCAN                   |     1 of 1 (100.00%) |   13
10 |       SORT                    |     1 of 1 (100.00%) |   13
11 |        FETCH PAYMENT          |     1 of 1 (100.00%) |   13
12 |         IXSCAN IDX_PAYMENT_I1 | 1 of 16049 (   .01%) |    6
                                                                
Predicate Information                                           
  5 - JOIN (Q3.CUSTOMER_ID = Q2.$C0)                            
  7 - START (Q3.CUSTOMER_ID = Q2.$C0)                           
       STOP (Q3.CUSTOMER_ID = Q2.$C0)                           
  8 - RESID ('2005-05-25' <= Q6.PAYMENT_DATE)                   
 12 - START (Q4.CUSTOMER_ID = Q3.CUSTOMER_ID)                   
       STOP (Q4.CUSTOMER_ID = Q3.CUSTOMER_ID)                   
       STOP (Q4.PAYMENT_DATE <= '2005-05-29')                   

Much better!

Benchmark result (Statement 1 = function call, Statement 2 = manually optimised):

RUN |STMT |RATIO  |
----|-----|-------|
1   |1    |1.5945 |
1   |2    |1.0080 |
2   |1    |1.6310 |
2   |2    |1.0768 |
3   |1    |1.5827 |
3   |2    |1.0090 |
4   |1    |1.5486 |
4   |2    |1.0084 |
5   |1    |1.5569 |
5   |2    |1.0000 |

Definitely a huge improvement. The comparison might not be entirely fair because:

  • CROSS APPLY / LATERAL unnesting tends to generate nested loops that could be written more optimally with a classic join
  • We have an additional auxiliary customer table access (which could probably be tuned away with another rewrite)

PostgreSQL

Function definition:

CREATE OR REPLACE FUNCTION f_payment_with_revenue (
  p_customer_id BIGINT,
  p_from_date DATE,
  p_to_date DATE
)
RETURNS TABLE (
  customer_id SMALLINT,
  payment_date TIMESTAMP,
  amount DECIMAL(10, 2),
  cumulative_amount DECIMAL(10, 2)
)
AS $$
SELECT *
FROM (
  SELECT
    customer_id,
    payment_date,
    amount,
    SUM(amount) OVER (
      PARTITION BY customer_id
      ORDER BY payment_date, payment_id
    ) cumulative_amount
  FROM payment
  WHERE customer_id = p_customer_id
  AND payment_date <= p_to_date
) t
WHERE payment_date >= p_from_date
$$ LANGUAGE SQL;

Function call:

SELECT 
  payment_date,
  amount,
  cumulative_amount
FROM (
  SELECT customer_id FROM customer WHERE customer_id IN (1, 2, 3)
) c(customer_id)
CROSS JOIN LATERAL f_payment_with_revenue(
  c.customer_id,
  CAST('2005-05-25' AS DATE),
  CAST('2005-05-29' AS DATE)
)
ORDER BY payment_date;

Execution plan:

QUERY PLAN                                                                                    
----------------------------------------------------------------------------------------------
Sort  (cost=250.39..257.89 rows=3000 width=72)                                                
  Sort Key: f_payment_with_revenue.payment_date                                               
  ->  Nested Loop  (cost=0.53..77.13 rows=3000 width=72)                                      
        ->  Index Only Scan using customer_pkey on customer  (cost=0.28..16.88 rows=3 width=4)
              Index Cond: (customer_id = ANY ('{1,2,3}'::integer[]))                          
        ->  Function Scan on f_payment_with_revenue  (cost=0.25..10.25 rows=1000 width=72)    

Oops, no unnesting of the function is happening. The cardinality defaults to 1000. That’s bad news!

Benchmark result (Statement 1 = function call, Statement 2 = manually optimised):

RUN 1, Statement 1: 25.77538
RUN 1, Statement 2: 1.00000
RUN 2, Statement 1: 27.55197
RUN 2, Statement 2: 1.11581
RUN 3, Statement 1: 27.99331
RUN 3, Statement 2: 1.16463
RUN 4, Statement 1: 29.11022
RUN 4, Statement 2: 1.01159
RUN 5, Statement 1: 26.65781
RUN 5, Statement 2: 1.01654

Rats. This has gotten much worse than with the view. Not surprising, though. Table valued functions are not that good of an idea when they cannot be inlined! Oracle would have had a similar result, had I not been too lazy to translate my function to an ordinary PL/SQL table valued function, or a pipelined function.

SQL Server

Function definition:

CREATE FUNCTION f_payment_with_revenue (
  @customer_id BIGINT,
  @from_date DATE,
  @to_date DATE
)
RETURNS TABLE
AS RETURN
SELECT *
FROM (
  SELECT
    customer_id,
    payment_date,
    amount,
    SUM(amount) OVER (
      PARTITION BY customer_id
      ORDER BY payment_date, payment_id
    ) cumulative_amount
  FROM payment
  WHERE customer_id = @customer_id
  AND payment_date <= @to_date
) t
WHERE payment_date >= @from_date;

Function call:

SELECT 
  payment_date,
  amount,
  cumulative_amount
FROM (
  SELECT customer_id FROM customer WHERE customer_id IN (1, 2, 3)
) AS c(customer_id)
CROSS APPLY f_payment_with_revenue(
  c.customer_id,
  CAST('2005-05-25' AS DATE),
  CAST('2005-05-29' AS DATE)
)
ORDER BY payment_date;

Execution plan

|--Sort(ORDER BY:([payment_date] ASC))
   |--Nested Loops(Inner Join, OUTER REFERENCES:([customer_id]))
      |--Index Seek(OBJECT:([PK__customer__CD65CB84E826462D]), SEEK:([customer_id] IN (1, 2, 3)))
      |--Filter(WHERE:([payment_date]>='2005-05-25'))
         |--Compute Scalar(DEFINE:([Expr1006]=CASE WHEN [Expr1007]=(0) THEN NULL ELSE [Expr1008] END))
            |--Stream Aggregate(GROUP BY:([WindowCount1014]) DEFINE:(..)))
               |--Window Spool(RANGE BETWEEN:(UNBOUNDED, [[payment_date], [payment_id]]))
                  |--Segment
                     |--Segment
                        |--Sort(ORDER BY:([payment_date] ASC, [payment_id] ASC))
                           |--Nested Loops(Inner Join, OUTER REFERENCES:([Bmk1003]))
                              |--Nested Loops(Inner Join, OUTER REFERENCES:([Expr1010], [Expr1011], [Expr1009]))
                              |  |--Compute Scalar(DEFINE:(([Expr1010],[Expr1011],[Expr1009])=GetRangeWithMismatchedTypes(NULL,'2005-05-29',(42))))
                              |  |  |--Constant Scan
                              |  |--Index Seek(OBJECT:([idx_payment_i1]), SEEK:([customer_id]=CONVERT_IMPLICIT(bigint,[customer_id],0) AND [payment_date] > [Expr1010] AND [payment_date] < [Expr1011]))
                              |--RID Lookup(OBJECT:([payment]), SEEK:([Bmk1003]=[Bmk1003]))

Again, super unreadable IMO, but after looking a bit more closely, we can see that the plan is almost the same as the manually optimised one, and the predicate is applied early on, where it belongs.

Benchmark result (Statement 1 = function call, Statement 2 = manually optimised):

Run 1, Statement 1: 2.50000
Run 1, Statement 2: 1.27778
Run 2, Statement 1: 2.11111
Run 2, Statement 2: 1.27778
Run 3, Statement 1: 2.11111
Run 3, Statement 2: 1.00000
Run 4, Statement 1: 2.22222
Run 4, Statement 2: 1.11111
Run 5, Statement 1: 2.02778
Run 5, Statement 2: 1.19444

Conclusion

Window functions are super cool and powerful. But they come at a price. They sort your data. Normally, when we write complex queries and reuse parts in views, we can profit from predicate push down operations into derived tables and views, which is something that most databases support (see also our previous blog post about such optimisations).

But when it comes to using window functions, they act like a “fence”, past which only few predicates can be pushed automatically. It’s not that it wouldn’t be possible, it simply isn’t done very well by most databases (and in the case of MySQL, not at all as of 8.0.2).

Inline table valued functions can be a remedy to avoid manual building of complex queries, such that at least some parts of your logic can be reused among queries. Unfortunately, they rely on CROSS APPLY or LATERAL JOIN, which can also cause performance issues in more complex setups. Besides, among the databases covered in this article, only DB2 and SQL Server support inline table valued functions. Oracle doesn’t support SQL functions at all, and PostgreSQL’s SQL functions are not inlinable (yet), which means that in these databases, in order to tune such queries, you might not be able to reuse the parts that use window functions in views or stored functions.

However, as always, do measure. Perhaps, a 4x waste of performance for a particular query is OK.

Squeezing Another 10% Speed Increase out of jOOQ using JMC and JMH

In this post, we’re going to discuss a couple of recent efforts to squeeze roughly 10% in terms of speed out of jOOQ by iterating on hotspots that were detected using JMC (Java Mission Control) and then validated using JMH (Java Microbenchmark Harness). This post shows how to apply micro optimisations to algorithms where the smallest improvement can have a significant effect.

While JMH is probably without competition, JMC could easily be replaced by JProfiler, YourKit, or even your own manual jstack sampling. I’ll just use JMC because it ships with the JDK and is free for development use as of JDK 8 and 9 (if you’re unsure whether you’re “developing”, better ask Oracle). Rumours have it that JMC might be contributed to the OpenJDK in the near future.

Micro optimisations

Micro optimisations are a cool technique to squeeze a very small improvement out of a local algorithm (e.g. a loop). Because the local algorithm is called many times, that small improvement can have a significant effect on the entire application / library. This is absolutely the case in jOOQ, which is essentially a library that always runs 4 nested loops:

  1. S: A “loop” over all possible SQL statements
  2. E: A “loop” over all executions of such a statement
  3. R: A loop over all rows in the result
  4. C: A loop over all columns in a row

Such four-level nested loops result in what we could call a polynomial complexity of our algorithms. Even if we cannot call the complexity O(N^4) (as the 4 “N” are not all the same), it is certainly of O(S x E x R x C) (I’ll call this “S-E-R-C loops” further down). Even to the untrained eye, it becomes evident that anything that happens in the inner-most “C-loop” can have devastating effects. We had better not open any files in here that could instead be opened outside of, e.g., the “S-loop”.
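In code, the cost model looks like this (S, E, R, C and doWork() being hypothetical placeholders, not jOOQ API):

// Anything inside the inner-most loop runs S * E * R * C times,
// so even a tiny constant cost there dominates everything else.
for (int s = 0; s < S; s++)             // all statements
    for (int e = 0; e < E; e++)         // all executions of a statement
        for (int r = 0; r < R; r++)     // all rows in a result
            for (int c = 0; c < C; c++) // all columns in a row
                doWork(s, e, r, c);     // keep this as cheap as possible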

In a previous blog post, we’ve discussed common techniques of optimising such situations. In this blog post, we’ll look into a couple of concrete examples.

How to discover flaws in these loops?

We’re looking for the problems that affect all users, the kind of problem that, once fixed, will improve jOOQ’s performance for everyone by e.g. 10%. This is similar to what the JIT does, by performing things like stack allocation, inlining, which don’t drastically improve things locally, but do so globally, and for everyone. Here’s an interesting guest post by Tagir Valeev on JIT optimisation, and how good it is.

Getting a large “S-loop”

The first option is to run profiling sessions on benchmarks. We could, for example, run the entire “S-E-R-C loops” in a JMC profiling session, where the “S-loop” is a loop over all our statements, or in other words, over all our integration tests. Unfortunately, with this approach, our “E-loop” (in the case of jOOQ’s integration tests) is a single execution per statement. We’d have to run the integration tests many, many times in order to get meaningful results.

Also, while the jOOQ integration tests run thousands of distinct queries, most queries are still rather simple, each one focusing on an individual SQL feature (e.g. lateral join). In an end user application, queries might use less specific features, but are much more complex, i.e. they have a lot of ordinary joins.

This technique is useful to find problems that appear in all queries, deep down inside of jOOQ – e.g. at the JDBC interface. But we cannot use this approach to test individual features.

Getting a large “E-loop”

Another option is to write a single test that runs a few statements (small “S-loop”) many times in an explicit loop (large “E-loop”). This has the advantage that a specific bottleneck can be found with a high confidence, but the drawback is: It’s specific. For instance, if we find a small bottleneck in the string concatenation function, well, that is certainly worth fixing, but doesn’t affect most users.

This approach is useful to test individual features. It can also be useful for finding issues that affect all queries, but with a lower confidence than the previous case, where the “S-loop” is maximised.

Getting large “R-loops” and “C-loops”

Creating large result sets is easy and should definitely be part of such benchmarks, because in the case of a large result set, any flaw will multiply drastically, so fixing these things is worthwhile. However, these problems only affect actual result sets, not the query building process or the execution process. Sure, most statements are probably queries, not insertions / updates, etc. But this needs to be kept in mind.

Optimising for problems in large “E-loops”

All of the above scenarios are different optimisation sessions and deserve their own blog posts. In this post, I’m describing what has been discovered and fixed when running a single query 3 million times on an H2 database. The H2 database is chosen here because it can run in memory in the same process and thus has the least extra overhead compared to jOOQ – so jOOQ’s overhead contributions become significant in a profiling session / benchmark. In fact, it can be shown that in such a benchmark, jOOQ (or Hibernate, etc.) appears to perform quite poorly compared to a JDBC-only solution, as many have shown before.

This is an important moment to remind ourselves:

Benchmarks do not reflect real-world use cases! You will never run the exact same query 3 million times on a production system, and your production system doesn’t run on H2.

A benchmark profits from so much caching and buffering that a real workload would never perform as fast as the benchmark does.

Always be careful not to draw any wrong conclusions from a benchmark!

This needs to be said, so take every benchmark you find on the web with a grain of salt. This includes our own!

The query being profiled is:

ctx.select(
      AUTHOR.FIRST_NAME,
      AUTHOR.LAST_NAME,
      BOOK.ID,
      BOOK.TITLE)
   .from(BOOK)
   .join(AUTHOR).on(BOOK.AUTHOR_ID.eq(AUTHOR.ID))
   .where(BOOK.ID.eq(1))
   .and(BOOK.TITLE.isNull().or(BOOK.TITLE.ne(randomValue)));

The trivial query returns a ridiculous 4 rows and 4 columns, so the “R-loop” and “C-loops” are negligible. This benchmark is really testing the overhead of jOOQ query execution in a case where the database does not contribute much to the execution time. Again, in a real world scenario, you will get much more overhead from your database.

In the following sections, I’ll show a few minor bottlenecks that could be found when drilling down into such execution scenarios. As I’ve switched between JMC versions, the screenshots will not always be the same, I’m afraid.

1. Instance allocation of constant values

A very silly mistake was easily discovered right away.

The mistake didn’t contribute a whole lot of overhead, only 1.1% of the sampled time spent, but it made me curious. In version 3.10 of jOOQ, the SelectQueryImpl‘s Limit class, which encodes jOOQ’s OFFSET / LIMIT behaviour, kept allocating this DSL.val() thingy, which is a bind variable. Sure, limits do work with bind variables, but this happened when SelectQueryImpl was initialised, not when the LIMIT clause is added by the jOOQ API user.

As can be seen in the sources, the following logic was there:

private static final Field<Integer> ZERO              = zero();
private static final Field<Integer> ONE               = one();
private Field<Integer>              numberOfRowsOrMax = 
    DSL.inline(Integer.MAX_VALUE);

While the “special limits” ZERO and ONE were static members, the numberOfRowsOrMax value wasn’t. That’s the instantiation we were measuring in JMC. The member is not a constant, but the default value is. It is always initialised with Integer.MAX_VALUE wrapped in a DSL.inline() call. The solution is really simple:

private static final Param<Integer> MAX               = 
    DSL.inline(Integer.MAX_VALUE);
private Field<Integer>              numberOfRowsOrMax = MAX;

This is obviously better! Not only does it avoid the allocation of the bind variable, it also avoids the boxing of Integer.MAX_VALUE (which can also be seen in the sampling screenshot).

Note, a similar optimisation is available in the JDK’s ArrayList. When you look at the sources, you’ll see:

/**
 * Shared empty array instance used for empty instances.
 */
private static final Object[] EMPTY_ELEMENTDATA = {};

When you initialise an ArrayList without initial capacity, it will reference this shared instance, instead of creating a new, empty (or even non-empty) array. This delays the allocation of such an array until we actually add things to the ArrayList, just in case it stays empty.

jOOQ’s LIMIT is the same. Most queries might not have a LIMIT, so better not allocate that MAX_VALUE afresh!

This is done once per “E-loop” iteration

One issue down: https://github.com/jOOQ/jOOQ/issues/6635

2. Copying lists in internals

This is really a micro optimisation that you probably shouldn’t do in ordinary business logic. But it might be worthwhile in infrastructure logic, e.g. when you’re also in an “S-E-R-C loop”.

jOOQ (unfortunately) occasionally copies data around between arrays, e.g. wrapping Strings in jOOQ wrapper types, transforming numbers to strings, etc. These loops aren’t bad per se, but remember, we’re inside some level of the “S-E-R-C loop”, so these copying operations might be run hundreds of millions of times when we run a statement 3 million times.

This copying didn’t contribute a lot of overhead, and possibly the cloned object was stack allocated or the clone call eliminated by the JIT. But maybe it wasn’t. The QualifiedName class cloned its argument prior to returning it, to make sure that no accidental modifications would have any side effects:

private static final String[] nonEmpty(String[] qualifiedName) {
    String[] result;
    ...
    if (nulls > 0) {
        // Strip the null parts from the qualified name
        result = new String[qualifiedName.length - nulls];
        ...
    }
    else {
        // Defensive copy, even when there was nothing to strip
        result = qualifiedName.clone();
    }
    return result;
}

So, the implementation of the method guaranteed a new array as a result.

After a bit of analysis, it could be seen that there is only a single consumer of this method, and it doesn’t leave that consumer. So, it’s safe to remove the clone call. Probably, the utility was refactored from a more general purpose method into this local usage.

This is done several times per “E-loop” iteration

One more issue down: https://github.com/jOOQ/jOOQ/issues/6640

3. Running checks in loops

This one is too silly to be true.

There’s a costly overhead in the CombinedCondition constructor (<init> method). Notice how the samples drop from 0.47% to 0.32% between the constructor and the next method init(); that’s the time spent inside the constructor.

A tiny amount of time, but this time is spent every time someone combines two conditions / predicates with AND and OR. Every time. We can probably save this time. The problem is this:

CombinedCondition(Operator operator, Collection<? extends Condition> conditions) {
    ...
    for (Condition condition : conditions)
        if (condition == null)
            throw new IllegalArgumentException("The argument 'conditions' must not contain null");

    ...
    init(operator, conditions);
}

There’s a loop over the arguments to give some meaningful error messages. That’s a bit too defensive, I suspect. How about we simply live with the NPE when it arises, as it should be rather unexpected? (For context: jOOQ hardly ever checks parameters like this, so removing the check is also more consistent.)
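The fix presumably boils down to dropping that loop (a sketch, not the actual patch):

CombinedCondition(Operator operator, Collection<? extends Condition> conditions) {
    ...
    // No defensive null check anymore: a null condition will simply
    // surface as a NullPointerException later on.
    init(operator, conditions);
}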

This is done several times per “E-loop” iteration

One more issue down: https://github.com/jOOQ/jOOQ/issues/6666 (nice number)

4. Lazy initialisation of lists

The nature of the JDBC API forces us to work with ThreadLocal variables, very unfortunately, as it is not possible to pass arguments from parent SQLData objects to children, especially when we combine nesting of Oracle TABLE/VARRAY and OBJECT types.

In this analysis, we’re combining the profiler’s CPU sampling with its memory sampling.

In the CPU sampling view, we can see some overhead in the DefaultExecuteContext, which is instantiated once per “E-loop” iteration. Again, not a huge overhead, but let’s look at what this constructor does. It contributes to the overall allocations of ArrayList.

When we select the type in JMC, the other view will then display all the stack traces where ArrayList instances were allocated, among which, again, our dear DefaultExecuteContext constructor.

Where are those ArrayLists allocated? Right here:

BLOBS.set(new ArrayList<Blob>());
CLOBS.set(new ArrayList<Clob>());
SQLXMLS.set(new ArrayList<SQLXML>());
ARRAYS.set(new ArrayList<Array>());

Every time we start executing a query, we initialise a list for each ones of these types. All of our variable binding logic will then register any possibly allocated BLOB or CLOB, etc. such that we can clean these up at the end of the execution (a JDBC 4.0 feature that not everyone knows of!):

static final void register(Blob blob) {
    BLOBS.get().add(blob);
}
    
static final void clean() {
    List<Blob> blobs = BLOBS.get();

    if (blobs != null) {
        for (Blob blob : blobs)
            JDBCUtils.safeFree(blob);

        BLOBS.remove();
    }
    ...
}

Don’t forget calling Blob.free() et al, if you’re working with JDBC directly!
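For example, with plain JDBC (a minimal sketch; the table and column names are made up):

try (Statement stmt = connection.createStatement();
     ResultSet rs = stmt.executeQuery("SELECT data FROM t")) {
    while (rs.next()) {
        Blob blob = rs.getBlob(1);
        try {
            // ... consume blob.getBinaryStream() ...
        }
        finally {
            blob.free(); // JDBC 4.0: release LOB resources eagerly
        }
    }
}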

But the truth is, in most cases, we don’t really need these things. We need them only in Oracle, and only if we’re using TABLE / VARRAY or OBJECT types, due to some JDBC restrictions. Why punish all the users of other databases with this overhead? Instead of a sophisticated refactoring, which risks introducing regressions (https://github.com/jOOQ/jOOQ/issues/4205), we can simply initialise these lists lazily. We leave the clean() method as it is, remove the initialisation in the constructor, and replace the register() logic by this:

static final void register(Blob blob) {
    List<Blob> list = BLOBS.get();

    if (list == null) {
        list = new ArrayList<Blob>();
        BLOBS.set(list);
    }

    list.add(blob);
}

That was easy. And significant. Check out the new allocation measurements.

Note that every allocation, apart from the overhead of allocating things, also incurs additional overhead when the object is garbage collected. That’s a bit trickier to measure and correlate. In general, fewer allocations are almost always a good thing, except if the allocation is super short lived, in which case stack allocation can happen, or the logic can even be eliminated by the JIT.

This is done several times per “E-loop” iteration

One more issue down: https://github.com/jOOQ/jOOQ/issues/6669

5. Using String.replace()

This is mostly a JDK 8 problem: JDK 9 fixed string replacing by no longer relying on regular expressions internally. In JDK 8, however (and jOOQ still supports Java 6, so this is relevant), string replacement works through regular expressions, as can be seen here:
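JDK 8’s java.lang.String contains essentially this:

public String replace(CharSequence target, CharSequence replacement) {
    return Pattern.compile(target.toString(), Pattern.LITERAL).matcher(
            this).replaceAll(Matcher.quoteReplacement(replacement.toString()));
}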

The Pattern implementation allocates quite a few int[] instances, even if that’s probably not strictly needed for non-regex patterns such as those of String.replace().

I’ve already analysed this in a previous blog post, which can be seen here:

https://blog.jooq.org/2017/10/11/benchmarking-jdk-string-replace-vs-apache-commons-stringutils-replace/

This is done several times per “E-loop” iteration

One more issue down: https://github.com/jOOQ/jOOQ/issues/6672

6. Registering an SPI that is going to be inactive

This one was a bit trickier to solve as it relies on a deeper analysis. Unfortunately, I have no profiling screenshots available anymore, but it is easy to explain with code. There’s an internal ExecuteListeners utility, which abstracts over the ExecuteListener SPIs. Users can register such a listener and listen to query rendering, variable binding, query execution, and other lifecycle events. By default, users don’t register any such ExecuteListener, but there’s always one internal ExecuteListener:

private static ExecuteListener[] listeners(ExecuteContext ctx) {
    List<ExecuteListener> result = new ArrayList<ExecuteListener>();

    for (ExecuteListenerProvider provider : ctx.configuration()
                                               .executeListenerProviders())
        if (provider != null)
            result.add(provider.provide());

    if (!FALSE.equals(ctx.settings().isExecuteLogging()))
        result.add(new LoggerListener());

    return result.toArray(EMPTY_EXECUTE_LISTENER);
}

The LoggerListener is added by default, unless users turn off that feature. Which means:

  • We’ll pretty much always get this ArrayList
  • We’ll pretty much always loop over this list
  • We’ll pretty much always call this LoggerListener

But what does it do? It logs stuff on DEBUG and TRACE level. For instance:

@Override
public void executeEnd(ExecuteContext ctx) {
    if (ctx.rows() >= 0)
        if (log.isDebugEnabled())
            log.debug("Affected row(s)", ctx.rows());
}

That’s what it does by definition. It’s a debug logger. So, the improved logic for initialising this thing is the following:

private static final ExecuteListener[] listeners(ExecuteContext ctx) {
    List<ExecuteListener> result = null;

    for (ExecuteListenerProvider provider : ctx.configuration()
                                               .executeListenerProviders())
        if (provider != null)
            (result = init(result)).add(provider.provide());

    if (!FALSE.equals(ctx.settings().isExecuteLogging())) {
        if (LOGGER_LISTENER_LOGGER.isDebugEnabled())
            (result = init(result)).add(new LoggerListener());
    }

    return result == null ? null : result.toArray(EMPTY_EXECUTE_LISTENER);
}
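The init() helper isn’t shown here; it can be assumed to create the list lazily, on first use, roughly like this:

static <E> List<E> init(List<E> result) {
    return result != null ? result : new ArrayList<E>();
}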

We’re no longer allocating the ArrayList (that might be premature, the JIT might have rewritten this allocation to not happen, but OK), and we’re only adding the LoggerListener if DEBUG or TRACE logging is enabled for it, i.e. if it would do any work at all.

That’s just a couple of CPU cycles we can save on every execution. Again, I don’t have the profiling measurements anymore, but trust me. It helped.

This is done several times per “E-loop” iteration

One more issue down: https://github.com/jOOQ/jOOQ/issues/6747

7. Eager allocation where lazy allocation works

Sometimes, we need two different representations of the same information. The “raw” representation, and a more useful, pre-processed representation for some purposes. This was done, for instance, in QualifiedField:

private final Name          name;
private final Table<Record> table;

QualifiedField(Name name, DataType<T> type) {
    super(name, type);

    this.name = name;
    this.table = name.qualified()
        ? DSL.table(name.qualifier())
        : null;
}

@Override
public final void accept(Context<?> ctx) {
    ctx.visit(name);
}

@Override
public final Table<Record> getTable() {
    return table;
}

As can be seen, the name is really the beef of this class. It’s a qualified name that renders itself into the SQL string. The Table representation is useful when navigating the meta model, but this is hardly ever done by jOOQ’s internals and/or user facing code.

However, this eager initialisation is costly.

Quite a few UnqualifiedName[] arrays are allocated by the call to Name.qualifier(). We can easily make that table reference non-final and calculate it lazily:

private final Name              name;
private Table<Record>           table;

QualifiedField(Name name, DataType<T> type) {
    super(name, type);

    this.name = name;
}

@Override
public final Table<Record> getTable() {
    if (table == null)
        table = name.qualified() ? DSL.table(name.qualifier()) : null;

    return table;
}

Because name is final, we could call table “effectively final” (in a different meaning than the Java language’s) – we won’t have any thread safety issues because these particular types are immutable inside of jOOQ.

This is done several times per “E-loop” iteration

One more issue down: https://github.com/jOOQ/jOOQ/issues/6755

Results

Now, thus far, we’ve “improved” many low-hanging fruit based on a profiler session (that was run, ahem, from outside of Eclipse on a rather busy machine). This wasn’t very scientific. It was just tracking down “bottlenecks” which triggered my interest by having high enough numbers to even notice. This is called “micro optimisation”, and it is only worth the trouble if you’re in an “S-E-R-C loop”, meaning that the code you’re optimising is executed many, many times. For me, developing jOOQ, this is almost always the case, because jOOQ is a library used by a lot of people who all profit from these optimisations. In many other cases, this might be called “premature optimisation”.

But once we’ve optimised, we shouldn’t stop. I’ve done a couple of individual JMH benchmarks for many of the above problems, to see if they were really an improvement. But sometimes, in a JMH benchmark, something that doesn’t look like an improvement will still be an improvement in the bigger picture. The JVM doesn’t inline all methods 100 levels deep. If your algorithm is complex, perhaps a micro optimisation will still have an effect in the bigger picture, one that an isolated JMH benchmark cannot show.

Unfortunately this isn’t very exact science, but with enough intuition, you’ll find the right spots to optimise.

In my case, I verified progress over two patch releases: 3.10.0 -> 3.10.1 -> 3.10.2 (not yet released) by running a JMH benchmark over the entire query execution (including H2’s part). The results of applying roughly 15 of the above and similar optimisations (~2 days’ worth of effort) is:

JDK 9 (9+181)

jOOQ 3.10.0 Open Source Edition

Benchmark                          Mode   Cnt       Score      Error  Units
ExecutionBenchmark.testExecution   thrpt   21  101891.108 ± 7283.832  ops/s

jOOQ 3.10.2 Open Source Edition

Benchmark                          Mode   Cnt       Score      Error  Units
ExecutionBenchmark.testExecution   thrpt   21  110982.940 ± 2374.504  ops/s

JDK 8 (1.8.0_145)

jOOQ 3.10.0 Open Source Edition

Benchmark                          Mode   Cnt       Score      Error  Units
ExecutionBenchmark.testExecution   thrpt   21  110178.873 ± 2134.894  ops/s

jOOQ 3.10.2 Open Source Edition

Benchmark                          Mode   Cnt       Score      Error  Units
ExecutionBenchmark.testExecution   thrpt   21  118795.922 ± 2661.653  ops/s

As can be seen, in both JDK versions, we’ve gotten roughly a 10% speed increase. Interestingly, JDK 8 also seemed to be roughly 10% faster than JDK 9 in this benchmark, although this can be due to a variety of things that I haven’t considered yet, and which are out of scope for this discussion.

Conclusion

This iterative approach to tackling performance is definitely worth it for library authors:

  • run a representative benchmark (repeat a task millions of times)
  • profile it
  • track down “bottlenecks”
  • if they’re easy to fix without regression risk, do it
  • repeat
  • after a while, verify with JMH

Individual improvements are quite hard to measure, or measure correctly. But when you do 10-15 of them, they start adding up and become significant. 10% can make a difference.

Looking forward to your comments, alternative techniques, alternative tools, etc.!

If you liked this article, you will also like Top 10 Easy Performance Optimisations in Java

jOOQ Tuesdays: Nicolai Parlog Talks About Java 9

Welcome to the jOOQ Tuesdays series. In this series, we’ll publish an article on the third Tuesday of every other month where we interview someone we find exciting in our industry from a jOOQ perspective. This includes people who work with SQL, Java, Open Source, and a variety of other related topics.

I’m very excited to feature today Nicolai Parlog, author of The Java Module System.

Nicolai, your blog is an “archeological” treasure trove for everyone who wants to learn about why Java expert group decisions were made. What made you dig out all these interesting discussions on the mailing lists?

Ha, thank you, didn’t know I was sitting on a treasure.

It all started with everyone’s favorite bikeshed: Optional. After using it for a few months, I was curious to learn more about the reason behind its introduction to Java and why it was designed the way it was, so I started digging and learned a few things:

  • Pipermail, the JDK mailing list archive, is a horrible place to peruse and search.
  • Mailing list discussions are often lengthy, fragmented, and thus hard to revisit.
  • Brian Goetz was absolutely right: Everything related to Optional seems to take 300 messages.

Consequently, researching that post about Optional’s design took a week or so. But as you say, it’s interesting to peek behind the curtain and once a discussion is condensed to its most relevant positions and peppered with some context it really appeals to the wider Java community.

I actually think there’s a niche to be filled, here. Imagine there were a site that did regularly (at least once a week) what I did with a few selected topics: Follow the JDK mailing list, summarize ongoing discussions, and make them accessible to a wide audience. That would be a great service to the Java community as it would make it much easier to follow what is going on and to chime in with an informed opinion when you feel you have something to contribute. Now we just need to find someone with a lot of free time on their hands.

By the way, I think it’s awesome that the comparatively open development of the JDK makes that possible.

I had followed your blog after Java 8 came out, where you explained expert group decisions in retrospect. Now, you’re mostly covering what’s new in Java 9. What are your favourite “hidden” (i.e. non-Jigsaw) Java 9 features and why?

From the few language changes, it’s easy pickings: definitely private interface methods. I’ve been in the situation more than once that I wanted to share code between default methods but found no good place to put it without making it part of the public API. With private methods in interfaces, that’s a thing of the past.
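
To illustrate (a minimal sketch – the interface and method names are made up, not from the book):

interface Greeter {

    default String greetMorning(String name) {
        return greet("Good morning", name);
    }

    default String greetEvening(String name) {
        return greet("Good evening", name);
    }

    // Shared by the default methods above without becoming
    // part of the public API (Java 9+)
    private String greet(String greeting, String name) {
        return greeting + ", " + name + "!";
    }
}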

When it comes to API changes, the decision is much harder as there is more to choose from. People definitely like collection factory methods and I do, too, but I think I’ll go with the changes to Stream and Optional. I really enjoy using those Java 8 features and think it’s great that they’ve been improved in 9.

A JVM feature I really like are multi-release JARs. The ability to ship a JAR that uses the newest APIs, but degrades gracefully on older JVMs will come in very handy. Some projects, Spring for example, already do this, but without JVM support it’s not exactly pleasant.

Can I go on? Because there’s so much more! Just two: Unified logging makes it much easier to tease out JVM log messages without having to configure logging for different subsystems, and compact strings and indified string concatenation make working with strings faster, reduce garbage and conserve heap space (on average, 10% to 15% less memory!). Ok, that was three, but there you go.

You’re writing a book on the Java 9 module system that can already be pre-ordered on Manning. What will readers get out of your book?

All they need to become module system experts. Of course it explains all the basics (declaring, compiling, packaging, and running modular applications) and advanced features (services, implied readability, optional dependencies, etc.), but it goes far beyond that. Beyond how to use a feature, it also explains when and why to use it, which nuances to consider, and what good defaults are if you’re not sure which way to go.

It’s also full of practical advice. I migrated two large applications to Java 9 (compiling and running on the new release, not turning them into modules) and that experience, as well as the many discussions on the mailing list, informed a big chapter on migration. If readers are interested in a preview, I condensed it into a post on the most common Java 9 migration challenges. I also show how to debug modules and the module system with various tools (JDeps for example) and logging (that’s when I started using unified logging). Last but not least, I plan to include a chapter that simply lists error messages and what to do about them.

In your opinion, what are the good parts and the bad parts about Jigsaw? Do you think Jigsaw will be adopted quickly?

The good, the bad, and the ugly, eh? My favorite feature (of all of Java 9 actually) is strong encapsulation. The ability to have types that are public only within a module is incredibly valuable! This adds another option to the private-to-public axis and once people internalize that feature we will wonder how we ever lived without it. Can you imagine giving up private? We will think the same about exported.
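
For example, a module declaration along these lines (the module and package names are made up) hides everything outside the exported package, even public types:

module com.example.library {

    // Public types in com.example.library.api are visible to other modules
    exports com.example.library.api;

    // Public types in e.g. com.example.library.internal remain
    // inaccessible to other modules
}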

I hope the worst aspect of the module system will be the compatibility challenges. That’s a weird way to phrase it, but let me explain. These challenges definitely exist and they will require a non-negligible investment from the Java community as a whole to get everything working on Java 9, in the long run as modules. (As an aside: This is well invested time – much of it pays back technical debt.)

My hope is that no other aspect of the module system turns out to be worse. One thing I’m a little concerned about is the strictness of reliable configuration. I like the general principle and I’m definitely one for enforcing good practices, but just think about all those POMs that busily exclude transitive dependencies. Once all those JARs are modules, that won’t work – the module system will not let you launch without all dependencies present.

Generally speaking, the module system makes it harder to go against the maintainers’ decisions. Making internal APIs available via reflection or altering dependencies now goes against the grain of a mechanism that is built deeply into the compiler and JVM. There are of course a number of command line flags to affect the module system but they don’t cover everything. To come back to excluding dependencies, maybe --ignore-missing-modules ${modules} would be a good idea…

Regarding adoption rate, I expect it to be slower than Java 8. But leaving those projects aside that see every new version as insurmountable and are still on Java 6, I’m sure the vast majority will migrate eventually. If not for Java 9’s features, then surely for future ones. As a friend and colleague once said: “I’ll do everything to get to value types.”

Now that Java 9 is out and “legacy”, what Java projects will you cover next in your blog and your work?

Oh boy, I’m still busy with Java 9. First I have to finish the book (November hopefully) and then I want to do a few more migrations because I actually like doing that for some weird and maybe not entirely healthy reason (the things you see…). FYI, I’m for hire, so if readers are stuck with their migration they should reach out.

Beyond that, I’m already looking forward to primitive specialization, e.g. ArrayList<int>, and value types (both from Project Valhalla) as well as the changes Project Amber will bring to Java. I’m sure I’ll start discussing those in 2018.

Another thing I’ll keep myself busy with and which I would love your readers to check out is my YouTube channel. It’s still very young and until the book’s done I won’t do a lot of videos (hope to record one next week), but I’m really thrilled about the whole endeavour!

Benchmarking JDK String.replace() vs Apache Commons StringUtils.replace()

What’s better? Using the JDK’s String.replace() or something like Apache Commons Lang’s StringUtils.replace()?

In this article, I’ll compare the two, first in a profiling session using Java Mission Control (JMC), then in a benchmark using JMH, and we’ll see that Java 9 heavily improved things in this area.

Profiling using JMC

In a recent profiling session where I checked for any “obvious” bottlenecks in jOOQ, I’ve discovered this nasty regular expression pattern instantiation:

Tons of int[] instances were allocated by a regular expression pattern. That’s weird, because in general, inside of jOOQ’s internals, special care is always taken to pre-compile any regular expressions that are needed in static members, e.g.:

private static final Pattern TYPE_NAME_PATTERN = 
  Pattern.compile("\\([^\\)]*\\)");

This allows for using the Pattern in a far more optimal way than, e.g., by using String.replaceAll():

// Much better, pattern is pre-compiled
TYPE_NAME_PATTERN.matcher(castTypeName).replaceAll("")

// Much worse, pattern is compiled *every time*
castTypeName.replaceAll("\\([^\\)]*\\)", "")

That should be clear to everyone. The price to pay for this is the fact that the pattern is stored “far away” in some static member, rather than being visible right where it is used, which is a bit less readable. At least in my opinion.

SIDENOTE: People tend to get all angry about premature optimisation and such. Yes, these optimisations are micro optimisations and aren’t always worth the trouble. But this article is about jOOQ, a library that does a lot of expression tree transformations, and it is important for jOOQ to eliminate even 1% “bottlenecks”, as they make a difference. So, please read this article in this context.

Consider also our previous post about this subject: Top 10 Easy Performance Optimisations in Java

What was the problem in jOOQ?

Now, what appears to be obvious when using regular expressions seems less obvious when using ordinary, constant string replacements, such as when calling String.replace(CharSequence), as was done in the linked jOOQ issue #6672. The relevant piece of code was escaping all inline strings that are sent to the SQL database, to prevent syntax errors and, of course, SQL injection:

static final String escape(Object val, Context<?> context) {
    String result = val.toString();

    if (needsBackslashEscaping(context.configuration()))
        result = result.replace("\\", "\\\\");

    return result.replace("'", "''");
}

We’re always escaping apostrophes by doubling them, and in some databases (e.g. MySQL), we often have to escape backslashes as well (unfortunately, not all ORMs seem to do this or even be aware of this MySQL “feature”).
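
Just to illustrate the effect (a made-up value, not actual jOOQ code):

String val = "O'Reilly \\ Co";          // contains a single backslash

// MySQL-style escaping: backslashes first, then apostrophes
String escaped = val
    .replace("\\", "\\\\")
    .replace("'", "''");

System.out.println(escaped);            // O''Reilly \\ Co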

Unfortunately as well, despite heavy use of Apache Commons Lang’s StringUtils.replace() in jOOQ’s internals, every now and then a String.replace(CharSequence) sneaks in, because it’s just so convenient to write.

Meh, does it matter?

Usually, in ordinary business logic, it shouldn’t matter (again – don’t optimise prematurely), but in jOOQ, which is essentially a SQL string manipulation library, it can get quite costly if a single replace call is made excessively (for good reasons, of course) and is slower than it should be. And prior to Java 9, when this method was optimised, it was. I’ve done the profiling with Java 8, where internally, String.replace() uses a literal regex pattern (i.e. a pattern with a “literal” flag that is faster, but it is a pattern, nonetheless).
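
In fact, the Java 8 implementation is roughly this one-liner (paraphrased from the JDK sources):

public String replace(CharSequence target, CharSequence replacement) {
    // A java.util.regex.Pattern with the LITERAL flag: faster than a
    // regular pattern, but still a pattern, allocated on every call
    return Pattern
        .compile(target.toString(), Pattern.LITERAL)
        .matcher(this)
        .replaceAll(Matcher.quoteReplacement(replacement.toString()));
}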

Not only does the method appear as a major offender in the GC allocation view, it also triggers quite some action in the “hot methods” view of JMC:

Those are quite a few Pattern methods. The percentages have to be understood in the context of a benchmark, running millions of queries against an H2 in-memory database, so the overhead is significant!

Using Apache Commons Lang’s StringUtils

A simple fix is to use Apache Commons Lang’s StringUtils instead:

static final String escape(Object val, Context<?> context) {
    String result = val.toString();

    if (needsBackslashEscaping(context.configuration()))
        result = StringUtils.replace(result, "\\", "\\\\");

    return StringUtils.replace(result, "'", "''");
}

Now, the pressure has changed significantly. The int[] allocation is barely noticeable in comparison:

And much fewer Pattern calls are made, overall.

Benchmarking using JMH

Profiling can be very useful to spot bottlenecks, but it needs to be read with care. It introduces some artefacts and slight overheads, and it is not 100% accurate when sampling call stacks, which might lead to the wrong conclusions at times. This is why it is sometimes important to back claims by running an actual benchmark. And when benchmarking, please, don’t just loop 1 million times in a main() method. That will be very, very inaccurate, except for very obvious, order-of-magnitude scale differences.

I’m using JMH here, running the following simple benchmark:

package org.jooq.test.benchmark;

import org.apache.commons.lang3.StringUtils;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Warmup;
import org.openjdk.jmh.infra.Blackhole;

@Fork(value = 3, jvmArgsAppend = "-Djmh.stack.lines=3")
@Warmup(iterations = 5)
@Measurement(iterations = 7)
public class StringReplaceBenchmark {

    private static final String SHORT_STRING_NO_MATCH = "abc";
    private static final String SHORT_STRING_ONE_MATCH = "a'bc";
    private static final String SHORT_STRING_SEVERAL_MATCHES = "'a'b'c'";
    private static final String LONG_STRING_NO_MATCH = 
      "abcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabc";
    private static final String LONG_STRING_ONE_MATCH = 
      "abcabcabcabcabcabcabcabcabcabcabca'bcabcabcabcabcabcabcabcabcabcabcabcabc";
    private static final String LONG_STRING_SEVERAL_MATCHES = 
      "abcabca'bcabcabcabcabcabc'abcabcabca'bcabcabcabcabcabca'bcabcabcabcabcabcabc";

    @Benchmark
    public void testStringReplaceShortStringNoMatch(Blackhole blackhole) {
        blackhole.consume(SHORT_STRING_NO_MATCH.replace("'", "''"));
    }

    @Benchmark
    public void testStringReplaceLongStringNoMatch(Blackhole blackhole) {
        blackhole.consume(LONG_STRING_NO_MATCH.replace("'", "''"));
    }

    @Benchmark
    public void testStringReplaceShortStringOneMatch(Blackhole blackhole) {
        blackhole.consume(SHORT_STRING_ONE_MATCH.replace("'", "''"));
    }

    @Benchmark
    public void testStringReplaceLongStringOneMatch(Blackhole blackhole) {
        blackhole.consume(LONG_STRING_ONE_MATCH.replace("'", "''"));
    }

    @Benchmark
    public void testStringReplaceShortStringSeveralMatches(Blackhole blackhole) {
        blackhole.consume(SHORT_STRING_SEVERAL_MATCHES.replace("'", "''"));
    }

    @Benchmark
    public void testStringReplaceLongStringSeveralMatches(Blackhole blackhole) {
        blackhole.consume(LONG_STRING_SEVERAL_MATCHES.replace("'", "''"));
    }

    @Benchmark
    public void testStringUtilsReplaceShortStringNoMatch(Blackhole blackhole) {
        blackhole.consume(StringUtils.replace(SHORT_STRING_NO_MATCH, "'", "''"));
    }

    @Benchmark
    public void testStringUtilsReplaceLongStringNoMatch(Blackhole blackhole) {
        blackhole.consume(StringUtils.replace(LONG_STRING_NO_MATCH, "'", "''"));
    }

    @Benchmark
    public void testStringUtilsReplaceShortStringOneMatch(Blackhole blackhole) {
        blackhole.consume(StringUtils.replace(SHORT_STRING_ONE_MATCH, "'", "''"));
    }

    @Benchmark
    public void testStringUtilsReplaceLongStringOneMatch(Blackhole blackhole) {
        blackhole.consume(StringUtils.replace(LONG_STRING_ONE_MATCH, "'", "''"));
    }

    @Benchmark
    public void testStringUtilsReplaceShortStringSeveralMatches(Blackhole blackhole) {
        blackhole.consume(StringUtils.replace(SHORT_STRING_SEVERAL_MATCHES, "'", "''"));
    }

    @Benchmark
    public void testStringUtilsReplaceLongStringSeveralMatches(Blackhole blackhole) {
        blackhole.consume(StringUtils.replace(LONG_STRING_SEVERAL_MATCHES, "'", "''"));
    }
}

Notice that I tried to run 2 x 3 different string replacement scenarios:

  • The string is “short”
  • The string is “long”

Cross joining (there, finally some SQL in this post!) the above with:

  • No match is found
  • One match is found
  • Several matches are found

That’s important because different optimisations can be implemented for those different cases – and in jOOQ’s case, there is probably mostly no match.

I ran this benchmark once on Java 8:

$ java -version
java version "1.8.0_141"
Java(TM) SE Runtime Environment (build 1.8.0_141-b15)
Java HotSpot(TM) 64-Bit Server VM (build 25.141-b15, mixed mode)

And on Java 9:

$ java -version
java version "9"
Java(TM) SE Runtime Environment (build 9+181)
Java HotSpot(TM) 64-Bit Server VM (build 9+181, mixed mode)

Tagir Valeev was kind enough to remind me that this issue was supposed to be fixed in Java 9:

The results are:

Java 8

testStringReplaceLongStringNoMatch               thrpt   21    4809343.940 ±  66443.628  ops/s
testStringUtilsReplaceLongStringNoMatch          thrpt   21   25063493.793 ± 660657.256  ops/s

testStringReplaceLongStringOneMatch              thrpt   21    1406989.855 ±  43051.008  ops/s
testStringUtilsReplaceLongStringOneMatch         thrpt   21    6961669.111 ± 141504.827  ops/s

testStringReplaceLongStringSeveralMatches        thrpt   21    1103323.491 ±  17047.449  ops/s
testStringUtilsReplaceLongStringSeveralMatches   thrpt   21    3899108.777 ±  41854.636  ops/s

testStringReplaceShortStringNoMatch              thrpt   21    5936992.874 ±  68115.030  ops/s
testStringUtilsReplaceShortStringNoMatch         thrpt   21  171660973.829 ± 377711.864  ops/s

testStringReplaceShortStringOneMatch             thrpt   21    3267435.957 ± 240198.763  ops/s
testStringUtilsReplaceShortStringOneMatch        thrpt   21    9943846.428 ± 270821.641  ops/s

testStringReplaceShortStringSeveralMatches       thrpt   21    2313713.015 ±  28806.738  ops/s
testStringUtilsReplaceShortStringSeveralMatches  thrpt   21    5447065.933 ± 139525.472  ops/s

As can be seen, the difference is “catastrophic”. Apache Commons Lang’s StringUtils drastically outperforms the JDK’s String.replace() in every discipline, especially when no match is found in a short string! That’s because the library optimises for this particular case:

...
int end = searchText.indexOf(searchString, start);
if (end == INDEX_NOT_FOUND) {
    return text;
}

Java 9

Things look a bit different for Java 9:

testStringReplaceLongStringNoMatch               thrpt   21   55528132.674 ±  479721.812  ops/s
testStringUtilsReplaceLongStringNoMatch          thrpt   21   55767541.806 ±  754862.755  ops/s

testStringReplaceLongStringOneMatch              thrpt   21    4806322.839 ±  217538.714  ops/s
testStringUtilsReplaceLongStringOneMatch         thrpt   21    8366539.616 ±  142757.888  ops/s

testStringReplaceLongStringSeveralMatches        thrpt   21    2685134.029 ±   78108.171  ops/s
testStringUtilsReplaceLongStringSeveralMatches   thrpt   21    3923819.576 ±  351103.020  ops/s

testStringReplaceShortStringNoMatch              thrpt   21  122398496.629 ± 1350086.256  ops/s
testStringUtilsReplaceShortStringNoMatch         thrpt   21  121139633.453 ± 2756892.669  ops/s

testStringReplaceShortStringOneMatch             thrpt   21   18070522.151 ±  498663.835  ops/s
testStringUtilsReplaceShortStringOneMatch        thrpt   21   11367395.622 ±  153377.552  ops/s

testStringReplaceShortStringSeveralMatches       thrpt   21    7548407.681 ±  168950.209  ops/s
testStringUtilsReplaceShortStringSeveralMatches  thrpt   21    5045065.948 ±  175251.545  ops/s

Java 9’s implementation is now similar to that of Apache Commons, with the same optimisation for non-matches:

public String replace(CharSequence target, CharSequence replacement) {
    String tgtStr = target.toString();
    String replStr = replacement.toString();
    int j = indexOf(tgtStr);
    if (j < 0) {
        return this;
    }
    ...

It is still quite a bit slower for matches in long strings, but faster for matches in short strings. The tradeoff for jOOQ will be to still prefer Apache Commons because:

  • Most people are still on Java 8 or less, currently
  • Most replacements won’t match and both implementations fare equally well for that in Java 9, but Apache Commons is much faster for this category in Java 8
  • If there’s a match and thus a replacement, the speed depends on the string length, where the faster implementation is currently undecided

Conclusion

This micro optimisation stuff matters in jOOQ because jOOQ is a library that does a lot of SQL string manipulation. Every allocation and every CPU cycle that is wasted when manipulating SQL strings slows down the library, and thus impacts all of its users. In a situation like this, it is definitely worth considering not using these useful JDK String methods, and opting for the much faster Apache Commons implementations instead.

Things have improved a lot in Java 9, in which case this can mostly be ignored. But if you still need to support Java 8 (we still support Java 6 in our commercial distributions!), then this has to be considered.

10 Cool SQL Optimisations That do not Depend on the Cost Model

Cost Based Optimisation is the de-facto standard way to optimise SQL queries in most modern databases. It is the reason why it is really, really hard to implement a complex, hand-written algorithm in a 3GL (third generation programming language) such as Java that outperforms a dynamically calculated database execution plan that has been generated by a modern optimiser. I’ve recently delivered a talk about that topic:

Today, we don’t want to talk about cost based optimisation, i.e. optimisations that depend on a database’s cost model. We’ll look into much simpler optimisations that can be implemented purely based on meta data (e.g. constraints) and the query itself. They’re usually no-brainers for a database to optimise, because the optimisation will always lead to a better execution plan, independently of whether there are any indexes, or how much data you have, or how skewed your data distribution is.

So, they’re not no-brainers in the sense that they’re easy for the optimiser teams to implement, but they’re no-brainers in the sense that they should always be done.

These optimisations remove needless, optional work (as opposed to needless, mandatory work, which I’ve blogged about before).

Where do these optimisations apply?

Most of these optimisations are applied to:

  • Fix mistakes in queries
  • Allow for reusing complex views without actually executing the entire logic from the view

In the first case, you could claim: “Well, then fix the stupid SQL already”, but then again, who never makes any mistakes, right?

Specifically, the second case is really cool, as these optimisations allow us to build complex libraries of views and table valued functions, which we can reuse in several layers.
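
For instance (a made-up mini “view library”, not from Sakila itself; see also JOIN elimination in #3 below):

CREATE VIEW v_customer AS
SELECT c.first_name, c.last_name, a.address
FROM customer c
JOIN address a ON c.address_id = a.address_id;

-- A consumer that only needs the names. A database that supports
-- JOIN elimination never actually accesses the address table here:
SELECT first_name, last_name FROM v_customer;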

Databases being used

This post will evaluate 10 SQL optimisations on the 5 most popular RDBMS (according to the db-engines ranking):

  • Oracle 12.2
  • MySQL 8.0.2
  • SQL Server 2014
  • PostgreSQL 9.6
  • DB2 LUW 10.5

Throughout this article, I will be using queries against the Sakila database – as always.


These will be the 10 optimisation types:

  1. Transitive Closure
  2. Impossible Predicates and Unneeded Table Accesses
  3. JOIN Elimination
  4. Removing “Silly” Predicates
  5. Projections in EXISTS Subqueries
  6. Predicate Merging
  7. Provably Empty Sets
  8. CHECK Constraints
  9. Unneeded Self JOIN
  10. Predicate Pushdown

One final note before you move on: Many of the following examples might be too simple. Some databases (e.g. SQL Server) might not apply a specific optimisation on a query that is “too trivial”. See also the comments for details.

1. Transitive Closure

Let’s start with something simple: transitive closure. It’s a really trivial concept that applies to a variety of maths operations, e.g. to the equality operator. It can be said that if:

  • A = B and…
  • B = C

then:

  • A = C

Duh, right? But this has some nice implications on SQL optimisers.

Let’s look at an example. Let’s get all films for ACTOR_ID = 1:

SELECT first_name, last_name, film_id
FROM actor a
JOIN film_actor fa ON a.actor_id = fa.actor_id
WHERE a.actor_id = 1;

The result being:

FIRST_NAME      LAST_NAME  FILM_ID
PENELOPE        GUINESS    1
PENELOPE        GUINESS    23
PENELOPE        GUINESS    25
PENELOPE        GUINESS    106
PENELOPE        GUINESS    140
PENELOPE        GUINESS    166
...

Now, observe the execution plan if we run this query in Oracle:

--------------------------------------------------------------
| Id  | Operation                    | Name          | Rows  |
--------------------------------------------------------------
|   0 | SELECT STATEMENT             |               |       |
|   1 |  NESTED LOOPS                |               |    19 |
|   2 |   TABLE ACCESS BY INDEX ROWID| ACTOR         |     1 |
|*  3 |    INDEX UNIQUE SCAN         | PK_ACTOR      |     1 |
|*  4 |   INDEX RANGE SCAN           | PK_FILM_ACTOR |    19 |
--------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   3 - access("A"."ACTOR_ID"=1)
   4 - access("FA"."ACTOR_ID"=1)

Specifically, the predicate section is really interesting. The predicate ACTOR_ID = 1 is applied to both the ACTOR and FILM_ACTOR tables because of transitive closure. If:

  • A.ACTOR_ID = 1 (from the WHERE predicate) and…
  • A.ACTOR_ID = FA.ACTOR_ID (from the ON predicate)

Then:

  • FA.ACTOR_ID = 1

In other words, the query is rewritten to this:

SELECT first_name, last_name, film_id
FROM actor a
JOIN film_actor fa ON a.actor_id = fa.actor_id
WHERE a.actor_id = 1
AND fa.actor_id = 1;

Or in this particular case, even this, as the A.ACTOR_ID = 1 predicate ensures a single row from the actor table, so a cross join might do as well (at least that’s what the plan indicates):

SELECT first_name, last_name, film_id
FROM actor a
JOIN film_actor fa ON fa.actor_id = 1
WHERE a.actor_id = 1;

This has a few nice effects on more complex queries. In particular, the cardinality estimates will be much more precise this way, as we can pick the estimate based on a concrete, constant predicate value, rather than e.g. the average number of films per actor as in this query (which returns the same result):

SELECT first_name, last_name, film_id
FROM actor a
JOIN film_actor fa ON a.actor_id = fa.actor_id
WHERE first_name = 'PENELOPE'
AND last_name = 'GUINESS'

The plan being:

----------------------------------------------------------------------------
| Id  | Operation                            | Name                | Rows  |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |                     |       |
|   1 |  NESTED LOOPS                        |                     |     2 |
|*  2 |   TABLE ACCESS BY INDEX ROWID BATCHED| ACTOR               |     1 |
|*  3 |    INDEX RANGE SCAN                  | IDX_ACTOR_LAST_NAME |     3 |
|*  4 |   INDEX RANGE SCAN                   | PK_FILM_ACTOR       |    27 |
----------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - filter("A"."FIRST_NAME"='PENELOPE')
   3 - access("A"."LAST_NAME"='GUINESS')
   4 - access("A"."ACTOR_ID"="FA"."ACTOR_ID")

As you can see, the estimate for the number of FILM_ACTOR rows is too high, and the estimate for the NESTED LOOP result is too low. Here are some interesting numbers:

SELECT count(*) FROM film_actor WHERE actor_id = 1;

SELECT avg(c) FROM (
  SELECT count(*) c FROM film_actor GROUP BY actor_id
);

Resulting in:

19
27.315

That’s where those estimates come from. If the database knows we’re dealing with ACTOR_ID = 1, it can pick the statistics on the number of films for that actor. If it doesn’t know this (because our standard statistics don’t correlate FIRST_NAME / LAST_NAME with ACTOR_ID), then we get the average number of films for any actor. A simple, insignificant error in this particular case, but when that error propagates in a complex query, it can add up and lead to the wrong choice of JOIN down the line (or up the plan).

So, when you can, always design JOIN and ordinary predicates to profit from transitive closure.

What other databases support this?

DB2

Yes

Explain Plan                                               
-----------------------------------------------------------
ID | Operation              |                 Rows | Cost  
 1 | RETURN                 |                      |   13  
 2 |  NLJOIN                |              27 of 1 |   13  
 3 |   FETCH ACTOR          |     1 of 1 (100.00%) |    6  
 4 |    IXSCAN PK_ACTOR     |   1 of 200 (   .50%) |    0  
 5 |   IXSCAN PK_FILM_ACTOR | 27 of 5462 (   .49%) |    6  
                                                           
Predicate Information                                      
 4 - START (Q2.ACTOR_ID = 1)                               
      STOP (Q2.ACTOR_ID = 1)                               
 5 - START (1 = Q1.ACTOR_ID)                               
      STOP (1 = Q1.ACTOR_ID)                               

Btw, want cool execution plans like the above on DB2 LUW? Go visit Markus Winand’s script:
http://use-the-index-luke.com/s/last_explained

MySQL

Unfortunately, MySQL explain plans are not very useful for such analyses. We don’t really see the predicate itself in this output:

ID  SELECT TYPE  TABLE  TYPE   REF    ROWS
------------------------------------------
1   SIMPLE       a      const  const  1 
1   SIMPLE       fa     ref    const  19

But the fact that the REF column is two times “const” indicates that we’re scanning for a constant value in both tables. Conversely, the plan that queries for FIRST_NAME / LAST_NAME looks like this:

ID  SELECT TYPE  TABLE  TYPE   REF         ROWS
-----------------------------------------------
1   SIMPLE       a      ref    const       3 
1   SIMPLE       fa     ref    a.actor_id  27

And you can see that REF has now switched to a column reference from the JOIN predicate. The cardinality estimate is now almost the same as in Oracle.

So, yes, MySQL supports transitive closure, too.

PostgreSQL

Yes

QUERY PLAN                                                                          
------------------------------------------------------------------------------------
Nested Loop  (cost=4.49..40.24 rows=27 width=15)                                    
  ->  Seq Scan on actor a  (cost=0.00..4.50 rows=1 width=17)                        
        Filter: (actor_id = 1)                                                      
  ->  Bitmap Heap Scan on film_actor fa  (cost=4.49..35.47 rows=27 width=4)         
        Recheck Cond: (actor_id = 1)                                                
        ->  Bitmap Index Scan on film_actor_pkey  (cost=0.00..4.48 rows=27 width=0) 
              Index Cond: (actor_id = 1)                                            

SQL Server

Yes

  |--Nested Loops(Inner Join)
       |--Nested Loops(Inner Join)
       |    |--Index Seek (SEEK:([a].[actor_id]=(1)))
       |    |--RID Lookup
       |--Index Seek (SEEK:([fa].[actor_id]=(1)))

Summary

All databases can do transitive closure:

Database         Transitive closure
DB2 LUW 10.5     Yep
MySQL 8.0.2      Yep
Oracle 12.2.0.1  Yep
PostgreSQL 9.6   Yep
SQL Server 2014  Yep

Stay tuned, though, for #6. There are more complex cases of transitive closure, where not all databases get it right.

2. Impossible Predicates and Unneeded Table Accesses

This optimisation is really silly, but hey, why not. If users write impossible predicates, then why even execute them? Here are some examples:

-- "Obvious"
SELECT * FROM actor WHERE 1 = 0

-- "Subtle"
SELECT * FROM actor WHERE NULL = NULL

The first query should obviously never return any results, but the same is true for the second one, because while NULL IS NULL always yields TRUE, NULL = NULL evaluates to NULL, which has the same effect as FALSE according to three-valued logic.
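
A quick way to convince yourself (Oracle syntax shown, with made-up column aliases; on PostgreSQL or SQL Server, just drop the FROM clause):

SELECT
  CASE WHEN NULL = NULL  THEN 'true' ELSE 'not true' END AS null_eq, -- 'not true'
  CASE WHEN NULL IS NULL THEN 'true' ELSE 'not true' END AS null_is  -- 'true'
FROM dual;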

This doesn’t need much explanation, so let’s immediately jump to see which databases optimise this:

DB2

Yes

Explain Plan                       
-----------------------------------
ID | Operation      |   Rows | Cost
 1 | RETURN         |        |    0
 2 |  TBSCAN GENROW | 0 of 0 |    0

As you can see, the table access to the ACTOR table is completely eliminated from the plan. There’s only a GENROW operation, which generates zero rows. Perfect.

MySQL

Yes

ID  SELECT TYPE  TABLE   EXTRAS
-----------------------------------------
1   SIMPLE         Impossible WHERE

This time, MySQL has been so kind as to indicate that the WHERE clause is impossible. Thanks, that’s helpful when analysing – more so than the other databases.

Oracle

Yes

---------------------------------------------------------------
| Id  | Operation          | Name  | Starts | E-Rows | A-Rows |
---------------------------------------------------------------
|   0 | SELECT STATEMENT   |       |      1 |        |      0 |
|*  1 |  FILTER            |       |      1 |        |      0 |
|   2 |   TABLE ACCESS FULL| ACTOR |      0 |    200 |      0 |
---------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - filter(NULL IS NOT NULL)

Now, observe that the plan still shows the table access to the ACTOR table, and the estimated number of rows is still 200, but there’s a FILTER operation on Id=1, which can never be true. Because Oracle really really doesn’t like the SQL standard BOOLEAN type, they display NULL IS NOT NULL in the plan, rather than simply FALSE. Oh well 🙂

But seriously, do check for this predicate. I’ve been debugging 1000-line-long execution plan subtrees with super high costs before noticing that the entire subtree was “cut off” by NULL IS NOT NULL. A bit misleading, if you ask me.

PostgreSQL

Yes

QUERY PLAN                                 
-------------------------------------------
Result  (cost=0.00..0.00 rows=0 width=228) 
  One-Time Filter: false                   

That’s nicer. No noisy ACTOR access and a nice little FALSE predicate.

SQL Server

Yes

  |--Constant Scan

SQL Server calls this a “constant scan”, i.e. a scan where nothing happens – just like DB2.

All databases can eliminate impossible predicates:

Database         Impossible predicates  Unneeded table access
DB2 LUW 10.5     Yep                    Yep
MySQL 8.0.2      Yep                    Yep
Oracle 12.2.0.1  Yep                    Yep
PostgreSQL 9.6   Yep                    Yep
SQL Server 2014  Yep                    Yep

3. JOIN Elimination

In the previous section, we’ve seen unneeded table access for single table queries. But what happens if one out of several table accesses is unneeded in a JOIN?

I’ve already blogged about JOIN elimination in a previous blog post. SQL engines can determine, based on the way a query is written, and based on the presence of PRIMARY KEYs and FOREIGN KEYs, whether any given JOIN is really required in a query, or whether it could be eliminated without affecting the semantics of the query.
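
As a reminder, the kind of meta data driving these decisions looks roughly like this in Sakila (a sketch, not the exact DDL – the constraint name is made up):

-- A NOT NULL FOREIGN KEY enables INNER JOIN elimination (first case below):
ALTER TABLE customer
  ADD CONSTRAINT fk_customer_address
  FOREIGN KEY (address_id) REFERENCES address (address_id);

-- customer.address_id is also declared NOT NULL in Sakila; the syntax
-- for adding that constraint varies by database.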

In all of the following four examples, the JOIN is unnecessary:

to-one INNER JOINs can be removed if there’s a NOT NULL FOREIGN KEY

Instead of this:

SELECT first_name, last_name
FROM customer c
JOIN address a ON c.address_id = a.address_id

The database can run this:

SELECT first_name, last_name
FROM customer c

to-one INNER JOINs can be replaced if there’s a nullable FOREIGN KEY

The above works if there’s also a NOT NULL constraint on the FOREIGN KEY. If there isn’t, e.g. as in this query:

SELECT title
FROM film f
JOIN language l ON f.original_language_id = l.language_id

The JOIN can still be eliminated, but there needs to be a replacement NOT NULL predicate, as such:

SELECT title
FROM film
WHERE original_language_id IS NOT NULL

to-one OUTER JOINs can be removed if there’s a UNIQUE KEY

Instead of this:

SELECT first_name, last_name
FROM customer c
LEFT JOIN address a ON c.address_id = a.address_id

The database can again run this:

SELECT first_name, last_name
FROM customer c

… even if there is no FOREIGN KEY on CUSTOMER.ADDRESS_ID.

to-many DISTINCT OUTER JOINs can be removed

Instead of this:

SELECT DISTINCT first_name, last_name
FROM actor a
LEFT JOIN film_actor fa ON a.actor_id = fa.actor_id

The database can run this:

SELECT DISTINCT first_name, last_name
FROM actor a

All of these examples are explained in detail in the previous article, so I’m not going to repeat them, but here’s again the summary of what each database can eliminate:

Database         INNER JOIN:  INNER JOIN    OUTER JOIN:  OUTER JOIN
                 to-one       nullable:     to-one       DISTINCT:
                              to-one                     to-many
DB2 LUW 10.5     Yep          Yep           Yep          Yep
MySQL 8.0.2      Nope         Nope          Nope         Nope
Oracle 12.2.0.1  Yep          Yep           Yep          Nope
PostgreSQL 9.6   Nope         Nope          Yep          Nope
SQL Server 2014  Yep          Nope          Yep          Yep

Unfortunately, not all databases can eliminate all joins. DB2 and SQL Server are the clear winners here!

4. Removing “Silly” Predicates

Equally silly are predicates that are (almost) always true. As you can imagine, if you search for

SELECT * FROM actor WHERE 1 = 1;

Then, databases will not actually evaluate the predicate but ignore it. This was a recent Stack Overflow question that I’ve answered, which actually gave me the idea to write this blog post.

I’ll leave it to you to check this, but what happens if the predicate is just slightly less silly, e.g.:

SELECT * FROM film WHERE release_year = release_year;

Do we actually have to compare the value with itself on each row? No, there’s no value where this can be FALSE, right? Right. But we still have to do a check. While the predicate can never be FALSE, it can totally be NULL, again because of three-valued logic. The RELEASE_YEAR column is a nullable column, and if RELEASE_YEAR IS NULL for any given row, then NULL = NULL yields NULL, and the row must be excluded.

So, the query is transformed into this:

SELECT * FROM film WHERE release_year IS NOT NULL;

Which databases do this?

DB2

Yes

Explain Plan                                     
-------------------------------------------------
ID | Operation    |                   Rows | Cost
 1 | RETURN       |                        |   49
 2 |  TBSCAN FILM | 1000 of 1000 (100.00%) |   49
                                                 
Predicate Information                            
 2 - SARG Q1.RELEASE_YEAR IS NOT NULL            

MySQL

Very regrettably, again, MySQL doesn’t display predicates in its execution plans, so it’s a bit hard to find out whether MySQL performs this particular optimisation. We could benchmark things and see if some really big string comparisons are executed or not. Or, we add an index:

CREATE INDEX i_release_year ON film (release_year);

And get the plans for these queries instead:

SELECT * FROM film WHERE release_year = release_year;
SELECT * FROM film WHERE release_year IS NOT NULL;

If the optimisation works, then both queries should produce exactly the same plan. But they don’t in this case:

ID  TABLE  POSSIBLE_KEYS   ROWS  FILTERED  EXTRA
------------------------------------------------------
1   film             1000  10.00           Using where

ID  TABLE  POSSIBLE_KEYS   ROWS  FILTERED  EXTRA
------------------------------------------------------
1   film   i_release_year  1000  100.00    Using where

As you can see, the two queries differ substantially in that the POSSIBLE_KEYS and FILTERED columns yield different values. I’m making an educated guess and saying that MySQL does not optimise this.

Oracle

Yes

----------------------------------------------------
| Id  | Operation         | Name | Starts | E-Rows |
----------------------------------------------------
|   0 | SELECT STATEMENT  |      |      1 |        |
|*  1 |  TABLE ACCESS FULL| FILM |      1 |   1000 |
----------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - filter("RELEASE_YEAR" IS NOT NULL)

PostgreSQL

Disappointingly, no!

QUERY PLAN                                                    
--------------------------------------------------------------
Seq Scan on film  (cost=0.00..67.50 rows=5 width=386)         
  Filter: ((release_year)::integer = (release_year)::integer) 

The plans and the costs are different. Specifically, observe the cardinality estimate, which is totally off, whereas this predicate:

SELECT * FROM film WHERE release_year IS NOT NULL;

… yields much better results:

QUERY PLAN                                               
---------------------------------------------------------
Seq Scan on film  (cost=0.00..65.00 rows=1000 width=386) 
  Filter: (release_year IS NOT NULL)                     

Bummer!

SQL Server

Surprisingly, SQL Server doesn’t seem to do this either:

  |--Table Scan(OBJECT:([film]), WHERE:([release_year]=[release_year]))

However, the cardinality estimate is correct when looking at the visual plan, and the costs are all correct as well. From what I’ve seen in the past on SQL Server, though, I’m going to say that in this case, the optimisation is not taking place, because SQL Server would display the actually executed predicate in the plan (look at the CHECK constraint examples below to see why).

What about “silly” predicates on NOT NULL columns?

The above transformation was only needed, because RELEASE_YEAR is a nullable column. What if we did the same silly query with e.g. FILM_ID?

SELECT * FROM film WHERE film_id = film_id

This is now the same as not putting a predicate at all. Or at least it should be. Is it, though?

DB2

Yes!

Explain Plan                                     
-------------------------------------------------
ID | Operation    |                   Rows | Cost
 1 | RETURN       |                        |   49
 2 |  TBSCAN FILM | 1000 of 1000 (100.00%) |   49

No predicate is applied at all, and we’re selecting all the films.

MySQL

Yes (educated guess, again)

ID  TABLE  POSSIBLE_KEYS   ROWS  FILTERED  EXTRA
------------------------------------------------------
1   film                   1000  100.00

Observe how the EXTRA column is now empty, as if we didn’t have any WHERE clause!

Oracle

Yes

----------------------------------------------------
| Id  | Operation         | Name | Starts | E-Rows |
----------------------------------------------------
|   0 | SELECT STATEMENT  |      |      1 |        |
|   1 |  TABLE ACCESS FULL| FILM |      1 |   1000 |
----------------------------------------------------

Again, no predicates are applied.

PostgreSQL

Gee, still no!

QUERY PLAN                                            
------------------------------------------------------
Seq Scan on film  (cost=0.00..67.50 rows=5 width=386) 
  Filter: (film_id = film_id)                         

The filter is applied and the cardinality estimate is still 5. Bummer!

SQL Server

Also, still no!

  |--Table Scan(OBJECT:([film]), WHERE:([film_id]=[film_id]))

Summary

This appears to be a simple optimisation, but it is not applied in all databases – surprisingly, not even in SQL Server!

Database         Silly but needed predicates  Silly unneeded predicates
                 (NULL semantics)             (no NULL semantics)
DB2 LUW 10.5     Yep                          Yep
MySQL 8.0.2      Nope                         Yep
Oracle 12.2.0.1  Yep                          Yep
PostgreSQL 9.6   Nope                         Nope
SQL Server 2014  Nope                         Nope

5. Projections in EXISTS Subqueries

Interestingly, I get asked about this one all the time in my SQL Masterclass, where I advocate that SELECT * is mostly bad.

The question then is, is it OK to use SELECT * in an EXISTS subquery? For instance, if we wanted to find actors who have played in films:

SELECT first_name, last_name
FROM actor a
WHERE EXISTS (
  SELECT * -- Is this OK?
  FROM film_actor fa
  WHERE a.actor_id = fa.actor_id
)

And the answer is: Yes it is OK. The asterisk has no impact on the query. How can we “prove” this? Consider the following query:

-- DB2
SELECT 1 / 0 FROM sysibm.dual

-- Oracle
SELECT 1 / 0 FROM dual

-- PostgreSQL, SQL Server
SELECT 1 / 0

-- MySQL
SELECT pow(-1, 0.5);

All databases report a division by zero error. Note that interestingly, in MySQL, dividing by zero yields NULL, not an error, so we’re doing something else that’s illegal.

Now, what happens if we do this, instead?

-- DB2
SELECT CASE WHEN EXISTS (
  SELECT 1 / 0 FROM sysibm.dual
) THEN 1 ELSE 0 END
FROM sysibm.dual

-- Oracle
SELECT CASE WHEN EXISTS (
  SELECT 1 / 0 FROM dual
) THEN 1 ELSE 0 END
FROM dual

-- PostgreSQL
SELECT EXISTS (SELECT 1 / 0)

-- SQL Server
SELECT CASE WHEN EXISTS (
  SELECT 1 / 0
) THEN 1 ELSE 0 END

-- MySQL
SELECT EXISTS (SELECT pow(-1, 0.5));

Now, none of the databases fail the query. All of them return TRUE or 1. This means that none of the databases actually evaluated the projection (i.e. the SELECT clause) of the EXISTS subquery.

SQL Server, for instance, shows the following plan:

  |--Constant Scan(VALUES:((CASE WHEN (1) THEN (1) ELSE (0) END)))

As you can see, the CASE expression was transformed to a constant, the subquery has been eliminated. Other databases still have the subquery in their plan and don’t mention anything about a projection, so let’s again look at the original query’s plan in Oracle:

SELECT first_name, last_name
FROM actor a
WHERE EXISTS (
  SELECT *
  FROM film_actor fa
  WHERE a.actor_id = fa.actor_id
)

The plan for the above is:

------------------------------------------------------------------
| Id  | Operation             | Name                    | E-Rows |
------------------------------------------------------------------
|   0 | SELECT STATEMENT      |                         |        |
|*  1 |  HASH JOIN SEMI       |                         |    200 |
|   2 |   TABLE ACCESS FULL   | ACTOR                   |    200 |
|   3 |   INDEX FAST FULL SCAN| IDX_FK_FILM_ACTOR_ACTOR |   5462 |
------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - access("A"."ACTOR_ID"="FA"."ACTOR_ID")
 
Column Projection Information (identified by operation id):
-----------------------------------------------------------
 
   1 - (#keys=1) LAST_NAME, FIRST_NAME
   2 - (rowset=256) A.ACTOR_ID, FIRST_NAME, LAST_NAME
   3 - FA.ACTOR_ID

Observe the projection information on the FILM_ACTOR access in Id=3. In fact, we’re not even accessing the FILM_ACTOR table, because we don’t have to. The EXISTS predicate can be executed using the foreign key index on the ACTOR_ID column only – that’s all we need for this query, despite us having written SELECT *.

Summary

Luckily, all databases can remove the projection in EXISTS subqueries:

Database         EXISTS projection
DB2 LUW 10.5     Yep
MySQL 8.0.2      Yep
Oracle 12.2.0.1  Yep
PostgreSQL 9.6   Yep
SQL Server 2014  Yep

6. Predicate Merging

This one is interesting and has bitten me in the past when I erroneously assumed that a given database could do it.

Consider the following query:

SELECT * 
FROM actor
WHERE actor_id IN (2, 3, 4)
AND actor_id IN (1, 2, 3);

Obviously, the two predicates overlap and can be merged. I would expect the database to transform the above into:

SELECT * 
FROM actor
WHERE actor_id IN (2, 3);

Looks obvious, right? It is a more sophisticated case of transitive closure. Another case would be merging two ranges. When running the following query:

SELECT * 
FROM film
WHERE film_id BETWEEN 1 AND 100
AND film_id BETWEEN 99 AND 200

We’d hope for the database to rewrite the query to this:

SELECT * 
FROM film
WHERE film_id BETWEEN 99 AND 100

The cardinality of the latter predicate is clearly 2 rows, but the database might not recognise this from the first query’s combined ranges, and might choose a full table scan when it should pick the index.

Which database can do these optimisations?

DB2

Merging IN predicates

Yes

Explain Plan                                      
--------------------------------------------------
ID | Operation         |               Rows | Cost
 1 | RETURN            |                    |   11
 2 |  FETCH ACTOR      |   2 of 2 (100.00%) |   11
 3 |   IXSCAN PK_ACTOR | 2 of 200 (  1.00%) |    0
                                                  
Predicate Information                             
 3 - SARG Q3.ACTOR_ID IN (2, 3)                   

Merging range predicates

Yes (but don’t be fooled by the plan!)

Explain Plan                                      
--------------------------------------------------
ID | Operation        |                Rows | Cost
 1 | RETURN           |                     |   13
 2 |  FETCH FILM      |    2 of 2 (100.00%) |   13
 3 |   IXSCAN PK_FILM | 2 of 1000 (   .20%) |    6
                                                  
Predicate Information                             
 3 - START (99 <= Q1.FILM_ID)                     
      STOP (Q1.FILM_ID <= 100)                    
      SARG (Q1.FILM_ID <= 200)                    
      SARG (1 <= Q1.FILM_ID)                      

As you can see, the predicate was not optimised away entirely. There’s still a filter (SARG) that checks for the overall upper and lower bounds of the combined range, but the important bits are the START and STOP operations, which indicate fast index access. Besides, the cardinality is also correct.

If you want to be sure, just run this impossible predicate here:

SELECT * 
FROM film
WHERE film_id BETWEEN 1 AND 2
AND film_id BETWEEN 199 AND 200;

To get the correct plan:

Explain Plan                       
-----------------------------------
ID | Operation      |   Rows | Cost
 1 | RETURN         |        |    0
 2 |  TBSCAN GENROW | 0 of 0 |    0
                                   
Predicate Information              
 2 - RESID (1 = 0)                 

MySQL

Merging IN predicates

Again, unfortunately, MySQL doesn’t display the predicate information very nicely. We get the same plan for both queries:

ID  TABLE  TYPE   KEY      ROWS  FILTERED  EXTRA
------------------------------------------------------
1   actor  range  PRIMARY  2     100.00    Using where

2x the same cardinalities, 2x “Using where” with no indication of what exactly is being done inside of “where”, but given the cardinality, we can assume that the transformation happened correctly. We can look at it differently; let’s try this query:

SELECT * FROM actor
WHERE actor_id IN (3, 4, 5)
AND actor_id IN (1, 2, 3);

Which should be transformed into this one:

SELECT * FROM actor
WHERE actor_id = 3;

And indeed, it happens:

ID  TABLE  TYPE   KEY      ROWS  FILTERED  EXTRA
------------------------------------------------------
1   actor  const  PRIMARY  1     100.00

Observe how TYPE=range changed to TYPE=const.

So, we can conclude that yes, MySQL implements this optimisation.

Merging range predicates

Again, the plan is not helpful at all:

ID  TABLE  TYPE   KEY      ROWS  FILTERED  EXTRA
------------------------------------------------------
1   film   range  PRIMARY  2     100.00    Using where

But we can again prove that the optimisation is being done by creating an “impossible” predicate like this:

SELECT * 
FROM film
WHERE film_id BETWEEN 1 AND 2
AND film_id BETWEEN 199 AND 200

In which case, the plan switches to:

ID  TABLE  EXTRA
-----------------------------------------
1          no matching row in const table

So, again, good news for MySQL.

Oracle

Merging IN predicates

Yes

----------------------------------------------------------
| Id  | Operation                    | Name     | E-Rows |
----------------------------------------------------------
|   0 | SELECT STATEMENT             |          |        |
|   1 |  INLIST ITERATOR             |          |        |
|   2 |   TABLE ACCESS BY INDEX ROWID| ACTOR    |      2 |
|*  3 |    INDEX UNIQUE SCAN         | PK_ACTOR |      2 |
----------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   3 - access(("ACTOR_ID"=2 OR "ACTOR_ID"=3))

The predicate being applied only includes the values 2 and 3, so the transformation has worked out correctly.

Merging range predicates

Again, yes:

----------------------------------------------------------------
| Id  | Operation                           | Name    | E-Rows |
----------------------------------------------------------------
|   0 | SELECT STATEMENT                    |         |        |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| FILM    |      2 |
|*  2 |   INDEX RANGE SCAN                  | PK_FILM |      2 |
----------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - access("FILM_ID">=99 AND "FILM_ID"<=100)

PostgreSQL

Merging IN predicates

Regrettably, no, this is not optimised!

QUERY PLAN                                                                                     
-----------------------------------------------------------------------------------------------
Seq Scan on actor  (cost=0.00..5.50 rows=1 width=25)                                           
  Filter: ((actor_id = ANY ('{2,3,4}'::integer[])) AND (actor_id = ANY ('{1,2,3}'::integer[])))

Both predicates are still present in the execution plan, and the cardinality estimate is wrong; it should be 2, not 1. If I manually transform the query, I’m getting this plan instead:

QUERY PLAN                                           
-----------------------------------------------------
Seq Scan on actor  (cost=0.00..4.50 rows=2 width=25) 
  Filter: (actor_id = ANY ('{2,3}'::integer[]))      

In particular, we can see the wrong plan if the two predicates do not overlap, in which case an impossible predicate is formed:

SELECT * 
FROM actor
WHERE actor_id IN (2, 3, 4)
AND actor_id IN (7, 8, 9)

Still, this yields a “wrong” plan:

QUERY PLAN                                                                                     
-----------------------------------------------------------------------------------------------
Seq Scan on actor  (cost=0.00..5.50 rows=1 width=25)                                           
  Filter: ((actor_id = ANY ('{2,3,4}'::integer[])) AND (actor_id = ANY ('{7,8,9}'::integer[])))

Bummer!

Merging range predicates

This doesn’t look any better:

QUERY PLAN                                                                                  
--------------------------------------------------------------------------------------------
Index Scan using film_pkey on film  (cost=0.28..8.30 rows=1 width=386)                      
  Index Cond: ((film_id >= 1) AND (film_id <= 100) AND (film_id >= 99) AND (film_id <= 200))

Now, it’s hard to say whether this worked or not. Ultimately, we have gotten the correct plan with a reasonable cardinality as before, and it might just work out as on DB2. But what happens if we again create an impossible predicate?

SELECT * 
FROM film
WHERE film_id BETWEEN 1 AND 2
AND film_id BETWEEN 199 AND 200;

The plan got worse:

QUERY PLAN                                                                                 
-------------------------------------------------------------------------------------------
Index Scan using film_pkey on film  (cost=0.28..8.42 rows=5 width=386)                     
  Index Cond: ((film_id >= 1) AND (film_id <= 2) AND (film_id >= 199) AND (film_id <= 200))

The cardinality estimate increased instead of decreasing! And after all, we shouldn’t be running this query anyway. No points for PostgreSQL here.

SQL Server

Merging IN predicates

Yes, this works:

  |--Nested Loops(Inner Join)
       |--Index Seek(SEEK:([actor_id]=(2) OR [actor_id]=(3)))
       |--RID Lookup(OBJECT:([actor]))

Merging range predicates

This again looks like the DB2 case:

  |--Nested Loops(Inner Join)
       |--Index Seek(SEEK:([film_id] >= (1) AND [film_id] <= (100)), WHERE:([film_id]>=(99) AND [film_id]<=(200)))
       |--RID Lookup(OBJECT:([film]))

Unfortunately, observe the distinction between SEEK and WHERE. We want the range [99, 100] in SEEK (as DB2 did), because SEEK is the fast O(log N) index access, whereas the WHERE predicate is applied to all seeked rows in linear O(N) time. Bummer!

This looks like a bug to me, because the impossible predicate yields a more reasonable plan:

  |--Constant Scan

Summary

Note that there are many different kinds of predicates that might be merged in one database but not in the other. If in doubt, do check your execution plans!

Database         Merging IN  Merging ranges
--------------------------------------------
DB2 LUW 10.5     Yep         Yep
MySQL 8.0.2      Yep         Yep
Oracle 12.2.0.1  Yep         Yep
PostgreSQL 9.6   Nope        Nope
SQL Server 2014  Yep         Nope

7. Provably Empty Sets

This one is really cool. We’ve seen Impossible predicates and unneeded table accesses before. What if we do this again, but this time with a JOIN? Can JOIN elimination kick in, too?

We’re trying these queries:

IS NULL on NOT NULL column

The predicate in the WHERE clause cannot be TRUE, because we have a NOT NULL constraint on the FILM_ID column.

SELECT first_name, last_name
FROM actor a
JOIN (
  SELECT *
  FROM film_actor
  WHERE film_id IS NULL
) fa ON a.actor_id = fa.actor_id;

The derived table FA cannot return any rows, because of that NOT NULL constraint on the FA.FILM_ID column, so it is provably empty. Because an INNER JOIN with an empty table cannot produce any rows either, this should save us from accessing the ACTOR table, so the above query should be rewritten to something like this:

SELECT NULL AS first_name, NULL AS last_name
WHERE 1 = 0;

I.e. the predicate is never evaluated and the JOIN is eliminated.

INTERSECT NULL and NOT NULL columns

In principle, this is the same as the previous example, but using a bit more sophisticated syntax:

SELECT *
FROM actor a
JOIN (
  SELECT actor_id, film_id
  FROM film_actor
  INTERSECT
  SELECT NULL, NULL
  FROM dual
) fa ON a.actor_id = fa.actor_id;

Because of the NOT NULL constraints on both FA.ACTOR_ID and FA.FILM_ID, an INTERSECT operation with a (NULL, NULL) tuple should not yield any results, and thus the derived table is provably empty, and thus the INNER JOIN can be eliminated.

Funky, but why not?

Let’s repeat, with EXISTS

Finally, let’s repeat the above type of query, but this time with a SEMI JOIN instead of an INNER JOIN. First with an impossible predicate:

SELECT *
FROM actor a
WHERE a.actor_id IN (
  SELECT actor_id
  FROM film_actor
  WHERE actor_id IS NULL
);

… then again with an intersection.

SELECT *
FROM actor a
WHERE a.actor_id IN (
  SELECT actor_id
  FROM film_actor
  INTERSECT
  SELECT NULL
  FROM sysibm.dual
)

Let’s go. Which database can do which optimisation?

DB2

Joining a provably empty set (IS NULL predicate):

Explain Plan                       
-----------------------------------
ID | Operation      |   Rows | Cost
 1 | RETURN         |        |    0
 2 |  TBSCAN GENROW | 0 of 0 |    0
                                   
Predicate Information              
 2 - RESID (1 = 0)                 

Joining a provably empty set (INTERSECT):

Explain Plan                       
-----------------------------------
ID | Operation      |   Rows | Cost
 1 | RETURN         |        |    0
 2 |  TBSCAN GENROW | 0 of 0 |    0
                                   
Predicate Information              
 2 - RESID (1 = 0)                 

Semi joining a provably empty set (IS NULL predicate):

Explain Plan                       
-----------------------------------
ID | Operation      |   Rows | Cost
 1 | RETURN         |        |    0
 2 |  TBSCAN GENROW | 0 of 0 |    0
                                   
Predicate Information              
 2 - RESID (1 = 0)                 

Semi joining a provably empty set (INTERSECT):

Explain Plan                       
-----------------------------------
ID | Operation      |   Rows | Cost
 1 | RETURN         |        |    0
 2 |  TBSCAN GENROW | 0 of 0 |    0
                                   
Predicate Information              
 2 - RESID (1 = 0)                 

Wow, cool! Looks like a winner!

MySQL

Joining a provably empty set (IS NULL predicate):

ID  TABLE   EXTRA
----------------------------
1           Impossible WHERE

Cool! I didn’t expect this!

Joining a provably empty set (INTERSECT):

MySQL doesn’t support INTERSECT, regrettably.

Semi joining a provably empty set (IS NULL predicate):

ID  TABLE   EXTRA
----------------------------
1           Impossible WHERE

Semi joining a provably empty set (INTERSECT):

MySQL doesn’t support INTERSECT, regrettably.

But still, that’s a great result for MySQL!

Oracle

Joining a provably empty set (IS NULL predicate):

---------------------------------------------------------------------------
| Id  | Operation              | Name          | Starts | E-Rows | A-Rows |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |               |      1 |        |      0 |
|*  1 |  FILTER                |               |      1 |        |      0 |
|*  2 |   HASH JOIN            |               |      0 |   5462 |      0 |
|   3 |    TABLE ACCESS FULL   | ACTOR         |      0 |    200 |      0 |
|   4 |    INDEX FAST FULL SCAN| PK_FILM_ACTOR |      0 |   5462 |      0 |
---------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - filter(NULL IS NOT NULL)
   2 - access("A"."ACTOR_ID"="FILM_ACTOR"."ACTOR_ID")

Again, a very confusing execution plan in Oracle, but the NULL IS NOT NULL filter is there, and it happens before all the other operations, which are not executed.

Joining a provably empty set (INTERSECT):

---------------------------------------------------------------------------------
| Id  | Operation                    | Name          | Starts | E-Rows | A-Rows |
---------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |               |      1 |        |      0 |
|   1 |  NESTED LOOPS                |               |      1 |      1 |      0 |
|   2 |   NESTED LOOPS               |               |      1 |      1 |      0 |
|   3 |    VIEW                      |               |      1 |      1 |      0 |
|   4 |     INTERSECTION             |               |      1 |        |      0 |
|   5 |      SORT UNIQUE             |               |      1 |   5462 |   5463 |
|   6 |       INDEX FAST FULL SCAN   | PK_FILM_ACTOR |      1 |   5462 |   5463 |
|   7 |      FAST DUAL               |               |      1 |      1 |      1 |
|*  8 |    INDEX UNIQUE SCAN         | PK_ACTOR      |      0 |      1 |      0 |
|   9 |   TABLE ACCESS BY INDEX ROWID| ACTOR         |      0 |      1 |      0 |
---------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   8 - access("A"."ACTOR_ID"="FA"."ACTOR_ID")

Interesting. This plan will indeed access the entire FILM_ACTOR primary key. It can save accesses to the ACTOR table and primary key index, because it evaluates the derived table first (which yields no rows), but still, the operations at Id=5 and 6 should not be there. Bummer!

Semi joining a provably empty set (IS NULL predicate):

This works again:

-------------------------------------------------------------------------------------
| Id  | Operation              | Name                    | Starts | E-Rows | A-Rows |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |                         |      1 |        |      0 |
|*  1 |  FILTER                |                         |      1 |        |      0 |
|*  2 |   HASH JOIN SEMI       |                         |      0 |    200 |      0 |
|   3 |    TABLE ACCESS FULL   | ACTOR                   |      0 |    200 |      0 |
|   4 |    INDEX FAST FULL SCAN| IDX_FK_FILM_ACTOR_ACTOR |      0 |   5462 |      0 |
-------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - filter(NULL IS NOT NULL)
   2 - access("A"."ACTOR_ID"="ACTOR_ID")

… with the same confusing plan that keeps around the unexecuted subtree.

Semi joining a provably empty set (INTERSECT):

Again, no optimisation here:

-------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                    | Starts | E-Rows | A-Rows |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                         |      1 |        |      0 |
|   1 |  NESTED LOOPS                |                         |      1 |      1 |      0 |
|   2 |   NESTED LOOPS               |                         |      1 |      1 |      0 |
|   3 |    VIEW                      | VW_NSO_1                |      1 |      1 |      0 |
|   4 |     INTERSECTION             |                         |      1 |        |      0 |
|   5 |      SORT UNIQUE             |                         |      1 |   5462 |    200 |
|   6 |       INDEX FAST FULL SCAN   | IDX_FK_FILM_ACTOR_ACTOR |      1 |   5462 |   5463 |
|   7 |      FAST DUAL               |                         |      1 |      1 |      1 |
|*  8 |    INDEX UNIQUE SCAN         | PK_ACTOR                |      0 |      1 |      0 |
|   9 |   TABLE ACCESS BY INDEX ROWID| ACTOR                   |      0 |      1 |      0 |
-------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   8 - access("A"."ACTOR_ID"="ACTOR_ID")

Not so good!

PostgreSQL

Disappointingly, PostgreSQL doesn’t fare well in this experiment!

Joining a provably empty set (IS NULL predicate):

Nope:

QUERY PLAN                                                                                  
--------------------------------------------------------------------------------------------
Hash Join  (cost=8.31..13.07 rows=1 width=13)                                               
  Hash Cond: (a.actor_id = film_actor.actor_id)                                             
  ->  Seq Scan on actor a  (cost=0.00..4.00 rows=200 width=17)                              
  ->  Hash  (cost=8.30..8.30 rows=1 width=2)                                                
        ->  Index Scan using idx_fk_film_id on film_actor  (cost=0.28..8.30 rows=1 width=2) 
              Index Cond: (film_id IS NULL)                                                 

Joining a provably empty set (INTERSECT):

Even worse:

QUERY PLAN                                                                                         
---------------------------------------------------------------------------------------------------
Hash Join  (cost=166.60..171.36 rows=1 width=29)                                                   
  Hash Cond: (a.actor_id = fa.actor_id)                                                            
  ->  Seq Scan on actor a  (cost=0.00..4.00 rows=200 width=25)                                     
  ->  Hash  (cost=166.59..166.59 rows=1 width=4)                                                   
        ->  Subquery Scan on fa  (cost=0.00..166.59 rows=1 width=4)                                
              ->  HashSetOp Intersect  (cost=0.00..166.58 rows=1 width=8)                          
                    ->  Append  (cost=0.00..139.26 rows=5463 width=8)                              
                          ->  Subquery Scan on "*SELECT* 2"  (cost=0.00..0.02 rows=1 width=8)      
                                ->  Result  (cost=0.00..0.01 rows=1 width=4)                       
                          ->  Subquery Scan on "*SELECT* 1"  (cost=0.00..139.24 rows=5462 width=8) 
                                ->  Seq Scan on film_actor  (cost=0.00..84.62 rows=5462 width=4)   

Semi joining a provably empty set (IS NULL predicate):

Same as inner join:

QUERY PLAN                                                                                       
-------------------------------------------------------------------------------------------------
Hash Semi Join  (cost=6.06..10.60 rows=1 width=25)                                               
  Hash Cond: (a.actor_id = film_actor.actor_id)                                                  
  ->  Seq Scan on actor a  (cost=0.00..4.00 rows=200 width=25)                                   
  ->  Hash  (cost=6.05..6.05 rows=1 width=2)                                                     
        ->  Index Only Scan using film_actor_pkey on film_actor  (cost=0.28..6.05 rows=1 width=2)
              Index Cond: (actor_id IS NULL)                                                     

Semi joining a provably empty set (INTERSECT):

Unsurprisingly:

QUERY PLAN                                                                                        
--------------------------------------------------------------------------------------------------
Hash Semi Join  (cost=152.94..157.48 rows=1 width=25)                                             
  Hash Cond: (a.actor_id = "ANY_subquery".actor_id)                                               
  ->  Seq Scan on actor a  (cost=0.00..4.00 rows=200 width=25)                                    
  ->  Hash  (cost=152.93..152.93 rows=1 width=2)                                                  
        ->  Subquery Scan on "ANY_subquery"  (cost=0.00..152.93 rows=1 width=2)                   
              ->  HashSetOp Intersect  (cost=0.00..152.92 rows=1 width=6)                         
                    ->  Append  (cost=0.00..139.26 rows=5463 width=6)                             
                          ->  Subquery Scan on "*SELECT* 2"  (cost=0.00..0.02 rows=1 width=6)     
                                ->  Result  (cost=0.00..0.01 rows=1 width=2)                      
                          ->  Subquery Scan on "*SELECT* 1"  (cost=0.00..139.24 rows=5462 width=6)
                                ->  Seq Scan on film_actor  (cost=0.00..84.62 rows=5462 width=2)  

SQL Server

SQL Server shines, like DB2:

Joining a provably empty set (IS NULL predicate):

  |--Constant Scan

Joining a provably empty set (INTERSECT):

  |--Constant Scan

Semi joining a provably empty set (IS NULL predicate):

  |--Constant Scan

Semi joining a provably empty set (INTERSECT):

  |--Constant Scan

Summary

Database         JOIN / NULL  JOIN / INTERSECT  SEMI JOIN / NULL  SEMI JOIN / INTERSECT
----------------------------------------------------------------------------------------
DB2 LUW 10.5     Yep          Yep               Yep               Yep
MySQL 8.0.2      Yep          Not supported     Yep               Not supported
Oracle 12.2.0.1  Yep          Nope              Yep               Nope
PostgreSQL 9.6   Nope         Nope              Nope              Nope
SQL Server 2014  Yep          Yep               Yep               Yep

On a side note, this could be done in thousands of other ways. Feel free to comment with your own ideas on how to create “provably empty sets” to see if this is optimised by any of the databases.

8. CHECK Constraints

Oh, this is cool! Our Sakila database has a CHECK constraint on the FILM.RATING column:

CREATE TABLE film (
  ..
  RATING varchar(10) DEFAULT 'G',
  ..
  CONSTRAINT check_special_rating 
    CHECK (rating IN ('G','PG','PG-13','R','NC-17')),
  ..
);

Seriously, use CHECK constraints for data integrity. The cost of adding them is super low – much lower than that of other constraints like PRIMARY KEY, UNIQUE, and FOREIGN KEY constraints, as they do not need an index to be enforced, so you get them almost for “free”.
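
For example, adding one is a single statement (a hypothetical constraint for illustration, not part of the Sakila schema):

-- Hypothetical example on the existing FILM.LENGTH column
ALTER TABLE film
  ADD CONSTRAINT chk_film_length CHECK (length > 0);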

But there’s also an interesting optimisation aspect here! Check out these queries:

Impossible predicate

We’ve seen impossible predicates before, even with NOT NULL constraints (which are special types of CHECK constraints, in fact), but this is even more powerful:

SELECT *
FROM film
WHERE rating = 'N/A';

There can be no such film, because the CHECK constraint prevents its insertion (or update). This should again be transformed into a NOOP. Now, what about this?

CREATE INDEX idx_film_rating ON film (rating);

SELECT count(*)
FROM film
WHERE rating NOT IN ('G','PG','PG-13','R');

With the above index, we should probably simply run a quick index scan to count all the films of rating = ‘NC-17’, because that’s the only remaining rating. So the query should be rewritten to this:

SELECT count(*)
FROM film
WHERE rating = 'NC-17';

It should be rewritten this way regardless of the index, because comparing the column with a single value is faster than comparing it with 4 values.

So, which database can do these things?

DB2

Impossible predicate (rating = ‘N/A’)

Cool!

Explain Plan                       
-----------------------------------
ID | Operation      |   Rows | Cost
 1 | RETURN         |        |    0
 2 |  TBSCAN GENROW | 0 of 0 |    0
                                   
Predicate Information              
 2 - RESID (1 = 0)                 

Inverse predicate (rating = ‘NC-17’)

Nope…

Explain Plan                                                
------------------------------------------------------------
ID | Operation                |                  Rows | Cost
 1 | RETURN                   |                       |   34
 2 |  GRPBY (COMPLETE)        |    1 of 210 (   .48%) |   34
 3 |   IXSCAN IDX_FILM_RATING | 210 of 1000 ( 21.00%) |   34
                                                            
Predicate Information                                       
 3 - SARG  NOT(Q1.RATING IN ('G', 'PG', 'PG-13', 'R'))      

While the index is used on ID=3 and while the cardinalities are correct, it is scanned entirely, as we do not have a range predicate but a “SARG” predicate. For more details, see Markus Winand’s overview here.

We can also show this by manually inverting the predicate to get:

Explain Plan                                                
------------------------------------------------------------
ID | Operation                |                  Rows | Cost
 1 | RETURN                   |                       |    7
 2 |  GRPBY (COMPLETE)        |    1 of 210 (   .48%) |    7
 3 |   IXSCAN IDX_FILM_RATING | 210 of 1000 ( 21.00%) |    7
                                                            
Predicate Information                                       
 3 - START (Q1.RATING = 'NC-17')                            
      STOP (Q1.RATING = 'NC-17')                            

Now, we’re getting the desired range predicate.

MySQL

MySQL supports the CHECK constraint syntax but doesn’t enforce it for whatever reason. Try this:

CREATE TABLE x (a INT CHECK (a != 0));
INSERT INTO x VALUES (0);
SELECT * FROM x;

You’ll get:

A
-
0

Zero points for MySQL (really, why not just support CHECK constraints?)

Oracle

Impossible predicate (rating = ‘N/A’)

--------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |
--------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |      1 |        |      0 |
|*  1 |  FILTER            |      |      1 |        |      0 |
|*  2 |   TABLE ACCESS FULL| FILM |      0 |     89 |      0 |
--------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - filter(NULL IS NOT NULL)
   2 - filter("RATING"='N/A')

Again, the super confusing NULL IS NOT NULL filter that cuts off the FULL TABLE SCAN, which might as well be removed entirely from the plan. But at least it works!

Inverse predicate (rating = ‘NC-17’)

Ooops:

----------------------------------------------------------------------------
| Id  | Operation             | Name            | Starts | E-Rows | A-Rows |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |                 |      1 |        |      1 |
|   1 |  SORT AGGREGATE       |                 |      1 |      1 |      1 |
|*  2 |   INDEX FAST FULL SCAN| IDX_FILM_RATING |      1 |    415 |    210 |
----------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - filter((RATING<>'PG-13' AND RATING<>'R' AND RATING<>'PG' AND RATING<>'G'))

The predicate could not be inverted, the cardinality estimate is way off, we get an INDEX FAST FULL SCAN instead of an INDEX RANGE SCAN, and a filter predicate rather than an access predicate. Here’s what we should have gotten, e.g. when manually inverting the predicate:

------------------------------------------------------------------------
| Id  | Operation         | Name            | Starts | E-Rows | A-Rows |
------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                 |      1 |        |      1 |
|   1 |  SORT AGGREGATE   |                 |      1 |      1 |      1 |
|*  2 |   INDEX RANGE SCAN| IDX_FILM_RATING |      1 |    210 |    210 |
------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - access("RATING"='NC-17')

Bummer!

PostgreSQL

Note that the Sakila database in its PostgreSQL version uses an ENUM type instead of a CHECK constraint on the RATING column. I’ve duplicated the table to use a CHECK constraint instead.

Impossible predicate (rating = ‘N/A’)

Doesn’t work:

QUERY PLAN                                            
------------------------------------------------------
Seq Scan on film2  (cost=0.00..67.50 rows=1 width=385)
  Filter: ((rating)::text = 'N/A'::text)              

Inverse predicate (rating = ‘NC-17’)

Also nope:

QUERY PLAN                                                        
------------------------------------------------------------------
Aggregate  (cost=70.53..70.54 rows=1 width=8)                     
  ->  Seq Scan on film2  (cost=0.00..70.00 rows=210 width=0)      
        Filter: ((rating)::text <> ALL ('{G,PG,PG-13,R}'::text[]))

Too bad!

NOTE: As was kindly pointed out by David Rowley in the comments, this feature can be opted into by specifying:

SET constraint_exclusion TO on;

SQL Server

Impossible predicate (rating = ‘N/A’)

Yes!

  |--Constant Scan

Inverse predicate (rating = ‘NC-17’)

Also yes!

  |--Compute Scalar
       |--Stream Aggregate
            |--Index Seek(OBJECT:([idx_film_rating]), SEEK:([rating]='NC-17'))

Summary

Database         Impossible predicate  Inverse predicate
---------------------------------------------------------
DB2 LUW 10.5     Yep                   Nope
MySQL 8.0.2      Not supported         Not supported
Oracle 12.2.0.1  Yep                   Nope
PostgreSQL 9.6   Nope                  Nope
SQL Server 2014  Yep                   Yep

9. Unneeded Self JOIN

When your queries get more complex, it might well happen that you’re going to self JOIN a table based on its primary key. Trust me, this is common practice when you build complex views and JOIN them to each other, so a database noticing this is a crucial step in optimising complex SQL. I won’t show a complex example, but a simple one, e.g.

SELECT a1.first_name, a1.last_name
FROM actor a1
JOIN actor a2 ON a1.actor_id = a2.actor_id;

This could be considered a special case of JOIN elimination as we don’t really need the JOIN of A2, we can do everything with A1 only. Now, INNER JOIN elimination normally works in the presence of a FOREIGN KEY only, which we don’t have here. But because of the PRIMARY KEY on ACTOR_ID, we can prove that in fact A1 = A2. In a way, this is transitive closure all over again.
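
The expected rewrite, then, is simply:

SELECT a1.first_name, a1.last_name
FROM actor a1;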

We can take this one step further and use columns from both A1 and A2:

SELECT a1.first_name, a2.last_name
FROM actor a1
JOIN actor a2 ON a1.actor_id = a2.actor_id;

In the classic JOIN elimination case, we could no longer eliminate the JOIN because we’re projecting from both tables. But since we’ve already proven that A1 = A2, we can use them interchangeably, so the expectation is for this query to be transformed into:

SELECT first_name, last_name
FROM actor;

Who can do this?

DB2

Projecting from A1 only

Yes:

Explain Plan                                    
------------------------------------------------
ID | Operation     |                 Rows | Cost
 1 | RETURN        |                      |   20
 2 |  TBSCAN ACTOR | 200 of 200 (100.00%) |   20

Projecting from A1 and A2

… and yes:

Explain Plan                                    
------------------------------------------------
ID | Operation     |                 Rows | Cost
 1 | RETURN        |                      |   20
 2 |  TBSCAN ACTOR | 200 of 200 (100.00%) |   20

MySQL

Projecting from A1 only

Nope

ID  TABLE  REF          EXTRA
-----------------------------------
1   a1
1   a2     a1.actor_id  Using index

Projecting from A1 and A2

… and nope

ID  TABLE  REF          EXTRA
-----------------------------------
1   a1
1   a2     a1.actor_id  

That’s disappointing…

Oracle

Projecting from A1 only

Yes

--------------------------------------------
| Id  | Operation         | Name  | E-Rows |
--------------------------------------------
|   0 | SELECT STATEMENT  |       |        |
|   1 |  TABLE ACCESS FULL| ACTOR |    200 |
--------------------------------------------

Projecting from A1 and A2

And yes

--------------------------------------------
| Id  | Operation         | Name  | E-Rows |
--------------------------------------------
|   0 | SELECT STATEMENT  |       |        |
|   1 |  TABLE ACCESS FULL| ACTOR |    200 |
--------------------------------------------

PostgreSQL

Projecting from A1 only

Nope:

QUERY PLAN                                                          
--------------------------------------------------------------------
Hash Join  (cost=6.50..13.25 rows=200 width=13)                     
  Hash Cond: (a1.actor_id = a2.actor_id)                            
  ->  Seq Scan on actor a1  (cost=0.00..4.00 rows=200 width=17)     
  ->  Hash  (cost=4.00..4.00 rows=200 width=4)                      
        ->  Seq Scan on actor a2  (cost=0.00..4.00 rows=200 width=4)

Projecting from A1 and A2

And nope:

QUERY PLAN                                                           
---------------------------------------------------------------------
Hash Join  (cost=6.50..13.25 rows=200 width=13)                      
  Hash Cond: (a1.actor_id = a2.actor_id)                             
  ->  Seq Scan on actor a1  (cost=0.00..4.00 rows=200 width=10)      
  ->  Hash  (cost=4.00..4.00 rows=200 width=11)                      
        ->  Seq Scan on actor a2  (cost=0.00..4.00 rows=200 width=11)

SQL Server

Projecting from A1 only

Surprisingly, no! (But remember, this is SQL Server 2014, maybe this got fixed in a more recent version. I should definitely upgrade!)

  |--Merge Join(Inner Join, MERGE:([a2].[actor_id])=([a1].[actor_id]))
       |--Index Scan(OBJECT:([a2]))
       |--Sort(ORDER BY:([a1].[actor_id] ASC))
            |--Table Scan(OBJECT:([a1]))

Projecting from A1 and A2

Also no, and even with a different, worse plan:

  |--Hash Match(Inner Join, HASH:([a1].[actor_id])=([a2].[actor_id]))
       |--Table Scan(OBJECT:([sakila].[dbo].[actor] AS [a1]))
       |--Table Scan(OBJECT:([sakila].[dbo].[actor] AS [a2]))

Summary

I would have frankly expected this to work on all databases, but I was proven very wrong, which is a shame. Along with JOIN elimination, this is one of the most crucial optimisations for enabling the building of huge SQL queries from reusable parts, such as views and table valued functions. Unfortunately, it is not supported by 3 out of 5 of the most popular databases.

Database         Self-join elimination,   Self-join elimination,
                 single table projection  complete projection
-----------------------------------------------------------------
DB2 LUW 10.5     Yep                      Yep
MySQL 8.0.2      Nope                     Nope
Oracle 12.2.0.1  Yep                      Yep
PostgreSQL 9.6   Nope                     Nope
SQL Server 2014  Nope                     Nope

10. Predicate Pushdown

This optimisation doesn’t belong here 100%, because it is not entirely true to assume this transformation is not cost based. But since I cannot think of a single obvious reason why an optimiser should not push down predicates into derived tables, I’m listing this here along with the other, non-cost-based optimisations.

Consider this query:

SELECT *
FROM (
  SELECT *
  FROM actor
) a
WHERE a.actor_id = 1;

The derived table has absolutely no value in this query and it should be eliminated as well, by unnesting it. But let’s ignore that for a moment.

We’d expect the database to perform this query instead:

SELECT *
FROM (
  SELECT *
  FROM actor
  WHERE actor_id = 1
) a;

And then again, possibly, eliminate the outer query.
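
I.e. after both steps, the whole thing might as well run as:

SELECT *
FROM actor
WHERE actor_id = 1;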

A more sophisticated example would be when using UNION:

SELECT *
FROM (
  SELECT first_name, last_name, 'actor' type
  FROM actor
  UNION ALL
  SELECT first_name, last_name, 'customer' type
  FROM customer
) people
WHERE people.last_name = 'DAVIS';

The result of this query is:

FIRST_NAME  LAST_NAME  TYPE
----------------------------
JENNIFER    DAVIS      actor
SUSAN       DAVIS      actor
SUSAN       DAVIS      actor
JENNIFER    DAVIS      customer

Now, we’d love the database optimiser to run this statement instead:

SELECT *
FROM (
  SELECT first_name, last_name, 'actor' type
  FROM actor
  WHERE last_name = 'DAVIS'
  UNION ALL
  SELECT first_name, last_name, 'customer' type
  FROM customer
  WHERE last_name = 'DAVIS'
) people;

I.e. pushing down the predicate into the derived table, and from there on into the two UNION ALL subqueries, because after all, we have indexes on both ACTOR.LAST_NAME and CUSTOMER.LAST_NAME columns.

Again, this transformation might be motivated based on costs in most databases, but I still think it’s a no-brainer to do anyway, because it’s almost always better to reduce the number of processed tuples as early as possible in any algorithm. If you know a case where this transformation is a bad idea, please comment! I’d be very curious.

So, which databases can do this? (And please, this is so basic, yet important, let the answer be: all)

DB2

Simple derived table

Yes

Explain Plan                                      
--------------------------------------------------
ID | Operation         |               Rows | Cost
 1 | RETURN            |                    |    6
 2 |  FETCH ACTOR      |   1 of 1 (100.00%) |    6
 3 |   IXSCAN PK_ACTOR | 1 of 200 (   .50%) |    0
                                                  
Predicate Information                             
 3 - START (Q1.ACTOR_ID = 1)                      
      STOP (Q1.ACTOR_ID = 1)                      

UNION derived table

Yes, again:

Explain Plan                                                     
-----------------------------------------------------------------
ID | Operation                        |               Rows | Cost
 1 | RETURN                           |                    |   20
 2 |  UNION                           |             2 of 1 |   20
 3 |   FETCH CUSTOMER                 |   1 of 1 (100.00%) |   13
 4 |    IXSCAN IDX_CUSTOMER_LAST_NAME | 1 of 599 (   .17%) |    6
 5 |   FETCH ACTOR                    |   1 of 1 (100.00%) |    6
 6 |    IXSCAN IDX_ACTOR_LAST_NAME    | 1 of 200 (   .50%) |    0
                                                                 
Predicate Information                                            
 4 - START (Q1.LAST_NAME = 'DAVIS')                              
      STOP (Q1.LAST_NAME = 'DAVIS')                              
 6 - START (Q3.LAST_NAME = 'DAVIS')                              
      STOP (Q3.LAST_NAME = 'DAVIS')                              

Also, in both cases, the derived table (view) was removed from the plan as it is not really necessary.

MySQL

Simple derived table

Yes

ID  TABLE  TYPE   KEY      REF    EXTRA
---------------------------------------
1   actor  const  PRIMARY  const

The usual PRIMARY KEY access by a constant value is applied.

UNION derived table

Oops, nope

ID  SELECT_TYPE  TABLE       TYPE  KEY          REF    ROWS  EXTRA
------------------------------------------------------------------
1   PRIMARY      <derived2>  ref   <auto_key0>  const  10
2   DERIVED      actor       ALL                       200
3   UNION        customer    ALL                       599

The manual transformation would yield:

ID  SELECT_TYPE  TABLE       TYPE  KEY                  REF    ROWS  EXTRA
--------------------------------------------------------------------------
1   PRIMARY      <derived2>  ALL                               5
2   DERIVED      actor       ref   idx_actor_last_name  const  3
3   UNION        customer    ref   idx_last_name        const  1

That’s really a problem if you want to nest complex queries in MySQL!

Oracle

Simple derived table

Yes, works

---------------------------------------------------------------------------
| Id  | Operation                   | Name     | Starts | E-Rows | A-Rows |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |          |      1 |        |      1 |
|   1 |  TABLE ACCESS BY INDEX ROWID| ACTOR    |      1 |      1 |      1 |
|*  2 |   INDEX UNIQUE SCAN         | PK_ACTOR |      1 |      1 |      1 |
---------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - access("ACTOR"."ACTOR_ID"=1)

The derived table has been unnested, too.

UNION derived table

Works as well:

---------------------------------------------------------------------------------
| Id  | Operation                             | Name                   | E-Rows |
---------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |                        |        |
|   1 |  VIEW                                 |                        |      4 |
|   2 |   UNION-ALL                           |                        |        |
|   3 |    TABLE ACCESS BY INDEX ROWID BATCHED| ACTOR                  |      3 |
|*  4 |     INDEX RANGE SCAN                  | IDX_ACTOR_LAST_NAME    |      3 |
|   5 |    TABLE ACCESS BY INDEX ROWID BATCHED| CUSTOMER               |      1 |
|*  6 |     INDEX RANGE SCAN                  | IDX_CUSTOMER_LAST_NAME |      1 |
---------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   4 - access("LAST_NAME"='DAVIS')
   6 - access("LAST_NAME"='DAVIS')

However, without unnesting the derived table. The Id=1 “VIEW” indicates that it’s still there. This isn’t a problem in this case, just perhaps a bit of cosmetic overhead.

PostgreSQL

Simple derived table

Yes, it works:

QUERY PLAN                                          
----------------------------------------------------
Seq Scan on actor  (cost=0.00..4.50 rows=1 width=25)
  Filter: (actor_id = 1)                            

Note, interestingly, PostgreSQL sometimes doesn’t even use the PRIMARY KEY for a single row lookup but scans the entire table. In this case, 200 rows x 25 bytes per row (“width”) fits in a single block, so why bother reading the index anyway, generating more I/O for this small table access?

UNION derived table

Yes, this works as well:

QUERY PLAN                                                                         
-----------------------------------------------------------------------------------
Append  (cost=0.00..12.83 rows=4 width=45)                                         
  ->  Seq Scan on actor  (cost=0.00..4.50 rows=3 width=45)                         
        Filter: ((last_name)::text = 'DAVIS'::text)                                
  ->  Index Scan using idx_last_name on customer  (cost=0.28..8.29 rows=1 width=45)
        Index Cond: ((last_name)::text = 'DAVIS'::text)                            

Again, the index on ACTOR.LAST_NAME is not used, but the one on CUSTOMER.LAST_NAME is, as the CUSTOMER table is quite a bit larger.

SQL Server

Simple derived table

Yep, works

  |--Nested Loops(Inner Join)
       |--Index Seek(SEEK:([actor_id]=(1)))
       |--RID Lookup(OBJECT:([actor]))

UNION derived table

Works as well.

  |--Concatenation
       |--Compute Scalar(DEFINE:([Expr1003]='actor'))
       |    |--Nested Loops(Inner Join)
       |         |--Index Seek(SEEK:([actor].[last_name]='DAVIS'))
       |         |--RID Lookup(OBJECT:([actor]))
       |--Compute Scalar(DEFINE:([Expr1007]='customer'))
            |--Nested Loops(Inner Join)
                 |--Index Seek(SEEK:([customer].[last_name]='DAVIS'))
                 |--RID Lookup(OBJECT:([customer]))

Summary

My hopes were shattered. MySQL 8.0.2 doesn’t completely support this simple optimisation yet. All others do, however:

Database         Simple derived table pushdown  UNION derived table pushdown
-----------------------------------------------------------------------------
DB2 LUW 10.5     Yep                            Yep
MySQL 8.0.2      Yep                            Nope
Oracle 12.2.0.1  Yep                            Yep
PostgreSQL 9.6   Yep                            Yep
SQL Server 2014  Yep                            Yep

Conclusion

The list presented here is far from complete. There are many more of these simple SQL transformations that are (or should be) a no-brainer for a database to implement, even before the cost-based optimiser kicks in. They remove what I call unnecessary, optional work (as opposed to unnecessary, mandatory work). They are essential tools for:

  • Preventing silly mistakes from affecting SQL performance. Everyone makes mistakes, and as projects grow larger and SQL queries grow more complex, these mistakes might accumulate, yet hopefully, without effect
  • Enabling the reuse of complex building blocks, such as views and table-valued functions, which can be inlined into parent SQL queries, transformed, and parts removed or rewritten

These features are essential for the second part. Without them, it is very difficult to build 4000 LOC SQL queries that still perform decently, based on a library of reusable SQL components.

Unfortunately for users of PostgreSQL and MySQL, these two popular Open Source databases still lag far behind their commercial counterparts DB2, Oracle, and SQL Server – DB2 fared best in this article, with Oracle and SQL Server roughly on par.

SQL is a wonderful language, because it is declarative and any statement can be rewritten to something simpler or more sophisticated, which performs much better than what the author has written. If you have liked this article, you may also like:

How to Write Efficient TOP N Queries in SQL

A very common type of SQL query is the TOP-N query, where we need the “TOP N” records ordered by some value, possibly per category. In this blog post, we’re going to look into a variety of different aspects to this problem, as well as how to solve them with standard and non-standard SQL.

These are the different aspects we’ll discuss:

  • Top values
  • Top values with ties
  • Top values per category

Getting the Top Value

When looking at the Sakila database, we might want to find the actor who played in the most films. The simplest solution here would be to use GROUP BY to find the number of films per actor, and then ORDER BY and LIMIT to find the “TOP 1” actor.

Here’s the query in PostgreSQL:

SELECT actor_id, first_name, last_name, count(film_id)
FROM actor
LEFT JOIN film_actor USING (actor_id)
GROUP BY actor_id, first_name, last_name
ORDER BY count(film_id) DESC
LIMIT 1;

Yielding

ACTOR_ID  FIRST_NAME  LAST_NAME  COUNT
--------------------------------------
107       GINA        DEGENERES  42

Other databases have different syntaxes for LIMIT – check out the jOOQ manual for a complete list of emulations of this useful clause, e.g. in Oracle 12c, we would use FETCH:

SELECT actor_id, first_name, last_name, count(film_id)
FROM actor
LEFT JOIN film_actor USING (actor_id)
GROUP BY actor_id, first_name, last_name
ORDER BY count(*) DESC
FETCH FIRST ROW ONLY;

Or, in SQL Server, we could use TOP

SELECT TOP 1 a.actor_id, first_name, last_name, count(film_id)
FROM actor a
LEFT JOIN film_actor fa ON a.actor_id = fa.actor_id
GROUP BY a.actor_id, first_name, last_name
ORDER BY count(*) DESC;

… which kinda makes sense. We want the TOP 1 actor, right?

Alternatives with Subqueries

In my SQL Masterclass, we’re also looking at the performance of various variants of solving this particular problem. For instance, we could calculate the COUNT(*) value in a derived table, prior to joining:

SELECT actor_id, first_name, last_name, COALESCE(c, 0)
FROM actor
LEFT JOIN (
  SELECT actor_id, count(*) AS c
  FROM film_actor
  GROUP BY actor_id
) fa USING (actor_id)
ORDER BY COALESCE(c, 0) DESC
LIMIT 1;

Or, we could calculate the COUNT(*) value in a correlated subquery:

SELECT actor_id, first_name, last_name, (
  SELECT count(*)
  FROM film_actor fa
  WHERE fa.actor_id = a.actor_id
) AS c
FROM actor a
ORDER BY c DESC
LIMIT 1;

These perform vastly differently depending on the database, as can be seen in this tweet.

Anyway, the different techniques do not really influence the TOP-N semantics themselves, as we’re always using the same two clauses to get the TOP 1 actor:

ORDER BY c DESC -- Some means of calculating "c"
LIMIT 1;        -- Some means of limiting results to 1 row

Window function alternative

In the old days when databases like Oracle didn’t really support LIMIT (or when using DB2, SQL Server, and Sybase, which support “LIMIT” but not “OFFSET”), people used to resort to using window functions. In order to get the FETCH FIRST n ROWS ONLY semantics, we can use ROW_NUMBER():

SELECT *
FROM (
  SELECT 
    actor_id, first_name, last_name, count(film_id),
    row_number() OVER (ORDER BY count(film_id) DESC) rn
  FROM actor
  LEFT JOIN film_actor USING (actor_id)
  GROUP BY actor_id, first_name, last_name
) t
WHERE rn <= 5
ORDER BY rn;

The derived table contains that ROW_NUMBER() window function, which produces a unique, consecutive row number for each actor ordered by the number of films the actor played in.

Note that thanks to the aggregation step “happening before” the windowing step, we can use aggregation functions inside of window functions.

We’ll revisit the window function approach again later, when implementing “WITH TIES” semantics using RANK().

Oracle Specific Alternative

In Oracle, we have those cool FIRST and LAST functions, with a bit of an arcane syntax. They help find the first or last element in a sorted group.

SELECT 
  max(actor_id)   KEEP (DENSE_RANK FIRST ORDER BY c DESC, actor_id),
  max(first_name) KEEP (DENSE_RANK FIRST ORDER BY c DESC, actor_id),
  max(last_name)  KEEP (DENSE_RANK FIRST ORDER BY c DESC, actor_id),
  max(c)          KEEP (DENSE_RANK FIRST ORDER BY c DESC, actor_id)
FROM (
  SELECT actor_id, first_name, last_name, count(film_id) c
  FROM actor
  LEFT JOIN film_actor USING (actor_id)
  GROUP BY actor_id, first_name, last_name
) t;

It’s fair to say that the syntax is a bit verbose.

How does this work? The nested query counts again all the films per actor, producing the entire result set. The outer query, however, aggregates all these actors and their film counts into a single row, displaying in that row only the first value(s) within the group, ordered by some specific ordering criteria. Note that the ACTOR_ID was also added to the ordering criteria, to get a unique ordering (see “with ties” semantics, below).

Why do we need the MAX() aggregate function? Simply because this feature applies only to aggregate functions, and the FIRST value(s) could be more than one value within the group, namely when the ordering would produce a “tie” (again, see below).

I’ve seen this slightly outperform the alternatives in TOP 1 cases (that ignore “TIES”, see below), including this one. A benchmark showing relative execution times shows:

Statement 1 : 1.88945  (FETCH version)
Statement 2 : 1.84442  (window function version)
Statement 3 : 1        (KEEP version)

The benchmark technique is described in this blog, on this page, and also again below. Measure yourself though!

Getting the Top Value “With Ties”

In the previous query, we’re getting a single top value, and the result is correct as we can see with this query that returns the TOP 5 values:

SELECT actor_id, first_name, last_name, count(film_id)
FROM actor
LEFT JOIN film_actor USING (actor_id)
GROUP BY actor_id, first_name, last_name
ORDER BY count(film_id) DESC
LIMIT 5;

The result being:

ACTOR_ID  FIRST_NAME  LAST_NAME  COUNT
--------------------------------------
107       GINA        DEGENERES  42
102       WALTER      TORN       41
198       MARY        KEITEL     40
181       MATTHEW     CARREY     39
23        SANDRA      KILMER     37

But what if WALTER TORN had played in yet one more film? We would have a “tie” between GINA DEGENERES and WALTER TORN, so which one would we pick as the TOP 1 actor? There are several possible answers:

  • Pick a random one by accident: This is what our query does right now. This might be considered fair, as the preference for the winner might be merely technical and might change between releases
  • Pick a random one explicitly: We could add another sort criterion that introduces randomness (e.g. a RANDOM() function call). Let’s just hope that this call is only made when needed, i.e. when we have a tie. Otherwise, that would be rather dangerous. But it would be fair.
  • Specify a unique additional sort criterion: E.g. the ACTOR_ID (see the sketch right after this list). That would make the result predictable, but in this case a bit unfair.
  • Fetch both tied actors. Yes, we can have several winners. That’s what sports events do, and that’s what we’ll discuss now.
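
A sketch of that third option, using ACTOR_ID as a deterministic tie breaker:

SELECT actor_id, first_name, last_name, count(film_id)
FROM actor
LEFT JOIN film_actor USING (actor_id)
GROUP BY actor_id, first_name, last_name
ORDER BY count(film_id) DESC, actor_id
LIMIT 1;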

So, let’s add one more film for WALTER TORN:

INSERT INTO film_actor (actor_id, film_id)
VALUES (102, 1);

The SQL standard specifies how we can fetch the first rows “with their ties”, namely by using … well … the FETCH FIRST ROWS WITH TIES syntax!

This is implemented as such in Oracle 12c, the only database I’ve seen with this standard syntax so far.

SELECT actor_id, first_name, last_name, count(film_id)
FROM actor
LEFT JOIN film_actor USING (actor_id)
GROUP BY actor_id, first_name, last_name
ORDER BY count(film_id) DESC
FETCH FIRST ROWS WITH TIES;

Note that all of these syntaxes are possible; they’re all equivalent. Use the one that resembles the English language the most, in your opinion (in other words: everyone does it differently):

FETCH FIRST 1 ROWS WITH TIES
FETCH FIRST 1 ROW WITH TIES
FETCH FIRST ROWS WITH TIES
FETCH FIRST ROW WITH TIES
FETCH NEXT 1 ROWS WITH TIES
FETCH NEXT 1 ROW WITH TIES
FETCH NEXT ROWS WITH TIES
FETCH NEXT ROW WITH TIES

The only other database I’m aware of that knows this feature (but not the standard syntax) is SQL Server:

SELECT TOP 1 WITH TIES 
  a.actor_id, first_name, last_name, count(*)
FROM actor a
JOIN film_actor fa ON a.actor_id = fa.actor_id
GROUP BY a.actor_id, first_name, last_name
ORDER BY count(*) DESC;

While SQL Server also supports the standard OFFSET .. FETCH syntax for the ROWS ONLY semantics (i.e. pick a random row among the winners), it supports WITH TIES only with the proprietary TOP syntax.
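
For reference, SQL Server’s standard OFFSET .. FETCH syntax (ROWS ONLY semantics only) would look like this – note that the OFFSET clause is mandatory before FETCH in SQL Server:

SELECT a.actor_id, first_name, last_name, count(*)
FROM actor a
JOIN film_actor fa ON a.actor_id = fa.actor_id
GROUP BY a.actor_id, first_name, last_name
ORDER BY count(*) DESC
OFFSET 0 ROWS FETCH NEXT 1 ROWS ONLY;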

That’s OK, we can always use TOP, because OFFSET pagination is verboten anyway, right?

Using Window Functions in Other Databases

As promised, we’re now revisiting a window function based solution to make the “WITH TIES” semantics work in other databases as well. We can make use of ranking functions, namely RANK() (or in more rare cases DENSE_RANK()).

Here’s how to find the TOP 1 actor WITH TIES in PostgreSQL, for instance:

SELECT *
FROM (
  SELECT 
    actor_id, first_name, last_name, count(film_id),
    rank() OVER (ORDER BY count(film_id) DESC) rk
  FROM actor
  LEFT JOIN film_actor USING (actor_id)
  GROUP BY actor_id, first_name, last_name
) t
WHERE rk = 1;

We’re now nesting the original query in a derived table, adding an additional column RK, which contains the RANK() of the row given the desired ordering. The result is:

ACTOR_ID  FIRST_NAME  LAST_NAME  COUNT  RK
------------------------------------------
107       GINA        DEGENERES  42     1
102       WALTER      TORN       42     1

If we wanted to find the TOP 5 again, we’d simply search for ranks less than or equal to 5. And don’t forget explicit ordering again. Even if the ordering from the subquery will probably be stable (namely the one from the window function, which was the last one applied in the subquery), we must never rely on a non-explicit ordering. NEVER.

SELECT *
FROM (
  SELECT 
    actor_id, first_name, last_name, count(film_id),
    rank() OVER (ORDER BY count(film_id) DESC) rk
  FROM actor
  LEFT JOIN film_actor USING (actor_id)
  GROUP BY actor_id, first_name, last_name
) t
WHERE rk <= 5
ORDER BY rk;

The result being:

ACTOR_ID  FIRST_NAME  LAST_NAME  COUNT  RK
------------------------------------------
107       GINA        DEGENERES  42     1
102       WALTER      TORN       42     1
198       MARY        KEITEL     40     3
181       MATTHEW     CARREY     39     4
23        SANDRA      KILMER     37     5

Now, if we had another actor with 37 films, that other actor would also appear in this list and the list would contain 6 actors, even if we searched for the TOP 5 (WITH TIES). Cool, eh?

Oracle Specific Solution

Remember we were discussing the Oracle-specific FIRST and LAST functions? Unfortunately, this cannot be used to get the WITH TIES semantics because the aggregation can produce only a single row, not multiple tied rows.

As a workaround, I’ve tried combining LISTAGG() with KEEP, to no avail:

SELECT 
  LISTAGG (first_name, ', ')
    WITHIN GROUP (ORDER BY count(film_id) DESC)
    KEEP (DENSE_RANK FIRST ORDER BY count(film_id) DESC),
...

While this could have produced all the ties in a comma separated list of values, this syntax is simply not allowed.

Top Values Per Category

Now, this is the coolest! So far, we’ve run single queries getting the single TOP N values over our entire data set. I.e. the actor with the most films.

Imagine, however, we’d like to get the TOP N something per actor. For instance, the TOP 3 most successful films an actor played in. That would be quite a query. To keep it simple on the Sakila database, let’s simply find the TOP 3 films with the longest titles per actor.

This will again use LIMIT, but this time, we need to do it in a subquery. The two tools we’ve seen so far won’t really work well:

  • Derived tables (subqueries in the FROM clause) cannot implement the per actor semantics easily, at least not with LIMIT. It’s possible with window functions as we’ll see later.
  • Correlated subqueries (subqueries in the SELECT or WHERE clauses) can only return one row and one column. Not quite useful when we want to return, e.g. the TOP 3 rows.

Luckily, the SQL standard specifies LATERAL (implemented by Oracle 12c, PostgreSQL, DB2) and SQL Server has always had APPLY (also available in Oracle 12c). Both syntaxes are exactly equivalent. Let’s look at APPLY first, as I much prefer this syntax:

SELECT actor_id, first_name, last_name, title
FROM actor a
OUTER APPLY (
  SELECT title
  FROM film f
  JOIN film_actor fa USING (film_id)
  WHERE fa.actor_id = a.actor_id
  ORDER BY length(title) DESC
  FETCH FIRST 3 ROWS ONLY
) t
ORDER BY actor_id, length(title) DESC;

This will produce:

ACTOR_ID  FIRST_NAME  LAST_NAME  TITLE
------------------------------------------------------
1         PENELOPE    GUINESS    BULWORTH COMMANDMENTS
1         PENELOPE    GUINESS    ANACONDA CONFESSIONS
1         PENELOPE    GUINESS    GLEAMING JAWBREAKER
2         NICK        WAHLBERG   GOODFELLAS SALUTE
2         NICK        WAHLBERG   DESTINY SATURDAY
2         NICK        WAHLBERG   ADAPTATION HOLES
3         ED          CHASE      ARTIST COLDBLOODED
3         ED          CHASE      NECKLACE OUTBREAK
3         ED          CHASE      BOONDOCK BALLROOM
...

Cool. How does it work?

Think of the derived table as a function. A function with a single argument:

SELECT actor_id, first_name, last_name, title
FROM actor a
OUTER APPLY (
  SELECT title
  FROM film f
  JOIN film_actor fa USING (film_id)
  WHERE fa.actor_id = a.actor_id -- this is the function argument
  ORDER BY length(title) DESC
  FETCH FIRST 3 ROWS ONLY
) t
ORDER BY actor_id, length(title) DESC;

Or, if we would be storing this as an actual table valued function, e.g. in PostgreSQL syntax:

CREATE FUNCTION top_3_films_per_actor(p_actor_id IN INTEGER)
RETURNS TABLE (
  title VARCHAR(50)
)
AS $$
  SELECT title
  FROM film f
  JOIN film_actor fa USING (film_id)
  WHERE fa.actor_id = p_actor_id
  ORDER BY length(title) DESC
  LIMIT 3
$$ LANGUAGE sql;

Table valued functions are pretty cool. They work like parameterised views, if your database can inline them. DB2 and SQL Server can; Oracle 12c and PostgreSQL 9.6 cannot. That’s no surprise in Oracle, which doesn’t support SQL functions, only PL/SQL functions.
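
For illustration, the SQL Server version would be an inline table valued function, which the optimiser can inline (a sketch, not tested against this schema):

-- A sketch of an inline TVF in T-SQL
CREATE FUNCTION top_3_films_per_actor (@actor_id INT)
RETURNS TABLE
AS RETURN (
  SELECT TOP 3 title
  FROM film f
  JOIN film_actor fa ON f.film_id = fa.film_id
  WHERE fa.actor_id = @actor_id
  ORDER BY len(title) DESC
);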

The PostgreSQL function can now be used as such:

SELECT actor_id, first_name, last_name, title
FROM actor a
LEFT JOIN LATERAL top_3_films_per_actor(a.actor_id) t 
ON 1 = 1 -- Silly predicate is required because of LEFT JOIN
ORDER BY actor_id, length(title) DESC;

Note that the LATERAL syntax is exactly the same as APPLY with a bit more syntactic ceremony, especially when used with OUTER JOIN.

Now, normally, we aren’t allowed to do this. We aren’t allowed to access the column A.ACTOR_ID from within a derived table or function expression. Inside the derived table, the table A is not defined, and the outer scope is not accessible either.

APPLY (and LATERAL) changes this. With these tools, we can now access all the tables and their columns to the left of the APPLY (or LATERAL) operator. This means our subquery now works like a correlated subquery, but it is allowed to return:

  • More than one row
  • More than one column

That’s really cool! If you’re working with Java 8 streams, think of it as the equivalent of the Stream.flatMap() operation. If we wanted to write this query in Java with streams, we could write something like:

actors
  .stream()
  .flatMap(a -> films
    .stream()
    .filter(f -> f.hasActor(a)) // Assuming some API
    .sorted(comparing(f -> f.title.length()).reversed())
    .limit(3)
    .map(f -> tuple(a, f)));

This would be more or less the same, with a few simplifications. E.g. the FILM_ACTOR JOIN has been cheated away with an auxiliary method Film.hasActor(Actor).

So, following the flatMap() “metaphor”, APPLY applies a table function (e.g. a subquery) to another table (in our case: ACTOR). That function produces a table for each record of the ACTOR table. In our case, we chose OUTER APPLY (instead of CROSS APPLY) because, like a LEFT JOIN, it will keep the actors without films in the result set – something that isn’t as easy to do with flatMap(), which corresponds to CROSS APPLY.
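
For comparison, the CROSS APPLY version, which, like an INNER JOIN, would remove actors without films, differs only in the keyword:

SELECT actor_id, first_name, last_name, title
FROM actor a
CROSS APPLY (
  SELECT title
  FROM film f
  JOIN film_actor fa USING (film_id)
  WHERE fa.actor_id = a.actor_id
  ORDER BY length(title) DESC
  FETCH FIRST 3 ROWS ONLY
) t
ORDER BY actor_id, length(title) DESC;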


WITH TIES Semantics

What if we wanted to list the TOP 3 films by their longest title per actor WITH TIES? I.e. if there are several films of equal length, we might get 4 or 5 or more films for any given actor?

In databases that support the WITH TIES syntax, this is really simple. In Oracle, just replace ROWS ONLY by ROWS WITH TIES:

SELECT actor_id, first_name, last_name, title
FROM actor a
OUTER APPLY (
  SELECT title
  FROM film f
  JOIN film_actor fa USING (film_id)
  WHERE fa.actor_id = a.actor_id
  ORDER BY length(title) DESC
  FETCH FIRST 3 ROWS WITH TIES
) t
ORDER BY actor_id, length(title) DESC;

In SQL Server, add the clause to the TOP clause:

SELECT actor_id, first_name, last_name, title
FROM actor a
OUTER APPLY (
  SELECT TOP 3 WITH TIES title
  FROM film f
  JOIN film_actor fa ON f.film_id = fa.film_id
  WHERE fa.actor_id = a.actor_id
  ORDER BY len(title) DESC
) t
ORDER BY actor_id, len(title) DESC;

In both cases, we’re now getting:

ACTOR_ID  FIRST_NAME  LAST_NAME  TITLE
------------------------------------------------------
1         PENELOPE    GUINESS    BULWORTH COMMANDMENTS
1         PENELOPE    GUINESS    ANACONDA CONFESSIONS
1         PENELOPE    GUINESS    GLEAMING JAWBREAKER
1         PENELOPE    GUINESS    WESTWARD SEABISCUIT
2         NICK        WAHLBERG   GOODFELLAS SALUTE
2         NICK        WAHLBERG   ADAPTATION HOLES
2         NICK        WAHLBERG   WARDROBE PHANTOM
2         NICK        WAHLBERG   RUSHMORE MERMAID
2         NICK        WAHLBERG   HAPPINESS UNITED
2         NICK        WAHLBERG   FIGHT JAWBREAKER
2         NICK        WAHLBERG   DESTINY SATURDAY
3         ED          CHASE      ARTIST COLDBLOODED
3         ED          CHASE      NECKLACE OUTBREAK
3         ED          CHASE      BOONDOCK BALLROOM
...

Quite a different result!

If we don’t have native support for WITH TIES, then we can use RANK() again, and this time, we don’t need APPLY or LATERAL:

SELECT actor_id, first_name, last_name, title
FROM actor a
LEFT JOIN (
  SELECT 
    actor_id,
    title, 
    rank() OVER (
      PARTITION BY actor_id
      ORDER BY length(title) DESC
    ) rk
  FROM film f
  JOIN film_actor fa USING (film_id)
) t USING (actor_id)
WHERE rk <= 3
ORDER BY actor_id, rk;

What are we doing here, compared to the APPLY version of the query?

  • First off, we’re no longer filtering anything in the subquery. The original WHERE clause that accessed the outer A.ACTOR_ID column is gone
  • … instead, we’re using that ACTOR_ID column in an ordinary JOIN predicate between the ACTOR table and our derived table that produces the films
  • We calculate the RANK() of films per actor. Unlike before, where we calculated the RANK() over the entire data set (the default PARTITION in window functions is always the entire data set), we now partition our data set by ACTOR_ID. So, for each actor, we’re getting the TOP 3 (and 4 and 5 and 6, …) films pre-calculated
  • … only then, in the outer query, we can filter again by this pre-calculated rank, hoping that the database will be smart enough and somehow push down the predicate into the ranking function.

Which is the faster solution? NEVER GUESS! ALWAYS MEASURE! 🙂

Benchmarking time

As a matter of fact, in Oracle, FETCH is just syntactic sugar for filtering on window functions, so the two queries should really perform equally well. I’m using the benchmarking technique from this article.
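
To illustrate (a sketch, not Oracle’s actual internal rewrite), a query using FETCH FIRST 3 ROWS WITH TIES corresponds roughly to this derived table filtering on RANK() – which matches the rowlimit_$$_rank predicates we’ll see in the execution plans below:

SELECT title
FROM (
  SELECT title, rank() OVER (ORDER BY length(title) DESC) rk
  FROM film
) t
WHERE rk <= 3;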

Oracle 12.2.0.1.0 first:

Here’s the full code for Oracle:

SET SERVEROUTPUT ON
CREATE TABLE results (
  run     NUMBER(2),
  stmt    NUMBER(2),
  elapsed NUMBER
);

DECLARE
  v_ts TIMESTAMP WITH TIME ZONE;
  v_repeat CONSTANT NUMBER := 100;
BEGIN

  -- Repeat benchmark several times to avoid warmup penalty
  FOR r IN 1..5 LOOP
    v_ts := SYSTIMESTAMP;
      
    FOR i IN 1..v_repeat LOOP
      FOR rec IN (
        SELECT actor_id, first_name, last_name, title
        FROM actor a
        OUTER APPLY (
          SELECT title
          FROM film f
          JOIN film_actor fa USING (film_id)
          WHERE fa.actor_id = a.actor_id
          ORDER BY length(title) DESC
          FETCH FIRST 3 ROWS WITH TIES
        ) t
        ORDER BY actor_id, length(title) DESC
      ) LOOP
        NULL;
      END LOOP;
    END LOOP;
  
    INSERT INTO results VALUES (r, 1,
      SYSDATE + ((SYSTIMESTAMP - v_ts) * 86400) - SYSDATE);
    v_ts := SYSTIMESTAMP;
      
    FOR i IN 1..v_repeat LOOP
      FOR rec IN (
        SELECT actor_id, first_name, last_name, title
        FROM actor a
        LEFT JOIN (
          SELECT 
            actor_id,
            title, 
            rank() OVER (
              PARTITION BY actor_id
              ORDER BY length(title) DESC
            ) rk
          FROM film f
          JOIN film_actor fa USING (film_id)
        ) t USING (actor_id)
        WHERE rk <= 3
        ORDER BY actor_id, rk
      ) LOOP
        NULL;
      END LOOP;
    END LOOP;
      
    INSERT INTO results VALUES (r, 2,
      SYSDATE + ((SYSTIMESTAMP - v_ts) * 86400) - SYSDATE);
  END LOOP;
  
  FOR rec IN (
    SELECT 
      run, stmt, 
      CAST(elapsed / MIN(elapsed) OVER() AS NUMBER(10, 5)) ratio 
    FROM results
  )
  LOOP
    dbms_output.put_line('Run ' || rec.run || 
      ', Statement ' || rec.stmt || 
      ' : ' || rec.ratio);
  END LOOP;
END;
/

DROP TABLE results;

Yielding:

Run 1, Statement 1 :  9.19943
Run 1, Statement 2 :  1.07469
Run 2, Statement 1 : 12.86258
Run 2, Statement 2 :  1.03741
Run 3, Statement 1 : 13.80538
Run 3, Statement 2 :  1.09832
Run 4, Statement 1 : 13.91985
Run 4, Statement 2 :  1.16206
Run 5, Statement 1 :  9.37335
Run 5, Statement 2 :  1

Results are relative to each other, not absolute (as I understand it, this is needed to comply with the Oracle license regarding the publication of benchmark results – no actual execution times are published). Statement 2 (using explicit ranking) is repeatedly at least 9x faster than statement 1 (using FETCH) on Oracle 12.2.0.1.0! Bummer!

What about SQL Server 2014?

Benchmarking logic:

DECLARE @ts DATETIME;
DECLARE @repeat INT = 100;
DECLARE @r INT;
DECLARE @i INT;
DECLARE @dummy1 INT;
DECLARE @dummy2 VARCHAR(50);
DECLARE @dummy3 VARCHAR(50);
DECLARE @dummy4 VARCHAR(50);

DECLARE @s1 CURSOR;
DECLARE @s2 CURSOR;

DECLARE @results TABLE (
  run     INT,
  stmt    INT,
  elapsed DECIMAL
);

SET @r = 0;
WHILE @r < 5
BEGIN
  SET @r = @r + 1

  SET @s1 = CURSOR FOR 
    SELECT actor_id, first_name, last_name, title
    FROM actor a
    OUTER APPLY (
      SELECT TOP 3 WITH TIES title
      FROM film f
      JOIN film_actor fa ON f.film_id = fa.film_id
      WHERE fa.actor_id = a.actor_id
      ORDER BY len(title) DESC
    ) t
    ORDER BY actor_id, len(title) DESC;

  SET @s2 = CURSOR FOR 
    SELECT a.actor_id, first_name, last_name, title
    FROM actor a
    LEFT JOIN (
      SELECT 
        actor_id,
        title, 
        rank() OVER (
          PARTITION BY actor_id
          ORDER BY len(title) DESC
        ) rk
      FROM film f
      JOIN film_actor fa ON f.film_id = fa.film_id
    ) t ON a.actor_id = t.actor_id
    WHERE rk <= 3
    ORDER BY a.actor_id, rk;

  SET @ts = current_timestamp;
  SET @i = 0;
  WHILE @i < @repeat
  BEGIN
    SET @i = @i + 1

    OPEN @s1;
    FETCH NEXT FROM @s1 INTO @dummy1, @dummy2, @dummy3, @dummy4;
    WHILE @@FETCH_STATUS = 0
    BEGIN
      FETCH NEXT FROM @s1 INTO @dummy1, @dummy2, @dummy3, @dummy4;
    END;

    CLOSE @s1;
  END;

  DEALLOCATE @s1;
  INSERT INTO @results 
  VALUES (@r, 1, DATEDIFF(ms, @ts, current_timestamp));

  SET @ts = current_timestamp;
  SET @i = 0;
  WHILE @i < @repeat
  BEGIN
    SET @i = @i + 1

    OPEN @s2;
    FETCH NEXT FROM @s2 INTO @dummy1, @dummy2, @dummy3, @dummy4;
    WHILE @@FETCH_STATUS = 0
    BEGIN
      FETCH NEXT FROM @s2 INTO @dummy1, @dummy2, @dummy3, @dummy4;
    END;

    CLOSE @s2;
  END;

  DEALLOCATE @s2;
  INSERT INTO @results 
  VALUES (@r, 2, DATEDIFF(ms, @ts, current_timestamp));
END;

SELECT 'Run ' + CAST(run AS VARCHAR) + ', Statement ' + 
  CAST(stmt AS VARCHAR) + ': ' + 
  CAST(CAST(elapsed / MIN(elapsed) OVER() AS DECIMAL(10, 5)) AS VARCHAR)
FROM @results;

Surprisingly, SQL Server, too, performs worse with the APPLY approach than with window functions, although the difference is smaller (I’m using SQL Server 2014):

Run 1, Statement 1: 5.07019
Run 1, Statement 2: 1.11988
Run 2, Statement 1: 5.48820
Run 2, Statement 2: 1.20683
Run 3, Statement 1: 5.08882
Run 3, Statement 2: 1.31429
Run 4, Statement 1: 5.31863
Run 4, Statement 2: 1.00000
Run 5, Statement 1: 5.07453
Run 5, Statement 2: 1.21491

I would have expected SQL Server to do much better here, given that this syntax has been available in SQL Server for a long time. Challenge to the reader: why do both databases perform so poorly with APPLY? I’ll explain Oracle’s problem in a bit.

Let’s look at PostgreSQL 9.6

In PostgreSQL 9.6, the “WITH TIES” semantics can only be emulated with window functions, so let’s stick to the plain “ROWS” semantics of the LIMIT clause.
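
(For reference, a sketch of what that emulation could look like inside the LATERAL subquery – the benchmark below uses plain LIMIT instead:)

SELECT actor_id, first_name, last_name, title
FROM actor a
LEFT JOIN LATERAL (
  SELECT title
  FROM (
    SELECT title, rank() OVER (ORDER BY length(title) DESC) rk
    FROM film f
    JOIN film_actor fa USING (film_id)
    WHERE fa.actor_id = a.actor_id
  ) x
  WHERE rk <= 3
) t ON true
ORDER BY actor_id, length(title) DESC;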

DO $$
DECLARE
  v_ts TIMESTAMP;
  v_repeat CONSTANT INT := 100;
  rec RECORD;
  run INT[];
  stmt INT[];
  elapsed DECIMAL[];
  max_elapsed DECIMAL;
  i INT := 1;
BEGIN

  -- Repeat benchmark several times to avoid warmup penalty
  FOR r IN 1..5 LOOP
    v_ts := clock_timestamp();

    FOR i IN 1..v_repeat LOOP
      FOR rec IN (
        SELECT actor_id, first_name, last_name, title
        FROM actor a
        LEFT JOIN LATERAL (
          SELECT title
          FROM film f
          JOIN film_actor fa USING (film_id)
          WHERE fa.actor_id = a.actor_id
          ORDER BY length(title) DESC
          LIMIT 3
        ) t ON true
        ORDER BY actor_id, length(title) DESC
      ) LOOP
        NULL;
      END LOOP;
    END LOOP;

    run[i] := r;
    stmt[i] := 1;
    elapsed[i] := 
       (EXTRACT(EPOCH FROM CAST(clock_timestamp() AS TIMESTAMP)) 
      - EXTRACT(EPOCH FROM v_ts));
    i := i + 1;
    v_ts := clock_timestamp();

    FOR i IN 1..v_repeat LOOP
      FOR rec IN (
        SELECT actor_id, first_name, last_name, title
        FROM actor a
        LEFT JOIN (
          SELECT 
            actor_id,
            title, 
            row_number() OVER (
              PARTITION BY actor_id
              ORDER BY length(title) DESC
            ) rn
          FROM film f
          JOIN film_actor fa USING (film_id)
        ) t USING (actor_id)
        WHERE rn <= 3
        ORDER BY actor_id, rn
      ) LOOP
        NULL;
      END LOOP;
    END LOOP;

    run[i] := r;
    stmt[i] := 2;
    elapsed[i] := 
       (EXTRACT(EPOCH FROM CAST(clock_timestamp() AS TIMESTAMP)) 
      - EXTRACT(EPOCH FROM v_ts));
    i := i + 1;
  END LOOP;

  SELECT max(t.elapsed)
  INTO max_elapsed
  FROM unnest(elapsed) AS t(elapsed);

  FOR i IN 1..array_length(run, 1) LOOP
    RAISE INFO 'RUN %, Statement %: %', run[i], stmt[i], 
      CAST(elapsed[i] / max_elapsed AS DECIMAL(10, 5));
  END LOOP;
END$$;

This time, the results are more even. It doesn’t really seem to matter which approach you pick:

00000: RUN 1, Statement 1: 0.75904
00000: RUN 1, Statement 2: 0.99784
00000: RUN 2, Statement 1: 0.75907
00000: RUN 2, Statement 2: 0.95206
00000: RUN 3, Statement 1: 0.82102
00000: RUN 3, Statement 2: 0.94841
00000: RUN 4, Statement 1: 0.96208
00000: RUN 4, Statement 2: 0.96218
00000: RUN 5, Statement 1: 0.83398
00000: RUN 5, Statement 2: 1.00000

There are two possible reasons for this:

  • Optimistic: LATERAL is better optimised in PostgreSQL
  • Pessimistic: window functions are less well optimised in PostgreSQL

From experience with tuning PostgreSQL queries, I’ll go for the pessimistic guess, because indeed, there is no hint in the execution plan about any optimisation being done on the window function predicate:

Sort  (cost=911.26..915.81 rows=1821 width=40)
  Sort Key: a.actor_id, t.rn
  -> Hash Join  (cost=596.44..812.65 rows=1821 width=40)
     Hash Cond: (t.actor_id = a.actor_id)
     -> Subquery Scan on t  (cost=589.94..781.11 rows=1821 width=25)
        Filter: (t.rn <= 3)
        -> WindowAgg  (cost=589.94..712.83 rows=5462 width=29)
           -> Sort  (cost=589.94..603.59 rows=5462 width=21)
              Sort Key: fa.actor_id, (length((f.title)::text)) DESC
              -> Hash Join  (cost=77.50..250.88 rows=5462 width=21)
                 Hash Cond: (fa.film_id = f.film_id)
                 -> Seq Scan on film_actor fa (cost=0.0..84.62 rows=5462 width=4)
                 -> Hash  (cost=65.00..65.00 rows=1000 width=19)
                    ->  Seq Scan on film f  (cost=0.00..65.00 rows=1000 width=19)
     -> Hash  (cost=4.00..4.00 rows=200 width=17)
        -> Seq Scan on actor a  (cost=0.00..4.00 rows=200 width=17)

What about DB2 10.5?

BEGIN
  DECLARE CONTINUE HANDLER FOR SQLSTATE '42710' BEGIN END;
  EXECUTE IMMEDIATE 'CREATE TABLE print_relative (run INTEGER, stmt INTEGER, elapsed DECIMAL(20, 4))';
END 

BEGIN
  DECLARE v_ts TIMESTAMP;
  DECLARE v_repeat INTEGER DEFAULT 100;
  DECLARE v_i INTEGER;
  DECLARE v_j INTEGER;

  -- Repeat benchmark several times to avoid warmup penalty
  SET v_i = 1;
  
  DELETE FROM print_relative;
  
  REPEAT
    SET v_j = 1;
    SET v_ts = CURRENT_TIMESTAMP;
    
    REPEAT
      FOR rec AS cur CURSOR FOR
        SELECT actor_id, first_name, last_name, title
        FROM actor a
        LEFT JOIN LATERAL (
          SELECT title
          FROM film f
          JOIN film_actor fa ON f.film_id = fa.film_id
          WHERE fa.actor_id = a.actor_id
          ORDER BY length(title) DESC
          FETCH FIRST 3 ROWS ONLY
        ) t ON 1 = 1
        ORDER BY actor_id, length(title) DESC
      DO
        BEGIN END;
      END FOR;
    
      SET v_j = v_j + 1;
      UNTIL v_j = v_repeat
    END REPEAT;
      
    INSERT INTO print_relative 
    VALUES (v_i, 1, (CURRENT_TIMESTAMP - v_ts));
    
    SET v_j = 1;
    SET v_ts = CURRENT_TIMESTAMP;
    
    REPEAT
      FOR rec AS cur CURSOR FOR
        SELECT a.actor_id, first_name, last_name, title
        FROM actor a
        LEFT JOIN (
          SELECT 
            actor_id,
            title, 
            row_number() OVER (
              PARTITION BY actor_id
              ORDER BY length(title) DESC
            ) rn
          FROM film f
          JOIN film_actor fa ON f.film_id = fa.film_id
        ) t ON t.actor_id = a.actor_id
        WHERE rn <= 3
        ORDER BY a.actor_id, rn
      DO
        BEGIN END;
      END FOR;
      
      SET v_j = v_j + 1;
      UNTIL v_j = v_repeat
    END REPEAT;

    INSERT INTO print_relative 
    VALUES (v_i, 2, (CURRENT_TIMESTAMP - v_ts));
    
    SET v_i = v_i + 1;
    UNTIL v_i = 5
  END REPEAT;
END

SELECT
  run, 
  stmt,
  CAST(elapsed / MIN(elapsed) OVER() AS DECIMAL(20, 4)) ratio
FROM print_relative;

DROP TABLE print_relative;

The result: again, window functions outperform LATERAL joining:

RUN  STMT  RATIO
1    1     6.0353
1    2     1.0783
2    1     6.2549
2    2     1.0093
3    1     6.1067
3    2     1.0000
4    1     6.1987
4    2     1.0128

Explanation for benchmarks

For this explanation, I’m looking at Oracle execution plans only, showing the kind of rationale that can be derived from them. I’m assuming that similar problems may appear in the other 3 databases.

Here’s the plan for the window function query, obtained through:

SELECT * FROM TABLE (
  dbms_xplan.display_cursor(format => 'ALLSTATS LAST')
);

This displays the actual execution plan with gathered statistics (the query is run with the /*+GATHER_PLAN_STATISTICS*/ hint first), not the estimated plan.
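
For instance (a sketch), the window function query is first executed with the hint:

SELECT /*+GATHER_PLAN_STATISTICS*/
  actor_id, first_name, last_name, title
FROM actor a
LEFT JOIN (
  SELECT
    actor_id,
    title,
    rank() OVER (
      PARTITION BY actor_id
      ORDER BY length(title) DESC
    ) rk
  FROM film f
  JOIN film_actor fa USING (film_id)
) t USING (actor_id)
WHERE rk <= 3
ORDER BY actor_id, rk;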

----------------------------------------------------------------------
| Id  | Operation                  | Name          | Starts | A-Rows |
----------------------------------------------------------------------
|   0 | SELECT STATEMENT           |               |      1 |    767 |
|   1 |  SORT ORDER BY             |               |      1 |    767 |
|*  2 |   HASH JOIN                |               |      1 |    767 |
|   3 |    TABLE ACCESS FULL       | ACTOR         |      1 |    200 |
|*  4 |    VIEW                    |               |      1 |    767 |
|*  5 |     WINDOW SORT PUSHED RANK|               |      1 |   1079 |
|*  6 |      HASH JOIN             |               |      1 |   5463 |
|   7 |       TABLE ACCESS FULL    | FILM          |      1 |   1000 |
|   8 |       INDEX FAST FULL SCAN | PK_FILM_ACTOR |      1 |   5463 |
----------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - access("A"."ACTOR_ID"="T"."ACTOR_ID")
   4 - filter("T"."RK"<=3)
   5 - filter(RANK() OVER ( PARTITION BY "FA"."ACTOR_ID" 
         ORDER BY LENGTH("F"."TITLE") DESC )<=3)
   6 - access("F"."FILM_ID"="FA"."FILM_ID")

As can be seen, there’s some initial work of calculating the rank for most FILM_ACTOR rows from the nested query. I say most because the PUSHED RANK in the WINDOW SORT PUSHED RANK operation seems to indicate that the rank is calculated only as long as it is needed, i.e. until the ranking predicate rk <= 3 is no longer true.

In particular, this also means that we do not necessarily execute the entire sort, but we can keep a buffer of the current TOP 3, which can be done in O(N) rather than the full sort’s O(N log N). Whether this is really done in Oracle, I don’t know.

In any case, the result of the ranking operation is then efficiently hash joined with the ACTOR table.

What’s most interesting, though, is the “Starts” column, which indicates how many times an operation has been started. Each operation is started only once!

What about APPLY?

The plan is much worse:

---------------------------------------------------------------------------
| Id  | Operation                     | Name            | Starts | A-Rows |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |                 |      1 |    767 |
|   1 |  SORT ORDER BY                |                 |      1 |    767 |
|   2 |   MERGE JOIN OUTER            |                 |      1 |    767 |
|   3 |    TABLE ACCESS FULL          | ACTOR           |      1 |    200 |
|   4 |    BUFFER SORT                |                 |    200 |    767 |
|   5 |     VIEW                      | VW_LAT_14BC7596 |    200 |    767 |
|   6 |      VIEW                     | VW_LAT_A18161FF |    200 |    767 |
|*  7 |       VIEW                    |                 |    200 |    767 |
|*  8 |        WINDOW SORT PUSHED RANK|                 |    200 |   1079 |
|   9 |         NESTED LOOPS          |                 |    200 |   5463 |
|  10 |          TABLE ACCESS FULL    | FILM            |    200 |    200K|
|* 11 |          INDEX UNIQUE SCAN    | PK_FILM_ACTOR   |    200K|   5463 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
 
   7 - filter("from$_subquery$_008"."rowlimit_$$_rank"<=3)
   8 - filter(RANK() OVER ( ORDER BY LENGTH("F"."TITLE") DESC )<=3)
  11 - access("FA"."ACTOR_ID"="A"."ACTOR_ID" AND "F"."FILM_ID"="FA"."FILM_ID")

Observe the “Starts” column. On Id 4, we’re experiencing 200 executions of the BUFFER SORT operation. That’s because there are exactly 200 rows in the ACTOR table (as can be seen on Id 3). So, indeed, there’s no optimisation happening that would calculate the “laterally correlated subquery” for all actors in one go.

Furthermore, quite unfortunately, the JOIN order between FILM and FILM_ACTOR seems quite wrong. For each of the 200 actors, we’re loading the entire 1000 films, which produces 200 * 1000 = 200K rows to scan for those belonging to the single actor from the outer query (Id 11). That’s unreasonably repetitive.

I’d expect Oracle to invert the table access order. We can force this with hints:

SELECT actor_id, first_name, last_name, title
FROM actor a
OUTER APPLY (
  SELECT /*+LEADING(fa f) USE_NL(fa f)*/ title
  FROM film f
  JOIN film_actor fa USING (film_id)
  WHERE fa.actor_id = a.actor_id
  ORDER BY length(title) DESC
  FETCH FIRST 3 ROWS WITH TIES
) t
ORDER BY actor_id, length(title) DESC;

Observe the two hints:

  • LEADING to indicate that the FILM_ACTOR table should be the driving row source of the join
  • USE_NL to enforce a NESTED LOOP JOIN (without this, and with LEADING, Oracle would prefer a HASH JOIN)

With the hints, the benchmark results look a lot better:

Statement 1 : 1          (window functions)
Statement 2 : 9.17483    (APPLY without hints)
Statement 3 : 4.88774    (APPLY with LEADING hint)
Statement 4 : 1.65269    (APPLY with LEADING and USE_NL hints)

Also, the execution plan now no longer has those excessive “Starts” and “A-Rows” values:

--------------------------------------------------------------------------------
| Id  | Operation                           | Name           | Starts | A-Rows |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |                |      1 |     50 |
|   1 |  SORT ORDER BY                      |                |      1 |     50 |
|   2 |   MERGE JOIN OUTER                  |                |      1 |    767 |
|   3 |    TABLE ACCESS FULL                | ACTOR          |      1 |    200 |
|   4 |    BUFFER SORT                      |                |    200 |    767 |
|   5 |     VIEW                            | VW_LAT_2880E7C5|    200 |    767 |
|   6 |      VIEW                           | VW_LAT_A18161FF|    200 |    767 |
|*  7 |       VIEW                          |                |    200 |    767 |
|*  8 |        WINDOW SORT PUSHED RANK      |                |    200 |   1079 |
|   9 |         NESTED LOOPS                |                |    200 |   5463 |
|  10 |          NESTED LOOPS               |                |    200 |   5463 |
|* 11 |           INDEX RANGE SCAN          | PK_FILM_ACTOR  |    200 |   5463 |
|* 12 |           INDEX UNIQUE SCAN         | PK_FILM        |   5463 |   5463 |
|  13 |          TABLE ACCESS BY INDEX ROWID| FILM           |   5463 |   5463 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
 
   7 - filter("from$_subquery$_006"."rowlimit_$$_rank"<=3)
   8 - filter(RANK() OVER ( ORDER BY LENGTH("F"."TITLE") DESC )<=3)
  11 - access("FA"."ACTOR_ID"="A"."ACTOR_ID")
  12 - access("F"."FILM_ID"="FA"."FILM_ID")

The highest “A-Rows” value is easily explained: there are exactly 5463 rows in FILM_ACTOR. Still, I wish the 200 TOP-N lookups (one per actor) could be optimised somehow.

Interestingly, this plan is associated with a much higher cost than the original one, even though, in this case, it is much faster.

Conclusion

TOP N queries are very common; we need them all the time. The simplest way to write them is with an ORDER BY clause and a LIMIT clause.

Sometimes, we need WITH TIES semantics, and Oracle 12c as well as SQL Server provide it out of the box, with standard or vendor-specific syntax. Other databases can emulate WITH TIES using window functions rather easily.

When we need TOP N per category, an interesting and cool syntax comes in handy: APPLY or LATERAL. Very unfortunately, however, simple benchmarks have shown that these can be much slower than their window function counterparts. Of course, this will heavily depend on the size of the data set. Do not underestimate the cost of the O(N log N) sorts that occur in window functions. As always: Do measure, never guess!