## Calculate Percentiles to Learn About Data Set Skew in SQL

B-Tree indexes are perfect when your data is uniformly distributed, but they are not very useful when you have skewed data. I’ll explain later why this is the case, but let’s first learn how to detect “skew”.

### What is skew?

Skew is a term from statistics describing a distribution that is not symmetric. The example given on Wikipedia shows a distribution like this:

In RDBMS, we sometimes use the term skew colloquially to mean the same thing as non-uniform distribution, i.e. even a normal distribution would be considered “skewed”. We simply mean that some values appear more often than others. For this reason, I will put the term “skew” in double quotes in this article. While your RDBMS’s statistics contain this information once they are calculated, we can also detect such “skew” manually in ad-hoc queries using percentiles, which are defined in the SQL standard and supported as ordinary aggregate functions in a variety of databases, including:

• Oracle
• PostgreSQL
• SQL Server (regrettably, only as window functions)

### Uniform distribution

Let’s look at the FILM_ID values in the Sakila database:

```sql
SELECT
  percentile_disc(0.0) WITHIN GROUP (ORDER BY film_id) AS "0%",
  percentile_disc(0.1) WITHIN GROUP (ORDER BY film_id) AS "10%",
  percentile_disc(0.2) WITHIN GROUP (ORDER BY film_id) AS "20%",
  percentile_disc(0.3) WITHIN GROUP (ORDER BY film_id) AS "30%",
  percentile_disc(0.4) WITHIN GROUP (ORDER BY film_id) AS "40%",
  percentile_disc(0.5) WITHIN GROUP (ORDER BY film_id) AS "50%",
  percentile_disc(0.6) WITHIN GROUP (ORDER BY film_id) AS "60%",
  percentile_disc(0.7) WITHIN GROUP (ORDER BY film_id) AS "70%",
  percentile_disc(0.8) WITHIN GROUP (ORDER BY film_id) AS "80%",
  percentile_disc(0.9) WITHIN GROUP (ORDER BY film_id) AS "90%",
  percentile_disc(1.0) WITHIN GROUP (ORDER BY film_id) AS "100%"
FROM film;
```

What are we calculating here? We’re trying to find 11 different values for which we can say that:

• 0% of the film_ids are lower than the “0%” value
• 10% of the film_ids are lower than the “10%” value

Or in other words:

• 0% is the MIN(film_id) value
• 50% is the MEDIAN(film_id) value
• 100% is the MAX(film_id) value

The result shows an unsurprisingly uniform distribution:

```
0% |10% |20% |30% |40% |50% |60% |70% |80% |90% |100% |
---|----|----|----|----|----|----|----|----|----|-----|
1  |100 |200 |300 |400 |500 |600 |700 |800 |900 |1000 |
```

We can plot this in Microsoft Excel or some other tool to get this nice curve:

This is not surprising, as the IDs are just consecutive values, which is a desired property of surrogate keys.
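As an aside: `PERCENTILE_DISC` always returns an actual value contained in the data set, whereas the SQL standard also defines `PERCENTILE_CONT`, which interpolates between adjacent values. A quick sketch of the difference, using the same Sakila `film` table as above (PostgreSQL and Oracle both support this syntax):

```sql
-- percentile_disc() picks an existing value from the set,
-- percentile_cont() interpolates between adjacent values,
-- which matters for even row counts
SELECT
  percentile_disc(0.5) WITHIN GROUP (ORDER BY length) AS median_disc,
  percentile_cont(0.5) WITHIN GROUP (ORDER BY length) AS median_cont
FROM film;
```

For skew detection purposes, either variant works; this article sticks with `PERCENTILE_DISC`.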

### “Skewed” distribution

It’s a different story when we look at the distribution of amounts in the payment table:

```sql
SELECT
  percentile_disc(0.0) WITHIN GROUP (ORDER BY amount) AS "0%",
  percentile_disc(0.1) WITHIN GROUP (ORDER BY amount) AS "10%",
  percentile_disc(0.2) WITHIN GROUP (ORDER BY amount) AS "20%",
  percentile_disc(0.3) WITHIN GROUP (ORDER BY amount) AS "30%",
  percentile_disc(0.4) WITHIN GROUP (ORDER BY amount) AS "40%",
  percentile_disc(0.5) WITHIN GROUP (ORDER BY amount) AS "50%",
  percentile_disc(0.6) WITHIN GROUP (ORDER BY amount) AS "60%",
  percentile_disc(0.7) WITHIN GROUP (ORDER BY amount) AS "70%",
  percentile_disc(0.8) WITHIN GROUP (ORDER BY amount) AS "80%",
  percentile_disc(0.9) WITHIN GROUP (ORDER BY amount) AS "90%",
  percentile_disc(1.0) WITHIN GROUP (ORDER BY amount) AS "100%"
FROM payment;
```

We’re now getting:

```
0%   |10%  |20%  |30%  |40%  |50%  |60%  |70%  |80%  |90%  |100%
-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----
0.00 |0.99 |1.99 |2.99 |2.99 |3.99 |4.99 |4.99 |5.99 |6.99 |11.99
```

This looks … “skewed”, although clearly the bias is mainly caused by the fact that this data is generated. When we plot the above, we’re getting:

The slope is less steep at the beginning of this curve, which essentially means that more values exist at the lower end of the range than at the upper end. We can validate this with another query:

```sql
SELECT amount, count(*)
FROM (
  SELECT trunc(amount) AS amount
  FROM payment
) t
GROUP BY amount
ORDER BY amount;
```

… which yields:

```
amount |count |
-------|------|
0      |3003  |
1      |641   |
2      |3542  |
3      |1117  |
4      |3789  |
5      |1306  |
6      |1119  |
7      |675   |
8      |486   |
9      |257   |
10     |104   |
11     |10    |
```

Plotted:

When plotting this, we can see that there are more amounts in the lower half of the range than in the upper half, which leads to percentiles growing slower.
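If no charting tool is at hand, a poor man’s histogram can even be plotted directly in SQL. This is just a sketch using PostgreSQL’s `repeat()` function, scaling each bar down by a factor of 100:

```sql
-- Each '*' represents roughly 100 payments
SELECT
  trunc(amount) AS amount,
  repeat('*', (count(*) / 100)::int) AS histogram
FROM payment
GROUP BY trunc(amount)
ORDER BY amount;
```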

### Correlations

This technique can also be applied to detect correlations in data. We can, for instance, try to find the percentiles of the length of films, grouping the data set by rating. I’m using a grouping sets function here, ROLLUP(), to calculate the grand total as well. Just check out the query and its results, and you’ll see:

```sql
SELECT
  rating,
  count(*),
  percentile_disc(0.0) WITHIN GROUP (ORDER BY length) AS "0%",
  percentile_disc(0.1) WITHIN GROUP (ORDER BY length) AS "10%",
  percentile_disc(0.2) WITHIN GROUP (ORDER BY length) AS "20%",
  percentile_disc(0.3) WITHIN GROUP (ORDER BY length) AS "30%",
  percentile_disc(0.4) WITHIN GROUP (ORDER BY length) AS "40%",
  percentile_disc(0.5) WITHIN GROUP (ORDER BY length) AS "50%",
  percentile_disc(0.6) WITHIN GROUP (ORDER BY length) AS "60%",
  percentile_disc(0.7) WITHIN GROUP (ORDER BY length) AS "70%",
  percentile_disc(0.8) WITHIN GROUP (ORDER BY length) AS "80%",
  percentile_disc(0.9) WITHIN GROUP (ORDER BY length) AS "90%",
  percentile_disc(1.0) WITHIN GROUP (ORDER BY length) AS "100%"
FROM film
GROUP BY ROLLUP(rating);
```

This yields:

```
rating |count |0% |10% |20% |30% |40% |50% |60% |70% |80% |90% |100% |
-------|------|---|----|----|----|----|----|----|----|----|----|-----|
G      |178   |47 |57  |67  |80  |93  |107 |121 |138 |156 |176 |185  |
PG     |194   |46 |58  |72  |85  |99  |113 |122 |137 |151 |168 |185  |
PG-13  |223   |46 |61  |76  |92  |110 |125 |138 |150 |162 |176 |185  |
R      |195   |49 |68  |82  |90  |104 |115 |129 |145 |160 |173 |185  |
NC-17  |210   |46 |58  |74  |84  |97  |112 |125 |138 |153 |174 |185  |
       |1000  |46 |60  |74  |86  |102 |114 |128 |142 |156 |173 |185  |
```

So, the `GROUP BY` clause produced one row per rating, plus an additional grand total row at the bottom. For illustration purposes, I’ve added the `COUNT(*)` column, to show how many films are in each group. The first 5 rows sum up to 1000, which is again the grand total at the bottom.

Let’s plot the percentiles now as line and bar charts:

We can “see” that there is no strong correlation between rating and length. Both data sets are close to uniformly distributed, quite independently of the rating, with the exception of PG-13, which is skewed slightly towards longer film lengths.

Again, this isn’t terribly interesting as the data set was generated, probably using some randomness to avoid perfectly uniform distribution. In real world scenarios, the above data would have been more “skewed”.

### How does this help with performance?

A balanced tree index is very useful when data is quite uniformly distributed, because in that case, it can help access data points or ranges of data in O(log(N)) time. This is quite a useful property for queries that look for film_id values, e.g.

```sql
SELECT *
FROM film
WHERE film_id = 1;
```

When accessing “skewed” data, some values are more equal than others. This means that for example if we’re looking for amounts in the payment table, these two queries are not the same:

```sql
-- A lot of rows returned (3644)
SELECT * FROM payment WHERE amount BETWEEN 0 AND 2;

-- Few rows returned (361)
SELECT * FROM payment WHERE amount BETWEEN 9 AND 11;
```

An index on the amount column could have been useful for the second query, but maybe not for the first one.

There are several things we can do to make sure optimal index usage is being applied for all sorts of queries. In case of uniformly distributed data, we usually don’t have to do anything as SQL developers. In case of “skewed” data sets, it may be worth thinking about:

• Using histogram statistics
• Hinting the optimiser (in Oracle or SQL Server)
• Avoiding bind variables (only in extreme cases)
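For the first option, in Oracle, histogram statistics can be gathered explicitly for a given column. Here’s a sketch of what that might look like (the `method_opt` value below requests a histogram with the maximum bucket count; check the `DBMS_STATS` documentation for the details applicable to your version):

```sql
-- Ask Oracle to maintain a histogram on the skewed column,
-- so the optimiser can estimate selectivity per value range
BEGIN
  dbms_stats.gather_table_stats(
    ownname    => user,
    tabname    => 'PAYMENT',
    method_opt => 'FOR COLUMNS amount SIZE 254'
  );
END;
/
```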

### Conclusion

Not all data sets are equal. They are often “skewed”. By “skewed”, in SQL, we don’t mean the statistical meaning of a normal distribution being skewed asymmetrically. We mean that a distribution is not uniform, so even a normal distribution is “skewed”. When it is, then some values appear way more often than others. Some examples are:

Uniform distribution

• Surrogate keys generated from sequences (consecutive)
• Surrogate keys generated from UUIDs (random)
• Foreign keys on one-to-one relationships

Slight “skew”

Possibly significant “skew”

This really depends on the actual data set, but do expect significant “skew” in these data types:

• Foreign keys on to-many relationships (e.g. some customers have more assets than others)
• Numeric values (e.g. amount)
• Codes and other discrete values (e.g. film rating, payment settlement codes, etc.)

This article has shown how we can use simple SQL aggregate functions, including the percentiles, to calculate and visualise such “skew”.

## How to Use SQL UPDATE .. RETURNING to Run DML More Efficiently

At a customer site, I recently refactored a “slow-by-slow” PL/SQL loop and turned that into an efficient set based `UPDATE` statement saving many lines of code and running much faster. In this blog post, I will show how that can be done. The blog post will focus on Oracle and `UPDATE`, but rest assured, this technique can be implemented in other databases too, and also with other DML statements, such as `INSERT`, `DELETE`, and depending on the vendor, even `MERGE`.

### The Schema

The original logic that needed refactoring worked on the following data set (simplified for this blog post):

```sql
-- Table definition
CREATE TABLE t (
  id NUMBER(10) GENERATED ALWAYS AS IDENTITY NOT NULL PRIMARY KEY,
  category NUMBER(10) NOT NULL,
  counter NUMBER(10),
  text VARCHAR2(10) NOT NULL
);

-- Sample data
INSERT INTO t (category, text)
SELECT dbms_random.value(1, 10), dbms_random.string('a', 10)
FROM dual
CONNECT BY level <= 100;

-- Output of data
SELECT *
FROM t
ORDER BY counter DESC NULLS LAST, category, id;
```

The sample data generated above might look like this:

```
ID   CATEGORY   COUNTER   TEXT
16   1                    UIXSzJxDez
25   1                    hkvvrTRbTC
29   1                    IBOJYveDgf
44   1                    VhcwOugrWB
46   1                    gBJFJrPQYy
47   1                    bVzfHznOUj
10   2                    KpHHgsRXwR
11   2                    vpkhTrkaaU
14   2                    fDlNtRdvBE
```

So, there were certain records belonging to some category, and there’s a counter indicating how often each record has been encountered in some system.

### The “slow-by-slow” PL/SQL Logic

(“slow-by-slow” rhymes with “row-by-row”. You get the idea)

Every now and then, there was a message from another system that should:

• Fetch all the rows of a category
• Increase the counter on each element of that category
• Concatenate all the texts of that category and return those

Sounds like something that can be done very easily using a loop. In PL/SQL (but imagine you could be doing this in Java just the same):

```sql
SET SERVEROUTPUT ON
DECLARE
  v_text VARCHAR2(2000);
  v_updated PLS_INTEGER := 0;
BEGIN
  FOR r IN (
    SELECT * FROM t WHERE category = 1
  ) LOOP
    v_updated := v_updated + 1;

    IF v_text IS NULL THEN
      v_text := r.text;
    ELSE
      v_text := v_text || ', ' || r.text;
    END IF;

    IF r.counter IS NULL THEN
      UPDATE t SET counter = 1 WHERE id = r.id;
    ELSE
      UPDATE t SET counter = counter + 1 WHERE id = r.id;
    END IF;
  END LOOP;

  COMMIT;
  dbms_output.put_line('Rows updated: ' || v_updated);
  dbms_output.put_line('Returned:     ' || v_text);
END;
/
```

The result of this block would be:

```
Rows updated: 6
Returned:     UIXSzJxDez, hkvvrTRbTC, IBOJYveDgf, VhcwOugrWB, gBJFJrPQYy, bVzfHznOUj
```

And the data is now:

```
ID   CATEGORY   COUNTER   TEXT
16   1          1         UIXSzJxDez
25   1          1         hkvvrTRbTC
29   1          1         IBOJYveDgf
44   1          1         VhcwOugrWB
46   1          1         gBJFJrPQYy
47   1          1         bVzfHznOUj
10   2                    KpHHgsRXwR
11   2                    vpkhTrkaaU
14   2                    fDlNtRdvBE
```

Wonderful. What’s wrong with this? The logic is straightforward and runs quite quickly. Until you run this many many many times per second – then it suddenly starts to hurt.

### Thinking Set Based

Whenever you work with RDBMS, try to think in terms of data sets and try running a bulk operation on such a data set. (Exceptions exist, see caveats below). The modification of the data can be written in a single SQL statement, instead of updating the same table many times.

Here’s the SQL statement in Oracle, that does precisely the same thing:

```sql
SET SERVEROUTPUT ON
DECLARE
  v_text VARCHAR2(2000);
  v_updated PLS_INTEGER := 0;
BEGIN
  UPDATE t
  SET counter = nvl(counter, 0) + 1
  WHERE category = 1
  RETURNING
    listagg (text, ', ') WITHIN GROUP (ORDER BY text),
    count(*)
  INTO
    v_text,
    v_updated;

  COMMIT;
  dbms_output.put_line('Rows updated: ' || v_updated);
  dbms_output.put_line('Returned:     ' || v_text);
END;
/
```

Again, the same output:

```
Rows updated: 6
Returned:     UIXSzJxDez, hkvvrTRbTC, IBOJYveDgf, VhcwOugrWB, gBJFJrPQYy, bVzfHznOUj
```

And the data set is now:

```
ID   CATEGORY   COUNTER   TEXT
16   1          2         UIXSzJxDez
25   1          2         hkvvrTRbTC
29   1          2         IBOJYveDgf
44   1          2         VhcwOugrWB
46   1          2         gBJFJrPQYy
47   1          2         bVzfHznOUj
10   2                    KpHHgsRXwR
11   2                    vpkhTrkaaU
14   2                    fDlNtRdvBE
```

Below, you can see each piece of logic of the original PL/SQL block, and the corresponding logic in the revised SQL statement.

There are 4 areas of interest:

1. Red: The category predicate
In the PL/SQL version, this predicate is a simple access predicate for the `SELECT` statement, over whose implicit cursor we’re iterating. In the set based SQL version, that predicate has been moved into the single bulk `UPDATE` statement. Thus: we’re modifying the exact same set of rows.
2. Blue: The number of updated rows
Before, we had a count variable that counted the number of iterations over the implicit cursor. Now, we can simply count the number of rows being updated in the bulk update statement, conveniently in the `RETURNING` clause. An alternative (in Oracle) would have been to use `SQL%ROWCOUNT`, which is available for free after a single bulk `UPDATE` statement.
3. Orange: The string concatenation
The requirement was to concatenate all the texts which are being updated. In the “slow-by-slow” PL/SQL approach, we’re again keeping around a local variable and concatenate new values to it, doing some `NULL` handling, initially. In the set based SQL version, we can simply use `LISTAGG()` in the `RETURNING` clause. Notice, there seems to be a bug with this usage of `LISTAGG`. The `ORDER BY` clause has no effect.
4. Green: The actual update
In the “slow-by-slow” version, we run 1 `UPDATE` statement per row, which can turn out to be devastating, if we’re updating a lot of rows. Besides, in this particular case, the developer(s) have been unaware of the possibility of `NULL` handling using `NVL()` (or `COALESCE()` or similar). There is really only one `UPDATE` statement necessary here.

That already looks a lot neater.

### How does it perform?

In a quick test script, which I’ve linked here, I could observe the following times for the above test data set, when running each approach 5 x 10000 times:

```
Run 1, Statement 1 : 2.63841 (avg : 2.43714)
Run 1, Statement 2 : 1.11019 (avg : 1.04562)
Run 2, Statement 1 : 2.35626 (avg : 2.43714)
Run 2, Statement 2 : 1.05716 (avg : 1.04562)
Run 3, Statement 1 : 2.38004 (avg : 2.43714)
Run 3, Statement 2 : 1.05153 (avg : 1.04562)
Run 4, Statement 1 : 2.47451 (avg : 2.43714)
Run 4, Statement 2 : 1.00921 (avg : 1.04562)
Run 5, Statement 1 : 2.33649 (avg : 2.43714)
Run 5, Statement 2 : 1.00000 (avg : 1.04562)
```

As always, I’m not publishing actual benchmark times, but relative times compared to the fastest run. The set based approach is consistently 2.5x faster on my machine (Oracle 18c on Docker on Windows 10 / SSD). This is updating 6 rows per execution.

When we remove the `WHERE category = 1` predicate, updating the entirety of the 100 rows each time, we get even more drastic results. I’m now running this 5 x 2000 times to get:

```
Run 1, Statement 1 : 10.21833 (avg : 11.98154)
Run 1, Statement 2 : 1.219130 (avg : 1.739260)
Run 2, Statement 1 : 10.17014 (avg : 11.98154)
Run 2, Statement 2 : 3.027930 (avg : 1.739260)
Run 3, Statement 1 : 9.444620 (avg : 11.98154)
Run 3, Statement 2 : 1.000000 (avg : 1.739260)
Run 4, Statement 1 : 20.54692 (avg : 11.98154)
Run 4, Statement 2 : 1.193560 (avg : 1.739260)
Run 5, Statement 1 : 9.527690 (avg : 11.98154)
Run 5, Statement 2 : 2.255680 (avg : 1.739260)
```

At this point, no one needs to be convinced anymore that a set based approach is much better for updating your data than a row-by-row approach in a language like PL/SQL or Java, etc.

### Caveats

Bulk updates are much better than row-by-row (remember: “slow-by-slow”) updates, regardless if you’re using PL/SQL or Java or whatever client language. This is because the optimiser can plan the update much more efficiently when it knows which rows will be updated in bulk, rather than seeing each individual row update afresh, not being able to plan ahead for the remaining number of updates.

However, in situations where a lot of other processes are reading the same data while you’re bulk updating it, you need to be more careful. In such cases, a bulk update can cause trouble, because it holds locks and keeps log files busy for the duration of the update, while the other processes may need to access the data in its state prior to your update.

One size never fits all, but at least, in every situation where you loop over a result set to update some data (or fetch additional data), ask yourself: Could I have written that logic in a single SQL statement? The answer is very often: Yes.

### Other databases

A few other databases support similar language features. These include:

• DB2: Implements the SQL standard (see below)
• Firebird: Exactly like Oracle: `RETURNING`
• PostgreSQL: Exactly like Oracle: `RETURNING`
• SQL Server: Similar, with the somewhat less powerful `OUTPUT` clause
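In PostgreSQL, for instance, the `RETURNING` clause does not allow aggregate functions directly, but the same effect can be achieved by wrapping the DML statement in a CTE (a sketch against the same table `t`):

```sql
-- The CTE runs the update; the outer query aggregates
-- over the rows returned by the RETURNING clause
WITH updated AS (
  UPDATE t
  SET counter = coalesce(counter, 0) + 1
  WHERE category = 1
  RETURNING text
)
SELECT
  count(*) AS rows_updated,
  string_agg(text, ', ' ORDER BY text) AS returned
FROM updated;
```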

The DB2 syntax is quite noteworthy, because:

• It is very elegant
• It corresponds to the SQL standard

```sql
SELECT
  listagg (text, ', ') WITHIN GROUP (ORDER BY id),
  count(*)
FROM FINAL TABLE (
  UPDATE t
  SET counter = nvl(counter, 0) + 1
  WHERE category = 1
)
```

## Beware of Hidden PL/SQL to SQL Context Switches

I recently stumbled upon a curious query on a customer’s productive Oracle database:

```sql
SELECT USER FROM SYS.DUAL
```

Two things caught my attention:

• The query was executed many billions of times per month, accounting for about 0.3% of that system’s load. That’s 0.3% for something extremely silly!
• I don’t think that customer would ever qualify the `DUAL` table as `SYS.DUAL`, which hints at some system functionality

I found it in Oracle Enterprise Manager, but you could also find it using a query like this one:

```sql
SELECT
  sql_id,
  executions,
  elapsed_time,
  ratio_to_report(elapsed_time) over() p,
  sql_text
FROM v$sql
ORDER BY p DESC;
```

Why was this query being run so often? In Enterprise Manager, the query’s statistics overview displayed that the query originated from a function called `STANDARD.USER` (I don’t know yet where I could find this information in the dictionary views, manually).

Naively, I had always thought that the `USER` pseudo column or pseudo constant is some value from the context, but like many other functions, it’s really just a function in the `SYS.STANDARD` package.

### What does STANDARD.USER() do?

Now, I’m not 100% sure if that source code is something that I am allowed to reproduce from a legal perspective, this being Oracle and all. But if you run this query here, which I am freely allowing you to:

```sql
WITH s AS (
  SELECT s.*,
    MIN(CASE
      WHEN upper(text) LIKE '%FUNCTION USER%'
      THEN line END
    ) OVER () s
  FROM all_source s
  WHERE owner = 'SYS'
  AND name = 'STANDARD'
  AND type = 'PACKAGE BODY'
)
SELECT text
FROM s
WHERE line >= s AND line < s + 6;
```

Then you might be able to see something like this:

```sql
function USER return varchar2 is
  c varchar2(255);
begin
  select user into c from sys.dual;
  return c;
end;
```

This is just the result of some SQL query I’ve shown you. Any correspondence with actual source code is merely coincidental.

Let’s assume this were the actual source code of the `STANDARD.USER()` function. We can now clearly see that this very SQL query that I’ve observed before is being executed! Want to verify this?

### Let’s benchmark

In essence, I’m comparing the performance of 500000 executions of this loop:

```sql
FOR i IN 1 .. v_repeat LOOP
  v := USER;
END LOOP;
```

With this one:

```sql
FOR i IN 1 .. v_repeat LOOP
  SELECT USER INTO v FROM dual;
END LOOP;
```

And this one:

```sql
FOR i IN 1 .. v_repeat LOOP
  -- Note: According to the doc, use 'SESSION_USER' instead
  v := sys_context('USERENV', 'CURRENT_USER');
END LOOP;
```

The result of this benchmark is:

```
Run 1, Statement 1 : 2.40509 (avg : 2.43158)
Run 1, Statement 2 : 2.13208 (avg : 2.11816)
Run 1, Statement 3 : 1.01452 (avg : 1.02081)

Run 2, Statement 1 : 2.41889 (avg : 2.43158)
Run 2, Statement 2 : 2.09753 (avg : 2.11816)
Run 2, Statement 3 : 1.00203 (avg : 1.02081)

Run 3, Statement 1 : 2.45384 (avg : 2.43158)
Run 3, Statement 2 : 2.09060 (avg : 2.11816)
Run 3, Statement 3 : 1.02239 (avg : 1.02081)

Run 4, Statement 1 : 2.39516 (avg : 2.43158)
Run 4, Statement 2 : 2.14140 (avg : 2.11816)
Run 4, Statement 3 : 1.06512 (avg : 1.02081)

Run 5, Statement 1 : 2.48493 (avg : 2.43158)
Run 5, Statement 2 : 2.12922 (avg : 2.11816)
Run 5, Statement 3 : 1.00000 (avg : 1.02081)
```

How to read this benchmark result? These aren’t actual times, which are not interesting, but relative times compared to the fastest run (run 5, statement 3 = 1). The explicit `SELECT USER FROM DUAL` is about half as fast as the `SYS_CONTEXT` call, and the `USER` call is a bit slower, even.

When re-running this query:

```sql
SELECT
  sql_id,
  executions,
  ratio_to_report(elapsed_time) over() p,
  sql_text
FROM v$sql
ORDER BY p DESC;
```

We can see:

```
SQL_ID          EXECUTIONS  P     SQL_TEXT
6r9s58qfu339c   1           0.26  DECLARE ...
1v717nvrhgbn9   2500000     0.14  SELECT USER FROM SYS.DUAL
...
```

So, this query has definitely been run way too many times, including the PL/SQL to SQL context switch that is involved.

I’m running this benchmark on Oracle 18.0.0.0.0 in Docker on a Windows machine. More close-to-the-metal, less virtualised setups might achieve more drastic results. See, e.g., how Connor McDonald got a much better improvement from using `SYS_CONTEXT`:

In this particular case, the `STANDARD.USER()` reference was used very often in triggers to fill in audit columns of many tables. Very easy to fix. Just use `sys_context('USERENV', 'CURRENT_USER')` instead.
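Such an audit trigger might then look like this. This is a hypothetical sketch (the table and audit column names are made up):

```sql
CREATE OR REPLACE TRIGGER some_table_audit_trg
BEFORE INSERT OR UPDATE ON some_table
FOR EACH ROW
BEGIN
  -- sys_context() involves no PL/SQL to SQL context switch,
  -- unlike the USER function's SELECT USER FROM SYS.DUAL
  :new.modified_by := sys_context('USERENV', 'CURRENT_USER');
  :new.modified_at := systimestamp;
END;
/
```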

### Full benchmark logic

```sql
SET SERVEROUTPUT ON

ALTER SYSTEM FLUSH SHARED_POOL;
ALTER SYSTEM FLUSH BUFFER_CACHE;

CREATE TABLE results (
  run     NUMBER(2),
  stmt    NUMBER(2),
  elapsed NUMBER
);

DECLARE
  v_ts TIMESTAMP WITH TIME ZONE;
  v_repeat CONSTANT NUMBER := 500000;
  v NUMBER;
BEGIN

  -- Repeat the whole benchmark several times to
  -- avoid warmup penalty
  FOR r IN 1..5 LOOP
    v_ts := SYSTIMESTAMP;

    FOR i IN 1 .. v_repeat LOOP
      v := v + length(USER);
    END LOOP;

    INSERT INTO results VALUES (r, 1,
      SYSDATE + ((SYSTIMESTAMP - v_ts) * 86400) - SYSDATE);
    v_ts := SYSTIMESTAMP;

    FOR i IN 1 .. v_repeat LOOP
      SELECT v + length(USER) INTO v FROM dual;
    END LOOP;

    INSERT INTO results VALUES (r, 2,
      SYSDATE + ((SYSTIMESTAMP - v_ts) * 86400) - SYSDATE);
    v_ts := SYSTIMESTAMP;

    FOR i IN 1 .. v_repeat LOOP
      -- Note: According to the doc, use 'SESSION_USER' instead
      v := v + length(sys_context('USERENV', 'CURRENT_USER'));
    END LOOP;

    INSERT INTO results VALUES (r, 3,
      SYSDATE + ((SYSTIMESTAMP - v_ts) * 86400) - SYSDATE);
  END LOOP;

  FOR rec IN (
    SELECT
      run, stmt,
      CAST(elapsed / MIN(elapsed) OVER() AS NUMBER(10, 5)) ratio,
      CAST(AVG(elapsed) OVER (PARTITION BY stmt) /
        MIN(elapsed) OVER() AS NUMBER(10, 5)) avg_ratio
    FROM results
    ORDER BY run, stmt
  )
  LOOP
    dbms_output.put_line('Run ' || rec.run ||
      ', Statement ' || rec.stmt ||
      ' : ' || rec.ratio || ' (avg : ' || rec.avg_ratio || ')');
  END LOOP;

  dbms_output.put_line('');
  dbms_output.put_line('https://www.jooq.org/benchmark');
END;
/

DROP TABLE results;
```

## How to Write Multiset Conditions With Oracle VARRAY Types

Oracle is one of the few databases that implement the SQL standard ORDBMS extensions, which essentially allow for nested collections. Other databases that have these features to some extent are CUBRID, Informix, and PostgreSQL.

Oracle has two types of nested collections:

```sql
-- Nested tables
CREATE TYPE t1 AS TABLE OF VARCHAR2(10);
/

-- Varrays
CREATE TYPE t2 AS VARRAY(10) OF VARCHAR2(10);
/
```

The main difference at first is that a nested table can be of arbitrary size, whereas a varray has a fixed maximum size. Other than that, they behave in similar ways.
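The fixed maximum size of a varray is enforced at runtime. A quick sketch, using the types defined above:

```sql
DECLARE
  v t2 := t2();
BEGIN
  -- t2 is a VARRAY(10), so it can hold at most 10 elements
  v.EXTEND(10);

  -- Extending beyond the declared limit raises
  -- ORA-06532: Subscript outside of limit
  v.EXTEND;
END;
/
```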

When storing a nested collection in a table, there is another difference. Varrays can be inlined into the table just like any other data type, whereas nested tables have to be accompanied by an additional storage clause:

```sql
CREATE TABLE t (
  id NUMBER(10),
  t1 t1,
  t2 t2
)
NESTED TABLE t1 STORE AS t1_nt;
```

This is a minor hassle in terms of DDL. The runtime implications are more significant.

### Multiset Conditions

The most important difference is the fact that all the useful multiset conditions are not available with varrays. For instance, consider running these statements:

```sql
INSERT INTO t VALUES (1, NULL, NULL);
INSERT INTO t VALUES (2, t1(), t2());
INSERT INTO t VALUES (
  3,
  t1('abc', 'xyz', 'zzz'),
  t2('abc', 'xyz', 'zzz')
);
INSERT INTO t VALUES (
  4,
  t1('dup', 'dup', 'dup'),
  t2('dup', 'dup', 'dup')
);

SELECT * FROM t WHERE 'abc' MEMBER OF t1;
SELECT * FROM t WHERE 'abc' MEMBER OF t2;
```

The result of these queries is:

```
ID  T1                        T2
-----------------------------------------------------
3   T1('abc', 'xyz', 'zzz')   T2('abc', 'xyz', 'zzz')

ORA-00932: inconsistent datatypes: expected UDT got TEST.T2
```

The documentation on the `MEMBER` condition states:

> The return value is TRUE if expr is equal to a member of the specified nested table or varray. The return value is NULL if expr is null or if the nested table is empty.

There is some explicit mention of varrays supporting these operations, but in most of the documentation, varrays are not mentioned. So, how can we write such operations with varrays? Here’s a list of translations of each nested table condition to an equivalent SQL expression for use with varrays.

### IS A SET condition

In SQL, everything is a (partially ordered) multiset by default. Sometimes, however, we want to work with sets, i.e. a special type of multiset that has no duplicate values. We can easily check whether nested tables are sets (or whether they aren’t):

```sql
-- Nested table version
SELECT * FROM t WHERE t1 IS A SET;

-- Varray version
SELECT *
FROM t
WHERE t2 IS NOT NULL
AND (SELECT count(*) FROM TABLE(t2))
  = (SELECT count(DISTINCT column_value) FROM TABLE(t2));
```

The `IS A SET` operation yields `UNKNOWN` if the nested table is `NULL`, so we have to take that into account as well. If it isn’t `NULL`, we can count the total values in the varray and compare that with the total distinct values in the varray.

The result is:

```
ID  T1                        T2
-----------------------------------------------------
2   T1()                      T2()
3   T1('abc', 'xyz', 'zzz')   T2('abc', 'xyz', 'zzz')
```

### IS EMPTY condition

This predicate needs no explanation. It can be written as such:

```sql
-- Nested table version
SELECT * FROM t WHERE t1 IS EMPTY;

-- Varray version
SELECT *
FROM t
WHERE t2 IS NOT NULL
AND NOT EXISTS (
  SELECT * FROM TABLE (t2)
);
```

The result being:

```
ID  T1                 T2
---------------------------------------
2   T1()               T2()
```

### MEMBER condition

This handy predicate can help check if a specific value is contained in a nested collection. It can be written as such:

```sql
-- Nested table version
SELECT * FROM t WHERE 'abc' MEMBER OF t1;

-- Varray version
SELECT *
FROM t
WHERE t2 IS NOT NULL
AND EXISTS (
  SELECT 1 FROM TABLE(t2) WHERE column_value = 'abc'
);
```

Yielding:

```
ID  T1                        T2
-----------------------------------------------------
3   T1('abc', 'xyz', 'zzz')   T2('abc', 'xyz', 'zzz')
```

### SUBMULTISET condition

Just like the previous MEMBER condition, this predicate can help check if specific values (more than one) are contained in a nested collection. This is a bit more tricky than the previous emulations. The MEMBER condition works the same way for sets and multisets, as we’re checking if exactly one element is contained in the (multi)set.

When working with multisets, duplicates are allowed, and in the case of the `SUBMULTISET` operation, the following can be observed:

```sql
-- Equal multisets
t1() SUBMULTISET OF t1();
t1('a', 'a') SUBMULTISET OF t1('a', 'a');

-- Subsets
t1('a') SUBMULTISET OF t1('a', 'a');

-- But this is not true
t1('a', 'a') SUBMULTISET OF t1('a');
```

When we omit the fact that nested collections can be multisets and pretend we’re working with sets only, then the emulation of the `SUBMULTISET` operator is relatively easy:

```sql
-- Nested table version
SELECT * FROM t WHERE t1('abc', 'xyz') SUBMULTISET OF t1;

-- Varray version
SELECT *
FROM t
WHERE t2 IS NOT NULL
AND EXISTS (
  SELECT 1 FROM TABLE(t2)
  WHERE column_value = 'abc'
  INTERSECT
  SELECT 1 FROM TABLE(t2)
  WHERE column_value = 'xyz'
);
```

Yielding, once more:

```
ID  T1                        T2
-----------------------------------------------------
3   T1('abc', 'xyz', 'zzz')   T2('abc', 'xyz', 'zzz')
```

If we’re really working with multisets, things are a bit more tricky:

```sql
-- Nested table version
SELECT * FROM t WHERE t1('dup', 'dup') SUBMULTISET OF t1;

-- Varray version
SELECT *
FROM t
WHERE t2 IS NOT NULL
AND NOT EXISTS (
  SELECT column_value, count(*)
  FROM TABLE (t2('dup', 'dup')) x
  GROUP BY column_value
  HAVING count(*) > (
    SELECT count(*)
    FROM TABLE (t2) y
    WHERE y.column_value = x.column_value
  )
);
```

Yielding:

```
ID  T1                        T2
-----------------------------------------------------
4   T1('dup', 'dup', 'dup')   T2('dup', 'dup', 'dup')
```

How does it work? In the `NOT EXISTS` correlated subquery, we’re counting the number of duplicate values in the potential `SUBMULTISET`, effectively turning that `SUBMULTISET` into a `SET` using the `GROUP BY` operation.

We’re then comparing that count value from the left operand with the corresponding count value from the right operand. If there is no value in the left operand whose number of occurrences is bigger than the number of occurrences of that value in the right operand, then the whole left operand is a `SUBMULTISET` of the right operand.

Cool, eh? We’ll talk about performance another time :-)

### MULTISET operators

Apart from the conditions above, there are also multiset operators for nested tables:

• MULTISET EXCEPT [ ALL | DISTINCT ]
• MULTISET INTERSECT [ ALL | DISTINCT ]
• MULTISET UNION [ ALL | DISTINCT ]

Notice how there are some differences to the ordinary set operators that can be used in `SELECT` statements. In particular:

• `EXCEPT` is used as defined in the standard, not `MINUS`
• `ALL` is supported on all three operators, not just on `UNION`
• `ALL` is the default, not `DISTINCT`

How can we work with these operators? Consider these queries:

```sql
SELECT id, t1 MULTISET EXCEPT t1('aaa', 'abc', 'dup', 'dup') r
FROM t;

SELECT id, t1 MULTISET EXCEPT ALL t1('aaa', 'abc', 'dup', 'dup') r
FROM t;
```

Both yielding:

```ID   R
---------------------
1    (null)
2    T1()
3    T1('xyz', 'zzz')
4    T1('dup')
```

With this operator, we’re removing each element of the right operand once from the left operand:

• `'aaa'` does not appear in the left operand, so nothing happens
• `'abc'` appears on row with ID = 3 and we remove it
• `'dup'` appears on row with ID = 4, 3 times, and we remove it twice, leaving one value

Conversely, when adding `DISTINCT`, we’ll get:

```SELECT id, t1 MULTISET EXCEPT DISTINCT t1('aaa', 'abc', 'dup') r
FROM t;

Yielding:

```ID   R
---------------------
1    (null)
2    T1()
3    T1('xyz', 'zzz')
4    T1()
```

The only difference is on the row with ID = 4, where all `'dup'` values were removed, regardless of how many there were on either side of the `MULTISET EXCEPT DISTINCT` operator.
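To make the `ALL` vs. `DISTINCT` semantics concrete, here's a small Java model of the two operators (method names are ours, not Oracle's): `EXCEPT ALL` removes each right-hand element at most once, `EXCEPT DISTINCT` removes every occurrence of it.

```java
import java.util.ArrayList;
import java.util.List;

public class MultisetExcept {

    // MULTISET EXCEPT ALL: remove each element of the right operand
    // at most once from the left operand
    static List<String> exceptAll(List<String> left, List<String> right) {
        List<String> result = new ArrayList<>(left);
        for (String v : right)
            result.remove(v); // removes a single occurrence, if present
        return result;
    }

    // MULTISET EXCEPT DISTINCT: remove all occurrences of every
    // value that appears in the right operand
    static List<String> exceptDistinct(List<String> left, List<String> right) {
        List<String> result = new ArrayList<>(left);
        result.removeAll(right);
        return result;
    }

    public static void main(String[] args) {
        List<String> row4 = List.of("dup", "dup", "dup");
        System.out.println(exceptAll(row4, List.of("aaa", "abc", "dup", "dup")));
        // [dup] -- 'dup' is removed twice, one occurrence remains
        System.out.println(exceptDistinct(row4, List.of("aaa", "abc", "dup")));
        // [] -- all duplicates removed
    }
}
```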

How to emulate this for varrays?

DISTINCT version

This is a bit easier, because we can now use `MINUS`:

```-- Nested table version
SELECT t1 MULTISET EXCEPT DISTINCT t1('aaa', 'abc', 'dup', 'dup')
FROM t;

-- Varray version
SELECT
  id,
  CASE
    WHEN t2 IS NULL THEN NULL
    ELSE
      CAST(MULTISET(
        SELECT column_value
        FROM TABLE (t2)
        MINUS
        SELECT column_value
        FROM TABLE (t2('aaa', 'abc', 'dup', 'dup'))
      ) AS t2)
  END r
FROM t;
```

Luckily, we can still cast a structural `MULTISET` type that we can obtain using the `MULTISET()` operator to a varray type. This greatly simplifies the task.

ALL version

If we want the `MULTISET EXCEPT` or `MULTISET EXCEPT ALL` semantics, things are trickier. Here’s a solution that resorts to using window functions, in order to turn a `MULTISET` back into a `SET`:

```-- Nested table version
SELECT t1 MULTISET EXCEPT ALL t1('aaa', 'abc', 'dup', 'dup')
FROM t;

-- Varray version
SELECT
  id,
  CASE
    WHEN t2 IS NULL THEN NULL
    ELSE
      CAST(MULTISET(
        SELECT column_value
        FROM (
          SELECT
            column_value,
            row_number() OVER (
              PARTITION BY column_value
              ORDER BY column_value) rn
          FROM TABLE (t2)
          MINUS
          SELECT
            column_value,
            row_number() OVER (
              PARTITION BY column_value
              ORDER BY column_value) rn
          FROM TABLE (t2('aaa', 'abc', 'dup', 'dup'))
        )
      ) AS t2)
  END r
FROM t;
```

How does this work? Let's look at what this `ROW_NUMBER()` evaluates to on each row. For this, we use `OUTER APPLY`:

```SELECT id, t2, column_value, rn
FROM t
OUTER APPLY (
  SELECT
    column_value,
    row_number() OVER (
      PARTITION BY column_value
      ORDER BY column_value) rn
  FROM TABLE (t2)
);
```

The result is:

```ID      T2                       COLUMN_VALUE  RN
-----------------------------------------------------
1       (null)                   (null)        (null)
2       T2()                     (null)        (null)
3       T2('abc', 'xyz', 'zzz')  abc           1
3       T2('abc', 'xyz', 'zzz')  xyz           1
3       T2('abc', 'xyz', 'zzz')  zzz           1
4       T2('dup', 'dup', 'dup')  dup           1
4       T2('dup', 'dup', 'dup')  dup           2
4       T2('dup', 'dup', 'dup')  dup           3
```

As can be seen, each duplicate value gets assigned a unique row number, due to the nature of how `ROW_NUMBER()` works (this property can be very useful for solving the gaps-and-islands problem; see trick #4).

Now that we turned our `(COLUMN_VALUE)` multiset into a `(COLUMN_VALUE, RN)` set (without duplicates), we can use `MINUS` again.
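The numbering step can be modelled in a few lines of Java (a sketch with our own naming, not an Oracle API): pair each value with its per-value counter, and the multiset becomes a set of distinct strings.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RowNumbers {

    // Attach a per-value counter to each element, like
    // ROW_NUMBER() OVER (PARTITION BY column_value ...):
    // ('dup', 'dup', 'dup') -> dup#1, dup#2, dup#3
    static List<String> numbered(List<String> values) {
        Map<String, Integer> seen = new HashMap<>();
        List<String> result = new ArrayList<>();
        for (String v : values)
            result.add(v + "#" + seen.merge(v, 1, Integer::sum));
        return result;
    }

    public static void main(String[] args) {
        System.out.println(numbered(List.of("dup", "dup", "dup")));
        // [dup#1, dup#2, dup#3] -- now a set without duplicates, so a
        // MINUS-style set difference removes exactly as many duplicates
        // as appear on the right operand
    }
}
```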

## MULTISET INTERSECT and MULTISET UNION

`MULTISET INTERSECT` works exactly the same way as `MULTISET EXCEPT`, with the same window function based emulation in the `MULTISET INTERSECT ALL` case. `MULTISET UNION` is simpler, because Oracle knows `UNION ALL`, so we do not need to resort to such trickery.
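To illustrate the `INTERSECT ALL` semantics outside the database, here's a hedged Java model of the same numbering trick (all names are ours, not Oracle's): tag each value with its occurrence number, set-intersect the tagged values, then strip the tags again.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MultisetIntersect {

    // Tag each value with its occurrence number so that the
    // multiset becomes a set of (value, n) pairs
    static List<String> numbered(List<String> values) {
        Map<String, Integer> seen = new HashMap<>();
        List<String> result = new ArrayList<>();
        for (String v : values)
            result.add(v + "#" + seen.merge(v, 1, Integer::sum));
        return result;
    }

    // MULTISET INTERSECT ALL: set-intersect the numbered values,
    // then strip the numbering again
    static List<String> intersectAll(List<String> left, List<String> right) {
        List<String> result = new ArrayList<>(numbered(left));
        result.retainAll(numbered(right));
        result.replaceAll(v -> v.substring(0, v.lastIndexOf('#')));
        return result;
    }

    public static void main(String[] args) {
        System.out.println(intersectAll(
            List.of("dup", "dup", "dup"), List.of("dup", "dup", "xyz")));
        // [dup, dup]
    }
}
```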

## Conclusion

Nested collections are a very powerful tool in Oracle SQL. Oracle knows two types of nested collections:

• Nested tables
• Varrays

Nested tables are trickier to maintain as you have to think of their storage more explicitly. Varrays can just be embedded into ordinary tables like any other column. But there’s a price to pay for using varrays. Oracle regrettably doesn’t support all of the above very useful multiset conditions and multiset operators.

Luckily, when you encounter a situation where you have varrays and cannot change that, you can still emulate each of the operators using more traditional SQL.

## Oracle’s OFFSET .. FETCH Can be Slower than Classic ROWNUM Filtering

One of Oracle 12c’s coolest features was the introduction of the SQL standard OFFSET .. FETCH clause, as we can now write things like:

```SELECT *
FROM film
ORDER BY film_id
FETCH FIRST 1 ROW ONLY
```

This is querying the Sakila database. Most other databases have had this clause (or a non-standard version of it) for ages, e.g. MySQL with `LIMIT`. For all the different LIMIT syntaxes, check out the jOOQ manual.

Implementation-wise, the Oracle folks chose to rewrite this clause as a simple window function filter. In principle, the following is executed behind the scenes (shown first in the convenient `QUALIFY` pseudo-syntax, which Oracle itself doesn't support, and then in standard syntax):

```SELECT *
FROM film
QUALIFY row_number() OVER (ORDER BY film_id) = 1
ORDER BY film_id
```

Standard syntax

```SELECT * -- Except rn
FROM (
  SELECT film.*, row_number() OVER (ORDER BY film_id) rn
  FROM film
) t
WHERE rn = 1
ORDER BY film_id
```

This definitely looks much better than the “old” approach using `ROWNUM` filtering, which many of us have written for years:

Legacy Oracle syntax

```SELECT t.*
FROM (
  SELECT *
  FROM film
  ORDER BY film_id
) t
WHERE ROWNUM = 1
```

What I don’t like about this “old” approach is that we’re relying on the `ORDER BY` clause of a derived table, which, in my opinion, should not be guaranteed to be retained in the outermost query (although, in Oracle, it is in this case).

So, having the SQL standard syntax is definitely good.

### What’s the problem?

Now, while the SQL transformation from `FETCH FIRST` to `ROW_NUMBER()` filtering is certainly correct, the execution plan doesn’t really make me happy. Consider the `ROWNUM` based query:

```---------------------------------------------------------
| Id  | Operation                     | Name    | Rows  |
---------------------------------------------------------
|   0 | SELECT STATEMENT              |         |       |
|*  1 |  COUNT STOPKEY                |         |       |
|   2 |   VIEW                        |         |     1 |
|   3 |    TABLE ACCESS BY INDEX ROWID| FILM    |  1000 |
|   4 |     INDEX FULL SCAN           | PK_FILM |     1 |
---------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter(ROWNUM=1)
```

And compare that to the `FETCH FIRST` query:

```-------------------------------------------------
| Id  | Operation                | Name | Rows  |
-------------------------------------------------
|   0 | SELECT STATEMENT         |      |       |
|*  1 |  VIEW                    |      |     1 |
|*  2 |   WINDOW SORT PUSHED RANK|      |  1000 |
|   3 |    TABLE ACCESS FULL     | FILM |  1000 |
-------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("from$_subquery$_002"."rowlimit_$$_rownumber"<=1)
2 - filter(ROW_NUMBER() OVER ( ORDER BY "FILM"."FILM_ID")<=1)
```

The cardinalities are both correct, but one of the queries seems to traverse the entire table to find the top `FILM_ID`, which the other query found in the index directly. A workaround would be to hint the number of rows to the `FETCH FIRST` query:

```SELECT /*+FIRST_ROWS(1)*/ *
FROM film
ORDER BY film_id
FETCH FIRST 1 ROW ONLY;
```

… in case of which we’ll get a similar plan as that of the `ROWNUM` filtering query:

```---------------------------------------------------------
| Id  | Operation                     | Name    | Rows  |
---------------------------------------------------------
|   0 | SELECT STATEMENT              |         |       |
|*  1 |  VIEW                         |         |     1 |
|*  2 |   WINDOW NOSORT STOPKEY       |         |     1 |
|   3 |    TABLE ACCESS BY INDEX ROWID| FILM    |  1000 |
|   4 |     INDEX FULL SCAN           | PK_FILM |     1 |
---------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("from$_subquery$_002"."rowlimit_$$_rownumber"<=1)
2 - filter(ROW_NUMBER() OVER ( ORDER BY "FILM"."FILM_ID")<=1)
```

Measuring this using our measurement technique yields quite devastating results:

```Run 1, Statement 1 :  1.11230  -- ROWNUM
Run 1, Statement 2 :  1.15508  -- FETCH FIRST +FIRST_ROWS
Run 1, Statement 3 : 46.92781  -- FETCH FIRST

Run 2, Statement 1 :  1.68449
Run 2, Statement 2 :  1.99465
Run 2, Statement 3 : 47.32620

Run 3, Statement 1 :  1.10428
Run 3, Statement 2 :  1.13904
Run 3, Statement 3 : 68.06417

Run 4, Statement 1 :  1
Run 4, Statement 2 :  6.00535
Run 4, Statement 3 : 44.88235
```

The above results don’t show actual time measurements, but numbers relative to the fastest execution (1).
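The normalisation behind these relative figures is straightforward: divide every measured time by the fastest one, so the winner always reads 1. A small Java sketch (the raw timings here are made up):

```java
import java.util.Arrays;

public class RelativeTimings {

    // Normalise raw timings so that the fastest one maps to 1,
    // hiding absolute times while keeping the ratios comparable
    static double[] relative(double[] timings) {
        double fastest = Arrays.stream(timings).min().getAsDouble();
        return Arrays.stream(timings).map(t -> t / fastest).toArray();
    }

    public static void main(String[] args) {
        // Hypothetical raw timings in seconds
        double[] r = relative(new double[] { 0.5, 1.0, 23.5 });
        System.out.println(Arrays.toString(r)); // [1.0, 2.0, 47.0]
    }
}
```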

There is a 40x performance difference between the approaches, with `ROWNUM` based filtering being the fastest, `FETCH FIRST` plus `+FIRST_ROWS` hint being slightly slower, and “naked” `FETCH FIRST` being terribly slow, when repeating the measurement 5x and running each query 10000x on my machine on Oracle 12.2.0.1.0 in Docker.

Things get worse when joining tables. Let’s run a query that fetches the first customer and their full address information:

```-- Legacy Oracle syntax
SELECT t.*
FROM (
  SELECT *
  FROM customer
  JOIN city USING (city_id)
  JOIN country USING (country_id)
  ORDER BY customer_id
) t
WHERE ROWNUM = 1;

-- Standard syntax with hint
SELECT /*+FIRST_ROWS(1)*/ *
FROM customer
JOIN city USING (city_id)
JOIN country USING (country_id)
ORDER BY customer_id
FETCH FIRST 1 ROW ONLY;

-- Standard syntax without hint
SELECT *
FROM customer
JOIN city USING (city_id)
JOIN country USING (country_id)
ORDER BY customer_id
FETCH FIRST 1 ROW ONLY;
```

All three queries are equivalent; they produce the same result. Yet, the plans are very different.

Oracle’s legacy syntax

```-----------------------------------------------------------------
| Id  | Operation                         | Name        | Rows  |
-----------------------------------------------------------------
|   0 | SELECT STATEMENT                  |             |       |
|*  1 |  COUNT STOPKEY                    |             |       |
|   2 |   VIEW                            |             |     1 |
|   3 |    NESTED LOOPS                   |             |     1 |
|   4 |     NESTED LOOPS                  |             |     1 |
|   5 |      NESTED LOOPS                 |             |     1 |
|   6 |       NESTED LOOPS                |             |     1 |
|   7 |        TABLE ACCESS BY INDEX ROWID| CUSTOMER    |   302 |
|   8 |         INDEX FULL SCAN           | PK_CUSTOMER |     1 |
|   9 |        TABLE ACCESS BY INDEX ROWID| ADDRESS     |     1 |
|* 10 |         INDEX UNIQUE SCAN         | PK_ADDRESS  |     1 |
|  11 |       TABLE ACCESS BY INDEX ROWID | CITY        |     1 |
|* 12 |        INDEX UNIQUE SCAN          | PK_CITY     |     1 |
|* 13 |      INDEX UNIQUE SCAN            | PK_COUNTRY  |     1 |
|  14 |     TABLE ACCESS BY INDEX ROWID   | COUNTRY     |     1 |
-----------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter(ROWNUM=1)
13 - access("CITY"."COUNTRY_ID"="COUNTRY"."COUNTRY_ID")
```

We’re seeing tons of nested loop joins, because that’s what we’d expect given the low cardinalities imposed by the `COUNT STOPKEY` operation.

SQL standard syntax with hint

```-----------------------------------------------------------------
| Id  | Operation                         | Name        | Rows  |
-----------------------------------------------------------------
|   0 | SELECT STATEMENT                  |             |       |
|*  1 |  VIEW                             |             |     1 |
|*  2 |   WINDOW NOSORT STOPKEY           |             |     1 |
|   3 |    NESTED LOOPS                   |             |     1 |
|   4 |     NESTED LOOPS                  |             |     1 |
|   5 |      NESTED LOOPS                 |             |     1 |
|   6 |       NESTED LOOPS                |             |     1 |
|   7 |        TABLE ACCESS BY INDEX ROWID| CUSTOMER    |   302 |
|   8 |         INDEX FULL SCAN           | PK_CUSTOMER |     1 |
|   9 |        TABLE ACCESS BY INDEX ROWID| ADDRESS     |     1 |
|* 10 |         INDEX UNIQUE SCAN         | PK_ADDRESS  |     1 |
|  11 |       TABLE ACCESS BY INDEX ROWID | CITY        |     1 |
|* 12 |        INDEX UNIQUE SCAN          | PK_CITY     |     1 |
|* 13 |      INDEX UNIQUE SCAN            | PK_COUNTRY  |     1 |
|  14 |     TABLE ACCESS BY INDEX ROWID   | COUNTRY     |     1 |
-----------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("from$_subquery$_008"."rowlimit_$$_rownumber"<=1)
2 - filter(ROW_NUMBER() OVER ( ORDER BY "CUSTOMER"."CUSTOMER_ID")<=1)
13 - access("CITY"."COUNTRY_ID"="COUNTRY"."COUNTRY_ID")
```

Quite a similar plan

SQL standard syntax without hint

```---------------------------------------------------------------
| Id  | Operation                        | Name       | Rows  |
---------------------------------------------------------------
|   0 | SELECT STATEMENT                 |            |       |
|*  1 |  VIEW                            |            |     1 |
|*  2 |   WINDOW SORT PUSHED RANK        |            |   599 |
|*  3 |    HASH JOIN                     |            |   599 |
|   4 |     TABLE ACCESS FULL            | CUSTOMER   |   599 |
|*  5 |     HASH JOIN                    |            |   603 |
|   6 |      MERGE JOIN                  |            |   600 |
|   7 |       TABLE ACCESS BY INDEX ROWID| COUNTRY    |   109 |
|   8 |        INDEX FULL SCAN           | PK_COUNTRY |   109 |
|*  9 |       SORT JOIN                  |            |   600 |
|  10 |        TABLE ACCESS FULL         | CITY       |   600 |
|  11 |      TABLE ACCESS FULL           | ADDRESS    |   603 |
---------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("from$_subquery$_008"."rowlimit_$$_rownumber"<=1)
2 - filter(ROW_NUMBER() OVER ( ORDER BY "CUSTOMER"."CUSTOMER_ID")<=1)
9 - access("CITY"."COUNTRY_ID"="COUNTRY"."COUNTRY_ID")
filter("CITY"."COUNTRY_ID"="COUNTRY"."COUNTRY_ID")
```

Oops. Many HASH JOIN and MERGE JOIN and TABLE ACCESS FULL operations, as well as a WINDOW SORT operation, rather than a WINDOW NOSORT operation. That cannot be good. Let’s measure this again.

```Run 1, Statement 1 :  1.26157  -- ROWNUM
Run 1, Statement 2 :  1.32394  -- FETCH FIRST +FIRST_ROWS
Run 1, Statement 3 : 66.97384  -- FETCH FIRST

Run 2, Statement 1 :  1.31992
Run 2, Statement 2 :  1.76459
Run 2, Statement 3 : 72.76056

Run 3, Statement 1 :  1
Run 3, Statement 2 :  1.36419
Run 3, Statement 3 : 74.06439

Run 4, Statement 1 :  1.08451
Run 4, Statement 2 :  1.64990
Run 4, Statement 3 : 66.83702
```

The difference is even worse: Factor 60x. And make no mistake, these are trivial data set sizes. As we can see in the last execution plan, the cardinality of the `CUSTOMER` table is 599. This can get much worse for larger tables.

### Why even use this syntax?

The SQL standard syntax is much nicer to write, and it allows for nice TOP-N style queries using `CROSS APPLY` / `OUTER APPLY` or `LATERAL`, e.g. to find the TOP 3 longest film titles per actor:

```SELECT actor_id, first_name, last_name, title
FROM actor a
OUTER APPLY (
  SELECT /*+FIRST_ROWS(1)*/ title
  FROM film f
  JOIN film_actor fa USING (film_id)
  WHERE fa.actor_id = a.actor_id
  ORDER BY length(title) DESC
  FETCH FIRST 3 ROWS ONLY
) t
ORDER BY actor_id, length(title) DESC;
```

This would have been much harder with the `ROWNUM` approach. In older Oracle versions, it was even impossible, because we could not reference `A.ACTOR_ID` from doubly nested derived tables / correlated subqueries. Luckily, this is no longer the case. So, syntactically, it is definitely a much better way to do paginated queries or TOP-N queries, but the price is very high.

### Disclaimer

The optimiser might make much better choices when:

The base data set is much bigger than the above tables of 600 to 1000 rows.

Indeed, when fetching the first row from the `PAYMENT` table (with ~16000 rows), the difference becomes smaller or even nonexistent:

```Run 1, Statement 1 : 1        -- ROWNUM
Run 1, Statement 2 : 1.72246  -- FETCH FIRST +FIRST_ROWS
Run 1, Statement 3 : 1.76165  -- FETCH FIRST

Run 2, Statement 1 : 1.03919
Run 2, Statement 2 : 1.78284
Run 2, Statement 3 : 1.75742

Run 3, Statement 1 : 1.2553
Run 3, Statement 2 : 1.86441
Run 3, Statement 3 : 2.39089

Run 4, Statement 1 : 2.28814
Run 4, Statement 2 : 3.02436
Run 4, Statement 3 : 2.39407

Run 5, Statement 1 : 1.31462
Run 5, Statement 2 : 2.27225
Run 5, Statement 3 : 1.70975
```

As can be seen, the measurement errors start to outweigh the difference in performance, so the difference isn’t really as significant anymore.

The limit is not 1 or 3, but 10 or 50

When fetching the top 50 rows from the joined customer/address query, the measurements actually changed quite a bit. Suddenly, the ROWNUM query wasn’t optimal anymore and behaved like the un-hinted FETCH FIRST query. Adding a `/*+FIRST_ROWS(1)*/` hint (not `/*+FIRST_ROWS(50)*/` !) did help:

```Run 1, Statement 1 : 1.00545  -- ROWNUM +FIRST_ROWS
Run 1, Statement 2 : 7.24842  -- ROWNUM
Run 1, Statement 3 : 1.35691  -- FETCH FIRST +FIRST_ROWS
Run 1, Statement 4 : 7.15264  -- FETCH FIRST

Run 2, Statement 1 : 1.08054
Run 2, Statement 2 : 6.51922
Run 2, Statement 3 : 1.35960
Run 2, Statement 4 : 7.94527

Run 3, Statement 1 : 1.02824
Run 3, Statement 2 : 7.16228
Run 3, Statement 3 : 1.19702
Run 3, Statement 4 : 7.55008

Run 4, Statement 1 : 1.08364
Run 4, Statement 2 : 6.66652
Run 4, Statement 3 : 1.18559
Run 4, Statement 4 : 7.36938

Run 5, Statement 1 : 1
Run 5, Statement 2 : 6.89051
Run 5, Statement 3 : 1.24211
Run 5, Statement 4 : 7.15167
```

### Conclusion

What we’ve seen here is a bit unfortunate. For some cases, one approach is better than the other, performance wise. For others, it’s vice versa. Paginated queries are still a bit tricky for Oracle to get right and we have to measure things explicitly.

### Workaround in jOOQ

Until this is fixed by Oracle, if you’re using jOOQ, you can use the SQLDialect.ORACLE11G dialect to run classic `ROWNUM` filtering queries also on Oracle 12c. Alternatively, a future version of jOOQ will optionally generate a `+FIRST_ROWS` hint with a reasonably approximated cardinality: https://github.com/jOOQ/jOOQ/issues/5793

## How to Run a Bulk INSERT .. RETURNING Statement With Oracle and JDBC

When inserting records into SQL databases, we often want to fetch back generated IDs and possibly other trigger, sequence, or default generated values. Let’s assume we have the following table:

```-- DB2
CREATE TABLE x (
  i INT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  j VARCHAR(50),
  k DATE DEFAULT CURRENT_DATE
);

-- PostgreSQL
CREATE TABLE x (
  i SERIAL4 PRIMARY KEY,
  j VARCHAR(50),
  k DATE DEFAULT CURRENT_DATE
);

-- Oracle
CREATE TABLE x (
  i INT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  j VARCHAR2(50),
  k DATE DEFAULT SYSDATE
);
```

DB2

DB2 is currently the only database supported by jOOQ that implements the SQL standard syntax according to which we can SELECT from any INSERT statement, including:

```SELECT *
FROM FINAL TABLE (
  INSERT INTO x (j)
  VALUES ('a'), ('b'), ('c')
);
```

The above query returns:

```I |J |K          |
--|--|-----------|
1 |a |2018-05-02 |
2 |b |2018-05-02 |
3 |c |2018-05-02 |
```

Pretty neat! This query can simply be run like any other query in JDBC, and you don’t have to go through any hassles.

PostgreSQL and Firebird

These databases have a vendor specific extension that does the same thing, almost as powerful:

```-- Simple INSERT .. RETURNING query
INSERT INTO x (j)
VALUES ('a'), ('b'), ('c')
RETURNING *;

-- If you want to do more fancy stuff
WITH t AS (
  INSERT INTO x (j)
  VALUES ('a'), ('b'), ('c')
  RETURNING *
)
SELECT * FROM t;
```

Both syntaxes work equally well. The latter is just as powerful as DB2’s, where the result of an insertion (or update, delete, or merge) can be joined to other tables. Again, no problem with JDBC.

Oracle

In Oracle, this is a bit more tricky. The Oracle SQL language doesn’t have an equivalent of DB2’s `FINAL TABLE (DML statement)`. The Oracle PL/SQL language, however, does support the same syntax as PostgreSQL and Firebird. This is perfectly valid PL/SQL:

```-- Create a few auxiliary types first
CREATE TYPE t_i AS TABLE OF NUMBER(38);
/
CREATE TYPE t_j AS TABLE OF VARCHAR2(50);
/
CREATE TYPE t_k AS TABLE OF DATE;
/

DECLARE
  -- These are the input values
  in_j t_j := t_j('a', 'b', 'c');

  out_i t_i;
  out_j t_j;
  out_k t_k;

  c1 SYS_REFCURSOR;
  c2 SYS_REFCURSOR;
  c3 SYS_REFCURSOR;
BEGIN

  -- Use PL/SQL's FORALL command to bulk insert the
  -- input array type and bulk return the results
  FORALL i IN 1 .. in_j.COUNT
    INSERT INTO x (j)
    VALUES (in_j(i))
    RETURNING i, j, k
    BULK COLLECT INTO out_i, out_j, out_k;

  -- Fetch the results and display them to the console
  OPEN c1 FOR SELECT * FROM TABLE(out_i);
  OPEN c2 FOR SELECT * FROM TABLE(out_j);
  OPEN c3 FOR SELECT * FROM TABLE(out_k);

  dbms_sql.return_result(c1);
  dbms_sql.return_result(c2);
  dbms_sql.return_result(c3);
END;
/
```

A bit verbose, but it has the same effect. Now, from JDBC:

```try (Connection con = DriverManager.getConnection(url, props);
     Statement s = con.createStatement();

     // The statement itself is much more simple as we can
     // use OUT parameters to collect results into, so no
     // auxiliary local variables and cursors are needed
     CallableStatement c = con.prepareCall(
         "DECLARE "
       + "  v_j t_j := ?; "
       + "BEGIN "
       + "  FORALL j IN 1 .. v_j.COUNT "
       + "    INSERT INTO x (j) VALUES (v_j(j)) "
       + "    RETURNING i, j, k "
       + "    BULK COLLECT INTO ?, ?, ?; "
       + "END;")) {

    try {

        // Create the table and the auxiliary types
        s.execute(
            "CREATE TABLE x ("
          + "  i INT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,"
          + "  j VARCHAR2(50),"
          + "  k DATE DEFAULT SYSDATE"
          + ")");
        s.execute("CREATE TYPE t_i AS TABLE OF NUMBER(38)");
        s.execute("CREATE TYPE t_j AS TABLE OF VARCHAR2(50)");
        s.execute("CREATE TYPE t_k AS TABLE OF DATE");

        // Bind input and output arrays
        c.setArray(1, ((OracleConnection) con).createARRAY(
            "T_J", new String[] { "a", "b", "c" })
        );
        c.registerOutParameter(2, Types.ARRAY, "T_I");
        c.registerOutParameter(3, Types.ARRAY, "T_J");
        c.registerOutParameter(4, Types.ARRAY, "T_K");

        // Execute, fetch, and display output arrays
        c.execute();
        Object[] i = (Object[]) c.getArray(2).getArray();
        Object[] j = (Object[]) c.getArray(3).getArray();
        Object[] k = (Object[]) c.getArray(4).getArray();

        System.out.println(Arrays.asList(i));
        System.out.println(Arrays.asList(j));
        System.out.println(Arrays.asList(k));
    }
    finally {
        try {
            s.execute("DROP TYPE t_i");
            s.execute("DROP TYPE t_j");
            s.execute("DROP TYPE t_k");
            s.execute("DROP TABLE x");
        }
        catch (SQLException ignore) {}
    }
}
```

The above code will display:

```[1, 2, 3]
[a, b, c]
[2018-05-02 10:40:34.0, 2018-05-02 10:40:34.0, 2018-05-02 10:40:34.0]
```

Exactly what we wanted.

### jOOQ support

A future version of jOOQ will emulate the above PL/SQL block from the jOOQ `INSERT .. RETURNING` statement:

```DSL.using(configuration)
   .insertInto(X)
   .columns(X.J)
   .values("a")
   .values("b")
   .values("c")
   .returning(X.I, X.J, X.K)
   .fetch();
```

This will correctly emulate the query for all of the databases that natively support the syntax. In the case of Oracle, since jOOQ can neither create nor assume any SQL TABLE types, PL/SQL types from the `DBMS_SQL` package will be used.

The relevant issue is here: https://github.com/jOOQ/jOOQ/issues/5863

## The Performance Difference Between SQL Row-by-row Updating, Batch Updating, and Bulk Updating

Something that has been said many times, but needs constant repeating until every developer is aware of the importance of this is the performance difference between row-by-row updating and bulk updating. If you cannot guess which one will be much faster, remember that row-by-row kinda rhymes with slow-by-slow (hint hint).

Disclaimer: This article will discuss only non-concurrent updates, which are much easier to reason about. In a concurrent update situation, a lot of additional factors will add complexity to the problem, including the locking strategy, transaction isolation levels, or simply how the database vendor implements things in detail. For the sake of simplicity, I’ll assume no concurrent updates are being made.

### Example query

Let’s say we have a simple table for our blog posts (using Oracle syntax, but the effect is the same on all databases):

```CREATE TABLE post (
  id INT NOT NULL PRIMARY KEY,
  text VARCHAR2(1000) NOT NULL,
  archived NUMBER(1) NOT NULL CHECK (archived IN (0, 1)),
  creation_date DATE NOT NULL
);

CREATE INDEX post_creation_date_i ON post (creation_date);
```

Now, let’s add some 10000 rows:

```INSERT INTO post
SELECT
  level,
  lpad('a', 1000, 'a'), -- filler for the NOT NULL text column
  0 AS archived,
  DATE '2017-01-01' + (level / 100)
FROM dual
CONNECT BY level <= 10000;

EXEC dbms_stats.gather_table_stats('TEST', 'POST');
```

Now imagine, we want to update this table and set all posts to `ARCHIVED = 1` if they are from last year, e.g. `CREATION_DATE < DATE '2018-01-01'`. There are various ways to do this, but you should have built an intuition that doing the update in one single `UPDATE` statement is probably better than looping over each individual row and updating each individual row explicitly. Right?

Right.

Then, why do we keep doing it?

### Does it matter?

The best way to find out is to benchmark. I’m doing two benchmarks for this:

1. One that is run in PL/SQL, showing the performance difference between different approaches that are available to PL/SQL (namely looping, the `FORALL` syntax, and a single bulk `UPDATE`)
2. One that is run in Java, doing JDBC calls, showing the performance difference between different approaches available to Java (namely looping, caching `PreparedStatement` but still looping, batching, and a single bulk `UPDATE`)

### Benchmarking PL/SQL

The code of the benchmark can be found in this gist. I will also include it at the bottom of this blog post. The results are:

```Run 1, Statement 1 : .01457 (avg : .0098)
Run 1, Statement 2 : .0133  (avg : .01291)
Run 1, Statement 3 : .02351 (avg : .02519)
Run 2, Statement 1 : .00882 (avg : .0098)
Run 2, Statement 2 : .01159 (avg : .01291)
Run 2, Statement 3 : .02348 (avg : .02519)
Run 3, Statement 1 : .01012 (avg : .0098)
Run 3, Statement 2 : .01453 (avg : .01291)
Run 3, Statement 3 : .02544 (avg : .02519)
Run 4, Statement 1 : .00799 (avg : .0098)
Run 4, Statement 2 : .01346 (avg : .01291)
Run 4, Statement 3 : .02958 (avg : .02519)
Run 5, Statement 1 : .00749 (avg : .0098)
Run 5, Statement 2 : .01166 (avg : .01291)
Run 5, Statement 3 : .02396 (avg : .02519)
```

The difference between Statement 1 and 3 is a factor of 2.5x

The listing shows the time it takes for each statement type to complete, each time updating 3649 out of 10000 rows. The winner is:

Statement 1, running a bulk update

It looks like this:

```UPDATE post
SET archived = 1
WHERE archived = 0 AND creation_date < DATE '2018-01-01';
```

Runner-up (not too far away) is:

Statement 2, using the PL/SQL FORALL syntax

It works like this:

```DECLARE
  TYPE post_ids_t IS TABLE OF post.id%TYPE;
  v_post_ids post_ids_t;
BEGIN
  SELECT id
  BULK COLLECT INTO v_post_ids
  FROM post
  WHERE archived = 0 AND creation_date < DATE '2018-01-01';

  FORALL i IN 1 .. v_post_ids.count
    UPDATE post
    SET archived = 1
    WHERE id = v_post_ids(i);
END;
```

Loser (by a factor of 2.5x on our specific data set) is:

Statement 3, using an ordinary LOOP and running row-by-row updates

```FOR rec IN (
  SELECT id
  FROM post
  WHERE archived = 0 AND creation_date < DATE '2018-01-01'
) LOOP
  UPDATE post
  SET archived = 1
  WHERE id = rec.id;
END LOOP;
```

It does not really come as a surprise. We’re switching between the PL/SQL engine and the SQL engine many, many times, and instead of running through the `POST` table only once in `O(N)` time, we’re looking up individual ID values in `O(log N)` time, N times, so the complexity went from

```O(N) -> O(N log N)
```

We’d get far worse results for larger tables!
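To put rough numbers on this for the 10000 rows above: a single pass does on the order of N units of work, while N individual index lookups cost about N · log₂(N). A back-of-the-envelope sketch (the work units are, of course, a simplification):

```java
public class Complexity {

    // Approximate work of a single bulk update: one full pass
    static long bulkWork(long n) {
        return n;
    }

    // Approximate work of N index lookups of depth ~log2(N) each
    static long rowByRowWork(long n) {
        return (long) (n * (Math.log(n) / Math.log(2)));
    }

    public static void main(String[] args) {
        long n = 10_000;
        System.out.println(bulkWork(n));     // 10000
        System.out.println(rowByRowWork(n)); // ~132877, i.e. ~13x more
    }
}
```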

### What about doing this from Java?

The difference is much more drastic if each call to the SQL engine has to be done over the network from another process. Again, the benchmark code is available from a gist, and I will paste it to the end of this blog post as well.

The result is (same time unit):

```Run 0, Statement 1: PT4.546S
Run 0, Statement 2: PT3.52S
Run 0, Statement 3: PT0.144S
Run 0, Statement 4: PT0.028S
Run 1, Statement 1: PT3.712S
Run 1, Statement 2: PT3.185S
Run 1, Statement 3: PT0.138S
Run 1, Statement 4: PT0.025S
Run 2, Statement 1: PT3.481S
Run 2, Statement 2: PT3.007S
Run 2, Statement 3: PT0.122S
Run 2, Statement 4: PT0.026S
Run 3, Statement 1: PT3.518S
Run 3, Statement 2: PT3.077S
Run 3, Statement 3: PT0.113S
Run 3, Statement 4: PT0.027S
Run 4, Statement 1: PT3.54S
Run 4, Statement 2: PT2.94S
Run 4, Statement 3: PT0.123S
Run 4, Statement 4: PT0.03S
```

The difference between Statement 1 and 4 is a factor of 100x !!

So, who’s winning? Again (by far):

Statement 4, running the bulk update

In fact, the time is not too far away from the time taken by PL/SQL. With larger data sets being updated, the two results will converge. The code is:

```try (Statement s = c.createStatement()) {
    s.executeUpdate(
        "UPDATE post\n" +
        "SET archived = 1\n" +
        "WHERE archived = 0\n" +
        "AND creation_date < DATE '2018-01-01'\n");
}
```

Followed by the not that much worse (but still 3.5x worse):

Statement 3, running the batch update

Batching can be compared to PL/SQL’s `FORALL` statement. While we’re running individual row-by-row updates, we’re sending all the update statements in one batch to the SQL engine. This does save a lot of time on the network and all the layers in between.

The code looks like this:

```try (Statement s = c.createStatement();
     ResultSet rs = s.executeQuery(
         "SELECT id FROM post WHERE archived = 0\n"
       + "AND creation_date < DATE '2018-01-01'"
     );
     PreparedStatement u = c.prepareStatement(
         "UPDATE post SET archived = 1 WHERE id = ?"
     )) {

    while (rs.next()) {
        u.setInt(1, rs.getInt(1));

        // Queue the update; without this call, executeBatch()
        // would have nothing to run
        u.addBatch();
    }

    u.executeBatch();
}
```

Followed by the losers:

Statement 1 and 2, running row by row updates

The difference between statement 1 and 2 is that 2 caches the `PreparedStatement`, which allows for reusing some resources. This can be a good thing, but didn’t have a very significant effect in our case, compared to the batch / bulk alternatives. The code is:

```// Statement 1:
try (Statement s = c.createStatement();
     ResultSet rs = s.executeQuery(
         "SELECT id FROM post\n"
       + "WHERE archived = 0\n"
       + "AND creation_date < DATE '2018-01-01'"
     )) {

    while (rs.next()) {
        try (PreparedStatement u = c.prepareStatement(
            "UPDATE post SET archived = 1 WHERE id = ?"
        )) {
            u.setInt(1, rs.getInt(1));
            u.executeUpdate();
        }
    }
}

// Statement 2:
try (Statement s = c.createStatement();
     ResultSet rs = s.executeQuery(
         "SELECT id FROM post\n"
       + "WHERE archived = 0\n"
       + "AND creation_date < DATE '2018-01-01'"
     );
     PreparedStatement u = c.prepareStatement(
         "UPDATE post SET archived = 1 WHERE id = ?"
     )) {

    while (rs.next()) {
        u.setInt(1, rs.getInt(1));
        u.executeUpdate();
    }
}
```

### Conclusion

As shown previously on this blog, there is a significant cost of JDBC server roundtrips, which can be seen in the JDBC benchmark. This cost is much more severe if we unnecessarily create many server roundtrips for a task that could be done in a single roundtrip, namely by using a SQL bulk `UPDATE` statement.

This is not only true for updates, but also for all the other statements, including `SELECT`, `DELETE`, `INSERT`, and `MERGE`. If doing everything in a single statement isn’t possible due to the limitations of SQL, we can still save roundtrips by grouping statements in a block, either by using an anonymous block in databases that support them:

```BEGIN
statement1;
statement2;
statement3;
END;
```

(you can easily send these anonymous blocks over JDBC, as well!)

Or, by emulating anonymous blocks using the JDBC batch API (has its limitations), or by writing stored procedures.
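To make the anonymous-block idea concrete, here is a minimal sketch of how such a block could be assembled on the client before being sent to the server in a single call. The `block` helper and the placeholder statement names are assumptions for illustration; the resulting string is what you would pass to `Statement.execute()` on a database that supports anonymous blocks.

```java
import java.util.List;

public class AnonymousBlock {

    // Wrap a list of SQL statements in one BEGIN .. END; block, so that
    // all of them travel to the server in a single roundtrip.
    static String block(List<String> statements) {
        StringBuilder sb = new StringBuilder("BEGIN\n");
        for (String statement : statements)
            sb.append("  ").append(statement).append(";\n");
        return sb.append("END;").toString();
    }

    public static void main(String[] args) {
        // On a live connection, this string could be sent via
        // Statement.execute(block) in one roundtrip
        String block = block(List.of(
            "statement1",
            "statement2",
            "statement3"
        ));
        System.out.println(block);
    }
}
```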

The performance gain is not always worth the trouble of moving logic from the client to the server, but very often (as in the above case), the move is a no-brainer and there’s absolutely no reason against it.

So, remember: Stop doing row-by-row (slow-by-slow) operations when you could run the same operation in bulk, in a single SQL statement.

Hint: Always know what your ORM (if you’re using one) is doing, because the ORM can help you with automatic batching / bulking in many cases. But it often cannot, or it is too difficult to make it do so, so resorting to SQL is the way to go.

### Code

PL/SQL benchmark

```SET SERVEROUTPUT ON

DROP TABLE post;

CREATE TABLE post (
id INT NOT NULL PRIMARY KEY,
text VARCHAR2(1000) NOT NULL,
archived NUMBER(1) NOT NULL CHECK (archived IN (0, 1)),
creation_date DATE NOT NULL
);

CREATE INDEX post_creation_date_i ON post (creation_date);

ALTER SYSTEM FLUSH SHARED_POOL;
ALTER SYSTEM FLUSH BUFFER_CACHE;

CREATE TABLE results (
run     NUMBER(2),
stmt    NUMBER(2),
elapsed NUMBER
);

DECLARE
v_ts TIMESTAMP WITH TIME ZONE;

PROCEDURE reset_post IS
BEGIN
EXECUTE IMMEDIATE 'TRUNCATE TABLE post';
INSERT INTO post
SELECT
level AS id,
lpad('a', 1000, 'a') AS text,
0 AS archived,
DATE '2017-01-01' + (level / 100) AS creation_date
FROM dual
CONNECT BY level <= 10000;
dbms_stats.gather_table_stats('TEST', 'POST');
END reset_post;
BEGIN

-- Repeat the whole benchmark several times to avoid warmup penalty
FOR r IN 1..5 LOOP

reset_post;
v_ts := SYSTIMESTAMP;

UPDATE post
SET archived = 1
WHERE archived = 0 AND creation_date < DATE '2018-01-01';

INSERT INTO results VALUES (r, 1, SYSDATE + ((SYSTIMESTAMP - v_ts) * 86400) - SYSDATE);

reset_post;
v_ts := SYSTIMESTAMP;

DECLARE
TYPE post_ids_t IS TABLE OF post.id%TYPE;
v_post_ids post_ids_t;
BEGIN
SELECT id
BULK COLLECT INTO v_post_ids
FROM post
WHERE archived = 0 AND creation_date < DATE '2018-01-01';

FORALL i IN 1 .. v_post_ids.count
UPDATE post
SET archived = 1
WHERE id = v_post_ids(i);
END;

INSERT INTO results VALUES (r, 2, SYSDATE + ((SYSTIMESTAMP - v_ts) * 86400) - SYSDATE);

reset_post;
v_ts := SYSTIMESTAMP;

FOR rec IN (
SELECT id
FROM post
WHERE archived = 0 AND creation_date < DATE '2018-01-01'
) LOOP
UPDATE post
SET archived = 1
WHERE id = rec.id;
END LOOP;

INSERT INTO results VALUES (r, 3, SYSDATE + ((SYSTIMESTAMP - v_ts) * 86400) - SYSDATE);
END LOOP;

FOR rec IN (
SELECT
run, stmt,
CAST(elapsed AS NUMBER(10, 5)) ratio,
CAST(AVG(elapsed) OVER (PARTITION BY stmt) AS NUMBER(10, 5)) avg_ratio
FROM results
ORDER BY run, stmt
)
LOOP
dbms_output.put_line('Run ' || rec.run ||
', Statement ' || rec.stmt ||
' : ' || rec.ratio || ' (avg : ' || rec.avg_ratio || ')');
END LOOP;

dbms_output.put_line('');
dbms_output.put_line('https://www.jooq.org/benchmark');
END;
/

DROP TABLE results;
```

JDBC benchmark

```import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.time.Duration;
import java.time.Instant;
import java.util.Properties;

public class OracleUpdate {

public static void main(String[] args) throws Exception {
Class.forName("oracle.jdbc.OracleDriver");

String url = "jdbc:oracle:thin:@192.168.99.100:1521:ORCLCDB";
String user = "TEST";

Properties properties = new Properties();
properties.setProperty("user", user);

try (Connection c = DriverManager.getConnection(url, properties)) {
for (int i = 0; i < 5; i++) {
Instant ts;

resetPost(c);
ts = Instant.now();

try (Statement s = c.createStatement();
ResultSet rs = s.executeQuery(
"SELECT id FROM post WHERE archived = 0 AND creation_date < DATE '2018-01-01'"
)) {

while (rs.next()) {
try (PreparedStatement u = c.prepareStatement(
"UPDATE post SET archived = 1 WHERE id = ?"
)) {
u.setInt(1, rs.getInt(1));
u.executeUpdate();
}
}
}

System.out.println("Run " + i + ", Statement 1: " + Duration.between(ts, Instant.now()));

resetPost(c);
ts = Instant.now();

try (Statement s = c.createStatement();
ResultSet rs = s.executeQuery(
"SELECT id FROM post WHERE archived = 0 AND creation_date < DATE '2018-01-01'"
);
PreparedStatement u = c.prepareStatement(
"UPDATE post SET archived = 1 WHERE id = ?"
)) {

while (rs.next()) {
u.setInt(1, rs.getInt(1));
u.executeUpdate();
}
}

System.out.println("Run " + i + ", Statement 2: " + Duration.between(ts, Instant.now()));

resetPost(c);
ts = Instant.now();

try (Statement s = c.createStatement();
ResultSet rs = s.executeQuery(
"SELECT id FROM post WHERE archived = 0 AND creation_date < DATE '2018-01-01'"
);
PreparedStatement u = c.prepareStatement(
"UPDATE post SET archived = 1 WHERE id = ?"
)) {

while (rs.next()) {
u.setInt(1, rs.getInt(1));
u.addBatch();
}

u.executeBatch();
}
System.out.println("Run " + i + ", Statement 3: " + Duration.between(ts, Instant.now()));

resetPost(c);
ts = Instant.now();

try (Statement s = c.createStatement()) {
s.executeUpdate("UPDATE post\n" +
"SET archived = 1\n" +
"WHERE archived = 0 AND creation_date < DATE '2018-01-01'\n");
}

System.out.println("Run " + i + ", Statement 4: " + Duration.between(ts, Instant.now()));
}
}
}

static void resetPost(Connection c) throws SQLException {
try (Statement s = c.createStatement()) {
s.executeUpdate("TRUNCATE TABLE post");
s.executeUpdate("INSERT INTO post\n" +
"    SELECT \n" +
"      level,\n" +
"      lpad('a', 1000, 'a'),\n" +
"      0,\n" +
"      DATE '2017-01-01' + (level / 10)\n" +
"    FROM dual\n" +
"    CONNECT BY level <= 10000");
s.executeUpdate("BEGIN dbms_stats.gather_table_stats('TEST', 'POST'); END;");
}
}
}
```

## The Cost of JDBC Server Roundtrips

### Or: Move That Loop into the Server Already!

This article will illustrate the significance of something that I always thought to be common sense, but I keep seeing people getting this (very) wrong in their productive systems. Chances are, in fact, that most applications out there suffer from this performance problem – and the fix is really easy.

### Transferring data in bulk vs. transferring it row by row

This blog post is inspired by a recent Stack Overflow question that was asking about how to call Oracle’s `DBMS_OUTPUT.GET_LINES` from JDBC. The answer to that specific question can also be seen in a previous blog post here.

This time, I don’t want to discuss that particular Oracle-specific technical problem, but the performance aspect of whether to call:

• `DBMS_OUTPUT.GET_LINES`: Which allows for fetching a bulk of server output into an array
• `DBMS_OUTPUT.GET_LINE`: Which fetches a single line of server output into a string

While I will continue to discuss this Oracle-specific functionality, this blog post is in no way strictly related to Oracle. It is generally applicable (and again, it should be common sense) for any database, and in fact, any client-server architecture, or even any distributed architecture. The solution to such performance problems will almost always be that transferring a bulk of data is better than transferring it row by row.

Let’s run a benchmark!

The full benchmark logic can be seen in this gist. It includes the boring parts of actually calling the `GET_LINE[S]` procedures. The beef of the benchmark is this:

```int max = 50;
long[] getLines = new long[max];
long[] getLine = new long[max];

try (Connection c = DriverManager.getConnection(url, properties);
Statement s = c.createStatement()) {

for (int warmup = 0; warmup < 2; warmup++) {
for (int i = 0; i < max; i++) {
s.executeUpdate("begin dbms_output.enable(); end;");
String sql =
"begin "
+ "for i in 1 .. 100 loop "
+ "dbms_output.put_line('Message ' || i); "
+ "end loop; "
+ "end;";
long t1 = System.nanoTime();
logGetLines(c, 100, () -> s.executeUpdate(sql));
long t2 = System.nanoTime();
logGetLine(c, 100, () -> s.executeUpdate(sql));
long t3 = System.nanoTime();
s.executeUpdate("begin dbms_output.disable(); end;");

if (warmup > 0) {
getLines[i] = t2 - t1;
getLine[i] = t3 - t2;
}
}
}
}

System.out.println(LongStream.of(getLines).summaryStatistics());
System.out.println(LongStream.of(getLine).summaryStatistics());
```

What does it do in prose?

• It contains a warmup loop whose first iteration doesn’t contribute to our measurements (this is always a good idea)
• It runs the benchmarked logic 50 times
• It generates 100 `DBMS_OUTPUT.PUT_LINE` messages for each run in an anonymous PL/SQL loop …
• … and then fetches those 100 messages immediately with either 1 call to `GET_LINES` or 100 calls to `GET_LINE`
• Finally, all the stored execution times are aggregated and printed out conveniently with Java 8 Stream’s summary statistics feature

So, in both cases, we’re generating and fetching 5000 messages.
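The final aggregation step from the benchmark can be illustrated in isolation. This is plain Java 8 and has nothing to do with the database; the timing values below are made up for the example:

```java
import java.util.LongSummaryStatistics;
import java.util.stream.LongStream;

public class SummaryStats {
    public static void main(String[] args) {
        // Pretend these are five measured execution times in nanoseconds
        long[] timings = { 1_100_000, 1_300_000, 1_250_000, 2_400_000, 1_050_000 };

        // summaryStatistics() computes count, sum, min, average, and max
        // in a single pass over the stream
        LongSummaryStatistics stats = LongStream.of(timings).summaryStatistics();
        System.out.println(stats);
    }
}
```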

Both methods using `GET_LINES` and `GET_LINE` respectively are functionally equivalent, i.e. the only difference is performance. Again, the full benchmark can be seen here (which also benchmarks the effect of the JDBC fetch size, which has no effect in this case).

The results are devastating:

```{count=50, sum=  69120455, min= 1067521, average= 1382409.100000, max= 2454614}
{count=50, sum=2088201423, min=33737827, average=41764028.460000, max=64498375}
```

We’re gaining a factor of 30x in this benchmark run on my machine. The actual results may vary depending on your hardware and software (e.g. I’m running Oracle 12cR2 in Docker), but regardless of the setup, the results are always very significant.

Note that caching the prepared statement in the client yields better results (see the gist), but nowhere near as good as moving the loop to the server.

### Does this mean that GET_LINE is slow?

When looking at benchmark results, we must always be very very careful not to draw the wrong conclusions. There can be many reasons why benchmarks show differences in performance. One simple (albeit improbable) reason could be that the `GET_LINE` implementation simply sucks.

So, I’ve tried to re-implement this benchmark in pure PL/SQL. The full benchmark can be seen here. The beef of it is this:

```FOR r IN 1..5 LOOP
v_ts := SYSTIMESTAMP;

FOR i IN 1..v_repeat LOOP
m();

v_i := v_max;
dbms_output.get_lines(v_array, v_i);
END LOOP;

INSERT INTO results VALUES (1, (SYSTIMESTAMP - v_ts));
v_ts := SYSTIMESTAMP;

FOR i IN 1..v_repeat LOOP
m();

FOR j IN 1 .. v_max LOOP
dbms_output.get_line(v_string, v_i);
END LOOP;
END LOOP;

INSERT INTO results VALUES (2, (SYSTIMESTAMP - v_ts));
END LOOP;
```

Where m() is:

```PROCEDURE m IS BEGIN
FOR i IN 1 .. v_max LOOP
dbms_output.put_line('Message ' || i);
END LOOP;
END m;
```

The results are now rather different:

```stmt    sum     avg      min     max
1       0.0609  0.01218  0.0073  0.0303
2       0.0333  0.00666  0.0063  0.007
```

This time, calling `GET_LINE` individually seems to have been 2x faster than the `GET_LINES` version. Again, it is important not to draw the wrong conclusions! This could be due to:

• `GET_LINES` allocating an additional array copy of the original lines, which resides in the PGA, might be costly
• `GET_LINE` might have profited from some additional optimisation because we’re never actually consuming the result in the benchmark

But the one thing we can conclude with certainty is: There’s no problem in `GET_LINE`, so calling it is not inherently worse than calling `GET_LINES`.

### Which brings us back to the JDBC calls

While guess work and hypotheses are usually dangerous, in this case I’m certain of the reason why the JDBC based approach shows such drastic differences. Just watch this excellent talk by Toon Koppelaars from Oracle, “NoPLSql and Thick Database Approaches with Toon Koppelaars”, where he explains this with some impressive flame graphs.

The obvious difference between the JDBC benchmark and the PL/SQL one is the fact that the JDBC call has to traverse a vast amount of logic, APIs, “barriers” between the JVM and the Oracle kernel before it can actually invoke the really interesting part. This includes:

• JDBC logic
• Various “outer” layers inside the Oracle database
• Oracle’s API layers to get into the SQL and PL/SQL execution engines
• The actual code running in the PL/SQL engine

In Toon’s talk (which again, you should definitely watch), the examples are running SQL code, not PL/SQL code, but the results are the same. The actual logic is relatively cheap inside of the database (as we’ve seen in the PL/SQL only benchmark), but the overhead is significant when calling database logic from outside the database.

Thus: It is very important to minimise that overhead

There are two ways to minimise that overhead:

• The super hard way: Change and / or tweak the API technology, e.g. in Oracle’s case, using the C/OCI bindings can be much faster than JDBC
• The easy way: Just move some data collection logic into the database and fetch data in bulk

Let me repeat this one more time:

Fetch (or send) data in bulk

… it’s almost always faster than processing things row-by-row (or as the Oracle folks call it: “slow-by-slow”).

And it does not matter at all, if that database logic is written in SQL or in a procedural language. The point is that accessing objects over the network (any network) is expensive, so you should minimise the access calls if ever possible.
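A back-of-envelope model makes the point tangible. The latency and row counts below are purely illustrative assumptions, not measurements, but they show why the number of roundtrips, rather than the per-row work, tends to dominate:

```java
public class RoundtripCost {
    public static void main(String[] args) {
        // Assumed, illustrative numbers: 0.5 ms network latency per
        // roundtrip, 10000 rows to update
        double latencyMs = 0.5;
        int rows = 10_000;

        // Row-by-row: one roundtrip per row, so latency is paid n times
        double rowByRowMs = rows * latencyMs;

        // Bulk: a single roundtrip carries the whole statement
        double bulkMs = 1 * latencyMs;

        System.out.println("row-by-row: " + rowByRowMs + " ms");
        System.out.println("bulk      : " + bulkMs + " ms");
    }
}
```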

As I’ve tweeted recently:

When calling logic over the network (any network), we should move logic to the data, not data to the logic. When working with RDBMS, we’re doing this through SQL (preferably) or, if SQL doesn’t suffice, we resort to using stored procedures.

When working with HTTP, we’re doing this with – well, it doesn’t have a name, but we should prefer making few physical HTTP calls that aggregate several logical API calls in order to transfer a lot of data in bulk.

When working with “map reduce” or “serverless” etc technology, we’re calling this “functions” or “lambdas”, which are just fancy, more modern names for stored procedures.

### Conclusion

I’m not an expert in how to design complex distributed systems. This stuff is really hard. But I’ve spent many years working with RDBMS, which are also, in a very simple way, distributed systems (data on one server, client logic on another one).

A very significant amount of performance problems with RDBMS is related to the simple fact of clients making way too many calls to the database for what could be implemented in a single SQL query. Within the database, once your logic has reached the kernel, stuff gets executed really really fast. Adding more logic to a query is going to cause far less trouble than adding more queries.

Does your application take into account these things? Especially, if you’re using an ORM that generates the SQL for you, does it generate the right amount of queries, or are you suffering from “N+1 problems”? The main reason why ORM-based systems tend to be slow is because developers are not aware of the SQL their ORMs generate, e.g. due to excessive lazy loading, when a single SQL (or JPQL) query would have been a much better choice. It’s not the ORM’s fault. Most ORMs can get this right.

It’s the developer’s responsibility to think about where the logic should be executed. The rule of thumb is:

If you’re looping in the client to fetch individual things from the same server, you’re doing it wrong. Move the loop into the server.

And by doing so, you’ve written your first (ghasp) “stored procedure”. You’ll write many more, and you’ll love it, once you realise how much speed you’re gaining.

/u/cogman10 made good points in his comment warning about batching “too big” workloads, which is perfectly correct when batching write heavy tasks. Large batches may increase the contention inside of your database. If you’re working in an MVCC environment (such as Oracle), having transactions that span millions of updates will put a lot of pressure on the UNDO/REDO log, and all other sessions reading from the same data will feel that pain.
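One common way to keep batch sizes bounded is to partition the ids on the client and run one (committed) batch per chunk. Here is a minimal sketch; the `chunks` helper and the chunk size of 1000 are assumptions for illustration, and each chunk would correspond to one `executeBatch()` call in the JDBC code above:

```java
import java.util.ArrayList;
import java.util.List;

public class Chunks {

    // Split a list into chunks of at most chunkSize, so each chunk can be
    // sent as its own batch, keeping transaction size and UNDO/REDO
    // pressure bounded.
    static <T> List<List<T>> chunks(List<T> list, int chunkSize) {
        List<List<T>> result = new ArrayList<>();
        for (int i = 0; i < list.size(); i += chunkSize)
            result.add(list.subList(i, Math.min(i + chunkSize, list.size())));
        return result;
    }

    public static void main(String[] args) {
        List<Integer> ids = new ArrayList<>();
        for (int i = 1; i <= 2500; i++)
            ids.add(i);

        // 2500 ids in batches of 1000 -> three batches: 1000, 1000, 500
        for (List<Integer> chunk : chunks(ids, 1000))
            System.out.println("batch of " + chunk.size());
    }
}
```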

Also, when working with HTTP, beware of the fact that batches are harder to cache than individual requests. This article made the assumption that HTTP requests are:

• Authorised – i.e. caching doesn’t even make sense as the authorisation might be revoked or the session terminated
• Operating on similar resources, e.g. fetching a bulk of IDs in one go might be more sensible than fetching each ID individually

… of course, as always, don’t follow advice you find on the internet blindly :) This article illustrated a common mistake. The fix isn’t always as simple as illustrated here, but often it really is.

## How to Fetch Oracle DBMS_OUTPUT from JDBC

When working with Oracle stored procedures, it is not uncommon to have debug log information available from `DBMS_OUTPUT` commands. For instance, if we have a procedure like this:

```CREATE TABLE my_table (i INT);

CREATE OR REPLACE PROCEDURE my_procedure (i1 INT, i2 INT) IS
BEGIN
INSERT INTO my_table
SELECT i1 FROM dual UNION ALL
SELECT i2 FROM dual;

dbms_output.put_line(sql%rowcount || ' rows inserted');
END my_procedure;
/
```

The procedure works just the same, regardless if we’re reading the output from the `DBMS_OUTPUT` call. It is there purely for logging purposes. Now, if we call the above procedure from a tool like SQL Developer or sqlplus, we could write:

```SET SERVEROUTPUT ON
BEGIN
my_procedure(1, 2);
END;
/
```

To get a result like this:

```PL/SQL-Prozedur erfolgreich abgeschlossen.
2 rows inserted
```

(pardon my German)

### How to get this output from JDBC

By default, we don’t get such output from JDBC as the overhead of transferring all this output is usually not worth the trouble. If we still wanted to call the procedure AND get the server output, we cannot simply write `SET SERVEROUTPUT ON`, as that is a command specific to sqlplus. We have to wrap our procedure calls in two other calls:

```try (Connection c = DriverManager.getConnection(url, properties);
Statement s = c.createStatement()) {

try {
// First, we have to enable the DBMS_OUTPUT. Otherwise,
// all calls to DBMS_OUTPUT made on our connection won't
// have any effect.
s.executeUpdate("begin dbms_output.enable(); end;");

// Now, this is the actually interesting procedure call
s.executeUpdate("begin my_procedure(1, 2); end;");

// After we're done with our call(s), we can proceed to
// fetch the SERVEROUTPUT explicitly, using
// DBMS_OUTPUT.GET_LINES
try (CallableStatement call = c.prepareCall(
"declare "
+ "  num integer := 1000;"
+ "begin "
+ "  dbms_output.get_lines(?, num);"
+ "end;"
)) {
call.registerOutParameter(1, Types.ARRAY,
"DBMSOUTPUT_LINESARRAY");
call.execute();

Array array = null;
try {
array = call.getArray(1);
Stream.of((Object[]) array.getArray())
.forEach(System.out::println);
}
finally {
if (array != null)
array.free();
}
}
}

// Don't forget to disable DBMS_OUTPUT for the remaining use
// of the connection.
finally {
s.executeUpdate("begin dbms_output.disable(); end;");
}
}
```

As can be seen above, this is rather simple:

• Initialise a connection with `DBMS_OUTPUT.ENABLE`
• Do the actually interesting work
• Fetch the output and call `DBMS_OUTPUT.DISABLE`

This could also be refactored into a utility:

```// Alternatively, just use https://github.com/jOOQ/jOOL
interface WhyUNoCheckedExceptionRunnable {
void run() throws Exception;
}

static void logServerOutput(
Connection connection,
WhyUNoCheckedExceptionRunnable runnable
) throws Exception {
try (Statement s = connection.createStatement()) {
try {
s.executeUpdate("begin dbms_output.enable(); end;");
runnable.run();

try (CallableStatement call = connection.prepareCall(
"declare "
+ "  num integer := 1000;"
+ "begin "
+ "  dbms_output.get_lines(?, num);"
+ "end;"
)) {
call.registerOutParameter(1, Types.ARRAY,
"DBMSOUTPUT_LINESARRAY");
call.execute();

Array array = null;
try {
array = call.getArray(1);
Stream.of((Object[]) array.getArray())
.forEach(System.out::println);
}
finally {
if (array != null)
array.free();
}
}
}
finally {
s.executeUpdate("begin dbms_output.disable(); end;");
}
}
}
```

This can now be called conveniently as such:

```try (Connection c = DriverManager.getConnection(url, properties);
Statement s = c.createStatement()) {

logServerOutput(c, () ->
s.executeUpdate("begin my_procedure(1, 2); end;"));
}
```

### How to do the same with jOOQ?

jOOQ 3.11 will have built in support for fetching this server output through its `ExecuteListener` SPI with https://github.com/jOOQ/jOOQ/issues/6580

We can either use jOOQ’s plain SQL API as such:

```try (Connection c = DriverManager.getConnection(url, properties)) {

// Specify this setting to fetch server output explicitly
DSLContext ctx = DSL.using(c,
new Settings().withFetchServerOutputSize(10));
ctx.execute("begin my_procedure(1, 2); end;");
}
```

Or, use the code generator for even more type safe calls to the procedures:

```try (Connection c = DriverManager.getConnection(url, properties)) {

// Specify this setting to fetch server output explicitly
DSLContext ctx = DSL.using(c,
new Settings().withFetchServerOutputSize(10));
myProcedure(ctx.configuration(), 1, 2);
}
```

The log output will be:

```DEBUG [org.jooq.tools.LoggerListener          ] - Executing query : begin my_procedure(1, 2); end;
DEBUG [org.jooq.impl.FetchServerOutputListener] - 2 rows inserted
```

## JOIN Elimination: An Essential Optimiser Feature for Advanced SQL Usage

The SQL language has one great advantage over procedural, object oriented, and “ordinary” functional programming languages. The fact that it is truly declarative (i.e. a 4GL / fourth generation programming language) means that a sophisticated optimiser can easily transform one SQL expression into another, equivalent SQL expression, which might be faster to execute.

How does this work?

### Expression Tree Equivalence

Here’s an introduction to equivalent expression trees from my SQL Masterclass: Let’s assume we want to evaluate the following expression:

```A + (A - B)
```

Now in maths, it can be proven trivially that by the laws of associativity, the above expression is really the same as this one:

```(A + A) - B
```

We didn’t really win anything yet, but we can equally trivially turn the above addition into a multiplication that can be proven to be exactly equivalent:

```(2 * A) - B
```

Now, imagine that `A` is an extremely “expensive” value, e.g. a value that we have to fetch from the disk, or worse, from the network. The cost of accessing A is thus very high and if we can avoid one access to A by the above transformation, then the resulting expression is much faster to evaluate than the original one, even if mathematically speaking, it does not matter at all.
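The equivalence of the three expression trees is trivial to check, e.g. over integers. This little check is just an illustration of the claim, not part of any optimiser:

```java
public class Equivalence {
    public static void main(String[] args) {
        // Verify the three equivalent expression trees on a range of values
        for (int a = -3; a <= 3; a++)
            for (int b = -3; b <= 3; b++) {
                // A + (A - B) == (A + A) - B
                assert a + (a - b) == (a + a) - b;
                // (A + A) - B == (2 * A) - B
                assert (a + a) - b == 2 * a - b;
            }
        System.out.println("All three expressions are equivalent");
    }
}
```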

That’s what optimisers do all the time. They transform expressions into equivalent expressions which are faster to execute. And they constantly adapt, so if the DBA chooses to move A from a remote server to the local server, thus reducing the cost of access to A, perhaps, suddenly, the original plan will be better again, because of the cost of multiplication (just an example).

### A SQL Example

An equally trivial SQL example from my SQL Masterclass shows that it really doesn’t matter mathematically if we run this query:

```SELECT first_name, last_name
FROM customer
WHERE first_name = 'JAMIE'
```

Or this one:

```SELECT *
FROM (
SELECT first_name, last_name
FROM customer
)
WHERE first_name = 'JAMIE'
```

With the SQL language, it may be a bit harder to see that these are exactly equivalent SQL statements, but if we translate the above queries to relational algebra, it may become more visible:

Selection before projection

… or `WHERE` before `SELECT`:

Projection then selection

… or `SELECT` before `WHERE`:

Don’t be fooled by relational algebra‘s term “selection”. It does not correspond to the `SELECT` clause, but to the `WHERE` clause!

We can prove (let’s leave the exercise to the reader), that both expressions are exactly equivalent, so optimisers can pick whichever they have a more efficient matching algorithm for:

• Ordinary row stores will probably apply the selection first, before projecting, as the expensive operation is accessing rows, and if we can filter rows early (e.g. through indexes), the whole query is less expensive
• A column store might (just a hypothesis) apply the projection first, as the expensive operation is accessing the columns. We then might have to traverse the entire column anyway to filter out the rows

### Let’s Talk About JOINs (and Their Elimination)

JOIN elimination is one of the most basic and yet most powerful SQL transformations, which is implemented by most modern databases in one way or another. It is so powerful, because we’re writing (potentially useless) JOINs all the time, when writing a bit more complex queries. See also our article about JOINs for a general overview.

Now, consider the following simplified schema, taken from the Sakila database:

```CREATE TABLE address (
address_id INT NOT NULL,
address VARCHAR(50) NOT NULL,
CONSTRAINT pk_address PRIMARY KEY (address_id)
);

CREATE TABLE customer (
customer_id INT NOT NULL,
first_name VARCHAR(45) NOT NULL,
last_name VARCHAR(45) NOT NULL,
address_id INT NOT NULL,
CONSTRAINT pk_customer PRIMARY KEY (customer_id),
CONSTRAINT fk_customer_address FOREIGN KEY (address_id)
REFERENCES address (address_id)
);
```

Let’s ignore indexing and other useful features for this example.

INNER JOIN Elimination

The following query shows a common JOIN use-case, joining a parent table (`ADDRESS`) to a child table (`CUSTOMER`) in a to-one relationship:

```SELECT c.*
FROM customer c
JOIN address a ON c.address_id = a.address_id
```

We intended to fetch all customers and their addresses. But observe: We project only columns from the `CUSTOMER` table and we don’t have any predicates at all, specifically not predicates using the `ADDRESS` table. So, we’re completely ignoring any contributions from the `ADDRESS` table. We never really needed the JOIN in the first place!

And in fact, the optimiser can prove this too, because of the `FOREIGN KEY` constraint on `C.ADDRESS_ID`, which guarantees that every `CUSTOMER` record has exactly one corresponding `ADDRESS` record. The JOIN neither duplicates nor removes any `CUSTOMER` rows, so it is unneeded and can thus be eliminated (by some databases, not all of them; I’ll list which databases support this at the end of the article).

So, the database can rewrite the SQL statement to the following, equivalent SQL statement in the presence of said FOREIGN KEY:

```SELECT c.*
FROM customer c
```

Now, quite obviously, this query will be faster than the previous one, if the entire JOIN can be avoided, and thus the entire access to the `ADDRESS` table. Neat, huh? Who would have thought that `FOREIGN KEY`s can be so useful in terms of performance.

The above works if there’s also a `NOT NULL` constraint on the `FOREIGN KEY`. If there isn’t, e.g. as in this query:

```SELECT title
FROM film f
JOIN language l ON f.original_language_id = l.language_id
```

The JOIN can still be eliminated, but there needs to be a replacement `NOT NULL` predicate, as such:

```SELECT title
FROM film
WHERE original_language_id IS NOT NULL
```

OUTER JOIN Elimination

A `LEFT [ OUTER ] JOIN` will JOIN the right table to the left table but keep rows from the left table if there is no match (again, an explanation of joins can be seen here). When we apply `LEFT JOIN` to the previous query…

```SELECT c.*
FROM customer c
LEFT JOIN address a ON c.address_id = a.address_id
```

… then we’ll fetch all rows from `CUSTOMER` regardless if that customer has any `ADDRESS`. This is useful if the `FOREIGN KEY` is optional (nullable), or completely absent, e.g. through:

```ALTER TABLE customer DROP CONSTRAINT fk_customer_address
```

`OUTER JOIN` is even easier to eliminate, as it doesn’t require any `FOREIGN KEY` constraint for the database to prove that it is unneeded. A `UNIQUE` constraint on the parent table (here: `ADDRESS.ADDRESS_ID`) is sufficient to show that for every `CUSTOMER` there can be at most one `ADDRESS`, so the `LEFT JOIN` won’t duplicate any `CUSTOMER` rows (unlike `INNER JOIN`, `OUTER JOIN` never removes rows).

Hence, the above query can again be rewritten to the more optimal:

```SELECT c.*
FROM customer c
```

OUTER JOIN Elimination with DISTINCT

Another interesting case of `OUTER JOIN` elimination is the following one, which unfortunately didn’t work on Oracle for a customer of ours, recently, in a complex query that ran rogue. Let’s look at some other tables of the Sakila database, which expose a to-many relationship:

```CREATE TABLE actor (
actor_id numeric NOT NULL ,
first_name VARCHAR(45) NOT NULL,
last_name VARCHAR(45) NOT NULL,
CONSTRAINT pk_actor PRIMARY KEY (actor_id)
);

CREATE TABLE film (
film_id int NOT NULL,
title VARCHAR(255) NOT NULL,
CONSTRAINT pk_film PRIMARY KEY (film_id)
);

CREATE TABLE film_actor (
actor_id INT NOT NULL,
film_id  INT NOT NULL,
CONSTRAINT pk_film_actor PRIMARY KEY (actor_id, film_id),
CONSTRAINT fk_film_actor_actor FOREIGN KEY (actor_id)
REFERENCES actor (actor_id),
CONSTRAINT fk_film_actor_film FOREIGN KEY (film_id)
REFERENCES film (film_id)
);
```

Now, consider this query:

```SELECT DISTINCT first_name, last_name
FROM actor a
LEFT JOIN film_actor fa ON a.actor_id = fa.actor_id
```

We’re looking for all actors and their films, but then we project only distinct actors. Again, the `JOIN` to `FILM_ACTOR` doesn’t contribute anything to the result, but because we’re joining a to-many relationship here (from parent table `ACTOR` to child table `FILM_ACTOR`), the `JOIN` is producing duplicate rows. Without `DISTINCT`, we’d get something like:

```FIRST_NAME   LAST_NAME
----------------------
...
PENELOPE     GUINESS
PENELOPE     GUINESS
PENELOPE     GUINESS
NICK         WAHLBERG
NICK         WAHLBERG
...
```

But thanks to the `DISTINCT` keyword, the result is (provably) no different from the result of this much simpler query:

```SELECT DISTINCT first_name, last_name
FROM actor a
```

(Note, `DISTINCT` cannot be eliminated, unless we already have a `UNIQUE` constraint on `(FIRST_NAME, LAST_NAME)`).
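The duplicate-then-deduplicate effect of the to-many `JOIN` plus `DISTINCT` can be mimicked outside of SQL. The actor/film pairs below are hypothetical sample data; the point is only that projecting the parent side of a to-many relationship produces duplicates, which a distinct operation then removes, giving the same result as never joining at all:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class DistinctJoin {
    public static void main(String[] args) {
        // Hypothetical (actor, film) rows, as produced by the to-many JOIN
        List<Map.Entry<String, String>> joined = List.of(
            Map.entry("PENELOPE GUINESS", "ANGELS LIFE"),
            Map.entry("PENELOPE GUINESS", "SPLASH GUMP"),
            Map.entry("NICK WAHLBERG",    "ADAPTATION HOLES")
        );

        // Projecting only the actor yields duplicates ...
        List<String> names = joined.stream()
            .map(Map.Entry::getKey)
            .collect(Collectors.toList());

        // ... which distinct() removes, same as dropping the JOIN entirely
        List<String> distinct = names.stream()
            .distinct()
            .collect(Collectors.toList());

        System.out.println(names);
        System.out.println(distinct);
    }
}
```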

### Why not Just Refactor the SQL Manually?

Of course, all of this shouldn’t be needed if our SQL were perfect. In the above trivial examples, the SQL can (and should) be re-written manually to improve quality. But note that:

• Developers make mistakes, and those mistakes may be very subtle when queries get more complex. I’ll show an example below.
• The presence of this feature actually encourages writing more complex SQL, especially when using reusable views. I’ll show another example below.
• Finally, I’ve previously advocated avoiding needless, mandatory work, like `SELECT *`. Such work is mandatory because the optimiser cannot prove its needlessness. In the case of these `JOINs`, the optimiser can prove the needlessness, so the work is no longer mandatory. It can be eliminated.

Here are some complex examples as promised, where this optimiser feature really shines:

Subtle Mistakes

Let’s consider the following query (in PostgreSQL syntax):

```SELECT c.name, count(*)
FROM actor a
JOIN film_actor fa USING (actor_id)
JOIN film f USING (film_id)
JOIN film_category fc USING (film_id)
JOIN category c USING (category_id)
WHERE actor_id = 1
GROUP BY c.name
ORDER BY count(*) DESC
```

What does it do? For `ACTOR_ID = 1` (Penelope Guiness), we’re looking for all the different film categories she played in, and the number of films per category. This is easier to understand when we look at the result:

```NAME         COUNT
Horror       3
Family       2
New          2
Classics     2
Games        2
Music        1
Sci-Fi       1
Animation    1
Sports       1
Children     1
Comedy       1
Documentary  1
Foreign      1
```

Now, can you spot the unneeded JOINs? In fact, we never needed `ACTOR`, nor did we need `FILM`:

```SELECT c.name, count(*)
FROM film_actor fa
JOIN film_category fc USING (film_id)
JOIN category c USING (category_id)
WHERE actor_id = 1
GROUP BY c.name
ORDER BY count(*) DESC
```

Cool, eh? The `JOIN`s can be eliminated (again, in some databases, see below) and our “mistake” is no longer relevant to the query. The mistake could also have snuck (or sneaked?) in from a previous version of the query, which may have looked like this, projecting also the actor information and the list of films per category, in which case the additional `JOIN`s are needed:

```sql
SELECT
c.name, count(*),
a.first_name, a.last_name,
array_agg(f.title ORDER BY f.title)
FROM actor a
JOIN film_actor fa USING (actor_id)
JOIN film f USING (film_id)
JOIN film_category fc USING (film_id)
JOIN category c USING (category_id)
WHERE actor_id = 1
GROUP BY c.name, a.first_name, a.last_name
ORDER BY count(*) DESC
```

The result being:

```
NAME         COUNT  FIRST_NAME  LAST_NAME  FILMS
Horror       3      PENELOPE    GUINESS    {"ELEPHANT TROJAN","LADY STAGE","RULES HUMAN"}
Family       2      PENELOPE    GUINESS    {"KING EVOLUTION","SPLASH GUMP"}
New          2      PENELOPE    GUINESS    {"ANGELS LIFE","OKLAHOMA JUMANJI"}
Classics     2      PENELOPE    GUINESS    {"COLOR PHILADELPHIA","WESTWARD SEABISCUIT"}
Games        2      PENELOPE    GUINESS    {"BULWORTH COMMANDMENTS","HUMAN GRAFFITI"}
Music        1      PENELOPE    GUINESS    {"WIZARD COLDBLOODED"}
Sci-Fi       1      PENELOPE    GUINESS    {"CHEAPER CLYDE"}
Animation    1      PENELOPE    GUINESS    {"ANACONDA CONFESSIONS"}
Sports       1      PENELOPE    GUINESS    {"GLEAMING JAWBREAKER"}
Children     1      PENELOPE    GUINESS    {"LANGUAGE COWBOY"}
Comedy       1      PENELOPE    GUINESS    {"VERTIGO NORTHWEST"}
Documentary  1      PENELOPE    GUINESS    {"ACADEMY DINOSAUR"}
Foreign      1      PENELOPE    GUINESS    {"MULHOLLAND BEAST"}
```

As you can see, this optimisation can be very useful on your legacy SQL, because if we maintain a complex query, we might not always be able to see all the JOINs that are really needed.

Reusable Views

Sometimes, we simply add additional `JOIN`s for convenience, when building complex queries from simpler ones, e.g. by using views (which are a completely underrated RDBMS feature! You should all write more views).

Consider this view:

```sql
CREATE VIEW v_customer AS
SELECT
  c.first_name, c.last_name,
  a.address, ci.city, co.country
FROM customer c
JOIN address a USING (address_id)
JOIN city ci USING (city_id)
JOIN country co USING (country_id)
```

It’s not unlikely that we would create a view like this, simply because we’re tired of joining all these tables, all the time. Every time we do something with customers and their addresses, we need the `CITY` and `COUNTRY` tables as well.

From now on (with this view), we can simply select from the view and “forget” about how it came to be. Now, let’s assume we have completely forgotten about the underlying tables, because the view has been so useful all the time. We might run this:

```sql
SELECT first_name, last_name
FROM v_customer
```

What do you think will happen? Exactly: `JOIN` elimination. A view isn’t really anything special, just a “macro” for some stored SQL (beware that in some databases this isn’t always the case, e.g. MySQL, as Bill Karwin was kind enough to point out). So the above statement will be transformed into:

```sql
SELECT first_name, last_name
FROM (
  SELECT
    c.first_name, c.last_name,
    a.address, ci.city, co.country
  FROM customer c
  JOIN address a USING (address_id)
  JOIN city ci USING (city_id)
  JOIN country co USING (country_id)
) v_customer
```

… which can be transformed into this (we don’t need all columns in the nested select):

```sql
SELECT first_name, last_name
FROM (
  SELECT
    c.first_name, c.last_name
  FROM customer c
  JOIN address a USING (address_id)
  JOIN city ci USING (city_id)
  JOIN country co USING (country_id)
) v_customer
```

… which can be transformed into this (JOINs can be eliminated):

```sql
SELECT first_name, last_name
FROM (
  SELECT
    c.first_name, c.last_name
  FROM customer c
) v_customer
```

… and finally (the subquery is no longer needed):

```sql
SELECT first_name, last_name
FROM customer
```

Thanks to `JOIN` elimination, the view is perfectly usable even for this particular query!

Note that the SQL transformations shown above are purely educational. Actual optimisers may perform transformations entirely differently, or in a different order. This is just to show what’s possible, and the kind of work that is being done.

### Cool, So Can My Database Do It?

Perhaps! Let’s look at the three different types of `JOIN` elimination in the context of these databases:

• DB2 LUW 10.5
• MySQL 8.0.2
• Oracle 12.2.0.1
• PostgreSQL 9.6
• SQL Server 2014

### INNER JOIN Elimination

Remember, this depends on the presence (and usefulness) of a `FOREIGN KEY` constraint. The SQL statement we’re using here is:

```sql
SELECT first_name, last_name
FROM customer c
JOIN address a ON c.address_id = a.address_id
```

We’re hoping to get:

```sql
SELECT first_name, last_name
FROM customer c
```
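The elimination above is only valid because of the `FOREIGN KEY` on `ADDRESS_ID`. In the Sakila schema, that constraint looks something like this (the constraint name is illustrative and may differ between Sakila ports):

```sql
ALTER TABLE customer
  ADD CONSTRAINT fk_customer_address  -- illustrative name
  FOREIGN KEY (address_id) REFERENCES address (address_id)
```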

DB2 LUW

The following execution plan (created with Markus Winand’s cool utility) shows that this works in DB2; there’s no access to the `ADDRESS` table:

```
Explain Plan                                                           |
-----------------------------------------------------------------------|
ID | Operation                           |                 Rows | Cost |
 1 | RETURN                              |                      |   61 |
 2 |  FETCH CUSTOMER                     | 599 of 599 (100.00%) |   61 |
 3 |   IXSCAN IDX_CUSTOMER_FK_ADDRESS_ID | 599 of 599 (100.00%) |   20 |
```

MySQL

MySQL 8, apart from finally introducing CTEs and window functions (yay), has a lot of new optimiser features; read Morgan Tocker’s useful optimiser guide for details. Unfortunately, `INNER JOIN` elimination is not implemented:

```
ID  TABLE  TYPE    REF                  ROWS   EXTRA
1   c      ALL                          599
1   a      eq_ref  sakila.c.address_id  1      Using index
```

Not only is the `JOIN` executed, but it is executed using a nested loop with 599 index lookups, as MySQL still only supports NESTED LOOP JOINs, not HASH JOINs.

Bummer.

Oracle

No problem at all for Oracle:

```
------------------------------------------------------------------------------
| Id  | Operation         | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |          |   599 | 28752 |     5   (0)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| CUSTOMER |   599 | 28752 |     5   (0)| 00:00:01 |
------------------------------------------------------------------------------
```

… the `JOIN` is eliminated as expected.

PostgreSQL

Unfortunately, PostgreSQL cannot eliminate INNER JOIN:

```
Hash Join  (cost=19.57..42.79 rows=599 width=13)
  ->  Seq Scan on customer c  (cost=0.00..14.99 rows=599 width=15)
  ->  Hash  (cost=12.03..12.03 rows=603 width=4)
        ->  Seq Scan on address a  (cost=0.00..12.03 rows=603 width=4)
```

Not as bad as in MySQL, though, as PostgreSQL chose to use a HASH JOIN to combine the two tables.

SQL Server

No problemo for SQL Server, the `ADDRESS` table access is gone!

```
  |--Table Scan(OBJECT:([sakila].[dbo].[customer] AS [c]))
```

Notice, however, that SQL Server can only eliminate `INNER JOIN` on `NOT NULL` FOREIGN KEYs!
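The reason is that with a nullable `FOREIGN KEY` column, an `INNER JOIN` might remove rows, so eliminating it would change the result. A sketch, assuming `ADDRESS_ID` were nullable:

```sql
-- If ADDRESS_ID were nullable, a customer with a NULL address would be
-- filtered out here, so this JOIN cannot be proven needless ...
SELECT first_name, last_name
FROM customer c
JOIN address a ON c.address_id = a.address_id

-- ... whereas this LEFT JOIN retains such customers and stays eliminable
SELECT first_name, last_name
FROM customer c
LEFT JOIN address a ON c.address_id = a.address_id
```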

### OUTER JOIN Elimination

This one is a bit easier to prove for the database, remember? We don’t rely on any `FOREIGN KEY` anymore. A `UNIQUE` key in the parent table is sufficient to eliminate an `OUTER JOIN`. We can safely expect that if the `INNER JOIN` could be eliminated (DB2, Oracle, SQL Server), then an `OUTER JOIN` can be eliminated, too.
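Conversely, if the column on the parent side were not `UNIQUE`, a `LEFT JOIN` could multiply rows and could not be removed. A contrived sketch:

```sql
-- Contrived: DISTRICT is not UNIQUE in ADDRESS, so this LEFT JOIN may
-- produce several rows per customer and must be executed, even though
-- no ADDRESS column is projected
SELECT first_name, last_name
FROM customer c
LEFT JOIN address a ON c.last_name = a.district
```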

Here’s the query:

```sql
SELECT first_name, last_name
FROM customer c
LEFT JOIN address a ON c.address_id = a.address_id
```

And the outcome:

DB2 LUW

Good:

```
Explain Plan                                                           |
-----------------------------------------------------------------------|
ID | Operation                           |                 Rows | Cost |
 1 | RETURN                              |                      |   61 |
 2 |  FETCH CUSTOMER                     | 599 of 599 (100.00%) |   61 |
 3 |   IXSCAN IDX_CUSTOMER_FK_ADDRESS_ID | 599 of 599 (100.00%) |   20 |
```

MySQL

Still nope:

```
ID  TABLE  TYPE    REF                  ROWS   EXTRA
1   c      ALL                          599
1   a      eq_ref  sakila.c.address_id  1      Using index
```

Oracle

Perfect:

```
------------------------------------------------------------------------------
| Id  | Operation         | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |          |   599 | 28752 |     5   (0)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| CUSTOMER |   599 | 28752 |     5   (0)| 00:00:01 |
------------------------------------------------------------------------------
```

PostgreSQL

Unlike `INNER JOIN` elimination, this works. Great!

```
Seq Scan on customer c  (cost=0.00..14.99 rows=599 width=13)
```

SQL Server

As expected, good:

```
  |--Table Scan(OBJECT:([sakila].[dbo].[customer] AS [c]))
```

Finally…

### OUTER JOIN Elimination with DISTINCT

Remember, the query here navigates a to-many relationship, producing duplicate records of the parent table, but then removes all those duplicates again by

• Ignoring contributions from the child table
• Removing duplicates with `DISTINCT`

It’s easy to prove that this:

```sql
SELECT DISTINCT first_name, last_name
FROM actor a
LEFT JOIN film_actor fa ON a.actor_id = fa.actor_id
```

… is equivalent to this:

```sql
SELECT DISTINCT first_name, last_name
FROM actor a
```

Remember also that this only works with `OUTER JOIN`, not with `INNER JOIN`: the latter might remove rows, so it has to be executed to find out whether it does.
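To see why, consider a sketch: with an `INNER JOIN`, any actor without `FILM_ACTOR` rows would be removed from the result, so the two queries above would no longer be equivalent:

```sql
-- NOT the same as "SELECT DISTINCT first_name, last_name FROM actor":
-- an actor with no FILM_ACTOR rows is filtered out by the INNER JOIN
SELECT DISTINCT first_name, last_name
FROM actor a
JOIN film_actor fa ON a.actor_id = fa.actor_id
```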

DB2 LUW

Cool, this actually works!

```
Explain Plan                                                          |
----------------------------------------------------------------------|
ID | Operation        |                 Rows | Cost                   |
 1 | RETURN           |                      |   20                   |
 2 |  TBSCAN          | 200 of 200 (100.00%) |   20                   |
 3 |   SORT (UNIQUE)  | 200 of 200 (100.00%) |   20                   |
 4 |    TBSCAN ACTOR  | 200 of 200 (100.00%) |   20                   |
```

There’s no access to the `FILM_ACTOR` table, nor to its indexes. Very nice.

MySQL

As this is a more sophisticated transformation than the previous ones, we don’t have high hopes here.

```
ID  TABLE  TYPE    REF                  ROWS   EXTRA
1   a      ALL                          200    Using temporary
1   fa     ref     sakila.a.actor_id    27     Using index; Distinct
```

This has become a rather expensive query, again because of the lack of HASH JOIN support in MySQL!

Oracle

I’m very surprised to see that Oracle doesn’t support this optimisation; we’re executing the full query:

```
---------------------------------------------------------------
| Id  | Operation           | Name                    | Rows  |
---------------------------------------------------------------
|   0 | SELECT STATEMENT    |                         |  5462 |
|   1 |  HASH UNIQUE        |                         |  5462 |
|   2 |   NESTED LOOPS OUTER|                         |  5462 |
|   3 |    TABLE ACCESS FULL| ACTOR                   |   200 |
|*  4 |    INDEX RANGE SCAN | IDX_FK_FILM_ACTOR_ACTOR |    27 |
---------------------------------------------------------------
```

Curiously, Oracle also chose a NESTED LOOP JOIN in this case, even though we could have loaded the entire index on the `FILM_ACTOR` table into memory and then HASH JOINed it to `ACTOR`. Note that the cardinality estimate of the resulting query is quite far off, too, despite the `DISTINCT` operation. This can lead to significant effects in an upstream query that selects from this query (e.g. if it is stored as a view), which is what happened to our customer.
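For instance, if the query above were stored as a view (the view name here is hypothetical), the off estimate would propagate to every query selecting from the view:

```sql
CREATE VIEW v_actor_names AS  -- hypothetical view name
SELECT DISTINCT first_name, last_name
FROM actor a
LEFT JOIN film_actor fa ON a.actor_id = fa.actor_id
```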

PostgreSQL

PostgreSQL doesn’t support this elimination either, but at least it produces much more accurate cardinality estimates and chooses a HASH JOIN operation:

```
HashAggregate  (cost=193.53..194.81 rows=128 width=13)
  Group Key: a.first_name, a.last_name
  ->  Hash Right Join  (cost=6.50..166.22 rows=5462 width=13)
        Hash Cond: (fa.actor_id = a.actor_id)
        ->  Seq Scan on film_actor fa  (cost=0.00..84.62 rows=5462 width=2)
        ->  Hash  (cost=4.00..4.00 rows=200 width=17)
              ->  Seq Scan on actor a  (cost=0.00..4.00 rows=200 width=17)
```

SQL Server

The pleasant surprise came from SQL Server, which does support this optimisation too:

```
  |--Sort(DISTINCT ORDER BY:([a].[first_name] ASC, [a].[last_name] ASC))
       |--Table Scan(OBJECT:([sakila].[dbo].[actor] AS [a]))
```

As you can see, no access to any `FILM_ACTOR` related objects!

### Summary

Here’s a summary of what databases can eliminate:

| Database | `INNER JOIN`: to-one | `INNER JOIN` nullable: to-one | `OUTER JOIN`: to-one | `OUTER JOIN` + `DISTINCT`: to-many |
|----------|----------------------|-------------------------------|----------------------|------------------------------------|
| DB2 LUW 10.5 | Yep | Yep | Yep | Yep |
| MySQL 8.0.2 | Nope | Nope | Nope | Nope |
| Oracle 12.2.0.1 | Yep | Yep | Yep | Nope |
| PostgreSQL 9.6 | Nope | Nope | Yep | Nope |
| SQL Server 2014 | Yep | Nope | Yep | Yep |

### Conclusion

`JOIN` elimination is a simple-to-understand, yet incredibly powerful feature that modern databases support to help developers build and maintain complex SQL queries without worrying too much about performance side effects.

It is possible because SQL is a 4GL (fourth-generation programming language), i.e. a declarative language whose parsed expression trees can “easily” be transformed into equivalent, simpler, and faster-to-execute expression trees. This work is constantly done by the optimiser behind the scenes, often without you even noticing (see the example above, where the unnecessary `ACTOR` and `FILM` tables were removed).

Currently, DB2 and SQL Server are the leaders here, with Oracle being a close runner-up, at least in my investigation. There’s hope for PostgreSQL, and a bit less hope for MySQL, right now. There are tons of other interesting SQL transformations, which may make the difference in other kinds of complex queries, and which I’ll cover in future posts.

If you were intrigued by this sort of functionality, do have a look at my most recent SQL talk, which helps developers understand the real value of SQL in the context of the optimiser: