How to Prevent Execution Plan Troubles when Querying Skewed Data, with jOOQ

One of the biggest advantages of using jOOQ is that you can change all of your complex application’s generated SQL with just a few lines of code. In this article, we’ll look into how to solve some common bind peeking issues just like that, without touching your application code, without the need to explain this advanced SQL performance issue to every team member, and best of all: for good.

What are Bind Values Good For?

First of all, bind variables are a good thing. They:

  • prevent SQL injection
  • allow the database to cache and reuse execution plans

The latter bullet doesn’t apply to all dialects, and neither does this article. Commercial DBMS like Oracle, SQL Server, etc. ship with a powerful execution plan cache. They are designed to run thousands of distinct queries with very complex plans. Planning these queries takes time (I’ve seen Oracle SQL being planned for 5 seconds!) and you don’t want the DBMS to repeat this planning work every time the query is run, which may be thousands of times per second.

Usually, this execution plan cache takes the SQL string (or a hash of it), and associates metadata like alternative possible execution plans to it. When the SQL string changes, the cache lookup fails and the “new” query has to be planned again.
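
To make the lookup mechanism concrete, here is a toy model of such a cache in Java. This is purely illustrative (real plan caches key on a hash of the SQL text and keep several child plans, statistics, etc. per entry), but it shows why a changed SQL string defeats the cache:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Toy model of an execution plan cache, keyed by the exact SQL text
public class PlanCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    public String planFor(String sql) {
        // A different SQL string (e.g. a changed literal) misses the cache
        // and triggers a fresh, potentially expensive planning step
        return cache.computeIfAbsent(sql, this::expensivePlanning);
    }

    private String expensivePlanning(String sql) {
        return "plan for: " + sql;
    }
}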

I say “new”, because it might be the “same” query to the user, even if the string is different. For example:

SELECT * FROM book WHERE id = 1;
SELECT * FROM book WHERE id = 2;

Now we have the “same” query twice, but each of them is “new”. Oracle will plan both afresh. So, we use bind variables instead:

SELECT * FROM book WHERE id = ?;
SELECT * FROM book WHERE id = ?;
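
From JDBC, using a bind variable looks like this. A minimal sketch (assuming the BOOK table above has a TITLE column); every call produces the exact same SQL string, so subsequent executions can reuse the cached plan:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Only the bind value changes between executions, never the SQL text
static String titleById(Connection connection, int id) throws SQLException {
    try (PreparedStatement s = connection.prepareStatement(
        "SELECT title FROM book WHERE id = ?")) {

        s.setInt(1, id);

        try (ResultSet rs = s.executeQuery()) {
            return rs.next() ? rs.getString(1) : null;
        }
    }
}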

What is the Bind Peeking Issue?

In some cases, the cached plan is not optimal. This is the case when the actual bind value is significant to the planning, e.g. the value 1 would produce a vastly different plan than the value 2, or much more likely the value DELETED produces a different plan from PROCESSED or NEW.

This problem has been discussed in our previous blog post Why You Should Design Your Database to Optimise for Statistics.

“Bind Peeking” is a technique used by the Oracle database (others do it, too, but may not call it “Bind Peeking”) to have a “peek” at the actual bind value and produce a more accurate plan than the average plan we would get without knowing the value. This can go both ways, good or bad, which is why there were numerous fixes / patches / workarounds for it in previous Oracle versions.

Databases are slowly moving towards truly adaptive query execution models, where execution plans can be fixed in-flight when the estimates turn out to be clearly wrong. Db2 is quite strong at this, and Oracle is getting better.

But even then, planners sometimes get it wrong, simply because they can’t reasonably estimate the cardinalities produced by a simple predicate like

WHERE x = ?

… just because the overall query is very complex and some SQL transformation did not apply.

Preventing the Problem by Avoiding Bind Values

Again: please use bind values by default. They’re a good thing by default. Not all data is as skewed as the data I presented in my other blog post. But some data is almost always skewed: enumeration types.

When you have an enum like:

enum ProcessingState {
  NEW,
  PROCESSING,
  EXECUTED,
  DELETED
}

Or in PostgreSQL:

CREATE TYPE processing_state AS ENUM (
  'new',
  'processing',
  'executed',
  'deleted'
);

Or even just encoded as a CHECK constraint:

CREATE TABLE transaction (
  -- ...
  processing_state VARCHAR(10) CHECK (processing_state IN (
    'new',
    'processing',
    'executed',
    'deleted'
  ))
  -- ...
);

In that case, you will very likely have highly skewed data. For example, a quick query might yield:

SELECT processing_state, count(*)
FROM transaction
GROUP BY processing_state

Resulting in:

+------------------+----------+
| processing_state |    count |
+------------------+----------+
| new              |    10234 |
| processing       |       15 |
| executed         | 17581684 |
| deleted          |    83193 |
+------------------+----------+

Now, do you think you’ll profit from indexing PROCESSING_STATE equally, when looking for NEW or PROCESSING versus when looking for EXECUTED values? Do you want the same plans? Do you want an average plan, which might not use the index, when in fact you should use it (looking for PROCESSING)?

Not only that, but it’s also quite unlikely that your queries are so generic that the individual PROCESSING_STATE values can be used interchangeably. For example, a query looking for DELETED states might be run by a housekeeping batch job that wants to remove the logically deleted transactions for good. It will never query anything other than DELETED states. So, might as well inline, right?

Now, if you write a query like this:

SELECT *
FROM transaction
WHERE processing_state = 'processing';

With jOOQ, you can create an inline value on a per-query basis using DSL.inline("processing") (as opposed to DSL.val("processing"), which is used by default, implicitly):

// These generate a ? bind value
ctx.selectFrom(TRANSACTION)
   .where(TRANSACTION.PROCESSING_STATE.eq("processing"))
   .fetch();

ctx.selectFrom(TRANSACTION)
   .where(TRANSACTION.PROCESSING_STATE.eq(val("processing")))
   .fetch();

// This creates an inline 'processing' literal
ctx.selectFrom(TRANSACTION)
   .where(TRANSACTION.PROCESSING_STATE.eq(inline("processing")))
   .fetch();

As always, assuming these static imports:

import static org.jooq.impl.DSL.*;
import static org.jooq.impl.SQLDataType.*;

But now, you have to think of doing that every single time you query PROCESSING_STATE, and all the other similar columns.

Prevent it for Good

Much better, prevent it for good. You can create a very simple Binding like this:

class AlwaysInlineStringBinding implements Binding<String, String> {

    final Binding<?, String> delegate = VARCHAR.getBinding();

    @Override
    public Converter<String, String> converter() {
        return Converters.identity(String.class);
    }

    @Override
    public void sql(BindingSQLContext<String> ctx) 
    throws SQLException {
        ctx.render().visit(inline(ctx.value()));
    }

    @Override
    public void register(BindingRegisterContext<String> ctx) 
    throws SQLException {
        delegate.register(ctx);
    }

    // No need to set anything
    @Override
    public void set(BindingSetStatementContext<String> ctx) 
    throws SQLException {}

    @Override
    public void set(BindingSetSQLOutputContext<String> ctx) 
    throws SQLException {
        delegate.set(ctx);
    }

    @Override
    public void get(BindingGetResultSetContext<String> ctx) 
    throws SQLException {
        delegate.get(ctx);
    }

    @Override
    public void get(BindingGetStatementContext<String> ctx) 
    throws SQLException {
        delegate.get(ctx);
    }

    @Override
    public void get(BindingGetSQLInputContext<String> ctx) 
    throws SQLException {
        delegate.get(ctx);
    }
}

Or, starting from jOOQ 3.15, even simpler:

class AlwaysInlineStringBinding 
extends DefaultBinding<String, String> {
    public AlwaysInlineStringBinding() {
        super(DefaultBinding.binding(VARCHAR));
    }

    @Override
    public void sql(BindingSQLContext<String> ctx) 
    throws SQLException {
        ctx.render().visit(inline(ctx.value()));
    }

    // No need to set anything
    @Override
    public void set(BindingSetStatementContext<String> ctx) 
    throws SQLException {}
}

Or even generic:

class AlwaysInlineBinding<T> extends DefaultBinding<T, T> {
    public AlwaysInlineBinding(DataType<T> type) {
        super(DefaultBinding.binding(type));
    }

    @Override
    public void sql(BindingSQLContext<T> ctx) 
    throws SQLException {
        ctx.render().visit(inline(ctx.value()));
    }

    // No need to set anything
    @Override
    public void set(BindingSetStatementContext<T> ctx) 
    throws SQLException {}
}

All this does is generate inline values instead of the ? bind parameter marker, and skip setting any value on the JDBC PreparedStatement (or the reactive R2DBC Statement, starting from jOOQ 3.15; this will work just the same!).

Try it very easily, yourself (using the jOOQ 3.15 version):

@Test
public void testAlwaysInlineBinding() {
    DSLContext ctx = DSL.using(DEFAULT);
    DataType<Integer> t = INTEGER.asConvertedDataType(
        new AlwaysInlineBinding<>(INTEGER));

    Field<Integer> i = field("i", INTEGER);
    Field<Integer> j = field("j", t);
    Param<Integer> a = val(1);
    Param<Integer> b = val(1, INTEGER.asConvertedDataType(
        new AlwaysInlineBinding<>(INTEGER)));

    // Bind value by default
    assertEquals("?", ctx.render(a));
    assertEquals("1", ctx.renderInlined(a));
    assertEquals("1", ctx.render(b));
    assertEquals("1", ctx.renderInlined(b));

    // Bind value by default in predicates
    assertEquals("i = ?", ctx.render(i.eq(a)));
    assertEquals("i = 1", ctx.renderInlined(i.eq(a)));
    assertEquals("i = 1", ctx.render(i.eq(b)));
    assertEquals("i = 1", ctx.renderInlined(i.eq(b)));
    assertEquals("i = ?", ctx.render(i.eq(1)));
    assertEquals("i = 1", ctx.renderInlined(i.eq(1)));

    // No more bind values in predicates!
    assertEquals("j = 1", ctx.render(j.eq(a)));
    assertEquals("j = 1", ctx.renderInlined(j.eq(a)));
    assertEquals("j = 1", ctx.render(j.eq(b)));
    assertEquals("j = 1", ctx.renderInlined(j.eq(b)));
    assertEquals("j = 1", ctx.render(j.eq(1)));
    assertEquals("j = 1", ctx.renderInlined(j.eq(1)));
}

Of course, instead of doing the above programmatically, you’ll attach this Binding to all relevant columns using the code generator’s forced type configuration.
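
For example, the programmatic variant of such a configuration might look like the following sketch (abbreviated: a real configuration also needs jdbc and target sections, and the binding class name and include expression are placeholders for your own):

import org.jooq.codegen.GenerationTool;
import org.jooq.meta.jaxb.Configuration;
import org.jooq.meta.jaxb.Database;
import org.jooq.meta.jaxb.ForcedType;
import org.jooq.meta.jaxb.Generator;

public class Codegen {
    public static void main(String[] args) throws Exception {

        // Attach the binding to all PROCESSING_STATE columns via a forced type
        Configuration configuration = new Configuration()
            .withGenerator(new Generator()
                .withDatabase(new Database()
                    .withForcedTypes(new ForcedType()
                        .withUserType("java.lang.String")
                        .withBinding("com.example.AlwaysInlineStringBinding")
                        .withIncludeExpression(".*\\.PROCESSING_STATE"))));

        GenerationTool.generate(configuration);
    }
}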

Conclusion

Please use bind values by default. Both in jOOQ and elsewhere. It’s a very good default.

But sometimes, your data is skewed, and you as a developer probably know that. In those cases, “inline values” as we call them (or constants, literals, etc.) may be the better choice to help the optimiser produce better estimates. Even if the optimiser’s estimates are good the first time, the plan may switch in production for some weird reason, including plans being purged because the cache is full, or the DBA clicked a button, or whatever.

And that’s when your query might suddenly be unnecessarily slow. There’s no need for that. When you have enum types, or similar, just use the above simple trick, apply it to all of your schema where it makes sense, and forget this problem for good.

Side note:

Of course, the other way round is just as easy. When you have inline literals that you want to switch over to bind values, you can do it just the same way, e.g. when you use jOOQ’s parsing connection to translate between dialects, or to patch your wrong ORM-generated SQL!

Use IN List Padding in Your JDBC Application to Avoid Cursor Cache Contention Problems

A problem few developers are aware of is the possibility of running into “cursor cache contention” or “execution plan cache contention” problems when using IN lists in SQL. The problem, described at length in previous articles, can be summarised as follows.

All of these are distinct SQL queries and need to be parsed / planned / cached as possibly distinct execution plans in RDBMS that have strong plan caches (e.g. Db2, Oracle, SQL Server):

SELECT * FROM t WHERE id IN (?);
SELECT * FROM t WHERE id IN (?, ?);
SELECT * FROM t WHERE id IN (?, ?, ?);
SELECT * FROM t WHERE id IN (?, ?, ?, ?);
SELECT * FROM t WHERE id IN (?, ?, ?, ?, ?);

While this is never a problem on developer machines, this can produce significant problems in production. I’ve seen this take down entire Oracle instances during peak loads. While RDBMS vendors should work on avoiding the severe problems this can cause, you can work around it using a trick that we invented at jOOQ (and Hibernate also has it now):

IN list padding

The trick is very simple. Just “pad” your IN lists to the nearest power of 2, and repeat the last value until the end:

SELECT * FROM t WHERE id IN (?);                      -- Left as it is
SELECT * FROM t WHERE id IN (?, ?);                   -- Left as it is
SELECT * FROM t WHERE id IN (?, ?, ?, ?);             -- Padded 3 to 4
SELECT * FROM t WHERE id IN (?, ?, ?, ?);             -- Left as it is
SELECT * FROM t WHERE id IN (?, ?, ?, ?, ?, ?, ?, ?); -- Padded 5 to 8
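
In code, the padding logic could look like this minimal Java sketch (purely illustrative, not jOOQ’s actual implementation, and ignoring database specific limits such as Oracle’s 1000-element maximum per IN list):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class InListPadding {

    // Pad a non-empty list to the next power of 2 by repeating its last element
    static <T> List<T> padToPowerOfTwo(List<T> in) {
        int size = in.size();
        int padded = Integer.highestOneBit(size);
        if (padded < size)
            padded <<= 1;

        List<T> result = new ArrayList<>(in);
        T last = in.get(size - 1);
        while (result.size() < padded)
            result.add(last);

        return result;
    }

    public static void main(String[] args) {
        System.out.println(padToPowerOfTwo(Arrays.asList(1, 2, 3)));       // [1, 2, 3, 3]
        System.out.println(padToPowerOfTwo(Arrays.asList(1, 2, 3, 4, 5))); // [1, 2, 3, 4, 5, 5, 5, 5]
    }
}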

It’s really a hack and there are better solutions to avoiding this problem, including using arrays or temporary tables, but your production system may be down and you need a quick fix.

jOOQ has supported IN list padding for years now, since jOOQ 3.9 (late 2016), but with the relatively new parser and the ParsingConnection, you can now also apply this technique to any arbitrary SQL query in your non-jOOQ based system. Here’s a simple example:

// Any arbitrary JDBC Connection is wrapped by jOOQ here and replaced
// by a "ParsingConnection", which is also a JDBC Connection
DSLContext ctx = DSL.using(connection);
ctx.settings().setInListPadding(true);
Connection c = ctx.parsingConnection();

// Your remaining code is left untouched. It is unaware of jOOQ
for (int i = 0; i < 10; i++) {
    try (PreparedStatement s = c.prepareStatement(

        // This alone is reason enough to use jOOQ instead, 
        // but one step at a time :)
        "select 1 from dual where 1 in (" +
            IntStream.rangeClosed(0, i)
                     .mapToObj(x -> "?")
                     .collect(Collectors.joining(", ")) +
        ")")
    ) {
        for (int j = 0; j <= i; j++)
            s.setInt(j + 1, j + 1);

        try (ResultSet rs = s.executeQuery()) {
            while (rs.next())
                System.out.println(rs.getInt(1));
        }
    }
}

The above example just generates and runs 10 queries of this form:

select 1 from dual where 1 in (?)
select 1 from dual where 1 in (?, ?)
select 1 from dual where 1 in (?, ?, ?)
select 1 from dual where 1 in (?, ?, ?, ?)
select 1 from dual where 1 in (?, ?, ?, ?, ?)
select 1 from dual where 1 in (?, ?, ?, ?, ?, ?)
select 1 from dual where 1 in (?, ?, ?, ?, ?, ?, ?)
select 1 from dual where 1 in (?, ?, ?, ?, ?, ?, ?, ?)
select 1 from dual where 1 in (?, ?, ?, ?, ?, ?, ?, ?, ?)
select 1 from dual where 1 in (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)

But that’s not what is being executed. In the DEBUG logs, we can see the following:

Translating from         : select 1 from dual where 1 in (?)
Translating to           : select 1 from dual where 1 in (?)
Translating from         : select 1 from dual where 1 in (?, ?)
Translating to           : select 1 from dual where 1 in (?, ?)
Translating from         : select 1 from dual where 1 in (?, ?, ?)
Translating to           : select 1 from dual where 1 in (?, ?, ?, ?)
Translating from         : select 1 from dual where 1 in (?, ?, ?, ?)
Translating to           : select 1 from dual where 1 in (?, ?, ?, ?)
Translating from         : select 1 from dual where 1 in (?, ?, ?, ?, ?)
Translating to           : select 1 from dual where 1 in (?, ?, ?, ?, ?, ?, ?, ?)
Translating from         : select 1 from dual where 1 in (?, ?, ?, ?, ?, ?)
Translating to           : select 1 from dual where 1 in (?, ?, ?, ?, ?, ?, ?, ?)
Translating from         : select 1 from dual where 1 in (?, ?, ?, ?, ?, ?, ?)
Translating to           : select 1 from dual where 1 in (?, ?, ?, ?, ?, ?, ?, ?)
Translating from         : select 1 from dual where 1 in (?, ?, ?, ?, ?, ?, ?, ?)
Translating to           : select 1 from dual where 1 in (?, ?, ?, ?, ?, ?, ?, ?)
Translating from         : select 1 from dual where 1 in (?, ?, ?, ?, ?, ?, ?, ?, ?)
Translating to           : select 1 from dual where 1 in (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
Translating from         : select 1 from dual where 1 in (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
Translating to           : select 1 from dual where 1 in (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)

And just like that, our legacy application can run in production again, and you’ll have time to fix this more thoroughly.

Conclusion

While jOOQ is mostly an internal DSL for writing type safe, embedded SQL in Java, you can use it for a lot of other things too on any JDBC based application. The above example is using the ParsingConnection that can parse all your SQL statements and translate / transform them to anything else, including other dialects.

When Using Bind Variables is not Enough: Dynamic IN Lists

In a previous blog post, I wrote about why you should (almost) always default to using bind variables. There are some exceptions, which I will cover in another follow-up post, but by default, bind variables are the right choice, both from a performance and from a security perspective. In this article, I will show an example where regrettably, bind variables are not enough, and you can still run into significant performance issues in production. That’s when you create dynamic IN lists.

What’s the problem with dynamic IN lists

Again, this blog post is applicable to most databases, but I’ll use Oracle as an example only. An IN predicate that takes a list is a handy abbreviation for an equivalent OR predicate. The following two queries are semantically equivalent:

-- IN predicate
SELECT *
FROM actor
WHERE actor_id IN (1, 2, 3)

-- OR predicate
SELECT *
FROM actor
WHERE actor_id = 1
OR actor_id = 2
OR actor_id = 3

The execution plans are, respectively:

IN list
---------------------------------------------------------
| Id  | Operation                    | Name     | Rows  |
---------------------------------------------------------
|   0 | SELECT STATEMENT             |          |     3 |
|   1 |  INLIST ITERATOR             |          |       |
|   2 |   TABLE ACCESS BY INDEX ROWID| ACTOR    |     3 |
|*  3 |    INDEX UNIQUE SCAN         | PK_ACTOR |     3 |
---------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   3 - access("ACTOR_ID"=1 OR "ACTOR_ID"=2 OR "ACTOR_ID"=3)
OR predicate
---------------------------------------------------------
| Id  | Operation                    | Name     | Rows  |
---------------------------------------------------------
|   0 | SELECT STATEMENT             |          |     3 |
|   1 |  INLIST ITERATOR             |          |       |
|   2 |   TABLE ACCESS BY INDEX ROWID| ACTOR    |     3 |
|*  3 |    INDEX UNIQUE SCAN         | PK_ACTOR |     3 |
---------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   3 - access("ACTOR_ID"=1 OR "ACTOR_ID"=2 OR "ACTOR_ID"=3)
We get the exact same plan. And the plan is also optimal. When looking for 3 actors, we’ll run 3 primary key lookups and get a cardinality estimate of 3. Perfect, what could be wrong about this?

Remember execution plan caching

In that previous blog post, we’ve seen the detrimental effects we can get under load when execution plans are no longer cached in the execution plan cache (or in Oracle: cursor cache). But even when we’re using bind variables, we can run into these issues. Consider the following dynamic SQL queries:

SELECT count(*) FROM actor WHERE actor_id IN (?)
SELECT count(*) FROM actor WHERE actor_id IN (?, ?)
SELECT count(*) FROM actor WHERE actor_id IN (?, ?, ?)
SELECT count(*) FROM actor WHERE actor_id IN (?, ?, ?, ?)
SELECT count(*) FROM actor WHERE actor_id IN (?, ?, ?, ?, ?)

Now, let’s generate these SQL queries using dynamic SQL in PL/SQL. PL/SQL is not a very dynamic-SQL-friendly language, as can be seen in the following snippet:

SET SERVEROUTPUT ON
DECLARE
  v_count NUMBER;
  
  FUNCTION binds (n NUMBER) RETURN VARCHAR2 IS
    v_result VARCHAR2(1000);
  BEGIN
    FOR i IN 1..n LOOP
      IF v_result IS NULL THEN
        v_result := ':b' || i;
      ELSE
        v_result := v_result || ', :b' || i;
      END IF;
    END LOOP;
    
    RETURN v_result;
  END binds;
  
  FUNCTION vals (n NUMBER) RETURN VARCHAR2 IS
    v_result VARCHAR2(1000);
  BEGIN
    FOR i IN 1..n LOOP
      IF v_result IS NULL THEN
        v_result := i;
      ELSE
        v_result := v_result || ', ' || i;
      END IF;
    END LOOP;
    
    RETURN v_result;
  END vals;
BEGIN
  FOR i IN 1..5 LOOP
    EXECUTE IMMEDIATE '
      DECLARE 
        v_count NUMBER;
      BEGIN
        EXECUTE IMMEDIATE ''
          SELECT count(*)
          FROM actor
          WHERE actor_id IN (' || binds(i) || ')
        ''
        INTO v_count
        USING ' || vals(i) || ';
        
        :v_count := v_count;
      END;
    '
    USING OUT v_count
    ;
    
    dbms_output.put_line(v_count);
  END LOOP;
END;
/

Running this snippet yields, as expected:
1
2
3
4
5
There are a few noteworthy things to say about the above:
  • The functions BINDS() and VALS() generate strings like :b1, :b2, :b3 and 1, 2, 3. We need these to dynamically generate a list of bind variables for the IN predicates, as well as a list of bind values for the USING clause of the nested EXECUTE IMMEDIATE statement
  • The EXECUTE IMMEDIATE .. USING clause does not take dynamically sized bind value lists, so we have to generate that as well, in a nested EXECUTE IMMEDIATE statement
So, now that we got this covered, let’s look at execution plans:

SELECT s.sql_id, p.*
FROM v$sql s, TABLE (
  dbms_xplan.display_cursor (
    s.sql_id, s.child_number, 'ALLSTATS LAST'
  )
) p
WHERE lower(s.sql_text) LIKE '%actor_id in %'
AND lower(s.sql_text) NOT LIKE '%v$sql%'
AND lower(s.sql_text) NOT LIKE '%execute_immediate%';

And the result is (some additional formatting applied):
SQL_ID  cwm09k8zqp2at, child number 0
-------------------------------------
SELECT count(*) FROM actor WHERE actor_id IN (:b1)
 
Plan hash value: 1971314150
 
------------------------------------------------
| Id  | Operation          | Name     | E-Rows |
------------------------------------------------
|   0 | SELECT STATEMENT   |          |        |
|   1 |  SORT AGGREGATE    |          |      1 |
|*  2 |   INDEX UNIQUE SCAN| PK_ACTOR |      1 |
------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - access("ACTOR_ID"=:B1)
 
SQL_ID  1wypurx1x0mrp, child number 0
-------------------------------------
SELECT count(*) FROM actor WHERE actor_id IN (:b1, :b2)
 
Plan hash value: 577114894
 
-------------------------------------------------
| Id  | Operation           | Name     | E-Rows |
-------------------------------------------------
|   0 | SELECT STATEMENT    |          |        |
|   1 |  SORT AGGREGATE     |          |      1 |
|   2 |   INLIST ITERATOR   |          |        |
|*  3 |    INDEX UNIQUE SCAN| PK_ACTOR |      2 |
-------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   3 - access(("ACTOR_ID"=:B1 OR "ACTOR_ID"=:B2))

SQL_ID  2y06nwgpxqttn, child number 0
-------------------------------------
SELECT count(*) FROM actor WHERE actor_id IN (:b1, :b2, :b3)
 
Plan hash value: 577114894
 
-------------------------------------------------
| Id  | Operation           | Name     | E-Rows |
-------------------------------------------------
|   0 | SELECT STATEMENT    |          |        |
|   1 |  SORT AGGREGATE     |          |      1 |
|   2 |   INLIST ITERATOR   |          |        |
|*  3 |    INDEX UNIQUE SCAN| PK_ACTOR |      3 |
-------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   3 - access(("ACTOR_ID"=:B1 OR "ACTOR_ID"=:B2 OR "ACTOR_ID"=:B3))

SQL_ID  d4nn8qf9n22yt, child number 0
-------------------------------------
SELECT count(*) FROM actor WHERE actor_id IN (:b1, :b2, :b3, :b4)
 
Plan hash value: 577114894
 
-------------------------------------------------
| Id  | Operation           | Name     | E-Rows |
-------------------------------------------------
|   0 | SELECT STATEMENT    |          |        |
|   1 |  SORT AGGREGATE     |          |      1 |
|   2 |   INLIST ITERATOR   |          |        |
|*  3 |    INDEX UNIQUE SCAN| PK_ACTOR |      4 |
-------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   3 - access(("ACTOR_ID"=:B1 OR "ACTOR_ID"=:B2 OR "ACTOR_ID"=:B3 OR 
              "ACTOR_ID"=:B4))

SQL_ID  4n9b4zgxr5cwj, child number 0
-------------------------------------
SELECT count(*) FROM actor WHERE actor_id IN (:b1, :b2, :b3, :b4, :b5)
 
Plan hash value: 577114894
 
-------------------------------------------------
| Id  | Operation           | Name     | E-Rows |
-------------------------------------------------
|   0 | SELECT STATEMENT    |          |        |
|   1 |  SORT AGGREGATE     |          |      1 |
|   2 |   INLIST ITERATOR   |          |        |
|*  3 |    INDEX UNIQUE SCAN| PK_ACTOR |      5 |
-------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   3 - access(("ACTOR_ID"=:B1 OR "ACTOR_ID"=:B2 OR "ACTOR_ID"=:B3 OR 
              "ACTOR_ID"=:B4 OR "ACTOR_ID"=:B5))
So, again, we’re down to having 5 different SQL_IDs and statements in the execution plan cache / cursor cache, even if they all derive from a single logical statement. Now, in PL/SQL, this is so hard to do, you’re probably not doing it at all. But if you’re using a query builder (or wrote your own), like JPA’s Criteria Query, or jOOQ, then you will be writing something like this (using jOOQ, because in fact, using the Criteria Query API, dynamic SQL is about as hard to do as in PL/SQL):

for (int i = 1; i <= 5; i++) {
    System.out.println(
        DSL.using(configuration)
           .select(count())
           .from(ACTOR)
           .where(ACTOR.ACTOR_ID.in(
                IntStream.rangeClosed(1, i)
                         .boxed()
                         .collect(toList())
           ))
           .fetchOne(count())
    );
}

So this code will generate the same queries as the PL/SQL code, and it really wasn’t that hard. Perhaps, in this particular case, it’s not such a good thing that it’s so easy to do!
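
As an aside: when such loops go through jOOQ anyway, the IN list padding workaround covered in the previous section can be switched on directly via Settings when creating the DSLContext. A sketch (the connection and dialect are placeholders):

import java.sql.Connection;

import org.jooq.DSLContext;
import org.jooq.SQLDialect;
import org.jooq.conf.Settings;
import org.jooq.impl.DSL;

// All IN lists rendered by this context will be padded to powers of 2
static DSLContext paddedContext(Connection connection) {
    return DSL.using(
        connection,
        SQLDialect.ORACLE,
        new Settings().withInListPadding(true));
}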

Let’s benchmark this, again

There are different ways to benchmark this problem. One mistake we could make here would be to just compare small IN lists with large ones, but that’s not fair: if you have to send a large IN list to the server, you cannot easily change that. Instead, let’s benchmark alternative approaches of generating those IN lists:
  • The default way is to have a list of exactly size N
  • An alternative way would be to pad the list to a size M = 2 ^ X, choosing X such that 2 ^ X >= N and 2 ^ (X - 1) < N
The latter approach would, instead of generating the original queries:

SELECT count(*) FROM actor WHERE actor_id IN (?)
SELECT count(*) FROM actor WHERE actor_id IN (?, ?)
SELECT count(*) FROM actor WHERE actor_id IN (?, ?, ?)
SELECT count(*) FROM actor WHERE actor_id IN (?, ?, ?, ?)
SELECT count(*) FROM actor WHERE actor_id IN (?, ?, ?, ?, ?)

… generate these ones:

SELECT count(*) FROM actor WHERE actor_id IN (?)
SELECT count(*) FROM actor WHERE actor_id IN (?, ?)
SELECT count(*) FROM actor WHERE actor_id IN (?, ?, ?, ?)
SELECT count(*) FROM actor WHERE actor_id IN (?, ?, ?, ?)
SELECT count(*) FROM actor WHERE actor_id IN (?, ?, ?, ?, ?, ?, ?, ?)

Where the last bind variable is always repeated (padded) until the list is filled up to a length that is a power of 2 (and in Oracle, of course, we’d split lists longer than 1000 items into several lists).

This padding is what jOOQ already offers: https://www.jooq.org/doc/latest/manual/sql-building/dsl-context/custom-settings/settings-in-list-padding/

And maybe, a future Hibernate will offer it too, after I’ve suggested the feature to Vlad Mihalcea from the Hibernate team: https://hibernate.atlassian.net/browse/HHH-12469

Here’s the benchmark logic. It’s a bit more complex than previously:

SET SERVEROUTPUT ON

ALTER SYSTEM FLUSH SHARED_POOL;
ALTER SYSTEM FLUSH BUFFER_CACHE;

CREATE TABLE results (
  run     NUMBER(2),
  stmt    NUMBER(2),
  elapsed NUMBER
);

DECLARE
  v_ts TIMESTAMP WITH TIME ZONE;
  v_repeat CONSTANT NUMBER := 20000;
  v_first_name actor.first_name%TYPE;
  v_last_name  actor.last_name%TYPE;
  
  FUNCTION binds (n NUMBER, pad BOOLEAN) RETURN VARCHAR2 IS
    v_result VARCHAR2(32767);
    v_n_pad  NUMBER(10) := power(2, ceil(ln(n)/ln(2)));
  BEGIN
    FOR i IN 1..n LOOP
      IF v_result IS NULL THEN
        v_result := ':b' || i;
      ELSIF mod(i, 1000) = 0 THEN
        v_result := v_result || ') OR actor_id IN (:b' || i;
      ELSE
        v_result := v_result || ', :b' || i;
      END IF;
    END LOOP;
    
    IF pad THEN
      FOR i IN n + 1 .. v_n_pad LOOP
        IF mod(i, 1000) = 0 THEN
          v_result := v_result || ') OR actor_id IN (:b' || i;
        ELSE
          v_result := v_result || ', :b' || i;
        END IF;
      END LOOP;
    END IF;
    
    RETURN v_result;
  END binds;
  
  FUNCTION vals (n NUMBER, pad BOOLEAN) RETURN VARCHAR2 IS
    v_result VARCHAR2(32767);
    v_n_pad  NUMBER(10) := power(2, ceil(ln(n)/ln(2)));
  BEGIN
    FOR i IN 1..n LOOP
      IF v_result IS NULL THEN
        v_result := i;
      ELSE
        v_result := v_result || ', ' || i;
      END IF;
    END LOOP;
    
    IF pad THEN
      FOR i IN n + 1 .. v_n_pad LOOP
        v_result := v_result || ', ' || n;
      END LOOP;
    END IF;
    
    RETURN v_result;
  END vals;
  
  PROCEDURE run(i NUMBER, pad BOOLEAN) IS
  BEGIN
    EXECUTE IMMEDIATE '
      DECLARE 
        v_count NUMBER;
      BEGIN
        EXECUTE IMMEDIATE ''
          SELECT count(*)
          FROM actor
          WHERE actor_id IN (' || binds(i, pad) || ')
        ''
        INTO v_count
        USING ' || vals(i, pad) || ';
      END;
    ';
  END run;
BEGIN

  -- Repeat the whole benchmark several times to avoid warmup penalty
  FOR r IN 1..5 LOOP
    v_ts := SYSTIMESTAMP;
      
    FOR i IN 1..v_repeat LOOP
      run(mod(i, 100) + 1, FALSE);
    END LOOP;
  
    INSERT INTO results VALUES (r, 1, 
      SYSDATE + ((SYSTIMESTAMP - v_ts) * 86400) - SYSDATE);
    v_ts := SYSTIMESTAMP;
      
    FOR i IN 1..v_repeat LOOP
      run(mod(i, 100) + 1, TRUE);
    END LOOP;
  
    INSERT INTO results VALUES (r, 2, 
      SYSDATE + ((SYSTIMESTAMP - v_ts) * 86400) - SYSDATE);
  END LOOP;
  
  FOR rec IN (
    SELECT 
      run, stmt, 
      CAST(elapsed / MIN(elapsed) OVER() AS NUMBER(10, 5)) ratio 
    FROM results
  )
  LOOP
    dbms_output.put_line('Run ' || rec.run || 
      ', Statement ' || rec.stmt || 
      ' : ' || rec.ratio);
  END LOOP;
END;
/

DROP TABLE results;

Bear in mind the following:
  • We run the benchmark 5 times
  • In each run, we run 20000 queries with each technique
  • Those 20000 queries have between 1 and 100 elements in the IN list
The results show that both approaches fare equally well:
Run 1, Statement 1 : 1.16944
Run 1, Statement 2 : 1.11204
Run 2, Statement 1 : 1.06086
Run 2, Statement 2 : 1.03591
Run 3, Statement 1 : 1.03589
Run 3, Statement 2 : 1.03605
Run 4, Statement 1 : 1.33935
Run 4, Statement 2 : 1.2822
Run 5, Statement 1 : 1
Run 5, Statement 2 : 1.04648
This means that there is no significant overhead caused by the padding in this benchmark. That’s good news. Running the following query shows the number of executions per statement:

SELECT executions, regexp_count(sql_text, ':'), sql_text
FROM v$sql
WHERE lower(sql_text) LIKE '%actor_id in %'
AND lower(sql_text) NOT LIKE '%v$sql%'
AND lower(sql_text) NOT LIKE '%execute_immediate%'
ORDER BY sql_text;

The result being:
EXECS   BINDS   SQL_TEXT
2000	1       SELECT count(*) FROM actor WHERE actor_id IN (:b1)         
2000	2       SELECT count(*) FROM actor WHERE actor_id IN (:b1, :b2)         
1000	3       SELECT count(*) FROM actor WHERE actor_id IN (:b1, :b2, :b3)         
3000	4       ...
1000	5       ...
1000	6       ...
1000	7       ...
5000	8       ...
1000	9       ...
1000	10      ...
1000	11      ...
1000	12      ...
1000	13	...
1000	14	...
1000	15	...
9000	16	...
...
1000	100	...
36000	128	...
The version that creates exactly sized IN lists is always executed 1000 times per size. The version that has padded IN lists sized to the next power of 2 (M = 2 ^ X) is executed once for every original size that pads to M, i.e. 1000 * 2 ^ (X - 1) times (plus another 1000 times, because there’s also an exactly sized IN list of that size). At the end, there are some “lone” 128-element sized IN lists. This is expected. The number of different SQL statements is:
  • 100 for the exactly sized IN lists
  • 7 for the padded IN lists (128 = 2 ^ 7)
So, thus far, we haven’t gained anything, but also not lost much.

What if the lists get longer

The real problem isn’t the individual query execution time, which is fine in both cases, but the effect that dynamic IN lists have on a system. Let’s generate IN lists of up to size 4096, and, bearing in mind that Oracle won’t allow more than 1000 elements per IN list, the lists will be of the form:

SELECT count(*)
FROM actor
WHERE actor_id IN (:b1, :b2, ..., :b1000)
OR actor_id IN (:b1001, :b1002, ..., :b2000)
OR actor_id IN (:b2001, ...)

Now, with a maximum size of 4096, there are quite a few distinct statements that need to be parsed, as opposed to only 12 when we use the padded IN lists. The padded size grows logarithmically, so we’ll never really run into this same problem again, at least not for a single IN list.

How does the comparison fare in another benchmark? We must be careful not to overdo things. Running 2x 20000 queries with variable sized IN lists can easily run for more than an hour on Oracle. Perhaps, it’s Oracle’s way of punishing you for doing something you shouldn’t: having super-long IN lists.

I’ve played around with different sets of parameters to the benchmark, and the clear message here is that having long bind variable lists creates a bottleneck per se (i.e. the binding of the variables takes time), so the difference in parse times isn’t as drastic as it was before, for individual query executions. The overall improvement is this:
Run 1, Statement 1 : 1.42696
Run 1, Statement 2 : 1
So, we get around a 1.5x performance increase when repeating the bind values. More importantly, the fact that we have far fewer cached execution plans does matter, especially when we start exhausting the cursor cache, because with the padding approach, at least that exhaustion is not going to happen.

Alternative options

The above shows that if you really cannot get around an IN list, a simple, viable workaround is to pad the lists. But often, you can get around them. In databases like Oracle and PostgreSQL, which offer array types, you can consider using those once your lists start growing. I’ve blogged about this in a previous blog post: https://blog.jooq.org/2017/03/30/sql-in-predicate-with-in-list-or-with-array-which-is-faster

If your lists become huge, bulk/batch inserting the IDs into a temporary table and then semi-joining that table might be a far better option:

SELECT count(*)
FROM actor
WHERE actor_id IN (
  SELECT id FROM temp_table
)

And ideally, if your list is really not dynamic, but originates from some other table, then please, stop passing around IDs and semi-join the original query that produced the IDs in the first place:

SELECT count(*)
FROM actor
WHERE actor_id IN (
  SELECT id FROM other_table WHERE ...
)

That last option should always be your first choice as it will very likely outperform all the others.
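
For the array option mentioned above, a minimal JDBC sketch for PostgreSQL could look like this (Oracle requires declaring a nominal array type first, as mentioned further down). A single array bind variable replaces the dynamically sized IN list:

import java.sql.Array;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// One array bind variable, one SQL string, one cached plan
static int countActors(Connection connection, Integer[] actorIds) throws SQLException {
    Array array = connection.createArrayOf("integer", actorIds);

    try (PreparedStatement s = connection.prepareStatement(
        "SELECT count(*) FROM actor WHERE actor_id = ANY(?)")) {

        s.setArray(1, array);

        try (ResultSet rs = s.executeQuery()) {
            rs.next();
            return rs.getInt(1);
        }
    }
}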

Why SQL Bind Variables are Important for Performance

A common problem with dynamic SQL is parsing performance in production. What makes matters worse is that many developers do not have access to production environments, so they are unaware of the problem (even if there’s nothing new about this topic). What exactly is the problem?

Execution plan caches

Most database vendors these days ship with an execution plan cache (Oracle calls it cursor cache), where previously parsed SQL statements are stored and their execution plan(s) is cached for reuse. This is the main reason why bind variables are so important (the other reason being SQL injection prevention). By using bind variables, we can make sure that the database will easily recognise an identical SQL statement from a previous execution and be able to re-execute the previously found execution plan. This is in fact one of my favourite topics from my SQL training. Let’s see what happens in various databases, if we run the following queries:

-- First, run them with "inline values" or "constant literals"
SELECT first_name, last_name FROM actor WHERE actor_id = 1;
SELECT first_name, last_name FROM actor WHERE actor_id = 2;

-- Then, run the same queries again with bind values
SELECT first_name, last_name FROM actor WHERE actor_id = ?;
SELECT first_name, last_name FROM actor WHERE actor_id = ?;

Note, it doesn’t matter if the queries are run from JDBC, jOOQ, Hibernate, or the procedural language in the database, e.g. PL/SQL, T-SQL, or PL/pgSQL. The result is always the same.

Let’s run an example

I’ll run the following examples using Oracle, only. Other databases behave in a similar fashion. We’ll run the following script, which includes the above queries, and a query to fetch all the execution plans:

SELECT first_name, last_name FROM actor WHERE actor_id = 1;
SELECT first_name, last_name FROM actor WHERE actor_id = 2;

SET SERVEROUTPUT ON
DECLARE
  v_first_name actor.first_name%TYPE;
  v_last_name  actor.last_name%TYPE;
BEGIN
  FOR i IN 1 .. 2 LOOP
    SELECT first_name, last_name 
    INTO v_first_name, v_last_name
    FROM actor 
    WHERE actor_id = i;
    
    dbms_output.put_line(v_first_name || ' ' || v_last_name);
  END LOOP;
END;
/

SELECT s.sql_id, p.*
FROM v$sql s, TABLE (
  dbms_xplan.display_cursor (
    s.sql_id, s.child_number, 'ALLSTATS LAST'
  )
) p
WHERE lower(s.sql_text) LIKE '%actor_id = %';

The output is:
SQL_ID  90rk04nhr45yz, child number 0
-------------------------------------
SELECT FIRST_NAME, LAST_NAME FROM ACTOR WHERE ACTOR_ID = :B1
 
Plan hash value: 457831946
 
---------------------------------------------------------
| Id  | Operation                   | Name     | E-Rows |
---------------------------------------------------------
|   0 | SELECT STATEMENT            |          |        |
|   1 |  TABLE ACCESS BY INDEX ROWID| ACTOR    |      1 |
|*  2 |   INDEX UNIQUE SCAN         | PK_ACTOR |      1 |
---------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - access("ACTOR_ID"=:B1)
 

SQL_ID  283s8m524c9rk, child number 0
-------------------------------------
SELECT first_name, last_name FROM actor WHERE actor_id = 2
 
Plan hash value: 457831946
 
---------------------------------------------------------
| Id  | Operation                   | Name     | E-Rows |
---------------------------------------------------------
|   0 | SELECT STATEMENT            |          |        |
|   1 |  TABLE ACCESS BY INDEX ROWID| ACTOR    |      1 |
|*  2 |   INDEX UNIQUE SCAN         | PK_ACTOR |      1 |
---------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - access("ACTOR_ID"=2)
 

SQL_ID  3mks715670mqw, child number 0
-------------------------------------
SELECT first_name, last_name FROM actor WHERE actor_id = 1
 
Plan hash value: 457831946
 
---------------------------------------------------------
| Id  | Operation                   | Name     | E-Rows |
---------------------------------------------------------
|   0 | SELECT STATEMENT            |          |        |
|   1 |  TABLE ACCESS BY INDEX ROWID| ACTOR    |      1 |
|*  2 |   INDEX UNIQUE SCAN         | PK_ACTOR |      1 |
---------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - access("ACTOR_ID"=1)

The plans are always the same and since we’re accessing primary key values, we’ll always get the same cardinalities, so there doesn’t seem to be anything wrong with any individual execution. But notice how the predicate information is slightly different. When querying for a constant value, the predicate will include that value right there, whereas with the bind variable, we don’t know what the predicate value is from the plan. This is perfectly expected, because we want to reuse that plan for both executions of the query. With another query, we can see the number of executions of each statement:

SELECT sql_id, sql_text, executions
FROM v$sql
WHERE sql_id IN (
  '90rk04nhr45yz', 
  '283s8m524c9rk', 
  '3mks715670mqw'
);

SQL_ID          SQL_TEXT                                                        EXECUTIONS
------------------------------------------------------------------------------------------
90rk04nhr45yz	SELECT FIRST_NAME, LAST_NAME FROM ACTOR WHERE ACTOR_ID = :B1 	2
283s8m524c9rk	SELECT first_name, last_name FROM actor WHERE actor_id = 2	1
3mks715670mqw	SELECT first_name, last_name FROM actor WHERE actor_id = 1	1
This is where it gets more interesting. In the second case where we used a bind variable (which was generated by PL/SQL, automatically), we could reuse the statement, cache its plan, and run it twice.

Meh, does it matter?

It matters for two reasons:
  • Performance of individual executions
  • Performance of your entire system
How this affects individual executions

It seems very obvious that when we can cache something, the slight overhead of maintaining that cache is small compared to the gain of not having to redo the work whose result is cached. The work in question here is parsing the SQL statement and creating an execution plan for it. Even if the plan is trivial, as in the above examples, there is overhead involved with calculating this plan. This overhead can best be shown in a benchmark, a technique that I also display in my SQL training:

SET SERVEROUTPUT ON

-- Don't run these on production
-- But on your development environment, this guarantees clean caches
ALTER SYSTEM FLUSH SHARED_POOL;
ALTER SYSTEM FLUSH BUFFER_CACHE;

CREATE TABLE results (
  run     NUMBER(2),
  stmt    NUMBER(2),
  elapsed NUMBER
);

DECLARE
  v_ts TIMESTAMP WITH TIME ZONE;
  v_repeat CONSTANT NUMBER := 2000;
  v_first_name actor.first_name%TYPE;
  v_last_name  actor.last_name%TYPE;
BEGIN

  -- Repeat whole benchmark several times to avoid warmup penalty
  FOR r IN 1..5 LOOP
    v_ts := SYSTIMESTAMP;
      
    FOR i IN 1..v_repeat LOOP
      BEGIN
        EXECUTE IMMEDIATE '
          SELECT first_name, last_name 
          FROM actor 
          WHERE actor_id = ' || i 
        INTO v_first_name, v_last_name;
      EXCEPTION
        -- Please forgive me
        WHEN OTHERS THEN NULL;
      END;
    END LOOP;
  
    INSERT INTO results VALUES (
      r, 1, SYSDATE + ((SYSTIMESTAMP - v_ts) * 86400) - SYSDATE);
    v_ts := SYSTIMESTAMP;
      
    FOR i IN 1..v_repeat LOOP
      BEGIN
        EXECUTE IMMEDIATE '
          SELECT first_name, last_name 
          FROM actor 
          WHERE actor_id = :i'
        INTO v_first_name, v_last_name
        USING i;
      EXCEPTION
        -- Please forgive me
        WHEN OTHERS THEN NULL;
      END;
    END LOOP;
      
    INSERT INTO results VALUES (
      r, 2, SYSDATE + ((SYSTIMESTAMP - v_ts) * 86400) - SYSDATE);
  END LOOP;
  
  FOR rec IN (
    SELECT 
      run, stmt, 
      CAST(elapsed / MIN(elapsed) OVER() AS NUMBER(10, 5)) ratio 
    FROM results
  )
  LOOP
    dbms_output.put_line('Run ' || rec.run || 
      ', Statement ' || rec.stmt || 
      ' : ' || rec.ratio);
  END LOOP;
END;
/

DROP TABLE results;

As always, on the jOOQ blog, we don’t publish actual execution times to comply with license restrictions on benchmark publications, so we’re only comparing each execution with the fastest execution. This is the result of the above:
Run 1, Statement 1 : 83.39893
Run 1, Statement 2 : 1.1685
Run 2, Statement 1 : 3.02697
Run 2, Statement 2 : 1
Run 3, Statement 1 : 2.72028
Run 3, Statement 2 : 1.03996
Run 4, Statement 1 : 2.70929
Run 4, Statement 2 : 1.00866
Run 5, Statement 1 : 2.71895
Run 5, Statement 2 : 1.02198
We can see that consistently, the SQL version using a bind variable is 2.5x as fast as the one not using the bind variable. This overhead is very significant for trivial queries; it might be a bit less so for more complex queries, where the execution itself takes more time compared to the parsing. But it should be obvious that the overhead is a price we do not want to pay. We want the query and its plan to be cached!

Notice also how the very first execution of the benchmark has a very significant overhead, because all of the 2000 queries will have been encountered for the first time before they’re cached for the second run. That’s a price we’re only paying during the first run, though.

How this affects your entire system

Not only do individual query executions suffer, your entire system does, too. After running the benchmark a few times, these are the execution statistics I’m getting from the Oracle cursor cache:

SELECT 
  count(*), 
  avg(executions), 
  min(executions), 
  max(executions)
FROM v$sql
WHERE lower(sql_text) LIKE '%actor_id = %'
AND sql_text NOT LIKE '%v$sql%';

Yielding:
count  avg      min  max
2001   9.9950   5    10000
There are now 2001 queries in my cache: the one using a bind variable, which has been executed 10000 times (the benchmark was repeated 5x with 2000 executions per run), and 2000 literal queries that have been executed 5x each (once per run). If instead we run 20000 queries per run (and remember, the loop variable corresponds to the filtered ACTOR_ID), the result will be vastly different!
Run 1, Statement 1 : 86.85862
Run 1, Statement 2 : 1.13546
Run 2, Statement 1 : 78.39842
Run 2, Statement 2 : 1.01298
Run 3, Statement 1 : 72.45254
Run 3, Statement 2 : 1
Run 4, Statement 1 : 73.78357
Run 4, Statement 2 : 2.24365
Run 5, Statement 1 : 84.89842
Run 5, Statement 2 : 1.143
Oh, my. Why has this happened? Let’s check the cursor cache stats again:

SELECT 
  count(*), 
  avg(executions), 
  min(executions), 
  max(executions)
FROM v$sql
WHERE lower(sql_text) LIKE '%actor_id = %'
AND sql_text NOT LIKE '%v$sql%';

Yielding:
count  avg      min  max
15738  3.4144   1    20000
This is a vastly different result. We don’t have all of our 20000 queries in the cursor cache, only some of them. This means that some statements have been purged from the cache to make room for new ones (which is reasonable behaviour for any cache). But purging them is problematic too, because the way the benchmark was designed, they will re-appear again in the second, third, fourth, and fifth run, so we should have kept them in the cache. And since we’re executing every query the same number of times, there really wasn’t any way of identifying a “more reasonable” (i.e. rare) query to purge.

Resources in a system are always limited, and so is the cursor cache size. The more distinct queries we’re running in a system, the less they can profit from the cursor cache. This is not a problem for rarely run queries, including reports, analytics, or some special queries run only by some very few users. But the queries that are being run all the time should always be cached.

I cannot stress enough how serious this can be: In the above case, what should have been a single query in the cursor cache exploded into 20000 queries, shoving a lot of much more useful queries out of the cache. Not only does this slow down the execution of this particular query, it will purge tons of completely unrelated queries from the cache, thus slowing down the entire system by similar factors. If everyone is slowed down drastically, everyone will start to queue up to have their SQL queries parsed, and you can bring down your entire server with this problem (in the worst case)!

Workaround

Some databases can be told to parse constant literals into bind variables automatically. In Oracle, you can specify CURSOR_SHARING = FORCE as a “quick fix”. In SQL Server, it’s called forced parameterization. But this approach has its own limitations and overhead, as it means that additional parsing work needs to be performed every time to recognise constant literals and replace them by bind variables. This overhead will then apply to all queries!
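
For reference, on Oracle that quick fix can be applied at the session level, e.g. from JDBC (a sketch; the parameter can also be set system-wide by a DBA):

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

// Ask Oracle to rewrite constant literals to bind variables for this session
static void forceCursorSharing(Connection connection) throws SQLException {
    try (Statement s = connection.createStatement()) {
        s.execute("ALTER SESSION SET cursor_sharing = FORCE");
    }
}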

Conclusion

Bind variables are very important for SQL performance. After tons of training people to use them for SQL injection reasons (which is already a good thing), we have now seen how important they are also for performance reasons.

Not using a bind variable for values like IDs, timestamps, names, or anything that is uniformly distributed and has many values in your column will produce the above problem. The exception are bind variables for columns with only very few distinct values (like true/false flags, codes that encode a given state, etc.), in which case a constant literal can be a reasonable option (follow-up blog post coming, soon). But bind variables should always be your default choice.

If you’re using a client-side tool like jOOQ or Hibernate, bind variables tend to be the default, and you’re fine. If you’re using a stored procedural language like PL/SQL or T-SQL, bind variables are generated automatically and you’re fine as well. But if you’re using JDBC or any JDBC wrapper like Spring’s JdbcTemplates, or any other string based API, like JPA’s native query API, then you are on your own again, and you must explicitly take care of using bind variables every time you have variable input.

And in our next article, we’ll see how bind variables are actually not enough, when using dynamic IN lists, another topic that I borrowed from my SQL training.

SQL IN Predicate: With IN List or With Array? Which is Faster?

Hah! Got nerd-sniped again: http://stackoverflow.com/questions/43099226/how-to-make-jooq-to-use-arrays-in-the-in-clause/43102102

A jOOQ user was wondering why jOOQ would generate an IN list for a predicate like this:

Java

COLUMN.in(1, 2, 3, 4)

SQL

COLUMN in (?, ?, ?, ?)

… when in fact the following predicate could have been generated instead:

COLUMN = any(?::int[])

In the second case, there would have been only one single bind variable instead of 4, and the SQL generation and parsing work would have been “much” less (maybe not for the IN list of size 4, but let’s imagine a list of 50 values).
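
For reference, the jOOQ API for the array variant is a quantified comparison predicate. A sketch (assuming COLUMN is a Field<Integer>, as in the snippets above; whether a dialect renders this with a single array bind variable depends on the dialect):

import static org.jooq.impl.DSL.any;

import org.jooq.Condition;

// Instead of COLUMN.in(1, 2, 3, 4), compare against an array of values
Condition arrayPredicate = COLUMN.eq(any(1, 2, 3, 4));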

A disclaimer

First off, a disclaimer: In databases that have a cursor cache / plan cache (e.g. Oracle or SQL Server), you should be careful with long IN lists, because they will probably trigger a hard parse every time you run them, as by the time you run the exact same predicate (with 371 elements in the list) again, the execution plan will have been purged from the cache. So, you cannot really profit from the cache. I’m aware of this problem, and it will be the topic of another blog post, soon. Let’s stick to PostgreSQL, whose “plan cache” isn’t really that sophisticated.

Measure, don’t guess

The question was about improving the speed of parsing a SQL statement. Parsers are really fast, so parsing shouldn’t be a problem. Generating an execution plan certainly does cost more time, but again, since PostgreSQL’s plan cache isn’t very sophisticated, this won’t play into the issue here. So the question is really: Is an IN list really that bad in PostgreSQL? Would an array bind variable be much better?

Since our recent post about benchmarking, we now know that we shall never guess, but always measure. I’m again using the Sakila database to run these two queries:

-- IN list
SELECT * 
FROM film 
JOIN film_actor USING (film_id) 
JOIN actor USING (actor_id) 
WHERE film_id IN (?, ?, ?, ?)

-- Array
SELECT * 
FROM film 
JOIN film_actor USING (film_id) 
JOIN actor USING (actor_id) 
WHERE film_id = ANY(?)

Let’s try lists of length 4, first. The benchmark is here:

DO $$
DECLARE
  v_ts TIMESTAMP;
  v_repeat CONSTANT INT := 1000;
  rec RECORD;
  v_e1 INT := 1;
  v_e2 INT := 2;
  v_e3 INT := 4;
  v_e4 INT := 8;
  v_any_arr INT[] := ARRAY[v_e1, v_e2, v_e3, v_e4];
BEGIN
  FOR r IN 1..5 LOOP
    v_ts := clock_timestamp();

    FOR i IN 1..v_repeat LOOP
      FOR rec IN (
        SELECT * 
        FROM film 
        JOIN film_actor USING (film_id) 
        JOIN actor USING (actor_id) 
        WHERE film_id IN (v_e1, v_e2, v_e3, v_e4)
      ) LOOP
        NULL;
      END LOOP;
    END LOOP;

    RAISE INFO 'Run %, Statement 1: %', 
      r, (clock_timestamp() - v_ts); 
    v_ts := clock_timestamp();

    FOR i IN 1..v_repeat LOOP
      FOR rec IN (
        SELECT * 
        FROM film 
        JOIN film_actor USING (film_id) 
        JOIN actor USING (actor_id) 
        WHERE film_id = ANY(v_any_arr)
      ) LOOP
        NULL;
      END LOOP;
    END LOOP;

    RAISE INFO 'Run %, Statement 2: %', 
      r, (clock_timestamp() - v_ts); 
  END LOOP;
END$$;

The result being:
INFO:  Run 1, Statement 1: 00:00:00.112195
INFO:  Run 1, Statement 2: 00:00:00.450461
INFO:  Run 2, Statement 1: 00:00:00.109792
INFO:  Run 2, Statement 2: 00:00:00.446518
INFO:  Run 3, Statement 1: 00:00:00.105413
INFO:  Run 3, Statement 2: 00:00:00.44298
INFO:  Run 4, Statement 1: 00:00:00.108249
INFO:  Run 4, Statement 2: 00:00:00.476527
INFO:  Run 5, Statement 1: 00:00:00.120229
INFO:  Run 5, Statement 2: 00:00:00.448214
Interesting. So, the IN list outperforms the array bind variable every time by a factor of 4 (which is the size of the array / list!). So, let’s try 8 values, then. Here are the values and the adapted query 1:

-- values
  v_e1 INT := 1;
  v_e2 INT := 2;
  v_e3 INT := 4;
  v_e4 INT := 8;
  v_e5 INT := 16;
  v_e6 INT := 32;
  v_e7 INT := 64;
  v_e8 INT := 128;
  v_any_arr INT[] := ARRAY[v_e1, v_e2, v_e3, v_e4, v_e5, v_e6, v_e7, v_e8];

-- adapted query 1 ...
        WHERE film_id IN (v_e1, v_e2, v_e3, v_e4, v_e5, v_e6, v_e7, v_e8)
-- ...

The result is still impressive:
INFO:  Run 1, Statement 1: 00:00:00.182646
INFO:  Run 1, Statement 2: 00:00:00.63624
INFO:  Run 2, Statement 1: 00:00:00.184814
INFO:  Run 2, Statement 2: 00:00:00.685976
INFO:  Run 3, Statement 1: 00:00:00.188108
INFO:  Run 3, Statement 2: 00:00:00.634903
INFO:  Run 4, Statement 1: 00:00:00.184933
INFO:  Run 4, Statement 2: 00:00:00.626616
INFO:  Run 5, Statement 1: 00:00:00.185879
INFO:  Run 5, Statement 2: 00:00:00.636723
The IN list query now takes almost 2x as long (but not quite 2x), whereas the array query now takes around 1.5x as long. It looks as though arrays become the better choice when their size increases. So, let’s do this! With 32 bind variables in the IN list, or 32 array elements respectively:
INFO:  Run 1, Statement 1: 00:00:00.905064
INFO:  Run 1, Statement 2: 00:00:00.752819
INFO:  Run 2, Statement 1: 00:00:00.760475
INFO:  Run 2, Statement 2: 00:00:00.758247
INFO:  Run 3, Statement 1: 00:00:00.777667
INFO:  Run 3, Statement 2: 00:00:00.895875
INFO:  Run 4, Statement 1: 00:00:01.308167
INFO:  Run 4, Statement 2: 00:00:00.789537
INFO:  Run 5, Statement 1: 00:00:00.788606
INFO:  Run 5, Statement 2: 00:00:00.776159
Both are about equally fast. 64 bind values!
INFO:  Run 1, Statement 1: 00:00:00.915069
INFO:  Run 1, Statement 2: 00:00:01.058966
INFO:  Run 2, Statement 1: 00:00:00.951488
INFO:  Run 2, Statement 2: 00:00:00.906285
INFO:  Run 3, Statement 1: 00:00:00.907489
INFO:  Run 3, Statement 2: 00:00:00.892393
INFO:  Run 4, Statement 1: 00:00:00.900424
INFO:  Run 4, Statement 2: 00:00:00.903447
INFO:  Run 5, Statement 1: 00:00:00.961805
INFO:  Run 5, Statement 2: 00:00:00.951697
Still about the same. OK… INTERN! Get over here. I need you to “generate” 128 bind values on this query. Yep, as expected. Finally, arrays start to outperform IN lists:
INFO:  Run 1, Statement 1: 00:00:01.122866
INFO:  Run 1, Statement 2: 00:00:01.083816
INFO:  Run 2, Statement 1: 00:00:01.416469
INFO:  Run 2, Statement 2: 00:00:01.134882
INFO:  Run 3, Statement 1: 00:00:01.122723
INFO:  Run 3, Statement 2: 00:00:01.087755
INFO:  Run 4, Statement 1: 00:00:01.143148
INFO:  Run 4, Statement 2: 00:00:01.124902
INFO:  Run 5, Statement 1: 00:00:01.236722
INFO:  Run 5, Statement 2: 00:00:01.113741

Using Oracle

Oracle also has array types (although you have to declare them as nominal types first, but that’s not a problem here). Here are some benchmark results (as always, not actual benchmark results, but anonymised units of measurement, i.e. these aren’t seconds but… Larrys):

4 bind values
Run 1, Statement 1 : 01.911000000
Run 1, Statement 2 : 02.852000000
Run 2, Statement 1 : 01.659000000
Run 2, Statement 2 : 02.680000000
Run 3, Statement 1 : 01.628000000
Run 3, Statement 2 : 02.664000000
Run 4, Statement 1 : 01.629000000
Run 4, Statement 2 : 02.657000000
Run 5, Statement 1 : 01.636000000
Run 5, Statement 2 : 02.688000000
128 bind values
Run 1, Statement 1 : 04.010000000
Run 1, Statement 2 : 06.275000000
Run 2, Statement 1 : 03.749000000
Run 2, Statement 2 : 05.440000000
Run 3, Statement 1 : 03.985000000
Run 3, Statement 2 : 05.387000000
Run 4, Statement 1 : 03.807000000
Run 4, Statement 2 : 05.688000000
Run 5, Statement 1 : 03.782000000
Run 5, Statement 2 : 05.803000000
The number of bind values doesn’t really seem to matter. There’s always a constant overhead of using the array bind variable compared to the IN list, but that might as well be a benchmarking error. For instance, when I added the /*+GATHER_PLAN_STATISTICS*/ hint to both queries, interestingly, the one with the array got significantly faster, whereas the IN list one was not affected… Weird? An explanation can be found in the comments, specifically in Kim Berg Hansen’s comment:
It looks like the default cardinality of the collection is assumed by the optimizer to be 8168 (at least in my 12.1.0.2 db). So probably full scan of actor with hash join.
Indeed, the TABLE() constructor (in this case) always seems to yield a constant cardinality estimate of 8168, despite the array containing much less data. So hinting approximate cardinalities might help here, to get nested loop joins for small arrays.

Conclusion

This article doesn’t go into why there’s such a big difference for small lists when the benefit is only apparent for quite large lists. But it has once again shown that we must not optimise prematurely in SQL, but measure, measure, measure things.

IN lists in dynamic SQL queries can be a big issue in production when they lead to cursor cache / plan cache saturation and a lot of “hard parsing”. So, the benefit of using the array is much more drastic when the content is big, as we can recycle execution plans much more often than with IN lists. But chances are that IN lists may be faster for single executions.

In any case: Choose carefully when following advice that you find somewhere on the Internet. Also, when following this advice. I ran the benchmark on PostgreSQL 9.5 and Oracle 11gR2 XE. Both are not the latest database versions. Try to measure things again on your side, to be sure that your “improvement” is really an actual improvement! And if in doubt, don’t optimise, until you’re sure you actually have a problem.