jOOQ Tuesdays: Brett Wooldridge Shows What it Takes to Write the Fastest Java Connection Pool


Welcome to the jOOQ Tuesdays series. In this series, we’ll publish an article on the third Tuesday every other month where we interview someone we find exciting in our industry from a jOOQ perspective. This includes people who work with SQL, Java, Open Source, and a variety of other related topics.

[Image: Brett Wooldridge]

I’m very excited to feature today Brett Wooldridge, creator of HikariCP, the fastest connection pool available for Java.

Brett, you’ve created one of the most popular connection pools for Java: HikariCP. What made your library so popular?

I’ll provide some backstory on HikariCP before I answer that, but I’ll tease the answer by saying “marketing”.

A few years ago I was creating a product prototype for the company I work for, and I needed a connection pool. Like most developers I just wanted to drop in a pool and move on, so I took to the web to find the most popular and actively maintained library. Unfortunately, while load testing the prototype we started encountering deadlocks, and exceptions indicating connection state bleed over between threads.

Because the pool was open source, I thought I’d just pull down the code, find and fix the problems, and contribute back. But when I opened the code, I found thousands of lines more code than I was expecting.  Added to the mix were many locks, nested, sometimes acquired in one method and released in some distant place. There was simply no way to reason about where potential deadlocks lurked, even if we found and fixed the ones we encountered.

I picked up another pool and inspected its code. The lock semantics were clearer, but the volume of code was still more than 2x what I expected, especially given that it was delegating the core pooling logic to a separate library.

In addition, all of the pools I studied violated JDBC contracts in multiple ways. In as much as it is possible, a pool should return a Connection that is indistinguishable from one received in the absence of the pool. But these pools didn’t automatically close Statements when a connection was “closed” (returned), or clear warnings, or rollback uncommitted transactions, and they didn’t reset properties altered by the user such as auto-commit or transaction isolation level, and more; resulting in the next consumer getting a “dirty” connection.

I thought, “Really? This is the state of connection pools in the ecosystem after 20 years of Java?” Out of necessity and frustration, I created HikariCP.

To be fair, since I started HikariCP some pools have made some of these “correctness” behaviors configurable, but none of them do so by default and I suspect most users are running with the safety off.  At least two popular pools fail to complete our benchmark with OutOfMemory exceptions when they are enabled.  Conversely, HikariCP doesn’t support an unsafe mode of operation.

Returning to your question, as noted above there were many established pools available, so how did HikariCP become popular?  “Correctness” and reliability are a tough sell, so I focused on promoting performance, and started with a simple tweet. One follower led to another.  Some users tweeted about big performance gains, and improved reliability, and at some point in 2015 the Wix engineering team wrote a blog about switching to HikariCP.

In essence, simple word of mouth has led to HikariCP’s rising popularity, with an initial “marketing” push based on performance.  I do hope that over time more users will give equal weight to correctness and reliability, without which performance is meaningless, and for my part I plan to write more about those aspects of HikariCP.  

You quoted Edsger Dijkstra: “Simplicity is prerequisite for reliability.” – That reminds me of Antoine de Saint-Exupery’s “Perfection is Achieved Not When There Is Nothing More to Add, But When There Is Nothing Left to Take Away”. How do you manage to keep things simple when this world only ever gets more complicated?

Resisting complexity through feature-creep can be challenging.  I get a lot of requests for this or that feature, and while each may be simple in and of itself, if taken in totality would significantly increase complexity and code size.  Of course, that is not to say that I don’t add features.

For example, initial versions of HikariCP only supported a fixed size pool.  HikariCP was designed for systems with fairly constant load, and in that environment pools tend to stay at their maximum size, so I saw little need to complicate the code to support dynamic sizing.  Can you imagine a server at Google falling idle for several minutes?  Additionally, I feel like the more axes of configuration there are, the more difficult it is for users to optimally configure a pool.  However, eventually there were enough users who needed dynamic sizing, and its absence was a barrier to adoption, so support was added.  Principally, I did not want lack of dynamic sizing support to deprive users of the reliability and correctness benefits of HikariCP.

Still, I probably reject the vast majority of feature requests. As the custodian of HikariCP keeping it simple and true to that core philosophy is in the best interest of the community.  I always try to minimize the “surface area”, both in terms of code and configuration.  The larger the surface area of an API, the more difficult it is to comprehend.  Our brains have a limit for the amount of contextual information that can be held “in view” at one time; this is true in a lot of contexts.  For example, when reading code, methods larger than a certain size, or conditionals of more than a certain number of terms, are difficult to follow or reason about.  Generally, for users of HikariCP, the “surface area” is manifest in the number of configuration parameters.  While I can hardly say that “Perfection [has been] achieved”, I do feel like there is not much left to take away without cutting into functionality.

Few libraries go to the byte code level to optimise their code. While this helps in benchmarks, did it also help your users in production? What were the biggest caveats you found while micro-optimising?

Definitely.  Maybe some developers are dismissive of the potential gains, because in their minds they think, “What does it matter if connection acquisition takes 100ns or 100μs, the query is going to take 10ms anyway?”  However, pools intercept dozens of methods, and the “close()” path is typically slower than acquisition, so it’s not that simple.  I often get reports from users providing confirmation of real world performance improvements.  It’s anecdotal but one user initially commented in a bug report, “We’re testing HikariCP at the client and have had great initial success – an application loading 1 million records over multiple HTTP threads and putting them in the DB had its run time cut by 70% after moving from Tomcat CP to HikariCP!”  The follow-up comment on the bug was, “This was a bug in our side, using some unrelated non-threadsafe code.  No issue.  After fixing the bug, the code runs about 2x faster using HikariCP than Tomcat CP.”  That’s pretty good; and yet some reports surprise even me.

Regarding optimisation, and as long as we’re quoting famous thinkers, I would be remiss if I didn’t cite Knuth: “We should forget about small efficiencies, say about 97% of the time: premature optimisation is the root of all evil.”  I think the key word here is “premature”.  It is definitely better to write the code as it naturally comes and then, based on detailed profiling and benchmarking, perform “peephole optimisations” (to hijack a word from compiler theory).  At the same time, I would estimate that half of the performance gains in HikariCP have come as the result of algorithmic changes, rather than low-level optimisations.

Regarding caveats to micro-optimising, it would be hard to convey how much I have learned, and am still learning.  I’d like to give a shout-out to Aleksey Shipilëv for his excellent JMH micro-benchmark framework.  Aleksey has become somewhat of a JVM performance oracle (no pun intended, he used to work for Oracle).  The JVM performs an amazing array of optimisations, and if one is not careful then what appears to be a clever optimisation in the code simply confuses the JIT’s pattern-based optimiser and the result is slower rather than faster.

In order to effectively optimise on the JVM you sometimes end up reading the JIT source code, and you must become familiar with concepts such as dead code elimination, loop invariant hoisting, constant propagation, virtual call inlining, and many more.  Even with a good grip on these concepts I am sometimes surprised by the JVM in my attempts at optimisation.  In addition to the JIT, you really must understand the Java Memory Model (JMM) and how it maps onto CPU architectures like x86.

Lastly, after the design of algorithms, contention for shared state is the source of most bottlenecks (see the aforementioned JMM), so recently the biggest gains (for example, in v2.6.0) have come from tricks that simply avoid it; the fastest code is code that is never executed.

If there is a main takeaway, it is “trust the benchmarks”: your assumptions and intuitions are wrong more often than you imagine.

Your fellow jOOQ Tuesdays interviewee Vlad Mihalcea talked to us about queueing theory. How does this compare to what you wrote about connection pool sizing?

I have great respect for Vlad, I think we’re both members of the Mutual Admiration Society.  His FlexyPool is trying to solve a difficult problem; that being how to automatically tune optimal pool settings for varying loads.  Ultimately, the upper-bound is constrained by the database’s optimal concurrent query capacity, which is where my write-up on pool sizing comes into play.  However, there is a large amount of configuration space in-between a minimally sized pool and that upper-bound, which is where FlexyPool is trying to add value, by ensuring that the pool is “right sized”, dynamically, for the load it is servicing.

I say it is a difficult problem, because connection pools on modern multi-core servers likely present as an M/G/k queue in queueing theory; arrivals have a Markovian distribution, service times have a General distribution, and there are k servers (where “server” is defined as an abstract single-threaded processor).  Quoting Wikipedia, “Most performance metrics for this queueing system are not known and remain an open problem.”  Modeling connection pools as an M/M/c queue might provide a decent approximation for the purposes of predicting queue lengths, but service times are not likely to have a Markovian distribution.  Of course, there are also non-Markovian stochastic models in queueing theory that could be applied.  Complicating everything is the fact that queued waiters (threads) can abandon the queue before service, for example when a timeout is reached.  That adds an additional twist when trying to predict queue lengths and wait times.  Hats off to Vlad for taking on this problem!

Anyway, what I wrote about setting the upper-bound on pool sizing translates to pinning the k (or c) value in those respective Markovian queueing theory models.
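To make that “pinning” concrete in the simplest of those models (this is general M/M/c queueing theory, not anything HikariCP-specific): if requests arrive at rate λ and a single connection services them at rate μ, the queue is only stable while utilisation stays below one, i.e.

ρ = λ / (c * μ) < 1,  which means  c > λ / μ

so the pool size c must at least cover the offered load λ / μ; the pool sizing write-up is essentially about how much head-room beyond that bare minimum is actually worth having.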

You chose a Japanese word in your product: 光 (Hikari, “Light”). What’s your connection to Japan?

I’ve lived and worked in Tokyo since 2008, though I think my Japanese is far behind where it should be given my time here.  I chalk that up to preferring time at the keyboard to language study.

As you mentioned, Hikari (pronounced Hi-ka-lee) translates to “Light” (as in sunlight).  In English, it is a double entendre in the context of HikariCP; though in Japanese it would not be.  “Light” in the sense of “the speed of…”, and “light” in the sense of being light in terms of code weight.

Beautiful SQL: Lateral Unnesting of Array Columns


Sometimes, SQL can just be so beautiful. One of the less mainstream features in SQL is the array type (or nested collections). In fact, it’s so not mainstream that only 2 major databases actually support it: Oracle and PostgreSQL (and HSQLDB and H2 in the Java ecosystem).

In PostgreSQL, you can write:

CREATE TABLE blogs (
  id    SERIAL NOT NULL PRIMARY KEY,
  title text   NOT NULL,
  tags  text[]
)

Or in Oracle:

-- Oracle only knows nominal array types, so we have to declare
-- them in advance
CREATE TYPE tag_t AS VARRAY(100) OF VARCHAR2(100 CHAR);

CREATE TABLE blogs (
  id    NUMBER(18) GENERATED BY DEFAULT AS IDENTITY 
                   NOT NULL PRIMARY KEY,
  title VARCHAR2(100 CHAR) NOT NULL,
  tags  tag_t
)

So, roughly the same thing. Now, let’s insert some data. How about the 3 most recent posts on the jOOQ blog, prior to this one:

In PostgreSQL:

INSERT INTO blogs (title, tags)
VALUES (
  'How to Fetch Oracle 12c Implicit Cursors with JDBC and jOOQ',
  ARRAY[
    'implicit cursor',
    'batch',
    'oracle',
    'jooq',
    'jdbc',
    'resultset'
  ]
), (
  'How to Execute SQL Batches With JDBC and jOOQ',
  ARRAY[
    'batch',
    'batch statement',
    'mysql',
    'jooq',
    'jdbc',
    'sql server',
    'sql'
  ]
), (
  'How to Emulate Partial Indexes in Oracle',
  ARRAY[
    'optimisation',
    'index',
    'partial index',
    'oracle',
    'sql',
    'postgresql',
    't-sql',
    'sql server'
  ]
)

Or in Oracle:

INSERT INTO blogs (title, tags)
VALUES (
  'How to Fetch Oracle 12c Implicit Cursors with JDBC and jOOQ',
  tag_t(
    'implicit cursor',
    'batch',
    'oracle',
    'jooq',
    'jdbc',
    'resultset'
  ));
INSERT INTO blogs (title, tags)
VALUES (
  'How to Execute SQL Batches With JDBC and jOOQ',
  tag_t(
    'batch',
    'batch statement',
    'mysql',
    'jooq',
    'jdbc',
    'sql server',
    'sql'
  ));
INSERT INTO blogs (title, tags)
VALUES (
  'How to Emulate Partial Indexes in Oracle',
  tag_t(
    'optimisation',
    'index',
    'partial index',
    'oracle',
    'sql',
    'postgresql',
    't-sql',
    'sql server'
  ));

Now, the array type by itself is not very useful. It gets really interesting when we unnest it again into a table. For instance, in PostgreSQL:

SELECT title, tag
FROM blogs, LATERAL unnest(tags) AS tags(tag);

Or in Oracle:

-- Classic style
SELECT title, tags.*
FROM blogs, TABLE(tags) tags;

-- Since Oracle 12c
SELECT title, tags.*
FROM blogs, LATERAL (SELECT * FROM TABLE(tags)) tags;

Note that we’re using the keyword LATERAL in some of the above queries. For those of you who are used to T-SQL syntax, it’s almost the same thing as APPLY. Both LATERAL and APPLY are also very useful with table valued functions (stay tuned for a blog post on those).
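Incidentally, for the T-SQL folks: SQL Server has no array type, but if we imagine a hypothetical variant of the blogs table where the tags are stored in a comma-separated string column, the APPLY counterpart of the above would look something like this, using the built-in STRING_SPLIT() table valued function:

SELECT b.title, t.value AS tag
FROM blogs AS b
CROSS APPLY STRING_SPLIT(b.tags, ',') AS t;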

The idea behind LATERAL is that the table (derived table, subquery, function call, array unnesting) on the right side of LATERAL can “laterally” access stuff from the left side of LATERAL in order to produce new tables. In the above query, we’re producing a new table of tags for each blog post, and then we cross join the two tables.

Here’s what the above queries result in:

title                                                         tag
-----------------------------------------------------------------------------
How to Fetch Oracle 12c Implicit Cursors with JDBC and jOOQ   implicit cursor
How to Fetch Oracle 12c Implicit Cursors with JDBC and jOOQ   batch
How to Fetch Oracle 12c Implicit Cursors with JDBC and jOOQ   oracle
How to Fetch Oracle 12c Implicit Cursors with JDBC and jOOQ   jooq
How to Fetch Oracle 12c Implicit Cursors with JDBC and jOOQ   jdbc
How to Fetch Oracle 12c Implicit Cursors with JDBC and jOOQ   resultset
How to Execute SQL Batches With JDBC and jOOQ                 batch
How to Execute SQL Batches With JDBC and jOOQ                 batch statement
How to Execute SQL Batches With JDBC and jOOQ                 mysql
How to Execute SQL Batches With JDBC and jOOQ                 jooq
How to Execute SQL Batches With JDBC and jOOQ                 jdbc
How to Execute SQL Batches With JDBC and jOOQ                 sql server
How to Execute SQL Batches With JDBC and jOOQ                 sql
How to Emulate Partial Indexes in Oracle                      optimisation
How to Emulate Partial Indexes in Oracle                      index
How to Emulate Partial Indexes in Oracle                      partial index
How to Emulate Partial Indexes in Oracle                      oracle
How to Emulate Partial Indexes in Oracle                      sql
How to Emulate Partial Indexes in Oracle                      postgresql
How to Emulate Partial Indexes in Oracle                      t-sql
How to Emulate Partial Indexes in Oracle                      sql server

You can immediately see the cross join semantics here, as we’re combining each tag (per post) with its post.

Looking for ordinals (i.e. each tag’s position inside the array) along with the tags? Easy!

Just add the powerful WITH ORDINALITY clause after the UNNEST() call in PostgreSQL:

SELECT title, tag, ordinal
FROM blogs, LATERAL unnest(tags) WITH ORDINALITY AS tags(tag, ordinal);

A bit more complicated to emulate in Oracle:

-- Fancy, with a window function
SELECT title, tags.*
FROM blogs, LATERAL (
  SELECT tags.*, ROW_NUMBER() OVER (ORDER BY NULL)
  FROM TABLE(tags) tags
) tags;

-- Classic, with ROWNUM
SELECT title, tags.*
FROM blogs, LATERAL (
  SELECT tags.*, ROWNUM
  FROM TABLE(tags) tags
) tags;

The result now contains the tag “ID”, i.e. the ordinal of the tag inside the array:

title                                           tag               ordinal
-------------------------------------------------------------------------
How to Fetch ... Cursors with JDBC and jOOQ     implicit cursor   1
How to Fetch ... Cursors with JDBC and jOOQ     batch             2
How to Fetch ... Cursors with JDBC and jOOQ     oracle            3
How to Fetch ... Cursors with JDBC and jOOQ     jooq              4
How to Fetch ... Cursors with JDBC and jOOQ     jdbc              5
How to Fetch ... Cursors with JDBC and jOOQ     resultset         6
How to Execute SQL Batches With JDBC and jOOQ   batch             1
How to Execute SQL Batches With JDBC and jOOQ   batch statement   2
How to Execute SQL Batches With JDBC and jOOQ   mysql             3
How to Execute SQL Batches With JDBC and jOOQ   jooq              4
How to Execute SQL Batches With JDBC and jOOQ   jdbc              5
How to Execute SQL Batches With JDBC and jOOQ   sql server        6
How to Execute SQL Batches With JDBC and jOOQ   sql               7
How to Emulate Partial Indexes in Oracle        optimisation      1
How to Emulate Partial Indexes in Oracle        index             2
How to Emulate Partial Indexes in Oracle        partial index     3
How to Emulate Partial Indexes in Oracle        oracle            4
How to Emulate Partial Indexes in Oracle        sql               5
How to Emulate Partial Indexes in Oracle        postgresql        6
How to Emulate Partial Indexes in Oracle        t-sql             7
How to Emulate Partial Indexes in Oracle        sql server        8

Now, imagine looking for those blog posts that are tagged “jooq”. Easy!

PostgreSQL:

SELECT title
FROM blogs
WHERE 'jooq' = ANY(tags);

Oracle:

SELECT title
FROM blogs
WHERE 'jooq' IN (SELECT * FROM TABLE(tags));

Yielding:

title
-----------------------------------------------------------
How to Fetch Oracle 12c Implicit Cursors with JDBC and jOOQ
How to Execute SQL Batches With JDBC and jOOQ

Conclusion

These are just a few nice things we can do when we denormalise our data into nested collections / arrays, and then use features like UNNEST to bring them back to the table level. Both Oracle and PostgreSQL support a variety of really nice features building on top of arrays, so do check them out!
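Just one example of such a feature, to whet your appetite: in PostgreSQL, aggregation is the exact inverse of unnesting, so the original arrays can be reassembled from the unnested rows of the earlier query:

SELECT title, array_agg(tag ORDER BY ordinal) AS tags
FROM blogs, LATERAL unnest(tags) WITH ORDINALITY AS t(tag, ordinal)
GROUP BY title;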

How to Execute SQL Batches With JDBC and jOOQ


Some databases (in particular MySQL and T-SQL databases like SQL Server and Sybase) support a very nice feature: They allow for running a “batch” of statements in a single statement. For instance, in SQL Server, you can do something like this:

-- Statement #1
DECLARE @table AS TABLE (id INT);

-- Statement #2
SELECT * FROM @table;

-- Statement #3
INSERT INTO @table VALUES (1),(2),(3);

-- Statement #4
SELECT * FROM @table;

This is a batch of 4 statements, and it can be executed as a single statement both with JDBC and with jOOQ. Let’s see how:

Executing a batch with JDBC

Unfortunately, the term “batch” has several meanings, and in this case, I don’t mean the JDBC Statement.addBatch() method, which is actually a bit clumsy as it doesn’t allow for fetching mixed update counts and result sets.

Instead, what I’ll be doing is this:

String sql =
    "\n  -- Statement #1                              "
  + "\n  DECLARE @table AS TABLE (id INT);            "
  + "\n                                               "
  + "\n  -- Statement #2                              "
  + "\n  SELECT * FROM @table;                        "
  + "\n                                               "
  + "\n  -- Statement #3                              "
  + "\n  INSERT INTO @table VALUES (1),(2),(3);       "
  + "\n                                               "
  + "\n  -- Statement #4                              "
  + "\n  SELECT * FROM @table;                        ";

try (PreparedStatement s = c.prepareStatement(sql)) {
    fetchLoop:
    for (int i = 0, updateCount = 0;; i++) {
        boolean result = (i == 0)
            ? s.execute()
            : s.getMoreResults();

        if (result)
            try (ResultSet rs = s.getResultSet()) {
                System.out.println("\nResult:");

                while (rs.next())
                    System.out.println("  " + rs.getInt(1));
            }
        else if ((updateCount = s.getUpdateCount()) != -1)
            System.out.println("\nUpdate Count: " + updateCount);
        else
            break fetchLoop;
    }
}

The output of the above program being:

Result:

Update Count: 3

Result:
  1
  2
  3

The above API usage is a somewhat “hidden”, or at least not everyday, usage of the JDBC API. Mostly, you’ll be using Statement.executeQuery() when you’re expecting a ResultSet, or Statement.executeUpdate() otherwise.

But in our case, we don’t really know what’s happening. We’re going to discover the result types on the fly, when executing the statement. Here are the main JDBC API features that we’re using, along with an explanation:

  • Statement.execute(): This method should be used if we don’t know the result type. The method returns a boolean, which is true when the first statement in the batch produced a ResultSet and false otherwise.
  • Statement.getMoreResults(): This method returns the same kind of boolean value as the previous Statement.execute() method, but it does so for the next statement in the batch (i.e. for every statement except the first).
  • If the current result is a ResultSet (the boolean was true), then we’ll obtain that ResultSet through Statement.getResultSet() (we can obviously no longer call the usual Statement.executeQuery() to obtain the ResultSet).
  • If the current result is not a ResultSet (the boolean was false), then we’ll check the update count value through Statement.getUpdateCount().
  • If the update count is -1, then we’ve reached the end of the batch.

What a nice state machine!

The nice thing about this is that a batch may be completely nondeterministic. E.g. there may be triggers, T-SQL blocks (e.g. an IF statement), stored procedures, and many other things that contribute result sets and/or update counts. In some cases, we simply don’t know what we’ll get.
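Here’s a contrived T-SQL sketch (not part of the original batch) to illustrate the point; depending on the data, it contributes either an additional result set or an additional update count, and the client cannot possibly know which one up front:

DECLARE @t AS TABLE (id INT);
INSERT INTO @t VALUES (1),(2),(3);

IF (SELECT COUNT(*) FROM @t) > 2
  SELECT * FROM @t;  -- this branch contributes a result set
ELSE
  DELETE FROM @t;    -- this branch would contribute an update count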

Executing a batch with jOOQ

It’s great that the JDBC API designers have thought of this exotic API usage on a rather low level. This makes JDBC extremely powerful. But who remembers the exact algorithm all the time? After all, the above minimalistic version required around 20 lines of code for something as simple as that.

Compare this to the following jOOQ API usage:

System.out.println(
    DSL.using(c).fetchMany(sql)
);

The result being:

Result set:
+----+
|  id|
+----+
Update count: 3
Result set:
+----+
|  id|
+----+
|   1|
|   2|
|   3|
+----+

Huh! Couldn’t get much simpler than that! Let’s walk through what happens:

The DSLContext.fetchMany() method is intended for use when users know there will be many result sets and/or update counts. Unlike JDBC, which reuses the ordinary Statement API for this case, jOOQ has a distinct method here to clearly distinguish between behaviours. The method then eagerly fetches all the results and update counts in one go (lazy fetching is on the roadmap with issue #4503).

The resulting type is org.jooq.Results, a type that extends List<Result>, which allows for iterating over the results only, for convenience. If a mix of results and update counts needs to be consumed, the Results.resultsOrRows() method can be used.

How to Emulate Partial Indexes in Oracle


A very interesting feature of the SQL Server and PostgreSQL databases (and some others, including SQLite) is the partial index (sometimes also called “filtered index”). That’s an index that contains only “parts” of the table data. For instance, we can write the following index in SQL Server and PostgreSQL:

CREATE INDEX i ON message (deleted) WHERE deleted = 1;

Let’s imagine you have a housekeeping job that periodically removes deleted messages. Now, let’s assume you have discovered that only 0.1% of all messages are really deleted, so an index on the DELETED column is very selective if you’re looking for deleted messages.

On the other hand, it’s not selective at all if you’re looking for non-deleted messages, as such a query would return 99.9% of all messages, in case of which a full table scan is more efficient.

So, since the index is never useful for non-deleted messages, why index those messages at all? If we can avoid indexing non-deleted messages, then we can:

  • Save a lot of disk space, as the index will be much smaller
  • Save a lot of time inserting into the messages table, since we don’t have to update the index all the time
  • Save a lot of time scanning the index, since it will contain a lot less blocks

Unfortunately, Oracle doesn’t support this feature

… but “luckily”, Oracle has another controversial “feature”. In Oracle, all indexes are partial indexes, because they don’t contain NULL values. This is probably due to an ancient optimisation (remember, partial indexes are smaller), which occasionally gets in your way, performance wise, if you do want to query for NULL values.

But in this case, it’s useful. Check this out:

CREATE TABLE message(deleted number(1));

CREATE INDEX i ON message (
  CASE WHEN deleted > 0 THEN deleted END
);

The above index is a function-based index, i.e. an index that contains not the value of the deleted column itself, but an expression based on it. Concretely, it contains only deleted values that are strictly greater than zero, because if the value is zero, then it is turned to NULL by the CASE expression, and Oracle doesn’t index NULL values. Check this out:

INSERT INTO message
SELECT DECODE(LEVEL, 1, 1, 0)
FROM dual
CONNECT BY LEVEL < 100000;

The above query is inserting a single row containing a deleted value of 1, and almost 100k rows containing a value of 0. The insert is very quick, because only one row has to be added to the index. The other almost 100k rows are skipped:

EXEC dbms_stats.gather_table_stats('SANDBOX', 'MESSAGE');

SELECT NUM_ROWS, BLOCKS
FROM SYS.ALL_TAB_STATISTICS
WHERE TABLE_NAME = 'MESSAGE';

SELECT NUM_ROWS, LEAF_BLOCKS
FROM SYS.ALL_IND_STATISTICS
WHERE TABLE_NAME = 'MESSAGE';

The result is:

NUM_ROWS       BLOCKS
---------------------
   99999          152 <-- table size

NUM_ROWS  LEAF_BLOCKS
---------------------
       1            1 <-- index size

The “trouble” with this kind of emulation is: It’s a function-based index. We can use this index only if we really reproduce the same “function” (or in this case, expression) as in the index itself. So, in order to fetch all the deleted messages, we must not write the following query:

SELECT *
FROM message
WHERE deleted = 1;

But this one, instead:

SELECT *
FROM message
WHERE CASE WHEN deleted > 0 THEN deleted END = 1;

Check out execution plans:

EXPLAIN PLAN FOR
SELECT *
FROM message
WHERE deleted = 1;

SELECT * FROM TABLE(dbms_xplan.display);

EXPLAIN PLAN FOR
SELECT *
FROM message
WHERE CASE WHEN deleted > 0 THEN deleted END = 1;

SELECT * FROM TABLE(dbms_xplan.display);

The output being:

------------------------------------------------------------------
| Id  | Operation         | Name    | Rows  | Bytes | Cost (%CPU)|
------------------------------------------------------------------
|   0 | SELECT STATEMENT  |         | 50000 |   146K|    44   (3)|
|*  1 |  TABLE ACCESS FULL| MESSAGE | 50000 |   146K|    44   (3)|
------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - filter("DELETED"=1)

And

----------------------------------------------------------------------------
| Id  | Operation                   | Name    | Rows  | Bytes | Cost (%CPU)|
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |         |     1 |     3 |     2   (0)|
|   1 |  TABLE ACCESS BY INDEX ROWID| MESSAGE |     1 |     3 |     2   (0)|
|*  2 |   INDEX RANGE SCAN          | I       |     1 |       |     1   (0)|
----------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - access(CASE  WHEN "DELETED">0 THEN "DELETED" END =1)

As you can see, the first query runs a full table scan, estimating to retrieve 50% of all the rows, when the actual result is only 1 row as can be seen in the second execution plan!

Insertion speed

What’s even more impressive is the difference in insertion speed. Consider the following code block, which measures the time it takes to insert one million rows with the value 0 and one million rows with the value 1:

SET SERVEROUTPUT ON
DECLARE
  ts TIMESTAMP;
BEGIN
  ts := SYSTIMESTAMP;
  INSERT INTO message 
  SELECT 0 FROM dual CONNECT BY level <= 1000000;
  dbms_output.put_line(SYSTIMESTAMP - ts);
  
  ts := SYSTIMESTAMP;
  INSERT INTO message 
  SELECT 1 FROM dual CONNECT BY level <= 1000000;
  dbms_output.put_line(SYSTIMESTAMP - ts);
END;
/

The result being:

+000000000 00:00:01.501000000
+000000000 00:00:08.162000000

The insertion is much faster if we don’t have to modify the index!

Conclusion

Partial indexes are a very neat trick in cases where your data is highly skewed and some values in a column are extremely rare and very frequently queried. They may drastically reduce the index size, which greatly improves performance in some situations, including inserting into the table, and querying the index.

In Oracle, they can be emulated using function-based indexes, which means you have to use the exact function expression from the index also in queries, in order to profit. But it may well be worth the trouble!

A Probably Incomplete, Comprehensive Guide to the Many Different Ways to JOIN Tables in SQL


Perhaps the most powerful SQL feature is the JOIN operation. It is the envy of all non-relational databases, because the concept is so simple, yet so universally applicable, when you want to “combine” two data sets.

Put simply, when joining two tables, you’re combining every row from one table with every row from another table, for which a certain predicate is true. An illustration from our SQL Masterclass shows it.

[Image: Venn diagrams comparing INNER JOIN and the OUTER JOIN variants, from the jOOQ SQL Masterclass]

See also our recent article about using Venn diagrams to illustrate JOINs. The above illustration compares INNER JOIN with different OUTER JOIN operations, but those are far from all possibilities that we have. Let’s look at things from a more systematic angle.

Do note that whenever in this article I say “X happens before Y”, I mean that “X happens logically before Y”. The database optimiser may still choose to execute Y before X, because that is faster without changing the outcome. More information about the syntactic / logical order of operations here.

But let’s look into all the join types, individually!

CROSS JOIN

The most basic JOIN operation is really a cartesian product. It just combines every row from one table with every row from another table. The best example for cartesian products is given by Wikipedia, using a deck of cards for illustration where we’re “cross joining” the ranks table with the suits table:

[Image: cartesian product of the ranks and suits of a deck of cards]

In real-world scenarios, CROSS JOIN can be very useful when doing reports, for instance you could generate a set of dates (e.g. days in a month) and cross join that with all the departments in the database, to create a complete days/departments table. Using PostgreSQL syntax:

SELECT *

-- This just generates all the days in January 2017
FROM generate_series(
  '2017-01-01'::TIMESTAMP,
  '2017-01-01'::TIMESTAMP + INTERVAL '1 month -1 day',
  INTERVAL '1 day'
) AS days(day)

-- Here, we're combining all days with all departments
CROSS JOIN departments

Imagine we have the following data:

+--------+   +------------+
| day    |   | department |
+--------+   +------------+
| Jan 01 |   | Dept 1     |
| Jan 02 |   | Dept 2     |
| ...    |   | Dept 3     |
| Jan 30 |   +------------+
| Jan 31 |
+--------+

The result would now look like this:

+--------+------------+
| day    | department |
+--------+------------+
| Jan 01 | Dept 1     |
| Jan 01 | Dept 2     |
| Jan 01 | Dept 3     |

| Jan 02 | Dept 1     |
| Jan 02 | Dept 2     |
| Jan 02 | Dept 3     |
| ...    | ...        |
| Jan 31 | Dept 1     |
| Jan 31 | Dept 2     |
| Jan 31 | Dept 3     |
+--------+------------+

Now, for each days/departments combination, you could calculate the daily revenue for that department, or something similar.

Characteristics

A CROSS JOIN is a cartesian product, i.e. a product as in “multiplication”. The mathematical notation uses the multiplication sign to express this operation: A × B, or in our case: days × departments.

Just like with “ordinary” arithmetic multiplication, if one of the two tables is empty (size zero), then the result will also be empty (size zero). This makes total sense. If we combine the previous 31 days with 0 departments, we’ll get 0 days/departments combinations. Likewise, if we combine an empty date range with any number of departments, we also get 0 days/departments combinations.

In other words:

size(result) = size(days) * size(departments)
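If you want to convince yourself of that identity, here’s a quick sanity check (assuming, purely for the sake of the example, that the generated days and the departments were materialised as plain tables called days and departments); both columns come out equal:

SELECT (SELECT count(*) FROM days) *
       (SELECT count(*) FROM departments)  AS expected,
       (SELECT count(*)
        FROM days CROSS JOIN departments)  AS actual;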

Alternative syntaxes

In the old days, before the ANSI JOIN syntax was introduced to SQL, people just wrote comma-separated lists of tables in the FROM clause to write a CROSS JOIN. The above query is equivalent to this one:

SELECT *
FROM 
  generate_series(
    '2017-01-01'::TIMESTAMP,
    '2017-01-01'::TIMESTAMP + INTERVAL '1 month -1 day',
    INTERVAL '1 day'
  ) AS days(day),
  departments

In general, I strongly recommend using the CROSS JOIN keyword instead of comma-separated lists of tables, because if you intentionally want to do a CROSS JOIN, nothing communicates this intent (to the next developer!) better than using the actual keywords. So many things can go wrong, accidentally, with comma-separated lists of tables, including accidental cross joins. You don’t want those!
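A classic example of such an accident (with purely hypothetical orders and customers tables, just for illustration): the intended join predicate is simply forgotten, and nothing in the syntax gives it away:

-- Oops: the WHERE o.customer_id = c.customer_id predicate is missing,
-- and we silently get every order combined with every customer
SELECT *
FROM orders o, customers c;

-- With the explicit keyword, a cross join can only ever be intentional
SELECT *
FROM orders o
CROSS JOIN customers c;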

INNER JOIN (Theta-JOIN)

Building on top of the previous CROSS JOIN operation, INNER JOIN (or just simply JOIN, sometimes also called "THETA" JOIN) allows for filtering the outcome of the cartesian product by some predicate. Most of the time, we’re putting that predicate in an ON clause, and it could be something like this:

SELECT *

-- Same as before
FROM generate_series(
  '2017-01-01'::TIMESTAMP,
  '2017-01-01'::TIMESTAMP + INTERVAL '1 month -1 day',
  INTERVAL '1 day'
) AS days(day)

-- Now, exclude all days/departments combinations for
-- days before the department was created
JOIN departments AS d ON day >= d.created_at

In most databases, the INNER keyword is optional, so I’m just omitting it in this article.

Notice how the INNER JOIN operation allows for putting arbitrary predicates in the ON clause, which is again very useful when doing reporting. Just like in the previous CROSS JOIN example, we’re combining all days with all departments, but then we retain only those days/departments combinations for which the department already existed, i.e. for which the department creation preceded the day.

Again, using this data:

+--------+   +------------+------------+
| day    |   | department | created_at |
+--------+   +------------+------------+
| Jan 01 |   | Dept 1     | Jan 10     |
| Jan 02 |   | Dept 2     | Jan 11     |
| ...    |   | Dept 3     | Jan 12     |
| Jan 30 |   +------------+------------+
| Jan 31 |
+--------+

The result would now look like this:

+--------+------------+
| day    | department |
+--------+------------+
| Jan 10 | Dept 1     |

| Jan 11 | Dept 1     |
| Jan 11 | Dept 2     |

| Jan 12 | Dept 1     |
| Jan 12 | Dept 2     |
| Jan 12 | Dept 3     |

| Jan 13 | Dept 1     |
| Jan 13 | Dept 2     |
| Jan 13 | Dept 3     |
| ...    | ...        |
| Jan 31 | Dept 1     |
| Jan 31 | Dept 2     |
| Jan 31 | Dept 3     |
+--------+------------+

So, we’d get no results prior to January 10, those rows were filtered out.

Characteristics

INNER JOIN operations are filtered CROSS JOIN operations. This means that if one of the two tables is empty, then the result is also guaranteed to be empty. But unlike with CROSS JOIN, because of the predicate, we can get fewer results than the CROSS JOIN would deliver.

In other words:

size(result) <= size(days) * size(departments)

Alternative syntaxes

While the ON clause is mandatory with the INNER JOIN operation, you’re not required to place JOIN predicates in there (although it is highly recommended from a readability perspective). Most databases will optimise the following, equivalent query in just the same way:

SELECT *
FROM generate_series(
  '2017-01-01'::TIMESTAMP,
  '2017-01-01'::TIMESTAMP + INTERVAL '1 month -1 day',
  INTERVAL '1 day'
) AS days(day)

-- You can always JOIN .. ON true (or 1 = 1 in other DBs)
-- to turn a syntactic INNER JOIN into a semantic CROSS JOIN
JOIN departments AS d ON true

-- ... and then turn the CROSS JOIN back into an INNER JOIN
-- by putting the JOIN predicate in the WHERE clause:
WHERE day >= d.created_at

Of course, again, that is just obfuscating the query for the reader, but you may have your reasons, right? If we take this one step further, the following query is also equivalent, because most optimisers can figure out the equivalence and execute an INNER JOIN instead:

SELECT *
FROM generate_series(
  '2017-01-01'::TIMESTAMP,
  '2017-01-01'::TIMESTAMP + INTERVAL '1 month -1 day',
  INTERVAL '1 day'
) AS days(day)

-- Now, this is really a syntactic CROSS JOIN
CROSS JOIN departments AS d
WHERE day >= d.created_at

… and, as seen before, CROSS JOIN is just syntax sugar for comma-separated table lists. In this case, we retain the WHERE clause to get what people have frequently done prior to the introduction of ANSI JOIN syntax:

SELECT *
FROM
  generate_series(
    '2017-01-01'::TIMESTAMP,
    '2017-01-01'::TIMESTAMP + INTERVAL '1 month -1 day',
    INTERVAL '1 day'
  ) AS days(day),
  departments AS d
WHERE day >= d.created_at

All of these syntaxes do the same thing, usually without performance penalty, but clearly, they’re all less readable than the original INNER JOIN syntax.

EQUI JOIN

Sometimes, e.g. in literature, you will hear the term EQUI JOIN where "EQUI" isn’t really meant as a SQL keyword, but just as a specific way of writing a special kind of INNER JOIN.

In fact, it is weird that "EQUI" JOIN is the special case, because it’s what we’re doing most in SQL, also in OLTP applications where we simply JOIN by primary key / foreign key relationship. For instance:

SELECT *
FROM actor AS a
JOIN film_actor AS fa ON a.actor_id = fa.actor_id
JOIN film AS f ON f.film_id = fa.film_id

The above query selects all actors and their films. There are two INNER JOIN operations, one connecting actors to the corresponding entries in the film_actor relationship table (because actors can have many films and films can have many actors), and the other connecting each film_actor relationship with additional information about the film itself.

Characteristics

The characteristics of this operation are the same as those of the “general” INNER JOIN operation. An "EQUI" JOIN is still a cartesian product (CROSS JOIN) with a reduced set of results, i.e. containing only those actor/film combinations for which a given actor actually played in the given film.

So again, in other words:

size(result) <= size(actor) * size(film)

The result size can only be equal to the actor size multiplied by the film size, if every actor played in every film, which is unlikely.

Alternative syntaxes: USING

Again, as before, we could write the INNER JOIN operation without an actual INNER JOIN syntax, but CROSS JOIN or comma-separated table lists instead. That’s boring, but much more interesting are the following two alternative syntaxes, one of which is very very useful:

SELECT *
FROM actor
JOIN film_actor USING (actor_id)
JOIN film USING (film_id)

The USING clause replaces the ON clause and allows for listing a set of columns that have to be present on both sides of the JOIN operation. If you carefully design your database in the same way as the Sakila database, namely where every foreign key column has the same name as its referenced primary key column (e.g. actor.actor_id = film_actor.actor_id), then you can use USING for "EQUI" JOIN in at least these databases:

  • Derby
  • Firebird
  • HSQLDB
  • Ingres
  • MariaDB
  • MySQL
  • Oracle
  • PostgreSQL
  • SQLite
  • Vertica

These databases, unfortunately, don’t support the syntax:

  • Access
  • Cubrid
  • DB2
  • H2
  • HANA
  • Informix
  • SQL Server
  • Sybase ASE
  • Sybase SQL Anywhere

While this produces exactly (well almost) the same result as the ON clause, it’s much quicker to read and write. I said almost because some databases (and the SQL standard) specify that any column appearing in the USING clause loses its qualification. For instance:

SELECT
  f.title,   -- Ordinary column, can be qualified
  f.film_id, -- USING column, shouldn't be qualified
  film_id    -- USING column, correct / non-ambiguous here
FROM actor AS a
JOIN film_actor AS fa USING (actor_id)
JOIN film AS f USING (film_id)

Also, of course, this syntax is a bit limited. Sometimes, you have several foreign keys in your tables, not all of which can have the primary key column name(s). For instance:

CREATE TABLE film (
  ..
  language_id          BIGINT REFERENCES language,
  original_language_id BIGINT REFERENCES language,
)

If you want to join by ORIGINAL_LANGUAGE_ID, you’ll have to resort to the ON clause.
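For instance, a sketch against the Sakila schema (where film references language twice, and original_language_id happens to be nullable, hence the LEFT JOIN):

SELECT f.title, l.name AS language, ol.name AS original_language
FROM film f
JOIN language l ON f.language_id = l.language_id
LEFT JOIN language ol ON f.original_language_id = ol.language_id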

Alternative syntaxes: NATURAL JOIN

A more extreme and much less useful form of "EQUI" JOIN is the NATURAL JOIN clause. The previous example could be further “improved” by replacing USING by NATURAL JOIN like this:

SELECT *
FROM actor
NATURAL JOIN film_actor
NATURAL JOIN film

Notice how we no longer need to specify any JOIN criteria, because a NATURAL JOIN will automatically take all the columns that share the same name from both tables that it joins and place them in a “hidden” USING clause. As we’ve seen before, as primary keys and foreign keys have the same column name, this appears quite useful.

Truth is: It isn’t useful. In the Sakila database, every table also has a LAST_UPDATE column, which is automatically being taken into consideration by NATURAL JOIN. The NATURAL JOIN query is thus equivalent to this one:

SELECT *
FROM actor
JOIN film_actor USING (actor_id, last_update)
JOIN film USING (film_id, last_update)

… which of course makes absolutely no sense at all. So, forget about NATURAL JOIN immediately (except for some very rare cases, e.g. when joining Oracle’s diagnostics views like v$sql NATURAL JOIN v$sql_plan, etc., for ad-hoc analytics)

OUTER JOIN

We’ve seen INNER JOIN before, which returns results only for combinations of the left / right table, for which the ON predicate yields true.

OUTER JOIN allows us to retain rows on the left / right side, for which we didn’t find a matching combination. Let’s go back to the days and departments example:

SELECT *
FROM generate_series(
  '2017-01-01'::TIMESTAMP,
  '2017-01-01'::TIMESTAMP + INTERVAL '1 month -1 day',
  INTERVAL '1 day'
) AS days(day)
LEFT JOIN departments AS d ON day >= d.created_at

Again, the OUTER keyword is optional, so I’m omitting it in the examples.

This query is very subtly different from its INNER JOIN counterpart in that it will always return at least one row per day, even if, for a given day, there is no department that had already been created by that day. For instance: no department was created before January 10, yet the above query will still return January 1-9:

+--------+   +------------+------------+
| day    |   | department | created_at |
+--------+   +------------+------------+
| Jan 01 |   | Dept 1     | Jan 10     |
| Jan 02 |   | Dept 2     | Jan 11     |
| ...    |   | Dept 3     | Jan 12     |
| Jan 30 |   +------------+------------+
| Jan 31 |
+--------+

In addition to the rows we got before in the INNER JOIN example, we now also have all the days from January 1-9 with NULL departments:

+--------+------------+
| day    | department |
+--------+------------+
| Jan 01 |            | -- Extra rows with no match here
| Jan 02 |            | -- Extra rows with no match here
| ...    |            | -- Extra rows with no match here
| Jan 09 |            | -- Extra rows with no match here
| Jan 10 | Dept 1     |
| Jan 11 | Dept 1     |
| Jan 11 | Dept 2     |
| Jan 12 | Dept 1     |
| Jan 12 | Dept 2     |
| Jan 12 | Dept 3     |
| Jan 13 | Dept 1     |
| Jan 13 | Dept 2     |
| Jan 13 | Dept 3     |
| ...    | ...        |
| Jan 31 | Dept 1     |
| Jan 31 | Dept 2     |
| Jan 31 | Dept 3     |
+--------+------------+

In other words, each day appears at least once in the result. LEFT JOIN does this for the left table, i.e. it retains all rows from the left table in the result.

Formally, a LEFT OUTER JOIN is an INNER JOIN with a UNION like this:

-- Convenient syntax:
SELECT *
FROM a LEFT JOIN b ON <predicate>

-- Cumbersome, equivalent syntax:
SELECT a.*, b.*
FROM a JOIN b ON <predicate>
UNION ALL
SELECT a.*, NULL, NULL, ..., NULL
FROM a
WHERE NOT EXISTS (
  SELECT * FROM b WHERE <predicate>
)

We’ll talk about NOT EXISTS further down, when talking about "ANTI" JOIN.

RIGHT OUTER JOIN

RIGHT OUTER JOIN does exactly the opposite. It retains all the rows from the right table in the result. Let’s add some more departments:

+--------+   +------------+------------+
| day    |   | department | created_at |
+--------+   +------------+------------+
| Jan 01 |   | Dept 1     | Jan 10     |
| Jan 02 |   | Dept 2     | Jan 11     |
| ...    |   | Dept 3     | Jan 12     |
| Jan 30 |   | Dept 4     | Apr 01     |
| Jan 31 |   | Dept 5     | Apr 02     |
+--------+   +------------+------------+

The new departments 4 and 5 will not be in the previous results at all, because they were created on a day after January 31. But they will appear in a RIGHT JOIN result, because departments is the right table of the join operation, and all the rows from the right table will be retained.

Running this query:

SELECT *
FROM generate_series(
  '2017-01-01'::TIMESTAMP,
  '2017-01-01'::TIMESTAMP + INTERVAL '1 month -1 day',
  INTERVAL '1 day'
) AS days(day)
RIGHT JOIN departments AS d ON day >= d.created_at

Will yield something like this:

+--------+------------+
| day    | department |
+--------+------------+
| Jan 10 | Dept 1     |
| Jan 11 | Dept 1     |
| Jan 11 | Dept 2     |
| Jan 12 | Dept 1     |
| Jan 12 | Dept 2     |
| Jan 12 | Dept 3     |
| Jan 13 | Dept 1     |
| Jan 13 | Dept 2     |
| Jan 13 | Dept 3     |
| ...    | ...        |
| Jan 31 | Dept 1     |
| Jan 31 | Dept 2     |
| Jan 31 | Dept 3     |
|        | Dept 4     | -- Extra rows with no match here
|        | Dept 5     | -- Extra rows with no match here
+--------+------------+

In most cases (I have yet to run into a situation where this isn’t the case), each LEFT OUTER JOIN expression can be transformed into an equivalent RIGHT OUTER JOIN expression, and vice-versa. Because RIGHT OUTER JOIN is usually a bit less readable, most people use LEFT OUTER JOIN only.
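For the record, the previous RIGHT JOIN can be rewritten as the following LEFT JOIN simply by swapping the two sides of the join (same PostgreSQL syntax as before; only the column order of the result changes):

SELECT *
FROM departments AS d
LEFT JOIN generate_series(
  '2017-01-01'::TIMESTAMP,
  '2017-01-01'::TIMESTAMP + INTERVAL '1 month -1 day',
  INTERVAL '1 day'
) AS days(day) ON day >= d.created_at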

FULL OUTER JOIN

Finally, there is also FULL OUTER JOIN, which retains all rows from both sides of the JOIN operation. In our example, this means that each day appears at least once in the result, just like each department appears at least once in the result.

Let’s take again this data:

+--------+   +------------+------------+
| day    |   | department | created_at |
+--------+   +------------+------------+
| Jan 01 |   | Dept 1     | Jan 10     |
| Jan 02 |   | Dept 2     | Jan 11     |
| ...    |   | Dept 3     | Jan 12     |
| Jan 30 |   | Dept 4     | Apr 01     |
| Jan 31 |   | Dept 5     | Apr 02     |
+--------+   +------------+------------+

And now, let’s run this query:

SELECT *
FROM generate_series(
  '2017-01-01'::TIMESTAMP,
  '2017-01-01'::TIMESTAMP + INTERVAL '1 month -1 day',
  INTERVAL '1 day'
) AS days(day)
FULL JOIN departments AS d ON day >= d.created_at

The result will now look something like this:

+--------+------------+
| day    | department |
+--------+------------+
| Jan 01 |            | -- row from the left table
| Jan 02 |            | -- row from the left table
| ...    |            | -- row from the left table
| Jan 09 |            | -- row from the left table
| Jan 10 | Dept 1     |
| Jan 11 | Dept 1     |
| Jan 11 | Dept 2     |
| Jan 12 | Dept 1     |
| Jan 12 | Dept 2     |
| Jan 12 | Dept 3     |
| Jan 13 | Dept 1     |
| Jan 13 | Dept 2     |
| Jan 13 | Dept 3     |
| ...    | ...        |
| Jan 31 | Dept 1     |
| Jan 31 | Dept 2     |
| Jan 31 | Dept 3     |
|        | Dept 4     | -- row from the right table
|        | Dept 5     | -- row from the right table 
+--------+------------+

If you insist, formally, a FULL OUTER JOIN is an INNER JOIN with two UNIONs like this:

-- Convenient syntax:
SELECT *
FROM a FULL JOIN b ON <predicate>

-- Cumbersome, equivalent syntax:
SELECT a.*, b.*
FROM a JOIN b ON <predicate>
-- LEFT JOIN part
UNION ALL
SELECT a.*, NULL, NULL, ..., NULL
FROM a
WHERE NOT EXISTS (
  SELECT * FROM b WHERE <predicate>
)
-- RIGHT JOIN part
UNION ALL
SELECT NULL, NULL, ..., NULL, b.*
FROM b
WHERE NOT EXISTS (
  SELECT * FROM a WHERE <predicate>
)

Alternative syntaxes: “EQUI” OUTER JOIN

The above examples again used the “cartesian product with filter” kind of JOIN. Much more common, however, is the "EQUI" OUTER JOIN approach, where we join on a primary key / foreign key relationship. Let’s go back to the Sakila database example. Some actors haven’t played in any film, and we might want to query them like this:

SELECT *
FROM actor
LEFT JOIN film_actor USING (actor_id)
LEFT JOIN film USING (film_id)

This query will return all actors at least once, regardless of whether they played in any film. If we also want all films that do not have any actors (yet), we could use a FULL OUTER JOIN to combine the results:

SELECT *
FROM actor
FULL JOIN film_actor USING (actor_id)
FULL JOIN film USING (film_id)

And of course, this also works with NATURAL LEFT JOIN, NATURAL RIGHT JOIN, NATURAL FULL JOIN, but again, these aren’t useful at all, as we’d again be joining USING (..., LAST_UPDATE), which makes no sense.

Alternative syntaxes: Oracle and SQL Server style OUTER JOIN

These two databases had OUTER JOIN before the ANSI syntax was established. It looked like this:

-- Oracle
SELECT *
FROM actor a, film_actor fa, film f
WHERE a.actor_id = fa.actor_id(+)
AND fa.film_id = f.film_id(+)

-- SQL Server
SELECT *
FROM actor a, film_actor fa, film f
WHERE a.actor_id *= fa.actor_id
AND fa.film_id *= f.film_id

That’s nice, given that at some point in time (in the 80s??), ANSI didn’t specify OUTER JOIN yet. But the 80s are more than 30 years ago. So, it’s safe to say this stuff is obsolete.

SQL Server did the right thing and deprecated (and later removed) that syntax a long time ago. Oracle still supports it for backwards compatibility reasons.

But nothing about this syntax is reasonable or readable. Don’t use it. Replace it with ANSI JOIN.
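For reference, the ANSI equivalent of the two vendor specific queries above looks like this (no AS keyword for the table aliases, since Oracle doesn’t allow it there):

SELECT *
FROM actor a
LEFT JOIN film_actor fa ON a.actor_id = fa.actor_id
LEFT JOIN film f ON fa.film_id = f.film_id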

PARTITIONED OUTER JOIN

This is Oracle specific, but I must say, it’s a real shame none of the other databases have stolen the feature yet. Remember the CROSS JOIN operation that we used to combine each day with each department? Because, sometimes, that’s what we want in the result: all combinations, and, if there is a match, also the matching values in the row.

That’s hard to describe in words, and much easier to show with an example. Here’s the query with Oracle syntax:

WITH 

  -- Using CONNECT BY to generate all dates in January
  days(day) AS (
    SELECT DATE '2017-01-01' + LEVEL - 1
    FROM dual
    CONNECT BY LEVEL <= 31
  ),

  -- Our departments
  departments(department, created_at) AS (
    SELECT 'Dept 1', DATE '2017-01-10' FROM dual UNION ALL
    SELECT 'Dept 2', DATE '2017-01-11' FROM dual UNION ALL
    SELECT 'Dept 3', DATE '2017-01-12' FROM dual UNION ALL
    SELECT 'Dept 4', DATE '2017-04-01' FROM dual UNION ALL
    SELECT 'Dept 5', DATE '2017-04-02' FROM dual
  )
SELECT *
FROM days 
LEFT JOIN departments 
  PARTITION BY (department) -- This is where the magic happens
  ON day >= created_at

Unfortunately, PARTITION BY is used in various contexts with different meanings (e.g. for window functions). In this case, it means that we “partition” our data by the departments.department column, creating a “partition” for each department. Now, each partition will get a copy for each day, regardless if there’s a match in our predicate (unlike in the ordinary LEFT JOIN case, where we had a bunch of “department-less” days). The result of the above query is now this:

+--------+------------+------------+
| day    | department | created_at |
+--------+------------+------------+
| Jan 01 | Dept 1     |            | -- Didn't match, but still get row
| Jan 02 | Dept 1     |            | -- Didn't match, but still get row
| ...    | Dept 1     |            | -- Didn't match, but still get row
| Jan 09 | Dept 1     |            | -- Didn't match, but still get row
| Jan 10 | Dept 1     | Jan 10     | -- Matches, so get join result
| Jan 11 | Dept 1     | Jan 10     | -- Matches, so get join result
| Jan 12 | Dept 1     | Jan 10     | -- Matches, so get join result
| ...    | Dept 1     | Jan 10     | -- Matches, so get join result
| Jan 31 | Dept 1     | Jan 10     | -- Matches, so get join result

| Jan 01 | Dept 2     |            | -- Didn't match, but still get row
| Jan 02 | Dept 2     |            | -- Didn't match, but still get row
| ...    | Dept 2     |            | -- Didn't match, but still get row
| Jan 09 | Dept 2     |            | -- Didn't match, but still get row
| Jan 10 | Dept 2     |            | -- Didn't match, but still get row
| Jan 11 | Dept 2     | Jan 11     | -- Matches, so get join result
| Jan 12 | Dept 2     | Jan 11     | -- Matches, so get join result
| ...    | Dept 2     | Jan 11     | -- Matches, so get join result
| Jan 31 | Dept 2     | Jan 11     | -- Matches, so get join result

| Jan 01 | Dept 3     |            | -- Didn't match, but still get row
| Jan 02 | Dept 3     |            | -- Didn't match, but still get row
| ...    | Dept 3     |            | -- Didn't match, but still get row
| Jan 09 | Dept 3     |            | -- Didn't match, but still get row
| Jan 10 | Dept 3     |            | -- Didn't match, but still get row
| Jan 11 | Dept 3     |            | -- Didn't match, but still get row
| Jan 12 | Dept 3     | Jan 12     | -- Matches, so get join result
| ...    | Dept 3     | Jan 12     | -- Matches, so get join result
| Jan 31 | Dept 3     | Jan 12     | -- Matches, so get join result

| Jan 01 | Dept 4     |            | -- Didn't match, but still get row
| Jan 02 | Dept 4     |            | -- Didn't match, but still get row
| ...    | Dept 4     |            | -- Didn't match, but still get row
| Jan 31 | Dept 4     |            | -- Didn't match, but still get row

| Jan 01 | Dept 5     |            | -- Didn't match, but still get row
| Jan 02 | Dept 5     |            | -- Didn't match, but still get row
| ...    | Dept 5     |            | -- Didn't match, but still get row
| Jan 31 | Dept 5     |            | -- Didn't match, but still get row
+--------+------------+------------+

As you can see, I’ve visually created 5 partitions for the 5 departments. Each partition combines the department with each day, but unlike when doing a CROSS JOIN, we’re now getting actual LEFT JOIN .. ON .. results in case there is a match for the predicate. This is a really nice feature for reporting in Oracle!
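If you’re on a database that doesn’t have this feature, a rough emulation (sketched in standard / PostgreSQL syntax, assuming days and departments exist as plain tables and that department names are unique) is to first cross join every department with every day, and then outer join the matching rows back in:

SELECT days.day, d.department, d2.created_at
FROM days
CROSS JOIN departments AS d
LEFT JOIN departments AS d2
  ON d2.department = d.department
 AND days.day >= d2.created_at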

SEMI JOIN

In relational algebra, there is a notion of a semi join operation, which unfortunately doesn’t have a syntax representation in SQL. If it did, the syntax would probably be LEFT SEMI JOIN and RIGHT SEMI JOIN, just like the Cloudera Impala syntax extension offers.

What is a "SEMI" JOIN?

When writing something like the fictional query below:

SELECT *
FROM actor
LEFT SEMI JOIN film_actor USING (actor_id)

What we really mean is we want all actors that played in films. But we don’t want any films in the results, just the actors. More specifically, we don’t want each actor several times, once per film. We want each actor only once (or zero times) in the result.

Semi is Latin for “half”, i.e. we implement only “half the join”, in this case, the left half.

In SQL, there are two alternative syntaxes that can emulate "SEMI" JOIN

Alternative syntax: EXISTS

This is the more powerful, if slightly more verbose, syntax:

SELECT *
FROM actor a
WHERE EXISTS (
  SELECT * FROM film_actor fa
  WHERE a.actor_id = fa.actor_id
)

We’re looking for all the actors for which there exists a film, i.e. actors that played in a film. With this syntax (i.e. "SEMI" JOIN placed in the WHERE clause), it’s immediately clear that we can get each actor at most once in the result. There’s no actual JOIN in the syntax.

Nonetheless, most databases will be able to recognise that what’s going on here is really a "SEMI" JOIN, and not just an ordinary EXISTS() predicate. For instance, consider the Oracle execution plan for the above query:

(Oracle execution plan screenshot: HASH JOIN SEMI)

Note how Oracle calls the operation “HASH JOIN (SEMI)” – the SEMI keyword is present here. The same is true for PostgreSQL:

(PostgreSQL execution plan screenshot: Nested Loop Semi Join)

Or SQL Server:

(SQL Server execution plan screenshot: Nested Loops, Left Semi Join)

Apart from being the optimal solution in terms of correctness, there are also some performance benefits when using "SEMI" JOIN rather than INNER JOIN, as the database can stop looking for matches as soon as it finds the first one!
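
For comparison, here’s a sketch of what an INNER JOIN based attempt would look like. It first produces each actor once per film, and then has to remove the duplicates again with DISTINCT, duplicates which the semi join never produces in the first place:

-- Produces one row per (actor, film) pair first,
-- then removes the duplicate actors again
SELECT DISTINCT a.actor_id, a.first_name, a.last_name
FROM actor AS a
JOIN film_actor AS fa USING (actor_id)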

Alternative syntax: IN

IN and EXISTS are exactly equivalent "SEMI" JOIN emulations. The following query will produce the same plan as the previous EXISTS query in most databases (though not in MySQL):

SELECT *
FROM actor
WHERE actor_id IN (
  SELECT actor_id FROM film_actor
)

If your database supports both syntaxes for the "SEMI" JOIN operation, you may choose whichever you prefer from a stylistic point of view.

This is different with …

ANTI JOIN

In principle, "ANTI" JOIN is just the opposite of "SEMI" JOIN. When writing something like the fictional query below:

SELECT *
FROM actor
LEFT ANTI JOIN film_actor USING (actor_id)

… what we’re doing is we take all the actors that didn’t play in any film. Unfortunately, again, SQL doesn’t have a built-in syntax for this operation, but we can emulate it with EXISTS:

Alternative syntax: NOT EXISTS

The following query has exactly the expected semantics:

SELECT *
FROM actor a
WHERE NOT EXISTS (
  SELECT * FROM film_actor fa
  WHERE a.actor_id = fa.actor_id
)

(Dangerous) alternative syntax: NOT IN

Watch out! While EXISTS and IN are equivalent, NOT EXISTS and NOT IN are not equivalent. This is because of NULL values.

In this particular case, the following NOT IN query is equivalent to the previous NOT EXISTS query, because our film_actor table has a NOT NULL constraint on film_actor.actor_id:

SELECT *
FROM actor
WHERE actor_id NOT IN (
  SELECT actor_id FROM film_actor
)

If, however, actor_id became nullable, then the query would be wrong. Don’t believe it? Try running this:

SELECT *
FROM actor
WHERE actor_id NOT IN (1, 2, NULL)

It will not return any records. Why? Because NULL is the UNKNOWN value in SQL. So, the predicate above is the same as this:

SELECT *
FROM actor
WHERE actor_id NOT IN (1, 2, UNKNOWN)

And because we cannot be sure whether actor_id is in a set of values of which one value is UNKNOWN (is it 4? Or 5? Or -1?), the whole predicate becomes UNKNOWN

SELECT *
FROM actor
WHERE UNKNOWN

Here’s a nice article about three-valued logic by Joe Celko, if you want to learn more.
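
You can observe this three-valued logic directly with a contrived query. A minimal sketch (the CASE expressions just make the UNKNOWN results visible as text):

SELECT
  CASE WHEN 1 = NULL                 THEN 'true'
       WHEN NOT (1 = NULL)           THEN 'false'
       ELSE 'unknown' END AS equality,
  CASE WHEN 1 NOT IN (2, NULL)       THEN 'true'
       WHEN NOT (1 NOT IN (2, NULL)) THEN 'false'
       ELSE 'unknown' END AS not_in
FROM dual -- Oracle; omit the FROM clause in PostgreSQL or SQL Server

Both columns yield 'unknown', which is exactly why the NOT IN predicate above filtered out every single row.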

So, I cannot say this enough:

Do not use NOT IN predicates in SQL, ever, unless you put constant, non-null values there.

— Lukas Eder. Just now.

Don’t even gamble on the presence of a NOT NULL constraint. Perhaps, some DBA might temporarily turn off the constraint to load some data and your query will be wrong for the time being. Just use NOT EXISTS. Or, in some cases…

(Dangerous) alternative syntax: LEFT JOIN / IS NULL

Strangely enough, some people prefer the following syntax:

SELECT *
FROM actor a
LEFT JOIN film_actor fa
USING (actor_id)
WHERE film_id IS NULL

It’s correct, as we’re:

  • Joining actors to their films
  • Retaining all actors, even those without films (LEFT JOIN)
  • Keeping only the actors without films (film_id IS NULL)

Now, I personally don’t like this syntax, as it doesn’t communicate the intent of an "ANTI" JOIN at all. And chances are, it is slower, because your optimiser doesn’t recognise this as an "ANTI" JOIN operation (or, in fact, it cannot formally prove that it really is one). So, again, use NOT EXISTS instead.

An interesting (but a bit outdated) blog post comparing the three syntaxes is this one here:
https://explainextended.com/2009/09/15/not-in-vs-not-exists-vs-left-join-is-null-sql-server

LATERAL JOIN

LATERAL is a relatively new keyword in the SQL standard, and it is supported in PostgreSQL and Oracle. SQL Server folks have had a vendor-specific alternative syntax using the APPLY keyword forever (which I personally much prefer). Let’s look at an example, using the PostgreSQL / Oracle LATERAL keyword:

WITH 
  departments(department, created_at) AS (
    VALUES ('Dept 1', DATE '2017-01-10'),
           ('Dept 2', DATE '2017-01-11'),
           ('Dept 3', DATE '2017-01-12'),
           ('Dept 4', DATE '2017-04-01'),
           ('Dept 5', DATE '2017-04-02')
  )
SELECT *
FROM departments AS d
CROSS JOIN LATERAL generate_series(
  d.created_at, -- We can dereference a column from department!
  '2017-01-31'::TIMESTAMP, 
  INTERVAL '1 day'
) AS days(day)

Indeed, instead of a CROSS JOIN between all departments and all days, why not just generate the necessary days for each department directly? That’s what LATERAL does. It is a prefix to the right-hand side of any JOIN operation (including INNER JOIN, LEFT OUTER JOIN, etc.) that allows the right-hand side to access columns from the left hand side.

This of course has nothing to do with relational algebra anymore, because it imposes a JOIN order (from left to right). But sometimes, that’s OK and sometimes, your table-valued function (or subquery) is so complex, that’s the only way you can actually use it.

Another very popular use-case is to join a “TOP-N” query to a normal table – e.g. if you want to find each actor, and their TOP 5 best selling films:

SELECT a.first_name, a.last_name, f.*
FROM actor AS a
LEFT OUTER JOIN LATERAL (
  SELECT f.title, SUM(amount) AS revenue
  FROM film AS f
  JOIN film_actor AS fa USING (film_id)
  JOIN inventory AS i USING (film_id)
  JOIN rental AS r USING (inventory_id)
  JOIN payment AS p USING (rental_id)
  WHERE fa.actor_id = a.actor_id
  GROUP BY f.film_id
  ORDER BY revenue DESC
  LIMIT 5
) AS f
ON true

The result being something like:

(Query result screenshot: each actor with their top 5 best selling films)

Don’t worry about the long list of joins in the derived table, that’s just how we have to get from the FILM table to the PAYMENT table in the Sakila database:

(Sakila database schema diagram)

Essentially, the subquery is calculating those TOP 5 best selling films per actor. So, it isn’t a “classic” derived table, but a correlated subquery that returns more than one row and one column. We’re all used to writing correlated subqueries like these:

SELECT 
  a.first_name, 
  a.last_name, 
  (SELECT count(*) 
   FROM film_actor AS fa 
   WHERE fa.actor_id = a.actor_id) AS films
FROM actor AS a

The result of the above correlated subquery is one row and one column. If you ever need to return more than one row and/or more than one column from a correlated subquery, LATERAL or APPLY will be your friend.

Note also how I’ve used LEFT OUTER JOIN together with LATERAL, and then I needed to put a dummy ON true clause for syntax correctness. Using LATERAL with OUTER JOIN will again always retain the left side of the JOIN, i.e. we’ll retain actors that didn’t play in any film.

Characteristics:

The LATERAL keyword doesn’t really change the semantics of the JOIN type that it is applied to. If you’re running a CROSS JOIN LATERAL, the result size is still

size(result) = size(left table) * size(right table)

Even if the right table is produced on a per-row-of-the-left-table basis.

You can use LATERAL also with OUTER JOIN, in case of which the left rows will be retained even if your table function doesn’t return any rows on the right side.
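
For instance, in the departments example from above, ‘Dept 4’ and ‘Dept 5’ were created after Jan 31, so generate_series() produces no days for them, and a CROSS JOIN LATERAL silently drops them. A sketch of the outer variant (PostgreSQL syntax) that retains them, with a NULL day:

SELECT *
FROM departments AS d
LEFT OUTER JOIN LATERAL generate_series(
  d.created_at,
  '2017-01-31'::TIMESTAMP,
  INTERVAL '1 day'
) AS days(day) ON true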

Alternative syntax: APPLY

SQL Server didn’t choose the confusing LATERAL keyword; instead, it introduced the APPLY keyword (more specifically: CROSS APPLY and OUTER APPLY) a long time ago, which makes more sense, because we’re applying a function to each row of a table. Let’s assume for a moment that we have a generate_series() function in SQL Server:

-- Use with care, this is quite inefficient!
CREATE FUNCTION generate_series(@d1 DATE, @d2 DATE)
RETURNS TABLE AS
RETURN
  WITH t(d) AS (
    SELECT @d1 
    UNION ALL
    SELECT DATEADD(day, 1, d) 
    FROM t
    WHERE d < @d2
  ) 
  SELECT * FROM t;

Then, we could use CROSS APPLY to call that function for each department:

WITH 
  departments AS (
    SELECT * FROM (
      VALUES ('Dept 1', CAST('2017-01-10' AS DATE)),
             ('Dept 2', CAST('2017-01-11' AS DATE)),
             ('Dept 3', CAST('2017-01-12' AS DATE)),
             ('Dept 4', CAST('2017-04-01' AS DATE)),
             ('Dept 5', CAST('2017-04-02' AS DATE))
    ) d(department, created_at)
  )
SELECT *
FROM departments AS d
CROSS APPLY dbo.generate_series(
  d.created_at, -- We can dereference a column from department!
  CAST('2017-01-31' AS DATE)
)

What’s so nice about this syntax is that – again – we’re applying a function to each row of a table, and that function produces rows. Does it ring a bell? In Java 8, we would use Stream.flatMap() for that! Consider the following stream usage:

departments.stream()
           .flatMap(department -> generateSeries(
                             department.createdAt, 
                             LocalDate.parse("2017-01-31"))
                         .map(day -> tuple(department, day))
           );

What’s going on here?

  • The DEPARTMENTS table is just a Java departments stream
  • We flatMap the departments stream using a function that produces tuples for each department
  • Those tuples include the department itself, and a day generated from a series of days starting with the department’s createdAt day

Same story! SQL CROSS APPLY / CROSS JOIN LATERAL is the same thing as Java’s Stream.flatMap(). In fact, SQL and streams aren’t too different anyway. For more info, read this blog post here.

Note: Just like we could write LEFT OUTER JOIN LATERAL, we can also write OUTER APPLY in case we want to retain the left side of the JOIN expression.
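
A sketch of the outer variant, reusing the departments CTE and the dbo.generate_series() function from above. ‘Dept 4’ and ‘Dept 5’ produce no days, but they are still retained, with a NULL day:

SELECT *
FROM departments AS d
OUTER APPLY dbo.generate_series(
  d.created_at,
  CAST('2017-01-31' AS DATE)
)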

MULTISET

Few databases implement this (in fact, only Oracle), but if you think about it, it’s an awesome JOIN type. One that creates nested collections. If all databases implemented it, we wouldn’t need ORMs!

A hypothetical example (that uses SQL standard syntax, not Oracle’s) looks like this:

SELECT a.*, MULTISET (
  SELECT f.*
  FROM film AS f
  JOIN film_actor AS fa USING (film_id)
  WHERE a.actor_id = fa.actor_id
) AS films
FROM actor AS a

The MULTISET operator takes a correlated subquery argument, and aggregates all its resulting rows in a nested collection. This works in a similar fashion as a LEFT OUTER JOIN (we’re getting all actors, and if they have films, we’re getting all their films too), but instead of duplicating all the actors in the result set, we’re collecting them into a nested collection.

Just like what we would be doing in an ORM, when fetching stuff into this structure:

@Entity
class Actor {
  @ManyToMany
  List<Film> films;
}

@Entity
class Film {
}

Ignore the incompleteness of the JPA annotations used; I just want to show the strength of nested collections. Unlike in ORMs, the SQL MULTISET operator allows for collecting arbitrary results from correlated subqueries into nested collections – not just actual entities. This is a million times more powerful than what any ORM could ever dream of.

Alternative syntaxes: Oracle

As I said, Oracle actually supports MULTISET, but you cannot create ad-hoc nested collections. For some reason, Oracle chose to implement nominal typing for these nested collections, rather than the usual SQL-style structural typing. So you have to declare your types in advance:

CREATE TYPE film_t AS OBJECT ( ... );
CREATE TYPE film_tt AS TABLE OF film_t;

SELECT 
  a.*, 
  CAST (
    MULTISET (
      SELECT f.*
      FROM film AS f
      JOIN film_actor AS fa USING (film_id)
      WHERE a.actor_id = fa.actor_id
    ) AS film_tt
  ) AS films
FROM actor AS a

A bit more verbose, but still does the trick! Excellent!

Alternative syntaxes: PostgreSQL

For once, the awesome PostgreSQL is missing out on an excellent SQL standard feature, but there’s a workaround: arrays! And this time, we can use structural types, yay! So the following query will return a nested array of rows in PostgreSQL:

SELECT
  a AS actor,
  array_agg(
    f
  ) AS films
FROM actor AS a
JOIN film_actor AS fa USING (actor_id)
JOIN film AS f USING (film_id)
GROUP BY a

The result is everyone’s ORDBMS dream! Nested records and collections everywhere (in only two columns):

actor                  films
--------------------   ----------------------------------------------------
(1,PENELOPE,GUINESS)   {(1,ACADEMY DINOSAUR),(23,ANACONDA CONFESSIONS),...}
(2,NICK,WAHLBERG)      {(3,ADAPTATION HOLES),(31,APACHE DIVINE),...}
(3,ED,CHASE)           {(17,ALONE TRIP),(40,ARMY FLINTSTONES),...}

If you don’t find this exciting, I don’t know what is.

Conclusion

This was A Probably Incomplete, Comprehensive Guide to the Many Different Ways to JOIN Tables in SQL. I hope you’ve found 1-2 new tricks in this article. JOIN is only one of many very interesting SQL operations. If you’ve found this information useful, you will also like:

SQL, Streams, For Comprehension… It’s All the Same


Recently, at Devoxx, I saw a beautiful slide in a talk by Kevlin Henney.

In his talk, he was displaying a variety of approaches to solve the FizzBuzz “problem”, including a couple of very elegant solutions in completely declarative approaches and languages.

In this particular slide, Kevlin used a notation that is derived from maths. The set builder notation. Here’s an example from Wikipedia:

{ n ∈ ℤ | ∃ k ∈ ℤ : n = 2k }

The example reads: For all n in ℤ (the set of all integers), take those for which there exists (∃) another integer k, for which the following equation is satisfied: n = 2k.

Or in plain English: All even integers. (because for even integers, there exists another integer that is half the even integer)

Beautiful, eh? In imperative programming, we’d probably do something like this instead:

List<Integer> even = new ArrayList<>();
for (int i = /* hmm...? */; i < /* what to put here */; i++)
    even.add(i * 2);

Or this:

List<Integer> even = new ArrayList<>();
for (int i = /* hmm...? */; i < /* what to put here */; i = i + 2)
    even.add(i);

But there are several problems with the imperative approach:

  • We have to realistically start somewhere
  • We have to realistically end somewhere
  • We have to store all values in an intermediate collection

Sure, those aren’t severe limitations in everyday use-cases, because we’re probably solving a real world problem where we don’t actually need an infinite number of even integers, and storing them in an intermediate collection doesn’t consume all of our memory. But still, the declarative, mathematical approach is much leaner, because we can answer those questions about where to start and where to end later, and we never need to materialise any intermediate collection before we make those final decisions.

For instance, we can declare X to be that set, and then declare Y to be a set that is derived from X, and finally materialise Z, which is a very tiny set derived from Y. For this, we may have never needed to materialise all the (even) integers.

How this compares to SQL

Kevlin made a cunning comparison. Of course, all functional programming aficionados will immediately recognise that languages like Scala have something called a “for comprehension”, which models precisely the mathematical set-builder notation.

Java 8 now has the Streams API, which allows us, to some extent, to model something similar (although not as powerful). But Kevlin didn’t use those “modern” languages. He used SQL as a comparison. That “arcane” declarative programming language that has been around forever, and that we love so much. Yes, here’s how we can declare all the even numbers in SQL:

SELECT n
FROM integers
WHERE EXISTS (
  SELECT k
  FROM integers
  WHERE n = 2 * k
)

If optimisers were perfect, this semi-self-join between the two references of the integers “table” could be optimised perfectly. In most databases, we’d probably manually transform the above notation to this equivalent one:

SELECT n
FROM integers
WHERE MOD(n, 2) = 0

Yes, indeed. The set-builder notation and the SQL language are very similar beasts. The former prefers using mathematical symbols for brevity and conciseness, the latter prefers using English words to connect the different operators, but it’s the same thing. And if you squint hard enough, you’ll see that Java 8 Streams, for instance, are also pretty much the same thing:

(Slide: Java 8 Stream operations and their SQL clause counterparts)

I’ve blogged about this recently where all the Java 8 Streams operations are compared to their SQL clause counterparts:
https://blog.jooq.org/2015/08/13/common-sql-clauses-and-their-equivalents-in-java-8-streams

How is this better?

It’s simple. Both the set-builder notation, and the SQL language (and in principle, other languages’ for comprehensions) are declarative. They are expressions, which can be composed to other, more complex expressions, without necessarily executing them.

Remember the imperative approach? We tell the machine exactly what to do:

  • Start counting from this particular minimal integer value
  • Stop counting at this particular maximal integer value
  • Store all even integers in between in this particular intermediate collection

What if we don’t actually need negative integers? What if we just wanted to have a utility that calculates even integers and then reuse that to list all positive integers? Or, all positive integers less than 100? Etc.

In the imperative approach, we have to refactor constantly, to avoid the overhead of

  • Producing too many integers
  • Storing too many integers (or storing them at all)

In truly declarative languages like SQL, we’re just describing “even integers” with an expression, possibly assigning the expression a name:

CREATE VIEW even_integers AS
SELECT n
FROM integers
WHERE EXISTS (
  SELECT k
  FROM integers
  WHERE n = 2 * k
)

So, when we actually use and materialise the even integers, e.g. positive integers less than 100, the optimiser can optimise away the double access to the integer table and produce only the exact number of values that we’re requesting (without materialising them in intermediate collections):

SELECT n
FROM even_integers
WHERE n BETWEEN 0 AND 100

Conclusion

Thinking in terms of sets, in terms of declaring sets, has always been our dream as software engineers. The approach is extremely compelling and elegant. We can delegate a lot of boring algorithmic work to the implementation engine of the declarative programming language. In the case of SQL, it would be a SQL database optimiser, which figures out a great deal of optimisations that we might not have thought of.

The above example is trivial. We can perfectly live in a world where we manually iterate over a local integer variable that goes from 0 to 100:

for (int i = 0; i <= 100; i++)
  doSomething(i);

But stuff gets hairy quite quickly. Think of Mario Fusco‘s famous tweet comparing two versions of the same algorithm, imperative vs. declarative.

This also applies to SQL, and what’s even better in SQL than with Streams: The SQL statement is a declarative expression tree, not a formally ordered set of stream pipeline operations. The optimiser can freely reorder / transform the expression tree into something that it thinks is more optimal. This isn’t just a promise. This works in modern SQL databases every day, for very complex queries, which you can write in a matter of seconds, rather than hours.

Stay tuned for a short series of blog posts on the jOOQ blog illustrating what modern cost-based optimisation can do for you, when you’re using the SQL language.

A Beginner’s Guide to the True Order of SQL Operations


The SQL language is very intuitive. Until it isn’t.

Over the years, a lot of people have criticised the SQL language for a variety of reasons. For instance: IDEs cannot easily guess what auto completion options to offer, because as long as you don’t specify the FROM clause, there are no tables in scope (yet):

-- Don't you wish this would be completed to first_name?
SELECT first_na...

-- Aaah, now it works:
SELECT first_na...
FROM customer

These things are weird, because the lexical order of operations does not match the logical order of operations. We humans may sometimes (often) intuitively understand this ordering difference. E.g. we know that we’re about to select from the customer table. But the IDE doesn’t know this.

GROUP BY contributes the most confusion

When a junior developer / SQL beginner starts working with SQL, quite quickly, they will find out about aggregation and GROUP BY. And they’ll quickly write things like:

SELECT count(*)
FROM customer

Yay, we have 200 customers!

And then:

SELECT count(*)
FROM customer
WHERE first_name = 'Steve'

Wow, 90 of them are called Steve! Interesting. Let’s find out how many we have per name…

SELECT first_name, count(*)
FROM customer
GROUP BY first_name

Ahaa!

FIRST_NAME   COUNT
------------------
Steve        90
Jane         80
Joe          20
Janet        10

Very nice. But are they all the same? Let’s check out the last name, too

SELECT first_name, last_name, count(*)
FROM customer
GROUP BY first_name

Oops!

ORA-00979: not a GROUP BY expression

Jeez, what does it mean? (Note: unfortunately, MySQL users that do not enable the ONLY_FULL_GROUP_BY mode will still get a result here, with arbitrary last names, so a new MySQL user won’t even understand their mistake.)

How do you easily explain this to a SQL newbie? It seems obvious to “pros”, but is it really obvious? Is it obvious enough that you can explain it easily to a junior? Think about it. Why is each one of these statements semantically correct or wrong?

-- Wrong
SELECT first_name, count(*)
FROM customer
WHERE count(*) > 1
GROUP BY first_name

-- Correct
SELECT first_name, count(*)
FROM customer
GROUP BY first_name
HAVING count(*) > 1

-- Correct
SELECT first_name, count(*)
FROM customer
GROUP BY first_name
ORDER BY count(*) DESC

-- Wrong
SELECT first_name, last_name, count(*)
FROM customer
GROUP BY first_name

-- Correct
SELECT first_name, MAX(last_name), count(*)
FROM customer
GROUP BY first_name

-- Wrong
SELECT first_name || ' ' || last_name, count(*)
FROM customer
GROUP BY first_name

-- Correct
SELECT first_name || ' ' || MAX(last_name), count(*)
FROM customer
GROUP BY first_name

-- Correct
SELECT MAX(first_name || ' ' || last_name), count(*)
FROM customer
GROUP BY first_name

The problem is syntax related

The SQL syntax works in a similar way like the English language. It is a command. We start commands with verbs. The verb is SELECT (or INSERT, UPDATE, DELETE, CREATE, DROP, etc. etc.)

Unfortunately, human language is incredibly ill-suited for the much more formal world of programming. While it offers some consolation to new users (possibly non-programmers) who are absolute beginners, it just makes stuff hard for everyone else. All the different SQL clauses have extremely complex interdependencies. For instance:

  • In the presence of a GROUP BY clause, only expressions built from GROUP BY expressions (or functional dependencies thereof), or aggregate functions can be used in HAVING, SELECT, and ORDER BY clauses.
  • For simplicity reasons, let’s not even talk about GROUPING SETS
  • In fact, there are even a few cases in which GROUP BY is implied. E.g. if you write a “naked” HAVING clause (see the example right after this list)
  • A single aggregate function in the SELECT clause (in the absence of GROUP BY) will force aggregation into a single row
  • In fact, this can also be implied by putting that aggregate function in ORDER BY (for whatever reason)
  • You can ORDER BY quite a few expressions that reference any columns from the FROM clause without SELECTing them. But that’s no longer true if you write SELECT DISTINCT
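
For instance, here’s the “naked” HAVING case from the list above. There’s no GROUP BY clause, so the whole table is implicitly grouped into a single group, and the query returns either one row or none at all:

SELECT count(*)
FROM customer
HAVING count(*) > 100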

The list is endless. If you’re interested, you can read the SQL standard documents and check out how many weird and complicated inter-dependencies there exist between the many clauses of the SELECT statement.

Can this ever be understood?

Luckily, yes! There’s a simple trick, which I’m always explaining to the delegates that visit my SQL Masterclass. The lexical (syntactical) order of SQL operations (clauses) does not correspond at all to the logical order of operations (although, sometimes, they do coincidentally). Thanks to modern optimisers, the order also doesn’t correspond to the actual order of operations, so we really have: syntactical -> logical -> actual order, but let’s leave that aside for now.

The logical order of operations is the following (for “simplicity” I’m leaving out vendor specific things like CONNECT BY, MODEL, MATCH_RECOGNIZE, PIVOT, UNPIVOT and all the others):

  • FROM: This is actually the first thing that happens, logically. Before anything else, we’re loading all the rows from all the tables and join them. Before you scream and get mad: Again, this is what happens first logically, not actually. The optimiser will very probably not do this operation first, that would be silly, but access some index based on the WHERE clause. But again, logically, this happens first. Also: all the JOIN clauses are actually part of this FROM clause. JOIN is an operator in relational algebra. Just like + and - are operators in arithmetics. It is not an independent clause, like SELECT or FROM
  • WHERE: Once we have loaded all the rows from the tables above, we can now throw them away again using WHERE
  • GROUP BY: If you want, you can take the rows that remain after WHERE and put them in groups or buckets, where each group contains the same value for the GROUP BY expression (and all the other rows are put in a list for that group). In Java, you would get something like: Map<String, List<Row>>. If you do specify a GROUP BY clause, then your actual rows contain only the group columns, no longer the remaining columns, which are now in that list. Those columns in the list are only visible to aggregate functions that can operate upon that list. See below.
  • aggregations: This is important to understand. No matter where you put your aggregate function syntactically (i.e. in the SELECT clause, or in the ORDER BY clause), this here is the step where aggregate functions are calculated. Right after GROUP BY. (remember: logically. Clever databases may have calculated them before, actually). This explains why you cannot put an aggregate function in the WHERE clause, because its value cannot be accessed yet. The WHERE clause logically happens before the aggregation step. Aggregate functions can access columns that you have put in “this list” for each group, above. After aggregation, “this list” will disappear and no longer be available. If you don’t have a GROUP BY clause, there will just be one big group without any key, containing all the rows.
  • HAVING: … but now you can access aggregation function values. For instance, you can check that count(*) > 1 in the HAVING clause. Because HAVING is after GROUP BY (or implies GROUP BY), we can no longer access columns or expressions that were not GROUP BY columns.
  • WINDOW: If you’re using the awesome window function feature, this is the step where they’re all calculated. Only now. And the cool thing is, because we have already calculated (logically!) all the aggregate functions, we can nest aggregate functions in window functions. It’s thus perfectly fine to write things like sum(count(*)) OVER () or row_number() OVER (ORDER BY count(*)). Window functions being logically calculated only now also explains why you can put them only in the SELECT or ORDER BY clauses. They’re not available to the WHERE clause, which happened before. Note that PostgreSQL and Sybase SQL Anywhere have an actual WINDOW clause!
  • SELECT: Finally. We can now use all the rows that are produced from the above clauses and create new rows / tuples from them using SELECT. We can access all the window functions that we’ve calculated, all the aggregate functions that we’ve calculated, all the grouping columns that we’ve specified, or if we didn’t group/aggregate, we can use all the columns from our FROM clause. Remember: Even if it looks like we’re aggregating stuff inside of SELECT, this has happened long ago, and the sweet sweet count(*) function is nothing more than a reference to the result.
  • DISTINCT: Yes! DISTINCT happens after SELECT, even if it is put before your SELECT column list, syntax-wise. But think about it. It makes perfect sense. How else can we remove distinct rows, if we don’t know all the rows (and their columns) yet?
  • UNION, INTERSECT, EXCEPT: This is a no-brainer. A UNION is an operator that connects two subqueries. Everything we’ve talked about thus far was a subquery. The output of a union is a new query containing the same row types (i.e. same columns) as the first subquery. Usually. Because in wacko Oracle, the penultimate subquery is the right one to define the column name. Oracle database, the syntactic troll 😉
  • ORDER BY: It makes total sense to postpone the decision of ordering a result until the end, because all other operations might use hashmaps, internally, so any intermediate order might be lost again. So we can now order the result. Normally, you can access a lot of rows from the ORDER BY clause, including rows (or expressions) that you did not SELECT. But when you specified DISTINCT, before, you can no longer order by rows / expressions that were not selected (see the example right after this list). Why? Because the ordering would be quite undefined.
  • OFFSET: Don’t use offset
  • LIMIT, FETCH, TOP: Now, sane databases put the LIMIT (MySQL, PostgreSQL) or FETCH (DB2, Oracle 12c, SQL Server 2012) clause at the very end, syntactically. In the old days, Sybase and SQL Server thought it would be a good idea to have TOP as a keyword in SELECT. As if the correct ordering of SELECT DISTINCT wasn’t already confusing enough.
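
For instance, here’s the ORDER BY / DISTINCT interaction mentioned in the list above. Without DISTINCT, we may order by a column we didn’t select; with DISTINCT, most databases will reject it:

-- Works: we can order by a column that wasn't selected
SELECT first_name
FROM customer
ORDER BY last_name

-- Doesn't work in most databases: with DISTINCT, the ordering
-- expression must also appear in the SELECT list
SELECT DISTINCT first_name
FROM customer
ORDER BY last_name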

There, we have it. It makes total sense. And if you ever want to do something that is not in the “right order”, the simplest trick is always to resort to a derived table. E.g. when you want to group on a window function:

-- Doesn't work, cannot put window functions in GROUP BY
SELECT ntile(4) OVER (ORDER BY age) AS bucket, MIN(age), MAX(age)
FROM customer
GROUP BY ntile(4) OVER (ORDER BY age)

-- Works:
SELECT bucket, MIN(age), MAX(age)
FROM (
  SELECT age, ntile(4) OVER (ORDER BY age) AS bucket
  FROM customer
) c
GROUP BY bucket

Why does it work? Because:

  • In the derived table, FROM happens first, and then the WINDOW is calculated, then the bucket is SELECTed.
  • The outer SELECT can now treat the result of this window function calculation like any ordinary table in the FROM clause, then GROUP BY an ordinary column, then aggregate, then SELECT

Let’s review our original examples with an explanation why they work or why they don’t.

-- Wrong: Because aggregate functions are calculated
-- *after* GROUP BY, and WHERE is applied *before* GROUP BY
SELECT first_name, count(*)
FROM customer
WHERE count(*) > 1
GROUP BY first_name

-- logical order         -- available columns after operation
-------------------------------------------------------------
FROM customer            -- customer.*
WHERE ??? > 1            -- customer.* (count not yet available!)
GROUP BY first_name      -- first_name (customer.* for aggs only)
<aggregate> count(*)     -- first_name, count
SELECT first_name, count -- first_name, count



-- Correct: Because aggregate functions are calculated
-- *after* GROUP BY but *before* HAVING, so they're 
-- available to the HAVING clause.
SELECT first_name, count(*)
FROM customer
GROUP BY first_name
HAVING count(*) > 1

-- logical order         -- available columns after operation
-------------------------------------------------------------
FROM customer            -- customer.*
GROUP BY first_name      -- first_name (customer.* for aggs only)
<aggregate> count(*)     -- first_name, count
HAVING count > 1         -- first_name, count
SELECT first_name, count -- first_name, count



-- Correct: Both SELECT and ORDER BY are applied *after*
-- the aggregation step, so aggregate function results are 
-- available
SELECT first_name, count(*)
FROM customer
GROUP BY first_name
ORDER BY count(*) DESC

-- logical order         -- available columns after operation
-------------------------------------------------------------
FROM customer            -- customer.*
GROUP BY first_name      -- first_name (customer.* for aggs only)
<aggregate> count(*)     -- first_name, count
SELECT first_name, count -- first_name, count
ORDER BY count DESC      -- first_name, count



-- Wrong: Because the GROUP BY clause creates groups of
-- first names, and all the remaining customer columns
-- are aggregated into a list, which is only visible to
-- aggregate functions
SELECT first_name, last_name, count(*)
FROM customer
GROUP BY first_name

-- logical order         -- available columns after operation
-----------------------------------------------------------------
FROM customer            -- customer.*
GROUP BY first_name      -- first_name (customer.* for aggs only)
<aggregate> count(*)     -- first_name, count
                         -- first_name, count (last_name removed)
SELECT first_name, ???, count 



-- Correct: Because now, we're using an aggregate function
-- to access one of the columns that have been put into that
-- list of columns that are otherwise no longer available
-- after the GROUP BY clause
SELECT first_name, MAX(last_name), count(*)
FROM customer
GROUP BY first_name

-- logical order         -- available columns after operation
-----------------------------------------------------------------
FROM customer            -- customer.*
GROUP BY first_name      -- first_name (customer.* for aggs only)
                         -- first_name, max, count
<aggregate> MAX(last_name), count(*) 
                         -- first_name, max, count
SELECT first_name, max, count



-- Wrong: Because we still cannot access the last name column
-- which is in that list after the GROUP BY clause.
SELECT first_name || ' ' || last_name, count(*)
FROM customer
GROUP BY first_name

-- logical order         -- available columns after operation
-----------------------------------------------------------------
FROM customer            -- customer.*
GROUP BY first_name      -- first_name (customer.* for aggs only)
<aggregate> count(*)     -- first_name, count
                         -- first_name, count (last_name removed)
SELECT first_name || ' ' || ???, count 




-- Correct: Because we can access the last name column from
-- aggregate functions, which can see that list
SELECT first_name || ' ' || MAX(last_name), count(*)
FROM customer
GROUP BY first_name

-- logical order         -- available columns after operation
-----------------------------------------------------------------
FROM customer            -- customer.*
GROUP BY first_name      -- first_name (customer.* for aggs only)
                         -- first_name, max, count
<aggregate> MAX(last_name), count(*)  
                         -- first_name, max, count (no last_name)
SELECT first_name || ' ' || max, count




-- Correct: Because both GROUP BY columns and aggregated
-- columns are available to aggregate functions
SELECT MAX(first_name || ' ' || last_name), count(*)
FROM customer
GROUP BY first_name

-- logical order         -- available columns after operation
-----------------------------------------------------------------
FROM customer            -- customer.*
GROUP BY first_name      -- first_name (customer.* for aggs only)
                         -- first_name, max, count
<aggregate> MAX(first_name || ' ' || last_name), count(*)
SELECT max, count        -- first_name, max, count

Always think about the logical order of operations

If you’re not a frequent SQL writer, the syntax can indeed be confusing. Especially GROUP BY and aggregations “infect” the rest of the entire SELECT statement, and things get really weird. When confronted with this weirdness, we have two options:

  • Get mad and scream at the SQL language designers
  • Accept our fate, close our eyes, forget about the syntax and remember the logical order of operations

I generally recommend the latter, because then things start making a lot more sense, including the beautiful cumulative daily revenue calculation below, which nests the daily revenue (SUM(amount) aggregate function) inside of the cumulative revenue (SUM(...) OVER (...) window function):

SELECT
  payment_date,
  SUM(SUM(amount)) OVER (ORDER BY payment_date) AS revenue
FROM payment
GROUP BY payment_date

… because aggregations logically happen before window functions.
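
If the nesting feels confusing, the same query can be spelled out with a derived table, making the two logical steps explicit: aggregate first, then apply the window function to the aggregated result:

SELECT
  payment_date,
  SUM(daily_revenue) OVER (ORDER BY payment_date) AS revenue
FROM (
  SELECT payment_date, SUM(amount) AS daily_revenue
  FROM payment
  GROUP BY payment_date
) p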

Want to learn more? We also have these articles for you to read: