10 SQL Tricks That You Didn’t Think Were Possible

Listicles like these do work – not only do they attract attention, but if the content is also valuable (and in this case it is, trust me), the article format can be extremely entertaining.

This article will bring you 10 SQL tricks that many of you might not have thought were possible. The article is a summary of my new, extremely fast-paced, ridiculously childish-humoured talk, which I’m giving at conferences (recently at JAX and Devoxx France).

Here are 10 SQL Tricks That You Didn’t Think Were Possible:

Introduction

In order to understand the value of these 10 SQL tricks, it is first important to understand the context of the SQL language. Why do I talk about SQL at Java conferences? (and I’m usually the only one!) This is why:

From early days onwards, programming language designers had this desire to design languages in which you tell the machine WHAT you want as a result, not HOW to obtain it. For instance, in SQL, you tell the machine that you want to “connect” (JOIN) the user table and the address table and find the users that live in Switzerland. You don’t care HOW the database will retrieve this information (e.g. should the users table be loaded first, or the address table? Should the two tables be joined in a nested loop or with a hashmap? Should all data be loaded in memory first and then filtered for Swiss users, or should we only load Swiss addresses in the first place? Etc.)

As with every abstraction, you will still need to know the basics of what’s going on behind the scenes in a database to help the database make the right decisions when you query it. For instance, it makes sense to:

  • Establish a formal foreign key relationship between the tables (this tells the database that every address is guaranteed to have a corresponding user)
  • Add an index on the search field, the country (this tells the database that specific countries can be found in O(log N) instead of O(N))
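
In DDL, those two steps might look as follows (a minimal sketch, assuming hypothetical users and address tables):

-- Every address is guaranteed to have a corresponding user
CREATE TABLE users (
  id BIGINT PRIMARY KEY
);

CREATE TABLE address (
  id BIGINT PRIMARY KEY,
  user_id BIGINT NOT NULL REFERENCES users (id),
  country VARCHAR(2) NOT NULL
);

-- Searching by country becomes O(log N)
CREATE INDEX i_address_country ON address (country);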

But once your database and your application mature, you will have put all the important metadata in place and you can focus on your business logic only. The following 10 tricks show amazing functionality written in only a few lines of declarative SQL, producing both simple and complex output.

1. Everything is a Table

This is the most trivial of tricks, and not even really a trick, but it is fundamental to a thorough understanding of SQL: Everything is a table! When you see a SQL statement like this:

SELECT *
FROM person

… you will quickly spot the table person sitting right there in the FROM clause. That’s cool, that is a table. But did you realise that the whole statement is also a table? For instance, you can write:

SELECT *
FROM (
  SELECT *
  FROM person
) t

And now, you have created what is called a “derived table” – i.e. a nested SELECT statement in a FROM clause.

That’s trivial but, if you think about it, quite elegant. In some databases (e.g. PostgreSQL, SQL Server), you can also create ad-hoc, in-memory tables with the VALUES() constructor:

SELECT *
FROM (
  VALUES(1),(2),(3)
) t(a)

Which simply yields:

 a
---
 1
 2
 3

If that clause is not supported, you can fall back to derived tables, e.g. in Oracle:

SELECT *
FROM (
  SELECT 1 AS a FROM DUAL UNION ALL
  SELECT 2 AS a FROM DUAL UNION ALL
  SELECT 3 AS a FROM DUAL
) t

Now that you’ve seen that VALUES() and derived tables are conceptually the same thing, let’s review the INSERT statement, which comes in two flavours:

-- SQL Server, PostgreSQL, some others:
INSERT INTO my_table(a)
VALUES(1),(2),(3);

-- Oracle, many others:
INSERT INTO my_table(a)
SELECT 1 AS a FROM DUAL UNION ALL
SELECT 2 AS a FROM DUAL UNION ALL
SELECT 3 AS a FROM DUAL

In SQL everything is a table. When you’re inserting rows into a table, you’re not really inserting individual rows. You’re really inserting entire tables. Most people just happen to insert a single-row-table most of the time, and thus don’t realise what INSERT really does.
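
This also means that you can insert any table expression, not just a VALUES() constructor. A quick sketch (assuming a hypothetical source table some_other_table):

INSERT INTO my_table(a)
SELECT some_column
FROM some_other_table
WHERE some_column < 10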

Everything is a table. In PostgreSQL, even functions are tables:

SELECT *
FROM substring('abcde', 2, 3)

The above yields:

substring
---------
bcd
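
Set-returning functions take this one step further. PostgreSQL’s generate_series(), for instance, produces a whole table of values:

SELECT *
FROM generate_series(1, 3)

… which yields:

generate_series
---------------
              1
              2
              3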

If you’re programming in Java, you can use the analogy of the Java 8 Stream API to take this one step further. Consider the following equivalent concepts:

TABLE          : Stream<Tuple<..>>
SELECT         : map() 
DISTINCT       : distinct()
JOIN           : flatMap()
WHERE / HAVING : filter()
GROUP BY       : collect()
ORDER BY       : sorted()
UNION ALL      : concat()

With Java 8, “everything is a Stream” (as soon as you start working with Streams, at least). No matter how you transform a stream, e.g. with map() or filter(), the resulting type is always a Stream again.

We’ve written an entire article to explain this more deeply, and to compare the Stream API with SQL:
https://blog.jooq.org/2015/08/13/common-sql-clauses-and-their-equivalents-in-java-8-streams

And if you’re looking for “better streams” (i.e. streams with even more SQL semantics), do check out jOOλ, an open source library that brings SQL window functions to Java.

2. Data Generation with Recursive SQL

Common Table Expressions (also: CTE, also referred to as subquery factoring, e.g. in Oracle) are the only way to declare variables in SQL (apart from the obscure WINDOW clause that only PostgreSQL and Sybase SQL Anywhere know).

This is a powerful concept. Extremely powerful. Consider the following statement:

-- Table variables
WITH 
  t1(v1, v2) AS (SELECT 1, 2),
  t2(w1, w2) AS (
    SELECT v1 * 2, v2 * 2
    FROM t1
  )
SELECT *
FROM t1, t2

It yields

v1   v2   w1   w2
-----------------
 1    2    2    4

Using the simple WITH clause, you can specify a list of table variables (remember: everything is a table), which may even depend on each other.

That is easy to understand. This makes CTE (Common Table Expressions) already very useful, but what’s really really awesome is that they’re allowed to be recursive! Consider the following PostgreSQL example:

WITH RECURSIVE t(v) AS (
  SELECT 1     -- Seed Row
  UNION ALL
  SELECT v + 1 -- Recursion
  FROM t
)
SELECT v
FROM t
LIMIT 5

It yields

 v
---
 1
 2
 3
 4
 5 

How does it work? It’s relatively easy, once you see through the many keywords. You define a common table expression that has exactly two UNION ALL subqueries.

The first UNION ALL subquery is what I usually call the “seed row”. It “seeds” (initialises) the recursion. It can produce one or several rows on which we will recurse afterwards. Remember: everything is a table, so our recursion will happen on a whole table, not on an individual row/value.

The second UNION ALL subquery is where the recursion happens. If you look closely, you will observe that it selects from t. I.e. the second subquery is allowed to select from the very CTE that we’re about to declare. Recursively. It thus also has access to the column v, which is declared by the very CTE we’re in the process of defining.

In our example, we seed the recursion with the row (1), and then recurse by adding v + 1. The recursion is then stopped at the use-site by setting a LIMIT 5 (beware of potentially infinite recursions – just like with Java 8 Streams).
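
Alternatively, you can bound the recursion inside the CTE itself with a WHERE clause, which produces the same five rows (a sketch in PostgreSQL syntax):

WITH RECURSIVE t(v) AS (
  SELECT 1      -- Seed Row
  UNION ALL
  SELECT v + 1  -- Recursion
  FROM t
  WHERE v < 5   -- Stop condition inside the CTE
)
SELECT v
FROM t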

Side note: Turing completeness

Recursive CTEs make SQL:1999 Turing complete, which means that any program can be written in SQL! (if you’re crazy enough)

One impressive example that frequently shows up on blogs: The Mandelbrot Set, e.g. as displayed on http://explainextended.com/2013/12/31/happy-new-year-5/

WITH RECURSIVE q(r, i, rx, ix, g) AS (
  SELECT r::DOUBLE PRECISION * 0.02, i::DOUBLE PRECISION * 0.02, 
        .0::DOUBLE PRECISION      , .0::DOUBLE PRECISION, 0
  FROM generate_series(-60, 20) r, generate_series(-50, 50) i
  UNION ALL
  SELECT r, i, CASE WHEN abs(rx * rx + ix * ix) <= 2 THEN rx * rx - ix * ix END + r, 
               CASE WHEN abs(rx * rx + ix * ix) <= 2 THEN 2 * rx * ix END + i, g + 1
  FROM q
  WHERE rx IS NOT NULL AND g < 99
)
SELECT array_to_string(array_agg(s ORDER BY r), '')
FROM (
  SELECT i, r, substring(' .:-=+*#%@', max(g) / 10 + 1, 1) s
  FROM q
  GROUP BY i, r
) q
GROUP BY i
ORDER BY i

Run the above on PostgreSQL, and you’ll get something like

                             .-.:-.......==..*.=.::-@@@@@:::.:.@..*-.         =. 
                             ...=...=...::+%.@:@@@@@@@@@@@@@+*#=.=:+-.      ..-  
                             .:.:=::*....@@@@@@@@@@@@@@@@@@@@@@@@=@@.....::...:. 
                             ...*@@@@=.@:@@@@@@@@@@@@@@@@@@@@@@@@@@=.=....:...::.
                              .::@@@@@:-@@@@@@@@@@@@@@@@@@@@@@@@@@@@:@..-:@=*:::.
                              .-@@@@@-@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@.=@@@@=..:
                              ...@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@:@@@@@:.. 
                             ....:-*@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@::  
                            .....@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@-..  
                          .....@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@-:...  
                         .--:+.@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@...  
                         .==@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@-..  
                         ..+@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@-#.  
                         ...=+@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@.. 
                         -.=-@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@..:
                        .*%:@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@:@-
 .    ..:...           ..-@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
..............        ....-@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@%@=
.--.-.....-=.:..........::@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@..
..=:-....=@+..=.........@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@:.
.:+@@::@==@-*:%:+.......:@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@.
::@@@-@@@@@@@@@-:=.....:@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@:
.:@@@@@@@@@@@@@@@=:.....%@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
.:@@@@@@@@@@@@@@@@@-...:@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@:-
:@@@@@@@@@@@@@@@@@@@-..%@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@.
%@@@@@@@@@@@@@@@@@@@-..-@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@.
@@@@@@@@@@@@@@@@@@@@@::+@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@+
@@@@@@@@@@@@@@@@@@@@@@:@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@..
@@@@@@@@@@@@@@@@@@@@@@-@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@- 
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@.  

Impressive, huh?

3. Running Total Calculations

This blog is full of running total examples. They’re some of the most educational examples for learning advanced SQL, because there are at least a dozen ways to implement a running total.

A running total is easy to understand, conceptually.

In Microsoft Excel, you would simply calculate a sum (or difference) of two previous (or subsequent) values, and then use the useful crosshair cursor to pull that formula through your entire spreadsheet. You “run” that total through the spreadsheet. A “running total”.

In SQL, the best way to do that is by using window functions, another topic that this blog has covered many many times.

Window functions are a powerful concept – not so easy to understand at first, but in fact, they’re really really easy:

Window functions are aggregations / rankings on a subset of rows relative to the current row being transformed by SELECT

That’s it. 🙂

What it essentially means is that a window function can perform calculations on rows that are “above” or “below” the current row. Unlike ordinary aggregations with GROUP BY, however, they don’t collapse the rows into groups, which makes them very useful.

The syntax can be summarised as follows, with individual parts being optional:

function(...) OVER (
  PARTITION BY ...
  ORDER BY ...
  ROWS BETWEEN ... AND ...
)

So, we have any sort of function (we’ll see examples for such functions later), followed by this OVER() clause, which specifies the window. I.e. this OVER() clause defines:

  • The PARTITION: Only rows that are in the same partition as the current row will be considered for the window
  • The ORDER: The window can be ordered independently of what we’re selecting
  • The ROWS (or RANGE) frame definition: The window can be restricted to a fixed amount of rows “ahead” and “behind”

That’s all there is to window functions.

Now how does that help us calculate a running total? Consider the following data:

| ID   | VALUE_DATE | AMOUNT |    BALANCE |
|------|------------|--------|------------|
| 9997 | 2014-03-18 |  99.17 |   19985.81 |
| 9981 | 2014-03-16 |  71.44 |   19886.64 |
| 9979 | 2014-03-16 | -94.60 |   19815.20 |
| 9977 | 2014-03-16 |  -6.96 |   19909.80 |
| 9971 | 2014-03-15 | -65.95 |   19916.76 |

Let’s assume that BALANCE is what we want to calculate from AMOUNT.

Intuitively, we can immediately see that each BALANCE is the topmost BALANCE minus the sum of all AMOUNT values “above” the current row (e.g. 19886.64 = 19985.81 - 99.17).

So, in plain English, any balance can be expressed with the following pseudo SQL:

TOP_BALANCE - SUM(AMOUNT) OVER (
  "all the rows on top of the current row"
)

In real SQL, that would then be written as follows:

SUM(t.amount) OVER (
  PARTITION BY t.account_id 
  ORDER BY     t.value_date DESC,
               t.id         DESC
  ROWS BETWEEN UNBOUNDED PRECEDING
       AND     1         PRECEDING
)

Explanation:

  • The partition will calculate the sum for each bank account, not for the entire data set
  • The ordering will guarantee that transactions are ordered (within the partition) prior to summing
  • The rows clause will consider only preceding rows (within the partition, given the ordering) prior to summing

All of this will happen in-memory over the data set that has already been selected by you in your FROM .. WHERE etc. clauses, and is thus extremely fast.
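
Putting it all together, a complete query might look as follows (a sketch, assuming a trx table and the topmost balance passed in as :top_balance; the COALESCE covers the newest row, whose window frame is empty):

SELECT
  trx.id, trx.value_date, trx.amount,
  :top_balance - coalesce(
    sum(trx.amount) OVER (
      PARTITION BY trx.account_id
      ORDER BY     trx.value_date DESC,
                   trx.id         DESC
      ROWS BETWEEN UNBOUNDED PRECEDING
           AND     1         PRECEDING
    ), 0
  ) AS balance
FROM trx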

Intermezzo

Before we move on to all the other awesome tricks, consider this: We’ve seen

  • (Recursive) Common Table Expressions (CTE)
  • Window functions

Both of these features are:

  • Awesome
  • Extremely powerful
  • Declarative
  • Part of the SQL standard
  • Available in most popular RDBMS (except MySQL)
  • Very important building blocks

If anything can be concluded from this article, it is the fact that you should absolutely know these two building blocks of modern SQL.

4. Finding the Largest Series with no Gaps

Stack Overflow has this very nice feature to motivate people to stay on their website for as long as possible: badges. For the record, I have tons of them.

How do you calculate these badges? Let’s have a look at the “Enthusiast” and the “Fanatic”. These badges are awarded to anyone who spends a given number of consecutive days on the platform. Regardless of any wedding date or wife’s birthday, you HAVE TO LOG IN, or the counter starts from zero again.

Now as we’re doing declarative programming, we don’t care about maintaining any state and in-memory counters. We want to express this in the form of online analytic SQL. I.e. consider this data:

| LOGIN_TIME          |
|---------------------|
| 2014-03-18 05:37:13 |
| 2014-03-16 08:31:47 |
| 2014-03-16 06:11:17 |
| 2014-03-16 05:59:33 |
| 2014-03-15 11:17:28 |
| 2014-03-15 10:00:11 |
| 2014-03-15 07:45:27 |
| 2014-03-15 07:42:19 |
| 2014-03-14 09:38:12 |

That doesn’t help much yet. Let’s remove the time part from the timestamps. That’s easy:

SELECT DISTINCT 
  cast(login_time AS DATE) AS login_date
FROM logins
WHERE user_id = :user_id

Which yields:

| LOGIN_DATE |
|------------|
| 2014-03-18 |
| 2014-03-16 |
| 2014-03-15 |
| 2014-03-14 |

Now that we’ve learned about window functions, let’s just add a simple row number to each of these dates:

SELECT
  login_date,
  row_number() OVER (ORDER BY login_date)
FROM login_dates

Which produces:

| LOGIN_DATE | RN |
|------------|----|
| 2014-03-18 |  4 |
| 2014-03-16 |  3 |
| 2014-03-15 |  2 |
| 2014-03-14 |  1 |

Still easy. Now, what happens, if instead of selecting these values separately, we subtract them?

SELECT
  login_date,
  row_number() OVER (ORDER BY login_date) AS rn,
  login_date -
  row_number() OVER (ORDER BY login_date) AS grp
FROM login_dates

We’re getting something like this:

| LOGIN_DATE | RN | GRP        |
|------------|----|------------|
| 2014-03-18 |  4 | 2014-03-14 |
| 2014-03-16 |  3 | 2014-03-13 |
| 2014-03-15 |  2 | 2014-03-13 |
| 2014-03-14 |  1 | 2014-03-13 |

Wow. Interesting. So, 14 - 1 = 13, 15 - 2 = 13, 16 - 3 = 13, but 18 - 4 = 14.

There’s a simple explanation for this behaviour:

  1. ROW_NUMBER() never has gaps. That’s how it’s defined
  2. Our data, however, does

So when we subtract a “gapless” series of consecutive integers from a “gapful” series of non-consecutive dates, we will get the same date for each “gapless” subseries of consecutive dates, and we’ll get a new date again where the date series had gaps.

Huh.

This means we can now simply GROUP BY this arbitrary date value:

SELECT
  min(login_date), max(login_date),
  max(login_date) - 
  min(login_date) + 1 AS length
FROM login_date_groups
GROUP BY grp
ORDER BY length DESC

And we’re done. The largest series of consecutive dates with no gaps has been found:

| MIN        | MAX        | LENGTH |
|------------|------------|--------|
| 2014-03-14 | 2014-03-16 |      3 |
| 2014-03-18 | 2014-03-18 |      1 |

With the full query being:

WITH 
  login_dates AS (
    SELECT DISTINCT cast(login_time AS DATE) login_date 
    FROM logins WHERE user_id = :user_id
  ),
  login_date_groups AS (
    SELECT 
      login_date,
      login_date - row_number() OVER (ORDER BY login_date) AS grp
    FROM login_dates
  )
SELECT 
  min(login_date), max(login_date), 
  max(login_date) - min(login_date) + 1 AS length
FROM login_date_groups
GROUP BY grp
ORDER BY length DESC

Not that hard in the end, right? Of course, having the idea makes all the difference, but the query itself is really very very simple and elegant. No way you could implement some imperative-style algorithm in a leaner way than this.

Whew.

5. Finding the Length of a Series

Previously, we had seen series of consecutive values. That’s easy to deal with, as we can abuse the consecutiveness of integers. What if the definition of a “series” is less intuitive, and in addition, several series contain the same values? Consider the following data, where LENGTH is the length of each series that we want to calculate:

| ID   | VALUE_DATE | AMOUNT |     LENGTH |
|------|------------|--------|------------|
| 9997 | 2014-03-18 |  99.17 |          2 |
| 9981 | 2014-03-16 |  71.44 |          2 |
| 9979 | 2014-03-16 | -94.60 |          3 |
| 9977 | 2014-03-16 |  -6.96 |          3 |
| 9971 | 2014-03-15 | -65.95 |          3 |
| 9964 | 2014-03-15 |  15.13 |          2 |
| 9962 | 2014-03-15 |  17.47 |          2 |
| 9960 | 2014-03-15 |  -3.55 |          1 |
| 9959 | 2014-03-14 |  32.00 |          1 |

Yes, you’ve guessed right. A “series” is defined by the fact that consecutive (ordered by ID) rows have the same SIGN(AMOUNT). Check the data again, formatted as below:

| ID   | VALUE_DATE | AMOUNT |     LENGTH |
|------|------------|--------|------------|
| 9997 | 2014-03-18 | +99.17 |          2 |
| 9981 | 2014-03-16 | +71.44 |          2 |

| 9979 | 2014-03-16 | -94.60 |          3 |
| 9977 | 2014-03-16 | - 6.96 |          3 |
| 9971 | 2014-03-15 | -65.95 |          3 |

| 9964 | 2014-03-15 | +15.13 |          2 |
| 9962 | 2014-03-15 | +17.47 |          2 |

| 9960 | 2014-03-15 | - 3.55 |          1 |

| 9959 | 2014-03-14 | +32.00 |          1 |

How do we do it? “Easy” 😉 First, let’s get rid of all the noise, and add another row number:

SELECT 
  id, amount,
  sign(amount) AS sign,
  row_number() 
    OVER (ORDER BY id DESC) AS rn
FROM trx

This will give us:

| ID   | AMOUNT | SIGN | RN |
|------|--------|------|----|
| 9997 |  99.17 |    1 |  1 |
| 9981 |  71.44 |    1 |  2 |

| 9979 | -94.60 |   -1 |  3 |
| 9977 |  -6.96 |   -1 |  4 |
| 9971 | -65.95 |   -1 |  5 |

| 9964 |  15.13 |    1 |  6 |
| 9962 |  17.47 |    1 |  7 |

| 9960 |  -3.55 |   -1 |  8 |

| 9959 |  32.00 |    1 |  9 |

Now, the next goal is to produce the following table:

| ID   | AMOUNT | SIGN | RN | LO | HI |
|------|--------|------|----|----|----|
| 9997 |  99.17 |    1 |  1 |  1 |    |
| 9981 |  71.44 |    1 |  2 |    |  2 |

| 9979 | -94.60 |   -1 |  3 |  3 |    |
| 9977 |  -6.96 |   -1 |  4 |    |    |
| 9971 | -65.95 |   -1 |  5 |    |  5 |

| 9964 |  15.13 |    1 |  6 |  6 |    |
| 9962 |  17.47 |    1 |  7 |    |  7 |

| 9960 |  -3.55 |   -1 |  8 |  8 |  8 |

| 9959 |  32.00 |    1 |  9 |  9 |  9 |

In this table, we want to copy the row number value into “LO” at the “lower” end of a series, and into “HI” at the “upper” end of a series. For this we’ll be using the magical LEAD() and LAG(). LEAD() can access the n-th next row from the current row, whereas LAG() can access the n-th previous row from the current row. For example:

SELECT 
  lag(v) OVER (ORDER BY v),
  v, 
  lead(v) OVER (ORDER BY v)
FROM (
  VALUES (1), (2), (3), (4)
) t(v)

The above query produces:

| LAG | V | LEAD |
|-----|---|------|
|     | 1 |    2 |
|   1 | 2 |    3 |
|   2 | 3 |    4 |
|   3 | 4 |      |

That’s awesome! Remember, with window functions, you can perform rankings or aggregations on a subset of rows relative to the current row. In the case of LEAD() and LAG(), we simply access a single row relative to the current row, given its offset. This is useful in so many cases.

Continuing with our “LO” and “HI” example, we can simply write:

SELECT 
  trx.*,
  CASE WHEN lag(sign) 
       OVER (ORDER BY id DESC) != sign 
       THEN rn END AS lo,
  CASE WHEN lead(sign) 
       OVER (ORDER BY id DESC) != sign 
       THEN rn END AS hi
FROM trx

… in which we compare the “previous” sign (lag(sign)) with the “current” sign (sign). If they’re different, we put the row number in “LO”, because that’s the lower bound of our series.

Then we compare the “next” sign (lead(sign)) with the “current” sign (sign). If they’re different, we put the row number in “HI”, because that’s the upper bound of our series.

Finally, a little boring NULL handling to get everything right, and we’re done:

SELECT -- With NULL handling...
  trx.*,
  CASE WHEN coalesce(lag(sign) 
       OVER (ORDER BY id DESC), 0) != sign 
       THEN rn END AS lo,
  CASE WHEN coalesce(lead(sign) 
       OVER (ORDER BY id DESC), 0) != sign 
       THEN rn END AS hi
FROM trx

Next step. We want “LO” and “HI” to appear in ALL rows, not just at the “lower” and “upper” bounds of a series. E.g. like this:

| ID   | AMOUNT | SIGN | RN | LO | HI |
|------|--------|------|----|----|----|
| 9997 |  99.17 |    1 |  1 |  1 |  2 |
| 9981 |  71.44 |    1 |  2 |  1 |  2 |

| 9979 | -94.60 |   -1 |  3 |  3 |  5 |
| 9977 |  -6.96 |   -1 |  4 |  3 |  5 |
| 9971 | -65.95 |   -1 |  5 |  3 |  5 |

| 9964 |  15.13 |    1 |  6 |  6 |  7 |
| 9962 |  17.47 |    1 |  7 |  6 |  7 |

| 9960 |  -3.55 |   -1 |  8 |  8 |  8 |

| 9959 |  32.00 |    1 |  9 |  9 |  9 |

We’re using the “IGNORE NULLS” clause that can be passed to some window functions, a feature that is available at least in Redshift, Sybase SQL Anywhere, DB2, and Oracle:

SELECT 
  trx.*,
  last_value (lo) IGNORE NULLS OVER (
    ORDER BY id DESC 
    ROWS BETWEEN UNBOUNDED PRECEDING 
    AND CURRENT ROW) AS lo,
  first_value(hi) IGNORE NULLS OVER (
    ORDER BY id DESC 
    ROWS BETWEEN CURRENT ROW 
    AND UNBOUNDED FOLLOWING) AS hi
FROM trx

A lot of keywords! But the essence is always the same. From any given “current” row, we look at all the “previous values” (ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW), but ignoring all the nulls. From those previous values, we take the last value, and that’s our new “LO” value. In other words, we take the “closest preceding” “LO” value.

The same with “HI”. From any given “current” row, we look at all the “subsequent values” (ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING), but ignoring all the nulls. From the subsequent values, we take the first value, and that’s our new “HI” value. In other words, we take the “closest following” “HI” value.

Getting it 100% correct, with a little boring NULL fiddling:

SELECT -- With NULL handling...
  trx.*,
  coalesce(last_value (lo) IGNORE NULLS OVER (
    ORDER BY id DESC 
    ROWS BETWEEN UNBOUNDED PRECEDING 
    AND CURRENT ROW), rn) AS lo,
  coalesce(first_value(hi) IGNORE NULLS OVER (
    ORDER BY id DESC 
    ROWS BETWEEN CURRENT ROW 
    AND UNBOUNDED FOLLOWING), rn) AS hi
FROM trx

Finally, we’re just doing a trivial last step, keeping in mind off-by-1 errors:

SELECT
  trx.*,
  1 + hi - lo AS length
FROM trx

And we’re done. Here’s our result:

| ID   | AMOUNT | SIGN | RN | LO | HI | LENGTH|
|------|--------|------|----|----|----|-------|
| 9997 |  99.17 |    1 |  1 |  1 |  2 |     2 |
| 9981 |  71.44 |    1 |  2 |  1 |  2 |     2 |
| 9979 | -94.60 |   -1 |  3 |  3 |  5 |     3 |
| 9977 |  -6.96 |   -1 |  4 |  3 |  5 |     3 |
| 9971 | -65.95 |   -1 |  5 |  3 |  5 |     3 |
| 9964 |  15.13 |    1 |  6 |  6 |  7 |     2 |
| 9962 |  17.47 |    1 |  7 |  6 |  7 |     2 |
| 9960 |  -3.55 |   -1 |  8 |  8 |  8 |     1 |
| 9959 |  32.00 |    1 |  9 |  9 |  9 |     1 |

And the full query here:

WITH 
  trx1(id, amount, sign, rn) AS (
    SELECT id, amount, sign(amount), row_number() OVER (ORDER BY id DESC)
    FROM trx
  ),
  trx2(id, amount, sign, rn, lo, hi) AS (
    SELECT trx1.*,
    CASE WHEN coalesce(lag(sign) OVER (ORDER BY id DESC), 0) != sign 
         THEN rn END,
    CASE WHEN coalesce(lead(sign) OVER (ORDER BY id DESC), 0) != sign 
         THEN rn END
    FROM trx1
  )
SELECT 
  trx2.*, 1
  - last_value (lo) IGNORE NULLS OVER (ORDER BY id DESC 
    ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
  + first_value(hi) IGNORE NULLS OVER (ORDER BY id DESC 
    ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING)
FROM trx2

Huh. This SQL thing does start getting interesting!

Ready for more?

6. The Subset Sum Problem with SQL

This is my favourite!

What is the subset sum problem? Find a fun explanation here:
https://xkcd.com/287

And a boring one here:
https://en.wikipedia.org/wiki/Subset_sum_problem

Essentially, for each of these totals…

| ID | TOTAL |
|----|-------|
|  1 | 25150 |
|  2 | 19800 |
|  3 | 27511 |

… we want to find the “best” (i.e. the closest) sum possible, consisting of any combination of these items:

| ID   |  ITEM |
|------|-------|
|    1 |  7120 |
|    2 |  8150 |
|    3 |  8255 |
|    4 |  9051 |
|    5 |  1220 |
|    6 | 12515 |
|    7 | 13555 |
|    8 |  5221 |
|    9 |   812 |
|   10 |  6562 |

As you’re all quick with your mental mathemagic processing, you have immediately calculated these to be the best sums:

| TOTAL |  BEST | CALCULATION
|-------|-------|--------------------------------
| 25150 | 25133 | 7120 + 8150 + 9051 + 812
| 19800 | 19768 | 1220 + 12515 + 5221 + 812
| 27511 | 27488 | 8150 + 8255 + 9051 + 1220 + 812

How to do it with SQL? Easy. Just create a CTE that contains all the 2^N *possible* sums and then find the closest one for each TOTAL:

-- All the possible 2^N sums
WITH sums(sum, max_id, calc) AS (...)

-- Find the best sum per “TOTAL”
SELECT 
  totals.total,
  something_something(total - sum) AS best,
  something_something(total - sum) AS calc
FROM draw_the_rest_of_the_*bleep*_owl

As you’re reading this, you might be sceptical. But don’t worry, the solution is – again – not all that hard (although it doesn’t perform well, due to the nature of the algorithm):

WITH sums(sum, id, calc) AS (
  SELECT item, id, to_char(item) FROM items
  UNION ALL
  SELECT item + sum, items.id, calc || ' + ' || item
  FROM sums JOIN items ON sums.id < items.id
)
SELECT 
  totals.id,
  totals.total,
  min (sum) KEEP (
    DENSE_RANK FIRST ORDER BY abs(total - sum)
  ) AS best,
  min (calc) KEEP (
    DENSE_RANK FIRST ORDER BY abs(total - sum)
  ) AS calc
FROM totals 
CROSS JOIN sums
GROUP BY totals.id, totals.total

In this article, I won’t explain the details of this solution, because the example has been taken from a previous article that you can find here:

https://blog.jooq.org/2015/10/26/how-to-find-the-closest-subset-sum-with-sql/

Enjoy reading the details, but be sure to come back here for the remaining 4 tricks:

7. Capping a Running Total

So far, we’ve seen how to calculate an “ordinary” running total with SQL using window functions. That was easy. Now, how about if we cap the running total such that it never goes below zero? Essentially, we want to calculate this:

| DATE       | AMOUNT | TOTAL |
|------------|--------|-------|
| 2012-01-01 |    800 |   800 |
| 2012-02-01 |   1900 |  2700 |
| 2012-03-01 |   1750 |  4450 |
| 2012-04-01 | -20000 |     0 |
| 2012-05-01 |    900 |   900 |
| 2012-06-01 |   3900 |  4800 |
| 2012-07-01 |  -2600 |  2200 |
| 2012-08-01 |  -2600 |     0 |
| 2012-09-01 |   2100 |  2100 |
| 2012-10-01 |  -2400 |     0 |
| 2012-11-01 |   1100 |  1100 |
| 2012-12-01 |   1300 |  2400 |

So, when that big negative AMOUNT -20000 was subtracted, instead of displaying the real TOTAL of -15550, we simply display 0. In other words (or data sets):

| DATE       | AMOUNT | TOTAL |
|------------|--------|-------|
| 2012-01-01 |    800 |   800 | GREATEST(0,    800)
| 2012-02-01 |   1900 |  2700 | GREATEST(0,   2700)
| 2012-03-01 |   1750 |  4450 | GREATEST(0,   4450)
| 2012-04-01 | -20000 |     0 | GREATEST(0, -15550)
| 2012-05-01 |    900 |   900 | GREATEST(0,    900)
| 2012-06-01 |   3900 |  4800 | GREATEST(0,   4800)
| 2012-07-01 |  -2600 |  2200 | GREATEST(0,   2200)
| 2012-08-01 |  -2600 |     0 | GREATEST(0,   -400)
| 2012-09-01 |   2100 |  2100 | GREATEST(0,   2100)
| 2012-10-01 |  -2400 |     0 | GREATEST(0,   -300)
| 2012-11-01 |   1100 |  1100 | GREATEST(0,   1100)
| 2012-12-01 |   1300 |  2400 | GREATEST(0,   2400)

How will we do it? Note that we cannot simply wrap an ordinary SUM(amount) OVER (...) in GREATEST(): once a total has been capped, the capped value has to feed into all subsequent totals, so each row depends on the capped result of the previous row.

Exactly the kind of problem for obscure, vendor-specific SQL. In this case, we’re using Oracle’s MODEL clause.

How does it work? Surprisingly easy!

Just add MODEL in your SELECT, and you’re opening up a can of awesome SQL worms!

SELECT ... FROM some_table

-- Put this after any table
MODEL ... 

Once we put MODEL there, we can implement spreadsheet logic directly in our SQL statements, just as with Microsoft Excel.

The following three clauses are the most useful and widely used (i.e. used 1-2 times per year by anyone on this planet):

MODEL
  -- The spreadsheet dimensions
  DIMENSION BY ...
  
  -- The spreadsheet cell type
  MEASURES ...
  
  -- The spreadsheet formulas
  RULES ... 

The meaning of each of these three additional clauses is best explained one by one.

The DIMENSION BY clause specifies the dimensions of your spreadsheet. Unlike in MS Excel, you can have any number of dimensions in Oracle.

The MEASURES clause specifies the values that are available in each cell of your spreadsheet. Unlike in MS Excel, you can have a whole tuple in each cell in Oracle, not just a single value.

The RULES clause specifies the formulas that apply to each cell in your spreadsheet. Unlike in MS Excel, these rules / formulas are centralised at a single place, instead of being put inside of each cell.

This design makes MODEL a bit harder to use than MS Excel, but much more powerful, if you dare. The whole query will then be “trivially”:

SELECT *
FROM (
  SELECT date, amount, 0 AS total
  FROM amounts
)
MODEL 
  DIMENSION BY (row_number() OVER (ORDER BY date) AS rn)
  MEASURES (date, amount, total)
  RULES (
    total[any] = greatest(0,
      coalesce(total[cv(rn) - 1], 0) + amount[cv(rn)])
  )
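
If you’re not on Oracle, the same feedback loop can be emulated with a recursive CTE (a sketch in PostgreSQL syntax, assuming the same amounts table, numbering the rows first so that the recursion can step through them one by one):

WITH RECURSIVE
  a AS (
    SELECT date, amount,
           row_number() OVER (ORDER BY date) AS rn
    FROM amounts
  ),
  r AS (
    -- Seed row: cap the first amount
    SELECT rn, date, amount, greatest(0, amount) AS total
    FROM a
    WHERE rn = 1
    UNION ALL
    -- Recursion: each capped total feeds into the next row
    SELECT a.rn, a.date, a.amount, greatest(0, r.total + a.amount)
    FROM r
    JOIN a ON a.rn = r.rn + 1
  )
SELECT date, amount, total
FROM r
ORDER BY rn

This processes rows one at a time, so it won’t be as fast as MODEL, but it is portable.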

This whole thing is so powerful, it ships with its own whitepaper by Oracle, so rather than explaining things further here in this article, please do read the excellent whitepaper:

http://www.oracle.com/technetwork/middleware/bi-foundation/10gr1-twp-bi-dw-sqlmodel-131067.pdf

8. Time Series Pattern Recognition

If you’re into fraud detection or any other field that runs real time analytics on large data sets, time series pattern recognition is certainly not a new term to you.

If we review the “length of a series” data set, we might want to generate triggers on complex events over our time series as such:

|   ID | VALUE_DATE |  AMOUNT | LEN | TRIGGER
|------|------------|---------|-----|--------
| 9997 | 2014-03-18 | + 99.17 |   1 |
| 9981 | 2014-03-16 | - 71.44 |   4 |
| 9979 | 2014-03-16 | - 94.60 |   4 |      x
| 9977 | 2014-03-16 | -  6.96 |   4 |
| 9971 | 2014-03-15 | - 65.95 |   4 |
| 9964 | 2014-03-15 | + 15.13 |   3 |
| 9962 | 2014-03-15 | + 17.47 |   3 |
| 9960 | 2014-03-15 | +  3.55 |   3 |
| 9959 | 2014-03-14 | - 32.00 |   1 |

The rule of the above trigger is:

Trigger on the 3rd repetition of an event if the event occurs more than 3 times.

Similar to the previous MODEL clause, we can do this with an Oracle-specific clause that was added to Oracle 12c:

SELECT ... FROM some_table

-- Put this after any table to pattern-match
-- the table’s contents
MATCH_RECOGNIZE (...) 

The simplest possible application of MATCH_RECOGNIZE includes the following subclauses:

SELECT *
FROM series
MATCH_RECOGNIZE (
  -- Pattern matching is done in this order
  ORDER BY ...

  -- These are the columns produced by matches
  MEASURES ...

  -- A short specification of what rows are
  -- returned from each match
  ALL ROWS PER MATCH

  -- «Regular expressions» of events to match
  PATTERN (...)

  -- The definitions of «what is an event»
  DEFINE ...
) 

That sounds crazy. Let’s look at some example clause implementations:

SELECT *
FROM series
MATCH_RECOGNIZE (
  ORDER BY id
  MEASURES classifier() AS trg
  ALL ROWS PER MATCH
  PATTERN (S (R X R+)?)
  DEFINE
    R AS sign(R.amount) = prev(sign(R.amount)),
    X AS sign(X.amount) = prev(sign(X.amount))
) 

What do we do here?

  • We order the table by ID, which is the order in which we want to match events. Easy.
  • We then specify the values that we want as a result. We want the “MEASURE” trg, which is defined as the classifier, i.e. the literal that we’ll use in the PATTERN afterwards. Plus we want all the rows from a match.
  • We then specify a regular expression-like pattern. The pattern is an event “S” for Start, followed optionally by “R” for Repeat, “X” for our special event X, followed by one or more “R” for repeat again. If the whole pattern matches, we get SRXR or SRXRR or SRXRRR, i.e. X will be at the third position of a series of length >= 4
  • Finally, we define R and X as being the same thing: The event when SIGN(AMOUNT) of the current row is the same as SIGN(AMOUNT) of the previous row. We don’t have to define “S”. “S” is just any other row.

This query will magically produce the following output:

|   ID | VALUE_DATE |  AMOUNT | TRG |
|------|------------|---------|-----|
| 9997 | 2014-03-18 | + 99.17 |   S |
| 9981 | 2014-03-16 | - 71.44 |   R |
| 9979 | 2014-03-16 | - 94.60 |   X |
| 9977 | 2014-03-16 | -  6.96 |   R |
| 9971 | 2014-03-15 | - 65.95 |   S |
| 9964 | 2014-03-15 | + 15.13 |   S |
| 9962 | 2014-03-15 | + 17.47 |   S |
| 9960 | 2014-03-15 | +  3.55 |   S |
| 9959 | 2014-03-14 | - 32.00 |   S |

We can see a single “X” in our event stream. Exactly where we had expected it. At the 3rd repetition of an event (same sign) in a series of length > 3.

Boom!

As we don’t really care about “S” and “R” events, let’s just remove them as such:

SELECT 
  id, value_date, amount, 
  CASE trg WHEN 'X' THEN 'X' END trg
FROM series
MATCH_RECOGNIZE (
  ORDER BY id
  MEASURES classifier() AS trg
  ALL ROWS PER MATCH
  PATTERN (S (R X R+)?)
  DEFINE
    R AS sign(R.amount) = prev(sign(R.amount)),
    X AS sign(X.amount) = prev(sign(X.amount))
) 

to produce:

|   ID | VALUE_DATE |  AMOUNT | TRG |
|------|------------|---------|-----|
| 9997 | 2014-03-18 | + 99.17 |     |
| 9981 | 2014-03-16 | - 71.44 |     |
| 9979 | 2014-03-16 | - 94.60 |   X |
| 9977 | 2014-03-16 | -  6.96 |     |
| 9971 | 2014-03-15 | - 65.95 |     |
| 9964 | 2014-03-15 | + 15.13 |     |
| 9962 | 2014-03-15 | + 17.47 |     |
| 9960 | 2014-03-15 | +  3.55 |     |
| 9959 | 2014-03-14 | - 32.00 |     |

Thank you Oracle!

sql-tricks-slide-211

Again, don’t expect me to explain this any better than the excellent Oracle whitepaper already did, which I strongly recommend reading if you’re using Oracle 12c anyway:

http://www.oracle.com/ocom/groups/public/@otn/documents/webcontent/1965433.pdf

9. Pivoting and Unpivoting

If you’ve read this far, the following will be almost too embarrassingly simple:

This is our data, i.e. actors, film titles, and film ratings:

| NAME      | TITLE           | RATING |
|-----------|-----------------|--------|
| A. GRANT  | ANNIE IDENTITY  | G      |
| A. GRANT  | DISCIPLE MOTHER | PG     |
| A. GRANT  | GLORY TRACY     | PG-13  |
| A. HUDSON | LEGEND JEDI     | PG     |
| A. CRONYN | IRON MOON       | PG     |
| A. CRONYN | LADY STAGE      | PG     |
| B. WALKEN | SIEGE MADRE     | R      |

This is what we call pivoting:

| NAME      | NC-17 |  PG |   G | PG-13 |   R |
|-----------|-------|-----|-----|-------|-----|
| A. GRANT  |     3 |   6 |   5 |     3 |   1 |
| A. HUDSON |    12 |   4 |   7 |     9 |   2 |
| A. CRONYN |     6 |   9 |   2 |     6 |   4 |
| B. WALKEN |     8 |   8 |   4 |     7 |   3 |
| B. WILLIS |     5 |   5 |  14 |     3 |   6 |
| C. DENCH  |     6 |   4 |   5 |     4 |   5 |
| C. NEESON |     3 |   8 |   4 |     7 |   3 |

Observe how we kinda grouped by the actors and then “pivoted” the number of films per rating each actor played in. Instead of displaying this in a “relational” way (i.e. each group is a row), we pivoted the whole thing to produce a column per group. We can do this because we know all the possible groups in advance.

Unpivoting is the opposite, when from the above, we want to get back to the “row per group” representation:

| NAME      | RATING | COUNT |
|-----------|--------|-------|
| A. GRANT  | NC-17  |     3 |
| A. GRANT  | PG     |     6 |
| A. GRANT  | G      |     5 |
| A. GRANT  | PG-13  |     3 |
| A. GRANT  | R      |     1 |
| A. HUDSON | NC-17  |    12 |
| A. HUDSON | PG     |     4 |

It’s actually really easy. This is how we’d do it in PostgreSQL:

SELECT 
  first_name, last_name,
  count(*) FILTER (WHERE rating = 'NC-17') AS "NC-17",
  count(*) FILTER (WHERE rating = 'PG'   ) AS "PG",
  count(*) FILTER (WHERE rating = 'G'    ) AS "G",
  count(*) FILTER (WHERE rating = 'PG-13') AS "PG-13",
  count(*) FILTER (WHERE rating = 'R'    ) AS "R"
FROM actor AS a
JOIN film_actor AS fa USING (actor_id)
JOIN film AS f USING (film_id)
GROUP BY actor_id

We can append a simple FILTER clause to an aggregate function in order to count only some of the data.

In all other databases, we’d do it like this:

SELECT 
  first_name, last_name,
  count(CASE rating WHEN 'NC-17' THEN 1 END) AS "NC-17",
  count(CASE rating WHEN 'PG'    THEN 1 END) AS "PG",
  count(CASE rating WHEN 'G'     THEN 1 END) AS "G",
  count(CASE rating WHEN 'PG-13' THEN 1 END) AS "PG-13",
  count(CASE rating WHEN 'R'     THEN 1 END) AS "R"
FROM actor AS a
JOIN film_actor AS fa USING (actor_id)
JOIN film AS f USING (film_id)
GROUP BY actor_id

The nice thing here is that aggregate functions usually only consider non-NULL values, so if we make all the values NULL that are not interesting per aggregation, we’ll get the same result.
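
Unpivoting, in turn, can be emulated portably with plain UNION ALL (a sketch, assuming the pivoted result from above is available as a table or view called pivoted):

SELECT name, 'NC-17' AS rating, "NC-17" AS "COUNT" FROM pivoted
UNION ALL
SELECT name, 'PG'   , "PG"    FROM pivoted
UNION ALL
SELECT name, 'G'    , "G"     FROM pivoted
UNION ALL
SELECT name, 'PG-13', "PG-13" FROM pivoted
UNION ALL
SELECT name, 'R'    , "R"     FROM pivoted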

Now, if you’re using either SQL Server, or Oracle, you can use the built-in PIVOT or UNPIVOT clauses instead. Again, as with MATCH_RECOGNIZE, just append this new keyword after a table and get the same result:

-- PIVOTING
SELECT something, something
FROM some_table
PIVOT (
  count(*) FOR rating IN (
    'NC-17' AS "NC-17", 
    'PG'    AS "PG", 
    'G'     AS "G", 
    'PG-13' AS "PG-13", 
    'R'     AS "R"
  )
)

-- UNPIVOTING
SELECT something, something
FROM some_table
UNPIVOT (
  count    FOR rating IN (
    "NC-17" AS 'NC-17', 
    "PG"    AS 'PG', 
    "G"     AS 'G', 
    "PG-13" AS 'PG-13', 
    "R"     AS 'R'
  )
)

Easy. Next.

10. Abusing XML and JSON

First off:

JSON is just XML with less features and less syntax

Now, everyone knows that XML is awesome. The corollary is thus:

JSON is less awesome

Don’t use JSON.

Now that we’ve settled this, we can safely ignore the ongoing JSON-in-the-database hype (which most of you will regret in five years anyway), and move on to the final example: how to do XML in the database.

This is what we want to do:

Given the original XML document, we want to parse that document, unnest the comma-separated list of films per actor, and produce a denormalised representation of actors/films in a single relation.

Ready. Set. Go. This is the idea: we have three CTEs:

WITH RECURSIVE
  x(v) AS (SELECT '...'::xml),
  actors(
    actor_id, first_name, last_name, films
  ) AS (...),
  films(
    actor_id, first_name, last_name, 
    film_id, film
  ) AS (...)
SELECT * 
FROM films

In the first one, we simply parse the XML. Here with PostgreSQL:

WITH RECURSIVE
  x(v) AS (SELECT '
<actors>
  <actor>
    <first-name>Bud</first-name>
    <last-name>Spencer</last-name>
    <films>God Forgives... I Don’t, Double Trouble, They Call Him Bulldozer</films>
  </actor>
  <actor>
    <first-name>Terence</first-name>
    <last-name>Hill</last-name>
    <films>God Forgives... I Don’t, Double Trouble, Lucky Luke</films>
  </actor>
</actors>'::xml),
  actors(actor_id, first_name, last_name, films) AS (...),
  films(actor_id, first_name, last_name, film_id, film) AS (...)
SELECT * 
FROM films

Easy.

Then, we do some XPath magic to extract the individual values from the XML structure and put those into columns:

WITH RECURSIVE
  x(v) AS (SELECT '...'::xml),
  actors(actor_id, first_name, last_name, films) AS (
    SELECT
      row_number() OVER (),
      (xpath('//first-name/text()', t.v))[1]::TEXT,
      (xpath('//last-name/text()' , t.v))[1]::TEXT,
      (xpath('//films/text()'     , t.v))[1]::TEXT
    FROM unnest(xpath('//actor', (SELECT v FROM x))) t(v)
  ),
  films(actor_id, first_name, last_name, film_id, film) AS (...)
SELECT * 
FROM films

Still easy.

Finally, just a bit of recursive regular expression pattern matching magic, and we’re done!

WITH RECURSIVE
  x(v) AS (SELECT '...'::xml),
  actors(actor_id, first_name, last_name, films) AS (...),
  films(actor_id, first_name, last_name, film_id, film) AS (
    SELECT actor_id, first_name, last_name, 1, 
      regexp_replace(films, ',.+', '')
    FROM actors
    UNION ALL
    SELECT actor_id, a.first_name, a.last_name, f.film_id + 1,
      regexp_replace(a.films, '.*' || f.film || ', ?(.*?)(,.+)?', '\1')
    FROM films AS f 
    JOIN actors AS a USING (actor_id)
    WHERE a.films NOT LIKE '%' || f.film
  )
SELECT * 
FROM films
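
Given the sample document above, the result should look something like this:

| ACTOR_ID | FIRST_NAME | LAST_NAME | FILM_ID | FILM                     |
|----------|------------|-----------|---------|--------------------------|
|        1 | Bud        | Spencer   |       1 | God Forgives... I Don’t  |
|        1 | Bud        | Spencer   |       2 | Double Trouble           |
|        1 | Bud        | Spencer   |       3 | They Call Him Bulldozer  |
|        2 | Terence    | Hill      |       1 | God Forgives... I Don’t  |
|        2 | Terence    | Hill      |       2 | Double Trouble           |
|        2 | Terence    | Hill      |       3 | Lucky Luke               |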

Let’s conclude.

Conclusion

All of what this article has shown was declarative. And relatively easy. Of course, for the fun effect that I’m trying to achieve in this talk, some exaggerated SQL was chosen, and I expressly called everything “easy”. It’s not at all easy; you have to practise SQL. Like many other languages, but a bit harder because:

  1. The syntax is a bit awkward from time to time
  2. Declarative thinking is not easy. At least, it’s very different

But once you get the hang of it, declarative programming with SQL is totally worth it, as you can express complex relationships between your data in very little code, just by describing the result you want to get from the database.

Isn’t that awesome?

And if that was a bit over the top, do note that I’m happy to visit your JUG / conference to give this talk (just contact us). Or, if you want to get really down into the details of these things, we also offer this talk as a public or in-house workshop. Do get in touch! We’re looking forward to it.

The Parameterless Generic Method Antipattern

A very interesting question was posted to Stack Overflow and reddit just recently about Java generics. Consider the following method:

<X extends CharSequence> X getCharSequence() {
    return (X) "hello";
}

While the unsafe cast seems a bit wonky, and you might guess there’s something wrong here, you can still go ahead and compile the following assignment in Java 8:

Integer x = getCharSequence();

This is obviously wrong, because Integer is final, and there is thus no possible Integer subtype that can also implement CharSequence. Yet, Java’s generic type system doesn’t care about classes being final, and it thus infers the intersection type Integer & CharSequence for X prior to upcasting that type back to Integer. From a compiler perspective, all is fine. At runtime: ClassCastException.

While the above seems “obviously fishy”, the real problem lies elsewhere.

It is (almost) never correct for a method to be generic on the return type only

There are exceptions to this rule. Those exceptions are methods like:

class Collections {
    public static <T> List<T> emptyList() { ... }
}

This method has no parameters, and yet it returns a generic List<T>. Why can it guarantee correctness, regardless of the concrete inference for <T>? Because of its semantics. Regardless of whether you’re looking for an empty List<String> or an empty List<Integer>, it is possible to provide the same implementation for any such T, despite erasure, because of the emptiness (and immutability!) semantics.

Another exception is builders, such as javax.persistence.criteria.CriteriaBuilder.Coalesce, which is created from a generic, parameterless method:

<T> Coalesce<T> coalesce();

Builder methods are methods that construct initially empty objects. Emptiness is key, here.

For most other methods, however, this is not true, including the above getCharSequence() method. The only guaranteed correct return value for this method is null:

<X extends CharSequence> X getCharSequence() {
    return null;
}

… because in Java, null is the value that can be assigned (and cast) to any reference type. But that’s not the intention of the author of this method.

Think in terms of functional programming

Methods are functions (mostly), and as such, are expected not to have any side-effects. A parameterless function should always return the very same return value. Just like emptyList() does.

But in fact, these methods aren’t parameterless. They do have a type parameter <T>, or <X extends CharSequence>. Again, because of generic type erasure, this parameter “doesn’t really count” in Java, because, short of reification, it cannot be introspected from within the method / function.

So, remember this:

It is (almost) never correct for a method to be generic on the return type only

Most importantly, if your use-case is simply to avoid a pre-Java 5 cast, like:

Integer integer = (Integer) getCharSequence();

… then don’t hide the cast behind generics. Keep it explicit – at least the unsafe operation remains visible at the call site.

Want to find offending methods in your code?

I’m using Guava to scan the class path, you might use something else. This snippet will produce all the generic, parameterless methods on your class path:

import java.lang.reflect.Method;
import java.util.Comparator;
import java.util.stream.Stream;

import com.google.common.reflect.ClassPath;

public class Scanner {

    public static void main(String[] args) throws Exception {
        ClassPath
           .from(Thread.currentThread().getContextClassLoader())
           .getTopLevelClasses()
           .stream()
           .filter(info -> !info.getPackageName().startsWith("slick")
                        && !info.getPackageName().startsWith("scala"))
           .flatMap(info -> {
               try {
                   return Stream.of(info.load());
               }
               catch (Throwable ignore) {
                   return Stream.empty();
               }
           })
           .flatMap(c -> {
               try {
                   return Stream.of(c.getMethods());
               }
               catch (Throwable ignore) {
                   return Stream.<Method> of();
               }
           })
           // Generic on the return type only: type parameters, but no arguments
           .filter(m -> m.getTypeParameters().length > 0 && m.getParameterCount() == 0)
           .sorted(Comparator.comparing(Method::toString))
           .map(Method::toGenericString)
           .forEach(System.out::println);
    }
}

jOOQ Tuesdays: Ming-Yee Iu Gives Insight into Language Integrated Querying

Welcome to the jOOQ Tuesdays series. In this series, we’ll publish an article on the third Tuesday every other month where we interview someone we find exciting in our industry from a jOOQ perspective. This includes people who work with SQL, Java, Open Source, and a variety of other related topics.

Ming-Yee Iu

We have the pleasure of talking to Ming-Yee Iu in this eighth edition, who will be telling us about how different people in our industry have tackled the integration of query systems into general purpose languages, including his own library JINQ, which does so for Java.

Ming, everyone coming from C# to Java will google LINQ for Java. You have implemented just that with JINQ. What made you do it?

Jinq actually grew out of my PhD research at EPFL university in Switzerland. When I started a PhD there in 2005, I needed a thesis topic, and I heard that my supervisor Willy Zwaenepoel was interested in making it easier to write database code. I had a bit of a background with Java internals from when I was an intern with one of IBM’s JVM teams in 1997, so when I took a look at the problem, I looked at it from a lower-level systems perspective. As a result, I came up with the idea of using a bytecode rewriting scheme to rewrite certain types of Java code into database queries. There were other research groups looking at the problem at the same time, including the LINQ group. Different groups came up with different approaches based on their own backgrounds. The basic assumption was that programmers had difficulty writing database code because there was a semantic gap–the relational database model was so different from the object-oriented programming model that programmers wasted mental effort bridging the differences. The hope was that this semantic gap could be reduced by letting programmers write normal Java code and having the computer figure out how to run this code on a database. Different approaches would result in tools that could handle more complex database queries or could be more flexible in the style of code they accept.

Although I came up with an initial approach fairly quickly, it took me many years to refine the algorithms into something more robust and usable. Similar to the LINQ researchers, I found that my algorithms worked best with functional code. Because functional-style code has no side effects, it’s easier to analyze. It’s also easier to explain to programmers how to write complex code that the algorithms could still understand. Unfortunately, when I finished my PhD in 2010, Java still didn’t properly support functional programming, so I shelved the research to work on other things. But when Java 8 finally came out in 2014 with lambdas, I decided to revisit my old research. I adapted my research to make use of Java 8 lambdas and to integrate with current enterprise tools. And the result was Jinq, an open source tool that provided support for LINQ-style queries in Java.

In a recent discussion on reddit, you’ve mentioned that the Java language stewards will never integrate query systems into the language, and that LINQ has been a mistake. Yet, LINQ is immensely popular in C#. Why was LINQ a mistake?

My opinion is a little more nuanced than that. LINQ makes a lot of sense for the C# ecosystem, but I think it is totally inappropriate for Java. Different languages have different trade-offs, different philosophies, and different historical baggage. Integrating a query system into Java would run counter to the Java philosophy and would be considered a mistake. C# was designed with different trade-offs in mind, so adding feature like query integration to C# is more acceptable.

C# was designed to evolve quickly. C# regularly forces programmers to leave behind old code so that it can embrace new ways of doing things. There’s an old article on Joel on Software describing how Microsoft has two camps: the Raymond Chen camp that always tries to maintain backwards compatibility and the MSDN Magazine camp that is always evangelizing shiny new technology that may be abandoned after a few years. The Raymond Chen camp allows me to run 20-year-old Windows programs on Windows 10. The MSDN Magazine camp produces cool new technology like C# and Typescript and LINQ. There is nothing wrong with the MSDN philosophy. Many programmers prefer using languages built using this philosophy because the APIs and languages end up with less legacy cruft in them. You don’t have to understand the 30 year history of an API to figure out the proper way to use it. Apple uses this philosophy, and many programmers love it despite the fact that they have to rewrite all their code every few years to adapt to the latest APIs. For C#, adopting a technology that is immature and still evolving is fine because they can abandon it later if it doesn’t work out.

The Java philosophy is to never break backwards compatibility. Old Java code from the 1990s still compiles and runs perfectly fine on modern Java. As such, there’s a huge maintenance burden to adding new features to Java. Any feature has to be maintained for decades. Once a feature is added to Java, it can’t be changed or it might break backwards compatibility. As a result, only features that have withstood the test of time are candidates for being added to Java. When features are added to Java that haven’t yet fully matured, it “locks-in” a specific implementation and prevents the feature from evolving as people’s needs change. This can cause major headaches for the language in the future.

One example of this lock-in is Java serialization. Being able to easily write objects to disk is very convenient. But the feature locked in an architecture that isn’t flexible enough for future use-cases. People want to serialize objects to JSON or XML, but can’t do that using the existing serialization framework. Serialization has led to many security errors, and a huge amount of developer resources were required to get lambdas and serialization to work correctly together. Another example of this premature lock-in is synchronization support for all objects. At the time, it seemed very forward-looking to have multi-threading primitives built right into the language. Since every object could be used as a multi-threaded monitor, you could easily synchronize access to every object. But we now know that good multi-threaded programs avoid that sort of fine-grained synchronization. It’s better to work with higher-level synchronization primitives. All that low-level synchronization slows down the performance of both single-threaded and multi-threaded code. Even if you don’t use the feature, all Java objects have to be burdened by the overhead of having lock support. Serialization and synchronization were both added to Java with the best of intentions. But those features are now treated like “goto”: they don’t pass the smell test. If you see any code that uses those features, it usually means that the code needs extra scrutiny.

Adding LINQ-style queries to Java would likely cause similar problems. Don’t get me wrong. LINQ is a great system. It is currently the most elegant system we have now for integrating a query language into an object-oriented language. Many people love using C# specifically because of LINQ. But the underlying technology is still too immature to be added to Java. Researchers are still coming up with newer and better ways of embedding query systems into languages, so there is a very real danger of locking Java into an approach that would later be considered obsolete. Already, researchers have many improvements to LINQ that Microsoft can’t adopt without abandoning its old code.

For example, to translate LINQ expressions to database queries, Microsoft added some functionality to C# that lets LINQ inspect the abstract syntax trees of lambda expressions at runtime. This functionality is convenient, but it limits LINQ to only working with expressions. LINQ doesn’t work with statements because it can’t inspect the abstract syntax trees of lambdas containing statements. This restriction on what types of lambdas can be inspected is inelegant. Although this functionality for inspecting lambdas is really powerful, it is so restricted that very few other frameworks use it. In a general-purpose programming language, all the language primitives should be expressive enough that they can be used as building blocks for many different structures and frameworks. But this lambda inspection functionality has ended up only being useful for query frameworks like LINQ. In fact, Jinq has shown that this functionality isn’t even necessary. It’s possible to build a LINQ-style query system using only the compiled bytecode, and the resulting query system ends up being more flexible in that it can handle statements and other imperative code structures.

As programmers have gained more experience with LINQ, they have also started to wonder whether alternate approaches might work better. LINQ is supposed to make it easier for programmers to write database queries because they can write functional-style code instead of having to learn SQL. In reality, though, to use LINQ well, a programmer still needs to understand SQL too. But if a programmer already understands SQL, what advantages does LINQ give them? Would it be better to use a query system like jOOQ, which matches SQL syntax more closely and can quickly evolve to encompass new SQL features? Perhaps query systems aren’t even necessary: more and more companies are adopting NoSQL databases that don’t support queries at all.

Given how quickly our understanding of LINQ-style query systems is evolving, it would definitely be a mistake to add that functionality directly to a language like Java at the moment. Any approach might end up obsolete, and it would impose a large maintenance burden on future versions of Java. Fortunately, Java programmers can use libraries such as Jinq and jOOQ instead, which provide most of the benefits of LINQ without requiring tight language integration.

Lightbend maintains Slick – LINQ for Scala. How does JINQ compare to Slick?

They both try to provide a LINQ-style interface for querying databases. Since Slick is designed for Scala, it has great Scala integration and is able to use Scala’s more expressive programming model to provide a very elegant implementation. To get the full benefits of Slick, though, you have to embrace the Scala ecosystem.

Jinq is primarily designed for use with Java. It integrates with existing Java technologies like JPA and Hibernate. You don’t have to abandon your existing Java enterprise code when adopting Jinq, because Jinq works with your existing JPA entity classes. Jinq is designed for incremental adoption: you can selectively use it in some places and fall back to regular JPA code elsewhere. Jinq can also be used with Scala, where it’s most useful for organizations that use Scala but haven’t embraced the full Scala ecosystem. For example, Jinq allows you to use your existing Hibernate entities in your Scala code while still querying them with a modern LINQ-style functional query system.
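
To illustrate the incremental-adoption point, a Jinq query over an existing JPA entity might look like the following sketch (the Customer entity, its getCountry() getter, and the entity manager objects are hypothetical placeholders):

import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import org.jinq.jpa.JinqJPAStreamProvider;

public class JinqExample {
    // Customer is a hypothetical, pre-existing JPA entity class
    public static List<Customer> swissCustomers(
            EntityManagerFactory emf, EntityManager em) {
        JinqJPAStreamProvider streams = new JinqJPAStreamProvider(emf);

        // The lambda passed to where() is decoded and translated into a
        // WHERE clause, rather than being evaluated in memory
        return streams.streamAll(em, Customer.class)
                      .where(c -> c.getCountry().equals("CH"))
                      .toList();
    }
}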

JINQ saw its biggest improvement when Java 8 introduced the Stream API. What is your opinion about functional programming in Java?

I’m really happy that Java finally has support for lambdas. It’s a huge improvement that really makes my life as a programmer much easier. Over time, I’m hoping that the Java language stewards will be able to refine lambdas further though.

From Jinq’s perspective, one of the major weaknesses of Java 8’s lambdas is the total lack of reflection support. Jinq needs reflection to decode lambdas and translate them to queries. Since there is none, Jinq has to use slow and brittle alternate techniques to get the same information. Personally, I think the lack of reflection is a significant oversight that could weaken the entire Java ecosystem in the long term.
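
A well-known example of such a workaround (not necessarily Jinq’s exact technique) is to force the lambda to be serializable and then extract its SerializedLambda, which at least reveals which synthetic method implements the lambda:

import java.io.Serializable;
import java.lang.invoke.SerializedLambda;
import java.lang.reflect.Method;
import java.util.function.Function;

public class LambdaInspection {
    // A functional interface that is also serializable
    interface SerFunction<T, R> extends Function<T, R>, Serializable {}

    static SerializedLambda inspect(SerFunction<?, ?> f) throws Exception {
        // Serializable lambdas get a synthetic writeReplace() method.
        // Works on Java 8; newer JDKs may require --add-opens flags.
        Method writeReplace = f.getClass().getDeclaredMethod("writeReplace");
        writeReplace.setAccessible(true);
        return (SerializedLambda) writeReplace.invoke(f);
    }

    public static void main(String[] args) throws Exception {
        SerFunction<String, Integer> length = String::length;
        SerializedLambda meta = inspect(length);
        System.out.println(meta.getImplClass() + "." + meta.getImplMethodName());
    }
}

From there, a framework still has to locate and decode the implementing method’s bytecode itself, which is exactly the slow and brittle part.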

I have a few small annoyances with the lack of annotation support and lack of good JavaDoc guidelines for how to treat lambdas. The Streams API and lambda metafactories also seem a little bit overly complex to me, and I wonder if something simpler would have been better there.

From a day-to-day programming perspective though, I’ve found that the lack of syntactic sugar for calling lambdas is the main issue that has repeatedly frustrated me. It seems like a fairly minor thing, but the more I use lambdas, the more I feel that it is really important. In Java 8, it’s so easy to create and pass around lambdas, that I’m usually able to completely ignore the fact that lambdas are represented as classes with a single method. I’m able to think of my code in terms of lambdas. My mental model when I write Java 8 code is that I’m creating lambdas and passing them around. But when I actually have to invoke a lambda, the lambda magic completely breaks down. I have to stop and switch gears and think of lambdas in terms of classes. Personally, I can never remember the name of the method I need to call in order to invoke a lambda. Is it run(), accept(), consume(), or apply()? I often end up having to look up the documentation for the method name, which breaks my concentration. If Java 8 had syntactic sugar for calling lambdas, then I would never need to break out of the lambda abstraction. I would be able to create, pass around, and call lambdas without having to think about them as classes.
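
To illustrate, here are four lambdas and the four different method names needed to invoke them (a trivial, self-contained example):

import java.util.function.Consumer;
import java.util.function.Function;
import java.util.function.Supplier;

public class InvokingLambdas {
    public static void main(String[] args) {
        Runnable r = () -> System.out.println("side effect");
        Consumer<String> c = x -> System.out.println(x);
        Supplier<String> s = () -> "value";
        Function<Integer, Integer> f = i -> i + 1;

        // Same concept (a lambda), four different invocation methods:
        r.run();
        c.accept("x");
        s.get();
        f.apply(42);
    }
}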

Java 9 will introduce the Flow API for reactive interoperability. Do you plan to implement a reactive JINQ?

To be honest, I’m not too familiar with reactive APIs. Lately, I’ve been working mostly on desktop applications, so I haven’t had to deal with problems at a sufficient scale where a reactive approach would make sense.

You’ve mentioned to me in the past that you have other projects running. What are you currently working on?

After a while, it’s easy to accumulate projects. Jinq is mostly stable at the moment, though I do occasionally add bug fixes and other changes. There are still a few major features that could be added, such as support for bulk updates or improved code generation, but those are fairly major undertakings that would require some funding.

I occasionally work on a programming language called Babylscript, which is a multilingual programming language that lets you write code in a mix of French, Chinese, Arabic, and other non-English languages. As a companion project to that, I also run a website for teaching programming to kids called Programming Basics that teaches programming in 17 different languages. Currently, though, I’m spending most of my time on two projects. One is an art tool called Omber, which is a vector drawing program that specializes in advanced gradients. The other project involves using HTML5 as the UI front-end for desktop Java programs. All your UI code would still be written in Java, but instead of using AWT or Swing, you would just manipulate HTML using a standard DOM interface bound to Java. As a side benefit, all your UI code can be recompiled using GWT to JavaScript, so you can reuse your UI code for web pages too.

Further info

Thank you very much for this very interesting interview, Ming. Want to learn more about JINQ? Read about it in this previous guest post on the jOOQ blog, and watch Ming’s JVMLS 2015 talk:

Would We Still Criticise Checked Exceptions, If Java had a Better try-catch Syntax?

In the context of a previous blog post about JUnit 5, Maaartinus, one of our readers, has brought up a very interesting idea:

The only problem with try-catch is its verbosity, which is something I can live with (IMHO a lone catch would do better, the implicit try would apply to all preceding code in the block; just syntactic sugar)

Huh!

Imagine a world where the following is valid Java code:

{
    something();
}
catch (Exception e) {
    /* All exceptions from the above block */
}

Likewise:

{
    something();
}
finally {
    /* Clean up after the previous block */
}

In other languages, this is implemented exactly as such. Take PL/SQL, for instance. An ordinary block looks like this:

BEGIN
  SOMETHING();
END;

Replace curly braces by BEGIN and END keywords, and you have exactly the same thing. Now, if SOMETHING raises an exception, in PL/SQL, we can append an EXCEPTION block, which does exactly the same thing as catch in Java:

BEGIN
  SOMETHING();
EXCEPTION
  WHEN OTHERS THEN NULL;
END;

Indeed, in these very trivial cases, the try keyword seems optional, just as there is no such keyword in PL/SQL, and we don’t really need it, as the scope of the catch and/or finally blocks is very well defined (at first sight, there might be caveats, of course).

So what? We’ve saved 3 characters…

In these trivial cases, we’re not gaining a lot from the “improved” syntax. But what about the many other cases where the notoriously verbose try { ... } catch { ... } syntax gets on our nerves? Again, in PL/SQL, whenever you’re using a BEGIN .. END block, you can automatically profit from optionally adding an EXCEPTION block.

Without thinking this through thoroughly though (whew, some English language usage!), this could add immense syntactic value to Java. For instance:

lambdas

// Better:
Consumer<String> consumer = string -> {
    something();
}
catch (Exception e) {
    /* still part of the consumer */
};

// Instead of:
Consumer<String> consumer = string -> {
    try {
        something();
    }
    catch (Exception e) {
        /* still part of the consumer */
    }
};

Would that have prevented long discussions about checked exceptions in lambdas and in the Stream API?
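
Those discussions revolve around real-world friction like the following, where a checked IOException cannot escape Stream.map() and therefore has to be wrapped manually (a self-contained sketch):

import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;

public class CheckedInStreams {
    static List<String> readAll(List<Path> paths) {
        return paths.stream()
                    .map(path -> {
                        try {
                            return new String(Files.readAllBytes(path));
                        }
                        catch (IOException e) {
                            // Function.apply() declares no checked exceptions
                            throw new UncheckedIOException(e);
                        }
                    })
                    .collect(Collectors.toList());
    }
}

With the imagined catch-after-block syntax, the try nesting inside the lambda would simply disappear.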

loops

// Better:
for (String string : strings) {
    something();
}
catch (Exception e) {
    /* still part of the loop's iteration */
}

// Instead of:
for (String string : strings) {
    try {
        something();
    }
    catch (Exception e) {
        /* still part of the loop's iteration */
    }
}

Again, tons of syntactic value here!

if / else

For consistency reasons, we should allow the same thing here, although this might appear a bit esoteric to people used to Java code. But let’s think out of the box, and admit the following!

// Better:
if (check) {
    something();
}
catch (Exception e) {
    /* still part of the if branch */
}
else {
    somethingElse();
}
catch (Exception e) {
    /* still part of the else branch */
}

// Instead of:
if (check) {
    try {
        something();
    }
    catch (Exception e) {
        /* still part of the if branch */
    }
}
else {
    try {
        something();
    }
    catch (Exception e) {
        /* still part of the else branch */
    }
}

Huh!

method bodies

Last but not least, method bodies would be the ultimate entities profiting from this additional syntactic sugar. If you admit that the curly braces in methods are nothing but mandatory blocks (or mandatory BEGIN .. END constructs), then you could have:

// Better:
public void method() {
    something();
}
catch (Exception e) {
    /* still part of the method body */
}

// Instead of:
public void method() {
    try {
        something();
    }
    catch (Exception e) {
        /* still part of the method body */
    }
}

This is particularly useful for (static) initialisers, where exception handling is always a pain, as there is no way to specify a throws clause in a (static) initialiser (might be a good opportunity to fix that, too!).

class Something {
    
    // Better:
    static {
        something();
    }
    catch (Exception e) {
        /* still part of the initialiser body */
    }

    // Instead of:
    static {
        try {
            something();
        }
        catch (Exception e) {
            /* still part of the initialiser body */
        }
    }
}

Take this one step further

Of course, we wouldn’t stop here. We’d also get rid of the very peculiar requirement of putting curly braces after catch (or finally). Once we have established the above, how about also allowing:

// Better:
something();
    catch (SQLException e)
        log.info(e);
    catch (IOException e)
        log.warn(e);
    finally
        close();

// Instead of:
try {
    something();
}
catch (SQLException e) {
    log.info(e);
}
catch (IOException e) {
    log.warn(e);
}
finally {
    close();
}

Now, make exception blocks expressions, rather than statements, and suddenly, Java starts to look an awful lot like all those cool languages. Like Scala. Or Kotlin.

Conclusion

Of course, the “old” syntax would still be possible. For instance, when using the try-with-resources statement, it is inevitable. But the big advantage of such syntactic sugar is that in cases where we have to handle exceptions (namely, checked exceptions), the pain would be lessened a bit, as we could do so without nesting blocks several levels deep. Perhaps, with this syntax, we would no longer criticise checked exceptions at all?

Very interesting ideas, thanks again, Maaartinus, for sharing.

What are your thoughts?

Using jOOQ’s ExecuteListener to Prevent Write Operations on a Connection

Security is important, especially on the data access layer. Most commercial databases allow for fine-grained privilege control using database access grants. For instance, you can restrict a user’s access to a certain set of tables (or even better: views) via GRANT statements:

GRANT SELECT ON table TO user;

With this fine-grained access control, write operations on certain database objects can be prevented directly in the database.

What if that’s not possible?

Not all databases ship with sophisticated access privilege implementations, or perhaps your application cannot profit from those features for operational reasons. In that case, you should at least be able to implement security on the client, e.g. by using jOOQ’s ExecuteListener (for coarse-grained access control) or jOOQ’s VisitListener (for fine-grained access control).

An example using an ExecuteListener might look like this:

import static org.jooq.ExecuteType.READ;

import org.jooq.ExecuteContext;
import org.jooq.exception.DataAccessException;
import org.jooq.impl.DefaultExecuteListener;

class ReadOnlyListener extends DefaultExecuteListener {
    // jOOQ calls this before executing any statement
    @Override
    public void executeStart(ExecuteContext ctx) {
        if (ctx.type() != READ)
            throw new DataAccessException("No privilege to execute " + ctx.sql());
    }
}

If you hook this listener into your jOOQ Configuration, you will no longer be able to execute any write operations on that Configuration. It’s that easy!
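
Hooking it up could look like this (a sketch assuming an existing JDBC connection; the dialect is arbitrary):

import org.jooq.Configuration;
import org.jooq.DSLContext;
import org.jooq.SQLDialect;
import org.jooq.impl.DSL;
import org.jooq.impl.DefaultConfiguration;
import org.jooq.impl.DefaultExecuteListenerProvider;

// given: java.sql.Connection connection
Configuration configuration = new DefaultConfiguration()
    .set(connection)
    .set(SQLDialect.H2)
    .set(new DefaultExecuteListenerProvider(new ReadOnlyListener()));

DSLContext ctx = DSL.using(configuration);

ctx.fetch("SELECT * FROM author");  // this still works
ctx.execute("DELETE FROM author");  // this now throws DataAccessException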

For more fine-grained control (e.g. a per-table ACL), a VisitListener will do the trick. A (very much simplified) example implementation that shows what can be done can be seen here:

import static org.jooq.Clause.INSERT_INSERT_INTO;

import java.util.Arrays;
import org.jooq.Table;
import org.jooq.VisitContext;
import org.jooq.exception.DataAccessException;
import org.jooq.impl.DefaultVisitListener;

class ACLListener extends DefaultVisitListener {
    // jOOQ calls this for every query part it renders
    @Override
    public void visitStart(VisitContext context) {
        if (context.queryPart() instanceof Table
                && Arrays.asList(context.clauses()).contains(INSERT_INSERT_INTO)
                && ((Table<?>) context.queryPart()).getName().equals("AUTHOR"))
            throw new DataAccessException("No privilege to insert into AUTHOR");
    }
}

Essentially, this check prevents a client session from running insert statements into the AUTHOR table. A future version of jOOQ will ship with this kind of ACL VisitListener out of the box, when https://github.com/jOOQ/jOOQ/issues/5197 is implemented.

Using SQL to Calculate the Popularity (on Stack Overflow) of Derby, H2, and HSQLDB

Few people know about this very, very awesome feature of the Stack Exchange platform: the Stack Exchange Data Explorer.

se-data-explorer

To be found here:
http://data.stackexchange.com

As you may know, much of the Stack Exchange platform runs on SQL Server (interesting architecture details here: http://stackexchange.com/performance), and the team has had the courtesy of making a lot of data publicly available through a SQL web API. Here’s the schema that you can query:

se-data-explorer-schema

Using a running total to calculate cumulative daily questions per tag

The analytics possibilities of such a public schema are infinite. Today, we’ll look into a question that has interested a lot of users in the past: Which is the most “popular” Java in-memory database among Derby (also known as Java DB, which ships with the JDK), the popular test database H2 (see also our interview with Thomas Müller, its creator), and HSQLDB?

What we’d like to do is sum up the number of questions per database, up to any given date. This should give us one of those nice exponential curves that managers like so much.

Here’s the SQL query that we’ll run:

SELECT 
  d,
  SUM(h2)     OVER (ORDER BY d) AS h2,
  SUM(hsqldb) OVER (ORDER BY d) AS hsqldb,
  SUM(derby)  OVER (ORDER BY d) AS derby
FROM (
  SELECT 
    CAST(CreationDate AS DATE) AS d, 
    COUNT(CASE WHEN Tags LIKE '%<h2>%'     THEN 1 END) AS h2,
    COUNT(CASE WHEN Tags LIKE '%<hsqldb>%' THEN 1 END) AS hsqldb,
    COUNT(CASE WHEN Tags LIKE '%<derby>%'  THEN 1 END) AS derby
  FROM Posts
  GROUP BY CAST(CreationDate AS DATE)
) AS DailyPosts
ORDER BY d ASC

A short explanation:

The nested select "DailyPosts" creates a PIVOT table with the aggregated number of questions per database and date. We could have used the SQL Server PIVOT clause, if the Stack Exchange platform had stored tagging information in a normalised form, but the equivalent COUNT(CASE) expressions work just as nicely (see also our article about PostgreSQL’s aggregation FILTER clause for more inspiration).

Now that we have the number of posts per tag and day, all that’s left to do is sum up those numbers from the first day to any given day. That is often called a “running total”, which can be calculated very easily using the SUM() OVER() window function. For instance, if H2 got 3, 1, and 2 questions on three consecutive days, its running total on those days would be 3, 4, and 6.

Now we’re done. You can run and play around with this query here:
http://data.stackexchange.com/stackoverflow/query/469392/java-in-memory-database-popularity-by-time

The raw result is not very interesting. It’s a lot of numbers and dates. But if we plot that result in a graph / chart, we get this nice-looking curve:

se-data-explorer-derby-h2-hsqldb

As we can see, all three databases are roughly equivalent in terms of “popularity”, although H2 seems to be gaining momentum while HSQLDB is on a slight decline.

(Obviously, this “popularity” is not representative of true market share. More questions might just mean that people struggle more with the technology, or that less skilled people are using it.)

Have fun further exploring the Stack Exchange Data Explorer:
http://data.stackexchange.com

(all Stack Exchange Subscriber Content is licensed under the terms of the CC BY-SA 3.0 license. For more details, see: http://stackexchange.com/legal)

Further articles

Further articles that are interesting in the context of the displayed query:

jOOQ 4.0’s New API Will Use Annotations Only for Truly Declarative Java/SQL Programming

SQL is the only really popular and mature 4GL (Fourth Generation Programming Language), i.e. the only popular declarative language.

At the same time, SQL has proven that Turing completeness is not reserved to lesser languages like C, C++, or Java. Since SQL:1999 and its hierarchical common table expressions, SQL can safely be considered “Turing complete”. This means that any program can be written in SQL. Don’t believe it? Take, for instance, this SQL Mandelbrot set calculation, as seen in this Stack Overflow question.

mandelbrot set

Source: User Elie on http://stackoverflow.com/q/314864/521799

Wonderful! No more need for procedural and object-oriented cruft.

How we’ve been wrong so far…

At Data Geekery (the company behind jOOQ), we love SQL. And we love Java. But one thing has always bothered us in the past: Java is not really a purely declarative language. A lot of Java language constructs are real anti-patterns for the enlightened declarative programmer. For instance:

// This is bad
for (String string : strings)
    System.out.println(string);

// This is even worse
try {
    someSQLStatements();
}
catch (SQLException e) {
    someRecovery();
}

The imperative style of the above code is hardly ever useful. Programmers need to tediously tell the Java compiler and the JVM what algorithm they meant to implement, down to the single statement, when in reality, using the JIT and other advanced optimisation techniques, they don’t really have to.

Luckily, there are annotations

Since Java 5, however, there have been farsighted people in expert groups who have added a powerful new concept to the Java language: Annotations (more info here). At first, experiments were made with only a handful of limited-use annotations, like:

  • @Override
  • @SuppressWarnings

But then, even more farsighted people proceeded to combine these annotations to form completely declarative things, like a component:

@Path("/MonsterRest")
@Stateless
@WebServlet(urlPatterns = "/MonsterServlet")
@Entity
@Table(name = "MonsterEntity")
@XmlRootElement
@XmlAccessorType(XmlAccessType.FIELD)
@NamedQuery(name = "findAll", query = "SELECT c FROM Book c")
public class Book extends HttpServlet {
 
    // ======================================
    // =             Attributes             =
    // ======================================
 
    @Id
    @GeneratedValue
    private Long id;
    private String isbn;
    private Integer nbOfPage;
    private Boolean illustrations;
    private String contentLanguage;
    @Column(nullable = false)
    @Size(min = 5, max = 50)
    @XmlElement(nillable = false)
    private String title;
    private Float price;
    @Column(length = 2000)
    @Size(max = 2000)
    private String description;
    @ElementCollection
    @CollectionTable(name = "tags")
    private List<String> tags = new ArrayList<>();

    // [... remainder of the entity omitted ...]
}

Look at this beauty. Credits to Antonio Goncalves.

However, we still think that there is a lot of unnecessary object-oriented bloat in the above. Luckily, recent innovations that make Java annotations Turing complete (or even sentient?) will now finally allow us to improve upon this situation, specifically for jOOQ, which aims to model the declarative SQL language in Java. Finally, annotations are a perfect fit!

These innovations allow us to completely re-implement the entire jOOQ 4.0 API, in order to allow users to write SQL as follows:

@Select({
    @Column("FIRST_NAME"),
    @Column("LAST_NAME")
})
@From(
    table = @Table("AUTHOR"),
    join = @Join("BOOK"),
    predicate = @On(
        left = @Column("AUTHOR.ID"),
        op = @Eq,
        right = @Column("BOOK.AUTHOR_ID")
    )
)
@Where(
    predicate = @Predicate(
        left = @Column("BOOK.TITLE"),
        op = @Like,
        right = @Value("%Annotations in a Nutshell%")
    )
)
class SQLStatement {}

Just like JPA, this makes jOOQ now fully transparent and declarative, by using annotations. Developers will now be able to completely effortlessly translate their medium to highly complex SQL queries into the exact equivalent in jOOQ annotations.

Don’t worry, we’ll provide migration scripts to upgrade your legacy jOOQ 3.x application to 4.0. A working prototype is on the way and is expected to be released soon. Early adopter feedback is very welcome, so stay tuned for more exciting SQL goodness!