jOOQ Tuesdays: Chris Saxon Explains the 3 Things Every Developer Should Know About SQL

Welcome to the jOOQ Tuesdays series. In this series, we’ll publish an article on the third Tuesday every other month (today, exceptionally on a Wednesday because of technical issues) where we interview someone we find exciting in our industry from a jOOQ perspective. This includes people who work with SQL, Java, Open Source, and a variety of other related topics.


I’m very excited to feature today Chris Saxon who has worked with Oracle forever, and who is one of the brains behind the famous Ask Tom website.

Chris, you’re part of the famous Ask Tom team. Everyone working with Oracle has wound up on Ask Tom’s website at least once. You’ve answered an incredible amount of questions already. What’s it like to work for such a big community as Oracle’s?

It’s an amazing experience! My first real job was as a PL/SQL developer. My only knowledge of SQL was a couple of vaguely remembered lectures at university. Ask Tom was the main place I learned about SQL and Oracle Database. So it’s a huge honor to be on the other side, helping others get the best out of the technology.

The best part has to be the positive comments when you help someone solve a problem that’s been troubling them for days. That’s why we’re here. To help developers learn more about Oracle and improve their SQL skills. When you use the database to its full extent, you can write better, faster applications with less code!

What were your three most interesting questions, so far?

Any question that has a clear definition and a complete test case is interesting! ;) Personally, I most enjoy using SQL to solve complex problems. So the first two do just that:

1. Finding the previous row in a different group

The poster had a series of transactions. These were grouped into two types. For each row, they wanted to show the id of the previous transaction from the other group.

At first this sounds like a problem you can solve using LAG or LEAD. But these only search for values within the same group. So you need a different method.

I provided a solution using the model clause. Using this, you can generate columns based on complex, spreadsheet-like formulas. Rows in your table are effectively cells in the sheet. You identify them by defining dimensions which can be other columns or expressions. By setting the transaction type as a dimension, you can then easily reference – and assign – values from one type to the other.

This worked well. But commenters were quick to provide solutions using window functions and 12c’s match_recognize clause. Both of which were faster than my approach!
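For a flavour of the window function approach the commenters took, here’s a sketch. The table and column names are made up, and it assumes two groups, ‘A’ and ‘B’:

select id, grp,
       case grp
         when 'A' then prev_b
         else          prev_a
       end as prev_other_id
from (
  select id, grp,
         last_value(case when grp = 'A' then id end ignore nulls)
           over (order by id
                 rows between unbounded preceding and 1 preceding)
             as prev_a,
         last_value(case when grp = 'B' then id end ignore nulls)
           over (order by id
                 rows between unbounded preceding and 1 preceding)
             as prev_b
  from   transactions
);

Each row remembers the last id seen in either group strictly before it, then picks the one from the other group.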

I like this because it shows the flexibility of SQL. And it shows the value of an engaged community. No one knows everything. By sharing our knowledge and working together we can all become better developers.

2. Improving SQL that deliberately generates cartesian product

The poster had a set of abbreviations for words. For example, Saint could also be “St.” or “St”. They wanted to take text containing these words. Then generate all combinations of strings using these abbreviations.

The “obvious” solution the user had was to split the text into words. Then, for each word, join the abbreviation table, replacing the string as needed. So for a five-word string, you have five joins.

There are a couple of problems with this method. The number of joins limits the number of words. So if you have a string with seven words, but only six table joins, you won’t abbreviate the final word.

The other issue is performance. Joining the same table N times increases the work you do. If you have long sentences and/or a large number of abbreviations, the query could take a long time to run.

To overcome these you need to ask: “how can I join to the abbreviation table just once?”

The solution to do this starts the same as the original. Split the sentence into a table of words. Then join this to the abbreviations to give a row for each replacement needed.

You can then recursively walk down these rows using CTEs. These build up the sentence again, replacing words with their abbreviations as needed. A scalable solution that only needs a single pass of each table!
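Here’s a sketch of the idea. It assumes the sentence has already been split into sentence_words (pos, word), and that abbreviations (word, abbr) maps each word to its short forms; the names are illustrative, not the original poster’s schema:

with alts as (
  -- every spelling of every word: the original plus any abbreviations
  select w.pos, w.word from sentence_words w
  union all
  select w.pos, a.abbr from sentence_words w
  join   abbreviations a on a.word = w.word
), rebuilt ( pos, sentence ) as (
  select pos, cast(word as varchar2(4000)) from alts where pos = 1
  union all
  select a.pos, r.sentence || ' ' || a.word
  from   rebuilt r
  join   alts a on a.pos = r.pos + 1
)
select sentence
from   rebuilt
where  pos = ( select max(pos) from sentence_words );

Because alts can hold several spellings per position, the recursion branches at each word, producing every combination without joining the abbreviation table more than once.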

The final question relates to performance. Tom Kyte’s mantra was always “if you can do it in SQL, do it in SQL”. The reason is that a pure SQL solution is normally faster than one which combines SQL and other code. Yet a question came in that cast doubt on this:

3. Difference in performance SQL vs PL/SQL

The poster was updating a table. The new values came from another table. He was surprised that PL/SQL using bulk processing came out faster than the pure SQL approach.

The query in question was in the form:

update table1 t1
set col1 = (select t2.col2 from table2 t2 where t1.code = t2.code);

It turned out the reason was “missing” indexes. Oracle executes the subquery once for every row in table1. Unless there’s an index on table2 (code), this will full scan table2 once for every row in table1!

The PL/SQL-only approach avoided this problem by reading the whole of table2 into an array. So there was only one full scan of table2.

The problem here is that there was no index on the join condition (t1.code = t2.code). With this in place, Oracle does an index lookup of table2 for each row in table1. A massive performance improvement!

The moral being: if your SQL is “slow”, particularly compared to a combined SQL + other-language method, it’s likely you have a missing index (or two).
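In this example, that fix is a one-liner (the index name is illustrative):

create index table2_code_idx on table2 (code);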

This question again showed the strength and value of the Oracle community. Shortly after I posted the explanation, a reviewer was quick to point out the following SQL solution:

merge into table1 t1
using  table2 t2
on   (t1.code = t2.code)
when matched
  then update set t1.col1 = t2.col2;

This came out significantly faster than both the original update and PL/SQL – without needing any extra indexes!

You’re running a YouTube channel called “The Magic of SQL”. Are SQL developers magicians?

Of course they are! In fact, I’d say that all developers are magicians. As Arthur C. Clarke said:

“Any sufficiently advanced technology is indistinguishable from magic”

The amount of computing power you carry around in your phone today is mind blowing. Just ask your grandparents!

I think SQL developers have a special kind of magic though :). The ability to answer hard questions with a few lines of SQL is amazing. And the way it adapts to changes in the underlying data, giving great performance without you changing the code, is astounding.

Your Twitter account has a pinned tweet about window functions. I frequently talk to Java developers at conferences, and few of them know about window functions, even if they’ve been in databases like Oracle for a very long time. Why do you think they’re still so “obscure”?

Oracle Database has had window functions since the nineties. But many other RDBMSes have only fully supported them recently. So a combination of writing “database independent” code and people using other databases is certainly a factor.

Use of tools which hide SQL from developers is also a problem. If you’re not actively using SQL, it’s easy to overlook many of its features.

Fortunately I think this is changing. As more and more developers are realizing, SQL is a powerful language. Learning how to use it effectively is a key skill for all developers. Window functions and other SQL features mean you can write better-performing applications with less code. Who doesn’t want that? ;)

What are three things that every developer should know about SQL?

1. Understand set based processing

If you find yourself writing a cursor loop (select … from … loop), and inside that loop you run more SQL, you’re doing it wrong.

Think about it. Do you eat your cornflakes by placing one flake in your bowl, adding the milk, and eating that one piece? Then doing the same for the next. And the next. And so on? Or do you pour yourself a big bowl and eat all the flakes at once?

If you have a cursor loop with more SQL within the loop, you’re effectively doing this. There’s a lot of overhead in executing each SQL statement. This will slow you down if you have a large number of statements that each process one row. Instead you want few statements that process lots of rows where possible.

It’s also possible to do this by accident. As developers we’re taught that code reuse is A Good Thing. So if there’s an API available we’ll often use it. For example, say you’re building a batch process. This finds the unshipped orders, places them on shipments and marks them as sent.

If a ship_order function exists, you could write something like:

for order_rec in (select order_id from unshipped_orders) loop
  ship_order ( order_rec.order_id );
end loop;

The problem here is ship_order almost certainly contains SQL. SQL you’ll be executing once for every order awaiting postage. If there are only a few, this may be fine. But if there are hundreds or thousands, this process could take a long time to run.

The way to make this faster is to process all the orders in one go. You can do this with SQL like:

insert into shipments
  select … from unshipped_orders;

update unshipped_orders
set  shipment_date = sysdate;

You may counter that there’s other, non-SQL processing you need to do, such as sending emails. So you still need a query to find the order ids.

But you can overcome this! With update’s returning clause, you can get values from all the changed rows:

update unshipped_orders
set  shipment_date = sysdate
returning order_id bulk collect into order_array;

This gives you all the order ids to use as you need.
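Put together, a minimal sketch of the whole pattern might look like this, where send_shipping_email is a hypothetical stand-in for your non-SQL work:

declare
  type order_ids_t is table of unshipped_orders.order_id%type;
  order_array order_ids_t;
begin
  update unshipped_orders
  set    shipment_date = sysdate
  returning order_id bulk collect into order_array;

  for i in 1 .. order_array.count loop
    send_shipping_email ( order_array(i) );  -- hypothetical non-SQL work
  end loop;
end;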

2. Learn what an execution plan is and how to create and read one

“How can I make my SQL faster” is one of the most common classes of questions posted on Ask Tom. The trouble is there’s scant one-size-fits-all advice when it comes to SQL performance. To help we need to know what your query is, what the tables and indexes are and details about the data. But most importantly we need to know what the query is actually doing!

For example, say you want me to help you figure out a faster route to work. To do this I need to know which route you currently use and how long each part of it takes. We can then compare this against other routes, checking how far they are, expected traffic and predicted speeds. But we need the data to do this!

So when answering performance questions, the first thing we look for is an execution plan. Many confuse this with an explain plan. An explain plan is just a prediction. Often it’s wrong. And even when it’s right, we still don’t know how much work each step does.

An execution plan shows exactly what the database did. It also gives stats about how much work, how often and how long it took to process each step. Combine this with a basic understanding of indexes and join methods and you can often solve your own performance problems.
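On Oracle, for example, one common way to capture a real execution plan is the gather_plan_statistics hint followed by DBMS_XPLAN; the query itself is just a placeholder here:

select /*+ gather_plan_statistics */ *
from   orders
where  status = 'OPEN';

-- then, in the same session:
select *
from   table(dbms_xplan.display_cursor(format => 'ALLSTATS LAST'));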

3. Use bind variables

Sadly, data breaches are all too common. There hardly seems to be a week that goes by without news of a major company leaking sensitive data. And the root cause of these attacks is often SQL injection.

This is a simple, well-known attack vector. If you write vulnerable SQL in a web-enabled application, eventually you’ll be attacked.

And this isn’t something you can avoid by using NoSQL databases. SQL-injection-like attacks are possible there too!

Fortunately the solution is easy: use bind variables. Not only do these secure your application, they can improve performance too.
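As a sketch of the difference in PL/SQL (app_users is a made-up table):

create or replace procedure count_user ( p_username varchar2 ) is
  l_count number;
begin
  -- vulnerable: the input is spliced into the statement text,
  -- so the caller controls the SQL that actually runs
  execute immediate
    'select count(*) from app_users where username = '''
      || p_username || ''''
    into l_count;

  -- safe (and faster, because the statement text never changes):
  -- the input travels as a bind variable
  execute immediate
    'select count(*) from app_users where username = :name'
    into l_count
    using p_username;
end count_user;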

Make sure your company is not tomorrow’s data leak headline. Use bind variables!

Last but not least: When will Oracle have a BOOLEAN type? :)

We have a BOOLEAN type! It’s just only in PL/SQL ;P

There’s currently a push in the community for us to add a SQL BOOLEAN type. If this is a feature you’d like to see, you can vote for it on the Database Ideas forum. The more support there is, the more likely we are to implement it! But no promises ;)

Yet Another 10 Common Mistakes Java Developers Make When Writing SQL (You Won’t BELIEVE the Last One)

(Sorry for that click-bait heading. Couldn’t resist ;-) )

We’re on a mission. To teach you SQL. But mostly, we want to teach you how to appreciate SQL. You’ll love it!

Getting SQL right or wrong shouldn’t be about that You’re-Doing-It-Wrong™ attitude that can be encountered often when evangelists promote their object of evangelism. Getting SQL right should be about the fun you’ll have once you do get it right. The things you start appreciating when you notice that you can easily replace 2000 lines of slow, hard-to-maintain, and ugly imperative (or object-oriented) code with 300 lines of lean functional code (e.g. using Java 8), or even better, with 50 lines of SQL.

We’re glad to see that our blogging friends have started appreciating SQL, and most specifically window functions, after reading our posts.

So, after our previous, very popular posts:

… we’ll bring you:

Yet Another 10 Common Mistakes Java Developers Make When Writing SQL

And of course, this doesn’t apply to Java developers alone, but it’s written from the perspective of a Java (and SQL) developer. So here we go (again):

1. Not Using Window Functions

After all that we’ve been preaching, this must be our number 1 mistake in this series. Window functions are probably the coolest SQL feature of them all. They’re so incredibly useful, they should be the number one reason for anyone to switch to a better database, e.g. PostgreSQL:

If free and/or Open Source is important to you, you have absolutely no better choice than using PostgreSQL (and you’ll even get to use the free jOOQ Open Source Edition, if you’re a Java developer).

And if you’re lucky enough to work in an environment with Oracle or SQL Server (or DB2, Sybase) licenses, you get even more out of your new favourite tool.

We won’t repeat all the window function goodness in this section; we’ve blogged about them often enough.

The Cure:

Start playing with window functions. You’ll never go back, guaranteed.
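If you need a first taste, here’s a minimal sketch, assuming a typical employees table:

SELECT employee_id,
       department_id,
       salary,
       -- every row keeps its identity; no GROUP BY collapse
       AVG(salary) OVER (PARTITION BY department_id) AS dept_avg_salary,
       RANK()      OVER (ORDER BY salary DESC)       AS salary_rank
FROM   employees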

2. Not declaring NOT NULL constraints

This one was already part of a previous list where we claimed that you should add as much metadata as possible to your schema, because your database will be able to leverage that metadata for optimisations. For instance, if your database knows that a foreign key value in BOOK.AUTHOR_ID must also be contained exactly once in AUTHOR.ID, then a whole set of optimisations can be achieved in complex queries.

Now let’s have another look at NOT NULL constraints. If you’re using Oracle, NULL values will not be part of your index. This doesn’t matter if you’re expressing an IN constraint, for instance:

SELECT * FROM table
WHERE value IN (
  SELECT nullable_column FROM ...
)

But what happens with a NOT IN constraint?

SELECT * FROM table
WHERE value NOT IN (
  SELECT nullable_column FROM ...
)

Due to SQL’s slightly unintuitive way of handling NULL, there is a slight risk of the second query unexpectedly not returning any results at all, namely if there is at least one NULL value as a result from the subquery. This is true for all databases that get SQL right.

But because the index on nullable_column doesn’t contain any NULL values, Oracle has to look up the complete content in the table, resulting in a FULL TABLE SCAN. Now that is unexpected! Details about this can be seen in this article.
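Until the constraint from the cure below is in place, a defensive workaround is to exclude the NULLs in the subquery yourself:

SELECT * FROM table
WHERE value NOT IN (
  SELECT nullable_column FROM ...
  WHERE nullable_column IS NOT NULL
)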

The Cure:

Carefully review all your nullable, yet indexed columns, and check if you really cannot add a NOT NULL constraint to those columns.

The Tool:

If you’re using Oracle, use this query to detect all nullable, yet indexed columns:

SELECT
  i.table_name,
  i.index_name,
  LISTAGG(
    LPAD(i.column_position,  2) || ': ' || 
    RPAD(i.column_name    , 30) || ' '  ||
    DECODE(t.nullable, 'Y', '(NULL)', '(NOT NULL)'), 
    ', '
  ) WITHIN GROUP (ORDER BY i.column_position) 
    AS "NULLABLE columns in indexes"
FROM user_ind_columns i
JOIN user_tab_cols t
ON (t.table_name, t.column_name) = 
  ((i.table_name, i.column_name))
WHERE EXISTS (
  SELECT 1
  FROM user_tab_cols t
  WHERE (t.table_name, t.column_name, t.nullable) = 
       ((i.table_name, i.column_name, 'Y'       ))
)
GROUP BY i.table_name, i.index_name
ORDER BY i.index_name ASC;

Example output:

TABLE_NAME | INDEX_NAME   | NULLABLE columns in indexes
-----------+--------------+----------------------------
PERSON     | I_PERSON_DOB | 1: DATE_OF_BIRTH (NULL)

And then, fix it!
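Assuming DATE_OF_BIRTH really is mandatory in your data model, the fix for the example output above is a one-liner:

ALTER TABLE person MODIFY date_of_birth NOT NULL;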



3. Using PL/SQL Package State

Now, this is a boring one if you’re not using Oracle, but if you are (and you’re a Java developer), be very wary of PL/SQL package state. Are you really doing what you think you’re doing?

Yes, PL/SQL has package-state, e.g.

CREATE OR REPLACE PACKAGE pkg IS
  -- Package state here!
  n NUMBER := 1;

  FUNCTION next_n RETURN NUMBER;
END pkg;

CREATE OR REPLACE PACKAGE BODY pkg IS
  FUNCTION next_n RETURN NUMBER
  IS
  BEGIN
    n := n + 1;
    RETURN n;
  END next_n;
END pkg;

Wonderful, so you’ve created yourself an in-memory counter that generates a new number every time you call pkg.next_n. But who owns that counter? Yes, the session. Each session has its own initialised “package instance”.

But no, it’s probably not the session you might have thought of.

We Java developers connect to databases through connection pools. When we obtain a JDBC Connection from such a pool, we recycle that connection from a previous “session”, e.g. a previous HTTP Request (not HTTP Session!). But that’s not the same. The database session (probably) outlives the HTTP Request and will be inherited by the next request, possibly from an entirely different user. Now, imagine you had a credit card number in that package…?

Not The Cure:

Nope. Don’t just jump to using SERIALLY_REUSABLE packages:

CREATE OR REPLACE PACKAGE pkg IS
  PRAGMA SERIALLY_REUSABLE;
  n NUMBER := 1;

  FUNCTION next_n RETURN NUMBER;
END pkg;

Because:

  • You cannot even use that package from SQL, now (see ORA-06534).
  • Mixing this PRAGMA with regular package state from other packages just makes things a lot more complex.

So, don’t.

Not The Cure:

I know. PL/SQL can be a beast. It often seems like such a quirky language. But face it. Many things run much much faster when written in PL/SQL, so don’t give up, just yet. Dropping PL/SQL is not the solution either.

The Cure:

At all costs, try to avoid package state in PL/SQL. Think of package state like static variables in Java. While they might be useful for caches (and constants, of course) every now and then, you might not actually be accessing the state you expect. Think about load-balancers, suddenly transferring you to another JVM. Think about class loaders that might have loaded the same class twice, for some reason.

Instead, pass state as arguments through procedures and functions. This will avoid side-effects and make your code much cleaner and more predictable.

Or, obviously, persist state to some table.
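For the counter example above, for instance, a sequence keeps the state in the database instead of in the session. A sketch (the n variable would be dropped from the package spec):

CREATE SEQUENCE n_seq START WITH 2;

CREATE OR REPLACE PACKAGE BODY pkg IS
  FUNCTION next_n RETURN NUMBER
  IS
    l_n NUMBER;
  BEGIN
    -- no package variable: all sessions share one
    -- transaction-safe, database-side counter
    SELECT n_seq.NEXTVAL INTO l_n FROM dual;
    RETURN l_n;
  END next_n;
END pkg;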

4. Running the same query all the time

Master data is boring. You probably wrote some utility to get the latest version of your master data (e.g. language, locale, translations, tenant, system settings), and you query it every time it is needed.

At all costs, don’t do that. You don’t have to cache many things in your application, as modern databases have grown to be extremely fast when it comes to caching:

  • Table / column content
  • Index content
  • Query / materialized view results
  • Procedure results (if they’re deterministic)
  • Cursors
  • Execution plans

So, for your average query, there’s virtually no need for an ORM second-level cache, at least from a performance perspective (ORM caches mainly fulfil other purposes, of course).

But when you query master data, i.e. data that never changes, then network latency, traffic, and many other factors will impair your database experience.

The Cure:

Please do take 10 minutes, download Guava, and use its excellent and easy-to-set-up cache, which ships with various built-in invalidation strategies. Choose time-based invalidation (i.e. polling), choose Oracle AQ or Streams, or PostgreSQL’s NOTIFY for event-based invalidation, or just make your cache permanent, if it doesn’t matter. But don’t issue an identical master data query all the time.

… This obviously brings us to

5. Not knowing about the N+1 problem

You had a choice. At the beginning of your software product, you had to choose between using an ORM and writing your SQL manually.

So, obviously, you chose an ORM, because otherwise you wouldn’t be suffering from “N+1”. What does “N+1” mean?

The accepted answer on this Stack Overflow question explains it nicely. Essentially, you’re running:

SELECT * FROM book

-- And then, for each book:
SELECT * FROM author WHERE id = ?
SELECT * FROM author WHERE id = ?
SELECT * FROM author WHERE id = ?

Of course, you could go and tweak your hundreds of annotations to correctly prefetch or eager fetch each book’s associated author information to produce something along the lines of:

SELECT * 
FROM   book
JOIN   author 
  ON   book.author_id = author.id

But that would be an awful lot of work, and you’ll risk eager-fetching too many things that you didn’t want, resulting in another performance issue.

Maybe, you could upgrade to JPA 2.1 and use the new @NamedEntityGraph to express beautiful annotation trees like this one:

@NamedEntityGraph(
    name = "post",
    attributeNodes = {
        @NamedAttributeNode("title"),
        @NamedAttributeNode(
            value = "comments", 
            subgraph = "comments"
        )
    },
    subgraphs = {
        @NamedSubgraph(
            name = "comments",
            attributeNodes = {
                @NamedAttributeNode("content")
            }
        )
    }
)

The example was taken from this blog post by Hantsy Bai. Hantsy then goes on to explain that you can use the above beauty through the following statement:

em.createQuery("select p from Post p where p.id=:id",
               Post.class)
  .setHint("javax.persistence.fetchgraph", 
           postGraph)
  .setParameter("id", this.id)
  .getResultList()
  .get(0);

Let us all appreciate the above application of JEE standards with all due respect, and then consider…

The Cure:

You just listen to the wise words at the beginning of this article and replace thousands of lines of tedious Java / Annotatiomania™ code with a couple of lines of SQL. Because that will also likely help you prevent another issue that we haven’t even touched yet: selecting too many columns, as we have shown in previous posts.

Since you’re already using an ORM, this might just mean resorting to native SQL – or maybe you manage to express your query with JPQL. Of course, we agree with Alessio Harri in believing that you should use jOOQ together with JPA.

The Takeaway:

While the above will certainly help you work around some real-world issues that you may have with your favourite ORM, you could also take it one step further and think about it this way. After all these years of pain and suffering from the object-relational impedance mismatch, the JPA 2.1 expert group is now trying to tweak their way out of this annotation madness by adding more declarative, annotation-based fetch graph hints to JPQL queries that no one can debug, let alone maintain.

The alternative is simple and straight-forward SQL. And with Java 8, we’ll add functional transformation through the Streams API. That’s hard to beat.

But obviously, your views and experiences on that subject may differ from ours, so let’s head on to a more objective discussion about…

6. Not using Common Table Expressions

While common table expressions obviously offer readability improvements, they may also offer performance improvements. Consider the following query that I have recently encountered in a customer’s PL/SQL package (not the actual query):

SELECT round (
  (SELECT amount FROM payments WHERE id = :p_id)
    * 
  (
    SELECT e.bid
    FROM   currencies c, exchange_rates e
    WHERE  c.id     = 
      (SELECT cur_id FROM payments WHERE id = :p_id)
    AND    e.cur_id = 
      (SELECT cur_id FROM payments WHERE id = :p_id)
    AND    e.org_id = 
      (SELECT org_id FROM payments WHERE id = :p_id)
  ) / (
    SELECT c.factor
    FROM   currencies c, exchange_rates e
    WHERE  c.id     = 
      (SELECT cur_id FROM payments WHERE id = :p_id)
    AND    e.cur_id = 
      (SELECT cur_id FROM payments WHERE id = :p_id)
    AND    e.org_id = 
      (SELECT org_id FROM payments WHERE id = :p_id)
  ), 0
) 
INTO amount 
FROM dual;

So what does this do? This essentially converts a payment’s amount from one currency into another. Let’s not delve into the business logic too much, let’s head straight to the technical problem. The above query results in the following execution plan (on Oracle):

------------------------------------------------------
| Operation                         | Name           |
------------------------------------------------------
| SELECT STATEMENT                  |                |
|  TABLE ACCESS BY INDEX ROWID      | PAYMENTS       |
|   INDEX UNIQUE SCAN               | PAYM_PK        |
|   NESTED LOOPS                    |                |
|    INDEX UNIQUE SCAN              | CURR_PK        |
|     TABLE ACCESS BY INDEX ROWID   | PAYMENTS       |
|      INDEX UNIQUE SCAN            | PAYM_PK        |
|    TABLE ACCESS BY INDEX ROWID    | EXCHANGE_RATES |
|     INDEX UNIQUE SCAN             | EXCH_PK        |
|      TABLE ACCESS BY INDEX ROWID  | PAYMENTS       |
|       INDEX UNIQUE SCAN           | PAYM_PK        |
|      TABLE ACCESS BY INDEX ROWID  | PAYMENTS       |
|       INDEX UNIQUE SCAN           | PAYM_PK        |
|     NESTED LOOPS                  |                |
|      TABLE ACCESS BY INDEX ROWID  | CURRENCIES     |
|       INDEX UNIQUE SCAN           | CURR_PK        |
|        TABLE ACCESS BY INDEX ROWID| PAYMENTS       |
|         INDEX UNIQUE SCAN         | PAYM_PK        |
|      INDEX UNIQUE SCAN            | EXCH_PK        |
|       TABLE ACCESS BY INDEX ROWID | PAYMENTS       |
|        INDEX UNIQUE SCAN          | PAYM_PK        |
|       TABLE ACCESS BY INDEX ROWID | PAYMENTS       |
|        INDEX UNIQUE SCAN          | PAYM_PK        |
|  FAST DUAL                        |                |
------------------------------------------------------

The actual execution time is negligible in this case, but as you can see, the same objects are accessed again and again within the query. This is a violation of Common Mistake #4: Running the same query all the time.

The whole thing would be so much easier to read, maintain, and for Oracle to execute, if we had used a common table expression. From the original source code, observe the following thing:

-- We're always accessing a single payment:
  FROM payments WHERE id = :p_id

-- Joining currencies and exchange_rates twice:
  FROM currencies c, exchange_rates e

So, let’s factor out the payment first:

-- "payment" contains only a single payment
-- But it contains all the columns that we'll need
-- afterwards
WITH payment AS (
    SELECT cur_id, org_id, amount
    FROM   payments
    WHERE  id = :p_id
)
SELECT round(p.amount * e.bid / c.factor, 0)

-- Then, we simply don't need to repeat the
-- currencies / exchange_rates joins twice
FROM   payment p
JOIN   currencies c     ON  p.cur_id = c.id
JOIN   exchange_rates e ON  e.cur_id = p.cur_id
                        AND e.org_id = p.org_id

Note that we’ve also replaced table lists with ANSI JOINs, as suggested in our previous list.

You wouldn’t believe it’s the same query, would you? And what about the execution plan? Here it is!

---------------------------------------------------
| Operation                      | Name           |
---------------------------------------------------
| SELECT STATEMENT               |                |
|  NESTED LOOPS                  |                |
|   NESTED LOOPS                 |                |
|    NESTED LOOPS                |                |
|     FAST DUAL                  |                |
|     TABLE ACCESS BY INDEX ROWID| PAYMENTS       |
|      INDEX UNIQUE SCAN         | PAYM_PK        |
|    TABLE ACCESS BY INDEX ROWID | EXCHANGE_RATES |
|     INDEX UNIQUE SCAN          | EXCH_PK        |
|   TABLE ACCESS BY INDEX ROWID  | CURRENCIES     |
|    INDEX UNIQUE SCAN           | CURR_PK        |
---------------------------------------------------

No doubt that this is much much better.

The Cure:

If you’re lucky enough and you’re using one of those databases that supports window functions, chances are incredibly high (100%) that you also have common table expression support. (Even MySQL 8.0 will finally have them).

Common table expressions are like local variables in SQL. In every large statement, you should consider using them, as soon as you feel that you’ve written something before.

The Takeaway:

Some databases (e.g. PostgreSQL, or SQL Server) also support common table expressions for DML statements. In other words, you can write:

WITH ...
UPDATE ...

This makes DML incredibly more powerful.
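For instance, in PostgreSQL you can move rows from one table to another in a single statement (table and column names are made up):

WITH archived AS (
  DELETE FROM orders
  WHERE  shipped_at < now() - interval '5 years'
  RETURNING *
)
INSERT INTO orders_archive
SELECT * FROM archived;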

7. Not using row value expressions for UPDATEs

We’ve advertised the use of row value expressions in our previous listing. They’re very readable and intuitive, and often also promote using certain indexes, e.g. in PostgreSQL.

But few people know that they can also be used in an UPDATE statement, in most databases. Check out the following query, which I again found in a customer’s PL/SQL package (simplified again, of course):

UPDATE u
SET n = (SELECT n + 1    FROM t WHERE u.n = t.n),
    s = (SELECT 'x' || s FROM t WHERE u.n = t.n),
    x = 3;

So this query takes a subquery as a data source for updating two columns, and the third column is updated “regularly”. How does it perform? Moderately:

-----------------------------
| Operation          | Name |
-----------------------------
| UPDATE STATEMENT   |      |
|  UPDATE            | U    |
|   TABLE ACCESS FULL| U    |
|   TABLE ACCESS FULL| T    |
|   TABLE ACCESS FULL| T    |
-----------------------------

Let’s ignore the full table scans, as this query is a constructed example. The actual query could leverage indexes. But T is accessed twice, i.e. in both subqueries. Oracle didn’t seem to be able to apply scalar subquery caching in this case.

To the rescue: row value expressions. Let’s simply rephrase our UPDATE to this:

UPDATE u
SET (n, s) = ((
      SELECT n + 1, 'x' || s FROM t WHERE u.n = t.n
    )),
    x = 3;

Let’s ignore the funny, Oracle-specific double-parentheses syntax for the right hand side of such a row value expression assignment, but let’s appreciate the fact that we can easily assign a new value to the tuple (n, s) in one go! Note, we could have also written this, instead, and assign x as well:

UPDATE u
SET (n, s, x) = ((
      SELECT n + 1, 'x' || s, 3 
      FROM t WHERE u.n = t.n
    ));

As you will have expected, the execution plan has also improved, and T is accessed only once:

-----------------------------
| Operation          | Name |
-----------------------------
| UPDATE STATEMENT   |      |
|  UPDATE            | U    |
|   TABLE ACCESS FULL| U    |
|   TABLE ACCESS FULL| T    |
-----------------------------

The Cure:

Use row value expressions. Wherever you can. They make your SQL code incredibly more expressive, and chances are, they make it faster, as well.

Note that the above is supported by jOOQ’s UPDATE statement. This is the moment we would like to make you aware of this cheap, in-article advertisement:

jOOQ - The best way to write SQL in Java

;-)

8. Using MySQL when you could use PostgreSQL

Disclaimer: I wrote this article in 2014. Since then, MySQL has made tremendous progress, so I wouldn’t bet all my money on PostgreSQL any longer, when comparing it with MySQL! So, read this tip with the 2014 context in mind!

To some, this may appear to be a bit of a hipster discussion. But let’s consider the facts:

  • MySQL claims to be the “most popular Open Source database”.
  • PostgreSQL claims to be the “most advanced Open Source database”.

Let’s consider a bit of history. MySQL has always been very easy to install, maintain, and it has had a great and active community. This has led to MySQL still being the RDBMS of choice with virtually every web hoster on this planet. Those hosters also host PHP, which was equally easy to install and maintain.

BUT!

We Java developers tend to have an opinion about PHP, right? It’s summarised by this image here:

The PHP Hammer

Well, it works, but how does it work?

The same can be said about MySQL. MySQL has always worked somehow, but while commercial databases like Oracle have made tremendous progress both in terms of query optimisation and feature scope, MySQL has hardly moved in the last decade.

Many people choose MySQL primarily because of its price (USD $ 0.00). But often, the same people have found MySQL to be slow and quickly concluded that SQL is slow per se – without evaluating the options. This is also why all NoSQL stores compare themselves with MySQL, not with Oracle, the database that has been winning the Transaction Processing Performance Council’s (TPC) benchmarks almost forever. Some examples:

While the last article bluntly adds “(and other RDBMS)” it doesn’t go into any sort of detail whatsoever, what those “other RDBMS” do wrong. It really only compares MongoDB with MySQL.

The Cure:

We say: Stop complaining about SQL, when in fact, you’re really complaining about MySQL. There are at least four very popular databases out there that are incredibly good. These are:

(just kidding about the last one, of course)

Again, remember, this was said in 2014. Today, MySQL 8.0 has made tremendous progress, so please review this opinion!

The Takeaway:

Don’t fall for aggressive NoSQL marketing. 10gen is an extremely well-funded company, even if MongoDB continues to disappoint, technically.

The same is true for Datastax.

Both companies are solving a problem that few people have. They’re selling us niche products as commodity, making us think that our real commodity databases (the RDBMS) no longer fulfil our needs. They are well-funded and have big marketing teams to throw blunt claims around.

In the meantime, PostgreSQL just got even better, and you, as a reader of this blog / post, are about to bet on the winning team :-)


The Disclaimer:

This article has been quite strongly against MySQL. We don’t mean to talk badly about a database that perfectly fulfils its purpose, as this isn’t a black and white world. Heck, you can get happy with SQLite in some situations. MySQL perfectly serves its purpose as the cheap, easy-to-use, easy-to-install commodity database. We just wanted to make you aware of the fact that you’re expressly choosing the cheap, not-so-good database, rather than the cheap, awesome one.

9. Forgetting about UNDO / REDO logs

We have claimed that MERGE statements or bulk / batch updates are good. That’s correct, but nonetheless, you should be wary when updating huge data sets in transactional contexts. If your transaction “takes too long”, i.e. if you’re updating 10 million records at a time, you will run into two problems:

  • You increase the risk of race conditions if another process is also writing to the same table. This may cause a rollback of their or of your transaction, possibly forcing you to run the huge update all over again.
  • You cause a lot of contention on your system, because every other transaction / session that wants to see the data you’re about to update will have to temporarily roll back all of your updates first, before they reach the state on disk that was there before your huge update. That’s the price of ACID.

One way to work around this issue is to allow for other sessions to read uncommitted data.

Another way to work around this issue is to frequently commit your own work, e.g. after 1000 inserts / updates.

In any case, you will have to make a compromise. Frequent commits will produce the risk of an inconsistent database in the event of the multi-million update going wrong after 5 million (committed) records. A rollback would then mean to revert all database changes towards a backup.

The Cure:

There is no definitive cure to this issue. But beware that you are very very rarely in a situation where it is OK to simply update 10 million records of a live and online table outside of an actual scheduled maintenance window. The simplest acceptable workaround is indeed to commit your work after N inserts / updates.
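A sketch of that workaround in PL/SQL (big_table and its status column are made up; note that your process must be restartable, as a crash leaves the table half-updated):

BEGIN
  LOOP
    UPDATE big_table
    SET    status = 'PROCESSED'
    WHERE  status = 'PENDING'
    AND    rownum <= 10000;  -- at most N rows per pass

    EXIT WHEN SQL%ROWCOUNT = 0;
    COMMIT;  -- keeps the undo footprint small
  END LOOP;
END;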

The Takeaway:

By this time, NoSQL aficionados will claim (again due to excessive marketing by aforementioned companies) that NoSQL has solved this by dropping schemas and typesafety. “Don’t update, just add another property!” – they said.

But that’s not true!

First off, I can add columns to my database without any issue at all. An ALTER TABLE .. ADD statement typically executes in an instant, even on live databases. Filling the column with data doesn’t bother anyone either, because no one reads the column yet (remember, don’t SELECT * !). So adding columns in an RDBMS is as cheap as adding JSON properties to a MongoDB document.
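For example (the new column is made up):

ALTER TABLE book ADD subtitle VARCHAR2(200);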

But what about altering columns? Removing them? Merging them?

It is simply not true that denormalisation will get you very far. Denormalisation is always a short-term win for the developer. Hardly a long-term win for the operations teams. Having redundant data in your database for the sake of speeding up an ALTER TABLE statement is like sweeping dirt under the carpet.

Don’t believe the marketers. And while you’re at it, perform some doublethink and forget that we’re SQL tool vendors ourselves ;-) Here’s again the “correct” message:

jOOQ - The best way to write SQL in Java

10. Not using the BOOLEAN type correctly

This is not really a mistake per se. It’s just again something that hardly anyone knows. When the SQL:1999 standard introduced the new BOOLEAN data type, they really did it right. Because before, we already had something like booleans in SQL. We’ve had the <search condition> in SQL-92, which is essentially a predicate for use in WHERE, ON, and HAVING clauses, as well as in CASE expressions.

SQL:1999, however, simply defined the new <boolean value expression> as a regular <value expression>, and redefined the <search condition> as such:

<search condition> ::=
    <boolean value expression>

Done! Now, for most of us Java / Scala / etc. developers, this doesn’t seem like such an innovation. Heck, it’s a boolean. Obviously it can be interchangeably used as predicate and as variable.

But in the mind-set of the keyword-heavy SQL folks who have taken inspiration from COBOL when designing the language, this was quite a step forward.

Now, what does this mean? This means that you can use any predicate also as a column! For instance:

SELECT a, b, c
FROM (
  SELECT EXISTS (SELECT ...) a,
         MY_COL IN (1, 2, 3) b,
         3 BETWEEN 4 AND 5   c
  FROM   MY_TABLE
) t
WHERE a AND b AND NOT(c)

This is a bit of a dummy query, agreed, but are you aware of how powerful this is?

Luckily, again, PostgreSQL fully supports this (unlike Oracle, which still doesn’t have any BOOLEAN data type in SQL).

The Cure:

Every now and then, using BOOLEAN types feels very right, so do it! You can transform boolean value expressions into predicates and predicates into boolean value expressions. They’re the same. This makes SQL all so powerful.

Conclusion

SQL has evolved steadily over the past years through great standards like SQL:1999, SQL:2003, SQL:2008 and now SQL:2011. It is the only surviving mainstream declarative language, now that XQuery can be considered pretty dead for the mainstream. It can be easily mixed with procedural languages, as PL/SQL and T-SQL (and other procedural dialects) have shown. It can be easily mixed with object-oriented or functional languages, as jOOQ has shown.

At Data Geekery, we believe that SQL is the best way to query data. You don’t agree with any of the above? That’s fine, you don’t have to. Sometimes, even we agree with Winston Churchill who is known to have said:

SQL is the worst form of database querying, except for all the other forms.

But as Yakov Fain has recently put it:

You can run from SQL, but you can’t hide

So, let’s better get back to work and learn this beast! Thanks for reading.