Java 8 Friday: 10 Subtle Mistakes When Using the Streams API

At Data Geekery, we love Java. And as we’re really into jOOQ’s fluent API and query DSL, we’re absolutely thrilled about what Java 8 will bring to our ecosystem.

Java 8 Friday

Every Friday, we’re showing you a couple of nice new tutorial-style Java 8 features, which take advantage of lambda expressions, extension methods, and other great stuff. You’ll find the source code on GitHub.

10 Subtle Mistakes When Using the Streams API

We’ve done all the SQL mistakes lists before.

But we haven’t done a top 10 mistakes list with Java 8 yet! For today’s occasion (it’s Friday the 13th), we’ll catch up with what will go wrong in YOUR application when you’re working with Java 8. (It won’t happen to us, as we’re stuck with Java 6 for a while longer.)

1. Accidentally reusing streams

Wanna bet? This will happen to everyone at least once. Like existing “streams” (e.g. InputStream), a stream can be consumed only once. The following code won’t work:

IntStream stream = IntStream.of(1, 2);
stream.forEach(System.out::println);

// That was fun! Let's do it again!
stream.forEach(System.out::println);

You’ll get a

java.lang.IllegalStateException: 
  stream has already been operated upon or closed

So be careful when consuming your stream. It can be done only once.
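
If you do need to traverse the same values twice, a common workaround is to re-create the stream from its source each time, e.g. through a Supplier. A minimal sketch:

// import java.util.function.Supplier;
// import java.util.stream.IntStream;

Supplier<IntStream> streams = () -> IntStream.of(1, 2);

streams.get().forEach(System.out::println);

// That was fun! Let's do it again, with a fresh stream:
streams.get().forEach(System.out::println);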

2. Accidentally creating “infinite” streams

You can create infinite streams quite easily without noticing. Take the following example:

// Will run indefinitely
IntStream.iterate(0, i -> i + 1)
         .forEach(System.out::println);

The whole point of streams is the fact that they can be infinite, if you design them to be. The only problem is that you might not have wanted that. So, be sure to always put proper limits:

// That's better
IntStream.iterate(0, i -> i + 1)
         .limit(10)
         .forEach(System.out::println);

3. Accidentally creating “subtle” infinite streams

We can’t say this enough. You WILL eventually create an infinite stream, accidentally. Take the following stream, for instance:

IntStream.iterate(0, i -> ( i + 1 ) % 2)
         .distinct()
         .limit(10)
         .forEach(System.out::println);

So…

  • we generate alternating 0’s and 1’s
  • then we keep only distinct values, i.e. a single 0 and a single 1
  • then we limit the stream to a size of 10
  • then we consume it

Well… the distinct() operation doesn’t know that the function supplied to the iterate() method will produce only two distinct values. It might expect more than that. So it’ll forever consume new values from the stream, and the limit(10) will never be reached. Tough luck, your application stalls.

4. Accidentally creating “subtle” parallel infinite streams

We really need to insist that you might accidentally try to consume an infinite stream. Let’s assume you believe that the distinct() operation should be performed in parallel. You might be writing this:

IntStream.iterate(0, i -> ( i + 1 ) % 2)
         .parallel()
         .distinct()
         .limit(10)
         .forEach(System.out::println);

Now, we’ve already seen that this will run forever. But previously, at least, you only consumed one CPU on your machine. Now, you’ll probably consume four of them, potentially occupying pretty much all of your system with an accidental infinite stream consumption. That’s pretty bad. You’ll probably have to hard-reboot your server / development machine after that. Have a last look at what my laptop looked like prior to exploding:

If I were a laptop, this is how I’d like to go.

5. Mixing up the order of operations

So, why did we insist that you’ll definitely, accidentally create infinite streams? It’s simple: because you may just accidentally do it. The above stream can be perfectly consumed if you switch the order of limit() and distinct():

IntStream.iterate(0, i -> ( i + 1 ) % 2)
         .limit(10)
         .distinct()
         .forEach(System.out::println);

This now yields:

0
1

Why? Because we first limit the infinite stream to 10 values (0 1 0 1 0 1 0 1 0 1), before we reduce the limited stream to the distinct values contained in it (0 1).

Of course, this may no longer be semantically correct, because you really wanted the first 10 distinct values from a set of data (you just happened to have “forgotten” that the data is infinite). No one really wants 10 arbitrary values, only to then reduce them to the distinct ones among them.

If you’re coming from a SQL background, you might not expect such differences. Take SQL Server 2012, for instance. The following two SQL statements are the same:

-- Using TOP
SELECT DISTINCT TOP 10 *
FROM i
ORDER BY ..

-- Using FETCH
SELECT *
FROM i
ORDER BY ..
OFFSET 0 ROWS
FETCH NEXT 10 ROWS ONLY

So, as a SQL person, you might not be as aware of the importance of the order of streams operations.


6. Mixing up the order of operations (again)

Speaking of SQL, if you’re a MySQL or PostgreSQL person, you might be used to the LIMIT .. OFFSET clause. SQL is full of subtle quirks, and this is one of them. The OFFSET clause is applied FIRST, as suggested in SQL Server 2012’s (i.e. the SQL:2008 standard’s) syntax.

If you translate MySQL / PostgreSQL’s dialect directly to streams, you’ll probably get it wrong:

IntStream.iterate(0, i -> i + 1)
         .limit(10) // LIMIT
         .skip(5)   // OFFSET
         .forEach(System.out::println);

The above yields

5
6
7
8
9

Yes. It doesn’t continue after 9, because the limit() is now applied first, producing (0 1 2 3 4 5 6 7 8 9). skip() is applied after, reducing the stream to (5 6 7 8 9). Not what you may have intended.
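
By the way: to get actual “LIMIT 10 OFFSET 5” semantics with streams, apply skip() before limit():

// OFFSET first, then LIMIT
IntStream.iterate(0, i -> i + 1)
         .skip(5)    // OFFSET
         .limit(10)  // LIMIT
         .forEach(System.out::println);

This yields (5 6 7 8 9 10 11 12 13 14), i.e. 10 values starting after the first 5.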

BEWARE of the LIMIT .. OFFSET vs. "OFFSET .. LIMIT" trap!

7. Walking the file system with filters

We’ve blogged about this before. What appears to be a good idea is to walk the file system using filters:

Files.walk(Paths.get("."))
     .filter(p -> !p.toFile().getName().startsWith("."))
     .forEach(System.out::println);

The above stream appears to be walking only through non-hidden directories, i.e. directories that do not start with a dot. Unfortunately, you’ve again made mistakes #5 and #6. walk() has already produced the whole stream of subdirectories of the current directory. Lazily, though, but logically it contains all sub-paths. Now, the filter will correctly filter out paths whose names start with a dot “.”. E.g. .git or .idea will not be part of the resulting stream. But these paths will be: .\.git\refs, or .\.idea\libraries. Not what you intended.

Now, don’t fix this by writing the following:

Files.walk(Paths.get("."))
     .filter(p -> !p.toString().contains(File.separator + "."))
     .forEach(System.out::println);

While that will produce the correct output, it will still do so by traversing the complete directory subtree, recursing into all subdirectories of “hidden” directories.

I guess you’ll have to resort to good old JDK 1.0 File.list() again. The good news is, FilenameFilter and FileFilter are both functional interfaces.
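
For what it’s worth, here’s a minimal sketch of such a manual walk, pruning hidden directories before recursing into them (the walk() method is ours, not from the JDK):

// import java.io.File;

// FileFilter is a functional interface, so a lambda works just fine
static void walk(File dir) {
    // Hidden files and directories are pruned here, before any recursion
    File[] files = dir.listFiles(f -> !f.getName().startsWith("."));

    if (files != null) {
        for (File file : files) {
            System.out.println(file);

            if (file.isDirectory())
                walk(file);
        }
    }
}

// Usage:
// walk(new File("."));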

8. Modifying the backing collection of a stream

While you’re iterating a List, you must not modify that same list in the iteration body. That was true before Java 8, but it might become more tricky with Java 8 streams. Consider the following list from 0..9:

// Of course, we create this list using streams
// (requires: import static java.util.stream.Collectors.toCollection;)
List<Integer> list = IntStream
    .range(0, 10)
    .boxed()
    .collect(toCollection(ArrayList::new));

Now, let’s assume that we want to remove each element while consuming it:

list.stream()
    // remove(Object), not remove(int)!
    .peek(list::remove)
    .forEach(System.out::println);

Interestingly enough, this will work for some of the elements! The output you might get is this one:

0
2
4
6
8
null
null
null
null
null
java.util.ConcurrentModificationException

If we introspect the list after catching that exception, there’s a funny finding. We’ll get:

[1, 3, 5, 7, 9]

Heh, it “worked” for all the odd numbers. Is this a bug? No, it looks like a feature. If you delve into the JDK code, you’ll find this comment in ArrayList.ArrayListSpliterator:

/*
 * If ArrayLists were immutable, or structurally immutable (no
 * adds, removes, etc), we could implement their spliterators
 * with Arrays.spliterator. Instead we detect as much
 * interference during traversal as practical without
 * sacrificing much performance. We rely primarily on
 * modCounts. These are not guaranteed to detect concurrency
 * violations, and are sometimes overly conservative about
 * within-thread interference, but detect enough problems to
 * be worthwhile in practice. To carry this out, we (1) lazily
 * initialize fence and expectedModCount until the latest
 * point that we need to commit to the state we are checking
 * against; thus improving precision.  (This doesn't apply to
 * SubLists, that create spliterators with current non-lazy
 * values).  (2) We perform only a single
 * ConcurrentModificationException check at the end of forEach
 * (the most performance-sensitive method). When using forEach
 * (as opposed to iterators), we can normally only detect
 * interference after actions, not before. Further
 * CME-triggering checks apply to all other possible
 * violations of assumptions for example null or too-small
 * elementData array given its size(), that could only have
 * occurred due to interference.  This allows the inner loop
 * of forEach to run without any further checks, and
 * simplifies lambda-resolution. While this does entail a
 * number of checks, note that in the common case of
 * list.stream().forEach(a), no checks or other computation
 * occur anywhere other than inside forEach itself.  The other
 * less-often-used methods cannot take advantage of most of
 * these streamlinings.
 */

Now, check out what happens when we tell the stream to produce sorted() results:

list.stream()
    .sorted()
    .peek(list::remove)
    .forEach(System.out::println);

This will now produce the following, “expected” output:

0
1
2
3
4
5
6
7
8
9

And the list after stream consumption? It is empty:

[]

So, all elements are consumed, and removed correctly. The sorted() operation is a “stateful intermediate operation”, which means that subsequent operations no longer operate on the backing collection, but on an internal state. It is now “safe” to remove elements from the list!

Well… can we really? Let’s proceed with a parallel(), sorted() removal:

list.stream()
    .sorted()
    .parallel()
    .peek(list::remove)
    .forEach(System.out::println);

This now yields:

7
6
2
5
8
4
1
0
9
3

And the list contains

[8]

Eek. We didn’t remove all elements!? Free beers (and jOOQ stickers) go to anyone who solves this streams puzzler!

This all appears quite random and subtle. We can only suggest that you never actually modify a backing collection while consuming a stream. It just doesn’t work.
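
If conditional removal is what you were really after, Java 8’s new Collection.removeIf() does it safely:

// Removes all matching elements in one go, no stream involved
list.removeIf(i -> i % 2 == 0);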

9. Forgetting to actually consume the stream

What do you think the following stream does?

IntStream.rangeClosed(1, 5)
         .peek(System.out::println)
         .peek(i -> {
              if (i == 5)
                  throw new RuntimeException("bang");
          });

When you read this, you might think that it will print (1 2 3 4 5) and then throw an exception. But that’s not correct. It won’t do anything. The stream just sits there, never having been consumed.

As with any fluent API or DSL, you might actually forget to call the “terminal” operation. This might be particularly true when you use peek(), as peek() looks an awful lot like forEach().
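
The fix is simple, of course: actually consume the stream with a terminal operation, e.g.:

// Now with a terminal operation: prints 1 2 3 4 5, then throws "bang"
IntStream.rangeClosed(1, 5)
         .peek(System.out::println)
         .forEach(i -> {
             if (i == 5)
                 throw new RuntimeException("bang");
         });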

This can happen with jOOQ just the same, when you forget to call execute() or fetch():

DSL.using(configuration)
   .update(TABLE)
   .set(TABLE.COL1, 1)
   .set(TABLE.COL2, "abc")
   .where(TABLE.ID.eq(3));

Oops. No execute().

jOOQ, the best way to write SQL in Java

Yes, the “best” way – with 1-2 caveats ;-)

10. Parallel stream deadlock

This is now a real goodie for the end!

All concurrent systems can run into deadlocks, if you don’t properly synchronise things. While finding a real-world example isn’t obvious, finding a forced example is. The following parallel() stream is guaranteed to run into a deadlock:

Object[] locks = { new Object(), new Object() };

IntStream
    .range(1, 5)
    .parallel()
    .peek(Unchecked.intConsumer(i -> {
        synchronized (locks[i % locks.length]) {
            Thread.sleep(100);

            synchronized (locks[(i + 1) % locks.length]) {
                Thread.sleep(50);
            }
        }
    }))
    .forEach(System.out::println);

Note the use of Unchecked.intConsumer(), which wraps a org.jooq.lambda.fi.util.function.CheckedIntConsumer (a variant of the functional IntConsumer interface that is allowed to throw checked exceptions) in a regular IntConsumer.

Well. Tough luck for your machine. Those threads will be blocked forever :-)

The good news is, it has never been easier to produce a schoolbook example of a deadlock in Java!

For more details, see also Brian Goetz’s answer to this question on Stack Overflow.

Conclusion

With streams and functional thinking, we’ll run into a massive amount of new, subtle bugs. Most of these bugs can be prevented only through practice and staying focused. You have to think about how to order your operations. You have to think about whether your streams may be infinite.

Streams (and lambdas) are a very powerful tool. But it’s a tool that we need to get the hang of, first.

Stay tuned for more exciting Java 8 articles on this blog.

Yet Another 10 Common Mistakes Java Developers Make When Writing SQL (You Won’t BELIEVE the Last One)

(Sorry for that click-bait heading. Couldn’t resist ;-) )

We’re on a mission. To teach you SQL. But mostly, we want to teach you how to appreciate SQL. You’ll love it!

Getting SQL right or wrong shouldn’t be about that You’re-Doing-It-Wrong™ attitude that is often encountered when evangelists promote their object of evangelism. Getting SQL right should be about the fun you’ll have once you do get it right. The things you start appreciating when you notice that you can easily replace 2000 lines of slow, hard-to-maintain, and ugly imperative (or object-oriented) code with 300 lines of lean functional code (e.g. using Java 8), or even better, with 50 lines of SQL.

We’re glad to see that our blogging friends have started appreciating SQL, and most specifically window functions, after reading our posts.

So, after our previous, very popular posts, we’ll bring you:

Yet Another 10 Common Mistakes Java Developers Make When Writing SQL

And of course, this doesn’t apply to Java developers alone, but it’s written from the perspective of a Java (and SQL) developer. So here we go (again):

1. Not Using Window Functions

After all that we’ve been preaching, this must be our number 1 mistake in this series. Window functions are probably the coolest SQL feature of them all. They’re so incredibly useful, they should be the number one reason for anyone to switch to a better database, e.g. PostgreSQL.

If free and/or Open Source is important to you, you have absolutely no better choice than using PostgreSQL (and you’ll even get to use the free jOOQ Open Source Edition, if you’re a Java developer).

And if you’re lucky enough to work in an environment with Oracle or SQL Server (or DB2, Sybase) licenses, you get even more out of your new favourite tool.

We won’t repeat all the window function goodness in this section, as we’ve blogged about it often enough.

The Cure:

Start playing with window functions. You’ll never go back, guaranteed.

2. Not declaring NOT NULL constraints

This one was already part of a previous list where we claimed that you should add as much metadata as possible to your schema, because your database will be able to leverage that metadata for optimisations. For instance, if your database knows that a foreign key value in BOOK.AUTHOR_ID must also be contained exactly once in AUTHOR.ID, then a whole set of optimisations can be achieved in complex queries.

Now let’s have another look at NOT NULL constraints. If you’re using Oracle, NULL values will not be part of your index. This doesn’t matter if you’re expressing an IN constraint, for instance:

SELECT * FROM table
WHERE value IN (
  SELECT nullable_column FROM ...
)

But what happens with a NOT IN constraint?

SELECT * FROM table
WHERE value NOT IN (
  SELECT nullable_column FROM ...
)

Due to SQL’s slightly unintuitive way of handling NULL, there is a slight risk of the second query unexpectedly not returning any results at all, namely if there is at least one NULL value as a result from the subquery. This is true for all databases that get SQL right.

But because the index on nullable_column doesn’t contain any NULL values, Oracle has to look up the complete content in the table, resulting in a FULL TABLE SCAN. Now that is unexpected! Details about this can be seen in this article.
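
If you cannot add the constraint, you can at least defend against the NULL semantics in the query itself, by excluding NULLs from the subquery explicitly:

SELECT * FROM table
WHERE value NOT IN (
  SELECT nullable_column FROM ...
  WHERE nullable_column IS NOT NULL
)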

The Cure:

Carefully review all your nullable, yet indexed columns, and check if you really cannot add a NOT NULL constraint to those columns.

The Tool:

If you’re using Oracle, use this query to detect all nullable, yet indexed columns:

SELECT
  i.table_name,
  i.index_name,
  LISTAGG(
    LPAD(i.column_position,  2) || ': ' || 
    RPAD(i.column_name    , 30) || ' '  ||
    DECODE(t.nullable, 'Y', '(NULL)', '(NOT NULL)'), 
    ', '
  ) WITHIN GROUP (ORDER BY i.column_position) 
    AS "NULLABLE columns in indexes"
FROM user_ind_columns i
JOIN user_tab_cols t
ON (t.table_name, t.column_name) = 
  ((i.table_name, i.column_name))
WHERE EXISTS (
  SELECT 1
  FROM user_tab_cols t
  WHERE (t.table_name, t.column_name, t.nullable) = 
       ((i.table_name, i.column_name, 'Y'       ))
)
GROUP BY i.table_name, i.index_name
ORDER BY i.index_name ASC;

Example output:

TABLE_NAME | INDEX_NAME   | NULLABLE columns in indexes
-----------+--------------+----------------------------
PERSON     | I_PERSON_DOB | 1: DATE_OF_BIRTH (NULL)

And then, fix it!


3. Using PL/SQL Package State

Now, this is a boring one if you’re not using Oracle, but if you are (and you’re a Java developer), be very wary of PL/SQL package state. Are you really doing what you think you’re doing?

Yes, PL/SQL has package state, e.g.

CREATE OR REPLACE PACKAGE pkg IS
  -- Package state here!
  n NUMBER := 1;

  FUNCTION next_n RETURN NUMBER;
END pkg;

CREATE OR REPLACE PACKAGE BODY pkg IS
  FUNCTION next_n RETURN NUMBER
  IS
  BEGIN
    n := n + 1;
    RETURN n;
  END next_n;
END pkg;

Wonderful, so you’ve created yourself an in-memory counter that generates a new number every time you call pkg.next_n. But who owns that counter? Yes, the session. Each session has its own initialised “package instance”.

But no, it’s probably not the session you might have thought of.

We Java developers connect to databases through connection pools. When we obtain a JDBC Connection from such a pool, we recycle that connection from a previous “session”, e.g. a previous HTTP Request (not HTTP Session!). But that’s not the same. The database session (probably) outlives the HTTP Request and will be inherited by the next request, possibly from an entirely different user. Now, imagine you had a credit card number in that package…?

Not The Cure:

Nope. Don’t just jump to using SERIALLY_REUSABLE packages:

CREATE OR REPLACE PACKAGE pkg IS
  PRAGMA SERIALLY_REUSABLE;
  n NUMBER := 1;

  FUNCTION next_n RETURN NUMBER;
END pkg;

Because:

  • You cannot even use that package from SQL any more (see ORA-06534).
  • Mixing this PRAGMA with regular package state from other packages just makes things a lot more complex.

So, don’t.

Not The Cure:

I know. PL/SQL can be a beast. It often seems like such a quirky language. But face it. Many things run much, much faster when written in PL/SQL, so don’t give up just yet. Dropping PL/SQL is not the solution either.

The Cure:

At all costs, try to avoid package state in PL/SQL. Think of package state as the equivalent of static variables in Java. While static variables might be useful for caches (and constants, of course) every now and then, you might not actually be accessing the state you think you are. Think about load-balancers, suddenly transferring you to another JVM. Think about class loaders that might have loaded the same class twice, for some reason.

Instead, pass state as arguments through procedures and functions. This will avoid side-effects and make your code much cleaner and more predictable.
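
For instance, the counter from above could be made stateless by letting the caller own the state. A sketch:

CREATE OR REPLACE PACKAGE pkg IS
  -- No package state: the caller passes the previous value
  FUNCTION next_n(n NUMBER) RETURN NUMBER;
END pkg;

CREATE OR REPLACE PACKAGE BODY pkg IS
  FUNCTION next_n(n NUMBER) RETURN NUMBER
  IS
  BEGIN
    RETURN n + 1;
  END next_n;
END pkg;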

Or, obviously, persist state to some table.

4. Running the same query all the time

Master data is boring. You probably wrote some utility to get the latest version of your master data (e.g. language, locale, translations, tenant, system settings), and you query it again every time it is needed.

At all costs, don’t do that. You don’t have to cache many things in your application, as modern databases have grown to be extremely fast when it comes to caching:

  • Table / column content
  • Index content
  • Query / materialized view results
  • Procedure results (if they’re deterministic)
  • Cursors
  • Execution plans

So, for your average query, there’s virtually no need for an ORM second-level cache, at least from a performance perspective (ORM caches mainly fulfil other purposes, of course).

But when you query master data, i.e. data that never changes, then network latency, traffic, and many other factors will impair your database experience.

The Cure:

Please do take 10 minutes, download Guava, and use its excellent and easy-to-set-up cache, which ships with various built-in invalidation strategies. Choose time-based invalidation (i.e. polling), choose Oracle AQ or Streams, or PostgreSQL’s NOTIFY for event-based invalidation, or just make your cache permanent, if it doesn’t matter. But don’t issue an identical master data query all the time.
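
A minimal sketch with time-based invalidation (the Language type and the fetchLanguageFromDatabase() method are made up for illustration):

// import com.google.common.cache.CacheBuilder;
// import com.google.common.cache.CacheLoader;
// import com.google.common.cache.LoadingCache;
// import java.util.concurrent.TimeUnit;

LoadingCache<String, Language> languages = CacheBuilder
    .newBuilder()
    .expireAfterWrite(1, TimeUnit.HOURS) // time-based invalidation
    .build(new CacheLoader<String, Language>() {
        @Override
        public Language load(String code) {
            // The master data query runs only on a cache miss
            return fetchLanguageFromDatabase(code);
        }
    });

// Subsequent lookups within the hour are served from memory
Language english = languages.getUnchecked("en");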

… This obviously brings us to

5. Not knowing about the N+1 problem

You had a choice. At the beginning of your software product, you had to choose between an ORM (e.g. Hibernate) and plain SQL (e.g. through JDBC).

So, obviously, you chose the ORM, because otherwise you wouldn’t be suffering from “N+1”. What does “N+1” mean?

The accepted answer on this Stack Overflow question explains it nicely. Essentially, you’re running:

SELECT * FROM book

-- And then, for each book:
SELECT * FROM author WHERE id = ?
SELECT * FROM author WHERE id = ?
SELECT * FROM author WHERE id = ?

Of course, you could go and tweak your hundreds of annotations to correctly prefetch or eager fetch each book’s associated author information to produce something along the lines of:

SELECT * 
FROM   book
JOIN   author 
  ON   book.author_id = author.id

But that would be an awful lot of work, and you’ll risk eager-fetching too many things that you didn’t want, resulting in another performance issue.

Maybe, you could upgrade to JPA 2.1 and use the new @NamedEntityGraph to express beautiful annotation trees like this one:

@NamedEntityGraph(
    name = "post",
    attributeNodes = {
        @NamedAttributeNode("title"),
        @NamedAttributeNode(
            value = "comments", 
            subgraph = "comments"
        )
    },
    subgraphs = {
        @NamedSubgraph(
            name = "comments",
            attributeNodes = {
                @NamedAttributeNode("content")
            }
        )
    }
)

The example was taken from this blog post by Hantsy Bai. Hantsy then goes on to explain that you can use the above beauty through the following statement:

em.createQuery("select p from Post p where p.id=:id",
               Post.class)
  .setHint("javax.persistence.fetchgraph", 
           postGraph)
  .setParameter("id", this.id)
  .getResultList()
  .get(0);

Let us all appreciate the above application of JEE standards with all due respect, and then consider…

The Cure:

You just listen to the wise words at the beginning of this article and replace thousands of lines of tedious Java / Annotatiomania™ code with a couple of lines of SQL. Because that will also likely help you prevent another issue that we haven’t even touched yet, namely selecting too many columns, as we’ve blogged about before.

Since you’re already using an ORM, this might just mean resorting to native SQL – or maybe you manage to express your query with JPQL. Of course, we agree with Alessio Harri in believing that you should use jOOQ together with JPA.

The Takeaway:

While the above will certainly help you work around some real world issues that you may have with your favourite ORM, you could also take it one step further and think about it this way. After all these years of pain and suffering from the object-relational impedance mismatch, the JPA 2.1 expert group is now trying to tweak their way out of this annotation madness by adding more declarative, annotation-based fetch graph hints to JPQL queries that no one can debug, let alone maintain.

The alternative is simple and straight-forward SQL. And with Java 8, we’ll add functional transformation through the Streams API. That’s hard to beat.

But obviously, your views and experiences on that subject may differ from ours, so let’s head on to a more objective discussion about…

6. Not using Common Table Expressions

While common table expressions obviously offer readability improvements, they may also offer performance improvements. Consider the following query that I recently encountered in a customer’s PL/SQL package (not the actual query):

SELECT round (
  (SELECT amount FROM payments WHERE id = :p_id)
    * 
  (
    SELECT e.bid
    FROM   currencies c, exchange_rates e
    WHERE  c.id     = 
      (SELECT cur_id FROM payments WHERE id = :p_id)
    AND    e.cur_id = 
      (SELECT cur_id FROM payments WHERE id = :p_id)
    AND    e.org_id = 
      (SELECT org_id FROM payments WHERE id = :p_id)
  ) / (
    SELECT c.factor
    FROM   currencies c, exchange_rates e
    WHERE  c.id     = 
      (SELECT cur_id FROM payments WHERE id = :p_id)
    AND    e.cur_id = 
      (SELECT cur_id FROM payments WHERE id = :p_id)
    AND    e.org_id = 
      (SELECT org_id FROM payments WHERE id = :p_id)
  ), 0
) 
INTO amount 
FROM dual;

So what does this do? This essentially converts a payment’s amount from one currency into another. Let’s not delve into the business logic too much, let’s head straight to the technical problem. The above query results in the following execution plan (on Oracle):

------------------------------------------------------
| Operation                         | Name           |
------------------------------------------------------
| SELECT STATEMENT                  |                |
|  TABLE ACCESS BY INDEX ROWID      | PAYMENTS       |
|   INDEX UNIQUE SCAN               | PAYM_PK        |
|   NESTED LOOPS                    |                |
|    INDEX UNIQUE SCAN              | CURR_PK        |
|     TABLE ACCESS BY INDEX ROWID   | PAYMENTS       |
|      INDEX UNIQUE SCAN            | PAYM_PK        |
|    TABLE ACCESS BY INDEX ROWID    | EXCHANGE_RATES |
|     INDEX UNIQUE SCAN             | EXCH_PK        |
|      TABLE ACCESS BY INDEX ROWID  | PAYMENTS       |
|       INDEX UNIQUE SCAN           | PAYM_PK        |
|      TABLE ACCESS BY INDEX ROWID  | PAYMENTS       |
|       INDEX UNIQUE SCAN           | PAYM_PK        |
|     NESTED LOOPS                  |                |
|      TABLE ACCESS BY INDEX ROWID  | CURRENCIES     |
|       INDEX UNIQUE SCAN           | CURR_PK        |
|        TABLE ACCESS BY INDEX ROWID| PAYMENTS       |
|         INDEX UNIQUE SCAN         | PAYM_PK        |
|      INDEX UNIQUE SCAN            | EXCH_PK        |
|       TABLE ACCESS BY INDEX ROWID | PAYMENTS       |
|        INDEX UNIQUE SCAN          | PAYM_PK        |
|       TABLE ACCESS BY INDEX ROWID | PAYMENTS       |
|        INDEX UNIQUE SCAN          | PAYM_PK        |
|  FAST DUAL                        |                |
------------------------------------------------------

The actual execution time is negligible in this case, but as you can see, the same objects are accessed again and again within the query. This is a violation of Common Mistake #4: Running the same query all the time.

The whole thing would be so much easier to read, maintain, and for Oracle to execute, if we had used a common table expression. From the original source code, observe the following:

-- We're always accessing a single payment:
  FROM payments WHERE id = :p_id

-- Joining currencies and exchange_rates twice:
  FROM currencies c, exchange_rates e

So, let’s factor out the payment first:

-- "payment" contains only a single payment
-- But it contains all the columns that we'll need
-- afterwards
WITH payment AS (
    SELECT cur_id, org_id, amount
    FROM   payments
    WHERE  id = :p_id
)
SELECT round(p.amount * e.bid / c.factor, 0)

-- Then, we simply don't need to repeat the
-- currencies / exchange_rates joins twice
FROM   payment p
JOIN   currencies c     ON  p.cur_id = c.id
JOIN   exchange_rates e ON  e.cur_id = p.cur_id
                        AND e.org_id = p.org_id

Note that we’ve also replaced table lists with ANSI JOIN syntax, as suggested in our previous list.

You wouldn’t believe it’s the same query, would you? And what about the execution plan? Here it is!

---------------------------------------------------
| Operation                      | Name           |
---------------------------------------------------
| SELECT STATEMENT               |                |
|  NESTED LOOPS                  |                |
|   NESTED LOOPS                 |                |
|    NESTED LOOPS                |                |
|     FAST DUAL                  |                |
|     TABLE ACCESS BY INDEX ROWID| PAYMENTS       |
|      INDEX UNIQUE SCAN         | PAYM_PK        |
|    TABLE ACCESS BY INDEX ROWID | EXCHANGE_RATES |
|     INDEX UNIQUE SCAN          | EXCH_PK        |
|   TABLE ACCESS BY INDEX ROWID  | CURRENCIES     |
|    INDEX UNIQUE SCAN           | CURR_PK        |
---------------------------------------------------

No doubt that this is much, much better.

The Cure:

If you’re lucky enough and you’re using one of those databases that supports window functions, chances are incredibly high (100%) that you also have common table expression support. (Even MySQL 8.0 will finally have them).

Common table expressions are like local variables in SQL. In every large statement, you should consider using them, as soon as you feel that you’ve written something before.

The Takeaway:

Some databases (e.g. PostgreSQL, or SQL Server) also support common table expressions for DML statements. In other words, you can write:

WITH ...
UPDATE ...
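
For instance, in PostgreSQL, you can archive and delete rows in a single statement (a sketch with made-up table names):

-- Move old rows into an archive table, atomically
WITH deleted AS (
  DELETE FROM events
  WHERE  event_date < DATE '2014-01-01'
  RETURNING *
)
INSERT INTO events_archive
SELECT * FROM deleted;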

This makes DML vastly more powerful.

7. Not using row value expressions for UPDATEs

We’ve advertised the use of row value expressions in our previous listing. They’re very readable and intuitive, and often also promote using certain indexes, e.g. in PostgreSQL.

But few people know that they can also be used in an UPDATE statement, in most databases. Check out the following query, which I again found in a customer’s PL/SQL package (simplified again, of course):

UPDATE u
SET n = (SELECT n + 1    FROM t WHERE u.n = t.n),
    s = (SELECT 'x' || s FROM t WHERE u.n = t.n),
    x = 3;

So this query takes a subquery as a data source for updating two columns, and the third column is updated “regularly”. How does it perform? Moderately:

-----------------------------
| Operation          | Name |
-----------------------------
| UPDATE STATEMENT   |      |
|  UPDATE            | U    |
|   TABLE ACCESS FULL| U    |
|   TABLE ACCESS FULL| T    |
|   TABLE ACCESS FULL| T    |
-----------------------------

Let’s ignore the full table scans, as this query is a constructed example. The actual query could leverage indexes. But T is accessed twice, i.e. in both subqueries. Oracle didn’t seem to be able to apply scalar subquery caching in this case.

To the rescue: row value expressions. Let’s simply rephrase our UPDATE to this:

UPDATE u
SET (n, s) = ((
      SELECT n + 1, 'x' || s FROM t WHERE u.n = t.n
    )),
    x = 3;

Let’s ignore the funny, Oracle-specific double-parentheses syntax for the right hand side of such a row value expression assignment, but let’s appreciate the fact that we can easily assign a new value to the tuple (n, s) in one go! Note, we could have also written this instead, assigning x as well:

UPDATE u
SET (n, s, x) = ((
      SELECT n + 1, 'x' || s, 3 
      FROM t WHERE u.n = t.n
    ));

As you will have expected, the execution plan has also improved, and T is accessed only once:

-----------------------------
| Operation          | Name |
-----------------------------
| UPDATE STATEMENT   |      |
|  UPDATE            | U    |
|   TABLE ACCESS FULL| U    |
|   TABLE ACCESS FULL| T    |
-----------------------------

The Cure:

Use row value expressions wherever you can. They make your SQL code much more expressive, and chances are, they make it faster as well.

Note that the above is supported by jOOQ’s UPDATE statement. This is the moment we would like to make you aware of this cheap, in-article advertisement:

jOOQ - The best way to write SQL in Java

;-)

8. Using MySQL when you could use PostgreSQL

Disclaimer: I wrote this article in 2014. Since then, MySQL has made tremendous progress, so I wouldn’t bet all my money on PostgreSQL any longer, when comparing it with MySQL! So, read this tip with the 2014 context in mind!

To some, this may appear to be a bit of a hipster discussion. But let’s consider the facts:

  • MySQL claims to be the “most popular Open Source database”.
  • PostgreSQL claims to be the “most advanced Open Source database”.

Let’s consider a bit of history. MySQL has always been very easy to install and maintain, and it has had a great and active community. This has led to MySQL still being the RDBMS of choice with virtually every web hoster on this planet. Those hosters also host PHP, which was equally easy to install and maintain.

BUT!

We Java developers tend to have an opinion about PHP, right? It’s summarised by this image here:

The PHP Hammer

Well, it works, but how does it work?

The same can be said about MySQL. MySQL has always worked somehow, but while commercial databases like Oracle have made tremendous progress both in terms of query optimisation and feature scope, MySQL has hardly moved in the last decade.

Many people choose MySQL primarily because of its price (USD $0.00). But often, the same people have found MySQL to be slow and quickly concluded that SQL is slow per se – without evaluating the options. This is also why all NoSQL stores compare themselves with MySQL, not with Oracle, the database that has been winning the Transaction Processing Performance Council’s (TPC) benchmarks almost forever.

While such comparison articles bluntly add “(and other RDBMS)”, they don’t go into any sort of detail about what those “other RDBMS” do wrong. They really only compare MongoDB with MySQL.

The Cure:

We say: Stop complaining about SQL, when in fact, you’re really complaining about MySQL. There are at least four very popular databases out there that are incredibly good. These are:

(just kidding about the last one, of course)

Again, remember, this was said in 2014. Today, MySQL 8.0 has made tremendous progress, so please review this opinion!

The Takeaway:

Don’t fall for aggressive NoSQL marketing. 10gen is an extremely well-funded company, even if MongoDB continues to disappoint, technically.

The same is true for Datastax.

Both companies are solving a problem that few people have. They’re selling us niche products as commodity, making us think that our real commodity databases (the RDBMS) no longer fulfil our needs. They are well-funded and have big marketing teams to throw blunt claims around.

In the mean time, PostgreSQL just got even better, and you, as a reader of this blog / post, are about to bet on the winning team :-)


The Disclaimer:

This article has been quite strongly against MySQL. We don’t mean to talk badly about a database that perfectly fulfils its purpose, as this isn’t a black and white world. Heck, you can get happy with SQLite in some situations. MySQL fulfils its purpose as the cheap, easy-to-use, easy-to-install commodity database. We just wanted to make you aware of the fact that you’re expressly choosing the cheap, not-so-good database, rather than the cheap, awesome one.

9. Forgetting about UNDO / REDO logs

We have claimed that MERGE statements or bulk / batch updates are good. That’s correct, but nonetheless, you should be wary when updating huge data sets in transactional contexts. If your transaction “takes too long”, i.e. if you’re updating 10 million records at a time, you will run into two problems:

  • You increase the risk of race conditions, if another process is also writing to the same table. This may cause a rollback of their or of your transaction, possibly forcing you to run the huge update again
  • You cause a lot of contention on your system, because every other transaction / session that wants to see the data you’re about to update will have to temporarily roll back all of your updates first, before it reaches the state on disk that was there before your huge update. That’s the price of ACID.

One way to work around this issue is to allow for other sessions to read uncommitted data.

Another way to work around this issue is to frequently commit your own work, e.g. after 1000 inserts / updates.

In any case, due to the CAP theorem, you will have to make a compromise. Frequent commits will produce the risk of an inconsistent database in the event of the multi-million update going wrong after 5 million (committed) records. A rollback would then mean reverting all database changes by restoring a backup.

The Cure:

There is no definitive cure to this issue. But beware that you are very very rarely in a situation where it is OK to simply update 10 million records of a live and online table outside of an actual scheduled maintenance window. The simplest acceptable workaround is indeed to commit your work after N inserts / updates.

The Takeaway:

By this time, NoSQL aficionados will claim (again due to excessive marketing by aforementioned companies) that NoSQL has solved this by dropping schemas and typesafety. “Don’t update, just add another property!” – they said.

But that’s not true!

First off, I can add columns to my database without any issue at all. An ALTER TABLE ADD statement is executed instantly on live databases. Filling the column with data doesn’t bother anyone either, because no one reads the column yet (remember, don’t SELECT * !). So adding columns in RDBMS is as cheap as adding JSON properties to a MongoDB document.

But what about altering columns? Removing them? Merging them?

It is simply not true that denormalisation takes you anywhere far. Denormalisation is always a short-term win for the developer. Hardly a long-term win for the operations teams. Having redundant data in your database for the sake of speeding up an ALTER TABLE statement is like sweeping dirt under the carpet.

Don’t believe the marketers. And while you’re at it, perform some doublethink and forget that we’re SQL tool vendors ourselves ;-) Here’s again the “correct” message:

jOOQ - The best way to write SQL in Java

10. Not using the BOOLEAN type correctly

This is not really a mistake per se. It’s just again something that hardly anyone knows. When the SQL:1999 standard introduced the new BOOLEAN data type, they really did it right. Because before, we already had something like booleans in SQL. We’ve had the <search condition> in SQL-92, which is essentially a predicate for use in WHERE, ON, and HAVING clauses, as well as in CASE expressions.

SQL:1999, however, simply defined the new <boolean value expression> as a regular <value expression>, and redefined the <search condition> as such:

<search condition> ::=
    <boolean value expression>

Done! Now, for most of us Java / Scala / etc. developers, this doesn’t seem like such an innovation. Heck, it’s a boolean. Obviously it can be used interchangeably as predicate and as variable.

But in the mind-set of the keyword-heavy SQL folks who have taken inspiration from COBOL when designing the language, this was quite a step forward.

Now, what does this mean? This means that you can use any predicate also as a column! For instance:

SELECT a, b, c
FROM (
  SELECT EXISTS (SELECT ...) a,
         MY_COL IN (1, 2, 3) b,
         3 BETWEEN 4 AND 5   c
  FROM   MY_TABLE
) t
WHERE a AND b AND NOT(c)

This is a bit of a dummy query, agreed, but are you aware of how powerful this is?

Luckily, again, PostgreSQL fully supports this (unlike Oracle, which still doesn’t have any BOOLEAN data type in SQL).

The Cure:

Every now and then, using BOOLEAN types feels very right, so do it! You can transform boolean value expressions into predicates and predicates into boolean value expressions. They’re the same. This makes SQL all so powerful.

Conclusion

SQL has evolved steadily over the past years through great standards like SQL:1999, SQL:2003, SQL:2008 and now SQL:2011. It is the only surviving mainstream declarative language, now that XQuery can be considered pretty dead for the mainstream. It can be easily mixed with procedural languages, as PL/SQL and T-SQL (and other procedural dialects) have shown. It can be easily mixed with object-oriented or functional languages, as jOOQ has shown.

At Data Geekery, we believe that SQL is the best way to query data. You don’t agree with any of the above? That’s fine, you don’t have to. Sometimes, even we agree with Winston Churchill who is known to have said:

SQL is the worst form of database querying, except for all the other forms.

But as Yakov Fain has recently put it:

You can run from SQL, but you can’t hide

So, let’s better get back to work and learn this beast! Thanks for reading.

Top 10 Lists of Common Java Mistakes (That Makes Top 100!)

Top 10 lists are very popular, fun, and informative to read. But there are so many of them! How to choose the right one? Here’s a meta top 10 list helping you find the top 10 top 10 lists. On a more geeky note:

SELECT TOP 10 mistake FROM source1
UNION ALL
SELECT TOP 10 mistake FROM source2
UNION ALL
SELECT TOP 10 mistake FROM source3
...

In this selection, I have carefully removed all of those top 10 newbie mistake lists that pop up when performing an average Google search. Because there aren’t 10 newbie mistakes, there are about one million. I’m more interested in subtle mistakes and problems. So, join me in reading these awesome 10 lists of top 10 Java mistakes / best practices (in no particular order).

1: ZeroTurnaround’s 10 Common Pitfalls of Experienced Java Developers & Architects

ZeroTurnaround has just released this one, in time for my post. The JRebel guys usually employ quite a geeky, tongue-in-cheek tone, which I really like, of course:

http://zeroturnaround.com/rebellabs/watch-out-for-these-10-common-pitfalls-of-experienced-java-developers-architects/

2: jOOQ’s 10 Subtle Best Practices When Coding Java

Some advertising in our own cause. We have a top 10 list as well, about very subtle things that can go wrong when writing Java:

https://blog.jooq.org/2013/08/20/10-subtle-best-practices-when-coding-java/

3: AppDynamics’ Top 10 Java Performance Problems

AppDynamics is giving away this interesting and very well-written eBook for only your contact information. (Don’t blame me if they call you and sell you their products after you download the nice list.)

http://info.appdynamics.com/Top10JavaPerformanceProblems_eBook.html

4: The AmiableAPI’s Java API Design Checklist

This isn’t exactly a top 10 list, but more of a style guide helping you to write a good, clean API. Something that isn’t so obvious to do if you don’t write APIs every day:

http://theamiableapi.com/2012/01/16/java-api-design-checklist/

5: Josh Bloch’s talk about How To Design a Good API and Why it Matters

While this one isn’t labeled as top 10, it certainly contains the top 10 things to do when designing APIs, citing from a very authoritative reference: Josh Bloch himself:

http://www.youtube.com/watch?v=heh4OeB9A-c

6: Top 10 Mistakes When Writing Server-Side JavaScript Using Rhino

Haha, just kidding. There’s only one mistake here. It’s the fact that you’re writing JavaScript. So, on to the real #6:

6: Pierre-Hugues Charbonneau’s Top 10 Causes of Java EE Enterprise Performance Problems

This one is extremely well-written. A very good read for all Java architects out there:

http://java.dzone.com/articles/top-10-causes-java-ee

7: Top 10 Interesting Statements by Adam Bien About the Java Enterprise Edition 6 (JEE 6)

I like to cite Adam Bien. He’s a very dogmatic, pro-JEE Java Rock Star. While I most certainly don’t agree with him in many aspects, I still enjoy reading his blog. This list is not by Adam Bien himself, but by Kai Waehner, who has summarised Adam Bien’s opinions quite well:

http://www.kai-waehner.de/blog/2010/09/10/10-interesting-statements-of-adam-bien-about-the-java-enterprise-edition-6-jee-6/

8: Top 15 Worst Computer Software Blunders

OK, this one isn’t about Java in particular, nor does it show concrete best practices. But why do we need best practices and avoid common mistakes? Yes, because things can go terribly wrong. Here’s how wrong they can go:

http://www.intertech.com/Blog/15-worst-computer-software-blunders/

9: Top 10 Java People You Should Know

You think this list is incomplete? Very unlikely. But you still might be interested in looking up the top 10 Java people, who have influenced our everyday work like few others. They have said many things in their professional lives, which could fill many more top 10 lists. Here’s the “Top 10 Java People You Should Know” list:

http://javastoreroom.blogspot.ch/2013/05/top-10-java-people-you-should-know.html

10: The Top 10 List of Best Java-Related Top 10 Lists

And here’s a killer list explaining all about the origins of tail-recursion and – possibly – StackOverflowErrors:

https://blog.jooq.org/2013/11/01/top-10-lists-of-common-java-mistakes-that-makes-top-100/

10 Common Mistakes Java Developers Make when Writing SQL

This article is part of a series.

Java developers mix object-oriented thinking with imperative thinking, depending on their levels of:

  • Skill (anyone can code imperatively)
  • Dogma (some use the “Pattern-Pattern”, i.e. the pattern of applying patterns everywhere and giving them names)
  • Mood (true OO is more clumsy to write than imperative code. At first)

But when Java developers write SQL, everything changes. SQL is a declarative language that has nothing to do with either object-oriented or imperative thinking. It is very easy to express a query in SQL. It is not so easy to express it optimally or correctly. Not only do developers need to re-think their programming paradigm, they also need to think in terms of set theory.

Here are common mistakes that a Java developer makes when writing SQL through JDBC or jOOQ (in no particular order). For 10 More Common Mistakes, see this article here.


1. Forgetting about NULL

Misunderstanding NULL is probably the biggest mistake a Java developer can make when writing SQL. This is also (but not exclusively) due to the fact that NULL is also called UNKNOWN. If it were only called UNKNOWN, it would be easier to understand. Another reason is that JDBC maps SQL NULL to Java null when fetching data or when binding variables. This may lead to thinking that NULL = NULL (SQL) would behave the same way as null == null (Java).

One of the crazier examples of misunderstanding NULL is when NULL predicates are used with row value expressions.

Another, subtle problem appears when misunderstanding the meaning of NULL in NOT IN anti-joins.

The Cure:

Train yourself. There’s no way around explicitly thinking about NULL, every time you write SQL:

  • Is this predicate correct with respect to NULL?
  • Does NULL affect the result of this function?
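
A quick way to train that intuition (this runs e.g. on PostgreSQL):

-- NULL = NULL is not TRUE, it's UNKNOWN:
SELECT CASE WHEN NULL = NULL THEN 'equal' ELSE 'not equal' END;
-- yields 'not equal'

-- Use IS NULL to check for NULL:
SELECT CASE WHEN NULL IS NULL THEN 'is null' ELSE 'is not null' END;
-- yields 'is null'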

2. Processing data in Java memory

Few Java developers know SQL very well. The occasional JOIN, the odd UNION, fine. But window functions? Grouping sets? A lot of Java developers load SQL data into memory, transform the data into some appropriate collection type, execute nasty maths on that collection with verbose loop structures (at least, before Java 8’s Collection improvements).

But some SQL databases support advanced (and SQL standard!) OLAP features that tend to perform a lot better and are much easier to write. A (non-standard) example is Oracle’s awesome MODEL clause. Just let the database do the processing and fetch only the results into Java memory. After all, some very smart people have optimised these expensive products. So in fact, by moving OLAP to the database, you gain two things:

  • Simplicity. It’s probably easier to write correctly in SQL than in Java
  • Performance. The database will probably be faster than your algorithm. And more importantly, you don’t have to transmit millions of records over the wire.

The Cure:

Every time you implement a data-centric algorithm in Java, ask yourself: Is there a way to let the database perform that work for me?

3. Using UNION instead of UNION ALL

It’s a shame that UNION ALL needs an extra keyword compared to UNION. It would be much better if the SQL standard had been defined to support:

  • UNION (allowing duplicates)
  • UNION DISTINCT (removing duplicates)

Not only is the removal of duplicates rarely needed (or sometimes even wrong), it is also quite slow for large result sets with many columns, as the two subselects need to be ordered, and each tuple needs to be compared with its subsequent tuple.

Note that even if the SQL standard specifies INTERSECT ALL and EXCEPT ALL, hardly any database implements these less useful set operations.
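
To illustrate the difference (with made-up table names):

-- Fast: simply concatenates both results, keeping duplicates
SELECT first_name FROM customer
UNION ALL
SELECT first_name FROM employee

-- Slower: additionally sorts / hashes the result to remove duplicates
SELECT first_name FROM customer
UNION
SELECT first_name FROM employee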

The Cure:

Every time you write a UNION, think if you actually wanted to write UNION ALL.

4. Using JDBC Pagination to paginate large results

Most databases support some way of paginating ordered results through LIMIT .. OFFSET, TOP .. START AT, or OFFSET .. FETCH clauses. In the absence of support for these clauses, there is still the possibility of ROWNUM (Oracle) or ROW_NUMBER() OVER() filtering (DB2, SQL Server 2008 and earlier), which is much faster than pagination in memory. This is specifically true for large offsets!

The Cure:

Just use those clauses, or a tool (such as jOOQ) that can simulate those clauses for you.
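
For instance:

-- MySQL / PostgreSQL
SELECT * FROM book ORDER BY id LIMIT 10 OFFSET 30

-- SQL standard / SQL Server 2012
SELECT * FROM book ORDER BY id
OFFSET 30 ROWS FETCH NEXT 10 ROWS ONLY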

5. Joining data in Java memory

Since the early days of SQL, some developers have had an uneasy feeling when expressing JOINs in their SQL. There is an inherent fear of JOIN being slow. This can be true if a cost-based optimiser chooses to perform a nested loop, possibly loading complete tables into database memory, before creating a joined table source. But that happens rarely. With appropriate predicates, constraints and indexes, MERGE JOIN and HASH JOIN operations are extremely fast. It’s all about the correct metadata (I cannot cite Tom Kyte often enough for this). Nonetheless, there are probably still quite a few Java developers who will load two tables from separate queries into maps and join them in Java memory in one way or another.

The Cure:

If you’re selecting from various tables in various steps, think again to see if you cannot express your query in a single statement.

6. Using DISTINCT or UNION to remove duplicates from an accidental cartesian product

With heavy joining, one can lose track of all the relations that are playing a role in a SQL statement. Specifically, if multi-column foreign key relationships are involved, it is possible to forget to add the relevant predicates in JOIN .. ON clauses. This might result in duplicate records, but maybe only in exceptional cases. Some developers may then choose to use DISTINCT to remove those duplicates again. This is wrong in three ways:

  • It (may) solve the symptoms but not the problem. It may as well not solve the symptoms in edge-cases.
  • It is slow for large result sets with many columns. DISTINCT performs an ORDER BY operation to remove duplicates.
  • It is slow for large cartesian products, which will still load lots of data into memory

The Cure:

As a rule of thumb, when you get unwanted duplicates, always review your JOIN predicates. There’s probably a subtle cartesian product in there somewhere.

7. Not using the MERGE statement

This isn’t really a mistake, but probably some lack of knowledge of, or some fear of, the powerful MERGE statement. Some databases know other forms of UPSERT statements, e.g. MySQL’s ON DUPLICATE KEY UPDATE clause. But MERGE is really so powerful, most importantly in databases that heavily extend the SQL standard, such as SQL Server.

The Cure:

If you’re UPSERTING by chaining INSERT and UPDATE or by chaining SELECT .. FOR UPDATE and then INSERT or UPDATE, think again. Apart from risking race conditions, you might be able to express a simpler MERGE statement.
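
A sketch of such an upsert with MERGE, in Oracle syntax (table and column names are made up):

-- Insert or update a setting in a single, atomic statement
MERGE INTO settings s
USING (
  SELECT :key AS k, :value AS v FROM dual
) n
ON (s.setting_key = n.k)
WHEN MATCHED THEN
  UPDATE SET s.setting_value = n.v
WHEN NOT MATCHED THEN
  INSERT (setting_key, setting_value)
  VALUES (n.k, n.v);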

8. Using aggregate functions instead of window functions

Before the introduction of window functions, the only means of aggregating data in SQL was the GROUP BY clause along with aggregate functions in the projection. This works well in many cases, and if aggregated data needs to be enriched with regular data, the grouped query can be pushed down into a joined subquery.

But SQL:2003 defined window functions, which are implemented by many popular database vendors. Window functions can aggregate data on result sets that are not grouped. In fact, each window function supports its own, independent PARTITION BY clause, which is an awesome tool for reporting.

Using window functions will:

  • Lead to more readable SQL (less dedicated GROUP BY clauses in subqueries)
  • Improve performance, as an RDBMS is likely to optimise window functions more easily

The Cure:

When you write a GROUP BY clause in a subquery, think again if this cannot be done with a window function.
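
For instance, enriching every row with an aggregate of its group, without any subquery or extra join (column names are made up):

-- Each employee's salary, next to their department's average
SELECT employee_id, department_id, salary,
       AVG(salary) OVER (PARTITION BY department_id) AS avg_dept_salary
FROM   employees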

9. Using in-memory sorting for sort indirections

The SQL ORDER BY clause supports many types of expressions, including CASE expressions, which can be very useful for sort indirections. You should probably never sort data in Java memory because you think that

  • SQL sorting is too slow
  • SQL sorting cannot do it
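
SQL sorting can do it, of course. A typical sort indirection looks like this (a sketch):

-- Urgent bookings first, everything else by date
SELECT *
FROM   bookings
ORDER  BY CASE WHEN status = 'URGENT' THEN 0 ELSE 1 END,
          booking_date DESC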

The Cure:

If you sort any SQL data in memory, think again if you cannot push sorting into your database. This goes along well with pushing pagination into the database.

10. Inserting lots of records one by one

JDBC knows batching, and you should use it. Do not INSERT thousands of records one by one, re-creating a new PreparedStatement every time. If all of your records go to the same table, create a batch INSERT statement with a single SQL statement and multiple bind value sets. Depending on your database and database configuration, you may need to commit after a certain amount of inserted records, in order to keep the UNDO log slim.
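
A minimal JDBC sketch (the connection, books, and the Book class are assumed):

// import java.sql.PreparedStatement;

try (PreparedStatement stmt = connection.prepareStatement(
        "INSERT INTO book (id, title) VALUES (?, ?)")) {

    for (Book book : books) {
        stmt.setInt(1, book.id);
        stmt.setString(2, book.title);
        stmt.addBatch();   // collect the statement, don't execute yet
    }

    // A single round trip for the whole batch
    stmt.executeBatch();
}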

The Cure:

Always batch-insert large sets of data.
