
Flyway and jOOQ for Unbeatable SQL Development Productivity


When performing database migrations, we at Data Geekery recommend using jOOQ with Flyway – Database Migrations Made Easy. In this post, we’re going to look into a simple way to get started with the two frameworks.

Philosophy

There are a variety of ways in which jOOQ and Flyway can interact in various development setups. In this tutorial we’re going to show just one variant of such framework team play – a variant that we find particularly compelling for most use cases.

The general philosophy and workflow behind the following approach can be summarised as follows:

  1. Database increment
  2. Database migration
  3. Code re-generation
  4. Development

The four steps above can be repeated time and again, every time you need to modify something in your database. More concretely, let’s consider:

  1. Database increment – You need a new column in your database, so you write the necessary DDL in a Flyway script
  2. Database migration – This Flyway script is now part of your deliverable, which you can share with all developers, who can migrate their databases with it the next time they check out your change
  3. Code re-generation – Once the database is migrated, you regenerate all jOOQ artefacts (see code generation), locally
  4. Development – You continue developing your business logic, writing code against the updated, generated database schema

0.1. Maven Project Configuration – Properties

The following properties are defined in our pom.xml, to be able to reuse them between plugin configurations:

<properties>
    <org.jooq.version>3.4.0</org.jooq.version>
    <db.url>jdbc:h2:~/flyway-test</db.url>
    <db.username>sa</db.username>
</properties>

0.2. Maven Project Configuration – Dependencies

While jOOQ and Flyway could be used in standalone migration scripts, in this tutorial, we’ll be using Maven for the standard project setup. You will also find the source code of this tutorial on GitHub, and the full pom.xml file here.

These are the dependencies that we’re using in our Maven configuration:

<!-- We'll add the latest version of jOOQ 
     and our JDBC driver - in this case H2 -->
<dependency>
    <groupId>org.jooq</groupId>
    <artifactId>jooq</artifactId>
    <version>${org.jooq.version}</version>
</dependency>
<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <version>1.4.177</version>
</dependency>

<!-- For improved logging, we'll be using 
     log4j via slf4j to see what's going
     on during migration and code generation -->
<dependency>
    <groupId>log4j</groupId>
    <artifactId>log4j</artifactId>
    <version>1.2.16</version>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <version>1.7.5</version>
</dependency>

<!-- To ensure our code is working, we're
     using JUnit -->
<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.11</version>
    <scope>test</scope>
</dependency>

0.3. Maven Project Configuration – Plugins

After the dependencies, let’s simply add the Flyway and jOOQ Maven plugins like so. The Flyway plugin:

<plugin>
    <groupId>org.flywaydb</groupId>
    <artifactId>flyway-maven-plugin</artifactId>
    <version>3.0</version>

    <!-- Note that we're executing the Flyway
         plugin in the "generate-sources" phase -->
    <executions>
        <execution>
            <phase>generate-sources</phase>
            <goals>
                <goal>migrate</goal>
            </goals>
        </execution>
    </executions>

    <!-- Note that we need to prefix the db/migration
         path with filesystem: to prevent Flyway
         from looking for our migration scripts
         only on the classpath -->
    <configuration>
        <url>${db.url}</url>
        <user>${db.username}</user>
        <locations>
            <location>filesystem:src/main/resources/db/migration</location>
        </locations>
    </configuration>
</plugin>

The above Flyway Maven plugin configuration will read and execute all database migration scripts from src/main/resources/db/migration prior to compiling Java source code. While the official Flyway documentation suggests that migrations be done in the compile phase, the jOOQ code generator relies on such migrations having been done prior to code generation.

After the Flyway plugin, we’ll add the jOOQ Maven Plugin. For more details, please refer to the manual’s section about the code generation configuration.

<plugin>
    <groupId>org.jooq</groupId>
    <artifactId>jooq-codegen-maven</artifactId>
    <version>${org.jooq.version}</version>

    <!-- The jOOQ code generation plugin is also 
         executed in the generate-sources phase,
         prior to compilation -->
    <executions>
        <execution>
            <phase>generate-sources</phase>
            <goals>
                <goal>generate</goal>
            </goals>
        </execution>
    </executions>

    <!-- This is a minimal working configuration. 
         See the manual's section about the code
         generator for more details -->
    <configuration>
        <jdbc>
            <url>${db.url}</url>
            <user>${db.username}</user>
        </jdbc>
        <generator>
            <database>
                <includes>.*</includes>
                <inputSchema>FLYWAY_TEST</inputSchema>
            </database>
            <target>
                <packageName>org.jooq.example.flyway.db.h2</packageName>
                <directory>target/generated-sources/jooq-h2</directory>
            </target>
        </generator>
    </configuration>
</plugin>

This configuration will now read the FLYWAY_TEST schema and reverse-engineer it into the target/generated-sources/jooq-h2 directory, and within that, into the org.jooq.example.flyway.db.h2 package.

1. Database increments

Now we can start developing our database. For that, we’ll create database increment scripts, which we put into the src/main/resources/db/migration directory, as previously configured for the Flyway plugin. We’ll add these files:

  • V1__initialise_database.sql
  • V2__create_author_table.sql
  • V3__create_book_table_and_records.sql

These three scripts model our schema versions 1-3 (note the capital V!). Here are the scripts’ contents:

-- V1__initialise_database.sql
DROP SCHEMA flyway_test IF EXISTS;

CREATE SCHEMA flyway_test;
-- V2__create_author_table.sql
CREATE SEQUENCE flyway_test.s_author_id START WITH 1;

CREATE TABLE flyway_test.author (
  id INT NOT NULL,
  first_name VARCHAR(50),
  last_name VARCHAR(50) NOT NULL,
  date_of_birth DATE,
  year_of_birth INT,
  address VARCHAR(50),

  CONSTRAINT pk_t_author PRIMARY KEY (ID)
);
-- V3__create_book_table_and_records.sql
CREATE TABLE flyway_test.book (
  id INT NOT NULL,
  author_id INT NOT NULL,
  title VARCHAR(400) NOT NULL,

  CONSTRAINT pk_t_book PRIMARY KEY (id),
  CONSTRAINT fk_t_book_author_id FOREIGN KEY (author_id) REFERENCES flyway_test.author(id)
);


INSERT INTO flyway_test.author VALUES (next value for flyway_test.s_author_id, 'George', 'Orwell', '1903-06-25', 1903, null);
INSERT INTO flyway_test.author VALUES (next value for flyway_test.s_author_id, 'Paulo', 'Coelho', '1947-08-24', 1947, null);

INSERT INTO flyway_test.book VALUES (1, 1, '1984');
INSERT INTO flyway_test.book VALUES (2, 1, 'Animal Farm');
INSERT INTO flyway_test.book VALUES (3, 2, 'O Alquimista');
INSERT INTO flyway_test.book VALUES (4, 2, 'Brida');

2. Database migration and 3. Code regeneration

The above three scripts are picked up by Flyway and executed in the order of the versions. This can be seen very simply by executing:

mvn clean install

And then observing the log output from Flyway…

[INFO] --- flyway-maven-plugin:3.0:migrate (default) @ jooq-flyway-example ---
[INFO] Database: jdbc:h2:~/flyway-test (H2 1.4)
[INFO] Validated 3 migrations (execution time 00:00.004s)
[INFO] Creating Metadata table: "PUBLIC"."schema_version"
[INFO] Current version of schema "PUBLIC": << Empty Schema >>
[INFO] Migrating schema "PUBLIC" to version 1
[INFO] Migrating schema "PUBLIC" to version 2
[INFO] Migrating schema "PUBLIC" to version 3
[INFO] Successfully applied 3 migrations to schema "PUBLIC" (execution time 00:00.073s).

… and from jOOQ on the console:

[INFO] --- jooq-codegen-maven:3.5.0-SNAPSHOT:generate (default) @ jooq-flyway-example ---
[INFO] Using this configuration:
...
[INFO] Generating schemata      : Total: 1
[INFO] Generating schema        : FlywayTest.java
[INFO] ----------------------------------------------------------
[....]
[INFO] GENERATION FINISHED!     : Total: 337.576ms, +4.299ms

4. Development

Note that all of the previous steps are executed automatically, every time someone adds new migration scripts to the Maven module. For instance, a team member might have committed a new migration script, you check it out, rebuild and get the latest jOOQ-generated sources for your own development or integration-test database.

Now that these steps are done, you can proceed to writing your database queries. Imagine the following test case:

import org.jooq.Result;
import org.jooq.impl.DSL;
import org.junit.Test;

import java.sql.Connection;
import java.sql.DriverManager;

import static java.util.Arrays.asList;
import static org.jooq.example.flyway.db.h2.Tables.*;
import static org.junit.Assert.assertEquals;

public class AfterMigrationTest {

    @Test
    public void testQueryingAfterMigration() throws Exception {
        try (Connection c = DriverManager.getConnection("jdbc:h2:~/flyway-test", "sa", "")) {
            Result<?> result =
            DSL.using(c)
               .select(
                   AUTHOR.FIRST_NAME,
                   AUTHOR.LAST_NAME,
                   BOOK.ID,
                   BOOK.TITLE
               )
               .from(AUTHOR)
               .join(BOOK)
               .on(AUTHOR.ID.eq(BOOK.AUTHOR_ID))
               .orderBy(BOOK.ID.asc())
               .fetch();

            assertEquals(4, result.size());
            assertEquals(asList(1, 2, 3, 4), result.getValues(BOOK.ID));
        }
    }
}

If you run mvn clean install again, the above integration test will now compile and pass!

Reiterate

The power of this approach becomes clear once you start performing database modifications this way. Let’s assume that the French guy on our team prefers to have things his way (no offense intended ;-) ):

-- V4__le_french.sql
ALTER TABLE flyway_test.book 
  ALTER COLUMN title RENAME TO le_titre;

They check it in, you check out the new database migration script, run

mvn clean install

And then observe the log output:

[INFO] --- flyway-maven-plugin:3.0:migrate (default) @ jooq-flyway-example ---
[INFO] Database: jdbc:h2:~/flyway-test (H2 1.4)
[INFO] Validated 4 migrations (execution time 00:00.005s)
[INFO] Current version of schema "PUBLIC": 3
[INFO] Migrating schema "PUBLIC" to version 4
[INFO] Successfully applied 1 migration to schema "PUBLIC" (execution time 00:00.016s).

So far so good, but later on:

[ERROR] COMPILATION ERROR :
[INFO] -------------------------------------------------------------
[ERROR] C:\...\AfterMigrationTest.java:[24,19] error: cannot find symbol
[INFO] 1 error

When we go back to our Java integration test, we can immediately see that the TITLE column is still being referenced, but it no longer exists:

public class AfterMigrationTest {

    @Test
    public void testQueryingAfterMigration() throws Exception {
        try (Connection c = DriverManager.getConnection("jdbc:h2:~/flyway-test", "sa", "")) {
            Result<?> result =
            DSL.using(c)
               .select(
                   AUTHOR.FIRST_NAME,
                   AUTHOR.LAST_NAME,
                   BOOK.ID,
                   BOOK.TITLE
                   //   ^^^^^ This column no longer exists. 
                   //   We'll have to rename it to LE_TITRE
               )
               .from(AUTHOR)
               .join(BOOK)
               .on(AUTHOR.ID.eq(BOOK.AUTHOR_ID))
               .orderBy(BOOK.ID.asc())
               .fetch();

            assertEquals(4, result.size());
            assertEquals(asList(1, 2, 3, 4), result.getValues(BOOK.ID));
        }
    }
}

Conclusion

This tutorial has shown how easily you can build a rock-solid development process using Flyway and jOOQ, catching SQL-related errors very early in your development lifecycle – at compile time, rather than in production!

Visit the Flyway website and the jOOQ website.

Java 8 Friday: The Best Java 8 Resources – Your Weekend is Booked


At Data Geekery, we love Java. And as we’re really into jOOQ’s fluent API and query DSL, we’re absolutely thrilled about what Java 8 will bring to our ecosystem.

Every Friday, we’re showing you a couple of nice new tutorial-style Java 8 features, which take advantage of lambda expressions, method references, default methods, the Streams API, and other great stuff. You’ll find the source code on GitHub.

The Best Java 8 Resources – Your Weekend is Booked

We’re obviously not the only ones writing about Java 8. Ever since this great language update went live, blogs all around the world have been filling up with great content and different perspectives on it. In this edition of the Java 8 Friday series, we’d like to summarise some of the best content that has been published on the subject.

1. Brian Goetz’s Answers on Stack Overflow

Brian Goetz was the spec lead for JSR 335. Together with his Expert Group team, he has worked very hard to help Java 8 succeed. However, now that JSR 335 has shipped, his work is far from being over. Brian has had the courtesy of giving authoritative answers to some of the Java community’s most interesting questions on Stack Overflow.

Thumbs up to this great community effort. It cannot get any better than hearing authoritative answers from the spec lead himself.

2. Baeldung.com’s Collection of Java 8 Resources

This list of resources wouldn’t be complete without the very useful list of Java 8 resources (mostly authoritative links to specifications) from the guys over at Baeldung.com. Here it is:

http://www.baeldung.com/java8

3. The jOOQ Blog’s Java 8 Friday Series

Yay, that’s us! :-)

Yes, we’ve worked hard to bring you the latest from our experience when integrating jOOQ with Java 8, in some of our most popular articles from the recent months.

4. ZeroTurnaround’s RebelLabs Blog

ZeroTurnaround launched RebelLabs quite a while ago, as part of its content marketing strategy. Various writers publish interesting articles there around the topic of Java, which aren’t necessarily related to JRebel and other ZT products, and some great Java 8 related content has been published over there.

5. The Takipi Blog

Just like ZeroTurnaround and ourselves, our friends over at Takipi provide you with some awesome Java 8 content on their blog.

6. Benji Weber’s Fun Experiments with Java 8

This blog series we found particularly fun to read. Benji Weber really thinks outside of the box and does some crazy things with default methods, method references and all that. Things that Java developers could only dream of, so far.

7. The Geeks from Paradise Blog’s Java 8 Musings

Edwin Dalorzo from Informatech has been treating us with a variety of well-founded comparisons between Java 8 and .NET. This is particularly interesting when comparing Streams with LINQ.

Is this list complete?

No, it is missing many other, very interesting blog series. Do you have a series to share? We’re more than happy to update this post – just let us know in the comments section.

Java 8 Friday: 10 Subtle Mistakes When Using the Streams API


At Data Geekery, we love Java. And as we’re really into jOOQ’s fluent API and query DSL, we’re absolutely thrilled about what Java 8 will bring to our ecosystem.

Java 8 Friday

Every Friday, we’re showing you a couple of nice new tutorial-style Java 8 features, which take advantage of lambda expressions, extension methods, and other great stuff. You’ll find the source code on GitHub.

10 Subtle Mistakes When Using the Streams API

We’ve done all the SQL mistakes lists before.

But we haven’t done a top 10 mistakes list with Java 8 yet! For today’s occasion (it’s Friday the 13th), we’ll catch up with what will go wrong in YOUR application when you’re working with Java 8. (it won’t happen to us, as we’re stuck with Java 6 for another while)

1. Accidentally reusing streams

Wanna bet? This will happen to everyone at least once. Like the existing “streams” (e.g. InputStream), you can consume streams only once. The following code won’t work:

IntStream stream = IntStream.of(1, 2);
stream.forEach(System.out::println);

// That was fun! Let's do it again!
stream.forEach(System.out::println);

You’ll get a

java.lang.IllegalStateException: 
  stream has already been operated upon or closed

So be careful when consuming your stream. It can be done only once.
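
If you do need to traverse the same data twice, a common workaround is to re-create the stream for each traversal, e.g. from a java.util.function.Supplier. A minimal sketch (the names are ours):

Supplier<IntStream> streams = () -> IntStream.of(1, 2);

// Each get() call builds a fresh stream over the same source
streams.get().forEach(System.out::println);
streams.get().forEach(System.out::println);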

2. Accidentally creating “infinite” streams

You can create infinite streams quite easily without noticing. Take the following example:

// Will run indefinitely
IntStream.iterate(0, i -> i + 1)
         .forEach(System.out::println);

The whole point of streams is the fact that they can be infinite, if you design them to be. The only problem is that you might not have wanted that. So, be sure to always put proper limits:

// That's better
IntStream.iterate(0, i -> i + 1)
         .limit(10)
         .forEach(System.out::println);

3. Accidentally creating “subtle” infinite streams

We can’t say this enough. You WILL eventually create an infinite stream, accidentally. Take the following stream, for instance:

IntStream.iterate(0, i -> ( i + 1 ) % 2)
         .distinct()
         .limit(10)
         .forEach(System.out::println);

So…

  • we generate alternating 0’s and 1’s
  • then we keep only distinct values, i.e. a single 0 and a single 1
  • then we limit the stream to a size of 10
  • then we consume it

Well… the distinct() operation doesn’t know that the function supplied to the iterate() method will produce only two distinct values. It might expect more than that. So it’ll forever consume new values from the stream, and the limit(10) will never be reached. Tough luck, your application stalls.
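
If you really do want distinct values from a source that might be infinite, one defensive sketch is to put a hard safety bound on the source itself, before distinct() ever sees it (the bound of 1,000,000 is an arbitrary choice of ours):

IntStream.iterate(0, i -> ( i + 1 ) % 2)
         .limit(1_000_000) // hard bound on the source
         .distinct()
         .limit(10)
         .forEach(System.out::println);

This terminates, printing only 0 and 1.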

4. Accidentally creating “subtle” parallel infinite streams

We really need to insist that you might accidentally try to consume an infinite stream. Let’s assume you believe that the distinct() operation should be performed in parallel. You might be writing this:

IntStream.iterate(0, i -> ( i + 1 ) % 2)
         .parallel()
         .distinct()
         .limit(10)
         .forEach(System.out::println);

Now, we’ve already seen that this will run forever. But previously, at least, you only consumed one CPU on your machine. Now, you’ll probably consume four of them, potentially occupying pretty much all of your system with an accidental infinite stream consumption. That’s pretty bad. You can probably hard-reboot your server / development machine after that. Have a last look at what my laptop looked like prior to exploding:

If I were a laptop, this is how I’d like to go.

5. Mixing up the order of operations

So, why did we insist that you will definitely, if accidentally, create infinite streams? It’s simple. Because you may just accidentally do it. The above stream can be perfectly consumed if you switch the order of limit() and distinct():

IntStream.iterate(0, i -> ( i + 1 ) % 2)
         .limit(10)
         .distinct()
         .forEach(System.out::println);

This now yields:

0
1

Why? Because we first limit the infinite stream to 10 values (0 1 0 1 0 1 0 1 0 1), before we reduce the limited stream to the distinct values contained in it (0 1).

Of course, this may no longer be semantically correct, because you really wanted the first 10 distinct values from a set of data (you just happened to have “forgotten” that the data is infinite). No one really wants to take 10 arbitrary values and only then reduce them to the distinct ones.

If you’re coming from a SQL background, you might not expect such differences. Take SQL Server 2012, for instance. The following two SQL statements are the same:

-- Using TOP
SELECT DISTINCT TOP 10 *
FROM i
ORDER BY ..

-- Using FETCH
SELECT *
FROM i
ORDER BY ..
OFFSET 0 ROWS
FETCH NEXT 10 ROWS ONLY

So, as a SQL person, you might not be as aware of the importance of the order of streams operations.


6. Mixing up the order of operations (again)

Speaking of SQL, if you’re a MySQL or PostgreSQL person, you might be used to the LIMIT .. OFFSET clause. SQL is full of subtle quirks, and this is one of them. The OFFSET clause is applied FIRST, as suggested in SQL Server 2012’s (i.e. the SQL:2008 standard’s) syntax.

If you translate MySQL / PostgreSQL’s dialect directly to streams, you’ll probably get it wrong:

IntStream.iterate(0, i -> i + 1)
         .limit(10) // LIMIT
         .skip(5)   // OFFSET
         .forEach(System.out::println);

The above yields

5
6
7
8
9

Yes. It doesn’t continue after 9, because the limit() is now applied first, producing (0 1 2 3 4 5 6 7 8 9). skip() is applied after, reducing the stream to (5 6 7 8 9). Not what you may have intended.
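
If what you really meant was LIMIT 10 OFFSET 5 semantics, apply skip() before limit(). A quick sketch:

IntStream.iterate(0, i -> i + 1)
         .skip(5)   // OFFSET first
         .limit(10) // then LIMIT
         .forEach(System.out::println);

This yields (5 6 7 8 9 10 11 12 13 14).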

BEWARE of the LIMIT .. OFFSET vs. "OFFSET .. LIMIT" trap!

7. Walking the file system with filters

We’ve blogged about this before. What appears to be a good idea is to walk the file system using filters:

Files.walk(Paths.get("."))
     .filter(p -> !p.toFile().getName().startsWith("."))
     .forEach(System.out::println);

The above stream appears to be walking only through non-hidden directories, i.e. directories that do not start with a dot. Unfortunately, you’ve again made mistakes #5 and #6. walk() has already produced the whole stream of subdirectories of the current directory. Lazily, yes, but logically containing all sub-paths. Now, the filter will correctly filter out paths whose names start with a dot “.”. E.g. .git or .idea will not be part of the resulting stream. But these paths will be: .\.git\refs, or .\.idea\libraries. Not what you intended.

Now, don’t fix this by writing the following:

Files.walk(Paths.get("."))
     .filter(p -> !p.toString().contains(File.separator + "."))
     .forEach(System.out::println);

While that will produce the correct output, it will still do so by traversing the complete directory subtree, recursing into all subdirectories of “hidden” directories.

I guess you’ll have to resort to good old JDK 1.0 File.list() again. The good news is, FilenameFilter and FileFilter are both functional interfaces.
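
For instance, a sketch of a manual recursion with java.io.File.listFiles(), where the lambda implements FileFilter and prunes hidden directories before we ever descend into them:

static void walk(File dir) {
    // Hidden files and directories are pruned here,
    // so we never recurse into them at all
    File[] children = dir.listFiles(f -> !f.getName().startsWith("."));

    if (children != null) {
        for (File child : children) {
            System.out.println(child);

            if (child.isDirectory())
                walk(child);
        }
    }
}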

8. Modifying the backing collection of a stream

While you’re iterating a List, you must not modify that same list in the iteration body. That was true before Java 8, but it might become more tricky with Java 8 streams. Consider the following list from 0..9:

// Of course, we create this list using streams:
List<Integer> list = 
IntStream.range(0, 10)
         .boxed()
         .collect(toCollection(ArrayList::new));

Now, let’s assume that we want to remove each element while consuming it:

list.stream()
    // remove(Object), not remove(int)!
    .peek(list::remove)
    .forEach(System.out::println);

Interestingly enough, this will work for some of the elements! The output you might get is this one:

0
2
4
6
8
null
null
null
null
null
java.util.ConcurrentModificationException

If we introspect the list after catching that exception, there’s a funny finding. We’ll get:

[1, 3, 5, 7, 9]

Heh, it “worked” for all the odd numbers. Is this a bug? No, it looks like a feature. If you’re delving into the JDK code, you’ll find this comment in ArrayList.ArrayListSpliterator:

/*
 * If ArrayLists were immutable, or structurally immutable (no
 * adds, removes, etc), we could implement their spliterators
 * with Arrays.spliterator. Instead we detect as much
 * interference during traversal as practical without
 * sacrificing much performance. We rely primarily on
 * modCounts. These are not guaranteed to detect concurrency
 * violations, and are sometimes overly conservative about
 * within-thread interference, but detect enough problems to
 * be worthwhile in practice. To carry this out, we (1) lazily
 * initialize fence and expectedModCount until the latest
 * point that we need to commit to the state we are checking
 * against; thus improving precision.  (This doesn't apply to
 * SubLists, that create spliterators with current non-lazy
 * values).  (2) We perform only a single
 * ConcurrentModificationException check at the end of forEach
 * (the most performance-sensitive method). When using forEach
 * (as opposed to iterators), we can normally only detect
 * interference after actions, not before. Further
 * CME-triggering checks apply to all other possible
 * violations of assumptions for example null or too-small
 * elementData array given its size(), that could only have
 * occurred due to interference.  This allows the inner loop
 * of forEach to run without any further checks, and
 * simplifies lambda-resolution. While this does entail a
 * number of checks, note that in the common case of
 * list.stream().forEach(a), no checks or other computation
 * occur anywhere other than inside forEach itself.  The other
 * less-often-used methods cannot take advantage of most of
 * these streamlinings.
 */

Now, check out what happens when we tell the stream to produce sorted() results:

list.stream()
    .sorted()
    .peek(list::remove)
    .forEach(System.out::println);

This will now produce the following, “expected” output:

0
1
2
3
4
5
6
7
8
9

And the list after stream consumption? It is empty:

[]

So, all elements are consumed, and removed correctly. The sorted() operation is a “stateful intermediate operation”, which means that subsequent operations no longer operate on the backing collection, but on an internal state. It is now “safe” to remove elements from the list!

Well… can we really? Let’s proceed with parallel(), sorted() removal:

list.stream()
    .sorted()
    .parallel()
    .peek(list::remove)
    .forEach(System.out::println);

This now yields:

7
6
2
5
8
4
1
0
9
3

And the list contains

[8]

Eek. We didn’t remove all elements!? Free beers (and jOOQ stickers) go to anyone who solves this streams puzzler!

As this all appears quite random and subtle, we can only suggest that you never actually modify a backing collection while consuming a stream. It just doesn’t work.
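
If removal was really what you wanted, Java 8’s new Collection.removeIf() does it safely, without a stream running over the same list:

// Removes all even numbers, leaving [1, 3, 5, 7, 9]
list.removeIf(i -> i % 2 == 0);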

9. Forgetting to actually consume the stream

What do you think the following stream does?

IntStream.rangeClosed(1, 5)
         .peek(System.out::println)
         .peek(i -> { 
              if (i == 5) 
                  throw new RuntimeException("bang");
          });

When you read this, you might think that it will print (1 2 3 4 5) and then throw an exception. But that’s not correct. It won’t do anything. The stream just sits there, never having been consumed.

As with any fluent API or DSL, you might actually forget to call the “terminal” operation. This might be particularly true when you use peek(), as peek() looks an awful lot like forEach().
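
The fix is simply to end the pipeline with a terminal operation such as forEach(). A sketch of the intended version:

IntStream.rangeClosed(1, 5)
         .peek(System.out::println)
         .forEach(i -> {
             if (i == 5)
                 throw new RuntimeException("bang");
         });

This prints (1 2 3 4 5) and then actually throws.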

This can happen with jOOQ just the same, when you forget to call execute() or fetch():

DSL.using(configuration)
   .update(TABLE)
   .set(TABLE.COL1, 1)
   .set(TABLE.COL2, "abc")
   .where(TABLE.ID.eq(3));

Oops. No execute()
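
With the terminal call added, the statement actually runs (a sketch, reusing the query from above):

DSL.using(configuration)
   .update(TABLE)
   .set(TABLE.COL1, 1)
   .set(TABLE.COL2, "abc")
   .where(TABLE.ID.eq(3))
   .execute();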


10. Parallel stream deadlock

This is now a real goodie for the end!

All concurrent systems can run into deadlocks, if you don’t properly synchronise things. While finding a real-world example isn’t obvious, finding a forced example is. The following parallel() stream is guaranteed to run into a deadlock:

Object[] locks = { new Object(), new Object() };

IntStream
    .range(1, 5)
    .parallel()
    .peek(Unchecked.intConsumer(i -> {
        synchronized (locks[i % locks.length]) {
            Thread.sleep(100);

            synchronized (locks[(i + 1) % locks.length]) {
                Thread.sleep(50);
            }
        }
    }))
    .forEach(System.out::println);

Note the use of Unchecked.intConsumer(), which wraps an org.jooq.lambda.fi.util.function.CheckedIntConsumer (a variant of the functional IntConsumer interface that is allowed to throw checked exceptions) in a regular IntConsumer.

Well. Tough luck for your machine. Those threads will be blocked forever :-)

The good news is, it has never been easier to produce a schoolbook example of a deadlock in Java!

For more details, see also Brian Goetz’s answer to this question on Stack Overflow.

Conclusion

With streams and functional thinking, we’ll run into a massive amount of new, subtle bugs. Few of these bugs can be prevented, except through practice and staying focused. You have to think about how to order your operations. You have to think about whether your streams may be infinite.

Streams (and lambdas) are a very powerful tool. But a tool which we need to get the hang of, first.

Stay tuned for more exciting Java 8 articles on this blog.

Stop Trying to Emulate SQL OFFSET Pagination with Your In-House DB Framework!


I’m pretty sure you’ve gotten it wrong in numerous ways, so far. And you probably won’t get it right any time soon. So why waste your precious time on SQL tweaking, when you could be implementing business logic?

Let me explain…

It wasn’t until the recent SQL:2008 standard that what MySQL users know as LIMIT .. OFFSET was standardised into the following simple statement:

SELECT * 
FROM BOOK 
OFFSET 2 ROWS 
FETCH NEXT 1 ROWS ONLY

Yes. So many keywords.

SQL is indeed a very verbose language. Personally, we really like the conciseness of MySQL’s / PostgreSQL’s LIMIT .. OFFSET clause, which is why we chose it for the jOOQ DSL API.

In SQL:

SELECT * FROM BOOK LIMIT 1 OFFSET 2

In jOOQ:

select().from(BOOK).limit(1).offset(2);

Now, when you’re a SQL framework vendor, or when you’re rolling your own, in-house SQL abstraction, you might think about standardising this neat little clause. Here are a couple of flavours from databases that natively support offset pagination:

-- MySQL, H2, HSQLDB, Postgres, and SQLite
SELECT * FROM BOOK LIMIT 1 OFFSET 2

-- CUBRID supports a MySQL variant of the 
-- LIMIT .. OFFSET clause
SELECT * FROM BOOK LIMIT 2, 1

-- Derby, SQL Server 2012, Oracle 12, SQL:2008
SELECT * FROM BOOK 
OFFSET 2 ROWS FETCH NEXT 1 ROWS ONLY

-- Ingres. Eek, almost the standard. Almost!
SELECT * FROM BOOK 
OFFSET 2 FETCH FIRST 1 ROWS ONLY

-- Firebird
SELECT * FROM BOOK ROWS 2 TO 3

-- Sybase SQL Anywhere
SELECT TOP 1 START AT 3 * FROM BOOK

-- DB2 (without OFFSET)
SELECT * FROM BOOK FETCH FIRST 1 ROWS ONLY

-- Sybase ASE, SQL Server 2008 (without OFFSET)
SELECT TOP 1 * FROM BOOK

So far, so good. These can all be handled. Some databases put offsets before limits, others put limits before offsets, and the T-SQL family puts the whole TOP clause before the SELECT list. This is easy to emulate. Now what about:

  • Oracle 11g and less
  • SQL Server 2008 and less
  • DB2 with OFFSET

(note that you can enable various alternative syntaxes in DB2)

When you google for this, you will find millions of ways to emulate OFFSET .. FETCH in those older databases. The optimal solutions always involve:

  • Using doubly-nested derived tables with ROWNUM filtering in Oracle
  • Using single-nested derived tables with ROW_NUMBER() filtering in SQL Server and DB2

So you’re emulating it.

Do you think you will get it right?

;-)

Let us go through a couple of issues that you may not have thought about.

First off, Oracle. Oracle obviously wanted to create maximum vendor lock-in, which is only exceeded by Apple’s recent introduction of Swift. This is why ROWNUM solutions perform best, even better than SQL:2003 standard window function based solutions. Don’t believe it? Read this very interesting article on Oracle offset pagination performance.

So, the optimal solution in Oracle is:

-- PostgreSQL syntax:
SELECT ID, TITLE 
FROM BOOK 
LIMIT 1 OFFSET 2

-- Oracle equivalent:
SELECT *
FROM (
  SELECT b.*, ROWNUM rn
  FROM (
    SELECT ID, TITLE
    FROM BOOK
  ) b
  WHERE ROWNUM <= 3 -- (1 + 2)
)
WHERE rn > 2

So that’s really the equivalent?

Of course not. You’re selecting an additional column, the rn column. You might just not care in most cases, but what if you wanted to make a limited subquery to be used with an IN predicate?

-- PostgreSQL syntax:
SELECT *
FROM BOOK
WHERE AUTHOR_ID IN (
  SELECT ID
  FROM AUTHOR
  LIMIT 1 OFFSET 2
)

-- Oracle equivalent:
SELECT *
FROM BOOK
WHERE AUTHOR_ID IN (
  SELECT * -- Ouch. These are two columns!
  FROM (
    SELECT b.*, ROWNUM rn
    FROM (
      SELECT ID
      FROM AUTHOR
    ) b
    WHERE ROWNUM <= 3
  )
  WHERE rn > 2
)

So, as you can see, you’ll have to do some more sophisticated SQL transformation. If you’re manually emulating LIMIT .. OFFSET, then you might just patch the ID column into the subquery:

SELECT *
FROM BOOK
WHERE AUTHOR_ID IN (
  SELECT ID -- better
  FROM (
    SELECT b.ID, ROWNUM rn -- better
    FROM (
      SELECT ID
      FROM AUTHOR
    ) b
    WHERE ROWNUM <= 3
  )
  WHERE rn > 2
)

So, that’s more like it, right? But since you’re not writing this manually every time, you’re about to start creating your own nifty in-house SQL framework covering the 2-3 use cases that you’ve encountered so far, right?

You can do it. So you’ll regex-search-replace column names automagically to produce the above.

So now, it is correct?

Of course not! Because you can have ambiguous column names in top-level SELECTs, but not in nested selects. What if you want to do this:

-- PostgreSQL syntax:
-- Perfectly valid repetition of two ID columns
SELECT BOOK.ID, AUTHOR.ID
FROM BOOK
JOIN AUTHOR
ON BOOK.AUTHOR_ID = AUTHOR.ID
LIMIT 1 OFFSET 2

-- Oracle equivalent:
SELECT *
FROM (
  SELECT b.*, ROWNUM rn
  FROM (
    -- Ouch! ORA-00918: column ambiguously defined
    SELECT BOOK.ID, AUTHOR.ID
    FROM BOOK
    JOIN AUTHOR
    ON BOOK.AUTHOR_ID = AUTHOR.ID
  ) b
  WHERE ROWNUM <= 3
)
WHERE rn > 2

Nope. And the trick of manually patching ID columns from the previous example doesn’t work, because you have multiple ID instances. And renaming the columns to random values is nasty, because the user of your home-grown in-house database framework wants to receive well-defined column names. I.e. ID and… ID.

So, the solution is to rename the columns twice. Once in each derived table:

-- Oracle equivalent:
-- Rename synthetic column names back to original
SELECT c1 ID, c2 ID
FROM (
  SELECT b.c1, b.c2, ROWNUM rn
  FROM (
    -- synthetic column names here
    SELECT BOOK.ID c1, AUTHOR.ID c2
    FROM BOOK
    JOIN AUTHOR
    ON BOOK.AUTHOR_ID = AUTHOR.ID
  ) b
  WHERE ROWNUM <= 3
)
WHERE rn > 2

But now, we’re done?

Of course not! What if you doubly nest such a query? Will you think about doubly renaming ID columns to synthetic names, and back? … ;-) Let’s leave it here and talk about something entirely different:

Does the same thing work for SQL Server 2008?

Of course not! In SQL Server 2008, the most popular approach is to use window functions. Namely, ROW_NUMBER(). So, let’s consider:

-- PostgreSQL syntax:
SELECT ID, TITLE 
FROM BOOK 
LIMIT 1 OFFSET 2

-- SQL Server equivalent:
SELECT b.*
FROM (
  SELECT ID, TITLE, 
    ROW_NUMBER() OVER (ORDER BY ID) rn
  FROM BOOK
) b
WHERE rn > 2 AND rn <= 3

So that’s it, right?

Of course not! ;-)

OK, we’ve already had this issue. We should not select *, because that would generate too many columns in the case that we’re using this as a subquery for an IN predicate. So let’s consider the correct solution with synthetic column names:

-- SQL Server equivalent:
SELECT b.c1 ID, b.c2 TITLE
FROM (
  SELECT ID c1, TITLE c2,
    ROW_NUMBER() OVER (ORDER BY ID) rn
  FROM BOOK
) b
WHERE rn > 2 AND rn <= 3

But now we got it, right?

Make an educated guess: Nope!

What happens, if you add an ORDER BY clause to the original query?

-- PostgreSQL syntax:
SELECT ID, TITLE 
FROM BOOK 
ORDER BY SOME_COLUMN
LIMIT 1 OFFSET 2

-- Naive SQL Server equivalent:
SELECT b.c1 ID, b.c2 TITLE
FROM (
  SELECT ID c1, TITLE c2,
    ROW_NUMBER() OVER (ORDER BY ID) rn
  FROM BOOK
  ORDER BY SOME_COLUMN
) b
WHERE rn > 2 AND rn <= 3

Now, that doesn’t work in SQL Server. Subqueries are not allowed to have an ORDER BY clause, unless they also have a TOP clause (or an OFFSET .. FETCH clause in SQL Server 2012).

OK, we can probably tweak this using TOP 100 PERCENT to make SQL Server happy.

-- Better SQL Server equivalent:
SELECT b.c1 ID, b.c2 TITLE
FROM (
  SELECT TOP 100 PERCENT
    ID c1, TITLE c2,
    ROW_NUMBER() OVER (ORDER BY ID) rn
  FROM BOOK
  ORDER BY SOME_COLUMN
) b
WHERE rn > 2 AND rn <= 3

Now, that’s correct SQL according to SQL Server, although you do not have a guarantee that the ordering of the derived table will survive after query execution. It may well be that the ordering is changed again by some influence.

If you wanted to order by SOME_COLUMN in the outer query, you’d have to again transform the SQL statement to add another synthetic column:

-- Better SQL Server equivalent:
SELECT b.c1 ID, b.c2 TITLE
FROM (
  SELECT TOP 100 PERCENT
    ID c1, TITLE c2,
    SOME_COLUMN c99,
    ROW_NUMBER() OVER (ORDER BY ID) rn
  FROM BOOK
) b
WHERE rn > 2 AND rn <= 3
ORDER BY b.c99

That does start getting a bit nasty. And let’s guess whether:

This is the correct solution!

Of course not! What if the original query had DISTINCT in it?

-- PostgreSQL syntax:
SELECT DISTINCT AUTHOR_ID
FROM BOOK 
LIMIT 1 OFFSET 2

-- Naive SQL Server equivalent:
SELECT b.c1 AUTHOR_ID
FROM (
  SELECT DISTINCT AUTHOR_ID c1,
    ROW_NUMBER() OVER (ORDER BY AUTHOR_ID) rn
  FROM BOOK
) b
WHERE rn > 2 AND rn <= 3

Now, what happens if an author has written several books? Yes, the DISTINCT keyword should remove such duplicates, and effectively, the PostgreSQL query will correctly remove duplicates first, and then apply LIMIT and OFFSET.

However, the ROW_NUMBER() window function always generates distinct row numbers before DISTINCT can remove duplicates again. In other words, DISTINCT has no effect.

Luckily, we can tweak this SQL again, using this neat little trick:

-- Better SQL Server equivalent:
SELECT b.c1 AUTHOR_ID
FROM (
  SELECT DISTINCT AUTHOR_ID c1,
    DENSE_RANK() OVER (ORDER BY AUTHOR_ID) rn
  FROM BOOK
) b
WHERE rn > 2 AND rn <= 3

Read more about this trick here:

SQL Trick: row_number() is to SELECT what dense_rank() is to SELECT DISTINCT.

Watch out that the ORDER BY clause must contain all columns from the SELECT field list. Obviously, this will limit the acceptable columns in the SELECT DISTINCT field list to columns that are allowed in a window function’s ORDER BY clause (e.g. no other window functions).

We could of course try to fix that as well using common table expressions, or we could consider…

Yet another issue??

Yes, of course!

Do you even know what the column(s) in the window function’s ORDER BY clause should be? Have you just picked any column, at random? What if that column doesn’t have an index on it, will your window function still perform?

The answer is easy when your original SELECT statement also has an ORDER BY clause: then you should probably take that one (plus all the columns from the SELECT DISTINCT clause, if applicable).

But what if you don’t have any ORDER BY clause?

Yet another trick! Use a “constant” variable:

-- Better SQL Server equivalent:
SELECT b.c1 AUTHOR_ID
FROM (
  SELECT AUTHOR_ID c1,
    ROW_NUMBER() OVER (ORDER BY @@version) rn
  FROM BOOK
) b
WHERE rn > 2 AND rn <= 3

Yes, you need to use a variable, because constants are not allowed in those ORDER BY clauses in SQL Server. Painful, I know.

Read more about this @@version trick here.

Are we done yet!?!?

Probably not ;-) But we have probably covered around 99% of the common and edge cases. We can sleep nicely, now.

Note that all of these SQL transformations are implemented in jOOQ. jOOQ is the only SQL abstraction framework that takes SQL seriously (with all its warts and caveats), standardising over all of this madness.

As mentioned in the beginning, with jOOQ, you just write:

// Don't worry about general emulation
select().from(BOOK).limit(1).offset(2);

// Don't worry about duplicate column names
// in subselects
select(BOOK.ID, AUTHOR.ID)
.from(BOOK)
.join(AUTHOR)
.on(BOOK.AUTHOR_ID.eq(AUTHOR.ID))
.limit(1).offset(2);

// Don't worry about invalid IN predicates
select()
.from(BOOK)
.where(BOOK.AUTHOR_ID.in(
    select(AUTHOR.ID)
    .from(AUTHOR)
    .limit(1).offset(2)
));

// Don't worry about the ROW_NUMBER() vs.
// DENSE_RANK() distinction
selectDistinct(BOOK.AUTHOR_ID)
    .from(BOOK).limit(1).offset(2);

With jOOQ, you can just write your Oracle SQL or Transact SQL as if it were as awesome as PostgreSQL! … without jumping the SQL ship entirely, and moving on to JPA.


Keyset paging

Now, of course, if you have been reading our blog, or our partner blog SQL Performance Explained, you should know by now that OFFSET pagination is often a bad choice in the first place. You should know that keyset pagination almost always outperforms OFFSET pagination.

Read about how jOOQ natively supports keyset pagination using the SEEK clause, here.
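
To give a rough idea, a jOOQ keyset pagination query might look like this sketch (using the BOOK table from the examples above; lastBookId is a hypothetical variable holding the last ID of the previous page):

DSL.using(configuration)
   .select(BOOK.ID, BOOK.TITLE)
   .from(BOOK)
   .orderBy(BOOK.ID.asc())
   .seek(lastBookId) // renders a predicate like BOOK.ID > lastBookId
   .limit(10)
   .fetch();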

Java 8 Friday: JavaScript goes SQL with Nashorn and jOOQ


At Data Geekery, we love Java. And as we’re really into jOOQ’s fluent API and query DSL, we’re absolutely thrilled about what Java 8 will bring to our ecosystem.

Java 8 Friday

Every Friday, we’re showing you a couple of nice new tutorial-style Java 8 features, which take advantage of lambda expressions, extension methods, and other great stuff. You’ll find the source code on GitHub.

JavaScript goes SQL with Nashorn and jOOQ

This week, we’ll look into some awesome server-side SQL scripting with Nashorn and Java 8. Only a few things can be found on the web regarding the use of JDBC in Nashorn. But why use JDBC and take care of painful resource management and SQL string composition, when you can use jOOQ? Everything works out of the box!

Let’s set up a little sample JavaScript file as such:

var someDatabaseFun = function() {
    var Properties = Java.type("java.util.Properties");
    var Driver = Java.type("org.h2.Driver");

    var driver = new Driver();
    var properties = new Properties();

    properties.setProperty("user", "sa");
    properties.setProperty("password", "");

    try {
        var conn = driver.connect(
            "jdbc:h2:~/test", properties);

        // Database code here
    }
    finally {
        try { 
            if (conn) conn.close();
        } catch (e) {}
    }
}

someDatabaseFun();

This is pretty much all you need to interoperate with JDBC and an H2 database. So we could be running SQL statements with JDBC like so:

try {
    var stmt = conn.prepareStatement(
        "select table_schema, table_name " + 
        "from information_schema.tables");
    var rs = stmt.executeQuery();

    while (rs.next()) {
        print(rs.getString("TABLE_SCHEMA") + "."
            + rs.getString("TABLE_NAME"))
    }
}
finally {
    if (rs)
        try {
            rs.close();
        }
        catch(e) {}

    if (stmt)
        try {
            stmt.close();
        }
        catch(e) {}
}

Most of the bloat is JDBC resource handling as we unfortunately don’t have a try-with-resources statement in JavaScript. The above generates the following output:

INFORMATION_SCHEMA.FUNCTION_COLUMNS
INFORMATION_SCHEMA.CONSTANTS
INFORMATION_SCHEMA.SEQUENCES
INFORMATION_SCHEMA.RIGHTS
INFORMATION_SCHEMA.TRIGGERS
INFORMATION_SCHEMA.CATALOGS
INFORMATION_SCHEMA.CROSS_REFERENCES
INFORMATION_SCHEMA.SETTINGS
INFORMATION_SCHEMA.FUNCTION_ALIASES
INFORMATION_SCHEMA.VIEWS
INFORMATION_SCHEMA.TYPE_INFO
INFORMATION_SCHEMA.CONSTRAINTS
...

Let’s see if we can run the same query using jOOQ:

var DSL = Java.type("org.jooq.impl.DSL");

print(
    DSL.using(conn)
       .fetch("select table_schema, table_name " +
              "from information_schema.tables")
);

This is how you can execute plain SQL statements in jOOQ, with much less bloat than with JDBC. The output is roughly the same:

+------------------+--------------------+
|TABLE_SCHEMA      |TABLE_NAME          |
+------------------+--------------------+
|INFORMATION_SCHEMA|FUNCTION_COLUMNS    |
|INFORMATION_SCHEMA|CONSTANTS           |
|INFORMATION_SCHEMA|SEQUENCES           |
|INFORMATION_SCHEMA|RIGHTS              |
|INFORMATION_SCHEMA|TRIGGERS            |
|INFORMATION_SCHEMA|CATALOGS            |
|INFORMATION_SCHEMA|CROSS_REFERENCES    |
|INFORMATION_SCHEMA|SETTINGS            |
|INFORMATION_SCHEMA|FUNCTION_ALIASES    |
 ...

But jOOQ’s strength is not in its plain SQL capabilities; it lies in the DSL API, which abstracts away all the vendor-specific SQL subtleties and allows you to compose queries (and also DML) fluently. Consider the following query:

// Let's assume these objects were generated
// by the jOOQ source code generator
var Tables = Java.type(
    "org.jooq.db.h2.information_schema.Tables");
var t = Tables.TABLES;
var c = Tables.COLUMNS;

// This is the equivalent of Java's static imports
var count = DSL.count;
var row = DSL.row;

// We can now execute the following query:
print(
    DSL.using(conn)
       .select(
           t.TABLE_SCHEMA, 
           t.TABLE_NAME, 
           c.COLUMN_NAME)
       .from(t)
       .join(c)
       .on(row(t.TABLE_SCHEMA, t.TABLE_NAME)
           .eq(c.TABLE_SCHEMA, c.TABLE_NAME))
       .orderBy(
           t.TABLE_SCHEMA.asc(),
           t.TABLE_NAME.asc(),
           c.ORDINAL_POSITION.asc())
       .fetch()
);

Note that there is obviously no typesafety in the above query, as this is JavaScript. But I would imagine that the IntelliJ, Eclipse, or NetBeans creators will eventually detect Nashorn dependencies on Java programs, and provide syntax auto-completion and highlighting, as some things can be statically analysed.

Things get even better if you’re using the Java 8 Streams API from Nashorn. Let’s consider the following query:

DSL.using(conn)
   .select(
       t.TABLE_SCHEMA,
       t.TABLE_NAME,
       count().as("CNT"))
   .from(t)
   .join(c)
   .on(row(t.TABLE_SCHEMA, t.TABLE_NAME)
       .eq(c.TABLE_SCHEMA, c.TABLE_NAME))
   .groupBy(t.TABLE_SCHEMA, t.TABLE_NAME)
   .orderBy(
       t.TABLE_SCHEMA.asc(),
       t.TABLE_NAME.asc())

// This fetches a List<Map<String, Object>> as
// your ResultSet representation
   .fetchMaps()

// This is Java 8's standard Collection.stream()
   .stream()

// And now, r is like any other JavaScript object
// or record!
   .forEach(function (r) {
       print(r.TABLE_SCHEMA + '.' 
           + r.TABLE_NAME + ' has ' 
           + r.CNT + ' columns.');
   });

The above generates this output:

INFORMATION_SCHEMA.CATALOGS has 1 columns.
INFORMATION_SCHEMA.COLLATIONS has 2 columns.
INFORMATION_SCHEMA.COLUMNS has 23 columns.
INFORMATION_SCHEMA.COLUMN_PRIVILEGES has 8 columns.
INFORMATION_SCHEMA.CONSTANTS has 7 columns.
INFORMATION_SCHEMA.CONSTRAINTS has 13 columns.
INFORMATION_SCHEMA.CROSS_REFERENCES has 14 columns.
INFORMATION_SCHEMA.DOMAINS has 14 columns.
...

If your database supports arrays, you can even access such array columns by index, e.g.

r.COLUMN_NAME[3]

So, if you’re a server-side JavaScript aficionado, download jOOQ today, and start writing awesome SQL in JavaScript, now! For more Nashorn awesomeness, consider reading this article here.

Stay tuned for more awesome Java 8 content on this blog.

SQL Developer’s “Securely” Encrypted Passwords


Recently, while at one of our customers’ sites, the customer and I needed to get access to a database. On my machine, I had stored the password, but the customer obviously didn’t want to rely on my machine, and the password itself is stored in encrypted form, so we couldn’t just read it off. But guess what? Yes we can! I googled a bit, and incredibly, I found instructions to write the following little utility programme, which I’m licensing to you under the terms of the ASL 2.0:

DISCLAIMER: This program is BY NO MEANS intended for you to do any harm. You could have found this information anywhere else on the web. Please use this ONLY to recover your own “lost” passwords. Like I did.

Note also that this only works with SQL Developer versions prior to 4.

import java.io.File;
import java.security.GeneralSecurityException;

import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathExpression;
import javax.xml.xpath.XPathFactory;

import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class SQLDeveloperDecrypter {
    public static void main(String[] args) throws Exception {
        if (args.length == 0) {
            System.err.println("  Usage 1: " + SQLDeveloperDecrypter.class.getName() + " 0501F83890..... (a single encrypted password)");
            System.err.println("  Usage 2: " + SQLDeveloperDecrypter.class.getName() + " C:\\Users\\...... (the path to the connections.xml file)");
            System.err.println();
            System.err.println("  Pass the password hash code from your connections.xml file. The file might be located at (example)");
            System.err.println("  C:\\Users\\[User]\\AppData\\Roaming\\SQL Developer\\system2.1.1.64.45\\o.jdeveloper.db.connection.11.1.1.2.36.55.30");

            System.exit(-1);
        }

        if (args[0].startsWith("05")) {
            System.out.println(decryptPassword(args[0]));
        }
        else {
            File file = new File(args[0]);
            if (file.isDirectory())
                file = new File(file, "connections.xml");

            DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
            DocumentBuilder builder = factory.newDocumentBuilder();
            Document doc = builder.parse(file.toURI().toString());

            // The relevant structure is:
            //
            // <Reference name="connection name">
            //   <RefAddresses>
            //     <StringRefAddr addrType="password">
            //       <Contents>057D3DE2...

            XPathFactory xPathfactory = XPathFactory.newInstance();
            XPath xpath = xPathfactory.newXPath();
            XPathExpression expr = xpath.compile("//StringRefAddr[@addrType='password']/Contents");

            NodeList nodes = (NodeList) expr.evaluate(doc, XPathConstants.NODESET);
            for (int i = 0; i < nodes.getLength(); i++) {
                Element e = (Element) nodes.item(i);

                System.out.println("Connection name     : " +
                    ((Element) e.getParentNode().getParentNode().getParentNode()).getAttribute("name")
                );

                System.out.println("Password (encrypted): " +
                    e.getTextContent()
                );

                System.out.println("Password (decrypted): " +
                    decryptPassword(e.getTextContent())
                );

                System.out.println();
            }
        }
    }

    // From: http://stackoverflow.com/a/140861
    public static byte[] hexStringToByteArray(String s) {
        int len = s.length();
        byte[] data = new byte[len / 2];
        for (int i = 0; i < len; i += 2) {
            data[i / 2] = (byte) ((Character.digit(s.charAt(i), 16) << 4)
                                 + Character.digit(s.charAt(i+1), 16));
        }
        return data;
    }

    // From: http://stackoverflow.com/a/3109774
    public static String decryptPassword(String result) throws GeneralSecurityException {
          return new String(decryptPassword(hexStringToByteArray(result)));
    }

    public static byte[] decryptPassword(byte[] result) throws GeneralSecurityException {
        byte constant = result[0];
        if (constant != 5) {
            throw new IllegalArgumentException();
        }

        byte[] secretKey = new byte[8];
        System.arraycopy(result, 1, secretKey, 0, 8);

        byte[] encryptedPassword = new byte[result.length - 9];
        System.arraycopy(result, 9, encryptedPassword, 0, encryptedPassword.length);

        byte[] iv = new byte[8];
        for (int i = 0; i < iv.length; i++) {
            iv[i] = 0;
        }

        Cipher cipher = Cipher.getInstance("DES/CBC/PKCS5Padding");
        cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(secretKey, "DES"), new IvParameterSpec(iv));
        return cipher.doFinal(encryptedPassword);
    }
}

Parts of the source code were borrowed from here and here. In other words, virtually any hacker could’ve come up with the above programme. And the output? This:

Connection name     : SAKILA
Password (encrypted): 0517CB1A41E3C2CC3A3163234A6A8E92F8
Password (decrypted): SAKILA

Connection name     : TEST
Password (encrypted): 05B03F45511F83F6CD4D322C9E173B5A94
Password (decrypted): TEST

Wonderful! All the passwords on my machine are now recovered in constant time (no brute force).

Does this make you think? I hope that your DBA doesn’t store their passwords in SQL Developer. On a laptop. Which they forget in the train. With access to your customers’ credit card information.

In the meantime, though, I’m glad I could recover the “lost” password for my client ;-)

Java 8 Friday: Most Internal DSLs are Outdated


At Data Geekery, we love Java. And as we’re really into jOOQ’s fluent API and query DSL, we’re absolutely thrilled about what Java 8 will bring to our ecosystem.

Java 8 Friday

Every Friday, we’re showing you a couple of nice new tutorial-style Java 8 features, which take advantage of lambda expressions, extension methods, and other great stuff. You’ll find the source code on GitHub.

Most Internal DSLs are Outdated

That’s quite a statement from a vendor of one of the most advanced internal DSLs currently on the market. Let me explain:

Languages are hard

Learning a new language (or API) is hard. You have to understand all the keywords, the constructs, the statement and expression types, etc. This is true for external DSLs, internal DSLs, and “regular” APIs alike, which are essentially internal DSLs with less fluency.

When using JUnit, people have grown used to using hamcrest matchers. The fact that they’re available in six languages (Java, Python, Ruby, Objective-C, PHP, Erlang) makes them somewhat of a sound choice. As a domain-specific language, they have established idioms that are easy to read, e.g.

assertThat(theBiscuit, equalTo(myBiscuit));
assertThat(theBiscuit, is(equalTo(myBiscuit)));
assertThat(theBiscuit, is(myBiscuit));

When you read this code, you will immediately “understand” what is being asserted, because the API reads like prose. But learning to write code in this API is harder. You will have to understand:

  • Where all of these methods are coming from
  • What sorts of methods exist
  • Who might have extended hamcrest with custom Matchers
  • What are best practices when extending the DSL

For instance, in the above example, what exactly is the difference between the three? When should I use one and when the other? Is is() checking for object identity? Is equalTo() checking for object equality?

The hamcrest tutorial goes on with examples like these:

public void testSquareRootOfMinusOneIsNotANumber() {
    assertThat(Math.sqrt(-1), is(notANumber()));
}

You can see that notANumber() apparently is a custom matcher, implemented some place in a utility:

public class IsNotANumber
extends TypeSafeMatcher<Double> {

  @Override
  public boolean matchesSafely(Double number) {
    return number.isNaN();
  }

  public void describeTo(Description description) {
    description.appendText("not a number");
  }

  @Factory
  public static <T> Matcher<Double> notANumber() {
    return new IsNotANumber();
  }
}

While this sort of DSL is very easy to create, and probably also a bit fun, it is dangerous to start delving into writing and enhancing custom DSLs for a simple reason. They’re in no way better than their general-purpose, functional counterparts – but they’re harder to maintain. Consider the above examples in Java 8:

Replacing DSLs with Functions

Let’s assume we have a very simple testing API:

static <T> void assertThat(
    T actual, 
    Predicate<T> expected
) {
    assertThat(actual, expected, "Test failed");
}

static <T> void assertThat(
    T actual, 
    Predicate<T> expected, 
    String message
) {
    assertThat(() -> actual, expected, message);
}

static <T> void assertThat(
    Supplier<T> actual, 
    Predicate<T> expected
) {
    assertThat(actual, expected, "Test failed");
}

static <T> void assertThat(
    Supplier<T> actual, 
    Predicate<T> expected, 
    String message
) {
    if (!expected.test(actual.get()))
        throw new AssertionError(message);
}

Now, compare the hamcrest matcher expressions with their functional equivalents:

// BEFORE
// ---------------------------------------------
assertThat(theBiscuit, equalTo(myBiscuit));
assertThat(theBiscuit, is(equalTo(myBiscuit)));
assertThat(theBiscuit, is(myBiscuit));

assertThat(Math.sqrt(-1), is(notANumber()));

// AFTER
// ---------------------------------------------
assertThat(theBiscuit, b -> b == myBiscuit);
assertThat(Math.sqrt(-1), n -> Double.isNaN(n));

With lambda expressions, and a well-designed assertThat() API, I’m pretty sure that you won’t be looking for the right way to express your assertions with matchers any longer.

Note that unfortunately, we cannot use the Double::isNaN method reference here: when targeting Predicate<Double>, the reference is ambiguous between the static Double.isNaN(double) method and the instance method Double.isNaN(), so it doesn’t compile. To use the method reference, we’d have to overload the assertion API with primitive types, e.g.

static void assertThat(
    double actual, 
    DoublePredicate expected
) { ... }

Which can then be used as such:

assertThat(Math.sqrt(-1), Double::isNaN);

Yeah, but…

… you may hear yourself saying, “but we can combine matchers with lambdas and streams”. Yes, of course we can. I’ve just done so in the jOOQ integration tests. I want to skip the integration tests for all SQL dialects that are not in a list of dialects supplied as a system property:

String dialectString = 
    System.getProperty("org.jooq.test-dialects");

// The string must not be "empty"
assumeThat(dialectString, not(isOneOf("", null)));

// And we check if the current dialect() is
// contained in a comma or semi-colon separated
// list of allowed dialects, trimmed and lowercased
assumeThat(
    dialect().name().toLowerCase(),

    // Another matcher here
    isOneOf(stream(dialectString.split("[,;]"))
        .map(String::trim)
        .map(String::toLowerCase)
        .toArray(String[]::new))
);

… and that’s pretty neat, too, right?

But why don’t I just simply write:

// Using Apache Commons, here
assumeThat(dialectString, StringUtils::isNotEmpty);
assumeThat(
    dialect().name().toLowerCase(),
    d -> stream(dialectString.split("[,;]"))
        .map(String::trim)
        .map(String::toLowerCase)
        .anyMatch(d::equals)
);

No Hamcrest needed, just plain old lambdas and streams!

Now, readability is a matter of taste, of course. But the above example clearly shows that there is no longer any need for Hamcrest matchers and for the Hamcrest DSL. Given that within the next 2-3 years, the majority of all Java developers will be very used to using the Streams API in everyday work, but not very used to the Hamcrest API, I urge you, JUnit maintainers, to deprecate the use of Hamcrest in favour of Java 8 APIs.

Is Hamcrest now considered bad?

Well, it has served its purpose in the past, and people have grown somewhat used to it. But as we’ve already pointed out in a previous post about Java 8 and JUnit Exception matching, yes, we do believe that we Java folks have been barking up the wrong tree in the last 10 years.

The lack of lambda expressions has led to a variety of completely bloated and now also slightly useless libraries. Many internal DSLs or annotation-magicians are also affected. Not because they’re no longer solving the problems they used to, but because they’re not Java-8-ready. Hamcrest’s Matcher type is not a functional interface, although it would be quite easy to transform it into one. In fact, Hamcrest’s CustomMatcher logic should be pulled up to the Matcher interface, into default methods.

Things don’t get better with alternatives like AssertJ, which create an alternative DSL that is now rendered obsolete (in terms of call-site code verbosity) by lambdas and the Streams API.

If you insist on using a DSL for testing, then probably Spock would be a far better choice anyway.

Other examples

Hamcrest is just one example of such a DSL. This article has shown how it can be almost completely removed from your stack by using standard JDK 8 constructs and a couple of utility methods, which you might have in JUnit some time soon, anyway.

Java 8 will bring a lot of new traction into last decade’s DSL debate, not least because the Streams API will greatly improve the way we look at transforming or building data. But many current DSLs are not ready for Java 8 and have not been designed in a functional way. They have too many keywords for things and concepts that are hard to learn, and that would be better modelled using functions.

Exceptions to this rule are DSLs like jOOQ or jRTF, which model actual pre-existing external DSLs in a 1:1 fashion, inheriting all the existing keywords and syntax elements – which makes them much easier to learn in the first place.

What’s your take?

What is your take on the above assumptions? What is your favourite internal DSL, that might vanish or that might be completely transformed in the next five years because it has been obsoleted by Java 8?

Stay tuned for more Java 8 Friday articles here on this blog.

QueryDSL vs. jOOQ. Feature Completeness vs. Now More Than Ever


This week, Timo Westkämper from QueryDSL has announced feature completeness on the QueryDSL user group, along with his call for contributions and increased focus on bugfixes and documentation.

We and Timo have always been in close contact, observing each other’s products. In the beginning of jOOQ in 2009, QueryDSL was ahead of us.

But we learned quickly and removed all of our shortcomings, such that jOOQ and QueryDSL were on par by 2011. Ever since, we have been taking inspiration from one another, as in the end, we have had similar goals. Today, whenever someone is looking for a querying DSL, people tend to recommend either of our tools:

QueryDSL is often a good choice in JPA-based environments, while jOOQ is mostly the best choice in SQL-based environments, although jOOQ has already been given some credit in JPA-based environments as well.

Anyway, today, we’d like to congratulate Timo on his new job, and QueryDSL on its feature completeness.

jOOQ, on the other hand, is far from feature complete.

jOOQ is what SQLJ should have been from the beginning.

We’re only at the beginning. Java and SQL are the two platforms that are used by most of the developers on this planet. According to db-engines.com, almost every popular DBMS is a SQL-based relational DBMS. According to TIOBE, Java currently ranks #2 among all languages.

We strongly believe that all of these developers are in dire need of better SQL integration into the Java language. While ORMs and JPA are very well integrated, SQL is not, and that is what we are working on. jOOQ will be feature complete when the Java compiler can natively compile actual SQL code and SQL code fragments into jOOQ, which will serve as its backing AST model for further SQL transformation.

Until we reach that goal, we’ll be adding support for more SQL goodness. A small selection of things that we already support, beyond QueryDSL’s “feature completeness”:

  • Table-valued functions
  • PIVOT tables
  • DDL (with jOOQ 3.4)
  • MERGE statement
  • Derived tables and derived column lists
  • Row value expressions
  • Flashback query
  • Window functions
  • Ordered aggregate functions
  • Common table expressions (with jOOQ 3.4)
  • Object-oriented PL/SQL
  • User-defined types
  • Hierarchical SQL
  • Custom SQL transformation
  • 16 supported RDBMS (even MS Access!)
  • … you name it

Our roadmap is full of great ideas. There’s plenty of work, so let’s get going! Join us, your partner for…

jOOQ is the best way to write SQL in Java

Yet Another 10 Common Mistakes Java Developers Make When Writing SQL (You Won’t BELIEVE the Last One)


(Sorry for that click-bait heading. Couldn’t resist ;-) )

We’re on a mission. To teach you SQL. But mostly, we want to teach you how to appreciate SQL. You’ll love it!

Getting SQL right or wrong shouldn’t be about that You’re-Doing-It-Wrong™ attitude so often encountered when evangelists promote their object of evangelism. Getting SQL right should be about the fun you’ll have once you do get it right. The things you start appreciating when you notice that you can easily replace 2000 lines of slow, hard-to-maintain, and ugly imperative (or object-oriented) code with 300 lines of lean functional code (e.g. using Java 8), or even better, with 50 lines of SQL.

We’re glad to see that our blogging friends have started appreciating SQL, and most specifically window functions, after reading our posts.

So, after our previous, very popular posts:

… we’ll bring you:

Yet Another 10 Common Mistakes Java Developers Make When Writing SQL

And of course, this doesn’t apply to Java developers alone, but it’s written from the perspective of a Java (and SQL) developer. So here we go (again):

1. Not Using Window Functions

After all that we’ve been preaching, this must be our number 1 mistake in this series. Window functions are probably the coolest SQL feature of them all. They’re so incredibly useful, they should be the number one reason for anyone to switch to a better database, e.g. PostgreSQL:

If free and/or Open Source is important to you, you have absolutely no better choice than using PostgreSQL (and you’ll even get to use the free jOOQ Open Source Edition, if you’re a Java developer).

And if you’re lucky enough to work in an environment with Oracle or SQL Server (or DB2, Sybase) licenses, you get even more out of your new favourite tool.

We won’t repeat all the window function goodness in this section – we’ve blogged about them often enough.

The Cure:

Remove MySQL. Take a decent database. And start playing with window functions. You’ll never go back, guaranteed.
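
For a first taste, here’s a minimal sketch in plain JDBC, computing a per-account running total – the PAYMENT table, its columns, and the connection details are all made up for this example:

import java.sql.*;

public class RunningTotal {
    public static void main(String[] args) throws SQLException {
        // Hypothetical table: PAYMENT(ACCOUNT_ID, VALUE_DATE, AMOUNT)
        try (Connection con = DriverManager.getConnection(
                 "jdbc:postgresql://localhost/test", "user", "pass");
             Statement stmt = con.createStatement();

             // SUM(..) OVER (..) computes a per-account running total
             // without any self-joins or correlated subqueries
             ResultSet rs = stmt.executeQuery(
                 "SELECT account_id, value_date, amount, " +
                 "       SUM(amount) OVER (PARTITION BY account_id " +
                 "                         ORDER BY value_date) AS running_total " +
                 "FROM payment " +
                 "ORDER BY account_id, value_date")) {

            while (rs.next())
                System.out.printf("%s %s %s -> %s%n",
                    rs.getString("account_id"),
                    rs.getDate("value_date"),
                    rs.getBigDecimal("amount"),
                    rs.getBigDecimal("running_total"));
        }
    }
}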

2. Not declaring NOT NULL constraints

This one was already part of a previous list where we claimed that you should add as much metadata as possible to your schema, because your database will be able to leverage that metadata for optimisations. For instance, if your database knows that a foreign key value in BOOK.AUTHOR_ID must also be contained exactly once in AUTHOR.ID, then a whole set of optimisations can be achieved in complex queries.

Now let’s have another look at NOT NULL constraints. If you’re using Oracle, NULL values will not be part of your index. This doesn’t matter if you’re expressing an IN constraint, for instance:

SELECT * FROM table
WHERE value IN (
  SELECT nullable_column FROM ...
)

But what happens with a NOT IN constraint?

SELECT * FROM table
WHERE value NOT IN (
  SELECT nullable_column FROM ...
)

Due to SQL’s slightly unintuitive three-valued handling of NULL, the second query may unexpectedly return no results at all, namely whenever the subquery produces at least one NULL value. This is true for all databases that get SQL right.

But because the index on nullable_column doesn’t contain any NULL values, Oracle has to look up the complete content in the table, resulting in a FULL TABLE SCAN. Now that is unexpected! Details about this can be seen in this article.
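
To see the NULL trap in isolation, here’s a small, self-contained sketch against an in-memory H2 database (table and column names invented for the demo):

import java.sql.*;

public class NotInNullDemo {
    public static void main(String[] args) throws SQLException {
        try (Connection con = DriverManager.getConnection("jdbc:h2:mem:demo");
             Statement stmt = con.createStatement()) {

            stmt.execute("CREATE TABLE t (v INT)");
            stmt.execute("CREATE TABLE u (nullable_column INT)");
            stmt.execute("INSERT INTO t VALUES (1), (2)");
            stmt.execute("INSERT INTO u VALUES (2), (NULL)");

            // v NOT IN (2, NULL) evaluates to UNKNOWN for every v,
            // so the query returns no rows at all
            try (ResultSet rs = stmt.executeQuery(
                     "SELECT * FROM t " +
                     "WHERE v NOT IN (SELECT nullable_column FROM u)")) {
                System.out.println("Any rows? " + rs.next()); // Any rows? false
            }
        }
    }
}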

The Cure:

Carefully review all your nullable, yet indexed columns, and check if you really cannot add a NOT NULL constraint to those columns.

The Tool:

If you’re using Oracle, use this query to detect all nullable, yet indexed columns:

SELECT
  i.table_name,
  i.index_name,
  LISTAGG(
    LPAD(i.column_position,  2) || ': ' || 
    RPAD(i.column_name    , 30) || ' '  ||
    DECODE(t.nullable, 'Y', '(NULL)', '(NOT NULL)'), 
    ', '
  ) WITHIN GROUP (ORDER BY i.column_position) 
    AS "NULLABLE columns in indexes"
FROM user_ind_columns i
JOIN user_tab_cols t
ON (t.table_name, t.column_name) = 
  ((i.table_name, i.column_name))
WHERE EXISTS (
  SELECT 1
  FROM user_tab_cols t
  WHERE (t.table_name, t.column_name, t.nullable) = 
       ((i.table_name, i.column_name, 'Y'       ))
)
GROUP BY i.table_name, i.index_name
ORDER BY i.index_name ASC;

Example output:

TABLE_NAME | INDEX_NAME   | NULLABLE columns in indexes
-----------+--------------+----------------------------
PERSON     | I_PERSON_DOB | 1: DATE_OF_BIRTH (NULL)

And then, fix it!


If you’re curious about more details, see also the related posts on this blog.

3. Using PL/SQL Package State

Now, this is a boring one if you’re not using Oracle, but if you are (and you’re a Java developer), be very wary of PL/SQL package state. Are you really doing what you think you’re doing?

Yes, PL/SQL has package state, e.g.

CREATE OR REPLACE PACKAGE pkg IS
  -- Package state here!
  n NUMBER := 1;

  FUNCTION next_n RETURN NUMBER;
END pkg;

CREATE OR REPLACE PACKAGE BODY pkg IS
  FUNCTION next_n RETURN NUMBER
  IS
  BEGIN
    n := n + 1;
    RETURN n;
  END next_n;
END pkg;

Wonderful, so you’ve created yourself an in-memory counter that generates a new number every time you call pkg.next_n. But who owns that counter? Yes, the session. Each session has its own initialised “package instance”.

But no, it’s probably not the session you might have thought of.

We Java developers connect to databases through connection pools. When we obtain a JDBC Connection from such a pool, we recycle that connection from a previous “session”, e.g. a previous HTTP Request (not HTTP Session!). But that’s not the same. The database session (probably) outlives the HTTP Request and will be inherited by the next request, possibly from an entirely different user. Now, imagine you had a credit card number in that package…?

Not The Cure:

Nope. Don’t just jump to using SERIALLY_REUSABLE packages:

CREATE OR REPLACE PACKAGE pkg IS
  PRAGMA SERIALLY_REUSABLE;
  n NUMBER := 1;

  FUNCTION next_n RETURN NUMBER;
END pkg;

Because:

  • You cannot even use that package from SQL, now (see ORA-06534).
  • Mixing this PRAGMA with regular package state from other packages just makes things a lot more complex.

So, don’t.

Not The Cure:

I know. PL/SQL can be a beast. It often seems like such a quirky language. But face it. Many things run much, much faster when written in PL/SQL, so don’t give up just yet. Dropping PL/SQL is not the solution either.

The Cure:

At all costs, try to avoid package state in PL/SQL. Think of package state as you would think of static variables in Java. While they might be useful for caches (and constants, of course) every now and then, you might not actually be accessing the state you think you are. Think about load-balancers, suddenly transferring you to another JVM. Think about class loaders, which might have loaded the same class twice, for some reason.

Instead, pass state as arguments through procedures and functions. This will avoid side-effects and make your code much cleaner and more predictable.
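
To illustrate the analogy in Java terms, compare a stateful helper with its stateless counterpart – a contrived sketch, of course:

// Stateful: shared, mutable, and owned by whoever happens
// to hold the "session" (or class loader, or JVM)
class StatefulCounter {
    static int n = 1;

    static int nextN() {
        return ++n; // side effect: every caller observes every other caller
    }
}

// Stateless: state travels through arguments and return values,
// so the result depends only on the input
class StatelessCounter {
    static int nextN(int n) {
        return n + 1; // no side effect, trivially predictable and testable
    }
}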

Or, obviously, persist state to some table.

4. Running the same query all the time

Master data is boring. You probably wrote some utility to fetch the latest version of your master data (e.g. language, locale, translations, tenant, system settings), and you query the database every time that data is needed.

At all costs, don’t do that. You don’t have to cache many things in your application, as modern databases have grown to be extremely fast when it comes to caching:

  • Table / column content
  • Index content
  • Query / materialized view results
  • Procedure results (if they’re deterministic)
  • Cursors
  • Execution plans

So, for your average query, there’s virtually no need for an ORM second-level cache, at least from a performance perspective (ORM caches mainly fulfil other purposes, of course).

But when you query master data, i.e. data that never changes, then, network latency, traffic and many other factors will impair your database experience.

The Cure:

Please do take 10 minutes, download Guava, and use its excellent and easy-to-set-up cache, which ships with various built-in invalidation strategies. Choose time-based invalidation (i.e. polling), choose Oracle AQ or Streams, or PostgreSQL’s NOTIFY for event-based invalidation, or just make your cache permanent, if it doesn’t matter. But don’t issue an identical master data query all the time.
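
As a minimal sketch (the cache key and the loading query are hypothetical), a time-based Guava cache for master data could look like this:

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import java.util.concurrent.TimeUnit;

public class MasterData {

    // Re-queries the database at most once per 10 minutes per key
    private static final LoadingCache<String, String> SETTINGS =
        CacheBuilder.newBuilder()
                    .expireAfterWrite(10, TimeUnit.MINUTES)
                    .build(new CacheLoader<String, String>() {
                        @Override
                        public String load(String key) throws Exception {
                            // Hypothetical master data query
                            return querySettingFromDatabase(key);
                        }
                    });

    public static String setting(String key) throws Exception {
        return SETTINGS.get(key);
    }

    private static String querySettingFromDatabase(String key) {
        // ... run SELECT value FROM settings WHERE key = ? here
        return "...";
    }
}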

… This obviously brings us to

5. Not knowing about the N+1 problem

You had a choice. At the beginning of your software product, you had to choose between:

  • An ORM (e.g. Hibernate)
  • SQL (e.g. through JDBC)

So, obviously, you chose an ORM, because otherwise you wouldn’t be suffering from “N+1”. What does “N+1” mean?

The accepted answer on this Stack Overflow question explains it nicely. Essentially, you’re running:

SELECT * FROM book

-- And then, for each book:
SELECT * FROM author WHERE id = ?
SELECT * FROM author WHERE id = ?
SELECT * FROM author WHERE id = ?

Of course, you could go and tweak your hundreds of annotations to correctly prefetch or eager fetch each book’s associated author information to produce something along the lines of:

SELECT * 
FROM   book
JOIN   author 
  ON   book.author_id = author.id

But that would be an awful lot of work, and you’ll risk eager-fetching too many things that you didn’t want, resulting in another performance issue.

Maybe, you could upgrade to JPA 2.1 and use the new @NamedEntityGraph to express beautiful annotation trees like this one:

@NamedEntityGraph(
    name = "post",
    attributeNodes = {
        @NamedAttributeNode("title"),
        @NamedAttributeNode(
            value = "comments", 
            subgraph = "comments"
        )
    },
    subgraphs = {
        @NamedSubgraph(
            name = "comments",
            attributeNodes = {
                @NamedAttributeNode("content")
            }
        )
    }
)

The example was taken from this blog post by Hantsy Bai. Hantsy then goes on to explain that you can use the above beauty through the following statement:

em.createQuery("select p from Post p where p.id=:id",
               Post.class)
  .setHint("javax.persistence.fetchgraph", 
           postGraph)
  .setParameter("id", this.id)
  .getResultList()
  .get(0);

Let us all appreciate the above application of JEE standards with all due respect, and then consider…

The Cure:

You just listen to the wise words at the beginning of this article and replace thousands of lines of tedious Java / Annotatiomania™ code with a couple of lines of SQL. Because that will also likely help you prevent another issue that we haven’t even touched yet, namely selecting too many columns, as discussed in previous posts.

Since you’re already using an ORM, this might just mean resorting to native SQL – or maybe you manage to express your query with JPQL. Of course, we agree with Alessio Harri in believing that you should use jOOQ together with JPA.
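
For illustration, the single-query alternative in plain JDBC might look like this sketch (assuming an open JDBC connection and the BOOK / AUTHOR columns from above):

String sql =
    "SELECT b.id, b.title, a.name " +
    "FROM book b " +
    "JOIN author a ON b.author_id = a.id";

// One round trip instead of N + 1
try (PreparedStatement stmt = connection.prepareStatement(sql);
     ResultSet rs = stmt.executeQuery()) {

    while (rs.next())
        System.out.printf("Book %d: %s by %s%n",
            rs.getInt("id"),
            rs.getString("title"),
            rs.getString("name"));
}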

The Takeaway:

While the above will certainly help you work around some real world issues that you may have with your favourite ORM, you could also take it one step further and think about it this way. After all these years of pain and suffering from the object-relational impedance mismatch, the JPA 2.1 expert group is now trying to tweak their way out of this annotation madness by adding more declarative, annotation-based fetch graph hints to JPQL queries that no one can debug, let alone maintain.

The alternative is simple and straight-forward SQL. And with Java 8, we’ll add functional transformation through the Streams API. That’s hard to beat.

But obviously, your views and experiences on that subject may differ from ours, so let’s head on to a more objective discussion about…

6. Not using Common Table Expressions

While common table expressions obviously offer readability improvements, they may also offer performance improvements. Consider the following query that I have recently encountered in a customer’s PL/SQL package (not the actual query):

SELECT round (
  (SELECT amount FROM payments WHERE id = :p_id)
    * 
  (
    SELECT e.bid
    FROM   currencies c, exchange_rates e
    WHERE  c.id     = 
      (SELECT cur_id FROM payments WHERE id = :p_id)
    AND    e.cur_id = 
      (SELECT cur_id FROM payments WHERE id = :p_id)
    AND    e.org_id = 
      (SELECT org_id FROM payments WHERE id = :p_id)
  ) / (
    SELECT c.factor
    FROM   currencies c, exchange_rates e
    WHERE  c.id     = 
      (SELECT cur_id FROM payments WHERE id = :p_id)
    AND    e.cur_id = 
      (SELECT cur_id FROM payments WHERE id = :p_id)
    AND    e.org_id = 
      (SELECT org_id FROM payments WHERE id = :p_id)
  ), 0
) 
INTO amount 
FROM dual;

So what does this do? This essentially converts a payment’s amount from one currency into another. Let’s not delve into the business logic too much, let’s head straight to the technical problem. The above query results in the following execution plan (on Oracle):

------------------------------------------------------
| Operation                         | Name           |
------------------------------------------------------
| SELECT STATEMENT                  |                |
|  TABLE ACCESS BY INDEX ROWID      | PAYMENTS       |
|   INDEX UNIQUE SCAN               | PAYM_PK        |
|   NESTED LOOPS                    |                |
|    INDEX UNIQUE SCAN              | CURR_PK        |
|     TABLE ACCESS BY INDEX ROWID   | PAYMENTS       |
|      INDEX UNIQUE SCAN            | PAYM_PK        |
|    TABLE ACCESS BY INDEX ROWID    | EXCHANGE_RATES |
|     INDEX UNIQUE SCAN             | EXCH_PK        |
|      TABLE ACCESS BY INDEX ROWID  | PAYMENTS       |
|       INDEX UNIQUE SCAN           | PAYM_PK        |
|      TABLE ACCESS BY INDEX ROWID  | PAYMENTS       |
|       INDEX UNIQUE SCAN           | PAYM_PK        |
|     NESTED LOOPS                  |                |
|      TABLE ACCESS BY INDEX ROWID  | CURRENCIES     |
|       INDEX UNIQUE SCAN           | CURR_PK        |
|        TABLE ACCESS BY INDEX ROWID| PAYMENTS       |
|         INDEX UNIQUE SCAN         | PAYM_PK        |
|      INDEX UNIQUE SCAN            | EXCH_PK        |
|       TABLE ACCESS BY INDEX ROWID | PAYMENTS       |
|        INDEX UNIQUE SCAN          | PAYM_PK        |
|       TABLE ACCESS BY INDEX ROWID | PAYMENTS       |
|        INDEX UNIQUE SCAN          | PAYM_PK        |
|  FAST DUAL                        |                |
------------------------------------------------------

The actual execution time is negligible in this case, but as you can see, the same objects are accessed again and again within the query. This is a violation of Common Mistake #4: Running the same query all the time.

The whole thing would be so much easier to read, maintain, and for Oracle to execute, if we had used a common table expression. From the original source code, observe the following:

-- We're always accessing a single payment:
  FROM payments WHERE id = :p_id

-- Joining currencies and exchange_rates twice:
  FROM currencies c, exchange_rates e

So, let’s factor out the payment first:

-- "payment" contains only a single payment
-- But it contains all the columns that we'll need
-- afterwards
WITH payment AS (
    SELECT cur_id, org_id, amount
    FROM   payments
    WHERE  id = :p_id
)
SELECT round(p.amount * e.bid / c.factor, 0)

-- Then, we simply don't need to repeat the
-- currencies / exchange_rates joins twice
FROM   payment p
JOIN   currencies c     ON  p.cur_id = c.id
JOIN   exchange_rates e ON  e.cur_id = p.cur_id
                        AND e.org_id = p.org_id

Note that we’ve also replaced the comma-separated table lists with ANSI JOIN syntax, as suggested in our previous list.

You wouldn’t believe it’s the same query, would you? And what about the execution plan? Here it is!

---------------------------------------------------
| Operation                      | Name           |
---------------------------------------------------
| SELECT STATEMENT               |                |
|  NESTED LOOPS                  |                |
|   NESTED LOOPS                 |                |
|    NESTED LOOPS                |                |
|     FAST DUAL                  |                |
|     TABLE ACCESS BY INDEX ROWID| PAYMENTS       |
|      INDEX UNIQUE SCAN         | PAYM_PK        |
|    TABLE ACCESS BY INDEX ROWID | EXCHANGE_RATES |
|     INDEX UNIQUE SCAN          | EXCH_PK        |
|   TABLE ACCESS BY INDEX ROWID  | CURRENCIES     |
|    INDEX UNIQUE SCAN           | CURR_PK        |
---------------------------------------------------

No doubt that this is much much better.

The Cure:

If you’re lucky enough and you’re using one of those databases that supports window functions, chances are incredibly high (100%) that you also have common table expression support. This is another reason for you to migrate from MySQL to PostgreSQL, or appreciate the fact that you can work on an awesome commercial database.

Common table expressions are like local variables in SQL. In every large statement, you should consider using them, as soon as you feel that you’ve written something before.

The Takeaway:

Some databases (e.g. PostgreSQL, or SQL Server) also support common table expressions for DML statements. In other words, you can write:

WITH ...
UPDATE ...

This makes DML incredibly more powerful.
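
On PostgreSQL, for instance, you can move rows from one table to another in a single statement. Here’s a sketch with hypothetical TASK / TASK_ARCHIVE tables, run through JDBC:

// Delete completed tasks and archive them atomically,
// in one round trip
String sql =
    "WITH done AS ( " +
    "  DELETE FROM task " +
    "  WHERE completed " +
    "  RETURNING id, title " +
    ") " +
    "INSERT INTO task_archive (id, title) " +
    "SELECT id, title FROM done";

try (PreparedStatement stmt = connection.prepareStatement(sql)) {
    System.out.println(stmt.executeUpdate() + " tasks archived");
}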

7. Not using row value expressions for UPDATEs

We’ve advertised the use of row value expressions in our previous listing. They’re very readable and intuitive, and often also promote using certain indexes, e.g. in PostgreSQL.

But few people know that they can also be used in an UPDATE statement, in most databases. Check out the following query, which I again found in a customer’s PL/SQL package (simplified again, of course):

UPDATE u
SET n = (SELECT n + 1    FROM t WHERE u.n = t.n),
    s = (SELECT 'x' || s FROM t WHERE u.n = t.n),
    x = 3;

So this query takes a subquery as a data source for updating two columns, and the third column is updated “regularly”. How does it perform? Moderately:

-----------------------------
| Operation          | Name |
-----------------------------
| UPDATE STATEMENT   |      |
|  UPDATE            | U    |
|   TABLE ACCESS FULL| U    |
|   TABLE ACCESS FULL| T    |
|   TABLE ACCESS FULL| T    |
-----------------------------

Let’s ignore the full table scans, as this is a contrived query – the actual query could leverage indexes. But T is accessed twice, i.e. in both subqueries. Oracle didn’t seem to be able to apply scalar subquery caching in this case.

To the rescue: row value expressions. Let’s simply rephrase our UPDATE to this:

UPDATE u
SET (n, s) = ((
      SELECT n + 1, 'x' || s FROM t WHERE u.n = t.n
    )),
    x = 3;

Let’s ignore the funny, Oracle-specific double-parentheses syntax for the right hand side of such a row value expression assignment, but let’s appreciate the fact that we can easily assign a new value to the tuple (n, s) in one go! Note, we could have also written this, instead, and assign x as well:

UPDATE u
SET (n, s, x) = ((
      SELECT n + 1, 'x' || s, 3 
      FROM t WHERE u.n = t.n
    ));

As you will have expected, the execution plan has also improved, and T is accessed only once:

-----------------------------
| Operation          | Name |
-----------------------------
| UPDATE STATEMENT   |      |
|  UPDATE            | U    |
|   TABLE ACCESS FULL| U    |
|   TABLE ACCESS FULL| T    |
-----------------------------

The Cure:

Use row value expressions. Wherever you can. They make your SQL code incredibly more expressive, and chances are, they make it faster, as well.

Note that the above is supported by jOOQ’s UPDATE statement. This is the moment we would like to make you aware of this cheap, in-article advertisement:

jOOQ - The best way to write SQL in Java

;-)
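
As a sketch of what that might look like with jOOQ’s DSL – assuming generated table classes U and T, a DSLContext named ctx, and static imports from org.jooq.impl.DSL; treat the exact method names as an approximation:

// Assign the tuple (n, s) from a single correlated subquery,
// just like in the plain SQL version above
ctx.update(U)
   .set(row(U.N, U.S),
        select(T.N.plus(1), val("x").concat(T.S))
            .from(T)
            .where(U.N.eq(T.N)))
   .set(U.X, 3)
   .execute();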

8. Using MySQL when you could use PostgreSQL

To some, this may appear to be a bit of a hipster discussion. But let’s consider the facts:

  • MySQL claims to be the “most popular Open Source database”.
  • PostgreSQL claims to be the “most advanced Open Source database”.

Let’s consider a bit of history. MySQL has always been very easy to install and maintain, and it has had a great and active community. This has led to MySQL still being the RDBMS of choice with virtually every web hoster on this planet. Those hosters also host PHP, which was equally easy to install and maintain.

BUT!

We Java developers tend to have an opinion about PHP, right? It’s summarised by this image here:

The PHP Hammer


Well, it works, but how does it work?

The same can be said about MySQL. MySQL has always worked somehow, but while commercial databases like Oracle have made tremendous progress both in terms of query optimisation and feature scope, MySQL has hardly moved in the last decade.

Many people choose MySQL primarily because of its price (USD $ 0.00). But often, the same people have found MySQL to be slow and quickly concluded that SQL is slow per se – without evaluating the options. This is also why all NoSQL stores compare themselves with MySQL, not with Oracle, the database that has been winning the Transaction Processing Performance Council’s (TPC) benchmarks almost forever. Some examples:

While the last article bluntly adds “(and other RDBMS)” it doesn’t go into any sort of detail whatsoever, what those “other RDBMS” do wrong. It really only compares MongoDB with MySQL.

The Cure:

We say: Stop complaining about SQL, when in fact, you’re really complaining about MySQL. There are at least four very popular databases out there that are incredibly good, and millions of times better than MySQL. These are:

(just kidding about the last one, of course)

The Takeaway:

Don’t fall for aggressive NoSQL marketing. 10gen is an extremely well-funded company, even if MongoDB continues to disappoint, technically.

The same is true for Datastax.

Both companies are solving a problem that few people have. They’re selling us niche products as commodity, making us think that our real commodity databases (the RDBMS) no longer fulfil our needs. They are well-funded and have big marketing teams to throw around blunt claims.

In the meantime, PostgreSQL just got even better, and you, as a reader of this blog / post, are about to bet on the winning team :-)

… just to cite Mark Madsen once more:

The Disclaimer:

This article has been quite strongly against MySQL. We don’t mean to talk badly about a database that perfectly fulfils its purpose, as this isn’t a black and white world. Heck, you can get happy with SQLite in some situations. MySQL serves its purpose as the cheap, easy-to-use, easy-to-install commodity database. We just wanted to make you aware of the fact that you’re expressly choosing the cheap, not-so-good database, rather than the cheap, awesome one.

9. Forgetting about UNDO / REDO logs

We have claimed that MERGE statements or bulk / batch updates are good. That’s correct, but nonetheless, you should be wary when updating huge data sets in transactional contexts. If your transaction “takes too long”, i.e. if you’re updating 10 million records at a time, you will run into two problems:

  • You increase the risk of race conditions if another process is also writing to the same table. This may cause a rollback on their transaction or on yours, possibly forcing you to run the huge update all over again
  • You cause a lot of contention on your system, because every other transaction / session that wants to see the data you’re about to update will have to temporarily roll back all of your updates first, before they reach the state on disk that was there before your huge update. That’s the price of ACID.

One way to work around this issue is to allow for other sessions to read uncommitted data.

Another way to work around this issue is to frequently commit your own work, e.g. after 1000 inserts / updates.

In any case, you will have to make a compromise. Frequent commits keep the transaction (and thus the UNDO log) small, but produce the risk of an inconsistent database in the event of the multi-million row update going wrong after 5 million (committed) records. A rollback would then mean reverting all database changes by restoring a backup.

The Cure:

There is no definitive cure to this issue. But beware that you are very, very rarely in a situation where it is OK to simply update 10 million records of a live and online table outside of an actual scheduled maintenance window. The simplest acceptable workaround is indeed to commit your work after N inserts / updates.
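
A minimal sketch of that workaround in JDBC – table, column, id source, and batch size are all made up:

connection.setAutoCommit(false);

try (PreparedStatement stmt = connection.prepareStatement(
        "UPDATE payment SET status = 'PROCESSED' WHERE id = ?")) {

    int count = 0;
    for (long id : idsToUpdate) {
        stmt.setLong(1, id);
        stmt.addBatch();

        // Commit every 1000 updates to keep the transaction
        // (and thus the UNDO log) small
        if (++count % 1000 == 0) {
            stmt.executeBatch();
            connection.commit();
        }
    }

    stmt.executeBatch();
    connection.commit();
}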

The Takeaway:

By this time, NoSQL aficionados will claim (again due to excessive marketing by aforementioned companies) that NoSQL has solved this by dropping schemas and typesafety. “Don’t update, just add another property!” – they said.

But that’s not true!

First off, I can add columns to my database without any issue at all. An ALTER TABLE .. ADD statement typically executes almost instantly, even on live databases. Filling the column with data doesn’t bother anyone either, because no one reads the column yet (remember, don’t SELECT * !). So adding columns in an RDBMS is as cheap as adding JSON properties to a MongoDB document.

But what about altering columns? Removing them? Merging them?

It is simply not true that denormalisation takes you very far. Denormalisation is always a short-term win for the developer, and hardly a long-term win for the operations teams. Having redundant data in your database for the sake of speeding up an ALTER TABLE statement is like sweeping dirt under the carpet.

Don’t believe the marketers. And while you’re at it, perform some doublethink and forget that we’re SQL tool vendors ourselves ;-) Here’s again the “correct” message:

jOOQ - The best way to write SQL in Java

10. Not using the BOOLEAN type correctly

This is not really a mistake per se. It’s just again something that hardly anyone knows. When the SQL:1999 standard introduced the new BOOLEAN data type, they really did it right. Because before, we already had something like booleans in SQL: the <search condition> of SQL-92, which is essentially a predicate for use in WHERE, ON, and HAVING clauses, as well as in CASE expressions.

SQL:1999, however, simply defined the new <boolean value expression> as a regular <value expression>, and redefined the <search condition> as such:

<search condition> ::=
    <boolean value expression>

Done! Now, for most of us Java / Scala / etc. developers, this doesn’t seem like such an innovation. Heck, it’s a boolean. Obviously it can be used interchangeably as a predicate and as a variable.

But in the mind-set of the keyword-heavy SQL folks who have taken inspiration from COBOL when designing the language, this was quite a step forward.

Now, what does this mean? This means that you can use any predicate also as a column! For instance:

SELECT a, b, c
FROM (
  SELECT EXISTS (SELECT ...) a,
         MY_COL IN (1, 2, 3) b,
         3 BETWEEN 4 AND 5   c
  FROM   MY_TABLE
) t
WHERE a AND b AND NOT(c)

This is a bit of a dummy query, agreed, but are you aware of how powerful this is?

Luckily, again, PostgreSQL fully supports this (unlike Oracle, which still doesn’t have any BOOLEAN data type in SQL).

The Cure:

Every now and then, using BOOLEAN types feels very right, so do it! You can transform boolean value expressions into predicates and predicates into boolean value expressions. They’re the same. This makes SQL all so powerful.
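
On a database with a real BOOLEAN type, such as PostgreSQL, you can then fetch predicates as plain column values – a small sketch, reusing MY_TABLE and MY_COL from the dummy query above:

try (Statement stmt = connection.createStatement();
     ResultSet rs = stmt.executeQuery(
         "SELECT my_col IN (1, 2, 3) AS b FROM my_table")) {

    while (rs.next())
        // The predicate arrives as a plain boolean column
        System.out.println(rs.getBoolean("b"));
}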

Conclusion

SQL has evolved steadily over the past years through great standards like SQL:1999, SQL:2003, SQL:2008 and now SQL:2011. It is the only surviving mainstream declarative language, now that XQuery can be considered pretty dead for the mainstream. It can be easily mixed with procedural languages, as PL/SQL and T-SQL (and other procedural dialects) have shown. It can be easily mixed with object-oriented or functional languages, as jOOQ has shown.

At Data Geekery, we believe that SQL is the best way to query data. You don’t agree with any of the above? That’s fine, you don’t have to. Sometimes, even we agree with Winston Churchill who is known to have said:

SQL is the worst form of database querying, except for all the other forms.

But as Yakov Fain has recently put it:

You can run from SQL, but you can’t hide

So, let’s better get back to work and learn this beast! Thanks for reading.

Java 8 Friday: Better Exceptions


At Data Geekery, we love Java. And as we’re really into jOOQ’s fluent API and query DSL, we’re absolutely thrilled about what Java 8 will bring to our ecosystem.

Java 8 Friday

Every Friday, we’re showing you a couple of nice new tutorial-style Java 8 features, which take advantage of lambda expressions, extension methods, and other great stuff. You’ll find the source code on GitHub.

Better Exceptions

I had the idea when I stumbled upon JUnit GitHub issue #706, which is about a new method proposal:

ExpectedException#expect(Throwable, Callable)

One suggestion was to create an interceptor for exceptions like this:

assertEquals(Exception.class, 
    thrown(() -> foo()).getClass());
assertEquals("yikes!", 
    thrown(() -> foo()).getMessage());

On the other hand, why not just add something completely new along the lines of this?

// This is needed to allow for throwing Throwables
// from lambda expressions
@FunctionalInterface
interface ThrowableRunnable {
    void run() throws Throwable;
}

// Assert a Throwable type
static void assertThrows(
    Class<? extends Throwable> throwable,
    ThrowableRunnable runnable
) {
    assertThrows(throwable, runnable, t -> {});
}

// Assert a Throwable type and implement more
// assertions in a consumer
static void assertThrows(
    Class<? extends Throwable> throwable,
    ThrowableRunnable runnable,
    Consumer<Throwable> exceptionConsumer
) {
    boolean fail = false;
    try {
        runnable.run();
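        // Set a flag instead of calling Assert.fail() here,
        // so the catch block below doesn't swallow our own
        // AssertionError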
        fail = true;
    }
    catch (Throwable t) {
        if (!throwable.isInstance(t))
            Assert.fail("Bad exception type");

        exceptionConsumer.accept(t);
    }

    if (fail)
        Assert.fail("No exception was thrown");
}

So the above methods both assert that a given throwable is thrown from a given runnable – ThrowableRunnable to be precise, because most functional interfaces, unfortunately, don’t allow for throwing checked exceptions. See this article for details.

We’re now using the above hypothetical JUnit API as such:

assertThrows(Exception.class, 
    () -> { throw new Exception(); });

assertThrows(Exception.class, 
    () -> { throw new Exception("Message"); },
    e  -> assertEquals("Message", e.getMessage()));

In fact, we could even go further and declare an exception swallowing helper method like this:

// This essentially swallows exceptions
static void withExceptions(
    ThrowableRunnable runnable
) {
    withExceptions(runnable, t -> {});
}

// This delegates exception handling to a consumer
static void withExceptions(
    ThrowableRunnable runnable,
    Consumer<Throwable> exceptionConsumer
) {
    try {
        runnable.run();
    }
    catch (Throwable t) {
        exceptionConsumer.accept(t);
    }
}

This is useful to swallow all sorts of exceptions. The following two idioms are thus equivalent:

try {
    // This will fail
    assertThrows(SQLException.class, () -> {
        throw new Exception();
    });
}
catch (Throwable t) {
    t.printStackTrace();
}

withExceptions(
    // This will fail
    () -> assertThrows(SQLException.class, () -> {
        throw new Exception();
    }),
    t -> t.printStackTrace()
);

Obviuously, these idioms aren’t necessarily more useful than an actual try .. catch .. finally block, specifically also because it does not support proper typing of exceptions (at least not in this example), nor does it support the try-with-resources statement.

Nonetheless, such utility methods will come in handy every now and then.

Next week

Stay tuned for more Java 8 goodness on this blog when we continue our Java 8 Friday series with great new examples.
