How I Incorrectly Fetched JDBC ResultSets. Again.

You know JDBC, right? It’s that really easy, concise API that we love to use to work with virtually any database, relational or not. It has essentially three types that you need to care about:

  • Connection
  • Statement (and its PreparedStatement and CallableStatement subtypes)
  • ResultSet

All the other types are some sort of utilities.

Now, with the above three, we can do really nice and lean Java/SQL coding as follows:

try (Connection c = datasource.getConnection();
     Statement s = c.createStatement();
     ResultSet r = s.executeQuery("SELECT 'hello'")
) {
    while (r.next())
        System.out.println(r.getString(1));
}

Output:

hello

OK? Super easy.

Unless…

Unless you want to write generic JDBC code, because you don’t know what the SQL string is. It could be a SELECT statement. It could be an UPDATE. It could be DDL. It could be a statement batch (several statements). It could call triggers and stored procedures, which again produce nice things like warnings, exceptions, update counts, and additional result sets.

You know, the sort of thing that might come flying in to a generic utility method like jOOQ’s ResultQuery.fetchMany().

(Don’t think this couldn’t happen to you as well. SQL Server triggers are really mean things!)

For this, let’s consider the correct way to execute the following simple statement batch that works wonderfully in SQL Server:

raiserror('e0', 0, 2, 3);
create table t(i int);
raiserror('e1', 5, 2, 3);
insert into t values (1);
raiserror('e2', 10, 2, 3);
insert into t values (2);
raiserror('e3', 15, 2, 3);
select * from t;
drop table t;
raiserror('e4', 16, 2, 3);

The result, as seen in SQL Server Management Studio, is the result set with the two inserted rows, along with a list of messages for the raiserror calls and the affected row counts.

For your convenience, I have pre-formatted the above String into a Java String variable, which is already the first problem, because Java STILL doesn’t have multi-line strings (gaah):

String sql =
    "\n raiserror('e0', 0, 2, 3);"
  + "\n create table t(i int);"
  + "\n raiserror('e1', 5, 2, 3);"
  + "\n insert into t values (1);"
  + "\n raiserror('e2', 10, 2, 3);"
  + "\n insert into t values (2);"
  + "\n raiserror('e3', 15, 2, 3);"
  + "\n select * from t;"
  + "\n drop table t;"
  + "\n raiserror('e4', 16, 2, 3);";

Now see, we might be inclined to just copy-paste some JDBC snippet off some website (e.g. this blog, taking its first snippet) and execute it as such:

try (
    Statement s = c.createStatement();
    ResultSet r = s.executeQuery(sql)
) {
    while (r.next())
        System.out.println(r.getString(1));
}

Yeah. What’ll happen if we do this?

Rats:

com.microsoft.sqlserver.jdbc.SQLServerException: e3
	at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:258)
	at com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1547)
	at com.microsoft.sqlserver.jdbc.SQLServerStatement.doExecuteStatement(SQLServerStatement.java:857)
	at com.microsoft.sqlserver.jdbc.SQLServerStatement$StmtExecCmd.doExecute(SQLServerStatement.java:757)
	at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:7151)
	at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:2689)
	at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:224)
	at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:204)
	at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeQuery(SQLServerStatement.java:659)
	at SQLServer.main(SQLServer.java:80)

e3? What on earth? So what happened with my statement batch? Did it execute? Only until the middle? Or did I get to the end as well?

OK, quite obviously, we have to do this more carefully. We cannot use Statement.executeQuery() here, because we don’t know whether we’ll get a result set. In fact, we got an exception, but not the first one.

Let’s try something else. Let’s try this:

try (Statement s = c.createStatement()) {
    System.out.println(s.execute(sql));
}

That just yields:

false

Okaaay, did anything execute in the database at all? No more exceptions… Let me have a look at the SQL Server Profiler…

Nope, the entire batch got executed. (Could’ve just removed the DROP TABLE statement and checked the contents of table T in SQL Server Management Studio, of course).

Huh, quite a different result, depending on what method we’re calling. Does that scare you? Does your ORM get this right? (jOOQ didn’t, but this is now fixed.)

OK, let’s read the Javadoc on Statement.execute()

It says:

Executes the given SQL statement, which may return multiple results. In some (uncommon) situations, a single SQL statement may return multiple result sets and/or update counts. Normally you can ignore this unless you are (1) executing a stored procedure that you know may return multiple results or (2) you are dynamically executing an unknown SQL string.
The execute method executes an SQL statement and indicates the form of the first result. You must then use the methods getResultSet or getUpdateCount to retrieve the result, and getMoreResults to move to any subsequent result(s).

Huh, OK. Statement.getResultSet() and getUpdateCount() must be used, and then getMoreResults()

The getMoreResults() method also has this interesting bit of information:

There are no more results when the following is true:

// stmt is a Statement object
((stmt.getMoreResults() == false) && (stmt.getUpdateCount() == -1))

Interesting. -1. I guess we can be very happy that at least it’s not returning null or a punch in your face.

So, let’s try this again:

  • We first have to call execute()
  • If it’s true, we fetch getResultSet()
  • If it’s false, we check getUpdateCount()
  • If that was -1, we can stop

Or, in code:

fetchLoop:
for (int i = 0, updateCount = 0; i < 256; i++) {
    boolean result = (i == 0)
        ? s.execute(sql)
        : s.getMoreResults();

    if (result)
        try (ResultSet rs = s.getResultSet()) {
            System.out.println("Result      :");

            while (rs.next())
                System.out.println("  " + rs.getString(1));
        }
    else if ((updateCount = s.getUpdateCount()) != -1)
        System.out.println("Update Count: " + updateCount);
    else
        break fetchLoop;
}

Beautiful! Some remarks:

  • Note how the loop stops after 256 iterations. Never trust these infinite streaming APIs, there’s always a bug somewhere, trust me
  • The boolean value returned from Statement.execute() and Statement.getMoreResults() is the same. We can assign it to a variable inside the loop and call execute() only on the first iteration
  • If true, fetch the result set
  • If false, check the update count
  • If that was -1, stop

Run time!

Update Count: 0
Update Count: 1
Update Count: 1
com.microsoft.sqlserver.jdbc.SQLServerException: e3
	at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:258)
	at com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1547)
	at com.microsoft.sqlserver.jdbc.SQLServerStatement.getMoreResults(SQLServerStatement.java:1270)
	at SQLServer.main(SQLServer.java:83)

Crap. But did it execute completely? Yes it did, but we didn’t get that sweet result set after e3, because of that exception. But at least, we now got 3 update counts. But wait a second, why didn’t we get e0, e1, and e2?

AHA, they’re warnings, not exceptions. Funky SQL Server decided that everything below severity 11 is just an informational message, which the JDBC driver surfaces as a warning. Whatever.

Anyway, let’s fetch those warnings as well!

fetchLoop:
for (int i = 0, updateCount = 0; i < 256; i++) {
    boolean result = (i == 0)
        ? s.execute(sql)
        : s.getMoreResults();

    // Warnings here
    SQLWarning w = s.getWarnings();
    for (int j = 0; j < 255 && w != null; j++) {
        System.out.println("Warning     : " + w.getMessage());
        w = w.getNextWarning();
    }

    // Don't forget this
    s.clearWarnings();

    if (result)
        try (ResultSet rs = s.getResultSet()) {
            System.out.println("Result      :");

            while (rs.next())
                System.out.println("  " + rs.getString(1));
        }
    else if ((updateCount = s.getUpdateCount()) != -1)
        System.out.println("Update Count: " + updateCount);
    else
        break fetchLoop;
}

Great, so now we get all the warnings e0, e1, e2, and the exception e3, along with the update counts:

Warning     : e0
Update Count: 0
Warning     : e1
Update Count: 1
Warning     : e2
Update Count: 1
com.microsoft.sqlserver.jdbc.SQLServerException: e3
	at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:258)
	at com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1547)
	at com.microsoft.sqlserver.jdbc.SQLServerStatement.getMoreResults(SQLServerStatement.java:1270)
	at SQLServer.main(SQLServer.java:82)

That’s more like our batch. But we’re still aborting after e3. How can we get the result set? Easy! Just ignore the exception, right? 🙂

And while we’re at it, let’s use ResultSetMetaData to read the unknown result set type.

fetchLoop:
for (int i = 0, updateCount = 0; i < 256; i++) {
    try {
        boolean result = (i == 0)
            ? s.execute(sql)
            : s.getMoreResults();

        SQLWarning w = s.getWarnings();
        for (int j = 0; j < 255 && w != null; j++) {
            System.out.println("Warning     : " + w.getMessage());
            w = w.getNextWarning();
        }

        s.clearWarnings();

        if (result)
            try (ResultSet rs = s.getResultSet()) {
                System.out.println("Result      :");
                ResultSetMetaData m = rs.getMetaData();

                while (rs.next())
                    for (int c = 1; c <= m.getColumnCount(); c++)
                        System.out.println(
                            "  " + m.getColumnName(c) +
                            ": " + rs.getInt(c));
            }
        else if ((updateCount = s.getUpdateCount()) != -1)
            System.out.println("Update Count: " + updateCount);
        else
            break fetchLoop;
    }
    catch (SQLException e) {
        System.out.println("Exception   : " + e.getMessage());
    }
}

There, that’s more like it:

Warning     : e0
Update Count: 0
Warning     : e1
Update Count: 1
Warning     : e2
Update Count: 1
Exception   : e3
Result      :
  i: 1
  i: 2
Update Count: 0
Exception   : e4

Now we’ve executed the entire batch in a super generic way with JDBC

Gah, I want this to be easier

Of course you do, which is why there is jOOQ. jOOQ has the really nice fetchMany() methods, which can execute random SQL strings to get a mixture of:

  • Update counts
  • Result sets
  • Exceptions / Warnings (jOOQ 3.10+ only)

For example, we can write:

// Use this new setting to indicate that we don't want to throw
//  exceptions, but collect them, as we've seen above
DSLContext ctx = DSL.using(c, 
  new Settings().withThrowExceptions(THROW_NONE));

// And then, simply:
System.out.println(ctx.fetchMany(sql));

The result is of the form:

Warning: SQL [...]; e0
Update count: 0
Warning: SQL [...]; e1
Update count: 1
Warning: SQL [...]; e2
Update count: 1
Exception: SQL [...]; e3
Result set:
+----+
|   i|
+----+
|   1|
|   2|
+----+
Update count: 0
Exception: SQL [...]; e4

Excellent!

What we didn’t cover

Oh, tons of things, but I need material for future blog posts, too, right?

  • We only discussed SQL Server so far
  • We didn’t discuss the fact that SQLException.getNextException() doesn’t work here
  • We didn’t discuss how we can combine this with OUT parameters (eegh, at what moment do we fetch those)
  • We didn’t discuss the fact that some JDBC drivers don’t implement this correctly (looking at you, Oracle)
  • We didn’t go into the depths of how JDBC drivers don’t implement ResultSetMetaData correctly
  • We didn’t cover the performance overhead of fetching warnings, e.g. in MySQL
  • … and much more

So, are you still writing JDBC code yourself? 🙂

Are Java 8 Streams Truly Lazy? Not Completely!

In a recent article, I’ve shown that programmers should always apply a filter first, map later strategy with streams. The example I made there was this one:

hugeCollection
    .stream()
    .limit(2)
    .map(e -> superExpensiveMapping(e))
    .collect(Collectors.toList());

In this case, the limit() operation implements the filtering, which should take place before the mapping.

Several readers correctly mentioned that in this case, it doesn’t matter in which order we put the limit() and map() operations, because most operations are evaluated lazily in the Java 8 Stream API.

Or rather: The collect() terminal operation pulls values from the stream lazily, and once the limit() operation has produced enough values, it will no longer produce new ones, regardless of whether map() came before or after it. This can be proven easily as follows:

import java.util.stream.Stream;

public class LazyStream {
    public static void main(String[] args) {
        Stream.iterate(0, i -> i + 1)
              .map(i -> i + 1)
              .peek(i -> System.out.println("Map: " + i))
              .limit(5)
              .forEach(i -> {});

        System.out.println();
        System.out.println();

        Stream.iterate(0, i -> i + 1)
              .limit(5)
              .map(i -> i + 1)
              .peek(i -> System.out.println("Map: " + i))
              .forEach(i -> {});
    }
}

The output of the above is:

Map: 1
Map: 2
Map: 3
Map: 4
Map: 5


Map: 1
Map: 2
Map: 3
Map: 4
Map: 5

But this isn’t always the case!

This optimisation is an implementation detail, and in general, it is not unwise to really apply the filter first, map later rule thoroughly, not relying on such an optimisation. In particular, the Java 8 implementation of flatMap() is not lazy. Consider the following logic, where we put a flatMap() operation in the middle of the stream:

import java.util.stream.Stream;

public class LazyStream {
    public static void main(String[] args) {
        Stream.iterate(0, i -> i + 1)
              .flatMap(i -> Stream.of(i, i, i, i))
              .map(i -> i + 1)
              .peek(i -> System.out.println("Map: " + i))
              .limit(5)
              .forEach(i -> {});

        System.out.println();
        System.out.println();

        Stream.iterate(0, i -> i + 1)
              .flatMap(i -> Stream.of(i, i, i, i))
              .limit(5)
              .map(i -> i + 1)
              .peek(i -> System.out.println("Map: " + i))
              .forEach(i -> {});
    }
}

The result is now:

Map: 1
Map: 1
Map: 1
Map: 1
Map: 2
Map: 2
Map: 2
Map: 2


Map: 1
Map: 1
Map: 1
Map: 1
Map: 2

So, the first Stream pipeline will map all the 8 flatmapped values prior to applying the limit, whereas the second Stream pipeline really limits the stream to 5 elements first, and then maps only those.

The reason for this is in the flatMap() implementation:

// In ReferencePipeline.flatMap()
try (Stream<? extends R> result = mapper.apply(u)) {
    if (result != null)
        result.sequential().forEach(downstream);
}

As you can see, the result of the flatMap() operation is consumed eagerly with a terminal forEach() operation, which will always produce all the four values in our case and send them to the next operation. So, flatMap() isn’t lazy, and thus the next operation after it will get all of its results. This is true for Java 8. Future Java versions might improve this, of course.

We better filter them first. And map later.

A Basic Programming Pattern: Filter First, Map Later

In recent days, I’ve seen a bit too much of this:

someCollection
    .stream()
    .map(e -> someFunction(e))
    .collect(Collectors.toList())
    .subList(0, 2);

Something is very wrong with the above example. Can you see it? No? Let me rename those variables for you.

hugeCollection
    .stream()
    .map(e -> superExpensiveMapping(e))
    .collect(Collectors.toList())
    .subList(0, 2);

Better now? Exactly. The above algorithm is O(N) when it could be O(1):

hugeCollection
    .stream()
    .limit(2)
    .map(e -> superExpensiveMapping(e))
    .collect(Collectors.toList());

(Let’s assume the lack of explicit ordering is irrelevant)

I’m working mostly with SQL and helping companies tune their SQL (check out our training, btw) and as such, I’m always very eager to reduce the algorithmic complexity of queries. To some people, this seems like wizardry, but honestly, most of the time, it just boils down to adding a well placed index.

Why?

Because an index reduces the algorithmic complexity of a query algorithm from O(N) (scanning the entire table for matches) to O(log N) (scanning a B-tree index for the same matches).
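
To see the shape of that argument in plain Java (a toy analogue only, not how a database B-tree actually works), compare a linear scan over a list with a lookup in a sorted tree structure:

import java.util.List;
import java.util.TreeSet;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class IndexAnalogy {
    public static void main(String[] args) {
        // N values, think "table rows"
        List<Integer> table = IntStream.range(0, 1_000_000)
                                       .boxed()
                                       .collect(Collectors.toList());

        // "Full table scan": O(N) comparisons in the worst case
        boolean scan = table.contains(999_999);

        // "Index lookup": a sorted tree structure, O(log N) comparisons
        TreeSet<Integer> index = new TreeSet<>(table);
        boolean seek = index.contains(999_999);

        System.out.println(scan + " " + seek);
    }
}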

Sidenote: Reducing algorithmic complexity is almost never premature optimisation. As your data set grows, bad complexity will always bite you!

The same is true for the above examples. Why would you ever traverse and transform an entire list of a huge amount of elements (N) with an expensive operation (superExpensiveMapping), when you really only need to do this for the first two values?

Conclusion

SQL is a declarative language where the query optimiser gets this right automatically for you: It will (almost) always filter the data first (WHERE clause), and only then transform it (JOIN, GROUP BY, SELECT, etc.).

Just as in SQL, when we hand-write our queries using Streams (and also in imperative programming), do always:

Filter First, Map Later

Your production system will thank you.

Note, another interesting case with flatMap() is documented here.

ORMs Should Update “Changed” Values, Not Just “Modified” Ones

In this article, I will establish how the SQL language and its implementations distinguish between changed values and modified values, where a changed value is a value that has been “touched”, but not necessarily modified, i.e. the value might be the same before and after the change.

Many ORMs, unfortunately, either update all of a record’s values, or only the modified ones. The former can be inefficient, and the latter can be wrong. Updating the changed values would be correct.

Note that you may have a different definition of changed and modified. For this article, let’s just assume that the above definition is as valid as it is useful.

Introduction

A very interesting discussion was triggered recently by Vlad Mihalcea who was looking for an answer to this interesting question:

What’s the overhead of updating all columns, even the ones that haven’t changed?

Apart from the question being very interesting from a performance perspective, the tweet also inspired a discussion of the functional aspects of distinguishing between updating all columns vs. updating some columns, which I’ll summarise in this article.

What’s the Problem?

The problem is one that all ORM vendors need to solve: ORMs have a client side representation of the relational model, and that representation is cached (or “out of sync”) for a user to change and then persist again. The problem is now how to re-synchronise the client side representation with the server side representation in a consistent and correct way.

Sidenote: By ORM I understand any tool that maps from a client side representation of your database schema to the database schema itself, regardless if the product supports full-fledged JPA-style object graph persistence, or “merely” implements an “active record” pattern, such as jOOQ 3.x (I find that distinction a bit academic).

All such ORMs have a client side representation of a database record, for instance given the following table (I’m going to be using PostgreSQL syntax):

CREATE TABLE customer (
  customer_id SERIAL8     NOT NULL PRIMARY KEY,
  first_name  VARCHAR(50) NOT NULL,
  last_name   VARCHAR(50) NOT NULL
)

You’re going to have a client side representation as the following (using Java, e.g. jOOQ or JPA):

// jOOQ generated UpdatableRecord
public class CustomerRecord 
extends UpdatableRecordImpl<CustomerRecord> {

  public CustomerRecord setCustomerId(Long customerId) { ... }
  public Long getCustomerId() { ... }
  public CustomerRecord setFirstName(String firstName) { ... }
  public String getFirstName() { ... }

  ...
}

// JPA annotated entity
@Entity
public class Customer {

  @Id
  @GeneratedValue(strategy = IDENTITY)
  public long customerId;

  @Column
  public String firstName;

  ...
}

In principle, these two approaches are the same thing with the distinction that jOOQ explicitly governs all UpdatableRecord interactions through type inheritance, whereas JPA makes this dependency more implicit through annotations:

  • jOOQ – explicit behavioural dependency between entity and jOOQ logic
  • JPA – implicit behavioural dependency between entity and JPA entity manager

In principle, the distinction is just a matter of taste, a programming style: Explicit vs. declarative.

But from a practical perspective, the JPA implementation lacks an important feature when it comes to synching the state back to the database. It cannot reflect change, only modification.

How to synch the state back to the database?

Let’s assume we have a customer called John Doe:

INSERT INTO customer (first_name, last_name)
VALUES ('John', 'Doe');

And that customer now changes their names to John Smith. We have several options of sending that update to the database, through “PATCH” or “PUT” semantics – terminology used by Morgan Tocker in another tweet in that discussion:

-- PATCH
UPDATE customer SET last_name = 'Smith' WHERE id = ? 

-- PUT
UPDATE customer 
SET first_name = 'John',
    last_name = 'Smith'
WHERE customer_id = ? 

A “PATCH” operation sends only the changed values back to the server, whereas a “PUT” operation sends the entire entity back to the server.

Discussion – Semantics.

In favour of PUT

The two operations are semantically very different. If another session attempts to rename this customer to Jane Doe concurrently (and without optimistic locking being in place), then the PATCH operation might result in an inconsistent outcome (Jane Smith), whereas the PUT operation would still produce one of the expected results, depending on what write is executed first:

-- PATCH result: Jane Smith
-- PATCH 1
UPDATE customer SET last_name = 'Smith' WHERE customer_id = ? 

-- PATCH 2
UPDATE customer SET first_name = 'Jane' WHERE customer_id = ? 

-- PUT result: Jane Doe
-- PUT 1
UPDATE customer 
SET first_name = 'John',
    last_name = 'Smith'
WHERE customer_id = ? 

-- PUT 2
UPDATE customer 
SET first_name = 'Jane',
    last_name = 'Doe'
WHERE customer_id = ? 

This is one of the reasons why Hibernate, as a JPA implementation, always implements PUT semantics by default, sending all the columns at once. You can opt out of this by using the @DynamicUpdate annotation, which will only update modified values (not “changed” values, I’ll explain this distinction later).
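
For illustration, a minimal sketch of what opting in to @DynamicUpdate might look like (the entity is just the example from above; @DynamicUpdate is a Hibernate-specific annotation, not part of JPA):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

import org.hibernate.annotations.DynamicUpdate;

@Entity
@DynamicUpdate // Hibernate will list only the modified columns in UPDATE statements
public class Customer {

  @Id
  @GeneratedValue
  public long customerId;

  public String firstName;
  public String lastName;
}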

Defaulting to PUT makes perfect sense in such a trivial setup, but it is a short-sighted solution when the table has many more columns. We’ll see right away why:

In favour of PATCH

One size doesn’t fit all. Sometimes, you do want concurrent updates to happen, and you do want to implement PATCH semantics, because sometimes, two concurrent updates do not work against each other. Take the following example using an enhancement of the customer table.

Business is asking us to collect some aggregate metrics for each customer. The number of clicks they made on our website, as well as the number of purchases they made:

CREATE TABLE customer (
  customer_id SERIAL8     NOT NULL PRIMARY KEY,
  first_name  VARCHAR(50) NOT NULL,
  last_name   VARCHAR(50) NOT NULL,

  clicks      BIGINT      NOT NULL DEFAULT 0,
  purchases   BIGINT      NOT NULL DEFAULT 0
)

And, of course, once you agree that the above design is a suitable one, you’ll immediately agree that here, PATCH semantics is more desirable than PUT semantics:

-- Updating clicks
UPDATE customer SET clicks = clicks+1 WHERE customer_id = ? 

-- Updating purchases
UPDATE customer SET purchases = purchases+1 WHERE customer_id = ? 

Not only do we update only an individual column, we’re doing it entirely in SQL, including the calculation. With this approach, we do not even need optimistic locking to guarantee update correctness, as we’re not using any client side cached version of the customer record, which could be out of date and would need optimistic (or worse: pessimistic) locking.
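
With jOOQ, such a server-side increment might look like this sketch (assuming generated CUSTOMER classes and a customerId variable); the calculation remains a SQL expression, so no cached client side value is involved:

// Updating clicks, the calculation stays in SQL
ctx.update(CUSTOMER)
   .set(CUSTOMER.CLICKS, CUSTOMER.CLICKS.plus(1))
   .where(CUSTOMER.CUSTOMER_ID.eq(customerId))
   .execute();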

If we implemented this differently, using client side calculation of the updated clicks / purchases counters…

-- Updating clicks
UPDATE customer 
SET clicks = ? 
WHERE customer_id = ? 

-- Updating purchases
UPDATE customer 
SET purchases = ? 
WHERE customer_id = ? 

… then we’d need one of these techniques:

  • Pessimistic locking: Nope, won’t work. We could still get incorrect updates
  • Optimistic locking: Indeed, any update would need to be done on a versioned customer record, so if there are two concurrent updates, one of them will fail and could try again. This guarantees data integrity, but will probably make this functionality very painful, because a lot of click updates are probably done in a short amount of time, and they would need to be repeated until they work!
  • Client side synchronisation: Of course, we could prevent concurrency for these updates on the client side, making sure that only one concurrent process ever updates click counts (for a given customer). We could implement a click count update queue for this.

All of the above options have significant drawbacks; the easiest solution is really to just increment the counter directly in the database.

And don’t forget, if you choose a bind-variable based solution, and opt for updating ALL the columns, rather than just the changed one, your first_name / last_name updates might conflict with these counter updates as well, making things even more complicated.

Partial PUT (or compound PATCH)

In fact, from a semantics perspective, if you do want to use an ORM to update an entity, you should think about a “partial PUT” semantics, which separates the different entity elements into “sub entities”. From a relational perspective, of course, no such thing as a sub entity exists. The above example should be normalised into this, and we would have far fewer concurrency issues:

CREATE TABLE customer (
  customer_id SERIAL8     NOT NULL PRIMARY KEY,
  first_name  VARCHAR(50) NOT NULL,
  last_name   VARCHAR(50) NOT NULL
);

CREATE TABLE customer_clicks (
  customer_id BIGINT NOT NULL PRIMARY KEY REFERENCES customer,
  clicks      BIGINT NOT NULL DEFAULT 0
);

CREATE TABLE customer_purchases (
  customer_id BIGINT NOT NULL PRIMARY KEY REFERENCES customer,
  purchases   BIGINT NOT NULL DEFAULT 0
);

This way, the previously mentioned PUT semantics would not create situations where individual, semantically unrelated updates (updates to names, updates to clicks) would interfere with each other. We would only need to make sure that e.g. two competing updates to clicks are correctly serialised.

Practically, we often don’t design our databases this way, whether for convenience reasons, for optimised storage, or for optimised querying (see also our article on when normalisation and surrogate keys hurt performance).

jOOQ’s “changed” value semantics

So that “sub entity” is really just a logical thing, which can be represented either as a logically separate entity in JPA, or we can use jOOQ, which works a bit differently here. In jOOQ, we can change an UpdatableRecord only partially, and that partial change is sent to the server:

CustomerRecord customer = ctx
    .selectFrom(CUSTOMER)
    .where(CUSTOMER.CUSTOMER_ID.eq(customerId))
    .fetchOne();

customer.setFirstName("John");
customer.setLastName("Smith");

assertTrue(customer.changed(CUSTOMER.FIRST_NAME));
assertTrue(customer.changed(CUSTOMER.LAST_NAME));
assertFalse(customer.changed(CUSTOMER.CLICKS));
assertFalse(customer.changed(CUSTOMER.PURCHASES));

customer.store();

assertFalse(customer.changed(CUSTOMER.FIRST_NAME));
assertFalse(customer.changed(CUSTOMER.LAST_NAME));
assertFalse(customer.changed(CUSTOMER.CLICKS));
assertFalse(customer.changed(CUSTOMER.PURCHASES));

This will send the following statement to the server:

UPDATE customer
SET first_name = ?,
    last_name = ?
WHERE customer_id = ?

Optionally, just as with JPA, you can turn on optimistic locking on this statement. The important thing here is that the clicks and purchases columns are left untouched, because they were not changed by the client code. This is different from JPA, which either sends all the values by default or, if you specify @DynamicUpdate in Hibernate, would send only the last_name column, because, while first_name was changed, it was not modified.

My definition:

  • changed: The value is “touched”, its state is “dirty” and the state needs to be synched to the database, regardless of modification.
  • modified: The value is different from its previously known value. By necessity, a modified value is always changed.

As you can see, these are different things, and it is quite hard for a JPA-based API like Hibernate to implement changed semantics because of the annotation-based declarative nature of how entities are defined. We’d need some sophisticated instrumentation to intercept all data changes even when the values have not been modified (I didn’t make those attributes public by accident).

Without this distinction, however, it is unreasonable to use @DynamicUpdate in Hibernate, as we might run into that situation we didn’t want to run into, where we get a customer called “Jane Smith” – or we use optimistic locking, in case of which there’s not much point in using @DynamicUpdate.

The database perspective

From a database perspective, it is also important to distinguish between change and modification semantics. In the answer I gave on Stack Exchange, I’ve illustrated two situations:

INSERTs and DEFAULT values

Thus far, we’ve discussed only UPDATE statements, but similar reasoning may be made for INSERT as well. These two statements are the same:

INSERT INTO t (a, b)    VALUES (?, ?);
INSERT INTO t (a, b, c) VALUES (?, ?, DEFAULT);

This one, however, is different:

INSERT INTO t (a, b, c) VALUES (?, ?, ?);

In the first case, a DEFAULT clause (e.g. timestamp generation, identity generation, trigger value generation, etc.) may apply to the column c. In the second case, the value c is provided explicitly by the client.

Languages like Java do not have any way to represent this distinction between

  • NULL (which is usually, but not always, the DEFAULT) in SQL
  • an actual DEFAULT

This can only be achieved when an ORM implements changed semantics, like jOOQ does. When you create a customer with jOOQ, then clicks and purchases will have their DEFAULT applied:

CustomerRecord c1 = ctx.newRecord(CUSTOMER);
c1.setFirstName("John");
c1.setLastName("Doe");
c1.store();

CustomerRecord c2 = ctx.newRecord(CUSTOMER);
c2.setFirstName("Jane");
c2.setLastName("Smith");
c2.setClicks(1L);
c2.setPurchases(1L);
c2.store();

Resulting SQL:

-- c1.store();
INSERT INTO customer (first_name, last_name)
VALUES (?, ?);

-- c2.store();
INSERT INTO customer (first_name, last_name, clicks, purchases)
VALUES (?, ?, ?, ?);

In both cases, that’s what the user tells jOOQ to do, so jOOQ will generate a query accordingly.
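
The flip side, as a sketch: as soon as the client touches a column, even with the very same value as its DEFAULT, the column counts as changed and is included in the INSERT, so the DEFAULT clause no longer applies:

CustomerRecord c3 = ctx.newRecord(CUSTOMER);
c3.setFirstName("Jack");
c3.setLastName("Jones");
c3.setClicks(0L); // explicitly set, same value as the DEFAULT

c3.store();

// Resulting SQL:
// INSERT INTO customer (first_name, last_name, clicks) VALUES (?, ?, ?);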

Back to UPDATE statements

Consider the following example using Oracle triggers:

CREATE TABLE x (a INT PRIMARY KEY, b INT, c INT, d INT);

INSERT INTO x VALUES (1, 1, 1, 1);

CREATE OR REPLACE TRIGGER t
  BEFORE UPDATE OF c, d -- Doesn't fire on UPDATE OF b!
  ON x
BEGIN
  IF updating('c') THEN
    dbms_output.put_line('Updating c');
  END IF;
  IF updating('d') THEN
    dbms_output.put_line('Updating d');
  END IF;
END;
/

SET SERVEROUTPUT ON
UPDATE x SET b = 1 WHERE a = 1;
UPDATE x SET c = 1 WHERE a = 1;
UPDATE x SET d = 1 WHERE a = 1;
UPDATE x SET b = 1, c = 1, d = 1 WHERE a = 1;

It results in the following output:

table X created.
1 rows inserted.
TRIGGER T compiled
1 rows updated.
1 rows updated.
Updating c

1 rows updated.
Updating d

1 rows updated.
Updating c
Updating d

As you can see, the trigger doesn’t fire when we update only column b, which it is not interested in. Again, this goes in the direction of distinguishing between changed and modified values, where a trigger fires only when a value is changed (but not necessarily modified).

Now, if an ORM always updates all the columns, this trigger will not work correctly. Sure, we can compare :OLD.b and :NEW.b, but that would check for modification, not change, and it might be costly to do so for large strings!

Speaking of costs…

Performance

Statement caching: Weakly in favour of PUT

While one of the reasons the Hibernate team mentioned in favour of updating all the columns is improved cursor cache performance (fewer distinct SQL statements need to be parsed by the database as there are fewer distinct update configurations), I suggest that this “premature optimisation” is negligible. If a client application runs dynamic updates (in the jOOQ sense, where changed values are updated, not just modified values), then chances that the possible SQL statements that need to be parsed will explode are slim to non-existent.

I would definitely like to see real-world benchmarks on this topic!

Batching: Weakly in favour of PUT

When you want to batch tons of update statements from JDBC, then indeed, you will need to ensure that they all have the exact same SQL string. However, this is not a good argument in favour of using PUT semantics and updating all columns.

I’m saying “not good”, because such a batched update should still only consider a subset of the columns for update, not all the columns. And that subset should be determined from the aggregated changed flags, not from data modification.
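
As an aside, a minimal JDBC sketch of why batching favours a single SQL string (the customers collection and its fields are hypothetical): all entries in the batch reuse one statement, and only the bind values differ.

try (PreparedStatement ps = c.prepareStatement(
    "UPDATE customer SET first_name = ?, last_name = ? WHERE customer_id = ?")) {

    for (Customer customer : customers) {
        ps.setString(1, customer.firstName);
        ps.setString(2, customer.lastName);
        ps.setLong(3, customer.customerId);
        ps.addBatch();
    }

    ps.executeBatch();
}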

Index updates: In favour of PATCH (depending on the database)

Most databases optimise index updates to ignore indexes whose columns have not been changed. Oracle also doesn’t update indexes whose columns have not been modified, in case of which PUT and PATCH semantics both work the same way from an indexing perspective. Other databases may not work this way, where PATCH semantics is favourable.

But even if the optimisation is in place, the old and the new values have to be compared for equality (i.e. to see if a modification took place). You don’t want to compare millions of strings per second if there’s no need to do so! Check out Morgan Tocker’s interesting answer on Stack Exchange, from a MySQL perspective.

So, why not just prevent expensive modification checks by telling the database what has changed, instead?

UNDO overhead: In favour of PATCH

Every statement has a footprint on the UNDO / REDO logs. As I’ve shown above, the statements are semantically different in many ways, so if your statement is bigger (more columns are updated), then the impact on the UNDO / REDO log is bigger as well. This can have drastic effects depending on the size of your table / columns.

Don’t forget that this can also affect backup performance!

More performance related information in this blog post:

https://jonathanlewis.wordpress.com/2007/01/02/superfluous-updates

Note: While these bits of information were mostly Oracle-specific, common sense dictates that other RDBMS will behave in similar ways.

Conclusion

With all these negative aspects to including unnecessary columns for update through an ORM compared to the almost negligible benefits, I’d say that users should move forward and completely avoid this mess. Here’s how:

  • jOOQ optimises this out of the box, if users set the changed values explicitly. Beware that when you “load” a POJO into a Record, it will set all the columns to changed, which may or may not be the desired effect!
  • Hibernate allows for @DynamicUpdate, which may work incorrectly, as its minimal “PATCH” semantics is based on modified values, not on changed values. However, JPA allows for declaring more than one entity per table, which might certainly be a valid option for this kind of problem
  • Normalisation is always an option, with its own trade offs. The clicks and purchases columns could be externalised in separate tables, if this benefits the overall design.
  • More often than not, writing an UPDATE with SQL directly is the best choice. As we’ve seen in this article, the counters should be updated with expressions of the form clicks = clicks + 1, which circumvents most problems exposed in this article.

In short, as Michael Simons said:

And we all do feel very dirty when we write SELECT *, right? So we should at least be wary of updating all the columns as well.

jOOQ Tuesdays: Gerald Sangudi and Keshav Murthy Reveal the Secrets of N1QL (SQL on JSON)

Welcome to the jOOQ Tuesdays series. In this series, we’ll publish an article on the third Tuesday every other month where we interview someone we find exciting in our industry from a jOOQ perspective. This includes people who work with SQL, Java, Open Source, and a variety of other related topics.

I’m very excited to feature today Gerald Sangudi and Keshav Murthy, creators of the N1QL query language, CouchBase’s SQL-based JSON querying language.

Hi Gerald and Keshav – It’s great to have two people on the jOOQ Tuesdays interview at once! How did you guys meet?

Keshav: Gerald interviewed me for my job at Couchbase.

Gerald: Best decision we ever made.

You’re both working at CouchBase, one of the leading “JSON databases” in the market. What drove you towards JSON?

Keshav: I worked for IBM, specifically on the Informix database. In 2013, Websphere teams needed JSON support. I was skeptical of JSON for data management except for managing metadata.  Years before, I had successfully opposed implementing XML and XQuery inside Informix. Then I went to a NoSQL conference where I saw enterprises using JSON in real-world applications.  I saw enterprises like eBay, Cisco developing their enterprise applications on NoSQL and JSON.

RDBMS customers had been asking for ALTER TABLE ONLINE when they needed to add new columns. They typically would have additional unused columns which they’d rename when needed. Obviously, this wasn’t an optimal or a good solution, because you could run out of these columns, and their types or structure weren’t flexible.

For years, customers had been asking to ALTER TABLE while applications were online. I realized JSON was a good way to provide that. That’s when we added JSON as a type, added sharding, and started to extend SQL to support JSON. A good relational database can deliver a lot of value with JSON. So, we developed a mechanism to exploit JSON to provide schema flexibility on relational tables, and we received a patent for it: http://bit.ly/2og1uXe.

At Couchbase, data is stored as JSON and N1QL was designed as SQL for JSON.  So, we keep calm and JSON.

Gerald: I am not at Couchbase any more, but I was there for four years. Couchbase has its roots in memcached and CouchDB, so Couchbase was already storing and managing JSON data before N1QL and before we arrived. N1QL was developed to enable our users to query and manipulate their JSON data.

You’re both working on, or have worked on N1QL, CouchBase’s innovative SQL-esque query language for JSON. Has the SQL language come full circle? Why is the SQL syntax such a great fit for you?

Keshav: Don Chamberlin, co-inventor of SQL, said this. During the XQuery discussion, he argued against using SQL for manipulating XML. Now, with N1QL and SQL++ on JSON, he sees enormous possibilities for SQL to manipulate JSON effectively. So, you could say, it has come full circle.

From the application point of view, the requirement for SQL to support complex data models has been there. SQL-99 added the structured type into the language, but didn’t recognize the need for schema flexibility.

What Gerald has done very nicely is to inherit SQL and extend not just the language but the underlying boolean logic.

SQL has three-valued logic: TRUE, FALSE, NULL. N1QL has four-valued logic: TRUE, FALSE, NULL and MISSING. SQL has the select-join-project operations. N1QL has those and adds NEST and UNNEST operations for handling arrays. Once we have the logic and the operations, the type system and the rest of the expressions for handling nested objects and arrays can be added.

There is another important change in N1QL compared to SQL.  SQL  is the query language to manipulate data.  N1QL can discover the document metadata (names, structure and types) dynamically and operate on it.

Gerald: SQL is the greatest and most successful query language of all time. Our job was to enable our users to query data that is sometimes different from what standard SQL expects. We hope N1QL does that.

As an aside, the “N1” in N1QL stands for non-first normal form. SQL geeks like you Lukas may know that “non-first” is one primary difference between JSON data and relational data. The other primary differences are schema and uniformity.

Will your relational competition steal features from N1QL? Or will you steal more features from them?

Keshav: I do hope relational databases steal features from N1QL. Having common approaches to solving problems makes it easier for customers to choose the right database for the right problem. I do hope they choose Couchbase more often than RDBMS!

Gerald always says, we don’t differ from SQL unless there is a good reason. In that sense, we’ve taken a lot of the features from relational databases already. In addition, we learn from successful models in relational databases for things like index design, query optimization, security and monitoring. We stand on the shoulders of giants.

Gerald: We hope that both relational and non-relational vendors steal features from each other. There is a collaborative effort on something called SQL++, which is a superset of both SQL and N1QL. We hope SQL++ is the convergence point. One lesson from the success of SQL is that standards are great for both users and vendors.

CouchBase doesn’t have what you call a “static schema”. The biggest advantage of a static schema for the database optimiser is the many ways such a schema can be used to predict performance and choose optimal execution plans. How does optimisation work in a “schema-less” database?

Keshav: Actually, a static schema gives you the structure, but not the data distribution, which is a major factor for calculating the execution cost. N1QL uses the information within the query and available indexes to glean the structure and decide on the plan. For example, if you have a query with a predicate:  WHERE state = “CA” and zipcode = “94040”, and there is an index on either state, zipcode or both, we’d assume these key-values exist in the document and push down the predicates to the index scan. We give further details in an article on DZone:  https://dzone.com/articles/a-deep-dive-into-couchbase-n1ql-query-optimization

Separately, we do have a mechanism to INFER the schema by sampling.  Right now, we show the inferred schema so users can understand the structure.  We also use the inferred schema to make query editing easier with hints and validations within the workbench.  We do have plans to use this, collect additional statistics to improve decisions in the optimizer.

Gerald: The N1QL optimizer is one of the joys of working on N1QL. As Keshav said, most of the SQL optimization techniques carry over to N1QL. The data may not have a static schema, but the query has an implicit schema, and the indexes have implicit schemas. That is, the query has predicates and other characteristics, and each index has keys and other characteristics. That is enough to keep an optimizer busy.

Working for a newer database vendor, you don’t have as much legacy, so you can innovate faster and more freely. What will be the next big thing in the database market?

Keshav: Our metric for innovation is the progress customers are trying to make in their business. So, we innovate within the constraints of a customer’s job. That helps us to innovate and measure success from the customer’s point of view.

We see customers deploying NoSQL databases for newer patterns like systems of engagement.

When you plan your vacation, you search a lot before you buy. You search for places, hotels, airlines, things-to-do. Then you compare costs and ratings before you make the final purchase. For a travel company, these searches require significant database infrastructure to support a high number of queries with low latencies at a very low cost.

We see customers deploying NoSQL databases to support a lot of the customer engagement and information requirement use cases. RDBMS is still used as the system of record for buy-sell-cancel-checkin-etc. operations, but will integrate with the systems-of-engagement databases to enhance customer experience.

Doing this effectively requires innovations in every area: data platform designs, scale out, query processing, index designs and manageability.

Gerald: I read Jeff Bezos’ latest annual letter to shareholders.  He says that the things that don’t change are more important than the things that do change. Databases should store your data reliably and give you answers and updates quickly. If database vendors continue to do that, we’ll be ok.

Using Kotlin’s Apply Function for Dynamic SQL with jOOQ

It was hard to limit ourselves to 10 Nice Examples of Writing SQL in Kotlin With jOOQ, recently, because the Kotlin language has many nice little features that really help a lot when working with Java libraries. We’ve talked about the nice with() stdlib function, which allows us to “import” a namespace for a local scope or closure:

with (AUTHOR) {
    ctx.select(FIRST_NAME, LAST_NAME)
       .from(AUTHOR)
       .where(ID.lt(5))
       .orderBy(ID)
       .fetch {
           println("${it[FIRST_NAME]} ${it[LAST_NAME]}")
       }
}

In the above example, the AUTHOR table is made available as the this reference in the closure following the with function, which works exactly like JavaScript’s with(). Everything in AUTHOR is available, without dereferencing it from AUTHOR.

Apply is very similar

A very similar feature is made available through apply(), although with different syntactic implications. Check out this Stack Overflow question for some details about with() vs. apply() in Kotlin. In short: with() returns the lambda’s result, whereas apply() returns the receiver itself, which is what makes it chainable on an existing fluent API.

When using jOOQ, apply() is most useful for dynamic SQL. Imagine you have local variables indicating whether some parts of a query should be added to the query:

val filtering = true;
val joining = true;

These boolean variables would be evaluated dynamically, of course. filtering specifies whether a dynamic filter / where clause is needed, whereas joining specifies whether an additional JOIN is required.

So, the following query will select authors, and:

  • if “filtering”, we’re selecting only author ID = 1
  • if “joining”, we’ll join the books table and count the number of books per author

Both of these predicates are independent. Enter the game: apply():

ctx.select(
      a.FIRST_NAME, 
      a.LAST_NAME, 
      if (joining) count() else value(""))
   .from(a)
   .apply { if (filtering) where(a.ID.eq(1)) }
   .apply { if (joining) join(b).on(a.ID.eq(b.AUTHOR_ID)) }
   .apply { if (joining) groupBy(a.FIRST_NAME, a.LAST_NAME) }
   .orderBy(a.ID)
   .fetch {
       println(it[a.FIRST_NAME] + " " + 
               it[a.LAST_NAME] +
               (if (joining) " " + it[count()] else ""))
   }

That’s neat! See, the jOOQ API doesn’t specify any apply() method / function, yet you can chain the apply() function to the jOOQ API as if it were natively supported.

Like with(), apply() makes a reference available to a closure as this, so it doesn’t have to be referenced explicitly anymore. Which means, we can write neat things like

   .apply { if (filtering) where(a.ID.eq(1)) }

Where a where() clause is added only if we’re filtering!

Of course, jOOQ (or any other query builder) lends itself to this kind of dynamic SQL, and it can be done in Java too:
https://www.jooq.org/doc/latest/manual/sql-building/dynamic-sql
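
For comparison, here is a minimal Java sketch of the filtering part only (the conditional WHERE clause), assuming the same generated tables and using jOOQ’s Condition type and DSL.trueCondition():

// Start from a neutral condition, then add predicates dynamically
Condition condition = DSL.trueCondition();

if (filtering)
    condition = condition.and(a.ID.eq(1));

ctx.select(a.FIRST_NAME, a.LAST_NAME)
   .from(a)
   .where(condition)
   .orderBy(a.ID)
   .fetch();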

But the Kotlin-specific fluent integration using apply() is exceptionally neat. Well done, Kotlin!

Side-note

This only works because the jOOQ DSL API of jOOQ 3.x is mutable and every operation returns the same this reference, as was kindly pointed out by Ilya Ryzhenkov.

In the future (e.g. version 4.0), we’re planning on making the jOOQ API more immutable – mutability is a historic legacy (although, often, it’s the desired behaviour for a query builder).

More nice Kotlin/jOOQ tricks in this article here.

How to Use SQL INTERSECT to Work Around SQL’s NULL Logic

ANOTHER SQL Post this week? I got nerd-sniped:

Oooooh, challenge accepted!

So, let’s assume we have a table T with columns (A, B, C) like this:

WITH t(a, b, c) AS (
  SELECT 'a', 'b', null FROM dual UNION ALL
  SELECT 'a', null, 'c' FROM dual UNION ALL
  SELECT 'a', 'b', 'c'  FROM dual
)
SELECT * FROM t

As expected, this yields:

A       B       C
-----------------
a       b
a               c
a       b       c

Truly exciting.

Now we want to find all those rows that “match” either ('a', 'b', NULL) or ('a', NULL, 'c'). Clearly, this should produce the first two rows, right?

A       B       C
-----------------
a       b
a               c

Yes. Now the canonical solution would be to tediously write out the entire predicate as such:

WITH t(a, b, c) AS (...)
SELECT * FROM t
WHERE (a = 'a' AND b = 'b' AND c IS NULL)
OR (a = 'a' AND b IS NULL AND c = 'c')

That’s really boring. Sure, we could have factored out the first, common predicate:

WITH t(a, b, c) AS (...)
SELECT * FROM t
WHERE a = 'a' AND (
     (b = 'b' AND c IS NULL)
  OR (b IS NULL AND c = 'c')
)

That’s certainly better from a performance perspective, but Rafael had a nifty idea. Let’s use row value expressions (tuples) in our predicates:

WITH t(a, b, c) AS (...)
SELECT * FROM t
WHERE (a, b, c) IN (('a', 'b', NULL), ('a', NULL, 'c'))

Unfortunately this doesn’t yield any results, because nothing is equal to NULL in SQL (not even NULL itself). The above query is the same as this one:

WITH t(a, b, c) AS (...)
SELECT * FROM t
WHERE (a = 'a' AND b = 'b' AND c = NULL /* oops */)
OR (a = 'a' AND b = NULL /* oops */ AND c = 'c')

D’oh.

Solutions

The lame one

The canonical solution then would be a really lame (but perfectly valid) one. Encode NULL to be some “impossible” string value. Rafael suggested yolo. Fair enough.

This works:

WITH t(a, b, c) AS (...)
SELECT * FROM t
WHERE (a, NVL(b, 'yolo'), NVL(c, 'yolo')) 
  IN (('a', 'b', 'yolo'), ('a', 'yolo', 'c'))

All we have to do now is to always remember the term yolo, which means “NULL, but not NULL, thank you SQL”.

The hipster one

But wait! SQL is painstakingly inconsistent when it comes to NULL. See, NULL really means UNKNOWN in three-valued logic, and this means we never know if SQL abides by its own rules.

Enter INTERSECT. Like UNION or EXCEPT (MINUS in Oracle), as well as SELECT DISTINCT, these set operations treat two NULL values as NOT DISTINCT. Yes, they’re not equal, but they’re also not distinct. Whatever. Just remember: That’s how it is 🙂

So, we can write this hipster solution to Rafael’s problem:

WITH t(a, b, c) AS (...)
SELECT *
FROM t
WHERE EXISTS (
  SELECT a, b, c FROM dual
  INTERSECT (
    SELECT 'a', 'b', null FROM dual
    UNION ALL
    SELECT 'a', null, 'c' FROM dual
  )
)

We create an intersection of the tuple (a, b, c), the left side of Rafael’s IN predicate, and the desired values on the right side of the IN predicate, and we’re done.
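
If you wanted to build the same predicate with jOOQ, a rough, untested sketch might look like this (assuming a generated table T with fields A, B, C, and static imports of select(), exists(), and inline() from DSL):

// Correlated EXISTS with an INTERSECT of the outer row and the desired tuples
ctx.selectFrom(T)
   .where(exists(
        select(T.A, T.B, T.C)
        .intersect(
            select(inline("a"), inline("b"), inline((String) null))
            .unionAll(
            select(inline("a"), inline((String) null), inline("c"))))))
   .fetch();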

Clearly less tedious than writing the original predicates, right? (We won’t look into performance this time)

Cheers, and a happy weekend.