Leaky Abstractions, or How to Bind Oracle DATE Correctly with Hibernate

We’ve recently published an article about how to bind the Oracle DATE type correctly in SQL / JDBC and jOOQ. That article got a bit of traction on Reddit, with an interesting remark by Vlad Mihalcea, who frequently blogs about Hibernate, JPA, transaction management, and connection pooling. Vlad pointed out that this problem can also be solved with Hibernate, and we’re going to look into that briefly.

What is the problem with Oracle DATE?

The problem presented in the previous article occurs when a query filters on Oracle DATE columns:

// rental_date is of type DATE and there's an index
PreparedStatement stmt = connection.prepareStatement(
    "SELECT * " + 
    "FROM rentals " +
    "WHERE rental_date > ? AND rental_date < ?");

… and we’re using java.sql.Timestamp for our bind values:

stmt.setTimestamp(1, start);
stmt.setTimestamp(2, end);

… then the execution plan turns very bad, with a FULL TABLE SCAN or perhaps an INDEX FULL SCAN, even though we should have gotten a regular INDEX RANGE SCAN.

-------------------------------------
| Id  | Operation          | Name   |
-------------------------------------
|   0 | SELECT STATEMENT   |        |
|*  1 |  FILTER            |        |
|*  2 |   TABLE ACCESS FULL| RENTAL |
-------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(:1<=:2)
   2 - filter((INTERNAL_FUNCTION("RENTAL_DATE")>=:1 AND 
              INTERNAL_FUNCTION("RENTAL_DATE")<=:2))

This is because the database column is widened from Oracle DATE to Oracle TIMESTAMP via this INTERNAL_FUNCTION(), rather than truncating the java.sql.Timestamp value to Oracle DATE.
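As a quick recap, the fix from that article was to make the conversion explicit in SQL, narrowing the bind variable to DATE rather than letting Oracle widen the column:

// Explicitly cast the java.sql.Timestamp bind values to Oracle DATE
PreparedStatement stmt = connection.prepareStatement(
    "SELECT * " + 
    "FROM rentals " +
    "WHERE rental_date > CAST(? AS DATE) " +
    "AND rental_date < CAST(? AS DATE)");

stmt.setTimestamp(1, start);
stmt.setTimestamp(2, end);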

More details about the problem itself can be found in the previous article.

Preventing this INTERNAL_FUNCTION() with Hibernate

You can fix this with Hibernate’s proprietary API, using an org.hibernate.usertype.UserType.

Assuming that we have the following entity:

import java.sql.Timestamp;

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class Rental {

    @Id
    @Column(name = "rental_id")
    public Long rentalId;

    @Column(name = "rental_date")
    public Timestamp rentalDate;
}

Now, let’s run the following query (I’m using the Hibernate API, not JPA, for this example):

List<Rental> rentals =
session.createQuery("from Rental r where r.rentalDate between :from and :to")
       .setParameter("from", Timestamp.valueOf("2000-01-01 00:00:00.0"))
       .setParameter("to", Timestamp.valueOf("2000-10-01 00:00:00.0"))
       .list();

The execution plan that we’re now getting is again inefficient:

-------------------------------------
| Id  | Operation          | Name   |
-------------------------------------
|   0 | SELECT STATEMENT   |        |
|*  1 |  FILTER            |        |
|*  2 |   TABLE ACCESS FULL| RENTAL |
-------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(:1<=:2)
   2 - filter((INTERNAL_FUNCTION("RENTAL0_"."RENTAL_DATE")>=:1 AND 
              INTERNAL_FUNCTION("RENTAL0_"."RENTAL_DATE")<=:2))

The solution is to add this @Type annotation to all relevant columns…

import java.sql.Timestamp;

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;

import org.hibernate.annotations.Type;
import org.hibernate.annotations.TypeDef;
import org.hibernate.annotations.TypeDefs;

@Entity
@TypeDefs(
    value = @TypeDef(
        name = "oracle_date", 
        typeClass = OracleDate.class
    )
)
public class Rental {

    @Id
    @Column(name = "rental_id")
    public Long rentalId;

    @Column(name = "rental_date")
    @Type(type = "oracle_date")
    public Timestamp rentalDate;
}

and register the following simplified UserType:

import java.io.Serializable;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.sql.Types;
import java.util.Objects;

import oracle.sql.DATE;

import org.hibernate.engine.spi.SessionImplementor;
import org.hibernate.usertype.UserType;

public class OracleDate implements UserType {

    @Override
    public int[] sqlTypes() {
        return new int[] { Types.TIMESTAMP };
    }

    @Override
    public Class<?> returnedClass() {
        return Timestamp.class;
    }

    @Override
    public Object nullSafeGet(
        ResultSet rs, 
        String[] names, 
        SessionImplementor session, 
        Object owner
    )
    throws SQLException {
        return rs.getTimestamp(names[0]);
    }

    @Override
    public void nullSafeSet(
        PreparedStatement st, 
        Object value, 
        int index, 
        SessionImplementor session
    )
    throws SQLException {
        // The magic is here: oracle.sql.DATE!
        // (with a null check, so NULL values can still be bound)
        if (value == null)
            st.setNull(index, Types.TIMESTAMP);
        else
            st.setObject(index, new DATE(value));
    }

    // The remaining method implementations are sketched below
}
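For the record, the remaining methods could be implemented along these lines (a minimal sketch against Hibernate 4.x’s UserType contract, not code from the original article; since java.sql.Timestamp is mutable, deepCopy() returns a defensive copy):

    @Override
    public boolean equals(Object x, Object y) {
        return Objects.equals(x, y);
    }

    @Override
    public int hashCode(Object x) {
        return Objects.hashCode(x);
    }

    @Override
    public Object deepCopy(Object value) {
        // Defensive copy, as java.sql.Timestamp is mutable
        return value == null 
            ? null 
            : new Timestamp(((Timestamp) value).getTime());
    }

    @Override
    public boolean isMutable() {
        return true;
    }

    @Override
    public Serializable disassemble(Object value) {
        return (Serializable) deepCopy(value);
    }

    @Override
    public Object assemble(Serializable cached, Object owner) {
        return deepCopy(cached);
    }

    @Override
    public Object replace(Object original, Object target, Object owner) {
        return deepCopy(original);
    }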

This will work because using the vendor-specific oracle.sql.DATE type has the same effect on your execution plan as explicitly casting the bind variable in your SQL statement, as shown in the previous article: CAST(? AS DATE). The execution plan is now the desired one:

------------------------------------------------------
| Id  | Operation                    | Name          |
------------------------------------------------------
|   0 | SELECT STATEMENT             |               |
|*  1 |  FILTER                      |               |
|   2 |   TABLE ACCESS BY INDEX ROWID| RENTAL        |
|*  3 |    INDEX RANGE SCAN          | IDX_RENTAL_UQ |
------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(:1<=:2)
   3 - access("RENTAL0_"."RENTAL_DATE">=:1 
          AND "RENTAL0_"."RENTAL_DATE"<=:2)

If you want to reproduce this issue, just query any Oracle DATE column with a java.sql.Timestamp bind value through JPA / Hibernate, and get the execution plan as indicated here.
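To get that execution plan, one option is DBMS_XPLAN, queried right after running the statement on the same connection (a sketch; it requires SELECT privileges on the underlying V$ views):

// Fetch the plan of this session's most recently executed cursor
try (Statement s = connection.createStatement();
     ResultSet rs = s.executeQuery(
         "SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL))")) {
    while (rs.next())
        System.out.println(rs.getString(1));
}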

Don’t forget to flush the shared pool and buffer cache to force the calculation of new plans between executions, because the generated SQL is the same each time.
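For example (assuming a user with ALTER SYSTEM privileges, and obviously not on a production instance):

// Force fresh execution plans between test runs
try (Statement s = connection.createStatement()) {
    s.execute("ALTER SYSTEM FLUSH SHARED_POOL");
    s.execute("ALTER SYSTEM FLUSH BUFFER_CACHE");
}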

Can I do it with JPA 2.1?

At first sight, it looks like the new converter feature in JPA 2.1 (which works just like jOOQ’s converter feature) should be able to do the trick. We should be able to write:

import java.sql.Timestamp;

import javax.persistence.AttributeConverter;
import javax.persistence.Converter;

import oracle.sql.DATE;

@Converter
public class OracleDateConverter 
implements AttributeConverter<Timestamp, DATE> {

    @Override
    public DATE convertToDatabaseColumn(Timestamp attribute) {
        return attribute == null ? null : new DATE(attribute);
    }

    @Override
    public Timestamp convertToEntityAttribute(DATE dbData) {
        return dbData == null ? null : dbData.timestampValue();
    }
}

This converter can then be used with our entity:

import java.sql.Timestamp;

import javax.persistence.Column;
import javax.persistence.Convert;
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class Rental {

    @Id
    @Column(name = "rental_id")
    public Long rentalId;

    @Column(name = "rental_date")
    @Convert(converter = OracleDateConverter.class)
    public Timestamp rentalDate;
}

But unfortunately, this doesn’t work out of the box as Hibernate 4.3.7 will think that you’re about to bind a variable of type VARBINARY:

// From org.hibernate.type.descriptor.sql.SqlTypeDescriptorRegistry

    public <X> ValueBinder<X> getBinder(JavaTypeDescriptor<X> javaTypeDescriptor) {
        if ( Serializable.class.isAssignableFrom( javaTypeDescriptor.getJavaTypeClass() ) ) {
            return VarbinaryTypeDescriptor.INSTANCE.getBinder( javaTypeDescriptor );
        }

        return new BasicBinder<X>( javaTypeDescriptor, this ) {
            @Override
            protected void doBind(PreparedStatement st, X value, int index, WrapperOptions options)
                    throws SQLException {
                st.setObject( index, value, jdbcTypeCode );
            }
        };
    }

Of course, we can probably somehow tweak this SqlTypeDescriptorRegistry to create our own “binder”, but then we’re back to Hibernate-specific API. This particular implementation is probably a “bug” on the Hibernate side, which has been registered here, for the record:

https://hibernate.atlassian.net/browse/HHH-9553

Conclusion

Abstractions are leaky on all levels, even if they are deemed a “standard” by the JCP. Standards are often a means of justifying an industry de-facto standard in hindsight (with some politics involved, of course). Let’s not forget that Hibernate didn’t start as a standard and massively revolutionised the way the standard-ish J2EE folks tended to think about persistence, 14 years ago.

In this case we have:

  • Oracle SQL, the actual implementation
  • The SQL standard, which specifies DATE quite differently from Oracle
  • ojdbc, which extends JDBC to allow for accessing Oracle features
  • JDBC, which follows the SQL standard with respect to temporal types
  • Hibernate, which offers proprietary API in order to access Oracle SQL and ojdbc features when binding variables
  • JPA, which again follows the SQL standard and JDBC with respect to temporal types
  • Your entity model

As you can see, the actual implementation (Oracle SQL) leaked up right into your own entity model, either via Hibernate’s UserType, or via JPA’s Converter. From then on, it will hopefully be shielded off from your application (until it won’t), allowing you to forget about this nasty little Oracle SQL detail.

Whichever way you turn it, if you want to solve real customer problems (i.e. the significant performance issue at hand), then you will need to resort to vendor-specific API from Oracle SQL, ojdbc, and Hibernate – instead of pretending that the SQL, JDBC, and JPA standards are the bottom line.

But that’s probably alright. For most projects, the resulting implementation lock-in is totally acceptable.

Are You Binding Your Oracle DATEs Correctly? I Bet You Aren’t

Oracle database has its ways. In my SQL talks at conferences, I love to confuse people with the following Oracle facts:

[Slide image: sql-trivia-1]

… and the answer is, of course:

[Slide image: sql-trivia-2]

Isn’t it horrible to make the empty string the same thing as NULL? Please, Oracle…

The only actually reasonable slide to follow the previous two is this one:

[Slide image: sql-trivia-3]

But the DATE type is much more subtle

So you think VARCHAR2 is weird?

Well, we all know that Oracle’s DATE is not really a date as in the SQL standard, or as in all the other databases, or as in java.sql.Date. Oracle’s DATE type is really a TIMESTAMP(0), i.e. a timestamp with a fractional second precision of zero.

Most legacy databases actually use DATE precisely for that, to store timestamps with no fractional seconds, such as:

  • 1970-01-01 00:00:00
  • 2000-02-20 20:00:20
  • 1337-01-01 13:37:00

So it’s always a safe bet to use the java.sql.Timestamp type in Java when you’re operating on Oracle DATE.
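For example, when fetching such a column (execute_at being a stand-in name here), java.sql.Timestamp retains the time part that java.sql.Date would drop:

// java.sql.Date is date-only, so rs.getDate() would lose the time part
Timestamp ts = rs.getTimestamp("execute_at"); // e.g. 2000-02-20 20:00:20.0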

But things can go very wrong when you bind such variables via JDBC as can be seen in this Stack Overflow question here. Let’s assume you have a range predicate like so:

// execute_at is of type DATE and there's an index
PreparedStatement stmt = connection.prepareStatement(
    "SELECT * " + 
    "FROM my_table " +
    "WHERE execute_at > ? AND execute_at < ?");

Now, naturally, we’d expect any index on execute_at to be a sensible choice for filtering records out of my_table, and that’s also what happens when we bind java.sql.Date:

stmt.setDate(1, start);
stmt.setDate(2, end);

The execution plan is optimal:

-----------------------------------------------------
| Id  | Operation                    | Name         |
-----------------------------------------------------
|   0 | SELECT STATEMENT             |              |
|*  1 |  FILTER                      |              |
|   2 |   TABLE ACCESS BY INDEX ROWID| my_table     |
|*  3 |    INDEX RANGE SCAN          | my_index     |
-----------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(:1<:2)
   3 - access("EXECUTE_AT">:1 AND "EXECUTE_AT"<:2)

But let’s check out what happens if we assume execute_at to be a date with hours/minutes/seconds, i.e. an Oracle DATE. We’ll be binding java.sql.Timestamp:

stmt.setTimestamp(1, start);
stmt.setTimestamp(2, end);

and the execution plan suddenly becomes very bad:

-----------------------------------------------------
| Id  | Operation                    | Name         |
-----------------------------------------------------
|   0 | SELECT STATEMENT             |              |
|*  1 |  FILTER                      |              |
|   2 |   TABLE ACCESS BY INDEX ROWID| my_table     |
|*  3 |    INDEX FULL SCAN           | my_index     |
-----------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(:1<:2)
   3 - filter((INTERNAL_FUNCTION("EXECUTE_AT")>:1 AND 
               INTERNAL_FUNCTION("EXECUTE_AT")<:2))

What’s this INTERNAL_FUNCTION()?

INTERNAL_FUNCTION() is Oracle’s way of silently converting values into other values in ways that are completely opaque. In fact, you cannot even place a function-based index on this pseudo-function to help the database choose a RANGE SCAN on it again. The following is not possible:

CREATE INDEX oracle_why_oh_why
  ON my_table(INTERNAL_FUNCTION(execute_at));

Nope. What the function really does is widen the less precise DATE type of the execute_at column to the more precise TIMESTAMP type of the bind variable. Just in case.

Why? Because the conversion has to preserve the predicate’s semantics at the range boundaries. Take the upper bound execute_at < ?, bound to the TIMESTAMP value 2000-01-01 00:00:00.5. A row with the DATE value 2000-01-01 00:00:00 must be included, as it is strictly less than the bound. But if Oracle truncated the bind variable’s fractional seconds to produce a DATE, that same row would suddenly be excluded. So Oracle widens the column instead.

Duh. But we don’t care, we’re only using Timestamp as a silly workaround in the first place! Now that we know this, you might think that adding a function-based index on an explicit conversion would work, but that’s not the case either:

CREATE INDEX nope
  ON my_table(CAST(execute_at AS TIMESTAMP));

Perhaps, if you magically found the exact right precision of the implicitly used TIMESTAMP(n) type it could work, but that all feels shaky, and besides, I don’t want a second index on that same column!

The solution

The solution given by user APC is actually very simple (and it sucks). Again, you could bind a java.sql.Date, but that would make you lose all hour/minute/second information. No, you have to explicitly cast the bind variable to DATE in the database. Exactly!

PreparedStatement stmt = connection.prepareStatement(
    "SELECT * " + 
    "FROM my_table " +
    "WHERE execute_at > CAST(? AS DATE) " +
    "AND execute_at < CAST(? AS DATE)");

You have to do that every time you bind a java.sql.Timestamp variable to an Oracle DATE value, at least when used in predicates.

How to implement that?

If you’re using JDBC directly, you’re pretty much doomed. Of course, you could run AWR reports to find the worst statements in production and fix only those, but chances are that you won’t be able to fix your statements so easily and deploy them so quickly, so you might want to get it right in advance. And of course, this is production. Tomorrow, another statement would suddenly pop up in your DBA’s reports.

If you’re using JPA / Hibernate, you can only hope that they got it right, because you probably won’t be able to fix those queries otherwise.

If you’re using jOOQ 3.5 or later, you can take advantage of jOOQ’s new custom type binding feature, which works out of the box with Oracle, and transparently renders that CAST(? AS DATE) for you, only on those columns that are really relevant.
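Short of the full custom binding, jOOQ’s plain SQL templating shows the idea in isolation (a sketch, where MY_TABLE and its EXECUTE_AT field stand in for your generated schema objects):

// Wrap the Timestamp bind values in an explicit CAST(? AS DATE)
Field<Timestamp> lower = DSL.field("CAST(? AS DATE)", Timestamp.class, start);
Field<Timestamp> upper = DSL.field("CAST(? AS DATE)", Timestamp.class, end);

DSL.using(connection, SQLDialect.ORACLE)
   .selectFrom(MY_TABLE)
   .where(MY_TABLE.EXECUTE_AT.gt(lower))
   .and(MY_TABLE.EXECUTE_AT.lt(upper))
   .fetch();

The custom type binding feature simply performs this rewriting for you, at every relevant bind site.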

Other databases

If you think that this is an Oracle issue, think again. Oracle is actually very lenient and nice to use when it comes to bind variables. Oracle can infer a lot of types for your bind variables, such that casting is almost never necessary. With other databases, that’s a different story. Read our article about RDBMS bind variable casting madness for more information.