Liquibase for DB Migrations

I have just discovered a very nice-looking tool for database migrations: Liquibase.

With Liquibase, you can model your DB increments as XML files that will translate to as many as 13 different databases. A sample DB increment (taken from the Liquibase manual):



  <changeSet id="..." author="...">
    <sql>
      ALTER TABLE ... ADD FOREIGN KEY (person_id) REFERENCES person (id);

      UPDATE ProductSettings SET property = 'vatCategory'
      WHERE property = 'vat';
    </sql>
  </changeSet>


…and so on. I guess it’s about time to contact the Liquibase folks and ask for cooperation! A fully integrated solution involving database schema management, database schema migration, and jOOQ’s source code generation seems to be the perfect toolset for a Java database developer.

jOOQ and Google Cloud SQL Example

This is all too simple. Here’s how you can create an easy jOOQ / Google Cloud SQL integration example:

  1. Sign up with Google App Engine
  2. Sign up with Google Cloud SQL
  3. Create a Google App Engine project (preferably with Eclipse)
  4. Add jOOQ to your project
  5. Add your generated schema to your project
  6. Done

Google Cloud SQL is actually a MySQL database, which you can also install locally on your machine for development purposes. For your jOOQ integration, this means that you will set up the code generation and execution just as if you were using a plain MySQL database. Simple, huh?
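The only part that differs between local development and production is the JDBC URL. A minimal sketch of the two variants, assuming the MySQL Connector/J driver locally and Google's App Engine RDBMS driver in production (the instance and database names below are hypothetical, and the Cloud SQL URL scheme is to be verified against Google's current docs):

```java
// Builds the JDBC URLs for local development vs. App Engine deployment.
// The rest of the jOOQ setup (code generation, query execution) is
// identical in both cases, since Cloud SQL behaves like plain MySQL.
class ConnectionUrls {
    static String localUrl(String database) {
        // Local development: a plain MySQL instance
        return "jdbc:mysql://localhost:3306/" + database;
    }

    static String cloudSqlUrl(String instance, String database) {
        // On App Engine, Google Cloud SQL uses its own JDBC URL scheme
        return "jdbc:google:rdbms://" + instance + "/" + database;
    }
}
```

Switching between the two is then just a matter of configuration, not code.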

See a simple example in action, here:

Check out the source code here (libs not included):

And some instructions about Google Cloud SQL:

Use jOOQ to transform java.sql.ResultSet

jOOQ has many uses. The fact that it provides a thin layer of additional abstraction over JDBC can be interesting enough for some users wanting to operate on their own java.sql.ResultSet objects. Let’s say, you prefer using JDBC directly, for query execution, rather than letting jOOQ execute queries for you:

PreparedStatement stmt = connection.prepareStatement(sql);
ResultSet rs = stmt.executeQuery();

// Instead of verbosely iterating over the above ResultSet, use jOOQ:
Factory factory = new Factory(connection, dialect);
Result<Record> result = factory.fetch(rs);

// And use jOOQ's API for more ease:
for (Record record : result) {
  System.out.println(record);
}

// Use jOOQ to export the result to XML or other formats:
String xml = result.formatXML();
String csv = result.formatCSV();
String json = result.formatJSON();

// Get a text version of the result for logging or console output
String text = result.format();

// Or use jOOQ to export the result into your own JPA-annotated entities
List<Value> values = result.into(Value.class);

// With Value being
public class Value {
  @Column(name = "ID")
  public Integer id;

  @Column(name = "NAME")
  public String name;
}

If you don’t want jOOQ to be your strategic SQL interface, you can still use it as a utility every now and then for your regular JDBC use.
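To see what formatCSV() and friends save you, here is a hand-rolled sketch of the same kind of formatting. It operates on already-fetched rows rather than a live ResultSet, so that the example stays self-contained (plain stdlib, no jOOQ):

```java
import java.util.List;

// Renders rows as CSV the manual way. jOOQ's result.formatCSV()
// replaces this kind of boilerplate (and adds proper quoting/escaping,
// which this sketch deliberately omits).
class CsvByHand {
    static String toCsv(List<String[]> rows) {
        StringBuilder sb = new StringBuilder();
        for (String[] row : rows) {
            sb.append(String.join(",", row)).append('\n');
        }
        return sb.toString();
    }
}
```

One line of jOOQ versus a loop, a builder, and edge cases you still haven't handled.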

Debug logging SQL with jOOQ

This nice little feature is not much advertised in the jOOQ manual, but it’s probably something that most of us developers want and love once we have it. When you put log4j or slf4j on the classpath along with jOOQ, jOOQ will use those frameworks for logging according to your configuration. While jOOQ is pretty quiet on ERROR/WARN/INFO level logging, it becomes quite a chatty tool on DEBUG/TRACE level.

DEBUG level logging example

Check out some sample DEBUG level output when executing this query:

Result<Record> result = create
  .select(TAuthor.LAST_NAME, create.count().as("c"))
  .from(TBook.T_BOOK)
  .join(TAuthor.T_AUTHOR).on(TBook.AUTHOR_ID.equal(TAuthor.ID))
  .where(TBook.TITLE.notEqual("1984"))
  .groupBy(TAuthor.LAST_NAME)
  .having(create.count().equal(2))
  .fetch();

And the output is:

Executing query          : select "t_author"."last_name", 
  count(*) as "c" from "t_book" join "t_author" on 
  "t_book"."author_id" = "t_author"."id" where 
  "t_book"."title" <> '1984' group by "t_author"."last_name"
  having count(*) = 2

Fetched result           : +---------+----+
                         : |last_name|   c|
                         : +---------+----+
                         : |Coelho   |   2|
                         : +---------+----+
Statement executed       : Total: 3.911ms

The query text is printed to the log output with inlined parameters (i.e. bind variables are replaced for logging). Then a text table representation of the result’s first 5 rows is printed before the query execution time.
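jOOQ does this inlining internally; the idea can be sketched with a tiny, hypothetical and much-simplified helper that substitutes bind values into the SQL string, for logging purposes only:

```java
import java.util.regex.Matcher;

// Replaces each ? placeholder with a literal rendering of its bind value.
// For logging only: the real statement keeps its bind variables!
class InlineBinds {
    static String inline(String sql, Object... binds) {
        for (Object bind : binds) {
            String literal = (bind instanceof String)
                ? "'" + bind + "'"
                : String.valueOf(bind);
            // "\\?" matches a literal question mark
            sql = sql.replaceFirst("\\?", Matcher.quoteReplacement(literal));
        }
        return sql;
    }
}
```

The real logic also has to deal with escaping, dates, binary data, and so on, but the principle is the same.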

TRACE level logging example

On trace level, you can see even more of jOOQ’s internals, although usually this is only needed when debugging jOOQ:

Executing query          : select "t_author"."last_name", 
  count(*) as "c" from "t_book" join "t_author" on 
  "t_book"."author_id" = "t_author"."id" where 
  "t_book"."title" <> '1984' group by "t_author"."last_name"
  having count(*) = 2

Preparing statement      : select "t_author"."last_name", 
  count(*) as "c" from "t_book" join "t_author" on 
  "t_book"."author_id" = "t_author"."id" where 
  "t_book"."title" <> cast(? as varchar) 
  group by "t_author"."last_name" 
  having count(*) = cast(? as int)

Statement prepared       : Total: 1.451ms, +0.435ms
Binding variable 1       : 1984 (class java.lang.String)
Binding variable 2       : 2 (class java.lang.Integer)
Variables bound          : Total: 1.808ms, +0.356ms
Attaching                : RecordImpl [ RecordImpl [values=[null, null]] ]
Fetching record          : RecordImpl [values=[Coelho, 2]]
Fetched result           : +---------+----+
                         : |last_name|   c|
                         : +---------+----+
                         : |Coelho   |   2|
                         : +---------+----+
Statement executed       : Total: 6.29ms, +4.481ms

In addition to the previously shown DEBUG-level output, you’ll also find the actual SQL statement that is executed, including bind variables and some additional type casts introduced by jOOQ. Time measurement is divided into 3 steps: statement preparation, variable binding, and statement execution. Also, all bind variables are logged, as is every fetched record. That’s a lot of logging. Be sure to turn it off when not needed!

Logging configuration

In future versions of jOOQ, logging will become more configurable.
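Until then, the standard log4j mechanisms apply. A minimal log4j.properties sketch that turns on jOOQ's DEBUG output (assuming jOOQ logs under its org.jooq package namespace, and that a console appender is configured elsewhere):

```
log4j.logger.org.jooq=DEBUG
```

Set the level back to INFO or higher for production.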

Cloud Fever now also at Sybase

After SQL Server (SQL Azure) and MySQL (Google Cloud SQL), there is now also a SQL Anywhere database available in the cloud:

It’s called Sybase SQL Anywhere OnDemand, code-named Fuji. I guess the connotation is that your data might as well be relocated to Fuji. Or your DBA might as well work from Fuji. Who knows ;-)

I don’t know where to start adding integration tests for jOOQ with all those cloud-based SQL solutions. Anyway, exciting times are coming!

What are procedures and functions after all?

Many RDBMS support the concept of “routines”, usually calling them procedures and/or functions. These concepts have been around in programming languages for a while, also outside of databases. Famous languages distinguishing procedures from functions are:

  • Ada
  • Pascal
  • etc…

The general distinction between (stored) procedures and (stored) functions can be summarised like this:

Procedures:

  • Are called using JDBC CallableStatement
  • Have no return value
  • Usually support OUT parameters

Functions:

  • Can be used in SQL statements
  • Have a return value
  • Usually don’t support OUT parameters

But there are exceptions to these rules:

  • DB2, H2, and HSQLDB don’t allow for JDBC escape syntax when calling functions. Functions must be used in a SELECT statement
  • H2 only knows functions (without OUT parameters)
  • Oracle functions may have OUT parameters
  • Oracle knows functions that mustn’t be used in SQL statements for transactional reasons
  • Postgres only knows functions (with all features combined). OUT parameters can also be interpreted as return values, which is quite elegant/freaky, depending on your taste
  • The Sybase jconn3 JDBC driver doesn’t handle null values correctly when using the JDBC escape syntax on functions

In general, it can be said that the field of routines (procedures / functions) is far from being standardised in modern RDBMS. Every database has its ways, and JDBC provides only little abstraction over the great variety of procedure / function implementations, especially when advanced data types such as cursors / UDTs / arrays are involved.
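In JDBC terms, the distinction shows up in the escape syntax. A sketch with hypothetical routine names (the actual CallableStatement calls are shown only as comments, since they would need a live connection):

```java
// The JDBC escape syntax differs between procedures and functions
class RoutineCalls {
    // Procedure: no return value, OUT parameters registered explicitly
    static final String PROCEDURE_CALL = "{ call my_procedure(?, ?) }";

    // Function: the return value is registered as the first parameter
    static final String FUNCTION_CALL = "{ ? = call my_function(?) }";

    // Some databases (DB2, H2, HSQLDB) require functions in a SELECT instead
    static final String FUNCTION_IN_SELECT = "select my_function(?) from ...";

    // With a live connection, a procedure call would look roughly like this:
    // CallableStatement stmt = connection.prepareCall(PROCEDURE_CALL);
    // stmt.setInt(1, 42);                          // IN parameter
    // stmt.registerOutParameter(2, Types.VARCHAR); // OUT parameter
    // stmt.execute();
    // String out = stmt.getString(2);
}
```

This is exactly the kind of per-database variance that jOOQ's generated routine stubs hide from you.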

SQL Trouble with dummy tables

As I’m mostly using Oracle for work projects, the concept of the DUAL dummy table has become quite intuitive. I hardly ever think about the days when I was playing around with that table to find out its purpose (e.g. writing into it when DUAL was still a physical object, and thus killing the whole database…)

In many other RDBMS, there is no need for dummy tables, as you can issue statements like these:

SELECT 1 + 1;

These are the RDBMS where the above is generally possible:

  • H2
  • MySQL
  • Ingres
  • Postgres
  • SQLite
  • SQL Server
  • Sybase ASE

In other RDBMS, dummy tables are required, like in Oracle. Hence, you’ll need to write things like these:

SELECT 1 + 1 FROM DUAL;
These are the RDBMS and their respective dummy tables:

  • H2: Optionally supports DUAL
  • MySQL: Optionally supports DUAL
  • Oracle: DUAL
  • Sybase SQL Anywhere: SYS.DUMMY
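What jOOQ does behind the scenes can be imagined roughly like the following much-simplified sketch (the real implementation is per-dialect rendering logic, not a lookup map):

```java
import java.util.Map;

// Appends the dialect's dummy-table clause where one is needed
class DummyTableSketch {
    static final Map<String, String> DUMMY = Map.of(
        "ORACLE", " from dual",
        "SQLANYWHERE", " from SYS.DUMMY",
        "MYSQL", "",      // "from dual" is optional in MySQL (and H2)
        "POSTGRES", "");  // no dummy table needed at all

    static String render(String selectList, String dialect) {
        return "select " + selectList + DUMMY.getOrDefault(dialect, "");
    }
}
```

Client code always writes the simple form; the dialect decides what SQL actually goes over the wire.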

Trouble when avoiding dummy tables

While some might prefer avoiding dummy tables in H2 or MySQL, as the SQL becomes more readable, it is worth mentioning that you can run into trouble when doing so:

MySQL’s avoiding DUAL

Clauses such as the following one seem to cause trouble in MySQL in some contexts:

-- this can cause trouble
exists (select 1 where 1 = 0)

-- this will work
exists (select 1 from dual where 1 = 0)

Other similar clauses exist.

Ingres has no DUAL, but would actually need it

In Ingres, you cannot have a WHERE, GROUP BY or HAVING clause without a FROM clause. Without dummy tables, you will have to create your own dummy subquery:

-- this doesn't work in Ingres
select 1 where 1 = 1

-- this is a workaround with a nested select
select 1 from (select 1 as dummy) as dual where 1 = 1
In general, jOOQ will hide these facts from client code, allowing you to always use the simple form without a dummy table. You don’t have to worry about overly restrictive syntactic rules in some SQL dialects.