Dogfooding in Product Development

Dogfooding, or eating your own dog food, is a practice that all product developers should apply all the time. According to Wikipedia:

Dogfooding occurs when an organization uses its own product. This can be a way for an organization to test its products in real-world usage. Hence dogfooding can act as quality control, and eventually a kind of testimonial advertising. Once in the market, dogfooding demonstrates confidence in the developers’ own products.

I’ve recently started delivering this talk about API design at conferences, where I mentioned dogfooding as an excellent approach to make sure the user experience of your API is great:

The more you use your own API, the better it gets from a UX and usability perspective.

Dogfooding via tests

We inevitably do a lot of dogfooding ourselves, as we write tons and tons of tests for jOOQ to make sure it works correctly on all of the currently 26 RDBMS that we support.

Writing a test for a new API means we have to use that API for the first time, immediately. This helps discover the first usability problems. But there are even better ways:

Dogfooding via significant new functionality

Another great way to dogfood is to create significant new features, for example the new schema diff tool that jOOQ 3.13 will ship with (#9425). It is part of a bigger project that we call DDL interpretation, where we start implementing the DDL part of a database by maintaining an up-to-date database schema based on a stream of DDL statements. This will be part of a variety of improvements and value propositions in the area of database change management / SQL migrations. Recently, we’ve published a post about an improved Liquibase integration, which goes in a similar direction.

The schema diff tool is something many database products offer. For us, it is relatively simple to implement, as we:

  • Support a variety of schema meta representations
  • Support a lot of DDL syntax
  • Support exporting schema meta representations as DDL

We’ve always had a type called org.jooq.Meta. That type represents your database. Historically, it just gave jOOQ-style access to JDBC’s java.sql.DatabaseMetaData. But over time, we also started supporting exposing generated jOOQ code, or XML files as org.jooq.Meta.

With jOOQ 3.13, we’ll support interpreting arbitrary DDL (parsed, or manually created using the jOOQ API), as well as Liquibase XML files, to create such an org.jooq.Meta representation, which can then be turned back into DDL using the new Meta.ddl() method:

System.out.println(
  ctx.meta("")
     .apply("create table t (i int)")
     .apply("alter table t add j int")
     .apply("alter table t alter i set not null")
     .apply("alter table t add primary key (i)")
);

When we run the above program, we’re getting the following output:

create table t(
  i int not null,
  j int null,
  primary key (i)
);

That’s already cool – we can create a snapshot of our schema at any point within a set of database migration scripts. Can we also do the inverse? We can, with the new diff tool:

System.out.println(
  ctx.meta(
    "create table t (i int)"
  ).diff(ctx.meta(
    "create table t ("
  + "i int not null, "
  + "j int null, "
  + "primary key (i))"
  ))
);

The output generated from this is:

alter table t alter i set not null;
alter table t add j integer null;
alter table t add constraint primary key (i);
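
A nice property of this is the round trip (a sketch only, assuming diff() returns the Queries type that Meta.apply(Queries) accepts, as discussed further below): applying the generated diff to the source schema should describe the target schema again:

Meta source = ctx.meta("create table t (i int)");
Meta target = ctx.meta(
    "create table t (i int not null, j int null, primary key (i))");

// Assumption: diff() returns Queries, which Meta.apply(Queries) accepts
Queries migration = source.diff(target);

// Applying the migration to the source should yield the target schema's DDL again
System.out.println(source.apply(migration).ddl());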

This is part of an exciting new set of DDL / migration features that we can hardly wait to publish with jOOQ 3.13 (around Q1 2020), as we believe it will greatly help with your existing database change management solution – perhaps as a Flyway or Liquibase plugin.

What does this have to do with dogfooding?

While developing this feature, we’ve discovered numerous missing features, which we also implemented for jOOQ 3.13:

  • #7752: Our current meta model for sequences does not remember flags like MAXVALUE
  • #9428: Meta.toString() could just call Meta.ddl(), the DDL export. This is what I’ve shown above, very useful when debugging with small schemas!
  • #9433: Meta.equals() and Meta.hashCode() were not yet implemented, as this didn’t make sense, historically.
  • #9434: Our current DDL export should support reordering objects in alphabetical order for the export. This is useful for equality checks, text-based diffs, etc.
  • #9437: We don’t support ALTER SEQUENCE statements that allow for modifying sequence properties like MAXVALUE yet.
  • #9438: The current emulation of ALTER SEQUENCE .. RESTART hard-codes restarting the sequence at 1, when in fact it could be some other START WITH value.
  • #9440: We’ll need a synthetic syntax to drop unnamed foreign keys.

This is just a short extract of things we’ve implemented this week based on our discoveries of what’s missing when implementing the “big” feature: The Meta.diff(Meta) method. The diff() method will be the one highlighted in the release notes of 3.13. But the little things above are what ultimately makes jOOQ so useful – the many little things that were discovered while dogfooding and that help everyone, not just the users that use the diff() method.

Dogfooding via blogging and documentation

But perhaps even better than implementing new features is blogging about, or documenting, the API. When blogging or documenting, the vendor / maintainer has to put themselves into the position of the user, in particular the first-time user of a specific API.

Imagine: after writing all this code that I, as its author, love, I have to reset my experience and pretend I don’t know what to expect. Then, sit down and write a really good, simple example using the API that I wrote.

The first example I came up with was not like this (as that API didn’t exist yet):

System.out.println(
  ctx.meta("")
     .apply("create table t (i int)")
     .apply("alter table t add j int")
     .apply("alter table t alter i set not null")
     .apply("alter table t add primary key (i)")
);

Instead, it was like this (which works the same way):

System.out.println(
  ctx.meta("create table t (i int);\n"
         + "alter table t add j int;\n"
         + "alter table t alter i set not null;\n"
         + "alter table t add primary key (i);")
);

I thought to myself, do I really need all that string concatenation? I already have this Meta.apply(Queries) method, which would add even more value. But then, my code would look like this:

System.out.println(
  ctx.meta("")
     .apply(queries(createTable("t").column("i", INTEGER)))
     .apply(queries(alterTable("t").add("i", INTEGER)))
     .apply(queries(alterTable("t").alter("i").setNotNull()))
     .apply(queries(alterTable("t").add(primaryKey("i")))
);

That would be cool, too. I could show that our DSL API and our parser can do the same things. But I didn’t want to show the DSL API in this blog post. It would only distract from my main point. I could, of course, parse the queries explicitly:

System.out.println(
  ctx.meta("")
     .apply(ctx.parse("create table t (i int)"))
     .apply(ctx.parse("alter table t add j int"))
     .apply(ctx.parse("alter table t alter i set not null"))
     .apply(ctx.parse("alter table t add primary key (i)"))
);

But why not just add convenience API that does this for me?

System.out.println(
  ctx.meta("")
     .apply("create table t (i int)")
     .apply("alter table t add j int")
     .apply("alter table t alter i set not null")
     .apply("alter table t add primary key (i)")
);

Where Meta.apply(String) looks like this:

public final Meta apply(String diff) {
  return apply(dsl().parser().parse(diff));
}

That’s mere convenience. It is not needed. But it is very useful:

  • For tests
  • For blog posts
  • For documentation
  • For new users
  • For simple applications

The more thorough, more powerful API that accepts Queries (which wraps a bunch of parsed or hand-constructed jOOQ Query objects) is the real feature here. But convenience helps the user very much!

I would not have discovered this requirement without dogfooding.

Convenience

A much underrated tweet by Brian Goetz is this one:

APIs need a ton of abstractions. The JDK Collector API is a very good example. In Java and in SQL, in order to write a custom aggregate function, you need these four operations:

  • Supplier<A>: A supplier that provides an empty, intermediary data structure to aggregate into
  • BiConsumer<A, T>: An accumulator that accumulates new values from the stream into our intermediary data structure.
  • BinaryOperator<A>: A combiner that combines two intermediary data structures. This is used for parallel streams only.
  • Function<A, R>: The finisher function that extracts the result from the intermediary data structure.

Writing a Collector from scratch is super annoying and “low level”. Using convenience API is much better, which is why the JDK has Collectors, jOOλ has Agg, and other Collector libraries provide some basic building blocks. The Stream API does not have to know about all of these building blocks; it just knows a single, abstract type (which serves the API). But we users don’t want only a single abstraction. We want those convenient building blocks (which serves the user).
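
To make the difference concrete, here is a quick sketch using plain JDK API (java.util.stream): the equivalent of Collectors.toList(), once assembled by hand from its four building blocks, and once via the convenience method:

// "Low level": supplier, accumulator, combiner, and finisher spelled out by hand
List<String> manual = Stream.of("a", "b", "c").collect(Collector.of(
    ArrayList::new,                       // Supplier<A>: the intermediary data structure
    List::add,                            // BiConsumer<A, T>: accumulate one value
    (l, r) -> { l.addAll(r); return l; }, // BinaryOperator<A>: combine two intermediaries
    l -> l                                // Function<A, R>: extract the result
));

// Convenience: the JDK already ships this building block
List<String> convenient = Stream.of("a", "b", "c").collect(Collectors.toList());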

Convenience is never a requirement, but it is always a big plus, in APIs as in languages.

Have you noticed this new method on InputStream, which has been added in Java 9?

public long transferTo(OutputStream out) throws IOException {
    Objects.requireNonNull(out, "out");
    long transferred = 0;
    byte[] buffer = new byte[DEFAULT_BUFFER_SIZE];
    int read;
    while ((read = this.read(buffer, 0, DEFAULT_BUFFER_SIZE)) >= 0) {
        out.write(buffer, 0, read);
        transferred += read;
    }
    return transferred;
}

How many times have we written this kind of code? N times (and a big N, too)

How many times have we enjoyed writing this code? 1 time

How many times too many have we written this code? N - 1 times.
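
With Java 9+, all of that collapses into a single call. A trivial usage sketch (the file names are made up):

// Copy one stream into another without the manual buffer loop
try (InputStream in = Files.newInputStream(Paths.get("source.bin"));
     OutputStream out = Files.newOutputStream(Paths.get("target.bin"))) {
    long bytes = in.transferTo(out);
    System.out.println("Transferred " + bytes + " bytes");
}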

Dogfooding is a very good way to make sure that this kind of convenience, which our users love us for, makes it into products much earlier. Because we, the vendors, are the first users, and we hate writing this silly code all the time. And who is a better first user to implement cool new features for, if not ourselves?

How to Simulate a Liquibase Migration using H2

This post is part of a new blog series about database migrations, which will cover a variety of database change management topics.

In the near future, we’ll look much more into these topics, hoping to add more value to our users’ existing Flyway, Liquibase, and other integrations where the migration tools can profit a lot from jOOQ’s most recent features, including the parser.

Embracing Liquibase in the next jOOQ versions

In jOOQ 3.13, we’re going to offer a better integration for those users who use Liquibase for database change management. Our existing DDLDatabase parses, translates, and simulates a DDL-script-based migration against an in-memory H2 database instance.

This is using our built-in SQL translation functionality (test it on our website here: https://www.jooq.org/translate) to translate this Oracle SQL:

create table t (v varchar2(100))

to this H2 SQL:

create table t (v varchar(100))

That’s just a trivial example. More sophisticated translations are possible too. The main purpose of doing this has been, historically, to offer an “offline” jOOQ source code generation step that does not require connecting to an actual Oracle database instance to reverse engineer your schema, which you can represent in the form of SQL migration scripts that you’d typically run with something like Flyway.
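
The same translation can also be done programmatically, using the parser and a target dialect (a sketch; dialect-specific parser settings are left aside):

// Parse the Oracle-flavoured DDL and render it in the H2 dialect
DSLContext h2 = DSL.using(SQLDialect.H2);
Query query = h2.parser().parseQuery("create table t (v varchar2(100))");

System.out.println(h2.render(query));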

In the future, we’ll add more features around this. One thing we’ve been thinking about has been to allow for translating Flyway migrations to other dialects, to make them vendor agnostic.

Simulating Liquibase migrations

Flyway and Liquibase work in quite a similar fashion, with Liquibase offering an additional abstraction layer over the SQL language. While pure SQL migrations are also possible with Liquibase’s SQL change, Liquibase also offers a set of mostly low level DDL command abstractions, such as the ADD COLUMN change.

Using their XML based DSL, you don’t have to remember whether the command is called:

alter table t add i int;
alter table t add column i int;

Or whatever creative syntax your SQL vendor came up with. Notice that, again, you can achieve the same thing with jOOQ’s translator, which you can use as an API or as a command line interface.

Now, assuming you have the following Liquibase database change log:

<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog
    xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
    http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.8.xsd">
    <changeSet author="authorName" id="changelog-1.0">
        <createTable tableName="TAB">
            <column name="COL" type="VARCHAR(10)">
                <constraints nullable="true"
                    primaryKey="false" unique="false" />
            </column>
        </createTable>
    </changeSet>
</databaseChangeLog>

And now, you would like to check if it is correct, i.e. if you can ship it to production as it is. Just create an in-memory H2 connection, and run the file against it:

Properties info = new Properties();
info.put("user", "sa");
info.put("password", "");

try (Connection con = new org.h2.Driver()
        .connect("jdbc:h2:mem:db", info)) {
    Database database = DatabaseFactory.getInstance()
      .findCorrectDatabaseImplementation(new JdbcConnection(con));
    Liquibase liquibase = new Liquibase("/path/to/liquibase.xml", 
      new FileSystemResourceAccessor(), database);
    liquibase.update("");
}

To check, just repeat the table definition a second time:

<createTable tableName="TAB">
    <column name="COL" type="VARCHAR(10)">
        <constraints nullable="true"
            primaryKey="false" unique="false" />
    </column>
</createTable>
<createTable tableName="TAB">
    <column name="COL" type="VARCHAR(10)">
        <constraints nullable="true"
            primaryKey="false" unique="false" />
    </column>
</createTable>

and see how H2 complains about the statement:

Table "TAB" already exists; SQL statement:
CREATE TABLE PUBLIC.TAB (COL CLOB) [42101-200] [Failed SQL: (42101) CREATE TABLE PUBLIC.TAB (COL CLOB)]
	at liquibase.changelog.ChangeSet.execute(ChangeSet.java:646)
	at liquibase.changelog.visitor.UpdateVisitor.visit(UpdateVisitor.java:53)
	at liquibase.changelog.ChangeLogIterator.run(ChangeLogIterator.java:83)
	at liquibase.Liquibase.update(Liquibase.java:202)
	at liquibase.Liquibase.update(Liquibase.java:179)
	at liquibase.Liquibase.update(Liquibase.java:175)
	at liquibase.Liquibase.update(Liquibase.java:168)
	at LB.main(LB.java:21)

Using jOOQ’s tool chain to create schema snapshots

Want to export the schema again as DDL? Very easy, just use jOOQ for the task. For example, you can write the following Java code right after the above Liquibase migration simulation:

DSLContext ctx = DSL.using(connection);
ctx.ddl(ctx.meta().getSchemas("PUBLIC").get(0))
    .forEach(System.out::println);

And you get, in H2 syntax (omitting the Liquibase tables):

create table "PUBLIC"."TAB"(
  "COL" varchar(10) null
)

Want to export this to Oracle instead, using a custom schema rather than H2’s PUBLIC? Write this:

DSLContext h2 = DSL.using(connection);
DSLContext oracle = DSL.using(ORACLE, new Settings()
   .withRenderMapping(new RenderMapping()
       .withSchemata(new MappedSchema()
           .withInput("PUBLIC")
           .withOutput("MY_SCHEMA"))));
oracle.ddl(h2.meta().getSchemas("PUBLIC").get(0))
    .forEach(System.out::println);

And now, we’re getting:

create table "MY_SCHEMA"."TAB"(
  "COL" varchar2(10) null
)

(notice, this works once #9384 is fixed).

Prefer an XML representation of your schema snapshot? Easy with jOOQ, as well:

DSLContext ctx = DSL.using(connection);
JAXB.marshal(ctx.informationSchema(
   ctx.meta().getSchemas("PUBLIC").get(0)
), System.out);

The output being:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<information_schema xmlns="http://www.jooq.org/xsd/jooq-meta-3.12.0.xsd">
    <schemata>
        <schema>
            <schema_name>PUBLIC</schema_name>
            <comment></comment>
        </schema>
    </schemata>
    <tables>
        <table>
            <table_schema>PUBLIC</table_schema>
            <table_name>TAB</table_name>
            <comment></comment>
        </table>
    </tables>
    <columns>
        <column>
            <table_schema>PUBLIC</table_schema>
            <table_name>TAB</table_name>
            <column_name>COL</column_name>
            <data_type>varchar</data_type>
            <character_maximum_length>10</character_maximum_length>
            <ordinal_position>1</ordinal_position>
            <is_nullable>true</is_nullable>
            <column_default></column_default>
            <comment></comment>
        </column>
    </columns>
</information_schema>

A useful file that can be put under version control along with your current set of database change sets, to validate your schema, or to export to other tools that can reverse engineer DDL or XML.

Using Liquibase migrations as jOOQ code generation input

Just like the pre-existing DDLDatabase, starting from jOOQ 3.13, you can simulate your Liquibase migration in-memory against an H2 database to reverse engineer it again for jOOQ’s code generator. This is documented here.

The relevant code generation configuration looks like this (standalone or with Maven):

<configuration>
  <generator>
    <database>
      <name>org.jooq.meta.extensions.liquibase.LiquibaseDatabase</name>
      <properties>
        <property>
          <key>scripts</key>
          <value>src/main/resources/database.xml</value>
        </property>
      </properties>
    </database>
  </generator>
</configuration>

That’s it! As a bonus, you won’t have to manually simulate your migration anymore, as jOOQ’s LiquibaseDatabase already does it for you, behind the scenes, using the same three lines of code:

Database database = DatabaseFactory.getInstance()
  .findCorrectDatabaseImplementation(new JdbcConnection(con));
Liquibase liquibase = new Liquibase("/path/to/liquibase.xml", 
  new FileSystemResourceAccessor(), database);
liquibase.update("");

Stay tuned for more goodies on the topic of SQL migrations and database change management.

Calling an Oracle Function with PL/SQL BOOLEAN Type from SQL

One of the most wanted features in the Oracle database is the BOOLEAN type. The SQL standard specified it a while ago, and RDBMS like PostgreSQL show how powerful it can be, e.g. when using the EVERY() aggregate function.

The PL/SQL language already has support for boolean types. We can write:

CREATE OR REPLACE FUNCTION number_to_boolean (i NUMBER) 
RETURN BOOLEAN 
IS
BEGIN
  RETURN NOT i = 0;
END number_to_boolean;
/

CREATE OR REPLACE FUNCTION boolean_to_number (b BOOLEAN) 
RETURN NUMBER 
IS
BEGIN
  RETURN CASE WHEN b THEN 1 WHEN NOT b THEN 0 END;
END boolean_to_number;
/

From PL/SQL, we can now easily call the above functions:

SET SERVEROUTPUT ON
BEGIN
  IF number_to_boolean(1) THEN
    dbms_output.put_line('1 is true');
  END IF;
  IF NOT number_to_boolean(0) THEN
    dbms_output.put_line('0 is false');
  END IF;
  IF number_to_boolean(NULL) IS NULL THEN
    dbms_output.put_line('null is null');
  END IF;
END;
/

The above prints

1 is true
0 is false
null is null

But we cannot do the same from the SQL engine:

SELECT 
  number_to_boolean(1), 
  number_to_boolean(0), 
  number_to_boolean(null) 
FROM dual;

This yields:

ORA-00902: invalid datatype

Eventually, Oracle will fix this by supporting boolean types in the SQL engine (show your love to Oracle here).

The WITH clause

Until then, we can make use of a nice workaround using new functionality from Oracle 12c. We can declare functions in the WITH clause! Run this:

WITH
  FUNCTION f RETURN NUMBER IS 
  BEGIN 
    RETURN 1; 
  END f;
SELECT f
FROM dual;

You’ll get

 F
---
 1

That’s wonderful, and what’s even better, this part of the WITH clause is written in PL/SQL, where we can use the BOOLEAN type again. So we can define bridge functions for each function call. Instead of this:

SELECT 
  number_to_boolean(1), 
  number_to_boolean(0), 
  number_to_boolean(null) 
FROM dual;

We can write this:

WITH
  FUNCTION number_to_boolean_(i NUMBER)
  RETURN NUMBER
  IS
    b BOOLEAN;
  BEGIN
    -- Actual function call
    b := number_to_boolean(i);
    
    -- Translation to numeric result
    RETURN CASE b WHEN TRUE THEN 1 WHEN FALSE THEN 0 END;
  END number_to_boolean_;
SELECT 
  number_to_boolean_(1) AS a, 
  number_to_boolean_(0) AS b, 
  number_to_boolean_(null) AS c
FROM dual;

This now yields:

 A   B   C
-------------
 1   0   null

Of course, we don’t get an actual boolean type back in the result set, as the SQL engine cannot process that. But if you’re calling this function from JDBC, 1/0/null can be translated transparently to true/false/null.
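
From plain JDBC, mapping that 1/0/null result back to a Boolean is a matter of checking wasNull() (a sketch, given a Statement stmt that runs the query above):

try (ResultSet rs = stmt.executeQuery()) {
    while (rs.next()) {
        int i = rs.getInt(1);

        // getInt() returns 0 for SQL NULL, so wasNull() preserves three-valued logic:
        // 1 -> true, 0 -> false, NULL -> null
        Boolean b = rs.wasNull() ? null : i != 0;
        System.out.println(b);
    }
}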

It also works for chaining. Instead of the following, which still yields ORA-00902:

SELECT 
  boolean_to_number(number_to_boolean(1)), 
  boolean_to_number(number_to_boolean(0)), 
  boolean_to_number(number_to_boolean(null))
FROM dual;

We can write this:

WITH
  FUNCTION number_to_boolean_(i NUMBER)
  RETURN NUMBER
  IS
    b BOOLEAN;
  BEGIN
    -- Actual function call
    b := number_to_boolean(i);
    
    -- Translation to numeric result
    RETURN CASE b WHEN TRUE THEN 1 WHEN FALSE THEN 0 END;
  END number_to_boolean_;
  
  FUNCTION boolean_to_number_(b NUMBER)
  RETURN NUMBER
  IS
  BEGIN
    -- Actual function call
    RETURN boolean_to_number(NOT b = 0);
  END boolean_to_number_;
SELECT 
  boolean_to_number_(number_to_boolean_(1)) AS a, 
  boolean_to_number_(number_to_boolean_(0)) AS b, 
  boolean_to_number_(number_to_boolean_(null)) AS c
FROM dual;

… which again yields

 A   B   C
-------------
 1   0   null

And now, the 1/0/null integers are the actual desired result types.

This technique can be automated for any type of PL/SQL function that accepts and/or returns a PL/SQL BOOLEAN type, or even for functions that accept %ROWTYPE parameters, which we’ll work into jOOQ in the near future.

A more real world example can be seen in this Stack Overflow question.

jOOQ 3.12 support

In jOOQ 3.12, we will add native support for using such functions in SQL through #8522. We have already supported PL/SQL boolean types in standalone procedure calls since jOOQ 3.8. With the next version, we can call a function like this one:

FUNCTION f_bool (i BOOLEAN) RETURN BOOLEAN;

From anywhere within a jOOQ statement, e.g.

Record1<Integer> r =
create()
    .select(one())
    .where(PlsObjects.fBool(false))
    .fetchOne();

assertNull(r);

When the above is called, the following SQL statement is generated by jOOQ 3.12, behind the scenes:

with
  function "F_BOOL_"(I integer)
  return integer
  is
    "r" boolean;
  begin
    "r" := "TEST"."PLS_OBJECTS"."F_BOOL"(not I = 0);
    return case when "r" then 1 when not "r" then 0 end;
  end "F_BOOL_";
  select 1
from dual
where (F_BOOL_(0) = 1)

Notice how the boolean expression now reads like a true boolean / predicate?

How to Create a Good MCVE (Minimal Complete Verifiable Example)

Reporting a bug takes time, and trust me, every vendor appreciates your reporting of a bug! Your voice counts as many voices, because there are numerous other customers of the product who do not want to, or cannot, take the time to report the same bug.

So, first off, thanks for taking that time and reaching out to us vendors. We really appreciate your help!

Having said so, reporting a bug can be a tedious exercise. For both parties, the one reporting the bug and the one receiving it. There are extremely simple bugs, such as typos in documentation. They can be easily pointed to and just as easily fixed. There are much trickier bugs, such as concurrency issues in complicated project setups. They take time to reproduce. This is why an MCVE (Minimal Complete Verifiable Example) is so useful. The linked Stack Overflow page explains why it is so useful for answering questions. But the same arguments apply when reporting a bug.

And that’s where the tricky part starts. It isn’t easy to create an example that is:

  • Minimal: Your real-world application code is huge. You cannot dump the entirety of it on the vendor for various reasons, and the vendor cannot look through it all to try to reproduce the problem. So, the problem has to be isolated into an example of minimal scope, with no unnecessary additional functionality. That’s hard too, because your project was set up months or years ago, and you don’t want to spend too much time setting up a new one.
  • Complete: When reducing the problem to a minimal one, we’re tempted to just describe it in prose. But that can be difficult as well, because prose is hardly complete. It’s difficult to describe a problem when it would be quite easy to show the code. But that brings us back to the minimal part. We want to show only the relevant code, not all of it.
  • Verifiable: Ultimately, the ideal example can be used by the vendor to reproduce the problem, because once that’s possible, the vendor can start debugging it and finding the right spots to fix quite easily. Otherwise, it’s just guessing and going back and forth with the reporter, just to write more prose. That’s tiring on both sides.

This is why we now have an example project on GitHub to help you create that MCVE:

https://github.com/jOOQ/jOOQ-mcve

It is a minimal example that uses:

This example can be forked on GitHub and modified by you directly, in order to show how to reproduce your issue. In the future, we’ll add more example setups that may be helpful to reproduce your specific issue.

Thanks again for taking the time to report issues. We vendors really appreciate your work!

How to Use jOOQ’s UpdatableRecord for CRUD to Apply a Delta

While jOOQ is not a full-fledged ORM (as in an object graph persistence framework), there is still some convenience available to avoid hand-writing boring SQL for everyday CRUD.

That’s the UpdatableRecord API. It has a few very useful features, including:

A 1:1 mapping to the underlying table

Every UpdatableRecord is mapped on a 1:1 basis to the underlying table (or view) of your database. For example, if you create a table like this (using PostgreSQL syntax):

CREATE TABLE customer (
  id BIGSERIAL NOT NULL PRIMARY KEY,
  first_name TEXT NOT NULL,
  last_name TEXT NOT NULL,
  vip BOOLEAN DEFAULT FALSE
);

… and if you’re using the code generator (which you should), then apart from the generated table that is used in SQL queries, you also have an UpdatableRecord that allows you to do this:

CustomerRecord customer = ctx.newRecord(CUSTOMER);
customer.setFirstName("John");
customer.setLastName("Doe");
customer.store();

The above statement will generate a JDBC statement like this:

INSERT INTO customer (first_name, last_name)
VALUES (?, ?)

Notice…

SQL DEFAULT expressions

… we only set the FIRST_NAME and LAST_NAME values on the record, and only those values have been included in the insert statement. The other columns are not included, which allows SQL DEFAULT expressions to be applied. This is true both for the ID serial column and for the VIP defaulted column.

The actual database record will look something like this:

id     first_name   last_name   vip
-------------------------------------
1337   John         Doe         false

Notice how the VIP column is nullable, which means that an unknown VIP status is something our database design explicitly allows for. Of course, we could easily set that value to NULL as well, preventing the DEFAULT from applying. For example:

// Assuming you actually *need* the client round trip...
CustomerRecord customer = ctx.fetchOne(
  CUSTOMER, CUSTOMER.ID.eq(1337));
customer.setVip(null);
customer.store();

The above query would produce the following update statement:

UPDATE customer SET vip = ? WHERE id = ?

Sidenote: Please do not use this feature if you don’t really need the data in the client. In that case, just write the UPDATE statement and save a client-server roundtrip!
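
For completeness, that explicit, round-trip-free UPDATE, using jOOQ’s DSL and the generated CUSTOMER table, might look like this (a sketch):

// One statement, no prior SELECT required
ctx.update(CUSTOMER)
   .set(CUSTOMER.VIP, (Boolean) null)
   .where(CUSTOMER.ID.eq(1337L))
   .execute();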

Coming back to the record-based approach: notice again that only the column we’ve explicitly changed is included in the update statement, not all the others. This has advantages and disadvantages:

Advantages

  • Even if the CustomerRecord was fetched incompletely (e.g. through a query), we don’t need all the other columns to modify this one.
  • If a trigger listens on columns that have been included in the actual query, we can distinguish between a value being set to NULL explicitly, and a value not being available from the DML statement. Think of JavaScript’s difference between null and undefined.

Disadvantages

  • If we allow users to change arbitrary columns on such a record, there is a very high number of possible resulting queries, which can hurt execution plan cache performance in the database.

This is the tradeoff. jOOQ opted for full support of SQL DEFAULT values, whose application is activated when a Record.changed() flag is not set. As soon as the flag is set, jOOQ assumes that the value was set explicitly and intentionally, and it will thus be propagated to the generated SQL query.

SQL DEFAULT expressions and POJOs

jOOQ Records can encode that difference. Plain Old Java Objects (POJOs) cannot. There is only null, if you choose a reference type for your data. What does null mean in that context? It’s not possible to know. A lot of jOOQ users will want that null value to encode the SQL DEFAULT value, not the SQL NULL value. But when you’re loading your POJO content into a jOOQ record, you’re effectively setting all the column values to NULL explicitly, because all the Record.changed() flags will be set:

// Load the customer record from a POJO
CustomerRecord customer = ctx.newRecord(CUSTOMER, customerPojo);
customer.update();

Now, quite likely, the resulting query will be:

UPDATE customer
SET
  first_name = ?,
  last_name = ?,
  vip = ?
WHERE id = ?

But what if your CustomerPojo was retrieved from a JSON document, e.g. like this one?

{
  "id": 1337,
  "lastName": "Smith"
}

It looks like the intent of this document is for the record “John Doe” to be renamed to “John Smith”, so this update statement should have been generated instead:

UPDATE customer
SET last_name = ?
WHERE id = ?

Clearly, we shouldn’t set the FIRST_NAME or VIP values to NULL.

Both the JSON document structure and the jOOQ UpdatableRecord can encode the difference between undefined / DEFAULT and null / NULL. But the POJO cannot. So, the only reasonable thing for jOOQ to do is to set all Record.changed() flags to true.

Of course, you can manually set them back to false to work around this limitation, or perhaps skip the intermediary POJO data structure and load the JSON data directly into the jOOQ Record.
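
The first workaround might look like this (a sketch; it assumes you know, from the JSON document, which attributes were actually present):

CustomerRecord customer = ctx.newRecord(CUSTOMER, customerPojo);

// Undo the "explicitly set to NULL" interpretation for attributes
// that were absent from the JSON document
customer.changed(CUSTOMER.FIRST_NAME, false);
customer.changed(CUSTOMER.VIP, false);

// Now only LAST_NAME (and the primary key in the WHERE clause) are rendered
customer.update();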

jOOQ 3.11 Released With 4 New Databases, Implicit Joins, Diagnostics, and Much More

Today, jOOQ 3.11 has been released with support for 4 new databases, implicit joins, diagnostics, and much more.

New Databases Supported

At last, 4 new SQL dialects have been added to jOOQ! These are:

jOOQ Professional Edition

  • Aurora MySQL Edition
  • Aurora PostgreSQL Edition
  • Azure SQL Data Warehouse

jOOQ Enterprise Edition

  • Teradata

Implicit Joins

One of the really cool features in ORMs like Hibernate, Doctrine, and others, is
the capability of using a relationship graph notation to access another entity’s
columns through what is often called “implicit joins”.

Instead of explicitly joining a to-one relationship to access its columns:

SELECT author.first_name, author.last_name, book.title
FROM book
JOIN author ON book.author_id = author.id

We would like to be able to access those columns directly, using this notation:

SELECT book.author.first_name, book.author.last_name, book.title
FROM book

The join is implied and should be added implicitly. jOOQ now allows for this to
happen when you use the code generator:

ctx.select(BOOK.author().FIRST_NAME, BOOK.author().LAST_NAME, BOOK.TITLE)
   .from(BOOK)
   .fetch();

When rendering this query, the implicit join graph will be calculated on the fly
and added behind the scenes to the BOOK table. This works for queries of
arbitrary complexity and on any level of nested SELECT.

More details in this blog post:
https://blog.jooq.org/2018/02/20/type-safe-implicit-join-through-path-navigation-in-jooq-3-11/

DiagnosticsListener SPI

A new DiagnosticsListener SPI has been added to jOOQ:
https://github.com/jOOQ/jOOQ/issues/5960

The purpose of this SPI is to sanitise your SQL language, JDBC and jOOQ API
usage. Listeners can listen to events such as:

  • duplicateStatements (similar SQL is executed, bind variables should be used)
  • repeatedStatements (identical SQL is executed, should be batched or rewritten)
  • tooManyColumnsFetched (not all projected columns were needed)
  • tooManyRowsFetched (not all fetched rows were needed)

The great thing about this SPI is that it can be exposed to clients through the
JDBC API, in which case the diagnostics feature can reverse engineer your
JDBC or even JPA-generated SQL. Ever wanted to detect N+1 queries from
Hibernate? Pass those Hibernate-generated queries through this SPI.

Want to find missing bind variables leading to cursor cache contention or SQLi?
Let jOOQ find similar SQL statements and report them. E.g.

  • SELECT name FROM person WHERE id = 1
  • SELECT name FROM person WHERE id = 2

Or also:

  • SELECT name FROM person WHERE id IN (?, ?)
  • SELECT name FROM person WHERE id IN (?, ?, ?)
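
A listener reacting to such events might look roughly like this sketch (the method names merely mirror the events listed further above; the DefaultDiagnosticsListener adapter, the DiagnosticsContext accessors, and the registration mechanism are assumptions here, not the definitive SPI):

// A sketch only, not the definitive SPI
public class LoggingDiagnostics extends DefaultDiagnosticsListener {

    @Override
    public void duplicateStatements(DiagnosticsContext ctx) {
        // Similar SQL differing only in literals: a candidate for bind variables
        System.out.println("Duplicate statement detected: " + ctx);
    }

    @Override
    public void repeatedStatements(DiagnosticsContext ctx) {
        // Identical SQL executed over and over: a candidate for batching (N+1?)
        System.out.println("Repeated statement detected: " + ctx);
    }
}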

Anonymous blocks

Many databases support anonymous blocks to run several statements in a single
block scope. For example, Oracle:

DECLARE
  l_var NUMBER(10);
BEGIN
  l_var := 10;
  dbms_output.put_line(l_var);
END;

jOOQ now supports the new org.jooq.Block API to allow for wrapping DDL and DML
statements in such a block. This is a first step towards a future jOOQ providing
support for:

  • Abstractions over procedural languages
  • CREATE PROCEDURE and CREATE FUNCTION statements
  • Trigger support
  • And much more

Parser

jOOQ’s parser support is an ongoing effort. This release has added support for
a lot of new SQL clauses and functions from various vendors and in various DDL
and DML statements.

The parser is now also exposed through a public website and API, where SQL can
be translated from one dialect to another:
https://www.jooq.org/translate

This website will help further drive jOOQ API development by helping to find
missing functionality that is used in real-world SQL.

Another way to access this API is through the new org.jooq.ParserCLI command
line tool. For example, run:

$ java -cp jooq-3.11.0.jar org.jooq.ParserCLI -f -t ORACLE -s "SELECT * FROM (VALUES(1),(2)) AS t(a)"

To get:

select *
from (
  (
    select null a
    from dual
    where 1 = 0
  )
  union all (
    select *
    from (
      (
        select 1
        from dual
      )
      union all (
        select 2
        from dual
      )
    ) t
  )
) t;

Formal Java 10 Support

jOOQ 3.11 is the first release that is formally integration tested with Java 10.
To use jOOQ with Java 10, use the Java 8 distribution, which has not yet been
modularised, but contains an Automatic-Module-Name specification to be forward
compatible with future, modularised jOOQ distributions.

Additionally, package names between jOOQ, jOOQ-meta, and jOOQ-codegen have been
cleaned up to prevent duplicate package names, and the JAXB dependency has been
added explicitly to the various artefacts.

Other great improvements

  • Finally, asterisks (SELECT * or SELECT t.*) are formally supported in the API.
  • Collations can now be specified on a variety of syntax elements
  • The org.jooq.Comment type has been added, and DDL statements for it
  • The DefaultBinding implementation has been rewritten for better performance
  • Several performance improvements in jOOQ’s internals
  • Many more DDL statements are supported including GRANT and REVOKE
  • Support for the EXPLAIN statement
  • FETCH n PERCENT ROWS and TOP n PERCENT clauses are supported
  • Better org.jooq.Name and org.jooq.Named API for identifier handling
  • Support for PostgreSQL 10
  • Support for SQL Server 2017
  • Support for DB2 11
  • Upgraded MariaDB support for window functions, inverse distribution functions, WITH
  • jOOU dependency updated to 0.9.3
  • jOOR dependency updated to 0.9.8
  • Server output (e.g. DBMS_OUTPUT) can now be fetched automatically, by jOOQ
  • Code generation support for PL/SQL TABLE types
  • SQL Keywords Can Now Be Rendered In Pascal Style If You Must
  • Emulate PostgreSQL’s ON CONFLICT clause using MERGE

The complete list can be seen here:
https://www.jooq.org/notes/?version=3.11

Truth First, or Why You Should Mostly Implement Database First Designs

In this much overdue article, I will explain why I think that in almost all cases, you should implement a “database first” design in your application’s data models, rather than a “Java first” design (or whatever your client language is), the latter approach leading to a long road of pain and suffering, once your project grows.

This article is inspired by a recent Stack Overflow question.

Interesting reddit discussions on /r/java and /r/programming.

Code generation

To my surprise, a small group of first-time jOOQ users seems to be appalled by the fact that jOOQ heavily relies on source code generation. No one keeps you from using jOOQ the way you want, and you don’t have to use code generation, but the default way to use jOOQ, according to the manual, is to start with a (legacy) database schema, reverse engineer it using jOOQ’s code generator to get a bunch of classes representing your tables, and then write type-safe queries against those tables:

for (Record2<String, String> record : DSL.using(configuration)
//   ^^^^^^^^^^^^^^^^^^^^^^^ Type information derived from the 
//   generated code referenced from the below SELECT clause

       .select(ACTOR.FIRST_NAME, ACTOR.LAST_NAME)
//           vvvvv ^^^^^^^^^^^^  ^^^^^^^^^^^^^^^ Generated names
       .from(ACTOR)
       .orderBy(1, 2)) {
    // ...
}

The code is generated either manually outside of the build, or automatically with every build. For instance, such a re-generation could follow immediately after a Flyway database migration, which can also be run either manually or automatically.

Source code generation

There are different philosophies, advantages, and disadvantages regarding these manual/automatic approaches, which I don’t want to discuss in this article. But essentially, the point of generated code is that it provides a Java representation of something that we take for granted (a “truth”) either within or outside of our system. In a way, compilers do the same thing when they generate byte code, machine code, or some other type of source code from the original sources – we get a representation of our “truth” in a different language, for whatever reason.

There are many such code generators out there. For instance, XJC can generate Java code from XSD or WSDL files. The principle is always the same:

  • There is some truth (internal or external), like a specification, data model, etc.
  • We need a local representation of that truth in our programming language

And it almost always makes sense to generate the latter, to avoid redundancy.

Type providers and annotation processing

Noteworthy: Another, more modern approach to jOOQ’s particular code generation use-case would be Type Providers, as implemented by F#, in which case the code is generated by the compiler while compiling. It never really exists in its source form. A similar (but less sophisticated) tool in Java is annotation processing, e.g. Lombok.

In a way, this does the same thing except:

  • You don’t see the generated code (perhaps that’s less appalling to some?)
  • You must ensure the types can be provided, i.e. the “truth” must always be available. Easy in the case of Lombok, which annotates the “truth”. A bit more difficult with database models, which rely on an always available live connection.

What’s the problem with code generation?

Apart from the tricky question whether to trigger code generation manually or automatically, some people seem to think that code must not be generated at all. The reason I hear the most is the idea that it is difficult to set up in a build pipeline. And yes, that is true. There is extra infrastructure overhead. Especially if you’re new to a certain product (like jOOQ, or JAXB, or Hibernate, etc.), setting up an environment takes time you would rather spend learning the API itself and getting value out of it.

If the overhead of learning how the code generator works is too high, then indeed, the API failed to make the code generator easy to use (and later on, to customise). That should be a high priority for any such API. But that’s the only argument against code generation. Other than that, it makes absolutely no sense at all to hand-write the local representation of the internal or external truth.

Many people argue that they don’t have time for that stuff. They need to ship their MVPs. They can finalise their build pipelines later. I say:

(Image: “Too Busy To Improve”, licensed CC by Alan O’Rourke / Audience Stack)

“But Hibernate / JPA makes coding Java first easy”

Yes that’s true. And it’s both a bliss and a curse for Hibernate and its users. In Hibernate, you can just write a couple of entities, such as:

@Entity
class Book {
  @Id
  int id;
  String title;
}

And you’re almost set. Let Hibernate generate the boring “details” of how to define this entity in your SQL dialect’s DDL:

CREATE TABLE book (
  id INTEGER PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
  title VARCHAR(50),

  CONSTRAINT pk_book PRIMARY KEY (id)
);

CREATE INDEX i_book_title ON book (title);

… and start running the application. That’s really cool to get started quickly and to try out things.

But, huh, wait. I cheated.

  • Will Hibernate really apply that named primary key definition?
  • Will it create the index on TITLE, which I know we’ll need?
  • Will it add an identity specification?

Probably not. While you’re developing your greenfield project, it is convenient to always throw away your entire database and re-generate it from scratch, once you’ve added the additional annotations. So, the Book entity would eventually look like this:

@Entity
@Table(name = "book", indexes = {
  @Index(name = "i_book_title", columnList = "title")
})
class Book {
  @Id
  @GeneratedValue(strategy = IDENTITY)
  int id;
  String title;
}

Cool. Re-generate. Again, this makes it really easy to get started.

But you’ll pay the price later on

At some point, you go to production. And that’s when this model no longer works. Because

Once you go live, you can no longer throw away your database, as your database has become legacy.

From now on, you have to write DDL migration scripts, e.g. using Flyway. And then, what happens to your entities? You can either adapt them manually (so you double the work), or have Hibernate re-generate them for you (how big are your chances of the generation matching your expectations?). You can only lose.

Because once you go to production, you need hotfixes. And those have to go live fast. And since you didn’t prepare for pipelining your migrations to production smoothly, you’ll patch things wildly. And then you run out of time to do it right™. And you’ll blame Hibernate, because it’s always someone else’s fault…

Instead, you could have done things entirely differently from the beginning. Like using those round wheels.

Go Database First

The real “truth” of your database schema, and the “sovereignty” over it, resides with your database. The database is the only place where the schema is defined, and all clients have a copy of the database schema, not vice versa. The data is in your database, not in your client, so it makes perfect sense to enforce the schema and its integrity in the database, right where the data is.

This is old wisdom, nothing new. Primary and unique keys are good. Foreign keys are good. Check constraints are good. Assertions (when they’re finally implemented) are good.

And that’s not where it ends. For instance, if you’re using Oracle, you may want to specify:

  • In what tablespace your table resides
  • What PCTFREE value it has
  • What the cache size of your sequence (behind the identity) is

Maybe all of this doesn’t matter in small systems, but you don’t have to go “big data” before you can profit from vendor-specific storage optimisations such as the above. None of the ORMs I’ve ever seen (including jOOQ) will allow you to use the full set of DDL options that you may want to use on your database. At best, ORMs offer some tools to help you write DDL.

But ultimately, a well-designed schema is hand written in DDL. All generated DDL is only an approximation of that.

What about the client model?

As mentioned before, you will need a copy of your database schema in your client: a client representation. Needless to say, this client representation needs to be in sync with the real model. How to best do that? By using a code generator.

All databases expose their meta information through SQL. Here’s how to get all tables from your database in various SQL dialects:

-- H2, HSQLDB, MySQL, PostgreSQL, SQL Server
SELECT table_schema, table_name
FROM information_schema.tables

-- DB2
SELECT tabschema, tabname
FROM syscat.tables

-- Oracle
SELECT owner, table_name
FROM all_tables

-- SQLite
SELECT name
FROM sqlite_master

-- Teradata
SELECT databasename, tablename
FROM dbc.tables

These queries (or similar ones, e.g. depending on whether views, materialised views, table valued functions should also be considered) are also run by JDBC’s DatabaseMetaData.getTables() call, or by the jOOQ-meta module.
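
The JDBC counterpart of these queries is DatabaseMetaData.getTables(), which works across all of the above vendors (a plain JDBC sketch):

// Enumerate all tables, vendor-independently, via JDBC meta data
DatabaseMetaData meta = connection.getMetaData();

try (ResultSet rs = meta.getTables(null, null, "%", new String[] { "TABLE" })) {
    while (rs.next()) {
        System.out.println(
            rs.getString("TABLE_SCHEM") + "." + rs.getString("TABLE_NAME"));
    }
}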

From the result of such queries, it’s relatively easy to generate any client representation of your database model, regardless of what your client technology is.

  • If you’re using JDBC or Spring, you can create a bunch of String constants
  • If you’re using JPA, you can generate the entities themselves
  • If you’re using jOOQ, you can generate the jOOQ meta model

Depending on the amount of features your client API offers (e.g. jOOQ or JPA), the generated meta model can be really rich and complete. Consider, for instance, jOOQ 3.11’s implicit join feature, which relies on generated meta information about the foreign key relationships between your tables.

Now, any database increment will automatically lead to updated client code. For instance, imagine:

ALTER TABLE book RENAME COLUMN title TO book_title;

Would you really want to do this work twice? No way. Just commit the DDL, run it through your build pipeline, and have an updated entity:

@Entity
@Table(name = "book", indexes = {

  // Would you have thought of this?
  @Index(name = "i_book_title", columnList = "book_title")
})
class Book {
  @Id
  @GeneratedValue(strategy = IDENTITY)
  int id;

  @Column(name = "book_title")
  String bookTitle;
}

Or an updated jOOQ class. Plus: Your client code might no longer compile, which can be a good thing! Most DDL changes are also semantic changes, not just syntactic ones. So, it’s great to be able to see, in compiled client source code, what code is (or may be) affected by your database increment.

A single truth

Regardless of what technology you’re using, there’s always one model that contains the single truth for a subsystem – or at least, we should aim for this goal and avoid the enterprisey mess where “truth” is everywhere and nowhere. It just makes everything much simpler. If you exchange XML files with some other system, you’re going to use XSD. Like jOOQ’s INFORMATION_SCHEMA meta model in XML form:
https://www.jooq.org/xsd/jooq-meta-3.10.0.xsd

  • XSD is well understood
  • XSD specifies XML content very well, and allows for validation in all client languages
  • XSD can be versioned easily, and evolved backwards compatibly
  • XSD can be translated to Java code using XJC

The last bullet is important. When communicating with an external system through XML messages, we want to be sure our messages are valid. That’s really really easy to do with JAXB, XJC, and XSD. It would be outright nuts to think that a Java-first approach where we design our messages as Java objects could somehow be reasonably mapped to XML for someone else to consume. That generated XML would be of very poor quality, undocumented, and hard to evolve. If there’s an SLA on such an interface, we’d be screwed.
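
For instance, validating an inbound message against the XSD while unmarshalling takes only a few lines with JAXB (a sketch; the XJC-generated InformationSchema class and the file names are placeholders):

// Attach the XSD, so that non-conforming messages are rejected on unmarshalling
SchemaFactory sf = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
Schema schema = sf.newSchema(new File("jooq-meta-3.10.0.xsd"));

JAXBContext context = JAXBContext.newInstance(InformationSchema.class);
Unmarshaller unmarshaller = context.createUnmarshaller();
unmarshaller.setSchema(schema);

InformationSchema is = (InformationSchema) unmarshaller.unmarshal(new File("message.xml"));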

Frankly, that’s what happens to JSON APIs all the time, but that’s another story, another rant…

Databases: Same thing

When you’re using databases, it’s the same thing. The database owns its data and it should be the master of the schema. All modifications to the schema should be implemented using DDL directly, to update the single truth.

Once that truth is updated, all clients need to update their copies of the model as well. Some clients may be written in Java, using either (or both of) jOOQ and Hibernate, or JDBC. Other clients may be written in Perl (good luck to them). Even other clients may be written in C#. It doesn’t matter. The main model is in the database. ORM-generated models are of poor quality, not well documented, and hard to evolve.

So, don’t do it. And, don’t do it from the very beginning. Instead, go database first. Build a deployment pipeline that can be automated. Include code generators to copy your database model back into the clients. And stop worrying about code generation. It’s a good thing. You’ll be productive. All it takes is a bit of initial effort to set it up, and you’ll get years of improved productivity for the rest of your project.

Thank me later.

Clarification

Just to be sure: This article in no way asserts that your database model should be imposed on your entire system (e.g. your domain, your business logic, etc. etc.). The claim I made here is that client code interacting with the database should act upon the database model, and not have its own first class model of the database instead. This logic typically resides in the data access layer of your client.

In 2-tier architectures, which still have their place sometimes, that may be the only model of your system. In most systems, however, I consider the data access layer a “subsystem” that encapsulates the database model. So, there.

Exceptions

There are always exceptions, and I promised that the database first and code generation approach may not always be the right choice. These exceptions are (probably not exhaustive):

  • When the schema is unknown and must be discovered. E.g. you’re a tool vendor helping users navigate any schema. Duh… No code generation. But still database first.
  • When the schema needs to be generated on the fly for some task. This sounds a lot like a more or less sophisticated version of the entity attribute value pattern, i.e. you don’t really have a well-defined schema. In that case, it’s often not even sure if an RDBMS will be the right choice.

The nature of exceptions is that they’re exceptional. In the majority of RDBMS usage, the schema is known in advance, placed inside the RDBMS as the single source of “truth”, and clients will have derived copies from it – ideally generated using a code generator.