Why You Should Use jOOQ With Code Generation

I’m answering many jOOQ questions on Stack Overflow, and a lot of the time, the problem has the same cause: people not using jOOQ’s code generator. The main reason people seem not to use it is that it takes some extra time to set up, but as with anything well designed, the initial investment will always pay off.

In this article, I want to briefly summarise what the code generator is, why you should use it, and what the main reason not to use it is (hint: you probably don’t have that case).

What’s the jOOQ code generator?

jOOQ is an internal DSL that lets you write type safe, embedded, dynamic SQL directly in Java, just like you can in PL/SQL, PL/pgSQL, T-SQL, SQL/PSM and all the other procedural dialects. Being internal, the big difference is that dynamic SQL is very easy, because jOOQ makes SQL “composable”.

As an internal DSL, it models the SQL language as a chain of Java method calls. The technique behind the fluent API design is described here. A typical jOOQ query looks like this:

var author =
ctx.select(AUTHOR.FIRST_NAME, AUTHOR.LAST_NAME)
   .from(AUTHOR)
   .where(AUTHOR.ID.eq(10))
   .fetchOne();
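Depending on your dialect, this renders SQL roughly like the following (a sketch; exact formatting and qualification vary):

```sql
SELECT author.first_name, author.last_name
FROM author
WHERE author.id = 10
```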

Now those AUTHOR table references, as well as those AUTHOR.ID column references are generated code. They are Java representations of your database meta data, and include a variety of information:

  • The catalog / schema they belong to
  • The table they belong to (if it’s a column)
  • The data type
  • Some attached converters and data type bindings
  • Other useful meta data, such as constraints (primary key, unique key, foreign key), identities, default expressions, generated expressions, etc.
  • Auxiliary DSL methods, such as type safe table aliasing methods

As you can see, you get a lot of out-of-the-box features!

What’s the price to pay for code generation?

As I understand it, a lot of users (especially new ones) are eager to try out jOOQ immediately and don’t want to bother setting up the code generator, which seems to be an unnecessary step at first. For example, when using JAXB, you don’t have to use XJC to generate annotated bindings for your XSD files (you might not even have an XSD). But with jOOQ, it’s different.

Yes, the manual can be overwhelming, but the tutorial isn’t, and neither are the examples. You don’t even have to use the Maven or Gradle plugins for your first attempts with jOOQ. Why not just use jbang? With just a few lines of CLI, you can start right away.

To try out basic things, you can also use the DDLDatabase, which generates code from SQL files directly, without a live connection to an actual database. Though even that isn’t really a big issue anymore with testcontainers.org and/or Docker, which let you spin up a sample database in a matter of seconds.

What I’m trying to say here is: Don’t be overwhelmed. Yes, there’s some initial investment, but it will pay off massively (see below), will prevent tons of problems (see below), and it just requires a few more minutes of setting up your project. Heck, you could even start playing around with the bug reporting template, which includes code generation configuration for Java, Scala, and Kotlin!

What do you get in return?

Let’s start with the obvious:

1. Compile time type safety

You already get some degree of compile time type safety from using jOOQ’s DSL alone: you cannot mistype keywords, such as SELECT, or forget your parentheses around an IN predicate, such as A IN (B, C). And you get auto-completion for your SQL syntax.

But without code generation, you won’t get type safety for your schema objects. This is a big deal, and once you’ve gotten used to it, you will never want to miss it again!

Every table, every column, every constraint, every index, every sequence, every procedure, every function, every package, every schema, every catalog, every domain, every enum type, every object type gets a generated Java representation in jOOQ. You don’t have to go to your ERD, or database, or wherever it is to look up your schema. You have it right there in your code. As a first step, you can use this for auto completion in your favourite IDE!

Not just that. You can also never make a mistake with your meta data.

Yep. No more typos. But also! No more regressions when someone renames a column in the database. As soon as a column name changes, or is deleted, your Java code stops compiling.

No more wrong data types, either. FIRST_NAME is declared as a TableField<AuthorRecord, String>, so your Java compiler, through jOOQ’s elaborate generic API, already knows that you’re projecting a String column. It’s not 2003 anymore. We have generics to prevent type casts or unsafe conversions. Check out the compilation error on this:
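For example (a sketch of the kind of error; the exact message depends on your compiler):

```java
// This does not compile: FIRST_NAME is a Field<String>, and 10 is not a String:
// ctx.select(AUTHOR.FIRST_NAME)
//    .from(AUTHOR)
//    .where(AUTHOR.FIRST_NAME.eq(10)) // error: incompatible types
//    .fetchOne();
```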

Why ever worry about such data type problems again? You already declared the data type once, when you created the table:

CREATE TABLE author (
  id INTEGER NOT NULL PRIMARY KEY,
  first_name TEXT NOT NULL,
  last_name TEXT NOT NULL,
  ...
);

Why go through all that busywork of re-declaring the data type again in Java? In fact, why go through all that busywork of re-declaring anything of the above again in Java? No one enjoys typing all that stuff twice or keeping it in sync. In fact, it’s an outright bad idea. There should only be one source of truth for your meta model, and that’s the DDL you ran in your database to create the schema.

Sure, you can have other representations of your data, e.g. some DDD version, some JSON version for your frontend, whatever. But whenever you query your database, there’s absolutely no reason not to use the exact meta model which you’ve already declared in your database.

I think that’s kind of obvious? Why waive all that for a little (initial) extra setup convenience?

2. Schema introspection

You don’t just get that benefit when writing your queries. You also get it when reading them. You can always quickly navigate to the column you’re interested in and read its SQL comments in the Javadoc. Maybe you have this specified? (Totally underused SQL feature!)

COMMENT ON COLUMN author.first_name IS 'The author''s first name';

Now look what jOOQ’s code generator does with it:
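The comment ends up in the generated column’s Javadoc, roughly like this (a sketch of generated code; the exact shape varies by jOOQ version and data type):

```java
// Inside the generated AUTHOR table class (sketch):

/**
 * The author's first name
 */
public final TableField<AuthorRecord, String> FIRST_NAME = createField(
    DSL.name("first_name"),
    SQLDataType.CLOB.nullable(false),
    this,
    "The author's first name"
);
```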

Seems obvious, no? Or, you want to see who is using this column? Just check the call hierarchy or whatever IDE search tool you’re interested in:

This is much better than just text searching for FIRST_NAME, which might be case sensitive and match any string like it, not just that particular column of the AUTHOR table.

3. Runtime meta model: Data types

In fact, you won’t just profit from this type safety at compile time, but also at runtime. Trust me. There are so many edge cases in some JDBC drivers and/or SQL dialects where the data type must be communicated to the database explicitly.

Sure, you can write something like this in jOOQ, and it works:

var author =
ctx.select(field("author.first_name"), field("author.last_name"))
   .from("author")
   .where(field("author.id").eq(10))
   .fetchOne();

The above is using plain SQL templates, a feature that is normally used by users to extend jOOQ with custom functionality. Sure, it can be used instead of using code generation, too, and occasionally, that’s just the right thing to do.

When bind variables have insufficient type contexts

But there are many cases where you should give the SQL engine more context about the bind value. For example, in Derby (or older versions of Db2), you can’t just do this:

select null
from sysibm.sysdummy1;

You’ll get an error:

SQL Error [30000] [42X01]: Syntax error: Encountered “null” at line 1, column 8.

Instead, you have to CAST:

select cast(null as int)
from sysibm.sysdummy1;

jOOQ always adds those casts for you if, through our extensive integration tests, we find out that in any given situation, a dialect might require it. But jOOQ can only do that if you provide it with a type. Often you do, implicitly, but sometimes you don’t, and then you have to debug a query that used to work before. That can be very unexpected.

When a Field<Object> leads to compilation errors

This one is tricky. Java 8 made a peculiar decision around overload resolution with generics. Assuming you have overloads like this (and jOOQ is full of these, for convenience reasons):

public <T> void setValue(Parameter<T> parameter, T value) {}
public <T> void setValue(Parameter<T> parameter, Field<T> value) {}

Then there has been a backwards incompatible change of compilation behaviour in Java 8.

While this works perfectly fine:

Parameter<String> p1 = ...
Field<String> f1 = ...
setValue(p1, f1);

This call doesn’t compile:

Parameter<Object> p2 = ...
Field<Object> f2 = ...
setValue(p2, f2);

It seems that Field<Object> is not more specific than Object in this case, which seems weird, but that’s how the JLS designed generic overload resolution. In fact, it’s still more specific, but another rule declares the two types as ambiguous before we get to the specificity resolution. Read the above link for details.
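The behaviour can be reproduced without jOOQ at all. Here’s a minimal, self-contained sketch, where Parameter<T> and Field<T> are hypothetical stand-ins for the jOOQ types of the same names:

```java
// Demonstrates the Java 8+ generic overload ambiguity described above.
public class OverloadDemo {
    interface Parameter<T> {}
    interface Field<T> {}

    static <T> String setValue(Parameter<T> parameter, T value) {
        return "T overload";
    }

    static <T> String setValue(Parameter<T> parameter, Field<T> value) {
        return "Field<T> overload";
    }

    static String worksFine() {
        Parameter<String> p1 = new Parameter<String>() {};
        Field<String> f1 = new Field<String>() {};

        // Unambiguous: with T = String, a Field<String> is not a String,
        // so only the Field<T> overload applies
        return setValue(p1, f1);
    }

    public static void main(String[] args) {
        System.out.println(worksFine());

        Parameter<Object> p2 = new Parameter<Object>() {};
        Field<Object> f2 = new Field<Object>() {};

        // With T = Object, a Field<Object> is also just an Object, so both
        // overloads apply, and javac 8+ reports the call as ambiguous:
        // setValue(p2, f2); // error: reference to setValue is ambiguous
    }
}
```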

Let’s look again at the previous jOOQ query that doesn’t use code generation:

Record2<Object, Object> author =
ctx.select(field("author.first_name"), field("author.last_name"))
   .from("author")
   .where(field("author.id").eq(10)) // Field<Object>
   .fetchOne();

It’s all full of Field<Object>, so you’ll run into this problem eventually, trust me, mostly with the UPDATE .. SET clause. And it will be quite the puzzler.

The workaround is to always attach type information to your columns:

// Assuming this static import
import static org.jooq.impl.SQLDataType.*;

Record2<String, String> author =
ctx.select(
        field("author.first_name", VARCHAR),
        field("author.last_name", VARCHAR))
   .from("author")
   .where(field("author.id", INTEGER).eq(10)) // Field<Integer>
   .fetchOne();

But why do all that, when the code generator could have already done it for you, automatically? And what if you change your AUTHOR.ID column to be a BIGINT instead of an INTEGER? Will you manually update every query?

4. Emulations

Some emulations are not possible without generated meta data. They mostly involve the emulation of the INSERT .. RETURNING syntax, which relies on knowing both primary key and identity information, but other syntaxes may be affected as well.

You wouldn’t believe how many times I’ve supported users who then wanted to use internal API to tell jOOQ about primary key or identity meta information in convoluted ways to make this work, rather than just generate code. With generated code, the following query works in a lot of RDBMS out of the box. Out. Of. The. Box:

AuthorRecord author =
ctx.insertInto(AUTHOR, AUTHOR.FIRST_NAME, AUTHOR.LAST_NAME)
   .values("John", "Doe")
   .returning(AUTHOR.ID)
   .fetchOne();

Depending on your dialect, jOOQ decides for you:

  • Can this be run in a single query (e.g. in Firebird, Oracle, PostgreSQL as above, in SQL Server using OUTPUT, in Db2, H2 using data change delta tables)?
  • Can it be run in a single query using JDBC’s Statement.getGeneratedKeys()?
  • Does it require a second query to fetch the identity and/or other columns?

jOOQ can always find a way for you in this case, but only if it knows which column the identity is.
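For example, in SQL Server, jOOQ might render the RETURNING clause via an OUTPUT clause, roughly like this (a sketch; actual rendering depends on version and settings):

```sql
INSERT INTO author (first_name, last_name)
OUTPUT inserted.id
VALUES ('John', 'Doe');
```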

5. Converters

Basic SQL only really has a few built-in data types:

  • Various numeric types
  • Various string and binary string types
  • Various temporal types

But is it a good idea to work with these low level types, everywhere throughout your application? Do you want to use BigDecimal for your monetary amounts all over the place? Or would it be better to create a more useful Amount domain class? One where you can attach functionality, currency, etc.?

With jOOQ, you can use converters for that (or bindings, if that affects how you bind the values to JDBC). Have a fancy FirstName class that models various types of first names?

/**
 * Not just any ordinary first name string!
 */
record LeFirstName(String firstName) {}

Yes, you could insist on still not using code generation and attach that to your plain SQL templating query:

Record2<LeFirstName, String> author =
ctx.select(
        field("author.first_name", VARCHAR.asConvertedDataType(
            LeFirstName.class, LeFirstName::new, LeFirstName::firstName
        )), 
        field("author.last_name", VARCHAR))
   .from("author")
   .where(field("author.id", INTEGER).eq(10))
   .fetchOne();

Yes, you could extract that field definition into an auxiliary class for better reuse:

class LeAuthor {
    static Field<LeFirstName> firstName = field("author.first_name", 
        VARCHAR.asConvertedDataType(
            LeFirstName.class, LeFirstName::new, LeFirstName::firstName
        ));
    static Field<String> lastName = field("author.last_name", VARCHAR);
}

Record2<LeFirstName, String> author =
ctx.select(LeAuthor.firstName, LeAuthor.lastName)
   .from("author")
   .where(field("author.id", INTEGER).eq(10))
   .fetchOne();

And now, it looks almost like jOOQ’s code generator was used. Except it wasn’t. So why not go all the way? With just a bit of extra configuration in the code generator, you can attach your converter to all FIRST_NAME columns of all your tables automatically, so you’ll never forget. That matters especially if your converter implements some must-use logic, such as hashing or encryption. The above example would then just be:

// After tweaking the codegen configuration, you now
// get your custom type whenever you reference FIRST_NAME
Record2<LeFirstName, String> author =
ctx.select(AUTHOR.FIRST_NAME, AUTHOR.LAST_NAME)
   .from(AUTHOR)
   .where(AUTHOR.ID.eq(10))
   .fetchOne();
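The codegen tweak in question is a forced type configuration, roughly like this (a sketch; com.example.LeFirstNameConverter is a hypothetical Converter implementation, and the matching expression depends on your naming conventions):

```xml
<forcedTypes>
  <forcedType>
    <userType>com.example.LeFirstName</userType>
    <converter>com.example.LeFirstNameConverter</converter>
    <includeExpression>.*\.FIRST_NAME</includeExpression>
  </forcedType>
</forcedTypes>
```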

Why write manually what the machine can write for you? You wouldn’t manually write other derived code, such as byte code or assembly, either.

6. Type safe aliasing

Have a complex query and want to alias your tables? No problem with jOOQ. Just declare the aliases up front and you’re all set:

// Declare the alias:
var a = AUTHOR.as("a");

// Use the alias. Columns are still there, type safe
var author =
ctx.select(a.FIRST_NAME, a.LAST_NAME)
   .from(a)
   .where(a.ID.eq(10))
   .fetchOne();

While aliasing also works without code generation (everything works without code generation, which is just convenience for more complex API calls), you won’t get type safe access to your columns from aliases like the above. And you can still use the call hierarchy in your IDE to check where the FIRST_NAME column is referenced:

Or get compilation errors on typos, or auto completion, etc. etc. What’s not to love?

7. Implicit joins

One of my favourite jOOQ features is implicit joins. Look at this jOOQ query:

ctx.select(
          BOOK.author().FIRST_NAME,
          BOOK.author().LAST_NAME,
          BOOK.TITLE,
          BOOK.language().CD.as("language"))
   .from(BOOK)
   .fetch();

Starting from a child table (e.g. BOOK), jOOQ can implicitly (left) join your parent tables AUTHOR and LANGUAGE, because life’s too short to repetitively type out all the same joins and join predicates again and again. The above is equivalent to:

ctx.select(
          AUTHOR.FIRST_NAME,
          AUTHOR.LAST_NAME,
          BOOK.TITLE,
          LANGUAGE.CD.as("language"))
   .from(BOOK)
   .leftJoin(AUTHOR).on(BOOK.AUTHOR_ID.eq(AUTHOR.ID))
   .leftJoin(LANGUAGE).on(BOOK.LANGUAGE_ID.eq(LANGUAGE.ID))
   .fetch();

It’s a matter of taste and style, yes. You don’t have to use implicit joins everywhere. Some joins cannot be expressed as implicit joins, but very often, this style greatly simplifies your queries by reducing noise.

And it’s only possible with generated code!

Why? Because those BOOK.author() and BOOK.language() methods are generated, and they return again instances of generated tables containing generated columns (with implicit table aliases), and data types, and possibly converters, and what not.

Are you still categorically excluding working with the code generator just because it takes a little extra time to set up?

8. CRUD

jOOQ isn’t a full fledged ORM like JPA implementations, but you can still get some convenience to avoid having to spell out CRUD queries all the time. jOOQ calls these records UpdatableRecords.

When you fetch the UpdatableRecord like this:

BookRecord book =
ctx.selectFrom(BOOK)
   .where(BOOK.ID.eq(17))
   .fetchOne();

Then, you can change its values (e.g. in a rich client), and store the record again to generate an UPDATE statement:

book.setTitle("New title");
book.store();
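By default, store() updates only the columns that actually changed, so the generated statement is roughly (a sketch):

```sql
UPDATE book
SET title = 'New title'
WHERE id = 17;
```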

There are a few opt-in extra features, like optimistic locking, and you guessed it:

All available only with generated code.

9. Stored procedures

If you have those (and I highly recommend you do, if only for performance reasons), then using code generation is even more of a no-brainer.

If you have, in PostgreSQL:

CREATE FUNCTION p_count_authors_by_name (
  author_name VARCHAR, 
  result OUT INTEGER
)
AS $$
DECLARE
  v_result INT;
BEGIN
  SELECT COUNT(*)
  INTO v_result
  FROM author
  WHERE first_name LIKE author_name
  OR last_name LIKE author_name;

  result := v_result;
END;
$$ LANGUAGE plpgsql;

Now jOOQ’s code generator generates the following routine call for you:

public static Integer pCountAuthorsByName(
      Configuration configuration
    , String authorName
) {
    // Do you really care?
}

Yep, that’s it. Do you really care what happens behind the scenes? This binding logic is about as exciting as sorting your socks. It’s the same story every time you have some RPC technology. We didn’t care about the implementation in CORBA, with SOAP, with RMI, with remote EJBs, and we don’t care how to bind to a stored procedure.

It’s just some remote callable that accepts a String and returns an Integer. Code generation is your friend here, again. And yes, as always, if you think this function is missing a second parameter, you’ll just add it, and your client code stops compiling, offering you a chance to refactor!
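Calling the generated routine is then a one-liner (a sketch; the Routines class name follows jOOQ’s generation convention):

```java
Integer count = Routines.pCountAuthorsByName(ctx.configuration(), "John%");
```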

And the best thing is: This supports all of your weird edge cases including:

  • SQL types (enums, arrays, object types, UDTs)
  • PL/SQL types (record types, table types, associative arrays)
  • Ref cursor types
  • Table valued functions

10. Multitenancy

A cool feature in jOOQ is schema level multi tenancy, supported out of the box: you can dynamically rename your catalogs, schemas, and table names at runtime depending on whatever you like (e.g. the user’s tenant).

This feature is called schema mapping, and it obviously doesn’t work if you use plain SQL templates without code generation (because a plain SQL template can contain any kind of SQL string; that’s what they’re for).

Not just that. With generated code, all of your objects are fully qualified by default, and you can omit qualification when you like. With your own templates, you can’t easily change that. Want to port your schema from A to B? Well, you’ll hand-rewrite everything, good luck. With jOOQ’s generated code, it’s just flags, configurations, etc.
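Schema mapping is just runtime configuration. A sketch using jOOQ’s Settings API (the schema names here are placeholders):

```java
// Map the "my_app" schema to a per-tenant schema at render time:
Settings settings = new Settings()
    .withRenderMapping(new RenderMapping()
        .withSchemata(new MappedSchema()
            .withInput("my_app")
            .withOutput("tenant_42")));

DSLContext ctx = DSL.using(connection, SQLDialect.POSTGRES, settings);
```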

11. Embedded types

Remember converters? Even more sophisticated are embedded types, i.e. jOOQ’s way of wrapping multiple database columns into a single client side value, pretending your database supports UDTs (such as Oracle, PostgreSQL).

Because when you work with currencies, what you really want to do is combine an AMOUNT column with a CURRENCY column, because after all, USD 1.00 isn’t really the same thing as EUR 1.00, nor can they be directly compared or combined in arithmetic.

Embedded types are currently only available using the code generator, which produces all the meta data required for jOOQ’s runtime to map / unmap your flat result sets.

This is especially powerful when you use:

  • Embedded keys (the only thing you ever compare a BOOK.AUTHOR_ID with is AUTHOR.ID, or maybe another foreign key pointing to the same primary key, so why not use a dedicated type to enforce type safety?)
  • Embedded domains (you already declared a semantic type in your database, so you want to reuse that name also in client code)

12. Data change management

Another thing that people often don’t want to set up correctly and cleanly up front, because it takes some extra time, is data change management (e.g. using Flyway or Liquibase). And the regrets are similar once the project grows.

Using jOOQ’s code generator kind of forces you to also think about data change management early on, and that is a good thing!

Your data model changes over time, and so does your client code working on that model. jOOQ’s code generation process should be embedded in your CI/CD pipeline:

  • You automatically apply a change in your model
  • You automatically re-generate your code
  • You automatically run your integration tests
  • You automatically deploy the tested result as a unit

Here’s how to do all of that with testcontainers and jOOQ; see an example in action here.

Let me stress this once more.

If you don’t cleanly set up your CI/CD pipeline including your database early on in your project, you’ll do it later on, and it’ll be much harder. But if you do it, then adding jOOQ’s code generation to the game is a no-brainer, and you get everything above for free!

Conclusion

There are many other minor things that you get when you use jOOQ’s code generator. It’s such a powerful tool to get the most out of both jOOQ and your database. It actively encourages you to use all of your RDBMS features, including:

  • Complex models (because joining is much simpler once you have all the meta data, and even tools like implicit joins)
  • Views (those are generated too)
  • Data types (generated too)
  • Procedures (same)
  • Etc.

Using code generation in jOOQ is a clear path towards becoming a power user of your own database, which you (hopefully) design with a lot of love and craftsmanship.

Exception

I promised an exception. You can’t use the code generator when your schema is dynamic (i.e. unknown at compile time). But only a few systems are designed like that.

6 thoughts on “Why You Should Use jOOQ With Code Generation”

  1. At the beginning I have to state that I’m absolutely in love with jOOQ! Unconditional, complete love! I don’t understand how it is even possible to implement complex, dynamic systems without it – without the ability to build SQL queries dynamically at runtime. Because in dynamic systems like the ones I implement for almost my entire career the SQLs are not known at compile time, sorry about that! jOOQ is the only tool I know that solves this to perfection.
    But there’s one point you can’t onboard me and that’s the code generation. I have written my first code generator back in 1995 and since then I slowly came to the conclusion that code generators suck. No matter what they do and how well they do it! It is the concept that is problematic.
    The main dilemma is whether to commit the results to the source control system. If you do, developers end up committing changes they didn’t write and that they don’t fully understand. If you don’t, the generated code keeps getting out of sync because it’s very difficult to establish the trigger rule of when to run them. And if they run always, it makes the build painfully slow – both because of the process itself and because everything depends on these generated files eventually and gets rebuilt.
    In case you’re asking, we do use Flyway and the schema changes are automated, so we can produce a clean schema on demand. But the developer’s local machine (or docker) is not necessarily clean – as a result the generated code might not be “clean” as well.
    As a minor comment – I recently fell in love with the jOOQ converted types! And in my manually written Table classes I sometimes declare fields that are VARCHAR in the database but are something else for the application (enums or typed IDs are notable examples). We even encrypt the data with jOOQ type converters! Automated code generator couldn’t possibly know that.
    So I’m sorry for not taking this piece of jOOQ! All the rest is fabulous however! Thank you!

    1. Because in dynamic systems like the ones I implement for almost my entire career the SQLs are not known at compile time, sorry about that

      Well, if you’re in the exception case, then this whole article wasn’t for you. But even dynamic SQL queries which aren’t known at compile time may operate on a schema that is known at compile time. If the latter is the case, then again, this isn’t the exceptional case.

      The stated exception is about dynamic schemas, not dynamic SQL.

      If you do, developers end up committing changes they didn’t write and that they don’t fully understand

      But they did write those changes, no? They committed DDL deltas and their generated code must match, or the commit could be rejected by CI. Those mistakes may happen occasionally, but not too often, at least in my experience.

      If you don’t, the generated code keeps getting out of sync because it’s very difficult to establish the trigger rule of when to run them

      The trigger rule is relatively simple. Whenever you integration test your DDL delta, you will want to have code re-generated. Specifically, the https://github.com/etiennestuder/gradle-jooq-plugin claims to handle this wonderfully through Gradle’s own mechanisms of detecting when a build step produces deltas as input for another build step.

      And if they run always, it makes the build painfully slow – both because of the process itself and because everything depends on these generated files eventually and gets rebuilt.

      Depends on the system size. A lot of systems won’t reach a size where this re-run is really that slow. Though, I’m personally more in the commit-generated-code-camp, where this won’t ever be a problem. As a matter of fact, if the system size warrants that this slowness could ever turn out to be a problem, then code generation is the only choice. I started building jOOQ when I worked at a company whose E-Banking system had around 1000 views, 500 tables, 200 PL/SQL packages with dozens of procedures each, 100s of sequences, dozens of AQs. It would be absolutely impossible to tame this beast with manually written bindings to any of that.

      Now with testcontainers and docker, the only thing that ever contributed to slowness (dependency on actual Oracle instances) is no longer an issue.

      In case you’re asking, we do use Flyway and the schema changes are automated, so we can produce a clean schema on demand. But the developer’s local machine (or docker) is not necessarily clean

      Then you’re not using Flyway and said “automation” entirely correctly. I know this can be tricky to get right, but it’s certainly possible, and definitely worth doing! Your code and the database schema are a unit. If they’re not, you will have other types of problems too of keeping things in sync, and they’re not related to code generation. For example, it would not be possible for developers to run integration tests locally, so their commits can break the build, making integration tests, as a whole, less likely.

      This is why I coupled code generation with data change management in that section near the end. The diligence and cleanliness that both of these tools encourage and even enforce together will lead to better quality and a more streamlined development process. It isn’t code generation that is a bad thing because it leads to such mistakes. It is code generation that uncovers that these mistakes happen, which shouldn’t happen in the first place.

      After all, you do build your jar files at some point, no? What is javac if not a code generator for bytecode contained in jar files? How is it different? The jar files are published as a unit. Why is the database not part of this unit?

      And in my manually written Table classes I sometimes declare fields that are VARCHAR in the database but are something else for the application (enums or typed IDs are notable examples)

So, you wrote your own code generator by the name of “Victor”! You do embrace code generators, you just don’t trust the out of the box one ;-)

Hopefully yours never produces out of sync generated code! How do you guarantee that? Will every developer who commits a database increment always check that all of Victor’s generated code, including the converters attached to the VARCHAR columns, still works correctly, e.g. after a column rename?

      I fail to see how the criticism of the out of the box code generator doesn’t apply to the one named Victor. I would even claim it applies tenfold.

      Automated code generator couldn’t possibly know that.

      No, it doesn’t make any assumptions or heuristics, but you just configure it: https://www.jooq.org/doc/latest/manual/code-generation/codegen-advanced/codegen-config-database/codegen-database-forced-types/

      It will do your work for you. But indeed, it will not decide your design for you. You’re still in full control.

      TL;DR I’m very biased, and I’m very willing to die on this hill :) From my experience, StackOverflow questions that show code using generated code are usually about more advanced concepts, about SQL itself, about bugs in jOOQ in general, about missing SQL syntax support etc. StackOverflow questions that show code without generated code are usually about the many subtle caveats that appear when meta data isn’t known to jOOQ at runtime. You don’t have these problems, because you do the same work manually that the code generator would do for you, otherwise. But those folks on StackOverflow don’t do that.

      So, my conclusion here is that a lot of people who are avoiding code generation shoot themselves in the foot unnecessarily – at least that’s what my data from supporting jOOQ suggests.

  2. Lucas, I’m really flattered by your thorough reply! Even if we fail to convince each other, perhaps other readers may find this discussion fruitful! I’m all for it even if my remarks convince them to take the path of the automatic code generator!
So in essence you are right, we are sort of “generating” the code manually. And the main reason I live in peace with it is that it’s tiny compared to the effort of maintaining the CI/CD burden of the code generator. All we write is
val myField = createVarcharField("my_field")
in the relevant table class definition. (We have a “utility” function createVarcharField which calls createField behind the scenes.) So it’s a negligible effort when adding a new column. You asked how it works when we rename? We are a CD shop, so we cannot rename – you can never be sure all pods run the same code version at a given moment. So we only add new columns/tables and slowly deprecate the existing ones.
    How are we sure this boilerplate code is in sync with the schema? Because we have tests for our DB-related code and they simply fail if the code is out of sync. But it doesn’t happen because the developers run the relevant tests locally and schema changes don’t happen “unexpectedly” – they initiate them.
Specifically, in my current project I was the one who said “we have to switch to jOOQ”, and we slowly converted all DAOs to jOOQ. In every case there was a 5-10 minute “bumping at the keyboard” stage to write these “create…Field” lines, but again, this is such a minor overhead that it isn’t worth talking about. In the end, every column in the DB is represented by exactly one field in the code, and we can find all usages of it in a single click.
So I guess the bottom line is pretty much the same as if we were using the code generator. We just don’t pay the price of “orchestrating” it, and I personally believe it’s worth it. But let’s agree to disagree!

    1. Well, the one thing I can most definitely agree with is that there’s always room for improvement :)

      There’s even a task to improve the currently internal TableImpl API, which you’re probably using, and offer a more convenient meta data construction API for folks who aren’t using the code generator: https://github.com/jOOQ/jOOQ/issues/7444. So, your feedback directly on that issue would be very appreciated (lessons learned, missing features, best practices, etc.)

      Also a way to offer multi version meta data, i.e. meta data that is capable of modelling multiple versions of a schema would be terrific, both as generated code as well as via the above meta data construction API: https://github.com/jOOQ/jOOQ/issues/9626

      And finally, what I wish to do with jOOQ in the mid-term, is integrate much more tightly with schema version control, possibly roll jOOQ’s own, which is very much aware of generated code. I think that what I have in mind here will convince even you, eventually, though it’s far from ready :)

I love to work with jOOQ, and want to ask a question about why jOOQ does not have (maybe in addition to the Schema-To-Model codegen it already has) also a “Model-To-Schema” approach.

    I currently can’t think of why it could not also make sense to first write Java-Models or Record-Table-Definitions, and afterwards generate SQL-Statements from them (so you would not need flyway but have something similar).

But I think I can sense the reason why: the Java classes one would need to write would be pretty big, without some kind of abstraction like JPA/ORM has, where annotations in the classes mark the fields.

    1. You can always export any Meta representation in jOOQ back to DDL. Just call org.jooq.Meta::ddl and execute that. But why do you want to do it? You say, “without having some kind of abstraction”, but that’s not correct. jOOQ has the same abstraction. It’s just not annotation based. You create a runtime model of your schema in any format (generated classes, hand written classes, XML, DDL, etc.) and export that to SQL rather than writing the SQL yourself. By doing that, you stop thinking about deltas / migrations and only think about snapshots. Once you go to production, you’ll be bitten by this fact because migrations are a thing and your application doesn’t handle it yet. So, what would be the benefit given this high price?
