Frightening Facts about MySQL


So you might’ve seen Destroy All Software’s talk about JavaScript.

Here’s a similar talk (less funny, more scary) about MySQL.

Keeping things DRY: Method overloading


A good clean application design requires discipline in keeping things DRY:

Everything has to be done once.
Having to do it twice is a coincidence.
Having to do it three times is a pattern.

— An unknown wise man

Now, if you’re following the Extreme Programming rules, you know what needs to be done when you encounter a pattern:

refactor mercilessly

Because we all know what happens when you don’t.

Not DRY: Method overloading

One of the least DRY things you can do that is still acceptable is method overloading – in those languages that allow it (unlike, say, Ceylon or JavaScript). Being an internal domain-specific language, the jOOQ API makes heavy use of overloading. Consider the type Field (modelling a database column):

public interface Field<T> {

    // [...]

    Condition eq(T value);
    Condition eq(Field<T> field);
    Condition eq(Select<? extends Record1<T>> query);
    Condition eq(QuantifiedSelect<? extends Record1<T>> query);

    Condition in(Collection<?> values);
    Condition in(T... values);
    Condition in(Field<?>... values);
    Condition in(Select<? extends Record1<T>> query);

    // [...]

}
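
To illustrate how these overloads are consumed, here is what users might write (AUTHOR and BOOK are hypothetical jOOQ-generated tables, with select() statically imported from DSL):

// A bind value, a column expression, a subquery, an IN list:
AUTHOR.ID.eq(1);                                  // eq(T)
AUTHOR.ID.eq(BOOK.AUTHOR_ID);                     // eq(Field<T>)
AUTHOR.ID.eq(select(BOOK.AUTHOR_ID).from(BOOK));  // eq(Select<? extends Record1<T>>)
AUTHOR.ID.in(1, 2, 3);                            // in(T...)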

So, in certain cases, non-DRY-ness is inevitable, and to a certain extent this also holds for the implementation of the above API. The key rule of thumb, however, is to keep the number of actual implementations as low as possible, even for overloaded methods, by having one method call another. For instance, these two methods are very similar:

Condition eq(T value);
Condition eq(Field<T> field);

The first method is a special case of the second one, where jOOQ users do not want to explicitly declare a bind variable. It is literally implemented as such:

@Override
public final Condition eq(T value) {
    return equal(value);
}

@Override
public final Condition equal(T value) {
    return equal(Utils.field(value, this));
}

@Override
public final Condition equal(Field<T> field) {
    return compare(EQUALS, nullSafe(field));
}

@Override
public final Condition compare(Comparator comparator, Field<T> field) {
    switch (comparator) {
        case IS_DISTINCT_FROM:
        case IS_NOT_DISTINCT_FROM:
            return new IsDistinctFrom<T>(this, nullSafe(field), comparator);

        default:
            return new CompareCondition(this, nullSafe(field), comparator);
    }
}

As you can see:

  • eq() is just a synonym for the legacy equal() method
  • equal(T) is a more specialised, convenience form of equal(Field<T>)
  • equal(Field<T>) is a more specialised, convenience form of compare(Comparator, Field<T>)
  • compare() finally provides access to the implementation of this API

All of these methods are also part of the public API and can be called by the API consumer directly, which is why the nullSafe() check is repeated in each method.
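
By analogy, the ne() family follows the same delegation chain, ending up in the same compare() call. Sketched here (not copied verbatim from the jOOQ sources):

@Override
public final Condition ne(T value) {
    return notEqual(value);
}

@Override
public final Condition notEqual(T value) {
    return notEqual(Utils.field(value, this));
}

@Override
public final Condition notEqual(Field<T> field) {
    return compare(NOT_EQUALS, nullSafe(field));
}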

Why all the trouble?

The answer is simple.

  • There is very little room for copy-paste errors throughout the API.
  • … because the same API has to be offered for ne, gt, ge, lt, le
  • No matter what part of the API happens to be integration-tested, the implementation itself is certainly covered by some test.
  • This way, it is extremely easy to provide users with a very rich API with lots of convenience methods, as users do not want to remember how these more general-purpose methods (like compare()) really work.

The last point is particularly important, and because of risks related to backwards-compatibility, it is not always followed in the JDK. For instance, in order to create a Java 8 Stream from an Iterable, you have to go through all this hassle:

// Aagh, my fingers hurt...
   StreamSupport.stream(iterable.spliterator(), false);
// ^^^^^^^^^^^^^                 ^^^^^^^^^^^    ^^^^^
//       |                            |           |
// Not Stream!                        |           |
//                                    |           |
// Hmm, Spliterator. Sounds like      |           |
// Iterator. But what is it? ---------+           |
//                                                |
// What's this true and false?                    |
// And do I need to care? ------------------------+

When, intuitively, you’d like to have:

// Not Enterprise enough
iterable.stream();

In other words, subtle Java 8 Streams implementation details will soon leak into a lot of client code, and many new utility functions will wrap these things again and again.
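
For example, one such wrapper might look like this (streamOf() is our own illustrative name, not a JDK method):

import java.util.stream.Stream;
import java.util.stream.StreamSupport;

public final class Streams {

    // Hide the Spliterator detour from client code
    public static <T> Stream<T> streamOf(Iterable<T> iterable) {
        return StreamSupport.stream(iterable.spliterator(), false);
    }

    private Streams() {}
}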

See Brian Goetz’s explanation on Stack Overflow for details.

On the flip side of delegating overload implementations, it is of course harder (i.e. more work) to implement such an API. This is particularly cumbersome if an API vendor also allows users to implement the API themselves (e.g. JDBC). Another issue is the length of stack traces generated by such implementations. But we’ve shown before on this blog that deep stack traces can be a sign of good quality.

Now you know why.

Takeaway

The takeaway is simple. Whenever you encounter a pattern, refactor. Find the most common denominator, factor it out into an implementation, and make sure that this implementation is hardly ever called directly, by delegating single-responsibility steps from method to method.

By following these rules, you will:

  • Have fewer bugs
  • Have a more convenient API

Happy refactoring!

Logon Triggers: The Oracle Database Magic Bullet


Imagine you want to collect detailed usage statistics to tune your Oracle database, e.g. because you want to have A-Rows and A-Time values in your execution plans. By default, Oracle only reports E-Rows and E-Time, with “E” for “Estimated”, but usually, you will care more about the “A” for “Actual”. All you have to do is connect as a sysdba user and issue the following statement:

C:\> sqlplus "/ as sysdba"

Connected to:
Oracle Database 11g Express Edition Release 11.2.0.2.0

SQL> alter system set statistics_level = all;

System altered.

But of course, you hardly ever have the required privileges to connect as sysdba or to issue ALTER SYSTEM statements. So how can we get those advanced statistics? It’s easy: use a logon trigger:

CREATE OR REPLACE TRIGGER logon_actions
AFTER LOGON
ON DATABASE
ENABLE
BEGIN
    EXECUTE IMMEDIATE 
    'ALTER SESSION SET STATISTICS_LEVEL = all';
END;
/

This will set your session’s statistics level to all every time you log on to the database. Of course, you will need to have the privilege to create such a system trigger, but maybe your DBA will allow you to specify a certain package that provides a debug level for things like these:

DECLARE
    v_loglevel VARCHAR2(100);
BEGIN
    v_loglevel := logger_package.loglevel;

    IF v_loglevel = 'DEBUG' THEN
        EXECUTE IMMEDIATE 
        'ALTER SESSION SET STATISTICS_LEVEL = all';
    END IF;
END;

To be sure that the statistics_level has been set correctly, run this query:

SELECT SID, NAME, VALUE
FROM   V$SES_OPTIMIZER_ENV
WHERE  NAME = 'statistics_level'
AND SID = (
    SELECT SID
    FROM V$MYSTAT
    WHERE ROWNUM = 1
);
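
If the trigger has fired for your session, the result should look something like this (your SID will obviously differ):

+-----+------------------+-------+
| SID | NAME             | VALUE |
+-----+------------------+-------+
|  42 | statistics_level | all   |
+-----+------------------+-------+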

To learn about how to get A-Rows and A-Time values in your execution plan, read this article here.

Happy statistics collecting!

Java 8 Friday: More Functional Relational Transformation


In the past, we’ve been providing you with a new article every Friday about what’s new in Java 8. It has been a very exciting blog series, but we would like to focus more on our core content again, which is Java and SQL. We will still blog occasionally about Java 8, but no longer every Friday (as some of you have already noticed).

In this last, short post of the Java 8 Friday series, we’d like to reiterate that we believe the future belongs to functional relational data transformation (as opposed to ORM). We’ve now spent about 20 years using the object-oriented software development paradigm, and many of us have been very dogmatic about it. In the last 10 years, however, a “new” paradigm has been gaining traction in programming communities: functional programming.

Functional programming is not that new, though. Lisp was one of the earliest functional programming languages, and XSLT and SQL are also somewhat functional (and declarative!). As big fans of SQL’s functional and declarative nature, we’re quite excited that we now have sophisticated tools in Java to transform tabular data that has been extracted from SQL databases: Streams!

SQL ResultSets are very similar to Streams

As we’ve pointed out before, JDBC ResultSets and Java 8 Streams are quite similar. This is even more true when you’re using jOOQ, which replaces the JDBC ResultSet with an org.jooq.Result. That type extends java.util.List and thus automatically inherits all Streams functionality. Consider the following query, which fetches a one-to-many relationship between AUTHOR and BOOK records:

Map<Record2<String, String>, 
    List<Record2<Integer, String>>> booksByAuthor =

// This work is performed in the database
// --------------------------------------
ctx.select(
        BOOK.ID,
        BOOK.TITLE,
        AUTHOR.FIRST_NAME,
        AUTHOR.LAST_NAME
    )
   .from(BOOK)
   .join(AUTHOR)
   .on(BOOK.AUTHOR_ID.eq(AUTHOR.ID))
   .orderBy(BOOK.ID)
   .fetch()

// This work is performed in Java memory
// -------------------------------------
   .stream()

   // Group BOOKs by AUTHOR
   .collect(groupingBy(

        // This is the grouping key      
        r -> r.into(AUTHOR.FIRST_NAME, 
                    AUTHOR.LAST_NAME),

        // This is the target data structure
        LinkedHashMap::new,

        // This is the value to be produced for each
        // group: A list of BOOK
        mapping(
            r -> r.into(BOOK.ID, BOOK.TITLE),
            toList()
        )
    ));

The fluency of the Java 8 Streams API feels very idiomatic to someone who is used to writing SQL with jOOQ. Obviously, you can also use something other than jOOQ, e.g. Spring’s JdbcTemplate, or Apache Commons DbUtils, or just wrap the JDBC ResultSet in an Iterator…
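
As a sketch of that last option (our own illustration of the idea, not an API of any of the above libraries), a ResultSet could be exposed as an Iterator of Object[] rows like this:

import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Iterator;
import java.util.NoSuchElementException;

public final class ResultSetIterator implements Iterator<Object[]> {

    private final ResultSet rs;
    private Boolean hasNext; // lookahead cache keeps hasNext() idempotent

    public ResultSetIterator(ResultSet rs) {
        this.rs = rs;
    }

    @Override
    public boolean hasNext() {
        try {
            if (hasNext == null)
                hasNext = rs.next();

            return hasNext;
        }
        catch (SQLException e) {
            throw new IllegalStateException(e);
        }
    }

    @Override
    public Object[] next() {
        if (!hasNext())
            throw new NoSuchElementException();

        // Consume the lookahead and snapshot the current row
        hasNext = null;

        try {
            Object[] row = new Object[rs.getMetaData().getColumnCount()];

            for (int i = 0; i < row.length; i++)
                row[i] = rs.getObject(i + 1);

            return row;
        }
        catch (SQLException e) {
            throw new IllegalStateException(e);
        }
    }
}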

What’s very nice about this approach, compared to ORM, is the fact that there is no magic happening at all. Every piece of mapping logic is explicit and, thanks to Java generics, fully typesafe. The type of the booksByAuthor output is complex and a bit hard to read / write in this example, but it is also fully descriptive and useful.

The same functional transformation with POJOs

If you aren’t too happy with using jOOQ’s Record2 tuple types, no problem. You can specify your own data transfer objects like so:

static class Book {
    public int id;
    public String title;

    @Override
    public String toString() { ... }

    @Override
    public int hashCode() { ... }

    @Override
    public boolean equals(Object obj) { ... }
}

static class Author {
    public String firstName;
    public String lastName;

    @Override
    public String toString() { ... }

    @Override
    public int hashCode() { ... }

    @Override
    public boolean equals(Object obj) { ... }
}

With the above DTOs, you can now use jOOQ’s built-in POJO mapping to transform the jOOQ records into your own domain classes:

Map<Author, List<Book>> booksByAuthor =
ctx.select(
        BOOK.ID,
        BOOK.TITLE,
        AUTHOR.FIRST_NAME,
        AUTHOR.LAST_NAME
    )
   .from(BOOK)
   .join(AUTHOR)
   .on(BOOK.AUTHOR_ID.eq(AUTHOR.ID))
   .orderBy(BOOK.ID)
   .fetch()
   .stream()
   .collect(groupingBy(

        // This is the grouping key      
        r -> r.into(Author.class),
        LinkedHashMap::new,

        // This is the grouping value list
        mapping(
            r -> r.into(Book.class),
            toList()
        )
    ));

Explicitness vs. implicitness

At Data Geekery, we believe that a new era has started for Java developers. An era where Annotatiomania™ (finally!) ends and people stop assuming all that implicit behaviour through annotation magic. ORMs depend on a huge amount of specification to explain how each annotation interacts with every other annotation. It is hard to reverse-engineer (or debug!) this not-so-well-understood annotation language that JPA has brought us.

On the flip side, SQL is pretty well understood. Tables are an easy-to-handle data structure, and if you need to transform those tables into something more object-oriented, or more hierarchically structured, you can simply apply functions to those tables and group values yourself! By grouping those values explicitly, you stay in full control of your mapping, just as with jOOQ, you stay in full control of your SQL.

This is why we believe that in the next 5 years, ORMs will lose relevance, and people will start embracing explicit, stateless, and magicless data transformation techniques again, using Java 8 Streams.

PostgreSQL’s Table-Valued Functions


Table-valued functions are an awesome thing. Many databases support them in one way or another and so does PostgreSQL. In PostgreSQL, (almost) everything is a table. For instance, we can write:

CREATE OR REPLACE FUNCTION 
    f_1 (v1 INTEGER, v2 OUT INTEGER)
AS $$
BEGIN
    v2 := v1;
END
$$ LANGUAGE plpgsql;

… and believe it or not, this is a table! We can write:

select * from f_1(1);

And the above will return:

+----+
| v2 |
+----+
|  1 |
+----+

It’s kind of intuitive if you think about it. We’re just pushing out a single record with a single column. If we wanted two columns, we could’ve written:

CREATE OR REPLACE FUNCTION 
    f_2 (v1 INTEGER, v2 OUT INTEGER, v3 OUT INTEGER)
AS $$
BEGIN
    v2 := v1;
    v3 := v1 + 1;
END
$$ LANGUAGE plpgsql;

… and then:

select * from f_2(1);

And the above will return:

+----+----+
| v2 | v3 |
+----+----+
|  1 |  2 |
+----+----+

That’s useful, but those are just single records. What if we wanted to produce a whole table? It’s easy: just change your functions to return TABLE types instead of using OUT parameters:

CREATE OR REPLACE FUNCTION f_3 (v1 INTEGER)
RETURNS TABLE(v2 INTEGER, v3 INTEGER)
AS $$
BEGIN
    RETURN QUERY
    SELECT *
    FROM (
        VALUES(v1, v1 + 1), 
              (v1 * 2, (v1 + 1) * 2)
    ) t(a, b);
END
$$ LANGUAGE plpgsql;

When selecting from the above very useful function, we’ll get a table like so:

select * from f_3(1);

And the above will return:

+----+----+
| v2 | v3 |
+----+----+
|  1 |  2 |
|  2 |  4 |
+----+----+

And we can LATERAL join that function to other tables if we want:

select *
from book, lateral f_3(book.id)

… which might yield, for example:

+----+--------------+----+----+
| id | title        | v2 | v3 |
+----+--------------+----+----+
|  1 | 1984         |  1 |  2 |
|  1 | 1984         |  2 |  4 |
|  2 | Animal Farm  |  2 |  3 |
|  2 | Animal Farm  |  4 |  6 |
+----+--------------+----+----+

In fact, it appears that the keyword LATERAL is optional in this case, at least for PostgreSQL.

Table-valued functions are very powerful!

Discovering table-valued functions

From jOOQ’s schema reverse-engineering perspective, things might get a bit tricky, as can be seen in this Stack Overflow question. PostgreSQL deals with OUT parameters in a very similar way as it does with TABLE return types. This can be seen in the following query against the INFORMATION_SCHEMA:

SELECT r.routine_name, r.data_type, p.parameter_name, p.data_type
FROM   information_schema.routines r
JOIN   information_schema.parameters p
USING (specific_catalog, specific_schema, specific_name);

… and the output:

routine_name | data_type | parameter_name | data_type
-------------+-----------+----------------+----------
f_1          | integer   | v1             | integer
f_1          | integer   | v2             | integer
f_2          | record    | v1             | integer
f_2          | record    | v2             | integer
f_2          | record    | v3             | integer
f_3          | record    | v1             | integer
f_3          | record    | v2             | integer
f_3          | record    | v3             | integer

As you can see, functions using OUT parameters (f_2) and functions returning TABLE types (f_3) are indistinguishable from that perspective. Luckily, we can also join the pg_catalog.pg_proc table, which contains the relevant flag indicating whether a function returns a set or not:

SELECT   r.routine_name, 
         r.data_type, 
         p.parameter_name, 
         p.data_type, 
         pg_p.proretset
FROM     information_schema.routines r
JOIN     information_schema.parameters p
USING   (specific_catalog, specific_schema, specific_name)
JOIN     pg_namespace pg_n
ON       r.specific_schema = pg_n.nspname
JOIN     pg_proc pg_p
ON       pg_p.pronamespace = pg_n.oid
AND      pg_p.proname = r.routine_name
ORDER BY routine_name, parameter_name;

Now, we’re getting:

routine_name | data_type | parameter_name | data_type | proretset
-------------+-----------+----------------+-----------+----------
f_1          | integer   | v1             | integer   | f
f_1          | integer   | v2             | integer   | f
f_2          | record    | v1             | integer   | f
f_2          | record    | v2             | integer   | f
f_2          | record    | v3             | integer   | f
f_3          | record    | v1             | integer   | t
f_3          | record    | v2             | integer   | t
f_3          | record    | v3             | integer   | t

We can see that f_3 is the only function actually returning a set of records, unlike f_1 and f_2, which each return only a single record.

Now, remove all those parameters that are not OUT parameters, and you have your table type:

SELECT   r.routine_name, 
         p.parameter_name,
         p.data_type,
         row_number() OVER (
           PARTITION BY r.specific_name 
           ORDER BY p.ordinal_position
         ) AS ordinal_position
FROM     information_schema.routines r
JOIN     information_schema.parameters p
USING   (specific_catalog, specific_schema, specific_name)
JOIN     pg_namespace pg_n
ON       r.specific_schema = pg_n.nspname
JOIN     pg_proc pg_p
ON       pg_p.pronamespace = pg_n.oid
AND      pg_p.proname = r.routine_name
WHERE    pg_p.proretset
AND      p.parameter_mode = 'OUT'
ORDER BY routine_name, parameter_name;

Which will give us:

routine_name | parameter_name | data_type | ordinal_position
-------------+----------------+-----------+-----------------
f_3          | v2             | integer   |                1
f_3          | v3             | integer   |                2

How to run such queries in jOOQ?

Once the above code is generated, you can easily call the table-valued function in any jOOQ query. Consider again the BOOK example (in SQL):

select *
from book, lateral f_3(book.id)

… and with jOOQ:

DSL.using(configuration)
   .select()
   .from(BOOK, lateral(F_3.call(BOOK.ID)))
   .fetch();

The returned records then contain values for:

record.getValue(F_3.V2);
record.getValue(F_3.V3);

All that typesafety is only available in the upcoming jOOQ 3.5, for free! (SQL Server, Oracle, and HSQLDB table-valued functions are already supported!)

jOOQ is the best way to write SQL in Java

jOOQ Newsletter: July 2, 2014 – jOOLY 20% Discount Offering


Subscribe to this newsletter here

jOOLY 2014 20% Discount Offering

Have you been evaluating jOOQ for a while now, still hesitating to purchase licenses? Or are you an existing customer looking into licensing more workstations for your team? This is your chance!

Get 20% off all your jOOQ purchases in the month of jOOLY 2014!

Google has its Summer of Code; we have our month of jOOLY, where you get 20% off all purchases of jOOQ Professional and Enterprise Edition licenses. Just enter the “jOOLY” discount code with your next purchase and start writing awesome Java / SQL code! Click here to download and buy jOOQ now! The discount code is only valid until the end of jOOLY (July 31, 2014), so act quickly!

In fact, if you purchase jOOQ in the month of jOOLY (July), we’ll also include a free PDF e-book copy of SQL Performance Explained, to help you get even more out of your jOOQ experience.

This discount cannot be combined with other discounts. Only the first period of a monthly or yearly subscription is discounted. Existing subscriptions are not eligible for this discount.

Tweet of the Day

Our customers, users, and followers are sharing their love for jOOQ with the world, and we can hardly catch up with them! Here are this month’s picks:

“Henk”, who challenges the JavaEE folks to add jOOQ to the standards. It’s about time!

Roland Tepp, who sees the added value of jOOQ in “database-first” applications. That’s very true!

Chris Raastad, who … well, who puts it bluntly and is looking forward to using the “last missing features” from .NET also in Java. We’re honoured to be compared to LINQ. Thanks Chris!

Thanks for the shouts, guys! You make the jOOQ experience rock!

Community Zone – The jOOQ aficionados have been active!

The jOOQ community has been very active again in the last month. We’re happy to point out these editor’s picks from our radar:

Speaking of LINQ, you may have heard of JINQ. JINQ now also has a jOOQ integration, allowing you to query your database through the awesome Java 8 Streams API as if it were a simple collection. With jOOQ being the backing SQL implementation, you get all the advantages like typesafety, SQL transformation and standardisation, etc. So, let’s hear it for Dr. Ming-Yee Iu, from whom we’ll certainly hear more in the near future.

Instil Software from Belfast is hosting this great jOOQ and Flyway workshop on July 16, 2014. “Unfortunately,” it is already sold out, but we’re sure that this killer productivity combination will be interesting for numerous future workshops. If you’re missing out on it, read this blog post to get an overview of jOOQ and Flyway. If you’re hosting such a workshop yourself, let us know. We’ll be more than happy to advertise it in the events section of the jOOQ website.

If you haven’t seen it already, consider reading the ZeroTurnaround Java Tools and Technologies Landscape for 2014, a survey of 2164 Java professionals (slightly biased towards ZeroTurnaround’s RebelLabs audience). jOOQ is listed and compared to other Java / SQL integration patterns. We’re clearly catching up with well-established brands like MyBatis – so stay tuned for more community news as we keep growing.

Feedback zone

You’ve read to the end of this newsletter. That’s great! Did you like it? What did we do well? What can we improve? What other subjects would you like us to cover?

We’d love to hear from you, so if you want to reach out to us, just drop a message to contact@datageekery.com. Looking forward to hearing from you!

Stop Unit Testing Database Code


Writing tests that use an actual database is hard.

Period.

Now that this has been established, let’s have a look at a blog post by Marco Behler, in which he elaborates on various options for testing database code with respect to transactionality. Testing database transactions is even harder than just testing database code. Marco lists a couple of options for tweaking these tests to make them “easier” to write.

One of the options is:

3. Set the flush-mode to FlushMode.ALWAYS for your tests

(note, this is about testing Hibernate code).

Marco puts this option in parentheses, because he’s not 100% convinced whether it’s a good idea to test behaviour that differs from the productive one. Here’s our take on that:

Stop Unit Testing Database Code

By the time you start thinking about tweaking your test setup to achieve “simpler” transaction behaviour (or worse, about using mocks for your database), you’re pretty much doomed. You start creating an alternative system that heavily deviates from your productive system. This essentially means:

  • that results from tests against your test system have (almost) no meaning
  • that your tests will not cover some of the most complex aspects of your productive system
  • that you will start spending way too much time on tweaking tests rather than implementing useful business logic

Instead, focus on writing integration tests that test your business logic on a very high level, i.e. on a “service layer” level, if your architecture has such a thing. If you’re using EJB, this would probably be on a session bean level. If you’re using REST or SOAP, this would be on a REST or SOAP service level. If you’re using HTTP (the retro name for REST), it would be on an HTTP service level.

Here are the advantages of this approach:

  • You might not get 100% coverage, but you will get the 80% coverage for those parts that really matter (with 20% of the effort). Your UI (or external system) doesn’t call the database in the quirkiest of forms either. It calls your services. Why would you test anything other than your services?
  • Your tests are very easy to maintain. You have a public API to your UI (or external system). It’s formal and easy to understand. It’s a black box with well-defined input and output parameters, which makes it easy to read / write tests.

Databases are stateful. Obviously

What you have to do is let go of the idea that your database will ever participate in a “unit” (as in “unit test”). Units are pretty stateless, and thus it is very easy to write mutually independent unit tests for functional algorithms, for instance.

Databases couldn’t be any less stateless. The whole idea of a database is to manage state. And that’s very complicated and completely opposite to what any unit test can ever model. Many of the most meaningful state transitions span several database interactions, or even transactions, or maybe even services. For instance, it may be important that the CREATE_USER service invocation be immediately followed by an invocation of CHANGE_PASSWORD. You can only integration-test that on a service layer. Don’t believe it? What if CREATE_USER depends on an external LDAP system? Or complex security logic in stored procedures? Your integration test’s got that covered.
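
To make that concrete, a service-level integration test for that example might look like this (UserService and its methods are our own hypothetical illustration, not from the original post):

import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class UserServiceIT {

    // Hypothetical service interface, normally provided by your container
    interface UserService {
        long createUser(String name);
        void changePassword(long userId, String newPassword);
        boolean authenticate(long userId, String password);
    }

    private final UserService userService = lookupService();

    private static UserService lookupService() {
        // Obtain the real service from your container / test context here
        throw new UnsupportedOperationException("environment-specific");
    }

    @Test
    public void createUserThenChangePassword() {
        long id = userService.createUser("alice");
        userService.changePassword(id, "s3cret");

        // The whole chain behind these services, including stored
        // procedures or LDAP calls, is exercised against the real database
        assertTrue(userService.authenticate(id, "s3cret"));
    }
}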

Takeaway

Writing tests that use an actual database is hard.

Yes. That won’t change. But your perception may. Don’t try to tweak things around this fact. Create a well-known test database. Reset it between tests. And write integration tests on a very high level. The 20/80 cost/benefit ratio will leave you no better choice.
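
As a sketch of that “reset it between tests” idea, using Flyway’s pre-5 Java API (any migration tool will do; the URL and credentials are made up):

import org.flywaydb.core.Flyway;
import org.junit.Before;

public abstract class AbstractIntegrationTest {

    @Before
    public void resetDatabase() {
        // Rebuild the well-known test database from the migration scripts
        Flyway flyway = new Flyway();
        flyway.setDataSource("jdbc:h2:~/test-db", "sa", "");
        flyway.clean();   // drop everything
        flyway.migrate(); // re-apply all migrations
    }
}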

Stay tuned for another blog post on this blog about how we integration-test the jOOQ API against 16 actual RDBMS.

Flyway and jOOQ for Unbeatable SQL Development Productivity


When performing database migrations, we at Data Geekery recommend using jOOQ with Flyway – Database Migrations Made Easy. In this post, we’re going to look into a simple way to get started with the two frameworks.

Philosophy

There are a variety of ways in which jOOQ and Flyway can interact with each other in various development setups. In this tutorial, we’re going to show just one variant of such framework team play – a variant that we find particularly compelling for most use cases.

The general philosophy and workflow behind the following approach can be summarised as this:

  • 1. Database increment
  • 2. Database migration
  • 3. Code re-generation
  • 4. Development

The four steps above can be repeated time and again, every time you need to modify something in your database. More concretely, let’s consider:

  • 1. Database increment - You need a new column in your database, so you write the necessary DDL in a Flyway script
  • 2. Database migration - This Flyway script is now part of your deliverable, which you can share with all developers who can migrate their databases with it, the next time they check out your change
  • 3. Code re-generation - Once the database is migrated, you regenerate all jOOQ artefacts (see code generation), locally
  • 4. Development - You continue developing your business logic, writing code against the updated, generated database schema

0.1. Maven Project Configuration – Properties

The following properties are defined in our pom.xml, to be able to reuse them between plugin configurations:

<properties>
    <db.url>jdbc:h2:~/flyway-test</db.url>
    <db.username>sa</db.username>
</properties>

0.2. Maven Project Configuration – Dependencies

While jOOQ and Flyway could be used in standalone migration scripts, in this tutorial, we’ll be using Maven for the standard project setup. You will also find the source code of this tutorial on GitHub, and the full pom.xml file here.

These are the dependencies that we’re using in our Maven configuration:

<!-- We'll add the latest version of jOOQ 
     and our JDBC driver - in this case H2 -->
<dependency>
    <groupId>org.jooq</groupId>
    <artifactId>jooq</artifactId>
    <version>3.4.0</version>
</dependency>
<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <version>1.4.177</version>
</dependency>

<!-- For improved logging, we'll be using 
     log4j via slf4j to see what's going
     on during migration and code generation -->
<dependency>
    <groupId>log4j</groupId>
    <artifactId>log4j</artifactId>
    <version>1.2.16</version>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <version>1.7.5</version>
</dependency>

<!-- To ensure our code is working, we're
     using JUnit -->
<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.11</version>
    <scope>test</scope>
</dependency>

0.3. Maven Project Configuration – Plugins

After the dependencies, let’s simply add the Flyway and jOOQ Maven plugins like so. The Flyway plugin:

<plugin>
    <groupId>org.flywaydb</groupId>
    <artifactId>flyway-maven-plugin</artifactId>
    <version>3.0</version>

    <!-- Note that we're executing the Flyway
         plugin in the "generate-sources" phase -->
    <executions>
        <execution>
            <phase>generate-sources</phase>
            <goals>
                <goal>migrate</goal>
            </goals>
        </execution>
    </executions>

    <!-- Note that we need to prefix the db/migration
         path with filesystem: to prevent Flyway
         from looking for our migration scripts
         only on the classpath -->
    <configuration>
        <url>${db.url}</url>
        <user>${db.username}</user>
        <locations>
            <location>filesystem:src/main/resources/db/migration</location>
        </locations>
    </configuration>
</plugin>

The above Flyway Maven plugin configuration will read and execute all database migration scripts from src/main/resources/db/migration prior to compiling Java source code. While the official Flyway documentation suggests that migrations be done in the compile phase, the jOOQ code generator relies on such migrations having been done prior to code generation.

After the Flyway plugin, we’ll add the jOOQ Maven Plugin. For more details, please refer to the manual’s section about the code generation configuration.

<plugin>
    <groupId>org.jooq</groupId>
    <artifactId>jooq-codegen-maven</artifactId>
    <version>${org.jooq.version}</version>

    <!-- The jOOQ code generation plugin is also 
         executed in the generate-sources phase,
         prior to compilation -->
    <executions>
        <execution>
            <phase>generate-sources</phase>
            <goals>
                <goal>generate</goal>
            </goals>
        </execution>
    </executions>

    <!-- This is a minimal working configuration. 
         See the manual's section about the code
         generator for more details -->
    <configuration>
        <jdbc>
            <url>${db.url}</url>
            <user>${db.username}</user>
        </jdbc>
        <generator>
            <database>
                <includes>.*</includes>
                <inputSchema>FLYWAY_TEST</inputSchema>
            </database>
            <target>
                <packageName>org.jooq.example.flyway.db.h2</packageName>
                <directory>target/generated-sources/jooq-h2</directory>
            </target>
        </generator>
    </configuration>
</plugin>

This configuration will now read the FLYWAY_TEST schema and reverse-engineer it into the target/generated-sources/jooq-h2 directory, and within that, into the org.jooq.example.flyway.db.h2 package.

1. Database increments

Now we can start developing our database. For that, we’ll create database increment scripts, which we put into the src/main/resources/db/migration directory, as previously configured for the Flyway plugin. We’ll add these files:

  • V1__initialise_database.sql
  • V2__create_author_table.sql
  • V3__create_book_table_and_records.sql

These three scripts model our schema versions 1-3 (note the capital V!). Here are the scripts’ contents:

-- V1__initialise_database.sql
DROP SCHEMA flyway_test IF EXISTS;

CREATE SCHEMA flyway_test;
-- V2__create_author_table.sql
CREATE SEQUENCE flyway_test.s_author_id START WITH 1;

CREATE TABLE flyway_test.author (
  id INT NOT NULL,
  first_name VARCHAR(50),
  last_name VARCHAR(50) NOT NULL,
  date_of_birth DATE,
  year_of_birth INT,
  address VARCHAR(50),

  CONSTRAINT pk_t_author PRIMARY KEY (ID)
);
-- V3__create_book_table_and_records.sql
CREATE TABLE flyway_test.book (
  id INT NOT NULL,
  author_id INT NOT NULL,
  title VARCHAR(400) NOT NULL,

  CONSTRAINT pk_t_book PRIMARY KEY (id),
  CONSTRAINT fk_t_book_author_id FOREIGN KEY (author_id) REFERENCES flyway_test.author(id)
);


INSERT INTO flyway_test.author VALUES (next value for flyway_test.s_author_id, 'George', 'Orwell', '1903-06-25', 1903, null);
INSERT INTO flyway_test.author VALUES (next value for flyway_test.s_author_id, 'Paulo', 'Coelho', '1947-08-24', 1947, null);

INSERT INTO flyway_test.book VALUES (1, 1, '1984');
INSERT INTO flyway_test.book VALUES (2, 1, 'Animal Farm');
INSERT INTO flyway_test.book VALUES (3, 2, 'O Alquimista');
INSERT INTO flyway_test.book VALUES (4, 2, 'Brida');

2. Database migration and 3. Code regeneration

The above three scripts are picked up by Flyway and executed in the order of the versions. This can be seen very simply by executing:

mvn clean install

And then observing the log output from Flyway…

[INFO] --- flyway-maven-plugin:3.0:migrate (default) @ jooq-flyway-example ---
[INFO] Database: jdbc:h2:~/flyway-test (H2 1.4)
[INFO] Validated 3 migrations (execution time 00:00.004s)
[INFO] Creating Metadata table: "PUBLIC"."schema_version"
[INFO] Current version of schema "PUBLIC": <>
[INFO] Migrating schema "PUBLIC" to version 1
[INFO] Migrating schema "PUBLIC" to version 2
[INFO] Migrating schema "PUBLIC" to version 3
[INFO] Successfully applied 3 migrations to schema "PUBLIC" (execution time 00:00.073s).

… and from jOOQ on the console:

[INFO] --- jooq-codegen-maven:3.5.0-SNAPSHOT:generate (default) @ jooq-flyway-example ---
[INFO] Using this configuration:
...
[INFO] Generating schemata      : Total: 1
[INFO] Generating schema        : FlywayTest.java
[INFO] ----------------------------------------------------------
[....]
[INFO] GENERATION FINISHED!     : Total: 337.576ms, +4.299ms
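
The generated artefacts now contain classes for the AUTHOR and BOOK tables created by the migration scripts. Roughly sketched (the actual generated code is considerably more elaborate, and the table classes live in a tables subpackage):

package org.jooq.example.flyway.db.h2;

// Convenience accessors for all generated tables (sketch only)
public class Tables {
    public static final Author AUTHOR = Author.AUTHOR;
    public static final Book   BOOK   = Book.BOOK;
}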

4. Development

Note that all of the previous steps are executed automatically, every time someone adds new migration scripts to the Maven module. For instance, a team member might have committed a new migration script; you check it out, rebuild, and get the latest jOOQ-generated sources for your own development or integration-test database.

Now that these steps are done, you can proceed to write your database queries. Imagine the following test case:

import org.jooq.Result;
import org.jooq.impl.DSL;
import org.junit.Test;

import java.sql.Connection;
import java.sql.DriverManager;

import static java.util.Arrays.asList;
import static org.jooq.example.flyway.db.h2.Tables.*;
import static org.junit.Assert.assertEquals;

public class AfterMigrationTest {

    @Test
    public void testQueryingAfterMigration() throws Exception {
        try (Connection c = DriverManager.getConnection("jdbc:h2:~/flyway-test", "sa", "")) {
            Result<?> result =
            DSL.using(c)
               .select(
                   AUTHOR.FIRST_NAME,
                   AUTHOR.LAST_NAME,
                   BOOK.ID,
                   BOOK.TITLE
               )
               .from(AUTHOR)
               .join(BOOK)
               .on(AUTHOR.ID.eq(BOOK.AUTHOR_ID))
               .orderBy(BOOK.ID.asc())
               .fetch();

            assertEquals(4, result.size());
            assertEquals(asList(1, 2, 3, 4), result.getValues(BOOK.ID));
        }
    }
}

If you run mvn clean install again, the above integration test will now compile and pass!

Reiterate

The power of this approach becomes clear once you start performing database modifications this way. Let’s assume that the French guy on our team prefers to have things his way (no offense intended ;-) ):

-- V4__le_french.sql
ALTER TABLE flyway_test.book 
  ALTER COLUMN title RENAME TO le_titre;

They check it in; you check out the new database migration script and run

mvn clean install

And then observe the log output:

[INFO] --- flyway-maven-plugin:3.0:migrate (default) @ jooq-flyway-example ---
[INFO] Database: jdbc:h2:~/flyway-test (H2 1.4)
[INFO] Validated 4 migrations (execution time 00:00.005s)
[INFO] Current version of schema "PUBLIC": 3
[INFO] Migrating schema "PUBLIC" to version 4
[INFO] Successfully applied 1 migration to schema "PUBLIC" (execution time 00:00.016s).

So far so good, but later on:

[ERROR] COMPILATION ERROR :
[INFO] -------------------------------------------------------------
[ERROR] C:\...\AfterMigrationTest.java:[24,19] error: cannot find symbol
[INFO] 1 error

When we go back to our Java integration test, we can immediately see that the TITLE column is still being referenced, but it no longer exists:

public class AfterMigrationTest {

    @Test
    public void testQueryingAfterMigration() throws Exception {
        try (Connection c = DriverManager.getConnection("jdbc:h2:~/flyway-test", "sa", "")) {
            Result<?> result =
            DSL.using(c)
               .select(
                   AUTHOR.FIRST_NAME,
                   AUTHOR.LAST_NAME,
                   BOOK.ID,
                   BOOK.TITLE
                   //   ^^^^^ This column no longer exists. 
                   //   We'll have to rename it to LE_TITRE
               )
               .from(AUTHOR)
               .join(BOOK)
               .on(AUTHOR.ID.eq(BOOK.AUTHOR_ID))
               .orderBy(BOOK.ID.asc())
               .fetch();

            assertEquals(4, result.size());
            assertEquals(asList(1, 2, 3, 4), result.getValues(BOOK.ID));
        }
    }
}

Conclusion

This tutorial shows how easily you can build a rock-solid development process using Flyway and jOOQ, preventing SQL-related errors very early in your development lifecycle – immediately at compile time, rather than in production!

Visit the Flyway website and the jOOQ website.

Should I Put That Table Alias or Not?


Developers who write SQL only infrequently often get confused about when to put parentheses and/or aliases on derived tables. There has been a recent Reddit discussion about the subject, in which user Elmhurstlol was wondering why they needed to provide an alias to the derived table (the subselect with the UNION) in the following query:

SELECT AVG(price) AS AVG_PRICE
FROM (
  SELECT price from product a JOIN pc b
  ON a.model=b.model AND maker='A'
  UNION ALL
  SELECT price from product a JOIN laptop c
  ON a.model=c.model and maker='A'
) hello

The question really was about why the "hello" table alias was necessary, because often it seems not to be required.

Here’s what the SQL standard states

If in doubt, it is often useful to consult the SQL standard for the rationale behind some syntax elements. In this case, let’s consider the freely available SQL 1992 standard text (for simplicity), and see how it specifies table references:

<table reference> ::=
    <table name> [ [ AS ] <correlation name>
        [ <left paren> <derived column list> 
          <right paren> ] ]
  | <derived table> [ AS ] <correlation name>
        [ <left paren> <derived column list>
          <right paren> ]
  | <joined table>

<derived table> ::= <table subquery>

<table subquery> ::= <subquery>

<subquery> ::= <left paren> <query expression>
               <right paren>

(for more information about the awesome and completely underused derived column list feature, read this article here)

The essence of the above syntax specification is this:

  • A derived table MUST always be aliased
  • The AS keyword is optional, for improved readability
  • The parentheses MUST always be put around subqueries

Following these rules, you’ll be pretty safe in most SQL dialects. There is one notable deviation from the above, though:

  • Some dialects allow for unaliased derived tables. However, this is still a bad idea, because you will not be able to unambiguously qualify a column from such a derived table

Takeaway

Always provide a meaningful alias to your derived tables. As simple as that.

SQL Server ALTER TABLE SET DEFAULT


In most databases that support default values in their column DDL, it is also possible to alter that default afterwards. An Oracle example:

CREATE TABLE t (
  val NUMBER(7) DEFAULT 1 NOT NULL
);

-- Oops, wrong default, let us change it
ALTER TABLE t MODIFY val DEFAULT -1;

-- Now that is better

Unfortunately, this isn’t possible in SQL Server, where the DEFAULT column property is really a constraint, and probably one whose name you don’t know, because it was system-generated.

But luckily, jOOQ 3.4 now supports DDL and can abstract this information away from you by generating the following Transact-SQL program:

DECLARE @constraint NVARCHAR(max);
DECLARE @command NVARCHAR(max);

SELECT @constraint = name
FROM sys.default_constraints
WHERE parent_object_id = object_id('t')
AND parent_column_id = columnproperty(
    object_id('t'), 'val', 'ColumnId');

IF @constraint IS NOT NULL
BEGIN
  SET @command = 'ALTER TABLE t DROP CONSTRAINT ' 
    + @constraint;
  EXECUTE sp_executeSQL @command

  SET @command = 'ALTER TABLE t ADD CONSTRAINT ' 
    + @constraint + ' DEFAULT -1 FOR val';
  EXECUTE sp_executeSQL @command
END
ELSE
BEGIN
  SET @command = 'ALTER TABLE t ADD DEFAULT -1 FOR val';
  EXECUTE sp_executeSQL @command
END

This program will either drop and re-create the existing constraint under the same name, or create an entirely new constraint with a system-generated name.

With jOOQ, you can execute this statement as such:

DSL.using(configuration)
   .alterTable(T)
   .alter(T.VAL)
   .defaultValue(-1)
   .execute();