The Java Ecosystem’s Obsession with NonNull Annotations

I’m not well known for my love of annotations. While I do recognise that they can serve a very limited purpose in some areas (e.g. hinting stuff to the compiler or extending the language where we don’t want new keywords), I certainly don’t think they were ever meant to be used for API design.

“Unfortunately” (but this is a matter of taste), Java 8 introduced type annotations: an entirely new extension to the annotation type system, which allows you to do things like:

@Positive int positive = 1;

Thus far, I’ve seen comparable type restriction features only in languages like Ada or PL/SQL, where they are implemented in a much more rigid way, but other languages may have similar features.

The nice thing about Java 8’s implementation is the fact that the meaning of the type of the above local variable (@Positive int) is unknown to the Java compiler (and to the runtime), until you write and activate a specific compiler plugin to enforce your custom meaning. The easiest way to do that is by using the Checker Framework (and yes, we’re guilty at jOOQ. We have our own checker for SQL dialect validation). You can implement any semantics, for instance:

// This compiles because @Positive int is a subtype of int
int number = positive;

// Doesn't compile, because number might be negative
@Positive int positive2 = number;

// Doesn't compile, because -1 is negative
@Positive int positive3 = -1;
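
By the way, declaring such a type annotation is trivial. Here’s a minimal sketch (the checking logic lives in the compiler plugin, not in the annotation; the Checker Framework ships its own ready-made qualifiers, so this @Positive is just for illustration):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// TYPE_USE is the JSR-308 novelty: the annotation may appear on any
// use of a type, not just on declarations. Note that this declaration
// carries no semantics whatsoever on its own.
@Target(ElementType.TYPE_USE)
@Retention(RetentionPolicy.CLASS)
@interface Positive {}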

As you can see, using type annotations is a very strategic decision. Either you want to create hundreds of types in this parallel universe (as codebases that fully embrace the Checker Framework do), or, in my opinion, you had better leave this set of features alone entirely, because probably: YAGNI.

Unfortunately, and to the disappointment of Mike Ernst, the author of the Checker Framework (with whom I talked about this some years ago), most people abuse this new JSR-308 feature for boring and simple null checking. For instance, just recently, there was a feature request on the popular vavr library to add support for such annotations, which would help users and IDEs guarantee that vavr API methods return non-null results.

Please, no. Don’t use this atomic bomb for boring null checks.

Let me make this very clear:

Type annotations are the wrong tool to enforce nullability

– Lukas Eder, timeless

You may quote me on that. The only exception to the above is if you strategically embrace JSR-308 type annotations in every possible way and start adding annotations for all sorts of type restrictions, including the @Positive example that I’ve given. Then, yes, adding nullability annotations won’t hurt you much anymore, as your types will take 50 lines of code to declare and reference anyway. But frankly, this is an extremely niche approach to type systems that only a few general-purpose programs, let alone publicly available APIs, can profit from. If in doubt, don’t use type annotations.

One important reason why a library like vavr shouldn’t add such annotations is the fact that in vavr, you can be very sure that you will hardly ever encounter null, because references in vavr are mostly one of three things:

  • A collection, which is never null, but possibly empty
  • An Option, which replaces null (it is in fact a collection of cardinality 0..1), as sketched after this list
  • A non-null reference (because in the presence of Option, all references can be expected to be non-null)
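
Here’s the sketch: a minimal example of how Option encodes absence in the type itself, so that no nullability annotation is needed (using io.vavr.control.Option; the class and method names are mine, for illustration):

import io.vavr.control.Option;

class OptionSketch {
    static String display(String possiblyNull) {
        // Absence is part of the type, not an annotation:
        return Option.of(possiblyNull)        // None if the input is null
                     .map(String::toUpperCase)
                     .getOrElse("anonymous"); // explicit default, no null
    }
}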

Of course, these rules aren’t valid for every API. There are some low-quality APIs out there that return “unexpected” null values, or that leak “internal” null values (and historically, some of the JDK APIs, unfortunately, are among these “low-quality APIs”). But vavr is not one of them, and the APIs you are designing shouldn’t be, either.

So, let go of your null fear. null is not a problem in well-designed software. You can spare yourself the work of adding a @NonNull annotation on 99% of all of your types just to shut up your IDE, in case you turned on those warnings. Focus on writing high-quality software rather than bikeshedding null.

Because: YAGNI.

And, if you haven’t had enough bikeshedding already, consider watching Stuart Marks’ entertaining talk on the subject.

JSR-308 and the Checker Framework Add Even More Typesafety to jOOQ 3.9

Java 8 introduced JSR-308, which added new annotation capabilities to the Java language. Most importantly: type annotations. It is now possible to annotate practically every use of a type, and thus to design true monsters of type declarations.

Every type can be annotated now, in order to enhance the type system in any custom way. Why, you may ask? One of the main driving use-cases for this language enhancement is the Checker Framework, an Open Source library that allows you to easily implement arbitrary compiler plugins for sophisticated type checking. The most boring and trivial example would be nullability. Consider the following code:

import org.checkerframework.checker.nullness.qual.Nullable;

class YourClassNameHere {
    void foo(Object nn, @Nullable Object nbl) {
        nn.toString(); // OK
        nbl.toString(); // Fail
        if (nbl != null)
            nbl.toString(); // OK again
    }
}

The above example can be run directly in the Checker Framework live demo console. Compiling the above code with the following annotation processor:

javac -processor org.checkerframework.checker.nullness.NullnessChecker afile.java

Yields:

Error: [dereference.of.nullable] dereference of possibly-null reference nbl:5:9

That’s pretty awesome! It works in quite a similar way as the flow-sensitive typing that is implemented in Ceylon or Kotlin, for instance, except that it is much more verbose. But it is also much, much more powerful, because the rules that make up your enhanced and annotated Java type system can be implemented directly in Java, using annotation processors! Which makes annotations Turing complete, in a way ;-)


How does this help jOOQ?

jOOQ has shipped with two types of API documentation annotations for quite a while. Those annotations are:

  • @PlainSQL – To indicate that a DSL method accepts a “plain SQL” string which may introduce SQL injection risks
  • @Support – To indicate that a DSL method works either natively with, or can be emulated for a given set of SQLDialect

An example of such a method is the CONNECT BY clause, which is supported by Cubrid, Informix, and Oracle, and which is overloaded to also accept a “plain SQL” predicate, for convenience:

@Support({ CUBRID, INFORMIX, ORACLE })
@PlainSQL
SelectConnectByConditionStep<R> connectBy(String sql);

Thus far, these annotations were there only for documentation purposes. With jOOQ 3.9, not anymore. We’re now introducing two new annotations to the jOOQ API:

  • org.jooq.Allow – to allow a set of dialects (or the @PlainSQL annotation) to be used within a given scope
  • org.jooq.Require – to require that a set of dialects be supported (via the @Support annotation) within a given scope

This is best explained by example. Let’s look at @PlainSQL first.

Restricting access to @PlainSQL

One of the biggest advantages of using the jOOQ API is that SQL injection is pretty much a thing of the past. With jOOQ being an internal domain-specific language, users really define the SQL expression tree directly in their Java code, rather than a stringified version of the statement, as with JDBC. Because the expression tree is constructed in Java code, there’s no possibility of injecting any unwanted or unforeseen expressions via user input.

There is one exception though. jOOQ doesn’t support every SQL feature in every database. This is why jOOQ ships with a rich “plain SQL” API where custom SQL strings can be embedded anywhere in the SQL expression tree. For instance, the above CONNECT BY clause:

DSL.using(configuration)
   .select(level())
   .connectBy("level < ?", bindValue)
   .fetch();

The above jOOQ query translates to the following SQL query:

SELECT level
FROM dual
CONNECT BY level < ?

As you can see, it is perfectly possible to “do it wrong” and create a SQL injection risk, just like in JDBC:

DSL.using(configuration)
   .select(level())
   .connectBy("level < " + bindValue)
   .fetch();

The difference is very subtle. With jOOQ 3.9 and the Checker Framework, it is now possible to specify the following Maven compiler configuration:

<plugin>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>3.3</version>
    <configuration>
        <source>1.8</source>
        <target>1.8</target>
        <fork>true</fork>
        <annotationProcessors>
            <annotationProcessor>org.jooq.checker.PlainSQLChecker</annotationProcessor>
        </annotationProcessors>
        <compilerArgs>
            <arg>-Xbootclasspath/p:1.8</arg>
        </compilerArgs>
    </configuration>
</plugin>

The org.jooq.checker.PlainSQLChecker will ensure that no client code using API annotated with @PlainSQL will compile. The error message we’re getting is something like:

C:\Users\lukas\workspace\jOOQ\jOOQ-examples\jOOQ-checker-framework-example\src\main\java\org\jooq\example\checker\PlainSQLCheckerTests.java:[17,17] error: [Plain SQL usage not allowed at current scope. Use @Allow.PlainSQL.]

If you know-what-you’re-doing™ and you absolutely must use jOOQ’s @PlainSQL API at a very specific location (scope), you can annotate that location (scope) with @Allow.PlainSQL and the code compiles just fine again:

// Scope: Single method.
@Allow.PlainSQL
public List<Integer> iKnowWhatImDoing() {
    return DSL.using(configuration)
              .select(level())
              .connectBy("level < ?", bindValue)
              .fetch(0, int.class);
}

Or even:

// Scope: Entire class.
@Allow.PlainSQL
public class IKnowWhatImDoing {
    public List<Integer> iKnowWhatImDoing() {
        return DSL.using(configuration)
                  .select(level())
                  .connectBy("level < ?", bindValue)
                  .fetch(0, int.class);
    }
}

Or even (but then you might just turn off the checker):

// Scope: entire package (put in package-info.java)
@Allow.PlainSQL
package org.jooq.example.checker;

The benefits are clear, though. If security is very important to you (and it should be), then just enable the org.jooq.checker.PlainSQLChecker on each developer build, or at least in CI builds, and get compilation errors whenever “accidental” @PlainSQL API usage is encountered.

Restricting access to SQLDialect

Now, much more interesting for most users is the ability to check whether the jOOQ API that is used in client code really supports your database. For instance, the above CONNECT BY clause is supported only in Oracle (if we ignore the not-so-popular Cubrid and Informix databases). Let’s assume you work with Oracle only. You want to make sure that all jOOQ API that you’re using is Oracle-compatible. You can now put the following annotation on all packages that use the jOOQ API:

// Scope: entire package (put in package-info.java)
@Allow(ORACLE)
package org.jooq.example.checker;

Now, simply activate the org.jooq.checker.SQLDialectChecker to type check your code for @Allow compliance and you’re done:

<plugin>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>3.3</version>
    <configuration>
        <source>1.8</source>
        <target>1.8</target>
        <fork>true</fork>
        <annotationProcessors>
            <annotationProcessor>org.jooq.checker.SQLDialectChecker</annotationProcessor>
        </annotationProcessors>
        <compilerArgs>
            <arg>-Xbootclasspath/p:1.8</arg>
        </compilerArgs>
    </configuration>
</plugin>

From now on, whenever you use any jOOQ API, the above checker will verify that at least one of the following three conditions holds:

  • The jOOQ API being used is not annotated with @Support
  • The jOOQ API being used is annotated with @Support, but without any explicit SQLDialect (i.e. “works on all databases”), such as DSLContext.select()
  • The jOOQ API being used is annotated with @Support and with at least one of the SQLDialects referenced from @Allow

Thus, within a package annotated as such…

// Scope: entire package (put in package-info.java)
@Allow(ORACLE)
package org.jooq.example.checker;

… using a method annotated as such is fine:

@Support({ CUBRID, INFORMIX, ORACLE })
@PlainSQL
SelectConnectByConditionStep<R> connectBy(String sql);

… but using a method annotated as such is not:

@Support({ MARIADB, MYSQL, POSTGRES })
SelectOptionStep<R> forShare();

In order to allow for this method to be used, client code could, for instance, allow the MYSQL dialect in addition to the ORACLE dialect:

// Scope: entire package (put in package-info.java)
@Allow({ MYSQL, ORACLE })
package org.jooq.example.checker;

From now on, all code in this package may refer to methods supporting either MySQL or Oracle (or both).

The @Allow annotation helps give access to API on a global level. Multiple @Allow annotations (of potentially different scope) create a disjunction of allowed dialects, as illustrated here:

// Scope: class
@Allow(MYSQL)
class MySQLAllowed {

    @Allow(ORACLE)
    void mySQLAndOracleAllowed() {
        DSL.using(configuration)
           .select()

           // Works, because Oracle is allowed
           .connectBy("...")

           // Works, because MySQL is allowed
           .forShare();
    }
}

As can be seen above, allowing for two dialects disjunctively won’t ensure that a given statement will work on either of the databases. So…

What if I want both databases to be supported?

In this case, we’ll resort to using the new @Require annotation. Multiple @Require annotations (of potentially different scope) create a conjunction of required dialects as illustrated here:

// Scope: class
@Allow
@Require({ MYSQL, ORACLE })
class MySQLAndOracleRequired {

    @Require(ORACLE)
    void onlyOracleRequired() {
        DSL.using(configuration)
           .select()

           // Works, because only Oracle is required
           .connectBy("...")

           // Doesn't work, because Oracle is required,
           // but forShare() is not supported in Oracle
           .forShare();
    }
}

How to put this in use

Let’s assume your application only needs to work with Oracle. You can now put the following annotation on your package, and you will be prevented from using any MySQL-only API, for instance, because MySQL is not allowed as a dialect in your code:

@Allow(ORACLE)
package org.jooq.example.checker;

Now, as requirements change, you want to start supporting MySQL as well in your application. Just change the package specification to the following and start fixing all compilation errors in your jOOQ usage.

// Both dialects are allowed, no others are
@Allow({ MYSQL, ORACLE })

// Both dialects are also required on each clause
@Require({ MYSQL, ORACLE })
package org.jooq.example.checker;

Defaults

By default, for any scope, the org.jooq.checker.SQLDialectChecker assumes the following (see the sketch after this list):

  • Nothing is allowed. Each @Allow annotation adds to the set of allowed dialects.
  • Everything is required. Each @Require annotation removes from the set of required dialects.
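
In other words, with the checker active and not a single annotation in scope, any dialect-specific API use is rejected. A hypothetical sketch (the error message is paraphrased):

// Default: nothing is allowed, everything is required. So this use of
// forShare(), which is annotated with @Support({ MARIADB, MYSQL, POSTGRES }),
// fails to compile until some @Allow makes one of those dialects available:
DSL.using(configuration)
   .select()
   .forShare(); // error: dialect not allowed in the current scope (paraphrased)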

See it in action

These features will be an integral part of jOOQ 3.9. They’re available simply by adding the following dependency:

<dependency>
    <!-- Use org.jooq            for the Open Source edition
             org.jooq.pro        for commercial editions,
             org.jooq.pro-java-6 for commercial editions with Java 6 support,
             org.jooq.trial      for the free trial edition -->

    <groupId>org.jooq</groupId>
    <artifactId>jooq-checker</artifactId>
    <version>${org.jooq.version}</version>
</dependency>

… and then adding the appropriate annotation processors to your compiler plugin configuration.

Cannot wait until jOOQ 3.9? You don’t have to. Just check out the 3.9.0-SNAPSHOT version from GitHub and follow the example project given here:

https://github.com/jOOQ/jOOQ/tree/master/jOOQ-examples/jOOQ-checker-framework-example

Done! From now on, when using jOOQ, you can be sure that whatever code you write will work on all the databases that you plan to support!

I think that this year’s Annotatiomaniac Champion title should go to the makers of the Checker Framework.



jOOQ 4.0’s New API Will Use Annotations Only for Truly Declarative Java/SQL Programming

SQL is the only really popular and mature 4GL (fourth-generation programming language). I.e. it is the only popular declarative language.

At the same time, SQL has proven that Turing completeness is not reserved to lesser languages like C, C++, or Java. Since SQL:1999 and its hierarchical common table expressions, SQL can safely be considered “Turing complete”. This means that any program can be written in SQL. Don’t believe it? Take, for instance, this SQL Mandelbrot set calculation, as seen in this Stack Overflow question.

[Image: a Mandelbrot set rendered entirely in SQL. Source: user Elie on http://stackoverflow.com/q/314864/521799]

Wonderful! No more need for procedural and object-oriented cruft.

How we’ve been wrong so far…

At Data Geekery (the company behind jOOQ), we love SQL. And we love Java. But one thing has always bothered us in the past: Java is not really a purely declarative language. A lot of Java language constructs are real anti-patterns for the enlightened declarative programmer. For instance:

// This is bad
for (String string : strings)
    System.out.println(string);

// This is even worse
try {
    someSQLStatements();
}
catch (SQLException e) {
    someRecovery();
}

The imperative style of the above code is hardly ever useful. Programmers need to tediously tell the Java compiler and the JVM what algorithm they meant to implement, down to each individual statement, when in reality, thanks to the JIT and other advanced optimisation techniques, they don’t really have to.

Luckily, there are annotations

Since Java 5, however, there have been farsighted people in expert groups who have added a powerful new concept to the Java language: annotations. At first, experiments were made with only a handful of limited-use annotations, like:

  • @Override
  • @SuppressWarnings

But then, even more farsighted people proceeded to combine these annotations to form completely declarative things, like a component:

@Path("/MonsterRest")
@Stateless
@WebServlet(urlPatterns = "/MonsterServlet")
@Entity
@Table(name = "MonsterEntity")
@XmlRootElement
@XmlAccessorType(XmlAccessType.FIELD)
@NamedQuery(name = "findAll", query = "SELECT c FROM Book c")
public class Book extends HttpServlet {
 
    // ======================================
    // =             Attributes             =
    // ======================================
 
    @Id
    @GeneratedValue
    private Long id;
    private String isbn;
    private Integer nbOfPage;
    private Boolean illustrations;
    private String contentLanguage;
    @Column(nullable = false)
    @Size(min = 5, max = 50)
    @XmlElement(nillable = false)
    private String title;
    private Float price;
    @Column(length = 2000)
    @Size(max = 2000)
    private String description;
    @ElementCollection
    @CollectionTable(name = "tags")
    private List<String> tags = new ArrayList<>();

    // ... (rest of the class omitted)
}

Look at this beauty. Credits to Antonio Goncalves.

However, we still think that there is a lot of unnecessary object-oriented bloat in the above. Luckily, recent innovations that make Java annotations Turing complete (or even sentient?) will now finally allow us to improve upon this situation, specifically for jOOQ, which aims to model the declarative SQL language in Java. Finally, annotations are a perfect fit!

These innovations allow us to completely re-implement the entire jOOQ 4.0 API, in order to allow for users writing SQL as follows:

@Select({
    @Column("FIRST_NAME"),
    @Column("LAST_NAME")
})
@From(
    table = @Table("AUTHOR"),
    join = @Join("BOOK"),
    predicate = @On(
        left = @Column("AUTHOR.ID"),
        op = @Eq,
        right = @Column("BOOK.AUTHOR_ID")
    )
)
@Where(
    predicate = @Predicate(
        left = @Column("BOOK.TITLE"),
        op = @Like,
        right = @Value("%Annotations in a Nutshell%")
    )
)
class SQLStatement {}

Just like JPA, this makes jOOQ now fully transparent and declarative, by using annotations. Developers will now be able to completely effortlessly translate their medium to highly complex SQL queries into the exact equivalent in jOOQ annotations.

Don’t worry, we’ll provide migration scripts to upgrade your legacy jOOQ 3.x application to 4.0. A working prototype is on the way and is expected to be released soon. Early adopter feedback is very welcome, so stay tuned for more exciting SQL goodness!

Improve Your JUnit Experience with this Annotation

JUnit is probably part of 90% of all Java projects. And the exciting thing is, we’ll soon have JUnit 5 with Java 8 support; we’ve blogged about one of its improvements recently.

Back in JUnit 4 land, there’s this little trick that I can only recommend you put in all of your unit tests. Just add this little annotation here and you’ll be much more happy:

@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class MyTests {
    ...
}

What does it do? It’s simple. It fixes JUnit’s weird default of not guaranteeing any test method order at all. Sure, not having any order in your tests might help accidentally discover some evil test inter-dependency. But usually, when you’re looking for individual tests and results, e.g. in your IDE, it’s just much, much better to be able to visually scan the test suite and find the right method.

E.g. what do you prefer? This?

[Screenshot: test results sorted alphabetically by method name]

Or this?

[Screenshot: test results in seemingly random order]

Exactly. Finally, a useful annotation. Just put the following everywhere and help make this a slightly better world:

@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class MyTests {
    ...
}

Use JUnit’s expected exceptions sparingly

Sometimes, when we get pull requests for jOOQ or our other libraries, people change the code in our unit tests to be more “idiomatic JUnit”. In particular, this means that they tend to change this (admittedly not so pretty code):

@Test
public void testValueOfIntInvalid() {
    try {
        ubyte((UByte.MIN_VALUE) - 1);
        fail();
    }
    catch (NumberFormatException e) {}
    try {
        ubyte((UByte.MAX_VALUE) + 1);
        fail();
    }
    catch (NumberFormatException e) {}
}

… into this, “better” and “cleaner” version:

@Test(expected = NumberFormatException.class)
public void testValueOfShortInvalidCase1() {
    ubyte((short) ((UByte.MIN_VALUE) - 1));
}

@Test(expected = NumberFormatException.class)
public void testValueOfShortInvalidCase2() {
    ubyte((short) ((UByte.MAX_VALUE) + 1));
}

What have we gained?

Nothing!

Sure, we already have to use the @Test annotation, so we might as well use its expected attribute, right? I claim that this is completely wrong. For two reasons. And when I say “two”, I mean “four”:

1. We’re not really gaining anything in terms of number of lines of code

Compare the semantically interesting bits:

// This:
try {
    ubyte((UByte.MIN_VALUE) - 1);
    fail("Reason for failing");
}
catch (NumberFormatException e) {}

// Vs this:
@Test(expected = NumberFormatException.class)
public void reasonForFailing() {
    ubyte((short) ((UByte.MAX_VALUE) + 1));
}

Give or take whitespace formatting, there is exactly the same number of essential semantic pieces of information:

  1. The method call on ubyte(), which is under test. This doesn’t change
  2. The message we want to pass to the failure report (in a string vs. in a method name)
  3. The exception type and the fact that it is expected

So, even from a stylistic point of view, this isn’t really a meaningful change.

2. We’ll have to refactor it back anyway

In the annotation-driven approach, all I can do is test for the exception type. I cannot make any assertions about the exception message, for instance, in case I do want to add further tests later on. Consider this:

// This:
try {
    ubyte((UByte.MIN_VALUE) - 1);
    fail("Reason for failing");
}
catch (NumberFormatException e) {
    assertEquals("some message", e.getMessage());
    assertNull(e.getCause());
    ...
}

3. The single method call is not the unit

The unit test was called testValueOfIntInvalid(). So, the semantic “unit” being tested is that of the UByte type’s valueOf() behaviour in the event of invalid input in general, not for a single value such as UByte.MIN_VALUE - 1.

It shouldn’t be split into further smaller units, just because that’s the only way we can shoehorn the @Test annotation into its limited scope of what it can do.

Hear this, TDD folks. I NEVER want to shoehorn my API design or my logic into some weird restrictions imposed by your “backwards” test framework (nothing personal, JUnit). NEVER! “My” API is 100x more important than “your” tests. This includes me not wanting to:

  • Make everything public
  • Make everything non-final
  • Make everything injectable
  • Make everything non-static
  • Use annotations. I hate annotations.

Nope. You’re wrong. Java is already a not-so-sophisticated language, but let me at least use the few features it offers in any way I want.

Don’t impose your design or semantic disfigurement on my code because of testing.

OK. I’m overreacting. I always am, in the presence of annotations. Because…

4. Annotations are always a bad choice for control flow structuring

Time and again, I’m surprised by the amount of abuse of annotations in the Java ecosystem. Annotations are good for three things:

  1. Processable documentation (e.g. @Deprecated)
  2. Custom “modifiers” on methods, members, types, etc. (e.g. @Override)
  3. Aspect oriented programming (e.g. @Transactional)

And beware that @Transactional is one of the very few generally useful aspects that ever made it to the mainstream (logging hooks being another one, or dependency injection if you absolutely must). In most cases, AOP is a niche technique for solving problems, and you generally don’t want it in ordinary programs.

It is decidedly NOT a good idea to model control flow structures, let alone test behaviour, with annotations

Yes. Java has come a long (slow) way to embrace more sophisticated programming idioms. But if you really get upset with the verbosity of the occasional try { .. } catch { .. } statement in your unit tests, then there’s a solution for you. It’s Java 8.

How to do it better with Java 8

JUnit lambda is in the works:
http://junit.org/junit-lambda.html

And they have added new functional API to the new Assertions class:
https://github.com/junit-team/junit-lambda/blob/master/junit5-api/src/main/java/org/junit/gen5/api/Assertions.java

Everything is based around the Executable functional interface:

@FunctionalInterface
public interface Executable {
    void execute() throws Exception;
}

This executable can now be used to implement code that is asserted to throw (or not to throw) an exception. See the following methods in Assertions:

public static void assertThrows(Class<? extends Throwable> expected, Executable executable) {
    expectThrows(expected, executable);
}

public static <T extends Throwable> T expectThrows(Class<T> expectedType, Executable executable) {
    try {
        executable.execute();
    }
    catch (Throwable actualException) {
        if (expectedType.isInstance(actualException)) {
            return (T) actualException;
        }
        else {
            String message = Assertions.format(expectedType.getName(), actualException.getClass().getName(),
                "unexpected exception type thrown;");
            throw new AssertionFailedError(message, actualException);
        }
    }
    throw new AssertionFailedError(
        String.format("Expected %s to be thrown, but nothing was thrown.", expectedType.getName()));
}

That’s it! Now, those of you who object to the verbosity of try { .. } catch { .. } blocks can rewrite this:

try {
    ubyte((UByte.MIN_VALUE) - 1);
    fail("Reason for failing");
}
catch (NumberFormatException e) {}

… into this:

expectThrows(NumberFormatException.class, () -> 
    ubyte((UByte.MIN_VALUE) - 1));

And if I want to do further checks on my exception, I can do so:

Exception e = expectThrows(NumberFormatException.class, () -> 
    ubyte((UByte.MIN_VALUE) - 1));
assertEquals("abc", e.getMessage());
...

Great work, JUnit lambda team!

Functional programming beats annotations every time

Annotations were abused for a lot of logic, mostly in the Java EE and Spring environments, which were all too eager to move XML configuration back into Java code. This has gone the wrong way, and the example provided here clearly shows that there is almost always a better way to write out control flow logic explicitly, using either object orientation or functional programming, than by using annotations.

In the case of @Test(expected = ...), I conclude:

Rest in peace, expected

(it is no longer part of the JUnit 5 @Test annotation, anyway)

We’re Taking Bets: This Annotation Will Soon Show up in the JDK

This recent Stack Overflow question by Yahor has intrigued me: How to ensure at Java 8 compile time that a method signature “implements” a functional interface. It’s a very good question. Let’s assume the following nominal type:

@FunctionalInterface
interface LongHasher {
    int hash(long x);
}

The type imposes a crystal clear contract: implementors must provide a single method named hash(), taking a long argument and returning an int value. When using lambdas or method references, the hash() method name is no longer relevant, and the structural type long -> int will be sufficient.
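
For instance, a quick sketch:

// The method name is irrelevant when assigning a lambda; only the
// structural type long -> int has to match the functional interface:
LongHasher hasher = x -> (int) (x ^ (x >>> 32));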

In his question, Yahor wants to enforce the above type upon three static methods (example modified by me):

class LongHashes {

    // OK
    static int xorHash(long x) {
        return (int)(x ^ (x >>> 32));
    }

    // OK
    static int continuingHash(long x) {
        return (int)(x + (x >>> 32));
    }

    // Yikes
    static int randomHash(NotLong x) {
         return xorHash(x * 0x5DEECE66DL + 0xBL);
    }
}

And he would like the Java compiler to complain in the third case, as randomHash() does not “conform” to LongHasher.

A compilation error is easy to produce, of course, by actually assigning the static methods in their functional notation (method references) to a LongHasher instance:

// OK
LongHasher good = LongHashes::xorHash;
LongHasher alsoGood = LongHashes::continuingHash;

// Yikes
LongHasher ouch = LongHashes::randomHash;

But that’s not as concise as it could / should be. The type constraint should be imposed directly on the static method.

And what’s the Java way of doing that?

With annotations, of course!

I’m going to take bets that the following pattern will show up by JDK 10:

class LongHashes {

    // Compiles
    @ReferenceableAs(LongHasher.class)
    static int xorHash(long x) {
        return (int)(x ^ (x >>> 32));
    }

    // Compiles
    @ReferenceableAs(LongHasher.class)
    static int continuingHash(long x) {
        return (int)(x + (x >>> 32));
    }

    // Doesn't compile
    @ReferenceableAs(LongHasher.class)
    static int randomHash(NotLong x) {
         return xorHash(x * 0x5DEECE66DL + 0xBL);
    }
}

In fact, you could already implement such an annotation today, and write your own annotation processor (or JSR-308 checker) to validate these methods. Looking forward to yet another great annotation!
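
A minimal sketch of what such a declaration could look like (remember: @ReferenceableAs is hypothetical and does not exist in any JDK; the actual signature validation would live in the annotation processor):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// A custom annotation processor would check that the annotated
// method's parameter and return types match the referenced
// functional interface.
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.CLASS)
@interface ReferenceableAs {
    Class<?> value(); // the functional interface to conform to
}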

So, who’s in for the bet that we’ll have this annotation by JDK 10?

How JPA 2.1 has become the new EJB 2.0

Beauty lies in the eye of the beholder. So does “ease”.

Thorben writes very good and useful articles about JPA, and he’s recently started an excellent series about JPA 2.1’s new features. Among them: result set mapping. You may know result set mapping from websites like CTMMC, or annotatiomania.com. We can summarise this mapping procedure as follows:

a) define the mapping

@SqlResultSetMapping(
    name = "BookAuthorMapping",
    entities = {
        @EntityResult(
            entityClass = Book.class,
            fields = {
                @FieldResult(name = "id", column = "id"),
                @FieldResult(name = "title", column = "title"),
                @FieldResult(name = "author", column = "author_id"),
                @FieldResult(name = "version", column = "version")}),
        @EntityResult(
            entityClass = Author.class,
            fields = {
                @FieldResult(name = "id", column = "authorId"),
                @FieldResult(name = "firstName", column = "firstName"),
                @FieldResult(name = "lastName", column = "lastName"),
                @FieldResult(name = "version", column = "authorVersion")})})

The above mapping is rather straightforward. It specifies how database columns should be mapped to entity fields and to entities as a whole. You then give this mapping a name (“BookAuthorMapping”), which you can reuse across your application, e.g. with native JPA queries.

I specifically like the fact that Thorben then writes:

If you don’t like to add such a huge block of annotations to your entity, you can also define the mapping in an XML file

… So, we’re back to replacing huge blocks of annotations with huge blocks of XML – a technique that many of us had wanted to avoid by using annotations… :-)

b) apply the mapping

Once the mapping has been statically defined on some Java type, you can then fetch those entities by applying the above BookAuthorMapping:

List<Object[]> results = this.em.createNativeQuery(
    "SELECT b.id, b.title, b.author_id, b.version, " +
    "       a.id as authorId, a.firstName, a.lastName, " + 
    "       a.version as authorVersion " + 
    "FROM Book b " +
    "JOIN Author a ON b.author_id = a.id", 
    "BookAuthorMapping"
).getResultList();

results.stream().forEach((record) -> {
    Book book = (Book)record[0];
    Author author = (Author)record[1];
});

Notice how you still have to remember the Book and Author types and cast explicitly, as no verifiable type information is really attached to anything.

The definition of “complex”

Now, the article claims that this is “complex” mapping, and no doubt, I would agree. This very simple query with only a simple join already triggers such an annotation mess if you want to really map your entities via JPA. You don’t want to see Thorben’s mapping annotations, once the queries get a little more complex. And remember, @SqlResultSetMapping is about mapping (native!) SQL results, so we’re no longer in object-graph-persistence land, we’re in SQL land, where bulk fetching, denormalising, aggregating, and other “fancy” SQL stuff is king.

The problem is here:

Java 5 introduced annotations. Annotations were originally intended to be used as “artificial modifiers”, i.e. things like static, final, protected (interestingly enough, Ceylon only knows annotations, no modifiers). This makes sense: Java language designers could introduce new modifiers / “keywords” without breaking existing code, because “real” keywords are reserved words, which are hard to introduce in a language. Remember enum?

So, good use-cases for annotations (and there are only a few) are:

  • @Override
  • @Deprecated (although, a comment attribute would’ve been fancy)
  • @FunctionalInterface

JPA (and other Java EE APIs, as well as Spring) have gone completely wacko on their use of annotations. Repeat after me:

No language @Before or @After Java ever abused annotations as much as Java

(the @Before / @After idea was lennoff’s, on reddit)

There is a strong déjà vu in me when reading the above. Do you remember the following?

No language before or after Java ever abused checked exceptions as much as Java

We will all deeply regret Java annotations by 2020.

Annotations are a big wart in the Java type system. They have an extremely limited justified use and what we Java Enterprise developers are doing these days is absolutely not within the limits of “justified”. We’re abusing them for configuration for things that we should really be writing code for.

Here’s how you’d run the same query with jOOQ (or any other API that leverages generics and type safety for SQL):

Book b = BOOK.as("b");
Author a = AUTHOR.as("a");

DSL.using(configuration)
   .select(b.ID, b.TITLE, b.AUTHOR_ID, b.VERSION,
           a.ID, a.FIRST_NAME, a.LAST_NAME,
           a.VERSION)
   .from(b)
   .join(a).on(b.AUTHOR_ID.eq(a.ID))
   .fetch()
   .forEach(record -> {
       BookRecord book = record.into(b);
       AuthorRecord author = record.into(a);
   });

This one example replaces both JPA 2.1’s mapping annotations AND the query. All the meta information about projected “entities” is already contained in the query, and thus in the Result that is produced by the fetch() method. But it doesn’t really matter; the point here is that this lambda expression …

record -> {
    BookRecord book = record.into(b);
    AuthorRecord author = record.into(a);
}

… it can be anything you want! Like the more sophisticated examples we’ve shown in previous blog posts.

Mapping can be defined ad hoc, on the fly, using functions. Functions are the ideal mappers, because they take an input, produce an output, and are completely stateless. And the best thing about functions in Java 8 is that they’re compiled by the Java compiler and can be used to type-check your mapping. And you can assign functions to variables, which allows you to reuse a function whenever a given mapping algorithm applies several times.
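
For instance, a sketch using jOOQ’s RecordMapper type (b and BookRecord are the aliased table and generated record type from the example further up; I’m assuming the generated code exists as shown there):

import org.jooq.Record;
import org.jooq.RecordMapper;

// The mapping is an ordinary, stateless function value: type-checked
// by the compiler and assignable to a variable for reuse.
RecordMapper<Record, BookRecord> toBook = r -> r.into(b);

// Reuse it wherever the same mapping applies, e.g.:
// List<BookRecord> books = result.map(toBook);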

In fact, the SQL SELECT clause itself is such a function. A function that transforms input tuples / rows into output tuples / rows, and you can adapt that function on the fly using additional expressions.

There is absolutely no way to type-check anything in the previous JPA 2.1 native SQL statement and @SqlResultSetMapping example. Imagine changing a column name:

List<Object[]> results = this.em.createNativeQuery(
    "SELECT b.id, b.title as book_title, " +
    "       b.author_id, b.version, " +
    "       a.id as authorId, a.firstName, a.lastName, " + 
    "       a.version as authorVersion " + 
    "FROM Book b " +
    "JOIN Author a ON b.author_id = a.id", 
    "BookAuthorMapping"
).getResultList();

Did you notice the difference? The b.title column was renamed to book_title. In a SQL string. Which blows up at run time! How would you remember that you also have to adapt

@FieldResult(name = "title", column = "title")

… to be

@FieldResult(name = "title", column = "book_title")

Conversely, how would you remember that once you rename the column in your @FieldResult, you’ll also have to go check wherever this "BookAuthorMapping" is used, and also change the column names in those queries?

@SqlResultSetMapping(
    name = "BookAuthorMapping",
    ...
)

Annotations are evil

You may or may not agree with some of the above. You may or may not like jOOQ as an alternative to JPA; that’s perfectly fine. But it is really hard to disagree with the fact that:

  • Java 5 introduced very useful annotations
  • Java EE / Spring heavily abused those annotations to replace XML
  • We now have a parallel universe type system in Java
  • This parallel universe type system is completely useless because the compiler cannot introspect it
  • Java SE 8 introduces functional programming and lots of type inference
  • Java SE 9-10 will introduce more awesome language features
  • It now becomes clear that what was configuration (XML or annotations) should have been code in the first place
  • JPA 2.1 has become the new EJB 2.0: Obsolete

As I said. Hard to disagree. Or in other words:

Code is much better at expressing algorithms than configuration

I’ve met Thorben personally on a number of occasions at conferences. This rant here wasn’t meant personally, Thorben :-) Your articles about JPA are very interesting. If you, readers of this article, are using JPA, please check out Thorben’s blog: http://www.thoughts-on-java.org.

In the meantime, I would love to nominate Thorben for the respected title “The Annotatiomaniac of the Year 2015”.