The Danger of Subtype Polymorphism Applied to Tuples

Java 8 has lambdas and streams, but no tuples, which is a shame. This is why we have implemented tuples in jOOλ – Java 8’s missing parts. Tuples are really boring value type containers. Essentially, they’re just an enumeration of types like these:

public class Tuple2<T1, T2> {
    public final T1 v1;
    public final T2 v2;

    public Tuple2(T1 v1, T2 v2) {
        this.v1 = v1;
        this.v2 = v2;
    }

    // [...]
}


public class Tuple3<T1, T2, T3> {
    public final T1 v1;
    public final T2 v2;
    public final T3 v3;

    public Tuple3(T1 v1, T2 v2, T3 v3) {
        this.v1 = v1;
        this.v2 = v2;
        this.v3 = v3;
    }

    // [...]
}

Writing tuple classes is a very boring task, and it’s best done using a source code generator.

Tuples in other languages and APIs

jOOλ‘s current version features tuples of degrees 0 – 16. C# and other .NET languages have tuple types of degrees 1 – 8. There’s a special library just for tuples called Javatuples, with tuples of degrees 1 – 10, whose authors went the extra mile and gave the tuples individual English names:

Unit<A> // (1 element)
Pair<A,B> // (2 elements)
Triplet<A,B,C> // (3 elements)
Quartet<A,B,C,D> // (4 elements)
Quintet<A,B,C,D,E> // (5 elements)
Sextet<A,B,C,D,E,F> // (6 elements)
Septet<A,B,C,D,E,F,G> // (7 elements)
Octet<A,B,C,D,E,F,G,H> // (8 elements)
Ennead<A,B,C,D,E,F,G,H,I> // (9 elements)
Decade<A,B,C,D,E,F,G,H,I,J> // (10 elements)

Why?

because Ennead really rings that sweet bell when I see it

Last, but not least, jOOQ also has a built-in tuple-like type, the org.jooq.Record, which serves as a base type for nice subtypes like Record7<T1, T2, T3, T4, T5, T6, T7>. jOOQ follows Scala and defines records up to a degree of 22.

Watch out when defining tuple type hierarchies

As we have seen in the previous example, Tuple3 has much code in common with Tuple2.

As we’re all massively brain-damaged by decades of object orientation and polymorphic design anti-patterns, we might think that it would be a good idea to let Tuple3<T1, T2, T3> extend Tuple2<T1, T2>, as Tuple3 just adds one more attribute to the right of Tuple2, right? So…

public class Tuple3<T1, T2, T3> extends Tuple2<T1, T2> {
    public final T3 v3;

    public Tuple3(T1 v1, T2 v2, T3 v3) {
        super(v1, v2);
        this.v3 = v3;
    }

    // [...]
}

The truth is: That’s about the worst thing you could do, for several reasons. First off, yes. Both Tuple2 and Tuple3 are tuples, so they do have some common features. It’s not a bad idea to group those features in a common super type, such as:

public class Tuple2<T1, T2> implements Tuple {
    // [...]
}
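
Such a common super type can host everything that does not depend on the degree, for instance (a sketch; the method names are assumptions, loosely modelled after jOOλ’s Tuple type):

public interface Tuple {
    // Degree-independent operations only:
    int degree();        // the number of attributes in this tuple
    Object[] toArray();  // positional, untyped access to all attributes
}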

But the degree is not one of those things. Here’s why:

Permutations

Think about all the possible tuples that you can form. If you let tuples extend each other, then a Tuple5 would also be assignment-compatible with a Tuple2, for instance. The following would compile perfectly:

Tuple2<String, Integer> t2 = tuple("A", 1, 2, 3, "B");

When letting Tuple3 extend Tuple2, it may have seemed like a good default choice to just drop the right-most attribute from the tuple in the extension chain.

But in the above example, why wouldn’t I want to extract (v2, v4) instead, such that the result is (1, 3), or maybe (v1, v3), such that the result is ("A", 2)?

There is a tremendous number of permutations of attributes that could be of interest when “reducing” a higher-degree tuple to a lower-degree one. No default of dropping the right-most attribute will be sufficiently general for all use-cases.

Type systems

Much worse than the above, there would be drastic implications for the type system, if Tuple3 extended Tuple2. Check out the jOOQ API, for instance. In jOOQ, you can safely assume the following:

// Compiles:
TABLE1.COL1.in(select(TABLE2.COL1).from(TABLE2))

// Must not compile:
TABLE1.COL1.in(select(TABLE2.COL1, TABLE2.COL2).from(TABLE2))

The first IN predicate is correct. The left hand side of the predicate has a single column (as opposed to being a row value expression). This means that the right hand side of the predicate must also operate on single-column expressions, e.g. a SELECT subquery that selects a single column (of the same type).

The second example selects too many columns, and the jOOQ API will tell the Java compiler that this is wrong.

This is guaranteed by jOOQ via the Field.in(Select) method, whose signature reads:

public interface Field<T> {
    ...
    Condition in(Select<? extends Record1<T>> select);
    ...
}

So, you can provide a SELECT statement that produces any subtype of the Record1<T> type.

Luckily, Record2 does not extend Record1

If now Record2 extended Record1, which might have seemed like a good idea at first, the second query would suddenly compile:

// This would now compile
TABLE1.COL1.in(select(TABLE2.COL1, TABLE2.COL2).from(TABLE2))

… even if it forms an invalid SQL statement. It would compile because it would generate a Select<Record2<Type1, Type2>> type, which would be a subtype of the expected Select<Record1<Type1>> from the Field.in(Select) method.
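
To see the mechanics, here is a minimal, self-contained sketch, with hypothetical, simplified stand-ins for the jOOQ types:

// Simplified stand-ins for the org.jooq types (for illustration only):
interface Record1<T1> {}
interface Record2<T1, T2> extends Record1<T1> {} // the problematic extension
interface Select<R> {}

class Sketch {

    // Mirrors Field.in(Select<? extends Record1<T>> select):
    static <T> void in(Select<? extends Record1<T>> select) {}

    public static void main(String[] args) {
        Select<Record2<String, Integer>> twoColumns = null;

        // Compiles, because Record2<String, Integer> would be a subtype
        // of Record1<String> - the two-column subquery sneaks into a
        // single-column IN predicate:
        Sketch.<String>in(twoColumns);
    }
}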

Conclusion

Tuple2 and Tuple5 types are fundamentally incompatible types. In strong type systems, you mustn’t be lured into thinking that similar or related types should also be compatible types.

Type hierarchies are something very object-oriented, and by object-oriented, I mean the flawed and over-engineered notion of object orientation that we’ve been suffering from since the 90s. Even in “the Enterprise”, most people have learned to favour Composition over Inheritance. Composition in the case of tuples means that you can easily transform a Tuple5 into a Tuple2. But you cannot assign one to the other.

In jOOλ, such a transformation can be done very easily as follows:

// Produces (1, 3)
Tuple2<String, Integer> t2_4 = 
    tuple("A", 1, 2, 3, "B")
    .map((v1, v2, v3, v4, v5) -> tuple(v2, v4));

// Produces ("A", 2)
Tuple2<String, Integer> t1_3 = 
    tuple("A", 1, 2, 3, "B")
    .map((v1, v2, v3, v4, v5) -> tuple(v1, v3));

The idea is that you operate on immutable values, and that you can easily extract parts of those values and map / recombine them to new values.

Read more

If you’ve enjoyed reading this article, you might also like to learn why recursive generics are a terrible idea (in many situations).

Comparing Imperative and Functional Algorithms in Java 8

Mario Fusco’s popular tweet impressively shows what the main difference between imperative and functional approaches to similar algorithms really is:

Both algorithms do the same thing, and they’re probably equally fast and reasonable. Yet, one of the algorithms is much easier to write and read than the other. The difference lies in the fact that in imperative programming, the different algorithmic requirements are spread throughout the code block, whereas in functional programming, each requirement has its own little line of code. Compare:

  • Green: Error handling
  • Blue: Stop criteria
  • Red: IO operations
  • Yellow: “Business logic”

Functional programming doesn’t always beat imperative programming, as displayed in other examples on the jOOQ blog.

But here’s an example from Stack Overflow by user Aurora_Titanium, where the difference is as clear as in Mario Fusco’s example:

Calculating the Duplicate Values in an Array

The idea is to calculate the sum of all those values that are duplicate in a set of values. For instance, the following array:

int[] list = new int[]{1,2,3,4,5,6,7,8,8,8,9,10};

… should yield as a result something like:

Duplicate: 8. Sum of all duplicate values: 24

The imperative approach

One of the answers by user Volkan Ozkan takes an imperative approach and calculates the sum as such:

int[] array = new int[] { 
    1, 2, 3, 4, 5, 6, 7, 8, 8, 8, 9, 10 
};

int sum = 0;
for (int j = 0; j < array.length; j++)
{
    for (int k = j + 1; k < array.length; k++) 
    {
        if (k != j && array[k] == array[j])
        {
            sum = sum + array[k];
            System.out.println(
                "Duplicate found: " 
              + array[k]
              + " " 
              + "Sum of the duplicate value is " + sum);
        }
    }
}

The approach compares each element with every later element, so it works for unsorted arrays as well, although a value that occurs n times is counted once per matching pair. It needs no auxiliary data structures, which makes it cheap on memory, if performance really matters to this algorithm.

The functional approach

If a slight decrease of performance is acceptable to you (boxing ints, collecting them into maps), and it probably is, you can replace the above difficult-to-read code with the following bit of functional Java-8-style logic (assuming static imports of Collectors.groupingBy() and Collectors.summingInt()), which communicates much more clearly what it does:

int[] array = new int[] { 
    1, 2, 3, 4, 5, 6, 7, 8, 8, 8, 9, 10 
};

IntStream.of(array)
         .boxed()
         .collect(groupingBy(i -> i))
         .entrySet()
         .stream()
         .filter(e -> e.getValue().size() > 1)
         .forEach(e -> {
             System.out.println(
                 "Duplicates found for : " 
               + e.getKey()
               + " their sum being : " 
               + e.getValue()
                  .stream()
                  .collect(summingInt(i -> i)));
         });

or, with explanations:

int[] array = new int[] { 
    1, 2, 3, 4, 5, 6, 7, 8, 8, 8, 9, 10 
};

// Create a Stream<Integer> from your data
IntStream.of(array)
         .boxed()

// Group values into a Map<Integer, List<Integer>>
         .collect(groupingBy(i -> i))

// Filter out those map values that have only 
// 1 element in their group
         .entrySet()
         .stream()
         .filter(e -> e.getValue().size() > 1)

// Print the sum for the remaining groups
         .forEach(e -> {
             System.out.println(
                 "Duplicates found for : " 
               + e.getKey()
               + " their sum being : " 
               + e.getValue()
                  .stream()
                  .collect(summingInt(i -> i)));
         });

(Note that the functional approach calculates sums for each duplicate value, not an overall sum like the imperative approach does. From the original question, this requirement wasn’t very clear.)
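
If the requirement were a single overall sum across all duplicated values, a variant might look like this (a sketch, assuming the same static imports as above, plus java.util.List; it sums every occurrence of each duplicated value, so the three 8s yield 24):

int[] array = new int[] { 
    1, 2, 3, 4, 5, 6, 7, 8, 8, 8, 9, 10 
};

int sum = IntStream.of(array)
    .boxed()
    .collect(groupingBy(i -> i))        // Map<Integer, List<Integer>>
    .values()
    .stream()
    .filter(group -> group.size() > 1)  // keep only duplicated values
    .flatMap(List::stream)
    .mapToInt(Integer::intValue)
    .sum();

// Prints: Sum of all duplicate values: 24
System.out.println("Sum of all duplicate values: " + sum);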

As we’ve stated in a previous article on our blog, the power of functional programming via an API like the Java 8 Stream API is the fact that we’re approaching the expressive power of SQL-style declarative programming. We’re no longer concerned with remembering individual array indexes and how to calculate them and store intermediate results into some buffers. We can now focus on the really interesting logic, such as: “what’s a duplicate?” or “what sum am I interested in?”

Read on about how SQL compares to Java 8 Streams:

Common SQL clauses and their equivalents in Java 8 Streams

How to use Java 8 Functional Programming to Generate an Alphabetic Sequence

I’ve stumbled upon an interesting Stack Overflow question by user “mip”. The question was:

I’m looking for a way of generating an alphabetic sequence:

A, B, C, ..., Z, AA, AB, AC, ..., ZZ.

This can be quickly recognised as the column headings of an Excel spreadsheet, which do precisely that:

(Screenshot: Excel spreadsheet column headings)

So far, none of the answers employed any Java 8 functional programming, which I accepted as a challenge. We’re going to use jOOλ, because the Java 8 Stream API does not offer enough functionality for this task. (I stand corrected – thank you Sebastian, for this interesting answer)

But first, let’s decompose the algorithm in a functional way. What we need are these components:

  1. A (reproducible) representation of the alphabet
  2. An upper bound, i.e. how many letters we want to produce. The requested sequence goes to ZZ, which means the upper bound would be 2
  3. A way to combine each letter of the alphabet with the previously generated combined letters in a cartesian product

Let’s look into some code:

1. Generating the alphabet

We could be writing the alphabet like this:

List<String> alphabet = Arrays.asList("A", "B", ..., "Z");

but that would be lame. Let’s generate it instead, using jOOλ:

List<String> alphabet = Seq
    .rangeClosed('A', 'Z')
    .map(Object::toString)
    .toList();

The above generates a “closed” range (Java-8-Stream-speak for a range with inclusive upper bound) of characters between A and Z, maps characters to strings and collects them into a list.
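
For comparison, the plain JDK Stream API can generate the same alphabet, with a bit more ceremony around the char-to-String conversion:

import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

// The same alphabet, generated with the JDK-only Stream API:
List<String> alphabet = IntStream
    .rangeClosed('A', 'Z')
    .mapToObj(c -> String.valueOf((char) c))
    .collect(Collectors.toList());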

So far so good. Now:

2. Using an upper bound

The requested sequence of characters includes:

A .. Z, AA, AB, .. ZZ

But we could easily imagine extending this requirement to produce the following, or even more:

A .. Z, AA, AB, .. ZZ, AAA, AAB, .. ZZZ

For this, we’ll use rangeClosed() again:

// 1 = A .. Z, 2 = AA .. ZZ, 3 = AAA .. ZZZ
Seq.rangeClosed(1, 2)
   .flatMap(length -> ...)
   .forEach(System.out::println);

The idea here is to produce a new stream for each individual length in the range [1 .. 2], and to flatten those streams into one single stream. flatMap() is essentially the same as a nested loop in imperative programming.

3. Combine letters in a cartesian product

This is the trickiest part: we need to cross join the alphabet with itself length - 1 times. For this, we’ll use the following stream:

Seq.rangeClosed(1, length - 1)
   .foldLeft(Seq.seq(alphabet), (s, i) -> 
       s.crossJoin(Seq.seq(alphabet))
        .map(t -> t.v1 + t.v2));

We’re using rangeClosed() again to produce values in the range [1 .. length-1]. foldLeft() is the same as reduce(), except that foldLeft() is guaranteed to go from “left to right” in a stream, without requiring the folding function to be associative. Whew.

In other, more understandable words: foldLeft() is nothing else but an imperative loop. The “seed” of the loop, i.e. the loop’s initial value, is a complete alphabet (Seq.seq(alphabet)). Now, for every value in the range [1 .. length-1], we produce a cartesian product (crossJoin()) between the letters “folded” so far and a new alphabet, and we concatenate each combination into a single new string (t.v1 and t.v2).
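
If that is hard to digest, here is the same fold, unrolled into the imperative loop it replaces (a sketch for a fixed length, reusing the alphabet from above):

// Start with the alphabet, then cross join it with itself
// (length - 1) times, concatenating the two letters each time:
Seq<String> s = Seq.seq(alphabet);
for (int i = 1; i <= length - 1; i++) {
    s = s.crossJoin(Seq.seq(alphabet))
         .map(t -> t.v1 + t.v2);
}
// For length = 2, s now contains AA, AB, .., ZZ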

That’s it!

Combining everything

The following simple program prints all the values from A .. Z, AA .. ZZ, AAA .. ZZZ to the console:

import java.util.List;

import org.jooq.lambda.Seq;

public class Test {
    public static void main(String[] args) {
        int max = 3;

        List<String> alphabet = Seq
            .rangeClosed('A', 'Z')
            .map(Object::toString)
            .toList();

        Seq.rangeClosed(1, max)
           .flatMap(length ->
               Seq.rangeClosed(1, length - 1)
                  .foldLeft(Seq.seq(alphabet), (s, i) -> 
                      s.crossJoin(Seq.seq(alphabet))
                       .map(t -> t.v1 + t.v2)))
           .forEach(System.out::println);
    }
}

Disclaimer

This is certainly not the most optimal algorithm for this particular case. One of the best implementations has been given by an unnamed user on Stack Overflow:

import static java.lang.Math.*;

private static String getString(int n) {
    char[] buf = new char[(int) floor(log(25 * (n + 1)) / log(26))];
    for (int i = buf.length - 1; i >= 0; i--) {
        n--;
        buf[i] = (char) ('A' + n % 26);
        n /= 26;
    }
    return new String(buf);
}

Needless to say, the latter runs much, much faster than the previous functional algorithm.

There is no Such Thing as Object-Relational Impedance Mismatch

Much of the ORM criticism of the last decade has missed the point, being inaccurate. By the end of this article, we will arrive at the following conclusion:

There is no significant difference between the relational (data) model and object oriented models

How do we come to this conclusion? Read on!

How we came to believe in this fallacy

Many popular bloggers and opinion leaders have missed no chance to bash ORMs for their “obvious” impedance mismatch with the relational world. N+1, inefficient queries, library complexity, leaky abstractions, all sorts of buzzwords have been employed to dismiss ORMs – often containing a lot of truth, albeit without providing a viable alternative.

But are these articles really criticising the right thing?

Few of the above articles recognise a central fact, which has been elicited eloquently and humorously by Erik Meijer and Gavin Bierman in their very interesting paper “A co-Relational Model of Data for Large Shared Data Banks“, subtitled:

Contrary to popular belief, SQL and noSQL are really just two sides of the same coin.

Or in other words: The “hierarchical” object world and the “relational” database world model the exact same thing. The only difference is the direction of the arrows that you draw in your diagrams.

Let this sink in.

  • In the relational model, children point to their parent.
  • In the hierarchical model, parents point to their children.

That’s all there is to it.

(Figure: the hierarchical vs. the relational model)

What is an ORM?

ORMs bridge the gap between the two worlds. They’re the inverters of arrows, if you will. They make sure that every “relation” in your RDBMS can be materialised as an “aggregation” or “composition” in your “hierarchical” world (this works for objects, XML, JSON, and any other format). They make sure that such materialisation is properly transacted, and that changes to individual attributes or to relational (aggregational, compositional) attributes are properly tracked and flushed back into the master model, the database – where the model is persisted. Individual ORMs differ in terms of offered features and in how much mapping logic they offer in addition to mapping individual entities to individual types.

  • Some ORMs may help you implement locking
  • Some may help you to patch model mismatches
  • Some may focus merely on a 1:1 mapping between these classes and tables

But all ORMs do one very simple thing. Ultimately, they take rows from your tables and materialise them as objects in your class model and vice-versa.
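
In code, that one very simple thing boils down to a loop like the following (a plain-JDBC sketch; the Author class and column names are illustrative assumptions, and error handling is reduced to a minimum):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

class RowMaterialiser {

    static class Author {
        String firstName;
        String lastName;
    }

    // Materialise author rows as Author objects - the essence of any ORM:
    static List<Author> materialise(Connection connection) throws SQLException {
        List<Author> authors = new ArrayList<>();

        try (Statement stmt = connection.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "SELECT first_name, last_name FROM author")) {

            while (rs.next()) {
                Author author = new Author();
                author.firstName = rs.getString("first_name");
                author.lastName = rs.getString("last_name");
                authors.add(author);
            }
        }

        return authors;
    }
}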

By the way, a very nice overview of different ORMs has recently been compiled on the Vertabelo blog.

Tables and classes are the same thing

Give or take 1-2 implementation details, an RDBMS’s table and an OO language’s class are the same thing: a specification of a set of grouped attributes, each with their associated type. Consider the following example, using SQL and Java:

SQL

CREATE TABLE author (
  first_name VARCHAR(50),
  last_name VARCHAR(50)
);

Java

class Author {
  String firstName;
  String lastName;
}

There is absolutely no conceptual difference between the two – the mapping is straightforward. The mapping is even straightforward when you consider “relations” / “compositions” between different entities / types:

SQL (let’s leave out constraints for simplicity)

CREATE TABLE author (
  id BIGINT,
  first_name VARCHAR(50),
  last_name VARCHAR(50)
);

CREATE TABLE book (
  id BIGINT,
  author_id BIGINT,
  title VARCHAR(50)
);

Java

class Author {
  Long id;
  String firstName;
  String lastName;
  Set<Book> books;
}

class Book {
  Long id;
  Author author;
  String title;
}

The implementation details are omitted (and probably account for half of the criticism). But omitting further details allows for a straightforward 1:1 mapping of individual rows from your database to your Java model, without any surprises. Most ORMs – in the Java ecosystem Hibernate in particular – have managed to implement the above idea very well, hiding away all the technical details of actually doing such a model transfer between the RDBMS and Java.
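
In JPA, for instance, declaring that mapping is a matter of a few annotations on the above classes (a sketch; fetch types, cascades, and column-name details are omitted):

import javax.persistence.*;

import java.util.Set;

@Entity
class Author {
    @Id Long id;
    String firstName;
    String lastName;

    // The "relation" from the book table, materialised as a composition:
    @OneToMany(mappedBy = "author")
    Set<Book> books;
}

@Entity
class Book {
    @Id Long id;

    // The inverted arrow: the child points to its parent via author_id:
    @ManyToOne
    @JoinColumn(name = "author_id")
    Author author;

    String title;
}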

In other words:

There is absolutely nothing wrong with this mapping approach!

Yet: There *IS* an impedance mismatch, somewhere

The “problems” that many bloggers criticise arise not from the non-existing mismatch between the two model representations (“relational” vs. “hierarchical”). The problems arise from SQL, which is a decent implementation of relational algebra.

In fact, the very same mismatch that everyone criticises is also present between the relational model itself and relational algebra.

Relational algebra has been defined in order to be able to query relations and to form new ad-hoc relations as an output of such queries. Depending on the operations and transformations that are applied, the resulting tuples may have absolutely nothing to do with the individual entities involved in a query. In other, ORM-y words: the product of relational algebra, and in particular of SQL, is of no use to the ORM, as it can no longer be further processed by the ORM, let alone persisted back into the database.

To make things “worse”, SQL today is a large super-set of the features offered by relational algebra. It has gotten much more useful than when it was conceived.

Why this mismatch still affects modern ORMs

The previous paragraphs outlined the single main reason why ORMs are really criticised, even if such criticism often doesn’t mention this exact reason:

SQL / relational algebra is not really appropriate to partially materialise relations into a client / store changes back into the database. Yet, most RDBMS offer only SQL for that job.

Back to the author / book example. When you want to load and display an author and their books to a web application’s user, you’d like to simply fetch that author and their books, call simple methods like author.add(book) as well as author.remove(book) and let some magic flush your data back into the storage system.

Thinking about the amount of SQL code to be written for such a simple CRUD task makes everyone squeal.

Life’s too short to spend time on CRUD

Perhaps QUEL might have been a better language for CRUD, but that ship has sailed. And unfortunately, because of SQL being an inappropriate language for this job, you cannot ignore that “magic” but have to know well what happens behind the scenes, e.g. by tweaking Hibernate’s fetching strategies.

Translated to SQL, this may be implemented in several ways:

1. Fetching with JOIN

Using outer joins, all the involved entities can be queried in one go:

SELECT author.*, book.*
FROM author
LEFT JOIN book ON author.id = book.author_id
WHERE author.id = ?

Advantages:

  • A single query can be issued and all the data can be transferred at once

Disadvantages:

  • The author attributes are repeated in every tuple. The client (ORM) has to de-duplicate authors first, before populating the author-book relationship. This can be particularly bad when you have many nested relations that should be fetched at once.

2. Fetching with SELECT

A single query is issued for each entity:

SELECT *
FROM author
WHERE id = ?

SELECT *
FROM book
WHERE author_id = ?

Advantages:

  • The amount of data to be transferred is minimal: Each row is transferred exactly once.

Disadvantages:

  • The number of queries that are issued may explode into the well-known N+1 problem.

Hibernate in particular knows other types of fetch strategies, although they are essentially a variant / optimisation of one of the above.

Why not use SQL MULTISET?

The ideal way to fetch all data in this case using advanced SQL would be by using MULTISET:

SELECT author.*, MULTISET (
  SELECT book.*
  FROM book
  WHERE book.author_id = author.id
) AS books
FROM author
WHERE id = ?

The above will essentially create a nested collection for each author:

first_name  last_name   books (nested collection)
--------------------------------------------------

Leonard     Cohen       title
                        --------------------------
                        Book of Mercy
                        Stranger Music
                        Book of Longing

Ernest      Hemingway   title
                        --------------------------
                        For Whom the Bell Tolls
                        The Old Man and the Sea

If you add another nested entity, it is easy to see how another MULTISET could allow for additionally nested data:

SELECT author.*, MULTISET (
  SELECT book.*, MULTISET (
    SELECT l.*
    FROM language AS l
    JOIN book_language AS bl
    ON l.id = bl.language_id
    AND book.id = bl.book_id
  ) AS languages
  FROM book
  WHERE book.author_id = author.id
) AS books
FROM author
WHERE id = ?

The outcome would now be along the lines of:

first_name  last_name   books
-----------------------------------------------------

Leonard     Cohen       title            languages
                        -----------------------------
                        Book of Mercy    language
                                         ------------
                                         en

                        Stranger Music   language
                                         ------------
                                         en
                                         de

                        Book of Longing  language
                                         ------------
                                         en
                                         fr
                                         es

Advantages:

  • A single query can materialise all eager-loaded rows with minimal bandwidth usage.

Disadvantages:

  • None.

Unfortunately, MULTISET is poorly supported by most RDBMS.

MULTISET (as well as arrays and other collection types) was formally introduced into the SQL standard as of SQL:2003, as part of an initiative to embed OO features into the SQL language. Oracle, for instance, has implemented much of it, much like Informix did, or the lesser-known CUBRID (although using vendor-specific syntax).

Other databases like PostgreSQL allow for aggregating nested rows into typed arrays, which works the same way although with a bit more syntactic effort.

MULTISET and other ORDBMS SQL features are the perfect compromise, combining the best of the “relational” model with the best of the “hierarchical” model. They allow for combining CRUD operations with querying in one go, removing the need for sophisticated ORMs, as the SQL language can be used directly to map all your data from your (relational) database to your (hierarchical) client representation with no friction.

Conclusion and call to action!

We’re living through exciting times in our industry. The elephant (SQL) in the room is still here, learning new tricks all the time. The relational model has served us well, and has been enriched with hierarchical models in various implementations. Functional programming is gaining traction, complementing object orientation in very useful ways.

Think of the glue, putting all these great technological concepts together, allowing for:

  • Storing data in the relational model
  • Materialising data in the hierarchical model
  • Processing data using functional programming

That awesome combination of techniques is hard to beat – we’ve shown how SQL and functional programming can work with jOOQ. All that’s missing – in our opinion – is better support for MULTISET and other ORDBMS features from RDBMS vendors.

Thus, we urge you, PostgreSQL developers: you’re creating one of the most innovative databases out there. Oracle is ahead of you in this area – but their implementation is too strongly tied to PL/SQL, which makes it clumsy. You’re currently missing out on one of the most awesome SQL feature sets: the ability to construct nested collections (not just arrays), and to query them efficiently. If you lead the way, other RDBMS will follow.

And we can finally stop wasting time talking about the object-relational impedance non-mismatch.

We’re Taking Bets: This Annotation Will Soon Show up in the JDK

This recent Stack Overflow question by Yahor has intrigued me: How to ensure at Java 8 compile time that a method signature “implements” a functional interface. It’s a very good question. Let’s assume the following nominal type:

@FunctionalInterface
interface LongHasher {
    int hash(long x);
}

The type imposes a crystal clear contract: implementors must provide a single method named hash(), taking a long argument and returning an int value. When using lambdas or method references, the hash() method name is no longer relevant; the structural type long -> int is sufficient.
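
For instance, both of the following conform to that structural type:

// Any long -> int function can implement the nominal LongHasher type;
// the method name hash() no longer appears anywhere:
LongHasher lambda    = x -> (int) (x ^ (x >>> 32));
LongHasher reference = Long::hashCode; // static int hashCode(long), since Java 8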

In his question, Yahor wants to enforce the above type upon three static methods (example modified by me):

class LongHashes {

    // OK
    static int xorHash(long x) {
        return (int)(x ^ (x >>> 32));
    }

    // OK
    static int continuingHash(long x) {
        return (int)(x + (x >>> 32));
    }

    // Yikes
    static int randomHash(NotLong x) {
         return xorHash(x * 0x5DEECE66DL + 0xBL);
    }
}

And he would like the Java compiler to complain in the third case, as randomHash() does not “conform” to LongHasher.

A compilation error is easy to produce, of course, by actually assigning the static methods in their functional notation (method references) to a LongHasher instance:

// OK
LongHasher good = LongHashes::xorHash;
LongHasher alsoGood = LongHashes::continuingHash;

// Yikes
LongHasher ouch = LongHashes::randomHash;

But that’s not as concise as it could / should be. The type constraint should be imposed directly on the static method.

And what’s the Java way of doing that?

With annotations, of course!

I’m going to take bets that the following pattern will show up by JDK 10:

class LongHashes {

    // Compiles
    @ReferenceableAs(LongHasher.class)
    static int xorHash(long x) {
        return (int)(x ^ (x >>> 32));
    }

    // Compiles
    @ReferenceableAs(LongHasher.class)
    static int continuingHash(long x) {
        return (int)(x + (x >>> 32));
    }

    // Doesn't compile
    @ReferenceableAs(LongHasher.class)
    static int randomHash(NotLong x) {
         return xorHash(x * 0x5DEECE66DL + 0xBL);
    }
}

In fact, you could already implement such an annotation today, and write your own annotation processor (or JSR-308 checker) to validate these methods. Looking forward to yet another great annotation!
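
Such an annotation could be declared today along these lines (a sketch; the actual conformance check would have to be implemented in the annotation processor):

import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical declaration, mirroring the usage in the examples above:
@Documented
@Retention(RetentionPolicy.SOURCE)
@Target(ElementType.METHOD)
@interface ReferenceableAs {

    // The functional interface that the annotated method must be
    // assignable to as a method reference:
    Class<?> value();
}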

So, who’s in for the bet that we’ll have this annotation by JDK 10?

Functional Programming in Java 8 with Javaslang

We’re very happy to announce a guest post on the jOOQ Blog written by Daniel Dietrich, Senior Software Engineer at HSH Nordbank, husband and father of three. He currently creates a pricing framework for financial products as project lead and lead developer.


Besides his work, he is interested in programming languages, efficient algorithms and data structures. Daniel wrote the short book Play Framework Starter on building web applications with the Play Framework for Java and Scala – and has recently been creating Javaslang, a functional component library for Java 8, which has triggered our interest in particular.


It was a really exciting moment when I heard that Java would get lambdas. The fundamental idea of using functions as a means of abstraction has its origin in the lambda calculus, 80 years ago. Now, Java developers are able to pass behavior using functions.

List<Integer> list = Arrays.asList(2, 3, 1);

// passing the comparator as lambda expression
Collections.sort(list, (i1, i2) -> i1 - i2);

Lambda expressions reduce the verbosity of Java a lot. The new Stream API closes the gap between lambdas and the Java collection library. A closer look shows that parallel Streams are used rarely, or at least with caution. A Stream cannot be reused, and it is annoying that collections have to be converted back and forth.

// stream a list, sort it and collect results
Arrays.asList(2, 3, 1)
  .stream()
  .sorted()
  .collect(Collectors.toList());
        
// a little bit shorter
Stream.of(2, 3, 1)
  .sorted()
  .collect(Collectors.toList());

// or better use an IntStream?
IntStream.of(2, 3, 1)
  .sorted()
  .collect(ArrayList::new, List::add, List::addAll);

// slightly simplified
IntStream.of(2, 3, 1)
  .sorted()
  .boxed()
  .collect(Collectors.toList());

Wow! These are quite some variants for sorting a list of integers. Generally we want to focus on the what rather than wrapping our heads around the how. This extra dimension of complexity isn’t necessary. Here is how to achieve the same result with Javaslang:

List.of(2, 3, 1).sort();

Typically, every object-oriented language has an imperative core, and so does Java. We control the flow of our applications using conditional statements and loops.

String getContent(String location) throws IOException {
    try {
        final URL url = new URL(location);
        if (!"http".equals(url.getProtocol())) {
            throw new UnsupportedOperationException(
                "Protocol is not http");
        }
        final URLConnection con = url.openConnection();
        final InputStream in = con.getInputStream();
        return readAndClose(in);
    } catch(Exception x) {
        throw new IOException(
            "Error loading location " + location, x);
    }
}

Functional languages have expressions instead of statements; we think in values. Lambda expressions help us transform values. Here is one example, using Javaslang’s Try:

Try<String> getContent(String location) {
    return Try
        .of(() -> new URL(location))
        .filter(url -> "http".equals(url.getProtocol()))
        .flatMap(url -> Try.of(url::openConnection))
        .flatMap(con -> Try.of(con::getInputStream))
        .map(this::readAndClose);
}

The result is either a Success containing the content or a Failure containing an exception. In general, this notion is more concise compared to the imperative style and leads to robust programs we are able to reason about.
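
Consuming such a value is equally explicit about the two cases (a sketch; the exact method names may vary between Javaslang versions):

Try<String> content = getContent("http://www.example.com");

if (content.isSuccess()) {
    System.out.println(content.get()); // the Success case
} else {
    // the Failure case: nothing is thrown unless we ask for the value
    System.out.println("Could not load content");
}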

I hope this brief introduction has piqued your interest in javaslang.com! Please visit the site to learn more about functional programming with Java 8 and Javaslang.

How JPA 2.1 has become the new EJB 2.0

Beauty lies in the eye of the beholder. So does “ease”.

Thorben writes very good and useful articles about JPA, and he’s recently started an excellent series about JPA 2.1’s new features. Among which: Result set mapping. You may know result set mapping from websites like CTMMC, or annotatiomania.com. We can summarise this mapping procedure as follows:

a) define the mapping

@SqlResultSetMapping(
    name = "BookAuthorMapping",
    entities = {
        @EntityResult(
            entityClass = Book.class,
            fields = {
                @FieldResult(name = "id", column = "id"),
                @FieldResult(name = "title", column = "title"),
                @FieldResult(name = "author", column = "author_id"),
                @FieldResult(name = "version", column = "version")}),
        @EntityResult(
            entityClass = Author.class,
            fields = {
                @FieldResult(name = "id", column = "authorId"),
                @FieldResult(name = "firstName", column = "firstName"),
                @FieldResult(name = "lastName", column = "lastName"),
                @FieldResult(name = "version", column = "authorVersion")})})

The above mapping is rather straightforward. It specifies how database columns should be mapped to entity fields and to entities as a whole. Then you give this mapping a name ("BookAuthorMapping"), which you can then reuse across your application, e.g. with native JPA queries.

I specifically like the fact that Thorben then writes:

If you don’t like to add such a huge block of annotations to your entity, you can also define the mapping in an XML file

… So, we’re back to replacing huge blocks of annotations by huge blocks of XML – a technique that many of us wanted to avoid using annotations… :-)

b) apply the mapping

Once the mapping has been statically defined on some Java type, you can then fetch those entities by applying the above BookAuthorMapping:

List<Object[]> results = this.em.createNativeQuery(
    "SELECT b.id, b.title, b.author_id, b.version, " +
    "       a.id as authorId, a.firstName, a.lastName, " + 
    "       a.version as authorVersion " + 
    "FROM Book b " +
    "JOIN Author a ON b.author_id = a.id", 
    "BookAuthorMapping"
).getResultList();

results.stream().forEach((record) -> {
    Book book = (Book)record[0];
    Author author = (Author)record[1];
});

Notice how you still have to remember the Book and Author types and cast explicitly as no verifiable type information is really attached to anything.

The definition of “complex”

Now, the article claims that this is “complex” mapping, and no doubt, I would agree. This very simple query with only a simple join already triggers such an annotation mess if you want to really map your entities via JPA. You don’t want to see Thorben’s mapping annotations, once the queries get a little more complex. And remember, @SqlResultSetMapping is about mapping (native!) SQL results, so we’re no longer in object-graph-persistence land, we’re in SQL land, where bulk fetching, denormalising, aggregating, and other “fancy” SQL stuff is king.

The problem is here:

Java 5 introduced annotations. Annotations were originally intended to be used as “artificial modifiers”, i.e. things like static, final, protected (interestingly enough, Ceylon only knows annotations, no modifiers). This makes sense. Java language designers could introduce new modifiers / “keywords” without breaking existing code – because “real” keywords are reserved words, which are hard to introduce in a language. Remember enum?

So, good use-cases for annotations (and there are only a few) are:

  • @Override
  • @Deprecated (although, a comment attribute would’ve been fancy)
  • @FunctionalInterface

JPA (and other Java EE APIs, as well as Spring) have gone completely wacko on their use of annotations. Repeat after me:

No language @Before or @After Java ever abused annotations as much as Java

(the @Before / @After idea was lennoff’s, on reddit)

There is a strong déjà vu in me when reading the above. Do you remember the following?

No language before or after Java ever abused checked exceptions as much as Java

We will all deeply regret Java annotations by 2020.

Annotations are a big wart in the Java type system. They have an extremely limited justified use and what we Java Enterprise developers are doing these days is absolutely not within the limits of “justified”. We’re abusing them for configuration for things that we should really be writing code for.

Here’s how you’d run the same query with jOOQ (or any other API that leverages generics and type safety for SQL):

Book b = BOOK.as("b");
Author a = AUTHOR.as("a");

DSL.using(configuration)
   .select(b.ID, b.TITLE, b.AUTHOR_ID, b.VERSION,
           a.ID, a.FIRST_NAME, a.LAST_NAME,
           a.VERSION)
   .from(b)
   .join(a).on(b.AUTHOR_ID.eq(a.ID))
   .fetch()
   .forEach(record -> {
       BookRecord book = record.into(b);
       AuthorRecord author = record.into(a);
   });

This example combines both JPA 2.1’s annotations AND querying. All the meta information about projected “entities” is already contained in the query and thus in the Result that is produced by the fetch() method. But it doesn’t really matter, the point here is that this lambda expression …

record -> {
    BookRecord book = record.into(b);
    AuthorRecord author = record.into(a);
}

… it can be anything you want, like the more sophisticated examples we’ve shown in previous blog posts.

Mapping can be defined ad-hoc, on the fly, using functions. Functions are the ideal mappers, because they take an input, produce an output, and are completely stateless. And the best thing about functions in Java 8 is, they’re compiled by the Java compiler and can be used to type-check your mapping. And you can assign functions to objects, which allows you to reuse the functions, when a given mapping algorithm can be used several times.
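
Assigning such a mapper to an object might look like this (a sketch, reusing b and a from the query above):

import java.util.function.Function;

import org.jooq.Record;

// Named, reusable mapping functions - stateless and type-checked:
Function<Record, BookRecord> toBook = r -> r.into(b);
Function<Record, AuthorRecord> toAuthor = r -> r.into(a);

// Reuse them in any query that projects the required columns:
// BookRecord book = toBook.apply(record);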

In fact, the SQL SELECT clause itself is such a function. A function that transforms input tuples / rows into output tuples / rows, and you can adapt that function on the fly using additional expressions.

There is absolutely no way to type-check anything in the previous JPA 2.1 native SQL statement and @SqlResultSetMapping example. Imagine changing a column name:

List<Object[]> results = this.em.createNativeQuery(
    "SELECT b.id, b.title as book_title, " +
    "       b.author_id, b.version, " +
    "       a.id as authorId, a.firstName, a.lastName, " + 
    "       a.version as authorVersion " + 
    "FROM Book b " +
    "JOIN Author a ON b.author_id = a.id", 
    "BookAuthorMapping"
).getResultList();

Did you notice the difference? The b.title column was renamed to book_title. In a SQL string. Which blows up at run time! How to remember that you have to also adapt

@FieldResult(name = "title", column = "title")

… to be

@FieldResult(name = "title", column = "book_title")

Conversely, how to remember that once you rename the column in your @FieldResult, you’ll also have to go check wherever this "BookAuthorMapping" is used, and also change the column names in those queries.

@SqlResultSetMapping(
    name = "BookAuthorMapping",
    ...
)

Annotations are evil

You may or may not agree with some of the above. You may or may not like jOOQ as an alternative to JPA, that’s perfectly fine. But it is really hard to disagree with the fact that:

  • Java 5 introduced very useful annotations
  • Java EE / Spring heavily abused those annotations to replace XML
  • We now have a parallel universe type system in Java
  • This parallel universe type system is completely useless because the compiler cannot introspect it
  • Java SE 8 introduces functional programming and lots of type inference
  • Java SE 9-10 will introduce more awesome language features
  • It now becomes clear that what was configuration (XML or annotations) should have been code in the first place
  • JPA 2.1 has become the new EJB 2.0: Obsolete

As I said. Hard to disagree. Or in other words:

Code is much better at expressing algorithms than configuration

I’ve met Thorben personally on a number of occasions at conferences. This rant here wasn’t meant personally, Thorben :-) Your articles about JPA are very interesting. If you, readers of this article, are using JPA, please check out Thorben’s blog: http://www.thoughts-on-java.org.

In the meantime, I would love to nominate Thorben for the respected title “The Annotatiomaniac of the Year 2015”.