Archive | java

We’re Taking Bets: This Annotation Will Soon Show up in the JDK


This recent Stack Overflow question by Yahor has intrigued me: How to ensure at Java 8 compile time that a method signature “implements” a functional interface. It’s a very good question. Let’s assume the following nominal type:

@FunctionalInterface
interface LongHasher {
    int hash(long x);
}

The type imposes a crystal clear contract. Implementors must provide a single method named hash(), taking a long argument and returning an int value. When using lambdas or method references, the hash() method name is no longer relevant; the structural type long -> int is sufficient.

In his question, Yahor wants to enforce the above type upon three static methods (example modified by me):

class LongHashes {

    // OK
    static int xorHash(long x) {
        return (int)(x ^ (x >>> 32));
    }

    // OK
    static int continuingHash(long x) {
        return (int)(x + (x >>> 32));
    }

    // Yikes
    static int randomHash(NotLong x) {
         return xorHash(x * 0x5DEECE66DL + 0xBL);
    }
}

And he would like the Java compiler to complain in the third case, as randomHash() does not “conform” to LongHasher.

A compilation error is easy to produce, of course, by actually assigning the static methods in their functional notation (method references) to a LongHasher instance:

// OK
LongHasher good = LongHashes::xorHash;
LongHasher alsoGood = LongHashes::continuingHash;

// Yikes
LongHasher ouch = LongHashes::randomHash;

But that’s not as concise as it could / should be. The type constraint should be imposed directly on the static method.

And what’s the Java way of doing that?

With annotations, of course!

I’m going to take bets that the following pattern will show up by JDK 10:

class LongHashes {

    // Compiles
    @ReferenceableAs(LongHasher.class)
    static int xorHash(long x) {
        return (int)(x ^ (x >>> 32));
    }

    // Compiles
    @ReferenceableAs(LongHasher.class)
    static int continuingHash(long x) {
        return (int)(x + (x >>> 32));
    }

    // Doesn't compile
    @ReferenceableAs(LongHasher.class)
    static int randomHash(NotLong x) {
         return xorHash(x * 0x5DEECE66DL + 0xBL);
    }
}

In fact, you could already implement such an annotation today, and write your own annotation processor (or JSR-308 checker) to validate these methods. Looking forward to yet another great annotation!
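For illustration, here is a minimal sketch of how such an annotation could be declared today. The name mirrors the hypothetical @ReferenceableAs used above; the retention policy is an assumption, and the actual conformance check would live in that custom annotation processor:

import java.lang.annotation.*;

@Target(ElementType.METHOD)
@Retention(RetentionPolicy.CLASS)
@interface ReferenceableAs {

    /** The functional interface that the annotated method must be referenceable as. */
    Class<?> value();
}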

So, who’s in for the bet that we’ll have this annotation by JDK 10?

Do Not Make This Mistake When Developing an SPI


Most of your code is private, internal, proprietary, and will never be exposed to the public. If that’s the case, you can relax – you can refactor all of your mistakes, including those that incur breaking API changes.

If you’re maintaining a public API, however, that’s not the case. If you’re maintaining a public SPI (Service Provider Interface), things get even worse.

The H2 Trigger SPI

In a recent Stack Overflow question about how to implement an H2 database trigger with jOOQ, I encountered the org.h2.api.Trigger SPI again – a simple, easy-to-implement SPI for trigger semantics. Here’s how triggers work in the H2 database:

Use the trigger

CREATE TRIGGER my_trigger
BEFORE UPDATE
ON my_table
FOR EACH ROW
CALL "com.example.MyTrigger"

Implement the trigger

public class MyTrigger implements Trigger {

    @Override
    public void init(
        Connection conn, 
        String schemaName,
        String triggerName, 
        String tableName, 
        boolean before, 
        int type
    )
    throws SQLException {}

    @Override
    public void fire(
        Connection conn, 
        Object[] oldRow, 
        Object[] newRow
    )
    throws SQLException {
        // Using jOOQ inside of the trigger, of course
        DSL.using(conn)
           .insertInto(LOG, LOG.FIELD1, LOG.FIELD2, ..)
           .values(newRow[0], newRow[1], ..)
           .execute();
    }

    @Override
    public void close() throws SQLException {}

    @Override
    public void remove() throws SQLException {}
}

The whole H2 Trigger SPI is actually rather elegant, and usually you only need to implement the fire() method.

So, how is this SPI wrong?

It is wrong very subtly. Consider the init() method. It has a boolean flag to indicate whether the trigger should fire before or after the triggering event, i.e. the UPDATE. What if suddenly, H2 were to also support INSTEAD OF triggers? Ideally, this flag would then be replaced by an enum:

public enum TriggerTiming {
    BEFORE,
    AFTER,
    INSTEAD_OF
}

But we can’t simply introduce this new enum type because the init() method shouldn’t be changed incompatibly, breaking all implementing code! With Java 8, we could at least declare an overload like this:

    default void init(
        Connection conn, 
        String schemaName,
        String triggerName, 
        String tableName, 
        TriggerTiming timing, 
        int type
    )
    throws SQLException {
        // New feature isn't supported by default
        if (timing == TriggerTiming.INSTEAD_OF)
            throw new SQLFeatureNotSupportedException();

        // Call through to old feature by default
        init(conn, schemaName, triggerName,
             tableName, timing == TriggerTiming.BEFORE, type);
    }

This would allow new implementations to handle INSTEAD_OF triggers while old implementations would still work. But it feels hairy, doesn’t it?

Now, imagine, we’d also support ENABLE / DISABLE clauses and we want to pass those values to the init() method. Or maybe, we want to handle FOR EACH ROW. There’s currently no way to do that with this SPI. So we’re going to get more and more of these overloads, which are very hard to implement. And effectively, this has happened already, as there is also org.h2.tools.TriggerAdapter, which is redundant with (but subtly different from) Trigger.

What would be a better approach, then?

The ideal approach for an SPI provider is to provide “argument objects”, like these:

public interface Trigger {
    default void init(InitArguments args)
        throws SQLException {}
    default void fire(FireArguments args)
        throws SQLException {}
    default void close(CloseArguments args)
        throws SQLException {}
    default void remove(RemoveArguments args)
        throws SQLException {}

    final class InitArguments {
        public Connection connection() { ... }
        public String schemaName() { ... }
        public String triggerName() { ... }
        public String tableName() { ... }
        /** use #timing() instead */
        @Deprecated
        public boolean before() { ... }
        public TriggerTiming timing() { ... }
        public int type() { ... }
    }

    final class FireArguments {
        public Connection connection() { ... }
        public Object[] oldRow() { ... }
        public Object[] newRow() { ... }
    }

    // These currently don't have any properties
    final class CloseArguments {}
    final class RemoveArguments {}
}

As you can see in the above example, Trigger.InitArguments has been successfully evolved with appropriate deprecation warnings. No client code was broken, and the new functionality is ready to be used, if needed. Also, close() and remove() are ready for future evolutions, even if we don’t need any arguments yet.

The overhead of this solution is at most one object allocation per method call, which shouldn’t hurt too much.
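For illustration, here is a hedged sketch of what the earlier MyTrigger implementation would look like against such an argument-object SPI (FireArguments being the hypothetical type proposed above, and the jOOQ call mirroring the earlier example):

public class MyEvolvableTrigger implements Trigger {

    @Override
    public void fire(FireArguments args) throws SQLException {
        // Everything the trigger needs is read from the argument object. If H2 adds
        // more context later, it goes into FireArguments without breaking this class.
        Object[] newRow = args.newRow();

        DSL.using(args.connection())
           .insertInto(LOG, LOG.FIELD1, LOG.FIELD2, ..)
           .values(newRow[0], newRow[1], ..)
           .execute();
    }

    // init(), close(), and remove() simply keep their empty default implementations
}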

Another example: Hibernate’s UserType

Unfortunately, this mistake happens way too often. Another prominent example is Hibernate’s hard-to-implement org.hibernate.usertype.UserType SPI:

public interface UserType {
    int[] sqlTypes();
    Class returnedClass();
    boolean equals(Object x, Object y);
    int hashCode(Object x);

    Object nullSafeGet(
        ResultSet rs, 
        String[] names, 
        SessionImplementor session, 
        Object owner
    ) throws SQLException;

    void nullSafeSet(
        PreparedStatement st, 
        Object value, 
        int index, 
        SessionImplementor session
    ) throws SQLException;

    Object deepCopy(Object value);
    boolean isMutable();
    Serializable disassemble(Object value);
    Object assemble(
        Serializable cached, 
        Object owner
    );
    Object replace(
        Object original, 
        Object target, 
        Object owner
    );
}

The SPI looks rather difficult to implement. You can probably get something working rather quickly, but will you feel at ease? Will you think that you got it right? Some examples:

  • Is there never a case where you need the owner reference also in nullSafeSet()?
  • What if your JDBC driver doesn’t support fetching values by name from ResultSet?
  • What if you need to use your user type in a CallableStatement for a stored procedure?

Another important aspect of such SPIs is the way implementors can provide values back to the framework. It is generally a bad idea to have non-void methods in SPIs, as you will never be able to change the return type of a method again. Ideally, you should have argument types that accept “outcomes”. A lot of the above methods could be replaced by a single configure() method like this:

public interface UserType {
    default void configure(ConfigureArgs args) {}

    final class ConfigureArgs {
        public void sqlTypes(int[] types) { ... }
        public void returnedClass(Class<?> clazz) { ... }
        public void mutable(boolean mutable) { ... }
    }

    // ...
}
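An implementation against such a reworked SPI could then look as simple as the following hypothetical sketch (InetAddressUserType is made up for illustration):

public class InetAddressUserType implements UserType {

    @Override
    public void configure(ConfigureArgs args) {
        // All "configuration" outcomes are pushed into the argument object, so the
        // SPI never has to change a method's return type again
        args.sqlTypes(new int[] { java.sql.Types.VARCHAR });
        args.returnedClass(java.net.InetAddress.class);
        args.mutable(false);
    }

    // ... remaining methods
}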

Another example: a SAX ContentHandler

Have a look at this example here:

public interface ContentHandler {
    void setDocumentLocator (Locator locator);
    void startDocument ();
    void endDocument();
    void startPrefixMapping (String prefix, String uri);
    void endPrefixMapping (String prefix);
    void startElement (String uri, String localName,
                       String qName, Attributes atts);
    void endElement (String uri, String localName,
                     String qName);
    void characters (char ch[], int start, int length);
    void ignorableWhitespace (char ch[], int start, int length);
    void processingInstruction (String target, String data);
    void skippedEntity (String name);
}

Some examples for drawbacks of this SPI:

  • What if you need the attributes of an element at the endElement() event? You’ll have to remember them yourself.
  • What if you’d like to know the prefix mapping uri at the endPrefixMapping() event? Or at any other event?

Clearly, SAX was optimised for speed, at a time when the JIT and the GC were still weak. Nonetheless, implementing a SAX handler is not trivial, and part of this is due to the SPI being hard to implement.

We don’t know the future

As API or SPI providers, we simply do not know the future. Right now, we may think that a given SPI is sufficient, but we’ll break it already in the next minor release. Or we don’t break it and tell our users that we cannot implement these new features.

With the above tricks, we can continue evolving our SPI without incurring any breaking changes:

  • Always pass exactly one argument object to the methods.
  • Always return void. Let implementors interact with SPI state via the argument object.
  • Use Java 8’s default methods, or provide an “empty” default implementation.


How to Access a Method’s Result Value From the Finally Block


While the JVM is a stack-based machine, the Java language doesn’t really offer you any way to access that stack. Even if, on rare occasions, it would be very useful.

An example

Method result values are put on the stack. Take the following example:

public int method() {
    if (something)
        return 1;

    ...
    if (somethingElse)
        return 2;

    ...
    return 0;
}

If we ignore the halting problem, error handling, and other academic discussions, we can say that the above method will “certainly” return one of the values 1, 2, or 0. And that value is put on the stack prior to jumping out of the method.

Now, sometimes, there may be a use case for taking some action only when a given result value is returned. People might then be lured into starting the old flame-war discussion about whether multiple return statements are EVIL™ and whether the whole method should have been phrased like this, instead:

public int method() {
    int result = 0;

    if (something)
        result = 1;

    ...
    if (somethingElse)
        result = 2;

    ...
    // Important action here prior to return
    if (result == 1337)
        log.info("hehehe ;-)");

    return result;
}

Of course, the above example is wrong, because, previously, the if (something) return 1 and if (somethingElse) return 2 statements immediately aborted method execution. In order to achieve the same with the “single-return-statement” technique, we’ll have to rewrite our code like this:

public int method() {
    int result = 0;

    if (something)
        result = 1;
    else {

        ...
        if (somethingElse)
            result = 2;
        else {
            ...
        }
    }

    // Important action here prior to return
    if (result == 1337)
        log.info("hehehe ;-)");

    return result;
}

… and, of course, we can continue bike-shedding and flame-warring about the use of curly braces and/or indentation levels, which shows we haven’t gained anything.

Accessing the return value from the stack

What we really wanted to do in our original implementation is a check just before returning to see what value is on the stack, i.e. what value will be returned. Here’s some pseudo-Java:

public int method() {
    try {
        if (something)
            return 1;

        ...
        if (somethingElse)
            return 2;

        ...
        return 0;
    }

    // Important action here prior to return
    finally {
        if (reflectionMagic.methodResult == 1337)
            log.info("hehehe ;-)");
    }
}

The good news is: Yes we can! Here’s a simple trick that can be done to achieve the above:

public int method() {
    int result = 0;

    try {
        if (something)
            return result = 1;

        ...
        if (somethingElse)
            return result = 2;

        ...
        return result = 0;
    }

    // Important action here prior to return
    finally {
        if (result == 1337)
            log.info("hehehe ;-)");
    }
}

The less good news is: You must never forget to explicitly assign the result. But every once in a while, this technique can be very useful to “access the method stack” when the Java language doesn’t really allow you to.

Of course…

Of course you could also just resort to this boring solution here:

public int method() {
    int result = actualMethod();

    if (result == 1337)
        log.info("hehehe ;-)");

    return result;
}

public int actualMethod() {
    if (something)
        return 1;

    ...
    if (somethingElse)
        return 2;

    ...
    return 0;
}

… and probably, most often, this technique is indeed better (because slightly more readable). But sometimes, you want to do more stuff than just logging in that finally block, or you want to access more than just the result value, and you don’t want to refactor the method.

Other approaches?

Now it’s your turn. What would be your preferred alternative approach (with code examples)? E.g. using a Try monad? Or aspects?

Use This Preference to Speed up Your Eclipse m2e Configuration


Who doesn’t know them: the good old JFace dialogs in Eclipse that give you a visual representation of what is really a rather simple XML or properties file. In the case of m2e, it looks like this:

[Image: m2e-default]

Unfortunately, this screen is a bit slow to load, and it doesn’t offer much value beyond checking version numbers and some other stuff that you’ll never change again. If you’re deep into using Maven, you’ll put plugins all over the place, and there is no way to visually manage the plugins in this screen. There can’t be, because plugins may contain “schemaless” XML, for which it would be impossible to design a non-XML editor.

So, most Eclipse/Maven/m2e users will immediately jump to the pom.xml representation, i.e. the tab on the far right, to edit the XML sources of their pom.xml files.

Luckily, you can change your workspace settings and make opening the XML tab the default for your pom.xml files:

[Image: m2e-change-default]

This setting, I believe, should be the default in new installations of Eclipse anyway. If you agree, show this issue some love:

https://bugs.eclipse.org/bugs/show_bug.cgi?id=465882

Functional Programming in Java 8 with Javaslang


We’re very happy to announce a guest post on the jOOQ Blog written by Daniel Dietrich, Senior Software Engineer at HSH Nordbank, husband and father of three. He currently creates a pricing framework for financial products as project lead and lead developer.

[Image: Daniel Dietrich]

Besides his work, he is interested in programming languages, efficient algorithms and data structures. Daniel wrote the short book Play Framework Starter on building web applications with the Play Framework for Java and Scala, and has recently been creating Javaslang, a functional component library for Java 8, which has triggered our interest in particular.


It was a really exciting moment when I heard that Java would get lambdas. The fundamental idea of using functions as a means of abstraction has its origin in the ‘lambda calculus’, 80 years ago. Now, Java developers are able to pass behavior using functions.

List<Integer> list = Arrays.asList(2, 3, 1);

// passing the comparator as lambda expression
Collections.sort(list, (i1, i2) -> i1 - i2);

Lambda expressions reduce the verbosity of Java a lot. The new Stream API closes the gap between lambdas and the Java collection library. Taking a closer look shows that parallel Streams are used rarely, or at least with caution. A Stream cannot be reused, and it is annoying that collections have to be converted back and forth.

// stream a list, sort it and collect results
Arrays.asList(2, 3, 1)
  .stream()
  .sorted()
  .collect(Collectors.toList());
        
// a little bit shorter
Stream.of(2, 3, 1)
  .sorted()
  .collect(Collectors.toList());

// or better use an IntStream?
IntStream.of(2, 3, 1)
  .sorted()
  .collect(ArrayList::new, List::add, List::addAll);

// slightly simplified
IntStream.of(2, 3, 1)
  .sorted()
  .boxed()
  .collect(Collectors.toList());

Wow! These are quite some variants for sorting a list of integers. Generally we want to focus on the what rather than wrapping our heads around the how. This extra dimension of complexity isn’t necessary. Here is how to achieve the same result with Javaslang:

List.of(2, 3, 1).sort();

Typically, every object-oriented language has an imperative core, and so does Java. We control the flow of our applications using conditional statements and loops.

String getContent(String location) throws IOException {
    try {
        final URL url = new URL(location);
        if (!"http".equals(url.getProtocol())) {
            throw new UnsupportedOperationException(
                "Protocol is not http");
        }
        final URLConnection con = url.openConnection();
        final InputStream in = con.getInputStream();
        return readAndClose(in);
    } catch(Exception x) {
        throw new IOException(
            "Error loading location " + location, x);
    }
}

Functional languages have expressions instead of statements; we think in values. Lambda expressions help us transform values. Here is one example, using Javaslang’s Try:

Try<String> getContent(String location) {
    return Try
        .of(() -> new URL(location))
        .filter(url -> "http".equals(url.getProtocol()))
        .flatMap(url -> Try.of(url::openConnection))
        .flatMap(con -> Try.of(con::getInputStream))
        .map(this::readAndClose);
}

The result is either a Success containing the content or a Failure containing an exception. In general, this style is more concise than the imperative one and leads to robust programs we are able to reason about.
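Consuming the result is equally straightforward. A small sketch, assuming Javaslang’s Try exposes isSuccess() and get():

Try<String> content = getContent("http://www.example.org");

// Either print the content, or fall back to a default. No try/catch needed here.
if (content.isSuccess())
    System.out.println(content.get());
else
    System.out.println("Could not load page");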

I hope this brief introduction has piqued your interest in javaslang.com! Please visit the site to learn more about functional programming with Java 8 and Javaslang.

PostgreSQL’s Best-Kept Secret, and how to Use it with jOOQ


PostgreSQL has a lot of secret data types. In recent times, PostgreSQL’s JSON and JSONB support was hyped as being the NoSQL-on-SQL secret (e.g. as advertised by ToroDB) that allows you to get the best out of both worlds. But there are many other useful data types, among them the range types.

How does the range type work?

Ranges are very useful for things like age, date, price intervals etc. Let’s assume the following table:

CREATE TABLE age_categories (
  name VARCHAR(50),
  ages INT4RANGE
);

We can now fill the above table as such:

INSERT INTO age_categories
VALUES ('Foetus',   int4range(-1, 0)),
       ('Newborn',  int4range(0, 1)),
       ('Infant',   int4range(1, 2)),
       ('Kid',      int4range(2, 12)),
       ('Teenager', int4range(12, 20)),
       ('Tween',    int4range(20, 30)),
       ('Adult',    int4range(30, 60)),
       ('Senior',   int4range(60, 90)),
       ('Ancient',  int4range(90, 999));

And query it, e.g. by taking my age:

SELECT name
FROM age_categories
WHERE range_contains_elem(ages, 33);

… yielding

name
-----
Adult

There are a lot of other useful functions involved with range calculations. For instance, we can get the age span of a set of categories. Let’s assume we want to have the age span of Kid, Teenager, Senior. Let’s query:

SELECT int4range(min(lower(ages)), max(upper(ages)))
FROM age_categories
WHERE name IN ('Kid', 'Teenager', 'Senior');

… yielding

int4range
---------
[2,90)

And now, let’s go back to fetching all the categories that are within that range:

SELECT name
FROM age_categories
WHERE range_overlaps(ages, (
  SELECT int4range(min(lower(ages)), max(upper(ages)))
  FROM age_categories
  WHERE name IN ('Kid', 'Teenager', 'Senior')
));

… yielding

name
--------
Kid
Teenager
Tween
Adult
Senior

All of this could have been implemented with values contained in two separate columns, but using ranges is just much more expressive for range arithmetic.

How to use these types with jOOQ?

jOOQ doesn’t include built-in support for these advanced data types, but it allows you to bind these data types to your own, custom representation.

A good representation of PostgreSQL’s range types in Java would be jOOλ’s org.jooq.lambda.tuple.Range type, but you could also simply use int[] or Map.Entry for ranges. When we’re using jOOλ’s Range type, the idea is to be able to run the following statements using jOOQ:

// Assuming this static import:
import static org.jooq.lambda.tuple.Tuple.*;

DSL.using(configuration)
   .insertInto(AGE_CATEGORIES)
   .columns(AGE_CATEGORIES.NAME, AGE_CATEGORIES.AGES)
   .values("Foetus",   range(-1, 0))
   .values("Newborn",  range(0, 1))
   .values("Infant",   range(1, 2))
   .values("Kid",      range(2, 12))
   .values("Teenager", range(12, 20))
   .values("Tween",    range(20, 30))
   .values("Adult",    range(30, 60))
   .values("Senior",   range(60, 90))
   .values("Ancient",  range(90, 999))
   .execute();

And querying…

DSL.using(configuration)
   .select(AGE_CATEGORIES.NAME)
   .from(AGE_CATEGORIES)
   .where(rangeContainsElem(AGE_CATEGORIES.AGES, 33))
   .fetch();

DSL.using(configuration)
   .select(AGE_CATEGORIES.NAME)
   .from(AGE_CATEGORIES)
   .where(rangeOverlaps(AGE_CATEGORIES.AGES,
      select(int4range(min(lower(AGE_CATEGORIES.AGES)),
                       max(upper(AGE_CATEGORIES.AGES))))
     .from(AGE_CATEGORIES)
     .where(AGE_CATEGORIES.NAME.in(
       "Kid", "Teenager", "Senior"
     ))
   ))
   .fetch();

As always, the idea with jOOQ is that the SQL you want to get as output is the SQL you want to write in Java, type safely. In order to be able to write the above, we will need to implement 1-2 missing pieces. First off, we’ll need to create a data type binding (org.jooq.Binding) as described in this section of the manual. The binding can be written as such, using the following Converter:

public class Int4RangeConverter 
implements Converter<Object, Range<Integer>> {
    private static final Pattern PATTERN = 
        Pattern.compile("\\[(.*?),(.*?)\\)");

    @Override
    public Range<Integer> from(Object t) {
        if (t == null)
            return null;

        Matcher m = PATTERN.matcher("" + t);
        if (m.find())
            return Tuple.range(
                Integer.valueOf(m.group(1)), 
                Integer.valueOf(m.group(2)));

        throw new IllegalArgumentException(
            "Unsupported range : " + t);
    }

    @Override
    public Object to(Range<Integer> u) {
        return u == null 
            ? null 
            : "[" + u.v1 + "," + u.v2 + ")";
    }

    @Override
    public Class<Object> fromType() {
        return Object.class;
    }

    @SuppressWarnings({ "unchecked", "rawtypes" })
    @Override
    public Class<Range<Integer>> toType() {
        return (Class) Range.class;
    }
}

… and the Converter can then be re-used in a Binding:

public class PostgresInt4RangeBinding 
implements Binding<Object, Range<Integer>> {

    @Override
    public Converter<Object, Range<Integer>> converter() {
        return new Int4RangeConverter();
    }

    @Override
    public void sql(BindingSQLContext<Range<Integer>> ctx) throws SQLException {
        ctx.render()
           .visit(DSL.val(ctx.convert(converter()).value()))
           .sql("::int4range");
    }

    // ...
}

The important thing in the binding is just that every bind variable needs to be encoded in PostgreSQL’s string format for range types (i.e. [lower, upper)) and cast explicitly to ?::int4range, that’s all.
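For completeness, here is a hedged sketch of the two most important remaining Binding methods, following the same pattern as the jOOQ manual’s binding examples (the other callbacks can simply throw SQLFeatureNotSupportedException if you don’t need them):

    @Override
    public void set(BindingSetStatementContext<Range<Integer>> ctx) throws SQLException {
        // Write the range in PostgreSQL's string format, e.g. "[2,12)"
        ctx.statement().setString(ctx.index(),
            Objects.toString(ctx.convert(converter()).value(), null));
    }

    @Override
    public void get(BindingGetResultSetContext<Range<Integer>> ctx) throws SQLException {
        // Read the string representation and convert it back to a Range<Integer>
        ctx.convert(converter()).value(ctx.resultSet().getString(ctx.index()));
    }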

You can then configure your code generator to use those types, e.g. on all columns that are called [xxx]_RANGE:

<customType>
    <name>com.example.PostgresInt4RangeBinding</name>
    <type>org.jooq.lambda.tuple.Range&lt;Integer></type>
    <binding>com.example.PostgresInt4RangeBinding</binding>
</customType>
<forcedType>
    <name>com.example.PostgresInt4RangeBinding</name>
    <expression>.*?_RANGE</expression>
</forcedType>

The last missing pieces are the functions that you need to construct and compare ranges, i.e.:

  • rangeContainsElem()
  • rangeOverlaps()
  • int4range()
  • lower()
  • upper()

The first two are written quickly:

static <T extends Comparable<T>> Condition 
    rangeContainsElem(Field<Range<T>> f1, T e) {
    return DSL.condition("range_contains_elem({0}, {1})", f1, val(e));
}

static <T extends Comparable<T>> Condition 
    rangeOverlaps(Field<Range<T>> f1, Range<T> f2) {
    return DSL.condition("range_overlaps({0}, {1})", f1, val(f2, f1.getDataType()));
}
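The remaining three (int4range(), lower(), upper()) can be sketched with the same plain SQL templating approach. The concrete Integer data types below are assumptions; adapt them to your generated code:

static Field<Integer> lower(Field<Range<Integer>> f) {
    return DSL.field("lower({0})", Integer.class, f);
}

static Field<Integer> upper(Field<Range<Integer>> f) {
    return DSL.field("upper({0})", Integer.class, f);
}

@SuppressWarnings({ "unchecked", "rawtypes" })
static Field<Range<Integer>> int4range(Field<Integer> lower, Field<Integer> upper) {
    return DSL.field("int4range({0}, {1})", (Class) Range.class, lower, upper);
}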

Conclusion

Writing an extension for jOOQ data types takes a bit of time and effort, but it is really easy to achieve. Once you’ve set up all the data types and the bindings, your generated source code will reflect the actual data types from your database, and you will be able to write powerful queries in a type safe way, directly in Java. Let’s consider again the plain SQL and the jOOQ version:

SQL

SELECT name
FROM age_categories
WHERE range_overlaps(ages, (
  SELECT int4range(min(lower(ages)), max(upper(ages)))
  FROM age_categories
  WHERE name IN ('Kid', 'Teenager', 'Senior')
));

jOOQ

DSL.using(configuration)
   .select(AGE_CATEGORIES.NAME)
   .from(AGE_CATEGORIES)
   .where(rangeOverlaps(AGE_CATEGORIES.AGES,
      select(int4range(min(lower(AGE_CATEGORIES.AGES)),
                       max(upper(AGE_CATEGORIES.AGES))))
     .from(AGE_CATEGORIES)
     .where(AGE_CATEGORIES.NAME.in(
       "Kid", "Teenager", "Senior"
     ))
   ))
   .fetch();

More information about data type bindings can be found in the jOOQ manual.

This Common API Technique is Actually an Anti-Pattern


I admit, we’ve been lured into using this technique as well. It’s just so convenient, as it allows for avoiding a seemingly unnecessary cast. It’s the following technique here:

interface SomeWrapper {
  <T> T get();
}

Now you can “type safely” assign anything from the wrapper to any type:

SomeWrapper wrapper = ...

// Obviously
Object a = wrapper.get();

// Well...
Number b = wrapper.get();

// Risky
String[][] c = wrapper.get();

// Improbable
javax.persistence.SqlResultSetMapping d = 
    wrapper.get();

This is actually the API you can use when you’re using jOOR, our reflection library that we’ve written and open sourced to improve our integration tests. With jOOR, you can write things like:

Employee[] employees = on(department)
    .call("getEmployees").get();
 
for (Employee employee : employees) {
    Street street = on(employee)
        .call("getAddress")
        .call("getStreet")
        .get();
    System.out.println(street);
}

The API is rather simple. The on() method wraps an Object or a Class. The call() methods then call a method on that object using reflection (but without requiring exact signatures, without requiring the method to be public, and without throwing any checked exceptions). And without the need for casting, you can then call get() to assign the result to any arbitrary reference type.

This is probably OK with a reflection library like jOOR, because the whole library is not really type safe. It can’t be, because it’s reflection.

But the “dirty” feeling remains. The feeling of giving the call-site a promise with respect to the resulting type, a promise that cannot be kept, and that will result in ClassCastException – a thing of the past that junior developers who have started after Java 5 and generics hardly know.

But the JDK libraries also do that…

Yes, they do. But very seldom, and only if the generic type parameter is really irrelevant. For instance, when getting Collections.emptyList(), whose implementation looks like this:

@SuppressWarnings("unchecked")
public static final <T> List<T> emptyList() {
    return (List<T>) EMPTY_LIST;
}

It’s true that the EMPTY_LIST is cast unsafely from List to List<T>, but from a semantic perspective, this is a safe cast. You cannot modify this List reference, and because it’s empty, there is no method in List<T> that will ever give you an instance of T or T[] that does not correspond to your target type. So, all of these are valid:

// perfectly fine
List<?> a = emptyList();

// yep
List<Object> b = emptyList();

// alright
List<Number> c = emptyList();

// no problem
List<String[][]> d = emptyList();

// if you must
List<javax.persistence.SqlResultSetMapping> e 
    = emptyList();

So, as always (or mostly), the JDK library designers have taken great care not to make any false promises about the generic type that you might get. This means that you will often get an Object type where you know that another type would be more suitable.

But even if YOU know this, the compiler won’t. Erasure comes at a price and the price is paid when your wrapper or collection is empty. There is no way of knowing the contained type of such an expression, so don’t pretend you do. In other words:

Do not use the just-to-avoid-casting generic method anti pattern
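If you do need such a wrapper, a more honest variant makes the call-site state the expected type explicitly, so the (potentially failing) cast is at least visible. A minimal sketch:

interface SomeWrapper {

    // The caller documents, and takes responsibility for, the expected type
    <T> T get(Class<T> type);
}

// Usage: the cast can still fail, but at least it's spelled out at the call-site
String s = wrapper.get(String.class);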

Let’s Review How to Insert Clob or Blob via JDBC


LOBs are a PITA in all databases, as well as in JDBC. Handling them correctly takes a couple of lines of code, and you can be sure that you’ll get it wrong eventually. Because you have to think of a couple of things:

  • Foremost, LOBs are heavy resources that need special lifecycle management. Once you’ve allocated a LOB, you better “free” it as well to decrease the pressure on your GC. This article shows more about why you need to free lobs
  • The time when you allocate and free a lob is crucial. It might have a longer life span than any of your ResultSet, PreparedStatement, or Connection / transaction. Each database manages such life spans individually, and you might have to read up the specifications in edge cases
  • While you may use String instead of Clob, or byte[] instead of Blob for small to medium sized LOBs, this may not always work, and may even lead to some nasty errors, like Oracle’s dreaded ORA-01461: can bind a LONG value only for insert into a LONG column

So, if you’re working on a low level using JDBC (instead of abstracting JDBC via Hibernate or jOOQ), you better write a small utility that takes care of proper LOB handling.

We’ve recently re-discovered our own utility that we’re using for jOOQ integration testing, at least in some databases, and thought this might be very useful to a couple of our readers who operate directly with JDBC. Consider the following class:

public class LOB implements AutoCloseable {
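
    // Note: BLOB / CLOB in the Oracle branches below refer to oracle.sql.BLOB and
    // oracle.sql.CLOB, and JDBCUtils is org.jooq.tools.jdbc.JDBCUtils, whose
    // safeFree() methods ignore nulls and swallow exceptions when freeing.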

    private final Connection connection;
    private final List<Blob> blobs;
    private final List<Clob> clobs;

    public LOB(Connection connection) {
        this.connection = connection;
        this.blobs = new ArrayList<>();
        this.clobs = new ArrayList<>();
    }

    public final Blob blob(byte[] bytes) 
    throws SQLException {
        Blob blob;

        // You may write more robust dialect 
        // detection here
        if (connection.getMetaData()
                      .getDatabaseProductName()
                      .toLowerCase()
                      .contains("oracle")) {
            blob = BLOB.createTemporary(connection, 
                       false, BLOB.DURATION_SESSION);
        }
        else {
            blob = connection.createBlob();
        }

        blob.setBytes(1, bytes);
        blobs.add(blob);
        return blob;
    }

    public final Clob clob(String string) 
    throws SQLException {
        Clob clob;

        if (connection.getMetaData()
                      .getDatabaseProductName()
                      .toLowerCase()
                      .contains("oracle")) {
            clob = CLOB.createTemporary(connection, 
                       false, CLOB.DURATION_SESSION);
        }
        else {
            clob = connection.createClob();
        }

        clob.setString(1, string);
        clobs.add(clob);
        return clob;
    }


    @Override
    public final void close() throws Exception {
        blobs.forEach(JDBCUtils::safeFree);
        clobs.forEach(JDBCUtils::safeFree);
    }
}

This simple class has some nice traits:

  • It’s AutoCloseable, so you can free your lobs with the try-with-resources statement
  • It abstracts over the creation of LOBs across SQL dialects. No need to remember the Oracle way

To use this class, simply write something like the following:

try (
    LOB lob = new LOB(connection);
    PreparedStatement stmt = connection.prepareStatement(
        "insert into lobs (id, lob) values (?, ?)")
) {
    stmt.setInt(1, 1);
    stmt.setClob(2, lob.clob("abc"));
    stmt.executeUpdate();
}

That’s it! No need to keep references to the lob, safely freeing it if it’s not null, correctly recovering from exceptions, etc. Just put the LOB container in the try-with-resources statement, along with the PreparedStatement, and you’re done.

If you’re interested in why you have to call Clob.free() or Blob.free() in the first place, read our article about it. It’ll spare you one or two OutOfMemoryErrors.

Is Your Eclipse Running a Bit Slow? Just Use This Simple Trick!


You wouldn’t believe it until you try it yourself. I’ve been using the Eclipse Mars developer milestones lately, and I’ve been having some issues with slow compilation. I always thought it was because of the m2e integration, which has never been famous for working perfectly. But then it dawned on me when I added a JPA persistence.xml file to run some jOOQ + Hibernate tests… I ran into this issue and googled it, only to learn that many people are complaining about JPA validation running forever in their Eclipse installations.

So I searched for how to deactivate that, and boom!

All of my Eclipse got much much faster

In fact, I didn’t just deactivate JPA validation, but all validation:

[Screenshot: deactivate all validation in your Eclipse to boost performance]

I don’t remember the last time I ever needed validation, or thought that it was a useful feature in the first place. If you want to help your whole team, you can also check the following content into each of your projects’ .settings/org.eclipse.wst.validation.prefs files:

DELEGATES_PREFERENCE=delegateValidatorList
USER_BUILD_PREFERENCE=enabledBuildValidatorListorg.eclipse.wst.wsi.ui.internal.WSIMessageValidator;
USER_MANUAL_PREFERENCE=enabledManualValidatorListorg.eclipse.wst.wsi.ui.internal.WSIMessageValidator;
USER_PREFERENCE=overrideGlobalPreferencestruedisableAllValidationtrueversion1.2.600.v201501211647
eclipse.preferences.version=1
override=true
suspend=true
vf.version=3

This has the same effect, but can be checked into version control.

Found this tip useful? See also our list of Top 5 Useful Hidden Eclipse Features

How JPA 2.1 has become the new EJB 2.0


Beauty lies in the eye of the beholder. So does “ease”.

Thorben writes very good and useful articles about JPA, and he’s recently started an excellent series about JPA 2.1’s new features. Among which: Result set mapping. You may know result set mapping from websites like CTMMC, or annotatiomania.com. We can summarise this mapping procedure as follows:

a) define the mapping

@SqlResultSetMapping(
    name = "BookAuthorMapping",
    entities = {
        @EntityResult(
            entityClass = Book.class,
            fields = {
                @FieldResult(name = "id", column = "id"),
                @FieldResult(name = "title", column = "title"),
                @FieldResult(name = "author", column = "author_id"),
                @FieldResult(name = "version", column = "version")}),
        @EntityResult(
            entityClass = Author.class,
            fields = {
                @FieldResult(name = "id", column = "authorId"),
                @FieldResult(name = "firstName", column = "firstName"),
                @FieldResult(name = "lastName", column = "lastName"),
                @FieldResult(name = "version", column = "authorVersion")})})

The above mapping is rather straight-forward. It specifies how database columns should be mapped to entity fields and to entities as a whole. Then you give this mapping a name ("BookAuthorMapping"), which you can then reuse across your application, e.g. with native JPA queries.

I specifically like the fact that Thorben then writes:

If you don’t like to add such a huge block of annotations to your entity, you can also define the mapping in an XML file

… So, we’re back to replacing huge blocks of annotations with huge blocks of XML – a technique that many of us wanted to avoid by using annotations… :-)

b) apply the mapping

Once the mapping has been statically defined on some Java type, you can then fetch those entities by applying the above BookAuthorMapping:

List<Object[]> results = this.em.createNativeQuery(
    "SELECT b.id, b.title, b.author_id, b.version, " +
    "       a.id as authorId, a.firstName, a.lastName, " + 
    "       a.version as authorVersion " + 
    "FROM Book b " +
    "JOIN Author a ON b.author_id = a.id", 
    "BookAuthorMapping"
).getResultList();

results.stream().forEach((record) -> {
    Book book = (Book)record[0];
    Author author = (Author)record[1];
});

Notice how you still have to remember the Book and Author types and cast explicitly as no verifiable type information is really attached to anything.

The definition of “complex”

Now, the article claims that this is “complex” mapping, and no doubt, I would agree. This very simple query with only a simple join already triggers such an annotation mess if you want to really map your entities via JPA. You don’t want to see Thorben’s mapping annotations, once the queries get a little more complex. And remember, @SqlResultSetMapping is about mapping (native!) SQL results, so we’re no longer in object-graph-persistence land, we’re in SQL land, where bulk fetching, denormalising, aggregating, and other “fancy” SQL stuff is king.

The problem is here:

Java 5 introduced annotations. Annotations were originally intended to be used as “artificial modifiers”, i.e. things like static, final, protected (interestingly enough, Ceylon only knows annotations, no modifiers). This makes sense. Java language designers could introduce new modifiers / “keywords” without breaking existing code – because “real” keywords are reserved words, which are hard to introduce in a language. Remember enum?

So, good use-cases for annotations (and there are only a few) are:

  • @Override
  • @Deprecated (although, a comment attribute would’ve been fancy)
  • @FunctionalInterface

JPA (and other Java EE APIs, as well as Spring) has gone completely wacko on its use of annotations. Repeat after me:

No language @Before or @After Java ever abused annotations as much as Java

(the @Before / @After idea was lennoff’s, on reddit)

There is a strong sense of déjà vu in me when reading the above. Do you remember the following?

No language before or after Java ever abused checked exceptions as much as Java

We will all deeply regret Java annotations by 2020.

Annotations are a big wart in the Java type system. They have extremely few justified uses, and what we Java Enterprise developers are doing these days is absolutely not within the limits of “justified”. We’re abusing them for configuration, for things that we should really be writing code for.

Here’s how you’d run the same query with jOOQ (or any other API that leverages generics and type safety for SQL):

Book b = BOOK.as("b");
Author a = AUTHOR.as("a");

DSL.using(configuration)
   .select(b.ID, b.TITLE, b.AUTHOR_ID, b.VERSION,
           a.ID, a.FIRST_NAME, a.LAST_NAME,
           a.VERSION)
   .from(b)
   .join(a).on(b.AUTHOR_ID.eq(a.ID))
   .fetch()
   .forEach(record -> {
       BookRecord book = record.into(b);
       AuthorRecord author = record.into(a);
   });

This example covers in a single statement what JPA 2.1 needs both annotations AND querying for. All the meta information about projected “entities” is already contained in the query and thus in the Result that is produced by the fetch() method. But it doesn’t really matter; the point here is that this lambda expression …

record -> {
    BookRecord book = record.into(b);
    AuthorRecord author = record.into(a);
}

… it can be anything you want! Like the more sophisticated examples we’ve shown in previous blog posts.

Mapping can be defined ad-hoc, on the fly, using functions. Functions are the ideal mappers, because they take an input, produce an output, and are completely stateless. And the best thing about functions in Java 8 is, they’re compiled by the Java compiler and can be used to type-check your mapping. And you can assign functions to objects, which allows you to reuse the functions, when a given mapping algorithm can be used several times.
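For instance, here is a hedged sketch of extracting such a mapping function into a reusable, compiler-checked object, using jOOQ’s RecordMapper and reusing the b alias from the query above (BookRecord being the record type generated for the BOOK table):

// A reusable, type-checked mapping function
RecordMapper<Record, BookRecord> toBook = record -> record.into(b);

List<BookRecord> books =
DSL.using(configuration)
   .select(b.fields())
   .from(b)
   .fetch(toBook);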

In fact, the SQL SELECT clause itself is such a function. A function that transforms input tuples / rows into output tuples / rows, and you can adapt that function on the fly using additional expressions.

There is absolutely no way to type-check anything in the previous JPA 2.1 native SQL statement and @SqlResultSetMapping example. Imagine changing a column name:

List<Object[]> results = this.em.createNativeQuery(
    "SELECT b.id, b.title as book_title, " +
    "       b.author_id, b.version, " +
    "       a.id as authorId, a.firstName, a.lastName, " + 
    "       a.version as authorVersion " + 
    "FROM Book b " +
    "JOIN Author a ON b.author_id = a.id", 
    "BookAuthorMapping"
).getResultList();

Did you notice the difference? The b.title column was renamed to book_title. In a SQL string. Which blows up at run time! How are you supposed to remember that you also have to adapt

@FieldResult(name = "title", column = "title")

… to be

@FieldResult(name = "title", column = "book_title")

Conversely, how are you supposed to remember that once you rename the column in your @FieldResult, you’ll also have to go check wherever this "BookAuthorMapping" is used, and also change the column names in those queries?

@SqlResultSetMapping(
    name = "BookAuthorMapping",
    ...
)

Annotations are evil

You may or may not agree with some of the above. You may or may not like jOOQ as an alternative to JPA; that’s perfectly fine. But it is really hard to disagree with the fact that:

  • Java 5 introduced very useful annotations
  • Java EE / Spring heavily abused those annotations to replace XML
  • We now have a parallel universe type system in Java
  • This parallel universe type system is completely useless because the compiler cannot introspect it
  • Java SE 8 introduces functional programming and lots of type inference
  • Java SE 9-10 will introduce more awesome language features
  • It now becomes clear that what was configuration (XML or annotations) should have been code in the first place
  • JPA 2.1 has become the new EJB 2.0: Obsolete

As I said. Hard to disagree. Or in other words:

Code is much better at expressing algorithms than configuration

I’ve met Thorben personally on a number of occasions at conferences. This rant here wasn’t meant personally, Thorben :-) Your articles about JPA are very interesting. If you, the readers of this article, are using JPA, please check out Thorben’s blog: http://www.thoughts-on-java.org.

In the meantime, I would love to nominate Thorben for the respected title “The Annotatiomaniac of the Year 2015”.
