Do Not Make This Mistake When Developing an SPI

Most of your code is private, internal, proprietary, and will never be exposed to the public. If that’s the case, you can relax – you can refactor all of your mistakes, including those that incur breaking API changes.

If you’re maintaining a public API, however, that’s not the case. And if you’re maintaining a public SPI (Service Provider Interface), then things get even worse.

The H2 Trigger SPI

In a recent Stack Overflow question about how to implement an H2 database trigger with jOOQ, I encountered the org.h2.api.Trigger SPI again – a simple, easy-to-implement SPI that specifies trigger semantics. Here’s how triggers work in the H2 database:

Use the trigger

CREATE TRIGGER my_trigger
BEFORE UPDATE
ON my_table
FOR EACH ROW
CALL "com.example.MyTrigger"

Implement the trigger

public class MyTrigger implements Trigger {

    @Override
    public void init(
        Connection conn, 
        String schemaName,
        String triggerName, 
        String tableName, 
        boolean before, 
        int type
    )
    throws SQLException {}

    @Override
    public void fire(
        Connection conn, 
        Object[] oldRow, 
        Object[] newRow
    )
    throws SQLException {
        // Using jOOQ inside of the trigger, of course
        DSL.using(conn)
           .insertInto(LOG, LOG.FIELD1, LOG.FIELD2, ..)
           .values(newRow[0], newRow[1], ..)
           .execute();
    }

    @Override
    public void close() throws SQLException {}

    @Override
    public void remove() throws SQLException {}
}

The whole H2 Trigger SPI is actually rather elegant, and usually you only need to implement the fire() method.

So, how is this SPI wrong?

It is very subtly wrong. Consider the init() method. It has a boolean flag to indicate whether the trigger should fire before or after the triggering event, i.e. the UPDATE. What if, one day, H2 were also to support INSTEAD OF triggers? Ideally, this flag would then be replaced by an enum:

public enum TriggerTiming {
    BEFORE,
    AFTER,
    INSTEAD_OF
}

But we can’t simply introduce this new enum type because the init() method shouldn’t be changed incompatibly, breaking all implementing code! With Java 8, we could at least declare an overload like this:

    default void init(
        Connection conn, 
        String schemaName,
        String triggerName, 
        String tableName, 
        TriggerTiming timing, 
        int type
    )
    throws SQLException {
        // New feature isn't supported by default
        if (timing == TriggerTiming.INSTEAD_OF)
            throw new SQLFeatureNotSupportedException();

        // Call through to old feature by default
        init(conn, schemaName, triggerName,
             tableName, timing == TriggerTiming.BEFORE, type);
    }

This would allow new implementations to handle INSTEAD_OF triggers while old implementations would still work. But it feels hairy, doesn’t it?

Now imagine we also wanted to support ENABLE / DISABLE clauses, and to pass those values to the init() method. Or maybe we want to handle FOR EACH ROW. There’s currently no way to do that with this SPI. So we’re going to get more and more of these overloads, which are very hard to implement. And effectively, this has already happened, as there is also org.h2.tools.TriggerAdapter, which is redundant with (but subtly different from) Trigger.

What would be a better approach, then?

The ideal approach for an SPI provider is to provide “argument objects”, like these:

public interface Trigger {
    default void init(InitArguments args)
        throws SQLException {}
    default void fire(FireArguments args)
        throws SQLException {}
    default void close(CloseArguments args)
        throws SQLException {}
    default void remove(RemoveArguments args)
        throws SQLException {}

    final class InitArguments {
        public Connection connection() { ... }
        public String schemaName() { ... }
        public String triggerName() { ... }
        public String tableName() { ... }
        /** @deprecated Use {@link #timing()} instead. */
        @Deprecated
        public boolean before() { ... }
        public TriggerTiming timing() { ... }
        public int type() { ... }
    }

    final class FireArguments {
        public Connection connection() { ... }
        public Object[] oldRow() { ... }
        public Object[] newRow() { ... }
    }

    // These currently don't have any properties
    final class CloseArguments {}
    final class RemoveArguments {}
}

As you can see in the above example, Trigger.InitArguments has been successfully evolved with appropriate deprecation warnings. No client code was broken, and the new functionality is ready to be used, if needed. Also, close() and remove() are ready for future evolutions, even if we don’t need any arguments yet.

The overhead of this solution is at most one object allocation per method call, which shouldn’t hurt too much.
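
For illustration, here’s how an implementation against such an evolved SPI might look – a sketch using the hypothetical argument types above, where only fire() needs overriding:

public class MyTrigger implements Trigger {

    @Override
    public void fire(FireArguments args) throws SQLException {
        DSL.using(args.connection())
           .insertInto(LOG, LOG.FIELD1, LOG.FIELD2)
           .values(args.newRow()[0], args.newRow()[1])
           .execute();
    }
}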

Another example: Hibernate’s UserType

Unfortunately, this mistake happens way too often. Another prominent example is Hibernate’s hard-to-implement org.hibernate.usertype.UserType SPI:

public interface UserType {
    int[] sqlTypes();
    Class returnedClass();
    boolean equals(Object x, Object y);
    int hashCode(Object x);

    Object nullSafeGet(
        ResultSet rs, 
        String[] names, 
        SessionImplementor session, 
        Object owner
    ) throws SQLException;

    void nullSafeSet(
        PreparedStatement st, 
        Object value, 
        int index, 
        SessionImplementor session
    ) throws SQLException;

    Object deepCopy(Object value);
    boolean isMutable();
    Serializable disassemble(Object value);
    Object assemble(
        Serializable cached, 
        Object owner
    );
    Object replace(
        Object original, 
        Object target, 
        Object owner
    );
}

The SPI looks rather difficult to implement. You can probably get something working rather quickly, but will you feel at ease? Will you be confident that you got it right? Some examples:

  • Is there never a case where you need the owner reference also in nullSafeSet()?
  • What if your JDBC driver doesn’t support fetching values by name from ResultSet?
  • What if you need to use your user type in a CallableStatement for a stored procedure?

Another important aspect of such SPIs is the way implementors can provide values back to the framework. It is generally a bad idea to have non-void methods in SPIs, as you will never be able to change the return type of a method again. Ideally, you should have argument types that accept “outcomes”. A lot of the above methods could be replaced by a single configure() method like this:

public interface UserType {
    default void configure(ConfigureArgs args) {}

    final class ConfigureArgs {
        public void sqlTypes(int[] types) { ... }
        public void returnedClass(Class<?> clazz) { ... }
        public void mutable(boolean mutable) { ... }
    }

    // ...
}
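
An implementor would then push configuration into the framework rather than returning it – a sketch against the hypothetical configure() method above:

public class InetAddressUserType implements UserType {

    @Override
    public void configure(ConfigureArgs args) {
        args.sqlTypes(new int[] { java.sql.Types.VARCHAR });
        args.returnedClass(java.net.InetAddress.class);
        args.mutable(false);
    }

    // ... remaining methods
}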

Another example: a SAX ContentHandler

Have a look at this example here:

public interface ContentHandler {
    void setDocumentLocator (Locator locator);
    void startDocument ();
    void endDocument();
    void startPrefixMapping (String prefix, String uri);
    void endPrefixMapping (String prefix);
    void startElement (String uri, String localName,
                       String qName, Attributes atts);
    void endElement (String uri, String localName,
                     String qName);
    void characters (char ch[], int start, int length);
    void ignorableWhitespace (char ch[], int start, int length);
    void processingInstruction (String target, String data);
    void skippedEntity (String name);
}

Some examples for drawbacks of this SPI:

  • What if you need the attributes of an element at the endElement() event? You’ll have to remember them yourself.
  • What if you’d like to know the prefix mapping uri at the endPrefixMapping() event? Or at any other event?

Clearly, SAX was optimised for speed, and it was optimised for speed at a time when the JIT and the GC were still weak. Nonetheless, implementing a SAX handler is not trivial. Part of this is due to the SPI being hard to implement.
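
To illustrate the first drawback: since endElement() doesn’t receive the attributes, implementors must track them themselves. A minimal sketch (note that parsers may reuse the Attributes instance, so a defensive copy is required):

import org.xml.sax.Attributes;
import org.xml.sax.helpers.AttributesImpl;
import org.xml.sax.helpers.DefaultHandler;

import java.util.ArrayDeque;
import java.util.Deque;

public class RememberingHandler extends DefaultHandler {
    private final Deque<Attributes> stack = new ArrayDeque<>();

    @Override
    public void startElement(String uri, String localName,
                             String qName, Attributes atts) {
        // Defensive copy, as the parser may reuse "atts"
        stack.push(new AttributesImpl(atts));
    }

    @Override
    public void endElement(String uri, String localName, String qName) {
        Attributes atts = stack.pop();
        // The closing element's attributes are now available here
    }
}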

We don’t know the future

As API or SPI providers, we simply do not know the future. Right now, we may think that a given SPI is sufficient, only to find that we have to break it in the very next minor release. Or we don’t break it, and instead have to tell our users that we cannot implement their new feature requests.

With the above tricks, we can continue evolving our SPI without incurring any breaking changes:

  • Always pass exactly one argument object to the methods.
  • Always return void. Let implementors interact with SPI state via the argument object.
  • Use Java 8’s default methods, or provide an “empty” default implementation.
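
In compact form, the three rules combine into a template like this (all names hypothetical):

public interface MyListener {

    // Exactly one argument object, void return, "empty" default implementation
    default void onEvent(EventArguments args) {}

    final class EventArguments {

        // Inputs are exposed via getter-style methods...
        public String message() { return "..."; }

        // ... and outcomes are passed back via setter-style methods
        public void consumed(boolean consumed) {}
    }
}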

Did you enjoy this read? You might also enjoy:

You Will Regret Applying Overloading with Lambdas!

Writing good APIs is hard. Extremely hard. You have to think about an incredible number of things if you want your users to love your API. You have to find the right balance between:

  1. Usefulness
  2. Usability
  3. Backward compatibility
  4. Forward compatibility

We’ve blogged about this topic before, in our article: How to Design a Good, Regular API. Today, we’re going to look into how…

Java 8 changes the rules

Yes!

Overloading is a nice tool to provide convenience in two dimensions:

  • By providing argument type alternatives
  • By providing argument default values

Examples for the above from the JDK include:

public class Arrays {

    // Argument type alternatives
    public static void sort(int[] a) { ... }
    public static void sort(long[] a) { ... }

    // Argument default values
    public static IntStream stream(int[] array) { ... }
    public static IntStream stream(int[] array, 
        int startInclusive, 
        int endExclusive) { ... }
}

The jOOQ API is obviously full of such convenience. As jOOQ is a DSL for SQL, we might even abuse it a little:

public interface DSLContext {
    <T1> SelectSelectStep<Record1<T1>> 
        select(SelectField<T1> field1);

    <T1, T2> SelectSelectStep<Record2<T1, T2>> 
        select(SelectField<T1> field1, 
               SelectField<T2> field2);

    <T1, T2, T3> SelectSelectStep<Record3<T1, T2, T3>>
        select(SelectField<T1> field1, 
               SelectField<T2> field2, 
               SelectField<T3> field3);

    <T1, T2, T3, T4> SelectSelectStep<Record4<T1, T2, T3, T4>> 
        select(SelectField<T1> field1, 
               SelectField<T2> field2, 
               SelectField<T3> field3, 
               SelectField<T4> field4);

    // and so on...
}

Languages like Ceylon take this idea of convenience one step further by claiming that the above is the only reasonable use of overloading in Java. And thus, the creators of Ceylon have completely removed overloading from their language, replacing it with union types and actual default values for arguments. E.g.

// Union types
void sort(int[]|long[] a) { ... }

// Default argument values
IntStream stream(int[] array,
    int startInclusive = 0,
    int endExclusive = array.length) { ... }

Read “Top 10 Ceylon Language Features I Wish We Had In Java” for more information about Ceylon.

In Java, unfortunately, we cannot use union types or argument default values. So we have to use overloading to provide our API consumers with convenience methods.

If your method argument is a functional interface, however, things changed drastically between Java 7 and Java 8 with respect to method overloading. Here’s an example from JavaFX.

JavaFX’s “unfriendly” ObservableList

JavaFX enhances the JDK collection types by making them “observable”. Not to be confused with java.util.Observable, a dinosaur type from JDK 1.0 and pre-Swing days.

JavaFX’s own Observable essentially looks like this:

public interface Observable {
  void addListener(InvalidationListener listener);
  void removeListener(InvalidationListener listener);
}

And luckily, this InvalidationListener is a functional interface:

@FunctionalInterface
public interface InvalidationListener {
  void invalidated(Observable observable);
}

This is great, because we can do things like:

Observable awesome = 
    FXCollections.observableArrayList();
awesome.addListener(fantastic -> splendid.cheer());

(notice how I’ve replaced foo/bar/baz with more cheerful terms. We should all do that. Foo and bar are so 1970)

Unfortunately, things get more hairy when we do what we’d probably do instead: rather than declaring an Observable, we’d like a much more useful ObservableList:

ObservableList<String> awesome = 
    FXCollections.observableArrayList();
awesome.addListener(fantastic -> splendid.cheer());

But now, we get a compilation error on the second line:

awesome.addListener(fantastic -> splendid.cheer());
//      ^^^^^^^^^^^ 
// The method addListener(ListChangeListener<? super String>) 
// is ambiguous for the type ObservableList<String>

Because, essentially…

public interface ObservableList<E> 
extends List<E>, Observable {
    void addListener(ListChangeListener<? super E> listener);
}

and…

@FunctionalInterface
public interface ListChangeListener<E> {
    void onChanged(Change<? extends E> c);
}

Now, before Java 8, the two listener types were completely unambiguously distinguishable – and they still are. You can easily distinguish them by passing a named type. Our original code would still work if we wrote:

ObservableList<String> awesome = 
    FXCollections.observableArrayList();
InvalidationListener hearYe = 
    fantastic -> splendid.cheer();
awesome.addListener(hearYe);

Or…

ObservableList<String> awesome = 
    FXCollections.observableArrayList();
awesome.addListener((InvalidationListener) 
    fantastic -> splendid.cheer());

Or even…

ObservableList<String> awesome = 
    FXCollections.observableArrayList();
awesome.addListener((Observable fantastic) -> 
    splendid.cheer());

All of these measures will remove the ambiguity. But frankly, lambdas are only half as cool if you have to explicitly type the lambda or its argument types. We have modern IDEs that can perform autocompletion and help infer types just as much as the compiler itself.

Imagine if we really wanted to call the other addListener() method, the one that takes a ListChangeListener. We’d have to write one of the following:

ObservableList<String> awesome = 
    FXCollections.observableArrayList();

// Agh. Remember that we have to repeat "String" here
ListChangeListener<String> hearYe = 
    fantastic -> splendid.cheer();
awesome.addListener(hearYe);

Or…

ObservableList<String> awesome = 
    FXCollections.observableArrayList();

// Agh. Remember that we have to repeat "String" here
awesome.addListener((ListChangeListener<String>) 
    fantastic -> splendid.cheer());

Or even…

ObservableList<String> awesome = 
    FXCollections.observableArrayList();

// WTF... "extends" String?? But that's what this thing needs...
awesome.addListener((Change<? extends String> fantastic) -> 
    splendid.cheer());

Overload you shan’t. Be wary you must.

API design is hard. It was hard before, and it has gotten harder now. With Java 8, if any of your API methods’ arguments are a functional interface, think twice about overloading that API method. And once you’ve decided to proceed with overloading, think a third time about whether it is really a good idea.

Not convinced? Have a close look at the JDK. For instance, the java.util.stream.Stream type. How many overloaded methods do you see that have the same number of functional interface arguments, which in turn take the same number of method arguments (as in our previous addListener() example)?

Zero.

There are overloads whose argument numbers differ. For instance:

<R> R collect(Supplier<R> supplier,
              BiConsumer<R, ? super T> accumulator,
              BiConsumer<R, R> combiner);

<R, A> R collect(Collector<? super T, A, R> collector);

You will never have any ambiguity when calling collect().
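
For instance, both overloads can be called with method references or lambdas without a single cast – a quick sketch:

// Three functional arguments, taken straight from the Stream Javadoc:
List<String> a = Stream.of("a", "b", "c")
    .collect(ArrayList::new, ArrayList::add, ArrayList::addAll);

// One Collector argument. No ambiguity either way.
List<String> b = Stream.of("a", "b", "c")
    .collect(Collectors.toList());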

But when the argument numbers do not differ, and neither do the arguments’ own method argument numbers, the method names are different. For instance:

<R> Stream<R> map(Function<? super T, ? extends R> mapper);
IntStream mapToInt(ToIntFunction<? super T> mapper);
LongStream mapToLong(ToLongFunction<? super T> mapper);
DoubleStream mapToDouble(ToDoubleFunction<? super T> mapper);

Now, this is super annoying at the call site, because you have to think in advance about which method to use, based on the various types involved.

But it’s really the only solution to this dilemma. So, remember:

You Will Regret Applying Overloading with Lambdas!

Did you like this article? You might also like:

How Nashorn Impacts API Evolution on a New Level

Following our previous article about how to use jOOQ with Java 8 and Nashorn, one of our users discovered a flaw in using the jOOQ API, as discussed on the user group. In essence, the flaw can be summarised like so:

Java code

package org.jooq.nashorn.test;

public class API {
    public static void test(String string) {
        throw new RuntimeException("Don't call this");
    }

    public static void test(Integer... args) {
        System.out.println("OK");
    }
}

JavaScript code

var API = Java.type("org.jooq.nashorn.test.API");
API.test(1); // This will fail with RuntimeException

After some investigation, and with the kind help of Attila Szegedi as well as Jim Laskey (both Nashorn developers from Oracle), it became clear that Nashorn disambiguates overloaded methods and varargs differently from what a seasoned Java developer might expect. Quoting Attila:

Nashorn’s overload method resolution mimics Java Language Specification (JLS) as much as possible, but allows for JavaScript-specific conversions too. JLS says that when selecting a method to invoke for an overloaded name, variable arity methods can be considered for invocation only when there is no applicable fixed arity method.

I agree that variable arity methods can be considered only when there is no applicable fixed arity method. But the whole notion of “applicable” is itself completely changed, as type promotion (or coercion / conversion) using ToString, ToNumber, and ToBoolean is preferred over what intuitively appear to be “exact” matches with varargs methods!

Let this sink in!

Given that we now know how Nashorn resolves overloading, we can see that any of the following are valid workarounds:

Explicitly calling the test(Integer[]) method using an array argument:

This is the simplest approach, where you ignore the fact that varargs exist and simply create an explicit array.

var API = Java.type("org.jooq.nashorn.test.API");
API.test([1]);

Explicitly calling the test(Integer[]) method by saying so:

This is certainly the safest approach, as you’re removing all ambiguity from the method call.

var API = Java.type("org.jooq.nashorn.test.API");
API["test(Integer[])"](1);

Removing the overload:

public class AlternativeAPI1 {
    public static void test(Integer... args) {
        System.out.println("OK");
    }
}

Removing the varargs:

public class AlternativeAPI3 {
    public static void test(String string) {
        throw new RuntimeException("Don't call this");
    }

    public static void test(Integer args) {
        System.out.println("OK");
    }
}

Providing an exact option:

public class AlternativeAPI4 {
    public static void test(String string) {
        throw new RuntimeException("Don't call this");
    }

    public static void test(Integer args) {
        test(new Integer[] { args });
    }

    public static void test(Integer... args) {
        System.out.println("OK");
    }
}

Replacing String by CharSequence (or any other “similar type”):

Now, this is interesting:

public class AlternativeAPI5 {
    public static void test(CharSequence string) {
        throw new RuntimeException("Don't call this");
    }

    public static void test(Integer args) {
        System.out.println("OK");
    }
}

Specifically, the distinction Nashorn makes between CharSequence and String appears quite arbitrary from a Java perspective, in my opinion.

Agreed, implementing overloaded method resolution in a dynamically typed language is very hard, if it is possible at all. Any solution is a compromise that will introduce flaws somewhere. Or as Attila put it:

As you can see, no matter what we do, something else would suffer; overloaded method selection is in a tight spot between Java and JS type systems and very sensitive to even small changes in the logic.

True! But it’s not only overloaded method selection that is very sensitive to even small changes. Using Nashorn with Java interoperability is, too! As an API designer, over the years, I have grown used to semantic versioning and the many subtle rules to follow when keeping an API source compatible, behavior compatible – and, if ever possible, to a large degree also binary compatible.

Forget about that when your clients are using Nashorn. They’re on their own. A newly introduced overload in your Java API can break your Nashorn clients quite badly. But then again, that’s JavaScript, the language that tells you at runtime that:

['10','10','10','10'].map(parseInt)

… yields

[10, NaN, 2, 3]

… and where

++[[]][+[]]+[+[]] === "10"

yields true!

10 Subtle Best Practices when Coding Java

This is a list of 10 best practices that are more subtle than your average Josh Bloch Effective Java rule. While Josh Bloch’s list is very easy to learn and concerns everyday situations, this list here contains less common situations involving API / SPI design that may have a big effect nonetheless.

I have encountered these things while writing and maintaining jOOQ, an internal DSL modelling SQL in Java. Being an internal DSL, jOOQ challenges Java compilers and generics to the max, combining generics, varargs and overloading in a way that Josh Bloch probably wouldn’t recommend for the “average API”.

Let me share with you 10 Subtle Best Practices When Coding Java:

1. Remember C++ destructors

Remember C++ destructors? No? Then consider yourself lucky – you never had to debug through code that leaked memory because allocations weren’t freed after an object was removed. Thanks, Sun/Oracle, for implementing garbage collection!

Nonetheless, destructors have an interesting trait: it often makes sense to free memory in the inverse order of allocation. Keep this in mind in Java as well, when you’re operating with destructor-like semantics:

  • When using @Before and @After JUnit annotations
  • When allocating, freeing JDBC resources
  • When calling super methods

There are various other use cases. Here’s a concrete example showing how you might implement some event listener SPI:

@Override
public void beforeEvent(EventContext e) {
    super.beforeEvent(e);
    // Super code before my code
}

@Override
public void afterEvent(EventContext e) {
    // Super code after my code
    super.afterEvent(e);
}
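
Regarding the JDBC bullet above, note that Java 7’s try-with-resources encodes exactly this rule: resources are closed in the inverse order of their declaration. A minimal sketch (dataSource is an assumed javax.sql.DataSource):

try (Connection c = dataSource.getConnection();
     PreparedStatement s = c.prepareStatement("SELECT name FROM my_table");
     ResultSet rs = s.executeQuery()) {

    // Acquired c -> s -> rs; closed automatically rs -> s -> c
    while (rs.next())
        System.out.println(rs.getString("name"));
}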

Another good example showing why this can be important is the infamous Dining Philosophers problem. More info about the dining philosophers can be seen in this awesome post:
http://adit.io/posts/2013-05-11-The-Dining-Philosophers-Problem-With-Ron-Swanson.html

The Rule: Whenever you implement logic using before/after, allocate/free, take/return semantics, think about whether the after/free/return operation should perform stuff in the inverse order.

2. Don’t trust your early SPI evolution judgement

Providing an SPI to your consumers is an easy way to allow them to inject custom behaviour into your library / code. Beware, though, that your SPI evolution judgement may trick you into thinking that you’re (not) going to need that additional parameter. True, no functionality should be added early. But once you’ve published your SPI and once you’ve decided to follow semantic versioning, you’ll really regret having added a silly, one-argument method to your SPI when you realise that you might need another argument in some cases:

interface EventListener {
    // Bad
    void message(String message);
}

What if you also need a message ID and a message source? API evolution will prevent you from easily adding those parameters to the above type. Granted, with Java 8, you could add a defender method to “defend” your bad early design decision:

interface EventListener {
    // Bad
    default void message(String message) {
        message(message, null, null);
    }
    // Better?
    void message(
        String message,
        Integer id,
        MessageSource source
    );
}

Note, unfortunately, the defender method cannot be made final.

But much better than polluting your SPI with dozens of methods, use a context object (or argument object) just for this purpose.

interface MessageContext {
    String message();
    Integer id();
    MessageSource source();
}

interface EventListener {
    // Awesome!
    void message(MessageContext context);
}

You can evolve the MessageContext API much more easily than the EventListener SPI as fewer users will have implemented it.

The Rule: Whenever you specify an SPI, consider using context / parameter objects instead of writing methods with a fixed number of parameters.

Remark: It is often a good idea to also communicate results through a dedicated MessageResult type, which can be constructed through a builder API. This will add even more SPI evolution flexibility to your SPI.
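
A minimal sketch of what such a result type might look like – all names here (MessageResult and its builder) are hypothetical:

public final class MessageResult {
    private final boolean consumed;

    private MessageResult(Builder builder) {
        this.consumed = builder.consumed;
    }

    public static Builder builder() {
        return new Builder();
    }

    public boolean consumed() {
        return consumed;
    }

    public static final class Builder {
        private boolean consumed;

        public Builder consumed(boolean consumed) {
            this.consumed = consumed;
            return this;
        }

        // New result properties can be added here compatibly
        public MessageResult build() {
            return new MessageResult(this);
        }
    }
}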

3. Avoid returning anonymous, local, or inner classes

Swing programmers probably have a couple of keyboard shortcuts to generate the code for their hundreds of anonymous classes. In many cases, creating them is nice as you can locally adhere to an interface, without going through the “hassle” of thinking about a full SPI subtype lifecycle.

But you should not use anonymous, local, or inner classes too often, for a simple reason: they keep a reference to the outer instance. And they will drag that outer instance wherever they go, e.g. into some scope outside of your local class if you’re not careful. This can be a major source of memory leaks, as your whole object graph will suddenly become entangled in subtle ways.

The Rule: Whenever you write an anonymous, local or inner class, check if you can make it static or even a regular top-level class. Avoid returning anonymous, local or inner class instances from methods to the outside scope.

Remark: There has been some clever practice around double-curly braces for simple object instantiation:

new HashMap<String, String>() {{
    put("1", "a");
    put("2", "b");
}}

This leverages Java’s instance initializer as specified by the JLS §8.6. Looks nice (maybe a bit weird), but is really a bad idea. What would otherwise be a completely independent HashMap instance now keeps a reference to the outer instance, whatever that happens to be. Besides, you’ll create an additional class for the class loader to manage.
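
A leak-free alternative is just as short – an independent map that references no outer instance:

Map<String, String> map = new HashMap<>();
map.put("1", "a");
map.put("2", "b");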

4. Start writing SAMs now!

Java 8 is knocking on the door. And with Java 8 come lambdas, whether you like them or not. Your API consumers may like them, though, and you’d better make sure that they can make use of them as often as possible. Hence, wherever your API doesn’t just accept simple “scalar” types such as int, long, String, or Date, let it accept SAMs as often as possible.

What’s a SAM? A SAM is a Single Abstract Method [Type]. Also known as a functional interface, soon to be annotated with the @FunctionalInterface annotation. This goes well with rule number 2, where EventListener is in fact a SAM. The best SAMs are those with single arguments, as they further simplify writing a lambda. Imagine writing

listeners.add(c -> System.out.println(c.message()));

Instead of

listeners.add(new EventListener() {
    @Override
    public void message(MessageContext c) {
        System.out.println(c.message());
    }
});

Imagine XML processing through jOOX, which features a couple of SAMs:

$(document)
    // Find elements with an ID
    .find(c -> $(c).id() != null)
    // Find their child elements
    .children(c -> $(c).tag().equals("order"))
    // Print all matches
    .each(c -> System.out.println($(c)));

The Rule: Be nice to your API consumers and write SAMs / functional interfaces already now.

5. Avoid returning null from API methods

I’ve blogged about Java’s NULLs once or twice. I’ve also blogged about Java 8’s introduction of Optional. These are interesting topics both from an academic and from a practical point of view.

While NULLs and NullPointerExceptions will probably stay a major pain in Java for a while, you can still design your API in a way that users will not run into any issues. Try to avoid returning null from API methods whenever possible. Your API consumers should be able to chain methods whenever applicable:

initialise(someArgument).calculate(data).dispatch();

In the above snippet, none of the methods should ever return null. In fact, using null’s semantics (the absence of a value) should be rather exceptional in general. In libraries like jQuery (or jOOX, a Java port thereof), nulls are completely avoided as you’re always operating on iterable objects. Whether you match something or not is irrelevant to the next method call.

Nulls also often arise because of lazy initialisation. In many cases, lazy initialisation can be avoided, too, without any significant performance impact. In fact, lazy initialisation should be used carefully, and only if large data structures are involved.

The Rule: Avoid returning nulls from methods whenever possible. Use null only for the “uninitialised” or “absent” semantics.

6. Never return null arrays or lists from API methods

While there are some cases when returning nulls from methods is OK, there is absolutely no use case for returning null arrays or null collections! Let’s consider the hideous java.io.File.list() method. It returns:

An array of strings naming the files and directories in the directory denoted by this abstract pathname. The array will be empty if the directory is empty. Returns null if this abstract pathname does not denote a directory, or if an I/O error occurs.

Hence, the correct way to deal with this method is

File directory = // ...

if (directory.isDirectory()) {
    String[] list = directory.list();

    if (list != null) {
        for (String file : list) {
            // ...
        }
    }
}

Was that null check really necessary? Most I/O operations produce IOExceptions, but this one returns null. Null cannot hold any error message indicating why the I/O error occurred. So this is wrong in three ways:

  • Null does not help in finding the error
  • Null does not allow us to distinguish I/O errors from the File instance not being a directory
  • Everyone will keep forgetting to check for null here
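
Incidentally, the JDK itself later acknowledged this: NIO.2’s java.nio.file.Files.newDirectoryStream() throws informative exceptions instead of returning null. A sketch:

Path directory = Paths.get("/some/dir");

try (DirectoryStream<Path> stream = Files.newDirectoryStream(directory)) {
    for (Path file : stream) {
        // ...
    }
}
catch (NotDirectoryException e) {
    // Clearly distinguishable from other I/O problems
}
catch (IOException e) {
    // And the cause of the failure is right here
}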

In collection contexts, the notion of “absence” is best implemented by empty arrays or collections. Having an “absent” array or collection is hardly ever useful, except, again, for lazy initialisation.

The Rule: Arrays or Collections should never be null.

7. Avoid state, be functional

What’s nice about HTTP is the fact that it is stateless. All relevant state is transferred in each request and in each response. This is essential to the naming of REST: Representational State Transfer. This is awesome when done in Java as well. Think of it in terms of rule number 2 when methods receive stateful parameter objects. Things can be so much simpler if state is transferred in such objects, rather than manipulated from the outside. Take JDBC, for instance. The following example fetches a cursor from a stored procedure:

CallableStatement s =
  connection.prepareCall("{ ? = ... }");

// Verbose manipulation of statement state:
s.registerOutParameter(1, cursor);
s.setString(2, "abc");
s.execute();
ResultSet rs = (ResultSet) s.getObject(1);

// Verbose manipulation of result set state:
rs.next();
rs.next();

These are the things that make JDBC such an awkward API to deal with. Each object is incredibly stateful and hard to manipulate. Concretely, there are two major issues:

  • It is very hard to correctly deal with stateful APIs in multi-threaded environments
  • It is very hard to make stateful resources globally available, as the state is not documented
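
By contrast, a more functional style passes all state in through arguments and back out through return values. A minimal plain-JDBC sketch (fetchNames is a hypothetical helper; imports from java.sql and java.util assumed):

static List<String> fetchNames(
    Connection connection, String sql, String parameter)
throws SQLException {
    try (PreparedStatement s = connection.prepareStatement(sql)) {
        s.setString(1, parameter);

        try (ResultSet rs = s.executeQuery()) {
            List<String> names = new ArrayList<>();

            while (rs.next())
                names.add(rs.getString(1));

            // No statement or result set state escapes this method
            return names;
        }
    }
}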

The Rule: Implement more of a functional style. Pass state through method arguments. Manipulate less object state.

8. Short-circuit equals()

This is a low-hanging fruit. In large object graphs, you can gain significant performance if all your objects’ equals() methods cheaply compare for identity first:

@Override
public boolean equals(Object other) {
    if (this == other) return true;

    // Rest of equality logic...
}

Note that other short-circuit checks may involve null checks, which should be there as well:

@Override
public boolean equals(Object other) {
    if (this == other) return true;
    if (other == null) return false;

    // Rest of equality logic...
}

The Rule: Short-circuit all your equals() methods to gain performance.

9. Try to make methods final by default

Some will disagree on this, as making things final by default is quite the opposite of what Java developers are used to. But if you’re in full control of all source code, there’s absolutely nothing wrong with making methods final by default, because:

  • If you do need to override a method (do you really?), you can still remove the final keyword
  • You will never accidentally override any method anymore

This specifically applies to static methods, where “overriding” (actually, shadowing) hardly ever makes sense. I recently came across a very bad example of shadowing static methods in Apache Tika. Consider:

TikaInputStream extends TaggedInputStream and shadows its static get() method with quite a different implementation.

Unlike regular methods, static methods don’t override each other, as the call site binds a static method invocation at compile time. If you’re unlucky, you might just get the wrong method accidentally.
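
A minimal sketch of the pitfall (hypothetical classes):

class Parent {
    static String get() { return "parent"; }
}

class Child extends Parent {
    static String get() { return "child"; } // shadows, does not override
}

public class ShadowingDemo {
    public static void main(String[] args) {
        Parent p = new Child();

        // Calling a static method "through" an instance (bad style anyway)
        // binds against the compile-time type Parent:
        System.out.println(p.get()); // prints "parent", not "child"
    }
}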

The Rule: If you’re in full control of your API, try making as many methods as possible final by default.

10. Avoid the method(T…) signature

There’s nothing wrong with the occasional “accept-all” varargs method that accepts an Object... argument:

void acceptAll(Object... all);

Writing such a method brings a little JavaScript feeling to the Java ecosystem. Of course, you probably want to restrict the actual type to something more confined in a real-world situation, e.g. String.... And because you don’t want to confine too much, you might think it is a good idea to replace Object by a generic T:

<T> void acceptAll(T... all);

But it’s not. T can always be inferred as Object. In fact, you might as well not use generics with the above method. More importantly, you may think that you can overload the above method, but you cannot:

<T> void acceptAll(T... all);
<T> void acceptAll(String message, T... all);

This looks as though you could optionally pass a String message to the method. But what happens with this call here?

acceptAll("Message", 123, "abc");

The compiler will infer <? extends Serializable & Comparable<?>> for T, which makes the call ambiguous!

So, whenever you have an “accept-all” signature (even if it is generic), you will never again be able to typesafely overload it. API consumers may just be lucky enough to “accidentally” have the compiler choose the “right”, most specific method. But they may as well be tricked into using the “accept-all” method, or they may not be able to call any method at all.

The Rule: Avoid “accept-all” signatures if you can. And if you cannot, never overload such a method.

Conclusion

Java is a beast. Unlike other, fancier languages, it has evolved slowly into what it is today. And that’s probably a good thing, because even at Java’s pace of development, there are hundreds of caveats, which can only be mastered through years of experience.

Stay tuned for more top 10 lists on the subject!

Defensive API evolution with Java interfaces

API evolution is something absolutely non-trivial – something that only a few of us have to deal with. Most of us work on internal, proprietary APIs every day. Modern IDEs ship with awesome tooling to factor out, rename, pull up, push down, indirect, delegate, infer, and generalise our code artefacts. These tools make refactoring our internal APIs a piece of cake.

But some of us work on public APIs, where the rules change drastically. Public APIs, if done properly, are versioned. Every change – compatible or incompatible – should be published in a new API version. Most people will agree that API evolution should be done in major and minor releases, similar to what is specified in semantic versioning. In short: Incompatible API changes are published in major releases (1.0, 2.0, 3.0), whereas compatible API changes / enhancements are published in minor releases (1.0, 1.1, 1.2).

If you’re planning ahead, you’re going to foresee most of your incompatible changes a long time before actually publishing the next major release. A good tool in Java to announce such a change early is deprecation.

Interface API evolution

Now, deprecation is a good tool to indicate that you’re about to remove a type or member from your API. But what if you’re going to add a method or a type to an interface’s type hierarchy? This means that all client code implementing your interface will break – at least until Java 8’s defender methods arrive. There are several techniques to circumvent / work around this problem:

1. Don’t care about it

Yes, that’s an option too. Your API is public, but maybe not so much used. Let’s face it: Not all of us work on the JDK / Eclipse / Apache / etc codebases.

If you’re friendly, you’re at least going to wait for a major release to introduce new methods. But you can break the rules of semantic versioning if you really have to – if you can deal with the consequences of getting a mob of angry users.

Note, though, that other platforms aren’t as backwards-compatible as the Java universe (often by language design, or by language complexity). E.g. with Scala’s various ways of declaring things as implicit, your API can’t always be perfect.

2. Do it the Java way

The “Java” way is not to evolve interfaces at all. Most API types in the JDK have been the way they are today forever. Of course, this makes APIs feel quite “dinosaury” and adds a lot of redundancy between various similar types, such as StringBuffer and StringBuilder, or Hashtable and HashMap.

Note that some parts of Java don’t adhere to the “Java” way. Most specifically, this is the case for the JDBC API, which evolves according to the rules of section #1: “Don’t care about it”.

3. Do it the Eclipse way

Eclipse’s internals contain huge APIs. There are a lot of guidelines on how to evolve your own APIs (i.e. the public parts of your plugin) when developing for / within Eclipse. One example of how the Eclipse guys extend interfaces is the IAnnotationHover type. By Javadoc contract, it allows implementations to also implement IAnnotationHoverExtension and IAnnotationHoverExtension2. Obviously, in the long run, such an evolved API is quite hard to maintain, test, and document, and ultimately, hard to use! (consider ICompletionProposal and its 6 (!) extension types)
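
In code, the pattern looks roughly like this (interface names from the real org.eclipse.jface.text API; bodies and the probing snippet are abbreviated and illustrative):

// The original interface never changes:
public interface IAnnotationHover {
    String getHoverInfo(ISourceViewer sourceViewer, int lineNumber);
}

// New functionality lives in a separate extension interface,
// which implementations may optionally implement:
public interface IAnnotationHoverExtension {
    // ... methods added in a later release
}

// Framework code then probes for the extension at runtime:
if (hover instanceof IAnnotationHoverExtension) {
    // ... use the extended functionality
}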

4. Wait for Java 8

In Java 8, you will be able to make use of defender methods. This means that you can provide a sensible default implementation for your new interface methods as can be seen in Java 1.8’s java.util.Iterator (an extract):

public interface Iterator<E> {

    // These methods are kept the same:
    boolean hasNext();
    E next();

    // This method is now made "optional" (finally!)
    public default void remove() {
        throw new UnsupportedOperationException("remove");
    }

    // This method has been added compatibly in Java 1.8
    default void forEach(Consumer<? super E> consumer) {
        Objects.requireNonNull(consumer);
        while (hasNext())
            consumer.accept(next());
    }
}

Of course, you don’t always want to provide a default implementation. Often, your interface is a contract that has to be implemented entirely by client code.

5. Provide public default implementations

In many cases, it is wise to tell the client code that they may implement an interface at their own risk (due to API evolution), and that they had better extend a supplied abstract or default implementation instead. A good example of this is java.util.List, which can be a pain to implement correctly. For simple, not performance-critical custom lists, most users probably choose to extend java.util.AbstractList instead. The only methods left to implement are then get(int) and size(); the behaviour of all other methods can be derived from these two:

class EmptyList<E> extends AbstractList<E> {
    @Override
    public E get(int index) {
        throw new IndexOutOfBoundsException("No elements here");
    }

    @Override
    public int size() {
        return 0;
    }
}

A good convention to follow is to name your default implementation AbstractXXX if it is abstract, or DefaultXXX if it is concrete.

6. Make your API very hard to implement

Now, this isn’t really a good technique, but just a probable fact. If your API is very hard to implement (you have 100s of methods in an interface), then users are probably not going to do it. Note: probably. Never underestimate the crazy user. An example of this is jOOQ’s org.jooq.Field type, which represents a database field / column. In fact, this type is part of jOOQ’s internal domain specific language, offering all sorts of operations and functions that can be performed upon a database column.

Of course, having so many methods is an exception and – if you’re not designing a DSL – is probably a sign of a bad overall design.

7. Add compiler and IDE tricks

Last but not least, there are some nifty tricks that you can apply to your API to help people understand what they ought to do in order to correctly implement your interface-based API. Here’s a tough example that slaps the API designer’s intention straight into your face. Consider this extract of the org.hamcrest.Matcher API:

public interface Matcher<T> extends SelfDescribing {

    // This is what a Matcher really does.
    boolean matches(Object item);
    void describeMismatch(Object item, Description mismatchDescription);

    // Now check out this method here:

    /**
     * This method simply acts a friendly reminder not to implement 
     * Matcher directly and instead extend BaseMatcher. It's easy to 
     * ignore JavaDoc, but a bit harder to ignore compile errors .
     *
     * @see Matcher for reasons why.
     * @see BaseMatcher
     * @deprecated to make
     */
    @Deprecated
    void _dont_implement_Matcher___instead_extend_BaseMatcher_();
}

“Friendly reminder”, come on. 😉

Other ways

I’m sure there are dozens of other ways to evolve an interface-based API. I’m curious to hear your thoughts!