How Nashorn Impacts API Evolution on a New Level


Following our previous article about how to use jOOQ with Java 8 and Nashorn, one of our users discovered a flaw when using the jOOQ API, which was discussed on the user group. In essence, the flaw can be summarised like so:

Java code

package org.jooq.nashorn.test;

public class API {
    public static void test(String string) {
        throw new RuntimeException("Don't call this");
    }

    public static void test(Integer... args) {
        System.out.println("OK");
    }
}

JavaScript code

var API = Java.type("org.jooq.nashorn.test.API");
API.test(1); // This will fail with RuntimeException

After some investigation, and with the kind help of Attila Szegedi and Jim Laskey (both Nashorn developers at Oracle), it became clear that Nashorn disambiguates overloaded methods and varargs differently from what a longtime Java developer might expect. Quoting Attila:

Nashorn’s overload method resolution mimics Java Language Specification (JLS) as much as possible, but allows for JavaScript-specific conversions too. JLS says that when selecting a method to invoke for an overloaded name, variable arity methods can be considered for invocation only when there is no applicable fixed arity method.

I agree that variable arity methods can be considered only when there is no applicable fixed arity method. But the whole notion of “applicable” itself is completely changed: type promotion (or coercion / conversion) using JavaScript’s ToString, ToNumber, and ToBoolean operations is preferred over what intuitively appears to be an “exact” match with a varargs method! In the example above, the argument 1 is converted to a String, which makes the fixed-arity test(String) applicable, and thus preferred over test(Integer...).

Let this sink in!
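For contrast, here is a minimal, self-contained sketch of the JLS rule in plain Java (class and method names are illustrative, not part of the jOOQ API). Since an int argument cannot be converted to String, no fixed-arity method is applicable, and javac selects the varargs overload at compile time — the opposite of what Nashorn does above:

```java
// Plain Java, for contrast: javac applies the JLS rule verbatim.
public class JlsResolution {
    static String test(String s) {
        return "String";
    }

    static String test(Integer... args) {
        return "varargs";
    }

    public static void main(String[] args) {
        // No fixed-arity method is applicable to an int argument
        // (int is not convertible to String), so the variable-arity
        // overload is chosen -- unlike in Nashorn, where ToString
        // conversion makes test(String) applicable and preferred.
        System.out.println(test(1)); // prints "varargs"
    }
}
```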

Given that we now know how Nashorn resolves overloading, we can see that any of the following are valid workarounds:

Explicitly calling the test(Integer[]) method using an array argument:

This is the simplest approach, where you ignore the fact that varargs exist and simply create an explicit array:

var API = Java.type("org.jooq.nashorn.test.API");
API.test([1]);

Explicitly calling the test(Integer[]) method by spelling out its signature:

This is certainly the safest approach, as it removes all ambiguity from the method call:

var API = Java.type("org.jooq.nashorn.test.API");
API["test(Integer[])"](1);

Removing the overload:

public class AlternativeAPI1 {
    public static void test(Integer... args) {
        System.out.println("OK");
    }
}

Removing the varargs:

public class AlternativeAPI3 {
    public static void test(String string) {
        throw new RuntimeException("Don't call this");
    }

    public static void test(Integer args) {
        System.out.println("OK");
    }
}

Providing an exact option:

public class AlternativeAPI4 {
    public static void test(String string) {
        throw new RuntimeException("Don't call this");
    }

    public static void test(Integer args) {
        // Delegate to the varargs overload; this fixed-arity method
        // is the one Nashorn will now select for test(1)
        test(new Integer[] { args });
    }

    public static void test(Integer... args) {
        System.out.println("OK");
    }
}

Replacing String with CharSequence (or any other “similar” type):

Now, this is interesting:

public class AlternativeAPI5 {
    public static void test(CharSequence string) {
        throw new RuntimeException("Don't call this");
    }

    public static void test(Integer args) {
        System.out.println("OK");
    }
}

Specifically, the distinction Nashorn makes between CharSequence and String arguments appears quite arbitrary from a Java perspective.

Agreed, implementing overloaded method resolution in a dynamically typed language is very hard, if it is possible at all. Any solution is a compromise that will introduce flaws at some end. Or, as Attila put it:

As you can see, no matter what we do, something else would suffer; overloaded method selection is in a tight spot between Java and JS type systems and very sensitive to even small changes in the logic.

True! But it is not only overloaded method selection that is sensitive to even small changes; so is any use of Nashorn’s Java interoperability. As an API designer, I have over the years grown used to semantic versioning and to the many subtle rules to follow when keeping an API source compatible, behavior compatible, and, wherever possible, also largely binary compatible.

Forget about that when your clients are using Nashorn. They’re on their own. A newly introduced overload in your Java API can break your Nashorn clients quite badly. But then again, that’s JavaScript, the language that tells you at runtime that:

['10','10','10','10'].map(parseInt)

… yields

[10, NaN, 2, 3]

(map passes the element index to parseInt as its radix argument: radix 0 auto-detects base 10, radix 1 is invalid, and radixes 2 and 3 parse “10” as binary and ternary)

… and where

++[[]][+[]]+[+[]] === "10"

yields true! (Here, +[] is 0, so [[]][+[]] picks the inner empty array; the prefix ++ coerces it to the number 1; and adding [+[]], i.e. [0], concatenates "1" with "0".)


6 thoughts on “How Nashorn Impacts API Evolution on a New Level”

  1. Are you really complaining that Nashorn doesn’t magically add function overloading to JavaScript? How is it Nashorn’s fault JavaScript doesn’t have this feature? You’re lucky that the call doesn’t just flat out fail with an error saying that you are calling an overloaded method that is incompatible with JavaScript. It seems like they went the extra mile and made it work (if you understand the rules) when it should be failing. Sorry to be so bitchy but saying “Nashorn Prevents Effective API Evolution on a New Level” is just so misleading. Create a Java wrapper class for your JavaScript-incompatible API already. Maybe make a blog post of Nashorn design patterns that tells developers how to overcome JavaScript-to-Java inconsistencies instead of trashing Nashorn for the design of JavaScript.

    • So much anger. It’s really just a warning for API developers who care about backwards compatibility: Nashorn introduces a new level of danger. Obviously you’ve taken offense. Why, I don’t know. Maybe you’ll explain?

      • It is a negative reaction to a negative blog post. Expecting Nashorn to perfectly map every feature of the Java language to the JavaScript language when they have some fundamental differences is unreasonable, and saying that Nashorn prevents you from evolving a Java API is such an over-the-top “the sky is falling” response. I wouldn’t even call this a bug; it is a slight surprise in a feature that they didn’t even have to create. Sure, create an RFE to see if the behavior can be improved, but it is like saying that an ORM tool like Hibernate keeps me from updating my database schema. Objects and relational database features will never be 100% mappable. Why do you expect two languages to be 100% mappable?

  2. The notion of binary compatible API evolution in Java refers to compiling new versions of the callee class, without breaking caller classes that were compiled earlier.

    Your desire for binary compatibility is understandable, but it’s important to realize that, unfortunately, there is a fundamental reason why it cannot apply in this situation.

    The reason is that in Java, overload resolution happens during compilation of the calling class, and the result of the resolution is encoded as a concrete chosen signature in the INVOKE instruction in that caller’s .class file.

    In a dynamic language, there is no separate compilation time, just run time, so a computed decision about which overload to choose can never be fixed against a particular version of the called class; the whole notion of binary compatibility cannot be interpreted in its frame of reference. Again, this only arises against precompiled .class artifacts of the callers, and in JS you don’t have such a thing: you only have the JS source code.

    Which finally brings us to the fact that there is a way to pin down a fixed overload in Nashorn: the explicit overload selection mechanism. This is exactly what javac does for you, without you noticing, by emitting the chosen overloaded signature with the INVOKE instruction in the caller’s .class file. But since you don’t have the benefit of a separate compiler step or a .class compilation artifact in JS, if you need or desire to pin down a fixed overload, you will of necessity have to make it explicit in the only place you can: the JS source code.
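Attila’s point about signatures being pinned at the call site can be made tangible in Java itself: the java.lang.invoke API lets you select an overload by spelling out its exact signature, much like Nashorn’s API["test(Integer[])"] syntax. A minimal sketch, with illustrative class and method names (not the jOOQ API):

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class PinnedOverload {
    public static String test(String s) {
        return "String";
    }

    public static String test(Integer... args) {
        return "varargs";
    }

    public static void main(String[] args) throws Throwable {
        // Select the overload by its exact signature, analogously to
        // Nashorn's API["test(Integer[])"](...) and to the signature
        // javac writes into the INVOKE instruction of a caller class.
        MethodHandle mh = MethodHandles.lookup().findStatic(
                PinnedOverload.class, "test",
                MethodType.methodType(String.class, Integer[].class));

        // The handle is bound to test(Integer[]); no resolution
        // happens at the call site any more.
        System.out.println((String) mh.invoke(new Integer[] { 1 }));
    }
}
```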

    • Hi Attila, and thanks very much for joining this debate!

      I actually haven’t even thought of the difference (or obvious lack thereof) between binary compatibility and source compatibility in the context of Nashorn/Java interoperability. Interesting thought – might be worth a follow-up post.

      I understand that explicit overload resolution is the safest way to circumvent this issue at the call site. But there are two sides to this story:

      • The technical story: Once these things are understood, people will know how to get things right.
      • The user story: People are lazy and don’t want to understand everything. They want things to “just work”.

      Again, I completely agree with you on the “technical story,” and for me as an API designer, it’s obviously important to get these things right. But our API consumers, who are also our customers, might go through a bit of frustration if we don’t keep them from discovering these interoperability quirks “the hard way” (as in this case). So the best solution from a jOOQ perspective is now to add more overloads that make the call-site overload resolution “more intuitive,” in the sense of giving the impression that the jOOQ API “just works.”

      By the way: I hope you don’t get me wrong. This is clearly an edge case and I’m positively surprised by how much interoperability seems to work out-of-the-box!
