jOOQ Tuesdays: Nicolai Parlog Talks About Java 9

Welcome to the jOOQ Tuesdays series. In this series, we’ll publish an article on the third Tuesday every other month where we interview someone we find exciting in our industry from a jOOQ perspective. This includes people who work with SQL, Java, Open Source, and a variety of other related topics.

I’m very excited to feature Nicolai Parlog today, author of The Java Module System.

Nicolai, your blog is an “archeological” treasure trove for everyone who wants to learn about why Java expert group decisions were made. What made you dig out all these interesting discussions on the mailing lists?

Ha, thank you, didn’t know I was sitting on a treasure.

It all started with everyone’s favorite bikeshed: Optional. After using it for a few months, I was curious to learn more about the reason behind its introduction to Java and why it was designed the way it was, so I started digging and learned a few things:

  • Pipermail, the JDK mailing list archive, is a horrible place to peruse and search.
  • Mailing list discussions are often lengthy, fragmented, and thus hard to revisit.
  • Brian Goetz was absolutely right: Everything related to Optional seems to take 300 messages.

Consequently, researching that post about Optional’s design took a week or so. But as you say, it’s interesting to peek behind the curtain and once a discussion is condensed to its most relevant positions and peppered with some context it really appeals to the wider Java community.

I actually think there’s a niche to be filled, here. Imagine there were a site that did regularly (at least once a week) what I did with a few selected topics: Follow the JDK mailing list, summarize ongoing discussions, and make them accessible to a wide audience. That would be a great service to the Java community as it would make it much easier to follow what is going on and to chime in with an informed opinion when you feel you have something to contribute. Now we just need to find someone with a lot of free time on their hands.

By the way, I think it’s awesome that the comparatively open development of the JDK makes that possible.

I had followed your blog after Java 8 came out, where you explained expert group decisions in retrospect. Now, you’re mostly covering what’s new in Java 9. What are your favourite “hidden” (i.e. non-Jigsaw) Java 9 features and why?

From the few language changes, it’s easy pickings: definitely private interface methods. I’ve been in the situation more than once that I wanted to share code between default methods but found no good place to put it without making it part of the public API. With private methods in interfaces, that’s a thing of the past.
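
To illustrate with a quick sketch (made-up names): two default methods sharing a helper that is not part of the interface’s public API, which requires Java 9 or newer.

// A minimal sketch, assuming Java 9+: the private method is invisible
// to both implementors and callers of the interface.
interface Formatter {

    default String formatHeader(String text) {
        return decorate(text.toUpperCase());
    }

    default String formatFooter(String text) {
        return decorate(text.toLowerCase());
    }

    // Private interface method: shared logic, not part of the public API
    private String decorate(String text) {
        return "-- " + text + " --";
    }
}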

When it comes to API changes, the decision is much harder as there is more to choose from. People definitely like the collection factory methods, and I do too, but I think I’ll go with the changes to Stream and Optional. I really enjoy using those Java 8 features and think it’s great that they’ve been improved in 9.
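
To give a flavour of those improvements, here is a quick, non-exhaustive sketch of some of the methods added in Java 9:

import java.util.Optional;
import java.util.stream.Stream;

public class Java9StreamOptionalDemo {
    public static void main(String[] args) {
        // Stream.takeWhile (new in 9): stop at the first element failing the predicate
        Stream.of(1, 2, 3, 10, 4)
              .takeWhile(i -> i < 5)
              .forEach(System.out::println);           // prints 1 2 3

        // Optional.ifPresentOrElse (new in 9): handle present and empty in one call
        Optional.of("jOOQ")
                .ifPresentOrElse(
                    System.out::println,               // prints jOOQ
                    () -> System.out.println("empty"));

        // Optional.stream (new in 9): makes Optionals compose nicely with streams
        long count = Stream.of(Optional.of(1), Optional.<Integer>empty())
                           .flatMap(Optional::stream)
                           .count();                    // 1
        System.out.println(count);
    }
}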

A JVM feature I really like is multi-release JARs. The ability to ship a JAR that uses the newest APIs, but degrades gracefully on older JVMs, will come in very handy. Some projects, Spring for example, already do this, but without JVM support it’s not exactly pleasant.

Can I go on? Because there’s so much more! Just two: Unified logging makes it much easier to tease out JVM log messages without having to configure logging for different subsystems, and compact strings and indified (invokedynamic-based) string concatenation make working with strings faster, reduce garbage, and conserve heap space (on average, 10% to 15% less memory!). Ok, that was three, but there you go.

You’re writing a book on the Java 9 module system that can already be pre-ordered on Manning. What will readers get out of your book?

All they need to become module system experts. Of course it explains all the basics (declaring, compiling, packaging, and running modular applications) and advanced features (services, implied readability, optional dependencies, etc.), but it goes far beyond that. More than just how to use a feature, it also explains when and why to use it, which nuances to consider, and what good defaults are if you’re not sure which way to go.
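
To give a flavour of those advanced features, here is a small module declaration sketch (module and type names are made up):

// module-info.java -- illustrative sketch, names are made up
module com.example.app {
    // implied readability: modules reading com.example.app also read com.example.api
    requires transitive com.example.api;

    // optional dependency: mandatory at compile time, optional at run time
    requires static com.example.extras;

    // services: this module provides an implementation of a service interface
    provides com.example.api.Plugin
        with com.example.app.internal.DefaultPlugin;
}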

It’s also full of practical advice. I migrated two large applications to Java 9 (compiling and running on the new release, not turning them into modules) and that experience, as well as the many discussions on the mailing list, informed a big chapter on migration. If readers are interested in a preview, I condensed it into a post on the most common Java 9 migration challenges. I also show how to debug modules and the module system with various tools (JDeps, for example) and logging (that’s when I started using unified logging). Last but not least, I plan to include a chapter that simply lists error messages and what to do about them.

In your opinion, what are the good parts and the bad parts about Jigsaw? Do you think Jigsaw will be adopted quickly?

The good, the bad, and the ugly, eh? My favorite feature (of all of Java 9, actually) is strong encapsulation. The ability to have types that are public only within a module is incredibly valuable! This adds another option to the private-to-public axis, and once people internalize that feature we will wonder how we ever lived without it. Can you imagine giving up private? We will think the same about exported.
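
In module terms, that looks roughly like this (a sketch with made-up names): only exported packages are visible to other modules, no matter how public their types are.

// module-info.java -- a sketch with made-up names
module com.example.library {
    // public types in this package are accessible to other modules
    exports com.example.library.api;

    // com.example.library.internal is not exported:
    // even its public types stay encapsulated inside the module
}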

I hope the worst aspect of the module system will be the compatibility challenges. That’s a weird way to phrase it, but let me explain. These challenges definitely exist and they will require a non-negligible investment from the Java community as a whole to get everything working on Java 9, and in the long run as modules. (As an aside: this is well-invested time – much of it pays back technical debt.)

My hope is that no other aspect of the module system turns out to be worse. One thing I’m a little concerned about is the strictness of reliable configuration. I like the general principle and I’m definitely one for enforcing good practices, but just think about all those POMs that busily exclude transitive dependencies. Once all those JARs are modules, that won’t work – the module system will not let you launch without all dependencies present.

Generally speaking, the module system makes it harder to go against the maintainers’ decisions. Making internal APIs available via reflection or altering dependencies now goes against the grain of a mechanism that is built deeply into the compiler and JVM. There are of course a number of command line flags to affect the module system, but they don’t cover everything. To come back to excluding dependencies, maybe --ignore-missing-modules ${modules} would be a good idea…

Regarding adoption rate, I expect it to be slower than Java 8. But leaving aside those projects that see every new version as insurmountable and are still on Java 6, I’m sure the vast majority will migrate eventually. If not for Java 9’s features, then surely for future ones. As a friend and colleague once said: “I’ll do everything to get to value types.”

Now that Java 9 is out and “legacy”, what Java projects will you cover next in your blog and your work?

Oh boy, I’m still busy with Java 9. First I have to finish the book (November hopefully) and then I want to do a few more migrations because I actually like doing that for some weird and maybe not entirely healthy reason (the things you see…). FYI, I’m for hire, so if readers are stuck with their migration they should reach out.

Beyond that, I’m already looking forward to primitive specialization, e.g. ArrayList<int>, and value types (both from Project Valhalla) as well as the changes Project Amber will bring to Java. I’m sure I’ll start discussing those in 2018.

Another thing I’ll keep myself busy with, and which I would love your readers to check out, is my YouTube channel. It’s still very young and until the book’s done I won’t do a lot of videos (I hope to record one next week), but I’m really thrilled about the whole endeavour!

Benchmarking JDK String.replace() vs Apache Commons StringUtils.replace()

What’s better? Using the JDK’s String.replace() or something like Apache Commons Lang’s StringUtils.replace()?

In this article, I’ll compare the two, first in a profiling session using Java Mission Control (JMC), then in a benchmark using JMH, and we’ll see that Java 9 heavily improved things in this area.

Profiling using JMC

In a recent profiling session where I checked for any “obvious” bottlenecks in jOOQ, I’ve discovered this nasty regular expression pattern instantiation:

Tons of int[] instances were allocated by a regular expression pattern. That’s weird, because in general, inside of jOOQ’s internals, special care is always taken to pre-compile any regular expressions that are needed in static members, e.g.:

private static final Pattern TYPE_NAME_PATTERN = 
  Pattern.compile("\\([^\\)]*\\)");

This allows for using the Pattern in a far more optimal way than e.g. by using String.replaceAll():

// Much better, pattern is pre-compiled
TYPE_NAME_PATTERN.matcher(castTypeName).replaceAll("")

// Much worse, pattern is compiled *every time*
castTypeName.replaceAll("\\([^\\)]*\\)", "")

That should be clear to everyone. The price to pay for this is the fact that the pattern is stored “far away” in some static member, rather than being visible right where it is used, which is a bit less readable. At least in my opinion.

SIDENOTE: People tend to get all angry about premature optimisation and such. Yes, these optimisations are micro optimisations and aren’t always worth the trouble. But this article is about jOOQ, a library that does a lot of expression tree transformations, and it is important for jOOQ to eliminate even 1% “bottlenecks”, as they make a difference. So, please read this article in this context.

Consider also our previous post about this subject: Top 10 Easy Performance Optimisations in Java

What was the problem in jOOQ?

Now, what appears to be obvious when using regular expressions seems less obvious when using ordinary, constant string replacements, such as when calling String.replace(CharSequence), as was done in the linked jOOQ issue #6672. The relevant piece of code was escaping all inline strings that are sent to the SQL database, to prevent syntax errors and, of course, SQL injection:

static final String escape(Object val, Context<?> context) {
    String result = val.toString();

    if (needsBackslashEscaping(context.configuration()))
        result = result.replace("\\", "\\\\");

    return result.replace("'", "''");
}

We’re always escaping apostrophes by doubling them, and in some databases (e.g. MySQL), we often have to escape backslashes as well (unfortunately, not all ORMs seem to do this or even be aware of this MySQL “feature”).

Unfortunately as well, despite heavy use of Apache Commons Lang’s StringUtils.replace() in jOOQ’s internals, every now and then a String.replace(CharSequence) sneaks in, because it’s just so convenient to write.

Meh, does it matter?

Usually, in ordinary business logic, it shouldn’t (again – don’t optimise prematurely), but in jOOQ, which is essentially a SQL string manipulation library, it can get quite costly if a single replace call is made excessively (for good reasons, of course) and is slower than it should be. And prior to Java 9, when this method was optimised, it was indeed slower than it should be. I’ve done the profiling with Java 8, where internally, String.replace() uses a literal regex pattern (i.e. a pattern with a “literal” flag that is faster, but a pattern nonetheless).
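
For reference, the Java 8 implementation delegates to the regex engine roughly like this (a paraphrase of the OpenJDK 8 sources; check your JDK for the exact code):

// java.lang.String (OpenJDK 8), paraphrased
// (uses java.util.regex.Pattern / Matcher)
public String replace(CharSequence target, CharSequence replacement) {
    return Pattern
        .compile(target.toString(), Pattern.LITERAL)  // compiled on every call!
        .matcher(this)
        .replaceAll(Matcher.quoteReplacement(replacement.toString()));
}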

Not only does the method appear as a major offender in the GC allocation view, it also triggers quite some action in the “hot methods” view of JMC:

Those are quite a few Pattern methods. The percentages have to be understood in the context of a benchmark, running millions of queries against an H2 in-memory database, so the overhead is significant!

Using Apache Commons Lang’s StringUtils

A simple fix is to use Apache Commons Lang’s StringUtils instead:

static final String escape(Object val, Context<?> context) {
    String result = val.toString();

    if (needsBackslashEscaping(context.configuration()))
        result = StringUtils.replace(result, "\\", "\\\\");

    return StringUtils.replace(result, "'", "''");
}

Now, the pressure has changed significantly. The int[] allocation is barely noticeable in comparison:

And far fewer Pattern calls are made overall.

Benchmarking using JMH

Profiling can be very useful to spot bottlenecks, but it needs to be read with care. It introduces some artefacts and slight overheads, and it is not 100% accurate when sampling call stacks, which might lead to the wrong conclusions at times. This is why it is sometimes important to back claims by running an actual benchmark. And when benchmarking, please, don’t just loop 1 million times in a main() method. That will be very, very inaccurate, except for very obvious, order-of-magnitude scale differences.

I’m using JMH here, running the following simple benchmark:

package org.jooq.test.benchmark;

import org.apache.commons.lang3.StringUtils;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Warmup;
import org.openjdk.jmh.infra.Blackhole;

@Fork(value = 3, jvmArgsAppend = "-Djmh.stack.lines=3")
@Warmup(iterations = 5)
@Measurement(iterations = 7)
public class StringReplaceBenchmark {

    private static final String SHORT_STRING_NO_MATCH = "abc";
    private static final String SHORT_STRING_ONE_MATCH = "a'bc";
    private static final String SHORT_STRING_SEVERAL_MATCHES = "'a'b'c'";
    private static final String LONG_STRING_NO_MATCH = 
      "abcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabc";
    private static final String LONG_STRING_ONE_MATCH = 
      "abcabcabcabcabcabcabcabcabcabcabca'bcabcabcabcabcabcabcabcabcabcabcabcabc";
    private static final String LONG_STRING_SEVERAL_MATCHES = 
      "abcabca'bcabcabcabcabcabc'abcabcabca'bcabcabcabcabcabca'bcabcabcabcabcabcabc";

    @Benchmark
    public void testStringReplaceShortStringNoMatch(Blackhole blackhole) {
        blackhole.consume(SHORT_STRING_NO_MATCH.replace("'", "''"));
    }

    @Benchmark
    public void testStringReplaceLongStringNoMatch(Blackhole blackhole) {
        blackhole.consume(LONG_STRING_NO_MATCH.replace("'", "''"));
    }

    @Benchmark
    public void testStringReplaceShortStringOneMatch(Blackhole blackhole) {
        blackhole.consume(SHORT_STRING_ONE_MATCH.replace("'", "''"));
    }

    @Benchmark
    public void testStringReplaceLongStringOneMatch(Blackhole blackhole) {
        blackhole.consume(LONG_STRING_ONE_MATCH.replace("'", "''"));
    }

    @Benchmark
    public void testStringReplaceShortStringSeveralMatches(Blackhole blackhole) {
        blackhole.consume(SHORT_STRING_SEVERAL_MATCHES.replace("'", "''"));
    }

    @Benchmark
    public void testStringReplaceLongStringSeveralMatches(Blackhole blackhole) {
        blackhole.consume(LONG_STRING_SEVERAL_MATCHES.replace("'", "''"));
    }

    @Benchmark
    public void testStringUtilsReplaceShortStringNoMatch(Blackhole blackhole) {
        blackhole.consume(StringUtils.replace(SHORT_STRING_NO_MATCH, "'", "''"));
    }

    @Benchmark
    public void testStringUtilsReplaceLongStringNoMatch(Blackhole blackhole) {
        blackhole.consume(StringUtils.replace(LONG_STRING_NO_MATCH, "'", "''"));
    }

    @Benchmark
    public void testStringUtilsReplaceShortStringOneMatch(Blackhole blackhole) {
        blackhole.consume(StringUtils.replace(SHORT_STRING_ONE_MATCH, "'", "''"));
    }

    @Benchmark
    public void testStringUtilsReplaceLongStringOneMatch(Blackhole blackhole) {
        blackhole.consume(StringUtils.replace(LONG_STRING_ONE_MATCH, "'", "''"));
    }

    @Benchmark
    public void testStringUtilsReplaceShortStringSeveralMatches(Blackhole blackhole) {
        blackhole.consume(StringUtils.replace(SHORT_STRING_SEVERAL_MATCHES, "'", "''"));
    }

    @Benchmark
    public void testStringUtilsReplaceLongStringSeveralMatches(Blackhole blackhole) {
        blackhole.consume(StringUtils.replace(LONG_STRING_SEVERAL_MATCHES, "'", "''"));
    }
}

Notice that I tried to run 2 x 3 different string replacement scenarios:

  • The string is “short”
  • The string is “long”

Cross joining (there, finally some SQL in this post!) the above with:

  • No match is found
  • One match is found
  • Several matches are found

That’s important because different optimisations can be implemented for those different cases, and in jOOQ’s case, there is probably mostly no match.

I ran this benchmark once on Java 8:

$ java -version
java version "1.8.0_141"
Java(TM) SE Runtime Environment (build 1.8.0_141-b15)
Java HotSpot(TM) 64-Bit Server VM (build 25.141-b15, mixed mode)

And on Java 9:

$ java -version
java version "9"
Java(TM) SE Runtime Environment (build 9+181)
Java HotSpot(TM) 64-Bit Server VM (build 9+181, mixed mode)

Tagir Valeev was kind enough to remind me that this issue was supposed to be fixed in Java 9.

The results are:

Java 8

testStringReplaceLongStringNoMatch               thrpt   21    4809343.940 ±  66443.628  ops/s
testStringUtilsReplaceLongStringNoMatch          thrpt   21   25063493.793 ± 660657.256  ops/s

testStringReplaceLongStringOneMatch              thrpt   21    1406989.855 ±  43051.008  ops/s
testStringUtilsReplaceLongStringOneMatch         thrpt   21    6961669.111 ± 141504.827  ops/s

testStringReplaceLongStringSeveralMatches        thrpt   21    1103323.491 ±  17047.449  ops/s
testStringUtilsReplaceLongStringSeveralMatches   thrpt   21    3899108.777 ±  41854.636  ops/s

testStringReplaceShortStringNoMatch              thrpt   21    5936992.874 ±  68115.030  ops/s
testStringUtilsReplaceShortStringNoMatch         thrpt   21  171660973.829 ± 377711.864  ops/s

testStringReplaceShortStringOneMatch             thrpt   21    3267435.957 ± 240198.763  ops/s
testStringUtilsReplaceShortStringOneMatch        thrpt   21    9943846.428 ± 270821.641  ops/s

testStringReplaceShortStringSeveralMatches       thrpt   21    2313713.015 ±  28806.738  ops/s
testStringUtilsReplaceShortStringSeveralMatches  thrpt   21    5447065.933 ± 139525.472  ops/s

As can be seen, the difference is “catastrophic”. Apache Commons Lang’s StringUtils drastically outperforms the JDK’s String.replace() in every discipline, especially when no match is found in a short string! That’s because the library optimises for this particular case:

...
int end = searchText.indexOf(searchString, start);
if (end == INDEX_NOT_FOUND) {
    return text;
}

Java 9

Things look a bit different for Java 9:

testStringReplaceLongStringNoMatch               thrpt   21   55528132.674 ±  479721.812  ops/s
testStringUtilsReplaceLongStringNoMatch          thrpt   21   55767541.806 ±  754862.755  ops/s

testStringReplaceLongStringOneMatch              thrpt   21    4806322.839 ±  217538.714  ops/s
testStringUtilsReplaceLongStringOneMatch         thrpt   21    8366539.616 ±  142757.888  ops/s

testStringReplaceLongStringSeveralMatches        thrpt   21    2685134.029 ±   78108.171  ops/s
testStringUtilsReplaceLongStringSeveralMatches   thrpt   21    3923819.576 ±  351103.020  ops/s

testStringReplaceShortStringNoMatch              thrpt   21  122398496.629 ± 1350086.256  ops/s
testStringUtilsReplaceShortStringNoMatch         thrpt   21  121139633.453 ± 2756892.669  ops/s

testStringReplaceShortStringOneMatch             thrpt   21   18070522.151 ±  498663.835  ops/s
testStringUtilsReplaceShortStringOneMatch        thrpt   21   11367395.622 ±  153377.552  ops/s

testStringReplaceShortStringSeveralMatches       thrpt   21    7548407.681 ±  168950.209  ops/s
testStringUtilsReplaceShortStringSeveralMatches  thrpt   21    5045065.948 ±  175251.545  ops/s

Java 9’s implementation is now similar to that of Apache Commons, with the same optimisation for non-matches:

public String replace(CharSequence target, CharSequence replacement) {
    String tgtStr = target.toString();
    String replStr = replacement.toString();
    int j = indexOf(tgtStr);
    if (j < 0) {
        return this;
    }
    ...

It is still quite a bit slower for matches in long strings, but faster for matches in short strings. The tradeoff for jOOQ will be to still prefer Apache Commons because:

  • Most people are still on Java 8 or less, currently
  • Most replacements won’t match and both implementations fare equally well for that in Java 9, but Apache Commons is much faster for this category in Java 8
  • If there’s a match and thus a replacement, the speed depends on the string length, where the faster implementation is currently undecided

Conclusion

This micro optimisation stuff matters in jOOQ because jOOQ is a library that does a lot of SQL string manipulation. Every allocation and every CPU cycle that is wasted when manipulating SQL strings slows down the library, and thus impacts all of its users. In a situation like this, it is definitely worth considering not using these useful JDK String methods, and opting for the much faster Apache Commons implementations instead.

Things have improved a lot in Java 9, in which case this can mostly be ignored. But if you still need to support Java 8 (we still support Java 6 in our commercial distributions!), then this has to be considered.

Watch Out For Recursion in Java 8’s [Primitive]Stream.iterate()

An interesting question by Tagir Valeev on Stack Overflow has recently caught my attention. To keep things short (read the question for details), while the following code works:

public static Stream<Long> longs() {
    return Stream.iterate(1L, i ->
        1L + longs().skip(i - 1L)
                    .findFirst()
                    .get());
}

longs().limit(5).forEach(System.out::println);

printing

1
2
3
4
5

The following, similar code won’t work:

public static LongStream longs() {
    return LongStream.iterate(1L, i ->
        1L + longs().skip(i - 1L)
                    .findFirst()
                    .getAsLong());
}

This causes a StackOverflowError.

Sure, this kind of recursive iteration is not optimal. It wasn’t prior to Java 8 and it certainly isn’t with the new APIs either. But one might think it should at least work, right? The reason it doesn’t work is a subtle implementation difference between the two iterate() methods in Java 8. The reference type stream’s Iterator first returns the seed and only then proceeds with iterating by applying the iteration function to the previous value:

final Iterator<T> iterator = new Iterator<T>() {
    @SuppressWarnings("unchecked")
    T t = (T) Streams.NONE;

    @Override
    public boolean hasNext() {
        return true;
    }

    @Override
    public T next() {
        return t = (t == Streams.NONE) ? seed : f.apply(t);
    }
};

This is not the case for the LongStream.iterate() version (and other primitive streams):

final PrimitiveIterator.OfLong iterator = new PrimitiveIterator.OfLong() {
    long t = seed;

    @Override
    public boolean hasNext() {
        return true;
    }

    @Override
    public long nextLong() {
        long v = t;
        t = f.applyAsLong(t);
        return v;
    }
};

Here, the next value is pre-fetched by applying the iteration function one step in advance. This is usually not a problem, but it can lead to

  1. Optimisation issues when the iteration function is expensive
  2. Infinite recursions when the iterator is used recursively

As a workaround, it might be best to simply avoid recursion with this method in primitive type streams. Luckily, a fix in JDK 9 is already on its way (as a side effect of a feature enhancement):
https://bugs.openjdk.java.net/browse/JDK-8072727
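
Until then, one possible workaround (a sketch, with a made-up helper name) is to keep the recursion on the reference-type stream, whose iterator is lazy about the seed, and only expose a primitive view at the very end:

// Sketch of a workaround: recurse on Stream<Long>, convert lazily to LongStream
// (uses java.util.stream.Stream and java.util.stream.LongStream)
public static Stream<Long> boxedLongs() {
    return Stream.iterate(1L, i ->
        1L + boxedLongs().skip(i - 1L)
                         .findFirst()
                         .get());
}

public static LongStream longs() {
    return boxedLongs().mapToLong(Long::longValue);
}

// longs().limit(5).forEach(System.out::println); // prints 1 2 3 4 5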

How to Support Java 6, 8, 9 in a Single API

With jOOQ 3.7, we have finally added formal support for Java 8 features. This opened the door to a lot of nice improvements, such as:

Creating result streams

try (Stream<Record2<String, String>> stream =
     DSL.using(configuration)
        .select(FIRST_NAME, LAST_NAME)
        .from(PERSON)
        .stream()) {

    List<String> people =
    stream.map(p -> p.value1() + " " + p.value2())
          .collect(Collectors.toList());
}

Calling statements asynchronously (jOOQ 3.8+)

CompletionStage<Record> result =
DSL.using(configuration)
   .select(...)
   .from(COMPLEX_TABLE)
   .fetchAsync();

result.thenCompose(r -> ...);

But obviously, we didn’t want to disappoint our paying customers who are stuck with Java 6 because they’re using an older application server, etc.

How to support several Java versions in a single API

This is why we continue publishing a Java 6 version of jOOQ for our commercial customers. How did we do it? Very easily. Our commercial code base (which is our main code base) contains tons of “flags” as in the following example:

public interface Query 
extends 
    QueryPart, 
    Attachable 
    /* [java-8] */, AutoCloseable /* [/java-8] */ 
{

    int execute() throws DataAccessException;

    /* [java-8] */
    CompletionStage<Integer> executeAsync();
    CompletionStage<Integer> executeAsync(Executor executor);
    /* [/java-8] */

}

(Sure, AutoCloseable was already available in Java 7, but we don’t have a Java 7 version.)

When we build jOOQ, we build it several times after using a preprocessor to strip logic from the source files:

  • The commercial Java 8 version is built first as is
  • The commercial Java 6 version is built second by stripping all the code between [java-8] and [/java-8] markers
  • The commercial free trial version is built by adding some code to the commercial version
  • The open source version is built third by stripping all the code between [pro] and [/pro] markers
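
For illustration, such a preprocessing step can be as simple as the following sketch (this is not jOOQ’s actual build tooling, just the idea):

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.regex.Pattern;

// Sketch only: strips everything between /* [java-8] */ and /* [/java-8] */
// markers (including the markers themselves) from a source file.
public class StripJava8Blocks {
    private static final Pattern JAVA_8_BLOCK = Pattern.compile(
        "/\\* \\[java-8\\] \\*/.*?/\\* \\[/java-8\\] \\*/", Pattern.DOTALL);

    public static void main(String[] args) throws Exception {
        String source = new String(Files.readAllBytes(Paths.get(args[0])), "UTF-8");
        String stripped = JAVA_8_BLOCK.matcher(source).replaceAll("");
        Files.write(Paths.get(args[1]), stripped.getBytes("UTF-8"));
    }
}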

Advantages of this approach

There are several advantages of this approach compared to others:

  • We only have a single source of truth, the original commercial source code.
  • The line numbers are the same in all different versions
  • The APIs are compatible to a certain extent
  • No magic is involved via class loading or reflection

The disadvantages are:

  • Committing to repositories is a bit slower as we have several repositories.
  • Publishing releases takes longer as the different versions need to be built and integration tested several times
  • Sometimes, we simply forget to add a marker and have to re-build again when the Java 6 nightly build crashes
  • We still cannot use lambda expressions in ordinary code that is contained in the Java 6 version (most code)

In our opinion, the advantages clearly outweigh the disadvantages. It’s OK if we can’t implement top-notch Java features as long as our customers can, and as long as those customers who are stuck with old versions can still upgrade to the latest jOOQ version.

We’re looking forward to supporting JDK 9 features, like modularity and the new Flow API, without any compromise for existing users.

What about you?

How do you approach cross JDK version compatibility?

JEP 277 “Enhanced Deprecation” is Nice. But Here’s a Much Better Alternative

Maintaining APIs is hard.

We’re maintaining the jOOQ API which is extremely complex. But we are following relatively relaxed rules as far as semantic versioning is concerned.

When I read comments by Brian Goetz and others about maintaining backwards-compatibility in the JDK, I cannot but show a lot of respect for their work. Obviously, we all wish that things like Vector, Stack, Hashtable were finally removed, but there are backwards-compatibility related edge cases around the collections API that ordinary mortals will never think of. For instance: Why aren’t Java Collections remove methods generic?

Better Deprecation

Stuart Marks aka Dr Deprecator

With Java 9, Jigsaw, and modularity, one of the main driving goals for the new features is to be able to “cut off” parts of the JDK and gently deprecate and remove them over the next releases. And as a part of this improvement, Stuart Marks AKA Dr Deprecator has suggested JEP 277: “Enhanced Deprecation”

The idea is to enhance the @Deprecated annotation with some additional info, such as:

  • UNSPECIFIED. This API has been deprecated without any reason having been given. This is the default value; everything that’s deprecated today implicitly has a deprecation reason of UNSPECIFIED.
  • CONDEMNED. This API is earmarked for removal in a future JDK release. Note that the word “condemned” is used here in the sense of a structure that is intended to be torn down. The term is not meant to imply any moral censure.
  • DANGEROUS. Use of this API can lead to data loss, deadlock, security vulnerability, incorrect results, or loss of JVM integrity.
  • OBSOLETE. This API is no longer necessary, and usages should be removed. No replacement API exists. Note that OBSOLETE APIs might or might not be marked CONDEMNED.
  • SUPERSEDED. This API has been replaced by a newer API, and usages should be migrated away from this API to the newer API. Note that SUPERSEDED APIs might or might not be marked CONDEMNED.
  • UNIMPLEMENTED. Calling this has no effect or will unconditionally throw an exception.
  • EXPERIMENTAL. This API is not a stable part of the specification, and it may change incompatibly or disappear at any time.

When deprecating stuff, it’s important to be able to communicate the intent of the deprecation. This can be achieved as well via the @deprecated Javadoc tag, where any sort of text can be generated.
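
For instance, something along these lines (a generic sketch, not taken from the JDK; NewThing is a made-up replacement type):

/**
 * Does the old thing.
 *
 * @deprecated Use {@link NewThing#doIt()} instead; this method will be
 *             removed in a future release.
 */
@Deprecated
public void doOldThing() {
    // ...
}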

An alternative, much better solution

The above proposition suffers from the following problems:

  • It’s not extensible. The above may be enough for JDK library designers, but we as third party API providers will want to have many more elements in the enum, other than CONDEMNED, DANGEROUS, etc.
  • Still no plain text info. There is still redundancy between this annotation and the Javadoc tag as we can still not formally provide any text to the annotation that clarifies, e.g. the motivation of why something is “DANGEROUS”.
  • “Deprecated” is wrong. The idea of marking something UNIMPLEMENTED or EXPERIMENTAL as “deprecated” shows the workaround-y nature of this JEP, which tries to shoehorn some new functionality into existing names.

I have a feeling that the JEP is just too afraid to touch too many parts. Yet, there would be an extremely simple alternative that is much much better for everyone:

public @interface Warning {
    String name() default "warning";
    String description() default "";
} 

There’s no need to constrain the number of possible warning types to a limited list of constants. Instead, we can have a @Warning annotation that takes any string!

Of course, the JDK could have a set of well-known string values, such as:

public interface ResultSet {

    @Deprecated
    @Warning(name="OBSOLETE")
    InputStream getUnicodeStream(int columnIndex);

}

or…

public interface Collection<E> {

    @Warning(name="OPTIONAL")
    boolean remove(Object o);
}

Notice that while JDBC’s ResultSet.getUnicodeStream() is really deprecated in the sense of being “OBSOLETE”, we could also add a hint to the Collection.remove() method, which applies only to the Collection type, not to many of its subtypes.

Now, the interesting thing with such an approach is that we could also enhance the useful @SuppressWarnings annotation, because sometimes, we simply KnowWhatWeAreDoing™, e.g. when writing things like:

Collection<Integer> collection = new ArrayList<>();

// Compiler!! Stop bitching
@SuppressWarnings("OPTIONAL")
boolean ok = collection.remove(1);

This approach would solve many problems in one go:

  • The JDK maintainers have what they want. Nice tooling for gently deprecating JDK stuff
  • The not-so-well documented mess around what’s possible to do with @SuppressWarnings would finally be a bit more clean and formal
  • We could emit tons of custom warnings to our users, depending on a variety of use-cases
  • Users could mute warnings on a very fine-grained level

For instance: A motivation for jOOQ would be to disambiguate the DSL equal() method from the unfortunate Object.equals() method:

public interface Field<T> {

    /**
     * <code>this = value</code>.
     */
    Condition equal(T value);

    /**
     * <strong>Watch out! This is 
     * {@link Object#equals(Object)}, 
     * not a jOOQ DSL feature!</strong>
     */
    @Override
    @Warning(
        name = "ACCIDENTAL_EQUALS",
        description = "Did you mean Field.equal?"
    )
    boolean equals(Object other);
}

The background of this use-case is described here:
https://github.com/jOOQ/jOOQ/issues/4763

Conclusion

JEP 277 is useful, no doubt. But it is also very limited in scope (probably so as not to further delay Jigsaw?). Yet, I wish this topic of generating these kinds of compiler warnings were dealt with more thoroughly by the JDK maintainers. This is a great opportunity to DoTheRightThing™.

I don’t think the above “spec” is complete. It’s just a rough idea. But I have wished for such a mechanism many, many times as an API designer, to be able to give users a hint about potential API misuse, which they can mute either via:

  • @SuppressWarnings, directly in the code.
  • Easy to implement IDE settings. It would be really simple for Eclipse, NetBeans, and IntelliJ to implement custom warning handling for these things.

Once we do have a @Warning annotation, we can perhaps finally deprecate the not-so-useful @Deprecated:

@Warning(name = "OBSOLETE")
public @interface Deprecated {
}


What the sun.misc.Unsafe Misery Teaches Us

Oracle will remove the internal sun.misc.Unsafe class in Java 9. While most people are probably rather indifferent regarding this change, some other people – mostly library developers – are not. There have been a couple of recent articles in the blogosphere painting a dark picture of what this change will imply.

Maintaining a public API is extremely difficult, especially when the API is as popular as that of the JDK. There is simply (almost) no way to keep people from shooting themselves in the foot. Oracle (and previously Sun) have always declared the sun.* packages as internal and not to be used. Citing from the page called “Why Developers Should Not Write Programs That Call ‘sun’ Packages”:

The sun.* packages are not part of the supported, public interface.

A Java program that directly calls into sun.* packages is not guaranteed to work on all Java-compatible platforms. In fact, such a program is not guaranteed to work even in future versions on the same platform.

This disclaimer is just one out of many similar disclaimers and warnings. Whoever goes ahead and uses Unsafe does so … “unsafely”.

What do we learn from this?

The concrete solution to this misery is still being discussed and remains open. A good idea would be to provide a formal and public replacement before removing Unsafe, in order to allow for migration paths for the offending libraries.

But there’s a more important message to all of this. The message is:

When all you have is a hammer, every problem looks like a thumb

Translated to this situation: the hammer is Unsafe, and given that it’s a very poor hammer – but the only option – library developers might just not have had much of a choice. They’re not really to blame. In fact, they took a gamble in one of the world’s most stable and backwards compatible software environments (= Java) and they fared extremely well for more than 10 years. Would you have made a different choice in a similar situation? Or, let me ask differently: was betting on AWT or Swing a much safer choice at the time?

If something can somehow be used by someone, then it will be, no matter how obviously they’re gonna shoot themselves in the foot. The only way to currently write a library / API and really prevent users from accessing internals is to put everything in a single package and make everything package-private. This is what we’ve been doing in jOOQ from the beginning, knowing that jOOQ’s internals are extremely delicate and subject to change all the time.
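
The pattern looks roughly like this (a sketch with made-up names): one package, one public entry point, everything else package-private.

// Library.java -- sketch: only the entry point is public, the worker class
// is package-private and therefore invisible outside org.example
package org.example;

public final class Library {
    public static String doSomething() {
        return new InternalWorker().work();
    }
}

final class InternalWorker {
    String work() {
        return "result";
    }
}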

However, this solution has a severe drawback for those developing those internals. It’s a hell of a package with almost no structure. That makes development rather difficult.

What would be a better Java, then?

Java has always had an insufficient set of visibilities:

  • public
  • protected
  • default (package-private)
  • private

There should be a fifth visibility that behaves like public but prevents access from “outside” of a module. In a way, that’s between the existing public and default visibilities. Let’s call this the hypothetical module visibility.

In fact, not only should we be able to declare this visibility on a class or member, we should be able to govern module inter-dependencies on a top level, just like the Ceylon language allows us to do:

module org.hibernate "3.0.0.beta" {
    import ceylon.collection "1.0.0";
    import java.base "7";
    shared import java.jdbc "7";
}

This reads very similar to OSGi’s bundle system, where bundles can be imported / exported, although the above module syntax is much much simpler than configuring OSGi.

A sophisticated module system would go even further. Not only would it match OSGi’s features, it would also match those of Maven. With the possibility of declaring dependencies on a Java language module basis, we might no longer need the XML-based Maven descriptors, as those could be generated from a simple module syntax (or Gradle, or ant/ivy).

And with all of this in place, classes like sun.misc.Unsafe could be declared as module-visible for only a few JDK modules – not the whole world. I’m sure the number of people abusing reflection to get a hold of those internals would decrease by 50%.
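
In module-system terms, that would amount to something like a qualified export, restricting a package to an explicit list of friend modules (a sketch; module and package names are illustrative, not the real JDK layout):

// module-info.java -- sketch: the package is visible only to the listed
// "friend" modules; everyone else is locked out (names are illustrative)
module jdk.unsupported.example {
    exports sun.misc.example to jdk.internal.client, com.trusted.framework;
}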

Conclusion

I do hope that in a future Java, this Ceylon language feature (and also Fantom language feature, btw) will be incorporated into the Java language. A nice overview of Java 9 / Jigsaw’s modular encapsulation can be seen in this blog post:

The Features Project Jigsaw Brings To Java 9

Until then, if you’re an API designer, do know that no amount of disclaimers will work. Your internal APIs will be used and abused by your clients. They’re part of your ordinary public API from day 1 after you publish them. It’s not your users’ fault. That’s how things work.