The Lame Side of Java’s Backwards-Compatibility

Java is a very backwards-compatible language. Very as in very, very, very. It is so backwards compatible that we still carry tons of code that was deprecated back in JDK 1.1. For example, most of the java.util.Date and java.util.Calendar API. Some may argue that it would’ve been easier to deprecate those classes altogether…

But things don’t get better as we approach Java 8. Please observe with me, with a mixture of intrigue and disgust, what is going to be added to the JDBC 4.2 specs:
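
Roughly, the new methods look like this. This is only a sketch of the “large” method family; the default bodies are illustrative placeholders, not the actual OpenJDK source (which is linked below):

    import java.sql.SQLException;

    // Sketch of the long-based additions to java.sql.Statement.
    // The int-based originals stay untouched; each new variant arrives
    // as a Java 8 default method, so existing drivers keep compiling.
    interface StatementSketch {

        // Unchanged since JDK 1.1: the update count is a plain int.
        int executeUpdate(String sql) throws SQLException;

        // New in JDBC 4.2: the same operation, returning long.
        default long executeLargeUpdate(String sql) throws SQLException {
            throw new UnsupportedOperationException("executeLargeUpdate not implemented");
        }

        default long getLargeUpdateCount() throws SQLException {
            throw new UnsupportedOperationException("getLargeUpdateCount not implemented");
        }

        default void setLargeMaxRows(long max) throws SQLException {
            throw new UnsupportedOperationException("setLargeMaxRows not implemented");
        }

        default long[] executeLargeBatch() throws SQLException {
            throw new UnsupportedOperationException("executeLargeBatch not implemented");
        }
    }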

“large”. As in “we should’ve made that a long instead of an int from the very beginning”. Luckily, Java 8 also introduces defender methods (now known as default methods), so the additions could be made backwards-compatibly.

I wonder how many other places in the JDK should now get duplicate methods carrying the “large” term, because in the beginning people chose int over long, back when most processors were still 32-bit and it really did make a difference.

Also, I wonder what will happen when we run out of 64-bit space in the year 2139, as mankind reaches the outskirts of the Milky Way. In order to write the occasional planet-migration script, we’ll have to add things like executeHugeUpdate() to the JDBC specs in Java 11 – if we’re optimistic that Java 11 will have shipped by then ;-)

For more info, you can see the up-to-date OpenJDK source code here:
http://hg.openjdk.java.net/lambda/lambda/jdk/file/tip/src/share/classes/java/sql/Statement.java

8 thoughts on “The Lame Side of Java’s Backwards-Compatibility”

  1. You are right, I hate this too. All these deprecated methods everywhere look pretty bad; Date in particular is almost useless. And both Date and Calendar are full of gotchas.

    http://mindprod.com/jgloss/date.html
    http://mindprod.com/jgloss/gregoriancalendar.html

    I think the problem is that we have broken the rule of encapsulation from the beginning. We should never have cared about the size of integers or floats. We should have a single data type for them, and then let the compiler and JVM deal with the size issues, pretty much as other languages do.

    If that were the case, we could use this universal integer type everywhere without worrying about how big it could become. If the value is small enough, the JVM could use byte, short, or int internally, and as it grows larger it could use int or long, or even BigInteger once it exceeds 64 bits.
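
    A minimal sketch of that idea in today’s Java, purely for illustration (the class name and API are invented here; a real implementation would live inside the JVM rather than in a library):

        import java.math.BigInteger;

        // Hypothetical "universal integer": stays a primitive long until
        // it overflows, then silently widens to BigInteger.
        final class UniversalInt {
            private final long small;     // used while the value fits in 64 bits
            private final BigInteger big; // non-null once we have overflowed

            private UniversalInt(long small, BigInteger big) {
                this.small = small;
                this.big = big;
            }

            static UniversalInt of(long value) {
                return new UniversalInt(value, null);
            }

            UniversalInt add(UniversalInt other) {
                if (big == null && other.big == null) {
                    try {
                        // Math.addExact throws ArithmeticException on overflow
                        return of(Math.addExact(small, other.small));
                    } catch (ArithmeticException overflow) {
                        // fall through to the BigInteger path below
                    }
                }
                return new UniversalInt(0, toBig().add(other.toBig()));
            }

            private BigInteger toBig() {
                return big != null ? big : BigInteger.valueOf(small);
            }

            @Override
            public String toString() {
                return big != null ? big.toString() : Long.toString(small);
            }
        }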

    1. Hmm, I think that “universal” integers would be quite challenging to implement, specifically at the bytecode level. I guess it would be hard to write a decent compiler that doesn’t even know at compile time how much space to allocate on the stack. Would you put everything on the heap? Or reserve a special (highly fragmented) integer area in memory?

      What languages are you referring to in particular?

  2. Well, in dynamic languages you do not declare types, so it is at runtime that the types of values are interpreted. For instance, in languages as diverse as Python or Racket you simply use numbers and you don’t care how they are handled under the hood. Perhaps these are two bad examples, since neither language is distinguished for its speed (maybe because these numbers are handled on the heap, not as primitive types on the stack). A statically-typed language with a similar approach is Haskell, in which you have two types of integers: one called Int with a bounded size, and another called Integer which can grow as big as you want. Under the hood, Int is more performant. Now, you don’t care about performance everywhere you use numbers the way you do in arithmetic-intensive algorithms. My opinion is that perhaps it would be better to have one data type that represents numbers in general, and to offer a library for those who need arithmetic-intensive operations and for whom size does matter.

    1. Yes, that’s right. Well, in the end, it boils down to removing primitive type support as we know it today. In the case of JDBC, we could surely live with a wrapper type “Number” as the result of executeUpdate(). I don’t think we would really care about it being a primitive long after we have successfully updated 5G rows in the database :-)
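
      A hypothetical sketch of that alternative (this interface is not part of JDBC; it only shows the shape of the idea):

          import java.sql.SQLException;

          // Hypothetical alternative: return a Number wrapper instead of a
          // primitive, so the width can grow without ever renaming methods.
          interface NumberReturningStatement {
              Number executeUpdate(String sql) throws SQLException;
          }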

  3. No need for unnecessary sarcasm. For some people Java isn’t just a sandbox, but a mission-critical investment.
    Imagine a Java without backward compatibility for a moment. Unless you want to rewrite your ever-growing code base with every new version, there are two options: version-freeze, or determine which Java version to use per class so you can adopt newer versions gradually. Even with modern build systems, neither option looks appealing. Worse, if you can drop any feature you like, there’s no need anymore to get it (almost) right in the first version.
    Keeping track of which version offers which features would require a Java version specialist, with an OCJVS certificate, of course (“Hey, which versions offered native JSON support, and is there one that supports CORBA at the same time?”)
    Java was advertised as a “robust” language right from the beginning. Backward compatibility is one aspect of robustness. So basically you’re lamenting that the Java guys kept their promise in that respect.
    Besides, there were breaks: e.g., “assert” became a keyword in Java 1.4, and at least one, albeit very rare, source-code incompatibility was introduced in Java 5.
    When it comes to the cited JDBC extensions, I think you are making a mountain out of a molehill: these methods are only necessary for updates that affect more than two billion records in one go. I doubt that the majority of Java developers will have a need for that in the next 20 years.

    1. Thanks for your feedback. Yes, it’s a rant. I occasionally rant on my blog. I have a lot of respect for Java’s backwards-compatibility. 90% of the stuff they did, they got right from the very beginning, which is impressive. But in this case, adding ten-or-so methods with the word “large” in them to fix a primitive return value seems like a very premature workaround to an API problem that – as you put it – doesn’t really affect many users. To me, it just feels as though this addition really doesn’t pull its own weight, even when implemented using default methods.

    2. I ran into this about five years ago when working with an analytic database (data warehousing). Our ETL jobs would often run an INSERT … SELECT or UPDATE query that affected more than two billion rows. Unfortunately, this caused the JDBC driver to fail and roll back the transaction.
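
      For what it’s worth, with a JDBC 4.2 driver the same job can use the new long-based variant. A sketch (table names and query invented for illustration):

          import java.sql.Connection;
          import java.sql.SQLException;
          import java.sql.Statement;

          // With more than Integer.MAX_VALUE affected rows, the int-based
          // executeUpdate() has no correct count to return; the long-based
          // JDBC 4.2 variant does.
          class LargeUpdateExample {
              static long archiveOldEvents(Connection connection) throws SQLException {
                  try (Statement stmt = connection.createStatement()) {
                      return stmt.executeLargeUpdate(
                          "INSERT INTO archive SELECT * FROM events WHERE ts < DATE '2008-01-01'");
                  }
              }
          }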
