jOOQ Tuesdays: Daniel Dietrich Explains the Benefits of Object-Functional Programming

Welcome to the jOOQ Tuesdays series. In this series, we’ll publish an article on the third Tuesday every other month where we interview someone we find exciting in our industry from a jOOQ perspective. This includes people who work with SQL, Java, Open Source, and a variety of other related topics.


I’m very excited to feature Daniel Dietrich today, whose popular library JΛVΛSLΛNG is picking up a lot of momentum among functional programming aficionados working with Java.

Daniel, you created JΛVΛSLΛNG – Object-Functional Programming in Java, a library that is becoming more and more popular among functional programmers. Why is Javaslang so popular?

Thank you Lukas for giving me the opportunity to share my thoughts.

I think that many users were disappointed with Java 8 on the whole, especially those who are already familiar with more advanced languages. The Java language architects did an awesome job. Java 8 brought groundbreaking new features like Lambdas, the new Stream API and CompletableFuture. But the new abstractions were only poorly integrated into the language from an API perspective.

There is already a growing number of write-ups about the disadvantages of Java 8, starting with the drawbacks of the Optional type. We read that we have to take care when using parallel Streams. These are self-made problems that keep us busy and steal our precious time. Javaslang provides us with alternatives.

There is no reason to reinvent the wheel. My vision is to bring as much as possible of the Scala goodness to Java. In fact, Scala emerged from Java in the form of the Pizza language. Back in 2001 it had features like generics, function pointers (aka lambdas), case classes (aka value types) and pattern matching. In 2004 Java got generics, in 2014 came lambdas, and hopefully Java 10 will include value types. Scala left Java far behind; it used the last 15 years to evolve.

Object-functional programming is nothing new. It combines the best of both worlds, object-oriented programming and functional programming. Scala is one of the better choices to do it on the JVM. Java’s Lambdas are an enabling feature. They allowed us to create a Javaslang API that is similar to Scala’s.

Java developers who get their hands on Javaslang often react in a way that I call the nice-effect: “Wow that’s nice, it feels like Scala”.

You published a guest post about Javaslang on the jOOQ blog more than a year ago. Since then, Javaslang has moved forward quite a bit and you’ve recently published the roadmap for version 3.0. What have you done since then and where are you going?

Yes, that is true, it has changed a lot since then. We released Javaslang 1.2.2 two weeks before the first jOOQ guest post went online. Besides enriched functions, that release offered popular Scala features like Option for null-safety, Try for performing computations headache-free in the presence of exceptions, and a fluent pattern matching DSL. Also notably, we shipped two common persistent collections: an eagerly evaluated linked List and its lazy counterpart, also called Stream.
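
To give a feel for these types, here is a small illustrative sketch (my own example, based on the Javaslang 2.x API, so treat the details as approximate):

import javaslang.control.Option;
import javaslang.control.Try;

public class OptionTryExample {
    public static void main(String[] args) {
        // Option: an explicit alternative to null checks
        Option<String> name = Option.of(System.getProperty("user.name"));
        String greeting = name.map(n -> "Hello, " + n).getOrElse("Hello, stranger");

        // Try: computations that may throw, without try/catch noise
        int port = Try.of(() -> Integer.parseInt(System.getProperty("port")))
                      .getOrElse(8080);

        System.out.println(greeting + " on port " + port);
    }
}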

Roughly one year later we released Javaslang 2.0.0. We hardened the existing features and most notably included Future and Promise for concurrent programming and a full-fledged, Scala-like persistent collection library. Besides that, we replaced the pattern matching DSL with a more powerful pattern matching API that allows us to recursively match arbitrary object trees.
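
As a rough idea of what that pattern matching API looks like, here is a minimal sketch using the static imports from javaslang.API (my own example; treat the method names as approximate):

import static javaslang.API.$;
import static javaslang.API.Case;
import static javaslang.API.Match;

public class MatchExample {
    static String describe(int i) {
        return Match(i).of(
            Case($(1), "one"),
            Case($(2), "two"),
            Case($(), "something else")   // $() is the wildcard case
        );
    }

    public static void main(String[] args) {
        System.out.println(describe(1));   // one
        System.out.println(describe(7));   // something else
    }
}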

I spent a significant amount of time and energy abstracting over the mentioned features at the type level, as far as this is possible in Java. For Java developers it is not important to call things monads, sum-types or products. For example, we do not need to know group theory in order to calculate 1 + 1. My duty as a library developer is to make it as simple as possible for users of Javaslang to reach their goals. The need to learn new APIs and DSLs should be reduced to the minimum. This is the main reason for aligning Javaslang to Scala.

Our efforts for the next release concentrate on adding more syntactic sugar and persistent collections that are still missing, even beyond those of Scala. It will be sufficient to add one import to reach 90% of Javaslang’s API. There will be new persistent collections: BitSet, several MultiMaps and a PriorityQueue. We are improving the performance of our collections, most notably our persistent Vector. It will be faster than Java’s Stream for some operations and have a smaller memory footprint than Java’s ArrayList for primitive elements.

Beyond library features we pay special attention to three things: backward compatibility, controlled growth and integration aspects. The web is important. Our Jackson module ensures that all Javaslang types can be sent over the wire as serialized JSON. The next release will include a GWT module; first tests already run Javaslang in the browser. However, the Javaslang core will stay thin. It will not depend on any libraries other than the JDK.

Towards the next major release 3.0.0, I’m starting to adjust the roadmap I sketched in a previous blog post. I’ve learned that it is most important to our users that they can rely on backward compatibility. Major releases should not appear often; following the 2.x line is a better strategy. We will start to deprecate a few APIs that will be removed in a future major release. I’m also keeping an eye on some interesting developments that will influence the next major release. For example, a new major Scala release is in the works and there are interesting new Java features that will appear in Java 10.

Looking at the current issues I don’t have to be an oracle to foresee that the next minor release 2.1.0 will take some more time. I understand that users want to start using the new Javaslang features but we need the time and the flexibility to get things right. Therefore we target a first beta release of 2.1.0 in Q4 2016.

In the meantime, there is a variety of functional(-ish) libraries for Java 8, like our own jOOλ, StreamEx, Cyclops, or the much older FunctionalJλvλ. How do all these libraries compare and how is yours different?

This question goes a little in a philosophical direction; maybe it is also political. These are my subjective thoughts, please treat them as such.

Humans have the ability to abstract over things. They express themselves in various ways, e.g. with painting and music. These areas split into different fields. For example, in literature things are expressed in manifold ways, like rhythmic prose and poetry. Furthermore, different styles can be applied within these fields, like the iambic trimeter in poetry. The styles across different areas are often shaped by outer circumstances, bound to a time, like an epoch.

In the area of mathematics there are also several fields, like algebra and mathematical analysis. Both have a notion of functions. Which field should I take when I want to express myself in a functional style?

Personally, I’m not able to afford the time to write non-trivial applications in each of the mentioned libraries. But I took a look at the source code and followed discussions. I see that nearly all libraries are shaped by the outer circumstance that lambdas finally made it to all curly-braces languages, especially to Java in our case. Library designers are keen to modernize their APIs in order to keep pace. But library designers are also interested in staying independent of 3rd party libraries for reasons like stability and progression.

The field of jOOQ is SQL in Java; the field of Cyclops is asynchronous systems. Both libraries are similar in the way that they adapted the new Java Lambda feature. I already mentioned that the new Java features are only poorly integrated into the language. This is the reason why we see a variety of new libraries that try to close this gap.

jOOQ needs jOOλ in order to stay independent. On the technical level, StreamEx is similar to jOOλ in the way that both sit on top of Java’s Stream. They augment it with additional functionality that can be accessed using a fluent API. The biggest difference between them is that StreamEx supports parallel computations while jOOλ concentrates on sequential computations only. The SQL-ish method names show that jOOλ is tailored to be used with jOOQ.
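
For a feel of the two styles, here is a small illustrative sketch (my own example; method names are recalled from the jOOλ and StreamEx documentation, so treat them as approximate):

import org.jooq.lambda.Seq;
import one.util.streamex.StreamEx;

public class SeqVsStreamEx {
    public static void main(String[] args) {
        // jOOλ: Seq wraps a sequential Stream and adds a fluent API
        Seq.of("a", "b", "c")
           .zipWithIndex()                  // (a, 0), (b, 1), (c, 2)
           .forEach(System.out::println);

        // StreamEx: the same augmentation idea, but parallel support is kept
        StreamEx.of(1, 2, 3, 4)
                .filter(i -> i % 2 == 0)
                .forEach(System.out::println);
    }
}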

Cyclops claims to be the answer to the Cambrian explosion of functional(-ish) libraries. It offers a facade that is backed by one of several integration modules. From the developer perspective I view this with skepticism. The one-size-fits-all approach did not work well for me in the past because it does not cover all features of the backing libraries. An abstraction layer adds another, unnecessary source of errors.

Many names of Cyclops look unfamiliar to me, maybe because of the huge amount of types. Looking at the API, the library seems to be a black hole, a Cambrian implosion of reactive and functional features. John McClean did a great job abstracting over all the different libraries and providing a common API, but I prefer to use a library directly.

FunctionalJλvλ is different. It existed long before the other libraries and has the noble goal of purely functional programming: if it compiles, it is correct. FunctionalJλvλ was originally driven by people well known from the Scala community, more specifically from the Scalaz community. Scalaz is highly influenced by Haskell, a purely functional language.

Haskell and Scala are much more expressive than Java. Porting the algebraic abstractions from Scalaz to Java turned out to be awkward. Java’s type system isn’t powerful enough; it does not allow us to reach that goal in a practical way. The committers seem disillusioned to me. Some state that Java is not the right language for functional programming.

Javaslang is a fresh take on porting Scala functionality to Java. At its core it is not as highly influenced by Scalaz and Haskell as FunctionalJλvλ is. However, for purely functional abstractions it offers an algebra module that depends on the core. The relation algebra/core can be compared to Scalaz/Scala.

Javaslang is similar to StreamEx in the way that it is not bound to a specific domain, in contrast to jOOλ and Cyclops. It is different from StreamEx in the sense that it does not build on top of Java’s Stream. I understand Javaslang as a language addition that integrates well with existing Java features.

You have never spoken at conferences, you let other people do that for you. What’s your secret?🙂

In fact I never attended a conference at all. My secret is to delegate the real work to others.

Joking aside, I feel more comfortable spending my time on the Javaslang source code than preparing conferences and travelling. Currently I am working on Javaslang beside my job but I’m still looking for opportunities to do it full-time.

It is awesome to see other people jumping on the Javaslang train. We receive help from all over the world. Besides IntelliJ and YourKit, we recently got TouK as a new sponsor and produced Javaslang stickers that are handed out at conferences.

Because of the increasing popularity of Javaslang there is also an increasing number of questions and pull requests. Besides conception and development, I concentrate on code reviews, discussions and managing the project.

Where do you see Java’s future with projects like Valhalla?

Java stands for stability and safety. New language features are moderately added, like salt to a soup. This is what we can expect from a future Java.

In his recent mission statement, Brian Goetz gives us a great overview of the goals of Project Valhalla. From the developer point of view, I really love to see that the Java language architects attach great importance to improving the expressiveness of Java. Value types, for example, will reduce a lot of the redundant code and ceremony we are currently confronted with. It is also nice to see that value types will be immutable.

Another feature I’m really looking forward to is the extension of generics. It will allow us to remove several specializations that exist only for primitive types and void. Popular functional interfaces like Predicate, Consumer, Supplier and Runnable will be equivalent to Function. In Javaslang we currently provide an additional API for performing side-effects. With extended generics, that API can be reduced to the general case, as it should have been from the beginning.
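
A small illustration of these specializations as they exist today; the generalized forms in the comments are hypothetical, since generics over primitives and void do not exist in Java yet:

import java.util.function.Function;
import java.util.function.IntPredicate;
import java.util.function.Predicate;
import java.util.function.Supplier;

public class SpecializationsToday {
    public static void main(String[] args) {
        // Today, each of these is a separate interface...
        Predicate<String> nonEmpty = s -> !s.isEmpty();        // ~ Function<String, boolean>
        IntPredicate even = i -> i % 2 == 0;                   // ~ Function<int, boolean>, avoids boxing
        Supplier<String> hello = () -> "hello";                // ~ Function<void, String>
        Runnable sideEffect = () -> System.out.println("hi");  // ~ Function<void, void>

        // ...instead of being specializations of one general Function type.
        Function<String, Boolean> nonEmptyBoxed = s -> !s.isEmpty();

        System.out.println(nonEmpty.test("x") && even.test(2) && nonEmptyBoxed.apply(hello.get()));
        sideEffect.run();
    }
}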

There are two more features I’m really interested in: local variable type inference, which will come to Java, and reified generics, which might come. Reified generics are needed when we want to get the type of a generic parameter at runtime. We already have type inference for lambdas. Extending it to local variables will increase the conciseness and readability of method and lambda bodies while preserving type-safety. I think it is a good idea that we will still have to specify the return type of methods. It is a clear documentation of the API of an application.

I’m deeply impressed how Java and the JVM evolve over time without breaking backward compatibility. It is a safe platform we can rely on. The gap between Java and other, more modern languages is getting smaller but Java is still behind. Some popular features might never come and most probably outdated API will not get a complete refresh or a replacement. This is a field where libraries such as Javaslang can help.

jOOQ Tuesdays: Chris Saxon Explains the 3 Things Every Developer Should Know About SQL

Welcome to the jOOQ Tuesdays series. In this series, we’ll publish an article on the third Tuesday every other month (today, exceptionally on a Wednesday because of technical issues) where we interview someone we find exciting in our industry from a jOOQ perspective. This includes people who work with SQL, Java, Open Source, and a variety of other related topics.


I’m very excited to feature Chris Saxon today, who has worked with Oracle forever, and who is one of the brains behind the famous Ask Tom website.

Chris, you’re part of the famous Ask Tom team. Everyone working with Oracle has wound up on Ask Tom’s website at least once. You’ve answered an incredible number of questions already. What’s it like to work for such a big community as Oracle’s?

It’s an amazing experience! My first real job was as a PL/SQL developer. My only knowledge of SQL was a couple of vaguely remembered lectures at university. Ask Tom was the main place I learned about SQL and Oracle Database. So it’s a huge honor to be on the other side, helping others get the best out of the technology.

The best part has to be the positive comments when you help someone solve a problem that’s been troubling them for days. That’s why we’re here. To help developers learn more about Oracle and improve their SQL skills. When you use the database to its full extent, you can write better, faster applications with less code!

What were your three most interesting questions, so far?

Any question that has a clear definition and a complete test case is interesting!😉 Personally I enjoy using SQL to solve complex problems the best. So the first two do just that:

1. Finding the previous row in a different group

The poster had a series of transactions. These were grouped into two types. For each row, they wanted to show the id of the previous transaction from the other group.

At first this sounds like a problem you can solve using LAG or LEAD. But these only search for values within the same group. So you need a different method.

I provided a solution using the model clause. Using this, you can generate columns based on complex, spreadsheet-like formulas. Rows in your table are effectively cells in the sheet. You identify them by defining dimensions which can be other columns or expressions. By setting the transaction type as a dimension, you can then easily reference – and assign – values from one type to the other.

This worked well. But commenters were quick to provide solutions using window functions and 12c’s match_recognize clause. Both of which were faster than my approach!

I like this because it shows the flexibility of SQL. And it shows the value of an engaged community. No one knows everything. By sharing our knowledge and working together we can all become better developers.

2. Improving SQL that deliberately generates a Cartesian product

The poster had a set of abbreviations for words. For example, Saint could also be “St.” or “St”. They wanted to take text containing these words. Then generate all combinations of strings using these abbreviations.

The “obvious” solution the user had was to split the text into words. Then, for each word, join the abbreviation table, replacing the string as needed. So for a five-word string, you have five joins.

There are a couple of problems with this method. The number of joins limits the number of words. So if you have a string with seven words, but only six table joins, you won’t abbreviate the final word.

The other issue is performance. Joining the same table N times increases the work you do. If you have long sentences and/or a large number of abbreviations, the query could take a long time to run.

To overcome these you need to ask: “how can I join to the abbreviation table just once?”

The solution to do this starts the same as the original. Split the sentence into a table of words. Then join this to the abbreviations to give a row for each replacement needed.

You can then recursively walk down these rows using CTEs. These build up the sentence again, replacing words with their abbreviations as needed. A scalable solution that only needs a single pass of each table!

The final question relates to performance. Tom Kyte’s mantra was always “if you can do it in SQL, do it in SQL”. The reason is that a pure SQL solution is normally faster than one which combines SQL and other code. Yet a question came in that cast doubt on this:

3. Difference in performance: SQL vs. PL/SQL

The poster was updating a table. The new values came from another table. He was surprised that PL/SQL using bulk processing came out faster than the pure SQL approach.

The query in question was in the form:

update table1 t1
set col1 = (select t2.col2 from table2 t2 where t1.code = t2.code);

It turned out the reason was due to “missing” indexes. Oracle executes the subquery once for every row in table1. Unless there’s an index on table2 (code), this will full scan table2 once for every row in table1!

The PL/SQL only approach avoided this problem by reading the whole of table2 into an array. So there was only one full scan of table2.

The problem here is there was no index on the join condition (t1.code = t2.code). With this in place Oracle does an index lookup of table2 for each row in table1. A massive performance improvement!

The moral being: if your SQL is “slow”, particularly compared to a combined SQL + other-language method, it’s likely you have a missing index (or two).

This question again showed the strength and value of the Oracle community. Shortly after I posted the explanation, a reviewer was quick to point out the following SQL solution:

merge into table1 t1
using  table2 t2
on     (t1.code = t2.code)
when matched
  then update set t1.col1 = t2.col2;

This came out significantly faster than both the original update and PL/SQL – without needing any extra indexes!

You’re running a YouTube channel called “The Magic of SQL”. Are SQL developers magicians?

Of course they are! In fact, I’d say that all developers are magicians. As Arthur C. Clarke said:

“Any sufficiently advanced technology is indistinguishable from magic”

The amount of computing power you carry around in your phone today is mind blowing. Just ask your grandparents!

I think SQL developers have a special kind of magic though🙂. The ability to answer hard questions with a few lines of SQL is amazing. And for it to adapt to changes in the underlying data to give great performance without you changing it is astounding.

Your Twitter account has a pinned tweet about window functions. I frequently talk to Java developers at conferences, and few of them know about window functions, even if they’ve been in databases like Oracle for a very long time. Why do you think they’re still so “obscure”?

Oracle Database has had window functions since the nineties. But many other RDBMSes have only fully supported them recently. So a combination of writing “database independent” code and people using other databases is certainly a factor.

Use of tools which hide SQL from developers is also a problem. If you’re not actively using SQL, it’s easy to overlook many of its features.

Fortunately I think this is changing. As more and more developers are realizing, SQL is a powerful language. Learning how to use it effectively is a key skill for all developers. Window functions and other SQL features mean you can write better performing applications with less code. Who doesn’t want that?😉

What are three things that every developer should know about SQL?

1. Understand set-based processing

If you find yourself writing a cursor loop (select … from … loop), and inside that loop you run more SQL, you’re doing it wrong.

Think about it. Do you eat your cornflakes by placing one flake in your bowl, adding the milk, and eating that one piece? Then doing the same for the next. And the next. And so on? Or do you pour yourself a big bowl and eat all the flakes at once?

If you have a cursor loop with more SQL within the loop, you’re effectively doing this. There’s a lot of overhead in executing each SQL statement. This will slow you down if you have a large number of statements that each process one row. Instead you want few statements that process lots of rows where possible.

It’s also possible to do this by accident. As developers we’re taught that code reuse is A Good Thing. So if there’s an API available we’ll often use it. For example, say you’re building a batch process. This finds the unshipped orders, places them on shipments and marks them as sent.

If a ship_order function exists, you could write something like:

for r in (select order_id from unshipped_orders) loop
  ship_order ( r.order_id );
end loop;

The problem here is ship_order almost certainly contains SQL. SQL you’ll be executing once for every order awaiting postage. If it’s only a few, this may be fine. But if there are hundreds or thousands, this process could take a long time to run.

The way to make this faster is to process all the orders in one go. You can do this with SQL like:

insert into shipments
  select … from unshipped_orders;

update unshipped_orders
set  shipment_date = sysdate;

You may counter that there’s other, non-SQL processing you need to do, such as sending emails. So you still need a query to find the order ids.

But you can overcome this! With update’s returning clause, you can get values from all the changed rows:

update unshipped_orders
set  shipment_date = sysdate
returning order_id bulk collect into order_array;

This gives you all the order ids to use as you need.

2. Learn what an execution plan is and how to create and read one

“How can I make my SQL faster” is one of the most common classes of questions posted on Ask Tom. The trouble is there’s scant one-size-fits-all advice when it comes to SQL performance. To help we need to know what your query is, what the tables and indexes are and details about the data. But most importantly we need to know what the query is actually doing!

For example, say you want me to help you figure out a faster route to work. To do this I need to know which route you currently use and how long each part of it takes. We can then compare this against other routes, checking how far they are, expected traffic and predicted speeds. But we need the data to do this!

So when answering performance questions, the first thing we look for is an execution plan. Many confuse this with an explain plan. An explain plan is just a prediction. Often it’s wrong. And even when it’s right, we still don’t know how much work each step does.

An execution plan shows exactly what the database did. It also gives stats about how much work, how often and how long it took to process each step. Combine this with a basic understanding of indexes and join methods and you can often solve your own performance problems.

3. Use bind variables

Sadly data breaches are all too common. There hardly seems to be a week that goes by without news of a major company leaking sensitive data. And the root cause of these attacks is often SQL injection.

This is a simple, well known attack vector. If you write vulnerable SQL in a web-enabled application, eventually you’ll be attacked.

And this isn’t something you can avoid by using NoSQL databases. SQL-injection-like attacks are possible there too!

Fortunately the solution is easy: use bind variables. Not only do these secure your application, they can improve performance too.
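
In JDBC terms, using bind variables simply means using a PreparedStatement with placeholders instead of concatenating user input into the SQL text. A minimal sketch (the users table and column names are made up for illustration):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class BindVariableExample {
    // Vulnerable: "select * from users where username = '" + userInput + "'"
    // Safe: the bind variable keeps data out of the SQL text, and the database
    // can reuse the cached execution plan for every username value
    static boolean userExists(Connection connection, String userInput) throws SQLException {
        try (PreparedStatement ps = connection.prepareStatement(
                "select 1 from users where username = ?")) {
            ps.setString(1, userInput);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next();
            }
        }
    }
}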

Make sure your company is not tomorrow’s data leak headline. Use bind variables!

Last but not least: When will Oracle have a BOOLEAN type?🙂

We have a BOOLEAN type! It’s just only in PL/SQL ;P

There’s currently a push in the community for us to add a SQL BOOLEAN type. If this is a feature you’d like to see, you can vote for it on the Database Ideas forum. The more support there is, the more likely we are to implement it! But no promises😉

jOOQ Tuesdays: Thorben Janssen Shares his Hibernate Performance Secrets

Welcome to the jOOQ Tuesdays series. In this series, we’ll publish an article on the third Tuesday every other month where we interview someone we find exciting in our industry from a jOOQ perspective. This includes people who work with SQL, Java, Open Source, and a variety of other related topics.


I’m very excited to feature Thorben Janssen today, who has spent most of his professional life with Hibernate.

Thorben, with your blog and training, you are one of the few daring “annotatioficionados”, as we like to call them, who risk diving deep into JPA’s more sophisticated annotations – like @SqlResultSetMapping. What is your experience with JPA’s advanced, declarative programming style?

From my point of view, the declarative style of JPA is great and a huge problem at the same time.

If you know what you’re doing, you just add an annotation, set a few properties and your JPA implementation takes care of the rest. That makes it very easy to use complex features and avoids a lot of boilerplate code.

But it can also become a huge issue when someone is not that familiar with JPA and just copies a few annotations from Stack Overflow and hopes that they work.

It will work in most cases. JPA and Hibernate are highly optimized and handle suboptimal code and annotations quite well. At least as long as it is tested with one user on a local machine. But that changes quickly when the code gets deployed to production and several hundred or thousand users use it in parallel. These issues then often get posted on Stack Overflow or other forums, together with a complaint about the bad performance of Hibernate…

Your training goes far beyond these rather esoteric use-cases and focuses on JPA / Hibernate performance. What are three things every ORM user should know about JPA / SQL performance?

Only three things? I could talk about a lot more things related to JPA and Hibernate performance.

By far the most important one is to remember that your ORM framework is using SQL to store your data in a relational database. That seems pretty obvious, but you can avoid the most common performance issues by analyzing and optimizing the executed SQL statements. One example of that is the popular n+1 select issue, which you can easily find and fix, as I show in my free, 3-part video course.
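
As a minimal sketch of what the n+1 select issue looks like (the Author/Book entities are made up for illustration, not taken from the course):

import java.util.List;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.FetchType;
import javax.persistence.Id;
import javax.persistence.OneToMany;

@Entity
class Book {
    @Id Long id;
}

@Entity
class Author {
    @Id Long id;

    @OneToMany(fetch = FetchType.LAZY)  // lazy is the default for to-many associations
    List<Book> books;
}

public class NPlusOneExample {
    static void printBookCounts(EntityManager em) {
        // 1 query loads all authors
        List<Author> authors = em
            .createQuery("select a from Author a", Author.class)
            .getResultList();

        // +1 query per author: each lazy books collection is initialized
        // with its own SELECT when it is first accessed
        for (Author author : authors) {
            System.out.println(author.books.size());
        }
    }
}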

Another important thing is that no framework or specification provides a good solution for every problem. JPA and Hibernate make it very easy to insert and update data in a relational database. And they provide a set of advanced features for performance optimizations, like caching or the ordering of statements to improve the efficiency of JDBC batches.

But Hibernate and JPA are not a good fit for applications that have to perform a lot of very complex queries for reporting or data mining use cases. The feature set of JPQL is too limited for these use cases. You can, of course, use native queries to execute plain SQL, but you should have a look at other frameworks if you need a lot of these queries.

So, always make sure that your preferred framework is a good fit for your project.

The third thing you should keep in mind is that you should prefer lazy fetching for the relationships between your entities. This prevents Hibernate from executing additional SQL queries to initialize the relationships to other entities when it gets an entity from the database. Most use cases don’t need the related entities, and the additional queries slow down the application. And if one of your use cases uses the relationships, you can use FETCH JOIN statements or entity graphs to initialize them with the initial query.

This approach avoids the overhead of unnecessary SQL queries for most of your use cases and allows you to initialize the relationships if you need them.
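
Reusing the hypothetical Author/Book mapping from the sketch above, a use case that really needs the association could initialize it with the initial query like this (again only a sketch):

import java.util.List;
import javax.persistence.EntityManager;

public class FetchJoinExample {
    static List<Author> loadAuthorsWithBooks(EntityManager em) {
        // One query initializes the authors and their books collections,
        // so iterating over them afterwards triggers no further SQL
        return em
            .createQuery("select distinct a from Author a join fetch a.books", Author.class)
            .getResultList();
    }
}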

These are the 3 most important things you should keep in mind if you want to avoid performance problems with Hibernate. If you want to dive deeper into this topic, have a look at my Hibernate Performance Tuning Online Training. The next one starts on July 23rd.

What made you focus your training mostly on Hibernate, rather than also on EclipseLink / OpenJPA, or just plain SQL / jOOQ? Do you have plans to extend to those topics?

To be honest, that decision was quite easy for me. I’ve been working with Hibernate for about 15 years now and have used it in a lot of different projects with very different requirements. That gives me the experience and knowledge about the framework which you need if you want to optimize its performance. I also tried EclipseLink but not to the same extent as Hibernate.

And I also asked my readers which JPA implementation they use, and most of them told me that they either use plain JPA or Hibernate. That made it pretty easy to focus on Hibernate.

I might integrate jOOQ into one of my future trainings. Because as I said before, Hibernate and JPA are a good solution if you want to create or update data or if your queries are not too complex. As soon as your queries get complex, you have to use native queries with plain SQL. In these cases, jOOQ can provide some nice benefits.

What’s the advantage of your online training over a more classic training format, where people meet physically – both for you and for your participants?

The good thing about a classroom training is that you can discuss your questions with other students and the instructor. But it also requires you to be in a certain place at a certain time which creates additional costs, requires you to get out of your current projects and keeps you away from home.

With the Hibernate Performance Tuning Online Training, I want to provide a similar experience to a classroom training in which you study with other students and ask your questions, but without having to travel somewhere. You can watch my training videos and do the exercises from your office or home, and meet with me and other students in the forum or in group coaching calls to discuss your questions.

So you get the best of both worlds without declaring any travel expenses😉

Your blog also includes a weekly digest of all things happening in the Java ecosystem called Java Weekly. What are the biggest insights into our ecosystem that you’ve gotten out of this work, yourself?

The Java ecosystem is always changing and improving, and you need to learn constantly if you want to stay up to date. One way to do that is to read good blog posts. And there are A LOT of great, small blogs out there written by very experienced Java developers who like to share their knowledge. You just have to find them. That’s probably the biggest insight I got.

I read a lot about Java and Java EE each week (that’s probably the only advantage of a 1.5-hour commute with public transportation) and present the most interesting ones every Monday in a new issue of Java Weekly.

jOOQ Tuesdays: Ming-Yee Iu Gives Insight into Language Integrated Querying

Welcome to the jOOQ Tuesdays series. In this series, we’ll publish an article on the third Tuesday every other month where we interview someone we find exciting in our industry from a jOOQ perspective. This includes people who work with SQL, Java, Open Source, and a variety of other related topics.

Ming-Yee Iu

We have the pleasure of talking to Ming-Yee Iu in this eighth edition, who will be telling us about how different people in our industry have tackled the integration of query systems into general purpose languages, including his own library JINQ, which does so for Java.

Ming, everyone coming from C# to Java will google LINQ for Java. You have implemented just that with JINQ. What made you do it?

Jinq actually grew out of my PhD research at EPFL university in Switzerland. When I started a PhD there in 2005, I needed a thesis topic, and I heard that my supervisor Willy Zwaenepoel was interested in making it easier to write database code. I had a bit of a background with Java internals from when I was an intern with one of IBM’s JVM teams in 1997, so when I took a look at the problem, I looked at it from a lower-level systems perspective. As a result, I came up with the idea of using a bytecode rewriting scheme to rewrite certain types of Java code into database queries. There were other research groups looking at the problem at the same time, including the LINQ group. Different groups came up with different approaches based on their own backgrounds. The basic assumption was that programmers had difficulty writing database code because there was a semantic gap–the relational database model was so different from the object-oriented programming model that programmers wasted mental effort bridging the differences. The hope was that this semantic gap could be reduced by letting programmers write normal Java code and having the computer figure out how to run this code on a database. Different approaches would result in tools that could handle more complex database queries or could be more flexible in the style of code they accept.

Although I came up with an initial approach fairly quickly, it took me many years to refine the algorithms into something more robust and usable. Similar to the LINQ researchers, I found that my algorithms worked best with functional code. Because functional-style code has no side effects, it’s easier to analyze. It’s also easier to explain to programmers how to write complex code that the algorithms could still understand. Unfortunately, when I finished my PhD in 2010, Java still didn’t properly support functional programming, so I shelved the research to work on other things. But when Java 8 finally came out in 2014 with lambdas, I decided to revisit my old research. I adapted my research to make use of Java 8 lambdas and to integrate with current enterprise tools. And the result was Jinq, an open source tool that provided support for LINQ-style queries in Java.
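
To give an idea of what such a query looks like, here is a small sketch based on Jinq’s JPA support (the Customer entity is made up, and the method names follow the Jinq documentation as I recall it, so treat them as approximate):

import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import org.jinq.jpa.JinqJPAStreamProvider;

public class JinqExample {
    // Customer is assumed to be a plain JPA entity with a getCountry() getter
    static List<Customer> ukCustomers(EntityManagerFactory emf, EntityManager em) {
        JinqJPAStreamProvider streams = new JinqJPAStreamProvider(emf);

        // The lambda passed to where() is decoded from its bytecode and
        // translated into a database query instead of running in memory
        return streams.streamAll(em, Customer.class)
                      .where(c -> c.getCountry().equals("UK"))
                      .toList();
    }
}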

In a recent discussion on reddit, you’ve mentioned that the Java language stewards will never integrate query systems into the language, and that LINQ has been a mistake. Yet, LINQ is immensely popular in C#. Why was LINQ a mistake?

My opinion is a little more nuanced than that. LINQ makes a lot of sense for the C# ecosystem, but I think it is totally inappropriate for Java. Different languages have different trade-offs, different philosophies, and different historical baggage. Integrating a query system into Java would run counter to the Java philosophy and would be considered a mistake. C# was designed with different trade-offs in mind, so adding feature like query integration to C# is more acceptable.

C# was designed to evolve quickly. C# regularly forces programmers to leave behind old code so that it can embrace new ways of doing things. There’s an old article on Joel on Software describing how Microsoft has two camps: the Raymond Chen camp that always tries to maintain backwards compatibility and the MSDN Magazine camp that is always evangelizing shiny new technology that may be abandoned after a few years. The Raymond Chen camp allows me to run 20-year-old Windows programs on Windows 10. The MSDN Magazine camp produces cool new technology like C# and Typescript and LINQ. There is nothing wrong with the MSDN philosophy. Many programmers prefer using languages built using this philosophy because the APIs and languages end up with less legacy cruft in them. You don’t have to understand the 30-year history of an API to figure out the proper way to use it. Apple uses this philosophy, and many programmers love it despite the fact that they have to rewrite all their code every few years to adapt to the latest APIs. For C#, adopting a technology that is immature and still evolving is fine because they can abandon it later if it doesn’t work out.

The Java philosophy is to never break backwards compatibility. Old Java code from the 1990s still compiles and runs perfectly fine on modern Java. As such, there’s a huge maintenance burden to adding new features to Java. Any feature has to be maintained for decades. Once a feature is added to Java, it can’t be changed or it might break backwards compatibility. As a result, only features that have withstood the test of time are candidates for being added to Java. When features that haven’t yet fully matured are added to Java, it “locks in” a specific implementation and prevents the feature from evolving as people’s needs change. This can cause major headaches for the language in the future.

One example of this lock-in is Java serialization. Being able to easily write objects to disk is very convenient. But the feature locked in an architecture that isn’t flexible enough for future use-cases. People want to serialize objects to JSON or XML, but can’t do that using the existing serialization framework. Serialization has led to many security errors, and a huge amount of developer resources were required to get lambdas and serialization to work correctly together. Another example of this premature lock-in is synchronization support for all objects. At the time, it seemed very forward-looking to have multi-threading primitives built right into the language. Since every object could be used as a multi-threaded monitor, you could easily synchronize access to every object. But we now know that good multi-threaded programs avoid that sort of fine-grained synchronization. It’s better to work with higher-level synchronization primitives. All that low-level synchronization slows down the performance of both single-threaded and multi-threaded code. Even if you don’t use the feature, all Java objects have to be burdened by the overhead of having lock support. Serialization and synchronization were both added to Java with the best of intentions. But those features are now treated like “goto”: they don’t pass the smell test. If you see any code that uses those features, it usually means that the code needs extra scrutiny.

Adding LINQ-style queries to Java would likely cause similar problems. Don’t get me wrong. LINQ is a great system. It is currently the most elegant system we have now for integrating a query language into an object-oriented language. Many people love using C# specifically because of LINQ. But the underlying technology is still too immature to be added to Java. Researchers are still coming up with newer and better ways of embedding query systems into languages, so there is a very real danger of locking Java into an approach that would later be considered obsolete. Already, researchers have many improvements to LINQ that Microsoft can’t adopt without abandoning its old code.

For example, to translate LINQ expressions to database queries, Microsoft added some functionality to C# that lets LINQ inspect the abstract syntax trees of lambda expressions at runtime. This functionality is convenient, but it limits LINQ to only working with expressions. LINQ doesn’t work with statements because it can’t inspect the abstract syntax trees of lambdas containing statements. This restriction on what types of lambdas can be inspected is inelegant. Although this functionality for inspecting lambdas is really powerful, it is so restricted that very few other frameworks use it. In a general-purpose programming language, all the language primitives should be expressive enough that they can be used as building blocks for many different structures and frameworks. But this lambda inspection functionality has ended up only being useful for query frameworks like LINQ. In fact, Jinq has shown that this functionality isn’t even necessary. It’s possible to build a LINQ-style query system using only the compiled bytecode, and the resulting query system ends up being more flexible in that it can handle statements and other imperative code structures.

As programmers have gotten more experience with LINQ, they have also started to wonder if there might be alternate approaches that would work better than LINQ. LINQ is supposed to make it easier for programmers to write database queries because they can write functional-style code instead of having to learn SQL. In reality though, to use LINQ well, a programmer still needs to understand SQL too. But if a programmer already understands SQL, what advantages does LINQ give them? Would it then be better to use a query system like jOOQ, which matches SQL syntax more closely than Slick and can quickly evolve to encompass new SQL features? Perhaps query systems aren’t even necessary. More and more companies are adopting NoSQL databases that don’t even support queries at all.

Given how quickly our understanding of LINQ-style query systems is evolving, it would definitely be a mistake to add that functionality directly to a language like Java at the moment. Any approach might end up being obsolete, and it would impose a large maintenance burden on future versions of Java. Fortunately, Java programmers can use libraries such as Jinq and jOOQ instead, which provide most of the benefits of LINQ but don’t require tight language integration like LINQ.

Lightbend maintains Slick – LINQ for Scala. How does JINQ compare to Slick?

They both try to provide a LINQ-style interface for querying databases. Since Slick is designed for Scala, it has great Scala integration and is able to use Scala’s more expressive programming model to provide a very elegant implementation. To get the full benefits of Slick, you have to embrace the Scala ecosystem though.

Jinq is primarily designed for use with Java. It integrates with existing Java technologies like JPA and Hibernate. You don’t have to abandon your existing Java enterprise code when adopting Jinq because Jinq works with your existing JPA entity classes. Jinq is designed for incremental adoption. You can selectively use it some places and fall back to using regular JPA code elsewhere. Although Jinq can be used with Scala, it’s more useful for organizations that are using Scala but haven’t embraced the full Scala ecosystem. For example, Jinq allows you to use your existing Hibernate entities in your Scala code while still using a modern LINQ-style functional query system for them.

JINQ has seen the biggest improvement when Java 8 introduced the Stream API. What is your opinion about functional programming in Java?

I’m really happy that Java finally has support for lambdas. It’s a huge improvement that really makes my life as a programmer much easier. Over time, I’m hoping that the Java language stewards will be able to refine lambdas further though.

From Jinq’s perspective, one of the major weaknesses of Java 8’s lambdas is the total lack of any reflection support. Jinq needs reflection support to decode lambdas and to translate them to queries. Since there is no reflection support, Jinq needs to use slow and brittle alternate techniques to get the same information. Personally, I think the lack of reflection is a significant oversight, and this lack of reflection support could potentially weaken the entire Java ecosystem as a whole in the long term.

I have a few small annoyances with the lack of annotation support and lack of good JavaDoc guidelines for how to treat lambdas. The Streams API and lambda metafactories also seem a little bit overly complex to me, and I wonder if something simpler would have been better there.

From a day-to-day programming perspective though, I’ve found that the lack of syntactic sugar for calling lambdas is the main issue that has repeatedly frustrated me. It seems like a fairly minor thing, but the more I use lambdas, the more I feel that it is really important. In Java 8, it’s so easy to create and pass around lambdas, that I’m usually able to completely ignore the fact that lambdas are represented as classes with a single method. I’m able to think of my code in terms of lambdas. My mental model when I write Java 8 code is that I’m creating lambdas and passing them around. But when I actually have to invoke a lambda, the lambda magic completely breaks down. I have to stop and switch gears and think of lambdas in terms of classes. Personally, I can never remember the name of the method I need to call in order to invoke a lambda. Is it run(), accept(), consume(), or apply()? I often end up having to look up the documentation for the method name, which breaks my concentration. If Java 8 had syntactic sugar for calling lambdas, then I would never need to break out of the lambda abstraction. I would be able to create, pass around, and call lambdas without having to think about them as classes.
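
A tiny illustrative example (mine, not Ming’s) of the different method names you have to remember when invoking a lambda through its functional interface:

import java.util.function.Consumer;
import java.util.function.Function;
import java.util.function.Supplier;

public class InvokingLambdas {
    public static void main(String[] args) {
        Runnable task = () -> System.out.println("working...");
        Supplier<String> producer = () -> "a value";
        Consumer<String> printer = s -> System.out.println(s);
        Function<Integer, Integer> twice = i -> i * 2;

        // Creating and passing lambdas around feels uniform, but invoking
        // them requires a different method name for each interface:
        task.run();
        String value = producer.get();
        printer.accept(value);
        System.out.println(twice.apply(2));
    }
}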

Java 9 will introduce the Flow API for reactive interoperability. Do you plan to implement a reactive JINQ?

To be honest, I’m not too familiar with reactive APIs. Lately, I’ve been working mostly on desktop applications, so I haven’t had to deal with problems at a sufficient scale where a reactive approach would make sense.

You’ve mentioned to me in the past that you have other projects running. What are you currently working on?

After a while, it’s easy to accumulate projects. Jinq is mostly stable at the moment though I do occasionally add bug fixes and other changes. There are still a few major features that could be added such as support for bulk updates or improved code generation, but those are fairly major undertakings that would require some funding to do.

I occasionally work on a programming language called Babylscript, which is a multilingual programming language that lets you write code in a mix of French, Chinese, Arabic, and other non-English languages. As a companion project to that, I also run a website for teaching programming to kids called Programming Basics that teaches programming in 17 different languages. Currently, though, I’m spending most of my time on two projects. One is an art tool called Omber, which is a vector drawing program that specializes in advanced gradients. The other project involves using HTML5 as the UI front-end for desktop Java programs. All your UI code would still be written in Java, but instead of using AWT or Swing, you would just manipulate HTML using a standard DOM interface bound to Java. As a side benefit, all your UI code can be recompiled using GWT to JavaScript, so you can reuse your UI code for web pages too.

Further info

Thank you very much for this very interesting interview, Ming. Want to learn more about JINQ? Read about it in this previous guest post on the jOOQ blog, and watch Ming’s JVMLS 2015 talk:

jOOQ Tuesdays: Glenn Paulley Gives Insight into SQL’s History

Welcome to the jOOQ Tuesdays series. In this series, we’ll publish an article on the third Tuesday every other month where we interview someone we find exciting in our industry from a jOOQ perspective. This includes people who work with SQL, Java, Open Source, and a variety of other related topics.


I’m very excited to feature Glenn Paulley today, who has been working with and on SQL for several decades.

Glenn, you have been a part of the database ecosystem since the very early days, having been Director of Engineering at Sybase and representing SAP with the SQL Standard committee. What is it that fascinates you so much about databases?

Data management technology has been my favourite subject within Computer Science since I was an undergraduate student at the University of Manitoba, and so it has remained throughout much of my career. I was privileged to be part of the team that implemented the first online IBM DB2 application – using IMS/TM as the transaction monitor – in Canada at Great-West Life Assurance in the late 1980’s, and we hosted some of the DB2 development team from IBM’s San Jose lab – including Don Haderle – to celebrate that achievement. So yes, I guess you could say that I’ve been around for a while.

Much of my personal expertise lies in the realm of query processing and optimization, having had Paul Larson and Frank Wm. Tompa as my Ph.D. supervisors at the University of Waterloo. But I am interested in many related database subjects: query languages certainly, multidatabase systems, and information retrieval. Two closely-related topics within database systems are of particular interest to me.  One is scale. Companies and organizations have a lot of data; billion-row tables in a relational database are fairly routine today in many companies. Advances in other technologies, such as the Internet of Things, are going to dramatically increase the data management requirements for firms wanting to take advantage of these technologies. Implementing solutions to achieve the scale required is difficult, and is of great interest to me. The related issue, quite naturally, is processing queries over such vast collections and getting the execution time down to something reasonable. So query optimization remains a favourite topic.

We’ve had a very interesting E-Mail conversation about SQL’s experiments related to Object Orientation in the late 1990’s. Today, we have ORDBMS like Oracle, Informix, PostgreSQL. Why didn’t ORDBMS succeed?

ORDBMS implementations were not as successful as their developers expected, to be sure, though I would note that many SQL implementations now contain “object” features even though objects are specifically omitted from the ISO SQL standard. Oracle 12c is a good example – Oracle’s PL/SQL object implementation supports a good selection of object-oriented programming features, such as polymorphism and single inheritance, that when coupled with collection types provide a very rich data model that can handle very complex applications. There are others, too, of course: InterSystems’ Caché product, for example, is still available.

So, while object support in relational systems is present, in many instances, to me the significant issues are (1) the performance of object constructions on larger database instances and (2) the question of where do you want objects to exist in the application stack: in the server proper, or in another tier, implemented in a true object-oriented language? The latter issue is the premise behind object-relational mapping tools, though I think that their usage often causes as many problems as they solve.

How do you see the future of the SQL language – e.g. with respect to Big Data or alternative models like document stores (which have N1QL) or graph stores (which have Open Cypher)?

My personal view is that SQL will continue to evolve; having an independent query language that permits one to query or manipulate a database but avoid writing a “program” in the traditional sense is a good idea. Over time that language might evolve to something different from today’s SQL, but it is likely that we will still call it S-Q-L. I do not expect a revolutionary approach to be successful; there is simply far too much invested in current applications and infrastructure. Don’t get me wrong – I’m not saying SQL doesn’t need improvements. It does.

Unlike the work of e.g. the JCP or w3c, which are public and open, SQL still seems to be a more closed environment. How do you see the interaction between SQL and the end user? Can “ordinary” folks participate in the future of SQL?

The SQL standard is published by the International Standards Organization (ISO), whose member countries contribute to changes to the standard and vote on them on a regular basis. Member countries that contribute to the ISO SQL standard include (at least) the US, Canada, Germany, the United Kingdom, Japan, and Korea. Participation in the standards process requires individuals or companies to belong to these “national bodies” in order to view drafts of the standard and vote on proposed changes. Usually those meetings are held in-person – at least in Canada and the United States – so there is a real cost to participation in the process.

While having SQL as an international ISO standard is, I think, worthwhile, the ISO’s business model is based on activities such as collecting revenue from the sale of standards documents. Unless governments or other benefactors sponsor the development of those standards, then it is difficult to see how the standard can be made more freely available. That issue is a political one, rather than a technical one.

After a brief stop at Conestoga College, you’re heading back to SAP. What will you be working on in the next few years?

I will be continuing to focus on database technology now that I’ve returned to SAP. So far I’ve had a fantastic experience being back at the SAP Waterloo lab, and I have every expectation that will continue into the future.

You’ve spent a lot of time at SAP (and before SAP) with Sybase. How do the different Sybase distributions relate to SAP’s new flagship database HANA?

The various SAP database systems (SQL Anywhere, IQ, ASE) all contain database technology pertinent to the HANA platform.

Last question: What do SQL and Curling have in common?🙂

As a Level 3 curling coach, one of my tasks as a High Performance Coach for the Ontario Curling Council is to collect performance data on the athletes in our various programs. Naturally I use a database system to store that data, and perform various analyses on it, so the two are not as unrelated as you might think!


jOOQ Tuesdays: Rafael Winterhalter is Wrestling Byte Code with Byte Buddy

Welcome to the jOOQ Tuesdays series. In this series, we’ll publish an article on the third Tuesday every other month where we interview someone we find exciting in our industry from a jOOQ perspective. This includes people who work with SQL, Java, Open Source, and a variety of other related topics.

Rafael Winterhalter

We have the pleasure of talking to Rafael Winterhalter in this seventh edition, who will be telling us about the depths of Java byte code, and about his library Byte Buddy, which makes working with byte code extremely easy.

Note that Byte Buddy won the 2015 Duke’s Choice Award – congratulations on this from our side!

Hi Rafael – You’re the author of the popular Byte Buddy library. What does Byte Buddy do?

Byte Buddy is a code generation and manipulation library. It offers APIs for creating new Java classes at runtime and for changing existing classes before or after they were loaded.
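
A “hello world” along the lines of the Byte Buddy documentation gives a feel for that API (reconstructed from memory, so consider it a sketch rather than authoritative):

import static net.bytebuddy.matcher.ElementMatchers.named;

import net.bytebuddy.ByteBuddy;
import net.bytebuddy.implementation.FixedValue;

public class ByteBuddyHelloWorld {
    public static void main(String[] args) throws Exception {
        // Generate a subclass of Object whose toString() returns a fixed value
        Class<?> dynamicType = new ByteBuddy()
            .subclass(Object.class)
            .method(named("toString"))
            .intercept(FixedValue.value("Hello World!"))
            .make()
            .load(ByteBuddyHelloWorld.class.getClassLoader())
            .getLoaded();

        System.out.println(dynamicType.newInstance()); // prints "Hello World!"
    }
}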

At first glance, this might sound like a very esoteric thing to do, but runtime code generation is used in a large number of Java projects. Code generation tools are often used by library developers to implement aspect-oriented programming. For example, the mocking library Mockito adopted Byte Buddy to create subclasses of mocked classes at runtime. In order to implement a mock, Mockito overrides all methods of a class such that the user’s original code is not invoked when a method is called in a test. And there are plenty of other well-known users of code generation. Spring, for example, uses code generation to implement its annotation aspects such as security or transactions. And Hibernate uses code-generation to lazily load properties from getter methods by overriding those getters to query the database only if they are invoked.

Why is there a need for Byte Buddy when there are alternatives like ASM, CGLIB, AspectJ or Javassist?

Before I started working on Byte Buddy, I was involved in several other open-source projects as a contributor. As mentioned before, code generation is a typical requirement for implementing many libraries and so I got used to working with mostly CGLIB and Javassist. However, I became constantly frustrated with those libraries’ limitations and I wanted to resolve the problems I had discovered. Eventually, I started to write an alternative library that I later published as Byte Buddy.

To understand the limitations of the alternative libraries, mocks are a good example use case. Mocks in Mockito were previously created using CGLIB. CGLIB is a rather mature library. It has been around for over 15 years, and when it was originally developed, the library’s developers of course did not anticipate features such as annotations, generic types or defender methods. Annotations have, however, become an important part of many APIs, which would not accept a mock instance because any annotations of overridden methods were lost. In Java, annotations on methods are never inherited when they are overridden. And annotations on types are only inherited if they are explicitly declared to be. To overcome this, Byte Buddy allows copying any annotation to a subclass, which is now a feature in Mockito 2.
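The annotation behavior mentioned here is easy to reproduce with plain reflection. A minimal, self-contained sketch (the annotation and class names are made up for illustration):

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class AnnotationInheritanceDemo {

    @Retention(RetentionPolicy.RUNTIME)
    @interface Audited { }

    static class Base {
        @Audited
        public void save() { }
    }

    static class Sub extends Base {
        @Override
        public void save() { } // the annotation is not repeated here
    }

    public static void main(String[] args) throws Exception {
        // Method annotations are never inherited by overriding methods.
        System.out.println(Base.class.getMethod("save").isAnnotationPresent(Audited.class)); // true
        System.out.println(Sub.class.getMethod("save").isAnnotationPresent(Audited.class));  // false
    }
}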

In contrast, Javassist allows copying annotations, but I personally do not like the library’s approach. In Javassist, all generated code is represented as Java code contained in strings. As a result, Javassist code evolves in a similarly unstructured way to Java code that describes SQL only as concatenated strings. Besides creating code that is difficult to maintain, this approach also opens up vulnerabilities such as Java code injection, similar to SQL injection. It is sometimes possible to attack Javassist code by letting it compile arbitrary code, which can cause severe damage to an application.
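For comparison, this is roughly what the string-based style looks like with Javassist (the generated class and method are hypothetical). The point of concern is that the method body is a plain string; if it were concatenated from untrusted input, arbitrary code could end up being compiled into the application.

import javassist.ClassPool;
import javassist.CtClass;
import javassist.CtNewMethod;

public class JavassistStringStyle {
    public static void main(String[] args) throws Exception {
        ClassPool pool = ClassPool.getDefault();
        CtClass cc = pool.makeClass("GeneratedCalculator");

        // The generated method body is just a Java string.
        String body = "public int add(int a, int b) { return a + b; }";
        cc.addMethod(CtNewMethod.make(body, cc));

        Class<?> clazz = cc.toClass();
        Object calc = clazz.newInstance();
        System.out.println(clazz.getMethod("add", int.class, int.class).invoke(calc, 20, 22)); // 42
    }
}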

AspectJ is a powerful tool when manipulating existing code. However, Byte Buddy lets you do anything that AspectJ is capable of but in plain and simple Java. This way, developers do not need to learn a new syntax or programming metaphor or install tools for their build-process and IDEs. Furthermore, I do not find the join-point and point-cut terminology intuitive and decided to avoid it altogether. Instead, I decided to mimic terminology that developers already know from the Java programming language to ease the first steps with Byte Buddy.

ASM, on the other hand, is the basis on top of which Byte Buddy is implemented. ASM is a byte code parser rather than a code generation library. ASM processes single class files and does not consider type hierarchies. It has neither a concept of class loading nor higher-level concepts on top of byte code instructions. Byte Buddy does, however, offer an adapter that exposes the ASM API to users who require the generation of very specific code.

How does one become so involved with low-level Java?

In the beginning, I set myself the goal of only creating a version of CGLIB with annotation support, which was what I originally needed. But I quickly found out that a lot of developers were looking for the solution that Byte Buddy has become today. Therefore, I started to plan for making the full feature set of the Java virtual machine accessible. Learning all the gory details and corner cases of the class file format became a necessity to implement these features. To be fair, the class file format is fairly trivial once you get the hang of it, and I really enjoy seeing my library mature.

Between Java byte code (2GL language) and SQL (4GL language), there are many levels of programmatic abstraction. Where do you feel at home the most?

I would want to use the right tool for the right job. Obviously, I enjoy working with byte code, but I would avoid handcrafting byte code when working in a production project. In the end, this is what higher-level abstractions such as Byte Buddy are made for.

Looking at the common use cases, Byte Buddy is however often used for implementing custom features by changing code based on annotations on methods. In a way, Byte Buddy enables developers to implement their own 4G abstraction. Declarative programming is a great abstraction for certain tasks, SQL being one of them.

You’ve become a famous speaker and domain expert in a very short time. What’s your most exciting story as an influencer?

Mainly, I find it exciting to meet users of my library. I have met folks whose large teams have implemented internal frameworks based on my software, and obviously it makes me proud that Byte Buddy proves to be that useful.

Thank you very much, Rafael!

If you want to learn more about Rafael’s work, about byte code or about Byte Buddy, check out his talk at JavaZone:

jOOQ Tuesdays: Markus Winand is on a Modern SQL Mission

Welcome to the jOOQ Tuesdays series. In this series, we’ll publish an article on the third Tuesday every other month where we interview someone we find exciting in our industry from a jOOQ perspective. This includes people who work with SQL, Java, Open Source, and a variety of other related topics.

We are excited to talk with Markus Winand in this sixth edition. Markus is the author of the popular book SQL Performance Explained and the even more popular website Use The Index, Luke, and we’re thrilled to see that he’s pulling off another stunt:

Hi Markus – You have recently launched modern-sql.com. What is your goal with this website?

My goal for modern-sql.com is to create a textbook and reference about the SQL goodies you didn’t learn in school or university. Interestingly, online manuals about these features are pretty sparse. They come in two fashions: blog posts and vendor documentation. Blog posts are usually one-off events covering a particular feature or use case. There are many great blogs out there – the jOOQ blog being one of them – but there is none I could recommend for learning all about recent SQL features. Vendor documentation, on the other hand, is mostly a reference about syntax—quite often even a bad one: it often doesn’t mention standard compliance at all and tends to follow a “proprietary features first” approach.

The consequence is that the SQL market is very fragmented: besides SQL-92, there is no obvious base that is common to all databases. This becomes particularly evident on the job market: job offers either require just SQL—meaning good old relational SQL—or they require experience with a specific product. That’s pretty much the norm nowadays and nobody questions it. However, what would you think about this job opening: “Google Chrome Web Developer”? Web developers can’t choose the client’s browser. Many tried, but failed. Remember “optimized for XYZ”? That’s why web developers demanded standard conformance from browsers over the past decades. Having just launched a new website, I can say that CSS conformance has improved drastically over the past five years. Ultimately, I’d like the same thing to happen for SQL. I hope the site sparks interest in standard-conforming SQL so that developers also start to demand standard conformance from the database vendors. Quite an ambitious goal.

Last year, you’ve gone into battle against the SQL OFFSET clause. Want to shed some light on the background of that campaign?

The most striking problem with OFFSET is that it is generally used for an invalid use case: pagination. In this case, OFFSET is used to skip over a number of rows with the intention of finding the rows following the previously selected ones. However, OFFSET does not, by definition, return the rows following one that was selected earlier; it just discards the first N rows of the result. Coincidentally, OFFSET yields the expected result if the data has not changed in the meantime—a case that is pretty common during development. But as soon as rows are added or deleted, discarding a fixed number of rows just doesn’t give the right result. The correct approach is to remember the last row fetched and use that data in a WHERE clause to select the rows following it. This approach is explained in detail on Use The Index, Luke.
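To illustrate the difference, here is a minimal JDBC sketch contrasting the two approaches. The table, columns and connection URL are hypothetical, and the keyset variant assumes the last id of the previous page was remembered, as described above.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class PaginationSketch {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection("jdbc:h2:mem:demo")) {

            // OFFSET pagination: simply discards the first 30 rows of the result.
            // If rows were inserted or deleted since the previous page was read,
            // the window shifts and rows are skipped or shown twice.
            PreparedStatement offset = con.prepareStatement(
                "SELECT id, title FROM posts ORDER BY id " +
                "OFFSET 30 ROWS FETCH NEXT 10 ROWS ONLY"); // shown for comparison only

            // Keyset ("seek") pagination: remember the last row fetched and select
            // the rows following it. Stable under concurrent changes and index-friendly.
            long lastSeenId = 42L; // remembered from the previous page
            PreparedStatement seek = con.prepareStatement(
                "SELECT id, title FROM posts WHERE id > ? " +
                "ORDER BY id FETCH FIRST 10 ROWS ONLY");
            seek.setLong(1, lastSeenId);

            try (ResultSet rs = seek.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getLong("id") + " " + rs.getString("title"));
                }
            }
        }
    }
}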

Besides the fact that OFFSET cannot be used to implement correct pagination, OFFSET is also bad for performance. OFFSET is wrong and slow. What else do you need? As a matter of fact, the only valid use case I know for OFFSET is to implement SLEEP in SQL—not that I ever needed that. Unfortunately, OFFSET made it into the SQL standard in 2011. I consider this the worst mistake in the recent history of SQL because it can’t be corrected. The only good part is that it is an optional feature—vendors don’t need to implement it for standard conformance. Nevertheless, Oracle and Microsoft just recently added OFFSET to their SQL databases.

You’ve written a very popular book on SQL Performance called SQL Performance explained. How does it compare to other SQL books and why should our readers buy it?

I’ll start with the second question. First of all, you must know that the full content of SQL Performance Explained is available for free at Use The Index, Luke. Most people I ask why they bought the book did so because they like the web site. They bought the book either to support my work on Use The Index, Luke (greatly appreciated!) or, more importantly, to finally read the book from cover to cover. A typical answer I get goes along these lines: “I’ve known Use The Index, Luke for years and have read many articles there, but I finally wanted to read everything from beginning to end.”

Now coming to the first question, why the world needed another SQL performance tome: it didn’t. Therefore, I wrote a very small book that can be read in less than a day. I focused on the basic concepts, which are the same in most databases, and boldly skipped less common special cases. Its shortness is also what is most appreciated in the reviews. On the other hand, the book has occasionally been criticized as incomplete—probably because the sub-title reads “Everything developers need to know about SQL performance”. Personally, I think these critiques somehow prove my point: obviously, Java, PHP or .NET developers don’t need to know as much about SQL performance as database performance consultants do. When writing for such an audience, you must skip a lot.

Where do you see SQL in 10 years from today?

I hope that the temporal features of SQL:2011 become commonly available—also in free open source databases. At the moment, they are only available in commercial databases, and even there the completeness and standard conformance varies. I would also hope that the SQL standard finds a way to cope with the current trend of every database vendor adding its own proprietary set of JSON functions. Unfortunately, it might be too late for that already.

However, my greatest hope is that developers realize that SQL is not stuck in 1992. The standard has added many useful features since then. Most databases offer a good part of these features. It’s really just our perception of SQL that got stuck in 1992.

Learn more about Markus’s work

… Markus is giving his Modern SQL talk at conferences. Learn more about it here: