Reactive Database Access – Part 1 – Why “Async”

We’re very happy to announce a guest post series on the jOOQ blog by Manuel Bernhardt. In this blog series, Manuel will explain the motivation behind so-called reactive technologies and, after introducing the concepts of Futures and Actors, use them to access a relational database in combination with jOOQ.

Manuel Bernhardt is an independent software consultant with a passion for building web-based systems, both back-end and front-end. He is the author of “Reactive Web Applications” (Manning) and started working with Scala, Akka and the Play Framework in 2010 after spending a long time with Java. He lives in Vienna, where he is co-organiser of the local Scala User Group. He is enthusiastic about Scala-based technologies and the vibrant community, and is looking for ways to spread their usage in the industry. He has also been scuba-diving since the age of 6, and can’t quite get used to the lack of sea in Austria.

This series is split into three parts, which we’ll publish over the next month.

Reactive?

The concept of Reactive Applications is getting increasingly popular these days and chances are that you have already heard of it someplace on the Internet. If not, you could read the Reactive Manifesto, or perhaps agree to the following simple summary: in a nutshell, Reactive Applications are applications that:

  • make optimal use of computational resources (in terms of CPU and memory usage) by leveraging asynchronous programming techniques
  • know how to deal with failure, degrading gracefully instead of, well, just crashing and becoming unavailable to their users
  • can adapt to intensive workloads, scaling out on several machines / nodes as load increases (and scaling back in)

Reactive Applications do not exist merely in a wild, pristine green field. At some point they will need to store and access data in order to do something meaningful and chances are that the data happens to live in a relational database.

Latency and database access

When an application talks to a database, more often than not the database server is not running on the same machine as the application. If you’re unlucky, the server (or set of servers) hosting the application may even live in a different data centre than the database server. Here is what this means in terms of latency:

Latency numbers every programmer should know

Say that you have an application that runs a simple SELECT query on its front page (let’s not debate whether or not this is a good idea here). If your application and database servers live in the same data centre, you are looking at a latency on the order of 500 µs (depending on how much data comes back). Now compare this to all that your CPU could do during that time (all those green and black squares in the figure above) and keep this in mind – we’ll come back to it in just a minute.

The cost of threads

Let’s suppose that you run your welcome page query in a synchronous manner (which is what JDBC does) and wait for the result to come back from the database. During all of this time you will be monopolizing a thread that waits for the result to come back. A Java thread that just exists (without doing anything at all) can take up to 1 MB of stack memory, so if you use a threaded server that will allocate one thread per user (I’m looking at you, Tomcat) then it is in your best interest to have quite a bit of memory available for your application in order for it to still work when featured on Hacker News (1 MB / concurrent user).
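To put that number in perspective: at 5,000 concurrent users, that is in the ballpark of 5 GB of memory for thread stacks alone, before the application has done any useful work.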

Reactive applications such as the ones built with the Play Framework make use of a server that follows the evented server model: instead of following the “one user, one thread” mantra it will treat requests as a set of events (accessing the database would be one of these events) and run them through an event loop:

Evented server and its event loop

Such a server will not use many threads. For example, the default configuration of the Play Framework is to create one thread per CPU core, with a maximum of 24 threads in the pool. And yet, this type of server model can deal with many more concurrent requests than the threaded model given the same hardware. The trick, as it turns out, is to hand the thread over to other events when a task needs to do some waiting – or in other words: to program in an asynchronous fashion.
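For reference, this default looks roughly like the following in Play’s application.conf – the exact nesting of the keys varies between Play versions, so treat this as a sketch of Akka’s fork-join-executor settings:

play {
  akka {
    actor {
      default-dispatcher = {
        fork-join-executor {
          # one thread per CPU core ...
          parallelism-factor = 1.0
          # ... capped at 24 threads
          parallelism-max = 24
        }
      }
    }
  }
}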

Painful asynchronous programming

Asynchronous programming is not really new: programming paradigms for dealing with it have been around since the 70s and have quietly evolved since then. And yet, asynchronous programming is not necessarily something that brings back happy memories to most developers. Let’s look at a few of the typical tools and their drawbacks.

Callbacks

Some languages (I’m looking at you, JavaScript) have been stuck in the 70s with callbacks as their only tool for asynchronous programming up until recently (ECMAScript 6 introduced Promises). This is also known as Christmas tree programming:

Christmas tree of hell

Ho ho ho.

Threads

As a Java developer, the word asynchronous may not necessarily have a very positive connotation; it is often associated with the infamous synchronized keyword.
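Something along these lines – a contrived sketch of hand-rolled synchronization around shared mutable state:

public class VisitCounter {

    // shared mutable state, guarded by the monitor of 'this'
    private long visits = 0;

    // every caller blocks here while another thread holds the lock
    public synchronized void increment() {
        visits++;
    }

    public synchronized long current() {
        return visits;
    }
}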

Working with threads in Java is hard, especially when using mutable state – it is so much more convenient to let an underlying application server abstract all of the asynchronous stuff away and not worry about it, right? Unfortunately, as we have just seen, this comes at quite a hefty cost in terms of performance.

And I mean, just look at this stack trace:

Abstraction is the mother of all trees

In a way, threaded servers are to asynchronous programming what Hibernate is to SQL – a leaky abstraction that will cost you dearly in the long run. And once you realize it, it is often too late: you are trapped in your abstraction, fighting it by all means in order to increase performance. Whilst for database access it is relatively easy to let go of the abstraction (just use plain SQL, or even better, jOOQ), for asynchronous programming the better tooling is only starting to gain popularity.

Let’s turn to a programming model that finds its roots in functional programming: Futures.

Futures: the SQL of asynchronous programming

Futures as they can be found in Scala leverage functional programming techniques that have been around for decades in order to make asynchronous programming enjoyable again.

Future fundamentals

A scala.concurrent.Future[T] can be seen as a box that will eventually contain a value of type T if it succeeds. If it fails, the Throwable at the origin of the failure will be kept. A Future is said to have succeeded once the computation it is waiting for has yielded a result, or to have failed if there was an error during the computation. In either case, once the Future is done computing, it is said to be completed.

Welcome to the Futures

As soon as a Future is declared, it will start running, which means that the computation it tries to achieve will be executed asynchronously. For example, we can use the WS library of the Play Framework in order to execute a GET request against the Play Framework website:

val response: Future[WSResponse] = 
  WS.url("http://www.playframework.com").get()

This call will return immediately and lets us continue to do other things. At some point in the future, the call may have been executed, in which case we could access the result to do something with it. Unlike Java’s java.util.concurrent.Future<V>, which lets one check whether a Future is done or block while retrieving it with the get() method, Scala’s Future makes it possible to specify what to do with the result of an execution.

Transforming Futures

Manipulating what’s inside of the box is easy as well and we do not need to wait for the result to be available in order to do so:

val response: Future[WSResponse] = 
  WS.url("http://www.playframework.com").get()

val siteOnline: Future[Boolean] = 
  response.map { r =>
    r.status == 200
  }

siteOnline.foreach { isOnline =>
  if(isOnline) {
    println("The Play site is up")
  } else {
    println("The Play site is down")
  }
}

In this example, we turn our Future[WSResponse] into a Future[Boolean] by checking for the status of the response. It is important to understand that this code will not block at any point: only when the response is available will a thread be made available to process it and execute the code inside of the map function.

Recovering failed Futures

Failure recovery is quite convenient as well:

val response: Future[WSResponse] =
  WS.url("http://www.playframework.com").get()

val siteAvailable: Future[Option[Boolean]] = 
  response.map { r =>
    Some(r.status == 200)
  } recover {
    case ce: java.net.ConnectException => None
  }

At the very end of the chain we call the recover method, which will deal with a certain type of exception and limit the damage. In this example we are only handling the unfortunate case of a java.net.ConnectException by returning a None value.

Composing Futures

The killer feature of Futures is their composability. A very typical use-case when building asynchronous programming workflows is to combine the results of several concurrent operations. Futures (and Scala) make this rather easy:

def siteAvailable(url: String): Future[Boolean] =
  WS.url(url).get().map { r =>
    r.status == 200
  }

val playSiteAvailable =
  siteAvailable("http://www.playframework.com")

val playGithubAvailable =
  siteAvailable("https://github.com/playframework")

val allSitesAvailable: Future[Boolean] = for {
  siteAvailable <- playSiteAvailable
  githubAvailable <- playGithubAvailable
} yield (siteAvailable && githubAvailable)

The allSitesAvailable Future is built using a for comprehension which will wait until both Futures have completed. The two Futures playSiteAvailable and playGithubAvailable start running as soon as they are declared and the for comprehension composes them together. And if one of those Futures were to fail, the resulting Future[Boolean] would fail directly as a result (without waiting for the other Future to complete).

This is it for the first part of this series. In the next post we will look at another tool for reactive programming and then finally at how to use those tools in combination in order to access a relational database in a reactive fashion.

Read on

Stay tuned as we’ll publish Parts 2 and 3 shortly as part of this series.

Querying Your Database from Millions of Fibers (Rather than Thousands of Threads)

jOOQ is a great way to do SQL in Java and Quasar fibers bring much improved concurrency

We’re excited to announce another very interesting guest post on the jOOQ Blog by Fabio Tudone from Parallel Universe.

Parallel Universe develops an open-source stack that allows developers to easily code extremely concurrent applications on the JVM. With the Parallel Universe stack you build software that works in harmony with modern hardware rather than fighting it at every turn, while keeping your programming language and your simple, familiar programming styles.

Fabio Tudone develops and maintains Quasar integration modules as part of the Comsat project. He was part of, and then led, the development of a cloud-based enterprise content governance platform for several years before joining the Parallel Universe team, and he has been writing mostly JVM software throughout his professional journey. His interests include Dev and DevOps practices, scalability, concurrent and functional programming as well as runtime platforms. Naturally curious and leaning towards exploration, he enjoys gathering knowledge and understanding from people, places and cultures. He’s also interested in awareness practices and likes writing all sorts of stuff.

Quasar features an integration for JDBC and jOOQ as part of the Comsat project, so let’s have a look inside the box.

JDBC, jOOQ and Quasar

comsat-jdbc provides a fiber-blocking wrapper of the JDBC API, so that you can use your connection inside fibers rather than regular Java threads.

Why would you do that? Because fibers are lightweight threads and you can have many more fibers than threads in your running JVM. “Many more” means we’re talking millions versus a handful of thousands.

This means that you have a lot more concurrency capacity in your system to do other things in parallel while you wait for JDBC execution, be it concurrent / parallel calculations (like exchanging actor messages in your highly reliable Quasar Erlang-like actor system) or fiber-blocking I/O (e.g. serving web requests, invoking micro-services, reading files through fiber NIO or accessing other fiber-enabled data sources like MongoDB).

If your DB can stand it and a few more regular threads won’t blow up your system (yet), you can even increase your fiber-JDBC pool (see Where’s the waiting line? below) and send more concurrent jOOQ commands.

Since jOOQ uses JDBC connections to access the database, having jOOQ run on fibers is as easy as bringing in the comsat-jooq dependency and handing your fiber-enabled JDBC connection to the jOOQ context:

import java.sql.Connection;

import org.jooq.DSLContext;
import org.jooq.impl.DSL;

import co.paralleluniverse.fibers.jdbc.FiberDataSource;

// ...

// wrap any JDBC DataSource so that connections block fibers, not threads
Connection conn = FiberDataSource.wrap(dataSource)
                                 .getConnection();
DSLContext create = DSL.using(conn);

// ...

Of course you can also configure a ConnectionProvider to fetch connections from your FiberDataSource.
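A minimal sketch of that approach, using jOOQ’s standard DataSourceConnectionProvider on top of the wrapped data source (the dataSource variable and the dialect are assumptions):

import javax.sql.DataSource;

import org.jooq.DSLContext;
import org.jooq.SQLDialect;
import org.jooq.impl.DSL;
import org.jooq.impl.DataSourceConnectionProvider;
import org.jooq.impl.DefaultConfiguration;

import co.paralleluniverse.fibers.jdbc.FiberDataSource;

// ...

DataSource fiberDs = FiberDataSource.wrap(dataSource);

DSLContext create = DSL.using(
    new DefaultConfiguration()
        .set(new DataSourceConnectionProvider(fiberDs))
        .set(SQLDialect.H2)); // use your actual dialect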

From this moment on you can use regular jOOQ and everything will happen in fiber-blocking mode rather than thread-blocking. That’s it.

No, really, there’s absolutely nothing more to it: you keep using the excellent jOOQ, only with much more efficient fibers rather than threads. Quasar is a good citizen and won’t force you into a new API (which is nice especially when the original one is great already).
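To make that concrete, here is a minimal sketch of a jOOQ query running inside a fiber (fiberDs is the wrapped data source from the sketch above; the author table and its columns are made up):

import java.sql.Connection;
import java.sql.SQLException;

import org.jooq.Record;
import org.jooq.Result;
import org.jooq.impl.DSL;

import co.paralleluniverse.fibers.Fiber;

// ...

new Fiber<Void>(() -> {
    // getConnection() parks the fiber instead of blocking a kernel thread
    try (Connection c = fiberDs.getConnection()) {
        Result<Record> authors =
            DSL.using(c).fetch("SELECT id, name FROM author");
        authors.forEach(r -> System.out.println(r.getValue("name")));
    } catch (SQLException e) {
        throw new RuntimeException(e);
    }
    return null;
}).start();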

Since the JVM at present supports neither green threads nor continuations natively, which could be used to implement lightweight threads, Quasar implements continuations (and fibers on top of them) via bytecode instrumentation. This can be done at compile time, but often it’s just more convenient to use Quasar’s agent (especially when instrumenting third-party libraries), so here’s an example Gradle project based on Dropwizard that also includes the Quasar agent setup (don’t forget about Capsule, a really great Java deployment tool for every need, which, needless to say, makes using Quasar and agents in general a breeze). The example doesn’t use all of the jOOQ features; rather, it falls into the SQL-building use case (both for querying and for CRUD), but you’re encouraged to change it to suit your needs. The without-comsat branch contains a thread-blocking version so you can compare and see the (minimal) differences with the Comsat version.
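If you wire the agent up yourself, it typically boils down to a single JVM flag (path and jar name here are placeholders for whatever Quasar release you depend on):

java -javaagent:path/to/quasar-core.jar -jar my-app.jar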

Where’s the waiting line?

You might be wondering now: OK, but JDBC is a thread-blocking API, so how can Quasar turn it into a fiber-blocking one? Because JDBC doesn’t have an asynchronous mode, Quasar uses a thread pool behind the scenes to which fibers dispatch JDBC operations, and by which they’re unfrozen and scheduled for resumption when the JDBC operation completes (have a look at Quasar’s integration patterns for more info).
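Schematically, that integration pattern can be expressed with FiberAsync.runBlocking, which parks the calling fiber, runs the blocking call on the given executor and resumes the fiber with the result. A sketch, where blockingJdbcCall stands for a hypothetical thread-blocking helper (comsat-jdbc does this kind of wiring for you):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import co.paralleluniverse.common.util.CheckedCallable;
import co.paralleluniverse.fibers.FiberAsync;
import co.paralleluniverse.fibers.SuspendExecution;

// ...

static final ExecutorService JDBC_POOL = Executors.newFixedThreadPool(10);

static String slowQuery() throws SuspendExecution, InterruptedException {
    // the fiber parks here; a pool thread runs the blocking call
    // and the fiber is resumed with its result once it completes
    return FiberAsync.runBlocking(JDBC_POOL,
        new CheckedCallable<String, RuntimeException>() {
            @Override
            public String call() {
                return blockingJdbcCall(); // hypothetical blocking helper
            }
        });
}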

Yes, here’s the nasty waiting line: JDBC commands waiting to be executed by the thread pool. Although you’re not improving DB parallelism beyond your JDBC thread-pool size, you’re not hurting your fibers either, even though you’re still using a simple and familiar blocking API. You can still have millions of fibers.

Is it possible to improve the overall situation? Without a standard asynchronous Java RDBMS API there isn’t much we can do. However, this may not matter at all if the database is your bottleneck. There are several nice posts and discussions about this topic and the argument amounts to deciding where you want to move the waiting line.

Bonus: how does this neat jOOQ integration work under the covers?

At present Quasar needs the developer (or integrator) to tell it what to instrument, although fully automatic instrumentation is in the works (this feature depends on some minor JRE changes that won’t be released before Java 9). If you can conveniently alter the source code (or the compiled classes), then it’s enough to annotate methods with @Suspendable or to let them declare throws SuspendExecution, but this is usually not the case with libraries. Instead, methods with fixed, well-known names can be listed in META-INF/suspendables and META-INF/suspendable-supers, respectively for concrete methods and for abstract / interface methods that can have suspendable implementations.
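As an illustration, these are plain-text files with one fully qualified method per line (the entries below are made up for the example). A META-INF/suspendables file could look like this:

com.example.repository.AuthorRepository.findAll
com.example.repository.AuthorRepository.save

and META-INF/suspendable-supers lists the abstract or interface methods whose implementations may suspend:

com.example.repository.Repository.findAll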

If there are a lot of them (or code generation is involved), you can write a SuspendableClassifier to ship with your integration and register it with Quasar’s SPI to provide additional instrumentation logic (see jOOQ’s). A SuspendableClassifier‘s job is to examine signature information about each and every method in your runtime classpath during the instrumentation phase and tell whether it’s suspendable, whether it can have suspendable implementations, whether for sure neither is the case, or whether it doesn’t know (some other classifier could perhaps say “suspendable” or “suspendable-super” later on).
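A rough sketch of what such a classifier boils down to is shown below. The class and method matched here are hypothetical, and the exact isSuspendable signature should be double-checked against your Quasar version; registration happens through a standard Java SPI file (META-INF/services/co.paralleluniverse.fibers.instrument.SuspendableClassifier) naming your implementation:

import co.paralleluniverse.fibers.instrument.MethodDatabase;
import co.paralleluniverse.fibers.instrument.MethodDatabase.SuspendableType;
import co.paralleluniverse.fibers.instrument.SuspendableClassifier;

public class ExampleClassifier implements SuspendableClassifier {

    @Override
    public SuspendableType isSuspendable(
            MethodDatabase db, String sourceName, String sourceDebugInfo,
            boolean isInterface, String className, String superClassName,
            String[] interfaces, String methodName, String methodDesc,
            String methodSignature, String[] exceptions) {

        // class names arrive in JVM internal form (slash-separated)
        if ("com/example/generated/BaseRepository".equals(className)
                && "fetchAll".equals(methodName))
            return SuspendableType.SUSPENDABLE;

        // null means "no opinion": other classifiers may still decide
        return null;
    }
}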

Summing it all up

Well… Just enjoy the excellent jOOQ on efficient fibers!