jOOQ Tuesdays: Oliver Gierke Talks About Spring Data

Welcome to the jOOQ Tuesdays series. In this series, we’ll publish an article on the third Tuesday every other month where we interview someone we find exciting in our industry from a jOOQ perspective. This includes people who work with SQL, Java, Open Source, and a variety of other related topics.

I’m very excited to feature today Oliver Gierke, the Spring Data Project Lead at Pivotal with strong opinions on DDD and REST.

Hi Oliver – Since 2011, you have mostly been working for Pivotal (previously SpringSource) on Spring Data as the project lead. What is it that fascinates you most about working with data?

To be completely honest, it’s not data in itself that got me into Spring Data (or even into the predecessor of the JPA module, which I was working on before I joined SpringSource), but rather an interest in managing complexity in software. When you talk about that, there’s no way around Domain-Driven Design and its building blocks: value objects, entities, aggregates, and the concept of a repository, which in turn gets you into the realm of data access.

So we as the Spring Data team always have to weigh different driving forces against each other: first, the level of abstraction and the programming model you use in your application code, and how easily they let you implement the domain logic that solves your business problem at hand. Second, the different tradeoffs that different data stores have already made, how we allow our users to leverage them, and at the same time how we expose enough commonality in the programming model that developers can easily transfer knowledge between projects that might use different stores for certain reasons. Spring Data tries to bridge that gap: provide a low entry barrier for modelling aggregates and repositories, but at the same time give users the tools to fall back to very efficient means of data access that require a lot of developer control, for cases where that’s the top priority.

Spring Data has an impressive number of officially and unofficially supported modules that reach far beyond relational data models. What are the biggest challenges in working with so many models and technologies in a single API?

Definitely the diversity in approaches and tradeoffs of the underlying persistence technologies. Actually, that’s one of the reasons Spring Data does not try to provide a single unifying API. It’s not even trying to do that for stores of a certain category, for example document databases. We’ve rather taken a general Spring ecosystem philosophy and applied it to the repositories and data access space: have a consistent programming model with repeatable patterns, so that it’s easy to understand the purpose of the abstraction, but then let the abstraction expose store-specific specialties, so that we’re not abstracting those away but rather allowing developers to leverage them when needed.

The general concept of an interface-based repository abstraction is not the most revolutionary thing on earth, I admit. But we now look back at almost a decade of experience in designing the parts of the programming model that are indeed API, and I think we’ve learned a lot from the mistakes we made. Java 8 will be the baseline for the upcoming second generation of Spring Data, which enriches our options in terms of APIs. Reactive programming is a hot topic at the moment, too. So there are a lot of balls to juggle in that space.

What’s your favourite module, and why?

That’s of course hard to say, as it’s been awesome to see what the individual store modules have turned into over time. However, I’ve grown a bit of a special relationship with the Commons module, which is the foundation of all the store modules, as it basically contains the heart and soul of Spring Data: the object mapping facilities, the repository proxy implementation, etc. And it’s great to see how we can often add some functionality there and have it immediately available to the repositories of all stores.

On the other end of the spectrum there’s Spring Data REST, a module that exposes RESTful resources based on your aggregate and repository definitions. I like that one very much as well, as it works across all the Spring Data modules that expose a repository API, and it is a great showcase of what you can achieve on top of such an abstraction. Also, it has really helped us make developers aware of a couple of often overlooked aspects of REST, but I guess we’re gonna get to that in a second.

Maintaining a big and widely used API is hard: balancing tons of user requests, integrating third-party functionality, maintaining backwards compatibility. What are some maintainer battle stories you’d like to share?

It certainly is. Especially with so many concurrent, sometimes contradicting, forces in play. That starts with the question of versioning the modules: what do we actually version here? The user-facing API. But which part of the API is that? That totally depends on how much the user is customizing behavior. Is a developer building a Spring Data module for some new data store a user, too? Of course, but a very different kind of user. We usually try to be very conservative with changes that could affect application developers, but a bit more demanding when it comes to the implementors of a store module.

That’s all stuff we sort of had, and still have, to deal with on a day-to-day basis. Interestingly, we were the first ones in the broader Spring engineering team to pick up the notion of a release train (we group together releases of all modules and name them after famous computer scientists), an idea that had been popularized by the Eclipse team. That approach has worked well for us and has since been adopted by Spring Cloud, Reactor, and other teams as well.

Oliver, I have to ask, why does everyone misunderstand REST?

I’m kind of surprised this question comes up in this context, but I guess I have built up some reputation on the internet for complaining about people being anywhere from unspecific to, in my opinion, outright wrong about this topic :D.

I guess the fundamental problem REST has is that some parts of it are moderately easy to understand and implement. These days everyone agrees that URIs are a cool thing and that using the right HTTP verb for a given task is a good idea. But even with the latter, you’ll easily find people who don’t understand why it’s a good idea to prefer a PUT request over a POST one. Which already brings us to the second part.

Then there are the parts that are harder to grasp and a bit harder to implement. The hypermedia aspect comes to mind. Unfortunately, these aspects are the ones that heavily influence whether what you build delivers on the promises that REST makes: being an architectural style that gives you, for example, scalability and evolvability. So people basically start ignoring these aspects, sometimes even outright arguing that they don’t need them, but then turn around and criticize REST for not delivering on its promises.

In my opinion, that’s a way too common pattern observable in the wild, but I guess the only way to improve the situation is to work on making those aspects easier to implement, and to provide good examples of the benefits you get when you follow those advanced constraints.

A RESTful JDBC HTTP Server built on top of jOOQ

The jOOQ ecosystem and community is continually growing. We’re personally always thrilled to see other Open Source projects built on top of jOOQ. Today, we’re very happy to introduce you to a very interesting approach at combining REST and RDBMS by Björn Harrtell.

Björn Harrtell is a Swedish programmer who has been coding since childhood. He is usually busy writing GIS systems and integrations at Sweco Position AB, but sometimes he gets involved in Open Source, contributing to projects like GeoTools and OpenLayers. Björn has also initiated a few minor Open Source projects himself, and one of the latest he’s been working on is jdbc-http-server.

We’re excited to publish Björn’s guest post introducing his interesting work:

JDBC HTTP Server

Ever found yourself writing a lot of REST resources that do simple CRUD against a relational database and felt the code was repeating itself? In that case, jdbc-http-server might be a project worth checking out.

jdbc-http-server exposes a relational database instance as a discoverable REST API making it possible to perform simple CRUD from a browser application without requiring any backend code to be written.

A discoverable REST API means you can access the root resource at / and follow links to subresources from there. For example, if you have a database named testdb with a table named testtable in the public schema, you can perform the following operations:

Retrieve (GET), update (PUT) or delete (DELETE) a single row at:

/db/testdb/schemas/public/tables/testtable/rows/1

Retrieve (GET) or update (PUT) rows, or create a new row (POST), at:

/db/testdb/schemas/public/tables/testtable/rows

The above resources accept the parameters select, where, limit, offset, and orderby where applicable. For example:

GET a maximum of 10 rows where cost>100 at:

/db/testdb/schemas/public/tables/testtable/rows?where=cost>100&limit=10
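
To try this from Java rather than the browser, here is a minimal sketch using the JDK 11 HttpClient. The host and port are assumptions; adjust them to wherever your jdbc-http-server instance runs:

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class RowsClient {
    public static void main(String[] args) throws Exception {
        // "cost>100" contains '>', so the where parameter must be URL-encoded
        String where = URLEncoder.encode("cost>100", StandardCharsets.UTF_8);
        URI uri = URI.create(
            "http://localhost:8080/db/testdb/schemas/public/tables/testtable/rows"
            + "?where=" + where + "&limit=10");

        HttpRequest request = HttpRequest.newBuilder(uri).GET().build();

        // The matching rows come back as a JSON document
        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}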

jdbc-http-server is database-engine agnostic, since it utilizes jOOQ to generate SQL in a dialect suited to the target database engine. At the moment, H2, PostgreSQL, and HSQLDB are covered by automated tests. Currently, the only available representation format is JSON, but adding more is an interesting possibility.
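
To illustrate why jOOQ is a good fit for this kind of server, here is a sketch of the general idea (not jdbc-http-server’s actual code): the rows resource above translated into dialect-aware SQL, rendered for two of the tested engines:

import static org.jooq.impl.DSL.*;

import org.jooq.Query;
import org.jooq.SQLDialect;
import org.jooq.conf.ParamType;

public class DialectSketch {
    public static void main(String[] args) {
        // The same jOOQ query, rendered once per target engine
        for (SQLDialect dialect : new SQLDialect[] {
                SQLDialect.POSTGRES, SQLDialect.HSQLDB }) {
            Query query = using(dialect)
                .select()
                .from(table(name("public", "testtable")))
                .where(field(name("cost"), Integer.class).gt(100))
                .limit(10);

            // jOOQ takes care of dialect specifics such as quoting and LIMIT syntax
            System.out.println(query.getSQL(ParamType.INLINED));
        }
    }
}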

Feedback and, of course, contributions are welcome 🙂

The Power of Spreadsheets in a Reactive, RESTful API

Being mostly a techie, I have recently, and admittedly, been betrayed by my own Dilbertesque attitude when I stumbled upon this buzzword-filled TechCrunch article about Espresso Logic. Ever concerned about my social media reputation (e.g. reddit and Hacker News karma), I thought it would be witty to put a link on those platforms titled:

Just found this article on TechCrunch. Reads like a markov-chain-generated series of buzzwords.

With such a catchy headline, the post quickly skyrocketed – and like many other redditors, my thoughts were with Geek and Poke:

But like a few other redditors, I couldn’t resist clicking through to the actual product that claims to implement “reactive programming” through a REST and JSON API. And I’m frankly impressed by the ideas behind this product. For once, the buzzwords are backed by software implementing them very nicely! Let’s first delve into…

Reactive Programming

Reactive programming is a term that has recently gained quite some traction around Typesafe, the company behind Akka. It has gained additional traction since Erik Meijer (the creator of LINQ) left Microsoft to fully dedicate his time to his new company, Applied Duality. With such brilliant minds sharply focused on the topic, we’ll certainly hear more about the Reactive Manifesto in the near future.

But in fact, every manager already knows the merits of “reactive programming”, as they’re working with the most reactive and probably the most awesome software on the planet: Microsoft Excel, a device whose mystery is only exceeded by its power. Think about how awesome Excel is. You have hundreds of rules, formulas, and cell interdependencies. And any time you change a value, the whole spreadsheet magically updates itself. That’s reactive programming.

The power of reactive programming lies in its expressiveness. With only very little declarative logic, you can express what otherwise needs dozens of lines of SQL, or hundreds of lines of Java.
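
To make the spreadsheet analogy concrete, here is a deliberately tiny toy sketch in plain Java. It has nothing to do with Espresso Logic’s implementation, and it omits real-world concerns such as cycle detection: a cell is either a plain value or a formula over other cells, and changing a value re-evaluates everything that depends on it:

import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

public class Cell {
    private final List<Cell> dependents = new ArrayList<>();
    private Supplier<Integer> formula;
    private int value;

    Cell(int value) { this.value = value; }

    Cell(Supplier<Integer> formula, Cell... inputs) {
        this.formula = formula;
        for (Cell input : inputs)
            input.dependents.add(this);  // register for change notifications
        recalculate();
    }

    int get() { return value; }

    void set(int newValue) {             // changing one value...
        this.value = newValue;
        for (Cell d : dependents)        // ...propagates through the rule chain
            d.recalculate();
    }

    private void recalculate() {
        this.value = formula.get();
        for (Cell d : dependents)
            d.recalculate();
    }

    public static void main(String[] args) {
        Cell price    = new Cell(25);
        Cell quantity = new Cell(2);
        Cell amount   = new Cell(() -> price.get() * quantity.get(), price, quantity);

        price.set(30);
        System.out.println(amount.get()); // 60: the "spreadsheet" updated itself
    }
}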

Espresso Logic

With this in mind, I started to delve into Espresso Logic’s free trial. Note that I’m the kind of impatient person who wants quick results without reading the docs. In case you work the other way round, there are some interesting resources in the Espresso Logic documentation to get you started.

Anyway, the demo ships with a pre-installed MySQL database containing what looks like a typical e-commerce schema with customer, employee, lineitem, product, purchaseorder, and purchaseorder_audit tables:

The schema browsing view in Espresso Logic

So I get schema navigation information (such as parent / child relationships) and an overview of rules. These rules look like triggers calculating sums or validating things. We’ll get to these rules later on.

Live API

So far, things are as expected. The UI is maybe a bit rough around the edges, as the product has only existed since late 2013. But what struck me as quite interesting is what Espresso Logic calls the Live API. With a couple of clicks, I can assemble a REST resource tree structure from various types of resources, such as database tables. The Espresso Designer will then almost automatically join tables to produce trees like this one:

The Resource Tree view of Espresso Logic

Notice how I can connect child entities to their parents quite easily. Now, this API is still a bit limited. For instance, I couldn’t figure out how to drag and drop a reporting relationship that calculates the order amount per customer and product. However, I can switch the Resource Type from “Normal” to “SQL” to achieve just that with a plain old GROUP BY and an aggregate function.
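
In jOOQ terms, such a reporting resource boils down to an ordinary aggregation query. Here is a hedged sketch of what that GROUP BY might look like, using the demo schema’s table names (the join columns are my guesses, not taken from the actual demo):

import static org.jooq.impl.DSL.*;

import org.jooq.SQLDialect;

public class ReportSketch {
    public static void main(String[] args) {
        // Order amount per customer and product, rendered as MySQL SQL
        String sql = using(SQLDialect.MYSQL)
            .select(
                field(name("c", "name")),
                field(name("p", "name")),
                sum(field(name("l", "amount"), Integer.class)))
            .from(table(name("customer")).as("c"))
            .join(table(name("purchaseorder")).as("o"))
                .on(field(name("o", "customer_name")).eq(field(name("c", "name"))))
            .join(table(name("lineitem")).as("l"))
                .on(field(name("l", "order_number")).eq(field(name("o", "order_number"))))
            .join(table(name("product")).as("p"))
                .on(field(name("p", "product_number")).eq(field(name("l", "product_number"))))
            .groupBy(field(name("c", "name")), field(name("p", "name")))
            .getSQL();

        System.out.println(sql);
    }
}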

I started to grasp that I’m actually managing and developing a RESTful API based on the available database resources! A little further down the menu, I then found the “Quick Ref” item, which helped me understand how to call this API:

A quick API reference

So, each of the previously defined resources is exposed through a URL, as I’d expect from any RESTful API. What looks really nice is that I get built-in API versioning and an API key. Note that, from an OWASP point of view, passing API keys around in GET requests is strongly discouraged. This is just a use-case for a quick-start demo and for the odd developer test. Do not use this in production!

Anyway, I called the URL in my browser with the API key as a parameter (going against my own rules):

https://eval.espressologic.com/rest/[my-user]/demo/v1/AllCustomers?auth=[my-key]:1

And I got a JSON document like this:

[
  {
    "@metadata": {
      "href": "https://eval.espressologic.com/rest/[my-user]/demo/v1/Customers/Alpha%20and%20Sons",
      "checksum": "A:cf1f4fb79e8e7142"
    },
    "Name": "Alpha and Sons",
    "Balance": 105,
    "CreditLimit": 900,
    "links": [
    ],
    "Orders": [
      {
        "@metadata": {
          "href": "https://eval.espressologic.com/rest/[my-user]/demo/v1/Customers.Orders/6",
          "checksum": "A:0bf14e2d58cc97b5"
        },
        "OrderNumber": 6,
        "TotalAmount": 70,
        "Paid": false,
        "Notes": "Pack with care - fragile merchandise",
        "links": [
        ], ...

Notice how each resource has a link and a checksum. The checksum is needed for the built-in optimistic locking, should you choose to update any of the above resources concurrently. Notice also how the nested resource Orders is referenced as Customers.Orders. I can also access it directly by calling the above URL.

Live Logic / Reactive Programming

So far so good. Similar things have been implemented in a variety of software. For instance, Adobe Experience Manager / Apache Sling intuitively exposes the JCR repository through REST as well. But the idea behind Espresso Logic really started to fascinate me when I clicked on “Live Logic” and was presented with a preconfigured set of rules that are applied to the data:

The rules view

I quickly skimmed through the manual to see if I had understood correctly. These rules actually resemble the kind of rules you can enter in any spreadsheet software. For instance, it appears that the customer.balance column is calculated as the sum of all purchaseorder.amount_total values having a paid value of false, and so on.

So, if I follow this rule chain, I wind up with lineitem.product_price as the shared dependency of all other calculated values. When that value changes, a whole set of updates should run through my rule set to finally change the customer.balance:

changing lineitem.product_price
-> changes lineitem.amount
  -> changes purchaseorder.amount_total
    -> changes customer.balance

Depending on how much of a console hacker you are, you might want to write your own PUT call using curl, or you can leverage the REST Lab from the Espresso Designer, which helps you get all the parameters right. So, assume we want to change a line item from the previous call:

{
  "@metadata": {
    "href": "https://eval.espressologic.com/rest/[my_user]/demo/v1/Customers.Orders.LineItems/11",
    "checksum": "A:2e3d8cb0bff42763"
  },
  "lineitem_id": 11,
  "ProductNumber": 2,
  "OrderNumber": 6,
  "Quantity": 2,
  "Price": 25,
  ...

Let’s just try to update that to have a price of 30:

Using the REST Lab to execute PUT requests
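
If you prefer the console-hacker route mentioned above, a rough equivalent of this PUT using the JDK 11 HttpClient might look like the sketch below. The user and key are the placeholders from before, the checksum is taken from the earlier response, and I’m assuming the API accepts partial documents containing only the changed fields:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class UpdateLineItem {
    public static void main(String[] args) throws Exception {
        String user = "my-user"; // replace with your account name
        String key  = "my-key";  // replace with your API key

        // Sending back the checksum from the earlier GET is what enables
        // optimistic locking: the update is rejected if the row has been
        // changed by someone else in the meantime
        String body = "{"
            + "\"@metadata\": { \"checksum\": \"A:2e3d8cb0bff42763\" },"
            + "\"Price\": 30"
            + "}";

        HttpRequest request = HttpRequest.newBuilder(URI.create(
                "https://eval.espressologic.com/rest/" + user
                + "/demo/v1/Customers.Orders.LineItems/11?auth=" + key + ":1"))
            .header("Content-Type", "application/json")
            .PUT(HttpRequest.BodyPublishers.ofString(body))
            .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.body()); // includes the transaction summary
    }
}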

And you can see in the response that there is a transaction summary, which shows that the Customers.Orders.TotalAmount has changed from 50 to 60, the Customers.Balance has changed from 105 to 95, and an audit record has been written. The audit record itself is defined by a rule like any other rule. But there’s also an ordinary log file that shows what really happened when I ran this PUT request:

The log view showing all the INSERTs and UPDATEs executed

Imagine having to put all those INSERT and UPDATE statements into the correct order yourself, and having to correctly manage caching and transactions! Instead, all we have done is define some rules. For a complete overview of the available rule types, consider this page of the Live Logic manual.

Out-of-scope features for this post

So far, we’ve had a look at the most obvious features of Espresso Logic. There are more, though. A couple of examples:

Server-side JavaScript

If rules cannot express it, JavaScript can. There are various points in the application where you can inject your own JavaScript snippets, e.g. for validation, more complex rule expressions, request and response transformation, etc. Although we haven’t tried it, this reads like row-based triggers written in JavaScript.

Stored procedure support

The people behind Espresso Logic are “legacy-embracing” people, just like us at Data Geekery. Their target audience might already have thousands of complex stored procedures with lots of business logic in them, and those should not be rewritten in JavaScript. So, just like tables and views, stored procedures are exposed through the REST API, taking GET parameters for IN parameters and returning JSON documents for OUT parameters and cursors.

From a jOOQ perspective, it’s pretty awesome to see that someone else is taking stored procedures as seriously as we do.

Row / column level security

There is a built-in user and role management module that allows you to provide centrally managed, fine-grained access control to your data. Not many databases support row-level security the way the Oracle database does, for instance, so having this kind of feature in the platform really adds value to many RDBMS integrations.

Conclusion: Querying vs. updating vs. rule-based persistence

On our jOOQ blog and our marketing websites (e.g. hibernate-alternative.com), we always advocate two main use-cases when operating on databases:

  • Querying: You have very complex queries to calculate things like reports. For this, SQL (e.g. through jOOQ) is perfect
  • Updating: You have a very complex domain model with lots of items and deltas that you want to persist in one go. For this, Hibernate / ORMs are perfect

But today, Espresso Logic has shown us that there is yet another use-case, one that is covered by reactive programming (or “spreadsheet programming”) techniques. And that’s:

  • Rule-based persistence: You have a very complex domain model with lots of items and lots of rules which you want to validate, calculate, and keep in sync all the time. For this, both SQL and ORMs are solutions at the wrong level of abstraction.

This “new” use-case is actually quite common in a lot of enterprise applications, where complex business rules are currently spelled out in millions of lines of imperative code that is very hard to decipher and even harder to validate or modify. How can you reverse-engineer your business rules from millions of lines of legacy code written in COBOL?

At Data Geekery, we’re always on the lookout for brand new tech. Espresso Logic is a young startup with a new product. Yet, as originally mentioned, they’re a startup with seed funding, a very compelling and innovative idea, and a huge market of legacy COBOL applications whose owners want to start delving into “sexy” new technologies such as RESTful APIs, JSON, and reactive programming. It might just work! If you haven’t seen enough, go work through this tutorial, which covers advanced examples such as a “bill of materials price rollup”, “bill of materials kit explosion”, “budget rollup”, “audit salary changes”, and more.

We’ll certainly keep an eye out for future improvements to the Espresso Logic platform!