How Functional Programming will (Finally) do Away With the GoF Patterns


A recent article about various ways to implement structural pattern matching in Java has triggered my interest:
http://blog.higher-order.com/blog/2009/08/21/structural-pattern-matching-in-java

The article mentions a Scala example where a tree data structure can be traversed very easily and neatly using Scala’s match keyword, along with using algebraic data types (more specifically, a sum type):

def depth(t: Tree): Int = t match {
  case Empty => 0
  case Leaf(n) => 1
  case Node(l, r) => 1 + max(depth(l), depth(r))
}

Even if you’re not used to the syntax, it is relatively easy to understand what it does:

  • There’s a function depth that calculates the (maximum) depth of a tree structure
  • It does so by checking if the input argument is empty, a leaf node, or any other node
  • If it is any other node, it adds 1 to the maximum depth of the node’s two subtrees, computed recursively

The elegant thing here is that the Scala type system helps the author of the above code get this right from a formal point of view, by offering formal type checking. The closest we can get in Java, as illustrated by the article, is this:

public static int depth(Tree t) {
  if (t instanceof Empty)
    return 0;
  if (t instanceof Leaf)
    return 1;
  if (t instanceof Node)
    return 1 + max(depth(((Node) t).left), depth(((Node) t).right));
  throw new RuntimeException("Inexhaustive pattern match on Tree.");
}

But these instanceof checks do smell kind of fishy…

For more details, read the full article here, highly recommended:
http://blog.higher-order.com/blog/2009/08/21/structural-pattern-matching-in-java

How does this compare to the GoF design patterns?

In our object-orientation-brainwashed Java ecosystem (which inherited the OO brainwash from C++), the above instanceof logic would most likely be refactored into an implementation using the visitor pattern from the GoF design patterns book. This refactoring would be done by The Team Architect™ himself, as they are supervising the object-oriented quality of your software. The 7 lines of code using instanceof would quickly bloat up to roughly 200 lines of weird interfaces, abstract classes, and cryptic accept() and visit() methods, when in fact the functional programming approach was so much leaner, even in its imperfect Java instanceof form!

A lot of the GoF design patterns stem from a time when EVERYTHING needed to be an object. Object orientation was the new holy grail, and people even wanted to push objects down into databases. Object databases were invented (luckily, they’re all dead) and the SQL standard was enhanced with ORDBMS features (only really implemented in Oracle, PostgreSQL, and Informix, and maybe some other minor DBs), most of which – also luckily – were never widely adopted.

Since Java 8, finally, we’re starting to recover from the damage that was done in the early days of object orientation in the 90s, and we can move back to a more data-centric, functional, immutable programming model where data processing languages like SQL are appreciated rather than avoided, and Java will see more and more of these patterns, hopefully.
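Indeed, more recent Java versions have adopted exactly this: with sealed types and pattern matching for switch (standard since Java 21), the Scala example translates almost one to one. A sketch (modern Java, not from the original article):

sealed interface Tree permits Empty, Leaf, Node {}
record Empty() implements Tree {}
record Leaf(int n) implements Tree {}
record Node(Tree left, Tree right) implements Tree {}

public static int depth(Tree t) {
    return switch (t) {
        case Empty()            -> 0;
        case Leaf(var n)        -> 1;
        case Node(var l, var r) -> 1 + Math.max(depth(l), depth(r));
        // No default branch needed: the compiler checks exhaustiveness
        // over the sealed hierarchy, just like Scala's match
    };
}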

If you’re not convinced by the above visitor pattern vs pattern matching example, do read this very interesting series of articles by Mario Fusco.

You will see that with functional programming, many patterns lose their meaning as you’re just starting to pass around functions, making code very simple and easy to understand.
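Take the strategy pattern, the classic example: with lambdas, the “strategy” is just a function that you pass around. A trivial sketch of my own (not from Mario’s series):

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

List<String> names = new ArrayList<>(List.of("Mario", "Lukas", "Thorben"));

// The GoF strategy is no longer an interface hierarchy with one class
// per algorithm. The "strategy" is simply a function, passed as a value:
names.sort(Comparator.comparing(String::length));
// [Mario, Lukas, Thorben] (sorted by length; stable for equal lengths)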

As a wrap-up: Mario also presented this content at Voxxed Days Ticino.

Happy functional programming!

With Commercial Licensing, Invest in Innovation, not Protection


When people start creating commercially licensed software (like we did, in 2013 with jOOQ), there is always the big looming question:

What do I do about piracy?

I’ve had numerous discussions with fellow entrepreneurs about this topic, and this fear is omnipresent. There has also been a recent discussion on reddit, titled “prevent sharing of a Java library”. I felt reminded of the early commercial jOOQ days, when I discussed the various options / modalities of the new commercial jOOQ license with the Data Geekery legal counsel – which was clearly the biggest financial investment in early company days.

Build licensing around trust, not fear

One thing that bothered me most about jOOQ’s dual license in its early days was that our paying customers would have fewer rights than our continued Open Source users. Obviously. If you’re using jOOQ with an Open Source database, you can use the jOOQ Open Source Edition for free under the terms of the very permissive Apache License 2.0. You can do pretty much anything with jOOQ, including forking it, rewriting it, creating and distributing a new jOOQ (just don’t name it jOOQ, we own the trademark). The only limitation is: it doesn’t work with commercial databases. But you don’t care about that if you’re using MySQL or PostgreSQL for the next 10 years.

If you’re using jOOQ with a commercial database, however, you need to purchase a jOOQ Professional Edition or jOOQ Enterprise Edition license. Of course, there are costs, but that’s not the problem, because jOOQ is awesome and delivers 50x the value it costs.

The problem is that:

  • Interested developers, architects, etc. now have to go through the hassle of convincing their employer’s legal / purchasing / compliance / … departments and do all sorts of paperwork.
  • Paying customers (at the beginning) could no longer patch jOOQ if they found a bug. They had to wait for us, the vendor, to deliver a fix.

These things were remedied rather quickly in the commercial license text. The commercial jOOQ license now grants access to the commercial source code and allows users to modify jOOQ themselves (of course warranty is then disclaimed), without needing to wait for the vendor to deliver the fix. It is still not allowed to distribute the modification’s source code, although we’re looking into possible legal solutions to allow that as well, such that commercial customers can share patches for commercial parts of jOOQ as well.

In other words: we want our commercial customers to feel as if jOOQ were Open Source for their everyday job.

Isn’t that crazy?

So, people get the entire source code and can build jOOQ. Aren’t we afraid that our commercial, “cracked” jOOQ distributions wind up on warez sites? Of course we are. And it happens. And we’re maintaining a list of companies that “obviously” don’t comply with our terms (e.g. they’ve been using the free trial in production for 2 years). But they’re only very few. And even if they weren’t few, should we introduce tracking logic in jOOQ to check if customers / trial users are compliant? Should we collect IP addresses? User e-mails? Count workstations? Shut down jOOQ, if non-compliant? Shut it down where, on production servers, perhaps?

The truth is, most companies are honest. We’ve had many customers frequently upgrade their contracts. E.g. every couple of months, they’ve purchased new licenses. We’ve had other customers reduce their contracts. The team started with 5 licenses and now consists only of 1 person doing maintenance work. We’ve had customers not touching their contracts, using jOOQ with e.g. 10 licenses for a long time. Some of these are overlicensed, yes. Some of these are underlicensed. It happens. It’s not good, but it’s also not the end of the world. We’re in constant touch with them to see if their license count is still up to date. In the end, we trust them. They trust us.

The worst thing we could do now is introduce some sort of license checker that might be buggy and accidentally shut down our honest customers’ production systems, all for the slight chance that we might catch someone who “cracked” our software (and who probably manages to “crack” our license checker anyway).

Even if we’re not shutting down the software but logging messages somewhere, the end user (our customers’ customer) might get a very weird feeling when they see it. We would be indirectly damaging our customers’ reputation for what was probably just an oversight. Or, again, a bug in our license checker.

Is that really worth the trouble? No way!

(an exception is the free trial version, which must be deleted after one month)

Different types of software

The problem is: There are different types of software. Essentially, there are:

  • Tools
  • Libraries
  • Servers
  • SaaS

The difference is:

  • Tools run on premise but are independent of the end user application. The customer doesn’t depend on the tool.
    For instance: IntelliJ IDEA
  • Libraries run on premise and are embedded in the end user application. The customer completely depends on the library.
    For instance: jOOQ
  • Servers run on premise and are linked to the end user application but run independently. The customer can often replace the server, but cannot “open” it. It’s a black box.
    For instance: Oracle Database
  • SaaS runs in the cloud and is completely independent of the end user application. The customer cannot influence it in any way.
    For instance: Salesforce.com

As you can see, tools are “not critical” for the customer (being “just” UIs giving access to “the real system”), and while servers and SaaS are critical, all three of them run independently. It is easy for a vendor to implement license checkers inside of those because they’re in control of running the software. With libraries, it’s different. You cannot make any assumptions about how your library is “run”. Is it part of a single JVM? Or multiple JVMs? Run for a couple of seconds only, or for a long time? Used frequently or only very rarely? Is it allowed to spawn its own threads, collect its own data?

Too many open questions.

Long story short: Invest in innovation, not protection

Unless protection is really important in your area because you’re using extremely complex algorithms that no one should know about, you shouldn’t worry about piracy too much from a technical perspective. In our case, jOOQ isn’t super secret. Anyone can build their own jOOQ, and there are many (much simpler) competitor frameworks. Our business is maintaining by far the best Java SQL DSL, and apparently, no one else wants to compete with us in this niche, so why be afraid of theft?

If protection is important, well then run a SaaS, because then you don’t need to ship any software. For instance: Google, one of the biggest SaaS vendors out there, doesn’t share its search engine algorithms with you.

Once you stop worrying about who is going to steal from you, you can start investing all that time in awesome new features and quality to make those loyal and honest customers happy who happily pay for your software. And who knows. Perhaps some of your “pirate customers” will eventually switch jobs and work for someone who takes compliance more seriously. They have been happy “customers” too, and will also recommend your software to their new peers.

Turn Around. Don’t Use JPA’s loadgraph and fetchgraph Hints. Use SQL Instead.


Thorben Janssen (also known from our jOOQ Tuesdays series) recently published an interesting wrap-up of what’s possible with Hibernate / JPA query hints. The full article can be seen here:
http://www.thoughts-on-java.org/11-jpa-hibernate-query-hints-every-developer-know

Some JPA hints aren’t really hints at all. They’re full-blown query specifications, just like JPQL queries or SQL queries. They tell JPA how to fetch your entities. Let’s look at javax.persistence.loadgraph and javax.persistence.fetchgraph.

The example given in Oracle’s Java EE 7 tutorial is this:

You have a default entity graph, which is hard-wired to your entity class using annotations (or XML in the old days):

@Entity
public class EmailMessage implements Serializable {
    @Id
    String messageId;
    @Basic(fetch=EAGER)
    String subject;
    String body;
    @Basic(fetch=EAGER)
    String sender;
    @OneToMany(mappedBy="message", fetch=LAZY)
    Set<EmailAttachment> attachments;
    ...
}

Notice how the above entity graph mixes formal graph meta information (@Entity, @Id, @OneToMany, …) with query default information (fetch=EAGER, fetch=LAZY).

EAGER or LAZY?

Now, the problem with the above is that these defaults are hard-wired and cannot be changed for ad-hoc usage (thank you, annotations). Remember, SQL is an ad-hoc query language, with all of the benefits that derive from this ad-hoc-ness. You can materialise new result sets, whose type was not previously known, on the fly. An excellent tool for reporting, but also for ordinary data processing, because it is so easy to change a SQL query when you have new requirements. And if you’re using languages like PL/SQL or libraries like jOOQ, you can even do that in a type safe, precompiled way.

JPA’s annotations, on the other hand, are not “ad-hoc”, just like SQL’s DDL is not “ad-hoc”. Can you ever switch from EAGER to LAZY? Or from LAZY to EAGER? Without breaking half of your application? Truth is: you don’t know!

The problem is: choosing EAGER will prematurely materialise your entire entity graph (even if you needed only an E-Mail message’s subject and body), resulting in too much database traffic (see also “EAGER fetching is a code smell” by Vlad Mihalcea). Choosing LAZY will result in N+1 problems in case you really do need to materialise the relationship, because for each parent (“1”), you have to individually fetch each child (“N”) lazily, later on.
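To make the LAZY variant of the problem concrete, here is a minimal sketch of how N+1 typically arises (the getAttachments() accessor is an assumption, matching the EmailMessage entity above):

// 1 query to fetch all messages:
List<EmailMessage> messages = em
    .createQuery("SELECT m FROM EmailMessage m", EmailMessage.class)
    .getResultList();

// ... plus N more queries, one per message, issued lazily the moment
// each LAZY attachments collection is actually accessed:
for (EmailMessage message : messages)
    System.out.println(message.getAttachments().size());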

Do SQL people suffer from N+1?

As a SQL person, this sounds ridiculous to me. Imagine specifying in your foreign key constraint whether you always want to auto-fetch your relationship…

ALTER TABLE email_attachment
ADD CONSTRAINT fk_email_attachment_email
FOREIGN KEY (message_id)
REFERENCES email_message(message_id)
WITH FETCH OPTION LAZY -- meh...

Of course you don’t do that. The point of normalising your schema is to have the data sit there without duplicating it. That’s it. It is the query language’s responsibility to help you decide whether you want to materialise the relationship or not. For instance, trivially:

-- Materialise the relationship
SELECT *
FROM email_message m
JOIN email_attachment a
USING (message_id)

-- Don't materialise the relationship
SELECT *
FROM email_message m

Duh, right?

Are JOINs really that hard to type?

Now, obviously, typing all these joins all the time can be tedious, and that’s where JPA seems to offer help. Unfortunately, it doesn’t really help, because otherwise we wouldn’t have tons of performance problems due to the eternal EAGER vs LAZY discussion. It is a GOOD THING to think about each individual join every time, because if you don’t, you will structurally neglect your performance (as if SQL performance wasn’t hard enough already), and you’ll notice this only in production, because on your developer machine, you don’t have the problem. Why?

Works on my machine ಠ_ಠ

One way of solving this with JPA is to use the JOIN FETCH syntax in JPQL, which is essentially the same thing as what you would be doing in SQL, so you don’t win anything over SQL except for the automatic mapping (see also this example where the query is run with jOOQ and the mapping is done with JPA).
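For reference, a JOIN FETCH version of the E-Mail example might look like this (a sketch, with attribute names taken from the entity above):

EmailMessage message = em.createQuery(
        "SELECT m FROM EmailMessage m " +
        "JOIN FETCH m.attachments " +
        "WHERE m.messageId = :id", EmailMessage.class)
    .setParameter("id", id)
    .getSingleResult();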

Another way of solving this with JPA is to use these javax.persistence.fetchgraph or javax.persistence.loadgraph hints, but that’s even worse. Check out the code that is needed in Oracle’s Java EE 7 tutorial just to indicate that you want this and that column / attribute from a given entity:

EntityGraph<EmailMessage> eg = em.createEntityGraph(EmailMessage.class);
eg.addAttributeNodes("body");
...
// Note: em.find() takes a Map<String, Object> of hints
// (java.util.Properties, as shown in the tutorial, doesn't compile here)
Map<String, Object> props = new HashMap<>();
props.put("javax.persistence.fetchgraph", eg);
EmailMessage message = em.find(EmailMessage.class, id, props);

With this graph, you can now indicate to your JPA implementation that in fact you don’t really want to get just a single E-Mail message, you also want all the specified JOINs to be materialised (interestingly, the example doesn’t do that, though).

You can pass this graph specification also to a JPA Query that does a bit more complex stuff than just fetching a single tuple by ID – but then, why not just use JPA’s query language to express that explicitly? Why use a hint?

Let me ask you again, why not just specify a sophisticated query language? Let’s call that language… SQL? The above example is solved trivially as such:

SELECT body
FROM email_message
WHERE message_id = :id

That’s not too much typing, is it? You know exactly what’s going on, everyone can read this, it’s simple to debug, and you don’t need to wrestle with first and second level caches because all the caching that is really needed is buffer caching in your database (i.e. caching frequent data in database memory to prevent excessive I/O).
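And if you want the same query type safe and precompiled after all, a jOOQ rendition is hardly any longer. A sketch, assuming jOOQ generated meta data for the email_message table and ctx being a DSLContext:

String body = ctx
    .select(EMAIL_MESSAGE.BODY)
    .from(EMAIL_MESSAGE)
    .where(EMAIL_MESSAGE.MESSAGE_ID.eq(id))
    .fetchOne(EMAIL_MESSAGE.BODY);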

The cognitive overhead of getting everything right and tuning stuff in JPA is huge compared to writing just simple SQL statements (and don’t forget: you may know why you put that hint there, but your coworker may so easily overlook it!). So let me ask you: are you very sure you actually profit from JPA (i.e. you really need entity graph persistence, including caching)? Or are you wrestling with all of the above just because JPA is your default choice?

When JPA is good

JPA (and its implementations) is excellent when you have the object graph persistence problem. This means: When you do need to load a big graph, modify it in your client application, possibly in a distributed and cached and long-conversational manner, and then store the whole graph back into the database without having to wrestle with locking, caching, lost updates, and all sorts of other problems, then JPA does help you. A lot. You don’t want to do that with SQL.

Do note that the key aspect here is storing the graph back into the database. 80% of JPA’s value is in writing stuff, not reading stuff.

But frankly, you probably don’t have this problem. You’re doing mostly simple CRUD and probably complex querying. SQL is the best language for that. And Java 8 functional programming idioms help you do the mapping, easily.
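For instance, mapping query results to your own classes is just a lambda away (a sketch, again assuming generated meta data and a DSLContext ctx; Message is a hypothetical DTO with a matching constructor):

List<Message> messages = ctx
    .select(EMAIL_MESSAGE.SUBJECT, EMAIL_MESSAGE.BODY)
    .from(EMAIL_MESSAGE)
    .fetch(r -> new Message(r.value1(), r.value2()));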

Conclusion

Don’t use loadgraph and fetchgraph hints. Chances are very low that you’re really on a good track. Chances are very high that migrating off to SQL will greatly simplify your application.

jOOQ Tuesdays: Thorben Janssen Shares his Hibernate Performance Secrets


Welcome to the jOOQ Tuesdays series. In this series, we’ll publish an article on the third Tuesday every other month where we interview someone we find exciting in our industry from a jOOQ perspective. This includes people who work with SQL, Java, Open Source, and a variety of other related topics.


I’m very excited to feature Thorben Janssen today, who has spent most of his professional life with Hibernate.

Thorben, with your blog and training, you are one of the few daring “annotatioficionados” as we like to call them, who risks diving deep into JPA’s more sophisticated annotations – like @SqlResultSetMapping. What is your experience with JPA’s advanced, declarative programming style?

From my point of view, the declarative style of JPA is great and a huge problem at the same time.

If you know what you’re doing, you just add an annotation, set a few properties and your JPA implementation takes care of the rest. That makes it very easy to use complex features and avoids a lot of boilerplate code.

But it can also become a huge issue, when someone is not that familiar with JPA and just copies a few annotations from stack overflow and hopes that it works.

It will work in most of the cases. JPA and Hibernate are highly optimized and handle suboptimal code and annotations quite well. At least as long as it is tested with one user on a local machine. But that changes quickly when the code gets deployed to production and several hundred or thousand users use it in parallel. These issues then often get posted on Stack Overflow or other forums, together with a complaint about the bad performance of Hibernate…

Your training goes far beyond these rather esoteric use-cases and focuses on JPA / Hibernate performance. What are three things every ORM user should know about JPA / SQL performance?

Only three things? I could talk about a lot more things related to JPA and Hibernate performance.

By far the most important one is to remember that your ORM framework is using SQL to store your data in a relational database. That seems pretty obvious, but you can avoid the most common performance issues by analyzing and optimizing the executed SQL statements. One example is the popular N+1 select issue, which you can easily find and fix as I show in my free, 3-part video course.

Another important thing is that no framework or specification provides a good solution for every problem. JPA and Hibernate make it very easy to insert and update data in a relational database. And they provide a set of advanced features for performance optimizations, like caching or the ordering of statements to improve the efficiency of JDBC batches.

But Hibernate and JPA are not a good fit for applications that have to perform a lot of very complex queries for reporting or data mining use cases. The feature set of JPQL is too limited for these use cases. You can, of course, use native queries to execute plain SQL, but you should have a look at other frameworks if you need a lot of these queries.

So, always make sure that your preferred framework is a good fit for your project.

The third thing you should keep in mind is that you should prefer lazy fetching for the relationships between your entities. This prevents Hibernate from executing additional SQL queries to initialize the relationships to other entities when it gets an entity from the database. Most use cases don’t need the related entities, and the additional queries slow down the application. And if one of your use cases uses the relationships, you can use FETCH JOIN statements or entity graphs to initialize them with the initial query.

This approach avoids the overhead of unnecessary SQL queries for most of your use cases and allows you to initialize the relationships if you need them.

These are the 3 most important things you should keep in mind if you want to avoid performance problems with Hibernate. If you want to dive deeper into this topic, have a look at my Hibernate Performance Tuning Online Training. The next one starts on 23rd July.

What made you focus your training mostly on Hibernate, rather than also on EclipseLink / OpenJPA, or just plain SQL / jOOQ? Do you have plans to extend to those topics?

To be honest, that decision was quite easy for me. I’ve been working with Hibernate for about 15 years now and have used it in a lot of different projects with very different requirements. That gives me the experience and knowledge about the framework which you need if you want to optimize its performance. I also tried EclipseLink, but not to the same extent as Hibernate.

And I also asked my readers which JPA implementation they use, and most of them told me that they either use plain JPA or Hibernate. That made it pretty easy to focus on Hibernate.

I might integrate jOOQ into one of my future trainings. Because as I said before, Hibernate and JPA are a good solution if you want to create or update data or if your queries are not too complex. As soon as your queries get complex, you have to use native queries with plain SQL. In these cases, jOOQ can provide some nice benefits.

What’s the advantage of your online training over a more classic training format, where people meet physically – both for you and for your participants?

The good thing about a classroom training is that you can discuss your questions with other students and the instructor. But it also requires you to be in a certain place at a certain time which creates additional costs, requires you to get out of your current projects and keeps you away from home.

With the Hibernate Performance Tuning Online Training, I want to provide a similar experience to a classroom training, in which you study with other students and ask your questions, but without having to travel somewhere. You can watch my training videos and do the exercises from your office or home, and meet with me and other students in the forum or in group coaching calls to discuss your questions.

So you get the best of both worlds, without declaring any travel expenses 😉

Your blog also includes a weekly digest of all things happening in the Java ecosystem called Java Weekly. What are the biggest insights into our ecosystem that you’ve gotten out of this work, yourself?

The Java ecosystem is always changing and improving, and you need to learn constantly if you want to stay up to date. One way to do that is to read good blog posts. And there are A LOT of great, small blogs out there written by very experienced Java developers who like to share their knowledge. You just have to find them. That’s probably the biggest insight I got.

I read a lot about Java and Java EE each week (that’s probably the only advantage of a 1.5-hour commute with public transportation) and present the most interesting ones every Monday in a new issue of Java Weekly.

Using Stored Procedures With JPA, JDBC… Meh, Just Use jOOQ


The current edition of Java Magazine has an article about Big Data Best Practices for JDBC and JPA by Josh Juneau:
http://www.javamagazine.mozaicreader.com/MayJune2016

The article shows how to use a stored procedure with JDBC (notice how resources aren’t closed, unfortunately; this is commonly forgotten, even in Java Magazine articles):

// Using JDBC to call upon a database stored
// procedure
CallableStatement cs = null;
try {
    cs = conn.prepareCall("{call DUMMY_PROC(?,?)}");
    cs.setString(1, "This is a test");
    cs.registerOutParameter(2, Types.VARCHAR);
    cs.executeQuery();

    // Do something with result
    String returnStr = cs.getString(2);
} catch (SQLException ex){
    ex.printStackTrace();
}
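As an aside, here’s the same call with properly closed resources, using try-with-resources (a sketch; exception handling is left to the caller, and execute() is used since the procedure produces an OUT parameter rather than a ResultSet):

try (CallableStatement cs = conn.prepareCall("{call DUMMY_PROC(?,?)}")) {
    cs.setString(1, "This is a test");
    cs.registerOutParameter(2, Types.VARCHAR);
    cs.execute();

    // Do something with the OUT parameter
    String returnStr = cs.getString(2);
}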

And with JPA:

// Utilize JPA to call a database stored procedure
// Add @NamedStoredProcedureQuery to entity class
@NamedStoredProcedureQuery(
    name="createEmp", procedureName="CREATE_EMP",
    parameters = {
        @StoredProcedureParameter(
            mode= ParameterMode.IN,
            type=String.class,
            name="first"),
        @StoredProcedureParameter(
            mode = ParameterMode.IN,
            type=String.class,
            name="last")
    })

// Calling upon stored procedure
StoredProcedureQuery qry =
    em.createStoredProcedureQuery("createEmp");
qry.setParameter("first", "JOSH");
qry.setParameter("last","JUNEAU");
qry.execute();

Specifically the latter was also recently discussed in blog posts by Vlad Mihalcea and Thorben Janssen.

Do you like verbosity and complexity?

No? Neither do we. This is why we give you a third option instead: just use jOOQ. Here’s the equivalent jOOQ code:

// JDBC example:
String returnStr = Routines.dummyProc(
    config, "This is a test");

// JPA example
Routines.createEmp(config, "JOSH", "JUNEAU");

Yes! That’s it. Don’t waste time manually configuring your bind variables with JDBC API calls, or JPA annotations. No one likes writing annotations for stored procedures. With jOOQ and jOOQ’s code generator, procedure calls are:

  • A one-liner
  • A no-brainer
  • A way to bring back the fun to stored procedures

Learn more about using Oracle stored procedures with nested collections and object types here:
Painless Access from Java to PL/SQL Procedures with jOOQ

Correlated Subqueries are Evil and Slow. Or are They?


A common myth in SQL is the idea that correlated subqueries are evil and slow. For example, this query here:

SELECT 
  first_name, last_name,
  (SELECT count(*) 
   FROM film_actor fa 
   WHERE fa.actor_id = a.actor_id)
FROM actor a

It “forces” the database engine to run a nested loop of the form (in pseudo code):

for (Actor a : actor) {
  output(
    a.first_name,
    a.last_name,
    film_actor.where(fa -> fa.actor_id = a.actor_id)
              .size()
  );
}

So, for every actor, collect all the corresponding film_actor records and count them. This produces the number of films each actor has played in.

It appears that it would be much better to run this query in “bulk”, i.e. to run:

SELECT 
  first_name, last_name, count(*)
FROM actor a
JOIN film_actor fa USING (actor_id)
GROUP BY actor_id, first_name, last_name

But is it really faster? And if so, why would you expect that? (Note, in passing, that the two queries are only equivalent if every actor appears in film_actor: the inner join removes actors without films, whereas the correlated subquery reports a count of 0 for them. In the Sakila database, every actor has films, so both queries return 200 rows.)

Bulk aggregation vs nested loops

Bulk aggregation in this case really just means that we’re collecting all actors and all film_actors and we then merge them in memory for the group by operation. The execution plan (in Oracle) looks like this:

-------------------------------------------------------------------
| Id  | Operation              | Name                    | A-Rows |
-------------------------------------------------------------------
|   0 | SELECT STATEMENT       |                         |    200 |
|   1 |  HASH GROUP BY         |                         |    200 |
|*  2 |   HASH JOIN            |                         |   5462 |
|   3 |    TABLE ACCESS FULL   | ACTOR                   |    200 |
|   4 |    INDEX FAST FULL SCAN| IDX_FK_FILM_ACTOR_ACTOR |   5462 |
-------------------------------------------------------------------

There are 5462 rows in our film_actor table, and for each actor, we join and group and aggregate them all to get to the results. Let’s compare this to the nested loop’s plan:

-----------------------------------------------------------------------
| Id  | Operation         | Name                    | Starts | A-Rows |
-----------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                         |      1 |    200 |
|   1 |  SORT AGGREGATE   |                         |    200 |    200 |
|*  2 |   INDEX RANGE SCAN| IDX_FK_FILM_ACTOR_ACTOR |    200 |   5462 |
|   3 |  TABLE ACCESS FULL| ACTOR                   |      1 |    200 |
-----------------------------------------------------------------------

I’ve now included the “Starts” column to illustrate that, after having collected all 200 actors, we run the subquery 200 times to get the count for each actor. But this time, with an index range scan.

From the plan only, can we tell which one is faster? Bulk aggregation will need more memory (to load all the data), but has a lower algorithmic complexity (linear). Nested looping will need less memory (all the required info is available from the index directly) but seems to have a higher algorithmic complexity (quadratic).

Fact is, this isn’t exactly true.

The bulk aggregation is indeed linear, but according to O(M + N) where M = number of actors and N = number of film_actors, whereas nested looping isn’t quadratic, it’s O(M log N). We don’t need to traverse the entire index to get the count.
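A rough back-of-the-envelope illustration with our data set (ignoring constant factors, so take it with a grain of salt):

M + N       = 200 + 5462   ≈ 5662 row visits        (bulk aggregation)
M · log2(N) ≈ 200 · 12.4   ≈ 2480 index operations  (nested looping)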

At some point, the higher complexity is worse, but with this little amount of data, it’s not:

On the x-axis is the size of N and on the y-axis is the “complexity value”, e.g. how much time (or memory) is used by an algorithm.

Effects of algorithmic complexity for large N

Effects of algorithmic complexity for “small” N

Here’s a disclaimer about the above:

Algorithmic complexity for “small N” = “works on my machine”😉

There’s nothing better than proving things with measurements. Let’s run the following PL/SQL program:

SET SERVEROUTPUT ON
DECLARE
  v_ts TIMESTAMP;
  v_repeat CONSTANT NUMBER := 1000;
BEGIN
  v_ts := SYSTIMESTAMP;
  
  FOR i IN 1..v_repeat LOOP
    FOR rec IN (
      SELECT 
        first_name, last_name,
        (SELECT count(*) 
         FROM film_actor fa 
         WHERE fa.actor_id = a.actor_id)
      FROM actor a
    ) LOOP
      NULL;
    END LOOP;
  END LOOP;
  
  dbms_output.put_line('Nested select: ' || (SYSTIMESTAMP - v_ts));
  v_ts := SYSTIMESTAMP;
  
  FOR i IN 1..v_repeat LOOP
    FOR rec IN (
      SELECT 
        first_name, last_name, count(*)
      FROM actor a
      JOIN film_actor fa USING (actor_id)
      GROUP BY actor_id, first_name, last_name
    ) LOOP
      NULL;
    END LOOP;
  END LOOP;
  
  dbms_output.put_line('Group by     : ' || (SYSTIMESTAMP - v_ts));
END;
/

After three runs, and against our standard Sakila database (get it here: https://github.com/jOOQ/jOOQ/tree/master/jOOQ-examples/Sakila) with 200 actors and 5462 film_actors, we can see that the nested select consistently outperforms the bulk group by:

Nested select: +000000000 00:00:01.122000000
Group by     : +000000000 00:00:03.191000000

Nested select: +000000000 00:00:01.116000000
Group by     : +000000000 00:00:03.104000000

Nested select: +000000000 00:00:01.122000000
Group by     : +000000000 00:00:03.228000000

Helping the optimiser

Some interesting feedback by Markus Winand (author of http://sql-performance-explained.com) was given on Twitter:

There is a third option: Nesting the GROUP BY operation in a derived table:

SELECT
  first_name, last_name, c
FROM actor a
JOIN (
  SELECT actor_id, count(*) c
  FROM film_actor
  GROUP BY actor_id
) USING (actor_id)

which produces a slightly better plan than the “ordinary” group by query:

--------------------------------------------------------------------
| Id  | Operation               | Name                    | A-Rows |
--------------------------------------------------------------------
|   0 | SELECT STATEMENT        |                         |    200 |
|*  1 |  HASH JOIN              |                         |    200 |
|   2 |   TABLE ACCESS FULL     | ACTOR                   |    200 |
|   3 |   VIEW                  |                         |    200 |
|   4 |    HASH GROUP BY        |                         |    200 |
|   5 |     INDEX FAST FULL SCAN| IDX_FK_FILM_ACTOR_ACTOR |   5462 |
--------------------------------------------------------------------

Adding it to the benchmark as such:

SET SERVEROUTPUT ON
DECLARE
  v_ts TIMESTAMP;
  v_repeat CONSTANT NUMBER := 1000;
BEGIN
  v_ts := SYSTIMESTAMP;
   
  FOR i IN 1..v_repeat LOOP
    FOR rec IN (
      SELECT
        first_name, last_name,
        (SELECT count(*) 
         FROM film_actor fa 
         WHERE fa.actor_id = a.actor_id)
      FROM actor a
    ) LOOP
      NULL;
    END LOOP;
  END LOOP;
   
  dbms_output.put_line('Nested select            : ' || (SYSTIMESTAMP - v_ts));
  v_ts := SYSTIMESTAMP;
   
  FOR i IN 1..v_repeat LOOP
    FOR rec IN (
      SELECT
        first_name, last_name, count(*)
      FROM actor a
      JOIN film_actor fa USING (actor_id)
      GROUP BY actor_id, first_name, last_name
    ) LOOP
      NULL;
    END LOOP;
  END LOOP;
   
  dbms_output.put_line('Group by                 : ' || (SYSTIMESTAMP - v_ts));
  v_ts := SYSTIMESTAMP;
  
  FOR i IN 1..v_repeat LOOP
    FOR rec IN (
      SELECT 
        first_name, last_name, c
      FROM actor a
      JOIN (
        SELECT actor_id, count(*) c
        FROM film_actor
        GROUP BY actor_id
      ) USING (actor_id)
    ) LOOP
      NULL;
    END LOOP;
  END LOOP;

  dbms_output.put_line('Group by in derived table: ' || (SYSTIMESTAMP - v_ts));
END;
/

… shows that it’s already very close to the nested select option:

Nested select            : +000000000 00:00:01.371000000
Group by                 : +000000000 00:00:03.417000000
Group by in derived table: +000000000 00:00:01.592000000

Nested select            : +000000000 00:00:01.236000000
Group by                 : +000000000 00:00:03.329000000
Group by in derived table: +000000000 00:00:01.595000000

Conclusion

We have shown that under some circumstances, correlated subqueries can be better than bulk aggregation. In Oracle. With small to medium sized data sets. In other cases, that’s not true: as the sizes of M and N, our two algorithmic complexity variables, increase, O(M log N) will be much worse than O(M + N).

The conclusion here is: don’t trust any initial judgement. Measure. When you run such a query a lot of times, being 3x slower can make a big difference. But don’t go replacing all your bulk aggregations either. :)

Liked this article? The content is part of our Data Geekery SQL performance training

Cyclops-react Organises the Cambrian Explosion of Java 8 Libraries


We’re excited to announce another very interesting guest post on the jOOQ Blog by John McClean from AOL.

AOL is a global digital media and technology company. Founded in 1985 and once known as America Online, AOL is now part of the Verizon Group. AOL focuses on four areas – video, mobile, ad technology and platforms, and open ecosystems – and connects publishers with advertisers across its global, programmatic platforms, tapping into Microsoft inventory and original content brands like TechCrunch, The Huffington Post and MAKERS.

John is an Architect at AOL. He works in the ad tech and platforms group, where he leads the advertising demand side forecasting team: a team that builds and runs a system that processes billions of RTB, impression, and viewability records in real time to generate price volume curves and other forecasts for advertising campaigns in milliseconds. John is also the lead developer for the AOL open source projects cyclops-react and Microserver. Extracted from AOL’s forecasting system, these projects allow AOL to rapidly deploy new features that work at scale, by guiding Java developers along the path of functional, reactive microservices.

What is Cyclops-react?

The arrival of Lambda expressions and default methods in Java 8 heralded the biggest structural changes to the Java language in a decade. Building on top of this were some new cool APIs, such as Stream, Optional, CompletableFuture – finally Java developers could code in a more functional style. While this was very welcome, for many the enhancements did not quite go far enough.  

Stream, Optional, and CompletableFuture all share the same abstract structure and obey the same rules. Yet the APIs don’t agree on common method names, never mind provide a common interface. For example, Stream#map / Optional#map becomes CompletableFuture#thenApply. Also, the functionality added to Stream & Optional is missing from collections generally. Where is List#map?

The JDK Stream implementation performs well, is totally lazy and well designed for extension, but provides only a limited subset of potential operators (constrained, perhaps, by a focus on data parallelism). Into the void stepped libraries such as jOOλ with its sequential Stream extension (called Seq). Seq adds many additional Streaming operators. jOOλ generally adds many missing functional features such as Tuples.

A core goal of cyclops-react, as well as adding original features such as FutureStreams, is to provide a mechanism for joining up both the JDK APIs and the third party functional libraries. There was a Cambrian explosion of cool libraries that emerged after the launch of Java 8, libraries like Javaslang & Project Reactor. cyclops-react does this in the first instance by extending the JDK, and by leveraging other libraries such as jOOλ, pCollections & Agrona. These libraries in turn also extend JDK interfaces where possible to add features such as Persistent Collections and wait-free Many Producer Single Consumer Queues.

Beyond reusing and extending JDK interfaces our aims were to make it easy for developers to integrate with external libraries by making use of third party standards such as the reactive-streams API and by building our own abstractions where no set standard existed. The libraries we currently focus on integrating with are Google’s Guava, RxJava, Functional Java, Project Reactor and Javaslang. We’ve created abstractions for wrapping types like Stream, Optional & CompletableFuture – where no interface existed or was possible before. We chose these goals, because we are using cyclops-react in production across a Microservices architecture and being able to leverage the right technology for a problem and have it integrate smoothly with the rest of our code base is critical.

cyclops-react is quite a large feature rich project, and in addition has a number of integration modules. In the article below I’ll cover some of the available features with a particular goal of showing how cyclops-react helps join up the dots across the JDK and into the brave new world of the pace setting Java 8 open source community.
 

Extending the JDK

cyclops-react extends JDK APIs where possible. For example, ReactiveSeq, which adds functionality for handling errors, asynchronous processing and much more, extends both JDK Stream and jOOλ’s Seq. The cyclops-react collection extensions, rather than creating new collection implementations, implement and extend the appropriate JDK interfaces. cyclops-react’s LazyFutureStream in turn extends ReactiveSeq, and allows aggregate operations over Streams of Futures as if they were simple Streams (this proves to be very useful for handling a large number of typical Java I/O operations asynchronously and performantly).

ListX extends List, but adds operators that execute eagerly:

ListX<Integer> tenTimes = ListX.of(1,2,3,4)
                               .map(i->i*10);

cyclops-react adds lots of operators for users to explore. We can, for example, apply functions across multiple collections at the same time, as in the sketch below.
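(A sketch: I’m assuming here that ListX exposes a jOOλ-style zip operator taking a second Iterable and a combining function.)

ListX<String> labelled = ListX.of(1, 2, 3)
                              .zip(ListX.of("a", "b", "c"),
                                   (i, s) -> i + s);
//ListX["1a", "2b", "3c"]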

The reactive-streams API acts as a natural bridge between producers (publishers) of data and consumers (subscribers). All cyclops-react data types implement the Publisher interface from reactive-streams, and Subscriber implementations that can convert to any cyclops-react type are provided also. This makes direct integration with other reactive-streams based libraries, such as Project Reactor straightforward.

For example we can lazily populate a Reactor Flux from any cyclops publisher, such as SortedSetX, or populate a cyclops-react type from a Reactor type.

Flux<Integer> stream = Flux.from(
  SortedSetX.of(1,2,3,4,5,6,7,8));
//Flux[1,2,3,4,5,6,7,8]

ListX<Character> list = ListX.fromPublisher(
  Flux.just("a","b","c"));

Reactor Flux and Mono types can work directly with cyclops-react For comprehensions (each supported library also has its own set of native For comprehension classes in its integration module).

// import static com.aol.cyclops.control.For.*;
        
Publishers.each2(
  Flux.just(1,2,3), 
  i -> ReactiveSeq.range(i,5),Tuple::tuple).printOut();
        
/*
(1, 1)
(1, 2)
(1, 3)
(1, 4)
(2, 2)
(2, 3)
(2, 4)
(3, 3)
(3, 4)
*/

A For comprehension is a way of managing nested iteration over types with flatMap and map methods, by cascading calls to the appropriate methods. In cyclops-react, nested statements can access the elements of the previous statements, so For comprehensions can be a very useful way of managing the behavior of existing APIs. For example, suppose that calls to the existing methods findId and loadData may return null values, and will throw NPEs if provided with a null parameter. We can make use of a For comprehension that will safely execute loadData only when an Optional with a value is returned from findId():

List<Data> data = 
For.optional(findId())
   .optional(this::loadData);
//loadData is only called if findId() returns a value

Similarly, a type such as Try could be used to handle exceptional results from either findId or loadData; Futures can be used to execute chained methods asynchronously, and so on.

Building cross-library abstractions

Java 8 introduced Monads to Java (Stream, Optional, CompletableFuture), but didn’t provide a common interface that would help reuse; in fact, the method names used in CompletableFuture differ significantly from those used in Optional & Stream for the same functions. So map became thenApply and flatMap thenCompose. Across the Java 8 world, monads are becoming an increasingly common pattern, but there is often no way to abstract across them. In cyclops-react, rather than attempt to define an interface to represent monads, we built a set of wrapper interfaces and a number of custom adapters to adapt different instances from across the main functional-style libraries for Java 8 to those wrappers. The wrappers extend AnyM (short for Any Monad) and there are two sub-interfaces – AnyMValue, which represents any monadic type that resolves to a single value (like Optional or CompletableFuture), and AnyMSeq, which ultimately resolves to a sequence of values (like a Stream or List). The cyclops extension wrappers provide a mechanism to wrap the types from RxJava, Guava, Reactor, FunctionalJava and Javaslang.

//We can wrap any type from Reactor, RxJava,
//FunctionalJava, Javaslang, Guava
AnyMSeq<Integer> wrapped = 
  FJ.list(List.list(1,2,3,4,5));

//And manipulate it
AnyMSeq<Integer> timesTen = wrapped.map(i->i*10); 

cyclops-react provides a common set of interfaces that these wrappers (and other cyclops-react types) inherit from, allowing developers to write more generic reusable code. AnyM extends reactive-streams publishers, meaning you can make any Javaslang, Guava, FunctionalJava or RxJava type a reactive-streams publisher with cyclops-react.

AnyMSeq<Integer> wrapped = 
  Javaslang.traversable(List.of(1,2,3,4,5));

//The wrapped type is a reactive-streams publisher
Flux<Integer> fromJavaslang = Flux.from(wrapped);

wrapped.forEachWithError(
  System.out::println,
  System.out::err);
  

Furthermore the reactive functionality from cyclops-react is provided directly on the AnyM types. This means we can, for example, schedule data emission from a Javaslang or FunctionalJava Stream – or execute a reduce operation lazily, or asynchronously.


AnyMSeq<Integer> wrapped = 
  Javaslang.traversable(Stream.of(1,2,3,4,5));

CompletableFuture<Integer> asyncResult = 
  wrapped.futureOperations(Executors.newFixedThreadPool(1))
         .reduce(50, (acc, next) -> acc + next);
//CompletableFuture[65] (= 50 + 1 + 2 + 3 + 4 + 5)

AnyMSeq<Integer> wrapped = 
  FJ.list(List.list(1,2,3,4,5));

Eval<Integer> lazyResult = 
  wrapped.map(i -> i * 10)
         .lazyOperations()
         .reduce(50, (acc,next) -> acc + next);
//Eval[200] (= 50 + 10 + 20 + 30 + 40 + 50)

HotStream<Integer> emitting = wrapped.schedule(
  "0 * * * * ?", 
  Executors.newScheduledThreadPool(1));

emitting.connect()
        .debounce(1,TimeUnit.DAYS)
        .forEachWithError(
           this::logSuccess,
           this::logFailure);

There’s a lot to explore, both in cyclops-react and in the broader new Java 8 ecosystem. Hopefully you’ll have a fun adventure playing with, learning from, and extending the Java 8 boundaries yourself!