All Libraries Should Follow a Zero-Dependency Policy

This hilarious article with a click-bait title caught my attention recently:

A hilarious (although not so true or serious) rant about the current state of JavaScript development in the node ecosystem.

Dependency hell isn’t new

Dependency hell is a term that made it into Wikipedia, which defines it as follows:

Dependency hell is a colloquial term for the frustration of some software users who have installed software packages which have dependencies on specific versions of other software packages.

The big problem with dependency hell is the fact that small libraries pull in additional dependencies on which they rely in order to avoid too much code duplication. For instance, check out who is using Guava 18.0:

You’ll find libraries like:

  • com.fasterxml.jackson.core » jackson-databind
  • org.springframework » spring-context-support
  • org.reflections » reflections
  • org.joda » joda-convert
  • … 2000 more

Now, let’s assume you’re still using Guava 15.0 in your application. You want to upgrade Spring. Will your application still work? Is that upgrade binary compatible? Will Spring guarantee this to you, or are you on your own? Now what if Spring also uses Joda Time, which in turn also uses Guava? Does this even work? Conversely, can Guava ever depend on Joda Time or will a circular Maven Central dependency cause all singularities to collapse into one huge black hole?
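In practice, consumers work around such conflicts in their own builds, for instance by forcing a single Guava version via Maven's dependencyManagement. This is a sketch of that workaround (it pins a version; it doesn't fix the underlying compatibility question):

```xml
<!-- In the application's pom.xml: force every transitive Guava
     reference to a single version, whatever the libraries declared -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <artifactId>guava</artifactId>
      <version>15.0</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```

Whether Guava 15.0 is actually binary compatible with what Spring, Jackson, etc. were compiled against is exactly the question nobody answers for you.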

Truth is: You don’t need the dependency

… and by you, I don’t mean the end user who writes complex enterprise applications with Guava (or whatever). You need it. But YOU, dear library developer. You certainly don’t need any dependency.

An example from jOOQ. As a SQL string manipulation library, we pulled in a dependency on Apache Commons Lang because:

  • They have some nice StringUtils, which we like to use
  • Their license is also ASL 2.0, which is compatible with jOOQ’s license

But instead of hard-wiring a jOOQ 3.x to commons-lang 2.x dependency, we opted for internalising parts of one of their classes, repackaging it in jOOQ’s own namespace. Essentially, we needed things like:

  • abbreviate()
  • isEmpty()
  • isBlank()
  • leftPad() (hello node developers)

… and some more. That certainly doesn’t justify pulling in the entire dependency, does it? Because while it wouldn’t matter to us, it would matter to our thousands of users, who might prefer to use an older or newer version of commons-lang. And that’s just the beginning. What if commons-lang had transitive dependencies? Etc.
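A minimal sketch of what such internalising can look like. This is not jOOQ's actual repackaged code, just an illustration of copying only the handful of methods a library needs into its own package, so no external artifact is required:

```java
// Hypothetical internalised utility: only the methods the library
// actually uses, living in the library's own package.
final class StringUtils {

    private StringUtils() {}

    static boolean isEmpty(CharSequence cs) {
        return cs == null || cs.length() == 0;
    }

    static boolean isBlank(CharSequence cs) {
        if (isEmpty(cs))
            return true;
        for (int i = 0; i < cs.length(); i++)
            if (!Character.isWhitespace(cs.charAt(i)))
                return false;
        return true;
    }

    static String leftPad(String s, int size, char pad) {
        if (s == null || s.length() >= size)
            return s;
        StringBuilder sb = new StringBuilder(size);
        for (int i = s.length(); i < size; i++)
            sb.append(pad);
        return sb.append(s).toString();
    }
}
```

Thirty lines of boring code, versus a hard version constraint imposed on every downstream user.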

Please, library developers, avoid dependencies

So, please, dear library developers. Please avoid adding dependencies to your libraries. The only things you should depend on are:

  • The JDK
  • Some API governed by a JSR (e.g. JPA)

That’s it. We can all start writing better software and stop downloading the whole internet if YOU the library developers start being reasonable and stop being lazy.

Exceptions to the above rules:

  • Framework and platform vendors (e.g. Spring, Java EE) are excluded. They define the whole platform, i.e. they impose a set of well-documented dependencies. If you’re using a framework / platform in your application, then you have to abide by the platform’s rules

That’s all. Small libraries like jOOQ must not have any dependency.

The Java JIT Compiler is Darn Good at Optimization

“Challenge accepted” said Tagir Valeev when I recently asked the readers of the jOOQ blog to show if the Java JIT (Just-In-Time compilation) can optimise away a for loop.

Tagir is the author of StreamEx, a very useful Java 8 Stream extension library that adds additional parallelism features on top of standard streams. He’s a speaker at conferences, and has contributed a dozen patches to the OpenJDK Stream API (including bug fixes, performance optimizations and new features). He’s interested in static code analysis and works on a new Java bytecode analyzer.

I’m very happy to publish Tagir’s guest post here on the jOOQ blog.


The Java JIT Compiler

In a recent article, Lukas wondered whether the JIT could optimize code like this to remove an unnecessary iteration:

// ... than this, where we "know" the list
// only contains one value
for (Object object : Collections.singletonList("abc")) {
    // ...
}

Here’s my answer: JIT can do even better. Let’s consider this simple method which calculates total length of all the strings of supplied list:

static int testIterator(List<String> list) {
    int sum = 0;
    for (String s : list) {
        sum += s.length();
    }
    return sum;
}
As you might know this code is equivalent to the following:

static int testIterator(List<String> list) {
    int sum = 0;
    Iterator<String> it = list.iterator();
    while (it.hasNext()) {
        String s =;
        sum += s.length();
    }
    return sum;
}
Of course, in the general case, the list could be anything, so when creating an iterator and calling the hasNext and next methods, the JIT must emit honest virtual calls, which is not very fast. However, what will happen if you always supply the singletonList here? Let’s create a simple test:

import java.util.Collections;

public class Test {
    static int res = 0;

    public static void main(String[] args) {
        for (int i = 0; i < 100000; i++) {
            res += testIterator(Collections.singletonList("x"));
        }
    }
}
We are calling our testIterator in a loop, so it’s called enough times to be JIT-compiled with the C2 JIT compiler. As you might know, in the HotSpot JVM there are two JIT compilers, namely the C1 (client) compiler and the C2 (server) compiler. In 64-bit Java 8, they work together. First, the method is compiled with C1, and special instructions are added to gather some statistics (which is called profiling). Among these are type statistics. The JVM will carefully check which exact types our list variable has. And in our case, it will discover that in 100% of cases it’s a singleton list and nothing else. When the method is called quite often, it gets recompiled by the better C2 compiler, which can use this information. Thus, when C2 compiles, it can assume that in the future a singleton list will also appear quite often.

You may ask the JIT compiler to output the assembly generated for methods. To do this, you should install hsdis on your system. After that, you may use convenient tools like JITWatch, or write a JMH benchmark and use the -perfasm option. Here, we will not use third-party tools and will simply launch the JVM with the following command line options:

$ java -XX:+UnlockDiagnosticVMOptions -XX:+PrintCompilation -XX:+PrintAssembly Test >output.txt

This will generate quite huge output which may scare the children. The assembly generated by the C2 compiler for our testIterator method looks like this (on the Intel x64 platform):

  # {method} {0x0000000055120518} 
  # 'testIterator' '(Ljava/util/List;)I' in 'Test'
  # parm0:    rdx:rdx   = 'java/util/List'
  #           [sp+0x20]  (sp of caller)
  0x00000000028e7560: mov    %eax,-0x6000(%rsp)
  0x00000000028e7567: push   %rbp

  ;*synchronization entry
  ; - Test::testIterator@-1 (line 15)
  0x00000000028e7568: sub    $0x10,%rsp         
  ; implicit exception: dispatches to 0x00000000028e75bd
  0x00000000028e756c: mov    0x8(%rdx),%r10d    

  ;   {metadata('java/util/Collections$SingletonList')}
  0x00000000028e7570: cmp    $0x14d66a20,%r10d  

  ;*synchronization entry
  ; - java.util.Collections::singletonIterator@-1
  ; - java.util.Collections$SingletonList::iterator@4
  ; - Test::testIterator@3 (line 16)
  0x00000000028e7577: jne    0x00000000028e75a0 

  ;*getfield element
  ; - java.util.Collections$SingletonList::iterator@1
  ; - Test::testIterator@3 (line 16)
  0x00000000028e7579: mov    0x10(%rdx),%ebp    

  ; implicit exception: dispatches to 0x00000000028e75c9
  0x00000000028e757c: mov    0x8(%rbp),%r11d    

  ;   {metadata('java/lang/String')}
  0x00000000028e7580: cmp    $0x14d216d0,%r11d  
  0x00000000028e7587: jne    0x00000000028e75b1

  ; - Test::testIterator@24 (line 16)
  0x00000000028e7589: mov    %rbp,%r10          
  ;*getfield value
  ; - java.lang.String::length@1
  ; - Test::testIterator@30 (line 17)
  0x00000000028e758c: mov    0xc(%r10),%r10d    

  ;*synchronization entry
  ; - Test::testIterator@-1 (line 15)
  ; implicit exception: dispatches to 0x00000000028e75d5
  0x00000000028e7590: mov    0xc(%r10),%eax     
  0x00000000028e7594: add    $0x10,%rsp
  0x00000000028e7598: pop    %rbp

  # 0x0000000000130000
  0x00000000028e7599: test   %eax,-0x27b759f(%rip)        
  ;   {poll_return}                                       
  0x00000000028e759f: retq   
  ... // slow paths follow

What you can notice is that it’s surprisingly short. I’ll take the liberty to annotate what happens here:

// Standard stack frame: every method has such prolog
mov    %eax,-0x6000(%rsp)
push   %rbp
sub    $0x10,%rsp         
// Load class identificator from list argument (which is stored in rdx 
// register) like list.getClass() This also does implicit null-check: if 
// null is supplied, CPU will trigger a hardware exception. The exception
// will be caught by JVM and translated into NullPointerException
mov    0x8(%rdx),%r10d
// Compare list.getClass() with class ID of Collections$SingletonList class 
// which is constant and known to JIT
cmp    $0x14d66a20,%r10d
// If list is not singleton list, jump out to the slow path
jne    0x00000000028e75a0
// Read Collections$SingletonList.element private field into rbp register
mov    0x10(%rdx),%ebp
// Read its class identificator and check whether it's actually String
mov    0x8(%rbp),%r11d
cmp    $0x14d216d0,%r11d
// Jump out to the exceptional path if not (this will create and throw
// ClassCastException)
jne    0x00000000028e75b1
// Read private field String.value into r10 which is char[] array containing
//  String content
mov    %rbp,%r10
mov    0xc(%r10),%r10d
// Read the array length field into eax register (by default method returns
// its value via eax/rax)
mov    0xc(%r10),%eax
// Standard method epilog
add    $0x10,%rsp
pop    %rbp
// Safe-point check (so JVM can take the control if necessary, for example,
// to perform garbage collection)
test   %eax,-0x27b759f(%rip)
// Return
retq

If it’s still hard to understand, let’s rewrite it via pseudo-code:

if (list.class != Collections$SingletonList) {
  goto SLOW_PATH;
}
str = ((Collections$SingletonList)list).element;
if (str.class != String) {
  goto EXCEPTIONAL_PATH;
}
return ((String)str).value.length;

So for the hot path there is no iterator allocated and no loop, just several dereferences and two quick checks (which are always false, so the CPU branch predictor will predict them nicely). The iterator object is evaporated completely, though originally it had additional bookkeeping like tracking whether it was already called and throwing a NoSuchElementException in that case. The JIT compiler statically proved that these parts of the code are unnecessary and removed them. The sum variable is also evaporated. Nevertheless, the method is correct: if it happens that in the future it is called with something different from a singleton list, it will handle this situation on the SLOW_PATH (which is of course much longer). Other cases, like list == null or the list element not being a String, are also handled.

What will occur if your program pattern changes? Imagine that at some point you are no longer using singleton lists and pass different list implementations here. When JIT discovers that SLOW_PATH is hit too often, it will recompile the method to remove special handling of singleton list. This is different from pre-compiled applications: JIT can change your code following the behavioral changes of your program.
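The correctness guarantee is easy to convince yourself of with a quick harness: the method below is the same testIterator, and it returns the same result whether the speculative hot path (singleton list) or the slow path (any other List implementation) is taken:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class IteratorDemo {

    static int testIterator(List<String> list) {
        int sum = 0;
        for (String s : list)
            sum += s.length();
        return sum;
    }

    public static void main(String[] args) {
        // Warm up with singleton lists so the JIT may specialize for them
        for (int i = 0; i < 100_000; i++)
            testIterator(Collections.singletonList("x"));

        // Hot path candidate and slow path candidate: the JIT may
        // deoptimize for the second call, but results stay correct
        System.out.println(testIterator(Collections.singletonList("abc")));
        System.out.println(testIterator(Arrays.asList("ab", "cde")));
    }
}
```

Whether the specialized or the generic machine code runs is invisible to the program; only the performance differs.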

“What Java ORM do You Prefer, and Why?” – SQL of Course!

Catchy headline, yes. But check out this Stack Overflow question by user Mike:

(I’m duplicating it here on the blog, as it might be deleted soon)

It’s a pretty open ended question. I’ll be starting out a new project and am looking at different ORMs to integrate with database access.

Do you have any favorites? Are there any you would advise staying clear of?

And the top voted answer (164 points, by David Crawshaw) is: “Just use SQL”:

I have stopped using ORMs.

The reason is not any great flaw in the concept. Hibernate works well. Instead, I have found that queries have low overhead and I can fit lots of complex logic into large SQL queries, and shift a lot of my processing into the database.

So consider just using the JDBC package.

The second answer (66 points by user simon) is, again: “Just use SQL”:

None, because having an ORM takes too much control away with small benefits. The time savings gained are easily blown away when you have to debug abnormalities resulting from the use of the ORM. Furthermore, ORMs discourage developers from learning SQL and how relational databases work and using this for their benefit.

The third answer (51 points by myself) is saying, once more: “Use SQL” (and use it with jOOQ).

The best way to write SQL in Java

Only the fourth answer (46 points by Abdullah Jibaly) mentions Hibernate, the most popular ORM in the Java ecosystem.

The truth is, as we’ve shown numerous times on this blog: Hibernate/JPA/ORMs are good tools to get rid of boring (and complex) CRUD. But that’s just boilerplate logic with little value to your business logic. The interesting stuff – the queries, the batch and bulk processing, the analytics, the reporting – is all best done with SQL. Here are some additional articles:

Stay tuned as we’re entering an era of programming where object orientation fades, and functional / declarative programming makes data processing extremely easy and lean again.

How Functional Programming will (Finally) do Away With the GoF Patterns

A recent article about various ways to implement structural pattern matching in Java has triggered my interest.

The article mentions a Scala example where a tree data structure can be traversed very easily and neatly using Scala’s match keyword, along with using algebraic data types (more specifically, a sum type):

def depth(t: Tree): Int = t match {
  case Empty => 0
  case Leaf(n) => 1
  case Node(l, r) => 1 + max(depth(l), depth(r))
}

Even if you’re not used to the syntax, it is relatively easy to understand what it does:

  • There’s a function depth that calculates the (maximum) depth of a tree structure
  • It does so by checking if the input argument is empty, a leaf node, or any other node
  • If it is any other node, then it adds 1 to the maximum of the remaining tree, recursively

The elegant thing here is that the Scala type system helps the author of the above code get this right from a formal point of view, by offering formal type checking. The closest we can do in Java, as illustrated by the article, is this:

public static int depth(Tree t) {
  if (t instanceof Empty)
    return 0;
  if (t instanceof Leaf)
    return 1;
  if (t instanceof Node)
    return 1 + max(depth(((Node) t).left), depth(((Node) t).right));
  throw new RuntimeException("Inexhaustive pattern match on Tree.");
}

But these instanceof checks do smell kind of fishy…

For more details, read the full article; it’s highly recommended.

How does this compare to the GoF design patterns?

In our object-orientation-brainwashed Java ecosystem (which inherited the OO brainwash from C++), the above instanceof logic would most likely be refactored into an implementation using the visitor pattern from the GoF design patterns book. This refactoring would be done by The Team Architect™ himself, as they are supervising the object oriented quality of your software. The 7 lines of code using instanceof would quickly bloat up to roughly 200 lines of weird interfaces, abstract classes, and cryptic accept() and visit() methods. When in fact, the functional programming approach was so much leaner, even in its imperfect Java instanceof form!
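To make the comparison concrete, here is a deliberately compressed sketch of what the visitor-pattern version of depth() tends to look like (a hypothetical refactoring, and the real one usually spreads these types over several files):

```java
// Visitor-pattern version of depth(): an accept() per node type,
// and a visitor interface with one visit method per case.
interface Tree {
    <R> R accept(Visitor<R> visitor);
}

interface Visitor<R> {
    R visitEmpty();
    R visitLeaf(int value);
    R visitNode(Tree left, Tree right);
}

final class Empty implements Tree {
    public <R> R accept(Visitor<R> v) { return v.visitEmpty(); }
}

final class Leaf implements Tree {
    final int value;
    Leaf(int value) { this.value = value; }
    public <R> R accept(Visitor<R> v) { return v.visitLeaf(value); }
}

final class Node implements Tree {
    final Tree left, right;
    Node(Tree left, Tree right) { this.left = left; this.right = right; }
    public <R> R accept(Visitor<R> v) { return v.visitNode(left, right); }
}

class Depth implements Visitor<Integer> {
    public Integer visitEmpty() { return 0; }
    public Integer visitLeaf(int value) { return 1; }
    public Integer visitNode(Tree l, Tree r) {
        return 1 + Math.max(l.accept(this), r.accept(this));
    }
}
```

new Node(new Leaf(1), new Empty()).accept(new Depth()) yields 2 – the same result as the handful of instanceof lines, at several times the code size, and that's before The Team Architect™ adds the abstract base classes.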

A lot of the GoF design patterns stem from a time when EVERYTHING needed to be an object. Object orientation was the new holy grail, and people even wanted to push objects down into databases. Object databases were invented (luckily, they’re all dead) and the SQL standard was enhanced with ORDBMS features (only really implemented in Oracle, PostgreSQL, and Informix, and maybe some other minor DBs), most of which – also luckily – were never widely adopted.

Since Java 8, finally, we’re starting to recover from the damage that was made in early days of object orientation in the 90s, and we can move back to a more data-centric, functional, immutable programming model where data processing languages like SQL are appreciated rather than avoided, and Java will see more and more of these patterns, hopefully.

If you’re not convinced by the above visitor pattern vs pattern matching example, do read this very interesting series of articles by Mario Fusco:

You will see that with functional programming, many patterns lose their meaning as you’re just starting to pass around functions, making code very simple and easy to understand.

As a wrap-up, Mario also presented the content at Voxxed Days Ticino:

Happy functional programming!

jOOQ Tuesdays: Thorben Janssen Shares his Hibernate Performance Secrets

Welcome to the jOOQ Tuesdays series. In this series, we’ll publish an article on the third Tuesday of every other month, where we interview someone we find exciting in our industry from a jOOQ perspective. This includes people who work with SQL, Java, Open Source, and a variety of other related topics.


I’m very excited to feature today Thorben Janssen who has spent most of his professional life with Hibernate.

Thorben, with your blog and training, you are one of the few daring “annotatioficionados” as we like to call them, who risks diving deep into JPA’s more sophisticated annotations – like @SqlResultSetMapping. What is your experience with JPA’s advanced, declarative programming style?

From my point of view, the declarative style of JPA is great and a huge problem at the same time.

If you know what you’re doing, you just add an annotation, set a few properties and your JPA implementation takes care of the rest. That makes it very easy to use complex features and avoids a lot of boilerplate code.

But it can also become a huge issue when someone is not that familiar with JPA and just copies a few annotations from Stack Overflow, hoping that it works.

It will work in most of the cases. JPA and Hibernate are highly optimized and handle suboptimal code and annotations quite well. At least as long as it is tested with one user on a local machine. But that changes quickly when the code gets deployed to production and several hundred or thousand users use it in parallel. These issues then often get posted on Stack Overflow or other forums, together with a complaint about the bad performance of Hibernate…

Your training goes far beyond these rather esoteric use-cases and focuses on JPA / Hibernate performance. What are three things every ORM user should know about JPA / SQL performance?

Only three things? I could talk about a lot more things related to JPA and Hibernate performance.

By far the most important one is to remember that your ORM framework is using SQL to store your data in a relational database. That seems to be pretty obvious, but you can avoid the most common performance issues by analyzing and optimizing the executed SQL statements. One example of that is the popular n+1 select issue, which you can easily find and fix as I show in my free, 3-part video course.

Another important thing is that no framework or specification provides a good solution for every problem. JPA and Hibernate make it very easy to insert and update data into a relational database. And they provide a set of advanced features for performance optimizations, like caching or the ordering of statements to improve the efficiency of JDBC batches.

But Hibernate and JPA are not a good fit for applications that have to perform a lot of very complex queries for reporting or data mining use cases. The feature set of JPQL is too limited for these use cases. You can, of course, use native queries to execute plain SQL, but you should have a look at other frameworks if you need a lot of these queries.

So, always make sure that your preferred framework is a good fit for your project.

The third thing you should keep in mind is that you should prefer lazy fetching for the relationships between your entities. This prevents Hibernate from executing additional SQL queries to initialize the relationships to other entities when it gets an entity from the database. Most use cases don’t need the related entities, and the additional queries slow down the application. And if one of your use cases uses the relationships, you can use FETCH JOIN statements or entity graphs to initialize them with the initial query.

This approach avoids the overhead of unnecessary SQL queries for most of your use cases and allows you to initialize the relationships if you need them.

These are the 3 most important things you should keep in mind if you want to avoid performance problems with Hibernate. If you want to dive deeper into this topic, have a look at my Hibernate Performance Tuning Online Training. The next one starts on July 23rd.

What made you focus your training mostly on Hibernate, rather than also on EclipseLink / OpenJPA, or just plain SQL / jOOQ? Do you have plans to extend to those topics?

To be honest, that decision was quite easy for me. I’ve been working with Hibernate for about 15 years now and have used it in a lot of different projects with very different requirements. That gives me the experience and knowledge of the framework that you need if you want to optimize its performance. I also tried EclipseLink, but not to the same extent as Hibernate.

And I also asked my readers which JPA implementation they use, and most of them told me that they either use plain JPA or Hibernate. That made it pretty easy to focus on Hibernate.

I might integrate jOOQ into one of my future trainings. Because as I said before, Hibernate and JPA are a good solution if you want to create or update data or if your queries are not too complex. As soon as your queries get complex, you have to use native queries with plain SQL. In these cases, jOOQ can provide some nice benefits.

What’s the advantage of your online training over a more classic training format, where people meet physically – both for you and for your participants?

The good thing about a classroom training is that you can discuss your questions with other students and the instructor. But it also requires you to be in a certain place at a certain time which creates additional costs, requires you to get out of your current projects and keeps you away from home.

With the Hibernate Performance Tuning Online Training, I want to provide a similar experience to a classroom training, in which you study with other students and ask your questions, but without having to travel somewhere. You can watch my training videos and do the exercises from your office or home, and meet with me and other students in the forum or in group coaching calls to discuss your questions.

So you get the best of both worlds without declaring any travel expenses😉

Your blog also includes a weekly digest of all things happening in the Java ecosystem called Java Weekly. What are the biggest insights into our ecosystem that you’ve gotten out of this work, yourself?

The Java ecosystem is always changing and improving, and you need to learn constantly if you want to stay up to date. One way to do that is to read good blog posts. And there are A LOT of great, small blogs out there written by very experienced Java developers who like to share their knowledge. You just have to find them. That’s probably the biggest insight I got.

I read a lot about Java and Java EE each week (that’s probably the only advantage of a 1.5-hour commute with public transportation) and present the most interesting ones every Monday in a new issue of Java Weekly.

Using Stored Procedures With JPA, JDBC… Meh, Just Use jOOQ

The current edition of the Java magazine has an article about Big Data Best Practices for JDBC and JPA by Josh Juneau:

The article shows how to use a stored procedure with JDBC (notice how resources aren’t closed, unfortunately – this is commonly forgotten, even in Java Magazine articles):

// Using JDBC to call upon a database stored
// procedure
CallableStatement cs = null;
try {
    cs = conn.prepareCall("{call DUMMY_PROC(?,?)}");
    cs.setString(1, "This is a test");
    cs.registerOutParameter(2, Types.VARCHAR);
    cs.execute();

    // Do something with result
    String returnStr = cs.getString(2);
} catch (SQLException ex){
    // Handle errors
}

And with JPA:

// Utilize JPA to call a database stored procedure
// Add @NamedStoredProcedureQuery to entity class
    name="createEmp", procedureName="CREATE_EMP",
    parameters = {
            mode= ParameterMode.IN,
            type=String.class, name="first"),
            mode = ParameterMode.IN,
            type=String.class, name="last")})

// Calling upon stored procedure
StoredProcedureQuery qry =
    em.createStoredProcedureQuery("createEmp");
qry.setParameter("first", "JOSH");
qry.setParameter("last", "JUNEAU");
qry.execute();

Specifically the latter was also recently discussed in blog posts by Vlad Mihalcea and Thorben Janssen.

Do you like verbosity and complexity?

No? We neither. This is why we give you a third option instead: Just use jOOQ. Here’s the equivalent jOOQ code:

// JDBC example:
String returnStr = Routines.dummyProc(
    config, "This is a test");

// JPA example
Routines.createEmp(config, "JOSH", "JUNEAU");

Yes! That’s it. Don’t waste time manually configuring your bind variables with JDBC API calls, or JPA annotations. No one likes writing annotations for stored procedures. With jOOQ and jOOQ’s code generator, procedure calls are:

  • A one-liner
  • A no-brainer
  • A way to bring back the fun to stored procedures

Learn more about using Oracle stored procedures with nested collections and object types here:
Painless Access from Java to PL/SQL Procedures with jOOQ

Cyclops-react Organises the Cambrian Explosion of Java 8 Libraries

We’re excited to announce another very interesting guest post on the jOOQ Blog by John Mcclean from AOL.

AOL is a global digital media and technology company. Founded in 1985 and once known as America Online, AOL is now part of the Verizon Group. AOL focuses on four areas – video, mobile, ad technology and platforms, and open ecosystems. AOL connects publishers with advertisers across their global, programmatic platforms, tapping into Microsoft inventory and original content brands like TechCrunch, The Huffington Post and MAKERS.

John is an Architect at AOL. He works in the ad tech and platforms group, where he leads the advertising demand side forecasting team – a team that builds and runs a system that processes billions of RTB, impression and viewability records in real time to generate price volume curves and other forecasts for advertising campaigns in milliseconds. John is also the lead developer for the AOL open source projects cyclops-react and Microserver. Extracted from AOL’s forecasting system, these projects allow AOL to rapidly deploy new features that work at scale, by guiding Java developers along the path of functional, reactive microservices.

What is Cyclops-react?

The arrival of Lambda expressions and default methods in Java 8 heralded the biggest structural changes to the Java language in a decade. Building on top of this were some new cool APIs, such as Stream, Optional, CompletableFuture – finally Java developers could code in a more functional style. While this was very welcome, for many the enhancements did not quite go far enough.  

Stream, Optional, CompletableFuture all share the same abstract structure and obey the same rules. Yet the APIs don’t agree on common method names, never mind provide a common interface. For example Stream#map / Optional#map becomes CompletableFuture#thenApply. Also, the functionality added to Stream & Optional is missing from collections generally. Where is List#map ?
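The naming mismatch is easy to demonstrate with the JDK itself. All three of these are the same "apply a function inside the container" operation, yet only two of them are spelled map:

```java
import java.util.Optional;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Stream;

public class MapNames {

    // The same structural operation, three different names in the JDK:
    static int viaStream(int i) {
        return Stream.of(i).map(x -> x * 2).findFirst().get();
    }

    static int viaOptional(int i) {
        return Optional.of(i).map(x -> x * 2).get();
    }

    static int viaFuture(int i) {
        // "map" is spelled thenApply on CompletableFuture
        return CompletableFuture.completedFuture(i)
                                .thenApply(x -> x * 2)
                                .join();
    }

    public static void main(String[] args) {
        System.out.println(viaStream(21));   // 42
        System.out.println(viaOptional(21)); // 42
        System.out.println(viaFuture(21));   // 42
    }
}
```

Nothing in the type system expresses that these three methods are the same concept, which is exactly the gap the libraries discussed below try to fill.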

The JDK Stream implementation performs well, is totally lazy and well designed for extension, but provides only a limited subset of potential operators (constrained, perhaps, by a focus on data parallelism). Into the void stepped libraries such as jOOλ with its sequential Stream extension (called Seq). Seq adds many additional Streaming operators. jOOλ generally adds many missing functional features such as Tuples.

A core goal of cyclops-react, as well as adding original features such as FutureStreams, is to provide a mechanism for joining up both the JDK APIs and the third party functional libraries. There was a Cambrian explosion of cool libraries that emerged after the launch of Java 8. Libraries like Javaslang & Project Reactor. cyclops-react does this in the first instance by extending the JDK, and by leveraging other libraries such as  jOOλ, pCollections & Agrona. These libraries in turn also extend JDK interfaces where possible to add features such as Persistent Collections and wait free Many Producer Single Consumer Queues.

Beyond reusing and extending JDK interfaces our aims were to make it easy for developers to integrate with external libraries by making use of third party standards such as the reactive-streams API and by building our own abstractions where no set standard existed. The libraries we currently focus on integrating with are Google’s Guava, RxJava, Functional Java, Project Reactor and Javaslang. We’ve created abstractions for wrapping types like Stream, Optional & CompletableFuture – where no interface existed or was possible before. We chose these goals, because we are using cyclops-react in production across a Microservices architecture and being able to leverage the right technology for a problem and have it integrate smoothly with the rest of our code base is critical.

cyclops-react is quite a large, feature-rich project, and in addition has a number of integration modules. In the article below, I’ll cover some of the available features, with a particular goal of showing how cyclops-react helps join up the dots across the JDK and into the brave new world of the pace-setting Java 8 open source community.

Extending the JDK

cyclops-react extends JDK APIs where possible. For example, ReactiveSeq adds functionality for handling errors, asynchronous processing and much more, and extends both JDK Stream and jOOλ’s Seq. cyclops-react Collection extensions, rather than creating new collection implementations, implement and extend the appropriate JDK interfaces. cyclops-react’s LazyFutureStream in turn extends ReactiveSeq, and allows aggregate operations over Streams of Futures as if it were a simple Stream (this proves to be very useful for handling a large number of typical Java I/O operations asynchronously and performantly).

ListX extends List, but adds operators that execute eagerly:

ListX<Integer> tenTimes = ListX.of(1,2,3,4)
                               .map(i -> i * 10);

cyclops-react adds lots of operators for users to explore. We can, for example, apply functions across multiple collections at the same time

The reactive-streams API acts as a natural bridge between producers (publishers) of data and consumers (subscribers). All cyclops-react data types implement the Publisher interface from reactive-streams, and Subscriber implementations that can convert to any cyclops-react type are provided also. This makes direct integration with other reactive-streams based libraries, such as Project Reactor straightforward.

For example we can lazily populate a Reactor Flux from any cyclops publisher, such as SortedSetX, or populate a cyclops-react type from a Reactor type.

Flux<Integer> stream = Flux.from(SortedSetX.of(1,2,3,4));

ListX<Character> list = ListX.fromPublisher(Flux.just('a','b','c'));

Reactor Flux and Mono types can work directly with cyclops-react For comprehensions (each supported library also has its own set of native For comprehension classes in its integration module).

// import static*;
each2(ReactiveSeq.range(1, 4),
      i -> ReactiveSeq.range(i,5),Tuple::tuple).printOut();
(1, 1)
(1, 2)
(1, 3)
(1, 4)
(2, 2)
(2, 3)
(2, 4)
(3, 3)
(3, 4)

A For comprehension is a way of managing nested iteration over types with flatMap and map methods, by cascading calls to the appropriate methods. In cyclops-react, nested statements can access the elements of the previous statements, so For comprehensions can be a very useful way of managing the behavior of existing APIs. For example, take existing methods findId and loadData, which may return null values and will throw NPEs if provided with a null parameter. We can make use of a For comprehension that will safely execute loadData only when an Optional with a value is returned from findId():

List<Data> data = 
//loadData is only called if findId() returns a value

Similarly, a type such as Try could be used to handle exceptional results from either findId or loadData, Futures can be used to execute chained methods asynchronously and so on.
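The same null-safe chaining idea translates to plain JDK Optional as well (findId and loadData here are hypothetical stand-ins for the legacy methods discussed above):

```java
import java.util.Optional;

public class NullSafeChain {

    // Hypothetical legacy methods that may return null:
    static String findId() { return "id-42"; }
    static String loadData(String id) { return "data for " + id; }

    public static void main(String[] args) {
        // loadData is only invoked if findId() returned a non-null value;
        // a null id short-circuits the chain instead of throwing an NPE
        Optional<String> data = Optional.ofNullable(findId())
                                        .map(NullSafeChain::loadData);
        System.out.println(data.orElse("no data"));
    }
}
```

What the cyclops-react For comprehension adds over this is the ability for later clauses to see the bindings of all earlier clauses, not just the immediately preceding one.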

Building cross-library abstractions

Java 8 introduced Monads to Java (Stream, Optional, CompletableFuture), but didn’t provide a common interface that would help with reuse; in fact, the method names used in CompletableFuture differ significantly from those used in Optional & Stream for the same functions. So map became thenApply and flatMap became thenCompose. Across the Java 8 world, monads are becoming an increasingly common pattern, but there is often no way to abstract across them. In cyclops-react, rather than attempt to define an interface to represent monads, we built a set of wrapper interfaces and a number of custom adapters to adapt different instances from across the main functional-style libraries for Java 8 to those wrappers. The wrappers extend AnyM (short for Any Monad) and there are two sub-interfaces – AnyMValue, which represents any monadic type that resolves to a single value (like Optional or CompletableFuture), and AnyMSeq, which ultimately resolves to a sequence of values (like a Stream or List). The cyclops extension wrappers provide a mechanism to wrap the types from RxJava, Guava, Reactor, FunctionalJava and Javaslang.

//We can wrap any type from Reactor, RxJava,
//FunctionalJava, Javaslang, Guava
AnyMSeq<Integer> wrapped = 

//And manipulate it
AnyMSeq<Integer> timesTen = -> i * 10);

cyclops-react provides a common set of interfaces that these wrappers (and other cyclops-react types) inherit from, allowing developers to write more generic reusable code. AnyM extends reactive-streams publishers, meaning you can make any Javaslang, Guava, FunctionalJava or RxJava type a reactive-streams publisher with cyclops-react.

AnyMSeq<Integer> wrapped = 

//The wrapped type is a reactive-streams publisher
Flux<Integer> fromJavaslang = Flux.from(wrapped);


Furthermore, the reactive functionality from cyclops-react is provided directly on the AnyM types. This means we can, for example, schedule data emission from a Javaslang or FunctionalJava Stream – or execute a reduce operation lazily, or asynchronously.

AnyMSeq<Integer> wrapped = 

CompletableFuture<Integer> asyncResult = 
         .reduce(50, (acc, next) -> acc + next);

AnyMSeq<Integer> wrapped = 

Eval<Integer> lazyResult = -> i * 10)
         .reduce(50, (acc,next) -> acc + next);

HotStream<Integer> emitting = wrapped.schedule(
  "0 * * * * ?", 


There’s a lot to explore, both in cyclops-react and in the broader new Java 8 ecosystem. Hopefully you’ll have a fun adventure playing with, learning from and extending the Java 8 boundaries yourself!