The Awesome PostgreSQL 9.4 / SQL:2003 FILTER Clause for Aggregate Functions

Sometimes when aggregating data with SQL, we’d love to add some additional filters. For instance, consider the following World Bank data:

GDP per capita (current US$)

          2009    2010    2011    2012
CA      40,764  47,465  51,791  52,409	
DE      40,270  40,408  44,355  42,598	
FR      40,488  39,448  42,578  39,759	
GB      35,455  36,573  38,927  38,649	
IT      35,724  34,673  36,988  33,814	
JP      39,473  43,118  46,204  46,548	
RU       8,616  10,710  13,324  14,091	
US      46,999  48,358  49,855  51,755	

And the table structure:

CREATE TABLE countries (
  code CHAR(2) NOT NULL,
  year INT NOT NULL,
  gdp_per_capita DECIMAL(10, 2) NOT NULL
);

Now, let’s assume we’d like to find the number of countries with a GDP per capita of at least 40,000 US$ for each year.

With standard SQL:2003, and now also with the newly released PostgreSQL 9.4, we can take advantage of the new FILTER clause, which allows us to write the following query:

SELECT
  year,
  count(*) FILTER (WHERE gdp_per_capita >= 40000)
FROM
  countries
GROUP BY
  year

The above query will now yield:

year   count
------------
2012   4
2011   5
2010   4
2009   4

And that’s not all! As always, you can use any aggregate function as a window function simply by adding an OVER() clause at the end:

SELECT
  year,
  code,
  gdp_per_capita,
  count(*) 
    FILTER (WHERE gdp_per_capita >= 40000) 
    OVER   (PARTITION BY year)
FROM
  countries

The result would then look something like this:

year   code   gdp_per_capita   count
------------------------------------
2009   CA           40764.00       4
2009   DE           40270.00       4
2009   FR           40488.00       4
2009   GB           35455.00       4

jOOQ 3.6 will also support the new FILTER clause for aggregate functions

Good news for jOOQ users. You can write the same query with jOOQ intuitively, as follows:

DSL.using(configuration)
   .select(
        COUNTRIES.YEAR,
        count().filterWhere(
            COUNTRIES.GDP_PER_CAPITA.ge(BigDecimal.valueOf(40000))
        ))
   .from(COUNTRIES)
   .groupBy(COUNTRIES.YEAR)
   .fetch();

… and

DSL.using(configuration)
   .select(
        COUNTRIES.YEAR,
        COUNTRIES.CODE,
        COUNTRIES.GDP_PER_CAPITA,
        count().filterWhere(
                   COUNTRIES.GDP_PER_CAPITA.ge(BigDecimal.valueOf(40000)))
               .over(partitionBy(COUNTRIES.YEAR)))
   .from(COUNTRIES)
   .fetch();

And the best thing is that jOOQ (as usual) emulates the above clause for you if you’re not using PostgreSQL. The equivalent query would be:

SELECT
  year,
  count(CASE WHEN gdp_per_capita >= 40000 THEN 1 END)
FROM
  countries
GROUP BY
  year

jOOQ: The best way to write SQL in Java

Read more about what’s new in PostgreSQL 9.4 here

Leaky Abstractions, or How to Bind Oracle DATE Correctly with Hibernate

We’ve recently published an article about how to bind the Oracle DATE type correctly in SQL / JDBC, and jOOQ. This article got a bit of traction on reddit with an interesting remark by Vlad Mihalcea, who frequently blogs about Hibernate, JPA, transaction management and connection pooling on his blog. Vlad pointed out that this problem can also be solved with Hibernate, and we’re going to look into this shortly.

What is the problem with Oracle DATE?

The problem presented in the previous article arises when a query filters on an Oracle DATE column:

// rental_date is of type DATE and there's an index
PreparedStatement stmt = connection.prepareStatement(
    "SELECT * " + 
    "FROM rentals " +
    "WHERE rental_date > ? AND rental_date < ?");

… and we’re using java.sql.Timestamp for our bind values:

stmt.setTimestamp(1, start);
stmt.setTimestamp(2, end);

… then the execution plan turns very bad, with a FULL TABLE SCAN or perhaps an INDEX FULL SCAN, even though we should get a regular INDEX RANGE SCAN.

-------------------------------------
| Id  | Operation          | Name   |
-------------------------------------
|   0 | SELECT STATEMENT   |        |
|*  1 |  FILTER            |        |
|*  2 |   TABLE ACCESS FULL| RENTAL |
-------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(:1<=:2)
   2 - filter((INTERNAL_FUNCTION("RENTAL_DATE")>=:1 AND 
              INTERNAL_FUNCTION("RENTAL_DATE")<=:2))

This is because the database column is widened from Oracle DATE to Oracle TIMESTAMP via this INTERNAL_FUNCTION(), rather than truncating the java.sql.Timestamp value to Oracle DATE.

More details about the problem itself can be seen in the previous article.

Preventing this INTERNAL_FUNCTION() with Hibernate

You can fix this with Hibernate’s proprietary API, using a org.hibernate.usertype.UserType.

Assuming that we have the following entity:

@Entity
public class Rental {

    @Id
    @Column(name = "rental_id")
    public Long rentalId;

    @Column(name = "rental_date")
    public Timestamp rentalDate;
}

And now, let’s run this query here (I’m using Hibernate API, not JPA, for the example):

List<Rental> rentals =
session.createQuery("from Rental r where r.rentalDate between :from and :to")
       .setParameter("from", Timestamp.valueOf("2000-01-01 00:00:00.0"))
       .setParameter("to", Timestamp.valueOf("2000-10-01 00:00:00.0"))
       .list();

The execution plan that we’re now getting is again inefficient:

-------------------------------------
| Id  | Operation          | Name   |
-------------------------------------
|   0 | SELECT STATEMENT   |        |
|*  1 |  FILTER            |        |
|*  2 |   TABLE ACCESS FULL| RENTAL |
-------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(:1<=:2)
   2 - filter((INTERNAL_FUNCTION("RENTAL0_"."RENTAL_DATE")>=:1 AND 
              INTERNAL_FUNCTION("RENTAL0_"."RENTAL_DATE")<=:2))

The solution is to add this @Type annotation to all relevant columns…

@Entity
@TypeDefs(
    value = @TypeDef(
        name = "oracle_date", 
        typeClass = OracleDate.class
    )
)
public class Rental {

    @Id
    @Column(name = "rental_id")
    public Long rentalId;

    @Column(name = "rental_date")
    @Type(type = "oracle_date")
    public Timestamp rentalDate;
}

and register the following, simplified UserType:

import java.io.Serializable;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.sql.Types;
import java.util.Objects;

import oracle.sql.DATE;

import org.hibernate.engine.spi.SessionImplementor;
import org.hibernate.usertype.UserType;

public class OracleDate implements UserType {

    @Override
    public int[] sqlTypes() {
        return new int[] { Types.TIMESTAMP };
    }

    @Override
    public Class<?> returnedClass() {
        return Timestamp.class;
    }

    @Override
    public Object nullSafeGet(
        ResultSet rs, 
        String[] names, 
        SessionImplementor session, 
        Object owner
    )
    throws SQLException {
        return rs.getTimestamp(names[0]);
    }

    @Override
    public void nullSafeSet(
        PreparedStatement st, 
        Object value, 
        int index, 
        SessionImplementor session
    )
    throws SQLException {
        // The magic is here: oracle.sql.DATE!
        if (value == null)
            st.setNull(index, Types.TIMESTAMP);
        else
            st.setObject(index, new DATE(value));

    // The other method implementations are omitted
}

This will work because using the vendor-specific oracle.sql.DATE type will have the same effect on your execution plan as explicitly casting the bind variable in your SQL statement, as shown in the previous article: CAST(? AS DATE). The execution plan is now the desired one:

------------------------------------------------------
| Id  | Operation                    | Name          |
------------------------------------------------------
|   0 | SELECT STATEMENT             |               |
|*  1 |  FILTER                      |               |
|   2 |   TABLE ACCESS BY INDEX ROWID| RENTAL        |
|*  3 |    INDEX RANGE SCAN          | IDX_RENTAL_UQ |
------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(:1<=:2)
   3 - access("RENTAL0_"."RENTAL_DATE">=:1 
          AND "RENTAL0_"."RENTAL_DATE"<=:2)

If you want to reproduce this issue, just query any Oracle DATE column with a java.sql.Timestamp bind value through JPA / Hibernate, and get the execution plan as indicated here.

Don’t forget to flush shared pools and buffer caches to force the calculation of new plans between executions, because the generated SQL is the same each time.
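On Oracle, that could look as follows from plain JDBC (a sketch; ALTER SYSTEM requires elevated privileges, so only do this on a test system):

try (Statement flush = connection.createStatement()) {
    // Invalidate cached cursors and execution plans
    flush.execute("ALTER SYSTEM FLUSH SHARED_POOL");

    // Clear the database buffer cache
    flush.execute("ALTER SYSTEM FLUSH BUFFER_CACHE");
}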

Can I do it with JPA 2.1?

At first sight, it looks like the new converter feature in JPA 2.1 (which works just like jOOQ’s converter feature) should be able to do the trick. We should be able to write:

import java.sql.Timestamp;

import javax.persistence.AttributeConverter;
import javax.persistence.Converter;

import oracle.sql.DATE;

@Converter
public class OracleDateConverter 
implements AttributeConverter<Timestamp, DATE> {

    @Override
    public DATE convertToDatabaseColumn(Timestamp attribute) {
        return attribute == null ? null : new DATE(attribute);
    }

    @Override
    public Timestamp convertToEntityAttribute(DATE dbData) {
        return dbData == null ? null : dbData.timestampValue();
    }
}

This converter can then be used with our entity:

import java.sql.Timestamp;

import javax.persistence.Column;
import javax.persistence.Convert;
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class Rental {

    @Id
    @Column(name = "rental_id")
    public Long rentalId;

    @Column(name = "rental_date")
    @Convert(converter = OracleDateConverter.class)
    public Timestamp rentalDate;
}

But unfortunately, this doesn’t work out of the box, as Hibernate 4.3.7 will think that you’re about to bind a variable of type VARBINARY:

// From org.hibernate.type.descriptor.sql.SqlTypeDescriptorRegistry

    public <X> ValueBinder<X> getBinder(JavaTypeDescriptor<X> javaTypeDescriptor) {
        if ( Serializable.class.isAssignableFrom( javaTypeDescriptor.getJavaTypeClass() ) ) {
            return VarbinaryTypeDescriptor.INSTANCE.getBinder( javaTypeDescriptor );
        }

        return new BasicBinder<X>( javaTypeDescriptor, this ) {
            @Override
            protected void doBind(PreparedStatement st, X value, int index, WrapperOptions options)
                    throws SQLException {
                st.setObject( index, value, jdbcTypeCode );
            }
        };
    }

Of course, we could probably somehow tweak this SqlTypeDescriptorRegistry to create our own “binder”, but then we’re back to Hibernate-specific API. This particular implementation is probably a “bug” on the Hibernate side, which has been registered here, for the record:

https://hibernate.atlassian.net/browse/HHH-9553

Conclusion

Abstractions are leaky on all levels, even if they are deemed a “standard” by the JCP. Standards are often a means of justifying an industry de-facto standard in hindsight (with some politics involved, of course). Let’s not forget that Hibernate didn’t start as a standard and massively revolutionised the way the standard-ish J2EE folks tended to think about persistence, 14 years ago.

In this case we have:

  • Oracle SQL, the actual implementation
  • The SQL standard, which specifies DATE quite differently from Oracle
  • ojdbc, which extends JDBC to allow for accessing Oracle features
  • JDBC, which follows the SQL standard with respect to temporal types
  • Hibernate, which offers proprietary API in order to access Oracle SQL and ojdbc features when binding variables
  • JPA, which again follows the SQL standard and JDBC with respect to temporal types
  • Your entity model

As you can see, the actual implementation (Oracle SQL) leaked up right into your own entity model, either via Hibernate’s UserType, or via JPA’s Converter. From then on, it will hopefully be shielded off from your application (until it won’t), allowing you to forget about this nasty little Oracle SQL detail.

Any way you turn it, if you want to solve real customer problems (i.e. the significant performance issue at hand), then you will need to resort to vendor-specific API from Oracle SQL, ojdbc, and Hibernate – instead of pretending that the SQL, JDBC, and JPA standards are the bottom line.

But that’s probably alright. For most projects, the resulting implementation lock-in is totally acceptable.

Are You Binding Your Oracle DATEs Correctly? I Bet You Aren’t

Oracle database has its ways. In my SQL talks at conferences, I love to confuse people with the following Oracle facts:

[slide: sql-trivia-1]

… and the answer is, of course:

[slide: sql-trivia-2]

Isn’t it horrible to make empty string the same thing as NULL? Please, Oracle…

The only actually reasonable slide to follow the previous two is this one:

[slide: sql-trivia-3]

But the DATE type is much more subtle

So you think VARCHAR2 is weird?

Well, we all know that Oracle’s DATE is not really a date as in the SQL standard, or as in all the other databases, or as in java.sql.Date. Oracle’s DATE type is really a TIMESTAMP(0), i.e. a timestamp with a fractional second precision of zero.

Most legacy databases actually use DATE precisely for that, to store timestamps with no fractional seconds, such as:

  • 1970-01-01 00:00:00
  • 2000-02-20 20:00:20
  • 1337-01-01 13:37:00

So, it’s always a safe bet to use java.sql.Timestamp types in Java, when you’re operating with Oracle DATE.

But things can go very wrong when you bind such variables via JDBC, as can be seen in this Stack Overflow question here. Let’s assume you have a range predicate like so:

// execute_at is of type DATE and there's an index
PreparedStatement stmt = connection.prepareStatement(
    "SELECT * " + 
    "FROM my_table " +
    "WHERE execute_at > ? AND execute_at < ?");

Now, naturally, we’d expect any index on execute_at to be a sensible choice for filtering out records from my_table, and that’s also what happens when we bind java.sql.Date:

stmt.setDate(1, start);
stmt.setDate(2, end);

The execution plan is optimal:

-----------------------------------------------------
| Id  | Operation                    | Name         |
-----------------------------------------------------
|   0 | SELECT STATEMENT             |              |
|*  1 |  FILTER                      |              |
|   2 |   TABLE ACCESS BY INDEX ROWID| my_table     |
|*  3 |    INDEX RANGE SCAN          | my_index     |
-----------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(:1<:2)
   3 - access("EXECUTE_AT">:1 AND "EXECUTE_AT"<:2)

But let’s check out what happens if we assume execute_at to be a date with hours/minutes/seconds, i.e. an Oracle DATE. We’ll be binding java.sql.Timestamp:

stmt.setTimestamp(1, start);
stmt.setTimestamp(2, end);

and the execution plan suddenly becomes very bad:

-----------------------------------------------------
| Id  | Operation                    | Name         |
-----------------------------------------------------
|   0 | SELECT STATEMENT             |              |
|*  1 |  FILTER                      |              |
|   2 |   TABLE ACCESS BY INDEX ROWID| my_table     |
|*  3 |    INDEX FULL SCAN           | my_index     |
-----------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(:1<:2)
   3 - filter((INTERNAL_FUNCTION("EXECUTE_AT")>:1 AND 
               INTERNAL_FUNCTION("EXECUTE_AT")<:2))

What’s this INTERNAL_FUNCTION()?

INTERNAL_FUNCTION() is Oracle’s way of silently converting values into other values in ways that are completely opaque. In fact, you cannot even place a function-based index on this pseudo-function to help the database choose a RANGE SCAN on it again. The following is not possible:

CREATE INDEX oracle_why_oh_why
  ON my_table(INTERNAL_FUNCTION(execute_at));

Nope. What the function really does is widen the less precise DATE type of the execute_at column to the more precise TIMESTAMP type of the bind variable. Just in case.

Why? Because with exclusive range boundaries (> and <), the fractional seconds in your Timestamp might make the timestamp strictly greater than the lower bound of the range, and thus included in it, where the same value with no fractional seconds (i.e. an Oracle DATE) would have been excluded.

Duh. But we don’t care, we’re only using Timestamp as a silly workaround in the first place! Now that we know this, you might think that adding a function-based index on an explicit conversion would work, but that’s not the case either:

CREATE INDEX nope
  ON my_table(CAST(execute_at AS TIMESTAMP));

Perhaps, if you magically found the exact right precision of the implicitly used TIMESTAMP(n) type it could work, but that all feels shaky, and besides, I don’t want a second index on that same column!

The solution

The solution given by user APC is actually very simple (and it sucks). Again, you could bind a java.sql.Date, but that would make you lose all hour/minute/second information. No, you have to explicitly cast the bind variable to DATE in the database. Exactly!

PreparedStatement stmt = connection.prepareStatement(
    "SELECT * " + 
    "FROM my_table " +
    "WHERE execute_at > CAST(? AS DATE) " +
    "AND execute_at < CAST(? AS DATE)");

You have to do that every time you bind a java.sql.Timestamp variable to an Oracle DATE value, at least when used in predicates.

How to implement that?

If you’re using JDBC directly, you’re pretty much doomed. Of course, you could run AWR reports to find the worst statements in production and fix only those, but chances are that you won’t be able to fix your statements so easily and deploy them so quickly, so you might want to get it right in advance. And of course, this is production. Tomorrow, another statement would suddenly pop up in your DBA’s reports.

If you’re using JPA / Hibernate, you can only hope that they got it right, because you probably won’t be able to fix those queries otherwise.

If you’re using jOOQ 3.5 or later, you can take advantage of jOOQ’s new custom type binding feature, which works out of the box with Oracle, and transparently renders that CAST(? AS DATE) for you, only on those columns that are really relevant.
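While this works out of the box with jOOQ, the rendering part of such a binding would look roughly like this with jOOQ’s Binding SPI (just a sketch; the class name is ours, and all other Binding methods are omitted):

public class OracleDateBinding 
implements Binding<Timestamp, Timestamp> {

    // Wrap the bind variable in an explicit cast when rendering SQL
    @Override
    public void sql(BindingSQLContext<Timestamp> ctx) 
    throws SQLException {
        ctx.render()
           .sql("CAST(")
           .visit(DSL.val(ctx.value()))
           .sql(" AS DATE)");
    }

    // converter(), register(), set(), get() are omitted for brevity
}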

jOOQ is the best way to write SQL in Java

Other databases

If you think that this is an Oracle issue, think again. Oracle is actually very lenient and nice to use when it comes to bind variables. Oracle can infer a lot of types for your bind variables, such that casting is almost never necessary. With other databases, that’s a different story. Read our article about RDBMS bind variable casting madness for more information.

A True SQL Gem You Didn’t Know Yet: The EVERY() Aggregate Function

We’ve just added support for the EVERY() aggregate function (#1391) to jOOQ, and we’d like to take this opportunity to let you know about this true SQL gem that can come in handy EVERY(now and then) (pun intended).

Let’s assume we have the following four books in our table (the columns being id, author_id, and title):

INSERT INTO book VALUES (1, 1, '1984');
INSERT INTO book VALUES (2, 1, 'Animal Farm');
INSERT INTO book VALUES (3, 2, 'O Alquimista');
INSERT INTO book VALUES (4, 2, 'Brida');

Now the question is:

Is EVERY() ID lower than 10?

We’ll ask:

SELECT EVERY(id < 10)
FROM book

And the answer is:

every
-----
true

Does EVERY() book for each author end with the letter ‘a’?

We’ll ask:

SELECT author_id, EVERY(title LIKE '%a')
FROM book
GROUP BY author_id

And the answer is:

author_id   every
-----------------
1           false
2           true

Wonderful!

As with all aggregate functions, we can even use them as a window function!

SELECT 
  book.*, 
  EVERY(title LIKE '%a') OVER (PARTITION BY author_id)
FROM book

Which will produce

id  author_id   title          every
------------------------------------
1   1           1984           false
2   1           Animal Farm    false
3   2           O Alquimista   true
4   2           Brida          true

Who supports EVERY()?

Well, the SQL standard has it:

10.9 <aggregate function>

<aggregate function> ::=
    COUNT <left paren> <asterisk> <right paren> [ <filter clause> ]
  | <general set function> [ <filter clause> ]
  | <binary set function> [ <filter clause> ]
  | <ordered set function> [ <filter clause> ]
  | <array aggregate function> [ <filter clause> ]

<general set function> ::=
    <set function type> <left paren> [ <set quantifier> ]
        <value expression> <right paren>

<set function type> ::=
    <computational operation>

<computational operation> ::=
    AVG
  | MAX
  | MIN
  | SUM
  | EVERY <-- yes, here! EVERY!
  | ANY
  | SOME
  | COUNT
  | STDDEV_POP
  | STDDEV_SAMP
  | VAR_SAMP
  | VAR_POP
  | COLLECT
  | FUSION
  | INTERSECTION

And, of course, PostgreSQL!

But if your database is not PostgreSQL, don’t worry. EVERY() can be emulated on EVERY() database using SUM() and CASE expressions. Here’s how to emulate the first query:

-- SELECT EVERY(id < 10)
-- FROM book

SELECT CASE SUM(CASE WHEN id < 10 THEN 0 ELSE 1 END) 
         WHEN 0 THEN 1 
         ELSE 0 
       END
FROM book;

Or as window functions:

-- SELECT 
--   book.*, 
--   EVERY(title LIKE '%a') OVER (PARTITION BY author_id)
-- FROM book

SELECT 
  book.*,
  CASE SUM(CASE WHEN title LIKE '%a' THEN 0 ELSE 1 END)
       OVER(PARTITION BY author_id)
    WHEN 0 THEN 1 
    ELSE 0
  END
FROM book;

And, as always on this blog, we’re happy to conclude that the upcoming jOOQ 3.6 will now handle EVERY(emulation) for you, so you can write:

DSL.using(configuration)
   .select(BOOK.fields())
   .select(every(BOOK.TITLE.like("%a"))
           .over(partitionBy(BOOK.AUTHOR_ID)))
   .from(BOOK)
   .fetch();
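And the grouped query from the beginning of this post might look like this (a sketch, assuming the same generated BOOK table):

DSL.using(configuration)
   .select(BOOK.AUTHOR_ID, every(BOOK.TITLE.like("%a")))
   .from(BOOK)
   .groupBy(BOOK.AUTHOR_ID)
   .fetch();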

Have fun with this new function!

jOOQ: the best way to write SQL in Java

jOOQ Tuesdays: Yalım Gerger brings Git to Oracle

We’re excited to launch a new series on our blog: the jOOQ Tuesdays. In this series, we’ll publish an article on the third Tuesday of every month in which we interview someone we find exciting in our industry from a jOOQ perspective. This includes people who work with SQL, Java, Open Source, and a variety of other related topics.

We have the pleasure of starting this series with Yalım Gerger, who will show us why he thinks that Oracle PL/SQL developers are more than ready for Git!

Hi Yalım – you’re the founder of Gerger, the company behind Formspider. What’s Formspider?

Hi. Formspider is an application development framework for Oracle PL/SQL developers. It enables PL/SQL developers to build high quality business applications using only PL/SQL as the programming language. No Java or JavaScript skills are needed to use Formspider.

Interesting, even from a Java developer’s perspective! Essentially, you’re offering a way to completely bypass Java as middleware (and of course HTML, JavaScript, CSS).

This is not entirely true. We still use Java in our product. Formspider has a middle-tier application that our customers can deploy to any JEE-compliant application server. This middle-tier application helps us bridge the Formspider JavaScript library running in the client’s browser to the PL/SQL running in the database. We also use Java libraries to generate Excel files from the data stored in an application, a common use case for business applications. So yes, the applications are not coded in Java. Our customers are PL/SQL developers. But we use Java to improve our product. Same with HTML and JavaScript. Our job is to understand these technologies and their capabilities really well and expose them as intuitive APIs to PL/SQL developers.

Do you also have customers that access their PL/SQL APIs both through Formspider as well as through their home-grown Java / .NET applications?

Yes. We have customers who have both PL/SQL teams that work with Formspider and Java teams that work with Java technologies. This requires a great deal of collaboration between the two teams, and that’s not always possible.

Usually, what happens is that Java/.NET teams try to move the application away from the database as much as possible. I was just talking to a friend who works at a large financial institution in which the OO guys are pushing hard to eliminate all the PL/SQL API’s. He was going mad. There are various reasons for this. It is partly political turf wars, partly pure ignorance about the capabilities of database software and PL/SQL.

We can feel the pain. There is no such thing as a magic bullet… So what should they do then? How should an application be architected? Do you think there is a “right” architecture?

No. I think it depends so much on the context. Are you building a consumer app or are you building a business app? Are you a company building a horizontal product or an IT department serving a business operating in a particular vertical? There are so many parameters to consider. At the risk of being too generic, I think an IT department serving a large enterprise should not try to build a database-agnostic application. That’s silly. On the contrary, it should take full advantage of the database software, and the other software it uses. You shouldn’t pretend to be seven different organizations that just happen to be collaborating, each building one of seven different layers of the application. You are just one organization. Act like it. Cut through the fat. Integrate as deeply as you can. This is the most cost-effective way to build well-performing applications on time and on budget. Database-agnostic software is for horizontal software companies.

We’ve recently blogged about the caveats of dual licensing, where we said that shipping our sources to paying customers is essential for a company that calls themselves an Open Source company like Data Geekery does. I’ve seen you ship your sources as well – but you’re not really doing “Open Source”. How would you describe your offering?

I loved that blog post by the way. I think the way jOOQ is licensed is brilliantly fair i.e. it is free if the database is free and it has a price tag if your database has a license fee. In our case, the database always has a license fee. So we don’t have a free option for Formspider. For the Oracle community and for the price tags that they are used to, our license fee is so small that it is practically free. Anyone who is thrown off by our price tag is probably not serious about using Formspider anyway.

Our customers who sign up for our highest level of support service may get the source code of our product for the duration of the professional service. This option is attractive for customers who invest a lot into the application they build with Formspider.

Yes, Oracle price tags have a reputation… Yalım, you seem to be an Oracle person. And as such, you are about to launch Gitora. What is it?

Gitora is a free version control tool that integrates the Oracle database with Git. This is a little embarrassing to bring up in a blog mainly read by Java developers, but very common version control tasks that most Java developers take for granted are very hard to do in PL/SQL. There is a good reason for this. PL/SQL has no concept of a working directory. PL/SQL is not a file-based language, i.e. source code units do not reside in the private file system of a developer but in the Oracle database, as packages, procedures and functions globally available to any developer. That makes version control very difficult, if not impossible.

So what do people do?

Nothing mostly. Daily backups are used as a way to get back to a previous state of the code if needed.

Some teams create one working directory that is hooked to version control and store all their PL/SQL code in this directory by extracting the DDLs, usually manually. That’s as sophisticated as it gets.

Proper team development and merging in PL/SQL is very difficult, and I haven’t seen it done successfully very often. And I’ve interacted with a lot of PL/SQL teams all around the world. Gitora makes this very easy. It turns the database schema into a working directory. If you execute a Git command, any change to the working directory happens automatically in your database schema.

Interesting. We’ve recently implemented a home-grown “solution” for a customer, which implements automatic version control and installation from a Microsoft Team Foundation Server repository. Maybe, we should migrate to Gitora then?

I didn’t know that. That’s so cool. If you build a version control tool which works for PL/SQL and talks to TFS instead of Git, I think that is also very valuable. Essentially, we built the same product but used different version control products. I encourage you to put it out there.

Why not. Maybe we’ll contribute!

Thanks for this very interesting insight, Yalım!

If this interview has triggered your interest, follow Yalım, Formspider, or Gitora on Twitter.

For more information about Gitora, watch the Gitora tutorial.

Really Too Bad that Java 8 Doesn’t Have Iterable.stream()

This is one of the more interesting recent Stack Overflow questions:

Why does Iterable not provide stream() and parallelStream() methods?

At first, it might seem intuitive to make it straightforward to convert an Iterable into a Stream, because the two are really more or less the same thing for 90% of all use-cases.

Granted, the expert group had a strong focus on making the Stream API parallel-capable, but anyone who works with Java every day will notice immediately that Stream is most useful in its sequential form. And an Iterable is just that: a sequential stream with no guarantees with respect to parallelisation. So, it would only be intuitive if we could simply write:

iterable.stream();

In fact, subtypes of Iterable do have such methods, e.g.

collection.stream();

Brian Goetz himself gave an answer to the above Stack Overflow question. The reasons for this omission are rooted in the fact that some Iterables might prefer to return an IntStream instead of a Stream. This really seems to be a very remote reason for a design decision, but as always, omission today doesn’t mean omission forever. On the other hand, if they had introduced Iterable.stream() today, and it turned out to be a mistake, they couldn’t have removed it again.

Well, primitive types in Java are a pain. They did all sorts of bad things to generics in the first place, and now to Stream as well, as we have to write the following in order to turn an Iterable into a Stream:

Stream s = StreamSupport.stream(iterable.spliterator(), false);

Brian Goetz argues that this is “easy”, but I would disagree. As an API consumer, I experience a lot of friction in productivity because of:

  • Having to remember this otherwise useless StreamSupport type. This method could very well have been put into the Stream interface, because we already have Stream construction methods in there, such as Stream.of().
  • Having to remember the subtle difference between Iterator and Spliterator in the context of what I believe has nothing to do with parallelisation. It may well be that Spliterators will become popular eventually, though, so this doubt is for the magic 8 ball to address.
  • Having to repeat the information that there is nothing to be parallelised, via the boolean argument false.

Parallelisation really has such a big weight in this new API, even if it will cover only around 5%-10% of all functional collection manipulation operations. While sequential processing was not the main design goal of the JDK 8 APIs, it is really the main benefit for all of us, and the friction around APIs related to sequential processing should be as low as possible.

The method above should have just been called

Stream s = Stream.stream(iterable);

It could be implemented like this:

public static<T> Stream<T> stream(Iterable<T> i) {
    return StreamSupport.stream(i.spliterator(), false);
}

Obviously, with convenience overloads that allow for additional specialisations, like parallelisation, or passing a Spliterator.
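For instance, those overloads might have looked like this (a sketch, method names assumed):

// Parallel variant
public static<T> Stream<T> parallelStream(Iterable<T> i) {
    return StreamSupport.stream(i.spliterator(), true);
}

// Variant taking a Spliterator directly
public static<T> Stream<T> stream(Spliterator<T> s) {
    return StreamSupport.stream(s, false);
}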

But again, if Iterable had its own stream() default method, an incredible number of APIs would be so much better integrated with Java 8 out of the box, without even supporting Java 8 explicitly!
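Such a default method could have been as simple as this (hypothetical, of course; it is not in the JDK):

public interface Iterable<T> {
    Iterator<T> iterator();

    // This default method has existed on Iterable since Java 8:
    default Spliterator<T> spliterator() {
        return Spliterators.spliteratorUnknownSize(iterator(), 0);
    }

    // The hypothetical missing method:
    default Stream<T> stream() {
        return StreamSupport.stream(spliterator(), false);
    }
}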

Take jOOQ for instance. jOOQ still supports Java 6, so a direct dependency is not possible. However, jOOQ’s ResultQuery type is an Iterable. This allows you to use such queries directly inline in foreach loops, as if you were writing PL/SQL:

PL/SQL

FOR book IN (
  SELECT * FROM books ORDER BY books.title
)
LOOP
  -- Do things with book
END LOOP;

Java

for (BookRecord book : 
  ctx.selectFrom(BOOKS).orderBy(BOOKS.TITLE)
) {
  // Do things with book
}

Now imagine the same thing in Java 8:

ctx.selectFrom(BOOKS).orderBy(BOOKS.TITLE)
   .stream()
   .map / reduce / findAny, etc...

Unfortunately, the above is currently not possible. You could, of course, eagerly fetch all the results into a jOOQ Result, which extends List:

ctx.selectFrom(BOOKS).orderBy(BOOKS.TITLE)
   .fetch()
   .stream()
   .map / reduce / findAny, etc...

But it’s one more method to call (every time), and the actual stream semantics are broken, because the fetch is done eagerly.

Complaining on a high level

This is, of course, complaining on a high level, but it would really be great if a future version of Java, e.g. Java 9, added this missing method to the Iterable API. Again, 99% of all use-cases will want the Stream type to be returned, not the IntStream type. And if they do want that for whatever obscure reason (much more obscure than many evil things from old legacy Java APIs, looking at you, Calendar), then why shouldn’t they just declare an intStream() method? After all, if someone is crazy enough to write Iterable<Integer> when they’re really operating on int primitive types, they’ll probably accept a little workaround.
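Such an opt-in could be a hypothetical subtype along these lines (a sketch, names assumed):

interface IntIterable extends Iterable<Integer> {

    // Opt-in primitive specialisation for int-based Iterables
    default IntStream intStream() {
        return StreamSupport.stream(spliterator(), false)
                            .mapToInt(Integer::intValue);
    }
}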

The Inconvenient Truth About Dynamic vs. Static Typing

Sometimes there are these moments of truth. They happen completely unexpectedly, such as when I read this tweet:

David is the author of the lesser-known but not at all lesser-interesting Whiley programming language, a language that has a lot of static type checking built into it. One of the most interesting features of the Whiley language is flow-sensitive typing (sometimes also simply called flow typing), which is mostly useful when used along with union types. An example from the getting started guide:

function indexOf(string str, char c) => null|int:

function split(string str, char c) => [string]:
  var idx = indexOf(str,c)

  // idx has type null|int
  if idx is int:

    // idx now has type int
    string below = str[0..idx]
    string above = str[idx..]
    return [below,above]

  else:
    // idx now has type null
    return [str] // no occurrence

Remember, other languages like Ceylon also know flow-sensitive typing, and even Java does to a certain extent, because Java has union types, too!

try {
    ...
}
catch (SQLException | IOException e) {
    if (e instanceof SQLException)
        doSomething((SQLException) e);
    else
        doSomethingElse((IOException) e);
}

Granted, Java’s flow-sensitive typing is explicit and verbose. We could expect the Java compiler to infer all the types. The following should type-check and compile just as well:

try {
    ...
}
catch (SQLException | IOException e) {
    if (e instanceof SQLException)
        // e is guaranteed to be of type SQLException
        doSomething(e);
    else
        // e is guaranteed to be of type IOException
        doSomethingElse(e);
}

Flow typing or flow sensitive typing means that the compiler can infer the only possible type from the control flow of the surrounding program. It is a relatively new concept in modern languages like Ceylon, and it makes static typing extremely powerful, especially if the language also supports sophisticated type inference via var or val keywords!

JavaScript static typing with Flow

Let’s get back to David’s Tweet and have a look at what the article said about Flow:

http://sitr.us/2014/11/21/flow-is-the-javascript-type-checker-i-have-been-waiting-for.html

The presence of a use of length with a null argument informs Flow that there should be a null check in that function. This version does type-check:

function length(x) {
  if (x) {
    return x.length;
  } else {
    return 0;
  }
}

var total = length('Hello') + length(null);

Flow is able to infer that x cannot be null inside the if body.

That’s quite cunning. A similar upcoming feature can be observed in Microsoft’s TypeScript. But Flow is different (or claims to be different) from TypeScript. The essence of Facebook Flow can be seen in this paragraph from the official Flow announcement:

Flow’s type checking is opt-in — you do not need to type check all your code at once. However, underlying the design of Flow is the assumption that most JavaScript code is implicitly statically typed; even though types may not appear anywhere in the code, they are in the developer’s mind as a way to reason about the correctness of the code. Flow infers those types automatically wherever possible, which means that it can find type errors without needing any changes to the code at all. On the other hand, some JavaScript code, especially frameworks, make heavy use of reflection that is often hard to reason about statically. For such inherently dynamic code, type checking would be too imprecise, so Flow provides a simple way to explicitly trust such code and move on. This design is validated by our huge JavaScript codebase at Facebook: Most of our code falls in the implicitly statically typed category, where developers can check their code for type errors without having to explicitly annotate that code with types.

Let this sink in

most JavaScript code is implicitly statically typed

again

JavaScript code is implicitly statically typed

Yes!

Programmers love type systems. Programmers love to reason formally about their data types and put them in narrow constraints to be sure the program is correct. That’s the whole essence of static typing: To make less mistakes because of well-designed data structures.

People also love to put their data structures in well-designed forms in databases, which is why SQL is so popular and “schema-less” databases will not gain more market share. Because in fact, it’s the same story. You still have a schema in a “schema-less” database, it’s just not type checked and thus leaves you all the burden of guaranteeing correctness.

On a side note: Obviously, some NoSQL vendors keep writing these ridiculous blog posts to desperately position their products, claiming that you really don’t need any schema at all, but it’s easy to see through that marketing gimmick. True need for schemalessness is as rare as true need for dynamic typing. In other words, when was the last time you wrote a Java program and called every method via reflection? Exactly…

But there’s one thing that statically typed languages didn’t have in the past and that dynamically typed languages did have: Means to circumvent verbosity. Because while programmers love type systems and type checking, programmers do not love typing (as in typing on the keyboard).

Verbosity is the killer. Not static typing

Consider the evolution of Java:

Java 4

List list = new ArrayList();
list.add("abc");
list.add("xyz");

// Eek. Why do I even need this Iterator?
Iterator iterator = list.iterator();
while (iterator.hasNext()) {
    // Gee, I *know* I only have strings. Why cast?
    String value = (String) iterator.next();

    // [...]
}

Java 5

// Agh, I have to declare the generic type twice!
List<String> list = new ArrayList<String>();
list.add("abc");
list.add("xyz");

// Much better, but I have to write String again?
for (String value : list) {
    // [...]
}

Java 7

// Better, but I still need to write down two
// times the "same" List type
List<String> list = new ArrayList<>();
list.add("abc");
list.add("xyz");

for (String value : list) {
    // [...]
}

Java 8

// We're now getting there, slowly
Stream.of("abc", "xyz").forEach(value -> {
    // [...]
});

On a side-note, yes, you could’ve used Arrays.asList() all along.

Java 8 is still far from perfect, but things are getting better and better. The fact that I finally do not have to declare a type anymore in a lambda argument list because it can be inferred by the compiler is something really important for productivity and adoption.

Consider the equivalent of a lambda pre-Java 8 (if we had had Streams back then):

// Yes, it's a Consumer, fine. And yes it takes Strings
Stream.of("abc", "xyz").forEach(new Consumer<String>(){
    // And yes, the method is called accept (who cares)
    // And yes, it takes Strings (I already say so!?)
    @Override
    public void accept(String value) {
        // [...]
    }
});

Now, if we’re comparing the Java 8 version with a JavaScript version:

["abc", "xyz"].forEach(function(value) {
    // [...]
});

We have almost reached as little verbosity as the functional, dynamically typed language that is JavaScript (I really wouldn’t mind those missing list and map literals in Java), with the only difference that we (and the compiler) know that value is of type String. And we know that the forEach() method exists. And we know that forEach() takes a function with one argument.

At the end of the day, things seem to boil down to this:

Dynamically typed languages like JavaScript and PHP have become popular mainly because they “just ran”. You didn’t have to learn all the “heavy” syntax that classic statically typed languages required (just think of Ada and PL/SQL!). You could just start writing your program. Programmers “knew” that the variables would contain strings, there’s no need to write it down. And that’s true, there’s no need to write everything down!

Consider Scala (or C#, Ceylon, pretty much any modern language):

val value = "abc"

What else can it be, other than a String?

val list = List("abc", "xyz")

What else can it be, other than a List[String]?

Note that you can still explicitly type your variables if you must – there are always those edge cases:

val list : List[String] = List[String]("abc", "xyz")

But most of the syntax is “opt-in” and can be inferred by the compiler.

Dynamically typed languages are dead

The conclusion of all this is that once syntactic verbosity and friction are removed from statically typed languages, there is absolutely no advantage in using a dynamically typed language. Compilers are very fast, deployment can be fast too, if you use the right tools, and the benefit of static type checking is huge. (Don’t believe it? Read this article.)

As an example, SQL is also a statically typed language where much of the friction is still created by syntax. Yet, many people believe that it is a dynamically typed language, because they access SQL through JDBC, i.e. through type-less concatenated Strings of SQL statements. If you were writing PL/SQL, Transact-SQL, or embedded SQL in Java with jOOQ, you wouldn’t think of SQL this way and you’d immediately appreciate the fact that your PL/SQL, Transact-SQL, or your Java compiler would type-check all of your SQL statements.
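For instance, reusing the BOOK table from the EVERY() post above (and assuming a generated Integer column BOOK.ID), the Java compiler type-checks the entire jOOQ statement, including the degree and the types of the projected columns (a sketch):

// Result and Record2 types are verified at compile time
Result<Record2<Integer, String>> result =
DSL.using(configuration)
   .select(BOOK.ID, BOOK.TITLE)
   .from(BOOK)
   .where(BOOK.ID.lt(10))
   .fetch();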

So, let’s abandon this mess that we’ve created because we’re too lazy to type all the types (pun intended). Happy typing!

And if you’re reading this, Java language expert group members, please do add var and val, as well as flow-sensitive typing to the Java language. We’ll love you forever for this, promised!