What We Need is Standardised Non-OSS Licenses


If you’ve followed the recent (fake) news, you’ve probably already heard it. Oracle is “massively ramping up audits of Java customers it claims are in breach of its licences”.

After a quick check on the source (The Register), here’s a more realistic, probably more accurate version of that headline:

Oracle is thinking about auditing 1-2 companies that massively ran the commercial Java extensions in production without paying

There, fixed. Also:

But there is a deeper problem in this discussion

Of course, all sorts of (ex) Red Hat and/or Pivotal employees quickly jumped to conclusions of the kind: Hey, this wouldn’t happen with us – the good guys – the OSS guys.

For example:

That’s not surprising, of course. What’s also not surprising is that people who are already strongly opinionated will see their opinions reinforced. Another random example:

If you want more examples, just search Twitter for the article URL. There are tons of reactions.

The latter case is not very interesting. The former, however, is. Aleksey Shipilëv obviously has a good point.

use products with unambiguous licenses, like OSS

… and, of course, he’s not right at all. 🙂 There are some very ambiguous licenses in the OSS field, including many copyleft licenses. Take, for instance, LGPL 2.1, which is a very long license, and contains ridiculous things like:

If such an object file uses only numerical parameters, data structure layouts and accessors, and small macros and small inline functions (ten lines or less in length), then the use of the object file is unrestricted, regardless of whether it is legally a derivative work. (Executables containing this object code plus portions of the Library will still fall under Section 6.)

(emphasis mine). Ten lines of code. What’s a line? Everything between two \n characters? On Windows, does a line have to end in \r\n for this clause to be applicable? What if I remove formatting and have 10000-character lines? Such functions aren’t small, but they’re certainly less than ten lines. Right? RIGHT?

Hmm…

Not to mention that this single ambiguity (there are more) infects the entire rest of the license text, because it introduces unrestricted use in a rather restrictive library. Think that’s nuts? Go check Hibernate’s license. Most of it (and thus YOUR application, if you patched Hibernate) is affected.

Licensing = restricting

At the end of the day, pretty much every license will restrict rights in some way (except for the public domain “license”). The problem with commercial licenses, however, is that each one is unique, whereas OSS licenses are usually the same few (mostly some [X]GPL or ASL, MIT, BSD). In other words, OSS licenses are standardised and thus: pretty well understood. And thus: much less risky.

That’s not the case with commercial licenses. Take the jOOQ license for instance. As of the end of 2016, it’s 23 pages strong (including the annex containing pricing). What does the license mean to our customers? Here’s a TL;DR version (obviously, if in doubt: the actual license will apply, not this TL;DR version):

  • Developer workstations need a time-limited or perpetual license
  • All server workstations are licensed for free, perpetually
  • Object code may be distributed and sublicensed
  • Source code may be used (e.g. for maintenance), but not distributed

And, of course, there are different price plans, but those aren’t really part of the license. So, jOOQ feels like Open Source: source code is shipped, may be used for documentation purposes, may be patched, recompiled, but not distributed, i.e. it isn’t free as in freedom (of course not, it would be the end of our business).

But what does it mean that the source code may be used? The license explicitly allows “modification”, but what does that mean? Are you also allowed to document such modification, just not ship it? E.g. in a public GitHub issue? Such that other users who are affected may profit from your fix?

If in doubt, the best way forward is to ask the vendor. In our case, we’re very open-minded and quick to answer – and also quick to improve the license when it is not clear.

In Oracle’s case, a bit less so. Of course: Oracle is a huge company – whom are you even going to ask? Who will take the time to answer an individual question? It’s simply not possible.

The solution: Standardised commercial licenses

There aren’t too many business models with software. First off, there are a few different categories of software, e.g.:

  • SaaS: This is still the wild west. But essentially, you don’t license the software, you rent an access point.
  • Servers: Databases, programming environments, operating systems, they all fall into this category. These are systems that run your software (and/or data).
  • Libraries: Things like jOOQ, Hibernate. These are programs that are embedded in other programs (e.g. SaaS or Servers)
  • Tools: Things like IntelliJ, JRebel. These are programs to create and manipulate data, but they aren’t needed to run it. They can be easily removed.

Each category works entirely differently. For instance, copyleft doesn’t really affect SaaS and tools categories (unless you want to protect your trade secrets, of course), whereas it’s a killer for libraries.

SaaS, libraries and tools are usually licensed per seat, whereas servers are usually licensed per core – i.e. whatever scales better for both the vendor and the customer.

This is an extremely simplified overview of commercial licensing. But imagine: what if all vendors in each of the above categories could just pick a couple of yes/no answers to a standardised set of questions (e.g. what may be distributed? what may be modified? what may be run?), using only well-understood standard wording for these concepts? Everything would be much clearer.

Back to the original Oracle auditing story

In the linked article, Oracle allegedly starts auditing Java users. The OracleJDK obviously isn’t “free” (as in freedom), but it is partially “free” (as in beer), because there are a variety of use-cases where you don’t pay. However, there are some features that are “commercial” (i.e. non-free-as-in-beer), such as JMC and the Flight Recorder.

The interesting thing is that both of these features (and some others) ship with the “free” (as in beer) OracleJDK, but they’re part of the “COMMERCIAL FEATURES” (legal yelling) and those features must even be documented in YOUR LICENSE using this notice, such that YOUR end users may also not use them for free:

Use of the Commercial Features for any commercial or production purpose requires a separate license from Oracle. “Commercial Features” means those features identified in Table 1-1 (Commercial Features In Java SE Product Editions) of the Java SE documentation accessible at http://www.oracle.com/technetwork/java/javase/documentation/index.html

(Did you know that? If you’re using OracleJDK in your application, you have to embed the above in your own EULA).

But do note, outside of these cryptic licenses, I’ve found several references to

Java Mission Control is available free of charge for development

E.g. here: http://download.oracle.com/technology/products/missioncontrol/updatesites/base/5.2.0/eclipse

Of course, this has absolutely no legal value; it might have been true at some point but may now be outdated. But that’s how I remember it. I can use Java Mission Control for free for development (not for productive use). Now, we’re back to this discussion. What’s productive use?

  • Can I profile a simple test program for free? Probably yes.
  • Can I profile my entire program (e.g. jOOQ) for free? Probably yes.
  • Can I run the profiler in a CI environment to detect regressions for free? Hmmm.

And how is that understanding of “free” encoded in the actual license?

Standardised wording

Oracle has a long tradition of giving away software free-as-in-beer to developers. Back in the day (before OSS, when there were only Oracle and IBM), that was a cunning move, because the money is not in development. It’s in operations. So, if developers get top-notch software for free, they become evangelists. They’ll love the products, and convince the end users.

But again. Who are developers? When do they stop developing and start operating? When they test? When they ship?

We’ll never know for sure – as every vendor writes their own, unique license.

What we need is a standardised set of well-understood commercial licenses, just like the OSS folks have their standardised set of well-understood OSS licenses. For our industry as a whole, this would be of immense value, because the little fish like ourselves could compete much better with the big ones, without having to give away all of our IP for free under the terms of an OSS license. Our customers would no longer run into any legal issues. All risks from weird license texts would be removed.

And hopefully, this would put pressure on the big ones. And prevent articles like the one from the Register.

Do You Really Have to Name Everything in Software?


This is one of software engineering’s oldest battles. No, I’m not talking about where to put curly braces, or whether to use tabs or spaces. I mean the eternal battle between nominal typing and structural typing.

This article is inspired by a very vocal blogger who eloquently reminds us to …

[…] Please Avoid Functional Vomit

Read the full article here:
https://dzone.com/articles/using-java-8-please-avoid-functional-vomit

What’s the post really about?

It is about naming things. As we all know:

There are only two hard things in Computer Science: cache invalidation and naming things.

— Phil Karlton

Now, for some reason, there is a group of people who want constant pain and suffering by explicitly naming everything, including rather abstract concepts and algorithmic components, such as compound predicates. Those people like nominal typing and all the features that are derived from it. What is nominal typing (as opposed to structural typing)?

Structural typing

SQL is a good example to study the two worlds. When you write SQL statements, you’re creating structural row types all the time. For instance, when you write:

SELECT first_name, last_name
FROM customer

… what you’re really doing is you’re creating a new rowtype of the structure (in pseudo-SQL):

TYPE (
  first_name VARCHAR,
  last_name VARCHAR
)

The type has the following properties:

  • It is a tuple or record (as always in SQL)
  • It contains two attributes or columns
  • Those two attributes / columns are called first_name and last_name
  • Their type is VARCHAR

This is a structural type, because the SQL statement that produces the type only declares the type’s structure implicitly, by producing a set of column expressions.

In Java, we know lambda expressions, which are (incomplete) structural types, such as:

// A type that can check for i to be even
i -> i % 2 == 0

Nominal typing

Nominal typing takes things one step further. In SQL, nominal typing is perfectly possible as well, for instance, in the above statement, we selected from a well-known table by name customer. Nominal typing assigns a name to a structural type (and possibly stores the type somewhere, for reuse).

If we want to name our (first_name, last_name) type, we could do things like:

-- By using a derived table:
SELECT *
FROM (
  SELECT first_name, last_name
  FROM customer
) AS people

-- By using a common table expression:
WITH people AS (
  SELECT first_name, last_name
  FROM customer
)
SELECT *
FROM people

-- By using a view
CREATE VIEW people AS
SELECT first_name, last_name
FROM customer

In all cases, we’ve assigned the name people to the structural type (first_name, last_name). The only difference being the scope for which the name (and the corresponding content) is defined.

In Java, we can use lambda expressions only once we assign them to a typed name, either through an assignment, or by passing the expression to a method that takes an argument of a named type:

// Naming the lambda expression itself
Predicate<Integer> p = i -> i % 2 == 0;

// Passing the lambda expression to a method
Stream.of(1, 2, 3)
      .filter(i -> i % 2 == 0);

Back to the article

The article claims that giving a name to things is always better. For instance, the author proposes giving a name to what we would commonly refer to as a “predicate”:

//original, less clear code
if(barrier.value() > LIMIT && barrier.value() > 0){
//extracted out to helper function. More code, more clear
if(barrierHasPositiveLimitBreach()){

So, the author thinks that extracting a rather trivial predicate into an external function is better because a future reader of such code will better understand what’s going on. At least in the article’s opinion. Let’s refute this claim for the sake of the argument:

  • The proposed name is verbose and requires quite some thinking.
  • What does breach mean?
  • Is breach the same as >= or the same as >?
  • Is LIMIT a constant? From where?
  • Where is barrier? Who owns it?
  • What does the verb “has” mean, here? Does it depend on something outside of barrier? E.g. some shared state?
  • What happens if there’s a negative limit?

By naming the predicate (remember, naming things is hard), the OP has added several layers of cognitive complexity for the reader, while quite possibly introducing subtle bugs, because probably both LIMIT and barrier should be function arguments, rather than global (im)mutable state that the function assumes to be there.

The name introduced several concepts (“to have a breach”, “positive limit”, “breach”) that are not well defined and need some deciphering. How do we decipher it? Probably by looking inside the function and reading the actual code. So what do we gain? Better reuse, perhaps? But is this really reusable?
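
If the predicate really must be extracted, a more honest version would at least make those dependencies explicit. Here’s a quick sketch (the names are mine, not the original article’s):

// Hypothetical refactoring: all inputs are explicit parameters, so the
// predicate no longer depends on hidden state owned by someone else
static boolean exceedsPositiveLimit(double barrier, double limit) {
    return barrier > limit && barrier > 0;
}

// Call site: what flows into the check is now visible where it happens
if (exceedsPositiveLimit(barrier.value(), LIMIT)) {
    // ...
}

At least a reader can now reason about the function in isolation – though the naming problem remains.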

Finally, there is a (very slight) risk of introducing a performance penalty through the additional indirection. If we translate this to SQL, we could have written a stored function and then queried:

SELECT *
FROM orders -- Just an assumption here
WHERE barrier_has_positive_limit_breach(orders.barrier)

If this was some really complicated business logic depending on a huge number of things, perhaps extracting the function might’ve been worthwhile. But in this particular case, is it really better than:

SELECT *
FROM orders
WHERE barrier > :limit AND barrier > 0

or even

SELECT *
FROM orders
WHERE barrier > GREATEST(:limit, 0)

Conclusion

There are some people in our industry who constantly want to see the world in black and white. As soon as they’ve had one small success story (e.g. reusing a very common predicate 4-5 times by extracting it into a function), they conclude with a general rule that this approach is always superior.

They struggle with the notion of “it depends”. Nominal typing and structural typing are both very interesting concepts. Structural typing is extremely powerful, whereas nominal typing helps us humans keep track of complexity. In SQL, we’ve always liked to structure our huge SQL statements, e.g. in nameable views. Likewise, Java programmers structure their code in nameable classes and methods.

But it should be immediately clear to anyone reading the linked article that the author seems to like hyperboles and probably wasn’t really serious, given the silly example he came up with. The message he’s conveying is wrong, because it claims that naming things is always better. It’s not true.

Be pragmatic. Name things where it really helps. Don’t name things where it doesn’t. Or as Leon Bambrick amended Phil Karlton’s quote:

There are only two hard things in Computer Science: cache invalidation, naming things, and off-by-one errors

Here’s my advice to you, dear nominal-typing-loving blogger. There are only two ways of typing: nominal typing and structural typing. And it depends typing.

SQL, Streams, For Comprehension… It’s All the Same


Recently, at Devoxx, I’ve seen this beautiful slide in a talk by Kevlin Henney.

In his talk, he was displaying a variety of approaches to solve the FizzBuzz “problem”, including a couple of very elegant solutions in completely declarative approaches and languages.

In this particular slide, Kevlin used a notation that is derived from maths. The set builder notation. Here’s an example from Wikipedia:

{ n ∈ ℤ | ∃ k ∈ ℤ : n = 2k }

The example reads: For all n in ℤ (the set of all integer numbers), take those for which there exists (∃) another integer k, for which the following equation is satisfied: n = 2k.

Or in plain English: All even integers. (because for even integers, there exists another integer that is half the even integer)

Beautiful, eh? In imperative programming, we’d probably do something like this instead:

List<Integer> even = new ArrayList<>();
for (int i = /* hmm...? */; i < /* what to put here */; i++)
    even.add(i * 2);

Or this:

List<Integer> even = new ArrayList<>();
for (int i = /* hmm...? */; i < /* what to put here */; i = i + 2)
    even.add(i);

But there are several problems with the imperative approach:

  • We have to realistically start somewhere
  • We have to realistically end somewhere
  • We have to store all values in an intermediate collection

Sure, those aren’t severe limitations in everyday use-cases, because we’re probably solving a real-world problem where we don’t actually need an infinite number of even integers, and storing them in an intermediate collection doesn’t consume all of our memory. But still, the declarative, mathematical approach is much leaner: we can answer those questions about where to start and where to end later, and we never need to materialise any intermediate collection before we make those final decisions.

For instance, we can declare X to be that set, and then declare Y to be a set that is derived from X, and finally materialise Z, which is a very tiny set derived from Y. For this, we may have never needed to materialise all the (even) integers.
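
In Java 8 Streams terms, that lazy composition might look like this (a sketch of the idea, with X, Y, Z as in the paragraph above):

// X: a lazy, conceptually unbounded stream of even integers
Stream<Integer> x = Stream.iterate(0, i -> i + 2);

// Y: derived from X – still, nothing has been materialised
Stream<Integer> y = x.filter(i -> i > 10);

// Z: only now do we materialise a very tiny set derived from Y
List<Integer> z = y.limit(3).collect(Collectors.toList()); // [12, 14, 16]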

How this compares to SQL

Kevlin made a cunning comparison. Of course, all functional programming aficionados will immediately recognise that languages like Scala have something called a “for comprehension”, which models precisely the mathematical set-builder notation.

Java 8 now has the Streams API, which allows us, to some extent, to model something similar (although not as powerful). But Kevlin didn’t use those “modern” languages. He used SQL as a comparison. That “arcane” declarative programming language that has been around forever, and that we love so much. Yes, here’s how we can declare all the even numbers in SQL:

SELECT n
FROM integers
WHERE EXISTS (
  SELECT k
  FROM integers
  WHERE n = 2 * k
)

If optimisers were perfect, this semi-self-join between the two references of the integers “table” could be optimised perfectly. In most databases, we’d probably manually transform the above notation to this equivalent one:

SELECT n
FROM integers
WHERE MOD(n, 2) = 0

Yes, indeed. The set-builder notation and the SQL language are very similar beasts. The former prefers using mathematical symbols for brevity and conciseness, the latter prefers using English words to connect the different operators, but it’s the same thing. And if you squint hard enough, you’ll see that Java 8 Streams, for instance, are also pretty much the same thing:
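
For instance, the MOD-based variant of the above query might translate to this pipeline (my sketch; the source is bounded only so that the program terminates):

IntStream.rangeClosed(0, 100)           // FROM integers
         .filter(n -> n % 2 == 0)       // WHERE MOD(n, 2) = 0
         .forEach(System.out::println); // "SELECT" n, consuming the result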


I’ve blogged about this recently where all the Java 8 Streams operations are compared to their SQL clause counterparts:
https://blog.jooq.org/2015/08/13/common-sql-clauses-and-their-equivalents-in-java-8-streams

How is this better?

It’s simple. Both the set-builder notation and the SQL language (and, in principle, other languages’ for comprehensions) are declarative. They are expressions, which can be composed into other, more complex expressions, without necessarily executing them.

Remember the imperative approach? We tell the machine exactly what to do:

  • Start counting from this particular minimal integer value
  • Stop counting at this particular maximal integer value
  • Store all even integers in between in this particular intermediate collection

What if we don’t actually need negative integers? What if we just wanted to have a utility that calculates even integers and then reuse that to list all positive integers? Or, all positive integers less than 100? Etc.

In the imperative approach, we have to refactor constantly, to avoid the overhead of

  • Producing too many integers
  • Storing too many integers (or storing them at all)

In truly declarative languages like SQL, we’re just describing “even integers” with an expression, possibly assigning the expression a name:

CREATE VIEW even_integers AS
SELECT n
FROM integers
WHERE EXISTS (
  SELECT k
  FROM integers
  WHERE n = 2 * k
)

So, when we actually use and materialise the even integers, e.g. positive integers less than 100, the optimiser can optimise away the double access to the integer table and produce only the exact number of values that we’re requesting (without materialising them in intermediate collections):

SELECT n
FROM even_integers
WHERE n BETWEEN 0 AND 100

Conclusion

Thinking in terms of sets, in terms of declaring sets, has always been our dream as software engineers. The approach is extremely compelling and elegant. We can delegate a lot of boring algorithmic work to the implementation engine of the declarative programming language. In the case of SQL, it would be a SQL database optimiser, which figures out a great lot of optimisations that we might not have thought of.

The above example is trivial. We can perfectly live in a world where we manually iterate over a local integer variable that goes from 0 to 100:

for (int i = 0; i <= 100; i++)
  doSomething(i);

But stuff gets hairy quite quickly. Compare the two versions of the same algorithm in Mario Fusco’s famous tweet:

This also applies to SQL, and what’s even better in SQL than with Streams: The SQL statement is a declarative expression tree, not a formally ordered set of stream pipeline operations. The optimiser can freely reorder / transform the expression tree into something that it thinks is more optimal. This isn’t just a promise. This works in modern SQL databases every day, for very complex queries, which you can write in a matter of seconds, rather than hours.

Stay tuned for a short series of blog posts on the jOOQ blog illustrating what modern cost-based optimisation can do for you, when you’re using the SQL language.

A Beginner’s Guide to the True Order of SQL Operations


The SQL language is very intuitive. Until it isn’t.

Over the years, a lot of people have criticised the SQL language for a variety of reasons. For instance: IDEs cannot easily guess what auto completion options to offer, because as long as you don’t specify the FROM clause, there are no tables in scope (yet):

-- Don't you wish this would be completed to first_name?
SELECT first_na...

-- Aaah, now it works:
SELECT first_na...
FROM customer

These things are weird, because the lexical order of operations does not match the logical order of operations. We humans may sometimes (often) intuitively understand this ordering difference. E.g. we know that we’re about to select from the customer table. But the IDE doesn’t know this.

GROUP BY contributes the most confusion

When a junior developer / SQL beginner starts working with SQL, quite quickly, they will find out about aggregation and GROUP BY. And they’ll quickly write things like:

SELECT count(*)
FROM customer

Yay, we have 200 customers!

And then:

SELECT count(*)
FROM customer
WHERE first_name = 'Steve'

Wow, 90 of them are called Steve! Interesting. Let’s find out how many we have per name…

SELECT first_name, count(*)
FROM customer
GROUP BY first_name

Ahaa!

FIRST_NAME   COUNT
------------------
Steve        90
Jane         80
Joe          20
Janet        10

Very nice. But are they all the same? Let’s check out the last name, too

SELECT first_name, last_name, count(*)
FROM customer
GROUP BY first_name

Oops!

ORA-00979: not a GROUP BY expression

Jeez, what does it mean? (Note: unfortunately, MySQL users who do not use strict mode will still get a result here, with arbitrary last names! So, a new MySQL user won’t understand their mistake.)

How do you easily explain this to a SQL newbie? It seems obvious to “pros”, but is it really obvious? Is it obvious enough that you can explain it easily to a junior? Think about it. Why are each one of these statements semantically correct or wrong?

-- Wrong
SELECT first_name, count(*)
FROM customer
WHERE count(*) > 1
GROUP BY first_name

-- Correct
SELECT first_name, count(*)
FROM customer
GROUP BY first_name
HAVING count(*) > 1

-- Correct
SELECT first_name, count(*)
FROM customer
GROUP BY first_name
ORDER BY count(*) DESC

-- Wrong
SELECT first_name, last_name, count(*)
FROM customer
GROUP BY first_name

-- Correct
SELECT first_name, MAX(last_name), count(*)
FROM customer
GROUP BY first_name

-- Wrong
SELECT first_name || ' ' || last_name, count(*)
FROM customer
GROUP BY first_name

-- Correct
SELECT first_name || ' ' || MAX(last_name), count(*)
FROM customer
GROUP BY first_name

-- Correct
SELECT MAX(first_name || ' ' || last_name), count(*)
FROM customer
GROUP BY first_name

The problem is syntax related

The SQL syntax works in a similar way to the English language. It is a command. We start commands with verbs. The verb is SELECT (or INSERT, UPDATE, DELETE, CREATE, DROP, etc. etc.)

Unfortunately, human language is incredibly ill-suited for the much more formal world of programming. While it offers some consolation to new users (possibly non-programmers) who are absolute beginners, it just makes stuff hard for everyone else. All the different SQL clauses have extremely complex interdependencies. For instance:

  • In the presence of a GROUP BY clause, only expressions built from GROUP BY expressions (or functional dependencies thereof), or aggregate functions can be used in HAVING, SELECT, and ORDER BY clauses.
  • For simplicity reasons, let’s not even talk about GROUPING SETS
  • In fact, there are even a few cases in which GROUP BY is implied. E.g. if you write a “naked” HAVING clause
  • A single aggregate function in the SELECT clause (in the absence of GROUP BY) will force aggregation into a single row
  • In fact, this can also be implied by putting that aggregate function in ORDER BY (for whatever reason)
  • You can ORDER BY quite a few expressions that reference any columns from the FROM clause without SELECTing them. But that’s no longer true if you write SELECT DISTINCT

The list is endless. If you’re interested, you can read the SQL standard documents and check out how many weird and complicated inter-dependencies there exist between the many clauses of the SELECT statement.

Can this ever be understood?

Luckily, yes! There’s a simple trick, which I’m always explaining to the delegates that visit my SQL Masterclass. The lexical (syntactical) order of SQL operations (clauses) does not correspond at all to the logical order of operations (although, sometimes, they do coincide). Thanks to modern optimisers, the order also doesn’t correspond to the actual order of operations, so we really have: syntactical -> logical -> actual order, but let’s leave that aside for now.

The logical order of operations is the following (for “simplicity” I’m leaving out vendor specific things like CONNECT BY, MODEL, MATCH_RECOGNIZE, PIVOT, UNPIVOT and all the others):

  • FROM: This is actually the first thing that happens, logically. Before anything else, we’re loading all the rows from all the tables and join them. Before you scream and get mad: Again, this is what happens first logically, not actually. The optimiser will very probably not do this operation first, that would be silly, but access some index based on the WHERE clause. But again, logically, this happens first. Also: all the JOIN clauses are actually part of this FROM clause. JOIN is an operator in relational algebra. Just like + and - are operators in arithmetic. It is not an independent clause, like SELECT or FROM
  • WHERE: Once we have loaded all the rows from the tables above, we can now throw them away again using WHERE
  • GROUP BY: If you want, you can take the rows that remain after WHERE and put them in groups or buckets, where each group contains the same value for the GROUP BY expression (and all the other rows are put in a list for that group). In Java, you would get something like: Map<String, List<Row>> (see the Java sketch after this list). If you do specify a GROUP BY clause, then your actual rows contain only the group columns, no longer the remaining columns, which are now in that list. Those columns in the list are only visible to aggregate functions that can operate upon that list. See below.
  • aggregations: This is important to understand. No matter where you put your aggregate function syntactically (i.e. in the SELECT clause, or in the ORDER BY clause), this here is the step where aggregate functions are calculated. Right after GROUP BY. (remember: logically. Clever databases may have calculated them before, actually). This explains why you cannot put an aggregate function in the WHERE clause, because its value cannot be accessed yet. The WHERE clause logically happens before the aggregation step. Aggregate functions can access columns that you have put in “this list” for each group, above. After aggregation, “this list” will disappear and no longer be available. If you don’t have a GROUP BY clause, there will just be one big group without any key, containing all the rows.
  • HAVING: … but now you can access aggregation function values. For instance, you can check that count(*) > 1 in the HAVING clause. Because HAVING is after GROUP BY (or implies GROUP BY), we can no longer access columns or expressions that were not GROUP BY columns.
  • WINDOW: If you’re using the awesome window function feature, this is the step where they’re all calculated. Only now. And the cool thing is, because we have already calculated (logically!) all the aggregate functions, we can nest aggregate functions in window functions. It’s thus perfectly fine to write things like sum(count(*)) OVER () or row_number() OVER (ORDER BY count(*)). Window functions being logically calculated only now also explains why you can put them only in the SELECT or ORDER BY clauses. They’re not available to the WHERE clause, which happened before. Note that PostgreSQL and Sybase SQL Anywhere have an actual WINDOW clause!
  • SELECT: Finally. We can now use all the rows that are produced from the above clauses and create new rows / tuples from them using SELECT. We can access all the window functions that we’ve calculated, all the aggregate functions that we’ve calculated, all the grouping columns that we’ve specified, or if we didn’t group/aggregate, we can use all the columns from our FROM clause. Remember: Even if it looks like we’re aggregating stuff inside of SELECT, this has happened long ago, and the sweet sweet count(*) function is nothing more than a reference to the result.
  • DISTINCT: Yes! DISTINCT happens after SELECT, even if it is put before your SELECT column list, syntax-wise. But think about it. It makes perfect sense. How else can we remove duplicate rows, if we don’t know all the rows (and their columns) yet?
  • UNION, INTERSECT, EXCEPT: This is a no-brainer. A UNION is an operator that connects two subqueries. Everything we’ve talked about thus far was a subquery. The output of a union is a new query containing the same row types (i.e. same columns) as the first subquery. Usually. Because in wacko Oracle, the penultimate subquery is the right one to define the column name. Oracle database, the syntactic troll 😉
  • ORDER BY: It makes total sense to postpone the decision of ordering a result until the end, because all other operations might use hashmaps, internally, so any intermediate order might be lost again. So we can now order the result. Normally, you can access a lot of columns from the ORDER BY clause, including columns (or expressions) that you did not SELECT. But when you specified DISTINCT, before, you can no longer order by columns / expressions that were not selected. Why? Because the ordering would be quite undefined.
  • OFFSET: Don’t use offset
  • LIMIT, FETCH, TOP: Now, sane databases put the LIMIT (MySQL, PostgreSQL) or FETCH (DB2, Oracle 12c, SQL Server 2012) clause at the very end, syntactically. In the old days, Sybase and SQL Server thought it would be a good idea to have TOP as a keyword in SELECT. As if the correct ordering of SELECT DISTINCT wasn’t already confusing enough.
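
To make the GROUP BY bucket model from the list above concrete, here’s a minimal Java sketch (my own illustration, assuming a Customer class with getFirstName() / getLastName() getters and a customers list):

Map<String, List<Customer>> groups = customers.stream()
    .collect(Collectors.groupingBy(Customer::getFirstName));

// After "GROUP BY first_name", only the key is directly available.
// All other columns survive only inside each group's list, which is
// visible to aggregate functions alone:
groups.forEach((firstName, group) -> {
    long count = group.size();                    // count(*)
    Optional<String> maxLastName = group.stream() // MAX(last_name)
        .map(Customer::getLastName)
        .max(Comparator.naturalOrder());
    // There is no single last_name value here – which is why
    // SELECT first_name, last_name, count(*) .. GROUP BY first_name fails
});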

There, we have it. It makes total sense. And if you ever want to do something that is not in the “right order”, the simplest trick is always to resort to a derived table. E.g. when you want to group on a window function:

-- Doesn't work, cannot put window functions in GROUP BY
SELECT ntile(4) OVER (ORDER BY age) AS bucket, MIN(age), MAX(age)
FROM customer
GROUP BY ntile(4) OVER (ORDER BY age)

-- Works:
SELECT bucket, MIN(age), MAX(age)
FROM (
  SELECT age, ntile(4) OVER (ORDER BY age) AS bucket
  FROM customer
) c
GROUP BY bucket

Why does it work? Because:

  • In the derived table, FROM happens first, and then the WINDOW is calculated, then the bucket is SELECTed.
  • The outer SELECT can now treat the result of this window function calculation like any ordinary table in the FROM clause, then GROUP BY an ordinary column, then aggregate, then SELECT

Let’s review our original examples with an explanation why they work or why they don’t.

-- Wrong: Because aggregate functions are calculated
-- *after* GROUP BY, and WHERE is applied *before* GROUP BY
SELECT first_name, count(*)
FROM customer
WHERE count(*) > 1
GROUP BY first_name

-- logical order         -- available columns after operation
-------------------------------------------------------------
FROM customer            -- customer.*
WHERE ??? > 1            -- customer.* (count not yet available!)
GROUP BY first_name      -- first_name (customer.* for aggs only)
<aggregate> count(*)     -- first_name, count
SELECT first_name, count -- first_name, count



-- Correct: Because aggregate functions are calculated
-- *after* GROUP BY but *before* HAVING, so they're 
-- available to the HAVING clause.
SELECT first_name, count(*)
FROM customer
GROUP BY first_name
HAVING count(*) > 1

-- logical order         -- available columns after operation
-------------------------------------------------------------
FROM customer            -- customer.*
GROUP BY first_name      -- first_name (customer.* for aggs only)
<aggregate> count(*)     -- first_name, count
HAVING count > 1         -- first_name, count
SELECT first_name, count -- first_name, count



-- Correct: Both SELECT and ORDER BY are applied *after*
-- the aggregation step, so aggregate function results are 
-- available
SELECT first_name, count(*)
FROM customer
GROUP BY first_name
ORDER BY count(*) DESC

-- logical order         -- available columns after operation
-------------------------------------------------------------
FROM customer            -- customer.*
GROUP BY first_name      -- first_name (customer.* for aggs only)
<aggregate> count(*)     -- first_name, count
SELECT first_name, count -- first_name, count
ORDER BY count DESC      -- first_name, count



-- Wrong: Because the GROUP BY clause creates groups of
-- first names, and all the remaining customer columns
-- are aggregated into a list, which is only visible to
-- aggregate functions
SELECT first_name, last_name, count(*)
FROM customer
GROUP BY first_name

-- logical order         -- available columns after operation
-----------------------------------------------------------------
FROM customer            -- customer.*
GROUP BY first_name      -- first_name (customer.* for aggs only)
<aggregate> count(*)     -- first_name, count
SELECT first_name, ???, count
                         -- first_name, count (last_name removed)



-- Correct: Because now, we're using an aggregate function
-- to access one of the columns that have been put into that
-- list of columns that are otherwise no longer available
-- after the GROUP BY clause
SELECT first_name, MAX(last_name), count(*)
FROM customer
GROUP BY first_name

-- logical order         -- available columns after operation
-----------------------------------------------------------------
FROM customer            -- customer.*
GROUP BY first_name      -- first_name (customer.* for aggs only)
<aggregate> MAX(last_name), count(*)
                         -- first_name, max, count
SELECT first_name, max, count
                         -- first_name, max, count



-- Wrong: Because we still cannot access the last name column
-- which is in that list after the GROUP BY clause.
SELECT first_name || ' ' || last_name, count(*)
FROM customer
GROUP BY first_name

-- logical order         -- available columns after operation
-----------------------------------------------------------------
FROM customer            -- customer.*
GROUP BY first_name      -- first_name (customer.* for aggs only)
<aggregate> count(*)     -- first_name, count
SELECT first_name || ' ' || ???, count
                         -- first_name, count (last_name removed)




-- Correct: Because we can access the last name column from
-- aggregate functions, which can see that list
SELECT first_name || ' ' || MAX(last_name), count(*)
FROM customer
GROUP BY first_name

-- logical order         -- available columns after operation
-----------------------------------------------------------------
FROM customer            -- customer.*
GROUP BY first_name      -- first_name (customer.* for aggs only)
<aggregate> MAX(last_name), count(*)
                         -- first_name, max, count
SELECT first_name || ' ' || max, count
                         -- first_name, max, count (no last_name)




-- Correct: Because both GROUP BY columns and aggregated
-- columns are available to aggregate functions
SELECT MAX(first_name || ' ' || last_name), count(*)
FROM customer
GROUP BY first_name

-- logical order         -- available columns after operation
-----------------------------------------------------------------
FROM customer            -- customer.*
GROUP BY first_name      -- first_name (customer.* for aggs only)
<aggregate> MAX(first_name || ' ' || last_name), count(*)
                         -- first_name, max, count
SELECT max, count        -- first_name, max, count

Always think about the logical order of operations

If you’re not a frequent SQL writer, the syntax can indeed be confusing. Especially GROUP BY and aggregations “infect” the rest of the entire SELECT statement, and things get really weird. When confronted with this weirdness, we have two options:

  • Get mad and scream at the SQL language designers
  • Accept our fate, close our eyes, forget about the syntax, and remember the logical order of operations

I generally recommend the latter, because then things start making a lot more sense, including the beautiful cumulative daily revenue calculation below, which nests the daily revenue (SUM(amount) aggregate function) inside of the cumulative revenue (SUM(...) OVER (...) window function):

SELECT
  payment_date,
  SUM(SUM(amount)) OVER (ORDER BY payment_date) AS revenue
FROM payment
GROUP BY payment_date

… because aggregations logically happen before window functions.


Prevent SQL Injection with SQL Builders Like jOOQ


As long as we allow ourselves to write string-based dynamic SQL embedded in other programming languages like Java, we will have a certain risk of being vulnerable to SQL injection. That’s a fact. Don’t believe it? Check out this website exposing all vulnerabilities on Stack Overflow for PHP questions:

https://laurent22.github.io/so-injections

In a previous blog post, I’ve shown how fatal such a single vulnerability can be, if discovered. A lot of blog posts out there warn about the potential of attackers injecting a DROP DATABASE statement.


Also, everyone knows this famous xkcd:
https://xkcd.com/327

But in my opinion, a much more important threat is not immediate damage to your system, but data leakage. Using tools like sqlmap, every script kiddie can download your credit card information and other sensitive data from your database.

The best remedy: Avoid string-based embedded SQL

The best remedy is to always avoid string-based embedded SQL, whenever you can. I.e. try not to do things like this too often:

try (PreparedStatement s = c.prepareStatement(
    "SELECT first_name, last_name "
  + "FROM users "
  + "WHERE user_id = ? ")) {

    s.setInt(1, userId);
    try (ResultSet rs = s.executeQuery()) {
        ...
    }
}

Sure, there is currently nothing wrong with the above Java code. There is no vulnerability as we’re using a bind variable. But the risk of some developer not paying attention and accidentally adding a vulnerability when adding another predicate is too high.
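
All it takes is one hurried change like this hypothetical one, and the statement is wide open:

// DON'T: a "quick" extra predicate, added by string concatenation
try (PreparedStatement s = c.prepareStatement(
    "SELECT first_name, last_name "
  + "FROM users "
  + "WHERE user_id = ? "
  + "AND last_name = '" + lastName + "'")) { // SQL injection!

    s.setInt(1, userId);
    ...
}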

Fine, so we’re not using string-based embedded SQL. But what are the alternatives?

Use a query DSL like jOOQ to build your dynamic SQL

APIs like jOOQ help you build SQL statements in a type safe, composable way as if the Java language actually understood SQL. The previous JDBC prepared statement now translates to the following:

Result<?> result =
ctx.select(USERS.FIRST_NAME, USERS.LAST_NAME)
   .from(USERS)
   // Bind variable embedded in statement
   .where(USERS.USER_ID.eq(userId))
   .fetch();

In jOOQ, the underlying JDBC PreparedStatement is created transparently and the userId bind variable is placed right in the middle of the statement so you don’t have to worry about the boring details. There’s no way you can have any accidental SQL injection vulnerability in such jOOQ API calls, because every SQL clause and expression is part of an expression tree that is managed by jOOQ.

And what’s best: The Java compiler can now type check your SQL statement to a certain degree. This is a huge benefit in terms of productivity and code quality.

Of course, SQL builders aren’t a perfect shield for SQL injection, as they usually expose some API to insert custom SQL strings. For instance, in jOOQ, you can write:

Result<?> result =
ctx.select(USERS.FIRST_NAME, USERS.LAST_NAME)
   .from(USERS)
   // Bind variable embedded in "plain SQL" string
   .where("user_id = ?", userId)
   .fetch();

This alternative API usage is just as convenient as the previous one. The bind variable is located right where it is needed (not applied later on a PreparedStatement), but of course this is again string-based embedded SQL, which is potentially vulnerable if you get it wrong.

The important thing here is that the string-based method is the exception, which is used only very rarely when you need a very advanced SQL feature that is not supported by jOOQ. By default, you’re using the “safe” approach without strings. And since jOOQ 3.9, you can also add a type checker that generates a compilation warning or error as soon as you’re using jOOQ’s “plain SQL” API. Read more about that here:
https://blog.jooq.org/2016/05/09/jsr-308-and-the-checker-framework-add-even-more-typesafety-to-jooq-3-9

Use views and stored procedures

The safest and least intrusive way to prevent such problems, of course, is not to use embedded SQL at all, but to use views and/or stored procedures instead. By moving and storing the SQL statements into the database, you get a few advantages in addition to the lack of SQLi vulnerability:

  • The database can type-check your statement in its entirety
  • You can easily reuse a statement in different applications, e.g. not all written in Java
  • You could even revoke grants to the underlying tables and grant access only to these views and stored procedures, which adds another layer of security to your system

If you’re writing PL/SQL in Oracle, the previous statement would be transformed to something like this:

FOR rec IN (
  SELECT first_name, last_name
  FROM users
  -- Bind variable again embedded in SQL statement
  WHERE user_id = l_user_id
) LOOP
  ...
END LOOP;

Just like with jOOQ, you can easily embed the SQL statement in the “host language”, except that this time, it isn’t Java but PL/SQL. There’s absolutely no way the above statement will ever be vulnerable to SQL injection, as the statement isn’t even a dynamic SQL statement anymore.

Again, you can also write dynamic SQL in PL/SQL (or in other databases’ stored procedure languages), e.g. by using EXECUTE IMMEDIATE, but the important thing is, again, that dynamic, string-based, embedded SQL is the exception, not the default. That’s the important thing here!

Now, if you do decide to use stored procedures, then jOOQ is again here to help you, as you can easily call that procedure from jOOQ, again in a type safe way. More about that in this blog post:
Painless Access from Java to PL/SQL Procedures with jOOQ

Beware! SQL isn’t the only language vulnerable to injection

Vlad Mihalcea displayed an equally important threat to JPA based applications. Scroll down in his blog post to find him mentioning JPQL injection:
A beginner’s guide to SQL injection and how you should prevent it

Yes, if you’re doing something as silly as this:

public List<Post> getPostsByTitle(String title) {
    return doInJPA(entityManager -> {
        return entityManager.createQuery(
            "select p " +
            "from Post p " +
            "where" +
            "   p.title = '" + title + "'", Post.class)
        .getResultList();
    });
}

… then an attacker can inject any sort of JPQL code into the title variable. The possibilities are a bit more limited than with SQL injection, but it is still perfectly possible to read and dump the entire database (including credit card information, remember?) from such a vulnerability.
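
The fix is the same as with JDBC: use a bind parameter. Here’s a sketch using the standard JPA setParameter() method:

public List<Post> getPostsByTitle(String title) {
    return doInJPA(entityManager -> {
        return entityManager.createQuery(
            "select p " +
            "from Post p " +
            "where p.title = :title", Post.class)
        .setParameter("title", title)
        .getResultList();
    });
}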

Again, if you’re doing this quite often, you should consider switching to a more type safe way to write JPQL (e.g. using the infamous Criteria API), or again switch to SQL and jOOQ.

Read Vlad’s post for more details:
A beginner’s guide to SQL injection and how you should prevent it

Conclusion

There are many external DSLs, like SQL, JPQL, XPath, regular expressions, and whatnot. Some of them are extremely powerful, and they’re used to operate on sensitive data. Which means that if you leak control of the language outside of your application, you’re very vulnerable.

Vulnerabilities mostly happen when you embed those external DSLs into Java in a string-based form. The best remedies are:

  • Use an internal DSL that models the external DSL instead
  • Keep the external DSL external, e.g. by using views and stored procedures

Both of these approaches work because they both avoid strings.

The Java Ecosystem’s Obsession with NonNull Annotations


I’m not well known for my love of annotations. While I do recognise that they can serve a very limited purpose in some areas (e.g. hinting stuff to the compiler or extending the language where we don’t want new keywords), I certainly don’t think they were ever meant to be used for API design.

“Unfortunately” (but this is a matter of taste), Java 8 introduced type annotations – an entirely new extension to the annotation type system, which allows you to do things like:

@Positive int positive = 1;

Thus far, I’ve seen such common type restriction features only in the Ada or PL/SQL languages in a much more rigid way, but others may have similar features.

The nice thing about Java 8’s implementation is the fact that the meaning of the type of the above local variable (@Positive int) is unknown to the Java compiler (and to the runtime), until you write and activate a specific compiler plugin to enforce your custom meaning. The easiest way to do that is by using the Checker Framework (and yes, we’re guilty at jOOQ. We have our own checker for SQL dialect validation). You can implement any semantics, for instance:

// This compiles because @Positive int is a subtype of int
int number = positive;

// Doesn't compile, because number might be negative
@Positive int positive2 = number;

// Doesn't compile, because -1 is negative
@Positive int positive3 = -1;

As you can see, using type annotations is a very strategic decision. Either you want to create hundreds of types in this parallel universe as in this example:

Or, in my opinion, you’d better leave this set of features alone, because probably: YAGNI

Unfortunately, and to the disappointment of Mike Ernst, the author of the Checker Framework (whom I’ve talked to about this some years ago), most people abuse this new JSR-308 feature for boring and simple null checking. For instance, just recently, there was a feature request on the popular Javaslang library to add support for such annotations that help users and IDEs guarantee that Javaslang API methods return non-null results.

Please no. Don’t use this atomic bomb for boring null checks

Let me make this very clear:

Type annotations are the wrong tool to enforce nullability

– Lukas Eder, timeless

You may quote me on that. The only exception to the above: if you strategically embrace JSR-308 type annotations in every possible way and start adding annotations for all sorts of type restrictions, including the @Positive example that I’ve given, then yes, adding nullability annotations won’t hurt you much anymore, as your types will take 50 lines of code to declare and reference anyway. But frankly, this is an extremely niche approach to type systems that only few general-purpose programs, let alone publicly available APIs, can profit from. If in doubt, don’t use type annotations.

One important reason why a library like Javaslang shouldn’t add such annotations is the fact that you can be very sure that you will hardly ever encounter null in it, because references in Javaslang are mostly one of three things:

  • A collection, which is never null, merely empty
  • An Option which replaces null (it is in fact a collection of cardinality 0..1)
  • A non-null reference (because in the presence of Option, all references can be expected to be non-null)
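
As a sketch of the second point, this is what replacing null with Option looks like in Javaslang 2.x (maybeNullable() is a hypothetical method that may return null):

// Option.of() wraps a possibly-null reference: Some(value) or None
Option<String> name = Option.of(maybeNullable());

// Callers never see null – they decide explicitly what absence means
String display = name.getOrElse("anonymous");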

Of course, these rules aren’t valid for every API. There are some low quality APIs out there that return “unexpected” null values, or leak “internal” null values (and historically, some of the JDK APIs, unfortunately, are part of these “low quality APIs”). But Javaslang is not one of them, and the APIs you are designing also shouldn’t be one of them.

So, let go of your null fear. null is not a problem in well-designed software. You can spare yourself the work of adding a @NonNull annotation on 99% of all of your types just to shut up your IDE, in case you turned on those warnings. Focus on writing high-quality software rather than bikeshedding null.

Because: YAGNI.

And, if you haven’t had enough bikeshedding already, consider watching this entertaining talk by Stuart Marks:

Use jOOQ to Read / Write Oracle PL/SQL RECORD Types


One of the biggest limitations when working with Oracle PL/SQL from Java is the lack of support for a variety of PL/SQL features through the JDBC interface. This lack of support is actually not limited to JDBC; it also extends to Oracle SQL. For instance, if you’re using the useful PL/SQL BOOLEAN type as such:

CREATE OR REPLACE FUNCTION yes RETURN boolean AS
BEGIN
  RETURN true;
END yes;
/

It would now be terrific if you could do this:

SELECT yes FROM dual;

But it’s not possible. You’ll be getting an error along the lines of the following:

ORA-06552: PL/SQL: Statement ignored
ORA-06553: PLS-382: expression is of wrong type
06552. 00000 -  "PL/SQL: %s"
*Cause:    
*Action:

It’s crazy to think that the Oracle SQL language still doesn’t support the SQL standard boolean type, which is so useful as I’ve shown in previous blog posts. Here’s where you can upvote the feature request:

https://community.oracle.com/ideas/2633

BOOLEAN isn’t the only “inaccessible” SQL feature

Now, there are a couple of other data types, which cannot be “bridged” to the SQL engine, and thus (for some reason only the OJDBC driver gods can fathom) cannot be “bridged” to a JDBC client. Among them: The very useful PL/SQL RECORD type.

Very often, you want to do this:

CREATE PACKAGE customers AS
  TYPE person IS RECORD (
    first_name VARCHAR2(50),
    last_name VARCHAR2(50)
  );
  
  FUNCTION get_customer(p_customer_id NUMBER) RETURN person;
END customers;
/

CREATE PACKAGE BODY customers AS
  FUNCTION get_customer(p_customer_id NUMBER) RETURN person IS
    v_person customers.person;
  BEGIN
    SELECT c.first_name, c.last_name
    INTO v_person
    FROM customer c
    WHERE c.customer_id = p_customer_id;
    
    RETURN v_person;
  END get_customer;
END customers;
/

(we’re running this on the SAKILA database).

As in any language with the least bit of sophistication, we can define “structs” or records in PL/SQL, which we can now frequently reuse. Everyone knows what a PERSON is and we can pass them around between procedures and functions.

For instance, in PL/SQL client code:

SET SERVEROUTPUT ON
DECLARE
  v_person customers.person;
BEGIN
  v_person := customers.get_customer(1);
  
  dbms_output.put_line(v_person.first_name);
  dbms_output.put_line(v_person.last_name);
END;
/

… which yields:

MARY
SMITH

What about JDBC client code?

After having added support for PL/SQL BOOLEAN types, jOOQ 3.9 now finally supports PL/SQL record types in stored procedures as well, at least in standalone calls, which are not embedded in SQL statements. The jOOQ code generator will pick up all of these package-level PL/SQL record types and their structures and generate the boring boilerplate code for you. E.g. (simplified):

package org.jooq.demo.sakila.packages.customers.records;

public class PersonRecord extends UDTRecordImpl<PersonRecord> {
    public void   setFirstName(String value) { ... }
    public String getFirstName()             { ... }
    public void   setLastName(String value)  { ... }
    public String getLastName()              { ... }

    public PersonRecord() { ... }
    public PersonRecord(String firstName, String lastName) { ... }
}

Notice how jOOQ doesn’t really make any difference in its API between the generated code for an Oracle SQL OBJECT type or an Oracle PL/SQL RECORD type. They’re essentially the same thing (from a jOOQ API perspective).

More interestingly, what happened to the generated package and the function? This code is generated (simplified):

public class Customers extends PackageImpl {
    public static PersonRecord getCustomer(
        Configuration configuration, Long pCustomerId
    ) { ... }
}

That’s all! So all we now need to do is pass the ubiquitous jOOQ Configuration (which contains information such as SQLDialect or JDBC Connection) and the actual stored function parameter, the P_CUSTOMER_ID value, and we’re done!

This is how jOOQ client code might look:

PersonRecord person = Customers.getCustomer(configuration(), 1L);
System.out.println(person);

As you can see, this is just the same thing as the corresponding PL/SQL code. And the output of this println call is this:

SAKILA.CUSTOMERS.PERSON('MARY', 'SMITH')

A fully qualified RECORD declaration with schema, package, and type name.

How does it work?

Let’s turn on jOOQ’s built-in TRACE logging to see what jOOQ did behind the scenes:

Calling routine          : 
  DECLARE
    r1 "CUSTOMERS"."PERSON";
  BEGIN
    r1 := "CUSTOMERS"."GET_CUSTOMER"("P_CUSTOMER_ID" => ?);
    ? := r1."FIRST_NAME";
    ? := r1."LAST_NAME";
  END;
Binding variable 1       : 1 (class java.lang.Long)
Registering variable 2   : class java.lang.String
Registering variable 3   : class java.lang.String
Fetched OUT parameters   : +-----------------+
                         : |RETURN_VALUE     |
                         : +-----------------+
                         : |('MARY', 'SMITH')|
                         : +-----------------+

So, jOOQ usually doesn’t use JDBC’s very limited escape syntax to call stored procedures; it just produces an anonymous PL/SQL block with a local variable of type CUSTOMERS.PERSON, i.e. of our RECORD type. The function call is then assigned to this local variable, and the local variable is destructured into its individual parts.

In the TRACE log, you can see the individual bind variables, i.e. there’s an IN variable at index 1 of type Long, and two OUT variables of type String at indexes 2 and 3.
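
If you had to do the same thing manually with JDBC, it might look roughly like this (a sketch of the technique, not jOOQ’s actual internal code):

try (CallableStatement s = connection.prepareCall(
    "DECLARE\n"
  + "  r1 \"CUSTOMERS\".\"PERSON\";\n"
  + "BEGIN\n"
  + "  r1 := \"CUSTOMERS\".\"GET_CUSTOMER\"(\"P_CUSTOMER_ID\" => ?);\n"
  + "  ? := r1.\"FIRST_NAME\";\n"
  + "  ? := r1.\"LAST_NAME\";\n"
  + "END;")) {

    s.setLong(1, 1L);                         // IN: P_CUSTOMER_ID
    s.registerOutParameter(2, Types.VARCHAR); // OUT: FIRST_NAME
    s.registerOutParameter(3, Types.VARCHAR); // OUT: LAST_NAME
    s.execute();

    String firstName = s.getString(2);
    String lastName  = s.getString(3);
}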

How does jOOQ know the record types?

At runtime, all the information is hard-wired to the generated code. So, the magic is inside of the code generator. Warning: Some serious SQL ahead!

This beauty here queries the dictionary views for PL/SQL record types:

AllArguments a = ALL_ARGUMENTS.as("a");
AllArguments x = ALL_ARGUMENTS.as("x");
Field<BigDecimal> nextSibling = field(name("next_sibling"), x.SEQUENCE.getDataType());

DSL.using(configuration)
   .select(x.TYPE_OWNER, x.TYPE_NAME, x.TYPE_SUBNAME)
   .select(
       a.ARGUMENT_NAME                                               .as(ALL_TYPE_ATTRS.ATTR_NAME),
       a.SEQUENCE                                                    .as(ALL_TYPE_ATTRS.ATTR_NO),
       a.TYPE_OWNER                                                  .as(ALL_TYPE_ATTRS.ATTR_TYPE_OWNER),
       nvl2(a.TYPE_SUBNAME, a.TYPE_NAME, inline(null, a.TYPE_NAME))  .as("package_name"),
       coalesce(a.TYPE_SUBNAME, a.TYPE_NAME, a.DATA_TYPE)            .as(ALL_TYPE_ATTRS.ATTR_TYPE_NAME),
       a.DATA_LENGTH                                                 .as(ALL_TYPE_ATTRS.LENGTH),
       a.DATA_PRECISION                                              .as(ALL_TYPE_ATTRS.PRECISION),
       a.DATA_SCALE                                                  .as(ALL_TYPE_ATTRS.SCALE))
   .from(a)
   .join(table(
       select(
           a.TYPE_OWNER,
           a.TYPE_NAME,
           a.TYPE_SUBNAME,
           min(a.OWNER        ).keepDenseRankFirstOrderBy(a.OWNER, a.PACKAGE_NAME, a.SUBPROGRAM_ID, a.SEQUENCE).as(a.OWNER),
           min(a.PACKAGE_NAME ).keepDenseRankFirstOrderBy(a.OWNER, a.PACKAGE_NAME, a.SUBPROGRAM_ID, a.SEQUENCE).as(a.PACKAGE_NAME),
           min(a.SUBPROGRAM_ID).keepDenseRankFirstOrderBy(a.OWNER, a.PACKAGE_NAME, a.SUBPROGRAM_ID, a.SEQUENCE).as(a.SUBPROGRAM_ID),
           min(a.SEQUENCE     ).keepDenseRankFirstOrderBy(a.OWNER, a.PACKAGE_NAME, a.SUBPROGRAM_ID, a.SEQUENCE).as(a.SEQUENCE),
           min(nextSibling    ).keepDenseRankFirstOrderBy(a.OWNER, a.PACKAGE_NAME, a.SUBPROGRAM_ID, a.SEQUENCE).as(nextSibling),
           min(a.DATA_LEVEL   ).keepDenseRankFirstOrderBy(a.OWNER, a.PACKAGE_NAME, a.SUBPROGRAM_ID, a.SEQUENCE).as(a.DATA_LEVEL))
      .from(table(
          select(
              lead(a.SEQUENCE, 1, a.SEQUENCE).over(
                  partitionBy(a.OWNER, a.PACKAGE_NAME, a.SUBPROGRAM_ID, a.DATA_LEVEL)
                 .orderBy(a.SEQUENCE)
              ).as("next_sibling"),
              a.TYPE_OWNER,
              a.TYPE_NAME,
              a.TYPE_SUBNAME,
              a.OWNER,
              a.PACKAGE_NAME,
              a.SUBPROGRAM_ID,
              a.SEQUENCE,
              a.DATA_LEVEL,
              a.DATA_TYPE)
         .from(a)
         .where(a.OWNER.in(getInputSchemata()))
      ).as("a"))
      .where(a.TYPE_OWNER.in(getInputSchemata()))
      .and(a.OWNER.in(getInputSchemata()))
      .and(a.DATA_TYPE.eq("PL/SQL RECORD"))
      .groupBy(a.TYPE_OWNER, a.TYPE_NAME, a.TYPE_SUBNAME)
   ).as("x"))
   .on(row(a.OWNER, a.PACKAGE_NAME, a.SUBPROGRAM_ID).eq(x.OWNER, x.PACKAGE_NAME, x.SUBPROGRAM_ID))
   .and(a.SEQUENCE.between(x.SEQUENCE).and(nextSibling))
   .and(a.DATA_LEVEL.eq(x.DATA_LEVEL.plus(one())))
   .orderBy(x.TYPE_OWNER, x.TYPE_NAME, x.TYPE_SUBNAME, a.SEQUENCE)
   .fetch();

This is a nice little jOOQ query, which corresponds to the following equally impressive SQL query, which you can run directly in SQL Developer, or in some other SQL client for Oracle:

SELECT 
  "x"."TYPE_OWNER",
  "x"."TYPE_NAME",
  "x"."TYPE_SUBNAME",
  "a"."ARGUMENT_NAME" "ATTR_NAME",
  "a"."SEQUENCE" "ATTR_NO",
  "a"."TYPE_OWNER" "ATTR_TYPE_OWNER",
  nvl2("a"."TYPE_SUBNAME", "a"."TYPE_NAME", NULL) "package_name",
  COALESCE("a"."TYPE_SUBNAME", "a"."TYPE_NAME", "a"."DATA_TYPE") "ATTR_TYPE_NAME",
  "a"."DATA_LENGTH" "LENGTH",
  "a"."DATA_PRECISION" "PRECISION",
  "a"."DATA_SCALE" "SCALE"
FROM "SYS"."ALL_ARGUMENTS" "a"
JOIN (
  SELECT 
    "a"."TYPE_OWNER",
    "a"."TYPE_NAME",
    "a"."TYPE_SUBNAME",
    MIN("a"."OWNER") KEEP (DENSE_RANK FIRST
      ORDER BY "a"."OWNER" ASC, "a"."PACKAGE_NAME" ASC, "a"."SUBPROGRAM_ID" ASC, "a"."SEQUENCE" ASC) "OWNER",
    MIN("a"."PACKAGE_NAME") KEEP (DENSE_RANK FIRST
      ORDER BY "a"."OWNER" ASC, "a"."PACKAGE_NAME" ASC, "a"."SUBPROGRAM_ID" ASC, "a"."SEQUENCE" ASC) "PACKAGE_NAME",
    MIN("a"."SUBPROGRAM_ID") KEEP (DENSE_RANK FIRST
      ORDER BY "a"."OWNER" ASC, "a"."PACKAGE_NAME" ASC, "a"."SUBPROGRAM_ID" ASC, "a"."SEQUENCE" ASC) "SUBPROGRAM_ID",
    MIN("a"."SEQUENCE") KEEP (DENSE_RANK FIRST
      ORDER BY "a"."OWNER" ASC, "a"."PACKAGE_NAME" ASC, "a"."SUBPROGRAM_ID" ASC, "a"."SEQUENCE" ASC) "SEQUENCE",
    MIN("next_sibling") KEEP (DENSE_RANK FIRST
      ORDER BY "a"."OWNER" ASC, "a"."PACKAGE_NAME" ASC, "a"."SUBPROGRAM_ID" ASC, "a"."SEQUENCE" ASC) "next_sibling",
    MIN("a"."DATA_LEVEL") KEEP (DENSE_RANK FIRST
      ORDER BY "a"."OWNER" ASC, "a"."PACKAGE_NAME" ASC, "a"."SUBPROGRAM_ID" ASC, "a"."SEQUENCE" ASC) "DATA_LEVEL"
  FROM (
    SELECT 
	  lead("a"."SEQUENCE", 1, "a"."SEQUENCE") OVER (
	    PARTITION BY "a"."OWNER", "a"."PACKAGE_NAME", "a"."SUBPROGRAM_ID", "a"."DATA_LEVEL" 
		ORDER BY "a"."SEQUENCE" ASC
	  ) "next_sibling",
      "a"."TYPE_OWNER",
      "a"."TYPE_NAME",
      "a"."TYPE_SUBNAME",
      "a"."OWNER",
      "a"."PACKAGE_NAME",
      "a"."SUBPROGRAM_ID",
      "a"."SEQUENCE",
      "a"."DATA_LEVEL",
      "a"."DATA_TYPE"
    FROM "SYS"."ALL_ARGUMENTS" "a"
    WHERE "a"."OWNER" IN ('SAKILA', 'SYS')     -- Possibly replace schema here
    ) "a"
  WHERE ("a"."TYPE_OWNER" IN ('SAKILA', 'SYS') -- Possibly replace schema here
  AND "a"."OWNER"         IN ('SAKILA', 'SYS') -- Possibly replace schema here
  AND "a"."DATA_TYPE"      = 'PL/SQL RECORD')
  GROUP BY 
    "a"."TYPE_OWNER",
    "a"."TYPE_NAME",
    "a"."TYPE_SUBNAME"
  ) "x"
ON (("a"."OWNER", "a"."PACKAGE_NAME", "a"."SUBPROGRAM_ID")
 = (("x"."OWNER", "x"."PACKAGE_NAME", "x"."SUBPROGRAM_ID"))
AND "a"."SEQUENCE" BETWEEN "x"."SEQUENCE" AND "next_sibling"
AND "a"."DATA_LEVEL" = ("x"."DATA_LEVEL" + 1))
ORDER BY 
  "x"."TYPE_OWNER" ASC,
  "x"."TYPE_NAME" ASC,
  "x"."TYPE_SUBNAME" ASC,
  "a"."SEQUENCE" ASC

Whew.

The output shows that we got all the required information for our RECORD type:

[  TYPE INFO COLUMNS  ]   ATTR_NAME   ATTR_TYPE_NAME  LENGTH
SAKILA CUSTOMERS PERSON   FIRST_NAME  VARCHAR2        50
SAKILA CUSTOMERS PERSON   LAST_NAME   VARCHAR2        50

All of this also works for:

  • Nested types
  • Multiple IN and OUT parameters

I’ll blog about a more advanced use-case in the near future, so stay tuned.