jOOQ Newsletter: November 28, 2014 – Black Friday jOOQ Sale – Only Today!

Subscribe to this newsletter here

Tweet of the Day and Webinar with Arun Gupta from Red Hat

Today, we have a very special Tweet of the Day by Oliver Hubaut, who expresses what we believe is a general feeling in the industry. He says:

There is a lot of truth in his statement, albeit perhaps not the one he intended. Many people have misinterpreted JPA in the past, believing that it will be a full replacement for SQL. This couldn’t be farther from the truth.

Gavin King, the creator of Hibernate, has told us the following:

… and this is also the point we’re trying to make. Join us next week on Wednesday, December 3 when we meet with Arun Gupta from Red Hat for his Webinar about JPA and jOOQ. If you have any questions that you’d like us to talk about, ask them here:

https://github.com/javaee-samples/webinars/issues/4

Black Friday Sale: Get 20% off any jOOQ purchase, today!

We’re participating in the Black Friday sale tradition and are giving you an incredible 20% off your purchase of any jOOQ license that you order today, Black Friday, November 28, 2014.

Ask your manager today to treat you to a wonderful pre-Christmas gift! Don’t waste time; act quickly and order jOOQ licenses right now:

http://www.jooq.org/black-friday

jOOQ 3.5 released

If you’ve been following the jOOQ User Group, you’ve heard it already. Last Friday, we shipped the awesome jOOQ 3.5 with loads of new features!

The new Binding SPI

The main improvement of this exciting release is the new org.jooq.Binding SPI, which can be used to fully control all aspects of a user type’s JDBC interaction. This goes much further than the existing org.jooq.Converter SPI, which maps standard JDBC types to user types. With the new Binding SPI, virtually *ALL* vendor-specific types can now be supported. Examples include PostgreSQL’s JSON or HSTORE types, or Oracle’s DATE type, which is really incorrectly represented via java.sql.Timestamp, which is why we have now retrofitted the existing <dateAsTimestamp/> feature to use such bindings.

Stored procedures are everywhere

Stored procedure support was generally improved in this release. This includes lots of new little features and conveniences for use with PL/SQL or Transact-SQL. For instance, jOOQ 3.5.0 now supports cross-schema references of PL/SQL OBJECT and TABLE types, which allows for binding directly to Oracle Spatial. We’ve blogged about this exciting improvement here:
https://blog.jooq.org/2014/11/04/painless-access-from-java-to-plsql-procedures-with-jooq/

And while we were at it, we’ve also added basic support for Oracle AQ, which integrates very nicely with our OBJECT type support!

In Transact-SQL and MySQL, we now support fetching arbitrary numbers of Results from stored procedures, and we’ve also implemented support for Firebird PSQL, including Firebird’s very interesting syntax for table-valued functions.

By the way, we support user-defined aggregate functions for a variety of databases, including Oracle, PostgreSQL, and HSQLDB. Definitely something you should look into!

SQL improvements

In this release, we’ve finally got support for UNION, INTERSECT, and EXCEPT right with respect to nesting such operations, as well as combining them with ORDER BY and LIMIT .. OFFSET.
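For example, this is the kind of statement that is now handled correctly (a sketch with hypothetical tables t1, t2, t3, using PostgreSQL-style LIMIT .. OFFSET):

-- A nested set operation, combined with ORDER BY and LIMIT .. OFFSET
(SELECT id, name FROM t1
 UNION
 SELECT id, name FROM t2)
EXCEPT
SELECT id, name FROM t3
ORDER BY name
LIMIT 10 OFFSET 20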

Let’s talk some more DDL

We’ve continued to add support for DDL statements, including

  • CREATE TABLE
  • CREATE TABLE AS SELECT
  • CREATE VIEW and DROP VIEW
  • CREATE INDEX and DROP INDEX
  • CREATE SEQUENCE and DROP SEQUENCE
  • DROP .. IF EXISTS

We’ll continue to add support for more DDL statements in future releases.
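For example, these are some of the statements that can now be expressed through the jOOQ API, shown here as plain SQL with hypothetical object names:

-- CREATE TABLE AS SELECT
CREATE TABLE archive AS
SELECT * FROM events WHERE year < 2014;

-- CREATE INDEX and CREATE SEQUENCE
CREATE INDEX i_events_year ON events (year);
CREATE SEQUENCE s_event_id;

-- DROP .. IF EXISTS
DROP VIEW IF EXISTS v_recent_events;
DROP TABLE IF EXISTS archive;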

Code generation improvements

We’ve added support for the new XMLDatabase, a code generation configuration that allows reading meta information from XML formats, e.g. from a standard INFORMATION_SCHEMA.xml file, or from Vertabelo’s XML export format:
https://blog.jooq.org/2014/09/05/importing-your-erd-export-into-jooq/

Future versions of jOOQ will include built-in support for a variety of XML formats.

We’ve had an awesome contribution by Etienne Studer from Gradleware to help our users integrate the jOOQ code generation with their Gradle builds.

Last but not least: Informix!

Oh, and by the way, we now also support IBM’s second most popular database: Informix. Support for this database will be included in the jOOQ Enterprise Edition.

More information can be found here:
http://www.jooq.org/notes

jOOQ 3.2 End of Life

Now that 3.5 is out, 3.2 is more than one year old, which means that it has reached its end of life. We’ll be shipping a final patch release, 3.2.7, in early December. After that, we advise all our customers and users to upgrade to a newer minor release.

Do you want to stay on the 3.2 release? No problem, contact our sales team and we’ll find a solution for you.

Use MySQL’s Strict Mode on all new Projects!

MySQL is a database that has been bending the SQL standard in ways that make it hard to move off MySQL. What may appear to be a clever technique for vendor lock-in (or maybe just an oversight of the standard) can be a real obstacle to understanding the true meaning of the SQL language.

One such example is MySQL’s interpretation of how GROUP BY works. In MySQL, unlike any other database, you can put arbitrary expressions into your SELECT clause, even if they do not have a formal dependency on the GROUP BY expression. For instance:

SELECT employer, first_name, last_name
FROM employees
GROUP BY employer

This will work in MySQL, but what does it mean? If we only have one resulting record per employer, which one of the employees will be returned? The semantics of the above query is really this:

SELECT employer, ARBITRARY(first_name), ARBITRARY(last_name)
FROM employees
GROUP BY employer

… assuming there were such an aggregate function as ARBITRARY(). Some may claim that this can be used for some clever performance “optimisation”. I say: don’t. This is so weakly specified that it is not even clear whether the two references to this pseudo-ARBITRARY() aggregate function will produce values from the same record.
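In a standards-compliant database, you have to make the intent explicit instead, for instance by using real aggregate functions, or by grouping by all projected columns:

-- Explicit aggregation (note that the two MIN() values
-- need not belong to the same employee)
SELECT employer, MIN(first_name), MIN(last_name)
FROM employees
GROUP BY employer

-- ... or grouping by all projected columns
SELECT employer, first_name, last_name
FROM employees
GROUP BY employer, first_name, last_name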

Just look at the number of Stack Overflow questions that revolve around the “not a GROUP BY expression” error:

I’m sure that part of the damage done to a generation of SQL developers is due to the fact that this syntax works in some databases.

ONLY_FULL_GROUP_BY

But there is a flag in MySQL called ONLY_FULL_GROUP_BY, and Morgan Tocker, the MySQL community manager, suggests eventually turning it on by default.

MySQL community members tend to agree that this is a good decision in the long run.

While it is certainly very hard to turn this flag on for a legacy application, all new applications built on top of MySQL should make sure to turn on this flag. In fact, new applications should even consider turning on “strict SQL mode” entirely, to make sure they get a better, more modern SQL experience.
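For example, a minimal sketch of enabling these modes for the current session (the exact set of modes to combine depends on your MySQL version and requirements):

-- Enable ONLY_FULL_GROUP_BY along with strict mode
SET SESSION sql_mode =
  'ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION';

-- Verify the currently active modes
SELECT @@SESSION.sql_mode;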

For more information about MySQL server modes, please consider the manual:

http://dev.mysql.com/doc/refman/5.7/en/sql-mode.html

Don’t Migrate to MariaDB just yet. MySQL is Back!

Now that I have your attention, I’d like to invite you to a critical review of where we’re at in the MySQL vs. MariaDB debate. Around one month ago, I visited Oracle Open World 2014, and I’ve met with Morgan Tocker, the MySQL community manager at Oracle to learn about where MySQL is heading.

Who “is” MySQL

An interesting takeaway for me was that, according to Morgan, quite a few former MySQL AB employees stayed with MySQL when Sun acquired the database, and are still there now (perhaps after a short break), now that Oracle owns Sun and thus MySQL. In fact, right now as we speak, Oracle is pushing hard to get even more people on board the MySQL team. When this article was written, there were 21 open MySQL jobs listed on this blog post dating from February 25, 2014.

More details about Oracle’s plans to promote and push MySQL can be seen in this presentation by Morten Andersen.

So, if you’re still contemplating a migration to MariaDB, maybe give this all a second thought. MySQL is still the database used by many of the largest companies, including Facebook and LinkedIn. MySQL won’t go away, and it will get much better in the next couple of years. Maybe even to a point where its feature set is comparable to PostgreSQL’s? We can only hope.

Need another convincing argument? You can now use Oracle Enterprise Manager for both Oracle and MySQL databases. How awesome is that!

Stay tuned for a couple of additional blog posts on this blog, about what’s new and what’s on the MySQL roadmap in the next years. See all of Morten Andersen’s slides here:

Have You Ever Wondered About the Difference Between NOT NULL and DEFAULT?

When writing DDL in SQL, you can specify a couple of constraints on columns, like NOT NULL or DEFAULT constraints. Some people might wonder whether the two are actually redundant, i.e. is it still necessary to specify a NOT NULL constraint if there is already a DEFAULT clause?

The answer is: Yes!

Yes, you should still specify that NOT NULL constraint. And no, the two constraints are not redundant. The answer I gave here on Stack Overflow wraps it up by example, which I’m going to repeat here on our blog:

DEFAULT is the value that will be inserted in the absence of an explicit value in an insert / update statement. Let’s assume your DDL did not have the NOT NULL constraint:

ALTER TABLE tbl 
  ADD COLUMN col VARCHAR(20) 
    DEFAULT "MyDefault"

Then you could issue these statements:

-- 1. This will insert "MyDefault" 
--    into tbl.col
INSERT INTO tbl (A, B) 
  VALUES (NULL, NULL);

-- 2. This will insert "MyDefault" 
--    into tbl.col
INSERT INTO tbl (A, B, col) 
  VALUES (NULL, NULL, DEFAULT);

-- 3. This will insert "MyDefault"
--    into tbl.col
INSERT INTO tbl (A, B, col)
  DEFAULT VALUES;

-- 4. This will insert NULL
--    into tbl.col
INSERT INTO tbl (A, B, col)
  VALUES (NULL, NULL, NULL);

Alternatively, you can also use DEFAULT in UPDATE statements, according to the SQL-1992 standard:

-- 5. This will update "MyDefault"
--    into tbl.col
UPDATE tbl SET col = DEFAULT;

-- 6. This will update NULL 
--    into tbl.col
UPDATE tbl SET col = NULL;

Note that not all databases support all of these SQL standard syntaxes. Adding the NOT NULL constraint will cause statements 4 and 6 to fail, while statements 1-3 and 5 remain valid. So, to answer your question:

No, NOT NULL and DEFAULT are not redundant

That’s already quite interesting: the DEFAULT clause really only interacts with DML statements and how they specify the various columns that they’re updating. The NOT NULL constraint is a much more universal guarantee that also constrains a column’s content “outside” of the manipulating DML statements.

For instance, if you have a set of data and then you add a DEFAULT constraint, this will not affect your existing data, only new data being inserted.

If, however, you have a set of data and then you add a NOT NULL constraint, you can actually only do so if the constraint is valid – i.e. when there are no NULL values in your column. Otherwise, an error will be raised.
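A minimal sketch of such a migration, reusing the tbl / col example from above (PostgreSQL-style ALTER syntax; other databases use slightly different syntax):

-- Backfill the existing NULL values first...
UPDATE tbl SET col = 'MyDefault' WHERE col IS NULL;

-- ... then the constraint can be added without raising an error
ALTER TABLE tbl ALTER COLUMN col SET NOT NULL;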

Query performance

Another very interesting use case that applies only to NOT NULL constraints is their usefulness for query optimisers and query execution plans. Assume that you have such a constraint on your column and then, you’re using a NOT IN predicate:

SELECT *
FROM table
WHERE value NOT IN (
  SELECT not_nullable
  FROM other_table
)

In particular, when you’re using Oracle, the above query will be much faster when the not_nullable column has an index AND that particular constraint, because unfortunately, NULL values are not included in Oracle indexes.
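Semantics matter here, too: if the subquery’s column were nullable and produced a single NULL value, the NOT IN predicate could never become TRUE for any row. A minimal illustration (PostgreSQL syntax):

-- This query returns no rows, because
-- 1 NOT IN (2, 3, NULL) evaluates to
-- 1 <> 2 AND 1 <> 3 AND 1 <> NULL, which is UNKNOWN
SELECT 1
WHERE 1 NOT IN (2, 3, NULL);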

Read more about NULL and NOT IN predicates here.

Don’t Miss out on Awesome SQL Power with FIRST_VALUE(), LAST_VALUE(), LEAD(), and LAG()

If you’re using a commercial database or PostgreSQL / Firebird / CUBRID, you will be able to take advantage of the full power of window functions. We’ve blogged about window functions’ awesomeness a couple of times, in particular about ROW_NUMBER(), RANK(), DENSE_RANK().

Today, we’re going to look into some awesome window functions that produce values of other rows that are positioned before or after the current row.

Setting up the test data

We’re going to do some interesting statistics today using publicly available data from the World Bank. To keep things simple, we’ll only do analyses for the G8 countries:

  • Canada (CA)
  • France (FR)
  • Germany (DE)
  • Italy (IT)
  • Japan (JP)
  • Russian Federation (RU)
  • United Kingdom (GB)
  • United States (US)

And for those countries, let’s consider the following data points for the years 2009-2012:

GDP per capita (current US$)

          2009    2010    2011    2012
CA      40,764  47,465  51,791  52,409	
DE      40,270  40,408  44,355  42,598	
FR      40,488  39,448  42,578  39,759	
GB      35,455  36,573  38,927  38,649	
IT      35,724  34,673  36,988  33,814	
JP      39,473  43,118  46,204  46,548	
RU       8,616  10,710  13,324  14,091	
US      46,999  48,358  49,855  51,755	

Central government debt, total (% of GDP)

          2009    2010    2011    2012
CA        51.3    51.4    52.5    53.5	
DE        47.6    55.5    55.1    56.9	
FR        85.0    89.2    93.2   103.8	
GB        71.7    85.2    99.6   103.2	
IT       121.3   119.9   113.0   131.1	
JP       166.8   174.8   189.5   196.5	
RU         8.7     9.1     9.3     9.4	
US        76.3    85.6    90.1    93.8	

Let’s put all that data into a fact table like so (PostgreSQL syntax):

CREATE TABLE countries (
  code CHAR(2) NOT NULL,
  year INT NOT NULL,
  gdp_per_capita DECIMAL(10, 2) NOT NULL,
  govt_debt DECIMAL(10, 2) NOT NULL
);

INSERT INTO countries
VALUES ('CA', 2009, 40764, 51.3),
       ('CA', 2010, 47465, 51.4),
       ('CA', 2011, 51791, 52.5),
       ('CA', 2012, 52409, 53.5),
       ('DE', 2009, 40270, 47.6),
       ('DE', 2010, 40408, 55.5),
       ('DE', 2011, 44355, 55.1),
       ('DE', 2012, 42598, 56.9),
       ('FR', 2009, 40488, 85.0),
       ('FR', 2010, 39448, 89.2),
       ('FR', 2011, 42578, 93.2),
       ('FR', 2012, 39759,103.8),
       ('GB', 2009, 35455,121.3),
       ('GB', 2010, 36573, 85.2),
       ('GB', 2011, 38927, 99.6),
       ('GB', 2012, 38649,103.2),
       ('IT', 2009, 35724,121.3),
       ('IT', 2010, 34673,119.9),
       ('IT', 2011, 36988,113.0),
       ('IT', 2012, 33814,131.1),
       ('JP', 2009, 39473,166.8),
       ('JP', 2010, 43118,174.8),
       ('JP', 2011, 46204,189.5),
       ('JP', 2012, 46548,196.5),
       ('RU', 2009,  8616,  8.7),
       ('RU', 2010, 10710,  9.1),
       ('RU', 2011, 13324,  9.3),
       ('RU', 2012, 14091,  9.4),
       ('US', 2009, 46999, 76.3),
       ('US', 2010, 48358, 85.6),
       ('US', 2011, 49855, 90.1),
       ('US', 2012, 51755, 93.8);

Start the querying fun

People who are used to SQL-92 syntax will be able to quickly find the highest GDP per capita or the highest debt from the table. It’s an easy query like this one:

SELECT MAX(gdp_per_capita), MAX(govt_debt)
FROM countries;

Which will return:

52409.00    196.50

But that’s not interesting. We don’t even know what countries and what years these values are associated with.

A standard SQL-92 (and also a standard relational) query to return all of these values would look something like this:

SELECT 
  'highest gdp per capita' AS what,
  c1.*
FROM countries c1
WHERE NOT EXISTS (
  SELECT 1
  FROM countries c2
  WHERE c1.gdp_per_capita < c2.gdp_per_capita
)
UNION ALL
SELECT
  'highest government debt' AS what,
  c1.*
FROM countries c1
WHERE NOT EXISTS (
  SELECT 1
  FROM countries c2
  WHERE c1.govt_debt < c2.govt_debt
)

In essence, we select those rows for which there doesn’t exist any other row with a higher value for either gdp_per_capita (first subselect) or govt_debt (second subselect).

Trick! Use quantified comparison predicates!

If your database supports quantified comparison predicates, then you can write this a bit more concisely like this:

SELECT 
  'highest gdp per capita' AS what,
  countries.*
FROM countries
WHERE gdp_per_capita >= ALL (
  SELECT gdp_per_capita FROM countries
)
UNION ALL
SELECT
  'highest government debt' AS what,
  countries.*
FROM countries
WHERE govt_debt >= ALL (
  SELECT govt_debt FROM countries
)

Which is essentially the same as…

SELECT 
  'highest gdp per capita' AS what,
  countries.*
FROM countries
WHERE gdp_per_capita = (
  SELECT MAX(gdp_per_capita) FROM countries
)
UNION ALL
SELECT
  'highest government debt' AS what,
  countries.*
FROM countries
WHERE govt_debt = (
  SELECT MAX(govt_debt) FROM countries
)

The output would be:

what                     code year       gdp    debt
----------------------------------------------------
highest gdp per capita   CA   2012  52409.00   53.50
highest government debt  JP   2012  46548.00  196.50

That’s a lot of SQL for so little analysis capability, and somehow, it just doesn’t feel entirely right to query the same table four times with all these subselects!

FIRST_VALUE() and LAST_VALUE()

This is where window functions come into play, and in this particular case, FIRST_VALUE() or LAST_VALUE(). For now, let’s focus on calculating the maximum GDP per capita from the data set:

SELECT
  countries.*,
  FIRST_VALUE (code)           OVER (w_gdp) AS max_gdp_code,
  FIRST_VALUE (year)           OVER (w_gdp) AS max_gdp_year,
  FIRST_VALUE (gdp_per_capita) OVER (w_gdp) AS max_gdp_gdp,
  FIRST_VALUE (govt_debt)      OVER (w_gdp) AS max_gdp_debt
FROM
  countries
WINDOW
  w_gdp  AS (ORDER BY gdp_per_capita DESC)
ORDER BY
  code, year

Notice how we make use of the SQL standard WINDOW clause, which is only currently supported by PostgreSQL and Sybase SQL Anywhere.

If you’re using Oracle or any other commercial database, you can simply substitute the window specification for the w_gdp window reference in the various OVER() clauses to achieve equivalent behaviour, or you can use jOOQ’s WINDOW clause support and let jOOQ do the same for you.
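For example, the previous query could be rewritten as follows by inlining the window specification into each OVER() clause (this should work on Oracle and most other databases with window function support):

SELECT
  countries.*,
  FIRST_VALUE (code)           OVER (ORDER BY gdp_per_capita DESC) AS max_gdp_code,
  FIRST_VALUE (year)           OVER (ORDER BY gdp_per_capita DESC) AS max_gdp_year,
  FIRST_VALUE (gdp_per_capita) OVER (ORDER BY gdp_per_capita DESC) AS max_gdp_gdp,
  FIRST_VALUE (govt_debt)      OVER (ORDER BY gdp_per_capita DESC) AS max_gdp_debt
FROM
  countries
ORDER BY
  code, year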

jOOQ generates Java code from your database and lets you build type safe SQL queries through its fluent API.

The above query will not produce any aggregates, but it will add the values for the country / year with the highest GDP per capita to every row in the table:

each country             highest gdp per capita (overall)
-----------------------------------------------
CA 2009 40764.00 51.30   CA 2012 52409.00 53.50
CA 2010 47465.00 51.40   CA 2012 52409.00 53.50
CA 2011 51791.00 52.50   CA 2012 52409.00 53.50
CA 2012 52409.00 53.50   CA 2012 52409.00 53.50

This is extremely interesting because the data is not yet aggregated – the original data set remains unchanged, enriched with new computed columns.

You can then further process things, e.g. compare each country / year with the highest GDP per capita and with the highest debt-to-GDP ratio of the same year:

SELECT
  countries.*,
  TO_CHAR(100 * gdp_per_capita / FIRST_VALUE (gdp_per_capita) OVER (w_gdp) , '999.99 %') gdp_rank,
  TO_CHAR(100 * govt_debt      / FIRST_VALUE (govt_debt)      OVER (w_debt), '999.99 %') debt_rank
FROM
  countries
WINDOW
  w_gdp  AS (PARTITION BY year ORDER BY gdp_per_capita DESC),
  w_debt AS (PARTITION BY year ORDER BY govt_debt DESC)
ORDER BY
  code, year

Notice how I’ve added PARTITION BY to the window definitions of the WINDOW clause. I’ve done this because I want to partition the data set by year, in order to find the highest GDP / debt values for each year, not for the whole data set.

The outcome of the above query can then be seen here:

country                   percentages
------------------------------------------
CA   2009  40764   51.3    86.73%   30.76%
CA   2010  47465   51.4    98.15%   29.41%
CA   2011  51791   52.5   100.00%   27.70%
CA   2012  52409   53.5   100.00%   27.23%
DE   2009  40270   47.6    85.68%   28.54%
DE   2010  40408   55.5    83.56%   31.75%
DE   2011  44355   55.1    85.64%   29.08%
DE   2012  42598   56.9    81.28%   28.96%
FR   2009  40488   85.0    86.15%   50.96%
FR   2010  39448   89.2    81.57%   51.03%
FR   2011  42578   93.2    82.21%   49.18%
FR   2012  39759  103.8    75.86%   52.82%
GB   2009  35455  121.3    75.44%   72.72%
GB   2010  36573   85.2    75.63%   48.74%
GB   2011  38927   99.6    75.16%   52.56%
GB   2012  38649  103.2    73.74%   52.52%
IT   2009  35724  121.3    76.01%   72.72%
IT   2010  34673  119.9    71.70%   68.59%
IT   2011  36988  113.0    71.42%   59.63%
IT   2012  33814  131.1    64.52%   66.72%
JP   2009  39473  166.8    83.99%  100.00%
JP   2010  43118  174.8    89.16%  100.00%
JP   2011  46204  189.5    89.21%  100.00%
JP   2012  46548  196.5    88.82%  100.00%
RU   2009  8616     8.7    18.33%    5.22%
RU   2010  10710    9.1    22.15%    5.21%
RU   2011  13324    9.3    25.73%    4.91%
RU   2012  14091    9.4    26.89%    4.78%
US   2009  46999   76.3   100.00%   45.74%
US   2010  48358   85.6   100.00%   48.97%
US   2011  49855   90.1    96.26%   47.55%
US   2012  51755   93.8    98.75%   47.74%

We could say that among the G8 countries, Canada has improved the most in recent years, decreasing its debt-to-GDP ratio relative to the other countries while at the same time increasing its GDP per capita relative to the other countries.

Instead of partitioning the data set by year, we could also partition it by country, and find the best / worst year for each country over the years:

SELECT
  countries.*,
  TO_CHAR(100 * gdp_per_capita / FIRST_VALUE (gdp_per_capita) OVER (w_gdp), '999.99 %') gdp_rank,
  TO_CHAR(100 * govt_debt / FIRST_VALUE (govt_debt) OVER (w_debt), '999.99 %') debt_rank
FROM
  countries
WINDOW
  w_gdp  AS (PARTITION BY code ORDER BY gdp_per_capita DESC),
  w_debt AS (PARTITION BY code ORDER BY govt_debt DESC)
ORDER BY
  code, year

The result would now look quite different:

country                    percentages
------------------------------------------
CA   2009  40764   51.3    77.78%   95.89%
CA   2010  47465   51.4    90.57%   96.07%
CA   2011  51791   52.5    98.82%   98.13%
CA   2012  52409   53.5   100.00%  100.00%
DE   2009  40270   47.6    90.79%   83.66%
DE   2010  40408   55.5    91.10%   97.54%
DE   2011  44355   55.1   100.00%   96.84%
DE   2012  42598   56.9    96.04%  100.00%
FR   2009  40488   85.0    95.09%   81.89%
FR   2010  39448   89.2    92.65%   85.93%
FR   2011  42578   93.2   100.00%   89.79%
FR   2012  39759  103.8    93.38%  100.00%
GB   2009  35455  121.3    91.08%  100.00%
GB   2010  36573   85.2    93.95%   70.24%
GB   2011  38927   99.6   100.00%   82.11%
GB   2012  38649  103.2    99.29%   85.08%
IT   2009  35724  121.3    96.58%   92.52%
IT   2010  34673  119.9    93.74%   91.46%
IT   2011  36988  113.0   100.00%   86.19%
IT   2012  33814  131.1    91.42%  100.00%
JP   2009  39473  166.8    84.80%   84.89%
JP   2010  43118  174.8    92.63%   88.96%
JP   2011  46204  189.5    99.26%   96.44%
JP   2012  46548  196.5   100.00%  100.00%
RU   2009   8616    8.7    61.15%   92.55%
RU   2010  10710    9.1    76.01%   96.81%
RU   2011  13324    9.3    94.56%   98.94%
RU   2012  14091    9.4   100.00%  100.00%
US   2009  46999   76.3    90.81%   81.34%
US   2010  48358   85.6    93.44%   91.26%
US   2011  49855   90.1    96.33%   96.06%
US   2012  51755   93.8   100.00%  100.00%

As you can see, most countries have generally performed better in terms of GDP per capita over the years, and most countries have also almost strictly increased their debt-to-GDP ratio year after year (with occasional dips for Germany and Italy, and with the United Kingdom being the only country whose ratio peaked back in 2009). Russia and Canada have seen the strongest growth in GDP per capita relative to their own 2009 levels.

In the above examples, we’ve been mainly using FIRST_VALUE(). LAST_VALUE() is almost the opposite function with respect to ordering, much like MAX() is the opposite of MIN(). I’m saying almost because there is a caveat when using LAST_VALUE() with ORDER BY: a window definition that uses ORDER BY is implicitly equivalent to a window definition that uses ORDER BY with a so-called “frame clause”:

-- Find the "last" year over the complete data set
-- This may not behave as expected, so always provide
-- an explicit ORDER BY clause
LAST_VALUE (year) OVER()

-- These two are implicitly equivalent. We're not
-- looking for the "last" year in the complete data
-- set, but only in the frame that is "before" the
-- current row. In other words, the current row is
-- always the "last value"!
LAST_VALUE (year) OVER(ORDER BY year)
LAST_VALUE (year) OVER(
  ORDER BY year 
  ROWS BETWEEN UNBOUNDED PRECEDING 
           AND CURRENT ROW
)

-- Find the "last" year in the complete data set with
-- explicit ordering
LAST_VALUE (year) OVER(
  ORDER BY year 
  ROWS BETWEEN UNBOUNDED PRECEDING 
           AND UNBOUNDED FOLLOWING
)

LEAD() and LAG()

The previous functions were about comparing values with the maximum / minimum (FIRST_VALUE() and LAST_VALUE()) within a data set. But using window functions, you can also compare things with the next / previous value. Or with the second next / second previous, etc. The functions used for this are called LEAD() (for the next value) and LAG() (for the previous value).

This is best explained by example:

-- Use this view as a data source containing
-- all the distinct years: 2009-2012
WITH years AS (
  SELECT DISTINCT year
  FROM countries
)
SELECT
  FIRST_VALUE (year)    OVER w_year AS first,
  LEAD        (year, 2) OVER w_year AS lead2,
  LEAD        (year)    OVER w_year AS lead1,
  year,
  LAG         (year)    OVER w_year AS lag1,
  LAG         (year, 2) OVER w_year AS lag2,
  LAST_VALUE  (year)    OVER w_year AS last
FROM
  years
WINDOW
  w_year AS (
    ORDER BY year DESC
    ROWS BETWEEN UNBOUNDED PRECEDING
             AND UNBOUNDED FOLLOWING
  )
ORDER BY year

The result is now simply:

first  lead2  lead1  year   lag1   lag2   last
----------------------------------------------
2012                 2009   2010   2011   2009
2012          2009   2010   2011   2012   2009
2012   2009   2010   2011   2012          2009
2012   2010   2011   2012                 2009

LEAD() and LAG() are really the best window functions to help understand the whole concept of window functions. For each year, you can see immediately how the previous and next year in the same window and frame can be generated using very simple function calls.

This could be used, for instance, to find the “neighboring” countries in terms of GDP per capita for every country / year:

SELECT
  year,
  code,
  gdp_per_capita,
  LEAD (code)           OVER w_gdp AS runner_up_code,
  LEAD (gdp_per_capita) OVER w_gdp AS runner_up_gdp,
  LAG  (code)           OVER w_gdp AS leader_code,
  LAG  (gdp_per_capita) OVER w_gdp AS leader_gdp
FROM
  countries
WINDOW
  w_gdp AS (PARTITION BY year ORDER BY gdp_per_capita DESC)
ORDER BY year DESC, gdp_per_capita DESC

Which returns:

year   country      runner-up    leader
------------------------------------------
2012   CA  52409    US  51755
2012   US  51755    JP  46548    CA  52409
2012   JP  46548    DE  42598    US  51755
2012   DE  42598    FR  39759    JP  46548
2012   FR  39759    GB  38649    DE  42598
2012   GB  38649    IT  33814    FR  39759
2012   IT  33814    RU  14091    GB  38649
2012   RU  14091                 IT  33814

2011   CA  51791    US  49855
2011   US  49855    JP  46204    CA  51791
2011   JP  46204    DE  44355    US  49855
2011   DE  44355    FR  42578    JP  46204
2011   FR  42578    GB  38927    DE  44355
2011   GB  38927    IT  36988    FR  42578
2011   IT  36988    RU  13324    GB  38927
2011   RU  13324                 IT  36988

2010   US  48358    CA  47465
2010   CA  47465    JP  43118    US  48358
2010   JP  43118    DE  40408    CA  47465
2010   DE  40408    FR  39448    JP  43118
2010   FR  39448    GB  36573    DE  40408
2010   GB  36573    IT  34673    FR  39448
2010   IT  34673    RU  10710    GB  36573
2010   RU  10710                 IT  34673

2009   US  46999    CA  40764
2009   CA  40764    FR  40488    US  46999
2009   FR  40488    DE  40270    CA  40764
2009   DE  40270    JP  39473    FR  40488
2009   JP  39473    IT  35724    DE  40270
2009   IT  35724    GB  35455    JP  39473
2009   GB  35455    RU   8616    IT  35724
2009   RU   8616                 GB  35455

If you want to do more fancy analyses, you could now compare percentages between leaders and runners-up, etc. (a sketch of this follows below). Another great use-case for LEAD() and LAG() can be seen in this article.
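For instance, here’s a minimal sketch of such a comparison, computing each country’s GDP per capita as a percentage of the country immediately ahead of it in the same year, reusing the w_gdp window from above (the leading country of each year has no predecessor, so its percentage is NULL):

SELECT
  year,
  code,
  gdp_per_capita,
  100 * gdp_per_capita / LAG (gdp_per_capita) OVER w_gdp AS pct_of_leader
FROM
  countries
WINDOW
  w_gdp AS (PARTITION BY year ORDER BY gdp_per_capita DESC)
ORDER BY year DESC, gdp_per_capita DESC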

Conclusion

Window functions are an incredibly powerful feature that is available from all major commercial databases, and also from a couple of Open Source databases like PostgreSQL, Firebird, and CUBRID. There has essentially been SQL before window functions, and SQL after window functions.

With jOOQ, you can leverage window functions on a type safe level like anything else related to SQL. The last query we’ve seen can be written simply like this:

// Static import the generated tables and all
// of jOOQ's functions from DSL
import static org.jooq.example.db.postgres.Tables.*;
import static org.jooq.impl.DSL.*;

// Shorten the table reference by aliasing
Countries c = COUNTRIES;

// Specify a window definition
WindowDefinition w_gdp =
  name("w_gdp").as(
    partitionBy(c.YEAR)
      .orderBy(c.GDP_PER_CAPITA.desc())
  );

// Write the query as if it were native SQL
System.out.println(
    DSL.using(conn)
       .select(
           c.YEAR,
           c.CODE,
           c.GDP_PER_CAPITA,
           lead(c.CODE)          .over(w_gdp).as("runner_up_code"),
           lead(c.GDP_PER_CAPITA).over(w_gdp).as("runner_up_gdp"),
           lag (c.CODE)          .over(w_gdp).as("leader_code"),
           lag (c.GDP_PER_CAPITA).over(w_gdp).as("leader_gdp")
       )
       .from(c)
       .window(w_gdp)
       .orderBy(c.YEAR.desc(), c.GDP_PER_CAPITA.desc())
       .fetch()
);

The above program will output

+----+----+--------------+--------------+-------------+-----------+----------+
|year|code|gdp_per_capita|runner_up_code|runner_up_gdp|leader_code|leader_gdp|
+----+----+--------------+--------------+-------------+-----------+----------+
|2012|CA  |      52409.00|US            |     51755.00|{null}     |    {null}|
|2012|US  |      51755.00|JP            |     46548.00|CA         |  52409.00|
|2012|JP  |      46548.00|DE            |     42598.00|US         |  51755.00|
|2012|DE  |      42598.00|FR            |     39759.00|JP         |  46548.00|
|2012|FR  |      39759.00|GB            |     38649.00|DE         |  42598.00|
|2012|GB  |      38649.00|IT            |     33814.00|FR         |  39759.00|
|2012|IT  |      33814.00|RU            |     14091.00|GB         |  38649.00|
|2012|RU  |      14091.00|{null}        |       {null}|IT         |  33814.00|
|2011|CA  |      51791.00|US            |     49855.00|{null}     |    {null}|
|2011|US  |      49855.00|JP            |     46204.00|CA         |  51791.00|
|2011|JP  |      46204.00|DE            |     44355.00|US         |  49855.00|
|2011|DE  |      44355.00|FR            |     42578.00|JP         |  46204.00|
|2011|FR  |      42578.00|GB            |     38927.00|DE         |  44355.00|
|2011|GB  |      38927.00|IT            |     36988.00|FR         |  42578.00|
|2011|IT  |      36988.00|RU            |     13324.00|GB         |  38927.00|
|2011|RU  |      13324.00|{null}        |       {null}|IT         |  36988.00|
|2010|US  |      48358.00|CA            |     47465.00|{null}     |    {null}|
|2010|CA  |      47465.00|JP            |     43118.00|US         |  48358.00|
|2010|JP  |      43118.00|DE            |     40408.00|CA         |  47465.00|
|2010|DE  |      40408.00|FR            |     39448.00|JP         |  43118.00|
|2010|FR  |      39448.00|GB            |     36573.00|DE         |  40408.00|
|2010|GB  |      36573.00|IT            |     34673.00|FR         |  39448.00|
|2010|IT  |      34673.00|RU            |     10710.00|GB         |  36573.00|
|2010|RU  |      10710.00|{null}        |       {null}|IT         |  34673.00|
|2009|US  |      46999.00|CA            |     40764.00|{null}     |    {null}|
|2009|CA  |      40764.00|FR            |     40488.00|US         |  46999.00|
|2009|FR  |      40488.00|DE            |     40270.00|CA         |  40764.00|
|2009|DE  |      40270.00|JP            |     39473.00|FR         |  40488.00|
|2009|JP  |      39473.00|IT            |     35724.00|DE         |  40270.00|
|2009|IT  |      35724.00|GB            |     35455.00|JP         |  39473.00|
|2009|GB  |      35455.00|RU            |      8616.00|IT         |  35724.00|
|2009|RU  |       8616.00|{null}        |       {null}|GB         |  35455.00|
+----+----+--------------+--------------+-------------+-----------+----------+

jOOQ generates Java code from your database and lets you build type safe SQL queries through its fluent API.

No matter whether you’re using jOOQ for your database integration, or just plain SQL – start using window functions today.

Liked this article?

Read more about how ROW_NUMBER(), RANK(), and DENSE_RANK() work.

Painless Access from Java to PL/SQL Procedures with jOOQ

PL/SQL is one of those things.

Most people try to steer clear of it. Few people really love it. I just happen to suffer from Stockholm syndrome, since I work a lot with banks.

Even if the PL/SQL syntax and the tooling sometimes remind me of the good old times…

“Fitzgerald, we’re cruisin’ for a bruisin’. I’ll rewind the tape.” – “Don’t have a cow, Lawrence. We can insert a new PL/SQL cartridge any time.”
Image in public domain

… I still believe that a procedural language (well, any language) combined with SQL can do miracles in terms of productiveness, performance and expressivity.

In this article, we’ll see later on, how we can achieve the same with SQL (and PL/SQL) in Java, using jOOQ.

But first, a little bit of history…

Accessing PL/SQL from Java

One of the biggest reasons why Java developers in particular refrain from writing their own PL/SQL code is that the interface between PL/SQL and Java – ojdbc – is a major pain. We’ll see why in the following examples.

Assume we’re working on an Oracle-port of the popular Sakila database (originally created for MySQL). This particular Sakila/Oracle port was implemented by DB Software Laboratory and published under the BSD license.

Here’s a partial view of that Sakila database.

ERD created with vertabelo.com. Learn how to use Vertabelo with jOOQ.

Now, let’s assume that we have an API in the database that doesn’t expose the above schema, but exposes a PL/SQL API instead. The API might look something like this:

CREATE TYPE LANGUAGE_T AS OBJECT (
  language_id SMALLINT,
  name CHAR(20),
  last_update DATE
);
/

CREATE TYPE LANGUAGES_T AS TABLE OF LANGUAGE_T;
/

CREATE TYPE FILM_T AS OBJECT (
  film_id int,
  title VARCHAR(255),
  description CLOB,
  release_year VARCHAR(4),
  language LANGUAGE_T,
  original_language LANGUAGE_T,
  rental_duration SMALLINT,
  rental_rate DECIMAL(4,2),
  length SMALLINT,
  replacement_cost DECIMAL(5,2),
  rating VARCHAR(10),
  special_features VARCHAR(100),
  last_update DATE
);
/

CREATE TYPE FILMS_T AS TABLE OF FILM_T;
/

CREATE TYPE ACTOR_T AS OBJECT (
  actor_id numeric,
  first_name VARCHAR(45),
  last_name VARCHAR(45),
  last_update DATE
);
/

CREATE TYPE ACTORS_T AS TABLE OF ACTOR_T;
/

CREATE TYPE CATEGORY_T AS OBJECT (
  category_id SMALLINT,
  name VARCHAR(25),
  last_update DATE
);
/

CREATE TYPE CATEGORIES_T AS TABLE OF CATEGORY_T;
/

CREATE TYPE FILM_INFO_T AS OBJECT (
  film FILM_T,
  actors ACTORS_T,
  categories CATEGORIES_T
);
/

You’ll notice immediately that this is essentially just a 1:1 copy of the schema, in this case modelled as Oracle SQL OBJECT and TABLE types, apart from the FILM_INFO_T type, which acts as an aggregate.

Now, our DBA (or our database developer) has implemented the following API for us to access the above information:

CREATE OR REPLACE PACKAGE RENTALS AS
  FUNCTION GET_ACTOR(p_actor_id INT) RETURN ACTOR_T;
  FUNCTION GET_ACTORS RETURN ACTORS_T;
  FUNCTION GET_FILM(p_film_id INT) RETURN FILM_T;
  FUNCTION GET_FILMS RETURN FILMS_T;
  FUNCTION GET_FILM_INFO(p_film_id INT) RETURN FILM_INFO_T;
  FUNCTION GET_FILM_INFO(p_film FILM_T) RETURN FILM_INFO_T;
END RENTALS;
/

This, ladies and gentlemen, is how you can now…

… tediously access the PL/SQL API with JDBC

So, in order to avoid the awkward CallableStatement with its OUT parameter registration and JDBC escape syntax, we’re going to fetch a FILM_INFO_T record via a SQL statement like this:

try (PreparedStatement stmt = conn.prepareStatement(
        "SELECT rentals.get_film_info(1) FROM DUAL");
     ResultSet rs = stmt.executeQuery()) {

    // STRUCT unnesting here...
}

So far so good. Luckily, there is Java 7’s try-with-resources to help us clean up those myriad JDBC objects. Now how to proceed? What will we get back from this ResultSet? A java.sql.Struct:

while (rs.next()) {
    Struct film_info_t = (Struct) rs.getObject(1);

    // And so on...
}

Now, the brave ones among you would continue downcasting the java.sql.Struct to an even more obscure and arcane oracle.sql.STRUCT, which contains almost no Javadoc, but tons of additional, deprecated, vendor-specific methods.

For now, let’s stick with the “standard API”, though.

Interlude:

Let’s take a moment to appreciate JDBC in times of Java 8.

When Java 5 was introduced, so were generics. We have rewritten our big code bases to remove all sorts of meaningless boilerplate type casts that are now no longer needed. With the exception of JDBC. When it comes to JDBC, guessing appropriate types is all a matter of luck. We’re accessing complex nested data structures provided by external systems by dereferencing elements by index, and then taking wild guesses at the resulting data types.

Lambdas have just been introduced, yet JDBC still talks to the mainframe.

Rhonda said, she put that STRUCT right between jack 73 and 75 on array F-B4. I wonder if I need my AC/DC converter to plug it
Image in public domain

And then…

And here be dragons. And STRUCTS
Original image in public domain

OK, enough of these rants.

Let’s continue navigating our STRUCT

while (rs.next()) {
    Struct film_info_t = (Struct) rs.getObject(1);

    Struct film_t = (Struct) film_info_t.getAttributes()[0];
    String title = (String) film_t.getAttributes()[1];
    Clob description_clob = (Clob) film_t.getAttributes()[2];
    String description = description_clob.getSubString(1, (int) description_clob.length());

    Struct language_t = (Struct) film_t.getAttributes()[4];
    String language = (String) language_t.getAttributes()[1];

    System.out.println("Film       : " + title);
    System.out.println("Description: " + description);
    System.out.println("Language   : " + language);
}

From the initial STRUCT that we received at position 1 from the ResultSet, we can continue dereferencing attributes by index. Unfortunately, we’ll constantly need to look up the SQL type in Oracle (or in some documentation) to remember the order of the attributes:

CREATE TYPE FILM_INFO_T AS OBJECT (
  film FILM_T,
  actors ACTORS_T,
  categories CATEGORIES_T
);
/

And that’s not it! The first attribute of the FILM_INFO_T type is yet another nested STRUCT. And then, those horrible CLOBs. The above code is not strictly complete. In some cases that only the maintainers of JDBC can fathom, java.sql.Clob.free() has to be called to make sure that resources are freed in time. Remember that a CLOB, depending on your database and driver configuration, may live outside the scope of your transaction.

Unfortunately, the method is called free() instead of AutoCloseable.close(), such that try-with-resources cannot be used. So here we go:

List<Clob> clobs = new ArrayList<>();

while (rs.next()) {
    try {
        Struct film_info_t = (Struct) rs.getObject(1);
        Struct film_t = (Struct) film_info_t.getAttributes()[0];

        String title = (String) film_t.getAttributes()[1];
        Clob description_clob = (Clob) film_t.getAttributes()[2];

        // Remember the CLOB, so that it can be freed later on
        clobs.add(description_clob);
        String description = description_clob.getSubString(1, (int) description_clob.length());

        Struct language_t = (Struct) film_t.getAttributes()[4];
        String language = (String) language_t.getAttributes()[1];

        System.out.println("Film       : " + title);
        System.out.println("Description: " + description);
        System.out.println("Language   : " + language);
    }
    finally {
        // And don't think you can call this early, either
        // The internal specifics are mysterious!
        for (Clob clob : clobs)
            clob.free();

        clobs.clear();
    }
}

That’s about it. Now we have found ourselves with some nice little output on the console:

Film       : ACADEMY DINOSAUR
Description: A Epic Drama of a Feminist And a Mad 
             Scientist who must Battle a Teacher in
             The Canadian Rockies
Language   : English             

That’s about it – You may think! But…

The pain has only started

… because we’re not done yet. There are also two nested table types that we need to deserialise from the STRUCT. If you haven’t given up yet (bear with me, good news is nigh), you’ll enjoy reading about how to fetch and unwind a java.sql.Array. Let’s continue right after the printing of the film:

Array actors_t = (Array) film_info_t.getAttributes()[1];
Array categories_t = (Array) film_info_t.getAttributes()[2];

Again, we’re accessing attributes by index, which we have to remember and which can easily break. The ACTORS_T array is nothing but yet another collection of wrapped STRUCT values:

System.out.println("Actors     : ");

Object[] actors = (Object[]) actors_t.getArray();
for (Object actor : actors) {
    Struct actor_t = (Struct) actor;

    System.out.println(
        "  " + actor_t.getAttributes()[1]
       + " " + actor_t.getAttributes()[2]);
}

You’ll notice a few things:

  • The Array.getArray() method returns an array. But it declares returning Object. We have to manually cast.
  • We can’t cast to Struct[] even if that would be a sensible type. But the type returned by ojdbc is Object[] (containing Struct elements)
  • The foreach loop also cannot dereference a Struct from the right hand side. There’s no way of coercing the type of actor into what we know it really is
  • We could’ve used Java 8 and Streams and such, but unfortunately, all lambda expressions that can be passed to the Streams API disallow throwing of checked exceptions. And JDBC throws checked exceptions. That’ll be even uglier.

Anyway. Now that we’ve finally achieved this, we can see the print output:

Film       : ACADEMY DINOSAUR
Description: A Epic Drama of a Feminist And a Mad 
             Scientist who must Battle a Teacher in
             The Canadian Rockies
Language   : English             
Actors     : 
  PENELOPE GUINESS
  CHRISTIAN GABLE
  LUCILLE TRACY
  SANDRA PECK
  JOHNNY CAGE
  MENA TEMPLE
  WARREN NOLTE
  OPRAH KILMER
  ROCK DUKAKIS
  MARY KEITEL

When will this madness stop?

It’ll stop right here!

So far, this article read like a tutorial (or rather: medieval torture) of how to deserialise nested user-defined types from Oracle SQL to Java (don’t get me started on serialising them again!)

In the next section, we’ll see how the exact same business logic (listing Film with ID=1 and its actors) can be implemented with no pain at all using jOOQ and its source code generator. Check this out:

// Simply call the packaged stored function from
// Java, and get a deserialised, type safe record
FilmInfoTRecord film_info_t = Rentals.getFilmInfo1(
    configuration, new BigInteger("1"));

// The generated record has getters (and setters)
// for type safe navigation of nested structures
FilmTRecord film_t = film_info_t.getFilm();

// In fact, all these types have generated getters:
System.out.println("Film       : " + film_t.getTitle());
System.out.println("Description: " + film_t.getDescription());
System.out.println("Language   : " + film_t.getLanguage().getName());

// Simply loop nested type safe array structures
System.out.println("Actors     : ");
for (ActorTRecord actor_t : film_info_t.getActors()) {
    System.out.println(
        "  " + actor_t.getFirstName()
       + " " + actor_t.getLastName());
}

System.out.println("Categories     : ");
for (CategoryTRecord category_t : film_info_t.getCategories()) {
    System.out.println(category_t.getName());
}

Is that it?

Yes!

Wow, I mean, this is just as though all those PL/SQL types and procedures / functions were actually part of Java. All the caveats that we’ve seen before are hidden behind those generated types and implemented in jOOQ, so you can concentrate on what you originally wanted to do. Access the data objects and do meaningful work with them. Not serialise / deserialise them!

Let’s take a moment and appreciate this consumer advertising:

jOOQ generates Java code from your database and lets you build type safe SQL queries through its fluent API.

Not convinced yet?

I told you not to get me started on serialising the types to JDBC. And I won’t, but here’s how to serialise the types to jOOQ, because that’s a piece of cake!

Let’s consider this other aggregate type, that returns a customer’s rental history:

CREATE TYPE CUSTOMER_RENTAL_HISTORY_T AS OBJECT (
  customer CUSTOMER_T,
  films FILMS_T
);
/

And the full PL/SQL package specs:

CREATE OR REPLACE PACKAGE RENTALS AS
  FUNCTION GET_ACTOR(p_actor_id INT) RETURN ACTOR_T;
  FUNCTION GET_ACTORS RETURN ACTORS_T;
  FUNCTION GET_CUSTOMER(p_customer_id INT) RETURN CUSTOMER_T;
  FUNCTION GET_CUSTOMERS RETURN CUSTOMERS_T;
  FUNCTION GET_FILM(p_film_id INT) RETURN FILM_T;
  FUNCTION GET_FILMS RETURN FILMS_T;
  FUNCTION GET_CUSTOMER_RENTAL_HISTORY(p_customer_id INT) RETURN CUSTOMER_RENTAL_HISTORY_T;
  FUNCTION GET_CUSTOMER_RENTAL_HISTORY(p_customer CUSTOMER_T) RETURN CUSTOMER_RENTAL_HISTORY_T;
  FUNCTION GET_FILM_INFO(p_film_id INT) RETURN FILM_INFO_T;
  FUNCTION GET_FILM_INFO(p_film FILM_T) RETURN FILM_INFO_T;
END RENTALS;
/

So, when calling RENTALS.GET_CUSTOMER_RENTAL_HISTORY we can find all the films that a customer has ever rented. Let’s do that for all customers whose FIRST_NAME is “JAMIE”, and this time, we’re using Java 8:

// We call the stored function directly inline in
// a SQL statement
dsl().select(Rentals.getCustomer(
          CUSTOMER.CUSTOMER_ID
      ))
     .from(CUSTOMER)
     .where(CUSTOMER.FIRST_NAME.eq("JAMIE"))

// This returns Result<Record1<CustomerTRecord>>
// We unwrap the CustomerTRecord and consume
// the result with a lambda expression
     .fetch()
     .map(Record1::value1)
     .forEach(customer -> {
         System.out.println("Customer  : ");
         System.out.println("- Name    : " 
           + customer.getFirstName() 
           + " " + customer.getLastName());
         System.out.println("- E-Mail  : " 
           + customer.getEmail());
         System.out.println("- Address : " 
           + customer.getAddress().getAddress());
         System.out.println("            " 
           + customer.getAddress().getPostalCode() 
           + " " + customer.getAddress().getCity().getCity());
         System.out.println("            " 
           + customer.getAddress().getCity().getCountry().getCountry());

// Now, let's send the customer over the wire again to
// call that other stored procedure, fetching his
// rental history:
         CustomerRentalHistoryTRecord history = 
           Rentals.getCustomerRentalHistory2(dsl().configuration(), customer);

         System.out.println("  Customer Rental History : ");
         System.out.println("    Films                 : ");

         history.getFilms().forEach(film -> {
             System.out.println("      Film                : " 
               + film.getTitle());
             System.out.println("        Language          : " 
               + film.getLanguage().getName());
             System.out.println("        Description       : " 
               + film.getDescription());

// And then, let's call the first procedure again
// in order to get a film's actors and categories
             FilmInfoTRecord info = 
               Rentals.getFilmInfo2(dsl().configuration(), film);

             info.getActors().forEach(actor -> {
                 System.out.println("          Actor           : " 
                   + actor.getFirstName() + " " + actor.getLastName());
             });

             info.getCategories().forEach(category -> {
                 System.out.println("          Category        : " 
                   + category.getName());
             });
         });
     });

… and a short extract of the output produced by the above:

Customer  : 
- Name    : JAMIE RICE
- E-Mail  : JAMIE.RICE@sakilacustomer.org
- Address : 879 Newcastle Way
            90732 Sterling Heights
            United States
  Customer Rental History : 
    Films                 : 
      Film                : ALASKA PHANTOM
        Language          : English             
        Description       : A Fanciful Saga of a Hunter
                            And a Pastry Chef who must
                            Vanquish a Boy in Australia
          Actor           : VAL BOLGER
          Actor           : BURT POSEY
          Actor           : SIDNEY CROWE
          Actor           : SYLVESTER DERN
          Actor           : ALBERT JOHANSSON
          Actor           : GENE MCKELLEN
          Actor           : JEFF SILVERSTONE
          Category        : Music
      Film                : ALONE TRIP
        Language          : English             
        Description       : A Fast-Paced Character
                            Study of a Composer And a
                            Dog who must Outgun a Boat
                            in An Abandoned Fun House
          Actor           : ED CHASE
          Actor           : KARL BERRY
          Actor           : UMA WOOD
          Actor           : WOODY JOLIE
          Actor           : SPENCER DEPP
          Actor           : CHRIS DEPP
          Actor           : LAURENCE BULLOCK
          Actor           : RENEE BALL
          Category        : Music

If you’re using Java and PL/SQL…

… then you should click on the below banner and download the free trial right now to experiment with jOOQ and Oracle:

jOOQ generates Java code from your database and lets you build type safe SQL queries through its fluent API.

The Oracle port of the Sakila database is available from this URL for free, under the terms of the BSD license:

https://www.jooq.org/sakila

Finally, it is time to enjoy writing PL/SQL again!

Further reading:

See also this article about how jOOQ 3.9 provides access to PL/SQL RECORD types.