Simulating Latency with SQL / JDBC

I’ve run across a fun little trick to simulate latency in your development environments when testing some SQL queries. Possible use-cases include validating that backend latency won’t bring down your frontend, or that your UX is still bearable, etc.

The solution is PostgreSQL and Hibernate specific, though it doesn’t have to be. It also uses a stored function to work around the limitations of a VOID function in PostgreSQL, but that limitation can be addressed differently as well, without storing anything auxiliary in the catalog.

To remove the Hibernate dependency, you can just use the pg_sleep function directly in a NULL predicate, but don’t try it like this!

select 1
from t_book
-- Don't do this!
where pg_sleep(1) is not null;

This will sleep 1 second per row (!), as can be seen in the explain plan. Let’s limit the query to 3 rows to see:

explain analyze
select 1
from t_book
where pg_sleep(1) is not null
limit 3;

And the result is:

Limit  (cost=0.00..1.54 rows=3 width=4) (actual time=1002.142..3005.374 rows=3 loops=1)
   ->  Seq Scan on t_book  (cost=0.00..2.05 rows=4 width=4) (actual time=1002.140..3005.366 rows=3 loops=1)
         Filter: (pg_sleep('1'::double precision) IS NOT NULL)
 Planning Time: 2.036 ms
 Execution Time: 3005.401 ms

As you can see, the whole query took about 3 seconds for 3 rows. In fact, this is also what happens in Gunnar’s example from the tweet, except that he was filtering by ID, which “helps” hide this effect.

Instead, we can use what Oracle calls scalar subquery caching: the fact that a scalar subquery can reasonably be expected to be side-effect free (despite the obvious side-effect of pg_sleep), meaning that some RDBMS cache its result per query execution.

explain analyze
select 1
from t_book
where (select pg_sleep(1)) is not null
limit 3;

The result is now:

Limit  (cost=0.01..1.54 rows=3 width=4) (actual time=1001.177..1001.178 rows=3 loops=1)
   InitPlan 1 (returns $0)
     ->  Result  (cost=0.00..0.01 rows=1 width=4) (actual time=1001.148..1001.148 rows=1 loops=1)
   ->  Result  (cost=0.00..2.04 rows=4 width=4) (actual time=1001.175..1001.176 rows=3 loops=1)
         One-Time Filter: ($0 IS NOT NULL)
         ->  Seq Scan on t_book  (cost=0.00..2.04 rows=4 width=0) (actual time=0.020..0.021 rows=3 loops=1)
 Planning Time: 0.094 ms
 Execution Time: 1001.223 ms

We’re now getting the desired one-time filter. However, I don’t really like this hack, because it depends on an optimisation that is optional, not a formal guarantee. This may be good enough for a quick simulation of latency, but don’t rely on this kind of optimisation in production lightly.

Another approach that seems to guarantee this behaviour would be to use a MATERIALIZED CTE:

explain
with s (x) as materialized (select pg_sleep(1))
select *
from t_book
where (select x from s) is not null;

I’m now again using a scalar subquery, because I somehow need to access the CTE, and I don’t want to place it in the FROM clause, where it would impact my projection.

The plan being:

Result  (cost=0.03..2.07 rows=4 width=943) (actual time=1001.289..1001.292 rows=4 loops=1)
   One-Time Filter: ($1 IS NOT NULL)
   CTE s
     ->  Result  (...) (actual time=1001.262..1001.263 rows=1 loops=1)
   InitPlan 2 (returns $1)
     ->  CTE Scan on s  (cost=0.00..0.02 rows=1 width=4) (actual time=1001.267..1001.268 rows=1 loops=1)
   ->  Seq Scan on t_book  (cost=0.03..2.07 rows=4 width=943) (actual time=0.015..0.016 rows=4 loops=1)
 Planning Time: 0.049 ms
 Execution Time: 1001.308 ms

Again, the plan contains a one-time filter, which is what we want here.

Using a JDBC-based approach

If your application is JDBC-based, you don’t have to simulate the latency by tweaking the query. You can simply proxy JDBC in one way or another. Let’s look at this little program:

try (Connection c1 = db.getConnection()) {

    // A Connection proxy that intercepts preparedStatement() calls
    Connection c2 = new DefaultConnection(c1) {
        @Override
        public PreparedStatement prepareStatement(String sql) 
        throws SQLException {
            sleep(1000L);
            return super.prepareStatement(sql);
        }
    };

    long time = System.nanoTime();
    String sql = "SELECT id FROM book";

    // This call now has a 1 second "latency"
    try (PreparedStatement s = c2.prepareStatement(sql);
        ResultSet rs = s.executeQuery()) {
        while (rs.next())
            System.out.println(rs.getInt(1));
    }

    System.out.println("Time taken: " + 
       (System.nanoTime() - time) / 1_000_000L + "ms");
}

Where:

public static void sleep(long time) {
    try {
        Thread.sleep(time);
    }
    catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
}

For simplicity, this uses jOOQ’s DefaultConnection, which acts as a proxy, conveniently delegating all methods to some delegate connection and allowing you to override only specific methods. The output of the program is:

1
2
3
4
Time taken: 1021ms

This simulates the latency on the prepareStatement() event. Obviously, you’d extract the proxying into some utility in order not to clutter your code. You could even proxy all your queries in development and enable the sleep call only based on a system property.
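For instance, such a utility could look like the following sketch. The LatencySimulator class and the simulate.latency.millis property are hypothetical names I’m making up for illustration; it bundles a copy of the sleep() helper from above:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

import org.jooq.tools.jdbc.DefaultConnection;

public class LatencySimulator {

    // Wraps a Connection and adds latency to prepareStatement() calls,
    // but only if e.g. -Dsimulate.latency.millis=1000 is set
    public static Connection wrap(Connection delegate) {
        long millis = Long.getLong("simulate.latency.millis", 0L);

        // No property set: return the connection unchanged
        if (millis <= 0L)
            return delegate;

        return new DefaultConnection(delegate) {
            @Override
            public PreparedStatement prepareStatement(String sql)
            throws SQLException {
                sleep(millis);
                return super.prepareStatement(sql);
            }
        };
    }

    static void sleep(long time) {
        try {
            Thread.sleep(time);
        }
        catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}

The earlier program would then simply call Connection c2 = LatencySimulator.wrap(c1), and the latency kicks in only when the system property is present.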

Alternatively, we could also simulate it on the executeQuery() event:

try (Connection c = db.getConnection()) {
    long time = System.nanoTime();

    // A PreparedStatement proxy that intercepts executeQuery() calls
    try (PreparedStatement s = new DefaultPreparedStatement(
            c.prepareStatement("SELECT id FROM t_book")
        ) {
            @Override
            public ResultSet executeQuery() throws SQLException {
                sleep(1000L);
                return super.executeQuery();
            }
        };

        // This call now has a 1 second "latency"
        ResultSet rs = s.executeQuery()) {
        while (rs.next())
            System.out.println(rs.getInt(1));
    }

    System.out.println("Time taken: " +
        (System.nanoTime() - time) / 1_000_000L + "ms");
}

This now uses the jOOQ convenience class DefaultPreparedStatement. If you need these classes, just add the jOOQ Open Source Edition dependency (there’s nothing RDBMS specific in them) to any JDBC-based application, including Hibernate-based ones:

<dependency>
  <groupId>org.jooq</groupId>
  <artifactId>jooq</artifactId>
</dependency>

Alternatively, just copy the sources of the DefaultConnection or DefaultPreparedStatement classes if you don’t need the entire dependency, or proxy the JDBC API yourself.
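If you go the do-it-yourself route, a JDK dynamic proxy is one way to intercept JDBC calls without any dependency. This is just a sketch under that assumption (the withLatency() name is made up, and it reuses the sleep() helper from above); it is not how jOOQ’s classes are implemented:

import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Proxy;
import java.sql.Connection;

// Intercept prepareStatement() calls on any Connection using a JDK
// dynamic proxy, delegating everything else unchanged
public static Connection withLatency(Connection delegate, long millis) {
    return (Connection) Proxy.newProxyInstance(
        Connection.class.getClassLoader(),
        new Class<?>[] { Connection.class },
        (proxy, method, args) -> {
            if ("prepareStatement".equals(method.getName()))
                sleep(millis);

            try {
                return method.invoke(delegate, args);
            }

            // Re-throw e.g. SQLException instead of the reflective wrapper
            catch (InvocationTargetException e) {
                throw e.getCause();
            }
        }
    );
}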

A jOOQ-based solution

If you’re already using jOOQ (and you should be!), you can do this even more easily, by implementing an ExecuteListener. Our program would now look like this:

try (Connection c = db.getConnection()) {
    DSLContext ctx = DSL.using(new DefaultConfiguration()
        .set(c)
        .set(new CallbackExecuteListener()
            .onExecuteStart(x -> sleep(1000L))
        )
    );

    long time = System.nanoTime();
    System.out.println(ctx.fetch("SELECT id FROM t_book"));
    System.out.println("Time taken: " +
        (System.nanoTime() - time) / 1_000_000L + "ms");
}

Still the same result:

+----+
|id  |
+----+
|1   |
|2   |
|3   |
|4   |
+----+
Time taken: 1025ms

The difference is that with a single intercepting callback, we can now add this sleep to all types of statements, including prepared statements and static statements, whether they return result sets, update counts, or both.
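For instance, with the same ctx as above, each of the following calls would now incur the additional second, regardless of statement type (just a sketch reusing the previous configuration):

// Plain SQL query returning a result set
ctx.fetch("SELECT id FROM t_book");

// Plain SQL statement returning an update count
ctx.execute("UPDATE t_book SET title = title");

// The listener is also invoked for queries built with the jOOQ DSL
ctx.fetchCount(DSL.table("t_book"));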

Funky String Function Simulation in SQLite

SQLite is so light, it doesn’t have any useful string functions. It doesn’t have ASCII(), LPAD(), RPAD(), REPEAT(), POSITION(), you name it. It does, however, have a wonderful RANDOMBLOB() function. So if you really need a good random number generator, use a SQLite database and generate a 1GB blob. That should give you a couple of random numbers for the years to come.

For a full (or rather, empty) list see the SQLite function reference here:
http://www.sqlite.org/lang_corefunc.html

Function Simulation: REPEAT()

Not having any functions doesn’t mean that you can’t simulate them. You can. Take REPEAT(), for instance. Apart from the RANDOMBLOB(), you can also generate a ZEROBLOB(). It’s a blob with lots of zeros in it. But you can’t just go and do this:

-- Simulate REPEAT('abc', 3)
replace(zeroblob(3), 0, 'abc')

That would be too easy. The problem with the zeroblob is that, when cast to a string, it is actually a zero-terminated string. Quite usual when programming in C. But hey, the first character is a zero, so the resulting string is terminated right at the beginning. How useful is that??

But here’s a trick: QUOTE() the ZEROBLOB(). That escapes all of its bytes in hex format. In other words:

quote(zeroblob(3)) yields X'000000'

Nice. Now we’ve got three extra characters wrapped around twice as many zeroes as there were bytes in the blob. So we’ll simply do this:

-- Simulate REPEAT('abc', 3)
replace(substr(quote(zeroblob(2)), 3, 3), '0', 'abc')

-- Or more generally: X = 'abc', Y = 3
replace(substr(quote(zeroblob((Y + 1) / 2)), 3, Y), '0', X)

Isn’t this starting to be fun? Note that I have also documented this simulation here:
http://stackoverflow.com/questions/11568496/how-to-simulate-repeat-in-sqlite

Function Simulation: LPAD() and RPAD()

REPEAT() was easy. But REPEAT() was inspired by LPAD() and RPAD(), which are similar to REPEAT(), except that a padding character is added to the left or right of another string, until the resulting string reaches a given length. ZEROBLOB() will help us again! Let’s consider RPAD():

-- Simulate RPAD('abc', 7, '-')
'abc' || replace(replace(substr(quote(zeroblob(4)), 3, 4), '''', ''), '0', '-')

-- Or more generally:
-- RPAD() Using X = 7, Y = '-', Z = 'abc'
Z || replace(
       replace(
         substr(
           quote(zeroblob((X + 1) / 2)), 
           3, (X - length(Z))
         ), '''', ''
       ), '0', Y
     )

-- LPAD() Using X = 7, Y = '-', Z = 'abc'
replace(
  replace(
    substr(
      quote(zeroblob((X + 1) / 2)), 
      3, (X - length(Z))
    ), '''', ''
  ), '0', Y
) || Z

Now if this isn’t funky! This was actually something I didn’t come up with myself. It was an answer I was given on Stack Overflow, where great minds spend lots of spare time on weird problems like this:
http://stackoverflow.com/questions/6576343/how-to-simulate-lpad-rpad-with-sqlite

Of course, these simulations will be part of the next version of jOOQ, so you don’t have to worry any longer about how to do LPAD(), RPAD(), and REPEAT().
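For what it’s worth, in recent jOOQ versions these functions are exposed on the DSL API, and jOOQ emulates them on dialects that lack native support, along the lines of the workarounds above (the literal values here are purely illustrative):

import static org.jooq.impl.DSL.*;

import org.jooq.Field;

// Dialect-agnostic expressions; on SQLite, jOOQ renders emulations
Field<String> repeated = repeat(val("abc"), 3);    // REPEAT('abc', 3)
Field<String> rpadded  = rpad(val("abc"), 7, '-'); // RPAD('abc', 7, '-')
Field<String> lpadded  = lpad(val("abc"), 7, '-'); // LPAD('abc', 7, '-')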

Simulation of TRUNC() in Derby

Derby is missing a lot of the functions that other databases usually provide. One example is the TRUNC(value, decimals) function. According to Wikipedia, truncation can be achieved like this:

-- trunc(x, n) 
CASE WHEN x > 0 
THEN
  floor(power(10, n) * x) / power(10, n) 
ELSE 
  ceil(power(10, n) * x) / power(10, n) 
END

Unfortunately, there is no POWER(base, exponent) function in Derby either. But no problem, we can simulate that as well. Let’s consult Wikipedia again, and we’ll find:

power(b, x) = exp(x * ln(b))

If we substitute that into the original simulation, we get for Derby:

-- trunc(x, n) 
CASE WHEN x > 0 
THEN 
  floor(exp(n * ln(10)) * x) / exp(n * ln(10))
ELSE 
  ceil(exp(n * ln(10)) * x) / exp(n * ln(10)) 
END

Verbose, probably quite inefficient, but effective! Let’s run a short test, too:

create table test (x numeric(10, 5), n int);

insert into test values (11.111, 0);
insert into test values (11.111, 1);
insert into test values (11.111, 2);
insert into test values (11.111, -1);

select
  x, n, 
  case when x >= 0
  then
    floor(exp(n * ln(10)) * x) / exp(n * ln(10))
  else
    ceil(exp(n * ln(10)) * x) / exp(n * ln(10))
  end "trunc(x, n)"
from test;

The above yields

X        N   TRUNC(X, N)
-------  --  -----------
11.111    0  11
11.111    1  11.1
11.111    2  11.11
11.111   -1  10

How to simulate MySQL’s INSERT statement extensions

I have previously posted about the SQL MERGE statement, and how powerful it is here:
https://blog.jooq.org/2011/11/29/arcane-magic-with-the-sql2003-merge-statement/

Unfortunately, not all databases support this statement. Also, it is very often quite a bit of overkill when all you want to do is simply INSERT or UPDATE a single record, depending on whether it already exists.

MySQL’s nice INSERT statement extensions

MySQL has a couple of very nice extensions to the INSERT statement. Some of them are also supported by the CUBRID database. Let’s have a look at two features of the (simplified) syntax definition:

INSERT [IGNORE]
    [INTO] tbl_name [(col_name,...)]
    VALUES (expr,...),(...),...
    [ ON DUPLICATE KEY UPDATE
      col_name=expr
        [, col_name=expr] ... ]

 
Essentially, we’re looking at two keywords / clauses:

  1. IGNORE: This will cause the INSERT statement to fail silently with a warning if we’re about to insert a duplicate key
  2. ON DUPLICATE KEY UPDATE: This will cause the INSERT statement to perform an update on all affected records instead, if we’re about to insert a duplicate key

The full documentation can be seen here:
http://dev.mysql.com/doc/refman/5.5/en/insert.html

How jOOQ models these statements in other dialects

jOOQ’s API conveniently allows for using those MySQL features directly in INSERT statements. An example:

// Insert an author who has recently married and thus
// changed their last name. If we already have that author, then
// update their last name:

create.insertInto(AUTHOR, AUTHOR.ID, AUTHOR.LAST_NAME)
      .values(3, "Smith")
      .onDuplicateKeyUpdate()
      .set(AUTHOR.LAST_NAME, "Smith")
      .execute();

// Alternatively, we may not care if the author already
// exists, we just want to insert them without failure:

create.insertInto(AUTHOR, AUTHOR.ID, AUTHOR.LAST_NAME)
      .values(3, "Smith")
      .onDuplicateKeyIgnore()
      .execute();

Note how jOOQ doesn’t model the IGNORE keyword as a keyword between INSERT [IGNORE] INTO. This keeps the public API a bit more concise, while at the same time being more consistent with pre-existing functionality, both in the INSERT syntax and in the Loader API:

https://blog.jooq.org/2011/08/23/loading-csv-data-with-jooq-2/

Now, databases that do not support these syntax elements can simulate them using MERGE. The above statements will render something like this on an Oracle database:

-- ON DUPLICATE KEY UPDATE statement
merge into "AUTHOR" 
using (select 1 from dual) 
on ("AUTHOR"."ID" = 3) 
when matched then update set "LAST_NAME" = 'Smith' 
when not matched then insert ("ID", "LAST_NAME") values (3, 'Smith')

-- ON DUPLICATE KEY IGNORE statement
merge into "AUTHOR" 
using (select 1 from dual) 
on ("AUTHOR"."ID" = 3) 
when not matched then insert ("ID", "LAST_NAME") values (3, 'Smith')

Conclusion

These practices allow you to write short and concise code using the jOOQ API, while still being able to target several backend databases optimally using efficient SQL statements. This will help you increase both your development and your runtime performance!