How Adding a UNIQUE Constraint on a OneToOne Relationship Helps Performance

A lot of people use SQL constraints mainly to enforce data integrity, and that’s already a very good thing. A UNIQUE constraint, for instance, makes sure that any given value (or tuple, in the case of a composite constraint) appears at most once in a table:

CREATE TABLE x (
  a NUMBER(10),
  UNIQUE (a)
);

-- This works:
INSERT INTO x VALUES (1);

-- This fails:
INSERT INTO x VALUES (1);

Constraints are also good for (SELECT) performance

One thing that people often do not think about, though, is that constraints can also serve as very valuable meta information for the database optimiser when it searches for the optimal execution plan. It makes a big difference for an optimiser…

  • To be unaware of how many different values there are in any given column (worst case)
  • To be able to estimate the different values in any given column (statistics are present)
  • To know that each value can appear at most once

In the last case, a UNIQUE (or PRIMARY KEY) constraint tells the optimiser exactly that. Once the optimiser knows that a certain operation returns at most one row (and not 10, 100, or even 10M rows), a nested loop suddenly becomes extremely cheap, as it can abort as soon as one row is found.

A Java analogy

Compare this with the following Java code:

// This is much "harder" ...
List<Object> objects = unknownSize();
for (Object object : objects) {
    doSomethingWith(object);
}

// ... than this, where we "know" the list
// only contains one value
for (Object object : Collections.singletonList("abc")) {
    doSomethingWith(object);
}

If Java had such optimisation capabilities (bonus question: can the JIT do it?), then the second loop could be optimised as such:

// The loop can be avoided:
doSomethingWith("abc");

Let’s look at Oracle

Consider the following concrete example with Oracle:

CREATE TABLE x1 (
  a NUMBER(10) NOT NULL,
  data VARCHAR2(100),
  CONSTRAINT pk_y1 PRIMARY KEY (a)
);

CREATE TABLE y1 (
  a NUMBER(10) NOT NULL, 
  optional_data VARCHAR2(100),
  CONSTRAINT fk_y1 FOREIGN KEY (a) REFERENCES x1(a)
);

CREATE INDEX i1 ON y1(a);
CREATE INDEX i1data ON x1(data);

INSERT INTO x1
SELECT level, dbms_random.string('a', 100)
FROM dual
CONNECT BY level <= 10000;

INSERT INTO y1
SELECT level, dbms_random.string('a', 100)
FROM dual
CONNECT BY level <= 5000;

This is a typical one-to-many relationship between x1 and y1. With these constraints in place, there can be between 0 and N rows in y1 for each row in x1. As users, we know that there is at most one row in y1 for any row in x1, but we don’t enforce this knowledge with a constraint.

Let’s look at the following query:

SELECT count(*)
FROM x1
JOIN y1 USING (a)
WHERE data LIKE 'a%';

With this query, we want all rows in x1 whose data starts with the letter ‘a’, and which have a matching row in y1 (i.e. for which we have any optional_data). The execution plan is:

-----------------------------------------------------
| Operation                | Name   | Rows  | Cost  |
-----------------------------------------------------
| SELECT STATEMENT         |        |       |    29 |
|  SORT AGGREGATE          |        |     1 |       |
|   HASH JOIN              |        |   176 |    29 |
|    VIEW                  |        |   176 |    24 |
|     HASH JOIN            |        |       |       |
|      INDEX RANGE SCAN    | I1DATA |   176 |     4 |
|      INDEX FAST FULL SCAN| PK_Y1  |   176 |    24 |
|    INDEX FAST FULL SCAN  | I1     |  5000 |     5 |
-----------------------------------------------------

As you can see, Oracle chooses to run a hash join operation: all the values from x1 starting with ‘a’ are fetched (an estimated 176 rows) and put in a hash map, which is then probed with the entire set of 5000 values in y1, fetched from the index i1.
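To continue the Java analogy from before, the hash join can be sketched roughly as follows. All names here are illustrative, not Oracle’s actual implementation; the point is that the probe side scans every y1 key, match or not:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class HashJoinSketch {
    // Build a hash set over the filtered x1 keys, then probe it with
    // every y1 key -- all of y1 is scanned, regardless of matches.
    static int hashJoinCount(List<Integer> x1Keys, List<Integer> y1Keys) {
        Set<Integer> buildSide = new HashSet<>(x1Keys); // build phase (~176 keys)
        int count = 0;
        for (Integer key : y1Keys) {                    // probe phase (all 5000 keys)
            if (buildSide.contains(key)) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        // Keys 2 and 3 appear on both sides:
        System.out.println(hashJoinCount(
            List.of(1, 2, 3), List.of(2, 3, 4, 5))); // prints 2
    }
}
```

The build phase is cheap here, but the probe phase has to touch every row of the larger input, which is what the INDEX FAST FULL SCAN over i1 corresponds to in the plan.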

How does this compare with using a UNIQUE constraint?

We’ll create almost the exact same schema as follows:

CREATE TABLE x2 (
  a NUMBER(10) NOT NULL,
  data VARCHAR2(100),
  CONSTRAINT pk_x2 PRIMARY KEY (a)
);

CREATE TABLE y2 (
  a NUMBER(10) NOT NULL, 
  optional_data VARCHAR2(100),
  CONSTRAINT uk_y2 UNIQUE (a),
  CONSTRAINT fk_y2 FOREIGN KEY (a) REFERENCES x2(a)
);

CREATE INDEX i2data ON x2(data);

INSERT INTO x2
SELECT * FROM x1;

INSERT INTO y2
SELECT * FROM y1;

BEGIN
  dbms_stats.gather_table_stats('TEST', 'X2');
  dbms_stats.gather_table_stats('TEST', 'Y2');
END;
/

The data is exactly the same, but now we enforce a UNIQUE constraint on y2’s foreign key, making this effectively a one-to-one relationship. Check out what happens when we run the exact same query…

SELECT count(*)
FROM x2
JOIN y2 USING (a)
WHERE data LIKE 'a%';

Its execution plan is now:

-----------------------------------------------------
| Operation                | Name   | Rows  | Cost  |
-----------------------------------------------------
| SELECT STATEMENT         |        |       |    25 |
|  SORT AGGREGATE          |        |     1 |       |
|   NESTED LOOPS           |        |   176 |    25 |
|    VIEW                  |        |   176 |    25 |
|     HASH JOIN            |        |       |       |
|      INDEX RANGE SCAN    | I2DATA |   176 |     5 |
|      INDEX FAST FULL SCAN| PK_X2  |   176 |    24 |
|    INDEX UNIQUE SCAN     | UK_Y2  |     1 |     0 |
-----------------------------------------------------

As you can see, the overall cost has decreased from 29 to 25, as we’re no longer using a hash join but a nested loop join, which is probably faster if our statistics are not way off: we only have to look up the single row in y2 corresponding to each of x2’s estimated 176 rows whose data starts with the letter ‘a’.
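In the spirit of the earlier Java analogy, the nested loop with its INDEX UNIQUE SCAN can be sketched like this (names are illustrative; a Map stands in for the unique index):

```java
import java.util.List;
import java.util.Map;

public class NestedLoopSketch {
    // With a UNIQUE index on y2(a), each driving row needs at most one
    // probe into y2 -- the lookup stops as soon as the single row is found.
    static int nestedLoopCount(List<Integer> x2Keys, Map<Integer, String> y2Index) {
        int count = 0;
        for (Integer key : x2Keys) {        // driving row source (~176 rows)
            if (y2Index.containsKey(key)) { // INDEX UNIQUE SCAN: one probe, cost 0
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        Map<Integer, String> y2Index = Map.of(1, "row1", 2, "row2");
        System.out.println(nestedLoopCount(
            List.of(1, 2, 3), y2Index)); // prints 2
    }
}
```

Unlike the hash join sketch, nothing here ever iterates over all of y2; the work is proportional to the 176 driving rows only.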

But let’s not trust the execution plan, let’s benchmark things (as discussed in a previous article). Run the following code in SQL Developer, for instance:

SET SERVEROUTPUT ON
DECLARE
  v_ts TIMESTAMP;
  v_repeat CONSTANT NUMBER := 1000;
BEGIN
  v_ts := SYSTIMESTAMP;
    
  FOR i IN 1..v_repeat LOOP
    FOR rec IN (
      SELECT count(*)
      FROM x1
      JOIN y1 USING (a)
      WHERE data LIKE 'a%'
    ) LOOP
      NULL;
    END LOOP;
  END LOOP;
    
  dbms_output.put_line('Without UNIQUE constraint: '
    || (SYSTIMESTAMP - v_ts));
  v_ts := SYSTIMESTAMP;
    
  FOR i IN 1..v_repeat LOOP
    FOR rec IN (
      SELECT count(*)
      FROM x2
      JOIN y2 USING (a)
      WHERE data LIKE 'a%'
    ) LOOP
      NULL;
    END LOOP;
  END LOOP;
    
  dbms_output.put_line('With    UNIQUE constraint: '
    || (SYSTIMESTAMP - v_ts));
END;
/

The above benchmark repeats each individual query 1000 times. The results speak for themselves:

Without UNIQUE constraint: +000000000 00:00:04.250000000
With    UNIQUE constraint: +000000000 00:00:02.847000000

Remark

The lack of a UNIQUE constraint can happen in situations where you prefer using a surrogate primary key in the referencing table (which I omitted in the examples for brevity). If you’re “sharing” the primary key or using natural keys, this issue won’t happen to you, of course.

Conclusion

The execution planner can make a more informed decision if it has formal knowledge about your data via an additional UNIQUE constraint. This formal knowledge is far more powerful than any statistics that might indicate the same thing. In the absence of a formal UNIQUE constraint, the database will always have to make sure there is no other row once it has found one. With a formal UNIQUE constraint, it can stop looking as soon as that unique row is found. This can drastically speed up queries. As we’ve seen in the above example, this improved things by a factor of roughly 1.5: the query without the constraint took about 50% longer.
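The difference between “keep looking” and “stop at the first match” can be sketched in Java, with a plain sorted array standing in for the index (a simplification, of course; a real index is a B-tree, and the probe counts are illustrative):

```java
public class UniqueScanSketch {
    // Without a UNIQUE constraint, an index range scan must keep reading
    // past the first match to be sure there are no further duplicates.
    static int rangeScanProbes(int[] sortedKeys, int key) {
        int probes = 0;
        for (int k : sortedKeys) {
            probes++;
            if (k > key) break;  // must read one entry beyond the match
        }
        return probes;
    }

    // With a UNIQUE constraint, the scan can stop at the first match.
    static int uniqueScanProbes(int[] sortedKeys, int key) {
        int probes = 0;
        for (int k : sortedKeys) {
            probes++;
            if (k >= key) break; // stop as soon as the row is found
        }
        return probes;
    }

    public static void main(String[] args) {
        int[] keys = {1, 2, 3, 4, 5};
        System.out.println(rangeScanProbes(keys, 3));  // prints 4
        System.out.println(uniqueScanProbes(keys, 3)); // prints 3
    }
}
```

One extra probe per lookup looks trivial, but multiplied across every row of a join it adds up, which is what the benchmark above reflects.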

Always tell your database as much as you can. Your SELECT performance will greatly increase, at the small cost of a little overhead when inserting data.

5 thoughts on “How Adding a UNIQUE Constraint on a OneToOne Relationship Helps Performance”

  1. If Java had such optimisation capabilities (bonus question: can the JIT do it?)

    AFAIK there’s nothing preventing it from happening. Inlining everything, escape analysis, and unrolling the loop twice should make it clear that there’ll be no more iterations. Whether it actually happens is another matter.

    Always tell your database as much as you can.

    Sure. But sometimes I can’t: I have a table containing name VARCHAR(255) NOT NULL, isRegistered BOOLEAN NOT NULL.

    I’d like to tell the DB that
    1. name mustn’t be empty
    2. all names of registered entities must be unique (the non-registered ones are user-private and don’t matter).

    The first constraint is easily ensured by the application. A trigger would do, too, but it’s not important enough to make me do such things.

    The second is something I’ll have to solve one day, but I’m a bit clueless. Am I modelling it wrong?

      • Looking forward to the JIT post.

        CREATE … INDEX … WHERE …

        Bad luck with MySQL. I guess I’ll leave it to the application (as changes to name are extremely rare, I can lock everything). Or I create a column publicName containing NULL or name, if I need indexing.

        • Hmm, MySQL… Doesn’t even support function-based indexes. But how about this?

          create table t (a int, b int);
          create unique index i on t (a, b);
          
          insert into t values (1, 1);    -- Works
          insert into t values (1, 1);    -- Fails
          insert into t values (1, null); -- Works
          insert into t values (1, null); -- Works!
          

          Because null values are not considered unique. So, instead of using a 1/0 flag, you can use a 1/null flag…

          I begin to understand your aversion against SQL, though. You would perhaps feel differently about it, if you weren’t using MySQL 🙂

          • Instead of using a 1/0 flag, you can use a 1/null flag

            This is a bit ugly. It’d be nice if there were a single-valued data type (“isRegistered EMPTY NULL”).

            However, I guess I can’t use NULL for 0, as it gets sent to the client, which is hard to get updated. I guess I could create a custom mapping for Gson or for Hibernate; otherwise I probably need a dummy “isRegistered2”.

            You would perhaps feel differently about it, if you weren’t using MySQL

            I guess I’d dislike it even more with Oracle’s empty strings. I know that MySQL is quirky (I try to comply with the standard). The missing features are a problem, sometimes. But I’m afraid SQL itself is the problem. Though I like its power, I hate its syntax, and it feels as if it were written partly backwards (CTEs would help sometimes).
