How to Find Redundant Indexes in SQL

The following two indexes are redundant in most SQL databases:

CREATE INDEX i_actor_1 ON actor (last_name);
CREATE INDEX i_actor_2 ON actor (last_name, first_name);

It is usually safe to drop the first index, because all queries that filter on the LAST_NAME column only can still benefit from the second index I_ACTOR_2. The reason is that LAST_NAME is the first column of the composite index I_ACTOR_2 (it would be a different story if it weren’t the first column).
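
To make that concrete, here are a few hypothetical queries against the actor table (not from the original example, just for intuition):

-- Can use I_ACTOR_2 just as well as I_ACTOR_1, because LAST_NAME is
-- the leading column of the composite index:
SELECT * FROM actor WHERE last_name = 'MILLER';

-- Can obviously only use I_ACTOR_2:
SELECT * FROM actor WHERE last_name = 'MILLER' AND first_name = 'ANNA';

-- Cannot make efficient use of either index, because FIRST_NAME is not
-- a leading column in any of them:
SELECT * FROM actor WHERE first_name = 'ANNA';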

Note: Dropping the first index is usually safe because the benefits probably outweigh the costs:

Benefits of dropping

  • Faster insertions, updates, and deletions, because one less index needs to be maintained
  • Less disk space used

Costs of dropping

  • Querying through the composite index can be slightly slower, as can be seen in the benchmark below

Let’s measure that cost for Oracle, PostgreSQL, and SQL Server in this particular case (beware, as always when interpreting benchmarks: they heavily depend on a lot of context, especially data size!)

Oracle

Preparation:

CREATE TABLE t (
  a NUMBER(10) NOT NULL,
  b NUMBER(10) NOT NULL
);

INSERT INTO t (a, b)
SELECT level, level
FROM dual
CONNECT BY level <= 100000;

CREATE INDEX i1 ON t(a);
CREATE INDEX i2 ON t(a, b);

EXEC dbms_stats.gather_table_stats('TEST', 'T');

Benchmark:

SET SERVEROUTPUT ON
CREATE TABLE results (
  run     NUMBER(2),
  stmt    NUMBER(2),
  elapsed NUMBER
);

DECLARE
  v_ts TIMESTAMP WITH TIME ZONE;
  v_repeat CONSTANT NUMBER := 2000;
BEGIN

  -- Repeat benchmark several times to avoid warmup penalty
  FOR r IN 1..5 LOOP
    v_ts := SYSTIMESTAMP;
      
    FOR i IN 1..v_repeat LOOP
      FOR rec IN (
        SELECT /*+INDEX(t i1)*/ * FROM t WHERE a = 1
      ) LOOP
        NULL;
      END LOOP;
    END LOOP;
  
    INSERT INTO results VALUES (r, 1, 
      SYSDATE + ((SYSTIMESTAMP - v_ts) * 86400) - SYSDATE);
    v_ts := SYSTIMESTAMP;
      
    FOR i IN 1..v_repeat LOOP
      FOR rec IN (
        SELECT /*+INDEX(t i2)*/ * FROM t WHERE a = 1
      ) LOOP
        NULL;
      END LOOP;
    END LOOP;
      
    INSERT INTO results VALUES (r, 2, 
      SYSDATE + ((SYSTIMESTAMP - v_ts) * 86400) - SYSDATE);
  END LOOP;
  
  FOR rec IN (
    SELECT 
      run, stmt, 
      CAST(elapsed / MIN(elapsed) OVER() AS NUMBER(10, 5)) ratio 
    FROM results
  )
  LOOP
    dbms_output.put_line('Run ' || rec.run || 
      ', Statement ' || rec.stmt || 
      ' : ' || rec.ratio);
  END LOOP;
END;
/

DROP TABLE results;

The result being:

Run 1, Statement 1 : 1.4797
Run 1, Statement 2 : 1.45545

Run 2, Statement 1 : 1.1997
Run 2, Statement 2 : 1.01121

Run 3, Statement 1 : 1.13606
Run 3, Statement 2 : 1

Run 4, Statement 1 : 1.13455
Run 4, Statement 2 : 1.00242

Run 5, Statement 1 : 1.13303
Run 5, Statement 2 : 1.00606

Some notes on benchmarks here.

The fastest query execution in the above result is normalised to 1, and the other executions are shown as multiples of it. As you can see, there’s roughly a 10% difference in this case between going through the single-column index and going through the composite index. The benefits of dropping the redundant index (faster insertions) will often outweigh this cost (slightly slower queries), but don’t apply this advice blindly in a read-heavy / write-rarely database.

PostgreSQL

A similar difference can be seen in a PostgreSQL benchmark. No hints can be used to choose indexes, so we’re simply creating two tables:

CREATE TABLE t1 (
  a INT NOT NULL,
  b INT NOT NULL
);
CREATE TABLE t2 (
  a INT NOT NULL,
  b INT NOT NULL
);

INSERT INTO t1 (a, b)
SELECT s, s
FROM generate_series(1, 100000) AS s(s);

INSERT INTO t2 (a, b)
SELECT s, s
FROM generate_series(1, 100000) AS s(s);

CREATE INDEX i1 ON t1(a);
CREATE INDEX i2 ON t2(a, b);

ANALYZE t1;
ANALYZE t2;

Benchmark:

DO $$
DECLARE
  v_ts TIMESTAMP;
  v_repeat CONSTANT INT := 10000;
  rec RECORD;
BEGIN

  -- Repeat benchmark several times to avoid warmup penalty
  FOR r IN 1..5 LOOP
    v_ts := clock_timestamp();

    FOR i IN 1..v_repeat LOOP
      FOR rec IN (
        SELECT * FROM t1 WHERE a = 1
      ) LOOP
        NULL;
      END LOOP;
    END LOOP;

    RAISE INFO 'Run %, Statement 1: %', r, 
      (clock_timestamp() - v_ts); 
    v_ts := clock_timestamp();

    FOR i IN 1..v_repeat LOOP
      FOR rec IN (
        SELECT * FROM t2 WHERE a = 1
      ) LOOP
        NULL;
      END LOOP;
    END LOOP;

    RAISE INFO 'Run %, Statement 2: %', r, 
      (clock_timestamp() - v_ts); 
    RAISE INFO '';
  END LOOP;
END$$;

Result:

INFO:  Run 1, Statement 1: 00:00:00.071891
INFO:  Run 1, Statement 2: 00:00:00.080833

INFO:  Run 2, Statement 1: 00:00:00.076329
INFO:  Run 2, Statement 2: 00:00:00.079772

INFO:  Run 3, Statement 1: 00:00:00.073137
INFO:  Run 3, Statement 2: 00:00:00.079483

INFO:  Run 4, Statement 1: 00:00:00.073456
INFO:  Run 4, Statement 2: 00:00:00.081508

INFO:  Run 5, Statement 1: 00:00:00.077148
INFO:  Run 5, Statement 2: 00:00:00.083535

SQL Server

Preparation:

CREATE TABLE t (
  a INT NOT NULL,
  b INT NOT NULL
);

WITH s(s) AS (
  SELECT 1
  UNION ALL
  SELECT s + 1 FROM s WHERE s < 100
)
INSERT INTO t
SELECT TOP 100000 
  row_number() over(ORDER BY (SELECT 1)), 
  row_number() over(ORDER BY (select 1)) 
FROM s AS s1, s AS s2, s AS s3;

CREATE INDEX i1 ON t(a);
CREATE INDEX i2 ON t(a, b);

UPDATE STATISTICS t;

Benchmark:

DECLARE @ts DATETIME;
DECLARE @repeat INT = 2000;
DECLARE @r INT;
DECLARE @i INT;
DECLARE @dummy VARCHAR;

DECLARE @s1 CURSOR;
DECLARE @s2 CURSOR;

DECLARE @results TABLE (
  run     INT,
  stmt    INT,
  elapsed DECIMAL
);

SET @r = 0;
WHILE @r < 5
BEGIN
  SET @r = @r + 1

  SET @s1 = CURSOR FOR 
    SELECT b FROM t WITH (INDEX (i1)) WHERE a = 1;

  SET @s2 = CURSOR FOR 
    SELECT b FROM t WITH (INDEX (i2)) WHERE a = 1;

  SET @ts = current_timestamp;
  SET @i = 0;
  WHILE @i < @repeat
  BEGIN
    SET @i = @i + 1

    OPEN @s1;
    FETCH NEXT FROM @s1 INTO @dummy;
    WHILE @@FETCH_STATUS = 0
    BEGIN
      FETCH NEXT FROM @s1 INTO @dummy;
    END;

    CLOSE @s1;
  END;

  DEALLOCATE @s1;
  INSERT INTO @results 
    VALUES (@r, 1, DATEDIFF(ms, @ts, current_timestamp));

  SET @ts = current_timestamp;
  SET @i = 0;
  WHILE @i < @repeat
  BEGIN
    SET @i = @i + 1

    OPEN @s2;
    FETCH NEXT FROM @s2 INTO @dummy;
    WHILE @@FETCH_STATUS = 0
    BEGIN
      FETCH NEXT FROM @s2 INTO @dummy;
    END;

    CLOSE @s2;
  END;

  DEALLOCATE @s2;
  INSERT INTO @results 
    VALUES (@r, 2, DATEDIFF(ms, @ts, current_timestamp));
END;

SELECT 'Run ' + CAST(run AS VARCHAR) + 
  ', Statement ' + CAST(stmt AS VARCHAR) + ': ' + 
  CAST(CAST(elapsed / MIN(elapsed) OVER() AS DECIMAL(10, 5)) AS VARCHAR)
FROM @results;

Result:

Run 1, Statement 1: 1.22368
Run 1, Statement 2: 1.09211

Run 2, Statement 1: 1.05263
Run 2, Statement 2: 1.09211

Run 3, Statement 1: 1.00000
Run 3, Statement 2: 1.05263

Run 4, Statement 1: 1.05263
Run 4, Statement 2: 1.00000

Run 5, Statement 1: 1.09211
Run 5, Statement 2: 1.05263

As can be seen, predictably, in all databases the smaller single-column index is slightly faster for this type of query than the composite index. In this particular benchmark, that holds even though the composite index acts as a covering index (the tables contain only the two indexed columns), so the difference is mostly down to the composite index simply being bigger.

Yet both indexes can be used for the query in a reasonable way, so if disk space / insertion speed is an issue, the redundant single-column index can be dropped.

How to find such indexes

The following query will help you detect such indexes in Oracle, PostgreSQL, and SQL Server:

Oracle

WITH indexes AS (
  SELECT
    i.owner,
    i.index_name,
    i.table_name,
    listagg(c.column_name, ', ')
      WITHIN GROUP (ORDER BY c.column_position)
      AS columns
  FROM all_indexes i
  JOIN all_ind_columns c
    ON i.owner = c.index_owner
    AND i.index_name = c.index_name
  GROUP BY i.owner, i.table_name, i.index_name, i.leaf_blocks
)
SELECT
  i.owner,
  i.table_name,
  i.index_name AS "Deletion candidate index",
  i.columns AS "Deletion candidate columns",
  j.index_name AS "Existing index",
  j.columns AS "Existing columns"
FROM indexes i
JOIN indexes j
  ON i.owner = j.owner
  AND i.table_name = j.table_name
  AND j.columns LIKE i.columns || ',%'

Result:

TABLE_NAME   delete index   columns   existing index   columns
-------------------------------------------------------------------------
T            I1             A         I2               A, B

In short, it lists all the indexes whose columns are a prefix of another index’s columns.

PostgreSQL

Get ready for a really nifty query. Here’s how to discover redundant indexes in PostgreSQL, which unfortunately doesn’t seem to have an easy, out-of-the-box dictionary view exposing index columns:

WITH indexes AS (
  SELECT 
    tnsp.nspname AS schema_name,
    trel.relname AS table_name,
    irel.relname AS index_name,
    string_agg(a.attname, ', ' ORDER BY c.ordinality) AS columns
  FROM pg_index AS i
  JOIN pg_class AS trel ON trel.oid = i.indrelid
  JOIN pg_namespace AS tnsp ON trel.relnamespace = tnsp.oid
  JOIN pg_class AS irel ON irel.oid = i.indexrelid
  JOIN pg_attribute AS a ON trel.oid = a.attrelid
  JOIN LATERAL unnest(i.indkey) 
    WITH ORDINALITY AS c(colnum, ordinality)
      ON a.attnum = c.colnum
  GROUP BY i, tnsp.nspname, trel.relname, irel.relname
)
SELECT
  i.table_name,
  i.index_name AS "Deletion candidate index",
  i.columns AS "Deletion candidate columns",
  j.index_name AS "Existing index",
  j.columns AS "Existing columns"
FROM indexes i
JOIN indexes j
  ON i.schema_name = j.schema_name
  AND i.table_name = j.table_name
  AND j.columns LIKE i.columns || ',%';

This is a really nice case of lateral unnesting with ordinality, which you should definitely add to your PostgreSQL tool chain.
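
In isolation, the construct looks like this (a toy example, unrelated to the index query above):

SELECT *
FROM unnest(ARRAY['a', 'b', 'c']) WITH ORDINALITY AS t(element, ordinality);

-- element | ordinality
-- --------+-----------
-- a       | 1
-- b       | 2
-- c       | 3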

SQL Server

Now, SQL Server doesn’t have a nice STRING_AGG function (yet), but we can work around this using STUFF and XML to get the same query.

Of course, there are other solutions using recursive SQL, but I’m too lazy to translate the simple string pattern-matching approach to something recursive.

WITH 
  i AS (
    SELECT 
      s.name AS schema_name,
      t.name AS table_name,
      i.name AS index_name,
      c.name AS column_name,
      ic.index_column_id
    FROM sys.indexes i 
    JOIN sys.index_columns ic 
      ON i.object_id = ic.object_id 
      AND i.index_id = ic.index_id 
    JOIN sys.columns c 
      ON ic.object_id = c.object_id 
      AND ic.column_id = c.column_id 
    JOIN sys.tables t 
      ON i.object_id = t.object_id 
    JOIN sys.schemas s
      ON t.schema_id = s.schema_id
  ),
  indexes AS (
    SELECT
      schema_name,
      table_name,
      index_name,
      STUFF((
        SELECT ',' + j.column_name 
        FROM i j
        WHERE i.schema_name = j.schema_name
        AND i.table_name = j.table_name 
        AND i.index_name = j.index_name 
        ORDER BY j.index_column_id
        FOR XML PATH('') -- Yay, XML in SQL!
      ), 1, 1, '') columns
    FROM i
    GROUP BY schema_name, table_name, index_name
  )
SELECT
  i.schema_name,
  i.table_name,
  i.index_name AS "Deletion candidate index",
  i.columns AS "Deletion candidate columns",
  j.index_name AS "Existing index",
  j.columns AS "Existing columns"
FROM indexes i
JOIN indexes j
  ON i.schema_name = j.schema_name
  AND i.table_name = j.table_name
  AND j.columns LIKE i.columns + ',%';

A note on partial indexes

SQL Server and PostgreSQL support “partial indexes”, i.e. indexes that contain only parts of your data (and Oracle can emulate them in various ways). Such indexes might appear in the resulting list – you may want to be careful to check if they’re really redundant or not. Chances are, they’re there for a very good reason.
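
For example, here is a hypothetical PostgreSQL partial index that the above query would flag as a prefix of a larger index, even though it serves a different purpose:

-- Only indexes unshipped orders, keeping the index small and cheap to
-- maintain for a frequent, selective query:
CREATE INDEX i_unshipped ON orders (customer_id) WHERE shipped = false;

-- Indexes all orders:
CREATE INDEX i_all ON orders (customer_id, created_at);

The column list of i_unshipped is a prefix of that of i_all, but dropping it would lose the benefit of the much smaller, filtered index.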

Conclusion

Now go run the above query on your production database and… Very carefully and reasonably think about whether you really want to drop those indexes 😉

Why You Should (Sometimes) Avoid Expressions in SQL Predicates

I’ve recently discovered a rather significant performance issue on a production Oracle 11g customer database. And I’m sure you have this issue too, which is why I’m documenting it here.

This is a simplified representation of the setup at the customer site:

        ID PAYMENT_DATE TEXT                               
---------- ------------ -----------------------------------
     33803 21.05.16     DcTNBOrkQIgMtbietUWOsSFNMIqGLlDw...
     29505 09.03.16     VIuPaOAQqzCMlFBYPQtvqUSbWYPDndJD...
     10738 25.07.16     TUkxGpZPrGKaHzDRxczrktkFWvGmiwjR...

Let’s produce the above table with the following statement:

CREATE TABLE payment (
  id            NOT NULL PRIMARY KEY,
  payment_date  NOT NULL,
  text
) AS
SELECT
  level,
  SYSDATE - dbms_random.value(1, 500),
  dbms_random.string('a', 500)
FROM dual
CONNECT BY level <= 50000
ORDER BY dbms_random.value;

CREATE INDEX i_payment_date ON payment(payment_date);

EXEC dbms_stats.gather_table_stats('TEST', 'PAYMENT');

What we’re doing here: There’s a payment table with an ID primary key column, an indexed payment_date column and some “text” info. The real table is, of course, much bigger than this.

Now, this table needs infrequent housekeeping. In nightly batch jobs, the customer ran through the table and removed all payments that were older than some fixed number of days, e.g.:

DELETE
FROM payment
WHERE payment_date < SYSDATE - 470 -- Older than 470 days

This might look fine at first, because:

  1. We have an index on payment_date, and it can be used for the deletion
  2. SYSDATE is constant per query, so the expression SYSDATE - 470 should also be constant per query

We can prove both statements easily:

1. Index usage

Let’s run:

EXPLAIN PLAN FOR
DELETE
FROM payment
WHERE payment_date < SYSDATE - 470;

SELECT * FROM TABLE(dbms_xplan.display);

And we’re getting:

----------------------------------------------------------------------------
| Id  | Operation         | Name           | Rows  | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | DELETE STATEMENT  |                |  3008 |    10   (0)| 00:00:01 |
|   1 |  DELETE           | PAYMENT        |       |            |          |
|*  2 |   INDEX RANGE SCAN| I_PAYMENT_DATE |  3008 |    10   (0)| 00:00:01 |
----------------------------------------------------------------------------
                                                                            
Predicate Information (identified by operation id):                         
---------------------------------------------------
                                                                            
   2 - access("PAYMENT_DATE"<SYSDATE@!-470)                                 

So, we get a relatively cheap access predicate on the I_PAYMENT_DATE index, and there’s this weird SYSDATE@!-470 constant in the plan.

2. SYSDATE being constant

There’s a lot of confusion about what the actual value of these non-deterministic, non-pure date/time functions is in each database. Sometimes, the function is evaluated on each row individually and produces a new value each time. In my opinion, that’s the worst, as such functions are completely unpredictable.

Sometimes, there’s a guarantee that these timestamps stay the same for the entire duration of a transaction. That’s a bit weird as a default, but OK why not.

In Oracle, it seems that SYSDATE (and SYSTIMESTAMP) are evaluated only a single time on a per-statement level. This would explain that weird SYSDATE@! notation in the execution plan, and it can be “proven” by doing something like this:

CREATE OR REPLACE FUNCTION sleep(i NUMBER) RETURN NUMBER 
AS
BEGIN
  dbms_lock.sleep(i);
  RETURN i;
END;
/

SELECT 
  min(CAST(sysdate AS TIMESTAMP)), 
  max(CAST(sysdate AS TIMESTAMP))
FROM dual
CONNECT BY level <= 5 AND sleep(1) > 0;

DROP FUNCTION sleep;

The result of this statement (which unsurprisingly takes around 4 seconds to execute) is:

MIN(CAST(SYSDATEASTIMESTAMP)) MAX(CAST(SYSDATEASTIMESTAMP))
----------------------------- -----------------------------
01.11.16 11:15:20.000000000   01.11.16 11:15:20.000000000  

Some background info on this OTN thread here. Unfortunately, the SYSDATE documentation fails to clearly specify this behaviour…

So, what’s the problem?

Even if we’re probably relying on an undocumented feature, it looks like everything is fine, right? SYSDATE - 470 is more or less a constant, so we’re fine putting it in that WHERE clause.

Wrong!

Let’s run a benchmark comparing three equivalent queries, each executed 100 times (and for repeatability, we use SELECT instead of DELETE):

SET SERVEROUTPUT ON
DECLARE
  v_ts TIMESTAMP WITH TIME ZONE;
  v_repeat CONSTANT NUMBER := 100;
  v_range CONSTANT NUMBER := 470;
  v_date CONSTANT DATE := SYSDATE - v_range;
BEGIN
  v_ts := SYSTIMESTAMP;
  
  -- Original query with inline expression
  FOR i IN 1..v_repeat LOOP
    FOR rec IN (
      SELECT *
      FROM payment
      WHERE payment_date < SYSDATE - v_range
    ) LOOP
      NULL;
    END LOOP;
  END LOOP;
    
  dbms_output.put_line('Statement 1 : ' || (SYSTIMESTAMP - v_ts));
  v_ts := SYSTIMESTAMP;
  
  -- Pre-calculated PL/SQL local variable
  FOR i IN 1..v_repeat LOOP
    FOR rec IN (
      SELECT *
      FROM payment
      WHERE payment_date < v_date
    ) LOOP
      NULL;
    END LOOP;
  END LOOP;
    
  dbms_output.put_line('Statement 2 : ' || (SYSTIMESTAMP - v_ts));
  v_ts := SYSTIMESTAMP;
  
  -- Magical 11g scalar subquery caching
  FOR i IN 1..v_repeat LOOP
    FOR rec IN (
      SELECT *
      FROM payment
      WHERE payment_date < (SELECT SYSDATE - v_range FROM dual)
    ) LOOP
      NULL;
    END LOOP;
  END LOOP;
    
  dbms_output.put_line('Statement 3 : ' || (SYSTIMESTAMP - v_ts));
END;
/

The above benchmark uses 3 techniques that all produce the same result but have different timing characteristics:

  1. Statement 1 uses an inline SYSDATE expression in the predicate
  2. Statement 2 uses a pre-calculated value in a PL/SQL variable, which acts as a bind variable
  3. Statement 3 uses a scalar subquery

When we check out the estimated execution plans, nothing seems to indicate that any solution might be better than the other:

EXPLAIN PLAN FOR
SELECT *
FROM payment
WHERE payment_date < SYSDATE - :v_range;

SELECT * FROM TABLE(dbms_xplan.display);

EXPLAIN PLAN FOR
SELECT *
FROM payment
WHERE payment_date < :v_date;

SELECT * FROM TABLE(dbms_xplan.display);

-- CAST is there because of a "bug" in the EXPLAIN PLAN parser
EXPLAIN PLAN FOR
SELECT *
FROM payment
WHERE payment_date < (
  SELECT CAST(SYSDATE - :v_range AS DATE) FROM dual
);

SELECT * FROM TABLE(dbms_xplan.display);

The output is:

--------------------------------------------------------------------------------------
| Id  | Operation                   | Name           | Rows  | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                |  2500 |   453   (0)| 00:00:06 |
|   1 |  TABLE ACCESS BY INDEX ROWID| PAYMENT        |  2500 |   453   (0)| 00:00:06 |
|*  2 |   INDEX RANGE SCAN          | I_PAYMENT_DATE |   450 |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------------------
                                                                                      
Predicate Information (identified by operation id):                                   
---------------------------------------------------                                   
                                                                                      
   2 - access("PAYMENT_DATE"<SYSDATE@!-:V_RANGE)                                      


--------------------------------------------------------------------------------------
| Id  | Operation                   | Name           | Rows  | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                |  2500 |   453   (0)| 00:00:06 |
|   1 |  TABLE ACCESS BY INDEX ROWID| PAYMENT        |  2500 |   453   (0)| 00:00:06 |
|*  2 |   INDEX RANGE SCAN          | I_PAYMENT_DATE |   450 |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------------------
                                                                                      
Predicate Information (identified by operation id):
---------------------------------------------------
                                                                                      
   2 - access("PAYMENT_DATE"<:V_DATE)                                                 


--------------------------------------------------------------------------------------
| Id  | Operation                   | Name           | Rows  | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                |  2500 |   455   (0)| 00:00:06 |
|   1 |  TABLE ACCESS BY INDEX ROWID| PAYMENT        |  2500 |   453   (0)| 00:00:06 |
|*  2 |   INDEX RANGE SCAN          | I_PAYMENT_DATE |   450 |     3   (0)| 00:00:01 |
|   3 |    FAST DUAL                |                |     1 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------------------
                                                                                      

Predicate Information (identified by operation id):                                   
---------------------------------------------------                                   
                                                                                      
   2 - access("PAYMENT_DATE"< (SELECT CAST(SYSDATE@!-:V_RANGE AS DATE) FROM           
              "SYS"."DUAL" "DUAL"))                                                   

The only exception is that in the 3rd plan, we have this FAST DUAL operation which hints at scalar subquery caching kicking in.

Let’s compare actual results, though! (As always, I’m not publishing the real results, just qualitative numbers in fictitious units of measurement, to comply with Oracle legal blah blah.)

-- Run 1
Statement 1 : 1.98 wombos
Statement 2 : 1.17 wombos
Statement 3 : 0.80 wombos

-- Run 2
Statement 1 : 1.49 blorfs
Statement 2 : 1.19 blorfs
Statement 3 : 0.78 blorfs

-- Run 3
Statement 1 : 1.46 gnarls
Statement 2 : 1.20 gnarls
Statement 3 : 0.75 gnarls

Wow! As you can see, using a bind variable certainly helps, but scalar subquery caching is where you get the most benefit. Now, this query is just a very simple example. The actual customer query was immensely more complex, and trust me – the performance improvement was 10x (I couldn’t believe it myself at first, and I still don’t know exactly why!).

Conclusion (for Oracle 11g)

Modern optimisers “recognise” a lot of expressions to be constant. For instance, in most databases, it doesn’t matter if you’re writing COUNT(1) or COUNT(*), they’re both translated to the same thing.

In this particular case, however, I was quite disappointed by the fact that there is a significant difference between these three queries (which are perfectly equivalent as far as I’m concerned), and that the least intuitive solution, the one using scalar subquery caching, performed best on Oracle 11g.

But wait (Oracle 12c)

I’ve tested the same behaviour on my Oracle 12c on Docker instance:

Statement 1 : 1.23 xmfs
Statement 2 : 1.11 xmfs
Statement 3 : 1.14 xmfs

Statement 1 : 1.27 xlorgs
Statement 2 : 1.07 xlorgs
Statement 3 : 1.30 xlorgs

Statement 1 : 1.23 glrls (I can invent units of measurement all day!)
Statement 2 : 1.00 glrls
Statement 3 : 1.39 glrls

… where now there seems to be a “regression” in the scalar subquery solution (it’s now the same as without the scalar subquery) and the bind variable solution now seems to be the fastest.

If anything at all, this just proves that with SQL, you will always have to measure stuff on your end, because your setup and database versions may differ. The bottom line is: If performance of a single statement matters to you, chances are that you can improve things by 2-digit percentages with just some simple tricks, and in a batch job or under heavy load, this definitely matters!

Certainly, you should not use any expressions of the form SYSDATE - some_value in your predicates.
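
Applied to the housekeeping job from the beginning of this post, that means rewriting the DELETE along one of these lines (a sketch; the bind variable would be computed and passed by the calling batch job):

-- Option 1: pre-calculate the cut-off date and pass it as a bind variable
DELETE FROM payment WHERE payment_date < :cutoff_date;

-- Option 2: rely on scalar subquery caching to evaluate the expression
-- only once (the winner on Oracle 11g in the benchmark above)
DELETE FROM payment WHERE payment_date < (SELECT SYSDATE - 470 FROM dual);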

Does Your Database Really Use Your Index?

Adding the right index to speed up your queries is essential. But after a while, as your system grows, you may find yourself with tons of indexes, which all slow down writing to the database – as with each write to the table, the index needs to be updated as well in the same transaction.

Perhaps, 5 years later, your database (and your queries) have evolved in a way for some indexes to no longer be needed. For instance, there are some obvious cases when two indexes are redundant:

-- Original design
CREATE INDEX ON customer (first_name);

-- 5 years later
CREATE INDEX ON customer (first_name, last_name);

But in many other cases, things aren’t so obvious. For instance…

  • You may have added an index on a foreign key, but as the table grew, your database started running more hash joins rather than nested loop joins, in which case indexes are no longer used.
  • Or, you’ve just entirely stopped querying for first names / last names.
  • Or you’ve started using a predicate that is way more selective than actual names.
  • Or your customers are suddenly all called Smith.

Everyone is called Smith – tough luck for indexing!

If your indexes are no longer used, you can (and should) delete them.

But how to find unused indexes

If you’re using the Oracle database and you have access to the production system, there’s actually a very nice way to query the diagnostics tables in order to know whether any query in the cursor cache is currently using your index. Just run:

SELECT sql_fulltext
FROM v$sql
WHERE sql_id IN (
  SELECT sql_id
  FROM v$sql_plan
  WHERE (object_owner, object_name)
     = (('OWNER', 'IDX_CUSTOMER_FIRST_NAME'))
)
ORDER BY sql_text;

What does this query do? It runs through all the SQL statements in the cursor cache (v$sql) and for each one of them, checks if there is any execution plan element in the cursor cache (v$sql_plan) which accesses the index. Done.

Of course, if this doesn’t return any results, it doesn’t mean that nobody uses your index. There might still be a very rare query that runs only once a year and has since been purged from the cursor cache.

But if you run this check as a job over a longer period of time and it never returns any rows, your index is probably no longer needed.
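
One simple way to run it as a job is to log any matches into a table over a few weeks and review it afterwards (a sketch; the log table and the hard-coded index name are made up for illustration):

CREATE TABLE index_usage_log (
  captured_at  DATE DEFAULT SYSDATE,
  sql_id       VARCHAR2(13),
  object_owner VARCHAR2(128),
  object_name  VARCHAR2(128)
);

-- Schedule this to run e.g. hourly:
INSERT INTO index_usage_log (sql_id, object_owner, object_name)
SELECT sql_id, object_owner, object_name
FROM v$sql_plan
WHERE object_owner = 'OWNER'
AND object_name = 'IDX_CUSTOMER_FIRST_NAME';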

Can I discover unneeded indexes?

Sure! Run a similar query that lists all the indexes that are not referenced from the v$sql_plan table:

SELECT owner, index_name
FROM all_indexes
WHERE owner = 'OWNER'
AND (owner, index_name) NOT IN (
  SELECT object_owner, object_name
  FROM v$sql_plan
  WHERE object_owner IS NOT NULL
  AND object_name IS NOT NULL
)
ORDER BY 1, 2

Again, this doesn’t say that your indexes will never be used, but they haven’t been used recently.

Now, I won’t actually show you the query that would use the above statement, run across its result in a PL/SQL loop and drop all the indexes using EXECUTE IMMEDIATE, because you might just actually do that to try it in production. But just in case you do want to try, here’s a hint 🙂

BEGIN
  FOR i IN (/* above query here */) LOOP
    EXECUTE IMMEDIATE 
     'DR0P INDEX "' || i.owner || '"."' || i.index_name || '"';
  END LOOP;
END;
/

But as I said. Don’t actually do this!

Update: New Oracle 12cR2 feature: DBA_INDEX_USAGE

As you can see in the comments section, Oracle 12cR2 now has an exciting new DBA_INDEX_USAGE view, which tracks index usage for you.
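
A minimal sketch of what querying it might look like (I’m assuming the OWNER, NAME, TOTAL_ACCESS_COUNT, and LAST_USED columns here; check the documentation of your exact version):

SELECT owner, name, total_access_count, last_used
FROM dba_index_usage
WHERE owner = 'OWNER'
ORDER BY last_used NULLS FIRST;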

Thanks, Dan McGhan for mentioning this!

How to Know if a Given Index Can be Dropped

It seems that perfection is attained not when there is nothing more to add, but when there is nothing more to remove.

Antoine de Saint Exupéry in Terre des Hommes

As SQL developers, we keep adding more and more indexes to our tables. Every time we run new queries that are potentially slow, a new index might solve the problem. But it also creates new problems. The more indexes you have, the slower your insertions and updates will become, and obviously, the more disk space your data will use.

But how do we know if we can remove an existing index? How do we know if there won’t be any performance regression?

In Oracle, there’s a way to answer this question! Just look at the cursor cache of your production system. In a previous blog post about UNIQUE constraints, we added an index called “I1DATA” to our database, and we ran some queries that used this index.

Several days later, we want to know if that index is still being used in our “production” database. Just run the following query:

SELECT *
FROM v$sql
JOIN v$sql_plan USING (sql_id)
WHERE object_name = 'I1DATA'

This query uses Oracle’s wonderful system views, which reveal tons of information about your production system. Learn them, especially v$sql, and v$sql_plan and impress your coworkers!

The above query returns something like the following data:

SQL_TEXT                             LAST_ACTIVE
------------------------------------------------
SELECT COUNT(*) FROM X1 JOIN Y1  ... 14.07.16
SELECT count(*) FROM x1 JOIN y1  ... 14.07.16
SELECT count(*) FROM x1 WHERE a  ... 14.07.16
SELECT COUNT(*) FROM X1 WHERE A  ... 14.07.16

A whole lot of additional information about the actual execution plan is contained in the result set, e.g. whether the index was used for an INDEX RANGE SCAN, or something else. The interesting thing at this point is:

Yes, the index is still being used by several queries in the cursor cache

… so we probably cannot delete it yet.
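
To see how each statement actually uses the index, you can also project the plan details explicitly (a slight variation of the query above, using columns from v$sql and v$sql_plan):

SELECT s.sql_text, p.operation, p.options, p.object_name
FROM v$sql s
JOIN v$sql_plan p USING (sql_id)
WHERE p.object_name = 'I1DATA';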

Caution

These views won’t tell you reliably whether an index is used. They rely on the cursor cache, which may be configured in various ways, including not keeping cursors for very long, or not caching all cursors.

In any case, the following rules can be established:

  • If an index appears in the cursor cache several times, you shouldn’t remove it
  • If an index appears in the cursor cache only once, perhaps it isn’t really needed for that particular query. Analyse
  • If an index does not appear in the cursor cache, run the check again, frequently, over a longer period of time, and only then (if you’re sure) remove it

Caveats

  • Some indexes may have been created for reports that run very infrequently. The suggested approach won’t cover that scenario
  • This information is most reliable from the production system. You cannot possibly obtain any meaningful information from your developer box

Remember: This is just a nice trick. Not a reliable rule. But it certainly does help you assess whether that big, expensive index that doesn’t appear to be useful can be safely removed.

Do Not Think That One Second is Fast for Query Execution

I keep encountering situations where RDBMS users think that one second for query execution is anything near fast. Most recently, in this Stack Overflow question:

Hibernate SQL In clause making CPU usage to 100%

The poster’s original question was why a similar query executes in one second when executed in SQL Server Management Studio whereas the (seemingly) same query executes in 60 seconds when executed from Hibernate. The query looks similar to this:

select Student_Id 
from Student_Table 
where Roll_No in ('A101','A102','A103',.....'A250');

There might be many reasons for this difference. But the most striking message here is:

Please do not believe that 1 second is fast.

Databases are incredibly fast and simple queries like the one above should execute in virtually no time, even on mediocre servers. Even on your laptop! Markus Winand has made it a point to tell you that in 80% of all performance issues, all you have to do is to add that missing index. And that’s the case here as well!

The poster’s original table contains only two indexes:

CREATE TABLE student_table (
       Student_Id BIGINT NOT NULL IDENTITY
     , Class_Id BIGINT NOT NULL
     , Student_First_Name VARCHAR(100) NOT NULL
     , Student_Last_Name VARCHAR(100)
     , Roll_No VARCHAR(100) NOT NULL
     , PRIMARY KEY (Student_Id)
     , CONSTRAINT UK_StudentUnique_1 
         UNIQUE  (Class_Id, Roll_No)
);

There is an index to implement the PRIMARY KEY, and there’s another index for the UNIQUE constraint, but neither index is really very useful here, as the query predicate filters on Roll_No, which is only the second column of the UNIQUE constraint. When executing the above query on roughly 8 million rows, I’m getting an index scan on the UNIQUE index, and the query runs in three seconds:

(execution plan screenshot: Index Scan)

This “Index Scan” operation is not good at all. It scans the entire index, looking for all the applicable Roll_No values in each index leaf node. This is explained well on Use The Index, Luke’s page about concatenated indexes.

The solution

But the good news is, SQL Server Management Studio gives you immediate tuning advice. Just right-click on the execution plan and choose “Missing Index Details…” to get the following advice:

/*
Missing Index Details from SQLQuery1.sql -
    LUKAS-ENVY\SQLEXPRESS.test
The Query Processor estimates that implementing the 
    following index could improve the query cost 
    by 87.5035%.
*/

/*
USE [test]
GO
CREATE NONCLUSTERED INDEX [<Name of Missing Index>]
ON [dbo].[student_table] ([Roll_No])
INCLUDE ([Student_Id])
GO
*/

This doesn’t necessarily mean that the above index is the optimal choice for all your queries, but the fact that you’re querying with a predicate on Roll_No should be a strong enough indicator that you need an index on at least this Roll_No column. The simplest possible index here is:

CREATE INDEX i_student_roll_no
ON student_table (Roll_No);

With that index in place, we’ll now get an “Index Seek” operation, which runs instantly:

(execution plan screenshot: Index Seek)

Covering index

In this particular case, a “covering index” as suggested by Vlad Mihalcea in his answer might be appropriate. For instance:

CREATE INDEX i_student_roll_no
ON student_table (Roll_No, Student_Id);

The advantage of a covering index is that all the information needed for the query execution is already contained in the index. This is true in this particular case, but it can also be dangerous as:

  • The covering index needs more space, and if the table is already large, space might become an issue
  • The covering index only adds value as long as the query doesn’t also use additional columns (e.g. for projections, calculations, additional filtering, sorting, etc.). The simple example above doesn’t, but that might change quickly in the near future

Thus, a covering index shouldn’t be your default choice in such cases. Better to be conservative and add only those columns to the index that add immediate value for filtering.
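
For instance, as soon as the query also needs a column that isn’t contained in the index, the engine is back to doing per-row lookups anyway (a hypothetical extension of the example query):

-- Student_First_Name is not contained in i_student_roll_no, so SQL Server
-- has to perform a key lookup into the clustered index for every match:
SELECT Student_Id, Student_First_Name
FROM student_table
WHERE Roll_No IN ('A101', 'A102', 'A103');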

Conclusion

I’d like to repeat this time and again:

Do NOT think that one second is fast

In fact:

Do NOT think that anything beyond 2-3ms is fast!

Unless you’re doing heavy reporting or batch processing, where processing time obviously might take a bit longer, simple queries like these should never be anything slower than instant. And most of the time, you can achieve this speed by adding an index.

On a side-note

The poster of the aforementioned question obviously has other issues as well. None of the above explain the execution speed difference between Hibernate and “plain SQL”. But again, the message here is that if your “plain SQL” already takes more than one second, you have a very low-hanging fruit to fix.

Have You Ever Wondered About the Difference Between NOT NULL and DEFAULT?

When writing DDL in SQL, you can specify a couple of constraints on columns, like NOT NULL or DEFAULT constraints. Some people might wonder, if the two constraints are actually redundant, i.e. is it still necessary to specify a NOT NULL constraint, if there is already a DEFAULT clause?

The answer is: Yes!

Yes, you should still specify that NOT NULL constraint. And no, the two constraints are not redundant. The answer I gave here on Stack Overflow wraps it up by example, which I’m going to repeat here on our blog:

DEFAULT is the value that will be inserted in the absence of an explicit value in an INSERT / UPDATE statement. Let’s assume your DDL did not have the NOT NULL constraint:

ALTER TABLE tbl 
  ADD COLUMN col VARCHAR(20) 
    DEFAULT 'MyDefault'

Then you could issue these statements

-- 1. This will insert "MyDefault" 
--    into tbl.col
INSERT INTO tbl (A, B) 
  VALUES (NULL, NULL);

-- 2. This will insert "MyDefault" 
--    into tbl.col
INSERT INTO tbl (A, B, col) 
  VALUES (NULL, NULL, DEFAULT);

-- 3. This will insert "MyDefault"
--    into tbl.col
INSERT INTO tbl (A, B, col)
  DEFAULT VALUES;

-- 4. This will insert NULL
--    into tbl.col
INSERT INTO tbl (A, B, col)
  VALUES (NULL, NULL, NULL);

Alternatively, you can also use DEFAULT in UPDATE statements, according to the SQL-1992 standard:

-- 5. This will update "MyDefault"
--    into tbl.col
UPDATE tbl SET col = DEFAULT;

-- 6. This will update NULL 
--    into tbl.col
UPDATE tbl SET col = NULL;

Note that not all databases support all of these SQL standard syntaxes. Adding the NOT NULL constraint will cause statements 4 and 6 to fail, while statements 1-3 and 5 remain valid. So, to answer your question:

No, NOT NULL and DEFAULT are not redundant

That’s already quite interesting: the DEFAULT clause really only interacts with DML statements and how they specify the various columns that they’re setting. The NOT NULL constraint is a much more universal guarantee, one that constrains a column’s content also “outside” of the manipulating DML statements.

For instance, if you have a set of data and then you add a DEFAULT constraint, this will not affect your existing data, only new data being inserted.

If, however, you have a set of data and then you add a NOT NULL constraint, you can actually only do so if the constraint is valid – i.e. when there are no NULL values in your column. Otherwise, an error will be raised.
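
Here’s a quick sketch of that difference, in PostgreSQL syntax (assuming tbl.col currently contains NULL values):

-- Fails, because the existing NULL values violate the new constraint:
ALTER TABLE tbl ALTER COLUMN col SET NOT NULL;

-- Succeeds, and leaves all existing rows (including their NULLs) untouched:
ALTER TABLE tbl ALTER COLUMN col SET DEFAULT 'MyDefault';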

Query performance

Another very interesting use case that applies only to NOT NULL constraints is their usefulness for query optimisers and query execution plans. Assume that you have such a constraint on your column and then, you’re using a NOT IN predicate:

SELECT *
FROM table
WHERE value NOT IN (
  SELECT not_nullable
  FROM other_table
)

In particular, when you’re using Oracle, the above query will be much faster when the not_nullable column has an index AND that particular constraint. One reason is that, unfortunately, entirely-NULL keys are not included in Oracle B-tree indexes; another is that NOT IN has special semantics when NULL values are possible (a single NULL returned by the subquery makes the predicate return no rows), which limits how the optimiser can rewrite the query.

Read more about NULL and NOT IN predicates here.

MongoDB “Lightning Fast Aggregation” Challenged with Oracle

What does “Scale” even mean in the context of databases? When talking about scaling, people have jumped to the vendor-induced conclusion that:

  • SQL doesn’t scale
  • NoSQL scales

It is very obvious that NoSQL vendors make such claims. It has also been interesting that many NoSQL consumers made such claims, even if they probably confused SQL in general with MySQL in particular. They then go on about comparing MongoDB with MySQL scalability, which totally makes sense as MySQL is to SQL what MongoDB is to NoSQL.

Let’s get back down to earth…

… because in the last decades, there was no database to beat Oracle on the Transaction Processing Performance Council’s benchmarks. We trust that the CERN people have made an informed decision when they opted for Oracle Exadata and other Oracle products to manage their immensely huge data even before huge data was called Big Data.

So let’s make a quick comparison. Recently, Vlad Mihalcea has blogged about “lightning speed aggregation” (with MongoDB). He took a data set of 50 million records of the form:

created_on           | value
-------------------------------------------
2012-05-02T06:08:47Z | 0.9270193106494844
2012-09-06T22:40:25Z | 0.005334891844540834
2012-06-15T05:58:22Z | 0.05611344985663891
...                  | ...

These are random timestamps and random floats. He then aggregated this data with the following query:

var dataSet = db.randomData.aggregate([
    {
        $group: {
                "_id": {
                    "year" : {
                        $year : "$created_on"
                    },
                    "dayOfYear" : {
                        $dayOfYear : "$created_on"
                    }
                },
                "count": {
                    $sum: 1
                },
                "avg": {
                    $avg: "$value"
                },
                "min": {
                    $min: "$value"
                },
                "max": {
                    $max: "$value"
                }
            }
    },
    {
        $sort: {
            "_id.year" : 1,
            "_id.dayOfYear" : 1
        }
    }
]);

And got a “mind-numbing”, lightning speed aggregation result:

Aggregation took:129.052s

129s for a medium-sized table with 50M records is lightning speed for MongoDB? Alright, we thought. Let’s try this with Oracle. Vlad had the courtesy to provide us with the sample data, which we imported into the following trivial Oracle table:

CREATE TABLESPACE aggregation_test
  DATAFILE 'aggregation_test.dbf'
  SIZE 2000M ONLINE;

CREATE TABLE aggregation_test (
  created_on TIMESTAMP NOT NULL,
  value NUMBER(22, 20) NOT NULL
)
TABLESPACE aggregation_test;

Now, let’s load that data with sqlldr into my single-core licenced Oracle XE 11gR2 instance:

OPTIONS(skip=1)
LOAD DATA
  INFILE randomData.csv
  APPEND
INTO TABLE aggregation_test
  FIELDS TERMINATED BY ','
(
  created_on DATE
    "YYYY-MM-DD\"T\"HH24:MI:SS\"Z\"",
  value TERMINATED BY WHITESPACE
    "to_number(ltrim(rtrim(replace(:value,'.',','))))"
)

And then:

C:\oraclexe\app\oracle\product\11.2.0\server\bin\sqlldr.exe 
  userid=TEST/TEST 
  control=randomData.txt 
  log=randomData.log 
  parallel=true 
  silent=feedback
  bindsize=512000 
  direct=true

Loading the records took a while on my computer with an old disk:

Elapsed time was:     00:03:07.70
CPU time was:         00:01:09.82

I might be able to tune this one way or another, as SQL*Loader isn’t the fastest tool for the job. But let’s challenge the 129s for the aggregation. So far, we hadn’t specified any index, but that isn’t necessary as we’re aggregating the whole table with all 50M records. Let’s do this with the following equivalent query to Vlad’s:

SELECT
  EXTRACT(YEAR FROM created_on),
  TO_CHAR(created_on, 'DDD'),
  COUNT(*),
  AVG(value),
  MIN(value),
  MAX(value)
FROM aggregation_test
GROUP BY
  EXTRACT(YEAR FROM created_on),
  TO_CHAR(created_on, 'DDD')
ORDER BY
  EXTRACT(YEAR FROM created_on),
  TO_CHAR(created_on, 'DDD')

This took 32s on my machine with an obvious full table scan. Not impressive. Let’s try Vlad’s other query, filtering on a single hour, which executed in 209ms in his benchmark (what we believe is really not fast at all):

var dataSet = db.randomData.aggregate([
    {
        $match: {
            "created_on" : {
                $gte: fromDate,
                $lt : toDate
            }
        }
    },
    {
        $group: {
                "_id": {
                    "year" : {
                        $year : "$created_on"
                    },
                    "dayOfYear" : {
                        $dayOfYear : "$created_on"
                    },
                    "hour" : {
                        $hour : "$created_on"
                    }
                },
                "count": {
                    $sum: 1
                },
                "avg": {
                    $avg: "$value"
                },
                "min": {
                    $min: "$value"
                },
                "max": {
                    $max: "$value"
                }
            }
    },
    {
        $sort: {
            "_id.year" : 1,
            "_id.dayOfYear" : 1,
            "_id.hour" : 1
        }
    }
]);

Vlad generated a random date, which he logged as

Aggregating 
from Mon Jul 16 2012 00:00:00 GMT+0300
  to Mon Jul 16 2012 01:00:00 GMT+0300

So let’s use exactly the same dates. But first, we should create an index on AGGREGATION_TEST(CREATED_ON):

CREATE TABLESPACE aggregation_test_index
  DATAFILE 'aggregation_test_index1.dbf'
  SIZE 2000M ONLINE;

CREATE INDEX idx_created_on
ON aggregation_test(created_on)
TABLESPACE aggregation_test_index;

OK. Now let’s run the Oracle equivalent of Vlad’s query (note, there’s a new column in the GROUP BY, SELECT, ORDER BY clauses):

SELECT
  EXTRACT(YEAR FROM created_on),
  TO_CHAR(created_on, 'DDD'),
  EXTRACT(HOUR FROM created_on),
  COUNT(*),
  AVG(value),
  MIN(value),
  MAX(value)
FROM aggregation_test
WHERE created_on
  BETWEEN TIMESTAMP '2012-07-16 00:00:00.0'
  AND     TIMESTAMP '2012-07-16 01:00:00.0'
GROUP BY
  EXTRACT(YEAR FROM created_on),
  TO_CHAR(created_on, 'DDD'),
  EXTRACT(HOUR FROM created_on)
ORDER BY
  EXTRACT(YEAR FROM created_on),
  TO_CHAR(created_on, 'DDD'),
  EXTRACT(HOUR FROM created_on);

This took 20s in a first run. To be fair, before running this statement, I cleared some caches. I believe that Vlad’s MongoDB was already “warmed up” for the second query:

alter system flush shared_pool;
alter system flush buffer_cache;

Because when I ran the same query again, it only took 0.02 seconds, as Oracle’s buffer cache kicked in, preventing actual disk access. I might have tuned my Oracle instance in a way that keeps the whole table in memory from the first query, in which case Oracle would have beaten MongoDB by an order of magnitude.

Another option is to tune indexing. Let’s remove our existing index and replace it with a “covering” index as such:

DROP INDEX idx_created_on;

CREATE INDEX idx_created_on_value
ON aggregation_test(created_on, value)
TABLESPACE aggregation_test_index;

Let’s again flush caches:

alter system flush shared_pool;
alter system flush buffer_cache;

And run the query…

  • First execution: 0.5s
  • Second execution: 0.005s

From the execution plan, I can now tell that the query didn’t need to access the table any longer. All relevant data was contained in the index:

(execution plan screenshot)

Now that starts getting impressive, right?

So, what conclusion can we draw from this?

Here are a couple of findings:

Don’t jump to conclusions

First off, we should be careful with jumping to any conclusion at all. We are comparing apples with oranges. We’d even be comparing apples with oranges when comparing Oracle with SQL Server. Both benchmarks were quickly written, hacked-up benchmarks that do not represent any production situations. What we can say already now is that there is no significant winner when aggregating data in a 50M-record table, but Oracle quickly seems to come out ahead, even though MongoDB was running on an SSD while Oracle was running on an HDD!

To be fair, my computer has a better CPU than Vlad’s.


Both SQL and NoSQL can scale

Much of the NoSQL scaling debate is FUD. People always want to believe vendors who tend to say “their product doesn’t do this and that”. Don’t buy it immediately. Run a smallish benchmark like the above and see for yourself: scaling up is not a problem for either SQL or NoSQL databases. Don’t be blinded by Oracle’s initial “slow” query execution. Oracle is a very sophisticated database that can tune itself according to actual need, thanks to its cost-based optimiser and its statistics.

And nothing keeps you (and your DBA) from using all of your tool’s features. In this example, we have only scratched the surface. For instance, we might apply index-organised tables (IOTs), or table partitioning if you’re using Oracle Enterprise Edition.
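
For instance, here is a sketch of what an index-organised variant of the test table could look like (hypothetical, not part of the benchmark above; it stores the whole table in primary key order, so the aggregation query never needs a separate table access):

CREATE TABLE aggregation_test_iot (
  created_on TIMESTAMP NOT NULL,
  value      NUMBER(22, 20) NOT NULL,
  CONSTRAINT pk_aggregation_test_iot PRIMARY KEY (created_on, value)
)
ORGANIZATION INDEX;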

50M is not Big Data

CERN has Big Data. Google does. Facebook does. You don’t. 50M is not “Big Data”. It’s just your average database table.

200ms is not fast

Obviously, you might not care too much how long it takes to prepare your offline report on a dedicated batch reporting server. But if you have a real user on a real UI waiting for it, 200ms is not fast, because they might be running 20 of these queries, and there might be 10’000 of these users. In a real OLTP system, having such reporting queries around means that even Oracle’s 0.005s isn’t fast!

So, in other words…

… don’t jump the SQL ship just yet!