Do Not Think That One Second is Fast for Query Execution

I keep encountering situations where RDBMS users think that one second is anything near fast for query execution. Most recently, in this Stack Overflow question: Hibernate SQL In clause making CPU usage to 100%. The poster’s original question was why a query executes in one second when run in SQL Server Management Studio, whereas the (seemingly) same query takes 60 seconds when executed from Hibernate. The query looks similar to this:

select Student_Id 
from Student_Table 
where Roll_No in ('A101','A102','A103',.....'A250');

There might be many reasons for this difference. But the most striking message here is:
Please do not believe that 1 second is fast.
Databases are incredibly fast, and simple queries like the one above should execute in virtually no time, even on mediocre servers. Even on your laptop! Markus Winand has made it a point to tell you that in 80% of all performance issues, all you have to do is add that missing index. And that’s the case here as well! The poster’s original table contains only two indexes:

CREATE TABLE student_table (
       Student_Id BIGINT NOT NULL
     , Class_Id BIGINT NOT NULL
     , Student_First_Name VARCHAR(100) NOT NULL
     , Student_Last_Name VARCHAR(100)
     , Roll_No VARCHAR(100) NOT NULL
     , PRIMARY KEY (Student_Id)
     , CONSTRAINT UK_StudentUnique_1 
         UNIQUE  (Class_Id, Roll_No)
);

There is an index to implement the PRIMARY KEY, and there’s another index for the UNIQUE constraint, but neither index is really useful here, as the query predicate filters on Roll_No, which is only the second column of the UNIQUE constraint. When executing the above query against roughly 8 million rows, I’m getting an index scan on the UNIQUE index, and the query runs in three seconds:

[execution plan: plan-1]

This “Index Scan” operation is not good at all. The database actually runs through the entire index, looking for all applicable Roll_No values in each index leaf node. This is explained well on Use The Index, Luke!’s page about concatenated indexes.
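To see what such a scan looks like, here’s a minimal sketch using SQLite’s EXPLAIN QUERY PLAN as a hypothetical, scaled-down stand-in for the SQL Server setup above (table and column names follow the article; everything else is assumed):

```python
import sqlite3

# Scaled-down, hypothetical reproduction of the article's table in SQLite.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE student_table (
        Student_Id INTEGER PRIMARY KEY,
        Class_Id   INTEGER NOT NULL,
        Roll_No    TEXT    NOT NULL,
        CONSTRAINT UK_StudentUnique_1 UNIQUE (Class_Id, Roll_No)
    )
""")

# Roll_No is only the second column of the UNIQUE index, so the optimizer
# cannot seek into the index by Roll_No alone; it has to scan all of it.
plan = con.execute("""
    EXPLAIN QUERY PLAN
    SELECT Student_Id FROM student_table
    WHERE Roll_No IN ('A101', 'A102', 'A103')
""").fetchall()
for row in plan:
    print(row[3])  # reports a SCAN, not a SEARCH
```

SQLite’s wording differs from SQL Server’s, but the shape of the problem is the same: a full traversal of the index instead of a targeted lookup.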

The solution

But the good news is, SQL Server Management Studio gives you immediate tuning advice. Just right click on the execution plan and choose “Missing Index Details…” to get the following advice:

Missing Index Details from SQLQuery1.sql -
The Query Processor estimates that implementing the 
    following index could improve the query cost 
    by 87.5035%.

USE [test]
GO
CREATE NONCLUSTERED INDEX [<Name of Missing Index, sysname,>]
ON [dbo].[student_table] ([Roll_No])
INCLUDE ([Student_Id])
GO

This doesn’t necessarily mean that the above index is the optimal choice for all your queries, but the fact that you’re querying with a predicate on Roll_No should be a strong-enough indicator that you should have an index on at least this Roll_No column. The simplest possible index here is:

CREATE INDEX i_student_roll_no
ON student_table (Roll_No);

With that index in place, we’ll now get an “Index Seek” operation, which runs instantly:

[execution plan: plan-2]
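The same hypothetical SQLite sketch as before, now with the article’s single-column index in place, reports a seek-style operation (SQLite calls it SEARCH):

```python
import sqlite3

# Same scaled-down stand-in for the SQL Server setup, plus the new index.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE student_table (
        Student_Id INTEGER PRIMARY KEY,
        Class_Id   INTEGER NOT NULL,
        Roll_No    TEXT    NOT NULL,
        CONSTRAINT UK_StudentUnique_1 UNIQUE (Class_Id, Roll_No)
    )
""")
con.execute("CREATE INDEX i_student_roll_no ON student_table (Roll_No)")

plan = con.execute("""
    EXPLAIN QUERY PLAN
    SELECT Student_Id FROM student_table
    WHERE Roll_No IN ('A101', 'A102', 'A103')
""").fetchall()
for row in plan:
    print(row[3])  # now a SEARCH using i_student_roll_no
```

Each value in the IN list becomes a point lookup against the index instead of a pass over all leaf nodes.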

Covering index

In this particular case, a “covering index” as suggested by Vlad Mihalcea in his answer might be appropriate. For instance:

CREATE INDEX i_student_roll_no
ON student_table (Roll_No, Student_Id);

The advantage of a covering index is that all the information needed for the query execution is already contained in the index. This is true in this particular case, but it can also be dangerous as:
  • The covering index needs more space, and if the table is already large, space might become an issue
  • The covering index only adds value as long as the query doesn’t also use additional columns (e.g. for projections, calculations, additional filtering, sorting, etc.). That’s not the case in this simple example, but it might change quickly in the near future
Thus, a covering index shouldn’t be your default choice in such cases. Be conservative and add only those columns to the index that add immediate value for filtering.
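If you want to experiment with the covering effect, here’s a hedged SQLite sketch. SQLite has no INCLUDE clause, so the projected column is simply appended to the index key; Student_First_Name stands in for any column the query projects besides the key:

```python
import sqlite3

# Hypothetical stand-in: emulate SQL Server's INCLUDE by appending the
# projected column to the index key (SQLite has no INCLUDE clause).
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE student_table (
        Student_Id         INTEGER PRIMARY KEY,
        Class_Id           INTEGER NOT NULL,
        Student_First_Name TEXT    NOT NULL,
        Roll_No            TEXT    NOT NULL
    )
""")
con.execute("""
    CREATE INDEX i_student_roll_no
    ON student_table (Roll_No, Student_First_Name)
""")

# Everything the query needs lives in the index: no table access at all.
plan = con.execute("""
    EXPLAIN QUERY PLAN
    SELECT Student_First_Name FROM student_table
    WHERE Roll_No = 'A101'
""").fetchall()
print(plan[0][3])  # SEARCH ... USING COVERING INDEX i_student_roll_no
```

The plan explicitly says COVERING INDEX, which is exactly the property (and the trade-off in index size) discussed above.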


I’d like to repeat this time and again:
Do NOT think that one second is fast
In fact:
Do NOT think that anything beyond 2-3ms is fast!
Unless you’re doing heavy reporting or batch processing, where longer processing times are expected, simple queries like these should be virtually instant. And most of the time, you can achieve this speed by adding an index.

On a side-note

The poster of the aforementioned question obviously has other issues as well. None of the above explains the execution-speed difference between Hibernate and “plain SQL”. But again, the message here is: if your “plain SQL” already takes more than one second, you have very low-hanging fruit to fix.

4 thoughts on “Do Not Think That One Second is Fast for Query Execution”

  1. The same is true of insert speed: 10 inserts per second is slow. You can easily get 1,000 tps in Oracle with batching – I was gonna say 10,000, but wanted to be safe there ….

    Because some developers have those wrong performance expectations, they will try to use caches or other kinds of performance workarounds instead of sorting out the query. That increases the complexity of their app and makes it harder to maintain.

    The database already has a cache and will do a better job at keeping it consistent.

    1. I completely agree. Since you’re talking about Oracle, they didn’t win the TPC benchmarks time and again for nothing. It’s really the best database out there, built by people who know what they’re doing. Also, using tools like Oracle Enterprise Manager, it is extremely easy to tune those queries that really need tuning (because they’re slow in production) – even when you’re under heavy load and the issues are “unexpected”, e.g. due to heavy contention.

  2. There’s another important thing to mention here: usage of an index depends on the kind of comparison, at least for Oracle, as a prominent example!

    If you use an “IN” or a sequence of “=” expressions, everything works fine.
    But if you use “!=”, even an index will not prevent full table scans.

    Therefore, if you have a limited number of values in a table column, say an enumeration, you’d better convert a “WHERE x != 'val'” to “WHERE (x = 'val1' OR x = 'val2' … OR x = 'valN')” for fixed values. Then the index can be used.

    If you have a catalog of values that are changing over time, then put that catalog in a different table and use a subquery with “NOT IN”, surprisingly this still uses an index.


    1. But if you use “!=” even an index will not prevent full table scans.

      Well, a “!=” predicate will exclude exactly one value out of possibly millions. So, (full) scanning the index doesn’t help when you have to full scan the table anyway…

      if you have a limited number of values in a table column, say an enumeration, you’d better convert a “WHERE x != 'val'” to “WHERE (x = 'val1' OR x = 'val2' … OR x = 'valN')” for fixed values.

      Interesting. I would have thought that a CHECK constraint might remedy that. I just verified and it’s not true. Good to know!
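The behaviour discussed in this thread can be sketched in SQLite as well. This is only a hypothetical stand-in; Oracle’s optimizer differs in the details, so treat it purely as an illustration of the principle:

```python
import sqlite3

# Minimal hypothetical table with an indexed column x.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x TEXT)")
con.execute("CREATE INDEX i_t_x ON t (x)")

# "!=" excludes a single value, so the index cannot narrow anything down:
neq = con.execute(
    "EXPLAIN QUERY PLAN SELECT x FROM t WHERE x != 'val'").fetchall()
# "IN" turns into a series of point lookups against the index:
inl = con.execute(
    "EXPLAIN QUERY PLAN SELECT x FROM t WHERE x IN ('val1', 'val2')").fetchall()

print(neq[0][3])  # a SCAN
print(inl[0][3])  # a SEARCH
```

Which matches the reply above: a “!=” predicate excludes one value out of possibly millions, so there is nothing for the index to seek into.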
