## How to do this in SQL?

On this blog, every problem finds its solution in SQL, and so does this one. To solve it, we're going to use the awesome Stack Exchange Data Explorer, which exposes a lot of Stack Exchange's publicly available usage information. Note that we won't query the consecutive days of visits, as this information is not made available publicly. Instead, let's query the consecutive days on which a user has made posts. The backing database is SQL Server, so we can run the following statement:

```
SELECT DISTINCT CAST(CreationDate AS DATE) AS date
FROM Posts
WHERE OwnerUserId = ##UserId##
ORDER BY 1
```

Here, `##UserId##` is a Data Explorer parameter for which you can substitute any user ID. The query generates something like:
```
date
----------
2010-11-26
2010-11-27
2010-11-29
2010-11-30
2010-12-01
2010-12-02
2010-12-03
2010-12-05
2010-12-06
2010-12-07
2010-12-08
2010-12-09
2010-12-13
2010-12-14
...
(769 rows)
```

As we can see in the data, there have been gaps in the very early days:

```
date
--------------------------------------
2010-11-26
2010-11-27   <---- Gap here after 2 days
2010-11-29
2010-11-30
2010-12-01
2010-12-02
2010-12-03   <---- Gap here after 5 days
2010-12-05
2010-12-06
2010-12-07
2010-12-08
2010-12-09   <---- Gap here after 5 days
2010-12-13
2010-12-14
...
```

Visually, it is very easy to see how many days in a row there were posts without any gaps. But how can we do this in SQL? To simplify the problem, let's "store" the individual queries in common table expressions. We'll call the above query `dates`:
```
WITH
-- This table contains all the distinct date
-- instances in the data set
dates(date) AS (
SELECT DISTINCT CAST(CreationDate AS DATE)
FROM Posts
WHERE OwnerUserId = ##UserId##
)
...
```

Eventually, we're going to run an aggregation of the following form over a second common table expression, `groups`, whose definition we'll derive below:

```
SELECT
COUNT(*) AS consecutiveDates,
MIN(date) AS minDate,
MAX(date) AS maxDate
FROM groups
GROUP BY grp -- This "grp" value will be explained later
ORDER BY 1 DESC, 2 DESC
```

I.e. we will group the records by an auxiliary "`grp`" value and count the number of dates in each group, as well as find the lowest and the highest date within each group.
## Generating groups for consecutive dates

Let's look at the data again. To illustrate the idea, we'll add consecutive row numbers, regardless of the gaps in the dates:

```
row number  date
--------------------------------
1           2010-11-26
2           2010-11-27
3           2010-11-29 <-- gap before this row
4           2010-11-30
5           2010-12-01
6           2010-12-02
7           2010-12-03
8           2010-12-05 <-- gap before this row
```

As you can see, regardless of whether there is a gap between two dates (i.e. the dates are not consecutive), their row numbers will still be consecutive. We can generate these row numbers very easily with the `ROW_NUMBER()` window function:
```
SELECT
ROW_NUMBER() OVER (ORDER BY date) AS [row number],
date
FROM dates
```

The row numbers alone don't solve the problem yet. The trick is to subtract each date's row number from the date itself: within a run of consecutive dates, both advance by one per row, so their difference stays constant; at every gap, the difference changes. Let's add this to the query:

```
WITH
-- This table contains all the distinct date
-- instances in the data set
dates(date) AS (
SELECT DISTINCT CAST(CreationDate AS DATE)
FROM Posts
WHERE OwnerUserId = ##UserId##
),
-- Generate "groups" of dates by subtracting the
-- date's row number (no gaps) from the date itself
-- (with potential gaps). Whenever there is a gap,
-- there will be a new group
groups AS (
SELECT
ROW_NUMBER() OVER (ORDER BY date) AS rn,
dateadd(day, -ROW_NUMBER() OVER (ORDER BY date), date) AS grp,
date
FROM dates
)
SELECT *
FROM groups
ORDER BY rn
```

```
rn  grp         date
--- ----------  ----------
1   2010-11-25  2010-11-26
2   2010-11-25  2010-11-27
3   2010-11-26  2010-11-29
4   2010-11-26  2010-11-30
5   2010-11-26  2010-12-01
6   2010-11-26  2010-12-02
7   2010-11-26  2010-12-03
8   2010-11-27  2010-12-05
9   2010-11-27  2010-12-06
10  2010-11-27  2010-12-07
11  2010-11-27  2010-12-08
12  2010-11-27  2010-12-09
13  2010-11-30  2010-12-13
14  2010-11-30  2010-12-14
```

All we did was subtract the row number from the date to get a new date "`grp`". The actual date obtained this way is irrelevant; it's just an auxiliary value.
What we can guarantee, though, is that for consecutive dates, the value of `grp` will be the same, because for any two adjacent consecutive dates, the following two equations hold:

```
date2 - date1 = 1  -- difference in days between dates
rn2   - rn1   = 1  -- difference in row numbers
```

Yet for non-consecutive dates, while the difference in row numbers is still 1, the difference in days is no longer 1. The groups can now be seen easily:

```
rn  grp         date
--- ----------  ----------
1   2010-11-25  2010-11-26
2   2010-11-25  2010-11-27

3   2010-11-26  2010-11-29
4   2010-11-26  2010-11-30
5   2010-11-26  2010-12-01
6   2010-11-26  2010-12-02
7   2010-11-26  2010-12-03

8   2010-11-27  2010-12-05
9   2010-11-27  2010-12-06
10  2010-11-27  2010-12-07
11  2010-11-27  2010-12-08
12  2010-11-27  2010-12-09

13  2010-11-30  2010-12-13
14  2010-11-30  2010-12-14
```

Thus, the complete query now looks like this:

```
WITH
-- This table contains all the distinct date
-- instances in the data set
dates(date) AS (
SELECT DISTINCT CAST(CreationDate AS DATE)
FROM Posts
WHERE OwnerUserId = ##UserId##
),
-- Generate "groups" of dates by subtracting the
-- date's row number (no gaps) from the date itself
-- (with potential gaps). Whenever there is a gap,
-- there will be a new group
groups AS (
SELECT
ROW_NUMBER() OVER (ORDER BY date) AS rn,
dateadd(day, -ROW_NUMBER() OVER (ORDER BY date), date) AS grp,
date
FROM dates
)
SELECT
COUNT(*) AS consecutiveDates,
MIN(date) AS minDate,
MAX(date) AS maxDate
FROM groups
GROUP BY grp
ORDER BY 1 DESC, 2 DESC
```

```
consecutiveDates minDate    maxDate
---------------- ---------- ----------
14               2012-08-13 2012-08-26
14               2012-02-03 2012-02-16
10               2013-10-24 2013-11-02
10               2011-05-11 2011-05-20
9                2011-06-30 2011-07-08
7                2012-01-17 2012-01-23
7                2011-06-14 2011-06-20
6                2012-04-10 2012-04-15
6                2012-04-02 2012-04-07
6                2012-03-26 2012-03-31
6                2011-10-27 2011-11-01
6                2011-07-17 2011-07-22
6                2011-05-23 2011-05-28
...
```

## Bonus query 1: Find consecutive weeks

Our choice of day granularity in the above query was arbitrary. We simply took the timestamp from our time series and "collapsed" it to the desired granularity using a `CAST` expression:
```
SELECT DISTINCT CAST(CreationDate AS DATE)
```

If we want to find consecutive weeks instead, we can collapse the timestamp to the week granularity, encoding year and week in a single number:

```
SELECT DISTINCT datepart(year, CreationDate) * 100
+ datepart(week, CreationDate)
```

```
WITH
weeks(week) AS (
SELECT DISTINCT datepart(year, CreationDate) * 100
+ datepart(week, CreationDate)
FROM Posts
WHERE OwnerUserId = ##UserId##
),
groups AS (
SELECT
ROW_NUMBER() OVER (ORDER BY week) AS rn,
week - ROW_NUMBER() OVER (ORDER BY week) AS grp,
week
FROM weeks
)
SELECT
COUNT(*) AS consecutiveWeeks,
MIN(week) AS minWeek,
MAX(week) AS maxWeek
FROM groups
GROUP BY grp
ORDER BY 1 DESC, 2 DESC
```

```
consecutiveWeeks minWeek maxWeek
---------------- ------- -------
45               201401  201445
29               201225  201253
25               201114  201138
23               201201  201223
20               201333  201352
16               201529  201544
15               201305  201319
12               201514  201525
12               201142  201153
9                201502  201510
7                201447  201453
7                201321  201327
6                201048  201053
4                201106  201109
3                201329  201331
3                201102  201104
2                201301  201302
2                201111  201112
1                201512  201512
```

Unsurprisingly, the consecutive weeks span much longer ranges, as I generally use Stack Overflow extensively.

## Bonus query 2: Simplify the query using DENSE_RANK()

In a previous article, we've shown this SQL trick: `ROW_NUMBER()` is to `SELECT` what `DENSE_RANK()` is to `SELECT DISTINCT`.

If we go back to our consecutive days example, we can rewrite the query to find the distinct dates AND the groups in one go, using `DENSE_RANK()`:
```
WITH
groups(date, grp) AS (
SELECT DISTINCT
CAST(CreationDate AS DATE),
dateadd(day,
-DENSE_RANK() OVER (ORDER BY CAST(CreationDate AS DATE)),
CAST(CreationDate AS DATE)) AS grp
FROM Posts
WHERE OwnerUserId = ##UserId##
)
SELECT
COUNT(*) AS consecutiveDates,
MIN(date) AS minDate,
MAX(date) AS maxDate
FROM groups
GROUP BY grp
ORDER BY 1 DESC, 2 DESC
```

## Further reading

The above has been one very useful example of using window functions (`ROW_NUMBER()`) in SQL. You can learn more about window functions in the other articles on this blog.
## Comments

The first instance of this SQL in your article should use MIN(date) and MAX(date) instead of MIN(week) and MAX(week).

You’re right, thanks, nice catch. Fixed.

It’s still not fixed.

Yeah, who would have thought that a "first instance" would be copy-pasted :) Thanks. Fixed the second instance, too.

How could I make this same query to return the list of IDs with the longest consecutive series of events?

You already have the GROUP BY that delimits each series. Now you can collect all the IDs in a comma-separated string, for instance, if that's the format you're looking for. In PostgreSQL, this would be done with STRING_AGG, in Oracle with LISTAGG, and in SQL Server with STUFF (a bit more tricky). Hope this helps.
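
In PostgreSQL, for instance, such an aggregation might be sketched like this (hypothetical: an `events(id, date)` table stands in for your actual data):

```sql
-- Hypothetical sketch (PostgreSQL): collect the IDs of each
-- consecutive series into a comma-separated string with STRING_AGG.
-- "events(id, date)" is an assumed table, not from the article.
WITH groups AS (
  SELECT
    id,
    date,
    -- DENSE_RANK (rather than ROW_NUMBER) tolerates several
    -- events on the same date
    date - CAST(DENSE_RANK() OVER (ORDER BY date) AS INT) AS grp
  FROM events
)
SELECT
  COUNT(*)                                        AS events,
  MIN(date)                                       AS minDate,
  MAX(date)                                       AS maxDate,
  STRING_AGG(CAST(id AS TEXT), ',' ORDER BY date) AS ids
FROM groups
GROUP BY grp
ORDER BY 1 DESC, 2 DESC;
```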

Hi,

Been implementing this to look on a minute basis to be able to report dates where our system has been reportedly “down”. I have however found gaps in the monitoring tool which sometimes misses a minute of reporting and this causes a new group to be created.

Is there any way I can increase the limit to check for gaps if there are 2 minutes between the dates?

If you want to look for 2-minute gaps between timestamps, that's a different problem statement than if you want to truncate timestamps to a 2-minute lower bound and see if those have gaps. The latter is much easier. The former is best solved with recursive SQL, I suspect, or with databases that have enhanced SQL features for time series analyses, like Vertica.
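
The truncation variant might be sketched like this in SQL Server (hypothetical: a `Monitoring(ts)` table stands in for your monitoring data):

```sql
-- Hypothetical sketch (SQL Server): collapse timestamps into
-- 2-minute buckets, then reuse the row-number trick on the buckets.
-- "Monitoring(ts)" is an assumed table, not from the article.
WITH buckets(bucket) AS (
  -- Integer division truncates to the 2-minute lower bound
  SELECT DISTINCT datediff(minute, '2000-01-01', ts) / 2
  FROM Monitoring
),
groups AS (
  SELECT
    bucket,
    bucket - ROW_NUMBER() OVER (ORDER BY bucket) AS grp
  FROM buckets
)
SELECT
  COUNT(*) * 2                                   AS consecutiveMinutes,
  dateadd(minute, MIN(bucket) * 2, '2000-01-01') AS minTime,
  dateadd(minute, MAX(bucket) * 2, '2000-01-01') AS maxTime
FROM groups
GROUP BY grp
ORDER BY 1 DESC, 2 DESC;
```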

I remember it's called the Tabibitosan method

Excellent, I didn’t know it had a name (although, one that I won’t be able to remember, I think :) ). Thanks!

Thanks for this post, it’s a creative solution.

One suggestion: the week query won't return correct results, because the weeks don't wrap around the end of the year. You're "unfairly" breaking streaks at the end of the year. A wrapping query would look something like this:
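
Perhaps something along these lines (a sketch only: it collapses each date to the start of its week via `datepart(weekday, ...)`, which depends on the `DATEFIRST` setting, and then subtracts row numbers as weeks, so streaks survive year boundaries):

```sql
-- Sketch (SQL Server): use each week's start date, rather than a
-- yyyyww number, as the week value. Consecutive weeks then differ
-- by exactly 7 days, including across year ends.
WITH weeks(weekStart) AS (
  SELECT DISTINCT dateadd(
    day,
    1 - datepart(weekday, CAST(CreationDate AS DATE)),
    CAST(CreationDate AS DATE))
  FROM Posts
  WHERE OwnerUserId = ##UserId##
),
groups AS (
  SELECT
    weekStart,
    -- Subtract rn *weeks* so that grp stays constant within a streak
    dateadd(week,
      -ROW_NUMBER() OVER (ORDER BY weekStart),
      weekStart) AS grp
  FROM weeks
)
SELECT
  COUNT(*)       AS consecutiveWeeks,
  MIN(weekStart) AS minWeek,
  MAX(weekStart) AS maxWeek
FROM groups
GROUP BY grp
ORDER BY 1 DESC, 2 DESC
```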

Excellent, thank you very much for the feedback (and nice observation!)

I think I'm on the right track with your SQL, but what should we do if the dates have a time part (datetime), and we want a new group whenever there are 10 minutes or more between the ordered datetimes? Thanks.

Hey there,

I’m not really sure what you mean. I guess these kinds of problems are best described by example input/output. And while you’re at it, I think this would be an excellent question for Stack Overflow! :)

Thanks very much! Indeed, Stack Overflow helped to answer it! Here is the link

Oh, excellent! That was quick! Thanks for sharing here

Is there a way of getting consecutive weeks without the WITH clause? I don't think MySQL before 8 has this clause. I'm having a hard time figuring out a way of doing it with JOIN only. Thanks!

MySQL 8 supports window functions and the WITH clause. But anyway, please ask your question on https://stackoverflow.com

A suggestion:

1) Identify the edges of the groups with the lag operator:
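
A sketch of that first step (assuming SQL Server syntax and the `dates` expression from the article):

```sql
-- Sketch: a row is an "edge" (the start of a new series) when the
-- previous date is not exactly one day before it. LAG returns NULL
-- for the first row, so that row is also flagged as an edge.
SELECT
  date,
  CASE
    WHEN dateadd(day, 1, LAG(date) OVER (ORDER BY date)) = date
    THEN 0
    ELSE 1
  END AS edge
FROM dates
```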

2) Cumulative sum on “edge”. You now have the group identifier.

Sure, that identifies a row at the "edge" of a group, or at the beginning of a series. But how do you take it from there? I.e. how do you implement your step 2)?

How are weekends handled in this sample? I am trying to count suspension days. That said, a student may get suspended on a Thursday or a Friday for 4 days. I need to count Thursday, Friday, Monday, and Tuesday as consecutive days. Is that even possible?

Of course it’s possible :-) But quite a different problem to the one I discussed in the article. Here’s a solution for PostgreSQL:
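
One possible shape of such a recursive query (a sketch, not necessarily the exact solution):

```sql
-- Sketch (PostgreSQL): recursively enumerate days from today,
-- keep only weekdays (ISODOW 1-5 = Monday-Friday), take the first 4.
WITH RECURSIVE days(day) AS (
  SELECT CURRENT_DATE
  UNION ALL
  SELECT day + 1
  FROM days
  WHERE day < CURRENT_DATE + 14  -- safety bound for the recursion
)
SELECT day
FROM days
WHERE EXTRACT(ISODOW FROM day) <= 5
ORDER BY day
LIMIT 4
```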

It calculates the next 4 weekdays from now, recursively. Surely there are other solutions as well, e.g. using window functions. Note that this only skips weekends, not holidays or other special cases, in which case you would need to store your calendar in a table.

Hi there, I don't understand the third part of your query. You're selecting from a table "GROUP" and grouping by grp. Is this a separate table that you already have, or am I missing a step?

The common table expression is called “groups”, and it is defined in the WITH clause of each query. Hope this helps

How can I make the column "grp" in PostgreSQL?

I don’t think I understand your question

PostgreSQL doesn't have the dateadd function; the closest equivalent is "date + INTERVAL '1 day'", but I can't use row_number as the parameter for that function.

Use `date - row_number() OVER()::INT` instead