Have You Ever Wondered About the Difference Between NOT NULL and DEFAULT?

When writing DDL in SQL, you can specify a couple of constraints on columns, such as NOT NULL or DEFAULT constraints. Some people might wonder whether the two are actually redundant, i.e. is it still necessary to specify a NOT NULL constraint if there is already a DEFAULT clause?

The answer is: Yes!

Yes, you should still specify that NOT NULL constraint. And no, the two constraints are not redundant. The answer I gave here on Stack Overflow sums it up with an example, which I’m going to repeat here on our blog:

DEFAULT is the value that will be inserted in the absence of an explicit value in an INSERT / UPDATE statement. Let’s assume your DDL did not have the NOT NULL constraint:

ALTER TABLE tbl 
  ADD COLUMN col VARCHAR(20) 
    DEFAULT 'MyDefault';

Then you could issue these statements:

-- 1. This will insert "MyDefault" 
--    into tbl.col
INSERT INTO tbl (A, B) 
  VALUES (NULL, NULL);

-- 2. This will insert "MyDefault" 
--    into tbl.col
INSERT INTO tbl (A, B, col) 
  VALUES (NULL, NULL, DEFAULT);

-- 3. This will insert "MyDefault"
--    into tbl.col
INSERT INTO tbl
  DEFAULT VALUES;

-- 4. This will insert NULL
--    into tbl.col
INSERT INTO tbl (A, B, col)
  VALUES (NULL, NULL, NULL);

Alternatively, you can use DEFAULT in UPDATE statements, according to the SQL-1992 standard:

-- 5. This will update "MyDefault"
--    into tbl.col
UPDATE tbl SET col = DEFAULT;

-- 6. This will update NULL 
--    into tbl.col
UPDATE tbl SET col = NULL;

Note that not all databases support all of these standard SQL syntaxes. Adding the NOT NULL constraint will cause an error with statements 4 and 6, while statements 1-3 and 5 remain valid. So, to answer the question:

No, NOT NULL and DEFAULT are not redundant
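
For completeness, here is the same DDL as above with the NOT NULL constraint added (the exact ALTER TABLE syntax may vary slightly between databases):

ALTER TABLE tbl 
  ADD COLUMN col VARCHAR(20) 
    DEFAULT 'MyDefault' NOT NULL;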

That’s already quite interesting: the DEFAULT clause really only interacts with DML statements and how they specify the various columns that they’re inserting or updating. The NOT NULL constraint is a much more universal guarantee, one that constrains a column’s content also "outside" of the manipulating DML statements.

For instance, if you have a set of data and then you add a DEFAULT constraint, this will not affect your existing data, only new data being inserted.

If, however, you have a set of data and then you add a NOT NULL constraint, you can actually only do so if the constraint is valid – i.e. when there are no NULL values in your column. Otherwise, an error will be raised.
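
A sketch of what such a migration might look like (PostgreSQL syntax shown here; other databases use e.g. ALTER TABLE .. MODIFY):

-- Backfill existing NULL values first, otherwise
-- adding the constraint would fail:
UPDATE tbl
SET col = 'MyDefault'
WHERE col IS NULL;

-- Only now can the constraint be added:
ALTER TABLE tbl
  ALTER COLUMN col SET NOT NULL;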

Query performance

Another very interesting use case that applies only to NOT NULL constraints is their usefulness for query optimisers and query execution plans. Assume that you have such a constraint on your column and then you’re using a NOT IN predicate:

SELECT *
FROM table
WHERE value NOT IN (
  SELECT not_nullable
  FROM other_table
)

In particular, when you’re using Oracle, the above query will be much faster when the not_nullable column has an index AND that particular constraint, because, unfortunately, NULL values are not included in Oracle indexes.
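
The constraint matters for semantic reasons, too. A minimal sketch of NOT IN’s most treacherous edge case, using an inline list in place of the subquery:

-- If the right-hand side of NOT IN can produce a NULL,
-- the predicate never evaluates to TRUE (value <> NULL
-- yields UNKNOWN), and the query returns no rows at all:
SELECT *
FROM table
WHERE value NOT IN (1, 2, NULL)

Only when the database knows that not_nullable can never contain NULLs can it rule out this case and transform the NOT IN predicate into an efficient anti-join.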

Read more about NULL and NOT IN predicates here.

What if every object was an array? No more NullPointerExceptions!

To NULL or not to NULL? Programming language designers inevitably have to decide whether they support NULLs or not. And they’ve proven to have a hard time getting this right. NULL is not intuitive in any language, because NULL is an axiom of that language, not a rule that can be derived from lower-level axioms. Take Java for instance, where

// This yields true:
null == null

// These throw an exception (or cannot be compiled)
null.toString();
int value = (Integer) null;

It’s not as if there weren’t any alternatives. SQL, for instance, implements a more expressive but probably less intuitive three-valued logic, which most developers get wrong in subtle ways once in a while.
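
A quick sketch of where that logic bites, using a hypothetical table t:

-- Comparing anything with NULL yields UNKNOWN, not FALSE:
SELECT * FROM t WHERE col =  NULL;  -- never matches
SELECT * FROM t WHERE col <> NULL;  -- never matches either
SELECT * FROM t WHERE col IS NULL;  -- the correct check

-- Unlike Java, where null == null is true, even this
-- comparison yields UNKNOWN:
SELECT * FROM t WHERE NULL = NULL;  -- never matches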

At the same time, SQL doesn’t have "NULL" results, only "NULL" column values. From a set theory perspective, there are only empty sets, not NULL sets.
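
For instance (again with a hypothetical table t):

-- A query that matches no rows produces an empty
-- result set, never a NULL result:
SELECT col FROM t WHERE 1 = 0;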

Other languages allow for dereferencing null through special operators, letting the compiler generate the tedious null checks for you, behind the scenes. An example of this is Groovy with its null-safe dereferencing operator ?.. This solution is far from generally accepted, as can be seen in this discussion about a Scala equivalent. Scala uses Option, which Java 8 will imitate using Optional (or @Nullable).

Let’s think about a much broader solution

To me, nullability isn’t a first-class citizen. I personally dislike the fact that Scala’s Option[T] type pollutes my type system by introducing a generic wrapper type (even if it seems to implement similar array features through the Traversable trait). I don’t want to distinguish between the types Option[T] and T. This is specifically true when reasoning about types from a reflection API perspective, where Scala’s (and Java’s) type erasure legacy will forever keep me from accessing the type of T at runtime.

But much worse, most of the time, in my application I don’t really want to distinguish between "option" references and "some" references. Heck, I don’t even want to distinguish between having one reference and having dozens. jQuery got this quite right. One of the main reasons why jQuery is so popular is that everything you do, you do on a set of wrapped DOM elements. The API never distinguishes between matching 1 or 100 divs. Check out the following code:

// This clearly operates on a single object or none
$('div#unique-id').html('new content')
                  .click(function() { ... });

// This possibly operates on several objects or none
$('div.any-class').html('new content')
                  .click(function() { ... });

This is possible because jQuery wraps all matched DOM elements in a single array-like object, whose methods operate on the entire set at once (jQuery doesn’t actually modify JavaScript’s Array prototype itself). How much more awesome can it get? .html() and .click() are actions performed on the match as a whole, no matter whether you have zero, one, or 100 elements in your match. What would a more typesafe language look like, where everything behaves like an array (or an ArrayList)? Think about the following model:

class Customer {
  String firstNames;  // Read as String[] firstNames
  String lastName;    // Read as String[] lastName
  Order orders;       // Read as Order[] orders
}

class Order {
  int value;          // Read as int[] value
  boolean shipped() { // Read as boolean[] shipped
  }
}

Don’t rant (just yet). Let’s assume this wouldn’t lead to memory or computation overhead. Let’s continue thinking about the advantages of this. So, I want to see if a Customer’s orders have been shipped. Easy:

Customer customer = // ...
boolean shipped = customer.orders.shipped();

This doesn’t look spectacular (yet). But bear in mind that a customer can have several orders, and the above check really determines whether all orders have been shipped. I really don’t want to write the loop; I find it quite obvious that I want to perform the shipped() check on every order. Consider:

// The length pseudo-field would still be
// present on orders
customer.orders.length;

// In fact, the length pseudo-field is also
// present on customer, in case there are several
customer.length;

// Let's add an order to the customer:
customer.orders.add(new Order());

// Let's reset order
customer.orders.clear();

// Let's calculate the sum of all values
// OO-style:
customer.orders.value.sum();
// Functional style:
sum(customer.orders.value);

Of course, there would be a couple of caveats, and the above choice of method names might not be the best one. But being able to deal with single references (nullable or non-nullable) and array references (empty, single-valued, multi-valued) in the same syntactic way is just pure syntax awesomeness. Null-checks would be replaced by length checks, but mostly you wouldn’t even have to do those, because each method would always be called on every element in the array. The current single-reference vs. multi-reference semantics would be documented by naming conventions. Clearly, naming something "orders" indicates that multi-references are possible, whereas naming something "customer" indicates that multi-references are improbable.

As users have commented, this technique is commonly referred to as array programming, as implemented for instance in MATLAB or R.

Convinced?

I’m curious to hear your thoughts!