A Guide to SQL Naming Conventions

One of Java’s big strengths, in my opinion, is the fact that most naming conventions have been established by the creators of the language. For example:

  • Class names are in PascalCase
  • Member names are in camelCase
  • Constants are in SNAKE_CASE

If someone does not adhere to these conventions, the resulting code quickly looks non-idiomatic.

What about SQL?

SQL is different. Some people claim that UPPER CASE IS FASTEST, while others do not agree on the “correct” case.

There seems to be a tendency towards writing identifiers in lower case, with no agreement on the case of keywords. Also, in most dialects, people prefer snake_case for identifiers, although in SQL Server, people seem to prefer PascalCase or camelCase.

That’s for style. And I’d love to hear your opinion on style and naming conventions in the comments!

What about naming conventions?

In many languages, naming conventions (of identifiers) are not really relevant, because of the way the language handles namespacing: there is relatively little risk of conflict. In SQL, this is a bit different. Most SQL databases support only a 3-4 layered set of namespaces:

  1. Catalog
  2. Schema
  3. Table (or procedure, type)
  4. Column (or parameter, attribute)

Some dialect-dependent caveats:

  • While SQL Server supports both catalog AND schema, most dialects only support one of them
  • MySQL treats the catalog (“database”) as the schema
  • Oracle supports a package namespace for procedures, between schema and procedure

In any case, there is no such concept as package (“schema”) hierarchies as there is in languages like Java, which makes namespacing in SQL quite tricky. Here is a problem that can easily happen when writing stored procedures:

FUNCTION get_name (id NUMBER) RETURN VARCHAR2 IS
  result VARCHAR2(100);
BEGIN
  SELECT name
  INTO result
  FROM customer
  WHERE id = id; -- Ehm...

  RETURN result;
END;

As can be seen above, both the CUSTOMER.ID column as well as the GET_NAME.ID parameter could be resolved by the unqualified ID expression. This is easy to work around (e.g. by qualifying the parameter as GET_NAME.ID in the WHERE clause), but it is a tedious problem to have to think of all the time.

Another example is when joining tables, which probably have duplicate column names:

SELECT *
FROM customer c
JOIN address a ON c.id = a.customer_id

This query might produce two ambiguous ID columns: CUSTOMER.ID and ADDRESS.ID. In the SQL language, it is mostly easy to distinguish between them by qualifying them. But in clients (e.g. Java), they are less easy to qualify properly. If we put the query in a view, it gets even trickier.
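To illustrate the client-side problem, here is a minimal JDBC sketch (my own example, assuming an existing java.sql.Connection called connection): ResultSet resolves columns by label, and with two columns labelled ID, it does not complain, it just picks one.

try (PreparedStatement stmt = connection.prepareStatement(
         "SELECT * FROM customer c JOIN address a ON c.id = a.customer_id");
     ResultSet rs = stmt.executeQuery()) {

    while (rs.next()) {

        // Per the JDBC spec, the first matching column label wins, so this
        // silently returns either CUSTOMER.ID or ADDRESS.ID
        long id = rs.getLong("id");
    }
}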

“Hungarian notation”

Hence, SQL and the procedural languages are a rare case where some type of Hungarian notation could be useful. Unlike with Hungarian notation itself, where the data type is encoded in the name, in this case, we might encode some other piece of information in the name. Here’s a list of rules I’ve found very useful in the past:

1. Prefixing objects by semantic type

Tables, views, and other “tabular things” may quickly conflict with each other. Especially in Oracle, where one does not simply create a schema because of all the security hassles this produces (schemas and users are kinda the same thing, which is nuts, of course; a schema should exist completely independently of a user), it may be useful to encode a schema in the object name:

  • C_TABLE: The C_ prefix denotes “customer data”, e.g. as opposed to:
  • S_TABLE: The S_ prefix denotes “system data” or “master data”
  • L_TABLE: The L_ prefix denotes “log data”
  • V_VIEW: The V_ prefix denotes a view
  • P_PARAMETER: The P_ prefix denotes a procedure or function parameter
  • L_VARIABLE: The L_ prefix denotes a local variable

Besides, when using views for security and access control, one might have additional prefixes or suffixes to denote the style of view:

  • _R: The _R suffix denotes read only views
  • _W: The _W suffix denotes writeable (updatable) views

This list is obviously incomplete. I’m undecided whether this is necessarily a good thing in general. For example, should packages, procedures, sequences, and constraints be prefixed as well? Often, they do not lead to ambiguities in namespace resolution. But sometimes they do. What matters, as always, is to be consistent with a ruleset. So, once this practice is embraced, it should be applied everywhere.

2. Singular or plural table names

Who cares. Just pick one and use it consistently.

3. Establishing standard aliasing

Another technique that I’ve found very useful in the past is a standard approach to aliasing things. We need to alias tables all the time, e.g. in queries like this:

SELECT *
FROM customer c
JOIN address a ON c.id = a.customer_id

But what if we have to join ACCOUNT as well? We already used A for ADDRESS, so we cannot reuse A. But if we don’t re-use the same aliases in every query, the queries start to be a bit confusing to read.

We could just not use aliases and always fully qualify all identifiers:

SELECT *
FROM customer
JOIN address ON customer.id = address.customer_id

But that quickly turns out to be verbose, especially with longer table names, and thus not very readable either. The standard approach to aliasing that I’ve found very useful is a simple algorithm that produces 4-letter aliases for every table. Given the Sakila database, we could establish:

PREFIX  TABLE NAME
ACTO    ACTOR
ADDR    ADDRESS
CATE    CATEGORY
CITY    CITY
COUN    COUNTRY
CUST    CUSTOMER
FILM    FILM
FIAC    FILM_ACTOR
FICA    FILM_CATEGORY
FITE    FILM_TEXT
INVE    INVENTORY
LANG    LANGUAGE
PAYM    PAYMENT
RENT    RENTAL
STAF    STAFF
STOR    STORE

The algorithm to shorten a table name is simple:

  • If the name does not contain an underscore, take the first four letters, e.g. CUSTOMER becomes CUST
  • If the name contains 1 underscore, take the first two letters of each word, e.g. FILM_ACTOR becomes FIAC
  • If the name contains 2 underscores, take the first two letters of the first word, and the first letter of the other words, e.g. FILM_CATEGORY_DETAILS becomes FICD
  • If the name contains 3 or more underscores, take the first letter of each word
  • If a new abbreviation causes a conflict with the existing ones, make a pragmatic choice

This technique worked well for large-ish schemas with 500+ tables. You’d think that abbreviations like FICD are meaningless, and indeed, they are, at first. But once you start writing a ton of SQL against this schema, you start “learning” the abbreviations, and they become meaningful.
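The algorithm above is easy to implement. Here is a minimal sketch in Java (a hypothetical helper of my own, not part of any tool; it assumes every word has at least two letters, and conflict resolution remains the pragmatic choice mentioned above):

public static String alias(String table) {
    String[] words = table.toUpperCase().split("_");

    // No underscore: the first four letters
    if (words.length == 1)
        return words[0].substring(0, Math.min(4, words[0].length()));

    // One underscore: the first two letters of each word
    else if (words.length == 2)
        return words[0].substring(0, 2) + words[1].substring(0, 2);

    // Two underscores: two letters of the first word, one letter of the others
    else if (words.length == 3)
        return words[0].substring(0, 2) + words[1].substring(0, 1) + words[2].substring(0, 1);

    // Three or more underscores: the first letter of each word
    else {
        StringBuilder sb = new StringBuilder();
        for (String word : words)
            sb.append(word.charAt(0));
        return sb.toString();
    }
}

For example, alias("FILM_ACTOR") yields FIAC, matching the table above.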

What’s more, you can use these abbreviations everywhere, not just when writing joins:

SELECT 
  cust.first_name,
  cust.last_name,
  addr.city
FROM customer cust
JOIN address addr ON cust.id = addr.customer_id

But also when aliasing columns in views or derived tables:

SELECT 
  cust.first_name AS cust_first_name,
  cust.last_name AS cust_last_name,
  addr.city AS addr_city
FROM customer cust
JOIN address addr ON cust.id = addr.customer_id

This becomes invaluable when your queries become more complex (say, 20-30 joins) and you start projecting tons of columns in a library of views that select from other views that select from other views. It’s easy to keep consistent, and you can also easily recognise things like:

  • What table a given column originates from
  • If that column has an index you can use (on a query against the view!)
  • If two columns that look the same (e.g. FIRST_NAME) really are the same

I think that if you work with views extensively (I’ve worked with schemas of 1000+ views in the past), then such a naming convention is almost mandatory.

Conclusion

There isn’t really a “correct” way to name things in any language, including SQL. But given the limitations of some SQL dialects, or the fact that after joining, two names may easily conflict, I’ve found the above two tools very useful in the past: 1) prefixing identifiers with a hint about their object types, and 2) establishing a standard for aliasing tables, and then aliasing column names accordingly.

When you’re using a code generator like jOOQ’s, the generated column names on views will already include the table name as a prefix, so you can easily “see” what you’re querying.

I’m curious about your own naming conventions, looking forward to your comments in the comment section!

Dogfooding in Product Development

Dogfooding, or eating your own dog food, is a practice that all product developers should apply all the time. According to Wikipedia:

Dogfooding occurs when an organization uses its own product. This can be a way for an organization to test its products in real-world usage. Hence dogfooding can act as quality control, and eventually a kind of testimonial advertising. Once in the market, dogfooding demonstrates confidence in the developers’ own products.

I’ve recently started delivering a talk about API design at conferences, where I mention dogfooding as an excellent approach to make sure the user experience of your API is great: the more you use your own API, the better it gets from a UX and usability perspective.

Dogfooding via tests

We do a lot of dogfooding ourselves, inevitably, as we write tons and tons of tests for jOOQ, to make sure jOOQ works correctly on all 26 RDBMS that we currently support.

Writing a test for a new API means we have to immediately use the new API for the first time. This helps discover the first usability problems. But there are even better ways:

Dogfooding via significant new functionality

Another great way to dogfood is to create significant new features, for example the new schema diff tool that jOOQ 3.13 will ship with (#9425). It is part of a bigger project that we call DDL interpretation, in which we implement the DDL part of a database by maintaining an up-to-date database schema based on a stream of DDL statements. This will be part of a variety of improvements and value propositions in the area of database change management / SQL migrations. Recently, we’ve published a post about an improved Liquibase integration, which goes in a similar direction.

The schema diff tool is something many database products are offering. For us, it is relatively simple to implement as we:

  • Support a variety of schema meta representations
  • Support a lot of DDL syntax
  • Support exporting schema meta representations as DDL

We’ve always had a type called org.jooq.Meta. That type represents your database. Historically, it just gave jOOQ-style access to JDBC’s java.sql.DatabaseMetaData. But over time, we also started supporting exposing generated jOOQ code, or XML files as org.jooq.Meta.

With jOOQ 3.13, we’ll support interpreting arbitrary DDL (parsed or manually created using the jOOQ API), as well as Liquibase XML files, to create such an org.jooq.Meta representation, which can then be turned into DDL again using the new Meta.ddl() method:

System.out.println(
  ctx.meta("")
     .apply("create table t (i int)")
     .apply("alter table t add j int")
     .apply("alter table t alter i set not null")
     .apply("alter table t add primary key (i)")
);

When we run the above program, we’re getting the following output:

create table t(
  i int not null,
  j int null,
  primary key (i)
);

That’s already cool – we can create a snapshot of our schema at any point of a set of database migration scripts. Can we also do the inverse? We can, with the new diff tool:

System.out.println(
  ctx.meta(
    "create table t (i int)"
  ).diff(ctx.meta(
    "create table t ("
  + "i int not null, "
  + "j int null, "
  + "primary key (i))"
  ))
);

The output generated from this is:

alter table t alter i set not null;
alter table t add j integer null;
alter table t add constraint primary key (i);

This is part of an exciting new set of DDL / migration features that we can hardly wait to publish with jOOQ 3.13 (around Q1 2020), as we believe it will greatly help with your existing database change management solution – perhaps as a Flyway or Liquibase plugin.

What does this have to do with dogfooding?

While developing this feature, we’ve discovered numerous missing features, which we also implemented for jOOQ 3.13:

  • #7752: Our current meta model for sequences does not remember flags like MAXVALUE
  • #9428: Meta.toString() could just call Meta.ddl(), the DDL export. This is what I’ve shown above, very useful when debugging with small schemas!
  • #9433: Meta.equals() and Meta.hashCode() were not yet implemented, as they didn’t make sense, historically.
  • #9434: Our current DDL export should support reordering objects in alphabetical order for the export. This is useful for equality checks, text-based diffs, etc.
  • #9437: We don’t support ALTER SEQUENCE statements that allow for modifying sequence properties like MAXVALUE yet.
  • #9438: The current emulation of ALTER SEQUENCE .. RESTART hard codes restarting the sequence at 1, when in fact, it could be some other START WITH value
  • #9440: We’ll need a synthetic syntax to drop unnamed foreign keys.

This is just a short extract of things we’ve implemented this week based on our discoveries of what’s missing when implementing the “big” feature: the Meta.diff(Meta) method. The diff() method will be the one highlighted in the release notes of 3.13. But the little things above are what ultimately make jOOQ so useful – the many little things that were discovered while dogfooding and that help everyone, not just the users who use the diff() method.

Dogfooding via blogging and documentation

But perhaps even better than implementing new features is blogging about, or documenting, an API. When blogging or documenting, the vendor / maintainer has to put themselves into the position of the user, in particular the first-time user of a specific API.

Imagine: after writing all this code that I, as an author, love, I have to reset my experience and pretend I don’t know what to expect. Then, sit down and write a really good, simple example using the API that I wrote.

The first example I came up with was not like this (as that API didn’t exist yet):

System.out.println(
  ctx.meta("")
     .apply("create table t (i int)")
     .apply("alter table t add j int")
     .apply("alter table t alter i set not null")
     .apply("alter table t add primary key (i)")
);

Instead, it was like this (which works the same way):

System.out.println(
  ctx.meta("create table t (i int);\n"
         + "alter table t add j int;\n"
         + "alter table t alter i set not null;\n"
         + "alter table t add primary key (i);")
);

I thought to myself, do I really need all that string concatenation? I already have this Meta.apply(Queries) method, which would add even more value. But then, my code would look like this:

System.out.println(
  ctx.meta("")
     .apply(queries(createTable("t").column("i", INTEGER)))
     .apply(queries(alterTable("t").add("j", INTEGER)))
     .apply(queries(alterTable("t").alter("i").setNotNull()))
     .apply(queries(alterTable("t").add(primaryKey("i"))))
);

That would be cool, too. I could show that our DSL API and our parser can do the same things. But I didn’t want to show the DSL API in this blog post. It would only distract from my main point. I could, of course, parse the queries explicitly:

System.out.println(
  ctx.meta("")
     .apply(ctx.parse("create table t (i int)"))
     .apply(ctx.parse("alter table t add j int"))
     .apply(ctx.parse("alter table t alter i set not null"))
     .apply(ctx.parse("alter table t add primary key (i)"))
);

But why not just add convenience API that does this for me?

System.out.println(
  ctx.meta("")
     .apply("create table t (i int)")
     .apply("alter table t add j int")
     .apply("alter table t alter i set not null")
     .apply("alter table t add primary key (i)")
);

Where Meta.apply(String) looks like this:

public final Meta apply(String diff) {
  return apply(dsl().parser().parse(diff));
}

That’s mere convenience. It is not needed. But it is very useful:

  • For tests
  • For blog posts
  • For documentation
  • For new users
  • For simple applications

The more thorough, more powerful API that accepts Queries (which wraps a bunch of parsed or hand-constructed jOOQ Query objects) is the real feature here. But convenience helps the user very much!

I would not have discovered this requirement without dogfooding.

Convenience

A much underrated tweet by Brian Goetz is relevant here.

APIs need a ton of abstractions. The JDK Collector API is a very good example. In Java and in SQL, in order to write a custom aggregate function, you need these four operations:

  • Supplier<A>: A supplier that provides an empty, intermediary data structure to aggregate into
  • BiConsumer<A, T>: An accumulator that accumulates new values from the stream into our intermediary data structure.
  • BinaryOperator<A>: A combiner that combines two intermediary data structures. This is used for parallel streams only.
  • Function<A, R>: The finisher function that extracts the result from the intermediary data structure.

Writing a Collector from scratch is super annoying and “low level”. Using convenience API is much better, which is why the JDK has Collectors, jOOλ has Agg, and other Collector libraries provide some basic building blocks. The Stream API does not have to know about all of these building blocks; it just knows a single, abstract type (which serves the API). But we users don’t want only a single abstraction. We want those convenient building blocks (which serve the user).
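To make the “low level” part concrete, here is a minimal sketch (my own example, not taken from the JDK documentation) of a custom averaging Collector built from exactly those four operations:

Collector<Integer, long[], Double> avg = Collector.of(
    () -> new long[2],                                    // Supplier: [sum, count]
    (a, i) -> { a[0] += i; a[1]++; },                     // Accumulator
    (a, b) -> { a[0] += b[0]; a[1] += b[1]; return a; },  // Combiner (parallel streams)
    a -> a[1] == 0 ? 0.0 : (double) a[0] / a[1]           // Finisher
);

System.out.println(Stream.of(1, 2, 3, 4).collect(avg)); // 2.5

Compare that with the convenient Collectors.averagingInt(i -> i), which hides the same four operations behind a single factory method.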

Convenience is never a requirement, but it is always a big plus, in APIs as in languages.

Have you noticed this new method on InputStream, which has been added in Java 9?

public long transferTo(OutputStream out) throws IOException {
    Objects.requireNonNull(out, "out");
    long transferred = 0;
    byte[] buffer = new byte[DEFAULT_BUFFER_SIZE];
    int read;
    while ((read = this.read(buffer, 0, DEFAULT_BUFFER_SIZE)) >= 0) {
        out.write(buffer, 0, read);
        transferred += read;
    }
    return transferred;
}

How many times have we written this kind of code? N times (and a big N, too)

How many times have we enjoyed writing this code? 1 time

How many times too many have we written this code? N - 1 times.
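
For comparison, here is what the call site looks like since Java 9 (a minimal usage sketch of my own, with placeholder file names):

try (InputStream in = Files.newInputStream(Paths.get("in.bin"));
     OutputStream out = Files.newOutputStream(Paths.get("out.bin"))) {

    // The manual buffer loop from above collapses into a single call
    long transferred = in.transferTo(out);
    System.out.println(transferred + " bytes copied");
}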

Dogfooding is a very good way to make sure that this kind of convenience, which our users love us for, makes it into products much earlier on. Because we, the vendors, are the first users, and we hate writing this silly code all the time. And who is a better first user to implement cool new features for, if not ourselves?

How to Simulate a Liquibase Migration using H2

This post is part of a new blog series about database migrations, which will cover a variety of database change management topics.

In the near future, we’ll look much more into these topics, hoping to add more value to our users’ existing Flyway, Liquibase, and other integrations where the migration tools can profit a lot from jOOQ’s most recent features, including the parser.

Embracing Liquibase in the next jOOQ versions

In jOOQ 3.13, we’re going to offer a better integration for those users who use Liquibase for database change management. Our existing DDLDatabase parses, translates, and simulates a DDL script based migration against an in-memory H2 database instance.

This is using our built-in SQL translation functionality (test it on our website here: https://www.jooq.org/translate) to translate this Oracle SQL:

create table t (v varchar2(100))

to this H2 SQL:

create table t (v varchar(100))
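
Programmatically, such a translation can be done with the parser API as well. Here is a minimal sketch (my own example; the exact output may differ slightly between jOOQ versions):

// Parse the vendor-specific SQL into jOOQ's internal representation...
Query query = DSL.using(SQLDialect.DEFAULT)
                 .parser()
                 .parseQuery("create table t (v varchar2(100))");

// ... and render it again in the target dialect
System.out.println(DSL.using(SQLDialect.H2).render(query));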

That’s just a trivial example. More sophisticated translations are possible too. The main purpose of doing this has been, historically, to offer an “offline” jOOQ source code generation step that does not require connecting to an actual Oracle database instance to reverse engineer your schema, which you can represent in form of SQL migration scripts, which you’d typically run with something like Flyway.

In the future, we’ll add more features around this. One thing we’ve been thinking about has been to allow for translating Flyway migrations to other dialects, to make them vendor agnostic.

Simulating Liquibase migrations

Flyway and Liquibase work in quite a similar fashion, with Liquibase offering an additional abstraction layer over the SQL language. While pure SQL migrations are also possible with Liquibase’s SQL change, Liquibase also offers a set of mostly low level DDL command abstractions, such as the ADD COLUMN change.

Using their XML based DSL, you don’t have to remember whether the command is called:

alter table t add i int;
alter table t add column i int;

Or whatever creative syntax your SQL vendor came up with. Notice that, again, you can achieve the same thing with jOOQ’s translator, which you can use as an API or as a command line interface.

Now, assuming you have the following Liquibase database change log:

<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog
    xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
    http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.8.xsd">
    <changeSet author="authorName" id="changelog-1.0">
        <createTable tableName="TAB">
            <column name="COL" type="VARCHAR(10)">
                <constraints nullable="true"
                    primaryKey="false" unique="false" />
            </column>
        </createTable>
    </changeSet>
</databaseChangeLog>

And now, you would like to check if it is correct, i.e. if you can ship it to production as it is. Just create an in-memory H2 connection, and run the file against it:

Properties info = new Properties();
info.put("user", "sa");
info.put("password", "");

try (Connection con = new org.h2.Driver()
        .connect("jdbc:h2:mem:db", info)) {
    Database database = DatabaseFactory.getInstance()
      .findCorrectDatabaseImplementation(new JdbcConnection(con));
    Liquibase liquibase = new Liquibase("/path/to/liquibase.xml", 
      new FileSystemResourceAccessor(), database);
    liquibase.update("");
}

To check, just repeat the table definition a second time:

<createTable tableName="TAB">
    <column name="COL" type="VARCHAR(10)">
        <constraints nullable="true"
            primaryKey="false" unique="false" />
    </column>
</createTable>
<createTable tableName="TAB">
    <column name="COL" type="VARCHAR(10)">
        <constraints nullable="true"
            primaryKey="false" unique="false" />
    </column>
</createTable>

and see how H2 complains about the statement:

Table "TAB" already exists; SQL statement:
CREATE TABLE PUBLIC.TAB (COL CLOB) [42101-200] [Failed SQL: (42101) CREATE TABLE PUBLIC.TAB (COL CLOB)]
	at liquibase.changelog.ChangeSet.execute(ChangeSet.java:646)
	at liquibase.changelog.visitor.UpdateVisitor.visit(UpdateVisitor.java:53)
	at liquibase.changelog.ChangeLogIterator.run(ChangeLogIterator.java:83)
	at liquibase.Liquibase.update(Liquibase.java:202)
	at liquibase.Liquibase.update(Liquibase.java:179)
	at liquibase.Liquibase.update(Liquibase.java:175)
	at liquibase.Liquibase.update(Liquibase.java:168)
	at LB.main(LB.java:21)

Using jOOQ’s tool chain to create schema snapshots

Want to export the schema again as DDL? Very easy, just use jOOQ for the task. For example, you can write the following Java code right after the above Liquibase migration simulation:

DSLContext ctx = DSL.using(connection);
ctx.ddl(ctx.meta().getSchemas("PUBLIC").get(0))
    .forEach(System.out::println);

And you get, in H2 syntax (omitting the Liquibase tables):

create table "PUBLIC"."TAB"(
  "COL" varchar(10) null
)

Want to export this to Oracle instead, using a custom schema rather than H2’s PUBLIC? Write this:

DSLContext h2 = DSL.using(connection);
DSLContext oracle = DSL.using(ORACLE, new Settings()
   .withRenderMapping(new RenderMapping()
       .withSchemata(new MappedSchema()
           .withInput("PUBLIC")
           .withOutput("MY_SCHEMA"))));
oracle.ddl(h2.meta().getSchemas("PUBLIC").get(0))
    .forEach(System.out::println);

And now, we’re getting:

create table "MY_SCHEMA"."TAB"(
  "COL" varchar2(10) null
)

(notice, this works once #9384 is fixed).

Prefer an XML representation of your schema snapshot? Easy with jOOQ, as well:

DSLContext ctx = DSL.using(connection);
JAXB.marshal(ctx.informationSchema(
   ctx.meta().getSchemas("PUBLIC").get(0)
), System.out);

The output being:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<information_schema xmlns="http://www.jooq.org/xsd/jooq-meta-3.12.0.xsd">
    <schemata>
        <schema>
            <schema_name>PUBLIC</schema_name>
            <comment></comment>
        </schema>
    </schemata>
    <tables>
        <table>
            <table_schema>PUBLIC</table_schema>
            <table_name>TAB</table_name>
            <comment></comment>
        </table>
    </tables>
    <columns>
        <column>
            <table_schema>PUBLIC</table_schema>
            <table_name>TAB</table_name>
            <column_name>COL</column_name>
            <data_type>varchar</data_type>
            <character_maximum_length>10</character_maximum_length>
            <ordinal_position>1</ordinal_position>
            <is_nullable>true</is_nullable>
            <column_default></column_default>
            <comment></comment>
        </column>
    </columns>
</information_schema>

A useful file that can be put under version control along with your current set of database change sets, in order to validate your schema, or to export to other tools that can reverse engineer DDL or XML.

Using Liquibase migrations as jOOQ code generation input

Just like the pre-existing DDLDatabase, starting from jOOQ 3.13, you can simulate your Liquibase migration in-memory against an H2 database to reverse engineer it again for jOOQ’s code generator. This is documented here.

The relevant code generation configuration looks like this (standalone or with Maven):

<configuration>
  <generator>
    <database>
      <name>org.jooq.meta.extensions.liquibase.LiquibaseDatabase</name>
      <properties>
        <property>
          <key>scripts</key>
          <value>src/main/resources/database.xml</value>
        </property>
      </properties>
    </database>
  </generator>
</configuration>

That’s it! As a bonus, you won’t have to manually simulate your migration anymore, as jOOQ’s LiquibaseDatabase already does it for you, behind the scenes, using the same three lines of code:

Database database = DatabaseFactory.getInstance()
  .findCorrectDatabaseImplementation(new JdbcConnection(con));
Liquibase liquibase = new Liquibase("/path/to/liquibase.xml", 
  new FileSystemResourceAccessor(), database);
liquibase.update("");

Stay tuned for more goodies on the topic of SQL migrations and database change management.