In many cases, a SEQUENCE with all defaults is good enough:

CREATE SEQUENCE my_sequence;

This sequence can then be used immediately in triggers when inserting new records into a table:

CREATE OR REPLACE TRIGGER my_trigger
BEFORE INSERT ON my_table
FOR EACH ROW
-- Optionally restrict this trigger to
-- fire only when really needed
WHEN (new.id is null)
BEGIN
  SELECT my_sequence.nextval
  INTO :new.id
  FROM DUAL;
END my_trigger;

But if your table has heavy throughput with millions of insertions per day (e.g. a log table), you had better configure the sequence cache correctly. The Oracle manuals state:
Note: Oracle recommends using the CACHE setting to enhance performance if you are using sequences in an Oracle Real Application Clusters environment.

We say: consider using it in other situations, too. Why? Sequence values are generated in an autonomous transaction. By default, Oracle caches 20 values before generating new ones in a new transaction. When you perform a lot of inserts and thus generate a lot of sequence values, that results in a lot of I/O on the sequence. Your best technique is to run benchmarks to find a good value for your sequence cache in high-throughput scenarios.
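For example, the cache of the sequence created above could be enlarged like this (the value 1000 is only a placeholder; benchmark your own workload to find a sensible number):

```sql
-- Oracle: raise the cache size on an existing sequence
ALTER SEQUENCE my_sequence CACHE 1000;

-- or declare it up front at creation time
CREATE SEQUENCE my_other_sequence CACHE 1000;
```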
8 thoughts on “Don’t Forget to Set the SEQUENCE CACHE Size”
Great tip. I know Hibernate added support for new id generators, but those are not backward compatible with the old ones, meaning we can’t use them. So we were stuck with SEQUENCE generators in PostgreSQL.
I saw this feature and I grabbed the Postgres docs and it seems they also have a CACHE clause:
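For reference, the PostgreSQL syntax looks essentially the same (a sketch; the sequence name and cache value are made up):

```sql
-- PostgreSQL: set the cache size at creation time…
CREATE SEQUENCE my_sequence CACHE 50;

-- …or change it on an existing sequence
ALTER SEQUENCE my_sequence CACHE 100;
```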
Huh, I wasn’t aware of this. Can you show an example?
Good to know. Well, PostgreSQL / EnterpriseDB copy most of the important features from Oracle, to help people migrate more easily…
Well this is the Hibernate doc page:
If you choose the new setting:
Then the SEQUENCE generator won't always call the database; instead it will use a sequence hi/lo algorithm in which only the high value is taken from the sequence, while the low values are assigned by the clients. This may generate sequence gaps, but it's more efficient since fewer calls are issued to the database.
Sigh… :-) If sequences are used for ID generation (e.g. instead of IDENTITY columns), and they're NOT generated by the database itself (e.g. through triggers or through GENERATED BY DEFAULT), then something is really wrong in any application design. The database really SHOULD be the only place responsible for ID generation and correct sequencing. You never know if a second application might just connect to it, and then possibly generate its own (incompatible) sequence values.
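In Oracle 12c and later, for instance, such database-side ID generation can be declared directly as an identity column (a sketch; the table and column names are made up):

```sql
CREATE TABLE my_table (
  id      NUMBER GENERATED BY DEFAULT AS IDENTITY,
  payload VARCHAR2(100)
);

-- The id is generated by the database whenever none is supplied:
INSERT INTO my_table (payload) VALUES ('hello');
```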
It depends what you want to do. For intensive batch processing, the classical sequence approach will end up calling the sequence for every new insert. So the hi/lo algorithm is not a bad idea at all. Once you reserve a hi number (with an allocation size of 10), you can safely insert rows using the low entries without clashing with any other external SQL DML INSERTs that also call the sequence. It's a massive insert optimization.
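The arithmetic behind such an allocator can be sketched as follows. This is a minimal illustration, not Hibernate's actual implementation (the class and method names are made up, and real Hibernate optimizers handle more edge cases):

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.LongSupplier;

// Minimal hi/lo sketch: one sequence call reserves a block of
// `allocationSize` ids; the low values are handed out locally.
public class HiLoGenerator {
    private final LongSupplier sequenceNextVal; // one DB round-trip per block
    private final int allocationSize;
    private long hi = -1; // base of the currently reserved block
    private int lo;       // next offset within the block

    public HiLoGenerator(LongSupplier sequenceNextVal, int allocationSize) {
        this.sequenceNextVal = sequenceNextVal;
        this.allocationSize = allocationSize;
        this.lo = allocationSize; // force a sequence call on first use
    }

    public synchronized long next() {
        if (lo >= allocationSize) {
            // Reserve a new block: only this line hits the database.
            hi = sequenceNextVal.getAsLong() * allocationSize;
            lo = 0;
        }
        return hi + lo++;
    }

    public static void main(String[] args) {
        AtomicLong sequence = new AtomicLong(); // stands in for the DB sequence
        HiLoGenerator gen = new HiLoGenerator(sequence::incrementAndGet, 10);
        for (int i = 0; i < 25; i++) {
            System.out.println(gen.next());
        }
        // 25 ids were generated, but the "sequence" was called only 3 times
        System.out.println("sequence calls: " + sequence.get());
    }
}
```

Note the trade-off mentioned above: if a client reserves a block and only uses part of it, the remaining low values become gaps, which is why this scheme only suits surrogate keys where gaps are harmless.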
I see, thanks for clarifying. Sounds like another blog post from your side :-)
This will go into the Hibernate saga. I will write my own training material for that. In recent years I have realized that people are getting more and more estranged from the original purpose of these tools, or from the old ways of reading the docs before adopting new tech.
You mean the old ways of RTFM? ;-)