Hibernate, and Querying 50k Records. Too Much?

An interesting read, showing that Hibernate can quickly reach its limits when querying 50k records – a relatively small number of records for a sophisticated database:
http://koenserneels.blogspot.ch/2013/03/bulk-fetching-with-hibernate.html

Of course, Hibernate can generally deal with such situations, but you have to start tuning Hibernate and digging into its more advanced features. Makes one wonder whether second-level caching, flushing, evicting, and all that stuff should really be the default behaviour…?
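One such tuning technique is Hibernate's documented batch-processing pattern: scroll through a large result set and periodically flush and clear the `Session`, so the first-level cache does not accumulate tens of thousands of managed entities. A minimal sketch, assuming a mapped `Person` entity and an open `Session` (both hypothetical names, not from the linked post):

```java
import org.hibernate.ScrollMode;
import org.hibernate.ScrollableResults;
import org.hibernate.Session;
import org.hibernate.Transaction;

public class BulkFetchSketch {

    // Iterate over a large result set without keeping every
    // entity attached to the Session at once.
    public static void process(Session session) {
        Transaction tx = session.beginTransaction();

        // FORWARD_ONLY scrolling avoids materialising all rows up front
        ScrollableResults results = session
                .createQuery("from Person")
                .setReadOnly(true)
                .scroll(ScrollMode.FORWARD_ONLY);

        int count = 0;
        while (results.next()) {
            Person p = (Person) results.get(0);
            // ... work with p ...

            if (++count % 100 == 0) {
                // Push pending changes to the database, then evict
                // everything from the first-level cache so memory
                // stays flat regardless of result-set size.
                session.flush();
                session.clear();
            }
        }

        results.close();
        tx.commit();
    }
}
```

The point of the article stands, though: none of this is the default. Without the explicit `flush()`/`clear()` dance (or a `StatelessSession`), a naive `query.list()` over 50k rows keeps every entity in the persistence context.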

6 thoughts on “Hibernate, and Querying 50k Records. Too Much?”

  1. Hi Lukas.

    Couldn’t agree more. All this second-level caching, flushing, evicting nonsense is there to paper over the philosophical mismatch between relational databases and the Hibernate “way”.

    You can read more on “how OneWebSQL differs from Hibernate”. It’s OneWebSQL-specific, but it gives you an insight into how Hibernate first breaks something that works (the relational database model) and then tries to “solve” it by throwing more broken solutions at it.

  2. Hibernate sucks, performance-wise. All three projects I’ve seen using it were screwed by it and its design decisions. I was researching alternatives, which is how I found this blog. We have our own CRUD framework (a simple ORM) built for good performance, but my company hasn’t managed to find clients for it, apart from using it to power a few of our (web) products.

    1. Well, it doesn’t suck per se. It sucks “per default”. Once you know how to tweak it, it’s a pretty good framework, but you seem to have to know a lot, first…

  3. Hibernate sucks so badly that I am astounded anyone in their right mind uses it. I cannot figure out WHO it is that makes these design decisions — managers? It seems most developers are too savvy to do such a stupid thing.
