Why You Should NOT Implement Layered Architectures

Abstraction layers in software are what architecture astronauts tell you to do. Yet half of all applications out there would be so easy, fun, and most importantly productive to implement if you just got rid of all those layers.

Frankly, what do you really need? You need these two:

  • Some data access
  • Some UI

Because those are the two things that you inevitably have in most systems: users and data. Here’s Kyle Boon’s opinion on the possible choices that you may have:

Very nice choice, Kyle. Ratpack and jOOQ. You could choose any other APIs, of course. You could even choose to write JDBC directly in JSP (there’s a sketch of that below). Why not? As long as you don’t go pile up 13 layers of abstraction:

Geek and Poke’s Footprints – Licensed CC-BY 2.0
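
Just to make the provocation concrete, here is a deliberately minimal, hedged sketch of the JDBC-in-JSP option (the connection URL, table, and column are invented; this is the two-layer extreme, not a recommendation):

<%@ page import="java.sql.*" %>
<%-- books.jsp: the two-layer extreme. SQL is the data access, HTML is the UI. --%>
<html><body><ul>
<%
    // Connection URL, table, and column are invented for this sketch
    try (Connection c = DriverManager.getConnection("jdbc:h2:mem:test");
         Statement s = c.createStatement();
         ResultSet rs = s.executeQuery("SELECT title FROM book ORDER BY title")) {
        while (rs.next()) {
%>
    <li><%= rs.getString("title") %></li>
<%
        }
    }
%>
</ul></body></html>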

That’s all bollocks, you’re saying? We need layers to abstract away the underlying implementation so we can change it? OK, let’s give this some serious thought. How often do you really change the implementation? Some examples:

  • SQL. You hardly change the implementation from Oracle to DB2
  • DBMS. You hardly change the model from relational to flat or XML or JSON
  • JPA. You hardly switch from Hibernate to EclipseLink
  • UI. You simply don’t replace HTML with Swing
  • Transport. You just don’t switch from HTTP to SOAP
  • Transaction layer. You just don’t substitute JavaEE with Spring, or JDBC transactions

Nope. Your architecture is probably set in stone. And if – by the incredible influence of entropy and fate – you happen to have made the wrong decision in one aspect about 3 years ago, well, you’re in for a major refactoring anyway. If SQL was the wrong choice, good luck migrating everything to MongoDB (which is per se the wrong choice again, so prepare for migrating back). If HTML was the wrong choice, even tougher luck to you. Likelihood of your layers not really helping you when a concrete incident happens: 95% (because you missed an important detail).

Layers = Insurance

If you’re still thinking about implementing an extremely nice layered architecture, ready to deal with pretty much every situation where you simply switch a complete stack for another, then what you’re really doing is taking out a dozen insurance policies. Think about it this way. You can get:

  • Legal insurance
  • Third party insurance
  • Reinsurance
  • Business interruption insurance
  • Business overhead expense disability insurance
  • Key person insurance
  • Shipping insurance
  • War risk insurance
  • Payment protection insurance
  • pick a random category

You can pay and pay and pay in advance for things that probably won’t ever happen to you. Will they? Yeah, they might. But if you buy all that insurance, you pay heavily up front. And let me tell you a secret. IF any incident ever happens, chances are that you:

  • Didn’t buy that particular insurance
  • Aren’t covered appropriately
  • Didn’t read the policy
  • Got screwed

And you’re doing exactly that in every application that would otherwise already be finished and already be adding value to your customer, while you’re still debating whether, on layer 37 between the business rules and transformation layers, you actually need another abstraction because the rule engine could be switched at any time.

Stop doing that

You get the point. If you have infinite amounts of time and money, implement an awesome, huge architecture up front.

Otherwise, your competitor’s time to market (and fun, on the way) will be better than yours. But for a short period of time, you were that close to the perfect, layered architecture!

90 thoughts on “Why You Should NOT Implement Layered Architectures”

  1. Little bit of a straw-man argument here. I agree “too much layering is bad/pointless/wasteful”.

    Does that mean that some layering is bad? Of course not. Some layering is essential.

    Which, in your experience, has caused the most or the biggest problems?
    (a) Too little layering
    (b) Too much layering

    For big projects, (a) is my choice. We’re (right now) replacing a big search engine with another big search engine. Huge amounts of pain, because the people who wrote the earlier code decided it was faster/easier to let details of the search engine creep up into business logic (and even UI!).

    OTOH, for (b), I’ve also seen way too much code (at a smaller level) where the developer thought a lot of levels would be cool… to the point where I have to traverse 5 layers just to find out what happens to a freaking string!

    1. Too little separation of concerns is bad.
      And no one has perfect insight into changes which will happen during an application’s lifecycle, so building in some flexibility is good.

      But the REALLY HARD parts are:
      1. figuring out which parts will change, and which will not
      2. figuring out which layers of the app are in charge of each task/concern, and keeping those tasks/concerns confined

      In other words: I have seen most projects err on wrong layering. It looks like too little layering if you don’t look carefully enough, but it is closely related to spending time putting in useless layers as well.

      Instead of blindly adopting “best practices” developers should stop and think a bit more. Get to know the business requirements in great detail. Do back-of-the-envelope calculations of the costs/benefits involved in refactoring, and the probabilities of things changing.

      Random example scenarios:

      – hardcode configuration settings in code (noob mistake): no excuse here, use a configuration layer

      – isolating your app from RDBMS principles is generally a waste of time: if you move to non-relational storage, a lot of business logic will be impacted (transactions and joins provide strong cohesion points which are basically impossible to decouple away without killing performance, both at run time and in developer time)

      – change of DB: mostly a solved problem for simple SQL using existing toolkits; good luck if your app relies on transaction isolation levels or similarly deep stuff.
      The refactoring cost, if not layered in from day 1, is generally equal to the app-rewrite cost, but the chances of it happening are extra small as well. A risk often worth taking.

      – putting all of the business logic in a web controller: 99% of the time bad, as one day there will be a command line script needing to do the same thing

      – corollary of the above: in e.g. a web controller, do separate the phases of decoding inputs from their incoming representation into plain data structures, executing the processing that produces more plain data, and dressing up the output in the desired format (see the sketch below)
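
      Here is that sketch, hedged: all names, parameters, and the output format are invented:

      import javax.servlet.http.HttpServletRequest;

      // Hypothetical controller keeping the three phases apart
      public class OrderController {

          // Phase 1: decode the incoming representation into plain data
          OrderRequest decode(HttpServletRequest http) {
              return new OrderRequest(
                  Long.parseLong(http.getParameter("customerId")),
                  http.getParameter("sku"));
          }

          // Phase 2: pure processing on plain data; no web or storage types in sight
          OrderResult process(OrderRequest request) {
              return new OrderResult(request.customerId(), request.sku(), 42); // 42: placeholder for real pricing logic
          }

          // Phase 3: dress up the output in the desired format
          String encode(OrderResult result) {
              return "{\"customerId\":" + result.customerId()
                   + ",\"sku\":\"" + result.sku()
                   + "\",\"price\":" + result.price() + "}";
          }

          record OrderRequest(long customerId, String sku) {}
          record OrderResult(long customerId, String sku, int price) {}
      }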

  2. Great post as usual! And I totally agree.
    Sometimes I wish I had the good old days back with 3270 UI and Cobol with embedded SQL. You can’t imagine how fast we implemented the business requirements.

    1. Yes!!! Those fast-to-implement CICS/COBOL/DB2 solutions were fabulous… when the underlying architecture doesn’t leave many fancy options to the programmers, the time to market of the solutions improves a lot!!!

  3. I agree that developers/architects should focus more on reasoning about the usefulness of employing a certain layer. Blindly following “best practices” is an anti-pattern.

    But choosing an architecture is not about adding more layers; it’s about designing a technical solution (developing best practices, testing procedures, enforcing security, flexibility on change requests – those are always happening).

    Embedding JDBC in JSP is not fun at all, because:

    – the logic is going to be difficult to test. If an implementation is difficult to test, then it’s difficult to use for other clients as well.
    – adding transactions for multiple operations is not going to happen.
    – reacting to change requests, when adding a new column for instance, might turn into a nightmare.

    So, layers are useful but only if you understand the value they bring in return.
    Some may argue that the EntityManager is a DAO in itself, so you can simply inject it into a web controller. While you might add transactions to the web controller, it’s still going to be much more difficult to test, especially when the class in question depends on way too many technologies (JPA, HTTP, transactions).

    The Spring Data Repositories are very handy for common operations, yet they are serving only one entity type. But some business logic routines may involve multiple such repositories, all enlisted in the same transaction. This will require an upper layer, but you still have to decide if that’s going to be a Service or the Web layer.
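
    For illustration, a minimal sketch of such an upper layer, assuming Spring (entity, repository, and method names are all invented):

    import java.math.BigDecimal;
    import org.springframework.stereotype.Service;
    import org.springframework.transaction.annotation.Transactional;

    // Hypothetical service enlisting two Spring Data repositories in one transaction
    @Service
    public class TransferService {

        private final AccountRepository accounts; // invented repository
        private final AuditRepository audits;     // invented repository

        public TransferService(AccountRepository accounts, AuditRepository audits) {
            this.accounts = accounts;
            this.audits = audits;
        }

        @Transactional // both repositories join the same transaction
        public void credit(long accountId, BigDecimal amount) {
            Account account = accounts.findById(accountId).orElseThrow();
            account.setBalance(account.getBalance().add(amount));
            accounts.save(account);
            audits.save(new AuditEntry(accountId, amount));
        }
    }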

    Cutting dependencies facilitates testing on layer boundaries, so layering is not a problem per se. Layering is a means, not a goal.

    1. Cutting dependencies facilitates testing on layer boundaries, so layering is not a problem per se. Layering is a means, not a goal.

      Absolutely. The “rant” against layering here is really just a bit inflammatory (on purpose, of course). The main problem is mostly up-front overengineering with “default” layering, producing layers that will never be needed.

      Layers emerge naturally as people refactor. And then, they’re very useful.

      1. Somewhat on the same wavelength is James Bach’s opinion on No Best Practices. A best practice is highly context-bound. Maybe you are not facing the same problems as the one who created the best practice. This applies to frameworks as well. Maybe the framework authors addressed other problems with the framework they built. In software, we always have to reason for ourselves; there are not many blueprints or drop-in solutions, because if there were, we would have a software assembly line instead.

  4. I agree. My first programming experience was with switches on a machine with 256 bytes of memory… you had to physically close or open connections to write the “program”. Everybody should do that, none of this abstraction! /sarc

  5. Hi Lukas,

    I really often agree with you on architectural questions, but not here! At least, not fully.
    When you say that having lots of layers slows down development, I think you are right.
    However, when you say don’t use layers at all, that they will always slow you down, and that you should write your logic in JSP, I could not disagree more :)

    Here are some reasons:
    – Having a base structure helps you place your code: you spend less time asking yourself “where should I write this logic?”
    – Having some layers helps you test your application: let’s say you have a bug in your service layer; you can easily mock your API and data layers to write a simple test that reproduces the bug (see the sketch after this list). If you have a bug in your JSP where you have put everything (access to your database, access to some web services, application logic, …), good luck writing a test!
    – Having some layers helps refactoring: you don’t have to rewrite everything each time you want to refactor your database/API accesses.
    – Having “some” layers (data, API, service/app logic, UI) does not actually slow down development once you know where to put each piece of logic: you just spread out some logic you would otherwise write in one file.
    – You are an experienced developer, so you have maintained applications where more or less everything was in JSPs, haven’t you?
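
    Here is that sketch, assuming JUnit 5 and Mockito (BookRepository and BookService are invented names):

    import static org.junit.jupiter.api.Assertions.assertTrue;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    import java.util.List;
    import org.junit.jupiter.api.Test;

    // Hypothetical layers: BookRepository (data), BookService (service)
    class BookServiceTest {

        @Test
        void emptyCatalogueYieldsEmptyReport() {
            // Mock the data layer: no database needed to reproduce a service-layer bug
            BookRepository repository = mock(BookRepository.class);
            when(repository.findAll()).thenReturn(List.of());

            BookService service = new BookService(repository);
            assertTrue(service.catalogueReport().isEmpty());
        }
    }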

    Anyway, I don’t think I have taught you anything; jOOQ is one of the best-architected libraries!
    I am just surprised to see you say “don’t structure your code, just write everything in JSP”. I think I have just misunderstood you; I just want to be sure!

    Cheers,
    Aurélien

    1. you should write your logic in JSP, I could not disagree more

      ;-)

      That JSP thing was really there to trigger discussion. No one sane would implement an E-Banking app that way. But it is certainly an option for some applications. People often design dozens of layers up front and waste time on a clean separation that they won’t need, and if they need it, they separated at the wrong spots.

      Of course, I could’ve written something like “use the right design for the right job”. But that wouldn’t have gotten me a lot of attention, would it? ;-)

      Bottom line: Don’t worry. I do appreciate the odd layer at the right place. But I have seen too much layered legacy to believe that many people on this planet will get up-front designs right. I might as well have ranted about waterfall, instead of layering.

  6. I have been advocating the same idea for more than 25 years. This is the YAGNI idea. And what do I see? Everybody is using superfluous and totally useless layers. Many years ago I was already fed up with assemblers. That was just a new layer added to your coding, hiding the real programming in hexadecimal, or even in binary on the front panel of the PDP-11. Who needs an assembler when you have your code card? And now? We have interpreted languages implemented in a language that is compiled to an intermediary code that still needs interpretation or compilation to machine code. And above that, layers that are totally unnecessary, like database SQL. Your program will be more effective if you tune your data structure to your problem and read and write disk sectors directly. From machine code, of course!

    Now that we’ve had the fun: there should be some golden path in between. Perhaps an article about what to consider, at the technical and business level, when deciding whether or not to introduce a new abstraction layer?

    1. As always, I appreciate your sarcasm, Peter :-)

      Now that we’ve had the fun: there should be some golden path in between. Perhaps an article about what to consider, at the technical and business level, when deciding whether or not to introduce a new abstraction layer?

      I absolutely agree! So the challenge is yours: publish a response post to this one! I like the idea that introducing new layers should be a decision taken when needed, rather than part of an up-front design.

    1. Like all opinionated blogs in the universe, this one is indeed speaking for ourselves, with a mixture of truth, sarcasm, and overstatement. Since you don’t agree, how about showing counter-examples?

      1. SQL. You hardly change the implementation from Oracle to DB2 – of course, because PostgreSQL and MySQL have the same syntax for filtering rows by count.
        DBMS. You hardly change the model from relational to flat or XML or JSON – of course, because if you say you will deliver data in XML, it will be like that forever (and even after that).
        UI. You simply don’t replace HTML with Swing – and of course you don’t have a few different clients, just one that works on just one platform and one type of device.

      2. Lol, thanks for taking things to the extreme for fun and to show a point. Yes, many people have *way* too many layers. With that said, I have been on several projects in the past three years where we have changed out DBs (on one it was twice in a two-year period), changed server platforms (twice), moved our DAO to use jOOQ from EclipseLink/TopLink/straight SQL (a great move, I must say), and changed the entire middle-tier language three times. It’s nice to be a contractor… people come to you, say, “I have xyz problems and abc dollars, fix them”, and you do it. I’d say that usually you want a frontend and backend, with a shared set of utility libraries. You may then also need a BLO and DAO, but I’ve found most VOs to be overkill. If making a networked application, you’ll have a frontend and backend, with the backend composed of different modules (a networking module, one for filesystem IO, one to do computations, etc). So… I guess you weren’t too far off. Front and backends are the two big things, and the middle will depend on the system requirements, but it should be pretty flat (though it can be wide), with some support libraries there to be shared.

        1. Unfortunately, after all these years of blogging, not everyone understands taking things to the extreme for fun. I’m talking about “duty calls” :-)

          Interesting point about the contractor just executing what is ordered, too. I think you have a point there with calling the different composite parts “modules” rather than layers. There’s nothing wrong with abstraction and modularity, but that’s something completely different from layering.

          I’ve promised a follow-up post before, where I will also go into this distinction once more.

  7. > We need layers to abstract away the underlying implementation so we can change it?

    Not only: we need the proper number of layers mainly for testability and readability, and only then to abstract away the underlying implementation. Of course, this is just my experience.

  8. Here’s a sample of some Ruby code I worked on the other day:

    def getListValueKludge(action)
      list = action.parent.elements['list']
      return get_attribute(list).strip if list
      nil
    end

    def do_action(action)
      original_value  = get_group_value(action)
      original_value2 = getListValueKludge(action)

      if action.attributes['name']
        do_platform_action(action, 'name')
      else
        interpreter = action.attributes['exec']
        script      = get_script(action)
        instance    = action.attributes['instance']
        method      = action.attributes['method']
      end

      new_value  = get_group_value(action)
      new_value2 = getListValueKludge(action)
      @change_made = true if new_value != original_value or new_value2 != original_value2
    end

    def apply_change(change)
      change.elements.each('action') { |action| do_action action }
    end

    def apply_change()
      change = @root.elements[xpath]
      apply_change(change)
    end

    document = REXML::Document.new(File.open(@filename))
    @root = document.root

    Now, rather than having a clear description of what this code is trying to accomplish, you and I are forced to read through the irrelevancies of parsing a REXML document. With a layer that abstracts out the REXML (and this is real code), you would see calls like get_list_of_items_in_menu/perform_pending_actions.

    This is code without abstraction. It is not clear. You will waste time learning the details of how it is trying to accomplish its task rather than seeing what it is attempting to accomplish.

    This is a very small portion of a 2000-line Ruby file that displays a menu in a terminal window. It would probably be about half the size if it had a layer that took care of the redundancies of the REXML parsing. Why should you care that this menu is configured via an XML document? You should not.

    1. Maybe there is some confusion between layering and simply writing an API. Layering is much more “heavy” and “thorough”, applied throughout the application. No one would object to factoring out low-level logic behind an API.

  9. Seems like a strawman argument; no one is advocating superfluously layered architectures. Well, at least no one who knows what they’re doing.

    Abstraction layers can be useful, though. It’s all about context. Having an absolute view on the subject is dangerous.

  10. Hmm, you had a fun list of things that don’t happen. Well, at work we had nearly all of them happen in a 25-year-old application (okay, not really Java-based, but the same scale).

    SQL. You hardly change the implementation from Oracle to DB2

    From Ingres to Informix to Oracle to SQL Server.
    Now SQL Server, Oracle and SQLite are left.

    UI. You simply don’t replace HTML with Swing

    Well, MFC, HTML, Flash, HTML5…

    Transport. You just don’t switch from HTTP to SOAP

    Custom TCP, CORBA, HTTP, JSON via HTTP.

    Transaction layer. You just don’t substitute JavaEE with Spring, or JDBC transactions

    Various iterations, but we did change transaction layers.

    So yes, too many layers kill you. But too few are even worse. The hard part is as always finding a good balance.

    1. Finding a balance is really the hard part. I agree that these things happen, but they often happen in subtly or vehemently different ways than what any layered architecture designed up front predicted. Chances are that layering causes more work when actually performing those heavy changes, rather than helping.

      From Ingres to Informix to Oracle to SQL Server.
      Now SQL Server, Oracle and SQLite are left.

      That is quite something! Don’t you wish jOOQ had existed when all of this happened? :-)

  11. Lest I forget: I see many people advocating layers purely for testability.
    I’ll probably be flamed to ashes for this, but I’d take understandable code over testable code any day.

    1. The calm, logical response is “testable and understandable are two horses pulling the same cart in the same direction”. Usually when I make a messy class testable, I make it more understandable at the same time. And vice versa.

      Sometimes when adding testability, I’ll add an otherwise unnecessary layer, but only because someone used statics in a fundamentally evil way, and I have to put a wrapper around it. But only after careful consideration!

      The inflammatory response :-) would be “do you want your code to work correctly, or do you want to understand it? Choose one!”

      (Of course, that’s usually a false dichotomy. But Lukas likes a little heat with his light.)

  12. I think it’s a matter of having the right number of layers. I would agree you don’t need tons of frameworks and stuff; if you’re always starting off new projects with 6 or 8 layers, you’re doing it wrong at least half the time.

    It’s really a matter of where layers are beneficial. For example, right now I’m working on a game in HTML5. I have multiple layers of translation from a screen of data into the details of rendering paths and arcs in a canvas. It is *really* useful to have those layers; it makes the rendering part of the project noticeably more productive. I really will try various different rendering schemes later on to optimize rendering performance. The higher layers will be largely unaffected by this, allowing me to try significantly different schemes relatively easily. But only this one aspect of the game is layered, because the other aspects are not going to benefit from layers.

    How do you tell where you need layers and where you don’t? That’s really a matter of where do you need flexibility? In real world applications today, people combine different kinds of databases together – store data in SQL, use Lucene to search really fast for data from any table, store really tricky relationships in a graph database so they can be queried in this lifetime. And yes, today you might very well migrate to another database – and not just to migrate back later.

    I think in a lot of cases, just adding one layer between your code and some library is a good idea – if you really have to switch libraries for some reason, and it does happen, you at least have your own API between the library and all the code that uses it. This can reduce a lot of pain down the road for not a lot of effort.

    It’s a matter of cost vs benefit, same as everything else.

  13. Well, my remaining grey matter disagrees, at least with your reasoning.

    First of all, remember the 3 tiers. No application consists only of data access and a UI. There is some (business) logic in between.
    So you should say a layer consists of ‘some data access’, some processing/logic, and presentation (which does not need to be a UI).

    Having that in mind, every layer is a 3-tier component.
    It receives or retrieves some data, does something with it and presents the information for consumers (which might happen to be a UI).

    Of course, it is tedious to have at least 2 interfaces on every layer (one to access data, one to present data), which differ from the interfaces of other layers, but that’s another problem that many have tried to solve.

    Be aware that large-scale applications
    – will constantly change the database (from Sybase to Oracle to MySQL to…)
    – will integrate new data sources (new protocols, new layers)
    – will, over time, get different UIs depending on the business needs (console, Windows, HTML, Ajax, mobile, …)
    – will extend business logic

    I’m not talking about a simple GUI that takes a timer interrupt (data source) and blinks in intervals (layering that would be over-engineering), but still, this is a 3-tier application (the timer is the source, the timer callback contains the logic, and a flashing widget is the presentation of the information).

    Properly layering an application will also allow you to scale an application vertically (try that with a monolith), at least if the interfaces of each layer are properly done.

    I remember a saying of a Professor at the technical high school in Zurich:
    Any business problem can be solved by adding a new layer.
    Any performance problem can be solved by removing a layer.

    1. I will disagree with the 3-tier default. In read-heavy applications, 2 tiers will often suffice if your data access language also has transformation capabilities, such as SQL, XQuery, XSLT, and many more.
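
      To make that concrete, here is a hedged two-tier sketch in plain Java, where SQL is the data access and XSLT is the transformation; the connection URL, table, and stylesheet are all invented:

      import java.io.StringReader;
      import java.sql.*;
      import javax.xml.transform.*;
      import javax.xml.transform.stream.*;

      // Hypothetical two-tier flow: SQL (data access) feeds XSLT (UI), no middle tier
      public class TwoTierReport {

          static final String BOOKS_XSL =
              "<xsl:stylesheet version='1.0'" +
              "    xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>" +
              "  <xsl:template match='/books'>" +
              "    <ul><xsl:for-each select='book'>" +
              "      <li><xsl:value-of select='.'/></li>" +
              "    </xsl:for-each></ul>" +
              "  </xsl:template>" +
              "</xsl:stylesheet>";

          public static void main(String[] args) throws Exception {
              StringBuilder xml = new StringBuilder("<books>");
              try (Connection c = DriverManager.getConnection("jdbc:h2:mem:test");
                   Statement s = c.createStatement();
                   ResultSet rs = s.executeQuery("SELECT title FROM book")) {
                  while (rs.next())
                      // real code would escape the XML content
                      xml.append("<book>").append(rs.getString("title")).append("</book>");
              }
              xml.append("</books>");

              Transformer t = TransformerFactory.newInstance()
                  .newTransformer(new StreamSource(new StringReader(BOOKS_XSL)));
              t.transform(new StreamSource(new StringReader(xml.toString())),
                          new StreamResult(System.out));
          }
      }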

      I remember a saying of a Professor at the technical high school in Zurich:
      Any business problem can be solved by adding a new layer.
      Any performance problem can be solved by removing a layer.

      Eh, that’s wise :-)

      1. Ouh, aeh… that should read “Professor”, just a small difference ;-)

        Well, it is hard to spot the middle tier of a file copy program, but it is there, even if it is simply passing what has been read to the output file (that’s the business logic; otherwise the program wouldn’t do anything). That’s simply the way I see things, but of course, my reality differs from the realities of others ;-)

        1. Processor… Professor… Didn’t even notice, but yes, small difference :-)

          I think we can certainly agree on the fact that reality is perceived differently by various developers!

  14. I have to say, this article smells like quite terrible advice.

    Yes, layers for the sake of layers are a waste of time. What isn’t, when it is only done for the sake of doing it? You should, of course, have good reasons for every line of code you write and every layer of abstraction you implement.

    I’m a .NET developer, but I’d expect this to be quite universal.
    I’d say that in a typical LOB application, you will always have a minimum of 3 layers: UI, business logic, data layer.

    I don’t see myself writing any kind of application without those 3 distinct layers. Depending on the app, perhaps another one might be added in there, but I’d say these 3 are a strict minimum.

    It’s not just about being flexible in responding to changes in requirements or switching databases or whatever. It’s also about clarity, reusability and testability of code.

    It’s also about immediately knowing in which file you need to be to implement a certain thing or to look something up.

    It’s not entirely clear to me what exactly you are advocating. Is it to stop using layers for the sake of layers? Is it to stop using layers altogether?

    Here’s hoping that it’s just to stop using layers “for the sake of layers”, and not to stop using layers altogether. Because, as said in the beginning, that would be some epically bad advice.

    1. Here’s hoping that it’s just to stop using layers “for the sake of layers”, and not to stop using layers altogether. Because, as said in the beginning, that would be some epically bad advice.

      Precisely. There is no such thing as a default set of “best layers” (not even the three-tier one!)

  15. Yeah, makes perfect sense, doesn’t it? Shove all that business logic, data access and all of that stuff in the UI layer (if that’s even a layer now). While you are at it, hard-code all your SQL statements in the UI, why don’t you? What silliness!

    I agree that over-layering, like over-anything, is bad; the same goes for under-anything. But layering in general is bad and pointless??? Ludicrous. Have you heard of a thing called a middle path, a balance, a trade-off? I started my career without layers, and now I have been doing layers for over 6 years; when I look back on my non-layer days, I always laugh: “what was I thinking?”. Layering does not exist so you can fundamentally change something like “HTML to Swing”, at least not solely for that reason, but more for the fact that bug-fixing, patching and deploying code is much easier when it’s decoupled from your UI or data access code. Developers can independently modify code and keep track of what has changed and what’s stayed the same. Heard of source code version control?

    In large, complex project architectures you will always find layering, and for precisely the reasons that I mentioned above, maybe even more. In fact, right now we are having to deal with a system where the entire business logic is written in thousands of stored procedures and the code just calls these procs. There is no business logic in the business layer, and that system is slow and unmaintainable as anything, not just because of this but because of the way it’s architected in general, if at all.

    The bottom line is that no successful architecture gets defined all up front. It evolves over time, and that’s where layering really shines.

    You have a right to your own opinion but I am sorry to say, it doesn’t make much sense to me.

  16. To be layered or not to be layered: that is not the question. How it will be layered is the question.
    Layering should be done on the basis of domain/subject knowledge, not on the basis of technology, as technology (UI, database) will change more frequently than the domain.

    But anyway, for the best architecture, we have to learn how our brain works: millions of years of legacy layers, minimal energy, robust, efficient, parallel, etc…

  17. I think layers have a lot less to do with swapping implementations than with the structure of the team and the size of the project. In other words, different parts of the team work on different parts of the project. This allows them to focus on certain areas and make changes and modifications to their portion of the project without destroying the entire solution.

    Also think of code conflicts. If I had a database query strung throughout the project that needed updating, I could make that change easily enough, either through searching or remembering every place it existed – which would be in some front-end controllers other devs are working on, no doubt. I push my changes and suddenly we’ve introduced a code conflict. Dozens of them.

    I do agree with layer overload though. I often throw out very thin layers as they can complicate the flow of the code as well.

    1. That’s really intriguing – we’ve had the same comment on the relevant reddit discussion, where Conway’s Law was referenced.

      In any case, the article is a bit inflammatory (it helps create interesting debate). Obviously, layers can be good, but often, up-front design is really just insurance against things that might never happen. Factoring out things into layers / components, etc. is certainly useful.

    2. Layered design, though, is different from modular design. We went to modular programming to facilitate easier implementation by multiple teams/people, not layering (although layers did let ‘specialists’ write pieces without having to worry about things that they didn’t specialize in). A module should have a “gateway”: a class or set of classes that are public, and everything in them is public. The gateway strains the data, validating it, and acts like the “layer”, but is thin. I can change the work done in the non-public classes and not impact the gateway. In fact, I can make stubs in the gateway and commit them to the repo long before I commit the “details” (the non-public classes). Layers should be limited, but a modular design should use gateways (and in doing that you can get rid of the true spaghetti… all of those “impl” modules that people have to do everywhere). A sketch of such a gateway follows below.
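
      Here is that sketch, hedged, with all module, class, and method names invented:

      // Hypothetical module with a single public gateway
      package billing;

      import java.math.BigDecimal;

      public final class BillingGateway {           // the module's only public entry point

          public Invoice createInvoice(long customerId, BigDecimal amount) {
              if (amount.signum() <= 0)             // the gateway "strains" the data
                  throw new IllegalArgumentException("amount must be positive");
              return new InvoiceBuilder().build(customerId, amount);
          }

          public record Invoice(long customerId, BigDecimal amount) {}
      }

      // Package-private: free to change without impacting the gateway
      final class InvoiceBuilder {
          BillingGateway.Invoice build(long customerId, BigDecimal amount) {
              return new BillingGateway.Invoice(customerId, amount);
          }
      }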

  18. You make a lot of assumptions, which by themselves make this article misleading:

    — half of all applications out there would be so easy, fun, and most importantly: productive to implement if you just got rid of all those layers. —

    Where do you get “half” from? That number is hard to get even inside your own office, let alone considering all possible software development in the world.
    —————————————————

    — We need layers to abstract away the underlying implementation so we can change it? —

    This is just one of the reasons why you should have a layered architecture. Layered architectures provide reuse across the application and across different projects that consume the same business logic.

    If you are talking about dependency injection on the data layer, then I may partially agree with you. Those who do that know the potential risk of data provider changes and plan accordingly. If they can’t predict the potential business value of this move, they shouldn’t be the application architect in the first place.
    —————————————————

    — Your architecture is probably set in stone —

    You can’t possibly predict that. That’s a big, unreasonable assumption. Frozen architectures defeat their purpose. Architectures should be defined towards the problem they are trying to solve. It is like buying a hammer and nails before knowing whether you will build using concrete or wood.

    Architects exist to decide which architecture is good given a business domain, not to build a mold so everyone can use for every project.

    The secret is to know how to layer the application given a domain. Under-layering an application will be at least as bad as over-layering it. Failing to identify the problems of under-layering can lead a project to the recycle bin, while over-layering an application can lead to more cost than value, which could doom its profit margin.
    —————————————————

    So, in my opinion, you should be very careful before influencing people to just throw away good practices and patterns. There is no absolute truth in this area, and you fail to point that out, while at the same time making a lot of assumptions that seem to be related to your own reality.

    You should realize that there are a lot of realities out there and you should point out what is the specific scenario you’re trying to give advice to.

    1. It is like buying a hammer and nails before knowing if you will build using concrete or wood.

      That’s essentially what the article is saying, although not as obviously as in your comment.

      You should realize that there are a lot of realities out there and you should point out what is the specific scenario you’re trying to give advice to.

      I most certainly do, but compromise won’t trigger interesting debate.

  19. Completely agree. These assembler commands are so abstract! I remember the good old days when we used punch cards; the assembler took all the fun away )))

  20. “Too many layers are bad”…

    Well of course they are, that’s the very definition of “too many”.

    “Your app needs two layers”

    OK, clearly this is too few layers, but then “too few” layers is, by definition, bad.

    I mean, at the least you could draw a line somewhere, and we could discuss where we fall on that line and why we do [or do not] need such a layer.

    But man, two layers? Like MVC is just too much? Let’s just do MV because who needs that Controller layer?

    I don’t think that’s really going to win you any converts.

    1. I don’t think that’s really going to win you any converts.

      No, but it made people think. Two layers can be enough. I’ve seen and maintained a huge E-Banking application where 90% of the screens were governed only by SQL (data access) and XSLT (UI). The other 10% had hooks somewhere in the middle where they could apply “business logic”.

      It’s not always the right solution, but it can be. People unfortunately default to something around a 5-tier application, when 2-tier can be enough. Having said so, if MVC works for you, that’s perfectly fine!

    2. I agree with the author, and I certainly agree with the folks that suggest a pragmatic balance. Throwing interfaces everywhere and for everything in the name of abstraction and layering is certainly not what I like to work with. Some of the so-called layered applications end up needing a chain of changes to accommodate minor stuff.

    3. MVC has 2 layers: presentation (VC) and the MODEL LAYER (M). Yes, the model is a layer, not prefixed/suffixed “Model” classes stuffed in a “models” folder.

  21. I’ve managed to inherit such a project, where the person who developed it put layers on top of layers. It’s an absolute nightmare to make changes to, since if I change things in layer x, will I have any negative effects in layer y? Often the answer is yes.

    I also remember having a discussion with this same person, trying to convince them that writing virtually the same code in 3 different places (in case we need to modify just one of them slightly differently later) was a bad thing, since any changes would need to be done 3 times. Talk about overhead.

    For me, MVC is about all the layering that is needed. We’ve written some customised helper code to go between the layers where there is an advantage to doing so, but I don’t consider that to be a layer in its own right. It just turns 5 lines of code repeated throughout the project into 1 line of code in its place. If it is not used for whatever reason, the old-fashioned code will work just as well.

    For the record, I have seen a project where the developers attempted to “switch out layers”. It didn’t work very well at all, and whilst they did end up succeeding, it took them more time than it would have taken to simply build a new system and migrate the data.

    1. […] the same code in 3 different places […]

      I’ve had this discussion numerous times :-)

      I have seen a project where the developers attempted to “switch out layers”

      That sounds interesting! Could you share a bit more? Why did it fail?

  22. You are kidding, right?

    Why on earth would I want to not make another layer, if the code I’m writing has nothing to do with either data access or UI, but is still a part of the whole application?

    Just like with methods and classes, if the layers are properly implemented and designed to do one task and do it really well, then debugging and fixing code becomes so much more maintainable. Properly implemented layers help with adding new logic in one layer only. Layered architecture is an art, and very few developers really understand this art. This is why we have application architects to do this job and do it really well.

    Maybe the author of this article only writes very simplistic applications, has never worked on an enterprise-sized application in a big team, or just needs to study more about what layered architecture is really about?

    1. This is why we have application architects to do this job and do it really well.

      Nah, I’ve seen this go terribly wrong so many times that I doubt heavy layering is really something anyone sane can manage. Don’t get me wrong: I’m not advocating strictly two layers. I’m only saying that you should think outside of your Enterprise box and not default to 10-ish layers when you start applications. Simply YAGNI.

      Maybe the author of this article only writes very simplistic applications, has never worked on an enterprise-sized application in a big team, or just needs to study more about what layered architecture is really about?

      Among others: a medium-sized E-Banking application for a Swiss bank (4M customers, 200k daily logins). I helped remove most of the layers, reducing about 90% of all screens to just the bare essence: SQL and XSLT. The outcome was beautiful, change requests were dirt easy, and the customer was extremely happy. And most importantly, it also helped them cut down on quite a bit of WLS licensing costs.

      1. Alright, great answer! I completely agree with what you just said. Not forcing tons of layers when they are not needed is the way to go. The code can always be refactored later on if another layer is needed. But basic layering (2-4 layers) is almost always the minimum, depending on the size of the project of course. So why not add them at design time, when you already know which concerns you want separated?

        A smart developer already has the data access layer and presentation layer ready for whatever future project they get next. Only the business logic changes. …that’s code reusability, isn’t it? A different topic, but related to layering your software… correct me if I’m wrong here.

        If something lightweight has to be scratched together quickly, with minimum budget and no plan for future maintainability then no layering may be the right solution. It really depends…

        1. If something lightweight has to be scratched together quickly, with minimum budget and no plan for future maintainability then no layering may be the right solution. It really depends…

          It does depend. And then again, this very blog is probably completely unlayered (wordpress). Still, more than 50% of all websites run on it. Can’t be that bad, can it? ;-)

          Ok, I’m kidding. WordPress probably isn’t a good example for anything.

        2. Note: I’ll probably write a follow-up post which is less inflammatory. Ideally, of course, you’ll have a couple more layers than 2 in a “regular” application. But the essence is to provide a way to bypass all layers for the 90% of screens that are read-only. It’s just great when you can stream data directly from the database to the UI with only stateless transformations in between (as I said in the example: e.g. SQL and XSLT, or SQL and jQuery, etc.)

  23. I’ve been down this path, and agree strongly with your experience.

    Database-independent persistence is a myth except for toy applications and demos.

    Trying to achieve “database independence” will just get you tied to a persistence provider. But don’t worry, JPA2 is there to save you, right? Tell me that again after you’ve taken an app that uses Hibernate via only JPA APIs and switched it to EclipseLink, or vice versa. It doesn’t work. Especially with the Criteria APIs or JPQL, there are just too many differences. If this sounds a lot like switching underlying SQL RDBMS implementations, you’re not mistaken… except that doing persistence-provider-specific bits is a much more annoying process than DB-specific things.

    So you’re using the subset of JPA that your unit tests show both Hibernate and EclipseLink cope with. Great. Now a customer wants you to add MS SQL as a supported database instead of your usual PostgreSQL. You plug it in, go, and … everything explodes because casting rules, type mappings, etc are all just different enough. You go through your code and adjust to find the subset that works for (MS-SQL|PostgreSQL) on (EclipseLink|Hibernate).

    At this point, people are complaining that loading a customer takes over a minute and does 10,000 queries because of n+1 fetching through the object graph. You’d love to use EclipseLink’s fetch hints, but Hibernate doesn’t have anything equivalent, so you’d have to rewrite your Criteria code as JPQL…

    … then someone comes along and wants support for their existing Apache Cassandra based architecture, they don’t want to add a new RDBMS to their infrastructure.

    Meanwhile, another customer says they want your system to use the same shared cache as their existing OpenJPA-based applications. Since you’re only using JPA, you can easily just switch out the persistence provider. Right? Right?

    Database provider independence is a (cruel) myth. It doesn’t work if you want an app that performs OK and is reliable across the target platforms and versions. I haven’t yet seen an attempt that doesn’t just add a hard dependency on a database-provider-independence module further up the teetering stack of abstraction layers.

    There’s nothing wrong with trying to make most of your code DBMS independent. In fact, I strongly support doing just that, and find people who code directly to one RDBMS very frustrating.

    The problem with JPA, and to a lesser degree EclipseLink/Hibernate, is that they treat making it hard to access DBMS-specific capabilities as a feature. They’re PROTECTING you from the evils of database-specific coding. Never mind that in a real app you simply require it sometimes. (I know about native queries, but they’re painful: they don’t interact well with criteria and type mappings, they don’t solve problems with batch fetching, etc.)

    Here’s an article I wrote back in 2012 about the problem of fetch control in JPA, to illustrate just one problem area.

    http://blog.ringerc.id.au/2012/06/jpa2-is-very-inflexible-with-eagerlazy.html

    1. That said, I’d like to qualify this by saying that I’m all in favour of good abstraction layers. Java itself is a good example. It lets you get down to the platform when you have to (though JNI is horrifying, JNA works well enough), but you don’t have to deal with it most of the time.

      I think the problem is mainly that JPA remains immature and restrictive. It’s like trying to write a non-trivial business application in BASIC. Sure, you can do it, but gee, it’d be better to be able to use all the features the language runtime isolates you from.

      The problem isn’t abstraction, it’s *bad* abstraction, and the use of another bad abstraction layer to try to paper over the flaws in the next one down.

      1. Yes of course. Even if this article was a bit polemic and inflammatory, the essence was to avoid adding layers as a means of “insurance” in up-front architectures and designs. Good design tends to emerge as we go, with experience.

        Java is a very good abstraction. When it was designed, it essentially solved one problem alone, introducing only one additional layer: that of platform independence.

    2. Thanks very much for your response. There’s a lot of frustration in that one :-)

      The bottom line is that tailoring architectures for large systems with many stakeholders is hard. The point of my article was that most of the “insurance” you might have “bought” up front is probably not going to help with the concrete requirements that pop up down the line, which is why keeping things simple initially will help you factor out the right layers (and other types of abstractions).

      Interesting article about JPA being inflexible about lazy / eager fetching. Have you already played around with JPA 2.1, which claims to solve this with named entity graphs?
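
      (For readers who haven’t seen them: a minimal sketch of a JPA 2.1 named entity graph; the entities and attributes are invented.)

      import javax.persistence.*;

      // Hypothetical entity using a JPA 2.1 named entity graph to control fetching
      @Entity
      @NamedEntityGraph(
          name = "Customer.withOrders",
          attributeNodes = @NamedAttributeNode("orders"))
      public class Customer {

          @Id
          Long id;

          // PurchaseOrder is another hypothetical entity
          @OneToMany(mappedBy = "customer", fetch = FetchType.LAZY)
          java.util.List<PurchaseOrder> orders;
      }

      // Usage: the graph overrides the static LAZY mapping for this one query
      //
      // List<Customer> customers = em
      //     .createQuery("select c from Customer c", Customer.class)
      //     .setHint("javax.persistence.fetchgraph",
      //              em.getEntityGraph("Customer.withOrders"))
      //     .getResultList();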

  24. Personally, I start with the business logic (based on use cases). At that point I don’t care where the data will come from nor where it will go. When the most critical business logic is done, I create a well-designed spike from the data storage provider to the UI or service or whatever method will be used to interface with the business logic. This spike will be the guideline for the rest of the implementations in my app.

    If it is decided that the app will be consumed through a web browser, I develop a web page that can interface with the business logic. If the app I’m creating will be in the cloud, but it is wanted that one can use it with, let’s say, a native Windows desktop app but also mobile apps, then I put in a service layer that talks with my business logic and has a well-designed public interface.

    Then on to the data storage… it does not matter whether I will be using MySQL, Oracle, MS-SQL or even serializing persistent data to a binary file.

    No point in getting ahead of yourself when designing your app. What I just described was 2-4 layers, with some variations for some of the layers. What I might add to all this is a security layer for filtering out threats that can’t be dealt with in the UI or the data storage. The business layer will NEVER include this functionality, however.

    1. Really? I feel that this approach won’t work for applications that are to last 10+ years, in which case I really believe that the database should be the center of design (as the database will probably also last for 10+ years, unlike much of the business logic).

      But maybe, experiences differ.

      1. Yes, of course the DB has to be designed properly for optimal performance. But the business logic should not be aware of where the data is coming from, or at least not be aware of any query logic. That would be a highly coupled solution. The only thing it should know about is the interface it communicates with.

        There are many design patterns around: one is DB-first, others are business-logic-first or UI-first. I can start from whatever direction and still end up with the same outcome. It is just a matter of which path helps best in visualising things.

        I want to decouple tasks that have nothing to do with each other. The persistence layer in my apps is designed to do one thing only, and that is to query a database or whatever data storage solution I choose to use, then let the acquired data flow forward to the layer that requested it, in a form that has been defined in the interface.

        I also write my queries by hand, using as much ANSI SQL as possible. The remaining DB-specific query language words I put in a separate place, where I can grab the right word if the DB for some reason changes from one version to another. A good example would be an old version of MS-SQL that is upgraded to a newer version: not 100% compatible, but the ANSI SQL will stay the same.

        The same idea goes for any other task, like displaying data or receiving user input, etc… that has nothing to do with business logic or database queries. The way of displaying things changes every now and then. The UI is easy to update, or to change from a desktop implementation to the Web, if it’s decoupled.

        Correct me if I’m wrong, but I believe you are living in a Java/JavaScript/jQuery world, whereas I am in the C#/C++/.NET world. The design patterns may vary a bit, it seems. But one thing I still agree on is that implementing tons of layers when they are not needed is a waste of time.

        What I’ve seen in many apps with too many layers is that the layers keep repeating the same things over and over again, when one layer could have done the job. Bad design, in my opinion.

          1. Probably, the general ideas behind working in the Java / .NET worlds are similar. I.e. this whole discussion could also have taken place on a more .NET-oriented blog.

          I agree that at some point, there is a certain desire to decouple what is often perceived as “business logic” from the “persistence layer”. Obviously, most people want to separate logic into components, hiding data access parts behind APIs. E.g. no one sane wants to have JDBC code in the middle of a very complex accounting calculation. But that doesn’t automatically endorse layering, where the whole application is required to be completely segregated. In many cases, being able to express querying predicates already in the business layer is much more efficient than repeating the same code / the same APIs in two layers just for the sake of clean layering. Sometimes, it is more efficient to keep working on sets of tuples also in the business layer, rather than mapping them to object graphs, pretending that they didn’t really originate from an RDBMS.

          As I’ve mentioned in other comments, what ultimately leads to the success of a good architecture is also the ability to bypass layers where it makes sense. This will be a topic for a follow-up post.

  25. Yes, having the same logic twice, in 2 or sometimes even 3 layers, is insane. That would be a hint that there is something wrong with the design.

    Excited to read your follow-up blog post.

  26. I do hate too many layers and over-engineering. But nothing should go to extremes, right?

    A simple application should get a simple and intuitive design. If an architect tells you his solution is a one-size-fits-all one, he must be a cheater!

  27. As developers, whenever we find a problem with something we jump straight to the opposite extreme. The ideal is almost always somewhere in the middle.

    I’m a strong believer in YAGNI (you ain’t gonna need it); however, I don’t think it applies when I KNOW I am going to need it, and soon.

    For REST APIs that talk to a database (or possibly another backend REST API), the basic 3 layers make a lot of sense (see the sketch after this list). It’s just simple abstraction and separation of concerns.

    – One to deal with all the HTTP-specific stuff. These are usually very thin, often 1 line per endpoint.
    – One to do all your business logic, with no knowledge of the web or data storage.
    – And one to handle the data store specifics (maybe ORM details or third-party API specifics).
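
    A hedged sketch of those three layers, assuming Spring (all class and method names are invented):

    import org.springframework.stereotype.Repository;
    import org.springframework.stereotype.Service;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.PathVariable;
    import org.springframework.web.bind.annotation.RestController;

    // Hypothetical names throughout; each class is one of the layers above

    @RestController
    class BookController {                  // HTTP layer: very thin, 1 line per endpoint
        private final BookService service;
        BookController(BookService service) { this.service = service; }

        @GetMapping("/books/{id}")
        String get(@PathVariable long id) { return service.findTitle(id); }
    }

    @Service
    class BookService {                     // business logic: no web, no storage details
        private final BookRepository repository;
        BookService(BookRepository repository) { this.repository = repository; }

        String findTitle(long id) { return repository.fetchTitle(id); }
    }

    @Repository
    class BookRepository {                  // data store specifics (ORM, SQL, ...) live here
        String fetchTitle(long id) { return "…"; }  // placeholder for real data access
    }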

    The benefits for large projects and/or large teams that I’ve seen (and I have spent many years writing with no layers, and many years since writing with basic layers):
    1. Everyone’s code follows a similar pattern, and it’s very easy to read other people’s code because it’s extremely *predictable*. Particularly great if you’ve got junior developers working in a large team.
    2. You get a lot more *reuse*. We see this all the time when things are abstracted nicely; it’s easier to compose new code like Lego.
    3. Refactoring is easier, as you have at least some decoupling in your code by default. You may not be likely to swap one database server for another, but you are very likely to change the way a particular entity is stored. If you have hundreds of connections to that entity (tight coupling), this refactoring becomes very difficult.

    I’m sure there are some cases out there that really don’t need it, but all the web APIs I have ever worked on have benefited greatly from this basic engineering practice.

    1. Great comment, Stuart. This is exactly how I see it. Whatever program I’m coding, I always separate these 3 layers.

  28. Nice article. Development models have become very sleek to adapt to the market. Remember the old days, where we had large numbers of translation layers and translators to talk between layers? It does make things heavy. But not having any layering is equally bad: minimal, sleek layering is needed to keep the flow of data clean.

    1. You could have solved those problems with a UI layer and a data layer only. The middle layer was just there to make you feel non-guilty, because middle layers is what Java developers do for a living, right?

      But of course, I don’t know your concrete application architecture, so I couldn’t tell whether my experience (YAGNI: the middle tier) would have worked out for your application/team.

  29. Well, we have a legacy system: a lot of JDBC native SQL written in Struts1 JSPs. We wanted to move to an SPA like Angular but couldn’t, due to so much business logic being in the JSPs. It would take at least a year to move it out, whilst other feature development would be paused. If they had been separated in layers from the start, with the JSPs rendering dummy objects, it would have been possible. Surely it is best to separate from the start, especially when there is pressure to deliver in as quick a time as possible.

    I also agree with the testing point: it is far easier and quicker to unit test or integration test the service layer to ensure functionality is working as expected. Front-end UI tests take hours to run, especially when multiple combos exist, and are more susceptible to breaking changes, such as moving buttons around on a screen.

    The application started as a tiny prototype, was sold, and then has been added to for the last 15 years.

    1. Sure, you have some additional pain now that you would not have had otherwise. But there’s no guarantee that your abstraction, had you designed it 15 years ago, would have worked out flawlessly for you now. Additionally, for all those 15 years, you would have had to carry around layers of abstraction that would have gotten in the way, compared to your simple design (or lack thereof) that allowed you to move fast for 15 years.

      It’s difficult to weigh the pros and cons of both approaches objectively, given the immediate pain now, but the way I read your comment, I think the JSP approach was a success.

      And now, the modest price to pay is perhaps to re-implement both frontend and backend.

  30. The nature of application development as such leads to the three basic “n-tier” layers: where the data goes (the data layer, or database); how the application gets stuff done (the business layer, including an API to talk to it); and what the user sees (the presentation layer, where the UI talks to the business layer via the API). Any application of any complexity rapidly gets out of hand without those three basic layers (see: client/server model).

    1. Yeah, I dunno. That middle tier is really overrated. I’ve seen very successful, completely stateless applications that used SQL and XSLT for data transformation flows with almost no need for any “business logic”. It didn’t deteriorate.

      In contrast, I’ve seen an extreme number of completely overengineered 3-tier applications where the business logic was distributed between layers (a bit here, a bit there, in every layer), hardly any logic was reusable across layers or even testable, and 90% of the “business logic” was just about wrapping / unwrapping stuff to hand it over to some other “layer”, because that other “layer” was written by someone else who had a different understanding of how to do things, and so on.

      So, I’m really not convinced that one approach or the other is a guarantee for success or failure.

  31. Excellent article. There is so much crap spouted about (and money wasted on) layered architectures and SOC.

    Look, if you are using an RDBMS which supports stored procedures (and if you’re not, it’s time to lose the training wheels), then you should have already implemented the principle that direct access to the data is forbidden: you do everything with sprocs. So you’ve already got an abstraction layer wherein to implement your business rules. If scalability is a concern, and you really can’t configure your server(s) to scale, there’s always a bigger box you can throw at it (which is invariably cheaper than development effort). (A sketch of sproc-only access follows below.)
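
    For what it’s worth, a hedged sketch of sproc-only data access from Java via JDBC (the procedure name and parameters are invented):

    import java.math.BigDecimal;
    import java.sql.*;

    // Hypothetical: all writes go through a stored procedure, never raw table access
    public class SprocOnlyAccess {

        public static BigDecimal createOrder(Connection c, long customerId, String sku)
        throws SQLException {
            try (CallableStatement cs = c.prepareCall("{call create_order(?, ?, ?)}")) {
                cs.setLong(1, customerId);
                cs.setString(2, sku);
                cs.registerOutParameter(3, Types.DECIMAL); // total computed in the DB
                cs.execute();
                return cs.getBigDecimal(3);
            }
        }
    }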

    Re-usability is a chimera. Take MVC: where can you re-use anything? In another MVC system, and nowhere else, that’s where.

    Replacing a layer? Go ahead then. I’ll have re-written the entire system in less time.

    Customers pay for functionality, not architectures. If they understood how much of their money gets frittered away on internal plumbing rather than functionality they would freak out.

    1. Stored procedures have their merits, but their language and tooling capabilities are far behind client-side capabilities. There is, for example, no IDE for PL/SQL as good as Eclipse or IntelliJ for Java, or Visual Studio for C#. That’s a killer argument against stored procedures in terms of productivity.

      I’m not arguing against your point; I wish procedures were more powerful, but RDBMS vendors have missed this opportunity for decades now.
