I know. Heresy.
But let’s have the conversation that a lot of senior engineers have privately and almost nobody has publicly: for most companies, at most stages, microservices created more problems than they solved. And the industry’s collective refusal to say this out loud has cost billions of dollars in unnecessary complexity.
Before you close the tab: I’m not making a blanket argument against microservices. The architecture is right for certain organizations at certain scales dealing with certain kinds of problems. Netflix needs microservices. Uber needs microservices. You probably don’t. And the reason you adopted them probably had less to do with your technical requirements and more to do with the fact that Netflix and Uber talked about them at conferences.
How we got here
The microservices movement started as a reasonable reaction to a real problem. Monolithic applications, as they grew, became difficult to deploy, difficult to scale independently, and difficult to work on with large teams. The release cycle slowed. The blast radius of a bad deploy was the entire application. Teams stepped on each other. The pain was real.
Microservices offered a compelling answer: decompose the monolith into independently deployable services, each owned by a small team, each with its own data store, communicating over well-defined APIs. Teams move independently. Services scale independently. Deploys are small and scoped. Beautiful.
What happened next is that the industry took an architecture pattern designed for organizations with hundreds of engineers and thousands of requests per second and applied it to teams of fifteen building CRUD applications that serve a few hundred concurrent users. Because the conference talks didn’t come with a “you must be this tall to ride” sign.
The complexity nobody budgeted for
Here’s what the microservices pitch leaves out:
You now need service discovery. You need a service mesh or an API gateway or both. You need distributed tracing because a single user request touches seven services and when something breaks you can’t just read a stack trace. You have to correlate logs across multiple systems running on multiple nodes with clock skew. You need circuit breakers because Service A calling Service B calling Service C means a failure in C cascades backward. You need to think about data consistency across service boundaries, which means you’re now dealing with eventual consistency, sagas, or distributed transactions, each of which is a graduate-level distributed systems problem.
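To make one of those requirements concrete, "you need circuit breakers" means every cross-service call gets wrapped in state-tracking machinery like this minimal Python sketch (the class, thresholds, and names are illustrative, not from any particular library):

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker sketch: opens after N consecutive
    failures, then half-opens after a cooldown to probe downstream."""

    def __init__(self, failure_threshold=3, reset_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                # Fail fast instead of hammering a struggling service.
                raise RuntimeError("circuit open: failing fast")
            # Cooldown elapsed: half-open, let one probe call through.
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()
            raise
        else:
            self.failures = 0
            self.opened_at = None
            return result
```

And that's one pattern, for one failure mode, and you need it on every network hop. In a monolith, the equivalent of Service A calling Service C is a function call that either works or throws.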
You also need Kubernetes, because managing thirty services on bare metal or basic VMs is operational pain that nobody wants. So now you need someone who understands Kubernetes, which is a specialization unto itself. And a CI/CD pipeline that can build, test, and deploy thirty services independently. And a monitoring stack that gives you a coherent view across all of them.
The team that was struggling to ship features in a monolith is now struggling to ship features in a distributed system while also operating the distributed system. They traded one set of problems for a harder set of problems. The original problems were organizational. The new problems are organizational AND technical.
This isn’t a theoretical argument. It’s what we’ve walked into repeatedly: teams that adopted microservices two or three years ago and are now spending 40% of their engineering capacity on infrastructure and operational overhead that didn’t exist when they had a monolith.
The dirty secret: most of the benefits don’t require microservices
Independent deployability? You can get that with a well-structured modular monolith and feature flags. Teams owning distinct areas of the codebase? That’s an organizational and code architecture decision, not an infrastructure one. Independent scaling of hot paths? Your cloud provider will let you scale a monolith vertically to a size that handles more traffic than you will ever have, and horizontal scaling of a monolith behind a load balancer handles the rest for most applications.
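To sketch what "independent deployability with feature flags" means in a monolith: you deploy dormant code whenever you like and release it when the flag flips. A minimal Python illustration (the flag store, flag name, and functions are all hypothetical; real systems back the store with a config service or database so flags flip at runtime, without a redeploy):

```python
# Hypothetical in-memory flag store. In production this would be
# backed by a config service so it can change without a deploy.
FLAGS = {"new_billing_flow": False}

def is_enabled(flag: str) -> bool:
    return FLAGS.get(flag, False)

def charge(customer_id: str, amount_cents: int) -> str:
    # The new code path ships in the same deploy as the old one,
    # but stays dark until the flag is turned on for real traffic.
    if is_enabled("new_billing_flow"):
        return f"v2:charged {amount_cents} to {customer_id}"
    return f"v1:charged {amount_cents} to {customer_id}"
```

A team can merge and deploy the v2 path days before it's released, and roll it back by flipping a flag rather than redeploying, which is most of what "independent deployability" buys in practice.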
The one benefit that genuinely requires microservices is the ability for truly independent teams to use different technology stacks and deploy on completely different cadences with zero coordination. If you have five hundred engineers across dozens of teams building a platform with fundamentally different scaling characteristics across different components, yes, microservices make sense. That’s the Netflix use case. That’s the Amazon use case.
If you have forty engineers building a SaaS product and your biggest scaling challenge is your database, microservices are solving a problem you don’t have while creating a dozen problems you didn’t need.
The modular monolith isn’t settling
There’s been a quiet renaissance of the modular monolith in the last year or two, and it’s coming from the engineers who’ve lived through the microservices migration and come out the other side with scars.
The idea is straightforward: a single deployable application with clear internal module boundaries, enforced through code architecture rather than network calls. Each module owns its domain. Modules communicate through well-defined internal interfaces. The boundaries are real. You can’t just reach into another module’s database tables. But the communication is in-process, not over HTTP. The deployment is one artifact. The debugging is a stack trace, not a distributed trace.
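Here’s a minimal Python sketch of what those boundaries can look like (all module and method names are illustrative; a real codebase would make each module a package and enforce the boundary with an import linter or build tooling rather than two classes in one file):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Invoice:
    customer_id: str
    amount_cents: int

class BillingModule:
    """Owns its domain and its data; other modules never
    touch its tables, only its public methods."""

    def __init__(self):
        # Stand-in for billing's private data store.
        self._invoices: list[Invoice] = []

    def create_invoice(self, customer_id: str, amount_cents: int) -> Invoice:
        invoice = Invoice(customer_id, amount_cents)
        self._invoices.append(invoice)
        return invoice

class OrdersModule:
    """Talks to billing only through its well-defined interface."""

    def __init__(self, billing: BillingModule):
        self._billing = billing

    def place_order(self, customer_id: str, amount_cents: int) -> Invoice:
        # A plain in-process call: one stack trace, one transaction
        # scope, no HTTP hop, no retries, no distributed trace.
        return self._billing.create_invoice(customer_id, amount_cents)
```

The boundary is as real as discipline and tooling make it, but crossing it costs a function call instead of a network round trip.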
You lose the ability to scale modules independently. You lose the ability to use different tech stacks for different modules. But you gain simplicity of deployment, simplicity of debugging, transactional consistency, and about 60% of your engineering team’s time back from infrastructure management.
For most products, at most stages, that’s an overwhelmingly good trade.
And here’s the thing that doesn’t get said enough: you can always decompose later. A well-structured modular monolith with clean boundaries is the best possible starting point for a future microservices migration, if and when you actually need one. The reverse, consolidating a distributed mess of microservices back into a coherent system, is one of the hardest things in software engineering.
What this isn’t
This isn’t a “monoliths good, microservices bad” take. That’s as useless as the “microservices good, monoliths bad” take it replaces.
This is an argument for matching your architecture to your actual constraints: your team size, your traffic patterns, your operational maturity, your scaling requirements as they exist today, not as you imagine they might exist in three years. It’s an argument for being skeptical of architectural decisions driven by industry trends rather than engineering analysis. And it’s an argument for being honest about the costs you’ve taken on and whether they’re paying for themselves.
If you’re running microservices and they’re working for you, genuinely working, not just “we’ve gotten used to the pain,” then great. Keep going. But if you’re running microservices and a significant chunk of your engineering effort goes to coordination overhead, infrastructure maintenance, and debugging distributed system failures that wouldn’t exist in a simpler architecture, it might be worth asking whether the architecture is serving you or whether you’re serving the architecture.
That’s not a comfortable question. But it’s an honest one.
This is an opinion piece. We don’t have a product to sell you on the back of it. If it made you think about your own architecture differently, that’s enough.