Microservices were always a solution to the organisational problem of getting more developers working on a system at once. A lot of people working on one code base over the top of each other often causes problems, especially around ownership. The solution was services developed independently, with the trade-off being increased complexity across a range of things.<p>There is an essay in The Mythical Man-Month about Conway's law, and microservices are very much a way to use Conway's law to achieve scale of development. You likely don't need microservices until you hit a scale where the monolith is a real pain point. You can probably cope with a monolith, especially with some reasonable design effort, up to around 100 developers, maybe more with a lot of separation, but at some point beyond the 25-developer range it becomes cheaper and easier to switch to multiple deployed units of software and deal with the increased system complexity, the interface designs, and the inherent versioning issues that causes.<p>It is easier to start with a monolith, find the right design and then split on the boundaries than it is to make the correct microservices to begin with.
Come to Erlang/Elixir (and OTP), you get the best of both worlds:<p><pre><code> - a mono repository
- a single codebase for a single system
- your micro services are supervisors and gen servers (and a few other processes) in a supervision tree
 - you decide which erlang node runs which apps
- your monolith can scale easily thanks to libcluster and horde
- ...
</code></pre>
Also, there is a midpoint between a monolith and microservices, called Service-Oriented Architecture (SOA); you could have:<p><pre><code> - a DAL (Data Abstraction Layer) service
- a Business Logic service (talking to the DAL)
- an API (talking to the Business Logic service)
- a Frontend (talking to the API)
</code></pre>
Your API (or gateway, or whatever you want to call it) can serve as glue for third-party services (like Stripe, or anything unrelated to your business).<p>Microservices are a solution to an organizational problem, not a tech one. You need multiple teams to work on the same system without blocking each other. This is a solution for huge corporations, not your two-pizza-team startup.
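A minimal sketch of that layering in Python (the names here are invented for illustration, not taken from the comment): the API layer only talks to the business logic, which only talks to the DAL, so storage details never leak upward.<p><pre><code>class UserDAL:
    """Data Abstraction Layer: the only place that touches storage."""
    def __init__(self):
        self._rows = {1: {"id": 1, "name": "ada"}}

    def get_user(self, user_id):
        return self._rows.get(user_id)


class UserService:
    """Business logic: rules and validation, no storage details."""
    def __init__(self, dal):
        self._dal = dal

    def display_name(self, user_id):
        user = self._dal.get_user(user_id)
        if user is None:
            raise KeyError(f"unknown user {user_id}")
        return user["name"].title()


def api_get_user_name(service, user_id):
    """API layer: maps the business result into a response shape."""
    return {"name": service.display_name(user_id)}


if __name__ == "__main__":
    print(api_get_user_name(UserService(UserDAL()), 1))  # {'name': 'Ada'}
</code></pre>
Each layer can later become its own deployable service without changing the shape of the calls.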
The monolith vs microservice debate always seemed so misguided to me. It seems exactly like vim vs emacs. Or ruby vs python. The arguments are always the same.<p>* "In vim I can do this in 3 keystrokes, in emacs it takes 5!" (where "this" is some vim-idiomatic thing you'd never do in emacs).<p>* "Ruby leads to bad programs, I have inherited a legacy ruby codebase and you wouldn't believe what they did..." / "Python leads to bad programs, I have inherited a legacy python codebase and you wouldn't believe what they did..."<p>* My team took a python/ruby app and rewrote it in ruby/python, and now it's 99% faster and our productivity is way higher!<p>What I'm hinting at is this: You can write a bad or good microservice or monolith. The rules are different. You'll have different frustrations and tradeoffs. You'll have to play to the architecture's strengths and avoid its weaknesses. You'll NEED institutional standards to keep people from doing the wrong thing for the architecture model and making a mess.
I think the biggest pain point for me is the almost constantly-changing definition of "microservices" - at work, whenever someone says "microservices" they really mean a services architecture. I can largely get behind that. Because when you say services architecture, to me anyway, that doesn't mean everything has to be a "service". Some stuff can be on the same machine if you do not have a firm point of separation between the services. Maybe they will share some code eventually... ok... put them together, who cares?<p>Last time I asked our ops guy to define microservices for me, he couldn't - instead he told me to read a 400 page book by some popular microservices preacher (who of course makes a lot of money by consulting for companies looking to use or currently using microservices ;))<p>If you cannot explain the general concepts to me, maybe you do not know what it is, and I think that is a big part of the problem.
Okay, Craig, you can have your monolith back when you can get your engineers to care <i>in the slightest</i> about keeping the codebase organized instead of turning it into a big ball of mud. I'm talking encapsulation. I'm talking well-defined interfaces that aren't glued to specific implementations. I'm talking some sort of dependency inversion, dependency injection, service locator patterns, any of that. I'm talking real use of the single-responsibility principle, all that software-engineery stuff that everyone <i>ignores</i>. Because I understand what you're getting at 110%, but by and large, all this stuff doesn't happen otherwise.<p>Until then you're going to have these things forced on you by your local Architect, forced on you by running a bunch of separate processes on separate containers with DNS as the cluster's service-lookup framework.
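As one concrete, hypothetical illustration of the discipline being asked for here, in Python: a small interface, an implementation hidden behind it, and the dependency injected so callers never name the concrete class (all identifiers below are invented).<p><pre><code>from typing import Protocol


class PaymentGateway(Protocol):
    def charge(self, cents: int, token: str) -> bool: ...


class StripeGateway:
    """One concrete implementation; nothing outside this module should name it."""
    def charge(self, cents: int, token: str) -> bool:
        return cents > 0  # placeholder for a real API call


class Checkout:
    """Depends on the interface, not the implementation (dependency inversion)."""
    def __init__(self, gateway: PaymentGateway):
        self._gateway = gateway  # injected, so tests can pass a fake

    def pay(self, cents: int, token: str) -> bool:
        return self._gateway.charge(cents, token)


class FakeGateway:
    def charge(self, cents: int, token: str) -> bool:
        return True


assert Checkout(FakeGateway()).pay(100, "tok_test")
</code></pre>
None of this needs a separate process or a network hop; it's exactly the boundary a microservice would otherwise force on you.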
It's just about need. Most people don't need microservices. Let me repeat that. Most people don't need microservices. It's only relevant at scale. The scale I'm talking about is organizational scale and then potentially technical scale but it's mostly about solving people problems by enabling independent development of products and features.<p>Developers are their own worst enemy. They love shiny buzzwords and using unnecessary tools and concepts just to say they did. Conceptually you can't blame architecture patterns, that makes no sense. Blame those who choose to adopt patterns for the wrong reasons.
This is one of the big reasons I got into Elixir.<p>The way that you're encouraged to architect your code makes it really easy to separate a specific piece (or pieces) if you need to. The functional, no-side-effects approach combined with Elixir's ability to easily communicate between nodes means that if I need to separate a particular set of functionality to certain servers...it's just moving things around.<p>If I take a function call_me(arg1, arg2) and it returns a result without side effects, it's no different for me to say call_me(node, arg1, arg2), because it's still going to give me the same result, just from a different server.<p>This flexibility means that I can comfortably build a monolith and not have to worry about having to untangle it later if I need to. I love it. It gives me long term peace of mind with short term productivity.
I wonder if anyone who wants to bring back the monolith has ever worked on a true large monolithic codebase? Not something that's like 5-10k lines of code excluding frameworks, but monsters that are 100k+. Where test suites take an hour to run end-to-end and you've got 50+ devs all issuing pull requests in the same codebase? I feel like webapps usually have a sweet spot in terms of size and logical reach. This whiplash hype cycle of "X" and "anti-X" just exposes the ever lingering problem of letting blog posts on HN determine your architecture decisions.
I relate so much to this article.
I've had previous experience with a backend monolith repo, and these days I have to deal with a backend that has 20+ repos. It is hell. Duplicated code, duplicated logic, duplicated tests, duplicated settings, a hell to introduce newbies to the architecture, async calls to external basic APIs that could've been just simple method calls.<p>I think that the one major disadvantage of having a big monorepo is that, with those multiple entry points, you might end up with a bunch of unused dependencies. But even that is manageable, I think: you can have different package dependency definitions whilst using the same codebase.<p>I've always worked with small teams (max 5 or 6 developers) and that's another point in favor of monorepos. I understand that big companies might want to have different teams working on different repos, for organisation reasons.
In the NodeJS / React ecosystem, no one really wants you to do monoliths.<p>NextJS, for example, doesn’t let you manage service lifecycle methods (unless you write a custom server - which they explicitly warn you against). NextJS wants to be special, that’s dumb and it sucks (well, in the context of using it to drive your SaaS platform, it’s pretty smart).<p>Remix, for example, wants to control your client/server API calls, so you’re hard-pressed to use other tools like GraphQL, and you incur risk if you need to grow into microservices later. Same story as NextJS about being special and probably driving SaaS platform sales.<p>Amazon just makes you manage an alphabet soup's worth of products, which are all pretty expensive - unless you want to just lambda it… meaning custom runtimes and … back to microservices.<p>Point: microservices aren’t just driven by your org anymore, they’re pushed by vendors.
This is dated 2019-03-13 and it says "It feels like we’re starting to pass the peak of the hype cycle of microservices" but wasn't the peak already earlier? Looking at HN posts - <a href="https://hn.algolia.com/?q=microservices" rel="nofollow">https://hn.algolia.com/?q=microservices</a> - it looks like a lot of the high voted posts against microservices were 4-7 years ago.<p>Highest scoring one (2018): <a href="https://news.ycombinator.com/item?id=17499137" rel="nofollow">https://news.ycombinator.com/item?id=17499137</a>
I remember a talk a few years ago where the CTO of a local startup was gushing about microservices and how productive they made his team. Sounded like a tech evangelist who had drunk their own Kool-Aid.<p>Then one of the "questions" in the Q&A was a pretty aggressive attack based on how unproductive the skeptic's company had been to date and how they were on the verge of failing.<p>The speaker asked how many development teams they had and what was the size of their DevOps/Tooling team(s). When the skeptic admitted they only had a few developers, the speaker recommended they IMMEDIATELY pivot to Django/Rails/Node.js/.Net; "whatever you are most comfortable with". And then said "Why are you still standing there? You need to pivot tomorrow morning."<p>I think of those two questions every time I read or consider microservices. "How many teams. How big is your DevOps/Tooling team."
Why do we organize by shape, and not by function, in our monoliths? We put all our controllers in a package, all our DB accessors in another package, all our "data objects" in another package. Even in OO languages like Java, where we have package accessors that never get used, because we lay out our code such that package scope can never be used. We instead lay out the code according to "shape".<p>When figuring out what to spin off as a microservice, we do this by functionality. Why not just make it easier in the monolith and organize by functionality instead? Let's act like 4yr olds, not 3yr olds: <a href="https://pubmed.ncbi.nlm.nih.gov/12090481/" rel="nofollow">https://pubmed.ncbi.nlm.nih.gov/12090481/</a>
One wrinkle I liked was to build a monolith, distributing the same bits everywhere, and to configure the actual services to start on an instance-by-instance basis. This simplifies distribution (e.g., "unpack this single zip file, it's got everything") and you can rapidly change what is running and where with a command-and-control system of your choice.<p>I can't imagine having different build products and deployment stories for every service type, nor can I imagine institutionalized version skew of more than a couple of weeks.<p>This probably doesn't scale to large teams, but it let a small team work pretty effectively with thousands of microservices.
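A minimal sketch of that wrinkle in Python, with invented service names and a made-up SERVICES environment variable: one artifact is deployed everywhere, and each instance decides at startup which services it actually runs.<p><pre><code>import os
import threading
import time


def run_api():
    while True:           # stand-in for an HTTP server loop
        time.sleep(1)


def run_worker():
    while True:           # stand-in for a queue-consumer loop
        time.sleep(1)


REGISTRY = {"api": run_api, "worker": run_worker}

if __name__ == "__main__":
    # e.g. SERVICES=api,worker on one instance, SERVICES=worker on another
    enabled = os.environ.get("SERVICES", "api").split(",")
    threads = [threading.Thread(target=REGISTRY[name], daemon=True)
               for name in enabled if name in REGISTRY]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
</code></pre>
The command-and-control layer then only has to flip that one variable and restart an instance to change what runs where.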
> Setup went from intro chem to quantum mechanics<p>Doesn't really seem fair. On the one hand you might need to do a bit of work so that you can say `docker-compose up` or `nomad up` or whatever, but at the same time there are plenty of issues with running binaries/databases directly on a laptop - version skew, for instance.<p>> So long for understanding our systems<p>This is fundamental to all asynchronous systems. You don't have backtraces anymore. If your service has concurrency primitives you probably already have to solve this problem with tracing, microservices just give you another asynchronous primitive.<p>> If we can’t debug them, maybe we can test them<p>Bringing up your entire application is what you'd have to do in a monolith as well, I don't understand this criticism. Also, "teaching" your CI to do this is 0 additional work - it's gonna be another "docker-compose up" or whatever, generally speaking.<p>Our microservice codebase runs on laptops just as it runs in the cloud. It's pretty nice.<p>With regards to "That is probably a bit too much effort so we’re just going to test each piece in isolation" - again, this is the same thing with your monolith. You'll just do this at the module level.<p>This is really a "right tool for the job" situation. And that's hard for people to understand, since oftentimes you don't know what you're building upfront.
As a junior engineer (only working 6 months), the K8s side of things has been my biggest barrier for learning. On top of having to learn about software engineering practices, I’ve had to learn helm files, deployments, services etc. It was/is very overwhelming. I know I’m new and naive but it seems needlessly complicated and it’ll be another few rounds of abstraction beyond K8s before people are happy with it.
Both styles have their places. I've done migrations of monolith to microservices and I've done microservices to monolith. In the end it's about what's best for the project/client, not the purism of "I only do xxx or only yyy". If you're one of those developers then don't call yourself "senior", you're still in the mindset of a junior.
Hey, the longer it takes to understand, iterate, and work on a particular architecture, the more money you're making per task completed as an engineer... so the more ridiculous the architecture with which you're working, the slower your work is, the longer it takes to get done, the longer you stay employed!
In my experience, the project to break apart a monolith often happens without a clear definition of what problems we're trying to solve by breaking apart the monolith. Which ends up creating a raft of new problems, plus the old problems, and a bunch of sticky leftovers that are perpetually "going away soon".<p>And since you have no clear definition of why and what outcomes you expect, you also get massive scope creep in the middle of all this. Then you run into all the things no one planned for, because there was no plan: like how do we serve business functions such as BI from our 30 new microservice databases?
If you're dealing with any complex architecture, there will be portions of your service that will be hit more than others.<p>I'm currently building a backend which will need real-time capabilities as well as standard RESTful HTTP services.<p>Separating out the real-time service, which will need significantly more performance than the RESTful services, will help me scale better.<p>Furthermore, the entire backend is written in Python, because that is what I'm capable of at the moment, but in the future, migrating the real-time service to Go will be heavily favorable; separating it into its own service allows that rewrite to happen with ease.<p>Now, there are many cases where building microservices is overkill, but this isn't a one-size-fits-all approach as the author would suggest, and I think we should all be tired of hearing a this-or-that type of article.
I've had great success with having a monolithic code base with multiple entry points. Each entry point is sort of a micro service (or just service), but it can access the same db as the other services (if it makes sense), use the same types, and crucially, it can easily be integration tested with the other entry points. With full debug support.<p>Such a "monolith" need not be the only one in the company. One per high level module or team works well.<p>I guess my point is, it doesn't have to be either giant monolith or tiny micro services in separate repos. There's everything in between as well.
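A toy sketch of that shape in Python (names invented): two entry points import the same core functions and types, so an integration test can drive both in one process, with an ordinary debugger attached.<p><pre><code># core.py: shared types and data access used by every entry point
def create_order(db, order_id, total_cents):
    order = {"id": order_id, "total_cents": total_cents}
    db[order_id] = order
    return order


def invoice_amount(db, order_id):
    return db[order_id]["total_cents"]


# entrypoint_orders.py would expose create_order over HTTP;
# entrypoint_billing.py would expose invoice_amount.
# Because both are just imports of core, an integration test can run the
# whole flow in-process:
def test_order_then_invoice():
    db = {}
    create_order(db, "o-1", 4200)
    assert invoice_amount(db, "o-1") == 4200


test_order_then_invoice()
</code></pre>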
Microservices are a great idea, if you don't religiously try to make everything a microservice.<p>I feel more inefficiencies were caused by forcing a microservice solution onto every problem than by big monoliths.
Conway's Law does not demand Microservices, it describes abstraction barriers. Forcing conflicts to be resolved in VCS as opposed to at runtime is a feature of monoliths, not a bug.
IMO you should never create a new service unless it serves an engineering reason rather than an organizational one. There are a lot of tools to help out monoliths, and a lot of ways to make it easier to shard the monolith as well.<p>Some services need it, like a backend intake platform of some sort that needs to have radically different performance characteristics than a user-facing frontend. But for most services it just does not make a lot of sense to do this.
It also depends on what kind of monolith we're talking about.<p>We've found that "moduliths" (modular monoliths split into clearly defined bounded contexts with public APIs) work as well as microservices for scaling development: each team is responsible for their own module, there are very few conflicts, there's no spaghetti because we have architectural reviews whenever a module wishes to cross the "module barrier" and call into another module etc. (i.e. introducing a new dependency). You can spin up as many modulith instances as you wish as well.<p>The problem is that our modulith is written in PHP using a very popular enterprisey framework. PHP is based on the paradigm of spinning up a new process per request (php-fpm can recycle them but still), so every request ends up reinitializing the whole framework every time: its entire dependency injection tree. Every new module increases response times linearly, it doesn't scale. Another issue is that the single DB (common for monoliths) becomes the bottleneck, as all modules/contexts go through it.<p>Our PHP modulith is very costly in terms of runtime. A similar request into a microservice is usually 20-50 times faster because it's written in Go and manages its own DB. I think if our monolith was written in Go or Java from the very beginning we would have less impressive results after switching to microservices. Stuff rewritten from scratch is also usually faster than tons of old accumulated cruft.<p>Deployment/compilation is much faster now, the old monolith also used to have a lot of JS/CSS processing, PHP linters during build etc. so a tiny change to a module would trigger full recompilation of all modules running for 30-40 minutes. Each microservice is a separate deployment however, so a change to it only takes 1-2 minutes to deploy/release.<p>My point is that when people are talking about monoliths vs microservices they are often comparing dinosaurs written 10-15 years ago (PHP, old frameworks with bad design decisions, tons of accumulated spaghetti) to modern, more lightweight languages/tooling (for example, Go, k8s etc)<p>I think a "modern modulith" has its right to exist and is a viable competitor to microservices, provided they use more lightweight frameworks/tools, use paradigms such as modules and CQRS, and if somehow they allow smart, incremental deployments.
I want to know how many people who are using "microservices" are actually using "microservices" with separate databases for each service, separate teams and so on...?<p>If you asked my boss, we're using microservices. But really we're just taking common tasks and breaking them out to their own service. Now that's kinda like microservices, and it is very handy ... but it is not the full definition that I know of.
These days, I try not to prematurely optimize by setting up microservices from day 1. I find starting with a monolith, with an eye towards microservices, works well for most projects; as patterns and abstractions emerge, slowly design and provision microservices.
The pros do an RPC when a program that needs to run on one SKU needs something from a program that needs a different SKU. Otherwise, you don’t do the RPC.<p>Microservice or monolith? Hmm, would I like an F1 car or a tank? If it fucking matters, you know which one you need.
Discussed at the time:<p><i>Give Me Back My Monolith</i> - <a href="https://news.ycombinator.com/item?id=19382765" rel="nofollow">https://news.ycombinator.com/item?id=19382765</a> - March 2019 (411 comments)
The lack of a stack trace alone should be a hard blocker to a microservice migration.<p>After that, the amount of CPU (i.e. dollars) and wall time wasted on encode-decode.<p>Anyone who does a microservice migration is not accounting for the above two costs.
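A rough way to feel the second cost, sketched in Python with an invented payload (not a benchmark, and it ignores the network entirely): the same data accessed in-process versus round-tripped through JSON, which is what every extra service hop adds.<p><pre><code>import json
import timeit

payload = {"order_id": "o-1",
           "items": [{"sku": str(i), "qty": i} for i in range(100)]}


def in_process(p):
    return p["items"][0]["sku"]


def via_json(p):
    # what each hop in a microservice chain adds: encode, then decode
    return json.loads(json.dumps(p))["items"][0]["sku"]


print("in-process:    ", timeit.timeit(lambda: in_process(payload), number=10_000))
print("encode/decode: ", timeit.timeit(lambda: via_json(payload), number=10_000))
</code></pre>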
Engineers don't want to admit it, but microservices are a form of busywork. What used to be a few lines of native API code now requires: RPC API boilerplate CRUD code, a build, a deployment, CI, dependency management, etc.<p>Not to mention the additional complexity: now every "service" needs an LB.<p>You've converted a simple one-person job into multiple days for many people.<p>Microservices are a scam.
This is about scale. Don't make things a certain way because someone tells you to do it. Collect metrics and use science to figure out what is best for you and your team today. The engineers building and using these things have to be reasonably happy with their tools or you get teenage behaviour.
To be fair, most microservice setups and tutorials have been overly complicated; however, let’s agree that distributed workloads and architectures are superior in a number of ways.<p>Generally, when people discuss going back to the “monolith” they just haven’t found the right distributed architecture.
If you have a simple CRUD system, a monolith is likely preferable. If your business domain has a lot of complexity, which you can discover through Event Storming, then breaking up the monolith will provide clear development criteria and much simpler maintenance.
Then boring technologies are not that bad after all. IMHO, in the end it’s about facing changes quickly, and if you have the tools and systems to do that, then good!
This is a bullshit article, sorry for the bad words, but it's hard for me to keep using a monolith for any reason. The monolith is the way to hell. It kills developer productivity and team velocity by a wide margin.
If you use the wrong tools, everything looks like a disaster.<p>Microservices on AWS with Lambda+DynamoDB+Aurora+API Gateway work very well. There's built-in transparency. There is logging. You can set everything up with Terraform.<p>Terraform makes the setup trivial. Compared to monoliths that I've seen that involve a lot of brittle manual steps, it's not even a competition. AWS Lambda with X-Ray and other logging tools makes tracking down errors trivial. I have yet to see a monolith with anything comparable.<p>"Oh but I get a stack trace in my monolith" is false advertising. How useful is that stack trace when the stack is corrupted? Or when memory is corrupted because one line in one part of the monolith has an error that slowly screwed up some data structure in another part? I'll take Lambdas that are all short, totally isolated, and easy to understand, any day. Debugging and understanding is much harder with a monolith.<p>And yes. To test, you need to bring up the entire working application. Just like you need to bring up the monolith. Oh? You mean, most people who test monoliths don't bring them up, they just test some mocked version of some module in isolation? Well, they're probably testing the testing framework itself more than the monolith. With LocalStack you can bring up the entire AWS setup locally, automatically, and run component and end-to-end tests. It's far more testable than a monolith. And far more obvious when an interaction is not tested.<p>Monoliths are dead. Stop writing them. And start learning modern tooling.
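For what "short, totally isolated" looks like in practice, here is a hedged Python sketch of a single Lambda behind API Gateway writing to DynamoDB; the table name, environment variable, and event shape are assumptions, not taken from the comment, and boto3 is assumed to be provided by the Lambda runtime.<p><pre><code>import json
import os

import boto3

TABLE = boto3.resource("dynamodb").Table(os.environ.get("TABLE_NAME", "orders"))


def handler(event, context):
    """Handle one API Gateway request: validate, write one item, respond."""
    body = json.loads(event.get("body") or "{}")
    if "order_id" not in body:
        return {"statusCode": 400,
                "body": json.dumps({"error": "order_id required"})}
    TABLE.put_item(Item={"order_id": body["order_id"],
                         "total": body.get("total", 0)})
    return {"statusCode": 200, "body": json.dumps({"ok": True})}
</code></pre>
Each function stays about a page long and can be exercised against LocalStack in the way the comment describes.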