Software would be 50% better if every developer understood Tesler's Law:<p><i>"Complexity can neither be created nor destroyed, only moved somewhere else."</i><p>The drive to simplify is virtuous, but people are often blind to the complexity it adds elsewhere.<p>Okay, so your microservices are each very simple, but that made the interactions and resource provisioning very complex. What was the net gain?<p>The correct solution depends on the circumstances. There are excellent uses of microservices. There are excellent uses of monoliths. There are excellent uses of monorepos. There are excellent uses of ... (wait never mind monorepos are just better).<p>Understand what is ESSENTIAL complexity and what is INCIDENTAL complexity. Simplify your system to remove as much incidental complexity as possible until you are left with the essential, and be satisfied with that.
A good rule of thumb is that if you’re starting a new project and immediately implement microservices, you’ve already failed.<p>Never seen it work, don’t think I ever will. The only way microservices can ever work is if they’re extracted over time as the monolith gets fleshed out, likely over the course of years. And even then they’re probably a bad idea.
Can't we all just go back? Seriously, every system I've worked on in the last 10 years seems worse by every metric than what I worked on from 2000-2010.
HN could be a little less pessimistic. People aren't choosing microservices merely because of the hype.<p>Here's why I'd choose microservices for a large project:<p>1. People don't produce uniform code quality. With microservices, the damage is often contained.<p>2. Every monolith becomes riddled with exceptional cases within a few years. Only a few people know about those corner cases, and the company becomes dependent on those developers.<p>3. It's easier for junior developers to start contributing. With a monolith you'd need to be rigid with code reviews, whereas you could be a little lax with microservices. Again, ties into (1) above. This also allows a company to hire faster.<p>4. Different modules have different performance and non-functional requirements. For example, consider reading a large file. You don't want such an expensive process to compete for resources with, say, a product search flow. Even with a monolith, you wouldn't do this - you'd make an exception. In a few years, the monolith is full of special cases which only a few people know about. When those employees leave, the project sometimes stalls and code quality drops. Related to (2).<p>5. Microservices have become a lot easier thanks to k8s and docker. If you think about it, microservices were becoming popular even before k8s became mainstream. If they were viable then, they're a lot easier today.<p>6. It helps with organizing teams and assigning responsibility.<p>7. You don't need super small microservices. A microservice could very well handle all of a module - say all of payments (payment processing, refunds, coupon codes etc.), or all of authentication (OAuth, MFA etc.).<p>8. Broken Windows Theory applies far more often to monoliths and much less to microservices. Delivery pressure is unavoidable in product development at various points, which means that you'll often make compromises. Once you start making these compromises, people will keep making them more often.<p>9. It gives you the agility to choose a more efficient tech/process when available. Monoliths are rigid in tech choices and don't easily allow you to adopt a different programming language or framework. With microservices, you could choose the stack that best solves the problem at hand. In addition, this allows a company to scale up the team faster.<p>Add:<p>10. It's difficult to fix schemas, contracts and data structures once they're in production. Refactoring is easier with microservices, given that the implications are local compared to monoliths.
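To make (7) and (10) concrete, here is a minimal Go sketch of a coarse-grained "payments" service that owns its whole module behind one narrow HTTP contract. Callers only depend on the JSON shape, so the internal data structures and storage can be refactored locally. All routes, fields and names are illustrative assumptions, not taken from any particular system.

```go
// A single coarse-grained payments service: processing, refunds and coupons
// all live behind this one narrow contract rather than in separate services.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// ChargeRequest is the public contract; keep it small and versioned.
type ChargeRequest struct {
	OrderID     string `json:"order_id"`
	AmountCents int64  `json:"amount_cents"`
	CouponCode  string `json:"coupon_code,omitempty"`
}

type ChargeResponse struct {
	PaymentID string `json:"payment_id"`
	Status    string `json:"status"`
}

func charge(w http.ResponseWriter, r *http.Request) {
	var req ChargeRequest
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, "bad request", http.StatusBadRequest)
		return
	}
	// Internal logic, schemas and storage live behind this handler and can be
	// refactored freely; other services only ever see the JSON above.
	_ = json.NewEncoder(w).Encode(ChargeResponse{PaymentID: "p_123", Status: "accepted"})
}

func main() {
	// Refunds, coupons, etc. would be additional routes on this same service,
	// not additional microservices.
	http.HandleFunc("/v1/charges", charge)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```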
I think the first step toward sanity is to stop factoring services by team size - “we have 100 people, so we need 20 microservices”.<p>Instead, factor services along natural fault lines: areas of the solution that scale differently from the rest and can tolerate communicating over HTTP or a message queue.<p>It is fine to have lots of people work on a single service. We compose things using 3rd party libraries all the time. Just treat internal code a bit more like 3rd party libraries.
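As a sketch of what such a fault line can look like in code (all names are made up, and the in-memory channel is only a stand-in): the expensive, differently-scaling work sits behind a queue-shaped seam, so it can later move to a real broker or its own service without touching any caller.

```go
// Package ingest holds the one part of the system that scales differently
// (bulk file processing). The rest of the codebase talks to it only through
// the queue-shaped seam below.
package ingest

import "context"

// Job is the message that crosses the fault line.
type Job struct {
	FileURL string
}

// Queue is all that callers elsewhere in the monolith are allowed to know.
type Queue interface {
	Enqueue(ctx context.Context, j Job) error
}

// ChanQueue is an in-process stand-in; swapping it for a real broker (or a
// separate service) later does not change any caller that holds a Queue.
type ChanQueue struct{ jobs chan Job }

func NewChanQueue(buffer int) *ChanQueue {
	return &ChanQueue{jobs: make(chan Job, buffer)}
}

func (q *ChanQueue) Enqueue(ctx context.Context, j Job) error {
	select {
	case q.jobs <- j:
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

// Dequeue is what the worker side calls; with a broker this becomes a subscription.
func (q *ChanQueue) Dequeue(ctx context.Context) (Job, error) {
	select {
	case j := <-q.jobs:
		return j, nil
	case <-ctx.Done():
		return Job{}, ctx.Err()
	}
}
```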
Werner's advice in my own words:<p>1. Don't pick an architecture because it's all the rage right now.<p>2. Don't pick an architecture that mimics your org's structure. Aka don't fall prey to Conway's law: <a href="https://en.wikipedia.org/wiki/Conway%27s_law" rel="nofollow">https://en.wikipedia.org/wiki/Conway%27s_law</a>.<p>3. Don't pick an architecture that your team can't operationalize--e.g. due to lack of skills or due to business constraints.
Like most articles on distributed systems, this makes wild assumptions about the most important layer, i.e. the human one. I would bet $100 that this was written by the same sort of person who thinks "Managers – what do they do all day exactly?" [1]<p>> If you hire the best engineers....<p>Guess what, there is no broad consensus on what "best engineer" means. I bet your org is rejecting <i>really</i> good engineers right now because they don't know what Kubernetes is. The same goes for literally any other technology that has been part of a hype cycle (Java in 2001, Ruby on Rails in 2011, ML in 2011; no, the precise years don't matter).<p>> ...trust them to make the best decisions.<p>A <i>lot</i> of work is encapsulated there in less than ten words. If you hire a bunch of people and tell them "you are the best", do you think they are going to sit around and run the Raft protocol to reach consensus on how to architect the system? No, each of them is going to reinvent Kubernetes, and likely not in an amazing way.<p>Microservices are often best deployed when there is a mixture of cultural and engineering factors that makes it easy to split a system into parts that have interfaces with each other. It has little to do with the superiority of the technical architecture.<p>----------------------------------------<p>[1] Looks like the article was written by the CTO of Amazon, which...surprises me a bit. Then again, from all accounts, Amazon's not exactly known as the best place to work; so maybe I'm right? In any case, anything written by Amazon is not directly applicable to the vast majority of small-to-medium companies.
I can’t believe it’s news that someone said this. I thought everyone understood: you don’t try to do microservices until you <i>have to</i>. Before you get to that point you make your monolith modular enough that if you ever need microservices you’re prepared to break them out.
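One way to read "modular enough to break them out later" in code terms (a sketch under assumed names, not a prescription): every module is used only through a small interface, so extracting it into a service later means swapping the constructor for a remote client while its callers stay unchanged.

```go
// A modular monolith in one binary: each module hides behind a small
// interface and owns its own data, so any of them could later be broken out
// into a service without rewriting its callers. Names are illustrative.
package main

import (
	"context"
	"fmt"
)

// Billing is the only way the rest of the app may talk to billing code.
type Billing interface {
	Invoice(ctx context.Context, customerID string) error
}

// localBilling is the in-process implementation used while billing still
// lives inside the monolith.
type localBilling struct{}

func (localBilling) Invoice(ctx context.Context, customerID string) error {
	fmt.Println("invoicing", customerID) // real logic and billing-owned tables go here
	return nil
}

// If billing is ever extracted, only this constructor changes: it would
// return an HTTP/gRPC client satisfying the same Billing interface.
func newBilling() Billing { return localBilling{} }

func main() {
	b := newBilling()
	_ = b.Invoice(context.Background(), "cust-42")
}
```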
"Since its launch in 2006 with just a few microservices, (AWS) S3 has grown to over 300,"
I would love it if articles like this gave a bit more context on the size of the codebase and team.<p>Like, is it two pizzas per microservice or 10 microservices per one pizza?
There was a perceptive comment on HN a few days ago [1] to the effect that microservices are a useful way to package a body of code if the team that is consuming the code doesn't trust the team that built it. This brings to mind Conway's law - "Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization's communication structure" [2] - and also that meme from a few years back about the internal structure of tech companies [3]. So you can argue that monolith/non-monolith is not purely a tech-driven consideration.<p>[1] Too lazy to keep looking right now<p>[2] <a href="https://en.wikipedia.org/wiki/Conway%27s_law" rel="nofollow">https://en.wikipedia.org/wiki/Conway%27s_law</a><p>[3] <a href="https://www.reddit.com/r/ProgrammerHumor/comments/6jw33z/internal_structure_of_tech_companies/" rel="nofollow">https://www.reddit.com/r/ProgrammerHumor/comments/6jw33z/int...</a>
Microservices were a zero-interest-rate phenomenon that benefited no one but cloud service marketing teams.<p>As money becomes more expensive and as we inch further towards a massive economic crisis, companies that have allowed their R&D budgets to bloat out of proportion with needlessly distributed architectures are NGMI.
The monolith/microservice dichotomy is a red herring.<p>What even is a microservice?<p>There are other distinctions, borders and splits that are more important to consider.<p>Here's a koan that highlights a few of these considerations:<p>> If you run a kubernetes cluster on a single physical server, is it a monolith or is it a microservice architecture?
So how does any of that help me, when I'm seeing "microservices AWSGCPAzure EDA" job ads exclusively and not a single regular job ad anymore? It's all hypeshit shit shit.
Lots of false comparisons here.<p>It’s not “monolith vs distributed”.<p>It’s “good monolith vs bad monolith”, or “big ball of mud vs domain-driven design”.<p>It depends on what your primary domain is, the level of complexity, the number and makeup of enterprise integrations, and more.<p>Some monoliths are very bad.<p>Some distributed systems are very bad.<p>My rundown is:<p>- Is it a simple CRUD system? Go monolith.<p>- Otherwise, model it, identify bounded contexts, and proceed accordingly.
For many years to come, for better or for worse, people will point to the Prime team's blog post as the definitive proof that microservices are inferior. And instead of being read with perspective and nuance, it'll be used for absolutist arguments. I'm already tired of it in anticipation...
The whole debate is rather silly.<p>Just pick the right architecture for the given problem. Sometimes it's a monolith, sometimes it isn't. The end.
> My rule of thumb has been that with every order of magnitude of growth you should revisit your architecture, and determine whether it can still support the next order level of growth.<p>The last hyper-growth startup I worked at grew 10x in scale every 2-3 quarters for nearly 3 years at meaningfully large scale (millions of monthly transacting users). In that time, the number of different business lines/categories, the number of functional flows, and their intersecting/overlapping complexity also grew several-fold.<p>So we were adding whole new things, throwing away old things, and basically refactoring everything every 18 months. Without knowing it consciously, the superpower we had was our ability to refactor large live systems so well.
In hindsight it became clear to me that our ability to do this hinged on a few different things:<p>1. A critical mass of engineers at both senior and junior levels understood the whole system and its flows end to end. A lot of engineers stayed with their own team, developing strong functional-domain understanding. Similarly, a good number of senior engineers rotated across teams.<p>2. The devops culture was extreme – every team (of 10-12 engineers) managed all aspects (dev-qa-ops-oncall etc.) of building and operating their systems. This meant even very junior engineers were expected to know both the functional and non-functional characteristics of their systems. Senior engineers (5-10 yrs experience) were expected to evaluate new tech stacks, make choices for their team, and live with the consequences.<p>3. Design proposals were openly shared and critical feedback was actively sought. Technical peer reviews were rigorous. Engineers were genuinely curious to learn things, ask and understand things, challenge/debate things etc. First-principles thinking and a focus on actual end-to-end problem-solving, without territorialism or dogma, were strongly encouraged; the opposite was strongly discouraged.<p>4. Doing live migrations – we mastered the art of safely live-migrating services whose API schema or implementation was changing, and datastores whose schema or underlying tech was changing. We went through a lot of different database tech migrations – from monolithic SQL DBs to NoSQL clusters to distributed SQL DBs, plus their equivalent in-memory DBs and caches.<p>Surprisingly, the things we didn't do so well but that didn't really hurt our ability to refactor safely were:<p>1. Documentation – we had informal whiteboard diagrams photographed and stuck in wiki pages. We didn't have reams and reams of documentation.<p>2. Tests – we didn't have formal, rigorous test coverage. We had end-to-end tests for load testing and a small manual QA team doing end-to-end integration testing for critical flows. These came about a bit later – and trying to scale them effectively proved very challenging – but they were not seen as hurdles for doing refactors.<p>3. Formal architecture councils and formal approval processes – we didn't have these. Instead we had strong person-to-person connections and strong team-level accountability – culturally, people owned up to their mistakes and did everything they could to fix things and do better next time. Humility was high.<p>Later, I worked at a large, mature company operating at very large scale – everything was exactly flipped on all the points above, major refactors were a serious pain, and migrations took forever and never actually completed. The contrast was very eye-opening, and in hindsight I realized the contributing factors above.
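To make point 4 above more concrete, here is a minimal sketch of one common shape a live datastore migration can take: dual-write to the old and new stores, read from the new one with a fallback to the old while a backfill runs, then cut over. The `Store` interface and all names are illustrative assumptions, not the commenter's actual design.

```go
// DualWrite sits behind the existing repository interface during a live
// datastore migration, so callers never notice which store serves them.
package migrate

import "context"

// Store is the seam both the old and new datastores are wrapped in.
type Store interface {
	Get(ctx context.Context, key string) (string, error)
	Put(ctx context.Context, key, value string) error
}

// DualWrite keeps both stores in sync while the backfill catches up.
type DualWrite struct {
	Old Store
	New Store
}

func (d DualWrite) Put(ctx context.Context, key, value string) error {
	// The old store stays the source of truth until cutover.
	if err := d.Old.Put(ctx, key, value); err != nil {
		return err
	}
	return d.New.Put(ctx, key, value)
}

func (d DualWrite) Get(ctx context.Context, key string) (string, error) {
	// Prefer the new store; fall back to the old one for keys not yet backfilled.
	if v, err := d.New.Get(ctx, key); err == nil {
		return v, nil
	}
	return d.Old.Get(ctx, key)
}
```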
For context, this is most likely in response to DHH’s post [1], where he came down heavily on AWS’s serverless offerings.<p>He’s been ranting against the cloud too.<p>[1] <a href="https://world.hey.com/dhh/even-amazon-can-t-make-sense-of-serverless-or-microservices-59625580" rel="nofollow">https://world.hey.com/dhh/even-amazon-can-t-make-sense-of-se...</a>