There are some web apps still in production that I wrote almost a decade ago in Node+Express in the simplest, dumbest style imaginable. The only dependencies are Express and some third-party API connectors. The database is an append-only file of JSON objects separated by newlines. When the app restarts, it reads the file and rebuilds its memory image. All data is in RAM.<p>I figured these toys would be replaced pretty quickly, but turns out they do the job for these small businesses and need very little maintenance. Moving the app to a new server instance is dead simple because there's basically just the script and the data file to copy over, so you can do OS updates and RAM increases that way. Nobody cares about a few minutes of downtime once a year when that happens.<p>There are good reasons why we have containers and orchestration and stuff, but it's interesting to see how well this dumb single-process style works for apps that are genuinely simple.
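For anyone who hasn't seen that style up close, here's a minimal sketch of the idea - an append-only JSON-lines file replayed into a dict at startup. It's sketched in Python rather than Node+Express, and the file name and record shape are made up, but the pattern is the same:<p><pre><code>import json, os

DATA_FILE = "events.jsonl"   # made-up path: the append-only log
state = {}                   # the entire "database", rebuilt in RAM

def load():
    # On startup, replay the log to rebuild the in-memory image.
    if not os.path.exists(DATA_FILE):
        return
    with open(DATA_FILE) as f:
        for line in f:
            if line.strip():
                record = json.loads(line)
                state[record["id"]] = record

def save(record):
    # Append one JSON object per line; the file is never rewritten.
    with open(DATA_FILE, "a") as f:
        f.write(json.dumps(record) + "\n")
    state[record["id"]] = record

load()
save({"id": "42", "name": "example customer"})
</code></pre>
Moving to a new server really is just copying the script and the log file, which is why the migration story above is so painless.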
I was interviewing for software jobs recently, and while I was studying up on the "system design" portion I kept circling around the same insight that Dan Luu writes about so well here.<p>I would sit down at an interview and try to create these "proper" system designs with boxes and arrows and failovers and caches and well-tuned databases. But in the back of my mind I kept thinking, "didn't Facebook scale to a billion users with PHP, MySQL, and Memcache?"<p>It reminds me of "Command-line Tools can be 235x Faster than your Hadoop Cluster" at <a href="https://adamdrake.com/command-line-tools-can-be-235x-faster-than-your-hadoop-cluster.html" rel="nofollow">https://adamdrake.com/command-line-tools-can-be-235x-faster-...</a> , and the occasional post by <a href="https://rachelbythebay.com/w/" rel="nofollow">https://rachelbythebay.com/w/</a> where she builds a box that's just fast, with very basic tooling (and a lot of know-how).
I think the biggest problem for most developers is not understanding what one computer can actually do and how reliable computers are in practice.<p>Additionally, understanding how tolerant 99% of businesses are to real-world problems that could hypothetically arise can keep one from agonizing over insane edge cases. I suspect a non-zero number of us have spent time thinking about how we could provide deterministic guarantees of uptime that even unstoppable cosmic radiation or regional nuclear war couldn't interrupt.<p>I genuinely hope that the recent reliability issues with cloud & SaaS providers have really driven home the point that a little bit of downtime is almost never a fatal issue for a business.<p>"Failover requires manual intervention" is a <i>feature</i>, not a caveat.
The vast, vast, vast majority of organizations don't need microservices, don't need half of the products they bought and now have to integrate into their stack, and are simply looking to shave their yak to meet the bullet list of "best practices" for year 202X. Service-oriented architectures and microservices solve a particular problem for companies that operate at massive scale and can invest in (read: waste money on) teams devoted to tooling. What most companies should do is build a monolith that makes money, but hire good software engineers who can write packages, modules, whatever, with high cohesion and loose coupling, so that <i>one day</i>, when you become the next Google, it will be less of a pain to break it into services. But in the end it really doesn't matter if it's painful anyway, because you'll have the money to hire an army of people to do it while the original engineers take their stock and head off to early retirement.
I think especially for small teams starting out, complex architecture can be a huge trap.<p>Our architecture is extremely simple and boring - it would probably be more-or-less recognizable to someone from 2010: a single Rails MVC app, 95+% server-rendered HTML, and really only a smattering of Javascript. (Some past devs did some stuff with Redshift for certain data that was a bad call - we're in the process of ripping that out and going back to good old Postgres.)<p>Our users seem to like it though, and talk about how easy it is to get set up. Looking at the site, the interactions aren't all that different from what we would build if we were using a SPA. But we're just 2 developers at the moment, and we can move faster than much larger teams simply because there's less stuff to contend with.
In terms of the choices they're unsure about, I'd say it's best to stay away from Celery / RabbitMQ if you don't really need it. For us, just using RQ (a Redis-backed queue) has been a lot less hassle. Obviously it's all going to depend on your scale, but it's a lot simpler.<p>Re: the SQLAlchemy concern, you do need to decide where your transactions are going to be managed from and have a strict rule about not allowing functions to commit / roll back themselves. Personally I think SQLAlchemy is a great tool; it saves a lot of boilerplate code (and data modelling and migrations are a breeze).<p>But overall the sentiments in this article resonate with my experience.
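To make the transaction rule concrete, here's one common shape for it in SQLAlchemy - a single session_scope context manager that owns commit/rollback, while business functions only ever add to the session. The Order model and the SQLite URL here are placeholders; swap in your real models and Postgres DSN:<p><pre><code>from contextlib import contextmanager

from sqlalchemy import Column, Float, Integer, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Order(Base):  # placeholder model, just for the example
    __tablename__ = "orders"
    id = Column(Integer, primary_key=True)
    user_id = Column(Integer)
    total = Column(Float)

engine = create_engine("sqlite:///:memory:")  # stand-in for your real DSN
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

@contextmanager
def session_scope():
    # The single place that is allowed to commit or roll back.
    session = Session()
    try:
        yield session
        session.commit()
    except Exception:
        session.rollback()
        raise
    finally:
        session.close()

def create_order(session, user_id, total):
    # Business code receives a session and adds to it; it never commits itself.
    order = Order(user_id=user_id, total=total)
    session.add(order)
    return order

with session_scope() as session:
    create_order(session, user_id=1, total=9.99)
</code></pre>
The nice property is that a function like create_order can be composed into a bigger unit of work without accidentally committing halfway through.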
I don't know how the author can claim that they run a "simple" architecture.<p>From their job pages:<p>Our stack:<p><pre><code> backend: Python 3 (+ mypy)
API layer: GraphQL
android frontend: Kotlin/Jetpack
iOS frontend: Swift/SwiftUI
web frontend: TypeScript/React
database: Postgres
infrastructure: GCP / Terraform
orchestration: Kubernetes
</code></pre>
That is <i>not</i> simple by any stretch of the imagination.
Nah, I don't much like the tone of this article. Not at all.<p>The engineering message should be: keep your architecture as simple as possible. And here are some ways to find that minimal and complete size-2 outfit foundation inside your size-10 hoarder track-suit eyesore.<p>Do we really need to be preached at with a warmed-over redo of "X cut it for me as a kid, so I really don't know why all the kids think their newfangled Y is better"? No, we don't.<p>If you have stateless, share-nothing events, your architecture should be simple. Should or could you have stateless share-nothing even if that's not what you have today? That's where we need to be weighing in.<p>Summary: less old-guy whining/showing-off and more education. Thanks. From the Breakfast Club kids.
At the risk of making an ad-hominem attack, I found this website unreadable.<p>Minimalism is fine. But there comes a point when there's so little, it is nothing. danluu.com is a bucket of sand facing an overbuilt cathedral.
It just boils down to not optimising until you need to. Start with a 3-tier web app (unless your requirements lead you to another solution), then add read replicas, load balancing, sharding, Redis/RabbitMQ, etc. as you actually need them.
I understand his point, but I actually think micro-services can be simpler than monoliths.<p>Even for his architecture, it sounds like they have an API service, a queue and some worker processes. And they already have Kubernetes, which means they must be wrapping all of that in Docker. It seems like a no-brainer to me to at least separate out the code for the API service from the workers so that they can scale independently. And depending on the kind of work the workers are doing, you might separate those out into a few separate code bases. Or not - I've had success on multiple projects where all jobs are handled by a set of workers that have a massive `switch` statement on a `jobType` field (sketched below).<p>I think there is some middle ground between micro-services and monoliths where the vast majority of us live. And in our minds we're creating these straw-man arguments against architectures that rarely exist. Like a literal single app running on a single machine vs. a hundred independent micro-services stitched together with ad-hoc protocols. Micro-services vs. monoliths is actually a gradient, and we rarely exist at either ludicrous extreme.
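The "one pool of workers, one switch on jobType" setup is roughly this shape - a Python 3.10+ sketch with made-up job types, standing in for whatever queue consumer actually feeds it:<p><pre><code>import json

def send_email(payload):
    print("sending email to", payload["to"])

def resize_image(payload):
    print("resizing", payload["path"])

def handle(job):
    # One dispatch point: the jobType field decides which handler runs.
    match job["jobType"]:
        case "send_email":
            send_email(job["payload"])
        case "resize_image":
            resize_image(job["payload"])
        case _:
            raise ValueError("unknown jobType: " + job["jobType"])

# In production this would sit inside a queue consumer loop;
# here we just feed it a couple of inline messages.
for raw in (
    '{"jobType": "send_email", "payload": {"to": "a@example.com"}}',
    '{"jobType": "resize_image", "payload": {"path": "in.png"}}',
):
    handle(json.loads(raw))
</code></pre>
Scaling it is just running more copies of the same worker, and splitting it into separate services later is mostly a matter of moving cases out of the dispatch.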
So, I definitely agree with this. Most of us don't have to do anything at FAANG scale. But what counts as simple?<p>It's quite easy these days to deploy an app using AWS Lambda, DynamoDB, SNS, etc., all with a single CloudFormation template. Is that simple? In one sense I've abstracted away a lot of the operational work that comes with self-hosting, but now I've intertwined (Rich Hickey might say <i>complected</i>) myself into Amazon's ecosystem.<p>Also, is a document store like DynamoDB, MongoDB, etc., simpler than a relational database like Postgres? On the one hand, a document database's interface is very simple compared to the complexity of SQL. On the other, that simplicity is generally considered a necessary sacrifice for scale. If you don't need to scale, why make the sacrifice?<p>Also, there can be simple things that are better at scaling. Elixir is a very nice scripting language like Ruby or Python, but it also has much better performance scaling (comparable with NodeJS or Go).
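On the document-store-vs-relational question above, the interface difference being pointed at looks roughly like this - a plain dict standing in for the document store and SQLite standing in for the relational side, purely as a toy contrast:<p><pre><code>import sqlite3

# Document-store-style interface: put and get by key, no query language.
doc_store = {}
doc_store["user:1"] = {"name": "Ada", "orders": [{"total": 9.99}]}
print(doc_store["user:1"]["orders"])

# Relational interface: a schema and SQL up front, but ad-hoc queries for free.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("CREATE TABLE orders (user_id INTEGER, total REAL)")
db.execute("INSERT INTO users VALUES (1, 'Ada')")
db.execute("INSERT INTO orders VALUES (1, 9.99)")
total, = db.execute(
    "SELECT SUM(total) FROM orders"
    " JOIN users ON users.id = orders.user_id"
    " WHERE users.name = 'Ada'"
).fetchone()
print(total)
</code></pre>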
> GraphQL libraries weren’t great when we adopted GraphQL (the base Python library was a port of the Javascript one so not Pythonic, Graphene required a lot of boilerplate, Apollo-Android produced very poorly optimized code)<p>What do people use instead of Graphene? Strawberry?
Simple architectures work well, until they don't. A good example is ye olde Ruby on Rails monolith: dead simple to set up and iterate on quickly, but once you reach a certain organization and/or codebase size, velocity starts to degrade exponentially.
How far can you get with a single Postgres instance on a single machine? I know things like CockroachDB and Citus exist, but plain Postgres generally isn't sharded, as far as I know.
> <i>one major African market requires we operate our “primary datacenter” in the country</i><p>What country could that be? That sounds challenging.
What's the year on this? Anybody know?<p>Normally I check the Internet Archive, but <a href="https://web.archive.org/web/*/https://danluu.com/simple-architectures/" rel="nofollow">https://web.archive.org/web/*/https://danluu.com/simple-arch...</a>.
This doesn't sound very simple at all. It's a single codebase that handles all your mobile API interactions, authentication, account management, presumably usage tracking and notifications, and all your offline processing, all interacting with a single database and queue infrastructure? And that same codebase marshals all that through a GraphQL API and implements a custom data protocol?<p>And you're calling that <i>simple</i>?<p>I've worked on monolithic codebases, and the one thing none of them have ever been is simple. They have complex interdependencies (oh hey, like database transaction scopes); they have that 'one weird way of doing things' that affects every part of the system (like, 'everything has to be available over GraphQL')...