Great write-up. Note that the author is only serving 70 requests per second at peak with 15 thousand registered users and 3 thousand in revenue per month. This just shows that you don't always have to plan to scale to thousands (or millions) of requests per second.<p>This blue-green deployment workflow reminds me of a similar setup used by the creator of SongRender[1], which I found out about via the Running in Production podcast[2]. One thing to be aware of with smaller VPS providers like Linode, DO, and Vultr is that they charge per hour rather than per second. So if you boot up a new VM every time you deploy, you're charged for a full hour each time.<p>[1] <a href="https://jake.nyc/words/bluegreen-deploys-and-immutable-infrastructure-with-terraform/" rel="nofollow">https://jake.nyc/words/bluegreen-deploys-and-immutable-infra...</a><p>[2] <a href="https://runninginproduction.com/podcast/83-songrender-lets-you-create-audio-visualizer-videos-from-audio-clips" rel="nofollow">https://runninginproduction.com/podcast/83-songrender-lets-y...</a>
I think we can safely put docker images (not k8s) in the "boring technology" category now. You don't need k8s or anything really. I like docker-compose because it restarts containers. Doesn't need to be fancy.
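For anyone curious what "not fancy" looks like, a minimal compose file with a restart policy is roughly this (an illustrative sketch, not the article's setup; service and image names are made up):

    # docker-compose.yml -- hypothetical single-service app
    services:
      web:
        image: myapp:latest            # placeholder image name
        restart: unless-stopped        # compose restarts the container if it crashes
        ports:
          - "127.0.0.1:8080:8080"      # bind locally; a reverse proxy fronts it

docker compose up -d starts it, and the restart policy covers crashes (and, with unless-stopped, reboots) without any orchestrator.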
When you're a one-man show, you need to save both your own time and compute costs.<p>To save your time, use the simplest thing you know how to use. Whatever you can set up easily and gets you serving a hello world on the internet.<p>To save compute, just don't do stupid things. In particular, you should consider what happens if this project <i>isn't</i> a runaway success. One day, you might want to leave it running in 'maintenance mode' with just a few thousand users. For that, you'd rather be running on a $10/month VPS than a $2000/month cloud setup which is going to require constant migrations etc.<p>Things like automatic failover and load balancing are seldom required for one-man shows: the failure rate of the cloud provider's hardware will be much lower than your own failure rate from screwing up or being unavailable for some reason.
I'm secretly a fan of the boring-tech stance. Sometimes all of the new containerization and microservice paradigms just feel like an excuse to overengineer everything. Running solo and starting from scratch (no code, no infra, no cloud subscriptions) means you'll have to simplify and reduce moving parts.
Just a small comment: blue/green usually implies some sort of load balancing, whereas here OP is just flipping a switch that changes a hostname and swaps the blue/green roles between staging and production.<p>Nothing wrong with that, though, and part of its genius is how simple it is.
For what it is worth, I am handling about 130k views and registering ~1k paying users per day with a t2.large instance running node, redis and nginx behind a free-tier Cloudflare proxy, a db.t2.small running postgres 14, plus CloudFront and S3 for hosting static assets.<p>Everything is recorded in the database, and a few pg_cron jobs aggregate the data for analytics and reporting purposes every few minutes.
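For reference, scheduling that kind of rollup with pg_cron is one statement per job. A hedged sketch with made-up table and job names (the real jobs obviously differ):

    -- Roll raw events up into an hourly analytics table every 5 minutes
    SELECT cron.schedule(
      'rollup-page-views',        -- hypothetical job name
      '*/5 * * * *',              -- standard cron syntax
      $$
        INSERT INTO page_views_hourly (hour, views)
        SELECT date_trunc('hour', viewed_at), count(*)
        FROM page_views
        WHERE viewed_at >= now() - interval '1 hour'
        GROUP BY 1
        ON CONFLICT (hour) DO UPDATE SET views = EXCLUDED.views;  -- assumes a unique index on hour
      $$
    );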
I love this simple setup. Big fan. I also do everything simple. Add more workers? Just enable another systemd service, no containers. Let the host take care of internal networking. The biggest cost is probably a managed db, but if you are netting $$ and want some convenience, why not?
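For anyone who hasn't tried the systemd route, a worker is just a small unit file; here's a hedged sketch with placeholder paths and names (templated so you can run several instances):

    # /etc/systemd/system/myapp-worker@.service  (hypothetical paths/names)
    [Unit]
    Description=myapp background worker %i
    After=network.target

    [Service]
    User=myapp
    WorkingDirectory=/opt/myapp
    ExecStart=/opt/myapp/bin/worker
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

Adding a worker is then systemctl enable --now myapp-worker@2, no containers involved.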
Using two servers, one for production and the other for staging/failover, then switching upon release is a neat technique.<p>Been using it for our API backends for about ten years.
I'm doing about 1M database writes per day. DB is sqlite3, server is a Hetzner instance + extra storage that costs about $4 / month total.<p>Computers are fast.
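A couple of pragmas are usually all SQLite needs at that volume. A sketch of the kind of settings and batching that make ~1M writes/day trivial (not the commenter's actual code; the table name is made up):

    import sqlite3

    conn = sqlite3.connect("app.db")
    # Write-ahead logging: readers don't block the writer
    conn.execute("PRAGMA journal_mode=WAL")
    # Fewer fsyncs; still a reasonable durability trade-off with WAL
    conn.execute("PRAGMA synchronous=NORMAL")

    # Assumes an events(ts, path) table already exists
    rows = [("2024-01-01T00:00:00", "/signup")] * 1000
    with conn:  # one transaction for the whole batch instead of a commit per row
        conn.executemany("INSERT INTO events (ts, path) VALUES (?, ?)", rows)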
Fascinating.<p>I have the exact same number of visits and run the site with a boring PHP/MySQL setup on a cheap but very reliable €200/year shared host. Deployment via git over ssh as well.
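For anyone who hasn't wired this up: git-over-ssh deploys are usually just a bare repo on the server plus a post-receive hook, something like this sketch (paths and branch are placeholders):

    #!/bin/sh
    # ~/repos/site.git/hooks/post-receive  (hypothetical path)
    # Check the pushed branch out into the web root on every push
    GIT_WORK_TREE=/var/www/site git checkout -f main

Locally you add the server as a remote and git push deploys; rolling back is just pushing the previous commit.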
Not gonna lie, I was triggered by the lack of CI/CD and the shared database between staging and prod, but those concerns were very satisfactorily addressed. I'd miss some form of CI/CD if it were a team, but I suppose that for a single-person show, running tests locally is enough.<p>I do miss any mention of infrastructure as code. If shit goes tits up with the infrastructure and everything was set up by clicking around in control panels, ad-hoc commands and maybe a couple of scripts, your recovery time will be orders of magnitude longer than it has to be.
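Even a tiny Terraform file captures most of what the control-panel clicking would otherwise hide. A hedged sketch of what the two-droplet-plus-floating-IP shape might look like on DigitalOcean (names, region and sizes are assumptions, not the author's actual infra):

    # main.tf -- illustrative only
    terraform {
      required_providers {
        digitalocean = { source = "digitalocean/digitalocean" }
      }
    }

    resource "digitalocean_droplet" "app" {
      count  = 2                          # blue and green
      name   = "app-${count.index}"       # hypothetical naming
      region = "fra1"
      size   = "s-1vcpu-2gb"
      image  = "ubuntu-22-04-x64"
    }

    resource "digitalocean_floating_ip" "prod" {
      region = "fra1"
    }

    # Point production traffic at whichever droplet is currently live
    resource "digitalocean_floating_ip_assignment" "prod" {
      ip_address = digitalocean_floating_ip.prod.ip_address
      droplet_id = digitalocean_droplet.app[0].id
    }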
Maybe I'm missing something here, but what are the advantages of having two identical servers with a floating IP that switches between them instead of just running two instances of the app on the same server and switching between them by editing the nginx proxy config?
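For the single-server variant you're describing, the switch would just be an upstream edit plus a reload; an illustrative sketch (ports and names are assumptions):

    # /etc/nginx/conf.d/app.conf -- hypothetical single-host blue/green
    # "blue" listens on :8001, "green" on :8002
    upstream app_live {
        server 127.0.0.1:8001;   # flip to 8002 and run `nginx -s reload` to switch
    }

    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_pass http://app_live;
            proxy_set_header Host $host;
        }
    }

The main thing the floating-IP/two-server approach adds over this is a second machine, which doubles as the staging/failover box other commenters mention.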
Oh my all the opinions again. Software is not, believe it or not, a True or False game. TIMTOWTDI, folks. This guy rocks a solid process that he is comfortable with and that works. I for one applaud him for it.
If it were mine, I would most certainly opt for Ansible or something similar; the overhead of logging into a machine and doing everything by hand is more complicated and error prone than a playbook would be (at least for me, since I keep forgetting all the steps ^^).<p>But who are we to judge; earning 2.5k every month with it is impressive, kudos.
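A playbook covering this kind of setup really is short; a hedged sketch with assumed host group, repo and service names:

    # site.yml -- illustrative, not the author's setup
    - hosts: web
      become: true
      tasks:
        - name: Install runtime packages
          ansible.builtin.apt:
            name: [nginx, git]
            state: present
            update_cache: true

        - name: Check out the application
          ansible.builtin.git:
            repo: git@example.com:me/app.git
            dest: /opt/app
            version: main

        - name: Restart the app service
          ansible.builtin.systemd:
            name: app
            state: restarted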
This is pretty close to my current (hypothetical) plan for how I'd stand up a small full-stack app as a solo dev. The only thing I didn't think about was blue/green deployment, which sounds great. Glad to see a real-world case study showing that the overall strategy works well.
> The trick is to separate the deployment of schema changes from application upgrades. So first apply a database refactoring to change the schema to support both the new and old version of the application<p>How do you deploy a schema that serves both old and new versions? Anyone got any resources on this?
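The usual name for this is the expand/contract (or parallel change) pattern from the Refactoring Databases book: each migration only adds things the old code can ignore, and destructive changes wait for a later deploy once no running version needs the old shape. A hedged SQL sketch for renaming a column that way (table and column names are made up):

    -- Deploy 1 (expand): add the new column; the old app keeps using "username"
    ALTER TABLE users ADD COLUMN display_name text;
    UPDATE users SET display_name = username WHERE display_name IS NULL;
    -- The new app version reads display_name and writes both columns
    -- (or a trigger keeps them in sync) while old instances still run.

    -- Deploy 2 (contract): only after no old app version remains
    ALTER TABLE users DROP COLUMN username;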
I'm a strong advocate of boring technology too, but I'm also very much in favor of keeping things off my dev machine. With this setup, you have to run ssh and git and then run a script to switch endpoints.<p>My current boring system for a simple Rails app (<a href="https://getbirdfeeder.com" rel="nofollow">https://getbirdfeeder.com</a>) is that I push to GitLab, the CI builds a docker image and does the ssh & docker-compose up -d dance. That way I can deploy / roll back from anywhere even without my computer, as long as I can log into GitLab (and maybe even fix things with their web editor). That seems a lot "more boring" to me, and having the deploy procedure codified in a .gitlab-ci.yml acts as documentation too.
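For the curious, the interesting part of that pipeline fits in a few lines of .gitlab-ci.yml; a hedged sketch with placeholder host and image names (registry login and SSH key setup via CI variables are omitted for brevity, and this is not Birdfeeder's actual config):

    # .gitlab-ci.yml -- illustrative sketch
    stages: [build, deploy]

    build:
      stage: build
      image: docker:24
      services: [docker:24-dind]
      script:
        - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA .
        - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA

    deploy:
      stage: deploy
      image: alpine:3.19
      script:
        - apk add --no-cache openssh-client
        - ssh deploy@myserver.example "cd /opt/app && docker compose pull && docker compose up -d"
      environment: production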
I don't see why the author is so proud of avoiding tooling that would make their build and deploy process simpler. Even something like DigitalOcean's own buildkit-based "apps" product would be an upgrade here. Deploying your app using ssh and git is not magic or simple; it's just a refusal to learn how actual software delivery is done.<p>Even totally dodging docker/k8s/nomad/dagger or anything that's even remotely complicated, platforms like AWS/DO/Fly.io/Render/Railway/etc. obsolete this "simple" approach with nothing but a config file.<p>I also suspect the author is wasting a boatload of money serving ~0 requests on the staging machine almost all the time, since switching a floating IP means the staging box has to be specced like production rather than being a smaller, cheaper machine.