Please correct me if I am wrong, but I think the bulk of the cost must be coming from SQL Server. If the author migrated to MySQL hosted on cloud VMs, that cost might be reduced by 50-70%.
It sounds like most of the separate backend apps could be squeezed together onto a single host, or even into a single service. .NET has good threading and some quality runtime bits and bobs - cram everything together into a monolith (API, queue, front-end hosting, auth). Use your DB as the queuing service (see the sketch after this comment). Gogs/Gitea and Laravel demonstrate this “super cheap all in one” approach quite well. Turn network RPCs into in-process function calls. Trim down from three SQL databases to a single DB with multiple namespaces. Forget the Redis caching layer - at the 1,000-concurrent-user mark you don’t need it; try materialized views or expression indexes instead.

Not only will this cut the hosting costs to a quarter, but removing many of these DBs and caches will also make the service much easier to develop - which in turn makes it easier to open-source.
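To make the DB-as-queue point concrete, here is a minimal sketch of the pattern, assuming Postgres and a hypothetical "jobs" table (MySQL 8+ supports the same SKIP LOCKED clause); Python is used for brevity, though the same SQL works from .NET. SKIP LOCKED lets several workers poll one table without two of them claiming the same job.

    import psycopg2  # assumes Postgres; table and column names are illustrative

    conn = psycopg2.connect("dbname=app")  # hypothetical connection string

    def claim_next_job():
        # Atomically grab the oldest pending job; SKIP LOCKED makes
        # concurrent workers skip rows another worker already holds.
        with conn, conn.cursor() as cur:
            cur.execute("""
                SELECT id, payload FROM jobs
                WHERE status = 'pending'
                ORDER BY id
                LIMIT 1
                FOR UPDATE SKIP LOCKED
            """)
            row = cur.fetchone()
            if row is None:
                return None  # queue is empty
            job_id, payload = row
            cur.execute("UPDATE jobs SET status = 'running' WHERE id = %s",
                        (job_id,))
            return job_id, payload

At a few thousand concurrent users a polling loop like this is plenty, and it removes an entire extra service (Redis, RabbitMQ, etc.) from the bill.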
Title is wildly inaccurate. From the site author:
"
Yeah it turned out to be a bit optimistic :) The most traffic we had at once was when we were featured on the front page of the BBC, and there were a few thousand people browsing. We had to scale up briefly but everything ran smooth."
What is the best way to test load and pricing like this?

In my mind, the way I would do it is run the server and then hit it with a load tester (i.e. a stream of queries simulating the desired number of users). That tells us whether it can withstand the load. Then I would see how much it costs after 5 minutes of this test and multiply that by 7200 (roughly 25 days' worth of five-minute intervals). It just feels primitive and naive; there must be a better way. Google surely can't simulate its real-world service loads like this, so a better method must exist. (A sketch of the naive version follows.)
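For reference, a minimal sketch of the naive approach described above, using only the Python standard library and assuming a hypothetical endpoint at http://localhost:8080/: run N simulated users for five minutes and count completed requests; the cloud bill for the same window is then the number to extrapolate.

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    TARGET = "http://localhost:8080/"  # hypothetical endpoint under test
    USERS = 100                        # simulated concurrent users (one thread each)
    DURATION = 300                     # the 5-minute window from above

    def user_loop(deadline):
        done = 0
        while time.time() < deadline:
            try:
                with urllib.request.urlopen(TARGET, timeout=10) as resp:
                    resp.read()
                    done += 1
            except Exception:
                pass  # failed requests don't count toward throughput
        return done

    deadline = time.time() + DURATION
    with ThreadPoolExecutor(max_workers=USERS) as pool:
        totals = pool.map(user_loop, [deadline] * USERS)
    print("requests completed in 5 minutes:", sum(totals))

In practice, dedicated tools like wrk, k6, or Locust do this job better (latency percentiles, ramp-up schedules), and the usual refinement on the "primitive" version is to replay recorded production traffic rather than hammering one URL with uniform GETs.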