Hi! I’m Todd, the solopreneur founder of Prerender.io, and I created that $1,000,000/year AWS bill. I sold Prerender.io to Saas.group in 2020, and the new team has done an incredible job growing and changing Prerender since I left.<p>A $1M-per-year bill is a lot, but the Prerender back end is extremely write-heavy. It’s constantly loading URLs in Chrome to update the cached HTML so that the HTML is ready for sub-second serving to Google and other crawlers.<p>As a solo founder with a profitable product that was growing organically every month, I really didn’t have the time to personally embark on a big server migration with a bunch of unknown risks (I had never run any bare-metal servers before). So the architecture was set early on, and AWS gave me the flexibility to keep scaling while I focused on the rest of the business.<p>For a little more context on what went into that $1M bill: I was running 1,000+ EC2 spot instances running Chrome browsers (PhantomJS in the early days). I forget which instance type, but I generally tried to scale horizontally with more, smaller instances for a few different reasons. Those servers, the rest of the infrastructure around rendering and saving all the HTML, and some data costs ended up being a little more than 50% of the bill. Running websites through Chrome at scale is not cheap!<p>I had something like 20 Postgres databases on RDS, used as shards containing URL metadata, like the last recache date. The workload was so write-heavy that I really had to shard the databases. For a while I had a single shard, and I eventually ran into the Postgres transaction ID wraparound failure. That was not fun, so from then on I deliberately over-provisioned RDS shards to prevent it from happening again.
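For anyone curious about those two bits, here’s a rough Python sketch. The names, the 20-shard count, and the thresholds are illustrative, not Prerender’s actual code: hash-based routing keeps a given URL’s metadata on a stable shard, and the age check mirrors what you’d get from running `SELECT datname, age(datfrozenxid) FROM pg_database;` to catch wraparound trouble before Postgres catches it for you.

```python
import hashlib

NUM_SHARDS = 20  # hypothetical, roughly matching the shard count above


def shard_for_url(url: str, num_shards: int = NUM_SHARDS) -> int:
    """Route a URL's metadata to a stable shard by hashing the URL.

    md5 is used here for uniform distribution, not security.
    """
    digest = hashlib.md5(url.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_shards


# Postgres transaction IDs are 32-bit. Autovacuum starts forced freezing
# at age 200M by default (autovacuum_freeze_max_age), and at roughly
# 2 billion the database refuses new writes until an anti-wraparound
# vacuum completes -- the failure mode described above.
AUTOVACUUM_FREEZE_MAX_AGE = 200_000_000
WRAPAROUND_SHUTDOWN_AGE = 2_000_000_000  # approximate danger line


def xid_age_status(datfrozenxid_age: int) -> str:
    """Classify the value of age(datfrozenxid) for a database."""
    if datfrozenxid_age >= WRAPAROUND_SHUTDOWN_AGE:
        return "emergency"
    if datfrozenxid_age >= AUTOVACUUM_FREEZE_MAX_AGE:
        return "vacuum-due"
    return "ok"
```

The nice side effect of hashing rather than range-splitting is that no shard becomes a hot spot for a single busy customer domain, though rebalancing when you add shards is left as an exercise.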
I think RDS costs were something like 10%.<p>All of the HTML was stored in S3. The number of GET requests wasn’t too crazy, but being so write-heavy on PUT requests for recaching HTML, plus a decent-sized chunk of stored data, the servers serving customer requests, and data-out from our public endpoint, that was probably 30%.<p>There were a few other things like SQS for populating recache queues, ElastiCache, etc.<p>I never bought reserved instances and figured the new team would go down that route, but they blew me away with what they were able to do with bare-metal servers. So kudos to the current Prerender team for doing such great work! Maybe that helps provide a little more context for the great comments I’m seeing here.