> <i>Today, Pinterest's memcached fleet spans over 5000 EC2 instances across a variety of instance types optimized along compute, memory, and storage dimensions. Collectively, the fleet serves up to ~180 million requests per second and ~220 GB/s of network throughput over a ~460 TB active in-memory and on-disk dataset, partitioned among ~70 distinct clusters.</i><p>Wow.<p>Assuming $0.09 per GB egress on EC2, that works out to $51,321,600/mo. Of course, they must have some enterprise agreement, but how steep a discount would it take to make that affordable?<p>By comparison, serving 180 million requests per second on an "egress-free" serverless platform like Cloudflare Workers would cost $77,760,000/mo (at 6M requests per $1) or $233,280,000/mo (at 2M requests per $1).<p>Cloud is wild.
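The arithmetic behind those figures can be sketched in a few lines of Python (assuming a 30-day month of sustained peak traffic, $0.09/GB egress, and the two Workers-style per-request rates mentioned above):

```python
# Back-of-envelope check of the monthly cost figures above.
# Assumptions: 30-day month at sustained peak rates, $0.09/GB EC2 egress,
# and per-request pricing of 6M (or 2M) requests per $1.
SECONDS_PER_MONTH = 30 * 24 * 3600  # 2,592,000

# EC2 egress: 220 GB/s sustained
egress_gb_per_month = 220 * SECONDS_PER_MONTH        # 570,240,000 GB
egress_cost = egress_gb_per_month * 0.09             # ~$51.3M

# Per-request pricing: 180M requests/s sustained
requests_per_month = 180_000_000 * SECONDS_PER_MONTH # ~4.67e14 requests
workers_cost_cheap = requests_per_month / 6_000_000  # ~$77.8M
workers_cost_dear = requests_per_month / 2_000_000   # ~$233.3M

print(f"Egress: ${egress_cost:,.0f}/mo")
print(f"Per-request: ${workers_cost_cheap:,.0f} to ${workers_cost_dear:,.0f}/mo")
```

Either way you price it, sustained traffic at this scale runs well into eight figures per month at list prices.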
I do hope Pinterest will fund memcached development going forward, especially the language clients. I've been using pylibmc for fast memcached access, and that project seems nearly dead (<a href="https://github.com/lericson/pylibmc/issues" rel="nofollow">https://github.com/lericson/pylibmc/issues</a>).
> the fleet serves up to ~180 million requests per second<p>For comparison, Google Search handles about 63k queries per second. I hope that line in Pinterest's blog isn't a typo.