> It is quite remarkable that from 2015 to 2021, the cost of compute barely changed.

Quoting myself [1]:

> Basically for servers, Intel has been stuck on 14nm for far too long. The first 14nm Broadwell Xeon was released in 2015, and as of mid-2021 Intel has only just started rolling out 10nm Xeon parts based on Ice Lake.

Core counts may have increased, as the article suggests, but the price per core hasn't changed much at all.

Basically the industry, including DRAM and NAND, hasn't seen any unit cost reduction for *years*.

AMD EPYC and ARM cores on servers are only just getting started, and I expect that competition to drive costs down. Google has already announced [2] its Tau instances based on EPYC Milan, where a vCPU is finally a *full* CPU core instead of the current industry standard of a single SMT thread on x86. It has a claimed 42% price/performance advantage over AWS Graviton2.

Unfortunately, by far the most expensive item on AWS or any other hyperscaler is bandwidth, and that doesn't seem to be changing or coming down any time soon.

[1] https://news.ycombinator.com/item?id=27722764

[2] https://www.anandtech.com/show/16765/google-announces-amd-milanbased-cloud-instances-out-with-smt-vcpus