It's difficult to top AWS just on an apples-to-apples comparison of buying servers versus turning up EC2 instances. EC2 is priced pretty aggressively if you know what your needs are, identify an instance type that fits well, and reserve it for 3 years. A lot of people don't actually achieve all three, but it's often possible.

Most comparisons overlook the crucial price differentiator between AWS and a datacenter build: bandwidth costs. AWS bills by bytes transferred; every IP transit provider bills by 95th percentile or similar.

A 1 gig commit on a 10 gig circuit is $1-2/meg in a well-served on-net building, so call it $2000/mo. The switch is $5000 or so (something that can't do full-table BGP but is layer 3 capable), and support is $1000/yr. The cross connect is $300/mo, plus a little more for optics and fiber. Over three years the cost is $91,000 (plus power to run the switch), if you never go above the 1 gig commit. Seems like a lot of money, right?

Compare this to transferring 500 Mbit/s constantly to the Internet over 3 years at AWS pricing. That amounts to 156 TB/month transferred. Per month that will cost $11,878.40; over 3 years, $427,622.40. (Rough arithmetic in the sketch at the end of this comment.)

There are some other key differences between AWS and datacenters:
- it puts all of your spending into opex, eliminating capex (this matters for some businesses);
- it limits the ways you can solve problems; there's no VRRP support, for example, which is very limiting for a lot of service types;
- there is no ability to peer or receive settlement-free transit if you deploy in AWS

However, in terms of raw dollars, the way AWS bills bandwidth consumption is always the major cost differentiator.
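A back-of-the-envelope sketch in Python, plugging in the figures above (the AWS side reuses the quoted monthly bill rather than re-deriving the 2015 tiered egress rates):

    # Rough 3-year bandwidth cost comparison, using the numbers quoted above.
    MONTHS = 36

    # Colo side: 1 gig commit at ~$2/meg, plus switch, support, cross connect.
    colo = (
        2000 * MONTHS      # 1 gig commit at $2/meg/month
        + 5000             # layer-3 switch (one-time)
        + 1000 * 3         # switch support, $1000/yr
        + 300 * MONTHS     # cross connect
    )

    # AWS side: 500 Mbit/s sustained ~= 156 TB/month of egress, which works
    # out to $11,878.40/month at AWS's tiered transfer pricing.
    aws = 11878.40 * MONTHS

    print(f"colo: ${colo:,.2f}  aws: ${aws:,.2f}  ratio: {aws / colo:.1f}x")
    # colo: $90,800.00  aws: $427,622.40  ratio: 4.7x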
I don't quite agree with all the calculations of AWS vs. DIY.

m3.medium instance: 1 core, 3.75 GB RAM, 4 GB SSD

A typical server these days:
2×10 physical cores, 256 GB RAM, 2 TB SSD
for around $10k.

So you can run AT LEAST 20 m3.medium instances on one physical box without overbooking. With overbooking, a single server can probably handle 40 m3.medium-class machines.
So instead of 100 servers, you need 30, or more likely 20.

The bottom line:
People locked into the cloud mindset don't appreciate how fast physical hardware is these days compared to "cloud" offerings.
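A sketch of the consolidation math, treating an m3.medium as 1 core / 3.75 GB RAM / 4 GB SSD and modeling overbooking as a simple 2× CPU multiplier (an assumption, not an AWS figure):

    # How many m3.medium-class VMs fit on one 2x10-core / 256 GB / 2 TB box?
    cores, ram_gb, ssd_gb = 20, 256, 2000
    vm_cores, vm_ram, vm_ssd = 1, 3.75, 4

    def vms_per_box(cpu_overbook=1.0):
        # The binding constraint is whichever resource runs out first.
        return int(min(cores * cpu_overbook / vm_cores,
                       ram_gb / vm_ram,
                       ssd_gb / vm_ssd))

    print(vms_per_box())     # 20 -> CPU-bound without overbooking
    print(vms_per_box(2.0))  # 40 -> 2x CPU overbooking; RAM (~68) and SSD (500) still fit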
I don't agree with the methodology for calculating AWS vs. Moore's Law in blog articles such as this one. [1]

A more accurate cost model would require *multiple components* in addition to the number of transistors Amazon buys. For example, to fully build an AWS service:

- You need a plot of land for the data center. Do real estate prices follow Moore's Law?
- Concrete and steel to erect the data center building. Do raw building materials follow Moore's Law?
- Energy costs. Does the price of electricity follow Moore's Law?
- Bandwidth costs. Does the price of network transfer from Tier 1 backbones follow Moore's Law?
- Staffing costs. Do the salaries Amazon pays its programmers, sysadmins, and other techies follow Moore's Law?
- Etc.

If transistor count were the *overwhelming* cost item in supplying an "AWS" offering, we could ignore all the other components as an insignificant rounding error. Is that the case?

[1] https://gigaom.com/2014/04/19/moores-law-gives-way-to-bezoss-law/
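A toy blended-cost model makes the point: if only part of the cost base rides the transistor curve, the total can't fall at Moore's-Law speed. The cost shares and decline rates below are made-up placeholders, not Amazon's actual cost structure:

    # Toy blended-cost model: each component has a share of total cost and
    # its own annual price decline. The shares below are illustrative only.
    components = {
        "silicon":       (0.30, 0.25),   # (share of cost, annual decline)
        "land/building": (0.15, 0.00),
        "energy":        (0.20, 0.02),
        "bandwidth":     (0.15, 0.10),
        "staff":         (0.20, -0.03),  # salaries tend to rise
    }

    years = 3
    remaining = sum(share * (1 - decline) ** years
                    for share, decline in components.values())
    print(f"cost after {years} years: {remaining:.0%} of today")
    # -> ~79% of today, vs ~42% if everything fell 25%/yr like silicon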
No, but Google Compute Engine does a bit. For instance, Azure is at least twice as expensive as Google for cloud compute. (In addition to having their weirdass PaaS design leaking into their IaaS. Nice service otherwise.)

I was super hesitant to use Google for anything (a: because I dislike them now; b: I couldn't be sure they'd stay committed to it; c: I recall they screwed people on Google App Engine?). But their offering is so much more straightforward and much cheaper. No commitments: just use VMs and get discounts.

GCE is also way faster to start up, and single-core performance blows away Azure.

Azure and AWS make a big deal about storage and transfer while ignoring that they're way overpricing the CPU.

Edit: Here's Google's pricing "philosophy", where they explicitly say they're committed to Moore's Law and others aren't: https://cloud.google.com/pricing/philosophy/
Nice story, but the numbers don't add up at all. If you order managed colo at a provider like Rackspace (a much higher level of support and guarantees than AWS: basically 24x7 and 100% SLAs on network, cooling, etc.) and get dual- or quad-CPU machines (really standard; I don't think they even offer single-CPU anymore) with 256 or 512 GB of RAM, you can run 64 or even many more "AWS medium" nodes on one piece of hardware, because the AWS "core" is based on a very old type of Xeon.

So in reality you could run this setup, with a fully managed network and a 100% SLA, on 9 or 10 dedicated machines at Rackspace (or an even cheaper competitor of theirs). The price would be around 9 × $800/month plus the cost of two dedicated firewalls for a redundant high-performance setup, so I'd guess $7-8k per month, or $288,000 over three years. That's over 60% cheaper than the proposed AWS setup.

So yes, if you buy way too much hardware and combine it with a lot of manual work that a good provider could do cheaper, AWS looks cheaper. But for a realistic setup at that scale, hardware is much more cost effective.

The sweet spot for AWS is a small setup or a very flexible load. If you're big with a steady load, a smart setup with a managed provider is much more effective.

(And as a last remark: both AWS and Google offer huge discounts for long commitments; Google even does it without upfront commitment. Use those prices and the two options end up much closer again.)
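The rough arithmetic behind that estimate (the $800/machine/month figure and the firewall cost are the estimates above, not a Rackspace quote):

    # Managed colo estimate from the comment above.
    machines = 9
    per_machine = 800          # USD/month, dual/quad-CPU box, 256-512 GB RAM
    firewalls = 800            # USD/month guess for a redundant firewall pair
    monthly = machines * per_machine + firewalls
    print(monthly, monthly * 36)   # 8000/month -> 288,000 over 3 years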
Quick comment on the comparison of AWS cost to DIY cost: for AWS you're using on-demand pricing over 3 years and comparing it to the cost of buying hardware over 3 years. If you're going to run instances for 3 years, you're probably going to use reserved instances; for m3.medium the rate drops from $0.070/hour to $0.0261/hour.

So with 3-year reserved pricing your total cost ends up being something like $411,544, less than half the referenced $880,000 hardware purchase price.
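For concreteness, a quick check of those numbers (the instance count of 600 is inferred from the quoted total, and the reserved rate is treated as an effective hourly figure with the upfront payment amortized in):

    HOURS_3YR = 24 * 365 * 3            # 26,280 hours

    on_demand = 0.070                   # m3.medium, USD/hour
    reserved  = 0.0261                  # 3-yr reserved, effective USD/hour

    instances = 600                     # implied by the quoted $411,544 total
    print(instances * reserved * HOURS_3YR)   # ~411,545 -> the $411,544 figure
    print(instances * on_demand * HOURS_3YR)  # ~1,103,760 at on-demand rates
    print(1 - reserved / on_demand)           # ~63% saving from reserving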
Fascinating topic. Yes, AWS pricing falls more slowly than Moore's Law, and this reflects the market power shared by the few large public cloud providers, given the scale needed to compete at the top level. The profit-maximizing strategy for each is to enjoy rising margins while costs fall, then reset prices to a lower level as a pack once costs have fallen so far that it becomes more profitable to cut prices and serve a larger market (google 'oligopoly' or 'kinked demand' for further clarity).

There was actually a slide addressing this very question in a Urs Hölzle keynote at Google Cloud Live on 3/25/14. It was titled 'but prices are not falling fast enough' and showed 2006-2014 cloud prices falling 6-8% per year vs. a 20-30% per-year improvement in hardware pricing. I included a screenshot in the deep-ish dive I'm developing on the economics of cloud market pricing: http://www.stackalpha.com/blog/2015/2/25/cloud-price-wars-the-joke-is-on-us
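Compounding those two decline rates over the 8 years on that slide shows how wide the gap gets (a quick illustrative check):

    # Compound an annual price decline over the 8 years on Hölzle's slide.
    def after(years, annual_decline):
        return (1 - annual_decline) ** years

    print(after(8, 0.07))   # cloud at ~7%/yr     -> ~56% of the 2006 price
    print(after(8, 0.25))   # hardware at ~25%/yr -> ~10% of the 2006 price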
Price-drop speed has already been examined: https://gigaom.com/2014/04/19/moores-law-gives-way-to-bezoss-law/
Rising energy costs in computing these days should change how we think about Moore's Law. We shouldn't worry as much about doubling the power of a CPU, especially in a cloud setting, because you can always do that by using two CPUs. We should instead work from an equation that combines computing power, energy consumption, and energy prices (sketched below).
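Something like this, as a rough sketch; all the figures are illustrative:

    # Cost per compute-hour = amortized hardware + electricity.
    def cost_per_hour(server_price, lifetime_years, watts, usd_per_kwh):
        capex = server_price / (lifetime_years * 24 * 365)   # straight-line
        power = (watts / 1000) * usd_per_kwh                 # energy draw
        return capex + power

    # e.g. a $10k server over 3 years, drawing 400 W at $0.10/kWh:
    print(cost_per_hour(10_000, 3, 400, 0.10))  # ~$0.42/hr, energy ~10% of it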
Remember, price != cost; pricing has many other influences.

The author does raise an interesting conundrum, however, that the GigaOm link does not. At first glance it may appear that big consumers would be better off with their own hardware long term, since they could theoretically follow the Moore curve. The missing piece is the cost of ownership AND the cost of staying on the curve (since the curve keeps falling, you're continuously upgrading).

Next up is to build so that you can treat all cloud hosts as a commodity: continuously monitor pricing and loads, and move large numbers of instances from one provider to the next to optimize your costs.
I hope to have time to build a Google spreadsheet to compare your numbers more closely.

Most people don't realize that an AWS "core" is a threaded core (a hyperthread, not a physical core). So the example quoted below of 2 processors with 10 cores each isn't 20 Amazon-core equivalents but 40.

Additionally, as near as I can tell, you're not depreciating your costs over several years.

Whatever these numbers come out to, it's clear that if you're small and need unpredictable agility, AWS is cheaper. If you're larger and have enough foresight into your usage pattern, then at the right economy of scale AWS is never cheaper.
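A sketch of both points, reusing the $10k server price from the comment quoted below (the depreciation schedule is a plain straight-line assumption):

    # An AWS "vCPU" is a hyperthread, so a 2-socket, 10-core/socket box
    # exposes 2 * 10 * 2 = 40 vCPU-equivalents, not 20.
    sockets, cores_per_socket, threads_per_core = 2, 10, 2
    vcpus = sockets * cores_per_socket * threads_per_core   # 40

    # Straight-line depreciation of a $10k box over 3 years:
    price, years = 10_000, 3
    print(vcpus, price / years)   # 40 vCPUs, ~$3,333/yr depreciation expense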
I'm always amazed that anyone can accurately calculate the cost of a set of Amazon Web Services elements at all, much less do it accurately enough to measure it over time.
Moore's Law refers only to the number of transistors in a component, and while transistor count *generally* translates to computing power, it's not a 1:1 relationship.

Lately the shift has been from CPU power to GPU power, so while there's more compute capability than ever, you need a hybrid system to take full advantage of Moore's Law.
It probably doesn't, if the article's headline has to phrase it as a question instead of just stating outright that "Amazon Web Services Pricing Follows Moore's Law".
GPU instances still seem overpriced. Reserved gets you $0.30 an hour, which would run you ~$8,000 over 3 years. Hard to imagine the specs are much better than a $1,500 computer's.
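The arithmetic behind the ~$8,000 figure:

    # 3 years of a reserved GPU instance at the quoted effective rate.
    rate = 0.30                      # USD/hour
    print(rate * 24 * 365 * 3)       # 7,884 -> the ~$8,000 figure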