Sigh, I wonder why people write stories like this.

Data center servers don't suck, and I'd bet that most folks running them understand what their utilization is. Blekko has over 1500 servers in its Santa Clara facility and we know pretty much exactly how utilized they are, but that is because we designed the system that way.

It's funny how things have come full circle. Back in the '70s you might have a big mainframe in the machine room; it was so expensive that the accounting department required you to get maximum use out of the asset, so you had batch jobs that ran 24/7. You charged people by the kilo-core-second, using economics to maximize the value extracted from the machine.

Then minicomputers, and later large multi-CPU microcomputer servers (think Sun E10000 or the IBM PowerPC series), replaced mainframes. They didn't cost as much, so the pressure to 'get the return' was a bit lower; you could run them at 50% utilization and they still cost less than you'd expect to pay for equivalent mainframe power.

Then came the dot-com explosion, and suddenly folks were 'co-locating' a server in a data center because it was cheaper to get decent bandwidth there than to run it the last mile to where your business was. But you didn't need a whole lot of space for a couple of servers, just a few 'U' (1.75" each of vertical space) in a 19" rack. And gee, some folks said, why bring your own server? We can take one machine, put a half dozen web sites on it, and you pay something like 1/6th the cost of 4U of rack space in the colo. Life was good (as long as you weren't co-resident with a porn or warez site :-)

Then, at the turn of the century, the Sandia 'cheap supercomputer' and NASA Beowulf papers came out, everyone wanted to put a bunch of 'white box' servers in racks to create their own 'Linux cluster', and the era of 'grid' computing was born.

The interesting thing about 'grid' computing, though, was that you could buy 128 generic machines for about $200K which would outperform a $1.2M big-box server. The accountants were writing these things off over 3 years, so the big-box server cost the company $400K/year in depreciation, the server farm maybe $70K/year (if you include switches and such). It really didn't matter to the accountants that the server farm was less 'utilized', since the dollars were so much lower and the compute needs were met (I'll sketch that arithmetic below).

Now that brings us up to the near-present. These 'server farms' provided compute at hitherto unheard-of low costs, and access to the web became much more ubiquitous. That set up the situation where you could offer a service that earned just a few dollars per 1,000 requests and, like a real farm harvesting corn, make it up in volume. Drive web traffic to this array of machines (which have a fixed cost to operate) and turn electrons into gold.
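To make that depreciation arithmetic concrete, here's a quick back-of-the-envelope sketch. The dollar figures are the ones above; the ~$10K for switches and racks is my own rough guess to land on the ~$70K/year number:

    # 3-year straight-line write-off, figures from the comment above
    big_box_cost  = 1_200_000   # the $1.2M big-box server
    farm_cost     =   200_000   # 128 generic 'white box' machines
    farm_overhead =    10_000   # switches, racks, etc. -- rough guess
    years         = 3

    print(big_box_cost / years)                 # ~$400K/year depreciation
    print((farm_cost + farm_overhead) / years)  # ~$70K/year depreciation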
If you can get above about $5 of revenue per thousand queries ($5 RPM), you can pretty much profitably run your business from any modern data center.

But what if you can't get $5 RPM? Or your traffic is diurnal, and you get $5 RPM during the day but $0.13 RPM at night? Then your calculation gets more complex. And what if you have 300 servers, 150 of which are serving and 150 of which are 'development', so you really have to cover the cost of all of them from the revenue generated by the 'serving' ones?

Once you start getting into the 'business' of web infrastructure it gets a bit more complicated (well, there are more things to consider; the math is still pretty much basic arithmetic). And 'efficiency' suddenly becomes something you can put a price on.

Once you get to that point, you can point at utilization and say 'those 3 hours of high utilization made me $X', and suddenly the accountants are interested again. Companies like Google, whose business is information 'crops', were way ahead of others in computing these numbers. Amazon too, because they have to price this stuff with EC2, S3, and the rest of AWS, and they need to know which prices are 'good' and which are 'bad.' It is 'new' to older businesses that have yet to switch over to this model, and that is where a lot of folks are making their money: you pay one price for the 'cloud', which is cheaper than what you had been paying, so you don't analyze what it would cost to do your own 'cloud'-type deployment. That will go away (probably 5 to 10 years from now) as folks use savings in that infrastructure to be more competitive.
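If you want to poke at the RPM math, here's a minimal sketch of the diurnal problem. The $5/$0.13 RPM split and the 150-serving/150-development fleet come from the numbers above; the query volumes and the per-server cost are invented purely for illustration:

    # Minimal sketch: does serving revenue cover the whole fleet?
    serving, development = 150, 150   # revenue must cover all 300
    cost_per_server = 15.00           # $/server/day -- hypothetical all-in cost

    day_queries, night_queries = 2_000_000, 500_000  # hypothetical volumes
    day_rpm, night_rpm = 5.00, 0.13   # revenue per 1,000 queries

    revenue = day_queries / 1000 * day_rpm + night_queries / 1000 * night_rpm
    cost = (serving + development) * cost_per_server
    print(f"${revenue:,.0f}/day revenue vs ${cost:,.0f}/day cost")
    # -> $10,065/day revenue vs $4,500/day cost with these made-up numbers;
    #    halve the daytime volume or the daytime RPM and the margin thins fast.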