Amazon Orders More than 10,000 Nvidia Tesla cards

86 points, by SlimHop, over 12 years ago

2 comments

Karhan, over 12 years ago
I remember reading a blog post about the peculiarities of GPU programming, and the post noting that for most modern graphics cards (at the time), if you can keep your computable data in chunks no bigger than 64 KB apiece, you can expect to see enormous performance gains on top of what you'll already see from using OpenCL/CUDA, because of a physical memory limit on the GPU itself.

I also remember thinking that a 64 KB row size for DynamoDB was very odd.

I wonder if these things are at all related.
Comment #4615834 not loaded
Comment #4615781 not loaded
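As a loose illustration of the chunking idea in the comment above, here is a minimal sketch (hypothetical, not tied to any particular GPU framework) of splitting an array into pieces no larger than 64 KB before handing each piece off for processing:

```python
import numpy as np

CHUNK_BYTES = 64 * 1024  # the 64 KB limit mentioned in the comment

def iter_chunks(data: np.ndarray, chunk_bytes: int = CHUNK_BYTES):
    """Yield views of `data` that each fit within `chunk_bytes`."""
    items_per_chunk = max(1, chunk_bytes // data.itemsize)
    for start in range(0, data.size, items_per_chunk):
        yield data[start:start + items_per_chunk]

# Example: 1 million float32 values (~4 MB) processed in 64 KB pieces.
values = np.random.rand(1_000_000).astype(np.float32)
total = 0.0
for chunk in iter_chunks(values):
    # Stand-in for dispatching the chunk to a GPU kernel.
    total += float(chunk.sum())

print(f"{values.size} values processed in 64 KB chunks, sum={total:.2f}")
```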
mercuryrising, over 12 years ago
Amazon's cloud might be one of the coolest things I've seen in a while: hop on, get some of the best computing performance possible, hop off, and save some money. If you have a random data-analysis problem that would take your computer three weeks, why not just pay $10 and get it done in two hours (plus a few hours of debugging)?

If the article is correct, Amazon paid $15 million for cards that will be out of style in about two years (not that they have to get rid of them, but something faster, easier to maintain (if Nvidia starts opening up to Linux), with more memory and less power usage will come out). They'll have to fork over a large sum of money again to keep their top "on-demand computing" title.

Amazon's Cluster GPU instance right now has two Nvidia Tesla Fermis in it. I'm going to assume Amazon will split the new cards into twos and fours, at about half each. That's ~1,750 new machines coming online. Looking at the current cluster rates, it's $2.10 for an hour of the normal instance, so I'll say $4.20 for an hour on a jumbo with 4 GPUs.

They paid $15 million for just the cards. They need to get 2,380,952 hours of usage out of the machines to break even on the cards, which means logging about 1,360 hours per machine, or having someone run all the machines at full bore for 56 days. Since the cards are the most expensive component (an assumption), and the rest of each machine costs about the price of one card, add some overhead for everything else needed to make it work: roughly 120 days of full-time use to break even on an investment of about $25 million (they need to buy lots of other things to put all the GPUs in, deal with all that heat, have somewhere to put it all, have people install the new machines, etc.). I wonder what the actual usage of those clusters is, and whether anyone has signed a deal saying they'll use the cluster for an entire month. It's a beautiful maneuver, though: say CERN didn't want to do all the LHC data analysis in house, because by the time they got to this part of the experiment, the technology they had purchased earlier would be way out of date. Just let Amazon do it. Amazon will always have the latest technology, and you get an inexpensive way of leveraging that power.

Assuming they can make it all work (and I'm sure a lot of their decisions now are strategic bets on future investments), this is a great time to be a computer user: log on and get the best hardware for a couple of hours for a couple of dollars. Instead of personally shelling out $1,500 on a new computer, I could log a ton of EC2 hours on significantly faster, more powerful machines that never go 'stale' and lead much happier lives (my computer probably doesn't do anything "intensive" for 70% of its life, whereas the EC2 machines are probably pushed harder than that).
Comment #4615552 not loaded
Comment #4615506 not loaded
Comment #4615734 not loaded
Comment #4615496 not loaded
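To make the break-even arithmetic in the comment above easier to follow, here is a small worked example using the commenter's own assumed figures: $15 million for the cards, ~1,750 machines, and an hourly rate of $6.30 (the sum of the two quoted instance prices, which is what the comment's totals imply). None of these numbers are verified.

```python
# Reproduce the commenter's rough break-even estimate (their assumptions, not verified).
CARD_COST_USD = 15_000_000     # claimed spend on the Tesla cards alone
MACHINES = 1_750               # commenter's estimate of new machines
HOURLY_RATE_USD = 2.10 + 4.20  # sum of the two quoted instance rates, implied by the comment's totals

breakeven_hours = CARD_COST_USD / HOURLY_RATE_USD
hours_per_machine = breakeven_hours / MACHINES
days_full_bore = hours_per_machine / 24

print(f"Total billable hours to recover card cost: {breakeven_hours:,.0f}")
print(f"Hours per machine:                         {hours_per_machine:,.1f}")
print(f"Days of 24/7 use per machine:              {days_full_bore:.1f}")
```

Doubling the investment to ~$25 million for the surrounding hardware and overhead, as the comment does, roughly doubles the break-even period to the ~120 days it mentions.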