How to Make More Money Renting a GPU Than Nvidia Makes Selling It

124 points by ankitg12 about 1 year ago

9 comments

Majromax about 1 year ago
The math seems off to me here. In particular:

>> So here is the deal: If you have 16,000 GPUs, and you have a blended average of on-demand (50 percent), one year (30 percent), and three year (20 percent) instances, then that works out to $5.27 billion in GPU rental income over four years with an average cost of $9.40 per hour for the H100 GPUs.

This makes a very strong assumption that the rental cost of an H100 will not change over the next four years, which is wildly optimistic. Instead, we can infer expected prices by looking at the differential rates for one- and three-year commitments:

>> We estimated the one year reserved instance cost $57.63 per hour, or $7.20 per GPU-hour, and we know that the published price for the three year reserved instance is $43.16, or $5.40 per GPU-hour.

On the margin, the cloud provider should be indifferent between an on-demand rental, a one-year reservation, and a three-year reservation. That implies that three consecutive one-year reservations should provide about the same income as the three-year reservation.[1]

Someone who places a three-year commitment pays $16.20 per hour-year summed over three years ($5.40 × 3). The one-year commitment is $7.20 per hour-year, over one year. Subtract the two to get a residual of $9.00, then divide by the two years remaining in the contract to get $4.50.

With this rough calculation, the two-year, one-year-forward price of the H100 is about $4.50/hr. If we further assume that the price changes each year by a constant ratio (0.72), we can break the per-hour, one-year reservation prices into $7.20 (today), $5.22 (one year from now), and $3.78 (two years from now).

Going further into speculation and applying this ratio to rental revenue as a whole, that "$5.27b over four years" instead becomes $3.47b. Still a reasonable multiple of the purchased infrastructure cost, but it's less outlandish and emphasizes the profit potential of moving first in this sector (getting those long-term commitments while it's still a seller's market).

[1] I'm ignoring the option value in the one-year commitment, which allows the customer to seek a better deal after twelve months. This option value is minimal if the GPU cloud is expected to be at capacity forever, such that the provider can replace customers easily.
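A minimal sketch of that forward-rate inference, in Python. The $7.20 and $5.40 per-GPU-hour figures are the ones quoted from the article; the constant year-over-year price ratio is the commenter's own assumption, so treat the output as illustrative rather than a forecast.

```python
import math

# Inputs quoted from the article: reserved-instance rates per GPU-hour.
one_year_rate = 7.20     # $/GPU-hour for a one-year reservation (year 1)
three_year_rate = 5.40   # $/GPU-hour averaged over a three-year reservation

# If three consecutive one-year reservations should earn the provider about
# the same as one three-year reservation, then:
#   rate_y1 + rate_y2 + rate_y3 = 3 * three_year_rate
residual = 3 * three_year_rate - one_year_rate   # income attributed to years 2-3
avg_forward_rate = residual / 2                  # ~$4.50/hr average over years 2-3

# Assume (as the comment does) a constant year-over-year ratio r, so that
# rate_y2 = r * rate_y1 and rate_y3 = r^2 * rate_y1.
# Solve rate_y1 * (r + r^2) = residual as a quadratic in r.
a, b, c = one_year_rate, one_year_rate, -residual
r = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

print(f"average forward rate (years 2-3): ${avg_forward_rate:.2f}/hr")   # $4.50/hr
print(f"implied year-over-year ratio:     {r:.2f}")                      # 0.72
print(f"year-by-year rates: ${one_year_rate:.2f}, "
      f"${one_year_rate * r:.2f}, ${one_year_rate * r * r:.2f}")         # 7.20, 5.22, 3.78
```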
latchkey about 1 year ago
Great article. I'm in the process of building this business myself, so I'm intimately aware of everything in the article. Just keep in mind that a lot of the math is back-of-the-napkin guesses. All of this is managed on personal relationships and deals with the vendors, and much of the pricing that actually gets set isn't made public.

Our first offering is the AMD MI300X instead of Nvidia or Intel products. But, unlike all of my competitors, we are not picking sides. Our unique value prop is that we eventually plan to offer the best of the best compute of anything our customers want: effectively a varied supercomputer for rent, with bare-metal access to everything you need and white-glove service and support. In essence, we are the capex/opex for businesses that don't want to take on this risk themselves.

What people don't understand is how difficult and capital-intensive it is to deploy and manage large-scale compute, especially on the high-end cutting edge. Gone are the days of just racking/stacking a few servers. This stuff is way more complicated and involved. It is rife with firmware bugs, limitations, and hardware failures.

The end of the article says some nonsense about a glut of GPU capacity. I do have to call that out. It isn't going to happen for a long while at least. The need for compute is growing exponentially. Given the complexities of just deploying this stuff, it isn't physically possible to get enough of it out there, fast enough, to satisfy the demand.

I love every challenge of this business. Happy to answer questions where I can.
KaoruAoiShiho about 1 year ago
Just adding some information: the article claims Tesla has 15k H100s, but they actually have 40k H100s now and will have 85k by the end of the year. https://www.reddit.com/r/NVDA_Stock/comments/1cbwvnr/tesla_40k_h100s_now_85k_by_end_of_the_year/

As usual, third-party estimates deserve a bit of doubt.
nabla9 about 1 year ago
The AI boom, combined with bottlenecks in foundry capacity and advanced packaging, has created contango in GPU sales: the value of a used H100 goes up after the sale.

The GPU supply-demand curve is nowhere close to where Nvidia would like it to be. Demand is so high that Nvidia would make at least 2x the profit if it could sell 3x more GPUs. TSMC just can't build new fabs fast enough.
_xander about 1 year ago
Great article. If this sustainable arbitrage exists from renting GPU time instead of selling and shipping GPUs, why doesn't Nvidia become a cloud provider itself?
DeathArrow about 1 year ago
So should we buy CoreWeave and Lambda shares after the IPO or not?

For enthusiasts, even renting from Runpod, Salad, or Vast.ai, which are an order of magnitude cheaper than the established cloud providers, is much more expensive than buying an RTX 3090 or 4090. Which got me thinking: why don't companies that need training compute pool their money, buy some H100s, and share them?
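A rough rent-vs-buy sketch of that last point. The purchase price and hourly rate below are placeholder assumptions for illustration only, not figures from the article or from the marketplaces named above:

```python
# Back-of-the-envelope rent-vs-buy comparison for an enthusiast GPU.
# Both numbers are illustrative assumptions, not quoted prices.
purchase_price = 1600.0   # assumed street price of an RTX 4090-class card, USD
hourly_rental = 0.50      # assumed marketplace rate per GPU-hour, USD

breakeven_hours = purchase_price / hourly_rental
print(f"break-even at ~{breakeven_hours:,.0f} rented hours "
      f"(~{breakeven_hours / 24:.0f} days of continuous use)")

# Past that point, owning the card is cheaper (ignoring power, cooling, and
# resale value), which is the commenter's argument for heavy users.
```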
hinkley about 1 year ago
Nvidia is a manufacturer. They've read all the books of all of the giants of logistics. They know that throughput means nothing if it doesn't include sales. They know how to reduce in-process work and get from raw materials to boxes on store shelves as quickly as possible.

Which is to say: they know inventory is a liability and they know how to get rid of it.

Renting out equipment is an inventory management problem, which manufacturers don't understand *on purpose*. That's somebody else's domain.
mkl about 1 year ago
> there are only 35,064 hours in a year with 365.25 days

Wut. 96 hours per day? I don't trust the maths in this article.
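For what it's worth, the quoted figure works out to exactly four years of hours, which matches the article's four-year rental horizon and may be where the mislabeling crept in. A quick sanity check:

```python
# Sanity check on the figure the comment quotes.
hours_per_year = 365.25 * 24      # 8,766 hours in an average year
print(hours_per_year)             # 8766.0, not 35,064

# 35,064 is exactly four years of hours, suggesting a mislabeled per-year figure.
print(35064 / hours_per_year)     # 4.0
```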
ItsTotallyOn about 1 year ago
This is actually not an accurate report. https://twitter.com/glennklockwood/status/1786242487767662845