
Nvidia H200 Tensor Core GPU

132 points by treesciencebot over 1 year ago

12 comments

brucethemoose2 over 1 year ago
The H200 GPU die is the same as the H100, but it's using a full set of faster 24GB memory stacks: https://www.anandtech.com/show/21136/nvidia-at-sc23-h200-accelerator-with-hbm3e-and-jupiter-supercomputer-for-2024

This is an H100 141GB, not new silicon like the Nvidia page might lead one to believe.
Comment #38252331 not loaded
Comment #38253356 not loaded
Comment #38251967 not loaded
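To put the parent comment's point in numbers, here is a rough sketch comparing Nvidia's published H100 SXM and H200 memory specs (the figures are the publicly stated ones; treat them as approximate):

```python
# Published memory specs; the GPU die (GH100) is the same on both parts,
# so compute throughput is essentially unchanged.
specs = {
    "H100 SXM": {"hbm_gb": 80,  "bandwidth_tb_s": 3.35},
    "H200":     {"hbm_gb": 141, "bandwidth_tb_s": 4.8},
}

cap_gain = specs["H200"]["hbm_gb"] / specs["H100 SXM"]["hbm_gb"]
bw_gain = specs["H200"]["bandwidth_tb_s"] / specs["H100 SXM"]["bandwidth_tb_s"]
print(f"Capacity:  {cap_gain:.2f}x")   # ~1.76x more HBM
print(f"Bandwidth: {bw_gain:.2f}x")    # ~1.43x more bandwidth
```

The uplift is entirely on the memory side, which is consistent with calling it "an H100 141GB" rather than new silicon.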
wolframhempel over 1 year ago
I'm curious: do you think there is a realistic chance for another chip maker to catch up and overtake Nvidia in the AI space in the next few years, or is their lead and expertise insurmountable at this point?
Comment #38252015 not loaded
Comment #38252179 not loaded
Comment #38252938 not loaded
Comment #38251942 not loaded
Comment #38253708 not loaded
Comment #38252131 not loaded
Comment #38253924 not loaded
Comment #38253270 not loaded
Comment #38251918 not loaded
deadballcretin over 1 year ago
The performance jumps that Nvidia has made in a fairly short amount of time are impressive, but I can't help but feel there is a real need for another player in this space. Hopefully AMD can challenge this supremacy soon.
Comment #38251783 not loaded
Comment #38251659 not loaded
Comment #38252664 not loaded
Comment #38255133 not loaded
schrodingerscow over 1 year ago
This may be a naive question, but all the metrics seem to be for inference. Should we expect similar gains on training?
Comment #38252671 not loaded
Mistletoe over 1 year ago
Can anyone explain to a layman what exactly I'm looking at in that picture? It looks like a neat little city or building from Blade Runner.
Comment #38251853 not loaded
sberens over 1 year ago
Where does the H200 fit in if the B100 is coming out the same year with 2x the performance? Is the H200 just cheaper than the B100?
Comment #38251749 not loaded
christkv over 1 year ago
Is the limit on inference speed a memory-bandwidth issue or a compute issue?
Comment #38252334 not loaded
Comment #38252304 not loaded
Comment #38253343 not loaded
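A back-of-envelope sketch of why the usual answer is "memory bandwidth" for small-batch LLM inference: decoding must stream every weight once per generated token, so bandwidth divided by model size bounds tokens per second. The model size here is an assumption for illustration; only the ~4.8 TB/s figure is Nvidia's published H200 number.

```python
# Upper bound on single-batch decode speed: bandwidth / model bytes.
bandwidth_bytes_s = 4.8e12   # H200 published HBM3e bandwidth, ~4.8 TB/s
params = 70e9                # assumed 70B-parameter model (illustrative)
bytes_per_param = 2          # fp16/bf16 weights
model_bytes = params * bytes_per_param

tokens_per_s = bandwidth_bytes_s / model_bytes
print(f"Bandwidth-bound ceiling: {tokens_per_s:.0f} tokens/s per GPU")
```

At batch size 1 the compute units sit mostly idle at this rate; larger batches reuse each streamed weight across many requests, which is what eventually shifts the bottleneck toward compute.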
mtw over 1 year ago
I had a shock when I looked up prices for H100 GPUs, wanting to use one just for personal experimentation and for an upcoming hackathon. How much does this one cost? $300,000?
Comment #38252881 not loaded
bearjaws over 1 year ago
"GPU" - zero video output capabilities built in.
Comment #38252130 not loaded
Comment #38252084 not loaded
nojvek over 1 year ago
With the cookie banner and ad banner, the page has barely 1/4 of the screen space on a mobile device.
NoMoreNicksLeft over 1 year ago
Am I the only one that's annoyed by the non-alphabetical model numbers? Why not do B100 after the A100, then jump to H (supposing there won't be a C100 or D200 at some point)? Like, wtf Nvidia.
Comment #38251916 not loaded
Comment #38252358 not loaded
Comment #38251911 not loaded
gosub100 over 1 year ago
Why do they still sell hardware now that practically every other business has moved to being a service provider? If we set aside the fact that it would be an awful move for end-users, what's to stop Nvidia from cornering the market by only renting them in their own data centers? Is it the logistics of moving the massive training sets?
Comment #38252619 not loaded
Comment #38252666 not loaded
Comment #38253320 not loaded
Comment #38252529 not loaded
Comment #38252478 not loaded