The H200 GPU die is the same as the H100's, but it's using a full set of faster 24GB memory stacks:<p><a href="https://www.anandtech.com/show/21136/nvidia-at-sc23-h200-accelerator-with-hbm3e-and-jupiter-supercomputer-for-2024" rel="nofollow noreferrer">https://www.anandtech.com/show/21136/nvidia-at-sc23-h200-acc...</a><p>This is an H100 141GB, not new silicon like the Nvidia page might lead one to believe.
I'm curious: Do you think there is a realistic chance for another chip maker to catch up and overtake NVidia in the AI space in the next few years or is their lead and expertise insurmountable at this point?
The performance jumps that Nvidia has made in a fairly short amount of time are impressive, but I can't help but feel like there is a real need for another player in this space. Hopefully AMD can challenge this supremacy soon.
I had a shock when I looked up prices for H100 GPUs, wanting to use one just for personal experimentation and for an upcoming hackathon. How much does this one cost? $300,000?
Am I the only one who's annoyed by the non-alphabetical model numbers? Why not do B100 after the A100, then jump to H (supposing there won't be a C100 or D200 at some point)? Like, wtf Nvidia.
Why do they still sell hardware now that practically every other business has moved to being a service provider? If we set aside the fact that it would be an awful move for end-users, what's to stop Nvidia from cornering the market by only renting them in their own data centers? Is it the logistics of moving the massive training sets?