
TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.


© 2025 TechEcho. All rights reserved.

Nvidia at SC23: H200 Accelerator with HBM3e and Jupiter Supercomputer for 2024

6 points by amir, over 1 year ago

1 comment

treesciencebot, over 1 year ago
Seems like the underlying die is basically the same as the H100, just with a wider memory bus (and possibly a changed IMC?). That's very nice to see: with the H100, especially for inference workloads, memory bandwidth was always the bottleneck for us, so the $/perf was never there compared to A100s. Assuming this replaces the H100 and the price ends up somewhat similar, we might finally be able to use them for our own inference workloads.
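The bandwidth-bound claim is easy to sanity-check with back-of-envelope roofline math: during single-batch LLM decoding, every weight must be streamed from HBM once per generated token, so tokens/sec is capped by bandwidth rather than FLOPs. A minimal sketch, assuming approximate public bandwidth figures (~3.35 TB/s for H100 SXM, ~4.8 TB/s for H200) and a hypothetical 13B-parameter fp16 model:

```python
# Roofline-style ceiling for bandwidth-bound LLM decoding.
# Bandwidth numbers are approximate public specs; the model size
# is a hypothetical example, not from the comment above.

def decode_tokens_per_sec(model_bytes: float, hbm_bw_bytes_per_sec: float) -> float:
    """Upper bound on tokens/sec when decode is purely memory-bandwidth-bound:
    each token requires streaming all weights from HBM once."""
    return hbm_bw_bytes_per_sec / model_bytes

MODEL_BYTES = 13e9 * 2   # 13B params at fp16 (2 bytes each) -- hypothetical
H100_BW = 3.35e12        # ~3.35 TB/s HBM3 (approx. public spec)
H200_BW = 4.8e12         # ~4.8 TB/s HBM3e (approx. public spec)

h100_ceiling = decode_tokens_per_sec(MODEL_BYTES, H100_BW)
h200_ceiling = decode_tokens_per_sec(MODEL_BYTES, H200_BW)
print(f"H100 ceiling: {h100_ceiling:.0f} tok/s, H200 ceiling: {h200_ceiling:.0f} tok/s")
```

Under these assumptions the speedup tracks the bandwidth ratio (~1.43x) with no change to the compute die, which is consistent with the comment's read of the H200.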