科技回声 (Tech Echo)
A tech news platform built with Next.js, providing global tech news and discussion content.

© 2025 科技回声 (Tech Echo). All rights reserved.

Ask HN: What is everyone doing with all of these GPUs?

11 points | by formercoder | 9 months ago
I know accelerator demand is blowing up. However, there are only a few players training foundation models. What are the core use cases everyone else has for all of these accelerators? Fine tuning, smaller transformer models, general growth in deep learning?

3 comments

lafeoooooo | 9 months ago
Different scenarios have varying demands for GPU types. For tasks like model inference or basic operations, a CPU or even on-device solutions (mobile, web) might suffice.

When a GPU is necessary, common choices include T4, 3090, P10, V100, etc., selected based on factors like price, required computing power, and memory capacity.

Model training also has diverse needs based on the specific task. For basic, general-purpose vision tasks, 1 to 50 cards like the 3090 might be enough. However, cutting-edge areas like visual generation and LLMs often require A100s or A800s, scaling from 1 to even thousands of cards.
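The memory-capacity factor mentioned above can be made concrete with a rough back-of-envelope estimate: inference VRAM is dominated by the weights (parameter count times dtype size) plus headroom for activations and KV cache. The function name and the 1.2x overhead factor below are illustrative assumptions, not figures from the comment.

```python
def estimate_vram_gb(num_params: float,
                     bytes_per_param: int = 2,
                     overhead: float = 1.2) -> float:
    """Rough inference VRAM estimate in GiB.

    weights = num_params * bytes_per_param (fp16 = 2 bytes/param),
    scaled by a rule-of-thumb overhead factor for activations and
    KV cache. This is a sizing heuristic, not an exact figure.
    """
    return num_params * bytes_per_param * overhead / 1024**3

# A 7B-parameter model in fp16 comes out around 15-16 GiB: it fits
# on a 24 GiB 3090 but not a 16 GiB T4 without quantization.
print(round(estimate_vram_gb(7e9), 1))
```

Dropping to 8-bit weights (`bytes_per_param=1`) roughly halves the footprint, which is one reason the same model can land on very different cards.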
talldayo | 9 months ago
Inference. 99% of the customers that aren't buying GPUs to train on are either using it for inference or putting it in a datacenter where inference is the intended use-case.
Comment #41276327 (not loaded)
Comment #41277052 (not loaded)
the__alchemist | 9 months ago
I'm playing UE5 games, and doing some computational chem with CUDA.