Tech Echo (科技回声)

A tech-news platform built with Next.js, providing global technology news and discussion.


Deploy dedicated DeepSeek 32B on L40 GPUs ($8/hour)

19 points, by wfalcon, 4 months ago

6 comments

woodr77, 4 months ago
Everyone said I'd need H100s for this, but L40s are far easier for me to get my hands on. Great news.
ashenWon, 4 months ago
Is this running Ollama, vLLM, or SGLang under the hood? Curious about these performance numbers.
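The post doesn't say which serving engine it uses, so as a purely hypothetical sketch, a dedicated vLLM deployment of the 32B R1 distill sharded across four L40s might look like this (model ID and flag values are assumptions, not taken from the post):

```shell
# Hypothetical sketch -- the post does not confirm vLLM or these settings.
# Serve the DeepSeek R1 Qwen-32B distill, tensor-parallel across 4 L40s.
vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-32B \
  --tensor-parallel-size 4 \
  --max-model-len 32768
```

Tensor parallelism is the usual choice here because the FP16 weights alone exceed a single L40's 48 GB of VRAM.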
lmilad, 4 months ago
How well does the DeepSeek R1 Qwen 32B distill handle generating long pieces of text?
tchaton84, 4 months ago
Does it support the largest DeepSeek model?
yewnork, 4 months ago
Curious about the performance/price tradeoffs between DeepSeek-R1 671B, 70B, and 32B.
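For rough intuition on that tradeoff, weight memory scales linearly with parameter count. A back-of-envelope sketch (FP16 weights only; real deployments also need KV-cache and activation headroom, so these are lower bounds):

```python
# Back-of-envelope VRAM math: FP16 stores 2 bytes per parameter.
# Ignores KV cache and activations, so these are lower bounds.
def weight_vram_gb(params_billions: float, bytes_per_param: int = 2) -> float:
    """Gigabytes of VRAM needed just to hold the model weights."""
    return params_billions * bytes_per_param  # 1e9 params * bytes / 1e9 = GB

L40_VRAM_GB = 48  # a single NVIDIA L40

for name, params in [("R1 671B", 671), ("70B distill", 70), ("32B distill", 32)]:
    gb = weight_vram_gb(params)
    print(f"{name}: ~{gb:.0f} GB of weights, needs >= {gb / L40_VRAM_GB:.1f} L40s")
```

By this estimate the 32B distill needs roughly 64 GB for weights (at least two L40s), the 70B about 140 GB, while the full 671B model is far beyond a small L40 node even before quantization.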
neilbhatt, 4 months ago
Nice, I can actually use my AWS startup credits.