科技回声 (Tech Echo)
Deploy dedicated DeepSeek 32B on L40 GPUs ($8/hour)
19 points | by wfalcon | 4 months ago | 6 comments
woodr77
4 months ago
Everyone said I'd need H100s for this. L40s are way easier for me to get my hands on. Great news.
ashenWon
4 months ago
Is this running ollama, vLLM, or SGLang under the hood? Curious about these performance numbers.
lmilad
4 months ago
How well does DeepSeek R1 distilled onto Qwen 32B handle generating long pieces of text?
tchaton84
4 months ago
Does it support the largest DeepSeek model?
yewnork
4 months ago
curious about the performance/price tradeoffs between deepseek-r1 671b, 70b, and 32b
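A rough back-of-the-envelope comparison: the dominant cost driver is simply how much VRAM the weights need, which scales with parameter count and precision. A minimal sketch (weights only, ignoring KV cache and activations; the parameter counts and bytes-per-parameter are illustrative assumptions, not measured figures from the post):

```python
# Rough VRAM needed for model weights alone, ignoring KV cache/activations.
# Parameter counts and precision are assumptions for illustration.
def weight_vram_gb(params_billion: float, bytes_per_param: float) -> float:
    """Estimate weight memory in GiB for a model of the given size."""
    return params_billion * 1e9 * bytes_per_param / 1024**3

for name, b in [("deepseek-r1 671b", 671), ("70b distill", 70), ("32b distill", 32)]:
    print(f"{name}: ~{weight_vram_gb(b, 2):.0f} GiB at FP16")
```

At FP16 the 32B distill comes out around 60 GiB of weights, which is why it fits on a pair of 48 GB L40s (or one with quantization), while the full 671B model is in a different hardware class entirely.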
neilbhatt
4 months ago
nice, i can actually use my AWS startup credits