TechEcho
Deploy dedicated DeepSeek 32B on L40 GPUs ($8/hour)
19 points by wfalcon 4 months ago | 6 comments
woodr77
3 months ago
Everyone's been saying I'd need H100s for this. L40s are way easier for me to get my hands on. Great news.
ashenWon
4 months ago
Is this running Ollama, vLLM, or SGLang under the hood? Curious about these performance numbers.
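[Editor's note: for readers wondering what a vLLM deployment of this model would look like, here is a minimal sketch. The model name and flag values are assumptions, not taken from the post; a 32B model in FP16 is roughly 64 GB of weights, so it would need to be sharded across two 48 GB L40s.]

```shell
# Hypothetical vLLM launch for the DeepSeek R1 32B distill on 2x L40.
# --tensor-parallel-size 2 splits the weights across both GPUs;
# --max-model-len caps the context to keep KV-cache memory in check.
vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-32B \
  --tensor-parallel-size 2 \
  --max-model-len 8192
```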
lmilad
4 months ago
How well does the DeepSeek R1 Qwen 32B distill handle generating long pieces of text?
tchaton84
4 months ago
Does it support the largest DeepSeek model?
yewnork
4 months ago
curious the performance / price tradeoffs between deepseek-r1 671b, 70b, 32b
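[Editor's note: a back-of-envelope sketch of the memory side of that tradeoff, not from the post. It counts weight memory only (parameters x bytes per parameter), ignoring KV cache and activations; L40 capacity is 48 GB.]

```python
# Rough weight-memory estimate for the model sizes mentioned above.
L40_GB = 48  # VRAM per NVIDIA L40

def weight_gb(params_billions, bytes_per_param=2):
    """Weights-only memory in GB, assuming FP16/BF16 (2 bytes/param)."""
    return params_billions * bytes_per_param

for size in (671, 70, 32):
    gpus = -(-weight_gb(size) // L40_GB)  # ceiling division
    print(f"{size}B: ~{weight_gb(size)} GB weights, >= {gpus} L40s")
```

By this crude measure the 32B distill (~64 GB) just fits on two L40s, 70B needs at least three, and the full 671B model is out of reach for a small L40 box even before KV cache is counted.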
neilbhatt
4 months ago
nice, i can actually use my AWS startup creds