
What is a good machine to run LLMs on?

3 points by rabbitofdeath about 1 year ago
I would really just like to run OpenwebUI and a few models for local chat use. I'm not into training (yet) and am patient. What is a good, cost-effective way to get started?

4 comments

LorenDB about 1 year ago
VRAM is king if you want to run larger (and therefore more accurate) models. 12 GB VRAM will let you run 13B models, which are great for local chat, but you could get away with 8 GB VRAM to run an 8B model as well; I'd recommend Llama 3 8B for that.
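A rough back-of-the-envelope sketch of those numbers (the 1.2x overhead factor for KV cache and activations is an illustrative assumption, not a measured value):

```python
# Back-of-the-envelope VRAM estimate for a quantized LLM.
# The 1.2x overhead factor (KV cache, activations) is an
# illustrative assumption, not a measured value.

def estimate_vram_gb(params_billions: float, bits_per_weight: int,
                     overhead: float = 1.2) -> float:
    """Approximate memory needed to load and run a model's weights."""
    weight_gb = params_billions * bits_per_weight / 8  # 8B params at 8-bit ~= 8 GB
    return weight_gb * overhead

for name, params in [("Llama 3 8B", 8.0), ("13B model", 13.0)]:
    for bits in (4, 8):
        print(f"{name} at {bits}-bit: ~{estimate_vram_gb(params, bits):.1f} GB")
```

At 4-bit quantization a 13B model lands around 8 GB and an 8B model around 5 GB, which is why 12 GB and 8 GB cards are comfortable fits respectively.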
talldayo about 1 year ago
A cheap Nvidia GPU with lots of VRAM, like the 12 GB RTX 3060. About the fastest you can expect for the lowest amount of money.
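If you are unsure how much VRAM a card actually exposes, PyTorch can report it. A minimal sketch, assuming a CUDA build of PyTorch is installed:

```python
# Report the VRAM of the first CUDA GPU, if one is present.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"{props.name}: {props.total_memory / 1024**3:.1f} GB VRAM")
else:
    print("No CUDA-capable GPU detected.")
```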
FlyingAvatar about 1 year ago
Any M-series Mac with RAM larger than the model you want to run on it.
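Since an Apple Silicon GPU draws from the same unified memory pool as the CPU, total system RAM is roughly the budget for model weights. A minimal sketch to check it, assuming the third-party psutil package is installed:

```python
# Total unified memory on an Apple Silicon Mac; the GPU shares
# this pool with the OS and other apps, so usable headroom is less.
import psutil  # third-party: pip install psutil

total_gb = psutil.virtual_memory().total / 1024**3
print(f"Total unified memory: {total_gb:.1f} GB")
```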
rabbitofdeath about 1 year ago
Thank you!