
Ask HN: Does anyone have experience running LLMs on a Mac Mini M2?

6 points by etewiah, over 1 year ago
I would like to run an LLM locally so I can be absolutely sure that the data I send to it is private. Does anyone have experience doing this with the latest Mac Mini? Any insights will be very much appreciated. Thanks.

3 comments

mtmail, over 1 year ago
I ran the 13GB llamafile from https://simonwillison.net/2023/Nov/29/llamafile/ on an M3 without issues. 40 tokens per second output.
paeselhz, over 1 year ago
I bought a Mac Mini M2 last year to start playing around with some personal projects, and I did some tests using LM Studio running Mixtral models with pretty good throughput. I also tested OpenAI's Whisper models to do some transcriptions, and those ran fine as well.

I do, however, recommend that you upgrade the RAM; 8GB is barely enough as is, so getting at least 16GB would be better. (I don't recommend upgrading the SSD, though, since thanks to Thunderbolt 4 you can get a fast external SSD for half the price that Apple charges for storage.)
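As a rough sanity check on the RAM advice above, a quantized model needs on the order of (parameters × bits per weight ÷ 8) bytes, plus runtime overhead for the KV cache and buffers. A minimal back-of-the-envelope sketch (the 1.2× overhead factor is an illustrative assumption, not a measured figure):

```python
def model_ram_gb(params_billion: float, bits_per_weight: float,
                 overhead: float = 1.2) -> float:
    """Approximate RAM needed to load a quantized model, in GB.

    overhead is a rough illustrative factor covering the KV cache
    and runtime buffers; real usage varies by context length.
    """
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# A 7B model at 4-bit quantization fits comfortably in 8 GB of RAM,
# while Mixtral 8x7B (~47B total parameters) at 4-bit does not fit in 16 GB.
print(round(model_ram_gb(7, 4), 1))   # ~4.2 GB
print(round(model_ram_gb(47, 4), 1))  # ~28.2 GB
```

This is why unified memory, not CPU speed, is usually the binding constraint for local inference on a Mac Mini.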
geoah, over 1 year ago
Download LM Studio and you're done. Depending on the amount of RAM you have, you can run different models. Check out the Mixtral 8x7B ones for generally good results. https://lmstudio.ai/