
Ask HN: People with new Macs / computers with GPUs, do you run LLMs locally?

3 points | by vishalontheline | 2 days ago
I am considering finally upgrading to a new computer, most likely an M4 Mac, with the primary goal of running coding assistants and training my own models locally.

Is this a good idea? Have you tried it? How's the performance?

Thank you!

2 comments

fiiv | 2 days ago
I do this. Ollama makes it very easy: just pull the model you want. The great thing is the ability to test different models on the same tasks; there's a huge difference between comparable models.

You can set it up in your editor of choice. I use Zed, where Ollama is simply listed as one of the providers you can choose.

In terms of performance, it works decently well.
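A minimal sketch of that workflow, assuming Ollama is serving on its default local port (11434) and that a model has already been pulled; the model name "llama3" here is chosen purely for illustration (pull it first with "ollama pull llama3"):

```python
# Minimal sketch: querying a locally running Ollama server from Python.
# Assumes Ollama is installed and serving on its default port (11434),
# and that the model (here "llama3", an illustrative choice) was pulled
# beforehand with `ollama pull llama3`.
import json
import urllib.request

def ask_local_llm(prompt: str, model: str = "llama3") -> str:
    """Send one non-streaming completion request to the local Ollama API."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # return a single JSON object, not a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask_local_llm("Explain what a B-tree is in two sentences."))
```

Editor integrations like the one described above typically talk to this same local HTTP API under the hood.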
bookworm123 | 2 days ago
Cool idea. However, when I have to create an account for a service just to test it out, I naturally decline. Maybe you could upload a sample document for people to play with?