科技回声

A tech news platform built with Next.js, providing global tech news and discussion.


© 2025 科技回声. All rights reserved.

Ask HN: Do you still make yourself believe some local LLM was helpful?

4 points | by Haeuserschlucht | 2 months ago
Contrary to what I wrote before, I took a thorough look at my experience and changed my mind. Now I see it like this:

I tested every model available on huggingface.com, and none of them made it into any kind of regular use for me. The main reasons are useless, heavily biased replies (even with so-called uncensored models) and hallucinations. Nothing beats cloud LLMs when it comes to quality, and unfortunately, if you have privacy-sensitive data, you had better not rely on AI to deal with it, because local LLMs won't make you happy anytime soon. You will just notice how much time you wasted hoping to achieve something with local LLMs that they won't deliver. I wrote this out of anger that nobody addresses this elephant in the room.

3 comments

dtagames, 2 months ago

I agree that it's niche, but it's also typical of us programmers to declare "my way is the only way" even when we're into a niche thing. Look at the number of people who set up home movie streaming servers vs. pay for Netflix, for example.

To the platform victor go the spoils. I've had the greatest leaps recently with Cursor, which is not only a terrific RAG application but also integrates several models. Does anyone who is not in that business want to write and maintain that? No. Hence, platforms.
meristohm, 2 months ago
I've yet to find any LLM helpful. I'd far rather invest in early-childhood family education to reduce trauma and set us up to enjoy lifelong learning and a sense of purpose, rather than school as preparation for making money for bosses. The resource cost of machine learning is an opportunity cost; invest in people, culture, and ecological diversity, not things.
caprock, 2 months ago

It's very clear that more computation (and RAM) on larger models will produce better results when talking about general model use. This will be true for a while.

Eventually the marginal benefits might plateau, in combination with enough optimizations to make local use outweigh any cloud models.

More specific and narrow use cases are a different matter.