Tell HN: Don't just call fine-tuning “training”

2 points, by sbussard, about 2 years ago
It's easy to misunderstand claims of running LLMs locally, as if anyone can write the next ChatGPT on their laptop.

Even though fine-tuning is a type of training, it is not the hard part, so one solution is to communicate more clearly and always call fine-tuning fine-tuning. There are a lot of new people wanting to get into the field, and clarity in your claims will help us out.

Thanks
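To make the distinction concrete, here is a minimal sketch of what "fine-tuning" usually means in practice, assuming the Hugging Face `transformers` and `datasets` libraries; the checkpoint and dataset names are illustrative, not prescriptive. The expensive pretraining is already baked into the downloaded weights; the loop below only adapts them to a small labeled task.

```python
# Sketch: fine-tuning vs. pretraining. The pretrained checkpoint (the hard,
# expensive part) is downloaded ready-made; we only run a short, cheap
# training pass on a small task-specific dataset.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

checkpoint = "distilbert-base-uncased"  # weights someone else already pretrained
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# A few thousand labeled examples are enough to fine-tune; pretraining the
# same architecture from scratch would need billions of tokens and a cluster.
train_data = load_dataset("imdb", split="train[:2000]")
train_data = train_data.map(
    lambda batch: tokenizer(
        batch["text"], truncation=True, padding="max_length", max_length=256
    ),
    batched=True,
)

args = TrainingArguments(
    output_dir="finetune-out",
    num_train_epochs=1,
    per_device_train_batch_size=8,
)
Trainer(model=model, args=args, train_dataset=train_data).train()
```

Pretraining the same model, by contrast, would mean random-initializing the weights and training on a web-scale corpus, which is exactly the part no laptop handles.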

1 comment

coxomb, about 2 years ago
Well, if the LLM is closed and proprietary, there is no insight into how training data is even used. It's just a black box we have to use blindly and 'hope' the designers are using a blend of fine-tuning coupled with better training data.