科技回声 (Tech Echo) — a tech news platform built with Next.js, serving global tech news and discussion.


Ask HN: Retrieval-Augmented Generation vs. Fine-Tuning – Which Is the Future?

5 points by BohuTANG over 1 year ago

2 comments

srirangr over 1 year ago
A combination of RAG and fine-tuning will be much more useful IMHO.

Think of LLMs as generic models that can answer anything, but with lower accuracy.

You fine-tune them to learn the specifics of a particular domain. This way LLMs can provide more factual answers to domain-specific questions.

Finally, you can add RAG on top of fine-tuned models to get answers in the context of your organisation or specific documents.

How all this will pan out remains to be seen, but surely there are interesting applications to come out of these technologies.
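The layering the comment describes — a (possibly fine-tuned) model with retrieval bolted on top — can be sketched roughly as follows. This is a toy illustration, not a real pipeline: the bag-of-words retriever and the document list are made up for the example, and the final prompt would normally be sent to whatever LLM you fine-tuned rather than printed.

```python
# Minimal RAG sketch: retrieve the most relevant document for a query,
# then prepend it as context to the prompt handed to the model.
# The retriever here is a deliberately simple bag-of-words cosine match;
# real systems would use embeddings and a vector index.
from collections import Counter
import math

DOCS = [
    "Our refund policy allows returns within 30 days of purchase.",
    "The API rate limit is 100 requests per minute per key.",
    "Support is available Monday through Friday, 9am to 5pm.",
]

def _vec(text):
    # Tokenize naively by whitespace and count term frequencies.
    return Counter(text.lower().split())

def _cosine(a, b):
    # Cosine similarity between two term-frequency vectors.
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(query, docs, k=1):
    # Rank documents by similarity to the query; keep the top k.
    q = _vec(query)
    ranked = sorted(docs, key=lambda d: _cosine(q, _vec(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, docs):
    # RAG step: inject retrieved context ahead of the question.
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

prompt = build_prompt("What is the API rate limit?", DOCS)
print(prompt)
```

The fine-tuning the comment mentions would happen offline, before this step; retrieval then grounds that tuned model in organisation-specific documents at query time.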
apothegm over 1 year ago
Both. They serve different purposes.