
Exchanging more frontier LLM compute for higher accuracy in RAG systems

1 point by mskar 9 months ago

1 comment

mskar, 9 months ago
We're sharing some experiments in designing RAG systems via the open source PaperQA2 system (https://github.com/Future-House/paper-qa). PaperQA2's design is interesting because it isn't concerned with cost, so it uses expensive operations like agentic tool calling and LLM-based re-ranking and contextual summarization for each query.

Even though the costs are higher, we see that the RAG accuracy gains (in question-answering tasks) are worth it. Including LLM chunk re-ranking and contextual summaries in your RAG flow also makes the system robust to changes in chunk sizes, parsing oddities, and embedding model shortcomings. It's one of the largest drivers of performance we could find.
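To make the trade-off concrete, here is a minimal sketch of the LLM re-ranking plus contextual summarization step the comment describes. This is not PaperQA2's actual API; the function name, prompts, and model choice are hypothetical illustrations using the standard OpenAI Python client. Note the cost profile: two LLM calls per retrieved chunk, per query.

```python
# Hypothetical sketch of LLM-based chunk re-ranking with contextual
# summarization in a RAG flow. Not PaperQA2's real API; function name,
# prompts, and model are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

def rerank_and_summarize(question: str, chunks: list[str],
                         top_k: int = 5) -> list[str]:
    """Score each retrieved chunk against the question with an LLM,
    keep the top_k, then summarize each survivor in the context of
    the question. Costs two LLM calls per chunk."""
    scored = []
    for chunk in chunks:
        prompt = (
            f"Question: {question}\n\nExcerpt: {chunk}\n\n"
            "On a scale of 0-10, how relevant is this excerpt to the "
            "question? Reply with a single integer."
        )
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; any capable model works
            messages=[{"role": "user", "content": prompt}],
        )
        try:
            score = int(reply.choices[0].message.content.strip())
        except ValueError:
            score = 0  # unparseable replies rank last
        scored.append((score, chunk))

    summaries = []
    for _, chunk in sorted(scored, key=lambda s: s[0], reverse=True)[:top_k]:
        prompt = (
            f"Summarize this excerpt only as it relates to the question "
            f"'{question}'. If it is irrelevant, say so.\n\n{chunk}"
        )
        reply = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        summaries.append(reply.choices[0].message.content)
    return summaries
```

Because the final answer is generated from question-focused summaries rather than raw chunks, this style of pipeline is less sensitive to the exact chunk boundaries, parser quirks, or embedding model used for the initial retrieval, which matches the robustness claim above.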