Ask HN: How will LLMs work on an entire codebase?

1 point by gettodachoppa about 1 year ago
Like many of you, I use ChatGPT for specific questions, completing a function from comments, etc. But I'm reading that LLMs will soon become actual developers.

How can that be? Let's set aside quality, hallucinations, etc. The largest context window from an accessible/affordable LLM is 32k tokens (Mixtral or GPT-4). That's barely enough for a TODO app, let alone a real project. The smallest project I work on, a desktop app, has 60k LOC / 6M characters / 1.5M tokens.

So what changes are coming that would allow an LLM to modify an existing codebase, e.g. to modify a feature and write its tests? (Without having to spoonfeed it the perfect context the way we do now in ChatGPT.)
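The 6M characters / 1.5M tokens figure implies roughly 4 characters per token. A minimal sketch of that back-of-the-envelope math, assuming the 4-chars-per-token heuristic and a hypothetical project path and file extensions:

```python
# Rough sketch: estimate whether a codebase fits in a 32k-token context window,
# using the ~4 characters-per-token heuristic implied by the numbers above.
# The project path and extensions are placeholders, not from the original post.
from pathlib import Path

CHARS_PER_TOKEN = 4          # rough heuristic for English text and source code
CONTEXT_WINDOW = 32_000      # tokens (e.g. Mixtral / GPT-4, as cited above)

def estimate_tokens(root: str, exts=(".py", ".ts", ".cs")) -> int:
    total_chars = 0
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in exts:
            total_chars += len(path.read_text(errors="ignore"))
    return total_chars // CHARS_PER_TOKEN

if __name__ == "__main__":
    tokens = estimate_tokens("./my-desktop-app")   # hypothetical project root
    print(f"~{tokens:,} tokens vs. a {CONTEXT_WINDOW:,}-token window "
          f"({tokens / CONTEXT_WINDOW:.0f}x over)")
```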

1 comment

hiddencost about 1 year ago
Your question is posed as a hypothetical, but the problem is already solved...

Add a dependency graph of different agents and tools. Use summarization (either selecting subsections or rewriting). Give it a scratch space. Use RAG.

Why would it need to load the whole code base into memory? We can build very complex architectures on top of this task that mix LLMs with software.

https://arxiv.org/abs/2402.09171

This isn't hypothetical; all of these
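A minimal sketch of the retrieval step this comment describes: instead of loading the whole codebase, index it in chunks and pull only the most relevant chunks into the prompt. Real systems would use learned embeddings, reranking, and an agent loop; the bag-of-words scoring, chunk size, project path, and query below are illustrative assumptions, not anyone's actual implementation.

```python
# Sketch: retrieve the few codebase chunks relevant to a task, rather than
# feeding the LLM all 1.5M tokens. Bag-of-words cosine similarity stands in
# for a real embedding model; chunking is by line count for simplicity.
import math
import re
from collections import Counter
from pathlib import Path

def chunk_file(path: Path, lines_per_chunk: int = 60):
    lines = path.read_text(errors="ignore").splitlines()
    for i in range(0, len(lines), lines_per_chunk):
        yield f"{path}:{i + 1}", "\n".join(lines[i:i + lines_per_chunk])

def vectorize(text: str) -> Counter:
    return Counter(re.findall(r"[A-Za-z_]\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(root: str, query: str, k: int = 5):
    q = vectorize(query)
    scored = [(cosine(q, vectorize(body)), label, body)
              for p in Path(root).rglob("*.py") if p.is_file()
              for label, body in chunk_file(p)]
    return sorted(scored, reverse=True)[:k]

# The top-k chunks (plus the task description) become the "perfect context"
# that otherwise has to be spoon-fed by hand; an agent can then iterate:
# edit, run tests, summarize results into its scratch space, retrieve again.
for score, label, _ in retrieve("./my-desktop-app", "feature flag parsing and its tests"):
    print(f"{score:.2f}  {label}")
```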