
Wasting Inferences with Aider

139 points, by Stwerner, about 1 month ago

21 comments

fxtentacle, about 1 month ago
For me, a team of junior developers that refuses to learn from its mistakes is the fuel of nightmares. I'm stuck in a loop where every day I need to explain to a new hire why they made the exact same beginner's mistake as the previous person did the day before. Eventually, I'd rather spend half an hour of my own time than explain the problem once more...

Why anyone thinks having 3 different PRs for each Jira ticket might boost productivity is beyond me.

Related anime: I May Be a Guild Receptionist, But I'll Solo Any Boss to Clock Out on Time
denidoman, about 1 month ago
The current challenge is not creating a patch, but verifying it.

Testing a fix in a big application is a very complex task. First of all, you have to reproduce the issue to verify the steps (or create them, because many issues don't contain a clear description). Then you should switch to the fixed version and make sure the issue no longer exists. Finally, you should apply a little exploratory testing to make sure the fix hasn't corrupted neighbouring logic (deep application knowledge is required for this).

To perform these steps you have to deploy staging with the original/fixed versions, or run everything locally and do the pre-setup (create users, entities, etc. to reach the corrupted state).

This is a very challenging area for current agents. Right now they simply can't do these steps: their mental models just aren't ready for such a level of integration into the app and infra. And creating 3/5/10/100 unverified pull requests just slows down the software development process.
wrs, about 1 month ago
I’ve been using Cursor and Code regularly for a few months now and the idea of letting three of them run free on the codebase seems insane. The reason for the chat interface is that the agent goes off the rails on a regular basis. At least 25% of the time I have to hit the stop button and go back to a checkpoint because the automatic lawnmower has started driving through the flowerbed again. And paradoxically, the more capable the model gets, the more likely it seems to get random ideas of how to fix things that aren’t broken.
tekacs, about 1 month ago
Over the last two days, I've built out support for autonomy in Aider (a lot like Claude Code) that hybridizes with the rest of the app:

https://github.com/Aider-AI/aider/pull/3781

Edit: In case anyone wants to try it, I uploaded it to PyPI as `navigator-mode`, until (and if!) the PR is accepted. By "I", I mean that it uploaded itself. You can see the session where it did that here: https://asciinema.org/a/9JtT7DKIRrtpylhUts0lr3EfY

Edit 2: And as a Show HN, too: https://news.ycombinator.com/item?id=43674180

And, because Aider is already an amazing platform without the autonomy, it's very easy to use the rest of Aider's options, like using `/ask` first, or `/code` or `/architect` for specific tasks [1]. But if you start in `/navigator` mode (which I built, here), you can just... ask for a particular task to be done and... wait, and it'll often "just get done".

It's decidedly expensive to run an LLM this way right now (Gemini 2.5 Pro is your best bet), but if it's $N today, I don't doubt it'll be $0.N by next year.

I don't mean to speak in meaningless hype, but I think a lot of the folks who speak to LLMs' "inability" to do things are also spending relatively cautiously on them, when tomorrow's capabilities are often already here, just pricey.

I'm definitely still intervening as it goes (as in the Devin demos, say), but I'm also having LLMs relatively autonomously build out large swathes of functionality, the kind I would put off or avoid without them. I wouldn't call it a programmer replacement any time soon (it feels far from that), but I'm solo finishing architectures now that I know how to build, but where delegating them to a team of senior devs would've resulted in chaos.

[1]: Also, for anyone who hasn't tried it and doesn't like TUIs, note that Aider has a web mode and a "watch mode", where you can use your normal editor, and if you leave a comment like "# make this darker ai!", Aider will step in and apply the change. This is even fancier with navigator/autonomy.
pton_xd, about 1 month ago
The trend with LLMs so far has been: if you have an issue with the AI, wait 6 months for a more advanced model. Cobbling together workarounds for their deficiencies is basically a waste of effort.
danenania, about 1 month ago
Plandex [1] uses a similar "wasteful" approach for file edits (note: I'm the creator). It orchestrates a race between diff-style replacements plus validation, writing the whole file with the edits incorporated, and (on the cloud service) a specialized model plus validation.

While it sounds wasteful, the calls are all very cheap since most of the input tokens are cached, and once a valid result is achieved, the other in-flight requests are cancelled. It's working quite well, allowing quick results on easy edits, with fallbacks for more complex changes/large files that don't feel incredibly slow.

1 - https://github.com/plandex-ai/plandex
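The race-and-cancel pattern danenania describes can be sketched with asyncio. This is purely illustrative: the strategy names, timings, and results below are stand-ins, not Plandex's actual code, and a real version would validate each result before declaring a winner:

```python
import asyncio

# Stand-in strategies: in the real system each is an LLM call plus validation.
async def diff_style_edit(task):
    await asyncio.sleep(0.01)       # fast path: targeted search/replace
    return "diff edit"

async def whole_file_rewrite(task):
    await asyncio.sleep(0.05)       # slower fallback: rewrite the whole file
    return "whole-file edit"

async def race(task):
    tasks = [asyncio.create_task(s(task))
             for s in (diff_style_edit, whole_file_rewrite)]
    done, pending = await asyncio.wait(tasks,
                                       return_when=asyncio.FIRST_COMPLETED)
    for t in pending:
        t.cancel()                  # first valid result wins; cancel the rest
    return done.pop().result()

winner = asyncio.run(race("apply edit"))
print(winner)  # diff edit (the fast path finishes first)
```

With cached input tokens, the cancelled slow path costs little, which is why racing beats trying strategies sequentially.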
kgeist, about 1 month ago
I've noticed that large models from different vendors often end up converging on more or less the same ideas (probably because they're trained on more or less the same data). A few days ago, I asked both Grok and ChatGPT to produce several stories with an absurd twist, and they consistently generated the same twists, differing only in minor details. Often, they even used identical wording!

Is there any research into this phenomenon? Is code generation any different? Isn't there a chance that several "independent" models might produce the same (say, faulty) result?
joshstrange, about 1 month ago
This is a very interesting idea, and I really should consider Aider in the "scriptable" sense more; I only use it interactively.

I might add another step after each PR is created where other agents review and compare the results (maybe have the other 2 agents review the first agent's code?).
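On the "scriptable" angle: Aider can be driven non-interactively from a script. A minimal sketch follows; `--message` and `--yes` are Aider flags at the time of writing, but check `aider --help` for your version, and the model name here is just a placeholder:

```python
import subprocess

def aider_cmd(prompt, files, model="gpt-4o"):
    """Build a one-shot, non-interactive Aider invocation."""
    return ["aider", "--yes", "--model", model, "--message", prompt, *files]

cmd = aider_cmd("Fix the off-by-one error in pagination", ["views.py"])
print(cmd[0])  # aider
# To actually run it (edits and commits in the current git repo):
# subprocess.run(cmd, check=True)
```

The cross-review step the comment suggests would then be a second scripted invocation whose prompt includes the first agent's diff.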
DeathArrow, about 1 month ago
I don't really think having an agent fleet is a much better solution than having a single agent.

We would like to think that having 10 agents working on the same task will improve the chances of success 10x.

But I would argue that some classes of problems are hard for LLMs, and where one agent fails, 10 agents or 100 agents will fail too.

As an easy example I suggest LeetCode hard problems.
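The intuition can be made concrete: extra agents only help if their failures are independent, in which case the success probability grows as 1 - (1 - p)^n. For a problem class the model fundamentally can't solve, p is near zero, and in practice failures are correlated anyway (see kgeist's convergence observation above), so the fleet fails together:

```python
def p_at_least_one(p, n):
    """P(at least one success in n independent attempts, each with prob p)."""
    return 1 - (1 - p) ** n

# A tractable problem: one agent succeeds 30% of the time.
print(round(p_at_least_one(0.3, 10), 3))    # 0.972: 10 agents look great

# A hard problem: per-attempt success is near zero, so 10 attempts
# barely move the needle even under the (generous) independence assumption.
print(round(p_at_least_one(0.001, 10), 3))  # 0.01
```

The independence assumption is the crux: models trained on similar data tend to make correlated mistakes, which pushes the real gain below this upper bound.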
IshKebab, about 1 month ago
We're going to have no traditional programming in 2 years? Riiight.

It would also be nice to see a demo where the task was something that I couldn't have done myself in essentially no time. Like, what happens if you say "tasks should support tags, and you should be able to filter/group tasks by tag"?
canterburry, about 1 month ago
I wouldn't be surprised if someone tries to leverage this with their customer feature-request tool.

Imagine having your customers write feature requests for your SaaS that immediately trigger code generation and a PR. A virtual environment with that PR is spun up and served to that customer for feedback and refinement. Loop until the customer has implemented the feature they would like to see in your product.

Enterprise plan only, obviously.
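Sketched as a loop, that workflow looks roughly like this. All names here are hypothetical glue: `generate_pr`, `deploy_preview`, and `get_feedback` stand in for the code-gen agent, the ephemeral environment, and the customer-facing feedback UI:

```python
def feature_request_loop(request, generate_pr, deploy_preview, get_feedback,
                         max_rounds=5):
    """Iterate PR -> preview env -> customer feedback until accepted."""
    for _ in range(max_rounds):
        pr = generate_pr(request)
        url = deploy_preview(pr)            # ephemeral env for this PR
        accepted, refinement = get_feedback(url)
        if accepted:
            return pr                       # customer-approved change
        request = refinement                # loop with the refined ask
    return None                             # give up; escalate to a human

# Stubs: the customer refines once, then accepts on the second round.
feedback = iter([(False, "make the toggle per-user"), (True, None)])
pr = feature_request_loop("add dark mode",
                          generate_pr=lambda r: f"PR: {r}",
                          deploy_preview=lambda p: "https://preview.example",
                          get_feedback=lambda u: next(feedback))
print(pr)  # PR: make the toggle per-user
```

The `max_rounds` cap and the human escalation path are doing real work here: without them, an unhappy customer could burn inference indefinitely.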
aqme28, about 1 month ago
It's cute, but I don't see the benefit. In my experience, if one LLM fails to solve a problem, the other ones won't be too different.

If you picked a problem where LLMs are good, now you have to review 3 PRs instead of just 1. If you picked a problem where they're bad, now you have 3 failures.

I think there are not many cases where throwing more attempts at the problem is useful.
emorning3, about 1 month ago
I see "Waste Inferences" as a form of abductive reasoning.

I see LLMs as a form of inductive reasoning, and so I can see how WI could extend LLMs.

Also, I have no doubt that there are problems that can't be solved with just an LLM but would need abductive extensions.

The same comments apply to deductive (logical) extensions to LLMs.
phamilton, about 1 month ago
Sincere question: has anyone figured out how we're going to code-review the output of an agent fleet?
precompute, about 1 month ago
Feels like a way to live with a bad decision rather than getting rid of it.
lherron, about 1 month ago
I love this! I have a similar automation for moving a feature through ideation/requirements/technical design, but I usually dump the result into Cursor for the last mile and to save on inference. Seeing the cost analysis is eye-opening.

There's probably also some upside to running the same model multiple times. I find Sonnet will sometimes fail; I'll roll back and try again with the same prompt but a clean context, and it will succeed.
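The roll-back-and-retry tactic lherron describes is cheap to automate. A sketch, where `run_agent` stands in for one full agent attempt from a clean context:

```python
def retry_with_clean_context(run_agent, max_tries=3):
    """Re-run the same prompt from scratch until one attempt succeeds."""
    for attempt in range(1, max_tries + 1):
        result = run_agent()        # fresh context each time: no carry-over
        if result is not None:
            return result, attempt  # keep the first successful run
    return None, max_tries

# Stub agent: fails once, then succeeds on the second clean run.
outcomes = iter([None, "working patch"])
result, tries = retry_with_clean_context(lambda: next(outcomes))
print(result, tries)  # working patch 2
```

Starting from a fresh context matters: retrying within the same conversation tends to anchor the model on its earlier failed approach.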
KTibow, about 1 month ago
I wonder if using thinking models would work better here. They generally have less variance and consider more options, which could achieve the same goal.
billmalarky, about 1 month ago
I've been lucky enough to have a few conversations with Scott a month or so ago, and he is doing some really compelling work around the AI SDLC and creating a factory-line approach to building software. Seriously folks, I recommend following this guy closely.

There's another guy in this space I know who's doing similarly incredible things, but he doesn't really speak about it publicly, so I don't want to discuss it without his permission. I'm happy to make an introduction for those interested; just hmu (check my profile for how).

Really excited to see you on the FP of HN, Scott!
evertedsphere, about 1 month ago
Love to see "Why It Matters" turn into the heading equivalent of "delve" in body text (although different, in that the latter is a legitimate word while the former is a "we need to talk about…"-level turn of phrase).
dimal, about 1 month ago
Makes me think of The Sorcerer's Apprentice.
charlie0, about 1 month ago
The 10 cents is BS. It was only that cheap because it was a trivial bug. A non-trivial bug requires context, and the more context something requires, the more expensive it gets. Also, once you are working with larger apps, you have to pick the context yourself, especially with LLMs that have smaller windows.
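The point about context is just arithmetic: input cost scales linearly with the tokens you must include, so a fix that needs real slices of a large codebase costs far more than a self-contained one. The prices below are made up for illustration:

```python
PRICE_PER_M_INPUT = 2.00   # dollars per million input tokens (illustrative)

def input_cost(context_tokens):
    """Input-side cost of one agent attempt, in dollars."""
    return context_tokens / 1_000_000 * PRICE_PER_M_INPUT

trivial_bug = input_cost(5_000)     # tiny, self-contained fix
deep_bug = input_cost(250_000)      # needs real context from a large codebase

print(f"${trivial_bug:.2f} vs ${deep_bug:.2f}")  # $0.01 vs $0.50
```

Multiply the larger figure by several agents, several retries, and thinking tokens, and "wasting inferences" on non-trivial bugs stops looking like pocket change.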