Literate Development: AI-Enhanced Software Engineering

38 points | by maga | about 2 months ago

8 comments

btbuildem, about 2 months ago
This is the most insightful article on the intersection of LLMs and software development I have read to date. There is zero fluff here -- every point is key, every observation relevant. In a time of paradigm shift, this is a fantastic guide on how to stay in the driver's seat and most effectively leverage these tools. The inevitable shift here is upwards, away from the gritty detail of code.

Documentation (as in "design doc", not "API reference") is the absolute initial entry point: iterating on the problem statement, stakeholder requirements, business constraints, etc., until a coherent plan emerges, then organizing it at a high level. Combining this with "deep research" mode can yield fantastic results, as it draws on existing solutions and best practices across a vast body of knowledge.

The trick then is a sliding-scope context window: with a high-level design doc in context, iterate to produce an architecture document. Once that is reviewed and hand-tuned, you can use it in turn to produce more detailed technical designs for various components of the system. And so on down the scale of granularity, until you're working with code. The important part is to never try to hold the entire thing in scope; instead, balance the context and granularity so that there's enough information to guide the LLM and enough space to grow the next tier of the solution. Work in a manner that creates natural interfaces where artifacts can be decoupled. Piecemeal, not all at once.

The test aspect is also incredibly relevant: as you're able to work across a vastly larger codebase, moving much more quickly, tests become truly invaluable. And they can be squared against the original design documentation to gauge how well the produced artifacts fulfill the original intent.

I'll acknowledge that this is most relevant in the context of greenfield projects, but LLMs' ability to ingest and summarize code makes them useful tools in dealing with legacy solutions too. The point about documentation stands; adding features or fixing issues in existing codebases is the bottom of the pyramid; with these tools you can now stir things at the PM level and better shape both the understanding of problems and the approaches to solving them.

It's a very exciting time; it feels like having worked by hand for decades, only to now have access to power tools and heavy machinery.
Comment #43526786 not loaded
Comment #43527140 not loaded
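
A rough sketch of the sliding-scope idea described in the comment above (the function and file names here are illustrative, not taken from the article or the comment): keep only the reviewed upper-tier document plus a brief for the next tier in the prompt, rather than the whole system.

    def build_tier_prompt(upper_tier_doc: str, component_brief: str) -> str:
        """Combine a reviewed higher-level document with a brief for the next,
        more granular tier, so the model sees intent without the full codebase."""
        return (
            "Reviewed high-level design:\n"
            f"{upper_tier_doc}\n\n"
            f"Component to elaborate next: {component_brief}\n"
            "Write a detailed technical design for this component only."
        )

    # e.g. build_tier_prompt(open("docs/architecture.md").read(), "billing service")

Each tier's output is reviewed and hand-tuned before it becomes the input for the next, narrower prompt.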
btown, about 2 months ago
A rule of thumb I’ve started using is: “if your function name and arguments aren’t good enough to have Copilot tab completion make a cogent attempt at implementing the full behavior, you need more comments/docstrings and/or you need to create utility methods that break down the complexity.”

Alternatively: “if you tab complete a docstring and it doesn’t match what you expect, your code can be clearer and you should add comments and rename variables accordingly.”

This isn’t hard and fast. Sometimes it risks yak shaving. But if an LLM can’t understand your intent, there’s a good chance a colleague or even your future self might similarly struggle.
Comment #43532626 not loaded
Comment #43538513 not loaded
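
To make the rule of thumb concrete, here is a minimal sketch (the function, names, and log format are invented for illustration, not taken from the comment): the signature and docstring alone should give a completion model enough intent to draft a plausible body.

    import re
    from collections import Counter

    def top_n_error_codes(log_lines: list[str], n: int = 5) -> list[tuple[str, int]]:
        """Return the n most frequent 4xx/5xx HTTP status codes found in
        combined-format access-log lines, as (code, count) pairs sorted by count."""
        pattern = re.compile(r'" ([45]\d{2}) ')  # status code follows the quoted request
        counts = Counter(
            match.group(1)
            for line in log_lines
            if (match := pattern.search(line))
        )
        return counts.most_common(n)

If a model given only the signature and docstring drifts far from this kind of body, that is a hint the name, arguments, or docstring are underspecified.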
siquick, about 2 months ago
My strategy is generally to have a back and forth on the requirements with the LLM for 3-4 prompts, then get it to write a summary, and then a plan. Then get it to convert the plan to a low-level todo list and write it to TODO.md.

Then I get it to go through each section of the todo list and check each item off as it completes it. This generally results in completed tasks that stay on track, but it also means that I can stop halfway through and go back to the tasks without having to prompt from the start again.
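
As an illustration of that workflow, a TODO.md along these lines (the feature and items are hypothetical, not from the comment) gives the model a checklist it can work through and tick off item by item:

    ## TODO: saved-searches feature
    - [x] 1. Add saved_searches table and migration
    - [x] 2. CRUD endpoints (POST/GET/DELETE /api/saved-searches)
    - [ ] 3. Wire the list into the sidebar component
    - [ ] 4. Unit tests for the repository layer
    - [ ] 5. Update the API docs

Because progress lives in the file rather than in the chat history, a session can be resumed from the unchecked items without replaying the original prompts.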
andy24, about 2 months ago
This article describes a method for an LLM-assisted coding process but doesn’t provide anything of substance to back it up. It’s unclear whether the suggestions and techniques mentioned in the article came from personal experience or have otherwise been verified or experimented with on a real team and a real project.
Comment #43526575 not loaded
Comment #43527093 not loaded
jamil7, about 2 months ago
I’ve landed on a few similar techniques and have been using unit tests quite a bit as a guardrail for LLMs. One thing that’s useful when using aider is alternating between the /add and /read-only contexts so that it can only edit the tests or the code but “see” both.
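
A minimal sketch of that guardrail pattern (the function and tests are invented for illustration): the tests pin the intended behaviour, and in an aider session one side is added with /add while the other is kept in /read-only, so the model can see both but edit only one.

    import re

    def make_slug(title: str) -> str:
        """Lowercase a title and collapse runs of non-alphanumerics into '-'."""
        return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

    # e.g. tests/test_slug.py -- the file the model is (or is not) allowed to edit
    def test_make_slug_collapses_punctuation():
        assert make_slug("Hello, World!") == "hello-world"

    def test_make_slug_strips_edge_dashes():
        assert make_slug("  --Already Slugged--  ") == "already-slugged"

If the model may only edit the implementation, the tests act as a fixed statement of intent; flipping which side is editable turns the same setup into test generation against stable code.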
MoonGhost, about 2 months ago
Regardless, are there any good examples of projects generated by LLMs? There was a game like Angry Birds, but that was a long time ago. I did some simple 'games'. If it's easy, there should be a lot of open-source projects, right?
Comment #43532838 not loaded
Comment #43533577 not loaded
Comment #43535550 not loaded
amadeuspagel, about 1 month ago
Documentation explains how an app currently works. But part of the context that the LLM lacks is how I imagine the app should work. This is difficult to integrate into version control.
deterministic, about 1 month ago
I recommend that the author actually tries this approach on a reasonably sized project before recommending it to others.