
Two Programming-with-AI Approaches

30 points by intellectronica, 4 months ago

5 comments

MutedEstate45, 4 months ago
I attempted the approach of having an AI write the code while I casually audited it, but the results didn’t work out well for me. It kept adding bugs and logical errors that were hard to find. I was using Cursor along with Claude 3.5 Sonnet, and perhaps my use case was too complex. What have you built using this methodology?
Comment #42840055 not loaded
GuB-42, 4 months ago
It is actually not just an AI-related problem. It can apply to traditional code generators too.

For example, if you want to access a database, you can write the SQL queries yourself, possibly taking advantage of some tools to help you build them. But in the end, that's your code.

Another approach is to take a tool that analyzes the database schema and generates the database access code for you (an ORM). That's not really your code anymore, and if you make changes, you are expected to do them through the tool.

These two approaches have their pros and cons, but if you don't make a distinction between fully generated code and at least partially hand-written code, you are going to make a mess.
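A minimal sketch of the two styles this comment contrasts, assuming SQLite for the hand-written side and SQLAlchemy for the generated side purely as examples; the users table, its columns, and the in-memory database are invented for the illustration:

    # Hand-written SQL: the query text is code you own and edit directly.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO users (name) VALUES (?)", ("Ada",))
    print(conn.execute("SELECT id, name FROM users WHERE name = ?", ("Ada",)).fetchall())

    # ORM-style access: the mapping is declared once and the library emits the
    # SQL for you; changes are expected to go through the mapping layer.
    from sqlalchemy import Column, Integer, String, create_engine
    from sqlalchemy.orm import declarative_base, sessionmaker

    Base = declarative_base()

    class User(Base):
        __tablename__ = "users"
        id = Column(Integer, primary_key=True)
        name = Column(String)

    engine = create_engine("sqlite://")   # in-memory SQLite
    Base.metadata.create_all(engine)      # schema created from the declared mapping
    session = sessionmaker(bind=engine)()
    session.add(User(name="Ada"))
    session.commit()
    print(session.query(User).filter_by(name="Ada").all())

The point of the contrast is the workflow, not the libraries: in the first half the SQL is yours to maintain; in the second, changes are supposed to flow through the declared mapping.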
satisfice, 4 months ago
This style of programmer post is strange. We are asked to take it on faith that quality results have been achieved, despite being given no evidence that anyone has deeply tested any of it.

This is faith-based programming.

I can’t do anything with this advice.
messh, 4 months ago
In the middle there is pair programming with AI; see, e.g., aider: https://aider.chat/
Comment #42840423 not loaded
Comment #42839571 not loaded
noodletheworld, 4 months ago
> modularisation is an approach to software construction that I generally tend to avoid - as a human writing code I just find it confusing and high overhead

How baffling.

Anyone who avoids modularisation or considers it 'overhead' has never worked on a large system, or has only worked on small over-engineered projects.

In large projects, it is physically impossible to 'be across' everything that's happening in the code.

The only, and I do mean *only*, way to work effectively in a code base like this is to become familiar with a small part of it, clearly identify the boundaries of 'this part', and make sure that your changes happen only within that context, where you:

- understand the side effects
- understand the domain
- can work effectively.

That is what modularisation is.

It is not 'making an NPM package for lpad'; the rough size of 'code you can keep track of' is very obvious to most people. When the code gets bigger than that, you need to break it up into modules with clearly defined boundaries.

You then learn the boundaries.

The boundaries are smaller than the code.

You can therefore work effectively by understanding a subset of the code (my module + the boundaries of the modules I interact with), which is strictly < (my module + the content of the modules I interact with).

...

Amazingly, the same thing applies to LLMs that applies to people!

For small projects, you can just go in and do whatever you like and it'll be fine; for larger ones, you have to split the work into smaller parts and do each part independently.

Absolutely right... but surely not surprising to anyone?

...

Here's a pro tip for your LLM work: you can pick the size to use for modules based on the volume of code inside them.

Compare the size and complexity of the API for interacting with the module (class, crate, whatever) with the implementation details. I use 'lines of code + documentation + examples for the LLM to use it' as my vague metric.

If you're doing an 'lpad' and the implementation is <= the size of the API, don't bother making it a module; you're wasting your time. When the implementation gets to ~10x the size of the API, split it.

It ain't rocket science. LLMs are good at writing glue code from well-defined APIs.
Comment #42840725 not loaded
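One rough way to read the 10x rule of thumb above, sketched as a throwaway script rather than a prescribed tool; the threshold, the command-line interface, and the way "API surface" is counted (public def/class lines plus their docstrings) are all assumptions made for illustration.

    # Crude heuristic: compare a Python module's public API surface with its
    # total implementation size and suggest splitting when the ratio is large.
    import ast
    import sys


    def api_and_impl_lines(source: str) -> tuple[int, int]:
        """Return (api_lines, impl_lines) for a module's source text."""
        tree = ast.parse(source)
        api_lines = 0
        for node in ast.walk(tree):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
                if node.name.startswith("_"):
                    continue  # treat underscore-prefixed names as non-API
                api_lines += 1  # the def/class line itself
                doc = ast.get_docstring(node)
                if doc:
                    api_lines += len(doc.splitlines())
        return api_lines, len(source.splitlines())


    if __name__ == "__main__":
        path = sys.argv[1]
        with open(path, encoding="utf-8") as f:
            api, impl = api_and_impl_lines(f.read())
        ratio = impl / max(api, 1)
        print(f"{path}: API ~{api} lines, implementation ~{impl} lines, ratio ~{ratio:.1f}x")
        print("split it" if ratio >= 10 else "probably not worth its own module yet")

Line counts alone are the crudest possible proxy; the comment's metric also counts documentation and the examples an LLM would need in order to use the module.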