Hallucinations in code are the least dangerous form of LLM mistakes

17 points, by OuterVale, 3 months ago

3 comments

Terr_, 3 months ago
As much as I've agreed with the author's other posts/takes, I find myself resisting this one:

> I'll finish this rant with a related observation: I keep seeing people say “if I have to review every line of code an LLM writes, it would have been faster to write it myself!”

> Those people are loudly declaring that they have under-invested in the crucial skills of reading, understanding and reviewing code written by other people.

No, that does not follow.

1. Reviewing depends on what you know about the expertise (and trust) of the person writing it. Spending most of your day reviewing code written by familiar human co-workers is very different from spending the same time reviewing anonymous contributions.

2. Reviews are not just about the code's potential mechanics, but about inferring and comparing the intent and approach of the writer. For LLMs, that ranges between non-existent and schizoid, and writing it yourself skips that cost.

3. Motivation is important; for some developers that means learning, understanding and creating. Not wanting to do code reviews all day doesn't mean you're *bad* at them. Also, reviewing an LLM's code has no social aspect.

However you do it, somebody else should still be reviewing the change afterwards.
eternityforest, 3 months ago
It's exactly the same problem with human-written code. To me it seems like it's not an LLM problem, it's a lack-of-testing-and-review problem.
sonorous_sub, 3 months ago
You have to make sure the machine is hypnotized correctly, or otherwise it can hallucinate on you.