
Can you trust ChatGPT’s package recommendations?

29 points by DantesKite, almost 2 years ago

12 comments

veidr, almost 2 years ago

No, that's the only thing to understand about it! You can't trust anything it says about anything. What it does is emit plausible-sounding information. But we've seen -- I think across every single realm of anything -- that this information includes complete nonsense, whether due to it being trained on people just shitposting bullshit, or due to combining tokens (words, sentence fragments, whatever) in new ways without the benefit of any actual understanding of it.

It can be useful, but it's at best as useful as a smart dog that's figured out how to operate a voice synthesizer, and is also on amphetamines or hallucinogens. I'm glad to see this term "LLM hallucination", as that's how it's felt to me.

I find ChatGPT really useful for things that I can double-check instantaneously (or nearly so), such as doc comments for the code I just wrote, or small unit tests that match the comment I just wrote.

But in my experience, it's worse than useless for writing real code, or any complex endeavor that isn't instantly verifiable/rejectable at a glance. Because vetting plausible-looking code -- including dependency specification -- is almost always more taxing than just writing the code or package.json entries yourself.
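veidr's closing point, that vetting a dependency specification is real work, can at least be partially automated. Below is a minimal sketch (not from the thread; it assumes a local package.json and relies on the public npm registry's HTTP API) that flags any dependency whose name does not exist on the registry, which is exactly what a hallucinated package looks like:

```python
# Minimal sketch: verify that every dependency in a package.json
# actually exists on the public npm registry before trusting it.
import json
import urllib.error
import urllib.parse
import urllib.request

def npm_package_exists(name: str) -> bool:
    """Return True if the npm registry knows this package name."""
    # Scoped names like "@scope/pkg" must have the "/" percent-encoded.
    url = f"https://registry.npmjs.org/{urllib.parse.quote(name, safe='')}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False  # registry has never heard of it
        raise

def vet_package_json(path: str) -> list[str]:
    """Return dependency names that do not exist on the registry."""
    with open(path) as f:
        manifest = json.load(f)
    deps = {**manifest.get("dependencies", {}),
            **manifest.get("devDependencies", {})}
    return [name for name in deps if not npm_package_exists(name)]

if __name__ == "__main__":
    for name in vet_package_json("package.json"):
        print(f"WARNING: '{name}' is not on the npm registry")
```

Existence is of course only the weakest possible check; it catches nothing once an attacker has already registered the hallucinated name.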
puttycat, almost 2 years ago

"No". [1]

[1] https://en.wikipedia.org/wiki/Betteridge%27s_law_of_headlines
mnd999, almost 2 years ago

No, you can’t trust ChatGPT for anything. Check, check and check again.

Don’t want to be like that lawyer citing made-up cases.
kasrkin623, almost 2 years ago

But how do you make sure that it provides consistent hallucinations?

Between GPT being continually patched and such output being prone to statistical error, I'm not sure this attack vector is efficient in the first place.
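The consistency kasrkin623 asks about could be measured rather than assumed. A rough sketch follows; recommend_packages is a hypothetical stand-in for whatever API call returns the model's suggestions, and npm_package_exists is reused from the sketch further up the thread:

```python
# Sketch: sample the same prompt repeatedly and tally which
# nonexistent package names recur across samples.
from collections import Counter

def recommend_packages(prompt: str) -> list[str]:
    """Hypothetical stand-in for an LLM call returning package names."""
    raise NotImplementedError("wire this to the model API under test")

def hallucination_frequency(prompt: str, n_samples: int = 50) -> Counter:
    counts: Counter[str] = Counter()
    for _ in range(n_samples):
        for name in recommend_packages(prompt):
            if not npm_package_exists(name):  # from the earlier sketch
                counts[name] += 1
    return counts

# A nonexistent name that appears in 30 of 50 samples is a consistent
# hallucination worth worrying about; one that appears once is noise.
```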
taneq, almost 2 years ago
Treat its output like you’d treat an anonymous forum post. It can be a good source of pointers or ideas but don’t use it as ground truth for anything ever.
matkoniecz, almost 2 years ago

Pop-ups are annoying, and for me they are reason enough to close the tab and treat the website as spam.

In this case the popup nagged me to read the very article I was already reading before it appeared.
ryukoposting, almost 2 years ago
Does ChatGPT have a hard-wired incentive structure that encourages good-faith answers? No. Can you trust its package recommendations? No.
NikkiA, almost 2 years ago

Every time I ask it (3.5) to produce C code, it assumes clang's blocks extension is available, even if I ask for C99-compliant code... blocks, blocks everywhere.
JestUM, almost 2 years ago

Sooner or later, this could accidentally slip into important pieces of code.

Attackers could upload malicious modules under names that don't exist yet, thanks to these hallucinations.
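One possible guard against the attack JestUM describes: check not just that a recommended package exists, but how old it is, since a freshly squatted hallucination will have a very recent first release. A hedged sketch against PyPI's public JSON API, with the 90-day threshold being an arbitrary assumption:

```python
# Sketch: flag PyPI packages that are missing or suspiciously young.
import json
import urllib.error
import urllib.request
from datetime import datetime, timedelta, timezone

def pypi_first_release(name: str) -> datetime | None:
    """Return the oldest upload time for a package, or None if absent."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            data = json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return None  # the package does not exist (yet)
        raise
    uploads = [
        # .replace() keeps fromisoformat happy on Pythons before 3.11.
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in data["releases"].values()
        for f in files
    ]
    return min(uploads) if uploads else None

def looks_risky(name: str, min_age_days: int = 90) -> bool:
    """Flag packages that are missing or younger than min_age_days."""
    first = pypi_first_release(name)
    if first is None:
        return True  # exactly the kind of name an attacker could squat
    return datetime.now(timezone.utc) - first < timedelta(days=min_age_days)
```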
johnisgood, almost 2 years ago

Probably not. It is outdated, and sometimes it gets the names of libraries wrong, too.
ulrischa, almost 2 years ago
No
ziml77, almost 2 years ago
WTF is a recommendation in the context of a language model? It does not have preferences, just weights influenced by proximity and frequency of tokens.