TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.


AI 'hallucinated' fake legal cases filed to B.C. court in Canadian first

43 points by luu, over 1 year ago

12 comments

happymellon, over 1 year ago

> In one case, a judge imposed a fine on New York lawyers who submitted a legal brief with imaginary cases hallucinated by ChatGPT — an incident the lawyers maintained was a good-faith error.

They need to be disbarred. Submitting legal filings that contain errors because you used ChatGPT to make up crap is the opposite of a "good-faith" error.
yosito, over 1 year ago

The danger of "AI" is that we actually believe the plausible fabrications it produces are "intelligent". The other day, I debated a guy who thought that the utopian future was governments run by AI. He was convinced that the AI would always make the perfect, optimal decision in any circumstance. The scary thing to me is that LLMs are probably really good at fabricating the kind of brain-dead lies that get corrupt politicians into power.
_fizz_buzz_, over 1 year ago

I think people under- and overestimate AI at the same time. E.g., I asked ChatGPT-4 to draw me a schematic of a simple buck converter (i.e., 4 components + load). In the written response it got the basics right. The drawn schematic was completely garbled nonsense.

I was expecting something like this, maybe: https://en.wikipedia.org/wiki/Buck_converter#/media/File:Buck_conventions.svg

I got this: https://imgur.com/a/tEqprGq
eurekin, over 1 year ago

Right. It's the AI that is the problem.

I have another use case for LLMs I hadn't thought of before: absolution of responsibility. The public is already primed to focus on the AI in such cases.
happytiger, over 1 year ago

A hundredth the price and a quarter the quality means that this is here to stay. It might be a little early in the accuracy phase to start riding AI-written briefs into court unchecked, but then I've never met a lawyer who didn't try to make their billing efficient.

But logically, since all that is needed is improved accuracy, it's more likely that improved accuracy will be the answer rather than any change in human behavior.
mrtksn, over 1 year ago

Isn't "hallucination" named after a human phenomenon? People, too, remember things that never happened.

Wouldn't it be solvable with a second AI agent that checks the output of the first one and goes, "bro, you sure about that? I never heard of it"?

In my experience with LLMs, they don't insist when corrected; instead they apologize and generate a response with that correction in mind.
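The two-pass idea above can be sketched in miniature. Everything here is hypothetical: in a real pipeline both `draft_brief` and `verify` would be LLM calls, and the trusted index would be a query against a legal database rather than an in-memory set. The point is only the shape of the check: the second pass refuses to vouch for any citation it cannot find.

```python
# Toy sketch of "a second agent that checks the first one's output".
# The verifier is simulated by a lookup against a small trusted index
# of known case citations; real case names below, one fabricated.

KNOWN_CASES = {
    "R. v. Oakes, [1986] 1 S.C.R. 103",
    "Donoghue v. Stevenson, [1932] A.C. 562",
}

def draft_brief(citations):
    """Stands in for the first model: a draft plus the cases it cites."""
    return {"text": "(draft argument)", "citations": list(citations)}

def verify(draft):
    """Second pass: flag every citation absent from the trusted index."""
    unverified = [c for c in draft["citations"] if c not in KNOWN_CASES]
    return {"ok": not unverified, "unverified": unverified}

draft = draft_brief([
    "R. v. Oakes, [1986] 1 S.C.R. 103",
    "Smith v. Imaginary Corp., [2021] 9 F.C.R. 999",  # hallucinated
])
report = verify(draft)
print(report["ok"])          # the fabricated citation fails verification
print(report["unverified"])  # and is surfaced for human review
```

Whether this actually solves hallucination is exactly what the replies dispute: a verifier that is itself an LLM can hallucinate a confirmation, which is why the sketch grounds the check in an external index instead.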
lapcat, over 1 year ago

I haven't used or paid much attention to ChatGPT, but the other day I was reading a macOS question on Reddit, and one of the "answers" was completely bizarre, claiming that the Apple Launchpad app was developed by Canonical. I checked the commenter's bio, and sure enough, they were a prolific ChatGPT user. It also turns out that Canonical has a product called Launchpad, which was the basis of ChatGPT's mindlessly wrong answer.

The scary thing is that even though ChatGPT's response was completely detached from reality, it was articulate and sounded authoritative, easily capable of fooling someone who wasn't aware of the facts. It seems to me that these "AI tools" are a menace in a society already rife with misinformation. Of course the Reddit commenter didn't have the decency to preface their comment with a disclaimer about how it was generated. I'm not looking forward to the future of this.
jruohonen, over 1 year ago

It is spreading like wildfire. Yet the question of repercussions remains.
taffronaut, over 1 year ago

From TFA, "the case was a high-net-worth family matter", so probably not an existential threat to anyone.
elzbardico, over 1 year ago
The problem with AI is not the AI itself, it is people. Dumb people. Dumb people with credentials and power.
guil177, over 1 year ago
The title of the article should be: somebody faked a legal case using AI.
flanked-evergl, over 1 year ago
How are language models doing what they are known to do newsworthy? This feels a bit like reporting that water is wet.