> In one case, a judge imposed a fine on New York lawyers who submitted a legal brief with imaginary cases hallucinated by ChatGPT — an incident the lawyers maintained was a good-faith error.

They need to be disbarred. Submitting legal filings that contain errors because you used ChatGPT to make up crap is the opposite of a "good-faith" error.
The danger of "AI" is that we actually believe the plausible fabrications it produces are "intelligent". The other day, I debated a guy who thought the utopian future was governments run by AI. He was convinced that the AI would always make the perfect, optimal decision in any circumstance. The scary thing to me is that LLMs are probably really good at fabricating exactly the kind of brain-dead lies that get corrupt politicians into power.
I think people under- and overestimate AI at the same time. E.g., I asked ChatGPT-4 to draw me a schematic of a simple buck converter (i.e. 4 components + load). In the written response it got the basics right. The schematic it drew was completely garbled nonsense.

I was expecting something like this, maybe: https://en.wikipedia.org/wiki/Buck_converter#/media/File:Buck_conventions.svg

What I got was this: https://imgur.com/a/tEqprGq
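For reference, the circuit really is that simple: a switch, a diode, an inductor, and an output capacitor feeding the load. Here is a back-of-envelope sketch of the textbook continuous-conduction math an answer should get right; the component values are illustrative, not taken from the comment above:

```python
# Ideal buck converter in continuous conduction mode (CCM):
#   Vout = D * Vin
#   inductor ripple (peak-to-peak) = (Vin - Vout) * D / (f_sw * L)
# All values below are made up for illustration.
V_IN = 12.0        # input voltage (V)
DUTY = 5.0 / 12.0  # duty cycle chosen to target 5 V out
F_SW = 500e3       # switching frequency (Hz)
L = 22e-6          # inductance (H)

v_out = DUTY * V_IN
ripple = (V_IN - v_out) * DUTY / (F_SW * L)
print(f"Vout = {v_out:.2f} V, inductor ripple = {ripple:.2f} A peak-to-peak")
```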
Right. It's the AI that is the problem.

I have another use case for LLMs, one I hadn't thought of before: absolution of responsibility. The public is already primed to focus on the AI in such cases.
A hundredth the price and a quarter the quality means this is here to stay. It might be a little early in the accuracy curve to start riding AI-written briefs into court unchecked, but then I've never met a lawyer who didn't try to make their billing efficient.

But logically, since all that's needed is improved accuracy, it's more likely that improved accuracy will be the answer rather than any change in human behavior.
Isn't "hallucination" named after a human phenomenon? People, too, remember things that never happened.

Wouldn't this be solvable with a second AI agent that checks the output of the first one and goes "bro, you sure about that? I never heard of it"?

In my experience with LLMs, they don't insist when corrected; instead they apologize and generate a new response with that correction in mind.
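That verifier idea is easy to prototype. A minimal sketch, assuming the OpenAI Python SDK and an API key in the environment; the model name, prompts, and VERIFIED/UNVERIFIED convention are all illustrative choices, not an established API:

```python
# Two-pass "checker agent": a second model call challenges the first
# model's draft before it is shown to the user.
# Assumes `pip install openai` and OPENAI_API_KEY set; everything
# else (model name, prompt wording) is an illustrative assumption.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # assumed model name

def ask_with_verifier(question: str) -> str:
    # First pass: draft an answer.
    draft = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": question}],
    ).choices[0].message.content

    # Second pass: ask the model to fact-check its own draft.
    verdict = client.chat.completions.create(
        model=MODEL,
        messages=[{
            "role": "user",
            "content": (
                "Fact-check the answer below. Flag any cases, citations, "
                "or names you cannot verify, then end with the single "
                f"word VERIFIED or UNVERIFIED.\n\nQ: {question}\n\nA: {draft}"
            ),
        }],
    ).choices[0].message.content

    if "UNVERIFIED" in verdict:
        return f"[verifier flagged this answer]\n{draft}"
    return draft
```

The catch is that the checker is drawn from the same distribution as the drafter, so it can confidently co-sign the very fabrication it was asked to catch; in practice this kind of self-check tends to reduce confabulated citations rather than eliminate them.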
I haven't used or paid much attention to ChatGPT, but the other day I was reading a macOS question on Reddit, and one of the "answers" was completely bizarre, claiming that the Apple Launchpad app was developed by Canonical. I checked the commenter's bio, and sure enough, they were a prolific ChatGPT user. It also turns out that Canonical has a product called Launchpad, which was the basis of ChatGPT's mindlessly wrong answer.

The scary thing is that even though ChatGPT's response was completely detached from reality, it was articulate and sounded authoritative, easily capable of fooling someone who wasn't aware of the facts. It seems to me that these "AI tools" are a menace in a society already rife with misinformation. Of course the Reddit commenter didn't have the decency to preface their comment with a disclaimer about how it was generated. I'm not looking forward to the future of this.