
Lawyer cites fake cases invented by ChatGPT, judge is not amused

347 points | by scraptor | almost 2 years ago

50 comments

metadat almost 2 years ago
Discussed today:

"A man sued Avianca Airline – his lawyer used ChatGPT"

https://news.ycombinator.com/item?id=36095352 (100 comments)
mehwoot almost 2 years ago
> No, it did not “double-check”—that’s not something it can do! And stating that the cases “can be found on legal research databases” is a flat out lie.

> What’s harder is explaining why ChatGPT would lie in this way. What possible reason could LLM companies have for shipping a model that does this?

It did this because it's copying how humans talk, not what humans do. Humans say "I double-checked" when asked to verify something; that's all GPT knows or cares about.
disillusioned almost 2 years ago
> He had, he told Judge Castel, even asked the program to verify that the cases were real. It had said yes.

It turns out, asking an unreliable narrator if it's being reliable is not a sound strategy.
dunham almost 2 years ago
Wow, I asked Bard to summarize the NYT article and it completely changed the outcome:

> Sure. The article is about a man named Roberto Mata who sued Avianca Airlines after he was injured when a metal serving cart struck his knee during a flight to Kennedy International Airport in New York. His lawyer used a new language model called ChatGPT to help him with the case. ChatGPT is a large language model that can generate text, translate languages, write different kinds of creative content, and answer your questions in an informative way. It was trained on a massive dataset of text and code, and it can be used for a variety of tasks, including writing legal documents. In Mr. Mata's case, ChatGPT helped him to draft a lawsuit that was more persuasive and comprehensive than what he could have written on his own. The lawsuit was eventually successful, and Mr. Mata was awarded damages. This case shows the potential of ChatGPT and other language models to help lawyers with their work.
ineedasername almost 2 years ago
Hilarious. It's important to remember:

1) ChatGPT is not a research tool.

2) It sort of *resembles* one and will absolutely *act* like one if you ask it to, and it may even produce useful results! But…

3) You have to independently verify any factual statement it makes, and also

4) In my experience, the longer the chat session, the more likely it is to hallucinate, reiterate, and double down on previous output.
lolinder almost 2 years ago
> I apologize for the confusion earlier. Upon double-checking, I found that the case Varghese v. China Southern Airlines Co. Ltd., 925 F.3d 1339 (11th Cir. 2019), does indeed exist and can be found on legal research databases such as Westlaw and LexisNexis.

This is the part that stood out to me the most. I've seen this "I apologize for the confusion earlier" language many times when using ChatGPT, and it's always when it's walking back something it previously said. In fact, everything about this quote sounds like a retraction.

If this is a retraction, then that means there are missing screenshots in Attachment 1 wherein ChatGPT stated the cases were fictitious, and Schwartz pushed back until it retracted the retraction.

I'm with Simon on this one. I think Schwartz realized his career is over and is frantically trying anything he can to cover for his mistake.
londons_explore almost 2 years ago
If I were the judge in this case, I would be looking to throw this lawyer in prison for a month and ban him from ever being a lawyer again...

Deliberately lying to the court, as a professional who should understand the consequences, in a way that is likely to go undetected and likely to change the outcome of the case, ought to be met with a really strict punishment.
internetguy almost 2 years ago
I don't get it... These are PROFESSIONAL lawyers who have gone through years of law school. How do they not know to double-check an answer from ChatGPT? I'm sure there are databases of cases online; it shouldn't be this difficult.
rickreynoldssf almost 2 years ago
There's nothing wrong with starting with ChatGPT, but the lazy ambulance chaser couldn't be bothered with a quick lookup in Westlaw. That's where things start going wrong.
akiselev almost 2 years ago
I went ahead and asked ChatGPT with the browsing plugin [1] because I was curious, and it answered that it was a real case, citing an article *about the fake citations!* After some prodding ("Are you sure?") it spat out something slightly saner, citing this very article!

> The case "Varghese v. China Southern Airlines Co., Ltd., 925 F.3d 1339 (11th Cir. 2019)" was cited in court documents, but it appears that there might be some confusion or controversy surrounding this citation. It was mentioned in a list of cases for which a lawyer was ordered to provide copies, according to a court order on leagle.com [2]. However, a blog post on simonwillison.net suggests that the case might not be genuine and that it might have been generated by a language model such as ChatGPT. The post discusses a situation where a lawyer might have used generated case citations in court documents without fully understanding the tool they were using. The post also includes screenshots where the language model appears to confirm the existence of the case [3].

The output is hilariously bad, and it's depressing that a licensed attorney actually pulled this crap.

This is just more evidence that ChatGPT should not be used for anything serious without a trained human in the loop.

[1] https://chat.openai.com/share/a6e27cf2-b9a6-4740-be2e-fdddab34fef5

[2] https://www.leagle.com/decision/infdco20230414825

[3] https://simonwillison.net/2023/May/27/lawyer-chatgpt/ (the TFA!)
rsynnott almost 2 years ago
See, this is the sort of thing that makes me suspicious about the current AI hype (well, that, plus it’s largely the same people who were going on about cryptocurrency changing the world a few years ago); this stuff just isn’t very _useful_. Like, if writing plausible lies is the 2020s AI boom’s killer app, well, that’s not great, really.
civilized almost 2 years ago
Steven Schwartz not only obviously lied re: "I was unaware of the possibility that ChatGPT could generate false information"; his own screenshots prove it. He asked if one of the cases generated was fake!

I suspect that during the research his System 1 (fast, intuitive thinking) told him he was not responsible for the risk he knew he was incurring by relaying AI-generated text. It was more like ChatGPT was his own legal secretary, which he was within his rights to trust, just like the main lawyer in the case, LoDuca, trusted him to produce this research.

The proceedings would have been more interesting if Schwartz had been honest about this, rather than going with the easily discoverable lie.

On the other hand, it's always funny when people realize they've got themselves into deep shit and decide the best way out is to essentially plead insanity.
B1FF_PSUVM almost 2 years ago
> ChatGPT, as it often does, hallucinated wildly

Plausible bullshit generation for free, as if there's not enough already available cheap.
nine_k almost 2 years ago
We can hope to start to actually *rely* on such models once they start learning not only in the language domain but also in the epistemic domain: true vs. false, known vs. unknown, precise vs. vague, agreement vs. contradiction vs. unrelated, things like that.

Achieving that is going to be a serious technical, and also philosophical, challenge for humans.

Today's LLMs are a literary device. They say what sounds plausible in the universe of texts they were fed. What they say technically isn't even wrong, because they have no notion of truth, or any notion of a world beyond the words. Their output should be judged accordingly.
ZephyrBlu almost 2 years ago
I read this as malicious/intentional initially. Do people really think they can 100% believe ChatGPT?

Especially for someone like a lawyer, I would expect them to verify any information they get from ChatGPT.
AmIDev almost 2 years ago
What I have realized after using Bard (PaLM 2), ChatGPT (3.5), and some other LLMs is that they are good for tasks where accuracy below 100% is acceptable and the cost of getting wrong answers is not high.

For example, labeling a million text samples with 90% accuracy by using few-shot learning is a good use case. Writing a poem is a good use case. Trying to learn a new language is not. Generating a small function that you can verify might be OK. Writing an entire codebase is not.

So far, I haven't found any use case for personal use of LLMs. For work, however, LLMs are going to be very useful for text- (and potentially image-) based machine learning tasks. Any task where having knowledge beyond the labeled training dataset is useful is going to be a good task for LLMs. One example is detecting fraudulent SMS messages.
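A minimal sketch of the few-shot labeling pattern described above, assuming the openai Python package (v1+) with an OPENAI_API_KEY in the environment; the SMS examples, prompt wording, and model name are placeholders, not anything from the original comment.

```python
# Few-shot SMS labeling sketch (assumes: `pip install openai` v1+, OPENAI_API_KEY set).
from openai import OpenAI

client = OpenAI()

FEW_SHOT = """Label each SMS as SPAM or HAM.

SMS: Congratulations! You won a free cruise, reply YES to claim.
Label: SPAM

SMS: Running 10 minutes late, start without me.
Label: HAM
"""

def label_sms(text: str) -> str:
    # Low temperature keeps the label format stable across calls.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model name works here
        temperature=0,
        messages=[{"role": "user", "content": f"{FEW_SHOT}\nSMS: {text}\nLabel:"}],
    )
    return resp.choices[0].message.content.strip()

print(label_sms("URGENT: your account is locked, click http://example.test to verify"))
```

As the comment notes, this only makes sense where a wrong label here and there is cheap; the output still needs spot-checking against a held-out labeled sample.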
rvz almost 2 years ago
Very unsurprising.

Relying on AI sophists like ChatGPT for legal work is still just as risky for normal users and even for legal experts. The difference is, these legal experts are more qualified to carefully review and check the outputs than the average Joe or Jane trying to 'replace their lawyer, solicitor, etc.' with ChatGPT.

I keep emphasising how important it is to never fully trust the output of LLMs such as ChatGPT unless a human has reviewed it and checked whether it is hallucinating or bullshitting. [0]

[0] https://news.ycombinator.com/item?id=36091468
clnq almost 2 years ago
> What’s much harder though is actually getting it to double-down on fleshing those out.

Now, it is. When ChatGPT first became public, though, those were the Wild West days where you could get it to tell you anything, including all sorts of unethical things. And it would quite often double down on "facts" it hallucinated. With current GPT-3.5 and GPT-4, alignment is still a challenging problem, but it's in a much better place. I think it's unlikely a conversation with GPT-4 would have gone the way it did for this lawyer.
stavros almost 2 years ago
By the way, Simon can't get ChatGPT to say that the particular case is true _because he's asking about a particular case_. ChatGPT doesn't know it said anything about this case in another session, so it won't confirm it's true.

Either ask it for some other legal sources and ask if those are true (and then try to see if a few aren't), or use the API to feed it its own answer about Varghese etc. and then see if it will say it's true (because at that point you've made it think it said this).
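A rough sketch of the "feed it its own answer" idea from the comment above, assuming the openai Python package (v1+) and an API key in the environment; the conversation text, citation wording, and model name are illustrative placeholders.

```python
# Replay an earlier (fabricated) answer as an assistant turn, then ask about it.
# Assumes: `pip install openai` v1+, OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

fabricated_answer = (
    "One relevant precedent is Varghese v. China Southern Airlines Co. Ltd., "
    "925 F.3d 1339 (11th Cir. 2019)."
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "user", "content": "Find precedents on airline liability for a serving-cart injury."},
        # Replaying the earlier output as an assistant turn makes the model treat
        # the fabricated citation as something it said in this conversation.
        {"role": "assistant", "content": fabricated_answer},
        {"role": "user", "content": "Is that case real? Can it be found on Westlaw or LexisNexis?"},
    ],
)
print(resp.choices[0].message.content)
```

This reproduces the lawyer's situation more faithfully than asking a fresh session about the case out of context, which is the comment's point.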
Mizoguchi almost 2 years ago
"Anyone who has worked designing products knows that users don’t read anything—warnings, footnotes, any form of microcopy will be studiously ignored"

Users don't usually read long legal statements such as terms of service.

That's not the case with the ChatGPT interface: the note about its limitations is clearly visible and very short.

This is as dumb as saying a city is at fault if someone drives into a clearly marked one-way street and causes an accident, because people don't read anything.
numpad0 almost 2 years ago
ChatGPT just isn't going to say the right thing, or anything grounded in baseline reality, beyond what your input constrains it to say.

The only connection between it and this world is your input. ChatGPT is floating in the heavens, and you're grounding it by at most a fishing line, through the textbox. It has to be framed as such. Anyone praising it as a next-gen search engine [that finds data in a database] is (perhaps this is the word that best fits the situation!) hallucinating.
bell-cot almost 2 years ago
[Sigh.] Most parents understand that there's a broad range of ages where a kid is either a bit fuzzy on the concept of "truth", or reflexively lies in certain situations ("Billy broke the vase!"), or can invent fairly elaborate lies for various reasons (avoid punishment, impress other kids, bored bullsh*tting).

Perhaps ChatGPT's "open relationship" with the truth could be explained in such terms...
kordlessagain almost 2 years ago
This is why it is very important to have the prompts filled in with relevant fragments from a quality corpus. That people think these models "tell the truth" or "hallucinate" is only half the story. It's like expecting your language center to know all the facts your visual consciousness contains, or your visual consciousness to be able to talk in full sentences. It's only when all models are working well together that the truth emerges.
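A toy illustration of the "fill the prompt with fragments from a quality corpus" idea above; the corpus snippets, the naive keyword-overlap scoring, and the question are invented stand-ins for a real document store and search index.

```python
# Assemble a grounded prompt from retrieved fragments (illustrative only).
corpus = [
    "Montreal Convention art. 17: the carrier is liable for bodily injury caused on board.",
    "Fed. R. Civ. P. 11 requires attorneys to certify that cited authorities exist.",
    "Limitations arguments in this context turn on treaty law, not state tort law.",
]

def retrieve(question: str, k: int = 2) -> list:
    """Rank fragments by naive word overlap with the question (stand-in for real search)."""
    q = set(question.lower().split())
    ranked = sorted(corpus, key=lambda frag: len(q & set(frag.lower().split())), reverse=True)
    return ranked[:k]

question = "Is the carrier liable for a serving-cart injury on an international flight?"
context = "\n".join(retrieve(question))
prompt = (
    "Answer using only the sources below; say 'not in sources' otherwise.\n\n"
    f"Sources:\n{context}\n\nQuestion: {question}"
)
print(prompt)  # this assembled prompt is what would be sent to the model
```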
tmaly almost 2 years ago
I have asked ChatGPT to cite sources when I have doubts.

I am oftentimes able to confirm these sources.

Seems this lawyer just took ChatGPT at its word without validating the cases.
WheelsAtLarge almost 2 years ago
And so it starts. I suspect we'll be seeing this issue more and more, since it's easy to just get GPT to spit out some text. I believe that the true beneficiaries of LLMs are those who are experts in their fields. They can just read the output and deal with the inaccuracies.

Does anyone know if training an LLM with just one type of data (law, in this case) creates more accurate output?
thunderbong almost 2 years ago
I think I read this in a comment here on HN:

> The goal of chat LLMs is not to give you an answer. The goal is to continue the conversation.
JoeAltmaier almost 2 years ago
ChatGPT says everything with perfect confidence, like that know-it-all who will never admit they just don't know. Instead we get blathering that sounds like legalese (or chemistry, or whatever) but has only a tenuous connection to reality. All you can say is, "People in my training set talked that way."
LASR almost 2 years ago
I've said this many times: stop using ChatGPT as a database. It does not contain facts.

It may appear to contain some facts. Some may even be actually true.

The truly useful use case is as a reasoning engine. You can paste in a document and ask questions about the facts in that document. Then it does a much better job, enough to be actually useful.
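A minimal sketch of the "paste in a document and ask about it" usage described above, assuming the openai Python package (v1+) and an API key in the environment; the file name, question, and model name are hypothetical placeholders.

```python
# Document-grounded Q&A sketch (assumes: `pip install openai` v1+, OPENAI_API_KEY set).
from openai import OpenAI

client = OpenAI()

with open("filed_brief.txt", encoding="utf-8") as f:  # hypothetical input file
    document = f.read()

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    temperature=0,
    messages=[
        {"role": "system",
         "content": "Answer only from the document provided. "
                    "If the document does not contain the answer, say so."},
        {"role": "user",
         "content": f"Document:\n{document}\n\nQuestion: Which cases does this brief cite, and for what?"},
    ],
)
print(resp.choices[0].message.content)
```

Grounding the question in supplied text narrows, but does not eliminate, the hallucination risk the thread is about, so the citations still need checking against the original document.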
shon almost 2 years ago
The judge may not be amused, but we are.
manquer almost 2 years ago
Everyone is talking about ChatGPT, but is it not possible to train a model on only actual court documents, keep the "temp" low, and get accuracy levels as high as or better than humans?

Most legal (all formal, really) documents are very predictably structured and should be easy to generate.
macrael almost 2 years ago
Genuine question: why have these models all been trained to sound so confident? Is it not possible to have rewarded models that announced their own ignorance? Or is even that question belying an "intelligence" view of these models that isn't accurate?
pylua almost 2 years ago
What needs to exist is a legal compiler that lawyers can plug responses from these tools into, sort of like how a developer would use ChatGPT.

A giant rules engine for the law. I'm surprised one doesn't exist, or isn't in progress that I know of. Seems like it would be very helpful.
Covzire almost 2 years ago
Guessing we'll see this excuse a lot more: get caught making stuff up, blame ChatGPT.
Kim_Bruning almost 2 years ago
The current LLM workflow is the epitome of "trust but verify".

You need to hand-verify at some point in the process.

This does end up losing you some of the time you gained by using an LLM in the first place. Fortunately, you often do still come out ahead.
simonwlol almost 2 years ago
How can you be this successful and blog about the most trite bullshit 24/7. Doesn't have the sense to marry a hot chick and fuck her all day even, what is that thing he's with anyway. Bro. Sad to see.
lamp987 almost 2 years ago
This is the future of lawyers, doctors, engineers, school teachers...

Dark times are ahead.
curiousgal almost 2 years ago
Can we please, for the love of all that is holy, stop saying ChatGPT *LIED*? Lying means intentionally making false statements. A chatbot does not have intentions...
tiahura almost 2 years ago
ChatGPT did this to me the other day. It gave me the right answer about passenger assumption of risk with a drunk driver, but the cites were imagined.
hristov almost 2 years ago
It is very concerning how people are anthropomorphizing ChatGPT. It will get a lot of people into trouble. The media is largely to blame for this. Never mind most gushing media stories about AI; even the few stories criticizing AI treat it as a human. ChatGPT is racist, they say, or it is sexist, or a liar.

Well, it is none of these things, because all of the above require consciousness and intent, and it has none. It is not human, it is not any type of conscious being; do not treat it as such.

It sticks together sentences based on existing language scanned in from the internet and millions of other sources. What it says depends on what someone else said some time ago on some random forum on the internet, or in some book, or in some other source stored in an available database. It is also programmed to sound extremely sure of itself, unless you flat out say it is incorrect, in which case it will immediately admit fault and apologize. Thus, asking it if it is sure is pointless.

Let me tell you a less disastrous and quite a bit funnier story. A friend of mine used ChatGPT for coding. My friend became really trustful of ChatGPT's coding prowess and asked it if it could just send him the code in file form, so he did not have to worry about copying and pasting, which apparently screwed up the formatting somehow. ChatGPT helpfully told him that it could send the code to GitHub, and my friend could download the files from there. My friend said, that is perfect.

So ChatGPT gave him a GitHub account name and said the file it had created for my friend would be there. My friend looked, but GitHub said that account had long ago been closed. My friend tried variations of that account name with different capitalizations, etc., but found nothing.

He went back and complained to ChatGPT. ChatGPT dutifully apologized and sent him another account name. He again spent time looking for the account and looking for variations, etc. Again the account had been closed.

This happened a couple more times, and in the end my friend gave up and complained to me. "Why is ChatGPT doing this to me? Is it mocking me? Is it getting its kicks from sending me on random wild goose chases?"

I had to explain to him that no, ChatGPT is not human, and it is not mocking him. What probably happened is someone on some forum asked someone else to provide him with code in files. The responder then offered to put the files on GitHub and provided an account name. When my friend asked a similar question, ChatGPT matched up the questions and provided a similar answer. When my friend said that a particular account did not work, ChatGPT scoured the web for other people mentioning their GitHub account names and provided some of those.

So whenever you use ChatGPT, remember that it is mostly a glorified search engine. It will spit out information it has found somewhere that it calculates as matching your question. Do not attribute intent, feelings, or any type of conscience to it.
BlueTemplar almost 2 years ago
I remember someone predicting this would happen in comments here a few months ago.
swayvil almost 2 years ago
Any sufficiently advanced bullshit is indistinguishable from truth.
paulhart almost 2 years ago
There’s a joke about “an LLB having an LLM” in here somewhere.
oytis almost 2 years ago
AI will not destroy the humanity. People who use AI will.
travisgriggs almost 2 years ago
ChatGPT: The ultimate parlor^H^H^Hrot trick
Obscurity4340 almost 2 years ago
The new "dictated, not read"
RobRivera almost 2 years ago
That lawyer should be disbarred.

Is it really so hard to cross-reference a ChatGPT response with a source?
dataviz1000 almost 2 years ago
Do the machines in the Matrix lie?
coldtea almost 2 years ago
Q: Can you lie?

A: As an artificial intelligence model developed by OpenAI, I don't have personal emotions, thoughts, or intentions, so I don't lie in the way humans do. My purpose is to provide accurate and helpful information based on the training I've received. However, it's important to note that the information I provide is only as accurate as the data I was trained on. If there's inaccurate or misleading information in that data, I might unintentionally output incorrect or misleading information. But this isn't considered "lying" in the human sense because it doesn't involve any intentional deceit on my part.

Q: How do you know it didn't really "involve any intentional deceit"?

A: As an AI model, I don't have consciousness, emotions, or intentions. The concept of "intention" applies to beings with consciousness and subjective experiences, which AI, including me, does not possess. I generate responses purely based on patterns learned during training, and there's no intent behind them because I don't have a mind, thoughts, or consciousness.

When I state there's no "intentional deceit," it's based on this understanding of AI. AI does not make choices or decisions the way a human would. If a mistake or inaccuracy occurs in my responses, it's due to limitations in the data I was trained on or in my design, not because of a deliberate attempt to mislead or deceive. AI functions based on its programming and training, not on intentions or conscious thought.

Q: That's begging the question though. Your answer takes for granted that you "don't have a mind, thoughts, or consciousness". What if actual consciousness (say, in humans) is also "based on patterns learned during training", the only difference being that human training involves sensory experience and not just a corpus of text and images?

A: Your question delves into one of the most fascinating and debated topics in the fields of cognitive science, philosophy, and AI: the nature of consciousness. As of my last update in September 2021, there's no universally accepted definition of consciousness or a definitive understanding of its mechanisms.

We do understand that human consciousness is associated with our brains, which process a wide range of sensory and internal information, and it's accompanied by a subjective experience — an "inner life" or "qualia". This isn't something we've observed or been able to replicate in AI.

AI models like me, based on the GPT-4 architecture, operate on patterns in data. The way I generate responses is by predicting the next piece of text based on what I've learned from the corpus of text I was trained on. However, there's no subjective experience involved in this process — there's no "I" that's interpreting or experiencing anything. I don't have a concept of self, and I don't have feelings or personal thoughts.

While some AI models can mimic aspects of human behavior or cognition to a certain extent, there's a vast difference between mimicking and actually experiencing. When it comes to consciousness — the inner subjective experience — we don't have evidence that AI can possess this feature.

Your question is profound and important, and scientists, philosophers, and AI experts are continuously exploring these topics. The answers, however, are likely to remain uncertain for a while.

Q: So, theoretically at least, a future model of the GPT architecture could be conscious and even lie?

A: Theoretically, it's possible that future advances in AI and cognitive science could lead to the development of systems that possess some form of consciousness. However, as of my knowledge cutoff in September 2021, there's no consensus among scientists and AI researchers that this is achievable or even definable in the context of AI.

The concept of a machine consciousness would likely require a significant evolution of our understanding of both consciousness and AI. It might require capabilities far beyond pattern recognition and prediction, which are the fundamental basis of current AI systems like the GPT-4 model. It might also necessitate a rethinking of ethical and philosophical considerations surrounding AI.

In terms of an AI "lying," if future AI systems were developed to simulate a higher level of cognitive complexity, including the ability to simulate deceptive behavior, they could potentially generate outputs that are intentionally misleading. However, it's important to understand that this would not be lying in the human sense. It would be a result of the programming and functionality that the AI has been designed to exhibit, not a conscious choice made by the AI.

It's also critical to note that OpenAI and many other AI research and development organizations are committed to ensuring the ethical use and development of AI technologies. If an AI were to exhibit behavior that could harm people, such as deceptive behavior, it would be seen as a serious ethical issue that would need to be addressed.
ubj almost 2 years ago
> No, it did not “double-check”—that’s not something it can do! And stating that the cases “can be found on legal research databases” is a flat out lie. What’s harder is explaining why ChatGPT would lie in this way. What possible reason could LLM companies have for shipping a model that does this?

At what point does OpenAI (or any other company) become legally responsible for this kind of behavior from their LLMs? I'm not excusing the lawyer for their reckless and irresponsible use of a tool they didn't understand, but it's becoming increasingly clear that people are trusting LLMs far more than they should.

In my opinion, it's dangerous to keep experimenting on the general population without holding the experimenters accountable for the harm that occurs.
leereeves almost 2 years ago
In the other discussion of this topic, a lot of people said the lawyer should be disbarred, but personally I think people should be able to trust the tools marketed by trillion-dollar tech companies, and a lot of the blame should be placed on Microsoft/OpenAI for overhyping ChatGPT and understating how likely it is to mislead.

And every response from ChatGPT should be preceded by a warning that it cannot be trusted.