TechEcho
A tech news platform built with Next.js, providing global tech news and discussions.

© 2025 TechEcho. All rights reserved.

Air Canada Has to Honor a Refund Policy Its Chatbot Made Up

243 points by gavman over 1 year ago

27 comments

yodon over 1 year ago
Dupe: https://news.ycombinator.com/item?id=39378235 (400+ comments)
floatrock over 1 year ago
Here's the real punchline:

> Experts told the Vancouver Sun that Air Canada may have succeeded in avoiding liability in Moffatt's case if its chatbot had warned customers that the information that the chatbot provided may not be accurate.

Here's a glimpse into our Kafkaesque AI-powered future: every corporate lawyer is now making sure any customer service request is gated by a chatbot bearing a disclaimer like "Warning: the information you receive may be incorrect and irrelevant." Getting correct and relevant information from a human will be impossible.
quartz over 1 year ago
> "Air Canada argues it cannot be held liable for information provided by one of its agents, servants, or representatives—including a chatbot," Rivers wrote. "It does not explain why it believes that is the case" or "why the webpage titled 'Bereavement travel' was inherently more trustworthy than its chatbot."

This is very reasonable: AI or not, companies can't expect consumers to know which parts of their digital experience are accurate and which aren't.
VHRanger over 1 year ago
Why are we posting this when it's just rephrasing an Ars Technica article? It's even mentioned at the bottom:

"This story originally appeared on Ars Technica."

Give the clicks to the original article:

https://arstechnica.com/tech-policy/2024/02/air-canada-must-honor-refund-policy-invented-by-airlines-chatbot/
JJMcJ over 1 year ago
In the early days of computerization, companies tried to dodge liability by blaming "computer errors". That didn't work, and I hope "it was the AI, not us" never gets allowed either.
yifanl over 1 year ago
https://decisions.civilresolutionbc.ca/crt/crtd/en/item/525448/index.do

The resolution is an amazingly clear piece of legal writing that lays out the thought process behind the decision and then awards the damages. I might end up using this pattern for writing out cause and effect.
lgleason over 1 year ago
Good. If you use a tool that does not give correct answers, you should be held liable for the mistake. The takeaway: you had better vet your tool. If the money you lose from the tool's mistakes is less than the money it saves you, you come out ahead; if not, you may want to reconsider that cost-saving measure.
danpalmer over 1 year ago
I'm glad to see cases starting to be decided about liability for AI-generated content. This is something the general public should not need to second-guess.
upofadown over 1 year ago
Peter Watts comments:

* https://www.rifters.com/crawl/?p=10977
alsetmusic over 1 year ago
My father died in hospice the night before a flight to see him. I missed the flight because there was no longer any reason to get to the airport before dawn. I called to reschedule a few hours later.

The human on the other end rescheduled and gave me a bereavement rate. She told me it was less money, but didn't mention the reason. I didn't put that together until later. She just helped me out because she had compassion.

I am too cynical to think that an AI controlled by a corporation will do this.
liendolucas over 1 year ago
Good. I hope people out there also discover chatbot holes and exploit them. Chatbots are among the most useless, time-wasting things out there; they serve literally no purpose. Most of them work exactly like nested dropdowns where you select one option after another. And when you really want to talk to a human being, in almost every scenario that option is not available. What a wonderful world powered by "AI".
logicalmonster over 1 year ago
Would a company be liable to uphold its promises if a rogue human customer service agent promised something ridiculous, such as a million dollars' worth of free flights?
jeffbee over 1 year ago
I don't even see how this is a big story. An express representation about what a product is or does, how it works, or the terms of sale in a consumer context is binding on the seller. It is the same in America. If you go into a store and they say you can return it if you don't like it, then you can. If you buy a TV and the guy at the store tells you it also makes pancakes, you can get your money back if it turns out it does not make pancakes. This is true even if the representation is made by some 16-year-old kid working at Best Buy. By extension it would still be true even if it is made by an automaton.
jerf over 1 year ago
I think this article's full import is not yet being properly processed by a lot of people. The stock market is in an absolute AI frenzy. But this article trashes one of the current boom's biggest supposed markets. If AIs can't be put in contact with customers without exposing the company to an expected liability cost greater than the cost of a human customer representative, one of their major supposed use cases is gone, and the money for that use case is gone too. There are probably halo effects on a lot of other uses as well.

Now, in the medium or long term, I expect there to be AIs that can do this sort of thing just fine. As I like to say, I expect future AIs will not "be" LLMs but will merely use LLMs as one of their component parts, and the design as a whole will be able to accurately and reliably relay corporate policies as a result. But the stock market is not currently priced on "AIs will be pretty awesome in 2029"; it's priced on "AIs are going to be pretty awesome in July".

LLMs are a huge step forward, but they really aren't suitable for a lot of the uses people are trying to put them to in the near term. They don't really "know" things; they're really, really good at guessing them. I don't mean this in the somewhat tedious "what is *knowing* anyhow" sense; I mean they don't contain any sort of "facts", just really, really good language skills. I fully expect people are working on this, the problem will be solved in some manner, and we will be able to say there is an AI design that "knows" things.

For instance, see this: https://deepmind.google/discover/blog/alphageometry-an-olympiad-level-ai-system-for-geometry/ That's in the direction of what I'm talking about; this system does not just babble things that "look" or "sound" like geometry proofs, it "knows" it is doing geometry proofs. It's not quite ready to be fed a corporate policy document, but it is in that direction. There's still work to be done.

(And again, I'm really not interested in another rehash of what "knows" really means. In this specific case I'm describing the vector from "a language model" to "a language model + something else, like a symbolic engine" as described in that post, where I'm simply *defining* the latter as "knowing" more about geometry than the former.)
anonu over 1 year ago
Reminds me of https://www.moralmachine.net/
frud over 1 year ago
The real story here is that Air Canada's lawyers argued, among other things, that the chatbot was a separate and independent legal entity from Air Canada, and therefore Air Canada was not obligated to honor the made-up policy.

In other words, this was possibly the first argument made in a court that AIs are sentient and not automated chattel.
pier25 over 1 year ago
Did Air Canada use ChatGPT for their legal defense?

Also:

> *Air Canada essentially argued that "the chatbot is a separate legal entity that is responsible for its own actions,"*

What does this mean? That the chatbot was provided by a third party, and hence that party is responsible for the content provided? Or that, literally, a chatbot can be considered a legal entity?
spywaregorilla over 1 year ago
The real desire here is to get it to promise a lifetime of free service.

Edit: arguing that the chatbot is a separate legal entity is a wild claim. To me it would imply that Air Canada could sue the AI company for damages if it makes bad promises, not that Air Canada is excused from paying the customer.
hunter2_ over 1 year ago
I don't really understand why generative LLM output is being presented to (and interpreted by) users as intelligence. This seems incredibly disingenuous; I would prefer to see it characterized as fiction, creativity, or realism: words that make it clear to average people that while this might be entertaining and even mimic reality, it's completely distinct from the writing of a person. Disclaimers (often small, low-contrast, or "ToS;DR"-esque) are insufficient when the UI is specifically crafted to look like chatting with a person.
dnussbaum over 1 year ago
This type of failure is becoming more and more common as companies roll out AI systems without robust accuracy audits and human supervision.

I'm working on something to make this easier; reach out if I can be helpful (email in bio).
dlqx over 1 year ago
This mistake could have been made by a human agent as well, and the consequences would most likely have been the same, wouldn't they?
lazycog512 over 1 year ago
This is why you link the source content from the RAG pipeline instead of pretending the bot knows everything.
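The source-linking approach this comment describes can be sketched as follows. Everything here is illustrative: the policy documents, the naive keyword-overlap retriever (a real system would use embeddings), and the example.com URLs are all assumptions, not any real airline's setup. The point is only the shape of the output: an answer that always carries links to the documents it was grounded in, so the user can verify the policy text directly.

```python
# Minimal sketch of a RAG-style answer that always cites its sources.
# All documents, URLs, and the toy retriever are illustrative assumptions.

POLICY_DOCS = {
    "bereavement-travel": (
        "Bereavement fares must be requested before travel; "
        "refunds cannot be claimed retroactively."
    ),
    "baggage": "Two checked bags are included on international flights.",
}


def retrieve(query: str) -> list[str]:
    """Return IDs of documents sharing at least one word with the query.

    A stand-in for a real retriever (e.g. embedding similarity).
    """
    words = set(query.lower().split())
    return [
        doc_id
        for doc_id, text in POLICY_DOCS.items()
        if words & set(text.lower().split())
    ]


def answer_with_sources(query: str) -> dict:
    """Pair the answer text with links to the documents it came from."""
    doc_ids = retrieve(query)
    return {
        "answer": " ".join(POLICY_DOCS[d] for d in doc_ids),
        "sources": [f"https://example.com/policies/{d}" for d in doc_ids],
    }


result = answer_with_sources("Can I get a bereavement refund after travel?")
print(result["sources"])
```

The design choice being advocated is that the `sources` list, not the generated `answer`, is what the user should trust; the UI would render those links next to the bot's reply.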
fnordpiglet over 1 year ago
This should hold. If AI can remind us of what humane policies are, so be it.
stainablesteel over 1 year ago
I would expect something like this to severely stunt chatbot adoption.

I think the only reason this should go through is if it didn't have a proper disclaimer at the beginning of the conversation.
zzz999 over 1 year ago
Sometimes the courts get it right.
matthewfelgate over 1 year ago
hahahahaha
im3w1l over 1 year ago
I hope there can be some reasonable middle ground. In this case I think it's good the woman got her money. But Air Canada, presumably scared of what the next case might cost them, decided to turn the chatbot off entirely. I think that's a bit unfortunate.

I don't know what the solution looks like. Maybe some combination of courts upholding only "reasonable" claims made by AI, plus insurance to cover the gaps?