Here's the real punchline:<p>> Experts told the Vancouver Sun that Air Canada may have succeeded in avoiding liability in Moffatt's case if its chatbot had warned customers that the information that the chatbot provided may not be accurate.<p>Here's a glimpse into our Kafka-esque AI-powered future: every corporate lawyer is now making sure any customer service request will be gated by a chatbot containing a disclaimer like "Warning: the information you receive may be incorrect and irrelevant." Getting correct and relevant information from a human will be impossible.
> "Air Canada argues it cannot be held liable for information provided by one of its agents, servants, or representatives—including a chatbot," Rivers wrote. "It does not explain why it believes that is the case" or "why the webpage titled 'Bereavement travel' was inherently more trustworthy than its chatbot."<p>This is very reasonable-- AI or not, companies can't expect consumers to know which parts of their digital experience are accurate and which aren't.
Why are we posting this when it's just rephrasing an Ars Technica article? It's even mentioned at the bottom:<p>"This story originally appeared on Ars Technica."<p>Give the clicks to the original article:<p><a href="https://arstechnica.com/tech-policy/2024/02/air-canada-must-honor-refund-policy-invented-by-airlines-chatbot/" rel="nofollow">https://arstechnica.com/tech-policy/2024/02/air-canada-must-...</a>
In the early days of computerization, companies tried to dodge liability by blaming "computer errors". That didn't work, and I hope "It was the AI, not us" never gets allowed either.
<a href="https://decisions.civilresolutionbc.ca/crt/crtd/en/item/525448/index.do" rel="nofollow">https://decisions.civilresolutionbc.ca/crt/crtd/en/item/5254...</a><p>The resolution is n amazingly clear piece of legal writing that explains the involved thought process of the the decision and then awarding the damages. I might end up using this pattern for writing out cause and effect.
Good. If you use a tool that does not give correct answers, you should be held liable for the mistakes. The takeaway is: you had better vet your tool. If the money you lose from the tool's mistakes is less than the money you save by using it, you come out ahead; if not, you may want to reconsider that cost-saving measure.
I'm glad to see that cases about the liability of using AI-generated content are starting to be decided. This is something the general public should not need to second-guess.
My father died in hospice the night before a flight to see him. I missed the flight because there was no longer any reason to get to the airport before dawn. I called to reschedule a few hours later.<p>The human on the other end rescheduled and gave me a bereavement rate. She told me it was less money, but didn't mention the reason. I didn't put that together until later. She just helped me out because she had compassion.<p>I am too cynical to think that an AI controlled by a corporation will do this.
Good. I hope people out there also discover chatbot holes and exploit them. Chatbots are among the most useless, time-wasting things out there; they serve absolutely no purpose. And most of them work exactly like nested dropdowns where you select one option after another. Oh, and when you really want to talk to a human being, in almost every scenario that option is not available. What a wonderful world powered by "AI".
Would a company be liable to uphold its promises if a rogue human customer service agent promised something ridiculous, such as a million dollars' worth of free flights?
I don't even see how this is a big story. Express representations about what a product is or does, how it works, and the terms of sale are binding on the seller in consumer contexts. It is the same in America. If you go into a store and they say that you can return it if you don't like it, then you can. If you buy a TV and the guy at the store tells you it also makes pancakes, you can get your money back if it turns out that it does not make pancakes. This is true even if the representation is made by some 16-year-old kid working at Best Buy. By extension, it would still be true even if it is made by an automaton.
I think this article's full import is not being properly processed yet by a lot of people. The stock market is in an absolute AI frenzy. But this article trashes one of the current boom's biggest supposed markets. If AIs can't be put in contact with customers without exposing the company to an expected liability cost greater than the cost of a human customer representative, one of their major supposed use cases is gone, and that means the money for that use case is gone too. There are probably halo effects in a lot of other uses as well.<p>Now, in the medium or long term, I expect there to be AIs that will be able to do this sort of thing just fine. As I like to say, I expect future AIs will not "be" LLMs but merely use LLMs as one of their component parts, and the design as a whole will in fact be able to accurately and reliably relay corporate policies as a result. But the stock market is not currently priced based on "AIs will be pretty awesome in 2029"; it's priced on "AIs are going to be pretty awesome in July".<p>LLMs are a huge step forward, but they really aren't suitable for a lot of uses people are trying to put them to in the near term. They don't really "know" things; they're really, really good at guessing them. Now, I don't mean this in the somewhat tedious "what is <i>knowing</i> anyhow" sense, I mean that they really don't have any sort of "facts" in them, just really, really good language skills. I fully expect that people are working on this and the problem will be solved in some manner and we will be able to say that there is an AI design that "knows" things. For instance, see this: <a href="https://deepmind.google/discover/blog/alphageometry-an-olympiad-level-ai-system-for-geometry/" rel="nofollow">https://deepmind.google/discover/blog/alphageometry-an-olymp...</a> That's in the direction of what I'm talking about; this system does not just babble things that "look" or "sound" like geometry proofs, it "knows" it is doing geometry proofs. This is not quite ready to be fed a corporate policy document, but it is in that direction. But that's got some work to be done yet.<p>(And again, I'm really not interested in another rehash of what "knows" really means. In this specific case I'm speaking of the vector from "a language model" to "a language model + something else like a symbolic engine" as described in that post, where I'm simply <i>defining</i> the latter as "knowing" more about geometry than the former.)
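To make that concrete, here is a toy sketch (every name in it is made up for illustration, not any vendor's real API) of what "using an LLM as one component" could look like for the customer-service case: the model only drafts a reply, and a plain deterministic layer checks any policy claim against the company's canonical policy text before the customer ever sees it.<p>

    # Toy sketch only -- hypothetical names, not a real vendor API.
    # The LLM drafts a reply; a deterministic check against canonical
    # policy text decides whether that draft ever reaches the customer.

    POLICIES = {
        # Maintained by the company, not generated by the model.
        "bereavement": ("Bereavement fares must be requested before travel; "
                        "refunds cannot be claimed retroactively."),
    }

    def draft_reply(question):
        """Stand-in for the LLM call: returns (draft_text, policy_topic)."""
        # Hard-coded here to mimic the Air Canada failure mode.
        return ("You may apply for a bereavement refund within 90 days "
                "of purchase.", "bereavement")

    def answer(question):
        draft, topic = draft_reply(question)
        canonical = POLICIES.get(topic, "")
        # If the draft's policy claim doesn't match the canonical text,
        # quote the policy itself instead of the model's guess.
        if topic and canonical not in draft:
            return "Our policy on %s: %s" % (topic, canonical)
        return draft

    print(answer("Can I get a bereavement refund after my trip?"))

The point isn't the crude substring check; it's that the component allowed to state policy is a lookup into the company's own documents, and the language model only supplies the conversational wrapping.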
The real story here is that Air Canada's lawyers argued, among other things, that the chatbot was a separate and independent legal entity from Air Canada and therefore Air Canada was not obligated to honor the made-up policy.<p>In other words, this was possibly the first historical argument made in a court that AIs are sentient and not automated chattel.
Did Air Canada use ChatGPT for their legal defense?<p>Also:<p>> <i>Air Canada essentially argued that "the chatbot is a separate legal entity that is responsible for its own actions,"</i><p>What does this mean?<p>That the chatbot was provided by a third party, hence they are responsible for the content provided?<p>Or that, literally, a chatbot can be considered a legal entity?
The real desire here is to get it to promise a lifetime of free service.<p>edit: arguing that the chatbot is a separate legal entity is a wild claim. It would imply to me that Air Canada could sue the AI company for damages if it makes bad promises; not that Air Canada is excused from paying the customer.
I don't really understand why generative LLM output is being presented to (and interpreted by) users as intelligence. This seems incredibly disingenuous, and I would prefer to see it characterized as fiction, creative writing, realism, etc. -- words that make it clear to average people that while this might be entertaining and even mimic reality, it's completely distinct from the writing of a person. Disclaimers (often small, low contrast, or "ToS;DR"-esque) are insufficient when the UI is specifically crafted to appear like chatting with a person.
This type of failure is becoming more and more common as companies roll out AI systems without robust accuracy audits & human supervision.<p>I'm working on something to make this easier - reach out if I can be helpful (email in bio).
I would expect something like this to severely stunt chatbot adoption.<p>I think the only reason this should go through is if the chatbot didn't have a proper disclaimer at the beginning of the conversation.
I hope there can be some reasonable middle ground. I think in this case it's good the woman got her money. But Air Canada, presumably scared of what the next case might cost them, decided to turn the chatbot off entirely. I think that's a bit unfortunate.<p>I don't know what the solution looks like. Maybe some combination of courts only upholding "reasonable" claims by AI, and then insurance to cover the gaps?