
Air Canada Case Exposes AI Chatbot Hallucination Risks: A Mitigation Guide for General Counsel


In the current business landscape, the race to harness the power of Artificial Intelligence (“AI”) is in full swing. One of the most straightforward and cost-effective strategies is the integration of AI Chatbots into company websites: these chatbots can interact with customers and answer their queries, significantly reducing the expenses tied to traditional customer service. However, while many companies are eager to adopt AI Chatbots, one critical issue often goes overlooked or unaddressed: AI Chatbot hallucinations. This issue can lead to serious legal complications, such as negligent misrepresentation, and this article examines the concern in greater depth.


Understanding AI Chatbot Hallucinations

To fully grasp the issue at hand, it is essential to understand what “hallucination” means in the context of AI. While many expect AI to provide flawless answers, the reality often falls short: AI-generated outputs are frequently inaccurate, a phenomenon referred to as “hallucination.”

In the realm of AI, especially in machine learning and neural networks, hallucination refers to the generation of incorrect, nonsensical, or entirely fabricated information during data processing or generation. This issue is often most prevalent in generative models, such as GPT (for text) or DALL-E (for images), where the AI might produce outputs that do not accurately reflect the input data or real-world knowledge. These inaccuracies can stem from biases in the training data, overfitting, underfitting, or limitations in the model’s architecture. For instance, an AI trained on a dataset of images might “hallucinate” objects in generated images that weren’t present in the original prompt, or it might combine features of different objects in nonsensical ways. Similarly, in natural language processing, an AI might generate plausible-sounding but factually incorrect statements based on patterns it learned during training, which don’t actually represent real-world knowledge.
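To make the mechanism concrete, the following is a minimal, purely illustrative sketch in Python. It is not how production systems such as GPT are implemented, and the two “policy” sentences in it are invented for this example; it simply shows how a generator that only learns word-sequence patterns can splice its training sentences into a fluent statement that was never in the training data and is not true.

```python
# A toy bigram text generator, built only to illustrate how pattern-based
# generation can produce fluent but false statements ("hallucinations").
# The training sentences below are invented for this illustration.
from collections import defaultdict

corpus = [
    "bereavement fares must be requested before the flight is booked .",
    "refunds may be requested within 90 days of ticket issuance .",
]

# Record, for each word, every word observed to follow it in the corpus.
follows = defaultdict(set)
for sentence in corpus:
    words = sentence.split()
    for current_word, next_word in zip(words, words[1:]):
        follows[current_word].add(next_word)

def expand(words, max_len=12):
    """Enumerate every sentence the bigram table can produce from a given prefix."""
    last = words[-1]
    if last == "." or last not in follows or len(words) >= max_len:
        yield " ".join(words)
        return
    for nxt in sorted(follows[last]):
        yield from expand(words + [nxt], max_len)

for sentence in expand(["bereavement"]):
    print(sentence)

# The enumeration prints two sentences. One reproduces a training sentence;
# the other, "bereavement fares must be requested within 90 days of ticket
# issuance .", appears nowhere in the training data and misstates the
# invented policy, yet reads as entirely plausible.
```

One of the generated sentences reproduces a training sentence verbatim, while the other combines the bereavement wording with the refund deadline, which is precisely the kind of plausible-sounding fabrication described above. Production language models are vastly more sophisticated, but they share this fundamental trait: they generate text from learned statistical patterns rather than from a verified model of the facts.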


The Air Canada Case: Legal Implications of AI Chatbot Hallucinations

AI Chatbot hallucinations can land companies in legal hot water, especially when customers rely on the information provided by the chatbot to make decisions. This is exactly what happened in the recent decision of Moffatt v. Air Canada, 2024 BCCRT 149 (“the Air Canada case”), where Air Canada faced legal consequences because of its AI Chatbot’s hallucinated answer.

The Air Canada case, while seemingly straightforward, carries profound implications and offers invaluable lessons for companies implementing AI Chatbots on their websites, apps, or other platforms.

The Air Canada case revolves around a customer who, following the death of a family member, sought to book a flight with Air Canada. The customer interacted with the AI Chatbot on the Air Canada website, which advised that the customer could apply for bereavement fares retroactively by submitting a request within 90 days of ticket issuance. Relying on that advice, the customer purchased the ticket and then applied for the bereavement reduction within the stipulated 90-day period. However, Air Canada denied the bereavement fare claim, explaining that the AI Chatbot had used “misleading words” that contradicted the information on its bereavement travel webpage: according to that webpage, the bereavement policy does not apply retroactively, rendering the customer ineligible for the bereavement fare.

Air Canada attempted to absolve itself of liability for the incorrect information provided by the AI Chatbot by arguing that the chatbot is a separate legal entity responsible for its own actions. This argument was rejected by the Civil Resolution Tribunal (“CRT”) in Canada. The CRT unequivocally stated that “while a chatbot has an interactive component, it is still just a part of Air Canada’s website. It should be obvious to Air Canada that it is responsible for all the information on its website… I find Air Canada did not take reasonable care to ensure its chatbot was accurate.” The CRT further found that the customer had relied on the chatbot to provide accurate information, which it failed to do. The CRT therefore held that this was a case of negligent misrepresentation on the part of Air Canada and that the customer was entitled to damages.

The Air Canada case serves as a critical examination of how companies utilize AI Chatbots and of the potential legal ramifications. Air Canada’s argument that the AI Chatbot is a separate legal entity responsible for its own actions is an interesting one, but it rests on a common misconception: that AI is a sentient entity capable of independent thought and action. In reality, AI operates through neural networks that undergo continuous training and adjustment of weights and biases based on the input data. AI does not possess consciousness or autonomy; its functionality is determined by the parameters set during its training, and its outputs are shaped and constrained by the patterns and information ingrained in the training data. In essence, AI should be viewed as a sophisticated tool that executes tasks based on predefined algorithms and learned patterns, rather than as an entity exhibiting genuine cognitive processes or decision-making abilities.

Given that companies owe a duty of care to ensure that the representations, advice, or answers provided by their AI Chatbots are true, accurate, and not misleading, the next question is whether, at the current state of the technology, it is even possible for companies to completely eliminate “hallucinations” or errors in AI-generated content.

The truth is that eliminating hallucinations entirely in AI systems is a daunting task. Even reducing these errors and striving for greater accuracy demands significant resources and effort: it involves acquiring high-quality data, training sophisticated models that require substantial computational power, and continuously developing new model architectures or training techniques that can better handle the nuances of human language and knowledge. While many companies are keen to leverage AI in their technology, most may not be prepared to make that level of investment in the accuracy and correctness of AI-generated answers, given the substantial costs and resource-intensive work involved.

Companies therefore need to strike a balance between leveraging AI technology in their offerings to reduce costs and investing resources to mitigate hallucinations, so that the representations made to potential customers are true, accurate, and not misleading. This issue is a particular concern for general counsel: traditionally, legal teams provide training to business units and employees to ensure that representations made to customers are accurate and to avoid negligent misrepresentation, but a general counsel cannot train an AI Chatbot in the same way. This creates a risk and crisis-management issue that general counsel should now take into account.


Addressing the Challenge: Strategies for Risk Mitigation

In response to the challenges arising from potential inaccuracies and distortions in AI-generated content, companies utilizing AI Chatbots can adopt several strategic insights to effectively address and mitigate these concerns:

 

1. Strengthening Terms of Use: Companies should promptly reinforce their terms of use or terms of service agreements on their platforms. These updates should explicitly acknowledge the potential for inaccuracies in AI Chatbot responses, and customers should be informed of their responsibility not to rely solely on AI Chatbot information and to cross-reference data from official website sources.

2. Implementing Robust Disclaimers: It is imperative for companies to incorporate clear and comprehensive disclaimers and terms of use notices for users engaging with AI Chatbots. These disclaimers should unequivocally state the possibility of inaccuracies in the advice or information provided by the AI Chatbot, and users should explicitly acknowledge and agree that such responses cannot be construed as misrepresentation, thereby protecting the company from liabilities stemming from inconsistencies or inaccuracies.

3. Providing Training and Developing Internal Policies: Collaboration between legal and technology teams responsible for AI Chatbot deployment is paramount. Legal counsel should conduct training sessions to enhance the understanding of the data inputs driving the neural network systems behind AI Chatbots. Moreover, these interdisciplinary teams should collaborate to devise internal policies aimed at continuously enhancing the accuracy and reliability of the AI system’s outputs.

4. Regular Auditing, Monitoring, and AI Model Red Teaming: Implementing regular audits, monitoring procedures, and AI model red teaming can collectively help identify and mitigate potential legal risks associated with AI Chatbot interactions. Companies should establish protocols for monitoring the performance and behavior of AI Chatbots, including reviewing chat logs, analyzing user feedback, and conducting periodic assessments of accuracy and compliance with legal standards. Additionally, integrating AI model red teaming, where teams simulate adversarial attacks to uncover vulnerabilities, can provide valuable insights into potential weaknesses and enhance overall robustness. A simple sketch of what an automated transcript audit might look like appears after this list.

5. Transparent Communication Channels: Providing transparent communication channels for users to report inaccuracies or raise concerns about AI Chatbot responses can help mitigate legal risk. Companies should establish clear avenues for users to provide feedback or seek assistance when they encounter misleading or incorrect information from AI Chatbots. Additionally, companies should communicate openly with users about the limitations of AI technology and the steps being taken to improve accuracy and reliability. By fostering transparency and accountability, companies can build trust with users and minimize the risk of legal disputes related to AI Chatbot interactions.
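As a practical illustration of point 4, the short Python sketch below shows what a basic automated consistency audit over chatbot transcripts might look like. Everything in it is hypothetical: the flagged phrases, the sample log entries, and the simple substring matching are placeholders, and a real programme would rely on more robust matching (for example, semantic comparison against the official policy pages) together with a human review queue.

```python
# Hypothetical sketch of an automated consistency audit over chatbot transcripts.
# Phrases, policy content, and log entries below are invented for illustration;
# a production audit would use more robust matching plus human review.
from dataclasses import dataclass

@dataclass
class AuditFinding:
    answer: str
    reason: str

# Phrases that contradict the published policy and should be escalated
# to a human reviewer (hypothetical examples).
FLAGGED_PHRASES = [
    "retroactively",
    "within 90 days of ticket issuance",
]

def audit_answers(chat_answers):
    """Flag chatbot answers containing phrases inconsistent with the policy page."""
    findings = []
    for answer in chat_answers:
        lowered = answer.lower()
        for phrase in FLAGGED_PHRASES:
            if phrase in lowered:
                findings.append(AuditFinding(answer, f"contains flagged phrase: '{phrase}'"))
    return findings

if __name__ == "__main__":
    sample_log = [
        "You can apply for bereavement fares retroactively within 90 days of ticket issuance.",
        "Bereavement fares must be requested before the ticket is issued.",
    ]
    for finding in audit_answers(sample_log):
        print(f"ESCALATE: {finding.reason}\n  answer: {finding.answer}")
```

Even a crude check of this kind, run regularly over chat logs and tuned to the company’s own policies, might surface an answer like the one at issue in the Air Canada case and route it to a human reviewer before customers rely on it.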

Conclusion

By adopting these strategic insights, general counsel can effectively mitigate the risks associated with AI-generated content, ensure transparency with their customers, and proactively enhance the accuracy of their AI Chatbot interactions. As this field continues to evolve, companies and general counsel are well advised to work with legal professionals well-versed in technology law to develop the right internal policies and strengthen the terms and conditions on their webpages. In doing so, companies can continue to advance their technology while reducing the risk of lawsuits arising from AI Chatbot hallucinations, striking a balance between technological advancement and legal safety.

 

If your organization is grappling with concerns regarding the accuracy of AI Chatbots and the potential legal risks associated with misrepresentation, our team is poised to provide expert assistance. Leveraging our proficiency in AI technology and legal frameworks, we offer tailored guidance to safeguard your Chatbot’s outputs and ensure compliance with legal standards. Contact us today to proactively address these critical considerations.


About the authors

Ong Johnson
Partner
Head of Technology Practice Group
Transactions and Dispute Resolution, Technology,
Media & Telecommunications, Intellectual Property,
Fintech, Privacy and Cybersecurity
johnson.ong@hhq.com.my


Lo Khai Yi
Partner
Co-Head of Technology Practice Group
Technology, Media & Telecommunications, Intellectual
Property, Corporate/M&A, Projects and Infrastructure,
Privacy and Cybersecurity
Halim Hong & Quek
ky.lo@hhq.com.my


