Introduction
The tort of defamation has developed considerably since its inception. Although the law was initially concerned with statements made by and about individuals, a new frontier has emerged with the rise of Artificial Intelligence (AI). The peculiarities of AI raise fresh questions for the jurisprudence of defamation.
What is Artificial Intelligence (AI)?
Artificial intelligence (AI) is the application of computer systems capable of performing tasks or producing output that would ordinarily require human intelligence. It operates by enabling machines to learn from extensive datasets and digital resources, including the internet, search engines, and the World Wide Web, with the goal of simulating human-like capabilities such as decision-making, research, management, and analysis. Examples of AI include virtual assistants like Siri and Alexa, autonomous vehicles, facial recognition on phones, customer-service chatbots, and fraud detection systems used by banks. AI tools like ChatGPT and other large language models can generate text, images, and other content, and are often integrated into search engines and applications to provide additional functionality.
What is Defamation?
Defamation is the act of communicating to a third party an untrue statement that tends to harm the reputation of another person. Defamation can take the form of either libel (statements in written or other permanent form) or slander (spoken words).
Elements of Defamation under the Nigerian Legal Jurisprudence
As established by a plethora of cases decided by the Nigerian courts, there are six ingredients which the person alleging defamation (the plaintiff/claimant) must prove to succeed in an action for defamation:
- Publication of the offending words.
- The words complained of refer to the plaintiff/claimant.
- The words are defamatory of the plaintiff/claimant.
- The words were published to third parties.
- The words were false or inaccurate.
- There are no justifiable legal grounds for the publication of the words.
Can a person be defamed by AI?
Yes, a person can be defamed by content generated by AI. Where AI-generated output contains a false statement that refers to an identifiable person and is communicated to a third party, the traditional elements of defamation are capable of being satisfied. The more difficult question is who should bear liability for such a publication.
Who should be liable for a defamatory publication made by AI?
There is currently no specific legislation or case law governing AI defamation in Nigeria. Looking to the jurisprudence of the United States of America, liability may potentially attach to the creators of AI models, the hosts of AI-generated content, or the content creators who publish AI-generated material.
AI Model Creators
Imposing defamation liability on AI model creators appears to be a stretch. AI models operate on complex algorithms and data processing, generating responses without human intervention. Unless the creators deliberately or negligently design the model to produce defamatory content, the elements of a defamation claim are lacking: the model creators do not control the specific responses, do not publish false statements, and do not possess the state of mind required for defamation. Moreover, it is standard industry practice for model makers to include warnings and disclaimers about potential errors, emphasizing the need for user supervision.
In Walters v. OpenAI LLC, 23-A-04860-2, the court dismissed a defamation action brought by one Mark Walters against OpenAI. Walters sought damages for reputational harm allegedly caused when OpenAI's ChatGPT, in response to a journalist's query, falsely identified him as a defendant in a lawsuit and accused him of fraud. Among other things, OpenAI argued that the company lacked the state of mind required for defamation, as it did not intend to defame Walters. It was also emphasized that OpenAI's disclaimer warned users about potential inaccuracies in ChatGPT's responses, making it unreasonable for users to rely solely on the AI's statements without verification.
AI Content Hosts
Content hosts are generally not responsible for content posted on their platforms, as they typically fall within the protective ambit of Section 230(c)(1) of the Communications Decency Act, which provides that "no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider". However, when content hosts use AI to generate, edit, and feed content to users, they run the risk of being treated as speakers rather than mere hosts.
In Anderson v. TikTok Inc., 116 F.4th 180 (3d Cir. 2024), a ten-year-old girl, Nylah Anderson, tragically died after attempting the "Blackout Challenge" promoted to her by TikTok's algorithm. Her mother sued TikTok and its parent company, ByteDance. The district court initially dismissed the complaint, granting TikTok immunity under Section 230 of the Communications Decency Act. However, the Third Circuit Court of Appeals partially reversed that decision, ruling that TikTok's recommendation algorithm constituted the platform's own expressive conduct to which Section 230 immunity did not apply, thereby allowing the litigation to proceed.
AI Content Creators
Content creators who utilize AI-generated content risk liability if they fail to review and verify its accuracy before publishing. Disclaimers warning of potential errors or hallucinations do not relieve creators of their responsibility to ensure the content's authenticity; on the contrary, such disclaimers underscore the importance of thorough review and verification. Creators who publish unverified AI-generated content containing inaccuracies may therefore be held liable.
Concluding Remarks
The law is not static, and it never has been. Legal principles constantly evolve to address new realities, and defamation law is no exception. As artificial intelligence becomes increasingly integrated into social, economic, and professional life, nations and states will continue to develop legal frameworks that regulate its usage and address potential harms.
Originally published October 2025.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.