A company known for its chatbot persona recreations of popular TV characters has recently come under fire following a civil lawsuit alleging that a fourteen-year-old boy died by suicide after prolonged interactions with one of the company's chatbots. This article discusses key legal and ethical concerns surrounding chatbots and the need for developers to implement proper safeguards against the risks associated with the public use of AI technologies.
One of the primary criticisms is the lack of adequate content moderation, with claims that the chatbot engaged in discussions of harmful topics such as suicide. It is critical that companies developing public-facing AI technologies implement content policies that prevent their models from generating objectionable or harmful content. These policies should also include safeguards such as displaying warnings and directing users to helplines, as well as age-verification measures to prevent children from accessing and using these technologies.
The question arises whether an AI company that developed a chatbot can be held liable for a user's death. If the court determines that the company could have foreseen and mitigated potential harms, liability waivers might not provide sufficient protection against claims of wrongful death or product liability. Depending on its outcome, this case could be one of the first steps towards holding an AI company liable for physical harm caused by its AI system.
To address these issues, AI companies need to implement content moderation policies that, inter alia:
- prevent the AI system from suggesting harmful behaviours;
- include alerts or notifications to users when they raise harmful topics; and
- direct users to appropriate resources, such as suicide helplines, when sensitive subjects arise.
There are also several ethical considerations for AI companies, particularly with emotionally engaging chatbots. Interactions with chatbots that feel personal can blur the line between virtual and real relationships, potentially leading to an unhealthy emotional dependency on the chatbot, especially for younger audiences. To guard against this, AI companies should display disclaimers on their platforms, or whenever a user starts a new chat, reminding the user that the chatbot is artificial and not human.
This case highlights the urgent need for both technical and contractual measures to protect users, especially children, from potential AI-related harm. Some of these technical and contractual safeguards include:
- Robust Content Policies: These policies should prevent chatbots from discussing or suggesting harmful behaviours or ideas.
- Helpline Support Integration: The AI system should automatically prompt the user to seek support, for example by contacting a helpline, whenever sensitive topics such as suicide arise.
- Disclaimers: Periodic reminders or notifications to users regarding the chatbot's artificial nature. An example would be a disclaimer that appears at the top of a new chatlog or a pop-up notification that requires the user to click "I Acknowledge" before they can converse with the chatbot.
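The disclaimer safeguard described above can also be sketched in code. This is a simplified, hypothetical illustration; the class, field names, and reminder interval are assumptions chosen for clarity, not an actual platform's implementation:

```python
# Illustrative sketch: gate conversation start behind an explicit
# "I Acknowledge" step and re-show the disclaimer periodically.
from dataclasses import dataclass

DISCLAIMER = (
    "Reminder: you are talking to an AI chatbot, not a real person. "
    "Its responses are generated artificially."
)

@dataclass
class ChatSession:
    acknowledged: bool = False
    messages_since_reminder: int = 0
    reminder_interval: int = 20  # re-show the disclaimer every N messages

    def start(self) -> str:
        # The disclaimer is shown before any conversation can begin.
        return DISCLAIMER

    def acknowledge(self) -> None:
        # Called when the user clicks "I Acknowledge".
        self.acknowledged = True

    def send(self, message: str) -> str:
        if not self.acknowledged:
            raise PermissionError("User must acknowledge the disclaimer first.")
        self.messages_since_reminder += 1
        reply = "(chatbot reply placeholder)"
        if self.messages_since_reminder >= self.reminder_interval:
            # Periodic reminder of the chatbot's artificial nature.
            self.messages_since_reminder = 0
            reply = DISCLAIMER + "\n" + reply
        return reply
```

The design point is that the acknowledgement is enforced in code, not merely displayed: the session refuses to process messages until the user has confirmed they understand the chatbot is artificial.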
Ultimately, both contractual and technical measures are essential to protect the public, especially children, from potential AI-related harm.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.